Q&A: Microsoft's Lili Cheng talks about emotionally intelligent machines

For machines to be truly intelligent, some artificial intelligence (AI) researchers believe that computers must recognize, understand and respond to human emotions. In short, machines must be equipped with emotional intelligence: the ability to express and identify one's emotions, and to empathetically interact with others.

That unequivocally human quality "is key to unlocking emergence of machines that are not only more general, robust and efficient, but that also are aligned with the values of humanity," Microsoft AI researchers Daniel McDuff and Ashish Kapoor wrote in a recent paper.

Mastering emotional intelligence will enable computers to better support humans in their physical and mental health, as well as in learning new skills, McDuff and Kapoor wrote.

The Seattle Times talked with Lili Cheng, Microsoft's corporate vice president of AI and research, about developments in machine emotional intelligence, and what it means for humans. (The conversation has been edited for length and clarity.)

Q: What is the technology behind machine emotional intelligence?

A: The way the human brain works is that there are some things you've learned over time, but there are other things you do very intuitively. Right before you get into an accident, your body might tense up, and you might not think about (how you would) respond to certain things - (in various other situations) that applies to many different emotions, like happiness and joy.

(The paper by McDuff and Kapoor) looked at how humans respond with fear. Could we help automated systems learn from some of the same techniques?

They took a simulation of someone driving and measured the person's pulse to see how they responded while trying to avoid crashing or hitting something.

And then they built that model and applied it to an automated car driving through the same simulation. They found that taking the fight-or-flight (reaction) a person (experiences), building an algorithm around it and applying it to an automated car helped the car respond very quickly in some situations.

They're not just using fear. They're combining it with other technology that is more rational, just as a person would, because you're not always just responding to fear.

They can simulate some things in a virtual environment and still run these tests without having to instrument real cars. It would be very hard to do these sorts of tests in the real world.

Q: What are some other applications for machine emotional intelligence?

A: The real dream is that (emotional intelligence) can better help you in situations that are more meaningful to you - (for instance,) you're in a stressful meeting, or you want to meet someone new. Helping people better manage their emotions and do better in those kinds of situations would really be great ... (Not just) "open my mail. Help me get that task done. Help me get my meeting set up more quickly."

Q: How might these machines help people manage their emotions?

A: Xiaoice is a very popular chatbot in China. It started as a research project. The goal of the project is really fascinating because, unlike a lot of the assistants that help you get work done, the goal of Xiaoice was just to make an engaging experience. How can we help the system encourage people to talk (with each other) and encourage them to include this chatbot in conversations?

One of the things that (Xiaoice) does in the conversation is understand, "Oh, that's a question, or this is a time when you just want to converse with me, and you don't really want me to have a response." So (the chatbot) is trying to break down the conversation into different ways that a person might think about communication. Maybe the bot just listens a little more.

They have conversations that last hours, with people basically just needing empathy, someone to talk to, or more fun, social things to do. That's an example of a system designed to study emotional and social interactions rather than just to be productive.

Q: It's like a therapy bot?

A: It could potentially (be). One of the skills might be to help you if you're having relationship problems. But it also does very practical things, like teaching you English or helping you with translation, so it better understands what your goal is and then surfaces the skills you would want.

Q: How is the bot able to comprehend whether someone is sad or asking a question?

A: It's a lot of different things. It might be the intonation of your voice or the actual words you're saying, as well as the type of interaction you're having, your history and some of the things you've done in the past.

Q: Could emotional intelligence also help combat the perpetuation of biases in AI algorithms?

A: That's our dream, but we're still a long way from AI being aware of when it's being biased and when it isn't. We still have that problem with people, so that's probably something we will need to work on for a long time. But if we don't start thinking about it and working on it now, at the root of the systems we create, then we will certainly create systems that replicate many of the problems we see in society.

We have a collaboration with a group called OpenAI to make sure the tools are available, so that you don't have to have a Ph.D. in AI to build things. Training people to build the kinds of things they want to experience is super critical for society at large.

And (in terms of) datasets, we want to make sure that men and women, older and younger people, and people who live in different places are all represented in the data we use to train the (AI) models. If it's just a very limited slice of society, then, just from a business perspective, you're not going to reach ... a lot of your customers, and we want these tools to solve a lot of problems.

Q: How far away do you think developers are from incorporating emotional intelligence into every AI machine?

A: It depends on how you define emotional intelligence. What I love about conversational AI is that, at a very basic level, if you don't have some human qualities in the systems, they feel broken. If something doesn't know how to respond when you greet it, or doesn't know some of your history, it doesn't feel very intelligent.

In a lot of the conversational systems we're seeing, people think about (emotional intelligence) early on (in development). A lot of times the people designing the systems are not engineers; they're writers, or designers, who have thought about the type of emotions you might have when you use a product. We're a long way from making that (process) automated, but I think it's great to see people thinking about (emotional intelligence) at the very beginning.

Q: Do you foresee any potential dangers to machines having emotional intelligence?

A: As with everything, the more natural things seem, the more we need to make sure people understand when things are real and when they are not. For everything we do, we have an ethics board that looks at Microsoft's products, and we have a set of design guidelines.

We apply privacy laws to everything we do, and ... making sure we don't have bias in our own data is critical for every product we have. We try to share those guidelines with anyone using our tools to create their own experiences.

©2019 The Seattle Times
Distributed by Tribune Content Agency, LLC.

