Sramana Mitra: Talk to me a bit about how you deal with emotions. You presented that as one of your key differentiators. Talk to me about how you achieve emotional parsing in your agents.
Jonathan Crane: Here’s how we basically look at it. If a customer wanted to renew a wireless service contract and I knew that the person I was speaking to was very early in the cycle, my emotional intensity at that point is not going to be very strong. It’s going to be a fairly calm and directed conversation. If I know this person has one month to go, I’m going to raise the emotional tenor of the conversation. I’m going to be intense. I’m going to be sympathetic. I’m going to offer up a number of options for you to consider in order for you to renew. I’m going to be very gleeful when you’ve accepted the opportunity to renew. I’ll have the ability to say, “That’s terrific. We’re really excited about having you on again.” It’s that kind of response. In addition, if you spoke to Amelia rudely, she has the ability to say to you, “Excuse me. That’s really inappropriate. I hope that we won’t continue at that level of interaction.” She has the ability to recognize sentiment as it rises or falls in the customer’s voice and verbiage, and she can respond correspondingly.
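The tone-matching behavior Crane describes could be sketched as follows. This is purely an illustration, not IPsoft's actual implementation: the function name, the sentiment scale, and the thresholds are all assumptions.

```python
# Hypothetical sketch of scaling an agent's emotional register to the
# renewal cycle and the customer's detected sentiment. Thresholds and
# the [-1.0, 1.0] sentiment scale are illustrative assumptions.

def response_tone(months_to_renewal: int, sentiment: float) -> str:
    """Pick a response register.

    sentiment: detected customer sentiment in [-1.0, 1.0], where
    strongly negative values indicate rudeness or frustration.
    """
    if sentiment < -0.5:
        # Boundary-setting, as in "Excuse me. That's really inappropriate."
        return "boundary"
    if months_to_renewal <= 1:
        # One month to go: raise the emotional tenor of the conversation.
        return "intense"
    # Early in the cycle: calm and directed.
    return "calm"
```

For example, a polite customer twelve months from renewal would get the calm register, while the same customer one month out would get the intense, option-rich one.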
Sramana Mitra: That’s like telling the customer to behave. Customers don’t always behave when you ask them to behave.
Jonathan Crane: That’s true. So your interaction is to reflect the appropriate amount of emotion. You might want to curb their responses, or you may take it up a bit to see if you can get this customer to interact with you in a different way. More and more, the emotional nature that we would expect out of this kind of computer animation is going to be an important ingredient in whether or not you’re going to interact. The reason we say that is that if you just look at an IVR, or you look at Siri and ask Siri certain questions, she’s not going to be able to answer those. In the case of this kind of IVR, you’re going to be able to make decisions on whether or not you need to get additional help. Say the customer has gotten a little heated. You need to make a decision to have a human agent sit on the phone. Amelia then brings those interactions into our process ontology.
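The escalation decision described above can be sketched with a simple rule. This is an illustrative assumption, not the product's actual logic: escalate to a human agent when detected sentiment stays heated for several consecutive turns.

```python
# Illustrative escalation rule (an assumption, not the described product's
# logic): hand off to a human agent when negative sentiment persists
# across the last few conversational turns.

def should_escalate(turn_sentiments: list[float],
                    threshold: float = -0.4,
                    window: int = 3) -> bool:
    """Escalate when the last `window` turns are all below `threshold`."""
    recent = turn_sentiments[-window:]
    return len(recent) == window and all(s < threshold for s in recent)
```

A persistence window like this avoids escalating on a single heated remark while still catching a conversation that keeps deteriorating.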
Sramana Mitra: What are open problems that you see from where you sit? I’m specifically interested in AI-related things that you are working on.
Jonathan Crane: I think one of the things that we all have to understand is that we are in a new field here. Let me lay out a couple of issues that we think could come at us. Number one, with labor arbitrage and the outsourcing of labor, people could perceive that the introduction of artificial intelligence could in fact take jobs. One of the things that we’ve looked at is that this is a chance to look at functions inside of a company and either eliminate those functions or give people an opportunity to elevate. One of the ways we look at this is to try to elevate the functions that people can do.
If I’m a good working agent, why don’t I get a very intelligent agent to sit alongside me and make me better, smarter, and faster? There’s clearly an issue as we go into the next phase. Like we had in manufacturing facilities with robotics, we’re going to be dealing with a sense of rejection by humans saying, “I can do the job better. Why would I be replaced by a virtual employee?” You’re always going to deal with that. The second thing you’re going to deal with is what level of emotion you should embed in this virtual employee. For example, you call into an insurance company. You want to let me know that your husband has passed away. Would you like it if I said to you, “I’m very sorry to hear that. I know you’ve been married for 25 years. It must be very hard on the two children.” Then you just go, “Thank you very much.”
Then you realize that was a computer that was programmed to be empathetic, and it said words that might have felt comforting in the moment, but now you start to feel uncomfortable. We’re just testing those waters. We have to see what the human acceptance is going to be of making computers human. That’s one of the areas. We just need to be sensitive to interfacing with the public, using these virtual employees, and determining the best use for them. If we use them as job elimination agents, then you’re going to get a rebellion inside the workforce. They’ll try to make those efforts not work. We’ve seen that in automation when it comes to just managing IT. It’s just a natural human reaction to an invasion of their work space.
Sramana Mitra: Great. Thank you for your time.