Sramana Mitra: How many customers like Shell are you working with right now? What can we learn, trends wise, on who the early adopters are?
Andy Peart: We have over 200 projects across a range of organizations. We tend to target larger organizations because they are often more advanced in their thinking and their whole approach to the customer experience. They also recognize that natural language is going to be critical to them over the next five years. This is partly because customers are demanding it: consumers are increasingly comfortable with talking to devices and expecting those devices to understand them.
People are now demanding that of their providers. Enterprises need to respond. Further, we believe that if they don't, it's going to open up a real gap that will be filled by a small number of tech goliaths who will start to own the customer relationship. Look at what Amazon is doing with Echo. Why are they doing that? It's clearly to get themselves into the household, to listen in to what's happening, and to control who fulfils and delivers on that and builds the customer relationship.
What are enterprises going to do about that? Some may be happy to let that happen, but some want to do something about it. They are recognizing that they want a Siri- or Echo-like experience for their organization. Where do they go for that? There are very few organizations who can help them meet that requirement if they're looking for an intelligent, conversational experience for their customers. We are one of those few companies who can help them do that.
Sramana Mitra: How do you price this technology? When you do a Shell deal for instance, how do you price the deals?
Andy Peart: We are flexible in terms of our approach, but typically, there will be several elements to the pricing package. Teneo is a platform that can be used by our partners, clients, or our own people to help build these artefacts. There is a license for the use of Teneo. Beyond that, we typically price based on the number of interactions. We will work with a customer to set an appropriate number of conversations, and that is charged on an ongoing basis.
Then there’s a services element. We build on behalf of clients, but the key thing about Teneo is that it allows people who are not computational linguists to build these sorts of sophisticated applications. Some of our clients like to provide their own resources and, with a little bit of mentoring and training, want to build it themselves. Then, depending on how it’s hosted, we will either charge a hosting fee or they will self-host. Particularly with our financial clients, there’s no way that they would want us to host it. They want to host it themselves.
There are other models because what we’ve talked about so far have been the traditional enterprises. The consumer device manufacturers with whom we are working tend to want to differentiate their products by providing speech capabilities. There, typically, the pricing might be on a per device basis.
Andreas Wieweg: It’s usage-based, basically.
Sramana Mitra: Let’s talk about the trend of natural language, whether it’s voice-triggered or regular text-triggered. How do you interpret the overall trends in the industry? This whole Amazon Echo phenomenon is a pretty significant move in the direction of consumers getting very comfortable interfacing with a device.
We’ve had that with Siri, but Siri is not as good a product. Echo is a much better product than Siri, so people are using it more and getting more out of it. This will probably push the trend quite a bit further than Siri was able to. How do you interpret what’s happening?
Andreas Wieweg: A different take on the question is, forget the modality. Whether it’s speech, text, or gesture, the trend is towards human beings interacting in a natural way with the stuff around them. What is the natural way for me? I communicate with it in my language. Some people like to communicate a lot with voice, maybe because they’re driving a car.
Other people love text. What I believe strongly is that we should allow people to communicate in the way that fits them. The application should support all modalities. You also said that Echo is a good product. I agree. It has one thing that Siri doesn’t have: the ability for third parties to add in. The drawback is the way it handles natural language. Once I add applications to my Echo, I have to remember all the invocation commands.
From my perspective, that is the weakness. It should always be that I tell the system what I want to happen, and the system figures it out for me. For example, “Andreas, you told me to order pizza. However, you are in the US and not in your hometown, so I can’t use your normal preferences. There is an Italian pizzeria nearby. Do you want me to order from there?”
Sramana Mitra: Your point of view is that interfacing is going to move more in the natural language direction and less towards recognized command structures.
Andreas Wieweg: Yes.