Sramana Mitra: Let me push back on that. AI is much better at handling a very large number of permutations and combinations than human beings. I would question the validity of your logic that AI cannot be trusted to manage variations. I think AI is perfect for managing variations.
Raj Valli: I want to give you an example of why I think that; I'm going to disagree with you on that particular point. If you're familiar with Norman AI from MIT, they have a website, norman-ai.mit.edu. They trained the same AI model twice.
To one model, they gave exposure to all the news articles out there; to the other, they gave access only to Reddit articles or something like that. Then they put both machines through a Rorschach inkblot test.
On the inkblot experiment, the normal one said a group of birds was sitting on top of a tree branch, while the other one said a man dies of electrocution. Same inkblot, same AI, but depending on what data was fed into the system, you get two different interpretations.
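The Norman experiment can be sketched in miniature: the same learner, trained on two different corpora, "interprets" the identical input differently. This is a purely illustrative toy (a word-association table, not the actual Norman model), and the training sentences and labels below are invented for the example.

```python
from collections import Counter

def train(examples):
    """Build a naive word -> label association table from (text, label) pairs."""
    table = {}
    for text, label in examples:
        for word in text.lower().split():
            table.setdefault(word, Counter())[label] += 1
    return table

def interpret(table, text):
    """Return the label most associated with the words in `text`."""
    votes = Counter()
    for word in text.lower().split():
        if word in table:
            votes.update(table[word])
    return votes.most_common(1)[0][0] if votes else "unknown"

# Same learner, two different (hypothetical) training corpora.
news_model = train([
    ("birds resting on a tree branch", "peaceful scene"),
    ("a dark shape against the sky", "peaceful scene"),
])
grim_model = train([
    ("a dark shape near power lines", "fatal accident"),
    ("a man near a tree", "fatal accident"),
])

same_inkblot = "a dark shape on a tree branch"
print(interpret(news_model, same_inkblot))  # peaceful scene
print(interpret(grim_model, same_inkblot))  # fatal accident
```

The divergence has nothing to do with the input or the architecture; it comes entirely from what each model was fed, which is the point of the Norman demonstration.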
Sramana Mitra: I know a lot about AI as well. I’m an MIT computer scientist, so I’m going to push back again on this. The example you’re giving is a non-contextual AI example. Your domain is a contextual AI situation. It’s a constrained domain. It’s a constrained set of vocabulary. It’s a constrained set of scenarios.
Why should a constrained AI not be able to handle the variations? And in fact, if you set up your machine learning algorithm properly, you can even come up with those variations and the machine will learn.
Raj Valli: First of all, that’s why we are doing AI. We’re not taking anything away from the capability of the AI to make recommendations that are likely to be more correct than a human being because the AI can chew through more data points than a human being can. There is absolutely no contest on that particular viewpoint.
In my capacity as the founder and CEO of this company, I would not trust an AI 100% to run with the dataset all the time, because there is an inherent bias in the system. If you think about bins of knowledge, a student may be very weak in multiplication but very strong in addition, so the machine's ability to make recommendations is always going to be governed by the context of why this particular student struggles with this particular concept.
If there is any contextual information that needs to be added to the mixture, AI is always going to be lacking. That's the first point. The second point is that when you come to nodal decisions, the number of data points you would need for a machine to make even basic decisions becomes enormously high if you don't prune away a few paths.
It's the chess game analogy. A human being can easily see that the rook moves one way and the knight moves another; those are easy, reflexive decisions for a human. If you don't prune away those paths, you have to teach the machine to make such reflexive decisions from raw data based on some contextual reality.
It will take a lot of data points, and you have to constantly keep training and retraining the machine just to make it fail-safe. A far faster process is to couple the AI with human decision-making capability.
We do decision-making in layers because we can accelerate the learning process by letting human beings make far more intelligent, contextual decisions, far faster and in a superior manner than purely relying on machines to make those decisions. That's one of the reasons why we have this machine-human interface there.
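The layered machine-human pattern described here can be sketched as a simple routing rule: the machine recommends, and low-confidence recommendations are escalated to a human who knows the context. This is my own minimal sketch of the general pattern; the function names, the confidence heuristic, and the 0.8 threshold are invented for illustration and are not the company's actual system.

```python
from collections import Counter
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Recommendation:
    next_topic: str
    confidence: float  # model's confidence in its own suggestion, 0..1

def machine_recommend(missed_topics: list[str]) -> Recommendation:
    # Stand-in for the AI layer: suggest revisiting the topic the
    # student missed most often, with a crude frequency-based confidence.
    counts = Counter(missed_topics)
    topic, count = counts.most_common(1)[0]
    return Recommendation(topic, count / len(missed_topics))

def decide(missed_topics: list[str],
           human_review: Callable[[Recommendation], Optional[str]],
           threshold: float = 0.8) -> str:
    """Machine decides alone when confident; otherwise a human makes
    the contextual call, with the machine's suggestion as a default."""
    rec = machine_recommend(missed_topics)
    if rec.confidence >= threshold:
        return rec.next_topic              # machine decides alone
    override = human_review(rec)           # human sees the context
    return override if override is not None else rec.next_topic
```

For example, a clear-cut history (`["multiplication"] * 5`) is decided by the machine alone, while a mixed history falls below the threshold and is routed to the human reviewer, whose answer wins.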
Although you say that the Rorschach test from Norman at MIT is non-contextual, I would argue that it's contextual. I'll give another contextual example. There was an article on NPR several years back about using AI in legal cases, because there is so much case law.
It's contextual, and it's very deterministic, because case law is very deterministic. They wanted to see whether AI machines could be trained to surface relevant cases and provide an indication of guilt or innocence based on the prior data the machines were trained on.
The machines could make recommendations, but they were saying that Black men were mostly guilty when they were not, because the history itself is biased. Whatever context you add, the machine is going to carry an inherent bias from the dataset you loaded it with. If someone says that no dataset is going to be biased, I just don't buy that.