Sramana Mitra: Talk a bit about trends. If you look at the last 18 months of your deal flow, what are you spotting?
Alok Nandan: One of the big trends is AI. The next trend is around the explainability or accountability of AI. How do you make sure that the machine learning models are explainable to a human being? That is an important trend which will apply to most of our portfolio companies as well as other companies out there.
Sramana Mitra: I have a question about this before the next point. I have been thinking about explainability. Your learning models start evolving. You have a learning model embedded in some workflow function. Then the learning starts to kick into gear, begins making decisions, and gets smarter.
To what extent do you see the engineers and architects of these systems simulating where this model is going to go?
Alok Nandan: When you say where the model will go, do you mean how it will evolve?
Sramana Mitra: Yes. As an architect of an AI system, you set up the heuristics on what’s going to happen. The data comes in and the model starts to evolve. To what extent can you control that evolution is the question I’m asking.
Alok Nandan: I think what companies will have to do is build guardrails around these models. If the model starts to produce results that are beyond a certain expected spectrum of results, it needs to be flagged automatically. That is doable. I think that will happen.
I’m pretty sure that’s already happening. Salesforce has some of these checks and balances. At least, they were talking about it a year ago. They were talking about these guardrails built in so that a semi-expert doesn’t shoot themselves in the foot.
These guardrails are being built. There will be more and more research on this. There will be more product innovation around building these guardrails so that they’re automatically caught when models go out of bounds.
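The kind of out-of-bounds flagging described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual implementation; the function name and the expected range are assumptions for the example:

```python
# Minimal guardrail sketch: flag a model output that falls outside
# an expected spectrum of results, so a human can review it.
# All names and thresholds here are illustrative assumptions.

def guardrail(prediction, expected_low, expected_high):
    """Return (prediction, flagged). flagged=True means the output
    is out of bounds and should be routed to a person."""
    flagged = not (expected_low <= prediction <= expected_high)
    return prediction, flagged

# Example: a forecast model expected to stay within 0..10,000 units.
pred, flagged = guardrail(prediction=15_230, expected_low=0, expected_high=10_000)
if flagged:
    print("Out-of-bounds prediction flagged for review:", pred)
```

In practice the "expected spectrum" would itself be learned from historical outputs rather than hard-coded, but the check-and-flag pattern is the same.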
Sramana Mitra: One of my concerns is that the first wave of AI models doesn't have guardrails. Sales is probably not the riskiest territory. HR is riskier. Then you move on to medical and healthcare data. That gets into very risky territory.
Alok Nandan: With healthcare, there has to be a human in the loop. It cannot be that the machine is producing a result, and there is no human reviewing that.
Even for financials. Let’s say there’s a model that decides whether somebody is creditworthy or not. There could be some bias built into the model. That’s where explainability comes in. That’s where the human in the loop has to be in the decision-making path to provide that check and balance.
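The decision-making path described here, where the machine drafts a result but a person stays in the loop, can be sketched as follows. The routing rule and threshold are illustrative assumptions, not a description of any real credit system:

```python
# Human-in-the-loop sketch for a credit model (illustrative only).
# The model drafts a decision, but only clear, high-confidence
# approvals are automated; every denial or borderline case is
# routed to a human reviewer, who provides the check and balance.

def route_credit_decision(score, auto_approve_cutoff=0.9):
    """Route a creditworthiness score: auto-approve only when the
    model is highly confident; otherwise queue for human review."""
    if score >= auto_approve_cutoff:
        return "auto_approve"
    return "human_review"  # a denial is never issued by the machine alone

print(route_credit_decision(0.95))  # auto_approve
print(route_credit_decision(0.50))  # human_review
```

The design choice is that the asymmetry runs one way: the machine can never deny credit on its own, because that is where hidden bias in the model would do harm.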