Sramana Mitra: What is your estimate of the timeframe for all this to come into society?
Paul Daugherty: It’s a 10-plus-year transition. We’re at the very early stages of applying AI. It’s going to take a while for the technology to mature in a lot of areas. It’s going to take time to figure out how to implement it in an industrialized way across the whole system. We should think about healthcare as a decade-plus transformation of continually applying the technology in different ways.
There’re a lot of ways we can drive much better health outcomes very quickly with a lot of the technologies we talked about. It’s a matter of picking the spots where we can solve the problems right now using the technology that’s available and then working on some of the things that require more collaboration. We just need to start today and start pushing the change we can. It will take a long time to get to the nirvana of everything intelligence-powered.
Sramana Mitra: The other question I have on the timeline issue is your example about the epilepsy implant, for instance. What is the regulatory process around that? What is the estimated time around getting something like that to market?
Paul Daugherty: It’s still a long process to get innovations like that to the market, especially things that are implantable. I’m not sure where they are in the regulatory approval process. I know they’re in trials. The regulatory issues around AI are something we need to keep working on. There’re two sides to this. One is where can we streamline the regulations so we can move products faster.
The other side of the debate is that many are calling for more regulation of AI to make sure we don’t have negative impacts. As an organization, our view is that it’s too early to regulate AI itself. We don’t think regulating AI is the right answer. We can benefit from better public-private collaboration, which is something we’re trying hard to encourage.
As part of the meetings on AI for American industry that the White House convened, the discussion was how to get public sector and industry working together so that there’s no fear of regulation or fear of what might happen, but we’re able to accelerate some of these powerful new products to the marketplace. A good example of that happening is what the FDA did in the US recently.
Many might be concerned about what the new regulatory process is going to be for AI-enabled products that are regulated by the FDA. The FDA said, “We’re not going to regulate it right now.” But they introduced guidelines. Those guidelines freed up the potential for organizations to use AI in ways that will be acceptable to the FDA. That type of proactive public and private collaboration is going to be important to accelerate progress as we go forward.
Sramana Mitra: Last week, I had one of these conversations with a French company. We talked about GDPR. Some of the discussion in the regulatory circles around GDPR is around explainable AI. The issue is that AI is a bit of a black box. It would be hard to make policy decisions on all this unless that AI can be explained somehow. That’s a slightly tricky problem because how the machine learns often is not entirely predictable. What is your thought about this topic?
Paul Daugherty: I don’t think that’s that hard of an issue to manage in the right way, to be honest. We talk about this a lot in the book. We have a couple of chapters devoted to responsible AI, which is applying AI in ways that are compatible with how we want the system to work in society. We talked about explainability a lot in there. You’re right. AI is a black box with probabilistic processes.
That’s fine for many processes as long as you can verify the results in the right way. For certain things like vision recognition in self-driving vehicles, an unexplainable black-box model is okay because you can statistically verify that it’s producing the right results. For some business processes, you need to be able to explain your work. You need to be able to explain exactly what happened.
In other cases, you don’t want to use deep learning models where you can’t exactly verify how the decision was made. The way we look at it is you need to look at every case where you’re applying AI, look at that process, and say, “Is it okay to use a black box process?” The reality is, for business and society, there are certain things we need to make sure are explainable.
In my view, it’s not that hard. It just comes down to a matter of deciding when and where you’re going to use the right types of algorithms. The other thing is there’s a lot of great research happening. We’re doing something on how to make certain algorithms more explainable.
There’s a technique called LIME, or Local Interpretable Model-agnostic Explanations, which uses AI itself to explain the results coming from models. Over time, we’ll find that we have new techniques that make black box algorithms more explainable.
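The core idea behind LIME can be sketched in a few lines: perturb the input around one instance, weight the perturbed samples by their proximity to that instance, and fit a simple weighted linear model whose coefficients serve as the local explanation. Below is a minimal NumPy illustration of that idea, not the `lime` library itself; the black-box function, kernel width, and sample counts are all hypothetical choices for the sketch.

```python
import numpy as np

def black_box(X):
    # Stand-in for an opaque model: a nonlinear function of two features.
    return np.sin(X[:, 0]) + X[:, 1] ** 2

def local_surrogate(x0, predict, n_samples=5000, width=0.5, seed=0):
    """Fit a locally weighted linear surrogate around x0 (LIME's core idea)."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance of interest.
    X = x0 + rng.normal(scale=width, size=(n_samples, x0.size))
    y = predict(X)
    # 2. Weight each sample by proximity to x0 (Gaussian kernel).
    d2 = ((X - x0) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * width ** 2))
    # 3. Weighted least squares for [intercept, slope_0, slope_1].
    A = np.hstack([np.ones((n_samples, 1)), X]) * np.sqrt(w)[:, None]
    b = y * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[1:]  # per-feature slopes = local feature importances

x0 = np.array([0.0, 1.0])
slopes = local_surrogate(x0, black_box)
# Near x0 the true local gradient is (cos 0, 2·1), so the surrogate's
# slopes should come out close to [1, 2].
```

The point of the design is that the surrogate is only trusted near `x0`: globally the black box stays opaque, but the locally fitted slopes give a human-readable answer to "which features drove this particular prediction, and in which direction."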