Shantanu Nigam: Most of our nurses are in the profession because they care about the patient. A probabilistic system, by design, will not be 100% accurate; some patient will slip through it.
When nurses care about patients so much and they see that one patient slip through, their confidence in these solutions goes down. Change management becomes even more complex here. We have to get them through that mindset shift.
For pressure injury, the known protocol is the Braden scale. It has a 50% accuracy. Nurses are okay applying Braden knowing it's accurate in one of two cases. Jvion is accurate 9 out of 10 times. Humans have to stay on top of any probabilistic system.
The moment they lose that vigilance or misinterpret the AI, loss of trust comes into play. They will then talk to others on the floor and kill adoption. Over time, we've invested in trying to eliminate that.
Sramana Mitra: Nurses are not trained to think probabilistically. How do you circumvent that problem?
Shantanu Nigam: It takes a lot of time. It takes a very thoughtful approach. It's most effective when a nurse talks to another nurse. First, you need to have a nurse walk the floor with other nurses. You need to show them the macro-level value of these kinds of solutions. We have to tell them what AI can and cannot do.
Once they are on board with understanding the limitations of AI, it can be applied in the most effective manner. If they do not understand the limitations of AI, then there’s going to be disappointment, which will lead to a lack of adoption. Even worse than lack of adoption, it can lead to harm to a patient.
There are three points that nurses need to understand. Number one, the system could be wrong. It's mostly right, but it could be wrong. Second, humans have to stay on top of the system. You take it as guidance. Then you apply your intelligence and validate. If medication reconciliation could be a problem, spend another 45 seconds with that patient to talk about its importance.
The third point here is, when you do see these kinds of problems, maintain the feedback loop. Go back and keep measuring. With results, the confidence will build. With results, they will start to trust the system more.
Sramana Mitra: Let’s double-click on the AI aspect of this solution. What is AI doing? What dataset is it operating on? What’s happening?
Shantanu Nigam: For patients inside a hospital, we tap into EMR data, which is the clinical record of the patient. It has as much demographic information as you can get, along with some aspects of the therapies being applied to them.
We have data on about 30 million patients, which translates to about 10% of the country's patient population. We augment all of that with patterns seen in other data and other socio-economic markers. That is the data we use.
What is AI doing here? AI, in a typical situation, will find a risk for a patient. That's the most common way of applying AI: here's the risk of sepsis for this patient; go and do something about it. The Jvion AI does that, and it also finds the likelihood of changing the outcome and the right interventions that can change it.
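The distinction Shantanu draws between "risk alone" and "risk plus modifiability plus interventions" can be sketched in code. This is a hypothetical illustration only; Jvion's actual models and field names are proprietary and not described in the interview. It just shows why an actionable list must filter on both risk and the likelihood that an intervention changes the outcome.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a prescriptive (not merely predictive) output:
# a risk score, the likelihood the outcome is still changeable, and
# ranked patient-specific interventions. All names are illustrative.

@dataclass
class Prediction:
    patient_id: str
    risk: float                      # probability of the adverse event (e.g. sepsis)
    modifiability: float             # likelihood an intervention changes the outcome
    interventions: list = field(default_factory=list)  # ranked suggested actions

def prioritize(predictions, risk_threshold=0.7, mod_threshold=0.5):
    """Surface patients who are high-risk AND whose outcome is likely
    modifiable -- a high risk score alone is not enough to act on."""
    actionable = [p for p in predictions
                  if p.risk >= risk_threshold and p.modifiability >= mod_threshold]
    # Highest expected impact first: risk weighted by modifiability.
    return sorted(actionable, key=lambda p: p.risk * p.modifiability, reverse=True)

preds = [
    Prediction("pt-001", risk=0.92, modifiability=0.15, interventions=["care review"]),
    Prediction("pt-002", risk=0.81, modifiability=0.70, interventions=["early fluids", "lactate recheck"]),
    Prediction("pt-003", risk=0.45, modifiability=0.90, interventions=["routine monitoring"]),
]

for p in prioritize(preds):
    print(p.patient_id, p.interventions)   # only pt-002 is both high-risk and modifiable
```

A purely predictive system would flag pt-001 first on risk alone; the prescriptive framing skips it because the outcome is unlikely to change, which is exactly the difference described above.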