Sramana Mitra: My next question is about the trends you are seeing. How are your customers using data? Obviously, you have a very rich dataset in your ecosystem.
What are some unique, visionary ways that data is being used, and what interesting applications are being developed on top of it?
Matthew Michela: Excellent question. At the base of our role in healthcare is interoperability and extending the ability to break down data silos and make data available. I would actually argue that the trend itself and the language we use around interoperability is really about interconnectivity.
It rises a level above that to not just figure out, on a technical level, how to get data from A to B, but also how to make that data relevant by combining it with other data that then can be useful in making decisions or doing other things.
While we are a technology company at our core, it’s really about connecting these disparate constituencies in the healthcare ecosystem that have very different goals, purposes, and timing. Ultimately, they all need access to data in a usable way. There’s this continued evolution of interoperability, and continued requirements from folks like the Federal government through the ONC.
If we can’t figure out what’s going on out there with an individual patient and understand that on a population basis, nothing is going to happen, because all of that depends on being interconnected around data. The big trend, in my mind, over the last 20 of my 35 years in healthcare is interoperability. It’s actually becoming real now. The pace of technology change and the evolution of capability in the last five years is geometrically expanding access to different types of data.
I would note that as a very big trend. What do we do with that? Another super hot emerging trend over the last few years is AI, or artificial intelligence. More aptly, people are starting to recognize it’s really augmented intelligence. The newest versions use machine learning with high processing power.
For AI to be built, to work, and then to continue to evolve and become useful, you need access to data, and that data needs to be very diverse. What people misunderstood when they first started building AI was that they thought they could go to one or two institutions, collect their data, build the model, and that it would work.
What they came to recognize is that there is bias in data depending on where it is born or stored. My example would be a really great academic medical center on the West Coast that replaces all its capital equipment every three years and has very highly-trained master technicians in the radiology department.
In production, they generate data that looks a certain way and carries a certain bias, which is fundamentally different from data produced under the exact same treatment plan at a facility where the equipment may not have been updated, with a slightly less-trained technician and a slightly less well-funded technical infrastructure. The bias in that data is really different.
Within AI, what we’ve all recognized is that it’s really great to go where the data is perfect, but if you want AI to really work in the real world, you have to go to places where the data isn’t perfect: data collected on software versions that haven’t been updated for a while, with different operating systems, geographies, and patient types. Since we are a big, broad network with about 85% market penetration in academic and tertiary centers in the US, we live where complicated medical care occurs.
But we also touch community centers. We touch patient populations served by public programs and Medicaid, who are probably getting blood drawn in a variety of different places. The diversity of our network and the ability to acquire lots of different data helps us fuel AI companies. We’re working with a number of partners there, helping them build their models.