This article was originally posted on HITConsultant.net

Medicomp Systems’ Dr. Jay Anders was interviewed about what needs to change for machine learning to make a profound impact for providers and patients.

Ever since IBM Watson wowed audiences with its superior knowledge on Jeopardy, the buzz around the potential and power of machine learning and artificial intelligence (AI) has grown across industries worldwide. However, Medicomp Systems’ Dr. Jay Anders says don’t let the hype fool you—AI in healthcare is far from perfect.

“The sad truth is that there is a long, long road ahead until we see the profound, meaningful results that providers are longing for and that patients deserve,” said Anders, CMO of the clinical data solutions provider based in Chantilly, VA.

Right now, the potential for disruption from AI and machine learning in healthcare is showing up in the areas of research, imaging, diagnostics, and treatment—along with triaging chatbots to support various care delivery and administrative functions. The technologies hold great promise for positively impacting health outcomes and providing deep insights into the health of populations.

Despite the buzz, machine learning is still in its early stages within healthcare, and its insights will be limited until it can deliver more accurate, comprehensive, and actionable data in a usable format. So, what’s holding AI back from doing that now? According to Anders, it’s a two-part dirty little secret nobody is talking about: most healthcare organizations are drowning in data, and most of that data is inherently flawed.


That steady stream of flawed data, in turn, leads to flaws in care delivery, potentially putting lives at risk. So how do machine learning and AI go from mastering the game of Jeopardy to no longer putting people’s lives in jeopardy when it comes to care delivery? HIT Consultant turned to Anders to discuss the problems and the possible solutions:

Q

AI and machine learning are expected to disrupt healthcare, and both have investors pouring money into the technology to solve some of the most pressing healthcare issues in the U.S. However, you have stated that AI and machine learning have a long way to go to meet their potential in healthcare. Before we dive into that, can you tell me what kind of potential you think these technologies hold for the industry?

AI technologies will eventually become more sophisticated and a better fit for healthcare, but there is a lot of work to do first. The greatest potential of AI is the ability of platforms like IBM Watson or Google DeepMind to review hundreds of thousands of studies to better understand exotic diseases or to help incredibly complex patients. AI also has great potential for analyzing financial and operational data to help providers identify areas for cost savings or increased revenue. But the clinical application for patients is much different, because the output is about treating people, not improving the bottom line or increasing efficiency.

It’s also important to understand that for run-of-the-mill healthcare, AI cannot replace a physician in diagnosing a patient or triaging a condition. For example, the common complaint of chest pain could be just a sore rib, or it could be a massive heart attack, which is vastly different. Relying on a machine, instead of the smartest computer in the room, which sits between the physician’s ears, is just too risky, because the machine is only as smart as the input it is given.

The machine is entirely dependent on the clinical data it has received, which could be flawed, incomplete, or even contradictory. Besides, people want to be cared for by people, not machines. While machines can help physicians, they cannot make the final decision or diagnosis. After all, in healthcare, the stakes are much higher for AI: lives could be lost.

Q

Do you think AI and machine learning have the potential to become pervasive across all facets of healthcare delivery?

No, and it isn’t necessary for all facets of healthcare. At least 80% of healthcare delivery, especially in an ambulatory setting, doesn’t require more than a well-trained physician. As for leveraging AI for population health, its application and value will be limited by the data that can be fed into the platform. You can have the smartest algorithms, but if you don’t have the smartest data to fuel them, those algorithms are deeply flawed and their results are unreliable at best.

Q

Let’s talk about what’s holding these emerging technologies back from meeting their potential. You said that it’s a two-part secret that no one wants to talk about, mainly dealing with data quality. Let’s break this down: you say part of the problem is a garbage-in, garbage-out effect, where poor data aggregation produces flawed results. Can you give us an example of this in play right now? What comes to mind right off the bat?

Any machine is only as smart as the data it is given. If that data is inaccurate or incomplete, the outcomes are diminished, or even worthless. Unlike AI technologies, physicians are not only highly trained, but they are also human beings. They know how and when to ask further questions based on their real-world experience and education. They can see the patient face to face and understand their behavior rather than only analyzing data points. They can translate clinical information into action.

Most importantly, they can make meaning of conflicting data to form their diagnosis. AI, however, cannot identify those data gaps or inconsistencies; it cannot flag the problem. In short, one typo can be very dangerous. Clinical information about patients must always be filtered through the lens of a physician and delivered with knowledge, care, and compassion.

Q

Drawing from my examples here and playing on the second part of the issue, doesn’t this mean that the data people are pooling together to create more personalized means of diagnosis and treatment is also incredibly flawed? Does that mean those efforts could move backward as a result? Are we doing irreparable damage here?

There are so many what-ifs in this question. For analyzing complex conditions, the more data the AI platform gets, the better it will be, assuming that data is accurate and complete. However, the physician still plays a critical role in translation and decision-making. Any information that comes out of the platform, about personalized medicine or other treatments, must be interpreted correctly with the patient at the point of care. A physician must take that AI output, make meaning of it, and apply it to that individual patient. The physician must still be the ultimate arbiter of what the treatment is going to be. We cannot be solely reliant on a machine.

Q

Despite what seem like serious repercussions, the buzz around AI and machine learning keeps growing. Why is there such dissonance between the potential and the problems right now? Are tech innovators trying to put the cart before the horse here?

Innovation is exciting, and it’s not surprising that tech innovators are looking to realize the promise of AI in healthcare, as they have in other industries. However, for investors and businesses banking on AI, this work is also about monetizing platforms that may have been trained to play Jeopardy and do taxes, but not to advance the treatment of disease. The tech innovators are all about the buzz, but most of their use cases are not realistic or achievable, and when the buzz is about potential rather than results, it’s a clear indicator that there is a problem. Thus far, we have seen no examples of AI improving healthcare delivery by helping physicians or patients.

Q

Interoperability has long been a challenge in HIT, so in many ways this is nothing new. Because of that, could these challenges have been prevented? If so, what kind of approach should have been considered?

Optimal care requires that all members of the care team have access to a patient’s complete medical history. One of the biggest challenges of achieving true interoperability is that healthcare organizations are not exchanging patient data. They are exchanging ‘stuff’ because the data isn’t structured. Without structured data, providers typically cannot make meaning of data they receive from other health systems or provider groups.

This is extremely problematic, because having access to the right data, at the right time, at the point of care when the patient is in the room, is mission critical to delivering high-quality care. If our industry cannot find a more effective way to exchange, organize, and apply clinically relevant data, the future of AI will be even more limited in scope and impact.
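To make the distinction between exchanging "stuff" and exchanging structured data concrete, here is a minimal, purely illustrative Python sketch. The field names, schema, and terminology choices below are assumptions for the sake of example, not Medicomp’s format or any particular exchange standard.

```python
# Purely illustrative: the schema and field names are hypothetical,
# not any specific vendor's or standard's actual format.

# What often gets exchanged today: free text that a receiving system
# cannot reliably parse, reconcile, or feed to an algorithm.
unstructured_note = "Pt c/o chest pain x2 days, hx HTN, on lisinopril."

# What interoperable exchange needs: discrete, coded elements that a
# receiving EHR or analytics platform can match, deduplicate, and query.
structured_problem_entry = {
    "problem": "Chest pain",
    "code_system": "SNOMED CT",   # illustrative terminology choice
    "code": "29857009",           # SNOMED CT concept for chest pain
    "onset": "2018-06-01",
    "history": [
        {"problem": "Essential hypertension", "code_system": "ICD-10", "code": "I10"},
    ],
    "medications": [
        {"name": "lisinopril", "dose_mg": 10, "frequency": "daily"},
    ],
}

# A downstream system can now answer questions the free text cannot,
# e.g. "does this incoming record contain an active chest-pain problem?"
def has_problem(entry: dict, code: str) -> bool:
    return entry.get("code") == code

print(has_problem(structured_problem_entry, "29857009"))  # True
```

The point is not the particular schema but that coded, discrete elements let a receiving system reason over the data instead of guessing at free text.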

Q

So now we are dealing with flawed tech and flawed data. Where do we go from here? How do we fix this? And who needs to fix what? Is this more about innovators making changes, about health organizations and establishments, or both? If it’s a collaborative effort, how do both sides start working toward the common goal?

The answer is collaboration. There must be a real collaborative effort amongst all stakeholders in the healthcare ecosystem to solve the data problem. This effort must encompass all of the key players like government, large health systems, medical associations, and technology companies. We must come together to establish a better approach to sharing data – not just for the benefit of AI but for the benefit of patients who deserve better.

Secondly, it’s not enough to just share data. It must be presented to physicians at the point of care. We must focus on providing the right data at the right time with the right patient, and the only way to ensure that the right data is coming out is to ensure that the right data is going in.

Q

Other than learning from our mistakes, do you think this means little value has been derived so far from the implementation of AI and machine learning? Also, couldn’t one make the argument that this is just a natural progression and evolution of the technology? We have seen mistakes like this before; the dot-com bubble comes to mind.

As with any innovation, part of the process is learning. It is a natural evolution, but we don’t have that luxury on the clinical side of healthcare. As physicians, we have very little tolerance for mistakes. We’re not allowed to make mistakes, because picking the wrong treatment or making the wrong diagnosis means that patients can get sick and even die.

We’ve got to be right, and that means that the machine has to be right to be usable. Even if an AI platform is correct 90% of the time, that remaining 10% is just too dangerous. It means that 1 in 10 patients could be harmed.

Q

Keeping all the challenges (and proposed solutions) in mind, where do you see these technologies taking healthcare in the next five years? Furthermore, how does that perspective shape what Medicomp Systems is doing to address the problem of too much data and not enough actionable insight?

First, we must realize that even when AI reaches its full potential, it will never replace the physician. People want to be treated by people. But as an industry, the first step we must take is addressing interoperability and settling on structured data elements that can be readily exchanged. Currently, the high-quality data needed to fuel AI platforms is severely lacking in healthcare.

That’s why Medicomp is dedicated to not only creating that structured data and delivering it at the point of care but also making AI and machine learning more usable by presenting AI-derived insights within existing physician workflows, driving greater physician productivity and better clinical decision-making. Medicomp’s solutions also facilitate AI insights by storing captured chart notes in a structured data format that can be analyzed using AI algorithms.

Q

Do you have an action plan or any insights you can share with health organizations as they begin to prepare for the potential of these technologies entering the landscape at some point? Are there things they can be doing right now to plan for what’s ahead?

As organizations prepare to leverage AI and machine learning technologies, they must first set realistic expectations. We’re not living in a sci-fi movie. Healthcare organizations must acknowledge that the clinical application of these innovations is much more challenging than applying them to financial or operational data. Most importantly, they must be willing to invest in the data that will fuel these systems; that data can either empower the machine to succeed or ensure its failure.