There’s been a lot of noise recently about the clinical usability of EHRs and how to improve it. Ultimately, there are two approaches to “breaking the usability barrier” of these systems:

  1. Provide users with instant access to relevant information for any of a patient’s conditions. This information should be connected to clinically responsive workflows that mirror the way physicians and nurses think and enable them to get all their work done at the point of care––including documentation, quality measures, specific protocols, and diagnostic and E&M coding––all while managing clinical risk for value-based care.
  2. Design a system that clinicians can largely ignore and then use analytics, artificial intelligence, and other technologies to handle all coding, risk management, compliance, and documentation “cleanup” after the encounter.

Given the current state of EHRs for clinical use, it is no surprise that there is so much excitement around ambient artificial intelligence (AI): capture audio during an encounter, use speech recognition to turn it into text, process the text with natural language processing (NLP) to convert it into data, then apply analytics to evaluate the encounter and populate the patient record––all so the provider can avoid using the unusable.
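To make the shape of this approach concrete, here is a minimal sketch of such a post-encounter pipeline. Every function body is a placeholder standing in for a vendor component (ambient capture, speech recognition, NLP, analytics); none of the names refer to a real product API.

```python
# Minimal sketch of the post-encounter "cleanup" pipeline described above.
# All function bodies are placeholders; no real product API is referenced.

def speech_to_text(audio: bytes) -> str:
    """Stand-in for an ambient speech-recognition service."""
    return "patient reports worsening dyspnea on exertion over two weeks"

def extract_clinical_data(text: str) -> list[dict]:
    """Stand-in for an NLP stage turning free text into candidate data."""
    return [{"finding": "dyspnea on exertion", "trend": "worsening"}]

def analyze(entities: list[dict]) -> dict:
    """Stand-in for analytics evaluating the encounter for coding/quality."""
    return {"entities": entities, "suggested_codes": ["R06.00"]}  # ICD-10: dyspnea

def populate_record(findings: dict) -> None:
    """Stand-in for writing results back to the patient record."""
    print("writing to chart:", findings)

def process_encounter(audio: bytes) -> None:
    # The defining property of approach 2: every stage runs AFTER the
    # visit, so nothing here can inform the clinician at the point of care.
    text = speech_to_text(audio)
    entities = extract_clinical_data(text)
    findings = analyze(entities)
    populate_record(findings)

process_encounter(b"...raw encounter audio...")
```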

Sounds great, doesn’t it?

However, by enabling the clinician to largely ignore the EHR, what do we lose?

Speech recognition is getting much better at converting sound to text and is generally regarded as approaching 98% accuracy. But NLP, while improving, still has an error rate that requires manual review to bring results to an acceptable level. And that review happens after the fact. So, in addition to problems with data fidelity, there is a lag between the acquisition of information and the clinician’s ability to act on it at the point of care.
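To see why even 98% accuracy at the front end is not the whole story, consider how errors compound across stages. The NLP figure below is an assumption for illustration only (real extraction accuracy varies widely by task and vendor), and the calculation treats the two error sources as independent:

```python
# Illustrative only: the 0.98 figure comes from the text above; the NLP
# accuracy is an ASSUMED placeholder, since real text-to-data accuracy
# varies widely by task and vendor.
asr_accuracy = 0.98   # speech-to-text, per the figure cited above
nlp_accuracy = 0.90   # assumed text-to-data accuracy, for illustration

end_to_end = asr_accuracy * nlp_accuracy
print(f"end-to-end data fidelity: {end_to_end:.0%}")  # -> 88%
# Even a "98% accurate" front end leaves roughly 1 in 8 data points
# needing manual review when the downstream stage is imperfect.
```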

This is not to say that voice control of systems is misguided. It has great promise for navigation, command, and control, especially where clinical data fidelity is not a concern. However, with the shift to value-based care and the focus on effective management and treatment of chronic conditions, it is crucial that hallmark indicators of disease status and progression are reliable and instantly available to providers at the point of care. This requires that the spoken word be converted into actionable, structured clinical data that can be diagnostically filtered for presentation to the user.
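As a rough illustration of what “actionable, structured clinical data” could look like, here is a sketch that maps a spoken result to a coded observation and filters it by problem. The field names and the use of LOINC and ICD-10 codes are illustrative assumptions, not a description of any particular system:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative sketch of voice-to-data output as a coded, diagnostically
# filterable observation. Field names are assumptions for this example.

@dataclass
class ClinicalObservation:
    code: str                 # e.g., a LOINC code identifying the observation
    display: str
    value: float
    unit: str
    related_problems: list[str] = field(default_factory=list)  # ICD-10 links
    recorded_at: datetime = field(default_factory=datetime.now)

# Spoken: "A1c today is seven point nine"
obs = ClinicalObservation(
    code="4548-4",               # LOINC: Hemoglobin A1c/Hemoglobin.total
    display="Hemoglobin A1c",
    value=7.9,
    unit="%",
    related_problems=["E11.9"],  # ICD-10: type 2 diabetes, unspecified
)

# "Diagnostically filtered": surface only data tied to the active problem.
def relevant_to(problem_code: str, observations: list[ClinicalObservation]):
    return [o for o in observations if problem_code in o.related_problems]

print(relevant_to("E11.9", [obs]))
```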

With this type of instantly available and reliable clinical data, users can act at the most appropriate time––the point of care––rather than waiting for post-encounter processing and cleanup of patient information.

I am very excited about emerging technology that will be available later this year to provide a voice-to-data (not just voice-to-text) capability in real time, combining speech recognition, NLP, and clinically dynamic command and control paired with a clinical data relevancy engine. This combination will enable users to quickly navigate the EHR, see all relevant information for any problem, take timely, appropriate action, fulfill all documentation, coding, and quality requirements, and capture clean, structured clinical data.

All of this will happen at the point of care, controlled by voice and powered by a clinical data relevancy engine that shows users what they need, when they need it, and links to the workflows required to complete the encounter (and its related coding and quality requirements) while still with the patient.
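For contrast with the post-encounter pipeline sketched earlier, here is a hedged sketch of what such a real-time voice-to-data loop might look like, with every name a hypothetical placeholder: each utterance is interpreted as either a navigation command or structured data while the clinician is still with the patient.

```python
# Hedged sketch of the real-time alternative: each utterance is converted
# to a command or structured data and acted on immediately, during the
# encounter. All names are hypothetical placeholders.

def parse_utterance(text: str) -> dict:
    """Stand-in for real-time NLP plus command interpretation."""
    if text.startswith("show "):
        return {"kind": "command", "target": text.removeprefix("show ")}
    return {"kind": "data", "payload": text}

def handle(event: dict) -> None:
    if event["kind"] == "command":
        # Voice navigation: jump to the relevant view immediately.
        print(f"navigating to: {event['target']}")
    else:
        # Structured data lands in the chart during the encounter, so a
        # relevancy engine can act on it at the point of care.
        print(f"charted now, not after the visit: {event['payload']}")

for utterance in ["show diabetes flowsheet", "A1c today is 7.9 percent"]:
    handle(parse_utterance(utterance))
```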

When this is reality, it will enable clinicians to finally move from “EHR avoidance” to “EHR engagement.”

###