by Jim Dwyer
Artificial Intelligence (AI) holds vast potential to improve Healthcare, a highly regulated, complex, data-intensive, and life-affecting industry.
The Healthcare industry has already taken the first steps toward AI, implementing clinical decision support to improve diagnostics and treatment, using rule-based automation for nonclinical operational efficiencies and processes, and leveraging pattern matching and customer segmentation to improve the patient experience. Even more advanced approaches, such as Machine Learning and Cognitive Algorithms, are now being applied to clinical and operational use cases.
Further advancements and new opportunities in AI are hitting the news daily, from cancer detection to automating eligibility inquiries, and executives are optimistic about AI’s potential to improve healthcare. Investors share this optimism, pouring more than $4 billion into healthcare AI across 367 deals in 2019[1], providing significant capital to invest in AI development and deployment.
However, the crowded AI market has generally failed to deliver on that potential to date, generating insufficient value to justify its cost and disruption. Healthcare AI is, in short, entering the disillusionment phase of the hype cycle, where new investments and projects must have a clear path to value, whether that value lies in clinical outcomes, operational efficiencies, or patient/member satisfaction and retention.
AI Journey - Potential Promise and Pitfalls
With the broad, and growing, spectrum of current and evolving AI use cases, the key challenge facing Healthcare executives is identifying and selecting the right ones – considering the technical risks of developing a successful model, the degree and immediacy of clinical impact, and the financial costs and potential returns. These considerations also vary based on the area and purpose of the use case, specifically in the fields of clinical AI, operational efficiencies, and patient experience.
Clinical AI
Clinical AI holds the promise of easing the burden on overworked clinicians and pointing to treatment decisions that improve the quality and efficiency of care delivery. Basic clinical decision support systems have been around for decades. But skepticism of technology leads many doctors to ignore or override them. New AI use cases need to win favor with practitioners by supporting them and making their jobs easier, not second-guessing them.
Beyond provider skepticism, Clinical AI adoption shares the thorniest problem facing clinical decision support: data quality. The messy reality of medical records, even when digitized and managed in an EMR, tripped up IBM's highly promoted Watson AI system when it was deployed at a Texas cancer hospital. "[T]he acronyms, human errors, shorthand phrases, and different styles of writing" were too much to handle.
On the other hand, when clean, controlled data is available, machine learning can shine. Last month, Google published the finding that one of its AI models spotted breast cancer in de-identified screening mammograms with greater accuracy than human experts, producing fewer false positives and false negatives. Google's AI subsidiary, DeepMind, worked with the UK's Cancer Research Centre, Northwestern University and the Royal Surrey County Hospital to train and deploy the AI model. Using scanned data from 91,000 women in the United Kingdom and the United States, the model was able to more effectively screen for breast cancer using less information than human doctors, relying solely on X-ray images, while doctors had access to patient histories and prior mammograms.
That is, the model was able to dispense with interpreting subjective histories and analyzing unstructured clinical notes and instead go straight to the raw, clean clinical data in the form of the X-rays alone to create a statistically superior diagnosis.
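To make the idea of learning directly from raw imaging data concrete, below is a minimal sketch of how a binary image classifier can be trained on labeled, de-identified scans. The directory layout, architecture, and hyperparameters are illustrative assumptions only; they do not describe the Google/DeepMind model.

```python
# Illustrative sketch only: a minimal convolutional classifier trained on
# labeled mammogram images. The dataset layout and hyperparameters are
# assumptions, not details of the Google/DeepMind study.
import tensorflow as tf

# Assumes de-identified images arranged as (hypothetical layout):
#   mammograms/positive/*.png and mammograms/negative/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "mammograms", labels="inferred", label_mode="binary",
    image_size=(256, 256), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),            # normalize pixel values
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of malignancy
])

# AUC captures the false-positive / false-negative trade-off discussed above.
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.fit(train_ds, epochs=5)
```

The point of the sketch is the input: pixel data and a label, with no free-text notes or patient histories to interpret.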
The promise of clinical AI is highly dependent on the quality and validity of the data provided.
A successful clinical AI proof of concept requires clean data – which may drive prerequisite projects in data acquisition and preparation. Further, the use case must have clearly defined objectives that generate organizational value above investment. AI can drive important gains in quality improvement and care delivery but must have specific, measurable objectives. These are often use cases where clinical AI is applied to case rate/shared risk diagnostics and treatments.
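Data readiness can be assessed before any model is built. The sketch below illustrates the kind of prerequisite data-quality audit described above; the file name, column names, and plausibility ranges are hypothetical placeholders rather than a standard schema.

```python
# Illustrative data-quality audit for a clinical extract; column names and
# acceptable ranges are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("clinical_extract.csv")   # hypothetical EMR export

report = {
    # Share of missing values per column: gaps that could bias a model.
    "missing_pct": df.isna().mean().round(3).to_dict(),
    # Duplicate encounters inflate the apparent sample size.
    "duplicate_rows": int(df.duplicated().sum()),
    # Out-of-range values usually indicate unit or data-entry errors.
    "implausible_age": int((~df["age"].between(0, 120)).sum()),
    "implausible_sbp": int((~df["systolic_bp"].between(50, 300)).sum()),
}
print(report)
```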
Operational Efficiency AI
Applying AI to operational efficiency in the provider market holds the potential to yield “hands-free” billing and collection in the continuing evolution of revenue cycle automation. A summary of the revenue cycle key steps shows that virtually every step can benefit from AI. (Keep in mind that “AI” in operational efficiency spans the spectrum of functions from traditional rule-based decision support to newer machine learning techniques.)
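As a concrete illustration of the rule-based end of that spectrum, the sketch below “scrubs” a claim for issues that would likely trigger a denial before submission; the field names, procedure codes, and rules are hypothetical examples, not any specific payer’s edits.

```python
# Illustrative rule-based claim "scrubbing" step from a revenue cycle pipeline.
# Field names, codes, and rules are hypothetical.
def scrub_claim(claim: dict) -> list[str]:
    """Return a list of issues that would likely trigger a denial."""
    issues = []
    if not claim.get("diagnosis_codes"):
        issues.append("missing diagnosis code")
    if not claim.get("prior_auth") and claim.get("procedure_code") in {"27447", "29881"}:
        issues.append("prior authorization required for this procedure")
    if claim.get("billed_amount", 0) <= 0:
        issues.append("billed amount must be positive")
    return issues

# Example usage with a hypothetical claim record.
claim = {"procedure_code": "27447", "diagnosis_codes": [], "billed_amount": 23500.0}
print(scrub_claim(claim))
# ['missing diagnosis code', 'prior authorization required for this procedure']
```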
Providers spent a projected $282 billion in administrative costs for billing and insurance in 2019[1]. As the table above highlights, much of this work has the potential to be impacted, automated, and improved by AI, with a high value and quantifiable potential payback.
Operational efficiency, with its historical cost baselines and ease of measuring value, is the most quantifiable entry point for providers introducing AI and looking to demonstrate business value.
A successful operational efficiency AI proof of concept requires historical cost baselines and narrowly scoped, single-variable objectives that generate organizational value above the investment. These use cases often start with a single revenue cycle key step and apply AI tools to measure improvements and administrative cost savings.
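A minimal sketch of that value calculation for a single revenue cycle step is shown below; every figure is a placeholder assumption, not a benchmark.

```python
# Illustrative value calculation for one revenue cycle step (e.g., claim
# status inquiries). All figures are placeholder assumptions, not benchmarks.
baseline_cost_per_task = 7.50        # historical manual cost per inquiry ($)
automated_cost_per_task = 1.25       # cost per inquiry after automation ($)
annual_task_volume = 120_000         # inquiries handled per year
implementation_cost = 400_000        # one-time build and deployment cost ($)

annual_savings = (baseline_cost_per_task - automated_cost_per_task) * annual_task_volume
payback_years = implementation_cost / annual_savings

print(f"Annual savings: ${annual_savings:,.0f}")     # $750,000
print(f"Payback period: {payback_years:.2f} years")  # 0.53 years
```

Comparing that payback period against the historical cost baseline is what makes operational efficiency the most measurable starting point for an AI program.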