3 Major Bottlenecks of AI in Healthcare

In 2016, neural network pioneer and Turing Award winner Geoff Hinton made a bold prediction: “We should stop training radiologists now,” he said. “It is just completely obvious deep learning is going to do better than radiologists.”

Fast forward nearly a decade, and you’ll notice that while AI and machine learning (ML) models have made strides in image-based diagnosis and other medical tasks, radiologists haven’t gone anywhere.

It’s a similar situation across the healthcare industry, where AI hasn’t had the paradigm-busting impact initially predicted. At least, not yet.

Just look at the chart of the share of U.S. job postings that require AI-related skills by industry (healthcare sits close to the bottom).

But even though companies such as Pera Labs, HyberAspect, NeuraLight, Protai, and others have made noise in the healthcare AI space, a series of bottlenecks has made full-scale implementation by large hospitals and medical systems extremely difficult.

Implementing human-centric AI is critical to its widespread adoption in healthcare, and done well it can improve patient satisfaction and hospital efficiency. But several significant roadblocks stand in its way. Here are the most important.

Data Issues and AI Bias

AI models are useless without being fed high-quality data, which doesn’t happen nearly enough in healthcare. Despite the massive amounts of data produced by the healthcare space – around 30 percent of all the world’s data – data quality issues have plagued the sector for years and have harmed the clinical implementation of AI.

Part of this is due to the massive data surface area that must be probed for relevant information:

  • Medical databases containing peer-reviewed literature and studies such as PubMed, EMBASE, Cochrane Library, and MEDLINE
  • Insurance databases
  • Medical imaging databases
  • Electronic health records (EHR) and electronic medical records (EMR)
  • Postmarket surveillance

Much of this information is siloed in different repositories, often making healthcare data difficult to access and collect. Busy medical professionals often view data collection as an inconvenience. Collected clinical data can be incomplete or contain errors. And EHR/EMR systems are often incompatible across various providers, resulting in localized data that are difficult to integrate.
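
To make the integration problem concrete, here is a minimal sketch – in Python, with entirely hypothetical field names and schemas – of the per-provider mapping layer often needed just to reconcile two incompatible EHR exports into one common record format:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CommonRecord:
    """Unified patient record used for downstream analysis or model training."""
    patient_id: str
    birth_date: date
    diagnosis_codes: list[str]

def from_provider_a(row: dict) -> CommonRecord:
    # Hypothetical schema: Provider A stores ICD-10 codes as a
    # semicolon-delimited string and dates as ISO strings.
    return CommonRecord(
        patient_id=row["pt_id"],
        birth_date=date.fromisoformat(row["dob"]),
        diagnosis_codes=row["icd10"].split(";"),
    )

def from_provider_b(row: dict) -> CommonRecord:
    # Hypothetical schema: Provider B nests diagnoses in a list of dicts
    # and splits the birth date into separate year/month/day fields.
    return CommonRecord(
        patient_id=row["patient"]["id"],
        birth_date=date(row["birth"]["y"], row["birth"]["m"], row["birth"]["d"]),
        diagnosis_codes=[dx["code"] for dx in row["diagnoses"]],
    )

record = from_provider_a({"pt_id": "A-17", "dob": "1970-03-02", "icd10": "E11.9;I10"})
print(record)
```

Interoperability standards such as HL7 FHIR aim to eliminate exactly this kind of per-provider glue code, but adoption remains uneven, so mapping layers like the one above are still common in practice.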

Additionally, data privacy considerations around the presence of personally identifiable information (PII) and protected health information (PHI) add another challenge. Companies and healthcare systems need to be sure they’re on the right side of the Health Insurance Portability and Accountability Act (HIPAA) and other regulations before using healthcare data.
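
As a rough illustration, HIPAA’s Safe Harbor de-identification method requires removing 18 categories of identifiers from a dataset. The sketch below captures only the basic idea, with hypothetical field names – real de-identification also covers dates, free text, and quasi-identifiers, and should be validated by a privacy expert:

```python
# Hypothetical subset of direct identifiers; the real Safe Harbor
# standard lists 18 categories, including dates, geographic units
# smaller than a state, and biometric identifiers.
PHI_FIELDS = {"name", "ssn", "email", "phone", "street_address", "mrn"}

def strip_phi(record: dict) -> dict:
    """Drop direct identifiers from a patient record (illustrative only)."""
    return {k: v for k, v in record.items() if k not in PHI_FIELDS}

raw = {"name": "Jane Doe", "ssn": "123-45-6789", "age": 54, "icd10": "E11.9"}
print(strip_phi(raw))  # {'age': 54, 'icd10': 'E11.9'}
```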

These are all big problems for AI, which requires large, high-quality datasets to produce accurate results. A common outcome of these data gaps is AI bias – such as racial or gender disparities – stemming from a dearth of relevant data or from the inherent biases of those who built the AI model. Auditing model performance separately for each demographic subgroup, as sketched after the quote below, is one basic way to surface such disparities.

“In many cases,” writes AI reporter Kyle Wiggers in VentureBeat, “relevant datasets simply don’t exist, and there is a challenge of either learning from a small number of examples or leveraging AI to construct a good proxy for these datasets.”
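
One practical way to surface this kind of bias is to evaluate a trained model on each demographic subgroup separately rather than only in aggregate. Below is a minimal sketch using scikit-learn on entirely synthetic data (the `feat_a`, `feat_b`, and subgroup labels are stand-ins, not real clinical variables):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic stand-in data: two "clinical" features plus a demographic label.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(1000, 2)), columns=["feat_a", "feat_b"])
y = (X["feat_a"] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
group = rng.choice(["female", "male"], size=1000, p=[0.2, 0.8])  # imbalanced

model = LogisticRegression().fit(X, y)

# Aggregate accuracy can hide large subgroup gaps.
print("overall:", accuracy_score(y, model.predict(X)))
for g in ("female", "male"):
    mask = group == g
    print(g, accuracy_score(y[mask], model.predict(X[mask])))
```

If the subgroup scores diverge sharply, that’s a signal the training data underrepresents one population – exactly the failure mode Wiggers describes.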

Explainability and User Trust

Hand-in-hand with data issues and bias is the danger of black-box AI models, which make it difficult or impossible to understand how they generate specific predictions. Not only does this lack of understanding undermine trust in AI models, it’s also dangerous, because technicians may not discover flaws in the models until well after deployment.

It’s an especially important issue given the prevalence of bias we mentioned above, along with AI hallucinations – confident responses by AI systems such as large language models (LLMs) that sound plausible but are completely wrong.

Explainable AI (XAI) can help in this regard. XAI encompasses technologies and processes that help users interpret AI algorithms and how they work, along with the rationale behind significant recommendations (such as major surgery). XAI provides this rationale in natural language, making it easier for clinicians and patients to understand and trust the models.
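
As one concrete example of XAI tooling, the open-source SHAP library attributes a model’s individual predictions to its input features. A minimal sketch on synthetic data (the features are stand-ins for real clinical variables):

```python
import numpy as np
import shap  # pip install shap
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for tabular clinical data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 2] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# TreeExplainer computes per-feature contributions (SHAP values) for
# each prediction, so a reviewer can see *why* the model flagged a
# given patient rather than just the final score.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(shap_values)
```

Feature attributions like these are one building block; translating them into the natural-language rationales described above is a separate layer on top.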

Compliance and Regulations

We already mentioned issues around privacy regulations when training AI models for healthcare, which can be a major – but necessary – barrier to more widespread AI adoption. The sheer sensitivity around health data and privacy makes using real health data to train AI models extremely difficult.

But other regulatory hurdles also slow AI adoption. The approval process for new medical technologies is arcane and time-consuming, and many companies take years to navigate it successfully. And especially in the U.S. – often described as the most litigious society in the world – liability concerns play a role in new-technology adoption.

Researchers from the Brookings Institution, a U.S.-based think tank, add that developing complementary technologies or processes can help improve explainability, build trust, and facilitate greater AI adoption. This can include innovations in:

  • The ownership of health data and who can use it
  • Approval processes for new medical devices and software
  • Algorithmic transparency, data collection, and regulation
  • The development of clinical trial standards for AI systems
  • Liability rules involving medical providers and developers of new technology

CapeStart and Healthcare AI

CapeStart’s AI, machine learning (ML), and natural language processing (NLP) experts work with some of the world’s largest drug, biologics, and medical device manufacturers, as well as healthcare systems, to facilitate the safe and responsible adoption of these technologies.

From improving the speed and efficiency of systematic literature reviews, pharmacovigilance, and clinical evaluation reports, to providing image and data annotation for AI models that use computer vision, CapeStart can help remove bottlenecks and improve the efficiency of your next project.

Contact Us.