Human-Centric AI: The Interactions Between Humans and AI in Healthcare

Healthcare systems across the world are under stress. The grinding nature of the pandemic put many hospitals on the back foot, and the aging population’s growing need for care has led to healthcare staff and physician shortages across Europe, the U.S., Canada, and beyond.

Artificial intelligence (AI)-based tools were designed, in part, to help make healthcare staff more efficient – an ideal fit for an age of worker shortages.

But up to now, remarkably few AI applications have made it from the research lab into clinical practice. To a large degree, that’s because many AI applications for healthcare weren’t designed with human requirements in mind.

The Main Bottlenecks of AI Adoption in Healthcare

Many hospitals have already dipped their toes into the AI implementation pool through predictive analytics (to analyze and improve spending, patient flow, and other indicators) and robotic surgery.

But we’re still a long way away from having ubiquitous AI systems throughout the healthcare workflow. A recent study in the British Medical Journal concluded that of more than 200 AI prediction tools developed for healthcare, only two demonstrated real usefulness in guiding clinical decisions – and many were deemed capable of doing more harm than good. 

Why? 

Poor Quality Data

The healthcare industry generates around 30 percent of the world’s data volume, and that data is growing at a compound annual rate projected to approach 40 percent by 2025 – roughly 10 percentage points faster than the financial services industry.

But it’s increasingly acknowledged that poor healthcare data quality has a significant effect not just on the healthcare system itself – through inaccurate diagnoses and delayed treatments – but also on the development of AI applications for the industry.

After all, AI and machine learning (ML) applications are generally only as good as the data upon which they are trained. “If the datasets are bad,” explains Harvard Business Review, “the algorithms make poor decisions.” 
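
To make the point concrete, here’s a minimal sketch of the kind of data-quality checks a team might run before training a model. It uses pandas on a tiny made-up dataset; the column names, thresholds, and values are purely illustrative and aren’t drawn from any specific project.

```python
import numpy as np
import pandas as pd

# A tiny, made-up stand-in for a real clinical extract; columns are illustrative only.
df = pd.DataFrame({
    "patient_id":  [101, 102, 103, 103, 105],
    "age":         [54, np.nan, 37, 37, 130],      # one missing value, one implausible value
    "systolic_bp": [120, 145, np.nan, np.nan, 88],
    "diagnosis":   ["I10", "E11", None, None, "I10"],
})

# 1. How much of each column is missing?
print("Share of missing values per column:")
print(df.isna().mean().sort_values(ascending=False))

# 2. Flag physiologically implausible values (thresholds here are examples, not clinical guidance).
implausible = df[(df["age"] < 0) | (df["age"] > 120)]
print(f"\n{len(implausible)} row(s) with an implausible age")

# 3. Look for duplicate patient records that could leak between training and test splits.
print(f"{df.duplicated(subset='patient_id').sum()} duplicate patient ID(s) found")
```

Checks like these won’t fix systemic data problems, but they catch the most obvious issues before they’re baked into a model.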

Interoperability

Data quality and related problems can also create interoperability issues when algorithms and AI models are deployed across hospitals with different infrastructures, physicians, patients, and data. Models developed within the clinical confines of a single hospital often face extreme difficulty when deployed at a new facility.
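
One common way to surface this problem early is external validation: train at one site, then measure how much performance drops at another. The sketch below is a simplified illustration using scikit-learn, with synthetic data standing in for two hospitals’ records; no real model or dataset is implied.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic stand-ins for two hospitals; a real project would load site-specific records.
X_a, y_a = make_classification(n_samples=2000, n_features=20, random_state=0)
X_b, y_b = make_classification(n_samples=2000, n_features=20, shift=0.5, random_state=1)

# Train on "Hospital A", then check whether performance holds up on "Hospital B".
model = LogisticRegression(max_iter=1000).fit(X_a, y_a)
internal_auc = roc_auc_score(y_a, model.predict_proba(X_a)[:, 1])
external_auc = roc_auc_score(y_b, model.predict_proba(X_b)[:, 1])

print(f"Internal AUC: {internal_auc:.2f}  External AUC: {external_auc:.2f}")
# A large gap between the two is an early warning that the model may not transfer.
```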

Bias and Discrimination

Civil liberties groups have often argued that AI models, if not designed with the utmost care, can perpetuate discrimination and racism and even make them worse. 

Studies have shown that AI systems can pick up various kinds of bias from their training materials. Examples include models that base predictions on the brand of medical device used, or a model that concluded patients lying down for an X-ray were more likely to develop serious Covid illness (in that study, the most seriously ill patients happened to have been imaged lying down).
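
A simple safeguard against this kind of shortcut learning is to audit model performance across the metadata that could act as a confounder. The snippet below is an illustration only: the evaluation table, device labels, and patient positions are invented, and a real audit would also cover demographic groups and use far more data.

```python
import pandas as pd
from sklearn.metrics import recall_score

# Hypothetical evaluation set: true labels, model predictions, and acquisition metadata
# that could act as a shortcut (device brand, patient position). All values are made up.
eval_df = pd.DataFrame({
    "y_true":   [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred":   [1, 0, 0, 1, 0, 1, 1, 0],
    "device":   ["A", "A", "A", "B", "B", "B", "B", "A"],
    "position": ["supine", "upright", "supine", "supine",
                 "upright", "supine", "upright", "upright"],
})

# Compare sensitivity (recall) across each metadata group; large gaps suggest the model
# may be keying on the acquisition context rather than the pathology itself.
for column in ["device", "position"]:
    for group, rows in eval_df.groupby(column):
        sensitivity = recall_score(rows["y_true"], rows["y_pred"], zero_division=0)
        print(f"{column}={group}: sensitivity={sensitivity:.2f} (n={len(rows)})")
```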

Researchers say that to minimize such bias, AI models must be designed and trained with input from a diverse group of stakeholders, keeping in mind human-centered AI principles (which we’ll explain further below).

Poor Model Design

A common pitfall of new technology products the world over is a lack of focus or direction. Technology developers often become so enamored with their technology that they forget the problem they’re trying to solve, and the same is true for some AI models in healthcare. An AI model designed to solve a poorly defined problem, or one that doesn’t consider the needs of physicians and other healthcare workers, is usually doomed to failure.

At the same time, AI models tightly focused on solving pain points at a specific stage of the healthcare workflow can provide significant value.

Workflow Fit and Lack of Trust

Workflow fit and trust are probably the biggest stumbling blocks. It’s no secret that most physicians have tight schedules and are creatures of habit who rely on well-established routines formed through years of experience in healthcare settings.

Add the fact that healthcare professionals are often required to make quick decisions in life-or-death situations, and it’s easy to see why they’ll reject a new tool that doesn’t fit seamlessly into their workflow or adds unnecessary complexity.

New tools need to earn healthcare workers’ trust through accuracy and reliability – not an easy proposition, especially if an AI tool provides recommendations that are wrong or don’t make sense, or if there’s no way of knowing how the model came to its conclusion. 

At the same time, some health workers may trust AI models too much, leading to wrong diagnoses – or worse. 

“Undertrust occurs when trust falls short of the AI’s actual capabilities. Several studies have shown how radiologists overlooked small cancers identified by the AI,” writes Sean Carney, former Chief Experience Design Officer at Royal Philips. “On the other extreme, overtrust leads to over-reliance on AI, causing radiologists to miss cancers that the AI failed to identify.”
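
One practical way to support calibrated trust is to pair every prediction with a view of what drove it, so clinicians can judge when to lean on the model and when to question it. The sketch below is only an illustration of that idea, not any vendor’s method: it trains a plain logistic regression on synthetic data and prints per-feature contributions for one patient, using feature names that are entirely made up.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Illustrative feature names; the data below is synthetic and carries no clinical meaning.
feature_names = ["age", "heart_rate", "creatinine", "lactate"]
X, y = make_classification(n_samples=500, n_features=4, random_state=42)
model = LogisticRegression().fit(X, y)

# For one patient, show which inputs pushed the risk score up or down (coefficient * value),
# so the clinician sees why the model flagged the case, not just a bare number.
patient = X[0]
risk = model.predict_proba(patient.reshape(1, -1))[0, 1]
contributions = model.coef_[0] * patient

print(f"Predicted risk: {risk:.2f}")
for name, contribution in sorted(zip(feature_names, contributions), key=lambda pair: -abs(pair[1])):
    print(f"  {name}: {contribution:+.2f}")
```

Simple, transparent explanations like this won’t resolve overtrust or undertrust on their own, but they give clinicians something concrete to check the model’s output against.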

You, Your Physician, and Your AI Agent

One area where AI has already shown its value is in healthcare research, from pharmacovigilance to the creation of systematic literature reviews (SLRs) and clinical evaluation reports (CERs). But clinical applications of AI are less common for now. 

Developers of healthcare AI systems have realized that it’s not about replacing humans. Rather, it’s about finding the right applications that provide the most value and that augment the work performed by human healthcare professionals. 

A hybrid human-AI approach combines the value of AI in finding and exploiting efficiencies while keeping healthcare an empathetic, personalized, and human-centric experience for patients. Empathy is a major driver in positive healthcare experiences and outcomes. 

That’s a big reason why human-centered AI – medical AI models driven by what is “humanly desirable” from the perspectives of various stakeholders – has become a hot topic in healthcare. Indeed, instead of simply evaluating models by accuracy, developers must also consider the clinical context in which these models are deployed – including the empathetic, intellectual, and emotional elements often present in healthcare situations. 

That’s why the healthcare team of the future will likely be built on a complementary relationship between your physician and an AI agent. AI agents can help scale the effectiveness of human physicians by automating menial tasks such as writing letters. Chatbots can help evaluate symptoms and triage patients. And models designed to scan medical images can spot anomalies much faster than humans.

At the same time, those models need a steady pair of human hands on the wheel so they don’t go into the ditch. Viewed in this way, a future AI model can act as a physician’s co-worker by offering evidence-based suggestions. 

Human-Centric AI Can Improve Healthcare

The benefits of using AI in healthcare settings are clear, but it’s not a magic pill able to replace all healthcare workers – and that’s a good thing. Experts agree AI is at its best when combined with human interaction and empathy: AI-driven automation can significantly lower the day-to-day menial task burden on healthcare workers while not removing the human-centric approach provided by a live nurse or doctor.  

In such a situation, everyone wins:

  • Healthcare systems, which can see more patients while shrinking waiting lists.
  • Healthcare workers, who have fewer menial tasks to perform.
  • Patients, who can be diagnosed and treated faster and more efficiently.

CapeStart’s AI and machine learning engineers, data scientists, and data annotation specialists work daily with leading healthcare organizations to improve efficiencies and health outcomes. 

Get in touch with us today to learn what we can do for your healthcare or health research organization.

Contact Us.