Trust is becoming a form of social capital in healthcare AI, and earning it demands a comprehensive approach. A number of flawed algorithms have entered the market and been found to contain bias and to lack reproducibility or transparency. This damages trust; more must be done to foster safe, validated algorithms that can improve outcomes, advance health equity, and reduce clinicians' workloads.
Tools and processes have been developed to address bias, and they need to be supported by building diverse data science teams. A number of technological tools and checklists have been developed to address racial and gender bias in algorithms; these can be adapted to healthcare and built upon (a sketch of the kind of audit such checklists call for follows below).
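As an illustration of what such checklists operationalize, here is a minimal sketch, not drawn from the report, of a subgroup performance audit that compares a model's false negative rate across demographic groups. The column names (`y_true`, `y_pred`, the grouping column) and the 5-point gap flagged in the usage comment are hypothetical choices for the example.

```python
import pandas as pd

def subgroup_false_negative_rates(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Compute a binary classifier's false negative rate per subgroup.

    Expects 0/1 columns 'y_true' and 'y_pred'. A large gap in miss rate
    between subgroups is one common red flag that bias checklists ask
    teams to audit before deployment.
    """
    def fnr(group: pd.DataFrame) -> float:
        positives = group[group["y_true"] == 1]
        if len(positives) == 0:
            return float("nan")  # no positive cases in this subgroup
        return float((positives["y_pred"] == 0).mean())

    return pd.Series({name: fnr(g) for name, g in df.groupby(group_col)})

# Hypothetical usage: flag subgroups whose miss rate far exceeds the best group.
# rates = subgroup_false_negative_rates(predictions, group_col="race")
# print(rates[rates > rates.min() + 0.05])
```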
More cooperation across the industry is needed to create Good Algorithmic Practices that span use cases and the full lifecycle of algorithms. The FDA has fallen behind and does not address the entire spectrum of algorithms. Industry consortia are urgently needed to act as a “Consumer Reports” for algorithms and to create certification processes across the stages of that lifecycle.
The “AI and Trust in Healthcare” report examines the growing role of AI in healthcare and the underlying factors that can both erode and build trust among end users of products and services that use artificial intelligence. The report also proposes an intra-industry consortium to address critical areas central to patient safety and to building an ecosystem of validated, transparent, and health-equity-oriented models with the potential for beneficial social impact.
Over the past several years, AI has become one of the most discussed technologies in society. With the potential to determine who receives what form of medical care and when, the stakes are high when AI algorithms are not deployed with care. We have already seen many algorithms containing racial and gender bias enter the market, and many clinical decision support tools in use today still rest on problematic science.
A review of clinical algorithms currently in use across multiple specialties found numerous cases in which race correction was used inappropriately. Earlier this year we discussed additional cases in our podcast episode with Dr. Tania Martin-Mercado, who highlighted the estimated glomerular filtration rate (eGFR) equation used in kidney disease, whose race correction results in African-Americans waiting longer for kidney transplants.
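To make that mechanism concrete, the sketch below implements the 2009 CKD-EPI creatinine equation, whose published race coefficient of 1.159 inflates the eGFR estimate for Black patients; because transplant waitlisting in the U.S. typically requires an eGFR at or below 20 mL/min/1.73 m², the inflated estimate delays eligibility. The coefficients follow the published 2009 equation; the patient values are illustrative only.

```python
def egfr_ckd_epi_2009(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """Estimated GFR (mL/min/1.73 m^2) via the 2009 CKD-EPI creatinine equation.

    The race coefficient (1.159) raises the estimate for Black patients,
    which can delay crossing severity thresholds such as the eGFR <= 20
    cutoff commonly used for transplant waitlisting.
    """
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = 141.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Illustrative patient: identical labs, different race coefficient.
same_labs = dict(scr_mg_dl=3.4, age=55, female=False)
print(egfr_ckd_epi_2009(**same_labs, black=False))  # ~19 -> below the 20 cutoff
print(egfr_ckd_epi_2009(**same_labs, black=True))   # ~22 -> still above it
```

Running the example shows two patients with identical labs landing on opposite sides of the waitlisting threshold purely because of the race multiplier.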
During the first year of the COVID-19 pandemic, hundreds of algorithms were developed to aid diagnosis by analyzing x-rays and CT scans; one study found that none of them was reproducible. This reproducibility crisis in medical AI has the potential to undermine the trust of both providers and patients. Princeton University researchers recently held a workshop and released a white paper on the extent of the problem in machine learning, including many examples from medicine.
The “AI and Trust in Healthcare” report provides an overview of the challenges in building AI models for healthcare and medicine, the tools and processes that can be used to address problems such as bias and drift, and the steps companies can take to build trust through both good data science and intentional efforts to build diverse teams capable of addressing the multiple axes of bias.
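Because the report pairs bias with drift, a brief sketch of one common drift check may help: the population stability index (PSI) compares a feature's distribution at training time with its distribution in production. This is a generic technique, not the report's specific method; the 10-bin layout and the 0.1/0.25 alert thresholds are conventional rules of thumb.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a baseline (training) sample and a production sample.

    Rule of thumb: PSI < 0.1 is stable, 0.1-0.25 warrants investigation,
    and > 0.25 indicates significant drift.
    """
    # Bin edges come from the baseline distribution's quantiles.
    edges = np.quantile(expected, np.linspace(0.0, 1.0, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range production values

    exp_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    act_frac = np.histogram(actual, bins=edges)[0] / len(actual)

    # A small epsilon avoids log(0) for empty bins.
    eps = 1e-6
    exp_frac = np.clip(exp_frac, eps, None)
    act_frac = np.clip(act_frac, eps, None)
    return float(np.sum((act_frac - exp_frac) * np.log(act_frac / exp_frac)))

# Hypothetical usage with a shifted lab value:
rng = np.random.default_rng(0)
baseline = rng.normal(1.0, 0.2, 10_000)     # feature at training time
production = rng.normal(1.15, 0.2, 10_000)  # drifted production sample
print(population_stability_index(baseline, production))  # well above 0.25
```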
Finally, solving these problems requires more than the attention of individual companies. The FDA and the broader regulatory environment have fallen behind in addressing the challenges posed by a rapidly growing, high-stakes technology. The author proposes consortia organized around the various use cases for AI that would provide a more transparent and scientifically rigorous approach to certifying algorithms once they have been assessed for validation, data governance, bias, explainability, and impact on health equity.
In addition to the consortia for AI in healthcare, the analyst examines a recent proposal that calls for using liability insurance in healthcare AI to drive adoption of the highest-quality algorithms. The certification process the consortia develop could work in tandem with the insurance industry, with vetted algorithms receiving lower premiums for completing certification.
Readers of our report will learn about state-of-the-art processes for bias and risk mitigation that draw upon work developed within government and think tanks with programs focused on bias and AI. We link these processes to emerging data science work on the complexity of digital health data. This will be of use to both data scientists and executives interested in developing innovative machine learning tools that carry a reduced risk of doing harm.
Table of Contents
Executive Summary
Samples
Companies Mentioned
- ClosedLoop.AI
- UC Health