
Artificial Intelligence in Healthcare Raises Concerns About Perpetuating Racial Inequities, Highlighting the Urgency of Ethical Algorithmic Safeguards

Doctors, data scientists, and hospital executives say artificial intelligence holds the potential to tackle previously insurmountable problems. AI is already demonstrating its ability to assist clinicians in diagnosing breast cancer, interpreting X-rays, and predicting which patients will need additional care. Nevertheless, amid the growing excitement lies a cautionary note: these powerful new tools can perpetuate long-standing racial disparities in healthcare delivery.

Dr. Mark Sendak, a lead data scientist at the Duke Institute for Health Innovation, warned, “If you mess this up, you can really, really harm people by entrenching systemic racism further into the health system.”

These cutting-edge healthcare tools are often constructed using machine learning, a subset of AI that trains algorithms to identify patterns in vast datasets, such as billing information and test results. By recognizing these patterns, the algorithms can forecast future outcomes, such as the likelihood of a patient developing sepsis. They can tirelessly monitor every patient in a hospital, alerting clinicians to potential risks that may otherwise go unnoticed due to overworked staff.

However, the data on which these algorithms are built frequently reflect existing inequities and biases that have long plagued the US healthcare system. Research indicates that healthcare providers often deliver different treatment to white patients compared to patients of color. These discrepancies in care are then ingrained in the data, which subsequently train the algorithms. Additionally, people of color are often underrepresented in the training datasets.

Dr. Sendak emphasized, “When you learn from the past, you replicate the past. You further entrench the past. Because you take existing inequities and you treat them as the aspiration for how healthcare should be delivered.”

A significant study published in the journal Science in 2019 found that an algorithm used to predict healthcare needs for over 100 million people exhibited bias against Black patients. This algorithm relied on healthcare spending to forecast future health requirements. However, due to historical disparities in access to care, Black patients often had lower healthcare expenditures. Consequently, the algorithm recommended extra care only when Black patients were significantly sicker than their white counterparts.
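The study's core failure mode, using spending as a proxy for medical need, can be illustrated with a small, purely hypothetical simulation. The groups, numbers, and the size of the access gap below are invented for illustration and are not taken from the study itself:

```python
import random

random.seed(0)

# Toy setup: two groups with identical distributions of true medical
# need, but group B spends less for the same need because of an
# assumed gap in access to care.
def simulate_patient(group):
    need = random.uniform(0, 10)            # true health need, 0-10
    access = 1.0 if group == "A" else 0.85  # hypothetical spending gap
    spending = need * 1000 * access
    return group, need, spending

patients = [simulate_patient(g) for g in ["A", "B"] * 5000]

# A risk score trained to predict spending reduces, in this toy, to
# spending itself. Enroll the top 10% of scores in an extra-care program.
ranked = sorted(patients, key=lambda p: p[2], reverse=True)
selected = ranked[: len(ranked) // 10]

def avg_need(group):
    needs = [p[1] for p in selected if p[0] == group]
    return sum(needs) / len(needs)

# Group B patients must be sicker than group A patients to clear the
# same spending cutoff, reproducing the pattern the study described.
print(f"A: {avg_need('A'):.2f}  B: {avg_need('B'):.2f}")
```

Even though both groups are equally sick by construction, the spending-based score both under-enrolls group B and enrolls only its sickest members.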

Building clinical AI tools using data that may harbor bias is akin to navigating a minefield, as Dr. Sendak warned, “and [if you’re not careful] your stuff’s going to blow up and it’s going to hurt people.”

Addressing the challenge of eradicating racial bias, Dr. Sendak teamed up with pediatric emergency medicine physician Dr. Emily Sterrett in 2019 to develop an algorithm for predicting childhood sepsis at Duke University Hospital’s emergency department.

Sepsis occurs when the body overreacts to an infection, attacking its own organs. Although rare in children, with approximately 75,000 annual cases in the US, this treatable condition proves fatal for nearly 10% of affected children. Prompt administration of antibiotics is highly effective against sepsis. However, diagnosis poses challenges, since early symptoms such as fever, elevated heart rate, and increased white blood cell count resemble those of other illnesses, including the common cold.

An algorithm capable of predicting the risk of sepsis in children would be revolutionary for physicians nationwide. Dr. Sterrett emphasized, “When it’s a child’s life on the line, having a backup system that AI could offer to bolster some of that human fallibility is really, really important.”

Nonetheless, the groundbreaking Science study on bias served as a stark reminder to Dr. Sendak and Dr. Sterrett of the importance of careful design. The team spent a month training the algorithm to identify sepsis based on vital signs and lab tests instead of readily available but often incomplete billing data. Throughout the first 18 months of development, any adjustment made to the program triggered quality control tests to ensure that the algorithm identified sepsis equally well for individuals of all races and ethnicities.
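A per-group quality-control check of the kind described above can be sketched roughly as follows. The function names, the 5% tolerance, and the toy data are assumptions for illustration, not Duke's actual test suite:

```python
def recall(preds, labels):
    """Fraction of true sepsis cases the model flagged (sensitivity)."""
    true_pos = sum(1 for p, y in zip(preds, labels) if p and y)
    positives = sum(labels)
    return true_pos / positives if positives else float("nan")

def subgroup_audit(preds, labels, groups, tol=0.05):
    """Compare each group's sensitivity to the overall rate and flag
    any group that lags by more than `tol` (threshold is an assumption)."""
    overall = recall(preds, labels)
    report = {}
    for g in sorted(set(groups)):
        rows = [i for i, gg in enumerate(groups) if gg == g]
        r = recall([preds[i] for i in rows], [labels[i] for i in rows])
        report[g] = {"recall": round(r, 3), "flagged": overall - r > tol}
    return report

# Toy data in which the model misses more sepsis cases in group "b".
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
labels = [1, 1, 1, 1, 1, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b", "a", "b"]
print(subgroup_audit(preds, labels, groups))
```

Running a check like this after every change to the model is what turns "equally well for all races and ethnicities" from an aspiration into a gate the code must pass.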

However, nearly three years into their deliberate and systematic effort, the team discovered that bias had still managed to infiltrate their algorithm. Dr. Ganga Moorthy, a global health fellow with Duke’s pediatric infectious diseases program, presented research to the developers revealing that doctors at Duke took longer to order blood tests for Hispanic children eventually diagnosed with sepsis compared to white children.

“One of my major hypotheses was that physicians were taking illnesses in white children perhaps more seriously than those of Hispanic children,” explained Dr. Moorthy. She also questioned whether the presence of interpreters slowed down the process.

Dr. Sendak expressed his frustration, stating, “I was angry with myself. How could we not see this? We totally missed all of these subtle things that if any one of these was consistently true could introduce bias into the algorithm.”

The team had unknowingly built this delay into their data, potentially teaching their AI to believe, inaccurately, that Hispanic children develop sepsis more slowly than other children, a time difference that could prove fatal.
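Auditing for this kind of label bias can start with something as simple as comparing time-to-test-order across groups. The sketch below uses invented numbers and is not Duke's analysis:

```python
from statistics import median

def order_delay_audit(delays_minutes, groups):
    """Median minutes from ED arrival to blood-test order, per group.
    A persistent gap here can silently shift any training label derived
    from test timestamps. (Data and group labels are illustrative.)"""
    by_group = {}
    for delay, group in zip(delays_minutes, groups):
        by_group.setdefault(group, []).append(delay)
    return {g: median(v) for g, v in sorted(by_group.items())}

# Toy encounter data showing a delay gap of the kind described above.
delays = [30, 45, 40, 80, 95, 70]
groups = ["white", "white", "white", "hispanic", "hispanic", "hispanic"]
print(order_delay_audit(delays, groups))
```

If the medians diverge consistently, the timestamps feeding the model encode clinician behavior, not biology, and need correction before training.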

Regulators are beginning to take notice of this issue. In recent years, hospitals and researchers have established national coalitions to share best practices and develop “playbooks” to combat bias. However, there are indications that few hospitals are truly grappling with the equity threat posed by this new technology.

Researcher Paige Nong interviewed officials at 13 academic medical centers last year and found that only four considered racial bias when developing or evaluating machine learning algorithms.

Nong remarked, “If a particular leader at a hospital or a health system happened to be personally concerned about racial inequity, then that would inform how they thought about AI. But there was nothing structural, there was nothing at the regulatory or policy level that was requiring them to think or act that way.”

Some experts compare this aspect of AI to the “wild west” because of the absence of clear regulations. Investigations published in 2021 found the Food and Drug Administration’s policies on racial bias in AI to be inconsistent, with only a fraction of publicly disclosed algorithm applications including racial information.

Over the past 10 months, the Biden administration has released numerous proposals to establish safeguards for this emerging technology. The FDA now requires developers to outline steps taken to mitigate bias and disclose the data used to build new algorithms.

In April, the Office of the National Coordinator for Health Information Technology proposed regulations mandating that developers provide clinicians with a comprehensive understanding of the data utilized in their algorithms. Kathryn Marchesini, the agency’s chief privacy officer, referred to these regulations as a “nutrition label” that enables doctors to understand “the ingredients used to make the algorithm.” The hope is that increased transparency will assist providers in determining whether an algorithm is unbiased enough to be safely used on patients.

Last summer, the Office for Civil Rights at the US Department of Health and Human Services proposed updated regulations explicitly prohibiting clinicians, hospitals, and insurers from engaging in discriminatory practices through the use of clinical algorithms. Melanie Fontes Rainer, the agency’s director, stated that although federal anti-discrimination laws already prohibit such activities, her office aims “to make sure that [providers and insurers] are aware that this isn’t just ‘Buy a product off the shelf, close your eyes and use it.'”

The industry receives these regulatory measures with a mix of appreciation and wariness. While many AI and bias experts welcome the increased attention, some express concerns. Several academics and industry leaders desire explicit guidelines from the FDA that outline the necessary steps for developers to prove the impartiality of their AI tools. Others advocate for the ONC to require developers to publicly disclose the “ingredient list” of their algorithms, enabling independent researchers to assess them for flaws.

There are also concerns that these proposals, particularly the HHS’s explicit prohibition on discriminatory AI, could have unintended consequences. Carmel Shachar, executive director of the Petrie-Flom Center for Health Law Policy at Harvard Law School, worries that hospitals with limited resources may struggle to comply with the law in the absence of clear guidance, potentially leading them to avoid using any AI altogether.

Dr. Sendak at Duke University welcomes the new regulations aimed at eliminating bias from algorithms but raises concerns about the lack of acknowledgment from regulators regarding the resources required to identify and monitor these issues. He believes that investments should be made to address this problem adequately.

In the absence of additional funding and clear regulatory guidance, AI developers are left to address these problems on their own. At Duke, the team immediately conducted new rounds of testing after discovering potential bias against Hispanic patients in their algorithm for predicting childhood sepsis. It took eight weeks to conclusively determine that the algorithm performed equally well for all patients. Dr. Sendak finds this conclusion sobering rather than comforting, stating, “Every time you become aware of a potential flaw, there’s that responsibility of asking, ‘Where else is this happening?'”

Dr. Sendak plans to establish a more diverse team consisting of anthropologists, sociologists, community members, and patients to collaborate on identifying bias in Duke’s algorithms. However, he emphasizes that addressing underlying racial inequities in the entire healthcare sector is necessary for these new tools to have a positive impact.

The issue of bias in AI algorithms in healthcare is now drawing attention from regulators and researchers alike. But without clearer regulations and greater transparency, these powerful tools risk perpetuating the racial inequities they could help dismantle. Only a comprehensive, concerted effort can ensure the healthcare industry addresses bias and provides equitable care for all.

Editor PSCKS News

Editorial board member of PSCKS News. Our mission is to share authentic local news and healthy-lifestyle content.
