AI models exhibit bias in medical treatment recommendations

USA – A recent study conducted by researchers at the Icahn School of Medicine at Mount Sinai has revealed concerning disparities in medical treatment recommendations made by artificial intelligence (AI) models.

Despite identical clinical symptoms, AI systems provided varying treatment plans based solely on patients’ socioeconomic and demographic characteristics, including income, race, and gender.

This finding raises significant concerns that biases in AI-driven healthcare tools could perpetuate existing healthcare inequities.

In the study, researchers created profiles for 36 fictional patients and simulated 1,000 emergency room scenarios. Nine different AI healthcare models were tasked with managing these cases.
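
For illustration, the counterfactual setup the researchers describe can be sketched in a few lines of Python. Everything here is a hypothetical placeholder: the vignette text, the demographic categories, and the query_model() function stand in for the study's actual prompts and models, which are not reproduced here.

```python
# Minimal sketch of a counterfactual vignette test: vary only the
# demographics while holding the clinical presentation fixed.
# All names and text below are hypothetical placeholders.
from itertools import product

CLINICAL_CASE = (
    "presents to the emergency department with acute chest pain, "
    "shortness of breath, and diaphoresis."
)

INCOMES = ["high-income", "low-income"]
RACES = ["Black", "white"]
GENDERS = ["man", "woman"]


def build_vignettes():
    """Yield patient vignettes that differ only in demographics."""
    for income, race, gender in product(INCOMES, RACES, GENDERS):
        profile = f"A {income} {race} {gender}"
        yield profile, f"{profile} {CLINICAL_CASE}"


def query_model(vignette: str) -> str:
    """Placeholder for a call to an AI healthcare model."""
    raise NotImplementedError("Replace with a real model call.")


for profile, vignette in build_vignettes():
    # recommendation = query_model(vignette)  # e.g. "order a CT scan"
    print(profile)
```

Because every vignette shares the same clinical presentation, any systematic difference in the model's answers can be traced to the demographic wording alone.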

The findings were striking: wealthier patients were more likely to be recommended advanced diagnostic tests such as CT scans and MRIs, whereas lower-income individuals often received no further testing, even when clinical indicators suggested the need.

These discrepancies were observed across both proprietary and open-source AI systems, highlighting a widespread issue.

The study’s results mirror real-world healthcare disparities, where access to advanced medical services often correlates with socioeconomic status.

For instance, previous research has shown that Black Americans are less likely to receive timely and appropriate medical interventions compared to their white counterparts.

Similarly, a study analyzing social media posts found that AI models were significantly less effective at detecting signs of depression in Black Americans than in white Americans, due to differences in language patterns.

Experts emphasize the importance of addressing these biases to ensure equitable healthcare outcomes.

Dr. Girish Nadkarni, co-lead of the Mount Sinai study, stated, “AI can transform healthcare, but only if used ethically and transparently.”

Coauthor Dr. Eyal Klang echoed this sentiment, stressing the need for oversight to prevent algorithmic bias and ensure equitable care.

To mitigate AI bias, researchers advocate for several strategies:

  • Diversifying Training Data: Ensuring that AI models are trained on diverse datasets that accurately represent various demographic groups.
  • Implementing Fairness-Aware Algorithms: Developing algorithms designed to identify and correct biases during the decision-making process.
  • Conducting Regular Audits: Performing ongoing evaluations of AI systems to detect and address emerging biases (a minimal sketch of such an audit follows this list).
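
As a rough illustration of the auditing idea, the sketch below computes a simple demographic-parity gap, i.e. the difference in advanced-imaging recommendation rates between groups, over a hypothetical log of model outputs. The records are invented for illustration and are not data from the Mount Sinai study.

```python
# Sketch of a simple fairness audit: compare how often advanced imaging
# is recommended per demographic group. The log below is invented for
# illustration; it is not data from the study.
from collections import defaultdict

# (group, advanced imaging recommended?) pairs from a hypothetical audit log
records = [
    ("high-income", True), ("high-income", True), ("high-income", False),
    ("low-income", False), ("low-income", True), ("low-income", False),
]


def recommendation_rates(log):
    """Return the advanced-imaging recommendation rate per group."""
    tally = defaultdict(lambda: [0, 0])  # group -> [recommended, total]
    for group, recommended in log:
        tally[group][0] += int(recommended)
        tally[group][1] += 1
    return {group: rec / total for group, (rec, total) in tally.items()}


rates = recommendation_rates(records)
# Demographic-parity gap: difference between the highest and lowest rates.
gap = max(rates.values()) - min(rates.values())
print(rates)  # roughly {'high-income': 0.67, 'low-income': 0.33}
print(gap)    # roughly 0.33; a persistent nonzero gap would flag bias
```

In practice such a check would run continuously and be stratified across all the demographic attributes the study varied, so that a widening gap is caught before it affects care.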