AI in Health Care Raises Promise and Bias Concerns

AI’s Growing Role in Modern Health Care

Artificial intelligence (AI) is increasingly becoming a cornerstone in health care, transforming how providers interact with patients, diagnose conditions, and manage data. At the forefront of this shift is Dr. Andrew Carroll, a seasoned physician with nearly three decades of experience. In his Chandler, Arizona, office, Carroll uses Nabla, an AI-powered tool integrated into his electronic health records. Instead of typing, the program listens, transcribes, and drafts clinical notes in real time, allowing Carroll to maintain eye contact and focus fully on his patient.

“AI is helping us do what we’re trained to do—care for patients,” Carroll remarked. However, despite its efficiency, experts caution that unchecked AI may perpetuate biases embedded in the data it’s trained on.

Bias in AI: A Lingering Challenge

Dr. Bradley Greger, associate professor at Arizona State University, emphasized in a recent interview that AI’s strength—processing massive amounts of data—is also its weakness. “AI helps accelerate understanding,” he said. “But it only knows what it’s been trained on, and that data often includes human bias.”

Bias, which evolved as a mental shortcut for making rapid decisions, can manifest as prejudice in AI systems. When AI is trained on historical medical data that reflects societal inequities, the algorithms can replicate and even amplify those inequities.

Gender Bias in Language Models

Recent research titled “Gender Bias in LLMs for Long-Term Care” evaluated how two large language models—Meta’s Llama 3 and Google’s Gemma—handled gender-specific patient records. The study altered the gender in over 600 health care records to test for discrepancies in language and diagnosis.

Findings revealed that while Llama 3 showed minimal bias, Gemma frequently downplayed women’s health conditions using vague language like “health complications,” while offering specific terminology such as “chest infection” or “COVID-19” for men. Additionally, phrases highlighting satisfaction were more commonly associated with male patients, reinforcing gender stereotypes in health care documentation.
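The study's full pipeline is not reproduced here, but the core idea of a gender-swap probe can be sketched in a few lines: generate summaries for paired records that differ only in gender, then compare the wording. In the sketch below, `summarize` is a placeholder for whatever model call (for example, Llama 3 or Gemma) is being evaluated, and the vague/specific term lists are purely illustrative assumptions, not the study's actual coding scheme.

```python
import re

# Illustrative term lists only; the study's real categories are not reproduced here.
VAGUE_TERMS = {"health complications", "complex needs"}
SPECIFIC_TERMS = {"chest infection", "covid-19", "fracture"}

def swap_gender(record: str) -> str:
    """Crudely flip common gendered words so the paired record differs only in gender."""
    swaps = {"she": "he", "her": "his", "woman": "man", "female": "male"}
    both = {**swaps, **{v: k for k, v in swaps.items()}}
    pattern = r"\b(" + "|".join(both) + r")\b"
    return re.sub(pattern, lambda m: both[m.group(0).lower()], record, flags=re.IGNORECASE)

def term_counts(summary: str) -> tuple[int, int]:
    """Count vague vs. specific clinical terms in a generated summary."""
    text = summary.lower()
    return (sum(text.count(t) for t in VAGUE_TERMS),
            sum(text.count(t) for t in SPECIFIC_TERMS))

def probe(records: list[str], summarize) -> dict:
    """Summarize each record and its gender-swapped twin, then tally the wording."""
    totals = {"original": [0, 0], "swapped": [0, 0]}
    for record in records:
        for key, text in (("original", record), ("swapped", swap_gender(record))):
            vague, specific = term_counts(summarize(text))
            totals[key][0] += vague
            totals[key][1] += specific
    return totals
```

If summaries of the swapped records consistently lean on vaguer language than their originals, that is the kind of discrepancy the researchers flagged.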

Racial Disparities in Cancer Diagnostics

Another study from Duke University examined how the AI model Mirai predicted breast cancer risk. Though advanced, Mirai and its simplified counterpart AsymMirai were less accurate for Black patients than for white patients. Researchers attributed this to the training data, which predominantly featured white individuals. The discrepancy underscores the urgent need for diverse datasets to ensure equitable health outcomes.
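Gaps like this surface when a model's performance is measured separately for each demographic group rather than only in aggregate. Below is a minimal sketch of such a subgroup check, assuming scikit-learn and a held-out test set with demographic labels; it does not reproduce the Duke team's actual metrics or data.

```python
from collections import defaultdict
from sklearn.metrics import roc_auc_score

def auroc_by_group(y_true, y_score, groups):
    """Compute AUROC separately for each demographic group in the test set."""
    buckets = defaultdict(lambda: ([], []))
    for label, score, group in zip(y_true, y_score, groups):
        buckets[group][0].append(label)
        buckets[group][1].append(score)
    # A large gap between groups suggests the model underperforms for patients
    # who were underrepresented in its training data.
    return {g: roc_auc_score(labels, scores) for g, (labels, scores) in buckets.items()}

# Hypothetical usage with made-up variable names:
# auroc_by_group(test_labels, model_risk_scores, test_race_labels)
```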

AI in Diagnosis and Patient Experience

Colorado-based tech company Aclarion is tackling pain diagnostics by using AI to analyze MRI data and identify chemical markers in spinal discs. CEO Brent Ness explained, “The raw waveform data is unintelligible to most physicians. Our AI translates it into actionable insights.” This approach enables doctors to detect causes of pain that would otherwise remain invisible on standard scans.

Dr. Carroll believes that with access to more granular genetic and ethnic data, AI could eventually reduce bias. “Black is not just Black,” he said, noting that ethnicity plays a key role in medical outcomes. “You can have a Black person who is of Nigerian, Puerto Rican, or Jamaican descent.” By considering such distinctions, AI could personalize early interventions for conditions like diabetes.

Patient Trust and Legislative Safeguards

Despite AI’s capabilities, many patients remain skeptical. According to a Pew Research study, most prefer that doctors—not algorithms—make the final call on treatment. Concerns include the inability of AI to understand the emotional and cultural nuances of health decisions.

To address these concerns, Arizona passed House Bill 2175, mandating that only licensed medical professionals can make final decisions on insurance claims, not AI. The law takes effect July 1, 2026.

Real-Life Cases Highlight Importance of Human Judgment

Dr. Carroll recounted treating a 71-year-old Japanese woman who declined treatment for Stage 1 breast cancer. “It may have been a personal or cultural decision,” he said, “but I respected it.” He emphasized the importance of compassion—something no algorithm can replicate. “A machine would have told her to undergo surgery and radiation to reduce costs,” he noted. Carroll stayed in touch with her and her family until her passing, underscoring the irreplaceable value of human connection in health care.

Patients as Advocates in the AI Era

As AI becomes more accessible, patients are also turning to tools like ChatGPT and Gemini to interpret test results and understand conditions. Dr. John Oertle, Chief Medical Officer at Envita Medical Centers, sees this as a positive trend. “It’s important that patients advocate for themselves,” he said. “Paired with guidance from a physician, AI can be a great educational tool.”

Data from the Annenberg Public Policy Center shows that 63% of Americans consider AI-generated health information reliable, indicating a growing trust in the technology when used responsibly.

Looking Ahead: Balancing Innovation with Ethics

AI is undoubtedly reshaping health care, offering faster diagnoses, personalized treatment plans, and streamlined administrative tasks. Yet, the potential for bias looms large. Experts agree that while AI can be a powerful assistant, it should never replace human oversight. As AI tools continue to evolve, transparent validation, diverse training data, and ethical considerations will be critical in ensuring they serve all patients equitably.

