
AI Algorithms in Health Insurance: A Double-Edged Sword

Evidence suggests that insurance companies use AI to delay or limit health care that patients need. FatCamera/E+ via Getty Images

Over the past decade, health insurance companies have increasingly turned to artificial intelligence algorithms to optimize their operations. Unlike AI used in medical diagnostics and treatment, these algorithms are used to decide whether to cover the healthcare treatments and services that a patient’s physicians recommend.

AI in Prior Authorization

One of the most prevalent applications of AI in health insurance is prior authorization, the process that requires doctors to obtain approval from insurers before providing certain types of care. Insurers employ algorithms to assess whether the requested care is ‘medically necessary’ and merits coverage. These systems also help determine the extent of care a patient is entitled to, such as the number of days a patient can stay in the hospital after surgery.

Options for Denied Claims

When an insurer declines to pay for a treatment recommended by a doctor, patients generally have three options:

– Appeal the decision, a process that is often time-consuming and costly and typically requires expert assistance. Only about 1 in 500 claim denials is appealed.
– Opt for a different treatment that the insurer will cover.
– Pay for the recommended treatment out of pocket, which is often financially unfeasible due to high healthcare costs.

Concerns About Health Impact

From a health law and policy perspective, the impact of insurance algorithms on patient health is concerning. While insurers argue that AI allows for quick and safe decision-making, potentially reducing unnecessary treatments, evidence suggests these systems can also delay or deny essential care under the guise of cost-saving.

A Pattern of Withholding Care

The algorithms are designed to process patient records and relevant data, comparing them with current medical standards to decide on coverage. However, insurers have not disclosed how these systems work, and that opacity raises fears that algorithms may be used to withhold care for costly, long-term, or terminal conditions.
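
To make the idea concrete, here is a minimal, purely hypothetical sketch in Python of what a rule-based coverage check of this kind might look like. The procedure names, diagnosis codes, thresholds, and the three-outcome scheme are assumptions made for illustration only; no insurer has disclosed how its actual system works.

```python
# Illustrative sketch only: field names, codes, thresholds, and rules are
# hypothetical and do not describe any real insurer's (undisclosed) system.
from dataclasses import dataclass, field


@dataclass
class PriorAuthRequest:
    """A hypothetical prior-authorization request assembled from patient records."""
    procedure_code: str
    diagnosis_codes: list[str] = field(default_factory=list)
    requested_inpatient_days: int = 0
    prior_conservative_treatment: bool = False


# Hypothetical "coverage guideline" table: which diagnoses justify a procedure
# and how many inpatient days the guideline allows.
GUIDELINES = {
    "knee-replacement": {
        "qualifying_diagnoses": {"M17.11", "M17.12"},  # example osteoarthritis codes
        "max_inpatient_days": 2,
        "requires_conservative_treatment_first": True,
    },
}


def review_request(req: PriorAuthRequest) -> tuple[str, str]:
    """Return (decision, reason): 'approve', 'modify', or 'refer'.

    Anything the rules cannot approve outright is referred to a human
    reviewer rather than denied automatically.
    """
    rule = GUIDELINES.get(req.procedure_code)
    if rule is None:
        return "refer", "No guideline on file for this procedure."

    if not set(req.diagnosis_codes) & rule["qualifying_diagnoses"]:
        return "refer", "Diagnosis does not match guideline criteria."

    if rule["requires_conservative_treatment_first"] and not req.prior_conservative_treatment:
        return "refer", "Guideline expects conservative treatment to be tried first."

    if req.requested_inpatient_days > rule["max_inpatient_days"]:
        return "modify", f"Approved, but inpatient stay capped at {rule['max_inpatient_days']} days."

    return "approve", "Request meets guideline criteria."


if __name__ == "__main__":
    request = PriorAuthRequest(
        procedure_code="knee-replacement",
        diagnosis_codes=["M17.11"],
        requested_inpatient_days=4,
        prior_conservative_treatment=True,
    )
    print(review_request(request))  # ('modify', 'Approved, but inpatient stay capped at 2 days.')
```

Even a toy rule table like this embeds judgments about what counts as ‘medically necessary’ and how much care is allowed. Because real systems are undisclosed, patients and regulators cannot inspect or challenge those judgments.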

Impact on Vulnerable Groups

Research indicates that patients with chronic illnesses are more likely to face coverage denials. Vulnerable groups, including Black, Hispanic, and LGBTQ+ individuals, also experience higher rates of claim denials, exacerbating existing health disparities. Additionally, there is evidence suggesting that prior authorization may inadvertently increase healthcare costs.

Regulatory Challenges

Unlike medical AI tools, which are subject to FDA review, insurance AI tools remain largely unregulated. Insurers often cite trade secrets to avoid disclosing their algorithms’ decision-making processes. This results in a lack of external validation for their safety and efficacy.

Some states are taking steps to regulate insurance AI. For example, California has enacted a law requiring a licensed physician to oversee the use of insurance coverage algorithms. However, these regulations often fall short, leaving significant discretion to insurers.

The Role of the FDA

Experts argue that it is imperative to regulate health care coverage algorithms to bridge the gap between insurer actions and patient needs. The FDA, with its medical expertise, could play a crucial role in evaluating these algorithms before their deployment in coverage decisions.

Although the FDA currently reviews many medical AI tools, extending its oversight to include insurance algorithms may require legislative action. Congress could amend the definition of a medical device to encompass these algorithms, enabling the FDA to regulate them effectively.

A Push for National Standards

The movement toward regulating AI use in health insurance is gaining momentum, but it still lacks a coordinated push. National standards, potentially enforced by the FDA, could provide a cohesive regulatory framework and prevent a patchwork of state-level rules.

As the conversation around AI in health insurance grows, the stakes remain high, with patients’ lives on the line. Ensuring that these algorithms are used ethically and effectively is crucial for the future of healthcare.
