
AMA Calls for Greater Transparency in AI for Medical Imaging


The American Medical Association (AMA) has taken a significant step towards ensuring transparency in the use of artificial intelligence (AI) across radiology and other medical specialties. At its recent annual meeting, the AMA passed a resolution calling for greater clarity around how AI models reach their conclusions. The move is aimed at strengthening trust among physicians and the public.

Push for Explainable AI Tools

The AMA is advocating for the development of ‘explainable AI tools’ that incorporate safety and efficiency data with detailed explanations of their outputs. This initiative is part of a broader effort to ensure that physicians can rely on these tools with confidence when making critical healthcare decisions.

Oversight and Regulation

The resolution also calls for enhanced oversight and regulation of augmented intelligence (the AMA’s preferred term for AI) and machine learning algorithms used in clinical settings. According to a June 11 announcement, the AMA suggests that a third party, such as a medical association like itself or federal regulators, should determine whether AI algorithms are explainable, rather than leaving that judgment to vendors. This approach is intended to mitigate potential bias from vendors evaluating their own products.

Statement from AMA Leadership

Dr. Alexander Ding, a radiologist and AMA Board Member, emphasized the critical need for transparency in clinical AI tools. “With the proliferation of augmented intelligence tools in clinical care, we must push for greater transparency and oversight so physicians can feel more confident that the clinical tools they use are safe, based on sound science, and can be discussed appropriately with their patients,” he stated. “The need for explainable AI tools in medicine is clear, as these decisions can have life or death consequences. The AMA will continue to identify opportunities where the physician voice can be used to encourage the development of safe, responsible and impactful tools used in patient care.”

Impact on Clinical Decision-Making

The AMA’s Council on Science and Public Health report, which served as the basis for this policy, highlighted the risks of unexplainable clinical AI. When an AI system’s reasoning is opaque, the expertise of radiologists and other physicians is effectively sidelined, making it difficult to assess the accuracy of the system’s outputs. This could put healthcare professionals in the difficult position of acting on potentially flawed information they cannot verify.

Intellectual Property vs. Transparency

A key aspect of the new policy is the balance between protecting intellectual property and ensuring transparency in AI applications. The AMA asserts that while intellectual property deserves protection, concerns over infringement should not outweigh the need for explainability in medical AI tools. This stance underscores the primacy of clear, open AI processes in safeguarding patient care.

Broader Implications

The resolution by the AMA reflects a growing recognition of the transformative potential of AI in healthcare, coupled with the need for responsible implementation. As AI technologies continue to evolve, the call for transparency and oversight becomes increasingly critical to ensure patient safety and trust in medical innovations.

Call to Action

The AMA’s initiative is a timely reminder of the importance of transparency and accountability in AI-driven healthcare solutions. By advocating for explainable AI, the association aims to empower physicians with the tools they need to make informed, patient-centered decisions.

For further updates and insights on AI in healthcare, follow us at aitechtrend.com.

Note: This article is inspired by content from https://radiologybusiness.com/topics/artificial-intelligence/life-or-death-consequences-ama-pushes-greater-transparency-imaging-ai. It has been rephrased for originality. Images are credited to the original source.