AI’s Growing Role in Legal Practice Brings New Challenges
As artificial intelligence continues to permeate various industries, the legal sector is grappling with a new and pressing issue: AI-generated fake content in court filings. One alarming incident occurred in Illinois, where Associate Judge Jeffrey Goffinet discovered a legal brief citing a completely fabricated case. Even after consulting multiple research systems and physical law books, he could not find the case anywhere.
The incident came shortly after the Illinois Supreme Court adopted a policy allowing AI use in the state’s courts, provided it adheres to ethical and legal standards. Goffinet, who helped shape the policy, emphasized that AI in law is inevitable: “We have to learn how to coexist with it.”
Widespread Concerns Over AI Hallucinations
Across the country, courts are seeing a rise in AI-generated “hallucinations”: plausible-sounding but false information presented as fact. Such errors have appeared in more than 500 U.S. court cases since early 2025, according to Damien Charlotin, a senior research fellow at HEC Paris. Charlotin noted that while awareness of AI use is growing, institutional responses remain limited.
Rabihah Butler of the Thomson Reuters Institute warned that AI’s polished output could deceive even seasoned professionals. “If you’re not paying attention and doing your due diligence, the hallucination is being treated as a factual piece of information,” she explained.
States Take Action with Policies and Legislation
To address the growing concern, at least 10 states and Washington, D.C., have issued ethical guidance on AI use in law. These opinions, while not legally binding, serve as standards of professional conduct. States such as Texas and Ohio have gone further, enacting specific rules and restrictions. For example, Ohio prohibits AI use in translating legal documents that could influence a case’s outcome.
Meanwhile, states such as California and Louisiana are introducing legislation aimed at ensuring attorneys verify AI-generated content. Louisiana’s measure requires lawyers to exercise “reasonable diligence” to confirm the authenticity of evidence, while California’s proposed law emphasizes client confidentiality and data integrity.
Balancing Efficiency with Responsibility
AI holds promise for streamlining legal work by automating administrative tasks, analyzing contracts, and even drafting court briefs. That efficiency, however, must be balanced by ethical responsibility: many state policies recommend that attorneys not bill clients for time saved by AI tools, in the interest of fairness and transparency.
Legal professionals are also encouraged to use proprietary AI platforms that safeguard sensitive data rather than publicly available consumer tools, which could compromise client confidentiality.
Education and Training: A Key Focus
Legal experts agree that education is crucial. Michael Hensley, counsel at FBT Gibbons, expressed hope that both bar associations and law schools will integrate AI training into their programs. “It’s absolutely imperative that law schools have a session on AI,” he said.
Brad Johnson, Executive Director of the Texas Center for Legal Ethics, echoed this sentiment, stating that lawyers must maintain a current understanding of AI technology to assess its risks effectively.
In a Bloomberg Law survey of more than 750 legal professionals, over half of respondents indicated their firms had already invested in AI tools, with 21% planning to do so within a year. Common uses include legal research, drafting communications, summarizing documents, and reviewing case files. Those avoiding AI cited concerns over accuracy, ethics, and data privacy.
Courts Remain Cautious
While law firms are becoming increasingly comfortable with AI, court systems remain wary. Diane Robinson of the National Center for State Courts noted that courts are still grappling with altered evidence and briefs riddled with hallucinations. “Fake evidence is nothing new,” she said, “but with AI, the ability to fabricate has become much easier.”
Robinson oversees an AI policy consortium that develops resources for ethical AI use in courts. She believes education, rather than simple warnings, is the key to preventing misuse. Charlotin made a similar point: “You cannot prevent a mistake just by telling people, ‘Don’t make a mistake.’ It’s about setting up processes to make people aware.”
