DOJ Attorney Removed After Fake Citations Spark AI Warning

DOJ Attorney in Raleigh Faces Backlash for Fake Citations

In a case that has drawn significant attention to AI legal risks, a U.S. Department of Justice attorney in Raleigh recently lost his position after submitting a legal brief with fabricated quotations and legal citations. The incident, which unfolded in a federal courtroom, has sparked renewed warnings among legal professionals about the dangers of relying on artificial intelligence for legal research and filings.

The controversy began when Rudy Renfer, a DOJ lawyer, admitted to filing a legal brief with “incorrect citations” in an ongoing case. Magistrate Judge Robert Numbers, overseeing the case, demanded a thorough explanation and considered potential sanctions. According to court documents, Renfer explained that the errors resulted from the “inadvertent filing of an unfinalized draft document.” Nevertheless, Judge Numbers expressed deep concerns about the accuracy of certain quotations and representations in the filings, as well as the sufficiency of Renfer’s explanation.

Renfer was promptly removed from the case. Attempts to reach him for comment were unsuccessful at the time of reporting.

U.S. Attorney Issues AI Warning to DOJ Staff

The day after Renfer’s dismissal, Ellis Boyle, the U.S. Attorney for the Eastern District of North Carolina, issued a stern warning to his staff regarding the use of artificial intelligence in legal proceedings. In an internal memo that quickly gained the attention of legal professionals, Boyle emphasized that “AI may hallucinate, but that does not excuse you from your obligations.” He urged attorneys to personally verify every citation and legal proposition in their briefs, cautioning that reliance on AI-generated content without proper fact-checking could have serious consequences.

Boyle’s memo continued: “We cannot misquote or make up any quotes or inaccurately pinpoint cite quotes to a case or make up a fake case. This is federal Big Boy court. Act out of fear, or respect, or reverence, or some combination thereof, accordingly.” His message was clear: the legal risks of AI should not be underestimated, and the burden of accuracy always rests with the attorney.

This incident is not isolated. Across the United States, attorneys have submitted briefs generated or assisted by artificial intelligence, only to discover later that the technology had fabricated case law or misrepresented existing legal precedents. These so-called “AI hallucinations” can not only undermine the integrity of court proceedings but also expose attorneys to professional discipline or sanctions.

The growing adoption of AI tools in the legal sector has brought efficiency gains but also fresh challenges. While AI can help with research and drafting, legal experts warn that it is susceptible to generating plausible-sounding but entirely fictitious information. As seen in the Raleigh case, the consequences for failing to verify AI-generated content can be severe.

Heightened attention to these risks has prompted many law firms and government agencies to revisit their policies on technology use. Boyle, who was appointed in 2025 by U.S. Attorney General Pam Bondi, made it clear that his office has a zero-tolerance policy for using AI to draft briefs without thorough human review. In a statement, he reiterated, “When we became aware of this situation, we drafted and circulated an office-wide memo that clarified and reiterated the policy that you’re not allowed to use AI to draft a brief that’s filed with a court.”

Boyle’s district, which stretches from Raleigh to the North Carolina coast, is now leading by example in addressing the potential pitfalls of AI in legal practice. Legal professionals are being reminded that, despite advances in technology, the core responsibilities of legal research, citation, and ethical practice cannot be outsourced to machines.

As the legal field continues to integrate digital tools, experts urge vigilance and transparency when using AI. Scrutiny of AI in legal practice will likely intensify as more stories like the Raleigh incident come to light. Law schools, bar associations, and federal agencies are expected to provide clearer guidance and training on the ethical use of AI in legal settings.

For now, the message from prosecutors like Boyle is unequivocal: technology can assist, but it cannot replace the critical judgment and ethical standards required in the practice of law. Every attorney bears the responsibility to ensure that the work submitted to courts is both accurate and authentic, regardless of the tools used in its preparation.

