Australian Court Issues Rules on Generative AI Use in Law

Generative AI in law has come under fresh scrutiny as the Federal Court of Australia recently issued clear guidance to lawyers on its use. The court emphasized that any misuse of generative AI that contravenes the new rules could result in significant consequences, including adverse costs orders and professional sanctions. The move comes amid a rising number of legal filings containing errors or fictitious citations generated by AI tools.

Background: AI Errors Prompt Judicial Response

Australia, like many nations, has witnessed a surge in legal filings tainted by generative AI mistakes, including fabricated case citations and quotes. Responding to these developments, the Federal Court released a new practice note outlining the requirements and expectations for AI use in legal proceedings.

Chief Justice Debra Mortimer declared that submitting false or inaccurate information to the court is “unacceptable,” stressing the legal community’s obligation not to mislead the court or other parties. Mortimer highlighted that such actions undermine the fair and efficient administration of justice, potentially increasing legal costs and prolonging cases.

Disclosure and Verification Requirements

According to the new rules, lawyers must exercise extreme caution when applying generative AI in pleadings, written submissions, affidavits, and expert reports. The court noted that generative AI can produce fictitious cases, incorrect citations, fabricated quotes, and factual inaccuracies. As a result, legal professionals are required to:

  • Disclose the use of generative AI at the beginning of any document submitted to the court.
  • Clearly identify where and how generative AI was used in creating content.
  • Verify all legal authorities and sources cited in documents to ensure their existence and relevance.
  • Ensure any affidavits or expert reports generated with the help of AI reflect the true knowledge or recollection of the individual involved.

Additionally, when generative AI is used to summarize, analyze, or create images, videos, or sound recordings for court evidence, full disclosure is mandatory. This transparency, the court argues, is essential for preserving the integrity of legal proceedings and maintaining public trust in the legal system.

Risks of Generative AI in Law

The guidance also addresses the risks associated with inputting confidential, suppressed, or private information into generative AI tools. Chief Justice Mortimer warned that even unintentional sharing of sensitive data with AI systems could have serious consequences, adding a further layer of responsibility for legal professionals.

While the court “embraces” the potential of new technology to improve efficiency in litigation, Mortimer cautioned that generative AI must be used “appropriately and with due care.” Failure to comply with the new rules, she said, could lead to adverse costs orders and other sanctions for breaching legal and professional obligations.

Recent Cases Highlight the Problem

There have been at least 73 cases in Australia where the use of generative AI in legal documents resulted in false citations, fictitious cases, or fabricated quotes. Notably, a Victorian lawyer became the first in the country to face disciplinary proceedings for submitting documents containing AI-generated errors, resulting in the loss of his right to practise as a principal lawyer. Similar investigations are underway in other states, including Western Australia and New South Wales.

The courts have recognized a troubling pattern: if AI-generated errors go undetected, they can be cited again and propagate further false authorities in future cases. In one judgment, an appellate court discovered that a cited case did not exist, noting that it was likely “a product of hallucination by a large language model.”

Judges as AI Gatekeepers

The Chief Justice of the High Court, Stephen Gageler, has remarked that Australian judges are increasingly acting as “human filters,” screening legal arguments created with AI. He said the volume of AI-generated content reaching courtrooms had entered an “unsustainable phase,” underlining the urgency behind the Federal Court’s new rules on generative AI in law.

Conclusion: The Future of Generative AI in Law

The Federal Court’s guidance on generative AI in law signals a pivotal moment for legal professionals in Australia. While the technology offers the promise of efficiency and innovation, it must be balanced against the imperative to uphold the integrity of legal proceedings. Lawyers are now tasked with navigating these new rules, ensuring that the use of generative AI enhances rather than undermines the justice system. As courts continue to adapt to technological change, the responsible use of generative AI in law will remain a key issue for the legal profession, the judiciary, and the broader community.
