AI ‘Hallucinations’ Lead to Courtroom Scrutiny for Major Law Firms

High-profile legal firms face judicial backlash for submitting AI-generated court filings with fabricated citations.

By Nithya Shri Mohandass | May 28, 2025, 9:52 AM

The growing adoption of artificial intelligence (AI) in the legal sector is under increasing scrutiny as several high-profile U.S. law firms face consequences for filing court documents containing fabricated case citations generated by AI chatbots like ChatGPT.

Incident Overview

The latest firm in the spotlight is Butler Snow, a Mississippi-founded law firm with more than 400 attorneys, which submitted two court filings in an Alabama case containing non-existent legal citations.

The filings were prepared in defence of Jeff Dunn, the former Alabama Department of Corrections Commissioner, in a lawsuit brought by inmate Frankie Johnson, who alleged he was repeatedly attacked while incarcerated.

In a written submission, Butler Snow partner Matthew Reeves expressed regret over the error, acknowledging his "lapse in diligence and judgment" for failing to verify the authenticity of the citations generated using generative AI. The firm has since apologised to U.S. District Judge Anna Manasco, who is yet to decide whether to impose sanctions.

This incident adds to a growing list of AI-induced legal blunders, commonly referred to as "AI hallucinations"—a term used when generative AI fabricates information that appears plausible but is entirely false.

A Pattern of Missteps Among Big Law Firms

The issue is not isolated to Butler Snow. 

Just last week, a lawyer at Latham & Watkins, representing AI company Anthropic in a high-stakes copyright lawsuit involving music lyrics, apologised to a California federal judge for submitting an expert report that included an article title fabricated by AI. 

Lawyers representing the music publishers in the case have asked the court to exclude the report; a ruling on the matter is pending.

Court Filing Sanctions

Earlier this month, a court-appointed special master, retired Judge Michael Wilner, imposed $31,100 in sanctions against K&L Gates and boutique firm Ellis George.

The penalty was levied for submitting a court brief with non-existent case law, misleading the court in a dispute involving former Los Angeles County District Attorney Jackie Lacey and State Farm Insurance.

Lack of AI Training in the Legal Profession

Experts argue that these incidents underscore a broader issue in the legal profession: the lack of adequate training in AI technologies.

Indeed, professional conduct rules and judicial ethics require attorneys to thoroughly vet legal citations and arguments, whether produced by humans or machines. Violating these rules can result in sanctions, reputational damage, and potential malpractice claims.

Expert Commentary

Sunil Ambalavelil, Chairman of Kaden Boriss and a seasoned legal advisor, emphasises that the growing penetration of AI into the legal profession is inevitable—but not without risk.

  • “Legal professionals must continue to exercise independent judgment and uphold the integrity of the legal process.”

  • “Uncritical reliance on generative AI tools, particularly without verification, can lead to severe ethical and procedural consequences.”

Legal Industry Grapples with Generative AI

Since the rise of generative AI tools like OpenAI’s ChatGPT, many law firms have integrated them into workflows for research, drafting, and case preparation.

A 2024 American Bar Association (ABA) survey revealed that 35% of law firms reported using generative AI tools in some capacity. However, only 12% provided formal training on their ethical use.

In response to the rising misuse, various courts across the U.S. have begun issuing standing orders requiring attorneys to disclose whether AI was used in preparing legal documents and, if so, how the outputs were verified.

A Cautionary Tale for the Legal Profession

These high-profile incidents serve as a warning to the legal community that while AI tools offer efficiency, they must be used responsibly. Courts are making it clear that automation is not an excuse for legal malpractice.

For now, judges, bar associations, and legal educators are calling for clearer guidelines, mandatory training, and transparency standards to ensure that AI enhances rather than undermines the legal process.

Key Takeaways

  • Butler Snow is under judicial scrutiny after filing court briefs containing fabricated, AI-generated citations.

  • Other major firms, including Latham & Watkins and K&L Gates, have faced similar issues involving AI hallucinations.

  • A $31,100 sanction was recently imposed for misleading a court with non-existent case law.

  • Experts point to a lack of AI education in law as a root cause of these mistakes.

  • Courts and bar associations are now demanding AI transparency and verification from legal professionals.