AI Ruling Sparks Legal Warning: Your Chatbot Conversations May Be Used in Court

US lawyers caution that exchanges with AI tools lack legal privilege and could be disclosed in criminal and civil cases.

By Staff Writer | Apr 16, 2026, 11:40 AM

As people increasingly turn to artificial intelligence for advice, some US lawyers are warning clients not to treat AI chatbots as trusted confidants when their freedom or legal liability is at stake.

These warnings have taken on added urgency after a federal judge in New York ruled earlier this year that the former chief executive of a bankrupt financial services company could not shield his AI chats from prosecutors pursuing securities fraud charges.

In the wake of the ruling, attorneys have advised that conversations with chatbots such as Anthropic’s Claude and OpenAI’s ChatGPT could be requested by prosecutors in criminal cases or by opposing parties in civil litigation.

“We are telling our clients: you should proceed with caution here,” said Alexandria Gutiérrez Swette, a lawyer at New York-based law firm Kobre & Kim.

Communications between individuals and their lawyers are almost always considered confidential under US law. However, AI chatbots are not lawyers, and legal advisers are urging clients to take steps to keep their interactions with such tools as private as possible.

In emails to clients and advisories published on their websites, more than a dozen major US law firms have outlined measures for individuals and companies to reduce the risk of AI chats ending up in court.

Similar warnings are also appearing in engagement agreements between firms and their clients. For example, New York-based firm Sher Tremonte stated in a recent client contract that sharing a lawyer’s advice or communications with a chatbot could waive the legal protection known as attorney–client privilege, which typically shields exchanges between lawyers and their clients.

The case that triggered concern involved Bradley Heppner, former chair of bankrupt financial services company GWG Holdings and founder of alternative asset firm Beneficent. Heppner was charged by federal prosecutors last November with securities and wire fraud, and pleaded not guilty.

Heppner had used Anthropic’s chatbot Claude to prepare reports about his case for his lawyers, who later argued that those AI exchanges should be withheld because they contained details relating to his defence.

Prosecutors contended they were entitled to the material Heppner created using Claude, as his defence lawyers were not directly involved and attorney–client privilege does not extend to chatbot interactions.

Voluntarily sharing information from a lawyer with any third party can jeopardise the legal protections normally afforded to such communications.

Manhattan-based US District Judge Jed Rakoff ruled in February that Heppner must hand over 31 documents generated using Claude in connection with the case.

“No attorney–client relationship exists, or could exist, between an AI user and a platform such as Claude,” Rakoff wrote.

Lawyers for Heppner did not immediately respond to requests for comment. A spokesperson for the US Attorney’s Office in Manhattan declined to comment.

Courts are already grappling with the growing use of artificial intelligence by lawyers and self-represented litigants, which has, among other issues, led to legal filings containing fictitious cases generated by AI.

Rakoff’s decision marks an early test in the AI chatbot era for fundamental legal protections governing attorney–client communications and materials prepared for litigation.

On the same day as Rakoff’s ruling, US Magistrate Judge Anthony Patti in Michigan found that a woman representing herself in a lawsuit against her former employer did not have to disclose her chats with OpenAI’s ChatGPT regarding her employment claims.

Patti treated the woman’s AI chats as part of her personal “work product” for the case, rather than as communications with a third party that could be used by her employer.

“ChatGPT and other generative AI programmes are tools, not persons,” Patti wrote in his order.

The privacy and usage terms of both OpenAI and Anthropic state that the companies may share user data with third parties. They also advise users to consult qualified professionals before relying on chatbots for legal advice.

At a February hearing in Heppner’s case, Rakoff noted that Claude “expressly provided that users have no expectation of privacy in their inputs”.

Representatives for OpenAI and Anthropic did not immediately respond to requests for comment.

Law Firms Move to Set Guardrails

Advice from lawyers has ranged from urging clients to choose AI platforms carefully to recommending specific wording for chatbot prompts.

Los Angeles-based O’Melveny & Myers and other firms said in client advisories that “closed” AI systems designed for corporate use may offer stronger safeguards for legal communications, although these remain largely untested.

Some firms said AI-assisted legal research is more likely to be protected by attorney–client privilege when conducted at a lawyer’s direction. Where a lawyer recommends using AI, individuals should state this explicitly in their prompts, according to New York-headquartered law firm Debevoise & Plimpton.

“I am conducting this research at the direction of counsel for X litigation,” the firm suggested users include.

References to AI use are also becoming more common in contracts between law firms and clients, according to a review of agreements published on a US government website.

Sher Tremonte, which frequently represents white-collar criminal defendants, stated in a March contract: “Disclosure of privileged communications to a third-party AI platform may constitute a waiver of attorney–client privilege.”

Justin Ellis of New York-based firm MoloLamken and other lawyers expect further rulings will clarify when AI chats may be used as evidence.

Until then, attorneys say a longstanding principle still applies: do not discuss your case with anyone except your lawyer — including AI.

 

For any enquiries or information, contact ask@tlr.ae or call us on +971 52 644 3004. Follow The Law Reporters on WhatsApp Channels.