
Two US Federal Judges Admit AI Errors by Staff Led to Faulty Court Rulings
US judges acknowledge staff used AI in drafting decisions, prompting Senate scrutiny and calls for stricter oversight.
Two federal judges have admitted that the use of artificial intelligence by their staff contributed to errors in recent court rulings, following an inquiry by US Senate Judiciary Committee Chairman Chuck Grassley.
Letters released by Grassley’s office on Thursday show that US District Judge Henry Wingate of Mississippi and US District Judge Julien Xavier Neals of New Jersey confirmed that the decisions in unrelated cases bypassed their chambers’ usual review processes before being issued. Both judges said they have since introduced measures to strengthen the review of rulings.
In his letter, Neals, based in Newark, stated that a draft decision in a securities lawsuit “was released in error – human error – and withdrawn as soon as it was brought to the attention of my chambers.” He added that a law school intern had used OpenAI’s ChatGPT for research without authorisation or disclosure. Neals said his chambers have since established a written AI policy and strengthened their review procedures. Reuters previously reported, citing a person familiar with the matter, that AI-generated research had been cited in the decision.
Wingate said in his letter that a law clerk in his court in Jackson had used Perplexity “as a foundational drafting assistant to synthesise publicly available information on the docket.” He described the posting of the draft decision as “a lapse in human oversight.” Wingate subsequently removed and replaced the original order in a civil rights case; he had previously declined to comment beyond citing “clerical errors.”
Neither judge immediately responded to requests for comment sent to their court staff.
Grassley had requested clarification on whether AI had been used in the rulings after lawyers in the cases highlighted factual inaccuracies and other significant errors. In a statement on Thursday, he praised the judges for acknowledging the mistakes and urged the judiciary to implement stronger AI guidelines.
“Each federal judge, and the judiciary as an institution, has an obligation to ensure the use of generative AI does not violate litigants’ rights or prevent fair treatment under the law,” Grassley said.
Lawyers have also faced increasing scrutiny from judges across the US for apparent misuse of AI. In recent years, courts have imposed fines and other sanctions in numerous cases where lawyers failed to properly verify AI-generated content.