New York State Courts Issue AI Use Policy for Judges and Staff, Restricting Tools to Private Models

Rules aim to prevent bias, protect confidential information, and ensure AI supports -- not replaces -- human judgment

Staff Writer | Oct 13, 2025, 9:55 AM

The New York State court system announced a new policy on the use of artificial intelligence by judges and other court staff, joining at least four other US states that have adopted similar rules over the past year.

The interim policy, which applies to all judges, justices, and non-judicial employees in the New York Unified Court System, limits the use of generative AI to approved products and mandates AI training.

New York’s policy prohibits judges and staff from entering confidential or privileged information, or documents submitted in court, into a generative AI programme that does not operate on a private model. Private models, as defined by the policy, are those under the court system’s control that do not share data with public tools.

The court did not immediately respond to a request for comment on how the new policy would be monitored.

It is “critical to ensure that material that reflects harmful bias, stereotypes, or prejudice” does not appear in court-related work, according to the policy. Judges and staff remain responsible for their output, and AI technology must be used in a manner consistent with their ethical obligations, the document states.

“While AI can enhance productivity, it must be utilised with great care,” Chief Administrative Judge Joseph Zayas said in a statement. “It is not designed to replace human judgement, discretion, or decision-making.”

States including California, Delaware, Illinois, and Arizona have adopted AI rules or policies, while others are assessing the use of generative AI within their courts.

Lawyers across the country have increasingly faced fines and other sanctions from judges for apparent misuse of AI, as fictitious case citations and other errors continue to appear in legal filings. Professional conduct rules do not bar lawyers from using AI, but they can be disciplined for failing to verify court submissions.

Judges are also under scrutiny. US Senate Judiciary Committee Chairman Chuck Grassley on Monday asked two federal judges to clarify whether AI had been used to prepare recent orders that contained “substantive errors,” highlighting growing concern over the technology’s impact on judicial decision-making.
