New York Judge Sanctions Lawyer Over AI-Generated Filings

A New York judge has sanctioned an attorney after he filed AI-generated court documents that included fabricated quotes and citations.

New York Supreme Court judge Joel Cohen this month granted a motion to sanction defence attorney Michael Fourte for the erroneous filings in a case involving a disputed loan.

Cohen noted that Fourte’s brief explaining his AI usage had also been generated using an AI tool.

Hallucinations

“In other words, counsel relied upon unvetted AI — in his telling, via inadequately supervised colleagues — to defend his use of unvetted AI,” Cohen said in a decision filed earlier this month, as 404 Media earlier reported.

“This case adds yet another unfortunate chapter to the story of artificial intelligence misuse in the legal profession,” Cohen wrote.

The plaintiffs in the case had moved to sanction Fourte after finding inaccurate or fabricated citations and quotations in his filings.

Fourte’s document submitted in opposition to the sanctions motion contained more than double the erroneous or entirely fabricated material found in the earlier filings, Cohen wrote.

Fourte initially declined to disclose whether he had used AI, and in oral arguments told the court that the cases cited “are not fabricated at all”, the judge said.

Fourte later admitted that AI had been used, but blamed the errors on colleagues who he claimed had failed to vet the materials.

In some previous cases, lawyers have submitted AI-generated materials without being aware that responses from generative AI tools routinely include fabricated material, known as hallucinations.

A lawyer sanctioned in one such case in 2023 said he had believed ChatGPT was a type of search engine.

Professional tools

In some more recent cases, however, lawyers used AI more deliberately, but failed to submit outputs to a thorough vetting process.

In May, a California judge imposed $31,000 (£23,064) in sanctions against two law firms after receiving a supplemental brief that included “numerous false, inaccurate, and misleading legal citations and quotations”.

Judge Michael Wilner said in his judgement that “no reasonably competent attorney should outsource research and writing” to artificial intelligence tools.

“I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn’t exist,” Judge Wilner wrote.

The sanctioned lawyers gave sworn statements admitting the offending documents had been generated with Google’s Gemini as well as specialised AI tools in the Westlaw Precision with CoCounsel offering.

Such cases point to unresolved accuracy issues with generative AI, even in specialised professional tools. Similar problems have prompted complaints about inaccurate AI-generated online search summaries from Google and false AI-generated headlines on Apple iPhones.

In a related case, a group of residents of a New York City public housing project angered a judge after submitting an AI-generated lawsuit, containing fictional legal citations, that sought to prevent the demolition of the Fulton and Elliott-Chelsea houses.

AI boom

The residents, who had been acting as their own lawyers, used xAI’s Grok chatbot to generate the documents, according to Thomas Hillgardner, the attorney the residents hired following the fiasco.

“They were unfamiliar with the dangers,” Hillgardner said, according to a report by local media outlet Crain’s New York Business.

AI companies have reported surging demand for AI tools over the past two years, but industry analysts have warned that the use of such tools for business purposes remains unproven.

AI agents have been heavily promoted by tech companies, but Gartner predicted in June that 40 percent of AI agent projects are likely to be cancelled within two years due to rising costs and poor return on investment, adding that use cases for agentic AI remain immature.

Much agentic AI is driven by hype, with the vast majority of so-called AI agents actually being repackaged versions of existing products such as chatbots, the analysts said.
