A Palo Alto, California, lawyer with nearly a half-century of experience admitted to an Oakland federal judge this summer that legal cases he referenced in an important court filing didn’t actually exist and appeared to be products of artificial intelligence “hallucinations.”
Jack Russo, in a court filing, described the apparent AI fabrications as a “first-time situation” for him and added, “I am quite embarrassed about it.”
A specialist in computer law, Russo found himself in the rapidly growing company of lawyers publicly shamed as wildly popular but error-prone artificial intelligence technology like ChatGPT collides with the rigid rules of legal procedure.
Hallucinations—when AI produces inaccurate or nonsensical information—have posed an ongoing problem in the generative AI that has birthed a Silicon Valley frenzy since San Francisco’s OpenAI released its ChatGPT bot in late 2022.
In the legal arena, AI-generated errors are drawing heightened scrutiny as lawyers flock to the technology. Irate judges are making referrals to disciplinary authorities and, in dozens of U.S. cases since 2023, levying financial penalties of up to $31,000, including a California-record fine of $10,000 last month in a Southern California case.
Chatbots respond to users’ prompts by drawing on vast troves of data, using pattern analysis and sophisticated guesswork to produce results. Errors can occur for many reasons, including insufficient or flawed training data or incorrect assumptions by the AI. The problem affects not just lawyers but ordinary people seeking information, as when Google’s AI overviews last year told users to eat rocks and to add glue to pizza sauce to keep the cheese from sliding off.
Russo told Judge Jeffrey White he took full responsibility for not ensuring the filing was factual, but said a long recovery from COVID-19 at an age beyond 70 led him to delegate tasks to support staff without “adequate supervision protocols” in place.
“No sympathy here,” internet law professor Eric Goldman of Santa Clara University said. “Every lawyer can tell a sob story, but I’m not falling for it. We have rules that require lawyers to double-check what they file.”
The judge wrote in a court order last month that Russo’s AI-dreamed fabrications were a first for him, too. Russo broke a federal court rule by failing to adequately check his motion to throw out a contract dispute case, White wrote. The court, the judge sniped, “has been required to divert its attention from the merits of this and other cases to address this issue.”
White issued a preliminary order requiring Russo to pay some of the opposing side’s legal fees. Russo told White his firm, Computerlaw Group, had “taken steps to fix and prevent a reoccurrence.” Russo declined to answer questions from this news organization.
As recently as mid-2023, it was a novelty to find a lawyer facing a reprimand for submitting court filings that referred to nonexistent cases conjured up by artificial intelligence. Now such incidents arise almost daily, and even judges have been implicated, according to a database compiled by Damien Charlotin, a senior fellow at French business school HEC Paris who tracks worldwide legal filings containing AI hallucinations.
“I think the acceleration is still ongoing,” Charlotin said.
Charlotin said his database includes “a surprising number” of lawyers who are sloppy, reckless or “plain bad.”
In May, San Francisco lawyer Ivana Dukanovic admitted in U.S. District Court in San Jose to an “embarrassing and unintentional mistake” by herself and others at the law firm Latham & Watkins.
While representing San Francisco AI giant Anthropic in a music copyright case, they submitted a filing with hallucinated material, Dukanovic wrote. Dukanovic—whose company bio lists “artificial intelligence” as one of her areas of legal practice—blamed the creation of the false information on a particular chatbot: Claude.ai, the flagship product of her client Anthropic.
Judge Susan van Keulen ordered part of the filing removed from the court record. Dukanovic, who, along with her firm, appears to have dodged sanctions, did not respond to requests for comment.
Charlotin has found 113 U.S. cases involving lawyers submitting filings with hallucinated material, mostly legal-case citations, that have been the subject of court decisions since mid-2023. He believes many court submissions with AI fabrications are never caught, potentially affecting case outcomes.
Court decisions can have “life-changing consequences,” including in matters involving child custody or disability claims, law professor Goldman said.
“The stakes in some cases are so high, and if someone is distorting the judge’s decision-making, the system breaks down,” Goldman said.
Still, AI can be a useful tool for lawyers, finding information people might miss, and helping to prepare documents, he said. “If people use AI wisely, it helps them do a better job,” Goldman said. “That’s pushing everyone to adopt it.”
Survey results released in April by the American Bar Association, the nation’s largest lawyers group, found that AI use by law firms nearly tripled last year, rising to 30% of responding law offices from 11% in 2023, and that ChatGPT was the “clear leader across firms of every size.”
Fines may be the least of a lawyer’s worries, Goldman said. A judge could refer an attorney to a licensing body for discipline, dismiss a case, reject a key filing, or view everything the lawyer does in the case with skepticism. A client could sue for malpractice. And orders to cover the other side’s legal fees can reach six figures.
Charlotin’s database shows judges slapping many lawyers with warnings or referrals to disciplinary authorities, and sometimes purging all or part of a filing from the court record, or ordering payment of the opposition’s fees. Last year, a federal appeals court in California threw out an appeal it said was “replete with misrepresentations and fabricated case law,” including “two cases that do not appear to exist.”
Charlotin expects his database to keep swelling.
“I don’t really see it decrease on the expected lines of ‘surely everyone should know by now,'” Charlotin said.
© 2025 MediaNews Group, Inc. Distributed by Tribune Content Agency, LLC.
Citation: Chatbot dreams generate AI nightmares for Bay Area lawyers (2025, October 8), retrieved 8 October 2025 from https://techxplore.com/news/2025-10-chatbot-generate-ai-nightmares-bay.html