AI is breakin' the law
Judges are fed up with cognitive automation
This week I want to tell a short story, a very minor legal drama. It’s a little bit sad, a little bit funny, and in the end, just a little bit gratifying.
I know nothing about Michael Jarrus or his mother, Linda Jarrous (that’s not a typo), other than that in April they filed a lawsuit against a bevy of federal agencies and state officials in Michigan, for reasons largely unknown. As will prove to be important in this tale, they have not hired attorneys to represent them, but instead are proceeding “pro se,” acting on their own behalf. From what I can infer, Michael Jarrus is seeking to have his right to own a firearm reinstated. This is the sad part, because it suggests that he likely has mental health issues, as that’s one of the few restrictions on gun ownership still in place in the US (although the NRA is working hard to remove even that one). Somewhere out there, a real person is struggling. My heart aches for him.
Returning to the legal situation, the Jarrus family filed their lawsuit in federal district court in Michigan, and administration of the case was assigned to federal magistrate judge Anthony Patti. Unless you’re an attorney practicing in federal court, you probably are unaware of the noble group of quiet heroes called “federal magistrates” who help to ensure our legal system functions (relatively) smoothly. These are officials who handle the portion of legal proceedings that the presiding district court judges would rather not deal with. Magistrates handle the shit work, essentially.
After filing their initial complaint, the plaintiffs proceeded to unleash a fusillade of motions against the defendants, by my count at least 150 separate pleadings. Annoying! And not uncommon with pro se litigants, who often learn just enough to be dangerous and think they’re being clever by burying their opponents in paperwork.
But they are not. In reality, there is a long list of requirements they must abide by in pursuing their claims, and they can’t abuse the process. These requirements are called “the Federal Rules of Civil Procedure,” which include Rule 11, a rule that—to put it colloquially—requires that litigants not make shit up:
(b) Representations to the Court. By presenting to the court a pleading, written motion, or other paper—whether by signing, filing, submitting, or later advocating it—an attorney or unrepresented party certifies that to the best of the person’s knowledge, information, and belief, formed after an inquiry reasonable under the circumstances:
(2) the claims, defenses, and other legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law;
So. Magistrate Judge Patti was tasked with dealing with this legal “deluge” of motions (his words) from the plaintiffs, and in November he tossed almost all of them out. In doing so, Patti observed that many of them bore the hallmarks of being AI-generated. Nonetheless, he permitted the plaintiffs to amend their complaint to continue pursuing their claims, but with a stern warning that they must follow all Federal Rules in doing so. And he explicitly warned the plaintiffs to be wary of using AI, as they would be held accountable under Rule 11 for the accuracy of their representations of the law:
It appears to the Court that Plaintiffs may be utilizing Artificial Intelligence (“AI”) technology. That is a bad idea in one very real and important sense: Plaintiffs will be held responsible for the content of their filings under Fed. R. Civ. P. 11, including the accuracy of all facts and law, regardless of whether they used generative artificial intelligence to assist them. The Court will not accept AI as an excuse for inaccuracies and will hold Plaintiffs (and all counsel) responsible for whatever filings appear with their signatures.
While the Court will not directly prohibit Plaintiffs from utilizing generative AI, the Court STRONGLY suggests that Plaintiffs do not attempt to utilize it and should certainly never rely upon it. Indeed, this Court recently sanctioned a pro se litigant $200 per misrepresentation or AI-generated “phantom” or “hallucinated” citation. Plaintiffs are hereby DIRECTED to carefully read Fed. R. Civ. P. 11 in its entirety and to be on notice that any misrepresentation or AI-generated “phantom” citation will be sanctionable, per violation
Bad idea! STRONGLY suggest you don’t do it! Rarely does one get such explicit warnings from a legal authority. It’s almost as if Magistrate Patti could feel the slow-moving trainwreck headed his way.
Because sure enough, after receiving this order, the Jarrus family promptly ignored it entirely. That is to say, they immediately filed a slew of objections and—yes, you guessed it—they once again relied on ChatGPT to generate their briefs.
This did not go well.
Predictably, ChatGPT hallucinated frequently, largely by citing real cases but then inventing fake propositions of law that they purportedly stood for. And just to underscore the stupidity of it all, the plaintiffs also cut and pasted directly from the LLM output without bothering to delete the telltale signs of their having used it, leaving in statements such as, “Here’s the revised Paragraph 2,” sprinkled throughout their briefs.
Bad idea! It was STRONGLY suggested they not do this!
It’s at this point that I imagine Magistrate Judge Patti looking out the window of his small office in the Eastern District of Michigan, pondering the ash-gray winter sky, wondering how it’d all come to this. In my mind’s eye, I see him wandering down the hall to see Judge F. Kay Behm, the presiding judge over this case, and explaining—a look of sad desperation in his eyes—his complete inability to get these litigants to stop using AI despite having explicitly warned them of the consequences.
Well, who knows. But what definitely did happen is that Judge Behm sanctioned the plaintiffs for violating Rule 11 through their use of ChatGPT, and she didn’t mince words. In her published decision, which means it can be cited by other courts as precedent, she explained why litigants cannot outsource their thinking to these mindless tools, and drew a clear contrast between human thinking and LLM output:
Although Chat GPT generated “holdings” that looked like they could plausibly have appeared in the cited cases, in fact it overstated their holdings to a significant degree. And while a litigant might get away with similar overstatements because they could, perhaps, reason their way to showing how a case’s stated holding might extend to novel situations, an LLM does not reason in the way a litigant must. To put it in a slightly different way, LLMs do not perform the metacognitive processes that are necessary to comply with Rule 11. LLMs are tools that “emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning.” [Citation] When an LLM overstates a holding of a case, it is not because it made a mistake when logically working through how that case might represent a “nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law;” it is just piecing together a plausible-looking sentence – one whose content may or may not be true.
Judge Behm then fined the plaintiffs $240, the cost of an annual subscription to ChatGPT Plus. Oh, and see that [Citation] in there? That’s her quoting my essay in The Verge last week. After 18 months of advocacy pushing back on AI hype, there’s now precedential federal caselaw distinguishing human reasoning from LLM output, and yes, I’m taking a victory lap.
On the whole, this case is a tiny one. But it adds to the mounting number of legal decisions wherein exasperated judges demand that litigants and their attorneys stop using AI to cognitively automate the legal process—I’ve pasted some quotes from other cases below, for those curious. I suspect their frustration is born in part of the way that AI degrades something fundamental to the practice of law. The most intellectually stimulating aspect of being a lawyer is reasoning: thinking through various principles, weighing their relevance, imagining consequences, engaging in metacognitive processes, and producing a new piece of knowledge in the form of a decision. Indeed, the entire common-law system is a form of distributed cognition, one that AI counterfeits at scale. Large language models erode the very thing that defines the legal profession as a profession.
And all of that is just as true, if not more so, for education.
Judges annoyed with AI, a legal primer
“[Law firms] abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.” Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 448 (S.D.N.Y. 2023)
“Here, Appellants filed an opening brief replete with misrepresentations and fabricated case law…. Appellants’ brief includes only a handful of accurate citations, almost all of which were of little use to this Court because they were not accompanied by coherent explanations of how they supported Appellants’ claims. We are therefore compelled to strike Appellants’ brief and dismiss the appeal.” Grant v. City of Long Beach, 96 F.4th 1255, 1256-57 (9th Cir. 2024)
“The Court’s suspicion that [use of an LLM] has occurred here is heightened by the fact that, while the Court has been unable to discover a case called Dino v. Pelliccioni, it has discovered an Italian soccer player named Dino Pelliccioni. That is exactly the sort of error to which large-language-model artificial intelligence software is prone.” Rasmussen v. Rasmussen, Sonoma County Superior Court, Case No. 24CVC2293
“Plaintiff’s counsel repeatedly violated Rule 11. His citation to cases with dubious relevance, which appears to stem from undiscerning reliance on AI, is concerning.” Flowz Digital v. Dalal, USDC, Central District, Case No. 2:25-cv-00709
“I conclude that the lawyers involved in filing the Original and Revised Briefs collectively acted in a manner that was tantamount to bad faith. The initial, undisclosed use of AI products to generate the first draft of the brief was flat-out wrong. Even with recent advances, no reasonably competent attorney should out-source research and writing to this technology – particularly without any attempt to verify the accuracy of that material. And sending that material to other lawyers without disclosing its sketchy AI origins realistically put those professionals in harm’s way.” Lacey v. State Farm, USDC, Central District, 5/5/25, Case No. CV 24-5205 FMO
These cases were compiled by the Hon. Vedica Puri (retired), a former judge in San Francisco now with ADR Services, a mediation and legal services provider.