It was clear that things had gone off the rails when a run-of-the-mill dispute with a homeowner's association spiraled so far that the plaintiffs started invoking the Racketeer Influenced and Corrupt Organizations (RICO) Act — a 1970 federal law meant for prosecuting organized crime groups.
The drama started back in early 2025. A married couple in Florida was late on HOA fees totaling a few hundred dollars. Rather than dispute the fees directly, they took the unusual step of filing a lawsuit against the association, arguing that a state statute rendered the collection of the fees illegal. They opted to represent themselves as they took their fight to the court — "pro se," in attorney lingo — with the help of generative AI, which they used to draft and file an increasingly bizarre barrage of legal paperwork.
The couple were "just swinging a sword at anything [they] could possibly hit," a lawyer involved with the case told Futurism. "Initially, nobody realized how unhinged things would get."
The husband-and-wife duo was using AI to churn out virtually unlimited new accusations and legalese, resulting in a dizzying flood of AI-generated court documents. And as hundreds of pages of AI-generated material piled up, the attorney we spoke to recalled, the plaintiffs' claims grew increasingly wild. Within weeks of filing the suit, what had started as a minor dispute devolved into bombastic claims that read less like housing law and more like a screenplay: the HOA and the lawyers were, together, involved in a sprawling RICO conspiracy to defraud homeowners, the plaintiffs alleged, and needed to be held to account by federal investigators.
"It was just draining," recalled the lawyer, who spoke on the condition of anonymity because she didn't want to risk provoking the couple further. "We were just getting hammered. Every day."
Eventually, the couple started filing AI-generated bar complaints against individual lawyers involved with the case, and claiming to have alerted the FBI to their supposed crimes.
"It evolved into this thing where every day it'd be five, ten, 12 different filings, all sort of doing the same thing, every day, saying, 'I want my judgment today. I want sanctions against all the lawyers. All the lawyers should be disbarred. All of them are committing fraud. There are RICO violations,'" said the lawyer.
Eventually, one of the firms involved with the suit requested that the plaintiffs share their AI prompts with the court. They refused, responding that they were in the process of building a "proprietary" AI framework designed to interpret and analyze Florida law, which they planned to turn into a business.
The allegations were eventually dismissed with prejudice, meaning that the plaintiffs are barred from bringing the same claims again — a sanction handed down in cases judges find to be frivolous or abusive of the court system. The fees the couple failed to pay in the first place, meanwhile, totaled roughly what they paid to file the initial complaint.
***
Chaotic provocateurs who pepper courts with frivolous legal actions, often motivated by animus or a poor understanding of the law, are nothing new.
But the use — and misuse — of AI chatbots like OpenAI's ChatGPT and Google's Gemini by litigants, particularly non-lawyers who choose to represent themselves in legal fights, can pour gas on the flames of this old problem. Easy-to-access chatbots offer a powerful new way to generate a firehose of legal paperwork that looks, at least at first glance, legitimate — and it's falling on judges, clerks, and attorneys to sort out the resulting deluge, in work that lawyers say is intensely time-consuming and expensive.
"It triples the amount of paperwork that I have to go through," said Sophia Ficarrotta, an attorney in Washington state who represents victims of intimate partner violence and often encounters defendants who are using AI to make their way through the court system. "It's really tedious. And then when I go through all those filings, I have to bill my clients for it."
"It's really difficult," she added, "to spend time looking through something that I know that my clients shouldn't have to pay for."
While some people are reportedly finding success in legal standoffs with the help of AI, lawyers like Ficarrotta told us that chatbots are also facilitating chaotic and burdensome legal conflicts across the US as pro se parties like the Florida couple use them to file oceans of documents in support of flawed or groundless claims, supercharging the impact of flimsy cases and wreaking havoc on already slow-moving courts.
We spoke to lawyers and paralegals — most of whom opted not to be identified by name, citing concern over client privacy or fear of inciting further legal action from eager litigants — who work in a broad selection of specialties. We also reviewed large numbers of AI-generated court documents they pointed us to, filed by self-represented plaintiffs in local, state, and federal court.
The AI cases were all over the map. Some dealt with payment, collections, and foreclosures. Others were family law matters including custody disagreements and divorce settlements. There were disputes between individuals and small local businesses, as well as individuals against one another. Some of the more immediately fantastical allegations were brought against the government, large corporations, and public figures like billionaires.
The phenomenon is even exasperating institutions. In a lawsuit filed earlier this month against OpenAI, the insurer Nippon Life Insurance alleged that ChatGPT's legal counsel led a woman to fire her human lawyer and, opting instead to represent herself, launch a dubious new lawsuit against the insurer over a settled disability claim that had originally been dismissed.
In the complaint, which accuses ChatGPT of acting as an unlicensed lawyer, Nippon claims that it incurred a staggering $300,000 in legal fees as it defended against the wave of frivolous AI-generated content being filed by the woman. By stoking groundless legal theories, the complaint argues, ChatGPT "aided and abetted" the woman's "abuse of the judicial process."
In response to Reuters, OpenAI said that Nippon's lawsuit "lacks any merit whatsoever" and pointed to its terms of service, which forbid using ChatGPT-generated output "for any purpose that could have a legal or material impact" on another person. (We reached out to a legal representative for Nippon, but didn't hear back.)
The phenomenon is the latest illustration of AI's narrative gap. While optimists say it can help amateurs take on intimidating institutions without the help of expensive subject matter experts like attorneys, that's not always a good thing. It can also embolden cranks and agitators — or, lawyers we spoke to suggested, people dealing with mental health or substance abuse issues — to embark on quixotic legal quests that waste time and money, or unnecessarily cause what should be standard proceedings to drag on for weeks or even months as people leaning on the tech churn out a flood of bad arguments.
It's one thing to scheme about the law with a sycophantic chatbot; it's another to craft and present a sound argument. And a good human lawyer's job, legal professionals we spoke to emphasized, isn't just to spam the court with documents on your behalf. It's also to tell you when you're wrong.
"A lot of what you pay the lawyer for is lawyers' judgment," said the lawyer from the HOA case. "Knowing when to push, when not to push, what's going to work with the judge, what's not going to work with the judge, what's a feasible argument, what's not."
"If you're just mindlessly firing all these things that are AI-generated," she added, "somehow that judgment just goes out the window."
***
Legal professionals we spoke to emphasized sheer volume as the most frustrating new element of chatbot-era pro se cases. Whereas self-represented litigants pre-AI might've submitted initial complaints as brief as one or two pages, these filings sometimes now total hundreds of pages, which is unusual in conventional practice.
"These filings that are coming from the generative AI are long, very long," said one lawyer. Another said one self-represented plaintiff she encountered filed a nearly 600-page complaint.
It doesn't stop there. Once a complaint is filed, attorneys told us, litigants using AI often proceed to file a steady drip of new motions and other documents, prompting the professionals on the other side of the case to pour a huge number of hours into reading and responding to the outflow of material.
"Some file as many as four [motions] a week, and all need to be responded to," lamented another lawyer. "It takes countless hours each week just to respond to them."
Compounding that volume, legal workers say, is a disorienting veneer of legibility that AI can bring to flawed or baseless arguments. AI-generated court documents we reviewed showed self-represented litigants filing complex-looking theories packed with confident legal jargon. But often, these documents are the product of a process that could be referred to as cogency-washing: chatbots taking incomplete, biased, or even delusional claims and organizing them into authoritative nonsense.
"The burden shifts over to who you're suing to have to rebut everything, and actually litigate against you, and do an enormous amount of work," observed another attorney, who works for a local government on the West Coast. His office, he told us, has been struggling to keep up with a wave of AI-generated legal actions and correspondence from locals.
All this time adds up. One lawyer relayed that a dispute that historically would've cost a client about $2,000 wound up costing over $20,000 as the opposing party filed AI-generated motion after AI-generated motion; another said that a similar case pushed what should've been about $5,000 in client fees to over $70,000.
"There's no easy or inexpensive way to get a vexatious litigant out of a case," said one lawyer. "Before, such a person would need to spend the money on an attorney or find an attorney that is willing to work on an arrangement that does not require money up front (hard to find). That is no longer the case."
In some cases, lawyers told us, they've been able to successfully petition a judge to make AI-happy pro se litigants reimburse their clients for a portion of those fees. But in other instances, state statutes can prevent attorneys from making that kind of request, meaning that the financial burden of AI-exacerbated court battles may still fall on their clients.
Several lawyers noted the persistence of hallucinations in these cases, with plaintiffs citing AI-fabricated laws that don't actually exist, or relying on legal advice so badly mangled that it completely misleads them.
"It's a civil case where the rules of evidence don't apply, but their AI is telling them that they should be citing criminal cases and using criminal pattern jury instructions," said Ficarrotta, the Washington lawyer. "But my case doesn't have a jury. We're not going to trial. It's just a hearing."
Even when an AI-contrived legal argument proves to be unserious on its merits, the consequences can be anything but. In addition to ending up saddled with the opposing counsel's attorney fees, many of these AI-focused petitioners have faced court sanctions including expensive fines and harsh dismissals from fed-up judges. Others have wound up being labeled vexatious litigants, which generally means that they now need to request permission to file future lawsuits.
The time and energy these cases eat up go beyond that of individual lawyers. As Ficarrotta noted, the commissioner or the judge hearing the case "has to go and spend time reading all of it, and reviewing all of it, before they even come out and sit on the bench."
In one case she was involved in, Ficarrotta said, that meant the commissioner had to spend her night looking through "500 pages of AI-generated pleadings, none of which were relevant." And meanwhile, she added, "there were a bunch of people also on that docket who were waiting to be heard."
"They're preventing other people from accessing the justice that they need," she continued, "just by putting themselves on the docket."
"The judges in my somewhat rural county are not pleased," added another lawyer, "with their dockets being crammed with pleadings that are mostly nonsense generated by AI."
As one Texas-based paralegal put it, the disruption happens "all the way down" the court.
"The courts take all filings seriously. And all of this sh*t, before it gets in front of a judge, is clogging the system," said the paralegal. "We have to respond on the defense side, but also, the clerks have to do stuff at the courts. Staff attorneys are reviewing some filings. They're having to look up these bullsh*t cases that don't even exist."
Have you been involved in a legal situation involving AI? Get in touch with us at tips@futurism.com. We can keep you anonymous.
***
At times, AI use has spilled over into personal harassment against legal professionals.
A few years back, one lawyer told us — before ChatGPT and similar bots were even released — he took on a pro bono lawsuit on behalf of a young artist. The case centered on a straightforward payment dispute; the plaintiff he represented won, meaning that the defendant now owed money. Her payments were spotty, the lawyer said, but overall, the case had been open-and-shut.
But in 2025, the defendant suddenly re-emerged. In a torrent of emails, she outlined a new and seemingly AI-generated legal theory, in which she argued that she didn't have to pay after all — and that the lawyer himself had engaged in some kind of grave misconduct.
"Everything is blown out into such crazy proportions," the lawyer said of the emails, which he described as a "fan fiction" narrative of the case.
"I mean, I'm a lawyer. I'm the opposing counsel here. I'm sort of the bad guy," he said. But it'd been "pretty vanilla, my representation in this. It wasn't even that adversarial."
As of January 2026, he said, the defendant had sent him over 300 accusatory emails, all of which appeared to be AI-generated. And like the plaintiffs in the HOA case, she escalated her qualms to the state, urging the state bar in AI-spun complaints that the lawyer should lose his license to practice.
"Whatever version of events that she fed into it," said the lawyer, "the AI somehow compounded it."
***
To be sure, people filing their own lawsuits certainly aren't the only ones who've been caught misusing AI in a courtroom. Many trained lawyers have submitted drafts with nonexistent, AI-hallucinated case law, drawing ire and sanctions from judges. As judges have noted in searing condemnations, this kind of lazy error is an issue of negligence: legal professionals who should know better are leaning on cheap-and-easy tech to do sloppy work.
In contrast, much of the problematic AI use by self-represented litigants, as described by lawyers we spoke with and outlined in AI-generated court filings we reviewed, is poorly captured by the idea of straightforward negligence. The laypeople bringing the suits think they have a case, and instead of explaining why they don't — as a responsible lawyer would — chatbots designed to be agreeable to users are helping them craft spurious legal theories that send everyone involved into an unnecessary legal quagmire.
"AI will absolutely tell you what you want to hear," said the local government attorney. "And right now it will do that and hallucinate case law. But even if it starts getting law correct, that doesn't mean it can really make independent heuristic judgment on whether the case is still worthwhile."
Lawyers we spoke to emphasized that their value is, in large part, discernment: identifying strong arguments, understanding the processes and nuances of the court, and steering potential litigants toward a best course of action — or, at times, a lack thereof.
"If someone says to me, 'Do you think I have a strong case?' I'm always going to be real with them. Because, one, we can never guarantee anything, but two, we have to set realistic expectations," Ficarrotta said. "There is no sounding board with AI. There is no setting realistic expectations. Which is why it's so surprising to pro se litigants when they come in and things aren't the way that they thought they were going to be."
In other words, a sycophantic lawyer is a bad lawyer. And when ingratiating chatbots collide with consequential legal choices, they can steer users down self-destructive paths.
"There are a lot of serious consequences once you start engaging the judicial system, and if you're not doing it in good faith, or if you start screwing up a lot, you can get in quite a bit of trouble," the local government attorney warned, pointing to consequences like sanctions, as well as the reputational damage someone could suffer once a case goes into the public record. "And that's just on the civil side."
"I think people underestimate," he added, "how litigation impacts their lives."
Beyond sanctions, some courts that have encountered this issue have taken limited steps to curb unwieldy AI lawsuits, for example by requiring that all parties disclose when documents are AI-generated.
But the phenomenon sits at a complicated crossroads, and even lawyers who say AI has burdened courts were hesitant to dissuade self-represented people from using the tech entirely.
The legal system is riddled with serious access problems that prevent a massive number of people — particularly poor and marginalized ones — from obtaining justice, regardless of whether their claims have merit. The vast majority of those who turn to self-representation do so purely out of need; that a new technology could serve as a democratizing force within the legal world, legal access advocates urge, is essential terrain to explore.
"We have an access to justice crisis in our country," said Lou Rulli, a professor of law at the University of Pennsylvania who has written extensively on legal access. "So many Americans just don't have the ability to obtain counsel, even in the most important things affecting their lives."
"We're still at an early stage of AI," he continued, "but this represents an opportunity to democratize our legal system, to demystify our complex court procedures, and to help to give folks who don't have access to counsel an opportunity to understand complex things in more simple language — and to have the support, at least in limited ways, to protect their most vital interests."
Jennifer Gundlach, a law professor at Hofstra University who directs the Hofstra Law Pro Se Legal Assistance Program — a legal aid center that helps self-represented people navigate the complexities of the legal system — is also optimistic about generative AI. She shared that she's seen first-hand self-represented folks find success with the help of the tech, particularly when chatbot use is coupled with informed guidance from organizations like hers.
"We have to be creative and forward-thinking in how to use this tool responsibly," Gundlach said of generative AI. "And if lawyers have great power in the monopoly of the legal profession, we also have to make it our responsibility to do what we can to improve access to our justice system when we are not willing or able to provide the realm of legal representation that's needed."
In one case we found, after a self-represented plaintiff was accused of using AI to formulate unclear arguments, a judge's final order — to dismiss the case, which the judge decided lacked merit — spoke to this very tension. He wrote that "artificial intelligence may ultimately prove a helpful tool to assist pro se litigants in bringing meritorious cases to the courts," and that "in that way, artificial intelligence has the potential to contribute to the cause of justice."
But, he cautioned, "accessing any beneficial use of artificial intelligence requires carefully understanding its limitations."
"For example, if merely asked to write an opposition to an opposing party's motion or brief, or to respond to a court order, an artificial intelligence program is likely to generate such a response," the order continues, "regardless of whether the response actually has an arguable basis in the law."
Indeed, the risk of not understanding the tech's limitations is that over-reliance on chatbots can send folks tunneling down expensive, life-changing holes that they may never have needed to start digging in the first place — dragging others down with them as they go.
"The barriers shouldn't be insurmountable," the local government attorney reflected. But AI "lowers that barrier immensely," he warned, and "just because you can" file a lawsuit, it "doesn't mean you should."
"And just because the 'can' part has suddenly become a lot easier," he added, "it doesn't mean it's still a good idea."
More on AI and the real consequences of reinforcing unreality: AI Delusions Are Leading to Domestic Abuse, Harassment, and Stalking