The Rise of AI-Assisted Pro Se Employment Litigation: What Employers Need to Know


Pro se employment lawsuits are surging – and generative artificial intelligence (GAI) is quietly reshaping what those cases look like, how long they last, and how expensive they are to defend. Gone are the days of nonsensical handwritten pleadings; pro se plaintiffs are now arriving with polished filings and relentless motion practice. Although employers continue to prevail overwhelmingly on the merits, AI has changed the economics of defense.

The Trend: More Filings, Higher Costs, Longer Timelines

Pro se filings in federal employment cases have climbed sharply in recent years. According to Lex Machina's 2026 Employment Litigation Report (a LexisNexis company), the number of cases brought by unrepresented plaintiffs more than doubled between 2021 and 2025, rising from 2,052 to 4,388. Over the same period, pro se plaintiffs’ share of all federal employment litigation grew from 9.7 percent to 16.5 percent. This growth is occurring against the backdrop of a record 26,635 federal employment lawsuits filed in 2025 – the highest total in at least a decade.

Why this matters for employers: GAI tools such as ChatGPT and Claude have dramatically lowered the barrier to entry for unrepresented litigants. Plaintiffs can now generate facially sophisticated complaints, opposition briefs, and discovery motions in minutes. As many employers are experiencing firsthand, AI “sycophancy” – the tendency of these tools to prioritize user validation over objective accuracy – is also fueling dramatically inflated settlement demands, often in the hundreds of thousands of dollars. As a result, nuisance‑value resolutions, once more common in pro se cases, are increasingly elusive, and employers should expect higher defense costs.

The paradox: Despite greater volume and surface‑level sophistication, pro se plaintiffs continue to lose the vast majority of the time. Lex Machina reports that, in federal employment cases terminated between 2023 and 2025, defendants prevailed over pro se plaintiffs by a margin of more than 40 to 1. Nearly half of those cases were resolved on procedural grounds alone, and only 29 percent settled – compared to 77 percent of cases involving represented plaintiffs. The challenge for employers is not ultimate liability, but endurance: AI is keeping weak cases alive longer, driving up costs even when the outcome is predictable.

How Courts Are Responding

Federal courts are increasingly confronting AI‑assisted pro se litigation, though their approaches vary.

  • Sanctions for fabricated citations: Courts have begun imposing monetary sanctions where AI‑generated filings cite nonexistent authority. In Allen v. Casper (N.D. Ill. Mar. 2026), the court imposed a $1,500 Rule 11 sanction after a pro se plaintiff filed a 112‑page opposition brief containing entirely fictitious cases, emphasizing that pro se status does not shelter plaintiffs from Rule 11 sanctions. Other courts have imposed sanctions ranging from $500 to $10,000 for similar misuse.
     
  • Formal judicial warnings: In Tantaros v. Fox News Network (S.D.N.Y. Mar. 2026), the court struck a pro se plaintiff’s unauthorized filing and warned that future filings containing AI-generated fabrications would result in sanctions. The Tenth Circuit issued a similar warning in Biglow v. Dell Technologies (Mar. 2026), cautioning a pro se appellant that she was responsible for ensuring that citations to legal authority are not fabrications but instead point to real cases that at least “arguably stand” for the propositions for which they are cited.
     
  • Procedural relief for employers: Some courts are more directly addressing the burden on defendants. In Thomas v. Delaware Technical and Community College (D. Del. Nov. 2025), after the pro se plaintiff submitted more than 49 filings and relied on AI-generated content without verifying it, the court relieved the employer of any obligation to respond to future filings unless ordered to do so – a significant cost‑containment measure.
     
  • Standing orders regulating AI use: In Rako v. VMware LLC (N.D. Cal. Feb. 2026), the court prohibited filings containing AI‑hallucinated citations and later required AI‑related disputes to proceed through the meet‑and‑confer process after parties began weaponizing AI accusations.
     
  • Discovery of AI interactions – a developing issue: Recent rulings in United States v. Heppner (S.D.N.Y. 2026), Warner v. Gilbarco (E.D. Mich. 2026), and In re OpenAI, Inc. Copyright Infringement Litigation (S.D.N.Y. 2026) reflect an evolving body of case law analyzing whether a party’s AI chatbot exchanges are discoverable or protected by the attorney-client privilege or the work product doctrine. As discussed in Baker Donelson’s A Legal Framework for the Discoverability of AI, courts tend to focus on who created the AI prompts, whether counsel directed the research, and the confidentiality terms of the platform used. Materials generated independently by litigants on consumer AI platforms are far less likely to receive protection.

Key Takeaways for Employers

Plan for higher defense spend: The low‑cost pro se case is increasingly the exception. Employers should anticipate increased defense budgets driven by longer case lifecycles and the added work required to vet AI‑generated filings.

Reset settlement expectations early: Inflated demands are becoming common. Employers should evaluate pro se claims promptly and be prepared to litigate through dismissal where the facts support it.

Leverage early dispositive motion practice: Failure to exhaust and timeliness defenses continue to defeat pro se claims with regularity. AI‑polished pleadings often mask foundational legal and factual gaps that remain vulnerable to early challenge.

Treat fabricated citations as an opportunity, not a nuisance: Courts increasingly expect employers to identify and document AI‑generated errors. A well‑developed record can support an employer’s motion for sanctions and provide meaningful procedural relief against pro se litigants who are not using AI appropriately.

Use emerging procedural tools: Courts are adopting standing orders, imposing sanctions, and limiting response obligations in appropriate cases. Employers should ensure their outside counsel is actively pursuing these remedies when warranted.

Address AI‑assisted work performed by your employees related to employment decisions and litigation: AI‑generated materials created by employees independently on consumer platforms may be discoverable. To reduce this risk, employers should ensure AI‑assisted legal work is conducted under the direction of counsel and on platforms with appropriate confidentiality protections.

Consider discovery into plaintiffs’ AI use: In appropriate cases, AI prompts and outputs may reveal gaps between allegations and the plaintiff’s actual factual understanding.

Maintain trial readiness: AI cannot examine witnesses, object in real time, or persuade juries. A demonstrated willingness to proceed to trial in meritorious cases remains one of the strongest deterrents to prolonged, AI‑driven litigation.

If you’d like to discuss this further, please reach out to Jennifer K. McCarty or your Baker Donelson Labor & Employment attorney.
