
AI is currently messing up legal work. The solution? More AI, naturally



Generative AI is now embedded in legal work, whether firms approve of it or not. As courts flag a growing number of filings riddled with fabricated cases, law firms are responding with an ironic fix: deploying more AI to police the AI lawyers already use.

The approach reflects a reluctant acceptance that banning chatbots outright is no longer realistic.

Hallucinations in court

Cozen O’Connor recently began testing software designed to detect AI hallucinations after a judge sanctioned two of its lawyers for submitting briefs that cited nonexistent cases, according to reporting by Business Insider. The firm had rules barring the use of public chatbots for drafting, but those guardrails failed.

Kristina Bakardjiev, a partner overseeing technology at the firm, said lawyers will experiment with AI tools regardless of policy. “You have to be pragmatic,” she said, adding that the risk now lies in catching mistakes before a judge does.

The penalties were not trivial. A Nevada judge gave the lawyers a choice between financial sanctions or reporting their misconduct to law school deans and bar authorities. Both chose the latter, and the firm dismissed the lawyer who used ChatGPT.

A rising problem

Courts are encountering bogus citations with increasing frequency. Legal analyst Damien Charlotin has tracked cases in which judges identified hallucinated content in filings. His database, which logged 120 cases between April 2023 and May 2025, had grown to 660 by December, with several new incidents appearing each day.

Charlotin said most cases involve self-represented litigants or small firms, but large firms are not immune. Errors often enter through junior staff, contractors or seemingly minor tasks like footnote formatting.

The issue is not confined to law. Deloitte agreed to partially refund the Australian government this year after officials found allegedly AI-generated errors in one of its reports.

Walled gardens

Experts say hallucinations are difficult to eliminate entirely because large language models are built to predict text, not verify facts. Michael Dahn of Thomson Reuters said errors drop sharply when AI tools are forced to cite from closed, vetted datasets rather than the open internet.

That logic underpins Thomson Reuters’ Westlaw and LexisNexis products, which are marketed as safer because their AI tools draw only from curated legal databases. LexisNexis has extended the approach through a partnership with legal AI startup Harvey, which constrains models to approved sources and provides logs showing how answers were generated.
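As an illustration of that "walled garden" pattern, here is a minimal Python sketch, not any vendor's actual implementation: the model is handed only passages retrieved from a vetted corpus and may cite only their identifiers. Every name here (VETTED_CORPUS, retrieve, answer_with_citations) is hypothetical.

```python
# Illustrative sketch only; all names are hypothetical, not a vendor API.

# Stand-in for a curated legal database (the "walled garden").
VETTED_CORPUS = {
    "smith-v-jones-2019": "The court held that notice must be actual, not constructive.",
    "doe-v-acme-2021": "Summary judgment is appropriate only when no material fact is disputed.",
}

def retrieve(question: str, corpus: dict, k: int = 2) -> dict:
    """Naive keyword overlap, standing in for a real search index."""
    terms = set(question.lower().split())
    ranked = sorted(
        corpus.items(),
        key=lambda item: len(terms & set(item[1].lower().split())),
        reverse=True,
    )
    return dict(ranked[:k])

def answer_with_citations(question: str) -> dict:
    sources = retrieve(question, VETTED_CORPUS)
    # A production system would prompt the model with *only* these passages
    # and require every claim to cite one of the source IDs; the retained
    # log is what lets a firm later show how an answer was generated.
    return {
        "question": question,
        "allowed_sources": list(sources),
        "audit_log": sources,
    }

print(answer_with_citations("When is summary judgment appropriate?"))
```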

AI checking AI

Cozen O’Connor is testing a tool from startup Clearbrief that scans drafts for fabricated citations, typos and claims unsupported by sources. Clearbrief’s software, built as a Microsoft Word plug-in, generates a cite-check report before a document is filed or reviewed by a partner.
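Clearbrief's internals are not public, but the core of a cite-check pass can be sketched in a few lines of Python: pull out anything that looks like a reporter citation and flag whatever cannot be matched against a database of verified cases. The regex and the KNOWN_CITATIONS set below are illustrative stand-ins, not the tool's actual logic.

```python
import re

# Hypothetical lookup table; a real cite-checker would query a curated
# legal database rather than a hard-coded set.
KNOWN_CITATIONS = {
    "550 U.S. 544",  # Bell Atlantic Corp. v. Twombly
    "556 U.S. 662",  # Ashcroft v. Iqbal
}

# Rough pattern for reporter citations such as "550 U.S. 544" or "123 F.3d 456".
CITATION_RE = re.compile(r"\b\d{1,4} (?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)) \d{1,4}\b")

def cite_check(draft: str) -> list:
    """Return one entry per citation found, flagging any that fail verification."""
    return [
        {"citation": m.group(), "verified": m.group() in KNOWN_CITATIONS}
        for m in CITATION_RE.finditer(draft)
    ]

draft = "Plaintiff relies on 550 U.S. 544, but also cites 999 F.4th 321."
for entry in cite_check(draft):
    print(entry)
# {'citation': '550 U.S. 544', 'verified': True}
# {'citation': '999 F.4th 321', 'verified': False}
```

A report like this, saved alongside each filing, is the kind of record Bakardjiev describes: evidence a firm can show a judge that it took reasonable steps to catch fabrications before submission.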

Founder Jacqueline Schafer said partners are ultimately responsible for accuracy under federal rules, even if juniors handle citation checks. Bakardjiev said maintaining a record of cite-check reports could help firms show judges they took reasonable steps to prevent errors.

For now, firms accept that hallucinations are here to stay. The practical response is training lawyers to treat chatbot output as a draft, not an authority, and layering verification tools on top.

In the legal world, the solution to AI’s mistakes may simply be more AI.

Sources: Cozen O’Connor Law Firm, Business Insider
