Garner v. Kadince, 2025 UT App 80: Utah Lawyers Sanctioned for Citing AI Hallucinations

In a recent ruling, the Utah Court of Appeals sanctioned attorneys for submitting a legal brief containing fabricated case citations generated by artificial intelligence. This incident, involving the case Garner v. Kadince, serves as a stark reminder of the perils of uncritical reliance on AI tools in the legal profession.

The Utah Court of Appeals sent a clear message to the legal world: using AI doesn’t excuse sloppy lawyering. In Garner v. Kadince, 2025 UT App 80, the court sanctioned two attorneys for filing a legal brief that cited fake cases generated by ChatGPT. One of the made-up cases, “Royer v. Nelson,” didn’t exist in any legal database—only in the chatbot’s imagination.

The attorneys admitted the mistake. They explained that the brief had been drafted by an unlicensed law clerk using AI, and that one of them had signed the filing without checking the citations. At the time, the firm had no policy governing AI use.

The court wasn’t amused.

Why This Matters

This case marks Utah's first sanction over AI-generated "hallucinations" in court filings. It won't be the last. Courts across the country are starting to confront the risks of AI in legal practice. The lesson is simple: AI can help lawyers work faster, but not if it makes them careless.

Judges and opposing counsel shouldn’t have to verify whether a case an attorney cites actually exists. That’s the attorney’s job. The integrity of the judicial system depends on the accuracy and honesty of legal filings. As AI becomes more integrated into legal practice, attorneys must ensure that technology enhances, rather than undermines, the pursuit of justice.

The Consequences

The Utah court imposed several penalties:

  • Reimbursement of opposing counsel’s fees.
  • A full refund to the Petitioner for any fees paid for the flawed petition.
  • A donation of $1,000 to the legal aid organization “and Justice for All”.

Similar incidents have occurred elsewhere, most notably in Mata v. Avianca, a 2023 case in the Southern District of New York in which attorneys were sanctioned for submitting a ChatGPT-generated brief containing fictitious citations.

The Bigger Picture

AI isn’t going away. Tools like ChatGPT are becoming common in law offices, especially among junior staff. Used properly, AI can streamline research, draft basic documents, and flag issues. But the duty to verify the accuracy of legal citations and arguments remains paramount.

This case is a warning. Law firms need clear policies. Attorneys must double-check AI-generated content. And courts will not tolerate fake precedent, no matter how it got there.

Technology changes fast. Professional responsibility doesn’t.

This isn’t just a Utah issue—it’s a turning point. Lawyers across the country should take note: due diligence is still non-negotiable.

Originally Published: May 27, 2025

