Garner v. Kadince, 2025 UT App 80: Utah Lawyers Sanctioned for Citing AI Hallucinations

In a recent ruling, the Utah Court of Appeals sanctioned two attorneys for submitting a legal brief containing fabricated case citations generated by artificial intelligence. The case, Garner v. Kadince, is a stark reminder of the perils of uncritical reliance on AI tools in the legal profession.

The Utah Court of Appeals sent a clear message to the legal world: using AI doesn’t excuse sloppy lawyering. In Garner v. Kadince, 2025 UT App 80, the court sanctioned two attorneys for filing a legal brief that cited fake cases generated by ChatGPT. One of the made-up cases, “Royer v. Nelson,” didn’t exist in any legal database—only in the chatbot’s imagination.

The attorneys admitted the mistake. They said the brief had been drafted by an unlicensed law clerk who used AI. One of them signed the document without checking the citations, and the firm had no policies in place to govern AI use.

The court wasn’t amused.

Why This Matters

This case marks Utah’s first disciplinary action over AI-generated “hallucinations” in court filings. It won’t be the last. Courts across the country are starting to confront the risks of AI in the legal profession. The lesson is simple: AI can help lawyers work faster—but not if it makes them careless.

Judges and opposing counsel shouldn’t have to verify whether a case an attorney cites actually exists. That’s the attorney’s job. The integrity of the judicial system depends on the accuracy and honesty of legal filings. As AI becomes more integrated into legal practice, attorneys must ensure that technology enhances, rather than undermines, the pursuit of justice.

The Consequences

The Utah court imposed several penalties:

  • Reimbursement of opposing counsel’s fees.
  • A full refund to the petitioner of any fees paid for the flawed petition.
  • A donation of $1,000 to the legal aid organization “and Justice for All”.

Similar incidents have occurred elsewhere, most notably Mata v. Avianca, the 2023 New York federal case in which attorneys were sanctioned for submitting a brief containing fictitious ChatGPT-generated citations.

The Bigger Picture

AI isn’t going away. Tools like ChatGPT are becoming common in law offices, especially among junior staff. Used properly, AI can streamline research, draft basic documents, and flag issues. But the duty to verify the accuracy of legal citations and arguments remains paramount.

This case is a warning. Law firms need clear policies. Attorneys must double-check AI-generated content. And courts will not tolerate fake precedent, no matter how it got there.

Technology changes fast. Professional responsibility doesn’t.

This isn’t just a Utah issue—it’s a turning point. Lawyers across the country should take note: due diligence is still non-negotiable.

Originally Published: May 27, 2025

