
Chatbot Disaster: Lawyer’s Reliance On AI Results In Fake Court Decisions

A lawyer used ChatGPT to do legal research for a case, and the bot supplied similar cases he could cite in court, the kind of precedent-hunting lawyers do all the time. There was only one little problem: every case the AI cited was fake.

According to the New York Times, a passenger named Roberto Mata sued Avianca after a serving cart injured his knee during a flight. Avianca asked the judge to toss the case, but Mata’s lawyer objected and submitted a brief citing more than half a dozen relevant court decisions: Varghese v. China Southern Airlines, Shaboon v. Egyptair, and Martinez v. Delta Air Lines, among others.

Neither the airline’s lawyers nor the judge, however, could find the decisions or the quotations cited in the brief.

The lawyer who used ChatGPT, Steven Schwartz, has been practicing law for three decades. Schwartz said that he had no intention to deceive the court and that he had never used the chatbot for legal research before. He even asked the bot whether the cases it cited were real, and the AI assured him they could be found in reputable legal databases. Nobody could find them.

ChatGPT, like other chatbots, can confidently provide wrong information. These systems are not intelligent in any meaningful sense; they are statistical models trained to produce human-sounding text, so when asked a question they generate an answer that is likely to sound true, whether or not it actually is.
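To make that concrete, here is a toy sketch in Python of that core idea, with entirely made-up word statistics (a real model learns billions of such weights from text and works on sub-word tokens rather than whole words): each next word is picked by probability, and truth is never checked.

```python
import random

# Hand-made toy statistics; every name and number here is invented
# for illustration. A real model learns these weights from huge
# amounts of text instead of using a hard-coded table.
next_word = {
    "cite": [("Varghese", 0.5), ("Martinez", 0.3), ("Shaboon", 0.2)],
    "Varghese": [("v.", 1.0)],
    "Martinez": [("v.", 1.0)],
    "Shaboon": [("v.", 1.0)],
    "v.": [("China", 0.6), ("Delta", 0.4)],
    "China": [("Southern", 1.0)],
    "Southern": [("Airlines", 1.0)],
    "Delta": [("Air", 1.0)],
    "Air": [("Lines", 1.0)],
}

def complete(last_word: str, max_words: int = 6) -> str:
    """Extend a prompt one word at a time, always sampling a
    statistically likely continuation. Nothing verifies the result."""
    words = []
    for _ in range(max_words):
        options = next_word.get(last_word)
        if not options:
            break
        choices, weights = zip(*options)
        last_word = random.choices(choices, weights=weights)[0]
        words.append(last_word)
    return " ".join(words)

print("You could cite", complete("cite"))
# Possible output: "You could cite Varghese v. China Southern Airlines"
# It reads like a real citation because real citations look like this
# statistically; whether such a case exists was never part of the loop.
```

The same mechanics that let such a model write fluent prose also let it produce a citation-shaped string for a case that never existed.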

The judge ordered a hearing next month to discuss potential sanctions against Schwartz for filing a legal brief built on fake court decisions.

Written by RA News staff.

