Lawyers Who Used ChatGPT To Dummy Up Cases Throw Themselves On The Mercy Of The Court

That show cause hearing tomorrow is going to be WILD.

Tomorrow at noon, the hapless lawyers who ChatGPT-ed themselves onto the front page of the New York Times will appear before US District Judge Kevin Castel to explain why they should not be sanctioned.

The case of two attorneys OK Boomer-ing themselves into the soup has been a punchline for two weeks, providing us all a much-needed break from politics and the endless culture war that dominates every news cycle. Are these guys Trumpers? Covid truthers? Don’t know, don’t care!

We know that they relied on an AI chatbot to research whether the two-year statute of limitations in an international treaty might be trumped by the three-year state provision, or alternatively whether it might be tolled by bankruptcy. We know that they responded to a motion to dismiss by citing multiple nonexistent cases. And we know that, when opposing counsel brought this to their attention, they produced fake opinions generated by ChatGPT and attached them to an affidavit filed with the court.

According to their own filings, Steven Schwartz is an experienced personal injury lawyer at the New York law firm Levidow, Levidow & Oberman. When his tort case against Avianca Airlines arising from an incident on an international flight in 2019 was removed to federal court, he handed it off to his colleague Peter LoDuca, who, unlike Schwartz, was admitted to practice in the Southern District of New York. But Schwartz continued to do the work, including drafting the response to Avianca’s motion to dismiss based on the expiration of the statute of limitations.

In a May 23 affidavit and a June 6 declaration, Schwartz took responsibility for the false filings, swearing that LoDuca had no knowledge that the documents were inaccurate. Levidow, which has hired outside counsel, submitted a lengthy response to the show cause order in which it claims the mistakes were entirely inadvertent, and thus that it and its lawyers cannot be sanctioned under Rule 11(c)(3), which requires not just incompetence but actual bad faith.

“The law in this Circuit is clear: imposing sua sponte sanctions under Rule 11 requires a finding of subjective bad faith, not just that the conduct was objectively unreasonable,” they write.

In support of this, they attach a chat log replicating the prompts Schwartz used to generate the research undergirding his opposition to Avianca’s motion to dismiss. They argue that it was a good-faith workaround for a small firm that can’t afford Lexis or Westlaw, and whose access to federal decisions had been cut off by Fastcase. And anyway, they’ve been so thoroughly humiliated that professional and/or monetary sanctions are unnecessary.


“The lawyer, Mr. Schwartz, had no idea this was happening, even when opposing counsel brought their inability to locate the cases to his attention,” the brief continues. “ChatGPT even assured him the cases were real and could be found on Westlaw and LexisNexis, and continued to provide extended excerpts and favorable quotations. Now that Mr. Schwartz and the Firm know ChatGPT was simply making up cases, they are truly mortified; they had no intention of defrauding the Court, and the mere accusation – repeated in hundreds (if not thousands) of articles and online posts – has irreparably damaged their reputations. They have apologized to the Court in earlier submissions and do so again here.”

What they don’t do is attach the chat logs showing how Schwartz got ChatGPT to generate the fake opinions he had LoDuca submit to the court on April 25, after opposing counsel said they didn’t exist and Judge Castel ordered the firm to cough them up. In fact, the brief is pretty handwave-y about this part of the process, which was apparently so onerous that it required an extra week for LoDuca to get back from “vacation.”

“Mr. Schwartz printed two of the eight cases from Fastcase and returned to ChatGPT for the remaining six cases,” they write, carefully sidestepping the admission that Fastcase was perfectly capable of producing the two real federal decisions cited, which would have tipped off any honest practitioner that the remaining six were unavailable on Fastcase because they did not exist. Particularly since Schwartz had already been told exactly that by Avianca’s lawyers and the court!

In short, while LoDuca, Schwartz, and Levidow want credit for “promptly acknowledg[ing] their mistakes,” there’s still a big hole in the middle of this case where the chatbot, which previously only spit out three-paragraph excerpts of the made-up opinions, suddenly switched direction and started producing three-page opinions.

Guess we’ll find out tomorrow if Judge Castel noticed that, too.


Mata v. Avianca [Docket via Court Listener]


Liz Dye lives in Baltimore where she writes about law and politics and appears on the Opening Arguments podcast.
