The non-culpability of ChatGPT in legal cases
"Second Circuit Refers Lawyer for Disciplinary Proceedings Based on AI-Hallucinated Case in Brief", by Eugene Volokh, The Volokh Conspiracy, reason | 1.30.2024
From Park v. Kim, decided today by the Second Circuit (Judges Barrington Parker, Allison Nathan, and Sarah Merriam); this is the 13th case I've seen in the last year in which AI-hallucinated citations were spotted:
We separately address the conduct of Park's counsel, Attorney Jae S. Lee. Lee's reply brief in this case includes a citation to a non-existent case, which she admits she generated using the artificial intelligence tool ChatGPT. Because citation in a brief to a non-existent case suggests conduct that falls below the basic obligations of counsel, we refer Attorney Lee to the Court's Grievance Panel, and further direct Attorney Lee to furnish a copy of this decision to her client, Plaintiff-Appellant Park….
Park's reply brief in this appeal was initially due May 26, 2023. After seeking and receiving two extensions of time, Attorney Lee filed a defective reply brief on July 25, 2023, more than a week after the extended due date. On August 1, 2023, this Court notified Attorney Lee that the late-filed brief was defective, and set a deadline of August 9, 2023, by which to cure the defect and resubmit the brief. Attorney Lee did not file a compliant brief, and on August 14, 2023, this Court ordered the defective reply brief stricken from the docket. Attorney Lee finally filed the reply brief on September 9, 2023.
The reply brief cited only two court decisions. We were unable to locate the one cited as "Matter of Bourguignon v. Coordinated Behavioral Health Servs., Inc., 114 A.D.3d 947 (3d Dep't 2014)." Appellant's Reply Br. at 6. Accordingly, on November 20, 2023, we ordered Park to submit a copy of that decision to the Court by November 27, 2023. On November 29, 2023, Attorney Lee filed a Response with the Court explaining that she was "unable to furnish a copy of the decision." Although Attorney Lee did not expressly indicate as much in her Response, the reason she could not provide a copy of the case is that it does not exist—and indeed, Attorney Lee refers to the case at one point as "this non-existent case."
Attorney Lee's Response states:
I encountered difficulties in locating a relevant case to establish a minimum wage for an injured worker lacking prior year income records for compensation determination …. Believing that applying the minimum wage to an injured worker in such circumstances under workers' compensation law was uncontroversial, I invested considerable time searching for a case to support this position but was unsuccessful….
Consequently, I utilized the ChatGPT service, to which I am a subscribed and paying member, for assistance in case identification. ChatGPT was previously provided reliable information, such as locating sources for finding an antic furniture key. The case mentioned above was suggested by ChatGPT, I wish to clarify that I did not cite any specific reasoning or decision from this case.
In this post we are discussing the application of AI-generated legal information in a specific judicial case. In such an instance, the onus is completely on the attorney who utilized the ChatGPT service that she subscribed to and paid for. Here ChatGPT is like a tool, and it is the professional duty of the lawyer to use it in a responsible manner.
Now, when it comes to "AI plagiarism" (1/4/24), "in which OpenAI programs parrot large chunks of [copyrighted] NYT material" without acknowledgement or payment, that may be a different matter. There the AI program is a creator acting at the behest of its owner, OpenAI. This means that the owner is liable for the behavior of its machine. The same holds true for other abuses of the power of LLMs.
Selected readings
- "AI percolates down through the legal system" (12/16/23) — with lengthy bibliography on AI, LLM, etc.
- "AI and the law" (10/15/23)
- "AI and the law, part 2" (10/19/23) — another long bibliography
Just within the last two years, there have been hundreds of Language Log posts on ChatGPT, DeepL, and other types of LLMs.
[h.t. Kent McKeever]
Haamu said,
February 2, 2024 @ 12:42 pm
A lot of ethical confusion seems to be arising because of the insistence on viewing these AIs as somehow fundamentally new and different. But these are not really new ethical questions. It seems that a good baseline could be ascertained simply by substituting an actual human being into the fact pattern, and seeing if it makes sense. For instance:
One wonders if this simple thought experiment would alleviate any of Attorney Lee's perplexity.
Ethan A Merritt said,
February 2, 2024 @ 1:00 pm
I'm wondering about the antic furniture key that ChatGPT helped locate. In my experience both keys and furniture just quietly go about their business. Where, exactly, did ChatGPT suggest looking for an antic one?
Rodger C said,
February 2, 2024 @ 1:09 pm
Where, exactly, did ChatGPT suggest looking for an antic one?
In a pile of antique hay?
Seth said,
February 2, 2024 @ 1:29 pm
@Ethan A Merritt – Maybe at Disneyland? It seems like just the place, from a movie where it sings and dances like "I'm a key, job for me, open theeee, cabinetry!"
David Morris said,
February 2, 2024 @ 5:09 pm
Is the citation 114 A.D.3d 947 (3d Dep't 2014) at least plausible, or would a US judge spot it as phoney a mile away? (I can spot Australian citations, but not necessarily US ones.)
I understood 'attic furniture'.
MN said,
February 2, 2024 @ 9:00 pm
So they had a beef with Bourguignon?
schumb hopes said,
February 3, 2024 @ 3:30 am
Mistaking antic for antique is similar to mistaking haven for heaven, which has lodged itself into regular use in non-English media.
AntC said,
February 3, 2024 @ 6:14 am
would a US judge spot it as phoney a mile away?
I'm no Judge nor in the US, but a 'tool' as dumb as Google or wiktionary immediately questions that collocation "antic furniture key". Did you mean "antique"? "attic"?
I as an avid follower of words know 'antic' is an obsolete spelling — but so obsolete it's unlikely to be of any relevance to a current legal case or current furniture. Perhaps English is not this Attorney's first Language? Then they should have learnt way back before they were let loose in any sort of Practice that English is full of traps for Lawyers.
Why did the Attorney even volunteer this "antic" as corroboration for anything? Especially since their judgement was already under the spotlight? Wouldn't they ask a Senior colleague to check their submission to the Panel? Didn't they realize they were in career-ending peril? How were they ever let out of Law School?
Thank you to the Second Circuit for taking this Attorney out of circulation before they do more damage.
Izzy Grosof said,
February 3, 2024 @ 10:22 am
As for the plausibility of the citation "114 A.D.3d 947 (3d Dep't 2014).", there is a real case listed as "114 A.D.3d 947":
https://casetext.com/case/kay-v-desantis-2
It's an appeal in a child support case. It was in the New York Supreme Court, Appellate Division, Second Department, and it is in fact from 2014.
So I think the citation would pass a first inspection, until someone tried to look it up. However, I'm not a lawyer, so I could be missing something.
Lex said,
February 10, 2024 @ 3:24 am
@David Morris and Izzy Grosof
I can’t readily figure out the proposition for which the case was cited. It’s a federal court, but citations to state cases are normal enough when the underlying claims involve state law.
So, it might or might not set off alarm bells; but that's basically a moot point, because federal law clerks are giant homework nerds, forged in the fires of law review and possessed of an uncanny knack for spotting errors as tiny as impertinently un-italicized periods, who almost always check citations, especially when the case cited is unfamiliar. Like, it's basically their number one job. (Presumably opposing counsel would be more familiar with the state law, and might spot the fugazi immediately.)