AI percolates down through the legal system


There has been considerable concern that AI (e.g., ChatGPT and other LLM-enabled tools) would unduly influence sensitive sectors of society (e.g., the law, health care, education).  Some of the anti-AI rhetoric has bordered on alarmist (I will write a post about that within a few days).

For now, here's an example of how humans will fight back.

AI in Court
5th Circuit Seeks Comment on Proposed AI Rule

Lawyers will have to certify they did not use AI, or verify any work produced by AI.

Josh Blackman, The Volokh Conspiracy (11/29/23)


The U.S. Court of Appeals for the Fifth Circuit is soliciting comments on a proposed change. Rule 32.3 would now restrict the use of generative AI (with the proposed additions shown in red in the original):

32.3. Certificate of Compliance. See Form 6 in the Appendix of Forms to the Fed. R. App. P. Additionally, counsel and unrepresented filers must further certify that no generative artificial intelligence program was used in drafting the document presented for filing, or to the extent such a program was used, all generated text, including all citations and legal analysis, has been reviewed for accuracy and approved by a human. A material misrepresentation in the certificate of compliance may result in striking the document and sanctions against the person signing the document.

The new certificate of compliance would require the lawyer to check one of two boxes:

3. This document complies with the AI usage reporting requirement of 5th Cir. R. 32.3 because:

– no generative artificial intelligence program was used in the drafting of this document, or

– a generative artificial intelligence program was used in the drafting of this document and all generated text, including all citations and legal analysis, has been reviewed for accuracy and approved by a human.

I think this proposal strikes a good balance. Lawyers are not barred from using generative AI, but they have to attest that they used this technology. And you can be certain that briefs with the AI box checked will be reviewed more carefully. Indeed, I would be eager to see an empirical study performed on the briefs that check the second box. Clients may also want to see this information; they may have thoughts about lawyers who bill their time for using generative AI.

Who will speak for AI?  AI itself?

 


[h.t. Kent McKeever]



7 Comments

  1. Gregory Kusnick said,

    December 16, 2023 @ 10:13 am

    "reviewed for accuracy and approved by a human"

    But apparently there's no requirement that said human have any particular qualifications or competency at drafting legal briefs.

  2. Scott P. said,

    December 16, 2023 @ 11:57 am

    Gregory,

    Yes, but if an unqualified human approves it, then the legal firm who hired that person can be held responsible. That's already covered by existing rules.

  3. Philip Taylor said,

    December 16, 2023 @ 12:23 pm

    "Lawyers will have to certify they did not use AI, or verify any work produced by AI" — Am I the only reader to have parsed that as "Lawyers will have to certify [that] they neither used AI nor did they verify any work produced by AI" ?

  4. Karl Weber said,

    December 16, 2023 @ 4:41 pm

    Seems as though this is not just a theoretical possibility: https://abovethelaw.com/2023/12/michael-cohen-trump-lawyer-chatgpt/

  5. mg said,

    December 16, 2023 @ 6:22 pm

    There have already been cases where briefs created with generative AI, containing fake citations to non-existent cases, were submitted to courts. The lawyers used ChatGPT or the like and didn't bother to check the output. An infamous example from this summer: https://www.courthousenews.com/sanctions-ordered-for-lawyers-who-relied-on-chatgpt-artificial-intelligence-to-prepare-court-brief/

  6. Richard Hershberger said,

    December 17, 2023 @ 1:46 pm

    To expand on Scott's reply to Gregory, I am not a lawyer but I work for one. It is not at all uncommon for me to write the first draft of a legal pleading, which may involve researching and citing cases. I cannot, however, sign a pleading. That is for my boss. He is the one with a license potentially on the line. He reviews my draft, often making changes. How much time he devotes to this review depends on the nature of the pleading. In the general case it also depends on how willing the lawyer is to rely on the non-lawyer's work. I have been with him fifteen years. At this point we understand one another well, and I will point out any parts that I think require special attention.

    The essence of the proposed rule is that a human reviewed the document. The human who signed it is on the hook, regardless of whether they personally reviewed it, but this is no different from how it has always been.

  7. Lex said,

    December 18, 2023 @ 12:40 pm

    “AI percolates down through the legal system”

    Tangential: For all the talk about AI in the court system (hallucinating cases, its potential uses in discovery and legal research, “What does this all mean for the future?,” etc.) it’s being rapidly adopted into the arbitration system, particularly international commercial arbitration. (Seriously, just google for something like “AI and arbitration.”) The UAE, Singapore, etc. may be the real proving ground for AI and legal merits arguments.

    Interesting but unsurprising: many of the very first people to really look at AI and legal merits arguments (c. 1990-2000) are the same people who you find operating in/adjacent to the ICA space.
