It's impossible to detect LLM-created text


Last year, I expressed considerable skepticism about the prospects for accurate detection of text generated by Large Language Models ("Detecting LLM-created essays?", 12/20/2022). Since then, many new systems claiming to detect LLM outputs have emerged, notably Turnitin's "AI writing detector".

In a recent post on AI Weirdness ("Don't use AI detectors for anything important", 6/30/2023), Janelle Shane presents examples of several kinds of failure, and explains why things are not likely to change.

In addition to examples of false positives triggered by her own writing (posted last year), she cites a study showing that such detectors are (at least statistically) biased against non-native writers: Weixin Liang et al., "GPT detectors are biased against non-native English writers", 4/18/2023.

And she also shows, by example, that such detectors can be fooled by simple expedients like asking the LLM to "Elevate the following text by employing literary language" (sketched in code below). Her conclusion:

What does this mean? Assuming they know of the existence of GPT detectors, a student who uses AI to write or reword their essay is LESS likely to be flagged as a cheater than a student who never used AI at all.

There will no doubt be an ongoing LLM generation/detection arms race, though it seems that the detectors have been losing badly from the beginning.
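
As an illustration (not part of Shane's write-up), the evasion expedient amounts to nothing more than a second round of prompting. Here is a minimal sketch, assuming the openai Python client (version 1.x) and an API key in the environment; the model name and prompts are illustrative choices, not a reproduction of her experiment:

```python
# Minimal sketch of the two-step "generate, then reword" expedient.
# Assumes the openai Python package (>= 1.0) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single-turn prompt to a chat model and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: have the model write the essay.
essay = ask("Write a short essay on the causes of the French Revolution.")

# Step 2: the expedient described above: ask the model to reword its own output.
reworded = ask("Elevate the following text by employing literary language:\n\n" + essay)

# A detector would then be run on `reworded`; in Shane's tests, text reworded this
# way was less likely to be flagged than text written without any AI at all.
print(reworded)
```

The point is not this particular prompt, but that any fixed detector can be targeted this cheaply.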

Update — This does NOT mean that LLMs (of current types) can reliably create coherent and correct texts. Aside from the well-documented "hallucination" issues, there are other systematic problems, because such systems try to deal with things like logic and plot using nothing more than word-association patterns on various scales. Of course, it's possible that current explorations in "neuro-symbolic AI" will fix this — for one such approach, see Lionel Wong et al., "From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought", 6/23/2023.

3 Comments

  1. Dick Margulis said,

    July 5, 2023 @ 8:11 am

    You're discussing whether an algorithm can detect AI-generated text.

    A different question is how well human editors do at the task. So far, I've seen only anecdotal material on that subject. Are you aware of any studies that might address that question? For that matter, do you have any thoughts on how such a study might be conducted, given the wide variation in abilities among human editors?

  2. KevinM said,

    July 5, 2023 @ 10:49 am

    This post is unintentionally amusing for anyone who would recognize an "LLM" as someone with an advanced legal degree, commonly in tax law.

  3. Chris Barts said,

    July 5, 2023 @ 1:36 pm

    There's a simple, desperate logic here:

    Schools and other businesses will decide they need to detect AI-generated work.

    They will see these AI-based tools claiming to be able to do the job.

    Therefore, they will declare that those tools work, and ignore the evidence to the contrary.

    "It works because it must work" is a powerful logic.
