Archive for Computational linguistics

The state of speech-to-text

…if you haven't noticed, is good. There are many applications, from conversing with Siri and Alexa and Google Assistant, to getting voicemail in textual form, to automatically generated subtitles, and so on. For linguists, one parochial (but important) application is accurate automatic transcription of speech corpora, and the example that motivates this post comes from that world.
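For the corpus-transcription use case, getting started is now a few lines of code. Here's a minimal sketch using OpenAI's open-source Whisper model, one popular option among several (the model size and filename are placeholders, not recommendations):

    # A minimal transcription sketch with the open-source "whisper" package.
    import whisper

    model = whisper.load_model("base")              # also: tiny/small/medium/large
    result = model.transcribe("interview_042.wav")  # hypothetical corpus recording
    print(result["text"])

    # Whisper also returns rough time-aligned segments, useful for corpus work:
    for seg in result["segments"]:
        print(f"{seg['start']:7.2f} {seg['end']:7.2f} {seg['text']}")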

Read the rest of this entry »

Comments (8)

LLMs can't reason?

…though they often do a credible job of faking it.  An interesting (preprint) paper by Konstantine Arkoudas, "GPT-4 Can't Reason", brings the receipts.

Read the rest of this entry »

Comments (11)

ROT-LLM?

There's a puzzling new proposal for watermarking AI-generated text — Alistair Croll, "To Watermark AI, It Needs Its Own Alphabet", Wired 7/27/2023:

We need a way to distinguish things made by humans from things made by algorithms, and we need it very soon. […]

Fortunately, we have a solution waiting in plain sight. […]

If the companies who pledged to watermark AI content at the point of origin do so using Unicode—essentially giving AI its own character set—we’ll have a ready-made, fine-grained AI watermark that works across all devices, platforms, operating systems, and websites.
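To make the proposal concrete, here's a minimal sketch of what a "parallel alphabet" would look like, using the existing Mathematical Sans-Serif block as a stand-in for a hypothetical AI-reserved character set. It also makes the obvious weakness visible: the "watermark" is a character substitution that anyone can reverse in one line.

    # Sketch of the "parallel alphabet" idea, with the Mathematical
    # Sans-Serif letters standing in for an AI-reserved character set.
    SANS_UPPER = 0x1D5A0  # MATHEMATICAL SANS-SERIF CAPITAL A
    SANS_LOWER = 0x1D5BA  # MATHEMATICAL SANS-SERIF SMALL A

    to_ai = {}
    for i in range(26):
        to_ai[ord("A") + i] = SANS_UPPER + i
        to_ai[ord("a") + i] = SANS_LOWER + i
    from_ai = {v: k for k, v in to_ai.items()}

    def watermark(text):
        return text.translate(to_ai)

    def strip_watermark(text):
        return text.translate(from_ai)

    marked = watermark("Written by a machine")
    print(marked)                   # 𝖶𝗋𝗂𝗍𝗍𝖾𝗇 𝖻𝗒 𝖺 𝗆𝖺𝖼𝗁𝗂𝗇𝖾
    print(strip_watermark(marked))  # back to plain ASCII, watermark gone

One str.translate call undoes the whole scheme, which is presumably part of why the proposal is puzzling.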

Read the rest of this entry »

Comments (22)

Mark Twain's new novel?

Today's Non Sequitur:


Read the rest of this entry »

Comments (14)

Radial dendrograms

From Sarah Gao and Andrew Gao, "On the Origin of LLMs: An Evolutionary Tree and Graph for 15,821 Large Language Models", arxiv.org 7/19/2023:

That's not a vinyl: it's a "radial dendrogram" showing the evolutionary tree of nearly 6,000 of the Large Language Models posted at Hugging Face. Zeroing in on one quadrant so you can read the labels:
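For readers who want to try this at home: a dendrogram like this is just hierarchical clustering over pairwise distances, and the radial layout is the same tree drawn in polar coordinates. A toy sketch with scipy (the "models" and their features here are random stand-ins, not the paper's actual data):

    # Toy hierarchical clustering rendered as an ordinary (non-radial)
    # dendrogram; the data are random stand-ins for model features.
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.cluster.hierarchy import linkage, dendrogram

    rng = np.random.default_rng(0)
    features = rng.normal(size=(12, 5))         # 12 hypothetical models
    tree = linkage(features, method="average")  # agglomerative clustering
    dendrogram(tree, labels=[f"model-{i}" for i in range(12)])
    plt.show()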

Read the rest of this entry »

Comments (2)

Watermarking text?

Ashley Belanger, "OpenAI, Google will watermark AI-generated content to hinder deepfakes, misinfo", ars technica 7/21/2023:

Seven companies — including OpenAI, Microsoft, Google, Meta, Amazon, Anthropic, and Inflection — have committed to developing tech to clearly watermark AI-generated content. That will help make it safer to share AI-generated text, video, audio, and images without misleading others about the authenticity of that content, the Biden administration hopes.

The link goes to a 7/21 White House fact sheet with the title "FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI". One of that document's many bullet points:

  • The companies commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system. This action enables creativity with AI to flourish but reduces the dangers of fraud and deception.
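The fact sheet doesn't say how watermarking text would actually work. One scheme from the recent literature, Kirchenbauer et al.'s "green list" approach (which may or may not be what any of these companies adopts), is easy to sketch: bias generation toward a pseudorandomly chosen subset of the vocabulary, seeded by the preceding token, then detect by counting how often that bias shows up. A toy version of the detection side:

    # Toy sketch of "green list" watermark detection. Real systems do this
    # over an LLM's full vocabulary at generation time; this uses a
    # ten-word toy vocabulary purely for illustration.
    import hashlib
    import random

    VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "rug"]

    def green_list(prev_token):
        """Pseudorandom half of the vocabulary, seeded by the previous token."""
        seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
        return set(random.Random(seed).sample(VOCAB, len(VOCAB) // 2))

    def green_fraction(tokens):
        """Watermarked text scores well above the ~0.5 expected by chance."""
        hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
        return hits / max(1, len(tokens) - 1)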

Read the rest of this entry »

Comments (10)

The LLM-detection boom

Joe Marshall, "As AI cheating booms, so does the industry detecting it: ‘We couldn’t keep up with demand’", The Guardian 7/5/2023:

Since its release last November, ChatGPT has shaken the education world. The chatbot and other sophisticated AI tools are reportedly being used everywhere from college essays to high school art projects. A recent survey of 1,000 students at four-year universities by Intelligent.com found that 30% of college students have reported using ChatGPT on written assignments.

This is a problem for schools, educators and students – but a boon for a small but growing cohort of companies in the AI-detection business. Players like Winston AI, Content at Scale and Turnitin are billing for their ability to detect AI-involvement in student work, offering subscription services where teachers can run their students’ work through a web dashboard and receive a probability score that grades how “human” or “AI” the text is.

Read the rest of this entry »

Comments (5)

Alan Turing's revenge?

Ilia Shumailov et al., "The Curse of Recursion: Training on Generated Data Makes Models Forget", arxiv.org 5/31/2023:

What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs.
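The mechanism is easy to demonstrate in miniature. In this toy version, each "generation" is just a finite sample drawn from the previous generation's output (resampling stands in for actual model fitting, an obvious simplification): rare tail values disappear early, and once gone they can never come back.

    # Toy "model collapse": each generation is a finite sample of the
    # previous one. Watch the extreme values (the tails) vanish.
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.standard_normal(100_000)       # generation 0: real data
    for gen in range(1, 9):
        data = rng.choice(data, size=1_000)   # "train" on the previous output
        print(f"gen {gen}: std={data.std():.3f}  max|x|={np.abs(data).max():.2f}")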

Read the rest of this entry »

Comments (14)

It's impossible to detect LLM-created text

Last year, I expressed considerable skepticism about the prospects for accurate detection of text generated by Large Language Models ("Detecting LLM-created essays?", 12/20/2022). Since then, many new systems claiming to detect LLM outputs have emerged, notably Turnitin's "AI writing detector".

In a recent post on AI Weirdness ("Don't use AI detectors for anything important", 6/30/2023), Janelle Shane presents multiple examples of multiple kinds of failure, and explains why things are not likely to change.
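Most detectors lean on some version of the same heuristic: score the text with a reference language model, and flag prose that's "too predictable" as machine-generated. A bare-bones version, using GPT-2 as the reference model (an assumption about the general method, not a description of any particular product):

    # Bare-bones perplexity scoring with GPT-2 via Hugging Face transformers.
    # Low perplexity = predictable text, which detectors read as "AI-like";
    # formulaic human prose scores low too, hence the false positives.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def perplexity(text):
        ids = tok(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean next-token negative log-likelihood
        return torch.exp(loss).item()

This is how famously formulaic human documents like the U.S. Constitution end up getting flagged as "AI-generated" by some detectors.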

Read the rest of this entry »

Comments (3)

Quirky speech-to-text, weird diarization

From Daniel Deutsch:

We had a long drive yesterday, so we listened to a “robot” reading the entire indictment. It certainly isn’t flawless, but I was surprised by how good it is, especially when it gets “excited” while enacting dialogue.

Indeed, the text-to-speech quality is quite good — though unfortunately they don't tell us which TTS software they used.
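For anyone who wants to replicate the experiment, here's one way to do it with pyttsx3, a common offline TTS wrapper. This is certainly not necessarily what Daniel's "robot" was, and the filenames are made up:

    # One off-the-shelf way to have a "robot" read a long document aloud.
    import pyttsx3

    engine = pyttsx3.init()
    engine.setProperty("rate", 165)  # speaking rate in words per minute
    with open("indictment.txt") as f:
        engine.save_to_file(f.read(), "indictment_reading.wav")
    engine.runAndWait()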

Here's the opening, which is indeed entirely clear and even nearly natural-sounding:

Read the rest of this entry »

Comments (2)

LLMs as coders?

I've recently seen many articles like this one, "You probably don't need to learn to code anymore" (Medium 6/5/2023), arguing that Large Language Models will make human programming (and human programmers) unnecessary. These arguments puzzle me, because my experience with LLMs suggests that they can't be relied on even for very simple programming tasks. After the fold, I'll give a recent example from (the experimental version of) Bard.

Read the rest of this entry »

Comments (23)

"Wordectomy"

The medical news site MedPage Today has recently added a daily game page, "Wordectomy", in which a medically-relevant Wikipedia article is presented with all letters blanked out except for punctuation and (some) function words, e.g.
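The masking rule as described is simple to sketch in code. The function-word list below is my guess, not MedPage Today's actual list:

    # Rough sketch of the "Wordectomy" masking rule: blank every letter
    # except in punctuation and a small set of function words.
    import re

    FUNCTION_WORDS = {"the", "a", "an", "of", "in", "on", "and", "or", "to",
                      "is", "are", "was", "were", "by", "with", "for", "as"}

    def wordectomy(text):
        def mask(m):
            word = m.group(0)
            return word if word.lower() in FUNCTION_WORDS else "_" * len(word)
        return re.sub(r"[A-Za-z]+", mask, text)

    print(wordectomy("Aspirin is used to reduce fever and relieve mild pain."))
    # _______ is ____ to ______ _____ and _______ ____ ____.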

Read the rest of this entry »

Comments (10)

Hack of the year: 1980

I recently stumbled on this 5/10/2023 Medium article by David Brock, "A Backup of Historical Proportions" — which reminded me of the Xerox Palo Alto Research Center ("PARC") and the Xerox Alto. Those were the people and the machine that invented interactive GUIs on bit-mapped displays, the computer mouse, and so on — though it took Steve Jobs to "borrow" the ideas and turn them into a social (and business) success.

But as a speech person, I always thought it was odd and unfortunate that the Alto had no provision for audio input or output — and I was impressed by the hack that Henry Thompson used to get around the audio output problem for his 1980 Berkeley thesis, "Stress and Salience in English: Theory and Practice".

Read the rest of this entry »

Comments (11)