Archive for Artificial intelligence

Name-transcription slop

Friday's On The Media, "Deep Fakes, Data Centers, And AI Slop — Are We Cooked?" has some linguistically interesting discussion, especially the part about the rise of AI-generated trolling — more on that later. But this post is just a quick note on a widespread symptom of current end-to-end speech-to-text technology, where the text side of the process is built from letter-sequence tokens of obscure origin, yielding some peculiar spelling errors.
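
Those spelling errors are a natural consequence of subword tokenization: a name the model has rarely seen gets assembled from familiar letter-sequence pieces, and homophonous names can map to different piece sequences. Here is a minimal sketch of the greedy longest-match segmentation that subword tokenizers typically use — the toy vocabulary and example names are hypothetical, not drawn from any real ASR system:

```python
# Toy illustration of greedy longest-match subword segmentation.
# VOCAB is a made-up vocabulary, not any real system's token inventory.
VOCAB = {"kath", "cath", "ryn", "rin", "k", "a", "t", "h", "r", "y", "n", "c"}

def segment(word, vocab=VOCAB):
    """Split a word into subword tokens, always taking the longest match."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # fall back to a single character
            i += 1
    return tokens

print(segment("kathryn"))  # ['kath', 'ryn']
print(segment("cathryn"))  # ['cath', 'ryn'] -- same sound, different tokens
```

Since the acoustic evidence is the same for both token sequences, a model that has seen "Cathryn" more often in training can confidently emit that spelling for someone named Kathryn — plausible letters, wrong name.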

Read the rest of this entry »

Comments (6)

Voices as instruments, instruments as voices

Yesterday I pointed out the trombonish glissando in Bobby Vinton's "Blue Velvet"; today, during my morning ablutions, I heard on the radio a jazz singer do a whole song sounding like a musical instrument.  I don't think there was any digital or electronic assistance, just his naturally endowed voice.

Read the rest of this entry »

Comments (6)

More on algorithmic culture

In "Agentic culture" (8/30/2025) and "'Moloch's bargain'?" (10/12/2025) I cited some work on how interactions among algorithmic "agents" can create (socially) bad results that were not directly programmed by their inventors. I continue to be surprised at how little attention has been paid to this issue in the media, given the excitement over agentic AI. I've found a fair amount of other research with similar content, as searches like this illustrate, which makes me wonder even more about the relative lack of uptake.

Read the rest of this entry »

Comments (6)

"LLM Council"

Comments (4)

Language variation writ large

The Vastness of Language Variation Across the Globe
Panel. AAAS 2026 Annual Meeting.  Coming in February 2026.

Organizer:  Lenore Grenoble, University of Chicago, Chicago, IL
Co-Organizer:  Jeff Good, University at Buffalo, Buffalo, NY
Moderator:  Jeff Good, University at Buffalo, Buffalo, NY

Panelists

"Multilingual Language Ecologies and Linguistic Diversity",
Wilson de Lima Silva, Linguistics, University of Arizona, Tucson, AZ

"AI Approaches to the Study of Gesture, Prosody, and Linguistic Diversity",
 Kathryn Franich, Linguistics, Harvard University, Cambridge, MA

"Sometimes Big Questions Call for Small Data",
Gareth Roberts, Linguistics, University of Pennsylvania, Philadelphia, PA

Read the rest of this entry »

Comments off

AI to the rescue of a Greek philosopher's work buried by Vesuvius

A year and a half ago, we learned of the initial AI-assisted decipherment of a charred scroll that had been buried for two millennia under the volcanic ashes of Mt. Vesuvius (eruption 79 AD) in the city of Herculaneum:  "AI (and human ingenuity) to the rescue" (2/6/24).

Since then, researchers have continued to work on the scroll, and they have now identified the precise text on it:

Lost Work of Greek Philosopher Philodemus Unearthed from Herculaneum Scroll
By Tasos Kokkinidis, Greek Reporter (May 6, 2025)

Read the rest of this entry »

Comments (19)

"Moloch's bargain"?

In “Agentic Culture” (8/30/2025), I cited some work by economists about agentic collusion in fixing prices and dividing markets — to which I might add links here, here, and here. And in that post, I noted that the problematic effects of AI agents learning from their social interactions in other areas have been mostly ignored.

But here it comes: Batu El and James Zou, "Moloch's Bargain: Emergent Misalignment When LLMs Compete for Audiences", 10/7/2025.

Read the rest of this entry »

Comments (10)

Discourse on the AI Method of Rightly Reasoning

An interesting recent paper (Adithya Bhaskar, Xi Ye, & Danqi Chen, “Language Models that think, chat better”, arXiv.org 09/24/2025) starts like this:

Thinking through the consequences of one’s actions—and revising them when needed—is a defining feature of human intelligence (often called “system 2 thinking”, Kahneman (2011)). It has also become a central aspiration for large language models (LLMs). [1]

The footnote:

[1] Language models think, therefore, language models are?

Read the rest of this entry »

Comments (4)

AI: Not taking jobs yet?

Martha Gimbel et al., "Evaluating the Impact of AI on the Labor Market: Current State of Affairs", The Budget Lab (Yale) 10/1/2025:

Overall, our metrics indicate that the broader labor market has not experienced a discernible disruption since ChatGPT’s release 33 months ago, undercutting fears that AI automation is currently eroding the demand for cognitive labor across the economy.

While this finding may contradict the most alarming headlines, it is not surprising given past precedents. Historically, widespread technological disruption in workplaces tends to occur over decades, rather than months or years. Computers didn’t become commonplace in offices until nearly a decade after their release to the public, and it took even longer for them to transform office workflows. Even if new AI technologies will go on to impact the labor market as much, or more, dramatically, it is reasonable to expect that widespread effects will take longer than 33 months to materialize.

Read the rest of this entry »

Comments (8)

Charlie Hustle in the AI industry

Would You Work ‘996’? The Hustle Culture Trend Is Taking Hold in Silicon Valley.
The number combination refers to a work schedule — 9 a.m. to 9 p.m., six days a week — that has its origins in China’s hard-charging tech scene.
By Lora Kelley, NYT (Sept. 28, 2025)

The inverse of involution.

Working 9 to 5 is a way to make a living. But in Silicon Valley, amid the competitive artificial intelligence craze, grinding “996” is the way to get ahead. Or at least to signal to those around you that you’re taking work seriously.

Read the rest of this entry »

Comments (11)

LLMs and tree-structuring

"Active Use of Latent Tree-Structured Sentence Representation in Humans and Large Language Models." Liu, Wei et al. Nature Human Behaviour (September 10, 2025).

Abstract

Understanding how sentences are represented in the human brain, as well as in large language models (LLMs), poses a substantial challenge for cognitive science. Here we develop a one-shot learning task to investigate whether humans and LLMs encode tree-structured constituents within sentences. Participants (total N = 372, native Chinese or English speakers, and bilingual in Chinese and English) and LLMs (for example, ChatGPT) were asked to infer which words should be deleted from a sentence. Both groups tend to delete constituents, instead of non-constituent word strings, following rules specific to Chinese and English, respectively. The results cannot be explained by models that rely only on word properties and word positions. Crucially, based on word strings deleted by either humans or LLMs, the underlying constituency tree structure can be successfully reconstructed. Altogether, these results demonstrate that latent tree-structured sentence representations emerge in both humans and LLMs.
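
The paper's key test — whether deleted word strings line up with constituents — can be sketched in a few lines. This is a minimal illustration with a hand-written bracketing, not the authors' code:

```python
# Check whether a deleted word span matches a constituent (a subtree)
# in a toy constituency bracketing, represented as nested lists.

def constituent_spans(tree, start=0):
    """Return the set of (start, end) word spans covered by subtrees."""
    if isinstance(tree, str):                    # a single word
        return {(start, start + 1)}, start + 1
    spans, i = set(), start
    for child in tree:
        child_spans, i = constituent_spans(child, i)
        spans |= child_spans
    spans.add((start, i))                        # this whole subtree
    return spans, i

# Hypothetical bracketing of "the dog chased the cat":
# [[the dog] [chased [the cat]]]
TREE = [["the", "dog"], ["chased", ["the", "cat"]]]
SPANS, _ = constituent_spans(TREE)

print((3, 5) in SPANS)  # "the cat" is a constituent -> True
print((1, 3) in SPANS)  # "dog chased" is not -> False
```

If humans' and LLMs' deletions fall overwhelmingly on spans in this set rather than on arbitrary contiguous strings, that is evidence for a latent tree-structured representation — and, as the abstract notes, the preferred deletions can then be run in reverse to reconstruct the tree.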

Read the rest of this entry »

Comments (7)

More of GPT-5's absurd image labelling

GPT-5 is impressively good at some things (see "No X is better than Y", 8/14/2025, or "GPT-5 can parse headlines!", 9/7/2025), but shockingly bad at others. And I'm not talking about "hallucinations", which is a term used for plausible but false facts or references — such mistakes remain a problem, but not every answer is a hallucination. Adding labels to images that it creates, on the other hand, remains reliably and absurdly bad.

Read the rest of this entry »

Comments (17)

GPT-5 can parse headlines!

At least sometimes…

Philip Taylor sent a link to this Guardian article "West Point cancels ceremony to honor Tom Hanks as ‘outstanding US citizen’", with the comment

It was only on reading the article that I realised that West Point was/were not cancelling the ceremony in order to honour Tom Hanks (as I had originally thought/believed) but were in fact cancelling a ceremony intended to honour Tom Hanks …

I've been meaning to test GPT-5's parsing ability, ever since I discovered its surprising ability to represent semantic scope ambiguities in correct predicate logic (see "No X is better than Y", 8/13/2025, and the details of its analyses).

Read the rest of this entry »

Comments (6)