Archive for Artificial intelligence

Language variation writ large

The Vastness of Language Variation Across the Globe
Panel. AAAS 2026 Annual Meeting. Coming in February 2026.

Organizer: Lenore Grenoble, University of Chicago, Chicago, IL
Co-Organizer: Jeff Good, University at Buffalo, Buffalo, NY
Moderator: Jeff Good, University at Buffalo, Buffalo, NY

Panelists

"Multilingual Language Ecologies and Linguistic Diversity",
Wilson de Lima Silva, Linguistics, University of Arizona, Tucson, AZ

"AI Approaches to the Study of Gesture, Prosody, and Linguistic Diversity",
Kathryn Franich, Linguistics, Harvard University, Cambridge, MA

"Sometimes Big Questions Call for Small Data",
Gareth Roberts, Linguistics, University of Pennsylvania, Philadelphia, PA

Read the rest of this entry »

Comments

AI to the rescue of a Greek philosopher's work buried by Vesuvius

A year and a half ago, we learned of the initial AI-assisted decipherment of a charred scroll that had been buried for two millennia under the volcanic ashes of Mt. Vesuvius (eruption 79 AD) in the city of Herculaneum: "AI (and human ingenuity) to the rescue" (2/6/24).

Since then, researchers have continued to work on the scroll, and they have now identified the precise text on it:

Lost Work of Greek Philosopher Philodemus Unearthed from Herculaneum Scroll
By Tasos Kokkinidis, Greek Reporter (May 6, 2025)

Read the rest of this entry »

Comments (19)

"Moloch's bargain"?

In “Agentic Culture” (8/30/2025), I cited some work by economists about agentic collusion in fixing prices and dividing markets — to which I might add links here, here, and here. And in that post, I noted that the problematic effects of AI agents learning from their social interactions in other areas have been mostly ignored.

But here it comes: Batu El and James Zou, "Moloch's Bargain: Emergent Misalignment When LLMs Compete for Audiences", 10/7/2025.

Read the rest of this entry »

Comments (10)

Discourse on the AI Method of Rightly Reasoning

An interesting recent paper (Adithya Bhaskar, Xi Ye, & Danqi Chen, “Language Models that think, chat better”, arXiv.org 09/24/2025) starts like this:

Thinking through the consequences of one’s actions—and revising them when needed—is a defining feature of human intelligence (often called “system 2 thinking”, Kahneman (2011)). It has also become a central aspiration for large language models (LLMs).1

The footnote:

1. Language models think, therefore, language models are?

Read the rest of this entry »

Comments (4)

AI: Not taking jobs yet?

Martha Gimbel et al., "Evaluating the Impact of AI on the Labor Market: Current State of Affairs", The Budget Lab (Yale) 10/1/2025:

Overall, our metrics indicate that the broader labor market has not experienced a discernible disruption since ChatGPT’s release 33 months ago, undercutting fears that AI automation is currently eroding the demand for cognitive labor across the economy.

While this finding may contradict the most alarming headlines, it is not surprising given past precedents. Historically, widespread technological disruption in workplaces tends to occur over decades, rather than months or years. Computers didn’t become commonplace in offices until nearly a decade after their release to the public, and it took even longer for them to transform office workflows. Even if new AI technologies will go on to impact the labor market as much, or more, dramatically, it is reasonable to expect that widespread effects will take longer than 33 months to materialize.

Read the rest of this entry »

Comments (8)

Charlie Hustle in the AI industry

Would You Work ‘996’? The Hustle Culture Trend Is Taking Hold in Silicon Valley.
The number combination refers to a work schedule — 9 a.m. to 9 p.m., six days a week — that has its origins in China’s hard-charging tech scene.
By Lora Kelley, NYT (Sept. 28, 2025)

The inverse of involution.

Working 9 to 5 is a way to make a living. But in Silicon Valley, amid the competitive artificial intelligence craze, grinding “996” is the way to get ahead. Or at least to signal to those around you that you’re taking work seriously.

Read the rest of this entry »

Comments (11)

LLMs and tree-structuring

"Active Use of Latent Tree-Structured Sentence Representation in Humans and Large Language Models." Liu, Wei et al. Nature Human Behaviour (September 10, 2025).

Abstract

Understanding how sentences are represented in the human brain, as well as in large language models (LLMs), poses a substantial challenge for cognitive science. Here we develop a one-shot learning task to investigate whether humans and LLMs encode tree-structured constituents within sentences. Participants (total N = 372, native Chinese or English speakers, and bilingual in Chinese and English) and LLMs (for example, ChatGPT) were asked to infer which words should be deleted from a sentence. Both groups tend to delete constituents, instead of non-constituent word strings, following rules specific to Chinese and English, respectively. The results cannot be explained by models that rely only on word properties and word positions. Crucially, based on word strings deleted by either humans or LLMs, the underlying constituency tree structure can be successfully reconstructed. Altogether, these results demonstrate that latent tree-structured sentence representations emerge in both humans and LLMs.
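
To make the paradigm concrete, here is a toy version of the deletion test in Python (my own sketch, with a made-up sentence and parse, not the authors' code): a deleted word string counts as a constituent just in case it matches the full leaf sequence of some subtree of the parse.

    # Hypothetical parse of "the dog chased the cat", as nested tuples.
    TREE = ("S", ("NP", "the", "dog"),
                 ("VP", "chased", ("NP", "the", "cat")))

    def leaves(node):
        # Flatten a subtree to its word string.
        if isinstance(node, str):
            return (node,)
        return tuple(w for child in node[1:] for w in leaves(child))

    def constituent_spans(node, found=None):
        # Collect the leaf sequence of every labeled subtree.
        if found is None:
            found = set()
        if not isinstance(node, str):
            found.add(leaves(node))
            for child in node[1:]:
                constituent_spans(child, found)
        return found

    SPANS = constituent_spans(TREE)
    print(tuple("the cat".split()) in SPANS)     # True: an NP, a licit deletion
    print(tuple("chased the".split()) in SPANS)  # False: crosses a bracket

The paper's analysis also runs in the other direction, reconstructing the tree from the word strings that humans or LLMs actually delete; the sketch above only checks the forward direction.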

Read the rest of this entry »

Comments (7)

More of GPT-5's absurd image labelling

GPT-5 is impressively good at some things (see "No X is better than Y", 8/14/2025, or "GPT-5 can parse headlines!", 9/7/2025), but shockingly bad at others. And I'm not talking about "hallucinations", the term used for plausible but false facts or references. Such mistakes remain a problem, but not every failure is a hallucination. Adding labels to images that it creates, on the other hand, remains reliably and absurdly bad.

Read the rest of this entry »

Comments (17)

GPT-5 can parse headlines!

At least sometimes…

Philip Taylor sent a link to this Guardian article "West Point cancels ceremony to honor Tom Hanks as ‘outstanding US citizen’", with the comment

It was only on reading the article that I realised that West Point was/were not cancelling the ceremony in order to honour Tom Hanks (as I had originally thought/believed) but were in fact cancelling a ceremony intended to honour Tom Hanks …

I've been meaning to test GPT-5's parsing ability, ever since I discovered its surprising ability to represent semantic scope ambiguities in correct predicate logic (see "No X is better than Y", 8/13/2025, and the details of its analyses).
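
For concreteness, here is one way to write the two readings of the West Point headline in Davidsonian predicate-logic style (my own rendering, offered as illustration, not GPT-5's actual output):

    % Philip Taylor's first reading: cancelling in order to honor Hanks
    \exists e\, \exists c\, [\mathit{cancel}(e, \mathit{wp}, c) \wedge \mathit{ceremony}(c)
      \wedge \mathit{purpose}(e, \mathit{honor}(\mathit{wp}, \mathit{hanks}))]

    % The intended reading: cancelling a ceremony meant to honor Hanks
    \exists e\, \exists c\, [\mathit{cancel}(e, \mathit{wp}, c) \wedge \mathit{ceremony}(c)
      \wedge \mathit{purpose}(c, \mathit{honor}(\mathit{wp}, \mathit{hanks}))]

The only difference is the first argument of purpose, the cancelling event e or the ceremony c, which is exactly the attachment choice that garden-pathed the reader.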

Read the rest of this entry »

Comments (6)

From the Vice Provost for Tokenization

Or rather, messages from Penn's Office of the Vice Provost for Research, mysteriously tokenized and re-formatted by Gmail.

The start of the Fall 2025 OVPR email newsletter, as displayed by MS Outlook, has 14 bullet points referencing hyperlinked subtopics:

But Gmail (where I first read the newsletter) shows me the same information as 14 columns of (individually) hyperlinked textual tokens, with a bullet on the first token of each column:

Read the rest of this entry »

Comments (5)

"What makes an AI system an agent?"

And what are the consequences of the growing population of AI agents?

In "Agentic culture", I observed that today's "AI agents" have the same features that made "Agent Based Models", 50 years ago, a way to model the emergence and evolution of culture. And I expressed surprise that (almost) none of the concerns about AI impact have taken account of this obvious fact.

There was a little push-back in the comments, for example the claim that "There may come a time when AI is autonomous, reflective and has motives, but that is a long, long way off." Which misses the point, given the entirely unintelligent nature of old-fashioned ABM systems.

Antonio Gulli from Google has recently posted Agentic Design Patterns, which offers some useful (and detailed) descriptions of the state of the agentic art, along with example code.
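
For readers wondering what the minimal ingredients are, here is a schematic agent loop in Python (my own sketch, not code from Gulli's collection; the model and act stubs are placeholders): what separates an agent from a bare model is the closed loop of deciding, acting on an environment, and carrying state forward.

    def model(prompt: str) -> str:
        # Stand-in for an LLM call: any policy mapping context to an action.
        return "answer" if "fact found" in prompt else "search"

    def act(action: str) -> str:
        # Apply the chosen action to the world; return an observation.
        return "fact found" if action == "search" else "done"

    def agent_loop(goal: str, max_steps: int = 5) -> list:
        memory = ["goal: " + goal]                # state persists across steps
        for _ in range(max_steps):
            action = model("; ".join(memory))     # decide
            observation = act(action)             # act
            memory.append(action + " -> " + observation)  # update state
            if observation == "done":
                break
        return memory

    print(agent_loop("answer an obscure question"))

Swap in a real LLM for model and real tools for act, and you have the skeleton that current agentic frameworks elaborate.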

Read the rest of this entry »

Comments (8)

Agentic culture

Back in the 1940s, Stanislaw Ulam and John von Neumann came up with the idea of "Cellular automata", which started with models of crystal growth and self-replicating systems, and continued over the decades with explorations in many areas, popularized in the 1970s by Conway's Game of Life. One strand of these explorations became known as Agent-Based Models, applied to problems in ecology, sociology, and economics. One especially influential result was Robert Axelrod's work in the mid-1980s on the Evolution of Cooperation. For a broader survey, see De Marchi and Page, "Agent-based models", Annual Review of Political Science, 2014.
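
To give a flavor of how simple such agents can be, here is a minimal Axelrod-style tournament in Python (a sketch of the idea, not Axelrod's actual 1980s setup): fixed strategies play an iterated prisoner's dilemma, and reciprocity holds its own against exploitation.

    # Row player's and column player's payoffs for Cooperate/Defect.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(opponent_history):
        # Cooperate first, then copy the opponent's last move.
        return opponent_history[-1] if opponent_history else "C"

    def always_defect(opponent_history):
        return "D"

    def play(strategy_a, strategy_b, rounds=200):
        score_a = score_b = 0
        seen_by_a, seen_by_b = [], []   # each side's record of the other's moves
        for _ in range(rounds):
            move_a = strategy_a(seen_by_a)
            move_b = strategy_b(seen_by_b)
            pay_a, pay_b = PAYOFF[(move_a, move_b)]
            score_a += pay_a
            score_b += pay_b
            seen_by_a.append(move_b)
            seen_by_b.append(move_a)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))    # (600, 600): mutual cooperation
    print(play(tit_for_tat, always_defect))  # (199, 204): defection gains little

No agent here is intelligent in any interesting sense, which is the point: the population-level outcomes, like the emergence of cooperation, come from the interactions, not from the agents.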

Read the rest of this entry »

Comments (19)

More on GPT-5 pseudo-text in graphics

In "Chain of thought hallucination?" (8/8/2025), I illustrated some of the weird text representations that GPT-5 creates when its response is an image rather than a text string. I now have its recommendation for avoiding such problems — which sometimes works, so you can try it…

Read the rest of this entry »

Comments (19)