Archive for Artificial intelligence

Agentic culture

Back in the 1940s, Stanislaw Ulam and John von Neumann came up with the idea of cellular automata, starting with models of crystal growth and self-replicating systems. Explorations continued over the decades in many areas, popularized in the 1970s by Conway's Game of Life. One strand of this work became known as agent-based models, applied to problems in ecology, sociology, and economics. One especially influential result was Robert Axelrod's work in the mid-1980s on the Evolution of Cooperation.  For a broader survey, see De Marchi and Page, "Agent-Based Models", Annual Review of Political Science, 2014.
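To make "cellular automaton" concrete, here is a minimal sketch of Conway's Game of Life in Python; the numpy implementation, grid size, and glider pattern are my own illustrative choices, not anything from the post:

```python
# A cellular automaton: every cell updates in lockstep from a rule that
# looks only at its immediate neighbors. Conway's rule: a dead cell with
# exactly 3 live neighbors is born; a live cell with 2 or 3 survives.
import numpy as np

def life_step(grid):
    """Advance the Game of Life one generation (edges wrap around)."""
    # Count each cell's eight neighbors by summing shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

grid = np.zeros((10, 10), dtype=int)
for r, c in [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]:  # a "glider"
    grid[r, c] = 1

for _ in range(4):   # after 4 steps a glider reappears, shifted by (1, 1)
    grid = life_step(grid)
print(grid)
```

Agent-based models generalize this scheme: the uniform cells become heterogeneous agents with their own states and decision rules, as in Axelrod's iterated prisoner's dilemma tournaments.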

Read the rest of this entry »

Comments (19)

More on GPT-5 pseudo-text in graphics

In "Chain of thought hallucination?" (8/8/2025), I illustrated some of the weird text representations that GPT-5 creates when its response is an image rather than a text string. I now have its recommendation for avoiding such problems — which sometimes works, so you can try it…

Read the rest of this entry »

Comments (19)

The Heisig method for learning sinographs

I Used to Know How to Write in Japanese:
Somehow, though, I can still read it
Marco Giancotti, Aether Mug (August 14, 2025)

During the last thirty to forty years, two of the most popular dictionaries for mastering sinographs have been those of James Heisig and Rick Harbaugh.  I was dubious about the efficacy of both and wished that my students wouldn't use them, but language learners flocked to these dictionaries, thinking that they offered a magic trick for remembering the characters.

The latter, written by an economist, relied on fallacious etymological "trees"; the former, written by a philosopher of religion, was based on brute memorization enhanced by magician's tricks.  Both placed the characters on a pedestal of visuality / iconicity without integrating them with spoken language.

I have already done a mini-review of Harbaugh's Chinese Characters and Culture: A Genealogy and Dictionary (New Haven: Yale Far Eastern Publications, 1998) on pp. 25-26 here:  Reviews XI, Sino-Platonic Papers, 145 (August, 2004).  The remainder of this post will consist of extracts of Giancotti's essay and the view of a distinguished Japanologist-linguist on Heisig's lexicographical methods.

Read the rest of this entry »

Comments (22)

AI waifu & husbando

Forty-five or so years ago, my Chinese and émigré friends who knew the Chinese language and were familiar with Chinese society and culture used to josh each other with these terms:

fūrén 夫人 ("madam; Mrs.")

wàifū 外夫 ("outside husband", but sounds like "wife")

nèirén 內人 (lit., "inside person", i.e. my "[house]wife")

The first term is an established lexical item, while the latter two are jocular or ad hoc; there are also other regional and local expressions formed in a similar fashion, as well as some japonismes.

All of these terms were formed from the following four morphosyllables:

夫 ("man; male adult; husband")

rén 人 ("man; person; people")

wài 外 ("outside")

nèi 內 ("inside")

Read the rest of this entry »

Comments (6)

No X is better than Y

The following sentence in this Bloomberg story

I’m of the mindset that no car payment is better than a new car payment – hence why my 2017 Volvo will likely stick around for a few more years – but I’ve been enticed more about the electric vehicles on the market.

…could lead the reader down a garden path of wondering why a new car payment is the best car payment.

Read the rest of this entry »

Comments (29)

I.E. A.I.

In an update to "Morpho-phonologically AI", I wrote

Ironically, since this puzzle was vocalically inspired by the term "AI", I'm guessing that current AI systems are not very good at solving (or creating) puzzles like this. I'll give it a try later today.

But it seems that I was wrong.

Read the rest of this entry »

Comments (3)

Large Language Pal restored

"OpenAI Brings Back Fan-Favorite GPT-4o After a Massive User Revolt", Gizmodo 8/10/2025:

After a disastrous 72 hours that saw its most loyal users in open revolt, OpenAI is making a major U-turn.

In a series of posts on X (formerly Twitter) Sunday, CEO Sam Altman announced that the company is bringing back its beloved older AI models, including GPT-4o, and dramatically increasing usage limits for paying subscribers, a clear peace offering to a furious customer base.

Read the rest of this entry »

Comments off

Chain of thought hallucination?

Avram Pitch, "Meet President Willian H. Brusen from the great state of Onegon", The Register 8/8/2025:

OpenAI's GPT-5, unveiled on Thursday, is supposed to be the company's flagship model, offering better reasoning and more accurate responses than previous-gen products. But when we asked it to draw maps and timelines, it responded with answers from an alternate dimension.

Read the rest of this entry »

Comments (21)

AI for reconstructing degraded Latin text

AI Is Helping Historians With Their Latin
A new tool fills in missing portions of ancient inscriptions from the Roman Empire

By Nidhi Subbaraman, Aug. 6, 2025

In recent years, we have encountered many cases of AI assisting (or not) in the decipherment of ancient manuscripts in diverse languages.  See several cases listed in the "Selected readings".  Now it's Latin's turn to benefit from the ministrations of artificial intelligence.

People across the Roman Empire wrote poetry, kept business accounts and described their conquests and ambitions in inscriptions on pots, plaques and walls.

The surviving text gives historians a rare glimpse of life in those times—but most of the objects are broken or worn.

“It’s like trying to solve a gigantic jigsaw puzzle, only there is tens of thousands more pieces to that puzzle, and about 90% of them are missing,” said Thea Sommerschield, a historian at the University of Nottingham.

Now, artificial intelligence is filling in the blanks.

An AI tool designed by Sommerschield and other European scientists can predict the missing text of partially degraded Latin inscriptions made hundreds of years ago and help historians estimate their date and place of origin.
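The excerpt doesn't describe the model's internals, but the task itself, predicting lost tokens from surrounding context, is exactly what a masked language model does. As a rough sketch of that general idea (emphatically not the researchers' system; the off-the-shelf model and the example sentence are my own assumptions):

```python
# Guessing a word lost from a Latin sentence with an off-the-shelf
# multilingual masked language model. bert-base-multilingual-cased was
# trained on Wikipedia in ~100 languages, Latin among them, so it can
# make rough guesses; the dedicated epigraphic tool is far more capable.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-multilingual-cased")

damaged = "Gallia est omnis divisa in partes [MASK]."  # Caesar, with a gap
for guess in fill(damaged):
    print(f"{guess['token_str']:>12}  p = {guess['score']:.3f}")
```

The researchers' tool goes well beyond this sketch: per the article, it is built for Roman inscriptions and also helps historians estimate a text's date and place of origin.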

Read the rest of this entry »

Comments (8)

Baby talk

From here (at least that's where I saw it):


Read the rest of this entry »

Comments (8)

Little Models, Language and otherwise

The last couple of months have seen some interesting publications about AI systems that are small by modern standards.

Read the rest of this entry »

Comments off

AMI not AGI?

From Yann LeCun's presentation at the AI, Science and Society event in Paris last February:

Read the rest of this entry »

Comments (2)

"Like learning physics by watching Einstein do yoga"

The most interesting LLM research that I've seen recently is from Alex Cloud and others at Anthropic and Truthful AI, "Subliminal Learning: Language models transmit behavioral traits via hidden signals in data", 7/20/2025:

ABSTRACT: We study subliminal learning, a surprising phenomenon where language models transmit behavioral traits via semantically unrelated data. In our main experiments, a "teacher" model with some trait T (such as liking owls or being misaligned) generates a dataset consisting solely of number sequences. Remarkably, a "student" model trained on this dataset learns T. This occurs even when the data is filtered to remove references to T. We observe the same effect when training on code or reasoning traces generated by the same teacher model. However, we do not observe the effect when the teacher and student have different base models. To help explain our findings, we prove a theoretical result showing that subliminal learning occurs in all neural networks under certain conditions, and demonstrate subliminal learning in a simple MLP classifier. We conclude that subliminal learning is a general phenomenon that presents an unexpected pitfall for AI development. Distillation could propagate unintended traits, even when developers try to prevent this via data filtering.
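The MLP claim is easy to gesture at in code. Below is a toy sketch loosely in the spirit of the paper's classifier experiment (my own setup, not the authors' code): teacher and student share an initialization, the teacher learns a toy "trait", and the student is then distilled only on the teacher's outputs for random noise inputs, never seeing a labeled example.

```python
# Toy subliminal-learning-style demo: architecture, task, data, and
# hyperparameters are all illustrative assumptions, not from the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

def mlp():
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

teacher = mlp()
student = mlp()
# Shared base: the paper reports the effect only when teacher and
# student start from the same base model.
student.load_state_dict(teacher.state_dict())

# The teacher's "trait": classify points by the sign of feature 0.
X = torch.randn(2000, 20)
y = (X[:, 0] > 0).long()

opt = torch.optim.Adam(teacher.parameters(), lr=1e-3)
for _ in range(300):
    opt.zero_grad()
    nn.functional.cross_entropy(teacher(X), y).backward()
    opt.step()

# Distill the student on noise only: no task data, no labels.
noise = 5 * torch.randn(2000, 20)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for _ in range(300):
    opt.zero_grad()
    nn.functional.mse_loss(student(noise), teacher(noise).detach()).backward()
    opt.step()

with torch.no_grad():
    acc = (student(X).argmax(dim=1) == y).float().mean().item()
print(f"student accuracy on the teacher's task: {acc:.2f}")  # expect well above 0.5
```

In this toy version the transfer is unsurprising, since matching the teacher's outputs anywhere in input space constrains the shared weights; the paper's startling finding is that the same mechanism operates when the "data" is innocuous-looking number sequences emitted by an LLM.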

Read the rest of this entry »

Comments (2)