Archive for Artificial intelligence

AI for reconstructing degraded Latin text

AI Is Helping Historians With Their Latin
A new tool fills in missing portions of ancient inscriptions from the Roman Empire

By Nidhi Subbaraman Aug. 6, 2025

In recent years, we have encountered many cases of AI assisting (or not) in the decipherment of ancient manuscripts in diverse languages.  See several cases listed in the "Selected readings".  Now it's Latin's turn to benefit from the ministrations of artificial intelligence.

People across the Roman Empire wrote poetry, kept business accounts and described their conquests and ambitions in inscriptions on pots, plaques and walls.

The surviving text gives historians a rare glimpse of life in those times—but most of the objects are broken or worn.

“It’s like trying to solve a gigantic jigsaw puzzle, only there are tens of thousands more pieces to that puzzle, and about 90% of them are missing,” said Thea Sommerschield, a historian at the University of Nottingham.

Now, artificial intelligence is filling in the blanks.

An AI tool designed by Sommerschield and other European scientists can predict the missing text of partially degraded Latin inscriptions made thousands of years ago and help historians estimate their date and place of origin.

Read the rest of this entry »

Comments (8)

Baby talk

From here (at least that's where I saw it):


Read the rest of this entry »

Comments (8)

Little Models, Language and otherwise

The last couple of months have seen some interesting publications about AI systems that are small by modern standards.

Read the rest of this entry »

Comments off

AMI not AGI?

From Yann LeCun's presentation at the AI, Science and Society event in Paris last February:

Read the rest of this entry »

Comments (2)

"Like learning physics by watching Einstein do yoga"

The most interesting LLM research that I've seen recently is from Alex Cloud and others at Anthropic and Truthful AI, "Subliminal Learning: Language models transmit behavioral traits via hidden signals in data", 7/20/2025:

ABSTRACT: We study subliminal learning, a surprising phenomenon where language models transmit behavioral traits via semantically unrelated data. In our main experiments, a "teacher" model with some trait T (such as liking owls or being misaligned) generates a dataset consisting solely of number sequences. Remarkably, a "student" model trained on this dataset learns T. This occurs even when the data is filtered to remove references to T. We observe the same effect when training on code or reasoning traces generated by the same teacher model. However, we do not observe the effect when the teacher and student have different base models. To help explain our findings, we prove a theoretical result showing that subliminal learning occurs in all neural networks under certain conditions, and demonstrate subliminal learning in a simple MLP classifier. We conclude that subliminal learning is a general phenomenon that presents an unexpected pitfall for AI development. Distillation could propagate unintended traits, even when developers try to prevent this via data filtering.
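
For intuition, here is a minimal toy sketch (my own, not the authors' code) loosely modeled on the paper's MLP demonstration: a teacher and a student start from the same initialization, the teacher learns a simple task, and the student is then distilled from the teacher only on random-noise inputs. The synthetic task, architecture, and hyperparameters below are hypothetical choices for illustration.

# Toy sketch of distillation on semantically unrelated (noise) data.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

def mlp():
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

# Teacher and student share the same initial weights (the condition the paper
# identifies as necessary for the effect).
init = mlp()
teacher, student = copy.deepcopy(init), copy.deepcopy(init)

# Give the teacher a "trait": competence on a simple synthetic classification task.
X = torch.randn(2000, 20)
y = (X[:, 0] + X[:, 1] > 0).long()
opt = torch.optim.Adam(teacher.parameters(), lr=1e-3)
for _ in range(300):
    opt.zero_grad()
    F.cross_entropy(teacher(X), y).backward()
    opt.step()

# Distill the student on pure noise inputs: it only ever sees the teacher's
# outputs on data that carries no information about the task itself.
noise = torch.randn(2000, 20)
with torch.no_grad():
    targets = teacher(noise)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for _ in range(300):
    opt.zero_grad()
    F.mse_loss(student(noise), targets).backward()
    opt.step()

# Does any of the teacher's task behavior leak through?
with torch.no_grad():
    acc = (student(X).argmax(dim=1) == y).float().mean().item()
print(f"student accuracy on the teacher's task: {acc:.2f}")

Whether the student ends up above chance on the teacher's task is precisely the question the paper raises about shared initialization; swapping in a differently initialized student would be the natural control.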

Read the rest of this entry »

Comments (2)

Neighborhood PR Bots

The PR campaign for the Unitree G1 robot now comes in at least three local variants: the "Uncle Bot" in China, "Jake the Rizzbot" in Austin, and a gay version of Jake in Los Angeles.

Read the rest of this entry »

Comments (3)

The effect of AI tools on coding

Joel Becker et al., "Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity", METR 7/10/2025:

Despite widespread adoption, the impact of AI tools on software development in the wild remains understudied. We conduct a randomized controlled trial (RCT) to understand how AI tools at the February–June 2025 frontier affect the productivity of experienced open-source developers. 16 developers with moderate AI experience complete 246 tasks in mature projects on which they have an average of 5 years of prior experience. Each task is randomly assigned to allow or disallow usage of early-2025 AI tools. When AI tools are allowed, developers primarily use Cursor Pro, a popular code editor, and Claude 3.5/3.7 Sonnet. Before starting tasks, developers forecast that allowing AI will reduce completion time by 24%. After completing the study, developers estimate that allowing AI reduced completion time by 20%. Surprisingly, we find that allowing AI actually increases completion time by 19%—AI tooling slowed developers down. This slowdown also contradicts predictions from experts in economics (39% shorter) and ML (38% shorter). To understand this result, we collect and evaluate evidence for 20 properties of our setting that a priori could contribute to the observed slowdown effect—for example, the size and quality standards of projects, or prior developer experience with AI tooling. Although the influence of experimental artifacts cannot be entirely ruled out, the robustness of the slowdown effect across our analyses suggests it is unlikely to primarily be a function of our experimental design.
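
As a back-of-the-envelope illustration of how a number like the 19% slowdown could be computed from such a trial (not necessarily the authors' exact specification), one could regress log completion time on an AI-allowed indicator with developer fixed effects. The file and column names below are hypothetical.

# Hypothetical analysis sketch: estimate the effect of allowing AI tools on task
# completion time from per-task data (columns invented for illustration).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical columns: developer_id, ai_allowed (0/1), completion_minutes
tasks = pd.read_csv("tasks.csv")
tasks["log_time"] = np.log(tasks["completion_minutes"])

# OLS on log time with developer fixed effects; exp(beta) - 1 is the percentage
# change in completion time when AI is allowed (positive = slowdown).
fit = smf.ols("log_time ~ ai_allowed + C(developer_id)", data=tasks).fit()
effect = np.exp(fit.params["ai_allowed"]) - 1
print(f"estimated change in completion time when AI is allowed: {effect:+.0%}")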

Read the rest of this entry »

Comments (10)

AI win of the day

In "Beautiful music and logical warts", I quoted (part of) the trollish conclusion of Rousseau's Lettre sur la Musique Française:

Je crois avoir fait voir qu’il n’y a ni mesure ni mélodie dans la musique française, parce que la langue n’en est pas susceptible ; que le chant français n’est qu’un aboiement continuel, insupportable à toute oreille non prévenue; que l’harmonie en est brute, sans expression, et sentant uniquement son remplissage d'écolier ; que les airs français ne sont point des airs ; que le récitatif français n’est point du récitatif. D’où je conclus que les Français n’ont point de musique et n’en peuvent avoir, ou que, si jamais ils en ont une, ce sera tant pis pour eux.

I believe I have shown that there is neither rhythm nor melody in French music, because the language is not capable of them; that French song is only a continual barking, unbearable to any unbiased ear; that the harmony is crude, without expression, and full of childish padding; that French airs are not airs; that French recitative is not recitative. From which I conclude that the French have no music and cannot have any, or that, if ever they have some, so much the worse for them.

Read the rest of this entry »

Comments (5)

…"wasted little time VERB.ing"…

Commenters noted the ambiguity of this sentence quoted earlier today in "Rococo":

When President Donald Trump returned to the White House in January, he wasted little time redecorating.

From Bob Ladd: "I was genuinely uncertain when I read the sentence about 'wasting little time' whether Trump had in fact gone right to work redecorating or rather had decided not to bother."

Read the rest of this entry »

Comments (16)

"AI" == "vehicle"?

Back in March, the AAAI ("Association for the Advancement of Artificial Intelligence") published an "AAAI Presidential Panel Report on the Future of AI Research":

The AAAI 2025 presidential panel on the future of AI research aims to help all AI stakeholders navigate the recent significant transformations in AI capabilities, as well as AI research methodologies, environments, and communities. It includes 17 chapters, each covering one topic related to AI research, and sketching its history, current trends and open challenges. The study has been conducted by 25 AI researchers and supported by 15 additional contributors and 475 respondents to a community survey.

You can read the whole thing here — and you should, if you're interested in the topic.

Read the rest of this entry »

Comments (4)

The linguistic pragmatics of LLMs

"Does GPT-4 Surpass Human Performance in Linguistic Pragmatics?" Bojic, Ljubiša et al. Humanities and Social Sciences Communications 12, no. 1 (June 10, 2025). Ljubiša Bojić, Predrag Kovačević, & Milan Čabarkapa.  Humanities and Social Sciences Communications volume 12, Article number: 794 (2025)

Read the rest of this entry »

Comments (3)

AI schoolwork

Current LLMs can answer questions or follow instructions in a way that makes them useful as cheap and quick clerical assistants. Many students use them for doing homework, writing papers, and even taking exams — and many journalists, government functionaries, lawyers, scientists, etc., are using them in similar ways. The main drawback from users' point of view is that LLMs often make stuff up — this seems to have happened a couple of weeks ago to the crew who composed the MAHA report, and is an increasingly widespread problem in court documents. Attempts at AI detectors have totally failed, and so the current academic trends are either in the direction of testing methods that isolate students from LLM-connected devices, or in the direction of syllabus structures that directly encourage students to use LLMs while trying to teach them to use those tools more effectively.

Read the rest of this entry »

Comments (6)

"Artificial Intelligence and its evil twin, Darwinism"

In Daniel Dennett's 1995 book Darwin's Dangerous Idea: Evolution and the Meanings of Life, the chapter titled "Chomsky contra Darwin, Four Episodes" ends with this provocative sentence:

The hostility to Artificial Intelligence and its evil twin, Darwinism, lies just beneath the surface of much of the most influential work in recent twentieth-century philosophy.

Read the rest of this entry »

Comments (13)