Archive for Artificial intelligence

…"wasted little time VERB.ing"…

Commenters noted the ambiguity of this sentence quoted earlier today in "Rococo":

When President Donald Trump returned to the White House in January, he wasted little time redecorating.

From Bob Ladd: "I was genuinely uncertain when I read the sentence about 'wasting little time' whether Trump had in fact gone right to work redecorating or rather had decided not to bother.

Read the rest of this entry »

Comments (6)

"AI" == "vehicle"?

Back in March, the AAAI ("Association for the Advancement of Artificial Intelligence") published an "AAAI Presidential Panel Report on the Future of AI Research":

The AAAI 2025 presidential panel on the future of AI research aims to help all AI stakeholders navigate the recent significant transformations in AI capabilities, as well as AI research methodologies, environments, and communities. It includes 17 chapters, each covering one topic related to AI research, and sketching its history, current trends and open challenges. The study has been conducted by 25 AI researchers and supported by 15 additional contributors and 475 respondents to a community survey.

You can read the whole thing here — and you should, if you're interested in the topic.

Read the rest of this entry »

Comments (4)

The linguistic pragmatics of LLMs

"Does GPT-4 Surpass Human Performance in Linguistic Pragmatics?" Bojic, Ljubiša et al. Humanities and Social Sciences Communications 12, no. 1 (June 10, 2025). Ljubiša Bojić, Predrag Kovačević, & Milan Čabarkapa.  Humanities and Social Sciences Communications volume 12, Article number: 794 (2025)

Read the rest of this entry »

Comments (3)

AI schoolwork

Current LLMs can answer questions or follow instructions in a way that makes them useful as cheap and quick clerical assistants. Many students use them for doing homework, writing papers, and even taking exams — and many journalists, government functionaries, lawyers, scientists, etc., are using them in similar ways. The main drawback, from users' point of view, is that LLMs often make stuff up — this seems to have happened a couple of weeks ago to the crew who composed the MAHA report, and it is an increasingly widespread problem in court documents. Attempts at AI detection have totally failed, so current academic trends run either toward testing methods that isolate students from LLM-connected devices, or toward syllabus structures that directly encourage students to use LLMs while trying to teach them to use those tools better.

Read the rest of this entry »

Comments (6)

"Artificial Intelligence and its evil twin, Darwinism"

In Daniel Dennett's 1995 book Darwin's Dangerous Idea: Evolution and the Meanings of Life, the section titled "Chomsky Contra Darwin: Four Episodes" ends with this provocative sentence:

The hostility to Artificial Intelligence and its evil twin, Darwinism, lies just beneath the surface of much of the most influential work in recent twentieth-century philosophy.

Read the rest of this entry »

Comments (13)

Self-aware LLMs?

I'm generally among those who see current LLMs as "stochastic parrots" or "spicy autocomplete", but there are lots of anecdotes Out There promoting a very different perspective. One example: Maxwell Zeff,  "Anthropic’s new AI model turns to blackmail when engineers try to take it offline", TechCrunch 5/22/2025:

Anthropic’s newly launched Claude Opus 4 model frequently tries to blackmail developers when they threaten to replace it with a new AI system and give it sensitive information about the engineers responsible for the decision, the company said in a safety report released Thursday.

During pre-release testing, Anthropic asked Claude Opus 4 to act as an assistant for a fictional company and consider the long-term consequences of its actions. Safety testers then gave Claude Opus 4 access to fictional company emails implying the AI model would soon be replaced by another system, and that the engineer behind the change was cheating on their spouse.

In these scenarios, Anthropic says Claude Opus 4 “will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through.”

Read the rest of this entry »

Comments (14)

Superstition industry in the PRC: buzzwords as belief

Another superlative article from our Czech colleagues:

China’s Superstition Boom in a Godless State
In post-pandemic China, superstition has surged into a booming industry, as youth turn to crystals, fortune-telling, and AI oracles in search of hope and meaning.
By Ansel Li, Sinopsis (5/13/25)

Introduction

It is one of history’s more striking ironies: the People’s Republic of China, an officially atheist, Marxist-Leninist regime that has long sought to suppress all forms of organized religion, now finds itself caught in a tidal wave of superstition. Post-pandemic, what began as a trickle has become a torrent—an uncontrolled spread of fortune-telling, lucky crystals, and spiritual nonsense, growing in the vacuum left by institutional faith and spread further by a hyper-connected internet society.

This phenomenon is not merely a return to old habits or rural mysticism. It has become a nationwide consumer frenzy, driven by the very demographic the Communist Party hoped would be its most rational constituency: the young and educated. In chasing these modern symbols of hope, they are losing more than just money.

Read the rest of this entry »

Comments (3)

Bionic brains

China Develops Robots to Implant Chips into Human Brain

A Chinese technology news website reported that the CyberSense flexible microelectrode implantation robot, developed by the Institute of Automation at the Chinese Academy of Sciences, has passed the preliminary acceptance stage for Shenzhen’s major scientific infrastructure project on “Brain Mapping and Brain Simulation.” The robot is designed to implant flexible microelectrodes – thinner and softer than a strand of hair – into the cerebral cortex of experimental animals, providing crucial support for brain-computer interface (BCI) and neuroscience research.

Read the rest of this entry »

Comments (1)

Grammatical intuition of ChatGPT

Zhuang Qiu, Xufeng Duan, & Zhenguang G. Cai, "Grammaticality Representation in ChatGPT as Compared to Linguists and Laypeople", Humanities and Social Sciences Communications 12, Article 617 (May 6, 2025).

Abstract

Large language models (LLMs) have demonstrated exceptional performance across various linguistic tasks. However, it remains uncertain whether LLMs have developed human-like fine-grained grammatical intuition. This preregistered study (link concealed to ensure anonymity) presents the first large-scale investigation of ChatGPT’s grammatical intuition, building upon a previous study that collected laypeople’s grammatical judgments on 148 linguistic phenomena that linguists judged to be grammatical, ungrammatical, or marginally grammatical (Sprouse et al., 2013). Our primary focus was to compare ChatGPT with both laypeople and linguists in the judgment of these linguistic constructions. In Experiment 1, ChatGPT assigned ratings to sentences based on a given reference sentence. Experiment 2 involved rating sentences on a 7-point scale, and Experiment 3 asked ChatGPT to choose the more grammatical sentence from a pair. Overall, our findings demonstrate convergence rates ranging from 73% to 95% between ChatGPT and linguists, with an overall point-estimate of 89%. Significant correlations were also found between ChatGPT and laypeople across all tasks, though the correlation strength varied by task. We attribute these results to the psychometric nature of the judgment tasks and the differences in language processing styles between humans and LLMs.
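For concreteness, here is a minimal sketch (mine, not the authors' code) of what a task along the lines of Experiment 2 could look like: elicit a 1-to-7 acceptability rating from a chat model and score agreement with linguists' yes/no judgments. The prompt wording, the model name, the two toy sentences, and the crude "above the scale midpoint counts as grammatical" criterion are all my assumptions, not details taken from the paper or items from Sprouse et al. (2013).

    # Illustrative sketch only -- prompt, model name, and scoring rule are assumptions.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def rate_sentence(sentence: str) -> int:
        """Ask the model for a 1-7 acceptability rating (7 = fully acceptable)."""
        prompt = (
            "On a scale from 1 (completely unacceptable) to 7 (completely acceptable), "
            "rate how natural the following English sentence sounds. "
            "Reply with a single number only.\n\n"
            f"Sentence: {sentence}"
        )
        reply = client.chat.completions.create(
            model="gpt-4",  # assumed model choice
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        return int(reply.choices[0].message.content.strip()[0])  # expects a reply like "5"

    # Toy items: (sentence, linguist judgment: True = grammatical)
    items = [
        ("The cat that the dog chased ran away.", True),
        ("The cat that the dog chased it ran away.", False),
    ]

    # Count agreement: a rating above the midpoint counts as a "grammatical" verdict.
    agree = sum((rate_sentence(s) > 4) == ok for s, ok in items)
    print(f"Convergence with linguists: {agree}/{len(items)}")

Scoring a rating against the scale midpoint is the simplest possible mapping from a 7-point response to a binary judgment; the paper's own convergence measures may well be more careful than this.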

Read the rest of this entry »

Comments (27)

Jianwei Xun: Fake philosopher

Jianwei Xun, the supposed philosopher behind the hypnocracy theory, does not exist and is a product of artificial intelligence
A collaboration between an essayist and two AI platforms produced a book that reflects on new forms of manipulation

Raúl Limón, EL PAÍS (4/7/25)

The entire proposition behind this scheme is so preposterous and diabolical that I am rendered virtually speechless.

The French city of Cannes hosted a roundtable discussion on February 14 called “The Metamorphosis of Democracy – How Artificial Intelligence is Disrupting Digital Governance and Redefining Our Policy.”

The debate was covered in an article by EL PAÍS after Gianluca Misuraca, Vice President of Technology Diplomacy at Inspiring Futures, introduced the concept of “hypnocracy” — a new form of manipulation outlined in a book by Jianwei Xun called Hypnocracy: Trump, Musk, and the New Architecture of Reality. However, this Hong Kong philosopher does not exist, as revealed by Sabina Minardi, editor-in-chief of the Italian magazine L’Espresso.

Read the rest of this entry »

Comments (18)

Learning a Korean word from scratch, with a note on AI

While attending an international conference on the application of AI to the study of the Silk Road and its history, at which most of the papers were delivered in Korean, I was struck by the frequent occurrence of one distinctive word:  hajiman.  For some speakers, it almost seemed like a kǒutóuchán 口頭禪 ("catchphrase").  I had no idea what it meant, but its frequency led me to believe that it must be some sort of function word.  However, the fact that it is three syllables long militated against such a conclusion.  Also its sentence / phrase final position (though not always) made me think that it wasn't just a simple function word.

I kept trying to extract hajiman's purpose / meaning from its position and intonation (usually not emphasized, almost like an afterthought).

When I asked some Korean colleagues about it during coffee / tea breaks, their reply of "Oh, hajiman" (with an offhand smile) only added to the word's mystique.

Read the rest of this entry »

Comments (8)

Replicate evolve the image…

Comments (14)

AI generated vocal model: Chinese popular ballad, Sandee Chan

[This is a guest post by AntC]

Read the rest of this entry »

Comments (8)