Archive for Artificial intelligence

Touring the Turing Test again

The buzz about Large Language Models has re-ignited interest in Alan Turing's famous 1950 article "Computing Machinery and Intelligence". Two interesting recent discussions: Jessica Riskin, "A Sort of Buzzing Inside My Head", NYRB 6/25/2023, and Mustafa Suleyman, "The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma", Random House 9/5/2023.

Suleyman's book won't be released until 9/5/2023, so it's interesting that several outlets have blurbed one of its ideas ten weeks early: Brad Stone, "AI Leader Proposes a New Kind of Turing Test for Chatbots", Bloomberg 6/20/2023, and Sawdah Bhaimiya, "DeepMind's co-founder suggested testing an AI chatbot's ability to turn $100,000 into $1 million to measure human-like intelligence", Business Insider 6/20/2023.  Based just on Business Insider's title, Suleyman's proposal puzzled me, since we don't usually think of machine-trading systems as measuring intelligence — at least not the intelligence of the system rather than its designer. But in fact Suleyman has something different in mind, more along the lines of an extended "shark tank" competition:

In describing his proposal, Suleyman argues that there’s a misplaced focus in the tech industry on the distant possibility of achieving artificial general intelligence, or AGI: algorithms with cognitive abilities that match or exceed humans’. Instead, he said the more achievable and meaningful short-term goal is what he calls artificial capable intelligence, or ACI: programs that can set goals and achieve complex tasks with minimal human intervention.

To measure whether a machine has achieved ACI, he describes a “modern Turing test” — a new north star for researchers — in which you give an AI $100,000 and see if it can turn the seed investment into $1 million. To do so, the bot must research an e-commerce business opportunity, generate blueprints for a product, find a manufacturer on a site like Alibaba and then sell the item (complete with a written listing description) on Amazon or…

Suleyman expects AI will pass this more practical threshold sometime in the next two years. “We don’t just care about what a machine can say; we also care about what it can do,” he writes. And when that happens, he says, “The consequences for the world economy are seismic.”

Read the rest of this entry »

Comments (21)

AI for Akkadian

Article by Melanie Lidman in The Times of Israel (6/17/23):

Groundbreaking AI project translates 5,000-year-old cuneiform at push of a button

‘Google Translate’-like program for Akkadian cuneiform will enable tens of thousands of digitized but unread tablets to be translated to English. Accuracy is debatable.

Opening and key paragraphs:

Cuneiform is the oldest known form of writing, but it is so difficult to read that only a few hundred experts around the world can decode the clay tablets filled with wedge-shaped symbols. Now, a team of archaeologists and computer scientists from Israel has created an AI-powered translation program for ancient Akkadian cuneiform, allowing tens of thousands of already digitized tablets to be translated into English instantaneously.

Read the rest of this entry »

Comments off

Thai to English translation gets injected with Tamil

[This is a guest post by Charles Belov]

I pasted the following Thai, which I got from a YouTube channel, into Google Translate. The results were mostly in English, but Google Translate injected some apparent Tamil as well, then simply gave up and left some of the Thai untranslated.

"ตลอดระยะเวลาการทำงานในวงการบันเทิงมันทำให้เราได้เรียนรู้ว่าจริงๆ เเล้วความสุขอยู่รอบตัวเราไปหมด เเล้วความสุขมันง่ายมาก จริงๆ บางทีความสุขมันก็ไม่ต้องมีเงินเยอะมากมาย ความสุขในชีวิตของผมมันคือการมีอิสรภาพ

ผมรู้สึกว่ามันเเค่ต้อง balance ชีวิตให้มากขึ้น รักตัวเองให้เป็น เงินก็ต้องหา เเต่ก็ต้องให้เวลากับตัวเอง เเคร์ตัวเอง เเคร์คนอื่นน้อยลง"

ฟิล์ม ธนภัทร คนหิวความสำเร็จ กับอิสรภาพของชีวิต

translated to English as:

"During the time of working in the entertainment industry, it made us learn that really, happiness doesn't need much money, so much happiness. in my life it is கெர்பியைப்ப்பு

I feel that you have to find balance in your life, but you have to make time for yourself, take care of yourself, and take care of others less"

Film ตันที่ร ตั้วิที่ สุ้วิต้ามี่ สุ้าวิต้วั่ม

Read the rest of this entry »

Comments (3)

ChatGPT has a sense of humor (sort of)

Benj Edwards has a mirthful article in Ars Technica (6/9/23):

Researchers discover that ChatGPT prefers repeating 25 jokes over and over

When tested, "Over 90% of 1,008 generated jokes were the same 25 jokes."

[includes an AI generated image of "a laughing robot"]

On Wednesday, two German researchers, Sophie Jentzsch and Kristian Kersting, released a paper that examines the ability of OpenAI's ChatGPT-3.5 to understand and generate humor. In particular, they discovered that ChatGPT's knowledge of jokes is fairly limited: During a test run, 90 percent of 1,008 generations were the same 25 jokes, leading them to conclude that the responses were likely learned and memorized during the AI model's training rather than being newly generated.

The two researchers, associated with the Institute for Software Technology, German Aerospace Center (DLR), and Technical University Darmstadt, explored the nuances of humor found within ChatGPT's 3.5 version (not the newer GPT-4 version) through a series of experiments focusing on joke generation, explanation, and detection. They conducted these experiments by prompting ChatGPT without having access to the model's inner workings or data set.
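The tallying behind the headline statistic is straightforward to reproduce in principle. Here is a minimal sketch of that kind of count; the sample strings below are invented placeholders standing in for repeated model responses, not actual ChatGPT output or the researchers' code:

```python
from collections import Counter

# Hypothetical stand-in for repeated "Tell me a joke" responses.
# In the actual study, 1,008 such strings came from repeated prompts.
generations = (
    ["Why did the scarecrow win an award? He was outstanding in his field."] * 5
    + ["Why don't scientists trust atoms? They make up everything."] * 3
    + ["What do you call a fake noodle? An impasta."] * 2
)

counts = Counter(generations)

# Share of all generations accounted for by the top 25 distinct jokes --
# the statistic Jentzsch and Kersting report as "over 90%".
top25_total = sum(n for _, n in counts.most_common(25))
share = top25_total / len(generations)

for joke, n in counts.most_common(10):
    print(f"{n:4d}  {joke}")
print(f"Top-25 share: {share:.0%}")
```

With real API output in place of the placeholder list, a high top-25 share is exactly the signal the researchers took as evidence of memorization rather than fresh generation.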

[Jentzsch and Kersting] listed the top 25 most frequently generated jokes in order of occurrence. Below, we've listed the top 10 with the exact number of occurrences (among the 1,008 generations) in parentheses:

Read the rest of this entry »

Comments (12)


As I am about to deliver a keynote address to an international conference on Chinese language pedagogy, I receive news of this new LLM that knocks my socks off:

InternLM is a multilingual large language model jointly developed by Shanghai AI Lab and SenseTime (with equal contribution), in collaboration with the Chinese University of Hong Kong, Fudan University, and Shanghai Jiaotong University.

Technical report: [PDF]

Note: Please right-click the link above to download the PDF file directly.


We present InternLM, a multilingual foundational language model with 104B parameters. InternLM is pre-trained on a large corpora with 1.6T tokens with a multi-phase progressive process, and then fine-tuned to align with human preferences. We also developed a training system called Uniscale-LLM for efficient large language model training. The evaluation on a number of benchmarks shows that InternLM achieves state-of-the-art performance in multiple aspects, including knowledge understanding, reading comprehension, mathematics, and coding. With such well-rounded capabilities, InternLM achieves outstanding performances on comprehensive exams, including MMLU, AGIEval, C-Eval and GAOKAO-Bench, without resorting to external tools. On these benchmarks, InternLM not only significantly outperforms open-source models, but also obtains superior performance compared to ChatGPT. Also, InternLM demonstrates excellent capability of understanding Chinese language and Chinese culture, which makes it a suitable foundation model to support Chinese-oriented language applications. This manuscript gives a detailed study of our results, with benchmarks and examples across a diverse set of knowledge domains and tasks.

Read the rest of this entry »

Comments (1)

ChatGPT does Emily Dickinson writing a recipe for Pad Thai (and haiku too)

From Scott D. Seligman via Facebook:

  ChatGPT is really creeping me out. I asked it for a recipe for Pad Thai in the form of an Emily Dickinson poem. I'm no poetry maven, but the damned thing seems to have the ability to turn a phrase, at least some of the time.

Below is what I got in response. [Note to Jeanne Larsen, Jenny Shepherd and any other poets or poetesses with whom I am acquainted: I hear Starbucks may be hiring baristas].

Read the rest of this entry »

Comments (16)

Decipherment of Linear A

Methodologically, the following communication from Elizabeth J. W. Barber is too important to be left buried in a comment to this post:  "ChatGPT does cuneiform studies" (5/21/23)

As I showed in my 1974 book, Archaeological Decipherment, there is a mathematical algorithm showing how much text one needs to PROVABLY accomplish a decipherment for what sort of script. Since 1974, we haven't added enough new text to our pile of LINEAR A to make it over the hump, if the language it hides is unrelated to anything we already know (or if the hidden language, like Semitic, "cross-classifies" its morphemes between consonants and vowels, since each phonological sign in Linear A represents one C and one V). And if it IS hiding some language we already have a linguistic handle on, we are still scarcely up to the top of the hump. So what language, or language family might one try? We already know that Linear A shows virtually nothing in the way of suffixing or other inflection, so it looks very UN-Indo-European.

Read the rest of this entry »

Comments (2)

Sperm whale talk

Animal communication is not a favorite topic here at Language Log, but according to the following account, one project concerning it seems serious and is being conducted by credible scientists.  Although their claims for its ultimate significance may be inflated, I believe the research they are undertaking is worth considering, especially after hearing the clicks and codas of the sperm whales, which do appear to be communicating data.

Can Understanding Whale Speech Help Us Talk to Aliens?

Biologist David Gruber thinks decoding the language of whales could be just the first step in understanding what other lifeforms are saying—in this world and out of it.

Alexandra Marvar, The Daily Beast (5/13/23)

Read the rest of this entry »

Comments (3)

The perils of AI (Artificial Intelligence) in the PRC

Here at Language Log, for the last couple months, we've been having long, intense discussions about ChatGPT and other AI chatbots and LLM (Large Language Model) applications.  Now, it seems that the battle over such AI programs has reached the level of ideological warfare.

"America, China and a Crisis of Trust"

Opinion | The New York Times (4/14/23)

Indeed, a story making the rounds in Beijing is that many Chinese have begun using ChatGPT to do their ideology homework for the local Communist Party cell, so they don’t have to waste time on it.

I have some evidence that this might well be true.  Already about half a dozen years ago, my M.A. students from the PRC whose parents were CCP members told me that the government required daily interaction with the propaganda installed on their phones — upon pain of being demoted or dismissed.  They had to read a specified amount of Xi-speak and answer questions about the content.  This demanded a serious investment of time (hours).  It was considered especially onerous for those CCP members whose day jobs (doctors, bureaucrats, stock brokers, etc.) already demanded a very full work schedule in the office.  So many, if not most, of them hired various human and electronic services to meet these obligations.

Read the rest of this entry »

Comments (12)

An example of ChatGPT "hallucinating"?


In artificial intelligence (AI), a hallucination or artificial hallucination (also occasionally called delusion) is a confident response by an AI that does not seem to be justified by its training data.


I have mentioned such AI hallucination once or twice in previous posts (see "Selected readings"), so it's good to have a concrete example.

Is the account below an instance of ChatGPT "hallucinating"?  Its explanation of gato-por-liebre (cat-for-hare) in Spanish would seem so.

[The following is a guest post by Conal Boyce.]

Read the rest of this entry »

Comments (16)


Knowing how much I like to invent terms for things that have no name ("topolect", "character amnesia", etc.), and needing a word for the parlance produced by ChatGPT-4 and kindred AI chatbots, Conal Boyce asked me to coin a term for it.  I instantly obliged him by coming up with "pablumese" to designate the sort of language that is unremittingly neutral and takes no stance on any subject or topic it addresses.

Conal liked my invention and responded:

Here's one of the problems with ChatGPT and its brethren: Not only does it spew what Victor calls 'pablumese' but for technical questions it then mixes its pablumese with quantitative nonsense, creating a truly creepy kind of output.

I was curious to see how it would handle the question of how many copper atoms fit into the cross-section of a typical copper wire. It responded in a way that made it sound very knowledgeable, breaking everything down into tiny (sometimes condescending) steps, and yet, at the very end of its perfect logic, it botched its answer, because it was unable to do a conversion between millimeters and picometers correctly.
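Conal's arithmetic point is easy to check by hand. Here is a minimal Python sketch of that kind of sanity check, including the unit conversion that reportedly tripped the model up; the 2 mm wire diameter and ~256 pm atomic diameter are illustrative assumptions, not the actual figures from his exchange with ChatGPT:

```python
# Back-of-the-envelope check: how many copper atoms span a wire's
# cross-section. Both input values are illustrative assumptions.
WIRE_DIAMETER_MM = 2.0
ATOM_DIAMETER_PM = 256.0  # rough metallic diameter of a copper atom

# The step that requires care: 1 mm = 1e9 pm.
wire_diameter_pm = WIRE_DIAMETER_MM * 1e9

# Atoms laid side by side across the diameter.
atoms_across = wire_diameter_pm / ATOM_DIAMETER_PM

# Atoms filling the circular cross-section scale with area,
# i.e. the square of the diameter ratio (packing efficiency ignored).
atoms_in_cross_section = atoms_across ** 2

print(f"Atoms across the diameter:  {atoms_across:.2e}")
print(f"Atoms in the cross-section: {atoms_in_cross_section:.2e}")
```

This is only an order-of-magnitude estimate, but it is precisely the millimeter-to-picometer conversion step where, per Conal's account, the chatbot's otherwise confident derivation fell apart.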

But here's the kicker: What makes this stuff maximally odious is that the creeps who design it will succeed in taking over the world anyway, because this week "version 4 is astonishingly better than the beta ChatGPT!!!" and version 5 next week will be astonishingly better than…. etc. etc. until they've improved it enough that it really will threaten the jobs of 3/4 of the human race. It must be an absolutely sickening time to be a young person, trying to plan one's career.

Read the rest of this entry »

Comments (25)

The mind of artificial intelligence

Sean Carroll's Preposterous Universe Podcast #230

Raphaël Millière on How Artificial Intelligence Thinks, March 20, 2023

Includes transcript of the two hour podcast.

Welcome to another episode of Sean Carroll's Mindscape. Today, we're joined by Raphaël Millière, a philosopher and cognitive scientist at Columbia University. We'll be exploring the fascinating topic of how artificial intelligence thinks and processes information. As AI becomes increasingly prevalent in our daily lives, it's important to understand the mechanisms behind its decision-making processes. What are the algorithms and models that underpin AI, and how do they differ from human thought processes? How do machines learn from data, and what are the limitations of this learning? These are just some of the questions we'll be exploring in this episode. Raphaël will be sharing insights from his work in cognitive science, and discussing the latest developments in this rapidly evolving field. So join us as we dive into the mind of artificial intelligence and explore how it thinks.

[The above introduction was artificially generated by ChatGPT.]

Read the rest of this entry »

Comments (6)

ChatGPT-4: threat or boon to the Great Firewall?

"The practical value of LLMs is high enough that it will induce Chinese to seek out the best systems, and they will not be censored by China."

"Yes, the Chinese Great Firewall will be collapsing"

by Tyler Cowen, Marginal Revolution (March 21, 2023)

Something that the PRC censors had not predicted:

As framed from China:

Fang Bingxing, considered the father of China’s Great Firewall, has raised concerns over GPT-4, warning that it could lead to an “information cocoon” as the generative artificial intelligence (AI) service can provide answers to everything.

Fang said the rise of generative AI tools like ChatGPT, developed by Microsoft-backed OpenAI and now released as the more powerful ChatGPT-4 version, poses a big challenge to governments around the world, according to an interview published on Thursday by Red Star News, a media affiliate of the state-backed Chengdu Economic Daily.

“People’s perspectives can be manipulated as they seek all kinds of answers from AI,” he was quoted as saying.

Fang, a computer scientist and former government official, is widely considered the chief designer of China’s notorious internet censorship and surveillance system. He played a key role in creating and developing the Great Firewall, a sophisticated system of internet filters and blocks that allows the Chinese government to control what its citizens can access online.

Comments (4)