Archive for Artificial intelligence

The perils of AI (Artificial Intelligence) in the PRC

Here at Language Log, for the last couple of months, we've been having long, intense discussions about ChatGPT and other AI chatbots and LLM (Large Language Model) applications.  Now, it seems that the battle over such AI programs has reached the level of ideological warfare.

"America, China and a Crisis of Trust"

Opinion | The New York Times (4/14/23)

Indeed, a story making the rounds in Beijing is that many Chinese have begun using ChatGPT to do their ideology homework for the local Communist Party cell, so they don’t have to waste time on it.

I have some evidence that this might well be true.  Already about half a dozen years ago, my M.A. students from the PRC whose parents were CCP members told me that the government required daily interaction with the propaganda installed on their phones — upon pain of being demoted or dismissed.  They had to read a specified amount of Xi-speak and answer questions about the content.  This demanded a serious investment of time (hours).  It was considered to be especially onerous for those CCP members whose day jobs (doctors, bureaucrats, stock brokers, etc., etc.) already demanded a very full work schedule in the office.  So many, if not most, of them hired various human and electronic services to meet these obligations.

Read the rest of this entry »

Comments (12)

An example of ChatGPT "hallucinating"?

Definition

In artificial intelligence (AI), a hallucination or artificial hallucination (also occasionally called delusion) is a confident response by an AI that does not seem to be justified by its training data.

(source)

I had mentioned such AI hallucinating once or twice in previous posts (see "Selected readings"), so it's good to have a concrete example.

Is the account below an instance of ChatGPT "hallucinating"?  Its explanation of gato por liebre ("cat for hare") in Spanish suggests that it is.

[The following is a guest post by Conal Boyce.]

Read the rest of this entry »

Comments (16)

Pablumese

Knowing how much I like to invent terms for things that have no name ("topolect", "character amnesia", etc.), and needing a word for the parlance produced by ChatGPT-4 and kindred AI chatbots, Conal Boyce asked me to coin a term for it.  I instantly obliged him by coming up with "pablumese" to designate the sort of language that is unremittingly neutral and takes no stance on any subject or topic it addresses.

Conal liked my invention and responded:

Here's one of the problems with ChatGPT and its brethren: Not only does it spew what Victor calls 'pablumese', but for technical questions it then mixes its pablumese with quantitative nonsense, creating a truly creepy kind of output.

I was curious to see how it would handle the question of how many copper atoms fit into the cross-section of a typical copper wire. It responded in a way that made it sound very knowledgeable, breaking everything down into tiny (sometimes condescending) steps, and yet, at the very end of its perfect logic, it botched its answer, because it was unable to do a conversion between millimeters and picometers correctly.
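
For concreteness, here is a minimal sketch in Python of the arithmetic at stake, assuming a wire diameter of 1 mm and a copper atomic diameter of roughly 256 pm; both are illustrative round figures, not numbers taken from the actual exchange with ChatGPT:

    import math

    # Illustrative inputs (assumed, not from the original exchange):
    wire_diameter_mm = 1.0      # a thin household copper wire
    atom_diameter_pm = 256.0    # approximate diameter of a copper atom

    # The conversion ChatGPT reportedly botched: 1 mm = 1,000,000,000 pm.
    wire_diameter_pm = wire_diameter_mm * 1e9

    # Copper atoms lined up across the diameter:
    atoms_across = wire_diameter_pm / atom_diameter_pm
    print(f"Atoms across the diameter: {atoms_across:.2e}")             # ~3.9e6

    # Copper atoms packed naively into the circular cross-section:
    atoms_in_cross_section = math.pi / 4 * atoms_across ** 2
    print(f"Atoms in the cross-section: {atoms_in_cross_section:.2e}")  # ~1.2e13

Getting the millimeter-to-picometer exponent wrong by even one order of magnitude throws the final count off by a factor of ten or a hundred, which is exactly the kind of quiet failure described above.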

But here's the kicker: What makes this stuff maximally odious is that the creeps who design it will succeed in taking over the world anyway, because this week "version 4 is astonishingly better than the beta ChatGPT!!!" and version 5 next week will be astonishingly better than…. etc. etc. until they've improved it enough that it really will threaten the jobs of 3/4 of the human race. It must be an absolutely sickening time to be a young person, trying to plan one's career.

Read the rest of this entry »

Comments (25)

The mind of artificial intelligence

Sean Carroll's Preposterous Universe Podcast #230

Raphaël Millière on How Artificial Intelligence Thinks, March 20, 2023

Includes a transcript of the two-hour podcast.

Welcome to another episode of Sean Carroll's Mindscape. Today, we're joined by Raphaël Millière, a philosopher and cognitive scientist at Columbia University. We'll be exploring the fascinating topic of how artificial intelligence thinks and processes information. As AI becomes increasingly prevalent in our daily lives, it's important to understand the mechanisms behind its decision-making processes. What are the algorithms and models that underpin AI, and how do they differ from human thought processes? How do machines learn from data, and what are the limitations of this learning? These are just some of the questions we'll be exploring in this episode. Raphaël will be sharing insights from his work in cognitive science, and discussing the latest developments in this rapidly evolving field. So join us as we dive into the mind of artificial intelligence and explore how it thinks.

[The above introduction was artificially generated by ChatGPT.]

Read the rest of this entry »

Comments (6)

ChatGPT-4: threat or boon to the Great Firewall?

"The practical value of LLMs is high enough that it will induce Chinese to seek out the best systems, and they will not be censored by China.”

"Yes, the Chinese Great Firewall will be collapsing"

by Tyler Cowen, Marginal Revolution (March 21, 2023)

Something that the PRC censors had not predicted:

As framed from China:

Fang Bingxing, considered the father of China’s Great Firewall, has raised concerns over GPT-4, warning that it could lead to an “information cocoon” as the generative artificial intelligence (AI) service can provide answers to everything.

Fang said the rise of generative AI tools like ChatGPT, developed by Microsoft-backed OpenAI and now released as the more powerful ChatGPT-4 version, poses a big challenge to governments around the world, according to an interview published on Thursday by Red Star News, a media affiliate of the state-backed Chengdu Economic Daily.

“People’s perspectives can be manipulated as they seek all kinds of answers from AI,” he was quoted as saying.

Fang, a computer scientist and former government official, is widely considered the chief designer of China’s notorious internet censorship and surveillance system. He played a key role in creating and developing the Great Firewall, a sophisticated system of internet filters and blocks that allows the Chinese government to control what its citizens can access online.

Comments (4)

This is the 4th time I've gotten Jack and his beanstalk

Bill Benzon shares the response he got from ChatGPT to the prompt, "Tell me a story."

Read the rest of this entry »

Comments (30)

ChatGPT writes VHM

[This is a guest post by ChatGPT]

Write a Language Log blog post by Victor H. Mair

Title: "The Wonders of the Chinese Language"

As one of the world's oldest and most complex languages, Chinese has captivated linguists and language enthusiasts for centuries. From its unique writing system to its vast vocabulary and intricate grammar, Chinese is a fascinating and challenging language that continues to surprise and delight those who study it.

Perhaps the most distinctive feature of Chinese is its writing system. Chinese characters are visual symbols that convey meaning, and they are used in both written and spoken language. Unlike the phonetic alphabets used in many other languages, Chinese characters do not correspond to specific sounds, but rather to concepts or ideas. This makes the learning process much more complex, as students must memorize thousands of characters and learn their meanings and pronunciations.

Read the rest of this entry »

Comments (22)

Vignettes of quality data impoverishment in the world of PRC AI

Some snippets:

Limited data sets a hurdle as China plays catch-up to ChatGPT

Lack of high-quality Chinese texts on Internet a barrier to training AI models.

Ryan McMorrow, Nian Liu, Eleanor Olcott, and Madhumita Murgia, FT, Ars Technica (2/21/23)

Baidu struggled with its previous attempt at a chatbot, known as Plato, which analysts said could not even answer a simple question such as: “When is Alibaba co-founder Jack Ma’s birthday?”

Analysts point to the lack of high-quality Chinese-language text on the Internet and in other data sets as a barrier for training AI software.

GPT, the program underlying ChatGPT, sucked in hundreds of thousands of English academic papers, news articles, books, and social media posts to learn the patterns that form language. Meanwhile, Baidu’s Ernie has been trained primarily on Chinese-language data as well as English-language data from Wikipedia and Reddit.

Read the rest of this entry »

Comments (11)

Uh-oh! DeepL in the classroom; it's already here

Yesterday in my Classical Chinese class, we were reading Ouyang Xiu's (1007-1072) "Discussion on 'Biographies of Eunuchs'" in the New History of the Five Dynasties (written 1036-1039, published 1072).  Here's the relevant passage:

Móu zhī ér bùkě wéi. Wéi zhī ér bùkě chéng. Zhì qí shèn zé jù shāng ér liǎng bài. ——“Xīn wǔdài shǐ huànzhě zhuàn lùn”

謀之而不可為。為之而不可成。至其甚則俱傷而兩敗。 ——《新五代史宦者傳論》 

[Because of the special circumstances of this post, I will not adhere to my usual custom of providing Pinyin Romanization, Hanzi transcription, and English translation all three together.]

Read the rest of this entry »

Comments (8)

ChatGPT: Theme and Variations

[This is a guest post by Conal Boyce]

Here I’ll recount some recent exchanges I had with ChatGPT. Given the scope of ChatGPT, and the fact that it’s in a self‑described intermediate state, our various impressions of it as of February 2023 must be like those of the three blind men examining an elephant — except the elephant is running. In the heart of the professional programmer, ChatGPT creates existential dread since it can spit out in a few seconds a page of code which would have required hours or days for him/her to write and debug — and that only after a lifetime of coding. For the rest of us, for the moment at least, it just provokes curiosity perhaps.

Read the rest of this entry »

Comments (21)

DeepL Translator

I have often sung the praises of Google Translate (see "Selected readings" below for a few sample posts), but now I've learned about an online translator that, for many languages, may be even better.  Since we've been discussing phenomenal developments in AI quite a bit lately (see also under "Selected readings" below), now seems as good a time as any to introduce DeepL to the collective Language Log readership.

In truth, we've barely mentioned DeepL before (see comments here, here, here, and here), so I really didn't notice it until this past week when my students and auditors from East Asia told me about it.  Seeing what DeepL could do, I was simply overwhelmed.  Let me explain how that happened.

Most of the participants in my Middle Vernacular Sinitic (MVS) seminar (all attendees are from China, Japan, and Korea) said that they've been using it regularly for years.  They also mentioned that they use OCR apps on their phones.  The scanned texts can then be fed into various applications for translation.  Many of them also use Grammarly to improve the quality of their writing.  Lately I myself have noticed that when I write papers, essays, and letters in word processing programs (e.g., Microsoft Word), the processor gives me mostly good suggestions for getting rid of superfluous, redundant, or awkward wording.

Specifically, what impressed me so much about DeepL in this instance is that we were faced with a Dutch translation of a rare, medieval Chinese text with a lot of esoteric vocabulary.  The Dutch translator had done a commendable job of getting from the difficult Chinese to Dutch, but then we had to use OCR on his limited-circulation Dutch publication to produce a document to feed into DeepL.  When I read the resulting English translation, I was amazed at how faithfully the English conveyed the sense and the feeling of the extremely recondite medieval Chinese text.  Of course, the English wasn't perfect, but it made a tremendous contribution toward getting a handle on what was happening in the medieval Chinese text, which had seldom been read by anyone (it was lost for more than a thousand years) and had never been translated into any other language besides Dutch.
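
To make the workflow concrete, here is a minimal sketch of the scan-OCR-translate pipeline just described, assuming the open-source pytesseract wrapper (with Tesseract's Dutch language data installed) and DeepL's official Python client; the file name and API key are placeholders, and this is only an illustration of the steps, not a record of how we actually ran them:

    import deepl
    import pytesseract
    from PIL import Image

    # 1. OCR a scanned page of the Dutch publication (hypothetical file name).
    #    Requires the Tesseract engine with its Dutch ("nld") language data.
    dutch_text = pytesseract.image_to_string(Image.open("scanned_page.png"), lang="nld")

    # 2. Feed the recognized Dutch text to DeepL for an English rendering.
    translator = deepl.Translator("YOUR_DEEPL_API_KEY")  # placeholder key
    result = translator.translate_text(dutch_text, source_lang="NL", target_lang="EN-US")

    print(result.text)

The same two steps can, of course, be done entirely by hand with a phone OCR app and DeepL's website, which is closer to what the students in my seminar describe doing.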

Read the rest of this entry »

Comments (7)

Digitization of Babylonian fragments

Once again, DH to the rescue:

AI Deciphers Ancient Babylonian Texts And Finds Beautiful Lost Hymn

Eat your heart out, ChatGPT.

Tom Hale, IFLScience (2/7/23)

It used to be that paleographers and philologists labored mightily trying to piece together bits and pieces of old manuscripts, using only their own mental and visual powers. Now they can call on AI allies to provide decisive assistance.

Researchers have crafted an artificial intelligence (AI) system capable of deciphering fragments of ancient Babylonian texts. Dubbed the “Fragmentarium,” the algorithm holds the potential to piece together some of the oldest stories ever written by humans, including the Epic of Gilgamesh.
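
The excerpt does not spell out how the Fragmentarium actually does its matching, so the following is only a toy illustration of the general idea of comparing a newly transliterated fragment against a corpus of known lines; the transliterations below are schematic stand-ins, not real catalogue entries:

    from difflib import SequenceMatcher

    # Toy corpus of transliterated lines (schematic stand-ins, not a real catalogue).
    corpus = {
        "tablet A, line 12": "sha naq-ba i-mu-ru ish-di ma-a-ti",
        "tablet B, line 3":  "a-na har-ra-ni sha a-lak-ta-sha la ta-a-a-rat",
    }

    def best_match(fragment, corpus):
        """Return the corpus line most similar to the fragment, with its score."""
        scored = {
            label: SequenceMatcher(None, fragment, line).ratio()
            for label, line in corpus.items()
        }
        label = max(scored, key=scored.get)
        return label, scored[label]

    # A damaged fragment with its edges broken off (again, schematic):
    fragment = "naq-ba i-mu-ru ish-di"
    print(best_match(fragment, corpus))   # ('tablet A, line 12', ~0.78)

Any real system presumably works at a vastly larger scale and on properly normalized sign readings, but scoring a small broken piece against every line already in the database is precisely the drudgery that used to consume the paleographer's and philologist's time.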

Read the rest of this entry »

Comments (10)

Artificial Intelligence in Language Education: with a note on GPT-3

Registration is open for Artificial Intelligence in Language Education

Please join us for Penn Language Center's annual Language Educator Symposium, co-sponsored by Educational Linguistics at Penn GSE
 
ARTIFICIAL INTELLIGENCE in LANGUAGE EDUCATION
 
Symposium: Saturday, March 25, 2023 at the Kislak Center, Van Pelt Library
Pre-Symposium Workshop: Friday, March 24, 2023 in the Collaborative Classroom, Van Pelt Library
 
Featured Speakers
  • Eleni Miltsakaki, Department of Computer & Information Science, University of Pennsylvania
  • Gareth Roberts, Department of Linguistics, University of Pennsylvania
  • Per Urlaub, Global Languages, Massachusetts Institute of Technology
  • Eva Dessein, Global Languages, Massachusetts Institute of Technology
  • Iryna Kozlova, Graduate School of Education, University of Pennsylvania
Visit our symposium website for a detailed program and registration information. This is an in-person only event. Space is limited, so register today!

Read the rest of this entry »

Comments (4)