AI humor of the day

Let's start with the last four panels of today's Doonesbury:

Read the rest of this entry »

Comments (1)


Thought panzers

Vacillating Chinese terminology for think tanks

Mark Metcalf wrote to tell me:

Global Times* just ran an article that might be of interest regarding PRC think tanks and a new book related to this topic: “Researchers, scholars explore methods to boost China’s influence of thoughts”.

*an appendage of People's Daily

I was caught up short by the clumsy expression "influence of thoughts".  But something else about this new development bothered me much more.  Mark tracked down the title of the book in question:

《Sīxiǎng tǎnkè: Zhōngguó zhìkù de guòqù, xiànzhuàng yǔ wèilái 思想坦克:中国智库的过去、现状与未来》("Thought tanks [armored vehicles]: the past, present, and future of China's wisdom warehouses") [VHM — intentionally awkward translation for special effect, to be explained below]

What jumped out at me in the title was the use of tǎnkè 坦克 for (think) tank. In my Chinese studies, I learned that tǎnkè 坦克 was a military weapon and not a repository. And when you Google images of tǎnkè 坦克, all you see are images of tracked vehicles. That's how all my Pleco dictionaries translate the term, as well. However, when you put the term into Google Translate, it provides both the tracked vehicle and an alternative translation: "a large receptacle or storage chamber, especially for liquid or gas" with yóuxiāng 油箱 ("oil / gas[oline] / fuel tank") as a synonym. Yet GT can't translate the term sīxiǎng tǎnkè 思想坦克.  [VHM:  And well it should not.  See more below.]

Going out on a limb, could the expression sīxiǎng tǎnkè 思想坦克 have the dual meaning (i.e., a pun) for an offensive organization ("vehicle") that is used to control / defend the narrative of the CCP?

Read the rest of this entry »

Comments (15)


Legally binding hallucinations

I missed this story when it happened 10 days ago, and caught up with it yesterday because the BBC also got the word — Maria Yagoda, "Airline held liable for its chatbot giving passenger bad advice – what this means for travellers", BBC 2/23/2024:

In 2022, Air Canada's chatbot promised a discount that wasn't available to passenger Jake Moffatt, who was assured that he could book a full-fare flight for his grandmother's funeral and then apply for a bereavement fare after the fact.

According to a civil-resolutions tribunal decision last Wednesday, when Moffatt applied for the discount, the airline said the chatbot had been wrong – the request needed to be submitted before the flight – and it wouldn't offer the discount. Instead, the airline said the chatbot was a "separate legal entity that is responsible for its own actions". […]

The British Columbia Civil Resolution Tribunal rejected that argument, ruling that Air Canada had to pay Moffatt $812.02 (£642.64) in damages and tribunal fees. "It should be obvious to Air Canada that it is responsible for all the information on its website," read tribunal member Christopher Rivers' written response. "It makes no difference whether the information comes from a static page or a chatbot."

Read the rest of this entry »

Comments (19)


"It crosses the i's and dots the t's"

In a YouTube video yesterday, Michael Popok explained the differences (in New York State law) among a "verdict", a "decision and order", and a "judgment", in the context of the latest stage of Donald Trump's civil fraud case. Those intricacies are an interesting aspect of the sociolinguistics of the law, but the topic of this post is Popok's word-exchange speech error at about 4:45:

uh it crosses the i's and dots the t's
sorry

dots the i's and crosses the t's

Read the rest of this entry »

Comments (14)


Modals, idiolects, garden-path sentences, and English translations of a ninth-century Chinese poem

Here I present a digest of four scientific linguistics papers from the latter part of January 2024 to show that our field is very much alive in diverse subfields at the beginning of the new year.

"The Semantics, Sociolinguistics, and Origins of Double Modals in American English: New Insights from Social Media." Morin, Cameron et al. PLOS ONE 19, no. 1 (January 24, 2024): e0295799.

Abstract: In this paper, we analyze double modal use in American English based on a multi-billion-word corpus of geolocated posts from the social media platform Twitter. We identify and map 76 distinct double modals totaling 5,349 examples, many more types and tokens of double modals than have ever been observed. These descriptive results show that double modal structure and use in American English is far more complex than has generally been assumed. We then consider the relevance of these results to three current theoretical debates. First, we demonstrate that although there are various semantic tendencies in the types of modals that most often combine, there are no absolute constraints on double modal formation in American English. Most surprisingly, our results suggest that double modals are used productively across the US. Second, we argue that there is considerable dialect variation in double modal use in the southern US, with double modals generally being most strongly associated with African American Language, especially in the Deep South. This result challenges previous sociolinguistic research, which has often highlighted double modal use in White Southern English, especially in Appalachia. Third, we consider how these results can help us better understand the origins of double modals in American English: although it has generally been assumed that double modals were introduced by Scots-Irish settlers, we believe our results are more consistent with the hypothesis that double modals are an innovation of African American Language.

Read the rest of this entry »

Comments off


ChatGPT having a stroke?

Or a psychotic episode? ICYMI — Maxwell Zeff, "ChatGPT Went Berserk, Giving Nonsensical Responses All Night", Gizmodo 2/21/2024:

ChatGPT started throwing out “unexpected responses” on Tuesday night according to OpenAI’s status page. Users posted screenshots of their ChatGPT conversations full of wild, nonsensical answers from the AI chatbot.

Read the rest of this entry »

Comments (12)


Political aspects of teaching Classical Chinese at First Girls High School in Taipei

This issue caused quite a hullabaloo more than a month ago and, during the runup to the national election that was going on at that time, it generated a lot of hot rhetoric.  It's important to note that First Girls High School is an elite, influential institution that is very hard to get into.

The debate over how much and what sort of Classical Chinese to include in the curriculum grew quite heated, so naturally I quickly wrote a detailed post on the subject.  But then my computer crashed because of one of the many dreaded, hated "updates" that I have to endure for the sake of "security" (the bane of my life), and I lost my carefully prepared post on the Classical Chinese debate.  The same thing happened to the draft of my post on the Tokyo restaurant sign that supposedly "hurt the feelings of the Chinese people".  It has taken me till now to find the time to reconstruct them.

Read the rest of this entry »

Comments (8)


Hurting the feelings of the Chinese people in Tokyo?

Sign outside a Tokyo restaurant:


(source)

Read the rest of this entry »

Comments (8)


Jumbled pinyin

I spotted this not-too-old post on Stephen Jones: a blog, "Interpreting pinyin" (10/9/17).

Read the rest of this entry »

Comments (4)


More AI humor

Comments (2)


Relative clause attachment of the week

Comments (8)


LLM vs. a cat?

A bit of AI anti-hype — Sissi Cao, "Meta’s A.I. Chief Yann LeCun Explains Why a House Cat Is Smarter Than The Best A.I.", Observer 2/15/2024:

“The brain of a house cat has about 800 million neurons. You have to multiply that by 2,000 to get to the number of synapses, or the connections between neurons, which is the equivalent of the number of parameters in an LLM,” LeCun said, noting that the largest LLMs have about the same number of parameters as the number of synapses in a cat’s brain. For example, OpenAI’s GPT-3.5 model, which powers the free version of ChatGPT, has 175 billion parameters. The more advanced GPT-4 is said to be run on eight language models, each with 220 billion parameters.

“So maybe we are at the size of a cat. But why aren’t those systems as smart as a cat?” LeCun asked. “A cat can remember, can understand the physical world, can plan complex actions, can do some level of reasoning—actually much better than the biggest LLMs. That tells you we are missing something conceptually big to get machines to be as intelligent as animals and humans.”
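As a sanity check, the arithmetic behind LeCun's comparison works out as quoted (figures taken from the article; the GPT-4 numbers are the rumored ones repeated there, not confirmed by OpenAI):

```python
# Back-of-the-envelope check of the neuron/synapse/parameter comparison.
cat_neurons = 800e6          # neurons in a house cat's brain (per the article)
synapses_per_neuron = 2000   # LeCun's multiplier
cat_synapses = cat_neurons * synapses_per_neuron  # 1.6 trillion

gpt35_params = 175e9         # GPT-3.5 parameters (per the article)
gpt4_params = 8 * 220e9      # rumored: eight models at 220 billion each

print(f"cat synapses:    {cat_synapses:.2e}")
print(f"GPT-3.5 params:  {gpt35_params:.2e}")
print(f"GPT-4 (rumored): {gpt4_params:.2e}")
```

So a cat's roughly 1.6 trillion synapses are about an order of magnitude beyond GPT-3.5's 175 billion parameters, but on a par with the rumored 1.76 trillion total for GPT-4 — hence "maybe we are at the size of a cat."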

Read the rest of this entry »

Comments (11)


Political drumbeat: cultural confidence

Yesterday, the hypernationalistic CCP government propaganda organ, Global Times, published the following article:

"China shows cultural confidence as world shares Spring Festival’s spirit, legacy, joy", by Ai Peng, Global Times (2/18/24)

Mark Metcalf called the conspicuous expression "cultural confidence" to my attention:

It's appeared in LL twice. 

Apparently it has propaganda 'legs' and, of course, the blessing of Xi Dada – see the articles below. It has even shown up in numerous Jiěfàngjūn Bào 解放军报 (People's Liberation Army Daily) articles in recent months.
 
Is it just another throwaway term or is it being used to push CCP members toward a particular goal?
Considered from another perspective, all this talk about instilling confidence could easily be interpreted to mean that CCP members don't have the desired level of cultural confidence ("Party" confidence?).

Read the rest of this entry »

Comments (4)