The historical vagaries of a Shanghai temple and town name: Lu Ji and the "Wenfu" ("Rhapsody on literature")

From Rostislav Berezkin, who teaches at Fudan University:

The place where I stay is called Qibao town, now in Minhang district of Shanghai. The name means "Seven Treasures". It comes from the name of the Buddhist temple called Qibaosi. Legend says that the temple was built by the Lu family to commemorate Lu Ji* and Lu Yun, brothers of the 3rd cent. AD who were very famous poets and politicians.  Their tombs were located there, and the temple became known as Lubaosi (Precious Temple of Lu).

But centuries later, during the Ten Kingdoms (907-979) period, the king of Wuyue (907-978) visited the place. When he asked the name of the temple, he misheard it as "Six Treasures Temple"; "six" is pronounced somewhat like "lok" in modern Shanghainese (it's "luc" in modern Vietnamese, which likewise preserves the "entering" tone). Apparently this is very close to the medieval pronunciation of the Lu surname ("[main]land"). The king was perplexed, because there are seven treasures in Buddhism, not six. Therefore, he decided to donate a precious manuscript of the Lotus Sutra in gold letters that he had had made earlier, so that it would constitute the seventh treasure. The monastery then became known as the Qibaosi.

Read the rest of this entry »

Comments (1)


kempt and sheveled

From François Lang:

I did not know you'd invented "topolect" and "character amnesia"!
 
Now…since you have a predilection for naming heretofore unnamed things, I am wondering if you could work your linguistic magic to describe words like "unkempt" and "disheveled", which appear far more often than their equivalents without the negative prefix.
 

I hope that pushes some linguistic buttons (assuming, of course, that no such word actually exists!).

The best I've come up with is "arhizomorphic", but I'm sure you and your Language Log groupies can do better!

Read the rest of this entry »

Comments (26)


E-mail etiquette

New article by Stephen Johnson in Lifehacker (3/24/23):

"These Are the Most Savage Ways to Start or End an Email:

How you start and end your work email says something about your worth as a person"

N.B.:  This is about work email — a very different kettle of fish from personal email, email with friends, and email in general.  You work those things out on your own.  If the solutions you arrive at are suitable, the relationship will persist.  If not, it will wither.

Selections from Johnson's article:

How do you begin your work emails? Do you go with a simple “Hey?” Or are you into formal greetings like “Good afternoon?” or “Salutation, right, trusty, and well-beloved friend?” Or are you one of those absolute animals that just starts—with no foreplay at all? How about the closing? Are you one of those annoying, “Thank you in advance” people? Or are you more like, “Byeeeeee?”

Back in the pre-computer days, this wouldn’t be a question. There were hard-and-fast rules for business correspondence: You started the letter with “Dear Mr. Jenkins,” and ended it with “Sincerely yours.” Anything else would mark you as a communist or beatnik.

Read the rest of this entry »

Comments (30)


can you not

I've been noticing a gawky, ungainly, stray coffee mug hidden behind the Keurig in our departmental office, with these three words on the side:

can

you

not

No capitalization and no punctuation.

I was mystified.  Whatever could that mean?  I can imagine an arch, haughty, snotty person saying that to someone, implying that they don't want the person they're addressing to keep doing whatever it is they're doing.  In essence, I suppose it means "You're bothering / bugging / annoying me"; "stop doing that"; "get lost".

Read the rest of this entry »

Comments (24)


Pablumese

Knowing how much I like to invent terms for things that have no name ("topolect", "character amnesia", etc.), and needing a word for the parlance produced by ChatGPT-4 and kindred AI chatbots, Conal Boyce asked me to coin a term for it.  I instantly obliged him by coming up with "pablumese" to designate the sort of language that is unremittingly neutral and takes no stance on any subject or topic it addresses.

Conal liked my invention and responded:

Here's one of the problems with ChatGPT and its brethren: Not only does it spew what Victor calls 'pablumese' but for technical questions it then mixes its pablumese with quantitative nonsense, creating a truly creepy kind of output.

I was curious to see how it would handle the question of how many copper atoms fit into the cross-section of a typical copper wire. It responded in a way that made it sound very knowledgeable, breaking everything down into tiny (sometimes condescending) steps, and yet, at the very end of its perfect logic, it botched its answer, because it was unable to do a conversion between millimeters and picometers correctly.

But here's the kicker: What makes this stuff maximally odious is that the creeps who design it will succeed in taking over the world anyway, because this week "version 4 is astonishingly better than the beta ChatGPT!!!" and version 5 next week will be astonishingly better than…. etc. etc. until they've improved it enough that it really will threaten the jobs of 3/4 of the human race. It must be an absolutely sickening time to be a young person, trying to plan one's career.
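For readers who want to check the arithmetic behind the copper-wire question Conal mentions, here is a back-of-the-envelope sketch in Python. The figures are my own illustrative assumptions, not anything from ChatGPT's output or Conal's test: a wire 1 mm in diameter and a copper atom roughly 256 pm across (twice the ~128 pm metallic radius). The only subtle step is the unit conversion that tripped up the chatbot: 1 mm = 10^9 pm.

# Back-of-the-envelope estimate: copper atoms spanning a wire's cross-section.
# Assumptions (illustrative only): wire diameter 1 mm; copper atomic diameter ~256 pm.
wire_diameter_pm = 1.0 * 1e9          # 1 mm expressed in picometers
atom_diameter_pm = 256                # ~2 x 128 pm metallic radius
atoms_across = wire_diameter_pm / atom_diameter_pm
# Crude area estimate: circular cross-section tiled with atom-sized disks.
atoms_in_cross_section = 3.14159 / 4 * atoms_across ** 2
print(f"atoms across the diameter: ~{atoms_across:.1e}")             # ~3.9e6
print(f"atoms in the cross-section: ~{atoms_in_cross_section:.1e}")  # ~1.2e13

The point is not the exact number (packing geometry is ignored) but that the order of magnitude falls out immediately once the millimeter-to-picometer conversion is done correctly.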

Read the rest of this entry »

Comments (24)


The mind of artificial intelligence

Sean Carroll's Preposterous Universe Podcast #230

Raphaël Millière on How Artificial Intelligence Thinks, March 20, 2023 / Philosophy, Technology, Thinking

Includes a transcript of the two-hour podcast.

Welcome to another episode of Sean Carroll's Mindscape. Today, we're joined by Raphaël Millière, a philosopher and cognitive scientist at Columbia University. We'll be exploring the fascinating topic of how artificial intelligence thinks and processes information. As AI becomes increasingly prevalent in our daily lives, it's important to understand the mechanisms behind its decision-making processes. What are the algorithms and models that underpin AI, and how do they differ from human thought processes? How do machines learn from data, and what are the limitations of this learning? These are just some of the questions we'll be exploring in this episode. Raphaël will be sharing insights from his work in cognitive science, and discussing the latest developments in this rapidly evolving field. So join us as we dive into the mind of artificial intelligence and explore how it thinks.

[The above introduction was artificially generated by ChatGPT.]

Read the rest of this entry »

Comments (5)


ChatGPT-4: threat or boon to the Great Firewall?

"The practical value of LLMs is high enough that it will induce Chinese to seek out the best systems, and they will not be censored by China.”

"Yes, the Chinese Great Firewall will be collapsing"

by Tyler Cowen, Marginal Revolution (March 21, 2023)

Something that the PRC censors had not predicted:

As framed from China:

Fang Bingxing, considered the father of China’s Great Firewall, has raised concerns over GPT-4, warning that it could lead to an “information cocoon” as the generative artificial intelligence (AI) service can provide answers to everything.

Fang said the rise of generative AI tools like ChatGPT, developed by Microsoft-backed OpenAI and now released as the more powerful ChatGPT-4 version, poses a big challenge to governments around the world, according to an interview published on Thursday by Red Star News, a media affiliate of the state-backed Chengdu Economic Daily.

“People’s perspectives can be manipulated as they seek all kinds of answers from AI,” he was quoted as saying.

Fang, a computer scientist and former government official, is widely considered the chief designer of China’s notorious internet censorship and surveillance system. He played a key role in creating and developing the Great Firewall, a sophisticated system of internet filters and blocks that allows the Chinese government to control what its citizens can access online.

Comments (4)


Writing English with Chinese characters

Responding to "Transcriptional Chinese animal imagery for English daily greetings" (3/13/23), Mary Erbaugh, using Yale Cantonese romanization, writes:

————

I've never seen it done with animal names, though they would probably be easier to remember, and amusing.

I'm used to the English word pronunciations in old-fashioned HK (& Taiwan) almanacs, like the Bou Lòh Maahn Yauh (Cant.) / Bāo luò wàn yǒu (Mand.) 包纙萬有 ("all-inclusive"), available in any Chinatown; English title The Book of Myriad Things, an All-Inclusive Reference.  In the exposition below, I use the 1993 Hong Kong edition published by Jeuih Bóu Làuh Yanchaatchóng 聚寳樓印刷廠 [VHM:  聚[jeui6]寳[bou2]樓[lau4/lau2]印[yan3]刷[chaat3]廠[chong2] — Cantonese conversion by this tool; MSM (Modern Standard Mandarin) transcription in pinyin: Jùbǎo lóu yìnshuā chǎng].  It gets re-published every year, in near-identical form, except for the calendars.

Read the rest of this entry »

Comments (2)


No depth-charge channel is too noisy to be confused by

Yuhan Zhang, Rachel Ryskin & Edward Gibson, "A noisy-channel approach to depth-charge illusions." Cognition, March 2023:

The “depth-charge” sentence, No head injury is too trivial to be ignored, is often interpreted as “no matter how trivial head injuries are, we should not ignore them” while the literal meaning is the opposite – “we should ignore them”. Four decades of research have failed to resolve the source of this entrenched semantic illusion. Here we adopt the noisy-channel framework for language comprehension to provide a potential explanation. We hypothesize that depth-charge sentences result from inferences whereby comprehenders derive the interpretation by weighing the plausibility of possible readings of the depth-charge sentences against the likelihood of plausible sentences being produced with errors. In four experiments, we find that (1) the more plausible the intended meaning of the depth-charge sentence is, the more likely the sentence is to be misinterpreted; and (2) the higher the likelihood of our hypothesized noise operations, the more likely depth-charge sentences are to be misinterpreted. These results suggest that misinterpretation is affected by both world knowledge and the distance between the depth-charge sentence and a plausible alternative, which is consistent with the noisy-channel framework.
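The noisy-channel idea in the abstract can be stated compactly. In the general formulation used in this literature (a schematic rendering, not the specific model fitted in the paper), the comprehender infers the intended sentence $s_i$ from the perceived sentence $s_p$ by Bayes' rule:

$$P(s_i \mid s_p) \;\propto\; P(s_i) \cdot P(s_p \mid s_i)$$

Here $P(s_i)$ is the prior plausibility of the intended meaning, and $P(s_p \mid s_i)$ is the likelihood that production or perception noise (e.g., an inserted or deleted negation) turned $s_i$ into $s_p$. For the depth-charge sentence, the literal reading has a very low prior, while a nearby alternative such as "No head injury is so trivial that it should be ignored" has a high prior and requires only a small, plausible edit, so the non-literal interpretation wins.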

Yuhan Zhang discusses the paper in a thread on Twitter.

Speaking of depth, I'm definitely out of mine when it comes to noisy-channel frameworks. But it isn't the case that I'm not so ignorant as to fail to recognize that this paper is not too unimportant for Language Log not to pay no attention to it.

(Hey, ChatGPT — betcha can't make sense out of that!)

Comments (36)


Serif or sans serif?

Most people care about their typefaces

Appearances matter, especially whether fonts have serifs or not.

"Font Wars Spread After State Department Replaces Times New Roman with Calibri

"'I'm banging my head against the wall;' camps divided in fallout from government efforts to make documents easier to read"

By Katie Deighton, WSJ (3/14/23)

One wonders whether it is a matter of functionality and efficiency or of esthetics and taste.  Whatever motivates the confrontation, one thing is evident: people have deeply held opinions for or against one side or the other.

What sounds like a typeface tempest-in-a-teapot has boiled over in the U.S. and U.K., where changes in document requirements have set off a war of words among cantankerous font factions.

The State Department announced in January that Calibri would replace Times New Roman on official documents to make them easier to read. The U.K.’s Home Office, for similar reasons, x-ed out the 83-year-old Times New Roman, which has the wings and feet on letters known as serif style.

Read the rest of this entry »

Comments (23)


"Subscribe to Open"

As the S2O website explains,

“Subscribe to Open” (S2O) is a pragmatic approach for converting subscription journals to open access—free and immediate online availability of research—without reliance on either article processing charges (APCs) or altruism. […]

S2O allows publishers to convert journals from subscriptions to OA, one year at a time. Using S2O, a publisher offers a journal’s current subscribers continued access. If all current subscribers participate in the S2O offer (simply by not opting out) the publisher opens the content covered by that year’s subscription. If participation is not sufficient—for example, if some subscribers delay renewing in the expectation that they can gain access without participating—then that year’s content remains gated.

The offer is repeated every year, with the opening of each year’s content contingent on sufficient participation. In some cases, access to backfile content may be used to enhance the offer.
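To make the yearly mechanics concrete, here is a minimal sketch of the decision logic described above. The function name and the participation threshold are illustrative assumptions of mine, not part of S2O's actual tooling; since the passage speaks of "all current subscribers", the default threshold is 100%.

# Minimal sketch of the S2O yearly decision (illustrative, not S2O's actual tooling).
def open_this_years_content(subscribers, opted_out, required_participation=1.0):
    """Return True if enough subscribers stayed in (i.e., did not opt out)."""
    if not subscribers:
        return False
    participating = [s for s in subscribers if s not in opted_out]
    rate = len(participating) / len(subscribers)
    # Sufficient participation: this year's volume becomes open access.
    # Otherwise it stays gated, and the same offer is repeated next year.
    return rate >= required_participation

Note that nothing here depends on APCs: the journal is funded by the same subscription payments as before, and openness is simply the collective outcome of subscribers not opting out.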

Read the rest of this entry »

Comments (1)


So many words for "donkey"

Almost as many as Eskimo words for "snow".  (hee-hee haw-haw) (see below for a sampling)

I've always been a great admirer of donkeys, and I love to hear them bray and make all sorts of other expressive sounds, some of which I am incapable of adequately expressing in words — especially when they are being obdurately stubborn and are unwilling to move, no matter what.  Anyway, their vocabulary extends way beyond the basic "hee-haw":

Read the rest of this entry »

Comments (25)


This is the 4th time I've gotten Jack and his beanstalk

Bill Benzon shares the response he got from ChatGPT to the prompt, "Tell me a story."

Read the rest of this entry »

Comments (30)