Archive for Words words words

Two new foreign words: Turkish kahvaltı and French pavé

I probably learn at least one or two new foreign words per day, and they always delight me no end.

The first new foreign word I learned today is Turkish kahvaltı (lit., "before coffee"), which means "breakfast".

Inherited from Ottoman Turkish قهوه آلتی (ḳahve altı, "food taken before coffee; especially breakfast or lunch"), from قهوه (ḳahve) and آلت (alt), equivalent to kahve ("coffee") + alt ("under, lower, below") + -ı (possessive suffix), literally "under coffee". (Wiktionary)

This tells us how important coffee is in Turkish life.

Read the rest of this entry »

Comments (22)

Footguns and rakestomping

I've recently noticed two compound neologisms, both involving metaphors about foot-related self-injury.

The first one was in a 6/27/2025 Medium article, "Why Google is Betting 8 Years on a Programming Language That Doesn’t Exist Yet". That article explains that

In 2022, Google introduced Carbon, a potential successor to C++. Unlike Go or Rust, Carbon wasn’t ready for prime time. In fact, it was barely out of the conceptual phase.

And among the reasons given for the effort [emphasis added]:

C++ has steep learning curves and footguns.

The second foot-related compound was in a 7/24/2025 TPM article, "Why is Jeff Bezos rakestomping the Post?".

Read the rest of this entry »

Comments (19)

Interpersonal and socio-cultural alignment

In a comment on "Alignment", Sniffnoy wrote:

At least as far as I'm aware, the application of "alignment" to AI comes from Eliezer Yudkowsky or at least someone in his circles. He used to speak of "friendly AI" and "unfriendly AI". However, the meaning of these terms was fairly different from the plain meaning, which confused people. So at some point he switched to talking about "aligned" or "unaligned" AI.

This is certainly true — see e.g. Yudkowsky's 2016 essay "The AI alignment problem: why it is hard, and where to start".

However, an (almost?) exactly parallel usage was established in the sociological literature, more than half a century earlier, as discussed in Randall Stokes and John Hewitt, "Aligning actions" (1976):

Read the rest of this entry »

Comments (4)

Alignment

In today's email there was a message from AAAI 2026 that included a "Call for the Special Track on AI Alignment":

AAAI-26 is pleased to announce a special track focused on AI Alignment. This track recognizes that as we begin to build more and more capable AI systems, it becomes crucial to ensure that the goals and actions of such systems are aligned with human values. To accomplish this, we need to understand the risks of these systems and research methods to mitigate these risks. The track covers many different aspects of AI Alignment, including but not limited to the following topics:

Read the rest of this entry »

Comments (11)

Bibliographical cornucopia for linguists, part 1

Since we have such an abundance of interesting articles for this fortnight, I will divide the collection into two parts, and provide each entry with an abstract or paragraph-length quotation.

A fundamental question in word learning is how, given only evidence about what objects a word has previously referred to, children are able to generalize to the correct class. How does a learner end up knowing that “poodle” only picks out a specific subset of dogs rather than the broader class and vice versa? Numerous phenomena have been identified in guiding learner behavior such as the “suspicious coincidence effect” (SCE)—that an increase in the sample size of training objects facilitates more narrow (subordinate) word meanings. While SCE seems to support a class of models based in statistical inference, such rational behavior is, in fact, consistent with a range of algorithmic processes. Notably, the broadness of semantic generalizations is further affected by the temporal manner in which objects are presented—either simultaneously or sequentially. First, I evaluate the experimental evidence on the factors influencing generalization in word learning. A reanalysis of existing data demonstrates that both the number of training objects and their presentation-timing independently affect learning. This independent effect has been obscured by prior literature’s focus on possible interactions between the two. Second, I present a computational model for learning that accounts for both sets of phenomena in a unified way. The Naïve Generalization Model (NGM) offers an explanation of word learning phenomena grounded in category formation. Under the NGM, learning is local and incremental, without the need to perform a global optimization over pre-specified hypotheses. This computational model is tested against human behavior on seven different experimental conditions for word learning, varying over presentation-timing, number, and hierarchical relation between training items. Looking both at qualitative parameter-independent behavior and quantitative parameter-tuned output, these results support the NGM and suggest that rational learning behavior may arise from local, mechanistic processes rather than global statistical inference.
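The excerpt doesn't spell out how the Naïve Generalization Model works internally, but the general idea it appeals to — learning that is local and incremental, with a word's meaning widened only as far as the exemplars seen so far require, and no global search over pre-specified hypotheses — can be illustrated with a toy sketch. Everything below (the mini-taxonomy, the lowest-common-ancestor rule, the function names) is a hypothetical illustration of that idea, not the NGM itself:

```python
# Toy sketch only: NOT the paper's Naive Generalization Model (NGM), just an
# illustration of "local and incremental" word-meaning generalization.  The
# learner keeps, per word, the narrowest node in a hand-made taxonomy that
# covers every exemplar seen so far, widening it one observation at a time
# rather than optimizing over a space of pre-specified hypotheses.

# Hypothetical mini-taxonomy (child -> parent); None marks the root.
TAXONOMY = {
    "poodle": "dog",
    "dalmatian": "dog",
    "dog": "animal",
    "cat": "animal",
    "animal": None,
}

def ancestors(node):
    """Chain from a node up to the root, e.g. poodle -> dog -> animal."""
    chain = [node]
    while TAXONOMY.get(node) is not None:
        node = TAXONOMY[node]
        chain.append(node)
    return chain

def widen(meaning, exemplar):
    """Lowest common ancestor of the current meaning and a new exemplar."""
    meaning_chain = ancestors(meaning)
    for node in ancestors(exemplar):
        if node in meaning_chain:
            return node
    return None  # disjoint nodes (cannot happen with a single-rooted taxonomy)

def learn(exemplars):
    """Process exemplars one at a time, keeping only the current best guess."""
    meaning = None
    for ex in exemplars:
        meaning = ex if meaning is None else widen(meaning, ex)
        print(f"after seeing {ex!r:13} -> meaning is {meaning!r}")
    return meaning

learn(["poodle", "poodle", "poodle"])   # stays at the narrow sense "poodle"
learn(["poodle", "dalmatian"])          # widens to the basic-level sense "dog"
```

Run as written, three poodles leave the meaning at the narrow sense "poodle", while a single dalmatian widens it to "dog". The guess only ever grows, no hypothesis space is enumerated, and the statistical "suspicious coincidence" effect discussed in the abstract is deliberately not modeled here.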

Read the rest of this entry »

Comments off

"AI" == "vehicle"?

Back in March, the AAAI ("Association for the Advancement of Artificial Intelligence") published an "AAAI Presidential Panel Report on the Future of AI Research":

The AAAI 2025 presidential panel on the future of AI research aims to help all AI stakeholders navigate the recent significant transformations in AI capabilities, as well as AI research methodologies, environments, and communities. It includes 17 chapters, each covering one topic related to AI research, and sketching its history, current trends and open challenges. The study has been conducted by 25 AI researchers and supported by 15 additional contributors and 475 respondents to a community survey.

You can read the whole thing here — and you should, if you're interested in the topic.

Read the rest of this entry »

Comments (4)

Mapping the exposome

More than 20 years ago, I posted about the explosion of -ome and -omic words in biology: "-ome is where the heart is", 10/27/2004. I listed more than 40 examples:

behaviourome, cellome, clinome, complexome, cryptome, crystallome, cytome, degradome, enzymome, epigenome, epitome, expressome, fluxome, foldome, functome, glycome, immunome, ionome, interactome, kinome, ligandome, localizome, metallome, methylome, morphome, nucleome, ORFeome, parasitome, peptidome, phenome, phostatome, physiome, regulome, saccharome, secretome, signalome, systeome, toponome, toxicome, translatome, transportome, vaccinome, and variome.

Read the rest of this entry »

Comments (14)

Linguistics bibliography roundup

Something for everyone

Read the rest of this entry »

Comments (3)

Pronouncing DOGE

Coby L. wrote to ask why DOGE is pronounced with a final /ʒ/ rather than a final /dʒ/.

The Department Of Government Efficiency is clearly a backronym of the Doge meme, which references a Shiba Inu dog. According to Wikipedia, the meme can be pronounced /doʊʒ/ or /doʊdʒ/ or /doʊɡ/, though all I've heard from the media is /doʊʒ/. I guess Coby's experience is similar, hence the question. Wikipedia says that the memetic cryptocurrency Dogecoin is pronounced either /doʊʒkɔɪn/ or /doʊdʒkɔɪn/, but apparently not /doʊɡkɔɪn/.

Read the rest of this entry »

Comments (21)

Hiberno-English: it's a soft day

Spending some time in Ireland, I hear people saying "It's a soft day" or "It's a soft day, thank God!". Not knowing what that expression implies, I do a search and find that "A soft day is what the Irish call a very very damp fog or a mizzle, which is a cross between a mist and a drizzle." (source) Mizzle is also the name of a shade of paint. (source)

"Soft day" is a phrase derived from Irish lá bog (lit.) ("overcast day; light drizzle/mist").

That reaction to a moist, overcast day tells you something about the Irish mindset and helps you understand Irish sentiment and humor.

Read the rest of this entry »

Comments (13)

Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch

Just a couple of months ago, I burrowed my way into the center of one of the world's most famous Neolithic barrows: a passage tomb at Newgrange (ca. 3200 BC, older than Stonehenge, which I had visited the previous week, and the Egyptian pyramids, which I have yet to behold in person) in County Meath, Ireland. I went in with J. P. Mallory, Indo-European archeolinguist and author of In Search of the Irish Dreamtime: Archaeology and Early Irish Literature (London: Thames & Hudson, 2016); with all 6'7" of him and 6'2" of me, it was a difficult crawl / squeeze for the two of us. So I was keen to read this article:

To Historians and Tourists, It’s a Mysterious Ancient Burial Site. It Used to Be My Playground.
Author Oliver Smith spent many childhood days exploring a prehistoric mound near his grandparents’ house in Wales. As an adult, he found himself irresistibly drawn back to it—and other sites like it.
By Oliver Smith. WSJ (Feb. 12, 2025)

Read the rest of this entry »

Comments (6)

ADS WotY 2024

The American Dialect Society's Word of the Year vote was last night, and the overall WotY winner was rawdog. You can read the whole list and voting tallies in the ADS press release.

Read the rest of this entry »

Comments (3)

Crisps and chips

I love potato chips, but am not a fan of french fries, so I'm all confused when I'm in Britain where "chips" are "crisps" and "fries" are "chips"!

One reason I like potato chips is because they are salty and savory to counteract all the sweets I consume, so I keep a big box of 18 small bags of chips and Doritos, Cheetos, and Fritos on hand to rescue me from hunger pangs whenever I feel them coming on.  But I dislike Pringles because they're not real.

The British take their crisps more seriously than any other nation
No other snack bridges the class divide in the same way
Economist (12/19/24)

This is a book review of Crunch: An Ode to Crisps, by Natalie Whittle (Faber; 256 pages; £18.99).

Read the rest of this entry »

Comments (60)