Archive for Words words words

Meh

The OED dates meh as an interjection back to 1992, in an internet newsgroup, and as an adjective back to 2007 in The Guardian:

The man could scarcely walk. Two hours later he was cheerfully high-kicking a suicide bomber out the back of a train. Nuts. But somehow it all seemed, to use a bit of internet parlance, a bit ‘meh’.

But this bit of "internet parlance" has started showing up in news headlines, without excuses or scare quotes, and not just in places like college papers.

[Update: For more on the origins and progress of meh, see "Meh-ness to society" (Ben Zimmer, 6/8/2006), "Awwa, meh, feh, heh" (Ben Zimmer, 2/16/2007), "The 'meh' wars" (Ben Zimmer, 11/21/2008), "The 'meh' wars, part 2" (11/24/2008), "Meh again" (Arnold Zwicky, 12/1/2011), "Words for 'meh'" (Mark Liberman, 12/22/2011), "Three scenes in the life of 'meh'" (Ben Zimmer, 2/26/2012).]

Read the rest of this entry »

Comments (40)

The invention of English

Comments (23)

Intentional for good

Marie Solis, "When Did Everything Become So ‘Intentional’?", NYT 9/29/2025:

Dating, walking, working out, watching a movie at home, watching a movie in the theater, thrift shopping, grocery shopping, meal prepping, playing trivia, making coffee, drinking coffee, consuming alcohol, making friends, making plans with friends, playing the guitar, journaling, arguing, reading, thinking, scrolling, breathing.

You can just do all of these things. Or you can do them “intentionally,” as a growing chorus of lifestyle gurus, influencers and perhaps slightly overtherapized people you may know personally are preaching lately. […]

A close linguistic relative to mindfulness, living intentionally suggests being present and self-aware. Your words and actions are in near-perfect alignment. Possibly, you’ve meditated recently. True to its literal definition, being “intentional” also implies a series of deliberate choices.

Read the rest of this entry »

Comments (14)

Two new foreign words: Turkish kahvaltı and French pavé

I probably learn at least one or two new foreign words per day, and they always delight me no end.

The first new foreign word I learned today is Turkish kahvaltı (lit., "before coffee"), which means "breakfast".

Inherited from Ottoman Turkish قهوه آلتی (ḳahve altı, food taken before coffee; especially breakfast or lunch), from قهوه (ḳahve) and آلت (alt), equivalent to kahve (coffee) +‎ alt (under, lower, below) +‎ -ı (possessive suffix), literally under coffee. (Wiktionary)

This tells us how important coffee is in Turkish life.

Read the rest of this entry »

Comments (22)

Footguns and rakestomping

I've recently noticed two compound neologisms, both involving metaphors about foot-related self-injury.

The first one was in an article in Medium on 6/27/2025, "Why Google is Betting 8 Years on a Programming Language That Doesn’t Exist Yet". That article explains that

In 2022, Google introduced Carbon, a potential successor to C++. Unlike Go or Rust, Carbon wasn’t ready for prime time. In fact, it was barely out of the conceptual phase.

And among the reasons given for the effort [emphasis added]:

C++ has steep learning curves and footguns.

The second foot-related compound was in a 7/24/2025 TPM article, "Why is Jeff Bezos rakestomping the Post?".

Read the rest of this entry »

Comments (19)

Interpersonal and socio-cultural alignment

In a comment on "Alignment", Sniffnoy wrote:

At least as far as I'm aware, the application of "alignment" to AI comes from Eliezer Yudkowsky or at least someone in his circles. He used to speak of "friendly AI" and "unfriendly AI". However, the meaning of these terms was fairly different from the plain meaning, which confused people. So at some point he switched to talking about "aligned" or "unaligned" AI.

This is certainly true — see e.g. Yudkowsky's 2016 essay "The AI alignment problem: why it is hard, and where to start".

However, an (almost?) exactly parallel usage was established in the sociological literature four decades earlier, as discussed in Randall Stokes and John Hewitt, "Aligning actions" (1976):

Read the rest of this entry »

Comments (4)

Alignment

In today's email there was a message from AAAI 2026 that included a "Call for the Special Track on AI Alignment":

AAAI-26 is pleased to announce a special track focused on AI Alignment. This track recognizes that as we begin to build more and more capable AI systems, it becomes crucial to ensure that the goals and actions of such systems are aligned with human values. To accomplish this, we need to understand the risks of these systems and research methods to mitigate these risks. The track covers many different aspects of AI Alignment, including but not limited to the following topics:

Read the rest of this entry »

Comments (11)

Bibliographical cornucopia for linguists, part 1

Since we have such an abundance of interesting articles for this fortnight, I will divide the collection into two parts, and provide each entry with an abstract or paragraph length quotation.

A fundamental question in word learning is how, given only evidence about what objects a word has previously referred to, children are able to generalize to the correct class. How does a learner end up knowing that “poodle” only picks out a specific subset of dogs rather than the broader class and vice versa? Numerous phenomena have been identified in guiding learner behavior such as the “suspicious coincidence effect” (SCE)—that an increase in the sample size of training objects facilitates more narrow (subordinate) word meanings. While SCE seems to support a class of models based in statistical inference, such rational behavior is, in fact, consistent with a range of algorithmic processes. Notably, the broadness of semantic generalizations is further affected by the temporal manner in which objects are presented—either simultaneously or sequentially. First, I evaluate the experimental evidence on the factors influencing generalization in word learning. A reanalysis of existing data demonstrates that both the number of training objects and their presentation-timing independently affect learning. This independent effect has been obscured by prior literature’s focus on possible interactions between the two. Second, I present a computational model for learning that accounts for both sets of phenomena in a unified way. The Naïve Generalization Model (NGM) offers an explanation of word learning phenomena grounded in category formation. Under the NGM, learning is local and incremental, without the need to perform a global optimization over pre-specified hypotheses. This computational model is tested against human behavior on seven different experimental conditions for word learning, varying over presentation-timing, number, and hierarchical relation between training items. Looking both at qualitative parameter-independent behavior and quantitative parameter-tuned output, these results support the NGM and suggest that rational learning behavior may arise from local, mechanistic processes rather than global statistical inference.

Read the rest of this entry »

Comments off

"AI" == "vehicle"?

Back in March, the AAAI ("Association for the Advancement of Artificial Intelligence") published an "AAAI Presidential Panel Report on the Future of AI Research":

The AAAI 2025 presidential panel on the future of AI research aims to help all AI stakeholders navigate the recent significant transformations in AI capabilities, as well as AI research methodologies, environments, and communities. It includes 17 chapters, each covering one topic related to AI research, and sketching its history, current trends and open challenges. The study has been conducted by 25 AI researchers and supported by 15 additional contributors and 475 respondents to a community survey.

You can read the whole thing here — and you should, if you're interested in the topic.

Read the rest of this entry »

Comments (4)

Mapping the exposome

More than 20 years ago, I posted about the explosion of -ome and -omic words in biology: "-ome is where the heart is", 10/27/2004. I listed more than 40 examples:

behaviourome, cellome, clinome, complexome, cryptome, crystallome, cytome, degradome, enzymome, epigenome, epitome, expressome, fluxome, foldome, functome, glycome, immunome, ionome, interactome, kinome, ligandome, localizome, metallome, methylome, morphome, nucleome, ORFeome, parasitome, peptidome, phenome, phostatome, physiome, regulome, saccharome, secretome, signalome, systeome, toponome, toxicome, translatome, transportome, vaccinome, and variome.

Read the rest of this entry »

Comments (14)

Linguistics bibliography roundup

Something for everyone

Read the rest of this entry »

Comments (3)

Pronouncing DOGE

Coby L. wrote to ask why DOGE is pronounced with a final /ʒ/ rather than a final /dʒ/.

The Department Of Government Efficiency is clearly a backronym of the Doge meme, which references a Shiba Inu dog. According to Wikipedia, the meme can be pronounced /doʊʒ/ or /doʊdʒ/ or /doʊɡ/, though all I've heard from the media is /doʊʒ/. I guess Coby's experience is similar, hence the question. Wikipedia says that the memetic cryptocurrency Dogecoin is pronounced either /doʊʒkɔɪn/ or /doʊdʒkɔɪn/, but apparently not /doʊgkɔɪn/.

Read the rest of this entry »

Comments (21)

Hiberno-English: it's a soft day

Spending some time in Ireland, I hear people saying "It's a soft day" or "It's a soft day, thank God!".  Not knowing what that expression implies, I do a search and find that "A soft day is what the Irish call a very very damp fog or a mizzle, which is a cross between a mist and a drizzle." (source)  Mizzle is also the name of a shade of paint. (source)

"Soft day" is a phrase derived from Irish lá bog (lit.) ("overcast day; light drizzle/mist").

That reaction to a moist, overcast day tells you something about the Irish mindset and helps you understand Irish sentiment and humor.

Read the rest of this entry »

Comments (13)