Discourse on the AI Method of Rightly Reasoning
An interesting recent paper (Adithya Bhaskar, Xi Ye, & Danqi Chen, "Language Models that Think, Chat Better", arXiv.org, 9/24/2025) starts like this:
THINKING through the consequences of one’s actions—and revising them when needed—is a defining feature of human intelligence (often called “system 2 thinking”, Kahneman (2011)). It has also become a central aspiration for large language models (LLMs).[1]
The footnote:
[1] Language models think, therefore, language models are?