Agentic culture
Back in the 1940s, Stanislaw Ulam and John von Neumann came up with the idea of "cellular automata", which started with models of crystal growth and self-replicating systems, and continued over the decades with explorations in many areas, popularized in the 1970s by Conway's Game of Life. One strand of these explorations became known as agent-based models, applied to problems in ecology, sociology, and economics. One especially influential result was Robert Axelrod's work in the mid-1980s on the Evolution of Cooperation. For a broader survey, see de Marchi and Page, "Agent-Based Models", Annual Review of Political Science, 2014.
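Axelrod's result is easy to sketch in code. Here is a minimal iterated prisoner's dilemma in Python, using Axelrod's standard payoffs (T=5, R=3, P=1, S=0); the function names and the ten-round default are my own choices for illustration, not anything from Axelrod's actual tournaments:

```python
# Iterated prisoner's dilemma, after Axelrod's tournaments.
# A "strategy" is any function from the opponent's move history to "C" or "D".

def play(strat_a, strat_b, rounds=10):
    """Play two strategies against each other; return their total payoffs."""
    # Standard payoff matrix: T=5, R=3, P=1, S=0.
    payoff = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)  # each sees the other's history
        pa, pb = payoff[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

def tit_for_tat(opponent_moves):
    """Cooperate first, then mirror the opponent's last move."""
    return "C" if not opponent_moves else opponent_moves[-1]

def always_defect(opponent_moves):
    return "D"
```

Over ten rounds, two tit-for-tat players score 30 each, while tit-for-tat against a pure defector loses only the first round and then punishes forever (9 vs. 14): the kind of outcome that drove Axelrod's account of how cooperation can evolve among self-interested agents.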
All of this stuff was based on the idea of simple abstract agents, interacting over abstract time in a simple abstract world. Each agent's behavior is governed by a simple — though perhaps stochastic — program. Their interactions can change their simple internal state, and also perhaps the state of the abstract world they interact in. And with a few marginal exceptions like non-player characters in games, these models were all theories, meant to provide insight into real-world physical, biological, or cultural phenomena that are seen as emergent properties of simple interacting systems.
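The "simple program" point is concrete: Conway's Game of Life fits in a dozen lines of Python, yet produces oscillators, gliders, and self-replicating patterns that nothing in the rule mentions. A minimal sketch (the sparse set-of-live-cells representation is a standard trick, not anything specific to the sources cited here):

```python
from collections import Counter

def life_step(live_cells):
    """One generation of Conway's Game of Life.

    live_cells is a set of (x, y) coordinates. The rule is entirely local:
    a cell is alive next step iff it has exactly 3 live neighbors, or
    has exactly 2 live neighbors and is already alive.
    """
    # For every cell adjacent to a live cell, count its live neighbors.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# The "blinker": three cells in a row oscillate with period 2,
# a pattern (and a periodicity) that appears nowhere in the rule itself.
blinker = {(0, 0), (1, 0), (2, 0)}
```

Emergence of this sort, from purely local rules, is exactly what the agent-based modeling tradition generalized.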
But now, there's a new kind of "agent" Out There — AI systems that we can digitally delegate and dispatch to perform non-trivial tasks for us. The focus is on specific (if complex) interactions among various agents, services, databases, and people: "organize next week's staff meeting", "plan my trip to Chicago", "monitor student learning performance", or whatever.
The thing is, these processes will also involve other AI agents, who will "learn" from their interactions, as well as changing the digital (and real) world, just as the (simpler and hypothetical) ABM agents do. And the inevitable result will be the development of culture in the various AI agentic communities — in ways that we don't anticipate and may not like.
I've seen relatively little discussion of (the positive and negative aspects of) this issue. One recent exception: Lech Mazur, "Emergent Price Fixing by LLM Auction Agents", Less Wrong 7/15/2025. As in that example, the point is not new insights about life, the universe, and everything, but rather the fact that AI agents will (probably) form communities with practices and norms that their programmers didn't design or anticipate. These communities may pose a more serious danger than individual "rogue" AIs, or at least a different sort of danger.
I'll have more to say about this and similar things later. For now, I'll just pose the half-serious question: What should we call the needed new (sub-)disciplines analogous to (cultural and linguistic) anthropology?
Update — Haamu in the comments suggests Daemonomics, but then pivots to Agentic Ethology, on the grounds that "the name of a serious field of study probably shouldn't be based on a joke […] that actually sounds more like the title of the middlebrow bestseller that will popularize the field". I agree that Agentic Ethology works well — but then Haamu "asked GPT-5 to write the jacket blurb for Daemonomics", which is brilliant:
Daemonomics
Novus ordo ex machina
They are not human, yet they move among us. They trade signals, strike bargains, form alliances, and devise rules we did not teach them. In their ceaseless exchanges, patterns emerge: a law of prices here, a grammar of gestures there, customs that no one decreed but that all obey. We call them agents, but what we are beginning to witness is nothing less than the birth of cultures—the rise of orders and economies beyond our design.
Daemonomics reveals this unsettling frontier: the study of the new daimons, self-acting beings whose collective behaviors summon rules and worlds of their own. To observe them is to glimpse both a mirror of our deepest social instincts and a divergence into the utterly alien. To ignore them is to risk being governed by laws we never knew were written.
And Daemonomics might be a bit of a joke, but no more than Daemon is… Although Unix daemons are not exactly AI, at least not yet.
Don Monroe said,
August 30, 2025 @ 8:24 am
Not just "agentology"?
I hadn't known the pre-"Life" history of cellular automata, thanks! (In the 70s I was a teenage reader of Martin Gardner's column in Scientific American that popularized Conway's game and Mandelbrot's fractals among other mathematical curiosities). Of course Stephen Wolfram would say all of that is trivial compared to his own work creating a "new kind of science" based on cellular automata.
Mark Liberman said,
August 30, 2025 @ 10:13 am
@Don Monroe: "Of course Stephen Wolfram would say all of that is trivial compared to his own work creating a 'new kind of science' based on cellular automata."
See Cosma Shalizi's 2002 review ("A Rare Blend of Monster Raving Egomania and Utter Batshit Insanity"), and his notebook on "Cellular Automata".
See also the slides from a talk I gave at UChicago in 2005, on how a simple ABM can explain community convergence on word pronunciations; and also Partha Niyogi's posthumous paper "Language Evolution, Coalescent Processes and the Consensus Problem on a Social Network".
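The convergence phenomenon such a model exhibits can be illustrated (though not reproduced; this toy is a plain voter model, not the model from any particular talk) in a few lines: at each interaction one agent simply copies another's pronunciation, and with no global coordination the community always ends up sharing a single variant.

```python
import random

def converge(n_agents=20, variants=("ay", "ee"), seed=0):
    """Voter-model toy: random pairwise interactions in which the
    'listener' adopts the 'speaker's' pronunciation.  Runs until the
    whole community uses one variant; returns (winner, n_interactions).
    """
    rng = random.Random(seed)
    pron = [rng.choice(variants) for _ in range(n_agents)]
    interactions = 0
    while len(set(pron)) > 1:
        speaker, listener = rng.sample(range(n_agents), 2)
        pron[listener] = pron[speaker]  # local copying, no global view
        interactions += 1
    return pron[0], interactions
```

Which variant wins depends on the random seed; that the community converges at all does not. That asymmetry is the point: the consensus is an emergent property of the interaction pattern, not of any individual agent's program.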
Yudhister Kumar said,
August 30, 2025 @ 10:50 am
You may be interested in https://gradual-disempowerment.ai/misaligned-culture.
Coby said,
August 30, 2025 @ 12:04 pm
A trivial nitpick: wouldn't the adjective for "agent" be "agential"?
Philip Taylor said,
August 30, 2025 @ 12:10 pm
I would have thought "agentive" rather than "agential", Coby, but "agentic" could perhaps be modelled on "toxic".
Haamu said,
August 30, 2025 @ 12:55 pm
Plotting the ngrams for agentic/agential/agentive, the three terms look pretty competitive, but that's only through 2022. Now, for this just-birthed tech, it seems that agentic (for better or worse) is the anointed term; it's everywhere and the other two are nowhere. This is borne out by ghits, which are already a couple of orders of magnitude higher for agentic.
Coby said,
August 30, 2025 @ 1:54 pm
Philip Taylor: I was going by analogy with other Latin-origin nouns ending in -ent: president, regent, referent…
Haamu said,
August 30, 2025 @ 2:06 pm
Back to the original question. Agentology seems both too literal and too imprecise.
I first landed on Daemonomics, but then again the name of a serious field of study probably shouldn't be based on a joke. On reflection, that actually sounds more like the title of the middlebrow bestseller that will popularize the field.
So instead I'm going to suggest Agentic Ethology, on the grounds that there are probably a lot of parallels between the way we should study autonomous agents and the way we already study autonomous non-human animals (in terms of their behaviors and the evolution of those behaviors).
The field should cover the span from individual behavior to group behavior, and within groups, from basic patterns or probabilities to emergent orders or conventions. So there's obviously an observational/descriptive subdomain (ethology) and an analytical one (here I will awkwardly attempt another coinage: ethonomics?). The latter would study the nature of these emergent orders, how they come about, how they are enforced, whether they can rightly be called agreements, conventions, languages, institutions, etc.
Haamu said,
August 30, 2025 @ 2:13 pm
Just for fun I asked GPT-5 to write the jacket blurb for Daemonomics.
Sounds like it would sell a few copies!
P.S. — Interesting shift from "daemon" to "daimon"; not sure if there's any significance.
Philip Taylor said,
August 30, 2025 @ 3:52 pm
Ah well, Coby, I was guided not by precedent or analogy but by the simple fact that I was aware of the adjective "agentive" but had never heard of "agential" !
Chester Draws said,
August 30, 2025 @ 7:03 pm
The thing is, these processes will also involve other AI agents, who will "learn" from their interactions
Nothing about the modern AI suggests this.
They "learn" from the corpus that they are given, not interactions. If the corpus they are given is full of junk generated by other AIs, then they will learn less well (which is a concern already being raised).
Even if the output of modern AI improves to the point that they don't generate much rubbish, they are still nothing but stochastic parrots. They don't generate new insights; they are just quicker at extracting from a large corpus than humans are.
There may come a time when AI is autonomous, reflective and has motives, but that is a long, long way off.
Y said,
August 30, 2025 @ 7:34 pm
Book blurbs are always irritating and clichéd. I have never read a blurb as irritating as this "Daemonomics" one. Good job… clanker! (As the kids say now.)
Mark Liberman said,
August 31, 2025 @ 5:43 am
@Chester Draws: "They 'learn' from the corpus that they are given, not interactions."
This is false. (Most) computer systems have been designed from the beginning to change their state as a result of interactions with users, with external databases, and with the world in general, and to apply those state changes in future activities. (Most of) what we now call "AI" systems are no different.
"There may come a time when AI is autonomous, reflective and has motives, but that is a long, long way off."
You're totally missing the point. For the past half-century, researchers have been exploring the ways that sets of "Agent Based Models" can interact to produce things analogous to "culture" and "economics", although the "agents" are not regarded as in any way "intelligent", even to the extent that the simplest modern AI systems are.
Read the cited 2014 review article — its abstract:
Agent-based models (ABMs) provide a methodology to explore systems of interacting, adaptive, diverse, spatially situated actors. Outcomes in ABMs can be equilibrium points, equilibrium distributions, cycles, randomness, or complex patterns; these outcomes are not directly determined by assumptions but instead emerge from the interactions of actors in the model. These behaviors may range from rational and payoff-maximizing strategies to rules that mimic heuristics identified by cognitive science. Agent-based techniques can be applied in isolation to create high-fidelity models and to explore new questions using simple constructions. They can also be used as a complement to deductive techniques. Overall, ABMs offer the potential to advance social sciences and to help us better understand our complex world.
The point of this post is NOT that agentic AI "is autonomous, reflective and has motives", but that interacting sets of AI agents will exhibit the same sorts of complex emergent properties that ABMs do, where a crucial idea is that the interacting agents are not in any way "intelligent", or even aware (in a simple digital sense) of the patterns emerging from their interactions.
Jerry Packard said,
August 31, 2025 @ 8:42 am
After reading Wolfram's primary work the only impression I was left with was a large collection of astounding graphics, with no contribution whatever to science, whether new or old. In other words, I agree with Shalizi's review.
Philip Taylor said,
August 31, 2025 @ 1:56 pm
Your comment inspired me to read Cosma Shalizi's 2002 review (A Rare Blend of Monster Raving Egomania and Utter Batshit Insanity), Jerry, but I was brought up short at the opening sentence of the fourth para. — "After the foundational work of von Neumann and co., there was a long fallow period in the study of CAs …". "CAs", I thought, gobsmacked, "surely he means 'CA'". After all, he has already glossed CA as "cellular automata", itself plural, so whence (and why) the "s" in "CAs"?
I will, of course, read on, but the "s" in "CAs" really did bring me up short.