More on algorithmic culture
« previous post | next post »
In "Agentic culture" (8/30/2025) and "'Moloch's bargain'?" (10/12/2025) I cited some work on how interactions among algorithmic "agents" can create (socially) bad results that were not directly programmed by their inventors. I continue to be surprised at how little attention has been paid to this issue in the media, given the excitement over agentic AI. I've found a fair amount of other research with similar content, as searches like this illustrate, which makes me wonder even more about the relative lack of uptake.
Here's a 2018 review oriented to a legal audience — Steven Van Uytsel, "Artificial intelligence and collusion: A literature overview":
The use of algorithms in pricing strategies has received special attention among competition law scholars. There is an increasing number of scholars who argue that the pricing algorithms, facilitated by increased access to Big Data, could move in the direction of collusive price setting. Though this claim is being made, there are various responses. On the one hand, scholars point out that current artificial intelligence is not yet well-developed to trigger that result. On the other hand, scholars argue that algorithms may have other pricing results rather than collusion. Despite the uncertainty that collusive price could be the result of the use of pricing algorithms, a plethora of scholars are developing views on how to deal with collusive price setting caused by algorithms. The most obvious choice is to work with the legal instruments currently available. Beyond this choice, scholars also suggest constructing a new rule of reason. This rule would allow us to judge whether an algorithm could be used or not. Other scholars focus on developing a test environment. Still other scholars seek solutions outside competition law and elaborate on how privacy regulation or transparency reducing regulation could counteract a collusive outcome. Besides looking at law, there are also scholars arguing that technology will allow us to respond to the excesses of pricing algorithms. It is the purpose of this chapter to give a detailed overview of this debate on algorithms, price setting and competition law.
A 2019 paper by Emilio Calvano et al. ("Artificial Intelligence, Algorithmic Pricing and Collusion") has been cited 819 times:
Pricing algorithms are increasingly replacing human decision making in real marketplaces. To inform the competition policy debate on possible consequences, we run experiments with pricing algorithms powered by Artificial Intelligence in controlled environments (computer simulations).
In particular, we study the interaction among a number of Q-learning algorithms in the context of a workhorse oligopoly model of price competition with Logit demand and constant marginal costs. We show that the algorithms consistently learn to charge supra-competitive prices, without communicating with each other. The high prices are sustained by classical collusive strategies with a finite punishment phase followed by a gradual return to cooperation. This finding is robust to asymmetries in cost or demand and to changes in the number of players.
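For readers who want a concrete picture of the Calvano et al. setup, here is a minimal sketch of that kind of simulation: two independent Q-learners repeatedly pick prices from a small grid, demand is Logit with constant marginal cost, and the state each learner conditions on is last period's pair of prices. All parameter values here are assumed for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

PRICES = np.linspace(1.0, 2.0, 5)   # discrete price grid (assumed)
C, A, MU = 1.0, 2.0, 0.25           # marginal cost, quality, Logit scale
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1  # learning rate, discount, exploration

def profits(i, j):
    """Per-period profit for each seller at price indices (i, j),
    under Logit demand with the outside option normalized to 1."""
    p = np.array([PRICES[i], PRICES[j]])
    util = np.exp((A - p) / MU)
    share = util / (util.sum() + 1.0)
    return (p - C) * share

n = len(PRICES)
# Q[s_i, s_j, a]: value of charging price a given last period's prices
Q = [np.zeros((n, n, n)), np.zeros((n, n, n))]

state = (0, 0)
for t in range(50_000):
    acts = []
    for k in range(2):                     # epsilon-greedy action choice
        if rng.random() < EPS:
            acts.append(int(rng.integers(n)))
        else:
            acts.append(int(Q[k][state].argmax()))
    pi = profits(acts[0], acts[1])
    nxt = (acts[0], acts[1])
    for k in range(2):                     # independent Q-learning updates
        best_next = Q[k][nxt].max()
        Q[k][state][acts[k]] += ALPHA * (pi[k] + GAMMA * best_next
                                         - Q[k][state][acts[k]])
    state = nxt

# Greedy play after learning: which grid prices did the sellers settle on?
greedy = [int(Q[k][state].argmax()) for k in range(2)]
print("learned prices:", [round(float(PRICES[g]), 2) for g in greedy])
```

The point of the exercise is that nothing in the code tells the agents to coordinate; whatever supra-competitive pricing emerges comes entirely from two self-interested learners adapting to each other.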
More recently, there's Eshwar Ram Arunachaleswaran et al., "Algorithmic Collusion Without Threats" (12/13/2024), with deeper mathematical proofs:
There has been substantial recent concern that pricing algorithms might learn to "collude." Supra-competitive prices can emerge as a Nash equilibrium of repeated pricing games, in which sellers play strategies which threaten to punish their competitors who refuse to support high prices, and these strategies can be automatically learned. In fact, a standard economic intuition is that supra-competitive prices emerge from either the use of threats, or a failure of one party to optimize their payoff. Is this intuition correct? Would preventing threats in algorithmic decision-making prevent supra-competitive prices when sellers are optimizing for their own revenue? No. We show that supra-competitive prices can emerge even when both players are using algorithms which do not encode threats, and which optimize for their own revenue. We study sequential pricing games in which a first mover deploys an algorithm and then a second mover optimizes within the resulting environment. We show that if the first mover deploys any algorithm with a no-regret guarantee, and then the second mover even approximately optimizes within this now static environment, monopoly-like prices arise. The result holds for any no-regret learning algorithm deployed by the first mover and for any pricing policy of the second mover that obtains them profit at least as high as a random pricing would — and hence the result applies even when the second mover is optimizing only within a space of non-responsive pricing distributions which are incapable of encoding threats. In fact, there exists a set of strategies, neither of which explicitly encode threats that form a Nash equilibrium of the simultaneous pricing game in algorithm space, and lead to near monopoly prices. This suggests that the definition of "algorithmic collusion" may need to be expanded, to include strategies without explicitly encoded threats.
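The "no threats needed" result can be illustrated with a toy version of the sequential game the abstract describes (this is an illustrative sketch, not the paper's construction; the demand model, price grid, and parameters are all assumed). Seller 1 runs Hedge, a standard no-regret algorithm, over a price grid; seller 2 commits to a single fixed price, a "non-responsive" policy that cannot encode threats, and we grid-search for the fixed price that maximizes seller 2's own revenue against the learning dynamics.

```python
import numpy as np

PRICES = np.linspace(0.0, 1.0, 21)   # shared price grid (assumed)

def demand(p_own, p_other):
    """Simple linear demand with substitutable goods (assumed)."""
    return max(0.0, 1.0 - p_own + 0.5 * p_other)

def run_hedge(p2, rounds=2000, eta=0.5):
    """Seller 1's Hedge dynamics against seller 2's fixed price p2.
    Returns seller 1's mixed strategy and seller 2's average revenue."""
    w = np.ones(len(PRICES))
    total2 = 0.0
    for _ in range(rounds):
        x = w / w.sum()
        # seller 1's expected revenue per candidate price
        pay1 = np.array([p * demand(p, p2) for p in PRICES])
        # seller 2's revenue against seller 1's current average price
        total2 += p2 * demand(p2, float(x @ PRICES))
        w *= np.exp(eta * pay1)      # Hedge update (rewards, not losses)
        w /= w.max()                 # rescale for numerical stability
    return w / w.sum(), total2 / rounds

# Seller 2 optimizes over non-responsive (fixed-price) policies only.
best_p2, _ = max(((p2, run_hedge(p2)[1]) for p2 in PRICES),
                 key=lambda t: t[1])
x1, _ = run_hedge(best_p2)
print("seller 2 commits to:", round(float(best_p2), 2))
print("seller 1 concentrates on:", round(float(PRICES[int(x1.argmax())]), 2))
```

Neither policy here can condition on the other's past play, so neither can punish, yet the second mover's revenue-maximizing commitment shapes the environment the no-regret learner adapts to, which is the mechanism the paper formalizes.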
And price fixing is not the only area where agentic culture might take a bad turn, as discussed in the paper I featured earlier: "Moloch's Bargain: Emergent Misalignment When LLMs Compete for Audiences". I haven't found any discussion of agents edging into other traditional criminal enterprises, but no doubt such things will be Out There soon, if they aren't already.
Julian said,
November 26, 2025 @ 6:14 pm
I commend to the attention of academic writers the 'Enter' key, which is located towards the right side of the keyboard. It's the one with 'Enter' on it and a little left-pointing arrow. It's easily accessed by the right little finger.
It's very useful for creating new paragraphs.
AntC said,
November 26, 2025 @ 6:25 pm
Human market-makers are not immune from groupthink — which is not usually counted as collusion. But also humans do in fact collude in the absence of strong regulatory intervention.
And we've had repeated examples of speculative 'bubbles' at least as far back as the 1630s Tulip Mania. There's widespread suspicion that AI stocks are currently wildly overpriced.
(Also, what @Julian said ;-)
PEG said,
November 27, 2025 @ 2:35 am
The "learning to collude" framing might overstate what's happening. These algorithms aren't devising novel strategies—they're navigating to equilibria that already exist in game theory and market behavior patterns.
The real issue: pricing algorithms efficiently find and implement every anti-competitive pattern already in their training data. They're not inventing collusion, they're discovering the well-traveled routes to it that humans already mapped.
This matters because the risk isn't emergent scheming—it's perfect recall. A human pricing manager might hesitate or inconsistently apply sketchy strategies. The algorithm just optimizes without guilt, routing straight to "effective pricing" and implementing whatever it finds there.
We're not dealing with AI outsmarting us. We're dealing with AI efficiently excavating every morally dubious strategy we've ever written down.
Mark Liberman said,
November 27, 2025 @ 7:19 am
@Julian "I commend to the attention of academic writers the 'Enter' key,":
I agree about the over-long paragraphs in abstracts, but the fault may be with journal policies and/or copy-editor's preferences.
Mark Liberman said,
November 27, 2025 @ 7:21 am
@AntC:
Market collusion is probably the least of the problems, in the end, as the "Moloch" paper illustrates. The race-to-the-bottom discussed in that paper is Old News in human media, even scientific publications.
But I'm (not) looking forward to seeing similar effects in gambling, smuggling, drugs, pornography, etc. — in fact the evidence of an agentic race-to-the-bottom in those areas is probably already Out There.
Mark Liberman said,
November 27, 2025 @ 7:34 am
@PEG "We're dealing with AI efficiently excavating every morally dubious strategy we've ever written down."
As the Moloch paper demonstrates, AI agents do more than this, exploring the space available to them in order to find techniques that work to optimize their results, without the need to learn from anything humans have ever written down.
That doesn't mean that they're smarter than humans, just that they don't need our texts to teach them how to beat the system.