More on algorithmic culture
In "Agentic culture" (8/30/2025) and "'Moloch's bargain'?" (10/12/2025) I cited some work on how interactions among algorithmic "agents" can create (socially) bad results that were not directly programmed by their inventors. I continue to be surprised at how little attention has been paid to this issue in the media, given the excitement over agentic AI. I've found a fair amount of other research with similar content, as searches like this illustrate, which makes me wonder even more about the relative lack of uptake.
Here's a 2018 review oriented to a legal audience — Steven Van Uytsel, "Artificial intelligence and collusion: A literature overview":
The use of algorithms in pricing strategies has received special attention among competition law scholars. There is an increasing number of scholars who argue that the pricing algorithms, facilitated by increased access to Big Data, could move in the direction of collusive price setting. Though this claim is being made, there are various responses. On the one hand, scholars point out that current artificial intelligence is not yet well-developed to trigger that result. On the other hand, scholars argue that algorithms may have other pricing results rather than collusion. Despite the uncertainty that collusive price could be the result of the use of pricing algorithms, a plethora of scholars are developing views on how to deal with collusive price setting caused by algorithms. The most obvious choice is to work with the legal instruments currently available. Beyond this choice, scholars also suggest constructing a new rule of reason. This rule would allow us to judge whether an algorithm could be used or not. Other scholars focus on developing a test environment. Still other scholars seek solutions outside competition law and elaborate on how privacy regulation or transparency reducing regulation could counteract a collusive outcome. Besides looking at law, there are also scholars arguing that technology will allow us to respond to the excesses of pricing algorithms. It is the purpose of this chapter to give a detailed overview of this debate on algorithms, price setting and competition law.
A 2019 paper by Emilio Calvano et al. ("Artificial Intelligence, Algorithmic Pricing and Collusion") has been cited 819 times:
Pricing algorithms are increasingly replacing human decision making in real marketplaces. To inform the competition policy debate on possible consequences, we run experiments with pricing algorithms powered by Artificial Intelligence in controlled environments (computer simulations).
In particular, we study the interaction among a number of Q-learning algorithms in the context of a workhorse oligopoly model of price competition with Logit demand and constant marginal costs. We show that the algorithms consistently learn to charge supra-competitive prices, without communicating with each other. The high prices are sustained by classical collusive strategies with a finite punishment phase followed by a gradual return to cooperation. This finding is robust to asymmetries in cost or demand and to changes in the number of players.
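The flavor of Calvano et al.'s setup can be sketched in a few lines of Python: two Q-learning sellers repeatedly price on a discrete grid, facing logit demand with constant marginal cost, where each seller's state is the pair of last-period prices. The grid, demand parameters, learning rate, and exploration schedule below are my own illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def logit_demand(p, a=2.0, a0=0.0, mu=0.25):
    """Market shares under logit demand with an outside good (illustrative parameters)."""
    u = np.exp((a - p) / mu)
    return u / (u.sum() + np.exp(a0 / mu))

def simulate(episodes=20000, seed=0):
    """Two Q-learning sellers; the state is both sellers' previous price indices."""
    rng = np.random.default_rng(seed)
    prices = np.linspace(1.0, 2.0, 10)   # discretized price grid (assumed)
    c = 1.0                              # constant marginal cost
    n, k = 2, len(prices)
    Q = np.zeros((n, k, k, k))           # Q[i][s1, s2, a] for seller i
    alpha, gamma = 0.15, 0.95            # learning rate and discount (assumed)
    state = (0, 0)
    for t in range(episodes):
        eps = np.exp(-1e-4 * t)          # decaying epsilon-greedy exploration
        acts = tuple(rng.integers(k) if rng.random() < eps
                     else int(np.argmax(Q[i][state])) for i in range(n))
        p = prices[list(acts)]
        profit = (p - c) * logit_demand(p)
        for i in range(n):               # standard Q-learning update for each seller
            target = profit[i] + gamma * Q[i][acts].max()
            Q[i][state][acts[i]] += alpha * (target - Q[i][state][acts[i]])
        state = acts
    return prices[list(state)], Q
```

Nothing in this sketch rewards coordination directly; the paper's point is that agents like these nevertheless tend to settle on supra-competitive prices sustained by learned punish-and-return dynamics.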
More recently, there's Eshwar Ram Arunachaleswaran et al., "Algorithmic Collusion Without Threats" (12/13/2024), which backs the concern with mathematical proofs rather than simulations:
There has been substantial recent concern that pricing algorithms might learn to "collude." Supra-competitive prices can emerge as a Nash equilibrium of repeated pricing games, in which sellers play strategies which threaten to punish their competitors who refuse to support high prices, and these strategies can be automatically learned. In fact, a standard economic intuition is that supra-competitive prices emerge from either the use of threats, or a failure of one party to optimize their payoff. Is this intuition correct? Would preventing threats in algorithmic decision-making prevent supra-competitive prices when sellers are optimizing for their own revenue? No. We show that supra-competitive prices can emerge even when both players are using algorithms which do not encode threats, and which optimize for their own revenue. We study sequential pricing games in which a first mover deploys an algorithm and then a second mover optimizes within the resulting environment. We show that if the first mover deploys any algorithm with a no-regret guarantee, and then the second mover even approximately optimizes within this now static environment, monopoly-like prices arise. The result holds for any no-regret learning algorithm deployed by the first mover and for any pricing policy of the second mover that obtains them profit at least as high as a random pricing would — and hence the result applies even when the second mover is optimizing only within a space of non-responsive pricing distributions which are incapable of encoding threats. In fact, there exists a set of strategies, neither of which explicitly encode threats that form a Nash equilibrium of the simultaneous pricing game in algorithm space, and lead to near monopoly prices. This suggests that the definition of "algorithmic collusion" may need to be expanded, to include strategies without explicitly encoded threats.
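The mechanism can be caricatured in a small simulation: the first mover runs a no-regret learner (here multiplicative weights, a standard no-regret algorithm) over a price grid, while the second mover searches only over constant, non-responsive prices — strategies that cannot encode threats — for the one that earns it the most against the learner. The demand model, grid, and learning rate are my own illustrative assumptions, not the paper's construction:

```python
import numpy as np

GRID = np.linspace(1.0, 2.0, 11)     # shared discrete price grid (assumed)

def profits(p1, p2, c=1.0, a=2.0, mu=0.25):
    """Profit pair for two sellers under a simple logit demand (illustrative)."""
    p = np.array([p1, p2])
    u = np.exp((a - p) / mu)
    share = u / (u.sum() + 1.0)      # the "+1" is an outside option
    return (p - c) * share

def second_mover_payoff(p2, T=2000, eta=0.5, seed=0):
    """Average payoff of a fixed price p2 against a multiplicative-weights learner."""
    rng = np.random.default_rng(seed)
    w = np.ones(len(GRID))
    total = 0.0
    for _ in range(T):
        p1 = rng.choice(GRID, p=w / w.sum())
        total += profits(p1, p2)[1]
        # full-information no-regret update for the first mover
        rewards = np.array([profits(p, p2)[0] for p in GRID])
        w *= np.exp(eta * rewards)
    return total / T

def best_fixed_price(T=2000):
    """Non-responsive price maximizing the second mover's payoff."""
    payoffs = [second_mover_payoff(p2, T=T) for p2 in GRID]
    return GRID[int(np.argmax(payoffs))]
```

Because the learner drifts toward the best response to whatever constant price it faces, the second mover is effectively picking its favorite point on the first mover's reaction curve — no threat or punishment is ever encoded, yet (as the paper proves in its more general setting) prices need not fall to the competitive level.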
And price fixing is not the only area where agentic culture might take a bad turn, as discussed in the paper I featured earlier: "Moloch's Bargain: Emergent Misalignment When LLMs Compete for Audiences". I haven't found any discussion of agents edging into other traditional criminal enterprises, but no doubt such things will be Out There soon, if they aren't already.