Replicate evolve the image…


From r/chatgpt:

I tried the "Create the exact replica of this image, don't change a thing" 101 times, but with Dwayne Johnson
posted by u/Foreign_Builder_2238 in r/ChatGPT

Similar headshot evolutions have been around for a while, but this is the first one that I've seen morphing into "modernism".

For some reason, the analogous evolution of text or speech passages doesn't seem to be a thing, though it certainly ought to be possible. Maybe people haven't done it because there's no linguistic equivalent of immediate visual perception?
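
There's no technical obstacle, at any rate. Here is a minimal sketch of the text analogue, assuming the OpenAI Python client; the model name, prompt wording, chain length, and seed text are illustrative assumptions, not anything from the post:

    # Iterated "exact replica" prompting for text, by analogy with the image
    # experiment above. Hypothetical sketch: the model, prompt wording, and
    # chain length are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    PROMPT = "Reproduce the following text exactly. Don't change a thing.\n\n{text}"

    def replicate_chain(seed: str, steps: int = 101) -> list[str]:
        """Feed each generation's output back in as the next generation's input."""
        texts = [seed]
        for _ in range(steps):
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative model choice
                messages=[{"role": "user",
                           "content": PROMPT.format(text=texts[-1])}],
            )
            texts.append(resp.choices[0].message.content)
        return texts

    generations = replicate_chain("It was a bright cold day in April, "
                                  "and the clocks were striking thirteen.")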

Update — Following up on the notes in the comments about the "telephone game", I found this recent paper: Jérémy Perez et al., "When LLMs play the telephone game: Cumulative changes and attractors in iterated cultural transmissions", arXiv preprint 2024:

As large language models (LLMs) start interacting with each other and generating an increasing amount of text online, it becomes crucial to better understand how information is transformed as it passes from one LLM to the next. While significant research has examined individual LLM behaviors, existing studies have largely overlooked the collective behaviors and information distortions arising from iterated LLM interactions. Small biases, negligible at the single output level, risk being amplified in iterated interactions, potentially leading the content to evolve towards attractor states. In a series of telephone game experiments, we apply a transmission chain design borrowed from the human cultural evolution literature: LLM agents iteratively receive, produce, and transmit texts from the previous to the next agent in the chain. By tracking the evolution of text toxicity, positivity, difficulty, and length across transmission chains, we uncover the existence of biases and attractors, and study their dependence on the initial text, the instructions, language model, and model size. For instance, we find that more open-ended instructions lead to stronger attraction effects compared to more constrained tasks. We also find that different text properties display different sensitivity to attraction effects, with toxicity leading to stronger attractors than length. These findings highlight the importance of accounting for multi-step transmission dynamics and represent a first step towards a more comprehensive understanding of LLM cultural dynamics.

There's a GitHub repository, and a site where you can see (some of?) the transmission chains.
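
The paper tracks how properties like length, toxicity, and positivity drift across such chains. Measuring the simplest of these takes only a few lines; here `generations` is assumed to be the output of a chain like the sketch above:

    # Track the drift of one of the paper's measured properties (text length,
    # in words) across a transmission chain.
    for i, text in enumerate(generations):
        print(f"generation {i:3d}: {len(text.split())} words")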



3 Comments »

  1. david said,

    May 9, 2025 @ 7:23 am

    As children in the '50s, we used to play a game called "telephone". We would sit in a row, and the starter would whisper a few words in their neighbor's ear; that neighbor would whisper what they heard to the next, and so on along the row. The last child would say it out loud, and then the starter would repeat what they had originally said.

  2. Mike Grubb said,

    May 9, 2025 @ 9:09 am

    Continuing from david's post, an alternate name for the game in my region (late '70s into the '80s in south central Penna.) was "whisper down the lane." That different name may have been a function of families in the area that didn't use telephones. What david left implicit was that the final statement was more likely to differ from the starting statement than to match it. I'm not sure if that's an example of evolution or entropy, though.

  3. Robot Therapist said,

    May 9, 2025 @ 1:34 pm

    "Send three and fourpence, we're going to a dance"
