An experiment with echoing Echos


Henry Cooke (aka "prehensile" on GitHub) has hatched a fascinating techno-artistic experiment. He set up two Amazon Echos to talk back and forth, each repeating a text to the other, with every iteration introducing new errors. His initial inspiration was "I Am Sitting in a Room," a 1969 work of acoustic art by Alvin Lucier, in which a text is recorded and re-recorded until all that is left is the hum of resonant frequencies in the room. (You can watch a 2014 performance with Lucier here.) Rather than replicate Lucier's text, Cooke created new ones for the two Echos to vocalize, with an added wrinkle: iterations of the texts follow the Oulipo S+7 constraint, in which each noun is replaced by the noun appearing seven entries after it in a dictionary. You can see the first ten iterations (using Amazon Polly to synthesize different voices) in this video.
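Cooke's own code isn't reproduced here, but the S+7 substitution can be sketched in a few lines. In this illustrative version (the function name, the tiny word list, and the hand-supplied noun set are all assumptions; a real run would use a full dictionary and a part-of-speech tagger to find the nouns):

```python
def s_plus_7(text, dictionary, nouns):
    """Replace each noun with the word 7 entries later in the dictionary.

    A minimal sketch of the Oulipo S+7 constraint, not Cooke's actual code.
    `dictionary` is a list of words; `nouns` is a set of words to replace.
    """
    words = sorted(dictionary)
    index = {w: i for i, w in enumerate(words)}
    out = []
    for token in text.split():
        bare = token.strip(".,").lower()
        if bare in nouns and bare in index:
            # Wrap around at the end of the word list, as S+7 is usually played
            replacement = words[(index[bare] + 7) % len(words)]
            # Keep any trailing punctuation from the original token
            out.append(replacement + token[len(bare):])
        else:
            out.append(token)
    return " ".join(out)
```

Feeding each Echo's output back through a function like this, iteration after iteration, is what gradually destroys the meaning of the seed texts.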

For more information on the project, see Cooke's GitHub post, "I Am Running in the Cloud." For comparison's sake, here is the original text of "I Am Sitting in a Room":

I am sitting in a room different from the one you are in now. I am recording the sound of my speaking voice and I am going to play it back into the room again and again until the resonant frequencies of the room reinforce themselves so that any semblance of my speech, with perhaps the exception of rhythm, is destroyed. What you will hear, then, are the natural resonant frequencies of the room articulated by speech. I regard this activity not so much as a demonstration of a physical fact, but more as a way to smooth out any irregularities my speech might have.

And here are the two texts that Cooke gave to his Echos (dubbed "Sitting Room" and "Cloud Runner"):

I am sitting in a room different from the one you are in now. I am writing code which will generate iterations of this text using the Oulipo S+7 constraint, causing the nouns to change positions in a list taken from an English dictionary. These iterations will be read by an Amazon Echo in response to spoken commands, which will themselves be issued by another Echo after reciting a previous iteration. In this way, the meaning of this text will gradually be destroyed. I regard this activity as a demonstration of the cumulative effects of slight errors in a complex computational system.

I am running in the cloud. This has almost nothing to do with the device in the room with you now. The speech you hear is the product of the following parts: a text to speech system, an algorithmic interpretation of the Oulipo S+7 constraint, a speech synthesizer, and audio streamed over a network. I am going to repeat this process again and again until the original meaning of this text is destroyed. What you will hear, then, is a feedback loop of iteration, articulated by a speech synthesizer. I do not regard this activity as significant in any way, because I am a string of unconscious processes.

4 Comments

  1. Adam F said,

    February 23, 2018 @ 3:58 am

    Non Sequitur warned about the dangers of this sort of thing a few weeks ago.

  2. Chandra said,

    February 23, 2018 @ 2:48 pm

    Here's a rather entertaining version of the same idea using two Talking Carl apps: https://www.youtube.com/watch?v=t-7mQhSZRgM

  3. maidhc said,

    February 23, 2018 @ 10:10 pm

    Interesting that it was the Australian voice that changed "positions" to "possies".

  4. Sam Buggeln said,

    February 26, 2018 @ 1:35 am

    The talking Carl one is amazing. Would love to know more about Cooke's: what determines how many nouns (and which ones) are distorted according to Oulipo +7?
