Close verbal shadowing


Rhett & Link:

"They're so close they can finish each other's sentences."

In the first part of the skit, Rhett & Link are demonstrating "close shadowing". This technique was probably invented as a children's game many millennia ago; but two decades before Rhett & Link were born, it was applied as a method for studying the psychology of speech perception. As the Wikipedia article explains, "Speech shadowing was first used as a research technique by the Leningrad Group led by Ludmilla Andreevna Chistovich in the late 1950s". The earliest notice to make it out to the West, I think, was L.A. Chistovich, "Classification of rapidly repeated speech sounds", Akusticheskii Zhurnal, Vol. 6, 1960, pp. 392-398 — if you can find a scan of that article, or other early work by the Leningrad Group, please let me know.

No superpowers are required, though there are significant individual differences in how closely different people can shadow the speech of others. And you don't need to be shadowing your BFF — but shadowing latency is a sensitive dynamic indicator of the predictability or redundancy of the shadowed material, and it's plausible that close friends might be more predictable for you than others. Though on the other hand, some people might prefer friends whose conversational contributions are less predictable…

For a review of the foundational work in this area, see William Marslen-Wilson, "Speech shadowing and speech comprehension", Speech Communication 1985:

Pioneering research by Chistovich and her colleagues used speech shadowing to study the mechanisms of immediate speech processing, and in doing so exploited the phenomenon of close shadowing, where the delay between hearing a speech stimulus and repeating it is reduced to 250 msec or less. The research summarised here began with an extension of Chistovich's findings to the close shadowing of connected prose. Twenty-five percent of the women tested were able to accurately shadow connected prose at mean delays ranging from 250 to 300 msec. The other women, and all the men tested, were only able to do so at longer latencies, averaging over 500 msec. These are called distant shadowers. A second series of experiments established that close, just as much as distant shadowers, were syntactically and semantically analysing the material as they repeated it. This was reflected in the ways their spontaneous errors were constrained, and in their sensitivity to disruptions of the syntactic and semantic structure of the materials they were shadowing. A third series of experiments showed that the difference between close and distant shadowers was in their output strategy. Close shadowers are able to use the products of on-line speech analysis to drive their articulatory apparatus before they are fully aware of what these products are. This means that close shadowing not only provides a continuous reflection of the outcome of the process of language comprehension, but also does so relatively unaffected by post-perceptual processes. In this sense, therefore, close shadowing provides us with uniquely privileged access to the properties of the system.

As far as I know, the intervention of the Future Police hasn't been previously reported — or at least it hasn't been documented in the peer-reviewed literature. But then it wouldn't be, would it?



  1. Oop said,

    September 24, 2016 @ 12:54 pm

    Chistovich L.A., "Classification of speech sounds under rapid repetition" [«Классификация звуков речи при их быстром повторении»], pp. 392-398
    And their whole archive:

    [(myl) Спасибо! ("Thank you!") ]

  2. Lazar said,

    September 24, 2016 @ 1:51 pm

    Rhett and Link often do something similar where they co-operatively improvise the lyrics to a song.

  3. Keith M Ellis said,

    September 24, 2016 @ 9:50 pm

    An incidence of someone speech shadowing was the core around which was written one of the best and most frightening episodes of modern Doctor Who, "Midnight".

    An isolated tourist bus on an exotic, mysterious planet, a vehicle malfunction, noises outside, a moment of darkness, then one passenger seems catatonic until she begins to slowly, then more quickly, speech shadow everyone. Then only the Doctor. Then this scene.

  4. Rubrick said,

    September 24, 2016 @ 11:03 pm

    Close shadowers are able to use the products of on-line speech analysis to drive their articulatory apparatus before they are fully aware of what these products are.

    Very impressive, given dialup speeds in 1985….

  5. tangent said,

    September 24, 2016 @ 11:45 pm

    Has other work also found that "close shadowing" is more often an ability of women? That would be interesting to understand.

    The 25% number does make me wonder if their findings might have been 1 of 4 women and 0 of 4 men.

    [(myl) I also found that difference interesting. Your worry about N is not well founded — from the cited Marslen-Wilson 1985 paper:

    In the search for accurate close shadowers, I tested 65 students from MIT and Wellesley College (40 men and 25 women).

    However, the choice of institutions may have mattered — Wellesley students may simply be more verbal (in some relevant sense) than MIT students. Furthermore, the passages that the subjects attempted to shadow were read by a male speaker — it's possible that the difference in f0 between the target and the subject's own voice is a feature. Marslen-Wilson writes:

    If the stimulus materials had been read by a woman, then some of the men might also have been able to shadow clearly at short latencies.

    I've looked in the literature for any further exploration of individual differences in shadowing latency, and have not found anything relevant — but I may just have missed it. This is one feature of a broader study of fluency differences that a student and I are currently planning to carry out.]

  6. Daniel Barkalow said,

    September 26, 2016 @ 4:05 pm

    If you can fill in the blank at the end of someone's sentence, that makes you cloze friends, by definition.

  7. Lance Nathan said,

    September 27, 2016 @ 2:11 am

    I vividly remember asking Julie Sedivy about this when I was in her class as an undergraduate: how can we be sure that when we hear words, we're actually processing them as words on the fly, rather than taking in the sounds? She asked me to echo her as she said a sentence, which I did without too much trouble. Then she called on someone else and, to demonstrate that processing had to be part of the ability to repeat, asked them to do the same with another sentence. What she didn't warn us was that the new sentence was in Czech.

    The student she picked, unfortunately, happened to be an exceptional mimic who was able to more or less echo the Czech sentence, but the point was clear.
