Although I understand the interest in the topic, and although Kieran Snyder clearly put a lot of work into a substantial blog post, I think the results are being given too much credit, almost inevitably now that they are featured in news media everywhere. (In her defense, she herself notes some methodological problems that should lead to a more cautious interpretation.)
First and most importantly, you really cannot do this kind of work without having recordings that are available for repeated inspection. Any human coder looking at some phenomenon is likely to fall prey to all sorts of subtle and less subtle biases influencing judgment and coding decisions. Without recordings, one is likely to miss things, overcount things, mistake schisms for overlaps in multi-party interaction, etc. Without recordings, there is no way to redo the counts, verify or replicate. Related to this, it would be important to have independent evidence of factors like social status, social networks, structure of the organisation, structure of the meetings, etc.
Second, it is well known that there are many different kinds of simultaneous talk (some of the basic references here are Jefferson 1984, 1986, Lerner 1989, and Schegloff 2000). For instance, competitive overlap is very different from cooperative overlap, but both fall under the definition of "interruption" that Snyder supplies. This informal study has either lumped all of these together or, more likely, has focused on an undefined subset of them using a folk conception of 'interruption' (often including competitive overlaps, sometimes no doubt counting other types, and likely ignoring backchannel overlaps or early starts). The numbers that come out of such an approach are uninformative, because so many qualitatively different phenomena have been lumped together.
So. There may well be a pattern here — after all, stories about this are often based on the experiences of participants in actual conversations. But taking notes on things that strike one as interruptions in meetings is really not the way to find out whether that is actually the case, or what the phenomenon looks like qualitatively.
In the spirit of providing a constructive contribution, let me spell out some of the design characteristics that a serious study of a possible role of gender in competitive overlap would have to have. Perhaps the attention to this informal account will build enough momentum to design and carry out such a study. This study would:
- Start with a solid understanding of the full possibility space, i.e. first consider all the different ways in which participants can produce simultaneous speech.
- Then carry out a qualitative analysis of the phenomenon of interest (perhaps: competitive overlaps), demonstrating how it works in a small number of actual cases where the relevant factors can be directly observed.
- Use the results of the qualitative analysis to define a number of relevant things to code for in a larger set of cases (e.g., overlap type, timing, speech rate, sequential structure, participation framework, gender, social asymmetries).
- For better control and comparability, look at overlaps in one particular sequential environment (e.g. answers to questions).
- Throughout, try to control for whatever is suspected to be the main causal factor (if there is enough data, track the phenomenon in different configurations of this factor, e.g. different gender balances, different participation frameworks, different configurations of social statuses).
- Throughout, try to control for type of institutional interaction (and, as a corollary, for relative social asymmetry of participants).
- Ideally, look at a cross-cultural corpus, or limit the claims to the investigated society and setting as appropriate — don't assume that WEIRD populations (Henrich et al. 2010) are the universal yardstick for how interaction works, or how humans work (Yuan et al. 2007 make a start with this).
- Ideally, look at a sizable corpus of everyday social interaction as a way to get a baseline or 'default' measure. Work meetings are a very specific type of institutional context, and the structure of interaction and turn-taking is likely to be influenced by this.
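To make the coding step in the design above concrete, here is a minimal sketch of what an annotation schema for overlap events might look like. This is purely illustrative: the type inventory, field names, and category labels are my own hypothetical choices, not an established coding scheme from the literature. The point it makes in code is the point made in prose above — once overlap types are kept apart rather than lumped together, counts can be computed per type.

```python
from dataclasses import dataclass
from enum import Enum

class OverlapType(Enum):
    # Broad kinds of simultaneous talk distinguished in the CA literature
    COMPETITIVE = "competitive"    # fighting for the turn
    COOPERATIVE = "cooperative"    # e.g. collaborative completion
    BACKCHANNEL = "backchannel"    # "mm-hm", "yeah" produced in overlap
    EARLY_START = "early_start"    # next speaker starts just before completion

@dataclass
class OverlapEvent:
    """One coded instance of simultaneous talk in a recording."""
    recording_id: str
    onset: float             # seconds into the recording
    offset: float            # seconds into the recording
    overlap_type: OverlapType
    sequential_env: str      # e.g. "answer_to_question"
    overlapper_gender: str
    overlappee_gender: str
    relative_status: str     # e.g. "junior_to_senior", "peer"

    @property
    def duration(self) -> float:
        return self.offset - self.onset

def counts_by_type(events):
    """Tally events per overlap type -- a count that is only
    meaningful once qualitatively different types are kept apart."""
    counts = {}
    for e in events:
        counts[e.overlap_type] = counts.get(e.overlap_type, 0) + 1
    return counts
```

From records like these one could then cross-tabulate overlap type against gender, sequential environment, and status configuration, rather than reporting a single undifferentiated "interruption" count.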
Together, these points are simply some best practices for the scientific study of social interaction. Precisely because interaction is right under our nose, because we all have our subjective experiences, sensitivities and insensitivities, we should strive for an understanding of the target phenomenon that is accurate, accountable to the facts, and open to verification.
Henrich, Joseph, Steven J. Heine, and Ara Norenzayan. 2010. “The Weirdest People in the World?” Behavioral and Brain Sciences 33 (2-3): 61–83. doi:10.1017/S0140525X0999152X.
Jefferson, Gail. 1973. “A Case of Precision Timing in Ordinary Conversation: Overlapped Tag-Positioned Address Terms in Closing Sequences.” Semiotica 9 (1): 47–96.
Jefferson, Gail. 1984. “Notes on Some Orderlinesses of Overlap Onset.” In Discourse Analysis and Natural Rhetoric, edited by Valentina D’Urso and P. Leonardi, 11–38. Padua, Italy: Cleup Editore.
Jefferson, Gail. 1986. “Notes on ‘Latency’ in Overlap Onset.” Human Studies 9 (2-3): 153–83. doi:10.1007/BF00148125.
Lerner, Gene H. 1989. “Notes on Overlap Management in Conversation: The Case of Delayed Completion.” Western Journal of Speech Communication 53 (2): 167–77.
Schegloff, Emanuel A. 2000. “Overlapping Talk and the Organization of Turn-Taking for Conversation.” Language in Society 29 (1): 1–63.
Yuan, Jiahong, Mark Liberman, and Christopher Cieri. 2007. “Towards an Integrated Understanding of Speech Overlaps in Conversation.” Presented at ICPhS XVI, Saarbrücken.
Above is a guest post by Mark Dingemanse. Readers may also be interested in his recent post "Huh? The growing pains of pragmatic typology", 7/10/2014.
An editorial note: I strongly support Mark's closing assertion that studies of interaction should be "accurate, accountable to the facts, and open to verification"; and to that end, I wish that his organization took a less proprietary attitude towards its datasets.