A few days ago, I asked for help in tracking down some of the scientific support for Matt Richtel's claims about the bad effects of "digital overload" ("More factoid tracking", 9/1/2009). One of the more trackable factoids was the "study conducted at Stanford University, which showed that heavy multimedia users have trouble filtering out irrelevant information — and trouble focusing on tasks". And sure enough, The Neurocritic quickly came up with a reference that fits: Eyal Ophir, Clifford Nass, and Anthony Wagner, "Cognitive control in media multitaskers", PNAS, Published online before print 8/24/2009.
I believe that the full paper is freely available at the link given above (please let me know if this is wrong), and if you're interested in this topic, I urge you to read it. As in the case of the last paper by a Stanford psychologist that was discussed here, you should start by asking "Never mind the conclusions, what's the evidence?". And again, you may conclude that the descriptions of this research — in the popular press and even in the original paper — lead readers pretty far beyond the interpretations warranted by the research itself.
Here's the paper's abstract:
Chronic media multitasking is quickly becoming ubiquitous, although processing multiple incoming streams of information is considered a challenge for human cognition. A series of experiments addressed whether there are systematic differences in information processing styles between chronically heavy and light media multitaskers. A trait media multitasking index was developed to identify groups of heavy and light media multitaskers. These two groups were then compared along established cognitive control dimensions. Results showed that heavy media multitaskers are more susceptible to interference from irrelevant environmental stimuli and from irrelevant representations in memory. This led to the surprising result that heavy media multitaskers performed worse on a test of task-switching ability, likely due to reduced ability to filter out interference from the irrelevant task set. These results demonstrate that media multitasking, a rapidly growing societal trend, is associated with a distinct approach to fundamental information processing.
Note, by the way, that the authors (unlike Matt Richtel) carefully avoid any suggestion of causation, instead writing about "systematic differences in information processing styles" and "a distinct approach to fundamental information processing". This way of putting it is consistent with media multitasking causing cognitive disabilities (in this experiment's sample), or cognitive disabilities causing more media multitasking, or some third variable influencing both. But as we'll see, it's not completely clear that the claimed association (between media multitasking and certain information-processing deficits) even exists.
What's the evidence? It starts, of course, with that "trait media multitasking index". They administered a 20-minute online questionnaire to 262 Stanford undergraduates, presumably from the Psych Department's subject pool.
The questionnaire addressed 12 different media forms: print media, television, computer-based video (such as YouTube or online television episodes), music, nonmusic audio, video or computer games, telephone and mobile phone voice calls, instant messaging, SMS (text messaging), email, web surfing, and other computer-based applications (such as word processing). For each medium, respondents reported the total number of hours per week they spend using the medium. In addition, they filled out a media-multitasking matrix, indicating whether, while using this primary medium, they concurrently used each of the other media “Most of the time,” “Some of the time,” “A little of the time,” or “Never.” As text messaging could not accurately be described by hours of use, this medium was discarded from the analysis as a primary medium, although it still appeared as an option in the matrix (meaning respondents could still report text messaging while being engaged in other media).
They then created a "Media Multitasking Index" (MMI).
Step 1 was to translate verbal responses about concurrent media usage into numbers, on an evenly spaced scale running from "Never" = 0 through "A little of the time" = 0.33 and "Some of the time" = 0.67 up to "Most of the time" = 1.
Step 2 was to create the sum mi of the numeric responses about concurrent media use for each of the 11 primary media. Thus a student who indicated that while web surfing (the 10th medium in the list), she listened to music "Most of the time", engaged in voice phone calls "Some of the time", checked email "Some of the time", engaged in IM conversations "Some of the time", and responded to SMS messages "Some of the time", would get m10 = 1+0.67+0.67+0.67+0.67 = 3.68.
Step 3 created the individual's MMI as a weighted sum of the mi values:

MMI = Σi (mi × hi) / htotal

where hi is the number of hours per week that the student estimated she spent using primary medium i, and htotal is the sum of all media hours. Note that hours spent multitasking are counted once for each medium involved, so that a respondent's htotal may be larger than her number of waking hours. Unsurprisingly, therefore,
Media multitasking was correlated with total hours of media use, r (260) = 0.46, P < 0.001.
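To make the recipe concrete, here's a sketch of the MMI computation in Python. The level weights assume even spacing between "Never" = 0 and "Most of the time" = 1, and the respondent's hours and matrix entries are invented for illustration (they reuse the m10 = 3.68 web-surfing example above):

```python
# A sketch of the MMI computation described above.  The level weights
# assume even spacing between "Never" = 0 and "Most of the time" = 1;
# the hours and matrix entries below are invented for illustration.
LEVELS = {"Never": 0.0, "A little of the time": 0.33,
          "Some of the time": 0.67, "Most of the time": 1.0}

def mmi(hours, matrix):
    """hours: weekly hours per primary medium; matrix: for each primary
    medium, the verbal concurrent-use responses for the other media."""
    h_total = sum(hours.values())
    return sum(sum(LEVELS[r] for r in matrix[medium]) * h / h_total
               for medium, h in hours.items())

# A toy respondent who uses just two primary media:
hours = {"web surfing": 14, "music": 20}
matrix = {
    "web surfing": ["Most of the time", "Some of the time",
                    "Some of the time", "Some of the time",
                    "Some of the time"],   # the m10 = 3.68 example above
    "music": ["Some of the time"],
}
print(round(mmi(hours, matrix), 2))   # 3.68*14/34 + 0.67*20/34 ≈ 1.91
```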
The authors observe that
The MMI produced a relatively normal distribution, with a mean of 4.38 and standard deviation of 1.52. This suggests that there is not a bimodal distribution of “heavy multitaskers” and “nonmultitaskers.” Nonetheless, we can identify individuals who very frequently use multiple media and those who tend to limit their use of multiple media. [...]
Based on the questionnaire, those students with an MMI less than one standard deviation below the mean (LMMs) or an MMI greater than one standard deviation above the mean (HMMs) were invited to participate. The invitation yielded 22 LMMs and 19 HMMs who gave informed consent and participated in the study for course credit.
(In fact, the number actually participating in the various subexperiments varied, down to 15 "Low Media Multitaskers" (LMMs) and 15 "High Media Multitaskers" (HMMs).)
Before going on to the behavioral experiments and their results, it's worth saying a few things about this Media Multitasking Index. First, high MMI values may correspond to many different patterns of life experience. You might be someone who has music coming into at least one earbud during many other life activities, media-related or otherwise; you might be someone who likes to keep a baseball or basketball game going in the background while studying or websurfing; you might be someone who's often talking on the phone or IMing friends while surfing the web. Each of these patterns could contribute the same amount to the MMI, but it's plausible that their psychological causes and effects (and their associations with other relevant lifestyle variables) are quite different.
Given this likely diversity of HMM experience (and we'll see later that the HMM group tends to have more variable performance on the behavioral tests), the rather small number of subjects is a reason for concern. Given that the study aimed to understand the effects of interactions among 12 different forms of "media", it's troubling that there were only 15-19 "High Media Multitaskers" tested, and that we don't know what sorts of media multitasking they actually reported engaging in. (And while we're focusing on this issue, it's worth noting that the pool of survey responders with MMIs at least one sd above or below the mean should have been (1/3)*262 ≅ 87, raising as usual the possibility that the transition from 87 possibilities to 19+22=41 volunteers might have introduced some sort of selection bias.)
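(That 1/3 is a rule-of-thumb figure; under an exactly normal MMI distribution, the expected fraction lying more than one standard deviation from the mean is just under a third, which hardly changes the point:)

```python
from statistics import NormalDist

# Expected share of a normal distribution lying more than one standard
# deviation from the mean, and the corresponding count out of 262:
tail_fraction = 2 * NormalDist().cdf(-1)
eligible = tail_fraction * 262
print(round(tail_fraction, 3), round(eligible))   # 0.317 83
```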
This is Yet Another Reason to publish raw data — there's no reason that the authors couldn't have published (in a convenient digital form) all their media-use matrices, keyed to the performance of individual subjects on the various behavioral tests. This might make it possible to consider various alternative theories about which aspects of the media-use responses explained how much of the observed differences in performance (to the extent that the small number of subjects permits).
But this brings up a second — and much more important — issue with the MMI: it's entirely based on self-report. It's well known that self-reported measures of life patterns are not very reliable. Specifically, according to Bradley Greenberg et al., "Comparing survey and diary measures of Internet and traditional media use", Communication Reports 18(1):1–8, 2005:
Between the survey and diary methods, correlations range from .20 for listening to music off-line to .58 for email sent. Internet estimates are correlated .39 and television time estimates are correlated .35.
In general, subjects tend to over-report such activities in survey responses — by about 25% overall, in the just-cited study. But more important, the divergences between facts and self-reports are subject to several forms of recall bias, as a result of which conclusions based on self-reported data always need to be interpreted with great caution.
Let me give a personal example that may be relevant to this case. If you were to ask me to estimate how many stair-steps I encounter per day, the answer I give today would be significantly higher than the answer I would have given before August 3, when I tore some ligaments in my left knee. For someone on crutches, with one leg unable to bear weight, stairs are a lot more salient than they are for someone who can bound up and down them without any problems or concerns. There are a surprising number of stairways around Penn's campus that I never really noticed before, even though I go up and down them almost every day.
So it wouldn't be surprising to find that people with certain physical handicaps tend to give a higher estimate of the number of stairsteps they encounter in daily life, just because climbing stairs is more salient and thus more memorable to them. If we use such self-reported estimates as the basis for an epidemiological study of the effect of chronic stair-climbing on physical agility, we risk being led astray.
Similarly, it's plausible that people who "have trouble filtering out irrelevant information — and trouble focusing on tasks" might find the distractions of everyday multitasking more salient and thus more memorable, and therefore might tend to give somewhat higher estimates of how much of such multitasking they engage in. So given noisy estimates of media multitasking, and even a modest salience effect of this kind, the set of subjects with high MMIs may be significantly enriched in individuals who are more distractible or otherwise cognitively less able to deal with the demands of such multitasking.
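A toy simulation makes this worry concrete: even when filtering ability and actual multitasking are independent by construction, a modest ability-linked report bias leaves the high-self-report tail measurably below average in ability. (All the parameters here, the bias weight and the noise level, are invented purely for illustration.)

```python
import random

# Toy simulation of the salience-bias story sketched above: filtering
# ability and actual multitasking are independent by construction, but
# less-able people over-report their multitasking.  The bias weight
# (0.3) and noise sd (0.5) are invented for illustration.
random.seed(1)
n = 50_000
people = []
for _ in range(n):
    ability = random.gauss(0, 1)              # "filtering ability"
    true_mt = random.gauss(0, 1)              # actual multitasking
    # self-report = truth + ability-linked bias + noise
    report = true_mt - 0.3 * ability + random.gauss(0, 0.5)
    people.append((report, ability))

people.sort()                                 # sort by reported MMI
cutoff = people[int(0.84 * n)][0]             # roughly +1 sd of reports
high_group = [a for r, a in people if r >= cutoff]
mean_ability = sum(high_group) / len(high_group)
print(round(mean_ability, 2))                 # clearly negative
```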
I'm not going to discuss all of this paper's experimental results in detail, but I'd like to point out three things that seem to apply across all of them: the differences between the LMM and HMM groups are rare (that is, they appear only in a fraction of the experimental conditions where they might have been found), modest in effect size, and associated with higher variance in the HMM sample.
Thus consider their "Filter Task", where "participants viewed two consecutive exposures of an array of rectangles and had to indicate whether or not a target (red) rectangle had changed orientation from the first exposure to the second, while ignoring distractor (blue) rectangles":
Fig. 1. The filter task. (A) A sample trial with a 2-target, 6-distractor array. (B) HMM and LMM filter task performance as a function of the number of distractors (two targets). Error bars, SEM.
The only statistically significant difference between the LMM and HMM groups appears in the 6-distractor case, where there is a difference of about 0.3 (1.4 to 1.7) in the performance measure K. K is a somewhat complex measure, described in Edward Vogel et al., "Neural measures reveal individual differences in controlling access to working memory", Nature 438:500–503, 2005:
We computed visual memory capacity with a standard formula that essentially assumes that if an observer can hold in memory K items from an array of S items, then the item that changed should be one of the items being held in memory on K/S trials, leading to correct performance on K/S of the trials on which an item changed. To correct for guessing, this procedure also takes into account the false alarm rate.
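The formula being described here is commonly written as Cowan's K: K = S × (hit rate − false alarm rate), where S is the number of items in the array. A minimal sketch, with made-up rates:

```python
def cowans_k(hit_rate, false_alarm_rate, set_size):
    """Guessing-corrected visual working-memory capacity estimate:
    K = S * (hit rate - false alarm rate)."""
    return set_size * (hit_rate - false_alarm_rate)

# e.g. 80% hits and 20% false alarms with two targets on screen:
print(round(cowans_k(0.80, 0.20, 2), 2))   # 1.2
```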
In the crucial six-distractor case, the standard error for the HMM group was about 0.15 items, which corresponds to a standard deviation of about 0.15*sqrt(19) = 0.65, or more than twice the difference in the groups' mean values. The standard error for the LMM group was about a third of that. We don't have the raw data to check, but this is consistent with a pattern in which the lower performance of the HMM group was driven mainly by a few individuals with especially low scores, who are in the HMM group not because they actually engage in a lot of media multitasking, but because they are "multitasking handicapped" and so overestimate their consumption.
I'll end with their "two- and three-back" task, a version of a classic short-term memory experiment:
[P]articipants were presented a series of individual letters in the middle of the screen. Each letter was presented for 500 ms, followed by a white screen for 3,000 ms. Upon presentation of a letter, participants were to indicate whether or not the present letter was a “target,” meaning that it matched the letter presented two (for the two-back task) or three (for the three-back task) trials ago.
Before seeing the data, you might expect that the HMM group would be more likely to forget the correct letter in all conditions, and even more likely to do so in the three-back condition compared to the two-back condition. But what happened was this:
There was no significant difference in the hit rate for the 2-back or 3-back conditions, or in the false alarm rate for the 2-back condition. The only difference was in the false alarm rate for the 3-back condition. Again, the difference was modest in terms of effect size: about 4.8% vs. 3.2%, or a difference of about 1.6% in false alarm rate in the 3-back condition, where the HMM group has a standard error of about 0.7, or a standard deviation of 0.7*sqrt(15) = 2.7, corresponding to an effect size of about d=0.6.
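Here's the back-of-the-envelope arithmetic, for anyone who wants to check it (it's rough, since it uses only the HMM group's standard deviation rather than a pooled estimate):

```python
import math

# Reconstructing the arithmetic above: a reported standard error and
# sample size give a standard deviation, and dividing the group
# difference by that SD gives a rough Cohen's d (rough because it uses
# only the HMM group's SD rather than a pooled estimate).
se, n = 0.7, 15                   # HMM group, 3-back false alarms (%)
sd = se * math.sqrt(n)            # ≈ 2.71
d = (4.8 - 3.2) / sd              # ≈ 0.59
print(round(sd, 2), round(d, 2))
```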
The authors also performed a more complicated piece of data analysis:
This [pattern of performance] indicates that the HMMs were more susceptible to interference from items that seemed familiar, and that this problem increased as working memory load increased (from the two- to the three-back task). This problem also became more acute for HMMs as the task progressed, because proactive interference from irrelevant letters accumulated across the experiment. Specifically, a general linear model of the likelihood of a false alarm in the three-back task revealed: (a) no main effect of HMM/LMM status nor of time, but (b) a significant HMM/LMM status*time interaction, such that the number of false alarms increased over time more rapidly for HMMs, B = 0.081, P < 0.001. These data demonstrate that HMMs are not only less capable of filtering out irrelevant stimuli from their environment, but also are less capable of filtering out irrelevant representations in memory.
This seems plausible. But it leaves entirely open the question of causation. Are the members of the HMM group "less capable of filtering out irrelevant stimuli from their environment [... and] less capable of filtering out irrelevant representations in memory" because their brains have been damaged by media multitasking? Or are they in the HMM group, not because they actually do more media multitasking, but because they're less able to deal with it and therefore more likely to remember and report it? Or, perhaps, are they more tolerant of media multitasking because they just don't care as much about the details of what happened where and when?
This paper also leaves open the question of what sorts of media multitasking are really in play here, and what other variables might be associated with MMI differences. The authors did consider the problem of associated variables:
[A] new group of 110 participants filled out the MMI questionnaire as well as their SAT scores, gender, the Need for Cognition index questionnaire, the Big Five Trait Taxonomy (including measures of extraversion, agreeableness, conscientiousness, neuroticism, and openness), and a creativity task derived from the Torrance Tests of Creative Thinking (TTCT). The data from participants with an MMI one standard deviation or more above and below the mean (16 HMMs and 17 LMMs) were compared.
They found no significant differences — but again, it's a very small sample, and differences that this test would dismiss as plausibly due to sampling error might still have been large enough to affect the experimental results in the earlier sample.
If we had access to the raw media matrix and the associated experimental scores, we might find (say) that their HMM group happened to include a subgroup whose MMI is high because they (report that they) listen to music during all other life activities, and who tend to smoke pot more often than their peers. Alternatively, Ophir et al.'s 15-19 HMM students might happen to include a substantial mix of driven over-achievers who network obsessively while studying and web-surfing, and are also among the substantial percentage of Stanford undergraduates who take Ritalin or Adderall in order to improve their academic performance and decrease the need for sleep.
From a certain point of view, all of this criticism is unfair. You can always think of alternative models to fit and new controls to run, to check and perhaps to challenge the conclusions of any study. And compared to some attempts to enlist science in discussions of social policy, Matt Richtel's (implicit) reference to this paper rates very highly: the research is actually on the topic, and supports the desired conclusion.
But in my opinion, it doesn't support it nearly strongly enough. What's at stake here is a set of major choices about social policy and personal lifestyle. If it's really true that modern digital multitasking causes significant cognitive disability and even brain damage, as Matt Richtel claims, then many very serious social and individual changes are urgently needed. Before starting down this path, we need better evidence that there's a real connection between cognitive disability and media multitasking (as opposed to self-reports of media multitasking). We need some evidence that the connection exists in representative samples of the population, not just a couple of dozen Stanford undergraduates enrolled in introductory psychology. And we need some evidence that this connection, if it exists, has a causal component in the multitasking-to-disability direction.
What does all this have to do with language? Well, most of the types of "media" at issue here involve speech or text. This is also an interesting example of the rhetorical trope "studies show that …". And it's part of my on-going investigation of the interpretation of generic plurals. Of course, all of these connections are minor compared to the possibility that digital media are ruining our brains.