There's a special place in purgatory reserved for scientists who make bold claims based on tiny effects of uncertain origin; and an extra-long sentence is imposed on those who also keep their data secret, publishing only hard-to-interpret summaries of statistical modeling. The flames that purify their scientific souls will rise from the lake of lava that eternally consumes the journalists who further exaggerate their dubious claims. Those fires, alas, await Drew P. Cingel and S. Shyam Sundar, the authors of "Texting, techspeak, and tweens: The relationship between text messaging and English grammar skills", New Media & Society 5/11/2012:
The perpetual use of mobile devices by adolescents has fueled a culture of text messaging, with abbreviations and grammatical shortcuts, thus raising the following question in the minds of parents and teachers: Does increased use of text messaging engender greater reliance on such ‘textual adaptations’ to the point of altering one’s sense of written grammar? A survey (N = 228) was conducted to test the association between text message usage of sixth, seventh and eighth grade students and their scores on an offline, age-appropriate grammar assessment test. Results show broad support for a general negative relationship between the use of techspeak in text messages and scores on a grammar assessment.
Some of the journalists who will fuel the purifying flames: Maureen Downey, "ZOMG: Text-speak and tweens: Notso gr8 4 riting skillz", Atlanta Journal-Constitution; Sarah D. Sparks, "Duz Txting Hurt Yr Kidz Gramr? Absolutely, a New Study Says", Education Week; Mark Prigg, "OMG: Researchers say text messaging really is leading to a generation with poor grammar skills", The Daily Mail; Gregory Ferenstein, "Texting Iz Destroying Student Grammar", TechCrunch; the anonymous author of "Texting tweens lack 'gr8' grammar", CBC 7/26/2012; … And, of course, in a specially-hot lava puddle all his own, the guy who wrote the press release from Penn State: Matt Swayne, "No LOL matter: Tween texting may lead to poor grammar skills", 7/26/2012.
What did Cingel and Sundar actually do? If you're in a rush, let's cut to the crucial table, and their explanation of it:
In order to better understand the variance in grammar assessment scores, predictor variables were tested as part of a stepwise multiple regression model. Specifically, four variables were tested in the following order: grade, average amount of sent message adaptation, total number of text messages sent and received, and perceived utility of text messaging. The average number of message adaptations in received text messages was excluded from this model due to this variable’s multicollinearity, or high correlation, with average sent text message adaptation. This analysis yielded two statistically significant predictors of grammar scores: grade (β = .23, p < .01) and average sent message adaptation (β = -.20, p < .01). Neither total number of text messages (β = -.09, p = .10) nor perceived utility (β = -.12, p = .14) were found to be significant predictors.
("Grade" is 6th, 7th, or 8th; and "sent adaptation" refers to the students' self-report of how often they used various texting-associated writing conventions in texts that they sent.)
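For readers unfamiliar with how a stepwise (hierarchical) regression apportions variance, here is a minimal sketch of the idea, using NumPy and entirely made-up toy data; the variable names and coefficients are illustrative, not the paper's:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

def stepwise_delta_r2(predictors, y):
    """Enter predictors in the given order; report each one's increment
    to cumulative R^2 (the kind of difference computed below, e.g.
    0.101 - 0.054 for 'sent adaptation')."""
    deltas, r2_prev, cols = {}, 0.0, []
    for name, x in predictors:
        cols.append(x)
        r2 = r_squared(np.column_stack(cols), y)
        deltas[name] = r2 - r2_prev
        r2_prev = r2
    return deltas

# Toy data loosely mimicking the study's setup (N = 228); all
# coefficients here are invented for illustration.
rng = np.random.default_rng(0)
grade = rng.choice([6, 7, 8], size=228).astype(float)
adaptation = rng.normal(size=228)
score = 17.8 + 0.5 * (grade - 7) - 0.6 * adaptation \
        + rng.normal(scale=2.8, size=228)

deltas = stepwise_delta_r2(
    [("grade", grade), ("sent adaptation", adaptation)], score)
```

Each predictor's "contribution" in such a table is just the change in cumulative R² when it enters the model, which is why the order of entry matters.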
I want to point out three things about this table:
- The "Average sent adaptation" explained 4.7% of the variance in grammar scores (0.101-0.054 = 0.047). This is a pitifully small effect.
- "Grade" explained a bit more of the "grammar assessment" variance (5.4%). They actually tell us what the average "grammar assessment" scores by grade were, and also what the standard deviations were: "Sixth graders scored a mean of 17.26 (SD = 3.11), seventh graders scored a mean of 17.92 (SD = 2.74), and eighth graders scored a mean of 18.27 (SD = 2.66)". Thus Grade accounted for at most about one question's worth of variation (on a 22-question test), and so "Average sent adaptation" accounted for somewhat less than that. Alternatively, if we take the variance remaining after Grade was taken out to be about equal to the square of the standard deviation of the per-grade scores (about 3^2 = 9), then 4.7% of that variance is about half a question.
- Their survey collected at least 20 independent variables, relating to texting and to other things like television and music consumption. Apparently none of these had a statistically-significant effect except for that "sent message adaptation" measure. It would be very surprising, in a collection of this size and complexity, NOT to find at least one predictor variable that accounted for about 5% of the variance in a vector of random numbers.
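The back-of-the-envelope arithmetic in the second bullet can be made explicit. The ~3-question within-grade standard deviation is this post's own approximation, taken from the per-grade SDs quoted above (3.11, 2.74, 2.66):

```python
# Numbers from the quoted passage; the within-grade SD of ~3 is an
# approximation, not a figure reported by the paper.
within_grade_sd = 3.0
residual_variance = within_grade_sd ** 2       # ~9 squared questions
delta_r2_adaptation = 0.101 - 0.054            # ~0.047 from their table

# Variance attributable to "sent adaptation", in squared questions,
# and its square root, in questions:
var_explained = delta_r2_adaptation * residual_variance
effect_in_questions = var_explained ** 0.5
```

Either way you slice it (about 0.4 squared questions of variance, or about two-thirds of a question in standard-deviation terms), the effect amounts to a fraction of one test item.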
OK, if you've got some time, here are the details. They looked at middle-schoolers in one school:
Participants were sixth, seventh, and eighth grade middle school students from a midsized school district on the east coast of the United States. English teachers were approached prior to the beginning of the study and asked to volunteer class time. [...] In all, 542 surveys were administered to students in the classroom; 228 completed surveys were returned, for a response rate of 42.1 percent. Of this final sample, 36.8 percent were from sixth grade (N = 84), 21.5 percent from seventh grade (N = 49), and 41.7 percent were from eight grade (N = 95). Ages ranged from 10 to 14, with a mean of 12.48. Males represented 39.1 percent of the final sample. [...]
Aside from grade-level and age, their independent variables fell into three general groups. The first was "usage" of text messaging, along with other life-style factors:
Adolescents were first asked to think about their average day and record the time they spend using a variety of technologies. Importantly, participants were asked to self-report the number of text messages they send and receive on an average day. In addition, respondents were asked to indicate the amount of time they spend studying, watching television, listening to music, and reading for pleasure. Finally, they were asked for the amount of free time they have each day. Answers were reported with a number which indicated the average amount of time spent engaging in each activity or the average number of sent and received text messages. Adolescents reported receiving 46.03 (SD = 83.61) and sending 45.11 (SD = 85.24) text messages per day.
The second was "attitudes" towards text messaging:
Next, the survey asked adolescents to record their attitudes toward text messaging by using a 5-point Likert-type scale, where an answer of 1 indicated that the respondent strongly disagreed and an answer of 5 indicated strong agreement. They were asked questions regarding the convenience and overall utility of the technology, such as ‘The speed of text messaging makes it convenient to use.’ Questions were also included to determine if an adolescent’s use of these technologies is primarily driven by parents or friends.
And the third was "textual adaptation" in text messaging:
The independent variable of sent and received message adaptation was assessed by asking participants to self-check their last three sent and their last three received text messages to separate individuals and record the number of adaptations present in each text message. This was done to ensure greater generalizability by including a wider range of messages, with a wider range of text message length. Also, it increased the chances of the text messages involving different groups of individuals, such as friends, parents, or siblings. For each of the three received text messages, participants were asked to list their relationship to the sender. This was also done for each sent text message. Participants then self-reported the number of adaptations they found in each text message and classified a given adaptation into one of five categories. The five categories of common text message adaptation identified in the survey were use of abbreviations or initialisms, omission of non-essential letters, substitution of homophones, punctuation adaptations, and capitalization adaptations.
Note, again, that this gave them a very large number of factors to work with, which is always helpful if you're determined to find a "statistically-significant" effect, and you don't plan to do any correction for the implicit multiple comparison.
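To see how easy it is to find a nominally "significant" predictor in a survey of this size without any multiple-comparisons correction, here is a small simulation. All numbers are illustrative: 0.13 is roughly the two-tailed p < .05 critical value of Pearson's r at N = 228, and 20 matches the rough count of independent variables noted above:

```python
import numpy as np

def spurious_hit_rate(n=228, k=20, r_crit=0.13, runs=2000, seed=1):
    """Fraction of simulated surveys in which at least one of k
    pure-noise predictors correlates 'significantly' (|r| > r_crit,
    i.e. p < .05 at this n) with a pure-noise outcome."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(runs):
        X = rng.normal(size=(n, k))          # k random "predictors"
        y = rng.normal(size=n)               # a random "outcome"
        r = [np.corrcoef(X[:, j], y)[0, 1] for j in range(k)]
        if max(abs(v) for v in r) > r_crit:
            hits += 1
    return hits / runs

rate = spurious_hit_rate()
```

With 20 independent noise predictors the expected rate is 1 − 0.95²⁰ ≈ 0.64: better-than-even odds of a false "discovery" before a single real variable has been measured.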
Their dependent measure was
… a 22-item diagnostic grammar assessment instrument. This assessment was adapted from a ninth-grade grammar review test. The test was reviewed to ensure that students had been taught all of the concepts covered in this assessment by sixth grade so that the same version of the grammar assessment could be administered to all three grades. This was done so that adolescents’ scores could be compared to one another across grades. [...] Sixth graders scored a mean of 17.26 [out of 20] (SD = 3.11), seventh graders scored a mean of 17.92 (SD = 2.74), and eighth graders scored a mean of 18.27 (SD = 2.66).
Curiously, the version of the "Grammar Assessment" that they present as Appendix A has only 20 questions on it, but never mind that. And the questions on that test generally have very little to do with "grammar" in the traditional sense of that word, but never mind that for now either.
Aside from the problem that the effect was so small as to be effectively meaningless, I have some qualms about the way they collected the data:
Participants were introduced to the study by way of an opening statement in their classroom. After this was completed, they were given a grammar assessment, which was completed during class time. The grammar assessment lasted about 10 minutes. Once it was completed, participants handed in the grammar assessments and were in turn given a survey to complete at home. Attached to the take-home survey was a letter to parents, informing them about the procedure of the study, and seeking their informed consent by way of a signature for their child’s participation in this research. [...] Participants were told about the types of questions that they would need to answer on the survey and given time to begin completing the survey during class time, at the teacher’s discretion. Participants were informed that they were to think of their average day when completing questions regarding their media use. Finally, those who did not use a certain technology were told only to answer questions that applied to the technologies they have used. They were also given the opportunity to ask any questions they may have. Participants were given one week to return the completed surveys to class. After one week, surveys were collected and participants were verbally notified in the classroom about the study’s completion. Take-home surveys were linked to the grammar assessments through the use of unique identification codes. [...]
First, I will bet that the participants inferred (if they weren't explicitly told) that the study aimed to show that texting impacted their "grammar"; this could easily lead to a small bias in self-reporting of texting practices, based on their impression of their abilities in spelling, punctuation, etc.
And second, as recently discussed here, at least some current teens "are scornful of txt-speak abbreviations, and see them as something that clueless adults do", while others clearly make extensive use of these "adaptations". What was the ethnography of these attitudes in the school Cingel and Sundar studied? Was there an association with age? With sex? With socio-economic status? With race? With whatever form the "jocks vs. burnouts" opposition takes in that community?
I'll also note that this paper's bibliography curiously lacks a fairly long list of well-known and obviously-relevant publications in this area, which generally come to opposite conclusions. A few of them are listed below, with their abstracts:
Beverly Plester, Clare Wood, and Puja Joshi, "Exploring the relationship between children's knowledge of text message abbreviations and school literacy outcomes", British Journal of Developmental Psychology, March 2009.
This paper presents a study of 88 British 10–12-year-old children's knowledge of text message (SMS) abbreviations (‘textisms’) and how it relates to their school literacy attainment. As a measure of textism knowledge, the children were asked to compose text messages they might write if they were in each of a set of scenarios. Their text messages were coded for types of text abbreviations (textisms) used, and the ratio of textisms to total words was calculated to indicate density of textism use. The children also completed a short questionnaire about their mobile phone use. The ratio of textisms to total words used was positively associated with word reading, vocabulary, and phonological awareness measures. Moreover, the children's textism use predicted word reading ability after controlling for individual differences in age, short-term memory, vocabulary, phonological awareness and how long they had owned a mobile phone. The nature of the contribution that textism knowledge makes to children's word reading attainment is discussed in terms of the notion of increased exposure to print, and Crystal's (2006a) notion of ludic language use.
Nenagh Kemp, "Texting versus txtng: reading and writing text messages, and links with other linguistic skills", Writing Systems Research 2(1) 2010:
The media buzzes with assertions that the popular use of text-message abbreviations, or textisms (such as r for are) is masking or even causing literacy problems. This study examined the use and understanding of textisms, and links with more traditional language skills, in young adults. Sixty-one Australian university students read and wrote text messages in conventional English and in textisms. Textism messages were faster to write than those in conventional English, but took nearly twice as long to read, and caused more reading errors. Contrary to media concerns, higher scores on linguistic tasks were neutrally or positively correlated with faster and more accurate reading and writing of both message types. The types of textisms produced, and those least well understood by participants, are also discussed.
M.A. Drouin, "College students' text messaging, use of textese and literacy skills", Journal of Computer Assisted Learning, 2011:
In this study, I examined reported frequency of text messaging, use of textese and literacy skills (reading accuracy, spelling and reading fluency) in a sample of American college students. Participants reported using text messaging, social networking sites and textese more often than was reported in previous (2009) research, and their frequency of textese use varied across contexts. Correlational analyses revealed significant, positive relationships between text messaging frequency and literacy skills (spelling and reading fluency), but significant, negative relationships between textese usage in certain contexts (on social networking sites such as MySpace™ and Facebook™ and in emails to professors) and literacy (reading accuracy). These findings differ from findings reported in recent studies with Australian college students, British schoolchildren and American college students. Explanations for these differences are discussed, and future directions for research are presented.
Clare Wood, Sally Meachem, Samantha Bowyer, Emma Jackson, M. Luisa Tarczynski-Bowles, and Beverly Plester, "A longitudinal study of children's text messaging and literacy development", British Journal of Psychology. August 2011:
Recent studies have shown evidence of positive concurrent relationships between children's use of text message abbreviations (‘textisms’) and performance on standardized assessments of reading and spelling. This study aimed to determine the direction of this association. One hundred and nineteen children aged between 8 and 12 years were assessed on measures of general ability, reading, spelling, rapid phonological retrieval, and phonological awareness at the beginning and end of an academic year. The children were also asked to provide a sample of the text messages that they sent over a 2-day period. These messages were analyzed to determine the extent to which textisms were used. It was found that textism use at the beginning of the academic year was able to predict unique variance in spelling performance at the end of the academic year after controlling for age, verbal IQ, phonological awareness, and spelling ability at the beginning of the year. When the analysis was reversed, reading and spelling ability were unable to predict unique variance in textism usage. These data suggest that there is some evidence of a causal contribution of textism usage to spelling performance in children aged 8–12 years. However, when the measure of rapid phonological retrieval (rapid picture naming) was controlled in the analysis, the relationship between textism use and spelling ability just failed to reach statistical significance, suggesting that phonological access skills may mediate some of the relationship between textism use and spelling performance.
Also relevant (and also missing from the bibliography) is David Crystal's Txtng: The Gr8 Db8 (2008), discussed on Language Log in "Shattering the Illusions of Texting", 9/18/2008, "Menand on Linguistic Morality", 10/22/2008, and "Bad Language", 10/28/2008.
Finally, I need to remind everyone that texting-style abbreviations destroyed the Roman Empire: "pont max tr pot lol", 3/24/2008. As Catullus warned us, "coartatio et reges prius et beatas / perdidit urbes."