The Empire Snarks Back


Nobody does sarcastic invective like the English, and Steve Connor, the science editor of The Independent, recently demonstrated his command of the form. But he started out in a shaky moral position, and he got his facts wrong, so it didn't turn out well for him.

Ben Goldacre started the whole thing ("World Conference of Science Journalists – Troublemakers Fringe, Penderel’s Oak Pub, Holborn, 1st July 8pm – Midnight", Bad Science, 6/24/2009):

Next week the World Conference of Science Journalists will be coming to London. A few of us felt they might not adequately address some of the key problems in their profession, which has deteriorated to the point where they present a serious danger to public health, fail to keep geeks well nourished, and actively undermine the public's understanding of what it means for there to be evidence for a claim.

More importantly we fancied some troublemaking and a night in the pub.

As a result, you have the opportunity to come and see three angry nerds explain how and why mainstream media’s science coverage is broken, misleading, dangerous, lazy, venal, and silly. Join our angry rabble, and tell the world of science journalists exactly what you think about their work. All are welcome, admission is free. They may not come.

After the presentations (with powerpoint and everything, in a pub) we will attempt to collaboratively and drunkenly derive some best practise guidelines for health and science journalists, with your kind assistance.

I was on the wrong side of the Atlantic last night, but I drank an IPA in sympathy — and if you'll look at the bottom of this post, you'll find a link to my contribution towards those "best practise guidelines" — which features Mr. Connor himself.

On Monday, Steve Connor struck back ("Lofty medics should stick to their day job", The Independent, 6/30/2009):

The sixth World Conference of Science Journalists is underway in London. I can’t say it’s going to change my life, as I missed out on the previous five, but I did notice that it has attracted the attention of a bunch of medics with strong views on the state of science journalism today. […]

The medics met in a pub in London last night to explain why the "mainstream media's science coverage is broken, misleading, dangerous, lazy, venal and silly". All three speakers are gainfully employed by the public sector so they don't actually have to worry too much about the sort of pressures and financial constraints the mainstream media are under. But they nevertheless condescended to offer some advice on the sort of "best practice guidelines" I should be following, for which I suppose I should be eternally grateful.

But their arrogance is not new. Medical doctors in particular have always had a lofty attitude to the media's coverage of their profession, stemming no doubt from the God-like stance they take towards their patients. Although I wouldn't go as far as to say their profession is broken, dangerous, lazy, venal and silly – not yet anyway.

Ben Goldacre and his two fellow trouble-makers responded yesterday with a letter to The Independent, which may or may not get published there. But who reads letters to the editor in newspapers, anyhow? So more usefully, Ben reproduced the letter on his weblog ("Steve Connor is an angry man", Bad Science, 7/1/2009):

Your science journalist Steve Connor is furious that we are holding a small public meeting in a pub to discuss the problem that science journalists are often lazy and inaccurate. He gets the date wrong, claiming the meeting has already happened (it has not). He says we are three medics (only one of us is). He then invokes some stereotypes about arrogant doctors, which we hope are becoming outdated.

In fact, all three of us believe passionately in empowering patients, with good quality information, so they can make their own decisions about their health. People often rely on the media for this kind of information. Sadly, in the field of science and medicine, on subjects as diverse as MMR, sexual health, and cancer prevention, the public have been repeatedly and systematically misled by journalists.

We now believe this poses a serious threat to public health, and it is sad to see the problem belittled in a serious newspaper. Steve Connor is very welcome to attend our meeting, which is free and open to all.

OK, now for my small contribution to "best practice guidelines" for science journalism, which is also relevant because Steve Connor wrote one of the featured Bad Examples: "Thou shalt not report odds ratios", 7/30/2007:

This is the second in a series of posts aimed at improving the rhetoric (and logic) of science journalism. Last time ("Two simple numbers", 7/22/2007), I asked for something positive: stories on "the genetic basis of X" should tell us how frequent the genomic variant is among people with X and among people without X. This time, I've got a related, but negative, request.

No, let's make it a commandment: Thou Shalt Not Report Odds Ratios. In fact, I'd like to suggest that any journalist who reports an odds ratio as if it were a relative risk should be fired (or at least sent back to school).

And here's where Steve Connor comes into it. You should really follow the link and read the whole discussion of odds ratios vs. risk ratios, but for those of you who don't follow links, I'll reproduce part of my discussion of his contribution:


Find any piece of reporting that talks about "raising the risk of X by Y%", or any of the many other ways of putting this same concept into English, and the chances are that you've found a violation of this commandment. Let me give two recent examples, among thousands lurking in the past month's news archive.

According to Steve Connor, "Childhood asthma gene identified by scientists", The Independent, 7/5/2007:

A gene that significantly increases the risk of asthma in children has been discovered by scientists who described it as the strongest link yet in the search to find a genetic basis for the condition.

Inheriting the gene raises the risk of developing asthma by between 60 and 70 per cent – enough for researchers to believe that the discovery may eventually open the way to new treatments for the condition. [emphasis added]

The study in question (I believe — the article doesn't give any specific reference, as usual for the genre of science journalism) is Miriam F. Moffatt et al., "Genetic variants regulating ORMDL3 expression contribute to the risk of childhood asthma", Nature 448, 470-473 (26 July 2007). This is another big genome-wide association study — roughly 300,000 single-nucleotide polymorphisms were scanned in several populations in the UK and in Germany.

In this case, general information about allele frequencies is not provided (and perhaps was not available). However, this information is given in one crucial case:

In the subset of individuals for whom expression data are available, the T nucleotide allele at rs7216389 (the marker most strongly associated with disease in the combined GWA analysis) has a frequency of 62% amongst asthmatics compared to 52% in non-asthmatics (P = 0.005 in this sample).

[…]

Now, Steve Connor is not a sports columnist trying his hand at a science piece. (That's a plausible excuse for Denis Campbell's disastrously botched autism/MMR story in the Observer, memorably vivisected by Ben Goldacre in many Bad Science posts and a BMJ article.) Connor is listed as the "Science Editor" of the Independent, and he ought to know better.


Looking this over, I think that I may have been wrong on one point. When I wrote it, the many recent news articles that discussed "raising the risk of X by Y%" or similar formulations were mostly talking about odds ratios, at least in the sample that I checked. When I look today, I also find many such articles (though perhaps not as many as I found then), but of the first few I checked, several were talking about risk ratios rather than odds ratios.

But I don't believe that this is because science journalists have taken my advice — I imagine that most of them are unaware that it was ever given. It's because some scientists' press releases cite risk ratios, while others cite odds ratios, and the science writers just go with what they're given. In July of 2007, I happened to check a number of studies where the culturally-standard methodology yields estimates of odds ratios; today, more of them come from subdisciplines where risk ratios seem to be the norm.

It would be nice if science writers knew the difference, asked questions in each case so as to sort out what the provided numbers really mean, and expressed them in a way that wouldn't mislead their readers.
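To make the distinction concrete, here is a minimal sketch of the arithmetic (not part of the original post; the 62%/52% allele frequencies are from the Nature paper, while the 9% overall asthma prevalence is an assumed round figure):

```python
def conditional_risks(p_allele_cases=0.62, p_allele_controls=0.52, prevalence=0.09):
    """Turn 'allele frequency among (non-)asthmatics' into
    'asthma risk with/without the allele' via Bayes' rule."""
    p_allele = p_allele_cases * prevalence + p_allele_controls * (1 - prevalence)
    risk_with = p_allele_cases * prevalence / p_allele
    risk_without = (1 - p_allele_cases) * prevalence / (1 - p_allele)
    return risk_with, risk_without

# Odds ratio computed directly from the allele frequencies:
odds_ratio = (0.62 / 0.38) / (0.52 / 0.48)   # about 1.51

risk_with, risk_without = conditional_risks()
relative_risk = risk_with / risk_without     # about 1.45
```

Neither number comes close to the 60-70 per cent increase reported in the article, and at low prevalences the two nearly coincide — which is exactly why the distinction is so easy for a reporter to miss.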



15 Comments

  1. Bobbie said,

    July 2, 2009 @ 10:59 am

    As a technical editor, I'll do some of the worrying about a science editor who cannot even read a date correctly! You can do the heavy lifting by worrying about the difference in meanings and interpretations of risk ratios and odds ratios. OK?

  2. Ginger Yellow said,

    July 2, 2009 @ 11:05 am

    We journalists can be defensive at times, sometimes justifiably, but Connor's piece is terrible. His entire argument is "Science journalism is bad, eh? Well doctors are bad people who leech off the taxpayer." What the hell is that argument doing coming from a science editor of a national newspaper? And anyone who actually listens to Goldacre's arguments and evidence (which he supplies in sad abundance) on the state of science journalism would find it hard to disagree with him.

  3. Mark P said,

    July 2, 2009 @ 11:06 am

    " … deteriorated to the point where they present a serious danger to public health …"

    Not quite. I think the mainstream media coverage of science and technology is of about the same quality that it has always been. Or has been for decades, at least.

    "broken, misleading, dangerous, lazy, venal, and silly."

    Yes, pretty much so. The real issue in my opinion is "lazy." Combine that with ignorant (in the best sense) and you can easily get "misleading" and "silly." How many journalists in the MSM have even a vague idea of what risk ratios or odds ratios are? I have an advanced degree in a scientific field and I would have to read up on them before I tried to explain them to someone.

    It would be nice if journalists could be taught that they do not know what these things mean, and so it would be a good idea for them to ask someone, and maybe even submit portions of their stories to their sources to see whether the stories are reasonably consistent with the facts they are trying to report.

    The truly unfortunate thing is that virtually every subject is approached with about the same level of understanding and regard for complex facts. The good news today is that readers have access to many other sources of information. The other unfortunate thing is that most news consumers have about the same approach to reading the news as the writers have to writing it.

    I should say that there are some science writers who are actually good and may well have a better grasp of a lot of areas than I do.

  4. tom p said,

    July 2, 2009 @ 11:19 am

    Mark P, I think that the 'serious danger to public health' line refers to the media's persistent lies (for that's all they can be after it was explained to them a million times) about the non-existent MMR-autism link. Their scaremongering has led to a vastly reduced MMR uptake, which has, in turn, serious public health consequences.

  5. Ginger Yellow said,

    July 2, 2009 @ 11:35 am

    "I should say that there are some science writers who are actually good and may well have a better grasp of a lot of areas than I do."

    Undoubtedly. Part of the argument that Goldacre usually puts forward is that, particularly with regard to health stories, high profile science news is often written by non-specialists, while the specialists are relegated to covering less sensational stories.

  6. Ryan Denzer-King said,

    July 2, 2009 @ 3:49 pm

    Either I'm a bit slow or the 2007 Connor article is beyond repair (or maybe both). I see that this rs7216389 allele occurs in 52% of non-asthmatics and 62% of asthmatics. How does that translate into "if you have the gene, you have a 60-70% higher risk of developing asthma"? Was he looking at the 62% of asthmatics who have the allele? With my nothing-above-high-school-science background, I'm not sure I can formulate a coherent and accessible summary of that study, but I'd be tempted to summarize this as "if you have the gene, you have a 10% higher risk of developing asthma" (though I may be misinterpreting those figures).

  7. Sven Sinclair said,

    July 2, 2009 @ 5:22 pm

    Mark, your analysis of the Connor asthma article is incorrect. Let x be the fraction of population that has asthma. Then what the article says is that 0.62x of the population has asthma and the gene, 0.38x has asthma but not the gene, 0.52(1-x) has the gene, but no asthma, and 0.48(1-x) has neither asthma nor the gene.

    The conditional probability that you'll have asthma given that you have the gene (i.e., the risk of asthma with the gene) is then 0.62x/(0.52+0.1x) and the conditional probability that you'll have asthma given that you don't have the gene (i.e., the risk of asthma without the gene) is 0.38x/(0.48-0.1x). The ratio of the risks is then (0.62/0.38)*(0.48-0.1x)/(0.52+0.1x), which is between 1.4 and 1.5 for a reasonable guess for x (0-20%).

    So the example you cited implies that the gene raises the risk of asthma by 40-50%. Now that's not 60-70% as the news article stated, but the difference is not so great that some other factors (e.g., correlations with other known risk factors) that you may have missed couldn't explain it.

    [(myl) Yes, I believe that you're right — in my original post, I got the conditional probabilities tangled.

    But I continue to think that good journalistic practice would go beyond statements of the form "X increases the risk/chances/odds of Y by Z%" — which are often confusing at best — and tell us the results in ways that are easier to understand, e.g. "P% of people with the allele A had at least one asthma attack before age 7 (or whatever the criterion was), while only Q% of people with allele B did."

    If those numbers were (say) 10% and 15% — or 9% and 13% — which given the prevalence of asthma is probably about right, then readers can understand much better what the result means or doesn't mean.

    That's not the form in which the data was presented in the original paper, but with a little reasoning of the form that you give above, and a couple of simple questions asked of one of the authors, a science writer should be able to make the transformation for us. ]
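Sven's algebra in the comment above is easy to check numerically across a range of guesses for the prevalence x. A sketch (the only inputs are the paper's 62%/52% allele frequencies; everything else follows from his formula):

```python
def risk_ratio(x):
    # Sven Sinclair's expression for the relative risk as a
    # function of the overall asthma prevalence x
    return (0.62 / 0.38) * (0.48 - 0.1 * x) / (0.52 + 0.1 * x)

# Over the "reasonable guess" range of 0-20% prevalence, the ratio
# stays roughly between 1.39 and 1.51
ratios = [risk_ratio(x / 100) for x in range(21)]
```

Note that as x approaches zero the relative risk converges to the odds ratio (about 1.51), which is why the two measures are so easily conflated for rare conditions.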

  8. Sven Sinclair said,

    July 2, 2009 @ 5:27 pm

    P.S. Alternatively, Connor might have made an arithmetic error (maybe mixing up the signs in the numerator and denominator – that would get him to 60% if every 6th person has asthma, although that frequency seems too high).

    [(myl) The CDC's page on asthma statistics says that "In 2002, 72 people per 1,000 or 20 million people, currently had asthma …", and "30.8 million people (111 people per 1,000) had ever been diagnosed with asthma during their lifetime".

    So depending on how you count, the overall rate in the U.S. seems to be between 7 and 11 percent. If we take the mean of 9%, then by your reasoning, the conditional probability of asthma with the gene is about 10.5%, versus about 7.2% without it. The ratio of risks is about 1.45 — but I still think that the conditional probabilities are more informative, and should be given along with the ratio. ]

  9. Sven Sinclair said,

    July 2, 2009 @ 6:04 pm

    I agree with your responses – and with the general ideas you promote. Just wanted to point out a specific error in the otherwise very good post.

  10. chris said,

    July 2, 2009 @ 11:01 pm

    Mark P wrote:

    The good news today is that readers have access to many other sources of information. The other unfortunate thing is that most news consumers have about the same approach to reading the news as the writers have to writing it.

    Indeed. Personally I look forward to the continued decline of print media, when newspapers will be primarily online – maybe then online news will finally include hyperlinks to the sources used by the journalists (e.g. press releases) so lazy readers will be able to investigate the reported information with virtually no effort.

  11. Hans said,

    July 3, 2009 @ 4:17 am

    Hmm, now I'm confused…

    In his original post, Mark says: "If you're good at mental arithmetic, you may be worried that even the odds ratio doesn't quite make it to 1.6 or 1.7 in this case: (.62/.38)/(.52/.48) ≅ 1.51."

    But Sven Sinclair calculates the relative risk, so he needs an estimate of the prevalence, which he calls x, and finds an RR "between 1.4 and 1.5 for a reasonable guess for x (0-20%)."

    Those are completely different things, aren't they? Why then does Mark reply to Sven: "Yes, I believe that you're right — in my original post, I got the conditional probabilities tangled."?

    Shouldn't he have said: "Those are completely different things?" Or am I still in the dark about the difference between OR and RR? And isn't it true that he'll always find an RR close to the OR if he assumes a low prevalence?

    (Disclosure: I'm a science journalist…)

    [(myl) On Sven's reading (which I think is probably correct), the number that Connor cited was indeed a relative risk, though we're still not clear how to get it from the data in the paper. (I guess that it was probably supplied by the authors, or perhaps more likely by their publicist in a press release; and perhaps it arose from restricting attention to some sub-population.)

    My first attempt to get the cited number from the reported data was based on the idea that the number was an odds ratio, starting from the (relatively high) prevalences of the cited SNP. But as Sven pointed out, this is just a mistake — in order to get a number (whether an odds ratio or a risk ratio) that could be interpreted as an effect on an individual's likelihood of getting asthma, we should calculate the proportion of each genomically-defined group with asthma, not the proportion of asthmatics and non-asthmatics with the genomic variant. In other words, I got the conditional probabilities backwards.

    When you do it the right way round, the probabilities turn out to be fairly low (apparently in the 5-15% range, though the exact number depends on what you assume the population frequency of asthma to be). In that range of conditional probabilities, as you observe, odds ratios are not much different from risk ratios. So I must have been wrong in assuming that this was an example of a misleading use of the odds ratio. Instead, it must have been some other misleading use of numbers, perhaps cherry-picking from subpopulations.

    Again, I assume that the number came from the scientists. But a science journalist could find (and present to readers) some other numbers, like the proportion of asthmatics and non-asthmatics with the cited genomic variant (62% vs. 52%, given in the paper), or the actual estimated probabilities of getting asthma (not given in the paper, but apparently something like 7.2% without the gene, 10.5% with it).

    My general point is that the treatment of "X% more/less likely" or "X% increased/decreased risk" numbers in popular science articles is often misleading. And all snark aside, one of the laudable goals of the pub meeting was to discuss "best practices", which would include ways of dealing with such numbers. ]

  12. Picky said,

    July 6, 2009 @ 7:21 am

    For the record, the response from Messrs Bell, Boynton and Goldacre is published in today's Independent. (So: I read letters to the editor).

  13. John said,

    July 7, 2009 @ 8:33 am

    @ Ryan Denzer-King:

    "I see that this rs7216389 allele occurs in 52% of non-asthmatics and 62% of asthmatics. … I'd be tempted to summarize this as "if you have the gene, you have a 10% higher risk of developing asthma…"

    I think this shows an interesting ambiguity in how we talk about percentages. If somebody just tells me, "ratio A is 10% higher than ratio B," and I know that ratio B is 52%, then does that mean that ratio A is 62%, or is it rather 52% * 1.1 = 57.2%? Without a lot of extra contextual information, I wouldn't really have any idea which they meant, and I'd have to ask for clarification.

    Maybe there's some convention about what such a thing should mean in the medical literature — I don't know, I'm just a math guy — but in the context of a news article, this manner of speaking seems horribly ambiguous. And yet I'm sure I've seen it hundreds of times in various newspapers over the years.
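The ambiguity John describes can be stated in two lines (a sketch; the 52% baseline is his hypothetical number):

```python
base = 0.52
additive = base + 0.10        # "10 percentage points higher": 62%
multiplicative = base * 1.10  # "10 percent higher": 57.2%
```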

  14. Picky said,

    July 7, 2009 @ 9:41 am

    John: I thought we'd got over this ambiguity, and that it was now considered proper to say "10 percentage points" in your first example and "10 percent" in your second. Well, if we haven't, we should have.

  15. Sven Sinclair said,

    July 7, 2009 @ 9:48 am

    John – when I report relations between ratios in my work (as economist and actuary), I never write "A is 10% higher than B". Let's say A = 60% and B = 50%. I would either write "A is 10 percentage points higher than B" or "A is higher than B by a factor of 1.2". Both of those expressions sound awkward to me, but they generally avoid misunderstanding. And – as Mark would no doubt endorse – the best way is often just to say that A is 60% and B is 50%. (However, sometimes the pattern being described is a bit more complicated, say a trend based on more than two values, and has to be summarized for the readers.)

    BTW, Mark's recap of my reading of Connor is accurate.
