Nobody does sarcastic invective like the English, and Steve Connor, the science editor of The Independent, recently demonstrated his command of the form. But he started out in a shaky moral position, and he got his facts wrong, so it didn't turn out well for him.
Ben Goldacre started the whole thing ("World Conference of Science Journalists – Troublemakers Fringe, Penderel’s Oak Pub, Holborn, 1st July 8pm – Midnight", Bad Science, 6/24/2009):
Next week the World Conference of Science Journalists will be coming to London. A few of us felt they might not adequately address some of the key problems in their profession, which has deteriorated to the point where they present a serious danger to public health, fail to keep geeks well nourished, and actively undermine the public's understanding of what it means for there to be evidence for a claim.
More importantly we fancied some troublemaking and a night in the pub.
As a result, you have the opportunity to come and see three angry nerds explain how and why mainstream media’s science coverage is broken, misleading, dangerous, lazy, venal, and silly. Join our angry rabble, and tell the world of science journalists exactly what you think about their work. All are welcome, admission is free. They may not come.
After the presentations (with powerpoint and everything, in a pub) we will attempt to collaboratively and drunkenly derive some best practise guidelines for health and science journalists, with your kind assistance.
I was on the wrong side of the Atlantic last night, but I drank an IPA in sympathy — and if you'll look at the bottom of this post, you'll find a link to my contribution towards those "best practise guidelines" — which features Mr. Connor himself.
On Monday, Steve Connor struck back ("Lofty medics should stick to their day job", The Independent, 6/30/2009):
The sixth World Conference of Science Journalists is underway in London. I can’t say it’s going to change my life, as I missed out on the previous five, but I did notice that it has attracted the attention of a bunch of medics with strong views on the state of science journalism today. […]
The medics met in a pub in London last night to explain why the "mainstream media's science coverage is broken, misleading, dangerous, lazy, venal and silly". All three speakers are gainfully employed by the public sector so they don't actually have to worry too much about the sort of pressures and financial constraints the mainstream media are under. But they nevertheless condescended to offer some advice on the sort of "best practice guidelines" I should be following, for which I suppose I should be eternally grateful.
But their arrogance is not new. Medical doctors in particular have always had a lofty attitude to the media's coverage of their profession, stemming no doubt from the God-like stance they take towards their patients. Although I wouldn't go as far as to say their profession is broken, dangerous, lazy, venal and silly – not yet anyway.
Ben Goldacre and his two fellow trouble-makers responded yesterday with a letter to The Independent, which may or may not get published there. But who reads letters to the editor in newspapers, anyhow? So more usefully, Ben reproduced the letter on his weblog ("Steve Connor is an angry man", Bad Science, 7/1/2009):
Your science journalist Steve Connor is furious that we are holding a small public meeting in a pub to discuss the problem that science journalists are often lazy and inaccurate. He gets the date wrong, claiming the meeting has already happened (it has not). He says we are three medics (only one of us is). He then invokes some stereotypes about arrogant doctors, which we hope are becoming outdated.
In fact, all three of us believe passionately in empowering patients, with good quality information, so they can make their own decisions about their health. People often rely on the media for this kind of information. Sadly, in the field of science and medicine, on subjects as diverse as MMR, sexual health, and cancer prevention, the public have been repeatedly and systematically misled by journalists.
We now believe this poses a serious threat to public health, and it is sad to see the problem belittled in a serious newspaper. Steve Connor is very welcome to attend our meeting, which is free and open to all.
OK, now for my small contribution to "best practice guidelines" for science journalism, relevant here because Steve Connor wrote one of the featured Bad Examples ("Thou shalt not report odds ratios", 7/30/2007):
This is the second in a series of posts aimed at improving the rhetoric (and logic) of science journalism. Last time ("Two simple numbers", 7/22/2007), I asked for something positive: stories on "the genetic basis of X" should tell us how frequent the genomic variant is among people with X and among people without X. This time, I've got a related, but negative, request.
No, let's make it a commandment: Thou Shalt Not Report Odds Ratios. In fact, I'd like to suggest that any journalist who reports an odds ratio as if it were a relative risk should be fired... no, sent back to school.
And here's where Steve Connor comes into it. You should really follow the link and read the whole discussion of odds ratios vs. risk ratios, but for those of you who don't follow links, I'll reproduce part of my discussion of his contribution:
Find any piece of reporting that talks about "raising the risk of X by Y%", or any of the many other ways of putting this same concept into English, and the chances are that you've found a violation of this commandment. Let me give two recent examples, among thousands lurking in the past month's news archive.
According to Steve Connor, "Childhood asthma gene identified by scientists", The Independent, 7/5/2007
A gene that significantly increases the risk of asthma in children has been discovered by scientists who described it as the strongest link yet in the search to find a genetic basis for the condition.
Inheriting the gene raises the risk of developing asthma by between 60 and 70 per cent - enough for researchers to believe that the discovery may eventually open the way to new treatments for the condition. [emphasis added]
The study in question (I believe — the article doesn't give any specific reference, as usual for the genre of science journalism) is Miriam F. Moffatt et al., "Genetic variants regulating ORMDL3 expression contribute to the risk of childhood asthma", Nature 448, 470-473 (26 July 2007). This is another big genome-wide association study — roughly 300,000 single-nucleotide polymorphisms were scanned in several populations in the UK and in Germany.
In this case, general information about allele frequencies is not provided (and perhaps was not available). However, this information is given in one crucial case:
In the subset of individuals for whom expression data are available, the T nucleotide allele at rs7216389 (the marker most strongly associated with disease in the combined GWA analysis) has a frequency of 62% amongst asthmatics compared to 52% in non-asthmatics (P = 0.005 in this sample).
Now, Steve Connor is not a sports columnist trying his hand at a science piece. (That's a plausible excuse for Denis Campbell's disastrously botched autism/MMR story in the Observer, memorably vivisected by Ben Goldacre in many Bad Science posts and a BMJ article.) Connor is listed as the "Science Editor" of the Independent, and he ought to know better.
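For readers who want to check the arithmetic behind that complaint: the 62%-vs-52% allele frequencies quoted above determine an odds ratio directly (odds ratios are symmetric, so the odds ratio for allele given disease equals the one for disease given allele), but they do not determine a relative risk without a baseline risk, which neither the paper excerpt nor the newspaper article supplies. A minimal Python sketch, in which the 10% baseline risk is an assumed illustration and not a figure from the study:

```python
# Allele frequencies for the T allele at rs7216389, from Moffatt et al.
f_case = 0.62  # frequency among asthmatics
f_ctrl = 0.52  # frequency among non-asthmatics

# The odds ratio is computable from case-control frequencies alone,
# because it is symmetric between exposure and outcome.
odds_case = f_case / (1 - f_case)
odds_ctrl = f_ctrl / (1 - f_ctrl)
odds_ratio = odds_case / odds_ctrl
print(f"odds ratio: {odds_ratio:.2f}")  # ~1.51

# A relative risk additionally needs the baseline risk p0 (risk of
# asthma among non-carriers), which the article does not give.
# Assuming p0 = 0.10 purely for illustration, the standard
# odds-ratio-to-risk-ratio conversion gives:
p0 = 0.10  # assumed, not from the study
rel_risk = odds_ratio / (1 - p0 + p0 * odds_ratio)
print(f"relative risk at p0={p0}: {rel_risk:.2f}")  # ~1.43
```

The point is not the particular numbers but the gap between them: "raises the risk by X per cent" is a claim about the relative risk, and the relative risk cannot be read off a case-control odds ratio without knowing how common the condition is.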
Looking this over, I think that I may have been wrong on one point. When I wrote it, the many recent news articles that discussed "raising the risk of X by Y%" or similar formulations were mostly talking about odds ratios, at least in the sample that I checked. When I look today, I also find many such articles (though perhaps not as many as I found then), but of the first few I checked, several were talking about risk ratios rather than odds ratios.
But I don't believe that this is because science journalists have taken my advice — I imagine that most of them are unaware that it was ever given. It's because some scientists' press releases cite risk ratios, while others cite odds ratios, and the science writers just go with what they're given. In July of 2007, I happened to check a number of studies where the culturally standard methodology yields estimates of odds ratios; today, there are a larger number from subdisciplines where risk ratios seem to be the norm.
It would be nice if science writers knew the difference, asked questions in each case so as to sort out what the provided numbers really mean, and expressed them in a way that wouldn't mislead their readers.
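By way of a worked example of what "knowing the difference" amounts to, here's a small Python sketch with invented numbers, showing how reporting an odds ratio as if it were a relative risk overstates the effect whenever the outcome is reasonably common:

```python
# Hypothetical cohort (invented numbers, for illustration only):
# 100 exposed people, 100 unexposed people.
exposed_cases, exposed_total = 30, 100
unexposed_cases, unexposed_total = 20, 100

risk_exp = exposed_cases / exposed_total        # 0.30
risk_unexp = unexposed_cases / unexposed_total  # 0.20

# Relative risk: the quantity "raises the risk by Y%" actually refers to.
relative_risk = risk_exp / risk_unexp  # 1.50, i.e. "raises the risk by 50%"

# Odds ratio: what many studies report.
odds_ratio = (risk_exp / (1 - risk_exp)) / (risk_unexp / (1 - risk_unexp))
# ~1.71; reading this as "raises the risk by 71%" overstates the true 50%.

print(f"relative risk: {relative_risk:.2f}")  # 1.50
print(f"odds ratio:    {odds_ratio:.2f}")     # 1.71
```

The two measures converge only when the outcome is rare in both groups, which is exactly why a journalist has to ask which one the press release is quoting.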