## Scientific pseudonyms

This article, about the grave (and life) of Powell Crosley Jr., reminded me of my graduate school colleague Crosley Shelvador, M.D.

OK, the "M.D." part is fictional, and the "colleague" part might be considered a misleading way to refer to an elderly but functional refrigerator. For some of the facts, see "Dr. Alfred Crockus and Crosley Shelvador, M.D.", 9/19/2007; "Crosley Shelvador comes in from the cold", 9/20/2007; "Stronzo Bestiale, Galadriel Mirkwood, Crosley Shelvador, …", 10/10/2014.

## Vaccine Efficiency?

Recently, two vaccine companies have presented evidence that their vaccines are respectively “90% effective” and “94½% effective”. True or false: assuming these results hold up, the chances are respectively 9 in 10 (945 in 1,000) that if you get vaccinated you won’t get Covid? If you said true, you are both woefully mistaken and doubtless far from alone. The articles report that the same large number of people got the vaccine as got a placebo, and that of the first 95 people to show up with the disease, 90% (94½%) came from the group that didn’t get the vaccine. In other words, if you got the disease, the chances are 9 in 10 that you didn’t get the vaccine. That is not the same thing as: if you got the vaccine, the chances are 9 in 10 that you didn’t get the disease.

Hypothetically — to make the arithmetic easy, but not unrealistically — suppose the number of volunteers in each group was 10,000 and of the first hundred people to get the disease 10 got the vaccine and 90 the placebo. Thus 90% of the infected folks came from the placebo group and it is reported that the vaccine was “90% effective”. If 90% effective means that 90% of vaccinated people didn’t get the disease and 10% got the disease, we have to look at the fraction of people who got the vaccine and also got the disease, which was 10 divided by 10,000 or .001, i.e., .1%, one tenth of one percent, not 10%.

Suppose, now, in an alternative experiment the experimenters had waited longer, until they had not 100 but 1,000 infected volunteers, and the same ratio of vaccine-to-placebo held: 900 infected volunteers from the placebo group and 100 from the vaccinated group. Then the fraction of vaccinated people who got the disease would be 100 divided by 10,000 or .01 or 1%, ten times as great as in the earlier experiment with only 100 infected volunteers, despite the ratio of vaccinated to placebo volunteers in the infected group remaining the same. The ratio of vaccinated to unvaccinated people in the infected group bears no direct relation to the probability that vaccination prevents infection. In the words of the drug companies, the vaccine would be 90% effective in both experiments, whereas neither experiment suggests anything like what most people would take “90% effective” to mean. The drug companies are evidently very good at creating vaccines and disastrous at talking about them.
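The arithmetic in both hypothetical experiments can be checked in a few lines. For comparison, this minimal sketch also computes the conventional epidemiological definition of vaccine efficacy (one minus the relative risk), which is not mentioned above; note that it comes out close to, but not identical with, the 90% share of infections in the placebo group.

```python
# Hypothetical trial from the text: 10,000 volunteers per arm;
# of the first 100 infections, 10 were vaccinated and 90 got the placebo.
n_per_arm = 10_000
infected_vax, infected_placebo = 10, 90

# "90% of the infected came from the placebo group":
share_placebo = infected_placebo / (infected_vax + infected_placebo)  # 0.90

# Attack rate among the vaccinated -- the number a naive reading of
# "90% effective" would put at 10%:
attack_vax = infected_vax / n_per_arm          # 0.001, i.e. 0.1%
attack_placebo = infected_placebo / n_per_arm  # 0.009

# Waiting for 1,000 infections at the same 90/10 split multiplies both
# attack rates by ten while leaving the 90% share unchanged:
attack_vax_late = 100 / n_per_arm              # 0.01, i.e. 1%

# Standard epidemiological vaccine efficacy: 1 - relative risk.
efficacy = 1 - attack_vax / attack_placebo     # 8/9, about 88.9%
```

The point of the last line is that the textbook efficacy figure is a ratio of attack rates, not a share of infections, which is why it stays the same in both experiments even though the vaccinated attack rate differs tenfold.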

## 35%, 3%, whatever…

Matt Herper: So for those just back from a tour of Jupiter’s moons, last night the FDA granted emergency use authorization of convalescent plasma to treat patients with Covid-19. Trump characterized the decision as a major breakthrough. FDA Commissioner Stephen Hahn, who joined him at a news conference to announce the decision, backed him up — but he also misspoke, claiming that giving plasma would help 35 out of 100 people treated.

Adam Feuerstein: Misspoke is being kind. Hahn grossly mischaracterized the benefit of convalescent plasma on Sunday night. I’ll just quote him here: “A 35% improvement in survival is a pretty substantial clinical benefit. What that means is — and if the data continue to pan out — 100 people who are sick with Covid-19, 35 would have been saved because of the administration of plasma.” […]

Matt: That number should be at best 5 out of 100 people. To my eye, it’s more like 3 out of 100 people. And all that is from subgroups of an observational study, so it should be taken with a grain of salt.

Researchers didn’t compare patients who got plasma to a control group. They compared those who got the drug early to those who got it late, and between high levels of antibodies in the plasma and low ones. For the main subset in the study, which was led by the Mayo Clinic, mortality at seven days was 11% for those who got lots of antibodies, versus 14% for those who got few. That’s three out of 100 — again, with a grain of salt.

## Translating "phenotypically diverse"

Michael Marshall, "The hidden links between mental disorders", Nature 5/5/2020:

Perhaps there are several dimensions of mental illness — so, depending on how a person scores on each dimension, they might be more prone to some disorders than to others. An alternative, more radical idea is that there is a single factor that makes people prone to mental illness in general: which disorder they develop is then determined by other factors. Both ideas are being taken seriously, although the concept of multiple dimensions is more widely accepted by researchers.

The details are still fuzzy, but most psychiatrists agree that one thing is clear: the old system of categorizing mental disorders into neat boxes does not work.

## Conceptual zombies and vampires

Lisa Feldman Barrett, "Zombie ideas", Observer 10/2019:

It’s October, a month auspicious for All Hallows’ Eve and everything spooky. Accordingly, our topic for this month is … zombies. Not the charmingly decayed corpses you encounter in movies and books, but zombie ideas. According to the economist Paul Krugman (2013), a zombie idea is a view that’s been thoroughly refuted by a mountain of empirical evidence but nonetheless refuses to die, being continually reanimated by our deeply held beliefs. […]

If you think that formal science training will zombie-proof your mind, you’re out of luck, my friend. Hordes of zombie ideas flourish in science (Brockman, 2015). They also fester in our own field, quietly biding their time in peer-reviewed papers and textbooks, waiting to infect another generation of unsuspecting psychological scientists.

## Quantum Bullshit Detector

Twitter is a good medium for this:

## The life cycle of unicorns

Maybe the tide is turning against "Gene for X" thinking — Ed Yong, "A Waste of 1,000 Research Papers", 5/17/2019:

Decades of early research on the genetics of depression were built on nonexistent foundations. How did that happen?

In 1996, a group of European researchers found that a certain gene, called SLC6A4, might influence a person’s risk of depression.

[…]

But a new study—the biggest and most comprehensive of its kind yet—shows that this seemingly sturdy mountain of research is actually a house of cards, built on nonexistent foundations.

## "Instant replay" and intellectual referees

The title of a post at MedPage Today echoes the widely negative reaction to obviously blown calls in the recent NFL conference title games — "Is Journal Peer-Review Now Just a Game? Milton Packer wonders if the time has come for instant replay":

Many believe that there is something sacred about the process by which manuscripts undergo peer-review by journals. A rigorous study described in a thoughtful paper is sent out to leading experts, who read it carefully and provide unbiased feedback. The process is conducted with honor and in a timely manner.

It sounds nice, but most of the time, it does not happen that way.

For some comments about the process from the perspective of editors, reviewers, and authors, see the rest of Packer's post. His experience is in the biomedical field, but the situation is similar in other fields. Amazingly bad stuff is often published in respectable and even eminent journals, and genuinely insightful work can be delayed for years by painfully slow interactions with inattentive and dubiously competent reviewers.

## Group differences

## Language machinery

Xavier Marquez, "Stalin as Reviewer #2", Abandoned Footnotes 117/2018:

Most people reading this blog probably know about Trofim Lysenko, who, with Stalin’s help, set back Soviet genetics in the late 1940s, preventing any discussion of Mendelian inheritance. Yet Stalin’s influence on Soviet scholarship after WWII was much more far-reaching. He intervened in disputes concerning philosophy, physics, physiology, linguistics, and political economy; in fact, one of the epithets by which he was sometimes referred to in the press was “the coryphaeus of science”, i.e., the leader of the chorus of Soviet science. (Lysenko himself used the term in his eulogy for Stalin in 1953, though it was first used in 1939.)

Most of these interventions were editorial in character. He edited pre-publication drafts of articles and books, often in close consultation with their authors and at great length (he was actually a decent editor), and occasionally provided feedback on published and unpublished work. And he did this despite the fact that he was the undisputed ruler of one of the victors of World War II, a country that was facing the gigantic task of reconstruction after one of the most destructive conflicts in human history. In short, he was the editor and reviewer from hell.

The story of Stalin’s intervention into Soviet linguistics is particularly funny, at least in the morbid way that anything from that time can be funny. And it also brings out some interesting points about how official ideological commitments both constrained and enabled Stalin and Stalinism.

## Replicate vs. reproduce (or vice versa?)

Lorena Barba, "Terminologies for Reproducible Research", arXiv.org 2/9/2018:

Reproducible research—by its many names—has come to be regarded as a key concern across disciplines and stakeholder groups. Funding agencies and journals, professional societies and even mass media are paying attention, often focusing on the so-called "crisis" of reproducibility. One big problem keeps coming up among those seeking to tackle the issue: different groups are using terminologies in utter contradiction with each other. Looking at a broad sample of publications in different fields, we can classify their terminology via decision tree: they either, A—make no distinction between the words reproduce and replicate, or B—use them distinctly. If B, then they are commonly divided in two camps. In a spectrum of concerns that starts at a minimum standard of "same data+same methods=same results," to "new data and/or new methods in an independent study=same findings," group 1 calls the minimum standard reproduce, while group 2 calls it replicate. This direct swap of the two terms aggravates an already weighty issue. By attempting to inventory the terminologies across disciplines, I hope that some patterns will emerge to help us resolve the contradictions.

## Belles infidèles in the neuroscience of bilingualism

Following up on "Citation crimes and misdemeanors" (9/9/2017), Breffni O'Rourke sent in a link to Michel Paradis, "More belles infidèles — or why do so many bilingual studies speak with forked tongue?", Journal of Neurolinguistics 2006:

This note reports misquotations, misinterpretations, misrepresentations, inaccuracies and plain falsehoods found in the literature on the neuroscience of bilingualism. They are astounding in both number and kind. Authors cite papers that do not exist, or that exist but are absolutely irrelevant to, or even occasionally argue against, the point they are cited to support; or they attribute a statement to the wrong source, sometimes to a person who has vehemently and persistently argued against it. Obvious errors are quoted for years by numerous authors who have not read the original paper, until somebody blows the whistle — and even then, some persevere. As Darwin [Darwin, C. (1872). The origin of species. 6th edition. New York: A. L. Burt.] put it: ‘great is the power of steady misrepresentation’.
