Tom Bartlett, "Document Sheds Light on Investigation at Harvard", Chronicle of Higher Education 8/19/2010:
Ever since word got out that a prominent Harvard University researcher was on leave after an investigation into academic wrongdoing, a key question has remained unanswered: What, exactly, did he do? [...]
An internal document, however, sheds light on what was going on in Mr. Hauser's lab. It tells the story of how research assistants became convinced that the professor was reporting bogus data and how he aggressively pushed back against those who questioned his findings or asked for verification.
A copy of the document was provided to The Chronicle by a former research assistant in the lab who has since left psychology. The document is the statement he gave to Harvard investigators in 2007.
Bartlett's anonymous source paints an alarming picture of practices and relationships in Hauser's lab:
According to the document that was provided to The Chronicle, the experiment in question was coded by Mr. Hauser and a research assistant in his laboratory. A second research assistant was asked by Mr. Hauser to analyze the results. When the second research assistant analyzed the first research assistant's codes, he found that the monkeys didn't seem to notice the change in pattern. In fact, they looked at the speaker more often when the pattern was the same. In other words, the experiment was a bust.
But Mr. Hauser's coding showed something else entirely: He found that the monkeys did notice the change in pattern—and, according to his numbers, the results were statistically significant. If his coding was right, the experiment was a big success. [...]
The research assistant who analyzed the data and the graduate student decided to review the tapes themselves, without Mr. Hauser's permission, the document says. They each coded the results independently. Their findings concurred with the conclusion that the experiment had failed: The monkeys didn't appear to react to the change in patterns.
They then reviewed Mr. Hauser's coding and, according to the research assistant's statement, discovered that what he had written down bore little relation to what they had actually observed on the videotapes. He would, for instance, mark that a monkey had turned its head when the monkey didn't so much as flinch. It wasn't simply a case of differing interpretations, they believed: His data were just completely wrong.
As word of the problem with the experiment spread, several other lab members revealed they had had similar run-ins with Mr. Hauser, the former research assistant says. This wasn't the first time something like this had happened. There was, several researchers in the lab believed, a pattern in which Mr. Hauser reported false data and then insisted that it be used.
Let me say again: In addition to the obvious "best practices" of blind coding and careful calibration of inter-coder agreement, there's no longer any excuse not to publish the raw data from experiments like these.
Guarding against fraud or the suspicion of fraud is only one of the many reasons that this is a good idea.
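To make the inter-coder agreement point concrete: a standard way to calibrate agreement between two independent coders is Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance. The sketch below uses made-up binary codes (1 = monkey oriented to the speaker, 0 = no response) purely for illustration; it is not the lab's actual data or procedure.

```python
# Cohen's kappa: chance-corrected agreement between two independent coders.
# The codes below are hypothetical, invented for illustration only.
coder_a = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]
coder_b = [1, 0, 0, 0, 0, 0, 1, 0, 1, 0]

n = len(coder_a)

# Raw proportion of trials on which the two coders agree.
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Agreement expected by chance, from each coder's marginal rate of 1s.
p_a = sum(coder_a) / n
p_b = sum(coder_b) / n
expected = p_a * p_b + (1 - p_a) * (1 - p_b)

# Kappa: how far observed agreement exceeds chance, as a fraction of
# the maximum possible improvement over chance.
kappa = (observed - expected) / (1 - expected)
print(round(kappa, 3))  # prints 0.524
```

Here the coders agree on 8 of 10 trials (80%), but because both rarely code a response, much of that agreement is expected by chance, and kappa comes out around 0.52, conventionally read as only "moderate" agreement. This is exactly why calibration matters: raw percent agreement can look reassuring while masking weak reliability, and publishing the raw codes lets anyone recompute such statistics for themselves.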
[Update -- more coverage: Greg Miller, "Investigation Leaves Field in the Dark About a Colleague's Work", Science 20 August 2010; Derek Bickerton, "Why Hauser Did It: Scientific dogma, not Hauser, is to blame for misconduct", Psychology Today 8/19/2010; "Harvard Probes Claims of Scientific Misconduct", NPR 8/18/2010.]