Mike Paluska, "Investigator: Herman Cain innocent of sexual advances", CBS Atlanta, 11/10/2011:
Private investigator TJ Ward said presidential hopeful Herman Cain was not lying at a news conference on Tuesday in Phoenix.
Cain denied making any sexual actions towards Sharon Bialek and vowed to take a polygraph test if necessary to prove his innocence.
Cain has not taken a polygraph but Ward said he does have software that does something better.
Ward said the $15,000 software can detect lies in people's voices.
This amazingly breathless and credulous report doesn't even bother to tell us what the brand name of the software is, and certainly doesn't give us anything but Mr. Ward's unsupported (and in my opinion almost certainly false) assertion about how well it works:
Ward said the technology is a scientific measure that law enforcement use as a tool to tell when someone is lying and that it has a 95 percent success rate.
The screen views available in the report don't (as far as I can see) show us the software's name, but they do show that it's some sort of "voice stress" analyzer (perhaps one of Nemesysco's "Layered Voice Analysis" products?):
Curious readers might want to take a look at Harry Hollien and James Harnsberger, "Evaluation of two voice stress analyzers", J. Acoust. Soc. Am. 124(4):2458, October 2008; and James Harnsberger, Harry Hollien, Camilo Martin, and Kevin Hollien, "Stress and Deception in Speech: Evaluating Layered Voice Analysis", Journal of Forensic Sciences 54(3), 2009. The second paper's abstract:
This study was designed to evaluate commonly used voice stress analyzers—in this case the layered voice analysis (LVA) system. The research protocol involved the use of a speech database containing materials recorded while highly controlled deception and stress levels were systematically varied. Subjects were 24 each males/females (age range 18–63 years) drawn from a diverse population. All held strong views about some issue; they were required to make intense contradictory statements while believing that they would be heard/seen by peers. The LVA system was then evaluated by means of a double blind study using two types of examiners: a pair of scientists trained and certified by the manufacturer in the proper use of the system and two highly experienced LVA instructors provided by this same firm. The results showed that the “true positive” (or hit) rates for all examiners averaged near chance (42–56%) for all conditions, types of materials (e.g., stress vs. unstressed, truth vs. deception), and examiners (scientists vs. manufacturers). Most importantly, the false positive rate was very high, ranging from 40% to 65%. Sensitivity statistics confirmed that the LVA system operated at about chance levels in the detection of truth, deception, and the presence of high and low vocal stress states. [emphasis added]
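The abstract's numbers can be sanity-checked with standard signal-detection arithmetic: with hit rates of 42–56% and false-positive rates of 40–65%, the sensitivity index d′ straddles zero, which is what "operating at chance" means. A minimal sketch (the rates are taken from the abstract; the formula d′ = z(hit rate) − z(false-alarm rate) is the textbook definition, not anything specific to this study):

```python
# Back-of-the-envelope check of the abstract's numbers: with hit rates
# near 50% and false-positive rates of 40-65%, the signal-detection
# sensitivity index d' comes out close to zero (chance performance).
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Best and worst combinations spanned by the reported ranges:
best = d_prime(0.56, 0.40)   # highest hit rate, lowest false-alarm rate
worst = d_prime(0.42, 0.65)  # lowest hit rate, highest false-alarm rate

print(f"d' best case:  {best:.2f}")   # roughly  0.40
print(f"d' worst case: {worst:.2f}")  # roughly -0.59
```

For comparison, a detector anyone would want to rely on shows d′ well above 1; a range that straddles zero is indistinguishable from coin-flipping.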
You might also take a look at the section on "Voice Stress Technologies" in Robert Pool, Field Evaluation in the Intelligence and Counterintelligence Context, National Research Council, 2009:
One of the earliest products was the Psychological Stress Evaluator from Dektor Corporation. […] Other voice stress technologies include the Digital Voice Stress Analyzer from the Baker Group, the Computer Voice Stress Analyzer from the National Institute for Truth Verification, the Lantern Pro from Diogenes, and the Vericator from Nemesysco.
Over the years, these technologies have been tested by various researchers in various ways, and Rubin described a 2009 review of these studies that was carried out by Sujeeta Bhatt and Susan Brandon of the Defense Intelligence Agency. After examining two dozen studies conducted over 30 years, the researchers concluded that the various voice stress technologies were performing, in general, at a level no better than chance — a person flipping a coin would be equally good at detecting deception.
Let me quote at length what I wrote about this general topic more than seven years ago — "Analyzing voice stress", 7/2/2004:
Yesterday's NYT had an article on voice stress analyzers. As a phonetician — someone who studies the physics and physiology of speech — I've been amazed by this work for almost three decades. What amazes me is that research (of a sort) and commerce (at a low level) and law-enforcement applications (here and there) keep on keepin' on, decade after decade, in the absence of any algorithmically well-defined, reproducible effect that an ordinary working speech researcher like me can go to the lab, implement and test.
Well, these days there's no need to go to the lab for this stuff — you just write and run some programs on your laptop. But that makes the whole thing all the more amazing, because after 50 years, it's still not clear what those programs should do. I'm not complaining that it's unclear whether the methods work — that's true too, but the real scandal is that it's still unclear what the methods are supposed to be.
Specifically, the laryngeal microtremors that these techniques depend on haven't ever been shown clearly to exist, as far as I know. No one has ever shown that if these microtremors exist, it's possible to measure them in the pitch of the voice, in a way that separates them from all the other phenomena that modulate the pitch at similar rates. And that's before we get to the question of how such undefined measurements might be related to truth-telling. Or not.
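To make concrete what a reproducible procedure would even look like: one could estimate an F0 (pitch) track, compute its modulation spectrum, and quantify energy in the claimed tremor band (roughly 8–12 Hz in the voice-stress literature). The sketch below does exactly that on synthetic data; it is purely an illustration of a testable measurement, not any vendor's method, which remains proprietary and undocumented:

```python
# A minimal, fully specified "microtremor" measurement: the fraction of
# modulation-spectrum energy of a pitch track that falls in a candidate
# tremor band. Illustration only, on synthetic data -- not any vendor's
# proprietary algorithm.
import numpy as np

def tremor_band_energy(f0_track, fs, band=(8.0, 12.0)):
    """Fraction of modulation-spectrum energy in `band` (Hz)."""
    x = f0_track - np.mean(f0_track)         # remove the mean F0
    spectrum = np.abs(np.fft.rfft(x)) ** 2   # modulation power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum() / spectrum.sum()

# Synthetic F0 track sampled at 100 Hz: a 120 Hz baseline, a slow
# intonational movement at 2 Hz, and a small 10 Hz "tremor" component.
fs = 100.0
t = np.arange(0, 2.0, 1.0 / fs)
f0 = 120 + 10 * np.sin(2 * np.pi * 2.0 * t) + 1.5 * np.sin(2 * np.pi * 10.0 * t)

print(f"energy fraction in 8-12 Hz band: {tremor_band_energy(f0, fs):.3f}")
```

Even on this clean synthetic track the ordinary intonational movement carries most of the modulation energy, which is a toy version of the separation problem described above; real F0 tracks add jitter, segmental perturbations, and tracking errors at similar rates.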
How can I make you see how amazing this is? Suppose that in 1957 some physiologist had hypothesized that cancer cells have different membrane potentials from normal cells — well, not different potentials, exactly, but a sort of a different mix of modulation frequencies in the variation of electrical potentials between the inside of the cell and the outside. And further suppose that some engineer cooked up a proprietary circuit to measure and display these alleged variations in "cellular stress" (to the eyes of a trained cellular stress expert, of course), and thereby to diagnose cancer, and started selling such devices to hospitals, and selling training courses in how to use them. And suppose that now, almost half a century later, there is still no documented, well-defined procedure for ordinary biomedical researchers to use to measure and quantify these alleged cell-membrane "tremors" — but companies are still making and selling devices using proprietary methods for diagnosing cancer by detecting "cellular stress" — computer systems now, of course — while well-intentioned hospital administrators and doctors are occasionally organizing little tests of the effectiveness of these devices. These tests sometimes work and sometimes don't, partly because the cellular stress displays need to be interpreted by trained experts, who are typically participating in a diagnostic team or at least given access to lots of other information about the patients being diagnosed.
This couldn't happen. If someone tried to sell cancer-detection devices on this basis, they'd get put in jail.
But as far as I can tell, this is essentially where we are with "voice stress analysis."
As far as I can tell, the situation has not changed since 2004, except that the software packages have niftier-looking user interfaces, and their developers and marketers use different packages of buzzwords. Thus the Nemesysco marketing literature talks about "wide range spectrum analysis and micro-changes in the speech waveform itself (not micro tremors!)".
But let me repeat, with emphasis, something that I wrote later in the same post:
I'm not prejudiced against "lie detector" technology — if there's a way to get some useful information by such techniques, I'm for it. I'm not even opposed to using the pretense that such technology exists to scare people into not lying, which seems to me to be its main application these days.
I'd be happy to participate in a fair test of whatever technology Mr. Ward was showing us, and I'd even be happy to help organize such a test. But pending some credible explanation of the algorithms used, and some credible test of their efficacy, color me skeptical.
A few other LL posts on related topics:
"Determining whether a lottery ticket will win, 99.999992849% of the time", 8/29/2004.
"KishKish BangBang", 1/17/2007.
"Industrial bullshitters censor linguists", 4/30/2009 (see especially the comment threads, e.g. here, here, here, here).
"Speech-based lie detection in Russia", 6/8/2011.