Industrial bullshitters censor linguists

« previous post | next post »

A bullshit lie detector company run by a charlatan has managed to semi-successfully censor a peer reviewed academic article. And I don't like it one bit. But first, some background, and then we'll get to the censorship stuff.

Five years ago I wrote a Language Log post entitled "BS conditional semantics and the Pinocchio effect" about the nonsense spouted by a lie detection company, Nemesysco. I was disturbed by the marketing literature of the company, which suggested a 98% success rate in detecting evil intent of airline passengers, and included crap like this:

The LVA uses a patented and unique technology to detect "Brain activity finger prints" using the voice as a "medium" to the brain and analyzes the complete emotional structure of your subject. Using wide range spectrum analysis and micro-changes in the speech waveform itself (not micro tremors!) we can learn about any anomaly in the brain activity, and furthermore, classify it accordingly. Stress ("fight or flight" paradigm) is only a small part of this emotional structure

The 98% figure, as I pointed out, and as Mark Liberman made even clearer in a follow-up post, is meaningless. There is no type of lie detector in existence whose performance can reasonably be compared to the performance of fingerprinting. It is meaningless to talk about someone's "complete emotional structure", and there is no interesting sense in which any current technology can analyze it. It is not the case that looking at speech will provide information about "any anomaly in the brain activity": at most it will tell you about some anomalies. Oh, the delicious irony: a lie detector company that engages in wanton deception.

So, ok, Nemesysco, as I said in my earlier post, is clearly trying to pull the wool over people's eyes. Disturbing, yes, but it doesn't follow from the fact that its marketing is wildly misleading that the company's technology is of no merit. However, we now know that the company's technology is, in fact, of no merit. How do we know? Because two phoneticians, Anders Eriksson and Francisco Lacerda, studied the company's technology, based largely on the original patent, and provided a thorough analysis in a 2007 article, Charlatanry in forensic speech science: A problem to be taken seriously, which appeared in the International Journal of Speech Language and the Law (IJSLL), vol 14.2 2007, 169–193, Equinox Publishing. Eriksson and Lacerda conclude, regarding the original technology on which Nemesysco's products are based, Layered Voice Analysis (LVA), that:

Any qualified speech scientist with some computer background can see at a glance, by consulting the documents, that the methods on which the program is based have no scientific validity.

OK, now for the censorship stuff. This is ugly. But complicated. As reported on the AAAS's Science website, Nemesysco's lawyers wrote to the publisher of IJSLL, and forced it to retract the Eriksson and Lacerda article, ceasing to distribute the article electronically. (See Brouhaha Over Controversial Forensic Technology: Journal Caves to Legal Threat, cached version here)

Nemesysco's point seems to have been that the article was unnecessarily personal. And here's the complication: it was indeed not necessary for the article to revolve as much as it did around the character of the founder of Nemesysco, Amir Liberman (still no relation!). The article clearly suggests Amir Liberman is a charlatan. Now, I'm convinced that Amir Liberman is in fact a charlatan. And I also think that the article is well-researched, and makes no unsubstantiated claims. Furthermore, and although this is unconventional in scientific journals, the article is greatly enhanced as regards its narrative structure by having a real live bad guy as the central character. But I'm not a lawyer, and can't evaluate whether any case against the journal would have had legal merit. So, I'm not going to debate the legal ins and outs. You can read around the web (e.g. on wikipedia) and, if you understand such things, decide for yourself. Whatever the legal issues, the possibility of such censorship is very disturbing.

The only thing about this affair that pleases me is that Nemesysco's action has without doubt brought more attention to the article, not less: you can find the original article online here. I imagine thousands of people have downloaded it. If enough of us do the same, Nemesysco's censorship will obviously have failed to achieve its main objective. Having been withdrawn by the publisher, the article has perhaps lost the (anyway illusory) aura of scholarly perfection that a published peer reviewed article carries. But it won't have disappeared. And maybe, as a result of all this attention, some potential purchaser of Nemesysco's products, be it an insurance company or a government body, will think to ask a linguist before spending the money of customers, stockholders or taxpayers on bullshit that gives speech technology a bad name. Or maybe, just maybe, companies like Nemesysco will be encouraged to stop bullshitting, and start presenting customers with enough information to perform a fair assessment of their products.

(Hat tip to Robin Cooper, and the Facebook group Support Francisco Lacerda and Anders Eriksson, from which I learned of the Nemesysco censorship.)



30 Comments

  1. Lance said,

    April 30, 2009 @ 2:15 am

    Fascinating to read; thanks for linking to it. It's clear, too, from lines like

    but the code is rather messy and not particularly well structured and we decided it would not be worth the time and effort to clean up the code in order to convert it into a running program

    that the authors aren't pulling any punches (and aren't overly concerned about their tone). I can certainly understand why that is—Language Log has used a similar tone when discussing the Crockus and other, similar instances of charlatanry—but I can also see why the Journal felt it best to remove the article.

    Then again, I'm not familiar with the IJSLL in general; I'd certainly be surprised to find the discussion of the moral implications of lying to customers and suspects in, say, Linguistic Inquiry, but perhaps that sort of discussion is the bread-and-butter of the Journal.

  2. Sili said,

    April 30, 2009 @ 2:33 am

    And maybe, as a result of all this attention, some potential purchaser of Nemesysco's products, be it an insurance company or a government body, will think to ask a linguist before spending the money of customers, stockholders or taxpayers on bullshit that gives speech technology a bad name. Or maybe, just maybe, companies like Nemesysco will be encouraged to stop bullshitting, and start presenting customers with enough information to perform a fair assessment of their products.

    I'm sorry, but what sorta Utopia do you live in?

  3. Fluxor said,

    April 30, 2009 @ 3:08 am

    Reminds me of the Fruit Machine, and probably just as useful.

  4. perceval said,

    April 30, 2009 @ 3:57 am

    Sili, we live in the kind of utopia where we can blog and retweet this.

  5. Rubrick said,

    April 30, 2009 @ 5:17 am

    Hey, what do you expect from a company whose name anagrams to YES, CON ME?

  6. David Eddyshaw said,

    April 30, 2009 @ 7:35 am

    Well done, both for flagging this up and for linking to the article. This is the sort of occasion where the internet is very valuable.

    This abuse of legal means by charlatans to prevent exposure is familiar in the world of healthcare, too.

    The beautifully named "Bogus Pipeline Effect" is obviously close kin to the Placebo effect and raises a very similar moral question, although in medicine it can take the form of the more challenging:

    "If I lie to the patient he is more likely to recover than if I tell him the truth. What should I do?"

    Perhaps patients will start bringing lie detectors to their consultations ….

  7. Mark Liberman said,

    April 30, 2009 @ 7:59 am

    Those who are interested in the technical aspects might want to take a look at Apparatus and methods for detecting emotions, US patent 6,638,217 B1, October 28, 2003. I'm especially fond of Fig. 1A:

    There is also a manuscript by Francisco Lacerda, "LVA-technology — A short analysis of a lie", which explains some of the reasons why the methods described in the 2003 patent are unlikely to produce a reliable picture of a speaker's emotional state. (However, there is also a 2007 patent by Amir Liberman — see below — which seems to be more relevant to the company's current product line.)

    It's to the credit of LVA's inventor, Amir Liberman, that the patent documents his proposed method in enough detail that it should be easy for anyone "skilled in the art" to implement it. Despite the fact that this is what patents are supposed to do, I haven't seen this for the microtremor-based "voice stress analyzers" that have been marketed for decades, as discussed here.

    And it's not at all to the credit of the community of speech scientists, myself included, that it was some four years after the issue date of the 2003 patent before someone (Eriksson and Lacerda 2007) first published a detailed critique. In particular, when I first wrote about the news reports of the Nemesysco products (back in August of 2004, 8 months after the patent was awarded), I didn't bother checking for patents. I was clearly wrong (though I guess I wouldn't have found the 2007 patent, which even in 2004 was probably more relevant to Nemesysco's claims).

    Since the method in the 2003 patent is very easy to implement, and there are a number of emotional speech and deceptive speech databases out there, it would be easy enough to evaluate its variation across speakers, its behavior on various sorts of signals from the same speaker, its immunity to noise, etc. Again, I feel that our community should feel somewhat ashamed of its failure to do so before now.

  8. marie-lucie said,

    April 30, 2009 @ 8:12 am

    … maybe, as a result of all this attention, some potential purchaser … , will think to ask a linguist …

    "Linguist" is not a legally protected designation such as "physician" or "lawyer" which gives the seal of approval to a person who has successfully undergone specific types of training. It is illegal for someone who has not graduated from an accredited medical school to call themselves a physician and practice as such, but anybody can call themselves a linguist, from a person who likes to learn languages (or is just reputed to speak more than one) to one who has taken a couple of courses in linguistics to one who has a PhD in the subject and is respected by colleagues in the field. So "we have consulted a linguist" does not mean anything unless the identity and credentials of the "linguist" are known.

    [(myl) Medical mystification aside, the same thing is really true about doctors. If you wanted to know (for example) about the sensitivity and specificity of real-time PCR tests for various strains of H1N1 influenza, you wouldn't get useful information from a randomly selected MD. In fact, you'd probably get the best information from a PhD researcher in a relevant subfield — and the quality of their answer would depend on their theoretical and practical knowledge of the technique, not their degree.

    The same is true in this case: the speech scientists and engineers who would have the best knowledge and skills to evaluate claims like those made by Nemesysco would have a mixture of backgrounds, at least in terms of the category of their degree or the name of the organization that they work for. Some would have degrees in electrical engineering or computer science, some would be linguists by training, some would be psychologists or physicists, and so on. ]

  9. Mark Liberman said,

    April 30, 2009 @ 8:59 am

    In my usual over-cautious way, I'd like to qualify David's statement that "we now know that the company's technology is, in fact, of no merit".

    What we know from the Eriksson and Lacerda paper, I think, is that the technology described in the founder's 2003 patent involves counting simple local properties of poorly-sampled speech waveforms, and is quite unlikely on general mathematical and scientific grounds to accomplish what the company claims that its devices can do.

    The company has not, as far as I know, provided any scientifically credible evidence that these techniques (or any other algorithms in its products) actually accomplish any of the things that it claims for them. (And there are two recent studies suggesting that their current system is no more accurate than a coin-flip at detecting deception in realistic settings, even when company-supplied operators are using it.)

    Eriksson and Lacerda did not themselves do a study showing that these techniques are completely useless, i.e. perform at chance in detecting deception or in classifying the emotional state of speakers, though they cite the final reports of two studies that did find performance near chance on some realistic stress and deception-detection tasks — see below.

    In such cases, the null hypothesis is that the proposed techniques don't work, and it should be the responsibility of the company selling the product to provide some evidence that they do. But the fact that we have no grounds for rejecting the null hypothesis doesn't mean that it has been proved to be true. And "has no merit" is a much stronger null hypothesis than "is not an effective lie detector".
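    A toy calculation makes the point concrete. Suppose (hypothetically — these numbers are illustrative, not from any study) a system made 55 correct deception judgments in 100 trials; an exact binomial test shows that this result is entirely consistent with coin-flipping, so the chance-level null survives:

    ```python
    from math import comb

    def binom_p_one_sided(hits, trials, p=0.5):
        """P(X >= hits) under Binomial(trials, p) -- the coin-flip null."""
        return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
                   for k in range(hits, trials + 1))

    # Hypothetical result: 55 correct calls out of 100 trials.
    p_value = binom_p_one_sided(55, 100)
    print(f"P(55 or more correct by chance) = {p_value:.3f}")  # about 0.18
    ```

    A p-value like that gives no grounds for rejecting the null — which, as noted above, is not at all the same thing as proving the null true.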

    Let me emphasize that I'm persuaded by Eriksson and Lacerda's argument that the methods described in the 2003 patent seem very unlikely to work as advertised, and that in absence of credible evidence to the contrary, "responsible authorities and institutions should not get involved in such practices". (And the two credible 2008 studies showing chance performance at detecting deception make this argument even stronger.)

    One additional caution — it seems that the systems that Nemesysco currently sells incorporate additional algorithms beyond those described in the 2003 patent (including at least the methods of Amir Liberman's 2007 patent). As a result, Eriksson and Lacerda's (rather convincing) debunking of the 2003 patent is not yet a complete account of why Nemesysco's products don't work (and whatever the algorithms are, it appears from two recent tests that the products indeed don't work).

  10. Mark Liberman said,

    April 30, 2009 @ 9:35 am

    As Ben Goldacre constantly reminds us, the situation in the biomedical area is not always better than this. Consider, for example, his history with Matthias Rath ("The doctor will sue you now", 4/9/2009).

    And the woo coefficient in forensic speech technology is by no means uniformly so high. Consider, for example, the Speaker Recognition Evaluation (SRE) program at the National Institute of Standards and Technology. (The Linguistic Data Consortium at the University of Pennsylvania, which hosts Language Log, provides the data for these evaluations.)

  11. gota said,

    April 30, 2009 @ 9:40 am

    The effect described in the last paragraph has a name; it is known on the internets as the Streisand effect: http://en.wikipedia.org/wiki/Streisand_effect

  12. bianca steele said,

    April 30, 2009 @ 9:53 am

    Technology like this should be given to corporate customer service reps to use on their coworkers, so they can tell whether they're being lied to even though they don't understand the technical details of what they're being told. If they determine a coworker is lying to them, they have a few options. They can go back to the customer and say, "You know these f—ing engineers. I know this isn't true, but here's what I'm supposed to say. I recommend you call my boss and complain" (not in so many words, of course). Or, they can tell their coworker's supervisor things aren't working out. Or, they can find someone else in the organization — or outside of it — who will tell them the truth. It would help a lot.

  13. Rob P. said,

    April 30, 2009 @ 10:03 am

    As M. Liberman points out, the patentee is required to explain the invention in such a way that one of ordinary skill may use the claimed invention (under US law this is a requirement, most other countries' patent systems have similar requirements). In view of this requirement, A. Liberman may want to be careful about how he characterizes the authors' methods: Liberman counters that Eriksson and Lacerda used information from only one of three patents and that they never used one of Nemesysco's systems itself. "This attack is being made by people who never saw our technology, never touched the equipment," he says.

    This seems to imply that the one patent on its own does not function as advertised without information included in the other two and/or information hidden in the software of the commercial product. If so, he has failed to enable the invention and the claims are invalid.

  14. Mark Liberman said,

    April 30, 2009 @ 10:19 am

    Rob P.: … [Amir] Liberman counters that Eriksson and Lacerda used information from only one of three patents and that they never used one of Nemesysco's systems itself.

    The only other U.S. patent awarded to Amir Liberman that I can find in a search at freepatentsonline is US Patent 7165033, "Apparatus and methods for detecting emotions in the human voice", published 1/16/2007, which features this heartful graphic:

    The methods described in this patent are certainly different from those described in U.S. patent 6,638,217. And it appears from the description that the "layered" part of Nemesysco's "layered voice analysis" refers to some aspects of this method, so that Amir Liberman has some justification for asserting that the Eriksson and Lacerda paper is wrong to associate his company's current product with the 2003 patent rather than (or at least exclusive of) the 2007 patent. (This doesn't mean that the 2007 patent is methodologically more promising — I'll comment on this in a later post.)

    I couldn't find any reference to any specific patents on the Nemesysco web site, but I'll keep looking for the third one. Overall, I would be less skeptical of Nemesysco if they made it easier to figure out what their system really does — the page on their site describing The LVA™ (Layered Voice Analysis) Technology is pretty much useless from this point of view.

    As for the charge that Eriksson and Lacerda "never used one of Nemesysco's systems itself", this is certainly true, but it should be irrelevant: if the method is to be used in security, law enforcement and other forensic applications, it should be documented well enough that independent experts can evaluate it. However, if some interested party will pay for one of their devices, I'd be happy to test it against relevant existing speech databases. [Update: not necessary, as Harry Hollien and James Harnsberger have already done a carefully-controlled double blind study — see below.]

  15. Mark Liberman said,

    April 30, 2009 @ 11:08 am

    There are two more damaging (to Nemesysco) reports in the literature.

    One is Harry Hollien and James Harnsberger, "Evaluation of two voice stress analyzers", J. Acoust. Soc. Am. 124(4):2458, October 2008. The abstract (emphasis added):

    The purpose of this study was to evaluate two commonly used voice stress analyzers: NITV's computer voice stress analyzer (CVSA) and Nemesysco's layered voice analysis (LVA) system. In both cases, a speech database was used, which contained materials recorded (1) in the laboratory, while highly controlled deceptive and shock induced stress levels were systematically varied and (2) during a field procedure. Subjects were 24 males and females (age range 18–63 years) drawn from a population representative of the United States. All held strong views on an issue and were required to make sharply derogatory statements about it. The systems were then evaluated in a double blind study using two sets of examiners: (1) two UF scientists trained/certified by the manufacturers and (2) either three experienced CVSA operators or two LVA instructors provided by the manufacturer(s). The results for both devices showed that the “true positive” (or hit) rates ranged from chance to somewhat higher levels—50% to 65%—for all conditions/types of materials (stressed-unstressed, truth, or deception). However, the false positive rate was just as high—often higher. Sensitivity statistics demonstrated that these systems operated at about chance levels. [Work supported by Counterintelligence Field Agency, DoD.]
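    The arithmetic behind "operated at about chance levels" is straightforward. With hypothetical counts in the spirit of the abstract's numbers — a hit rate around 60% matched by an equally high false positive rate — the detector carries no information at all:

    ```python
    # Hypothetical confusion matrix (illustrative, not the study's raw data):
    # 100 deceptive and 100 truthful samples, 60% of each flagged as deceptive.
    tp, fn = 60, 40   # deceptive samples: correctly flagged / missed
    fp, tn = 60, 40   # truthful samples: falsely flagged / correctly cleared

    sensitivity = tp / (tp + fn)                        # 0.60 -- the "hit" rate
    specificity = tn / (tn + fp)                        # 0.40
    balanced_accuracy = (sensitivity + specificity) / 2
    print(f"balanced accuracy = {balanced_accuracy:.2f}")  # 0.50: coin-flip
    ```

    When the false positive rate equals the hit rate, flagging everyone (or flagging no one) would perform exactly as well as the device.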

    The conclusions of Kelly Damphousse, "Voice Stress Analysis: Only 15 Percent of Lies About Drug Use Detected in Field Test", National Institute of Justice Journal, 2008, were similar — an Editor's Note for this article says that "The study found that the average accuracy rate of these programs in detecting deception regarding drug use was approximately 50 percent—about as accurate as flipping a coin."

    As far as I know, Nemesysco has not sued the Acoustical Society of America or the U.S. Department of Justice to try to get these publications withdrawn.

    Earlier (unpublished, but internet-accessible) reports of both of these experiments are in the bibliography of Eriksson and Lacerda's censored paper: Hollien and Harnsberger, Voice Stress Analyzer Instrumentation Evaluation, CIFA Contract FA 4814-04-0011; and Damphousse et al., Assessing the Validity of Voice Stress Analysis Tools in a Jail Setting, Report submitted to the U.S. Department of Justice, 2007.

    I'm no expert in the laws relating to fraud, but it seems to me that it would be appropriate for prosecutors to look into whether the marketing of these systems breaks those laws.

  16. Richard T said,

    April 30, 2009 @ 11:09 am

    This has been exhaustively analysed by ministry of truth on http://www.ministryoftruth.me.uk. The British government is buying the system and software to 'sort out benefit cheats' – don't even get started on it. Well worth a read

  17. bianca steele said,

    April 30, 2009 @ 11:48 am

    Mark:
    The most immediate problem with that study is that it was sponsored by the DoD. A significant number of people will hear that and conclude that the truth about the subject is restricted, thus that the information published is entirely worthless. They will then, if they believe they are entitled to possess the information themselves, or that they are required to take the information into account in their actions, feel compelled to rely on some category of "trusted persons" in order to get the answer to their questions. Another possibility is that they will search the report for clues, but this raises the question of why they think their ability to find apparent clues makes them the kind of person the government thinks ought to know the answers, among others.

  18. Mark Liberman said,

    April 30, 2009 @ 11:58 am

    Bianca Steele: The most immediate problem with that study is that it was sponsored by the DoD. A significant number of people will hear that and conclude that the truth about the subject is restricted, thus that the information published is entirely worthless.

    If so, then a significant number of people are fools. Of course, we knew that already.

  19. Mark P said,

    April 30, 2009 @ 12:14 pm

    Regarding Bianca Steele's comment, I think that the fact that the work was "supported by Counterintelligence Field Agency, DoD" makes these particular results more trustworthy. That agency would be very interested in a technique that could reliably tell when a person is lying. If they accept a report that says a particular machine doesn't work, it means they don't even think it's worth using as a way to intimidate people they question into confessing. I would be more suspicious if a DoD study had concluded that the machine works.

  20. Dan T. said,

    April 30, 2009 @ 12:39 pm

    What's actually needed for dealing with companies like this is a "bullshit detector", which would be distinct from a "lie detector" since bullshit is a distinct category from lies; lies are known by the speaker to be false, while bullshit is disseminated by people who simply don't care about its truth or falsity.

  21. J. W. Brewer said,

    April 30, 2009 @ 1:03 pm

    Who is the censor here? The complaining subject of the article or the publisher that caved rather than fighting for its authors (as well as for the reputation of its own pre-publication vetting process)? Some publishers develop reputations for fighting rather than caving in response to such threats (unless they become convinced that the particular article was, as a factual matter, wrong and indefensible), believing that such a reputation is a valuable asset, although perhaps developing and maintaining such a reputation requires financial resources that a publisher of a specialized academic journal (as opposed to a major newspaper or general-circulation magazine) is unlikely to have.

    Moreover, the risk/reward of fighting rather than caving may depend on where any lawsuit is likely to take place. Note that this particular journal is apparently published in London. English libel law is as notoriously plaintiff-friendly as US libel law is notoriously defendant-friendly. So scholars publishing work that they know is likely to piss off its potentially-litigious subject might wish to keep that in mind in deciding where to publish.

    Back in the '90's, I represented pro bono a sociology (I think it was) professor who had incurred the anger of his research subject, a somewhat controversial religious community (don't call them a "cult": you might get sued). We got the litigation against the professor and other critics of the group in the U.S. dismissed on technical grounds, with the group declining to appeal the dismissal, but following various threatening letters the entire first press run of a scholarly anthology being published out of the U.K. with a chapter about the group by my professor client was pulped or otherwise kept from the market, I believe to be replaced with a new press run absent the offending chapter.

  22. bianca steele said,

    April 30, 2009 @ 1:15 pm

    Mark P.:
    Certain technologies are considered "sensitive" as regards national security and are restricted in various ways. It might be impossible to export machinery making use of these technologies. Publications about them might be classified. Work on them might require clearances, which means projects that use them can be located only in the class of US firms and university labs that regularly do military work. And those are only the restrictions I know about. Then there are restrictions set by other national governments and by international bodies, the most famous probably being the IAEA.

    Then there are mere "secrets," some but not all of them protected by statute and regulation. Coca-Cola doesn't want you to know how they make their trademark sodas. The phone company doesn't want you to know how to make calls without paying for them, as was possible about forty years ago due to a flaw in their system as it existed at the time. It would be wise to distinguish between the two–but not always possible.

  23. Mark Liberman said,

    April 30, 2009 @ 1:29 pm

    J.W. Brewer: Some publishers develop reputations for fighting rather than caving in response to such threats (unless they become convinced that the particular article was, as a factual matter, wrong and indefensible), believing that such a reputation is a valuable asset, although perhaps developing and maintaining such a reputation requires financial resources that a publisher of a specialized academic journal (as opposed to a major newspaper or general-circulation magazine) is unlikely to have.

    From the 2/10/2009 AAAS ScienceNOW Daily News article:

    Janet Joyce, managing director at [the publisher] Equinox, declined to discuss the specifics of the case, but she says the journal–which is published biannually, has a circulation of less than 500, and employs no full-time staff–simply lacks the resources to put up a legal fight. The journal has agreed to publish a rebuttal letter from Lieberman [sic] and the company, but Joyce notes that "we didn't withdraw the article. It's still in print."

    The fact that Equinox is based in the U.K. certainly doesn't make their situation any easier.

    Also the Eriksson and Lacerda article used some phrases (e.g. "The ideas on which the products are based are simply complete nonsense") that would be normal in an editorial or a popular book, but are not common in scientific and technical journals. And there is one apparently false statement, on p. 180:

    Contrary to the claims of sophistication — 'The LVA software claims to be based on 8,000 mathematical algorithms applied to 129 voice frequencies' (Damphousse et al. 2007:15) — the LVA is a very simple program written in Visual Basic. The entire program code, published in the patent documents (Liberman 2003) comprises no more than 500 lines of code.

    I interpret the (admittedly odd) phrase quoted by Damphousse as referring to the frequency-domain analysis presented in Amir Liberman's 2007 patent, rather than the time-domain analysis in his 2003 patent.

    So apart from the small size of the journal, and the plaintiff-friendly character of British defamation law, Equinox may have been daunted by the fact that the paper uses somewhat intemperate language and contains at least one apparent factual error.

  24. mgh said,

    April 30, 2009 @ 5:30 pm

    There was an entertaining New Yorker article on this topic last summer.

    If you're able to access it, don't miss the entire first section about a company called BBN and their Avoke software, used in call centers to detect when a customer is getting angry — the account of the bong-smoking surfer dude's meltdown when he thinks he's on hold is very funny.

    But here's the part relevant to Nemesysco:

    There is a small market for voice-based lie detectors, which are becoming a popular tool in police stations around the country. Many are made by Nemesysco, an Israeli company, using a technique called “layered voice analysis” to analyze some hundred and thirty parameters in the voice to establish the speaker’s psychological state. The academic world is skeptical of voice-based lie detection, because Nemesysco will not release the algorithms on which its program is based; after all, they are proprietary. Layered voice analysis has failed in two independent tests. Nemesysco’s American distributor says that’s because the tests were poorly designed. (The company played Roger Clemens’s recent congressional testimony for me through its software, so that I could see for myself the Rocket’s stress levels leaping.) Nevertheless, according to the distributor more than a thousand copies of the software have been sold—at fourteen thousand five hundred dollars each—to law-enforcement agencies and, more recently, to insurance companies, which are using them in fraud detection.

    also, re: Rubrick, I think your anagram is missing an S

  25. Mark P said,

    April 30, 2009 @ 6:05 pm

    mgh, it doesn't surprise me that police departments would fall for this. After all, they have been using lie detectors for years. They also use profiling, which is the modern version of phrenology.

    [(myl) Actually, I think that this is unfair to the polygraph and to profiling. "Lie detection" based on expert interpretation of physiological parameters does work, at least in the sense that in tests like those that Hollien, Damphousse et al. ran, it produces results that are significantly better than flipping a coin. And one person's "profiling" is another person's "actuarial table".

    That's not to say that such techniques are not often abused. It's a good thing, in my opinion, that polygraph evidence is not admissible in court; and I'm not in favor of viewing actuarial statistics as valid "probable cause" in the legal sense.

    But arguments about these issues need to be based on the facts. If your reason for opposing polygraph evidence and profiling is the belief that they are completely invalid, in a statistical sense, then your arguments are built on sand and will be defeated by contact with reality.]

  26. Nathan Myers said,

    April 30, 2009 @ 8:43 pm

    Surely deceptive marketing verbiage, as published by Nemesysco, is itself properly a subject of linguistic study. Are there linguistic markers common to deceptive promotions that are vanishingly rare in legitimate materials?

    Surely there must be a market for deceptive-marketing detection software. Perhaps it could be installed as a plug-in in users' browsers, to lend a sickly cast to the backgrounds of suspect paragraphs. If it were to lend a sickly cast to the backgrounds of paragraphs promoting itself, would that mean it worked, or didn't?

  27. saif said,

    May 1, 2009 @ 8:50 am

    Here's a parallel exposé on the British Government's enthusiasm for Nemesysco. Worrying!

    http://www.ministryoftruth.me.uk/2009/03/18/purnells-lie-detector-how-it-actually-works/#comments

  28. Mark P said,

    May 1, 2009 @ 11:43 am

    MYL: As usual, I tend to hyperbole, but there are enough instances of misuse that I suspect every use. One well-known case of misuse of profiling is the FBI's identification of Richard Jewell as the Atlanta Olympic Park bomber. The FBI essentially said he was the bomber and asked the public to furnish evidence to support that claim. Of course he was innocent, but that must have been little comfort, since the actual bomber was not found until much later.

    As to polygraphs, there are a number of known cases in which spies, for example, repeatedly passed polygraph tests.

    I think these two techniques are directly comparable to the proposed use of mass IR scanning to identify people with swine flu. Such scanners have been shown in controlled tests to identify people with elevated temperatures fairly reliably. However, when they were used a few years ago during the SARS scare, they did not do so well. When the results were examined for their use in the Hong Kong airport (about 36 million people in the period) and in some Canadian airports (about 9 million people in the period), around 2600 people in total (something over 700 in Canada) were identified as having an elevated temperature. In Canada, the false positive rate (for elevated temperature, not for SARS) was greater than 90 percent. And although some people actually had elevated temperatures, not one case of SARS was identified. My conclusion here is the same as for polygraphs and profiling: it might work in principle, but it does not work in practice.

    Everybody wants a magic box to solve their problems, but magic boxes that actually work are hard to find.
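    [The failure mode Mark P describes is largely a base-rate effect, which a short Python sketch makes concrete. The sensitivity, false-alarm, and prevalence figures below are illustrative assumptions, not numbers from the actual SARS screening data:

    ```python
    # A scanner that catches 99% of real fevers (sensitivity) and
    # wrongly flags only 0.1% of healthy travelers can still produce
    # mostly false alarms when fevers are rare among those screened.
    sensitivity = 0.99     # assumed: P(flag | fever)
    false_alarm = 0.001    # assumed: P(flag | no fever)
    prevalence = 0.0001    # assumed: 1 in 10,000 travelers feverish
    travelers = 1_000_000

    true_pos = travelers * prevalence * sensitivity            # ~99
    false_pos = travelers * (1 - prevalence) * false_alarm     # ~1000
    ppv = true_pos / (true_pos + false_pos)
    print(f"share of flags that are real fevers: {ppv:.0%}")   # ~9%
    ```

    Under these assumptions, over 90 percent of the people flagged would be false positives, even though the scanner performs well on each individual reading — which is consistent with doing fine in controlled tests and poorly at an airport.]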

  29. Bob Ray said,

    May 5, 2009 @ 6:10 pm

    I can explain how a polygraph test can be correctly described as 98% accurate.

    1. Take 100 people.

    2. Make one of them steal something.

    3. Connect each of them to the polygraph and ask if he or she is the criminal.

    4. Designate one as the criminal on the basis of the polygraph test (or at random).

    5. On the basis of the designation, assign each person to one of two classes — criminal or innocent person.

    You're only wrong about two people: the actual criminal and the poor bastard you've wrongly designated as the criminal. You've correctly assigned 98% of your subjects.

    Et voilà!

    [(myl) Yes, you've exactly recapitulated the reasoning in the 2004 blog post that David linked to in the third paragraph of the post above. ]
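    [Bob Ray's accuracy-inflation trick is easy to check in a few lines of Python — a minimal sketch of the arithmetic in his numbered steps above:

    ```python
    # 100 subjects, exactly one actual criminal.
    # The polygraph designates exactly one person as the criminal;
    # everyone else is classed as innocent.
    subjects = 100

    # Worst case: the designated person is NOT the real criminal.
    # Then exactly two labels are wrong: the real criminal
    # (classed innocent) and the designee (classed criminal).
    mislabeled = 2

    accuracy = (subjects - mislabeled) / subjects
    print(f"classification accuracy: {accuracy:.0%}")  # 98%
    ```

    The headline figure is driven almost entirely by the 99 easy "innocent" calls, so per-subject accuracy says nothing about whether the device actually caught the thief.]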

  30. AE said,

    August 22, 2009 @ 11:13 am

    Mark Liberman said on April 30, 2009 at 1:29
    —————————-
    And there is one apparently false statement, on p. 180:

    'The LVA software claims to be based on 8,000 mathematical algorithms applied to 129 voice frequencies' (Damphousse et al. 2007:15)

    I interpret the (admittedly odd) phrase quoted by Damphousse as referring to the frequency-domain analysis presented in Amir Liberman's 2007 patent, rather than the time-domain analysis in his 2003 patent.
    —————————————————–

    No, this is not a correct interpretation. The 2007 patent is for the so-called “Love Detector” which, for obvious reasons, is not the one used in the Damphousse study. The quote is (as I remember it) a direct quote from an older version of Nemesysco’s own promotional material, once on the company's home pages but since removed. The 129 frequencies remain, however, now referred to as “129 emotional parameters”.
