Lombroso and Lavater, reborn as fake AI

Drew Harwell, "A face-scanning algorithm increasingly decides whether you deserve the job", WaPo 10/22/2019:

An artificial intelligence hiring system has become a powerful gatekeeper for some of America’s most prominent employers, reshaping how companies assess their workforce — and how prospective employees prove their worth.

Designed by the recruiting-technology firm HireVue, the system uses candidates’ computer or cellphone cameras to analyze their facial movements, word choice and speaking voice before ranking them against other applicants based on an automatically generated “employability” score.

HireVue’s “AI-driven assessments” have become so pervasive in some industries, including hospitality and finance, that universities make special efforts to train students on how to look and speak for best results. More than 100 employers now use the system, including Hilton, Unilever and Goldman Sachs, and more than a million job seekers have been analyzed.

Let's start by noting that this system's algorithms are secret, undocumented, and unverified — which means that they probably have no predictive value whatsoever. There would be reasons for skepticism even if the system had allegedly been trained on a large volume of interview data paired with job-performance evaluations. But as far as I can tell from a scan of the company's website, no such training has ever taken place, so HireVue doesn't even rise to the level of pseudo-science — it's apparently just fake AI woo-woo.

The background for this stuff seems to be the practice of "Asynchronous Video Interviews" (AVI) — surveyed here — in which automatically recorded answers are scored after the fact by human evaluators. I imagine that the AI version is trained to imitate those human evaluations, though again I haven't been able to find any documentation of the process, or of its value as a prediction of job performance. Please let me know if you find anything.
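For concreteness, here's a minimal sketch of what that sort of imitation training might look like. Everything in it is invented for illustration — the features, the data, and the model choice — and nothing is taken from HireVue's actual pipeline:

```python
# Hypothetical sketch of "AVI score imitation" (not HireVue's actual
# pipeline; every feature, number, and model choice here is invented):
# fit a regression model to reproduce human evaluators' interview ratings.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend we extracted per-answer features (speech rate, pitch variance,
# smile frequency, word-choice statistics, ...) for 1000 recorded answers.
n_answers, n_features = 1000, 20
X = rng.normal(size=(n_answers, n_features))

# Human evaluators' "employability" ratings: partly driven by the features,
# partly noise. Note that these are ratings of the interview itself, not
# measurements of later job performance.
weights = rng.normal(size=n_features)
human_scores = X @ weights + rng.normal(scale=2.0, size=n_answers)

X_train, X_test, y_train, y_test = train_test_split(
    X, human_scores, test_size=0.2, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, y_train)

# The model can get quite good at predicting what the human raters would
# say -- which by itself says nothing about whether those ratings predict
# anything about performance on the job.
r, _ = pearsonr(model.predict(X_test), y_test)
print(f"correlation with held-out human ratings: r = {r:.2f}")
```

The point of the sketch is the validation target: a system trained this way is at best an imitation of human raters, and so inherits whatever is wrong with their ratings.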

From Charles Darwin's autobiography:

On returning home from my short geological tour in North Wales, I found a letter from Henslow, informing me that Captain Fitz-Roy was willing to give up part of his own cabin to any young man who would volunteer to go with him without pay as naturalist to the Voyage of the "Beagle". I have given, as I believe, in my MS. Journal an account of all the circumstances which then occurred; I will here only say that I was instantly eager to accept the offer, but my father strongly objected, adding the words, fortunate for me, "If you can find any man of common sense who advises you to go I will give my consent." So I wrote that evening and refused the offer. On the next morning I went to Maer to be ready for September 1st, and, whilst out shooting, my uncle (Josiah Wedgwood.) sent for me, offering to drive me over to Shrewsbury and talk with my father, as my uncle thought it would be wise in me to accept the offer. My father always maintained that he was one of the most sensible men in the world, and he at once consented in the kindest manner. I had been rather extravagant at Cambridge, and to console my father, said, "that I should be deuced clever to spend more than my allowance whilst on board the 'Beagle';" but he answered with a smile, "But they tell me you are very clever."

Next day I started for Cambridge to see Henslow, and thence to London to see Fitz-Roy, and all was soon arranged. Afterwards, on becoming very intimate with Fitz-Roy, I heard that I had run a very narrow risk of being rejected, on account of the shape of my nose! He was an ardent disciple of Lavater, and was convinced that he could judge of a man's character by the outline of his features; and he doubted whether any one with my nose could possess sufficient energy and determination for the voyage. But I think he was afterwards well satisfied that my nose had spoken falsely.

At least Lavater and Lombroso gave illustrative examples for their physiognomical nonsense.

 



21 Comments

  1. KeithB said,

    October 22, 2019 @ 4:13 pm

    They can throw some graphology in there and get a two-fer.

  2. Andrew Usher said,

    October 22, 2019 @ 5:39 pm

    While this assessment is not wrong, it's probably not correct to call it 'fake AI'. Assuming this company is not an outright fraud (which is unlikely), they're using real AI, even if its performance is bogus. Indeed, it's scarier if the company actually believes in it than if it didn't.

    As your comparison at the end indicates, this problem isn't new. It isn't really caused by AI, but by _human_ biases in the hiring process, searching for the non-existent method of hiring only the best people. Many techniques have been used, and their only commonality, besides being non-transparent to the job seeker, is that they increase the bias against hiring anyone who differs from 'normal' in whatever way the technique perceives. If this one poses any new threat, it is that it's even less transparent and concentrates power in a _single_ company and its computer algorithms. Whether the technique 'works' in improving the average suitability of those hired is not really important, nor can it be measured in any scientific manner.

    It is not necessarily the case that companies using this technique really believe in it, either. Besides the fact that they apparently can save time and money compared to their previous methods, like other 'management fads' it's more a matter of wanting to follow the bandwagon than of evaluating the thing oneself. In the matter of hiring there's the additional concern that if others are doing it and we aren't, we will get the applicants that the others rejected; no one wants to risk that, even if the technique actually has no effect.

    Fundamentally, humans are not rational at judging strangers. No wonder – for most of our species' history we rarely had to. The premiss of HireVue is that AI will be more rational. But (apart from other reservations) we can't accept that until the technique has received a real test, since from experience we should know that corporations normally lie to sell their products when they can get away with it – and I don't believe they're telling the whole truth.

    If we do want to increase fairness in hiring – and we should, given that it's pretty clearly a social good – the only real way is to use the law to regulate hiring methods. Compare the polygraph (almost always illegal in the US) with drug testing (almost always legal). Note that anti-discrimination laws (even if you believe they over-reach) certainly have had an effect in the intended direction – but the more subtle discrimination in hiring is even more insidious, because less obvious. At the extreme, the government could even (for some jobs) mandate 'random hiring', as the federal gov't used to for most of its own employees: hire on solely objective, public criteria and/or choose randomly from a pool meeting the same. Before dismissing that as absurd, remember that secret hiring techniques are zero-sum; companies can't improve the _total_ labor pool that way.

    Academics cannot really be blamed for not understanding some important things about how 'real companies' (including the administrative part of their own school!) work, because these things are so rarely discussed sensibly. Economists in particular love to ignore such 'details', as most understand.

    The opposite trend in 'hiring' practice is evident in another well-known place: professional sports. The obvious difference is that individual 'performance' there is objective and not at all secret. No doubt sports teams are using computers to help evaluate players – but only on the same objective qualities that a human might look at, not on anything like this! (Draw a conclusion from that …)

    k_over_hbarc at yahoo dot com

  3. chris said,

    October 22, 2019 @ 8:21 pm

    "employability" score.

    If enough employers use the same score in making hiring decisions then it *really is* an employability score! Regardless of how arbitrary it is. (Well, assuming it at least correlates well with re-measurement of the same person.)
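
    (To make that caveat concrete: the minimal sanity check would be test-retest reliability, i.e. score the same applicants twice and correlate the two sets of scores. A sketch with made-up numbers:)

    ```python
    # Minimal test-retest reliability check, with made-up scores: if the same
    # applicants, re-recorded, don't get similar "employability" scores, the
    # number measures nothing stable about the person being scored.
    from scipy.stats import pearsonr

    first_take  = [72, 55, 88, 61, 79, 43, 90, 66]   # hypothetical scores
    second_take = [70, 58, 85, 59, 81, 47, 88, 64]   # same people, re-recorded

    r, p = pearsonr(first_take, second_take)
    print(f"test-retest reliability: r = {r:.2f} (p = {p:.3f})")
    ```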

  4. Garrett Wollman said,

    October 22, 2019 @ 9:46 pm

    There really is a fundamental bias — seen in school admissions as well as hiring — to believe that there is some "objective" complete merit ordering over all humans. There isn't, and systems like this are doomed to failure, but only after causing great harm to many people.

  5. Andrew Usher said,

    October 22, 2019 @ 11:02 pm

    That's right. However, the hiring bias – as you note, having some parallel in college admissions offices (some colleges more than others, and I think we know which) – is not only to believe in such a universal merit function, but to assign unreasonably great importance to determining it. Thus, they become receptive to anything that purports to help them do so, the less testable the better; I mean, you wouldn't want to be able to be proven wrong in hiring decisions, right? (Again, the same with other 'management fads'.)

    Even though there is no single merit function of the sort hiring managers seem to believe in, there are, and I don't believe you were denying it, areas where there is some merit function that we can measure fairly well and objectively. And there should be nothing wrong with applying that where possible, as long as the distinction is kept in mind. That's especially notable now because, with AI systems like this one, 'determined by a computer' can no longer be taken to mean 'objective' to any degree, in the normal sense.

    chris:
    Yes, but that's really a definition of the word 'employability' and not a fact about the hiring decisions. So it's not something I would consider interesting compared to the points I actually made, which were more than semantic.

  6. Philip Taylor said,

    October 23, 2019 @ 3:55 am

    Andrew — "Whether the technique 'works' in improving the average suitability of those hired is not really important". I would respectfully disagree. If it does work (and the take-up by organisations such as Hilton, Unilever and Goldman Sachs suggests (to me, at least) that it might), then I join with those who believe that it may well be a step forward in accelerating and improving the recruitment process. When snake oil reliably cures, it is no longer snake oil but as-yet-unexplained pharmacy.

  7. MattF said,

    October 23, 2019 @ 8:27 am

    And, bear in mind that evaluations of the results of the rankings are done by people who have every reason to be biased. It's hard to do that sort of evaluation correctly, and these folks aren't even trying.

  8. stephen said,

    October 23, 2019 @ 9:55 am

    They could test the quality of the system by using it on people who have already been hired, people who have worked there for years.

    They could use it on actors and politicians who have been in the public eye for years.

  9. David L said,

    October 23, 2019 @ 10:08 am

    There's a mildly critical story on this in today's WaPo. Companies use the system because it saves them a lot of time and therefore money. There's no evidence of its effectiveness in any independently testable way. What I found most alarming is that students and other job-seekers are trying to figure out how to get positive results when no-one, not even the company making the system, seems to be able or willing to say what behaviors lead to a good or bad result.

    It's like trying to ask questions of the oracle at Delphi in such a way as to get a favorable answer.

  10. David L said,

    October 23, 2019 @ 10:49 am

    Oopsie, I just realized that the post was based on the WaPo story, which I only saw this morning…

  11. Benjamin E. Orsatti said,

    October 23, 2019 @ 1:31 pm

    Oh, I expect we'll find out "what behaviors lead to a good or bad result" once the first couple employment discrimination class-action lawsuits start rolling in.

    For example, we might find out that the AI algorithm tends to deduct points for big noses, or foreign accents, or visible handicaps…

  12. Trogluddite said,

    October 23, 2019 @ 2:54 pm

    @Benjamin E. Orsatti
    …and not only "visible" handicaps. I am autistic, and although I do not have any intellectual impairments, this affects all three of the criteria mentioned in the main post: facial movement, word choice, and speaking voice (a list which could easily be extended).

    My differences in these areas certainly *are* perceived by the people around me; however, they are generally interpreted as if the behaviours were being produced by a non-autistic person, and so the intents and motivations behind them are commonly misinterpreted. For example, my flat affect and prosody are often taken to indicate aloofness or lack of enthusiasm, which are likely to be considered undesirable character traits in a potential employee.

    Since most people's analysis of my behaviour is largely sub-conscious and instinctive rather than rational, my disability is thus effectively "invisible" to them, and even full disclosure generally makes very little difference. My ability to adjust behaviours to compensate is severely limited by the perceptual differences which cause them and the considerable cognitive load imposed by "acting" the part of a non-autistic person (one might say that such attempts often fall into the "uncanny valley").

    During a face-to-face interview, I may at least have the opportunity to correct such misunderstandings, and to explore whether my autistic traits really would impair my ability to perform adequately as an employee. This is very unlikely to be possible should an unverified computer algorithm be employed to screen applicants.

  13. Benjamin E. Orsatti said,

    October 23, 2019 @ 3:12 pm

    What Trogluddite said is well-taken, from a legal perspective. To quote an article found on Westlaw:

    […]
    This was the problem underlying Amazon's much-publicized decision to abandon an AI hiring tool after beta-testing it. The tool relied on data compiled from ten years of Amazon's past hiring practices to identify the traits of successful software developers and other technical employees. Because the workforce had been male-dominated during that time, the tool screened out or assigned lower rankings to candidates with traits not found in that data pool, such as attendance at a women's college, participation on a women's sports team, or membership in a female sorority. The data essentially had past bias and discrimination "baked into" the results.

    The Amazon example highlights the importance of using human professionals to continually monitor their AI results and modify the algorithms and data sets if they discover discriminatory results.
    […]
    [M]ost discrimination claims lack direct evidence and instead rely on circumstantial evidence that is analyzed under the McDonnell Douglas burden-shifting analysis. Under that framework, once an employee demonstrates a prima facie case of discrimination, the employer must articulate a legitimate nondiscriminatory reason for the challenged employment action. Employers relying on an AI-dictated or AI-informed decision may be hard-pressed to meet this burden.

    The problem is that the "black box" of AI may make it more difficult for employers to articulate a "legitimate nondiscriminatory" reason for a decision because they generally do not know (and often cannot know) how or why the AI tool did what it did. It is unclear whether the courts will find that an employer's decision to use AI, however laudable its goals in doing so, constitutes a "legitimate nondiscriminatory reason" for an employment action if the employer cannot explain the underlying decision path.
    […]
    Certain AI-powered screening and recruitment tools may have a discriminatory impact on individuals with a disability. For example, AI tools that analyze an individual's speech pattern in a recorded interview may negatively "rate" individuals with a speech impediment or hearing impairment who are otherwise well-qualified to perform the essential job functions. Perhaps recognizing this issue, Illinois enacted a law requiring employers to provide notice to applicants and get their consent before conducting a video interview to be analyzed by AI tools (see Box, Illinois Artificial Intelligence Video Act).

    Similar problems may arise with tools that analyze an interviewee's facial expressions. For example, certain facial patterns may correlate to individuals with genetic diseases or other disabilities, but are unrelated to the individual's ability to perform the essential job functions.

    Employers need to monitor the processes and results of AI tools to ensure that they are not eliminating potential candidates who need an accommodation to perform the essential job functions. For example, an algorithm that correlates gym membership with successful candidates may screen out disabled individuals who cannot work out at a gym, but can otherwise perform the essential job functions either with or without a reasonable accommodation.

    — Artificial Intelligence (AI) in the Workplace, Practical Law Practice Note w-018-7465

    All right, linguists — that's the law, now yinz go out and fix it!

  14. Andrew Usher said,

    October 23, 2019 @ 6:38 pm

    I intended no analysis of the implications under current law. What Trogluddite said is correct in essence, and matches what I said: people that are different in any perceptible way will lose, whether or not the difference is a recognised disability. They can (and may be forced to by the courts) build in protection for legally-protected classes or disabilities, but they can't protect everyone, as that would nullify the whole system. Of course the same already occurs, but this AI looks likely only to make it worse, and it's hard to see how it could be otherwise.

    I forgot to mention, but now must, that laws requiring employers to inform applicants about the AI use are worthless. People desiring the job will automatically agree to whatever is proposed, just as they now do to whatever the procedure is. I doubt there is a single example of notice laws alone having any effect – and that's also true (or nearly so) in consumer protection, where it should matter more (because consumers have free choice among competitors, while job seekers practically do not).

    Philip Taylor:
    You might have a case if there were any actual demonstration that it is working. None is given and one would think there'd be reason to disclose any that existed. What I meant is that it's possible it's having some effect (even if unmeasurable), but that doesn't outweigh the other points: any algorithm-based hiring is unfair and zero-sum, and this kind is especially bad because of its concentration of power at one company and total lack of accountability – and we should all know that unaccountable concentrations of power are a very bad thing.

    Other points have already been made: none of these people has any interest in an objective test, to whatever extent one is possible. Yet they believe it works – a contradiction? No, it's a regrettably common human behavior pattern, and I think the OP was trying to make just that point.

  15. Philip Taylor said,

    October 24, 2019 @ 3:29 am

    Andrew — Is not the fact that Messrs Hilton, Unilever and Goldman Sachs are willing to pay for and use the system an indication that it is working, at least in their eyes ? None are corporate babes-in-the-wood (far from it, in fact), and I would be very surprised if they were willing to invest valuable shareholders' capital in a system that brought them no benefits.

  16. Andrew Usher said,

    October 24, 2019 @ 5:39 pm

    You have altogether too much faith in corporations! Surely they, or at least the managers behind the implementation, must believe it will work; but the human capacity for self-deception, which is highly amplified by the corporate-management structure's rampant bullshit, means that their belief has essentially no evidential value. Without any facts we must not assign credibility to a 'magic black box' such as this.

    The fact that a corporation has survived a long time says almost nothing about its perspicacity now.

  17. Philip Taylor said,

    October 25, 2019 @ 4:32 am

    I agree with all the points you make, Andrew, but unless the system can be shewn not to work, then I for one am willing to give it the benefit of the doubt ("innocent until proven guilty"). And I would far sooner trust an AI system not to be prejudiced or discriminatory than any human interview panel !

  18. Trogluddite said,

    October 25, 2019 @ 3:56 pm

    @Philip Taylor
    Human beings design these systems, decide what data should be used to "educate" them, and determine which distinguishing criteria the system should be "rewarded" for identifying.

    In this case, it is most likely that the system would be trained to identify correlations between applicants' video interviews and similar training videos which have been scored for "employability" by human arbiters. There isn't any other way, as the system cannot determine for itself from first principles what the behaviour of a successful candidate should look like. Thus, any systemic biases shown by the human arbiters will in some way be encoded into the AI algorithm, such that there is no reason to believe that it would be any less prejudiced than they were.
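
    (A toy demonstration of that point, with entirely synthetic data and an invented "trait" standing in for something like flat prosody, so nothing here describes any real system: a model trained to imitate biased arbiter scores learns the same bias.)

    ```python
    # Toy demonstration (entirely synthetic data; the "trait" is an invented
    # stand-in for, e.g., flat prosody): a model trained to imitate biased
    # human ratings reproduces the bias.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(42)
    n = 5000

    competence = rng.normal(size=n)        # actually relevant to the job
    trait = rng.integers(0, 2, size=n)     # irrelevant to job performance

    # Arbiters' "employability" scores: mostly competence, plus a systematic
    # penalty for the irrelevant trait.
    scores = competence - 0.8 * trait + rng.normal(scale=0.5, size=n)

    model = LinearRegression().fit(np.column_stack([competence, trait]), scores)

    # The trained model faithfully encodes the arbiters' penalty (about -0.8),
    # with no way of knowing that the trait is irrelevant.
    print(f"learned weight on irrelevant trait: {model.coef_[1]:+.2f}")
    ```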

    The assumption that a computer system's disinterest in its output is a guarantor of fairness is rarely made explicit, but it seems to me to be a subtext of many articles that I've read about the wonders of AI systems. I believe this to be a dangerous fallacy which, if accepted, licenses developers and users of such systems to sidestep accountability. Such a system could just as easily be trained to apply the physiognomical nonsense of Lavater and Lombroso; it has no independent means by which to determine that their hypotheses are counter-factual, nor to determine whether the training corpus is truly representative of the population to which it will be applied.

    Biases need not be intentional. Of the cases where systemic discrimination has been demonstrated in AI systems, I am aware of none where anyone besides sensationalist conspiracy theorists has alleged that this was intentional on the part of the developers. Thus, we cannot be confident in the impartiality of the AI system even if we have complete confidence in the impartiality of the developers.

    Hence, I believe that employing the precautionary principle would be prudent. AI systems know nothing of guilt, innocence, or unintended consequences. We cannot discount that current users deem the system in question effective precisely because it recommends candidates who conform to existing systemic biases.

  19. Andrew Usher said,

    October 25, 2019 @ 7:06 pm

    In this case, I agree with all the points Trogluddite expressed in his last comment. I'd repeat that, as he stated and I stated more briefly, these AI systems are fundamentally different from the ways in which computers were used before, and that this difference is enough to warrant reversing the burden of proof (shifting it to the developers and their company).

    The assumption that data must be scientifically objective because it comes from a computer was never the whole truth, but with AI systems it's totally worthless and harmful. If anyone can't understand this difference, he probably shouldn't be commenting on the matter.

    Further – and this is no disrespect in itself – Philip is almost certainly old enough that he doesn't need to worry about the effect this system and its successors may have on his future employment. Trogluddite clearly does, and that's at least a tie-breaking factor.

    Last, I must clarify that _even if_ this could be proven to have benefits in selecting job candidates, that would not necessarily outweigh its objectionability in deciding whether it should be banned. I mentioned in my first contribution the polygraph and drug testing, both of which clearly 'work' in this sense but are unjustified (excluding possibly drug testing justified on safety grounds); the fact that only one is generally banned in the US is a historical contingency.

  20. Dave Lewis said,

    October 28, 2019 @ 1:18 pm

    I'd like to push back a little on some of the more reflexive criticisms here. Yes, HireVue is trying to do something very hard. Yes, they're probably doing it badly. Maybe their clients are wasting their money or, worse, just buying some technological cover for biased hiring practices.

    But their website at least expresses honest interest in attacking the bias and efficacy issues. And they claim to be following processes that, frankly, are better than the processes that most companies rolling out AI applications these days follow:

    https://www.hirevue.com/why-hirevue/ethical-ai

    So I'm inclined to keep a bit of an open mind.

  21. Andrew Usher said,

    October 28, 2019 @ 10:44 pm

    Of course they would _say_ that – in large part as attempted protection from lawsuits – and they may even believe it, but no amount of good intentions ever saves an idea that's as fundamentally wrong as this may be.

    This is not about having an open mind – I do, in the sense that I'd be willing to look at evidence that the system is an improvement. This is about, rather, if they should get the benefit of the doubt. It seems we would agree that they should not.
