{"id":27770,"date":"2016-08-29T07:59:37","date_gmt":"2016-08-29T12:59:37","guid":{"rendered":"http:\/\/languagelog.ldc.upenn.edu\/nll\/?p=27770"},"modified":"2016-08-29T10:40:34","modified_gmt":"2016-08-29T15:40:34","slug":"the-new-ai-is-so-lifelike-its-prejudiced","status":"publish","type":"post","link":"https:\/\/languagelog.ldc.upenn.edu\/nll\/?p=27770","title":{"rendered":"The new AI is so lifelike it's prejudiced!"},"content":{"rendered":"<p>Arvind Narayanan, \"<a href=\"https:\/\/freedom-to-tinker.com\/blog\/randomwalker\/language-necessarily-contains-human-biases-and-so-will-machines-trained-on-language-corpora\/\" target=\"_blank\">Language necessarily contains human biases, and so will machines trained on language corpora<\/a>\", <em>Freedom to Tinker<\/em> 8\/24\/2016:<\/p>\n<p style=\"padding-left: 30px;\"><span style=\"color: #000080;\">We show empirically that natural language necessarily contains human biases, and the paradigm of training machine learning on language corpora means that AI will inevitably imbibe these biases as well.<\/span><\/p>\n<p><!--more-->This\u00a0all started in the 1960s, with <a href=\"https:\/\/en.wikipedia.org\/wiki\/Gerard_Salton\" target=\"_blank\">Gerald Salton<\/a> and the \"<a href=\"https:\/\/en.wikipedia.org\/wiki\/Vector_space_model\" target=\"_blank\">vector space model<\/a>\". The idea was to represent a document as a vector of word (or \"term\") counts &#8212; which like any vector, represents a point in a multi-dimensional space. Then the similarity between two documents can be calculated by correlation-like methods, basically as some simple function of the inner product of the two term vectors. And natural-language queries are also a sort of document, though usually a rather short one, so you can use this general approach for document retrieval by looking for documents that are (vector-space) similar to the query. It helps if you weight the document vectors by <a href=\"http:\/\/nlp.stanford.edu\/IR-book\/html\/htmledition\/inverse-document-frequency-1.html\" target=\"_blank\">inverse document\u00a0frequency<\/a>, and maybe use thesaurus-based term extension, and\u00a0relevance feedback, and &#8230;<\/p>\n<p>A vocabulary of 100,000 wordforms results in a 100,000-dimensional vector, but there's no conceptual problem with that, and sparse-vector coding techniques\u00a0means that there's no practical problem either. Except in the 1960s, digital \"documents\" were basically stacks of punched cards, and the market for digital document retrieval was therefore pretty small.\u00a0Also, those were the days when people thought that artificial intelligence was applied logic &#8212; one of Marvin Minsky's students once told me that Minsky warned him \"If you're counting higher than one, you're doing it wrong\". Still, Salton's students (like <a href=\"https:\/\/en.wikipedia.org\/wiki\/Mike_Lesk\" target=\"_blank\">Mike Lesk<\/a> and <a href=\"https:\/\/scholar.google.com\/citations?user=0lic4McAAAAJ&amp;hl=en\" target=\"_blank\">Donna Harman<\/a>) kept the flame alive.<\/p>\n<p>Then came the world-wide web, and the Google guys' development\u00a0of \"page rank\", which extends a\u00a0vector-space model using\u00a0the eigenanalysis\u00a0of the citation graph of the web, and the growth of the idea that artificial intelligence might be applied statistics. Also out there\u00a0was\u00a0the idea of using\u00a0various dimensionality-reduction techniques to cut the order of those document vectors down from hundreds of thousands to hundreds.<\/p>\n<p>The first example was \"<a href=\"https:\/\/en.wikipedia.org\/wiki\/Latent_semantic_analysis\" target=\"_blank\">latent semantic analysis<\/a>\", based on the singular value decomposition of a term-by-document matrix. The initial idea was to make document storage and comparison more efficient &#8212; but this turned out not to be necessary. Another benefit was to create a sort of soft thesaurus, so that a query might fetch documents that don't feature the queried words, but do contain lots of words that often co-occur with those words. But LSA, interesting as it was, never really became a big thing.<\/p>\n<p>Then people began to explore small vector-space models based on other ways of doing dimensionality reduction on other kinds of word-cooccurrence statistics, especially looking at relationships among\u00a0nearby words. It didn't escape notice that this puts\u00a0into effect the\u00a0old idea of \"<a href=\"https:\/\/en.wikipedia.org\/wiki\/Distributional_semantics\" target=\"_blank\">distributional semantics<\/a>\", especially associated with <a href=\"https:\/\/en.wikipedia.org\/wiki\/Zellig_Harris\" target=\"_blank\">Zellig Harris<\/a> and <a href=\"https:\/\/en.wikipedia.org\/wiki\/John_Rupert_Firth\" target=\"_blank\">John Firth<\/a>, summarized in Firth's dictum that \"you shall know a word by the company it keeps\". Some examples are <a href=\"https:\/\/en.wikipedia.org\/wiki\/Word2vec\" target=\"_blank\">word2vec<\/a>, <a href=\"http:\/\/www.cis.upenn.edu\/~ungar\/eigenwords\/\" target=\"_blank\">eigenwords<\/a>, and\u00a0<a href=\"http:\/\/nlp.stanford.edu\/projects\/glove\/\" target=\"_blank\">GloVe<\/a>. These techniques let you produce approximate solutions to what might seem like hard problems, like London:England::Paris:?, using nothing but vector-space geometry. And it's easy to experiment with these techniques, as <a href=\"http:\/\/bookworm.benschmidt.org\/posts\/2015-10-30-rejecting-the-gender-binary.html\" target=\"_blank\">here<\/a> and <a href=\"http:\/\/www.languagejones.com\/blog-1\/2015\/11\/1\/word-embedding\" target=\"_blank\">here<\/a>.<\/p>\n<p>Continuing with Arvind Narayanan's blog post:<\/p>\n<p style=\"padding-left: 30px;\"><span style=\"color: #000080;\">Specifically, we look at \u201cword embeddings\u201d, a state-of-the-art language representation used in machine learning. Each word is mapped to a point in a 300-dimensional vector space so that semantically similar words map to nearby points. <\/span><\/p>\n<p style=\"padding-left: 30px;\"><span style=\"color: #000080;\">We show that a wide variety of results from psychology on human bias can be replicated using nothing but these word embeddings. We primarily look at the Implicit Association Test (IAT), a widely used and accepted test of implicit bias. The IAT asks subjects to pair concepts together (e.g., white\/black-sounding names with pleasant or unpleasant words) and measures reaction times as an indicator of bias. In place of reaction times, we use the semantic closeness between pairs of words. In short, we were able to replicate every single result that we tested, with high effect sizes and low p-values.<\/span><\/p>\n<p style=\"padding-left: 30px;\">[&#8230;]<\/p>\n<p style=\"padding-left: 30px;\"><span style=\"color: #000080;\">We show that information about the real world is recoverable from word embeddings to a striking degree. The figure below shows that for 50 occupation words (doctor, engineer, \u2026), we can accurately predict the percentage of U.S. workers in that occupation who are women using nothing but the semantic closeness of the occupation word to feminine words!<\/span><\/p>\n<p style=\"padding-left: 30px;\"><a href=\"http:\/\/languagelog.ldc.upenn.edu\/myl\/bias-gender-occupation-association.png\"><img decoding=\"async\" title=\"Click to embiggen\" src=\"http:\/\/languagelog.ldc.upenn.edu\/myl\/bias-gender-occupation-association.png\" width=\"490\" \/><\/a><\/p>\n<p>The paper is Aylin Caliskan-Islam , Joanna J. Bryson, and Arvind Narayanan, \"<a href=\"http:\/\/randomwalker.info\/publications\/language-bias.pdf\" target=\"_blank\">Semantics derived automatically from language corpora necessarily contain human biases<\/a>\". It uses the pre-trained GloVe embeddings available\u00a0<a href=\"http:\/\/nlp.stanford.edu\/projects\/glove\/\" target=\"_blank\">here<\/a>, and you can read it to be convinced that many sorts of bias\u00a0reliably emerge from the patterns of word co-occurrence in such material.<\/p>\n<p>I'm pretty sure that Zellig Harris would not have found this surprising &#8212; one of his original motivations for developing\u00a0distributional methods was to find the latent political content of texts by completely objective means.<\/p>\n<p>And for a few differently-embedded words on what those marvelous word-embedding vectors leave out, see the discussion of the \"cookbook problem\" in <a href=\"http:\/\/www.ling.upenn.edu\/courses\/ling001\/syntax1.html\" target=\"_blank\">these lecture notes<\/a>.<\/p>\n<p>Update &#8212; I should note that I'm strongly in favor of word-embedding models and similar things, but I feel that people should understand what they are and how they work (or don't work), rather than seeing them as a magic algorithmic black box that does magical algorithmic things.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Arvind Narayanan, \"Language necessarily contains human biases, and so will machines trained on language corpora\", Freedom to Tinker 8\/24\/2016: We show empirically that natural language necessarily contains human biases, and the paradigm of training machine learning on language corpora means that AI will inevitably imbibe these biases as well.<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_exactmetrics_skip_tracking":false,"_exactmetrics_sitenote_active":false,"_exactmetrics_sitenote_note":"","_exactmetrics_sitenote_category":0,"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[60],"tags":[],"class_list":["post-27770","post","type-post","status-publish","format-standard","hentry","category-computational-linguistics"],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/languagelog.ldc.upenn.edu\/nll\/index.php?rest_route=\/wp\/v2\/posts\/27770","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/languagelog.ldc.upenn.edu\/nll\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/languagelog.ldc.upenn.edu\/nll\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/languagelog.ldc.upenn.edu\/nll\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/languagelog.ldc.upenn.edu\/nll\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=27770"}],"version-history":[{"count":11,"href":"https:\/\/languagelog.ldc.upenn.edu\/nll\/index.php?rest_route=\/wp\/v2\/posts\/27770\/revisions"}],"predecessor-version":[{"id":27786,"href":"https:\/\/languagelog.ldc.upenn.edu\/nll\/index.php?rest_route=\/wp\/v2\/posts\/27770\/revisions\/27786"}],"wp:attachment":[{"href":"https:\/\/languagelog.ldc.upenn.edu\/nll\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=27770"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/languagelog.ldc.upenn.edu\/nll\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=27770"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/languagelog.ldc.upenn.edu\/nll\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=27770"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}