"Artificial Intelligence and its evil twin, Darwinism"


In Daniel Dennett's 1995 book Darwin's Dangerous Idea: Evolution and the Meanings of Life, the chapter titled "Chomsky contra Darwin, Four Episodes" ends with this provocative sentence:

The hostility to Artificial Intelligence and its evil twin, Darwinism, lies just beneath the surface of much of the most influential work in recent twentieth-century philosophy.

What Dennett meant by "Artificial Intelligence" in 1995 was no doubt rather different from what people take the term to mean now. Still, the intended meaning of his aphorism remains intact and relevant.

You need to start with his distinction between "skyhooks" and "cranes", described here by Wikipedia. And then read about how he learned that Noam Chomsky rejected Darwinism as a form of epistemological empiricism, i.e. a "crane" that learns in the genome rather than the neurome:

In March 1978, I hosted a remarkable debate at Tufts, staged, appropriately, by the Society for Philosophy and Psychology. Nominally a panel discussion on the foundations and prospects of Artificial Intelligence, it turned into a tag-team rhetorical wrestling match between four heavyweight ideologues: Noam Chomsky and Jerry Fodor attacking AI, and Roger Schank and Terry Winograd defending it. Schank was working at the time on programs for natural language comprehension, and the critics focused on his scheme for representing (in a computer) the higgledy-piggledy collection of trivia we all know and somehow rely on when deciphering ordinary speech acts, allusive and truncated as they are. Chomsky and Fodor heaped scorn on this enterprise, but the grounds of their attack gradually shifted in the course of the match, for Schank is no slouch in the bully-baiting department, and he staunchly defended his research project. Their attack began as a straightforward, “first-principles” condemnation of conceptual error—Schank was on one fool’s errand or another—but it ended with a striking concession from Chomsky: it just might turn out, as Schank thought, that the human capacity to comprehend conversation (and, more generally, to think) was to be explained in terms of the interaction of hundreds or thousands of jerry-built gizmos, but that would be a shame, for then psychology would prove in the end not to be “interesting.” There were only two interesting possibilities, in Chomsky’s mind: psychology could turn out to be “like physics” — its regularities explainable as the consequences of a few deep, elegant, inexorable laws — or psychology could turn out to be utterly lacking in laws—in which case the only way to study or expound psychology would be the novelist’s way (and he much preferred Jane Austen to Roger Schank, if that were the enterprise).

A vigorous debate ensued among the panelists and audience, capped by an observation from Chomsky’s colleague at MIT Marvin Minsky: “I think only a humanities professor at MIT could be so oblivious to the third ‘interesting’ possibility: psychology could turn out to be like engineering.” Minsky had put his finger on it. There is something about the prospect of an engineering approach to the mind that is deeply repugnant to a certain sort of humanist, and it has little or nothing to do with a distaste for materialism or science. Chomsky was himself a scientist, and presumably a materialist (his “Cartesian” linguistics did not go that far!), but he would have no truck with engineering. It was somehow beneath the dignity of the mind to be a gadget or a collection of gadgets. Better the mind should turn out to be an impenetrable mystery, an inner sanctum for chaos, than that it should turn out to be the sort of entity that might yield its secrets to an engineering analysis!

Though I was struck at the time by Minsky’s observation about Chomsky, the message didn’t sink in. […]

That's the crux of the "evil twins" idea: maybe the mind is a collection of gadgets, evolved by learning in the genome, the neurome, and the culturome, and suitable for analysis by engineering techniques.

After touching on John Searle, Stephen Jay Gould, Steven Pinker, Herbert Spencer, McCulloch and Pitts, B.F. Skinner, Charles Babbage, Alan Turing, and others, Dennett zeroes in on Searle, ending the chapter with the "evil twins" sentence:

According to Searle, only artifacts made by genuine, conscious human artificers have real functions. Airplane wings are really for flying, but eagles’ wings are not. If one biologist says they are adaptations for flying and another says they are merely display racks for decorative feathers, there is no sense in which one biologist is closer to the truth. If, on the other hand, we ask the aeronautical engineers whether the airplane wings they designed are for keeping the plane aloft or for displaying the insignia of the airline, they can tell us a brute fact. So Searle ends up denying William Paley’s premise: according to Searle, nature does not consist of an unimaginable variety of functioning devices, exhibiting design. Only human artifacts have that honor, and only because (as Locke “showed” us) it takes a Mind to make something with a function!

Searle insists that human minds have “Original” Intentionality, a property unattainable in principle by any R-and-D process of building better and better algorithms. This is a pure expression of the belief in skyhooks: minds are original and inexplicable sources of design, not results of design. He defends this position more vividly than other philosophers, but he is not alone. The hostility to Artificial Intelligence and its evil twin, Darwinism, lies just beneath the surface of much of the most influential work in recent twentieth-century philosophy, as we shall see in the next chapter.

If you're interested, you should read the whole chapter, and indeed the whole book.

11 Comments

  1. Jerry Packard said,

    June 5, 2025 @ 11:07 am

    What the AI/Darwinism debate seems to have left out is the role of epigenetics. It entails the proposition that Darwinism is not solely algorithmic after all, but rather contains behavioral forces that transcend simple probabilistic evolution. So, in other words, since we now admit that the giraffe’s neck may get longer due to the giraffe’s behavior, the solely algorithmic Mendelian component is now affected by a form of intentionality that has leaked in from the mind of the giraffe.

  2. DJL said,

    June 5, 2025 @ 1:29 pm

    Classic passive-aggressive Dennett, that is

  3. J.W. Brewer said,

    June 5, 2025 @ 2:41 pm

    When I was an undergraduate linguistics major, Schank was a big name in the computer science department on campus, and when I took an artificial-intelligence-for-nonmajors CS class circa '85 (which I may have gotten credit toward the ling major for?) Schank was not the instructor but his ideas and approaches were very much ambient in the atmosphere. In hindsight, of course, Schank was a prominent figure in that whole dead-end where researchers working on getting computers to simulate fluency in natural language were definitely positively about three years away from the big breakthrough for maybe 30 years straight, but this time if you gave them a little more grant money it was definitely gonna work.

    I was interested at the time to learn that Schank's own Ph.D. was in linguistics (UT-Austin, 1969, sez the internet), although of course in those days computer science departments were full of tenured faculty whose doctorates were not in computer-science-as-such since doctorates in computer-science-as-such were such a recent development.

    With 40 years of hindsight it seems to me that Schank and Chomsky were probably both wrong about all sorts of important things and one shouldn't assume that the structure of this 1978 debate ensured that at least one participant had to be correct about something of any significance.

  4. Mark Liberman said,

    June 5, 2025 @ 4:40 pm

    @Jerry Packard : "What the AI/Darwinism debate seems to have left out is the role of epigenetics.":

    Epigenetics is just one more piece of the crane, from Dennett's point of view…

    Also, do you see AI and Darwinism as on opposite sides of the debate? Because Dennett clearly views them as allies, even if morally divergent.

  5. Rick Rubenstein said,

    June 5, 2025 @ 5:17 pm

    I'm hard pressed to think of anything whatsoever which is biological and is not well-described as "a collection of gadgets". It's gadgets all the way down.

    It's kind of odd that a person so enamored with "deep, elegant, inexorable laws" would apply his intellect to linguistics and politics, of all things.

  6. David Marjanović said,

    June 5, 2025 @ 6:54 pm

    epigenetics […] we now admit that the giraffe’s neck may get longer due to the giraffe’s behavior

    No.

    DNA methylation, which regulates the expression of the genes in question up or down, is heritable for a generation or three, but not longer. It's just not that stable. Lamarckism is dead and stays dead.

    Classic passive-aggressive Dennett, that is

    Please elaborate.

  7. Jon said,

    June 6, 2025 @ 1:42 am

    I second the suggestion to read the whole book. Fascinating stuff. In particular, Dennett's argument against Roger Penrose's claim that the workings of the human mind are somehow dependent on quantum effects in microtubules.

  8. Victor Manfredi said,

    June 6, 2025 @ 3:40 am

    "The idea that mental events are not among the data of science was the premise that led to behaviorism, and Dennett's position is a sophisticated descendant of that view… Of course, we would believe that anything that functioned physically and behaviorally like a grown human being was conscious, but the belief would be a conclusion from the evidence, rather than just a belief in the evidence. It is only Dennett's Procustean conception of scientific objectivity that leads him to think otherwise." Thomas Nagel, "Other Minds" (OUP 1995, 88f).

  9. PMB said,

    June 6, 2025 @ 5:35 am

    Darwinian evolution and AI, and many other phenomena that involve complex interactions (the "Free Market" for example), are indeed related. They are all chaotic systems, and lead to emergent patterns. The point about emergence is that it is completely unpredictable. No one could have predicted DNA from simple biochemistry, no one could predict intelligence (whatever that is) from DNA, no one could have predicted the kingfisher given archaea. It would possibly have been easier to predict the South Sea Bubble given coinage economies, but as far as I know no one did. And no one can predict what AI will do, except to say it probably won't be predicated on human welfare, not even that of trillionaires.

  10. Viseguy said,

    June 6, 2025 @ 11:30 am

    I second the suggestion to read the whole book.

    I'm neither a linguist nor a philosopher, but I took a flier and downloaded the book to my Kindle. Two chapters in, the blurbs are vindicated: it is indeed highly readable–a page-turner, in fact–and fits neatly into the category of summer reading, at least for non-philosophers.

  11. David Marjanović said,

    June 6, 2025 @ 5:34 pm

    no one can predict what AI will do

    It's going to produce a few gigabytes more bullshit every week.
