The mind of artificial intelligence
Sean Carroll's Preposterous Universe Podcast #230
Raphaël Millière on How Artificial Intelligence Thinks, March 20, 2023 / Philosophy, Technology, Thinking
Includes a transcript of the two-hour podcast.
Welcome to another episode of Sean Carroll's Mindscape. Today, we're joined by Raphaël Millière, a philosopher and cognitive scientist at Columbia University. We'll be exploring the fascinating topic of how artificial intelligence thinks and processes information. As AI becomes increasingly prevalent in our daily lives, it's important to understand the mechanisms behind its decision-making processes. What are the algorithms and models that underpin AI, and how do they differ from human thought processes? How do machines learn from data, and what are the limitations of this learning? These are just some of the questions we'll be exploring in this episode. Raphaël will be sharing insights from his work in cognitive science, and discussing the latest developments in this rapidly evolving field. So join us as we dive into the mind of artificial intelligence and explore how it thinks.
[The above introduction was artificially generated by ChatGPT.]
Comments
Maria Comninou (March 20, 2023 at 2:41 pm)
I am always surprised at the ease with which humans (mostly male in these fields) are willing to attribute consciousness to algorithms (AI) but deny it to non-human animals!
Jim Wade (March 22, 2023 at 4:13 am)
The question of whether an AI machine will ever be able to think is, to me, the most important question to be addressed. This question is the hard problem of consciousness. The inner life of humans is a reality that is unexplainable. Self-aware consciousness is what leads to understanding the meaning of experience. Computers do not understand the meaning of anything; it is human minds, interpreting the findings of the algorithms, that give those findings meaning. Computers do not have AHA moments. Computers are very valuable tools that can vastly expand the capabilities and achievements of human beings, but understanding is the purview of self-aware consciousness.
Selected readings
- "ChatGPT-4: threat or boon to the Great Firewall?" (3/21/23) — with a bibliography of previous posts on this subject
- "Heart-mind" (9/29/14)
[h.t. Bill Benzon]
Gregory Kusnick said,
March 23, 2023 @ 12:04 am
Carroll's podcast is hands-down my favorite. I highly recommend it (though I have not actually listened to this episode yet).
Bill Benzon said,
March 23, 2023 @ 8:10 am
Here's a passage late in the dialog that relates to a point Syd Lamb made decades ago:
This inferential competence follows from Lamb's point that the meaning of a word is a function of its relationships with other words. Stated that way, it seems pretty much the same as Firth's distributional semantics. And maybe it is, but maybe it explains distributional semantics.
As many of you know, Lamb is a first-generation computational linguist, from the old, old days when it was called machine translation. He favored a linguistic approach derived from Hjelmslev, stratificational grammar. He also favored a notation in the form of a relational network – he was one of the first to do so, and has told me he got the idea from work Halliday was doing in 1963-64. We see Lamb's notation in more or less full form in Outline of Stratificational Grammar, 1966.
So, Lamb's point exists in the context of an explicitly drawn relational network. I don't know offhand whether the point was there in the 1966 book. It was told to me in the mid-late 1970s by Dave Hays, with whom I was studying. It's there explicitly in Lamb's 1999 Pathways of the Brain.
In any event, by the time GPT-3 first appeared in 2020, I'd been thinking about the success of neural networks in machine translation and had decided it was time I tried to convince myself that these results were not some mystical machine voodoo but an intelligible manifestation of however it is that language works. I made Lamb's point the center of my thinking in GPT-3: Waterloo or Rubicon? Here be Dragons, pp. 15-19. That account substitutes tap-dancing for technical detail, but it has served its purpose. The technical detail will have to be supplied by those who have mathematical skills that I lack.
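To make Lamb's point concrete, here is a minimal sketch of distributional semantics: words that occur in similar contexts end up with similar co-occurrence vectors. The toy corpus, window size, and function names below are invented for illustration, and this is nothing like the actual machinery inside GPT-3.

```python
# Toy illustration of a word's meaning as a function of its relations to
# other words: build co-occurrence vectors from a tiny corpus and compare
# them with cosine similarity. (Corpus and names are invented.)
from collections import Counter
import math

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
    "a dog chased a cat",
]

def cooccurrence_vectors(sentences, window=2):
    """Map each word to a Counter of words seen within `window` positions of it."""
    vectors = {}
    for sentence in sentences:
        words = sentence.split()
        for i, w in enumerate(words):
            ctx = words[max(0, i - window):i] + words[i + 1:i + 1 + window]
            vectors.setdefault(w, Counter()).update(ctx)
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

vecs = cooccurrence_vectors(corpus)
# "cat" and "dog" appear in similar contexts, so their vectors are similar:
print(cosine(vecs["cat"], vecs["dog"]))  # high (~0.97 on this corpus)
print(cosine(vecs["cat"], vecs["sat"]))  # lower (~0.75)
```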
Gene Hill said,
March 23, 2023 @ 11:26 am
Brings to mind one of the earliest adages of computer science: "Garbage in, garbage out."
Grant Castillou said,
March 23, 2023 @ 2:39 pm
It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with human-adult-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, perhaps by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
john said,
March 24, 2023 @ 8:53 am
I enjoyed the point that saying that LLMs “just” predict the next word is like saying that humans “just” maximize their reproductive fitness. Yes, sure, but to get good at that requires some astonishing capabilities.
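To unpack what "predict the next word" means concretely, here is a deliberately tiny sketch: a bigram model that turns raw counts into a probability distribution over the next word. The corpus is invented, and real LLMs share only the task, not the mechanism; they condition on far longer contexts with learned representations rather than lookup tables.

```python
# Deliberately tiny sketch of next-word prediction: a bigram model.
# An LLM performs the same task (a probability distribution over the next
# token) with vastly more context and capacity.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_distribution(prev):
    """Estimate P(next word | previous word) from the counts."""
    counts = follows[prev]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))
# {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
```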
Vampyricon said,
April 25, 2023 @ 2:38 pm
Maria's comment is weird to me since I'm fairly certain Sean does believe in animal consciousness.