LLM vs. a cat?
A bit of AI anti-hype — Sissi Cao, "Meta’s A.I. Chief Yann LeCun Explains Why a House Cat Is Smarter Than The Best A.I.", Observer 2/15/2024:
“The brain of a house cat has about 800 million neurons. You have to multiply that by 2,000 to get to the number of synapses, or the connections between neurons, which is the equivalent of the number of parameters in an LLM,” LeCun said, noting that the largest LLMs have about the same number of parameters as the number of synapses in a cat’s brain. For example, OpenAI’s GPT-3.5 model, which powers the free version of ChatGPT, has 175 billion parameters. The more advanced GPT-4 is said to run on eight language models, each with 220 billion parameters.
“So maybe we are at the size of a cat. But why aren’t those systems as smart as a cat?” LeCun asked. “A cat can remember, can understand the physical world, can plan complex actions, can do some level of reasoning—actually much better than the biggest LLMs. That tells you we are missing something conceptually big to get machines to be as intelligent as animals and humans.”
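The arithmetic behind "maybe we are at the size of a cat" can be checked in a few lines. This is just a sketch using the figures quoted in the article (800 million neurons, a 2,000× synapse multiplier, and the reported parameter counts for GPT-3.5 and GPT-4):

```python
# Rough comparison of cat-brain synapse count vs. LLM parameter counts,
# using the figures quoted in the article above.

CAT_NEURONS = 800_000_000        # ~800 million neurons in a house cat's brain
SYNAPSES_PER_NEURON = 2_000      # LeCun's multiplier

cat_synapses = CAT_NEURONS * SYNAPSES_PER_NEURON   # ~1.6 trillion synapses

gpt35_params = 175_000_000_000       # GPT-3.5, per the article
gpt4_params = 8 * 220_000_000_000    # 8 models x 220B each, per the article

print(f"cat synapses: {cat_synapses:,}")   # 1,600,000,000,000
print(f"GPT-3.5:      {gpt35_params:,}")   # 175,000,000,000
print(f"GPT-4 (8x):   {gpt4_params:,}")    # 1,760,000,000,000
print(f"GPT-4 / cat:  {gpt4_params / cat_synapses:.2f}")  # 1.10
```

So on these numbers GPT-3.5 sits an order of magnitude below a cat's synapse count, while the reported GPT-4 ensemble lands right around it, which is the sense in which "we are at the size of a cat."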
It's worth mentioning something that LeCun certainly knows, namely that a synapse, along with the cells it connects via their dendritic and axonal arborizations, is a much more complicated system than a single "parameter" connecting two abstract nodes in an LLM. It's not clear how much that complexity matters, but there are reasons to think that it probably does.
Anyhow, here's LeCun's Xeet, echoing the message:
Before we reach Human-Level AI (HLAI), we will have to reach Cat-Level & Dog-Level AI.
We are nowhere near that.
We are still missing something big.
LLM's linguistic abilities notwithstanding.
A house cat has way more common sense and understanding of the world than any LLM.— Yann LeCun (@ylecun) February 5, 2023
And the relevant section of the interview in Dubai:
Update — I should add, of course, that the word smart covers a lot of ground, and cats can't summarize (or hallucinate) the news, or cite (or invent) scholarly articles, or etc.
Pamela said,
February 20, 2024 @ 9:30 am
How many times have humans made the mistake of underestimating the intelligence of a house cat? Now it's existential.
Topher Cooper said,
February 20, 2024 @ 9:56 am
The limited range of cognitive capabilities of LLMs, however massive the amounts of training data — essentially, patterns to match against — they have, and the surprising degree to which they can be used to emulate not-at-all-obviously linguistic aspects of general intelligence, are things I have "preached" to people since the whole explosion of interest. But there is another "on the other hand" factor that moves the weight of evaluation in the direction of LLMs matching CB (Cat Brain) in intelligence:
Most of the neural network of a cat brain is not devoted to what the comparison is about, while any part of the LLM which is not devoted to the intelligent behavior (or support for, or support for the support of, the seeming intelligent behavior) is a failure of the immense amount of LLM training that was done. An LLM doesn't need to keep a heart beating, coordinate walking, handle proprioception, create the illusion of sensory simultaneity despite delays due to neural distance, etc.
Tim Rowe said,
February 20, 2024 @ 12:41 pm
Because it isn't embodied, so it can't knock ornaments off a shelf?
Y said,
February 20, 2024 @ 1:51 pm
All those neurons in a cat's brain do a lot more than the reasoning that an LLM does: motor control, vision, hearing…
Mark Liberman said,
February 20, 2024 @ 2:23 pm
@Y: "All those neurons in a cat's brain do a lot more than the reasoning that an LLM does: motor control, vision, hearing…"
The term "LLM" (= "large language model") has taken over the space, but what LeCun is really talking about is "deep nets" (or "deep learning" or "(pseudo-)neural nets" or etc.), which have long been used for image and audio analysis, and increasingly for robotic control applications. But capable robot housecats are still a ways off…
Jonathan Smith said,
February 20, 2024 @ 10:48 pm
"a House Cat Is Smarter Than The Best A.I."… ok I see "cat" is a vivid and arguably numerically apt (?) comparison, but surely "ant" or indeed "bacterium" would make a sufficiently aspirational target for the serious AGI crowd to the extent such still exists.
Mostly what Tim Rowe said. The thing is that deep learning tools are doing real and profitable (if often dangerous/evil) work, meaning that the difference between e.g. knowing what a traffic light is and mirroring human judgment wrt what constitutes "image of traffic light" is immaterial to (and it would seem to some degree actually unrecognized by) the developers of these tools.
KeithB said,
February 21, 2024 @ 9:24 am
I am pretty sure a cat would fail this one, too, but MS Copilot fails math:
I asked MS Copilot (GPT-4):
what are the last 30 digits of pi ? ;-)
Copilot:
The last 30 digits of pi are 3095295560. This was computed in 2022 by Timothy Mullican, who calculated 50 trillion digits of pi using a cloud computing service. The previous record was 31.4 trillion digits, set by Emma Haruka Iwao in 2019.
Terry Hunt said,
February 21, 2024 @ 1:39 pm
There's an implicit assumption being made that natural 'intelligence' (whatever it is) is correlated with the complexity and numbers of synaptic connections between brain cells – the "connectome."
However, from my reading of scientific news, some scientists are increasingly suggesting a possibility I was first told of verbally some 20 years ago: that the basis of 'thinking' and 'consciousness' (whatever those are) arises in the complex 3-D structure of microtubules within each brain (and other types of) cell, making the true "connectome" orders of magnitude more complex. Artificial LLMs and neural nets have some way to go before they match this.
The microtubules are so small that quantum effects could be playing a major role in whatever is going on.
Gregg said,
February 21, 2024 @ 10:04 pm
My cat learned how to not walk on my computer keyboard in just a day or two, and also learned that my wife was not quite as strict as I was about this. Pretty impressive. However, he is not smart enough to give me remarkably ridiculous answers to simple questions, something that ChatGPT does easily. Six of one, oranges of the other.
David Marjanović said,
February 23, 2024 @ 5:13 pm
That's Roger Penrose's old idea (older than 20 years), and it's going nowhere. Microtubuli are way too big (there's water in them!) and way too warm to maintain any quantum superpositions inside them; and they're not only not connected across cell membranes, they're not even capable of branching.
Terry Hunt said,
February 24, 2024 @ 10:19 am
@ David Marjanović – I think the communication I received (at Imperial College) was probably prompted by S. Hagan et al's 2002 paper "Quantum Computation in Brain Microtubules? Decoherence and Biological Feasibility". As far as I understand it (which is not far at all) the quantum effects were thought to be in smaller structural components aligned along the microtubules, not the whole of them.