"AI" == "vehicle"?
Back in March, the AAAI ("Association for the Advancement of Artificial Intelligence") published an "AAAI Presidential Panel Report on the Future of AI Research":
The AAAI 2025 presidential panel on the future of AI research aims to help all AI stakeholders navigate the recent significant transformations in AI capabilities, as well as AI research methodologies, environments, and communities. It includes 17 chapters, each covering one topic related to AI research, and sketching its history, current trends and open challenges. The study has been conducted by 25 AI researchers and supported by 15 additional contributors and 475 respondents to a community survey.
You can read the whole thing here — and you should, if you're interested in the topic.
The chapter on "AI Perception vs. Reality", written by Rodney Brooks, asks "How should we challenge exaggerated claims about AI’s capabilities and set realistic expectations?" It sets the stage with an especially relevant lexicographical point:
One of the problems is that AI is actually a wide-reaching term that can be used in many different ways. But now in common parlance it is used as if it refers to a single thing. In their 2024 book [5] Narayanan and Kapoor likened it to the language of transport having only one noun, ‘vehicle’, say, to refer to bicycles, skate boards, nuclear submarines, rockets, automobiles, 18 wheeled trucks, container ships, etc. It is impossible to say almost anything about ‘vehicles’ and their capabilities in those circumstances, as anything one says will be true for only a small fraction of all ‘vehicles’. This lack of distinction compounds the problem of hype, as particular statements get overgeneralized.
(The cited book is AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.)
I'm used to making this point by noting that "AI" now just means something like "complicated computer program", but the vehicle analogy is better and clearer.
The Brooks chapter starts with this three-point summary:
- Over the last 70 years, against a background of constant delivery of new and important technologies, many AI innovations have generated excessive hype.
- Like other technologies, these hype trends have followed the general Gartner Hype Cycle characterization.
- The current Generative AI Hype Cycle is the first introduction to AI for perhaps the majority of people in the world, and they do not have the tools to gauge the validity of many claims.
Here's a picture of the "Gartner Hype Cycle", from the Wikipedia article:
A more elaborately annotated graph is here.
Wikipedia explains that "The hype cycle framework was introduced in 1995 by Gartner analyst Jackie Fenn to provide a graphical and conceptual presentation of the maturity of emerging technologies through five phases."
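For readers who want a concrete picture of those five phases, here's a minimal sketch in Python/matplotlib that draws a stylized hype-cycle curve. The curve shape is entirely made up for illustration, and the phase positions are rough guesses; only the five phase names come from the framework itself.

```python
import numpy as np
import matplotlib.pyplot as plt

# Stylized, made-up curve: a sharp early spike of hype, a trough,
# then a gradual rise toward a plateau. Not based on any real data.
t = np.linspace(0, 10, 500)
peak = np.exp(-((t - 1.5) ** 2) / 0.5)        # early spike of inflated expectations
plateau = 0.5 / (1 + np.exp(-(t - 6.0)))      # slow climb to productivity
expectations = peak + plateau

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(t, expectations, lw=2)
ax.set_xlabel("Time")
ax.set_ylabel("Expectations")
ax.set_title("Stylized Gartner Hype Cycle (illustrative only)")

# Approximate positions of the five named phases along the curve.
phases = [
    (0.5, "Technology\nTrigger"),
    (1.5, "Peak of Inflated\nExpectations"),
    (3.2, "Trough of\nDisillusionment"),
    (5.5, "Slope of\nEnlightenment"),
    (8.5, "Plateau of\nProductivity"),
]
for x, label in phases:
    y = np.interp(x, t, expectations)
    ax.annotate(label, xy=(x, y), xytext=(x, y + 0.15),
                ha="center", fontsize=8,
                arrowprops=dict(arrowstyle="-", lw=0.5))

ax.set_ylim(0, 1.6)
plt.tight_layout()
plt.show()
```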
Jackie Fenn doesn't have a Wikipedia page — a gap someone should fix! — but her LinkedIn page provides relevant details.