"What makes an AI system an agent?"
And what are the consequences of the growing population of AI agents?
In "Agentic culture", I observed that today's "AI agents" have the same features that made "Agent Based Models", 50 years ago, a way to model the emergence and evolution of culture. And I expressed surprise that (almost) none of the concerns about AI impact have taken account of this obvious fact.
There was a little push-back in the comments, for example the claim that "There may come a time when AI is autonomous, reflective and has motives, but that is a long, long way off." Which misses the point, given the entirely unintelligent nature of old-fashioned ABM systems.
Antonio Gulli from Google has recently posted Agentic Design Patterns, which offers some useful (and detailed) descriptions of the state of the agentic art, along with example code.
The section on "What makes an AI system an Agent?" sets the stage:
In simple terms, an AI agent is a system designed to perceive its environment and take actions to achieve a specific goal. It's an evolution from a standard Large Language Model (LLM), enhanced with the abilities to plan, use tools, and interact with its surroundings. Think of an Agentic AI as a smart assistant that learns on the job. It follows a simple, five-step loop to get things done (see Fig.1):
- Get the Mission: You give it a goal, like "organize my schedule."
- Scan the Scene: It gathers all the necessary information—reading emails, checking calendars, and accessing contacts—to understand what's happening.
- Think It Through: It devises a plan of action by considering the optimal approach to achieve the goal.
- Take Action: It executes the plan by sending invitations, scheduling meetings, and updating your calendar.
- Learn and Get Better: It observes successful outcomes and adapts accordingly. For example, if a meeting is rescheduled, the system learns from this event to enhance its future performance.
At that point, Gulli notes that "Agents are becoming increasingly popular at a stunning pace".
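Reduced to code, that five-step loop is not much more than the sketch below. The Tool wrapper, the plan_with_llm stub, and the toy calendar are hypothetical placeholders, not any particular framework's API:

```python
# A minimal sketch of the five-step loop described above. The Tool class and
# plan_with_llm stub are hypothetical placeholders, not any real framework API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    observe: Callable[[], str]   # "scan the scene"
    act: Callable[[str], str]    # "take action"

def plan_with_llm(goal: str, observations: dict) -> list[tuple[str, str]]:
    """Stand-in for an LLM call that turns a goal plus observations into
    an ordered list of (tool_name, instruction) steps."""
    return [("calendar", f"block time for: {goal}")]

def agent_loop(goal: str, tools: dict[str, Tool], memory: list) -> list[str]:
    # 1. Get the mission: the goal arrives from the user.
    # 2. Scan the scene: gather context from every available tool.
    observations = {name: tool.observe() for name, tool in tools.items()}
    # 3. Think it through: ask the "reasoning core" for a plan.
    plan = plan_with_llm(goal, observations)
    # 4. Take action: run each planned step with the named tool.
    results = [tools[name].act(instruction) for name, instruction in plan]
    # 5. Learn and get better: keep outcomes so later runs can adapt.
    memory.append({"goal": goal, "plan": plan, "results": results})
    return results

# Toy usage with a fake calendar tool.
calendar = Tool(observe=lambda: "Tuesday 10-11 is free",
                act=lambda instruction: f"done: {instruction}")
history: list = []
print(agent_loop("organize my schedule", {"calendar": calendar}, history))
```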
And the chapter on "Inter-Agent Communication" explains:
Individual AI agents often face limitations when tackling complex, multifaceted problems, even with advanced capabilities. To overcome this, Inter-Agent Communication (A2A) enables diverse AI agents, potentially built with different frameworks, to collaborate effectively. This collaboration involves seamless coordination, task delegation, and information exchange.
Google's A2A protocol is an open standard designed to facilitate this universal communication. This chapter will explore A2A, its practical applications, and its implementation within the Google ADK.
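For concreteness, here's a rough sketch of what one agent "collaborating" with another might look like: the first agent fetches the second one's published capability card, then delegates a task to it. The endpoint paths, field names, and the scheduler.example.com agent are illustrative guesses, not a transcription of the A2A spec or the ADK:

```python
# Illustrative only: endpoint paths, card fields, and the remote agent URL
# are approximations of the A2A idea, not the actual protocol definitions.
import json
import urllib.request

SCHEDULER_URL = "https://scheduler.example.com"  # hypothetical remote agent

# Discovery: read the remote agent's published capability card.
with urllib.request.urlopen(f"{SCHEDULER_URL}/.well-known/agent.json") as resp:
    card = json.load(resp)
print(card.get("name"), [skill.get("id") for skill in card.get("skills", [])])

# Delegation: one agent handing a sub-task to another.
task = {
    "message": {
        "role": "user",
        "parts": [{"text": "Find a 30-minute slot next week for a project review."}],
    }
}
request = urllib.request.Request(
    f"{SCHEDULER_URL}/tasks",  # illustrative endpoint
    data=json.dumps(task).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as resp:
    print(json.load(resp))
```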
We'll see how seamless and effective those agentic collaborations turn out to be.
One obvious question: whose interests will determine what counts as a "successful" outcome? The various human and institutional participants may have quite different ideas about this. And the AI agents will certainly develop their own (artificial analog of) interests, goals, and preferences, as Gulli's sketch tells us.
And again, these agentic interactions will foster emergent cultures, whose alignment with the goals of human individuals and groups is worth more thought than it's gotten so far. (Except in dystopian novels and movies…)
bks said,
September 4, 2025 @ 5:33 am
Still unsure what an "AI Agent" is. Back in the Jurassic I added a feature to an inventory control system that allowed for automated ordering if the level of stock in a particular store (part of a chain) fell below a level set by the store manager. Was that an AI agent?
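The whole thing amounted to something like this (illustrative names, not the original code):

```python
# Illustrative reconstruction of that reorder rule; names are hypothetical.
def check_and_reorder(store, catalog, place_order):
    for item in catalog:
        threshold = store.reorder_level(item)   # set by the store manager
        if store.stock_level(item) < threshold:
            place_order(store, item, quantity=store.reorder_quantity(item))
```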
When Isa Fulford, research lead for OpenAI's ChatGPT agent, was demonstrating AI agency, she had the system order cupcakes:
https://www.wired.com/story/openai-chatgpt-agent-launch/
There was no time in my life when an AI could "organize my schedule" in any meaningful way as it involved other people's schedules, some of whom had veto power over my choices. Ad hoc veto power.
Mark Liberman said,
September 4, 2025 @ 6:08 am
@bks "Back in the Jurassic I added a feature to an inventory control system that allowed for automated ordering if the level of stock in a particular store (part of a chain) fell below a level set by the store manager. Was that an AI agent?":
"Agentic" computer systems, in that sense, have been around essentially as long as computer systems have existed, as any programmer active 50 years ago can attest. The main difference now, aside from the growth of networked interactions, is that "AI" has come to mean "any reasonably complex computer program".
But it's clearly possible and even likely that internetization, the AI fad, and things like Google's A2A standard will lead to new levels of inter-program interaction.
"There was no time in my life when an AI could "organize my schedule" in any meaningful way as it involved other people's schedules, some of whom had veto power over my choices."
When I was part of a so-called "artificial intelligence" group at Bell Labs in the early 1980s, a manager frustrated by time wasted in organizing meetings suggested that we construct a system of networked programs to take over the process. After some experimentation, we concluded that the problem was too hard, for reasons like the one you cite.
mark tiede said,
September 4, 2025 @ 1:11 pm
This brought to mind Bruce Sterling's wonderful 1996 short story "Bicycle Repairman" in which agents like these are called "mooks":
"Lyle cordially despised all low-down, phone-tagging, artificially intelligent mooks. For a while, in his teenage years, Lyle himself had owned a mook, an off-the-shelf shareware job that he'd installed in the condo's phone. Like most mooks, Lyle's mook had one primary role: dealing with unsolicited phone calls from other people's mooks. [….] Lyle hated the way a mook cataloged your personal interests and then generated relevant conversation. The machine-made intercourse was completely unhuman and yet perversely interesting, like being grabbed and buttonholed by a glossy magazine ad."
The story is internally dated 2037, so we appear to be somewhat ahead of schedule.
Chester Draws said,
September 4, 2025 @ 3:00 pm
When did we learn how to make computers "think it through"? That is a breakthrough I think I would have heard of.
The link says "While an LLM is not an agent in itself, it can serve as the reasoning core of a basic agentic system" which is not true. An LLM can do no "reasoning" at all. That makes me distrust the rest of the argument.
We may get there, but I suspect this is like fusion power. Always only 30 years away.
JPL said,
September 4, 2025 @ 4:59 pm
Do we really have to use the anthropomorphic imagery all the time? Couldn't we just say that people are working on engineering solutions to problems involving the production and interaction of texts and images, using reasonably complex computer programs based on extremely large corpora of texts? Since people nowadays seem uncertain about what it is to be human and what it means to be a living being, wouldn't it be healthier and nicer for all concerned to avoid considering people and other living beings as mere machines, and then treating them with machine-like callousness? Why the megalomania? The "five-step loop" is good enough for the machine, but completely inadequate for understanding purposeful adaptive action. Where are the externalities, and where are the ethical ideals and values? From Frankenstein to cybernetics to "AI", there has always been a tendency among practitioners toward a "God complex".
Mark Liberman said,
September 5, 2025 @ 6:51 am
The fact that some commenters are so resolutely missing the point may help to explain the failure of the larger world to discuss or even notice the broader issues exemplified by the paper cited in the earlier post.
These issues don't depend on the "agents" being intelligent, or being called intelligent, or even being discussed anthropomorphically. As the enormous literature on cellular automata and "agent based models" shows, collections of very simple interacting systems can develop unexpected emergent properties.
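For readers who haven't played with such models, here's a minimal sketch of Schelling's segregation model, one of the simplest examples: agents with only a mild preference for like neighbors nevertheless produce heavily clustered neighborhoods that no individual agent asked for. (The grid size, thresholds, and other parameters below are arbitrary choices for illustration.)

```python
# Minimal Schelling-style segregation model: two types of agents on a grid,
# each content if at least 30% of its neighbors share its type; discontented
# agents move to random empty cells. Parameters are arbitrary illustrations.
import random

SIZE, EMPTY_FRAC, THRESHOLD, STEPS = 30, 0.1, 0.3, 50

def neighbors(grid, r, c):
    cells = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
             for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    return [x for x in cells if x is not None]

def unhappy(grid, r, c):
    nbrs = neighbors(grid, r, c)
    return bool(nbrs) and sum(n == grid[r][c] for n in nbrs) / len(nbrs) < THRESHOLD

def step(grid):
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and unhappy(grid, r, c)]
    for r, c in movers:
        if not empties:
            break
        nr, nc = empties.pop(random.randrange(len(empties)))
        grid[nr][nc], grid[r][c] = grid[r][c], None
        empties.append((r, c))

grid = [[None if random.random() < EMPTY_FRAC else random.choice("AB")
         for _ in range(SIZE)] for _ in range(SIZE)]
for _ in range(STEPS):
    step(grid)
# After a few dozen steps the grid shows large single-type clusters: far more
# segregation than any individual agent's 30% preference would suggest.
```

None of those agents plans, reasons, or understands anything, which is exactly why the example is relevant here.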
Re-read the earlier post and its links, and stop regurgitating assertions about how LLMs (or other things called AI) don't really learn or understand — which may be true, but is totally irrelevant to the point under discussion.