Last verified: April 2026
AI Agent Glossary: Every Term Defined (2026)
A vendor-neutral glossary of AI agent terminology. Entries run two to four sentences. Every entry has a fragment URL for deep-linking. The page is built to be cited.
A
- A2A protocol#a2a-protocol
- A standard for agent-to-agent communication. The emerging counterpart to MCP: where MCP standardises model-to-tool connections, A2A standardises messaging between agents. Several competing specifications exist as of early 2026; none has reached consensus. More on multi-agent systems →
- Agent loop#agent-loop
- The four-step cycle every modern AI agent runs: sense, think, act, observe. The cycle iterates until the goal is reached, the agent gives up, or a human stops it. More on the agent loop →
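The sense–think–act–observe cycle can be sketched as a plain loop. The `decide` and `act` callables here are stand-ins for a model call and a tool layer, not any specific API:

```python
def run_agent(goal, decide, act, max_steps=10):
    """Minimal agent loop: sense -> think -> act -> observe.

    Stops when `decide` returns None (goal reached) or after
    max_steps (the "agent gives up" branch).
    """
    observation = goal                       # sense: the initial input
    for _ in range(max_steps):
        action = decide(observation)         # think: pick the next action
        if action is None:                   # goal reached
            break
        observation = act(action, observation)  # act, then observe
    return observation

# Toy run: a "counter" agent that increments until it reaches 3.
result = run_agent(
    goal=0,
    decide=lambda obs: None if obs >= 3 else "increment",
    act=lambda action, obs: obs + 1,
)
```

The `max_steps` cap is the loop's third exit condition: a hard stop standing in for a human interrupt or budget limit.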
- Agentic AI#agentic-ai
- A near-synonym for AI agents, popularised in 2024 enterprise discourse. The word emphasises autonomy and goal-directed behaviour. By 2027 it will either be normalised vocabulary or replaced.
- Augmented LLM#augmented-llm
- Anthropic's term for a language model with retrieval, tool use, and memory. The minimum architecture that does anything an LLM-alone API cannot do.
- Autonomy#autonomy
- The defining axis of an AI agent: how much decision-making the agent performs without pausing for human approval. Autonomy is a spectrum, not a binary.
Source: Schluntz, Building effective agents (Anthropic, 2024)
C
- Centralised orchestration#centralised-orchestration
- A multi-agent topology where a supervisor agent routes tasks to specialised workers. Workers do not talk to each other. The simplest topology to debug. More on orchestration patterns →
- Chain-of-thought#chain-of-thought
- A reasoning pattern where the model explicitly writes intermediate reasoning steps before producing a final answer. Improves accuracy on multi-step tasks at a cost of more tokens.
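In its simplest form, chain-of-thought is a prompting convention rather than an architecture. A minimal sketch of wrapping a question in a CoT instruction (the exact wording is illustrative, not canonical):

```python
def cot_prompt(question: str) -> str:
    """Wrap a question so the model writes intermediate reasoning
    steps before committing to a final answer."""
    return (
        f"Question: {question}\n"
        "Think step by step, then give the final answer "
        "on a line starting with 'Answer:'."
    )

prompt = cot_prompt("A train travels 60 km in 45 minutes. What is its speed in km/h?")
```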
- Copilot#copilot
- An inline-with-user assistant in a productivity or development surface. The user remains in the driver's seat at every step. Distinct from an agent, which runs the whole task. Agent vs copilot →
Source: Wei et al. 2022, arXiv:2201.11903
D
- Deliberative agent#deliberative-agent
- An agent that reasons about the world before acting. Goal-based and utility-based agents in classical terminology; planner-executor agents in modern terminology.
E
- Emergent orchestration#emergent-orchestration
- A multi-agent topology with minimal predefined structure where agents self-organise. The most experimental and least production-ready pattern as of 2026.
F
- Fine-tuned model#fine-tuned-model
- A model whose weights have been adjusted on a specific dataset after pre-training. Fine-tuning is one way to specialise an agent for a domain; system-prompt engineering is another.
- Function calling#function-calling
- The mechanism by which an LLM emits a structured request to invoke an external function. Introduced by OpenAI in mid-2023, followed by Anthropic and Google. The default tool-use pattern from 2024 onwards. Full explanation →
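The pattern has two halves: a schema describing the tool to the model, and a dispatcher that runs the structured call the model emits. A sketch in the JSON-Schema style most providers use (field names vary slightly between vendors; the weather tool is a made-up example):

```python
import json

# Tool schema advertised to the model (illustrative shape).
WEATHER_TOOL = {
    "name": "get_weather",
    "description": "Get the current temperature for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_weather(city: str) -> str:
    return f"18°C in {city}"  # stub in place of a real weather API

TOOLS = {"get_weather": get_weather}

def dispatch(tool_call_json: str) -> str:
    """Parse a structured tool call emitted by the model and invoke it."""
    call = json.loads(tool_call_json)
    return TOOLS[call["name"]](**call["arguments"])

# The model emits JSON like this; the agent runtime executes it.
result = dispatch('{"name": "get_weather", "arguments": {"city": "Lisbon"}}')
```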
G
- Goal-based agent#goal-based-agent
- An agent that acts to achieve an explicit goal, using search or planning to select actions. The classical taxonomy term that maps onto modern planner-executor agents.
Source: Russell and Norvig, 4th ed.
H
- Hierarchical orchestration#hierarchical-orchestration
- A multi-agent topology with layered delegation: a manager agent breaks the goal into sub-goals, delegates to team-lead agents, which delegate to workers. Useful when the task has natural depth.
- Hybrid agent#hybrid-agent
- An agent that combines multiple architectures, typically a fast reactive layer for tight control loops and a slow deliberative layer for planning. Common in robotics and complex production agents.
L
- Learning agent#learning-agent
- An agent that improves its performance from experience. The classical taxonomy includes a learning element, performance element, critic, and problem generator. Modern reinforcement-learning agents are the contemporary instance.
- LLM (large language model)#llm-large-language-model
- A neural-network model trained on a large corpus of text to predict the next token. The brain inside a modern AI agent. The agent is the system that includes the LLM. Agent vs LLM →
Source: Russell and Norvig, 4th ed.
M
- MCP (Model Context Protocol)#mcp
- An open standard introduced by Anthropic in late 2024 for connecting LLMs to external tools and data sources. Differs from raw function calling by supporting dynamic discovery of tools at runtime. Full explanation →
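MCP is framed over JSON-RPC 2.0; dynamic discovery happens through a `tools/list` request from client to server. A simplified sketch of that message (the full spec adds initialisation and capability negotiation around it):

```python
import json

def mcp_list_tools_request(request_id: int) -> str:
    """Build the JSON-RPC 2.0 message an MCP client sends to
    discover a server's tools at runtime."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/list",
    })

msg = mcp_list_tools_request(1)
```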
- Memory (short-term, long-term)#memory-short-term-long-term
- Short-term memory is the recent conversation in the context window. Long-term memory is retrieval against a vector database, document store, or structured database. Memory is where the agent carries state across iterations and sessions.
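A minimal sketch of the two tiers, assuming a keyword-overlap recall as a stand-in for embedding similarity against a real vector store:

```python
class Memory:
    """Short-term: a bounded list of recent turns (the context window).
    Long-term: a naive keyword store standing in for a vector database."""

    def __init__(self, window: int = 4):
        self.window = window
        self.turns: list[str] = []
        self.store: list[str] = []

    def add(self, text: str) -> None:
        self.turns.append(text)
        self.store.append(text)  # everything is also persisted long-term

    def short_term(self) -> list[str]:
        return self.turns[-self.window:]

    def recall(self, query: str) -> list[str]:
        # Keyword overlap instead of embedding similarity, for illustration.
        q = set(query.lower().split())
        return [t for t in self.store if q & set(t.lower().split())]

m = Memory(window=2)
for turn in ["user likes tea", "order placed", "shipped to Lisbon"]:
    m.add(turn)
```

With a window of 2, the first turn has scrolled out of short-term memory but remains recallable from the long-term store.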
- Multi-agent system#multi-agent-system
- A system of more than one AI agent collaborating on a goal. More powerful than a single agent and considerably harder to run. Often overkill when a single agent with better tools would suffice. Full treatment →
P
- Planner-executor pattern#planner-executor-pattern
- An architectural pattern where a planner step decomposes the goal into sub-tasks, then an executor step runs the sub-tasks. Often implemented as two distinct model calls.
- Pure-LLM agent#pure-llm-agent
- An agent built around a single language-model call with no tools and no memory beyond the context window. The simplest 2024-era chatbot without retrieval falls into this category.
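A minimal sketch of the planner-executor pattern. Both `plan` and `execute` are stubs for what would be distinct model calls in a real system:

```python
def plan(goal: str) -> list[str]:
    """Planner step: decompose the goal into sub-tasks (stubbed)."""
    return [f"research {goal}", f"draft {goal}", f"review {goal}"]

def execute(task: str) -> str:
    """Executor step: run one sub-task (stubbed)."""
    return f"done: {task}"

def planner_executor(goal: str) -> list[str]:
    return [execute(task) for task in plan(goal)]

results = planner_executor("report")
```

Separating the two calls lets each use a different prompt, or even a different model: a stronger model to plan, a cheaper one to execute.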
R
- Reactive agent#reactive-agent
- An agent that responds to stimuli without internal state or planning. The classical reflex agent. The simplest 2024-era one-shot LLM caller is a reactive agent in this sense.
- ReAct pattern#react-pattern
- Reasoning and Acting interleaved in a single agent run. The model alternates between reasoning steps and tool calls. The default pattern for tool-using agents.
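The interleaving can be sketched as a transcript the model extends on each turn. Here `fake_model` is a stub standing in for an LLM that reads the transcript and either acts or answers:

```python
def react(question, model, tools, max_steps=5):
    """ReAct loop: the model alternates tool calls with reasoning,
    each observation appended to a growing transcript."""
    transcript = [f"Question: {question}"]
    for _ in range(max_steps):
        step = model(transcript)
        if step[0] == "answer":
            return step[1]
        _, tool, arg = step
        observation = tools[tool](arg)
        transcript.append(f"Action: {tool}({arg})")
        transcript.append(f"Observation: {observation}")
    return None

def fake_model(transcript):
    # Stub: look something up once, then answer from the observation.
    if any(line.startswith("Observation:") for line in transcript):
        return ("answer", transcript[-1].removeprefix("Observation: "))
    return ("act", "lookup", "capital of France")

answer = react("capital of France?", fake_model, {"lookup": lambda q: "Paris"})
```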
- Reflection#reflection
- An architectural pattern where the agent (or a separate critic agent) scores its own output and decides whether to revise. Adds reliability at the cost of more tokens.
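The score-then-revise loop can be sketched generically; the toy critic and reviser below are placeholders for what would be model calls in practice:

```python
def reflect(draft, critic, revise, threshold=0.8, max_rounds=3):
    """Reflection: score the output, revise while it falls below
    threshold, with a round cap to bound token spend."""
    for _ in range(max_rounds):
        score = critic(draft)
        if score >= threshold:
            return draft
        draft = revise(draft)
    return draft

# Toy critic: longer drafts score higher; revise appends detail.
out = reflect(
    "hi",
    critic=lambda d: len(d) / 10,
    revise=lambda d: d + " more",
)
```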
- Reflex agent#reflex-agent
- A simple reflex agent selects an action based only on the current percept, ignoring all history. A thermostat is the canonical non-AI example.
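The thermostat example fits in a few lines: the action is a pure function of the current percept, with no history and no planning (the 0.5-degree deadband is illustrative):

```python
def thermostat(temperature: float, setpoint: float = 20.0) -> str:
    """Simple reflex agent: action depends only on the current percept."""
    if temperature < setpoint - 0.5:
        return "heat"
    if temperature > setpoint + 0.5:
        return "cool"
    return "off"
```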
- Retrieval-augmented generation#retrieval-augmented-generation
- A pattern where the agent retrieves relevant documents before generating a response. RAG is a special case of tool use where the tool is "search a knowledge base". More on retrieval →
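The retrieve-then-generate shape of RAG can be sketched with a keyword ranker standing in for embedding similarity against a vector store, and `generate` standing in for the model call:

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by keyword overlap with the query (a stand-in
    for embedding similarity)."""
    q = set(query.lower().split())
    return sorted(docs,
                  key=lambda d: len(q & set(d.lower().split())),
                  reverse=True)[:k]

def rag_answer(query: str, docs: list[str], generate) -> str:
    """Prepend retrieved context to the prompt, then generate."""
    context = "\n".join(retrieve(query, docs))
    return generate(f"Context:\n{context}\n\nQuestion: {query}")

docs = ["paris is the capital of france", "the moon orbits the earth"]
# Identity `generate` so we can inspect the prompt the model would see.
answer = rag_answer("capital of france", docs, generate=lambda p: p)
```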
Source: Yao et al. 2022, arXiv:2210.03629
Source: Shinn et al. 2023, arXiv:2303.11366
Source: Russell and Norvig, 4th ed.
S
- Self-refine#self-refine
- A reflection variant where the agent iteratively critiques and revises its own output. Distinct from external-critic patterns where a separate agent does the critique.
- Single-agent system#single-agent-system
- An architecture with one AI agent and one agent loop. Simpler, cheaper, and easier to debug than a multi-agent system. The right choice for most production deployments.
- System prompt#system-prompt
- The instructions given to the model before any user input. Defines the agent's role, scope, tone, and guardrails. Where most operator-side customisation lives.
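In message-based chat APIs the system prompt is simply the first entry in the message list. A simplified sketch (the exact message shape varies by vendor):

```python
def build_messages(system_prompt: str, user_input: str) -> list[dict]:
    """The system prompt goes first and frames everything the
    model sees before any user input arrives."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]

messages = build_messages(
    "You are a billing support agent. Only answer billing questions.",
    "Where is my invoice?",
)
```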
Source: Madaan et al. 2023, arXiv:2303.17651
T
- Tool routing#tool-routing
- The layer in an agent that decides which tool to call given the current state. In simple agents, the model itself decides; in production agents, a separate routing layer often pre-filters available tools.
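A pre-filtering routing layer can be sketched as keyword matching over tool descriptions; production routers typically use embeddings or a small classifier instead, and the tool names here are made up:

```python
def route_tools(query: str, tools: dict[str, set[str]], limit: int = 2) -> list[str]:
    """Pre-filter the tool list before the model sees it: keep only
    tools whose keywords overlap the query."""
    q = set(query.lower().split())
    scored = [(len(q & kw), name) for name, kw in tools.items()]
    scored = [(s, n) for s, n in scored if s > 0]
    scored.sort(reverse=True)
    return [name for _, name in scored[:limit]]

tools = {
    "calendar": {"meeting", "schedule", "calendar"},
    "email": {"email", "send", "inbox"},
    "weather": {"weather", "forecast", "rain"},
}
selected = route_tools("schedule a meeting", tools)
```

Shrinking the tool list before the model call cuts prompt tokens and reduces the chance of the model picking an irrelevant tool.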
- Tool use#tool-use
- The agent's capability to invoke external functions, retrieve documents, or write to systems outside the model. The architectural substance of a modern agent. Tool-using agents →
U
- Utility-based agent#utility-based-agent
- An agent that picks the action that maximises a utility function over outcomes. Used when goals conflict or can be achieved with varying quality.
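Action selection reduces to an argmax over the utility function. A toy sketch where the utility trades off travel time against cost (the weights are illustrative):

```python
def choose_action(actions, utility):
    """Utility-based agent: pick the action with the highest utility."""
    return max(actions, key=utility)

# (hours, cost) per option; utility penalises each hour as 20 units of cost.
options = {"train": (3, 40), "plane": (1, 200), "bus": (6, 15)}
best = choose_action(
    options,
    utility=lambda a: -(options[a][0] * 20 + options[a][1]),
)
```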
Source: Russell and Norvig, 4th ed.
W
- Workflow automation#workflow-automation
- Deterministic step-running tools (Zapier, n8n, Make, Workato). Distinct from an AI agent in that workflow automation does not decide what to do next; the user pre-defines every step. Agent vs workflow automation →
- Workflow vs agent (Anthropic's distinction)#workflow-vs-agent
- Anthropic's framing draws the line at decision authority: a workflow follows a predefined path; an agent decides the path at runtime. The line determines who is responsible when the system fails.
Source: Schluntz, Building effective agents (Anthropic, 2024)