Methodology: How This Reference Is Built and Maintained
This page describes how WhatIsAnAIAgent.com is built, what sources it uses, what it intentionally excludes, and how often it is updated. The page exists because reference-shaped content is more trustworthy when its construction is documented than when it is not.
What this site is for
WhatIsAnAIAgent.com is an independent, vendor-neutral reference on AI agents. It is written for an operator audience: VPs of Operations, CHROs, and mid-career technical leaders at 200 to 5000 person companies who need to come up to speed on the substance of AI agents in a single reading session. The site is deliberately not for AI researchers, hobbyist developers, or builders looking for code samples.
The site is built to be cited. Every claim that can carry a citation carries one inline. Every page has structured-data markup so that search engines and AI engines can quote individual paragraphs accurately. Every page footer carries a "last verified" date so readers know how fresh the content is.
Source weighting
Primary sources are weighted highest. These include the textbook of record (Russell and Norvig, Artificial Intelligence: A Modern Approach, 4th ed., 2021), peer-reviewed survey papers (Wang et al. 2024, Yao et al. 2022, Shinn et al. 2023, Madaan et al. 2023, Wei et al. 2022), the OECD AI Taxonomy and Occupational Risk Index, official vendor specifications and engineering blog posts (Anthropic's Building effective agents, OpenAI's function calling documentation, the Model Context Protocol specification), and government AI reports.
Secondary sources are used where they add useful framing or industry context. These include BCG, McKinsey, MIT Sloan Management Review, the Stanford AI Index Report, Anthropic and AWS engineering blog posts on evaluation methodology, Sierra's tau-bench paper, and Weights and Biases agent eval guidance.
Excluded sources. The site does not cite paywalled analyst reports available only through news coverage, AI-generated content farms, vendor product marketing pages presented as if they were neutral primers, or listicle SEO content ("Top 10 AI agent platforms 2026").
What is intentionally excluded
Vendor pricing. Vendor pricing changes monthly. A reference page that includes prices ages badly within weeks of publication. The site links to vendor names and categories without prices. Buyers get pricing from the vendor at the time of procurement.
Ranked recommendations. "Best AI agent platform 2026" content is structurally a different category of work than vendor-neutral reference. Rankings are subjective, change quarterly, and put the reference site in direct competition with vendor content. The vendor landscape page is organised by category, not by rank. See AI agent vendors.
Predictions. "By 2030 AI agents will…" content is not reference content. The site describes what exists in 2026, sources the descriptions, and updates annually. Where predictions are referenced, they are clearly attributed to the source making the prediction, not stated in the site's own voice.
Tutorials and code samples. Builder content is a different category. For deeper, developer-focused coverage of agent design patterns and engineering, see agentcogito.com.
Update cadence
The site is reviewed quarterly. Pages with rapidly shifting referents (vendor landscape, examples) are reviewed more often. Definitional pages (homepage, types, how they work, glossary) are reviewed annually unless a major change in the field requires a faster update.
Every page footer carries a "Last verified" date. The date is the most recent review, not the original publication date. When a page is updated, the "dateModified" in the JSON-LD metadata is updated correspondingly so search and AI engines see the freshness signal.
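In practice, the freshness signal described above might look like the following JSON-LD fragment. This is an illustrative sketch only: the headline, dates, and property selection are hypothetical, not copied from any page on the site, though "datePublished" and "dateModified" are standard schema.org Article properties.

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "What Is an AI Agent?",
  "datePublished": "2026-04-01",
  "dateModified": "2026-04-15"
}
```

On a review that changes the page, only "dateModified" is bumped; "datePublished" stays fixed, so consumers can distinguish original publication from the most recent verification.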
Affiliate disclosures
The site is published by Digital Signet, an independent operator that runs a portfolio of reference-shaped sites. Where Digital Signet has an affiliate relationship with a vendor named on the site, the relationship is disclosed inline on the relevant page footer.
As of April 2026 there are no active affiliate relationships in the AI agent vendor space. If that changes, this page is updated and the affected page footers are updated.
The site does not accept vendor sponsorship for editorial content. The site does not run display advertising. Monetisation is via the parent advisory practice and, where appropriate, affiliate relationships disclosed as above.
Author, editor, contact
Editorial responsibility for the site sits with the Digital Signet editorial team. Corrections, criticisms, citations to better sources, and substantive feedback are welcome.
Contact for corrections: digitalsignet.com.
Revision history
- April 2026: Initial publication. 13 pages: definition, types, how they work, examples by function, agent vs chatbot, agent vs LLM, tool use, multi-agent, vendors, evaluation, glossary, FAQ, methodology.
Canonical sources cited across the site
- Russell, S. and Norvig, P. (2021). Artificial Intelligence: A Modern Approach, 4th ed. Pearson.
- Schluntz, E. (December 2024). Building effective agents. Anthropic engineering blog.
- Wang, L. et al. (2024). A Survey on Large Language Model based Autonomous Agents. arXiv:2308.11432.
- Yao, S. et al. (2022). ReAct: Synergizing Reasoning and Acting in Language Models. arXiv:2210.03629.
- Shinn, N. et al. (2023). Reflexion: Language Agents with Verbal Reinforcement Learning. arXiv:2303.11366.
- Madaan, A. et al. (2023). Self-Refine: Iterative Refinement with Self-Feedback. arXiv:2303.17651.
- Wei, J. et al. (2022). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. arXiv:2201.11903.
- OECD (2024). The OECD AI Taxonomy and Occupational Risk Index.
- Anthropic (2024). Demystifying evals for AI agents. Anthropic engineering blog.
- Sierra (2024). tau-bench: A Benchmark for Tool-Agent-User Interaction.
- Anthropic (2024). Model Context Protocol specification.
- Stanford HAI. AI Index Report (annual).