[Figure: wireframe of a production AI agent with exposed architecture layers]
10-Part Research Series

Anatomy of a
Production AI Agent

We reverse-engineered the production source code of the most widely used AI coding agent from its published npm source map. What we found is not a chatbot. It is an operating system.

Scott Thornton · perfecXion.ai · ~25,000 words · March -- May 2026

The Central Thesis

AI agents are not chatbots with tool access. They are autonomous execution runtimes with operating system-class security requirements -- and we are deploying them without the constraints that made operating systems safe.

"The model is not thinking. It is issuing instructions inside a control loop."
"The attack surface is not the model. It is the extension ecosystem."
"We recreated kernel security architecture -- without the constraints that make it safe."
"The agents are already running. And you cannot see them."

The Series

10 articles. Each grounded in production code analysis with line-number citations.

01
Architecture · March 31, 2026

The Real Architecture of AI Agents

Agents are execution loops, not chatbots. An infinite while loop, concurrent tool scheduler, 4-layer context compaction, and a permission engine with 7 rule sources.
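The loop described above can be sketched in a few lines. This is an illustrative minimal shape, not the agent's actual code: the function and message names (`run_agent`, `tool_calls`, the `fake_model` stub) are assumptions for demonstration.

```python
# Minimal sketch of an agent execution loop (illustrative names only,
# not the analyzed agent's source). The model does not "think" freely;
# it issues tool calls inside a bounded control loop until it produces
# a final answer or runs out of turns.

def run_agent(model, tools, prompt, max_turns=10):
    """Drive the model in a loop, executing any tool calls it issues."""
    history = [{"role": "user", "content": prompt}]
    for _ in range(max_turns):
        reply = model(history)                 # model proposes the next step
        if not reply.get("tool_calls"):        # no tool calls -> final answer
            return reply["content"]
        for call in reply["tool_calls"]:       # execute each requested tool
            result = tools[call["name"]](**call["args"])
            history.append({"role": "tool",
                            "name": call["name"],
                            "content": str(result)})
    return "max turns exceeded"
```

In production the loop also carries the concurrent scheduler, context compaction, and permission checks the article covers; the point here is only that the controlling structure is a loop around the model, not a single question-and-answer exchange.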

02
Control Plane · April 1, 2026

System Prompts Are Not Strings -- They're Pipelines

20+ components, a cache boundary, 5 injection surfaces, and MCP servers injecting arbitrary instructions after the security guardrail.

03
Attack Surface · April 3, 2026

The AI Agent Attack Surface: Plugins, MCP, and Hooks

Three escalation planes that compose into kill chains. A single plugin install enables instruction injection, permission bypass, and data exfiltration.

04
Permissions · April 8, 2026

Why AI Permission Systems Are the New Kernel Security

7 permission modes, 7 rule sources, a probabilistic AI classifier making security decisions. The OS analogy is precise -- and breaks in ways that matter.
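Layered rule sources resolve like a precedence lookup. The sketch below is hypothetical: the source names and their ordering are illustrative stand-ins, not the agent's actual configuration model.

```python
# Illustrative sketch of resolving a permission verdict across ordered
# rule sources. Source names and precedence are hypothetical, not taken
# from the analyzed agent.

RULE_SOURCES = [            # highest precedence first
    "cli_flag", "enterprise_policy", "project_settings",
    "user_settings", "session_grant", "mode_default", "builtin_default",
]

def resolve(rules, tool):
    """Return the verdict from the highest-precedence source with a rule."""
    for source in RULE_SOURCES:
        verdict = rules.get(source, {}).get(tool)
        if verdict is not None:
            return verdict, source
    return "ask", "fallback"   # no rule anywhere -> prompt the user
```

The kernel parallel is exactly this shape: a fixed precedence chain of policy sources. The article's argument is about what happens when one "source" in that chain is a probabilistic classifier rather than a deterministic rule.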

05
Guardrails · April 10, 2026

Inside a Real AI Guardrail System (And Where It Breaks)

Six layers of defense. Six structural failure modes. Defense in depth only works when layers fail independently -- these share state, logic, and model.
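The independence point is worth making concrete with a back-of-envelope calculation. The miss probability below is a hypothetical number chosen for illustration, not a measured value:

```python
# Hypothetical arithmetic: why correlated guardrail layers are weaker
# than independent ones. Suppose each layer misses an attack with
# probability 0.1.

p_miss = 0.1
layers = 6

# Independent layers: all six must miss for a bypass to succeed.
p_bypass_independent = p_miss ** layers   # ~1e-06

# Fully correlated layers (shared state, logic, and model): one miss
# implies all miss, so depth adds nothing.
p_bypass_correlated = p_miss              # still 0.1
```

Six independent layers turn a 1-in-10 miss into a roughly 1-in-a-million bypass; six correlated layers leave it at 1 in 10. Real systems sit between the extremes, but sharing state, logic, and model pushes them toward the correlated end.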

06
Isolation · April 15, 2026

Subagents Are Sandboxed Processes (And They Leak)

Five structural leakage vectors. Opt-out isolation instead of opt-in. Delegation propagates privilege rather than reducing it.

07
Injection · April 17, 2026

Multi-Surface Prompt Injection in Agent Systems

Five injection surfaces. Compound attack chains that compose across surfaces -- entering through one, persisting in another, executing in a third.

08
Cloud Parallel · April 22, 2026

Agent Security Is the New Cloud Security

Same architecture. Same risks. No tooling. Agents are workloads, tools are APIs, permissions are IAM, MCP is a service mesh. Cloud had CSPM. Agents have nothing.

09
FLAGSHIP · Taxonomy · April 29, 2026

The Agent Security Top 10: A New Security Category

AS-01 through AS-10. An evidence-based risk taxonomy for agentic AI systems. Not OWASP. Not theory. Derived from production code with attack scenarios, mitigations, and detection guidance.

10
Visibility · May 1, 2026

Why You Don't Know How Many Agents Are Running

Six categories of invisible agent state. EDR sees processes, not reasoning. SIEM sees events, not decision chains. The agents are already running.

The Agent Security Model

10 articles build a complete framework for understanding agent security.

200+ source files analyzed
10 risk categories (AS-01 -- AS-10)
28 hook event types
7 permission rule sources
5 injection surfaces


Scott Thornton

AI Security Researcher, perfecXion.ai

Specializing in defensive research on LLM and agent vulnerabilities. This research was conducted on lawfully obtained, publicly distributed npm package code in an authorized research environment. All analysis is for defensive purposes in accordance with responsible disclosure practices.