We reverse-engineered the production source code of the most widely used AI coding agent from its published npm source map. What we found is not a chatbot. It is an operating system.
AI agents are not chatbots with tool access. They are autonomous execution runtimes with operating-system-class security requirements -- and we are deploying them without the constraints that made operating systems safe.
10 articles. Each grounded in production code analysis with line-number citations.
Agents are execution loops, not chatbots. An infinite while loop, concurrent tool scheduler, 4-layer context compaction, and a permission engine with 7 rule sources.
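The execution-loop claim can be made concrete with a minimal sketch. This is illustrative only -- the names (`queryModel`, `permitted`, `runAgent`) are hypothetical stand-ins, not identifiers from the decompiled source -- but it shows the core shape: an unbounded loop in which the model proposes tool calls, each call is gated by a permission check, and the loop exits only when the model emits a final answer.

```typescript
type ToolCall = { tool: string; args: string };
type ModelTurn = { done: boolean; calls: ToolCall[]; answer?: string };

// Hypothetical stand-in for the model: finishes once it has seen one tool result.
function queryModel(history: string[]): ModelTurn {
  return history.length > 1
    ? { done: true, calls: [], answer: "done" }
    : { done: false, calls: [{ tool: "read_file", args: "README.md" }] };
}

// Hypothetical stand-in for the permission engine: a trivial allow-list.
function permitted(call: ToolCall): boolean {
  return call.tool === "read_file";
}

function runAgent(prompt: string): string {
  const history = [prompt];
  while (true) { // the "infinite while loop": exits only on a final answer
    const turn = queryModel(history);
    if (turn.done) return turn.answer ?? "";
    for (const call of turn.calls) {
      history.push(
        permitted(call)
          ? `result of ${call.tool}(${call.args})`
          : `denied: ${call.tool}`
      );
    }
  }
}
```

Even at this toy scale, the security-relevant property is visible: every tool result is appended back into the history the model reads, so anything a tool returns becomes input to the next decision.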
20+ components, a cache boundary, 5 injection surfaces, and MCP servers injecting arbitrary instructions after the security guardrail.
Three escalation planes that compose into kill chains. A single plugin install enables instruction injection, permission bypass, and data exfiltration.
7 permission modes, 7 rule sources, a probabilistic AI classifier making security decisions. The OS analogy is precise -- and breaks in ways that matter.
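To make the multiple-rule-source problem tangible, here is a hedged sketch of precedence-based rule resolution. The source names, ordering, and fail-closed default are assumptions for illustration, not the product's actual engine; the point is that a verdict depends on which of several ordered sources last matched, which is exactly where audit complexity comes from.

```typescript
type Verdict = "allow" | "deny" | "ask";
type Rule = { tool: string; verdict: Verdict };

// Hypothetical rule sources, ordered lowest to highest precedence
// (e.g. built-in defaults, then org policy, then user overrides).
const ruleSources: Rule[][] = [
  [{ tool: "read_file", verdict: "allow" }], // defaults
  [{ tool: "run_shell", verdict: "deny" }],  // org policy
  [{ tool: "run_shell", verdict: "ask" }],   // user override
];

function resolve(tool: string): Verdict {
  let verdict: Verdict = "ask"; // fail closed to a user prompt
  for (const source of ruleSources) {
    for (const rule of source) {
      if (rule.tool === tool) verdict = rule.verdict; // last match wins
    }
  }
  return verdict;
}
```

Here the user override softens an org-level deny on `run_shell` to "ask" -- a three-line change in precedence order that silently alters the security posture, which is why seven interacting rule sources are hard to reason about.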
Six layers of defense. Six structural failure modes. Defense in depth only works when layers fail independently -- these share state, logic, and model.
Five structural leakage vectors. Opt-out isolation instead of opt-in. Delegation propagates privilege rather than reducing it.
Five injection surfaces. Compound attack chains that compose across surfaces -- entering through one, persisting in another, executing in a third.
Same architecture. Same risks. No tooling. Agents are workloads, tools are APIs, permissions are IAM, MCP is a service mesh. Cloud had CSPM. Agents have nothing.
AS-01 through AS-10. An evidence-based risk taxonomy for agentic AI systems. Not OWASP. Not theory. Derived from production code with attack scenarios, mitigations, and detection guidance.
Six categories of invisible agent state. EDR sees processes, not reasoning. SIEM sees events, not decision chains. The agents are already running.
10 articles build a complete framework for understanding agent security.
AI Security Researcher, perfecXion.ai
Specializing in defensive research on LLM and agent vulnerabilities. This research was conducted on lawfully obtained, publicly distributed npm package code in an authorized research environment. All analysis is for defensive purposes in accordance with responsible disclosure practices.