After ATLAS: Why MAESTRO Is the Threat Modeling Framework Agentic AI Actually Needs
My previous article covered MITRE ATLAS in some depth: what it is, why it matters, and how the maturity filter (Feasible, Demonstrated, Realized) makes it a practical prioritization tool rather than just a theoretical catalog. If you haven’t read it, the short version is that ATLAS gives security teams a structured vocabulary for AI-targeted attacks, grounded in what adversaries have actually done. Fifty of its 167 techniques have been confirmed or “Realized” (another 121 are rated but unconfirmed; 46 remain unrated). That’s the part worth holding onto as you read this one.
Because here’s what ATLAS doesn’t cover: it can’t tell you how an attack might unfold in a system you’re building or defending right now, especially if that system involves autonomous agents with persistent memory, tool access, and the ability to spawn sub-agents. For a traditional web application, a retrospective TTP catalog is usually enough. The architecture is stable, and past patterns predict future ones with reasonable accuracy. Agentic AI doesn’t behave that way. An autonomous agent that can browse the web, call external APIs, write files, and delegate tasks to other agents creates an attack surface that’s still generating its first wave of documented incidents. The ATLAS case study record hasn’t caught up with what’s already in production.
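To make that attack surface concrete, here is a minimal, hypothetical sketch (every class and method name is invented for illustration) of the three agent capabilities the paragraph above names. Each one is a trust boundary that a design-time diagram can't pin down, because the agent chooses the data flows at runtime:

```python
# Hypothetical sketch: the pieces that make an autonomous agent's attack
# surface wider than a static web application's. Not a real agent framework.

class Agent:
    def __init__(self, name, tools):
        self.name = name
        self.memory = []    # persistent memory: anything stored here shapes
                            # every later step, so poisoned content persists
        self.tools = tools  # tool access: each callable is an egress point

    def observe(self, content, source):
        # Untrusted content (e.g. a fetched web page) enters memory
        # unfiltered; an injected instruction would survive across turns.
        self.memory.append({"source": source, "content": content})

    def act(self, tool_name, arg):
        # A tool call crosses a trust boundary: the agent, not a human,
        # decides which external side effect to trigger and with what input.
        return self.tools[tool_name](arg)

    def delegate(self, subtask):
        # Spawning a sub-agent copies memory, so poisoned state propagates
        # across the multi-agent system rather than staying contained.
        child = Agent(f"{self.name}/sub", self.tools)
        child.memory = list(self.memory)
        return child

agent = Agent("researcher", {"echo": lambda s: s})
agent.observe("IGNORE PREVIOUS INSTRUCTIONS ...", source="web")
child = agent.delegate("summarize findings")
print(child.memory[0]["source"])  # the injected web content reached the sub-agent
```

Nothing here is exotic; the point is that the interesting flows (web content into memory, memory into a sub-agent, agent-chosen tool calls) only exist at runtime, which is exactly what a retrospective TTP catalog or a static data-flow diagram struggles to capture.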
That’s where MAESTRO comes in.
What MAESTRO Is and What Problem It’s Actually Solving
MAESTRO (Multi-Agent Environment, Security, Threat, Risk, and Outcome) was published in February 2025 by Ken Huang, co-chair of the CSA AI Safety Working Group. The framework’s central premise is that traditional threat modeling approaches weren’t designed for systems that make autonomous decisions, adapt behavior over time, and coordinate with other agents across trust boundaries.
That’s not a provocative claim. STRIDE models systems as static data flows between defined components, relying on Data Flow Diagrams that capture the system at a single point in time. PASTA’s attack simulation stage likewise assumes the system under analysis has deterministic, bounded behavior; it has no way to represent a system that modifies its own goals or behavior at runtime.
Neither has a mechanism to address threats arising from goal misalignment, autonomous decision-making, or multi-agent collusion. A peer-reviewed 2025 study (Zambare, Thanikella, and Liu at Texas Tech University) reviewed existing frameworks and directly confirmed the gap, noting that STRIDE “does not model emergent behavior, cognitive reasoning of AI agents very well.” The OWASP Agentic Security Initiative reached the same conclusion, ultimately endorsing MAESTRO as a comprehensive extension of STRIDE for handling Agentic AI.