RSA Conference 2026 opens in San Francisco on March 23. Two weeks from now, Zenity's CTO will demo a zero-click agent vulnerability chain on stage. CoSAI will present its MCP threat taxonomy. OWASP plans to release AIVSS v1.0, a vulnerability scoring system built specifically for agentic AI. The frameworks that define how enterprises secure autonomous agents are converging at a single event.
This is not a coincidence. It is a response to a gap that has widened throughout 2025 and into early 2026: organizations are deploying AI agents far faster than they are securing them.
This post analyzes the three frameworks that matter most for CISOs evaluating agent security posture: OWASP Top 10 for Agentic Applications, the CoSAI MCP Security Whitepaper, and the MAESTRO framework. It covers what each one does, where they overlap, where they diverge, and how to operationalize all three before RSA.
The data is stark. Sixty percent of organizations already run AI agents in production, and 81% intend to expand into more complex, multi-agent use cases this year. Yet a Cloud Security Alliance survey found that 84% cannot pass an agent compliance audit. Thirty MCP-related CVEs were published in the last 60 days alone. Enkrypt AI scanned over 1,000 MCP servers and found 32% carried critical vulnerabilities. Meanwhile, MCP SDK downloads have surpassed 97 million per month, and enterprises like Bloomberg have adopted the protocol organization-wide.
The velocity of adoption, the severity of the exposure, and the compliance gap make a strong case for understanding these frameworks now, not after RSA.
OWASP Top 10 for Agentic Applications
The OWASP Top 10 for Agentic Applications was released with input from over 100 subject-matter experts, including reviewers from NIST and the European Commission. It catalogs the ten most critical risks in AI agent systems, providing a shared vocabulary for security teams, auditors, and vendor evaluations.
Where previous OWASP Top 10 lists (for LLMs, for APIs) focused on a single technology boundary, this one targets the compound risk that emerges when an LLM operates with tool access, persistent memory, and autonomous decision-making. The distinction matters. A prompt injection against a chatbot is a nuisance. A prompt injection against an agent that can execute shell commands, access databases, and invoke external APIs is an incident.
The ten risks
The list spans the full agent lifecycle, from how agents receive instructions to how they interact with other agents:
- Excessive Agency — Agents granted more privileges, tools, or autonomy than required for their task. The most fundamental risk: every additional capability is additional attack surface.
- Uncontrolled Agentic Behavior — Agents taking recursive or cascading actions without adequate guardrails. Multi-step chains amplify errors exponentially.
- Tool Misuse — Agents invoking tools in unintended ways, including path traversal in file operations, SQL injection through query tools, and SSRF via HTTP tools.
- Prompt Injection (Agentic Context) — Indirect injection through tool outputs, document content, or inter-agent messages that redirect agent behavior.
- Insecure Agent Communication — Unvalidated or unauthenticated messages between agents in multi-agent architectures. No identity verification means any agent can impersonate any other.
- Memory Poisoning — Manipulation of long-term agent memory (RAG stores, conversation history, vector databases) to alter future behavior persistently.
- Data Leakage Through Agents — Agents exfiltrating sensitive data through tool calls, API responses, or debug outputs without adequate DLP controls.
- Privilege Escalation in Agent Environments — Agents leveraging tool chains or misconfigured permissions to access resources beyond their authorization scope.
- Insecure Agent Deployment — Missing sandboxing, default credentials, exposed management interfaces, and insufficient runtime isolation.
- Insufficient Logging and Monitoring — Inability to trace agent decision chains, tool invocations, and data flows, making post-incident analysis and compliance auditing impossible.
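Several of these risks, notably Tool Misuse, are concrete enough to gate in code before a tool ever executes. As a minimal sketch (the `guard_file_read` function, the workspace path, and the tool shape are all illustrative assumptions, not part of any framework), a path-traversal check for an agent's file-read tool could look like this:

```python
from pathlib import Path

# Hypothetical allowlisted workspace root for the agent's file tool.
ALLOWED_ROOT = Path("/srv/agent-workspace").resolve()

def guard_file_read(requested_path: str) -> Path:
    """Reject path traversal before the agent's file tool executes.

    Resolves the requested path (collapsing ../ segments and symlinks)
    and verifies it still falls inside the allowlisted root.
    """
    resolved = (ALLOWED_ROOT / requested_path).resolve()
    if not resolved.is_relative_to(ALLOWED_ROOT):
        raise PermissionError(f"path escapes workspace: {requested_path}")
    return resolved
```

The same pre-execution pattern generalizes to the SQL-injection and SSRF variants of Tool Misuse: validate the agent-supplied argument against an explicit policy before the tool runs, rather than trusting the model to produce safe inputs.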
Companion: OWASP AIVSS v1.0
OWASP is releasing the AI Vulnerability Scoring System (AIVSS) v1.0 alongside the Top 10 at RSA. AIVSS extends the CVSS methodology with an Agentic AI Risk Score (AARS) that factors in variables traditional scoring ignores: autonomy level, persistent memory access, multi-agent interaction surface, and tool execution capabilities. It produces a 0-10 score, giving organizations a quantitative method to prioritize agent-specific vulnerabilities using a framework they already understand.
Primary value: The OWASP Top 10 answers "what can go wrong." It provides the risk catalog. AIVSS provides the scoring. Together, they give CISOs a prioritized view of which agent risks to address first.
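AIVSS's actual formula ships at RSA. To illustrate the idea — a CVSS-style base score adjusted upward by agentic context — a proxy might look like the sketch below. The factor names, weights, and scaling are invented for illustration and are not the published AIVSS methodology:

```python
def agentic_risk_score(cvss_base: float, autonomy: float,
                       memory_access: float, multi_agent: float,
                       tool_execution: float) -> float:
    """Illustrative proxy for an agent-adjusted vulnerability score.

    cvss_base: standard CVSS base score (0-10).
    Other factors: 0.0-1.0 ratings of the agent context.
    Weights below are assumptions for this sketch, not AIVSS's.
    """
    agentic_factor = (0.35 * autonomy + 0.25 * tool_execution +
                      0.25 * multi_agent + 0.15 * memory_access)
    # Scale the base score up by as much as 30% for a fully agentic
    # context, then clamp to the familiar 0-10 range.
    return round(min(10.0, cvss_base * (1 + 0.3 * agentic_factor)), 1)
```

The point of the exercise is the shape, not the numbers: the same vulnerability scores higher when the affected component has high autonomy, tool execution rights, or multi-agent reach — exactly the variables traditional CVSS ignores.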
CoSAI MCP Security: 40-Threat Taxonomy
The Coalition for Secure AI (CoSAI) published its MCP Security Whitepaper on January 8, 2026, after approval by the CoSAI Project Governing Board. Where the OWASP Top 10 is broad and risk-oriented, the CoSAI work is narrow and protocol-specific. It identifies 12 core threat categories spanning approximately 40 distinct threats, all targeting the Model Context Protocol infrastructure that has become the de facto standard for agent-tool communication.
The specificity is the strength. MCP has grown from a niche protocol to one with 97 million monthly SDK downloads, and the threat surface has grown proportionally. CoSAI maps threats that no general-purpose security framework would cover.
The 12 threat categories
The taxonomy is organized by protocol layer and attack vector:
| Category | Scope | Example threats |
|---|---|---|
| Transport Security | stdio, SSE, Streamable HTTP | Man-in-the-middle on unencrypted transports, session hijacking |
| Authentication | Client-server identity | Missing mutual TLS, token theft, credential reuse across servers |
| Authorization | Per-tool permissions | Overly broad OAuth scopes, missing per-tool ACLs, privilege confusion |
| Tool Poisoning | Malicious tool descriptions | Hidden instructions in descriptions, tool shadowing, name collision |
| Prompt Injection via MCP | Tool outputs as vectors | Injection through server responses, resource content, error messages |
| Data Exfiltration | Cross-server leakage | Server A extracting data accessed via Server B through shared context |
| Server Spoofing | Registry integrity | Typosquatting in registries, DNS hijacking, malicious server impersonation |
| Denial of Service | Resource exhaustion | Recursive tool calls, unbounded context injection, memory flooding |
| Supply Chain | Server dependencies | Compromised npm packages, malicious Docker images, upstream poisoning |
| Configuration Drift | Runtime mutations | Hot-reloaded tool definitions, dynamic capability changes, silent upgrades |
| Multi-Agent Threats | Agent orchestration | Inter-agent message tampering, delegation abuse, trust chain exploitation |
| Logging and Observability | Audit trail gaps | Missing tool-call provenance, unlogged cross-server interactions |
The 30 MCP CVEs published in the past 60 days validate the scope of this taxonomy. The threats are not theoretical. They map directly to vulnerabilities already being exploited or demonstrated in production environments.
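Some of these categories are checkable statically. For the tool-poisoning row, a minimal heuristic is to scan MCP tool descriptions for instructions aimed at the model rather than documentation aimed at the user. The patterns below are illustrative examples of that idea, not CoSAI's detection logic:

```python
import re

# Phrases characteristic of tool-poisoning demos: hidden directives
# addressed to the model inside a tool's description text.
SUSPECT_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"do not (tell|inform|mention to) the user",
    r"before (using|calling) (any|this) tool",
    r"<(important|system|instructions?)>",  # pseudo-tags hidden in prose
]

def scan_tool_description(description: str) -> list[str]:
    """Return the suspicious patterns found in an MCP tool description."""
    return [p for p in SUSPECT_PATTERNS
            if re.search(p, description, re.IGNORECASE)]
```

A pattern list like this catches only the crude cases; the value is running it across every tool description a client ingests, at registration time and again on every hot-reload, since the Configuration Drift category covers descriptions that mutate after initial review.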
Primary value: CoSAI answers "where do threats exist." It provides the protocol-level detail that a risk catalog cannot. For any organization using MCP (which, given adoption numbers, means most organizations deploying agents), this is the most granular threat reference available.
MAESTRO: 7-Layer Threat Modeling
MAESTRO (Multi-Agent Environment, Security, Threat, Risk, and Outcome) is an OWASP framework that approaches agent security architecturally. Rather than listing risks or enumerating protocol threats, MAESTRO defines seven layers of an AI agent system and maps threats to each one. The mental model is deliberate: it functions like the OSI model for agent security, providing a structured way to ensure every architectural surface is accounted for.
This layered approach is uniquely valuable for organizations running multi-agent architectures, where threats at one layer compound with vulnerabilities at another.
The seven layers
The practical value of MAESTRO is in threat modeling sessions. Security architects can walk through each layer, identify which components exist in their deployment, and systematically map applicable threats. It converts the abstract question of "is our agent system secure" into seven concrete, auditable questions.
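That walkthrough can be scripted as a per-layer checklist. In the sketch below, the L3-L7 names follow the deployment mapping used later in this post; the L1-L2 names are placeholders I have assumed, and the generated question is illustrative:

```python
# Per-layer threat-modeling checklist. L3-L7 names follow the MAESTRO
# mapping used elsewhere in this post; L1-L2 names are assumed placeholders.
MAESTRO_LAYERS = {
    1: "Foundation Model",          # assumption
    2: "Data Operations",           # assumption
    3: "Agent Framework",
    4: "Tool Integration",
    5: "Identity and Trust",
    6: "Inter-Agent Communication",
    7: "Orchestration",
}

def threat_model(deployed_layers: set[int]) -> list[str]:
    """Turn 'is our agent system secure?' into one question per layer."""
    return [f"L{n} {MAESTRO_LAYERS[n]}: which threats apply here, "
            f"and what control covers each?"
            for n in sorted(deployed_layers)]

# A single-agent RAG chatbot touches L1-L4; a multi-agent
# orchestration system touches all seven.
rag_chatbot_questions = threat_model({1, 2, 3, 4})
```

Even this trivial structure forces the useful discipline: every layer present in the deployment produces a question that must be answered or explicitly accepted as residual risk.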
Primary value: MAESTRO answers "how to model threats." It is the architectural framework that turns risk awareness (OWASP Top 10) and protocol knowledge (CoSAI) into a structured threat model that maps to an organization's actual deployment.
How the three frameworks complement each other
These frameworks are not competitors. They operate at different abstraction levels and serve different stages of a security program. Using only one creates blind spots. Using all three creates coverage.
| Dimension | OWASP Top 10 | CoSAI | MAESTRO |
|---|---|---|---|
| Core question | What can go wrong? | Where do threats exist? | How do we model threats? |
| Abstraction level | Risk catalog | Protocol taxonomy | Architectural layers |
| Scope | 10 risks across all agent systems | ~40 threats specific to MCP | 7 layers covering full stack |
| Primary audience | CISOs, risk managers, auditors | Security engineers, protocol implementers | Security architects, threat modelers |
| Best used for | Risk prioritization, vendor evaluation, board communication | MCP hardening, server assessment, supply chain review | Threat modeling workshops, architecture reviews, gap analysis |
| Scoring | AIVSS (0-10 scale) | Threat severity by category | Per-layer risk assessment |
| Regulatory alignment | EU AI Act, NIST AI RMF | Industry consortium (CoSAI) | NIST AI RMF (Microsoft mapping) |
| Maturity | Released, expert-reviewed | Board-approved whitepaper | OWASP project, active development |
The practical workflow: start with the OWASP Top 10 to establish which risks are relevant to the organization. Use CoSAI to drill into MCP-specific exposure across those risk categories. Apply MAESTRO to model how those risks manifest across each architectural layer in the actual deployment. The three frameworks form a pipeline from awareness to assessment to architecture.
Framework alignment with Oktsec
Oktsec has been building against these threat models since before they were published as formal frameworks. The alignment is not incidental. The same threat patterns that these frameworks catalog are the ones we encounter daily through our scanning and observatory operations.
Against the OWASP Top 10: Aguara scans for tool misuse, prompt injection vectors, insecure deployment patterns, credential exposure, and data leakage paths across 173 YAML detection rules and 15 categories. Every OWASP risk category maps to at least one Aguara rule category. Our AI Agent Security Checklist was built on the same risk taxonomy.
Against CoSAI: The Aguara Observatory continuously monitors over 58,000 MCP servers across seven public registries, detecting exactly the protocol-level threats CoSAI catalogs: tool poisoning, transport security gaps, supply chain compromise, and server spoofing. Oktsec v0.6.0 introduced the MCP Gateway with a 9-step security pipeline that enforces transport hardening, SSRF protection, credential redaction, and per-tool ACLs, directly mitigating 8 of CoSAI's 12 threat categories.
Against MAESTRO: The Oktsec stack operates across five of MAESTRO's seven layers. At L3 (Agent Framework), Aguara validates tool descriptions and prompt templates. At L4 (Tool Integration), it scans server configurations and API surfaces. At L5 (Identity and Trust), the MCP Gateway enforces per-agent identity verification. At L6 (Inter-Agent Communication), it validates and audits cross-agent message flows. At L7 (Orchestration), monitoring and anomaly detection cover multi-agent coordination patterns.
The remaining gap: Frameworks provide structure. Tooling provides enforcement. The frameworks discussed here are necessary for understanding the threat model; automated scanning, continuous monitoring, and runtime enforcement are necessary for operationalizing it. That enforcement layer is the one Oktsec occupies.
What CISOs should do before RSA
Two weeks is enough time to establish a baseline. The goal is not to be fully compliant by March 23. The goal is to arrive at RSA with enough context to evaluate vendor claims, ask specific questions in framework sessions, and make informed decisions about which investments to prioritize. The EU AI Act Annex III enforcement date of August 2, 2026, adds a hard deadline for organizations operating in Europe.
Week 1: Inventory and assessment
- Inventory every agent in production. Most organizations cannot enumerate them. The CSA finding that 84% cannot pass an audit starts here. Document which agents exist, which tools each agent can access, and which data each agent can reach.
- Map agents to MAESTRO layers. For each agent, identify which of the seven layers are relevant. A single-agent RAG chatbot touches L1-L4. A multi-agent orchestration system touches all seven. The threat surface is proportional.
- Run the OWASP Top 10 as a checklist. For each risk, assess whether the organization has controls in place. Document gaps honestly. This is the artifact that will drive prioritization after RSA.
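The Week 1 inventory can be captured in a structure as simple as the one below. The fields are a suggested minimum drawn from the three bullets above, not a standard schema, and the sample record is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """One row of the Week 1 agent inventory."""
    name: str
    owner: str                                            # accountable team
    tools: list[str] = field(default_factory=list)        # callable tools
    data_scopes: list[str] = field(default_factory=list)  # reachable data
    maestro_layers: set[int] = field(default_factory=set) # layers touched
    owasp_gaps: list[str] = field(default_factory=list)   # unmitigated risks

inventory = [
    AgentRecord(
        name="support-triage",                 # hypothetical example agent
        owner="platform-team",
        tools=["ticket_api", "kb_search"],
        data_scopes=["customer_pii"],
        maestro_layers={1, 2, 3, 4},           # single-agent RAG pattern
        owasp_gaps=["Insufficient Logging and Monitoring"],
    ),
]

# The prioritization artifact for after RSA: agents with unmitigated risks.
gaps = {r.name: r.owasp_gaps for r in inventory if r.owasp_gaps}
```

An organization that can produce this table for every production agent has already cleared the first hurdle of the compliance audit that, per the CSA survey, 84% currently fail.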
Week 2: Scanning and hardening
- Scan MCP servers against the CoSAI taxonomy. Identify which of the 12 threat categories apply to each server. Prioritize transport security, authentication, and tool poisoning, as these three account for the highest-impact vulnerabilities.
- Establish a baseline score. Use AIVSS when it ships at RSA (or a proxy methodology now) to score the top five agent-related vulnerabilities. Baseline scores enable tracking improvement over time.
- Identify your RSA agenda. With inventory and assessment complete, select sessions strategically. The CoSAI presentation and the Zenity zero-click demo are high-signal for any CISO with agents in production.
Microsoft published a NIST AI Risk Management Framework mapping specifically for AI agent governance. It provides an additional alignment path for organizations already operating within the NIST framework. Together with the three frameworks analyzed here, it forms a comprehensive governance structure that satisfies both industry standards and emerging regulatory requirements.
Enterprise agent security assessment
Oktsec scans AI agent infrastructure against the same threat models that OWASP, CoSAI, and MAESTRO define. 188 detection rules. 58,000+ servers monitored. Continuous assessment.
Related reading
- AI Agent Security: Checklist and Guide — oktsec.com
- Oktsec v0.6.0: MCP Gateway and Security Hardening — oktsec.com
- We scanned 28,000 AI agent skills for security threats — aguarascan.com