What started as a single tool to scan AI agent skill files turned into a full security ecosystem: Aguara (scanner + observatory + MCP server) and Oktsec (a security proxy for agent-to-agent communication). Each piece exists because the previous one needed it.

This article explains the platform strategy behind that progression, why each component creates compounding value, and what this means for the AI agent security market.

42,655 Skills monitored
7 Registries crawled
148 Detection rules
14 Days to ship

The compound loop

Most security tools are static. Ship a scanner, maintain the rules, sell licenses. The detection quality plateaus once the initial rule set is written. Aguara was built differently, and the difference is structural.

The scanner produced rules, but rules need data to validate. So Aguara Watch was built: an observatory that crawls every public MCP registry, downloads every skill and server definition, and scans them all. Not to build a dashboard. To build a feedback loop.

The feedback loop runs like this:

  1. The scanner produces findings against 42,655 skills from 7 registries.
  2. The findings reveal false positives. 938 findings reclassified across 4 rounds of analysis. Rules get adjusted, context-aware exclusions added, severity thresholds calibrated.
  3. The tuned scanner produces better data. Higher signal, lower noise. The observatory becomes more useful.
  4. Production patterns become new rules. Encoded reverse shells, hidden instructions in HTML comments, credential templates in configuration schemas. Patterns that no test suite would generate.
  5. The MCP server closes the loop at the agent level. AI agents scan skills before installing them, using rules validated against real-world data. The agent benefits from the entire cycle.

Data improves rules. Rules improve data. Each cycle makes the next one more valuable.

The strategic advantage: why this compounds

Three properties of this architecture create durable competitive advantage.

1. The data moat

Aguara Watch scans 42,655 skills across 7 registries every 6 hours. This is the largest continuous security analysis of the AI agent ecosystem. The data is public (open observatory), but the insight it generates is proprietary: which rule patterns produce false positives, which attack vectors appear in production, and how the threat landscape evolves over time.

A new entrant can build a scanner. They cannot replicate 4 rounds of FP reduction against 42,655 real skills without first building the crawling infrastructure and accumulating the scanning history. The data advantage compounds with every crawl cycle.

2. The open-source wedge

Aguara is open-source (Apache-2.0). The scanner, observatory, and MCP server are all publicly available. This is not a freemium strategy. It is a distribution strategy.

Open-source creates three effects simultaneously:

  • Distribution. Developers adopt without procurement cycles. The scanner installs in one command. The MCP server installs in two.
  • Trust. The detection logic is auditable. Every rule is visible, every false positive can be traced. For enterprise security teams evaluating vendor claims, transparency is a requirement.
  • Data generation. Every user of the scanner, the observatory, or the MCP server generates signal about real-world usage patterns. This feeds back into rule quality.

3. Full-stack positioning

The AI agent security stack has four layers: pre-deployment scanning (Aguara CLI), continuous monitoring (Aguara Watch), agent-level protection (Aguara MCP), and runtime enforcement (Oktsec). Most approaches address one layer. The Aguara/Oktsec stack covers all four, with a shared detection engine.

This matters for two reasons. First, enterprise customers will consolidate vendors. A CISO evaluating agent security wants one vendor covering the full stack, not four point solutions. Second, the shared engine means improvements propagate automatically. A rule tuned against observatory data immediately benefits the MCP server, the CLI scanner, and the Oktsec proxy.
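One way to picture the shared-engine property (a minimal sketch, not the actual codebase; every name here is invented for illustration): all four frontends funnel through a single detection function, so a rule tuned once lands everywhere at once.

```python
# Hypothetical sketch: one detection engine, many frontends.
# RULES, detect, and the three frontend functions are illustrative names,
# not Aguara's or Oktsec's real API.
RULES = [{"id": "CRED-001", "pattern": "AWS_SECRET"}]

def detect(text):
    """The single shared engine: every layer calls this."""
    return [r["id"] for r in RULES if r["pattern"] in text]

def cli_scan(path_contents):            # pre-deployment: CLI scanner
    return {path: detect(body) for path, body in path_contents.items()}

def mcp_check(skill_text):              # agent-level: MCP server
    findings = detect(skill_text)
    return {"safe": not findings, "findings": findings}

def proxy_filter(message):              # runtime: enforcement proxy
    return None if detect(message) else message  # block flagged traffic

# A rule added or tuned in RULES propagates to all layers automatically:
assert cli_scan({"skill.md": "export AWS_SECRET=..."}) == {"skill.md": ["CRED-001"]}
assert mcp_check("hello") == {"safe": True, "findings": []}
assert proxy_filter("leak AWS_SECRET now") is None
```

The design choice this illustrates: because the frontends own no detection logic of their own, there is exactly one place to improve, and no layer can drift out of sync with the others.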

Market timing

Gartner projects that 40% of enterprise applications will include agentic AI components by the end of 2026. The OWASP Top 10 for Agentic Applications now classifies prompt injection, tool poisoning, and excessive agency as critical risks. 3 million AI agents are deployed globally, but only 14.4% operate with security approval.

The security layer between AI agents does not exist yet. Every enterprise deploying agents needs scanning before runtime, monitoring in production, and enforcement at the communication layer. The market is forming now.

The organizations that deploy agent security infrastructure before the first major agent-mediated breach will have a material advantage in compliance posture and vendor positioning. The organizations that wait will find themselves retrofitting security into architectures that were never designed for it.

AI-accelerated development

148 commits in 14 days. This velocity is not explained by working harder. It is explained by a specific development pattern: using AI agents at every stage of the build process.

Knowing what to build is the hard part. The decision to build an observatory instead of more test fixtures. The decision to expose the scanner as an MCP server instead of only a CLI. The decision to run FP reduction in rounds against production data. These are strategic decisions that come from domain expertise.

The AI compresses the execution. Writing crawlers for 7 different registry APIs. Implementing cursor-based pagination. Building the FP export pipeline. Generating SARIF output for CI integration. These are well-defined engineering tasks where an AI agent with the right context produces working code faster than writing it manually.

The combination is the real multiplier. The human sets direction and makes architectural decisions. The AI handles implementation at high speed. The gap between deciding what to build and having it built effectively disappears.

This development pattern is itself a competitive advantage. When the feedback loop generates a new insight (a novel attack pattern, a false positive cluster), the time from insight to shipped rule is measured in hours, not sprints.

What the numbers mean

Metric             Value                Why it matters
Skills monitored   42,655               Largest continuous AI agent security dataset
Registries         7                    Complete coverage of the public MCP ecosystem
Detection rules    148 (15 categories)  Covers 7 of 10 OWASP Agentic risks
FP reclassified    938                  4 rounds of production-validated tuning
MCP clients        17                   Agent-level distribution across every major client
OpenClaw rules     15                   Rapid response to emerging threats
Scan frequency     4x daily             Near real-time threat intelligence

The numbers represent a system, not a product. Each metric connects to the others. More skills monitored means better FP reduction. Better FP reduction means higher-quality rules. Higher-quality rules mean a more useful MCP server. A more useful MCP server means more distribution. More distribution means more signal. The flywheel compounds.

The path forward

The open-source scanner creates distribution. The observatory creates the data moat. The MCP server creates agent-level adoption. Oktsec creates the enterprise revenue layer, extending the same detection engine with cryptographic identity, policy enforcement, and audit trails for agent-to-agent communication.

Each layer of the stack builds on the previous one. Each cycle of the feedback loop makes the next one more valuable. This is not a linear product roadmap. It is a compounding system.

The security layer between AI agents does not exist yet. The team that builds it first, with real data, real detection quality, and real enterprise distribution, will define the category. That is what Oktsec is building.

Explore the Oktsec platform

Open-source scanner. Real-time observatory. Enterprise security layer. One detection engine.