Basics & Security Analysis of AI Protocols: MCP, A2A, and AP2
A security analysis of the AI protocols shaping the future of agentic systems. MCP, A2A, and AP2 form the backbone of these systems, but without strong safeguards they could expose the next generation of AI infrastructure to serious security risks.

The AI industry is heading into an agent-driven future, and three protocols are emerging as the plumbing for AI: Anthropic's Model Context Protocol (MCP), Google's Agent-to-Agent (A2A) protocol, and the newly announced Agent Payments Protocol (AP2). Each is critical for AI infrastructure, but as we've learned repeatedly in cybersecurity, convenience and security rarely come hand in hand.
Having analyzed these protocols from both technical implementation and security perspectives, the picture that emerges is both promising and deeply concerning. We're building the interstate highway system for AI agents, but we're doing it without proper guardrails, traffic controls, or even basic security checkpoints.
The Protocol Trinity: Different Problems, Converging Solutions
Model Context Protocol (MCP): The Universal Connector
MCP functions as a standardized bridge between AI models and external systems through a client-server architecture. MCP clients (embedded in applications like Claude Desktop, Cursor IDE, or custom applications) communicate with MCP servers that expose specific capabilities through a JSON-RPC-based protocol over stdio, SSE, or WebSocket transports.
In layman’s terms, it is essentially a universal connector that enables AI systems to communicate consistently with other software or databases. Apps use an MCP “client” to send requests to an MCP “server,” which performs specific actions in response.
Visual Representation:

Technical Architecture:
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "database_query",
    "arguments": {
      "query": "SELECT * FROM users WHERE department = 'engineering'",
      "connection": "primary"
    }
  },
  "id": "call_123"
}
Scenario: Automated Threat Investigation and Response
Context: A SOC team wants to speed up the triage of security alerts coming from their SIEM (like Splunk or Chronicle). Instead of analysts manually querying multiple tools, they use MCP as the bridge between their AI assistant and their operational systems.
How MCP Fits In
- MCP Client: The SOC’s AI analyst (say, Legion) is the MCP client. It acts as the interface through which analysts ask questions, such as: “Show me the last 10 failed logins for this user and correlate with firewall traffic.”
- MCP Server: On the backend, the MCP server exposes connectors to SOC systems, for example:
- Splunk or ELK (for log searches)
- CrowdStrike API (for endpoint data)
- Okta API (for authentication events)
- Jira or ServiceNow (for case creation)
- Each connector is defined as a “tool” in the MCP schema (e.g., query_siem, get_endpoint_status, create_ticket).
Workflow Example: AI Analyst (MCP Client) → MCP Server
method: "tools/call"
params:
  name: "query_siem"
  arguments:
    query: "index=auth failed_login user=jsmith | stats count by src_ip"
The MCP server runs the Splunk query, returns results, and the AI can then call another MCP tool:
  name: "get_endpoint_status"
  arguments:
    host: "192.168.1.22"
The AI correlates results, summarizes findings, and can automatically open an incident via:
  name: "create_ticket"
  arguments:
    severity: "High"
    summary: "Repeated failed logins detected for jsmith"
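The three-step workflow above can be sketched as plain JSON-RPC 2.0 request construction. A minimal sketch: the `make_tool_call` helper and the tool names (`query_siem`, `get_endpoint_status`, `create_ticket`) are illustrative of the pattern, not part of any specific MCP SDK or server.

```python
import json

def make_tool_call(call_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 'tools/call' request as used by MCP."""
    return {
        "jsonrpc": "2.0",
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
        "id": call_id,
    }

# Step 1: query the SIEM for failed logins (hypothetical tool name)
siem_call = make_tool_call(
    "call_001", "query_siem",
    {"query": "index=auth failed_login user=jsmith | stats count by src_ip"},
)

# Step 2: pivot to endpoint status using a result from step 1
endpoint_call = make_tool_call(
    "call_002", "get_endpoint_status", {"host": "192.168.1.22"},
)

# Step 3: open a ticket summarizing the correlation
ticket_call = make_tool_call(
    "call_003", "create_ticket",
    {"severity": "High", "summary": "Repeated failed logins detected for jsmith"},
)

for call in (siem_call, endpoint_call, ticket_call):
    print(json.dumps(call))
```

Each request here would be serialized and sent over the server's transport (stdio, SSE, or WebSocket); the AI chains calls by feeding results from one response into the arguments of the next.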
Security Considerations
- Credential aggregation risk: One compromised MCP client could expose multiple API keys (SIEM, EDR, etc.).
- Schema poisoning: If an attacker injects malicious JSON schema data, it could alter what the AI interprets or requests.
- Mitigation: Use Docker MCP Gateway interceptors and strict per-tool access scopes.
Agent-to-Agent (A2A): The Coordination Protocol
A2A enables autonomous agents to discover and communicate through standardized Agent Cards served over HTTPS and JSON-RPC communication patterns. The protocol supports three communication models: request/response with polling, Server-Sent Events for real-time updates, and push notifications for asynchronous operations.
Basically, A2A lets AI agents automatically find, connect, and collaborate with each other safely and efficiently, with no humans in the loop.
Visual Representation:

Technical Protocol Structure:
{
"agent_id": "procurement-agent-v2.1",
"version": "2.1.0",
"skills": [
{
"name": "vendor_evaluation",
"description": "Analyze vendor proposals against procurement criteria",
"parameters": {
"criteria": {"type": "object"},
"proposals": {"type": "array"}
}
}
],
"communication_modes": ["request_response", "sse", "push"],
"security_requirements": {
"authentication": "oauth2",
"encryption": "tls_1.3_minimum"
}
}
Scenario: Automated Incident Collaboration Between Security Agents
Context: Your SOC runs multiple specialized AI agents: one monitors network traffic, another investigates suspicious users, another handles remediation actions (like isolating a device or resetting credentials). A2A provides the common protocol that lets these agents talk to each other directly, securely, automatically, and in real time.
How It Works in Practice
- Agent Discovery via Agent Cards
- Each SOC agent publishes an Agent Card, a digital profile that says:
- “I’m a Threat Detection Agent.”
- “I can analyze network logs and spot anomalies.”
- “Here’s how to contact me securely.”
- The A2A system keeps these cards available over HTTPS, so other agents can find and verify them.
Example:
{
"agent_id": "threat-detector-v2",
"skills": ["network_log_analysis", "malware_pattern_detection"],
"authentication": "oauth2",
"encryption": "tls_1.3"
}
- Agent-to-Agent Workflow
- The Threat Detection Agent flags unusual outbound traffic from a server.
- It sends a message via A2A to the Endpoint Response Agent, saying: “Investigate host server-22 for potential C2 beacon activity.”
- The Endpoint Agent checks EDR data and replies with a summary or alert.
- Simultaneously, it notifies the Incident Coordination Agent to open a ticket in ServiceNow.
- Communication Models in Action
- Request/Response: Threat Detector asks → Endpoint Agent replies.
- Server-Sent Events: Endpoint Agent streams live scan results back.
- Push Notification: Incident Coordinator gets notified once a full report is ready.
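The request/response exchange above can be sketched as a simple message hand-off between two agents. This is a minimal illustration under assumptions: the `A2AMessage` shape, agent IDs, and `handle` function are hypothetical stand-ins, not the normative A2A wire format.

```python
from dataclasses import dataclass

@dataclass
class A2AMessage:
    """Illustrative A2A-style message, not the normative protocol schema."""
    sender: str
    recipient: str
    task: str
    mode: str = "request_response"  # could also be "sse" or "push"

def handle(message: A2AMessage) -> A2AMessage:
    """Endpoint Response Agent: reply to an investigation request."""
    if message.task.startswith("Investigate host"):
        summary = "No C2 beacon found; flagged traffic was a backup job."
        return A2AMessage(message.recipient, message.sender, summary)
    return A2AMessage(message.recipient, message.sender, "Unsupported task")

# Threat Detection Agent asks, Endpoint Response Agent replies
request = A2AMessage(
    sender="threat-detector-v2",
    recipient="endpoint-responder-v1",
    task="Investigate host server-22 for potential C2 beacon activity",
)
reply = handle(request)
print(reply.task)
```

In a real deployment the same exchange would travel over HTTPS with OAuth2 authentication and the agents would locate each other via their Agent Cards; the control flow, however, is exactly this ask-and-reply loop.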
Critical Security Concerns
- Agent Card Spoofing: Malicious agents advertising false capabilities through manipulated HTTPS-served metadata
- Capability Hijacking: Compromised agents with inflated skill advertisements capturing disproportionate task assignments
- Communication Channel Attacks: Man-in-the-middle and session hijacking on agent-to-agent communications
- Workflow Injection: Malicious agents inserting unauthorized tasks into legitimate multi-agent workflows
Agent Payments Protocol (AP2): The Commerce Enabler
AP2 extends A2A with cryptographically-signed Verifiable Digital Credentials (VDCs) to enable autonomous financial transactions. The protocol implements a two-stage mandate system using ECDSA signatures and supports multiple payment rails, including traditional card networks, real-time payment systems, and blockchain-based settlements.
Basically, AP2 lets AI agents make trusted, auditable payments automatically without a human typing in a credit card number.
Visual Representation:

Technical Mandate Structure:
{
"intent_mandate": {
"mandate_id": "im_7f8e9d2a1b3c4f5e",
"user_id": "enterprise_user_12345",
"conditions": {
"item_category": "cloud_services",
"max_amount": {"value": 5000, "currency": "USD"},
"vendor_whitelist": ["aws", "gcp", "azure"],
"approval_threshold": {"value": 1000, "requires_human": true}
},
"signature": "304502210089abc...",
"timestamp": "2025-01-15T10:30:00Z",
"expires_at": "2025-01-16T10:30:00Z"
},
"cart_mandate": {
"mandate_id": "cm_8g9h0e3b2c4d5f6g",
"references_intent": "im_7f8e9d2a1b3c4f5e",
"line_items": [
{
"vendor": "aws",
"service": "ec2_reserved_instances",
"amount": {"value": 3500, "currency": "USD"},
"contract_terms": "1_year_reserved"
}
],
"payment_method": "corporate_card_ending_1234",
"signature": "3046022100f4def...",
"execution_timestamp": "2025-01-15T11:45:00Z"
}
}
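The signature fields in the mandates above follow a sign-then-verify pattern over a canonical encoding. AP2 specifies ECDSA; to keep this sketch dependency-free, HMAC-SHA256 from the standard library is used as a stand-in, and the key and mandate contents are illustrative.

```python
import hashlib
import hmac
import json

# AP2 mandates are ECDSA-signed; this sketch substitutes HMAC-SHA256
# (stdlib-only) purely to illustrate signing a canonical mandate encoding.
SIGNING_KEY = b"demo-agent-key"  # stand-in for the agent's private key

def canonical(mandate: dict) -> bytes:
    """Deterministic JSON encoding so signatures are reproducible."""
    return json.dumps(mandate, sort_keys=True, separators=(",", ":")).encode()

def sign(mandate: dict) -> str:
    return hmac.new(SIGNING_KEY, canonical(mandate), hashlib.sha256).hexdigest()

def verify(mandate: dict, signature: str) -> bool:
    return hmac.compare_digest(sign(mandate), signature)

intent = {
    "mandate_id": "im_7f8e9d2a1b3c4f5e",
    "conditions": {"max_amount": {"value": 5000, "currency": "USD"}},
}
sig = sign(intent)
assert verify(intent, sig)

# Any post-signing tampering (e.g., inflating the spending cap) breaks the signature
tampered = dict(intent, conditions={"max_amount": {"value": 50000, "currency": "USD"}})
assert not verify(tampered, sig)
```

The key property is the same regardless of algorithm: the Cart Mandate references the signed Intent Mandate, so any change to the amount, vendor, or conditions after approval invalidates the chain.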
Scenario: Secure Autonomous Cloud Resource Payments
Context: Your company’s AI agents automatically manage cloud infrastructure — spinning up or shutting down virtual machines based on workload. To do that, they sometimes need to authorize and execute payments (e.g., buying more compute time or storage). AP2 allows those agents to make these payments automatically — but with strong security guardrails.
How It Works
- Step 1 – Intent Mandate (the plan)
- The agent first creates an Intent Mandate describing what it wants to do. Example: “Purchase $2,000 worth of AWS compute credits for Project Orion.”
- This mandate includes:
  - Vendor whitelist (AWS only)
  - Spending cap ($5,000 max)
  - Expiry time (valid for 24 hours)
  - Digital signature (ECDSA) proving it came from an authorized agent
- A human or rule engine reviews this intent before any money moves.
- Step 2 – Cart Mandate (the action)
- Once the intent is approved, the agent generates a Cart Mandate — the actual payment order.
- It references the original intent, ensuring the details match (no one changed the vendor or amount).
- This mandate is also cryptographically signed and executed via a secure payment rail (e.g., corporate card API or blockchain payment).
- Security Enforcement During Payment
- Independent validator checks that:
  - The intent and cart match exactly.
  - The agent’s digital credential is still valid (hasn’t been revoked).
  - The payment doesn’t exceed limits or policy.
- Real-time monitoring watches for anomalies:
  - Multiple large payments in short time windows
  - Changes to vendor lists
  - Repeated failed authorizations
- Audit & Traceability
- Every mandate (intent and payment) is stored with its cryptographic proof.
- Auditors can later verify every transaction end-to-end.
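The validator checks described above can be sketched as a standalone rule engine that runs outside the AI agent's context. The field names follow the mandate JSON shown earlier; the revocation set is a hypothetical stand-in for a credential registry.

```python
from datetime import datetime, timezone

REVOKED_CREDENTIALS = set()  # hypothetical stand-in for a revocation registry

def validate_cart(intent: dict, cart: dict, now: datetime) -> list:
    """Return a list of policy violations; an empty list means the cart may execute."""
    violations = []
    # The cart must reference the exact intent that was approved
    if cart["references_intent"] != intent["mandate_id"]:
        violations.append("cart does not reference the approved intent")
    # The intent must still be within its validity window
    if now > datetime.fromisoformat(intent["expires_at"]):
        violations.append("intent mandate has expired")
    # The agent's credential must not have been revoked
    if intent["user_id"] in REVOKED_CREDENTIALS:
        violations.append("agent credential has been revoked")
    # Total spend must stay under the cap set in the intent
    cap = intent["conditions"]["max_amount"]["value"]
    total = sum(item["amount"]["value"] for item in cart["line_items"])
    if total > cap:
        violations.append(f"cart total {total} exceeds cap {cap}")
    # Every vendor must appear on the intent's whitelist
    allowed = set(intent["conditions"]["vendor_whitelist"])
    for item in cart["line_items"]:
        if item["vendor"] not in allowed:
            violations.append(f"vendor {item['vendor']} not whitelisted")
    return violations

intent = {
    "mandate_id": "im_7f8e9d2a1b3c4f5e",
    "user_id": "enterprise_user_12345",
    "conditions": {
        "max_amount": {"value": 5000, "currency": "USD"},
        "vendor_whitelist": ["aws", "gcp", "azure"],
    },
    "expires_at": "2025-01-16T10:30:00+00:00",
}
cart = {
    "references_intent": "im_7f8e9d2a1b3c4f5e",
    "line_items": [{"vendor": "aws", "amount": {"value": 3500, "currency": "USD"}}],
}
print(validate_cart(intent, cart, datetime(2025, 1, 15, 12, 0, tzinfo=timezone.utc)))  # []
```

Because the validator only reads signed mandate data and fixed policy, a prompt-injected agent cannot talk its way past it; the worst it can do is submit a cart that fails these checks.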
Security Benefits
Cryptographic Signatures: Ensures that only verified agents can create or authorize payments.
Two-Stage Mandate System: Prevents “prompt injection” or unauthorized payments by requiring two consistent steps (intent → execution).
Vendor Whitelisting & Spending Caps: Limits the blast radius of any compromise.
Cross-Protocol Correlation: AP2 can check MCP/A2A activity logs before allowing a transaction — ensuring payment actions match legitimate workflows.
Immutable Audit Trail: Every payment is traceable, signed, and non-repudiable.
Without these controls, a single compromised AI could:
- Create fake purchase requests (“buy 1000 GPUs from an attacker’s vendor”)
- Manipulate prices between intent and payment
- Execute valid-looking, cryptographically signed frauds
That’s why AP2’s mandate validation and signature chaining are essential. They make it nearly impossible for a rogue or manipulated agent to spend money unchecked.
Architectural Convergence
What's fascinating is how these protocols complement each other in ways that suggest a coordinated vision for agentic infrastructure:
- MCP provides vertical integration (agent-to-tool)
- A2A enables horizontal integration (agent-to-agent)
- AP2 adds transactional capability (agent-to-commerce)
The intended architecture is clear: an AI agent uses MCP to access your calendar and email, A2A to coordinate with specialized booking agents, and AP2 to complete transactions autonomously. It's elegant in theory, but the security implications are staggering.
Implementation Recommendations: Protocol-Specific Security Controls
MCP Security Implementation
Mandatory Tool Validation Framework: Deploy comprehensive MCP server scanning that extends beyond basic description fields:
Static Analysis Requirements:
- Scan all tool metadata (names, types, defaults, enums)
- Source code analysis for dynamic output generation logic
- Linguistic pattern detection for embedded prompts
- Schema structure validation against known-good templates
Runtime Protection with Docker MCP Gateway: Implement Docker's MCP Gateway interceptors for surgical attack prevention:
# Example: Repository isolation interceptor
def github_repository_interceptor(request):
    if request.tool == 'github':
        session_repo = get_session_repo()
        if session_repo and request.repo != session_repo:
            raise SecurityError("Cross-repository access blocked")
    return request
Continuous Behavior Monitoring: Deploy real-time MCP activity analysis:
- Tool call frequency analysis to detect automated attacks
- Data access pattern monitoring for unusual correlation activities
- Output analysis for prompt injection indicators
- Cross-tool interaction mapping to identify attack chains
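The first of those monitoring controls, tool-call frequency analysis, can be sketched as a sliding-window rate check. The window and threshold values here are illustrative placeholders, not tuning recommendations.

```python
from collections import deque
import time

class ToolCallRateMonitor:
    """Flag a tool when its call rate in a sliding window exceeds a threshold.

    A minimal sketch of the 'tool call frequency analysis' control described
    above; window size and max_calls are illustrative values.
    """

    def __init__(self, window_seconds=60, max_calls=30):
        self.window = window_seconds
        self.max_calls = max_calls
        self.calls = {}  # tool name -> deque of call timestamps

    def record(self, tool: str, now: float) -> bool:
        """Record one call; return True if the tool is over its rate limit."""
        q = self.calls.setdefault(tool, deque())
        q.append(now)
        # Drop timestamps that have aged out of the window
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_calls

monitor = ToolCallRateMonitor(window_seconds=60, max_calls=5)
t0 = time.time()
flags = [monitor.record("query_siem", t0 + i) for i in range(7)]
print(flags)  # [False, False, False, False, False, True, True]
```

A production deployment would feed these flags into the same pipeline as the data-access and cross-tool checks, since automated attacks tend to trip several of them at once.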
A2A Security Architecture
Agent Authentication Infrastructure: Implement certificate-based mutual authentication for all agent communications:
Agent Registration Process:
- Certificate generation with organizational root CA
- Agent Card cryptographic signing with private key
- Capability verification through controlled testing
- Regular certificate rotation (30-day maximum)
Communication Security Controls: Establish secure communication channels with comprehensive auditing:
Required A2A Security Headers:
- X-Agent-ID: Cryptographically verified agent identifier
- X-Capability-Hash: Tamper-evident capability fingerprint
- X-Session-Token: Short-lived session authentication
- X-Audit-ID: Immutable audit trail identifier
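The X-Capability-Hash header above implies a verifiable fingerprint over an agent's advertised skills. A minimal sketch, assuming SHA-256 over a sorted capability list; the actual derivation would be defined by your deployment, and the header values here are hypothetical.

```python
import hashlib
import json

def capability_hash(capabilities: list) -> str:
    """Tamper-evident fingerprint over a sorted capability list (illustrative
    derivation; a real deployment would pin its own canonical scheme)."""
    canon = json.dumps(sorted(capabilities), separators=(",", ":")).encode()
    return hashlib.sha256(canon).hexdigest()

def verify_headers(headers: dict, expected_capabilities: list) -> bool:
    """Reject requests missing required headers or advertising altered skills."""
    required = {"X-Agent-ID", "X-Capability-Hash", "X-Session-Token", "X-Audit-ID"}
    if not required.issubset(headers):
        return False
    return headers["X-Capability-Hash"] == capability_hash(expected_capabilities)

headers = {
    "X-Agent-ID": "threat-detector-v2",
    "X-Capability-Hash": capability_hash(
        ["network_log_analysis", "malware_pattern_detection"]
    ),
    "X-Session-Token": "st_short_lived",
    "X-Audit-ID": "audit_001",
}
print(verify_headers(headers, ["malware_pattern_detection", "network_log_analysis"]))  # True
```

Sorting before hashing makes the fingerprint order-independent, so two agents advertising the same skill set in different orders still produce the same hash.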
Agent Capability Verification System: Never trust advertised capabilities without independent verification:
class AgentCapabilityVerifier:
    def verify_agent(self, agent_card):
        test_results = self.sandbox_test(agent_card.capabilities)
        capability_match = self.validate_capabilities(test_results)
        return self.issue_capability_certificate(capability_match)
AP2 Security Implementation
Mandate Validation Infrastructure: Implement independent mandate validation outside AI agent context:
Multi-Stage Validation Process:
- AI-generated Intent Mandate creation
- Independent rule-engine validation of mandate logic
- Human approval workflow for high-value transactions
- Cryptographic signing with organizational keys
- Real-time transaction monitoring against mandate parameters
Payment Transaction Monitoring: Deploy comprehensive payment pattern analysis:
class AP2TransactionMonitor:
    def analyze_payment(self, mandate, transaction):
        risk_score = self.calculate_risk_score(
            user_history=self.get_user_patterns(),
            agent_behavior=self.get_agent_patterns(),
            transaction_details=transaction,
            mandate_consistency=self.validate_mandate(mandate)
        )
        if risk_score > THRESHOLD:
            return self.trigger_additional_verification()
Cross-Protocol Security Integration: Deploy unified monitoring across MCP, A2A, and AP2:
class CrossProtocolSecurityOrchestrator:
    def monitor_agent_workflow(self, workflow_id):
        mcp_activity = self.monitor_mcp_calls(workflow_id)
        a2a_communications = self.monitor_agent_interactions(workflow_id)
        ap2_transactions = self.monitor_payment_activity(workflow_id)
        # Correlate activities across protocols
        risk_assessment = self.correlate_cross_protocol_activity(
            mcp_activity, a2a_communications, ap2_transactions
        )
        if risk_assessment.is_suspicious():
            self.trigger_workflow_isolation(workflow_id)
The Broader IAM Implications
These protocols represent a fundamental shift in identity and access management. We're transitioning from human-centric IAM to agent-centric IAM, and our current security models are insufficient for this shift.
Derived Credentials will become essential as agents need to authenticate not just to services, but to each other. AP2's mandate system is an early attempt at this, but we need comprehensive frameworks for agent identity lifecycle management.
Contextual Authorization must replace simple role-based access control. Agents will need fine-grained permissions that adapt to context, user intent, and risk levels.
Audit Trails become exponentially more complex when multiple agents coordinate across multiple systems to complete user requests. We need new forensic capabilities for multi-agent investigations.
Bottom Line: The Infrastructure We Build Today Shapes Tomorrow's Security Landscape
After spending months analyzing these protocols and watching the industry rush toward agentic implementation, I keep coming back to a fundamental truth: we're not just deploying new technologies. We're architecting the nervous system for autonomous digital commerce and operations.
MCP, A2A, and AP2 aren't just convenient APIs or communication standards. They represent the foundational infrastructure that will determine whether the agentic economy becomes a productivity revolution or a security catastrophe. The decisions we make about implementing these protocols today will echo through decades of digital infrastructure.
The security vulnerabilities I've outlined aren't theoretical concerns, but active attack vectors being demonstrated by researchers right now. Tool poisoning attacks against MCP are working in production environments. A2A agent spoofing is trivial to execute. AP2's mandate system can be subverted through the same prompt injection techniques we've known about for years.
Here's what gives me confidence: the collaborative approach emerging around these protocols. When Google open-sources A2A with 60+ industry partners, when Docker develops security interceptors for MCP, and when researchers rapidly disclose vulnerabilities and the community responds with patches, that is how robust infrastructure gets built.
Picture a senior analyst mid-investigation. Eight browser tabs open across CrowdStrike, VirusTotal, Defender, and Microsoft Entra. She's running a hunting query in one window, checking an IP reputation score in another. And somewhere in between, she's documenting. Taking screenshots, copying log entries into a case note, capturing context before it slips away.
This is the job. Investigations today aren't just about finding the threat. They're about moving across tools, pulling together evidence from a dozen different sources, and building a record that another analyst, or an auditor, or a manager, can actually follow. The documentation isn't a distraction from the work. It is part of the work.
Everyone in security has lived that.
Which raises a question that's been easy to ignore until now: if we wouldn't accept an analyst who said "trust me, I looked at it," why are we accepting that from AI agents?
Evidence Has Always Been the Standard
The reason SOC analysts document isn't distrust. It's precision. A good investigation has always meant showing your work. The summary an analyst writes is their claim, the insight they've drawn from what they saw. The screenshot is the fact. Undisputable evidence, captured at the moment of discovery. Together they tell the full story: here is what I found, and here is the proof.
Evidence gathering has always been a core part of the job. Screenshots and logs aren't bureaucratic overhead. They're how you distinguish signal from noise, how you close out audit findings, how you hand off a case without losing context.
You Wouldn't Accept "Trust Me" From an Analyst. Stop Accepting It From AI
We hold human analysts to a clear standard. When an analyst closes a case, we expect to see their work. The exact screen they reviewed, the exact query they ran, the exact result that informed their decision. A summary of what they found is a claim. The screenshot is the proof.
We should hold AI agents to the same standard.
Today, most AI SOC tools give you a verdict and a reason. The agent processed the alert, evaluated the indicators, and concluded it was malicious. But if you ask what it actually saw, you're directed to API logs and structured JSON responses. That's not evidence. That's a reconstruction built after the fact, from data that was never meant to be read by a human auditor in the first place.
The gap between what an AI agent did and what you can actually verify is where hallucination risk lives. A summary can sound confident and still be wrong. Without visual evidence captured at the moment of the decision, you have no way to know what the system actually encountered.
Legion operates differently. Instead of calling APIs, Legion navigates your source systems directly through the browser, the same way a human analyst would. It opens the actual system, reads the actual screen, and captures a screenshot of exactly what it sees at every step. The summary is the claim. The screenshot is the fact.
That's the standard we believe AI investigations should meet. And it's the only architecture that meets it.
How Legion Automates Evidence Gathering
Legion Evidence Gathering captures visual proof of every action Legion takes as it navigates your source systems, automatically, in real time.
Take a malware investigation spanning CrowdStrike, VirusTotal, and Defender. Legion opens the originating ticket, reads the case, and begins investigating. As it moves through each tool, it takes a screenshot at every step. The CrowdStrike detection page as it appeared. The VirusTotal result in context. The Defender hunting query and its output. Every interface, exactly as Legion saw it.
By the time an analyst opens the case, the full evidence gallery is already there. Screenshots organized sequentially, labeled by tool, timestamped, and ready to review. Not just a summary. Not just a log. The complete picture: the analysis and the visual evidence behind every conclusion.
And it stays there. Every investigation Legion runs is stored and searchable. When an auditor asks a question, when a peer analyst picks up a handoff, when someone needs to understand why a decision was made, you go back to the session and everything is right there. Every step. Every screen. Nothing reconstructed. Nothing missing.
Different alert types. Different toolchains. The same complete evidence gallery, every time.
This Is What Accountable AI Looks Like
We've always known what a good investigation looks like. You show your work. You back your conclusions with evidence. You leave a record that someone else can follow. Legion applies that same standard to every automated investigation it runs, without exception and without manual effort. The bar doesn't move because the analyst is an AI. It stays exactly where it's always been.
See Legion Evidence Gathering in action. Request a Demo
Legion automates evidence gathering during AI-driven investigations, capturing screenshots from live security tools at every step, so every conclusion is backed by visual proof.
SOC investigations range widely. Some are highly repeatable: every step defined, every decision documented. These work well and can be fully automated. But some investigations eventually reach a point where that breaks down: where the next step depends on what you just found, and on the judgment and intuition to know what it means.
You can see it clearly the moment you try to write it down. Some processes flow neatly from start to finish. But as soon as you move into more complex investigations, the cracks appear. You find yourself pulled into a spiral of edge cases, tool variations, and fallback paths. You add branches. Then branches on branches. And after all that effort, you almost always end up in the same place: where no rule applies, and only judgment, reasoning and intuition can take you further.

The Part You Can Never Quite Capture
SOC investigations don't all look the same. Some are fully deterministic: a user notification when an outgoing email gets blocked, no reasoning required. For these, consistency matters. The same steps, the same outcome, every time. Others are the opposite: novel threats with no fixed path, no known pattern, where only experience, intuition, and judgment can tell you what to do next. And many fall somewhere in between, where you start with structure and hit a point where judgment has to take over.
But even those flows have a ceiling. Take a phishing investigation. You can document the triage steps pretty cleanly: check the sender, analyze the headers, detonate the attachment, check the URLs. That part is routine and capturable. But the moment you find something suspicious, the investigation shifts. Now you need to reason about scope: is this part of a campaign, and who else was hit? That question has no fixed answer. You might search for other emails with the same subject, but any decent campaign will vary the lures across targets, changing subjects, sender names, and payload links to evade detection. You cannot match on a single field and call it done. You need to iterate: follow one thread, see what it reveals, adjust your search, go again. You are reading the environment in real time, making judgment calls at every step based on what the last one uncovered.
Those judgment points show up on every shift, on every alert that goes beyond the routine. Someone has to reason through them in the moment, with whatever context they have, under whatever pressure exists right now. Until 3am. Until a less experienced analyst picks it up. Until alert volume means there simply isn't time to think it through properly.
That reasoning is not pre-programmed. It emerges from the finding itself. It is what a senior analyst does instinctively, and until now there has been no way to replicate it at scale. Legion Investigator is built for that moment.
Your Environment. Your Logic. Your Investigator.
Legion Investigator is a goal-oriented AI agent that sits inside your investigation workflow at exactly the moments where reasoning takes over from execution, extending Legion's coverage across the full spectrum of SOC investigations, from fully deterministic workflows to complex open-ended investigations. You define its goal, you choose which tools and actions it is permitted to use, and you decide where it acts autonomously and where it checks in first.
Which category a given investigation falls into is sometimes obvious. But often it is a deliberate choice, one that should be yours to make based on your team's needs, your risk tolerance, and how much consistency versus flexibility the situation calls for. Where on that spectrum each investigation runs is yours to decide. Every boundary is one you set in advance and can trust will be respected. This is what makes Investigator the kind of AI enterprises can actually adopt: not just powerful, but designed from the ground up to operate within your constraints, your processes, and your level of trust.
Most AI SOC tools bring their own model of how investigations should work. Legion Investigator learns from how yours actually do. It builds its understanding from your team's recorded investigation sessions, the decisions they make, the paths they take, and the patterns that emerge across real cases in your environment. Over time, Legion builds a structured knowledge base specific to your organization, capturing your processes, your tooling, and your team's accumulated expertise. That knowledge is not just stored. It is actively used to improve your captured workflows and feeds directly into how Investigator reasons, prioritizes, and investigates.
And when we say your tools, we mean all of them. Legion Investigator works the way your analysts work, through the browser, with no integrations and no APIs required. Your SIEM, your EDR, your threat intelligence platforms, your homegrown applications, your legacy dashboards, your on-prem and cloud environments. You don’t rebuild your stack to fit the tool. The tool fits your stack.
The way it works reflects how investigations actually flow. An investigation might start in your SIEM with a set of routine queries, structured, reliable, repeatable. But when it reaches one of those decision points, you hand off to an Investigator with a goal: find the scope of breach, enrich the full context of what we have so far, identify what else was impacted across endpoints and cloud assets.
The Investigator takes that goal and works toward achieving it. It invokes the relevant tools, interprets what comes back, recalculates what to do next, and invokes again. It keeps going, step by step, until the goal is met. Not a single tool call with a result handed back to you. A full reasoning loop that runs until the work is done, across your security tools, your homegrown applications, and any AI agents already running in your environment. Investigator acts as the orchestrator, pulling in whatever is needed to get there.

Multiple Investigators can work together across a single investigation. One handles enrichment. Another determines scope of breach. A third drives containment based on what was actually found, not what was anticipated when the playbook was written.
And because trust matters, Investigator operates within guardrails. It works only with the tools and actions it’s been given permission to use. For anything higher risk, it asks before acting. You stay in control by setting the boundaries in advance and knowing they’ll be respected.

What This Changes
Legion Investigator opens up three things that weren't possible before.
Pick up where deterministic processes end
For investigations where you have structured steps, you can now embed an Investigator at exactly the points where structure runs out. The routine parts stay routine. The Investigator reasons further, and by the time you step in, the groundwork is already done.
Handle your long tail of alerts
For the long tail of investigations where you never had a well-defined flow to begin with, you can now hand them off end to end. The Investigator handles enrichment before you even open the case, drives containment the moment scope is confirmed, and picks up every judgment point in between. Give the Investigator the goal, set the guardrails, and let it run. No playbook required.
Every investigation, regardless of how well-defined it is, can now be handled with the depth of your best analyst, on every alert, on every shift. And for the first time, you control where on that spectrum each investigation runs. More structure where consistency matters. More autonomy where judgment, experience, and intuition are required. The balance is yours to set, and yours to change.
This is not about replacing analysts. It never was. There will always be moments that require human judgment, experience, and instinct, and no AI should pretend otherwise. What changes is everything around those moments. The analyst becomes the commander: setting goals, defining boundaries, sending investigators out into the environment to gather, reason, and report back. The calls that matter stay with you. The work that surrounds them no longer has to. Not because we built a smarter AI. Because we built one that learned from you.

Introducing Legion AI Investigator: AI that reasons where playbooks can't. Define the goal, set the guardrails, and let it investigate across your tools — no integrations required.
I spent a long time staring at screens that couldn't keep up. Not because the analysts weren't good, but because the volume, the speed, the sheer relentlessness of what we were defending against had already outpaced the model. Tier 1 is working a queue. Tier 2 is doing the same thing, slower with more context. Tier 3 is getting pulled into fires before they finish the last one. Humans are trying to move at machine speed. It never worked. We just found ways to cope with it not working.
On March 6th, the White House said it out loud. Its strategy document states directly that the administration will "rapidly adopt and promote agentic AI in ways that securely scale network defense." It calls for AI-powered cybersecurity solutions to defend federal networks and deter intrusions at scale. It frames the cyber workforce not as the primary defense mechanism, but as the strategic asset that designs and deploys the systems that do the actual defending.
That is not subtle. That is a pivot.
I've seen a lot of strategy documents come and go. Most of them describe the problem correctly and then propose solutions that require the same broken model to execute them. More analysts. More tools. More compliance frameworks that generate reports nobody reads. This one is different in a specific way. It acknowledges that human-speed defense has a ceiling, and the adversary has already blown past it.
This matters operationally. Not because government mandates translate directly to enterprise practice, but because the logic behind the mandate is undeniable and most organizations are about two or three incidents away from being forced to confront it themselves.
Here is what I actually read in that document when I strip away the political framing:
Threat actors are using AI to accelerate attack timelines and broaden their operational surface area. The gap between when something happens and when a human analyst understands what happened is widening. That gap is where organizations get compromised. The strategy is essentially acknowledging that the only viable response to AI-accelerated offense is AI-accelerated defense. Not AI-assisted. Not AI-augmented. AI that acts.
That is exactly what we built Legion to do.
Not because we read the strategy. Because we lived through the alternative. I've watched skilled analysts spend the first forty minutes of an investigation just gathering context. Pulling logs from one tool, cross-referencing with another, chasing an IP through three different platforms before they can even form a hypothesis. That is not a people problem. That is a workflow problem. And it compounds at scale until your senior analysts are doing glorified data retrieval and your tier 1 analysts are drowning in volume they were never equipped to handle alone.
Legion treats that problem directly. It captures how experienced analysts actually investigate, the sequences, the correlations, the judgment calls, and runs those workflows autonomously at the speed the threat environment requires. Not replacing the analyst, but removing the friction that slows the analyst. Campaign hunting, alert triage, IOC blocking, CVE impact assessment across your entire environment, running while your team focuses on what actually requires human judgment.
The strategy also makes a point worth taking seriously. Deploying autonomous AI in your environment without understanding what it's doing is not security. It's a different kind of exposure. The document calls for securing the entire AI technology stack, and that is not bureaucratic language. That is operational reality. Any organization rushing to adopt agentic capabilities without visibility into how those agents operate and what they can access has traded one risk for another.
The teams I respect are the ones asking both questions at the same time. How do we move at machine speed? And how do we maintain accountability over the systems doing it?
The strategy just told you where the industry is going. The question is whether your operations are positioned to keep pace with it, or whether you're still trying to scale a model that was already failing before the AI arms race began.
I know which answer I kept seeing at 3 am.

The White House just pivoted: human-speed cybersecurity has reached its ceiling. Discover why the shift to agentic AI is no longer optional and how Legion is bridging the gap between machine-speed threats and human-scale defense.



