
The Kill Chain No Longer Works When Your AI Agent Is the Threat

In September 2025, Anthropic detected a state-sponsored threat actor using an AI coding agent to conduct a largely autonomous cyber espionage campaign against roughly 30 global targets. The AI handled 80-90% of the tactical work on its own, performing reconnaissance, writing exploit code, and attempting lateral movement at machine speed.

This incident is troubling, but there’s a scenario that should worry defense teams even more: an attacker who doesn’t need to run the kill chain at all, because they’ve compromised an AI agent that already lives inside your environment. One that has the access, the permissions, and a legitimate reason to touch your systems every day.

A Framework Built for Human Attackers

The traditional cyber kill chain assumes that attackers must fight for every inch of access. It’s a model Lockheed Martin developed in 2011 to describe how adversaries move from initial planning to their ultimate objective, and it has shaped the way security teams think about detection ever since.

The idea is simple: attackers need to complete a sequence of steps, and defenders can interrupt that sequence at any point. Every stage an attacker has to pass through is another opportunity to catch them.

A typical intrusion moves through a sequence of stages:

  1. Initial access (exploiting vulnerabilities, phishing, stolen credentials)
  2. Establishing persistence without triggering alerts
  3. Internal reconnaissance to understand the environment
  4. Lateral movement to reach valuable data
  5. Privilege escalation when existing access isn’t enough
  6. Exfiltration while evading DLP controls

Each phase creates detection opportunities: endpoint security may catch the initial payload, network monitoring may spot unusual lateral movement, identity systems may flag privilege escalation, and SIEM correlations may tie together odd behavior across systems. The more steps an attacker has to take, the more likely they are to trip an alarm.
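As an illustration only, the defender’s view of this model can be sketched as a simple mapping from each stage to the control that typically gets a chance to observe it. The stage names and control mappings below are simplified assumptions, not a standard taxonomy:

```python
# Illustrative sketch: a simplified kill-chain model mapping each attacker
# stage to the control most likely to observe it. Stage names and mappings
# are simplified assumptions for illustration, not a formal framework.

KILL_CHAIN_DETECTION = {
    "initial_access":       "endpoint security (EDR) on payload delivery",
    "persistence":          "endpoint/identity telemetry on new autoruns or tokens",
    "internal_recon":       "network monitoring on unusual enumeration",
    "lateral_movement":     "network/identity logs on new host-to-host paths",
    "privilege_escalation": "identity systems flagging role or permission changes",
    "exfiltration":         "DLP and egress monitoring on abnormal data flows",
}

def detection_opportunities(stages_attempted):
    """Return the controls that get a chance to fire for a given intrusion path."""
    return [KILL_CHAIN_DETECTION[s] for s in stages_attempted if s in KILL_CHAIN_DETECTION]

# A human attacker walking the full chain crosses every tripwire;
# an agent that already has access crosses almost none of them.
print(detection_opportunities(["initial_access", "lateral_movement", "exfiltration"]))
```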

That’s why advanced threat actors like LUCR-3 and APT29 invest heavily in tradecraft, spending weeks living off the land and blending in with regular traffic. Even so, they leave artifacts: unusual login times, anomalous access patterns, small deviations from baseline behavior. These artifacts are exactly what modern detection systems are designed to catch.

The problem here, however, is that AI agents don’t really follow this playbook.

What an AI Agent Already Has

AI agents work fundamentally differently from human users. They run across systems, move data between applications, and operate continuously. If one is compromised, the attacker skips the entire kill chain – the agent itself becomes the kill chain.

Consider what an AI agent typically has access to. Its work history is a complete map of what data exists and where it lives. It likely pulls from Salesforce, pushes to Slack, syncs with Google Drive, and updates ServiceNow as part of its normal workflow. It has been granted broad permissions by design, often with admin-level access across multiple applications, and it already moves data between systems as part of its job.

An attacker who compromises that agent gets all of it immediately. They get the map, the access, the permissions, and a legitimate reason to move data. All the kill chain stages security teams have spent years learning to recognize? The agent skips them all by default.

This Threat Is Already Playing Out

The OpenClaw incident showed us what this looks like in practice:

Roughly 12% of the skills in its public marketplace were malicious. A critical RCE vulnerability allowed one-click compromise. More than 21,000 instances were publicly exposed. But the scariest part is what a compromised agent can reach once it’s connected to Slack and Google Workspace: messages, files, emails, and documents, with persistent memory across sessions.

The core problem is that security tools are designed to detect unusual behavior. When an attacker rides an AI agent’s existing workflow, everything looks normal: the agent accesses the systems it always accesses, moves the data it always moves, and runs at the times it always runs.

This is the gap defense teams now face.

How Reco Is Bridging the Visibility Gap

Protecting against compromised AI agents starts with knowing which agents are running in your environment, where they connect, and what permissions they hold. Most organizations have no inventory of the AI agents operating across their SaaS ecosystem. This is exactly the type of problem Reco is designed to solve.

Discover Every AI Agent in Play

Reco’s Agentic AI Security discovers every AI agent, embedded AI feature, and third-party AI integration across your SaaS environment, including shadow AI tools connected without IT approval.

Figure 1: Reco’s AI Agents Inventory, showing discovered agents and their connections to GitHub.
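For teams that want a feel for the discovery problem before adopting tooling, a rough manual first pass is to enumerate third-party OAuth grants in an identity provider and flag AI-looking apps. The sketch below uses the Google Workspace Admin SDK purely as an example; it is not how Reco works, and the keyword list, service-account file, and admin address are assumptions:

```python
# Hedged sketch: a manual first pass at shadow-AI discovery by listing
# third-party OAuth grants in Google Workspace. This is NOT Reco's method;
# the AI_KEYWORDS list, credentials file, and admin account are assumptions.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user.security"]

# Assumed: a service account with domain-wide delegation impersonating an admin.
creds = service_account.Credentials.from_service_account_file(
    "admin-sa.json", scopes=SCOPES).with_subject("admin@example.com")
directory = build("admin", "directory_v1", credentials=creds)

AI_KEYWORDS = ("gpt", "claude", "copilot", "cursor", "openai", "anthropic", "agent")

def ai_oauth_grants(user_email):
    """List OAuth grants for one user whose app name suggests an AI tool."""
    tokens = directory.tokens().list(userKey=user_email).execute().get("items", [])
    return [
        {"app": t.get("displayText", ""), "scopes": t.get("scopes", [])}
        for t in tokens
        if any(k in t.get("displayText", "").lower() for k in AI_KEYWORDS)
    ]

for grant in ai_oauth_grants("someone@example.com"):
    print(grant["app"], "->", grant["scopes"])
```

Even this narrow pass only covers one identity provider and one grant type; agents wired in through MCP servers, API keys, or embedded product features would not show up, which is why broader discovery matters.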

Map Access Scope and Blast Radius

For each agent, Reco maps which SaaS applications it connects to, which permissions it holds, and which data it can access. Reco’s SaaS-to-SaaS mapping shows exactly how agents are woven through your application ecosystem, revealing toxic combinations where AI agents chain systems together through MCP, OAuth, or API integrations, creating permission sprawl that no single application owner can see or approve in full.

Figure 2: Reco’s Information Graph showing a toxic combination between Slack and Cursor via MCP.
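To make the blast radius idea concrete, here is a minimal sketch of how agent-to-app connections can be represented as a graph and the reachable surface of one compromised agent computed. The agents, apps, and permissions are invented examples, and this is not Reco’s internal graph model:

```python
# Minimal sketch of a blast-radius computation over agent-to-app connections.
# The agents, apps, and permissions below are invented examples, not real data,
# and this is not Reco's internal graph model.
import networkx as nx

g = nx.DiGraph()
# Edges: agent -> SaaS app it can reach, annotated with the granted scope.
g.add_edge("support-bot", "Slack",        scope="read/write messages")
g.add_edge("support-bot", "Salesforce",   scope="read cases")
g.add_edge("dev-agent",   "GitHub",       scope="repo admin")
g.add_edge("dev-agent",   "Slack",        scope="read/write messages")
g.add_edge("Slack",       "Google Drive", scope="file links via integration")

def blast_radius(agent):
    """Everything transitively reachable if this one agent is compromised."""
    return sorted(nx.descendants(g, agent))

print("Compromising dev-agent exposes:", blast_radius("dev-agent"))
# -> ['GitHub', 'Google Drive', 'Slack']
```

The point of the graph view is exactly this transitivity: a permission that looks modest inside one application becomes a cross-system path once agents chain applications together.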

Flag the Riskiest Agents, Right-Size Their Access

Reco identifies which agents represent your greatest exposure by evaluating permission scope, cross-system access, and data sensitivity. Agents associated with emerging risks are automatically flagged. From there, Reco helps you right-size permissions and enforce governance, directly limiting what an attacker can do if an agent is compromised.

Figure 3: Reco’s AI posture assessment with security scores and IAM compliance findings.
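As a rough illustration of the prioritization idea, an exposure score can combine permission scope, the number of systems an agent touches, and the sensitivity of the data it can reach. The weights, factors, and example agents below are assumptions, not Reco’s scoring model:

```python
# Rough illustration of risk-scoring AI agents for prioritization.
# Weights, factor values, and example agents are assumptions, not Reco's model.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    admin_scopes: int        # number of admin-level permissions granted
    connected_systems: int   # distinct SaaS apps the agent can touch
    data_sensitivity: float  # 0.0 (public) .. 1.0 (regulated/confidential)

def exposure_score(a: Agent) -> float:
    """Higher score = bigger blast radius if this agent is compromised."""
    return 3.0 * a.admin_scopes + 1.0 * a.connected_systems + 5.0 * a.data_sensitivity

agents = [
    Agent("support-bot", admin_scopes=0, connected_systems=2, data_sensitivity=0.4),
    Agent("dev-agent",   admin_scopes=2, connected_systems=4, data_sensitivity=0.9),
]
for a in sorted(agents, key=exposure_score, reverse=True):
    print(f"{a.name}: {exposure_score(a):.1f}")
```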

Detect When an Agent Goes Rogue

Reco’s threat detection engine applies identity-based behavioral analysis to AI agents the same way it does to human identities, distinguishing normal behavior from suspicious deviations in real time.

Figure 4: Reco alert flagging an unauthorized ChatGPT connection to SharePoint.
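Conceptually, and only as a simplified sketch, identity-based behavioral analysis compares each agent action against that agent’s own historical baseline. The features, baseline values, and thresholds below are assumptions, not Reco’s detection logic:

```python
# Simplified sketch of identity-based behavioral analysis for an AI agent:
# compare each observed action against the agent's own historical baseline.
# Features, baseline data, and thresholds are assumptions, not Reco's logic.

BASELINE = {
    "support-bot": {
        "apps": {"Slack", "Salesforce"},   # apps the agent normally touches
        "active_hours": range(8, 20),      # its usual operating window (UTC)
        "max_records_per_run": 200,        # typical data volume per workflow run
    }
}

def is_suspicious(agent, app, hour, records_touched):
    """Flag actions that deviate from the agent's learned baseline."""
    base = BASELINE.get(agent)
    if base is None:
        return True  # unknown agent: treat as suspicious by default
    return (
        app not in base["apps"]
        or hour not in base["active_hours"]
        or records_touched > 5 * base["max_records_per_run"]
    )

# A normal workflow step vs. a bulk pull into an app the agent never uses.
print(is_suspicious("support-bot", "Salesforce", hour=10, records_touched=150))  # False
print(is_suspicious("support-bot", "SharePoint", hour=3,  records_touched=5000)) # True
```

The hard part in practice is that a compromised agent’s actions often fall inside these baselines, which is why per-identity baselining has to be combined with context such as new connections, scope changes, and unusual data destinations.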

What This Means for Your Team

The traditional kill chain assumed that attackers had to fight for every inch of access. AI agents upend that assumption entirely.

A single compromised agent hands an attacker legitimate access, a complete map of the environment, broad permissions, and built-in cover to move data, without a single step that looks like an intrusion.

Security teams that still focus exclusively on detecting attacker behavior will miss this. Attackers will be riding the existing workflows of your AI agents, invisible in the noise of normal operations.

Sooner or later, an AI agent in your environment will be targeted. Visibility is the difference between catching it early and finding out during incident response. Reco gives you that visibility, across your entire SaaS ecosystem, in minutes.

Learn more here: Request a Demo: Get Started with Reco.


