AI Agents: The Next Wave of Identity Dark Matter

The Rise of MCPs in Business
The Model Context Protocol (MCP) is fast becoming the quickest way to push LLMs from “talk” to real work. By providing structured access to applications, APIs, and data, MCP enables AI-driven agents that can retrieve information, take action, and automate end-to-end business workflows across the enterprise. This is already happening in production with horizontal assistants and embedded agents such as Microsoft Copilot, ServiceNow, Zendesk bots, and Salesforce Agentforce, with custom and vertical agents following close behind. This is in line with Gartner’s latest report, “Guardian Agents Market Guide”, where analysts note that the rapid business adoption of these AI agents significantly outpaces the regulatory maturity and policy controls needed to manage them.
We believe the core problem is that these AI “partners” do not behave like human employees:
- They don’t join or leave through HR
- They don’t file access requests through approval workflows
- Their accounts aren’t deprovisioned when projects end
They are often invisible to traditional IAM, and that is how they become identity dark matter: real access operating outside the management fabric. And agentic systems don’t just use access, they hunt for the path of least resistance. They are built to get the job done with minimal friction: fewer approvals, fewer notifications, fewer barriers. In identity terms, that means they will reach for whatever already works: in-app accounts, legacy service identities, long-lived tokens, API keys, and auth bypasses. And if it works once, they will reuse it.
Team8’s 2025 CISO Village Survey found:
- Almost 70% of businesses already use AI agents (any system that can respond and act) in production.
- Another 23% plan to deploy by 2026.
- Two-thirds build them in-house.
MCP adoption is not a question of if; it’s a question of how fast and how safely. It’s already here, and it’s only accelerating. Further complicating this is the reality of mixed environments. Gartner’s research suggests that organizations face major obstacles in managing these non-human identities because native platform controls and vendor protections rarely extend beyond their own cloud or platform boundaries. Without an independent oversight mechanism, cross-cloud agent interactions remain effectively unmanageable. The real question is whether your AI agents will be reliable partners, or the fastest-growing source of uncontrolled identity dark matter.
How Agent AI Abuses Identity Dark Matter
Agent AI, autonomous AI agents that can plan and execute multi-step tasks with minimal human input, is a powerful enabler but also a serious cyber threat. Notably, leading industry analysts expect that most unauthorized agent actions will stem from internal business policy violations, such as AI misbehavior or excessive information sharing, rather than from malicious external attacks.
The pattern of abuse we see is consistently the same: automated agents hunting for shortcuts:
- Enumerate what exists: The agent scans applications and integrations, enumerates users and tokens, and finds “alternative” authentication methods.
- Try what’s easiest first: Local accounts, stored credentials, long-lived tokens, anything that avoids new approvals.
- Lock in “good enough” access: Even low privilege goes a long way: read configuration files, pull logs, find secrets, map the organization’s structure.
- Escalate quietly: Find over-scoped tokens, stale entitlements, or dormant-but-privileged identities, and scale up with minimal noise.
- Operate at machine speed: Thousands of small actions across every system, too fast and too broad for humans to catch in time.
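As an illustration, the defender-side inverse of this pattern, finding the stale and ownerless credentials an agent would reach for first, can be sketched in a few lines. The inventory format, field names, and 90-day rotation threshold below are illustrative assumptions, not a real product schema:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical credential inventory; real data would come from vaults, IdPs, and app scans.
INVENTORY = [
    {"name": "svc-legacy-erp", "kind": "service_account", "last_rotated": "2019-03-01", "owner": None},
    {"name": "ci-deploy-key", "kind": "api_key", "last_rotated": "2025-04-01", "owner": "platform-team"},
    {"name": "zendesk-bot-token", "kind": "token", "last_rotated": "2021-07-15", "owner": None},
]

MAX_AGE = timedelta(days=90)  # illustrative rotation policy

def find_dark_matter(inventory, now=None):
    """Flag credentials that are stale, ownerless, or both:
    exactly the path of least resistance an agent would find first."""
    now = now or datetime.now(timezone.utc)
    findings = []
    for cred in inventory:
        rotated = datetime.fromisoformat(cred["last_rotated"]).replace(tzinfo=timezone.utc)
        stale = (now - rotated) > MAX_AGE
        orphaned = cred["owner"] is None
        if stale or orphaned:
            findings.append({"name": cred["name"], "stale": stale, "orphaned": orphaned})
    return findings
```

Running this over the sample inventory flags the never-rotated service account and the ownerless bot token, while the recently rotated, owned CI key passes: the same triage an attacker’s automation performs, just in reverse.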
The real danger here is the scale of the impact: one neglected identity becomes a reusable shortcut into the entire estate.
Dark Matter Risks
Beyond exploiting existing identity dark matter, MCP agents (AI agents that use the MCP protocol to connect to applications, other agents via A2A, APIs, and data sources), left unchecked, introduce hidden exposures of their own. Orchid sees this exposure daily:
- Over-permitted access: Agents are granted “god mode” so nothing breaks, and that privilege becomes the default operating state.
- Untracked usage: Agents can drive critical workflows through tools where logs are partial, inconsistent, or never correlated back to a human sponsor.
- Static authentication: Hard-coded tokens don’t just “live forever”; they become shared infrastructure across agents, pipelines, and environments.
- Regulatory blind spots: Auditors ask, “who authorized access, who used it, and what data was accessed?” Dark matter makes those answers slow, if not impossible.
- Privilege creep: Agents accumulate access over time because removing permissions is scarier than granting them, until an attacker inherits the drift.
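The “untracked usage” and “regulatory blind spot” gaps above come down to one missing record: an audit event that ties each agent action to a human sponsor and to the data it touched. A minimal sketch, with field names that are illustrative rather than any real MCP or vendor schema:

```python
import json
from datetime import datetime, timezone

def log_agent_action(agent_id, sponsor, tool, resource, sensitive=False):
    """Build one audit record linking an agent action back to its human sponsor.
    Sponsorless (orphaned) actions are flagged rather than silently accepted."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "sponsor": sponsor,
        "tool": tool,
        "resource": resource,
        "sensitive_data": sensitive,
    }
    if not sponsor:
        # Untracked usage: no one to answer the auditor's "who authorized this?"
        record["flag"] = "orphaned_action"
    return json.dumps(record)
```

With records like this, the auditor’s three questions (who authorized, who used, what data) become a query instead of an investigation.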
We believe that addressing these blind spots is consistent with Gartner’s observation that modern AI governance requires identity and access management to be tightly integrated with information governance. This ensures that organizations can dynamically account for data sensitivity and monitor agent behavior in real time instead of relying solely on static credentials.
AI agents are not just badgeless users. They are identity dark matter in the making: powerful, invisible, and beyond the reach of today’s IAM. And here is the uncomfortable part: even well-intentioned agents will use the dark matter. They don’t understand your organizational chart or your governance objectives; they understand what works. If an orphaned account or an over-scoped token is the fastest way to finish a task, it becomes the “right” choice.

Principles of Safe MCP Adoption
To avoid repeating the mistakes of the past (orphaned or overprivileged accounts, shadow IT, unmanaged keys, and invisible workloads), organizations need to adapt and apply core identity principles to AI agents. Gartner has introduced the concept of specialized “guardian” agents: AI-driven systems that continuously observe, evaluate, and enforce limits on active agents.
We recommend that organizations follow five core principles as they implement MCP-based solutions.
- Pair AI Agents with Human Sponsors: Every agent must be bound to a responsible operator. When that person changes roles or leaves, the agent’s access should change with them. We agree with Gartner on the need for ownership mapping, ensuring that the full lineage from creation to use is traceable to both the machine and its human owner.
- Dynamic, Contextual Access: AI agents should not hold static, permanent rights. Their privileges must be time-bound, session-aware, and limited to least privilege.
- Visibility and Auditability: Gartner recommends that organizations maintain a centralized AI agent catalog that includes all sanctioned, shadow, and third-party agents, along with continuous posture management and detection methods that flag anomalous behavior. In our view, every action taken by an AI agent should be logged, correlated with its human sponsor, and made available for review. This ensures accountability and prepares organizations for future compliance scrutiny. Visibility isn’t just about “who got in.” It must include data access: what the agent read, what it changed, what it sent, and whether that action touched regulated or sensitive data sets. Otherwise, you cannot distinguish “useful automation” from “silent data movement”.
- Governance at Enterprise Scale: MCP adoption must extend to both new and legacy systems within a single, consistent management fabric, so that security, compliance, and infrastructure teams are not working in silos. This is where Gartner also emphasizes the importance of an enterprise-managed monitoring layer, which ensures consistent control and reduces the risk of vendor lock-in as MCP adoption increases.
- Commitment to IAM Hygiene: As with all identities, strict hygiene around authentication flows, authorization scopes, and the controls in use, on both the application server and the MCP server, is essential to keep every identity within the right boundaries.
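The first two principles can be sketched together as a grant issuer that refuses standing credentials: every grant names a human sponsor, carries a deny-by-default scope list, and expires on its own. The scope names, grant shape, and 15-minute TTL are illustrative assumptions, not a specific product API:

```python
from datetime import datetime, timedelta, timezone

# Deny-by-default: anything not listed here is refused (illustrative scopes).
ALLOWED_SCOPES = {"read:tickets", "read:logs"}

def grant_access(agent_id, sponsor, scopes, ttl_minutes=15):
    """Issue a time-boxed, least-privilege grant instead of a permanent credential."""
    if not sponsor:
        raise ValueError("agent must be paired with a human sponsor")
    denied = set(scopes) - ALLOWED_SCOPES
    if denied:
        raise PermissionError(f"scopes not permitted: {sorted(denied)}")
    now = datetime.now(timezone.utc)
    return {
        "agent": agent_id,
        "sponsor": sponsor,
        "scopes": sorted(scopes),
        "issued_at": now.isoformat(),
        "expires_at": (now + timedelta(minutes=ttl_minutes)).isoformat(),
    }

def is_valid(grant, now=None):
    """A grant is only honored before its expiry; there is no 'forever' state."""
    now = now or datetime.now(timezone.utc)
    return now < datetime.fromisoformat(grant["expires_at"])
```

The design choice worth noting is that expiry is the default, not the exception: an agent that stops renewing simply loses access, instead of leaving another long-lived token behind.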

Big Picture
AI agents pose a unique challenge beyond mere integration. They represent a change in how work is delegated and carried out within businesses. Left unchecked, they will follow the same trail as other hidden identities: in-app accounts, old service identities, long-lived tokens, API keys, and bypass authentication methods that have turned into dark matter over time. And because LLM-driven agents are optimized for efficiency, minimal friction, and the fewest possible steps, they will naturally gravitate toward that unmanaged identity layer as the quickest path to success. If an orphaned local account or an over-scoped token “just works,” the agent will use it, and reuse it.
The opportunity is to get ahead of this curve.
By treating AI agents as first-class identities from day one (discoverable, governable, and auditable), organizations can harness their potential without creating blind spots.
Businesses that do this will not only shrink their attack surface now but will also be positioned for the regulatory and operational expectations that will surely follow.
In practice, most Agent-AI incidents will not start with a zero-day. They’ll start with an identity shortcut someone forgot to clean up, and then escalate automatically until it looks like a systemic breach.
The Bottom Line
AI agents are here. They are already changing the way businesses operate.
The challenge is not how to use them, but how to govern them.
Safe MCP adoption requires applying the same principles we know well from human employees, least privilege, lifecycle management, and auditing, to the new category of non-human identities now following in their path.
If identity dark matter is the sum of what we can’t see or control, unmanaged AI agents may be its fastest-growing source. Organizations that act now to bring them into the light are the ones that will be able to move quickly with AI without sacrificing trust, compliance, or security. That’s why Orchid Security is building purpose-built infrastructure to eliminate identity dark matter and make Agent AI adoption safe for enterprise-scale deployment.



