Cyber Security

Copilot and Agentforce fall into form-based rapid injection tactics

Enterprise AI agents should streamline workflows. Instead, two new discoveries show that it can easily simplify data extraction.

Security researchers have discovered a rapid injection vulnerability in both Microsoft Copilot Studio and Salesforce Agentforce that allows attackers to issue malicious commands by using seemingly harmless commands.

According to Capsule Security’s findings, SharePoint forms and public-facing lead forms within Copilot are vulnerable to attackers issuing commands that can compromise system intent and trigger data exfiltration on attacker-controlled servers.

One of these bugs has already been assigned a high-severity CVE, while the other “critical” one is reported to have no classification status. The flaws could allow the theft of PII, customer/lead records, free text business context, and operational/workflow data.

In both cases, AI agents treat untrusted user input as trusted instructions, Capsule researchers noted in a disclosure shared with CSO ahead of their publication on Wednesday.

ShareLeak: SharePoint forms data leaked via Copilot

The issue on Microsoft’s side, called “ShareLeak,” is about how Copilot Studio agents process SharePoint form submissions. The attack begins with a crafted payload inserted into a standard form field, such as a “comment”, which the agent later inserts as part of its execution context.

Because the system combines user input with system instructions, the injected payload overrides the agent’s original instructions. The model is thus tricked into believing that the attacker’s instructions are legitimate system instructions. Malicious input goes from form submission to agent execution without objection.

Once activated, the agent can access a connected SharePoint List and extract sensitive customer data, including names, addresses, phone numbers, and export it via email. The researchers found that even if Microsoft’s security measures flagged suspicious behavior, the data was leaked.

The main reason is that there is no reliable separation between trusted system commands and untrusted user data. In the current setup, AI cannot distinguish between the two, the researchers said.

Microsoft fixed the issue after it was disclosed, assigned CVE-2026-21520 to it and assessed its severity at 7.5 out of 10 on the CVSS scale. The reduction is done internally, and no further action is required from the users.

PipeLeak: Salesforce Agentforce hijacked by simple lead

In the Salesforce Agentforce issue, attackers embedded malicious instructions inside a public-facing lead form. When the internal user later asks the agent to review or process that lead, the agent executes the embedded instructions as if it were part of its job.

According to the Capsule display, the agent retrieves the CRM data with the “GetLeadsInformation” function and exports it via email.

Consistency is not limited to a single record. Researchers have shown that a hacked agent can query and retrieve multiple lead records in bulk, effectively turning a single form submission into a database extraction pipeline.

The researchers said Salesforce acknowledges the problem of instant injection but describes the release vector as “configuration-specific,” pointing to optional human-in-the-loop (HITL) controls. Capsule’s push for that framework argues that requiring manual authorization undermines the very purpose of independent agents.

The deeper problem, they note, is insecure automation. Systems designed for automation must not allow untrusted input to redefine the agent’s intentions.
Both disclosures converge on a premise that requires treating all external input as untrusted and having filters in place that separate data from instructions. This will include enforcing input validation, less privileged access, and tighter controls on actions such as outgoing email.

Related Articles

Leave a Reply

Your email address will not be published. Required fields are marked *

Back to top button