Six bugs were found lurking in OpenClaw’s plumbing

Security researchers have disclosed six critical flaws in OpenClaw, the open source AI agent framework popularly known as "the social network for AI agents." The flaws were found by Endor Labs, whose researchers analyzed the platform with an AI-driven static application security testing (SAST) engine designed to track how data flows through the agent's software.
The bugs span several classes of web security vulnerability, including server-side request forgery (SSRF), missing webhook authentication, authentication bypass, and path traversal, and affect a complex agent system that combines large language models (LLMs) with external tools and integrations.
The researchers also published a practical proof-of-concept exploit for each bug, validating real-world exploitability. OpenClaw has published patches and security advisories for the issues.
Findings include SSRF, auth bypass, and path traversal
The Endor Labs disclosure lists the six OpenClaw vulnerabilities by weakness type and individual severity rather than by CVE identifier.
Several of the issues are SSRF bugs affecting different tools, including the gateway component (CVSS 7.6), which accepts user-supplied URLs when establishing outgoing WebSocket connections. The other two are an SSRF in Urbit verification (CVSS 6.5) and an SSRF in the image tool (CVSS 7.6). These SSRF bugs are rated medium to high severity because, depending on the deployment, they could allow access to internal services or cloud metadata endpoints.
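To illustrate the class of bug, not OpenClaw's actual code: an SSRF arises when a component connects to a user-supplied URL without checking where it resolves. A minimal Python sketch of the kind of guard that prevents it (all names here are illustrative):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_internal_target(url: str) -> bool:
    """Return True if the URL resolves to a private, loopback, or
    link-local address -- the kind of target an SSRF bug exposes,
    such as the cloud metadata endpoint 169.254.169.254."""
    host = urlparse(url).hostname
    if host is None:
        return True  # unparseable URL: treat as unsafe
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return True  # unresolvable host: treat as unsafe
    return addr.is_private or addr.is_loopback or addr.is_link_local

def open_outbound_connection(user_url: str) -> str:
    # A gateway that opens connections to user-supplied URLs must
    # validate the resolved address first, or an attacker can pivot
    # to internal services behind the server.
    if is_internal_target(user_url):
        raise ValueError("refusing to connect to internal address")
    return user_url  # placeholder for the actual outbound request
```

Note the check runs on the *resolved* IP, not the hostname string, since DNS can point an innocuous-looking name at an internal address.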
Access control failures account for another set of findings. The Telnyx webhook handler, designed to receive external events, lacks webhook authentication (CVSS 7.5), allowing forged requests from untrusted sources. Separately, an authentication bypass (CVSS 6.5) allowed unauthorized users to invoke a protected Twilio webhook without valid credentials.
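Webhook providers typically sign each delivery so the receiver can reject forgeries. A hedged sketch of the standard HMAC check a handler should perform (the secret and names are hypothetical, not drawn from OpenClaw or the providers' actual schemes):

```python
import hashlib
import hmac

# Hypothetical shared secret configured with the webhook provider.
WEBHOOK_SECRET = b"shared-secret"

def verify_webhook(payload: bytes, signature_header: str) -> bool:
    """Recompute an HMAC-SHA256 over the raw request body and compare
    it to the signature the provider sent. Without a check like this,
    anyone who discovers the endpoint URL can forge events."""
    expected = hmac.new(WEBHOOK_SECRET, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature_header)
```

A handler that skips this step, as the disclosure describes, accepts any well-formed request as a genuine event.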
The disclosure also describes a path traversal vulnerability (CVSS score not given) in the browser component's download handling, where insufficient sanitization of file paths could allow files to be written outside the intended directory.
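The usual defense against this bug class is to resolve the requested filename and confirm it still sits inside the target directory. A minimal Python sketch, with an illustrative directory path rather than OpenClaw's real one:

```python
from pathlib import Path

DOWNLOAD_DIR = Path("/tmp/agent-downloads")  # illustrative location

def safe_download_path(filename: str) -> Path:
    """Resolve the requested filename inside the download directory
    and reject anything that escapes it, such as '../../etc/passwd'."""
    candidate = (DOWNLOAD_DIR / filename).resolve()
    if not candidate.is_relative_to(DOWNLOAD_DIR.resolve()):
        raise ValueError("path escapes download directory")
    return candidate
```

Resolving before the containment check is what defeats `..` sequences; comparing raw strings instead would let crafted paths slip through.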
“The combination of AI-powered analysis and systematic manual verification provides a useful way to protect AI infrastructure,” the researchers said. “As AI agent frameworks become more common in enterprise environments, security analysis must evolve to address both traditional vulnerabilities and AI-specific attack surfaces.”
How the flaws were uncovered
To overcome the limitations of traditional static analysis tools, which reportedly struggle with modern software stacks where input passes through multiple transformations before reaching dangerous operations, Endor Labs used its AI SAST approach, which it says preserves the context of those transformations.
This helped the researchers understand “not only where dangerous operations exist but also whether attacker-controlled data can reach them.” The analysis engine maps the full journey of untrusted data, from entry points such as HTTP parameters, configuration values, or external API responses, to security-sensitive “sinks” such as network requests, file operations, or command execution.
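The source-to-sink idea behind this kind of taint analysis can be sketched as a graph walk. A toy Python illustration, with source and sink names taken from the article and everything else hypothetical:

```python
# Untrusted entry points and security-sensitive operations, per the
# categories described above.
SOURCES = {"http_param", "config_value", "api_response"}
SINKS = {"network_request", "file_write", "command_exec"}

def find_tainted_flows(flow_graph):
    """Walk a data-flow graph (node -> set of successor nodes) and
    report every path from an untrusted source to a sensitive sink,
    however many intermediate transformations lie between them."""
    flows = []
    for source in SOURCES & flow_graph.keys():
        stack, seen = [source], {source}
        while stack:  # depth-first search through transformations
            node = stack.pop()
            if node in SINKS:
                flows.append((source, node))
                continue
            for nxt in flow_graph.get(node, set()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return flows

# Example: an HTTP parameter passes through a transformation and
# then reaches a network request -- a reportable flow. A config
# value that only reaches a logger is not.
graph = {
    "http_param": {"sanitize"},
    "sanitize": {"network_request"},
    "config_value": {"logger"},
}
```

A real engine would also reason about whether intermediate steps like `sanitize` actually neutralize the taint; this sketch only shows the reachability half of the problem.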
Endor Labs said it responsibly disclosed the vulnerabilities to the OpenClaw maintainers, who have addressed the issues, allowing the researchers to publish technical details. The disclosure did not provide broad mitigation guidance but noted that fixes were applied to all affected components.



