LangChain, LangGraph Flaws Reveal Files, Secrets, Databases in Widely Used AI Frameworks

Cybersecurity researchers have disclosed three security vulnerabilities affecting LangChain and LangGraph, which, if successfully exploited, can expose file system data, environment secrets, and chat history.
Both LangChain and LangGraph are open source frameworks used to build applications powered by large language models (LLMs). LangGraph builds on LangChain's foundations to orchestrate complex, stateful workflows. According to the Python Package Index (PyPI), LangChain, LangChain-Core, and LangGraph were downloaded over 52 million, 23 million, and 9 million times, respectively, in the last week alone.
“Each vulnerability exposes a different class of business information: system files, environment secrets, and chat history,” Cyera security researcher Vladimir Tokarev said in a report published Thursday.
The issues, in short, provide three independent methods an attacker can use to extract sensitive data from any enterprise LangChain deployment. The risk details are as follows –
- CVE-2026-34070 (CVSS score: 7.5) – A path traversal vulnerability in LangChain (“langchain_core/prompts/loading.py”) that allows access to certain files without authentication through its prompt loading API by supplying a specially crafted prompt template.
- CVE-2025-68664 (CVSS score: 9.3) – A deserialization of untrusted data vulnerability in LangChain that leaks API keys and environment secrets when an attacker passes as input a data structure that tricks the application into interpreting it as a LangChain serialized object rather than ordinary user data.
- CVE-2025-67644 (CVSS score: 7.3) – An SQL injection vulnerability in the LangGraph SQLite checkpoint implementation that allows an attacker to inject arbitrary SQL via metadata filter keys and execute queries against the database.
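The SQL injection class in the last entry is easy to reproduce in miniature. The sketch below is illustrative only — it is not LangGraph's actual checkpoint code, and the table name and schema are invented. It shows how interpolating untrusted metadata filter *keys* into SQL text lets a hostile key rewrite the WHERE clause, and how allowlisting identifier characters closes the hole:

```python
import sqlite3

def unsafe_filter_query(conn, filters):
    # VULNERABLE pattern (illustrative; not LangGraph's actual code):
    # filter *values* are bound as parameters, but filter *keys* are pasted
    # straight into the SQL text, so a hostile key can rewrite the query.
    clauses = " AND ".join(f"json_extract(metadata, '$.{k}') = ?" for k in filters)
    sql = f"SELECT id FROM checkpoints WHERE {clauses}"
    return conn.execute(sql, list(filters.values())).fetchall()

def safe_filter_query(conn, filters):
    # Mitigation sketch: identifiers cannot be parameterized, so validate
    # each key against a strict allowlist before interpolating it.
    for k in filters:
        if not k.replace("_", "").isalnum():
            raise ValueError(f"illegal metadata key: {k!r}")
    return unsafe_filter_query(conn, filters)

# Demo fixture (invented schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checkpoints (id INTEGER, metadata TEXT)")
conn.executemany(
    "INSERT INTO checkpoints VALUES (?, ?)",
    [(1, '{"user": "alice"}'), (2, '{"user": "bob"}')],
)

print(unsafe_filter_query(conn, {"user": "alice"}))  # one row, as intended
evil_key = "x') = 0 OR ('1'='1"                      # breaks out of the JSON path string
print(unsafe_filter_query(conn, {evil_key: 1}))      # dumps every row
```

Because the injection lives in an identifier position, parameter binding alone cannot fix it; the safe variant's key validation is the standard remedy for this class of bug.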
Successful exploitation of the aforementioned flaws could allow an attacker to read sensitive files such as Docker configurations, siphon secrets through serialization injection, and access chat histories associated with sensitive workflows. It's worth noting that details of CVE-2025-68664 were also shared by Cyata in December 2025, which gave it the codename LangGrinch.
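The serialization-confusion class behind CVE-2025-68664 can be sketched in a few lines. Everything below is hypothetical and simplified — it is not LangChain's actual loader, and the marker keys and "secret" node shape are only modeled on the general idea of a tagged serialization format. The bug pattern is a reviver that trusts any dict carrying the marker keys, even when that dict arrived as ordinary user input, so an attacker-supplied node can read out environment variables:

```python
import os

def naive_revive(node):
    # VULNERABLE pattern (hypothetical; not LangChain's actual loader):
    # any dict that *looks* like a serialized node is revived, even if it
    # came from an untrusted channel such as user chat input.
    if isinstance(node, dict) and node.get("lc") == 1 and node.get("type") == "secret":
        # Resolves an attacker-named environment variable into the output.
        return os.environ.get(node["id"][0], "")
    return node

def safe_revive(node, trusted=False):
    # Mitigation sketch: only revive nodes from trusted, in-process sources;
    # anything arriving from outside stays inert data.
    return naive_revive(node) if trusted else node

# Demo: a value that should have stayed plain user data exfiltrates a secret.
os.environ["DEMO_API_KEY"] = "sk-demo-123"   # stand-in secret for the demo
payload = {"lc": 1, "type": "secret", "id": ["DEMO_API_KEY"]}

print(naive_revive(payload))   # leaks the secret's value
print(safe_revive(payload))    # stays an inert dict
```

The general lesson matches the report's framing: data and object graphs must never share a trust boundary, or user-controlled bytes become constructor calls.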

The vulnerabilities have been patched in the following versions –
- CVE-2026-34070 – langchain-core >=1.2.22
- CVE-2025-68664 – langchain-core 0.3.81 and 1.2.5
- CVE-2025-67644 – langgraph-checkpoint-sqlite 3.0.1
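To verify that a deployment is on the patched releases listed above, one option is a quick standard-library check with importlib.metadata. The thresholds below are a sketch mirroring the highest fix versions this article names for the current release lines (langchain-core users on the 0.3.x line need >=0.3.81 instead); the version parser is deliberately simplistic and ignores pre-release suffixes:

```python
from importlib import metadata

# Minimum patched versions per the advisory list above
# (langchain-core 0.3.x was separately fixed in 0.3.81).
PATCHED_MINIMUMS = {
    "langchain-core": (1, 2, 22),
    "langgraph-checkpoint-sqlite": (3, 0, 1),
}

def parse_version(v):
    # Simplistic numeric parse; suffixes like "rc1" are ignored.
    parts = []
    for piece in v.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def audit():
    # Maps each installed package -> (version string, is_patched).
    findings = {}
    for pkg, minimum in PATCHED_MINIMUMS.items():
        try:
            installed = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            continue  # not installed, nothing to patch
        findings[pkg] = (installed, parse_version(installed) >= minimum)
    return findings

print(audit())
```

Running this in the affected virtual environment flags any package still below the fix threshold; upgrading with pip then remediates all three CVEs.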
The findings also highlight how artificial intelligence (AI) pipelines are vulnerable to traditional security vulnerabilities, which could put entire systems at risk.
This development comes days after a critical security flaw affecting Langflow (CVE-2026-33017, CVSS score: 9.3) came under active exploitation within 20 hours of public disclosure, allowing attackers to extract sensitive data from developers' environments.
Naveen Sunkavally, chief architect at Horizon3.ai, said the vulnerability shares the same root cause as CVE-2025-3248, stemming from unauthenticated exposure of code execution functionality. With threat actors moving quickly to exploit newly disclosed flaws, it's important that users apply patches as soon as possible to stay fully protected.
“LangChain doesn’t exist in isolation. It sits at the center of a large web of dependencies that extend the AI stack. Hundreds of libraries wrap LangChain, extend it, or depend on it,” Cyera said. “If there’s a vulnerability in the LangChain core, it doesn’t just affect direct users. It spills out into every downstream library, every wrapper, every integration that inherits the vulnerable code path.”



