Thousands of Leaked Google Cloud API Keys Gain Gemini Access Once the API Is Enabled

New research has found that Google Cloud API keys, often treated as little more than project identifiers for billing purposes, can be abused to authenticate to sensitive Gemini endpoints and access confidential data.
The discovery comes from Truffle Security, which found nearly 3,000 Google API keys (identified by the “AIza” prefix) embedded in client-side code to provide Google-related services such as embedded maps on websites.
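Keys with this prefix are straightforward to spot mechanically, which is why so many turn up in client-side code. Below is a minimal sketch of the kind of pattern match a secret scanner might use; the 39-character "AIza" format is a widely used heuristic rather than an official specification, and the key value in the example is made up:

```python
import re

# Google API keys start with "AIza" followed by 35 URL-safe characters
# (a common secret-scanner heuristic, not an official spec).
AIZA_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_google_api_keys(text: str) -> list[str]:
    """Return all candidate Google API keys embedded in the given text."""
    return AIZA_RE.findall(text)

# Example: a key hard-coded into a Maps embed (hypothetical key value).
snippet = (
    '<script src="https://maps.googleapis.com/maps/api/js'
    '?key=AIzaSyA1234567890abcdefghijklmnopqrstuv"></script>'
)
```

Running `find_google_api_keys(snippet)` returns the embedded key, which is exactly how keys shipped in website JavaScript end up harvested at scale.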
“With a valid key, an attacker can access uploaded files, cached data, and charge LLM usage to your account,” said security researcher Joe Leon, adding the keys “now also authenticate to Gemini even though they weren’t intended for that.”
The problem occurs when users enable the Gemini API (i.e., the Generative Language API) in a Google Cloud project, at which point existing API keys in that project, including those exposed in client-side JavaScript on websites, silently gain access to Gemini endpoints without any warning or notification.
This effectively allows any attacker scraping websites for such API keys to use them for malicious purposes, including accessing sensitive files through the /files and /cachedContents endpoints, and making Gemini API calls that rack up huge bills for victims.
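The endpoints named above are part of the public Generative Language API, which accepts a bare API key as a query parameter. A hedged sketch of the URLs an attacker holding a harvested key could probe follows; the helper name is ours, and the models-list URL is included only as a cheap way to see whether a key authenticates at all:

```python
# Base URL of the public Generative Language (Gemini) API.
BASE = "https://generativelanguage.googleapis.com/v1beta"

def gemini_probe_urls(api_key: str) -> dict[str, str]:
    """Build the data-exposing endpoints called out in the research,
    plus the models list as a lightweight authentication check."""
    return {
        "files": f"{BASE}/files?key={api_key}",
        "cached_contents": f"{BASE}/cachedContents?key={api_key}",
        "models": f"{BASE}/models?key={api_key}",
    }
```

A 200 response from any of these with a leaked key would indicate the key is live for Gemini; the actual HTTP requests are omitted here.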
In addition, Truffle Security found that creating a new API key in Google Cloud defaults to “Unrestricted,” which means it applies to all APIs enabled in the project, including Gemini.
“The result: thousands of API keys that were once treated as harmless billing identifiers are now live Gemini tokens sitting on the public internet,” Leon said. In total, the company said it found 2,863 live keys exposed on the public internet, including one on a website associated with Google.
The disclosure comes as Quokka published a similar report, finding more than 35,000 unique Google API keys embedded across a dataset of 250,000 Android apps.
“Beyond cost abuse through automated LLM requests, organizations should also consider how AI-enabled endpoints may interact with prompts, generated content, or connected cloud services in ways that widen the blast radius of a leaked key,” the mobile security firm said.

“Even if no specific customer data is accessible, the combination of inference access, shared usage quotas, and potential integration with Google Cloud’s broader services creates a risk profile that is very different from the original billing-identifier model that developers relied on.”
Although the behavior appears to be by design, Google has stepped in to address the issue.
“We are aware of the report and are working with researchers to address the issue,” a Google spokesperson told The Hacker News via email. “Protecting our users’ data and infrastructure is our top priority. We have already implemented proactive measures to detect and block leaked API keys attempting to access the Gemini API.”
It is not yet known whether this issue has been exploited in the wild. However, in a Reddit post published two days ago, a user claimed the theft of a Google Cloud API key resulted in $82,314.44 in charges between February 11 and 12, 2026, up from a typical usage of about $180 per month.
We’ve reached out to Google for additional comment, and will update the story if we hear back.
Users who have set up Google Cloud projects are advised to review their enabled APIs and services and check whether any artificial intelligence (AI)-related APIs are enabled. If they are, and the project's API keys are publicly accessible (either client-side in JavaScript or in a public repository), the keys should be rotated.
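As a triage step before rotating, a leaked key can be checked for Gemini access with a single keyed request to the models-list endpoint. A minimal sketch follows; the function name is ours, and it conservatively treats any non-200 response or network error as “not live”:

```python
import urllib.request

# Cheap liveness probe: the models-list endpoint of the public Generative
# Language API authenticates with just an API key, so a 200 here means the
# key works against Gemini.
PROBE_URL = "https://generativelanguage.googleapis.com/v1beta/models?key={key}"

def key_is_live_for_gemini(api_key: str, timeout: float = 10.0) -> bool:
    """Return True only if the key authenticates to the Gemini API."""
    try:
        with urllib.request.urlopen(
            PROBE_URL.format(key=api_key), timeout=timeout
        ) as resp:
            return resp.status == 200
    except OSError:  # covers HTTPError (e.g. 400/403), URLError, timeouts
        return False
```

Any exposed key for which this returns True should be rotated first, since it is both public and Gemini-capable.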
“Start with your oldest keys first,” says Truffle Security. “Those were most likely made public under the old guidance that API keys were safe to share, and then quietly gained Gemini privileges when someone on your team enabled the API.”
“This is a good example of how the attack surface can expand, and how API keys can become over-privileged after the fact,” said Tim Erlin, security strategist at Wallarm, in a statement. “Security testing, vulnerability scanning, and other assessments must be continuous.”
“APIs are particularly tricky because changes to their functionality, or to the data they can access, may not look like a vulnerability but can directly increase risk. The adoption of AI that runs on top of these APIs only accelerates the problem. Finding vulnerabilities alone is not enough for APIs; organizations must profile behavior, detect malicious activity, and block wrongful access to data.”



