Google Reports State-Backed Hackers Using Gemini AI for Recon and Attack Support

Ravie Lakshmanan | February 12, 2026 | Cyber Espionage / Artificial Intelligence

Google on Thursday said it had observed a North Korea-linked threat actor known as UNC2970 using its Gemini artificial intelligence (AI) model to conduct research on its targets, as various hacking groups continue to use the tool to accelerate different stages of the cyber attack lifecycle, enable information operations, and even attack the AI models themselves.

“The group used Gemini to gather OSINT on high-value targets to support campaign planning and reconnaissance,” the Google Threat Intelligence Group (GTIG) said in a report shared with The Hacker News. “The actor’s target profiling included searching for information on major cybersecurity and defense companies and mapping specific technical job roles and salary information.”

The tech giant’s threat intelligence team characterized the activity as blurring the line between legitimate research and malicious reconnaissance, allowing a government-backed actor to craft phishing lures and identify soft targets for intrusion in the first place.

UNC2970 is a moniker assigned to a North Korean hacking group that overlaps with the cluster tracked as Lazarus Group, Diamond Sleet, and Hidden Cobra. It is best known for orchestrating a long-running campaign code-named Operation Dream Job that targets the aerospace, defense, and energy sectors with malware by approaching victims under the guise of job openings.

GTIG said UNC2970 “remains focused” on targeting security and recruiting companies in its campaigns.

UNC2970 is far from the only threat actor to have abused Gemini to augment its capabilities and move from initial reconnaissance to active targeting at a rapid clip. Other hacking groups that have integrated the tool into their workflows include:

  • UNC6418 (unattributed), to conduct targeted intelligence gathering, particularly seeking sensitive account information and email addresses.
  • Temp.HEX aka Mustang Panda (China), to compile dossiers on specific individuals, including targets in Pakistan, and to collect operational and organizational information on intelligence agencies in various countries.
  • APT31 aka Judgment Panda (China), to automate vulnerability analysis and generate targeted testing scripts while posing as a security researcher.
  • APT41 (China), to extract explanations from open-source tools’ README.md pages, as well as to troubleshoot and debug exploit code.
  • UNC795 (China), to debug its code, conduct research, and develop web shells and scanners for PHP web servers.
  • APT42 (Iran), to facilitate reconnaissance and social engineering of targets by crafting engaging interactions with them, as well as to develop a Python-based Google Maps scraper, build a SIM card management system in Rust, and research a proof-of-concept (PoC) exploit for a WinRAR vulnerability (CVE-2025-8088).

Google also said it discovered a malware family called HONESTCUE that uses the Gemini API to fetch next-stage functionality, as well as an AI-generated phishing kit called COINBAIT that was built using Lovable AI and impersonates a cryptocurrency exchange to harvest victims’ information. Some aspects of the COINBAIT-related activity have been attributed to a financially motivated cluster tracked as UNC5356.

“HONESTCUE is a downloader and launcher framework that sends prompts to Google Gemini’s API and receives C# source code in response,” the company said. “However, instead of using the LLM to modify itself, HONESTCUE calls the Gemini API to generate code that serves as a ‘second stage,’ which then downloads and executes additional malware.”

The fileless second stage of HONESTCUE then takes the generated C# source code obtained from the Gemini API and uses the official .NET CSharpCodeProvider framework to compile and execute payloads directly in memory, leaving no artifacts on disk.
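
To make the technique concrete, here is a minimal, benign Python sketch of the same compile-and-execute-in-memory pattern. It is illustrative only: the real malware operates on C# source via .NET's CSharpCodeProvider, and the hardcoded string below stands in for code that would actually arrive from the Gemini API.

    # Benign analogue of HONESTCUE's fileless stage: turn source text that
    # arrived as a string into running code without writing anything to disk.
    # (The real malware does this with C# and .NET's CSharpCodeProvider.)

    # Stand-in for source code fetched over the network from an LLM API.
    generated_source = '''
    def second_stage():
        return "payload logic would run here"
    '''

    # compile() produces a code object entirely in memory, so there is no
    # file artifact for disk-based antivirus scanning to inspect.
    code_object = compile(generated_source, filename="<in-memory>", mode="exec")

    # exec() runs the code object inside a fresh namespace.
    namespace = {}
    exec(code_object, namespace)

    print(namespace["second_stage"]())  # -> "payload logic would run here"

Because the source text never touches the filesystem, file-based scanning has nothing to inspect; detection has to happen at the network or behavioral layer.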

Google also drew attention to recent ClickFix campaigns that abuse the public sharing features of AI productivity services to host realistic-looking commands that purport to fix a common computer problem but ultimately deliver information-stealing malware. The activity was documented by Huntress in December 2025.

Finally, the company said it has identified and disrupted model extraction attacks aimed at systematically querying a proprietary machine learning model to harvest information and build another model that mirrors the target’s behavior. In one large-scale attack of this nature, Gemini was targeted with more than 100,000 prompts posing a series of questions intended to replicate the model’s reasoning ability across a wide range of tasks in languages other than English.

Last month, Praetorian built a PoC extraction attack in which the replica model achieved an accuracy rate of 80.1% after sending a series of 1,000 queries to the victim’s API, recording the results, and training on them for 20 epochs.
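
The mechanics are easy to reproduce at toy scale. Below is a minimal Python sketch of query-based model extraction, not Praetorian’s actual code: the “victim” model, the synthetic data, and the replica architecture are all illustrative assumptions, with the 1,000 queries and 20 training iterations mirroring the figures reported above.

    # Toy model extraction: query a "victim" model, record query-response
    # pairs, and train a replica on them. Everything here is synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # Stand-in for the proprietary victim model hidden behind an API.
    X_private = rng.normal(size=(5000, 10))
    y_private = (X_private[:, 0] + X_private[:, 1] ** 2 > 1).astype(int)
    victim = RandomForestClassifier(n_estimators=100).fit(X_private, y_private)

    # Attacker: send 1,000 queries and record every query-response pair.
    X_queries = rng.normal(size=(1000, 10))
    y_responses = victim.predict(X_queries)  # each response becomes a label

    # Train the replica on the harvested pairs; max_iter=20 plays the role
    # of the "20 epochs" cited in the PoC.
    replica = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=20)
    replica.fit(X_queries, y_responses)

    # Fidelity: how often the replica agrees with the victim on fresh inputs.
    X_test = rng.normal(size=(2000, 10))
    agreement = (replica.predict(X_test) == victim.predict(X_test)).mean()
    print(f"replica agreement with victim: {agreement:.1%}")

Nothing in this loop requires access to the victim’s weights; the API responses alone are the training signal, which is precisely the point the researcher makes below.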

“Many organizations think that keeping model weights private is sufficient protection,” said security researcher Farida Shafik. “But this creates a false sense of security. In reality, the behavior is the model. Every query-response pair is a training example for a replica, and the model’s behavior is exposed in every API response.”
