
The Year of the AI-Assisted Attack

On December 4, 2025, a 17-year-old boy was arrested in Osaka under Japan’s Act on Prohibition of Unauthorized Computer Access. The young man had used malicious code to extract the personal data of more than 7 million users of Kaikatsu Club, Japan’s largest internet cafe chain. When asked, he shared his motive for the hack: he wanted to buy Pokémon cards.

In a sense, this is a familiar story. Since the 1990s, we’ve read about computer wunderkinds like Kevin Mitnick, whose technical skills exceeded their judgment and who were lured into high-level computer crime in pursuit of status, profit, or pleasure. But there is something different about this story: the young man in question was not a skilled professional.

The rise of AI-assisted attacks

By 2025, LLM-powered conversational and agentic systems had crossed a line, moving from helpful but error-prone coding assistants to end-to-end coding powerhouses. Over the course of the year, several measures of cybercrime frequency and severity nearly doubled. Malicious packages detected on public repositories increased by 75%, cloud intrusions increased by 35%, and AI-generated phishing surpassed human red teams in effectiveness. The more qualitative difference, however, was in the profiles of the attackers.

In February 2025, three teenagers (ages 14, 15, and 16) with no coding background used ChatGPT to build a tool that hit the Rakuten Mobile app 220,000 times, spending their earnings on game consoles and online gambling. In July 2025, a single actor using Claude Code, a sophisticated agentic coding platform, carried out a phishing campaign against 17 organizations within one month, relying on the agent to write malicious code, organize stolen files, analyze financial records to calibrate ransom demands, and draft phishing emails. In December 2025, an attacker used Claude Code and ChatGPT to breach the Mexican government, targeting more than 10 organizations and stealing more than 195 million taxpayer records.

Attacks like these could have happened before 2025, but we are now seeing lone actors carry out attacks that would once have been the hallmark of organized groups, and non-technical people carry out attacks that, in the pre-AI era, would have required a skilled hacker or engineer. By 2025, the barrier to entry for sophisticated attacks had dropped dramatically.

The numbers are moving in the wrong direction

Throughout 2025, estimates of bot activity, malware, targeted compromise, and phishing showed dramatic increases. At the same time, LLM scores on technical benchmarks jumped forward.

By 2022, there were 55,000 malicious packages in public package repositories, according to Sonatype. By 2025, that number had grown to 454,600. Significant jumps occurred in 2023 (the year GPT-4 was released) and 2025 (the tentpole year of agentic coding).

Another effective measure of real-world attacker power, time to exploit, has shrunk to a fraction of its pre-AI values. Time to exploit measures the interval from when a vulnerability is publicly disclosed until an exploit is observed in the wild.

This number has dropped from over 700 days in 2020 to just 44 days in 2025: attackers are now developing exploits for known vulnerabilities in under two months rather than nearly two years. In fact, Mandiant’s M-Trends 2026 report found that exploit timing has gotten even worse – exploits now routinely arrive before patches, and 28.3% of CVEs were exploited within 24 hours of disclosure.
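The metric itself is simple to compute if you have disclosure and first-seen-in-the-wild dates. A minimal sketch in Python – the CVE IDs and dates below are hypothetical, chosen only to illustrate the pre-AI-era pace versus the 2025 pace, not real vulnerability data:

```python
from datetime import date

# Hypothetical (disclosure, first in-the-wild exploit) date pairs --
# illustrative only, not real CVE data.
vulns = {
    "CVE-0000-0001": (date(2020, 3, 1), date(2022, 2, 10)),  # pre-AI-era pace
    "CVE-0000-0002": (date(2025, 6, 1), date(2025, 7, 15)),  # 2025 pace
}

def time_to_exploit(disclosed: date, exploited: date) -> int:
    """Days from public disclosure to first observed in-the-wild exploit."""
    return (exploited - disclosed).days

for cve, (disclosed, exploited) in vulns.items():
    print(f"{cve}: {time_to_exploit(disclosed, exploited)} days")
```

Run over a real disclosure dataset, the aggregate of these deltas is the time-to-exploit figure reports like M-Trends track year over year.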

Throughout 2024, 2025, and early 2026, the performance of frontier models such as ChatGPT, Claude, and Gemini on benchmarks like SWE-bench, which tests software development capability, has gone through the roof. In August 2024, advanced models could solve 33% of the real GitHub issues on the benchmark. By December 2025, that number had risen to nearly 81%.

Toward the end of 2024, and especially in 2025, AI-assisted coding reached a tipping point. Supercharged coding, however, also means supercharged attack potential, and the environment in 2026 reflects these changes: attacks now occur more frequently, with greater intensity and greater impact.

Patching can’t take away the pain

AI speeds up both defenders and attackers. Unfortunately, data from 2025 and 2026 suggests the arms race favors the attackers. The average time to fix a known CVE of high or critical severity is now 74 days, according to the Edgescan 2025 Vulnerability Statistics report. Worse, 45% of vulnerabilities in systems maintained by large companies (1,000+ employees) are never fixed.

Organizations have also been feeling the pressure from malware found in public package repositories. In September 2025, the Shai-Hulud attack on the npm ecosystem compromised more than 500 packages. More than 487 organizations had secrets exposed, and $8.5 million was stolen from Trust Wallet after attackers used the leaked information to poison its Chrome extension. Many organizations instituted code freezes in the aftermath.

Compounding this is a detection problem. In 2025, malicious npm packages posed as popular libraries like chalk and debug, shipping with documentation, unit tests, and code designed to pass as legitimate telemetry modules. Static analysis and signature scanners missed them completely – because the code, presumably generated by AI, looked like real software. As Chainguard CEO Dan Lorenc noted, “The complexity and scale of risk management has grown beyond the capabilities of many organizations to manage it themselves.”
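One low-tech check still worth running is name-similarity screening of new dependencies against an allowlist of the popular packages you actually intend to use. A minimal sketch – the allowlist and similarity threshold here are illustrative assumptions, and this catches only typosquats, not the compromised-but-genuine packages described above:

```python
import difflib

# Illustrative allowlist; in practice, derive this from your lockfile
# or from registry download statistics.
POPULAR = ["chalk", "debug", "express", "lodash", "axios"]

def lookalike_warnings(name: str, cutoff: float = 0.8) -> list[str]:
    """Return popular packages that `name` closely resembles but does not match."""
    close = difflib.get_close_matches(name, POPULAR, n=3, cutoff=cutoff)
    return [pkg for pkg in close if pkg != name]

print(lookalike_warnings("chalkk"))  # resembles "chalk", so it gets flagged
print(lookalike_warnings("debug"))   # exact match, so it is not flagged
```

This is a stopgap, not a substitute for provenance: an AI-generated package published under a hijacked legitimate name would sail straight through it.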

Eliminating attack categories

The lesson of 2025 is that you cannot outrun these attacks. The exploit window is shrinking faster than patch cycles can close it, and AI-generated malware is slipping past the detection tools organizations have relied on for decades. The overlap between “willing to attack” and “technically able to attack” used to be a sliver of the Venn diagram, but it is growing every month. At the same time, we’re building more software, faster. And if this is what supply chain attacks look like at the start of 2026, what will 2027 look like with model capabilities turned up to 10?

Patching at speed and scanning for attacks will only get teams so far in the current environment. Instead, the smart move is to eliminate entire categories of vulnerability, freeing teams to focus on what remains. This is the approach behind Chainguard Libraries, which rebuilds every open source library from verified upstream source code. The idea behind Libraries is to make whole classes of attack structurally impossible, protecting users from CI/CD hijacking, dependency confusion, long-lived token theft, and package distribution attacks. When tested against 8,783 malicious npm packages, Chainguard Libraries blocked 99.7%. Against 3,000 malicious Python packages, they blocked about 98%.

454,600 malicious packages last year. 394,877 in a single quarter. A student in Algeria built ransomware that hit 85 targets in its first month. A 17-year-old stole the data of 7 million users to buy Pokémon cards. The tools that enable these attacks are becoming cheaper, faster, and more accessible. Instead of cringing when the next Axios or Shai-Hulud arrives next week or next month, you can read about it over your coffee while your organization draws dependencies for its production systems, artifact managers, and developer workstations from Chainguard Libraries.

Note: This article was written and contributed by Patrick Smyth, Senior Developer Relations Engineer at Chainguard.


