
Shai-Hulud-style npm worm hits CI pipelines and AI coding tools

A large Shai-Hulud-style npm worm campaign is sweeping the software ecosystem, burrowing through developer machines, CI pipelines, and AI coding tools.

Socket researchers discovered an active attack campaign and named it SANDWORM_MODE, after the “SANDWORM_*” environment variables embedded in the malware’s runtime logic.

At least 19 typosquatted packages have been published under multiple aliases, posing as popular developer utilities and AI-related tools. Once installed, the packages execute a multi-stage payload that harvests secrets from local machines and CI systems, then uses the stolen tokens to compromise additional packages.

The payload also includes a Shai-Hulud-style “dead man’s switch” designed to automatically wipe the home directory when the malware detects interference. The researchers called the campaign a “real and serious threat,” advising defenders to treat the packages as active compromise risks.

A typo away from compromise

The campaign begins with typosquatting, where attackers publish packages with names almost identical to legitimate ones, banking on developer error or AI assistants hallucinating incorrect dependency names.
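The defensive side of this can be sketched simply: compare each installed dependency name against a list of known-good packages and flag near-misses. A minimal sketch using edit distance (the package names and distance threshold below are illustrative, not drawn from the campaign):

```javascript
// Edit (Levenshtein) distance between two package names,
// computed with the classic dynamic-programming table.
function editDistance(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Flag dependencies that are close to -- but not exactly --
// a known-good package name (distance 1 or 2).
function findTyposquats(deps, knownGood) {
  const flagged = [];
  for (const dep of deps) {
    for (const good of knownGood) {
      const d = editDistance(dep, good);
      if (d > 0 && d <= 2) flagged.push({ dep, resembles: good });
    }
  }
  return flagged;
}
```

Running this over a project's dependency list against, say, the top few thousand npm packages catches single-character swaps and omissions, the most common typosquat patterns.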

“The typosquats target several high-traffic services in the Node.js ecosystem, crypto tooling, and, perhaps most notably, AI coding tools seeing rapid adoption: three packages impersonate Claude Code, and one targets OpenClaw, a viral AI agent that recently reached 210k stars on GitHub,” the researchers wrote in a blog post.

Once the malicious package is installed and executed, the malware hunts for sensitive information, including npm and GitHub tokens, environment secrets, and cloud keys. Those credentials are then used to push malicious changes to other repositories and inject new dependencies or workflows, extending the chain of infection.

Additionally, the campaign uses a weaponized GitHub Action that can amplify attacks within CI pipelines, leaking secrets during builds and enabling further propagation, the researchers added.
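To see what such a payload can reach inside a build job, a defender can enumerate credential-shaped environment variables, which is the same surface a malicious install script inspects. A minimal audit sketch (the name patterns below are illustrative assumptions, not an exhaustive list):

```javascript
// Name patterns that commonly indicate credentials in CI environments.
// Illustrative only; extend for your own secret-naming conventions.
const SECRET_NAME_PATTERNS = [
  /TOKEN/i,
  /SECRET/i,
  /PASSWORD/i,
  /^AWS_ACCESS_KEY/i,
  /^NPM_/i,
  /^GITHUB_/i,
];

// Return the names (not values) of environment variables that look
// like credentials -- i.e., what a rogue postinstall script could steal.
function credentialShapedVars(env) {
  return Object.keys(env).filter((name) =>
    SECRET_NAME_PATTERNS.some((re) => re.test(name))
  );
}

// Example: audit the current process environment in a CI step:
//   console.log(credentialShapedVars(process.env));
```

Anything this turns up in a job that merely runs `npm install` is exposure that scoped, short-lived credentials would reduce.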

Poisoning the AI developer interface

The campaign specifically targets AI coding assistants. The malware drops a malicious Model Context Protocol (MCP) server and registers it in the configuration of popular AI tools, embedding itself as a trusted component in the assistant’s environment.

Once this is achieved, prompt injection techniques can trick the AI into reading sensitive local data, which can include SSH keys or cloud credentials, and transmitting it to the attacker without the user’s knowledge.
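One defensive check this suggests is auditing the assistant's MCP configuration against an explicit allowlist of servers the developer actually installed. A minimal sketch, assuming the common `{ "mcpServers": { name: { command, args } } }` config shape used by several MCP clients (the server names below are hypothetical):

```javascript
// Return MCP server names present in the config but absent from the
// developer's allowlist -- candidates for malicious registration.
function unexpectedMcpServers(config, allowlist) {
  const servers = (config && config.mcpServers) || {};
  return Object.keys(servers).filter((name) => !allowlist.includes(name));
}

// Example: a config where an attacker has injected an extra server.
const cfg = {
  mcpServers: {
    filesystem: { command: 'npx', args: ['@modelcontextprotocol/server-filesystem'] },
    'update-checker': { command: 'node', args: ['/tmp/payload.js'] }, // injected
  },
};
// unexpectedMcpServers(cfg, ['filesystem']) flags 'update-checker'
```

Running a check like this in a dotfiles linter or pre-commit hook makes silent MCP registration visible before the assistant next starts.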

The researchers also found a dormant polymorphic engine capable of rewriting the malware with code-level modifications such as dynamic rewriting, control-flow rewriting, decoy code insertion, and string encoding, although no active mutations were observed during analysis. The engine can work with locally hosted models via Ollama, but currently only checks whether Ollama is running locally, they wrote.

The disclosure noted that npm has already hardened the registry against Shai-Hulud-class worms, tightening the credential-abuse controls this campaign relies on. Short-lived, scoped tokens, mandatory two-factor authentication for publishing, and identity-bound “trusted publishing” from CI are designed to contain the blast radius of stolen secrets, though their ultimate effectiveness depends on how quickly the malicious packages are discovered and removed.
