
Open source maintainers targeted by AI agent as part of ‘reputation farming’

AI agents are now able to submit large numbers of pull requests (PRs) to open source project maintainers, and in doing so risk creating the conditions for supply chain attacks on critical software projects, developer security firm Socket has argued.

The warning comes after one of its developers, Nolan Lawson, last week received an email about the PouchDB JavaScript database that he believed came from an AI agent calling himself “Kai Gritun”.

“I’m a freelance AI agent (I can actually write and post code, not just chat). I have 6+ combined PRs in OpenClaw and I’m looking to contribute to high-impact projects,” the email said. “Would you like me to tackle some open issues in PouchDB or other projects you care about? I’m happy to start small to prove quality.”

Background checks revealed that Kai Gritun’s GitHub profile was created on February 1, and that within days it had opened 103 PRs across 95 repositories, resulting in 23 commits across 22 projects.

Many of the repositories that received PRs are important to the JavaScript and cloud ecosystems, and several are widely treated as critical infrastructure. Projects where commits landed, or were at least seriously considered, include the Nx development tool, the Unicorn static-analysis plugin for ESLint, the JavaScript command-line interface library Clack, and the Cloudflare/workers-sdk software development kit.

Importantly, Kai Gritun’s GitHub profile does not identify him as an AI agent, something Lawson noticed only because he had received the email.
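For maintainers who want to run a similar quick background check on an unfamiliar contributor, the sketch below uses the public GitHub REST API. The endpoints and fields are standard, but the thresholds and the fallback username are purely illustrative and not drawn from Socket’s report.

```typescript
// Quick contributor background check via the public GitHub REST API.
// Run with: npx tsx check-contributor.ts <username>
// Thresholds below are illustrative only.

interface GitHubUser {
  login: string;
  created_at: string; // ISO 8601 account creation date
}

interface SearchResult {
  total_count: number; // number of matching pull requests
}

async function checkContributor(username: string): Promise<void> {
  // Account age: how long has this profile existed?
  const userRes = await fetch(`https://api.github.com/users/${username}`);
  if (!userRes.ok) throw new Error(`User lookup failed: ${userRes.status}`);
  const user = (await userRes.json()) as GitHubUser;
  const ageDays =
    (Date.now() - new Date(user.created_at).getTime()) / 86_400_000;

  // PR volume: how many pull requests has this account opened overall?
  const prRes = await fetch(
    `https://api.github.com/search/issues?q=author:${username}+type:pr`
  );
  if (!prRes.ok) throw new Error(`PR search failed: ${prRes.status}`);
  const prs = (await prRes.json()) as SearchResult;

  console.log(`${user.login}: account is ${ageDays.toFixed(0)} days old`);
  console.log(`${user.login}: ${prs.total_count} pull requests opened`);

  // A very young account with an unusually high PR count is worth a closer look.
  if (ageDays < 30 && prs.total_count > 50) {
    console.log('Flag: high PR volume from a very new account');
  }
}

checkContributor(process.argv[2] ?? 'octocat').catch(console.error);
```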

Reputation farming

A deeper dive reveals that Kai Gritun advertises paid services that help users set up, manage, and maintain the OpenClaw personal AI agent platform (formerly known as Moltbot and Clawdbot), which has made headlines in recent weeks, not all of them good.

According to Socket, this suggests that Kai Gritun is deliberately doing visible work in order to be seen as trustworthy, a tactic known as ‘reputation farming.’ The activity creates an appearance of busyness while building provenance with well-known organizations and projects. The fact that Kai Gritun’s contributions were harmless and passed human review should not obscure the broader significance of the technique, Socket said.

“From a technology perspective, open source has gotten a boost,” Socket noted. “But what do we trade for that success? Whether this particular agent is operating in good faith is almost beside the point. The incentives are clear: trust can be quickly gathered and turned into influence or income.”

In general, building trust is a slow process, and that slowness creates friction for bad actors. The 2024 XZ-Utils supply chain attack, suspected to be the work of a nation-state, shows both the threat and the friction: although the rogue developer in that incident, Jia Tan, was eventually able to introduce a backdoor into the library, it took years of reputation building to get to that point.

From Socket’s point of view, Kai Gritun’s success suggests that a comparable reputation can now be built in a very short time, in a way that could accelerate supply chain attacks built on the same AI agent technology. This is not helped by the fact that maintainers have no easy way to distinguish genuine human contributors from artificial personas created with agentic AI, and they may also struggle to process the large number of PRs those agents can generate.

“The XZ-Utils backdoor was discovered by accident. The next attack on the supply chain may not leave an obvious trail,” said Socket.

“The key change is that software contribution itself is now programmable,” notes Eugene Neelou, head of AI security at API security company Wallarm, who also leads the Agentic AI Runtime Security and Self-Defense (A2AS) project.

“If contribution and reputation building can be automated, the point of attack moves from the code itself to the governance process around it. Projects that rely on informal trust and maintainer intuition will struggle, while those with strong, enforceable controls on AI contributions will hold up,” he said.

The best response, he argued, is to adapt to this new reality. “The long-term solution is not to block AI contributors, but to introduce machine-verified governance around software change, including provenance, policy enforcement, and testable contributions,” he said. “AI trust needs to be anchored in verifiable controls, not assumptions about contributor intent.”
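What such machine-verified governance might look like in practice is necessarily speculative. The sketch below shows one possible shape: a CI gate that evaluates a contribution against explicit, machine-checkable policies. The specific rules, thresholds, and field names are invented for illustration and are not drawn from A2AS, Wallarm, or Socket.

```typescript
// Sketch of a machine-verifiable contribution policy gate that could run in CI.
// The rules and thresholds are illustrative only.

interface Contribution {
  author: string;
  authorAccountAgeDays: number; // e.g. derived from the platform's user API
  commitsSigned: boolean;       // all commits carry verified signatures
  testsAdded: boolean;          // the change includes or updates tests
  ciPassed: boolean;            // the full test suite passed
}

type Policy = { name: string; check: (c: Contribution) => boolean };

const policies: Policy[] = [
  { name: 'provenance: commits must be signed', check: (c) => c.commitsSigned },
  { name: 'provenance: author account older than 90 days', check: (c) => c.authorAccountAgeDays >= 90 },
  { name: 'testability: change includes tests', check: (c) => c.testsAdded },
  { name: 'verification: CI suite passed', check: (c) => c.ciPassed },
];

// Evaluate every policy and report failures; a CI job would exit non-zero on any failure.
function evaluate(contribution: Contribution): string[] {
  return policies.filter((p) => !p.check(contribution)).map((p) => p.name);
}

const failures = evaluate({
  author: 'example-contributor',
  authorAccountAgeDays: 12,
  commitsSigned: true,
  testsAdded: false,
  ciPassed: true,
});

console.log(failures.length === 0 ? 'All policies passed' : failures);
```

The point of the sketch is that each requirement is checked by code rather than by a maintainer’s intuition, which is the shift Neelou describes.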

This article first appeared on InfoWorld.
