AI Cybersecurity: OpenAI and the Anthropic Race

AI cybersecurity has become a competitive arena between OpenAI and Anthropic: OpenAI is finalizing an advanced security product for limited release to partners, while Anthropic runs a tightly controlled effort called Project Glasswing aimed at finding software vulnerabilities before attackers do.
Summary
- OpenAI is finalizing an AI cybersecurity product that will be released first to a limited set of partners.
- Anthropic’s Project Glasswing is a managed program focused on proactively hunting critical software vulnerabilities.
- Both efforts raise important questions about who controls AI-powered offensive and defensive tools, and who is responsible when things go wrong.
Artificial intelligence has moved from a tool that helps defenders understand threats to one that can independently detect and exploit vulnerabilities. OpenAI and Anthropic are now moving directly into that space, with implications for governments, businesses, and the millions of software systems that underpin global financial infrastructure.
OpenAI is finalizing an AI cybersecurity product with advanced capabilities and plans to release it initially to a limited group of partners, according to Tech Startups. Anthropic runs a similar effort internally called Project Glasswing, a tightly controlled initiative designed to hunt down software vulnerabilities before malicious actors find them first.
The dual announcements mark a shift in how two leading AI labs are positioning themselves: both are moving from general-purpose AI toward security-specific products with explicit offensive and defensive capabilities. The question is no longer what AI can do for cybersecurity; it is who controls it and who should be responsible if it goes wrong.
Anthropic's Track Record
Anthropic has already demonstrated a measure of what AI security tools can achieve. As crypto.news reported, the company restricted access to its preview model Claude Mythos after early testing found it could expose thousands of critical vulnerabilities across widely used software environments, including a 27-year-old bug in OpenBSD and a 16-year-old remote code execution bug in FreeBSD. Anthropic said: “Given the rate of AI development, it won’t be long before such capabilities proliferate, possibly beyond actors who are committed to their safe use.”
Industry data cited by Anthropic shows a 72% year-over-year increase in AI-enabled cyber attacks, with 87% of global organizations reporting exposure to AI-enabled incidents in 2025. Project Glasswing is positioned as Anthropic's controlled effort to stay ahead of that curve.
The Dual-Use Danger of AI Security Tools
The deeper problem for regulators and the industry is that the same AI tool that finds vulnerabilities defensively can find them offensively. As noted by crypto.news, a joint study by Anthropic and MATS Fellows found that Claude Sonnet and GPT-5 could produce simulated exploits against Ethereum smart contracts worth $4.6 million in testing, and surfaced novel zero-day risks in roughly 3,000 recently deployed contracts.
That dual-use reality is what makes the controlled-release strategies both companies are following so important. But whether limited access is enough to prevent proliferation is a question the labs have yet to fully answer.
