Anthropic’s Mythos is changing the game, NCSC chief tells Oireachtas

‘We are in this race whether we choose to accept it or not,’ said Richard Browne.
The creation of Mythos shows what is possible with AI tools in the area of cybersecurity, National Cyber Security Centre (NCSC) director Richard Browne told the Oireachtas Joint Committee on Artificial Intelligence this afternoon (14 April).
“The issue is not that Anthropic created this. The issue is that Anthropic has shown that this is possible,” Browne said in response to questions from Social Democrats TD Sinéad Gibney, who said Anthropic was engaging in a “PR experiment”.
“This technology exists and can be used. [Currently] it is in the hands of the company. In five months – six months – it will be in the hands of an active state [actor],” said Browne. “Governance is good, it’s very important, but it doesn’t stop criminal actors.”
Anthropic launched Mythos earlier this month to a select group of top companies around the world. In its presentation, Anthropic highlighted Mythos’ ability to detect and act on threats at a faster rate than its competitors.
Concerned about bad actors, the company chose to release the tool so that businesses could strengthen their cyber defenses. In the days since the launch, leaders in the US, UK and Canada have expressed their concerns.
The NCSC, in a public statement yesterday (13 April), said Mythos appears to represent “a major change in the way hardware and software vulnerabilities are perceived”.
“Anthropic’s decision to limit model releases and work in partnership with industry is a credible approach,” it added.
AI has an “inherently unpredictable” impact on cybersecurity, Browne told the committee. He described AI as a “real revolution” and a “general change” that is set to affect all other digital technologies.
The question is no longer whether AI needs to be adopted, but rather how to do it safely, he added.
The first use of the technology is as a “force multiplier”, said Browne. It allows users to increase the scale of their operations – effectively “democratis[ing]” access by removing technical and language barriers. This allows novice users to pull off attacks using commercial AI tools.
Threat actors are already “heavy users” of AI tools, Browne said, while on the other side, security personnel are also employing agentic AI to bolster their defenses.
The National Cyber Risk Assessment launched in December reveals how AI is driving systemic vulnerability by increasing the speed, scale and sophistication of cyber attacks.
“We are in this race whether we choose to accept it or not,” said Browne. “The technology sector is advancing week by week, and the role of cyber-related risk management in society and the economy is becoming more and more important.” AI must be looked at as a tool, a threat and a target, the director added.
The speed at which AI models are developing is also giving rise to an “AI gap”, leaving regions that cannot adapt behind. Security can no longer be an afterthought, no matter how promising an AI system may be, Browne said.