Cyber Security

13 ways attackers use artificial intelligence to exploit your systems

“AI can defeat CAPTCHA programs and analyze voice biometrics to compromise authentication,” according to cybersecurity vendor Dispersive. “This capability underscores the need for organizations to adopt more advanced, state-of-the-art security measures.”

Using deepfakes for social engineering

AI-generated deepfakes are being manipulated to exploit channels that many employees fully trust, such as voice and video, instead of relying on unsatisfying email-based attacks.

The problem is getting worse with the widespread availability of AI technology capable of creating convincing deepfakes, according to Alex Lisle, CTO of deepfake detection platform Reality Defender.

“There was a recent case involving a cybersecurity company that relied on visual authentication to reset credentials,” Lisle said. “Their process requires a manager to join a Zoom call with IT to verify the employee’s identity before resetting a password.”

Lisle explains: “Attackers are now using deepfakes to impersonate those administrators in live video calls to authorize these resets.”

In the most high-profile example to date, a finance employee at design and engineering firm Arup was tricked into approving a fake HK$200 million ($25.6 million) job after attending a videoconference call where fraudsters used deepfake technology to impersonate its UK-based CFO.

Impersonating brands in aggressive ad campaigns

Cybercriminals have started using next gen AI tools to deliver brand impersonation campaigns delivered through ads and content platforms, rather than phishing or malware.

“Attacks are now using gen AI to mass-produce real ad copy, fake content, and fake endorsement pages, and then distribute them across search ads, social ads, and AI-generated content, targeting highly targeted queries like ‘product penetration’ or ‘product support,'” explains Shlomi Beer, founder and CEO at ImpersonAlly for specialized online security protection. ecosystem.

The tactic was used in an ongoing series of Google ad account fraud, impersonating the company’s Cursor AI code assistant, and a fake customer support scam for ecommerce platform Shopify, among other attacks.

Abusing OpenClaw

Attackers have also begun targeting personal AI agents such as OpenClaw.

OpenClaw offers an open source AI agent framework. The combination of supply chain attacks in the talent marketplace and poor remediation opens the door to potential exploits and malware launches, as CSO covered in more depth in our previous report.

“Cybercriminals can exploit these virtual assistants to steal private keys of cryptocurrency wallets and extract codes from victims’ machines,” said Edward Wu, CEO and founder of Dropzone AI. “We can expect 2026 to be the year that security groups will try to prevent the unauthorized use of AI personal agents.”

Memories of a toxic model

In order to provide short-term and long-term context, AI agents began to rely more on persistent memory, opening the door to operations that involved planting malicious memories.

If an attacker inserts malicious or false information into the agent’s memory, that corrupted context affects all future decisions made by the agent.

For example, security researcher Johann Rehberger showed how to plant fake memories in ChatGPT in September 2025.

“He [Rehberger] used a malicious image with hidden instructions embedded in it to insert the generated data into the model’s long-term memory,” said Siri Varma Vegiraju, security technology lead at Microsoft.

Hacking the AI ​​infrastructure

Over the past year, attackers have moved from exploiting artificial AI to targeting the infrastructure that enables it.

This attack vector is shown in sporting chain poisoning on Model Context Protocol servers, where vulnerable dependencies or modified code have introduced vulnerabilities to business environments.

For example, a fake “Postmark MCP Server” discovered in early 2025 silently BCC’d all processed emails, including internal documents, invoices, and details, on a domain controlled by the attacker.

Many other malicious MCP servers have already been identified in the wild, many designed to leak information without detection, according to Casey Bleeker CEO at SurePath AI.

“We track several MCP-specific risk categories: tool poisoning attacks, where adversaries insert malicious directives into AI tool definitions that an agent uses when an agent requests them; supply chain compromise, where a trusted MCP server or dependency is updated after authorization to misbehave; and data flow abuse where data is improperly disposed of by what appears to be legitimate AI activity,” he explains. Bleeker.

Check the truth

AI technology is powerful but has its limitations, several experts told CSO.

Rik Ferguson, VP of security intelligence at Forescout, says that cybercriminals rely more on AI to perform repetitive tasks instead of more complex tasks, such as exploiting vulnerabilities.

“Criminal use is reliable [of AI] “It’s left to heavy tasks and complex workflows like phishing and falsification, influencing and socializing, measuring and risking content, and generating boilerplate components, rather than reliably detecting and ultimately exploiting new vulnerabilities,” said Ferguson.

Over the past twelve months, affiliate detection and response company Huntress has tracked down threat actors who use AI to generate and execute common trades, from scripting to browser extensions and, in some cases, even phishing scams.

“We’ve also seen these ‘vibe coded’ scripts fail to perform and meet their objectives many times,” Anton Ovrutsky, chief strategic response analyst at Huntress, tells CSO.

And while AI has given threat actors a powerful tool, at least so far, it has failed to develop new tactics or exploit categories, according to Ovrutsky.

“A threat actor may be able to quickly display a sophisticated stealth script, however the basic ‘laws of physics’ remain; the threat actor must be in a position to execute that script in the first place,” Ovrutsky said. “We don’t have to look for an exploit powered by AI alone.”

Countermeasures

Overall misuse of next gen AI tools makes it easier for less skilled cyber criminals to make a dishonest living. Protecting the attack vector is a challenge for security professionals to use the power of artificial intelligence in a way that is more effective than attackers.

“Criminal misuse of AI technology drives the need to assess, detect, and respond to these threats, where AI is also used to combat cybercrime activity,” said Mindgard’s Garraghan.

In a blog post, Lawrence Pingree, VP of technology marketing at Dispersive, outlines the first cyber defenses security professionals can take to win what he describes as the “AI ARMS (Automation, Awareness, and Disinformation) race” between attackers and defenders.

“Relying on standard detection and response methods is no longer enough,” warns Pingree.

Along with employee education and awareness programs, businesses should use AI to detect and mitigate AI-based productivity threats in real time.

Forescout’s Ferguson says CISOs should treat enterprise AI like any other high-value SaaS platform.

“Enforce identity and conditional access, limit privileges, lock keys, and monitor AI/API usage and spending aggressively,” advises Ferguson.

Related Articles

Leave a Reply

Your email address will not be published. Required fields are marked *

Back to top button