AWS relies on early intelligence to deal with future AI and quantum threats

As Amazon celebrates 20 years of its AWS cloud this year, the world’s largest cloud computing provider is now facing two major cybersecurity threats – AI and quantum.
How the company will address these emerging issues to ensure the security and resilience of the systems used by millions of its corporate customers remains an evolving question. But senior management at AWS believes the key decisions and innovations the company has made during its 20-year tenure are addressing these threats.
Here’s a look at three key AWS developments and how they fit into what the company and its customers are facing as emerging threats now and in the coming years.
Nitro infrastructure and ‘zero humans’
When Amazon released the Virtual Private Cloud, its AWS network layer, in 2009, it was all software.
“Now VPC is being implemented in hardware,” said Eric Brandwine, who first came to AWS more than 18 years ago to work on that project and is now Amazon’s VP and principal security engineer.
What changed was the 2017 launch of Nitro, a hardware foundation for networking, security, and the hypervisor that enforces strict isolation between customer environments. Amazon acquired the fabless semiconductor company Annapurna Labs in 2015, reportedly for more than $350 million, to make the technology change happen.
“Commercial hypervisors are mature and appropriate technologies but they are not designed to scale to the cloud with the type of multi-tenancy that we have,” Brandwine tells CSO.
Nitro also enables Amazon to run AWS without employees ever touching customer infrastructure. “With Nitro, there is no human access to it,” he said. “This is one of the reasons we are able to offer bare-metal instances.”
If maintenance is required, all customer content is removed from the machine before employees access it.
“And we’ve had third parties look at this process,” he adds, including the NCC Group, which conducted an independent review of the Nitro System’s security claims in 2023.
Today, Nitro provides a root of trust that protects customers’ encryption keys, verifies the identity of AI agents, shields AWS infrastructure from malicious actors, and gives AI workloads themselves a confidential computing foundation.
Symmetric cryptography and the quantum threat
Back in the early 2010s, many hardware security modules used asymmetric cryptography to protect security keys. Asymmetric cryptography, the type used to secure Internet communications, involves pairs of keys – one to lock the data, the other to unlock it. It is a useful and convenient method when you are working with many parties.
Amazon chose to use symmetric encryption instead, where the same key is used to both lock and unlock data, because it is faster and more efficient.
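To make the distinction concrete, here is a toy sketch of the symmetric idea in Python: a single shared key both locks and unlocks the data. This is an illustrative construction built from stdlib hashing only, not production cryptography – real systems such as KMS use vetted ciphers like AES-GCM:

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by hashing key || nonce || counter (toy construction)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)              # fresh per message
    pad = keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, pad))

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    nonce, body = ciphertext[:16], ciphertext[16:]
    pad = keystream(key, nonce, len(body))
    return bytes(a ^ b for a, b in zip(body, pad))

key = secrets.token_bytes(32)                    # the one shared secret
ct = encrypt(key, b"customer data")
assert decrypt(key, ct) == b"customer data"      # same key locks and unlocks
```

Note there is no key pair anywhere: whoever holds `key` can do both operations, which is exactly why key management services exist to guard it.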
“One of the things we’ve done over the last 15 years is to ensure that when customers talk to us, we rely on symmetric cryptography,” said Ken Beer, director of cryptography for AWS. “And with the Key Management Service, which I helped start in 2013, we also decided to rely on symmetric cryptography to protect all keys.”
Today, more than 99.9% of all data encrypted at rest involves no asymmetric cryptography anywhere in the chain of keys that protects it, he says.
That was a very lucky decision.
The reason? Quantum computers are expected to eventually break today’s asymmetric encryption standards – but symmetric encryption is considered safe. And progress in quantum computing has been so rapid that Google and Cloudflare have moved up their migration timelines.
Companies of all sizes are now racing against the clock to migrate their cryptography to quantum-safe algorithms – unless, that is, their cryptography is already symmetric.
“We don’t have to change it, and we’re glad we don’t,” Beer said. As for the data already stored on Amazon’s servers, the company does not have to decrypt it and re-encrypt it with quantum-safe methods. It is already quantum safe.
That doesn’t mean Amazon uses no asymmetric encryption at all. Communication with trusted partners, or over the public Internet, requires it.
AWS is targeting 2028 and 2029 to complete its post-quantum migration for public certificate validation – the delay is there because the world still needs to agree on a common set of standards.
“It will require cooperation among five or ten major vendors,” said Beer. “Once we agree on a way to verify digital signatures, all the vendors who own different parts of the technology stack will go and implement it.”
Amazon has been a member of the CA/Browser Forum for more than a decade, he says, referring to the industry body that sets the rules for how public key infrastructure operates on the Internet. “We hope to have the industry moved over by 2029.”
AWS customers who let AWS do the cryptographic heavy lifting get post-quantum protection for free, with no extra effort. Those who run their own asymmetric cryptography, however, will have to do the hard work themselves.
“There’s probably a lot of crypto embedded in people’s systems,” Beer said. “Can I get at it? Can I change it? Do I have to go talk to a salesman I haven’t talked to in ten years – or is that company gone?” Those are the kinds of questions business customers should be asking.
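A first pass at answering those questions can be a simple inventory scan. The sketch below is a rough heuristic, not an AWS tool – the `inventory` helper and the algorithm list are invented for illustration – but it shows the idea: grep a source tree for mentions of quantum-vulnerable asymmetric algorithms.

```python
import re
from pathlib import Path

# Asymmetric algorithms considered quantum-vulnerable (non-exhaustive list).
VULNERABLE = re.compile(r'\b(RSA|ECDSA|ECDH|DSA|Ed25519|X25519)\b', re.IGNORECASE)

def inventory(root: str) -> dict[str, list[str]]:
    """Return {file: [algorithms]} for every Python file that mentions
    an asymmetric algorithm by name. Extend the glob for other languages."""
    findings = {}
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        hits = sorted({m.group(1).upper() for m in VULNERABLE.finditer(text)})
        if hits:
            findings[str(path)] = hits
    return findings
```

A name-based scan like this misses crypto hidden behind vendor SDKs or binaries – which is exactly Beer’s point about systems whose supplier may no longer exist.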
S3 security controls and the shared responsibility model
There have been no public incidents of AWS’s Nitro or encryption infrastructure being compromised. The NCC report, along with other analyst research, suggests it works.
But AWS data breaches are constantly in the news. The reason? AWS customers fail to secure their S3 buckets, leak credentials, hard-code keys, and make many other mistakes when managing their environments.
According to cybersecurity firm UpGuard, AWS S3 security is “flawed by design,” with the firm having discovered thousands of exposed buckets over the past few years.
“From the day S3 launched, buckets were private by default,” Brandwine recounts.
That’s accurate, says UpGuard – but AWS makes it too easy to accidentally misconfigure buckets, it concludes.
Brandwine agrees that there is a problem here. “If a customer has a bad day in the cloud, it’s something they did,” he said. “But if a bunch of customers are having a bad day in the cloud, we have to look.”
Let’s say, for example, a company uses an S3 bucket to host some content and later deletes the bucket – but there are still web pages, services, or tools that connect to it. Attackers can re-register these abandoned bucket names and use them for malicious purposes.
This is user error – customers who delete buckets should also remove the links that point to them. But it happens. And it happens often.
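Customers can audit for those dangling references themselves. The sketch below – with made-up bucket names and page content, and a regex that only covers virtual-hosted-style S3 URLs – scans content for bucket references and flags any bucket you no longer own:

```python
import re

# Hypothetical page content; bucket names are invented for illustration.
html = '''
<img src="https://old-assets.s3.amazonaws.com/logo.png">
<script src="https://cdn.example.com/app.js"></script>
<a href="https://reports-2019.s3.us-east-1.amazonaws.com/q4.pdf">Q4</a>
'''

# Matches bucket.s3.amazonaws.com and bucket.s3.<region>.amazonaws.com styles.
BUCKET_RE = re.compile(r'https?://([a-z0-9.-]+)\.s3(?:[.-][a-z0-9-]+)?\.amazonaws\.com')

referenced = sorted(set(BUCKET_RE.findall(html)))
print(referenced)                        # → ['old-assets', 'reports-2019']

still_owned = {"reports-2019"}           # buckets the company actually controls
dangling = [b for b in referenced if b not in still_owned]
print(dangling)                          # → ['old-assets'] – hijackable name
```

Any bucket name in `dangling` is a candidate for the hijack described above, because anyone can recreate a deleted bucket name in their own account.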
“So we created something called active defense,” said Brandwine.
When Amazon detects someone trying to use a dictionary to guess bucket names, “we lie to them and say, ‘Bucket not found,’” he says. “It defeats the scanning and effectively eliminates dictionary attacks against S3.”
But the AWS infrastructure is complex, and there are many cases where business customers can set policies incorrectly. And it’s not just customers.
Amazon employees also make mistakes. In CodeBreach, AWS engineers misconfigured AWS systems themselves, according to Wiz researchers.
Attackers have long been looking for misconfigurations, weak credentials, and similar customer-side problems to exploit. Now, with AI, the risk is greater than ever.
“AI is not going to change what the threat actors are doing,” said Gee Rittenhouse, VP of security services at Amazon. “It’s changing the speed and scale at which they operate. We still see the main threats, like phishing and data compromise, but the exploits are much faster.”
Amazon also uses this technology, he says.
At the end of March, AWS introduced its AWS Security Agent for on-demand penetration testing and the AWS DevOps agent, which resolves incidents automatically.
“We have offense competing with defense, and what used to take weeks we can now do in a few hours,” he said.
But there’s another way in which AI is Amazon’s biggest emerging threat. The AI agents that businesses build and deploy on AWS could be the next big breach vector – the new equivalent of unsecured S3 buckets.
Can Amazon take its success in securing its infrastructure and combine it with the lessons learned from years of S3 bucket breaches to build a security foundation for AI agents?
Rittenhouse says yes. And a lot of it comes down to the layer of agent authentication and access rights.
“We just released new authentication support based on OAuth 2 tokens,” he says. It’s part of Amazon Bedrock AgentCore Identity, and it tracks which user the AI agent is representing and what resources it is trying to access.
“It checks whether the agent can do this before it does it, at the infrastructure layer,” Rittenhouse said. “And if it’s not allowed, it can’t do it at that time – regardless of what the prompt says, or whether the agent has been tricked or taken over, our infrastructure doesn’t allow it.”
He adds: “That’s the advantage we have. We come at it from the infrastructure up.”
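The shape of such an infrastructure-level check can be sketched as follows. This is not the AgentCore API – the names, policy table, and scopes are invented for illustration – but it captures the idea: the decision is made from the agent’s token, before the action, no matter what the agent’s prompt says.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentContext:
    agent_id: str
    on_behalf_of: str          # the human user the agent is representing
    token_scopes: frozenset    # scopes carried by its OAuth 2 access token

# Hypothetical policy table: (resource, action) -> scope required to perform it.
POLICY = {
    ("orders-db", "read"): "orders:read",
    ("payments-api", "charge"): "payments:write",
}

def authorize(ctx: AgentContext, resource: str, action: str) -> bool:
    """Evaluated at the infrastructure layer before the agent acts,
    independent of whatever the agent's prompt or plan claims."""
    required = POLICY.get((resource, action))
    if required is None:
        return False                     # unknown operation: deny by default
    return required in ctx.token_scopes

ctx = AgentContext("agent-42", "alice@example.com", frozenset({"orders:read"}))
print(authorize(ctx, "orders-db", "read"))       # → True
print(authorize(ctx, "payments-api", "charge"))  # → False: token lacks the scope
```

Because the check reads only the token and the policy table – never the agent’s own output – a jailbroken or hijacked agent gains no extra permissions.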