
How fully homomorphic encryption reinvents secure AI

Zama’s Jeremy Bradley discusses why the rise of AI is forcing businesses to take privacy more seriously and how new technologies are responding.

Published last month, the International AI Safety Report 2026 makes two things clear: where AI deployment only a year or two ago was still limited to pilots and narrow-purpose tools, the technology’s capabilities have developed at lightning speed, and adoption even faster. More than 700m people now use leading AI systems every week, an adoption rate that far outstrips previous technologies such as the personal computer.

For companies that see this shift – in both the technology’s potential and the demand for it – the lure of monetizing AI is strong. Delivered largely through cloud-based data processing, AI opens the door to everything from automating decisions and extracting value from data at scale to moving faster than competitors as an early adopter. Many have already embedded it in key workflows – including pricing, decision-making, R&D, legal, healthcare and finance.

The paradox of transparency

However, as soon as AI touches core IP and regulated data, businesses hit a roadblock. Because these systems are open by design, they struggle to support real-world use cases involving confidential data of any kind (payments, ownership, corporate finance and so on).

This has not slowed the adoption of AI, but it has made it uneven. Specifically, we’ve seen rapid experimentation at the edges, where AI poses little risk, but caution whenever it comes to training AI systems on – or having them interact with – sensitive, regulated or proprietary data. The result: production use limited to small-scale deployments; reliance on thin, clean or synthetic datasets; or high-value workloads kept out of fully shared cloud environments.

All of this stems from the risk of data exposure, whether through third-party infrastructure, data being reused in ways that are difficult to audit, or information embedded in models that is difficult to inspect or remove. Recent high-profile failures in the headlines (data leaks, model-inversion attacks, regulatory enforcement, AI misuse scandals and so on) have reinforced concerns about what happens to sensitive data when it enters an AI system.

Alongside this, AI governance is moving from vague policy to fiduciary duty. The UK Information Commissioner’s Office (ICO) position on AI compliance, for example, is that any organization using AI to process personal data must comply with data protection law, no matter how complex or opaque the system. The European Data Protection Board (EDPB) has taken a very similar position.

Where does private AI come in?

In practice, the above raises big questions – especially for regulated industries – about accountability, data retention and compliance with privacy legislation. But it also leaves businesses weighing the pressure to quickly implement AI against the risk of exposing data they can’t afford to lose or misuse. And it is this space that privacy-preserving technology – fully homomorphic encryption (FHE) specifically – is beginning to address.

FHE has long existed as a mathematical theory, promising the ability to compute on encrypted data without decrypting it. However, until recently, its use was limited; implementation was slow, resource-intensive and difficult for developers to integrate with real-world systems.
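To make that core promise concrete, here is a minimal sketch (illustrative only) using the open-source TenSEAL library discussed below, with its exact-arithmetic BFV scheme: two values are encrypted, then added and multiplied while still encrypted, and only the final results are decrypted. The parameters are demo values, not production choices.

```python
# A minimal sketch of the core FHE property: computing on ciphertexts.
# Illustrative only; uses the open-source TenSEAL library (pip install tenseal)
# with its exact-arithmetic BFV scheme. Parameters are demo values.
import tenseal as ts

# Key generation: the context holds the keys and scheme parameters
context = ts.context(
    ts.SCHEME_TYPE.BFV,
    poly_modulus_degree=4096,
    plain_modulus=1032193,
)

enc_a = ts.bfv_vector(context, [5])  # Enc(5)
enc_b = ts.bfv_vector(context, [3])  # Enc(3)

# Both operations happen on ciphertexts; the plaintexts are never exposed
enc_sum = enc_a + enc_b
enc_product = enc_a * enc_b

print(enc_sum.decrypt())      # [8]  == 5 + 3
print(enc_product.decrypt())  # [15] == 5 * 3
```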

Several recent breakthroughs, however, have brought FHE closer to being a large-scale, engineer-friendly technology. These include new cryptographic schemes such as CKKS, which supports approximate arithmetic on real numbers and is well suited to AI workloads; improved bootstrapping procedures, which significantly reduce the time required to refresh ciphertexts; and maturing libraries such as TenSEAL and Concrete, which make it easier for developers to use FHE at scale. Additionally, hardware acceleration with GPUs and FPGAs has reduced the computational overhead, while developer-friendly APIs have made integration with existing workflows far more seamless.
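As a rough illustration of what CKKS enables, the sketch below (hypothetical feature values and weights, using TenSEAL’s Python API) encrypts a feature vector and computes a linear-model score – an encrypted dot product against plaintext weights – with only the final result ever decrypted.

```python
# A rough sketch of CKKS-style encrypted inference with TenSEAL.
# Feature values, weights and parameters are hypothetical; real
# deployments need careful parameter and precision choices.
import tenseal as ts

# CKKS context: approximate arithmetic over real numbers
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2**40
context.generate_galois_keys()  # rotations, needed for dot products

features = ts.ckks_vector(context, [0.5, 1.2, -0.3])  # encrypted input
weights = [0.8, -0.1, 0.4]                            # plaintext model weights
bias = 0.05

enc_score = features.dot(weights)      # inner product, computed encrypted
score = enc_score.decrypt()[0] + bias  # decrypt only the final result
print(round(score, 3))                 # approx 0.21 (CKKS is approximate)
```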

All of this means that – for the first time – developers can design AI pipelines where privacy is guaranteed by the architecture itself, rather than enforced externally. This makes it possible to extend AI into healthcare, finance, insurance and other regulated domains without compromising privacy – a development that will see private AI become the norm.
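A simplified sketch of what “privacy guaranteed by the architecture” can look like in practice: the client keeps the secret key, while the server receives only a public context and ciphertexts, so it is structurally unable to read the data it computes on. (Again illustrative, using TenSEAL; the client/server split and values are assumptions, not a production design.)

```python
# Simplified client/server split with TenSEAL: the server never holds the
# secret key, so it can compute on the data but can never read it.
# Illustrative sketch only; names and parameters are assumptions.
import tenseal as ts

# --- Client side: generate keys and encrypt ---
client_ctx = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
client_ctx.global_scale = 2**40
client_ctx.generate_galois_keys()

# Serialize a context WITHOUT the secret key to send to the server
public_ctx_bytes = client_ctx.serialize(save_secret_key=False)
enc_payload = ts.ckks_vector(client_ctx, [120.0, 80.5, 33.2]).serialize()

# --- Server side: compute blindly on ciphertexts ---
server_ctx = ts.context_from(public_ctx_bytes)          # no secret key inside
enc_data = ts.ckks_vector_from(server_ctx, enc_payload)
enc_result = enc_data.dot([0.2, 0.3, 0.5])              # e.g. a weighted score
result_bytes = enc_result.serialize()

# --- Client side: only the key holder can decrypt ---
result = ts.ckks_vector_from(client_ctx, result_bytes).decrypt()
print(round(result[0], 2))  # ~64.75
```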

Who stands to gain the most?

The companies that succeed in this next phase of digital transformation will not be those that believe they are already doing enough. Nor will it be those who think privacy can be retrofitted later, and it certainly won’t be those who misread today’s relative silence around privacy as a lack of demand (as soon as effective solutions exist, expectations reset very quickly).

Instead, it will be those who treat private data as a strategic asset from day one, and who embed privacy by design.

Those in the latter camp will be the first to unlock:

  • Access to rich, sensitive, high-quality data, simply because customers trust them.
  • Speed of deployment in sensitive areas, thanks to fewer legal reviews, fewer bespoke controls and fewer internal vetoes. This is where time-to-market becomes the real differentiator.
  • Deeper integration and collaboration. Privacy-preserving systems enable collaboration across organizational boundaries (partners, suppliers, authorities) that was previously impossible. This expands addressable markets rather than just improving margins.

When will private AI become the norm?

As it stands, the technology is ready and the benefits are clear, but that alone is not enough to move privacy from a ‘nice to have’ to a board-level requirement for many businesses. For that to happen, a series of pressures will need to converge.

First, we’ll start to see a few large businesses and public-sector actors set privacy-preserving architectures as default requirements. That will tip the market, and the rest will quickly follow through supply chains and platforms.

At the same time, the demand for AI will continue to grow, along with the need for it to work on sensitive data. In turn, this will see AI legislation continue to mature, and boards won’t be asking “is privacy nice to have?” but “can we prove that data has never been leaked?”

Eventually, competitive pressure will do the rest. Companies that are slow to adopt privacy-by-design approaches will see competitors move quickly, unlocking high-value data and closing deals that remain out of their reach.

All this, I believe, will happen by the end of 2026. By 2027, expectations will have hardened into norms, and the costs of not embedding privacy by design will be visible, measurable and strategic.

Written by Jeremy Bradley

Jeremy Bradley is chief operating officer at Zama. He is a versatile leader who has worked with many organizations to develop strategy, drive communications and collaboration, and lead policy and process.
