
Embedding compliance in AI adoption

Kyndryl’s Ismail Amla discusses the company’s new policy-as-code process, and how it can help solve AI problems such as agent drift.

When it comes to the adoption of AI in business, compliance concerns are becoming increasingly important.

According to Kyndryl’s most recent Readiness Report, 31pc of business customers cite regulatory or compliance concerns as the primary obstacle limiting their organization’s ability to scale recent technology investments.

The year 2026 marks an important point in the AI compliance timeline in particular, with the transparency rules of the EU’s AI Act coming into force in August.

Last month, Kyndryl announced its ‘policy as code’ capability – a new process designed to create AI workflows governed by business policy.

“Policy as code is the process of translating an organization’s rules, policies and compliance requirements into machine-readable code, so AI systems are limited to operating within pre-defined operational boundaries,” explains Ismail Amla, senior vice president at Kyndryl Consult. “Human experts continue to oversee all activities related to these processes.”
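To make the idea concrete, here is a minimal, hypothetical sketch of what translating business rules into machine-readable policy checks might look like. The rule names, thresholds and action types are invented for illustration; this is not Kyndryl’s actual implementation.

```python
# Toy "policy as code" sketch: business rules become machine-readable
# predicates that every proposed agent action is checked against.
# All rule names and limits below are invented examples.
from dataclasses import dataclass

@dataclass
class AgentAction:
    kind: str            # e.g. "refund", "data_export"
    amount: float = 0.0
    region: str = "EU"

# Each policy returns True if the action is allowed, False if it must
# be escalated to a human reviewer.
POLICIES = {
    "refund_limit": lambda a: not (a.kind == "refund" and a.amount > 500),
    "eu_data_residency": lambda a: not (a.kind == "data_export" and a.region != "EU"),
}

def evaluate(action: AgentAction) -> list[str]:
    """Return the names of policies the action violates (empty = allowed)."""
    return [name for name, rule in POLICIES.items() if not rule(action)]

print(evaluate(AgentAction(kind="refund", amount=900)))  # ['refund_limit']
```

Because the rules live in code rather than in a policy document, any action an agent proposes can be evaluated automatically before it executes, with violations routed to human oversight.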

Adaptive design

“Many organizations, especially those in complex, highly regulated environments, want to leverage agentic AI, but are held back by concerns about security, compliance and regulation,” says Amla.

Speaking to SiliconRepublic.com, he says that policy as code can help organizations support “consistent policy definitions” and define clear operational boundaries, ensuring that agent actions are explainable, reviewable and “consistent with organizational standards”.

Amla also says that this framework can help reduce costs, speed up decision-making, eliminate errors and “power AI-native workflows within the policy framework”.

“By embedding policy and regulatory requirements directly into AI agent operations, policy as code can help organizations create AI workflows that are governed, transparent, defined and aligned with business needs.”

But what about the long-term use of policy as code?

Amla says the biggest benefit of this system is “trust through strong governance, better transparency, lower operational risk and reliable AI at scale”.

“Managing agent workflow in this way supports the controlled and responsible deployment of policy-restricted AI agents in areas such as financial operations, public services, supply chains and other critical domains, where reliability and predictability are critical,” he explains.

Catch the drift

In the past year, according to Amla, the biggest change he has seen in the adoption of AI is that organizations are moving beyond proof of concept and “focusing more on what it takes to make AI work at production and scale”.

“That means more attention to infrastructure, governance, data quality and organizational readiness,” he says. “Organizations are moving from evaluation to making smarter decisions based on what they’ve learned, in order to improve high-value outcomes and performance and get a return on their investment.”

But with more focus on critical AI integration comes risk, especially if the organization is not fully prepared.

Amla warns of something called ‘agent drift’, which refers to when an AI agent can appear to be trustworthy while gradually departing from its original intent or goal, working towards unintended outcomes.
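A simple, hypothetical sketch of guarding against this kind of drift is to compare every action an agent proposes against the scope it was originally authorised for, and halt the agent once out-of-scope actions accumulate. The action names and threshold below are invented for illustration.

```python
# Hypothetical "agent drift" guard: the agent's original mandate is
# encoded as an allowed set of actions, and execution stops once too
# many proposed actions fall outside that scope. Names are invented.
ORIGINAL_SCOPE = {"read_invoice", "match_payment", "flag_discrepancy"}
DRIFT_THRESHOLD = 2  # out-of-scope actions tolerated before halting

def run_with_guard(proposed_actions: list[str]) -> tuple[list[str], bool]:
    """Execute in-scope actions; return (executed, drifted)."""
    executed, drift_count = [], 0
    for action in proposed_actions:
        if action in ORIGINAL_SCOPE:
            executed.append(action)
        else:
            drift_count += 1
            if drift_count >= DRIFT_THRESHOLD:
                return executed, True  # drifted: escalate to a human
    return executed, False

done, drifted = run_with_guard(
    ["read_invoice", "send_email", "match_payment", "delete_record"]
)
```

In this run the agent executes its two in-scope actions but is halted as drifted once `send_email` and `delete_record` fall outside its original mandate.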

“Agent drift creates pressing challenges for all organizations, but it is particularly acute in the public sector and highly regulated sectors, such as banking and healthcare,” Amla said.

“In these industries, organizations cannot move from pilots to production if issues of control, trust and compliance remain unresolved. It is clear that businesses need an urgent way to enforce what agents can do during operations and close management gaps long before the drift leads to financial or compliance failures.”

Amla believes that policy as code can help solve this problem, due to its ability to allow businesses to translate their rules and policies into machine-readable instructions that “govern how AI agents think, adapt and act”.

“This greatly reduces the risk of agent drift,” he said. “It also reduces the trust and compliance concerns that stand between large enterprises and a return on investment in AI.”

