Cyber Security

A 5-step approach to AI risk management

AI is being used across organizations to increase productivity, accelerate innovation and improve business processes. The problem is that adoption has overtaken discipline. Only a minority (23.8%) of organizations have formal AI risk frameworks in place, which is how unauthorized "shadow AI" emerges, leading to untracked data exposure, compliance conflicts and poor decisions built on unreliable outputs.

An AI risk assessment and management approach, such as the NIST AI Risk Management Framework, combined with visibility into your environment, is critical to safe AI implementation. Together, they bring shadow AI into the open and put in place the controls needed for safe, mature AI adoption.

We noticed something was off when the new security tool started flashing warnings. Our first thought was that we had misconfigured a rule, until we dug a little deeper and realized that the warnings were all pointing to the same issue: production API keys appearing in outbound traffic.

The source was not a vulnerable system or a malicious actor. It was one of our product managers, trying to solve a production problem with the help of an AI tool, and unknowingly pasting production API keys into prompts.

We had invested heavily in education about the safe use of AI. We had trained our developers extensively to avoid feeding sensitive data, especially secrets and proprietary information, into public LLMs. What we didn't do was include product managers in that training.

Why? Because "they were never supposed to be writing code."

With AI tools lowering the barrier to coding and debugging, non-engineering roles now have the ability to interact with production data in ways that weren't possible before. The incident was not caused by bad intent or negligence. It came from a gap between how we assumed work gets done and how it actually gets done today.

Here’s a five-step approach to putting a strong AI risk management framework in place:

1. Discover and inventory shadow AI

Employees often use public model APIs, browser-based AI tools and chatbots that are neither authorized nor controlled to boost productivity, without considering the risk of exposing sensitive data.

The use of AI is not difficult to identify; you just need to look in the right place and ask the right questions. Targeted questionnaires paired with traffic analysis and testing can reveal usage and provide visibility.

Start by building a comprehensive inventory of the AI systems in use. This is already becoming a regulatory expectation, e.g., under the EU AI Act. Then prepare a questionnaire about AI usage scenarios relevant to different business units (e.g., financial reporting, contract review, marketing ideation) to identify risk areas, such as AI used for decision-making. Map these use cases to actual network calls through traffic or log analysis. This helps you understand the volume and types of calls flowing through your organization, allowing for a better-informed governance model.
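To make the traffic-analysis step concrete, here is a minimal sketch. It assumes a CSV export from your web proxy with user and destination_host columns and a small, hand-maintained watchlist of public AI endpoints; both the log schema and the domain list are illustrative placeholders you would replace with your own.

```python
# Minimal sketch: flag outbound calls to known AI service domains in a proxy log export.
# The domain list and log format are illustrative assumptions, not a vetted catalog.
import csv
from collections import Counter

# Hypothetical watchlist of public AI endpoints; replace with your own curated list.
AI_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "api.anthropic.com",
    "claude.ai",
    "gemini.google.com",
}

def inventory_ai_traffic(proxy_log_path: str) -> Counter:
    """Count requests per (user, AI domain) from a CSV proxy log.

    Assumes columns named 'user' and 'destination_host'; adjust to the
    actual export format of your proxy or secure web gateway.
    """
    usage = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("destination_host", "").lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                usage[(row.get("user", "unknown"), host)] += 1
    return usage

if __name__ == "__main__":
    # Print the heaviest user/service pairs so questionnaires can be targeted.
    for (user, host), count in inventory_ai_traffic("proxy_export.csv").most_common(20):
        print(f"{user:<25} {host:<30} {count} requests")
```

Even a rough tally like this is enough to tell you which teams to interview first and which use cases deserve a closer look.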

2. Standardize the assessment against industry frameworks

Once you have that visibility, the goal is to evaluate the exposure in a way business leaders can act on. The NIST AI Risk Management Framework gives you a practical lens through its four functions: Govern, Map, Measure and Manage.

Start with Govern: establish clear ownership, decision rights and acceptable-use rules for data handling and AI outputs. Next, Map actual usage, including which AI models are being used, who is using them, what data is being fed in and which workflows or decisions the outputs influence.

From there, Measure risk realistically by looking at three inputs together: the most likely ways things can go wrong (prompt-driven data leaks, plausible but incorrect outputs presented as fact, biased results that create compliance or reputational exposure), the potential business impact if that failure occurs (fines, contract exposure, loss of IP, litigation, remediation time, wasted effort and rework) and the actual exposure (how often users transmit high-risk data, overall data volume and usage spikes during peak workloads).
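One simple way to combine those three inputs is a scored register, sketched below. The 1-5 scales, the multiplicative model and the example use cases are simplifying assumptions for illustration, not part of the NIST AI RMF itself.

```python
# Minimal sketch: combine likelihood, impact and exposure into a comparable score
# so the riskiest AI use cases rise to the top of the remediation queue.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    likelihood: int   # 1 (rare) to 5 (routine): how often the failure mode is likely to occur
    impact: int       # 1 (negligible) to 5 (severe): business impact if it does
    exposure: int     # 1 (little sensitive data) to 5 (high-risk data at volume)

def risk_score(uc: AIUseCase) -> int:
    """Simple multiplicative score; any consistent scheme works as long as it is comparable."""
    return uc.likelihood * uc.impact * uc.exposure

# Hypothetical entries to illustrate ranking; replace with findings from your own discovery.
use_cases = [
    AIUseCase("Finance: forecast models pasted into a free chatbot", likelihood=4, impact=5, exposure=5),
    AIUseCase("Marketing: campaign brainstorming on public data", likelihood=4, impact=1, exposure=1),
]

for uc in sorted(use_cases, key=risk_score, reverse=True):
    print(f"{risk_score(uc):>3}  {uc.name}")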

Finally, Manage by priority, implementing controls that are proportionate to the risk. Enforce strong oversight where impact and likelihood are high; use simple guidelines where they are limited. For example, a finance team uploading forecast models to a free AI service is a clear, potentially high-impact scenario.

3. Use a layered security strategy

People, process and technology working in sync are an effective defense against AI risk. Train teams on data segregation and leave no ambiguity about not sharing PII or confidential information in public AI tools. Reinforce this behavior with tabletop exercises that show how plausible but wrong AI outputs can silently corrupt decisions, for example by inventing "growth drivers" that distort a forecast and cause real financial errors.

Next, operationalize the rules with an incremental rollout rather than trying to lock down all AI and data sharing at once. Start in "advisory mode," which flags risky prompts and helps you tune data-sharing limits. As you learn from usage patterns and reduce false positives, tighten the controls and move to blocking or redacting flagged prompts where appropriate.

Finally, use a platform layer to manage and monitor at scale. Start with DLP on AI traffic, then add AI-specific monitoring and prevention capabilities that analyze both syntax and semantics, score risk in real time and warn or intervene when interactions look suspicious.
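As an illustration of what "advisory mode" can look like in practice, here is a minimal sketch. The regular expressions are illustrative stand-ins; real DLP tooling relies on much richer detection, such as exact-match secret lists, entropy checks and classifiers.

```python
# Minimal sketch of an "advisory mode" check that flags risky content in prompts
# before they leave the network. Patterns below are illustrative placeholders only.
import re

RISK_PATTERNS = {
    "possible API key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def advise(prompt: str) -> list[str]:
    """Return human-readable warnings. In advisory mode we log and warn;
    in blocking mode the same findings would stop or redact the request."""
    return [label for label, pattern in RISK_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    findings = advise("Debug this: curl -H 'Authorization: Bearer sk-abcdef1234567890abcd' ...")
    if findings:
        print("Advisory: prompt contains", ", ".join(findings))
```

Starting in warn-only mode like this lets you measure false-positive rates before any enforcement decision affects day-to-day work.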

4. Enforce human-in-the-loop oversight

As we accelerate the adoption of AI, the elephant in the room that we often forget is unreviewed AI output flowing directly into production workflows.

The NIST framework emphasizes human-in-the-loop oversight for failures caused by plausible but incorrect AI outputs. If these results influence legal positions, financial decisions or customer communications without human review, we are looking at flawed decision-making across critical business functions.

A recommended approach is to have a human gatekeeper who is clearly accountable for specific outcomes, for example (a simple sketch of such a gate follows these examples):

  • Legal: attorneys review AI-assisted drafts to confirm clauses, obligations, definitions and jurisdiction-specific terminology before anything is shared externally.
  • Finance: senior analysts sign off on assumptions, formulas, source data and version control before AI-generated numbers inform forecasts or reporting.
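The sketch below shows one way to make that accountability explicit in software: AI output sits in a review state and can only be released by its named owner. The names and the in-memory objects are placeholders; a real implementation would sit on top of your ticketing or workflow tooling.

```python
# Minimal sketch of a human-in-the-loop gate: AI output is held for review
# and only released once the accountable reviewer signs off.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class GatedOutput:
    content: str
    category: str                    # e.g. "legal-draft", "financial-forecast" (hypothetical labels)
    owner: str                       # the accountable human gatekeeper
    approved_by: Optional[str] = None
    approved_at: Optional[datetime] = None

    def approve(self, reviewer: str) -> None:
        # Only the named owner can sign off; this keeps accountability unambiguous.
        if reviewer != self.owner:
            raise PermissionError(f"{reviewer} is not the accountable owner for {self.category}")
        self.approved_by = reviewer
        self.approved_at = datetime.utcnow()

    def release(self) -> str:
        """Only approved content can leave the gate and enter a production workflow."""
        if self.approved_at is None:
            raise RuntimeError("Blocked: output has not been reviewed by its human gatekeeper")
        return self.content

# Hypothetical usage: an AI-drafted clause cannot be shared until the owner approves it.
draft = GatedOutput(content="AI-drafted contract clause ...", category="legal-draft", owner="attorney.lee")
draft.approve("attorney.lee")
print(draft.release())
```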

5. Translate risk reduction into business growth

McKinsey research on digital trust suggests that companies leading in trust are 1.6 times more likely than others to achieve 10% or higher annual growth rates in both revenue and EBIT.

Ideally, AI risk management should be positioned as a core business initiative with clear operational value. The proof shows up in the metrics: fewer unsanctioned shadow AI tools in use, fewer incidents of sensitive data exposure, fewer audit findings to remediate and less rework caused by unreliable outputs.

When you translate these improvements into hours saved, reduced external consulting and review effort, and avoided incident response costs, AI risk management makes clear business sense.
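The arithmetic itself is simple, as the sketch below shows; every input value is a hypothetical placeholder, and the point is to substitute your own measured figures rather than to suggest any particular result.

```python
# Minimal sketch: turn risk-reduction metrics into a business figure.
# Every value below is a hypothetical placeholder; substitute your own measurements.
incidents_avoided_per_year = 4           # hypothetical
hours_per_incident_response = 60         # hypothetical
rework_hours_avoided_per_year = 250      # hypothetical
blended_hourly_cost = 95.0               # hypothetical fully loaded cost, in your currency
external_review_fees_avoided = 20_000.0  # hypothetical consulting/audit spend avoided

hours_saved = incidents_avoided_per_year * hours_per_incident_response + rework_hours_avoided_per_year
annual_value = hours_saved * blended_hourly_cost + external_review_fees_avoided

print(f"Estimated hours saved per year: {hours_saved}")
print(f"Estimated annual value: {annual_value:,.0f}")
```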

An effective risk management framework

Treating shadow AI as a priority risk is the starting point for an effective risk management framework. Start your journey to managing AI risk with discipline by:

  • Inventorying AI usage
  • Using a systematic risk assessment approach
  • Establishing and enforcing layered controls
  • Ensuring human supervision
  • Measuring continuously

This approach gives you clear visibility into AI usage and enforces a layered defense that helps your team get the best out of AI. You move from ad-hoc, pilot-stage AI experimentation to enterprise-grade discovery, risk mapping and proactive defense.

This article was published as part of the Foundry Expert Contributor Network.
