AI Governance · 4 March 2026 · 5 min read

Why Every AI Request Needs a Policy Decision

Shadow AI is the new shadow IT. Without governance, every AI interaction is an unaudited decision. Here's why policy-first matters.

In 2012, shadow IT was the problem security teams could not contain. Employees were provisioning their own cloud storage, connecting personal devices to corporate networks, and using consumer applications to process organisational data. The tools were faster, easier, and more useful than what IT had approved. The risk was invisible until it was not.

In 2026, the same pattern is repeating with AI. Employees across every sector are using AI models to draft correspondence, summarise confidential documents, analyse customer data, and generate code. They are doing it through accounts they provisioned themselves, through providers their organisation has never evaluated, with no audit trail of what was submitted or what came back. Shadow AI is not primarily a security problem. It is a policy problem that creates security consequences.

The Unaudited Decision

Every AI request is a decision. It involves a choice of model, a choice of what data to share, and an implicit acceptance of the provider's data handling terms. In most organisations today, those decisions are made individually by the employee making the request. There is no policy enforcement. There is no audit trail. There is no governance.
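To make those implicit choices concrete, here is a minimal sketch of the decision record that every AI request embeds. The field names are illustrative, not drawn from any particular product; in a shadow AI scenario, none of this is written down anywhere:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIRequestDecision:
    """The implicit decisions embedded in a single AI request.

    Under shadow AI, every field here is decided by the individual
    employee and recorded nowhere.
    """
    requester: str            # who made the request
    model: str                # which model was chosen
    provider: str             # whose data handling terms were accepted
    data_classes: list[str] = field(default_factory=list)  # e.g. ["customer-pii"]
    authorised: bool = False  # was this processing event approved?
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```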

The exposure this creates is not primarily about AI models leaking data to competitors, though that risk exists. The more immediate exposure is regulatory. Under UK GDPR and equivalent frameworks, an organisation is responsible for the processing decisions made with personal data regardless of which tool or employee made them. An employee submitting customer data to a consumer AI service is a data processing event. If that event is not documented and authorised, the organisation is non-compliant. The fact that it happened without the knowledge of the information security team is not a defence.

Why Policy Cannot Be Retroactive

The instinct in many organisations is to address shadow AI after the fact: audit what tools employees are using, classify them by risk, and issue guidance. This approach has a structural problem. By the time an audit identifies a pattern of AI usage, the data submitted in those interactions has already left the organisation's control. Retroactive policy cannot remediate data that has already been processed by an unapproved provider.

Effective AI governance must be upstream. Policy decisions must happen at the point of request, before the data leaves the perimeter. This requires infrastructure that sits between the user and the AI provider, intercepting requests, applying policy rules, routing to approved models, and generating an immutable audit record.
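In code terms, the interception point reduces to a small, strictly ordered flow. The sketch below is illustrative only: evaluate_policy, send_to_provider, and audit_log are hypothetical stand-ins for real infrastructure. The point is the ordering, not the implementation: the policy decision happens before the provider ever sees the data.

```python
from dataclasses import dataclass

@dataclass
class PolicyDecision:
    allowed: bool
    reason: str
    approved_model: str | None = None

audit_log: list[tuple] = []   # stand-in for an immutable, signed audit store

def evaluate_policy(request: dict) -> PolicyDecision:
    # Hypothetical rule: confidential data never leaves for an external model.
    if "confidential" in request.get("data_classes", []):
        return PolicyDecision(False, "confidential data blocked from external providers")
    return PolicyDecision(True, "approved", approved_model="approved/internal-model")

def send_to_provider(request: dict, model: str) -> str:
    return f"[{model}] response to: {request['prompt']}"   # stand-in for a real API call

def handle_ai_request(request: dict) -> str:
    """The interception point: policy first, routing second, audit always."""
    decision = evaluate_policy(request)
    audit_log.append((request, decision))          # denied requests are recorded too
    if not decision.allowed:
        raise PermissionError(decision.reason)
    return send_to_provider(request, decision.approved_model)

print(handle_ai_request({"prompt": "Draft a customer reply", "data_classes": ["internal"]}))
```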

This is architecturally similar to how mature organisations handle web access. A proxy or secure web gateway does not ask employees to remember the acceptable use policy. It enforces it automatically, at the network layer, for every request. AI governance requires the equivalent: a control plane that applies organisational policy to every AI interaction, automatically and transparently.

What Policy-First AI Governance Requires

A workable AI governance framework needs three operational components. First, a model gateway that routes requests through approved providers rather than allowing direct consumer access. Second, a policy engine that evaluates each request against organisational rules covering data classification, role-based access, and regulatory constraints before it is processed. Third, an audit trail that is tamper-evident, granular enough to support incident investigation, and structured to satisfy regulatory inquiry.
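The second component is the easiest to make concrete. As an illustration only (the classifications, roles, and provider names below are hypothetical, and a real deployment would express them in a dedicated policy language rather than application code), a policy engine reduces to rules mapping data classification and role to an allowed set of providers:

```python
# Hypothetical rule set: (data classification, allowed roles, allowed providers).
RULES = [
    ("public",       {"*"},                 {"*"}),
    ("internal",     {"employee", "admin"}, {"approved-eu", "approved-us"}),
    ("customer-pii", {"dpo", "admin"},      {"approved-eu"}),   # regulatory constraint: stay in-region
]

def check(data_class: str, role: str, provider: str) -> bool:
    """Return True only if the request complies with the rule for its data class."""
    for dc, roles, providers in RULES:
        if dc == data_class:
            role_ok = "*" in roles or role in roles
            provider_ok = "*" in providers or provider in providers
            return role_ok and provider_ok
    return False  # unclassified data is denied by default

assert check("public", "employee", "consumer-ai") is True
assert check("customer-pii", "employee", "approved-eu") is False  # role not authorised
```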

None of these components require employees to change their behaviour significantly. The goal is not to make AI harder to use. It is to make every use of AI a governed event without adding friction that drives further shadow usage.

The AI Control Plane Model

LimitedView's AI Control Plane implements this architecture for enterprise deployments. Every AI request made through the Control Plane passes through an OPA-backed policy engine that evaluates the request against the organisation's defined rules before routing it to the appropriate model provider. The routing decision considers quality, cost, and latency preferences alongside policy compliance. The complete interaction is logged with HMAC signing to prevent tampering.
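HMAC signing itself is straightforward to illustrate. The following is a generic sketch using Python's standard library, not LimitedView's implementation; the key handling and record fields are assumptions. Any edit to a signed record breaks verification, which is what makes the trail tamper-evident:

```python
import hashlib, hmac, json

SIGNING_KEY = b"replace-with-a-key-from-your-secrets-manager"  # never hard-code in production

def sign_record(record: dict) -> dict:
    """Attach an HMAC-SHA256 signature computed over the canonicalised record body."""
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_record(record: dict) -> bool:
    """Recompute the signature over the record body and compare in constant time."""
    body = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

entry = sign_record({"user": "a.smith", "model": "approved/model-x", "prompt_hash": "9f2c"})
assert verify_record(entry)            # unmodified record verifies
entry["model"] = "consumer/model-y"
assert not verify_record(entry)        # any tampering breaks the signature
```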

For high-sensitivity decisions (legal analysis, financial modelling, or anything where the stakes of a single AI output are significant), the Control Plane supports council mode: multiple models evaluate the same prompt independently and their outputs are presented for human review before any action is taken. This is not AI oversight as a compliance formality. It is AI governance as operational risk management.
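Conceptually, council mode is a fan-out: the same prompt goes to several independent models in parallel, and a human sees every output before anything is acted on. A minimal sketch of the idea, where query_model and the council member names are hypothetical stand-ins for real provider calls:

```python
from concurrent.futures import ThreadPoolExecutor

COUNCIL = ["provider-a/model-1", "provider-b/model-2", "provider-c/model-3"]  # illustrative

def query_model(model: str, prompt: str) -> str:
    """Stand-in for a real provider API call."""
    return f"{model} answers: ..."

def council_review(prompt: str) -> dict[str, str]:
    """Send the same prompt to every council model independently.

    Returns all outputs for side-by-side human review; nothing is
    auto-executed on the basis of any single model's answer.
    """
    with ThreadPoolExecutor(max_workers=len(COUNCIL)) as pool:
        futures = {model: pool.submit(query_model, model, prompt) for model in COUNCIL}
        return {model: f.result() for model, f in futures.items()}

for model, answer in council_review("Summarise the indemnity clause.").items():
    print(f"--- {model} ---\n{answer}")   # presented together for the human reviewer
```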

The organisations that will navigate AI regulation most effectively are not those with the best AI policies on paper. They are the ones with the infrastructure to enforce those policies at the point of every request, automatically, without depending on individual employees to remember what they are allowed to do.
