Most organisations deploying AI have read the model card. Very few have read the licence, audited the training data provenance, or mapped what happens when the provider updates the model without notice.
Foundation models are not software packages. They are complex artefacts shaped by training data, fine-tuning decisions, and reinforcement processes that no external party can fully inspect. When you build a production system on one, you inherit its characteristics, its constraints, and its risks.
What Is Foundation Model Supply Chain Risk?
Foundation model supply chain risk refers to the threats that arise from the upstream dependencies in an AI deployment: the training data, the model weights, the fine-tuning layers, the API through which you access it, and any updates applied to that model over time. Like software supply chain risk, it involves components you did not build and cannot fully audit. Unlike software supply chain risk, the risks are less visible and the governance frameworks are substantially less mature.
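One practical way to make those dependencies visible, offered here as an illustration rather than any standard schema, is to track each model as an explicit inventory record. The field names below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class ModelDependencyRecord:
    """One entry in a foundation model inventory. Field names are illustrative."""
    model_name: str                  # the provider's published model identifier
    provider: str                    # organisation supplying the weights or the API
    licence: str                     # the licence or terms-of-service version reviewed
    access_mode: str                 # "api" or "self-hosted weights"
    fine_tuning_layers: list[str] = field(default_factory=list)  # adapters applied in-house
    training_data_provenance: str = "undisclosed"   # what the provider has stated, if anything
    last_known_version: str = ""     # version or snapshot date last verified
    update_notification: bool = False  # does the provider commit to notice before updates?
```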
What Licensing and Legal Risks Do Foundation Models Carry?
Most open-weight foundation models in commercial deployment carry licence terms that restrict or complicate enterprise use. The legal risk is that your organisation assumes it has the right to use a model commercially, embed it in a product, or process certain categories of data with it, without having read the licence terms carefully.
Some widely used open-weight models prohibit use cases that organisations are already running. Others carry clauses around competing services, data usage, or IP ownership of model outputs that legal teams have not reviewed. The gap between what engineering teams deploy and what legal teams have approved is, in LimitedView's analysis of enterprise AI governance programmes, one of the most consistent failures across sectors.
This is not a hypothetical. It is a contractual exposure that exists in many organisations right now.
What Is Data Poisoning and Why Does It Matter for Enterprise AI?
Data poisoning is the manipulation of training data to influence model behaviour in targeted ways. If a foundation model was trained on data that included adversarially crafted content, it may exhibit behaviours that are subtle, consistent, and difficult to detect through normal testing.
Organisations deploying externally trained models have no visibility into whether the training data met integrity standards. Model cards provide summaries, not audits. The absence of disclosed vulnerabilities is not evidence that none exist.
For production deployments processing sensitive queries, customer data, or internal documents, this is not a theoretical concern. It is an uninspected risk that sits outside most threat models.
What Happens When the Model Changes Without Notice?
Model version drift is a real and underappreciated risk. A provider updates the underlying model. The API endpoint does not change. Your integration does not break. But the model's behaviour on certain inputs shifts.
In a well-governed deployment, your organisation detects this through behavioural monitoring and regression testing against established baselines. In a typical enterprise deployment, drift is discovered later: sometimes through an incident, sometimes through a compliance review that surfaces outputs that should not have been generated.
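A minimal sketch of what that regression testing can look like, under some assumptions: a stored baseline of prompt and expected-response pairs, a placeholder call_model() standing in for whichever provider client your deployment actually uses, and a simple text-similarity comparison that a real programme would replace with richer behavioural scoring:

```python
import json
import difflib

def call_model(prompt: str) -> str:
    """Placeholder for the provider client your deployment actually uses."""
    raise NotImplementedError

def check_drift(baseline_path: str, similarity_threshold: float = 0.9) -> list[dict]:
    """Re-run baseline prompts and flag responses that have shifted materially."""
    with open(baseline_path) as f:
        baseline = json.load(f)  # list of {"prompt": ..., "expected": ...}

    drifted = []
    for case in baseline:
        current = call_model(case["prompt"])
        similarity = difflib.SequenceMatcher(None, case["expected"], current).ratio()
        if similarity < similarity_threshold:
            drifted.append({
                "prompt": case["prompt"],
                "expected": case["expected"],
                "current": current,
                "similarity": round(similarity, 3),
            })
    return drifted
```

Running a check like this on a schedule, rather than only when a provider announces an update, is what turns drift from a surprise into a finding.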
LimitedView's AI Control Plane is designed to provide visibility into model behaviour over time, flagging drift before it becomes an incident. The control plane logs every request and response, enabling post-hoc audit and proactive comparison when a model update is suspected or confirmed by a provider.
What Should a Foundation Model Risk Assessment Cover?
A thorough assessment covers six areas: licence compliance, training data provenance, known vulnerability disclosures, output behaviour baselines, update notification processes, and contractual protections if the provider changes terms.
Most AI governance frameworks in active use today address one or two of these. The remainder are treated as vendor trust assumptions rather than verifiable controls.
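To make those six areas concrete, here is one way they might be expressed as a pre-deployment checklist each model has to satisfy. The structure and wording are illustrative, not drawn from any published framework:

```python
# Illustrative checklist covering the six assessment areas described above.
FOUNDATION_MODEL_RISK_CHECKLIST = {
    "licence_compliance": {
        "question": "Has legal reviewed the licence for this use case, including commercial and output-ownership clauses?",
        "evidence_required": "signed legal review",
    },
    "training_data_provenance": {
        "question": "What has the provider disclosed about training data sources and integrity controls?",
        "evidence_required": "provider documentation, or a documented gap",
    },
    "known_vulnerabilities": {
        "question": "Have published vulnerability disclosures for this model been reviewed?",
        "evidence_required": "disclosure review record",
    },
    "output_behaviour_baselines": {
        "question": "Is there a stored baseline of expected outputs for regression testing?",
        "evidence_required": "baseline test suite",
    },
    "update_notification": {
        "question": "Does the provider notify before model updates, and who receives that notice?",
        "evidence_required": "contractual clause, or a documented gap",
    },
    "contractual_protections": {
        "question": "What happens contractually if the provider changes terms or deprecates the model?",
        "evidence_required": "contract review",
    },
}
```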
The conversation that needs to happen is between your legal team, your security function, and your AI deployment teams, and it needs to happen before production deployment rather than after the first audit finding. That ordering matters. Retrofitting governance onto a system already in production is harder, more disruptive, and more expensive than building it in from the start.
How Do You Govern Foundation Models Across a Multi-Model Environment?
Many organisations now run multiple foundation models across different use cases: one model for internal knowledge search, a different model for customer-facing response generation, another for code assistance. Each carries its own risk profile, its own licensing terms, and its own update cadence.
Governing this without a centralised control layer means relying on individual teams to maintain awareness of the risks specific to their deployment. That approach does not scale and does not produce consistent security outcomes across the organisation.
A model-agnostic governance layer that applies policy uniformly across providers, logs behaviour, and surfaces anomalies is the infrastructure that makes multi-model deployment auditable and defensible. The alternative is trusting that each deployment team is conducting its own risk assessment correctly, consistently, and in sync with your legal and compliance requirements.
That trust has a poor track record.
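For illustration, a rough sketch of the shape such a layer can take: a single wrapper that every request passes through regardless of which provider serves it, so policy checks, logging, and anomaly flagging are applied once rather than re-implemented per team. The provider clients, policy checks, and anomaly check below are placeholders, not a description of any particular product:

```python
import logging
from typing import Callable, Optional

logger = logging.getLogger("model_governance")

class GovernanceLayer:
    """Model-agnostic wrapper: uniform policy, logging, and anomaly flagging."""

    def __init__(self,
                 providers: dict[str, Callable[[str], str]],
                 policy_checks: list[Callable[[str], Optional[str]]],
                 anomaly_check: Callable[[str, str], bool]):
        self.providers = providers          # name -> client function, one per model
        self.policy_checks = policy_checks  # each returns a violation message or None
        self.anomaly_check = anomaly_check  # flags suspicious prompt/response pairs

    def complete(self, provider_name: str, prompt: str) -> str:
        # Apply the same policy to every provider before the request leaves.
        for check in self.policy_checks:
            violation = check(prompt)
            if violation:
                logger.warning("blocked provider=%s reason=%s", provider_name, violation)
                raise PermissionError(violation)

        response = self.providers[provider_name](prompt)

        # Log every request and response so post-hoc audit is possible.
        logger.info("provider=%s prompt=%r response=%r", provider_name, prompt, response)

        # Surface anomalies for review rather than returning them silently.
        if self.anomaly_check(prompt, response):
            logger.warning("anomaly flagged provider=%s", provider_name)

        return response
```

The design choice that matters is the single entry point: once every request flows through one layer, policy and logging stop depending on each team remembering to apply them.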


