AI Governance Is Now Infrastructure
Seven realities leaders can no longer ignore.
AI is no longer an experiment — it is everyday enterprise infrastructure. Yet governance has not kept pace.
78%
of organisations have deployed AI
(Gradient Flow, 2025)
11%
have fully implemented responsible AI capabilities
(Gradient Flow, 2025)
€15M
First generative AI GDPR fine — OpenAI, December 2024
(Italy's Garante)
The result is a new kind of blind spot: invisible AI infrastructure without visible governance.
Strategic Briefing
Published: March 2025
Executive Summary
This briefing covers seven governance realities — and the real-world evidence behind each one.
Governance Failures Are Already on Record
Deloitte, OpenAI, and Air Canada have all faced real consequences from deploying AI without proper oversight. These are not isolated incidents.
The Regulatory Floor Has Arrived
The EU AI Act sets fines of up to €35M or 7% of global turnover across four risk tiers. Getting your risk classification wrong is itself a violation. High-risk rules apply from August 2026.
Inventory Is Not Risk
SBOM shows you what's in your AI stack. VEX shows you what's actually exploitable. Without both, your team is chasing false alarms instead of fixing real problems.
Governance Must Become Metadata Infrastructure
AI agents need to know more than what data exists — they need its origin, sensitivity, and permitted uses, delivered in the moment they act. Without this, models misuse data and breach rules silently.
Shadow AI Is Already in Your Pipeline
The average enterprise sees 223 shadow AI incidents per month. AI-generated code and unapproved tools are reaching production with no oversight.
Policy Without Verification Is Theatre
Writing a policy is not the same as enforcing it. NLP-based tools can automatically map policy rules to system logs — turning compliance from a periodic audit into a continuous check.
Governance Enables Speed, Not Just Safety
Mastercard, IBM, and a leading North American insurer all show the same result: governance infrastructure makes AI deployment faster and more scalable, not slower.
When Governance Fails: Real Incidents, Real Consequences
These are not hypothetical scenarios. They are documented incidents from named organisations — each a direct consequence of deploying AI without adequate governance infrastructure.
Deloitte Australia — October 2025
Deloitte used GPT-4o to help write an AU$440,000 government report — without disclosing this or checking the outputs. The report included fake academic citations, invented footnotes, and a fabricated Federal Court quote. Deloitte refunded part of the contract after an external academic spotted the errors.
Source: The Register, Computerworld, Australian Financial Review — October 2025
OpenAI / Italy's Garante — December 2024
Italy's privacy regulator fined OpenAI €15 million — the first GDPR fine for a generative AI product. OpenAI had used personal data to train ChatGPT without a valid legal basis and failed to be transparent with users. The investigation took over a year.
Source: Reuters, Bloomberg Law — December 2024
Air Canada — February 2024
Air Canada's chatbot gave a customer incorrect information about bereavement fares. When taken to court, the airline argued it wasn't responsible for what its chatbot said. The court disagreed. Organisations cannot disclaim accountability for their AI systems.
Source: BC Civil Resolution Tribunal, 2024 BCCRT 149; AI & Society, Springer — October 2024
In each case, the failure was not the AI. It was the absence of a governance control plane around it.
The Governance Gap in Plain Sight
AI is being woven into procurement, HR, underwriting, and customer-facing workflows across the enterprise. Each deployment carries risk — yet in most organisations, no single team owns it.
Ownership is fragmented
No unified accountability across business units deploying AI systems
Audit trails are incomplete
Model decisions cannot be traced back to data, version, or policy context
Risk sits with no one
Regulatory, operational, and reputational exposure accumulates without a named owner
This layered model illustrates the structural gap. Business applications and AI agents are being deployed rapidly, but the foundational governance control plane — the Trust Infrastructure — remains absent or immature in most enterprises. Without it, every layer above is operating on unstable ground.
Section 1
The €35 Million Reality Check
The EU AI Act is the world's first comprehensive AI legal framework — with extraterritorial reach. Any enterprise serving EU markets or processing EU citizen data falls within scope.
€35M or 7% of global turnover
Maximum fine for prohibited AI practices — whichever is higher
€15M or 3% of global turnover
Maximum fine for high-risk AI non-compliance
€7.5M or 1.5% of global turnover
Fine for providing misleading information to regulators
For large enterprises, these are not theoretical risks. Enforcement has already begun.
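The penalty structure is "whichever is higher" of a fixed cap and a share of global turnover, which means exposure scales with company size. A purely illustrative calculation (the turnover figure below is a made-up example, not a real case):

```python
# Illustrative arithmetic only: EU AI Act maximum penalties are the greater
# of a fixed cap or a percentage of global annual turnover.
def max_penalty(global_turnover_eur: float, cap_eur: float, pct: float) -> float:
    """Return the maximum applicable fine for a given tier."""
    return max(cap_eur, pct * global_turnover_eur)

# Prohibited-practices tier: €35M or 7% of turnover, whichever is higher.
# For a hypothetical €1B-turnover enterprise, the 7% figure dominates:
print(max_penalty(1_000_000_000, 35_000_000, 0.07))  # 70000000.0 (€70M)
```

For any enterprise with turnover above €500M, the percentage component, not the fixed cap, sets the ceiling at this tier.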
EU AI Act: Risk Tier Structure
The Act classifies AI systems into four risk tiers. Governance obligations scale with risk level — and misclassification is itself a compliance failure.
"The EU AI Act introduces strict rules based on the level of risk your systems pose, with noncompliance penalties as high as €35 million or 7% of global revenue."
— Secureframe
Section 2
SBOM vs VEX: Inventory Is Not Risk
The Problem with Inventory Alone
SBOM tools are now a regulatory expectation under US Executive Order 14028. But a component inventory tells you what is in your stack — not whether any of it is exploitable in your specific deployment context.
SBOM alone generates noise, not signal.
VEX: Contextual Exploitability
VEX documents assign one of four statuses to each known vulnerability: Not Affected, Under Investigation, Affected, or Fixed.
This transforms a raw CVE list into a prioritised, actionable risk register — and sharply reduces the alert fatigue that consumes security teams operating on SBOM alone.
SBOM + VEX: Ingredients vs. Allergy Information
The distinction is practical, not academic. Consider the analogy: SBOM is the full ingredient list on a food label. VEX is the allergy advisory — the contextual filter that tells you what actually poses a risk to your environment.
Security and risk teams that operate on SBOM alone are making governance decisions without context. Integrating VEX into your AI supply chain programme is the difference between visibility and actionable risk intelligence.
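The SBOM-plus-VEX triage logic can be sketched in a few lines. This is a minimal illustration, not a real tool: the component names, CVE identifiers, and statuses below are invented for the example.

```python
# Hypothetical sketch: filtering SBOM-derived CVE findings with VEX statuses.
# All identifiers are illustrative, not real components or advisories.

# What inventory scanning alone produces: every known CVE in the stack.
sbom_findings = [
    {"component": "libfoo", "cve": "CVE-2025-0001"},
    {"component": "libbar", "cve": "CVE-2025-0002"},
    {"component": "libbaz", "cve": "CVE-2025-0003"},
]

# VEX adds deployment context: one of the four statuses per vulnerability.
vex_statements = {
    "CVE-2025-0001": "not_affected",         # vulnerable code path never executed
    "CVE-2025-0002": "affected",             # exploitable in this deployment
    "CVE-2025-0003": "under_investigation",  # triage pending
}

def prioritise(findings, vex):
    """Split raw CVEs into an actionable register and a triage queue,
    suppressing the 'not_affected' noise an SBOM-only view would surface."""
    register = [f for f in findings if vex.get(f["cve"]) == "affected"]
    triage = [f for f in findings if vex.get(f["cve"]) == "under_investigation"]
    return register, triage

register, triage = prioritise(sbom_findings, vex_statements)
print([f["cve"] for f in register])  # ['CVE-2025-0002']
```

Of the three findings an SBOM scan reports, only one demands immediate remediation — the VEX layer is what makes that distinction visible.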
Section 3
Governance Becomes Metadata Infrastructure
Traditional data catalogues were built for humans — static directories that told analysts what data existed and where to find it. They were not built for AI systems, which have fundamentally different requirements.
AI agents need more than a list of available data. They need to know its origin, how sensitive it is, what policies apply, and what it can be used for — and they need that information at the moment they act on it. Without it, models make decisions using data they shouldn't, produce outputs that breach regulations, and leave no trace that anything went wrong.
From Static Catalogue to Active Metadata Layer
The shift from passive data catalogues to active metadata infrastructure is one of the most consequential — and least discussed — transformations in enterprise AI readiness. Model Context Protocol (MCP) servers now act as the context delivery layer between governance systems and AI agents.
"Gartner identifies metadata as foundational to AI readiness and shows why organisations must shift from static catalogs to active systems."
— Emily Winks, Atlan
Organisations that treat metadata as a passive documentation exercise will find their AI programmes increasingly constrained by unexplainable outputs and audit failures. Metadata is now a runtime control — not a reference document.
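The "runtime control" idea can be made concrete with a minimal sketch of the check an agent performs before touching a dataset. The catalogue entries, dataset names, and purposes below are illustrative assumptions, not a real MCP or catalogue API.

```python
# Hypothetical sketch: an active-metadata check performed at inference time,
# before an AI agent uses a dataset. All names and policies are illustrative.
metadata_catalog = {
    "customer_transactions": {
        "origin": "core-banking",
        "sensitivity": "PII",
        "permitted_uses": {"fraud_detection"},
    },
    "public_fx_rates": {
        "origin": "market-feed",
        "sensitivity": "public",
        "permitted_uses": {"fraud_detection", "marketing", "forecasting"},
    },
}

def authorise(dataset: str, purpose: str, catalog: dict):
    """Return (allowed, reason) so every decision leaves an audit trail."""
    meta = catalog.get(dataset)
    if meta is None:
        return False, "no metadata: ungoverned dataset"
    if purpose not in meta["permitted_uses"]:
        return False, f"purpose '{purpose}' not permitted for {meta['sensitivity']} data"
    return True, "permitted"

ok, reason = authorise("customer_transactions", "marketing", metadata_catalog)
print(ok, reason)  # False purpose 'marketing' not permitted for PII data
```

The point is the returned reason: a catalogue that only lists datasets cannot produce it, whereas a metadata layer queried at decision time yields both the refusal and the auditable justification.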
Section 4
The Rise of Shadow AI
Shadow IT was manageable. Shadow AI operates at a different order of magnitude — and traditional AppSec tools were not built to detect it.
Embedded Vulnerabilities
AI-generated code may introduce security flaws that bypass existing static analysis tools.
IP Contamination
Unlicensed training data may surface in generated outputs, creating undetected legal exposure.
Control Bypass
AI-produced logic can circumvent existing security controls with no visible audit trail.

The average enterprise experiences 223 shadow AI incidents per month — twice as many as a year ago. (GRC Report, 2025)
Shadow AI in the Developer Pipeline
AI-native Application Security Posture Management (ASPM) tools are the emerging response to this governance gap. They instrument the development pipeline itself — detecting AI-generated code patterns, flagging policy violations at the point of commit, and integrating governance into CI/CD workflows rather than bolting it on afterwards.
Scripts & Automation
AI-generated scripts entering production without code review
Internal Assistants
LLM-powered tools built on unapproved models with no data classification
Dev Pipelines
AI pair programmers embedded in IDEs with no policy validation
Automation Tooling
AI-orchestrated workflows operating outside change management controls
Section 5
NLP: The Bridge Between Policy and Logs
There has always been a gap between how policies are written and how systems are monitored. Policies use natural language — describing who can do what, under what conditions. Systems produce logs, events, and telemetry. Historically, connecting the two required manual effort at audit time.
NLP-based compliance tools close this gap automatically. They break each policy clause into its core components — who, what, to what, and when — and match these against live system events. The result is continuous compliance monitoring, not a once-a-year review.
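The matching step can be sketched as follows. In a real system the who/what/to-what/when decomposition would be extracted from policy text by an NLP pipeline; here it is hand-written, and the clause ID, roles, and log fields are invented for illustration.

```python
# Hypothetical sketch: matching a parsed policy clause against log events.
# The decomposition below would normally be produced by an NLP pipeline;
# all identifiers (clause ID, roles, actions) are illustrative.
policy_clause = {
    "id": "DP-4.2",
    "who": "contractor",         # actor role
    "what": "export",            # action
    "to_what": "customer_data",  # resource
    "when": "never",             # condition: always prohibited
}

log_events = [
    {"actor_role": "employee",   "action": "read",   "resource": "customer_data"},
    {"actor_role": "contractor", "action": "export", "resource": "customer_data"},
]

def violations(clause, events):
    """Flag events matching a prohibited who/what/to-what pattern, tagging
    each alert with the clause it violates for auditable context."""
    return [
        {"clause": clause["id"], **e}
        for e in events
        if clause["when"] == "never"
        and e["actor_role"] == clause["who"]
        and e["action"] == clause["what"]
        and e["resource"] == clause["to_what"]
    ]

alerts = violations(policy_clause, log_events)
print(alerts[0]["clause"])  # DP-4.2
```

Because each alert carries the clause identifier it violates, the output is compliance evidence rather than a bare anomaly flag.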
From Policy Intent to Compliance Evidence
This architecture enables something previously impractical at scale: interpretable anomaly detection. When an alert fires, it can reference the specific policy clause it violates — providing both the compliance team and the security operations centre with immediate, auditable context.

Key Capability Unlocked: Alerts can now reference the specific policy clause they violate — transforming compliance from retrospective audit into continuous, real-time verification. This is the foundation of interpretable AI governance at enterprise scale.
Governance as Competitive Advantage: Who's Getting It Right
The organisations moving fastest on AI built governance infrastructure first — and used it to go faster, not slower.
Mastercard
Mastercard was managing hundreds of GenAI use cases across a complex regulatory environment. They deployed a centralised AI Registry and Vendor Registry via Credo AI — giving them full visibility across all use cases and third-party vendors, at speed.
"Using the Credo AI Platform, Mastercard is able to manage AI risk and responsibly implement generative AI — with better speed and scale than ever before."
— Andrew Reiskind, Chief Data Officer, Mastercard
Source: Credo AI Case Study
IBM
IBM integrated its previously separate AI governance and compliance systems into a single platform. The result: 58% faster data clearance for third-party data, 62% faster for IBM's own data, and over 1,000 datasets approved for reuse.
Source: IBM Case Study — January 2025
North American P&C Insurer
A leading North American insurer automated AI governance across 9 billion transactions and 180+ projects. Compliance stopped being a bottleneck and became part of how they scaled.
Source: Monitaur Case Study — 2025
In each case, governance didn't slow deployment. It made it defensible, scalable, and trusted.
Final Section
Governance as the Backbone of Trustworthy AI
The strategic framing has shifted. Governance is no longer a compliance gate — it is the foundational infrastructure that makes scaling AI viable, defensible, and sustainable.
Auditability
Every AI decision traceable to its data source, model version, and policy context
Explainability
Outputs that can be interrogated and justified to regulators, customers, and boards
Trustworthiness
Governance embedded at deployment, not retrofitted after an incident
Without a control plane, AI systems become black boxes. Decisions cannot be explained. Risk cannot be attributed. Regulatory inquiries cannot be answered with confidence.
The AI Trust Control Plane
A mature governance architecture integrates four interconnected capabilities into a unified control plane. Each component is necessary. None is sufficient alone.
Active Metadata
Delivers lineage, sensitivity, and policy constraints to AI agents at inference time — replacing static documentation with runtime governance.
VEX Intelligence
Transforms component inventories into prioritised, contextual risk signals — separating exploitable vulnerabilities from theoretical noise.
AI Governance Controls
Enforces access boundaries, maintains audit trails, and assigns accountability across the AI system lifecycle from development to decommission.
NLP Policy Verification
Automates the translation from policy intent to compliance evidence — enabling continuous verification rather than periodic manual audit.
Practical Checklist | Every Enterprise Now Carries Software-Level Governance Risk
Before your next AI deployment, your leadership team should be able to answer these five questions with confidence.
1
Do you have a named owner for AI governance risk?
If not, regulatory accountability is unassigned across your organisation.
2
Can you produce an audit trail for any AI-driven decision?
If not, you cannot respond to a regulatory inquiry or internal incident with confidence.
3
Have you classified your AI systems against the EU AI Act risk tiers?
If not, you may already be non-compliant without knowing it.
4
Are you monitoring deployed AI systems for drift, misuse, or accuracy degradation?
If not, your governance posture is static — blind to live system behaviour.
5
Do you have visibility into AI tools used outside approved procurement channels?
If not, shadow AI is already operating in your environment.
A "no" to any of these is not a gap to schedule — it is a risk already materialising.
Static Checklist
Point-in-time compliance. Reactive. Audit-driven. Disconnected from operational AI systems. Leaves the organisation blind between reviews.
Living Control Plane
Continuous governance. Proactive. Infrastructure-embedded. Scales with AI deployment. Provides real-time auditability and explainability.
Published: March 2025
Strategic Briefing for CISOs, CIOs & Enterprise Architects