This is the approach we are currently deploying for a time-constrained client facing exactly this challenge.
1. Establish Immediate Executive Accountability
AI governance cannot remain buried within IT or innovation functions.
A named, senior accountable executive—at Board or Executive Committee level—must take ownership of AI risk, compliance, and assurance.
Without this:
- Decision-making fragments
- Accountability diffuses
- And most importantly—you run out of time
2. Identify and Catalogue All AI Systems
You cannot govern what you cannot see.
An accelerated AI inventory must include:
- Internally developed models
- Third-party AI tools (including SaaS platforms)
- AI embedded within enterprise systems (CRM, HR, underwriting, etc.)
- Informal or “shadow” AI usage across the workforce
This is consistently the most underestimated step—and the one that exposes the greatest level of risk.
To execute at pace, we deploy Business Analysts with pre-defined checklists and structured cataloguing templates, feeding a centralised AI Inventory Catalogue.
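To make the catalogue concrete, here is a minimal sketch of what a single inventory record might capture, written in Python. The field names and the example entry are illustrative assumptions, not our actual template.

```python
# A minimal sketch of one AI Inventory Catalogue record.
# Field names are illustrative assumptions, not a prescribed template.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AIInventoryEntry:
    system_name: str
    business_owner: str                 # named accountable individual
    source: str                         # "internal" | "third_party" | "embedded" | "shadow"
    vendor: Optional[str] = None        # populated for third-party and embedded AI
    host_system: Optional[str] = None   # e.g. "CRM", "HR", "underwriting"
    purpose: str = ""
    risk_category: Optional[str] = None  # assigned in the classification step
    evidence: list[str] = field(default_factory=list)  # links to documentation

# Example: an AI capability discovered inside an enterprise CRM platform.
entry = AIInventoryEntry(
    system_name="Lead-scoring module",
    business_owner="Head of Sales Operations",
    source="embedded",
    vendor="ExampleCRM Ltd",            # hypothetical vendor
    host_system="CRM",
    purpose="Ranks inbound leads for follow-up",
)
```

Even a structure this simple forces the questions that surface shadow AI: who owns it, where it lives, and what evidence exists.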
3. Classify Systems Against AI Act Risk Categories
The AI Act is fundamentally risk-based.
Each system must be classified as:
- Prohibited
- High-risk
- Limited risk
- Minimal risk
This classification directly determines your legal obligations.
In practice, many organisations discover that systems assumed to be low-impact are, in fact, high-risk under the Act.
Speed is achieved by applying agreed classification criteria tailored to the client’s operating context.
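As an illustration only, a first-pass triage can be expressed as simple screening logic. The questions below are heavily simplified assumptions; formal classification still requires legal analysis against the Act's own definitions and annexes.

```python
# A first-pass triage sketch mapping screening answers to the four AI Act
# risk tiers. The screening questions are simplified assumptions; formal
# classification requires legal review against the Act itself.
def triage_risk_category(uses_prohibited_practice: bool,
                         in_high_risk_domain: bool,
                         interacts_with_people: bool) -> str:
    if uses_prohibited_practice:    # e.g. social scoring, manipulative techniques
        return "prohibited"
    if in_high_risk_domain:         # e.g. employment, credit, essential services
        return "high-risk"
    if interacts_with_people:       # chatbots, generated content: transparency duties
        return "limited risk"
    return "minimal risk"

print(triage_risk_category(False, True, True))  # -> "high-risk"
```

The value of encoding the criteria is consistency: two analysts screening the same system reach the same provisional answer, and the edge cases are escalated rather than guessed.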
4. Conduct Gap Assessments on High-Risk Systems
For high-risk systems, structured gap analysis is essential.
This must assess compliance against:
- Risk management frameworks
- Data governance and data quality
- Transparency and explainability
- Human oversight
- Accuracy, robustness, and cybersecurity
- Technical documentation and auditability
The most common gaps are not technical—they are governance and documentation failures.
We address this through standardised assessment templates aligned upfront with the client, enabling rapid and repeatable execution.
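A minimal sketch of such a template, covering the six dimensions above. The status values and the example system are assumptions for illustration.

```python
# A minimal sketch of a standardised gap-assessment record for one
# high-risk system. Dimension names follow the list above; the status
# values ("compliant" / "partial" / "gap") are assumptions.
ASSESSMENT_DIMENSIONS = [
    "risk_management",
    "data_governance",
    "transparency_explainability",
    "human_oversight",
    "accuracy_robustness_cybersecurity",
    "technical_documentation",
]

def open_gaps(assessment: dict[str, str]) -> list[str]:
    """Return the dimensions not yet evidenced as compliant."""
    return [d for d in ASSESSMENT_DIMENSIONS
            if assessment.get(d) != "compliant"]

# Example assessment for a hypothetical underwriting model.
result = {
    "risk_management": "compliant",
    "data_governance": "partial",
    "transparency_explainability": "gap",
    "human_oversight": "compliant",
    "accuracy_robustness_cybersecurity": "compliant",
    "technical_documentation": "gap",  # the most common failure in practice
}
print(open_gaps(result))
# -> ['data_governance', 'transparency_explainability', 'technical_documentation']
```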
5. Implement AI Governance and Control Frameworks
Compliance is not a one-off activity—it requires an operating model.
At minimum:
- Defined AI governance policies and standards
- Lifecycle controls (design → deployment → monitoring)
- Model validation and approval processes
- Monitoring and incident management
- Audit trails and documentation structures
This element often extends beyond the August deadline and should be treated as a formal AI compliance transformation programme.
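To illustrate what lifecycle controls can look like once operationalised, here is a minimal sketch of stage gates expressed as configuration. The gate names and required artefacts are assumptions, not a definitive control set.

```python
# A minimal sketch of lifecycle stage gates expressed as configuration.
# Gate names and required artefacts are illustrative assumptions.
LIFECYCLE_GATES = {
    "design":     ["risk assessment signed off", "data sources approved"],
    "deployment": ["model validation report", "approval by accountable executive"],
    "monitoring": ["performance dashboard live", "incident process assigned"],
}

def may_advance(stage: str, completed: set[str]) -> bool:
    """A system passes a gate only when every required artefact exists."""
    return all(item in completed for item in LIFECYCLE_GATES[stage])

print(may_advance("deployment", {"model validation report"}))  # -> False
```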
6. Reassess Third-Party and Vendor Risk
One of the most critical misunderstandings: vendor AI does not transfer responsibility.
You remain accountable as the deployer.
Immediate priorities:
- Review contracts for AI liability and compliance obligations
- Obtain technical documentation and conformity evidence
- Assess vendor alignment with the AI Act
- Identify and challenge “black box” dependencies
In reality, this is often the most time-consuming activity.
Many vendors are simply not prepared. Documentation is frequently inadequate or non-existent.
Our experience shows that effective pressure can be applied by forming client user groups—collectively driving vendors to respond.
Where vendors have built solutions through rapid experimentation, there is often a deeper issue: they may struggle to fully explain how their systems work.
That is not just a compliance risk—it is an operational and strategic risk.
7. Introduce Transparency and User Disclosure Mechanisms
Particularly for generative AI, organisations must ensure:
- Users are aware they are interacting with AI
- AI-generated outputs are clearly identifiable
- Appropriate disclaimers and usage guidance are in place
Failure here creates both regulatory exposure and reputational damage.
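As a simple illustration, disclosure can be enforced at the point of output rather than left to individual teams. The notice wording below is an assumption; the final text should come from legal and communications teams.

```python
# A minimal sketch of a user-disclosure wrapper for generative AI output.
# The notice wording is an assumption, not prescribed language.
AI_DISCLOSURE = "This response was generated by an AI system."

def with_disclosure(ai_output: str) -> str:
    """Attach a clearly identifiable AI-generated label to every output."""
    return f"{ai_output}\n\n[{AI_DISCLOSURE}]"

print(with_disclosure("Your claim has been provisionally approved."))
```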
8. Build AI Literacy Across the Organisation
AI compliance is not purely technical—it is organisational.
Employees must understand:
- What AI is and where it is used
- The associated risks
- Their responsibilities under governance frameworks
We deliver this through:
- Organisation-wide briefings (e.g. Teams sessions)
- Updated policy frameworks
- Integration into induction and training programmes
Without this, controls will be bypassed—often unintentionally.
9. Establish Monitoring, Reporting, and Incident Response
The AI Act requires continuous oversight.
Organisations must implement:
- Ongoing performance monitoring
- Detection of bias, drift, or system failure
- Incident reporting mechanisms
- Clear escalation routes
Critically, these controls must be operational, not theoretical.
We enforce this through regular audit cycles—because compliance does not sustain itself without active governance.
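For drift specifically, one widely used technique (not mandated by the Act) is the population stability index, which compares a model's production score distribution against its validation baseline. A minimal sketch follows, using an assumed alert threshold of 0.2, a common rule of thumb.

```python
# A minimal drift check using the population stability index (PSI).
# The 0.2 alert threshold is a common rule of thumb, an assumption here.
import math

def population_stability_index(expected: list[float],
                               actual: list[float],
                               bins: int = 10) -> float:
    """PSI between a baseline distribution and the current one."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def share(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            counts[min(int((v - lo) / width), bins - 1)] += 1
        return [max(c / len(values), 1e-6) for c in counts]  # avoid log(0)

    e, a = share(expected), share(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.2, 0.3, 0.4, 0.5, 0.6]   # scores at validation time
today    = [0.6, 0.7, 0.8, 0.85, 0.9]  # scores in production
if population_stability_index(baseline, today) > 0.2:
    print("Drift detected: raise an incident and escalate")
```

The specific statistic matters less than the principle: a defined metric, a defined threshold, and a defined escalation route when it trips.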
10. Prioritise and Sequence—You Will Not Fix Everything by August
A hard truth: full compliance by August is unrealistic for most organisations.
What is achievable:
- Identification of highest-risk exposure areas
- Implementation of interim controls
- A structured, defensible compliance roadmap
Regulators will favour organisations demonstrating control and intent over those who have taken no action.
Financial Consequences of Non-Compliance
The financial consequences of non-compliance with the EU AI Act are both direct (regulatory penalties) and indirect (commercial, operational, and strategic impacts).
Focusing on the fines, the breakdown is as follows:
Regulatory Fines (The Headline Risk)
The EU AI Act introduces penalties aligned in scale with GDPR:
- Up to €35 million or 7% of global annual turnover (whichever is higher) → For breaches involving prohibited AI practices
- Up to €15 million or 3% of global annual turnover (whichever is higher) → For non-compliance with high-risk AI obligations
- Up to €7.5 million or 1% of global annual turnover (whichever is higher) → For providing incorrect, incomplete, or misleading information to authorities
Implication: For large organisations, this moves into hundreds of millions in potential exposure.
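A worked sketch of that exposure for a hypothetical organisation with €5 billion in global annual turnover:

```python
# A worked sketch of the fine tiers above for a hypothetical organisation.
# The €5bn turnover figure is an assumption for illustration.
TURNOVER = 5_000_000_000  # hypothetical global annual turnover, in euros

tiers = {
    "prohibited practices":  max(35_000_000, 0.07 * TURNOVER),
    "high-risk obligations": max(15_000_000, 0.03 * TURNOVER),
    "incorrect information": max(7_500_000,  0.01 * TURNOVER),
}
for breach, ceiling in tiers.items():
    print(f"{breach}: up to €{ceiling:,.0f}")
# prohibited practices: up to €350,000,000, i.e. well into the hundreds of millions
```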
Impact: Loss of revenue streams, not just penalties; non-compliant systems can be suspended or withdrawn from the EU market.
Final Thought
The EU AI Act is not just regulation—it is a forcing function.
It is compelling organisations to confront a reality long avoided:
AI is already embedded in your operations—and you are already accountable for it.
Those who approach this as a compliance exercise will struggle.
Those who use it to build structured, trustworthy AI operating models will gain a meaningful competitive advantage.
Perspective
In my view, this will become a defining divide over the next 2–3 years:
- Organisations operating controlled, governed, and trusted AI, versus
- Organisations operating in opaque, unmanaged, high-risk environments
Only one of these groups will scale safely under increasing regulatory scrutiny.
