The AI Act gives your teams permission to work the way your best people already do

Mon May 4 2026
Written by
Martijn Di Bucchianico
Topic
AI Act
AI Governance
This is the first blog in our three-part series on governing and scaling AI in practice.

In this series, we address three challenges that consistently come up when organizations move beyond AI experimentation into serious deployment. In this first piece, we look at the EU AI Act, not as a compliance checklist, but as a framework that asks every team to work the way your top performers probably already do. In part two, we examine the cycle of experimentation, centralisation and decentralisation, and how to move through it faster. Part three closes the series with a technical deep dive on three essential pillars for your agentic centre of excellence to focus on.

The other blogs in this series are coming soon. Keep an eye out for part two and part three.


Most AI teams we talk to already know what good looks like. They define success before they start. They understand the data they have and the data they need. They think through what needs to be done, how to do it, and what could go wrong. And when they do those things consistently, their projects move.

The AI Act, for all the compliance language it comes wrapped in, is essentially asking every team to do exactly that.

Why 95% of enterprise AI initiatives fail to scale

Gartner puts AI project failure rates at around 50%[1]. MIT's research on generative AI at enterprise scale puts the number closer to 95%[2]. I see similar results at my own clients, in conversations with many of Xomnia’s other clients, and from my colleagues too.

The pattern behind those failures is almost always the same. Teams kick off an initiative with momentum, build a proof of concept that works well enough in isolation, and then hit a wall. The wall is usually not technical. It is the absence of a clear answer to basic questions: What does success look like here? How will we know if this is working? What data do we actually need, and do we have it? Who is responsible if this goes wrong? 

Without those answers, projects stall or loop. Endless PoCs. Governance conversations that happen too late to change anything. Stakeholders who lose confidence and start asking whether AI is worth the investment at all.

What the AI Act actually requires

The obligations that kick in from August 2026 (yes, really, this coming August) are closer than most teams realise. But strip away the regulatory language and the AI Act is asking teams to do something straightforward: be intentional about what you are building, for whom, and with what risk profile.

That means documenting the business case. Defining what model performance needs to look like in production. Understanding where your training data comes from and whether it is fit for purpose. Identifying the failure modes that matter before they start happening in production.
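To make that concrete, here is a minimal sketch of what "documenting the intent" can look like as a development artefact rather than a legal one. Everything in it is a hypothetical illustration (the AISystemRecord name, the fields, the example values), not a template from the Act itself:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Hypothetical intent record kept alongside the codebase.

    Captures the questions the AI Act (and good practice) asks upfront:
    purpose, success criteria, data provenance, and known failure modes.
    """
    intended_purpose: str     # what decision does this system support?
    success_metric: str       # how "working" is measured in production
    success_threshold: float  # the bar the metric must clear
    data_sources: list[str] = field(default_factory=list)        # where training data comes from
    known_failure_modes: list[str] = field(default_factory=list)  # risks identified before launch

# Example: a fictional invoice-fraud screening model
record = AISystemRecord(
    intended_purpose="Flag invoices for manual fraud review",
    success_metric="precision among top-50 daily alerts",
    success_threshold=0.80,
    data_sources=["2022-2025 ERP invoice history", "chargeback labels"],
    known_failure_modes=["drift after supplier onboarding", "bias against new vendors"],
)
```

The point is not the format. It is that every field forces a question to be answered before modelling starts, which is exactly the thinking the Act wants to see evidence of.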

For teams that already work this way, compliance is not a major lift. The documentation becomes easier because the thinking was done upfront. For teams that have been moving fast and figuring it out as they go, the Act creates a useful forcing function.

The structure that makes genAI projects move

The clients whose AI initiatives consistently reach production share a few common habits. They treat the scoping phase seriously. Before any modelling work begins, they want to know: what decision does this model need to support, and what does a good decision look like? They have a working definition of success that is specific enough to test. And they have someone who owns the risk picture, not just the technical roadmap.

Those habits map almost exactly onto what the AI Act asks for. Which is why we think of the Act less as a compliance requirement and more as a framework that rewards teams who have been thinking carefully all along.

The practical implication is this: if your team adopts the AI Act's core requirements as a development discipline rather than a legal checkbox, you get clearer scope, faster decisions, and fewer projects that stop quietly in staging.

The question worth asking your team this week

Here is a simple diagnostic. Think about your last AI initiative. When you kicked it off, how clear was the definition of success? Not in broad terms, but specifically: what metric, at what threshold, measured how, over what time period?

If the answer is "we worked that out as we went," you have found the place where AI Act adoption will have the most impact.
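One way to make that diagnostic mechanical is to write the four elements down and check which are missing. A toy sketch, with hypothetical field names:

```python
# Toy diagnostic: which parts of a success definition are missing?
REQUIRED_FIELDS = ["metric", "threshold", "measurement_method", "time_period"]

def success_definition_gaps(definition: dict) -> list[str]:
    """Return the required elements that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not definition.get(f)]

# A vague kickoff answer fails the check:
vague = {"metric": "improve efficiency"}
print(success_definition_gaps(vague))     # ['threshold', 'measurement_method', 'time_period']

# A specific one passes:
specific = {
    "metric": "first-contact resolution rate",
    "threshold": "+5 percentage points vs. Q1 baseline",
    "measurement_method": "weekly A/B split in the support queue",
    "time_period": "12 weeks after rollout",
}
print(success_definition_gaps(specific))  # []
```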

The teams that will move fastest over the next two years are the ones building that clarity into how they work now, not the ones retrofitting it when the auditors ask.

 

[1] https://www.gartner.com/en/articles/genai-project-failure

[2] https://www.forbes.com/sites/jasonsnyder/2025/08/26/mit-finds-95-of-genai-pilots-fail-because-companies-avoid-friction/

Written by 

Martijn Di Bucchianico

Analytics Translator at Xomnia
