Scaling agentic AI: why the centralise-decentralise cycle keeps repeating (and what to do about it)

Tue May 12 2026
Written by
Martijn Di Bucchianico
Topic
Agentic AI
AI Act
AI Governance
This is the second blog in our three-part series on governing and scaling AI in practice.

In part one, we covered how the EU AI Act can work as a strategic tool rather than a regulatory hurdle. In this piece, we focus on one of the most stubborn structural tensions in AI programs: the cycle between centralising control and decentralising execution, which keeps coming back, often at real cost. In part three, we pull governance and structure together and look honestly at what it takes to move GenAI from a promising demo into production.
Haven't read part one yet? Start there for the regulatory and strategic grounding this piece builds on.

Every organisation faces the same obstacle. First, one team builds something useful with AI. Then another team does the same thing. Then another. By month six, you have duplicate agents, conflicting pipelines, and disorder.

So you centralise. The right call. One team manages agentic AI. They create rules, stop problems before they happen, and show real results.

Then the trouble starts: everyone wants in.

When centralisation hits its ceiling

Once a centralised AI team proves its value, something predictable happens. Inboxes overflow. Backlogs stretch for months. Every team in the organisation wants their own agent, their own pipeline, their own slice of the AI capability.

This is actually the success phase. The technology has been proven. Demand is real. But centralised teams are limited by their size. They cannot be everywhere at once.

The solution most organisations find, eventually, is to decentralise again. But this time, smarter: using a hub-and-spoke model. In this post, we explore both the organisational and technical dimensions of that approach, because they mirror and reinforce each other.

Part one: the organisational hub-and-spoke

Central support, distributed innovation

The people who know your data best are the teams working with it every day. They understand the nuances, the edge cases, the real business problems. Instead of funnelling all their ideas through a bottleneck, empower them to innovate directly.

In the hub-and-spoke model, the Centre of Excellence takes on a different role. Instead of building everything itself, it serves as facilitator and enabler. It brainstorms with business teams, shapes ideas into viable projects, and most importantly ensures that development happens within a safe, governed environment.

That safe environment is not a nice-to-have. It is the mechanism that makes distributed innovation possible. Without it, you either run the risk of security incidents, or you slow innovation down because innovators do not know where to start.

The hub handles what distributed teams cannot

Standards and best practices. Shared infrastructure. Guidance when teams hit technical or governance questions they cannot resolve themselves. Think of the hub as a launchpad: the goal is to make it as easy as possible for spoke teams to build compliant, production-ready AI solutions without reinventing the wheel for every project.

Self-sufficiency as the goal

The best engagements of this kind follow a co-creation model. Internal capability is built throughout, not handed over at the end. This is the model we followed with Holland Casino and ABN AMRO Verzekeringen. When their centralised teams reached maximum capacity, we introduced multidisciplinary teams combining engineering, reporting, and business profiles. Teams took ownership of the products they built, while gaining deeper specialisation and better mobility across business areas. The rise of agentic AI gives you a golden opportunity to involve your business areas in developing this skill right from the start.

Part two: the AI gateway as technical hub-and-spoke

The same principle, expressed in architecture

Just as a Centre of Excellence enables business teams to innovate safely, a central AI Gateway enables development teams to leverage large language models without compromising security or compliance. The two are not separate ideas. They are two expressions of the same governance philosophy.

For one of our largest clients in the energy industry, we built a central GenAI platform that acts as a controlled entry point for all LLM traffic across the organisation. Every call to a large language model passes through this gateway. This is the hub-and-spoke principle expressed in technology: centralised oversight, without centralised ownership.

What the gateway actually does

It prevents data leaks by routing all LLM calls through a controlled API. It tracks costs and enforces rate limits per team. It provides standardised development access across all AI projects. It enables monitoring, evaluation, and audit trails, which means compliance happens by default, not as an afterthought.
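To make those responsibilities concrete, here is a minimal sketch of what the policy layer of such a gateway boils down to. This is an illustration only, not the client's actual platform; all class, method, and parameter names (`AIGateway`, `complete`, `rate_limit_per_minute`, `cost_per_call`) are hypothetical, and a real gateway would add authentication, PII filtering, and persistent storage for the audit trail.

```python
import time
from collections import defaultdict, deque

class AIGateway:
    """Sketch of a central LLM gateway: every model call passes through one
    place, so rate limiting, cost tracking, and audit logging happen there."""

    def __init__(self, rate_limit_per_minute=60):
        self.rate_limit = rate_limit_per_minute
        self.calls = defaultdict(deque)   # team -> timestamps of recent calls
        self.costs = defaultdict(float)   # team -> accumulated spend
        self.audit_log = []               # append-only trail for compliance

    def complete(self, team: str, prompt: str, llm_call, cost_per_call=0.01):
        now = time.time()
        window = self.calls[team]
        # Drop timestamps older than 60 seconds, then enforce the per-team limit.
        while window and now - window[0] > 60:
            window.popleft()
        if len(window) >= self.rate_limit:
            raise RuntimeError(f"rate limit exceeded for team '{team}'")
        window.append(now)

        response = llm_call(prompt)       # the actual provider call, injected

        self.costs[team] += cost_per_call
        self.audit_log.append({"team": team, "prompt": prompt, "ts": now})
        return response
```

Because every team's traffic flows through one object like this, the audit trail and cost ledger are complete by construction: no individual developer can forget to log a call.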

Think of it as the technical equivalent of the Centre of Excellence. The hub defines the rules. The gateway enforces them automatically. Developers do not need to memorise every compliance requirement. They connect to the gateway, and the guardrails are already built in.

Compliance by design, not by review

This is the critical shift. The conventional model, where compliance review happens at the end of a project, slows down delivery. By the time legal or privacy reviews the work, implementation is complete and changes are expensive. The AI Gateway makes governance part of the development process itself.

The gateway provides a sandbox where experimentation is encouraged, without sacrificing oversight.

The link between the organisational and the technical

The Centre of Excellence and the AI Gateway are not separate initiatives. They are two sides of the same coin.

The Centre of Excellence defines the standards and maintains the infrastructure that enables distributed teams to operate safely. The AI Gateway enforces those standards automatically for every AI interaction across the organisation. When both are in place, spoke teams have the freedom to innovate and experiment, while the hub ensures that innovation stays within clear guardrails and the technical infrastructure makes compliance effortless rather than burdensome.

The path forward

Centralise to prove value. Decentralise to scale. But that decentralisation must be thoughtful and supported by the infrastructure and governance that lets distributed teams move fast without accumulating risk.

At Xomnia, we help organisations build both dimensions: the centres of excellence that enable rather than restrict, and the AI gateways that enforce compliance by design. If your organisation is ready to scale agentic AI without hitting the centralisation ceiling, we would like to explore how we can help.

Written by 

Martijn Di Bucchianico

Analytics Translator at Xomnia
