
The hosting debate is not really about hosting.

In regulated organisations, cloud versus self-hosted is not the whole decision. The harder question is whether the organisation can support the chosen model in a controlled, sustainable way.

A lot of AI discussion in regulated organisations still gets framed as a technical choice: cloud or on-premises, vendor-managed or self-hosted, public model or private environment. Those distinctions matter, but they are not the main issue.

There are sensible reasons why more private or tightly contained deployment models keep coming up. Data sensitivity matters. So do assurance, supplier dependence, auditability, resilience, and simple organisational comfort with where systems run and who controls them. In regulated settings, that instinct is often rational.

Self-hosting often relocates the problem

But self-hosting is often described too simply, as though choosing it solved the governance problem. Running AI in a more private environment brings extra demands around infrastructure, security, monitoring, access control, model management, support, procurement, and internal accountability. That is where the reality check starts.

Most organisations are not making these choices from a position of surplus capability. They are making them while IT is stretched, information governance is cautious, operational teams are busy, and ownership is still split across teams. The deployment choice does not land on a clean sheet; it lands inside the operating model the organisation already has.

Self-hosting does not remove the governance problem. It often relocates it.

The middle ground is often more practical than it sounds

There is also a more practical middle ground that often gets missed. Many regulated organisations already have enterprise software agreements, established supplier relationships, and governance patterns that people know how to work with. As AI capability expands, those agreements should be reviewed properly rather than assumed to cover everything by default.

For many UK organisations, that often means leaning further into a large existing enterprise agreement, such as Microsoft's, rather than creating something bespoke from scratch. The contracts are familiar, the trust model is better understood internally, and the surrounding technical estate is already in place. That still needs to be a considered decision, but it can reduce the skills, support, and deployment burden compared with standing up a separate environment independently.

The real transition is organisational adaptation

That does not mean cloud removes the hard work. Organisations still need to revisit agreements, test governance assumptions, define data boundaries, and be clear about what use is actually in scope. Existing trust helps, but it is not a substitute for review. The real questions remain the same: who owns the use, what data is allowed, what needs human checking, and what is actually approved in practice?

That is probably what will shape the next few years. Not a simple win for cloud or on-premises, but a gradual adaptation of the operating model around AI: clearer ownership, better data foundations, more explicit governance, and a better mix of skills and responsibilities. The real transition is not just technical deployment. It is organisational adaptation.

Need a clearer view of which deployment path is actually supportable?

FM Doctor can help test the governance assumptions, ownership gaps, data boundaries, and support burdens sitting behind AI deployment choices in regulated environments.

When the decision reaches beyond one tool or supplier, the Full AI Readiness Assessment Report gives leadership a more defensible view of what the organisation can realistically support.

See the Full AI Readiness Assessment