OpenAI’s infrastructure framing exposes a bigger governance gap
OpenAI’s industrial policy framing treats AI as infrastructure, while many regulated organisations are still governing it like a software add-on.
What stands out in OpenAI’s new industrial policy paper is not just the ambition of the document, but the level it is operating at. This is not really a paper about software tools, productivity gains, or light-touch adoption. It is framing AI as something closer to infrastructure: something that will affect labour, public institutions, economic capacity, and the wider conditions in which organisations operate. The language is expansive. The implied shift is larger still.
Most organisations are nowhere near that frame in practice. They are still dealing with AI as a tool question. Which products are allowed. What data can be used. Whether staff guidance exists. Who signs things off. Whether there is a policy. Those are reasonable questions, but they sit at a much narrower level. They are about managing access to tools, not about governing AI as part of the operating environment. That mismatch matters.
Where governance actually breaks
The problem is that governance does not usually break at the level of policy ambition. It breaks inside ordinary work. It breaks where the real workflow is still messy, half-documented, and dependent on judgement that nobody has properly defined. It breaks where human review is assumed rather than specified. It breaks where outputs start influencing decisions, records, or communications without clear agreement on what must be checked, what can be relied on, and who is actually accountable. In regulated environments, that is where the real exposure usually sits.
That is why the paper is useful as a signal, even if some of its language feels self-serving. It shows how far the policy conversation has moved. AI is being discussed as if it will reshape the underlying conditions of work and governance. Meanwhile, many organisations still have weak control at the point of use. They may have a policy. They may have approved tools. They may even have a steering group. But if the workflow underneath is unclear, if ownership is blurred, if data boundaries are weak, and if assurance exists mostly as a vague expectation that someone will “sense-check it,” then the governance is not especially mature. It is just higher up the page.
Why the workflow gap matters
So the real issue is not simply that policy is getting ahead of practice. It is that policy is now talking about infrastructure while many organisations are still governing AI like a software add-on. For regulated teams, that is not a small gap. It is the gap between having a position on AI and having control over how AI-shaped work actually happens.
If AI is going to be treated as infrastructure, governance has to reach the workflow. That is where the real work is.