Minutes aren’t admin. They’re evidence.
In regulated environments, minutes are not clerical output. They are the record that makes decisions, risk ownership, and accountability defensible.
There’s a popular idea that AI will “take the admin away”. Minutes, actions, summaries — all the stuff that clogs diaries and steals attention from real work. On the surface, it’s a fair assumption. We’ve had Teams meetings since Covid, we can record them, and the tech can already produce a transcript and a tidy action list.
Why minutes stop being admin in regulated environments
The problem is that in regulated environments, admin isn’t just admin. It’s control. Minutes aren’t a casual recap. They’re evidence. They’re how decisions become traceable, how risks become visible, and how accountability lands somewhere specific rather than evaporating into “we all agreed at the time”. If the record is wrong, it’s not a minor admin error; it’s the start of governance failure. In estates, healthcare, and other safety-critical operations, that can translate into real-world harm.
This is where the “just let AI do it” narrative hits a wall. It’s not only that AI can mishear things, though it can. It’s that meeting audio is a messy input, and regulated work demands a clean, defensible output. Microphone quality varies wildly. Some people are on a headset, some are on a laptop mic from three feet away, and someone else is on unstable Wi-Fi with background noise and compression artefacts. People interrupt each other. Acronyms fly around without explanation. People use shorthand that only makes sense if you already know the local context. Add regional accents and dialect, and you often end up with transcripts that are “nearly right” — which is fine in a marketing meeting and unacceptable in anything that might later be scrutinised.
Why accuracy alone is still not enough
When someone says, “Better models will fix it,” I’m not convinced that addresses the real requirement. Yes, accuracy will improve. But regulated organisations aren’t aiming for “better”. They need high confidence, and in many contexts effectively certainty, that what’s written reflects what was said and agreed. That’s a very different bar.
And I’m not sure technology will ever fully reach it, because the constraint isn’t only the model. It’s the environment: imperfect audio, imperfect meeting discipline, human ambiguity, and context that isn’t spoken out loud. Even if you tried to “solve” this with voice profiles and tuning per person, it’s not realistic at organisational scale — and it still assumes everyone uses the right kit in the right way, every time.
That’s why I’m cautious about the casual recommendation to let AI “draft the minutes”. In settings where accuracy truly counts, I don’t see a sensible replacement for a human ear and a human judgement of intent. AI can still be useful around the edges:
- helping with formatting
- structuring headings
- turning clearly stated actions into cleaner wording
- producing a checklist of “possible actions mentioned” for someone to validate (sketched below)
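To make that last idea concrete, here’s a minimal sketch of a “possible actions” pass, assuming a plain-text transcript and a deliberately simple keyword filter. The trigger phrases, names, and sample lines are all illustrative assumptions, not a tested approach; the point is the shape of the workflow, where the output is a checklist for a human to confirm, never the minutes themselves.

```python
import re

# Hypothetical trigger phrases for the "possible actions mentioned" pass.
# These are illustrative assumptions, not a validated taxonomy of how
# actions get voiced in real meetings.
ACTION_TRIGGERS = re.compile(
    r"\b(will|action|agreed to|follow(?:ing)? up|"
    r"by (?:monday|tuesday|wednesday|thursday|friday|end of))\b",
    re.IGNORECASE,
)

def candidate_actions(transcript: str) -> list[str]:
    """Return transcript lines that *might* contain an action.

    Deliberately over-inclusive: the output is a prompt for human
    review, not a record of what was agreed.
    """
    return [
        line.strip()
        for line in transcript.splitlines()
        if ACTION_TRIGGERS.search(line)
    ]

if __name__ == "__main__":
    # Fabricated sample lines, purely for illustration.
    sample = (
        "Priya: I will circulate the fire-safety report by Friday.\n"
        "Tom: Noted, though the contractor point felt unresolved to me.\n"
        "Priya: Agreed to raise it with estates tomorrow.\n"
    )
    for i, item in enumerate(candidate_actions(sample), 1):
        # Unchecked checkbox: a human confirms wording, owner, and
        # deadline before anything enters the official minutes.
        print(f"[ ] {i}. {item}")
```

The design choice that matters isn’t the filter; it’s that nothing the script prints carries any authority. Every flagged line still has to pass through a human who knows what was actually agreed.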
But the moment the draft is treated as the source of truth, the risk jumps.
What the operating model still requires
So the practical answer isn’t “ban AI” and it isn’t “hand minutes to a model”. It’s to keep the ownership where it already belongs: with people. Use AI only in ways that don’t weaken the integrity of the record, and design your process so the official minutes are created and confirmed by a human, with clear accountability for accuracy.
Because in regulated environments, the risk isn’t that AI helps with admin. The risk is that people start trusting outputs because they look confident and everyone is too busy to check. And the uncomfortable bit is this: that failure mode will look fine right up until the day it really, really doesn’t.