
AI governance needs classification, not just policy.

One of the biggest gaps in AI governance is not policy. It is classification.

A lot of organisations are still trying to review AI use without a clear structure for what, exactly, they are reviewing, how it should be categorised, or what level of scrutiny it should trigger.

That is understandable. The technology is still relatively new, and many organisations are still working out what sensible governance looks like in practice. But it creates a straightforward problem.

If a team cannot distinguish between a low-impact support use, a recommendation-shaping use, and something that has more material influence on decisions or actions, review quickly becomes inconsistent. Similar use cases get treated differently. Routine uses can end up over-handled, while more sensitive ones pass with too little challenge.

Classification is what turns policy language into defensible review decisions.

Why inconsistent review becomes a governance problem

Inconsistent handling is not just untidy governance. It makes defensible decision-making harder.

Many organisations now have policy language of some kind. What they often do not have is the worked-through logic for classifying use, handling exemptions, and deciding what level of review should follow. In practice, under-review is still the more common problem.

That matters because AI use cases rarely carry the same exposure. Some uses support drafting or internal analysis. Some shape recommendations. Others have more material influence over decisions, actions, records, or customer-facing outcomes. Treating those uses as if they were equivalent creates risk in both directions: low-impact uses can become unnecessarily heavy, while higher-exposure uses can move too quickly.
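
To make that tiering concrete, a classification matrix of this kind can be sketched as a simple mapping from a use case's level of influence to a review expectation. The tiers, criteria, and review levels below are illustrative assumptions for this sketch, not FMD Signal's actual matrix.

```python
# Hypothetical sketch of a use-case classification matrix.
# Tier names, criteria, and review levels are assumptions for
# illustration, not FMD Signal's actual framework.

from dataclasses import dataclass
from enum import Enum


class Influence(Enum):
    SUPPORT = "drafting or internal analysis"
    RECOMMENDATION = "shapes recommendations"
    MATERIAL = "drives decisions, actions, records, or customer-facing outcomes"


# Map each influence tier to a proportionate review expectation.
REVIEW_MATRIX = {
    Influence.SUPPORT: "light-touch review",
    Influence.RECOMMENDATION: "standard review",
    Influence.MATERIAL: "enhanced review with sign-off",
}


@dataclass
class UseCase:
    name: str
    influence: Influence
    exemption: str | None = None  # e.g. a documented local carve-out


def review_expectation(use_case: UseCase) -> str:
    """Return the level of review a use case should trigger."""
    if use_case.exemption:
        return f"exempt ({use_case.exemption}): record and proceed"
    return REVIEW_MATRIX[use_case.influence]


# Similar cases now land in the same tier by construction.
print(review_expectation(UseCase("meeting-notes summariser", Influence.SUPPORT)))
print(review_expectation(UseCase("credit-limit recommender", Influence.MATERIAL)))
```

The point of the sketch is not the code itself but the property it enforces: two use cases with the same influence level cannot quietly receive different levels of scrutiny.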

Where FMD Signal fits

This is the gap FMD Signal is intended to address.

Signal gives clients a more structured way to assess AI use and risk and to set review expectations. The framework and matrix help organisations work out what a use case actually is, where it sits, what level of scrutiny is proportionate, and where tighter handling is needed.

The logic can also be adapted where organisations need to reflect additional protections, local sensitivities, or stronger internal requirements. The aim is not to turn every review into a large exercise. It is to make the route from use case to review expectation clearer.

What credible review needs to show

Governance becomes more credible when people can see not just that a review took place, but why a use case sits where it does, what exemptions apply, and what level of scrutiny follows from that.

If your organisation is still working out how AI use should be classified, reviewed, and exempted, FMD Signal offers a practical place to begin.

Need a clearer way to classify AI use?

FMD Signal helps organisations classify AI use and work out risk, exemptions, and review expectations, so similar cases are handled more consistently.

It gives teams a practical way to decide what can proceed lightly, what needs stronger review, and where tighter handling is required.

Explore FMD Signal