It’s one thing to write an AI policy. It’s quite another to enforce it in the real world.
Most companies now have some kind of AI policy in place. But in practice, many treat these policies as tick-box exercises: declarations that give the organisation cover for an audit, rarely read by employees, and almost never enforced.
The first hurdle most teams hit? Assuming one AI policy fits all.
Why a Monolithic Policy Doesn’t Work
Let’s say you’ve banned inputting sensitive information into public AI tools like ChatGPT or Gemini. Fair enough. But how should that apply to someone in your marketing team vs someone in finance? Or an engineer on day one vs an employee handing in their notice?
The truth is, AI risk isn’t evenly distributed across your business. Different roles carry different levels of risk (and intent). The controls you place around AI use should reflect that.
A Real-World Example: The Leavers Group
Several companies we work with have created identity groups for staff who are about to leave the business. It’s a smart move. These users, perhaps days or weeks from walking out the door, often pose a higher insider risk. Not necessarily out of malice, but because they may be tidying up portfolios, copying files to take work samples, or just aren’t as invested in following policy to the letter.
So here’s the question: do you really want these employees to have the same access to GenAI tools as your core team? If someone is copying and pasting content into ChatGPT to “summarise it quickly” before their exit interview, how would you even know?
With Harmonic, you can build exceptions for this kind of case in minutes.
Start With the Groups That Matter Most
Harmonic integrates directly with identity providers like Okta, Entra ID, and Google Workspace. That means we can pull in your existing user groups and apply policy based on what’s already in place.
But anyone who’s worked with identity groups knows they’re rarely the complete picture. Groups frequently fall out of date, overlap, or contain duplicates.
That’s why we let you build custom groups based on:
- Individual name or email address
- Email domain
- Department or job title
- Physical location
- Identity provider group
- Any combination using AND/OR logic

Want to restrict GenAI access for the finance team based in London, but allow it for product managers in Berlin? No problem. Want to block ex-engineers who still have access to internal Slack? Done.
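To make that AND/OR logic concrete, here’s a minimal sketch in Python. The attribute names, `GroupRule`-style dictionaries, and matching function are illustrative assumptions for this post, not Harmonic’s actual data model or configuration format.

```python
from dataclasses import dataclass, field

# Illustrative only: not Harmonic's real schema or API.
@dataclass
class User:
    email: str
    department: str
    job_title: str
    location: str
    idp_groups: set = field(default_factory=set)

def matches(user: User, condition: dict) -> bool:
    """Evaluate a single attribute condition or a nested AND/OR combination."""
    if "and" in condition:
        return all(matches(user, c) for c in condition["and"])
    if "or" in condition:
        return any(matches(user, c) for c in condition["or"])
    attribute, expected = next(iter(condition.items()))
    if attribute == "email_domain":
        return user.email.endswith("@" + expected)
    return getattr(user, attribute, None) == expected

# "Finance team based in London" -> restrict GenAI access.
finance_london = {"and": [{"department": "Finance"}, {"location": "London"}]}

# "Product managers in Berlin" -> allow GenAI access.
pm_berlin = {"and": [{"job_title": "Product Manager"}, {"location": "Berlin"}]}

user = User(
    email="alex@example.com",
    department="Finance",
    job_title="Analyst",
    location="London",
    idp_groups={"Okta: Finance"},
)
print(matches(user, finance_london))  # True  -> the restrictive policy applies
print(matches(user, pm_berlin))       # False -> it doesn't
```

The point of the nesting is that a single rule can combine identity-provider groups with attributes like location or job title, rather than forcing you to maintain yet another static group.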
Set Sensible Exceptions
Once your groups are set, enforcement is simple. You might decide that everyone in the “Upcoming Leavers” group should be fully blocked from all AI tools. Or perhaps your engineering team can use Copilot, but no sensitive data can be uploaded.
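As a rough illustration, those exceptions could be expressed as a small rule table mapping groups to enforcement actions. The group names, tool names, and action labels below are hypothetical, not Harmonic’s syntax.

```python
# Illustrative policy table, not Harmonic's configuration format.
POLICY_EXCEPTIONS = [
    {"group": "Upcoming Leavers", "tools": "*",       "action": "block"},
    {"group": "Engineering",      "tools": "Copilot", "action": "allow_no_sensitive_data"},
]

def enforcement_for(user_groups: set[str], tool: str, default: str = "allow") -> str:
    """Return the first matching action for a user's groups and a given tool."""
    for rule in POLICY_EXCEPTIONS:
        if rule["group"] in user_groups and rule["tools"] in ("*", tool):
            return rule["action"]
    return default

print(enforcement_for({"Upcoming Leavers"}, "ChatGPT"))  # block
print(enforcement_for({"Engineering"}, "Copilot"))       # allow_no_sensitive_data
print(enforcement_for({"Marketing"}, "ChatGPT"))         # allow (default)
```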
The point is: enforcing policy isn’t about applying blanket bans. It’s about having the tools to be specific.

The Bigger Picture
This is just the first step in enforcing AI policy properly: identify the right groups, then set the right controls for each. Over the next few posts, we’ll walk through what good enforcement looks like.
Because here’s the thing: most people don’t want to break the rules. They just need the rules to make sense.
And that starts by recognising that different users need different guidance.