Production
The three controls every production AI agent needs — and why the absence of any one of them kills enterprise adoption.
Three controls separate AI agents that ship in production from AI agents that stall in pilot: permissioning (the agent can only do what it is explicitly authorized to do), audit logs (every action is recorded with enough context to answer "who, what, when, why"), and revocation (you can shut an agent down immediately and the change propagates everywhere).
Each is well-understood in human IAM. None is automatic for AI agents. Without all three, enterprise security teams will not green-light deployment, and they are right not to.
Most agents today run with standing access — a service-account token with broad OAuth scopes. The agent can do anything the token allows, at any time, for any reason. This is the model that produces the headlines about agents exfiltrating data or executing actions a user never sanctioned.
Real permissioning means evaluating every single action against an explicit policy at the moment it happens. The policy considers who deployed the agent, what task it was deployed to do, what tool or data it is touching, and what organizational rules apply. The check returns allow, deny, or require-human-approval — and the decision is logged.
Permissioning is not a one-time grant. It is a per-action gate.
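To make the per-action gate concrete, here is a minimal sketch of what such a check might look like. The names (`ActionRequest`, `evaluate`, the `(task, tool)` policy keys) are illustrative assumptions, not part of any specific product or API:

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"
    DENY = "deny"
    REQUIRE_APPROVAL = "require_approval"


@dataclass
class ActionRequest:
    agent_id: str     # which agent is acting
    deployed_by: str  # who deployed the agent
    task: str         # what it was deployed to do
    tool: str         # the tool or data it is touching
    payload: dict     # the concrete parameters of the action


def evaluate(request: ActionRequest, policy: dict) -> Decision:
    """Check one action against an explicit policy; anything not allowed is denied."""
    rule = policy.get((request.task, request.tool), "deny")
    try:
        decision = Decision(rule)
    except ValueError:
        decision = Decision.DENY
    log_decision(request, decision)  # the decision is logged either way
    return decision


def log_decision(request: ActionRequest, decision: Decision) -> None:
    # Stand-in for an append to the structured audit log described below.
    print(f"{request.agent_id} -> {request.tool}: {decision.value}")
```

The important design choice is the default: anything the policy does not explicitly allow comes back as a deny, and the decision is recorded whether the action goes through or not.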
Audit logs answer the questions that come up after an incident: Which agent did this? On whose behalf? What did it see? What did it touch? Did anyone approve it? When did it happen?
For audit logs to be useful, they need to be: immutable (no one can quietly edit history), complete (every action — not just the ones that succeeded), structured (queryable, not just text), and exportable (your SIEM and your auditors need access).
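For illustration, here is one shape a structured audit event could take, assuming an append-only JSON-lines store; the field names are assumptions, not a fixed schema:

```python
import json
import uuid
from datetime import datetime, timezone


def audit_event(agent_id: str, on_behalf_of: str, action: str,
                input_summary: str, result: str,
                approved_by: str | None = None) -> str:
    """Build one structured, queryable audit record.

    Every action becomes a record, whether it succeeded or not, so a SIEM
    or an auditor can query it later.
    """
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,          # which agent did this
        "on_behalf_of": on_behalf_of,  # on whose behalf
        "action": action,              # what it touched
        "input": input_summary,        # what it was asked to do
        "result": result,              # what the downstream system returned
        "approved_by": approved_by,    # did anyone approve it
    }
    return json.dumps(event)
```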
A useful test: pick a random action from yesterday and try to fully reconstruct it. Who initiated it? What was the input? What did the agent decide to do? What did the downstream system return? If you cannot answer all four within thirty seconds, your audit pipeline is not production-ready.
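Assuming events are stored as append-only JSON lines in the shape sketched above, the thirty-second test reduces to a single lookup:

```python
import json


def reconstruct(log_path: str, event_id: str) -> dict | None:
    """Answer the four reconstruction questions for one event: who initiated it,
    what the input was, what the agent decided, what the downstream system returned."""
    with open(log_path) as f:
        for line in f:
            event = json.loads(line)
            if event["event_id"] == event_id:
                return {
                    "initiated_by": event["on_behalf_of"],
                    "input": event["input"],
                    "decision": event["action"],
                    "downstream_result": event["result"],
                }
    return None
```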
Revocation is what you reach for when something goes wrong: an employee leaves, a token leaks, an agent misbehaves. The goal is simple — stop the agent now — but the implementation usually is not. OAuth tokens have TTLs measured in minutes or hours. Refresh tokens last longer. Many systems cache permissions. Revocation in name is not revocation in practice.
Real revocation has three properties: immediate (within seconds, not refresh cycles), complete (every downstream system stops accepting the agent's tokens), and auditable (the revocation itself is logged).
The reliable pattern is to route every agent call through a broker that checks current policy on each request. When policy says revoked, the broker refuses the call — regardless of what tokens the agent still holds.
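A minimal sketch of that broker, assuming hypothetical `policy_store` and `audit_log` interfaces; the point is only that policy is consulted on every call, so revocation does not depend on token expiry:

```python
class RevocationError(Exception):
    """Raised when an agent's access has been revoked."""


class Broker:
    """Routes every agent call through a live policy check.

    Because current policy is consulted on each request, revoking an agent
    takes effect on its next call, regardless of what tokens it still holds.
    """

    def __init__(self, policy_store, audit_log):
        self.policy_store = policy_store  # source of truth for current policy
        self.audit_log = audit_log        # append-only, structured log

    def call(self, agent_id: str, tool: str, request: dict):
        if self.policy_store.is_revoked(agent_id):
            # The refusal itself is recorded, so revocation is auditable.
            self.audit_log.record(agent_id, tool, "denied: revoked")
            raise RevocationError(f"agent {agent_id} has been revoked")
        self.audit_log.record(agent_id, tool, "allowed")
        return self._forward(tool, request)

    def _forward(self, tool: str, request: dict):
        # Stand-in for the real downstream call (HTTP, gRPC, vendor SDK, ...).
        raise NotImplementedError
```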
Permissioning, audit, and revocation are one system, not three. Permissioning decides what should happen. Audit records what did happen. Revocation forces what stops happening. Remove any one and the other two lose meaning.
Permissioning without audit is unverifiable — you cannot prove your policy matched reality. Audit without revocation is observational — you can see the fire but cannot put it out. Revocation without permissioning is the kill switch on a system you never controlled in the first place.
Before an AI agent goes to production, you should be able to say yes to all of these:

- Every action the agent takes is checked against an explicit policy at the moment it happens, and the decision is logged.
- Any action from yesterday can be fully reconstructed within thirty seconds: who initiated it, what the input was, what the agent decided, what the downstream system returned.
- Audit records are immutable, complete, structured, and exportable to your SIEM and your auditors.
- Revoking the agent takes effect within seconds, across every downstream system, and the revocation itself is logged.
Agentic Fabriq is built around these checks. If you would like to see how the broker, policy engine, and audit pipeline come together for your stack, get in touch.