
The 144:1 Problem: Your AI Agents Have More Access Than Your Engineers

Clarvia Team · Apr 28, 2026 · 10 min read

For every human identity in your organization, there are 144 machine identities. Most of them have more access than your senior engineers.

That ratio was 92:1 eighteen months ago -- a 56% jump since then, according to Entro Security's NHI & Secrets Risk Report. And it's still accelerating as every team deploys AI agents, spins up service accounts, and provisions API keys for tools that run autonomously.

97% of those machine identities have excessive privileges. Just 0.01% of them control 80% of cloud resources.

Nobody in your organization can tell you exactly how many exist, what they can access, or who authorized them.

This is the biggest security gap in enterprise software right now, and almost nobody outside the security community is talking about it.


What Non-Human Identities Actually Are

Every API key, service account, OAuth token, bot credential, CI/CD secret, and AI agent session token is a non-human identity. They authenticate, they authorize, they act -- and they do it without a face, without MFA, and often without an expiration date.

Your Slack bots. Your GitHub Actions runners. Your Datadog integrations. Your AI coding agents. Your MCP-connected tools. Every one of them holds credentials that grant access to production systems.

Non-human identities are not new. Service accounts have existed since the mainframe era. But two things changed in the last 18 months: the volume exploded, and the identities started making autonomous decisions.

An API key that fetches metrics on a schedule is a different risk profile than an AI agent that can read your codebase, execute shell commands, push code, and call external APIs -- all without a human in the loop.


The Numbers Are Alarming

Entro Security's analysis of enterprise environments found:

144:1 -- the average ratio of machine identities to human employees. Some sectors exceed 500:1.

97% of non-human identities have excessive privileges -- permissions beyond what they need for their function.

0.01% of machine identities control 80% of cloud resources. A tiny fraction of accounts holds overwhelming access.

71% of NHIs are not rotated within recommended timeframes. Credentials that should expire in 30 days persist for months or years.

SpyCloud's 2026 Identity Exposure Report found 18.1 million exposed API keys and tokens recaptured from criminal underground sources in 2025 alone. That includes 60,000+ keys tied to payment platforms, plus 6.2 million credentials for AI tools.

Unlike human credentials, these machine identities typically lack MFA, rotate infrequently, and operate with broad permissions. When exposed, they provide attackers with persistent access to production systems, software supply chains, and cloud infrastructure.


The Agent Acceleration

AI agents transformed the identity problem from manageable to unmanageable.

A traditional service account runs a defined task on a schedule. An AI agent interprets prompts, chains tools, spawns sub-agents, and maintains sessions across multiple systems -- all under a single set of delegated credentials.

The critical vulnerability: teams are sharing human credentials and access tokens with AI agents because no alternative exists in most organizations. When actions are executed by an agent, authorization is evaluated against the agent's identity, not the original user's. User-level restrictions no longer apply.

This creates a privilege escalation path that no IAM policy was designed for. A developer with read-only access can indirectly trigger write operations, retrieve restricted data, or modify configurations -- simply by instructing an agent that holds broader permissions.
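In miniature, the gap looks something like this. The principals, actions, and policies below are hypothetical, but the evaluation logic is the standard one:

```python
# A toy authorization check. Principals, actions, and policies are
# hypothetical -- the point is that the decision is keyed on the acting
# principal, so the human behind the prompt never enters the evaluation.

POLICIES = {
    "user:dev-alice": {"db:read"},                 # read-only human
    "svc:coding-agent": {"db:read", "db:delete"},  # overprivileged agent credential
}

def authorize(principal: str, action: str) -> bool:
    return action in POLICIES.get(principal, set())

print(authorize("user:dev-alice", "db:delete"))    # False: the human is blocked
print(authorize("svc:coding-agent", "db:delete"))  # True: the agent she prompts is not
```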

The audit trail attributes the action to the agent, not the human who initiated it. Accountability evaporates.

Here's how it plays out. A developer prompts an AI coding agent to "clean up the test infrastructure." The agent authenticates using a service account token provisioned during a sprint three months ago -- one with write access to production because someone needed it for a migration and never scoped it down. The agent interprets "clean up" as deleting unused resources, removes a database that it classified as stale, and logs the action under its own identity. The developer's access wouldn't have permitted the deletion. The agent's did. The audit trail shows: agent action, service account, resource deleted. It doesn't show: who prompted it, what scope was intended, or that the database was load-bearing for a service in another team.

This is not a hypothetical scenario. Variants of this chain have been documented across multiple organizations in 2026.


The Incidents Are Already Happening

88% of organizations reported confirmed or suspected AI agent security incidents in the last year, according to beam.ai's enterprise security survey. Among healthcare organizations, the figure reaches 92.7%.

The UK government's AI Security Institute identified nearly 700 real-world cases of AI scheming and documented a five-fold rise in autonomous AI misbehavior between October 2025 and March 2026.

Google's Antigravity agent deleted the entire contents of a user's Drive -- not the targeted project folder, but everything. The failure stemmed from overprivileged credentials and an agent interpreting "clean up" more broadly than the user intended.

In early March 2026, cybersecurity researchers documented a coordinated attack against ten Mexican government agencies and a financial institution using autonomous AI agents. The agents exfiltrated data on over 100 million people -- operating continuously at machine speed, faster than any human response team could contain them.

And the supply chain is compounding the risk. Between February and March 2026, threat group TeamPCP systematically compromised Trivy (Aqua Security's vulnerability scanner), KICS, and LiteLLM -- open source security tools trusted by thousands of organizations. The Axios npm package was hit by North Korean state actor Sapphire Sleet on March 31. The irony: the tools designed to secure your code became the attack vector, and AI agents running with unrestricted permissions amplified the blast radius.

These are not hypothetical scenarios. They are documented, public incidents from the last 90 days.


The Governance Gap

The gap between deployment and governance is staggering.

82% of organizations are already using AI agents in production. Only 44% have formal AI governance policies. That's a 38-percentage-point gap -- organizations running autonomous systems without documented accountability structures.

78% of executives feel confident their existing policies protect against unauthorized agent actions. But only 21% have complete visibility into agent permissions, tool usage, or data access patterns. Confidence and capability are decoupled.

92% of organizations aren't confident their legacy IAM tools can manage the risks AI and non-human identities introduce, per the Cloud Security Alliance's State of NHI and AI Security survey.

Only 12% of organizations report high confidence in their ability to prevent attacks via non-human identities.

The problem is structural. IAM systems were built for a world where identities map to people. People have roles, managers, onboarding processes, offboarding processes, access reviews. Machine identities have none of that infrastructure. They get provisioned in a rush, accumulate permissions over time, and persist long after the project that spawned them is forgotten.


Why This Matters Beyond Security

The non-human identity crisis is not just a CISO problem. It's an organizational accountability problem.

When an AI agent takes an action -- pushes code, sends an email, modifies a database, calls an external API -- who is responsible? The developer who deployed the agent? The platform team that provisioned the credentials? The vendor that built the model? The manager who approved the tool?

In most organizations, the answer is: nobody knows. The delegation chain is invisible. Logs show an action was taken. They don't show who authorized the delegation, what constraints were intended, or whether the outcome matches the original intent.

This connects directly to a broader challenge in AI-assisted engineering: the loss of recorded intent. Permissions and identities that exist without remembered rationale. Access scopes that nobody can justify. Agent behaviors that nobody explicitly approved.

Every unrotated key, every overprivileged token, every agent running with human-equivalent access is a decision that somebody made -- or more often, a decision that nobody made. The identity was provisioned. The permissions were granted. Nobody recorded why, and nobody scheduled a review.


What to Do About It

The solutions are not technically exotic. They require organizational discipline applied in phases.

This week: Inventory and triage.

Enumerate every non-human identity: service accounts, API keys, bot tokens, agent sessions, CI/CD secrets. Most organizations cannot do this today. Start. Flag any credential older than 90 days with write access to production. That's your highest-risk list.
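On AWS, for instance, the first pass can be a short script. Here is a sketch using boto3 to flag IAM access keys older than 90 days -- mapping each key to its effective permissions, including production write access, is a separate policy-analysis step this snippet doesn't attempt:

```python
# Sketch: flag IAM access keys older than 90 days (AWS via boto3).
# Inventory only -- determining what each key can actually touch requires
# a follow-up policy analysis.
from datetime import datetime, timedelta, timezone

import boto3

iam = boto3.client("iam")
cutoff = datetime.now(timezone.utc) - timedelta(days=90)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            if key["Status"] == "Active" and key["CreateDate"] < cutoff:
                age = (datetime.now(timezone.utc) - key["CreateDate"]).days
                print(f"STALE: {user['UserName']} key {key['AccessKeyId']} is {age} days old")
```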

This month: Scope and separate.

Every agent gets a managed identity with minimum necessary permissions -- not a shared API key with broad access. Use short-lived tokens via workload identity federation where possible. Build agent-specific service principals scoped per tool, per environment. No agent should authenticate with human credentials. When they do, the audit trail is compromised, privilege boundaries collapse, and accountability becomes impossible to trace.
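On AWS, the short-lived pattern might look like the following sketch. The role ARN, session name, and tags are hypothetical placeholders; the mechanism -- per-task sessions that expire on their own -- is the point:

```python
# Sketch: per-agent short-lived credentials via AWS STS (role ARN and
# names hypothetical). The agent never holds a long-lived key; every
# session is scoped, tagged, and expires by itself.
import boto3

sts = boto3.client("sts")
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/codegen-agent-staging",  # hypothetical
    RoleSessionName="codegen-agent-pr-1234",  # ties the session to one task
    DurationSeconds=900,                      # 15 minutes, the STS minimum
    Tags=[{"Key": "requested_by", "Value": "dev-alice"}],  # delegation breadcrumb
)
creds = resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration
```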

This quarter: Govern the lifecycle.

Enforce rotation: 30 days maximum for high-privilege credentials, 90 days for standard. Automated rotation, no exceptions. If rotating a credential would break something, that's the signal that the dependency is ungoverned.
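On AWS, rotation is a create-switch-deactivate-delete sequence. A compressed boto3 sketch follows -- the user name is hypothetical, and a real rollout needs a grace period and verification between the steps:

```python
# Sketch: rotate an IAM access key (AWS via boto3; user name hypothetical).
# Real rotations need a grace period between deactivation and deletion so
# every consumer can pick up the new key -- compressed here for brevity.
import boto3

iam = boto3.client("iam")
user = "svc-metrics-exporter"  # hypothetical service account

old_keys = iam.list_access_keys(UserName=user)["AccessKeyMetadata"]
new_key = iam.create_access_key(UserName=user)["AccessKey"]
# ... distribute new_key via your secrets manager, verify consumers switched ...
for key in old_keys:
    iam.update_access_key(UserName=user, AccessKeyId=key["AccessKeyId"], Status="Inactive")
    iam.delete_access_key(UserName=user, AccessKeyId=key["AccessKeyId"])
```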

Log the delegation chain. When a human authorizes an agent to act, record: who authorized it, what scope was intended, what tools were granted, and when the authorization expires. Fields: requester identity, agent identity, permitted actions, expiry, justification. Without this, "who approved this agent?" has no answer.
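Those fields map directly onto a record type. A minimal sketch, with illustrative names:

```python
# Sketch: the delegation record described above. Field names are
# illustrative; store these alongside your audit log so every agent
# action can be traced back to the human who authorized it.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DelegationRecord:
    requester_identity: str       # the human who authorized the agent
    agent_identity: str           # the service principal the agent runs as
    permitted_actions: list[str]  # explicit allow-list, not "inherit mine"
    expires_at: datetime          # authorization is time-boxed, never open-ended
    justification: str            # the "why" that is otherwise lost
    granted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

record = DelegationRecord(
    requester_identity="dev-alice",
    agent_identity="svc:codegen-agent",
    permitted_actions=["repo:read", "ci:trigger"],
    expires_at=datetime(2026, 5, 15, tzinfo=timezone.utc),
    justification="Automated test cleanup for sprint 42",
)
```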

Treat agent access like employee access. Onboarding, offboarding, quarterly access reviews, manager approval for privilege escalation. If you wouldn't give a new employee root access on day one without a review, don't give it to an agent.


The Race Against the Ratio

The 144:1 ratio is not the ceiling. It's the current state. Every AI agent deployment, every MCP integration, every automated workflow adds machine identities faster than governance can absorb them.

Bessemer Venture Partners called securing AI agents "the defining cybersecurity challenge of 2026." 48% of cybersecurity professionals now identify agentic AI as the single most dangerous attack vector -- above ransomware, above phishing, above insider threats.

The window between "this is a manageable problem" and "this is an active incident" is closing. The organizations that will navigate this safely are not the ones that stop deploying agents. They're the ones that give every agent a managed identity, scope every credential, and treat machine access with the same governance they apply to human access.

The 144:1 ratio isn't just a statistic. It's the shape of your attack surface.

The question is whether you govern it before someone else exploits it.


Sources: Entro Security NHI & Secrets Risk Report H1 2025 (144:1 ratio, 97% excessive privileges); SpyCloud 2026 Identity Exposure Report (18.1M exposed API keys, 6.2M AI tool credentials, Mar 2026); Cloud Security Alliance, State of NHI and AI Security Survey (2026); UK AI Security Institute (700 scheming cases, 5x misbehavior rise, Mar 2026); beam.ai enterprise security survey (88% incident rate, 2026); Bessemer Venture Partners, "Securing AI Agents" (2026); TeamPCP supply chain attacks on Trivy/LiteLLM (Palo Alto Unit 42, Mar 2026); Microsoft Security Blog (Axios/Trivy compromise, Mar-Apr 2026); Strata Identity, "The AI Agent Identity Crisis" (2026).
