LLM06:2025 - Excessive Agency
Excessive Agency is the sixth risk in the OWASP Top 10 for LLM Applications 2025. It occurs when AI agents are granted more functionality, permissions, or autonomy than they need, enabling unintended or harmful actions such as data modification, unauthorized API calls, or cascading operations that affect production systems without human oversight.
Overview
Modern AI applications increasingly rely on agents that can call external tools, query databases, invoke APIs, and execute multi-step workflows. When these agents operate with overly broad permissions, or without approval gates for sensitive operations, a single prompt injection, hallucination, or unexpected output can trigger destructive actions at scale. The principle of least privilege, long established for human users, is equally critical for AI agents. Yet many implementations grant agents the same credentials and access as the host application, effectively making the LLM an unconstrained system administrator. The risk compounds with multi-agent architectures where one agent can delegate to others, creating chains of privileged actions that are difficult to audit and easy to exploit.
What Radar Detects
**AI agents with unrestricted tool or function access.** Tool-calling configurations that expose all available functions to the agent without scoping which tools are accessible for a given task or user context, allowing the agent to invoke any capability in the system.
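A minimal sketch of task-scoped tool access, assuming a deny-by-default allowlist per task. The registry, task names, and `build_toolset` helper are illustrative, not a real Radar or framework API:

```python
# Hypothetical tool registry; each tool is tagged with the scope it requires.
TOOL_REGISTRY = {
    "search_orders":  {"scope": "read"},
    "refund_order":   {"scope": "write"},
    "delete_account": {"scope": "admin"},
}

# Allowlist of tools per task context; anything absent is denied by default.
TASK_TOOLSETS = {
    "customer_lookup": ["search_orders"],
    "refund_flow":     ["search_orders", "refund_order"],
}

def build_toolset(task: str) -> list[str]:
    """Return only the tools explicitly granted for this task."""
    return TASK_TOOLSETS.get(task, [])  # unknown task -> no tools at all
```

The agent sees only the tools returned by `build_toolset`, so a prompt-injected request for `delete_account` has nothing to call.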
**Missing approval gates for sensitive operations.** AI agent actions that execute writes, deletions, financial transactions, or other destructive operations without requiring human confirmation before proceeding.
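One way to sketch such a gate, assuming an injected `approve` callback (a UI prompt, ticket workflow, or similar) and a hypothetical set of destructive tool names:

```python
# Tool names treated as destructive (illustrative list).
DESTRUCTIVE = {"delete_record", "transfer_funds", "drop_table"}

def execute_tool(name, args, run, approve):
    """Run a tool, but hold destructive operations for human approval.

    `run` executes the tool; `approve` asks a human and returns True/False.
    """
    if name in DESTRUCTIVE and not approve(name, args):
        return {"status": "rejected", "tool": name}
    return {"status": "ok", "result": run(name, args)}
```

Read-only tools pass through unchanged; anything in `DESTRUCTIVE` executes only after an explicit human yes.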
**Overly broad function definitions in tool-calling configurations.** Tool schemas that grant full database access, unrestricted file system operations, or admin-level API permissions instead of narrowly scoped, read-only, or task-specific definitions.
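The contrast can be illustrated with two JSON-Schema-style tool definitions (names and fields are examples, not a specific vendor's schema format):

```python
# Overly broad: hands the model raw SQL against production.
broad_schema = {
    "name": "run_sql",
    "description": "Run any SQL statement against the production database",
    "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

# Narrowly scoped: one read-only, parameter-constrained lookup.
narrow_schema = {
    "name": "get_order_status",
    "description": "Read-only lookup of a single order's status",
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {"type": "string", "pattern": "^[0-9]{1,10}$"},
        },
        "required": ["order_id"],
    },
}
```

The narrow schema limits both what the agent can do (one query shape) and what input it can pass (a validated order ID).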
**Missing permission scoping on AI agent execution.** Agents that inherit the application's full service account credentials rather than operating under a restricted, purpose-built identity with the minimum permissions required.
**Absence of action rate limiting on AI agent tool calls.** Agent loops that can invoke tools repeatedly without per-action or per-session rate limits, enabling runaway operations that exhaust resources or cause cascading side effects.
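A minimal per-session call budget is one way to bound such loops (the class name and default limit are illustrative; tune limits to your workload):

```python
class ToolCallBudget:
    """Per-session cap on agent tool calls; exceeding it halts the loop."""

    def __init__(self, max_calls_per_session: int = 20):
        self.max_calls = max_calls_per_session
        self.used = 0

    def allow(self) -> bool:
        """Return True and consume one call, or False once the budget is spent."""
        if self.used >= self.max_calls:
            return False
        self.used += 1
        return True
```

The agent loop checks `budget.allow()` before every tool invocation and terminates (or escalates to a human) when it returns False, instead of looping indefinitely.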
**AI agents with write access to production systems without confirmation workflows.** Direct write paths from agent tool calls to production databases, APIs, or infrastructure with no intermediate review step, rollback capability, or dry-run mode.
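A sketch of a write path that defaults to dry-run, queueing the agent's proposed change for review instead of applying it. `apply_change` stands in for a real production write; all names here are hypothetical:

```python
def write_with_review(change, apply_change, confirmed=False, pending=None):
    """Queue a proposed change for review unless explicitly confirmed.

    By default nothing touches production; the change lands in a review
    queue. Only a confirmed call invokes the real write.
    """
    pending = pending if pending is not None else []
    if not confirmed:
        pending.append(change)  # recorded for human review, nothing applied
        return {"dry_run": True, "pending": len(pending)}
    return {"dry_run": False, "result": apply_change(change)}
```
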
**Missing output validation on agent-executed actions.** Agent tool call results that are consumed and acted upon without verifying that the action completed as intended and did not produce unexpected side effects.
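As a small example of such a check, assuming a hypothetical refund tool that reports its status and the number of records it touched:

```python
def validate_refund_result(result: dict) -> bool:
    """Verify a refund tool call did what was intended before acting on it."""
    return (
        result.get("status") == "success"
        and result.get("records_affected") == 1  # exactly one, no cascades
        and result.get("amount", 0) > 0
    )
```

If validation fails, the agent should stop and surface the discrepancy rather than chain further actions on top of a bad result.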
**Multi-agent delegation without permission boundaries.** Agent architectures where one agent can delegate tasks to other agents without restricting the delegated agent's tool access or requiring re-authorization for the delegated scope.
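One simple boundary is to compute the delegated toolset as an intersection, so delegation can only narrow access, never widen it (a sketch with hypothetical tool names):

```python
def delegate(delegator_tools: set, requested_tools: set) -> set:
    """Grant a sub-agent only tools the delegator itself already holds."""
    return delegator_tools & requested_tools
```

A sub-agent asking for a tool its parent never had simply does not receive it.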
Agent Autonomy vs. Safety
The more autonomous an AI agent is, the greater the potential damage from excessive agency. Every tool or function granted to an agent should be justified by a specific use case. If an agent does not need write access, do not grant it. If an agent does not need to call external APIs, do not expose them.
Related CWEs
CWE-250 (Execution with Unnecessary Privileges), CWE-269 (Improper Privilege Management).
See the CWE Reference for details.
Overlap with OWASP Top 10 Web
Excessive Agency relates directly to A01:2025 Broken Access Control in the traditional OWASP Top 10. The principle of least privilege (granting only the minimum permissions necessary to complete a task) applies equally to AI agents. Where A01 focuses on users bypassing access controls, LLM06 addresses the inverse: developers granting agents far more access than required, creating an implicit broken access control surface.
Prevention
- Implement least-privilege access for all AI agent tool configurations. Restrict available tools to the minimum set required for each specific task or conversation context.
- Require human-in-the-loop approval for sensitive or destructive operations such as data writes, deletions, financial transactions, and infrastructure changes.
- Scope function definitions to the narrowest possible permissions. Use read-only database connections, limited API scopes, and task-specific service accounts.
- Implement per-action and per-session rate limits on agent tool calls to prevent runaway loops and cascading operations.
- Default to read-only access for AI agents and require explicit elevation (with logging) for write operations.
- Log all agent actions with full context (tool called, parameters used, outcome, and user session) for audit and anomaly detection.
- Implement dry-run modes for destructive operations so agents can preview the impact of an action before committing it.
- Maintain an explicit allowlist of tools per agent role. Deny access to any tool not explicitly granted rather than relying on a blocklist approach.
- For multi-agent systems, enforce permission boundaries on delegation. A delegated agent should never inherit broader permissions than the delegating agent holds.
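The audit-logging guidance above can be sketched as a wrapper that records every tool call with its full context. The function and field names are illustrative, not a prescribed log format:

```python
import json
import time

def audited_call(log, session_id, tool, params, run):
    """Execute a tool call and append a structured audit entry either way.

    Records tool name, parameters, outcome, and session for audit and
    anomaly detection; errors are logged and then re-raised.
    """
    entry = {"ts": time.time(), "session": session_id,
             "tool": tool, "params": params}
    try:
        entry["outcome"] = "ok"
        entry["result"] = run(tool, params)
    except Exception as exc:
        entry["outcome"] = "error"
        entry["error"] = str(exc)
        raise
    finally:
        log.append(json.dumps(entry, default=str))
    return entry.get("result")
```

Appending to a list stands in for a real log sink; in practice these entries would go to an append-only audit store.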
Next Steps
- Previous: LLM07:2025