LLM05:2025 - Improper Output Handling

Improper Output Handling is the fifth risk in the OWASP Top 10 for LLM Applications 2025. It occurs when LLM-generated content is passed to downstream systems such as databases, web browsers, operating system shells, file systems, or code interpreters without proper validation, sanitization, or escaping. Because LLM output is inherently unpredictable, treating it as trusted input to other components creates injection vulnerabilities with potentially severe consequences.


Overview

Developers often treat LLM output as safe because the model is part of their own system. This assumption is dangerous. LLMs generate text based on probabilistic patterns, and their output can contain SQL fragments, HTML tags, shell commands, file paths, or executable code, whether because the model was prompted to produce them, because the training data contained such patterns, or because a prompt injection attack manipulated the output.

When this content flows into vulnerable sinks without sanitization, it triggers the same injection vulnerabilities the security community has fought for decades: SQL injection, cross-site scripting, command injection, and more. The critical insight is that LLM output should be treated with the same distrust as user input.

Radar's SAST engine traces data flow from LLM API responses to downstream sinks, flagging every path where LLM-generated content reaches a vulnerable operation without proper neutralization.
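To make the "same distrust as user input" principle concrete, here is a minimal sketch of the SQL case using Python's stdlib `sqlite3`. The function name and schema are illustrative, not from Radar's rule set: the point is that a parameterized query keeps the model's reply as data, while string interpolation would let a reply like `1; DROP TABLE orders` rewrite the statement.

```python
import sqlite3

def lookup_order(conn: sqlite3.Connection, llm_reply: str) -> list:
    """Treat the model's reply exactly like user input.

    VULNERABLE pattern (do not do this):
        conn.execute(f"SELECT * FROM orders WHERE id = {llm_reply}")
    A reply such as "1; DROP TABLE orders" would become part of the SQL.

    SAFE: the placeholder binds the reply as a value, never as SQL.
    """
    cur = conn.execute("SELECT * FROM orders WHERE id = ?", (llm_reply,))
    return cur.fetchall()
```

With the parameterized form, a hostile reply simply matches no rows instead of altering the query.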


What Radar Detects

  • **LLM output concatenated into SQL queries.** LLM-generated text inserted into SQL statements through string concatenation or interpolation instead of parameterized queries, enabling SQL injection through the model's output.

  • **LLM-generated content rendered in HTML without encoding.** Model responses injected into web page templates without HTML entity encoding, creating cross-site scripting (XSS) vulnerabilities where the LLM output can execute JavaScript in users' browsers.

  • **LLM output passed to shell execution functions.** Model-generated strings provided as arguments to exec, system, subprocess, Runtime.exec, or equivalent shell execution APIs, enabling OS command injection.

  • **LLM-generated file paths used in file system operations.** Model output used to construct file paths for read, write, or delete operations without path validation or canonicalization, enabling directory traversal attacks.

  • **LLM output used to construct URLs for server-side requests.** Model-generated URLs passed to HTTP client libraries for server-side fetches without validation, enabling server-side request forgery (SSRF) through the LLM.

  • **Missing output validation for security-critical operations.** LLM responses used directly in authorization decisions, financial calculations, or configuration changes without schema validation or type checking.

  • **LLM-generated code executed via eval or similar functions.** Model output passed to eval(), Function(), exec(), or other dynamic code execution mechanisms, allowing arbitrary code execution controlled by the model's output.
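The shell-execution finding above is worth a sketch, since the safe pattern is the inverse of the flagged one. The allowlist below is hypothetical; the technique is standard: tokenize the model's suggestion with `shlex`, check the command against an allowlist, and run it with `shell=False` so metacharacters like `;` or `$(...)` are inert arguments rather than shell syntax.

```python
import shlex
import subprocess

# Hypothetical allowlist; a real deployment would define its own.
ALLOWED_COMMANDS = {"echo", "date", "wc"}

def run_llm_suggested_command(llm_output: str) -> str:
    """Never hand model output to a shell string.

    shlex.split tokenizes without shell evaluation, and passing the
    argument list with shell=False (subprocess.run's default) goes
    straight to the OS exec call, so "echo hi; rm -rf /" cannot spawn
    a second command.
    """
    parts = shlex.split(llm_output)
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        raise ValueError(f"command not allowed: {llm_output!r}")
    result = subprocess.run(parts, capture_output=True, text=True, check=True)
    return result.stdout
```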


Related CWEs

CWE-89 (SQL Injection), CWE-79 (Cross-site Scripting), CWE-78 (OS Command Injection), CWE-22 (Path Traversal), CWE-94 (Improper Control of Generation of Code).

See the CWE Reference for details.
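Of the CWEs above, CWE-22 has the least obvious fix, so here is a minimal sketch under an assumed sandbox directory (the `SAFE_ROOT` path is hypothetical): canonicalize first, then check containment, so `..` segments and symlinks in a model-generated path cannot escape the sandbox.

```python
from pathlib import Path

# Hypothetical sandbox root for files the LLM is allowed to touch.
SAFE_ROOT = Path("/srv/app/workspace").resolve()

def resolve_llm_path(llm_path: str) -> Path:
    """Canonicalize, then verify containment.

    Path.resolve() collapses '..' segments and follows symlinks, so a
    traversal attempt resolves to its real target before the check.
    """
    candidate = (SAFE_ROOT / llm_path).resolve()
    if not candidate.is_relative_to(SAFE_ROOT):  # Python 3.9+
        raise ValueError(f"path escapes sandbox: {llm_path!r}")
    return candidate
```

Checking the raw string for `..` is not enough; canonicalizing before the containment check is what defeats encoded or symlinked traversal.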


Overlap with OWASP Top 10 Web

This category has the strongest overlap with the traditional OWASP Top 10. LLM05 directly causes A05:2025 Injection vulnerabilities. The mechanics are identical: untrusted data reaches an interpreter without proper neutralization. The only difference is the source of the injection: LLM output, rather than direct user input, flows into the vulnerable sink. Every traditional injection prevention technique (parameterized queries, output encoding, input validation) applies equally when the source is an LLM.
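The output-encoding case shows this equivalence most directly. A minimal sketch with the stdlib's `html.escape` (the wrapper function and CSS class are illustrative): the same entity encoding that neutralizes user-supplied XSS neutralizes a model reply containing markup.

```python
import html

def render_assistant_message(llm_reply: str) -> str:
    """Encode before rendering in an HTML context.

    html.escape replaces &, <, >, ", and ' with entities, so a reply
    containing <script> is displayed as text instead of executing.
    """
    return f"<div class='assistant'>{html.escape(llm_reply)}</div>"
```

Note that entity encoding is only correct for HTML body context; a reply interpolated into a JavaScript string or a URL needs that context's escaping instead.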


Prevention

  • Treat all LLM output as untrusted input and apply the same sanitization, validation, and encoding practices you would apply to user-supplied data.
  • Use parameterized queries for any database operations that incorporate LLM-generated content. Never build SQL statements through string concatenation with model output.
  • Encode LLM output before rendering it in HTML to prevent cross-site scripting. Use context-appropriate encoding (HTML entity encoding, JavaScript escaping, URL encoding).
  • Never pass LLM output to shell execution functions. If system commands are required, use allowlisted command patterns with strictly validated arguments.
  • Validate and canonicalize any file paths generated by the LLM before using them in file system operations.
  • Implement output schemas to constrain LLM responses to expected formats (JSON schemas, enum values, structured types) and reject responses that do not conform.
  • Never execute LLM-generated code through eval() or similar dynamic execution mechanisms in production environments.
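The schema-validation point above can be sketched with the stdlib alone. The response shape and field names here are hypothetical; the pattern is what matters: parse the reply as JSON, enforce types and an enum, and reject anything nonconforming rather than acting on free-form text.

```python
import json

# Hypothetical contract for a refund-approval response.
ALLOWED_DECISIONS = {"approve", "deny", "escalate"}

def parse_refund_decision(llm_reply: str) -> dict:
    """Constrain the model's output to a structured contract."""
    try:
        data = json.loads(llm_reply)
    except json.JSONDecodeError as exc:
        raise ValueError("response is not valid JSON") from exc
    if not isinstance(data, dict):
        raise ValueError("response must be a JSON object")
    decision = data.get("decision")
    amount = data.get("amount")
    if decision not in ALLOWED_DECISIONS:
        raise ValueError(f"unexpected decision: {decision!r}")
    if not isinstance(amount, (int, float)) or isinstance(amount, bool) or amount < 0:
        raise ValueError(f"invalid amount: {amount!r}")
    return {"decision": decision, "amount": float(amount)}
```

In production this is typically done with a JSON Schema or model-validation library; the stdlib version above keeps the sketch dependency-free.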

Next Steps

Previous: LLM04:2025 Data and Model Poisoning. Manipulated training data introduces backdoors.

Next: LLM06:2025 Excessive Agency. AI agents with unrestricted permissions.

OWASP Top 10 Overview

All OWASP standards mapped by Radar.
