LLM03:2025 - Supply Chain

Supply Chain vulnerabilities rank third in the OWASP Top 10 for LLM Applications 2025. AI applications depend on a broad ecosystem of external components: pre-trained models, fine-tuning datasets, LoRA adapters, AI framework libraries, and third-party plugins, each of which introduces a potential attack surface if integrity and provenance are not verified.


Overview

The AI supply chain extends well beyond traditional software dependencies. A compromised pre-trained model can contain backdoors that activate on specific inputs. A poisoned LoRA adapter can alter model behavior in targeted ways. Pickle-serialized model files can execute arbitrary code when loaded. Unverified plugins in AI agent frameworks can grant attackers direct access to application internals. Because AI components are often large binary artifacts distributed through community repositories, they rarely receive the level of code review that source-level dependencies do. Radar's static analysis identifies code patterns that load, deserialize, or register AI components without proper integrity checks, catching supply chain risks before they reach production.
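The pickle risk mentioned above is worth seeing concretely: whatever `(callable, args)` pair an object's `__reduce__` method returns is invoked during unpickling, so loading an untrusted pickle file is equivalent to running untrusted code. A minimal, deliberately harmless sketch (the class name is illustrative):

```python
import pickle


class NotAModel:
    """Pretends to be model weights but smuggles a callable."""

    def __reduce__(self):
        # pickle.loads will call whatever (callable, args) pair is
        # returned here. A real attack would return something like
        # (os.system, ("...",)); str.upper keeps this demo harmless.
        return (str.upper, ("arbitrary code ran",))


payload = pickle.dumps(NotAModel())
obj = pickle.loads(payload)
# obj is the *result of the smuggled call*, not a NotAModel instance
```

This is why the detection below treats any `pickle.load` of an externally sourced model file as a finding, regardless of where the file claims to come from.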


What Radar Detects

  • Model loading from untrusted sources without integrity verification. Code that loads model files (pickle, ONNX, SafeTensors, or other formats) from remote URLs or local paths without verifying checksums, signatures, or provenance metadata.

  • Unsigned model downloads. Fetching model artifacts over HTTP or from public repositories without hash verification, allowing man-in-the-middle attacks or repository compromise to deliver malicious models.

  • Plugin and tool registration from unverified sources. AI agent frameworks (LangChain, AutoGPT, and similar) that register tools or plugins from external sources without validating their origin, permissions, or code integrity.

  • Insecure model serialization with pickle. Use of Python's pickle module for model storage or transfer, which permits arbitrary code execution upon deserialization and is a well-known attack vector in the ML ecosystem.

  • Dependency on AI libraries with known vulnerabilities. Import statements and dependency declarations that reference AI framework versions with published security advisories.

  • Missing version pinning for AI framework dependencies. Dependency files that use unpinned or loosely pinned versions for AI libraries, allowing silent upgrades to compromised releases.
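The last two detections concern dependency hygiene. The difference between a flagged and a preferred requirements file looks like this (package versions are illustrative):

```
# Flagged: any future release, including a compromised one, installs silently.
transformers>=4.0
torch

# Preferred: exact pins, upgraded only after reviewing security advisories.
transformers==4.44.2
torch==2.4.1
```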


Related CWEs

CWE-494 (Download of Code Without Integrity Check), CWE-829 (Inclusion of Functionality from Untrusted Control Sphere), and CWE-1104 (Use of Unmaintained Third-Party Components).

See the CWE Reference for details.


Overlap with OWASP Top 10 Web

LLM03 is the AI-specific manifestation of A03:2025 Software Supply Chain Failures in the traditional OWASP Top 10. While A03:2025 focuses on compromised libraries, packages, and CI/CD pipelines, LLM03 extends the scope to models, adapters, training data, and AI-specific tooling: artifacts unique to the machine learning ecosystem that carry the same fundamental trust and integrity risks.


Prevention

  • Verify model integrity using checksums or cryptographic signatures before loading any model artifact into your application.
  • Use SafeTensors or other safe serialization formats instead of pickle for model storage and transfer to eliminate arbitrary code execution risks.
  • Pin AI framework versions in dependency files and audit each upgrade for security advisories before adopting new versions.
  • Audit model provenance by tracking the origin, training data, and modification history of every model used in production.
  • Scan AI dependencies for known vulnerabilities as part of your CI/CD pipeline.
  • Use trusted, authenticated model registries and restrict model downloads to verified sources with integrity metadata.
  • Validate plugin and tool registrations in AI agent frameworks. Enforce allowlists of permitted tools and verify their source before activation.
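The first prevention step can be sketched in a few lines: compute the artifact's SHA-256 digest and compare it against a value pinned when the model was vetted, refusing to load on mismatch. Function and variable names here are illustrative, not Radar APIs:

```python
import hashlib
from pathlib import Path


def verify_sha256(path: str, expected_hex: str) -> bool:
    """Stream the file and compare its SHA-256 digest to a pinned value."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex.lower()


# Usage sketch (filename and hash are hypothetical): fail closed if the
# artifact does not match the digest recorded at vetting time.
PINNED_HASH = "..."  # recorded when the model was reviewed and approved
model_file = Path("model.safetensors")
if model_file.exists() and not verify_sha256(str(model_file), PINNED_HASH):
    raise RuntimeError("model artifact failed integrity check; refusing to load")
```

Streaming in fixed-size chunks keeps memory use constant even for multi-gigabyte weight files, and failing closed means a truncated or tampered download never reaches the loading code.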

Next Steps

Previous: LLM02:2025

Sensitive Information Disclosure. Private data exposed through LLM interactions.

Next: LLM04:2025

Data and Model Poisoning. Manipulated training data introduces backdoors.

OWASP Top 10 Overview

All OWASP standards mapped by Radar.
