LiteLLM CVE-2026-42208 SQL Injection Exploited within 36 Hours of Disclosure
Summary
A critical SQL injection vulnerability, tracked as CVE-2026-42208 (CVSS 9.3), was identified in the LiteLLM Python package, allowing unauthenticated attackers to read from and modify the proxy database. The flaw was exploited in the wild within 36 hours of its disclosure in the GitHub Advisory Database.
Key Points
- CVE ID: CVE-2026-42208
- Affected Versions: LiteLLM `>=1.81.16` and `<1.83.7`
- Fixed Version: `1.83.7-stable`
- Vulnerability Type: SQL injection via unauthenticated `Authorization` header
- Targeted Endpoints: Any LLM API route, such as `POST /chat/completions`
- Targeted Database Tables: `litellm_credentials.credential_values` and `litellm_config`
- Temporary Mitigation: Set `disable_error_logs: true` under `general_settings`
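The temporary mitigation maps to the proxy configuration file; a minimal sketch, assuming the standard LiteLLM proxy `config.yaml` layout:

```yaml
# Temporary mitigation only -- upgrade to 1.83.7-stable as soon as possible.
# Disabling error logs avoids the vulnerable error-handling query path.
general_settings:
  disable_error_logs: true
```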
Technical Details
The vulnerability exists because a database query used during proxy API key validation concatenated caller-supplied key values directly into the query text instead of using parameterized queries. An unauthenticated attacker can reach the vulnerable query by sending a specially crafted Authorization header to any LLM API route (e.g., POST /chat/completions); the injected value is then processed in the proxy's error-handling path.
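The root cause can be illustrated with a generic sketch (this is not LiteLLM's actual code; the table and column names here are hypothetical):

```python
import sqlite3

def validate_key_vulnerable(conn: sqlite3.Connection, api_key: str):
    # VULNERABLE: the caller-supplied key is concatenated into the query
    # text, so a crafted Authorization header value becomes part of the SQL.
    query = f"SELECT user_id FROM verification_tokens WHERE token = '{api_key}'"
    return conn.execute(query).fetchone()

def validate_key_fixed(conn: sqlite3.Connection, api_key: str):
    # FIXED: the key is passed as a bound parameter; the driver escapes it,
    # so it can never alter the query structure.
    query = "SELECT user_id FROM verification_tokens WHERE token = ?"
    return conn.execute(query, (api_key,)).fetchone()
```

A payload such as `' UNION SELECT ... --` sent as the key is executed as SQL by the vulnerable version but treated as an inert string by the parameterized one.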
Observed exploitation patterns involved targeted probes of litellm_credentials.credential_values and litellm_config, tables that contain sensitive upstream LLM provider keys and proxy runtime configuration. The attackers demonstrated the ability to perform column-count enumeration and to identify specific Prisma table names. Because these tables often store high-value credentials—such as OpenAI organization keys, Anthropic console keys, and AWS Bedrock IAM credentials—the vulnerability allows a transition from a web-app SQL injection to a full cloud-account compromise.
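Column-count enumeration of this kind typically increments the number of NULLs in an injected UNION clause until the query stops erroring, revealing the width of the original SELECT. A sketch of how such probe payloads are constructed in general (illustrative only, not an exploit for this CVE):

```python
def union_probe_payloads(max_columns: int) -> list[str]:
    # Each payload appends a UNION SELECT with n NULL columns; the first
    # payload that does not trigger a column-count error reveals how many
    # columns the original SELECT returns.
    payloads = []
    for n in range(1, max_columns + 1):
        nulls = ", ".join(["NULL"] * n)
        payloads.append(f"' UNION SELECT {nulls} --")
    return payloads
```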
Impact / Why It Matters
Successful exploitation allows attackers to extract or modify sensitive upstream credentials, potentially leading to unauthorized access to cloud environments and significant financial impact from unauthorized LLM usage. Developers and self-hosters must upgrade to version 1.83.7-stable or later immediately to secure their AI gateway infrastructure.
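To check whether a deployed installation falls inside the affected range, a minimal sketch using only the stdlib and the advisory's `>=1.81.16, <1.83.7` bounds (the helper names are illustrative):

```python
def parse(version_string: str) -> tuple[int, ...]:
    # Strip a release-channel suffix like "-stable", then compare
    # dotted components numerically as a tuple.
    return tuple(int(p) for p in version_string.split("-")[0].split("."))

VULN_LOW = parse("1.81.16")
FIXED = parse("1.83.7-stable")

def is_vulnerable(version_string: str) -> bool:
    # True when the version falls inside the advisory's affected range.
    return VULN_LOW <= parse(version_string) < FIXED
```

In practice the installed version can be obtained with `importlib.metadata.version("litellm")` before calling `is_vulnerable`.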