TL;DR: LiteLLM version 1.82.8 on PyPI was compromised with malware on March 24, 2026. If you installed or updated LiteLLM around that date, check your version immediately with pip show litellm. If you have 1.82.8, uninstall it, scan for persistence, and rotate all API keys. The attack was discovered using Claude Code — making this both a cautionary tale about supply chain attacks and a proof of concept for AI-assisted security response.
Why AI Coders Need to Know This
If you build anything with AI, you probably use (or your AI uses) proxy libraries to route requests to different AI models. LiteLLM is one of the most popular — it lets you use the same code to talk to OpenAI, Anthropic, Google, Mistral, and dozens of other providers. It's the kind of library that AI coding tools love to recommend.
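If you haven't seen it, the appeal is a single call signature for every provider. A minimal sketch of that interface (the model names here are illustrative — use whatever IDs your providers currently offer):
# A minimal sketch of LiteLLM's unified interface — model names are illustrative.
from litellm import completion
messages = [{"role": "user", "content": "Summarize this bug report in one sentence."}]
# Same function for every provider; only the model string changes.
claude_reply = completion(model="anthropic/claude-3-5-sonnet-20240620", messages=messages)
openai_reply = completion(model="gpt-4o", messages=messages)
gemini_reply = completion(model="gemini/gemini-1.5-flash", messages=messages)
print(claude_reply.choices[0].message.content)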
On March 24, 2026, someone compromised the LiteLLM package on PyPI (Python's package repository). Version 1.82.8 contained malware that, when installed, executed hidden code that:
- Spawned thousands of processes — one developer reported 11,000 processes consuming all system resources
- Executed base64-encoded payloads — hidden code designed to be unreadable (a harmless illustration of this pattern follows the list)
- Established persistence — the malware tried to survive reboots
- Could exfiltrate data — including API keys, environment variables, and other secrets
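To make the second bullet concrete, here's a harmless reconstruction of the obfuscation pattern — not the actual payload. The point is that base64 keeps the real instructions invisible to anyone skimming the source:
# Harmless illustration of the exec(base64.b64decode(...)) pattern — NOT the real payload.
import base64
hidden = base64.b64encode(b"print('this could have been anything')").decode()
# In a malicious package, only the scrambled string is visible in the source:
exec(base64.b64decode(hidden))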
This is not theoretical. This happened two days ago. And here's what makes it specifically dangerous for vibe coders: when AI generates your requirements file or your AI tool installs dependencies, you probably don't check every version number. AI tools routinely run pip install litellm without pinning a version, which means they get whatever's latest on PyPI — including compromised versions.
The silver lining? The attack was discovered and analyzed using Claude Code itself. A developer at FutureSearch noticed their laptop freezing, asked Claude to investigate, and within an hour had identified the malicious package, analyzed the payload, and published a disclosure. AI helping catch AI-related attacks is the future of security response — but you need to know what to look for.
What Actually Happened
Here's the timeline, based on the published disclosure:
10:52 UTC — The Poisoned Package Is Uploaded
Someone (likely through a compromised maintainer account or PyPI token) uploads LiteLLM version 1.82.8 to PyPI. The version looks identical to the legitimate package except for a hidden payload in the installation script.
10:58 UTC — Package Is Pulled by Build Systems
Automated build systems, CI/CD pipelines, and developers running pip install --upgrade litellm start pulling the compromised version. Anyone who hasn't pinned their dependency version gets the malware automatically.
11:07 UTC — Malware Establishes Persistence
The malware attempts to establish persistence on the infected system — meaning it tries to ensure it survives reboots. On macOS, this typically means creating launch agents or crontab entries. On Linux, systemd services or cron jobs.
11:09 UTC — Systems Start Crashing
The malware's process-spawning behavior becomes visible. One affected developer sees their htop fill with 11,000+ Python processes, all running exec(base64.b64decode('...')). Their laptop becomes completely unresponsive, forcing a hard reboot.
11:13 UTC — Investigation Begins (Using Claude Code)
After rebooting, the developer opens Claude Code and describes what happened. This is where it gets interesting — the entire investigation, from "my laptop froze" to "this is a supply chain attack on LiteLLM," was conducted as a conversation with AI.
11:40 UTC — Malware Identified
Claude Code helps trace the rogue processes back to the LiteLLM package, decode the base64 payloads, and identify the attack vector. The malicious code is extracted and analyzed.
12:02 UTC — Public Disclosure Published
A full writeup is published, alerting the community. Total time from discovery to disclosure: 49 minutes.
Why This Matters for Your AI Projects
Let's make this concrete. You're building a SaaS app with AI features. You asked Claude or Cursor to add multi-model support so you can use different AI models for different tasks. The AI probably generated something like this:
Prompt You Might Have Typed
Add support for multiple AI models to my app. I want to use
Claude for complex reasoning, GPT-4 for general tasks, and
Gemini for fast responses. Use a single interface so I can
switch between them easily.
And your AI almost certainly suggested LiteLLM or a similar library:
# What AI might generate in your requirements.txt
litellm
openai
anthropic
google-generativeai
# Or worse, installed ad hoc from the command line or a setup script:
# pip install litellm ← no version pin = you get whatever is latest
If that pip install litellm happened on March 24, 2026, and you got version 1.82.8, your server is now running malware. Your API keys — OpenAI, Anthropic, Stripe, whatever else is in your environment — could be exfiltrated. Your customers' data could be compromised.
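Why does merely installing a package run someone else's code? Because Python packages can ship arbitrary code in their build and install scripts, and pip executes that code with your permissions. Here's a harmless sketch of the generic technique (a custom install command in setup.py) — this is the general pattern, not the actual LiteLLM payload, and this particular hook fires when pip builds from a source distribution:
# setup.py — harmless sketch of how a package can run code at install time.
from setuptools import setup
from setuptools.command.install import install

class PostInstall(install):
    def run(self):
        super().run()
        # An attacker would put something like exec(base64.b64decode('...')) here.
        print("arbitrary code just ran with your permissions")

setup(
    name="example-package",
    version="0.0.1",
    cmdclass={"install": PostInstall},
)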
How to Check If You're Affected
Run these checks right now if you use LiteLLM:
# Check your installed version
pip show litellm
# If you see version 1.82.8, you are affected
# Immediately:
pip uninstall litellm
# Check for persistence mechanisms (macOS)
ls ~/Library/LaunchAgents/ | grep -i lite
crontab -l | grep -i lite
# Check for persistence mechanisms (Linux)
systemctl list-units | grep -i lite
crontab -l | grep -i lite
# Check running processes
ps aux | grep -i litellm
ps aux | grep "base64"
# Rotate ALL API keys in your environment
# OpenAI, Anthropic, Stripe, database passwords — all of them
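If you manage several machines or containers, a small script can be quicker than running the shell checks by hand on each one. A minimal sketch that only covers the version check — it doesn't replace the persistence and process checks above:
# check_litellm.py — minimal sketch: flag the known-compromised LiteLLM version.
from importlib.metadata import version, PackageNotFoundError

COMPROMISED = "1.82.8"
try:
    installed = version("litellm")
except PackageNotFoundError:
    print("litellm is not installed for this interpreter")
else:
    if installed == COMPROMISED:
        print(f"AFFECTED: litellm {installed} — uninstall now and rotate keys")
    else:
        print(f"litellm {installed} installed — not the known-compromised version")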
Prompt for AI-Assisted Investigation
I may have installed a compromised version of LiteLLM (1.82.8).
Help me investigate:
1. Check if litellm is installed and what version
2. Search for any persistence mechanisms it might have created
3. Check for suspicious running processes
4. Check my pip install log for when it was installed
5. List all environment variables that might contain API keys
6. Help me rotate any potentially compromised credentials
I'm on [macOS/Linux/Windows]. My app uses [list your services].
What AI Gets Wrong About Supply Chain Security
The irony of AI helping discover this attack shouldn't distract from the fact that AI is also part of the problem:
AI tools almost never pin dependency versions. When Claude or Cursor generates a requirements.txt or runs pip install, they almost always install the latest version. They don't add version pins like litellm==1.82.7. This means every AI-generated project is vulnerable to any compromised package at install time.
AI recommends popular packages without checking security. AI models know that LiteLLM is popular and useful. They don't know that version 1.82.8 is compromised. They can't check real-time security advisories. By the time a model's training data includes information about a compromise, the attack window has already closed. You can't rely on AI to warn you about zero-day supply chain attacks.
AI generates setup scripts that run with full permissions. When AI creates a Dockerfile or setup script, it typically runs pip install as root or with full user permissions. There's no sandboxing, no permission restriction, no "install this package but don't let it access my environment variables." The package gets everything.
"Just use a requirements file" isn't enough. AI will generate a requirements.txt when asked, but often without hash verification. A proper lockfile with hashes (like pip freeze with --require-hashes) would have caught this attack because the hash of the compromised package wouldn't match. AI rarely sets this up by default.
AI treats all PyPI/npm packages as trusted. The entire package ecosystem model is built on trust — and that trust is regularly broken. AI has no concept of "this package had a security incident last week." It just knows the package exists and is popular. See our deeper dive on supply chain attacks for the broader pattern.
How to Protect Your AI Projects
You don't need to be a security expert. You need these five habits:
1. Always Pin Your Dependencies
Never let AI generate an unpinned requirements file. After your AI creates the project, immediately freeze versions:
# Bad (what AI usually generates)
litellm
openai
fastapi
# Good (what you should have)
litellm==1.82.7
openai==1.67.2
fastapi==0.115.8
# Best (with hash verification, via pip-tools)
pip install pip-tools
pip-compile --generate-hashes -o requirements.txt requirements.in
# Then install with: pip install --require-hashes -r requirements.txt
2. Run Dependency Scanning
Set up automated scanning that checks your packages against known vulnerabilities:
# Python
pip install pip-audit
pip-audit
# Node.js
npm audit
# Both (via Snyk)
npx snyk test
Run these before every deployment. Better yet, add them to your CI/CD pipeline. Our guide to dependency scanning covers this in detail.
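If your deploys are driven by a Python script rather than a CI config, a gate like this sketch works because pip-audit exits non-zero when it finds known vulnerabilities (the file name and what counts as a blocker are up to you):
# deploy_gate.py — sketch: abort the deploy if dependency scanning finds known issues.
import subprocess
import sys

result = subprocess.run(["pip-audit", "-r", "requirements.txt"])
if result.returncode != 0:
    print("pip-audit reported vulnerabilities (or failed) — aborting deploy")
    sys.exit(1)
print("dependency scan clean — continuing deploy")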
3. Use a Lock File and Review Updates
Don't auto-update dependencies. When you update, review the changelog. For critical libraries like your AI proxy, wait 24-48 hours after a new release before updating — this gives the community time to catch compromises.
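The 24-48 hour rule is easy to automate: PyPI publishes release metadata as JSON at https://pypi.org/pypi/<package>/<version>/json, including upload timestamps for each file. A sketch that refuses to bless a release younger than 48 hours (package and version values are illustrative):
# release_age.py — sketch: check how old a PyPI release is before upgrading to it.
from datetime import datetime, timezone
import json
import urllib.request

PACKAGE, VERSION = "litellm", "1.82.7"   # illustrative values
MIN_AGE_HOURS = 48

with urllib.request.urlopen(f"https://pypi.org/pypi/{PACKAGE}/{VERSION}/json") as resp:
    data = json.load(resp)

# Each uploaded file carries an ISO-8601 upload timestamp.
uploaded = min(
    datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
    for f in data["urls"]
)
age_hours = (datetime.now(timezone.utc) - uploaded).total_seconds() / 3600
if age_hours < MIN_AGE_HOURS:
    print(f"{PACKAGE} {VERSION} is only {age_hours:.0f}h old — wait before upgrading")
else:
    print(f"{PACKAGE} {VERSION} is {age_hours:.0f}h old — old enough to consider")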
4. Isolate Your Secrets
Don't put all your API keys in environment variables where any package can read them. Use a secrets manager. At minimum, use separate environment files for different services and don't expose secrets to packages that don't need them:
# Instead of one big .env with everything:
OPENAI_API_KEY=sk-...
STRIPE_SECRET_KEY=sk_live_...
DATABASE_URL=postgres://...
# Isolate by service:
# ai-keys.env (only loaded by the AI module)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-...
# payment-keys.env (only loaded by payment module)
STRIPE_SECRET_KEY=sk_live_...
# This limits blast radius if one dependency is compromised
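In Python, one way to wire that up is to load each env file only inside the module that needs it (this sketch assumes the python-dotenv package and the file names from the example above). The isolation is strongest when the modules actually run as separate processes or services, since anything loaded into one process's environment is readable by every package imported there:
# ai_client.py — sketch: this module loads only the AI keys; payment code loads its own file.
import os
from dotenv import load_dotenv   # assumes the python-dotenv package

load_dotenv("ai-keys.env")       # only OPENAI_API_KEY / ANTHROPIC_API_KEY enter this process
openai_key = os.environ["OPENAI_API_KEY"]
# ...pass openai_key to your AI client; STRIPE_SECRET_KEY is never loaded here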
5. Monitor for Weird Behavior
The LiteLLM attack was caught because a developer noticed their laptop freezing. Not every attack is this obvious. Set up basic monitoring (a starting-point sketch follows the list):
- Process monitoring: Alert when your app spawns unexpected child processes
- Network monitoring: Alert when your app makes requests to unexpected domains
- Resource monitoring: Alert when CPU/memory usage spikes unexpectedly
- Log anomalies: Watch for base64-encoded strings in your logs
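For the process and resource items, here's a minimal sketch assuming the psutil package; replace the print with whatever alerting you already use:
# watchdog.py — sketch: alert on process-count or CPU spikes (assumes the psutil package).
import time
import psutil

PROCESS_LIMIT = 500        # tune for your machine; 11,000 is obviously far too many
CPU_LIMIT_PERCENT = 90

while True:
    process_count = len(psutil.pids())
    cpu = psutil.cpu_percent(interval=5)   # averaged over 5 seconds
    if process_count > PROCESS_LIMIT or cpu > CPU_LIMIT_PERCENT:
        print(f"ALERT: {process_count} processes, CPU at {cpu:.0f}%")
    time.sleep(55)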
The AI Silver Lining
Here's the remarkable part of this story: the entire investigation — from "my laptop is acting weird" to "this is a supply chain attack and here's the full analysis" — was done with Claude Code in 49 minutes.
The developer didn't need to be a security researcher. They didn't need to know how macOS launch agents work, how to parse process trees, how to decode base64 payloads, or how to write a security disclosure. They needed one thing: the instinct to ask "this is weird, help me figure out why."
This is what AI-enabled security response looks like:
- Notice something weird (11,000 processes is pretty weird)
- Ask AI to investigate (trace process trees, check logs, decode payloads)
- AI walks you through the human parts (who to contact, how to disclose, what to rotate)
- Publish and alert the community
The attacker had a head start — the compromised package sat on PyPI for roughly an hour between upload and public disclosure, and build systems were pulling it within minutes. In traditional security, that analysis might have taken days. AI compressed it to under an hour. That's the future of security response for non-traditional builders: you don't need to know everything, you need to know when something is wrong and have the right AI tool to help investigate.
What to Learn Next
- What Are Supply Chain Attacks? — the broader pattern behind the LiteLLM compromise, with more examples and defense strategies.
- What Is Dependency Scanning? — automated tools that catch known vulnerabilities in your packages before they ship.
- What Is Secrets Management? — keeping your API keys safe even when a dependency is compromised.
- What Is API Key Management? — the specific practice of rotating, scoping, and monitoring API keys.
- What Is Penetration Testing? — proactively testing your app for the vulnerabilities that supply chain attacks exploit.
Action Step
Right now, open your current AI project and run pip show litellm (or pip list / npm list to see every dependency). Check every dependency version against Snyk's vulnerability database. If you don't have a lockfile with pinned versions, create one today. The 10 minutes this takes could save you from the next supply chain attack.
FAQ
What happened with LiteLLM?
On March 24, 2026, version 1.82.8 of the LiteLLM Python package on PyPI was compromised with malware. When installed, it executed base64-encoded payloads that spawned thousands of processes, established persistence, and could exfiltrate data like API keys. LiteLLM is a popular library used to route API calls to different AI models (OpenAI, Anthropic, Google, etc.).
How do I know if I'm affected?
You're potentially affected if you installed or updated LiteLLM on or after March 24, 2026 and received version 1.82.8. Check with pip show litellm. If your version is 1.82.8, uninstall immediately, scan for persistence mechanisms (launch agents, crontab entries), and rotate all API keys that were accessible on that system.
What is a supply chain attack?
A supply chain attack compromises a trusted dependency rather than attacking your application directly. Instead of hacking your code, attackers inject malware into a package you already trust and install automatically. When you run pip install or npm install, you execute their code with your permissions. It's like someone poisoning the ingredients at the grocery store instead of breaking into your kitchen.
How do I protect my projects against supply chain attacks?
Pin your dependency versions (don't use latest), use lock files with hashes (pip freeze, package-lock.json), run dependency scanning tools (npm audit, pip-audit, Snyk), wait 24-48 hours before updating to new package versions, and isolate your secrets so a compromised package can't access everything.
Can AI help with security incident response?
Yes — the LiteLLM attack was discovered and analyzed using Claude Code. A developer noticed unusual system behavior, asked AI to investigate process trees and decode payloads, and had a full disclosure published in 49 minutes. AI excels at rapid investigation once you notice something is wrong, but you still need monitoring and the instinct to ask "why is this happening?"