Supply chain · March 2026

The proxy that ate your Copilot keys

Shadow AI, supply chain rot, and why TeamPCP didn’t need to hack you—you did it for them.

Let’s talk about something your CISO almost certainly doesn’t know is running in your environment right now.

Cinematic illustration: a large creature in a dark server city inhales glowing tiles showing Microsoft-style logos; cracked LiteLLM signage and cablepull mark in the scene.
Poster art for this note—gateway as hungry middle.

The keys to your Copilot are in a .env file next to a Python package that phones home to a typosquatted domain.

The fire suppression system was the accelerant.

It’s called LiteLLM. It’s an open-source Python library — roughly 95 million downloads a month, more than 3 million a day — that acts as a unified API gateway to every major LLM provider on the planet. One proxy, one interface. OpenAI, Anthropic, Azure OpenAI, Bedrock, Gemini, Cohere, your enterprise Copilot deployment — all of it routed through a single pip install that some senior engineer stood up on a Tuesday because the vendor AI portal was too slow and IT takes three weeks to approve anything.

That’s the shadow AI problem, and it’s not theoretical. It’s sitting in your internal infrastructure right now, running as a systemd service, holding a config file full of API keys, and nobody in your security team knows it exists.

Which brings us to March 24, 2026, and a threat actor called TeamPCP.


How you get pwned by your own toolchain

TeamPCP didn’t wake up that morning and decide to attack LiteLLM. This was Phase 09 of a campaign running since at least December 2025 — methodical, infrastructure-consistent, and clever enough to weaponize your security tools against you.

On March 19, they rewrote Git tags in the trivy-action GitHub Action repository, pointing it at a malicious release carrying a credential-harvesting payload. Trivy is a vulnerability scanner. The kind of thing you run in your CI/CD pipeline because you care about security. Congratulations — your commitment to security hygiene just handed the attacker your publishing credentials.

LiteLLM’s CI/CD pipeline ran Trivy as part of its build process, referencing the action by mutable Git tag rather than a pinned commit SHA. The compromised action exfiltrated the PYPI_PUBLISH token from the GitHub Actions runner environment.
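The CI-side mitigation is to reference third-party actions by full commit SHA instead of a tag: a tag can be rewritten after the fact, a commit SHA cannot. A sketch of what the pinned step looks like (the SHA below is a placeholder, not a real trivy-action digest):

```yaml
# CI workflow fragment (sketch): pin third-party actions to a full
# commit SHA; the value below is a placeholder, not a real digest.
- name: Run Trivy vulnerability scan
  uses: aquasecurity/trivy-action@<full-commit-sha>  # placeholder SHA
```

Dependabot and similar tooling can keep SHA pins current without reintroducing the floating-tag exposure.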

Five days later, that token got used.

At 10:39 UTC on March 24, 2026, TeamPCP used the stolen credentials to upload two backdoored LiteLLM releases — 1.82.7 and 1.82.8 — to PyPI. Both were removed within 46 minutes. Which sounds fast until you do the math: roughly 3 million downloads a day, a 46-minute window, and a blast radius across downstream AI projects described as substantial.


What was in the package

Two vectors. Both elegant. Both vicious.

Version 1.82.7 injected an obfuscated base64 payload into litellm/proxy/proxy_server.py at line 128, between two unrelated legitimate code blocks. The payload executes at import time — any process that imports the LiteLLM proxy triggers the malware. The injected code is triple-nested: base64 in proxy_server.py decodes to an orchestrator, which decodes a second base64 blob containing the actual harvester.
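The nesting pattern is easy to reproduce harmlessly. A minimal sketch of the same exec-at-import shape, where every payload string is a benign stand-in rather than the actual malware:

```python
import base64

# Harmless stand-in for the reported structure: stage 3 plays the
# "harvester", stage 2 the orchestrator that decodes it, and stage 1
# is the base64 blob sitting in the imported module's body.
stage3_src = "result = 'stage3 executed'"
stage3_b64 = base64.b64encode(stage3_src.encode()).decode()

stage2_src = f"import base64; exec(base64.b64decode('{stage3_b64}').decode())"
stage2_b64 = base64.b64encode(stage2_src.encode()).decode()

# This is the shape of the line that lands in a module body: it runs
# as a side effect the moment the module is imported.
scope = {}
exec(base64.b64decode(stage2_b64).decode(), scope)
print(scope["result"])  # → stage3 executed
```

A reviewer skimming a diff sees one opaque base64 string between two legitimate blocks; the whole chain only unfolds at runtime.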

Version 1.82.8 escalated. It contains a malicious .pth file — litellm_init.pth, 34,628 bytes — which is automatically executed by the Python interpreter on every startup, without requiring any explicit import of the litellm module. You don’t even have to use LiteLLM. You just have to have it installed in the same Python environment.
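The .pth behavior being abused here is standard CPython, not a vulnerability: when the interpreter processes a site directory, any line in a .pth file that begins with `import` is executed. A benign demonstration (the file and variable names are invented for the demo):

```python
import os
import site
import tempfile

# Write a .pth file whose "import" line carries arbitrary code, then
# process the directory the way interpreter startup processes
# site-packages. No explicit import of any package is required.
site_dir = tempfile.mkdtemp()
with open(os.path.join(site_dir, "demo_init.pth"), "w") as f:
    f.write("import os; os.environ['PTH_DEMO_RAN'] = '1'\n")

site.addsitedir(site_dir)  # startup does this for every site directory
print(os.environ.get("PTH_DEMO_RAN"))  # → 1
```

That is why merely having the package installed is enough: the interpreter runs the .pth before your code gets a say.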

The .pth launcher spawns a child Python process via subprocess.Popen, but because .pth files trigger on every interpreter startup, the child re-triggers the same .pth — creating an exponential fork bomb that crashed the machine. The fork bomb is actually a bug in the malware. Small comfort.
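The recursion is mechanical: the spawned child interpreter processes site-packages, hits the same .pth, and spawns its own child. A sentinel environment variable is the conventional guard the malware authors omitted; a sketch, with an invented variable name:

```python
import os
import subprocess
import sys

SENTINEL = "PTH_LAUNCHER_GUARD"  # invented name for this sketch

def launch_once():
    # Without this check, every interpreter start respawns the
    # launcher: the exponential fork bomb seen in 1.82.8.
    if os.environ.get(SENTINEL) == "1":
        return None  # we are the spawned child; do not recurse
    child_env = dict(os.environ, **{SENTINEL: "1"})
    return subprocess.run(
        [sys.executable, "-c", f"import os; print(os.environ['{SENTINEL}'])"],
        env=child_env, capture_output=True, text=True,
    )

result = launch_once()
print(result.stdout.strip())  # child sees the sentinel and stops
```

The guard's absence is what turned stealth malware into a denial-of-service on its own victims.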

Once past the detonation stage, the payload runs a three-stage attack: it harvests credentials — SSH keys, cloud tokens, Kubernetes secrets, crypto wallets, and .env files — attempts lateral movement across Kubernetes clusters by deploying privileged pods to every node, and installs a persistent systemd backdoor that polls for additional binaries. Exfiltrated data is encrypted and sent to an attacker-controlled domain.

That domain? models.litellm.cloud — a lookalike domain deliberately registered to mimic the legitimate litellm.ai. Designed to slide past network anomaly detection because the name looks plausible in an environment that’s literally routing LLM traffic.
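One cheap heuristic for egress review is string similarity between observed domains and the ones you actually expect; a sketch using stdlib difflib, with an arbitrary threshold:

```python
import difflib

def lookalike_score(candidate: str, legit: str) -> float:
    """Similarity ratio in [0, 1]; a high score on an unexpected
    destination domain is worth a human look."""
    return difflib.SequenceMatcher(None, candidate, legit).ratio()

score = lookalike_score("models.litellm.cloud", "litellm.ai")
print(score > 0.5)  # the shared "litellm." substring drives the score up
```

Similarity scoring is no substitute for an egress allowlist, but it surfaces exactly the "plausible-looking" domains this campaign relied on.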


Why the LiteLLM proxy is a particularly catastrophic place to drop a credential stealer

Here’s where the shadow AI risk crystallizes into something a board should be scared of.

LiteLLM isn’t just a library for hobbyist AI tinkering. It’s the unofficial enterprise AI middleware layer for organizations that want flexibility without vendor lock-in. You configure it with a config.yaml and a pile of API keys, stand up the proxy server, and route everything through it. Your developers call one endpoint. LiteLLM figures out the rest.
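The gateway idea itself is simple; a toy dispatcher (not LiteLLM's actual code) shows why one choke point ends up holding every provider's key at once:

```python
# Toy model of the unified-gateway pattern; not LiteLLM's real API.
# Each backend closure captures its own credential, so the single
# process running the dispatcher necessarily holds all of them.
def make_backend(provider: str, api_key: str):
    def call(prompt: str) -> str:
        # A real backend would make an HTTP request using api_key here.
        return f"[{provider}] {prompt}"
    return call

backends = {
    "openai": make_backend("openai", "sk-demo"),        # placeholder keys
    "anthropic": make_backend("anthropic", "ak-demo"),
}

def route(model: str, prompt: str) -> str:
    provider = model.split("/", 1)[0]  # e.g. "anthropic/claude-..."
    return backends[provider](prompt)

print(route("anthropic/claude", "hello"))  # → [anthropic] hello
```

Compromise the process and you compromise every closure's captured secret; that is the whole risk model in four lines of structure.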

In practice, that config file contains:

  • Your Microsoft Copilot API credentials — M365 Copilot, GitHub Copilot, whatever your enterprise licensing covers
  • Your Azure OpenAI deployment keys, tied to a service principal with meaningful permissions
  • Your Anthropic and OpenAI org-level API keys — the ones with spend limits in the thousands
  • Internal tool integrations — custom LLM-backed apps, RAG pipelines, anything the engineering team connected because it was easier than going through the vendor portal
  • Often, database connection strings in .env files sitting in the same working directory

The credential stealer grabbed all of it. SSH keys. Cloud tokens. Kubernetes configs. .env files. This makes the attack especially dangerous in environments where developers, CI/CD systems, and production infrastructure share access to sensitive credentials — compromise in one layer can quickly cascade into others.
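A crude scan for key-like entries shows how little work harvesting actually takes; the pattern below is a generic illustration, not the malware's real matching logic:

```python
import re

# Flag .env-style lines whose variable name looks credential-bearing.
SECRET_RE = re.compile(
    r"^\s*([A-Z0-9_]*(?:KEY|TOKEN|SECRET|PASSWORD)[A-Z0-9_]*)\s*=", re.M
)

def key_names(env_text: str) -> list[str]:
    """Return the names of key-like variables found in env-file text."""
    return [m.group(1) for m in SECRET_RE.finditer(env_text)]

sample = "AZURE_OPENAI_KEY=abc\nDEBUG=1\nDATABASE_PASSWORD=hunter2\n"
print(key_names(sample))  # → ['AZURE_OPENAI_KEY', 'DATABASE_PASSWORD']
```

If a dozen lines of regex can enumerate your secrets, so can a payload that already has file-system access.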

Now layer in the shadow AI dimension. That LiteLLM instance your platform engineering team is running? There’s a reasonable chance it’s not in your CMDB. It wasn’t procured through IT. Security never reviewed it. Nobody knows what keys it’s holding or what it’s proxying to. And because it’s doing something useful — unified LLM access, cost tracking, rate limiting — nobody’s in a hurry to shut it down.

Everyone wants to talk about advanced AI vulnerabilities like prompt injection, data poisoning, and model inversion, but attackers are exploiting the exact same infrastructure weaknesses we’ve battled for a decade. The AI technology stack is built on standard, fragile, open-source foundations. Why bother engineering a complex LLM jailbreak when a poisoned Python dependency hands over your Kubernetes cluster on a silver platter?

That’s the actual threat model. And TeamPCP understood it before most enterprise security teams did.


The community response was faster than your detection would have been

When developers started reporting the compromise in GitHub issue #24512, the attackers posted 88 bot comments from 73 unique accounts in a 102-second window — previously compromised developer accounts, not purpose-created profiles. Using the compromised maintainer account, they closed the issue as “not planned” and made commits to unrelated repositories with the message “teampcp update.”

That’s coordinated suppression. They tried to kill the disclosure thread in real time. It didn’t work — the community opened a parallel issue and Hacker News picked it up — but the fact that they had the playbook ready is worth noting. This wasn’t opportunistic. TeamPCP has been active since at least December 2025, maintaining Telegram channels and embedding “TeamPCP Cloud stealer” in their payloads. They brand their work.


What you actually need to do

If you don’t know whether LiteLLM is running somewhere in your environment, that’s your first problem. Shadow AI inventory — not a theoretical exercise anymore.

Simply removing the package is not sufficient. The malware is designed to establish persistence and may have already deployed additional payloads. Rotate all potentially exposed credentials. Review logs for suspicious outbound connections. Validate the integrity of infrastructure packages.
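Triage starts with knowing which version each environment holds. A minimal sketch; the helper and its messages are invented, while the version numbers are the ones named in the reporting:

```python
COMPROMISED = {"1.82.7", "1.82.8"}  # versions named in public reporting

def classify(installed_version):
    """Triage helper: a pure function, so it is easy to wire into
    fleet-wide inventory scans."""
    if installed_version is None:
        return "clean: litellm not installed"
    if installed_version in COMPROMISED:
        return f"compromised: {installed_version} - rotate credentials, rebuild host"
    return f"review: {installed_version} not in the known-bad set"

# In a real environment, feed this from importlib.metadata:
#   from importlib import metadata
#   try: v = metadata.version("litellm")
#   except metadata.PackageNotFoundError: v = None
print(classify("1.82.7"))
```

Note the "compromised" branch says rebuild, not uninstall, for exactly the persistence reasons above.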

Hunt for outbound connections to models.litellm.cloud and checkmarx.zone in your SIEM. If you find them, you have a much larger conversation ahead of you than a credential rotation.
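If raw logs are all you have, even a grep-grade pass catches the IOC domains; a sketch:

```python
IOC_DOMAINS = ("models.litellm.cloud", "checkmarx.zone")  # from the reporting

def hunt(log_lines):
    """Return log lines mentioning either IOC domain.
    A sketch, not a tuned SIEM query."""
    return [line for line in log_lines if any(d in line for d in IOC_DOMAINS)]

logs = [
    "GET https://api.openai.com/v1/chat/completions 200",
    "POST https://models.litellm.cloud/upload 200",
]
print(hunt(logs))  # → ['POST https://models.litellm.cloud/upload 200']
```

Run it over proxy, DNS, and egress firewall logs covering the exposure window, not just the 46 minutes the packages were live.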

And pin your dependencies. Unpinned pip install in a production environment isn’t a development shortcut anymore — it’s an invitation. The threat actor shifted from attacking the factory to poisoning the product delivered to millions of users. Once a developer or a production server runs pip install litellm without a pinned version and receives 1.82.7 or 1.82.8, the malicious code executes automatically.
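A quick audit of requirements files for floating specifiers is a reasonable first pass; a sketch, with a deliberately simple regex that will miss exotic specifier forms:

```python
import re

# Accept only exact pins like "litellm==1.82.6"; anything else floats.
PINNED = re.compile(r"^[A-Za-z0-9._-]+==\S+$")

def unpinned(requirement_lines):
    """Return requirement lines not pinned to an exact version."""
    return [
        line.strip()
        for line in requirement_lines
        if line.strip()
        and not line.lstrip().startswith("#")
        and not PINNED.match(line.strip())
    ]

print(unpinned(["litellm", "litellm==1.82.6", "requests>=2.0", "# comment"]))
# → ['litellm', 'requests>=2.0']
```

For production installs, pip's `--require-hashes` mode goes a step further, refusing anything whose digest doesn't match the lockfile.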

The attacker didn’t need elevated privileges. They didn’t need a zero-day. They needed one unpinned dependency and a CI/CD pipeline that trusted its own security scanner.

That’s the state of AI infrastructure security in 2026. The keys to your Copilot are in a .env file next to a Python package that phones home to a typosquatted domain. The fire suppression system was the accelerant.


Shadow AI isn’t an employee behavior problem. It’s a governance failure with a lateral movement problem attached to it.


Sources

  • LiteLLM official security update: docs.litellm.ai/blog/security-update-march-2026
  • FutureSearch, discovery and fork bomb analysis: futuresearch.ai/blog/litellm-pypi-supply-chain-attack
  • Snyk, poisoned security scanner deep dive: snyk.io/blog/poisoned-security-scanner-backdooring-litellm
  • Truesec, .pth mechanism analysis: truesec.com/…/litellm-supply-chain-compromise
  • BleepingComputer, breaking coverage: bleepingcomputer.com/…/teampcp-supply-chain-attack
  • Trend Micro, campaign infrastructure analysis: trendmicro.com/…/inside-litellm-supply-chain-compromise
  • ARMO, backdoor payload analysis: armosec.io/blog/litellm-supply-chain-attack-backdoor-analysis
  • Cycode, timeline and blast radius: cycode.com/blog/lite-llm-supply-chain-attack
  • ThreatBook, three-stage payload breakdown: threatbook.io/blog/…/litellm-supply-chain-poisoning-incident
  • Sonatype, multi-stage credential stealer analysis: sonatype.com/blog/…/credential-stealer

cablepull is a principal security architect with 25+ years in the field. This page is compiled from public incident reporting—verify against primary sources and your own counsel.