
The Hidden Danger: AI-Hallucinated Packages

January 15, 2025 • 5 min read

You ask ChatGPT for help with a Python script. It gives you code that imports "flask-security-utils". Sounds legit. You pip install it. One problem: that package doesn't exist.

Or at least, it didn't. Until someone noticed the pattern.

Wait, What?

Here's the thing about large language models: they're really good at sounding confident about things that aren't true.

When you ask for code, ChatGPT draws on patterns from millions of repositories. Sometimes it suggests a package that seems like it should exist. The name makes sense. It fits the context. But it's made up.

Researchers call this "hallucination." And hackers call it an opportunity.

The Attack

It works like this:

  1. An attacker asks ChatGPT a bunch of coding questions
  2. Notes down package names that don't actually exist
  3. Registers those names on PyPI or npm
  4. Fills them with malware
  5. Waits for the next developer to make the same query

That developer copies the AI's code, runs pip install, and boom — they've just installed malware from a package the AI "recommended."
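
To see how low the bar is, here's a minimal sketch of the name check behind steps 2 and 3. One request to PyPI's public JSON API is all it takes (a 404 means nobody has claimed the name), and the same one-liner works in reverse as a defense:

```python
import requests

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a registered project on PyPI.

    PyPI's public JSON API returns a 404 for names nobody has claimed.
    """
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

print(package_exists_on_pypi("requests"))              # True
print(package_exists_on_pypi("flask-security-utils"))  # False, unless someone
                                                       # has since claimed it
```

An attacker scripts this across a few hundred hallucinated names and registers whatever comes back unclaimed.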

This Isn't Theoretical

In 2024, security researchers found over 100 malicious packages on PyPI that were specifically named to match AI hallucinations.

The payloads? The usual stuff: credential stealers, scripts that exfiltrate environment variables and API keys, droppers that pull down a second stage.

Some of these packages had thousands of downloads before they got caught.

Why This Hits Different

Traditional supply chain attacks require the attacker to either compromise an existing package or hope someone typos a name. Both are hard.

But with AI hallucinations? The AI does the work for them. It "recommends" the fake package, and the developer trusts it because... it came from ChatGPT.

That trust is the vulnerability.

So What Do You Do?

1. Question everything. If AI suggests a package, Google it first. Check that it exists. Check the download count. If it has 47 downloads, that's a red flag. (There's a rough DIY version of this check sketched after this list.)

2. Use tools that check for you. This is literally why we built Redakta. Paste in your requirements.txt and we'll tell you if any packages don't exist.

3. Don't copy-paste blindly. Read the code. Understand what packages you're adding. I know, it's slower. But so is recovering from a breach.

4. Stick to well-known packages when possible. "requests" instead of "super-async-http-client-pro." Boring is safe.
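
If you want a rough DIY version of checks 1 and 2, something like the sketch below works. Assumptions to flag: it only handles simple one-pin-per-line requirements.txt files, it uses PyPI's JSON API for existence and the public pypistats.org API for download counts, and the 1,000-downloads-a-month threshold is a number I made up, not an industry standard.

```python
import re
import requests

PYPI_URL = "https://pypi.org/pypi/{name}/json"
STATS_URL = "https://pypistats.org/api/packages/{name}/recent"

def check_requirements(path: str = "requirements.txt") -> None:
    """Flag requirement entries that don't exist on PyPI or see few downloads."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # Naive parse: take the name before any version specifier or extra.
            name = re.split(r"[=<>!~\[;@ ]", line, maxsplit=1)[0]
            # Existence check: 404 from PyPI means the name isn't registered.
            if requests.get(PYPI_URL.format(name=name), timeout=10).status_code != 200:
                print(f"MISSING {name}: not on PyPI. Hallucination, or worse.")
                continue
            # Download-count check via pypistats.org's recent-downloads endpoint.
            stats = requests.get(STATS_URL.format(name=name), timeout=10)
            if stats.ok:
                monthly = stats.json()["data"]["last_month"]
                flag = "SUSPECT" if monthly < 1000 else "OK"
                print(f"{flag} {name}: {monthly:,} downloads last month")

check_requirements()
```

Run it before pip install -r, not after. And treat MISSING as a hard stop: a name that isn't on PyPI today could be malware on PyPI tomorrow.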

The Bottom Line

AI is incredibly useful for coding. I use it every day. But it's not infallible, and it definitely doesn't have your security in mind.

Verify before you trust. It takes 30 seconds and could save you weeks of cleanup.

Don't install hallucinated packages

Redakta checks if your packages actually exist. Takes 10 seconds.

Try Redakta Free
