AI hallucination bug spreads malware through “slopsquatting”

AI code hallucinations are creating a new cybersecurity threat as criminals exploit the fake package names that coding models invent. Research has identified over 205,000 hallucinated package names generated by AI models, particularly smaller open-source ones like CodeLlama and Mistral. These fictional software components give attackers an opening: by publishing malware under the same names, they can deliver malicious code whenever a developer installs a non-existent package recommended by an AI assistant.

The big picture: AI-generated code hallucinations have evolved into a sophisticated form of supply chain attack called “slopsquatting,” where cybercriminals study AI hallucinations and create malware using the same names.

  • When an AI model hallucinates a non-existent software package and a developer tries to install it, an attacker who has already registered that name on a public package index (such as PyPI or npm) can serve malware instead of the install simply failing, as sketched below.
  • The malicious code then becomes integrated into the final software product, often undetected by developers who trust their AI coding assistants.
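
To make the mechanism concrete, here is a minimal, hypothetical sketch of the kind of blind install step that slopsquatting preys on, assuming the Python/pip ecosystem. The package list is invented for illustration; "flask-jwt-helper" stands in for a hallucinated dependency name an attacker could have registered.

```python
# Hypothetical illustration of the attack surface: a workflow that installs
# whatever package names an AI assistant suggests, without checking them.
# "flask-jwt-helper" is a made-up stand-in for a hallucinated dependency;
# the list itself is invented for this sketch.
import subprocess
import sys

# Imagine these names came straight out of an AI coding assistant's answer.
ai_suggested = ["requests", "flask", "flask-jwt-helper"]

for package in ai_suggested:
    # pip installs anything that resolves on the index. If an attacker has
    # already registered the hallucinated name, their package (and any code
    # it runs at install or import time) lands on the developer's machine.
    subprocess.run([sys.executable, "-m", "pip", "install", package], check=True)
```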

The technical vulnerability: Smaller open-source AI models used for local coding show particularly high hallucination rates when generating dependencies for software projects.

  • CodeLlama 7B demonstrated the worst performance with a 25% hallucination rate when generating code.
  • Other problematic models include Mistral 7B and OpenChat 7B, which frequently create fictional package references.

Historical context: This technique builds upon earlier “typosquatting” attacks, where hackers created malware using misspelled versions of legitimate package names.

  • A notable example was the “electorn” malware package, which mimicked the popular Electron application framework.
  • Modern application development’s heavy reliance on downloaded components (dependencies) makes these attacks particularly effective.

Why this matters: AI coding tools automatically request dependencies during the coding process, creating a new attack vector that’s difficult to detect.

  • The rise of AI-assisted programming will likely increase these opportunistic attacks as more developers rely on automation.
  • The malware can be subtly integrated into applications, creating security risks for end users who have no visibility into the underlying code.

Where we go from here: Security researchers are developing countermeasures to address this emerging threat.

  • Efforts are focused on improving model fine-tuning to reduce hallucinations in the first place.
  • New package verification tools are being developed to identify these hallucinations before code enters production; a simple pre-install check along those lines is sketched below.
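
As one illustration of what such a check might look like, here is a minimal sketch, assuming the Python/PyPI ecosystem: it queries PyPI's public JSON API and flags dependency names that either do not exist (a pure hallucination) or were published only very recently (a possible slopsquat lying in wait). The helper names, the 90-day threshold, and the example package list are assumptions made for this sketch, not features of any announced tool.

```python
# Minimal sketch, not a real product: flag AI-suggested dependency names that
# are missing from PyPI or were only registered very recently. The function
# names, the 90-day threshold, and "flask-jwt-helper" are all illustrative.
import json
import urllib.error
import urllib.request
from datetime import datetime, timedelta, timezone


def pypi_metadata(package):
    """Return PyPI's JSON metadata for a package, or None if the name is unknown."""
    try:
        url = f"https://pypi.org/pypi/{package}/json"
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)
    except urllib.error.HTTPError:
        return None  # 404: the index has never heard of this name


def looks_suspicious(package, min_age_days=90):
    """True if the package is absent from PyPI or younger than min_age_days."""
    meta = pypi_metadata(package)
    if meta is None:
        return True  # likely a hallucinated name
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in meta["releases"].values()
        for f in files
    ]
    if not uploads:
        return True  # registered but has no released files
    return datetime.now(timezone.utc) - min(uploads) < timedelta(days=min_age_days)


for name in ["requests", "flask-jwt-helper"]:
    print(name, "SUSPICIOUS" if looks_suspicious(name) else "ok")
```

A production-grade tool would likely combine several signals, such as download counts, maintainer history, and curated allow-lists, rather than relying on package age alone, but the underlying idea is the same: verify before install.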
