Cloud Threat Landscape

LLMjacking via Laravel exploitation

Type: Incident
Actors: ❓ Unknown
Pub. date: May 6, 2024
Initial access: 1-day vulnerability
Impact: Resource hijacking
Observed techniques: LLMjacking, Credential theft, Cloud API enumeration
Targeted technologies: Laravel
References: https://sysdig.com/blog/llmjacking-stolen-cloud-credentials-used-in-new-ai-attack/
Status: Finalized
Last edited: Jun 2, 2024 10:20 AM

Threat actors are attempting to monetize their illicit access to LLMs while the cloud account owner bears the costs. The attackers target a variety of LLM services across AWS, Azure, and GCP. In some instances, they employ a script that automates validating the stolen credentials and enumerating the permissions associated with them across multiple AI services, without running any actual queries (most likely to avoid detection).
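The enumeration logic described above can be sketched as follows. This is a hypothetical illustration, not the actual script observed in the incident: the stubbed probe functions stand in for read-only cloud API calls (such as listing available models) that check access without ever invoking a model.

```python
# Sketch of credential-validation logic: enumerate which AI services a
# stolen credential can reach, using read-only probes only (no model
# queries, mirroring the detection-avoidance behavior described above).
# All names and stub probes here are illustrative assumptions.

def enumerate_ai_permissions(credential, checks):
    """Return a map of service name -> whether the credential has access.

    `checks` maps a service name to a callable that performs a read-only
    permission probe and returns True/False without invoking a model.
    """
    accessible = {}
    for service, probe in checks.items():
        try:
            accessible[service] = probe(credential)
        except Exception:
            # Treat any API error (expired key, denied call) as "no access".
            accessible[service] = False
    return accessible

# Stub probes standing in for real read-only cloud calls (assumptions):
stub_checks = {
    "aws-bedrock": lambda cred: cred == "valid-aws-key",
    "azure-openai": lambda cred: False,
    "gcp-vertex": lambda cred: False,
}

print(enumerate_ai_permissions("valid-aws-key", stub_checks))
# {'aws-bedrock': True, 'azure-openai': False, 'gcp-vertex': False}
```

In a real attacker tool, each probe would be a cheap metadata call against the provider's API; keeping probes read-only is what lets the check stay quiet in billing and usage dashboards.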

In one observed incident involving this technique, the threat actor leveraged stolen cloud credentials obtained by exploiting a vulnerable, publicly exposed Laravel instance (CVE-2021-3129). Initial access allowed exfiltration of cloud credentials and attempts to access LLMs, specifically Anthropic's Claude (v2/v3) models. The attackers employed OAI Reverse Proxy, an open-source reverse proxy for LLM usage, to manage access to multiple compromised accounts without exposing the underlying credentials. They used API requests to test the extent of their access, calling the Amazon Bedrock InvokeModel API to confirm that the LLM service was active.
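From the defender's side, the InvokeModel probing described above leaves a trail in CloudTrail. The sketch below filters sample records for Bedrock InvokeModel calls; the field names (`eventSource`, `eventName`, `errorCode`) follow the CloudTrail event schema, but the sample records themselves are fabricated for illustration.

```python
# Sketch: flag Amazon Bedrock InvokeModel calls in CloudTrail records.
# A burst of such calls (especially denied ones) from credentials that
# normally never touch Bedrock is a strong LLMjacking signal.
# The sample records are fabricated for illustration.

SAMPLE_RECORDS = [
    {"eventSource": "bedrock.amazonaws.com", "eventName": "InvokeModel",
     "userIdentity": {"arn": "arn:aws:iam::123456789012:user/app-server"},
     "errorCode": "AccessDeniedException"},
    {"eventSource": "s3.amazonaws.com", "eventName": "GetObject",
     "userIdentity": {"arn": "arn:aws:iam::123456789012:user/app-server"}},
]

def invoke_model_events(records):
    """Return CloudTrail records for Bedrock InvokeModel calls."""
    return [
        r for r in records
        if r.get("eventSource") == "bedrock.amazonaws.com"
        and r.get("eventName") == "InvokeModel"
    ]

for event in invoke_model_events(SAMPLE_RECORDS):
    print(event["userIdentity"]["arn"], event.get("errorCode"))
```

In practice, alerting on InvokeModel calls from identities with no prior Bedrock activity, or on `AccessDeniedException` spikes, would surface the permission-testing phase of this attack before large usage costs accrue.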


Last Updated: April 3, 2025