Cloud Threat Landscape

LLMJacking for Roleplaying Campaign

Type: Campaign
Actors: ❓ Unknown
Pub. date: October 3, 2024
Initial access: Exposed secret
Impact: Resource hijacking
Observed techniques: LLMjacking, Credential theft
References: https://permiso.io/blog/exploiting-hosted-models
Status: Finalized
Last edited: Jan 30, 2026 8:36 AM

In September 2024, threat actors conducted a campaign exploiting exposed AWS access keys to hijack AWS Bedrock services and operate illicit AI-powered roleplay chatbots. The attackers leveraged compromised long-lived credentials (AKIA keys), discovered primarily through GitHub repository scanning, to gain unauthorized access to foundation models, particularly Anthropic's Claude.
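Long-lived AWS access key IDs follow a predictable format, which is what makes repository scanning productive for attackers (and for defensive secret scanners). A minimal sketch of such a scanner; the regex and helper are illustrative, not the attackers' or Permiso's actual tooling:

```python
import re

# Long-lived IAM user access key IDs start with "AKIA", followed by 16
# characters from AWS's base32-style alphabet (assumed here: A-Z and 2-7).
AKIA_PATTERN = re.compile(r"\bAKIA[A-Z2-7]{16}\b")

def find_access_key_ids(text: str) -> list[str]:
    """Return candidate long-lived AWS access key IDs found in text."""
    return AKIA_PATTERN.findall(text)

# AWS's documented example key ID, as it might appear in a leaked commit.
sample = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'
print(find_access_key_ids(sample))  # → ['AKIAIOSFODNN7EXAMPLE']
```

Scanning only catches the key ID; an attacker still needs the paired secret access key, but the two are routinely committed together in config files.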

The attack employed a three-phase methodology: first checking model availability via InvokeModel or the undocumented GetFoundationModelAvailability API, then programmatically requesting model access using console-only APIs with fabricated business justifications, and finally invoking models with jailbreak prompts designed to bypass content-safety filters.
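From the defender's side, this sequence leaves a distinctive pattern in CloudTrail: an availability probe from a principal followed by model invocations. A minimal detection sketch; the simplified `(principal, eventName)` records stand in for real CloudTrail JSON, and treating the probe-then-invoke sequence as suspicious is this sketch's assumption, not a rule from the source:

```python
PROBE = "GetFoundationModelAvailability"  # phase 1: undocumented availability check
INVOKE = {"InvokeModel", "InvokeModelWithResponseStream"}  # phase 3: model use

def flag_probe_then_invoke(events: list[tuple[str, str]]) -> set[str]:
    """Return principals that probed model availability and later invoked a model.

    `events` is a time-ordered list of (principal, eventName) pairs.
    """
    probed: set[str] = set()
    flagged: set[str] = set()
    for principal, event_name in events:
        if event_name == PROBE:
            probed.add(principal)
        elif event_name in INVOKE and principal in probed:
            flagged.add(principal)
    return flagged

trail = [
    ("AIDAEXAMPLEATTACKER", PROBE),
    ("AIDAEXAMPLEATTACKER", "InvokeModel"),
    ("AIDAEXAMPLELEGITUSER", "InvokeModel"),  # normal use: no prior probe
]
print(flag_probe_then_invoke(trail))  # flags only the probing principal
```

In practice this logic would run over CloudTrail management events, keyed on the access key ID or user identity ARN rather than a bare principal string.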

Over a 48-hour period in early August, researchers observed approximately 75,000 successful model invocations from 12 distinct ASNs across multiple geographic regions. The primary destination for the hijacked compute appeared to be Chub[.]ai, a character-roleplay platform. Generated content included policy-violating material such as sexual and violent content, with some instances involving child sexual exploitation material (CSEM).

AWS responded by updating the AWSCompromisedKeyQuarantineV2 policy on October 2, 2024, to explicitly block Bedrock operations.
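The quarantine policy works by attaching an explicit deny to any IAM user whose key is found exposed, so even a valid stolen key can no longer reach Bedrock. An illustrative, abridged sketch of what such a deny statement looks like (the action list is an assumption for illustration, not the verbatim AWS-managed policy):

```python
import json

# Abridged, illustrative deny statement in the spirit of
# AWSCompromisedKeyQuarantineV2; not the exact AWS-managed policy document.
quarantine_statement = {
    "Effect": "Deny",
    "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream",
    ],
    "Resource": "*",
}
print(json.dumps(quarantine_statement, indent=2))
```

Because IAM evaluates an explicit deny ahead of any allow, attaching this policy neutralizes the key for Bedrock use without deleting it, preserving evidence for investigation.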

Made with 💙 by Wiz

Last Updated: April 3, 2025