In September 2024, threat actors conducted a campaign exploiting exposed AWS access keys to hijack AWS Bedrock services and operate illicit AI-powered roleplay chatbots. The attackers leveraged compromised long-lived credentials (AKIA-prefixed access keys), discovered primarily through GitHub repository scanning, to gain unauthorized access to foundation models, particularly Anthropic's Claude.
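To illustrate the discovery step, the following is a minimal sketch of the kind of repository scanning involved. The regular expression reflects the documented format of long-lived AWS access key IDs (the AKIA prefix followed by 16 uppercase alphanumeric characters); the file-handling logic is illustrative, not any specific actor's tooling.

```python
import re
import sys

# Long-lived AWS access key IDs begin with "AKIA" followed by
# 16 uppercase alphanumeric characters (documented key ID format).
AKIA_PATTERN = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def scan_file(path: str) -> list[str]:
    """Return any candidate AWS access key IDs found in a file."""
    with open(path, "r", errors="ignore") as f:
        return AKIA_PATTERN.findall(f.read())

if __name__ == "__main__":
    for path in sys.argv[1:]:
        for key_id in scan_file(path):
            print(f"{path}: possible exposed key ID {key_id}")
```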
The attack employed a three-phase methodology: first checking model availability via InvokeModel or the undocumented GetFoundationModelAvailability API, then programmatically requesting model access using console-only APIs with fabricated business justifications, and finally invoking models with jailbreak prompts designed to bypass content safety filters.
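A sketch of the first and third phases using boto3's documented bedrock-runtime client follows; the model ID and prompt are illustrative placeholders, and the undocumented GetFoundationModelAvailability call is omitted because it has no public SDK binding. The key signal is the error code: an AccessDeniedException on InvokeModel tells the caller the key lacks entitlement for that model.

```python
import json
import boto3
from botocore.exceptions import ClientError

runtime = boto3.client("bedrock-runtime")  # uses whatever credentials are configured

# Illustrative model ID; the campaign targeted Anthropic Claude models.
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

def probe_model(model_id: str) -> bool:
    """Phase 1: test whether the current credentials can invoke a model.

    A successful (or merely throttled) call means the model is enabled
    for the account; AccessDeniedException means entitlement is missing.
    """
    try:
        runtime.invoke_model(
            modelId=model_id,
            contentType="application/json",
            accept="application/json",
            body=json.dumps({
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 1,
                "messages": [{"role": "user", "content": "ping"}],
            }),
        )
        return True
    except ClientError as err:
        return err.response["Error"]["Code"] != "AccessDeniedException"

if __name__ == "__main__":
    print(f"{MODEL_ID} invocable: {probe_model(MODEL_ID)}")
```

In the flow described above, keys that fail this probe are escalated to the access-request phase rather than discarded.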
Over a 48-hour period in early August, researchers observed approximately 75,000 successful model invocations originating from 12 distinct ASNs across multiple geographic regions. The primary destination for this hijacked compute appears to be Chub[.]ai, a character roleplay platform. Generated content included policy-violating sexual and violent material, and in some instances child sexual exploitation material (CSEM).
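For defenders, here is a hedged sketch of how that invocation volume might be surfaced from CloudTrail, assuming Bedrock InvokeModel calls are recorded as management events in the account's trail (the event-name filter is an assumption about how these calls appear); tallying by source IP approximates the per-ASN view the researchers describe.

```python
import json
from collections import Counter
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")

def invocations_by_source_ip(hours: int = 48) -> Counter:
    """Tally Bedrock InvokeModel events by caller IP over a lookback window."""
    start = datetime.now(timezone.utc) - timedelta(hours=hours)
    counts: Counter = Counter()
    paginator = cloudtrail.get_paginator("lookup_events")
    for page in paginator.paginate(
        LookupAttributes=[
            {"AttributeKey": "EventName", "AttributeValue": "InvokeModel"}
        ],
        StartTime=start,
    ):
        for event in page["Events"]:
            record = json.loads(event["CloudTrailEvent"])
            counts[record.get("sourceIPAddress", "unknown")] += 1
    return counts

if __name__ == "__main__":
    for ip, n in invocations_by_source_ip().most_common(20):
        print(f"{ip}\t{n}")
```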
AWS responded by updating the AWSCompromisedKeyQuarantineV2 managed policy on October 2, 2024, adding explicit denies for Bedrock operations so that credentials flagged as compromised can no longer invoke models.
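As a closing illustration, a small audit step a responder might run, assuming IAM read access: it checks whether the quarantine policy (a real AWS-managed policy ARN) is attached to a given IAM user, which indicates AWS has already flagged one of that user's keys. The user name is hypothetical.

```python
import boto3

iam = boto3.client("iam")

# AWS-managed policy attached to identities whose keys are found exposed;
# the October 2024 update added explicit Bedrock denies.
QUARANTINE_ARN = "arn:aws:iam::aws:policy/AWSCompromisedKeyQuarantineV2"

def is_quarantined(user_name: str) -> bool:
    """Return True if the quarantine policy is attached to the IAM user."""
    paginator = iam.get_paginator("list_attached_user_policies")
    for page in paginator.paginate(UserName=user_name):
        for policy in page["AttachedPolicies"]:
            if policy["PolicyArn"] == QUARANTINE_ARN:
                return True
    return False

if __name__ == "__main__":
    print(is_quarantined("example-user"))  # hypothetical user name
```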