Researchers discovered active exploitation of a misconfigured Open WebUI instance (a self-hosted interface for large language models, or LLMs) that had been exposed to the internet with administrator access enabled and no authentication. A threat actor leveraged this misconfiguration to upload and execute a malicious, AI-assisted Python script that deployed cryptominers, infostealers, and stealth tools across Linux and Windows systems.
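The root cause was an administrative interface answering unauthenticated requests from the open internet. The short sketch below illustrates one way a defender might flag that kind of exposure: request an admin-level path without credentials and treat a successful response as a misconfiguration signal. The endpoint path, port, and response handling are illustrative assumptions, not details from the report or the Open WebUI API.

```python
import sys

import requests  # third-party: pip install requests


def is_exposed(base_url: str, admin_path: str = "/api/v1/users") -> bool:
    """Return True if an admin-level path answers an unauthenticated request.

    The admin_path default is a placeholder; substitute a path known to
    require authentication on the deployment being audited.
    """
    try:
        response = requests.get(base_url.rstrip("/") + admin_path, timeout=10)
    except requests.RequestException:
        # Unreachable or refusing connections: not evidence of exposure.
        return False
    # 401/403 means authentication is enforced; 200 suggests the instance
    # is serving privileged content to anonymous callers.
    return response.status_code == 200


if __name__ == "__main__":
    url = sys.argv[1] if len(sys.argv) > 1 else "http://localhost:3000"
    print("exposed" if is_exposed(url) else "auth enforced or unreachable")
```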
The attacker uploaded a Python script via Open WebUI’s plugin system, obfuscated under a deep chain of base64 encoding and zlib compression (“pyklump”). The payload deployed Linux cryptominers (T-Rex and XMRig), installed compiled stealth tools (processhider, argvhider), established persistence via systemd, and used a Discord webhook for command-and-control (C2). On Windows, a secondary stage used a JDK installer to execute a malicious JAR from a remote IP, dropping additional Java-based loaders and DLLs that performed credential theft, sandbox evasion, and system reconnaissance. Evidence suggests the Python payload was partially AI-generated, marking a notable use of LLMs in payload development.
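Because the first stage was hidden under many nested layers of base64 encoding and zlib compression, analysts typically unwrap it statically rather than running it. The sketch below shows one way to peel such layers without executing the sample, assuming each layer follows the common exec(zlib.decompress(base64.b64decode(...))) wrapper; the regex, layer cap, and the sample_stage1.py file name are illustrative assumptions rather than details from the analysis.

```python
import base64
import re
import zlib

# Matches one obfuscation layer of the form
#   exec(zlib.decompress(base64.b64decode(b"...")))
# The wrapper shape is an assumption for illustration; the actual sample's
# layering may differ.
WRAPPER = re.compile(
    rb"""exec\(zlib\.decompress\(base64\.b64decode\(b?(["'])(?P<blob>[A-Za-z0-9+/=]+)\1\)\)\)"""
)


def peel(source: bytes, max_layers: int = 64) -> bytes:
    """Statically unwrap nested base64+zlib layers without executing anything."""
    for _ in range(max_layers):
        match = WRAPPER.search(source)
        if match is None:
            break  # innermost plaintext stage reached
        # Reverse one layer: decode the base64 blob, then decompress it,
        # yielding the next (still possibly wrapped) stage of source code.
        source = zlib.decompress(base64.b64decode(match.group("blob")))
    return source


if __name__ == "__main__":
    with open("sample_stage1.py", "rb") as handle:  # hypothetical sample path
        print(peel(handle.read()).decode("utf-8", errors="replace"))
```

Working on the source text this way keeps the analysis inert: no stage of the payload is ever executed, only decoded.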