Attackers could have injected malicious payloads into model files loaded by Hydra-dependent tools, putting millions of downloads across the affected repositories at risk.
Security researchers have identified remote code execution flaws in three widely used open-source Python libraries for AI/ML whose models, hosted on platforms such as Hugging Face, account for tens of millions of combined downloads.
The issues stem from insecure handling of model metadata, which lets attackers embed harmful payloads that execute when a file is loaded; no real-world attacks had been observed as of late 2025.
The libraries in question — NeMo, Uni2TS, and FlexTok — rely on Hydra, a configuration tool from Meta, whose instantiate() function processes metadata without sufficient safeguards, permitting execution of arbitrary callables such as builtins.exec() with attacker-supplied strings as arguments.
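To illustrate the class of bug, here is a minimal sketch of the vulnerable pattern — not Hydra's actual code — in which an instantiate()-style helper resolves a dotted "_target_" path from config metadata and calls it with whatever arguments the metadata supplies (the helper name and the "_args_"/"_kwargs_" keys are assumptions for illustration):

```python
import importlib

def naive_instantiate(config: dict):
    """Resolve config["_target_"] to a callable and invoke it
    with metadata-supplied arguments -- no validation at all."""
    module_path, _, attr = config["_target_"].rpartition(".")
    target = getattr(importlib.import_module(module_path), attr)
    return target(*config.get("_args_", []), **config.get("_kwargs_", {}))

# A benign target looks harmless...
result = naive_instantiate({"_target_": "collections.Counter",
                            "_args_": ["abracadabra"]})
print(result.most_common(1))  # [('a', 5)]

# ...but nothing stops metadata from naming builtins.exec instead,
# turning a model load into arbitrary code execution:
naive_instantiate({"_target_": "builtins.exec",
                   "_args_": ["print('arbitrary code ran')"]})
```

Since the target string comes straight from model metadata, anyone who can edit a model file chooses what gets executed at load time.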
Exploitation potential
Attackers could exploit the flaws by taking popular models, injecting tainted metadata, claiming the result as an enhanced variant, and uploading it to repositories. Hugging Face displays no prominent warnings for the NeMo or safetensors formats and makes metadata hard to inspect, easing such attacks.
Over 700 models employ NeMo’s format, Uni2TS powers Salesforce time-series models with hundreds of thousands of downloads, and FlexTok supports EPFL VILAB image-processing models with tens of thousands more; nearly 50 of the more than 100 Python libraries on the platform use Hydra, expanding the risk landscape.
The Palo Alto Networks team that discovered the vulnerabilities disclosed them to the vendors starting in April 2025, prompting swift responses:
- Nvidia assigned CVE-2025-23304 (high severity) and patched NeMo in version 2.3.2 with a safe_instantiate() function that performs recursive target validation and allow-lists modules such as PyTorch.
- Salesforce issued CVE-2026-22584 (high severity), fixed Uni2TS on July 31, 2025, with module allow-listing, and confirmed no customer data was compromised.
- Apple and EPFL VILAB resolved FlexTok in June 2025 by switching to YAML parsing with class allow-lists and advising users to trust only verified sources.
- Meta enhanced the Hydra documentation with RCE warnings and added a basic block-list for risky functions such as builtins.exec(), although the list can be evaded via implicit imports and had not shipped in a stable release as of January 2026.
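The implicit-import evasion mentioned above can be sketched in a few lines. This is an illustrative toy, not any vendor's code: a resolver walks a dotted path with getattr chains, a block-list checks the raw target string, and an allow-list checks its prefix (the list contents are assumptions):

```python
import importlib
import os

def resolve(path: str):
    """Resolve a dotted path via getattr chains, as a loader might."""
    parts = path.split(".")
    obj = importlib.import_module(parts[0])
    for name in parts[1:]:
        obj = getattr(obj, name)
    return obj

BLOCK_LIST = {"builtins.exec", "builtins.eval", "os.system"}
ALLOW_PREFIXES = ("torch.", "collections.")  # hypothetical allow-list

def blocklist_check(path: str) -> bool:
    return path not in BLOCK_LIST

def allowlist_check(path: str) -> bool:
    return path.startswith(ALLOW_PREFIXES)

# The block-list passes this target, yet it resolves to os.system,
# because posixpath happens to import os into its own namespace:
evasive = "posixpath.os.system"
assert blocklist_check(evasive)        # slips through the deny-list
assert resolve(evasive) is os.system   # same dangerous function

# The allow-list rejects it outright:
assert not allowlist_check(evasive)
```

Because many modules re-export or implicitly import others, enumerating every dangerous string is a losing game; pinning targets to a short list of trusted prefixes is the more robust check.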
These fixes underscore the urgent need for metadata sanitization in AI pipelines, as model sharing proliferates without uniform security scrutiny.
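The recursive target validation described for NeMo's fix might look roughly like the sketch below. The function and list names are assumptions, not Nvidia's code; the point is that every nested sub-config must be walked, since a payload can hide arbitrarily deep in model metadata:

```python
ALLOWED_MODULES = ("torch", "nemo")  # example allow-list of root modules

def validate_targets(config):
    """Recursively reject any _target_ whose root module is not allow-listed."""
    if isinstance(config, dict):
        target = config.get("_target_")
        if target is not None:
            root = target.split(".")[0]
            if root not in ALLOWED_MODULES:
                raise ValueError(f"disallowed target: {target}")
        for value in config.values():
            validate_targets(value)
    elif isinstance(config, list):
        for item in config:
            validate_targets(item)

# A payload buried in a sub-config is still caught:
bad = {"model": {"optimizer": {"_target_": "builtins.exec",
                               "_args_": ["..."]}}}
try:
    validate_targets(bad)
except ValueError as e:
    print(e)  # disallowed target: builtins.exec
```

Running a check like this before any instantiation — rather than after resolving the first level of the config — is what makes the validation "recursive" in the sense described above.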



