PickleScan flaws allowed attackers to bypass scans and execute hidden code in malicious PyTorch models before the latest patch.
Three critical zero-day vulnerabilities affecting PickleScan, a widely used tool for scanning Python pickle files and PyTorch models, have been uncovered by cybersecurity researchers. The flaws, all ...
AI frameworks, including Meta’s Llama, are prone to automatic deserialization of untrusted data via Python’s pickle module, which could lead to remote code execution. Meta’s large language model (LLM) framework, Llama, suffers a ...
Flaws replicated from Meta’s Llama Stack to Nvidia TensorRT-LLM, vLLM, SGLang, and others, exposing enterprise AI stacks to systemic risk. Cybersecurity researchers have uncovered a chain of critical ...
Researchers at ReversingLabs have discovered two malicious machine learning (ML) models available on Hugging Face, the leading hub for sharing AI models and applications. While these models contain ...
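The common thread in these reports is pickle's deserialization behavior: an object can define a `__reduce__` method that names an arbitrary callable, and the unpickler invokes that callable during loading. The sketch below is a minimal, harmless illustration of the mechanism (it uses `eval` on a constant expression rather than a real payload); it is not taken from any of the vulnerabilities described above.

```python
import pickle

class Payload:
    """An object that runs code when it is deserialized."""
    def __reduce__(self):
        # pickle will call the returned callable with the given arguments
        # at load time. A real exploit would return something like
        # (os.system, ("malicious command",)); here we eval a harmless
        # constant expression to show that code executes during loading.
        return (eval, ("6 * 7",))

blob = pickle.dumps(Payload())

# Merely loading the bytes executes the attacker-chosen callable;
# the unpickled "object" is whatever that callable returned.
result = pickle.loads(blob)
print(result)  # → 42
```

This is why scanners like PickleScan inspect pickle opcodes for dangerous imports, and why safer formats such as safetensors avoid arbitrary code execution by design.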