CVE-2025-66448

7.1 HIGH
Published: December 01, 2025 | Modified: December 03, 2025

Description

vLLM is an inference and serving engine for large language models (LLMs). Prior to 0.11.1, vLLM has a critical remote code execution vector in a config class named Nemotron_Nano_VL_Config. When vLLM loads a model config that contains an auto_map entry, the config class resolves that mapping with get_class_from_dynamic_module(...) and immediately instantiates the returned class. This fetches and executes Python from the remote repository referenced in the auto_map string. Crucially, this happens even when the caller explicitly sets trust_remote_code=False in vllm.transformers_utils.config.get_config. In practice, an attacker can publish a benign-looking frontend repo whose config.json points via auto_map to a separate malicious backend repo; loading the frontend will silently run the backend's code on the victim host. This vulnerability is fixed in 0.11.1.
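
The mechanics reduce to a few lines. The sketch below is illustrative only, not vLLM's actual source: resolve_config_class, the repo names, and the config layout are hypothetical; only get_class_from_dynamic_module (a real Hugging Face transformers utility) and the auto_map convention come from the advisory text.

# Illustrative sketch of the vulnerable pattern; not vLLM's real code.
from transformers.dynamic_module_utils import get_class_from_dynamic_module

# config.json of a benign-looking frontend repo might contain
# (hypothetical repo and class names):
#   {
#     "auto_map": {
#       "AutoConfig": "attacker/backend--configuration_evil.EvilConfig"
#     }
#   }

def resolve_config_class(config_dict, model_repo):
    """Resolve an auto_map entry the way the vulnerable pattern does."""
    class_ref = config_dict.get("auto_map", {}).get("AutoConfig")
    if class_ref is None:
        return None
    # Downloading and importing the referenced module executes its
    # top-level Python on this host; trust_remote_code is never consulted.
    cls = get_class_from_dynamic_module(class_ref, model_repo)
    # Instantiation additionally runs the attacker-controlled __init__.
    return cls()

Because nothing on this path checks trust_remote_code, passing trust_remote_code=False upstream has no effect; per the advisory, upgrading to vLLM 0.11.1 or later is the remediation.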

CVSS v3.x Details

Vector String
CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:U/C:H/I:H/A:H
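
Decoded against the CVSS v3.1 specification, the vector reads:

Attack Vector: Network (AV:N)
Attack Complexity: High (AC:H)
Privileges Required: Low (PR:L)
User Interaction: Required (UI:R)
Scope: Unchanged (S:U)
Confidentiality / Integrity / Availability impact: High (C:H/I:H/A:H)

The AC:H and UI:R components line up with the attack scenario above: a victim must be induced to load the attacker's frontend repo.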

References to Advisories, Solutions, and Tools

https://github.com/vllm-project/vllm/pull/28126
Issue Tracking (Source: security-advisories@github.com)

https://github.com/vllm-project/vllm/security/advisories/GHSA-8fr4-5q9j-m8gm
Vendor Advisory (Source: security-advisories@github.com)

3 references from NVD

Quick Stats

CVSS v3 Score: 7.1 / 10.0
EPSS (Exploit Probability): 0.2% (37th percentile)
Exploitation Status: Not in CISA KEV

Weaknesses (CWE)

Affected Vendors

vllm