CVE-2025-62164

8.8 HIGH
Published: November 21, 2025
Modified: December 04, 2025

Description

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.10.2 up to but not including 0.11.1, a memory corruption vulnerability that could lead to a crash (denial of service) and potentially remote code execution (RCE) exists in the Completions API endpoint. When processing user-supplied prompt embeddings, the endpoint loads serialized tensors using torch.load() without sufficient validation. Due to a change introduced in PyTorch 2.8.0, sparse tensor integrity checks are disabled by default. As a result, maliciously crafted tensors can bypass internal bounds checks and trigger an out-of-bounds memory write during the call to to_dense(). This memory corruption can crash vLLM and potentially lead to code execution on the server hosting vLLM. This issue has been patched in version 0.11.1.
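
For illustration, the sketch below shows one defensive way a server could deserialize an untrusted, possibly sparse tensor such as a user-supplied prompt embedding. This is a minimal sketch, not the actual vLLM fix from PR #27204; the helper name load_prompt_embeds is hypothetical. It relies on torch.sparse.check_sparse_tensor_invariants, PyTorch's documented mechanism for re-enabling the sparse-tensor validation that version 2.8.0 turned off by default.

```python
# Minimal defensive sketch (assumption: NOT the actual vLLM patch) for
# loading an untrusted serialized tensor before densifying it.
import io

import torch


def load_prompt_embeds(payload: bytes) -> torch.Tensor:  # hypothetical helper
    buf = io.BytesIO(payload)

    # Re-enable the sparse-tensor invariant checks that PyTorch 2.8.0
    # disabled by default: inside this context, sparse tensors rebuilt by
    # torch.load() are validated (e.g. indices must fit the declared shape).
    with torch.sparse.check_sparse_tensor_invariants():
        # weights_only=True restricts unpickling to tensor data, which blocks
        # arbitrary-object deserialization; on its own, however, it does not
        # catch malformed sparse indices.
        tensor = torch.load(buf, weights_only=True)

        if tensor.is_sparse:
            # Belt-and-braces: explicitly bounds-check the COO indices
            # before the densification step that this CVE abuses.
            indices = tensor._indices()
            sizes = torch.tensor(tensor.shape, device=indices.device)
            if (indices < 0).any() or (indices >= sizes.unsqueeze(1)).any():
                raise ValueError("sparse tensor indices out of bounds")
            tensor = tensor.to_dense()
    return tensor
```

Such checks are a mitigation only; upgrading to vLLM 0.11.1, which patches the endpoint, remains the primary remediation.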

CVSS v3.x Details

Vector String
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H
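
For readers unfamiliar with CVSS notation, the short sketch below (illustrative only, not NVD tooling; the decode function is an assumption of this page, not a library API) expands the vector string into readable metric names per the CVSS v3.1 specification.

```python
# Illustrative CVSS v3.1 vector decoder; metric names and values follow the
# CVSS v3.1 specification.
CVSS31_METRICS = {
    "AV": ("Attack Vector", {"N": "Network", "A": "Adjacent", "L": "Local", "P": "Physical"}),
    "AC": ("Attack Complexity", {"L": "Low", "H": "High"}),
    "PR": ("Privileges Required", {"N": "None", "L": "Low", "H": "High"}),
    "UI": ("User Interaction", {"N": "None", "R": "Required"}),
    "S": ("Scope", {"U": "Unchanged", "C": "Changed"}),
    "C": ("Confidentiality", {"N": "None", "L": "Low", "H": "High"}),
    "I": ("Integrity", {"N": "None", "L": "Low", "H": "High"}),
    "A": ("Availability", {"N": "None", "L": "Low", "H": "High"}),
}


def decode(vector: str) -> dict[str, str]:
    """Map each metric in a CVSS v3.1 vector string to its readable value."""
    out = {}
    for part in vector.split("/")[1:]:  # drop the "CVSS:3.1" prefix
        key, value = part.split(":")
        name, values = CVSS31_METRICS[key]
        out[name] = values[value]
    return out


print(decode("CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H"))
# Attack Vector: Network, Attack Complexity: Low, Privileges Required: Low,
# User Interaction: None, Scope: Unchanged, C/I/A impact: High
```

In plain terms: the flaw is remotely reachable over the network, requires only low-privilege access (e.g. an authenticated API user) and no user interaction, and fully compromises confidentiality, integrity, and availability of the vulnerable component.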

References to Advisories, Solutions, and Tools

https://github.com/vllm-project/vllm/pull/27204
Source: security-advisories@github.com
Tags: Issue Tracking, Patch, Vendor Advisory

https://github.com/vllm-project/vllm/security/advisories/GHSA-mrw7-hf4f-83pf
Source: security-advisories@github.com
Tags: Issue Tracking, Vendor Advisory

3 references from NVD

Quick Stats

CVSS v3 Score: 8.8 / 10.0
EPSS (Exploit Probability): 0.3% (56th percentile)
Exploitation Status: Not in CISA KEV

Affected Vendors

vllm