CVE-2025-62372

6.5 MEDIUM
Published: November 21, 2025 Modified: December 04, 2025

Description

vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.5.5 up to but not including 0.11.1, users can crash a vLLM engine serving multimodal models by passing multimodal embedding inputs that have the correct number of dimensions (ndim) but an incorrect shape (e.g., a wrong hidden dimension), regardless of whether the model is intended to support such inputs (as defined on the Supported Models page). This issue has been patched in version 0.11.1.
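To illustrate the failure mode and the kind of defensive check that addresses it, below is a minimal sketch. This is not vLLM's actual code; the validate_image_embeds helper and the expected (num_tokens, hidden_size) shape are assumptions for illustration. The point is that a tensor with the right ndim but a mismatched hidden dimension is rejected with a clear error instead of propagating into the forward pass and crashing the engine.

    import torch

    def validate_image_embeds(embeds: torch.Tensor, hidden_size: int) -> torch.Tensor:
        """Hypothetical guard: reject embedding tensors whose shape does not
        match what the model expects, instead of letting them crash the engine.

        Expected shape: (num_image_tokens, hidden_size) -- 2 dims, with the
        last dimension equal to the model's hidden size.
        """
        if embeds.ndim != 2:
            raise ValueError(
                f"image embeddings must be 2-D (tokens, hidden), got ndim={embeds.ndim}"
            )
        if embeds.shape[-1] != hidden_size:
            # This is the case CVE-2025-62372 describes: ndim is correct,
            # but the hidden dimension does not match the model.
            raise ValueError(
                f"hidden dimension mismatch: expected {hidden_size}, got {embeds.shape[-1]}"
            )
        return embeds

    # Example: a model expecting hidden_size=4096 receives a malformed input.
    good = torch.randn(16, 4096)   # accepted
    bad = torch.randn(16, 1024)    # correct ndim, wrong hidden dimension
    validate_image_embeds(good, hidden_size=4096)
    try:
        validate_image_embeds(bad, hidden_size=4096)
    except ValueError as err:
        print("rejected:", err)

The recommended remediation remains upgrading to vLLM 0.11.1 or later, where such malformed embedding inputs no longer crash the engine.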


CVSS v3.x Details

Vector String
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
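
For readers who want to decode the vector string mechanically, the following is an illustrative parse (not any library's API) of the CVSS:3.1 base metrics above into human-readable labels; the abbreviations and their meanings come from the public CVSS v3.1 specification.

    # Illustrative decoding of the CVSS:3.1 vector string shown above.
    # Metric names and values follow the public CVSS v3.1 specification.
    VECTOR = "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H"

    LABELS = {
        "AV": ("Attack Vector", {"N": "Network", "A": "Adjacent", "L": "Local", "P": "Physical"}),
        "AC": ("Attack Complexity", {"L": "Low", "H": "High"}),
        "PR": ("Privileges Required", {"N": "None", "L": "Low", "H": "High"}),
        "UI": ("User Interaction", {"N": "None", "R": "Required"}),
        "S":  ("Scope", {"U": "Unchanged", "C": "Changed"}),
        "C":  ("Confidentiality Impact", {"N": "None", "L": "Low", "H": "High"}),
        "I":  ("Integrity Impact", {"N": "None", "L": "Low", "H": "High"}),
        "A":  ("Availability Impact", {"N": "None", "L": "Low", "H": "High"}),
    }

    for part in VECTOR.split("/")[1:]:      # skip the "CVSS:3.1" prefix
        metric, value = part.split(":")
        name, values = LABELS[metric]
        print(f"{name}: {values[value]}")

Decoded, the vector describes a network-reachable, low-complexity denial of service (high availability impact, no confidentiality or integrity impact) requiring only low privileges and no user interaction, consistent with the 6.5 MEDIUM base score shown above.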

References to Advisories, Solutions, and Tools

https://github.com/vllm-project/vllm/pull/27204
Source: security-advisories@github.com
Tags: Issue Tracking, Patch, Vendor Advisory

https://github.com/vllm-project/vllm/pull/6613
Source: security-advisories@github.com
Tags: Issue Tracking

https://github.com/vllm-project/vllm/security/advisories/GHSA-pmqf-x6x8-p7qw
Source: security-advisories@github.com
Tags: Mitigation, Vendor Advisory

4 references from NVD

Quick Stats

CVSS v3 Score
6.5 / 10.0
EPSS (Exploit Probability)
0.1% (17th percentile)
Exploitation Status
Not in CISA KEV

Weaknesses (CWE)

Affected Vendors

vllm