CVE-2026-22778

9.8 CRITICAL
Published: February 2, 2026 | Modified: February 23, 2026

Description

vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.8.3 up to, but not including, 0.14.1, sending an invalid image to vLLM's multimodal endpoint causes PIL to raise an error, and vLLM returns the error message verbatim to the client, leaking a heap address. That leak reduces the ASLR search space from roughly 4 billion guesses to about 8. The leak can then be chained with a heap overflow in the JPEG2000 decoder of OpenCV/FFmpeg to achieve remote code execution. The vulnerability is fixed in version 0.14.1.
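
The leak mechanism is simple: PIL's UnidentifiedImageError interpolates repr() of the file object into its message, and the repr of an io.BytesIO includes the object's heap address, so any server that echoes the exception text back to the client discloses a live pointer. A minimal sketch of the pattern (the handler below is hypothetical, not vLLM's actual code):

    import io
    from PIL import Image, UnidentifiedImageError

    def handle_image(data: bytes) -> dict:
        # Hypothetical endpoint handler illustrating the flaw.
        try:
            Image.open(io.BytesIO(data)).verify()
            return {"status": "ok"}
        except UnidentifiedImageError as exc:
            # BAD: str(exc) reads like
            # "cannot identify image file <_io.BytesIO object at 0x7f3a9c2b4e00>"
            # and 0x7f3a9c2b4e00 is a live heap address handed to the client.
            return {"error": str(exc)}

    print(handle_image(b"not an image"))

The obvious mitigation, and presumably the approach taken by the 0.14.1 fix (not verified here against the patch), is to log the detailed exception server-side and return only a generic message such as {"error": "invalid image"}.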

CVSS v3.x Details

Vector String
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
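
Decoded, the vector indicates the flaw is exploitable over the network (AV:N) with low attack complexity (AC:L), requires no privileges (PR:N) and no user interaction (UI:N), leaves the scope unchanged (S:U), and has high confidentiality, integrity, and availability impact (C:H/I:H/A:H), which yields the 9.8 base score.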

References to Advisories, Solutions, and Tools

https://github.com/vllm-project/vllm/pull/31987
Source: security-advisories@github.com (Issue Tracking, Patch)

https://github.com/vllm-project/vllm/pull/32319
Source: security-advisories@github.com (Issue Tracking, Patch)

https://github.com/vllm-project/vllm/releases/tag/v0.14.1
Source: security-advisories@github.com (Release Notes)

https://github.com/vllm-project/vllm/security/advisories/GHSA-4r2x-xpjr-7cvv
Source: security-advisories@github.com (Vendor Advisory)

4 references from NVD
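
Remediation is to upgrade to the patched release. For a typical pip-managed install (this command assumes vLLM was installed from PyPI):

    pip install --upgrade "vllm>=0.14.1"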

Quick Stats

CVSS v3 Score: 9.8 / 10.0
EPSS (Exploit Probability): 0.1% (24th percentile)
Exploitation Status: Not in CISA KEV

Weaknesses (CWE)

None listed.

Affected Vendors

vllm