CVE-2026-21869

8.8 HIGH
Published: January 08, 2026 Modified: February 02, 2026

Description

llama.cpp is a C/C++ inference framework for several LLM models. In commits 55d4206c8 and prior, the n_discard parameter is parsed directly from JSON input in the llama.cpp server's completion endpoints without validation that it is non-negative. When a negative value is supplied and the context fills up, llama_memory_seq_rm/add receive a reversed range and a negative offset, causing out-of-bounds memory writes in the token-evaluation loop. This deterministic memory corruption can crash the process or enable remote code execution (RCE). No fix was available at the time of publication.


CVSS v3.x Details

Vector String
CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

References to Advisories, Solutions, and Tools

https://github.com/ggml-org/llama.cpp/security/advisories/GHSA-8947-pfff-2f3c
Source: security-advisories@github.com (Exploit, Vendor Advisory)
Source: 134c704f-9b21-4f2e-91b3-4a467353bcc0 (Exploit, Vendor Advisory)

2 reference(s) from NVD

Quick Stats

CVSS v3 Score: 8.8 / 10.0
EPSS (Exploit Probability): 0.4% (58th percentile)
Exploitation Status: Not in CISA KEV

Weaknesses (CWE)

Affected Vendors

ggml