CVE-2024-53058

5.5 MEDIUM
Published: November 19, 2024
Modified: November 03, 2025

Description

In the Linux kernel, the following vulnerability has been resolved: net: stmmac: TSO: Fix unbalanced DMA map/unmap for non-paged SKB data.

If the non-paged data of an SKB carries both the protocol header and the protocol payload on a platform where the DMA AXI address width is configured to 40-bit/48-bit, or if the non-paged data is larger than TSO_MAX_BUFF_SIZE on a platform where the DMA AXI address width is configured to 32-bit, the SKB requires at least two DMA transmit descriptors. For example, three descriptors may be allocated to split one DMA buffer mapped from one piece of non-paged data:

    dma_desc[N + 0]
    dma_desc[N + 1]
    dma_desc[N + 2]

Three elements of tx_q->tx_skbuff_dma[] are then allocated to hold extra information for reuse in stmmac_tx_clean():

    tx_q->tx_skbuff_dma[N + 0]
    tx_q->tx_skbuff_dma[N + 1]
    tx_q->tx_skbuff_dma[N + 2]

The field of interest is tx_q->tx_skbuff_dma[entry].buf, which holds the DMA buffer address returned by the DMA mapping call. stmmac_tx_clean() unmaps the DMA buffer only if tx_q->tx_skbuff_dma[entry].buf is a valid buffer address. The expected behavior when saving the DMA buffer address of this non-paged data is:

    tx_q->tx_skbuff_dma[N + 0].buf = NULL;
    tx_q->tx_skbuff_dma[N + 1].buf = NULL;
    tx_q->tx_skbuff_dma[N + 2].buf = dma_map_single();

Unfortunately, the current code misbehaves like this:

    tx_q->tx_skbuff_dma[N + 0].buf = dma_map_single();
    tx_q->tx_skbuff_dma[N + 1].buf = NULL;
    tx_q->tx_skbuff_dma[N + 2].buf = NULL;

On the stmmac_tx_clean() side, when dma_desc[N + 0] is closed by the DMA engine, tx_q->tx_skbuff_dma[N + 0].buf is a valid buffer address, so the DMA buffer is unmapped immediately. In the rare case where the DMA engine has not yet finished the pending dma_desc[N + 1] and dma_desc[N + 2], things go horribly wrong: DMA accesses an unmapped/unreferenced memory region, and corrupted data is transmitted or an IOMMU fault is triggered. In contrast, the for-loop that maps SKB fragments behaves exactly as expected, and that is how the driver should handle both non-paged data and paged frags. This patch corrects the DMA map/unmap sequence by fixing the array index used for tx_q->tx_skbuff_dma[entry].buf when assigning the DMA buffer address. Tested and verified on DWXGMAC CORE 3.20a.
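To make the indexing mistake concrete, here is a minimal sketch of the rule the fix restores, assuming simplified stand-in types: the struct tx_info ring, the map_head() helper, and the fake DMA address below are hypothetical and are not the stmmac driver's own code (the real fix lands in the driver's TSO transmit path; see the patch links below). The point it demonstrates is that the mapped address must be recorded on the last descriptor's entry, since the cleaner unmaps as soon as it sees a non-NULL .buf on a completed descriptor.

    /*
     * Illustrative sketch only -- NOT the stmmac driver source.
     * struct tx_info and map_head() are hypothetical simplifications.
     */
    #include <stdio.h>

    #define NDESC         3        /* descriptors serving one mapped buffer  */
    #define FAKE_DMA_ADDR 0xDEADUL /* stand-in for a dma_map_single() result */

    struct tx_info {
        unsigned long buf;  /* DMA address; 0 (NULL) = nothing to unmap */
    };

    /*
     * Record the DMA address of a head buffer split across descriptors
     * [first, first + ndesc). The cleaner unmaps an entry as soon as its
     * .buf is non-NULL and its descriptor completes, so the address must
     * be stored on the LAST entry, which completes last.
     */
    static void map_head(struct tx_info *ring, int first, int ndesc,
                         unsigned long dma_addr, int buggy)
    {
        for (int i = 0; i < ndesc; i++)
            ring[first + i].buf = 0;

        if (buggy)
            ring[first].buf = dma_addr;             /* unmapped too early */
        else
            ring[first + ndesc - 1].buf = dma_addr; /* unmapped last */
    }

    int main(void)
    {
        struct tx_info ring[NDESC];

        map_head(ring, 0, NDESC, FAKE_DMA_ADDR, /*buggy=*/1);
        printf("buggy: entry 0 buf=%#lx, unmapped while descriptors 1..2 "
               "may still be in flight\n", ring[0].buf);

        map_head(ring, 0, NDESC, FAKE_DMA_ADDR, /*buggy=*/0);
        printf("fixed: entry %d buf=%#lx, unmapped only after the whole "
               "buffer is done\n", NDESC - 1, ring[NDESC - 1].buf);
        return 0;
    }

With the fixed ordering, the cleaner sees a non-NULL .buf only once every descriptor backing the mapping has completed, which matches how the fragment-mapping for-loop already records its addresses.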

CVSS v3.x Details

Vector String
CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
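As a sanity check on the 5.5 rating, the base score can be recomputed from this vector with the CVSS v3.1 formula. The metric weights below are the published v3.1 constants for AV:L (0.55), AC:L (0.77), PR:L with unchanged scope (0.62), UI:N (0.85), C:N and I:N (0), and A:H (0.56); the Roundup here is a simplified ceiling-based version of the spec's floating-point-robust function, which agrees for this input.

    /* Recompute the CVSS v3.1 base score for
     * CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H.
     * Build with: cc cvss.c -lm
     */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* v3.1 metric weights for this vector (scope unchanged) */
        const double av = 0.55, ac = 0.77, pr = 0.62, ui = 0.85;
        const double c = 0.0, i = 0.0, a = 0.56;

        double iss = 1.0 - (1.0 - c) * (1.0 - i) * (1.0 - a); /* 0.56    */
        double impact = 6.42 * iss;                           /* 3.5952  */
        double expl = 8.22 * av * ac * pr * ui;               /* ~1.8346 */

        /* Roundup: smallest one-decimal value >= the input */
        double base = ceil(fmin(impact + expl, 10.0) * 10.0) / 10.0;

        printf("base score = %.1f\n", base);                  /* 5.5 */
        return 0;
    }

Only the availability impact (A:H) contributes to the impact subscore here, which is what keeps the overall rating at MEDIUM despite the low attack complexity.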

References to Advisories, Solutions, and Tools

https://git.kernel.org/stable/c/07c9c26e37542486e34d767505e842f48f29c3f6 (Patch)
https://git.kernel.org/stable/c/58d23d835eb498336716cca55b5714191a309286 (Patch)
https://git.kernel.org/stable/c/66600fac7a984dea4ae095411f644770b2561ede (Patch)
https://git.kernel.org/stable/c/a3ff23f7c3f0e13f718900803e090fd3997d6bc9 (Patch)
https://git.kernel.org/stable/c/ece593fc9c00741b682869d3f3dc584d37b7c9df (Patch)

Source (all entries): 416baaa9-dc9f-4396-8d5f-8c081fb06d67

6 references from NVD

Quick Stats

CVSS v3 Score: 5.5 / 10.0
EPSS (Exploit Probability): 0.0% (1st percentile)
Exploitation Status: Not in CISA KEV

Affected Vendors

linux