diff options
| author | Chaitanya Kulkarni <kch@nvidia.com> | 2026-02-11 12:49:44 -0800 |
|---|---|---|
| committer | Jens Axboe <axboe@kernel.dk> | 2026-02-12 04:23:31 -0700 |
| commit | 81e7223b1a2d63b655ee72577c8579f968d037e3 (patch) | |
| tree | 4a0ed061f6ab14f2465b60c4b1e2574efe5fa5d7 /block | |
| parent | 5991bfa3f88ec8d67fa3f552c19c39ff37a4e67b (diff) | |
block: fix partial IOVA mapping cleanup in blk_rq_dma_map_iova
When dma_iova_link() fails partway through mapping a request's bvec
list, the function breaks out of the loop without cleaning up
already mapped segments. Similarly, if dma_iova_sync() fails after
linking all segments, no cleanup is performed.
This leaves partial IOVA mappings in place. The completion path
attempts to unmap the full expected size via dma_iova_destroy() or
nvme_unmap_data(), but only a partial size was actually mapped,
leading to incorrect unmap operations.
Add an out_unlink error path that calls dma_iova_destroy() to clean
up partial mappings before returning failure. The dma_iova_destroy()
function handles both partial unlink and IOVA space freeing. It
correctly handles the mapped_len == 0 case (first dma_iova_link()
failure) by only freeing the IOVA allocation without attempting to
unmap.
Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'block')
| -rw-r--r-- | block/blk-mq-dma.c | 13 |
1 file changed, 8 insertions, 5 deletions
```diff
diff --git a/block/blk-mq-dma.c b/block/blk-mq-dma.c
index 3c87779cdc19..bfdb9ed70741 100644
--- a/block/blk-mq-dma.c
+++ b/block/blk-mq-dma.c
@@ -121,17 +121,20 @@ static bool blk_rq_dma_map_iova(struct request *req, struct device *dma_dev,
 		error = dma_iova_link(dma_dev, state, vec->paddr, mapped,
 				vec->len, dir, attrs);
 		if (error)
-			break;
+			goto out_unlink;
 		mapped += vec->len;
 	} while (blk_map_iter_next(req, &iter->iter, vec));
 
 	error = dma_iova_sync(dma_dev, state, 0, mapped);
-	if (error) {
-		iter->status = errno_to_blk_status(error);
-		return false;
-	}
+	if (error)
+		goto out_unlink;
 	return true;
+
+out_unlink:
+	dma_iova_destroy(dma_dev, state, mapped, dir, attrs);
+	iter->status = errno_to_blk_status(error);
+	return false;
 }
 
 static inline void blk_rq_map_iter_init(struct request *rq,
```
