From: Ming Lei
Date: Fri, 3 Mar 2023 23:13:45 +0000 (+0800)
Subject: nvme: fix handling single range discard request
X-Git-Tag: baikal/mips/sdk6.1~139
X-Git-Url: https://git.baikalelectronics.ru/sdk/?a=commitdiff_plain;h=057a9b617207b79ab60034bc6ebf4d8bf5bfdfaa;p=kernel.git

nvme: fix handling single range discard request

[ Upstream commit e3330553770d67f16059c5a8064ba80bf482293b ]

While investigating a customer report of the warning in nvme_setup_discard,
we observed that the controller (nvme/tcp) actually exposes
queue_max_discard_segments(req->q) == 1.

The current code can't handle this situation: contiguous discard bios are
merged into the request like normal read/write requests, so iterating the
bios yields more ranges than the single advertised segment and trips the
WARN_ON_ONCE(n != segments) check.

Fix the issue by building the range from the request's sector/nr_sectors
directly.

Fixes: 8673d5f203c4 ("nvme: support ranged discard requests")
Signed-off-by: Ming Lei
Reviewed-by: Chaitanya Kulkarni
Signed-off-by: Christoph Hellwig
Signed-off-by: Sasha Levin
---

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 2031fd960549c..a95e48b51da66 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -779,16 +779,26 @@ static blk_status_t nvme_setup_discard(struct nvme_ns *ns, struct request *req,
 		range = page_address(ns->ctrl->discard_page);
 	}
 
-	__rq_for_each_bio(bio, req) {
-		u64 slba = nvme_sect_to_lba(ns, bio->bi_iter.bi_sector);
-		u32 nlb = bio->bi_iter.bi_size >> ns->lba_shift;
-
-		if (n < segments) {
-			range[n].cattr = cpu_to_le32(0);
-			range[n].nlb = cpu_to_le32(nlb);
-			range[n].slba = cpu_to_le64(slba);
+	if (queue_max_discard_segments(req->q) == 1) {
+		u64 slba = nvme_sect_to_lba(ns, blk_rq_pos(req));
+		u32 nlb = blk_rq_sectors(req) >> (ns->lba_shift - 9);
+
+		range[0].cattr = cpu_to_le32(0);
+		range[0].nlb = cpu_to_le32(nlb);
+		range[0].slba = cpu_to_le64(slba);
+		n = 1;
+	} else {
+		__rq_for_each_bio(bio, req) {
+			u64 slba = nvme_sect_to_lba(ns, bio->bi_iter.bi_sector);
+			u32 nlb = bio->bi_iter.bi_size >> ns->lba_shift;
+
+			if (n < segments) {
+				range[n].cattr = cpu_to_le32(0);
+				range[n].nlb = cpu_to_le32(nlb);
+				range[n].slba = cpu_to_le64(slba);
+			}
+			n++;
 		}
-		n++;
 	}
 
 	if (WARN_ON_ONCE(n != segments)) {
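
For reference, the new single-range branch converts the request length from
512-byte sectors to namespace LBAs with blk_rq_sectors(req) >> (ns->lba_shift - 9):
blk_rq_sectors() reports the length in 512-byte units, while the namespace
addresses data in blocks of 1 << lba_shift bytes, so bytes >> lba_shift equals
sectors >> (lba_shift - 9). A minimal stand-alone sketch of that arithmetic,
with hypothetical example values (ordinary user-space C, not kernel code):

/*
 * Stand-alone illustration only -- not kernel code.  The variable names
 * and example values (sectors, lba_shift) are hypothetical.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t sectors   = 2048;	/* request length in 512-byte sectors (1 MiB) */
	uint32_t lba_shift = 12;	/* log2 of the namespace LBA size (4 KiB)      */

	/* bytes >> lba_shift == (sectors << 9) >> lba_shift
	 *                    == sectors >> (lba_shift - 9)                          */
	uint32_t nlb = sectors >> (lba_shift - 9);

	printf("%u sectors of 512 B -> %u LBAs of %u B\n",
	       sectors, nlb, 1u << lba_shift);
	return 0;
}

With a 1 MiB discard (2048 sectors) and 4 KiB LBAs (lba_shift == 12) this
yields nlb == 256, which is the value the patch would place in range[0].nlb
for the single-segment case.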