author | Li Nan | 2023-02-22 12:10:00 +0800
---|---|---
committer | Greg Kroah-Hartman | 2023-05-11 23:03:39 +0900
commit | b50fd1c3d9d0175aa29ff2706ef36cc178bc356a (patch) |
tree | 4777f93951b52e9b06a5d453426dad603dec92da /drivers |
parent | acffdf1a7fe495dacdd30a5377bd29f813ee2224 (diff) |
md/raid10: fix null-ptr-deref in raid10_sync_request
commit a405c6f0229526160aa3f177f65e20c86fce84c5 upstream.
init_resync() initializes the mempool and sets conf->have_replacement at the
beginning of a sync; close_sync() frees the mempool when the sync completes.
After [1], recovery may be skipped, so init_resync() is called but
close_sync() is not. A null-ptr-deref then occurs on r10bio->dev[i].repl_bio.
The following is one way to reproduce the issue.
1) Create an array and wait for resync to complete; mddev->recovery_cp is set
to MaxSector.
2) Recovery is woken and is skipped. conf->have_replacement is set to
0 in init_resync(); close_sync() is not called.
3) Some I/O errors occur and rdev A is set to WantReplacement.
4) A new device is added and set as A's replacement.
5) Recovery is woken. A now has a replacement, but conf->have_replacement is
still 0, so r10bio->dev[i].repl_bio is not allocated and the null-ptr-deref
occurs (see the sketch below).
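To make the ordering concrete, below is a hypothetical user-space model of
steps 1)-5), not kernel code: the struct, alloc_resync_buffers() and the final
write are illustrative stand-ins, and only the names init_resync(),
have_replacement and repl_bio are borrowed from drivers/md/raid10.c. Because
the mempool from the skipped recovery stays initialized, init_resync() never
re-samples the replacement state, and the write through repl_bio faults:

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

struct bio { int opf; };

struct conf {
        bool pool_initialized;     /* stands in for mempool_initialized()  */
        bool have_replacement;     /* snapshot taken by init_resync()      */
        bool replacement_present;  /* current state of the array           */
        struct bio *bio, *repl_bio;
};

static void init_resync(struct conf *c)
{
        c->have_replacement = c->replacement_present;   /* snapshot once */
        c->pool_initialized = true;
}

static void alloc_resync_buffers(struct conf *c)
{
        c->bio = calloc(1, sizeof(*c->bio));
        if (c->have_replacement)        /* stale snapshot gates this branch */
                c->repl_bio = calloc(1, sizeof(*c->repl_bio));
}

int main(void)
{
        struct conf c = { 0 };

        /* 1)+2) clean array: recovery is woken and skipped, but the old
         * ordering has already run init_resync(); close_sync() never runs. */
        init_resync(&c);                /* have_replacement snapshotted as 0 */

        /* 3)+4) I/O errors, then a replacement device is added for A. */
        c.replacement_present = true;

        /* 5) recovery is woken again: the pool is still initialized, so
         * init_resync() is not re-run and the stale snapshot is reused. */
        if (!c.pool_initialized)
                init_resync(&c);
        alloc_resync_buffers(&c);

        printf("writing to the replacement device\n");
        c.repl_bio->opf = 1;            /* NULL dereference, as in the bug */
        return 0;
}
```

Compiled and run, the model crashes on the last assignment, mirroring the
null-ptr-deref hit in raid10_sync_request().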
Fix it by not calling init_resync() if recovery is skipped.
[1] commit 7e83ccbecd60 ("md/raid10: Allow skipping recovery when clean arrays are assembled")
Fixes: 7e83ccbecd60 ("md/raid10: Allow skipping recovery when clean arrays are assembled")
Cc: stable@vger.kernel.org
Signed-off-by: Li Nan <linan122@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230222041000.3341651-3-linan666@huaweicloud.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'drivers')
-rw-r--r-- | drivers/md/raid10.c | 8 |
1 file changed, 4 insertions, 4 deletions
```diff
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index a519f8467dad..67398394cc9c 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -3302,10 +3302,6 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
         sector_t chunk_mask = conf->geo.chunk_mask;
         int page_idx = 0;
 
-        if (!mempool_initialized(&conf->r10buf_pool))
-                if (init_resync(conf))
-                        return 0;
-
         /*
          * Allow skipping a full rebuild for incremental assembly
          * of a clean array, like RAID1 does.
@@ -3321,6 +3317,10 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
                 return mddev->dev_sectors - sector_nr;
         }
 
+        if (!mempool_initialized(&conf->r10buf_pool))
+                if (init_resync(conf))
+                        return 0;
+
  skipped:
         max_sector = mddev->dev_sectors;
         if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery) ||
```