author	Jens Axboe	2015-11-25 10:12:54 -0700
committer	Jens Axboe	2015-11-25 11:02:02 -0700
commit	d7cf931dd9f18ce8ee7a0a9b7813a19fb2c8f5e9 (patch)
tree	558ef3aca99cca15285cfd42f651857565303097 /block
parent	3b627a3f934c493ada71217f14681e5157e95783 (diff)
Revert "blk-flush: Queue through IO scheduler when flush not required"
This reverts commit 1b2ff19e6a957b1ef0f365ad331b608af80e932e.

Jan writes:

--

Thanks for the report! After some investigation I found out that we allocate elevator-specific data in __get_request() only for non-flush requests. And this is actually required, since the flush machinery uses that space in struct request for something else. Doh. So my patch is just wrong, and it is not easy to fix, since at the time __get_request() is called we are not sure whether the flush machinery will be used in the end. Jens, please revert 1b2ff19e6a957b1ef0f365ad331b608af80e932e. Thanks!

I'm somewhat surprised that you can reliably hit the race where flushing gets disabled for the device just while the request is in flight. But I guess during boot it makes some sense.

--

So let's just revert it; we can fix the queue run manually after the fact. The race is rare enough that it did not trigger in testing, since it requires the specific disable-while-in-flight scenario.
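For context, the reason the elevator hooks cannot be used for flush requests is the one Jan describes above: the elevator's per-request private data and the flush machinery's bookkeeping occupy the same storage in struct request. The following is a simplified sketch of that overlap, paraphrased from include/linux/blkdev.h of this era (field layout abbreviated, not a verbatim copy):

	struct request {
		/* ... */
		/*
		 * Flush requests are never put on the IO scheduler, so the
		 * elevator's private data and the flush state can share space.
		 */
		union {
			struct {
				struct io_cq	*icq;
				void		*priv[2];
			} elv;		/* owned by the IO scheduler */

			struct {
				unsigned int		seq;
				struct list_head	list;
				rq_end_io_fn		*saved_end_io;
			} flush;	/* owned by the flush machinery */
		};
		/* ... */
	};

Since __get_request() sets up the elv side only for non-flush requests, handing a flush-capable request to the elevator's add_req hook can end up interpreting flush state as scheduler data, hence the revert.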
Diffstat (limited to 'block')
-rw-r--r--	block/blk-flush.c	2
1 file changed, 1 insertion, 1 deletion
diff --git a/block/blk-flush.c b/block/blk-flush.c
index c81d56ec308f..9c423e53324a 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -422,7 +422,7 @@ void blk_insert_flush(struct request *rq)
 		if (q->mq_ops) {
 			blk_mq_insert_request(rq, false, false, true);
 		} else
-			q->elevator->type->ops.elevator_add_req_fn(q, rq);
+			list_add_tail(&rq->queuelist, &q->queue_head);
 		return;
 	}
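For readers without the full file at hand, here is a rough sketch of the surrounding fast path in blk_insert_flush() as it reads after this revert; the policy check is paraphrased from block/blk-flush.c of this kernel version and may differ slightly in detail:

	/*
	 * The request carries data but no pre- or post-flush is required,
	 * so it can bypass the flush state machine entirely.  With this
	 * revert, the legacy (non-mq) path puts it straight onto the
	 * queue's dispatch list instead of going through the IO
	 * scheduler's add_req hook.
	 */
	if ((policy & REQ_FSEQ_DATA) &&
	    !(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
		if (q->mq_ops) {
			blk_mq_insert_request(rq, false, false, true);
		} else
			list_add_tail(&rq->queuelist, &q->queue_head);
		return;
	}

As the commit message notes, any queue run needed after adding the request directly to the dispatch list is to be handled separately.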