    Subject: Re: [PATCH] blk-mq: rationalize plug
    Date: 2015-04-30

    Shaohua Li <shli@fb.com> writes:

    >> I like the general approach. I do wonder whether we should hold back a
    >> single I/O at all times, or whether we should do something similar to
    >> what Mike Snitzer did in the dm-mpath driver, where he keeps track of
    >> the end position of the last I/O (without holding the I/O back) to
    >> determine whether we're doing sequential I/O. With your approach, we
    >> will be able to get merges from the very start of a sequential I/O
    >> stream. With Mike's approach, you don't get merges until the second and
    >> third I/O. Mike's way you get potentially better latency for random I/O
    >> at the cost of some extra fields in a data structure. Something to
    >> think about.
    >
    > Sorry for the late reply, I've been busy these days.
    >
    > I'm not sure about this. If latency is sensitive, the caller really
    > should flush the plug immediately.

    Yeah, I don't have any benchmark numbers to tip the scales one way or
    the other.
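
    Just to illustrate for anyone following along, the "remember where the
    last bio ended" approach would look something like this (made-up names,
    just a sketch, not the actual dm-mpath code):

        #include <linux/bio.h>    /* struct bio, bio_end_sector() */

        /* hypothetical per-queue bookkeeping */
        struct seq_io_tracker {
            sector_t last_end_sector;
        };

        /*
         * Treat a bio as part of a sequential stream if it starts exactly
         * where the previous one ended.  Nothing is held back; we only
         * record the end position for the next check.
         */
        static bool bio_is_sequential(struct seq_io_tracker *t,
                                      struct bio *bio)
        {
            bool seq = (bio->bi_iter.bi_sector == t->last_end_sector);

            t->last_end_sector = bio_end_sector(bio);
            return seq;
        }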

    >> > @@ -1240,6 +1283,10 @@ static void blk_mq_make_request(struct request_queue *q, struct bio *bio)
    >> >          return;
    >> >      }
    >> >
    >> > +    if (use_plug && !blk_queue_nomerges(q) &&
    >> > +        blk_attempt_plug_merge(q, bio, &request_count))
    >> > +        return;
    >> > +
    >>
    >> You've just added merging for flush/fua requests. Was that intentional?
    >> We don't attempt them in the sq case, and we should probably be
    >> consistent.
    >
    > FLUSH/FUA is already in the no-merge bio flag list, so the merge will
    > not happen anyway. But I agree that checking it up front is faster.

    Actually, after looking at the patched code again, you didn't change
    that behaviour at all, so you can ignore this comment.
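
    For reference, the reason it's a non-issue is roughly the following
    (paraphrased from memory, so treat the exact flag set as approximate):
    FLUSH/FUA sits in the no-merge mask, and the merge path bails out on
    bio_mergeable() before it ever walks the plug list.

        #define REQ_NOMERGE_FLAGS \
            (REQ_NOMERGE | REQ_STARTED | REQ_SOFTBARRIER | \
             REQ_FLUSH | REQ_FUA | REQ_FLUSH_SEQ)

        static inline bool bio_mergeable(struct bio *bio)
        {
            if (bio->bi_rw & REQ_NOMERGE_FLAGS)
                return false;
            return true;
        }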

    >> > @@ -1314,7 +1358,7 @@ static void blk_sq_make_request(struct request_queue *q, struct bio *bio)
    >> >      * If we have multiple hardware queues, just go directly to
    >> >      * one of those for sync IO.
    >> >      */
    >> > -    use_plug = !is_flush_fua && !is_sync;
    >> > +    use_plug = !is_flush_fua;
    >> >
    >> >     blk_queue_bounce(q, &bio);
    >>
    >> For this part I'd rather use my patch. Would you mind rebasing your
    >> patch on top of the sq plugging patch I submitted?
    >
    > I'm OK with that, your patch looks better. I'll post some plug-related
    > patches soon and include your patch in the series if you don't mind. If
    > you'd rather post it yourself, you can add my review tag to your patch:
    >
    > Reviewed-by: Shaohua Li <shli@fb.com>

    Thanks, feel free to go ahead and post it with your series.

    >> One final question, sort of related to this patch. At the end of
    >> blk_mq_make_request, we have this:
    >>
    >> run_queue:
    >>     blk_mq_run_hw_queue(data.hctx, !is_sync || is_flush_fua);
    >>
    >> So, we don't run the queue synchronously for flush/fua. Is that logic
    >> right? Why don't we want to push that out immediately?
    >
    > I'm not sure. My theory is that delaying flush/fua increases the chance
    > that the flush logic merges multiple flushes into a single flush; please
    > see the flush handling logic. But this is just my thought, and it might
    > be completely wrong.

    Sure. I don't see a need to change the behaviour in this patch(set); I
    was just curious to hear the reasoning behind it.
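
    For reference, that's this call, with the above (unverified) reasoning
    spelled out as a comment -- same code, just annotated:

        /*
         * The second argument is "async": it evaluates to true for
         * flush/fua, so the queue is kicked from the workqueue instead of
         * being run inline.  The theory above is that the small delay
         * gives the flush machinery a chance to fold several pending
         * flushes into a single FLUSH to the device.
         */
        blk_mq_run_hw_queue(data.hctx, !is_sync || is_flush_fua);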

    Cheers,
    Jeff

