    Date: 2012-04-03
    From: Tao Ma
    Subject: Re: IOPS based scheduler (Was: Re: [PATCH 18/21] blkcg: move blkio_group_conf->weight to cfq)
    Adding Shaohua to the cc list.
    On 04/03/2012 11:37 PM, Vivek Goyal wrote:
    > On Tue, Apr 03, 2012 at 06:41:37AM +0800, Tao Ma wrote:
    >> On 04/03/2012 06:25 AM, Vivek Goyal wrote:
    >>> On Tue, Apr 03, 2012 at 06:20:10AM +0800, Tao Ma wrote:
    >>>
    >>> [..]
    >>>>> Yeah, just add config and stat files prefixed with the name of the new
    >>>>> blkcg policy.
    >>>> OK, I will add a new config file for it.
    >>>
    >>> If only CFQ could be modified to add an iops mode, flippable through a
    >>> sysfs tunable, things would be much simpler. You would not have to add a
    >>> new IO scheduler, and there would be no new configuration/stat files in
    >>> blkcg (which is already crowded).
    >>>
    >>> I don't think anybody has shown, with code, why CFQ can't be modified
    >>> to support an iops mode.
    >> Yes, I have thought about it, but it seems to me that the time slice is
    >> deeply embedded in cfq (even cfq's current iops mode uses the time slice
    >> in its calculations), so I don't think it is feasible for me to change it.
    >> Besides, cfq works perfectly well in sas/sata environments and the code is
    >> quite stable; more code and a more complicated algorithm do mean more bugs.
    >> So I think a new iops based scheduler is easier and not intrusive for the
    >> user (since he can choose whether or not to use it).
    >
    > Ok, let me take one step back.
    >
    > - What's the goal of an iops based scheduler? For what kind of workload
    > and storage is it going to help?
    >
    > - Can't we just set slice_idle=0 and "quantum" to some high value,
    > say "64" or "128", and achieve results similar to an iops based scheduler?
    Yes, I should say that cfq with slice_idle = 0 works well in most cases, but
    when it comes to blkcg on an ssd it is really a disaster. cfq has to switch
    between the different cgroups, so even if you choose 1ms as the service time
    for each cgroup (actually, in my tests only >2ms works reliably), the latency
    for some requests (which the user has already issued but which have not yet
    been submitted to the driver) is far too high for the application. I don't
    think there is a way to resolve that within cfq.
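
    (For reference, the tuning Vivek suggests above corresponds to two existing
    CFQ iosched knobs under /sys/block/<dev>/queue/iosched/. Below is a minimal
    user-space sketch of applying it; the device name "sda" and the quantum
    value 64 are only assumptions for illustration, and cfq must be the active
    scheduler on that device.)

    #include <stdio.h>

    static int write_tunable(const char *path, const char *val)
    {
            FILE *f = fopen(path, "w");

            if (!f) {
                    perror(path);
                    return -1;
            }
            fprintf(f, "%s\n", val);
            return fclose(f);
    }

    int main(void)
    {
            /* Stop idling between queues: fairness is no longer time based. */
            write_tunable("/sys/block/sda/queue/iosched/slice_idle", "0");
            /* Let each queue dispatch many more requests per round, e.g. 64. */
            write_tunable("/sys/block/sda/queue/iosched/quantum", "64");
            return 0;
    }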

    >
    > In theory, the above will cut down on idling and try to provide fairness
    > in terms of time. I thought fairness in terms of time was the fairest. The
    > most common problem is that the measured time is not attributable to an
    > individual queue on NCQ hardware. I guess that throws time measurement
    > out of the window unless and until we have a better algorithm to measure
    > time in an NCQ environment.
    >
    > I guess then we can just replace time with the number of requests
    > dispatched from a process queue. Allow it to dispatch requests for some
    > time, then schedule it out, put it back on the service tree and charge it
    > according to its weight.
    As I have said, in that case the minimal time slice (1ms) multiplied by the
    number of groups is already too much for an ssd.

    If we can use an iops based scheduler, we can give each cgroup an
    iops_weight and switch between cgroups according to that number, so every
    application gets a moderate response time that can be estimated in advance
    (a sketch follows below).
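
    (To make the idea concrete -- both Vivek's "charge by requests dispatched"
    and the per-cgroup iops_weight -- here is a rough sketch. All names below,
    such as iops_group, iops_charge and iops_pick, are made up for illustration
    and are not from Shaohua's patches: each cgroup keeps a weighted count of
    the requests it has dispatched, and the scheduler always serves the cgroup
    with the smallest count next.)

    #include <stddef.h>

    struct iops_group {
            unsigned int            iops_weight;    /* per-cgroup weight, > 0 */
            unsigned long long      vios;           /* weighted requests dispatched */
    };

    /* Charge one dispatched request; a group with twice the weight advances
     * half as fast and therefore ends up receiving twice the iops. */
    static void iops_charge(struct iops_group *grp)
    {
            grp->vios += 1000ULL / grp->iops_weight;
    }

    /* Pick the group that has consumed the least weighted service so far. */
    static struct iops_group *iops_pick(struct iops_group *groups, size_t nr)
    {
            struct iops_group *best = NULL;
            size_t i;

            for (i = 0; i < nr; i++)
                    if (!best || groups[i].vios < best->vios)
                            best = &groups[i];
            return best;
    }

    (In such a scheme the wait between two visits to the same group is bounded
    by a number of requests rather than by a sum of per-group time slices, which
    is what makes the response time estimable.)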

    btw, I talked with Shaohua at LSF and we reached a consensus that I will
    continue his work and try to add cgroup support to it.

    Thanks
    Tao
    >
    > This all works only if we have the right workload: workloads which are
    > not doing dependent reads and can keep the disk busy continuously. If
    > there is think time involved and we do not idle, a process will lose its
    > share and the whole scheme of trying to differentiate between processes
    > becomes ineffective.
    >
    > So if you have come up with a better algorithm which can keep track of
    > iops without idling and still provide service differentiation for common
    > workloads, it will be interesting.
    >
    > Thanks
    > Vivek