    Subject: Re: 2.6 vs 2.4, ssh terminal slowdown

    On Mon, 2006-02-13 at 03:43 -0500, Lee Revell wrote:
    > On Mon, 2006-02-13 at 08:08 +0100, Mike Galbraith wrote:
    > > On Mon, 2006-02-13 at 01:38 -0500, Lee Revell wrote:
    > > > Do you know which of those changes fixes the "ls" problem?
    > >
    > > No, it could be either, both, or neither. Heck, it _could_ be a
    > > combination of all of the things in my experimental tree for that
    > > matter. I put this patch out there because I know they're both bugs,
    > > and strongly suspect it'll cure the worst of the interactivity related
    > > delays.
    > >
    > > I'm hoping you'll test it and confirm that it fixes yours.
    >
    > Nope, this does not fix it. "time ls" ping-pongs back and forth between
    > ~0.1s and ~0.9s. Must have been something else in the first patch.

    Hmm. Thinking about it some more, it's probably more than this alone,
    but it could well be the boost qualifier I'm using...

    Instead of declaring a task to be deserving of large quantities of boost
    based upon its present shortage of sleep_avg, I based it upon its not
    using its full slice. He who uses the least gets the most. This
    made a large contribution to mitigating the parallel compile over NFS
    problem the current scheduler has. The fact that (current) heuristics
    which mandate that any task which sleeps for 5% of its slice may use
    95% cpu practically forever can not only work, but work quite well in
    the general case, tells me that the vast majority of all tasks are, and
    will forever remain, cpu hogs.
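
    To make the contrast concrete, here's a toy userspace sketch (not
    scheduler code; every name in it is made up) of the two qualifiers as
    described above: one hands out boost in proportion to how far a task's
    banked sleep_avg sits below its cap, the other in proportion to how
    much of its timeslice the task left unused. The numbers are arbitrary,
    just enough to show which task each rule favours.

    #include <stdio.h>

    #define SLICE_NS       100000000ULL   /* nominal 100ms timeslice    */
    #define MAX_SLEEP_AVG 1000000000ULL   /* cap on banked sleep credit */

    struct fake_task {
            unsigned long long sleep_avg;  /* banked "interactivity" credit */
            unsigned long long ran_ns;     /* cpu actually used this slice  */
    };

    /* current-style qualifier: the further below the cap, the bigger the boost */
    static unsigned long long boost_by_sleep_deficit(const struct fake_task *t)
    {
            return MAX_SLEEP_AVG - t->sleep_avg;
    }

    /* proposed qualifier: the less of its slice a task used, the bigger the boost */
    static unsigned long long boost_by_unused_slice(const struct fake_task *t)
    {
            return t->ran_ns < SLICE_NS ? SLICE_NS - t->ran_ns : 0;
    }

    int main(void)
    {
            /* a hog that sleeps just enough to keep its banked credit low */
            struct fake_task hog   = { .sleep_avg = 100000000ULL, .ran_ns = 95000000ULL };
            /* a task that barely touches the cpu at all */
            struct fake_task light = { .sleep_avg = 900000000ULL, .ran_ns =  2000000ULL };

            printf("deficit rule:      hog=%llu  light=%llu\n",
                   boost_by_sleep_deficit(&hog), boost_by_sleep_deficit(&light));
            printf("unused-slice rule: hog=%llu  light=%llu\n",
                   boost_by_unused_slice(&hog), boost_by_unused_slice(&light));
            return 0;
    }

    The deficit rule hands the hog the bigger number (its banked credit is
    low precisely because it burns cpu), while the unused-slice rule
    rewards the task that actually used the least.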

    The present qualifier creates positive feedback for cpu hogs by giving
    them the most reward for being the biggest hog by our own definition.
    If you'll pardon the pun, we give pigs wings, and hope that they don't
    actually use them and fly directly overhead. This is the root problem
    as I see it, that and the fact that even if sleep_avg acquisition and
    consumption were purely 1:1 as the original O(1) scheduler was, if you
    sleep 1 ns longer than you run, you'll eventually be up to your neck in
    sleep_avg. (a darn good reason to use something like slice_avg to help
    determine when to drain off the excess)
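
    As a toy illustration of that 1:1 runaway (again plain userspace, again
    made-up names, and with the 1 ns surplus scaled up to 1 us so it shows
    inside a short loop): one counter does pure gain-while-sleeping,
    pay-while-running accounting and simply climbs until it hits the cap,
    while the other is clamped against a slice_avg-style rolling average of
    the cpu actually consumed.

    #include <stdio.h>

    #define CAP 1000000000ULL             /* sleep_avg ceiling, in ns */

    int main(void)
    {
            const unsigned long long run_ns   = 1000000; /* 1ms cpu per cycle    */
            const unsigned long long sleep_ns = 1001000; /* sleeps 1us longer    */
            unsigned long long banked    = 0;            /* pure 1:1 accounting  */
            unsigned long long drained   = 0;            /* with slice_avg drain */
            unsigned long long slice_avg = 0;

            for (int i = 0; i < 2000000; i++) {
                    /* 1:1: gain while sleeping, pay while running */
                    banked  += sleep_ns - run_ns;
                    drained += sleep_ns - run_ns;
                    if (banked > CAP)
                            banked = CAP;

                    /* rolling average of the cpu this task really uses per cycle */
                    slice_avg = (slice_avg * 7 + run_ns) / 8;

                    /* drain anything beyond a small multiple of real usage */
                    if (drained > 4 * slice_avg)
                            drained = 4 * slice_avg;
            }
            printf("1:1 only:   %llu ns (pinned at the cap)\n", banked);
            printf("with drain: %llu ns (held near 4 * slice_avg)\n", drained);
            return 0;
    }

    With the drain in place the banked credit can't grow past a small
    multiple of what the task actually uses, no matter how much longer it
    sleeps than it runs.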

    Changing that qualifier would also mean that he who is _getting_ the
    least cpu would get the most boost as well, so it should help with
    fairness, and things like the test case mentioned in comments in the
    patch where one task can end up starving its own partner.

    Is there any reason that "he who uses the least gets the most" would be
    inferior to "he who has the least for whatever reason gets the most"?

    If I were to put a patch together that did only that (IMHO sensible)
    thing, would anyone be interested in trying it?

    -Mike

