 
Subject: Re: [BUG] kernel freezes with latest tree
From: Peter Zijlstra <a.p.zijlstra@chello.nl>
    On Wed, 2012-01-11 at 16:56 +0100, Ingo Molnar wrote:

> Well, what happens if every CPU runs load_balance() and we keep
> triggering:
>
>	if (loops++ > sysctl_sched_nr_migrate) {
>		*lb_flags |= LBF_NEED_BREAK;
>		break;
>	}
>
> in this case load_balance() will do the retry:
>
>	if (lb_flags & LBF_NEED_BREAK) {
>		lb_flags &= ~LBF_NEED_BREAK;
>		goto redo;
>	}
>
> but the retry starts the loop again:
>
>	list_for_each_entry_safe(p, n, &busiest_cfs_rq->tasks, se.group_node) {
>
> so nobody is able to make progress: livelock/lockup.

    Ah, right! Silly me. One possibility is to rotate that list, except that
    won't work for the cgroup case where we have another iteration.
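
For reference, the rotation would have looked something like the below
(untested sketch; and as said, it doesn't help the cgroup case, where
the outer iteration restarts from the top anyway). Moving the list
head to just before p makes p the first task the redo pass sees:

	if (loops++ > sysctl_sched_nr_migrate) {
		*lb_flags |= LBF_NEED_BREAK;
		/* rotate: park the list head right before p so the
		 * next walk starts here instead of rescanning the
		 * same prefix we just covered */
		list_move_tail(&busiest_cfs_rq->tasks, &p->se.group_node);
		break;
	}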

    OK, here's an updated patch..

    ---
    Subject: sched: Limit load-balance retries on lock-break
    From: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Date: Wed Jan 11 13:11:12 CET 2012

Eric and David reported dead machines and traced it to commit a195f004
("sched: Fix load-balance lock-breaking"); it turns out there's still a
scenario where we can end up retrying forever.

Since there is no strict forward progress guarantee in the
load-balance iteration, we can get stuck retrying the same task-set
over and over.

Creating a forward progress guarantee with the existing structure is
somewhat non-trivial; for now, simply terminate the retry loop after a
few tries.

    Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
    Reported-by: David Ahern <dsahern@gmail.com>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    [eric: logic cleanup]
    Tested-by: Eric Dumazet <eric.dumazet@gmail.com>
    Link: http://lkml.kernel.org/n/tip-ya9m8grb9wfc26uqnviq2wjq@git.kernel.org
    ---
 kernel/sched/fair.c |   10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3130,8 +3130,10 @@ task_hot(struct task_struct *p, u64 now,
 }
 
 #define LBF_ALL_PINNED	0x01
-#define LBF_NEED_BREAK	0x02
-#define LBF_ABORT	0x04
+#define LBF_NEED_BREAK	0x02	/* clears into HAD_BREAK */
+#define LBF_HAD_BREAK	0x04
+#define LBF_HAD_BREAKS	0x0C	/* count HAD_BREAKs overflows into ABORT */
+#define LBF_ABORT	0x10
 
 /*
  * can_migrate_task - may task p from runqueue rq be migrated to this_cpu?
@@ -4508,7 +4510,9 @@ static int load_balance(int this_cpu, st
 		goto out_balanced;
 
 	if (lb_flags & LBF_NEED_BREAK) {
-		lb_flags &= ~LBF_NEED_BREAK;
+		lb_flags += LBF_HAD_BREAK - LBF_NEED_BREAK;
+		if (lb_flags & LBF_ABORT)
+			goto out_balanced;
 		goto redo;
 	}
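
The flag arithmetic above is a tiny retry counter: adding
LBF_HAD_BREAK - LBF_NEED_BREAK clears NEED_BREAK (bit 1) and bumps the
two-bit HAD_BREAK counter in bits 2-3 (LBF_HAD_BREAKS); the fourth
break carries into bit 4, which is LBF_ABORT. A standalone userspace
sketch (illustration only, not part of the patch) that steps through
the values:

	#include <stdio.h>

	#define LBF_NEED_BREAK	0x02	/* clears into HAD_BREAK */
	#define LBF_HAD_BREAK	0x04
	#define LBF_HAD_BREAKS	0x0C	/* count HAD_BREAKs overflows into ABORT */
	#define LBF_ABORT	0x10

	int main(void)
	{
		unsigned int lb_flags = 0;
		int i;

		for (i = 1; i <= 4; i++) {
			lb_flags |= LBF_NEED_BREAK;	/* inner loop broke out */
			/* clear NEED_BREAK, carry into the HAD_BREAK counter */
			lb_flags += LBF_HAD_BREAK - LBF_NEED_BREAK;
			printf("break %d: lb_flags=0x%02x abort=%d\n",
			       i, lb_flags, !!(lb_flags & LBF_ABORT));
		}
		return 0;
	}

This prints 0x04, 0x08, 0x0c and finally 0x10 (LBF_ABORT), at which
point the retry path takes out_balanced instead of redo, so we give up
after three redo passes.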


