Date:    Tue, 17 Nov 2020
From:    Mel Gorman <mgorman@techsingularity.net>
Subject: Re: [PATCH] sched: Fix rq->nr_iowait ordering
    On Tue, Nov 17, 2020 at 10:38:29AM +0100, Peter Zijlstra wrote:
    > Subject: sched: Fix rq->nr_iowait ordering
    > From: Peter Zijlstra <peterz@infradead.org>
    > Date: Thu, 24 Sep 2020 13:50:42 +0200
    >
    > 	schedule()                              ttwu()
    > 	  deactivate_task();                      if (p->on_rq && ...) // false
    > 	                                            atomic_dec(&task_rq(p)->nr_iowait);
    > 	  if (prev->in_iowait)
    > 	    atomic_inc(&rq->nr_iowait);
    >
    > Allows nr_iowait to be decremented before it gets incremented,
    > resulting in more dodgy IO-wait numbers than usual.
    >
    > Note that because we can now do ttwu_queue_wakelist() before
    > p->on_cpu==0, we lose the natural ordering and have to further delay
    > the decrement.
    >
    > Fixes: Fixes: c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu")
    > Reported-by: Tejun Heo <tj@kernel.org>
    > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

    s/Fixes: Fixes:/Fixes:/

    Ok, there is a very minor hazard in that the same logic gets duplicated
    and someone might later try to "fix" the duplication, but git blame
    should help. Otherwise, it makes sense: I've received more than one
    "bug" report complaining that a number was larger than expected even
    when no other problem was present, so

    Acked-by: Mel Gorman <mgorman@techsingularity.net>
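
    For anyone who wants to poke at the window in isolation, below is a
    minimal user-space model of the race (a sketch only; the names mirror
    the kernel's rq->nr_iowait and the on_cpu ordering, but none of this
    is the kernel code). If the wait in waker() is removed, the decrement
    can overtake the increment, and a sampler reading the counter in
    between sees it dip below its true value:

    /*
     * User-space model of the nr_iowait race (illustration only).
     * sleeper() plays the schedule() side: it accounts the task as
     * IO-waiting.  waker() plays the ttwu() side: it drops the count
     * when the task is woken.  The wait on task_blocked stands in for
     * the ordering the patch restores; delete it to reopen the race.
     */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int nr_iowait;     /* models rq->nr_iowait */
    static atomic_int task_blocked;  /* models the lost ordering */

    static void *sleeper(void *arg)  /* schedule() side */
    {
        (void)arg;
        atomic_fetch_add(&nr_iowait, 1);  /* prev->in_iowait: inc */
        atomic_store(&task_blocked, 1);   /* publish: wakeup may proceed */
        return NULL;
    }

    static void *waker(void *arg)    /* ttwu() side */
    {
        (void)arg;
        /* Delay the decrement until the increment is guaranteed to
         * have happened; this is the ordering the patch arranges. */
        while (!atomic_load(&task_blocked))
            ;
        atomic_fetch_sub(&nr_iowait, 1);
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, sleeper, NULL);
        pthread_create(&b, NULL, waker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("nr_iowait = %d\n", atomic_load(&nr_iowait));
        return 0;
    }

    Build with "cc -std=c11 -pthread"; with the wait in place the final
    count is always 0, which is the invariant the patch restores.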

    --
    Mel Gorman
    SUSE Labs
