Subject: Re: [PATCH next] sbitmap: fix lockup while swapping
On Sat, 24 Sep 2022, Hillf Danton wrote:
>
> I think the lockup can be avoided by
> a) advancing wake_index as early as possible [1], or
> b) doing a wakeup whenever wait_cnt is zero, to kill all cases of waitqueue_active().
>
> Just food for thought for now.

Thanks Hillf: I gave your __sbq_wake_up() patch below several tries,
and as far as I could tell, it works just as well as my one-liner.

But I don't think it's what we would want to do: doesn't it increment
wake_index on every call to __sbq_wake_up()? I thought it was intended
to be incremented only after wake_batch calls (thinking in terms of
nr == 1).
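
To make concrete what I had in mind, a rough sketch (mine, not the
actual sbitmap code, and assuming the existing sbq_index_atomic_inc()
helper): advance wake_index once per exhausted batch, not once per call:

        if (atomic_dec_return(&ws->wait_cnt) == 0) {
                /*
                 * This waitqueue's batch is used up: move on to the
                 * next waitqueue before refilling, so concurrent
                 * callers don't wake on this stale one.
                 */
                sbq_index_atomic_inc(&sbq->wake_index);
                atomic_set(&ws->wait_cnt, wake_batch);
        }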

I'll not be surprised if your advance-wake_index-earlier idea ends
up as a part of the solution: but mainly I agree with Jan that the
whole code needs a serious redesign (or perhaps the whole design
needs a serious recode). So I didn't give your version more thought.

Hugh

>
> Hillf
>
> [1] https://lore.kernel.org/lkml/afe5b403-4e37-80fd-643d-79e0876a7047@linux.alibaba.com/
>
> +++ b/lib/sbitmap.c
> @@ -613,6 +613,16 @@ static bool __sbq_wake_up(struct sbitmap
>          if (!ws)
>                  return false;
> 
> +        do {
> +                /* open code sbq_index_atomic_inc(&sbq->wake_index) to avoid race */
> +                int old = atomic_read(&sbq->wake_index);
> +                int new = sbq_index_inc(old);
> +
> +                /* try another ws if someone else takes care of this one */
> +                if (old != atomic_cmpxchg(&sbq->wake_index, old, new))
> +                        return true;
> +        } while (0);
> +
>          cur = atomic_read(&ws->wait_cnt);
>          do {
>                  /*
> @@ -620,7 +630,7 @@ static bool __sbq_wake_up(struct sbitmap
>                   * function again to wakeup a new batch on a different 'ws'.
>                   */
>                  if (cur == 0)
> -                        return true;
> +                        goto out;
>                  sub = min(*nr, cur);
>                  wait_cnt = cur - sub;
>          } while (!atomic_try_cmpxchg(&ws->wait_cnt, &cur, wait_cnt));
> @@ -634,6 +644,7 @@ static bool __sbq_wake_up(struct sbitmap
> 
>          *nr -= sub;
> 
> +out:
>          /*
>           * When wait_cnt == 0, we have to be particularly careful as we are
>           * responsible to reset wait_cnt regardless whether we've actually
> @@ -661,12 +672,6 @@ static bool __sbq_wake_up(struct sbitmap
>           */
>          smp_mb__before_atomic();
> 
> -        /*
> -         * Increase wake_index before updating wait_cnt, otherwise concurrent
> -         * callers can see valid wait_cnt in old waitqueue, which can cause
> -         * invalid wakeup on the old waitqueue.
> -         */
> -        sbq_index_atomic_inc(&sbq->wake_index);
>          atomic_set(&ws->wait_cnt, wake_batch);
> 
>          return ret || *nr;
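
p.s. For anyone following along, the helpers involved, as I recall them
reading in the tree (quoted from memory, so check before relying on it):

        /* include/linux/sbitmap.h: wrap the index round the wait queues */
        static inline int sbq_index_inc(int index)
        {
                return (index + 1) & (SBQ_WAIT_QUEUES - 1);
        }

        /* lib/sbitmap.c: the helper open-coded in the patch above */
        static inline void sbq_index_atomic_inc(atomic_t *index)
        {
                unsigned int old = atomic_read(index);
                unsigned int new = sbq_index_inc(old);
                atomic_cmpxchg(index, old, new);
        }

So, if I'm reading it right, the added do/while(0) block is that same
cmpxchg with one difference: a caller which loses the race returns true
instead of carrying on, so its caller retries and lands on the next ws.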
