Subject: Re: [PATCH next] sbitmap: fix lockup while swapping
On Mon, 26 Sep 2022, Jan Kara wrote:
> On Fri 23-09-22 16:15:29, Hugh Dickins wrote:
>
> I don't think any magic with sbq_index_atomic_inc() is going to reliably
> fix this. After all the current waitqueue may be the only one that has active
> waiters so sbq_wake_ptr() will always end up returning this waitqueue
> regardless of the current value of sbq->wake_index.
>
> Honestly, this whole code needs a serious redesign.

I was pleased to see you say so, Jan: I do agree.

> I have some
> simplifications in mind but it will take some thinking and benchmarking

I'm definitely not the right person to take it on, and glad if you can.
But I did have some thoughts and experiments over the weekend, and would
like to throw a couple of suggestions into the pot.

One, not a big issue, but I think sbq_index_atomic_inc() is misconceived.
It's unhelpful for multiple racers to be adjusting sbq->wake_index, and
	wake_index = ws - sbq->ws;
	atomic_cmpxchg(&sbq->wake_index, wake_index, sbq_index_inc(wake_index));
seems to me a better way for __sbq_wake_up() to increment it.
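
Spelled out a little more (a sketch of the idea only, with the surrounding
__sbq_wake_up() context omitted, so the exact placement is illustrative
rather than a tested patch):

	/*
	 * Sketch only: advance wake_index past the waitqueue we have just
	 * handled, but only if no other waker has moved it meanwhile, so
	 * concurrent wakers don't each bump it and skip over queues.
	 */
	int wake_index = ws - sbq->ws;

	atomic_cmpxchg(&sbq->wake_index, wake_index,
		       sbq_index_inc(wake_index));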

Two, and here the depths of my naivete and incomprehension may be on
display, but: I get the impression that __sbq_wake_up() is intended
to accumulate wake_batch-1 wakeups while doing nothing, then on the
wake_batch'th it hopes to do all those wake_batch wakeups. I assume
someone in the past has established that that's a safe way to proceed
here, though it's not obviously safe to me.
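
To make that concrete, here is my (quite possibly mistaken) mental model of
the batching scheme, reduced to pseudo-C rather than the actual code:

	/*
	 * Rough mental model only, not lib/sbitmap.c as it stands:
	 * each freed bit decrements wait_cnt and normally does nothing;
	 * the caller that brings it to zero issues the whole batch of
	 * wakeups and re-arms the counter for the next batch.
	 */
	if (atomic_dec_return(&ws->wait_cnt) == 0) {
		wake_up_nr(&ws->wait, wake_batch);
		atomic_set(&ws->wait_cnt, wake_batch);
	}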

Now, those !waitqueue_active(&ws->wait) checks are good for catching
when the hoped-for procedure has gone so "wrong" that there's actually
nothing to be woken on this ws (so proceed to the next); but they give
no clue as to when some, but not enough, of the wakeups have been done.

It is very easy to add a wake_up_nr_return() to kernel/sched/wait.c,
which returns the nr_exclusive still not woken (__wake_up_common_lock()
merely has to return nr_exclusive itself); and then __sbq_wake_up() can
be recalled until wake_batch have been woken (or all queues empty).
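
A hedged sketch of what I mean, assuming __wake_up_common_lock() is made to
return the nr_exclusive still not woken as described (the names and the
caller-side placement are illustrative, not a tested patch):

	/*
	 * In kernel/sched/wait.c: like wake_up_nr(), but report how many
	 * exclusive wakeups were still left undone.
	 */
	int wake_up_nr_return(struct wait_queue_head *wq_head, int nr)
	{
		return __wake_up_common_lock(wq_head, TASK_NORMAL, nr, 0, NULL);
	}

	/*
	 * In lib/sbitmap.c: keep moving to the next waitqueue until the
	 * whole batch has been woken, or no queue has active waiters.
	 */
	int not_woken = wake_batch;

	while (not_woken && (ws = sbq_wake_ptr(sbq)))
		not_woken = wake_up_nr_return(&ws->wait, not_woken);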

I do have an experiment running that way: but my testing is much too
limited to draw serious conclusions from, and I've already admitted
that I may just be misunderstanding the whole thing. But, just maybe,
a wake_up_nr_return() might be useful. End of those suggestions.

> so
> we need some fix for the interim. I was pondering for quite some time about
> some band aid to the problem you've found but didn't find anything
> satisfactory.
>
> In the end I see two options:
>
> 1) Take your patch (as wrong as it is ;). Yes, it can lead to lost wakeups
> but we were living with those for a relatively long time so probably we can
> live with them for some longer.

In getting that experiment above going, I did have to make the change
below: and it now looks to me better than my original patch, since this
one tries all SBQ_WAIT_QUEUES before giving up, whereas my first patch
gave up immediately in the waitqueue_active-but-!wait_cnt case.

--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -587,7 +587,7 @@ static struct sbq_wait_state *sbq_wake_p
 	for (i = 0; i < SBQ_WAIT_QUEUES; i++) {
 		struct sbq_wait_state *ws = &sbq->ws[wake_index];
 
-		if (waitqueue_active(&ws->wait)) {
+		if (waitqueue_active(&ws->wait) && atomic_read(&ws->wait_cnt)) {
 			if (wake_index != atomic_read(&sbq->wake_index))
 				atomic_set(&sbq->wake_index, wake_index);
 			return ws;

TBH I have not tested this one outside of that experiment: if you would
prefer this patch to my first one, I'll test it properly, sign it off and send it.

>
> 2) Revert Yu Kuai's original fix 040b83fcecfb8 ("sbitmap: fix possible io
> hung due to lost wakeup") and my fixup 48c033314f37 ("sbitmap: Avoid leaving
> waitqueue in invalid state in __sbq_wake_up()"). But then Keith would have
> to redo his batched accounting patches on top.

I know much too little to help make that choice.

Hugh
