Subject: Re: [RESEND RFC PATCH] epoll: autoremove wakers even more aggressively

On Thu, Jun 30, 2022 at 07:59:05AM -0700, Shakeel Butt wrote:
> On Wed, Jun 29, 2022 at 7:24 PM Andrew Morton <akpm@linux-foundation.org> wrote:
> >
> > On Wed, 29 Jun 2022 18:12:46 -0700 Shakeel Butt <shakeelb@google.com> wrote:
> >
> > > On Wed, Jun 29, 2022 at 4:55 PM Andrew Morton <akpm@linux-foundation.org> wrote:
> > > >
> > > > On Wed, 15 Jun 2022 14:24:23 -0700 Benjamin Segall <bsegall@google.com> wrote:
> > > >
> > > > > If a process is killed or otherwise exits while having active network
> > > > > connections and many threads waiting on epoll_wait, the threads will all
> > > > > be woken immediately, but not removed from ep->wq. Then when network
> > > > > traffic scans ep->wq in wake_up, every wakeup attempt will fail, and
> > > > > will not remove the entries from the list.
> > > > >
> > > > > This means that the cost of the wakeup attempt is far higher than usual,
> > > > > does not decrease, and this also competes with the dying threads trying
> > > > > to actually make progress and remove themselves from the wq.
> > > > >
> > > > > Handle this by removing visited epoll wq entries unconditionally, rather
> > > > > than only when the wakeup succeeds - the structure of ep_poll means that
> > > > > the only potential loss is the timed_out->eavail heuristic, which now
> > > > > can race and result in a redundant ep_send_events attempt. (But only
> > > > > when incoming data and a timeout actually race, not on every timeout)
> > > > >
> > > >
> > > > Thanks. I added people from 412895f03cbf96 ("epoll: atomically remove
> > > > wait entry on wake up") to cc. Hopefully someone there can help review
> > > > and maybe test this.
> > > >
> > > >
> > >
> > > Thanks Andrew. Just wanted to add that we are seeing this issue in
> > > production with real workloads and it has caused hard lockups.
> > > Particularly network heavy workloads with a lot of threads in
> > > epoll_wait() can easily trigger this issue if they get killed
> > > (oom-killed in our case).
> >
> > Hard lockups are undesirable. Is a cc:stable justified here?
>
> Not for now, as I don't know if we can blame a specific patch as the
> source of this behavior.

I am able to repro the epoll hard lockup on next-20220715 with Ben's
patch reverted. The repro is a simple TCP server and tens of clients
communicating over loopback. Though to cause the hard lockup I have to
create a couple of thousand threads in epoll_wait() in the server and
also reduce kernel.watchdog_thresh. With Ben's patch the repro does not
cause the hard lockup even with kernel.watchdog_thresh=1.
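
For reference, here is a minimal sketch of the kind of server-side repro
described above. It is not the actual reproducer: the port, thread count,
client side and error handling are illustrative assumptions only. The idea
is just to park many threads in epoll_wait() on one epoll instance so that
killing the process leaves a long ep->wq for loopback wakeups to scan.

/* Hypothetical repro sketch; build with: gcc -pthread repro.c */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

#define NTHREADS 2048	/* "a couple of thousand threads in epoll_wait()" */

static int epfd;

static void *waiter(void *arg)
{
	struct epoll_event ev;

	/* All threads block here; when the process is killed they are woken,
	 * but without the fix they stay on ep->wq until each one runs and
	 * removes itself, which is what loopback traffic then has to scan. */
	for (;;)
		epoll_wait(epfd, &ev, 1, -1);
	return NULL;
}

int main(void)
{
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(12345),	/* arbitrary port */
		.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
	};
	struct epoll_event ev = { .events = EPOLLIN };
	int listener, i;

	listener = socket(AF_INET, SOCK_STREAM, 0);
	bind(listener, (struct sockaddr *)&addr, sizeof(addr));
	listen(listener, 128);

	epfd = epoll_create1(0);
	ev.data.fd = listener;
	epoll_ctl(epfd, EPOLL_CTL_ADD, listener, &ev);

	for (i = 0; i < NTHREADS; i++) {
		pthread_t t;

		pthread_create(&t, NULL, waiter, NULL);
	}

	/* Clients hammer the loopback port; the server is then killed
	 * (e.g. oom-killed) while the threads sit in epoll_wait(). */
	pause();
	return 0;
}

With something like this running, a few dozen looping clients on the
loopback port, a lowered kernel.watchdog_thresh, and a kill -9 of the
server is roughly the scenario that triggered the lockup here.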

Please add:

Tested-by: Shakeel Butt <shakeelb@google.com>
