Subject: Re: [RFC PATCH] mm: silence soft lockups from unlock_page
From: Michal Hocko <mhocko@kernel.org>
Date: Tue, 21 Jul 2020

On Tue 21-07-20 09:23:44, Qian Cai wrote:
> On Tue, Jul 21, 2020 at 02:17:52PM +0200, Michal Hocko wrote:
> > On Tue 21-07-20 07:44:07, Qian Cai wrote:
> > >
> > >
> > > > On Jul 21, 2020, at 7:25 AM, Michal Hocko <mhocko@kernel.org> wrote:
> > > >
> > > > Are these really important? I believe I can dig that out from the bug
> > > > report but I didn't really consider that important enough.
> > >
> > > Please dig them out. We have been running those things on
> > > “large” powerpc as well and never saw such soft-lockups. Those
> > > details may give us some clues about the actual problem.
> >
> > I strongly suspect this is not really relevant but just FYI this is
> > a 16-node, 11.9TB system with 1536 CPUs.
>
> Okay, we are now talking about the HPC special case. Just brainstorming
> some ideas here.
>
>
> 1) What about increasing the soft-lockup threshold early at boot and
> restoring it afterwards? As far as I can tell, those soft-lockups are
> just a few bursts that cure themselves once booting has finished.

Is this really a better option than silencing the soft lockup from the
code itself? What if the same access pattern happens later on?
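
For reference, the knob in question is kernel.watchdog_thresh. A
minimal userspace sketch of what 1) would amount to; the 60s value and
the placement in an early boot script are illustrative assumptions, not
a tested recipe:

	#include <stdio.h>

	/* Write a new soft-lockup threshold (in seconds) to the sysctl. */
	static int write_thresh(int secs)
	{
		FILE *f = fopen("/proc/sys/kernel/watchdog_thresh", "w");

		if (!f)
			return -1;
		fprintf(f, "%d\n", secs);
		return fclose(f);
	}

	int main(void)
	{
		write_thresh(60);	/* early boot: tolerate long bursts */
		/* ... the boot-time burst happens here ... */
		write_thresh(10);	/* afterwards: restore the 10s default */
		return 0;
	}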

> 2) Reading through the comments above page_waitqueue(), it says rare
> hash collisions could happen, so it sounds like this HPC case makes it
> rather easy to hit those hash collisions. Thus, do we need to deal with
> that instead?

As all of those seem to be the same class of process, I suspect it is
more likely that many processes are hitting a page fault on the same
file page, e.g. code or a shared library.
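
For reference, whether it is a real collision or many waiters on the
very same page, everything funnels into one of only 256 shared queues.
Roughly, from mm/filemap.c as it looks these days (comments mine):

	/* All pages in the system share these 256 wait queue heads. */
	#define PAGE_WAIT_TABLE_BITS 8
	#define PAGE_WAIT_TABLE_SIZE (1 << PAGE_WAIT_TABLE_BITS)
	static wait_queue_head_t page_wait_table[PAGE_WAIT_TABLE_SIZE]
		__cacheline_aligned;

	static wait_queue_head_t *page_waitqueue(struct page *page)
	{
		/* Distinct pages can hash to the same queue head. */
		return &page_wait_table[hash_ptr(page, PAGE_WAIT_TABLE_BITS)];
	}

So a long walk can come from a hash collision, but thousands of tasks
faulting on one hot page look exactly the same with no collision at all.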

> 3) The commit 62906027091f ("mm: add PageWaiters indicating tasks are waiting
> for a page bit") mentioned that,
>
> "Putting two bits in the same word opens the opportunity to remove the memory
> barrier between clearing the lock bit and testing the waiters bit, after some
> work on the arch primitives (e.g., ensuring memory operand widths match and
> cover both bits)."
>
> Do you happen to know if this only happens on powerpc?

I have only seen this single instance on that machine. I do not think
this is very HW specific, but the ppc platform is likely more prone to
it. Just think of the memory itself: each memory block is notified via
udev, and ppc has very small memblocks (16M to 256M), while x86 will use
2G blocks on large machines.
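
For context, the "two bits in the same word" the changelog talks about
is what the unlock fast path relies on. Roughly, from mm/filemap.c of
this era (comments mine):

	void unlock_page(struct page *page)
	{
		BUILD_BUG_ON(PG_waiters != 7);
		page = compound_head(page);
		VM_BUG_ON_PAGE(!PageLocked(page), page);
		/*
		 * Clear PG_locked and test PG_waiters in a single atomic
		 * op; only walk the hashed wait queue when somebody is
		 * actually waiting.
		 */
		if (clear_bit_unlock_is_negative_byte(PG_locked, &page->flags))
			wake_up_page_bit(page, PG_locked);
	}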

> Also, we probably need to dig out whether those memory barriers are
> still there and could be removed to speed things up.

I would be really surprised if memory barriers matter much. It sounds
much more likely that there is the same underlying problem as in
11a19c7b099f: there are simply too many waiters on the page. That commit
prevents only the hard lockup part of the problem, by dropping the lock
and continuing after the bookmark. But, as mentioned in its changelog,
cond_resched is not really an option because this path is called from
atomic context as well. So !PREEMPT kernels are still in the same boat.
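
To make the bookmark part concrete, this is the shape of that fix,
heavily condensed from kernel/sched/wait.c and mm/filemap.c (comments
mine): the walk gives up after WAITQUEUE_WALK_BREAK_CNT (64) waiters,
parks a bookmark entry in the list and lets the caller re-take the lock:

	/* __wake_up_common(): inside the waiter walk */
	if (bookmark && (++cnt > WAITQUEUE_WALK_BREAK_CNT) &&
	    (&next->entry != &wq_head->head)) {
		bookmark->flags = WQ_FLAG_BOOKMARK;
		list_add_tail(&bookmark->entry, &next->entry);
		break;		/* resume from here on the next call */
	}

	/* wake_up_page_bit(): drop the lock between batches */
	while (bookmark.flags & WQ_FLAG_BOOKMARK) {
		spin_unlock_irqrestore(&q->lock, flags);
		cpu_relax();	/* irqs can fire here, nothing schedules */
		spin_lock_irqsave(&q->lock, flags);
		__wake_up_locked_key_bookmark(q, TASK_NORMAL, &key, &bookmark);
	}

Interrupts getting through between batches is what kills the hard
lockup; the waking task itself never yields, which is why the soft
lockup remains on !PREEMPT.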

I might have misunderstood something, of course, and would like to hear
where my thinking is wrong.
--
Michal Hocko
SUSE Labs
