Subject: Re: scheduling while atomic in z3fold
From: Mike Galbraith
Date: Sun, 29 Nov 2020
On Sun, 2020-11-29 at 10:21 +0100, Mike Galbraith wrote:
> On Sun, 2020-11-29 at 08:48 +0100, Mike Galbraith wrote:
> > On Sun, 2020-11-29 at 07:41 +0100, Mike Galbraith wrote:
> > > On Sat, 2020-11-28 at 15:27 +0100, Oleksandr Natalenko wrote:
> > > >
> > > > > > Shouldn't the list manipulation be protected with
> > > > > > local_lock+this_cpu_ptr instead of get_cpu_ptr+spin_lock?
> > > >
> > > > Totally untested:
> > >
> > > Hrm, the thing doesn't seem to care deeply about preemption being
> > > disabled, so adding another lock may be overkill. It looks like you
> > > could get the job done via migrate_disable()+this_cpu_ptr().
> >
> > There is however an ever so tiny chance that I'm wrong about that :)
>
> Or not, your local_lock+this_cpu_ptr version exploded too.
>
> Perhaps there's a bit of non-RT-related raciness going on in the zswap
> thingy that makes swap an even less wonderful idea for RT than usual.

Raciness seems to be restricted to the pool compressor: "zbud" seems to
be solid, virgin "zsmalloc" explodes, as does "z3fold", regardless of
which of us puts his grubby fingerprints on it.

The exploding compressors survived zero runs of runltp -f mm; I declared
zbud to be at least kinda sorta stable after the box survived five runs.
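
For reference, below is a minimal sketch of the three locking patterns
being tossed around (get_cpu_ptr+spin_lock, local_lock+this_cpu_ptr, and
migrate_disable+this_cpu_ptr). The struct and function names are made up
for illustration; this is not the actual mm/z3fold.c code, and per-CPU
list initialization is omitted for brevity.

#include <linux/percpu.h>
#include <linux/spinlock.h>
#include <linux/local_lock.h>
#include <linux/preempt.h>
#include <linux/list.h>

struct pool_pcpu {
        local_lock_t     lock;   /* protects @list on this CPU */
        struct list_head list;
};

static DEFINE_PER_CPU(struct pool_pcpu, pool_pcpu) = {
        .lock = INIT_LOCAL_LOCK(lock),
};

/*
 * Pattern that trips up PREEMPT_RT: get_cpu_ptr() disables preemption,
 * then a spinlock (which sleeps on RT) is taken inside that atomic
 * section -> "scheduling while atomic".
 */
static void pool_add_atomic(struct list_head *item, spinlock_t *pool_lock)
{
        struct pool_pcpu *p = get_cpu_ptr(&pool_pcpu);

        spin_lock(pool_lock);            /* sleeps on RT inside atomic section */
        list_add(item, &p->list);
        spin_unlock(pool_lock);
        put_cpu_ptr(&pool_pcpu);
}

/*
 * local_lock variant: pins the task to this CPU and serializes against
 * other local users without creating an atomic section on RT.
 */
static void pool_add_local_lock(struct list_head *item)
{
        struct pool_pcpu *p;

        local_lock(&pool_pcpu.lock);
        p = this_cpu_ptr(&pool_pcpu);
        list_add(item, &p->list);
        local_unlock(&pool_pcpu.lock);
}

/*
 * migrate_disable() variant: only keeps the task on this CPU; any
 * serialization still has to come from an existing lock.
 */
static void pool_add_migrate_disable(struct list_head *item, spinlock_t *pool_lock)
{
        struct pool_pcpu *p;

        migrate_disable();
        p = this_cpu_ptr(&pool_pcpu);
        spin_lock(pool_lock);            /* fine on RT: preemption stays enabled */
        list_add(item, &p->list);
        spin_unlock(pool_lock);
        migrate_enable();
}

The local_lock variant gives you both CPU pinning and per-CPU
serialization; migrate_disable() alone only pins the task, so the
existing pool spinlock still has to carry the serialization.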

-Mike
