From: Michal Hocko
Date: Mon, 25 Jun 2018
Subject: Re: dm bufio: Reduce dm_bufio_lock contention
On Mon 25-06-18 10:42:30, Mikulas Patocka wrote:
>
>
> On Mon, 25 Jun 2018, Michal Hocko wrote:
>
> > > And the throttling in dm-bufio prevents kswapd from making forward
> > > progress, causing this situation...
> >
> > Which is what we have PF_LESS_THROTTLE for. Geez, do we have to go in
> > circles like that? Are you even listening?
> >
> > [...]
> >
> > > And so what do you want to do to prevent block drivers from sleeping?
> >
> > use the existing means we have.
> > --
> > Michal Hocko
> > SUSE Labs
>
> So - do you want this patch?
>
> There is no behavior difference between changing the allocator (so that it
> implies PF_LESS_THROTTLE for block drivers) and changing all the block
> drivers to explicitly set PF_LESS_THROTTLE.

As long as you can reliably detect those users. And using gfp_mask is
about the worst way to achieve that, because users tend to be creative
when it comes to using the gfp mask. PF_LESS_THROTTLE is, in general, a
way to tell the allocator that _you_ are the one helping the reclaim by
cleaning data.
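
To illustrate what that looks like on the driver side, a minimal sketch
(my_writeback(), struct my_buffer and my_do_io() are hypothetical; only
the PF_LESS_THROTTLE bit itself is the real kernel interface):

	/*
	 * Hypothetical write-out path in a block driver that cleans
	 * dirty data on behalf of reclaim.
	 */
	static void my_writeback(struct my_buffer *b)
	{
		/* Save the current state of the bit so the section nests. */
		unsigned int old_flags = current->flags & PF_LESS_THROTTLE;

		/*
		 * This task makes reclaim progress by cleaning dirty data,
		 * so direct reclaim should throttle it less.
		 */
		current->flags |= PF_LESS_THROTTLE;

		my_do_io(b);		/* hypothetical I/O submission */

		/* Put the bit back the way we found it. */
		current->flags = (current->flags & ~PF_LESS_THROTTLE) |
				 old_flags;
	}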

> But if you insist that the allocator can't be changed, we have to repeat
> the same code over and over again in the block drivers.

I am not familiar with the patched code, but the mempool change at least
makes sense (bvec_alloc seems to fall back to the mempool, which then
makes sense as well). If others in md/ do the same thing, the change
makes sense for them as well.

I would just use current_restore_flags rather than open code it.
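
That is, assuming the patch open-codes the bit restore, something like

	current->flags = (current->flags & ~PF_LESS_THROTTLE) | old_flags;

can simply become

	current_restore_flags(old_flags, PF_LESS_THROTTLE);

where old_flags saved the PF_LESS_THROTTLE bit before it was set.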

Thanks!
--
Michal Hocko
SUSE Labs
