lkml.org 
[lkml]   [2019]   [Jul]   [31]   [last100]   RSS Feed
Views: [wrap][no wrap]   [headers]  [forward] 
 
Messages in this thread
/
Date
From
SubjectRe: [patch 4/4] fs: jbd/jbd2: Substitute BH locks for RT and lock debugging
On Wed, 31 Jul 2019, Jan Kara wrote:
> On Tue 30-07-19 13:24:56, Thomas Gleixner wrote:
> > Bit spinlocks are problematic if PREEMPT_RT is enabled. They disable
> > preemption, which is undesired for latency reasons and breaks when regular
> > spinlocks are taken within the bit_spinlock locked region because regular
> > spinlocks are converted to 'sleeping spinlocks' on RT.
> >
> > Substitute the BH_State and BH_JournalHead bit spinlocks with regular
> > spinlock for PREEMPT_RT enabled kernels.
>
> Is there a real need for substitution for BH_JournalHead bit spinlock? The
> critical sections are pretty tiny, all located within fs/jbd2/journal.c.
> Maybe only the one around __journal_remove_journal_head() would need a bit
> of refactoring so that journal_free_journal_head() doesn't get called
> under the bit-spinlock.

Makes sense.

> BH_State lock is definitely worth it. In fact, if you placed the spinlock
> inside struct journal_head (which is the structure whose members are in
> fact protected by it), I'd be even fine with just using the spinlock always
> instead of the bit spinlock. journal_head is pretty big anyway (and there's
> even 4-byte hole in it for 64-bit archs) and these structures are pretty
> rare (only for actively changed metadata buffers).

Just need to figure out what to do with the ASSERT_JH(state_is_locked) case for
UP. Perhaps just return true for UP && !DEBUG_SPINLOCK?

Thanks,

tglx

\
 
 \ /
  Last update: 2019-07-31 21:42    [W:0.086 / U:0.168 seconds]
©2003-2020 Jasper Spaans|hosted at Digital Ocean and TransIP|Read the blog|Advertise on this site