From: Mikulas Patocka
Date: Wed, 2 Apr 2008
Subject: Re: [PATCH]: Fix SMP-reordering race in mark_buffer_dirty
On Wed, 2 Apr 2008, Linus Torvalds wrote:

> On Wed, 2 Apr 2008, Mikulas Patocka wrote:
> >
> > So you're right, the gain of mfence is so little that you can remove it
> > and use only test_set_buffer_dirty.
>
> Well, I suspect that part of the issue is that quite often you end up
> with *both* because the buffer wasn't already dirty from before.
>
> Re-dirtying a dirty buffer is pretty common for things like bitmap blocks
> etc, so it's probably a worthy optimization if it has no cost, and on
> Core2 I suspect your version is worth it, but it's not like it's going to
> be necessarily a 99% kind of case. I suspect quite a lot of the
> mark_buffer_dirty() calls are actually on clean buffers.

The cost of doing the buffer lookup and update is much more than 20
clocks, so the ~20-clock difference between mfence and a locked
instruction can be ignored.

> (Of course, a valid argument is that if it was already dirty, we'll skip
> the other expensive parts, so only the "already dirty" case is worth
> optimizing for. Maybe true. There might also be cases where it means one
> less dirty cacheline in memory.)

That is another good example: when two CPUs are simultaneously updating
the same buffer (e.g. two inodes in one block, or simultaneous writes to a
bitmap), test_set_buffer_dirty will cause cache-line ping-pong (which is
much more expensive than the 20-100 cycles of an interlocked instruction),
whereas smp_mb()+test_buffer_dirty leaves the cache line shared between
the CPUs.
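
A minimal sketch of the pattern being argued for (not the actual patch:
the expensive clean-to-dirty path is elided, and buffer_dirty() stands in
for the plain dirty-bit test called test_buffer_dirty above):

	void mark_buffer_dirty(struct buffer_head *bh)
	{
		/*
		 * Order the preceding stores to the buffer data against
		 * the plain dirty-bit read below; unlike a locked RMW,
		 * a plain load provides no ordering by itself, which is
		 * the SMP-reordering race in the subject.
		 */
		smp_mb();

		/*
		 * Already dirty: return without writing to the cache
		 * line at all, so it can stay shared between CPUs.
		 */
		if (buffer_dirty(bh))
			return;

		if (!test_set_buffer_dirty(bh)) {
			/* clean -> dirty: do the expensive part (elided) */
		}
	}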

So maybe there's still a reason for smp_mb().

Mikulas


