 
From: Linus Torvalds
Date: 2008-04-02
Subject: Re: [PATCH]: Fix SMP-reordering race in mark_buffer_dirty


On Wed, 2 Apr 2008, Mikulas Patocka wrote:
>
> So you're right, the gain of mfence is so little that you can remove it
> and use only test_set_buffer_dirty.

Well, I suspect that part of the issue is that quite often you end up
with *both* because the buffer wasn't already dirty from before.
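
For reference, the variant being discussed looks something like this (a
sketch reconstructed from the thread, not the exact patch; the
__set_page_dirty() call is what mark_buffer_dirty() in fs/buffer.c ends
up doing anyway):

	void mark_buffer_dirty(struct buffer_head *bh)
	{
		/* Sketch of the mfence-first variant from this thread */
		smp_mb();			/* order vs earlier stores to the data */
		if (buffer_dirty(bh))		/* unlocked test: already dirty? */
			return;

		if (!test_set_buffer_dirty(bh))	/* atomic RMW, implies full barrier */
			__set_page_dirty(bh->b_page, page_mapping(bh->b_page), 0);
	}

For a buffer that was clean, that path executes the mfence *and* the
locked RMW, which is the "both" case.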

Re-dirtying a dirty buffer is pretty common for things like bitmap blocks
etc, so it's probably a worthy optimization if it has no cost, and on
Core2 I suspect your version is worth it, but it's not like it's going to
be necessarily a 99% kind of case. I suspect quite a lot of the
mark_buffer_dirty() calls are actually on clean buffers.

(Of course, a valid argument is that if it was already dirty, we'll skip
the other expensive parts, so only the "already dirty" case is worth
optimizing for. Maybe true. There might also be cases where it means one
less dirty cacheline in memory.)

> I don't know if there are other architectures where smp_mb() would be
> significantly faster than test_and_set_bit.

Probably none, since test_and_set_bit() implies an smp_mb(), and
generally the bigger cost is in the barrier than in the bit setting
itself.
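
On x86, for example, the LOCK prefix that makes the RMW atomic is also
what provides the ordering, so the barrier cost is inseparable from the
bit operation. From memory, roughly what arch/x86 has (not verbatim):

	static inline int test_and_set_bit(int nr, volatile unsigned long *addr)
	{
		int oldbit;

		/* The "lock" is both the atomicity and the full barrier */
		asm volatile(LOCK_PREFIX "bts %2,%1\n\t"
			     "sbb %0,%0"
			     : "=r" (oldbit), "+m" (*addr)
			     : "Ir" (nr) : "memory");
		return oldbit;
	}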

Core 2 is the outlier in having a noticeably faster "mfence" than atomic
instructions (and judging by noises Intel makes, Nehalem will undo that
outlier).

Linus

