Subject: Re: [PATCH] arch: Introduce read_acquire()
Hello,

On Tue, Nov 11, 2014 at 06:57:05PM +0000, alexander.duyck@gmail.com wrote:
> From: Alexander Duyck <alexander.h.duyck@redhat.com>
>
> In the case of device drivers it is common to utilize receive descriptors
> in which a single field is used to determine if the descriptor is currently
> in the possession of the device or the CPU. In order to prevent any other
> fields from being read, an rmb() is used, resulting in something like this
> code snippet from ixgbe_main.c:
>
> 	if (!ixgbe_test_staterr(rx_desc, IXGBE_RXD_STAT_DD))
> 		break;
>
> 	/*
> 	 * This memory barrier is needed to keep us from reading
> 	 * any other fields out of the rx_desc until we know the
> 	 * RXD_STAT_DD bit is set
> 	 */
> 	rmb();
>
> On reviewing the documentation and code for smp_load_acquire() it occurred
> to me that implementing something similar for CPU <-> device interaction
> would be worthwhile. This commit provides just the load/read side of this
> in the form of read_acquire(). This new primitive orders the specified
> read against any subsequent reads. As a result we can reduce the above
> code snippet down to:
>
> 	/* This memory barrier is needed to keep us from reading
> 	 * any other fields out of the rx_desc until we know the
> 	 * RXD_STAT_DD bit is set
> 	 */
> 	if (!(read_acquire(&rx_desc->wb.upper.status_error) &

Minor nit on naming, but load_acquire would match what we do with barriers,
where you simply drop the smp_ prefix if you want the thing to work on UP
systems too.

> 	      cpu_to_le32(IXGBE_RXD_STAT_DD)))
> 		break;
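
If the implementation mirrors the generic smp_load_acquire() fallback, I'd
expect the arch-independent version to boil down to something like this
(my sketch, assuming the load is paired with a mandatory rmb() as the
description above suggests; the actual patch may differ):

	#define read_acquire(p)						\
	({								\
		typeof(*p) ___p1 = ACCESS_ONCE(*p);			\
		compiletime_assert_atomic_type(*p);			\
		rmb();							\
		___p1;							\
	})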

I'm not familiar with the driver in question, but how are the descriptors
mapped? Is the read barrier here purely limiting re-ordering of normal
memory accesses by the CPU? If so, isn't there also scope for store_release
when updating, e.g. next_to_watch in the same driver?
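
For the store side, what I'm imagining is something along these lines
(purely a sketch: write_release() is a hypothetical counterpart to
read_acquire(), and the field names are illustrative rather than taken
from the driver):

	/* Populate the descriptor fields first... */
	desc->addr = cpu_to_le64(dma);
	desc->len  = cpu_to_le16(len);

	/*
	 * ...then hand ownership back with release semantics, so the
	 * device cannot observe the new status value before the fields
	 * it guards.
	 */
	write_release(&desc->status, cpu_to_le32(DESC_HW_OWNED));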

We also need to understand how this plays out with
smp_mb__after_unlock_lock, which is currently *only* implemented by PowerPC.
If we end up having a similar mess to mmiowb, where PowerPC both implements
the barrier *and* plays tricks in its spin_unlock code, then everybody
loses because we'd end up with release doing the right thing anyway.
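
For reference, the pattern that barrier exists for is promoting an UNLOCK
followed by a LOCK into a full barrier, i.e. roughly:

	spin_unlock(&lock_a);
	/* ... */
	spin_lock(&lock_b);
	smp_mb__after_unlock_lock();	/* UNLOCK+LOCK now acts as smp_mb() */

which expands to smp_mb() on PowerPC and is a no-op everywhere else.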

Peter and I spoke with Paul at LPC about strengthening
smp_load_acquire/smp_store_release so that release->acquire ordering is
maintained, which would allow us to drop smp_mb__after_unlock_lock
altogether. That's stronger than acquire/release in C11, but I think it's
an awful lot easier to use, particularly if device drivers are going to
start using these primitives.
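
Concretely, as I understand the proposal, a release followed by an acquire
on the same CPU would behave like a full barrier, so in a litmus test like
this (x and y initially zero):

	/* CPU 0 */			/* CPU 1 */
	smp_store_release(&x, 1);	smp_store_release(&y, 1);
	r0 = smp_load_acquire(&y);	r1 = smp_load_acquire(&x);

the outcome (r0 == 0 && r1 == 0) would be forbidden, whereas C11
acquire/release on its own permits it.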

Thoughts?

Will

