    Date:    9 Feb 2014
    From:    Paul E. McKenney
    Subject: Re: [RFC][PATCH 0/5] arch: atomic rework
    On Mon, Feb 10, 2014 at 01:27:51AM +0100, Torvald Riegel wrote:
    > On Fri, 2014-02-07 at 10:02 -0800, Paul E. McKenney wrote:
    > > On Fri, Feb 07, 2014 at 04:55:48PM +0000, Will Deacon wrote:

    [ . . . ]

    > > And then it is a short and uncontroversial step to the following:
    > >
    > > Initial state: x == y == 0
    > >
    > > T1: atomic_store_explicit(&y, 42, memory_order_relaxed);
    > >     r1 = atomic_load_explicit(&x, memory_order_relaxed);
    > >     if (r1 != 42)
    > >             atomic_store_explicit(&y, r1, memory_order_relaxed);
    > >
    > > T2: r2 = atomic_load_explicit(&y, memory_order_relaxed);
    > >     atomic_store_explicit(&x, r2, memory_order_relaxed);
    > >
    > > This can of course result in r1 == r2 == 42, even though the constant
    > > 42 never appeared in the original code. This is one way to generate
    > > an out-of-thin-air value.
    > >
    > > As near as I can tell, compiler writers hate the idea of prohibiting
    > > speculative-store optimizations because it requires them to introduce
    > > both control and data dependency tracking into their compilers.
    >
    > I wouldn't characterize the situation like this (although I can't speak
    > for others, obviously). IMHO, it's perfectly fine on sequential /
    > non-synchronizing code, because we know the difference isn't observable
    > by a correct program. For synchronizing code, compilers just shouldn't
    > do it, or they would have to truly prove that speculation is harmless.
    > That will be hard, so I think it should just be avoided.
    >
    > Synchronization code will likely have been tuned anyway (especially if
    > it uses relaxed MO), so I don't see a large need for trying to optimize
    > using speculative atomic stores.
    >
    > Thus, I think there's an easy and practical solution.

    I like this approach, but there has been resistance to it in the past.
    Definitely worth a good try, though!
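    For readers following along, a minimal sketch of what the pre-transformation
    code might look like (the variable names and the value 42 follow the example
    quoted above; the helpers t1()/t2() are purely illustrative):

        #include <stdatomic.h>

        atomic_int x, y;        /* initial state: x == y == 0 */

        /* T1 as the programmer might have written it: forward x into y. */
        void t1(void)
        {
                int r1 = atomic_load_explicit(&x, memory_order_relaxed);
                atomic_store_explicit(&y, r1, memory_order_relaxed);
        }

        /* T2: forward y into x. */
        void t2(void)
        {
                int r2 = atomic_load_explicit(&y, memory_order_relaxed);
                atomic_store_explicit(&x, r2, memory_order_relaxed);
        }

        /*
         * A compiler applying the speculative-store optimization could
         * rewrite t1() into the form quoted above: store 42 to y up front,
         * load x, and overwrite y with r1 only if the guess was wrong.
         * T2 can then observe the speculative 42, copy it into x, and make
         * T1's guess "come true", yielding r1 == r2 == 42 even though the
         * constant 42 never appears in the source.
         */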

    > > Many of
    > > them seem to hate dependency tracking with a purple passion. At least,
    > > such a hatred would go a long way towards explaining the incomplete
    > > and high-overhead implementations of memory_order_consume, the long
    > > and successful use of idioms based on the memory_order_consume pattern
    > > notwithstanding [*]. ;-)
    >
    > I still think that's different because it blurs the difference between
    > sequential code and synchronizing code (i.e., atomic accesses). With
    > consume MO, the simple solution above doesn't work anymore, because
    > suddenly synchronizing code does affect optimizations in sequential
    > code, even if that wouldn't reorder across the synchronizing code (which
    > would be clearly "visible" to the implementation of the optimization).

    I understand that memory_order_consume is a bit harder on compiler
    writers than the other memory orders, but it is also pretty valuable.
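    For context, the memory_order_consume pattern referred to above is the
    RCU-style publish/subscribe idiom: the reader's later load carries an
    address dependency on the consume load, so on most architectures it is
    ordered without any extra barrier instruction, unlike memory_order_acquire.
    A minimal sketch, with illustrative structure and variable names:

        #include <stdatomic.h>

        struct foo {
                int a;
        };

        static struct foo *_Atomic gp;  /* published pointer (illustrative name) */

        /* Writer: initialize the structure, then publish it with a release store. */
        void publish(struct foo *p)
        {
                p->a = 1;
                atomic_store_explicit(&gp, p, memory_order_release);
        }

        /* Reader: the consume load heads a dependency chain; the p->a load
         * depends on it through the pointer value, which is exactly the
         * ordering memory_order_consume is meant to provide cheaply. */
        int reader(void)
        {
                struct foo *p = atomic_load_explicit(&gp, memory_order_consume);

                if (p)
                        return p->a;
                return -1;
        }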

    Thanx, Paul


