From: Will Deacon <will@kernel.org>
Subject: Re: [PATCH] locking/atomic: Make test_and_*_bit() ordered on failure
On Tue, Aug 16, 2022 at 04:03:11PM +0900, Hector Martin wrote:
> These operations are documented as always ordered in
> include/asm-generic/bitops/instrumented-atomic.h, and producer-consumer
> type use cases where one side needs to ensure a flag is left pending
> after some shared data was updated rely on this ordering, even in the
> failure case.
>
> This is the case with the workqueue code, which currently suffers from a
> reproducible ordering violation on Apple M1 platforms (which are
> notoriously out-of-order) that ends up causing the TTY layer to fail to
> deliver data to userspace properly under the right conditions. This
> change fixes that bug.
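
[ For illustration, a minimal sketch of the kind of producer/consumer
  handoff described above -- with hypothetical names (pending_work,
  producer(), consumer(), kick_consumer(), handle()), not the actual
  workqueue code: ]

#include <linux/bitops.h>
#include <linux/compiler.h>

#define FLAG_PENDING    0

/* Hypothetical shared state, standing in for the real work item. */
struct pending_work {
        unsigned long flags;
        int data;
};

/* Hypothetical hooks, standing in for the real wakeup/consume paths. */
void kick_consumer(struct pending_work *w);
void handle(int data);

/* Producer: publish the data, then try to mark the work pending. */
static void producer(struct pending_work *w, int val)
{
        WRITE_ONCE(w->data, val);
        /*
         * Even when the bit is already set (the "failed" case), this
         * test_and_set_bit() is documented as fully ordered, so a consumer
         * that later clears the bit is guaranteed to also observe w->data.
         */
        if (!test_and_set_bit(FLAG_PENDING, &w->flags))
                kick_consumer(w);
}

/* Consumer: clear the flag, then read whatever was published before it. */
static void consumer(struct pending_work *w)
{
        if (test_and_clear_bit(FLAG_PENDING, &w->flags))
                handle(READ_ONCE(w->data));
}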
>
> Change the documentation to restrict the "no order on failure" story to
> the _lock() variant (for which it makes sense), and remove the
> early-exit from the generic implementation, which is what causes the
> missing barrier semantics in that case. Without this, the remaining
> atomic op is fully ordered (including on ARM64 LSE, as of recent
> versions of the architecture spec).
>
> Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: stable@vger.kernel.org
> Fixes: e986a0d6cb36 ("locking/atomics, asm-generic/bitops/atomic.h: Rewrite using atomic_*() APIs")
> Fixes: 61e02392d3c7 ("locking/atomic/bitops: Document and clarify ordering semantics for failed test_and_{}_bit()")
> Signed-off-by: Hector Martin <marcan@marcan.st>
> ---
> Documentation/atomic_bitops.txt | 2 +-
> include/asm-generic/bitops/atomic.h | 6 ------
> 2 files changed, 1 insertion(+), 7 deletions(-)
>
> diff --git a/Documentation/atomic_bitops.txt b/Documentation/atomic_bitops.txt
> index 093cdaefdb37..d8b101c97031 100644
> --- a/Documentation/atomic_bitops.txt
> +++ b/Documentation/atomic_bitops.txt
> @@ -59,7 +59,7 @@ Like with atomic_t, the rule of thumb is:
>   - RMW operations that have a return value are fully ordered.
>
>   - RMW operations that are conditional are unordered on FAILURE,
> -   otherwise the above rules apply. In the case of test_and_{}_bit() operations,
> +   otherwise the above rules apply. In the case of test_and_set_bit_lock(),
>     if the bit in memory is unchanged by the operation then it is deemed to have
>     failed.

The next sentence is:

| Except for a successful test_and_set_bit_lock() which has ACQUIRE
| semantics and clear_bit_unlock() which has RELEASE semantics.

so I think it reads a bit strangely now. How about something like:


diff --git a/Documentation/atomic_bitops.txt b/Documentation/atomic_bitops.txt
index 093cdaefdb37..3b516729ec81 100644
--- a/Documentation/atomic_bitops.txt
+++ b/Documentation/atomic_bitops.txt
@@ -59,12 +59,15 @@ Like with atomic_t, the rule of thumb is:
  - RMW operations that have a return value are fully ordered.
 
  - RMW operations that are conditional are unordered on FAILURE,
-   otherwise the above rules apply. In the case of test_and_{}_bit() operations,
-   if the bit in memory is unchanged by the operation then it is deemed to have
-   failed.
+   otherwise the above rules apply. For the purposes of ordering, the
+   test_and_{}_bit() operations are treated as unconditional.
 
-Except for a successful test_and_set_bit_lock() which has ACQUIRE semantics and
-clear_bit_unlock() which has RELEASE semantics.
+Except for:
+
+ - test_and_set_bit_lock() which has ACQUIRE semantics on success and is
+   unordered on failure;
+
+ - clear_bit_unlock() which has RELEASE semantics.
 
 Since a platform only has a single means of achieving atomic operations
 the same barriers as for atomic_t are used, see atomic_t.txt.

?

> diff --git a/include/asm-generic/bitops/atomic.h b/include/asm-generic/bitops/atomic.h
> index 3096f086b5a3..71ab4ba9c25d 100644
> --- a/include/asm-generic/bitops/atomic.h
> +++ b/include/asm-generic/bitops/atomic.h
> @@ -39,9 +39,6 @@ arch_test_and_set_bit(unsigned int nr, volatile unsigned long *p)
>          unsigned long mask = BIT_MASK(nr);
>
>          p += BIT_WORD(nr);
> -        if (READ_ONCE(*p) & mask)
> -                return 1;
> -
>          old = arch_atomic_long_fetch_or(mask, (atomic_long_t *)p);
>          return !!(old & mask);
>  }
> @@ -53,9 +50,6 @@ arch_test_and_clear_bit(unsigned int nr, volatile unsigned long *p)
>          unsigned long mask = BIT_MASK(nr);
>
>          p += BIT_WORD(nr);
> -        if (!(READ_ONCE(*p) & mask))
> -                return 0;
> -
>          old = arch_atomic_long_fetch_andnot(mask, (atomic_long_t *)p);
>          return !!(old & mask);

I suppose one sad thing about this is that, on arm64, we could
reasonably keep the READ_ONCE() path with a DMB LD (R->RW) barrier
before the return, but I don't think we can express that in the Linux
memory model, so we end up in RmW territory every time. Perhaps we
should roll our own implementation in the arch code?
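
Something like the following, perhaps (untested, just to show the shape,
with dmb(ishld) standing in for the DMB LD mentioned above):

static __always_inline int
arch_test_and_set_bit(unsigned int nr, volatile unsigned long *p)
{
        long old;
        unsigned long mask = BIT_MASK(nr);

        p += BIT_WORD(nr);
        if (READ_ONCE(*p) & mask) {
                /* Keep the failed path ordered: R->RW barrier before return */
                dmb(ishld);
                return 1;
        }

        old = arch_atomic_long_fetch_or(mask, (atomic_long_t *)p);
        return !!(old & mask);
}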

In any case, this should fix the problem so:

Acked-by: Will Deacon <will@kernel.org>

Will
