Subject: Re: [PATCH v4 1/2] introduce test_bit_acquire and use it in wait_on_bit
On Mon, Aug 01, 2022 at 06:42:15AM -0400, Mikulas Patocka wrote:
> wait_on_bit tests the bit without any memory barriers; consequently, the
> code that follows wait_on_bit may be moved before testing the bit on
> architectures with weak memory ordering. When the code tests for some
> event using wait_on_bit and then performs a load operation, the load may
> be unexpectedly moved before wait_on_bit and it may return data that
> existed before the event occurred.
>
> Such bugs exist in fs/buffer.c:__wait_on_buffer,
> drivers/md/dm-bufio.c:new_read,
> drivers/media/usb/dvb-usb-v2/dvb_usb_core.c:dvb_usb_start_feed,
> drivers/bluetooth/btusb.c:btusb_mtk_hci_wmt_sync
> and perhaps in other places.
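
To make the race concrete: the pattern in those callers boils down to
something like this (the struct, bit and field names below are made up
purely for illustration):

#include <linux/bitops.h>
#include <linux/wait_bit.h>
#include <linux/sched.h>

struct foo {
	unsigned long flags;	/* FOO_BUSY lives in here */
	int data;		/* payload published while FOO_BUSY is set */
};
#define FOO_BUSY	0

/* Producer: publish the payload, then clear the bit and wake waiters. */
static void foo_publish(struct foo *f)
{
	f->data = 42;
	clear_bit_unlock(FOO_BUSY, &f->flags);	/* release ordering */
	wake_up_bit(&f->flags, FOO_BUSY);
}

/* Consumer: wait for the bit to clear, then read the payload. */
static int foo_read(struct foo *f)
{
	wait_on_bit(&f->flags, FOO_BUSY, TASK_UNINTERRUPTIBLE);
	/*
	 * With a plain test_bit() inside wait_on_bit(), a weakly ordered
	 * CPU may hoist this load above the bit test and return data from
	 * before the event.
	 */
	return f->data;
}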
>
> We fix this class of bugs by adding a new function test_bit_acquire that
> reads the bit and provides acquire memory ordering semantics.
>
> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
> Cc: stable@vger.kernel.org
>
> ---
> arch/s390/include/asm/bitops.h | 10 ++++++++++
> arch/x86/include/asm/bitops.h | 7 ++++++-
> include/asm-generic/bitops/instrumented-non-atomic.h | 11 +++++++++++
> include/asm-generic/bitops/non-atomic.h | 13 +++++++++++++
> include/linux/wait_bit.h | 8 ++++----
> kernel/sched/wait_bit.c | 6 +++---
> 6 files changed, 47 insertions(+), 8 deletions(-)
>
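The generic flavour (the asm-generic hunks are not quoted below) is
presumably built on top of smp_load_acquire(), i.e. something along these
lines (a sketch, not necessarily the exact hunk from the patch):

static __always_inline bool
test_bit_acquire(unsigned long nr, const volatile unsigned long *addr)
{
	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);

	/* Acquire-load the word containing the bit, then extract the bit. */
	return 1UL & (smp_load_acquire(p) >> (nr & (BITS_PER_LONG - 1)));
}
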
> Index: linux-2.6/arch/x86/include/asm/bitops.h
> ===================================================================
> --- linux-2.6.orig/arch/x86/include/asm/bitops.h 2022-08-01 12:27:43.000000000 +0200
> +++ linux-2.6/arch/x86/include/asm/bitops.h 2022-08-01 12:27:43.000000000 +0200
> @@ -203,8 +203,10 @@ arch_test_and_change_bit(long nr, volati
>
> static __always_inline bool constant_test_bit(long nr, const volatile unsigned long *addr)
> {
> - return ((1UL << (nr & (BITS_PER_LONG-1))) &
> + bool r = ((1UL << (nr & (BITS_PER_LONG-1))) &
> (addr[nr >> _BITOPS_LONG_SHIFT])) != 0;
> + barrier();
> + return r;

Hmm, I find it a bit weird to have a barrier() here given that 'addr' is
volatile and we don't need a barrier() like this in the definition of
READ_ONCE(), for example.
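
For comparison, READ_ONCE() is (roughly) nothing more than a dereference
through a volatile-qualified pointer -- the real macro adds type checking
on top, but there is no barrier() in it:

#define READ_ONCE_SKETCH(x)	(*(const volatile typeof(x) *)&(x))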

> Index: linux-2.6/include/linux/wait_bit.h
> ===================================================================
> --- linux-2.6.orig/include/linux/wait_bit.h 2022-08-01 12:27:43.000000000 +0200
> +++ linux-2.6/include/linux/wait_bit.h 2022-08-01 12:27:43.000000000 +0200
> @@ -71,7 +71,7 @@ static inline int
> wait_on_bit(unsigned long *word, int bit, unsigned mode)
> {
> might_sleep();
> - if (!test_bit(bit, word))
> + if (!test_bit_acquire(bit, word))
> return 0;
> return out_of_line_wait_on_bit(word, bit,
> bit_wait,

Yet another approach here would be to leave test_bit as-is and add a call to
smp_acquire__after_ctrl_dep() since that exists already -- I don't have
strong opinions about it, but it saves you having to add another stub to
x86.
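
Something along those lines, sketched against the wait_on_bit() above
(untested, just to illustrate the shape):

static inline int
wait_on_bit(unsigned long *word, int bit, unsigned mode)
{
	might_sleep();
	if (!test_bit(bit, word)) {
		/* Upgrade the control dependency to acquire ordering. */
		smp_acquire__after_ctrl_dep();
		return 0;
	}
	return out_of_line_wait_on_bit(word, bit, bit_wait, mode);
}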

Will
