From: Greg Kroah-Hartman <>
Subject: [PATCH 5.4 221/348] net: sched: add barrier to ensure correct ordering for lockless qdisc
Date: Mon, 12 Jul 2021 08:10:05 +0200
From: Yunsheng Lin <linyunsheng@huawei.com>
[ Upstream commit 89837eb4b2463c556a123437f242d6c2bc62ce81 ]
Commit a90c57f2cedd ("net: sched: fix packet stuck problem for lockless qdisc") assumed that spin_trylock() contained the implicit barrier needed to ensure the correct ordering between STATE_MISSED setting/clearing and STATE_MISSED checking.
But it turns out that spin_trylock() only has load-acquire semantics. On a strongly-ordered system (like x86), the compiler barrier implicitly contained in spin_trylock() seems enough to ensure the correct ordering. But on a weakly-ordered system (like arm64), store-release semantics are also needed to ensure the correct ordering, as clear_bit() and test_bit() are a store operation and a load operation respectively; see queued_spin_lock().
So add explicit barriers to ensure the correct ordering for the above case.
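
To make the pairing concrete, here is a minimal userspace C11 model of the pattern this patch completes. It is an illustrative sketch, not the kernel code: atomic_thread_fence(memory_order_seq_cst) stands in for smp_mb__before_atomic()/smp_mb__after_atomic(), and the locked/missed variables and helper names are invented for the example.

#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool locked;  /* models qdisc->seqlock       */
static atomic_bool missed;  /* models __QDISC_STATE_MISSED */

static bool trylock(void)
{
	bool expected = false;

	/* Like spin_trylock(): load-acquire on success, no full barrier. */
	return atomic_compare_exchange_strong_explicit(&locked, &expected,
						       true,
						       memory_order_acquire,
						       memory_order_relaxed);
}

/* Contending CPU: mirrors qdisc_run_begin() after a failed trylock. */
static bool run_begin_slowpath(void)
{
	/* Models smp_mb__before_atomic(): order the MISSED check against
	 * the clearing side.
	 */
	atomic_thread_fence(memory_order_seq_cst);
	if (atomic_load_explicit(&missed, memory_order_relaxed))
		return false;	/* another thread will reschedule for us */

	atomic_store_explicit(&missed, true, memory_order_relaxed);
	/* Models smp_mb__after_atomic(): the MISSED store must be visible
	 * before the second trylock loads the lock word.
	 */
	atomic_thread_fence(memory_order_seq_cst);
	return trylock();
}

/* Lock holder: mirrors the unlock plus MISSED check on the other side. */
static bool run_end(void)
{
	atomic_store_explicit(&locked, false, memory_order_release);
	atomic_thread_fence(memory_order_seq_cst);
	/* If MISSED was set concurrently, the caller must reschedule. */
	return atomic_load_explicit(&missed, memory_order_relaxed);
}

int main(void)
{
	/* Single-threaded smoke test of the invariant: after a failed
	 * acquire, the holder must observe MISSED on release.
	 */
	(void)trylock();		 /* holder takes the lock          */
	bool got = run_begin_slowpath(); /* contender fails, sets MISSED   */
	bool resched = run_end();	 /* holder sees MISSED on release  */
	return (!got && resched) ? 0 : 1;
}

Without the two fences, each thread's load may be reordered before its own store on a weakly-ordered machine, so the contender can miss the lock while the holder misses the flag, which is exactly the lost-wakeup this patch closes.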
Fixes: a90c57f2cedd ("net: sched: fix packet stuck problem for lockless qdisc")
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Acked-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 include/net/sch_generic.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)
diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 0852f3e51360..0cb0a4bcb544 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -160,6 +160,12 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc)
 		if (spin_trylock(&qdisc->seqlock))
 			goto nolock_empty;
 
+		/* Paired with smp_mb__after_atomic() to make sure
+		 * STATE_MISSED checking is synchronized with clearing
+		 * in pfifo_fast_dequeue().
+		 */
+		smp_mb__before_atomic();
+
 		/* If the MISSED flag is set, it means other thread has
 		 * set the MISSED flag before second spin_trylock(), so
 		 * we can return false here to avoid multi cpus doing
@@ -177,6 +183,12 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc)
 		 */
 		set_bit(__QDISC_STATE_MISSED, &qdisc->state);
 
+		/* spin_trylock() only has load-acquire semantic, so use
+		 * smp_mb__after_atomic() to ensure STATE_MISSED is set
+		 * before doing the second spin_trylock().
+		 */
+		smp_mb__after_atomic();
+
 		/* Retry again in case other CPU may not see the new flag
 		 * after it releases the lock at the end of qdisc_run_end().
 		 */
-- 
2.30.2