From: Guoju Fang
Date: Sat, 28 May 2022 10:16:28 +0000 (+0800)
Subject: net: sched: add barrier to fix packet stuck problem for lockless qdisc
X-Git-Tag: baikal/aarch64/sdk6.1~3836^2~20
X-Git-Url: https://git.baikalelectronics.ru/sdk/?a=commitdiff_plain;h=d412adacf185ecf70dc2c6ef5dad968eade7c0dc;p=kernel.git

net: sched: add barrier to fix packet stuck problem for lockless qdisc

In qdisc_run_end(), the spin_unlock() only has store-release semantics,
which guarantee that all earlier memory accesses are visible before it.
But the subsequent test_bit() has no barrier semantics, so it may be
reordered ahead of the spin_unlock(). This store-load reordering may
cause a packet stuck problem.

The concurrent operations can be described as below,

         CPU 0                     |          CPU 1
   qdisc_run_end()                 |     qdisc_run_begin()
          .                        |           .
 ----> /* may be reordered here */ |           .
|         .                        |           .
|     spin_unlock()                |         set_bit()
|         .                        |    smp_mb__after_atomic()
 ---- test_bit()                   |         spin_trylock()
          .                        |            .

Consider the following sequence of events:
    CPU 0 reorders test_bit() ahead and sees MISSED = 0
    CPU 1 calls set_bit()
    CPU 1 calls spin_trylock(), which fails
    CPU 0 executes spin_unlock()

At the end of the sequence, CPU 0 returns from qdisc_run_end() without
rescheduling the qdisc, because its reordered test_bit() saw MISSED = 0.
The skb enqueued by CPU 1 is never taken, until the next CPU pushing to
the qdisc (if ever ...) notices and dequeues it.

This patch fixes the problem by adding one explicit barrier. As the
ordering between spin_unlock() and test_bit() is a store-load ordering,
a full memory barrier smp_mb() is needed here.

Fixes: ce41124b71dc ("net: sched: fix packet stuck problem for lockless qdisc")
Signed-off-by: Guoju Fang
Link: https://lore.kernel.org/r/20220528101628.120193-1-gjfang@linux.alibaba.com
Signed-off-by: Jakub Kicinski
---

diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 80973ce820f3e..d6cf5116b5f98 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -209,6 +209,12 @@ static inline void qdisc_run_end(struct Qdisc *qdisc)
 	if (qdisc->flags & TCQ_F_NOLOCK) {
 		spin_unlock(&qdisc->seqlock);
 
+		/* spin_unlock() only has store-release semantic. The unlock
+		 * and test_bit() ordering is a store-load ordering, so a full
+		 * memory barrier is needed here.
+		 */
+		smp_mb();
+
 		if (unlikely(test_bit(__QDISC_STATE_MISSED,
 				      &qdisc->state)))
 			__netif_schedule(qdisc);
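
Editor's note: the userspace sketch below is illustrative only and not part of
this patch. It models the same store-load ordering requirement with C11
atomics: run_begin()/run_end(), "owned" and "missed" are hypothetical
stand-ins for qdisc_run_begin()/qdisc_run_end(), qdisc->seqlock and the
__QDISC_STATE_MISSED bit, and atomic_thread_fence(memory_order_seq_cst) plays
the role of smp_mb().

/* Illustrative C11 model of the barrier pairing; not kernel code. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool owned;	/* models holding qdisc->seqlock */
static atomic_bool missed;	/* models the __QDISC_STATE_MISSED bit */

/* Enqueue side, loosely modelling qdisc_run_begin() on CPU 1. */
static bool run_begin(void)
{
	if (!atomic_exchange(&owned, true))	/* spin_trylock() */
		return true;

	/* Owner is busy: record the miss, like set_bit() ... */
	atomic_store_explicit(&missed, true, memory_order_relaxed);
	/* ... followed by the equivalent of smp_mb__after_atomic(). */
	atomic_thread_fence(memory_order_seq_cst);

	return !atomic_exchange(&owned, true);	/* retry spin_trylock() */
}

/* Dequeue side, loosely modelling qdisc_run_end() on CPU 0. */
static void run_end(void)
{
	/* Only a store-release, like spin_unlock(). */
	atomic_store_explicit(&owned, false, memory_order_release);

	/* Full barrier standing in for smp_mb(). Without it, the load of
	 * "missed" below may be satisfied before the store above becomes
	 * visible to the other CPU (store-load reordering), so both sides
	 * can miss each other and the packet stays queued.
	 */
	atomic_thread_fence(memory_order_seq_cst);

	if (atomic_load_explicit(&missed, memory_order_relaxed)) {
		atomic_store_explicit(&missed, false, memory_order_relaxed);
		puts("reschedule");	/* stands in for __netif_schedule() */
	}
}

int main(void)
{
	if (run_begin())
		run_end();
	return 0;
}

Dropping the seq_cst fence in run_end() reproduces the hazard described in the
commit message: the relaxed load of "missed" may be ordered ahead of the
release store of "owned", so neither side notices the other.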