locking/rwbase: Take care of ordering guarantee for fastpath reader
author    Boqun Feng <boqun.feng@gmail.com>
          Thu, 9 Sep 2021 10:59:19 +0000 (12:59 +0200)
committer Peter Zijlstra <peterz@infradead.org>
          Wed, 15 Sep 2021 15:49:16 +0000 (17:49 +0200)
commit    e84ca5952b381331f09237a521baba6032619264
tree      727cf795edb63c044e2a4cd5a01bf322be6b1e3f
parent    c11cae0b8425458d5662485964d1bcc07e08b600
locking/rwbase: Take care of ordering guarantee for fastpath reader

Readers of rwbase can lock and unlock without taking any inner lock; when
that happens, we rely on the ordering provided by the atomic operations
to satisfy the ordering semantics of lock/unlock. Without that, consider
the following case:

{ X = 0 initially }

CPU 0 CPU 1
===== =====
rt_write_lock();
X = 1
rt_write_unlock():
  atomic_add(READER_BIAS - WRITER_BIAS, ->readers);
  // ->readers is READER_BIAS.
rt_read_lock():
  if ((r = atomic_read(->readers)) < 0) // True
    atomic_try_cmpxchg(->readers, r, r + 1); // succeed.
  <acquire the read lock via fast path>

r1 = X; // r1 may be 0, because nothing prevents the reordering
        // of "X = 1" and the atomic_add() on CPU 1.

Therefore, audit every use of atomic operations that may happen on a
fast path, and add the necessary barriers.
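The kind of fix described above can be sketched in portable C11 atomics:
ACQUIRE ordering on the reader-fastpath cmpxchg and RELEASE ordering on
the writer fast unlock. This is only an illustration, not the kernel
implementation; the struct, function, and macro names merely mirror
kernel/locking/rwbase_rt.c, and the kernel uses its own atomic_t API
rather than <stdatomic.h>:

```c
#include <stdatomic.h>

/*
 * Minimal sketch of the rwbase fastpaths, NOT the kernel code.
 * The bias values mirror kernel/locking/rwbase_rt.c: ->readers
 * sits at READER_BIAS (sign bit set) when unlocked, and at
 * WRITER_BIAS when write-locked, so "readers < 0" means no writer.
 */
#define READER_BIAS	(1U << 31)
#define WRITER_BIAS	(1U << 30)

struct rwbase_rt {
	atomic_int readers;
};

/* Reader fastpath: ACQUIRE on the successful cmpxchg, so accesses
 * inside the read-side critical section cannot be reordered before
 * the lock is taken. */
static int rwbase_read_trylock(struct rwbase_rt *rwb)
{
	int r = atomic_load_explicit(&rwb->readers, memory_order_relaxed);

	while (r < 0) {
		if (atomic_compare_exchange_weak_explicit(
				&rwb->readers, &r, r + 1,
				memory_order_acquire,
				memory_order_relaxed))
			return 1;	/* fastpath lock acquired */
	}
	return 0;			/* writer present: take slowpath */
}

/* Writer fast unlock: RELEASE on the add, so stores in the write-side
 * critical section ("X = 1" in the litmus test above) are ordered
 * before the lock becomes available to a fastpath reader. */
static void rwbase_write_unlock_fast(struct rwbase_rt *rwb)
{
	atomic_fetch_add_explicit(&rwb->readers,
				  READER_BIAS - WRITER_BIAS,
				  memory_order_release);
}
```

With this pairing, the reader's acquire cmpxchg synchronizes with the
writer's release add, so the litmus test above can no longer observe
r1 == 0 after the fastpath read lock succeeds.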

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20210909110203.953991276@infradead.org
kernel/locking/rwbase_rt.c