author		Paul E. McKenney <paulmck@linux.vnet.ibm.com>
		Wed, 11 Dec 2013 21:59:11 +0000 (13:59 -0800)
committer	Ingo Molnar <mingo@kernel.org>
		Mon, 16 Dec 2013 10:36:18 +0000 (11:36 +0100)
commit		bdaf24fa2b062f7c935c4537ba6757b0bc0914a2
tree		a9c7d8a8c4b089decea2e596a5f08b182bd8cc8e
parent		99524d6ed44d64431deb7c9aa67906a98fcd0015
powerpc: Full barrier for smp_mb__after_unlock_lock()

The powerpc lock acquisition sequence is as follows:

lwarx; cmpwi; bne; stwcx.; lwsync;
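For concreteness, that sequence corresponds to inline assembly roughly
like the following (a simplified sketch; example_spin_lock and the
details of the spin loop are illustrative, not the kernel's exact code
in arch/powerpc/include/asm/spinlock.h):

	/* Sketch of a powerpc lock acquisition; the trailing lwsync is
	 * the acquire barrier referred to above. */
	static inline void example_spin_lock(volatile unsigned int *lock)
	{
		unsigned int tmp;

		__asm__ __volatile__(
	"1:	lwarx	%0,0,%1\n"	/* load-reserve the lock word */
	"	cmpwi	0,%0,0\n"	/* is the lock free? */
	"	bne-	1b\n"		/* held: spin */
	"	stwcx.	%2,0,%1\n"	/* free: try to claim it */
	"	bne-	1b\n"		/* lost the reservation: retry */
	"	lwsync\n"		/* acquire barrier */
		: "=&r" (tmp)
		: "r" (lock), "r" (1)
		: "cr0", "memory");
	}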

Lock release is as follows:

lwsync; stw;
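That is, the release is just a barrier followed by a plain store
(again a sketch with illustrative names):

	/* Sketch of a powerpc lock release: lwsync orders all prior
	 * accesses before the stw that drops the lock. */
	static inline void example_spin_unlock(volatile unsigned int *lock)
	{
		__asm__ __volatile__("lwsync" : : : "memory");
		*lock = 0;	/* the releasing stw */
	}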

If CPU 0 does a store (say, x=1) then a lock release, and CPU 1
does a lock acquisition then a load (say, r1=y), then there is
no guarantee of a full memory barrier between the store to 'x'
and the load from 'y'. To see this, suppose that CPUs 0 and 1
are hardware threads in the same core that share a store buffer,
and that CPU 2 is in some other core, and that CPU 2 does the
following:

y = 1; sync; r2 = x;

If 'x' and 'y' are both initially zero, then the lock
acquisition and release sequences above can result in r1 and r2
both being equal to zero, which could not happen if unlock+lock
were a full barrier.
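Written out as a three-CPU litmus test in kernel-style C pseudocode
(the lock and barrier primitives are those of linux/spinlock.h and
asm/barrier.h; CPU 0 is assumed to hold 'lock' initially):

	int x, y;		/* both initially zero */
	DEFINE_SPINLOCK(lock);	/* initially held by CPU 0 */

	void cpu0(void)		/* shares a store buffer with CPU 1 */
	{
		WRITE_ONCE(x, 1);
		spin_unlock(&lock);	/* lwsync; stw */
	}

	void cpu1(int *r1)
	{
		spin_lock(&lock);	/* lwarx; cmpwi; bne; stwcx.; lwsync */
		*r1 = READ_ONCE(y);
	}

	void cpu2(int *r2)	/* some other core */
	{
		WRITE_ONCE(y, 1);
		smp_mb();		/* "sync" on powerpc */
		*r2 = READ_ONCE(x);
	}

	/* The outcome *r1 == 0 && *r2 == 0 is observable with the
	 * lwsync-based sequences above, but would be forbidden if
	 * unlock+lock were a full barrier. */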

This commit therefore makes powerpc's
smp_mb__after_unlock_lock() a full barrier.
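Concretely, the change amounts to a one-line definition in
arch/powerpc/include/asm/spinlock.h along these lines (a sketch, not
the verbatim diff):

	/* Make unlock+lock provide full ordering on powerpc by having
	 * smp_mb__after_unlock_lock() emit a full barrier (sync). */
	#define smp_mb__after_unlock_lock()	smp_mb()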

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: <linux-arch@vger.kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/1386799151-2219-8-git-send-email-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
arch/powerpc/include/asm/spinlock.h