locking/atomics/alpha: Add smp_read_barrier_depends() to _release()/_relaxed() atomics

As part of the fight against smp_read_barrier_depends(), we require
dependency ordering to be preserved when a dependency is headed by a load
performed using an atomic operation.

This patch adds smp_read_barrier_depends() to the _release() and _relaxed()
atomics on alpha, which otherwise lack anything to enforce dependency
ordering.
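
For illustration, the pattern at stake is an address dependency headed by
the value returned from a _relaxed() atomic. A minimal sketch (the names
seq, slots, NR_SLOTS and peek_next_slot are hypothetical, not from this
patch):

	#define NR_SLOTS	16

	struct slot {
		int busy;
	};

	static atomic_t seq = ATOMIC_INIT(0);
	static struct slot slots[NR_SLOTS];

	int peek_next_slot(void)
	{
		int idx = atomic_inc_return_relaxed(&seq) % NR_SLOTS;

		/*
		 * Address-dependent load: the read of slots[idx].busy must
		 * not be satisfied before the load half of the increment
		 * above. On Alpha, only an smp_read_barrier_depends()
		 * between the two enforces this; with this patch it is
		 * emitted by the _relaxed() atomic itself.
		 */
		return READ_ONCE(slots[idx].busy);
	}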

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1508840570-22169-6-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
commit 5a8897cc76
parent 59ecbbe7b3
Author:    Will Deacon <will.deacon@arm.com>
Date:      2017-10-24 11:22:50 +01:00
Committer: Ingo Molnar

--- a/arch/alpha/include/asm/atomic.h
+++ b/arch/alpha/include/asm/atomic.h
@@ -13,6 +13,15 @@
  * than regular operations.
  */

+/*
+ * To ensure dependency ordering is preserved for the _relaxed and
+ * _release atomics, an smp_read_barrier_depends() is unconditionally
+ * inserted into the _relaxed variants, which are used to build the
+ * barriered versions. To avoid redundant back-to-back fences, we can
+ * define the _acquire and _fence versions explicitly.
+ */
+#define __atomic_op_acquire(op, args...)	op##_relaxed(args)
+#define __atomic_op_fence			__atomic_op_release

 #define ATOMIC_INIT(i)		{ (i) }
 #define ATOMIC64_INIT(i)	{ (i) }
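
For context, the generic layer in include/linux/atomic.h builds the
ordered forms from _relaxed when an architecture does not override them;
the 4.14-era fallbacks look roughly like this (a sketch, consult the tree
for the exact definitions):

	#ifndef __atomic_op_acquire
	#define __atomic_op_acquire(op, args...)			\
	({								\
		typeof(op##_relaxed(args)) __ret = op##_relaxed(args);	\
		smp_mb__after_atomic();					\
		__ret;							\
	})
	#endif

	#ifndef __atomic_op_fence
	#define __atomic_op_fence(op, args...)				\
	({								\
		typeof(op##_relaxed(args)) __ret;			\
		smp_mb__before_atomic();				\
		__ret = op##_relaxed(args);				\
		smp_mb__after_atomic();					\
		__ret;							\
	})
	#endif

On Alpha, smp_read_barrier_depends() expands to a full memory barrier
(the mb instruction), so once it is appended to the _relaxed variants each
of them already ends in the equivalent of smp_mb(): bare _relaxed then
satisfies _acquire, and _release (which issues smp_mb() before the
operation) satisfies the fully fenced form. That is what the two overrides
above exploit to avoid the redundant back-to-back barriers the generic
fallbacks would otherwise emit.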
@@ -60,6 +69,7 @@ static inline int atomic_##op##_return_relaxed(int i, atomic_t *v)	\
 	".previous"							\
 	:"=&r" (temp), "=m" (v->counter), "=&r" (result)		\
 	:"Ir" (i), "m" (v->counter) : "memory");			\
+	smp_read_barrier_depends();					\
 	return result;							\
 }
@@ -77,6 +87,7 @@ static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v)	\
 	".previous"							\
 	:"=&r" (temp), "=m" (v->counter), "=&r" (result)		\
 	:"Ir" (i), "m" (v->counter) : "memory");			\
+	smp_read_barrier_depends();					\
 	return result;							\
 }
@@ -111,6 +122,7 @@ static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v)	\
 	".previous"							\
 	:"=&r" (temp), "=m" (v->counter), "=&r" (result)		\
 	:"Ir" (i), "m" (v->counter) : "memory");			\
+	smp_read_barrier_depends();					\
 	return result;							\
 }
@@ -128,6 +140,7 @@ static __inline__ long atomic64_fetch_##op##_relaxed(long i, atomic64_t * v)	\
 	".previous"							\
 	:"=&r" (temp), "=m" (v->counter), "=&r" (result)		\
 	:"Ir" (i), "m" (v->counter) : "memory");			\
+	smp_read_barrier_depends();					\
 	return result;							\
 }
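
Expanded for op = add, asm_op = addl, the patched 32-bit return variant
reads roughly as follows (a sketch of the macro expansion; comments added
here for exposition):

	static inline int atomic_add_return_relaxed(int i, atomic_t *v)
	{
		long temp, result;
		__asm__ __volatile__(
		"1:	ldl_l %0,%1\n"		/* load-locked v->counter */
		"	addl %0,%3,%2\n"	/* result = old + i */
		"	addl %0,%3,%0\n"	/* temp = old + i */
		"	stl_c %0,%1\n"		/* store-conditional */
		"	beq %0,2f\n"		/* retry if the store failed */
		".subsection 2\n"
		"2:	br 1b\n"
		".previous"
		:"=&r" (temp), "=m" (v->counter), "=&r" (result)
		:"Ir" (i), "m" (v->counter) : "memory");
		smp_read_barrier_depends();	/* added by this patch */
		return result;
	}

The barrier sits after the LL/SC loop but before the return, so any access
that depends on the returned value is ordered after the atomic's load.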