mirror of
https://github.com/torvalds/linux.git
synced 2024-12-28 13:51:44 +00:00
tools/memory-model: Finish the removal of rb-dep, smp_read_barrier_depends(), and lockless_dereference()

Commit bf28ae5627 ("tools/memory-model: Remove rb-dep, smp_read_barrier_depends, and lockless_dereference") was merged too early, while it was still in RFC form. This patch adds in the missing pieces.

Akira pointed out some typos in the original patch, and he noted that cheatsheet.txt should indicate that READ_ONCE() now implies an address dependency. Andrea suggested documenting the relationship between unsuccessful RMW operations and address dependencies.

Andrea pointed out that the macro for rcu_dereference() in linux.def should now use the "once" annotation instead of "deref". He also suggested that the comments should mention commit 5a8897cc76 ("locking/atomics/alpha: Add smp_read_barrier_depends() to _release()/_relaxed() atomics") as an important precursor, and he contributed commit cb13b424e9 ("locking/xchg/alpha: Add unconditional memory barrier to cmpxchg()"), which is another prerequisite.

Suggested-by: Akira Yokosawa <akiyks@gmail.com>
Suggested-by: Andrea Parri <parri.andrea@gmail.com>
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
[ Fixed read_read_lock() typo reported by Akira. ]
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Andrea Parri <parri.andrea@gmail.com>
Acked-by: Akira Yokosawa <akiyks@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boqun.feng@gmail.com
Cc: dhowells@redhat.com
Cc: j.alglave@ucl.ac.uk
Cc: linux-arch@vger.kernel.org
Cc: luc.maranget@inria.fr
Cc: npiggin@gmail.com
Cc: will.deacon@arm.com
Fixes: bf28ae5627 ("tools/memory-model: Remove rb-dep, smp_read_barrier_depends, and lockless_dereference")
Link: http://lkml.kernel.org/r/1520443660-16858-4-git-send-email-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This commit is contained in:
parent
ff1fe5e079
commit
bd5c0ba2cd
tools/memory-model/Documentation/cheatsheet.txt

@@ -1,11 +1,11 @@
                                    Prior Operation     Subsequent Operation
                                    ---------------  ---------------------------
                                 C  Self  R  W  RWM  Self  R  W  DR  DW  RMW  SV
-                               __  ----  -  -  ---  ----  -  -  --  --  ---  --
+                               --  ----  -  -  ---  ----  -  -  --  --  ---  --
 
 Store, e.g., WRITE_ONCE()            Y                                    Y
-Load, e.g., READ_ONCE()              Y                              Y     Y
-Unsuccessful RMW operation           Y                              Y     Y
+Load, e.g., READ_ONCE()              Y                          Y   Y     Y
+Unsuccessful RMW operation           Y                          Y   Y     Y
 rcu_dereference()                    Y                          Y   Y     Y
 Successful *_acquire()               R                   Y   Y  Y   Y     Y  Y
 Successful *_release()         C        Y  Y    Y     W                      Y
tools/memory-model/Documentation/explanation.txt

@@ -826,7 +826,7 @@ A-cumulative; they only affect the propagation of stores that are
 executed on C before the fence (i.e., those which precede the fence in
 program order).
 
-read_lock(), rcu_read_unlock(), and synchronize_rcu() fences have
+rcu_read_lock(), rcu_read_unlock(), and synchronize_rcu() fences have
 other properties which we discuss later.
 
 
@@ -1138,7 +1138,7 @@ final effect is that even though the two loads really are executed in
 program order, it appears that they aren't.
 
 This could not have happened if the local cache had processed the
-incoming stores in FIFO order.  In constrast, other architectures
+incoming stores in FIFO order.  By contrast, other architectures
 maintain at least the appearance of FIFO order.
 
 In practice, this difficulty is solved by inserting a special fence
tools/memory-model/linux-kernel.def

@@ -13,7 +13,7 @@ WRITE_ONCE(X,V) { __store{once}(X,V); }
 smp_store_release(X,V) { __store{release}(*X,V); }
 smp_load_acquire(X) __load{acquire}(*X)
 rcu_assign_pointer(X,V) { __store{release}(X,V); }
-rcu_dereference(X) __load{deref}(X)
+rcu_dereference(X) __load{once}(X)
 
 // Fences
 smp_mb() { __fence{mb} ; }
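The "once" annotation above is what herd7 sees when it checks litmus tests against the model. A message-passing test exercising exactly this rcu_assign_pointer()/rcu_dereference() pairing can be written as below; this sketch is modeled on the MP+onceassign+derefonce pattern shipped with the memory model, so treat the test name and exact initial-state syntax as assumptions rather than a verbatim copy:

```
C MP+onceassign+derefonce

(*
 * Result: Never
 *
 * Sketch: the reader must not see pre-initialization garbage (r1=0)
 * through a pointer obtained via rcu_dereference().
 *)

{
y=z;
z=0;
}

P0(int *x, int **y)
{
	WRITE_ONCE(*x, 1);
	rcu_assign_pointer(*y, x);
}

P1(int **y)
{
	int *r0;
	int r1;

	rcu_read_lock();
	r0 = rcu_dereference(*y);
	r1 = READ_ONCE(*r0);
	rcu_read_unlock();
}

exists (1:r0=x /\ 1:r1=0)
```

With rcu_dereference() mapped to an address-dependency-preserving "once" load, the exists clause is never satisfied: the dependent read cannot observe the pre-publication value.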