mirror of https://github.com/torvalds/linux.git
x86/ticketlock: Fix spin_unlock_wait() livelock
arch_spin_unlock_wait() looks very suboptimal, to the point I think this is just wrong and can lead to livelock: if the lock is heavily contended we can never see head == tail.

But we do not need to wait for arch_spin_is_locked() == F. If it is locked we only need to wait until the current owner drops this lock. So we could simply spin until old_head != lock->tickets.head in this case, but .head can overflow and thus we can't check "unlocked" only once before the main loop.

Also, the "unlocked" check can ignore TICKET_SLOWPATH_FLAG bit.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Paul E.McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <Waiman.Long@hp.com>
Link: http://lkml.kernel.org/r/20141201213417.GA5842@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 9fd7fc34cf
commit 78bff1c868
@@ -184,9 +184,21 @@ static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
 
 static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
 {
-	while (arch_spin_is_locked(lock))
+	__ticket_t head = ACCESS_ONCE(lock->tickets.head);
+
+	for (;;) {
+		struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
+		/*
+		 * We need to check "unlocked" in a loop, tmp.head == head
+		 * can be false positive because of overflow.
+		 */
+		if (tmp.head == (tmp.tail & ~TICKET_SLOWPATH_FLAG) ||
+		    tmp.head != head)
+			break;
+
 		cpu_relax();
+	}
 }
 
 /*
  * Read-write spinlocks, allowing multiple readers
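To make the changelog's argument concrete, here is a small stand-alone user-space sketch of the idea. It is not kernel code: the "ticket_t" width (uint16_t), the struct name, and the simulated lock hand-off are illustrative assumptions. It models why waiting for head == tail can spin forever under heavy contention, while waiting for the snapshotted head to move (or the lock to become free) exits as soon as the current owner unlocks.

/*
 * Illustration only; ticket_t, its width and the hand-off simulation are
 * assumptions, not the kernel's definitions.
 */
#include <stdio.h>
#include <stdint.h>

typedef uint16_t ticket_t;
struct tickets { ticket_t head, tail; };

int main(void)
{
	struct tickets lock = { .head = 3, .tail = 7 };	/* locked, waiters queued */
	ticket_t snap = lock.head;	/* like ACCESS_ONCE(lock->tickets.head) */
	int handoffs = 0;

	for (;;) {
		/*
		 * The "unlocked" check stays inside the loop rather than being
		 * done once up front: head is a small integer and can wrap back
		 * to the snapshot, so "head != snap" alone is not enough.
		 */
		if (lock.head == lock.tail || lock.head != snap)
			break;
		/*
		 * Model heavy contention: the owner unlocks and a new waiter
		 * immediately takes a ticket, so head == tail is never observed.
		 */
		lock.head++;
		lock.tail++;
		handoffs++;
	}

	/*
	 * The old "while (head != tail) cpu_relax();" would spin forever in
	 * this model; the new condition exits after one owner change.
	 */
	printf("done after %d hand-off(s)\n", handoffs);
	return 0;
}

Compiled with any C compiler (cc model.c && ./a.out), this prints "done after 1 hand-off(s)"; with the old head == tail condition the loop in this model never terminates, which is the livelock the patch removes.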