Mirror of https://github.com/torvalds/linux.git
Commit 7c2102e56a:
The current implementation of synchronize_sched_expedited() incorrectly assumes that resched_cpu() is unconditional, which it is not. This means that synchronize_sched_expedited() can hang when resched_cpu()'s trylock fails, as follows (analysis by Neeraj Upadhyay):

o CPU1 is waiting for the expedited wait to complete:

      sync_rcu_exp_select_cpus
          rdp->exp_dynticks_snap & 0x1   // returns 1 for CPU5
          IPI sent to CPU5

      synchronize_sched_expedited_wait
          ret = swait_event_timeout(rsp->expedited_wq,
                                    sync_rcu_preempt_exp_done(rnp_root),
                                    jiffies_stall);

      expmask = 0x20, CPU5 is in the idle path (in cpuidle_enter())

o CPU5 handles the IPI but fails to acquire the rq lock:

      sync_sched_exp_handler
          resched_cpu
              returns after failing to trylock rq->lock
          need_resched is not set

o CPU5 calls rcu_idle_enter() and, because need_resched is not set, goes idle (schedule() is not called).

o CPU1 reports an RCU stall.

Given that resched_cpu() is now used only by RCU, this commit fixes the assumption by making resched_cpu() unconditional.

Reported-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Suggested-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: stable@vger.kernel.org
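For illustration, here is a minimal sketch of how the fix described above plays out in resched_cpu(); the helper names used (cpu_rq(), resched_curr(), the raw_spin_*_irqsave() variants) are taken from context, and this is not the verbatim kernel diff:

```c
/*
 * Sketch only (not the exact upstream change): the commit message says the
 * old trylock-based path could return without setting need_resched, so an
 * idle CPU never rescheduled and the expedited grace period stalled.
 */
void resched_cpu(int cpu)
{
	struct rq *rq = cpu_rq(cpu);
	unsigned long flags;

	/*
	 * Old behaviour (assumed shape):
	 *
	 *	if (!raw_spin_trylock_irqsave(&rq->lock, flags))
	 *		return;		// IPI handler does nothing, CPU may go idle
	 *
	 * New behaviour: take rq->lock unconditionally, so resched_curr()
	 * always runs and need_resched is always set on the target CPU.
	 */
	raw_spin_lock_irqsave(&rq->lock, flags);
	resched_curr(rq);
	raw_spin_unlock_irqrestore(&rq->lock, flags);
}
```

Acquiring the lock unconditionally costs a little extra contention on rq->lock, which the commit message deems acceptable because resched_cpu() is now used only by RCU.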
Files in kernel/sched/:

autogroup.c
autogroup.h
clock.c
completion.c
core.c
cpuacct.c
cpuacct.h
cpudeadline.c
cpudeadline.h
cpufreq_schedutil.c
cpufreq.c
cpupri.c
cpupri.h
cputime.c
deadline.c
debug.c
fair.c
features.h
idle_task.c
idle.c
loadavg.c
Makefile
membarrier.c
rt.c
sched-pelt.h
sched.h
stats.c
stats.h
stop_task.c
swait.c
topology.c
wait_bit.c
wait.c