sched/rt: Use container_of() to get root domain in rto_push_irq_work_func()

When rto_push_irq_work_func() is called, it looks at the RT overloaded
bitmask in the root domain via the runqueue (rq->rd). The problem is that
during CPU hotplug (up and down), nothing stops rq->rd from changing between
taking rq->rd->rto_lock and releasing it. That means the lock that is
released may not be the same lock that was taken.

Instead of using this_rq()->rd to get the root domain, we can obtain the
root domain directly from the irq work passed to the routine, since the
rto_push_work irq work is embedded in the root domain itself:

 container_of(work, struct root_domain, rto_push_work)

This keeps the root domain consistent.

Reported-by: Pavan Kondeti <pkondeti@codeaurora.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 4bdced5c9a ("sched/rt: Simplify the IPI based RT balancing logic")
Link: http://lkml.kernel.org/r/CAEU1=PkiHO35Dzna8EQqNSKW1fr1y1zRQ5y66X117MG06sQtNA@mail.gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
commit ad0f1d9d65 (parent 2ed41a5502)
Author: Steven Rostedt (VMware), 2018-01-23 20:45:37 -05:00
Committed by: Ingo Molnar

--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1907,9 +1907,8 @@ static void push_rt_tasks(struct rq *rq)
  * the rt_loop_next will cause the iterator to perform another scan.
  *
  */
-static int rto_next_cpu(struct rq *rq)
+static int rto_next_cpu(struct root_domain *rd)
 {
-	struct root_domain *rd = rq->rd;
 	int next;
 	int cpu;
 
@@ -1985,7 +1984,7 @@ static void tell_cpu_to_push(struct rq *rq)
 	 * Otherwise it is finishing up and an ipi needs to be sent.
 	 */
 	if (rq->rd->rto_cpu < 0)
-		cpu = rto_next_cpu(rq);
+		cpu = rto_next_cpu(rq->rd);
 
 	raw_spin_unlock(&rq->rd->rto_lock);
 
@@ -1998,6 +1997,8 @@ static void tell_cpu_to_push(struct rq *rq)
 /* Called from hardirq context */
 void rto_push_irq_work_func(struct irq_work *work)
 {
+	struct root_domain *rd =
+		container_of(work, struct root_domain, rto_push_work);
 	struct rq *rq;
 	int cpu;
 
@@ -2013,18 +2014,18 @@ void rto_push_irq_work_func(struct irq_work *work)
 		raw_spin_unlock(&rq->lock);
 	}
 
-	raw_spin_lock(&rq->rd->rto_lock);
+	raw_spin_lock(&rd->rto_lock);
 
 	/* Pass the IPI to the next rt overloaded queue */
-	cpu = rto_next_cpu(rq);
+	cpu = rto_next_cpu(rd);
 
-	raw_spin_unlock(&rq->rd->rto_lock);
+	raw_spin_unlock(&rd->rto_lock);
 
 	if (cpu < 0)
 		return;
 
 	/* Try the next RT overloaded CPU */
-	irq_work_queue_on(&rq->rd->rto_push_work, cpu);
+	irq_work_queue_on(&rd->rto_push_work, cpu);
 }
 
 #endif /* HAVE_RT_PUSH_IPI */