sched: Remove lockdep check in sched_move_task()

sched_move_task() is the only interface to change sched_task_group:
cpu_cgrp_subsys methods and autogroup_move_group() use it.
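
For reference, the cgroup side funnels every task through sched_move_task(); a simplified sketch of the attach method from that era (not the verbatim upstream code; autogroup_move_group() does the equivalent for each thread of a process):

static void cpu_cgroup_attach(struct cgroup_subsys_state *css,
			      struct cgroup_taskset *tset)
{
	struct task_struct *task;

	/* every task migrating into the cpu controller passes through here */
	cgroup_taskset_for_each(task, tset)
		sched_move_task(task);
}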

Everything is synchronized by task_rq_lock(), so cpu_cgroup_attach()
is ordered with other users of sched_move_task(). This means we do not
need RCU here: if we've dereferenced a tg here, the .attach method
hasn't been called for it yet.
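
For context, here is a trimmed sketch of the surrounding function (the middle lines match the hunk below; the rest is abbreviated), showing that the dereference happens entirely under task_rq_lock():

void sched_move_task(struct task_struct *tsk)
{
	struct task_group *tg;
	unsigned long flags;
	struct rq *rq;

	rq = task_rq_lock(tsk, &flags);	/* serializes all movers of tsk */

	/* ... dequeue_task() / put_prev_task() ... */

	tg = container_of(task_css_check(tsk, cpu_cgrp_id, true),
			  struct task_group, css);
	tg = autogroup_task_group(tsk, tg);
	tsk->sched_task_group = tg;

	/* ... set_task_rq() / requeue ... */

	task_rq_unlock(rq, tsk, &flags);
}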

Thus, we should pass "true" to task_css_check() to silence lockdep
warnings.
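
Passing "true" is enough because task_css_check() feeds its last argument into the RCU lockdep condition; paraphrased from include/linux/cgroup.h of that time (not verbatim, the real check lists a couple more held-lock alternatives):

#define task_css_set_check(task, __c)					\
	rcu_dereference_check((task)->cgroups,				\
			      lockdep_is_held(&cgroup_mutex) || (__c))

#define task_css_check(task, subsys_id, __c)				\
	task_css_set_check((task), (__c))->subsys[(subsys_id)]

A constant "true" makes the condition trivially satisfied, which is legitimate here because task_rq_lock() already excludes a concurrent ->attach().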

Fixes: eeb61e53ea ("sched: Fix race between task_group and sched_task_group")
Reported-by: Oleg Nesterov <oleg@redhat.com>
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Kirill Tkhai <ktkhai@parallels.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1414473874.8574.2.camel@tkhai
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/sched/core.c

@@ -7444,8 +7444,12 @@ void sched_move_task(struct task_struct *tsk)
 	if (unlikely(running))
 		put_prev_task(rq, tsk);
 
-	tg = container_of(task_css_check(tsk, cpu_cgrp_id,
-				lockdep_is_held(&tsk->sighand->siglock)),
+	/*
+	 * All callers are synchronized by task_rq_lock(); we do not use RCU
+	 * which is pointless here. Thus, we pass "true" to task_css_check()
+	 * to prevent lockdep warnings.
+	 */
+	tg = container_of(task_css_check(tsk, cpu_cgrp_id, true),
 			  struct task_group, css);
 	tg = autogroup_task_group(tsk, tg);
 	tsk->sched_task_group = tg;