c9c294a630
calc_delta_asym() is supposed to do the same as calc_delta_fair(), except linearly shrink the result for negative nice processes - this gives them a smaller preemption threshold so that they are more easily preempted.

The problem is that for task groups se->load.weight is the per-cpu share of the actual task group weight; take that into account.

Also provide a debug switch to disable the asymmetry (which I still don't like - but it does greatly benefit some workloads).

This would explain the interactivity issues reported against group scheduling.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Cc: Mike Galbraith <efault@gmx.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
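As a rough illustration of the fix described above, here is a hypothetical, self-contained C sketch (not the actual patch). It shows the intended asymmetry: deltas are scaled by NICE_0_LOAD/weight but only ever shrunk, and walking up a parent chain is one plausible way to account for group entities whose per-cpu weight is only a share of the whole group's weight. The struct entity, the parent walk, scale_fair()/scale_asym() and the plain division are stand-ins for the kernel's sched_entity/cfs_rq machinery, not its real code.

/*
 * Hypothetical sketch only: plain C stand-ins for the kernel's
 * sched_entity/cfs_rq machinery, compilable on its own.
 */
#include <stdio.h>

#define NICE_0_LOAD	1024UL

struct entity {
	unsigned long	weight;		/* per-cpu share of this entity's weight */
	struct entity	*parent;	/* group hierarchy; NULL for a top-level entity */
};

/* Scale a delta by NICE_0_LOAD / weight, as calc_delta_fair() does. */
static unsigned long scale_fair(unsigned long delta, unsigned long weight)
{
	return delta * NICE_0_LOAD / weight;
}

/*
 * Asymmetric variant: only shrink the delta for "negative nice" weights
 * (weight > NICE_0_LOAD), never grow it.  Walking the parent chain is one
 * way to account for group entities whose per-cpu weight is only a share
 * of the whole group's weight.
 */
static unsigned long scale_asym(unsigned long delta, struct entity *se)
{
	for (; se; se = se->parent) {
		unsigned long w = se->weight;

		if (w < NICE_0_LOAD)
			w = NICE_0_LOAD;	/* don't inflate low-weight deltas */

		delta = scale_fair(delta, w);
	}
	return delta;
}

int main(void)
{
	struct entity group = { .weight = 512,  .parent = NULL };   /* per-cpu share */
	struct entity task  = { .weight = 2048, .parent = &group }; /* nice < 0 */

	printf("asym delta: %lu\n", scale_asym(4000, &task));	/* prints 2000 */
	return 0;
}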
11 lines · 287 B · C
SCHED_FEAT(NEW_FAIR_SLEEPERS, 1)
SCHED_FEAT(NORMALIZED_SLEEPER, 1)
SCHED_FEAT(WAKEUP_PREEMPT, 1)
SCHED_FEAT(START_DEBIT, 1)
SCHED_FEAT(AFFINE_WAKEUPS, 1)
SCHED_FEAT(CACHE_HOT_BUDDY, 1)
SCHED_FEAT(SYNC_WAKEUPS, 1)
SCHED_FEAT(HRTICK, 1)
SCHED_FEAT(DOUBLE_TICK, 0)
SCHED_FEAT(ASYM_GRAN, 1)
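The ASYM_GRAN entry above is the debug switch mentioned in the commit message. SCHED_FEAT(name, default) lists like this one lend themselves to the X-macro pattern: the same lines are expanded once into an enum of bit positions and once into a default feature bitmask, and code then tests individual features at runtime. Below is a minimal, self-contained sketch of that pattern under assumed names - FEATURE_LIST, AS_ENUM, AS_MASK and the __FEAT_ prefix are illustrative, only three of the features above are reproduced, and the kernel itself instead #includes the feature header twice under different SCHED_FEAT definitions.

/*
 * Minimal X-macro sketch; FEATURE_LIST and the __FEAT_ names are
 * illustrative, and only three of the features above are reproduced.
 */
#include <stdio.h>

#define FEATURE_LIST(SCHED_FEAT)		\
	SCHED_FEAT(WAKEUP_PREEMPT, 1)		\
	SCHED_FEAT(DOUBLE_TICK, 0)		\
	SCHED_FEAT(ASYM_GRAN, 1)

/* First expansion: one enum constant (a bit index) per feature. */
#define AS_ENUM(name, enabled)	__FEAT_##name,
enum { FEATURE_LIST(AS_ENUM) __FEAT_NR };

/* Second expansion: OR each enabled feature's bit into the default mask. */
#define AS_MASK(name, enabled)	((enabled) << __FEAT_##name) |
static unsigned int sched_features = FEATURE_LIST(AS_MASK) 0;

/* Runtime test, analogous to the kernel's sched_feat() helper. */
#define sched_feat(name)	(sched_features & (1U << __FEAT_##name))

int main(void)
{
	printf("ASYM_GRAN:   %s\n", sched_feat(ASYM_GRAN)   ? "on" : "off");
	printf("DOUBLE_TICK: %s\n", sched_feat(DOUBLE_TICK) ? "on" : "off");
	return 0;
}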