sched/fair: Ensure tasks spreading in LLC during LB
schbench shows a latency increase above the 95th percentile since:

  commit 0b0695f2b3 ("sched/fair: Rework load_balance()")

Align the behavior of the load balancer with the wake up path, which
tries to select an idle CPU which belongs to the LLC for a waking task.

calculate_imbalance() will use nr_running instead of the spare capacity
when CPUs share resources (ie cache) at the domain level. This will
ensure a better spread of tasks on idle CPUs.

Running schbench on a hikey (8 cores, arm64) shows the problem:

tip/sched/core:
schbench -m 2 -t 4 -s 10000 -c 1000000 -r 10
Latency percentiles (usec)
	50.0th: 33
	75.0th: 45
	90.0th: 51
	95.0th: 4152
	*99.0th: 14288
	99.5th: 14288
	99.9th: 14288
	min=0, max=14276

tip/sched/core + patch:
schbench -m 2 -t 4 -s 10000 -c 1000000 -r 10
Latency percentiles (usec)
	50.0th: 34
	75.0th: 47
	90.0th: 52
	95.0th: 78
	*99.0th: 94
	99.5th: 94
	99.9th: 94
	min=0, max=94

Fixes: 0b0695f2b3 ("sched/fair: Rework load_balance()")
Reported-by: Chris Mason <clm@fb.com>
Suggested-by: Rik van Riel <riel@surriel.com>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Rik van Riel <riel@surriel.com>
Tested-by: Rik van Riel <riel@surriel.com>
Link: https://lkml.kernel.org/r/20201102102457.28808-1-vincent.guittot@linaro.org
commit 16b0a7a1a0
parent a73f863af4
@@ -9031,7 +9031,8 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 	 * emptying busiest.
 	 */
 	if (local->group_type == group_has_spare) {
-		if (busiest->group_type > group_fully_busy) {
+		if ((busiest->group_type > group_fully_busy) &&
+		    !(env->sd->flags & SD_SHARE_PKG_RESOURCES)) {
 			/*
 			 * If busiest is overloaded, try to fill spare
 			 * capacity. This might end up creating spare capacity
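With this change, a domain whose CPUs share the package cache
(SD_SHARE_PKG_RESOURCES) no longer takes the fill-spare-capacity branch
above; per the commit message, calculate_imbalance() instead balances on
nr_running in the group_has_spare case, matching the wake-up path's
preference for an idle CPU within the same LLC.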