sched/fair: Fix SMT4 group_smt_balance handling

For SMT4, any group with more than 2 tasks will be marked as
group_smt_balance. Retain the behaviour of group_has_spare by treating
the group with the fewest idle_cpus as the busiest group.

Also, handle the rounding effect of adding (ncores_local + ncores_busiest)
before the division when the local group is fully idle and the busiest
group's imbalance works out to less than 2 tasks. The local group should
try to pull at least 1 task in this case, so imbalance should be set to 2
instead (a worked example follows the first hunk below).

Fixes: fee1759e4f ("sched/fair: Determine active load balance for SMT sched groups")
Acked-by: Shrikanth Hegde <sshegde@linux.vnet.ibm.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: http://lkml.kernel.org/r/6cd1633036bb6b651af575c32c2a9608a106702c.camel@linux.intel.com
commit 450e749707 (parent f8858d9606)
Author: Tim Chen, 2023-09-07 10:42:21 -07:00
Committer: Ingo Molnar

@@ -9580,7 +9580,7 @@ static inline long sibling_imbalance(struct lb_env *env,
 	imbalance /= ncores_local + ncores_busiest;
 
 	/* Take advantage of resource in an empty sched group */
-	if (imbalance == 0 && local->sum_nr_running == 0 &&
+	if (imbalance <= 1 && local->sum_nr_running == 0 &&
 	    busiest->sum_nr_running > 1)
 		imbalance = 2;
 
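
Below is a standalone userspace sketch, not the kernel code, of the arithmetic
around the changed line, assuming sibling_imbalance() computes the raw
core-weighted imbalance and then applies the round-to-nearest normalization
visible in the hunk context. The helper name and example values are made up.
With 1 local core, 4 busiest cores, an idle local group and 2 tasks on the
busiest group, the normalized value rounds down to 1, so the old "== 0" test
left it untouched and (since the caller appears to halve this value before
migrating) nothing was pulled; the new "<= 1" test bumps it to 2 so one task
can move.

/*
 * Illustrative sketch of the rounding behaviour fixed above.
 * Names and values are examples, not kernel definitions.
 */
#include <stdio.h>

static unsigned long sibling_imbalance_sketch(unsigned long ncores_local,
                                              unsigned long ncores_busiest,
                                              unsigned long local_nr,
                                              unsigned long busiest_nr)
{
        /* Balance so that nr_running/ncores matches across the groups */
        unsigned long imbalance = ncores_local * busiest_nr;

        if (imbalance > ncores_busiest * local_nr)
                imbalance -= ncores_busiest * local_nr;
        else
                imbalance = 0;

        /* Round-to-nearest normalization, mirroring the hunk's context */
        imbalance = 2 * imbalance + ncores_local + ncores_busiest;
        imbalance /= ncores_local + ncores_busiest;

        /* The fix: a fully idle local group should pull at least one task */
        if (imbalance <= 1 && local_nr == 0 && busiest_nr > 1)
                imbalance = 2;

        return imbalance;
}

int main(void)
{
        /*
         * (2*2 + 1 + 4) / (1 + 4) = 1 before the fix-up; the "<= 1"
         * check raises it to 2, so this prints 2.
         */
        printf("%lu\n", sibling_imbalance_sketch(1, 4, 0, 2));
        return 0;
}
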
@@ -9768,6 +9768,15 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 		break;
 
 	case group_smt_balance:
+		/*
+		 * Check if we have spare CPUs on either SMT group to
+		 * choose has spare or fully busy handling.
+		 */
+		if (sgs->idle_cpus != 0 || busiest->idle_cpus != 0)
+			goto has_spare;
+
+		fallthrough;
+
 	case group_fully_busy:
 		/*
 		 * Select the fully busy group with highest avg_load. In
@@ -9807,6 +9816,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 			else
 				return true;
 		}
+has_spare:
 
 		/*
 		 * Select not overloaded group with lowest number of idle cpus
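
To make the new routing concrete, here is a standalone userspace sketch, not
the kernel's update_sd_pick_busiest(), of the tie-break the has_spare path
gives group_smt_balance candidates: while either group still has idle CPUs,
keep the group_has_spare behaviour and treat the group with fewer idle CPUs
(more running tasks on a tie) as busier; only when both are saturated fall
back to the fully-busy comparison on avg_load. The struct and function names
below are illustrative only.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for the relevant sg_lb_stats fields */
struct sg_stats_sketch {
        unsigned int idle_cpus;
        unsigned int sum_nr_running;
        unsigned long avg_load;
};

/* Return true if @sgs should replace @busiest as the busiest group */
static bool smt_balance_pick_busiest(const struct sg_stats_sketch *sgs,
                                     const struct sg_stats_sketch *busiest)
{
        /*
         * Spare capacity on either side: behave like group_has_spare and
         * prefer the group with the fewest idle CPUs.
         */
        if (sgs->idle_cpus != 0 || busiest->idle_cpus != 0) {
                if (sgs->idle_cpus > busiest->idle_cpus)
                        return false;
                if (sgs->idle_cpus == busiest->idle_cpus &&
                    sgs->sum_nr_running <= busiest->sum_nr_running)
                        return false;
                return true;
        }

        /* Both groups saturated: fully-busy handling, higher load wins */
        return sgs->avg_load > busiest->avg_load;
}

int main(void)
{
        /* Candidate has fewer idle CPUs than the current busiest: picked */
        struct sg_stats_sketch busiest = { .idle_cpus = 2, .sum_nr_running = 6, .avg_load = 300 };
        struct sg_stats_sketch sgs     = { .idle_cpus = 1, .sum_nr_running = 7, .avg_load = 250 };

        printf("%d\n", smt_balance_pick_busiest(&sgs, &busiest));
        return 0;
}
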