mirror of https://github.com/torvalds/linux.git
synced 2024-11-18 01:51:53 +00:00
b8c9636140
The estimated utilization for a task:

    util_est = max(util_avg, est.enqueued, est.ewma)

is defined based on:

 - util_avg: the PELT-defined utilization
 - est.enqueued: the util_avg at the end of the last activation
 - est.ewma: an exponential moving average of the est.enqueued samples

According to this definition, when a task suddenly changes its bandwidth
requirements from small to big, the EWMA needs to collect multiple samples
before converging up to track the new, bigger utilization.

This slow convergence towards bigger utilization values is not aligned with the
default scheduler behavior, which is to optimize for performance. Moreover, the
est.ewma component fails to compensate for temporary utilization drops which
span just a few est.enqueued samples.

To let util_est do a better job in the scenario depicted above, change its
definition by making util_est directly follow upward motion and only decay
est.ewma on downward motion.

Signed-off-by: Patrick Bellasi <patrick.bellasi@matbug.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Douglas Raillard <douglas.raillard@arm.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Quentin Perret <qperret@google.com>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20191023205630.14469-1-patrick.bellasi@matbug.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
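The change described above reduces to a simple asymmetric update rule: track increases immediately, smooth only decreases. Below is a minimal standalone sketch of that rule in C, assuming the 1/4 weight conventionally used for the est.ewma update; the helper name and freestanding form are illustrative only and are not the kernel's actual implementation:

/*
 * Illustrative only: combine a new est.enqueued sample into est.ewma,
 * following increases immediately and smoothing only decreases.
 */
static unsigned long util_est_ramp_sketch(unsigned long ewma,
					   unsigned long enqueued)
{
	/* Upward motion: follow the new sample directly. */
	if (enqueued >= ewma)
		return enqueued;

	/* Downward motion: decay towards the sample, ewma += (enqueued - ewma)/4 */
	return ewma - (ewma >> 2) + (enqueued >> 2);
}

In the kernel this faster ramp-up is gated by the UTIL_EST_FASTUP feature flag, which this commit adds at the bottom of the file below.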
93 lines
2.4 KiB
C
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Only give sleepers 50% of their service deficit. This allows
 * them to run sooner, but does not allow tons of sleepers to
 * rip the spread apart.
 */
SCHED_FEAT(GENTLE_FAIR_SLEEPERS, true)

/*
 * Place new tasks ahead so that they do not starve already running
 * tasks.
 */
SCHED_FEAT(START_DEBIT, true)

/*
 * Prefer to schedule the task we woke last (assuming it failed
 * wakeup-preemption), since it's likely going to consume data we
 * touched, increases cache locality.
 */
SCHED_FEAT(NEXT_BUDDY, false)

/*
 * Prefer to schedule the task that ran last (when we did
 * wake-preempt) as that likely will touch the same data, increases
 * cache locality.
 */
SCHED_FEAT(LAST_BUDDY, true)

/*
 * Consider buddies to be cache hot, decreases the likelihood of a
 * cache buddy being migrated away, increases cache locality.
 */
SCHED_FEAT(CACHE_HOT_BUDDY, true)

/*
 * Allow wakeup-time preemption of the current task:
 */
SCHED_FEAT(WAKEUP_PREEMPTION, true)

SCHED_FEAT(HRTICK, false)
SCHED_FEAT(DOUBLE_TICK, false)

/*
 * Decrement CPU capacity based on time not spent running tasks.
 */
SCHED_FEAT(NONTASK_CAPACITY, true)

/*
 * Queue remote wakeups on the target CPU and process them
 * using the scheduler IPI. Reduces rq->lock contention/bounces.
 */
SCHED_FEAT(TTWU_QUEUE, true)

/*
 * When doing wakeups, attempt to limit superfluous scans of the LLC domain.
 */
SCHED_FEAT(SIS_AVG_CPU, false)
SCHED_FEAT(SIS_PROP, true)

/*
 * Issue a WARN when we do multiple update_rq_clock() calls
 * in a single rq->lock section. Default disabled because the
 * annotations are not complete.
 */
SCHED_FEAT(WARN_DOUBLE_CLOCK, false)

#ifdef HAVE_RT_PUSH_IPI
/*
 * In order to avoid a thundering herd of CPUs that are
 * lowering their priorities at the same time, while there is
 * a single CPU with a migratable RT task waiting to run,
 * the other CPUs would all try to take that CPU's rq lock and
 * possibly create a lot of contention. Instead, sending an
 * IPI to that CPU and letting that CPU push the RT task to where
 * it should go may be the better scenario.
 */
SCHED_FEAT(RT_PUSH_IPI, true)
#endif

SCHED_FEAT(RT_RUNTIME_SHARE, true)
SCHED_FEAT(LB_MIN, false)
SCHED_FEAT(ATTACH_AGE_LOAD, true)

SCHED_FEAT(WA_IDLE, true)
SCHED_FEAT(WA_WEIGHT, true)
SCHED_FEAT(WA_BIAS, true)

/*
 * UtilEstimation. Use estimated CPU utilization.
 */
SCHED_FEAT(UTIL_EST, true)
SCHED_FEAT(UTIL_EST_FASTUP, true)
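For context, this header is an X-macro list: each SCHED_FEAT(name, enabled) entry is expanded by whatever definition of SCHED_FEAT() is in effect at the include site, once to build feature indices and once to build the default bitmask. The sketch below shows that consumption pattern; the kernel's real definitions live elsewhere in the scheduler sources, so the variable names and types here are illustrative rather than exact:

/* First pass: turn each entry into an enum constant (a feature index). */
#define SCHED_FEAT(name, enabled)	__SCHED_FEAT_##name,
enum {
#include "features.h"
	__SCHED_FEAT_NR,
};
#undef SCHED_FEAT

/* Second pass: OR together one bit per feature that defaults to enabled. */
#define SCHED_FEAT(name, enabled)	(1UL << __SCHED_FEAT_##name) * enabled |
static const unsigned long default_sched_features =
#include "features.h"
	0;
#undef SCHED_FEAT

With CONFIG_SCHED_DEBUG enabled, these compile-time defaults can also be toggled at runtime by writing a feature name (or NO_<name>) to /sys/kernel/debug/sched_features.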