author     Vincent Guittot                            2021-10-19 14:35:34 +0200
committer  Peter Zijlstra                             2021-10-31 11:11:37 +0100
commit     9d783c8dd112ad4b619e74e4bf57d2be0b400693 (patch)
tree       cf68010c9b8ecc7df550466273d627d67c830401 /kernel
parent     9e9af819db5dbe4bf99101628955a26e2a41a1a5 (diff)
sched/fair: Skip update_blocked_averages if we are deferring load balance
In newidle_balance(), the scheduler skips load balancing to the newly idle CPU
when the first sched domain (sd) of this_rq satisfies:
this_rq->avg_idle < sd->max_newidle_lb_cost
When this condition holds, the costly call to update_blocked_averages() is not
useful and simply adds overhead.
Check the condition early in newidle_balance() to skip
update_blocked_averages() when possible.
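[Editor's note] As a rough illustration of the new check, the condition being tested is whether the CPU's expected remaining idle time is shorter than the measured worst-case cost of a newidle balance on its first sched domain. The helper below is hypothetical and exists only for illustration; it is not part of the kernel, though rq->avg_idle and sched_domain->max_newidle_lb_cost are the real fields used by the patch:

/*
 * Hypothetical predicate, for illustration only: true when a newidle
 * balance is expected to cost more than the CPU's remaining idle time,
 * so both the balance and update_blocked_averages() can be skipped.
 */
static inline bool defer_newidle_balance(struct rq *this_rq,
					 struct sched_domain *sd)
{
	return sd && this_rq->avg_idle < sd->max_newidle_lb_cost;
}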
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Link: https://lore.kernel.org/r/20211019123537.17146-3-vincent.guittot@linaro.org
Diffstat (limited to 'kernel')
-rw-r--r--   kernel/sched/fair.c | 9
1 file changed, 6 insertions, 3 deletions
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c0145677ee99..c4c36865321b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10873,17 +10873,20 @@ static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
 	 */
 	rq_unpin_lock(this_rq, rf);
 
+	rcu_read_lock();
+	sd = rcu_dereference_check_sched_domain(this_rq->sd);
+
 	if (this_rq->avg_idle < sysctl_sched_migration_cost ||
-	    !READ_ONCE(this_rq->rd->overload)) {
+	    !READ_ONCE(this_rq->rd->overload) ||
+	    (sd && this_rq->avg_idle < sd->max_newidle_lb_cost)) {
 
-		rcu_read_lock();
-		sd = rcu_dereference_check_sched_domain(this_rq->sd);
 		if (sd)
 			update_next_balance(sd, &next_balance);
 		rcu_read_unlock();
 
 		goto out;
 	}
+	rcu_read_unlock();
 
 	raw_spin_rq_unlock(this_rq);
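[Editor's note] For readability, this is roughly how the start of newidle_balance() reads once the hunk above is applied. It is a reconstruction from the diff, not a verbatim copy of the file, and the trailing comment is an assumption based on the commit message rather than code shown in the hunk:

	rq_unpin_lock(this_rq, rf);

	rcu_read_lock();
	/* Look up the first sched domain before deciding whether to bail out. */
	sd = rcu_dereference_check_sched_domain(this_rq->sd);

	if (this_rq->avg_idle < sysctl_sched_migration_cost ||
	    !READ_ONCE(this_rq->rd->overload) ||
	    (sd && this_rq->avg_idle < sd->max_newidle_lb_cost)) {

		/* Balancing is skipped; only refresh the next balance time. */
		if (sd)
			update_next_balance(sd, &next_balance);
		rcu_read_unlock();

		goto out;
	}
	rcu_read_unlock();

	raw_spin_rq_unlock(this_rq);

	/*
	 * update_blocked_averages() and the actual load balance follow here
	 * only when none of the bail-out conditions above applied (assumption
	 * based on the commit message; not shown in the hunk).
	 */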