| field | value | date |
|---|---|---|
| author | Frederic Weisbecker | 2019-12-03 17:01:06 +0100 |
| committer | Peter Zijlstra | 2019-12-17 13:32:50 +0100 |
| commit | 5443a0be6121d557e12951537e10159e4c61035d | |
| tree | 9dc1a41330100d587a3ea7a8f0fdfa13ce0d7b76 /kernel/sched | |
| parent | 7c2e8bbd87db661122e92d71a394dd7bb3ada4d3 | |
sched: Use fair::prio_changed() instead of ad-hoc implementation
set_user_nice() implements its own version of fair::prio_changed() and
therefore misses a specific optimization towards nohz_full CPUs that
avoids sending a resched IPI to a reniced task running alone. Use the
proper callback instead.
Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20191203160106.18806-3-frederic@kernel.org
Diffstat (limited to 'kernel/sched')
kernel/sched/core.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
```diff
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 90e4b00ace89..15508c202bf5 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4540,17 +4540,17 @@ void set_user_nice(struct task_struct *p, long nice)
 	p->prio = effective_prio(p);
 	delta = p->prio - old_prio;
 
-	if (queued) {
+	if (queued)
 		enqueue_task(rq, p, ENQUEUE_RESTORE | ENQUEUE_NOCLOCK);
-		/*
-		 * If the task increased its priority or is running and
-		 * lowered its priority, then reschedule its CPU:
-		 */
-		if (delta < 0 || (delta > 0 && task_running(rq, p)))
-			resched_curr(rq);
-	}
 	if (running)
 		set_next_task(rq, p);
+
+	/*
+	 * If the task increased its priority or is running and
+	 * lowered its priority, then reschedule its CPU:
+	 */
+	p->sched_class->prio_changed(rq, p, old_prio);
+
 out_unlock:
 	task_rq_unlock(rq, p, &rf);
 }
```
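For context on the optimization the commit message refers to: the fair class's prio_changed() callback (prio_changed_fair() in kernel/sched/fair.c) returns early when the reniced task is the only fair task on its runqueue, so no resched IPI is sent to a nohz_full CPU running that task alone; that early return was introduced by the parent commit in this series. The block below is a rough sketch of what the callback looks like around this kernel version, for illustration only, not a verbatim copy of the kernel source:

```c
/*
 * Approximate shape of fair's prio_changed() callback (~v5.5).
 * set_user_nice() now reaches this via p->sched_class->prio_changed().
 */
static void prio_changed_fair(struct rq *rq, struct task_struct *p, int oldprio)
{
	/* Nothing to do if the task is not on a runqueue. */
	if (!task_on_rq_queued(p))
		return;

	/*
	 * Early return from the parent commit of this series: if the reniced
	 * task is the only fair task on the runqueue there is nobody to
	 * preempt, so skip resched_curr() and the IPI it would send to a
	 * nohz_full CPU.
	 */
	if (rq->cfs.nr_running == 1)
		return;

	/*
	 * Reschedule if the task is currently running and its priority
	 * dropped, or check for preemption if its priority may now beat
	 * the current task's.
	 */
	if (rq->curr == p) {
		if (p->prio > oldprio)
			resched_curr(rq);
	} else {
		check_preempt_curr(rq, p, 0);
	}
}
```

With set_user_nice() delegating to this callback, the reschedule decision lives in one place per scheduling class instead of being duplicated (and allowed to drift) in core.c.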