author     Paul Turner  2020-03-10 18:01:13 -0700
committer  Peter Zijlstra  2020-03-20 13:06:18 +0100
commit     46a87b3851f0d6eb05e6d83d5c5a30df0eca8f76
tree       b98cf01a4a12e708f063c07788f7a3a7b41a93f2 /kernel/sched
parent     fe61468b2cbc2b7ce5f8d3bf32ae5001d4c434e9
sched/core: Distribute tasks within affinity masks
Currently, when updating the affinity of tasks via either cpuset.cpus or
sched_setaffinity(), tasks not currently running within the newly
specified mask will be arbitrarily assigned to the first CPU within the
mask.
This (particularly in the case that we are restricting masks) can
result in many tasks being assigned to the first CPUs of their new
masks.
This:
1) Can induce scheduling delays while the load-balancer has a chance to
spread them between their new CPUs.
2) Can antagonize a poor load-balancer behavior where it has a
difficult time recognizing that a cross-socket imbalance has been
forced by an affinity mask.
This change adds a new cpumask interface to allow iterated calls to
distribute within the intersection of the provided masks.
The cases that this mainly affects are:
- modifying cpuset.cpus
- when tasks join a cpuset
- when modifying a task's affinity via sched_setaffinity(2)
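To make the "iterated calls to distribute" idea concrete, here is a rough,
self-contained userspace sketch of the concept only; it is not the kernel
helper this patch adds (which lives outside the kernel/sched diffstat below),
and every name in it is illustrative. A cursor remembers the last CPU handed
out, and each call returns the next set bit in the intersection of the two
masks, wrapping around.

/*
 * Userspace approximation of the "iterated distribute" idea: instead of
 * always returning the first set bit in (mask1 & mask2), remember the last
 * answer and hand out the next set bit on each call, wrapping around.
 * Illustrative only; the kernel uses a per-CPU cursor and real cpumasks.
 */
#include <stdio.h>

static unsigned int distribute_prev;	/* last CPU handed out */

/* Return the next set bit in (mask1 & mask2) after distribute_prev. */
static unsigned int any_and_distribute(unsigned long mask1, unsigned long mask2,
				       unsigned int nr_cpus)
{
	unsigned long both = mask1 & mask2;
	unsigned int i;

	if (!both)
		return nr_cpus;		/* no valid CPU; caller treats as error */

	for (i = 1; i <= nr_cpus; i++) {
		unsigned int cpu = (distribute_prev + i) % nr_cpus;

		if (both & (1UL << cpu)) {
			distribute_prev = cpu;
			return cpu;
		}
	}
	return nr_cpus;
}

int main(void)
{
	/* Restrict eight "tasks" to CPUs 2,3,6,7 (mask 0xcc) out of 8 CPUs. */
	for (int t = 0; t < 8; t++)
		printf("task %d -> cpu %u\n", t, any_and_distribute(0xff, 0xcc, 8));
	return 0;
}

Running this assigns the tasks to CPUs 2, 3, 6, 7, 2, 3, 6, 7 in turn, rather
than starting every task on CPU 2 the way a plain "first CPU in the
intersection" lookup would.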
Signed-off-by: Paul Turner <pjt@google.com>
Signed-off-by: Josh Don <joshdon@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Qais Yousef <qais.yousef@arm.com>
Tested-by: Qais Yousef <qais.yousef@arm.com>
Link: https://lkml.kernel.org/r/20200311010113.136465-1-joshdon@google.com
Diffstat (limited to 'kernel/sched')
 kernel/sched/core.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 978bf6f6e0ff..014d4f793313 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1650,7 +1650,12 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
 	if (cpumask_equal(p->cpus_ptr, new_mask))
 		goto out;
 
-	dest_cpu = cpumask_any_and(cpu_valid_mask, new_mask);
+	/*
+	 * Picking a ~random cpu helps in cases where we are changing affinity
+	 * for groups of tasks (ie. cpuset), so that load balancing is not
+	 * immediately required to distribute the tasks within their new mask.
+	 */
+	dest_cpu = cpumask_any_and_distribute(cpu_valid_mask, new_mask);
 	if (dest_cpu >= nr_cpu_ids) {
 		ret = -EINVAL;
 		goto out;
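A rough way to observe the effect from userspace (purely illustrative, not
part of the patch; the exact placement you see also depends on migration
timing and on the load balancer running afterwards): spawn a group of busy
threads, shrink their affinity mask, and print where each one lands. Before
this change the threads tend to pile onto the first CPU of the new mask;
with cpumask_any_and_distribute() they should start out spread across it.

/*
 * Illustrative test only.  Build with: gcc -O2 -pthread distribute_test.c
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

#define NTHREADS 8

static volatile int moved;		/* set once affinity has been updated */

static void *worker(void *arg)
{
	long id = (long)arg;

	/* Busy-wait until main() has shrunk our affinity mask. */
	while (!moved)
		;
	usleep(10000);			/* give the migration a moment */
	printf("thread %ld now on CPU %d\n", id, sched_getcpu());
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];
	cpu_set_t small;
	long i;

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, worker, (void *)i);

	/* Restrict every thread to CPUs 2 and 3. */
	CPU_ZERO(&small);
	CPU_SET(2, &small);
	CPU_SET(3, &small);
	for (i = 0; i < NTHREADS; i++)
		pthread_setaffinity_np(tid[i], sizeof(small), &small);

	moved = 1;
	for (i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}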