author     Siddha, Suresh B    2006-12-10 02:20:33 -0800
committer  Linus Torvalds      2006-12-10 09:55:43 -0800
commit     783609c6cb4eaa23f2ac5c968a44483584ec133f (patch)
tree       678704bab2c69f5115ad84452e931adf4c11f3f4 /include
parent     b18ec80396834497933d77b81ec0918519f4e2a7 (diff)
[PATCH] sched: decrease number of load balances
Currently, at a particular domain, each cpu in the sched group does a load balance at the frequency of balance_interval. The more cores and threads there are, the more cpus there will be in each sched group at the SMP and NUMA domains, and we end up spending quite a bit of time doing load balancing in those domains.

Fix this by making only one cpu (the first idle cpu, or the first cpu in the group if all cpus are busy) in the sched group do the load balance at that particular sched domain; the load will slowly percolate down to the other cpus within that group (when they do load balancing at lower domains).

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Christoph Lameter <clameter@engr.sgi.com>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
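The balancer-selection rule described above can be sketched in standalone C as follows. This is an illustrative sketch only, not the kernel implementation: the function name pick_balance_cpu, the flat array representation of the group's cpus, and the cpu_idle callback are assumptions made for the example (the kernel uses cpumasks and its own idle tests).

/*
 * Illustrative sketch only -- not the actual kernel code.
 * Pick the single cpu in a sched group that should run load
 * balancing at this domain: the first idle cpu, or the first
 * cpu in the group if every cpu in it is busy.
 */
static int pick_balance_cpu(const int *group_cpus, int ncpus,
                            int (*cpu_idle)(int cpu))
{
	int i;

	for (i = 0; i < ncpus; i++)
		if (cpu_idle(group_cpus[i]))
			return group_cpus[i];	/* first idle cpu balances */

	return group_cpus[0];		/* all busy: first cpu balances */
}

Every other cpu in the group skips the balance pass at this domain; it picks up its share of the redistributed load when it balances at lower domains.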
Diffstat (limited to 'include')
-rw-r--r--  include/linux/sched.h | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index ea92e5c89089..72d6927d29ed 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -707,6 +707,7 @@ struct sched_domain {
 	unsigned long lb_hot_gained[MAX_IDLE_TYPES];
 	unsigned long lb_nobusyg[MAX_IDLE_TYPES];
 	unsigned long lb_nobusyq[MAX_IDLE_TYPES];
+	unsigned long lb_stopbalance[MAX_IDLE_TYPES];
 	/* Active load balancing */
 	unsigned long alb_cnt;