author     Peter Zijlstra  2013-08-14 14:55:31 +0200
committer  Ingo Molnar     2013-09-25 14:07:49 +0200
commit     f27dde8deef33c9e58027df11ceab2198601d6a6
tree       31793590541baf5b9998b5ca0cc459b757a7cbcf /kernel/cpu
parent     4a2b4b222743bb07fedf985b884550f2ca067ea9
sched: Add NEED_RESCHED to the preempt_count
In order to combine the preemption and need_resched tests we need to
fold the need_resched information into the preempt_count value.
Since the NEED_RESCHED flag is set across CPUs this needs to be an
atomic operation; however, we very much want to avoid making
preempt_count atomic. Therefore we keep the existing TIF_NEED_RESCHED
infrastructure in place, but test it and fold its value into
preempt_count at three sites; namely:
- resched_task(): when setting TIF_NEED_RESCHED on the current task.
- scheduler_ipi(): when resched_task() sets TIF_NEED_RESCHED on a
  remote task it follows up with a reschedule IPI, and we can modify
  the CPU-local preempt_count from the IPI handler.
- cpu_idle_loop(): for when resched_task() found tsk_is_polling().
We use an inverted bitmask to indicate need_resched so that a 0 means
both need_resched and !atomic.
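For illustration, a minimal stand-alone sketch of this encoding;
PREEMPT_NEED_RESCHED and the set/clear helpers mirror the names this
series introduces, but this is a toy: a single global instead of the
per-task/per-CPU count, an assumed bit value, and should_resched()
merely stands for the combined test the changelog describes.

#include <stdbool.h>
#include <stdio.h>

/*
 * Stand-alone sketch of the inverted NEED_RESCHED bit folded into the
 * preempt count.  Not the kernel's implementation.
 */
#define PREEMPT_NEED_RESCHED	0x80000000U	/* assumed bit value */

/* Bit set == no reschedule needed, so an all-zero value means
 * "preemption enabled AND need_resched set". */
static unsigned int preempt_count = PREEMPT_NEED_RESCHED;

static void set_preempt_need_resched(void)
{
	preempt_count &= ~PREEMPT_NEED_RESCHED;	/* inverted: clearing the bit requests a resched */
}

static void clear_preempt_need_resched(void)
{
	preempt_count |= PREEMPT_NEED_RESCHED;
}

static bool should_resched(void)
{
	/* A single comparison now covers both "preempt_count is zero"
	 * and "need_resched is set", which is the point of the fold. */
	return preempt_count == 0;
}

int main(void)
{
	printf("%d\n", should_resched());	/* 0: nothing pending */
	set_preempt_need_resched();		/* what the three fold sites above end up doing */
	printf("%d\n", should_resched());	/* 1: preemptible and a resched is needed */
	clear_preempt_need_resched();
	return 0;
}

With this encoding, preempt_disable()/preempt_enable() can keep doing
plain non-atomic arithmetic on the count; only the three fold sites
listed above ever transfer the TIF_NEED_RESCHED state into it.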
Also remove the barrier() in preempt_enable() between
preempt_enable_no_resched() and preempt_check_resched() to avoid
having to reload the preemption value and to allow the compiler to use
the flags of the previous decrement. I couldn't come up with any sane
reason for this barrier() to be there, as preempt_enable_no_resched()
already has a barrier() before doing the decrement.
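A minimal stand-alone sketch of the macro shape this refers to,
assuming the NEED_RESCHED fold above has already been applied;
barrier() is a plain compiler barrier here, and preempt_count_var /
toy_schedule() are invented stand-ins, not kernel code.

#include <stdio.h>

/*
 * Simplified stand-ins for the macros named above; in the kernel these
 * operate on the per-task/per-CPU preempt count.  This only illustrates
 * the ordering argument.
 */
#define barrier()	__asm__ __volatile__("" ::: "memory")

static unsigned int preempt_count_var = 1;	/* one preempt_disable() level, NEED_RESCHED already folded in */

static void toy_schedule(void)
{
	printf("would reschedule here\n");
}

/* barrier() sits *before* the decrement, as noted above. */
#define preempt_enable_no_resched() \
do { \
	barrier(); \
	preempt_count_var--; \
} while (0)

#define preempt_check_resched() \
do { \
	if (preempt_count_var == 0)	/* folded count: 0 == preemptible && resched needed */ \
		toy_schedule(); \
} while (0)

/*
 * With no barrier() between the two steps, the compiler may test the
 * value produced by the decrement directly instead of reloading it.
 */
#define preempt_enable() \
do { \
	preempt_enable_no_resched(); \
	preempt_check_resched(); \
} while (0)

int main(void)
{
	preempt_enable();	/* count drops to 0 -> prints the message */
	return 0;
}

The test in preempt_check_resched() can then reuse the value (or the
condition flags) produced by the decrement in preempt_enable_no_resched()
rather than reloading the count.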
Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/n/tip-7a7m5qqbn5pmwnd4wko9u6da@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/cpu')
 kernel/cpu/idle.c | 7 +++++++
 1 file changed, 7 insertions(+)
diff --git a/kernel/cpu/idle.c b/kernel/cpu/idle.c
index c261409500e4..988573a9a387 100644
--- a/kernel/cpu/idle.c
+++ b/kernel/cpu/idle.c
@@ -105,6 +105,13 @@ static void cpu_idle_loop(void)
 				__current_set_polling();
 			}
 			arch_cpu_idle_exit();
+			/*
+			 * We need to test and propagate the TIF_NEED_RESCHED
+			 * bit here because we might not have send the
+			 * reschedule IPI to idle tasks.
+			 */
+			if (tif_need_resched())
+				set_preempt_need_resched();
 		}
 		tick_nohz_idle_exit();
 		schedule_preempt_disabled();