author | Tomasz Buchert | 2010-10-27 15:33:33 -0700
---|---|---
committer | Linus Torvalds | 2010-10-27 18:03:08 -0700
commit | 0bdba580ab052a21e3eda2764ed22d9ee962392b |
tree | 9d8fb958c8612a9c72b8662d9b51c12c76ccd0bb /kernel |
parent | d5de4ddb1bc430289bede76c0d87cabee93f749a |
cgroup_freezer: fix can_attach() to prohibit moving from/to freezing/frozen cgroups
It is possible to move a task from its cgroup even if this group is
'FREEZING'. This results in a nasty bug - the moved task will become
frozen OUTSIDE its original cgroup and will remain in a permanent 'D'
state.
This patch allows the task to be migrated only between THAWED cgroups.
This behavior was observed and easily reproduced on a single-core laptop.
Note that reproducibility depends heavily on the machine used. A program
and instructions on how to reproduce the bug can be fetched from:
http://pentium.hopto.org/~thinred/repos/linux-misc/freezer_bug.c
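The reproducer linked above may no longer be reachable, so the sketch below only shows the general shape of the race; it is not the original freezer_bug.c. It assumes a freezer hierarchy mounted at /cgroups/freezer with a child group "test" (hypothetical paths): a task is put into the group, the group is asked to freeze, and the task is moved back to the root group while the freeze is still in flight. Before this patch the move could succeed and leave the task frozen outside the group; after it, the move fails with EBUSY.

```c
/*
 * Hypothetical reproduction sketch (not the original freezer_bug.c).
 * Assumes:  mount -t cgroup -o freezer none /cgroups/freezer
 *           mkdir /cgroups/freezer/test
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Write a short string (a pid or a freezer state) to a cgroup control file. */
static void write_str(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0 || write(fd, val, strlen(val)) < 0) {
		perror(path);
		exit(1);
	}
	close(fd);
}

int main(void)
{
	char pid[32];
	pid_t child = fork();

	if (child < 0) {
		perror("fork");
		return 1;
	}
	if (child == 0) {
		/* Victim task: just sleep forever; the freezer will freeze it. */
		for (;;)
			pause();
	}
	snprintf(pid, sizeof(pid), "%d", (int)child);

	/* Put the child into the test freezer group. */
	write_str("/cgroups/freezer/test/tasks", pid);

	/* Start freezing the group; it passes through FREEZING before FROZEN. */
	write_str("/cgroups/freezer/test/freezer.state", "FROZEN");

	/*
	 * Immediately try to move the child back to the root group.
	 * Before the patch this could succeed while the group was still
	 * FREEZING, leaving the child in 'D' state outside the group, so
	 * thawing "test" no longer thaws it. After the patch this write
	 * fails with EBUSY.
	 */
	write_str("/cgroups/freezer/tasks", pid);

	/* Inspect the child with: cat /proc/<pid>/status */
	return 0;
}
```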
Signed-off-by: Tomasz Buchert <tomasz.buchert@inria.fr>
Cc: Matt Helsley <matthltc@us.ibm.com>
Cc: Paul Menage <menage@google.com>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'kernel')
-rw-r--r-- | kernel/cgroup_freezer.c | 15 |
1 file changed, 8 insertions, 7 deletions
diff --git a/kernel/cgroup_freezer.c b/kernel/cgroup_freezer.c
index 321e2a007d25..c287627bed3d 100644
--- a/kernel/cgroup_freezer.c
+++ b/kernel/cgroup_freezer.c
@@ -173,24 +173,25 @@ static int freezer_can_attach(struct cgroup_subsys *ss,
 	/*
 	 * Anything frozen can't move or be moved to/from.
-	 *
-	 * Since orig_freezer->state == FROZEN means that @task has been
-	 * frozen, so it's sufficient to check the latter condition.
 	 */
 
-	if (is_task_frozen_enough(task))
+	freezer = cgroup_freezer(new_cgroup);
+	if (freezer->state != CGROUP_THAWED)
 		return -EBUSY;
 
-	freezer = cgroup_freezer(new_cgroup);
-	if (freezer->state == CGROUP_FROZEN)
+	rcu_read_lock();
+	if (__cgroup_freezing_or_frozen(task)) {
+		rcu_read_unlock();
 		return -EBUSY;
+	}
+	rcu_read_unlock();
 
 	if (threadgroup) {
 		struct task_struct *c;
 
 		rcu_read_lock();
 		list_for_each_entry_rcu(c, &task->thread_group, thread_group) {
-			if (is_task_frozen_enough(c)) {
+			if (__cgroup_freezing_or_frozen(c)) {
 				rcu_read_unlock();
 				return -EBUSY;
 			}
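For readers without the tree at hand: the new checks lean on the helper __cgroup_freezing_or_frozen(), called here under rcu_read_lock() to test whether a task's own freezer cgroup is FREEZING or FROZEN. A minimal sketch of what such a helper looks like, assuming the existing task_freezer() accessor in kernel/cgroup_freezer.c (the in-tree body may differ):

```c
/*
 * Sketch only; not copied from this tree, so the exact body may differ.
 * Caller must hold rcu_read_lock(): task_freezer() dereferences the
 * task's RCU-protected css pointer.
 */
static bool __cgroup_freezing_or_frozen(struct task_struct *task)
{
	enum freezer_state state = task_freezer(task)->state;

	return state == CGROUP_FREEZING || state == CGROUP_FROZEN;
}
```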