author    Christoph Lameter  2006-06-30 01:55:38 -0700
committer Linus Torvalds    2006-06-30 11:25:35 -0700
commit    9a865ffa34b6117a5e0b67640a084d8c2e198c93 (patch)
tree      c295d5a0831df81eeeded3834f32f513b9ae05c7 /mm/vmscan.c
parent    34aa1330f9b3c5783d269851d467326525207422 (diff)
[PATCH] zoned vm counters: conversion of nr_slab to per zone counter
- Allows reclaim to access the counter without looping over per-processor counts.
- Allows accurate statistics on how many pages are used in a zone by
the slab. This may become useful to balance slab allocations over
various zones.
[akpm@osdl.org: bugfix]
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'mm/vmscan.c')
-rw-r--r--  mm/vmscan.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0960846d649f..d6942436ac97 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1362,7 +1362,7 @@ unsigned long shrink_all_memory(unsigned long nr_pages)
 	for_each_zone(zone)
 		lru_pages += zone->nr_active + zone->nr_inactive;

-	nr_slab = read_page_state(nr_slab);
+	nr_slab = global_page_state(NR_SLAB);

 	/* If slab caches are huge, it's better to hit them first */
 	while (nr_slab >= lru_pages) {
 		reclaim_state.reclaimed_slab = 0;