author | Roman Gushchin | 2020-12-29 15:15:07 -0800
---|---|---
committer | Linus Torvalds | 2020-12-29 15:36:49 -0800
commit | 1f3147b49d75b47b6be54a1e6dfa87a4921e1e51 |
tree | 56cedce3b399e22eb0fee8f592ad77c823129a8b /mm |
parent | 605cc30dea249edf1b659e7d0146a2cf13cbbf71 |
mm: slub: call account_slab_page() after slab page initialization
It's convenient to have page->objects initialized before calling into
account_slab_page(). In particular, this information can be used to
preallocate the obj_cgroup vector (a sketch of this ordering constraint
follows the diff below).
Let's call account_slab_page() a bit later, after the initialization of
page->objects.
This commit doesn't bring any functional change, but is required for
further optimizations.
[akpm@linux-foundation.org: undo changes needed by forthcoming mm-memcg-slab-pre-allocate-obj_cgroups-for-slab-caches-with-slab_account.patch]
Link: https://lkml.kernel.org/r/20201110195753.530157-1-guro@fb.com
Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Christoph Lameter <cl@linux.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r-- | mm/slub.c | 5
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 0c8b43a5b3b0..dc5b42e700b8 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1619,9 +1619,6 @@ static inline struct page *alloc_slab_page(struct kmem_cache *s,
 	else
 		page = __alloc_pages_node(node, flags, order);
 
-	if (page)
-		account_slab_page(page, order, s);
-
 	return page;
 }
 
@@ -1774,6 +1771,8 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 
 	page->objects = oo_objects(oo);
 
+	account_slab_page(page, oo_order(oo), s);
+
 	page->slab_cache = s;
 	__SetPageSlab(page);
 	if (page_is_pfmemalloc(page))
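
As the commit message notes, the point of moving the accounting call is that page->objects can then be used to preallocate the per-object obj_cgroup vector. The following is a minimal userspace sketch of that ordering constraint, not kernel code: struct page, account_slab_page() and the objcgs vector here are simplified stand-ins for the kernel's definitions, and the real preallocation lands in a follow-up patch.

```c
/*
 * Userspace sketch only: struct page, account_slab_page() and the
 * objcgs vector are simplified stand-ins for the kernel's types.
 */
#include <stdio.h>
#include <stdlib.h>

struct obj_cgroup;                      /* opaque placeholder */

struct page {
	unsigned int objects;           /* number of objects in the slab */
	struct obj_cgroup **objcgs;     /* per-object cgroup vector */
};

/*
 * Stand-in for the accounting hook: it sizes the per-object vector
 * from page->objects, so it must run after that field is set.
 */
static int account_slab_page(struct page *page)
{
	page->objcgs = calloc(page->objects, sizeof(*page->objcgs));
	return page->objcgs ? 0 : -1;
}

static struct page *allocate_slab(unsigned int objects)
{
	struct page *page = calloc(1, sizeof(*page));

	if (!page)
		return NULL;

	/* Initialize page->objects first ... */
	page->objects = objects;

	/* ... and only then call the accounting hook, as the patch does. */
	if (account_slab_page(page)) {
		free(page);
		return NULL;
	}
	return page;
}

int main(void)
{
	struct page *page = allocate_slab(32);

	if (!page)
		return 1;
	printf("slab accounted for %u objects\n", page->objects);
	free(page->objcgs);
	free(page);
	return 0;
}
```

Calling the hook in alloc_slab_page(), as before the patch, would correspond to sizing the vector from a still-zero page->objects, which is why the call is moved into allocate_slab() after the field is initialized.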