mm: slub: call account_slab_page() after slab page initialization
author    Roman Gushchin <guro@fb.com>
          Tue, 29 Dec 2020 23:15:07 +0000 (15:15 -0800)
committer Linus Torvalds <torvalds@linux-foundation.org>
          Tue, 29 Dec 2020 23:36:49 +0000 (15:36 -0800)
It's convenient to have page->objects initialized before calling into
account_slab_page().  In particular, this information can be used to
pre-allocate the obj_cgroup vector.

Let's call account_slab_page() a bit later, after the initialization of
page->objects.

This commit doesn't bring any functional change, but is required for
further optimizations.

[akpm@linux-foundation.org: undo changes needed by forthcoming mm-memcg-slab-pre-allocate-obj_cgroups-for-slab-caches-with-slab_account.patch]
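The point of the reordering: once page->objects is filled in, account_slab_page() knows how many objects the new slab page holds, so the obj_cgroup vector used for per-object memcg accounting can be sized and allocated up front. A minimal sketch of that forthcoming use, assuming a helper along the lines of objs_per_slab_page() and a gfp mask passed down from allocate_slab() (the names and signatures here are illustrative, not part of this commit):

        /*
         * Sketch of the forthcoming pre-allocation: because
         * account_slab_page() now runs after page->objects is set,
         * the obj_cgroup vector can be sized from it right away.
         */
        unsigned int objects = objs_per_slab_page(s, page);    /* returns page->objects */
        struct obj_cgroup **vec;

        vec = kcalloc_node(objects, sizeof(struct obj_cgroup *),
                           gfp, page_to_nid(page));
        /* on success, vec is attached to the page for later per-object charging */

Sizing the vector at slab-page allocation time avoids having to allocate it lazily in the object-charging path later on.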

Link: https://lkml.kernel.org/r/20201110195753.530157-1-guro@fb.com
Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Christoph Lameter <cl@linux.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/slub.c

index 0c8b43a5b3b0339820c891cb9cde893387e03b13..dc5b42e700b853eabd668d56fd04f9eb64c5c4d0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1619,9 +1619,6 @@ static inline struct page *alloc_slab_page(struct kmem_cache *s,
        else
                page = __alloc_pages_node(node, flags, order);
 
-       if (page)
-               account_slab_page(page, order, s);
-
        return page;
 }
 
@@ -1774,6 +1771,8 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 
        page->objects = oo_objects(oo);
 
+       account_slab_page(page, oo_order(oo), s);
+
        page->slab_cache = s;
        __SetPageSlab(page);
        if (page_is_pfmemalloc(page))