mm: fix inactive list balancing between NUMA nodes and cgroups

author     Johannes Weiner <hannes@cmpxchg.org>
           Fri, 19 Apr 2019 00:50:34 +0000 (17:50 -0700)
committer  Linus Torvalds <torvalds@linux-foundation.org>
           Fri, 19 Apr 2019 16:46:05 +0000 (09:46 -0700)
commit     1494ac7964b120922435536e34e1ead7cde5211f
tree       b2a8c30bb8ee998e0d98437fb6d8bdc8e3436121
parent     8f1a4dae2c403065bc0373cf2a49cf0ce2164ed1
mm: fix inactive list balancing between NUMA nodes and cgroups

During !CONFIG_CGROUP reclaim, we expand the inactive list size if it's
thrashing on the node that is about to be reclaimed.  But when cgroups
are enabled, we suddenly ignore the node scope and use the cgroup scope
only.  The result is that pressure bleeds between NUMA nodes depending
on whether cgroups are merely compiled into Linux.  This behavioral
difference is unexpected and undesirable.
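The mismatch comes down to which scope the refault counter is read from
when deciding whether to stop protecting the active list.  A rough,
simplified sketch of the kind of branch this describes (the exact code in
mm/vmscan.c's inactive_list_is_low() is not reproduced here; memcg_page_state()
and node_page_state() are the kernel's cgroup-wide and node-wide stat readers
of that era):

	if (memcg)	/* CONFIG_MEMCG: cgroup-wide count, blind to which node is under reclaim */
		refaults = memcg_page_state(memcg, WORKINGSET_ACTIVATE);
	else		/* !CONFIG_MEMCG: node-wide count */
		refaults = node_page_state(pgdat, WORKINGSET_ACTIVATE);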

When the refault adaptivity of the inactive list was first introduced,
there were no statistics at the lruvec level - the intersection of node
and memcg - so it was better than nothing.

But now that we have that infrastructure, use lruvec_page_state() to
make the list balancing decision always NUMA aware.
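A minimal sketch of the NUMA-aware version of that decision (simplified;
surrounding code and exact variable names omitted or approximated - the
lruvec->refaults snapshot comes from the earlier workingset-transition fix):

	/* an lruvec is one memcg's pages on one node, so this is node-scoped either way */
	refaults = lruvec_page_state(lruvec, WORKINGSET_ACTIVATE);
	if (file && lruvec->refaults != refaults)
		inactive_ratio = 0;	/* refaults observed: stop protecting the active list */

Because the same lruvec counter is consulted whether or not cgroups are
compiled in, pressure no longer bleeds between nodes in the CONFIG_MEMCG case.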

[hannes@cmpxchg.org: fix bisection hole]
Link: http://lkml.kernel.org/r/20190417155241.GB23013@cmpxchg.org
Link: http://lkml.kernel.org/r/20190412144438.2645-1-hannes@cmpxchg.org
Fixes: c9393956b236 ("mm: vmscan: fix IO/refault regression in cache workingset transition")
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/vmscan.c