page_alloc: fix invalid watermark check on a negative value
author     Jaewon Kim <jaewon31.kim@samsung.com>
           Mon, 25 Jul 2022 09:52:12 +0000 (18:52 +0900)
committer  Andrew Morton <akpm@linux-foundation.org>
           Fri, 29 Jul 2022 18:33:37 +0000 (11:33 -0700)
There was a report that a task was stuck waiting in
throttle_direct_reclaim, and the pgscan_direct_throttle counter in
vmstat kept increasing.

This is a bug where zone_watermark_fast returns true even when free
pages are very low. Commit b9f78fa27226 ("page_alloc: consider
highatomic reserve in watermark fast") changed the watermark fast path
to take the highatomic reserve into account, but it did not handle the
negative value case, which can happen when the reserved_highatomic
pageblock count is bigger than the actual free pages.
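
The check misbehaves because the subtraction is done in a signed long
while the watermark it is compared against is unsigned long, so by the
usual C arithmetic conversions a negative result turns into a huge
unsigned value and the check passes. A minimal standalone sketch of
that pitfall (not kernel code; the names and numbers are made up for
illustration):

  #include <stdio.h>

  int main(void)
  {
          unsigned long mark = 1024;              /* watermark, unsigned as in the kernel */
          long free_pages = 100;                  /* actual free pages */
          long reserved = 500;                    /* overestimated highatomic reserve */
          long fast_free = free_pages - reserved; /* -400 */

          /*
           * In "fast_free > mark" the signed fast_free is converted to
           * unsigned long, so -400 becomes a huge value and the
           * watermark check wrongly passes.
           */
          if (fast_free > mark)
                  printf("check passed with fast_free = %ld\n", fast_free);
          return 0;
  }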

If the watermark is considered OK for such a negative value, order-0
allocating contexts will consume all free pages without entering direct
reclaim, and eventually free pages may become depleted except for the
highatomic reserve.

Then allocating contexts may fall into throttle_direct_reclaim. This
symptom can easily happen on a system where the min watermark is low
and other reclaimers such as kswapd do not free pages quickly.

Handle the negative case by clamping with min().
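
A minimal sketch of the clamped subtraction, mirroring the min() used
in the patch below (a plain ternary stands in for the kernel's min()
macro; the numbers are the same illustrative ones as above):

  #include <stdio.h>

  int main(void)
  {
          unsigned long mark = 1024;              /* watermark */
          long usable_free = 100;                 /* actual free pages */
          long reserved = 500;                    /* overestimated highatomic reserve */

          /* never subtract more than what is actually free */
          usable_free -= (reserved < usable_free) ? reserved : usable_free;

          /* usable_free is now 0, so the watermark check correctly fails */
          printf("usable_free = %ld, check passes: %d\n",
                 usable_free, usable_free > mark);
          return 0;
  }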

Link: https://lkml.kernel.org/r/20220725095212.25388-1-jaewon31.kim@samsung.com
Fixes: b9f78fa27226 ("page_alloc: consider highatomic reserve in watermark fast")
Signed-off-by: Jaewon Kim <jaewon31.kim@samsung.com>
Reported-by: GyeongHwan Hong <gh21.hong@samsung.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Yong-Taek Lee <ytk.lee@samsung.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e008a3df0485c79ac24ef31b4407dd5921a9649a..b5b14b78c4fd4844cb071022570fd4b7e6959eb2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3968,11 +3968,15 @@ static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
         * need to be calculated.
         */
        if (!order) {
-               long fast_free;
+               long usable_free;
+               long reserved;
 
-               fast_free = free_pages;
-               fast_free -= __zone_watermark_unusable_free(z, 0, alloc_flags);
-               if (fast_free > mark + z->lowmem_reserve[highest_zoneidx])
+               usable_free = free_pages;
+               reserved = __zone_watermark_unusable_free(z, 0, alloc_flags);
+
+               /* reserved may over estimate high-atomic reserves. */
+               usable_free -= min(usable_free, reserved);
+               if (usable_free > mark + z->lowmem_reserve[highest_zoneidx])
                        return true;
        }