mm/rmap: fix old bug: munlocking THP missed other mlocks
author: Hugh Dickins <hughd@google.com>
        Wed, 7 Jul 2021 20:08:53 +0000 (13:08 -0700)
committer: Linus Torvalds <torvalds@linux-foundation.org>
        Sun, 11 Jul 2021 22:05:15 +0000 (15:05 -0700)
The kernel recovers in due course from missing Mlocked pages: but there
was no point in calling page_mlock() (formerly known as
try_to_munlock()) on a THP, because nothing got done even when it was
found to be mapped in another VM_LOCKED vma.

It's true that we need to be careful: Mlocked accounting of pte-mapped
THPs is too difficult (so consistently avoided); but Mlocked accounting
of only-pmd-mapped THPs is supposed to work, even when multiple mappings
are mlocked and munlocked or munmapped.  Refine the tests.

There is already a VM_BUG_ON_PAGE(PageDoubleMap) in page_mlock(), so
page_mlock_one() does not even have to worry about that complication.

(I said the kernel recovers: but would page reclaim be likely to split
THP before rediscovering that it's VM_LOCKED? I've not followed that up.)

Fixes: 97bfeea9c9fa ("thp, mlock: do not mlock PTE-mapped file huge pages")
Signed-off-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Link: https://lore.kernel.org/lkml/cfa154c-d595-406-eb7d-eb9df730f944@google.com/
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/rmap.c

index 746013e282c3649e0fe0d01104b79b66f80c1045..0e83c3be8568afaed6040c1f413d66cd8a79130e 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1442,8 +1442,9 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
                 */
                if (!(flags & TTU_IGNORE_MLOCK)) {
                        if (vma->vm_flags & VM_LOCKED) {
-                               /* PTE-mapped THP are never mlocked */
-                               if (!PageTransCompound(page)) {
+                               /* PTE-mapped THP are never marked as mlocked */
+                               if (!PageTransCompound(page) ||
+                                   (PageHead(page) && !PageDoubleMap(page))) {
                                        /*
                                         * Holding pte lock, we do *not* need
                                         * mmap_lock here
@@ -1984,9 +1985,11 @@ static bool page_mlock_one(struct page *page, struct vm_area_struct *vma,
                 * munlock_vma_pages_range().
                 */
                if (vma->vm_flags & VM_LOCKED) {
-                       /* PTE-mapped THP are never mlocked */
-                       if (!PageTransCompound(page))
-                               mlock_vma_page(page);
+                       /*
+                        * PTE-mapped THP are never marked as mlocked, but
+                        * this function is never called when PageDoubleMap().
+                        */
+                       mlock_vma_page(page);
                        page_vma_mapped_walk_done(&pvmw);
                }