hugetlbfs: flush before unlock on move_hugetlb_page_tables()
author    Nadav Amit <namit@vmware.com>
          Sun, 21 Nov 2021 20:40:08 +0000 (12:40 -0800)
committer Linus Torvalds <torvalds@linux-foundation.org>
          Mon, 22 Nov 2021 19:36:46 +0000 (11:36 -0800)
We must flush the TLB before releasing i_mmap_rwsem to avoid the
potential reuse of the page backing an unshared PMD.  This does not
hold in move_hugetlb_page_tables(): the last reference on the page
table can be dropped before the TLB flush takes place.

Prevent it by reordering the operations and flushing the TLB before
releasing i_mmap_rwsem.
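For clarity, the tail of move_hugetlb_page_tables() after this change
looks roughly as follows.  This is a simplified sketch reconstructed
from the hunk below, with the loop body and declarations elided; the
comments are annotations for this commit, not part of the kernel
source:

    for (; old_addr < old_end; old_addr += sz, new_addr += sz) {
            /* ... look up each source huge PTE and move it ... */
            move_huge_pte(vma, old_addr, new_addr, src_pte);
    }
    /*
     * Flush stale TLB entries while i_mmap_rwsem is still held, so the
     * page table page of an unshared PMD cannot be freed and reused
     * while some CPU may still hold a translation through it.
     */
    flush_tlb_range(vma, old_end - len, old_end);
    mmu_notifier_invalidate_range_end(&range);
    i_mmap_unlock_write(mapping);   /* safe to unlock only after the flush */

    return len + old_addr - old_end;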

Fixes: b5baa59aa48d ("mm, hugepages: add mremap() support for hugepage backed vma")
Signed-off-by: Nadav Amit <namit@vmware.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mina Almasry <almasrymina@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/hugetlb.c

index 2ccebe1ca9f41ba4f096320c0fd813a0a9e0320f..abcd1785c629c4ebf049419bc7ffc33dec1a5bd0 100644 (file)
@@ -4919,9 +4919,9 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 
                move_huge_pte(vma, old_addr, new_addr, src_pte);
        }
-       i_mmap_unlock_write(mapping);
        flush_tlb_range(vma, old_end - len, old_end);
        mmu_notifier_invalidate_range_end(&range);
+       i_mmap_unlock_write(mapping);
 
        return len + old_addr - old_end;
 }