author    Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
          Thu, 8 Jul 2021 01:10:21 +0000 (18:10 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
          Thu, 8 Jul 2021 18:48:23 +0000 (11:48 -0700)
commit    69b36cede859439fd3ecba7a8583baeffaf28be0
tree      7add62bdd16e2f41ccbb684ccbf24ff79bae7d0f
parent    9f5c0535993453a2494cad454b4fd427683a29a0
powerpc/book3s64/mm: update flush_tlb_range to flush page walk cache

flush_tlb_range is special in that the caller does not specify the page
size used for the translation.  Hence when flushing the TLB we flush the
translation cache for all possible page sizes.  The kernel also uses the
same interface when moving page tables around, and such a move requires
us to flush the page walk cache as well.
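
For background: the radix tlbie/tlbiel instructions carry a RIC (Radix
Invalidation Control) field that selects what gets invalidated, and the
kernel mirrors it with RIC_FLUSH_* constants in tlbflush-radix.h:

	RIC_FLUSH_TLB = 0	/* invalidate TLB entries only */
	RIC_FLUSH_PWC = 1	/* invalidate the page walk cache only */
	RIC_FLUSH_ALL = 2	/* invalidate both TLB and page walk cache */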

Instead of adding another interface to force a page walk cache flush,
update flush_tlb_range to flush the page walk cache whenever the flushed
range is at least PMD_SIZE (see the sketch below).  A page table move
will always involve an invalidation range of at least PMD_SIZE.
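
In outline, the heuristic looks like the following sketch; the two
flush helpers are illustrative stand-ins for the tlbie/tlbiel sequences
in radix_tlb.c, not real kernel functions:

	/* Illustrative sketch only; not the exact radix_tlb.c code. */
	static void sketch__flush_tlb_range(struct mm_struct *mm,
					    unsigned long start,
					    unsigned long end)
	{
		/*
		 * A page table move always invalidates at least
		 * PMD_SIZE, so a range that large may have had
		 * page-table pages repointed: flush the page walk
		 * cache too (RIC_FLUSH_ALL instead of RIC_FLUSH_TLB).
		 */
		bool flush_pwc = (end - start) >= PMD_SIZE;

		/* Invalidate TLB entries for all possible page sizes. */
		sketch__flush_tlb_pages(mm, start, end);

		if (flush_pwc)
			/* Invalidate cached page-table (PWC) entries. */
			sketch__flush_pwc(mm);
	}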

Running a microbenchmark with mprotect and parallel memory access showed
no observable performance impact.

Link: https://lkml.kernel.org/r/20210616045735.374532-3-aneesh.kumar@linux.ibm.com
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Hugh Dickins <hughd@google.com>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Kalesh Singh <kaleshsingh@google.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
arch/powerpc/mm/book3s64/radix_hugetlbpage.c
arch/powerpc/mm/book3s64/radix_tlb.c