author Will Deacon <will.deacon@arm.com>
Mon, 12 Jan 2015 19:10:55 +0000 (19:10 +0000)
committer Linus Torvalds <torvalds@linux-foundation.org>
Tue, 13 Jan 2015 02:20:40 +0000 (15:20 +1300)
commit 7475ce781cf01c6d58694a44bdcdc2a2d89392c9
tree 592da7d79c7d01a4a4a1ad0353430dee1f147115
parent 965987c19682f64bdd8b57d2f524ab43dbed1995
mm: mmu_gather: use tlb->end != 0 only for TLB invalidation

When batching up address ranges for TLB invalidation, we check tlb->end
!= 0 to indicate that some pages have actually been unmapped.

As of commit 920a53e20546 ("mmu_gather: fix over-eager
tlb_flush_mmu_free() calling"), we use the same check for freeing these
pages in order to avoid a performance regression where we call
free_pages_and_swap_cache even when no pages are actually queued up.

Unfortunately, the range could have been reset (tlb->end = 0) by
tlb_end_vma, which has been shown to cause memory leaks on arm64.
Furthermore, investigation into these leaks revealed that the fullmm
case on task exit no longer invalidates the TLB, by virtue of
tlb->end == 0 (in 3.18, need_flush would have been set).

This patch resolves the problem by reverting commit 920a53e20546, using
instead tlb->local.nr as the predicate for page freeing in
tlb_flush_mmu_free and ensuring that tlb->end is initialised to a
non-zero value in the fullmm case.

Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Dave Hansen <dave@sr71.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
include/asm-generic/tlb.h
mm/memory.c