From 920a53e20546ab477129c953a2327ef96d83f38a Mon Sep 17 00:00:00 2001
From: Linus Torvalds
Date: Wed, 17 Dec 2014 11:59:04 -0800
Subject: [PATCH] mmu_gather: fix over-eager tlb_flush_mmu_free() calling

Dave Hansen reports that commit 3b57951307b2 ("mmu_gather: move minimal
range calculations into generic code") caused a performance problem:

  "tlb_finish_mmu() goes up about 9x in the profiles (~0.4%->3.6%) and
   tlb_flush_mmu_free() takes about 3.1% of CPU time with the patch
   applied, but does not show up at all on the commit before"

and the reason is that Will moved the test for whether we need to flush
from tlb_flush_mmu() into tlb_flush_mmu_tlbonly().  But that meant that
tlb_flush_mmu_free() basically lost that check.

Move it back into tlb_flush_mmu() where it belongs, so that it covers
both tlb_flush_mmu_tlbonly() _and_ tlb_flush_mmu_free().

Reported-and-tested-by: Dave Hansen
Acked-by: Will Deacon
Signed-off-by: Linus Torvalds
---
 mm/memory.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index c3b9097251c57..6efe36a998bae 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -235,9 +235,6 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned long
 
 static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 {
-	if (!tlb->end)
-		return;
-
 	tlb_flush(tlb);
 	mmu_notifier_invalidate_range(tlb->mm, tlb->start, tlb->end);
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
@@ -259,6 +256,9 @@ static void tlb_flush_mmu_free(struct mmu_gather *tlb)
 
 void tlb_flush_mmu(struct mmu_gather *tlb)
 {
+	if (!tlb->end)
+		return;
+
 	tlb_flush_mmu_tlbonly(tlb);
 	tlb_flush_mmu_free(tlb);
 }
-- 
2.39.5
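
Editor's note, not part of the patch: the sketch below is a minimal, self-contained
userspace C model of the control flow this fix restores. The struct mmu_gather here
is a stand-in carrying only the fields the illustration needs, and tlb_flush() and
the page-freeing step are printf placeholders I made up; the only detail taken from
the diff above is where the "if (!tlb->end)" early return now sits, guarding both
tlb_flush_mmu_tlbonly() and tlb_flush_mmu_free().

    /*
     * Toy model (NOT kernel code): simplified mmu_gather with just enough
     * state to show why the early return belongs in tlb_flush_mmu().
     */
    #include <stdio.h>

    struct mmu_gather {
    	unsigned long start;
    	unsigned long end;	/* end == 0: nothing gathered to flush */
    	int nr_pages;		/* stand-in for the batched page list */
    };

    /* Stand-in for the architecture TLB invalidation. */
    static void tlb_flush(struct mmu_gather *tlb)
    {
    	printf("TLB flush for [%#lx, %#lx)\n", tlb->start, tlb->end);
    }

    static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
    {
    	/* No range check here any more: the caller has already done it. */
    	tlb_flush(tlb);
    	tlb->start = tlb->end = 0;
    }

    static void tlb_flush_mmu_free(struct mmu_gather *tlb)
    {
    	printf("freeing %d batched pages\n", tlb->nr_pages);
    	tlb->nr_pages = 0;
    }

    void tlb_flush_mmu(struct mmu_gather *tlb)
    {
    	/* One early return now covers *both* helpers below. */
    	if (!tlb->end)
    		return;

    	tlb_flush_mmu_tlbonly(tlb);
    	tlb_flush_mmu_free(tlb);
    }

    int main(void)
    {
    	struct mmu_gather empty = { 0, 0, 0 };
    	struct mmu_gather busy  = { 0x1000, 0x3000, 2 };

    	tlb_flush_mmu(&empty);	/* returns immediately: end == 0 */
    	tlb_flush_mmu(&busy);	/* flushes the TLB, then frees pages */
    	return 0;
    }

With the check back in tlb_flush_mmu(), a gather that batched nothing returns before
calling either helper, which is the cheap path Dave Hansen's profile showed had been
lost.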