x86/mm/tlb: Always use lazy TLB mode
author Rik van Riel <riel@surriel.com>
Wed, 26 Sep 2018 03:58:38 +0000 (23:58 -0400)
committer Peter Zijlstra <peterz@infradead.org>
Tue, 9 Oct 2018 14:51:11 +0000 (16:51 +0200)
commit 32414eb5d48717200c0e0c5a4cfc7870e712dd48
tree 2581f1146d6bfca6fec8a02a3133049950550ab6
parent e53693da1cb8a108bcb6fabe4907087058b32df3
x86/mm/tlb: Always use lazy TLB mode

On most workloads, the number of context switches far exceeds the
number of TLB flushes sent. Optimizing the context switches by always
using lazy TLB mode speeds up those workloads.

This patch results in about a 1% reduction in CPU use on a two-socket
Broadwell system running a memcache-like workload.
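For context, the bookkeeping that lazy TLB mode relies on can be
sketched in plain C. The standalone model below is not the kernel's
implementation; names such as cpu_state, enter_lazy, flush_others and
leave_lazy are illustrative only. The idea: when a CPU switches to a
kernel thread it keeps the previous mm loaded and merely marks itself
lazy, remote TLB shootdowns skip lazy CPUs (no IPI), and the deferred
work is paid with a single full flush when the CPU switches back to a
user task.

/*
 * Standalone sketch (not kernel code) of lazy TLB bookkeeping.
 * All identifiers here are illustrative, not the kernel's.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4

struct mm {
	int id;			/* stand-in for a user address space */
};

struct cpu_state {
	struct mm *loaded_mm;	/* address space currently in the hardware TLB */
	bool is_lazy;		/* true while running a kernel thread lazily */
	bool need_full_flush;	/* a flush was skipped while this CPU was lazy */
};

static struct cpu_state cpus[NR_CPUS];

/* Switch to a kernel thread: keep the old mm loaded, just mark the CPU lazy. */
static void enter_lazy(int cpu)
{
	cpus[cpu].is_lazy = true;
}

/*
 * Remote TLB flush for @mm: lazy CPUs are skipped (no IPI), but must
 * flush everything before touching user mappings again.
 */
static void flush_others(struct mm *mm)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (cpus[cpu].loaded_mm != mm)
			continue;
		if (cpus[cpu].is_lazy) {
			cpus[cpu].need_full_flush = true;	/* deferred */
			continue;
		}
		printf("IPI: flush TLB on cpu %d\n", cpu);
	}
}

/* Switch back to a user task: pay the deferred cost, if any. */
static void leave_lazy(int cpu, struct mm *next)
{
	cpus[cpu].is_lazy = false;
	if (cpus[cpu].loaded_mm == next && cpus[cpu].need_full_flush) {
		printf("cpu %d: full TLB flush on return from lazy mode\n", cpu);
		cpus[cpu].need_full_flush = false;
	}
	cpus[cpu].loaded_mm = next;
}

int main(void)
{
	struct mm task_mm = { .id = 1 };

	cpus[0].loaded_mm = &task_mm;

	enter_lazy(0);		/* cpu 0 switches to a kernel thread */
	flush_others(&task_mm);	/* another CPU unmaps: no IPI sent to cpu 0 */
	leave_lazy(0, &task_mm);/* cpu 0 returns to the task: one full flush */
	return 0;
}

The trade-off the commit message describes falls out of this model:
context switches into kernel threads become a single flag write, while
the rarer case of a missed shootdown costs one full flush on return.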

Cc: npiggin@gmail.com
Cc: efault@gmx.de
Cc: will.deacon@arm.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-team@fb.com
Cc: hpa@zytor.com
Cc: luto@kernel.org
Tested-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Rik van Riel <riel@surriel.com>
(cherry picked from commit 80b585e73477af97a2a66e850da4051c8457f639)
Acked-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180716190337.26133-7-riel@surriel.com
arch/x86/include/asm/tlbflush.h
arch/x86/mm/tlb.c