author    Nadav Amit <namit@vmware.com>
          Fri, 5 Oct 2018 20:27:17 +0000 (13:27 -0700)
committer Ingo Molnar <mingo@kernel.org>
          Sat, 6 Oct 2018 13:52:16 +0000 (15:52 +0200)
commit    fefa16d2a19595aec886ad142caca63e2ed7369a
tree      754b18d150fb4bff9643c97f8124e329dc241c1b
parent    9946703991b2d1b6ea8436118e2e29b76b0c20af
x86/cpufeature: Macrofy inline assembly code to work around GCC inlining bugs

As described in:

  3e64a4f4a82b: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs")

GCC's inlining heuristics are broken with common asm() patterns used in
kernel code, resulting in the effective disabling of inlining.
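
The misfire comes from how GCC prices an asm() statement: it estimates the
instruction count by counting newlines and ';' separators in the string, not
the code actually emitted at the call site. A contrived illustration (not the
kernel's code) - everything between .pushsection and .popsection below lands
in another section and adds zero instructions to the caller, yet every line
inflates the cost estimate and can push the containing function over the
inlining threshold:

  static inline int probe(int x)
  {
          int ret;

          asm ("movl %1, %0\n\t"
               ".pushsection .discard.example, \"a\"\n\t" /* no code here */
               ".long 0\n\t"
               ".long 0\n\t"
               ".popsection"
               : "=r" (ret) : "r" (x));
          return ret;
  }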

The workaround is to define an assembly macro and call it from the inline
assembly block - pretty pointless indirection in the static_cpu_has()
case, but worth it to improve overall inlining quality.
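
The shape of the fix, as a minimal compilable sketch (the names here are
illustrative, not the kernel's actual identifiers; in the real patch the
macro lives in arch/x86/kernel/macros.S and is made visible to every
translation unit by the kbuild change referenced above):

  /* Stand-in for arch/x86/kernel/macros.S: a file-scope asm() keeps the
   * macro definition and its user in one translation unit. */
  asm (".macro SET_ONE dst:req\n\t"
       "movl $1, \\dst\n"
       ".endm");

  static inline int has_feature(void)
  {
          int ret;

          /* The asm() body is now a single short line, so GCC's size
           * estimate drops and callers become inlinable again. */
          asm ("SET_ONE %0" : "=r" (ret));
          return ret;
  }

The assembler expands the macro back to the same instructions the open-coded
string would have produced; the small text growth reported below comes from
the additional inlining the change enables, not from the macro itself.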

The patch slightly increases the kernel size:

      text     data     bss      dec     hex  filename
  18162879 10226256 2957312 31346447 1de4f0f  ./vmlinux before
  18163528 10226300 2957312 31347140 1de51c4  ./vmlinux after (+693)

It also enables the inlining of functions such as free_ldt_pgtables().

Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20181005202718.229565-3-namit@vmware.com
Link: https://lore.kernel.org/lkml/20181003213100.189959-10-namit@vmware.com/T/#u
Signed-off-by: Ingo Molnar <mingo@kernel.org>
arch/x86/include/asm/cpufeature.h
arch/x86/kernel/macros.S