x86-32: Fix possible incomplete TLB invalidate with PAE pagetables
author    Dave Hansen <dave@sr71.net>
          Fri, 12 Apr 2013 23:23:54 +0000 (16:23 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
          Fri, 12 Apr 2013 23:56:47 +0000 (16:56 -0700)
commit    a34350dc80dcaa2271f4d5ccf54637d662d3bee5
tree      ab6010ad78195651c39a8ca3829ba1b95451762f
parent    398d7fbb14bb206bd713bec3876bdbc327102c3c
x86-32: Fix possible incomplete TLB invalidate with PAE pagetables

This patch attempts to fix:

https://bugzilla.kernel.org/show_bug.cgi?id=56461

The symptom is a crash and messages like this:

chrome: Corrupted page table at address 34a03000
*pdpt = 0000000000000000 *pde = 0000000000000000
Bad pagetable: 000f [#1] PREEMPT SMP

Ingo guesses this got introduced by commit 27987a92bb1a ("x86/tlb:
enable tlb flush range support for x86") since that code started to free
unused pagetables.

On x86-32 PAE kernels, that new code has the potential to free an entire
PMD page and will clear one of the four page-directory-pointer-table
entries (aka pgd_t entries).

The hardware aggressively "caches" these top-level entries and invlpg
does not actually affect the CPU's copy.  If we clear one, we *HAVE* to
do a full TLB flush, otherwise we might continue using a freed pmd page.
(Note: we already do this properly on the population side in pud_populate().)
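For comparison, a simplified sketch of what the populate side does on
32-bit PAE (not a verbatim copy of pud_populate(); paravirt hooks and
other details are omitted):

static inline void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmd)
{
        /* Install the new pmd page into the PDPT slot. */
        set_pud(pudp, __pud(__pa(pmd) | _PAGE_PRESENT));

        /*
         * The CPU caches PDPT entries when CR3 is loaded, so invlpg
         * alone will not make it see the new entry -- do a full flush.
         */
        flush_tlb_mm(mm);
}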

This patch uses 'struct mmu_gather' to track whenever we clear one of
these entries, and ensures that we follow up with a full TLB flush.
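In outline, the change looks roughly like this (a sketch of the approach
across the touched files, not the literal diff):

/* include/asm-generic/tlb.h: one extra bit in struct mmu_gather */
struct mmu_gather {
        struct mm_struct        *mm;
        unsigned int            fullmm : 1,
        /* a PDPT (pgd) entry was cleared; a ranged flush is not enough */
                                need_flush_all : 1;
        /* other fields omitted */
};

/*
 * arch/x86/mm/pgtable.c: freeing a pmd page on PAE clears a PDPT entry,
 * so remember that only a full flush will do.
 */
void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd)
{
#ifdef CONFIG_X86_PAE
        tlb->need_flush_all = 1;
#endif
        tlb_remove_page(tlb, virt_to_page(pmd));
}

/* arch/x86/include/asm/tlb.h: honour the flag at flush time */
#define tlb_flush(tlb)                                                  \
{                                                                       \
        if (!tlb->fullmm && !tlb->need_flush_all)                       \
                flush_tlb_mm_range(tlb->mm, tlb->start, tlb->end, 0UL); \
        else                                                            \
                flush_tlb_mm_range(tlb->mm, 0UL, TLB_FLUSH_ALL, 0UL);   \
}

The mm/memory.c hunk presumably just zeroes the new flag when the
mmu_gather is set up in tlb_gather_mmu().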

BTW, I disassembled and checked that:

if (tlb->fullmm == 0)
and
if (!tlb->fullmm && !tlb->need_flush_all)

generate essentially the same code, so there should be zero impact on
the !PAE case.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Artem S Tashkinov <t.artem@mailcity.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
arch/x86/include/asm/tlb.h
arch/x86/mm/pgtable.c
include/asm-generic/tlb.h
mm/memory.c