From: Sean Christopherson
Date: Wed, 1 Sep 2021 22:10:23 +0000 (-0700)
Subject: KVM: x86/mmu: Move lpage_disallowed_link further "down" in kvm_mmu_page
X-Git-Tag: baikal/aarch64/sdk6.1~5825^2~9
X-Git-Url: https://git.baikalelectronics.ru/sdk/?a=commitdiff_plain;h=b5efeb850a2bfb9d134d1ee0c07e35b8072b600d;p=kernel.git

KVM: x86/mmu: Move lpage_disallowed_link further "down" in kvm_mmu_page

Move "lpage_disallowed_link" out of the first 64 bytes, i.e. out of the
first cache line, of kvm_mmu_page so that "spt" and to a lesser extent
"gfns" land in the first cache line.  "lpage_disallowed_link" is accessed
relatively infrequently compared to "spt", which is accessed any time KVM
is walking and/or manipulating the shadow page tables.

No functional change intended.

Signed-off-by: Sean Christopherson
Message-Id: <20210901221023.1303578-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini
---

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 4e7b634efa38e..bf2bdbf333c2e 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -31,9 +31,12 @@ extern bool dbg;
 #define IS_VALID_PAE_ROOT(x) (!!(x))
 
 struct kvm_mmu_page {
+	/*
+	 * Note, "link" through "spt" fit in a single 64 byte cache line on
+	 * 64-bit kernels, keep it that way unless there's a reason not to.
+	 */
 	struct list_head link;
 	struct hlist_node hash_link;
-	struct list_head lpage_disallowed_link;
 
 	bool tdp_mmu_page;
 	bool unsync;
@@ -59,6 +62,7 @@ struct kvm_mmu_page {
 	struct kvm_rmap_head parent_ptes; /* rmap pointers to parent sptes */
 	DECLARE_BITMAP(unsync_child_bitmap, 512);
 
+	struct list_head lpage_disallowed_link;
 #ifdef CONFIG_X86_32
 	/*
 	 * Used out of the mmu-lock to avoid reading spte values while an
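
For readers who want to sanity-check this kind of layout reasoning, below is a
minimal, hypothetical userspace sketch (not the real kvm_mmu_page, and not part
of the patch): a simplified stand-in struct whose member names mirror the ones
above, using offsetof()/static_assert() to confirm that the hot members ("link"
through "spt") fit in the first 64-byte cache line while "lpage_disallowed_link"
lands after it. All member types, sizes, and offsets here are illustrative
assumptions for a 64-bit build.

	/*
	 * Hypothetical userspace sketch, NOT the real kvm_mmu_page: member
	 * names mirror the patch, but types and sizes are assumptions.
	 */
	#include <assert.h>
	#include <stddef.h>
	#include <stdio.h>

	struct list_head  { struct list_head *next, *prev; };    /* 16 bytes on 64-bit */
	struct hlist_node { struct hlist_node *next, **pprev; };  /* 16 bytes on 64-bit */

	struct mmu_page_sketch {
		/* Hot members, meant to share the first 64-byte cache line. */
		struct list_head link;                   /* offset  0 */
		struct hlist_node hash_link;             /* offset 16 */
		_Bool tdp_mmu_page;                      /* offset 32 */
		_Bool unsync;                            /* offset 33 */
		unsigned long long gfn;                  /* offset 40 (after padding) */
		unsigned long long *spt;                 /* offset 48, touched on every SPT walk */
		unsigned long long *gfns;                /* offset 56 */

		/* Cold members pushed past the first cache line. */
		unsigned long unsync_child_bitmap[8];    /* 512 bits, offset 64 */
		struct list_head lpage_disallowed_link;  /* rarely accessed, offset 128 */
	};

	/* "link" through "spt" must fit within the first 64 bytes. */
	static_assert(offsetof(struct mmu_page_sketch, spt) + sizeof(void *) <= 64,
		      "hot member 'spt' spilled out of the first cache line");

	int main(void)
	{
		printf("spt at offset %zu, lpage_disallowed_link at offset %zu\n",
		       offsetof(struct mmu_page_sketch, spt),
		       offsetof(struct mmu_page_sketch, lpage_disallowed_link));
		return 0;
	}

In the kernel itself the real struct layout is usually inspected with tools
such as pahole rather than a compile-time assert like the one above.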