Revert "locking/pvqspinlock: Don't wait if vCPU is preempted"
author     Wanpeng Li <wanpengli@tencent.com>
           Mon, 9 Sep 2019 01:40:28 +0000 (09:40 +0800)
committer  Paolo Bonzini <pbonzini@redhat.com>
           Wed, 25 Sep 2019 08:22:37 +0000 (10:22 +0200)
commit     00e8f26c5abc2aec62fc9df06bc6e16547d31cf0
tree       e319c1e2bbbda4f6b56f4313d6f1aa9edc9fab3d
parent     167f93863b144fccdcffc6d79c8e8ac58b63f333
Revert "locking/pvqspinlock: Don't wait if vCPU is preempted"

This patch reverts commit ae9fa076d1bec ("locking/pvqspinlock: Don't
wait if vCPU is preempted"), which caused a large performance
regression in over-subscription scenarios.
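
For context, the core of the reverted change is a single condition in
pv_wait_early() in kernel/locking/qspinlock_paravirt.h.  The sketch
below is paraphrased from the upstream patch and trimmed for brevity;
it may not match this tree exactly:

    /*
     * Paraphrased sketch: the reverted commit made pv_wait_early()
     * also return true when the previous queue node's vCPU is
     * preempted, so queued waiters stop spinning and park early.
     */
    static inline bool pv_wait_early(struct pv_node *prev, int loop)
    {
            if ((loop & PV_PREV_CHECK_MASK) != 0)
                    return false;

            return READ_ONCE(prev->state) != vcpu_running ||
                   vcpu_is_preempted(prev->cpu);  /* removed by this revert */
    }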

The test was run on a Xeon Skylake box, 2 sockets, 40 cores, 80 threads,
with three VMs of 80 vCPUs each.  The score of ebizzy -M is reduced from
13000-14000 records/s to 1700-1800 records/s:

            Host                   Guest     score

vanilla w/o kvm optimizations    upstream    1700-1800 records/s
vanilla w/o kvm optimizations    revert      13000-14000 records/s
vanilla w/ kvm optimizations     upstream    4500-5000 records/s
vanilla w/ kvm optimizations     revert      14000-15500 records/s

The aggressive exit from the wait-early spinning phase can result in
premature yields and extra scheduling latency.

Actually, only 6% of wait_early events are caused by vcpu_is_preempted()
being true.  However, when one vCPU voluntarily releases its pCPU, all
the subsequent waiters in the queue do the same, and the cascading
effect leads to bad performance.
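
To illustrate the cascade, below is a simplified, paraphrased sketch of
the pv_wait_node() spin/park loop (structure taken from the upstream
file and heavily trimmed; it may not match this tree exactly).  When
the wait-early check fires, the waiter stops spinning and parks itself
via pv_wait(), giving up its pCPU; since a parked predecessor is no
longer vcpu_running, the successor's wait-early check fires as well,
and the parking cascades down the queue:

    for (;;) {
            for (wait_early = false, loop = SPIN_THRESHOLD; loop; loop--) {
                    if (READ_ONCE(node->locked))
                            return;
                    if (pv_wait_early(pp, loop)) {  /* prev preempted or halted */
                            wait_early = true;
                            break;                  /* stop spinning early */
                    }
                    cpu_relax();
            }

            /* ... */

            /* Park this vCPU, i.e. voluntarily release its pCPU. */
            if (!READ_ONCE(node->locked))
                    pv_wait(&pn->state, vcpu_halted);

            /* ... kicked or spuriously woken: spin again ... */
    }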

kvm optimizations:
[1] commit 7bc43707326 (KVM: Boost vCPUs that are delivering interrupts)
[2] commit c001bbd2e92 (KVM: X86: Boost queue head vCPU to mitigate lock waiter preemption)

Tested-by: loobinliu@tencent.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: loobinliu@tencent.com
Cc: stable@vger.kernel.org
Fixes: ae9fa076d1bec ("locking/pvqspinlock: Don't wait if vCPU is preempted")
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
kernel/locking/qspinlock_paravirt.h