KVM: PPC: Book3S HV: Provide more detailed timings for P9 entry path
Alter the data collection points for the debug timing code in the P9
path to be more in line with what the code does. The points where we
accumulate time are now the following:
vcpu_entry: From vcpu_run_hv entry until the start of the inner loop;

guest_entry: From the start of the inner loop until the guest entry in
             asm;

in_guest: From the guest entry in asm until the return to KVM C code;

guest_exit: From the return into KVM C code until the corresponding
            hypercall/page fault handling or re-entry into the guest;

hypercall: Time spent handling hcalls in the kernel (hcalls can go to
           QEMU, not accounted here);

page_fault: Time spent handling page faults;

vcpu_exit: vcpu_run_hv exit (almost no code here currently).
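
Each accumulation point boils down to a (count, total, min, max)
accumulator that is folded with the elapsed time whenever a phase
boundary is crossed. The following is a minimal userspace sketch of
that bookkeeping, not the kernel code itself; the names timing_acc
and acc_update are illustrative, not the kernel's actual identifiers:

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  struct timing_acc {
          uint64_t count;  /* occurrences of the accumulation point */
          uint64_t total;  /* total time spent in the phase, ns */
          uint64_t min;    /* shortest time spent in the phase, ns */
          uint64_t max;    /* longest time spent in the phase, ns */
  };

  /* Fold one measured phase duration into the accumulator. */
  static void acc_update(struct timing_acc *a, uint64_t ns)
  {
          if (a->count == 0 || ns < a->min)
                  a->min = ns;
          if (ns > a->max)
                  a->max = ns;
          a->total += ns;
          a->count++;
  }

  int main(void)
  {
          struct timing_acc guest_entry = { 0, 0, 0, 0 };

          acc_update(&guest_entry, 4044);
          acc_update(&guest_entry, 324);

          /* Same four-column layout as the debugfs "timings" file. */
          printf("guest_entry: %" PRIu64 " %" PRIu64 " %" PRIu64
                 " %" PRIu64 "\n", guest_entry.count,
                 guest_entry.total, guest_entry.min, guest_entry.max);
          return 0;
  }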
Like before, these are exposed in debugfs in a file called
"timings". There are four values:
- number of occurrences of the accumulation point;
- total time the vcpu spent in the phase, in ns;
- shortest time the vcpu spent in the phase, in ns;
- longest time the vcpu spent in the phase, in ns.
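
A quick way to eyeball the numbers is a small parser over the file's
"name: count total min max" lines. A sketch under the assumption that
the file follows exactly that layout; since the debugfs location
varies per VM and vcpu, the path is taken as an argument:

  #include <inttypes.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
          char name[32];
          uint64_t count, total, min, max;
          FILE *f;

          if (argc != 2) {
                  fprintf(stderr, "usage: %s <timings-file>\n",
                          argv[0]);
                  return 1;
          }
          f = fopen(argv[1], "r");
          if (!f) {
                  perror("fopen");
                  return 1;
          }
          /* One phase per line: "name: count total min max". */
          while (fscanf(f, " %31[^:]: %" SCNu64 " %" SCNu64
                        " %" SCNu64 " %" SCNu64,
                        name, &count, &total, &min, &max) == 5)
                  printf("%-12s avg %" PRIu64 " ns over %" PRIu64
                         " hits\n", name,
                         count ? total / count : 0, count);
          fclose(f);
          return 0;
  }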
===
Before:
rm_entry: 53132 16793518 256 4060
rm_intr: 53132 2125914 22 340
rm_exit: 53132 24108344 374 2180
guest: 53132 40980507996 404 9997650
cede: 0 0 0 0
After:
vcpu_entry: 34637 7716108 178 4416
guest_entry: 52414 49365608 324 747542
in_guest: 52411 40828715840 258 9997480
guest_exit: 52410 19681717182 826 102496674
vcpu_exit: 34636 1744462 38 182
hypercall: 45712 22878288 38 1307962
page_fault: 992 111104034 568 168688
With just one instruction (hcall):
vcpu_entry: 1 942 942 942
guest_entry: 1 4044 4044 4044
in_guest: 1 1540 1540 1540
guest_exit: 1 3542 3542 3542
vcpu_exit: 1 80 80 80
hypercall: 0 0 0 0
page_fault: 0 0 0 0
===
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220525130554.2614394-6-farosas@linux.ibm.com