3 years agoMerge branch 'topic/ppc-kvm' into next
Michael Ellerman [Thu, 17 Jun 2021 06:51:38 +0000 (16:51 +1000)]
Merge branch 'topic/ppc-kvm' into next

Merge some powerpc KVM patches from our topic branch.

In particular this brings in Nick's big series rewriting parts of the
guest entry/exit path in C.

Conflicts:
arch/powerpc/kernel/security.c
arch/powerpc/kvm/book3s_hv_rmhandlers.S

3 years agopowerpc/mm/book3s64: Fix possible build error
Aneesh Kumar K.V [Thu, 10 Jun 2021 08:36:39 +0000 (14:06 +0530)]
powerpc/mm/book3s64: Fix possible build error

Update _tlbiel_pid() such that we can avoid build errors like below when
using this function in other places.

arch/powerpc/mm/book3s64/radix_tlb.c: In function ‘__radix__flush_tlb_range_psize’:
arch/powerpc/mm/book3s64/radix_tlb.c:114:2: warning: ‘asm’ operand 3 probably does not match constraints
  114 |  asm volatile(PPC_TLBIEL(%0, %4, %3, %2, %1)
      |  ^~~
arch/powerpc/mm/book3s64/radix_tlb.c:114:2: error: impossible constraint in ‘asm’
make[4]: *** [scripts/Makefile.build:271: arch/powerpc/mm/book3s64/radix_tlb.o] Error 1

With this fix, we can also drop the __always_inline in __radix__flush_tlb_range_psize()
which was added by commit d7164dcd2baf ("powerpc/mm/radix: mark __radix__flush_tlb_range_psize() as __always_inline")

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210610083639.387365-1-aneesh.kumar@linux.ibm.com
3 years agopowerpc/signal64: Don't read sigaction arguments back from user memory
Michael Ellerman [Thu, 10 Jun 2021 07:29:49 +0000 (17:29 +1000)]
powerpc/signal64: Don't read sigaction arguments back from user memory

When delivering a signal to a sigaction style handler (SA_SIGINFO), we
pass pointers to the siginfo and ucontext via r4 and r5.

Currently we populate the values in those registers by reading the
pointers out of the sigframe in user memory, even though the values in
user memory were written by the kernel just prior:

  unsafe_put_user(&frame->info, &frame->pinfo, badframe_block);
  unsafe_put_user(&frame->uc, &frame->puc, badframe_block);
  ...
  if (ksig->ka.sa.sa_flags & SA_SIGINFO) {
   err |= get_user(regs->gpr[4], (unsigned long __user *)&frame->pinfo);
   err |= get_user(regs->gpr[5], (unsigned long __user *)&frame->puc);

ie. we write &frame->info into frame->pinfo, and then read frame->pinfo
back into r4, and similarly for &frame->uc.

The code has always been like this, since linux-fullhistory commit
d4f2d95eca2c ("Forward port of 2.4 ppc64 signal changes.").

There's no reason for us to read the values back from user memory,
rather than just setting the value in the gpr[4/5] directly. In fact
reading the value back from user memory opens up the possibility of
another user thread changing the values before we read them back.
Although any process doing that would be racing against the kernel
delivering the signal, and would risk corrupting the stack, so that
would be a userspace bug.

Note that this is 64-bit only code, so there's no subtlety with the size
of pointers differing between kernel and user. Also the frame variable
is not modified to point elsewhere during the function.

In the past reading the values back from user memory was not costly, but
now that we have KUAP on some CPUs it is, so we'd rather avoid it for
that reason too.

So change the code to just set the values directly, using the same
values we have written to the sigframe previously in the function.
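
A minimal sketch of the direct assignment (illustrative, not the exact diff):

  if (ksig->ka.sa.sa_flags & SA_SIGINFO) {
   regs->gpr[4] = (unsigned long)&frame->info;
   regs->gpr[5] = (unsigned long)&frame->uc;
  }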

Note also that this matches what our 32-bit signal code does.

Using a version of will-it-scale's signal1_threads that sets SA_SIGINFO,
this results in a ~4% increase in signals per second on a Power9, from
229,777 to 239,766.

Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210610072949.3198522-1-mpe@ellerman.id.au
3 years agopowerpc/watchdog: include linux/processor.h for spin_until_cond
Sudeep Holla [Fri, 11 Jun 2021 19:10:58 +0000 (19:10 +0000)]
powerpc/watchdog: include linux/processor.h for spin_until_cond

The watchdog implementation uses spin_until_cond() in wd_smp_lock(), but
includes neither linux/processor.h nor asm/processor.h.

This patch includes linux/processor.h here for spin_until_cond usage.

Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/5e8d2d50f301a346040362028c2ecba40685de9e.1623438544.git.christophe.leroy@csgroup.eu
3 years agopowerpc/64: drop redundant defination of spin_until_cond
Sudeep Holla [Fri, 11 Jun 2021 19:10:57 +0000 (19:10 +0000)]
powerpc/64: drop redundant defination of spin_until_cond

linux/processor.h has exactly the same definition of spin_until_cond().
Drop the redundant definition in asm/processor.h.
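
For reference, the generic definition that remains in linux/processor.h looks
roughly like this (quoted from memory, details may differ):

  #define spin_until_cond(cond)                         \
  do {                                                  \
          if (unlikely(!(cond))) {                      \
                  spin_begin();                         \
                  do {                                  \
                          spin_cpu_relax();             \
                  } while (!(cond));                    \
                  spin_end();                           \
          }                                             \
  } while (0)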

Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1fff2054e5dfc00329804dbd3f2a91667c9a8aff.1623438544.git.christophe.leroy@csgroup.eu
3 years agopowerpc/signal32: Remove impossible #ifdef combinations
Christophe Leroy [Thu, 10 Jun 2021 15:58:34 +0000 (15:58 +0000)]
powerpc/signal32: Remove impossible #ifdef combinations

PPC_TRANSACTIONAL_MEM is only on book3s/64
SPE is only on booke

PPC_TRANSACTIONAL_MEM selects ALTIVEC and VSX

Therefore, within PPC_TRANSACTIONAL_MEM sections,
ALTIVEC and VSX are always defined while SPE never is.

Remove all SPE code and all #ifdef ALTIVEC and VSX in tm
functions.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/a069a348ee3c2fe3123a5a93695c2b35dc42cb40.1623340691.git.christophe.leroy@csgroup.eu
3 years agopowerpc/32: Display modules range in virtual memory layout
Christophe Leroy [Fri, 11 Jun 2021 19:08:54 +0000 (19:08 +0000)]
powerpc/32: Display modules range in virtual memory layout

book3s/32 and 8xx don't use vmalloc for modules.

Print the modules area at startup as part of the virtual memory layout:

[    0.000000] Kernel virtual memory layout:
[    0.000000]   * 0xffafc000..0xffffc000  : fixmap
[    0.000000]   * 0xc9000000..0xffafc000  : vmalloc & ioremap
[    0.000000]   * 0xb0000000..0xc0000000  : modules
[    0.000000] Memory: 118480K/131072K available (7152K kernel code, 2320K rwdata, 1328K rodata, 368K init, 854K bss, 12592K reserved, 0K cma-reserved)
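
The print itself is a one-liner; a sketch of what it presumably looks like:

  #ifdef MODULES_VADDR
   pr_info("  * 0x%08lx..0x%08lx  : modules\n", MODULES_VADDR, MODULES_END);
  #endif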

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/98394503e92d6fd6d8f657e0b263b32f21cf2790.1623438478.git.christophe.leroy@csgroup.eu
3 years agopowerpc: make stack walking KASAN-safe
Daniel Axtens [Mon, 14 Jun 2021 12:09:07 +0000 (22:09 +1000)]
powerpc: make stack walking KASAN-safe

Make our stack-walking code KASAN-safe by using __no_sanitize_address.
Generic code, arm64, s390 and x86 all make accesses unchecked for similar
sorts of reasons: when unwinding a stack, we might touch memory that KASAN
has marked as being out-of-bounds. In ppc64 KASAN development, I hit this
sometimes when checking for an exception frame - because we're checking
an arbitrary offset into the stack frame.
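
The change itself is just an attribute on the unwind helpers; an illustrative
sketch with a hypothetical helper name:

  /* loads done here are not checked by KASAN */
  static unsigned long __no_sanitize_address read_stack_slot(unsigned long *sp, int offset)
  {
          return sp[offset];
  }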

See commit d5f68614a55c ("s390/kasan: avoid false positives during stack
unwind"), commit f4fede9c9de6 ("arm64: disable kasan when accessing
frame->fp in unwind_frame"), commit 34a27c0dd03b ("x86/dumpstack:
Prevent KASAN false positive warnings") and commit cdf32bbe54b2
("tracing, kasan: Silence Kasan warning in check_stack of stack_tracer").

Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210614120907.1952321-1-dja@axtens.net
3 years agoselftests/powerpc: EBB selftest for MMCR0 control for PMU SPRs in ISA v3.1
Athira Rajeev [Tue, 25 May 2021 13:51:43 +0000 (09:51 -0400)]
selftests/powerpc: EBB selftest for MMCR0 control for PMU SPRs in ISA v3.1

With the MMCR0 control bit (PMCCEXT) in ISA v3.1, read access to
group B registers is restricted when MMCR0 PMCC=0b00. On other
platforms (like power9), the older behaviour applies and group B
PMU SPRs are readable.

The patch creates a selftest which verifies that the test receives a
SIGILL when attempting to read PMU registers via the helper function
"dump_ebb_state" on ISA v3.1.

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Tested-by: Nageswara R Sastry <rnsastry@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1621950703-1532-3-git-send-email-atrajeev@linux.vnet.ibm.com
3 years agoselftests/powerpc: Fix "no_handler" EBB selftest
Athira Rajeev [Tue, 25 May 2021 13:51:42 +0000 (09:51 -0400)]
selftests/powerpc: Fix "no_handler" EBB selftest

The "no_handler_test" in ebb selftests attempts to read the PMU
registers twice via helper function "dump_ebb_state". First dump is
just before closing of event and the second invocation is done after
closing of the event. The original intention of the second
dump_ebb_state was to dump the state of registers at the end of
the test when the counters are frozen. But this is already achieved
by the first call, since the sample period is set to a low value
and the PMU will be frozen by then. Hence the patch removes the
dump that was done after closing of the event.

Reported-by: Shirisha Ganta <shirisha.ganta1@ibm.com>
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Tested-by: Nageswara R Sastry <rnsastry@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1621950703-1532-2-git-send-email-atrajeev@linux.vnet.ibm.com
3 years agopowerpc: Move update_power8_hid0() into its only user
Christophe Leroy [Wed, 9 Jun 2021 06:10:29 +0000 (06:10 +0000)]
powerpc: Move update_power8_hid0() into its only user

update_power8_hid0() is used only by powernv platform subcore.c

Move it there.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/37f41d74faa0c66f90b373e243e8b1ee37a1f6fa.1623219019.git.christophe.leroy@csgroup.eu
3 years agopowerpc: Remove proc_trap()
Christophe Leroy [Wed, 9 Jun 2021 05:52:50 +0000 (05:52 +0000)]
powerpc: Remove proc_trap()

proc_trap() has never been used, remove it.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/827944ea12d470c2f862635f48b5ee6c1520351f.1623217909.git.christophe.leroy@csgroup.eu
3 years agopowerpc/32: Remove __main()
Christophe Leroy [Tue, 8 Jun 2021 17:22:51 +0000 (17:22 +0000)]
powerpc/32: Remove __main()

Comment says that __main() is there to make GCC happy.

It's been there since the implementation of ppc arch in Linux 1.3.45.

ppc32 is the only architecture having that. Even ppc64 doesn't have it.

Seems like GCC is still happy without it.

Drop it for good.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/d01028f8166b98584eec536b52f14c5e3f98ff6b.1623172922.git.christophe.leroy@csgroup.eu
3 years agopowerpc/32s: Rename PTE_SIZE to PTE_T_SIZE
Christophe Leroy [Mon, 7 Jun 2021 10:56:06 +0000 (10:56 +0000)]
powerpc/32s: Rename PTE_SIZE to PTE_T_SIZE

PTE_SIZE means the PTE page table size in most places, whereas
in hash_low.S it means the size of one entry in the table.

Rename it PTE_T_SIZE, and define it directly in hash_low.S
instead of going through asm-offsets.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/83a008a9fd6cc3f2bbcb470f592555d260ed7a3d.1623063174.git.christophe.leroy@csgroup.eu
3 years agopowerpc: Define swapper_pg_dir[] in C
Christophe Leroy [Mon, 7 Jun 2021 10:56:05 +0000 (10:56 +0000)]
powerpc: Define swapper_pg_dir[] in C

Don't duplicate swapper_pg_dir[] in each platform's head.S

Define it in mm/pgtable.c

Define MAX_PTRS_PER_PGD because on book3s/64 PTRS_PER_PGD is
not a constant.
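
The C definition is roughly the following (section/alignment details from
memory, they may differ):

  pgd_t swapper_pg_dir[MAX_PTRS_PER_PGD] __section(".bss..page_aligned") __aligned(PGD_TABLE_SIZE);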

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/5e3f1b8a4695c33ccc80aa3870e016bef32b85e1.1623063174.git.christophe.leroy@csgroup.eu
3 years agopowerpc: Define empty_zero_page[] in C
Christophe Leroy [Mon, 7 Jun 2021 10:56:04 +0000 (10:56 +0000)]
powerpc: Define empty_zero_page[] in C

At the time being, empty_zero_page[] is defined in each
platform head.S.

Define it in mm/mem.c instead, and put it in BSS section instead
of the DATA section. Commit 00d03ac84053 ("arm64: mm: place
empty_zero_page in bss") explains why it is interesting to have
it in BSS.
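
The definition becomes roughly:

  unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
  EXPORT_SYMBOL(empty_zero_page);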

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/5838caffa269e0957c5a50cc85477876220298b0.1623063174.git.christophe.leroy@csgroup.eu
3 years agopowerpc/selftests: Use gettid() instead of getppid() for null_syscall
Christophe Leroy [Fri, 4 Jun 2021 12:31:09 +0000 (12:31 +0000)]
powerpc/selftests: Use gettid() instead of getppid() for null_syscall

gettid() is 10% lighter than getppid(), use it for null_syscall selftest.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/0ad62673d3e063f848e7c99d719bb966efd433e8.1622809833.git.christophe.leroy@csgroup.eu
3 years agopowerpc/nohash: Remove DEBUG_HARDER
Christophe Leroy [Thu, 3 Jun 2021 09:29:07 +0000 (09:29 +0000)]
powerpc/nohash: Remove DEBUG_HARDER

DEBUG_HARDER is not user selectable.

Remove it together with related messages.

Also remove two pr_devel() messages that should
likely have been pr_hard().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/0f25109b0e12fdd1e6541dedbb2212cc53526a57.1622712515.git.christophe.leroy@csgroup.eu
3 years agopowerpc/nohash: Remove DEBUG_CLAMP_LAST_CONTEXT
Christophe Leroy [Thu, 3 Jun 2021 09:29:06 +0000 (09:29 +0000)]
powerpc/nohash: Remove DEBUG_CLAMP_LAST_CONTEXT

DEBUG_CLAMP_LAST_CONTEXT was there in the old days to reduce the
number of contexts in order to ease debugging of the context
switching implementation, but that has been quite stable for
years now.

As it is not user selectable, remove it.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/da81837b452e8b9f1657b529b9c3050dc10b9770.1622712515.git.christophe.leroy@csgroup.eu
3 years agopowerpc/nohash: Remove DEBUG_MAP_CONSISTENCY
Christophe Leroy [Thu, 3 Jun 2021 09:29:05 +0000 (09:29 +0000)]
powerpc/nohash: Remove DEBUG_MAP_CONSISTENCY

mmu_context handling has been there for years, so we would know
by now if there were problems with the maps.

DEBUG_MAP_CONSISTENCY is not user selectable, remove it.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/6fe2b88956db53f8d6ee221525b2c5dc6aec82c6.1622712515.git.christophe.leroy@csgroup.eu
3 years agopowerpc/nohash: Remove CONFIG_SMP #ifdefery in mmu_context.h
Christophe Leroy [Thu, 3 Jun 2021 09:29:04 +0000 (09:29 +0000)]
powerpc/nohash: Remove CONFIG_SMP #ifdefery in mmu_context.h

Everything can be done even when CONFIG_SMP is not selected.

Just use IS_ENABLED() where relevant and rely on GCC to
optimise out unneeded code and variables when CONFIG_SMP is not set.
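
The pattern is the usual IS_ENABLED() one; an illustrative example (not a
line from the patch):

  /* compiled away when CONFIG_SMP is not set, no #ifdef needed */
  if (IS_ENABLED(CONFIG_SMP))
          cpumask_set_cpu(cpu, mm_cpumask(mm));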

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/cc13b87b0f750a538621876ecc24c22a07e7c8be.1622712515.git.christophe.leroy@csgroup.eu
3 years agopowerpc/nohash: Convert set_context() to C
Christophe Leroy [Thu, 3 Jun 2021 09:29:03 +0000 (09:29 +0000)]
powerpc/nohash: Convert set_context() to C

ppc8xx already has set_context() in C.

The other platforms have it in assembly. The only thing it does is
write the context id into SPRN_PID.

Do it in C.
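
A sketch of what the C version boils down to (name and placement approximate):

  void set_context(unsigned long id, pgd_t *pgd)
  {
          mtspr(SPRN_PID, id);
          isync();
  }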

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/a5d0759064f3831c6b88af49ef5d3b05ba1c4dad.1622712515.git.christophe.leroy@csgroup.eu
3 years agopowerpc/nohash: Refactor update of BDI2000 pointers in switch_mmu_context()
Christophe Leroy [Thu, 3 Jun 2021 09:29:02 +0000 (09:29 +0000)]
powerpc/nohash: Refactor update of BDI2000 pointers in switch_mmu_context()

Instead of duplicating the update of BDI2000 pointers in
set_context(), do it directly from switch_mmu_context().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/4c54997edd3548fa54717915e7c6ebaf60f208c0.1622712515.git.christophe.leroy@csgroup.eu
3 years agopowerpc/kuap: Force inlining of all first level KUAP helpers.
Christophe Leroy [Thu, 3 Jun 2021 09:13:54 +0000 (09:13 +0000)]
powerpc/kuap: Force inlining of all first level KUAP helpers.

All KUAP helpers defined in asm/kup.h are single line functions
that should be inlined. But in a book3s/32 build, we get many
instances of <prevent_write_to_user.constprop.0>.

Force inlining of those helpers.
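
In other words (signature and body approximate):

  static __always_inline void prevent_write_to_user(void __user *to, unsigned long size)
  {
          prevent_user_access(KUAP_WRITE);
  }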

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8479a862e165a57a855292d47e24c259a578f5a0.1622711627.git.christophe.leroy@csgroup.eu
3 years agopowerpc/kuap: Remove to/from/size parameters of prevent_user_access()
Christophe Leroy [Thu, 3 Jun 2021 08:41:48 +0000 (08:41 +0000)]
powerpc/kuap: Remove to/from/size parameters of prevent_user_access()

prevent_user_access() doesn't use the to/from/size parameters anymore.

Remove them.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/b7113662fd2c26e4c33e9d705de324bd3860822e.1622708530.git.christophe.leroy@csgroup.eu
3 years agopowerpc/kuap: Remove KUAP_CURRENT_XXX
Christophe Leroy [Thu, 3 Jun 2021 08:41:46 +0000 (08:41 +0000)]
powerpc/kuap: Remove KUAP_CURRENT_XXX

book3s/32 was the only user of KUAP_CURRENT_XXX.

After rework of book3s/32 KUAP, it is not used anymore.

Remove them.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/549214ecf6887d965645e664520d4886663c5ffb.1622708530.git.christophe.leroy@csgroup.eu
3 years agopowerpc/32s: Activate KUAP and KUEP by default
Christophe Leroy [Thu, 3 Jun 2021 08:41:45 +0000 (08:41 +0000)]
powerpc/32s: Activate KUAP and KUEP by default

Now that KUAP and KUEP have been significantly optimised and can be
disabled at boot time using 'nosmap' and 'nosmep' kernel parameters,
they can be enabled by default as on other powerpc platforms.

It is still possible to disable them completely in the configuration.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/86c7c74a3ba5312daea7e9658b096e2bcc6f4b64.1622708530.git.christophe.leroy@csgroup.eu
3 years agopowerpc/32s: Rework Kernel Userspace Access Protection
Christophe Leroy [Thu, 3 Jun 2021 08:41:44 +0000 (08:41 +0000)]
powerpc/32s: Rework Kernel Userspace Access Protection

On book3s/32, KUAP is provided by toggling Ks bit in segment registers.
One segment register addresses 256M of virtual memory.

At the time being, KUAP implements complex logic to apply the
unlock/lock on the exact number of segments covering the user range
being accessed, saving the boundaries of that segment range in
a member of the thread struct.

But most if not all user accesses are within a single segment.

Rework KUAP with a different approach:
- Open only one segment, the one corresponding to the starting
address of the range to be accessed.
- If a second segment is involved, it will generate a page fault. The
segment will then be opened by the page fault handler.

The kuap member of thread struct will now contain:
- The start address of the current ongoing user access, which will be
used to know which segment to lock at the end of the user access.
- ~0 when no user access is open
- ~1 when additional segments are opened by a page fault.

Then, at lock time
- When only one segment is open, close it.
- When several segments are open, close all user segments.

Almost 100% of the time, only one segment will be involved.

In interrupts, inline the functions that unlock/lock all segments,
because not inlining them implies a lot of register save/restore.
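
A rough sketch of the lock-time logic described above (names and values
illustrative, not the exact code):

  static inline void kuap_user_lock(void)
  {
          unsigned long kuap = current->thread.kuap;

          current->thread.kuap = KUAP_NONE;
          if (kuap == KUAP_NONE)          /* no user access was open */
                  return;
          if (kuap == KUAP_ALL)           /* a fault opened extra segments */
                  kuap_lock_all();
          else                            /* common case: one segment */
                  kuap_lock_one(kuap);
  }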

With the patch, writing value 128 in userspace in perf_copy_attr() is
done with 16 instructions:

    3890: 93 82 04 dc  stw     r28,1244(r2)
    3894: 7d 20 e5 26  mfsrin  r9,r28
    3898: 55 29 00 80  rlwinm  r9,r9,0,2,0
    389c: 7d 20 e1 e4  mtsrin  r9,r28
    38a0: 4c 00 01 2c  isync

    38a4: 39 20 00 80  li      r9,128
    38a8: 91 3c 00 00  stw     r9,0(r28)

    38ac: 81 42 04 dc  lwz     r10,1244(r2)
    38b0: 39 00 ff ff  li      r8,-1
    38b4: 91 02 04 dc  stw     r8,1244(r2)
    38b8: 2c 0a ff fe  cmpwi   r10,-2
    38bc: 41 82 00 88  beq     3944 <perf_copy_attr+0x36c>
    38c0: 7d 20 55 26  mfsrin  r9,r10
    38c4: 65 29 40 00  oris    r9,r9,16384
    38c8: 7d 20 51 e4  mtsrin  r9,r10
    38cc: 4c 00 01 2c  isync
...
    3944: 48 00 00 01  bl      3944 <perf_copy_attr+0x36c>
3944: R_PPC_REL24 kuap_lock_all_ool

Before the patch it was 118 instructions. In reality only 42 are
executed in most cases, but GCC is not able to see that a properly
aligned user access cannot involve more than one segment.

    5060: 39 1d 00 04  addi    r8,r29,4
    5064: 3d 20 b0 00  lis     r9,-20480
    5068: 7c 08 48 40  cmplw   r8,r9
    506c: 40 81 00 08  ble     5074 <perf_copy_attr+0x2cc>
    5070: 3d 00 b0 00  lis     r8,-20480
    5074: 39 28 ff ff  addi    r9,r8,-1
    5078: 57 aa 00 06  rlwinm  r10,r29,0,0,3
    507c: 55 29 27 3e  rlwinm  r9,r9,4,28,31
    5080: 39 29 00 01  addi    r9,r9,1
    5084: 7d 29 53 78  or      r9,r9,r10
    5088: 91 22 04 dc  stw     r9,1244(r2)
    508c: 7d 20 ed 26  mfsrin  r9,r29
    5090: 55 29 00 80  rlwinm  r9,r9,0,2,0
    5094: 7c 08 50 40  cmplw   r8,r10
    5098: 40 81 00 c0  ble     5158 <perf_copy_attr+0x3b0>
    509c: 7d 46 50 f8  not     r6,r10
    50a0: 7c c6 42 14  add     r6,r6,r8
    50a4: 54 c6 27 be  rlwinm  r6,r6,4,30,31
    50a8: 7d 20 51 e4  mtsrin  r9,r10
    50ac: 3c ea 10 00  addis   r7,r10,4096
    50b0: 39 29 01 11  addi    r9,r9,273
    50b4: 7f 88 38 40  cmplw   cr7,r8,r7
    50b8: 55 29 02 06  rlwinm  r9,r9,0,8,3
    50bc: 40 9d 00 9c  ble     cr7,5158 <perf_copy_attr+0x3b0>

    50c0: 2f 86 00 00  cmpwi   cr7,r6,0
    50c4: 41 9e 00 4c  beq     cr7,5110 <perf_copy_attr+0x368>
    50c8: 2f 86 00 01  cmpwi   cr7,r6,1
    50cc: 41 9e 00 2c  beq     cr7,50f8 <perf_copy_attr+0x350>
    50d0: 2f 86 00 02  cmpwi   cr7,r6,2
    50d4: 41 9e 00 14  beq     cr7,50e8 <perf_copy_attr+0x340>
    50d8: 7d 20 39 e4  mtsrin  r9,r7
    50dc: 39 29 01 11  addi    r9,r9,273
    50e0: 3c e7 10 00  addis   r7,r7,4096
    50e4: 55 29 02 06  rlwinm  r9,r9,0,8,3
    50e8: 7d 20 39 e4  mtsrin  r9,r7
    50ec: 39 29 01 11  addi    r9,r9,273
    50f0: 3c e7 10 00  addis   r7,r7,4096
    50f4: 55 29 02 06  rlwinm  r9,r9,0,8,3
    50f8: 7d 20 39 e4  mtsrin  r9,r7
    50fc: 3c e7 10 00  addis   r7,r7,4096
    5100: 39 29 01 11  addi    r9,r9,273
    5104: 7f 88 38 40  cmplw   cr7,r8,r7
    5108: 55 29 02 06  rlwinm  r9,r9,0,8,3
    510c: 40 9d 00 4c  ble     cr7,5158 <perf_copy_attr+0x3b0>
    5110: 7d 20 39 e4  mtsrin  r9,r7
    5114: 39 29 01 11  addi    r9,r9,273
    5118: 3c c7 10 00  addis   r6,r7,4096
    511c: 55 29 02 06  rlwinm  r9,r9,0,8,3
    5120: 7d 20 31 e4  mtsrin  r9,r6
    5124: 39 29 01 11  addi    r9,r9,273
    5128: 3c c6 10 00  addis   r6,r6,4096
    512c: 55 29 02 06  rlwinm  r9,r9,0,8,3
    5130: 7d 20 31 e4  mtsrin  r9,r6
    5134: 39 29 01 11  addi    r9,r9,273
    5138: 3c c7 30 00  addis   r6,r7,12288
    513c: 55 29 02 06  rlwinm  r9,r9,0,8,3
    5140: 7d 20 31 e4  mtsrin  r9,r6
    5144: 3c e7 40 00  addis   r7,r7,16384
    5148: 39 29 01 11  addi    r9,r9,273
    514c: 7f 88 38 40  cmplw   cr7,r8,r7
    5150: 55 29 02 06  rlwinm  r9,r9,0,8,3
    5154: 41 9d ff bc  bgt     cr7,5110 <perf_copy_attr+0x368>

    5158: 4c 00 01 2c  isync
    515c: 39 20 00 80  li      r9,128
    5160: 91 3d 00 00  stw     r9,0(r29)

    5164: 38 e0 00 00  li      r7,0
    5168: 90 e2 04 dc  stw     r7,1244(r2)
    516c: 7d 20 ed 26  mfsrin  r9,r29
    5170: 65 29 40 00  oris    r9,r9,16384
    5174: 40 81 00 c0  ble     5234 <perf_copy_attr+0x48c>
    5178: 7d 47 50 f8  not     r7,r10
    517c: 7c e7 42 14  add     r7,r7,r8
    5180: 54 e7 27 be  rlwinm  r7,r7,4,30,31
    5184: 7d 20 51 e4  mtsrin  r9,r10
    5188: 3d 4a 10 00  addis   r10,r10,4096
    518c: 39 29 01 11  addi    r9,r9,273
    5190: 7c 08 50 40  cmplw   r8,r10
    5194: 55 29 02 06  rlwinm  r9,r9,0,8,3
    5198: 40 81 00 9c  ble     5234 <perf_copy_attr+0x48c>

    519c: 2c 07 00 00  cmpwi   r7,0
    51a0: 41 82 00 4c  beq     51ec <perf_copy_attr+0x444>
    51a4: 2c 07 00 01  cmpwi   r7,1
    51a8: 41 82 00 2c  beq     51d4 <perf_copy_attr+0x42c>
    51ac: 2c 07 00 02  cmpwi   r7,2
    51b0: 41 82 00 14  beq     51c4 <perf_copy_attr+0x41c>
    51b4: 7d 20 51 e4  mtsrin  r9,r10
    51b8: 39 29 01 11  addi    r9,r9,273
    51bc: 3d 4a 10 00  addis   r10,r10,4096
    51c0: 55 29 02 06  rlwinm  r9,r9,0,8,3
    51c4: 7d 20 51 e4  mtsrin  r9,r10
    51c8: 39 29 01 11  addi    r9,r9,273
    51cc: 3d 4a 10 00  addis   r10,r10,4096
    51d0: 55 29 02 06  rlwinm  r9,r9,0,8,3
    51d4: 7d 20 51 e4  mtsrin  r9,r10
    51d8: 3d 4a 10 00  addis   r10,r10,4096
    51dc: 39 29 01 11  addi    r9,r9,273
    51e0: 7c 08 50 40  cmplw   r8,r10
    51e4: 55 29 02 06  rlwinm  r9,r9,0,8,3
    51e8: 40 81 00 4c  ble     5234 <perf_copy_attr+0x48c>
    51ec: 7d 20 51 e4  mtsrin  r9,r10
    51f0: 39 29 01 11  addi    r9,r9,273
    51f4: 3c ea 10 00  addis   r7,r10,4096
    51f8: 55 29 02 06  rlwinm  r9,r9,0,8,3
    51fc: 7d 20 39 e4  mtsrin  r9,r7
    5200: 39 29 01 11  addi    r9,r9,273
    5204: 3c e7 10 00  addis   r7,r7,4096
    5208: 55 29 02 06  rlwinm  r9,r9,0,8,3
    520c: 7d 20 39 e4  mtsrin  r9,r7
    5210: 39 29 01 11  addi    r9,r9,273
    5214: 3c ea 30 00  addis   r7,r10,12288
    5218: 55 29 02 06  rlwinm  r9,r9,0,8,3
    521c: 7d 20 39 e4  mtsrin  r9,r7
    5220: 3d 4a 40 00  addis   r10,r10,16384
    5224: 39 29 01 11  addi    r9,r9,273
    5228: 7c 08 50 40  cmplw   r8,r10
    522c: 55 29 02 06  rlwinm  r9,r9,0,8,3
    5230: 41 81 ff bc  bgt     51ec <perf_copy_attr+0x444>

    5234: 4c 00 01 2c  isync

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[mpe: Export the ool handlers to fix build errors]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/d9121f96a7c4302946839a0771f5d1daeeb6968c.1622708530.git.christophe.leroy@csgroup.eu
3 years agopowerpc/32s: Allow disabling KUAP at boot time
Christophe Leroy [Thu, 3 Jun 2021 08:41:43 +0000 (08:41 +0000)]
powerpc/32s: Allow disabling KUAP at boot time

PPC64 uses MMU features to enable/disable KUAP at boot time.
But feature fixups are applied way too early on PPC32.

Now that all KUAP related actions are in C following the
conversion of KUAP initial setup and context switch in C,
static branches can be used to enable/disable KUAP.
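
The mechanism is a plain static key; a sketch consistent with the exported
disable_kuap_key mentioned below (exact shape may differ):

  DEFINE_STATIC_KEY_FALSE(disable_kuap_key);
  EXPORT_SYMBOL(disable_kuap_key);

  static __always_inline bool kuap_is_disabled(void)
  {
          return static_branch_unlikely(&disable_kuap_key);
  }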

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[mpe: Export disable_kuap_key to fix build errors]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/cd79e8008455fba5395d099f9bb1305c039b931c.1622708530.git.christophe.leroy@csgroup.eu
3 years agopowerpc/32s: Allow disabling KUEP at boot time
Christophe Leroy [Thu, 3 Jun 2021 08:41:42 +0000 (08:41 +0000)]
powerpc/32s: Allow disabling KUEP at boot time

PPC64 uses MMU features to enable/disable KUEP at boot time.
But feature fixups are applied way too early on PPC32.

Now that all KUEP related actions are in C following the
conversion of KUEP initial setup and context switch in C,
static branches can be used to enable/disable KUEP.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/7745a2c3a08ec46302920a3f48d1cb9b5469dbbb.1622708530.git.christophe.leroy@csgroup.eu
3 years agopowerpc/32s: Initialise KUAP and KUEP in C
Christophe Leroy [Thu, 3 Jun 2021 08:41:41 +0000 (08:41 +0000)]
powerpc/32s: Initialise KUAP and KUEP in C

In order to selectively activate KUAP and KUEP in a following patch,
perform KUAP and KUEP initialisation in C.

Unlike PPC64, PPC32 doesn't have an early_setup_secondary(),
so do it in start_secondary().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/87be72023448dd4e476744ed279b8c04b8d08a1c.1622708530.git.christophe.leroy@csgroup.eu
3 years agopowerpc/32s: Simplify calculation of segment register content
Christophe Leroy [Thu, 3 Jun 2021 08:41:40 +0000 (08:41 +0000)]
powerpc/32s: Simplify calculation of segment register content

The segment register has the VSID in bits 8-31.
Bits 4-7 are reserved; there is no requirement to set them to 0.

VSIDs are calculated from the VSID of SR0 by adding 0x111.

Even with the highest possible VSID, which would be 0xFFFFF0,
adding 16 times 0x111 results in 0x1001100.

So the reserved bits are never overflowed, and there is no need to clear
the reserved bits after each calculation.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/ddc1cfd2ec8f3b2395c6a4d7f2b0c1aa1b1e64fb.1622708530.git.christophe.leroy@csgroup.eu
3 years agopowerpc/32s: Convert switch_mmu_context() to C
Christophe Leroy [Thu, 3 Jun 2021 08:41:39 +0000 (08:41 +0000)]
powerpc/32s: Convert switch_mmu_context() to C

switch_mmu_context() does things that can easily be done in C.

For updating user segments, we have update_user_segments().

As mentioned in commit 46d348ea9f96 ("powerpc/32s: Move KUEP
locking/unlocking in C"), update_user_segments() has the loop
unrolled, which is a significant performance gain.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/05c0875ad8220c03452c3a334946e207c6ca04d6.1622708530.git.christophe.leroy@csgroup.eu
3 years agopowerpc/32s: move CTX_TO_VSID() into mmu-hash.h
Christophe Leroy [Thu, 3 Jun 2021 08:41:38 +0000 (08:41 +0000)]
powerpc/32s: move CTX_TO_VSID() into mmu-hash.h

In order to reuse it in switch_mmu_context(), this
patch moves CTX_TO_VSID() macro into asm/book3s/32/mmu-hash.h
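
For reference, the macro being moved is (from memory):

  #define CTX_TO_VSID(c, id)      (((c) * (897 * 16) + (id) * 0x111) & 0xffffff)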

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/26b36ef2939234a04b37baf6ffe50cba81f5d1b7.1622708530.git.christophe.leroy@csgroup.eu
3 years agopowerpc/32s: Refactor update of user segment registers
Christophe Leroy [Thu, 3 Jun 2021 08:41:37 +0000 (08:41 +0000)]
powerpc/32s: Refactor update of user segment registers

KUEP implements the update of user segment registers.

Move it into mmu-hash.h in order to use it from other places.

And inline kuep_lock() and kuep_unlock(). Inlining kuep_lock() is
important for system_call_exception(), otherwise system_call_exception()
has to save to the stack the system call parameters that are used just
after, and doing that takes more instructions than kuep_lock() itself.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/24591ca480d14a62ef910e38a5273d551262c4a2.1622708530.git.christophe.leroy@csgroup.eu
3 years agopowerpc/32s: Move setup_{kuep/kuap}() into {kuep/kuap}.c
Christophe Leroy [Thu, 3 Jun 2021 08:41:36 +0000 (08:41 +0000)]
powerpc/32s: Move setup_{kuep/kuap}() into {kuep/kuap}.c

Avoids the #ifdef in mmu.c

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/0b7a13d414837e58264edc336b89c2fe9f35f9bc.1622708530.git.christophe.leroy@csgroup.eu
3 years agopowerpc/8xx: Allow disabling KUAP at boot time
Christophe Leroy [Fri, 4 Jun 2021 04:49:25 +0000 (04:49 +0000)]
powerpc/8xx: Allow disabling KUAP at boot time

PPC64 uses MMU features to enable/disable KUAP at boot time.
But feature fixups are applied way too early on PPC32.

But since commit 3b7972c585ee ("powerpc/32: Manage KUAP in C"),
all KUAP is in C so it is now possible to use static branches.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/3dca510ce555335261a47c4799167da698f569c0.1622782111.git.christophe.leroy@csgroup.eu
3 years agopowerpc/44x: Implement Kernel Userspace Exec Protection (KUEP)
Christophe Leroy [Wed, 2 Jun 2021 06:42:10 +0000 (06:42 +0000)]
powerpc/44x: Implement Kernel Userspace Exec Protection (KUEP)

Powerpc 44x has two bits for exec protection in TLBs: one
for user (UX) and one for supervisor (SX).

Clear SX on user pages in TLB miss handlers to provide KUEP.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/169310e08152aa1d96c979770291d165ec6896ae.1622616032.git.christophe.leroy@csgroup.eu
3 years agopowerpc: Remove CONFIG_PPC_MMU_NOHASH_32
Christophe Leroy [Thu, 3 Jun 2021 07:53:49 +0000 (07:53 +0000)]
powerpc: Remove CONFIG_PPC_MMU_NOHASH_32

Since commit 7b0ede696b53 ("powerpc/8xx: MM_SLICE is not needed anymore"),
CONFIG_PPC_MMU_NOHASH_32 has not been used.

Remove it.

Reported-by: Tom Rix <trix@redhat.com>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/bf1e074f6fb213a1c4cc4964370bdce4b648d647.1622706812.git.christophe.leroy@csgroup.eu
3 years agopowerpc/optprobes: use PPC_RAW_ macros
Christophe Leroy [Thu, 20 May 2021 13:50:49 +0000 (13:50 +0000)]
powerpc/optprobes: use PPC_RAW_ macros

Use PPC_RAW_ macros to simplify the code.

And use PPC_LO/PPC_HI instead of IMM_L/IMM_H which are for
internal use inside ppc-opcode.h

Those macros are self explanatory, comments can go as well.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/5a167b8ba4d33a5c09cd504f0c862e25ffe85459.1621516826.git.christophe.leroy@csgroup.eu
3 years agopowerpc/optprobes: Compact code source a bit.
Christophe Leroy [Thu, 20 May 2021 13:50:48 +0000 (13:50 +0000)]
powerpc/optprobes: Compact code source a bit.

Now that lines can be up to 100 chars long, minimise the
amount of split lines to increase readability.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8ebbd977ea8cf8d706d82458f2a21acd44562a99.1621516826.git.christophe.leroy@csgroup.eu
3 years agopowerpc/optprobes: Minimise casts
Christophe Leroy [Thu, 20 May 2021 13:50:47 +0000 (13:50 +0000)]
powerpc/optprobes: Minimise casts

nip is already an unsigned long, no cast needed.

op_callback_addr and emulate_step_addr are kprobe_opcode_t *.
Their values are obtained with ppc_kallsyms_lookup_name(), which
returns 'unsigned long', and their values are passed to create_branch(),
which expects 'unsigned long'. So change them to 'unsigned long'
to avoid casting them back and forth.

can_optimize() used p->addr several times as 'unsigned long'.
Use a local 'unsigned long' variable and avoid casting multiple times.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e03192a6d4123242a275e71ce2ba0bb4d90700c1.1621516826.git.christophe.leroy@csgroup.eu
3 years agopowerpc/inst: Refactor PPC32 and PPC64 versions
Christophe Leroy [Thu, 20 May 2021 13:50:46 +0000 (13:50 +0000)]
powerpc/inst: Refactor PPC32 and PPC64 versions

ppc_inst(), ppc_inst_prefixed() and ppc_inst_swab() can easily be made common
to both PPC32 and PPC64.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/d54c63dcac6d190e1cc0d2fe3259d6e621928cdf.1621516826.git.christophe.leroy@csgroup.eu
3 years agopowerpc: Don't use 'struct ppc_inst' to reference instruction location
Christophe Leroy [Thu, 20 May 2021 13:50:45 +0000 (13:50 +0000)]
powerpc: Don't use 'struct ppc_inst' to reference instruction location

'struct ppc_inst' is an internal representation of an instruction, but
in-memory instructions are and will remain a table of 'u32' forever.

Replace all 'struct ppc_inst *' used for locating an instruction in
memory by 'u32 *'. This removes a lot of undue casts to 'struct
ppc_inst *'.

It also helps locate abuse of 'struct ppc_inst' dereferences.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[mpe: Fix ppc_inst_next(), use u32 instead of unsigned int]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/7062722b087228e42cbd896e39bfdf526d6a340a.1621516826.git.christophe.leroy@csgroup.eu
3 years agopowerpc/lib/code-patching: Don't use struct 'ppc_inst' for runnable code in tests.
Christophe Leroy [Thu, 20 May 2021 13:50:44 +0000 (13:50 +0000)]
powerpc/lib/code-patching: Don't use struct 'ppc_inst' for runnable code in tests.

'struct ppc_inst' is meant to represent an instruction internally, it
is not meant to dereference code in memory.

For testing code patching, use patch_instruction() to properly
write into memory the code to be tested.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/d8425fb42a4adebc35b7509f121817eeb02fac31.1621516826.git.christophe.leroy@csgroup.eu
3 years agopowerpc/lib/code-patching: Make instr_is_branch_to_addr() static
Christophe Leroy [Thu, 20 May 2021 13:50:43 +0000 (13:50 +0000)]
powerpc/lib/code-patching: Make instr_is_branch_to_addr() static

instr_is_branch_to_addr() is only used in code-patching.c

Make it static.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/5f6b9c8c83170ed310953eac2f5b14539bfc964a.1621516826.git.christophe.leroy@csgroup.eu
3 years agopowerpc: Do not dereference code as 'struct ppc_inst' (uprobe, code-patching, feature...
Christophe Leroy [Thu, 20 May 2021 13:50:42 +0000 (13:50 +0000)]
powerpc: Do not dereference code as 'struct ppc_inst' (uprobe, code-patching, feature-fixups)

'struct ppc_inst' is an internal structure to represent an instruction,
it is not directly the representation of that instruction in text code.
It is not meant to map and dereference code.

Dereferencing code directly through 'struct ppc_inst' has two main issues:
- On powerpc, structs are expected to be 8-byte aligned while code is
spread every 4 bytes.
- Should a non-prefixed instruction lie at the end of a page and the
following page not be mapped, it would generate a page fault.

In-memory code must be accessed with ppc_inst_read().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c9a1201dd0a66b4a0f91f0fb46d9385cbf030feb.1621516826.git.christophe.leroy@csgroup.eu
3 years agopowerpc/inst: Avoid pointer dereferencing in ppc_inst_equal()
Christophe Leroy [Thu, 20 May 2021 13:50:41 +0000 (13:50 +0000)]
powerpc/inst: Avoid pointer dereferencing in ppc_inst_equal()

Avoid casting/dereferencing ppc_inst() as u64 *; check each member
of the struct when relevant.

And remove the 0xff initialisation of the suffix for non-prefixed
instructions. An instruction with 0xff as a suffix might be invalid,
but it is still a prefixed instruction and has to be treated as such.
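
The resulting helper looks roughly like this (reconstructed, may differ in
detail):

  static inline bool ppc_inst_equal(struct ppc_inst x, struct ppc_inst y)
  {
          if (ppc_inst_val(x) != ppc_inst_val(y))
                  return false;
          if (!ppc_inst_prefixed(x))
                  return true;
          return ppc_inst_suffix(x) == ppc_inst_suffix(y);
  }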

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/d8b155e930b7a9708ca110e8ff0ace6713a7af75.1621516826.git.christophe.leroy@csgroup.eu
3 years agopowerpc/inst: Improve readability of get_user_instr() and friends
Christophe Leroy [Thu, 20 May 2021 13:50:40 +0000 (13:50 +0000)]
powerpc/inst: Improve readability of get_user_instr() and friends

Remove unneeded line splits.

And remove unneeded local variable initialisation.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/fb097fda78cc6852905ef00f8f7bf371b6cc66f7.1621516826.git.christophe.leroy@csgroup.eu
3 years agopowerpc/inst: Reduce casts in get_user_instr()
Christophe Leroy [Thu, 20 May 2021 13:50:39 +0000 (13:50 +0000)]
powerpc/inst: Reduce casts in get_user_instr()

Declare __gui_ptr as 'u32 *' instead of casting it at each use to
'unsigned int *' (which is an equivalent type).

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[mpe: Use u32 * instead of unsigned int *]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/2c2123998e05535d08ba03a96ea1eea921d06a86.1621516826.git.christophe.leroy@csgroup.eu
3 years agopowerpc/inst: Fix sparse detection on get_user_instr()
Christophe Leroy [Thu, 20 May 2021 13:50:38 +0000 (13:50 +0000)]
powerpc/inst: Fix sparse detection on get_user_instr()

get_user_instr() lacks sparse detection for the __user tag.

This is because __gui_ptr is assigned with a cast.

Fix that by adding a __chk_user_ptr()

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/0320e5b41a794fd456ab8c5993bbfadcf9e1d8b4.1621516826.git.christophe.leroy@csgroup.eu
3 years agopowerpc: Replace PPC_INST_NOP by PPC_RAW_NOP()
Christophe Leroy [Thu, 20 May 2021 10:23:11 +0000 (10:23 +0000)]
powerpc: Replace PPC_INST_NOP by PPC_RAW_NOP()

On the road to removing all PPC_INST_xx defines in
asm/ppc-opcode.h, change PPC_INST_NOP to PPC_RAW_NOP().
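
As a reminder, PPC_RAW_NOP() is simply (as far as I recall):

  #define PPC_RAW_NOP()           PPC_RAW_ORI(0, 0, 0)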

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/ad46c195ca1b8572629ef07ba6bfe247585239a6.1621506159.git.christophe.leroy@csgroup.eu
3 years agopowerpc/traps: Start using PPC_RAW_xx() macros
Christophe Leroy [Thu, 20 May 2021 10:23:10 +0000 (10:23 +0000)]
powerpc/traps: Start using PPC_RAW_xx() macros

Start using PPC_RAW_xx() macros where relevant.

PPC_INST_SYNC is used both to represent the 'sync' instruction and
the family of synchronisation instructions. Keep it for the latter;
maybe we'll change the name in the future to avoid confusion.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/0945c155d6cb113431185fc1296ac127359fe29b.1621506159.git.christophe.leroy@csgroup.eu
3 years agopowerpc/lib/feature-fixups: Use PPC_RAW_xxx() macros
Christophe Leroy [Thu, 20 May 2021 10:23:09 +0000 (10:23 +0000)]
powerpc/lib/feature-fixups: Use PPC_RAW_xxx() macros

Use PPC_RAW_xxx() macros instead of open coding assembly
opcodes.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[mpe: Fix bad converison in do_stf_exit_barrier_fixups()]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e79cd8e111ca13bf8c61a384bac365aa7e207647.1621506159.git.christophe.leroy@csgroup.eu
3 years agopowerpc/ebpf32: Use _Rx macros instead of __REG_Rx ones
Christophe Leroy [Thu, 20 May 2021 10:23:08 +0000 (10:23 +0000)]
powerpc/ebpf32: Use _Rx macros instead of __REG_Rx ones

To increase readability, use _Rx macros instead of __REG_Rx.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/eb7ec6297b5d16f141c5866da3975b418e47431b.1621506159.git.christophe.leroy@csgroup.eu
3 years agopowerpc/ebpf64: Use PPC_RAW_MFLR()
Christophe Leroy [Thu, 20 May 2021 10:23:07 +0000 (10:23 +0000)]
powerpc/ebpf64: Use PPC_RAW_MFLR()

Use PPC_RAW_MFLR() instead of open coding with PPC_INST_MFLR.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c1887623e91e8b4da36e669e4c74de86320a5092.1621506159.git.christophe.leroy@csgroup.eu
3 years agopowerpc/ftrace: Use PPC_RAW_MFLR() and PPC_RAW_NOP()
Christophe Leroy [Thu, 20 May 2021 10:23:06 +0000 (10:23 +0000)]
powerpc/ftrace: Use PPC_RAW_MFLR() and PPC_RAW_NOP()

Use PPC_RAW_MFLR() instead of open coding with PPC_INST_MFLR.

Same for PPC_INST_NOP.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/98fd4d717810b7c4032a1edf62dd6fe638e64329.1621506159.git.christophe.leroy@csgroup.eu
3 years agopowerpc/security: Use PPC_RAW_BLR() and PPC_RAW_NOP()
Christophe Leroy [Thu, 20 May 2021 10:23:05 +0000 (10:23 +0000)]
powerpc/security: Use PPC_RAW_BLR() and PPC_RAW_NOP()

On the road to remove all use of PPC_INST_xxx, replace
PPC_INST_BLR by PPC_RAW_BLR(). Same for PPC_INST_NOP.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c04f88d0e53d2122fbbe92226892a01ebc668b6a.1621506159.git.christophe.leroy@csgroup.eu
3 years agopowerpc/modules: Use PPC_RAW_xx() macros
Christophe Leroy [Thu, 20 May 2021 10:23:04 +0000 (10:23 +0000)]
powerpc/modules: Use PPC_RAW_xx() macros

To improve readability, use PPC_RAW_xx() macros instead of
open coding. Those macros are self-explanatory so the comments
can go as well.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/99d9ee8849d3992beeadb310a665aae01c3abfb1.1621506159.git.christophe.leroy@csgroup.eu
3 years agopowerpc/signal: Use PPC_RAW_xx() macros
Christophe Leroy [Thu, 20 May 2021 10:23:03 +0000 (10:23 +0000)]
powerpc/signal: Use PPC_RAW_xx() macros

To improve readability, use PPC_RAW_xx() macros instead of
open coding. Those macros are self-explanatory so the comments
can go as well.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/4ca2bfdca2f47a293d05f61eb3c4e487ee170f1f.1621506159.git.christophe.leroy@csgroup.eu
3 years agopowerpc/lib/code-patching: Use PPC_RAW_() macros
Christophe Leroy [Thu, 20 May 2021 10:23:02 +0000 (10:23 +0000)]
powerpc/lib/code-patching: Use PPC_RAW_() macros

Instead of open coding with PPC_INST_ defines, use
PPC_RAW_() macros. It improves readability.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8c92f1d9e825ee47c6f88fe43ad42d2a8cc2ab4a.1621506159.git.christophe.leroy@csgroup.eu
3 years agopowerpc/opcodes: Add shorter macros for registers for use with PPC_RAW_xx()
Christophe Leroy [Thu, 20 May 2021 10:23:01 +0000 (10:23 +0000)]
powerpc/opcodes: Add shorter macros for registers for use with PPC_RAW_xx()

Today we have the __REG_Rx macros. They are mainly meant for
internal use by __PPC_RA() and friends, which allow
uses like __PPC_RA(R12).

When used with PPC_RAW_xx() macros, it gives a result which is
not very readable.

Add shorter macros _Rx in order to improve readability when
used with PPC_RAW_xx() macros.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/ec34d92b7c2f810622261acfeeed4b0a0f4d01bd.1621506159.git.christophe.leroy@csgroup.eu
3 years agopowerpc: Rework PPC_RAW_xxx() macros for prefixed instructions
Christophe Leroy [Thu, 20 May 2021 10:23:00 +0000 (10:23 +0000)]
powerpc: Rework PPC_RAW_xxx() macros for prefixed instructions

At the time being, we have PPC_RAW_PLXVP() and PPC_RAW_PSTXVP() which
provide a 64-bit value, and then it gets split by open coding to
format it into a 'struct ppc_inst' instruction.

Instead, define a PPC_RAW_xxx_P() and a PPC_RAW_xxx_S() to be used
as is.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/5d146b31b943e7ad674894421db4feef54804b9b.1621506159.git.christophe.leroy@csgroup.eu
3 years agopowerpc: Don't handle ALTIVEC/SPE in ASM in _switch(). Do it in C.
Christophe Leroy [Fri, 14 May 2021 13:14:53 +0000 (13:14 +0000)]
powerpc: Don't handle ALTIVEC/SPE in ASM in _switch(). Do it in C.

_switch() saves and restores ALTIVEC and SPE status.
For altivec this is redundant with what __switch_to() does with
save_sprs() and restore_sprs() and giveup_all() before
calling _switch().

Add support for SPE in save_sprs() and restore_sprs() and
remove things from _switch().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8ab21fd93d6e0047aa71e6509e5e312f14b2991b.1620998075.git.christophe.leroy@csgroup.eu
3 years agopowerpc: Force inlining of csum_add()
Christophe Leroy [Tue, 11 May 2021 06:08:06 +0000 (06:08 +0000)]
powerpc: Force inlining of csum_add()

Commit 0c3b25ec9778 ("powerpc: force inlining of csum_partial() to
avoid multiple csum_partial() with GCC10") inlined csum_partial().

Now that csum_partial() is inlined, GCC outlines csum_add() when
called by csum_partial().

c064fb28 <csum_add>:
c064fb28: 7c 63 20 14  addc    r3,r3,r4
c064fb2c: 7c 63 01 94  addze   r3,r3
c064fb30: 4e 80 00 20  blr

c0665fb8 <csum_add>:
c0665fb8: 7c 63 20 14  addc    r3,r3,r4
c0665fbc: 7c 63 01 94  addze   r3,r3
c0665fc0: 4e 80 00 20  blr

c066719c: 7c 9a c0 2e  lwzx    r4,r26,r24
c06671a0: 38 60 00 00  li      r3,0
c06671a4: 7f 1a c2 14  add     r24,r26,r24
c06671a8: 4b ff ee 11  bl      c0665fb8 <csum_add>
c06671ac: 80 98 00 04  lwz     r4,4(r24)
c06671b0: 4b ff ee 09  bl      c0665fb8 <csum_add>
c06671b4: 80 98 00 08  lwz     r4,8(r24)
c06671b8: 4b ff ee 01  bl      c0665fb8 <csum_add>
c06671bc: a0 98 00 0c  lhz     r4,12(r24)
c06671c0: 4b ff ed f9  bl      c0665fb8 <csum_add>
c06671c4: 7c 63 18 f8  not     r3,r3
c06671c8: 81 3f 00 68  lwz     r9,104(r31)
c06671cc: 81 5f 00 a0  lwz     r10,160(r31)
c06671d0: 7d 29 18 14  addc    r9,r9,r3
c06671d4: 7d 29 01 94  addze   r9,r9
c06671d8: 91 3f 00 68  stw     r9,104(r31)
c06671dc: 7d 1a 50 50  subf    r8,r26,r10
c06671e0: 83 01 00 10  lwz     r24,16(r1)
c06671e4: 83 41 00 18  lwz     r26,24(r1)

The sum with 0 is useless, should have been skipped.
And there is even one completely unused instance of csum_add().

In file included from ./include/net/checksum.h:22,
                 from ./include/linux/skbuff.h:28,
                 from ./include/linux/icmp.h:16,
                 from net/ipv6/ip6_tunnel.c:23:
./arch/powerpc/include/asm/checksum.h: In function '__ip6_tnl_rcv':
./arch/powerpc/include/asm/checksum.h:94:22: warning: inlining failed in call to 'csum_add': call is unlikely and code size would grow [-Winline]
   94 | static inline __wsum csum_add(__wsum csum, __wsum addend)
      |                      ^~~~~~~~
./arch/powerpc/include/asm/checksum.h:172:31: note: called from here
  172 |                         sum = csum_add(sum, (__force __wsum)*(const u32 *)buff);
      |                               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./arch/powerpc/include/asm/checksum.h:94:22: warning: inlining failed in call to 'csum_add': call is unlikely and code size would grow [-Winline]
   94 | static inline __wsum csum_add(__wsum csum, __wsum addend)
      |                      ^~~~~~~~
./arch/powerpc/include/asm/checksum.h:177:31: note: called from here
  177 |                         sum = csum_add(sum, (__force __wsum)
      |                               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  178 |                                             *(const u32 *)(buff + 4));
      |                                             ~~~~~~~~~~~~~~~~~~~~~~~~~
./arch/powerpc/include/asm/checksum.h:94:22: warning: inlining failed in call to 'csum_add': call is unlikely and code size would grow [-Winline]
   94 | static inline __wsum csum_add(__wsum csum, __wsum addend)
      |                      ^~~~~~~~
./arch/powerpc/include/asm/checksum.h:183:31: note: called from here
  183 |                         sum = csum_add(sum, (__force __wsum)
      |                               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  184 |                                             *(const u32 *)(buff + 8));
      |                                             ~~~~~~~~~~~~~~~~~~~~~~~~~
./arch/powerpc/include/asm/checksum.h:94:22: warning: inlining failed in call to 'csum_add': call is unlikely and code size would grow [-Winline]
   94 | static inline __wsum csum_add(__wsum csum, __wsum addend)
      |                      ^~~~~~~~
./arch/powerpc/include/asm/checksum.h:186:31: note: called from here
  186 |                         sum = csum_add(sum, (__force __wsum)
      |                               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  187 |                                             *(const u16 *)(buff + 12));
      |                                             ~~~~~~~~~~~~~~~~~~~~~~~~~~

Force inlining of csum_add().

     94c: 80 df 00 a0  lwz     r6,160(r31)
     950: 7d 28 50 2e  lwzx    r9,r8,r10
     954: 7d 48 52 14  add     r10,r8,r10
     958: 80 aa 00 04  lwz     r5,4(r10)
     95c: 80 ff 00 68  lwz     r7,104(r31)
     960: 7d 29 28 14  addc    r9,r9,r5
     964: 7d 29 01 94  addze   r9,r9
     968: 7d 08 30 50  subf    r8,r8,r6
     96c: 80 aa 00 08  lwz     r5,8(r10)
     970: a1 4a 00 0c  lhz     r10,12(r10)
     974: 7d 29 28 14  addc    r9,r9,r5
     978: 7d 29 01 94  addze   r9,r9
     97c: 7d 29 50 14  addc    r9,r9,r10
     980: 7d 29 01 94  addze   r9,r9
     984: 7d 29 48 f8  not     r9,r9
     988: 7c e7 48 14  addc    r7,r7,r9
     98c: 7c e7 01 94  addze   r7,r7
     990: 90 ff 00 68  stw     r7,104(r31)

In the non-inlined version, the first sum with 0 was performed.
Here it is skipped.
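
The change itself is only the attribute on the existing helper; the 32-bit
version is roughly (reconstructed from the addc/addze sequences above):

  static __always_inline __wsum csum_add(__wsum csum, __wsum addend)
  {
          asm("addc %0,%0,%1; addze %0,%0;" : "+r" (csum) : "r" (addend));
          return csum;
  }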

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/f7f4d4e364de6e473da874468b903da6e5d97adc.1620713272.git.christophe.leroy@csgroup.eu
3 years agoMerge branch 'fixes' into next
Michael Ellerman [Tue, 15 Jun 2021 14:14:55 +0000 (00:14 +1000)]
Merge branch 'fixes' into next

Merge our fixes branch which has a number of important fixes, notably
the fix for initrd corruption, as well as the fixes for scv vs ptrace.

3 years agopowerpc/tau: Remove superfluous parameter in alloc_workqueue() call
Finn Thain [Fri, 11 Jun 2021 07:58:27 +0000 (17:58 +1000)]
powerpc/tau: Remove superfluous parameter in alloc_workqueue() call

This avoids an (optional) compiler warning:

arch/powerpc/kernel/tau_6xx.c: In function 'TAU_init':
arch/powerpc/kernel/tau_6xx.c:204:30: error: too many arguments for format [-Werror=format-extra-args]
  tau_workq = alloc_workqueue("tau", WQ_UNBOUND, 1, 0);
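
Presumably the fix is simply dropping the superfluous trailing argument:

  tau_workq = alloc_workqueue("tau", WQ_UNBOUND, 1);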

Fixes: ea7369f1e1a9 ("powerpc/tau: Convert from timer to workqueue")
Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Signed-off-by: Finn Thain <fthain@linux-m68k.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/a1456e8bbd33ef702e3ff6f14b1bf3919241c62b.1623398307.git.fthain@linux-m68k.org
3 years agopowerpc: Fix initrd corruption with relative jump labels
Michael Ellerman [Mon, 14 Jun 2021 13:14:40 +0000 (23:14 +1000)]
powerpc: Fix initrd corruption with relative jump labels

Commit 952080456ad6 ("powerpc: Switch to relative jump labels") switched
us to using relative jump labels. That involves changing the code,
target and key members in struct jump_entry to be relative to the
address of the jump_entry, rather than absolute addresses.

We have two static inlines that create a struct jump_entry,
arch_static_branch() and arch_static_branch_jump(), as well as an asm
macro ARCH_STATIC_BRANCH, which is used by the pseries-only hypervisor
tracing code.

Unfortunately we missed updating the key to be a relative reference in
ARCH_STATIC_BRANCH.

That causes a pseries kernel to have a handful of jump_entry structs
with bad key values. Instead of being a relative reference they instead
hold the full address of the key.

However the code doesn't expect that, it still adds the key value to the
address of the jump_entry (see jump_entry_key()) expecting to get a
pointer to a key somewhere in kernel data.
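
For reference, with relative jump labels the generic code computes the
key pointer roughly like this (paraphrased; the low bits of ->key carry
flag bits), which is why an absolute address stored in ->key ends up
pointing well past the kernel image:

  static inline struct static_key *jump_entry_key(const struct jump_entry *entry)
  {
          long offset = entry->key & ~3L;

          return (struct static_key *)((unsigned long)&entry->key + offset);
  }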

The table of jump_entry structs sits in rodata, which comes after the
kernel text. In a typical build this will be somewhere around 15MB. The
address of the key will be somewhere in data, typically around 20MB.
Adding the two values together gets us a pointer somewhere around 45MB.

We then call static_key_set_entries() with that bad pointer and modify
some members of the struct static_key we think we are pointing at.

A pseries kernel is typically ~30MB in size, so writing to ~45MB won't
corrupt the kernel itself. However if we're booting with an initrd,
depending on the size and exact location of the initrd, we can corrupt
the initrd. Depending on how exactly we corrupt the initrd it can either
cause the system to not boot, or just corrupt one of the files in the
initrd.

The fix is simply to make the key value relative to the jump_entry
struct in the ARCH_STATIC_BRANCH macro.

Fixes: 952080456ad6 ("powerpc: Switch to relative jump labels")
Reported-by: Anastasia Kovaleva <a.kovaleva@yadro.com>
Reported-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reported-by: Greg Kurz <groug@kaod.org>
Reported-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Tested-by: Daniel Axtens <dja@axtens.net>
Tested-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210614131440.312360-1-mpe@ellerman.id.au
3 years agopowerpc/perf: Simplify Makefile
Christophe Leroy [Fri, 7 May 2021 14:01:09 +0000 (14:01 +0000)]
powerpc/perf: Simplify Makefile

arch/powerpc/Kbuild descends into arch/powerpc/perf/ only when
CONFIG_PERF_EVENTS is selected, so there is no need to take
CONFIG_PERF_EVENTS into account in arch/powerpc/perf/Makefile.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Michal Suchánek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/d37f61afca55b5b33787b643890e061ae1c18f5f.1620396045.git.christophe.leroy@csgroup.eu
3 years agopowerpc/prom_init: Move custom isspace() to its own namespace
Andy Shevchenko [Mon, 10 May 2021 14:49:25 +0000 (17:49 +0300)]
powerpc/prom_init: Move custom isspace() to its own namespace

If for some reason any of the headers ever includes ctype.h we will
have a name collision. Avoid this by moving isspace() to a dedicated
namespace.
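
A minimal sketch of what the renamed helper might look like (the name
follows the prom_ convention; the exact body is assumed here, not taken
from the patch):

  static char __init prom_isspace(char c)
  {
          return c == ' ' || c == '\t' || c == '\n' || c == '\r';
  }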

First appearance of the code is in the commit a3b35662adba
("powerpc/prom_init: Evaluate mem kernel parameter for early allocation").

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
[mpe: Reformat prom_isxdigit() now that we allow longer lines]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210510144925.58195-1-andriy.shevchenko@linux.intel.com
3 years agoselftests/powerpc: Remove the repeated declaration
Shaokun Zhang [Tue, 1 Jun 2021 06:36:25 +0000 (14:36 +0800)]
selftests/powerpc: Remove the repeated declaration

Function 'event_ebb_init' and 'event_leader_ebb_init' are declared
twice in the header file, so remove the repeated declaration.

Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1622529385-5938-1-git-send-email-zhangshaokun@hisilicon.com
3 years agopowerpc/spider-pci: Remove set but not used variable 'val'
Baokun Li [Tue, 1 Jun 2021 08:53:19 +0000 (16:53 +0800)]
powerpc/spider-pci: Remove set but not used variable 'val'

Fixes gcc '-Wunused-but-set-variable' warning:

arch/powerpc/platforms/cell/spider-pci.c: In function 'spiderpci_io_flush':
arch/powerpc/platforms/cell/spider-pci.c:28:6: warning:
variable ‘val’ set but not used [-Wunused-but-set-variable]

It has never been used since its introduction.

Signed-off-by: Baokun Li <libaokun1@huawei.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210601085319.140461-1-libaokun1@huawei.com
3 years agopowerpc/spufs: Remove set but not used variable 'dummy'
Baokun Li [Tue, 1 Jun 2021 08:51:27 +0000 (16:51 +0800)]
powerpc/spufs: Remove set but not used variable 'dummy'

Fixes gcc '-Wunused-but-set-variable' warning:

arch/powerpc/platforms/cell/spufs/switch.c: In function 'check_ppu_mb_stat':
arch/powerpc/platforms/cell/spufs/switch.c:1660:6: warning:
variable ‘dummy’ set but not used [-Wunused-but-set-variable]

arch/powerpc/platforms/cell/spufs/switch.c: In function 'check_ppuint_mb_stat':
arch/powerpc/platforms/cell/spufs/switch.c:1675:6: warning:
variable ‘dummy’ set but not used [-Wunused-but-set-variable]

It has never been used since its introduction.

Signed-off-by: Baokun Li <libaokun1@huawei.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210601085127.139598-1-libaokun1@huawei.com
3 years agopowerpc/52xx: Add fallthrough in mpc52xx_wdt_ioctl()
Tom Rix [Tue, 1 Jun 2021 19:02:00 +0000 (12:02 -0700)]
powerpc/52xx: Add fallthrough in mpc52xx_wdt_ioctl()

With gcc 10.3, there is this compiler error:

  compiler.h:56:26: error: this statement may fall through
  mpc52xx_gpt.c:586:2: note: here
    586 |  case WDIOC_GETTIMEOUT:
        |  ^~~~

So add the fallthrough pseudo keyword.
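
A minimal sketch of the change, assuming the case preceding
WDIOC_GETTIMEOUT simply needs the explicit annotation:

  case WDIOC_SETTIMEOUT:
          /* ... set the new timeout ... */
          fallthrough;
  case WDIOC_GETTIMEOUT: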

Signed-off-by: Tom Rix <trix@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210601190200.2637776-1-trix@redhat.com
3 years agopowerpc/signal64: Copy siginfo before changing regs->nip
Michael Ellerman [Tue, 8 Jun 2021 13:46:05 +0000 (23:46 +1000)]
powerpc/signal64: Copy siginfo before changing regs->nip

In commit 5335c62723d5 ("powerpc/signal64: Rewrite handle_rt_signal64()
to minimise uaccess switches") the 64-bit signal code was rearranged to
use user_write_access_begin/end().

As part of that change the call to copy_siginfo_to_user() was moved
later in the function, so that it could be done after the
user_write_access_end().

In particular it was moved after we modify regs->nip to point to the
signal trampoline. That means if copy_siginfo_to_user() fails we exit
handle_rt_signal64() with an error but with regs->nip modified, whereas
previously we would not modify regs->nip until the copy succeeded.

Returning an error from signal delivery but with regs->nip updated
leaves the process in a sort of half-delivered state. We do immediately
force a SEGV in signal_setup_done(), called from do_signal(), so the
process should never run in the half-delivered state.

However that SEGV is not delivered until we've gone around to
do_notify_resume() again, so it's possible some tracing could observe
the half-delivered state.

There are other cases where we fail signal delivery with regs partly
updated, eg. the write to newsp and SA_SIGINFO, but the latter at least
is very unlikely to fail as it reads back from the frame we just wrote
to.

Looking at other arches they seem to be more careful about leaving regs
unchanged until the copy operations have succeeded, and in general that
seems like good hygiene.

So although the current behaviour is not clearly buggy, it's also not
clearly correct. So move the call to copy_siginfo_to_user() up prior to
the modification of regs->nip, which is closer to the old behaviour, and
easier to reason about.
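
Conceptually the reordering looks like this (a sketch of the intent,
not the exact diff):

  /* copy siginfo while regs is still unmodified */
  if (copy_siginfo_to_user(&frame->info, &ksig->info))
          goto badframe;

  /* ... build the rest of the frame ... */

  /* only now point the process at the handler trampoline */
  regs->nip = (unsigned long)&frame->tramp[0];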

Fixes: 5335c62723d5 ("powerpc/signal64: Rewrite handle_rt_signal64() to minimise uaccess switches")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210608134605.2783677-1-mpe@ellerman.id.au
3 years agoKVM: PPC: Book3S HV: remove ISA v3.0 and v3.1 support from P7/8 path
Nicholas Piggin [Fri, 28 May 2021 09:07:52 +0000 (19:07 +1000)]
KVM: PPC: Book3S HV: remove ISA v3.0 and v3.1 support from P7/8 path

POWER9 and later processors always go via the P9 guest entry path now.
Remove the remaining support from the P7/8 path.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210528090752.3542186-33-npiggin@gmail.com
3 years agoKVM: PPC: Book3S HV P9: implement hash host / hash guest support
Nicholas Piggin [Fri, 28 May 2021 09:07:51 +0000 (19:07 +1000)]
KVM: PPC: Book3S HV P9: implement hash host / hash guest support

Implement support for hash guests under hash host. This has to save and
restore the host SLB, and ensure that the MMU is off while switching
into the guest SLB.
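
Saving the host SLB amounts to reading each entry back with
slbmfee/slbmfev, along the lines of this sketch (variable and field
names are assumed for illustration):

  for (i = 0; i < mmu_slb_size; i++) {
          asm volatile("slbmfee %0,%1" : "=r" (slbee) : "r" (i));
          asm volatile("slbmfev %0,%1" : "=r" (slbev) : "r" (i));
          host_slb[i].esid = slbee;
          host_slb[i].vsid = slbev;
  }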

POWER9 and later CPUs now always go via the P9 path. The "fast" guest
mode is now renamed to the P9 mode, which is consistent with its
functionality and the rest of the naming.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210528090752.3542186-32-npiggin@gmail.com
3 years agoKVM: PPC: Book3S HV P9: implement hash guest support
Nicholas Piggin [Fri, 28 May 2021 09:07:50 +0000 (19:07 +1000)]
KVM: PPC: Book3S HV P9: implement hash guest support

Implement hash guest support. Guest entry/exit has to restore and
save/clear the SLB, plus several other bits to accommodate hash guests
in the P9 path. Radix host, hash guest support is removed from the P7/8
path.

The HPT hcalls and faults are not handled in real mode, which is a
performance regression. A worst-case fork/exit microbenchmark takes 3x
longer after this patch. kbuild benchmark performance is in the noise,
but the slowdown is likely to be noticed somewhere.

For now, accept this penalty for the benefit of simplifying the P7/8
paths and unifying P9 hash with the new code, because hash is a less
important configuration than radix on processors that support it. Hash
will benefit from future optimisations to this path, including possibly
a faster path to handle such hcalls and interrupts without doing a full
exit.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210528090752.3542186-31-npiggin@gmail.com
3 years agoKVM: PPC: Book3S HV P9: Reflect userspace hcalls to hash guests to support PR KVM
Nicholas Piggin [Fri, 28 May 2021 09:07:49 +0000 (19:07 +1000)]
KVM: PPC: Book3S HV P9: Reflect userspace hcalls to hash guests to support PR KVM

The reflection of sc 1 interrupts from guest PR=1 to the guest kernel is
required to support a hash guest running PR KVM where its guest is
making hcalls with sc 1.

In preparation for hash guest support, add this hcall reflection to the
P9 path. The P7/8 path does this in its realmode hcall handler
(sc_1_fast_return).

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210528090752.3542186-30-npiggin@gmail.com
3 years agoKVM: PPC: Book3S HV: add virtual mode handlers for HPT hcalls and page faults
Nicholas Piggin [Fri, 28 May 2021 09:07:48 +0000 (19:07 +1000)]
KVM: PPC: Book3S HV: add virtual mode handlers for HPT hcalls and page faults

In order to support hash guests in the P9 path (which does not do real
mode hcalls or page fault handling), these real-mode hash specific
interrupts need to be implemented in virt mode.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210528090752.3542186-29-npiggin@gmail.com
3 years agoKVM: PPC: Book3S HV: small pseries_do_hcall cleanup
Nicholas Piggin [Fri, 28 May 2021 09:07:47 +0000 (19:07 +1000)]
KVM: PPC: Book3S HV: small pseries_do_hcall cleanup

Functionality should not be changed.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210528090752.3542186-28-npiggin@gmail.com
3 years agoKVM: PPC: Book3S HV P9: Allow all P9 processors to enable nested HV
Nicholas Piggin [Fri, 28 May 2021 09:07:46 +0000 (19:07 +1000)]
KVM: PPC: Book3S HV P9: Allow all P9 processors to enable nested HV

All radix guests go via the P9 path now, so there is no need to limit
nested HV to processors that support "mixed mode" MMU. Remove the
restriction.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210528090752.3542186-27-npiggin@gmail.com
3 years agoKVM: PPC: Book3S HV: Remove unused nested HV tests in XICS emulation
Nicholas Piggin [Fri, 28 May 2021 09:07:45 +0000 (19:07 +1000)]
KVM: PPC: Book3S HV: Remove unused nested HV tests in XICS emulation

Commit 38a6865dcb000 ("KVM: PPC: Book3S HV: Use XICS hypercalls when
running as a nested hypervisor") added nested HV tests in XICS
hypercalls, but not all are required.

* icp_eoi is only called by kvmppc_deliver_irq_passthru which is only
  called by kvmppc_check_passthru which is only called by
  kvmppc_read_one_intr.

* kvmppc_read_one_intr is only called by kvmppc_read_intr which is only
  called by the L0 HV rmhandlers code.

* kvmhv_rm_send_ipi is called by:
  - kvmhv_interrupt_vcore which is only called by kvmhv_commence_exit
    which is only called by the L0 HV rmhandlers code.
  - icp_send_hcore_msg which is only called by icp_rm_set_vcpu_irq.
  - icp_rm_set_vcpu_irq which is only called by icp_rm_try_update
  - icp_rm_set_vcpu_irq is not nested HV safe because it writes to
    LPCR directly without a kvmhv_on_pseries test. Nested handlers
    should not in general be using the rm handlers.

The important test seems to be in kvmppc_ipi_thread, which sends the
virt-mode H_IPI handler kick using smp_call_function rather than
msgsnd.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210528090752.3542186-26-npiggin@gmail.com
3 years agoKVM: PPC: Book3S HV: Remove virt mode checks from real mode handlers
Nicholas Piggin [Fri, 28 May 2021 09:07:44 +0000 (19:07 +1000)]
KVM: PPC: Book3S HV: Remove virt mode checks from real mode handlers

Now that the P7/8 path no longer supports radix, real-mode handlers
do not need to deal with being called in virt mode.

This change effectively reverts commit ce9fc49d02fc ("KVM: PPC: Book3S
HV: Add radix checks in real-mode hypercall handlers").

It removes a few more real-mode tests in rm hcall handlers, which
allows the indirect ops for the xive module to be removed from the
built-in xics rm handlers.

kvmppc_h_random is renamed to kvmppc_rm_h_random to be a bit more
descriptive and consistent with other rm handlers.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210528090752.3542186-25-npiggin@gmail.com
3 years agoKVM: PPC: Book3S HV: Remove radix guest support from P7/8 path
Nicholas Piggin [Fri, 28 May 2021 09:07:43 +0000 (19:07 +1000)]
KVM: PPC: Book3S HV: Remove radix guest support from P7/8 path

The P9 path now runs all supported radix guest combinations, so
remove radix guest support from the P7/8 path.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210528090752.3542186-24-npiggin@gmail.com
3 years agoKVM: PPC: Book3S HV: Remove support for dependent threads mode on P9
Nicholas Piggin [Fri, 28 May 2021 09:07:42 +0000 (19:07 +1000)]
KVM: PPC: Book3S HV: Remove support for dependent threads mode on P9

Dependent-threads mode is the normal KVM mode for pre-POWER9 SMT
processors, where all threads in a core (or subcore) would run the same
partition at the same time, or they would run the host.

This design was mandated by MMU state that is shared between threads in
a processor. The synchronisation point is therefore in hypervisor
real-mode, which has essentially no shared state, making it safe for
multiple threads to gather there and switch to the correct mode.

It is implemented by having the host unplug all secondary threads and
always run in SMT1 mode, and host QEMU threads essentially represent
virtual cores that wake these secondary threads out of unplug when the
ioctl is called to run the guest. This happens via a side-path that is
mostly invisible to the rest of the Linux host and the secondary threads
still appear to be unplugged.

POWER9 / ISA v3.0 has a more flexible MMU design that is independent
per-thread and allows a much simpler KVM implementation. Before the new
"P9 fast path" was added that began to take advantage of this, POWER9
support was implemented in the existing path which has support to run
in the dependent threads mode. So it was not much work to add support to
run POWER9 in this dependent threads mode.

The mode is not required by the POWER9 MMU (although "mixed-mode" hash /
radix MMU limitations of early processors were worked around using this
mode). But it is one way to run SMT guests without running different
guests or guest and host on different threads of the same core, so it
could avoid or reduce some SMT attack surfaces without turning off SMT
entirely.

This security feature has some real, if indeterminate, value. However
the old path is lagging in features (nested HV), and with this series
the new P9 path adds the remaining missing features (the radix prefetch
bug workaround and hash support, in later patches), so POWER9 dependent
threads mode
support would be the only remaining reason to keep that code in and keep
supporting POWER9/POWER10 in the old path. So here we make the call to
drop this feature.

Remove dependent threads mode support for POWER9 and above processors.
Systems can still achieve this security by disabling SMT entirely, but
that would generally come at a larger performance cost for guests.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210528090752.3542186-23-npiggin@gmail.com
3 years agoKVM: PPC: Book3S HV: Implement radix prefetch workaround by disabling MMU
Nicholas Piggin [Fri, 28 May 2021 09:07:41 +0000 (19:07 +1000)]
KVM: PPC: Book3S HV: Implement radix prefetch workaround by disabling MMU

Rather than partition the guest PID space + flush a rogue guest PID to
work around this problem, instead fix it by always disabling the MMU when
switching in or out of guest MMU context in HV mode.

This may be a bit less efficient, but it is a lot less complicated and
allows the P9 path to trivially implement the workaround too. Newer CPUs
are not subject to this issue.
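
In outline, the LPID/PID switch is bracketed with translation disabled,
roughly like this sketch (helper name assumed for illustration):

  unsigned long msr = mfmsr();

  mtmsr(msr & ~(MSR_IR | MSR_DR));        /* MMU off around the switch */
  switch_mmu_to_guest_radix(kvm, vcpu, lpcr);
  mtmsr(msr);                             /* translation back on */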

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210528090752.3542186-22-npiggin@gmail.com
3 years agoKVM: PPC: Book3S HV P9: Switch to guest MMU context as late as possible
Nicholas Piggin [Fri, 28 May 2021 09:07:40 +0000 (19:07 +1000)]
KVM: PPC: Book3S HV P9: Switch to guest MMU context as late as possible

Move MMU context switch as late as reasonably possible to minimise code
running with guest context switched in. This becomes more important when
this code may run in real-mode, with later changes.

Move WARN_ON as early as possible so program check interrupts are less
likely to tangle everything up.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210528090752.3542186-21-npiggin@gmail.com
3 years agoKVM: PPC: Book3S HV P9: Add helpers for OS SPR handling
Nicholas Piggin [Fri, 28 May 2021 09:07:39 +0000 (19:07 +1000)]
KVM: PPC: Book3S HV P9: Add helpers for OS SPR handling

This is a first step to wrapping supervisor and user SPR saving and
loading up into helpers, which will then be called independently in
bare metal and nested HV cases in order to optimise SPR access.
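
The rough shape of such a helper, for illustration only (the SPR
selection and naming here are assumptions, not the patch contents):

  static void store_spr_state(struct kvm_vcpu *vcpu)
  {
          vcpu->arch.tar = mfspr(SPRN_TAR);
          vcpu->arch.ebbhr = mfspr(SPRN_EBBHR);
          vcpu->arch.ebbrr = mfspr(SPRN_EBBRR);
          /* ... and so on for the remaining OS/user SPRs */
  }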

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210528090752.3542186-20-npiggin@gmail.com
3 years agoKVM: PPC: Book3S HV P9: Move SPR loading after expiry time check
Nicholas Piggin [Fri, 28 May 2021 09:07:38 +0000 (19:07 +1000)]
KVM: PPC: Book3S HV P9: Move SPR loading after expiry time check

This is wasted work if the time limit is exceeded.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210528090752.3542186-19-npiggin@gmail.com
3 years agoKVM: PPC: Book3S HV P9: Improve exit timing accounting coverage
Nicholas Piggin [Fri, 28 May 2021 09:07:37 +0000 (19:07 +1000)]
KVM: PPC: Book3S HV P9: Improve exit timing accounting coverage

The C conversion caused exit timing to become a bit cramped. Expand it
to cover more of the entry and exit code.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210528090752.3542186-18-npiggin@gmail.com
3 years agoKVM: PPC: Book3S HV P9: Read machine check registers while MSR[RI] is 0
Nicholas Piggin [Fri, 28 May 2021 09:07:36 +0000 (19:07 +1000)]
KVM: PPC: Book3S HV P9: Read machine check registers while MSR[RI] is 0

SRR0/1, DAR, DSISR must all be protected from machine check which can
clobber them. Ensure MSR[RI] is clear while they are live.
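
That is, the registers are captured before MSR[RI] is raised again,
along these lines (a sketch only; field names assumed):

  vcpu->arch.shregs.srr0 = mfspr(SPRN_SRR0);
  vcpu->arch.shregs.srr1 = mfspr(SPRN_SRR1);
  vcpu->arch.shregs.dar = mfspr(SPRN_DAR);
  vcpu->arch.shregs.dsisr = mfspr(SPRN_DSISR);

  mtmsr(mfmsr() | MSR_RI);        /* only now is it safe to set RI */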

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210528090752.3542186-17-npiggin@gmail.com
3 years agoKVM: PPC: Book3S HV P9: inline kvmhv_load_hv_regs_and_go into __kvmhv_vcpu_entry_p9
Nicholas Piggin [Fri, 28 May 2021 09:07:35 +0000 (19:07 +1000)]
KVM: PPC: Book3S HV P9: inline kvmhv_load_hv_regs_and_go into __kvmhv_vcpu_entry_p9

Now the initial C implementation is done, inline more HV code to make
rearranging things easier.

And rename __kvmhv_vcpu_entry_p9 to drop the leading underscores as it's
now C, and is now a more complete vcpu entry.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210528090752.3542186-16-npiggin@gmail.com
3 years agoKVM: PPC: Book3S HV P9: Implement the rest of the P9 path in C
Nicholas Piggin [Fri, 28 May 2021 09:07:34 +0000 (19:07 +1000)]
KVM: PPC: Book3S HV P9: Implement the rest of the P9 path in C

Almost all logic is moved to C, by introducing a new in_guest mode for
the P9 path that branches very early in the KVM interrupt handler to P9
exit code.

The main P9 entry and exit assembly is now only about 160 lines of low
level stack setup and register save/restore, plus a bad-interrupt
handler.

There are two motivations for this, the first is just make the code more
maintainable being in C. The second is to reduce the amount of code
running in a special KVM mode, "realmode". In quotes because with radix
it is no longer necessarily real-mode in the MMU, but it still has to be
treated specially because it may be in real-mode, and has various
important registers like PID, DEC, TB, etc set to guest. This is hostile
to the rest of Linux and can't use arbitrary kernel functionality or be
instrumented well.

This initial patch is a reasonably faithful conversion of the asm code,
but it does lack any loop to return quickly back into the guest without
switching out of realmode in the case of unimportant or easily handled
interrupts. As explained in previous changes, handling HV interrupts
very quickly in this low level realmode is not so important for P9
performance, and avoiding it is important for security, observability
and debuggability reasons.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210528090752.3542186-15-npiggin@gmail.com
3 years agoKVM: PPC: Book3S HV P9: Stop handling hcalls in real-mode in the P9 path
Nicholas Piggin [Fri, 28 May 2021 09:07:33 +0000 (19:07 +1000)]
KVM: PPC: Book3S HV P9: Stop handling hcalls in real-mode in the P9 path

In the interest of minimising the amount of code that is run in
"real-mode", don't handle hcalls in real mode in the P9 path. This
requires some new handlers for H_CEDE and xics-on-xive to be added
before xive is pulled or cede logic is checked.

This introduces a change in radix guest behaviour where radix guests
that execute 'sc 1' in userspace now get a privilege fault whereas
previously the 'sc 1' would be reflected as a syscall interrupt to the
guest kernel. That reflection is only required for hash guests that run
PR KVM.

Background:

In POWER8 and earlier processors, it is very expensive to exit from the
HV real mode context of a guest hypervisor interrupt, and switch to host
virtual mode. On those processors, guest->HV interrupts reach the
hypervisor with the MMU off because the MMU is loaded with guest context
(LPCR, SDR1, SLB), and the other threads in the sub-core need to be
pulled out of the guest too. Then the primary must save off guest state,
invalidate SLB and ERAT, and load up host state before the MMU can be
enabled to run in host virtual mode (~= regular Linux mode).

Hash guests also require a lot of hcalls to run due to the nature of the
MMU architecture and paravirtualisation design. The XICS interrupt
controller requires hcalls to run.

So KVM traditionally tries hard to avoid the full exit, by handling
hcalls and other interrupts in real mode as much as possible.

By contrast, POWER9 has independent MMU context per-thread, and in radix
mode the hypervisor is in host virtual memory mode when the HV interrupt
is taken. Radix guests do not require significant hcalls to manage their
translations, and xive guests don't need hcalls to handle interrupts. So
it's much less important for performance to handle hcalls in real mode on
POWER9.

One caveat is that the TCE hcalls are performance critical; their
real-mode variants were introduced for POWER8 in order to achieve 10GbE
performance. Real mode TCE hcalls were found to be less important on
POWER9, which was able to drive 40GbE networking without them (using
the virt mode hcalls), but performance is still important. These hcalls
will benefit
from subsequent guest entry/exit optimisation including possibly a
faster "partial exit" that does not entirely switch to host context to
handle the hcall.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210528090752.3542186-14-npiggin@gmail.com
3 years agoKVM: PPC: Book3S HV P9: Move radix MMU switching instructions together
Nicholas Piggin [Fri, 28 May 2021 09:07:32 +0000 (19:07 +1000)]
KVM: PPC: Book3S HV P9: Move radix MMU switching instructions together

Switching the MMU from radix<->radix mode is tricky particularly as the
MMU can remain enabled and requires a certain sequence of SPR updates.
Move these together into their own functions.

This also includes the radix TLB check / flush because it's tied in to
MMU switching due to tlbiel getting LPID from LPIDR.
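
Grouped together, the guest-direction switch then reads something like
the sketch below (helper name and exact ordering assumed for
illustration):

  static void switch_mmu_to_guest_radix(struct kvm *kvm,
                                        struct kvm_vcpu *vcpu, u64 lpcr)
  {
          mtspr(SPRN_LPID, kvm->arch.lpid);
          mtspr(SPRN_LPCR, lpcr);
          mtspr(SPRN_PID, vcpu->arch.pid);
          isync();

          /* any tlbiel flush of the new LPID must come after this
           * point, because tlbiel takes the LPID from LPIDR */
  }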

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210528090752.3542186-13-npiggin@gmail.com
3 years agoKVM: PPC: Book3S HV P9: Move xive vcpu context management into kvmhv_p9_guest_entry
Nicholas Piggin [Fri, 28 May 2021 09:07:31 +0000 (19:07 +1000)]
KVM: PPC: Book3S HV P9: Move xive vcpu context management into kvmhv_p9_guest_entry

Move the xive management up so the low level register switching can be
pushed further down in a later patch. XIVE MMIO CI operations can run in
higher level code with machine checks, tracing, etc., available.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210528090752.3542186-12-npiggin@gmail.com
3 years agoKVM: PPC: Book3S HV P9: Reduce irq_work vs guest decrementer races
Nicholas Piggin [Fri, 28 May 2021 09:07:30 +0000 (19:07 +1000)]
KVM: PPC: Book3S HV P9: Reduce irq_work vs guest decrementer races

irq_work's use of the DEC SPR is racy with guest<->host switch and guest
entry which flips the DEC interrupt to guest, which could lose a host
work interrupt.

This patch closes one race, and attempts to comment another class of
races.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210528090752.3542186-11-npiggin@gmail.com
3 years agoKVM: PPC: Book3S HV P9: Move setting HDEC after switching to guest LPCR
Nicholas Piggin [Fri, 28 May 2021 09:07:29 +0000 (19:07 +1000)]
KVM: PPC: Book3S HV P9: Move setting HDEC after switching to guest LPCR

LPCR[HDICE]=0 suppresses hypervisor decrementer exceptions on some
processors, so it must be enabled before HDEC is set.

Rather than set it in the host LPCR then setting HDEC, move the HDEC
update to after the guest MMU context (including LPCR) is loaded.
There shouldn't be much concern with delaying HDEC by some 10s or 100s
of nanoseconds as a result of setting it a bit later.
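
In other words, order the updates so that the guest LPCR (with HDICE
set) is in place before HDEC is armed, roughly:

  mtspr(SPRN_LPCR, guest_lpcr);   /* guest LPCR, HDICE enabled */
  /* ... remaining guest MMU context ... */
  mtspr(SPRN_HDEC, hdec);         /* the exception can no longer be lost */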

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210528090752.3542186-10-npiggin@gmail.com
3 years agoKVM: PPC: Book3S HV P9: implement kvmppc_xive_pull_vcpu in C
Nicholas Piggin [Fri, 28 May 2021 09:07:28 +0000 (19:07 +1000)]
KVM: PPC: Book3S HV P9: implement kvmppc_xive_pull_vcpu in C

This is more symmetric with kvmppc_xive_push_vcpu, and has the advantage
that it runs with the MMU on.

The extra test added to the asm will go away with a future change.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210528090752.3542186-9-npiggin@gmail.com