Scott Wood [Sat, 22 Dec 2018 03:32:51 +0000 (21:32 -0600)]
powerpc/dts/fsl: Fix dtc-flagged interrupt errors
mpc8641_hpcn was updated to 4-cell interrupt specifiers, but
PCI interrupt-map was not updated. It was also missing #interrupt-cells
on the outer PCI buses.
p1020rdb-pc was updated to 4-cell interrupt specifiers, but
the ethernet-phy nodes weren't updated.
mpc832x_rdb had an invalid "interrupts = <0>" on the ethernet-phy nodes.
Besides being the wrong number of cells, 0 is not a valid IPIC interrupt
according to ipic.c. Presumably it was meant to indicate that these
PHYs are not connected to an interrupt.
Scott Wood [Wed, 31 Oct 2018 06:57:35 +0000 (14:57 +0800)]
powerpc/fsl: Use new clockgen binding
The driver retains compatibility with old device trees, but we don't
want the old nodes lying around to be copied, used as a reference
(some of the mux options are incorrect), or simply left as clutter.
Signed-off-by: Scott Wood <oss@buserror.net> Signed-off-by: Tang Yuantian <andy.tang@nxp.com>
[scottwood: removed sysclk node added by Andy] Signed-off-by: Scott Wood <oss@buserror.net>
Christophe Leroy [Mon, 10 Dec 2018 11:41:29 +0000 (11:41 +0000)]
powerpc/83xx: handle machine check caused by watchdog timer
When the watchdog timer is set in interrupt mode, it causes a
machine check when it times out. The purpose of this mode is to
ease debugging, not to crash the kernel and reboot the machine.
This patch implements special handling for that case, in order to not
crash the kernel if the watchdog times out while in interrupt context or
within the idle task.
swiotlb will only bounce buffer when the effective dma address for the
device is smaller than the actual DMA range. Instead of flipping between
the swiotlb and nommu ops for FSL SoCs that have the second outbound
window, just don't set the bus dma_mask in this case.
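For illustration, the bounce decision swiotlb makes is roughly the following (a generic sketch, not the actual patch):

  /* Simplified sketch: bounce only if the device can't address the
   * buffer directly. With no bus dma_mask set, only the device's own
   * dma_mask limits the direct path. */
  static bool needs_bounce(struct device *dev, dma_addr_t addr, size_t size)
  {
          u64 limit = min_not_zero(*dev->dma_mask, dev->bus_dma_mask);

          return addr + size - 1 > limit;
  }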
Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Scott Wood <oss@buserror.net>
Diana Craciun [Wed, 12 Dec 2018 14:03:06 +0000 (16:03 +0200)]
powerpc/fsl: Flush the branch predictor at each kernel entry (32 bit)
In order to protect against speculation attacks on
indirect branches, the branch predictor is flushed at
kernel entry to protect against the following situations:
- userspace process attacking another userspace process
- userspace process attacking the kernel
Basically, when the privilege level changes (i.e. the kernel
is entered), the branch predictor state is flushed.
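Conceptually the flush is a write of the flash-invalidate bit to BUCSR on every kernel entry, along the lines of this sketch (BUCSR_FLASH_INV stands in for the real bit macro, which may differ in the actual patch):

  /* Sketch only: flash-invalidate the branch predictor on kernel entry */
  static inline void flush_branch_predictor(void)
  {
          mtspr(SPRN_BUCSR, mfspr(SPRN_BUCSR) | BUCSR_FLASH_INV); /* hypothetical bit name */
          isync();
  }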
Signed-off-by: Diana Craciun <diana.craciun@nxp.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Diana Craciun [Wed, 12 Dec 2018 14:03:05 +0000 (16:03 +0200)]
powerpc/fsl: Flush the branch predictor at each kernel entry (64bit)
In order to protect against speculation attacks on
indirect branches, the branch predictor is flushed at
kernel entry to protect against the following situations:
- userspace process attacking another userspace process
- userspace process attacking the kernel
Basically, when the privilege level changes (i.e. the
kernel is entered), the branch predictor state is flushed.
Signed-off-by: Diana Craciun <diana.craciun@nxp.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Diana Craciun [Wed, 12 Dec 2018 14:03:03 +0000 (16:03 +0200)]
powerpc/fsl: Emulate SPRN_BUCSR register
In order to flush the branch predictor, the guest kernel performs
writes to the BUCSR register, which is hypervisor privileged. However,
the branch predictor is already flushed at each KVM entry, so just
return to the guest as soon as possible.
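In other words the emulation can accept the guest's write and do nothing, roughly like this sketch (the function name is illustrative, not the actual patch):

  /* Sketch: guest writes to BUCSR become a no-op because the host
   * already flushes the branch predictor on every KVM entry. */
  static int emulate_mtspr_bucsr(struct kvm_vcpu *vcpu, ulong spr_val)
  {
          /* intentionally ignore spr_val, return to the guest quickly */
          return EMULATE_DONE;
  }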
Diana Craciun [Wed, 12 Dec 2018 14:03:00 +0000 (16:03 +0200)]
powerpc/fsl: Add infrastructure to fixup branch predictor flush
In order to protect against speculation attacks (Spectre
variant 2) on NXP PowerPC platforms, the branch predictor
should be flushed when the privilege level changes.
This patch adds the infrastructure to fix up at runtime
the code sections that perform the branch predictor flush,
depending on a boot argument which is added later in a
separate patch.
Signed-off-by: Diana Craciun <diana.craciun@nxp.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Mon, 17 Dec 2018 14:18:27 +0000 (14:18 +0000)]
powerpc/prom: move the device tree if not in declared memory.
If the device tree doesn't reside in the memory which is declared
inside it, it has to be moved as well, as this memory will not be
mapped by the kernel.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Add some documentation on which CPU versions map to which ISA
versions. This is all publicly available information, some of it
already in the kernel source, but it's much nicer to have it all in
one place.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Fri, 14 Dec 2018 13:57:11 +0000 (00:57 +1100)]
powerpc/configs: Don't enable PPC_EARLY_DEBUG in defconfigs
This reverts the remains of commit 8f58a55db7f9 ("powerpc: Update
default configurations").
That commit was preceded by a commit which added a config option to
control use of BOOTX for early debug, ie. PPC_EARLY_DEBUG_BOOTX, and
then the update of the defconfigs was intended to not change behaviour
by then enabling the new config option.
However, enabling PPC_EARLY_DEBUG had other consequences, notably
causing us to register the udbg console at the end of udbg_early_init().
This means on a system which doesn't have anything that BOOTX can
use (most systems), we register the udbg console very early but the
bootx code just throws everything away, meaning early boot messages
are never printed to the console.
What we want to happen is for the udbg console to only be registered
later (from setup_arch()) once we've setup udbg_putc, and then all
early boot messages will be replayed.
Arnd Bergmann [Mon, 10 Dec 2018 21:51:57 +0000 (22:51 +0100)]
powerpc: eeh_event: convert semaphore to completion
For this use case, completions and semaphores are equivalent,
but semaphores are an awkward interface that should generally
be avoided, so use the completion instead.
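The shape of such a conversion, as a generic illustration (identifier names are hypothetical, not the exact eeh_event diff):

  /* Before: a semaphore (initialised to 0) used purely for signalling */
  down(&eeh_eventlist_sem);                        /* wait for an event  */
  up(&eeh_eventlist_sem);                          /* signal a new event */

  /* After: a completion expresses the wait/wake intent directly */
  static DECLARE_COMPLETION(eeh_eventlist_event);
  wait_for_completion_interruptible(&eeh_eventlist_event);
  complete(&eeh_eventlist_event);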
Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Greg Kurz [Mon, 10 Dec 2018 15:13:38 +0000 (16:13 +0100)]
ocxl/afu_irq: Don't include <asm/pnv-ocxl.h>
The AFU irq code doesn't need to reach out to the platform.
Signed-off-by: Greg Kurz <groug@kaod.org>
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Greg Kurz [Mon, 10 Dec 2018 15:18:13 +0000 (16:18 +0100)]
ocxl: Clarify error path in setup_xsl_irq()
Implementing rollback with goto and labels is a common practice that
leads to prettier and more maintainable code. FWIW, this design pattern
is already being used in alloc_link() a few lines below in this file.
Do the same in setup_xsl_irq().
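For reference, the pattern in question looks like this generic sketch (names are illustrative):

  static int setup_example(struct foo *foo)
  {
          int rc;

          rc = alloc_a(foo);
          if (rc)
                  return rc;

          rc = alloc_b(foo);
          if (rc)
                  goto err_free_a;

          return 0;

  err_free_a:
          free_a(foo);
          return rc;
  }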
Signed-off-by: Greg Kurz <groug@kaod.org>
Acked-by: Frederic Barrat <fbarrat@linux.ibm.com>
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The bamboo dts has a bug: it uses a non-naturally aligned range
for PCI memory space. This isn't supported by the code, thus
causing PCI to break on this system.
This is due to the fact that while the chip memory map has 1G
reserved for PCI memory, it's only 512M aligned. The code doesn't
know how to split that into 2 different PMMs and fails, so limit
the region to 512M.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Darren Stevens [Sun, 19 Aug 2018 20:21:55 +0000 (21:21 +0100)]
powerpc/pasemi: Add PCI initialisation for Nemo board.
The A-Eon Amigaone X1000's Nemo motherboard has an AMD SB600
connected to one of the PCI-e root ports on its PaSemi
Pwrficient 1628M SoC. Normally the SB600 southbridge would be
connected to a hidden PCI-e port on the system's northbridge,
and as a result it doesn't fully comply with the PCI-e spec.
Add code to relax the PCI-e detection in both the root port
and the Linux kernel, allowing on-board devices to be detected.
Signed-off-by: Darren Stevens <darren@stevens-zone.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Fri, 14 Dec 2018 15:23:33 +0000 (15:23 +0000)]
powerpc/mm: Make NULL pointer dereferences explicit on bad page faults.
As on several other arches, including x86, this patch makes it explicit
that a bad page fault is a NULL pointer dereference when the fault
address is lower than PAGE_SIZE.
In the meantime, this patch makes all bad_page_fault() messages
shorter so that they remain on one single line. And it prefixes them
with "BUG: " so that they are easily grepped.
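The resulting check amounts to something like the following sketch (message wording approximate):

  /* Sketch: make the NULL pointer case explicit in bad_page_fault() */
  if (address < PAGE_SIZE)
          pr_alert("BUG: Kernel NULL pointer dereference at 0x%08lx\n", address);
  else
          pr_alert("BUG: Unable to handle kernel data access at 0x%08lx\n", address);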
The CXL code never even looks at the dma mask, so there is no good
reason for this sanity check. Remove it because it gets in the way
of the dma ops refactoring.
Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
powerpc/dma: split the two __dma_alloc_coherent implementations
The implementation for the CONFIG_NOT_COHERENT_CACHE case doesn't share
any code with the one for systems with coherent caches. Split it off
and merge it with the helpers in dma-noncoherent.c that have no other
callers.
Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
powerpc/dma: remove the unused ISA_DMA_THRESHOLD export
Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
powerpc/dma: remove the unused ARCH_HAS_DMA_MMAP_COHERENT define
Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This function is internal to the DMA API implementation. Instead use
the DMA API to properly unmap. Note that the DMA API usage in this
driver is a disaster and urgently needs some work - it is missing all
the unmaps, seems to do a secondary map in one place where it looks like
it should do an unmap, to work around cache coherency, and the
directions passed in seem to be partially wrong.
Signed-off-by: Christoph Hellwig <hch@lst.de> Tested-by: Christian Lamparter <chunkeey@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
powerpc/dma: properly wire up the unmap_page and unmap_sg methods
The unmap methods need to transfer memory ownership back from the
device to the cpu by identical means as dma_sync_*_to_cpu. I'm not
sure powerpc needs to do any work in this transfer direction, but
given that it does invalidate the caches in dma_sync_*_to_cpu already
we should make sure we also do so on unmapping.
Signed-off-by: Christoph Hellwig <hch@lst.de>
[mpe: s/dir/direction in dma_nommu_unmap_page()] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Fri, 14 Dec 2018 10:27:47 +0000 (10:27 +0000)]
powerpc/prom: fix early DEBUG messages
This patch fixes early DEBUG messages in prom.c:
- Use %px instead of %p to see the addresses
- Cast memblock_phys_mem_size() to (unsigned long long) to
avoid a build failure when phys_addr_t is not 64 bits.
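For example, the patched debug statements end up looking roughly like this (illustrative):

  /* %px prints the raw address (acceptable for DEBUG-only output); the
   * cast keeps the format valid whether phys_addr_t is 32 or 64 bits. */
  DBG("initial_boot_params = %px\n", initial_boot_params);
  DBG("memory size = 0x%llx\n", (unsigned long long)memblock_phys_mem_size());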
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Greg Kurz [Sun, 16 Dec 2018 21:28:50 +0000 (22:28 +0100)]
ocxl: Fix endianness bug in ocxl_link_update_pe()
All fields in the PE are big-endian. Use cpu_to_be32() like everywhere
else something is written to the PE. Otherwise a wrong TID will be used
by the NPU. If this TID happens to point to an existing thread sharing
the same mm, it could be woken up by mistake. This is highly improbable
though. The likely outcome of this is the NPU not finding the target
thread and forcing the AFU into sending an interrupt, which userspace
is supposed to handle anyway.
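In other words, the store becomes something along these lines (field name illustrative):

  /* All PE fields are big-endian, so byte-swap on store */
  pe->tid = cpu_to_be32(tid);    /* was: pe->tid = tid; */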
Fixes: 3d2369b01ba7 ("ocxl: Expose the thread_id needed for wait on POWER9")
Cc: stable@vger.kernel.org # v4.18
Signed-off-by: Greg Kurz <groug@kaod.org>
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Joel Stanley [Mon, 12 Nov 2018 05:28:06 +0000 (15:58 +1030)]
powerpc/32: Avoid unsupported flags with clang
When building for ppc32 with clang these flags are unsupported:
-ffixed-r2 and -mmultiple
llvm's lib/Target/PowerPC/PPCRegisterInfo.cpp marks r2 as reserved
when building for SVR4ABI and !ppc64:
// The SVR4 ABI reserves r2 and r13
if (Subtarget.isSVR4ABI()) {
// We only reserve r2 if we need to use the TOC pointer. If we have no
// explicit uses of the TOC pointer (meaning we're a leaf function with
// no constant-pool loads, etc.) and we have no potential uses inside an
// inline asm block, then we can treat r2 has an ordinary callee-saved
// register.
const PPCFunctionInfo *FuncInfo = MF.getInfo<PPCFunctionInfo>();
if (!TM.isPPC64() || FuncInfo->usesTOCBasePtr() || MF.hasInlineAsm())
markSuperRegs(Reserved, PPC::R2); // System-reserved register
markSuperRegs(Reserved, PPC::R13); // Small Data Area pointer register
}
This means we can safely omit -ffixed-r2 when building for 32-bit
targets.
The -mmultiple/-mno-multiple flags are not supported by clang, so
platforms that might support them miss out on using multiple-word
instructions.
We wrap these flags in cc-option so that when Clang gains support the
kernel will be able to use them.
Clang 8 can then build a ppc44x_defconfig which boots in Qemu:
make CC=clang-8 ARCH=powerpc CROSS_COMPILE=powerpc-linux-gnu- ppc44x_defconfig
./scripts/config -e CONFIG_DEVTMPFS -d DEVTMPFS_MOUNT
make CC=clang-8 ARCH=powerpc CROSS_COMPILE=powerpc-linux-gnu-
Joel Stanley [Fri, 2 Nov 2018 00:44:55 +0000 (11:14 +1030)]
raid6/ppc: Fix build for clang
We cannot build these files with clang as it does not allow altivec
instructions in assembly when -msoft-float is passed.
Jinsong Ji <jji@us.ibm.com> wrote:
> We currently disable Altivec/VSX support when enabling soft-float. So
> any usage of vector builtins will break.
>
> Enable Altivec/VSX with soft-float may need quite some clean up work, so
> I guess this is currently a limitation.
>
> Removing -msoft-float will make it work (and we are lucky that no
> floating point instructions will be generated as well).
This is a workaround until the issue is resolved in clang.
powerpc/perf: Add constraints for power9 l2/l3 bus events
In previous generation processors, both bus events and direct
events of the performance monitoring unit could be individually
programmed and monitored in the PMCs.
But in Power9, L2/L3 bus events are always available as a
"bank" of 4 events. To obtain the counts for any of the
l2/l3 bus events in a given bank, the user will have to
program PMC4 with corresponding l2/l3 bus event for that
bank.
This patch enforces two constraints in the case of L2/L3 bus events:
1) Any L2/L3 event, when programmed, is also expected to program the
corresponding PMC4 event from that group.
2) The PMC4 event should always be programmed first, due to a group
constraint logic limitation.
The raw event code has a couple of fields, "unit" and "cache", to capture
the "unit" to monitor for a given pmcxsel and the cache reload qualifier
to program in MMCR1.
isa207_get_constraint() refers to the "unit" field to update the MMCRC
(L2/L3) Event bus control fields with the "cache" bits of the raw event
code.
These are power8 specific and not supported by the PowerISA v3.0 PMU, so
wrap the checks to be power8 specific. Also, the "cache" bit field is
referred to when updating MMCR1[16:17], and this check can be power8
specific.
Fixes: f2f0fe46fd75a ('powerpc/perf: factor out power8 pmu functions') Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
powerpc/perf: Update perf_regs structure to include SIER
On each sample, the Sample Instruction Event Register (SIER) content
is saved in pt_regs. SIER does not have an entry as-is in pt_regs;
instead, the SIER content is saved in the "dar" register of pt_regs.
This patch adds another entry to the perf_regs structure to include
"SIER" printing, which internally maps to the "dar" of pt_regs.
It also checks for SIER availability on the platform and presents the
value accordingly.
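The mapping itself is a small addition to perf_reg_value(), roughly (a sketch; the availability check is omitted and the index name is assumed):

  /* Sketch: SIER is stashed in pt_regs->dar by the PMU interrupt code,
   * so the new perf_regs index just reports that value. */
  if (idx == PERF_REG_POWERPC_SIER)
          return regs->dar;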
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
powerpc/perf: Fix thresholding counter data for unknown type
MMCRA[34:36] and MMCRA[38:44] expose the thresholding counter value.
The thresholding counter can be used to count latency cycles, such as
from a load miss to the reload. But the threshold counter value is not
relevant when the sampled instruction type is unknown or reserved. This
patch forces the thresholding counter value to zero when the sampled
instruction type is unknown or reserved.
Fixes: 5296cefb7779('powerpc/perf: Support to export MMCRA[TEC*] field to userspace') Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
powerpc/mm/hash: Handle user access of kernel address gracefully
In commit d6dfa0c605c0 ("powerpc/mm: Move the DSISR_PROTFAULT sanity
check") we moved the protection fault access check before the vma
lookup. That means we hit that WARN_ON when user space accesses a
kernel address. Before that commit this was handled by find_vma() not
finding vma for the kernel address and considering that access as bad
area access.
Avoid the confusing WARN_ON and convert that to a ratelimited printk.
for exec:
a.out[6067]: User access of kernel address (c00000000000dea0) - exploit attempt? (uid: 1000)
a.out[6067]: segfault (11) at c00000000000dea0 nip c00000000000dea0 lr 129d507b0 code 1
a.out[6067]: Bad NIP, not dumping instructions.
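The check producing the message above amounts to something like this sketch (the exact condition and formatting may differ from the patch):

  /* Sketch: no WARN, just a rate-limited diagnostic */
  if (address >= TASK_SIZE)
          pr_crit_ratelimited("%s[%d]: User access of kernel address (%lx) - exploit attempt? (uid: %d)\n",
                              current->comm, current->pid, address,
                              from_kuid(&init_user_ns, current_uid()));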
Fixes: d6dfa0c605c0 ("powerpc/mm: Move the DSISR_PROTFAULT sanity check") Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Tested-by: Breno Leitao <leitao@debian.org>
[mpe: Don't split printk() string across lines] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Wed, 28 Nov 2018 17:21:10 +0000 (17:21 +0000)]
powerpc/mm: add exec protection on powerpc 603
The 603 doesn't have a HASH table; TLB misses are handled by
software. It is therefore possible to generate a page fault when
_PAGE_EXEC is not set, like in nohash/32.
There is one "reserved" PTE bit available; this patch uses
it for _PAGE_EXEC.
In order to support it, set_pte_filter() and
set_access_flags_filter() are made common, and the handling
is made dependent on MMU_FTR_HPTE_TABLE.
commit af4cac33c351 ("[POWERPC] Remove the dregs of APUS support from
arch/powerpc") removed CONFIG_APUS, but forgot to remove the logic
which adapts tophys() and tovirt() for it.
This patch removes the last stale pieces.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 18 Oct 2018 05:22:06 +0000 (05:22 +0000)]
powerpc/mm: remove unused variable
In file included from ./include/linux/hugetlb.h:445:0,
from arch/powerpc/kernel/setup-common.c:37:
./arch/powerpc/include/asm/hugetlb.h: In function ‘huge_ptep_clear_flush’:
./arch/powerpc/include/asm/hugetlb.h:154:8: error: variable ‘pte’ set but not used [-Werror=unused-but-set-variable]
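One way to silence it (a sketch, not necessarily the exact patch) is to stop keeping the returned PTE in a local that is never read:

  static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
                                           unsigned long addr, pte_t *ptep)
  {
          huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
          flush_hugetlb_page(vma, addr);
  }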
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Mon, 10 Dec 2018 08:08:28 +0000 (08:08 +0000)]
lib: fix build failure in CONFIG_DEBUG_VIRTUAL test
On several arches, virt_to_phys() is in io.h
Build fails without it:
CC lib/test_debug_virtual.o
lib/test_debug_virtual.c: In function 'test_debug_virtual_init':
lib/test_debug_virtual.c:26:7: error: implicit declaration of function 'virt_to_phys' [-Werror=implicit-function-declaration]
pa = virt_to_phys(va);
^
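A minimal fix along those lines (whether the actual patch uses <asm/io.h> or <linux/io.h> is an assumption here):

  /* lib/test_debug_virtual.c: pull in the header declaring virt_to_phys() */
  #include <asm/io.h>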
Fixes: b2dfaa4acf0b ("lib: add test module for CONFIG_DEBUG_VIRTUAL")
CC: stable@vger.kernel.org
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
was added in 2006 in commit 0539014bfe16 ("[PATCH] powerpc32: Set cpu
explicitly in kernel compiles"). This change was acceptable since the
TARGET_CPU logic was 64-bit only.
Since commit bb82fdd204cc ("powerpc: Allow CPU selection
also on PPC32") this logic is no longer acceptable: it currently
appends -mcpu=powerpc at the end of the command line, after any
TARGET_CPU-specific flags.
Fixes: bb82fdd204cc ("powerpc: Allow CPU selection also on PPC32") Signed-off-by: Mathieu Malaterre <malat@debian.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
ipic_set_priority() has been unused since 2006 when the last usage was
removed in commit bb11a33e1c11 ("[POWERPC] Adapt ipic driver to new
host_ops interface, add set_irq_type to set IRQ sense").
Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Mon, 17 Dec 2018 11:11:54 +0000 (22:11 +1100)]
Merge branch 'fixes' into next
Merge our fixes branch again, this has a couple of build fixes and also
a change to do_syscall_trace_enter() that will conflict with a patch we
want to apply in next.
powerpc/ptrace: replace ptrace_report_syscall() with a tracehook call
Arch code should use tracehook_*() helpers, as documented in
include/linux/tracehook.h; ptrace_report_syscall() is not expected to
be used outside that file.
The patch does not look very nice, but at least it is correct
and opens the way for PTRACE_GET_SYSCALL_INFO API.
Co-authored-by: Dmitry V. Levin <ldv@altlinux.org>
Fixes: 8598a538b0ef ("powerpc/ptrace: Add support for PTRACE_SYSEMU")
Signed-off-by: Elvira Khabirova <lineprinter@altlinux.org>
Signed-off-by: Dmitry V. Levin <ldv@altlinux.org>
[mpe: Take this as a minimal fix for 4.20, we'll rework it later] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
powerpc/mm: Fallback to RAM if the altmap is unusable
The "altmap" is used to provide a pool of memory that is reserved for
the vmemmap backing of hot-plugged memory. This is useful when adding
a large amount of ZONE_DEVICE memory to a system with a limited amount
of normal memory.
On ppc64 we use huge pages to map the vmemmap, which requires the
backing storage to be contiguous and aligned to the hugepage size. The
altmap implementation allows the altmap provider to reserve a few PFNs
at the start of the range for its own use, and when this occurs the
first chunk of the altmap is not usable for hugepage mappings. On hash
there is no sane way to fall back to a normal sized page mapping so we
fail the allocation. This results in memory hotplug failing with
ENOMEM when the new range doesn't fall into an existing vmemmap block.
This patch handles this case by falling back to using system memory
rather than failing if we cannot allocate from the altmap. This
fallback should only ever be used for the first vmemmap block so it
should not cause excess memory consumption.
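The fallback is essentially the following allocation order (a sketch, assuming the 4.20-era vmemmap helpers):

  /* Sketch: try the altmap first, then fall back to system memory */
  void *p = NULL;

  if (altmap)
          p = altmap_alloc_block_buf(page_size, altmap);
  if (!p)
          p = vmemmap_alloc_block_buf(page_size, node);
  if (!p)
          return -ENOMEM;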
Fixes: 8566e8df4cac ("mm: pass the vmem_altmap to vmemmap_populate") Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
powerpc/papr_scm: Use ibm,unit-guid as the iset cookie
The interleave set cookie is used to determine if a label stored in the
metadata space should be applied to the current region. This is
important in the case of NVDIMMs since the firmware may change the
interleaving configuration of a DIMM which would invalidate the existing
labels. In our case the hypervisor hides those details from us so we
don't really care, but libnvdimm still requires the interleave set
cookie to be non-zero.
For our purposes we just need the set cookie to be unique and fixed for
a given PAPR SCM region and using the unit-guid (really a UUID) is fine
for this purpose.
Fixes: 3f927cd9ef76 ("powerpc/pseries: Add driver for PAPR SCM regions") Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
[mpe: Use kernel types (u64)] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
When a new nvdimm device is registered with libnvdimm via
nvdimm_create() it is added as a device on the nvdimm bus. The probe
function for the DIMM driver is potentially quite slow so actually
registering and probing the device is done in an async domain rather
than immediately after device creation. This can result in a race where
the region device (created 2nd) is probed first and fails to activate at
boot.
To fix this we use the same approach as the ACPI/NFIT driver which is to
check that all the DIMM devices registered successfully. LibNVDIMM
provides the nvdimm_bus_count_dimms() function which synchronises with
the async domain and verifies that the dimm was successfully registered
with the bus.
If either of these does not occur then we bail.
Fixes: 3f927cd9ef76 ("powerpc/pseries: Add driver for PAPR SCM regions") Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The return values of a h-call are returned in the CPU registers and
written to the provided buffer by the plpar_hcall() wrapper. As a result
the values written to memory are always in the native endian and should
not be byte swapped.
The initial implementation of the h-call interface was done in qemu,
and the returned values were byte swapped unnecessarily in both the
hypervisor and the driver, so this was only noticed when bringing up
the PowerVM implementation.
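In other words, the buffer filled in by plpar_hcall() can be consumed directly (a simplified sketch; opcode and arguments are placeholders):

  unsigned long ret[PLPAR_HCALL_BUFSIZE];
  long rc = plpar_hcall(opcode, ret, arg0, arg1);   /* placeholders */

  if (rc == H_SUCCESS)
          bound_addr = ret[1];          /* already native endian: no be64_to_cpu() */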
Fixes: 3f927cd9ef76 ("powerpc/pseries: Add driver for PAPR SCM regions") Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The ibm,unit-sizes property was originally specified as an array of two
u32s corresponding to the memory block size, and the number of blocks
available in that region. A fairly last-minute change to the SCM DT
specification split that into two separate u64 properties:
ibm,block-sizes and ibm,number-of-blocks that convey the same
information. No firmware / hypervisor that emitted the ibm,unit-size
property ever appeared in the wild.
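Reading them then amounts to two u64 property lookups, roughly (a sketch; error handling trimmed):

  u64 block_size, nr_blocks;

  if (of_property_read_u64(dn, "ibm,block-sizes", &block_size) ||
      of_property_read_u64(dn, "ibm,number-of-blocks", &nr_blocks))
          return -ENODEV;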
Fixes: 3f927cd9ef76 ("powerpc/pseries: Add driver for PAPR SCM regions") Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
[mpe: Use kernel types (u32/u64)] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Fix an off-by-one error in the memory resource range. This resource is
used to determine the address range of the memory to be hot-plugged as
ZONE_DEVICE memory. The current end address results in the kernel
attempting to map an additional memblock and the hypervisor may reject
the mapping resulting in the entire hot-plug failing.
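For illustration, resource ranges are inclusive, so the end address has to be the last byte of the range:

  /* [start, end] is inclusive: the last byte, not one past it */
  res->start = base;
  res->end   = base + nr_blocks * block_size - 1;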
Fixes: 3f927cd9ef76 ("powerpc/pseries: Add driver for PAPR SCM regions") Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Making PAPR_SCM select LIBNVDIMM results in circular dependencies in
Kconfig when another symbol depends on it. Fix this by replacing the
select with a depends.
Fixes: 3f927cd9ef76 ("powerpc/pseries: Add driver for PAPR SCM regions")
Reported-by: Alastair D'Silva <alastair@d-silva.org>
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Sandipan Das [Thu, 6 Dec 2018 09:27:01 +0000 (14:57 +0530)]
powerpc/bpf: Fix broken uapi for BPF_PROG_TYPE_PERF_EVENT
Now that there are different variants of pt_regs for userspace and
kernel, the uapi for the BPF_PROG_TYPE_PERF_EVENT program type must be
changed by exporting the user_pt_regs structure instead of the pt_regs
structure that is in-kernel only.
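Concretely, the uapi type for this program class becomes a typedef of the exported structure, roughly:

  /* arch/powerpc uapi sketch for BPF_PROG_TYPE_PERF_EVENT */
  #include <asm/ptrace.h>

  typedef struct user_pt_regs bpf_user_pt_regs_t;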
Fixes: 1a123cc850b4 ("powerpc: Split user/kernel definitions of struct pt_regs") Signed-off-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
In commit 037cda8a3f9d ("powerpc/boot: Expose Kconfig symbols to
wrapper") we added a dependency to serial.c on autoconf.h:
$(obj)/serial.c: $(obj)/autoconf.h
This works when building in-tree (ie. with KBUILD_OUTPUT unset)
because the obj tree is the src tree.
But when building with eg. O=build and -j 1 the build fails:
gcc ... -I../arch/powerpc/boot -c -o arch/powerpc/boot/serial.o arch/powerpc/boot/serial.c
gcc: error: arch/powerpc/boot/serial.c: No such file or directory
Why this only happens with -j 1 is not clear; when building with
-j greater than 1 we somehow decide to look for serial.c in the src
tree (../) instead.
Regardless, we shouldn't be specifying a dependency on serial.c in the
build tree; we want a dependency on the version in $(srctree), so fix
the rule to say that.
Fixes: 037cda8a3f9d ("powerpc/boot: Expose Kconfig symbols to wrapper") Tested-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Fri, 16 Nov 2018 17:06:43 +0000 (17:06 +0000)]
powerpc/mm: dump segment registers on book3s/32
This patch creates a debugfs file to see the content of the
segment registers:
# cat /sys/kernel/debug/segment_registers
---[ User Segments ]---
0x00000000-0x0fffffff Kern key 1 User key 1 VSID 0xade2b0
0x10000000-0x1fffffff Kern key 1 User key 1 VSID 0xade3c1
0x20000000-0x2fffffff Kern key 1 User key 1 VSID 0xade4d2
0x30000000-0x3fffffff Kern key 1 User key 1 VSID 0xade5e3
0x40000000-0x4fffffff Kern key 1 User key 1 VSID 0xade6f4
0x50000000-0x5fffffff Kern key 1 User key 1 VSID 0xade805
0x60000000-0x6fffffff Kern key 1 User key 1 VSID 0xade916
0x70000000-0x7fffffff Kern key 1 User key 1 VSID 0xadea27
0x80000000-0x8fffffff Kern key 1 User key 1 VSID 0xadeb38
0x90000000-0x9fffffff Kern key 1 User key 1 VSID 0xadec49
0xa0000000-0xafffffff Kern key 1 User key 1 VSID 0xaded5a
0xb0000000-0xbfffffff Kern key 1 User key 1 VSID 0xadee6b
---[ Kernel Segments ]---
0xc0000000-0xcfffffff Kern key 0 User key 1 VSID 0x000ccc
0xd0000000-0xdfffffff Kern key 0 User key 1 VSID 0x000ddd
0xe0000000-0xefffffff Kern key 0 User key 1 VSID 0x000eee
0xf0000000-0xffffffff Kern key 0 User key 1 VSID 0x000fff
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: Move it under /sys/kernel/debug/powerpc, make sr_init() __init] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Joel Stanley [Mon, 3 Dec 2018 23:07:46 +0000 (09:37 +1030)]
powerpc/math-emu: Update macros from GCC
The add_ssaaaa, sub_ddmmss, umul_ppmm and udiv_qrnnd macros originate
from GCC's longlong.h which in turn was copied from GMP's longlong.h a
few decades ago.
This was found when compiling with clang:
arch/powerpc/math-emu/fnmsub.c:46:2: error: invalid use of a cast in a
inline asm context requiring an l-value: remove the cast or build with
-fheinous-gnu-extensions
FP_ADD_D(R, T, B);
^~~~~~~~~~~~~~~~~
...
Segher points out: this was fixed in GCC over 16 years ago
( https://gcc.gnu.org/r56600 ), and in GMP (where it comes from)
presumably before that.
Update the add_ssaaaa, sub_ddmmss, umul_ppmm and udiv_qrnnd macros to
the latest GCC version in order to get rid of the invalid casts. These
were taken as-is from GCC's longlong.h in order to make future syncs
obvious. Other parts of sfp-machine.h were left as-is as the file
contains more features than present in longlong.h.
Link: https://github.com/ClangBuiltLinux/linux/issues/260
Signed-off-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
From what I've seen, every time this warning comes up it's bogus,
so let's ignore it.
Signed-off-by: Russell Currey <ruscur@russell.cc> Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 29 Nov 2018 14:07:21 +0000 (14:07 +0000)]
powerpc/8xx: reintroduce 16K pages with HW assistance
Using this HW assistance implies some constraints on the
page table structure:
- Regardless of the main page size used (4k or 16k), the
level 1 table (PGD) contains 1024 entries and each PGD entry covers
a 4 Mbytes area which is managed by a level 2 table (PTE) also
containing 1024 entries, each describing a 4k page.
- 16k pages require 4 identical entries in the L2 table.
- 512k page PTEs have to be spread every 128 bytes in the L2 table.
- 8M page PTEs are at the address pointed to by the L1 entry, and each
8M page requires 2 identical entries in the PGD.
In order to use hardware assistance with 16K pages, this patch does
the following modifications:
- Make PGD size independent of the main page size
- In 16k pages mode, redefine pte_t as a struct with 4 elements,
and populate those 4 elements in __set_pte_at() and pte_update()
- Adapt the size of the hugepage tables.
- Define a PTE_FRAGMENT_NB so that a 16k page contains 4 page tables.
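For the 16k case, the pte_t redefinition looks roughly like this sketch (the element type shown is illustrative; the real header has more machinery around it):

  /* Sketch: in 16k mode each Linux PTE is materialised as 4 identical
   * hardware entries, so pte_t carries all four words. */
  typedef struct { unsigned long pte, pte1, pte2, pte3; } pte_t;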
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 29 Nov 2018 14:07:15 +0000 (14:07 +0000)]
powerpc/8xx: Use hardware assistance in TLB handlers
Today, on the 8xx, the TLB handlers do a SW tablewalk, doing all
the calculation in ASM in order to match the Linux page
table structure.
The 8xx offers hardware assistance which allows significant size
reduction of the TLB handlers, hence also reduces the time spent
in the handlers.
However, using this HW assistance implies some constraints on the
page table structure:
- Regardless of the main page size used (4k or 16k), the
level 1 table (PGD) contains 1024 entries and each PGD entry covers
a 4 Mbytes area which is managed by a level 2 table (PTE) also
containing 1024 entries, each describing a 4k page.
- 16k pages require 4 identical entries in the L2 table.
- 512k page PTEs have to be spread every 128 bytes in the L2 table.
- 8M page PTEs are at the address pointed to by the L1 entry, and each
8M page requires 2 identical entries in the PGD.
This patch modifies the TLB handlers to use HW assistance for 4K PAGES.
Before that patch, the mean time spent in TLB miss handlers is:
- ITLB miss: 80 ticks
- DTLB miss: 62 ticks
After that patch, the mean time spent in TLB miss handlers is:
- ITLB miss: 72 ticks
- DTLB miss: 54 ticks
So the improvement is 10% for ITLB and 13% for DTLB misses
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 29 Nov 2018 14:07:13 +0000 (14:07 +0000)]
powerpc/8xx: Temporarily disable 16k pages and hugepages
In preparation for making use of hardware assistance in TLB handlers,
this patch temporarily disables 16K pages and hugepages. The reason
is that when using HW assistance in 4K pages mode, the Linux model
fits the HW model for 4K pages and 8M pages.
However, for 16K pages and 512K pages some additional work is needed
to make the Linux model fit the HW model.
The 8M pages will naturally come back when we switch to
HW assistance, without any additional handling.
In order to keep the following patch smaller, the current
special handling for 8M pages is removed here as well.
Therefore the 4K pages mode will be implemented first and without
support for 512k hugepages. Then the 512k hugepages will be brought
back. And the 16K pages will be implemented in the following step.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 29 Nov 2018 14:07:11 +0000 (14:07 +0000)]
powerpc/8xx: Move SW perf counters in first 32kb of memory
In order to simplify the handling of the 8xx-specific SW perf counters
in time-critical exceptions, this patch moves the counters into
the beginning of memory. This is possible because .text is readable
and the counters are never modified outside of the handlers.
By doing this, we avoid having to set a second register with
the upper part of the address of the counters.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>