linux.git
Merge branch 'for-next/kasan' into for-next/core
Will Deacon [Thu, 24 Jun 2021 13:04:00 +0000 (14:04 +0100)]
Merge branch 'for-next/kasan' into for-next/core

Optimise out-of-line KASAN checking when using software tagging.

* for-next/kasan:
  kasan: arm64: support specialized outlined tag mismatch checks

Merge branch 'for-next/insn' into for-next/core
Will Deacon [Thu, 24 Jun 2021 13:03:24 +0000 (14:03 +0100)]
Merge branch 'for-next/insn' into for-next/core

Refactoring of our instruction decoding routines and addition of some
missing encodings.

* for-next/insn:
  arm64: insn: avoid circular include dependency
  arm64: insn: move AARCH64_INSN_SIZE into <asm/insn.h>
  arm64: insn: decouple patching from insn code
  arm64: insn: Add load/store decoding helpers
  arm64: insn: Add some opcodes to instruction decoder
  arm64: insn: Add barrier encodings
  arm64: insn: Add SVE instruction class
  arm64: Move instruction encoder/decoder under lib/
  arm64: Move aarch32 condition check functions
  arm64: Move patching utilities out of instruction encoding/decoding

Merge branch 'for-next/entry' into for-next/core
Will Deacon [Thu, 24 Jun 2021 13:01:55 +0000 (14:01 +0100)]
Merge branch 'for-next/entry' into for-next/core

The never-ending entry.S refactoring continues, putting us in a much
better place wrt compiler instrumentation whilst moving more of the code
into C.

* for-next/entry:
  arm64: idle: don't instrument idle code with KCOV
  arm64: entry: don't instrument entry code with KCOV
  arm64: entry: make NMI entry/exit functions static
  arm64: entry: split SDEI entry
  arm64: entry: split bad stack entry
  arm64: entry: fold el1_inv() into el1h_64_sync_handler()
  arm64: entry: handle all vectors with C
  arm64: entry: template the entry asm functions
  arm64: entry: improve bad_mode()
  arm64: entry: move bad_mode() to entry-common.c
  arm64: entry: consolidate EL1 exception returns
  arm64: entry: organise entry vectors consistently
  arm64: entry: organise entry handlers consistently
  arm64: entry: convert IRQ+FIQ handlers to C
  arm64: entry: add a call_on_irq_stack helper
  arm64: entry: move NMI preempt logic to C
  arm64: entry: move arm64_preempt_schedule_irq to entry-common.c
  arm64: entry: convert SError handlers to C
  arm64: entry: unmask IRQ+FIQ after EL0 handling
  arm64: remove redundant local_daif_mask() in bad_mode()

Merge branch 'for-next/docs' into for-next/core
Will Deacon [Thu, 24 Jun 2021 12:37:47 +0000 (13:37 +0100)]
Merge branch 'for-next/docs' into for-next/core

Update booting requirements for the FEAT_HCX feature, added to v8.7 of
the architecture.

* for-next/docs:
  arm64: Document requirement for access to FEAT_HCX

Merge branch 'for-next/cpuidle' into for-next/core
Will Deacon [Thu, 24 Jun 2021 12:36:39 +0000 (13:36 +0100)]
Merge branch 'for-next/cpuidle' into for-next/core

Fix resume from idle when pNMI is being used.

* for-next/cpuidle:
  arm64: suspend: Use cpuidle context helpers in cpu_suspend()
  PSCI: Use cpuidle context helpers in psci_cpu_suspend_enter()
  arm64: Convert cpu_do_idle() to using cpuidle context helpers
  arm64: Add cpuidle context save/restore helpers

Merge branch 'for-next/cpufeature' into for-next/core
Will Deacon [Thu, 24 Jun 2021 12:35:46 +0000 (13:35 +0100)]
Merge branch 'for-next/cpufeature' into for-next/core

Additional CPU sanity checks for MTE and preparatory changes for systems
where not all of the CPUs support 32-bit EL0.

* for-next/cpufeature:
  arm64: Restrict undef hook for cpufeature registers
  arm64: Kill 32-bit applications scheduled on 64-bit-only CPUs
  KVM: arm64: Kill 32-bit vCPUs on systems with mismatched EL0 support
  arm64: Allow mismatched 32-bit EL0 support
  arm64: cpuinfo: Split AArch32 registers out into a separate struct
  arm64: Check if GMID_EL1.BS is the same on all CPUs
  arm64: Change the cpuinfo_arm64 member type for some sysregs to u64

Merge branch 'for-next/cortex-strings' into for-next/core
Will Deacon [Thu, 24 Jun 2021 12:33:57 +0000 (13:33 +0100)]
Merge branch 'for-next/cortex-strings' into for-next/core

Update our kernel string routines to the latest Cortex Strings
implementation.

* for-next/cortex-strings:
  arm64: update string routine copyrights and URLs
  arm64: Rewrite __arch_clear_user()
  arm64: Better optimised memchr()
  arm64: Import latest memcpy()/memmove() implementation
  arm64: Add assembly annotations for weak-PI-alias madness
  arm64: Import latest version of Cortex Strings' strncmp
  arm64: Import updated version of Cortex Strings' strlen
  arm64: Import latest version of Cortex Strings' strcmp
  arm64: Import latest version of Cortex Strings' memcmp

Merge branch 'for-next/caches' into for-next/core
Will Deacon [Thu, 24 Jun 2021 12:33:02 +0000 (13:33 +0100)]
Merge branch 'for-next/caches' into for-next/core

Big cleanup of our cache maintenance routines, which were confusingly
named and inconsistent in their implementations.

* for-next/caches:
  arm64: Rename arm64-internal cache maintenance functions
  arm64: Fix cache maintenance function comments
  arm64: sync_icache_aliases to take end parameter instead of size
  arm64: __clean_dcache_area_pou to take end parameter instead of size
  arm64: __clean_dcache_area_pop to take end parameter instead of size
  arm64: __clean_dcache_area_poc to take end parameter instead of size
  arm64: __flush_dcache_area to take end parameter instead of size
  arm64: dcache_by_line_op to take end parameter instead of size
  arm64: __inval_dcache_area to take end parameter instead of size
  arm64: Fix comments to refer to correct function __flush_icache_range
  arm64: Move documentation of dcache_by_line_op
  arm64: assembler: remove user_alt
  arm64: Downgrade flush_icache_range to invalidate
  arm64: Do not enable uaccess for invalidate_icache_range
  arm64: Do not enable uaccess for flush_icache_range
  arm64: Apply errata to swsusp_arch_suspend_exit
  arm64: assembler: add conditional cache fixups
  arm64: assembler: replace `kaddr` with `addr`

Merge branch 'for-next/build' into for-next/core
Will Deacon [Thu, 24 Jun 2021 12:31:41 +0000 (13:31 +0100)]
Merge branch 'for-next/build' into for-next/core

Tweak linker flags so that GDB can understand vmlinux when using RELR
relocations.

* for-next/build:
  Makefile: fix GDB warning with CONFIG_RELR

Merge branch 'for-next/boot' into for-next/core
Will Deacon [Thu, 24 Jun 2021 12:30:13 +0000 (13:30 +0100)]
Merge branch 'for-next/boot' into for-next/core

Boot path cleanups to enable early initialisation of per-cpu operations
needed by KCSAN.

* for-next/boot:
  arm64: scs: Drop unused 'tmp' argument to scs_{load, save} asm macros
  arm64: smp: initialize cpu offset earlier
  arm64: smp: unify task and sp setup
  arm64: smp: remove stack from secondary_data
  arm64: smp: remove pointless secondary_data maintenance
  arm64: assembler: add set_this_cpu_offset

Merge branch 'for-next/stacktrace' into for-next/core
Will Deacon [Thu, 24 Jun 2021 12:15:09 +0000 (13:15 +0100)]
Merge branch 'for-next/stacktrace' into for-next/core

Relax frame record alignment requirements to facilitate 8-byte alignment
with KASAN and Clang.

* for-next/stacktrace:
  arm64: stacktrace: Relax frame record alignment requirement to 8 bytes
  arm64: Change the on_*stack functions to take a size argument
  arm64: Implement stack trace termination record

arm64: Restrict undef hook for cpufeature registers
Raphael Gault [Mon, 17 May 2021 18:02:56 +0000 (13:02 -0500)]
arm64: Restrict undef hook for cpufeature registers

This commit modifies the mask of the mrs_hook declared in
arch/arm64/kernel/cpufeature.c, which emulates only feature register
access. This is necessary because the hook's mask was too large and
thus matched any mrs instruction, even those unrelated to the emulated
registers, which made the PMU emulation inefficient.

Signed-off-by: Raphael Gault <raphael.gault@arm.com>
Signed-off-by: Rob Herring <robh@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210517180256.2881891-1-robh@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: insn: avoid circular include dependency
Mark Rutland [Fri, 18 Jun 2021 15:11:22 +0000 (16:11 +0100)]
arm64: insn: avoid circular include dependency

Nathan reports that when building with CONFIG_LTO_CLANG_THIN=y, the
build fails due to BUILD_BUG_ON() not being defined before its use in
<asm/insn.h>.

The problem is that with LTO, we patch READ_ONCE(), and <asm/rwonce.h>
includes <asm/insn.h>, creating a circular include chain:

        <linux/build_bug.h>
        <linux/compiler.h>
        <asm/rwonce.h>
        <asm/alternative-macros.h>
        <asm/insn.h>
        <linux/build_bug.h>

... and so when <asm/insn.h> includes <linux/build_bug.h>, none of the
BUILD_BUG* definitions have happened yet.

To avoid this, let's move AARCH64_INSN_SIZE into a header without any
dependencies, such that it can always be safely included. At the same
time, avoid including <asm/alternative.h> in <asm/insn.h>, which should
no longer be necessary (and doesn't make sense when insn.h is consumed
by userspace).

Reported-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210621080830.GA37068@C02TD0UTHF1T.local
Fixes: 3e00e39d9dad ("arm64: insn: move AARCH64_INSN_SIZE into <asm/insn.h>")
Signed-off-by: Will Deacon <will@kernel.org>
arm64: suspend: Use cpuidle context helpers in cpu_suspend()
Marc Zyngier [Tue, 15 Jun 2021 11:12:27 +0000 (12:12 +0100)]
arm64: suspend: Use cpuidle context helpers in cpu_suspend()

Use cpuidle context helpers to switch to using DAIF.IF instead
of PMR to mask interrupts, ensuring that we suspend with
interrupts being able to reach the CPU interface.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Link: https://lore.kernel.org/r/20210615111227.2454465-5-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
PSCI: Use cpuidle context helpers in psci_cpu_suspend_enter()
Marc Zyngier [Tue, 15 Jun 2021 11:12:26 +0000 (12:12 +0100)]
PSCI: Use cpuidle context helpers in psci_cpu_suspend_enter()

The PSCI CPU suspend code isn't aware of the PMR vs DAIF game,
resulting in a system that locks up if entering CPU suspend
with GICv3 pNMI enabled.

To save the day, teach the suspend code about our new cpuidle
context helpers, which will do everything that's required just
like the usual WFI cpuidle code.

This fixes my Altra system, which would otherwise lock-up at
boot time when booted with irqchip.gicv3_pseudo_nmi=1.

Tested-by: Valentin Schneider <valentin.schneider@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Link: https://lore.kernel.org/r/20210615111227.2454465-4-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Convert cpu_do_idle() to using cpuidle context helpers
Marc Zyngier [Tue, 15 Jun 2021 11:12:25 +0000 (12:12 +0100)]
arm64: Convert cpu_do_idle() to using cpuidle context helpers

Now that we have helpers that are aware of the pseudo-NMI
feature, introduce them to cpu_do_idle(). This allows for
some nice cleanup.

No functional change intended.

Tested-by: Valentin Schneider <valentin.schneider@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210615111227.2454465-3-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Add cpuidle context save/restore helpers
Marc Zyngier [Tue, 15 Jun 2021 11:12:24 +0000 (12:12 +0100)]
arm64: Add cpuidle context save/restore helpers

As we need to start doing some additional work on all idle
paths, let's introduce a set of macros that will perform
the work related to the GICv3 pseudo-NMI idle entry/exit.

Compatibility stubs are introduced for 32-bit ARM.
As these helpers are currently unused, there is no functional
change.
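
For illustration, the intended usage pattern of these helpers is
roughly the following (a sketch based on this series; the exact type
and helper names may differ):

        struct arm_cpuidle_irq_context context;

        arm_cpuidle_save_irq_context(&context);    /* PMR -> DAIF masking */
        cpu_do_idle();                              /* enter low-power state */
        arm_cpuidle_restore_irq_context(&context);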

Tested-by: Valentin Schneider <valentin.schneider@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210615111227.2454465-2-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Kill 32-bit applications scheduled on 64-bit-only CPUs
Will Deacon [Tue, 8 Jun 2021 18:02:57 +0000 (19:02 +0100)]
arm64: Kill 32-bit applications scheduled on 64-bit-only CPUs

Scheduling a 32-bit application on a 64-bit-only CPU is a bad idea.

Ensure that 32-bit applications always take the slow-path when returning
to userspace on a system with mismatched support at EL0, so that we can
avoid trying to run on a 64-bit-only CPU and force a SIGKILL instead.
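
As a rough sketch, the forced kill amounts to something like the
following on the return-to-user slow path (hedged: the helpers shown,
e.g. system_32bit_el0_cpumask(), follow this series, but the exact
call site differs in the real patch):

        /*
         * Kill a 32-bit task that was scheduled onto a CPU lacking
         * 32-bit EL0 support before it can return to userspace.
         */
        if (is_compat_task() &&
            !cpumask_test_cpu(smp_processor_id(),
                              system_32bit_el0_cpumask()))
                force_sig(SIGKILL);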

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210608180313.11502-5-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
KVM: arm64: Kill 32-bit vCPUs on systems with mismatched EL0 support
Will Deacon [Tue, 8 Jun 2021 18:02:56 +0000 (19:02 +0100)]
KVM: arm64: Kill 32-bit vCPUs on systems with mismatched EL0 support

If a vCPU is caught running 32-bit code on a system with mismatched
support at EL0, then we should kill it.

Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210608180313.11502-4-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Allow mismatched 32-bit EL0 support
Will Deacon [Tue, 8 Jun 2021 18:02:55 +0000 (19:02 +0100)]
arm64: Allow mismatched 32-bit EL0 support

When confronted with a mixture of CPUs, some of which support 32-bit
applications and others which don't, we quite sensibly treat the system
as 64-bit only for userspace and prevent execve() of 32-bit binaries.

Unfortunately, some crazy folks have decided to build systems like this
with the intention of running 32-bit applications, so relax our
sanitisation logic to continue to advertise 32-bit support to userspace
on these systems and track the real 32-bit capable cores in a cpumask
instead. For now, the default behaviour remains but will be tied to
a command-line option in a later patch.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210608180313.11502-3-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: cpuinfo: Split AArch32 registers out into a separate struct
Will Deacon [Tue, 8 Jun 2021 18:02:54 +0000 (19:02 +0100)]
arm64: cpuinfo: Split AArch32 registers out into a separate struct

In preparation for late initialisation of the "sanitised" AArch32 register
state, move the AArch32 registers out of 'struct cpuinfo_arm64' and into their
own struct definition.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210608180313.11502-2-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: insn: move AARCH64_INSN_SIZE into <asm/insn.h>
Mark Rutland [Wed, 9 Jun 2021 10:23:01 +0000 (11:23 +0100)]
arm64: insn: move AARCH64_INSN_SIZE into <asm/insn.h>

For historical reasons, we define AARCH64_INSN_SIZE in
<asm/alternative-macros.h>, but it would make more sense to do so in
<asm/insn.h>. Let's move it into <asm/insn.h>, and add the necessary
include directives for this.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210609102301.17332-3-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: insn: decouple patching from insn code
Mark Rutland [Wed, 9 Jun 2021 10:23:00 +0000 (11:23 +0100)]
arm64: insn: decouple patching from insn code

Currently, <asm/insn.h> includes <asm/patching.h>. We intend that
<asm/insn.h> will be usable from userspace, so it doesn't make sense to
include headers for kernel-only features such as the patching routines,
and we'd intended to restrict <asm/insn.h> to instruction encoding
details.

Let's decouple the patching code from <asm/insn.h>, and explicitly
include <asm/patching.h> where it is needed. Since <asm/patching.h>
isn't included from assembly, we can drop the __ASSEMBLY__ guards.

At the same time, sort the kprobes includes so that it's easier to see
what is and isn't included.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210609102301.17332-2-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Makefile: fix GDB warning with CONFIG_RELR
Nick Desaulniers [Sat, 22 May 2021 01:26:24 +0000 (18:26 -0700)]
Makefile: fix GDB warning with CONFIG_RELR

GDB produces the following warning when debugging kernels built with
CONFIG_RELR:

BFD: /android0/linux-next/vmlinux: unknown type [0x13] section `.relr.dyn'

It can also prevent the use of debugging symbols that rely on such
relocations.

Peter suggests:
  [That flag] means that lld will use dynamic tags and section type
  numbers in the OS-specific range rather than the generic range. The
  kernel itself doesn't care about these numbers; it determines the
  location of the RELR section using symbols defined by a linker script.

Link: https://github.com/ClangBuiltLinux/linux/issues/1057
Suggested-by: Peter Collingbourne <pcc@google.com>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/r/20210522012626.2811297-1-ndesaulniers@google.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: idle: don't instrument idle code with KCOV
Mark Rutland [Mon, 7 Jun 2021 09:46:24 +0000 (10:46 +0100)]
arm64: idle: don't instrument idle code with KCOV

The low-level idle code in arch_cpu_idle() and its callees runs at a
time when portions of the kernel environment aren't available.
For example, RCU may not be watching, and lockdep state may be
out-of-sync with the hardware. Due to this, it is not sound to
instrument this code.

We generally avoid instrumentation by marking the entry functions as
`noinstr`, but currently this doesn't inhibit KCOV instrumentation.
Prevent this by factoring these functions into a new idle.c so that we
can disable KCOV for the entire compilation unit, as is done for the
core idle code in kernel/sched/idle.c.

We'd like to keep instrumentation of the rest of process.c, and for the
existing code in cpuidle.c, so a new compilation unit is preferable. The
arch_cpu_idle_dead() function in process.c is a cpu hotplug function
that is safe to instrument, so it is left as-is in process.c.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-21-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: entry: don't instrument entry code with KCOV
Mark Rutland [Mon, 7 Jun 2021 09:46:23 +0000 (10:46 +0100)]
arm64: entry: don't instrument entry code with KCOV

The code in entry-common.c runs at exception entry and return
boundaries, where portions of the kernel environment aren't available.
For example, RCU may not be watching, and lockdep state may be
out-of-sync with the hardware. Due to this, it is not sound to
instrument this code.

We generally avoid instrumentation by marking the entry functions as
`noinstr`, but currently this doesn't inhibit KCOV instrumentation.
Prevent this by disabling KCOV for the entire compilation unit.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-20-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: entry: make NMI entry/exit functions static
Mark Rutland [Mon, 7 Jun 2021 09:46:22 +0000 (10:46 +0100)]
arm64: entry: make NMI entry/exit functions static

Now that we only call arm64_enter_nmi() and arm64_exit_nmi() from within
entry-common.c, let's make these static to ensure this remains the case.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-19-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: entry: split SDEI entry
Mark Rutland [Mon, 7 Jun 2021 09:46:21 +0000 (10:46 +0100)]
arm64: entry: split SDEI entry

We'd like to keep all the entry sequencing in entry-common.c, as this
will allow us to ensure this is consistent, and free from any unsound
instrumentation.

Currently __sdei_handler() performs the NMI entry/exit sequences in
sdei.c. Let's split the low-level entry sequence from the event
handling, moving the former to entry-common.c and keeping the latter in
sdei.c. The event handling function is renamed to do_sdei_event(),
matching the do_${FOO}() pattern used for other exception handlers.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-18-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: entry: split bad stack entry
Mark Rutland [Mon, 7 Jun 2021 09:46:20 +0000 (10:46 +0100)]
arm64: entry: split bad stack entry

We'd like to keep all the entry sequencing in entry-common.c, as this
will allow us to ensure this is consistent, and free from any unsound
instrumentation.

Currently handle_bad_stack() performs the NMI entry sequence in traps.c.
Let's split the low-level entry sequence from the reporting, moving the
former to entry-common.c and keeping the latter in traps.c. To make it
clear that the reporting function never returns, it is renamed to
panic_bad_stack().

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-17-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: entry: fold el1_inv() into el1h_64_sync_handler()
Mark Rutland [Mon, 7 Jun 2021 09:46:19 +0000 (10:46 +0100)]
arm64: entry: fold el1_inv() into el1h_64_sync_handler()

An unexpected synchronous exception from EL1h could happen at any time,
and for robustness we should treat this as an NMI, making minimal
assumptions about the context the exception was taken from.

Currently el1_inv() assumes we can use enter_from_kernel_mode(), and
also assumes that we should inherit the original DAIF value. Neither of
these is desirable when we take an unexpected exception. Further,
after el1_inv() calls __panic_unhandled(), the remainder of the function
is unreachable, and therefore superfluous.

Let's address this and simplify things by having el1h_64_sync_handler()
call __panic_unhandled() directly, without any of the redundant logic.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reported-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-16-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: entry: handle all vectors with C
Mark Rutland [Mon, 7 Jun 2021 09:46:18 +0000 (10:46 +0100)]
arm64: entry: handle all vectors with C

We have 16 architectural exception vectors, and depending on kernel
configuration we handle 8 or 12 of these with C code, with the remaining
8 or 4 of these handled as special cases in the entry assembly.

It would be nicer if the entry assembly were uniform for all exceptions,
and we deferred any specific handling of the exceptions to C code. This
way the entry assembly can be more easily templated without ifdeffery or
special cases, and it's easier to modify the handling of these cases in
future (e.g. to dump additional registers or other context).

This patch reworks the entry code so that we always have a C handler for
every architectural exception vector, with the entry assembly being
completely uniform. We now have to handle exceptions from EL1t and EL1h,
and also have to handle exceptions from AArch32 even when the kernel is
built without CONFIG_COMPAT. To make this clear and to simplify
templating, we rename the top-level exception handlers with a consistent
naming scheme:

  asm: <el+sp>_<regsize>_<type>
  c:   <el+sp>_<regsize>_<type>_handler

... where:

  <el+sp> is `el1t`, `el1h`, or `el0t`
  <regsize> is `64` or `32`
  <type> is `sync`, `irq`, `fiq`, or `error`

... e.g.

  asm: el1h_64_sync
  c:   el1h_64_sync_handler

... with lower-level handlers simply using "el1" and "compat" as today.
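
Concretely, the C handlers end up with prototypes along these lines
(sketch; the asm entry points branch directly to them):

        asmlinkage void noinstr el1h_64_sync_handler(struct pt_regs *regs);
        asmlinkage void noinstr el1h_64_irq_handler(struct pt_regs *regs);
        asmlinkage void noinstr el0t_32_sync_handler(struct pt_regs *regs);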

For unexpected exceptions, this information is passed to
__panic_unhandled(), so it can report the specific vector an unexpected
exception was taken from, e.g.

| Unhandled 64-bit el1t sync exception

For vectors we never expect to enter legitimately, the C code is
generated using a macro to avoid code duplication. The exceptions are
handled via __panic_unhandled(), replacing bad_mode() (which is
removed).

The `kernel_ventry` and `entry_handler` assembly macros are updated to
handle the new naming scheme. In theory it should be possible to
generate the entry functions at the same time as the vectors using a
single table, but this will require reworking the linker script to split
the two into separate sections, so for now we have separate tables.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-15-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: entry: template the entry asm functions
Mark Rutland [Mon, 7 Jun 2021 09:46:17 +0000 (10:46 +0100)]
arm64: entry: template the entry asm functions

Now that the majority of the exception triage logic has been converted
to C, the entry assembly functions all have a uniform structure.

Let's generate them all with an assembly macro to reduce the amount of
code and to ensure they all remain in sync if we make changes in future.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-14-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: entry: improve bad_mode()
Mark Rutland [Mon, 7 Jun 2021 09:46:16 +0000 (10:46 +0100)]
arm64: entry: improve bad_mode()

Our use of bad_mode() has a few rough edges:

* AArch64 doesn't use the term "mode", and refers to "Execution
  states", "Exception levels", and "Selected stack pointer".

* We log the exception type (SYNC/IRQ/FIQ/SError), but not the actual
  "mode" (though this can be decoded from the SPSR value).

* We use bad_mode() as a second-level handler for unexpected synchronous
  exceptions, where the "mode" is legitimate, but the specific exception
  is not.

* We dump the ESR value, but call this "code", and so it's not clear to
  all readers that this is the ESR.

... and all of this can be somewhat opaque to those who aren't extremely
familiar with the code.

Let's make this a bit clearer by having bad_mode() log "Unhandled
${TYPE} exception" rather than "Bad mode in ${TYPE} handler", using
"ESR" rather than "code", and having the final panic() log "Unhandled
exception" rather than "Bad mode".

In future we'd like to log the specific architectural vector rather than
just the type of exception, so we also split the core of bad_mode() out
into a helper called __panic_unhandled(), which takes the vector as a
string argument.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-13-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: entry: move bad_mode() to entry-common.c
Mark Rutland [Mon, 7 Jun 2021 09:46:15 +0000 (10:46 +0100)]
arm64: entry: move bad_mode() to entry-common.c

In subsequent patches we'll rework the way bad_mode() is called by
exception entry code. In preparation for this, let's move bad_mode()
itself into entry-common.c.

Let's also mark it as noinstr (e.g. to prevent it being kprobed), and
let's also make the `handler` array a local variable, as this is only
used by bad_mode(), and will be removed entirely in a subsequent patch.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-12-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: entry: consolidate EL1 exception returns
Mark Rutland [Mon, 7 Jun 2021 09:46:14 +0000 (10:46 +0100)]
arm64: entry: consolidate EL1 exception returns

Following the example of ret_to_user, let's consolidate all the EL1
return paths with a ret_to_kernel helper, rather than each entry point
having its own copy of the return code.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-11-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: entry: organise entry vectors consistently
Mark Rutland [Mon, 7 Jun 2021 09:46:13 +0000 (10:46 +0100)]
arm64: entry: organise entry vectors consistently

In subsequent patches we'll rename the entry handlers based on their
original EL, register width, and exception class. To do so, we need to
make all 3 arguments to the `kernel_ventry` macro mandatory, and
distinguish EL1h from EL1t.

In preparation for this, let's make the current set of arguments
mandatory, and move the `regsize` column before the branch label suffix,
making the vectors easier to read column-wise.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-10-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: entry: organise entry handlers consistently
Mark Rutland [Mon, 7 Jun 2021 09:46:12 +0000 (10:46 +0100)]
arm64: entry: organise entry handlers consistently

In entry.S we have two comments which distinguish EL0 and EL1 exception
handlers, but the code isn't actually laid out to match, and there are a
few other inconsistencies that would be good to clear up.

This patch organizes the entry handlers consistently:

* The handlers are laid out in order of the vectors, to make them easier
  to navigate.

* The inconsistently-applied alignment is removed.

* The handlers are consistently marked with SYM_CODE_START_LOCAL()
  rather than SYM_CODE_START_LOCAL_NOALIGN(), giving them the same
  default alignment as other assembly code snippets.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-9-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: entry: convert IRQ+FIQ handlers to C
Mark Rutland [Mon, 7 Jun 2021 09:46:11 +0000 (10:46 +0100)]
arm64: entry: convert IRQ+FIQ handlers to C

For various reasons we'd like to convert the bulk of arm64's exception
triage logic to C. As a step towards that, this patch converts the EL1
and EL0 IRQ+FIQ triage logic to C.

Separate C functions are added for the native and compat cases so that
in subsequent patches we can handle native/compat differences in C.

Since the triage functions can now call arm64_apply_bp_hardening()
directly, the do_el0_irq_bp_hardening() wrapper function is removed.

Since the user_exit_irqoff macro is now unused, it is removed. The
user_enter_irqoff macro is still used by the ret_to_user code, and
cannot be removed at this time.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-8-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: entry: add a call_on_irq_stack helper
Mark Rutland [Mon, 7 Jun 2021 09:46:10 +0000 (10:46 +0100)]
arm64: entry: add a call_on_irq_stack helper

When handling IRQ/FIQ exceptions the entry assembly may transition from
a task's stack to a CPU's IRQ stack (and IRQ shadow call stack).

In subsequent patches we want to migrate the IRQ/FIQ triage logic to C,
and as we want to perform some actions on the task stack (e.g. EL1
preemption), we need to switch stacks within the C handler. So that we
can do so, this patch adds a helper to call a function on a CPU's IRQ
stack (and shadow stack as appropriate).

Subsequent patches will make use of the new helper function.
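
A sketch of how the helper is consumed from C later in the series
(simplified; IRQ/FIQ triage only moves to C in subsequent patches):

        static void do_interrupt_handler(struct pt_regs *regs,
                                         void (*handler)(struct pt_regs *))
        {
                /* Switch to the IRQ stack unless we're already off the
                 * task stack (e.g. a nested exception). */
                if (on_thread_stack())
                        call_on_irq_stack(regs, handler);
                else
                        handler(regs);
        }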

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-7-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: entry: move NMI preempt logic to C
Mark Rutland [Mon, 7 Jun 2021 09:46:09 +0000 (10:46 +0100)]
arm64: entry: move NMI preempt logic to C

Currently portions of our preempt logic are written in C while other
parts are written in assembly. Let's clean this up a little bit by
moving the NMI preempt checks to C. For now, the preempt count (and
need_resched) checking is left in assembly, and will be converted
with the body of the IRQ handler in subsequent patches.

Other than the increased lockdep coverage there should be no functional
change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-6-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: entry: move arm64_preempt_schedule_irq to entry-common.c
Mark Rutland [Mon, 7 Jun 2021 09:46:08 +0000 (10:46 +0100)]
arm64: entry: move arm64_preempt_schedule_irq to entry-common.c

Subsequent patches will pull more of the IRQ entry handling into C. To
keep this in one place, let's move arm64_preempt_schedule_irq() into
entry-common.c along with the other entry management functions.

We no longer need to include <linux/lockdep.h> in process.c, so the
include directive is removed.

There should be no functional change as a result of this patch.

Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-5-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: entry: convert SError handlers to C
Mark Rutland [Mon, 7 Jun 2021 09:46:07 +0000 (10:46 +0100)]
arm64: entry: convert SError handlers to C

For various reasons we'd like to convert the bulk of arm64's exception
triage logic to C. As a step towards that, this patch converts the EL1
and EL0 SError triage logic to C.

Separate C functions are added for the native and compat cases so that
in subsequent patches we can handle native/compat differences in C.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-4-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: entry: unmask IRQ+FIQ after EL0 handling
Mark Rutland [Mon, 7 Jun 2021 09:46:06 +0000 (10:46 +0100)]
arm64: entry: unmask IRQ+FIQ after EL0 handling

For non-fatal exceptions taken from EL0, we expect that at some point
during exception handling it is possible to return to a regular process
context with all exceptions unmasked (e.g. as we do in
do_notify_resume()), and we generally aim to unmask exceptions wherever
possible.

While handling SError and debug exceptions from EL0, we need to leave
some exceptions masked during handling. Handling SError requires us to
mask SError (which also requires masking IRQ+FIQ), and handing debug
exceptions requires us to mask debug (which also requires masking
SError+IRQ+FIQ).

Once do_serror() or do_debug_exception() has returned, we no longer need
to mask exceptions, and can unmask them all, which is what we did prior
to commit:

  9034f6251572a474 ("arm64: Do not enable IRQs for ct_user_exit")

... where we had to mask IRQs, as context_tracking_user_exit()
expected IRQs to be masked.

Since then, we realised that our context tracking wasn't entirely
correct, and reworked the entry code to fix this. As of commit:

  23529049c6842382 ("arm64: entry: fix non-NMI user<->kernel transitions")

... we replaced the call to context_tracking_user_exit() with a call to
user_exit_irqoff() as part of enter_from_user_mode(), which occurs
earlier, before we run the body of the handler and unmask exceptions in
DAIF.

When we return to userspace, we go via ret_to_user(), which masks
exceptions in DAIF prior to calling user_enter_irqoff() as part of
exit_to_user_mode().

Thus, there's no longer a reason to leave IRQs or FIQs masked at the end
of the EL0 debug or error handlers, as neither the user exit context
tracking nor the user entry context tracking requires this. Let's bring
these into line with other EL0 exception handlers and ensure that IRQ
and FIQ are unmasked in DAIF at some point during the handler.
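
In practice this means the EL0 debug and SError handlers can simply end
with something like (sketch):

        local_daif_restore(DAIF_PROCCTX);   /* back to the normal process
                                               context mask: all unmasked */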

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-3-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: remove redundant local_daif_mask() in bad_mode()
Mark Rutland [Mon, 7 Jun 2021 09:46:05 +0000 (10:46 +0100)]
arm64: remove redundant local_daif_mask() in bad_mode()

Upon taking an exception, the CPU sets all the DAIF bits. We never
clear any of these bits prior to calling bad_mode(), and bad_mode()
itself never clears any of these bits, so there's no need to call
local_daif_mask().

This patch removes the redundant call.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-2-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: update string routine copyrights and URLs
Mark Rutland [Wed, 2 Jun 2021 15:13:58 +0000 (16:13 +0100)]
arm64: update string routine copyrights and URLs

To make future archaeology easier, let's have the string routine comment
blocks encode the specific upstream commit ID they were imported from.
These are the same commit IDs as listed in the commits importing the
code, expanded to 16 characters. Note that the routines have different
commit IDs, each representing the latest upstream commit which changed
the particular routine.

At the same time, let's consistently include 2021 in the copyright
dates.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210602151358.35571-1-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Rewrite __arch_clear_user()
Robin Murphy [Thu, 27 May 2021 15:34:48 +0000 (16:34 +0100)]
arm64: Rewrite __arch_clear_user()

Now that we're always using STTR variants rather than abstracting two
different addressing modes, the user_ldst macro here is frankly more
obfuscating than helpful. Rewrite __arch_clear_user() with regular
USER() annotations so that it's clearer what's going on, and take the
opportunity to minimise the branchiness in the most common paths, while
also allowing the exception fixup to return an accurate result.

Apparently some folks examine large reads from /dev/zero closely enough
to notice the loop being hot, so align it per the other critical loops
(presumably around a typical instruction fetch granularity).

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/1cbd78b12c076a8ad4656a345811cfb9425df0b3.1622128527.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Better optimised memchr()
Robin Murphy [Thu, 27 May 2021 15:34:47 +0000 (16:34 +0100)]
arm64: Better optimised memchr()

Although we implement our own assembly version of memchr(), it turns
out to be barely any better than what GCC can generate for the generic
C version (and would go wrong if the size_t argument were ever large
enough to be interpreted as negative). Unfortunately we can't import the
tuned implementation from the Arm optimized-routines library, since that
has some Advanced SIMD parts which are not really viable for general
kernel library code. What we can do, however, is pep things up with some
relatively straightforward word-at-a-time logic for larger calls.
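
The word-at-a-time logic in question is the classic "has zero byte"
trick; in C it looks roughly like this (illustrative only; the actual
routine is assembly):

        /* True if any byte of @word equals @c. */
        static bool word_has_char(unsigned long word, unsigned char c)
        {
                unsigned long v = word ^ (c * 0x0101010101010101UL);

                return (v - 0x0101010101010101UL) & ~v & 0x8080808080808080UL;
        }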

Adding some timing to optimized-routines' memchr() test for a simple
benchmark, overall this version comes in around half as fast as the SIMD
code, but still nearly 4x faster than our existing implementation.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/58471b42f9287e039dafa9e5e7035077152438fd.1622128527.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Import latest memcpy()/memmove() implementation
Robin Murphy [Thu, 27 May 2021 15:34:46 +0000 (16:34 +0100)]
arm64: Import latest memcpy()/memmove() implementation

Import the latest implementation of memcpy(), based on the
upstream code of string/aarch64/memcpy.S at commit afd6244 from
https://github.com/ARM-software/optimized-routines, and subsuming
memmove() in the process.

Note that for simplicity Arm have chosen to contribute this code
to Linux under GPLv2 rather than the original MIT license.

Note also that the needs of the usercopy routines vs. regular memcpy()
have now diverged so far that we abandon the shared template idea
and the damage which that incurred to the tuning of LDP/STP loops.
We'll be back to tackle those routines separately in future.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/3c953af43506581b2422f61952261e76949ba711.1622128527.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Add assembly annotations for weak-PI-alias madness
Robin Murphy [Thu, 27 May 2021 15:34:45 +0000 (16:34 +0100)]
arm64: Add assembly annotations for weak-PI-alias madness

Add yet another set of assembly symbol annotations, this time for the
borderline-absurd situation of a function aliasing to a weak symbol
which itself also wants a position-independent alias.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/75545b3c4129b20b887474bb58a9cf302bf2132b.1622128527.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Import latest version of Cortex Strings' strncmp
Sam Tebbs [Thu, 27 May 2021 15:34:44 +0000 (16:34 +0100)]
arm64: Import latest version of Cortex Strings' strncmp

Import the latest version of the former Cortex Strings - now
Arm Optimized Routines - strncmp function based on the upstream
code of string/aarch64/strncmp.S at commit e823e3a from
https://github.com/ARM-software/optimized-routines

Note that for simplicity Arm have chosen to contribute this code
to Linux under GPLv2 rather than the original MIT license.

Signed-off-by: Sam Tebbs <sam.tebbs@arm.com>
[ rm: update attribution and commit message ]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/26110bee02ad360596c9a7536af7eaaf6890d0e8.1622128527.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Import updated version of Cortex Strings' strlen
Sam Tebbs [Thu, 27 May 2021 15:34:43 +0000 (16:34 +0100)]
arm64: Import updated version of Cortex Strings' strlen

Import an updated version of the former Cortex Strings - now Arm
Optimized Routines - strlen function. The latest version introduces
Advanced SIMD usage which rules it out for our purposes, but we can
still pick an intermediate improvement from the previous version,
namely string/aarch64/strlen.S at commit 98e4d6a from
https://github.com/ARM-software/optimized-routines

Note that for simplicity Arm have chosen to contribute this code
to Linux under GPLv2 rather than the original MIT license.

Signed-off-by: Sam Tebbs <sam.tebbs@arm.com>
[ rm: update attribution and commit message ]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/32e3489398a24b23ae6e996935ac4818f8fd9dfd.1622128527.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Import latest version of Cortex Strings' strcmp
Sam Tebbs [Thu, 27 May 2021 15:34:42 +0000 (16:34 +0100)]
arm64: Import latest version of Cortex Strings' strcmp

Import the latest version of the former Cortex Strings - now
Arm Optimized Routines - strcmp function based on the upstream
code of string/aarch64/strcmp.S at commit afd6244 from
https://github.com/ARM-software/optimized-routines

Note that for simplicity Arm have chosen to contribute this code
to Linux under GPLv2 rather than the original MIT license.

Signed-off-by: Sam Tebbs <sam.tebbs@arm.com>
[ rm: update attribution and commit message ]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/0fe90c90b96b569fbdfd46e47bd1298abb02079e.1622128527.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Import latest version of Cortex Strings' memcmp
Sam Tebbs [Thu, 27 May 2021 15:34:41 +0000 (16:34 +0100)]
arm64: Import latest version of Cortex Strings' memcmp

Import the latest version of the former Cortex Strings - now
Arm Optimized Routines - memcmp function based on the upstream
code of string/aarch64/memcmp.S at commit e823e3a from
https://github.com/ARM-software/optimized-routines

Note that for simplicity Arm have chosen to contribute this code
to Linux under GPLv2 rather than the original MIT license.

Signed-off-by: Sam Tebbs <sam.tebbs@arm.com>
[ rm: update attribution and commit message ]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/2889de2d41054f3f508fb3addad784a3606ef383.1622128527.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: scs: Drop unused 'tmp' argument to scs_{load, save} asm macros
Will Deacon [Thu, 27 May 2021 10:55:29 +0000 (11:55 +0100)]
arm64: scs: Drop unused 'tmp' argument to scs_{load, save} asm macros

The scs_load and scs_save asm macros don't make use of the mandatory
'tmp' register argument, so drop it and fix up the callers.

Cc: Sami Tolvanen <samitolvanen@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Link: https://lore.kernel.org/r/20210527105529.21967-1-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: insn: Add load/store decoding helpers
Julien Thierry [Wed, 3 Mar 2021 17:05:36 +0000 (18:05 +0100)]
arm64: insn: Add load/store decoding helpers

Provide some functions to group the different load/store instructions.
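
A caller such as objtool could then classify an encoded instruction
with checks like the following (helper names are illustrative,
following the patch's naming style):

        static bool insn_is_load(u32 insn)
        {
                return aarch64_insn_is_load_single(insn) ||
                       aarch64_insn_is_load_pair(insn);
        }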

Signed-off-by: Julien Thierry <jthierry@redhat.com>
Link: https://lore.kernel.org/r/20210303170536.1838032-9-jthierry@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: insn: Add some opcodes to instruction decoder
Julien Thierry [Wed, 3 Mar 2021 17:05:35 +0000 (18:05 +0100)]
arm64: insn: Add some opcodes to instruction decoder

Add decoding capability for some instructions that objtool will need
to decode.

Signed-off-by: Julien Thierry <jthierry@redhat.com>
Link: https://lore.kernel.org/r/20210303170536.1838032-8-jthierry@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: insn: Add barrier encodings
Julien Thierry [Wed, 3 Mar 2021 17:05:34 +0000 (18:05 +0100)]
arm64: insn: Add barrier encodings

Create necessary functions to encode/decode aarch64 barrier
instructions.

DSB needs special case handling as it has multiple encodings.

Signed-off-by: Julien Thierry <jthierry@redhat.com>
Link: https://lore.kernel.org/r/20210303170536.1838032-7-jthierry@redhat.com
[will: Don't reject DSB #4]
Signed-off-by: Will Deacon <will@kernel.org>
arm64: insn: Add SVE instruction class
Julien Thierry [Wed, 3 Mar 2021 17:05:33 +0000 (18:05 +0100)]
arm64: insn: Add SVE instruction class

SVE has been public for some time now. Let the decoder acknowledge
its existence.

Signed-off-by: Julien Thierry <jthierry@redhat.com>
Link: https://lore.kernel.org/r/20210303170536.1838032-6-jthierry@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Move instruction encoder/decoder under lib/
Julien Thierry [Wed, 3 Mar 2021 17:05:32 +0000 (18:05 +0100)]
arm64: Move instruction encoder/decoder under lib/

AArch64 instruction set encoding and decoding logic can prove useful
for some features/tools both part of the kernel and outside the kernel.

Isolate the functions dealing only with encoding/decoding instructions,
with minimal dependency on kernel utilities in order to be able to reuse
that code.

Code was only moved; no code should have been added, removed or
modified.

Signed-off-by: Julien Thierry <jthierry@redhat.com>
Link: https://lore.kernel.org/r/20210303170536.1838032-5-jthierry@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Move aarch32 condition check functions
Julien Thierry [Wed, 3 Mar 2021 17:05:30 +0000 (18:05 +0100)]
arm64: Move aarch32 condition check functions

The functions to check condition flags for aarch32 execution are only
used to emulate aarch32 instructions. Move them from the instruction
encoding/decoding code to the trap handling files.

Signed-off-by: Julien Thierry <jthierry@redhat.com>
Link: https://lore.kernel.org/r/20210303170536.1838032-3-jthierry@redhat.com
[will: leave aarch32_opcode_cond_checks where it is]
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Move patching utilities out of instruction encoding/decoding
Julien Thierry [Wed, 3 Mar 2021 17:05:29 +0000 (18:05 +0100)]
arm64: Move patching utilities out of instruction encoding/decoding

Files insn.[c|h] contain some functions used for instruction patching.
In order to reuse the instruction encoder/decoder, move the patching
utilities to their own file.

Signed-off-by: Julien Thierry <jthierry@redhat.com>
Link: https://lore.kernel.org/r/20210303170536.1838032-2-jthierry@redhat.com
[will: Include patching.h in insn.h to fix header mess; add __ASSEMBLY__ guards]
Signed-off-by: Will Deacon <will@kernel.org>
kasan: arm64: support specialized outlined tag mismatch checks
Peter Collingbourne [Wed, 26 May 2021 17:49:27 +0000 (10:49 -0700)]
kasan: arm64: support specialized outlined tag mismatch checks

By using outlined checks we can achieve a significant code size
improvement by moving the tag-based ASAN checks into separate
functions. Unlike the existing CONFIG_KASAN_OUTLINE mode these
functions have a custom calling convention that preserves most
registers and is specialized to the register containing the address
and the type of access, and as a result we can eliminate the code
size and performance overhead of a standard calling convention such
as AAPCS for these functions.

This change depends on a separate series of changes to Clang [1] to
support outlined checks in the kernel, although the change works fine
without them (we just don't get outlined checks). This is because the
flag -mllvm -hwasan-inline-all-checks=0 has no effect until the Clang
changes land. The flag was introduced in the Clang 9.0 timeframe as
part of the support for outlined checks in userspace and because our
minimum Clang version is 10.0 we can pass it unconditionally.

Outlined checks require a new runtime function with a custom calling
convention. Add this function to arch/arm64/lib.
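
Conceptually, each outlined check performs the same comparison as the
inline mode; a C sketch of the underlying test follows (the real
helper is hand-written assembly with the custom calling convention
described above):

        /* Does the pointer's tag match the shadow memory tag? */
        static bool tags_match(const void *addr)
        {
                u8 ptr_tag = (u64)addr >> 56;   /* tag lives in bits 63:56 */
                u8 *shadow = kasan_mem_to_shadow(addr);

                return ptr_tag == *shadow;      /* mismatch => report */
        }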

I measured the code size of defconfig + tag-based KASAN, as well
as boot time (i.e. time to init launch) on a DragonBoard 845c with
an Android arm64 GKI kernel. The results are below:

                               code size    boot time
CONFIG_KASAN_INLINE=y before    92824064      6.18s
CONFIG_KASAN_INLINE=y after     38822400      6.65s
CONFIG_KASAN_OUTLINE=y          39215616     11.48s

We can see straight away that specialized outlined checks beat the
existing CONFIG_KASAN_OUTLINE=y on both code size and boot time
for tag-based ASAN.

As for the comparison between CONFIG_KASAN_INLINE=y before and after
we saw similar performance numbers in userspace [2] and decided
that since the performance overhead is minimal compared to the
overhead of tag-based ASAN itself as well as compared to the code
size improvements we would just replace the inlined checks with the
specialized outlined checks without the option to select between them,
and that is what I have implemented in this patch.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://linux-review.googlesource.com/id/I1a30036c70ab3c3ee78d75ed9b87ef7cdc3fdb76
Link: [1] https://reviews.llvm.org/D90426
Link: [2] https://reviews.llvm.org/D56954
Link: https://lore.kernel.org/r/20210526174927.2477847-3-pcc@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoMerge branch 'for-next/stacktrace' into for-next/kasan
Will Deacon [Wed, 26 May 2021 22:27:42 +0000 (23:27 +0100)]
Merge branch 'for-next/stacktrace' into for-next/kasan

Merge in stack unwinding work to cater for 8-byte aligned stack frames
which may be generated following optimisations to CONFIG_KASAN_OUTLINE.

* for-next/stacktrace:
  arm64: stacktrace: Relax frame record alignment requirement to 8 bytes
  arm64: Change the on_*stack functions to take a size argument
  arm64: Implement stack trace termination record

4 years agoarm64: smp: initialize cpu offset earlier
Mark Rutland [Thu, 20 May 2021 11:50:31 +0000 (12:50 +0100)]
arm64: smp: initialize cpu offset earlier

Now that we have a consistent place to initialize CPU context registers
early in the boot path, let's also initialize the per-cpu offset here.
This makes the primary and secondary boot paths more consistent, and
allows for the use of per-cpu operations earlier, which will be
necessary for instrumentation with KCSAN.
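
On arm64 the per-cpu offset lives in TPIDR_EL1 (TPIDR_EL2 under VHE),
so the initialisation boils down to a single system register write. A
simplified C version of the existing set_my_cpu_offset() helper, with
the VHE alternative omitted:

  static inline void set_my_cpu_offset(unsigned long off)
  {
          /* Simplified: the real helper patches in TPIDR_EL2 for VHE. */
          asm volatile("msr tpidr_el1, %0" :: "r" (off));
  }

  /* e.g. during early bringup: set_my_cpu_offset(per_cpu_offset(cpu)); */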

Note that smp_prepare_boot_cpu() still needs to re-initialize CPU0's
offset as immediately prior to this the per-cpu areas may be
reallocated, and hence the boot-time offset may be stale. A comment is
added to make this clear.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210520115031.18509-7-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: smp: unify task and sp setup
Mark Rutland [Thu, 20 May 2021 11:50:30 +0000 (12:50 +0100)]
arm64: smp: unify task and sp setup

Once we enable the MMU, we have to initialize:

* SP_EL0 to point at the active task
* SP to point at the active task's stack
* SCS_SP to point at the active task's shadow stack

For all tasks (including init_task), this information can be derived
from the task's task_struct.
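
Expressed in C for illustration only (the actual setup is assembly in
head.S, and the accessor names below are the generic kernel ones):

  static void cpu_context_from_task(struct task_struct *tsk)
  {
          unsigned long sp_el0 = (unsigned long)tsk;
          unsigned long sp     = (unsigned long)task_stack_page(tsk) + THREAD_SIZE;
          unsigned long scs_sp = (unsigned long)task_scs_sp(tsk); /* with CONFIG_SHADOW_CALL_STACK */

          /* sp_el0/sp/scs_sp are then moved into SP_EL0, SP and SCS_SP. */
  }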

Let's unify __primary_switched and __secondary_switched to consistently
acquire this information from the relevant task_struct. At the same
time, let's fold this together with initializing a task's final frame.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210520115031.18509-6-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: smp: remove stack from secondary_data
Mark Rutland [Thu, 20 May 2021 11:50:29 +0000 (12:50 +0100)]
arm64: smp: remove stack from secondary_data

When we boot a secondary CPU, we pass it a task and a stack to use. As
the stack is always the task's stack, which can be derived from the
task, let's have the secondary CPU derive this itself and avoid passing
redundant information.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210520115031.18509-5-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: smp: remove pointless secondary_data maintenance
Mark Rutland [Thu, 20 May 2021 11:50:28 +0000 (12:50 +0100)]
arm64: smp: remove pointless secondary_data maintenance

All reads and writes of secondary_data occur with the MMU on, using
coherent attributes, so there's no need to perform any cache maintenance
for this.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210520115031.18509-4-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: assembler: add set_this_cpu_offset
Mark Rutland [Thu, 20 May 2021 11:50:27 +0000 (12:50 +0100)]
arm64: assembler: add set_this_cpu_offset

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210520115031.18509-3-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoMerge branch 'for-next/stacktrace' into for-next/boot
Will Deacon [Wed, 26 May 2021 21:42:51 +0000 (22:42 +0100)]
Merge branch 'for-next/stacktrace' into for-next/boot

Merge in stack unwinding work to minimise conflicts in head.S.

* for-next/stacktrace:
  arm64: stacktrace: Relax frame record alignment requirement to 8 bytes
  arm64: Change the on_*stack functions to take a size argument
  arm64: Implement stack trace termination record

4 years agoarm64: Check if GMID_EL1.BS is the same on all CPUs
Catalin Marinas [Wed, 26 May 2021 19:36:21 +0000 (20:36 +0100)]
arm64: Check if GMID_EL1.BS is the same on all CPUs

The GMID_EL1.BS field determines the number of tags accessed by the
LDGM/STGM instructions (EL1 and up), used by the kernel for copying or
zeroing page tags.

Taint the kernel if GMID_EL1.BS differs between CPUs, but only if
CONFIG_ARM64_MTE is enabled.
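
A hedged sketch of the shape of the check (the kernel wires this
through the cpufeature code; boot_bs below stands for the BS value
recorded from the boot CPU):

  u64 cur_bs = read_sysreg_s(SYS_GMID_EL1) & 0xf;  /* BS is GMID_EL1[3:0] */

  if (IS_ENABLED(CONFIG_ARM64_MTE) && cur_bs != boot_bs)
          add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK);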

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <Suzuki.Poulose@arm.com>
Link: https://lore.kernel.org/r/20210526193621.21559-3-catalin.marinas@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: Change the cpuinfo_arm64 member type for some sysregs to u64
Catalin Marinas [Wed, 26 May 2021 19:36:20 +0000 (20:36 +0100)]
arm64: Change the cpuinfo_arm64 member type for some sysregs to u64

The architecture has been updated and the CTR_EL0, CNTFRQ_EL0,
DCZID_EL0, MIDR_EL1 and REVIDR_EL1 registers are all 64-bit, even if
most of them have RES0 top 32 bits.

Change their type to u64 in struct cpuinfo_arm64.
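
The affected members, trimmed down for illustration (the real struct
has many more fields):

  struct cpuinfo_arm64 {
          /* previously u32; now u64 to match the architecture */
          u64     reg_ctr;        /* CTR_EL0    */
          u64     reg_cntfrq;     /* CNTFRQ_EL0 */
          u64     reg_dczid;      /* DCZID_EL0  */
          u64     reg_midr;       /* MIDR_EL1   */
          u64     reg_revidr;     /* REVIDR_EL1 */
  };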

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Suzuki K Poulose <Suzuki.Poulose@arm.com>
Link: https://lore.kernel.org/r/20210526193621.21559-2-catalin.marinas@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: stacktrace: Relax frame record alignment requirement to 8 bytes
Peter Collingbourne [Wed, 26 May 2021 17:49:26 +0000 (10:49 -0700)]
arm64: stacktrace: Relax frame record alignment requirement to 8 bytes

The AAPCS places no requirements on the alignment of the frame
record. In theory it could be placed anywhere, although it seems
sensible to require it to be aligned to 8 bytes. With an upcoming
enhancement to tag-based KASAN, Clang will begin creating frame records
located at an address that is only aligned to 8 bytes. Accommodate
such frame records in the stack unwinding code.

As pointed out by Mark Rutland, the userspace stack unwinding code
has the same problem, so fix it there as well.
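
Conceptually, the unwinder's alignment check on the frame pointer
changes as follows (simplified; the real test also checks stack
bounds):

  /* before: reject frame records that are not 16-byte aligned */
  if (fp & 0xf)
          return -EINVAL;

  /* after: 8-byte alignment is accepted */
  if (fp & 0x7)
          return -EINVAL;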

Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/Ia22c375230e67ca055e9e4bb639383567f7ad268
Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210526174927.2477847-2-pcc@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: Change the on_*stack functions to take a size argument
Peter Collingbourne [Wed, 26 May 2021 17:49:25 +0000 (10:49 -0700)]
arm64: Change the on_*stack functions to take a size argument

unwind_frame() previously checked that the frame record is within the
bounds of the stack only implicitly, by enforcing that FP is both
16-byte aligned and in bounds. Once the FP alignment requirement is
relaxed to 8 bytes, this will no longer be sufficient, because it does
not account for the case where FP points 8 bytes before the end of the
stack.

Make the check explicit by changing the on_*stack functions to take a
size argument and adjusting the callers to pass the appropriate sizes.
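
A sketch of the resulting generic bounds test, close in shape to the
on_*stack helpers (simplified from the real code):

  static bool on_stack(unsigned long sp, unsigned long size,
                       unsigned long low, unsigned long high)
  {
          if (!low)
                  return false;
          /* reject wrap-around as well as out-of-bounds accesses */
          return sp >= low && sp + size >= sp && sp + size <= high;
  }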

Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/Ib7a3eb3eea41b0687ffaba045ceb2012d077d8b4
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210526174927.2477847-1-pcc@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: Rename arm64-internal cache maintenance functions
Fuad Tabba [Mon, 24 May 2021 08:30:01 +0000 (09:30 +0100)]
arm64: Rename arm64-internal cache maintenance functions

Although naming across the codebase isn't that consistent, it
tends to follow certain patterns. Moreover, the term "flush"
isn't defined in the Arm Architecture reference manual, and might
be interpreted to mean clean, invalidate, or both for a cache.

Rename the arm64-internal functions to make the naming internally
consistent, as well as consistent with the Arm ARM, by specifying
whether an operation applies to the instruction cache, the data cache,
or both, and whether it is a clean, an invalidate, or both. Also
specify which point the operation applies to, i.e., the point of
unification (PoU), coherency (PoC), or persistence (PoP).

This commit applies the following sed transformation to all files
under arch/arm64:

"s/\b__flush_cache_range\b/caches_clean_inval_pou_macro/g;"\
"s/\b__flush_icache_range\b/caches_clean_inval_pou/g;"\
"s/\binvalidate_icache_range\b/icache_inval_pou/g;"\
"s/\b__flush_dcache_area\b/dcache_clean_inval_poc/g;"\
"s/\b__inval_dcache_area\b/dcache_inval_poc/g;"\
"s/__clean_dcache_area_poc\b/dcache_clean_poc/g;"\
"s/\b__clean_dcache_area_pop\b/dcache_clean_pop/g;"\
"s/\b__clean_dcache_area_pou\b/dcache_clean_pou/g;"\
"s/\b__flush_cache_user_range\b/caches_clean_inval_user_pou/g;"\
"s/\b__flush_icache_all\b/icache_inval_all_pou/g;"

Note that __clean_dcache_area_poc is deliberately missing a word
boundary check at the beginning in order to match the efistub
symbols in image-vars.h.

Also note that, despite its name, __flush_icache_range operates
on both instruction and data caches. The name change here
reflects that.

No functional change intended.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-19-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: Fix cache maintenance function comments
Fuad Tabba [Mon, 24 May 2021 08:30:00 +0000 (09:30 +0100)]
arm64: Fix cache maintenance function comments

Fix and expand the comments for the cache maintenance functions in
cacheflush.h. Add comments to functions that weren't described before,
and explain what the functions do using Arm Architecture Reference
Manual terminology.

No functional change intended.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-18-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: sync_icache_aliases to take end parameter instead of size
Fuad Tabba [Mon, 24 May 2021 08:29:59 +0000 (09:29 +0100)]
arm64: sync_icache_aliases to take end parameter instead of size

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
to specify the range in terms of start and end, as opposed to
start and size.
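
Illustratively (prototypes abbreviated, not taken verbatim from the
patch), callers move from passing a size to passing an end address:

  /* before */ void sync_icache_aliases(void *kaddr, unsigned long len);
  /* after  */ void sync_icache_aliases(unsigned long start, unsigned long end);

  /* so a call such as sync_icache_aliases(addr, size) becomes
   * sync_icache_aliases(addr, addr + size)
   */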

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-17-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: __clean_dcache_area_pou to take end parameter instead of size
Fuad Tabba [Mon, 24 May 2021 08:29:58 +0000 (09:29 +0100)]
arm64: __clean_dcache_area_pou to take end parameter instead of size

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
to specify the range in terms of start and end, as opposed to
start and size.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-16-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: __clean_dcache_area_pop to take end parameter instead of size
Fuad Tabba [Mon, 24 May 2021 08:29:57 +0000 (09:29 +0100)]
arm64: __clean_dcache_area_pop to take end parameter instead of size

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
to specify the range in terms of start and end, as opposed to
start and size.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-15-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: __clean_dcache_area_poc to take end parameter instead of size
Fuad Tabba [Mon, 24 May 2021 08:29:56 +0000 (09:29 +0100)]
arm64: __clean_dcache_area_poc to take end parameter instead of size

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
to specify the range in terms of start and end, as opposed to
start and size.

Because the code is shared with __dma_clean_area, it changes the
parameters for that as well. However, __dma_clean_area is local to
cache.S, so no other users are affected.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-14-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: __flush_dcache_area to take end parameter instead of size
Fuad Tabba [Mon, 24 May 2021 08:29:55 +0000 (09:29 +0100)]
arm64: __flush_dcache_area to take end parameter instead of size

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
to specify the range in terms of start and end, as opposed to
start and size.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-13-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: dcache_by_line_op to take end parameter instead of size
Fuad Tabba [Mon, 24 May 2021 08:29:54 +0000 (09:29 +0100)]
arm64: dcache_by_line_op to take end parameter instead of size

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
to specify the range in terms of start and end, as opposed to
start and size.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-12-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: __inval_dcache_area to take end parameter instead of size
Fuad Tabba [Mon, 24 May 2021 08:29:53 +0000 (09:29 +0100)]
arm64: __inval_dcache_area to take end parameter instead of size

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
to specify the range in terms of start and end, as opposed to
start and size.

Because the code is shared with __dma_inv_area, it changes the
parameters for that as well. However, __dma_inv_area is local to
cache.S, so no other users are affected.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-11-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: Fix comments to refer to correct function __flush_icache_range
Fuad Tabba [Mon, 24 May 2021 08:29:52 +0000 (09:29 +0100)]
arm64: Fix comments to refer to correct function __flush_icache_range

Many comments refer to the function flush_icache_range, where the
intent is in fact __flush_icache_range. Fix these comments to
refer to the intended function.

That's probably due to commit 3b8c9f1cdfc506e9 ("arm64: IPI each
CPU after invalidating the I-cache for kernel mappings"), which
renamed flush_icache_range() to __flush_icache_range() and added
a wrapper.

No functional change intended.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-10-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: Move documentation of dcache_by_line_op
Fuad Tabba [Mon, 24 May 2021 08:29:51 +0000 (09:29 +0100)]
arm64: Move documentation of dcache_by_line_op

The comment describing the macro dcache_by_line_op is placed right
before the macro preceding the one it describes, which is a bit
confusing. Move it so that it sits immediately above dcache_by_line_op
itself.

No functional change intended.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-9-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: assembler: remove user_alt
Fuad Tabba [Mon, 24 May 2021 08:29:50 +0000 (09:29 +0100)]
arm64: assembler: remove user_alt

user_alt isn't being used anymore. It's also simpler and clearer
to directly use alternative_insn and _cond_extable in-line when
needed.

Reported-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/linux-arm-kernel/20210520125735.GF17233@C02TD0UTHF1T.local/
Signed-off-by: Fuad Tabba <tabba@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-8-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: Downgrade flush_icache_range to invalidate
Fuad Tabba [Mon, 24 May 2021 08:29:49 +0000 (09:29 +0100)]
arm64: Downgrade flush_icache_range to invalidate

Since __flush_dcache_area is called right before,
invalidate_icache_range is sufficient in this case.

Rewrite the comment to better explain the rationale behind the
cache maintenance operations used here.

No functional change intended.
Possible performance impact due to invalidating only the icache
rather than invalidating and cleaning both caches.
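
The resulting call pattern, roughly (function names from this series;
the actual call site is abbreviated):

  __flush_dcache_area(ptr, size);                     /* cleans+invalidates D-cache to PoC */
  invalidate_icache_range((unsigned long)ptr,
                          (unsigned long)ptr + size); /* I-cache invalidate alone now suffices */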

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-7-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: Do not enable uaccess for invalidate_icache_range
Fuad Tabba [Mon, 24 May 2021 08:29:48 +0000 (09:29 +0100)]
arm64: Do not enable uaccess for invalidate_icache_range

invalidate_icache_range() works on kernel addresses, and doesn't
need uaccess. Remove the code that toggles uaccess_ttbr0_enable,
as well as the code that emits an entry into the exception table
(via the macro invalidate_icache_by_line).

Change the return type of invalidate_icache_range() from int (which
used to indicate a fault) to void, since it doesn't need uaccess and
won't fault. Note that the return value was never checked by any of
the callers.
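
The prototype change, for reference (before and after; argument names
abbreviated):

  /* before */ int  invalidate_icache_range(unsigned long start, unsigned long end);
  /* after  */ void invalidate_icache_range(unsigned long start, unsigned long end);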

No functional change intended.
Possible performance impact due to the reduced number of
instructions.

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-6-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: Do not enable uaccess for flush_icache_range
Fuad Tabba [Mon, 24 May 2021 08:29:47 +0000 (09:29 +0100)]
arm64: Do not enable uaccess for flush_icache_range

__flush_icache_range works on kernel addresses, and doesn't need
uaccess. The existing code is a side-effect of its current
implementation with __flush_cache_user_range fallthrough.

Instead of fallthrough to share the code, use a common macro for
the two where the caller specifies an optional fixup label if
user access is needed. If provided, this label would be used to
generate an extable entry.

Simplify the code to use dcache_by_line_op, instead of
replicating much of its functionality.

No functional change intended.
Possible performance impact due to the reduced number of
instructions.

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will@kernel.org>
Reported-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
Link: https://lore.kernel.org/linux-arm-kernel/20210521121846.GB1040@C02TD0UTHF1T.local/
Signed-off-by: Fuad Tabba <tabba@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-5-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: Apply errata to swsusp_arch_suspend_exit
Fuad Tabba [Mon, 24 May 2021 08:29:46 +0000 (09:29 +0100)]
arm64: Apply errata to swsusp_arch_suspend_exit

The Arm errata covered by ARM64_WORKAROUND_CLEAN_CACHE require
that "dc cvau" instructions get promoted to "dc civac".

Reported-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-4-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: assembler: add conditional cache fixups
Mark Rutland [Mon, 24 May 2021 08:29:45 +0000 (09:29 +0100)]
arm64: assembler: add conditional cache fixups

It would be helpful if we could use both `dcache_by_line_op` and
`invalidate_icache_by_line` for user memory without accidentally fixing
up unexpected faults when performing maintenance on kernel addresses.

Let's make this possible by having both macros take an optional fixup
label, and only generating an extable entry if a label is provided.

At the same time, let's clean up the labels used to be globally unique
using \@ as we do for other macros.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Fuad Tabba <tabba@google.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-3-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: assembler: replace `kaddr` with `addr`
Mark Rutland [Mon, 24 May 2021 08:29:44 +0000 (09:29 +0100)]
arm64: assembler: replace `kaddr` with `addr`

The `__dcache_op_workaround_clean_cache` and `dcache_by_line_op` macros
are only expected to be used on kernel memory, without a user fault
fixup, and so we named their address variables `kaddr` to make this
clear.

Subsequent patches will modify these to also work on user memory with an
(optional) user fault fixup, where `kaddr` won't make as much sense. To
aid the legibility of patches, this patch (only) replaces `kaddr` with
`addr` as a preparatory step.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Fuad Tabba <tabba@google.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-2-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: Document requirement for access to FEAT_HCX
Mark Brown [Wed, 12 May 2021 16:23:50 +0000 (17:23 +0100)]
arm64: Document requirement for access to FEAT_HCX

v8.7 of the architecture introduced FEAT_HCX which adds an additional
hypervisor configuration register HCRX_EL2. Even though Linux does not
currently make use of this feature let's document that the EL3 trap for
access to the register should be disabled so that we are able to make
use of it in future.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210512162350.20349-1-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: Implement stack trace termination record
Madhavan T. Venkataraman [Mon, 10 May 2021 11:00:26 +0000 (12:00 +0100)]
arm64: Implement stack trace termination record

Reliable stacktracing requires that we identify when a stacktrace is
terminated early. We can do this by ensuring all tasks have a final
frame record at a known location on their task stack, and checking
that this is the final frame record in the chain.

We'd like to use task_pt_regs(task)->stackframe as the final frame
record, as this is already setup upon exception entry from EL0. For
kernel tasks we need to consistently reserve the pt_regs and point x29
at this, which we can do with small changes to __primary_switched,
__secondary_switched, and copy_process().

Since the final frame record must be at a specific location, we must
create the final frame record in __primary_switched and
__secondary_switched rather than leaving this to start_kernel and
secondary_start_kernel. Thus, __primary_switched and
__secondary_switched will now show up in stacktraces for the idle tasks.

Since the final frame record is now identified by its location rather
than by its contents, we identify it at the start of unwind_frame(),
before we read any values from it.
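
A hedged sketch of the new termination test at the start of
unwind_frame() (simplified from the actual code):

  /* The final frame record lives at a known location on the task stack. */
  if (fp == (unsigned long)task_pt_regs(tsk)->stackframe)
          return -ENOENT;   /* end of the stack trace */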

External debuggers may terminate the stack trace when FP == 0. In the
pt_regs->stackframe, the PC is 0 as well. So, stack traces taken in the
debugger may print an extra record 0x0 at the end. While this is not
pretty, this does not do any harm. This is a small price to pay for
having reliable stack trace termination in the kernel. That said, gdb
does not show the extra record, probably because it uses DWARF rather
than frame pointers for stack traces.

Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
[Mark: rebase, use ASM_BUG(), update comments, update commit message]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210510110026.18061-1-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoLinux 5.13-rc3
Linus Torvalds [Sun, 23 May 2021 21:42:48 +0000 (11:42 -1000)]
Linux 5.13-rc3

4 years agoMerge tag 'perf-urgent-2021-05-23' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sun, 23 May 2021 16:32:40 +0000 (06:32 -1000)]
Merge tag 'perf-urgent-2021-05-23' of git://git./linux/kernel/git/tip/tip

Pull perf fixes from Thomas Gleixner:
 "Two perf fixes:

   - Do not check the LBR_TOS MSR when setting up unrelated LBR MSRs as
     this can cause malfunction when TOS is not supported

   - Allocate the LBR XSAVE buffers along with the DS buffers upfront
     because allocating them when adding an event can deadlock"

* tag 'perf-urgent-2021-05-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf/x86/lbr: Remove cpuc->lbr_xsave allocation from atomic context
  perf/x86: Avoid touching LBR_TOS MSR for Arch LBR

4 years agoMerge tag 'locking-urgent-2021-05-23' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 23 May 2021 16:30:08 +0000 (06:30 -1000)]
Merge tag 'locking-urgent-2021-05-23' of git://git./linux/kernel/git/tip/tip

Pull locking fixes from Thomas Gleixner:
 "Two locking fixes:

   - Invoke the lockdep tracepoints in the correct place so the ordering
     is correct again

   - Don't leave the mutex WAITER bit stale when the last waiter is
     dropping out early due to a signal as that forces all subsequent
     lock operations needlessly into the slowpath until it's cleaned up
     again"

* tag 'locking-urgent-2021-05-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  locking/mutex: clear MUTEX_FLAGS if wait_list is empty due to signal
  locking/lockdep: Correct calling tracepoints

4 years agoMerge tag 'irq-urgent-2021-05-23' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sun, 23 May 2021 16:28:20 +0000 (06:28 -1000)]
Merge tag 'irq-urgent-2021-05-23' of git://git./linux/kernel/git/tip/tip

Pull irq fixes from Thomas Gleixner:
 "A few fixes for irqchip drivers:

   - Allocate interrupt descriptors correctly on Mainstone PXA when
     SPARSE_IRQ is enabled; otherwise the interrupt association fails

   - Make the APPLE AIC chip driver depend on APPLE

   - Remove redundant error output on devm_ioremap_resource() failure"

* tag 'irq-urgent-2021-05-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  irqchip: Remove redundant error printing
  irqchip/apple-aic: APPLE_AIC should depend on ARCH_APPLE
  ARM: PXA: Fix cplds irqdesc allocation when using legacy mode

4 years agoMerge tag 'x86_urgent_for_v5.13_rc3' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 23 May 2021 16:12:25 +0000 (06:12 -1000)]
Merge tag 'x86_urgent_for_v5.13_rc3' of git://git./linux/kernel/git/tip/tip

Pull x86 fixes from Borislav Petkov:

 - Fix how SEV handles MMIO accesses by forwarding potential page faults
   instead of killing the machine and by using the accessors with the
   exact functionality needed when accessing memory.

 - Fix a confusion with Clang LTO compiler switches passed to it

 - Handle the case gracefully when VMGEXIT has been executed in
   userspace

* tag 'x86_urgent_for_v5.13_rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/sev-es: Use __put_user()/__get_user() for data accesses
  x86/sev-es: Forward page-faults which happen during emulation
  x86/sev-es: Don't return NULL from sev_es_get_ghcb()
  x86/build: Fix location of '-plugin-opt=' flags
  x86/sev-es: Invalidate the GHCB after completing VMGEXIT
  x86/sev-es: Move sev_es_put_ghcb() in prep for follow on patch

4 years agoMerge tag 'powerpc-5.13-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc...
Linus Torvalds [Sun, 23 May 2021 16:07:33 +0000 (06:07 -1000)]
Merge tag 'powerpc-5.13-4' of git://git./linux/kernel/git/powerpc/linux

Pull powerpc fixes from Michael Ellerman:

 - Fix breakage of strace (and other ptracers etc.) when using the new
   scv ABI (Power9 or later with glibc >= 2.33).

 - Fix early_ioremap() on 64-bit, which broke booting on some machines.

Thanks to Dmitry V. Levin, Nicholas Piggin, Alexey Kardashevskiy, and
Christophe Leroy.

* tag 'powerpc-5.13-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
  powerpc/64s/syscall: Fix ptrace syscall info with scv syscalls
  powerpc/64s/syscall: Use pt_regs.trap to distinguish syscall ABI difference between sc and scv syscalls
  powerpc: Fix early setup to make early_ioremap() work

4 years agoMerge tag 'kbuild-fixes-v5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/masah...
Linus Torvalds [Sun, 23 May 2021 05:53:56 +0000 (19:53 -1000)]
Merge tag 'kbuild-fixes-v5.13' of git://git./linux/kernel/git/masahiroy/linux-kbuild

Pull Kbuild fixes from Masahiro Yamada:

 - Fix short log indentation for tools builds

 - Fix dummy-tools to adjust to the latest stackprotector check

* tag 'kbuild-fixes-v5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
  kbuild: dummy-tools: adjust to stricter stackprotector check
  scripts/jobserver-exec: Fix a typo ("envirnoment")
  tools build: Fix quiet cmd indentation