The implementation of flush_icache_range() includes instruction sequences
which are themselves patched at runtime, so it is not safe to call from
the patching framework.
This patch reworks the alternatives cache-flushing code so that it
performs its own internal D-cache maintenance using DC CIVAC, before
invalidating the entire I-cache once all alternatives have been applied
at boot.
Modules don't cause any issues, since flush_icache_range() is safe to
call by the time they are loaded.
Acked-by: Mark Rutland <email address hidden>
Reported-by: Rohit Khanna <email address hidden>
Cc: Alexander Van Brunt <email address hidden>
Signed-off-by: Will Deacon <email address hidden>
Signed-off-by: Catalin Marinas <email address hidden>
(cherry picked from commit 429388682dc266e7a693f9c27e3aabd341d55343)
Signed-off-by: Kamal Mostafa <email address hidden>

In preparation for updating efi_mem_reserve_persistent() to cause less
fragmentation when dealing with many persistent reservations, update the
struct definition and the code that currently handles it so that a
single linked-list entry can describe an arbitrary number of
reservations. The actual optimization will be implemented in a
subsequent patch.
Tested-by: Marc Zyngier <email address hidden>
Signed-off-by: Ard Biesheuvel <email address hidden>
Cc: Andy Lutomirski <email address hidden>
Cc: Arend van Spriel <email address hidden>
Cc: Bhupesh Sharma <email address hidden>
Cc: Borislav Petkov <email address hidden>
Cc: Dave Hansen <email address hidden>
Cc: Eric Snowberg <email address hidden>
Cc: Hans de Goede <email address hidden>
Cc: Joe Perches <email address hidden>
Cc: Jon Hunter <email address hidden>
Cc: Julien Thierry <email address hidden>
Cc: Linus Torvalds <email address hidden>
Cc: Matt Fleming <email address hidden>
Cc: Nathan Chancellor <email address hidden>
Cc: Peter Zijlstra <email address hidden>
Cc: Sai Praneeth Prakhya <email address hidden>
Cc: Sedat Dilek <email address hidden>
Cc: Thomas Gleixner <email address hidden>
Cc: YiFei Zhu <email address hidden>
Cc: <email address hidden>
Link: http://<email address hidden>
Signed-off-by: Ingo Molnar <email address hidden>
(cherry picked from commit 5f0b0ecf043a5319e729c11a53bc8294df12dab3)
Signed-off-by: Kamal Mostafa <email address hidden>

Mapping the MEMRESERVE EFI configuration table from an early initcall
is too late: the GICv3 ITS code that creates persistent reservations
for the boot CPU's LPI tables is invoked from init_IRQ(), which runs
much earlier than initcall processing. This triggers a WARN() splat
because the LPI tables cannot be reserved persistently, which leads to
silent memory corruption after a kexec reboot.
So instead, invoke the initialization performed by the initcall from
efi_mem_reserve_persistent() itself as well, but keep the initcall so
that the init is guaranteed to have been called before SMP boot.
Currently, efi_mem_reserve_persistent() may not be called from atomic
context, since both the kmalloc() call and the memremap() call may
sleep.
The kmalloc() call is easy enough to fix, but the memremap() call
needs to be moved into an init hook since we cannot control the
memory allocation behavior of memremap() at the call site.
Installing UEFI configuration tables can only be done before calling
ExitBootServices(), so if we want to use the new MEMRESERVE config table
from the kernel proper, we need to install a dummy entry from the stub.
Tested-by: Jeremy Linton <email address hidden>
Signed-off-by: Ard Biesheuvel <email address hidden>
(cherry picked from commit b844470f22061e8cd646cb355e85d2f518b2c913)
Signed-off-by: Kamal Mostafa <email address hidden>

cpu_enable_ssbs() is called via stop_machine() as part of the cpu_enable
callback. A spin lock is used to ensure the hook is registered before
the rest of the callback is executed.
On -RT, spin_lock() may sleep. However, callbacks invoked via
stop_machine() must not sleep, so a raw_spin_lock() is required here.
Given this is already done under stop_machine() and the work done under
the lock is quite small, the latency should not increase too much.
The CPU errata and feature enable callbacks are only called via their
respective arm64_cpu_capabilities structures and therefore shouldn't
exist in the global namespace.
Move the PAN, RAS and cache maintenance emulation enable callbacks into
the same files as their corresponding arm64_cpu_capabilities structures,
making them static in the process.
Signed-off-by: Will Deacon <email address hidden>
Signed-off-by: Catalin Marinas <email address hidden>
(backported from commit b8925ee2e12d1cb9a11d6f28b5814f2bfa59dce1)
Signed-off-by: Kamal Mostafa <email address hidden>