~apw/ubuntu/+source/linux/+git/pti:pti/artful-retpoline-intelv1--pull2

Last commit made on 2018-02-06
Get this branch:
git clone -b pti/artful-retpoline-intelv1--pull2 https://git.launchpad.net/~apw/ubuntu/+source/linux/+git/pti

Branch information

Name:
pti/artful-retpoline-intelv1--pull2
Repository:
lp:~apw/ubuntu/+source/linux/+git/pti

Recent commits

c95f498... by Borislav Petkov <email address hidden>

x86/retpoline: Simplify vmexit_fill_RSB()

CVE-2017-5715 (Spectre v2 retpoline)
BugLink: http://bugs.launchpad.net/bugs/1747507

commit 1dde7415e99933bb7293d6b2843752cbdb43ec11

Simplify it to call an asm-function instead of pasting 41 insn bytes at
every call site. Also, add alignment to the macro as suggested here:

  https://support.google.com/faqs/answer/7625886

[dwmw2: Clean up comments, let it clobber %ebx and just tell the compiler]

Signed-off-by: Borislav Petkov <email address hidden>
Signed-off-by: David Woodhouse <email address hidden>
Signed-off-by: Thomas Gleixner <email address hidden>
Cc: <email address hidden>
Cc: <email address hidden>
Cc: <email address hidden>
Cc: <email address hidden>
Cc: <email address hidden>
Cc: <email address hidden>
Cc: <email address hidden>
Cc: <email address hidden>
Cc: <email address hidden>
Cc: <email address hidden>
Link: https://<email address hidden>
Signed-off-by: Greg Kroah-Hartman <email address hidden>

(cherry picked from commit 6a6a9c38986e9c4bcfdc53fba7b915a6ab834ce1)
Signed-off-by: Andy Whitcroft <email address hidden>

9d79e3d... by Waiman Long <email address hidden>

x86/retpoline: Remove the esp/rsp thunk

CVE-2017-5715 (Spectre v2 retpoline)
BugLink: http://bugs.launchpad.net/bugs/1747507

commit 1df37383a8aeabb9b418698f0bcdffea01f4b1b2

It doesn't make sense to have an indirect call thunk with esp/rsp, as
retpoline code won't work correctly with the stack pointer register.
Removing it will help compiler writers catch errors in case such a
thunk call is ever emitted incorrectly.

Fixes: 76b043848fd2 ("x86/retpoline: Add initial retpoline support")
Suggested-by: Jeff Law <email address hidden>
Signed-off-by: Waiman Long <email address hidden>
Signed-off-by: Thomas Gleixner <email address hidden>
Acked-by: David Woodhouse <email address hidden>
Cc: Tom Lendacky <email address hidden>
Cc: Kees Cook <email address hidden>
Cc: Andi Kleen <email address hidden>
Cc: Tim Chen <email address hidden>
Cc: Peter Zijlstra <email address hidden>
Cc: Linus Torvalds <email address hidden>
Cc: Jiri Kosina <email address hidden>
Cc: Andy Lutomirski <email address hidden>
Cc: Dave Hansen <email address hidden>
Cc: Josh Poimboeuf <email address hidden>
Cc: Arjan van de Ven <email address hidden>
Cc: Greg Kroah-Hartman <email address hidden>
Cc: Paul Turner <email address hidden>
Link: https://<email address hidden>
Signed-off-by: Greg Kroah-Hartman <email address hidden>

(cherry picked from commit 04007c4ac40ca141e9160b33368e592394d1b6d3)
Signed-off-by: Andy Whitcroft <email address hidden>

77884ad... by Andi Kleen <email address hidden>

x86/retpoline: Optimize inline assembler for vmexit_fill_RSB

CVE-2017-5715 (Spectre v2 retpoline)
BugLink: http://bugs.launchpad.net/bugs/1747507

commit 3f7d875566d8e79c5e0b2c9a413e91b2c29e0854 upstream.

The generated assembler for the C fill RSB inline asm operations has
several issues:

- The C code sets up the loop register, which is then immediately
  overwritten in __FILL_RETURN_BUFFER with the same value again.

- The C code also passes in the iteration count in another register, which
  is not used at all.

Remove these two unnecessary operations. Just rely on the single constant
passed to the macro for the iterations.

Signed-off-by: Andi Kleen <email address hidden>
Signed-off-by: Thomas Gleixner <email address hidden>
Acked-by: David Woodhouse <email address hidden>
Cc: <email address hidden>
Cc: <email address hidden>
Cc: <email address hidden>
Cc: <email address hidden>
Link: https://<email address hidden>
Signed-off-by: Greg Kroah-Hartman <email address hidden>

(cherry picked from commit 5fa871644e6d48f5c120e8c469c96850a01c60c8)
Signed-off-by: Andy Whitcroft <email address hidden>

278701c... by Tom Lendacky

x86/retpoline: Add LFENCE to the retpoline/RSB filling RSB macros

CVE-2017-5715 (Spectre v2 retpoline)
BugLink: http://bugs.launchpad.net/bugs/1747507

commit 28d437d550e1e39f805d99f9f8ac399c778827b7 upstream.

The PAUSE instruction is currently used in the retpoline and RSB filling
macros as a speculation trap. The use of PAUSE was originally suggested
because it showed a very, very small difference in the amount of
cycles/time used to execute the retpoline as compared to LFENCE. On AMD,
the PAUSE instruction is not a serializing instruction, so the pause/jmp
loop will use excess power while it is speculated over, waiting for the
return to mispredict to the correct target.

The RSB filling macro is applicable to AMD, and, if software is unable to
verify that LFENCE is serializing on AMD (possible when running under a
hypervisor), the generic retpoline support will be used and, so, is also
applicable to AMD. Keep the current usage of PAUSE for Intel, but add an
LFENCE instruction to the speculation trap for AMD.

The same sequence has been adopted by GCC for the GCC generated retpolines.

Signed-off-by: Tom Lendacky <email address hidden>
Signed-off-by: Thomas Gleixner <email address hidden>
Reviewed-by: Borislav Petkov <email address hidden>
Acked-by: David Woodhouse <email address hidden>
Acked-by: Arjan van de Ven <email address hidden>
Cc: Rik van Riel <email address hidden>
Cc: Andi Kleen <email address hidden>
Cc: Paul Turner <email address hidden>
Cc: Peter Zijlstra <email address hidden>
Cc: Tim Chen <email address hidden>
Cc: Jiri Kosina <email address hidden>
Cc: Dave Hansen <email address hidden>
Cc: Andy Lutomirski <email address hidden>
Cc: Josh Poimboeuf <email address hidden>
Cc: Dan Williams <email address hidden>
Cc: Linus Torvalds <email address hidden>
Cc: Greg Kroah-Hartman <email address hidden>
Cc: Kees Cook <email address hidden>
Link: https://<email address hidden>
Signed-off-by: Greg Kroah-Hartman <email address hidden>

(cherry picked from commit 956ec9e7b59a1fb3fd4c9bc8a13a7f7700e9d7d2)
Signed-off-by: Andy Whitcroft <email address hidden>

e2ba816... by David Woodhouse <email address hidden>

x86/retpoline: Fill RSB on context switch for affected CPUs

CVE-2017-5715 (Spectre v2 retpoline)
BugLink: http://bugs.launchpad.net/bugs/1747507

commit c995efd5a740d9cbafbf58bde4973e8b50b4d761 upstream.

On context switch from a shallow call stack to a deeper one, as the CPU
does 'ret' up the deeper side it may encounter RSB entries (predictions for
where the 'ret' goes to) which were populated in userspace.

This is problematic if neither SMEP nor KPTI (the latter of which marks
userspace pages as NX for the kernel) are active, as malicious code in
userspace may then be executed speculatively.

Overwrite the CPU's return prediction stack with calls which are predicted
to return to an infinite loop, to "capture" speculation if this
happens. This is required both for retpoline, and also in conjunction with
IBRS for !SMEP && !KPTI.

On Skylake+ the problem is slightly different, and an *underflow* of the
RSB may cause errant branch predictions to occur. So there it's not so much
overwrite, as *filling* the RSB to attempt to prevent it getting
empty. This is only a partial solution for Skylake+ since there are many
other conditions which may result in the RSB becoming empty. The full
solution on Skylake+ is to use IBRS, which will prevent the problem even
when the RSB becomes empty. With IBRS, the RSB-stuffing will not be
required on context switch.

[ tglx: Added missing vendor check and slightly massaged comments and
   changelog ]

Signed-off-by: David Woodhouse <email address hidden>
Signed-off-by: Thomas Gleixner <email address hidden>
Acked-by: Arjan van de Ven <email address hidden>
Cc: <email address hidden>
Cc: Rik van Riel <email address hidden>
Cc: Andi Kleen <email address hidden>
Cc: Josh Poimboeuf <email address hidden>
Cc: <email address hidden>
Cc: Peter Zijlstra <email address hidden>
Cc: Linus Torvalds <email address hidden>
Cc: Jiri Kosina <email address hidden>
Cc: Andy Lutomirski <email address hidden>
Cc: Dave Hansen <email address hidden>
Cc: Kees Cook <email address hidden>
Cc: Tim Chen <email address hidden>
Cc: Greg Kroah-Hartman <email address hidden>
Cc: Paul Turner <email address hidden>
Link: https://<email address hidden>
Signed-off-by: Greg Kroah-Hartman <email address hidden>

(cherry picked from commit 051547583bdda4b74953053a1034026c56b55c4c)
Signed-off-by: Andy Whitcroft <email address hidden>

bc3391e... by Paolo Pisati

UBUNTU: [Config] UNMAP_KERNEL_AT_EL0=y && HARDEN_BRANCH_PREDICTOR=y

CVE-2017-5754 ARM64 KPTI fixes

Signed-off-by: Paolo Pisati <email address hidden>
Signed-off-by: Thadeu Lima de Souza Cascardo <email address hidden>
Acked-by: Kleber Sacilotto de Souza <email address hidden>
Acked-by: Brad Figg <email address hidden>
Signed-off-by: Kleber Sacilotto de Souza <email address hidden>

dc817da... by Jayachandran C <email address hidden>

UBUNTU: SAUCE: arm64: Branch predictor hardening for Cavium ThunderX2

When upstream applied this commit, the existing hardening function was used
instead of the new one. This commit applies the delta between the upstream
and vendor commits.

CVE-2017-5754 ARM64 KPTI fixes

Use PSCI based mitigation for speculative execution attacks targeting
the branch predictor. The approach is similar to the one used for
Cortex-A CPUs, but in case of ThunderX2 we add another SMC call to
test if the firmware supports the capability.

If the secure firmware has been updated with the mitigation code to
invalidate the branch target buffer, we use the PSCI version call to
invoke it.

Signed-off-by: Jayachandran C <email address hidden>
Signed-off-by: Paolo Pisati <email address hidden>
Signed-off-by: Thadeu Lima de Souza Cascardo <email address hidden>
Acked-by: Kleber Sacilotto de Souza <email address hidden>
Acked-by: Brad Figg <email address hidden>
Signed-off-by: Kleber Sacilotto de Souza <email address hidden>

1313409... by Shanker Donthineni <email address hidden>

UBUNTU: SAUCE: arm64: Implement branch predictor hardening for Falkor

When upstream applied this commit, FALKOR_V1 was missing and only FALKOR
support was added. This commit applies the delta between the upstream and
vendor commits.

CVE-2017-5754 ARM64 KPTI fixes

Falkor is susceptible to branch predictor aliasing and can
theoretically be attacked by malicious code. This patch
implements a mitigation for these attacks, preventing any
malicious entries from affecting other victim contexts.

Signed-off-by: Shanker Donthineni <email address hidden>
[will: fix label name when !CONFIG_KVM]
Signed-off-by: Will Deacon <email address hidden>
Signed-off-by: Paolo Pisati <email address hidden>
Signed-off-by: Thadeu Lima de Souza Cascardo <email address hidden>
Acked-by: Kleber Sacilotto de Souza <email address hidden>
Acked-by: Brad Figg <email address hidden>
Signed-off-by: Kleber Sacilotto de Souza <email address hidden>

4a0bcab... by Mark Rutland

UBUNTU: SAUCE: bpf: inhibit speculated out-of-bounds pointers

CVE-2017-5754 ARM64 KPTI fixes

Under speculation, CPUs may mis-predict branches in bounds checks. Thus,
memory accesses under a bounds check may be speculated even if the
bounds check fails, providing a primitive for building a side channel.

The EBPF map code has a number of such bounds-checked accesses in
map_lookup_elem implementations. This patch modifies these to use the
nospec helpers to inhibit such side channels.

The JITted lookup_elem implementations remain potentially vulnerable,
and are disabled (with JITted code falling back to the C
implementations).

Signed-off-by: Mark Rutland <email address hidden>
Signed-off-by: Will Deacon <email address hidden>
Signed-off-by: Paolo Pisati <email address hidden>
Signed-off-by: Thadeu Lima de Souza Cascardo <email address hidden>
Acked-by: Kleber Sacilotto de Souza <email address hidden>
Acked-by: Brad Figg <email address hidden>
Signed-off-by: Kleber Sacilotto de Souza <email address hidden>

190e076... by Mark Rutland

UBUNTU: SAUCE: arm: implement nospec_ptr()

CVE-2017-5754 ARM64 KPTI fixes

This patch implements nospec_ptr() for arm, following the recommended
architectural sequences for the arm and thumb instruction sets.

Signed-off-by: Mark Rutland <email address hidden>
Signed-off-by: Dan Williams <email address hidden>
Signed-off-by: Catalin Marinas <email address hidden>
Signed-off-by: Paolo Pisati <email address hidden>
Signed-off-by: Thadeu Lima de Souza Cascardo <email address hidden>
Acked-by: Kleber Sacilotto de Souza <email address hidden>
Acked-by: Brad Figg <email address hidden>
Signed-off-by: Kleber Sacilotto de Souza <email address hidden>