~vicamo/+git/ubuntu-kernel:bug-1942160/vmd-bridge-aspm-by-name/jammy

Last commit made on 2022-04-11
Get this branch:
git clone -b bug-1942160/vmd-bridge-aspm-by-name/jammy https://git.launchpad.net/~vicamo/+git/ubuntu-kernel
Only You-Sheng Yang can upload to this branch.

Branch merges

Branch information

Name:
bug-1942160/vmd-bridge-aspm-by-name/jammy
Repository:
lp:~vicamo/+git/ubuntu-kernel

Recent commits

b72b3ce... by You-Sheng Yang

Bug 1942160: UBUNTU: SAUCE: vmd: fixup bridge ASPM by driver name instead

1c7e078... by You-Sheng Yang

UBUNTU: SAUCE: vmd: fixup bridge ASPM by driver name instead

BugLink: https://bugs.launchpad.net/bugs/1942160

Additional VMD bridge IDs are needed for new Alder Lake platforms, but
no complete list of them exists. Instead, match bridge devices when
they are directly attached to a VMD controller.

Signed-off-by: You-Sheng Yang <email address hidden>
Signed-off-by: Timo Aaltonen <email address hidden>

121c072... by Dimitri John Ledkov

UBUNTU: [Config] updateconfigs after AMX patchset

BugLink: https://bugs.launchpad.net/bugs/1967750

Update configs after applying the AMX patchset. Enforce
STRICT_SIGALTSTACK_SIZE as off, because:

CONFIG_STRICT_SIGALTSTACK_SIZE is intended to enforce strict checking
of the sigaltstack size against the *real size of the FPU frame*.
Enabling it is risky, since it may break legacy applications that
allocate a too-small sigaltstack but still work today because they
never have a signal delivered. (lin-x-wang)

Fixes: cf1383fe60 ("x86/signal: Implement sigaltstack size validation")
Signed-off-by: Dimitri John Ledkov <email address hidden>
Signed-off-by: Andrea Righi <email address hidden>

87ffc89... by Jordy Zomer <email address hidden>

nfc: st21nfca: Fix potential buffer overflows in EVT_TRANSACTION

It appears that there are some buffer overflows in EVT_TRANSACTION.
This happens because the length parameters that are passed to memcpy
come directly from skb->data and are not guarded in any way.

Signed-off-by: Jordy Zomer <email address hidden>
Reviewed-by: Krzysztof Kozlowski <email address hidden>
Signed-off-by: David S. Miller <email address hidden>
(cherry picked from commit 4fbcc1a4cb20fe26ad0225679c536c80f1648221)
CVE-2022-26490
Signed-off-by: Thadeu Lima de Souza Cascardo <email address hidden>
Acked-by: Bartlomiej Zolnierkiewicz <email address hidden>
Signed-off-by: Andrea Righi <email address hidden>

83f5b84... by Peter Zijlstra <email address hidden>

bpf,x86: Respect X86_FEATURE_RETPOLINE*

BugLink: https://bugs.launchpad.net/bugs/1967579

Current BPF codegen doesn't respect the X86_FEATURE_RETPOLINE* flags
and unconditionally emits a thunk call; this is sub-optimal and doesn't
match the regular, compiler-generated, code.

Update the i386 JIT to emit code equal to what the compiler emits for
the regular kernel text (IOW, a plain THUNK call).

Update the x86_64 JIT to emit code similar to the result of compiler
and kernel rewrites according to the X86_FEATURE_RETPOLINE* flags:
RETPOLINE_AMD (lfence; jmp *%reg) and !RETPOLINE (jmp *%reg) are
inlined, while RETPOLINE still uses a THUNK call.

This removes the hard-coded retpoline thunks and shrinks the generated
code, leaving a single retpoline thunk definition in the kernel.

Signed-off-by: Peter Zijlstra (Intel) <email address hidden>
Reviewed-by: Borislav Petkov <email address hidden>
Acked-by: Alexei Starovoitov <email address hidden>
Acked-by: Josh Poimboeuf <email address hidden>
Tested-by: Alexei Starovoitov <email address hidden>
Link: https://<email address hidden>
(backported from commit 87c87ecd00c54ecd677798cb49ef27329e0fab41)
[cascardo: RETPOLINE_AMD was renamed to RETPOLINE_LFENCE]
Signed-off-by: Thadeu Lima de Souza Cascardo <email address hidden>
Signed-off-by: Andrea Righi <email address hidden>

470167a... by Peter Zijlstra <email address hidden>

bpf,x86: Simplify computing label offsets

BugLink: https://bugs.launchpad.net/bugs/1967579

Take an idea from the 32bit JIT, which uses the multi-pass nature of
the JIT to compute the instruction offsets on a prior pass in order to
compute the relative jump offsets on a later pass.

Application to the x86_64 JIT is slightly more involved because the
offsets depend on program variables (such as callee_regs_used and
stack_depth) and hence the computed offsets need to be kept in the
context of the JIT.

This removes the, IMO quite fragile, code that hard-codes the offsets
and tries to compute the length of its variable parts.

Convert both emit_bpf_tail_call_*() functions, which have an out: label
at the end. Additionally, emit_bpf_tail_call_direct() has a poke table
entry for which it computes the offset from the end (and thus already
relies on the previous pass to have computed addrs[i]); convert this to
a forward-based offset as well.

Signed-off-by: Peter Zijlstra (Intel) <email address hidden>
Reviewed-by: Borislav Petkov <email address hidden>
Acked-by: Alexei Starovoitov <email address hidden>
Acked-by: Josh Poimboeuf <email address hidden>
Tested-by: Alexei Starovoitov <email address hidden>
Link: https://<email address hidden>
(cherry picked from commit dceba0817ca329868a15e2e1dd46eb6340b69206)
Signed-off-by: Thadeu Lima de Souza Cascardo <email address hidden>
Signed-off-by: Andrea Righi <email address hidden>

d821002... by Peter Zijlstra <email address hidden>

x86/alternative: Add debug prints to apply_retpolines()

BugLink: https://bugs.launchpad.net/bugs/1967579

Make sure we can see the text changes when booting with
'debug-alternative'.

Example output:

 [ ] SMP alternatives: retpoline at: __traceiter_initcall_level+0x1f/0x30 (ffffffff8100066f) len: 5 to: __x86_indirect_thunk_rax+0x0/0x20
 [ ] SMP alternatives: ffffffff82603e58: [2:5) optimized NOPs: ff d0 0f 1f 00
 [ ] SMP alternatives: ffffffff8100066f: orig: e8 cc 30 00 01
 [ ] SMP alternatives: ffffffff8100066f: repl: ff d0 0f 1f 00

Signed-off-by: Peter Zijlstra (Intel) <email address hidden>
Reviewed-by: Borislav Petkov <email address hidden>
Acked-by: Josh Poimboeuf <email address hidden>
Tested-by: Alexei Starovoitov <email address hidden>
Link: https://<email address hidden>
(cherry picked from commit d4b5a5c993009ffeb5febe3b701da3faab6adb96 linux-next.git)
Signed-off-by: Thadeu Lima de Souza Cascardo <email address hidden>
Signed-off-by: Andrea Righi <email address hidden>

a954841... by Peter Zijlstra <email address hidden>

x86/alternative: Try inline spectre_v2=retpoline,amd

BugLink: https://bugs.launchpad.net/bugs/1967579

Try and replace retpoline thunk calls with:

  LFENCE
  CALL *%\reg

for spectre_v2=retpoline,amd.

Specifically, the sequence above is 5 bytes for the low 8 registers
but 6 bytes for the high 8 registers. This means that unless the
compiler prefix-pads calls through the higher registers, this
replacement will fail for them.

Luckily GCC strongly favours RAX for the indirect calls and most (95%+
for defconfig-x86_64) will be converted. OTOH clang strongly favours
R11 and almost nothing gets converted.

Note: it will also generate a correct replacement for the Jcc.d32
case, but unless the compilers start to prefix-pad that too, it will
never fit. Specifically:

  Jncc.d8 1f
  LFENCE
  JMP *%\reg
1:

is 7-8 bytes long, where the original instruction in unpadded form is
only 6 bytes.

Signed-off-by: Peter Zijlstra (Intel) <email address hidden>
Reviewed-by: Borislav Petkov <email address hidden>
Acked-by: Josh Poimboeuf <email address hidden>
Tested-by: Alexei Starovoitov <email address hidden>
Link: https://<email address hidden>
(backported from commit bbe2df3f6b6da7848398d55b1311d58a16ec21e4)
[cascardo: RETPOLINE_AMD was renamed to RETPOLINE_LFENCE]
Signed-off-by: Thadeu Lima de Souza Cascardo <email address hidden>
Signed-off-by: Andrea Righi <email address hidden>

4d11116... by Peter Zijlstra <email address hidden>

x86/alternative: Handle Jcc __x86_indirect_thunk_\reg

BugLink: https://bugs.launchpad.net/bugs/1967579

Handle the rare cases where the compiler (clang) does an indirect
conditional tail-call using:

  Jcc __x86_indirect_thunk_\reg

For the !RETPOLINE case this can be rewritten to fit the original (6
byte) instruction like:

  Jncc.d8 1f
  JMP *%\reg
  NOP
1:

Signed-off-by: Peter Zijlstra (Intel) <email address hidden>
Reviewed-by: Borislav Petkov <email address hidden>
Acked-by: Josh Poimboeuf <email address hidden>
Tested-by: Alexei Starovoitov <email address hidden>
Link: https://<email address hidden>
(cherry picked from commit 2f0cbb2a8e5bbf101e9de118fc0eb168111a5e1e)
Signed-off-by: Thadeu Lima de Souza Cascardo <email address hidden>
Signed-off-by: Andrea Righi <email address hidden>

2554390... by Peter Zijlstra <email address hidden>

x86/alternative: Implement .retpoline_sites support

BugLink: https://bugs.launchpad.net/bugs/1967579

Rewrite retpoline thunk call sites to be indirect calls for
spectre_v2=off. This ensures spectre_v2=off is as near to a
RETPOLINE=n build as possible.

This replaces the previous approach, in which objtool wrote alternative
entries to achieve the same, and reaches feature parity with it.

One noteworthy feature is that it relies on the thunks to be in
machine order to compute the register index.

Specifically, this does not yet address the Jcc __x86_indirect_thunk_*
calls generated by clang; a future patch will add that.

Signed-off-by: Peter Zijlstra (Intel) <email address hidden>
Reviewed-by: Borislav Petkov <email address hidden>
Acked-by: Josh Poimboeuf <email address hidden>
Tested-by: Alexei Starovoitov <email address hidden>
Link: https://<email address hidden>
(backported from commit 7508500900814d14e2e085cdc4e28142721abbdf)
[cascardo: small conflict fixup at arch/x86/kernel/module.c]
Signed-off-by: Thadeu Lima de Souza Cascardo <email address hidden>
Signed-off-by: Andrea Righi <email address hidden>