~kamalmostafa/ubuntu/+source/linux/+git/trusty:master

Last commit made on 2018-08-10
Get this branch:
git clone -b master https://git.launchpad.net/~kamalmostafa/ubuntu/+source/linux/+git/trusty
Only Kamal Mostafa can upload to this branch.

Recent commits

a7506e0... by Juerg Haefliger

UBUNTU: Ubuntu-3.13.0-155.205

Signed-off-by: Juerg Haefliger <email address hidden>

e0b5195... by tglx

posix-timer: Properly check sigevent->sigev_notify

timer_create() specifies via sigevent->sigev_notify the signal delivery for
the new timer. The valid modes are SIGEV_NONE, SIGEV_SIGNAL, SIGEV_THREAD
and (SIGEV_SIGNAL | SIGEV_THREAD_ID).

The sanity check in good_sigevent() only checks the valid combination
for the SIGEV_THREAD_ID bit, i.e. SIGEV_SIGNAL, but if SIGEV_THREAD_ID is
not set it accepts any random value.

This has no real effect on the posix timer and signal delivery code, but
it affects show_timer(), which handles the output of /proc/$PID/timers. That
function uses a string array to pretty print sigev_notify. The access to
that array has no bounds checks, so a random sigev_notify value causes access
beyond the array bounds.

Add proper checks for the valid notify modes and remove the SIGEV_THREAD_ID
masking from various code paths, as SIGEV_NONE can never be set in
combination with SIGEV_THREAD_ID.
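
For illustration, a minimal userspace sketch of the tightened mode check
(sigev_notify_valid is a made-up name; the kernel's good_sigevent() also
validates the target thread id and the signal number):

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdbool.h>

    /* Accept only the four documented notify modes and reject anything
     * else, so a random sigev_notify value can never later index past
     * show_timer()'s string array. */
    static bool sigev_notify_valid(const struct sigevent *ev)
    {
            switch (ev->sigev_notify) {
            case SIGEV_NONE:
            case SIGEV_SIGNAL:
            case SIGEV_THREAD:
            case SIGEV_SIGNAL | SIGEV_THREAD_ID:
                    return true;
            default:
                    return false;
            }
    }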

Reported-by: Eric Biggers <email address hidden>
Reported-by: Dmitry Vyukov <email address hidden>
Reported-by: Alexey Dobriyan <email address hidden>
Signed-off-by: Thomas Gleixner <email address hidden>
Cc: John Stultz <email address hidden>
Cc: <email address hidden>

CVE-2017-18344

(backported from commit cef31d9af908243421258f1df35a4a644604efbe)
[tyhicks: Do not worry about removing the SIGEV_THREAD_ID masking since it is
 irrelevant to the security fix]
Signed-off-by: Tyler Hicks <email address hidden>
Signed-off-by: Juerg Haefliger <email address hidden>

8a668da... by Eric Dumazet <email address hidden>

tcp: detect malicious patterns in tcp_collapse_ofo_queue()

In case an attacker feeds tiny packets completely out of order,
tcp_collapse_ofo_queue() might scan the whole rb-tree, performing
expensive copies, but not changing socket memory usage at all.

1) Do not attempt to collapse tiny skbs.
2) Add logic to exit early when too many tiny skbs are detected.

We prefer not doing aggressive collapsing (which copies packets)
for pathological flows, and revert to tcp_prune_ofo_queue() which
will be less expensive.

In the future, we might add the possibility of terminating flows
that are proven to be malicious.
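
As a rough userspace model of those two rules (the field names and the
one-eighth threshold are taken from the description above, not from the
exact kernel source):

    #include <stddef.h>

    struct ofo_range {              /* one contiguous run of ofo skbs */
            size_t payload;         /* bytes of actual data */
            size_t truesize;        /* memory charged to the socket */
    };

    /* Walk the queue; skip ranges where copying would free almost
     * nothing, and bail out entirely once such tiny ranges account for
     * more than 1/8 of the receive buffer. */
    static void collapse_ofo_queue(const struct ofo_range *r, size_t n,
                                   size_t rcvbuf)
    {
            size_t sum_tiny = 0;

            for (size_t i = 0; i < n; i++) {
                    if (r[i].payload * 2 >= r[i].truesize) {
                            /* worth collapsing: the copy reclaims memory */
                            continue;
                    }
                    sum_tiny += r[i].truesize;
                    if (sum_tiny > rcvbuf / 8)
                            return; /* pathological flow: exit early */
            }
    }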

Signed-off-by: Eric Dumazet <email address hidden>
Acked-by: Soheil Hassas Yeganeh <email address hidden>
Signed-off-by: David S. Miller <email address hidden>

CVE-2018-5390

(backported from commit 3d4bf93ac12003f9b8e1e2de37fe27983deebdcf)
Signed-off-by: Tyler Hicks <email address hidden>
Signed-off-by: Stefan Bader <email address hidden>
Signed-off-by: Juerg Haefliger <email address hidden>

c93f445... by Eric Dumazet <email address hidden>

tcp: avoid collapses in tcp_prune_queue() if possible

Right after a TCP flow is created, receiving tiny out-of-order
packets always hits the condition:

if (atomic_read(&sk->sk_rmem_alloc) >= sk->sk_rcvbuf)
 tcp_clamp_window(sk);

tcp_clamp_window() increases sk_rcvbuf to match sk_rmem_alloc
(guarded by tcp_rmem[2]).

Calling tcp_collapse_ofo_queue() in this case is not useful,
and offers an O(N^2) attack surface to malicious peers.

Better to not attempt anything before full queue capacity is reached,
forcing the attacker to spend lots of resources and allowing us to more
easily detect the abuse.
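
The change itself is essentially one early return placed before the
collapse work, roughly (shape per the upstream commit; surrounding
context elided):

    	/* After tcp_clamp_window() has grown sk_rcvbuf the queue may
    	 * already fit; collapsing would then only burn cycles. */
    	if (atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf)
    		return 0;

    	tcp_collapse_ofo_queue(sk);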

Signed-off-by: Eric Dumazet <email address hidden>
Acked-by: Soheil Hassas Yeganeh <email address hidden>
Acked-by: Yuchung Cheng <email address hidden>
Signed-off-by: David S. Miller <email address hidden>

CVE-2018-5390

(cherry picked from commit f4a3313d8e2ca9fd8d8f45e40a2903ba782607e7)
Signed-off-by: Juerg Haefliger <email address hidden>

036a56d... by Stefan Bader

Revert "net: increase fragment memory usage limits"

This reverts commit c2a936600f78aea00d3312ea4b66a79a4619f9b4. It
made denial of service attacks on the IP fragment handling easier to
carry out.

CVE-2018-5391

Signed-off-by: Stefan Bader <email address hidden>
(cherry picked from Xenial)
Signed-off-by: Juerg Haefliger <email address hidden>

8b4a424... by Andi Kleen <email address hidden>

x86/mm/pat: Make set_memory_np() L1TF safe

set_memory_np() is used to mark kernel mappings not present, but it has
its own open-coded mechanism which does not have the L1TF protection of
inverting the address bits.

Replace the open coded PTE manipulation with the L1TF protecting low level
PTE routines.

Passes the CPA self test.
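
Conceptually, the protection those low-level routines provide looks like
this (a self-contained toy model; the mask, shift and function name are
illustrative, not the kernel's real page-table layout):

    #include <stdint.h>

    #define TOY_PRESENT   0x1ULL                 /* entry is mapped */
    #define TOY_PFN_MASK  0x000ffffffffff000ULL  /* frame-number bits */

    /* Build a page-table entry; for not-present entries, invert the
     * frame-number bits so a speculative L1TF read sees an address
     * pointing at unpopulated memory. */
    static uint64_t toy_make_pte(uint64_t pfn, uint64_t flags)
    {
            uint64_t phys = pfn << 12;

            if (!(flags & TOY_PRESENT))
                    phys = ~phys & TOY_PFN_MASK;
            return (phys & TOY_PFN_MASK) | flags;
    }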

Signed-off-by: Andi Kleen <email address hidden>
Signed-off-by: Thomas Gleixner <email address hidden>

CVE-2018-3620
CVE-2018-3646

[smb: Context adjustments]
Signed-off-by: Stefan Bader <email address hidden>
(backported from Xenial)
[juergh: Adjusted context.]
Signed-off-by: Juerg Haefliger <email address hidden>

d151b3b... by Stefan Bader

UBUNTU: SAUCE: Add pfn_pud() and pud_mkhuge()

Both were introduced when PUD-sized transparent hugepage support was
added, and backporting that in full would be too complex and dangerous. The
pfn_pud() function was extended in "x86/speculation/l1tf: Protect
PROT_NONE PTEs against speculation" but was not backported at that point
since it did not exist yet (so there were no users).
For the following patch, though, we will need both.

CVE-2018-3620
CVE-2018-3646

Signed-off-by: Stefan Bader <email address hidden>
(backported from Xenial)
[juergh: Adjusted context.]
Signed-off-by: Juerg Haefliger <email address hidden>

9468d91... by Matt Fleming <email address hidden>

x86/mm/pat: Ensure cpa->pfn only contains page frame numbers

The x86 pageattr code is confused about the data that is stored
in cpa->pfn: sometimes it is treated as a page frame number,
sometimes as an unshifted physical address, and in
one place as a pte.

The result of this is that the mapping functions do not map the
intended physical address.

This isn't a problem in practice because most of the addresses
we're mapping in the EFI code paths are already mapped in
'trampoline_pgd' and so the pageattr mapping functions don't
actually do anything in this case. But when we move to using a
separate page table for the EFI runtime this will be an issue.
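
The unit confusion is easy to state: a page frame number and a physical
address differ by PAGE_SHIFT, so storing one where the other is expected
maps the wrong page. A toy illustration with 4 KiB pages:

    #include <stdio.h>

    #define PAGE_SHIFT 12

    int main(void)
    {
            unsigned long phys = 0x1234000UL;         /* physical address */
            unsigned long pfn  = phys >> PAGE_SHIFT;  /* frame no. 0x1234 */

            /* treating pfn as an address (or vice versa) is off by 2^12 */
            printf("phys=%#lx pfn=%#lx back=%#lx\n",
                   phys, pfn, pfn << PAGE_SHIFT);
            return 0;
    }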

Signed-off-by: Matt Fleming <email address hidden>
Reviewed-by: Borislav Petkov <email address hidden>
Acked-by: Borislav Petkov <email address hidden>
Cc: Andy Lutomirski <email address hidden>
Cc: Ard Biesheuvel <email address hidden>
Cc: Borislav Petkov <email address hidden>
Cc: Brian Gerst <email address hidden>
Cc: Dave Hansen <email address hidden>
Cc: Denys Vlasenko <email address hidden>
Cc: H. Peter Anvin <email address hidden>
Cc: Linus Torvalds <email address hidden>
Cc: Peter Zijlstra <email address hidden>
Cc: Sai Praneeth Prakhya <email address hidden>
Cc: Thomas Gleixner <email address hidden>
Cc: Toshi Kani <email address hidden>
Cc: <email address hidden>
Link: http://<email address hidden>
Signed-off-by: Ingo Molnar <email address hidden>

CVE-2018-3620
CVE-2018-3646

(backported from commit edc3b9129cecd0f0857112136f5b8b1bc1d45918)
[juergh:
 - Adjusted context.
 - {pmd,pud}_pgprot -> pgprot.]
Signed-off-by: Juerg Haefliger <email address hidden>

c6e1653... by Andi Kleen <email address hidden>

x86/speculation/l1tf: Make pmd/pud_mknotpresent() invert

Some cases in THP like:
  - MADV_FREE
  - mprotect
  - split

mark the PMD temporarily not present to prevent races. The window for
an L1TF attack in these contexts is very small, but it should be fixed
for correctness' sake.

Use the proper low-level functions for pmd/pud_mknotpresent() to address
this.
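
The fix routes the helper through the pfn-based constructor so the
inversion is applied, roughly (shape per the upstream commit; as noted
below, this tree drops the pud variant):

    static inline pmd_t pmd_mknotpresent(pmd_t pmd)
    {
    	/* rebuild via pfn_pmd() so the now not-present frame number is
    	 * stored inverted, instead of clearing the bit in place */
    	return pfn_pmd(pmd_pfn(pmd),
    		       __pgprot(pmd_flags(pmd) & ~(_PAGE_PRESENT |
    						   _PAGE_PROTNONE)));
    }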

Signed-off-by: Andi Kleen <email address hidden>
Signed-off-by: Thomas Gleixner <email address hidden>

CVE-2018-3620
CVE-2018-3646

[smb: Drop pud_mknotpresent() changes as it does not exist]
Signed-off-by: Stefan Bader <email address hidden>
(backported from Xenial)
[juergh: Adjusted context.]
Signed-off-by: Juerg Haefliger <email address hidden>

996259e... by Andi Kleen <email address hidden>

x86/speculation/l1tf: Invert all not present mappings

For kernel mappings, PAGE_PROTNONE is not necessarily set for a non-present
mapping, but the inversion logic explicitly checks for !PRESENT and
PROT_NONE.

Remove the PROT_NONE check and make the inversion unconditional for all
non-present mappings.
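
After the change the inversion predicate tests only the present bit,
roughly (shape per the upstream commit):

    /* before: required both !_PAGE_PRESENT and _PAGE_PROTNONE */
    static inline bool __pte_needs_invert(u64 val)
    {
    	return !(val & _PAGE_PRESENT);
    }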

Signed-off-by: Andi Kleen <email address hidden>
Signed-off-by: Thomas Gleixner <email address hidden>

CVE-2018-3620
CVE-2018-3646

Signed-off-by: Stefan Bader <email address hidden>
(cherry picked from Xenial)
Signed-off-by: Juerg Haefliger <email address hidden>