~vicamo/+git/ubuntu-kernel:bug-1861610/add-more-lenovo-elan-i2c-ids/mainline

Last commit made on 2020-03-27
Get this branch:
git clone -b bug-1861610/add-more-lenovo-elan-i2c-ids/mainline https://git.launchpad.net/~vicamo/+git/ubuntu-kernel

Branch information

Name:
bug-1861610/add-more-lenovo-elan-i2c-ids/mainline
Repository:
lp:~vicamo/+git/ubuntu-kernel

Recent commits

113238e... by "Dave.Wang" <email address hidden>

Input: elan_i2c - add more hardware ID for Lenovo laptop

Signed-off-by: Dave Wang <email address hidden>
(cherry picked from
https://lore.kernel.org/linux-input/000201d5a8bd$9fead3f0$dfc07bd0$@emc.com.tw/)
Signed-off-by: You-Sheng Yang <email address hidden>
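
For context, elan_i2c recognises new touchpads through its ACPI ID
table, so a change like this is essentially a table addition. A minimal
sketch of the shape of such a change ("ELAN1234" below is a placeholder,
not one of the IDs actually added by this patch):

  static const struct acpi_device_id elan_acpi_id[] = {
          /* ... existing entries ... */
          { "ELAN1234", 0 },      /* placeholder for a new Lenovo touchpad ID */
          { }
  };
  MODULE_DEVICE_TABLE(acpi, elan_acpi_id);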

16fbf79... by Linus Torvalds <email address hidden>

Linux 5.6-rc7

67d584e... by Linus Torvalds <email address hidden>

Merge tag 'for-5.6-rc6-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux

Pull btrfs fixes from David Sterba:
 "Two fixes.

  The first is a regression: when dropping some incompat bits the
  conditions were reversed. The other is a fix for rename whiteout
  potentially leaving stack memory linked to a list"

* tag 'for-5.6-rc6-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux:
  btrfs: fix removal of raid[56|1c34} incompat flags after removing block group
  btrfs: fix log context list corruption after rename whiteout error

b3c03db... by Linus Torvalds <email address hidden>

Merge branch 'akpm' (patches from Andrew)

Merge misc fixes from Andrew Morton:
 "10 fixes"

* emailed patches from Andrew Morton <email address hidden>:
  x86/mm: split vmalloc_sync_all()
  mm, slub: prevent kmalloc_node crashes and memory leaks
  mm/mmu_notifier: silence PROVE_RCU_LIST warnings
  epoll: fix possible lost wakeup on epoll_ctl() path
  mm: do not allow MADV_PAGEOUT for CoW pages
  mm, memcg: throttle allocators based on ancestral memory.high
  mm, memcg: fix corruption on 64-bit divisor in memory.high throttling
  page-flags: fix a crash at SetPageError(THP_SWAP)
  mm/hotplug: fix hot remove failure in SPARSEMEM|!VMEMMAP case
  memcg: fix NULL pointer dereference in __mem_cgroup_usage_unregister_event

763802b... by Joerg Roedel <email address hidden>

x86/mm: split vmalloc_sync_all()

Commit 3f8fd02b1bf1 ("mm/vmalloc: Sync unmappings in
__purge_vmap_area_lazy()") introduced a call to vmalloc_sync_all() in
the vunmap() code-path. While this change was necessary to maintain
correctness on x86-32-pae kernels, it also adds additional cycles for
architectures that don't need it.

Specifically on x86-64 with CONFIG_VMAP_STACK=y some people reported
severe performance regressions in micro-benchmarks because it now also
calls the x86-64 implementation of vmalloc_sync_all() on vunmap(). But
the vmalloc_sync_all() implementation on x86-64 is only needed for newly
created mappings.

To avoid the unnecessary work on x86-64 and to gain the performance
back, split up vmalloc_sync_all() into two functions:

 * vmalloc_sync_mappings(), and
 * vmalloc_sync_unmappings()

Most call-sites of vmalloc_sync_all() only care about new mappings being
synchronized. The only exception is the new call-site added in the
above-mentioned commit.
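
A rough sketch of the resulting shape (hedged, not the verbatim diff):
the generic defaults stay empty weak functions, and an architecture
overrides only the hook(s) it needs, so x86-64, which only has to sync
new mappings, can leave the unmapping variant a no-op:

  void __weak vmalloc_sync_mappings(void)
  {
          /* overridden e.g. on x86-32 PAE to propagate new vmalloc PTEs */
  }

  void __weak vmalloc_sync_unmappings(void)
  {
          /* only the vunmap()/__purge_vmap_area_lazy() path calls this */
  }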

Shile Zhang directed us to a report of an 80% regression in reaim
throughput.

Fixes: 3f8fd02b1bf1 ("mm/vmalloc: Sync unmappings in __purge_vmap_area_lazy()")
Reported-by: kernel test robot <email address hidden>
Reported-by: Shile Zhang <email address hidden>
Signed-off-by: Joerg Roedel <email address hidden>
Signed-off-by: Andrew Morton <email address hidden>
Tested-by: Borislav Petkov <email address hidden>
Acked-by: Rafael J. Wysocki <email address hidden> [GHES]
Cc: Dave Hansen <email address hidden>
Cc: Andy Lutomirski <email address hidden>
Cc: Peter Zijlstra <email address hidden>
Cc: Thomas Gleixner <email address hidden>
Cc: Ingo Molnar <email address hidden>
Cc: <email address hidden>
Link: http://lkml.kernel.org/r/20191009124418.8286-1-joro@8bytes.org
Link: https://<email address hidden>/thread/4D3JPPHBNOSPFK2KEPC6KGKS6J25AIDB/
Link: http://<email address hidden>
Signed-off-by: Linus Torvalds <email address hidden>

0715e6c... by Vlastimil Babka <email address hidden>

mm, slub: prevent kmalloc_node crashes and memory leaks

Sachin reports [1] a crash in SLUB __slab_alloc():

  BUG: Kernel NULL pointer dereference on read at 0x000073b0
  Faulting instruction address: 0xc0000000003d55f4
  Oops: Kernel access of bad area, sig: 11 [#1]
  LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
  Modules linked in:
  CPU: 19 PID: 1 Comm: systemd Not tainted 5.6.0-rc2-next-20200218-autotest #1
  NIP: c0000000003d55f4 LR: c0000000003d5b94 CTR: 0000000000000000
  REGS: c0000008b37836d0 TRAP: 0300 Not tainted (5.6.0-rc2-next-20200218-autotest)
  MSR: 8000000000009033 <SF,EE,ME,IR,DR,RI,LE> CR: 24004844 XER: 00000000
  CFAR: c00000000000dec4 DAR: 00000000000073b0 DSISR: 40000000 IRQMASK: 1
  GPR00: c0000000003d5b94 c0000008b3783960 c00000000155d400 c0000008b301f500
  GPR04: 0000000000000dc0 0000000000000002 c0000000003443d8 c0000008bb398620
  GPR08: 00000008ba2f0000 0000000000000001 0000000000000000 0000000000000000
  GPR12: 0000000024004844 c00000001ec52a00 0000000000000000 0000000000000000
  GPR16: c0000008a1b20048 c000000001595898 c000000001750c18 0000000000000002
  GPR20: c000000001750c28 c000000001624470 0000000fffffffe0 5deadbeef0000122
  GPR24: 0000000000000001 0000000000000dc0 0000000000000002 c0000000003443d8
  GPR28: c0000008b301f500 c0000008bb398620 0000000000000000 c00c000002287180
  NIP ___slab_alloc+0x1f4/0x760
  LR __slab_alloc+0x34/0x60
  Call Trace:
    ___slab_alloc+0x334/0x760 (unreliable)
    __slab_alloc+0x34/0x60
    __kmalloc_node+0x110/0x490
    kvmalloc_node+0x58/0x110
    mem_cgroup_css_online+0x108/0x270
    online_css+0x48/0xd0
    cgroup_apply_control_enable+0x2ec/0x4d0
    cgroup_mkdir+0x228/0x5f0
    kernfs_iop_mkdir+0x90/0xf0
    vfs_mkdir+0x110/0x230
    do_mkdirat+0xb0/0x1a0
    system_call+0x5c/0x68

This is a PowerPC platform with the following NUMA topology:

  available: 2 nodes (0-1)
  node 0 cpus:
  node 0 size: 0 MB
  node 0 free: 0 MB
  node 1 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
  node 1 size: 35247 MB
  node 1 free: 30907 MB
  node distances:
  node 0 1
    0: 10 40
    1: 40 10

  possible numa nodes: 0-31

This only happens with an mmotm patch "mm/memcontrol.c: allocate
shrinker_map on appropriate NUMA node" [2] which effectively calls
kmalloc_node for each possible node. SLUB, however, only allocates
kmem_cache_node on online N_NORMAL_MEMORY nodes, and has relied on
node_to_mem_node to return such a valid node for other nodes since
commit a561ce00b09e ("slub: fall back to node_to_mem_node() node if
allocating on memoryless node"). This does not hold in this
configuration, where the _node_numa_mem_ array is not initialized for
nodes 0 and 2-31; it therefore contains zeroes, and get_partial() ends
up accessing a non-allocated kmem_cache_node.

A related issue was reported by Bharata (originally by Ramachandran) [3]
where a similar PowerPC configuration, but with a mainline kernel
without patch [2], ends up allocating large amounts of pages in the
kmalloc-1k and kmalloc-512 caches. This seems to have the same
underlying issue with node_to_mem_node() not behaving as expected, and
might probably also lead to an infinite loop with
CONFIG_SLUB_CPU_PARTIAL [4].

This patch should fix both issues by not relying on node_to_mem_node()
anymore and instead simply falling back to NUMA_NO_NODE, when
kmalloc_node(node) is attempted for a node that's not online, or has no
usable memory. The "usable memory" condition is also changed from
node_present_pages() to N_NORMAL_MEMORY node state, as that is exactly
the condition that SLUB uses to allocate kmem_cache_node structures.
The check in get_partial() is removed completely, as the checks in
___slab_alloc() are now sufficient to prevent get_partial() being
reached with an invalid node.
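
In sketch form (hedged, not the verbatim hunk), the fallback described
above amounts to:

  /* never hand SLUB a node that has no kmem_cache_node allocated */
  if (node != NUMA_NO_NODE && !node_state(node, N_NORMAL_MEMORY))
          node = NUMA_NO_NODE;    /* fall back to any suitable node */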

[1] https://<email address hidden>/
[2] https://<email address hidden>/
[3] https://<email address hidden>/
[4] https://<email address hidden>/

Fixes: a561ce00b09e ("slub: fall back to node_to_mem_node() node if allocating on memoryless node")
Reported-by: Sachin Sant <email address hidden>
Reported-by: PUVICHAKRAVARTHY RAMACHANDRAN <email address hidden>
Signed-off-by: Vlastimil Babka <email address hidden>
Signed-off-by: Andrew Morton <email address hidden>
Tested-by: Sachin Sant <email address hidden>
Tested-by: Bharata B Rao <email address hidden>
Reviewed-by: Srikar Dronamraju <email address hidden>
Cc: Mel Gorman <email address hidden>
Cc: Michael Ellerman <email address hidden>
Cc: Michal Hocko <email address hidden>
Cc: Christopher Lameter <email address hidden>
Cc: <email address hidden>
Cc: Joonsoo Kim <email address hidden>
Cc: Pekka Enberg <email address hidden>
Cc: David Rientjes <email address hidden>
Cc: Kirill Tkhai <email address hidden>
Cc: Vlastimil Babka <email address hidden>
Cc: Nathan Lynch <email address hidden>
Cc: <email address hidden>
Link: http://<email address hidden>
Debugged-by: Srikar Dronamraju <email address hidden>
Signed-off-by: Linus Torvalds <email address hidden>

63886ba... by Qian Cai <email address hidden>

mm/mmu_notifier: silence PROVE_RCU_LIST warnings

It is safe to traverse mm->notifier_subscriptions->list either under
SRCU read lock or mm->notifier_subscriptions->lock using
hlist_for_each_entry_rcu(). Silence the PROVE_RCU_LIST false positives,
for example,

  WARNING: suspicious RCU usage
  -----------------------------
  mm/mmu_notifier.c:484 RCU-list traversed in non-reader section!!

  other info that might help us debug this:

  rcu_scheduler_active = 2, debug_locks = 1
  3 locks held by libvirtd/802:
   #0: ffff9321e3f58148 (&mm->mmap_sem#2){++++}, at: do_mprotect_pkey+0xe1/0x3e0
   #1: ffffffff91ae6160 (mmu_notifier_invalidate_range_start){+.+.}, at: change_p4d_range+0x5fa/0x800
   #2: ffffffff91ae6e08 (srcu){....}, at: __mmu_notifier_invalidate_range_start+0x178/0x460

  stack backtrace:
  CPU: 7 PID: 802 Comm: libvirtd Tainted: G I 5.6.0-rc6-next-20200317+ #2
  Hardware name: HP ProLiant BL460c Gen8, BIOS I31 11/02/2014
  Call Trace:
    dump_stack+0xa4/0xfe
    lockdep_rcu_suspicious+0xeb/0xf5
    __mmu_notifier_invalidate_range_start+0x3ff/0x460
    change_p4d_range+0x746/0x800
    change_protection+0x1df/0x300
    mprotect_fixup+0x245/0x3e0
    do_mprotect_pkey+0x23b/0x3e0
    __x64_sys_mprotect+0x51/0x70
    do_syscall_64+0x91/0xae8
    entry_SYSCALL_64_after_hwframe+0x49/0xb3
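
The silencing itself is mechanical; a hedged sketch of the pattern is to
pass the held-lock condition to the RCU list iterator, so lockdep knows
the traversal described above is protected by the SRCU read lock (or the
subscriptions lock) rather than a plain rcu_read_lock():

  hlist_for_each_entry_rcu(subscription,
                           &mm->notifier_subscriptions->list, hlist,
                           srcu_read_lock_held(&srcu)) {
          /* ... invoke the notifier callback ... */
  }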

Signed-off-by: Qian Cai <email address hidden>
Signed-off-by: Andrew Morton <email address hidden>
Reviewed-by: Paul E. McKenney <email address hidden>
Reviewed-by: Jason Gunthorpe <email address hidden>
Link: http://<email address hidden>
Signed-off-by: Linus Torvalds <email address hidden>

1b53734... by Roman Penyaev <email address hidden>

epoll: fix possible lost wakeup on epoll_ctl() path

This fixes a possible lost wakeup introduced by commit a218cc491420.
Originally, modifications to ep->wq were serialized by ep->wq.lock, but
in commit a218cc491420 ("epoll: use rwlock in order to reduce
ep_poll_callback() contention") a new rwlock was introduced in order to
relax the fd event path, i.e. callers of the ep_poll_callback() function.

After the change, ep_modify and ep_insert (both called on the epoll_ctl()
path) were switched to ep->lock, but ep_poll (epoll_wait) was still
using ep->wq.lock for wqueue list modification.

The bug doesn't lead to any wqueue list corruption, because the wake-up
path and list modifications were still serialized by ep->wq.lock
internally, but the waitqueue_active() check prior to the wake_up() call
can be reordered with modifications of the ep ready list, so a wake-up
can be lost.

And yes, it can be healed by an explicit smp_mb():

  list_add_tail(&epi->rdllink, &ep->rdllist);
  smp_mb();
  if (waitqueue_active(&ep->wq))
          wake_up(&ep->wq);

But let's keep it simple: this patch replaces ep->wq.lock with ep->lock
for wqueue modifications, so the wake-up path always observes the
activeness of the wqueue correctly.
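
In sketch form (assuming the rwlock introduced by a218cc491420), the
wait-queue manipulation in ep_poll() now happens under the same ep->lock
that the wake-up side already takes:

  write_lock_irq(&ep->lock);
  __add_wait_queue_exclusive(&ep->wq, &wait);
  write_unlock_irq(&ep->lock);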

Fixes: a218cc491420 ("epoll: use rwlock in order to reduce ep_poll_callback() contention")
Reported-by: Max Neunhoeffer <email address hidden>
Signed-off-by: Roman Penyaev <email address hidden>
Signed-off-by: Andrew Morton <email address hidden>
Tested-by: Max Neunhoeffer <email address hidden>
Cc: Jakub Kicinski <email address hidden>
Cc: Christopher Kohlhoff <email address hidden>
Cc: Davidlohr Bueso <email address hidden>
Cc: Jason Baron <email address hidden>
Cc: Jes Sorensen <email address hidden>
Cc: <email address hidden> [5.1+]
Link: http://<email address hidden>
References: https://bugzilla.kernel.org/show_bug.cgi?id=205933
Bisected-by: Max Neunhoeffer <email address hidden>
Signed-off-by: Linus Torvalds <email address hidden>

12e967f... by Michal Hocko <email address hidden>

mm: do not allow MADV_PAGEOUT for CoW pages

Jann has brought up a very interesting point [1]. While shared pages
are normally excluded from MADV_PAGEOUT, CoW pages can be easily
reclaimed that way. This can lead to all sorts of hard-to-debug
problems, e.g. the performance problems outlined by Daniel [2].

There are runtime environments where a substantial amount of memory is
shared among security domains via CoW memory, and an easy way to reclaim
that memory, which MADV_{COLD,PAGEOUT} offers, can lead either to
performance degradation for the parent process, which might be more
privileged, or even open side-channel attacks.

The feasibility of the latter is not really clear to me TBH, but there
is no real reason for exposure at this stage. There seems to be no real
use case that depends on reclaiming CoW memory via madvise at this
stage, so it is much easier to simply disallow it, and that is what this
patch does. Put simply, MADV_{PAGEOUT,COLD} can operate only on
exclusively owned memory, which is a straightforward semantic.
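
A hedged sketch of what "exclusively owned" boils down to in the
cold/pageout page-table walk (not the verbatim patch):

  /* skip pages mapped by more than one process, e.g. CoW-shared pages */
  if (page_mapcount(page) != 1)
          continue;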

[1] http://lkml.<email address hidden>
[2] http://lkml.<email address hidden>

Fixes: 9c276cc65a58 ("mm: introduce MADV_COLD")
Reported-by: Jann Horn <email address hidden>
Signed-off-by: Michal Hocko <email address hidden>
Signed-off-by: Andrew Morton <email address hidden>
Acked-by: Vlastimil Babka <email address hidden>
Cc: Minchan Kim <email address hidden>
Cc: Daniel Colascione <email address hidden>
Cc: Dave Hansen <email address hidden>
Cc: "Joel Fernandes (Google)" <email address hidden>
Cc: <email address hidden>
Link: http://<email address hidden>
Signed-off-by: Linus Torvalds <email address hidden>

e26733e... by Chris Down

mm, memcg: throttle allocators based on ancestral memory.high

Prior to this commit, we only directly checked the affected cgroup's
memory.high against its usage. However, it's possible that we are being
reclaimed as a result of hitting an ancestor's memory.high and should be
penalised based on that, instead.

This patch changes memory.high overage throttling to use the largest
overage in its ancestors when considering how many penalty jiffies to
charge. This makes sure that we penalise poorly behaving cgroups in the
same way regardless of at what level of the hierarchy memory.high was
breached.
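
A hedged sketch of the idea (overage_of() here is an illustrative
helper standing in for the clamped usage-vs-high calculation, not an
upstream name): walk up from the charged cgroup and penalise based on
the worst overage found among its ancestors:

  for (; memcg && !mem_cgroup_is_root(memcg); memcg = parent_mem_cgroup(memcg)) {
          unsigned long usage = page_counter_read(&memcg->memory);
          unsigned long high  = READ_ONCE(memcg->high);  /* memory.high limit */

          max_overage = max(max_overage, overage_of(usage, high));
  }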

Fixes: 0e4b01df8659 ("mm, memcg: throttle allocators when failing reclaim over memory.high")
Reported-by: Johannes Weiner <email address hidden>
Signed-off-by: Chris Down <email address hidden>
Signed-off-by: Andrew Morton <email address hidden>
Acked-by: Johannes Weiner <email address hidden>
Cc: Tejun Heo <email address hidden>
Cc: Michal Hocko <email address hidden>
Cc: Nathan Chancellor <email address hidden>
Cc: Roman Gushchin <email address hidden>
Cc: <email address hidden> [5.4.x+]
Link: http://lkml.kernel.org<email address hidden>
Signed-off-by: Linus Torvalds <email address hidden>