~jpdonnelly/ubuntu/+source/linux/+git/trusty:master

Last commit made on 2016-12-06
Get this branch:
git clone -b master https://git.launchpad.net/~jpdonnelly/ubuntu/+source/linux/+git/trusty
Only John Donnelly can upload to this branch.

Recent commits

f2bd6e2... by Luis Henriques

UBUNTU: Ubuntu-3.13.0-106.153

Signed-off-by: Luis Henriques <email address hidden>

ec6e9b9... by Mathias Krause <email address hidden>

proc: prevent accessing /proc/<PID>/environ until it's ready

If /proc/<PID>/environ gets read before the envp[] array is fully set up
in create_{aout,elf,elf_fdpic,flat}_tables(), we might end up trying to
read more bytes than are actually written, as env_start will already be
set but env_end will still be zero, making the range calculation
underflow and allowing reads beyond the end of what has been written.

Fix this as it is done for /proc/<PID>/cmdline by testing env_end for
zero. It is, apparently, intentionally set last in create_*_tables().

This bug was found by the PaX size_overflow plugin that detected the
arithmetic underflow of 'this_len = env_end - (env_start + src)' when
env_end is still zero.
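
For illustration only, here is a minimal user-space sketch of that
wrap-around; the variable names mirror the kernel's fields, but the
values are made up:

    #include <stdio.h>

    int main(void)
    {
        /* env_start already published, env_end still zero during exec */
        unsigned long env_start = 0x7ffc12340000UL;
        unsigned long env_end = 0;
        unsigned long src = 0;

        /* the length calculation quoted above */
        unsigned long this_len = env_end - (env_start + src);

        /* the unsigned subtraction wraps, yielding a huge length */
        printf("this_len = %lu\n", this_len);
        return 0;
    }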

The expected consequence is that userland trying to access
/proc/<PID>/environ of a not yet fully set up process may get
inconsistent data as we're in the middle of copying in the environment
variables.

Fixes: https://forums.grsecurity.net/viewtopic.php?f=3&t=4363
Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=116461
Signed-off-by: Mathias Krause <email address hidden>
Cc: Emese Revfy <email address hidden>
Cc: Pax Team <email address hidden>
Cc: Al Viro <email address hidden>
Cc: Mateusz Guzik <email address hidden>
Cc: Alexey Dobriyan <email address hidden>
Cc: Cyrill Gorcunov <email address hidden>
Cc: Jarod Wilson <email address hidden>
Signed-off-by: Andrew Morton <email address hidden>
Signed-off-by: Linus Torvalds <email address hidden>
CVE-2016-7916
(cherry picked from commit 8148a73c9901a8794a50f950083c00ccf97d43b3)
Signed-off-by: Luis Henriques <email address hidden>
Acked-by: Tim Gardner <email address hidden>
Acked-by: Colin Ian King <email address hidden>

7754e97... by Eric W. Biederman

mnt: Add a per mount namespace limit on the number of mounts

CAI Qian <email address hidden> pointed out that the semantics
of shared subtrees make it possible to create an exponentially
increasing number of mounts in a mount namespace.

    mkdir /tmp/1 /tmp/2
    mount --make-rshared /
    for i in $(seq 1 20) ; do mount --bind /tmp/1 /tmp/2 ; done

This will create 2^20 (1048576) mounts, which is a practical problem as
some people have managed to hit this by accident.

As such CVE-2016-6213 was assigned.

Ian Kent <email address hidden> described the situation for autofs users
as follows:

> The number of mounts for direct mount maps is usually not very large because of
> the way they are implemented, large direct mount maps can have performance
> problems. There can be anywhere from a few (likely case a few hundred) to less
> than 10000, plus mounts that have been triggered and not yet expired.
>
> Indirect mounts have one autofs mount at the root plus the number of mounts that
> have been triggered and not yet expired.
>
> The number of autofs indirect map entries can range from a few to the common
> case of several thousand and in rare cases up to between 30000 and 50000. I've
> not heard of people with maps larger than 50000 entries.
>
> The larger the number of map entries the greater the possibility for a large
> number of active mounts so it's not hard to expect cases of a 1000 or somewhat
> more active mounts.

So I am setting the default number of mounts allowed per mount
namespace at 100,000. This is more than enough for any use case I
know of, but small enough to quickly stop an exponential increase
in mounts, which should be perfect for catching misconfigurations and
malfunctioning programs.

For anyone who needs a higher limit this can be changed by writing
to the new /proc/sys/fs/mount-max sysctl.
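
As a rough sketch (assuming only that the sysctl accepts a plain
integer; 200000 is an arbitrary example value), the limit could also be
raised from C:

    #include <stdio.h>

    /* Raise the per-namespace mount limit.  Needs root and a kernel
     * that carries this patch; 200000 is just an example value. */
    int main(void)
    {
        FILE *f = fopen("/proc/sys/fs/mount-max", "w");

        if (!f) {
            perror("open /proc/sys/fs/mount-max");
            return 1;
        }
        fprintf(f, "%u\n", 200000U);
        fclose(f);
        return 0;
    }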

Tested-by: CAI Qian <email address hidden>
Signed-off-by: "Eric W. Biederman" <email address hidden>
CVE-2016-6213
(backported from commit d29216842a85c7970c536108e093963f02714498)
[ luis:
  - adjusted context
  - replaced READ_ONCE() by ACCESS_ONCE() ]
Signed-off-by: Luis Henriques <email address hidden>
Acked-by: Seth Forshee <email address hidden>
Acked-by: Tim Gardner <email address hidden>

636f6d7... by Benjamin LaHaise <email address hidden>

aio: fix reqs_available handling

BugLink: http://bugs.launchpad.net/bugs/1641129

As reported by Dan Aloni, commit f8567a3845ac ("aio: fix aio request
leak when events are reaped by userspace") introduces a regression when
user code attempts to perform io_submit() with more events than are
available in the ring buffer. Reverting that commit would reintroduce a
regression when user space event reaping is used.

Fixing this bug is a bit more involved than the previous attempts to fix
this regression. Since we do not have a single point at which we can
count events as being reaped by user space and io_getevents(), we have
to track event completion by looking at the number of events left in the
event ring. So long as there are as many events in the ring buffer as
there have been completion events generated, we cannot call
put_reqs_available(). The code to check for this is now placed in
refill_reqs_available().
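
A user-space model of that accounting (the names are illustrative, not
the kernel's): only completions in excess of the events still sitting
in the ring may be handed back to the available pool.

    #include <stdio.h>

    /* With nr_events ring slots, head/tail as the ring indices and
     * 'completed' completions recorded, return how many request slots
     * may safely be given back via put_reqs_available(). */
    static unsigned reclaimable(unsigned nr_events, unsigned head,
                                unsigned tail, unsigned completed)
    {
        unsigned in_ring;

        head %= nr_events;
        if (head <= tail)
            in_ring = tail - head;
        else
            in_ring = nr_events - (head - tail);

        return completed > in_ring ? completed - in_ring : 0;
    }

    int main(void)
    {
        /* 4 completions recorded, 3 events still unreaped in the ring:
         * only 1 request slot may be handed back. */
        printf("%u\n", reclaimable(128, 10, 13, 4));
        return 0;
    }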

A test program from Dan, modified by me to verify this bug, is available
at http://www.kvack.org/~bcrl/20140824-aio_bug.c.

Reported-by: Dan Aloni <email address hidden>
Signed-off-by: Benjamin LaHaise <email address hidden>
Acked-by: Dan Aloni <email address hidden>
Cc: Kent Overstreet <email address hidden>
Cc: Mateusz Guzik <email address hidden>
Cc: Petr Matousek <email address hidden>
Cc: <email address hidden> # v3.16 and anything that f8567a3845ac was backported to
Signed-off-by: Linus Torvalds <email address hidden>
(cherry picked from commit d856f32a86b2b015ab180ab7a55e455ed8d3ccc5)
Signed-off-by: Tim Gardner <email address hidden>
Acked-by: Colin Ian King <email address hidden>
Acked-by: Stefan Bader <email address hidden>
Signed-off-by: Luis Henriques <email address hidden>

587e541... by Long Li

hv: do not lose pending heartbeat vmbus packets

BugLink: http://bugs.launchpad.net/bugs/1632786

The host keeps sending heartbeat packets independently of whether the
guest responds to them. Even though we respond to the heartbeat messages
at interrupt level, we can have situations where multiple heartbeat
messages are pending that have not been responded to. For instance, this
occurs when the VM is paused and the host continues to send heartbeat
messages. Address this issue by draining and responding to all the
heartbeat messages that may be pending.
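
A toy user-space model of that drain loop (the counter below stands in
for the vmbus channel; nothing here is the real hv_utils code):

    #include <stdio.h>

    /* Pretend the VM was paused and several heartbeats piled up. */
    static int pending = 3;

    /* Returns 1 while another heartbeat is still queued. */
    static int next_heartbeat(void)
    {
        if (pending == 0)
            return 0;
        pending--;
        return 1;
    }

    int main(void)
    {
        int answered = 0;

        /* Drain the queue instead of answering only one message. */
        while (next_heartbeat())
            answered++;

        printf("answered %d pending heartbeats\n", answered);
        return 0;
    }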

Signed-off-by: Long Li <email address hidden>
Signed-off-by: K. Y. Srinivasan <email address hidden>
CC: Stable <email address hidden>
Signed-off-by: Greg Kroah-Hartman <email address hidden>
(cherry picked from commit 407a3aee6ee2d2cb46d9ba3fc380bc29f35d020c)
Signed-off-by: Joseph Salisbury <email address hidden>
Acked-by: Tim Gardner <email address hidden>
Acked-by: Stefan Bader <email address hidden>
Signed-off-by: Luis Henriques <email address hidden>

29eb367... by Nicolas Dichtel

ipv6: correctly add local routes when lo goes up

BugLink: http://bugs.launchpad.net/bugs/1634545

The goal of the patch is to fix this scenario:
 ip link add dummy1 type dummy
 ip link set dummy1 up
 ip link set lo down ; ip link set lo up

After that sequence, the local route to the link layer address of dummy1 is
not there anymore.

When the loopback is set down, all local routes are deleted by
addrconf_ifdown()/rt6_ifdown(). At this time, the rt6_info entry still
exists, because the corresponding idev has a reference on it. After the rcu
grace period, dst_rcu_free() is called, and thus ___dst_free(), which will
set obsolete to DST_OBSOLETE_DEAD.

In this case, init_loopback() is called before dst_rcu_free(), thus
obsolete is still set to something <= 0. So, the function doesn't add the
route again. To avoid that race, let's check the rt6 refcnt instead.

Fixes: 25fb6ca4ed9c ("net IPv6 : Fix broken IPv6 routing table after loopback down-up")
Fixes: a881ae1f625c ("ipv6: don't call addrconf_dst_alloc again when enable lo")
Fixes: 33d99113b110 ("ipv6: reallocate addrconf router for ipv6 address when lo device up")
Reported-by: Francesco Santoro <francesco.santoro@6wind.com>
Reported-by: Samuel Gauthier <samuel.gauthier@6wind.com>
CC: Balakumaran Kannan <email address hidden>
CC: Maruthi Thotad <email address hidden>
CC: Sabrina Dubroca <email address hidden>
CC: Hannes Frederic Sowa <email address hidden>
CC: Weilong Chen <email address hidden>
CC: Gao feng <email address hidden>
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <email address hidden>
(cherry picked from commit a220445f9f4382c36a53d8ef3e08165fa27f7e2c)
Signed-off-by: Joseph Salisbury <email address hidden>
Acked-by: Tim Gardner <email address hidden>
Acked-by: Stefan Bader <email address hidden>
Signed-off-by: Luis Henriques <email address hidden>

4297b07... by Gao feng <email address hidden>

ipv6: reallocate addrconf router for ipv6 address when lo device up

BugLink: http://bugs.launchpad.net/bugs/1634545

commit 25fb6ca4ed9cad72f14f61629b68dc03c0d9713f
"net IPv6 : Fix broken IPv6 routing table after loopback down-up"
allocates an addrconf router for the ipv6 address when the lo device
comes up, but commit a881ae1f625c599b460cc8f8a7fcb1c438f699ad
"ipv6:don't call addrconf_dst_alloc again when enable lo" breaks
this behavior.

Since the addrconf router is moved to the garbage list when the lo
device goes down, we should release this router and reallocate a new
one for the ipv6 address when the lo device comes up.

This patch solves bug 67951 on bugzilla
https://bugzilla.kernel.org/show_bug.cgi?id=67951

Changes from v1:
use ip6_rt_put to replace ip6_del_rt, thanks Hannes!
change code style, suggested by Sergei.

CC: Sabrina Dubroca <email address hidden>
CC: Hannes Frederic Sowa <email address hidden>
Reported-by: Weilong Chen <email address hidden>
Signed-off-by: Weilong Chen <email address hidden>
Signed-off-by: Gao feng <email address hidden>
Acked-by: Hannes Frederic Sowa <email address hidden>
Signed-off-by: David S. Miller <email address hidden>
(cherry picked from commit 33d99113b1102c2d2f8603b9ba72d89d915c13f5)
Signed-off-by: Joseph Salisbury <email address hidden>
Acked-by: Tim Gardner <email address hidden>
Acked-by: Stefan Bader <email address hidden>
Signed-off-by: Luis Henriques <email address hidden>

a7f8a08... by Richard Guy Briggs <email address hidden>

audit: stop an old auditd being starved out by a new auditd

BugLink: http://bugs.launchpad.net/bugs/1633404

Nothing prevents a new auditd starting up and replacing a valid
audit_pid when an old auditd is still running, effectively starving out
the old auditd since audit_pid no longer points to the old valid
auditd.

If no message to auditd has been attempted since auditd died
unnaturally or got killed, audit_pid will still indicate it is alive.
There isn't an easy way to detect if an old auditd is still running on
the existing audit_pid other than attempting to send a message to see
if it fails. An -ECONNREFUSED almost certainly means it disappeared
and can be replaced. Other errors are not so straightforward and may
indicate transient problems that will resolve themselves and the old
auditd will recover. Yet others will likely need manual intervention,
and starting a new auditd will not solve the problem.

Send a new message type (AUDIT_REPLACE) to the old auditd containing a
u32 with the PID of the new auditd. If the audit replace message
succeeds (or doesn't fail with certainty), fail to register the new
auditd and return an error (-EEXIST).
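
A hedged user-space model of that decision (the helper name and the
standalone framing are mine, not the kernel's): only a refused send
lets the new daemon take over.

    #include <errno.h>
    #include <stdio.h>

    /* Model of the registration decision described above: the result
     * of sending AUDIT_REPLACE to the old auditd decides whether the
     * new one may register. */
    static int audit_replace_allowed(int send_err)
    {
        /* -ECONNREFUSED: the old auditd is certainly gone, replace it */
        if (send_err == -ECONNREFUSED)
            return 1;

        /* success or any other error: keep the old auditd, so the new
         * registration fails with -EEXIST */
        return 0;
    }

    int main(void)
    {
        printf("refused -> %d\n", audit_replace_allowed(-ECONNREFUSED));
        printf("success -> %d\n", audit_replace_allowed(0));
        printf("EAGAIN  -> %d\n", audit_replace_allowed(-EAGAIN));
        return 0;
    }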

This is expected to make the patch preventing an old auditd orphaning a
new auditd redundant.

V3: Switch audit message type from 1000 to 1300 block.

Signed-off-by: Richard Guy Briggs <email address hidden>
Signed-off-by: Paul Moore <email address hidden>
(backported from commit 133e1e5acd4a63c4a0dcc413e90d5decdbce9c4a)
Signed-off-by: Joseph Salisbury <email address hidden>
Acked-by: Seth Forshee <email address hidden>
Acked-by: Tim Gardner <email address hidden>
Signed-off-by: Luis Henriques <email address hidden>

a80b498... by Jiri Pirko <email address hidden>

neigh: fix setting of default gc_* values

BugLink: http://bugs.launchpad.net/bugs/1634892

This patch fixes a bug introduced by:
commit 1d4c8c29841b9991cdf3c7cc4ba7f96a94f104ca
"neigh: restore old behaviour of default parms values"

The problem is that in neigh_sysctl_register, extra1 and extra2, which
were previously set for the NEIGH_VAR_GC_* entries, are overwritten.
That leads to nonsensical int limits for the gc_* variables. Fix this
by not touching the extra* fields for the gc_* variables.

Signed-off-by: Jiri Pirko <email address hidden>
Signed-off-by: David S. Miller <email address hidden>
(cherry picked from commit b194c1f1dbd5f2671e49e0ac801b1b78dc7de93b)
Signed-off-by: Joseph Salisbury <email address hidden>
Acked-by: Seth Forshee <email address hidden>
Acked-by: Tim Gardner <email address hidden>
Signed-off-by: Luis Henriques <email address hidden>

26bd136... by Tim Gardner

UBUNTU: [Config] Add nvme to the generic inclusion list

BugLink: http://bugs.launchpad.net/bugs/1640275

Signed-off-by: Tim Gardner <email address hidden>
Acked-by: Colin Ian King <email address hidden>
Acked-by: Robert Hooker <email address hidden>
Signed-off-by: Luis Henriques <email address hidden>