~pearlteam/ubuntu/+source/linux/+git/zesty:pearl-4.10.0-25.29

Last commit made on 2017-06-21
Get this branch:
git clone -b pearl-4.10.0-25.29 https://git.launchpad.net/~pearlteam/ubuntu/+source/linux/+git/zesty
Members of The Pearl Team can upload to this branch.

Branch information

Name:
pearl-4.10.0-25.29
Repository:
lp:~pearlteam/ubuntu/+source/linux/+git/zesty

Recent commits

e0e5d1b... by dann frazier

UBUNTU: Ubuntu-4.10.0-25.29+pearl.1

Signed-off-by: dann frazier <email address hidden>

af944aa... by dann frazier

UBUNTU: d-i: Add hibmc-drm to kernel-image udeb

BugLink: https://bugs.launchpad.net/bugs/1698954

Needed for the installer to appear on the HiSilicon D05 server's graphical
display.

Signed-off-by: dann frazier <email address hidden>
Signed-off-by: Seth Forshee <email address hidden>

c5bd092... by dann frazier

UBUNTU: Start new release

Ignore: yes
Signed-off-by: dann frazier <email address hidden>

ab1f739... by Stefan Bader

UBUNTU: Ubuntu-4.10.0-25.29

Signed-off-by: Stefan Bader <email address hidden>

dfdf58a... by Stefan Bader

UBUNTU: SAUCE: mm: Only expand stack if guard area is hit

This change landed rather late in the process. It may offer a performance
benefit, since it avoids attempting to expand the stack every time it is
touched and instead checks whether the guard area has been reached.

CVE-2017-1000364

Signed-off-by: Stefan Bader <email address hidden>
Acked-by: Thadeu Lima de Souza Cascardo <email address hidden>
Acked-by: Kleber Sacilotto de Souza <email address hidden>
Signed-off-by: Stefan Bader <email address hidden>

154dffc... by DaveM

ipv6: Check ip6_find_1stfragopt() return value properly.

Do not use unsigned variables to see if it returns a negative
error or not.

Fixes: 2423496af35d ("ipv6: Prevent overrun when parsing v6 header options")
Reported-by: Julia Lawall <email address hidden>
Signed-off-by: David S. Miller <email address hidden>

CVE-2017-9074

(cherry picked from commit 7dd7eb9513bd02184d45f000ab69d78cb1fa1531)
Signed-off-by: Po-Hsu Lin <email address hidden>
Acked-by: Stefan Bader <email address hidden>
Acked-by: Colin Ian King <email address hidden>
Signed-off-by: Stefan Bader <email address hidden>

e6bc0dd... by Nate Watterson <email address hidden>

iommu/iova: Fix underflow bug in __alloc_and_insert_iova_range

Normally, calling alloc_iova() using an iova_domain with insufficient
pfns remaining between start_pfn and dma_limit will fail and return a
NULL pointer. Unexpectedly, if such a "full" iova_domain contains an
iova with pfn_lo == 0, the alloc_iova() call will instead succeed and
return an iova containing invalid pfns.

This is caused by an underflow bug in __alloc_and_insert_iova_range()
that occurs after walking the "full" iova tree when the search ends
at the iova with pfn_lo == 0 and limit_pfn is then adjusted to be just
below that (-1). This (now huge) limit_pfn gives the impression that a
vast amount of space is available between it and start_pfn and thus
a new iova is allocated with the invalid pfn_hi value, 0xFFF.... .

To remedy this, a check is introduced to ensure that adjustments to
limit_pfn will not underflow.

This issue has been observed in the wild, and is easily reproduced with
the following sample code.

 struct iova_domain *iovad = kzalloc(sizeof(*iovad), GFP_KERNEL);
 struct iova *rsvd_iova, *good_iova, *bad_iova;
 unsigned long limit_pfn = 3;
 unsigned long start_pfn = 1;
 unsigned long va_size = 2;

 init_iova_domain(iovad, SZ_4K, start_pfn, limit_pfn);
 rsvd_iova = reserve_iova(iovad, 0, 0);
 good_iova = alloc_iova(iovad, va_size, limit_pfn, true);
 bad_iova = alloc_iova(iovad, va_size, limit_pfn, true);

Prior to the patch, this yielded:
 *rsvd_iova == {0, 0} /* Expected */
 *good_iova == {2, 3} /* Expected */
 *bad_iova == {-2, -1} /* Oh no... */

After the patch, bad_iova is NULL as expected since inadequate
space remains between limit_pfn and start_pfn after allocating
good_iova.

BugLink: http://bugs.launchpad.net/bugs/1680549

Signed-off-by: Nate Watterson <email address hidden>
Signed-off-by: Joerg Roedel <email address hidden>
(cherry picked from commit 5016bdb796b3726eec043ca0ce3be981f712c756)
Signed-off-by: Manoj Iyer <email address hidden>
Acked-by: Stefan Bader <email address hidden>
Acked-by: Seth Forshee <email address hidden>
Signed-off-by: Stefan Bader <email address hidden>

71f9e84... by Robin Murphy <email address hidden>

iommu/dma: Plumb in the per-CPU IOVA caches

With IOVA allocation suitably tidied up, we are finally free to opt in
to the per-CPU caching mechanism. The caching alone can provide a modest
improvement over walking the rbtree for weedier systems (iperf3 shows
~10% more ethernet throughput on an ARM Juno r1 constrained to a single
650MHz Cortex-A53), but the real gain will be in sidestepping the rbtree
lock contention which larger ARM-based systems with lots of parallel I/O
are starting to feel the pain of.

BugLink: http://bugs.launchpad.net/bugs/1680549

Reviewed-by: Nate Watterson <email address hidden>
Tested-by: Nate Watterson <email address hidden>
Signed-off-by: Robin Murphy <email address hidden>
Signed-off-by: Joerg Roedel <email address hidden>
(cherry picked from commit bb65a64c7285e7105c1a6c8a33b37770343a4e96)
Signed-off-by: Manoj Iyer <email address hidden>
Acked-by: Stefan Bader <email address hidden>
Acked-by: Seth Forshee <email address hidden>
Signed-off-by: Stefan Bader <email address hidden>

d905fce... by Robin Murphy <email address hidden>

iommu/dma: Clean up MSI IOVA allocation

Now that allocation is suitably abstracted, our private alloc/free
helpers can drive the trivial MSI cookie allocator directly as well,
which lets us clean up its exposed guts from iommu_dma_map_msi_msg() and
simplify things quite a bit.

BugLink: http://bugs.launchpad.net/bugs/1680549

Reviewed-by: Nate Watterson <email address hidden>
Tested-by: Nate Watterson <email address hidden>
Signed-off-by: Robin Murphy <email address hidden>
Signed-off-by: Joerg Roedel <email address hidden>
(cherry picked from commit a44e6657585b15eeebf5681bfcc7ce0b002429c2)
Signed-off-by: Manoj Iyer <email address hidden>
Acked-by: Stefan Bader <email address hidden>
Acked-by: Seth Forshee <email address hidden>
Signed-off-by: Stefan Bader <email address hidden>

5ff7191... by Robin Murphy <email address hidden>

iommu/dma: Convert to address-based allocation

In preparation for some IOVA allocation improvements, clean up all the
explicit struct iova usage such that all our mapping, unmapping and
cleanup paths deal exclusively with addresses rather than implementation
details. In the process, a few of the things we're touching get renamed
for the sake of internal consistency.

BugLink: http://bugs.launchpad.net/bugs/1680549

Reviewed-by: Nate Watterson <email address hidden>
Tested-by: Nate Watterson <email address hidden>
Signed-off-by: Robin Murphy <email address hidden>
Signed-off-by: Joerg Roedel <email address hidden>
(cherry picked from commit 842fe519f68b4d17ba53c66d69f22a72b1ad08cf)
Signed-off-by: Manoj Iyer <email address hidden>
Acked-by: Stefan Bader <email address hidden>
Acked-by: Seth Forshee <email address hidden>
Signed-off-by: Stefan Bader <email address hidden>