~vicamo/+git/ubuntu-kernel:intel-ipu6/chromeos-5.4/oem-5.6

Last commit made on 2020-09-07
Get this branch:
git clone -b intel-ipu6/chromeos-5.4/oem-5.6 https://git.launchpad.net/~vicamo/+git/ubuntu-kernel
Only You-Sheng Yang can upload to this branch.

Branch information

Name:
intel-ipu6/chromeos-5.4/oem-5.6
Repository:
lp:~vicamo/+git/ubuntu-kernel

Recent commits

0c15d30... by You-Sheng Yang

Bug XXX: Add Intel IPU6 driver

bd61d0f... by You-Sheng Yang

UBUNTU: SAUCE: updateconfigs for IPU6 driver

3c89699... by You-Sheng Yang

UBUNTU: SAUCE: still compile ipu3-cio2

f7b9989... by Qiu, Tianshu

CHROMIUM: media: intel-ipu6: Add IPU6 and IPU6SE drivers

This patch adds Intel IPU6 and IPU6SE drivers.

BUG=b:149068439
BUG=b:149068672
TEST=Sanity checked basic camera functions.

Change-Id: I52139f1f1372d3d16ee2fb7e16ff7304a712a6c1
Signed-off-by: Tianshu Qiu <email address hidden>
Signed-off-by: Bingbu Cao <email address hidden>

a1dbac9... by Alexander Usyskin

mei: bus: don't clean driver pointer

BugLink: https://bugs.launchpad.net/bugs/1893609

There is no need to set the driver pointer to NULL in
mei_cl_device_remove(), the bus_type remove() handler, as this is
done anyway in __device_release_driver().

In fact, clearing it causes an endless loop in driver_detach() on the
Ubuntu-patched kernel while removing (rmmod) the mei_hdcp module:
list_empty(&drv->p->klist_devices.k_list) never becomes true, because
the early-return check in __device_release_driver() always fires:
 if (dev->driver != drv)
  return;

This behavior is triggered by the non-upstream patch titled
'vfio -- release device lock before userspace requests'.

Nevertheless, the fix is also correct for upstream.

Link: https://patchwork.<email address hidden>/
Cc: <email address hidden>
Cc: Andy Whitcroft <email address hidden>
Signed-off-by: Alexander Usyskin <email address hidden>
Signed-off-by: Tomas Winkler <email address hidden>
Link: https://<email address hidden>
Signed-off-by: Greg Kroah-Hartman <email address hidden>
(cherry picked from commit e852c2c251ed9c23ae6e3efebc5ec49adb504207)
Signed-off-by: Aaron Ma <email address hidden>
Signed-off-by: Timo Aaltonen <email address hidden>

0d31ee3... by Timo Aaltonen

UBUNTU: update dkms package versions

BugLink: https://bugs.launchpad.net/bugs/1786013
Signed-off-by: Timo Aaltonen <email address hidden>

789720a... by Alberto Milone

UBUNTU: [packaging] add signed modules for the 450 nvidia driver

The 450 series replaces the 440 series.

BugLink: https://bugs.launchpad.net/bugs/1887674

Signed-off-by: Alberto Milone <email address hidden>
Signed-off-by: Timo Aaltonen <email address hidden>

a652a15... by Alex Williamson <email address hidden>

vfio-pci: Invalidate mmaps and block MMIO access on disabled memory

Accessing the disabled memory space of a PCI device would typically
result in a master abort response on conventional PCI, or an
unsupported request on PCI express. The user would generally see
these as a -1 response for the read return data and the write would be
silently discarded, possibly with an uncorrected, non-fatal AER error
triggered on the host. Some systems however take it upon themselves
to bring down the entire system when they see something that might
indicate a loss of data, such as this discarded write to a disabled
memory space.

To avoid this, we want to try to block the user from accessing memory
spaces while they're disabled. We start with a semaphore around the
memory enable bit, where writers modify the memory enable state and
must be serialized, while readers make use of the memory region and
can access in parallel. Writers include both direct manipulation via
the command register, as well as any reset path where the internal
mechanics of the reset may both explicitly and implicitly disable
memory access, and manipulation of the MSI-X configuration, where the
MSI-X vector table resides in MMIO space of the device. Readers
include the read and write file ops to access the vfio device fd
offsets as well as memory mapped access. In the latter case, we make
use of our new vma list support to zap, or invalidate, those memory
mappings in order to force them to be faulted back in on access.

Our semaphore usage will stall user access to MMIO spaces across
internal operations like reset, but the user might experience new
behavior when trying to access the MMIO space while disabled via the
PCI command register. Access via read or write while disabled will
return -EIO and access via memory maps will result in a SIGBUS. This
is expected to be compatible with known use cases and potentially
provides better error handling capabilities than present in the
hardware, while avoiding the more readily accessible and severe
platform error responses that might otherwise occur.

Fixes: CVE-2020-12888
Reviewed-by: Peter Xu <email address hidden>
Signed-off-by: Alex Williamson <email address hidden>
(cherry picked from commit abafbc551fddede3e0a08dee1dcde08fc0eb8476)
CVE-2020-12888
Signed-off-by: Thadeu Lima de Souza Cascardo <email address hidden>
Signed-off-by: Timo Aaltonen <email address hidden>

6de71a9... by Alex Williamson <email address hidden>

vfio-pci: Fault mmaps to enable vma tracking

Rather than calling remap_pfn_range() when a region is mmap'd, setup
a vm_ops handler to support dynamic faulting of the range on access.
This allows us to manage a list of vmas actively mapping the area that
we can later use to invalidate those mappings. The open callback
invalidates the vma range so that all tracking is inserted in the
fault handler and removed in the close handler.

Reviewed-by: Peter Xu <email address hidden>
Signed-off-by: Alex Williamson <email address hidden>
(backported from commit 11c4cd07ba111a09f49625f9e4c851d83daf0a22)
[cascardo: adjusted context in header]
CVE-2020-12888
Signed-off-by: Thadeu Lima de Souza Cascardo <email address hidden>
Signed-off-by: Timo Aaltonen <email address hidden>

f2bca66... by Alex Williamson <email address hidden>

vfio/type1: Support faulting PFNMAP vmas

With conversion to follow_pfn(), DMA mapping a PFNMAP range depends on
the range being faulted into the vma. Add support to manually provide
that, in the same way as done on KVM with hva_to_pfn_remapped().

Reviewed-by: Peter Xu <email address hidden>
Signed-off-by: Alex Williamson <email address hidden>
(cherry picked from commit 41311242221e3482b20bfed10fa4d9db98d87016)
CVE-2020-12888
Signed-off-by: Thadeu Lima de Souza Cascardo <email address hidden>
Signed-off-by: Timo Aaltonen <email address hidden>