apparmor does not start in Disco LXD containers

Bug #1824812 reported by Christian Ehrhardt 
This bug affects 2 people
Affects            Status        Importance  Assigned to        Milestone
AppArmor           Fix Released  Undecided   Unassigned
apparmor (Ubuntu)  Fix Released  High        Jamie Strandboge
libvirt (Ubuntu)   Invalid      Undecided   Unassigned
linux (Ubuntu)     Fix Released  Undecided   Christian Brauner
  Disco            Fix Released  Undecided   Unassigned

Bug Description

In LXD containers, apparmor now skips starting.

Steps to reproduce:
1. start LXD container
  $ lxc launch ubuntu-daily:d d-testapparmor
  (disco to trigger the issue, cosmic as reference)
2. check the default profiles loaded
  $ aa-status

=> In cosmic, and until recently also in disco, this lists plenty of profiles active even in the default install.
Cosmic:
  25 profiles are loaded.
  25 profiles are in enforce mode.
Disco:
  15 profiles are loaded.
  15 profiles are in enforce mode.

All 15 remaining profiles are from snaps.
apparmor.service actually states that it refuses to start.

$ systemctl status apparmor
...
Apr 15 13:56:12 testkvm-disco-to apparmor.systemd[101]: Not starting AppArmor in container

I can get those profiles (the default installed ones) loaded, for example:
  $ sudo apparmor_parser -r /etc/apparmor.d/sbin.dhclient
makes it appear
  22 profiles are in enforce mode.
   /sbin/dhclient

I was wondering because in my case I had found my guest with no (=0) profiles loaded. But as shown above, after "apparmor_parser -r" and after package installs, profiles seemed fine. Then the puzzle was solved: on package install, the maintainer scripts call apparmor_parser via the dh_apparmor snippet, and that works.
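For reference, the snippet dh_apparmor puts into a package's postinst does roughly this (a paraphrased sketch, not the exact generated code; sbin.dhclient is just the profile from the example above):

  # reload the shipped profile if apparmor is enabled (sketch)
  if aa-status --enabled 2>/dev/null; then
      apparmor_parser -r -T -W /etc/apparmor.d/sbin.dhclient || true
  fi

So a package install loads its own profile even though apparmor.service bailed out early.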

To fully disable all of them:
  $ lxc stop <container>
  $ lxc start <container>
  $ lxc exec d-testapparmor aa-status
apparmor module is loaded.
0 profiles are loaded.
0 profiles are in enforce mode.
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

That matches the service doing an early exit, as shown in the systemctl status output above. Package install or manual load works, but none of the profiles are loaded automatically by the service, e.g. on container restart.

--- --- ---

This bug started as:
Migrations to Disco trigger "Unable to find security driver for model apparmor"

This is most likely related to my KVM-in-LXD setup, but that worked fine for years and I'd like to sort out what broke. I have already migrated to Disco's qemu 3.1, which makes me doubt it is a generic issue in qemu 3.1.

The virt tests that run cross-release work fine starting from X/B/C, but all those chains now fail when migrating to Disco with:
  $ lxc exec testkvm-cosmic-from -- virsh migrate --unsafe --live \
      kvmguest-bionic-normal qemu+ssh://10.21.151.207/system
  error: unsupported configuration: Unable to find security driver for model apparmor

I need to analyze what changed.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

In my disco container the guests nowadays really do start without apparmor isolation.
After starting a guest with uvtool I checked what was auto-labelled.

Classic:
  <seclabel type='dynamic' model='apparmor' relabel='yes'>
    <label>libvirt-6400c017-06af-4ef4-a483-93380dae261c</label>
    <imagelabel>libvirt-6400c017-06af-4ef4-a483-93380dae261c</imagelabel>
  </seclabel>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+64055:+115</label>
    <imagelabel>+64055:+115</imagelabel>
  </seclabel>

Disco:
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+64055:+108</label>
    <imagelabel>+64055:+108</imagelabel>
  </seclabel>

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

I had this kind of error a few times in the past; I need to check some conditions in this case ...

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Pure parsing works (this was broken in the past):

$ cat << EOF > /tmp/test.xml
  <domain type='kvm'>
    <name>test-seclabel</name>
    <uuid>12345678-9abc-def1-2345-6789abcdef00</uuid>
    <memory unit='KiB'>1</memory>
    <os><type arch='x86_64'>hvm</type></os>
    <seclabel type='dynamic' model='apparmor' relabel='yes'/>
    <seclabel type='dynamic' model='dac' relabel='yes'/>
  </domain>
EOF
$ /usr/lib/libvirt/virt-aa-helper -d -r \
  -u libvirt-12345678-9abc-def1-2345-6789abcdef00 < /tmp/test.xml

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Apparmor is disabled in LXD containers now!?!
Compare aa-status after spawning a new container.

root@d-testapparmor:~# aa-status
apparmor module is loaded.
15 profiles are loaded.
15 profiles are in enforce mode.
   /snap/core/6673/usr/lib/snapd/snap-confine
   /snap/core/6673/usr/lib/snapd/snap-confine//mount-namespace-capture-helper
   snap-update-ns.core
   snap-update-ns.lxd
   snap.core.hook.configure
   snap.lxd.activate
   snap.lxd.benchmark
   snap.lxd.buginfo
   snap.lxd.check-kernel
   snap.lxd.daemon
   snap.lxd.hook.configure
   snap.lxd.hook.install
   snap.lxd.lxc
   snap.lxd.lxd
   snap.lxd.migrate
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

root@c-testapparmor:~# aa-status
apparmor module is loaded.
25 profiles are loaded.
25 profiles are in enforce mode.
   /sbin/dhclient
   /snap/core/6673/usr/lib/snapd/snap-confine
   /snap/core/6673/usr/lib/snapd/snap-confine//mount-namespace-capture-helper
   /usr/bin/man
   /usr/lib/NetworkManager/nm-dhcp-client.action
   /usr/lib/NetworkManager/nm-dhcp-helper
   /usr/lib/connman/scripts/dhclient-script
   /usr/lib/snapd/snap-confine
   /usr/lib/snapd/snap-confine//mount-namespace-capture-helper
   /usr/sbin/tcpdump
   man_filter
   man_groff
   snap-update-ns.core
   snap-update-ns.lxd
   snap.core.hook.configure
   snap.lxd.activate
   snap.lxd.benchmark
   snap.lxd.buginfo
   snap.lxd.check-kernel
   snap.lxd.daemon
   snap.lxd.hook.configure
   snap.lxd.hook.install
   snap.lxd.lxc
   snap.lxd.lxd
   snap.lxd.migrate
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

That is confirmed by the service:
Apr 15 14:16:21 d-testapparmor systemd[1]: Starting Load AppArmor profiles...
Apr 15 14:16:21 d-testapparmor apparmor.systemd[101]: Not starting AppArmor in container
Apr 15 14:16:21 d-testapparmor systemd[1]: Started Load AppArmor profiles.

summary: - Migrations to Disco trigger "Unable to find security driver for model
- apparmor"
+ apparmor no more starting in Disco LXD containers
description: updated
Revision history for this message
Christian Ehrhardt  (paelzer) wrote : Re: apparmor no more starting in Disco LXD containers

Since I started seeing this in libvirt: there might be reasons it is done this way, but it affects me and probably other use cases, e.g. if I install libvirt:
  $ apt install libvirt-daemon-system
  $ aa-status | grep libvirt

On my test systems the containers do not get any profile loaded:
$ aa-status
apparmor module is loaded.
0 profiles are loaded.
0 profiles are in enforce mode.
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

When testing a new disco container on my laptop, it at least ends up with fewer profiles, but some profiles work. Odd at least.

description: updated
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

In the container that has no profiles at all I can explicitly load them:
 $ sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.libvirtd
 $ systemctl restart libvirtd

makes it show up correctly
  1 processes are in enforce mode.
   /usr/sbin/libvirtd (1146)

But why is it missing in the first place ... ?

description: updated
description: updated
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

In Cosmic /lib/systemd/system/apparmor.service pointed to "/etc/init.d/apparmor start"
This had the following guard, which was not triggered:
                if [ -x /usr/bin/systemd-detect-virt ] && \
                   systemd-detect-virt --quiet --container && \
                   ! is_container_with_internal_policy; then
                        log_daemon_msg "Not starting AppArmor in container"
                        log_end_msg 0
                        exit 0
                fi

The interesting bit here is /lib/apparmor/functions with the function is_container_with_internal_policy

That essentially detected stacked namespaces in LXD and made it continue to work.

In Disco this now uses /lib/apparmor/apparmor.systemd instead.
It still calls is_container_with_internal_policy, which is now only slightly different and lives in /lib/apparmor/rc.apparmor.functions.

We need to track down why this no longer returns true ...

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Adding set -x and calling this directly:

Cosmic:
. /lib/apparmor/functions
is_container_with_internal_policy
+ local ns_stacked_path=/sys/kernel/security/apparmor/.ns_stacked
+ local ns_name_path=/sys/kernel/security/apparmor/.ns_name
+ local ns_stacked
+ local ns_name
+ '[' -f /sys/kernel/security/apparmor/.ns_stacked ']'
+ '[' -f /sys/kernel/security/apparmor/.ns_name ']'
+ read -r ns_stacked
+ '[' yes '!=' yes ']'
+ read -r ns_name
+ '[' 'c-testapparmor_<var-snap-lxd-common-lxd>' = 'lxd-c-testapparmor_<var-snap-lxd-common-lxd>' ']'
+ return 0

Disco:
. /lib/apparmor/rc.apparmor.functions
is_container_with_internal_policy
+ local ns_stacked_path=/.ns_stacked
+ local ns_name_path=/.ns_name
+ local ns_stacked
+ local ns_name
+ '[' -f /.ns_stacked ']'
+ return 1

OK, in my case the env var that is now used, SFS_MOUNTPOINT, is not set.

$ export SFS_MOUNTPOINT=/sys/kernel/security/apparmor/
$ is_container_with_internal_policy
+ is_container_with_internal_policy
+ set -x
+ local ns_stacked_path=/sys/kernel/security/apparmor//.ns_stacked
+ local ns_name_path=/sys/kernel/security/apparmor//.ns_name
+ local ns_stacked
+ local ns_name
+ '[' -f /sys/kernel/security/apparmor//.ns_stacked ']'
+ '[' -f /sys/kernel/security/apparmor//.ns_name ']'
+ read -r ns_stacked
+ '[' yes '!=' yes ']'
+ read -r ns_name
+ '[' 'd-testapparmor_<var-snap-lxd-common-lxd>' = 'lxd-d-testapparmor_<var-snap-lxd-common-lxd>' ']'
+ return 0

Now it works. Could it be that in the init script context this isn't set either?
Yep, that is it:
If I patch in the path, it works again:
 # patch /lib/apparmor/rc.apparmor.functions to have SFS_MOUNTPOINT=/sys/kernel/security/apparmor/
 $ systemctl restart apparmor
 $ aa-status
   # lists all profiles again

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Yep, adding
 Environment=SFS_MOUNTPOINT=/sys/kernel/security/apparmor/
to
 /lib/systemd/system/apparmor.service
fixes the bug.
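An equivalent way to test this without editing the packaged unit is a systemd drop-in (a sketch of standard systemd practice, not something prescribed in this bug):

  $ sudo systemctl edit apparmor.service
  # in the editor that opens, add:
  [Service]
  Environment=SFS_MOUNTPOINT=/sys/kernel/security/apparmor/
  # then:
  $ sudo systemctl restart apparmor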

Revision history for this message
John Johansen (jjohansen) wrote :

Perhaps because of bug 1823379, which broke some code's dynamic detection of apparmor being enabled via /sys/module/apparmor/parameters/enabled?

The fix is working its way through the queue and is currently in proposed.

Revision history for this message
John Johansen (jjohansen) wrote :

Sorry, no. Ignore comment #10

Changed in libvirt (Ubuntu):
status: New → Invalid
Changed in apparmor (Ubuntu):
status: New → Triaged
assignee: nobody → Jamie Strandboge (jdstrand)
importance: Undecided → High
Changed in apparmor:
status: New → Triaged
Revision history for this message
Jamie Strandboge (jdstrand) wrote :

This is due to a bug in the upstream parser's rc.apparmor.functions: SFS_MOUNTPOINT is only set after is_apparmor_loaded() is called, but is_container_with_internal_policy() doesn't call it. /lib/apparmor/apparmor.systemd calls is_container_with_internal_policy() prior to apparmor_start(), and it is only through apparmor_start() that is_apparmor_loaded() is called.
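For illustration, the shape of the fix is to define the variable at the top of the function instead of relying on is_apparmor_loaded() having run first. This is a sketch reconstructed from the set -x traces above and the fix as described, not the exact upstream patch:

  is_container_with_internal_policy() {
      # Previously inherited from is_apparmor_loaded(); set it here so
      # the function also works when called standalone:
      SFS_MOUNTPOINT="${SECURITYFS}/${MODULE}"

      local ns_stacked_path="${SFS_MOUNTPOINT}/.ns_stacked"
      local ns_name_path="${SFS_MOUNTPOINT}/.ns_name"
      local ns_stacked
      local ns_name

      [ -f "$ns_stacked_path" ] && [ -f "$ns_name_path" ] || return 1

      read -r ns_stacked < "$ns_stacked_path"
      [ "$ns_stacked" = "yes" ] || return 1

      # LXD names its policy namespaces "lxd-<container>_<...>" (exactly
      # what the traces above show); only such containers manage their
      # own internal policy.
      read -r ns_name < "$ns_name_path"
      [ "${ns_name#lxd-}" != "$ns_name" ] || return 1

      return 0
  }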

summary: - apparmor no more starting in Disco LXD containers
+ apparmor does not start in Disco LXD containers
Revision history for this message
Jamie Strandboge (jdstrand) wrote :

There are two bugs that are causing trouble for apparmor policy in LXD containers:

1. the rc.apparmor.functions bug (easy fix: define SFS_MOUNTPOINT at the right time)
2. there is some sort of an interaction with the 5.0.0 kernel that is causing problems

I'll give complete instructions on how to reproduce in a moment.

Revision history for this message
Jamie Strandboge (jdstrand) wrote :

The following will reproduce the issue in a disco VM with disco LXD container:

Initial setup:
1. have an up to date disco vm
$ cat /proc/version_signature
Ubuntu 5.0.0-11.12-generic 5.0.6

2. sudo snap install lxd
3. sudo adduser `id -un` lxd
4. newgrp lxd
5. sudo lxd init # use defaults
6. . /etc/profile.d/apps-bin-path.sh

After this note the SFS_MOUNTPOINT bug:
1. lxc launch ubuntu-daily:d d-testapparmor
2. lxc exec d-testapparmor /lib/apparmor/apparmor.systemd reload
3. fix /lib/apparmor/rc.apparmor.functions to define SFS_MOUNTPOINT="${SECURITYFS}/${MODULE}" at the top of is_container_with_internal_policy(), i.e. lxc exec d-testapparmor vi /lib/apparmor/rc.apparmor.functions
4. lxc exec d-testapparmor -- sh -x /lib/apparmor/apparmor.systemd reload # notice apparmor_parser was called

At this point, these were called (as seen from the sh -x output, above):

/sbin/apparmor_parser --write-cache --replace -- /etc/apparmor.d
/sbin/apparmor_parser --write-cache --replace -- /var/lib/snapd/apparmor/profiles

but no profiles were loaded:
$ lxc exec d-testapparmor aa-status

Note the weird parser error when trying to load an individual profile:
$ lxc exec d-testapparmor -- apparmor_parser -r /etc/apparmor.d/sbin.dhclient
AppArmor parser error for /etc/apparmor.d/sbin.dhclient in /etc/apparmor.d/tunables/home at line 25: Could not process include directory '/etc/apparmor.d/tunables/home.d' in 'tunables/home.d'

Stopping and starting the container doesn't help:
$ lxc stop d-testapparmor
$ lxc start d-testapparmor
$ lxc exec d-testapparmor aa-status
apparmor module is loaded.
0 profiles are loaded.
0 profiles are in enforce mode.
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

Note, under 5.0.0-8.9 and with the SFS_MOUNTPOINT fix, the tunables error goes away:
$ lxc exec d-testapparmor -- apparmor_parser -r /etc/apparmor.d/sbin.dhclient
$

and the profiles load on container start:
$ lxc exec d-testapparmor aa-status
apparmor module is loaded.
27 profiles are loaded.
27 profiles are in enforce mode.
   /sbin/dhclient
   /snap/core/6673/usr/lib/snapd/snap-confine
   /snap/core/6673/usr/lib/snapd/snap-confine//mount-namespace-capture-helper
   /usr/bin/man
   /usr/lib/NetworkManager/nm-dhcp-client.action
   /usr/lib/NetworkManager/nm-dhcp-helper
   /usr/lib/connman/scripts/dhclient-script
   /usr/lib/snapd/snap-confine
   /usr/lib/snapd/snap-confine//mount-namespace-capture-helper
   /usr/sbin/tcpdump
   man_filter
   man_groff
   nvidia_modprobe
   nvidia_modprobe//kmod
   snap-update-ns.core
   snap-update-ns.lxd
   snap.core.hook.configure
   snap.lxd.activate
   snap.lxd.benchmark
   snap.lxd.buginfo
   snap.lxd.check-kernel
   snap.lxd.daemon
   snap.lxd.hook.configure
   snap.lxd.hook.install
   snap.lxd.lxc
   snap.lxd.lxd
   snap.lxd.migrate
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

However, 5.0.0-11.12 has fixes for lxd and apparmor. This 11.12 also starts using ...

Read more...

Changed in linux (Ubuntu):
status: New → Confirmed
assignee: nobody → John Johansen (jjohansen)
Revision history for this message
Jamie Strandboge (jdstrand) wrote :

Since the apparmor SFS_MOUNTPOINT change is small, I'll prepare an upload for that immediately. We may need another parser update for the other issue.

Changed in apparmor (Ubuntu):
status: Triaged → In Progress
Revision history for this message
Tyler Hicks (tyhicks) wrote :

I noticed that confinement inside of LXD containers works fine when shiftfs is disabled:

$ sudo rmmod shiftfs
$ sudo mv /lib/modules/5.0.0-11-generic/kernel/fs/shiftfs.ko .
$ sudo systemctl restart snap.lxd.daemon
$ lxc launch ubuntu-daily:d noshift
Creating noshift
Starting noshift

# Now log in to the container and fix the apparmor init script bug
# around SFS_MOUNTPOINT by modifying /lib/apparmor/rc.apparmor.functions
# to define SFS_MOUNTPOINT="${SECURITYFS}/${MODULE}" at the top of
# is_container_with_internal_policy()

$ lxc exec noshift -- sh -x /lib/apparmor/apparmor.systemd reload
$ lxc exec noshift -- aa-status
apparmor module is loaded.
27 profiles are loaded.
27 profiles are in enforce mode.
   /sbin/dhclient
   /snap/core/6673/usr/lib/snapd/snap-confine
   /snap/core/6673/usr/lib/snapd/snap-confine//mount-namespace-capture-helper
   /usr/bin/man
   /usr/lib/NetworkManager/nm-dhcp-client.action
   /usr/lib/NetworkManager/nm-dhcp-helper
   /usr/lib/connman/scripts/dhclient-script
   /usr/lib/snapd/snap-confine
   /usr/lib/snapd/snap-confine//mount-namespace-capture-helper
   /usr/sbin/tcpdump
   man_filter
   man_groff
   nvidia_modprobe
   nvidia_modprobe//kmod
   snap-update-ns.core
   snap-update-ns.lxd
   snap.core.hook.configure
   snap.lxd.activate
   snap.lxd.benchmark
   snap.lxd.buginfo
   snap.lxd.check-kernel
   snap.lxd.daemon
   snap.lxd.hook.configure
   snap.lxd.hook.install
   snap.lxd.lxc
   snap.lxd.lxd
   snap.lxd.migrate
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

Revision history for this message
Jamie Strandboge (jdstrand) wrote :

Uploaded 2.13.2-9ubuntu6 with the SFS_MOUNTPOINT change.

Revision history for this message
Christian Brauner (cbrauner) wrote :

Okay, I think I have a fix for the shiftfs side. Attached here.

Revision history for this message
Tyler Hicks (tyhicks) wrote :

I was able to narrow down this apparmor_parser error to shiftfs:

AppArmor parser error for /etc/apparmor.d/sbin.dhclient in /etc/apparmor.d/tunables/home at line 25: Could not process include directory '/etc/apparmor.d/tunables/home.d' in 'tunables/home.d'

The problem stems from shiftfs not handling this sequence:

 getdents()
  lseek() to reset the f_pos to 0
   getdents()

I'm attaching a test case for this issue, called dir-seek.c.

When run on a non-shiftfs filesystem, you'll see something like this:

 $ ./dir-seek
 PASS: orig_count (29) == new_count (29)

When you run the test case on shiftfs, you'll see something like this:

 $ ./dir-seek
 FAIL: orig_count (29) != new_count (0)

The f_pos of the directory file is not properly tracked/reset on shiftfs.
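The attached dir-seek.c isn't inlined here, but the failing sequence is easy to sketch. The following is a minimal stand-in (my reconstruction, not Tyler's test case): rewinddir() performs the lseek() back to 0, and readdir() issues the getdents() calls underneath:

  /* dir-seek sketch: count directory entries, rewind, count again */
  #include <dirent.h>
  #include <stdio.h>

  static int count_entries(DIR *dir)
  {
      int count = 0;
      while (readdir(dir) != NULL)
          count++;
      return count;
  }

  int main(int argc, char *argv[])
  {
      DIR *dir = opendir(argc > 1 ? argv[1] : ".");
      if (!dir) {
          perror("opendir");
          return 1;
      }
      int orig_count = count_entries(dir); /* first getdents() pass */
      rewinddir(dir);                      /* lseek(fd, 0, SEEK_SET) */
      int new_count = count_entries(dir);  /* second getdents() pass */
      printf("%s: orig_count (%d) %s new_count (%d)\n",
             orig_count == new_count ? "PASS" : "FAIL",
             orig_count, orig_count == new_count ? "==" : "!=",
             new_count);
      closedir(dir);
      return orig_count != new_count;
  }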

Revision history for this message
Tyler Hicks (tyhicks) wrote :

When running a test kernel with Christian's patch, the dir-seek test case passes:

 $ ./dir-seek
 PASS: orig_count (9) == new_count (9)

Unfortunately, I can't be sure that apparmor policy is loaded correctly when creating a new LXD container due to the apparmor portion of this bug report. However, I was able to verify that I can use apparmor_parser as expected and, after manually doing the SFS_MOUNTPOINT fix in the apparmor init script, that policy is loaded during container boot.

Changed in linux (Ubuntu):
assignee: John Johansen (jjohansen) → Christian Brauner (cbrauner)
status: Confirmed → In Progress
Revision history for this message
Ubuntu Foundations Team Bug Bot (crichton) wrote :

The attachment "UBUNTU: SAUCE: shiftfs: use correct llseek method for" seems to be a patch. If it isn't, please remove the "patch" flag from the attachment, remove the "patch" tag, and if you are a member of the ~ubuntu-reviewers, unsubscribe the team.

[This is an automated message performed by a Launchpad user owned by ~brian-murray, for any issues please contact him.]

tags: added: patch
tags: added: shiftfs
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package apparmor - 2.13.2-9ubuntu6

---------------
apparmor (2.13.2-9ubuntu6) disco; urgency=medium

  * lp1824812.patch: set SFS_MOUNTPOINT in is_container_with_internal_policy()
    since it is sometimes called independently of is_apparmor_loaded()
    - LP: #1824812

 -- Jamie Strandboge <email address hidden> Mon, 15 Apr 2019 15:59:54 +0000

Changed in apparmor (Ubuntu):
status: In Progress → Fix Released
no longer affects: libvirt (Ubuntu Disco)
no longer affects: apparmor (Ubuntu Disco)
Changed in linux (Ubuntu Disco):
status: New → Fix Committed
Revision history for this message
Ubuntu Kernel Bot (ubuntu-kernel-bot) wrote :

This bug is awaiting verification that the kernel in -proposed solves the problem. Please test the kernel and update this bug with the results. If the problem is solved, change the tag 'verification-needed-disco' to 'verification-done-disco'. If the problem still exists, change the tag 'verification-needed-disco' to 'verification-failed-disco'.

If verification is not done by 5 working days from today, this fix will be dropped from the source code, and this bug will be closed.

See https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to enable and use -proposed. Thank you!
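(Per that wiki page, enabling -proposed amounts to roughly the following on the host; the sources file name is illustrative:)

  $ echo "deb http://archive.ubuntu.com/ubuntu disco-proposed main universe" | \
      sudo tee /etc/apt/sources.list.d/disco-proposed.list
  $ sudo apt update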

tags: added: verification-needed-disco
Revision history for this message
Ubuntu Kernel Bot (ubuntu-kernel-bot) wrote :

This bug is awaiting verification that the kernel in -proposed solves the problem. Please test the kernel and update this bug with the results. If the problem is solved, change the tag 'verification-needed-bionic' to 'verification-done-bionic'. If the problem still exists, change the tag 'verification-needed-bionic' to 'verification-failed-bionic'.

If verification is not done by 5 working days from today, this fix will be dropped from the source code, and this bug will be closed.

See https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to enable and use -proposed. Thank you!

tags: added: verification-needed-bionic
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

I have not seen/triggered the kernel issue mentioned here (identified by jdstrand), but on request I'll at least try it.

Testing on Disco with the host running 5.0.0-13-generic.

# Create container and trigger the issue:
lxc launch ubuntu-daily:d d-testapparmor
# update the container to not have the bug in apparmor userspace
lxc exec d-testapparmor apt update
lxc exec d-testapparmor apt upgrade
# Check status of AA in the container

Harr, this is not using shiftfs - therefore I can't trigger the bug yet.

Trying to get shiftfs active; it is not loaded yet:
sudo modprobe shiftfs
sudo systemctl restart snap.lxd.daemon
# but in a freshly created container this still comes up empty
lxc exec d-testapparmor -- grep shiftfs /proc/self/mountinfo
<nothing>

Yep, the daemon thinks it is not available:
$ lxc info | grep shiftfs
    shiftfs: "false"

I tried this for a while, but even
 $ sudo snap set lxd shiftfs.enable=true
won't set it to true.
I'm not sure I can verify this one, as I don't know what blocks me from using shiftfs in the first place.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Ordering was important:

$ modprobe shiftfs
$ sudo snap set lxd shiftfs.enable=true
$ sudo systemctl restart snap.lxd.daemon
Now it is enabled:
$ lxc info | grep shiftfs
    shiftfs: "true"
$ lxc exec d-testapparmor -- mount | grep shift
/var/snap/lxd/common/lxd/storage-pools/default2/containers/d-testapparmor/rootfs on / type shiftfs (rw,relatime,passthrough=3)
/var/snap/lxd/common/lxd/storage-pools/default2/containers/d-testapparmor/rootfs on /snap type shiftfs (rw,relatime,passthrough=3)

And with that I can reproduce the bug:

$ lxc exec d-testapparmor -- aa-status
apparmor module is loaded.
0 profiles are loaded.
0 profiles are in enforce mode.
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
$ lxc exec d-testapparmor -- apparmor_parser -r /etc/apparmor.d/sbin.dhclient
AppArmor parser error for /etc/apparmor.d/sbin.dhclient in /etc/apparmor.d/tunables/home at line 25: Could not process include directory '/etc/apparmor.d/tunables/home.d' in 'tunables/home.d'

Installing the host kernel from proposed.
=> 5.0.0.14.15

ubuntu@disco-test-aa-stack:~$ sudo apt install linux-generic linux-headers-generic linux-image-generic
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  linux-headers-5.0.0-14 linux-headers-5.0.0-14-generic linux-image-5.0.0-14-generic linux-modules-5.0.0-14-generic linux-modules-extra-5.0.0-14-generic
Suggested packages:
  fdutils linux-doc-5.0.0 | linux-source-5.0.0 linux-tools
The following NEW packages will be installed:
  linux-headers-5.0.0-14 linux-headers-5.0.0-14-generic linux-image-5.0.0-14-generic linux-modules-5.0.0-14-generic linux-modules-extra-5.0.0-14-generic
The following packages will be upgraded:
  linux-generic linux-headers-generic linux-image-generic
3 upgraded, 5 newly installed, 0 to remove and 8 not upgraded.
Need to get 67.1 MB of archives.
After this operation, 334 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://archive.ubuntu.com/ubuntu disco-proposed/main amd64 linux-modules-5.0.0-14-generic amd64 5.0.0-14.15 [13.7 MB]
Get:2 http://archive.ubuntu.com/ubuntu disco-proposed/main amd64 linux-image-5.0.0-14-generic amd64 5.0.0-14.15 [8350 kB]
Get:3 http://archive.ubuntu.com/ubuntu disco-proposed/main amd64 linux-modules-extra-5.0.0-14-generic amd64 5.0.0-14.15 [33.2 MB]
Get:4 http://archive.ubuntu.com/ubuntu disco-proposed/main amd64 linux-generic amd64 5.0.0.14.15 [1860 B]
Get:5 http://archive.ubuntu.com/ubuntu disco-proposed/main amd64 linux-image-generic amd64 5.0.0.14.15 [2484 B]
Get:6 http://archive.ubuntu.com/ubuntu disco-proposed/main amd64 linux-headers-5.0.0-14 all 5.0.0-14.15 [10.7 MB] ...

Read more...

tags: added: verification-done-disco
removed: verification-needed-disco
Connor Kuehl (connork)
tags: added: verification-done-bionic
removed: verification-needed-bionic
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package linux - 5.0.0-15.16

---------------
linux (5.0.0-15.16) disco; urgency=medium

  * CVE-2019-11683
    - udp: fix GRO reception in case of length mismatch
    - udp: fix GRO packet of death

  * CVE-2018-12126 // CVE-2018-12127 // CVE-2018-12130
    - x86/msr-index: Cleanup bit defines
    - x86/speculation: Consolidate CPU whitelists
    - x86/speculation/mds: Add basic bug infrastructure for MDS
    - x86/speculation/mds: Add BUG_MSBDS_ONLY
    - x86/kvm: Expose X86_FEATURE_MD_CLEAR to guests
    - x86/speculation/mds: Add mds_clear_cpu_buffers()
    - x86/speculation/mds: Clear CPU buffers on exit to user
    - x86/kvm/vmx: Add MDS protection when L1D Flush is not active
    - x86/speculation/mds: Conditionally clear CPU buffers on idle entry
    - x86/speculation/mds: Add mitigation control for MDS
    - x86/speculation/mds: Add sysfs reporting for MDS
    - x86/speculation/mds: Add mitigation mode VMWERV
    - Documentation: Move L1TF to separate directory
    - Documentation: Add MDS vulnerability documentation
    - x86/speculation/mds: Add mds=full,nosmt cmdline option
    - x86/speculation: Move arch_smt_update() call to after mitigation decisions
    - x86/speculation/mds: Add SMT warning message
    - x86/speculation/mds: Fix comment
    - x86/speculation/mds: Print SMT vulnerable on MSBDS with mitigations off
    - x86/speculation/mds: Add 'mitigations=' support for MDS

  * CVE-2017-5715 // CVE-2017-5753
    - s390/speculation: Support 'mitigations=' cmdline option

  * CVE-2017-5715 // CVE-2017-5753 // CVE-2017-5754 // CVE-2018-3639
    - powerpc/speculation: Support 'mitigations=' cmdline option

  * CVE-2017-5715 // CVE-2017-5754 // CVE-2018-3620 // CVE-2018-3639 //
    CVE-2018-3646
    - cpu/speculation: Add 'mitigations=' cmdline option
    - x86/speculation: Support 'mitigations=' cmdline option

  * Packaging resync (LP: #1786013)
    - [Packaging] resync git-ubuntu-log

linux (5.0.0-14.15) disco; urgency=medium

  * linux: 5.0.0-14.15 -proposed tracker (LP: #1826150)

  * [SRU] Please sync vbox modules from virtualbox 6.0.6 on next kernel update
    (LP: #1825210)
    - vbox-update: updates for renamed makefiles
    - ubuntu: vbox -- update to 6.0.6-dfsg-1

  * Intel I210 Ethernet card not working after hotplug [8086:1533]
    (LP: #1818490)
    - igb: Fix WARN_ONCE on runtime suspend

  * [regression][snd_hda_codec_realtek] repeating crackling noise after 19.04
    upgrade (LP: #1821663)
    - ALSA: hda - Add two more machines to the power_save_blacklist

  * CVE-2019-9500
    - brcmfmac: assure SSID length from firmware is limited

  * CVE-2019-9503
    - brcmfmac: add subtype check for event handling in data path

  * CVE-2019-3882
    - vfio/type1: Limit DMA mappings per container

  * autofs kernel module missing (LP: #1824333)
    - [Config] Update autofs4 path in inclusion list

  * The Realtek card reader does not enter PCIe 1.1/1.2 (LP: #1825487)
    - misc: rtsx: Enable OCP for rts522a rts524a rts525a rts5260
    - SAUCE: misc: rtsx: Fixed rts5260 power saving parameter and sd glitch

  * headset-mic doesn't work on two Dell laptops. (LP: #1825272)
    - ALSA: hda/realtek - add...

Read more...

Changed in linux (Ubuntu Disco):
status: Fix Committed → Fix Released
Changed in linux (Ubuntu):
status: In Progress → Fix Released
Revision history for this message
Jamie Strandboge (jdstrand) wrote :

This was fixed upstream in 61c27d8808f0589beb6a319cc04073e8bb32d860

Changed in apparmor:
status: Triaged → Fix Released