Merge ~xnox/ubiquity:zfs-list-cache into ubiquity:main

Proposed by Dimitri John Ledkov
Status: Merged
Merged at revision: 5cf0c7bc3cc5b45b8a0cc3b4bf7b16ac1b7d65cf
Proposed branch: ~xnox/ubiquity:zfs-list-cache
Merge into: ubiquity:main
Diff against target: 81 lines (+30/-14)
3 files modified
debian/changelog (+7/-0)
scripts/zsys-setup (+22/-12)
ubiquity/plugins/ubi-partman.py (+1/-2)
Reviewer                  Review Type  Date Requested  Status
Didier Roche-Tolomelli    community                    Approve
Jean-Baptiste Lallement                                Pending
Ubuntu Installer Team                                  Pending
Review via email: mp+431831@code.launchpad.net

Commit message

* Re-enable zfs encryption
* zsys-setup: generate correct zfs-list.cache for target (LP: #1993318)
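
Background on why the cache matters: on the installed system, OpenZFS's zfs-mount-generator builds systemd mount units from /etc/zfs/zfs-list.cache/<pool>, so a cache that is empty or still carries the installer's /target prefix in the mountpoint column plausibly explains the first-boot breakage tracked in LP: #1993318. A rough sketch of a correct cache line for the target (tab-separated; only the leading name/mountpoint/canmount columns shown, dataset names as in Nick's test below, remaining columns vary by OpenZFS version):

rpool/ROOT/ubuntu_rvbd0s/var/lib    /var/lib    on    ...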

Nick Rosbrook (enr0n) wrote:

I tested this by patching /usr/share/ubiquity/zsys-setup in the live environment, before starting the install. I ran the install as normal from there, and the first boot behaved normally again. Specifically, /var/lib/dpkg/status was populated correctly, apt commands worked, and the Firefox icon was present on the dock.

The output from `zfs list` and `mount` looked sane to me.
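
For anyone reproducing the test, the patch step can be done roughly like this before launching the installer (a sketch, assuming the fixed zsys-setup from this branch has been copied into the live session as ~/zsys-setup; the exact workflow isn't spelled out above):

$ sudo cp ~/zsys-setup /usr/share/ubiquity/zsys-setup
$ sudo chmod 755 /usr/share/ubiquity/zsys-setup
$ ubiquity   # then choose the guided ZFS (+ encryption) layout and install as usual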

Nick Rosbrook (enr0n) wrote:

$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
bpool 136M 1.37G 96K /boot
bpool/BOOT 135M 1.37G 96K none
bpool/BOOT/ubuntu_rvbd0s 135M 1.37G 135M /boot
rpool 5.58G 9.43G 192K /
rpool/ROOT 5.07G 9.43G 192K none
rpool/ROOT/ubuntu_rvbd0s 5.07G 9.43G 3.41G /
rpool/ROOT/ubuntu_rvbd0s/srv 192K 9.43G 192K /srv
rpool/ROOT/ubuntu_rvbd0s/usr 580K 9.43G 192K /usr
rpool/ROOT/ubuntu_rvbd0s/usr/local 388K 9.43G 388K /usr/local
rpool/ROOT/ubuntu_rvbd0s/var 1.66G 9.43G 192K /var
rpool/ROOT/ubuntu_rvbd0s/var/games 192K 9.43G 192K /var/games
rpool/ROOT/ubuntu_rvbd0s/var/lib 1.65G 9.43G 1.52G /var/lib
rpool/ROOT/ubuntu_rvbd0s/var/lib/AccountsService 208K 9.43G 208K /var/lib/AccountsService
rpool/ROOT/ubuntu_rvbd0s/var/lib/NetworkManager 224K 9.43G 224K /var/lib/NetworkManager
rpool/ROOT/ubuntu_rvbd0s/var/lib/apt 82.7M 9.43G 82.7M /var/lib/apt
rpool/ROOT/ubuntu_rvbd0s/var/lib/dpkg 50.2M 9.43G 50.2M /var/lib/dpkg
rpool/ROOT/ubuntu_rvbd0s/var/log 2.14M 9.43G 2.14M /var/log
rpool/ROOT/ubuntu_rvbd0s/var/mail 192K 9.43G 192K /var/mail
rpool/ROOT/ubuntu_rvbd0s/var/snap 2.49M 9.43G 2.49M /var/snap
rpool/ROOT/ubuntu_rvbd0s/var/spool 276K 9.43G 276K /var/spool
rpool/ROOT/ubuntu_rvbd0s/var/www 192K 9.43G 192K /var/www
rpool/USERDATA 5.19M 9.43G 192K /
rpool/USERDATA/root_5hwecp 288K 9.43G 288K /root
rpool/USERDATA/ubuntu_5hwecp 4.72M 9.43G 4.72M /home/ubuntu
rpool/keystore 518M 9.88G 63.4M -

$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=1942688k,nr_inodes=485672,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=401788k,mode=755,inode64)
/dev/mapper/keystore-rpool on /run/keystore/rpool type ext4 (rw,relatime,stripe=2)
rpool/ROOT/ubuntu_rvbd0s on / type zfs (rw,relatime,xattr,posixacl)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,inode64)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
systemd-1 on /...


Didier Roche-Tolomelli (didrocks) wrote:

This looks like the correct fix with current ZFS. It's a shame we need this local workaround, but from what I can see, you then do the proper cleanup and remove the prefix.
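
For illustration, this is roughly what that cleanup does to one cache line (only the first three columns shown; ${TARGET} is /target in a normal ubiquity run, and the fields are tab-separated even though they are rendered as plain whitespace here):

$ printf 'rpool/ROOT/ubuntu_rvbd0s/var/lib\t/target/var/lib\ton\n' | sed -E 's|\t/target/?|\t/|g'
rpool/ROOT/ubuntu_rvbd0s/var/lib    /var/lib    on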

Nick's testing is in line with our expectations. Approving.

review: Approve

Preview Diff

diff --git a/debian/changelog b/debian/changelog
index 7c40ed1..0fb1543 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,10 @@
+ubiquity (22.10.11) UNRELEASED; urgency=medium
+
+  * Re-enable zfs encryption
+  * zsys-setup: generate correct zfs-list.cache for target (LP: #1993318)
+
+ -- Dimitri John Ledkov <dimitri.ledkov@canonical.com>  Wed, 19 Oct 2022 15:29:28 +0100
+
 ubiquity (22.10.10) kinetic; urgency=medium
 
   * Temporarily disable zfs + encryption option (LP: #1993318)
diff --git a/scripts/zsys-setup b/scripts/zsys-setup
index 78df4f3..9298405 100755
--- a/scripts/zsys-setup
+++ b/scripts/zsys-setup
@@ -584,18 +584,6 @@ elif [ "${COMMAND}" = "finalize" ]; then
         exit 1
     fi
 
-    # Activate zfs generator.
-    # After enabling the generator we should run zfs set canmount=on DATASET
-    # in the chroot for one dataset of each pool to refresh the zfs cache.
-    echo "I: Activating zfs generator"
-
-    # Create zpool cache
-    zpool set cachefile= bpool
-    zpool set cachefile= rpool
-    cp /etc/zfs/zpool.cache "${TARGET}/etc/zfs/"
-    mkdir -p "${TARGET}/etc/zfs/zfs-list.cache"
-    touch "${TARGET}/etc/zfs/zfs-list.cache/bpool" "${TARGET}/etc/zfs/zfs-list.cache/rpool"
-
     # Handle userdata
     UUID_ORIG=$(head -100 /dev/urandom | tr -dc 'a-z0-9' |head -c6)
     mkdir -p "${TARGET}/tmp/home"
@@ -618,6 +606,28 @@ elif [ "${COMMAND}" = "finalize" ]; then
         chroot "${TARGET}" update-initramfs -u
     fi
 
+    # Activate zfs generator, after all zfs commands have completed
+    echo "I: Activating zfs generator"
+
+    # Create zpool cache
+    zpool set cachefile= bpool
+    zpool set cachefile= rpool
+    cp /etc/zfs/zpool.cache "${TARGET}/etc/zfs/"
+    mkdir -p "/etc/zfs/zfs-list.cache" "${TARGET}/etc/zfs/zfs-list.cache"
+    for pool in bpool rpool; do
+        # Force cache generation
+        : >"/etc/zfs/zfs-list.cache/${pool}"
+        # Execute zfs-list-cacher with a manual fake event
+        env -i ZEVENT_POOL=${pool} ZED_ZEDLET_DIR=/etc/zfs/zed.d ZEVENT_SUBCLASS=history_event ZFS=zfs ZEVENT_HISTORY_INTERNAL_NAME=create /etc/zfs/zed.d/history_event-zfs-list-cacher.sh
+        # ZFS list doesn't honor target prefix for chroots for
+        # the mountpoint property
+        # https://github.com/openzfs/zfs/issues/1078
+        # Drop leading /target from all mountpoint fields
+        sed -E "s|\t${TARGET}/?|\t/|g" "/etc/zfs/zfs-list.cache/${pool}" > "${TARGET}/etc/zfs/zfs-list.cache/${pool}"
+        # Ensure installer system doesn't generate mount units
+        rm -f "/etc/zfs/zfs-list.cache/${pool}"
+    done
+
     echo "I: ZFS setup complete"
 else
     echo "E: Unknown command: $COMMAND"
diff --git a/ubiquity/plugins/ubi-partman.py b/ubiquity/plugins/ubi-partman.py
index 8059446..b9f3d9e 100644
--- a/ubiquity/plugins/ubi-partman.py
+++ b/ubiquity/plugins/ubi-partman.py
@@ -668,8 +668,7 @@ class PageGtk(PageBase):
         if not widget.get_active():
             return
 
-        # use_volume_manager = self.use_lvm.get_active() or self.use_zfs.get_active()
-        use_volume_manager = self.use_lvm.get_active()
+        use_volume_manager = self.use_lvm.get_active() or self.use_zfs.get_active()
         if not use_volume_manager:
             self.use_crypto.set_active(False)
             self.use_crypto.set_sensitive(use_volume_manager)
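
For completeness, one way to sanity-check the generated cache and the resulting mounts on the installed system (a sketch; var-lib.mount is just the systemd-escaped unit name for the /var/lib mountpoint and depends on the dataset layout):

$ cut -f1,2 /etc/zfs/zfs-list.cache/rpool | head
$ systemctl list-units --type=mount | grep -E 'var-lib|boot'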
