Merge lp:~raharper/curtin/trunk.fix-bcache-multi-bdev-to-cdev-lp1514094 into lp:~curtin-dev/curtin/trunk

Proposed by Ryan Harper
Status: Merged
Merged at revision: 304
Proposed branch: lp:~raharper/curtin/trunk.fix-bcache-multi-bdev-to-cdev-lp1514094
Merge into: lp:~curtin-dev/curtin/trunk
Diff against target: 266 lines (+134/-23)
4 files modified
curtin/commands/block_meta.py (+73/-13)
curtin/commands/curthooks.py (+4/-0)
examples/tests/mdadm_bcache.yaml (+18/-0)
tests/vmtests/test_mdadm_bcache.py (+39/-10)
To merge this branch: bzr merge lp:~raharper/curtin/trunk.fix-bcache-multi-bdev-to-cdev-lp1514094
Reviewer: Scott Moser (review requested, status: Pending)
Review via email: mp+277255@code.launchpad.net

Commit message

Allow re-use of bcache cache devices with separate backing devices

Decouple creation of bcache caches and backing devices by dropping
use of make-bcache -B <> -C <>, in favor of single use -B and -C
and combining backing devices with caches via sysfs attach attributes.
Update the mdadm/bcache vmtest to cover multiple bcache backing devices cached by a single cache device.

While validating the results, also fix a lurking issue where mdadm.conf was not updated in the target after RAID configuration, which triggered https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/964052

Description of the change

Refactor how we create bcache caches and backing devices. Decoupling them (i.e., no longer using a single make-bcache -B -C invocation) allows us to discover previously created backing or caching devices and attach them via the sysfs interface.
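
In outline, the new flow looks roughly like this (a simplified sketch of the logic in the diff below, omitting error handling and the udev settle; block_find_sysfs_path is the new helper added in block_meta.py):

    # Sketch: create cache and backing devices separately, then attach
    # the backing device to the cache set via sysfs.
    import os
    from curtin import util

    def make_bcache(backing_device, cache_device):
        # create (or reuse) the cache device and learn its cache-set uuid
        cache_sysfs = block_find_sysfs_path(cache_device)
        if os.path.exists(os.path.join(cache_sysfs, 'bcache')):
            out, _ = util.subp(['bcache-super-show', cache_device],
                               capture=True)
            key = 'cset.uuid'
        else:
            out, _ = util.subp(['make-bcache', '-C', cache_device],
                               capture=True)
            key = 'Set UUID:'
        [cset_uuid] = [line.split()[-1] for line in out.splitlines()
                       if line.startswith(key)]

        # create the backing device separately, then attach it to the set
        util.subp(['make-bcache', '-B', backing_device])
        attach = os.path.join(block_find_sysfs_path(backing_device),
                              'bcache', 'attach')
        with open(attach, 'w') as fp:
            fp.write(cset_uuid)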

Update the mdadm/bcache vmtest to cover multiple bcache backing devices cached by a single cache device.

While validating the results, also fix a lurking issue where mdadm.conf was not updated in the target after RAID configuration, which triggered https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/964052
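
The fix itself is small: after mdadm.conf is copied into the target, re-run mdadm's package configuration so the target's initramfs picks it up (this is the curthooks.py hunk in the diff below):

    # reconfigure mdadm in the target so mdadm.conf lands in the initramfs
    util.subp(['chroot', target, 'dpkg-reconfigure',
               '--frontend=noninteractive', 'mdadm'], data=None)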

Lastly, this branch passes make check and make vmtest.

Revision history for this message
Scott Moser (smoser) wrote :

This looks good other than the one comment I have inline below.
Basically, for speed purposes I'm interested in avoiding calling
update-initramfs multiple times.

Revision history for this message
Ryan Harper (raharper) :
Revision history for this message
Scott Moser (smoser) wrote :

On Tue, 17 Nov 2015, Ryan Harper wrote:

>
>
> Diff comments:
>
> >
> > === modified file 'curtin/commands/curthooks.py'
> > --- curtin/commands/curthooks.py 2015-10-02 19:59:11 +0000
> > +++ curtin/commands/curthooks.py 2015-11-11 14:47:59 +0000
> > @@ -687,6 +687,10 @@
> > "mdadm.conf")
> > if os.path.exists(mdadm_location):
> > copy_mdadm_conf(mdadm_location, target)
> > + # as per https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/964052
> > + # reconfigure mdadm
> > + util.subp(['chroot', target, 'dpkg-reconfigure',
> > + '--frontend=noninteractive', 'mdadm'], data=None)
>
> Hrm, it's unclear to me yet whether we end up tripping ourselves up with modifications to the target that could stick around.

> It's also not clear to me that we won't have issues with installations of packages that expect something update-initramfs does to be present in the target filesystem post-install.

update-initramfs is a focused operation without side effects. If it has
side effects, those are issues with it and we really should raise them.

> I hate to extend the install time, that said, this only hits for mdadm case where it is really needed.

It already runs so much though. And each one is a multi-second operation
with disk io and cpu.

Can we at least try to make this a call to update_initramfs so that we
explicitly "know" that's what we're doing? That'd mean in the future we
could possibly disable update-initramfs during install and then update
once at the end (just having each update_initramfs call mark that one was
requested, and then running it at the end if any were).

Scott
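
(For illustration, the deferral idea might look like the sketch below; none of this exists in curtin today, and the flag path and helper names are made up:)

    # Hypothetical sketch of deferring update-initramfs to end of install.
    import os
    from curtin import util

    UPDATE_REQUESTED = '/run/curtin/initramfs-update-requested'  # made up

    def update_initramfs(target, deferred=True):
        if deferred:
            # just record that an update was requested (assumes the
            # /run/curtin directory exists)
            with open(UPDATE_REQUESTED, 'w') as fp:
                fp.write('1')
            return
        util.subp(['chroot', target, 'update-initramfs', '-u'])

    def finalize_initramfs(target):
        # run a single update at the end of install, if anyone asked
        if os.path.exists(UPDATE_REQUESTED):
            util.subp(['chroot', target, 'update-initramfs', '-u'])
            os.unlink(UPDATE_REQUESTED)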

Revision history for this message
Ryan Harper (raharper) wrote :

On Tue, Nov 17, 2015 at 9:21 AM, Scott Moser <email address hidden> wrote:

> On Tue, 17 Nov 2015, Ryan Harper wrote:
>
> >
> >
> > Diff comments:
> >
> > >
> > > === modified file 'curtin/commands/curthooks.py'
> > > --- curtin/commands/curthooks.py 2015-10-02 19:59:11 +0000
> > > +++ curtin/commands/curthooks.py 2015-11-11 14:47:59 +0000
> > > @@ -687,6 +687,10 @@
> > > "mdadm.conf")
> > > if os.path.exists(mdadm_location):
> > > copy_mdadm_conf(mdadm_location, target)
> > > +        # as per https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/964052
> > > + # reconfigure mdadm
> > > + util.subp(['chroot', target, 'dpkg-reconfigure',
> > > + '--frontend=noninteractive', 'mdadm'], data=None)
> >
> > Hrm, it's unclear to me yet whether we end up tripping ourselves up with
> > modifications to the target that could stick around.
>
> > It's also not clear to me that we won't have issues with installations of
> > packages that expect something update-initramfs does to be present in the
> > target filesystem post-install.
>
> update-initramfs is a focused operation without side effects. If it has
> side effects, those are issues with it and we really should raise them.
>

That's fair; I suppose the package would fail to install if it was checking
for post-update-initramfs state, since we don't run that at install time.

>
> > I hate to extend the install time, that said, this only hits for mdadm
> case where it is really needed.
>
> It already runs so much though. And each one is a multi-second operation
> with disk io and cpu.
>
> Can we at least try to make this a call to update_initramfs so that we
> explicitly "know" that's what we're doing? That'd mean in the future we
> could possibly disable update-initramfs during install and then update
> once at the end (just having each update_initramfs call mark that one was
> requested, and then running it at the end if any were).
>

I'm fine with that; however, the documentation surrounding this currently
says that mdadm needs to be reconfigured. If we're fine saying that the only
thing that matters is getting mdadm.conf into the initramfs, so that the
devices we created at install time (say md0 and md1) actually come up as
md0 and md1 on first boot, then OK.

I'm open to alternatives, but I don't think we need to search exhaustively
for one while holding up a critical release issue.

> Scott

Preview Diff

=== modified file 'curtin/commands/block_meta.py'
--- curtin/commands/block_meta.py 2015-10-30 13:00:28 +0000
+++ curtin/commands/block_meta.py 2015-11-11 14:47:59 +0000
@@ -31,6 +31,7 @@
 import sys
 import tempfile
 import time
+import re
 
 SIMPLE = 'simple'
 SIMPLE_BOOT = 'simple-boot'
@@ -155,20 +156,43 @@
     util.subp(cmd, rcs=[0, 1, 2, 5], capture=True)
 
 
-def get_holders(devname):
+def block_find_sysfs_path(devname):
+    # Look up any block device holders. Handle devices and partitions
+    # as devnames (vdb, md0, vdb7)
     if not devname:
         return []
 
-    devname_sysfs = \
-        '/sys/class/block/{}/holders'.format(os.path.basename(devname))
-    if not os.path.exists(devname_sysfs):
-        err = ('No sysfs path to device holders:'
+    sys_class_block = '/sys/class/block/'
+    basename = os.path.basename(devname)
+    # try without parent blockdevice, then prepend parent
+    paths = [
+        os.path.join(sys_class_block, basename),
+        os.path.join(sys_class_block,
+                     re.split('[\d+]', basename)[0], basename),
+    ]
+
+    # find path to devname directory in sysfs
+    devname_sysfs = None
+    for path in paths:
+        if os.path.exists(path):
+            devname_sysfs = path
+
+    if devname_sysfs is None:
+        err = ('No sysfs path to device:'
               ' {}'.format(devname_sysfs))
        LOG.error(err)
        raise ValueError(err)
 
-    LOG.debug('Getting blockdev holders: {}'.format(devname_sysfs))
-    return os.listdir(devname_sysfs)
+    return devname_sysfs
+
+
+def get_holders(devname):
+    devname_sysfs = block_find_sysfs_path(devname)
+    if devname_sysfs:
+        LOG.debug('Getting blockdev holders: {}'.format(devname_sysfs))
+        return os.listdir(os.path.join(devname_sysfs, 'holders'))
+
+    return []
 
 
 def clear_holders(sys_block_path):
@@ -901,17 +925,37 @@
     cache_mode = info.get('cache_mode', None)
 
     if not backing_device or not cache_device:
-        raise ValueError("backing device and cache device for bcache must be \
-            specified")
+        raise ValueError("backing device and cache device for bcache"
+                         " must be specified")
 
     # The bcache module is not loaded when bcache is installed by apt-get, so
     # we will load it now
     util.subp(["modprobe", "bcache"])
 
-    # If both the backing device and cache device are specified at the same
-    # time than it is not necessary to attach the cache device manually, as
-    # bcache will do this automatically.
-    util.subp(["make-bcache", "-B", backing_device, "-C", cache_device])
+    if cache_device:
+        # /sys/class/block/XXX/YYY/
+        cache_device_sysfs = block_find_sysfs_path(cache_device)
+
+        if os.path.exists(os.path.join(cache_device_sysfs, "bcache")):
+            # read in cset uuid from cache device
+            (out, err) = util.subp(["bcache-super-show", cache_device],
+                                   capture=True)
+            LOG.debug('out=[{}]'.format(out))
+            [cset_uuid] = [line.split()[-1] for line in out.split("\n")
+                           if line.startswith('cset.uuid')]
+
+        else:
+            # make the cache device, extracting cacheset uuid
+            (out, err) = util.subp(["make-bcache", "-C", cache_device],
+                                   capture=True)
+            LOG.debug('out=[{}]'.format(out))
+            [cset_uuid] = [line.split()[-1] for line in out.split("\n")
+                           if line.startswith('Set UUID:')]
+
+    if backing_device:
+        backing_device_sysfs = block_find_sysfs_path(backing_device)
+        if not os.path.exists(os.path.join(backing_device_sysfs, "bcache")):
+            util.subp(["make-bcache", "-B", backing_device])
 
     # Some versions of bcache-tools will register the bcache device as soon as
     # we run make-bcache using udev rules, so wait for udev to settle, then try
@@ -927,6 +971,22 @@
             fp.write(path)
             fp.close()
 
+    # if we specify both then we need to attach backing to cache
+    if cache_device and backing_device:
+        if cset_uuid:
+            LOG.info("Attaching backing device to cacheset: "
+                     "{} -> {} cset.uuid: {}".format(backing_device,
+                                                     cache_device,
+                                                     cset_uuid))
+            attach = os.path.join(backing_device_sysfs,
+                                  "bcache",
+                                  "attach")
+            with open(attach, "w") as fp:
+                fp.write(cset_uuid)
+        else:
+            LOG.error("Invalid cset_uuid: {}".format(cset_uuid))
+            raise
+
     if cache_mode:
         # find the actual bcache device name via sysfs using the
         # backing device's holders directory.
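
(For illustration of the new helper above: it resolves whole disks directly and falls back to the parent device's sysfs directory for partitions. The example values here are hypothetical:)

    # Hypothetical example values:
    block_find_sysfs_path('vdb')   # -> '/sys/class/block/vdb'
    block_find_sysfs_path('vdb7')  # -> '/sys/class/block/vdb7'
                                   #    (or '/sys/class/block/vdb/vdb7')
    get_holders('md0')             # -> ['bcache0'] once attached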
=== modified file 'curtin/commands/curthooks.py'
--- curtin/commands/curthooks.py 2015-10-02 19:59:11 +0000
+++ curtin/commands/curthooks.py 2015-11-11 14:47:59 +0000
@@ -687,6 +687,10 @@
                                   "mdadm.conf")
     if os.path.exists(mdadm_location):
         copy_mdadm_conf(mdadm_location, target)
+        # as per https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/964052
+        # reconfigure mdadm
+        util.subp(['chroot', target, 'dpkg-reconfigure',
+                   '--frontend=noninteractive', 'mdadm'], data=None)
 
     # If udev dname rules were created, copy them to target
     udev_rules_d = os.path.join(state['scratch'], "rules.d")
=== modified file 'examples/tests/mdadm_bcache.yaml'
--- examples/tests/mdadm_bcache.yaml 2015-10-29 19:13:25 +0000
+++ examples/tests/mdadm_bcache.yaml 2015-11-11 14:47:59 +0000
@@ -32,6 +32,10 @@
       type: partition
       size: 1GB
       device: sda
+    - id: sda6
+      type: partition
+      size: 1GB
+      device: sda
     - id: mddevice
       name: md0
       type: raid
@@ -47,6 +51,12 @@
       backing_device: mddevice
       cache_device: sda5
       cache_mode: writeback
+    - id: bcache1
+      type: bcache
+      name: cached_array_2
+      backing_device: sda6
+      cache_device: sda5
+      cache_mode: writeback
     - id: sda1_root
       type: format
       fstype: ext4
@@ -55,6 +65,10 @@
       type: format
       fstype: ext4
       volume: bcache0
+    - id: bcache_storage
+      type: format
+      fstype: ext4
+      volume: bcache1
     - id: sda1_mount
       type: mount
       path: /
@@ -63,3 +77,7 @@
       type: mount
       path: /media/data
       device: raid_storage
+    - id: bcache1_mount
+      type: mount
+      path: /media/bcache1
+      device: bcache_storage
 
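
(Both bcache entries above deliberately share cache_device: sda5, so the two backing devices should end up attached to a single cache set. A quick way to see that pairing from the config itself, assuming PyYAML and the usual storage -> config list layout:)

    # Sketch: group bcache backing devices by cache device in the config.
    import yaml

    with open('examples/tests/mdadm_bcache.yaml') as fp:
        cfg = yaml.safe_load(fp)

    by_cache = {}
    for item in cfg['storage']['config']:
        if item.get('type') == 'bcache':
            by_cache.setdefault(item['cache_device'], []).append(
                item['backing_device'])

    print(by_cache)  # expected: {'sda5': ['mddevice', 'sda6']}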
=== modified file 'tests/vmtests/test_mdadm_bcache.py'
--- tests/vmtests/test_mdadm_bcache.py 2015-10-30 13:38:36 +0000
+++ tests/vmtests/test_mdadm_bcache.py 2015-11-11 14:47:59 +0000
@@ -44,31 +44,60 @@
                       'main_disk': 5,
                       'main_disk': 6,
                       'md0': 0,
-                      'cached_array': 0}
+                      'cached_array': 0,
+                      'cached_array_2': 0}
 
     collect_scripts = TestMdadmAbs.collect_scripts + [textwrap.dedent("""
         cd OUTPUT_COLLECT_D
         bcache-super-show /dev/vda6 > bcache_super_vda6
+        bcache-super-show /dev/vda7 > bcache_super_vda7
+        bcache-super-show /dev/md0 > bcache_super_md0
         ls /sys/fs/bcache > bcache_ls
         cat /sys/block/bcache0/bcache/cache_mode > bcache_cache_mode
+        cat /proc/mounts > proc_mounts
         """)]
     fstab_expected = {
-        '/dev/bcache0': '/media/data'
+        '/dev/bcache0': '/media/data',
+        '/dev/bcache1': '/media/bcache1',
     }
 
     def test_bcache_output_files_exist(self):
-        self.output_files_exist(["bcache_super_vda6", "bcache_ls",
+        self.output_files_exist(["bcache_super_vda6",
+                                 "bcache_super_vda7",
+                                 "bcache_super_md0",
+                                 "bcache_ls",
                                  "bcache_cache_mode"])
 
     def test_bcache_status(self):
+        bcache_supers = [
+            "bcache_super_vda6",
+            "bcache_super_vda7",
+            "bcache_super_md0",
+        ]
         bcache_cset_uuid = None
-        with open(os.path.join(self.td.mnt, "bcache_super_vda6"), "r") as fp:
-            for line in fp.read().splitlines():
-                if line != "" and line.split()[0] == "cset.uuid":
-                    bcache_cset_uuid = line.split()[-1].rstrip()
-        self.assertIsNotNone(bcache_cset_uuid)
-        with open(os.path.join(self.td.mnt, "bcache_ls"), "r") as fp:
-            self.assertTrue(bcache_cset_uuid in fp.read().splitlines())
+        found = {}
+        for bcache_super in bcache_supers:
+            with open(os.path.join(self.td.mnt, bcache_super), "r") as fp:
+                for line in fp.read().splitlines():
+                    if line != "" and line.split()[0] == "cset.uuid":
+                        bcache_cset_uuid = line.split()[-1].rstrip()
+                        if bcache_cset_uuid in found:
+                            found[bcache_cset_uuid].append(bcache_super)
+                        else:
+                            found[bcache_cset_uuid] = [bcache_super]
+        self.assertIsNotNone(bcache_cset_uuid)
+        with open(os.path.join(self.td.mnt, "bcache_ls"), "r") as fp:
+            self.assertTrue(bcache_cset_uuid in fp.read().splitlines())
+
+        # one cset.uuid for all devices
+        self.assertEqual(len(found), 1)
+
+        # three devices with same cset.uuid
+        self.assertEqual(len(found[bcache_cset_uuid]), 3)
+
+        # check the cset.uuid in the dict
+        self.assertEqual(list(found.keys()).pop(),
+                         bcache_cset_uuid)
 
     def test_bcache_cachemode(self):
         self.check_file_regex("bcache_cache_mode", r"\[writeback\]")
