Merge lp:~raharper/curtin/trunk.fix-bcache-multi-bdev-to-cdev-lp1514094 into lp:~curtin-dev/curtin/trunk

Proposed by Ryan Harper
Status: Merged
Merged at revision: 304
Proposed branch: lp:~raharper/curtin/trunk.fix-bcache-multi-bdev-to-cdev-lp1514094
Merge into: lp:~curtin-dev/curtin/trunk
Diff against target: 266 lines (+134/-23)
4 files modified
curtin/commands/block_meta.py (+73/-13)
curtin/commands/curthooks.py (+4/-0)
examples/tests/mdadm_bcache.yaml (+18/-0)
tests/vmtests/test_mdadm_bcache.py (+39/-10)
To merge this branch: bzr merge lp:~raharper/curtin/trunk.fix-bcache-multi-bdev-to-cdev-lp1514094
Reviewer: Scott Moser (status: Pending)
Review via email: mp+277255@code.launchpad.net

Commit message

Allow re-use of bcache cache devices with separate backing devices

Decouple creation of bcache cache and backing devices by dropping the
combined make-bcache -B <> -C <> invocation in favor of separate -B and -C
calls, combining backing devices with caches via the sysfs attach attribute.
Update the mdadm/bcache vmtest to cover multiple bcache backing devices cached by a single cache device.

While validating results, fix a lurking issue where mdadm.conf was not updated in the target after RAID configuration, triggering https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/964052

Description of the change

Refactor how we create bcache caches and backing devices: decoupling them (i.e., not using make-bcache -B -C) allows us to discover previously created backing or cache devices and attach them via the sysfs interface.

Update the mdadm/bcache vmtest to cover multiple bcache backing devices cached by a single cache device.

While validating results, fix a lurking issue where mdadm.conf was not updated in the target after RAID configuration, triggering https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/964052

Lastly, this branch passes make check and make vmtest.
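
For illustration, here is a minimal sketch of the decoupled flow (the helper names are hypothetical and the device names illustrative; udev settling and error handling are omitted — see block_meta.py in the diff below for the real implementation):

import os
import subprocess


def make_cache(cache_device):
    # Format the cache device on its own; make-bcache -C prints a
    # "Set UUID:" line identifying the new cache set.
    out = subprocess.check_output(
        ["make-bcache", "-C", cache_device]).decode()
    return [line.split()[-1] for line in out.splitlines()
            if line.startswith("Set UUID:")][0]


def attach_backing(backing_device, cset_uuid):
    # Format the backing device on its own, then join it to the existing
    # cache set by writing the cset uuid to its sysfs 'attach' attribute.
    subprocess.check_call(["make-bcache", "-B", backing_device])
    attach = "/sys/class/block/{}/bcache/attach".format(
        os.path.basename(backing_device))
    with open(attach, "w") as fp:
        fp.write(cset_uuid)


# One cache device can now serve several backing devices:
#   cset = make_cache("/dev/sda5")
#   attach_backing("/dev/md0", cset)
#   attach_backing("/dev/sda6", cset)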

Revision history for this message
Scott Moser (smoser) wrote :

this looks good other than the one comment i have inline below.
basically, for speed purposes i'm interested in avoiding calling update-initramfs multiple times.

Revision history for this message
Ryan Harper (raharper) :
Revision history for this message
Scott Moser (smoser) wrote :

On Tue, 17 Nov 2015, Ryan Harper wrote:

>
>
> Diff comments:
>
> >
> > === modified file 'curtin/commands/curthooks.py'
> > --- curtin/commands/curthooks.py 2015-10-02 19:59:11 +0000
> > +++ curtin/commands/curthooks.py 2015-11-11 14:47:59 +0000
> > @@ -687,6 +687,10 @@
> > "mdadm.conf")
> > if os.path.exists(mdadm_location):
> > copy_mdadm_conf(mdadm_location, target)
> > + # as per https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/964052
> > + # reconfigure mdadm
> > + util.subp(['chroot', target, 'dpkg-reconfigure',
> > + '--frontend=noninteractive', 'mdadm'], data=None)
>
> Hrm, unclear to me yet if we end up tripping ourselves up with modifications to the target that could potentially stick around.

> Also not clear to me that we won't have issues with installations of packages
> that expect something update-initramfs would have done to the target
> filesystem to be present post-install.

update-initramfs is a focused operation without side effects. If it has
side effects, those are issues with it and we really should raise them.

> I hate to extend the install time; that said, this only hits the mdadm case, where it is really needed.

It already runs so much as it is, and each run is a multi-second operation
with disk I/O and CPU cost.

Can we at least try to make this a call to update_initramfs so that we
explicitly "know" that is what we're doing? That'd mean in the future we
could possibly disable update-initramfs during install and then update
once at the end (just having each update_initramfs call mark that one was
requested and then running at the end if any were).

Scott

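(For illustration only, a minimal sketch of the deferral Scott describes; the helper names are hypothetical and this branch does not implement it:)

import subprocess

# Each call site marks that an initramfs update was requested; a single
# update-initramfs run then happens once at the end of the install.
_update_requested = False


def update_initramfs(target, deferred=True):
    global _update_requested
    if deferred:
        _update_requested = True
        return
    subprocess.check_call(
        ['chroot', target, 'update-initramfs', '-u'])


def flush_initramfs(target):
    # Run once at the end of the install if any update was requested.
    if _update_requested:
        update_initramfs(target, deferred=False)
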
Revision history for this message
Ryan Harper (raharper) wrote :

On Tue, Nov 17, 2015 at 9:21 AM, Scott Moser <email address hidden> wrote:

> On Tue, 17 Nov 2015, Ryan Harper wrote:
>
> >
> >
> > Diff comments:
> >
> > >
> > > === modified file 'curtin/commands/curthooks.py'
> > > --- curtin/commands/curthooks.py 2015-10-02 19:59:11 +0000
> > > +++ curtin/commands/curthooks.py 2015-11-11 14:47:59 +0000
> > > @@ -687,6 +687,10 @@
> > > "mdadm.conf")
> > > if os.path.exists(mdadm_location):
> > > copy_mdadm_conf(mdadm_location, target)
> > > + # as per https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/964052
> > > + # reconfigure mdadm
> > > + util.subp(['chroot', target, 'dpkg-reconfigure',
> > > + '--frontend=noninteractive', 'mdadm'], data=None)
> >
> > Hrm, unclear to me yet if we end up tripping ourselves up with
> > modifications to the target that could potentially stick around.
>
> > Also not clear to me that we won't have issues with installations of
> > packages that expect something update-initramfs would have done to the
> > target filesystem to be present post-install.
>
> update-initramfs is a focused operation without side effects. If it has
> side effects, those are issues with it and we really should raise them.
>

That's fair; I suppose the package would fail to install if it were checking
for post-update-initramfs state, since we don't run that at install time.

>
> > I hate to extend the install time; that said, this only hits the mdadm
> > case, where it is really needed.
>
> It already runs so much as it is, and each run is a multi-second operation
> with disk I/O and CPU cost.
>
> Can we at least try to make this a call to update_initramfs so that we
> explicitly "know" that is what we're doing? That'd mean in the future we
> could possibly disable update-initramfs during install and then update
> once at the end (just having each update_initramfs call mark that one was
> requested and then running at the end if any were).
>

I'm fine with that; however, the documentation surrounding this currently
says that mdadm needs to be reconfigured. If we're fine saying that the
only thing that matters is getting mdadm.conf into the initramfs, so that
on first boot the devices we created at install time (say md0 and md1)
actually come up as md0 and md1, then OK.

I'm open to alternatives, but I don't think we need to search for them
exhaustively while holding up a fix for a critical release issue.

> Scott
>
> --
>
> https://code.launchpad.net/~raharper/curtin/trunk.fix-bcache-multi-bdev-to-cdev-lp1514094/+merge/277255
> You are the owner of
> lp:~raharper/curtin/trunk.fix-bcache-multi-bdev-to-cdev-lp1514094.
>

Preview Diff

=== modified file 'curtin/commands/block_meta.py'
--- curtin/commands/block_meta.py 2015-10-30 13:00:28 +0000
+++ curtin/commands/block_meta.py 2015-11-11 14:47:59 +0000
@@ -31,6 +31,7 @@
 import sys
 import tempfile
 import time
+import re

 SIMPLE = 'simple'
 SIMPLE_BOOT = 'simple-boot'
@@ -155,20 +156,43 @@
     util.subp(cmd, rcs=[0, 1, 2, 5], capture=True)


-def get_holders(devname):
+def block_find_sysfs_path(devname):
+    # Look up any block device holders. Handle devices and partitions
+    # as devnames (vdb, md0, vdb7)
     if not devname:
         return []

-    devname_sysfs = \
-        '/sys/class/block/{}/holders'.format(os.path.basename(devname))
-    if not os.path.exists(devname_sysfs):
-        err = ('No sysfs path to device holders:'
+    sys_class_block = '/sys/class/block/'
+    basename = os.path.basename(devname)
+    # try without parent blockdevice, then prepend parent
+    paths = [
+        os.path.join(sys_class_block, basename),
+        os.path.join(sys_class_block,
+                     re.split('[\d+]', basename)[0], basename),
+    ]
+
+    # find path to devname directory in sysfs
+    devname_sysfs = None
+    for path in paths:
+        if os.path.exists(path):
+            devname_sysfs = path
+
+    if devname_sysfs is None:
+        err = ('No sysfs path to device:'
                ' {}'.format(devname_sysfs))
         LOG.error(err)
         raise ValueError(err)

-    LOG.debug('Getting blockdev holders: {}'.format(devname_sysfs))
-    return os.listdir(devname_sysfs)
+    return devname_sysfs
+
+
+def get_holders(devname):
+    devname_sysfs = block_find_sysfs_path(devname)
+    if devname_sysfs:
+        LOG.debug('Getting blockdev holders: {}'.format(devname_sysfs))
+        return os.listdir(os.path.join(devname_sysfs, 'holders'))
+
+    return []


 def clear_holders(sys_block_path):
@@ -901,17 +925,37 @@
     cache_mode = info.get('cache_mode', None)

     if not backing_device or not cache_device:
-        raise ValueError("backing device and cache device for bcache must be \
-            specified")
+        raise ValueError("backing device and cache device for bcache"
+                         " must be specified")

     # The bcache module is not loaded when bcache is installed by apt-get, so
     # we will load it now
     util.subp(["modprobe", "bcache"])

-    # If both the backing device and cache device are specified at the same
-    # time than it is not necessary to attach the cache device manually, as
-    # bcache will do this automatically.
-    util.subp(["make-bcache", "-B", backing_device, "-C", cache_device])
+    if cache_device:
+        # /sys/class/block/XXX/YYY/
+        cache_device_sysfs = block_find_sysfs_path(cache_device)
+
+        if os.path.exists(os.path.join(cache_device_sysfs, "bcache")):
+            # read in cset uuid from cache device
+            (out, err) = util.subp(["bcache-super-show", cache_device],
+                                   capture=True)
+            LOG.debug('out=[{}]'.format(out))
+            [cset_uuid] = [line.split()[-1] for line in out.split("\n")
+                           if line.startswith('cset.uuid')]
+
+        else:
+            # make the cache device, extracting cacheset uuid
+            (out, err) = util.subp(["make-bcache", "-C", cache_device],
+                                   capture=True)
+            LOG.debug('out=[{}]'.format(out))
+            [cset_uuid] = [line.split()[-1] for line in out.split("\n")
+                           if line.startswith('Set UUID:')]
+
+    if backing_device:
+        backing_device_sysfs = block_find_sysfs_path(backing_device)
+        if not os.path.exists(os.path.join(backing_device_sysfs, "bcache")):
+            util.subp(["make-bcache", "-B", backing_device])

     # Some versions of bcache-tools will register the bcache device as soon as
     # we run make-bcache using udev rules, so wait for udev to settle, then try
@@ -927,6 +971,22 @@
             fp.write(path)
             fp.close()

+    # if we specify both then we need to attach backing to cache
+    if cache_device and backing_device:
+        if cset_uuid:
+            LOG.info("Attaching backing device to cacheset: "
+                     "{} -> {} cset.uuid: {}".format(backing_device,
+                                                     cache_device,
+                                                     cset_uuid))
+            attach = os.path.join(backing_device_sysfs,
+                                  "bcache",
+                                  "attach")
+            with open(attach, "w") as fp:
+                fp.write(cset_uuid)
+        else:
+            LOG.error("Invalid cset_uuid: {}".format(cset_uuid))
+            raise
+
     if cache_mode:
         # find the actual bcache device name via sysfs using the
         # backing device's holders directory.

=== modified file 'curtin/commands/curthooks.py'
--- curtin/commands/curthooks.py 2015-10-02 19:59:11 +0000
+++ curtin/commands/curthooks.py 2015-11-11 14:47:59 +0000
@@ -687,6 +687,10 @@
                                       "mdadm.conf")
         if os.path.exists(mdadm_location):
             copy_mdadm_conf(mdadm_location, target)
+        # as per https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/964052
+        # reconfigure mdadm
+        util.subp(['chroot', target, 'dpkg-reconfigure',
+                   '--frontend=noninteractive', 'mdadm'], data=None)

     # If udev dname rules were created, copy them to target
     udev_rules_d = os.path.join(state['scratch'], "rules.d")

=== modified file 'examples/tests/mdadm_bcache.yaml'
--- examples/tests/mdadm_bcache.yaml 2015-10-29 19:13:25 +0000
+++ examples/tests/mdadm_bcache.yaml 2015-11-11 14:47:59 +0000
@@ -32,6 +32,10 @@
       type: partition
       size: 1GB
       device: sda
+    - id: sda6
+      type: partition
+      size: 1GB
+      device: sda
     - id: mddevice
       name: md0
       type: raid
@@ -47,6 +51,12 @@
       backing_device: mddevice
       cache_device: sda5
       cache_mode: writeback
+    - id: bcache1
+      type: bcache
+      name: cached_array_2
+      backing_device: sda6
+      cache_device: sda5
+      cache_mode: writeback
     - id: sda1_root
       type: format
       fstype: ext4
@@ -55,6 +65,10 @@
       type: format
       fstype: ext4
       volume: bcache0
+    - id: bcache_storage
+      type: format
+      fstype: ext4
+      volume: bcache1
     - id: sda1_mount
       type: mount
       path: /
@@ -63,3 +77,7 @@
       type: mount
       path: /media/data
       device: raid_storage
+    - id: bcache1_mount
+      type: mount
+      path: /media/bcache1
+      device: bcache_storage

=== modified file 'tests/vmtests/test_mdadm_bcache.py'
--- tests/vmtests/test_mdadm_bcache.py 2015-10-30 13:38:36 +0000
+++ tests/vmtests/test_mdadm_bcache.py 2015-11-11 14:47:59 +0000
@@ -44,31 +44,60 @@
                      'main_disk': 5,
                      'main_disk': 6,
                      'md0': 0,
-                     'cached_array': 0}
+                     'cached_array': 0,
+                     'cached_array_2': 0}

     collect_scripts = TestMdadmAbs.collect_scripts + [textwrap.dedent("""
         cd OUTPUT_COLLECT_D
         bcache-super-show /dev/vda6 > bcache_super_vda6
+        bcache-super-show /dev/vda7 > bcache_super_vda7
+        bcache-super-show /dev/md0 > bcache_super_md0
         ls /sys/fs/bcache > bcache_ls
         cat /sys/block/bcache0/bcache/cache_mode > bcache_cache_mode
+        cat /proc/mounts > proc_mounts
         """)]
     fstab_expected = {
-        '/dev/bcache0': '/media/data'
+        '/dev/bcache0': '/media/data',
+        '/dev/bcache1': '/media/bcache1',
     }

     def test_bcache_output_files_exist(self):
-        self.output_files_exist(["bcache_super_vda6", "bcache_ls",
+        self.output_files_exist(["bcache_super_vda6",
+                                 "bcache_super_vda7",
+                                 "bcache_super_md0",
+                                 "bcache_ls",
                                  "bcache_cache_mode"])

     def test_bcache_status(self):
+        bcache_supers = [
+            "bcache_super_vda6",
+            "bcache_super_vda7",
+            "bcache_super_md0",
+        ]
         bcache_cset_uuid = None
-        with open(os.path.join(self.td.mnt, "bcache_super_vda6"), "r") as fp:
-            for line in fp.read().splitlines():
-                if line != "" and line.split()[0] == "cset.uuid":
-                    bcache_cset_uuid = line.split()[-1].rstrip()
-        self.assertIsNotNone(bcache_cset_uuid)
-        with open(os.path.join(self.td.mnt, "bcache_ls"), "r") as fp:
-            self.assertTrue(bcache_cset_uuid in fp.read().splitlines())
+        found = {}
+        for bcache_super in bcache_supers:
+            with open(os.path.join(self.td.mnt, bcache_super), "r") as fp:
+                for line in fp.read().splitlines():
+                    if line != "" and line.split()[0] == "cset.uuid":
+                        bcache_cset_uuid = line.split()[-1].rstrip()
+                        if bcache_cset_uuid in found:
+                            found[bcache_cset_uuid].append(bcache_super)
+                        else:
+                            found[bcache_cset_uuid] = [bcache_super]
+            self.assertIsNotNone(bcache_cset_uuid)
+            with open(os.path.join(self.td.mnt, "bcache_ls"), "r") as fp:
+                self.assertTrue(bcache_cset_uuid in fp.read().splitlines())
+
+        # one cset.uuid for all devices
+        self.assertEqual(len(found), 1)
+
+        # three devices with same cset.uuid
+        self.assertEqual(len(found[bcache_cset_uuid]), 3)
+
+        # check the cset.uuid in the dict
+        self.assertEqual(list(found.keys()).pop(),
+                         bcache_cset_uuid)

     def test_bcache_cachemode(self):
         self.check_file_regex("bcache_cache_mode", r"\[writeback\]")
