Merge lp:~raharper/ubuntu/xenial/curtin/pkg-sru-revno425 into lp:~smoser/ubuntu/xenial/curtin/pkg

Proposed by Ryan Harper
Status: Merged
Merged at revision: 56
Proposed branch: lp:~raharper/ubuntu/xenial/curtin/pkg-sru-revno425
Merge into: lp:~smoser/ubuntu/xenial/curtin/pkg
Diff against target: 14513 lines (+10324/-1893)
94 files modified
Makefile (+3/-1)
curtin/__init__.py (+4/-0)
curtin/block/__init__.py (+249/-61)
curtin/block/clear_holders.py (+387/-0)
curtin/block/lvm.py (+96/-0)
curtin/block/mdadm.py (+18/-5)
curtin/block/mkfs.py (+10/-5)
curtin/commands/apply_net.py (+156/-1)
curtin/commands/apt_config.py (+668/-0)
curtin/commands/block_info.py (+75/-0)
curtin/commands/block_meta.py (+134/-263)
curtin/commands/block_wipe.py (+1/-2)
curtin/commands/clear_holders.py (+48/-0)
curtin/commands/curthooks.py (+61/-235)
curtin/commands/main.py (+4/-3)
curtin/config.py (+2/-3)
curtin/gpg.py (+74/-0)
curtin/net/__init__.py (+67/-30)
curtin/net/network_state.py (+45/-1)
curtin/util.py (+278/-81)
debian/changelog (+32/-2)
doc/conf.py (+21/-4)
doc/devel/README-vmtest.txt (+0/-152)
doc/devel/README.txt (+0/-55)
doc/devel/clear_holders_doc.txt (+85/-0)
doc/index.rst (+6/-0)
doc/topics/apt_source.rst (+164/-0)
doc/topics/config.rst (+551/-0)
doc/topics/development.rst (+68/-0)
doc/topics/integration-testing.rst (+245/-0)
doc/topics/networking.rst (+522/-0)
doc/topics/overview.rst (+7/-7)
doc/topics/reporting.rst (+3/-3)
doc/topics/storage.rst (+894/-0)
examples/apt-source.yaml (+267/-0)
examples/network-ipv6-bond-vlan.yaml (+56/-0)
examples/tests/apt_config_command.yaml (+85/-0)
examples/tests/apt_source_custom.yaml (+97/-0)
examples/tests/apt_source_modify.yaml (+92/-0)
examples/tests/apt_source_modify_arches.yaml (+102/-0)
examples/tests/apt_source_modify_disable_suite.yaml (+92/-0)
examples/tests/apt_source_preserve.yaml (+98/-0)
examples/tests/apt_source_search.yaml (+97/-0)
examples/tests/basic.yaml (+5/-1)
examples/tests/basic_network_static_ipv6.yaml (+22/-0)
examples/tests/basic_scsi.yaml (+1/-1)
examples/tests/network_alias.yaml (+125/-0)
examples/tests/network_mtu.yaml (+88/-0)
examples/tests/network_source_ipv6.yaml (+31/-0)
examples/tests/test_old_apt_features.yaml (+11/-0)
examples/tests/test_old_apt_features_ports.yaml (+10/-0)
examples/tests/uefi_basic.yaml (+15/-0)
examples/tests/vlan_network_ipv6.yaml (+92/-0)
setup.py (+2/-2)
tests/unittests/helpers.py (+41/-0)
tests/unittests/test_apt_custom_sources_list.py (+170/-0)
tests/unittests/test_apt_source.py (+1032/-0)
tests/unittests/test_block.py (+210/-0)
tests/unittests/test_block_lvm.py (+94/-0)
tests/unittests/test_block_mdadm.py (+28/-23)
tests/unittests/test_block_mkfs.py (+2/-2)
tests/unittests/test_clear_holders.py (+329/-0)
tests/unittests/test_make_dname.py (+200/-0)
tests/unittests/test_net.py (+54/-13)
tests/unittests/test_util.py (+180/-2)
tests/vmtests/__init__.py (+38/-38)
tests/vmtests/helpers.py (+129/-166)
tests/vmtests/test_apt_config_cmd.py (+55/-0)
tests/vmtests/test_apt_source.py (+238/-0)
tests/vmtests/test_basic.py (+21/-41)
tests/vmtests/test_bcache_basic.py (+5/-8)
tests/vmtests/test_bonding.py (+0/-204)
tests/vmtests/test_lvm.py (+2/-1)
tests/vmtests/test_mdadm_bcache.py (+21/-17)
tests/vmtests/test_multipath.py (+5/-13)
tests/vmtests/test_network.py (+205/-348)
tests/vmtests/test_network_alias.py (+40/-0)
tests/vmtests/test_network_bonding.py (+63/-0)
tests/vmtests/test_network_enisource.py (+91/-0)
tests/vmtests/test_network_ipv6.py (+53/-0)
tests/vmtests/test_network_ipv6_enisource.py (+26/-0)
tests/vmtests/test_network_ipv6_static.py (+42/-0)
tests/vmtests/test_network_ipv6_vlan.py (+34/-0)
tests/vmtests/test_network_mtu.py (+155/-0)
tests/vmtests/test_network_static.py (+44/-0)
tests/vmtests/test_network_vlan.py (+77/-0)
tests/vmtests/test_nvme.py (+2/-3)
tests/vmtests/test_old_apt_features.py (+89/-0)
tests/vmtests/test_raid5_bcache.py (+5/-8)
tests/vmtests/test_uefi_basic.py (+16/-18)
tools/jenkins-runner (+33/-7)
tools/launch (+9/-48)
tools/xkvm (+90/-2)
tox.ini (+30/-13)
To merge this branch: bzr merge lp:~raharper/ubuntu/xenial/curtin/pkg-sru-revno425
Reviewer        Review Type    Date Requested    Status
Scott Moser                                      Pending
Review via email: mp+307473@code.launchpad.net

Description of the change

Import new upstream snapshot (revno 425)

New Upstream snapshot:
- unittest,tox.ini: catch and fix issue with trusty-level mock of open
- block/mdadm: add option to ignore mdadm_assemble errors (LP: #1618429)
- curtin/doc: overhaul curtin documentation for readthedocs.org (LP: #1351085)
- curtin.util: re-add support for RunInChroot (LP: #1617375)
- curtin/net: overhaul of eni rendering to handle mixed ipv4/ipv6 configs
- curtin.block: refactor clear_holders logic into block.clear_holders and cli cmd
- curtin.apply_net should exit non-zero upon exception. (LP: #1615780)
- apt: fix bug in disable_suites if sources.list line is blank.
- vmtests: disable Wily in vmtests
- Fix the unittests for test_apt_source.
- get CURTIN_VMTEST_PARALLEL shown correctly in jenkins-runner output
- fix vmtest check_file_strippedline to strip lines before comparing
- fix whitespace damage in tests/vmtests/__init__.py
- fix dpkg-reconfigure when debconf_selections was provided. (LP: #1609614)
- fix apt tests on non-intel arch
- Add apt features to curtin. (LP: #1574113) (see the example config below)
- vmtest: easier use of parallel and controlling timeouts
- mkfs.vfat: add force flag for formatting whole disks (LP: #1597923)
- block.mkfs: fix sectorsize flag (LP: #1597522)
- block_meta: cleanup use of sys_block_path and handle cciss knames (LP: #1562249)
- block.get_blockdev_sector_size: handle _lsblock multi result return (LP: #1598310)
- util: add target (chroot) support to subp, add target_path helper.
- block_meta: fallback to parted if blkid does not produce output (LP: #1524031)
- commands.block_wipe: correct default wipe mode to 'superblock'
- tox.ini: run coverage normally rather than separately
- move uefi boot knowledge from launch and vmtest to xkvm
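
As an example of the apt feature named above, here is a minimal sketch of
driving the new handler directly. The keys follow the fields referenced in
the apt_config.py diff below (preserve_sources_list, sources); the entry
name 'example.list' and the source line are illustrative only, and the
authoritative schema lives in doc/topics/apt_source.rst and
examples/apt-source.yaml in this diff:

    from curtin.commands import apt_config

    cfg = {
        # keep the target's existing sources.list untouched
        'preserve_sources_list': True,
        # each entry becomes an apt source on the target; $MIRROR and
        # $RELEASE are template parameters filled in by handle_apt
        'sources': {
            'example.list': {
                'source': 'deb $MIRROR $RELEASE multiverse',
            },
        },
    }
    # target is the mounted root of the system being installed
    apt_config.handle_apt(cfg, target='/tmp/target')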


Preview Diff

1=== modified file 'Makefile'
2--- Makefile 2016-05-10 16:13:29 +0000
3+++ Makefile 2016-10-03 18:55:20 +0000
4@@ -49,5 +49,7 @@
5 sync-images:
6 @$(CWD)/tools/vmtest-sync-images
7
8+clean:
9+ rm -rf doc/_build
10
11-.PHONY: all test pyflakes pyflakes3 pep8 build
12+.PHONY: all clean test pyflakes pyflakes3 pep8 build
13
14=== modified file 'curtin/__init__.py'
15--- curtin/__init__.py 2015-11-23 16:22:09 +0000
16+++ curtin/__init__.py 2016-10-03 18:55:20 +0000
17@@ -33,6 +33,10 @@
18 'SUBCOMMAND_SYSTEM_INSTALL',
19 # subcommand 'system-upgrade' is present
20 'SUBCOMMAND_SYSTEM_UPGRADE',
21+ # supports new format of apt configuration
22+ 'APT_CONFIG_V1',
23 ]
24
25+__version__ = "0.1.0"
26+
27 # vi: ts=4 expandtab syntax=python
28
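
The APT_CONFIG_V1 flag above lets a consumer such as MAAS probe for the new
apt configuration support before relying on it. A minimal sketch, assuming
the flags are collected in curtin's top-level FEATURES list:

    import curtin

    # 'APT_CONFIG_V1' is appended to the feature list in this diff
    if 'APT_CONFIG_V1' in curtin.FEATURES:
        print('installed curtin understands the new apt config format')
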
29=== modified file 'curtin/block/__init__.py'
30--- curtin/block/__init__.py 2016-10-03 18:00:41 +0000
31+++ curtin/block/__init__.py 2016-10-03 18:55:20 +0000
32@@ -23,21 +23,31 @@
33 import itertools
34
35 from curtin import util
36+from curtin.block import lvm
37+from curtin.log import LOG
38 from curtin.udev import udevadm_settle
39-from curtin.log import LOG
40
41
42 def get_dev_name_entry(devname):
43+ """
44+ convert device name to path in /dev
45+ """
46 bname = devname.split('/dev/')[-1]
47 return (bname, "/dev/" + bname)
48
49
50 def is_valid_device(devname):
51+ """
52+ check if device is a valid device
53+ """
54 devent = get_dev_name_entry(devname)[1]
55 return is_block_device(devent)
56
57
58 def is_block_device(path):
59+ """
60+ check if path is a block device
61+ """
62 try:
63 return stat.S_ISBLK(os.stat(path).st_mode)
64 except OSError as e:
65@@ -47,26 +57,99 @@
66
67
68 def dev_short(devname):
69+ """
70+ get short form of device name
71+ """
72+ devname = os.path.normpath(devname)
73 if os.path.sep in devname:
74 return os.path.basename(devname)
75 return devname
76
77
78 def dev_path(devname):
79+ """
80+ convert device name to path in /dev
81+ """
82 if devname.startswith('/dev/'):
83 return devname
84 else:
85 return '/dev/' + devname
86
87
88+def path_to_kname(path):
89+ """
90+ converts a path in /dev or a path in /sys/block to the device kname,
91+ taking special devices and unusual naming schemes into account
92+ """
93+ # if path given is a link, get real path
94+ # only do this if given a path though, if kname is already specified then
95+ # this would cause a failure where the function should still be able to run
96+ if os.path.sep in path:
97+ path = os.path.realpath(path)
98+ # using basename here ensures that the function will work given a path in
99+ # /dev, a kname, or a path in /sys/block as an arg
100+ dev_kname = os.path.basename(path)
101+ # cciss devices need to have 'cciss!' prepended
102+ if path.startswith('/dev/cciss'):
103+ dev_kname = 'cciss!' + dev_kname
104+ LOG.debug("path_to_kname input: '{}' output: '{}'".format(path, dev_kname))
105+ return dev_kname
106+
107+
108+def kname_to_path(kname):
109+ """
110+ converts a kname to a path in /dev, taking special devices and unusual
111+ naming schemes into account
112+ """
113+ # if given something that is already a dev path, return it
114+ if os.path.exists(kname) and is_valid_device(kname):
115+ path = kname
116+ LOG.debug("kname_to_path input: '{}' output: '{}'".format(kname, path))
117+ return os.path.realpath(path)
118+ # adding '/dev' to path is not sufficient to handle cciss devices and
119+ # possibly other special devices which have not been encountered yet
120+ path = os.path.realpath(os.sep.join(['/dev'] + kname.split('!')))
121+ # make sure path we get is correct
122+ if not (os.path.exists(path) and is_valid_device(path)):
123+ raise OSError('could not get path to dev from kname: {}'.format(kname))
124+ LOG.debug("kname_to_path input: '{}' output: '{}'".format(kname, path))
125+ return path
126+
127+
128+def partition_kname(disk_kname, partition_number):
129+ """
130+ Add number to disk_kname prepending a 'p' if needed
131+ """
132+ for dev_type in ['nvme', 'mmcblk', 'cciss', 'mpath', 'dm']:
133+ if disk_kname.startswith(dev_type):
134+ partition_number = "p%s" % partition_number
135+ break
136+ return "%s%s" % (disk_kname, partition_number)
137+
138+
139+def sysfs_to_devpath(sysfs_path):
140+ """
141+ convert a path in /sys/class/block to a path in /dev
142+ """
143+ path = kname_to_path(path_to_kname(sysfs_path))
144+ if not is_block_device(path):
145+ raise ValueError('could not find blockdev for sys path: {}'
146+ .format(sysfs_path))
147+ return path
148+
149+
150 def sys_block_path(devname, add=None, strict=True):
151+ """
152+ get path to device in /sys/class/block
153+ """
154 toks = ['/sys/class/block']
155 # insert parent dev if devname is partition
156+ devname = os.path.normpath(devname)
157 (parent, partnum) = get_blockdev_for_partition(devname)
158 if partnum:
159- toks.append(dev_short(parent))
160+ toks.append(path_to_kname(parent))
161
162- toks.append(dev_short(devname))
163+ toks.append(path_to_kname(devname))
164
165 if add is not None:
166 toks.append(add)
167@@ -83,6 +166,9 @@
168
169
170 def _lsblock_pairs_to_dict(lines):
171+ """
172+ parse lsblock output and convert to dict
173+ """
174 ret = {}
175 for line in lines.splitlines():
176 toks = shlex.split(line)
177@@ -98,6 +184,9 @@
178
179
180 def _lsblock(args=None):
181+ """
182+ get lsblock data as dict
183+ """
184 # lsblk --help | sed -n '/Available/,/^$/p' |
185 # sed -e 1d -e '$d' -e 's,^[ ]\+,,' -e 's, .*,,' | sort
186 keys = ['ALIGNMENT', 'DISC-ALN', 'DISC-GRAN', 'DISC-MAX', 'DISC-ZERO',
187@@ -120,8 +209,10 @@
188
189
190 def get_unused_blockdev_info():
191- # return a list of unused block devices. These are devices that
192- # do not have anything mounted on them.
193+ """
194+ return a list of unused block devices.
195+ These are devices that do not have anything mounted on them.
196+ """
197
198 # get a list of top level block devices, then iterate over it to get
199 # devices dependent on those. If the lsblk call for that specific
200@@ -137,7 +228,9 @@
201
202
203 def get_devices_for_mp(mountpoint):
204- # return a list of devices (full paths) used by the provided mountpoint
205+ """
206+ return a list of devices (full paths) used by the provided mountpoint
207+ """
208 bdinfo = _lsblock()
209 found = set()
210 for devname, data in bdinfo.items():
211@@ -158,6 +251,9 @@
212
213
214 def get_installable_blockdevs(include_removable=False, min_size=1024**3):
215+ """
216+ find blockdevs suitable for installation
217+ """
218 good = []
219 unused = get_unused_blockdev_info()
220 for devname, data in unused.items():
221@@ -172,21 +268,25 @@
222
223
224 def get_blockdev_for_partition(devpath):
225+ """
226+ find the parent device for a partition.
227+ returns a tuple of the parent block device and the partition number
228+ if device is not a partition, None will be returned for partition number
229+ """
230+ # normalize path
231+ rpath = os.path.realpath(devpath)
232+
233 # convert an entry in /dev/ to parent disk and partition number
234 # if devpath is a block device and not a partition, return (devpath, None)
235-
236- # input of /dev/vdb or /dev/disk/by-label/foo
237- # rpath is hopefully a real-ish path in /dev (vda, sdb..)
238- rpath = os.path.realpath(devpath)
239-
240- bname = os.path.basename(rpath)
241- syspath = "/sys/class/block/%s" % bname
242-
243+ base = '/sys/class/block'
244+
245+ # input of /dev/vdb, /dev/disk/by-label/foo, /sys/block/foo,
246+ # /sys/block/class/foo, or just foo
247+ syspath = os.path.join(base, path_to_kname(devpath))
248+
249+ # don't need to try out multiple sysfs paths as path_to_kname handles cciss
250 if not os.path.exists(syspath):
251- syspath2 = "/sys/class/block/cciss!%s" % bname
252- if not os.path.exists(syspath2):
253- raise ValueError("%s had no syspath (%s)" % (devpath, syspath))
254- syspath = syspath2
255+ raise OSError("%s had no syspath (%s)" % (devpath, syspath))
256
257 ptpath = os.path.join(syspath, "partition")
258 if not os.path.exists(ptpath):
259@@ -207,8 +307,21 @@
260 return (diskdevpath, ptnum)
261
262
263+def get_sysfs_partitions(device):
264+ """
265+ get a list of sysfs paths for partitions under a block device
266+ accepts input as a device kname, sysfs path, or dev path
267+ returns empty list if no partitions available
268+ """
269+ sysfs_path = sys_block_path(device)
270+ return [sys_block_path(kname) for kname in os.listdir(sysfs_path)
271+ if os.path.exists(os.path.join(sysfs_path, kname, 'partition'))]
272+
273+
274 def get_pardevs_on_blockdevs(devs):
275- # return a dict of partitions with their info that are on provided devs
276+ """
277+ return a dict of partitions with their info that are on provided devs
278+ """
279 if devs is None:
280 devs = []
281 devs = [get_dev_name_entry(d)[1] for d in devs]
282@@ -243,7 +356,9 @@
283
284
285 def rescan_block_devices():
286- # run 'blockdev --rereadpt' for all block devices not currently mounted
287+ """
288+ run 'blockdev --rereadpt' for all block devices not currently mounted
289+ """
290 unused = get_unused_blockdev_info()
291 devices = []
292 for devname, data in unused.items():
293@@ -271,6 +386,9 @@
294
295
296 def blkid(devs=None, cache=True):
297+ """
298+ get data about block devices from blkid and convert to dict
299+ """
300 if devs is None:
301 devs = []
302
303@@ -423,7 +541,18 @@
304 """
305 info = _lsblock([devpath])
306 LOG.debug('get_blockdev_sector_size: info:\n%s' % util.json_dumps(info))
307- [parent] = info
308+ # (LP: 1598310) The call to _lsblock() may return multiple results.
309+ # If it does, then search for a result with the correct device path.
310+ # If no such device is found among the results, then fall back to previous
311+ # behavior, which was taking the first of the results
312+ assert len(info) > 0
313+ for (k, v) in info.items():
314+ if v.get('device_path') == devpath:
315+ parent = k
316+ break
317+ else:
318+ parent = list(info.keys())[0]
319+
320 return (int(info[parent]['LOG-SEC']), int(info[parent]['PHY-SEC']))
321
322
323@@ -499,50 +628,108 @@
324 def sysfs_partition_data(blockdev=None, sysfs_path=None):
325 # given block device or sysfs_path, return a list of tuples
326 # of (kernel_name, number, offset, size)
327- if blockdev is None and sysfs_path is None:
328- raise ValueError("Blockdev and sysfs_path cannot both be None")
329-
330 if blockdev:
331+ blockdev = os.path.normpath(blockdev)
332 sysfs_path = sys_block_path(blockdev)
333-
334- ptdata = []
335- # /sys/class/block/dev has entries of 'kname' for each partition
336+ elif sysfs_path:
337+ # use normpath to ensure that paths with trailing slash work
338+ sysfs_path = os.path.normpath(sysfs_path)
339+ blockdev = os.path.join('/dev', os.path.basename(sysfs_path))
340+ else:
341+ raise ValueError("Blockdev and sysfs_path cannot both be None")
342
343 # queue property is only on parent devices, ie, we can't read
344 # /sys/class/block/vda/vda1/queue/* as queue is only on the
345 # parent device
346+ sysfs_prefix = sysfs_path
347 (parent, partnum) = get_blockdev_for_partition(blockdev)
348- sysfs_prefix = sysfs_path
349 if partnum:
350 sysfs_prefix = sys_block_path(parent)
351-
352- block_size = int(util.load_file(os.path.join(sysfs_prefix,
353- 'queue/logical_block_size')))
354-
355- block_size = int(
356- util.load_file(os.path.join(sysfs_path, 'queue/logical_block_size')))
357+ partnum = int(partnum)
358+
359+ block_size = int(util.load_file(os.path.join(
360+ sysfs_prefix, 'queue/logical_block_size')))
361 unit = block_size
362- for d in os.listdir(sysfs_path):
363- partd = os.path.join(sysfs_path, d)
364+
365+ ptdata = []
366+ for part_sysfs in get_sysfs_partitions(sysfs_prefix):
367 data = {}
368 for sfile in ('partition', 'start', 'size'):
369- dfile = os.path.join(partd, sfile)
370+ dfile = os.path.join(part_sysfs, sfile)
371 if not os.path.isfile(dfile):
372 continue
373 data[sfile] = int(util.load_file(dfile))
374- if 'partition' not in data:
375- continue
376- ptdata.append((d, data['partition'], data['start'] * unit,
377- data['size'] * unit,))
378+ if partnum is None or data['partition'] == partnum:
379+ ptdata.append((path_to_kname(part_sysfs), data['partition'],
380+ data['start'] * unit, data['size'] * unit,))
381
382 return ptdata
383
384
385+def get_part_table_type(device):
386+ """
387+ check the type of partition table present on the specified device
388+ returns None if no ptable was present or device could not be read
389+ """
390+ # it is necessary to look for the gpt signature first, then the dos
391+ # signature, because a gpt formatted disk usually has a valid mbr to
392+ # protect the disk from being modified by older partitioning tools
393+ return ('gpt' if check_efi_signature(device) else
394+ 'dos' if check_dos_signature(device) else None)
395+
396+
397+def check_dos_signature(device):
398+ """
399+ check if there is a dos partition table signature present on device
400+ """
401+ # the last 2 bytes of a dos partition table have the signature with the
402+ # value 0xAA55. the dos partition table is always 0x200 bytes long, even if
403+ # the underlying disk uses a larger logical block size, so the start of
404+ # this signature must be at 0x1fe
405+ # https://en.wikipedia.org/wiki/Master_boot_record#Sector_layout
406+ return (is_block_device(device) and util.file_size(device) >= 0x200 and
407+ (util.load_file(device, mode='rb', read_len=2, offset=0x1fe) ==
408+ b'\x55\xAA'))
409+
410+
411+def check_efi_signature(device):
412+ """
413+ check if there is a gpt partition table signature present on device
414+ """
415+ # the gpt partition table header is always on lba 1, regardless of the
416+ # logical block size used by the underlying disk. therefore, a static
417+ # offset cannot be used, the offset to the start of the table header is
418+ # always the sector size of the disk
419+ # the start of the gpt partition table header should have the signature
420+ # 'EFI PART'.
421+ # https://en.wikipedia.org/wiki/GUID_Partition_Table
422+ sector_size = get_blockdev_sector_size(device)[0]
423+ return (is_block_device(device) and
424+ util.file_size(device) >= 2 * sector_size and
425+ (util.load_file(device, mode='rb', read_len=8,
426+ offset=sector_size) == b'EFI PART'))
427+
428+
429+def is_extended_partition(device):
430+ """
431+ check if the specified device path is a dos extended partition
432+ """
433+ # an extended partition must be on a dos disk, must be a partition, must be
434+ # within the first 4 partitions and will have a valid dos signature,
435+ # because the format of the extended partition matches that of a real mbr
436+ (parent_dev, part_number) = get_blockdev_for_partition(device)
437+ return (get_part_table_type(parent_dev) in ['dos', 'msdos'] and
438+ part_number is not None and int(part_number) <= 4 and
439+ check_dos_signature(device))
440+
441+
442 def wipe_file(path, reader=None, buflen=4 * 1024 * 1024):
443- # wipe the existing file at path.
444- # if reader is provided, it will be called as a 'reader(buflen)'
445- # to provide data for each write. Otherwise, zeros are used.
446- # writes will be done in size of buflen.
447+ """
448+ wipe the existing file at path.
449+ if reader is provided, it will be called as a 'reader(buflen)'
450+ to provide data for each write. Otherwise, zeros are used.
451+ writes will be done in size of buflen.
452+ """
453 if reader:
454 readfunc = reader
455 else:
456@@ -551,13 +738,11 @@
457 def readfunc(size):
458 return buf
459
460+ size = util.file_size(path)
461+ LOG.debug("%s is %s bytes. wiping with buflen=%s",
462+ path, size, buflen)
463+
464 with open(path, "rb+") as fp:
465- # get the size by seeking to end.
466- fp.seek(0, 2)
467- size = fp.tell()
468- LOG.debug("%s is %s bytes. wiping with buflen=%s",
469- path, size, buflen)
470- fp.seek(0)
471 while True:
472 pbuf = readfunc(buflen)
473 pos = fp.tell()
474@@ -574,16 +759,18 @@
475
476
477 def quick_zero(path, partitions=True):
478- # zero 1M at front, 1M at end, and 1M at front
479- # if this is a block device and partitions is true, then
480- # zero 1M at front and end of each partition.
481+ """
482+ zero 1M at front, 1M at end, and 1M at front
483+ if this is a block device and partitions is true, then
484+ zero 1M at front and end of each partition.
485+ """
486 buflen = 1024
487 count = 1024
488 zero_size = buflen * count
489 offsets = [0, -zero_size]
490 is_block = is_block_device(path)
491 if not (is_block or os.path.isfile(path)):
492- raise ValueError("%s: not an existing file or block device")
493+ raise ValueError("%s: not an existing file or block device" % path)
494
495 if partitions and is_block:
496 ptdata = sysfs_partition_data(path)
497@@ -596,6 +783,9 @@
498
499
500 def zero_file_at_offsets(path, offsets, buflen=1024, count=1024, strict=False):
501+ """
502+ write zeros to file at specified offsets
503+ """
504 bmsg = "{path} (size={size}): "
505 m_short = bmsg + "{tot} bytes from {offset} > size."
506 m_badoff = bmsg + "invalid offset {offset}."
507@@ -657,15 +847,13 @@
508 if mode == "pvremove":
509 # We need to use --force --force in case it's already in a volgroup and
510 # pvremove doesn't want to remove it
511- cmds = []
512- cmds.append(["pvremove", "--force", "--force", "--yes", path])
513- cmds.append(["pvscan", "--cache"])
514- cmds.append(["vgscan", "--mknodes", "--cache"])
515+
516 # If pvremove is run and there is no label on the system,
517 # then it exits with 5. That is also okay, because we might be
518 # wiping something that is already blank
519- for cmd in cmds:
520- util.subp(cmd, rcs=[0, 5], capture=True)
521+ util.subp(['pvremove', '--force', '--force', '--yes', path],
522+ rcs=[0, 5], capture=True)
523+ lvm.lvm_scan()
524 elif mode == "zero":
525 wipe_file(path)
526 elif mode == "random":
527
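
The kname helpers added to curtin/block/__init__.py above normalize between
/dev paths, sysfs paths and kernel names. A short sketch of the intended
round-trips, based on the functions in this hunk (device names are
illustrative and the /dev lookups require the devices to exist):

    from curtin import block

    # kernel name <-> /dev path, including cciss-style '!' knames
    block.path_to_kname('/dev/cciss/c0d0')   # -> 'cciss!c0d0'
    block.kname_to_path('cciss!c0d0')        # -> '/dev/cciss/c0d0'

    # nvme/mmcblk/cciss/mpath/dm disks get a 'p' partition separator
    block.partition_kname('nvme0n1', 1)      # -> 'nvme0n1p1'
    block.partition_kname('sda', 1)          # -> 'sda1'

    # detect a partition table by its on-disk signature; gpt is checked
    # first since gpt disks usually carry a protective mbr as well
    block.get_part_table_type('/dev/sda')    # -> 'gpt', 'dos' or None
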
528=== added file 'curtin/block/clear_holders.py'
529--- curtin/block/clear_holders.py 1970-01-01 00:00:00 +0000
530+++ curtin/block/clear_holders.py 2016-10-03 18:55:20 +0000
531@@ -0,0 +1,387 @@
532+# Copyright (C) 2016 Canonical Ltd.
533+#
534+# Author: Wesley Wiedenmeier <wesley.wiedenmeier@canonical.com>
535+#
536+# Curtin is free software: you can redistribute it and/or modify it under
537+# the terms of the GNU Affero General Public License as published by the
538+# Free Software Foundation, either version 3 of the License, or (at your
539+# option) any later version.
540+#
541+# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
542+# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
543+# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
544+# more details.
545+#
546+# You should have received a copy of the GNU Affero General Public License
547+# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
548+
549+"""
550+This module provides a mechanism for shutting down virtual storage layers on
551+top of a block device, making it possible to reuse the block device without
552+having to reboot the system
553+"""
554+
555+import os
556+
557+from curtin import (block, udev, util)
558+from curtin.block import lvm
559+from curtin.log import LOG
560+
561+
562+def _define_handlers_registry():
563+ """
564+ returns instantiated dev_types
565+ """
566+ return {
567+ 'partition': {'shutdown': wipe_superblock,
568+ 'ident': identify_partition},
569+ 'lvm': {'shutdown': shutdown_lvm, 'ident': identify_lvm},
570+ 'crypt': {'shutdown': shutdown_crypt, 'ident': identify_crypt},
571+ 'raid': {'shutdown': shutdown_mdadm, 'ident': identify_mdadm},
572+ 'bcache': {'shutdown': shutdown_bcache, 'ident': identify_bcache},
573+ 'disk': {'ident': lambda x: False, 'shutdown': wipe_superblock},
574+ }
575+
576+
577+def get_dmsetup_uuid(device):
578+ """
579+ get the dm uuid for a specified dmsetup device
580+ """
581+ blockdev = block.sysfs_to_devpath(device)
582+ (out, _) = util.subp(['dmsetup', 'info', blockdev, '-C', '-o', 'uuid',
583+ '--noheadings'], capture=True)
584+ return out.strip()
585+
586+
587+def get_bcache_using_dev(device):
588+ """
589+ Get the /sys/fs/bcache/ path of the bcache volume using specified device
590+ """
591+ # FIXME: when block.bcache is written this should be moved there
592+ sysfs_path = block.sys_block_path(device)
593+ return os.path.realpath(os.path.join(sysfs_path, 'bcache', 'cache'))
594+
595+
596+def shutdown_bcache(device):
597+ """
598+ Shut down bcache for specified bcache device
599+ """
600+ bcache_shutdown_message = ('shutdown_bcache running on {} has determined '
601+ 'that the device has already been shut down '
602+ 'during handling of another bcache dev. '
603+ 'skipping'.format(device))
604+ if not os.path.exists(device):
605+ LOG.info(bcache_shutdown_message)
606+ return
607+
608+ bcache_sysfs = get_bcache_using_dev(device)
609+ if not os.path.exists(bcache_sysfs):
610+ LOG.info(bcache_shutdown_message)
611+ return
612+
613+ LOG.debug('stopping bcache at: %s', bcache_sysfs)
614+ util.write_file(os.path.join(bcache_sysfs, 'stop'), '1', mode=None)
615+
616+
617+def shutdown_lvm(device):
618+ """
619+ Shutdown specified lvm device.
620+ """
621+ device = block.sys_block_path(device)
622+ # lvm devices have a dm directory that contains a file 'name' containing
623+ # '{volume group}-{logical volume}'. The volume can be freed using lvremove
624+ name_file = os.path.join(device, 'dm', 'name')
625+ (vg_name, lv_name) = lvm.split_lvm_name(util.load_file(name_file))
626+ # use two --force flags here in case the volume group that this lv is
627+ # attached to has been damaged
628+ LOG.debug('running lvremove on %s/%s', vg_name, lv_name)
629+ util.subp(['lvremove', '--force', '--force',
630+ '{}/{}'.format(vg_name, lv_name)], rcs=[0, 5])
631+ # if that was the last lvol in the volgroup, get rid of volgroup
632+ if len(lvm.get_lvols_in_volgroup(vg_name)) == 0:
633+ util.subp(['vgremove', '--force', '--force', vg_name], rcs=[0, 5])
634+ # refresh lvmetad
635+ lvm.lvm_scan()
636+
637+
638+def shutdown_crypt(device):
639+ """
640+ Shutdown specified cryptsetup device
641+ """
642+ blockdev = block.sysfs_to_devpath(device)
643+ util.subp(['cryptsetup', 'remove', blockdev], capture=True)
644+
645+
646+def shutdown_mdadm(device):
647+ """
648+ Shutdown specified mdadm device.
649+ """
650+ blockdev = block.sysfs_to_devpath(device)
651+ LOG.debug('using mdadm.mdadm_stop on dev: %s', blockdev)
652+ block.mdadm.mdadm_stop(blockdev)
653+ block.mdadm.mdadm_remove(blockdev)
654+
655+
656+def wipe_superblock(device):
657+ """
658+ Wrapper for block.wipe_volume compatible with shutdown function interface
659+ """
660+ blockdev = block.sysfs_to_devpath(device)
661+ # when operating on a disk that used to have a dos part table with an
662+ # extended partition, attempting to wipe the extended partition will fail
663+ if block.is_extended_partition(blockdev):
664+ LOG.info("extended partitions do not need wiping, so skipping: '%s'",
665+ blockdev)
666+ else:
667+ LOG.info('wiping superblock on %s', blockdev)
668+ block.wipe_volume(blockdev, mode='superblock')
669+
670+
671+def identify_lvm(device):
672+ """
673+ determine if specified device is a lvm device
674+ """
675+ return (block.path_to_kname(device).startswith('dm') and
676+ get_dmsetup_uuid(device).startswith('LVM'))
677+
678+
679+def identify_crypt(device):
680+ """
681+ determine if specified device is dm-crypt device
682+ """
683+ return (block.path_to_kname(device).startswith('dm') and
684+ get_dmsetup_uuid(device).startswith('CRYPT'))
685+
686+
687+def identify_mdadm(device):
688+ """
689+ determine if specified device is a mdadm device
690+ """
691+ return block.path_to_kname(device).startswith('md')
692+
693+
694+def identify_bcache(device):
695+ """
696+ determine if specified device is a bcache device
697+ """
698+ return block.path_to_kname(device).startswith('bcache')
699+
700+
701+def identify_partition(device):
702+ """
703+ determine if specified device is a partition
704+ """
705+ path = os.path.join(block.sys_block_path(device), 'partition')
706+ return os.path.exists(path)
707+
708+
709+def get_holders(device):
710+ """
711+ Look up any block device holders, return list of knames
712+ """
713+ # block.sys_block_path works when given a /sys or /dev path
714+ sysfs_path = block.sys_block_path(device)
715+ # get holders
716+ holders = os.listdir(os.path.join(sysfs_path, 'holders'))
717+ LOG.debug("devname '%s' had holders: %s", device, holders)
718+ return holders
719+
720+
721+def gen_holders_tree(device):
722+ """
723+ generate a tree representing the current storage hierarchy above 'device'
724+ """
725+ device = block.sys_block_path(device)
726+ dev_name = block.path_to_kname(device)
727+ # the holders for a device should consist of the devices in the holders/
728+ # dir in sysfs and any partitions on the device. this ensures that a
729+ # storage tree starting from a disk will include all devices holding the
730+ # disk's partitions
731+ holder_paths = ([block.sys_block_path(h) for h in get_holders(device)] +
732+ block.get_sysfs_partitions(device))
733+ # the DEV_TYPE registry contains a function under the key 'ident' for each
734+ # device type entry that returns true if the device passed to it is of the
735+ # correct type. there should never be a situation in which multiple
736+ # identify functions return true. therefore, it will always work to take
737+ # the device type with the first identify function that returns true as the
738+ # device type for the current device. in the event that no identify
739+ # functions return true, the device will be treated as a disk
740+ # (DEFAULT_DEV_TYPE). the identify function for disk never returns true.
741+ # the next() builtin in python will not raise a StopIteration exception if
742+ # there is a default value defined
743+ dev_type = next((k for k, v in DEV_TYPES.items() if v['ident'](device)),
744+ DEFAULT_DEV_TYPE)
745+ return {
746+ 'device': device, 'dev_type': dev_type, 'name': dev_name,
747+ 'holders': [gen_holders_tree(h) for h in holder_paths],
748+ }
749+
750+
751+def plan_shutdown_holder_trees(holders_trees):
752+ """
753+ plan best order to shut down holders in, taking into account high level
754+ storage layers that may have many devices below them
755+
756+ returns a sorted list of descriptions of storage config entries including
757+ their path in /sys/block and their dev type
758+
759+ can accept either a single storage tree or a list of storage trees assumed
760+ to start at an equal place in storage hierarchy (i.e. a list of trees
761+ starting from disk)
762+ """
763+ # holds a temporary registry of holders to allow cross references
764+ # key = device sysfs path, value = {} of priority level, shutdown function
765+ reg = {}
766+
767+ # normalize to list of trees
768+ if not isinstance(holders_trees, (list, tuple)):
769+ holders_trees = [holders_trees]
770+
771+ def flatten_holders_tree(tree, level=0):
772+ """
773+ add entries from holders tree to registry with level key corresponding
774+ to how many layers from raw disks the current device is at
775+ """
776+ device = tree['device']
777+
778+ # always go with highest level if current device has been
779+ # encountered already. since the device and everything above it is
780+ # re-added to the registry it ensures that any increase of level
781+ # required here will propagate down the tree
782+ # this handles a scenario like mdadm + bcache, where the backing
783+ # device for bcache is a 3rd level item like mdadm, but the cache
784+ # device is 1st level (disk) or second level (partition), ensuring
785+ # that the bcache item is always considered higher level than
786+ # anything else regardless of whether it was added to the tree via
787+ # the cache device or backing device first
788+ if device in reg:
789+ level = max(reg[device]['level'], level)
790+
791+ reg[device] = {'level': level, 'device': device,
792+ 'dev_type': tree['dev_type']}
793+
794+ # handle holders above this level
795+ for holder in tree['holders']:
796+ flatten_holders_tree(holder, level=level + 1)
797+
798+ # flatten the holders tree into the registry
799+ for holders_tree in holders_trees:
800+ flatten_holders_tree(holders_tree)
801+
802+ # return list of entry dicts with highest level first
803+ return [reg[k] for k in sorted(reg, key=lambda x: reg[x]['level'] * -1)]
804+
805+
806+def format_holders_tree(holders_tree):
807+ """
808+ draw a nice diagram of the holders tree
809+ """
810+ # spacer styles based on output of 'tree --charset=ascii'
811+ spacers = (('`-- ', ' ' * 4), ('|-- ', '|' + ' ' * 3))
812+
813+ def format_tree(tree):
814+ """
815+ format entry and any subentries
816+ """
817+ result = [tree['name']]
818+ holders = tree['holders']
819+ for (holder_no, holder) in enumerate(holders):
820+ spacer_style = spacers[min(len(holders) - (holder_no + 1), 1)]
821+ subtree_lines = format_tree(holder)
822+ for (line_no, line) in enumerate(subtree_lines):
823+ result.append(spacer_style[min(line_no, 1)] + line)
824+ return result
825+
826+ return '\n'.join(format_tree(holders_tree))
827+
828+
829+def get_holder_types(tree):
830+ """
831+ get flattened list of types of holders in holders tree and the devices
832+ they correspond to
833+ """
834+ types = {(tree['dev_type'], tree['device'])}
835+ for holder in tree['holders']:
836+ types.update(get_holder_types(holder))
837+ return types
838+
839+
840+def assert_clear(base_paths):
841+ """
842+ Check if all paths in base_paths are clear to use
843+ """
844+ valid = ('disk', 'partition')
845+ if not isinstance(base_paths, (list, tuple)):
846+ base_paths = [base_paths]
847+ base_paths = [block.sys_block_path(path) for path in base_paths]
848+ for holders_tree in [gen_holders_tree(p) for p in base_paths]:
849+ if any(holder_type not in valid and path not in base_paths
850+ for (holder_type, path) in get_holder_types(holders_tree)):
851+ raise OSError('Storage not clear, remaining:\n{}'
852+ .format(format_holders_tree(holders_tree)))
853+
854+
855+def clear_holders(base_paths, try_preserve=False):
856+ """
857+ Clear all storage layers depending on the devices specified in 'base_paths'
858+ A single device or list of devices can be specified.
859+ Device paths can be specified either as paths in /dev or /sys/block
860+ Will throw OSError if any holders could not be shut down
861+ """
862+ # handle single path
863+ if not isinstance(base_paths, (list, tuple)):
864+ base_paths = [base_paths]
865+
866+ # get current holders and plan how to shut them down
867+ holder_trees = [gen_holders_tree(path) for path in base_paths]
868+ LOG.info('Current device storage tree:\n%s',
869+ '\n'.join(format_holders_tree(tree) for tree in holder_trees))
870+ ordered_devs = plan_shutdown_holder_trees(holder_trees)
871+
872+ # run shutdown functions
873+ for dev_info in ordered_devs:
874+ dev_type = DEV_TYPES.get(dev_info['dev_type'])
875+ shutdown_function = dev_type.get('shutdown')
876+ if not shutdown_function:
877+ continue
878+ if try_preserve and shutdown_function in DATA_DESTROYING_HANDLERS:
879+ LOG.info('shutdown function for holder type: %s is destructive. '
880+ 'attempting to preserve data, so skipping' %
881+ dev_info['dev_type'])
882+ continue
883+ LOG.info("shutdown running on holder type: '%s' syspath: '%s'",
884+ dev_info['dev_type'], dev_info['device'])
885+ shutdown_function(dev_info['device'])
886+ udev.udevadm_settle()
887+
888+
889+def start_clear_holders_deps():
890+ """
891+ prepare system for clear holders to be able to scan old devices
892+ """
893+ # a mdadm scan has to be started in case there is a md device that needs to
894+ # be detected. if the scan fails, it is either because there are no mdadm
895+ # devices on the system, or because there is a mdadm device in a damaged
896+ # state that could not be started. due to the nature of mdadm tools, it is
897+ # difficult to know which is the case. if any errors did occur, then ignore
898+ # them, since no action needs to be taken if there were no mdadm devices on
899+ # the system, and in the case where there is some mdadm metadata on a disk,
900+ # but there was not enough to start the array, the call to wipe_volume on
901+ # all disks and partitions should be sufficient to remove the mdadm
902+ # metadata
903+ block.mdadm.mdadm_assemble(scan=True, ignore_errors=True)
904+ # the bcache module needs to be present to properly detect bcache devs
905+ # on some systems (precise without hwe kernel) it may not be possible to
906+ # load the bcache module because it is not present in the kernel. if this
907+ # happens then there is no need to halt installation, as the bcache devices
908+ # will never appear and will never prevent the disk from being reformatted
909+ util.subp(['modprobe', 'bcache'], rcs=[0, 1])
910+
911+
912+# anything that is not identified can be assumed to be a 'disk' or similar
913+DEFAULT_DEV_TYPE = 'disk'
914+# handlers that should not be run if an attempt is being made to preserve data
915+DATA_DESTROYING_HANDLERS = [wipe_superblock]
916+# types of devices that could be encountered by clear holders and functions to
917+# identify them and shut them down
918+DEV_TYPES = _define_handlers_registry()
919
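
Taken together, clear_holders lets the installer free a disk that still
carries raid/lvm/bcache layers without rebooting. A minimal sketch of the
intended call sequence, using only functions defined above (the disk path
is illustrative):

    from curtin.block import clear_holders

    disk = '/dev/sdb'
    # assemble mdadm arrays and load bcache so old layers become visible
    clear_holders.start_clear_holders_deps()
    # show the current storage tree, then shut layers down, highest first
    print(clear_holders.format_holders_tree(
        clear_holders.gen_holders_tree(disk)))
    clear_holders.clear_holders(disk)
    # raises OSError if anything is still holding the disk
    clear_holders.assert_clear(disk)
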
920=== added file 'curtin/block/lvm.py'
921--- curtin/block/lvm.py 1970-01-01 00:00:00 +0000
922+++ curtin/block/lvm.py 2016-10-03 18:55:20 +0000
923@@ -0,0 +1,96 @@
924+# Copyright (C) 2016 Canonical Ltd.
925+#
926+# Author: Wesley Wiedenmeier <wesley.wiedenmeier@canonical.com>
927+#
928+# Curtin is free software: you can redistribute it and/or modify it under
929+# the terms of the GNU Affero General Public License as published by the
930+# Free Software Foundation, either version 3 of the License, or (at your
931+# option) any later version.
932+#
933+# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
934+# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
935+# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
936+# more details.
937+#
938+# You should have received a copy of the GNU Affero General Public License
939+# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
940+
941+"""
942+This module provides some helper functions for manipulating lvm devices
943+"""
944+
945+from curtin import util
946+from curtin.log import LOG
947+import os
948+
949+# separator to use for lvm/dm tools
950+_SEP = '='
951+
952+
953+def _filter_lvm_info(lvtool, match_field, query_field, match_key):
954+ """
955+ filter output of pv/vg/lvdisplay tools
956+ """
957+ (out, _) = util.subp([lvtool, '-C', '--separator', _SEP, '--noheadings',
958+ '-o', ','.join([match_field, query_field])],
959+ capture=True)
960+ return [qf for (mf, qf) in
961+ [l.strip().split(_SEP) for l in out.strip().splitlines()]
962+ if mf == match_key]
963+
964+
965+def get_pvols_in_volgroup(vg_name):
966+ """
967+ get physical volumes used by volgroup
968+ """
969+ return _filter_lvm_info('pvdisplay', 'vg_name', 'pv_name', vg_name)
970+
971+
972+def get_lvols_in_volgroup(vg_name):
973+ """
974+ get logical volumes in volgroup
975+ """
976+ return _filter_lvm_info('lvdisplay', 'vg_name', 'lv_name', vg_name)
977+
978+
979+def split_lvm_name(full):
980+ """
981+ split full lvm name into tuple of (volgroup, lv_name)
982+ """
983+ # 'dmsetup splitname' is the authoritative source for lvm name parsing
984+ (out, _) = util.subp(['dmsetup', 'splitname', full, '-c', '--noheadings',
985+ '--separator', _SEP, '-o', 'vg_name,lv_name'],
986+ capture=True)
987+ return out.strip().split(_SEP)
988+
989+
990+def lvmetad_running():
991+ """
992+ check if lvmetad is running
993+ """
994+ return os.path.exists(os.environ.get('LVM_LVMETAD_PIDFILE',
995+ '/run/lvmetad.pid'))
996+
997+
998+def lvm_scan():
999+ """
1000+ run full scan for volgroups, logical volumes and physical volumes
1001+ """
1002+ # the lvm tools lvscan, vgscan and pvscan on ubuntu precise do not
1003+ # support the flag --cache. the flag is present for the tools in ubuntu
1004+ # trusty and later. since lvmetad is used in current releases of
1005+ # ubuntu, the --cache flag is needed to ensure that the data cached by
1006+ # lvmetad is updated.
1007+
1008+ # before appending the cache flag though, check if lvmetad is running. this
1009+ # ensures that we do the right thing even if lvmetad is supported but is
1010+ # not running
1011+ release = util.lsb_release().get('codename')
1012+ if release in [None, 'UNAVAILABLE']:
1013+ LOG.warning('unable to find release number, assuming xenial or later')
1014+ release = 'xenial'
1015+
1016+ for cmd in [['pvscan'], ['vgscan', '--mknodes']]:
1017+ if release != 'precise' and lvmetad_running():
1018+ cmd.append('--cache')
1019+ util.subp(cmd, capture=True)
1020
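
A short sketch of the new lvm helpers (the volume names are illustrative):

    from curtin.block import lvm

    # 'dmsetup splitname' correctly resolves escaped dashes in lv names
    vg_name, lv_name = lvm.split_lvm_name('vg0-lv--root')
    # both helpers return plain lists of names
    lvm.get_lvols_in_volgroup(vg_name)
    lvm.get_pvols_in_volgroup(vg_name)
    # rescan, adding --cache when lvmetad is running (never on precise)
    lvm.lvm_scan()
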
1021=== modified file 'curtin/block/mdadm.py'
1022--- curtin/block/mdadm.py 2016-05-10 16:13:29 +0000
1023+++ curtin/block/mdadm.py 2016-10-03 18:55:20 +0000
1024@@ -28,7 +28,7 @@
1025 from subprocess import CalledProcessError
1026
1027 from curtin.block import (dev_short, dev_path, is_valid_device, sys_block_path)
1028-from curtin import util
1029+from curtin import (util, udev)
1030 from curtin.log import LOG
1031
1032 NOSPARE_RAID_LEVELS = [
1033@@ -117,21 +117,34 @@
1034 #
1035
1036
1037-def mdadm_assemble(md_devname=None, devices=[], spares=[], scan=False):
1038+def mdadm_assemble(md_devname=None, devices=[], spares=[], scan=False,
1039+ ignore_errors=False):
1040 # md_devname is a /dev/XXXX
1041 # devices is non-empty list of /dev/xxx
1042 # if spares is a non-empty list of /dev/xxx, append it
1043 cmd = ["mdadm", "--assemble"]
1044 if scan:
1045- cmd += ['--scan']
1046+ cmd += ['--scan', '-v']
1047 else:
1048 valid_mdname(md_devname)
1049 cmd += [md_devname, "--run"] + devices
1050 if spares:
1051 cmd += spares
1052
1053- util.subp(cmd, capture=True, rcs=[0, 1, 2])
1054- util.subp(["udevadm", "settle"])
1055+ try:
1056+ # mdadm assemble returns 1 when no arrays are found. this might not be
1057+ # an error depending on the situation this function was called in, so
1058+ # accept a return code of 1
1059+ # mdadm assemble returns 2 when called on an array that is already
1060+ # assembled. this is not an error, so accept return code of 2
1061+ # all other return codes can be accepted with ignore_error set to true
1062+ util.subp(cmd, capture=True, rcs=[0, 1, 2])
1063+ except util.ProcessExecutionError:
1064+ LOG.warning("mdadm_assemble had unexpected return code")
1065+ if not ignore_errors:
1066+ raise
1067+
1068+ udev.udevadm_settle()
1069
1070
1071 def mdadm_create(md_devname, raidlevel, devices, spares=None, md_name=""):
1072
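
With ignore_errors set, a scan that finds no arrays, or only a damaged one,
no longer aborts the caller; a minimal sketch of the call clear_holders
makes:

    from curtin.block import mdadm

    # return codes 0/1/2 are accepted as before; any other mdadm failure
    # is logged and swallowed instead of raising ProcessExecutionError
    mdadm.mdadm_assemble(scan=True, ignore_errors=True)
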
1073=== modified file 'curtin/block/mkfs.py'
1074--- curtin/block/mkfs.py 2016-05-10 16:13:29 +0000
1075+++ curtin/block/mkfs.py 2016-10-03 18:55:20 +0000
1076@@ -78,6 +78,7 @@
1077 "swap": "--uuid"},
1078 "force": {"btrfs": "--force",
1079 "ext": "-F",
1080+ "fat": "-I",
1081 "ntfs": "--force",
1082 "reiserfs": "-f",
1083 "swap": "--force",
1084@@ -91,6 +92,7 @@
1085 "btrfs": "--sectorsize",
1086 "ext": "-b",
1087 "fat": "-S",
1088+ "xfs": "-s",
1089 "ntfs": "--sector-size",
1090 "reiserfs": "--block-size"}
1091 }
1092@@ -165,12 +167,15 @@
1093 # use device logical block size to ensure properly formatted filesystems
1094 (logical_bsize, physical_bsize) = block.get_blockdev_sector_size(path)
1095 if logical_bsize > 512:
1096+ lbs_str = ('size={}'.format(logical_bsize) if fs_family == "xfs"
1097+ else str(logical_bsize))
1098 cmd.extend(get_flag_mapping("sectorsize", fs_family,
1099- param=str(logical_bsize),
1100- strict=strict))
1101- # mkfs.vfat doesn't calculate this right for non-512b sector size
1102- # lp:1569576 , d-i uses the same setting.
1103- cmd.extend(["-s", "1"])
1104+ param=lbs_str, strict=strict))
1105+
1106+ if fs_family == 'fat':
1107+ # mkfs.vfat doesn't calculate this right for non-512b sector size
1108+ # lp:1569576 , d-i uses the same setting.
1109+ cmd.extend(["-s", "1"])
1110
1111 if force:
1112 cmd.extend(get_flag_mapping("force", fs_family, strict=strict))
1113
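
On a disk reporting a 4096-byte logical sector size, the mappings above now
yield 'mkfs.xfs ... -s size=4096' while vfat keeps its '-S 4096 -s 1'
workaround. A sketch of exercising that path, assuming the module's public
mkfs(path, fstype, ...) entry point, which is not shown in this hunk:

    from curtin.block import mkfs

    # builds and runs the mkfs.xfs command, appending '-s size=<bytes>'
    # when the device's logical sector size is larger than 512
    mkfs.mkfs('/dev/sdb1', 'xfs')
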
1114=== modified file 'curtin/commands/apply_net.py'
1115--- curtin/commands/apply_net.py 2016-05-10 16:13:29 +0000
1116+++ curtin/commands/apply_net.py 2016-10-03 18:55:20 +0000
1117@@ -26,6 +26,57 @@
1118
1119 LOG = log.LOG
1120
1121+IFUPDOWN_IPV6_MTU_PRE_HOOK = """#!/bin/bash -e
1122+# injected by curtin installer
1123+
1124+[ "${IFACE}" != "lo" ] || exit 0
1125+
1126+# Trigger only if MTU configured
1127+[ -n "${IF_MTU}" ] || exit 0
1128+
1129+read CUR_DEV_MTU </sys/class/net/${IFACE}/mtu ||:
1130+read CUR_IPV6_MTU </proc/sys/net/ipv6/conf/${IFACE}/mtu ||:
1131+[ -n "${CUR_DEV_MTU}" ] && echo ${CUR_DEV_MTU} > /run/network/${IFACE}_dev.mtu
1132+[ -n "${CUR_IPV6_MTU}" ] &&
1133+ echo ${CUR_IPV6_MTU} > /run/network/${IFACE}_ipv6.mtu
1134+exit 0
1135+"""
1136+
1137+IFUPDOWN_IPV6_MTU_POST_HOOK = """#!/bin/bash -e
1138+# injected by curtin installer
1139+
1140+[ "${IFACE}" != "lo" ] || exit 0
1141+
1142+# Trigger only if MTU configured
1143+[ -n "${IF_MTU}" ] || exit 0
1144+
1145+read PRE_DEV_MTU </run/network/${IFACE}_dev.mtu ||:
1146+read CUR_DEV_MTU </sys/class/net/${IFACE}/mtu ||:
1147+read PRE_IPV6_MTU </run/network/${IFACE}_ipv6.mtu ||:
1148+read CUR_IPV6_MTU </proc/sys/net/ipv6/conf/${IFACE}/mtu ||:
1149+
1150+if [ "${ADDRFAM}" = "inet6" ]; then
1151+ # We need to check the underlying interface MTU and
1152+ # raise it if the IPV6 mtu is larger
1153+ if [ ${CUR_DEV_MTU} -lt ${IF_MTU} ]; then
1154+ ip link set ${IFACE} mtu ${IF_MTU}
1155+ fi
1156+ # sysctl -q -e -w net.ipv6.conf.${IFACE}.mtu=${IF_MTU}
1157+ echo ${IF_MTU} >/proc/sys/net/ipv6/conf/${IFACE}/mtu ||:
1158+
1159+elif [ "${ADDRFAM}" = "inet" ]; then
1160+ # handle the clobber case where inet mtu changes v6 mtu.
1161+ # ifupdown will already have set dev mtu, so lower mtu
1162+ # if needed. If v6 mtu was larger, it gets clamped down
1163+ # to the dev MTU value.
1164+ if [ ${PRE_IPV6_MTU} -lt ${CUR_IPV6_MTU} ]; then
1165+ # sysctl -q -e -w net.ipv6.conf.${IFACE}.mtu=${PRE_IPV6_MTU}
1166+ echo ${PRE_IPV6_MTU} >/proc/sys/net/ipv6/conf/${IFACE}/mtu ||:
1167+ fi
1168+fi
1169+exit 0
1170+"""
1171+
1172
1173 def apply_net(target, network_state=None, network_config=None):
1174 if network_state is None and network_config is None:
1175@@ -45,6 +96,108 @@
1176
1177 net.render_network_state(target=target, network_state=ns)
1178
1179+ _maybe_remove_legacy_eth0(target)
1180+ LOG.info('Attempting to remove ipv6 privacy extensions')
1181+ _disable_ipv6_privacy_extensions(target)
1182+ _patch_ifupdown_ipv6_mtu_hook(target)
1183+
1184+
1185+def _patch_ifupdown_ipv6_mtu_hook(target,
1186+ prehookfn="etc/network/if-pre-up.d/mtuipv6",
1187+ posthookfn="etc/network/if-up.d/mtuipv6"):
1188+
1189+ contents = {
1190+ 'prehook': IFUPDOWN_IPV6_MTU_PRE_HOOK,
1191+ 'posthook': IFUPDOWN_IPV6_MTU_POST_HOOK,
1192+ }
1193+
1194+ hookfn = {
1195+ 'prehook': prehookfn,
1196+ 'posthook': posthookfn,
1197+ }
1198+
1199+ for hook in ['prehook', 'posthook']:
1200+ fn = hookfn[hook]
1201+ cfg = util.target_path(target, path=fn)
1202+ LOG.info('Injecting fix for ipv6 mtu settings: %s', cfg)
1203+ util.write_file(cfg, contents[hook], mode=0o755)
1204+
1205+
1206+def _disable_ipv6_privacy_extensions(target,
1207+ path="etc/sysctl.d/10-ipv6-privacy.conf"):
1208+
1209+ """Ubuntu server image sets a preference to use IPv6 privacy extensions
1210+ by default; this races with the cloud-image desire to disable them.
1211+ Resolve this by allowing the cloud-image setting to win. """
1212+
1213+ cfg = util.target_path(target, path=path)
1214+ if not os.path.exists(cfg):
1215+ LOG.warn('Failed to find ipv6 privacy conf file %s', cfg)
1216+ return
1217+
1218+ bmsg = "Disabling IPv6 privacy extensions config may not apply."
1219+ try:
1220+ contents = util.load_file(cfg)
1221+ known_contents = ["net.ipv6.conf.all.use_tempaddr = 2",
1222+ "net.ipv6.conf.default.use_tempaddr = 2"]
1223+ lines = [f.strip() for f in contents.splitlines()
1224+ if not f.startswith("#")]
1225+ if lines == known_contents:
1226+ LOG.info('deleting file: %s', cfg)
1227+ util.del_file(cfg)
1228+ msg = "removed %s with known contents" % cfg
1229+ curtin_contents = '\n'.join(
1230+ ["# IPv6 Privacy Extensions (RFC 4941)",
1231+ "# Disabled by curtin",
1232+ "# net.ipv6.conf.all.use_tempaddr = 2",
1233+ "# net.ipv6.conf.default.use_tempaddr = 2"])
1234+ util.write_file(cfg, curtin_contents)
1235+ else:
1236+ LOG.info('skipping, content did not match')
1237+ LOG.debug("found content:\n%s", lines)
1238+ LOG.debug("expected contents:\n%s", known_contents)
1239+ msg = (bmsg + " '%s' exists with user configured content." % cfg)
1240+ except:
1241+ msg = bmsg + " %s exists, but could not be read." % cfg
1242+ LOG.exception(msg)
1243+ return
1244+
1245+
1246+def _maybe_remove_legacy_eth0(target,
1247+ path="etc/network/interfaces.d/eth0.cfg"):
1248+ """Ubuntu cloud images previously included a 'eth0.cfg' that had
1249+ hard coded content. That file would interfere with the rendered
1250+ configuration if it was present.
1251+
1252+ if the file does not exist do nothing.
1253+ If the file exists:
1254+ - with known content, remove it and warn
1255+ - with unknown content, leave it and warn
1256+ """
1257+
1258+ cfg = util.target_path(target, path=path)
1259+ if not os.path.exists(cfg):
1260+ LOG.warn('Failed to find legacy network conf file %s', cfg)
1261+ return
1262+
1263+ bmsg = "Dynamic networking config may not apply."
1264+ try:
1265+ contents = util.load_file(cfg)
1266+ known_contents = ["auto eth0", "iface eth0 inet dhcp"]
1267+ lines = [f.strip() for f in contents.splitlines()
1268+ if not f.startswith("#")]
1269+ if lines == known_contents:
1270+ util.del_file(cfg)
1271+ msg = "removed %s with known contents" % cfg
1272+ else:
1273+ msg = (bmsg + " '%s' exists with user configured content." % cfg)
1274+ except:
1275+ msg = bmsg + " %s exists, but could not be read." % cfg
1276+ LOG.exception(msg)
1277+ return
1278+
1279+ LOG.warn(msg)
1280+
1281
1282 def apply_net_main(args):
1283 # curtin apply_net [--net-state=/config/netstate.yml] [--target=/]
1284@@ -76,8 +229,10 @@
1285 apply_net(target=state['target'],
1286 network_state=state['network_state'],
1287 network_config=state['network_config'])
1288+
1289 except Exception:
1290 LOG.exception('failed to apply network config')
1291+ return 1
1292
1293 LOG.info('Applied network configuration successfully')
1294 sys.exit(0)
1295@@ -90,7 +245,7 @@
1296 'metavar': 'NETSTATE', 'action': 'store',
1297 'default': os.environ.get('OUTPUT_NETWORK_STATE')}),
1298 (('-t', '--target'),
1299- {'help': ('target filesystem root to add swap file to. '
1300+ {'help': ('target filesystem root to configure networking to. '
1301 'default is env["TARGET_MOUNT_POINT"]'),
1302 'metavar': 'TARGET', 'action': 'store',
1303 'default': os.environ.get('TARGET_MOUNT_POINT')}),
1304
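
A minimal sketch of calling apply_net directly; per the signature above,
one of network_state or network_config must be provided (both paths here
are illustrative):

    from curtin.commands import apply_net

    # renders the network config into the target, removes a known legacy
    # eth0.cfg and ipv6 privacy sysctl if present, and installs the ipv6
    # mtu pre/post ifupdown hooks shown above
    apply_net.apply_net('/tmp/target', network_config='/run/net-config.yaml')
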
1305=== added file 'curtin/commands/apt_config.py'
1306--- curtin/commands/apt_config.py 1970-01-01 00:00:00 +0000
1307+++ curtin/commands/apt_config.py 2016-10-03 18:55:20 +0000
1308@@ -0,0 +1,668 @@
1309+# Copyright (C) 2016 Canonical Ltd.
1310+#
1311+# Author: Christian Ehrhardt <christian.ehrhardt@canonical.com>
1312+#
1313+# Curtin is free software: you can redistribute it and/or modify it under
1314+# the terms of the GNU Affero General Public License as published by the
1315+# Free Software Foundation, either version 3 of the License, or (at your
1316+# option) any later version.
1317+#
1318+# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
1319+# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
1320+# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
1321+# more details.
1322+#
1323+# You should have received a copy of the GNU Affero General Public License
1324+# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
1325+"""
1326+apt_config.py
1327+Handle the setup of apt related tasks like proxies, mirrors, repositories.
1328+"""
1329+
1330+import argparse
1331+import glob
1332+import os
1333+import re
1334+import sys
1335+import yaml
1336+
1337+from curtin.log import LOG
1338+from curtin import (config, util, gpg)
1339+
1340+from . import populate_one_subcmd
1341+
1342+# this will match 'XXX:YYY' (ie, 'cloud-archive:foo' or 'ppa:bar')
1343+ADD_APT_REPO_MATCH = r"^[\w-]+:\w"
1344+
1345+# place where apt stores cached repository data
1346+APT_LISTS = "/var/lib/apt/lists"
1347+
1348+# Files to store proxy information
1349+APT_CONFIG_FN = "/etc/apt/apt.conf.d/94curtin-config"
1350+APT_PROXY_FN = "/etc/apt/apt.conf.d/90curtin-aptproxy"
1351+
1352+# Default keyserver to use
1353+DEFAULT_KEYSERVER = "keyserver.ubuntu.com"
1354+
1355+# Default archive mirrors
1356+PRIMARY_ARCH_MIRRORS = {"PRIMARY": "http://archive.ubuntu.com/ubuntu/",
1357+ "SECURITY": "http://security.ubuntu.com/ubuntu/"}
1358+PORTS_MIRRORS = {"PRIMARY": "http://ports.ubuntu.com/ubuntu-ports",
1359+ "SECURITY": "http://ports.ubuntu.com/ubuntu-ports"}
1360+PRIMARY_ARCHES = ['amd64', 'i386']
1361+PORTS_ARCHES = ['s390x', 'arm64', 'armhf', 'powerpc', 'ppc64el']
1362+
1363+
1364+def get_default_mirrors(arch=None):
1365+ """returns the default mirrors for the target. These depend on the
1366+ architecture, for more see:
1367+ https://wiki.ubuntu.com/UbuntuDevelopment/PackageArchive#Ports"""
1368+ if arch is None:
1369+ arch = util.get_architecture()
1370+ if arch in PRIMARY_ARCHES:
1371+ return PRIMARY_ARCH_MIRRORS.copy()
1372+ if arch in PORTS_ARCHES:
1373+ return PORTS_MIRRORS.copy()
1374+ raise ValueError("No default mirror known for arch %s" % arch)
1375+
1376+
1377+def handle_apt(cfg, target=None):
1378+ """ handle_apt
1379+ process the config for apt_config. This can be called from
1380+ curthooks if a global apt config was provided or via the "apt"
1381+ standalone command.
1382+ """
1383+ release = util.lsb_release(target=target)['codename']
1384+ arch = util.get_architecture(target)
1385+ mirrors = find_apt_mirror_info(cfg, arch)
1386+ LOG.debug("Apt Mirror info: %s", mirrors)
1387+
1388+ apply_debconf_selections(cfg, target)
1389+
1390+ if not config.value_as_boolean(cfg.get('preserve_sources_list',
1391+ True)):
1392+ generate_sources_list(cfg, release, mirrors, target)
1393+ rename_apt_lists(mirrors, target)
1394+
1395+ try:
1396+ apply_apt_proxy_config(cfg, target + APT_PROXY_FN,
1397+ target + APT_CONFIG_FN)
1398+ except (IOError, OSError):
1399+ LOG.exception("Failed to apply proxy or apt config info:")
1400+
1401+ # Process 'apt_source -> sources {dict}'
1402+ if 'sources' in cfg:
1403+ params = mirrors
1404+ params['RELEASE'] = release
1405+ params['MIRROR'] = mirrors["MIRROR"]
1406+
1407+ matcher = None
1408+ matchcfg = cfg.get('add_apt_repo_match', ADD_APT_REPO_MATCH)
1409+ if matchcfg:
1410+ matcher = re.compile(matchcfg).search
1411+
1412+ add_apt_sources(cfg['sources'], target,
1413+ template_params=params, aa_repo_match=matcher)
1414+
1415+
1416+def debconf_set_selections(selections, target=None):
1417+ util.subp(['debconf-set-selections'], data=selections, target=target,
1418+ capture=True)
1419+
1420+
1421+def dpkg_reconfigure(packages, target=None):
1422+    # For any packages that are already installed, but have preseed data,
1423+    # we populate the debconf database; however, the filesystem
1424+    # configuration would be preferred on a subsequent dpkg-reconfigure,
1425+    # so we have to "know" information about certain packages
1426+    # to unconfigure them.
1427+ unhandled = []
1428+ to_config = []
1429+ for pkg in packages:
1430+ if pkg in CONFIG_CLEANERS:
1431+ LOG.debug("unconfiguring %s", pkg)
1432+ CONFIG_CLEANERS[pkg](target)
1433+ to_config.append(pkg)
1434+ else:
1435+ unhandled.append(pkg)
1436+
1437+ if len(unhandled):
1438+ LOG.warn("The following packages were installed and preseeded, "
1439+ "but cannot be unconfigured: %s", unhandled)
1440+
1441+ if len(to_config):
1442+ util.subp(['dpkg-reconfigure', '--frontend=noninteractive'] +
1443+ list(to_config), data=None, target=target, capture=True)
1444+
1445+
1446+def apply_debconf_selections(cfg, target=None):
1447+ """apply_debconf_selections - push content to debconf"""
1448+ # debconf_selections:
1449+ # set1: |
1450+ # cloud-init cloud-init/datasources multiselect MAAS
1451+ # set2: pkg pkg/value string bar
1452+ selsets = cfg.get('debconf_selections')
1453+ if not selsets:
1454+ LOG.debug("debconf_selections was not set in config")
1455+ return
1456+
1457+ selections = '\n'.join(
1458+ [selsets[key] for key in sorted(selsets.keys())])
1459+ debconf_set_selections(selections.encode() + b"\n", target=target)
1460+
1461+ # get a complete list of packages listed in input
1462+ pkgs_cfgd = set()
1463+ for key, content in selsets.items():
1464+ for line in content.splitlines():
1465+ if line.startswith("#"):
1466+ continue
1467+ pkg = re.sub(r"[:\s].*", "", line)
1468+ pkgs_cfgd.add(pkg)
1469+
1470+ pkgs_installed = util.get_installed_packages(target)
1471+
1472+ LOG.debug("pkgs_cfgd: %s", pkgs_cfgd)
1473+ LOG.debug("pkgs_installed: %s", pkgs_installed)
1474+ need_reconfig = pkgs_cfgd.intersection(pkgs_installed)
1475+
1476+ if len(need_reconfig) == 0:
1477+ LOG.debug("no need for reconfig")
1478+ return
1479+
1480+ dpkg_reconfigure(need_reconfig, target=target)
1481+
1482+
1483+def clean_cloud_init(target):
1484+ """clean out any local cloud-init config"""
1485+ flist = glob.glob(
1486+ util.target_path(target, "/etc/cloud/cloud.cfg.d/*dpkg*"))
1487+
1488+ LOG.debug("cleaning cloud-init config from: %s", flist)
1489+ for dpkg_cfg in flist:
1490+ os.unlink(dpkg_cfg)
1491+
1492+
1493+def mirrorurl_to_apt_fileprefix(mirror):
1494+ """ mirrorurl_to_apt_fileprefix
1495+ Convert a mirror url to the file prefix used by apt on disk to
1496+ store cache information for that mirror.
1497+    To do so:
1498+    - take off ???://
1499+    - drop trailing /
1500+    - convert remaining / to _
1501+ """
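+    # e.g. (illustrative): "http://archive.ubuntu.com/ubuntu/"
+    #                      -> "archive.ubuntu.com_ubuntu"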
1502+ string = mirror
1503+ if string.endswith("/"):
1504+ string = string[0:-1]
1505+ pos = string.find("://")
1506+ if pos >= 0:
1507+ string = string[pos + 3:]
1508+ string = string.replace("/", "_")
1509+ return string
1510+
1511+
1512+def rename_apt_lists(new_mirrors, target=None):
1513+ """rename_apt_lists - rename apt lists to preserve old cache data"""
1514+ default_mirrors = get_default_mirrors(util.get_architecture(target))
1515+
1516+ pre = util.target_path(target, APT_LISTS)
1517+ for (name, omirror) in default_mirrors.items():
1518+ nmirror = new_mirrors.get(name)
1519+ if not nmirror:
1520+ continue
1521+
1522+ oprefix = pre + os.path.sep + mirrorurl_to_apt_fileprefix(omirror)
1523+ nprefix = pre + os.path.sep + mirrorurl_to_apt_fileprefix(nmirror)
1524+ if oprefix == nprefix:
1525+ continue
1526+ olen = len(oprefix)
1527+ for filename in glob.glob("%s_*" % oprefix):
1528+ newname = "%s%s" % (nprefix, filename[olen:])
1529+ LOG.debug("Renaming apt list %s to %s", filename, newname)
1530+ try:
1531+ os.rename(filename, newname)
1532+ except OSError:
1533+                # since this is a best-effort task, warn but don't fail
1534+ LOG.warn("Failed to rename apt list:", exc_info=True)
1535+
1536+
1537+def mirror_to_placeholder(tmpl, mirror, placeholder):
1538+ """ mirror_to_placeholder
1539+ replace the specified mirror in a template with a placeholder string
1540+    Checks for existence of the expected mirror and warns if not found
1541+ """
1542+ if mirror not in tmpl:
1543+ LOG.warn("Expected mirror '%s' not found in: %s", mirror, tmpl)
1544+ return tmpl.replace(mirror, placeholder)
1545+
1546+
1547+def map_known_suites(suite):
1548+ """there are a few default names which will be auto-extended.
1549+    This comes at the cost of not being able to use those names literally as
1550+    suites, but on the other hand it increases readability of the cfg a lot"""
1551+ mapping = {'updates': '$RELEASE-updates',
1552+ 'backports': '$RELEASE-backports',
1553+ 'security': '$RELEASE-security',
1554+ 'proposed': '$RELEASE-proposed',
1555+ 'release': '$RELEASE'}
1556+ try:
1557+ retsuite = mapping[suite]
1558+ except KeyError:
1559+ retsuite = suite
1560+ return retsuite
1561+
1562+
1563+def disable_suites(disabled, src, release):
1564+ """reads the config for suites to be disabled and removes those
1565+ from the template"""
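+    # e.g. (illustrative): disabled=['updates'] with release 'xenial'
+    # comments out any entry whose suite is 'xenial-updates'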
1566+ if not disabled:
1567+ return src
1568+
1569+ retsrc = src
1570+ for suite in disabled:
1571+ suite = map_known_suites(suite)
1572+ releasesuite = util.render_string(suite, {'RELEASE': release})
1573+ LOG.debug("Disabling suite %s as %s", suite, releasesuite)
1574+
1575+ newsrc = ""
1576+ for line in retsrc.splitlines(True):
1577+ if line.startswith("#"):
1578+ newsrc += line
1579+ continue
1580+
1581+ # sources.list allow options in cols[1] which can have spaces
1582+ # so the actual suite can be [2] or later. example:
1583+ # deb [ arch=amd64,armel k=v ] http://example.com/debian
1584+ cols = line.split()
1585+ if len(cols) > 1:
1586+ pcol = 2
1587+ if cols[1].startswith("["):
1588+ for col in cols[1:]:
1589+ pcol += 1
1590+ if col.endswith("]"):
1591+ break
1592+
1593+ if cols[pcol] == releasesuite:
1594+ line = '# suite disabled by curtin: %s' % line
1595+ newsrc += line
1596+ retsrc = newsrc
1597+
1598+ return retsrc
1599+
1600+
1601+def generate_sources_list(cfg, release, mirrors, target=None):
1602+ """ generate_sources_list
1603+    create a sources.list file based on a custom or default template
1604+ by replacing mirrors and release in the template
1605+ """
1606+ default_mirrors = get_default_mirrors(util.get_architecture(target))
1607+ aptsrc = "/etc/apt/sources.list"
1608+ params = {'RELEASE': release}
1609+ for k in mirrors:
1610+ params[k] = mirrors[k]
1611+
1612+ tmpl = cfg.get('sources_list', None)
1613+ if tmpl is None:
1614+        LOG.info("No custom template provided, falling back to modifying "
1615+                 "mirrors in %s on the target system", aptsrc)
1616+ tmpl = util.load_file(util.target_path(target, aptsrc))
1617+ # Strategy if no custom template was provided:
1618+ # - Only replacing mirrors
1619+ # - no reason to replace "release" as it is from target anyway
1620+ # - The less we depend upon, the more stable this is against changes
1621+ # - warn if expected original content wasn't found
1622+ tmpl = mirror_to_placeholder(tmpl, default_mirrors['PRIMARY'],
1623+ "$MIRROR")
1624+ tmpl = mirror_to_placeholder(tmpl, default_mirrors['SECURITY'],
1625+ "$SECURITY")
1626+
1627+ orig = util.target_path(target, aptsrc)
1628+ if os.path.exists(orig):
1629+ os.rename(orig, orig + ".curtin.old")
1630+
1631+ rendered = util.render_string(tmpl, params)
1632+ disabled = disable_suites(cfg.get('disable_suites'), rendered, release)
1633+ util.write_file(util.target_path(target, aptsrc), disabled, mode=0o644)
1634+
1635+ # protect the just generated sources.list from cloud-init
1636+ cloudfile = "/etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg"
1637+ # this has to work with older cloud-init as well, so use old key
1638+ cloudconf = yaml.dump({'apt_preserve_sources_list': True}, indent=1)
1639+ try:
1640+ util.write_file(util.target_path(target, cloudfile),
1641+ cloudconf, mode=0o644)
1642+ except IOError:
1643+        LOG.exception("Failed to protect sources.list from cloud-init in (%s)",
1644+ util.target_path(target, cloudfile))
1645+ raise
1646+
1647+
1648+def add_apt_key_raw(key, target=None):
1649+ """
1650+    actually add the key, as defined in the key argument,
1651+    to the system
1652+ """
1653+ LOG.debug("Adding key:\n'%s'", key)
1654+ try:
1655+ util.subp(['apt-key', 'add', '-'], data=key.encode(), target=target)
1656+ except util.ProcessExecutionError:
1657+ LOG.exception("failed to add apt GPG Key to apt keyring")
1658+ raise
1659+
1660+
1661+def add_apt_key(ent, target=None):
1662+ """
1663+ Add key to the system as defined in ent (if any).
1664+    Supports raw keys or keyids.
1665+    The latter will first be fetched from a keyserver to get the raw key
1666+ """
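+    # ent sketches (illustrative; the keyid is made up):
+    #   {'key': '-----BEGIN PGP PUBLIC KEY BLOCK-----\n...'}
+    #   {'keyid': 'F430BBA5', 'keyserver': 'keyserver.ubuntu.com'}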
1667+ if 'keyid' in ent and 'key' not in ent:
1668+ keyserver = DEFAULT_KEYSERVER
1669+ if 'keyserver' in ent:
1670+ keyserver = ent['keyserver']
1671+
1672+ ent['key'] = gpg.getkeybyid(ent['keyid'], keyserver)
1673+
1674+ if 'key' in ent:
1675+ add_apt_key_raw(ent['key'], target)
1676+
1677+
1678+def add_apt_sources(srcdict, target=None, template_params=None,
1679+ aa_repo_match=None):
1680+ """
1681+ add entries in /etc/apt/sources.list.d for each abbreviated
1682+    sources.list entry in 'srcdict'. When rendering the template, also
1683+    include the values in the 'template_params' dictionary
1684+ """
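+    # srcdict sketch (illustrative; names and sources are made up):
+    #   {'curtin-dev': {'source': 'ppa:curtin-dev/test-archive'},
+    #    'my-repo.list': {'source': 'deb $MIRROR $RELEASE multiverse'}}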
1685+ if template_params is None:
1686+ template_params = {}
1687+
1688+ if aa_repo_match is None:
1689+ raise ValueError('did not get a valid repo matcher')
1690+
1691+ if not isinstance(srcdict, dict):
1692+ raise TypeError('unknown apt format: %s' % (srcdict))
1693+
1694+ for filename in srcdict:
1695+ ent = srcdict[filename]
1696+ if 'filename' not in ent:
1697+ ent['filename'] = filename
1698+
1699+ add_apt_key(ent, target)
1700+
1701+ if 'source' not in ent:
1702+ continue
1703+ source = ent['source']
1704+ source = util.render_string(source, template_params)
1705+
1706+ if not ent['filename'].startswith("/"):
1707+ ent['filename'] = os.path.join("/etc/apt/sources.list.d/",
1708+ ent['filename'])
1709+ if not ent['filename'].endswith(".list"):
1710+ ent['filename'] += ".list"
1711+
1712+ if aa_repo_match(source):
1713+ try:
1714+ with util.ChrootableTarget(
1715+ target, sys_resolvconf=True) as in_chroot:
1716+ in_chroot.subp(["add-apt-repository", source])
1717+ except util.ProcessExecutionError:
1718+ LOG.exception("add-apt-repository failed.")
1719+ raise
1720+ continue
1721+
1722+ sourcefn = util.target_path(target, ent['filename'])
1723+ try:
1724+ contents = "%s\n" % (source)
1725+ util.write_file(sourcefn, contents, omode="a")
1726+ except IOError as detail:
1727+ LOG.exception("failed write to file %s: %s", sourcefn, detail)
1728+ raise
1729+
1730+ util.apt_update(target=target, force=True,
1731+ comment="apt-source changed config")
1732+
1733+ return
1734+
1735+
1736+def search_for_mirror(candidates):
1737+ """
1738+ Search through a list of mirror urls for one that works
1739+ This needs to return quickly.
1740+ """
1741+ if candidates is None:
1742+ return None
1743+
1744+ LOG.debug("search for mirror in candidates: '%s'", candidates)
1745+ for cand in candidates:
1746+ try:
1747+ if util.is_resolvable_url(cand):
1748+ LOG.debug("found working mirror: '%s'", cand)
1749+ return cand
1750+ except Exception:
1751+ pass
1752+ return None
1753+
1754+
1755+def update_mirror_info(pmirror, smirror, arch):
1756+ """sets security mirror to primary if not defined.
1757+ returns defaults if no mirrors are defined"""
1758+ if pmirror is not None:
1759+ if smirror is None:
1760+ smirror = pmirror
1761+ return {'PRIMARY': pmirror,
1762+ 'SECURITY': smirror}
1763+ return get_default_mirrors(arch)
1764+
1765+
1766+def get_arch_mirrorconfig(cfg, mirrortype, arch):
1767+ """out of a list of potential mirror configurations select
1768+ and return the one matching the architecture (or default)"""
1769+ # select the mirror specification (if-any)
1770+ mirror_cfg_list = cfg.get(mirrortype, None)
1771+ if mirror_cfg_list is None:
1772+ return None
1773+
1774+ # select the specification matching the target arch
1775+ default = None
1776+ for mirror_cfg_elem in mirror_cfg_list:
1777+ arches = mirror_cfg_elem.get("arches")
1778+ if arch in arches:
1779+ return mirror_cfg_elem
1780+ if "default" in arches:
1781+ default = mirror_cfg_elem
1782+ return default
1783+
1784+
1785+def get_mirror(cfg, mirrortype, arch):
1786+ """pass the three potential stages of mirror specification
1787+    returns None if none of them found anything, otherwise the first
1788+    hit is returned"""
1789+ mcfg = get_arch_mirrorconfig(cfg, mirrortype, arch)
1790+ if mcfg is None:
1791+ return None
1792+
1793+ # directly specified
1794+ mirror = mcfg.get("uri", None)
1795+
1796+ # fallback to search if specified
1797+ if mirror is None:
1798+ # list of mirrors to try to resolve
1799+ mirror = search_for_mirror(mcfg.get("search", None))
1800+
1801+ return mirror
1802+
1803+
1804+def find_apt_mirror_info(cfg, arch=None):
1805+ """find_apt_mirror_info
1806+ find an apt_mirror given the cfg provided.
1807+    It can check for separate config of primary and security mirrors.
1808+    If only primary is given, security is assumed to be equal to primary.
1809+    If the generic apt_mirror is given, it defines both.
1810+ """
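+    # cfg sketch (illustrative; uris are made up):
+    #   primary:
+    #     - arches: [amd64, i386]
+    #       uri: http://local.mirror/ubuntu
+    #     - arches: [default]
+    #       search: [http://mirror1/ubuntu, http://mirror2/ubuntu]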
1811+
1812+ if arch is None:
1813+ arch = util.get_architecture()
1814+ LOG.debug("got arch for mirror selection: %s", arch)
1815+ pmirror = get_mirror(cfg, "primary", arch)
1816+ LOG.debug("got primary mirror: %s", pmirror)
1817+ smirror = get_mirror(cfg, "security", arch)
1818+ LOG.debug("got security mirror: %s", smirror)
1819+
1820+ # Note: curtin has no cloud-datasource fallback
1821+
1822+ mirror_info = update_mirror_info(pmirror, smirror, arch)
1823+
1824+ # less complex replacements use only MIRROR, derive from primary
1825+ mirror_info["MIRROR"] = mirror_info["PRIMARY"]
1826+
1827+ return mirror_info
1828+
1829+
1830+def apply_apt_proxy_config(cfg, proxy_fname, config_fname):
1831+ """apply_apt_proxy_config
1832+    Applies any apt proxy settings and raw apt config if specified
1833+ """
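+    # e.g. (illustrative): cfg {'proxy': 'http://squid:3128/'} writes
+    #   Acquire::http::Proxy "http://squid:3128/";
+    # to proxy_fname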
1834+ # Set up any apt proxy
1835+ cfgs = (('proxy', 'Acquire::http::Proxy "%s";'),
1836+ ('http_proxy', 'Acquire::http::Proxy "%s";'),
1837+ ('ftp_proxy', 'Acquire::ftp::Proxy "%s";'),
1838+ ('https_proxy', 'Acquire::https::Proxy "%s";'))
1839+
1840+ proxies = [fmt % cfg.get(name) for (name, fmt) in cfgs if cfg.get(name)]
1841+ if len(proxies):
1842+ LOG.debug("write apt proxy info to %s", proxy_fname)
1843+ util.write_file(proxy_fname, '\n'.join(proxies) + '\n')
1844+ elif os.path.isfile(proxy_fname):
1845+ util.del_file(proxy_fname)
1846+ LOG.debug("no apt proxy configured, removed %s", proxy_fname)
1847+
1848+ if cfg.get('conf', None):
1849+ LOG.debug("write apt config info to %s", config_fname)
1850+ util.write_file(config_fname, cfg.get('conf'))
1851+ elif os.path.isfile(config_fname):
1852+ util.del_file(config_fname)
1853+ LOG.debug("no apt config configured, removed %s", config_fname)
1854+
1855+
1856+def apt_command(args):
1857+ """ Main entry point for curtin apt-config standalone command
1858+ This does not read the global config as handled by curthooks, but
1859+ instead one can specify a different "target" and a new cfg via --config
1860+ """
1861+ cfg = config.load_command_config(args, {})
1862+
1863+ if args.target is not None:
1864+ target = args.target
1865+ else:
1866+ state = util.load_command_environment()
1867+ target = state['target']
1868+
1869+ if target is None:
1870+ sys.stderr.write("Unable to find target. "
1871+ "Use --target or set TARGET_MOUNT_POINT\n")
1872+ sys.exit(2)
1873+
1874+ apt_cfg = cfg.get("apt")
1875+ # if no apt config section is available, do nothing
1876+ if apt_cfg is not None:
1877+ LOG.debug("Handling apt to target %s with config %s",
1878+ target, apt_cfg)
1879+ try:
1880+ with util.ChrootableTarget(target, sys_resolvconf=True):
1881+ handle_apt(apt_cfg, target)
1882+ except (RuntimeError, TypeError, ValueError, IOError):
1883+ LOG.exception("Failed to configure apt features '%s'", apt_cfg)
1884+ sys.exit(1)
1885+ else:
1886+ LOG.info("No apt config provided, skipping")
1887+
1888+ sys.exit(0)
1889+
1890+
1891+def translate_old_apt_features(cfg):
1892+    """translate the few old apt-related features into the new config format"""
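+    # e.g. (illustrative) the old flat form:
+    #   apt_proxy: http://proxy.example.com:3128/
+    # is carried over into the new nested form:
+    #   apt: {proxy: http://proxy.example.com:3128/}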
1893+ predef_apt_cfg = cfg.get("apt")
1894+ if predef_apt_cfg is None:
1895+ cfg['apt'] = {}
1896+ predef_apt_cfg = cfg.get("apt")
1897+
1898+ if cfg.get('apt_proxy') is not None:
1899+ if predef_apt_cfg.get('proxy') is not None:
1900+ msg = ("Error in apt_proxy configuration: "
1901+ "old and new format of apt features "
1902+ "are mutually exclusive")
1903+ LOG.error(msg)
1904+ raise ValueError(msg)
1905+
1906+ cfg['apt']['proxy'] = cfg.get('apt_proxy')
1907+ LOG.debug("Transferred %s into new format: %s", cfg.get('apt_proxy'),
1908+              cfg.get('apt'))
1909+ del cfg['apt_proxy']
1910+
1911+ if cfg.get('apt_mirrors') is not None:
1912+ if predef_apt_cfg.get('mirrors') is not None:
1913+ msg = ("Error in apt_mirror configuration: "
1914+ "old and new format of apt features "
1915+ "are mutually exclusive")
1916+ LOG.error(msg)
1917+ raise ValueError(msg)
1918+
1919+ old = cfg.get('apt_mirrors')
1920+ cfg['apt']['primary'] = [{"arches": ["default"],
1921+ "uri": old.get('ubuntu_archive')}]
1922+ cfg['apt']['security'] = [{"arches": ["default"],
1923+ "uri": old.get('ubuntu_security')}]
1924+    LOG.debug("Transferred %s into new format: %s", cfg.get('apt_mirrors'),
1925+ cfg.get('apt'))
1926+ del cfg['apt_mirrors']
1927+ # to work this also needs to disable the default protection
1928+ psl = predef_apt_cfg.get('preserve_sources_list')
1929+ if psl is not None:
1930+ if config.value_as_boolean(psl) is True:
1931+ msg = ("Error in apt_mirror configuration: "
1932+ "apt_mirrors and preserve_sources_list: True "
1933+ "are mutually exclusive")
1934+ LOG.error(msg)
1935+ raise ValueError(msg)
1936+ cfg['apt']['preserve_sources_list'] = False
1937+
1938+ if cfg.get('debconf_selections') is not None:
1939+ if predef_apt_cfg.get('debconf_selections') is not None:
1940+ msg = ("Error in debconf_selections configuration: "
1941+ "old and new format of apt features "
1942+ "are mutually exclusive")
1943+ LOG.error(msg)
1944+ raise ValueError(msg)
1945+
1946+ selsets = cfg.get('debconf_selections')
1947+ cfg['apt']['debconf_selections'] = selsets
1948+ LOG.info("Transferred %s into new format: %s",
1949+ cfg.get('debconf_selections'),
1950+ cfg.get('apt'))
1951+ del cfg['debconf_selections']
1952+
1953+ return cfg
1954+
1955+
1956+CMD_ARGUMENTS = (
1957+ ((('-c', '--config'),
1958+ {'help': 'read configuration from cfg', 'action': util.MergedCmdAppend,
1959+ 'metavar': 'FILE', 'type': argparse.FileType("rb"),
1960+ 'dest': 'cfgopts', 'default': []}),
1961+ (('-t', '--target'),
1962+ {'help': 'chroot to target. default is env[TARGET_MOUNT_POINT]',
1963+ 'action': 'store', 'metavar': 'TARGET',
1964+ 'default': os.environ.get('TARGET_MOUNT_POINT')}),)
1965+)
1966+
1967+
1968+def POPULATE_SUBCMD(parser):
1969+ """Populate subcommand option parsing for apt-config"""
1970+ populate_one_subcmd(parser, CMD_ARGUMENTS, apt_command)
1971+
1972+CONFIG_CLEANERS = {
1973+ 'cloud-init': clean_cloud_init,
1974+}
1975+
1976+# vi: ts=4 expandtab syntax=python
1977
1978=== added file 'curtin/commands/block_info.py'
1979--- curtin/commands/block_info.py 1970-01-01 00:00:00 +0000
1980+++ curtin/commands/block_info.py 2016-10-03 18:55:20 +0000
1981@@ -0,0 +1,75 @@
1982+# Copyright (C) 2016 Canonical Ltd.
1983+#
1984+# Author: Wesley Wiedenmeier <wesley.wiedenmeier@canonical.com>
1985+#
1986+# Curtin is free software: you can redistribute it and/or modify it under
1987+# the terms of the GNU Affero General Public License as published by the
1988+# Free Software Foundation, either version 3 of the License, or (at your
1989+# option) any later version.
1990+#
1991+# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
1992+# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
1993+# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
1994+# more details.
1995+#
1996+# You should have received a copy of the GNU Affero General Public License
1997+# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
1998+
1999+import os
2000+from . import populate_one_subcmd
2001+from curtin import (block, util)
2002+
2003+
2004+def block_info_main(args):
2005+ """get information about block devices, similar to lsblk"""
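+    # e.g. (illustrative, assuming this is wired up as the 'block-info'
+    # subcommand): curtin block-info --human /dev/sda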
2006+ if not args.devices:
2007+ raise ValueError('devices to scan must be specified')
2008+ if not all(block.is_block_device(d) for d in args.devices):
2009+ raise ValueError('invalid device(s)')
2010+
2011+ def add_size_to_holders_tree(tree):
2012+ """add size information to generated holders trees"""
2013+ size_file = os.path.join(tree['device'], 'size')
2014+ # size file is always represented in 512 byte sectors even if
2015+ # underlying disk uses a larger logical_block_size
2016+ size = ((512 * int(util.load_file(size_file)))
2017+ if os.path.exists(size_file) else None)
2018+ tree['size'] = util.bytes2human(size) if args.human else str(size)
2019+ for holder in tree['holders']:
2020+ add_size_to_holders_tree(holder)
2021+ return tree
2022+
2023+ def format_name(tree):
2024+ """format information for human readable display"""
2025+ res = {
2026+ 'name': ' - '.join((tree['name'], tree['dev_type'], tree['size'])),
2027+ 'holders': []
2028+ }
2029+ for holder in tree['holders']:
2030+ res['holders'].append(format_name(holder))
2031+ return res
2032+
2033+ trees = [add_size_to_holders_tree(t) for t in
2034+ [block.clear_holders.gen_holders_tree(d) for d in args.devices]]
2035+
2036+ print(util.json_dumps(trees) if args.json else
2037+ '\n'.join(block.clear_holders.format_holders_tree(t) for t in
2038+ [format_name(tree) for tree in trees]))
2039+
2040+ return 0
2041+
2042+
2043+CMD_ARGUMENTS = (
2044+ ('devices',
2045+ {'help': 'devices to get info for', 'default': [], 'nargs': '+'}),
2046+ ('--human',
2047+ {'help': 'output size in human readable format', 'default': False,
2048+ 'action': 'store_true'}),
2049+ (('-j', '--json'),
2050+ {'help': 'output data in json format', 'default': False,
2051+ 'action': 'store_true'}),
2052+)
2053+
2054+
2055+def POPULATE_SUBCMD(parser):
2056+ populate_one_subcmd(parser, CMD_ARGUMENTS, block_info_main)
2057
2058=== modified file 'curtin/commands/block_meta.py'
2059--- curtin/commands/block_meta.py 2016-10-03 18:00:41 +0000
2060+++ curtin/commands/block_meta.py 2016-10-03 18:55:20 +0000
2061@@ -17,9 +17,8 @@
2062
2063 from collections import OrderedDict
2064 from curtin import (block, config, util)
2065-from curtin.block import mdadm
2066+from curtin.block import (mdadm, mkfs, clear_holders, lvm)
2067 from curtin.log import LOG
2068-from curtin.block import mkfs
2069 from curtin.reporter import events
2070
2071 from . import populate_one_subcmd
2072@@ -28,7 +27,7 @@
2073 import glob
2074 import os
2075 import platform
2076-import re
2077+import string
2078 import sys
2079 import tempfile
2080 import time
2081@@ -129,128 +128,6 @@
2082 return "mbr"
2083
2084
2085-def block_find_sysfs_path(devname):
2086- # return the path in sys for device named devname
2087- # support either short name ('sda') or full path /dev/sda
2088- # sda -> /sys/class/block/sda
2089- # sda1 -> /sys/class/block/sda/sda1
2090- if not devname:
2091- raise ValueError("empty devname provided to find_sysfs_path")
2092-
2093- sys_class_block = '/sys/class/block/'
2094- basename = os.path.basename(devname)
2095- # try without parent blockdevice, then prepend parent
2096- paths = [
2097- os.path.join(sys_class_block, basename),
2098- os.path.join(sys_class_block,
2099- re.split('[\d+]', basename)[0], basename),
2100- ]
2101-
2102- # find path to devname directory in sysfs
2103- devname_sysfs = None
2104- for path in paths:
2105- if os.path.exists(path):
2106- devname_sysfs = path
2107-
2108- if devname_sysfs is None:
2109- err = ('No sysfs path to device:'
2110- ' {}'.format(devname_sysfs))
2111- LOG.error(err)
2112- raise ValueError(err)
2113-
2114- return devname_sysfs
2115-
2116-
2117-def get_holders(devname):
2118- # Look up any block device holders.
2119- # Handle devices and partitions as devnames (vdb, md0, vdb7)
2120- devname_sysfs = block_find_sysfs_path(devname)
2121- if devname_sysfs:
2122- holders = os.listdir(os.path.join(devname_sysfs, 'holders'))
2123- LOG.debug("devname '%s' had holders: %s", devname, ','.join(holders))
2124- return holders
2125-
2126- LOG.debug('get_holders: did not find sysfs path for %s', devname)
2127- return []
2128-
2129-
2130-def clear_holders(sys_block_path):
2131- holders = os.listdir(os.path.join(sys_block_path, "holders"))
2132- LOG.info("clear_holders running on '%s', with holders '%s'" %
2133- (sys_block_path, holders))
2134- for holder in holders:
2135- # get path to holder in /sys/block, then clear it
2136- try:
2137- holder_realpath = os.path.realpath(
2138- os.path.join(sys_block_path, "holders", holder))
2139- clear_holders(holder_realpath)
2140- except IOError as e:
2141- # something might have already caused the holder to go away
2142- if util.is_file_not_found_exc(e):
2143- pass
2144- pass
2145-
2146- # detect what type of holder is using this volume and shut it down, need to
2147- # find more robust name of doing detection
2148- if "bcache" in sys_block_path:
2149- # bcache device
2150- part_devs = []
2151- for part_dev in glob.glob(os.path.join(sys_block_path,
2152- "slaves", "*", "dev")):
2153- with open(part_dev, "r") as fp:
2154- part_dev_id = fp.read().rstrip()
2155- part_devs.append(
2156- os.path.split(os.path.realpath(os.path.join("/dev/block",
2157- part_dev_id)))[-1])
2158- for cache_dev in glob.glob("/sys/fs/bcache/*/bdev*"):
2159- for part_dev in part_devs:
2160- if part_dev in os.path.realpath(cache_dev):
2161- # This is our bcache device, stop it, wait for udev to
2162- # settle
2163- with open(os.path.join(os.path.split(cache_dev)[0],
2164- "stop"), "w") as fp:
2165- LOG.info("stopping: %s" % fp)
2166- fp.write("1")
2167- udevadm_settle()
2168- break
2169- for part_dev in part_devs:
2170- block.wipe_volume(os.path.join("/dev", part_dev),
2171- mode="superblock")
2172-
2173- if os.path.exists(os.path.join(sys_block_path, "bcache")):
2174- # bcache device that isn't running, if it were, we would have found it
2175- # when we looked for holders
2176- try:
2177- with open(os.path.join(sys_block_path, "bcache", "set", "stop"),
2178- "w") as fp:
2179- LOG.info("stopping: %s" % fp)
2180- fp.write("1")
2181- except IOError as e:
2182- if not util.is_file_not_found_exc(e):
2183- raise e
2184- with open(os.path.join(sys_block_path, "bcache", "stop"),
2185- "w") as fp:
2186- LOG.info("stopping: %s" % fp)
2187- fp.write("1")
2188- udevadm_settle()
2189-
2190- if os.path.exists(os.path.join(sys_block_path, "md")):
2191- # md device
2192- block_dev = os.path.join("/dev/", os.path.split(sys_block_path)[-1])
2193- # if these fail its okay, the array might not be assembled and thats
2194- # fine
2195- mdadm.mdadm_stop(block_dev)
2196- mdadm.mdadm_remove(block_dev)
2197-
2198- elif os.path.exists(os.path.join(sys_block_path, "dm")):
2199- # Shut down any volgroups
2200- with open(os.path.join(sys_block_path, "dm", "name"), "r") as fp:
2201- name = fp.read().split('-')
2202- util.subp(["lvremove", "--force", name[0].rstrip(), name[1].rstrip()],
2203- rcs=[0, 5])
2204- util.subp(["vgremove", name[0].rstrip()], rcs=[0, 5, 6])
2205-
2206-
2207 def devsync(devpath):
2208 LOG.debug('devsync for %s', devpath)
2209 util.subp(['partprobe', devpath], rcs=[0, 1])
2210@@ -265,14 +142,6 @@
2211 raise OSError('Failed to find device at path: %s', devpath)
2212
2213
2214-def determine_partition_kname(disk_kname, partition_number):
2215- for dev_type in ["nvme", "mmcblk"]:
2216- if disk_kname.startswith(dev_type):
2217- partition_number = "p%s" % partition_number
2218- break
2219- return "%s%s" % (disk_kname, partition_number)
2220-
2221-
2222 def determine_partition_number(partition_id, storage_config):
2223 vol = storage_config.get(partition_id)
2224 partnumber = vol.get('number')
2225@@ -304,6 +173,18 @@
2226 return partnumber
2227
2228
2229+def sanitize_dname(dname):
2230+ """
2231+    dnames should be sanitized before writing rule files, in case MAAS has
2232+    emitted a dname with a special character
2233+
2234+    only letters, numbers, '-' and '_' are permitted, as this will be
2235+    used for a device path. spaces are also not permitted
2236+ """
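+    # e.g. (illustrative): 'my raid!' -> 'my-raid-'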
2237+ valid = string.digits + string.ascii_letters + '-_'
2238+ return ''.join(c if c in valid else '-' for c in dname)
2239+
2240+
2241 def make_dname(volume, storage_config):
2242 state = util.load_command_environment()
2243 rules_dir = os.path.join(state['scratch'], "rules.d")
2244@@ -321,7 +202,7 @@
2245 # we may not always be able to find a uniq identifier on devices with names
2246 if not ptuuid and vol.get('type') in ["disk", "partition"]:
2247 LOG.warning("Can't find a uuid for volume: {}. Skipping dname.".format(
2248- dname))
2249+ volume))
2250 return
2251
2252 rule = [
2253@@ -346,11 +227,24 @@
2254 volgroup_name = storage_config.get(vol.get('volgroup')).get('name')
2255 dname = "%s-%s" % (volgroup_name, dname)
2256 rule.append(compose_udev_equality("ENV{DM_NAME}", dname))
2257- rule.append("SYMLINK+=\"disk/by-dname/%s\"" % dname)
2258+ else:
2259+ raise ValueError('cannot make dname for device with type: {}'
2260+ .format(vol.get('type')))
2261+
2262+ # note: this sanitization is done here instead of for all name attributes
2263+ # at the beginning of storage configuration, as some devices, such as
2264+ # lvm devices may use the name attribute and may permit special chars
2265+ sanitized = sanitize_dname(dname)
2266+ if sanitized != dname:
2267+ LOG.warning(
2268+ "dname modified to remove invalid chars. old: '{}' new: '{}'"
2269+ .format(dname, sanitized))
2270+
2271+ rule.append("SYMLINK+=\"disk/by-dname/%s\"" % sanitized)
2272 LOG.debug("Writing dname udev rule '{}'".format(str(rule)))
2273 util.ensure_dir(rules_dir)
2274- with open(os.path.join(rules_dir, volume), "w") as fp:
2275- fp.write(', '.join(rule))
2276+ rule_file = os.path.join(rules_dir, '{}.rules'.format(sanitized))
2277+ util.write_file(rule_file, ', '.join(rule))
2278
2279
2280 def get_path_to_storage_volume(volume, storage_config):
2281@@ -368,9 +262,9 @@
2282 partnumber = determine_partition_number(vol.get('id'), storage_config)
2283 disk_block_path = get_path_to_storage_volume(vol.get('device'),
2284 storage_config)
2285- (base_path, disk_kname) = os.path.split(disk_block_path)
2286- partition_kname = determine_partition_kname(disk_kname, partnumber)
2287- volume_path = os.path.join(base_path, partition_kname)
2288+ disk_kname = block.path_to_kname(disk_block_path)
2289+ partition_kname = block.partition_kname(disk_kname, partnumber)
2290+ volume_path = block.kname_to_path(partition_kname)
2291 devsync_vol = os.path.join(disk_block_path)
2292
2293 elif vol.get('type') == "disk":
2294@@ -419,13 +313,15 @@
2295 # block devs are in the slaves dir there. Then, those blockdevs can be
2296 # checked against the kname of the devs in the config for the desired
2297 # bcache device. This is not very elegant though
2298- backing_device_kname = os.path.split(get_path_to_storage_volume(
2299- vol.get('backing_device'), storage_config))[-1]
2300+ backing_device_path = get_path_to_storage_volume(
2301+ vol.get('backing_device'), storage_config)
2302+ backing_device_kname = block.path_to_kname(backing_device_path)
2303 sys_path = list(filter(lambda x: backing_device_kname in x,
2304 glob.glob("/sys/block/bcache*/slaves/*")))[0]
2305 while "bcache" not in os.path.split(sys_path)[-1]:
2306 sys_path = os.path.split(sys_path)[0]
2307- volume_path = os.path.join("/dev", os.path.split(sys_path)[-1])
2308+ bcache_kname = block.path_to_kname(sys_path)
2309+ volume_path = block.kname_to_path(bcache_kname)
2310 LOG.debug('got bcache volume path {}'.format(volume_path))
2311
2312 else:
2313@@ -442,62 +338,35 @@
2314
2315
2316 def disk_handler(info, storage_config):
2317+ _dos_names = ['dos', 'msdos']
2318 ptable = info.get('ptable')
2319-
2320 disk = get_path_to_storage_volume(info.get('id'), storage_config)
2321
2322- # Handle preserve flag
2323- if info.get('preserve'):
2324- if not ptable:
2325- # Don't need to check state, return
2326- return
2327-
2328- # Check state of current ptable
2329- try:
2330- (out, _err) = util.subp(["blkid", "-o", "export", disk],
2331- capture=True)
2332- except util.ProcessExecutionError:
2333- raise ValueError("disk '%s' has no readable partition table or \
2334- cannot be accessed, but preserve is set to true, so cannot \
2335- continue")
2336- current_ptable = list(filter(lambda x: "PTTYPE" in x,
2337- out.splitlines()))[0].split("=")[-1]
2338- if current_ptable == "dos" and ptable != "msdos" or \
2339- current_ptable == "gpt" and ptable != "gpt":
2340- raise ValueError("disk '%s' does not have correct \
2341- partition table, but preserve is set to true, so not \
2342- creating table, so not creating table." % info.get('id'))
2343- LOG.info("disk '%s' marked to be preserved, so keeping partition \
2344- table")
2345- return
2346-
2347- # Wipe the disk
2348- if info.get('wipe') and info.get('wipe') != "none":
2349- # The disk has a lable, clear all partitions
2350- mdadm.mdadm_assemble(scan=True)
2351- disk_kname = os.path.split(disk)[-1]
2352- syspath_partitions = list(
2353- os.path.split(prt)[0] for prt in
2354- glob.glob("/sys/block/%s/*/partition" % disk_kname))
2355- for partition in syspath_partitions:
2356- clear_holders(partition)
2357- with open(os.path.join(partition, "dev"), "r") as fp:
2358- block_no = fp.read().rstrip()
2359- partition_path = os.path.realpath(
2360- os.path.join("/dev/block", block_no))
2361- block.wipe_volume(partition_path, mode=info.get('wipe'))
2362-
2363- clear_holders("/sys/block/%s" % disk_kname)
2364- block.wipe_volume(disk, mode=info.get('wipe'))
2365-
2366- # Create partition table on disk
2367- if info.get('ptable'):
2368- LOG.info("labeling device: '%s' with '%s' partition table", disk,
2369- ptable)
2370- if ptable == "gpt":
2371- util.subp(["sgdisk", "--clear", disk])
2372- elif ptable == "msdos":
2373- util.subp(["parted", disk, "--script", "mklabel", "msdos"])
2374+ if config.value_as_boolean(info.get('preserve')):
2375+ # Handle preserve flag, verifying if ptable specified in config
2376+ if config.value_as_boolean(ptable):
2377+ current_ptable = block.get_part_table_type(disk)
2378+ if not ((ptable in _dos_names and current_ptable in _dos_names) or
2379+ (ptable == 'gpt' and current_ptable == 'gpt')):
2380+ raise ValueError(
2381+ "disk '%s' does not have correct partition table or "
2382+ "cannot be read, but preserve is set to true. "
2383+ "cannot continue installation." % info.get('id'))
2384+ LOG.info("disk '%s' marked to be preserved, so keeping partition "
2385+ "table" % disk)
2386+ else:
2387+ # wipe the disk and create the partition table if instructed to do so
2388+ if config.value_as_boolean(info.get('wipe')):
2389+ block.wipe_volume(disk, mode=info.get('wipe'))
2390+ if config.value_as_boolean(ptable):
2391+ LOG.info("labeling device: '%s' with '%s' partition table", disk,
2392+ ptable)
2393+ if ptable == "gpt":
2394+ util.subp(["sgdisk", "--clear", disk])
2395+ elif ptable in _dos_names:
2396+ util.subp(["parted", disk, "--script", "mklabel", "msdos"])
2397+ else:
2398+                raise ValueError('invalid partition table type: %s' % ptable)
2399
2400 # Make the name if needed
2401 if info.get('name'):
2402@@ -542,13 +411,12 @@
2403
2404 disk = get_path_to_storage_volume(device, storage_config)
2405 partnumber = determine_partition_number(info.get('id'), storage_config)
2406-
2407- disk_kname = os.path.split(
2408- get_path_to_storage_volume(device, storage_config))[-1]
2409+ disk_kname = block.path_to_kname(disk)
2410+ disk_sysfs_path = block.sys_block_path(disk)
2411 # consider the disks logical sector size when calculating sectors
2412 try:
2413- prefix = "/sys/block/%s/queue/" % disk_kname
2414- with open(prefix + "logical_block_size", "r") as f:
2415+ lbs_path = os.path.join(disk_sysfs_path, 'queue', 'logical_block_size')
2416+ with open(lbs_path, 'r') as f:
2417 l = f.readline()
2418 logical_block_size_bytes = int(l)
2419 except:
2420@@ -566,17 +434,14 @@
2421 extended_part_no = determine_partition_number(
2422 key, storage_config)
2423 break
2424- partition_kname = determine_partition_kname(
2425- disk_kname, extended_part_no)
2426- previous_partition = "/sys/block/%s/%s/" % \
2427- (disk_kname, partition_kname)
2428+ pnum = extended_part_no
2429 else:
2430 pnum = find_previous_partition(device, info['id'], storage_config)
2431- LOG.debug("previous partition number for '%s' found to be '%s'",
2432- info.get('id'), pnum)
2433- partition_kname = determine_partition_kname(disk_kname, pnum)
2434- previous_partition = "/sys/block/%s/%s/" % \
2435- (disk_kname, partition_kname)
2436+
2437+ LOG.debug("previous partition number for '%s' found to be '%s'",
2438+ info.get('id'), pnum)
2439+ partition_kname = block.partition_kname(disk_kname, pnum)
2440+ previous_partition = os.path.join(disk_sysfs_path, partition_kname)
2441 LOG.debug("previous partition: {}".format(previous_partition))
2442 # XXX: sys/block/X/{size,start} is *ALWAYS* in 512b value
2443 previous_size = util.load_file(os.path.join(previous_partition,
2444@@ -629,9 +494,9 @@
2445 length_sectors = length_sectors + (logdisks * alignment_offset)
2446
2447 # Handle preserve flag
2448- if info.get('preserve'):
2449+ if config.value_as_boolean(info.get('preserve')):
2450 return
2451- elif storage_config.get(device).get('preserve'):
2452+ elif config.value_as_boolean(storage_config.get(device).get('preserve')):
2453 raise NotImplementedError("Partition '%s' is not marked to be \
2454 preserved, but device '%s' is. At this time, preserving devices \
2455 but not also the partitions on the devices is not supported, \
2456@@ -674,11 +539,16 @@
2457 else:
2458 raise ValueError("parent partition has invalid partition table")
2459
2460- # Wipe the partition if told to do so
2461- if info.get('wipe') and info.get('wipe') != "none":
2462- block.wipe_volume(
2463- get_path_to_storage_volume(info.get('id'), storage_config),
2464- mode=info.get('wipe'))
2465+ # Wipe the partition if told to do so, do not wipe dos extended partitions
2466+ # as this may damage the extended partition table
2467+ if config.value_as_boolean(info.get('wipe')):
2468+ if info.get('flag') == "extended":
2469+ LOG.warn("extended partitions do not need wiping, so skipping: "
2470+ "'%s'" % info.get('id'))
2471+ else:
2472+ block.wipe_volume(
2473+ get_path_to_storage_volume(info.get('id'), storage_config),
2474+ mode=info.get('wipe'))
2475 # Make the name if needed
2476 if storage_config.get(device).get('name') and partition_type != 'extended':
2477 make_dname(info.get('id'), storage_config)
2478@@ -694,7 +564,7 @@
2479 volume_path = get_path_to_storage_volume(volume, storage_config)
2480
2481 # Handle preserve flag
2482- if info.get('preserve'):
2483+ if config.value_as_boolean(info.get('preserve')):
2484 # Volume marked to be preserved, not formatting
2485 return
2486
2487@@ -776,26 +646,21 @@
2488 storage_config))
2489
2490 # Handle preserve flag
2491- if info.get('preserve'):
2492+ if config.value_as_boolean(info.get('preserve')):
2493 # LVM will probably be offline, so start it
2494 util.subp(["vgchange", "-a", "y"])
2495 # Verify that volgroup exists and contains all specified devices
2496- current_paths = []
2497- (out, _err) = util.subp(["pvdisplay", "-C", "--separator", "=", "-o",
2498- "vg_name,pv_name", "--noheadings"],
2499- capture=True)
2500- for line in out.splitlines():
2501- if name in line:
2502- current_paths.append(line.split("=")[-1])
2503- if set(current_paths) != set(device_paths):
2504- raise ValueError("volgroup '%s' marked to be preserved, but does \
2505- not exist or does not contain the right physical \
2506- volumes" % info.get('id'))
2507+ if set(lvm.get_pvols_in_volgroup(name)) != set(device_paths):
2508+ raise ValueError("volgroup '%s' marked to be preserved, but does "
2509+ "not exist or does not contain the right "
2510+ "physical volumes" % info.get('id'))
2511 else:
2512 # Create vgrcreate command and run
2513- cmd = ["vgcreate", name]
2514- cmd.extend(device_paths)
2515- util.subp(cmd)
2516+ # capture output to avoid printing it to log
2517+ util.subp(['vgcreate', name] + device_paths, capture=True)
2518+
2519+ # refresh lvmetad
2520+ lvm.lvm_scan()
2521
2522
2523 def lvm_partition_handler(info, storage_config):
2524@@ -805,28 +670,23 @@
2525 raise ValueError("lvm volgroup for lvm partition must be specified")
2526 if not name:
2527 raise ValueError("lvm partition name must be specified")
2528+ if info.get('ptable'):
2529+        raise ValueError("Partition tables on top of lvm logical volumes "
2530+                         "are not supported")
2531
2532 # Handle preserve flag
2533- if info.get('preserve'):
2534- (out, _err) = util.subp(["lvdisplay", "-C", "--separator", "=", "-o",
2535- "lv_name,vg_name", "--noheadings"],
2536- capture=True)
2537- found = False
2538- for line in out.splitlines():
2539- if name in line:
2540- if volgroup == line.split("=")[-1]:
2541- found = True
2542- break
2543- if not found:
2544- raise ValueError("lvm partition '%s' marked to be preserved, but \
2545- does not exist or does not mach storage \
2546- configuration" % info.get('id'))
2547+ if config.value_as_boolean(info.get('preserve')):
2548+ if name not in lvm.get_lvols_in_volgroup(volgroup):
2549+ raise ValueError("lvm partition '%s' marked to be preserved, but "
2550+                             "does not exist or does not match storage "
2551+ "configuration" % info.get('id'))
2552 elif storage_config.get(info.get('volgroup')).get('preserve'):
2553- raise NotImplementedError("Lvm Partition '%s' is not marked to be \
2554- preserved, but volgroup '%s' is. At this time, preserving \
2555- volgroups but not also the lvm partitions on the volgroup is \
2556- not supported, because of the possibility of damaging lvm \
2557- partitions intended to be preserved." % (info.get('id'), volgroup))
2558+ raise NotImplementedError(
2559+ "Lvm Partition '%s' is not marked to be preserved, but volgroup "
2560+ "'%s' is. At this time, preserving volgroups but not also the lvm "
2561+ "partitions on the volgroup is not supported, because of the "
2562+ "possibility of damaging lvm partitions intended to be "
2563+ "preserved." % (info.get('id'), volgroup))
2564 else:
2565 cmd = ["lvcreate", volgroup, "-n", name]
2566 if info.get('size'):
2567@@ -836,9 +696,8 @@
2568
2569 util.subp(cmd)
2570
2571- if info.get('ptable'):
2572- raise ValueError("Partition tables on top of lvm logical volumes is \
2573- not supported")
2574+ # refresh lvmetad
2575+ lvm.lvm_scan()
2576
2577 make_dname(info.get('id'), storage_config)
2578
2579@@ -925,7 +784,7 @@
2580 zip(spare_devices, spare_device_paths)))
2581
2582 # Handle preserve flag
2583- if info.get('preserve'):
2584+ if config.value_as_boolean(info.get('preserve')):
2585 # check if the array is already up, if not try to assemble
2586 if not mdadm.md_check(md_devname, raidlevel,
2587 device_paths, spare_device_paths):
2588@@ -981,9 +840,6 @@
2589 raise ValueError("backing device and cache device for bcache"
2590 " must be specified")
2591
2592- # The bcache module is not loaded when bcache is installed by apt-get, so
2593- # we will load it now
2594- util.subp(["modprobe", "bcache"])
2595 bcache_sysfs = "/sys/fs/bcache"
2596 udevadm_settle(exists=bcache_sysfs)
2597
2598@@ -1003,7 +859,7 @@
2599 bcache_device, expected)
2600 return
2601 LOG.debug('bcache device path not found: %s', expected)
2602- local_holders = get_holders(bcache_device)
2603+ local_holders = clear_holders.get_holders(bcache_device)
2604 LOG.debug('got initial holders being "%s"', local_holders)
2605 if len(local_holders) == 0:
2606 raise ValueError("holders == 0 , expected non-zero")
2607@@ -1033,7 +889,7 @@
2608
2609 if cache_device:
2610 # /sys/class/block/XXX/YYY/
2611- cache_device_sysfs = block_find_sysfs_path(cache_device)
2612+ cache_device_sysfs = block.sys_block_path(cache_device)
2613
2614 if os.path.exists(os.path.join(cache_device_sysfs, "bcache")):
2615 LOG.debug('caching device already exists at {}/bcache. Read '
2616@@ -1058,7 +914,7 @@
2617 ensure_bcache_is_registered(cache_device, target_sysfs_path)
2618
2619 if backing_device:
2620- backing_device_sysfs = block_find_sysfs_path(backing_device)
2621+ backing_device_sysfs = block.sys_block_path(backing_device)
2622 target_sysfs_path = os.path.join(backing_device_sysfs, "bcache")
2623 if not os.path.exists(os.path.join(backing_device_sysfs, "bcache")):
2624 util.subp(["make-bcache", "-B", backing_device])
2625@@ -1066,7 +922,7 @@
2626
2627 # via the holders we can identify which bcache device we just created
2628 # for a given backing device
2629- holders = get_holders(backing_device)
2630+ holders = clear_holders.get_holders(backing_device)
2631 if len(holders) != 1:
2632 err = ('Invalid number {} of holding devices:'
2633 ' "{}"'.format(len(holders), holders))
2634@@ -1158,6 +1014,21 @@
2635 # set up reportstack
2636 stack_prefix = state.get('report_stack_prefix', '')
2637
2638+ # shut down any already existing storage layers above any disks used in
2639+ # config that have 'wipe' set
2640+ with events.ReportEventStack(
2641+ name=stack_prefix, reporting_enabled=True, level='INFO',
2642+ description="removing previous storage devices"):
2643+ clear_holders.start_clear_holders_deps()
2644+ disk_paths = [get_path_to_storage_volume(k, storage_config_dict)
2645+ for (k, v) in storage_config_dict.items()
2646+ if v.get('type') == 'disk' and
2647+ config.value_as_boolean(v.get('wipe')) and
2648+ not config.value_as_boolean(v.get('preserve'))]
2649+ clear_holders.clear_holders(disk_paths)
2650+ # if anything was not properly shut down, stop installation
2651+ clear_holders.assert_clear(disk_paths)
2652+
2653 for item_id, command in storage_config_dict.items():
2654 handler = command_handlers.get(command['type'])
2655 if not handler:
2656
2657=== modified file 'curtin/commands/block_wipe.py'
2658--- curtin/commands/block_wipe.py 2016-05-10 16:13:29 +0000
2659+++ curtin/commands/block_wipe.py 2016-10-03 18:55:20 +0000
2660@@ -21,7 +21,6 @@
2661
2662
2663 def wipe_main(args):
2664- # curtin clear-holders device [device2 [device3]]
2665 for blockdev in args.devices:
2666 try:
2667 block.wipe_volume(blockdev, mode=args.mode)
2668@@ -36,7 +35,7 @@
2669 CMD_ARGUMENTS = (
2670 ((('-m', '--mode'),
2671 {'help': 'mode for wipe.', 'action': 'store',
2672- 'default': 'superblocks',
2673+ 'default': 'superblock',
2674 'choices': ['zero', 'superblock', 'superblock-recursive', 'random']}),
2675 ('devices',
2676 {'help': 'devices to wipe', 'default': [], 'nargs': '+'}),
2677
2678=== added file 'curtin/commands/clear_holders.py'
2679--- curtin/commands/clear_holders.py 1970-01-01 00:00:00 +0000
2680+++ curtin/commands/clear_holders.py 2016-10-03 18:55:20 +0000
2681@@ -0,0 +1,48 @@
2682+# Copyright (C) 2016 Canonical Ltd.
2683+#
2684+# Author: Wesley Wiedenmeier <wesley.wiedenmeier@canonical.com>
2685+#
2686+# Curtin is free software: you can redistribute it and/or modify it under
2687+# the terms of the GNU Affero General Public License as published by the
2688+# Free Software Foundation, either version 3 of the License, or (at your
2689+# option) any later version.
2690+#
2691+# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
2692+# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
2693+# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
2694+# more details.
2695+#
2696+# You should have received a copy of the GNU Affero General Public License
2697+# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
2698+
2699+from curtin import block
2700+from . import populate_one_subcmd
2701+
2702+
2703+def clear_holders_main(args):
2704+ """
2705+ wrapper for clear_holders accepting cli args
2706+ """
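+    # e.g. (illustrative): curtin clear-holders /dev/sda /dev/sdb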
2707+ if (not all(block.is_block_device(device) for device in args.devices) or
2708+ len(args.devices) == 0):
2709+ raise ValueError('invalid devices specified')
2710+ block.clear_holders.start_clear_holders_deps()
2711+ block.clear_holders.clear_holders(args.devices, try_preserve=args.preserve)
2712+    if args.preserve:
2713+        print('ran clear_holders attempting to preserve data; however, '
2714+              'hotplug support for some devices may cause holders to restart')
2715+ block.clear_holders.assert_clear(args.devices)
2716+
2717+
2718+CMD_ARGUMENTS = (
2719+ (('devices',
2720+ {'help': 'devices to free', 'default': [], 'nargs': '+'}),
2721+ (('-p', '--preserve'),
2722+ {'help': 'try to shut down holders without erasing anything',
2723+ 'default': False, 'action': 'store_true'}),
2724+ )
2725+)
2726+
2727+
2728+def POPULATE_SUBCMD(parser):
2729+ populate_one_subcmd(parser, CMD_ARGUMENTS, clear_holders_main)
2730
2731=== modified file 'curtin/commands/curthooks.py'
2732--- curtin/commands/curthooks.py 2016-10-03 18:00:41 +0000
2733+++ curtin/commands/curthooks.py 2016-10-03 18:55:20 +0000
2734@@ -16,10 +16,8 @@
2735 # along with Curtin. If not, see <http://www.gnu.org/licenses/>.
2736
2737 import copy
2738-import glob
2739 import os
2740 import platform
2741-import re
2742 import sys
2743 import shutil
2744 import textwrap
2745@@ -30,8 +28,8 @@
2746 from curtin.log import LOG
2747 from curtin import swap
2748 from curtin import util
2749-from curtin import net
2750 from curtin.reporter import events
2751+from curtin.commands import apply_net, apt_config
2752
2753 from . import populate_one_subcmd
2754
2755@@ -90,45 +88,15 @@
2756 info.get('perms', "0644")))
2757
2758
2759-def apt_config(cfg, target):
2760- # cfg['apt_proxy']
2761-
2762- proxy_cfg_path = os.path.sep.join(
2763- [target, '/etc/apt/apt.conf.d/90curtin-aptproxy'])
2764- if cfg.get('apt_proxy'):
2765- util.write_file(
2766- proxy_cfg_path,
2767- content='Acquire::HTTP::Proxy "%s";\n' % cfg['apt_proxy'])
2768+def do_apt_config(cfg, target):
2769+ cfg = apt_config.translate_old_apt_features(cfg)
2770+ apt_cfg = cfg.get("apt")
2771+ if apt_cfg is not None:
2772+ LOG.info("curthooks handling apt to target %s with config %s",
2773+ target, apt_cfg)
2774+ apt_config.handle_apt(apt_cfg, target)
2775 else:
2776- if os.path.isfile(proxy_cfg_path):
2777- os.unlink(proxy_cfg_path)
2778-
2779- # cfg['apt_mirrors']
2780- # apt_mirrors:
2781- # ubuntu_archive: http://local.archive/ubuntu
2782- # ubuntu_security: http://local.archive/ubuntu
2783- sources_list = os.path.sep.join([target, '/etc/apt/sources.list'])
2784- if (isinstance(cfg.get('apt_mirrors'), dict) and
2785- os.path.isfile(sources_list)):
2786- repls = [
2787- ('ubuntu_archive', r'http://\S*[.]*archive.ubuntu.com/\S*'),
2788- ('ubuntu_security', r'http://security.ubuntu.com/\S*'),
2789- ]
2790- content = None
2791- for name, regex in repls:
2792- mirror = cfg['apt_mirrors'].get(name)
2793- if not mirror:
2794- continue
2795-
2796- if content is None:
2797- with open(sources_list) as fp:
2798- content = fp.read()
2799- util.write_file(sources_list + ".dist", content)
2800-
2801- content = re.sub(regex, mirror + " ", content)
2802-
2803- if content is not None:
2804- util.write_file(sources_list, content)
2805+ LOG.info("No apt config provided, skipping")
2806
2807
2808 def disable_overlayroot(cfg, target):
2809@@ -140,51 +108,6 @@
2810 shutil.move(local_conf, local_conf + ".old")
2811
2812
2813-def clean_cloud_init(target):
2814- flist = glob.glob(
2815- os.path.sep.join([target, "/etc/cloud/cloud.cfg.d/*dpkg*"]))
2816-
2817- LOG.debug("cleaning cloud-init config from: %s" % flist)
2818- for dpkg_cfg in flist:
2819- os.unlink(dpkg_cfg)
2820-
2821-
2822-def _maybe_remove_legacy_eth0(target,
2823- path="/etc/network/interfaces.d/eth0.cfg"):
2824- """Ubuntu cloud images previously included a 'eth0.cfg' that had
2825- hard coded content. That file would interfere with the rendered
2826- configuration if it was present.
2827-
2828- if the file does not exist do nothing.
2829- If the file exists:
2830- - with known content, remove it and warn
2831- - with unknown content, leave it and warn
2832- """
2833-
2834- cfg = os.path.sep.join([target, path])
2835- if not os.path.exists(cfg):
2836- LOG.warn('Failed to find legacy conf file %s', cfg)
2837- return
2838-
2839- bmsg = "Dynamic networking config may not apply."
2840- try:
2841- contents = util.load_file(cfg)
2842- known_contents = ["auto eth0", "iface eth0 inet dhcp"]
2843- lines = [f.strip() for f in contents.splitlines()
2844- if not f.startswith("#")]
2845- if lines == known_contents:
2846- util.del_file(cfg)
2847- msg = "removed %s with known contents" % cfg
2848- else:
2849- msg = (bmsg + " '%s' exists with user configured content." % cfg)
2850- except:
2851- msg = bmsg + " %s exists, but could not be read." % cfg
2852- LOG.exception(msg)
2853- return
2854-
2855- LOG.warn(msg)
2856-
2857-
2858 def setup_zipl(cfg, target):
2859 if platform.machine() != 's390x':
2860 return
2861@@ -232,8 +155,8 @@
2862 def run_zipl(cfg, target):
2863 if platform.machine() != 's390x':
2864 return
2865- with util.RunInChroot(target) as in_chroot:
2866- in_chroot(['zipl'])
2867+ with util.ChrootableTarget(target) as in_chroot:
2868+ in_chroot.subp(['zipl'])
2869
2870
2871 def install_kernel(cfg, target):
2872@@ -250,126 +173,45 @@
2873 mapping = copy.deepcopy(KERNEL_MAPPING)
2874 config.merge_config(mapping, kernel_cfg.get('mapping', {}))
2875
2876- with util.RunInChroot(target) as in_chroot:
2877-
2878- if kernel_package:
2879- util.install_packages([kernel_package], target=target)
2880- return
2881-
2882- # uname[2] is kernel name (ie: 3.16.0-7-generic)
2883- # version gets X.Y.Z, flavor gets anything after second '-'.
2884- kernel = os.uname()[2]
2885- codename, err = in_chroot(['lsb_release', '--codename', '--short'],
2886- capture=True)
2887- codename = codename.strip()
2888- version, abi, flavor = kernel.split('-', 2)
2889-
2890- try:
2891- map_suffix = mapping[codename][version]
2892- except KeyError:
2893- LOG.warn("Couldn't detect kernel package to install for %s."
2894- % kernel)
2895- if kernel_fallback is not None:
2896- util.install_packages([kernel_fallback], target=target)
2897- return
2898-
2899- package = "linux-{flavor}{map_suffix}".format(
2900- flavor=flavor, map_suffix=map_suffix)
2901-
2902- if util.has_pkg_available(package, target):
2903- if util.has_pkg_installed(package, target):
2904- LOG.debug("Kernel package '%s' already installed", package)
2905- else:
2906- LOG.debug("installing kernel package '%s'", package)
2907- util.install_packages([package], target=target)
2908- else:
2909- if kernel_fallback is not None:
2910- LOG.info("Kernel package '%s' not available. "
2911- "Installing fallback package '%s'.",
2912- package, kernel_fallback)
2913- util.install_packages([kernel_fallback], target=target)
2914- else:
2915- LOG.warn("Kernel package '%s' not available and no fallback."
2916- " System may not boot.", package)
2917-
2918-
2919-def apply_debconf_selections(cfg, target):
2920- # debconf_selections:
2921- # set1: |
2922- # cloud-init cloud-init/datasources multiselect MAAS
2923- # set2: pkg pkg/value string bar
2924- selsets = cfg.get('debconf_selections')
2925- if not selsets:
2926- LOG.debug("debconf_selections was not set in config")
2927- return
2928-
2929- # for each entry in selections, chroot and apply them.
2930- # keep a running total of packages we've seen.
2931- pkgs_cfgd = set()
2932- for key, content in selsets.items():
2933- LOG.debug("setting for %s, %s" % (key, content))
2934- util.subp(['chroot', target, 'debconf-set-selections'],
2935- data=content.encode())
2936- for line in content.splitlines():
2937- if line.startswith("#"):
2938- continue
2939- pkg = re.sub(r"[:\s].*", "", line)
2940- pkgs_cfgd.add(pkg)
2941-
2942- pkgs_installed = get_installed_packages(target)
2943-
2944- LOG.debug("pkgs_cfgd: %s" % pkgs_cfgd)
2945- LOG.debug("pkgs_installed: %s" % pkgs_installed)
2946- need_reconfig = pkgs_cfgd.intersection(pkgs_installed)
2947-
2948- if len(need_reconfig) == 0:
2949- LOG.debug("no need for reconfig")
2950- return
2951-
2952- # For any packages that are already installed, but have preseed data
2953- # we populate the debconf database, but the filesystem configuration
2954- # would be preferred on a subsequent dpkg-reconfigure.
2955- # so, what we have to do is "know" information about certain packages
2956- # to unconfigure them.
2957- unhandled = []
2958- to_config = []
2959- for pkg in need_reconfig:
2960- if pkg in CONFIG_CLEANERS:
2961- LOG.debug("unconfiguring %s" % pkg)
2962- CONFIG_CLEANERS[pkg](target)
2963- to_config.append(pkg)
2964- else:
2965- unhandled.append(pkg)
2966-
2967- if len(unhandled):
2968- LOG.warn("The following packages were installed and preseeded, "
2969- "but cannot be unconfigured: %s", unhandled)
2970-
2971- util.subp(['chroot', target, 'dpkg-reconfigure',
2972- '--frontend=noninteractive'] +
2973- list(to_config), data=None)
2974-
2975-
2976-def get_installed_packages(target=None):
2977- cmd = []
2978- if target is not None:
2979- cmd = ['chroot', target]
2980- cmd.extend(['dpkg-query', '--list'])
2981-
2982- (out, _err) = util.subp(cmd, capture=True)
2983- if isinstance(out, bytes):
2984- out = out.decode()
2985-
2986- pkgs_inst = set()
2987- for line in out.splitlines():
2988- try:
2989- (state, pkg, other) = line.split(None, 2)
2990- except ValueError:
2991- continue
2992- if state.startswith("hi") or state.startswith("ii"):
2993- pkgs_inst.add(re.sub(":.*", "", pkg))
2994-
2995- return pkgs_inst
2996+ if kernel_package:
2997+ util.install_packages([kernel_package], target=target)
2998+ return
2999+
3000+ # uname[2] is kernel name (ie: 3.16.0-7-generic)
3001+ # version gets X.Y.Z, flavor gets anything after second '-'.
3002+ kernel = os.uname()[2]
3003+ codename, _ = util.subp(['lsb_release', '--codename', '--short'],
3004+ capture=True, target=target)
3005+ codename = codename.strip()
3006+ version, abi, flavor = kernel.split('-', 2)
3007+
3008+ try:
3009+ map_suffix = mapping[codename][version]
3010+ except KeyError:
3011+ LOG.warn("Couldn't detect kernel package to install for %s."
3012+ % kernel)
3013+ if kernel_fallback is not None:
3014+ util.install_packages([kernel_fallback], target=target)
3015+ return
3016+
3017+ package = "linux-{flavor}{map_suffix}".format(
3018+ flavor=flavor, map_suffix=map_suffix)
3019+
3020+ if util.has_pkg_available(package, target):
3021+ if util.has_pkg_installed(package, target):
3022+ LOG.debug("Kernel package '%s' already installed", package)
3023+ else:
3024+ LOG.debug("installing kernel package '%s'", package)
3025+ util.install_packages([package], target=target)
3026+ else:
3027+ if kernel_fallback is not None:
3028+ LOG.info("Kernel package '%s' not available. "
3029+ "Installing fallback package '%s'.",
3030+ package, kernel_fallback)
3031+ util.install_packages([kernel_fallback], target=target)
3032+ else:
3033+ LOG.warn("Kernel package '%s' not available and no fallback."
3034+ " System may not boot.", package)
3035
3036
3037 def setup_grub(cfg, target):
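The version/abi/flavor split above drives the kernel package selection; a quick sketch of the parsing (the kernel release string and map_suffix here are illustrative, the real suffix comes from KERNEL_MAPPING) ::

    # os.uname()[2] is the running kernel release, e.g. '4.4.0-42-generic'
    kernel = '4.4.0-42-generic'
    version, abi, flavor = kernel.split('-', 2)
    # version == '4.4.0', abi == '42', flavor == 'generic'
    package = "linux-{flavor}{map_suffix}".format(
        flavor=flavor, map_suffix='-lts-xenial')
    # package == 'linux-generic-lts-xenial'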
3038@@ -498,12 +340,11 @@
3039 util.subp(args + instdevs, env=env)
3040
3041
3042-def update_initramfs(target, all_kernels=False):
3043+def update_initramfs(target=None, all_kernels=False):
3044 cmd = ['update-initramfs', '-u']
3045 if all_kernels:
3046 cmd.extend(['-k', 'all'])
3047- with util.RunInChroot(target) as in_chroot:
3048- in_chroot(cmd)
3049+ util.subp(cmd, target=target)
3050
3051
3052 def copy_fstab(fstab, target):
3053@@ -533,7 +374,6 @@
3054
3055
3056 def apply_networking(target, state):
3057- netstate = state.get('network_state')
3058 netconf = state.get('network_config')
3059 interfaces = state.get('interfaces')
3060
3061@@ -544,22 +384,13 @@
3062 return True
3063 return False
3064
3065- ns = None
3066- if is_valid_src(netstate):
3067- LOG.debug("applying network_state")
3068- ns = net.network_state.from_state_file(netstate)
3069- elif is_valid_src(netconf):
3070- LOG.debug("applying network_config")
3071- ns = net.parse_net_config(netconf)
3072-
3073- if ns is not None:
3074- net.render_network_state(target=target, network_state=ns)
3075+ if is_valid_src(netconf):
3076+ LOG.info("applying network_config")
3077+ apply_net.apply_net(target, network_state=None, network_config=netconf)
3078 else:
3079 LOG.debug("copying interfaces")
3080 copy_interfaces(interfaces, target)
3081
3082- _maybe_remove_legacy_eth0(target)
3083-
3084
3085 def copy_interfaces(interfaces, target):
3086 if not interfaces:
3087@@ -704,8 +535,8 @@
3088
3089 # FIXME: this assumes grub. need more generic way to update root=
3090 util.ensure_dir(os.path.sep.join([target, os.path.dirname(grub_dev)]))
3091- with util.RunInChroot(target) as in_chroot:
3092- in_chroot(['update-grub'])
3093+ with util.ChrootableTarget(target) as in_chroot:
3094+ in_chroot.subp(['update-grub'])
3095
3096 else:
3097 LOG.warn("Not sure how this will boot")
3098@@ -740,7 +571,7 @@
3099 }
3100
3101 needed_packages = []
3102- installed_packages = get_installed_packages(target)
3103+ installed_packages = util.get_installed_packages(target)
3104 for cust_cfg, pkg_reqs in custom_configs.items():
3105 if cust_cfg not in cfg:
3106 continue
3107@@ -820,7 +651,7 @@
3108 name=stack_prefix, reporting_enabled=True, level="INFO",
3109 description="writing config files and configuring apt"):
3110 write_files(cfg, target)
3111- apt_config(cfg, target)
3112+ do_apt_config(cfg, target)
3113 disable_overlayroot(cfg, target)
3114
3115 # packages may be needed prior to installing kernel
3116@@ -834,8 +665,8 @@
3117 copy_mdadm_conf(mdadm_location, target)
3118 # as per https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/964052
3119 # reconfigure mdadm
3120- util.subp(['chroot', target, 'dpkg-reconfigure',
3121- '--frontend=noninteractive', 'mdadm'], data=None)
3122+ util.subp(['dpkg-reconfigure', '--frontend=noninteractive', 'mdadm'],
3123+ data=None, target=target)
3124
3125 with events.ReportEventStack(
3126 name=stack_prefix, reporting_enabled=True, level="INFO",
3127@@ -843,7 +674,6 @@
3128 setup_zipl(cfg, target)
3129 install_kernel(cfg, target)
3130 run_zipl(cfg, target)
3131- apply_debconf_selections(cfg, target)
3132
3133 restore_dist_interfaces(cfg, target)
3134
3135@@ -906,8 +736,4 @@
3136 populate_one_subcmd(parser, CMD_ARGUMENTS, curthooks)
3137
3138
3139-CONFIG_CLEANERS = {
3140- 'cloud-init': clean_cloud_init,
3141-}
3142-
3143 # vi: ts=4 expandtab syntax=python
3144
3145=== modified file 'curtin/commands/main.py'
3146--- curtin/commands/main.py 2016-05-10 16:13:29 +0000
3147+++ curtin/commands/main.py 2016-10-03 18:55:20 +0000
3148@@ -26,9 +26,10 @@
3149 from ..deps import install_deps
3150
3151 SUB_COMMAND_MODULES = [
3152- 'apply_net', 'block-meta', 'block-wipe', 'curthooks', 'extract',
3153- 'hook', 'in-target', 'install', 'mkfs', 'net-meta',
3154- 'pack', 'swap', 'system-install', 'system-upgrade']
3155+ 'apply_net', 'block-info', 'block-meta', 'block-wipe', 'curthooks',
3156+ 'clear-holders', 'extract', 'hook', 'in-target', 'install', 'mkfs',
3157+ 'net-meta', 'apt-config', 'pack', 'swap', 'system-install',
3158+ 'system-upgrade']
3159
3160
3161 def add_subcmd(subparser, subcmd):
3162
3163=== modified file 'curtin/config.py'
3164--- curtin/config.py 2016-03-18 14:16:45 +0000
3165+++ curtin/config.py 2016-10-03 18:55:20 +0000
3166@@ -138,6 +138,5 @@
3167
3168
3169 def value_as_boolean(value):
3170- if value in (False, None, '0', 0, 'False', 'false', ''):
3171- return False
3172- return True
3173+ false_values = (False, None, 0, '0', 'False', 'false', 'None', 'none', '')
3174+ return value not in false_values
3175
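The rewritten value_as_boolean is a membership test rather than a truthiness check; only the listed values (now including 'None'/'none') are false-ish ::

    from curtin.config import value_as_boolean

    value_as_boolean('none')   # False - newly recognized
    value_as_boolean('0')      # False
    value_as_boolean('no')     # True - not in the false_values tuple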
3176=== added file 'curtin/gpg.py'
3177--- curtin/gpg.py 1970-01-01 00:00:00 +0000
3178+++ curtin/gpg.py 2016-10-03 18:55:20 +0000
3179@@ -0,0 +1,74 @@
3180+# Copyright (C) 2016 Canonical Ltd.
3181+#
3182+# Author: Scott Moser <scott.moser@canonical.com>
3183+# Christian Ehrhardt <christian.ehrhardt@canonical.com>
3184+#
3185+# Curtin is free software: you can redistribute it and/or modify it under
3186+# the terms of the GNU Affero General Public License as published by the
3187+# Free Software Foundation, either version 3 of the License, or (at your
3188+# option) any later version.
3189+#
3190+# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
3191+# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
3192+# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
3193+# more details.
3194+#
3195+# You should have received a copy of the GNU Affero General Public License
3196+# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
3197+""" gpg.py
3198+gpg related utilities to get raw key data by id
3199+"""
3200+
3201+from curtin import util
3202+
3203+from .log import LOG
3204+
3205+
3206+def export_armour(key):
3207+ """Export gpg key, armoured key gets returned"""
3208+ try:
3209+ (armour, _) = util.subp(["gpg", "--export", "--armour", key],
3210+ capture=True)
3211+ except util.ProcessExecutionError as error:
3212+ # debug, since it happens for any key not on the system initially
3213+ LOG.debug('Failed to export armoured key "%s": %s', key, error)
3214+ armour = None
3215+ return armour
3216+
3217+
3218+def recv_key(key, keyserver):
3219+ """Receive gpg key from the specified keyserver"""
3220+ LOG.debug('Receive gpg key "%s"', key)
3221+ try:
3222+ util.subp(["gpg", "--keyserver", keyserver, "--recv", key],
3223+ capture=True)
3224+ except util.ProcessExecutionError as error:
3225+ raise ValueError(('Failed to import key "%s" '
3226+ 'from server "%s" - error %s') %
3227+ (key, keyserver, error))
3228+
3229+
3230+def delete_key(key):
3231+ """Delete the specified key from the local gpg ring"""
3232+ try:
3233+ util.subp(["gpg", "--batch", "--yes", "--delete-keys", key],
3234+ capture=True)
3235+ except util.ProcessExecutionError as error:
3236+ LOG.warn('Failed to delete key "%s": %s', key, error)
3237+
3238+
3239+def getkeybyid(keyid, keyserver='keyserver.ubuntu.com'):
3240+ """get armoured gpg key for keyid, fetching from keyserver if needed"""
3241+ armour = export_armour(keyid)
3242+ if not armour:
3243+ try:
3244+ recv_key(keyid, keyserver=keyserver)
3245+ armour = export_armour(keyid)
3246+ except ValueError:
3247+ LOG.exception('Failed to obtain gpg key %s', keyid)
3248+ raise
3249+ finally:
3250+ # delete just imported key to leave environment as it was before
3251+ delete_key(keyid)
3252+
3253+ return armour
3254
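A minimal usage sketch for the new module (the keyid is illustrative): an already-present key is exported directly, otherwise it is fetched from the keyserver, exported, and deleted again so the local keyring is left unchanged ::

    from curtin import gpg

    # hypothetical short keyid; raises ValueError if the keyserver
    # cannot provide the key
    armour = gpg.getkeybyid('F430BBA5')
    # armour now holds the ASCII-armoured public key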
3255=== modified file 'curtin/net/__init__.py'
3256--- curtin/net/__init__.py 2016-10-03 18:00:41 +0000
3257+++ curtin/net/__init__.py 2016-10-03 18:55:20 +0000
3258@@ -299,7 +299,7 @@
3259 mac = iface.get('mac_address', '')
3260 # len(macaddr) == 2 * 6 + 5 == 17
3261 if ifname and mac and len(mac) == 17:
3262- content += generate_udev_rule(ifname, mac)
3263+ content += generate_udev_rule(ifname, mac.lower())
3264
3265 return content
3266
3267@@ -349,7 +349,7 @@
3268 'subnets',
3269 'type',
3270 ]
3271- if iface['type'] not in ['bond', 'bridge']:
3272+ if iface['type'] not in ['bond', 'bridge', 'vlan']:
3273 ignore_map.append('mac_address')
3274
3275 for key, value in iface.items():
3276@@ -361,26 +361,52 @@
3277 return content
3278
3279
3280-def render_route(route):
3281- content = "up route add"
3282+def render_route(route, indent=""):
3283+ """When rendering routes for an iface, in some cases applying a route
3284+ may result in the route command returning non-zero which produces
3285+ some confusing output for users manually using ifup/ifdown[1]. To
3286+ that end, we will optionally include an '|| true' postfix to each
3287+ route line allowing users to work with ifup/ifdown without using
3288+ --force option.
3289+
2890+ We may at some point not want to emit this additional postfix, and
3291+ add a 'strict' flag to this function. When called with strict=True,
3292+ then we will not append the postfix.
3293+
3294+ 1. http://askubuntu.com/questions/168033/
3295+ how-to-set-static-routes-in-ubuntu-server
3296+ """
3297+ content = []
3298+ up = indent + "post-up route add"
3299+ down = indent + "pre-down route del"
3300+ or_true = " || true"
3301 mapping = {
3302 'network': '-net',
3303 'netmask': 'netmask',
3304 'gateway': 'gw',
3305 'metric': 'metric',
3306 }
3307- for k in ['network', 'netmask', 'gateway', 'metric']:
3308- if k in route:
3309- content += " %s %s" % (mapping[k], route[k])
3310-
3311- content += '\n'
3312- return content
3313-
3314-
3315-def iface_start_entry(iface, index):
3316+ if route['network'] == '0.0.0.0' and route['netmask'] == '0.0.0.0':
3317+ default_gw = " default gw %s" % route['gateway']
3318+ content.append(up + default_gw + or_true)
3319+ content.append(down + default_gw + or_true)
3320+ elif route['network'] == '::' and route['netmask'] == 0:
3321+ # ipv6!
3322+ default_gw = " -A inet6 default gw %s" % route['gateway']
3323+ content.append(up + default_gw + or_true)
3324+ content.append(down + default_gw + or_true)
3325+ else:
3326+ route_line = ""
3327+ for k in ['network', 'netmask', 'gateway', 'metric']:
3328+ if k in route:
3329+ route_line += " %s %s" % (mapping[k], route[k])
3330+ content.append(up + route_line + or_true)
3331+ content.append(down + route_line + or_true)
3332+ return "\n".join(content)
3333+
3334+
3335+def iface_start_entry(iface):
3336 fullname = iface['name']
3337- if index != 0:
3338- fullname += ":%s" % index
3339
3340 control = iface['control']
3341 if control == "auto":
3342@@ -397,6 +423,16 @@
3343 "iface {fullname} {inet} {mode}\n").format(**subst)
3344
3345
3346+def subnet_is_ipv6(subnet):
3347+ # 'static6' or 'dhcp6'
3348+ if subnet['type'].endswith('6'):
3349+ # This is a request for DHCPv6.
3350+ return True
3351+ elif subnet['type'] == 'static' and ":" in subnet['address']:
3352+ return True
3353+ return False
3354+
3355+
3356 def render_interfaces(network_state):
3357 ''' Given state, emit etc/network/interfaces content '''
3358
3359@@ -424,42 +460,43 @@
3360 content += "\n"
3361 subnets = iface.get('subnets', {})
3362 if subnets:
3363- for index, subnet in zip(range(0, len(subnets)), subnets):
3364+ for index, subnet in enumerate(subnets):
3365 if content[-2:] != "\n\n":
3366 content += "\n"
3367 iface['index'] = index
3368 iface['mode'] = subnet['type']
3369 iface['control'] = subnet.get('control', 'auto')
3370 subnet_inet = 'inet'
3371- if iface['mode'].endswith('6'):
3372- # This is a request for DHCPv6.
3373- subnet_inet += '6'
3374- elif iface['mode'] == 'static' and ":" in subnet['address']:
3375- # This is a static IPv6 address.
3376+ if subnet_is_ipv6(subnet):
3377 subnet_inet += '6'
3378 iface['inet'] = subnet_inet
3379- if iface['mode'].startswith('dhcp'):
3380+ if subnet['type'].startswith('dhcp'):
3381 iface['mode'] = 'dhcp'
3382
3383- content += iface_start_entry(iface, index)
3384+ # do not emit multiple 'auto $IFACE' lines as older (precise)
3385+ # ifupdown complains
3386+ if "auto %s\n" % (iface['name']) in content:
3387+ iface['control'] = 'alias'
3388+
3389+ content += iface_start_entry(iface)
3390 content += iface_add_subnet(iface, subnet)
3391 content += iface_add_attrs(iface, index)
3392- if len(subnets) > 1 and index == 0:
3393- for i in range(1, len(subnets)):
3394- content += " post-up ifup %s:%s\n" % (iface['name'],
3395- i)
3396+
3397+ for route in subnet.get('routes', []):
3398+ content += render_route(route, indent=" ") + '\n'
3399+
3400 else:
3401 # ifenslave docs say to auto the slave devices
3402- if 'bond-master' in iface:
3403+ if 'bond-master' in iface or 'bond-slaves' in iface:
3404 content += "auto {name}\n".format(**iface)
3405 content += "iface {name} {inet} {mode}\n".format(**iface)
3406- content += iface_add_attrs(iface, index)
3407+ content += iface_add_attrs(iface, 0)
3408
3409 for route in network_state.get('routes'):
3410 content += render_route(route)
3411
3412 # global replacements until v2 format
3413- content = content.replace('mac_address', 'hwaddress')
3414+ content = content.replace('mac_address', 'hwaddress ether')
3415
3416 # Play nice with others and source eni config files
3417 content += "\nsource /etc/network/interfaces.d/*.cfg\n"
3418
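With this rewrite every route becomes a post-up/pre-down pair carrying the '|| true' postfix; for a default route (gateway illustrative) the rendered text looks like ::

    from curtin.net import render_route

    route = {'network': '0.0.0.0', 'netmask': '0.0.0.0',
             'gateway': '192.168.0.1'}
    print(render_route(route, indent="    "))
    #     post-up route add default gw 192.168.0.1 || true
    #     pre-down route del default gw 192.168.0.1 || true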
3419=== modified file 'curtin/net/network_state.py'
3420--- curtin/net/network_state.py 2015-10-02 16:19:07 +0000
3421+++ curtin/net/network_state.py 2016-10-03 18:55:20 +0000
3422@@ -121,6 +121,18 @@
3423 iface = interfaces.get(command['name'], {})
3424 for param, val in command.get('params', {}).items():
3425 iface.update({param: val})
3426+
3427+ # convert subnet ipv6 netmask to cidr as needed
3428+ subnets = command.get('subnets')
3429+ if subnets:
3430+ for subnet in subnets:
3431+ if subnet['type'] == 'static':
3432+ if 'netmask' in subnet and ':' in subnet['address']:
3433+ subnet['netmask'] = mask2cidr(subnet['netmask'])
3434+ for route in subnet.get('routes', []):
3435+ if 'netmask' in route:
3436+ route['netmask'] = mask2cidr(route['netmask'])
3437+
3438 iface.update({
3439 'name': command.get('name'),
3440 'type': command.get('type'),
3441@@ -130,7 +142,7 @@
3442 'mtu': command.get('mtu'),
3443 'address': None,
3444 'gateway': None,
3445- 'subnets': command.get('subnets'),
3446+ 'subnets': subnets,
3447 })
3448 self.network_state['interfaces'].update({command.get('name'): iface})
3449 self.dump_network_state()
3450@@ -141,6 +153,7 @@
3451 iface eth0.222 inet static
3452 address 10.10.10.1
3453 netmask 255.255.255.0
3454+ hwaddress ether BC:76:4E:06:96:B3
3455 vlan-raw-device eth0
3456 '''
3457 required_keys = [
3458@@ -332,6 +345,37 @@
3459 return ".".join([str(x) for x in mask])
3460
3461
3462+def ipv4mask2cidr(mask):
3463+ if '.' not in mask:
3464+ return mask
3465+ return sum([bin(int(x)).count('1') for x in mask.split('.')])
3466+
3467+
3468+def ipv6mask2cidr(mask):
3469+ if ':' not in mask:
3470+ return mask
3471+
3472+ bitCount = [0, 0x8000, 0xc000, 0xe000, 0xf000, 0xf800, 0xfc00, 0xfe00,
3473+ 0xff00, 0xff80, 0xffc0, 0xffe0, 0xfff0, 0xfff8, 0xfffc,
3474+ 0xfffe, 0xffff]
3475+ cidr = 0
3476+ for word in mask.split(':'):
3477+ if not word or int(word, 16) == 0:
3478+ break
3479+ cidr += bitCount.index(int(word, 16))
3480+
3481+ return cidr
3482+
3483+
3484+def mask2cidr(mask):
3485+ if ':' in mask:
3486+ return ipv6mask2cidr(mask)
3487+ elif '.' in mask:
3488+ return ipv4mask2cidr(mask)
3489+ else:
3490+ return mask
3491+
3492+
3493 if __name__ == '__main__':
3494 import sys
3495 import random
3496
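The new mask2cidr helpers convert dotted-quad IPv4 or colon-separated IPv6 netmasks into prefix lengths, passing through values already in CIDR form ::

    from curtin.net.network_state import mask2cidr

    mask2cidr('255.255.255.0')          # 24
    mask2cidr('ffff:ffff:ffff:ffff::')  # 64
    mask2cidr('24')                     # returned unchanged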
3497=== modified file 'curtin/util.py'
3498--- curtin/util.py 2016-10-03 18:00:41 +0000
3499+++ curtin/util.py 2016-10-03 18:55:20 +0000
3500@@ -16,18 +16,35 @@
3501 # along with Curtin. If not, see <http://www.gnu.org/licenses/>.
3502
3503 import argparse
3504+import collections
3505 import errno
3506 import glob
3507 import json
3508 import os
3509 import platform
3510+import re
3511 import shutil
3512+import socket
3513 import subprocess
3514 import stat
3515 import sys
3516 import tempfile
3517 import time
3518
3519+# avoid the dependency to python3-six as used in cloud-init
3520+try:
3521+ from urlparse import urlparse
3522+except ImportError:
3523+ # python3
3524+ # avoid triggering pylint, https://github.com/PyCQA/pylint/issues/769
3525+ # pylint:disable=import-error,no-name-in-module
3526+ from urllib.parse import urlparse
3527+
3528+try:
3529+ string_types = (basestring,)
3530+except NameError:
3531+ string_types = (str,)
3532+
3533 from .log import LOG
3534
3535 _INSTALLED_HELPERS_PATH = '/usr/lib/curtin/helpers'
3536@@ -35,14 +52,22 @@
3537
3538 _LSB_RELEASE = {}
3539
3540+_DNS_REDIRECT_IP = None
3541+
3542+# matcher used in template rendering functions
3543+BASIC_MATCHER = re.compile(r'\$\{([A-Za-z0-9_.]+)\}|\$([A-Za-z0-9_.]+)')
3544+
3545
3546 def _subp(args, data=None, rcs=None, env=None, capture=False, shell=False,
3547- logstring=False, decode="replace"):
3548+ logstring=False, decode="replace", target=None):
3549 if rcs is None:
3550 rcs = [0]
3551
3552 devnull_fp = None
3553 try:
3554+ if target_path(target) != "/":
3555+ args = ['chroot', target] + list(args)
3556+
3557 if not logstring:
3558 LOG.debug(("Running command %s with allowed return codes %s"
3559 " (shell=%s, capture=%s)"), args, rcs, shell, capture)
3560@@ -118,6 +143,8 @@
3561 a list of times to sleep in between retries. After each failure
3562 subp will sleep for N seconds and then try again. A value of [1, 3]
3563 means to run, sleep 1, run, sleep 3, run and then return exit code.
3564+ :param target:
3565+ run the command as 'chroot target <args>'
3566 """
3567 retries = []
3568 if "retries" in kwargs:
3569@@ -277,15 +304,29 @@
3570
3571
3572 def write_file(filename, content, mode=0o644, omode="w"):
3573+ """
3574+ write 'content' to file at 'filename' using python open mode 'omode'.
3575+ if mode is set, chmod the file to mode; mode defaults to 0o644
3576+ """
3577 ensure_dir(os.path.dirname(filename))
3578 with open(filename, omode) as fp:
3579 fp.write(content)
3580- os.chmod(filename, mode)
3581-
3582-
3583-def load_file(path, mode="r"):
3584+ if mode:
3585+ os.chmod(filename, mode)
3586+
3587+
3588+def load_file(path, mode="r", read_len=None, offset=0):
3589 with open(path, mode) as fp:
3590- return fp.read()
3591+ if offset:
3592+ fp.seek(offset)
3593+ return fp.read(read_len) if read_len else fp.read()
3594+
3595+
3596+def file_size(path):
3597+ """get the size of a file"""
3598+ with open(path, 'rb') as fp:
3599+ fp.seek(0, 2)
3600+ return fp.tell()
3601
3602
3603 def del_file(path):
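load_file can now return a slice of a file, and file_size reports length by seeking to the end; for example (the path is illustrative) ::

    from curtin import util

    # read 512 bytes starting at offset 1024
    chunk = util.load_file('/tmp/disk.img', mode='rb',
                           read_len=512, offset=1024)
    size = util.file_size('/tmp/disk.img')  # size in bytes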
3604@@ -311,7 +352,7 @@
3605 'done',
3606 ''])
3607
3608- fpath = os.path.join(target, "usr/sbin/policy-rc.d")
3609+ fpath = target_path(target, "/usr/sbin/policy-rc.d")
3610
3611 if os.path.isfile(fpath):
3612 return False
3613@@ -322,7 +363,7 @@
3614
3615 def undisable_daemons_in_root(target):
3616 try:
3617- os.unlink(os.path.join(target, "usr/sbin/policy-rc.d"))
3618+ os.unlink(target_path(target, "/usr/sbin/policy-rc.d"))
3619 except OSError as e:
3620 if e.errno != errno.ENOENT:
3621 raise
3622@@ -334,7 +375,7 @@
3623 def __init__(self, target, allow_daemons=False, sys_resolvconf=True):
3624 if target is None:
3625 target = "/"
3626- self.target = os.path.abspath(target)
3627+ self.target = target_path(target)
3628 self.mounts = ["/dev", "/proc", "/sys"]
3629 self.umounts = []
3630 self.disabled_daemons = False
3631@@ -344,20 +385,21 @@
3632
3633 def __enter__(self):
3634 for p in self.mounts:
3635- tpath = os.path.join(self.target, p[1:])
3636+ tpath = target_path(self.target, p)
3637 if do_mount(p, tpath, opts='--bind'):
3638 self.umounts.append(tpath)
3639
3640 if not self.allow_daemons:
3641 self.disabled_daemons = disable_daemons_in_root(self.target)
3642
3643- target_etc = os.path.join(self.target, "etc")
3644+ rconf = target_path(self.target, "/etc/resolv.conf")
3645+ target_etc = os.path.dirname(rconf)
3646 if self.target != "/" and os.path.isdir(target_etc):
3647 # never muck with resolv.conf on /
3648 rconf = os.path.join(target_etc, "resolv.conf")
3649 rtd = None
3650 try:
3651- rtd = tempfile.mkdtemp(dir=os.path.dirname(rconf))
3652+ rtd = tempfile.mkdtemp(dir=target_etc)
3653 tmp = os.path.join(rtd, "resolv.conf")
3654 os.rename(rconf, tmp)
3655 self.rconf_d = rtd
3656@@ -375,25 +417,23 @@
3657 undisable_daemons_in_root(self.target)
3658
3659 # if /dev is to be unmounted, udevadm settle (LP: #1462139)
3660- if os.path.join(self.target, "dev") in self.umounts:
3661+ if target_path(self.target, "/dev") in self.umounts:
3662 subp(['udevadm', 'settle'])
3663
3664 for p in reversed(self.umounts):
3665 do_umount(p)
3666
3667- rconf = os.path.join(self.target, "etc", "resolv.conf")
3668+ rconf = target_path(self.target, "/etc/resolv.conf")
3669 if self.sys_resolvconf and self.rconf_d:
3670 os.rename(os.path.join(self.rconf_d, "resolv.conf"), rconf)
3671 shutil.rmtree(self.rconf_d)
3672
3673+ def subp(self, *args, **kwargs):
3674+ kwargs['target'] = self.target
3675+ return subp(*args, **kwargs)
3676
3677-class RunInChroot(ChrootableTarget):
3678- def __call__(self, args, **kwargs):
3679- if self.target != "/":
3680- chroot = ["chroot", self.target]
3681- else:
3682- chroot = []
3683- return subp(chroot + args, **kwargs)
3684+ def path(self, path):
3685+ return target_path(self.target, path)
3686
3687
3688 def is_exe(fpath):
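ChrootableTarget now exposes subp and path directly, which is what the converted call sites above rely on; the pattern is ::

    from curtin import util

    with util.ChrootableTarget('/target') as in_chroot:
        in_chroot.subp(['update-grub'])        # chrooted command
        hosts = in_chroot.path('/etc/hosts')   # '/target/etc/hosts'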
3689@@ -402,14 +442,13 @@
3690
3691
3692 def which(program, search=None, target=None):
3693- if target is None or os.path.realpath(target) == "/":
3694- target = "/"
3695+ target = target_path(target)
3696
3697 if os.path.sep in program:
3698 # if program had a '/' in it, then do not search PATH
3699 # 'which' does consider cwd here. (cd / && which bin/ls) = bin/ls
3700 # so effectively we set cwd to / (or target)
3701- if is_exe(os.path.sep.join((target, program,))):
3702+ if is_exe(target_path(target, program)):
3703 return program
3704
3705 if search is None:
3706@@ -424,8 +463,9 @@
3707 search = [os.path.abspath(p) for p in search]
3708
3709 for path in search:
3710- if is_exe(os.path.sep.join((target, path, program,))):
3711- return os.path.sep.join((path, program,))
3712+ ppath = os.path.sep.join((path, program))
3713+ if is_exe(target_path(target, ppath)):
3714+ return ppath
3715
3716 return None
3717
3718@@ -467,33 +507,39 @@
3719
3720
3721 def get_architecture(target=None):
3722- chroot = []
3723- if target is not None:
3724- chroot = ['chroot', target]
3725- out, _ = subp(chroot + ['dpkg', '--print-architecture'],
3726- capture=True)
3727+ out, _ = subp(['dpkg', '--print-architecture'], capture=True,
3728+ target=target)
3729 return out.strip()
3730
3731
3732 def has_pkg_available(pkg, target=None):
3733- chroot = []
3734- if target is not None:
3735- chroot = ['chroot', target]
3736- out, _ = subp(chroot + ['apt-cache', 'pkgnames'], capture=True)
3737+ out, _ = subp(['apt-cache', 'pkgnames'], capture=True, target=target)
3738 for item in out.splitlines():
3739 if pkg == item.strip():
3740 return True
3741 return False
3742
3743
3744+def get_installed_packages(target=None):
3745+ (out, _) = subp(['dpkg-query', '--list'], target=target, capture=True)
3746+
3747+ pkgs_inst = set()
3748+ for line in out.splitlines():
3749+ try:
3750+ (state, pkg, other) = line.split(None, 2)
3751+ except ValueError:
3752+ continue
3753+ if state.startswith("hi") or state.startswith("ii"):
3754+ pkgs_inst.add(re.sub(":.*", "", pkg))
3755+
3756+ return pkgs_inst
3757+
3758+
3759 def has_pkg_installed(pkg, target=None):
3760- chroot = []
3761- if target is not None:
3762- chroot = ['chroot', target]
3763 try:
3764- out, _ = subp(chroot + ['dpkg-query', '--show', '--showformat',
3765- '${db:Status-Abbrev}', pkg],
3766- capture=True)
3767+ out, _ = subp(['dpkg-query', '--show', '--showformat',
3768+ '${db:Status-Abbrev}', pkg],
3769+ capture=True, target=target)
3770 return out.rstrip() == "ii"
3771 except ProcessExecutionError:
3772 return False
3773@@ -542,13 +588,9 @@
3774 """Use dpkg-query to extract package pkg's version string
3775 and parse the version string into a dictionary
3776 """
3777- chroot = []
3778- if target is not None:
3779- chroot = ['chroot', target]
3780 try:
3781- out, _ = subp(chroot + ['dpkg-query', '--show', '--showformat',
3782- '${Version}', pkg],
3783- capture=True)
3784+ out, _ = subp(['dpkg-query', '--show', '--showformat',
3785+ '${Version}', pkg], capture=True, target=target)
3786 raw = out.rstrip()
3787 return parse_dpkg_version(raw, name=pkg, semx=semx)
3788 except ProcessExecutionError:
3789@@ -600,11 +642,11 @@
3790 if comment.endswith("\n"):
3791 comment = comment[:-1]
3792
3793- marker = os.path.join(target, marker)
3794+ marker = target_path(target, marker)
3795 # if marker exists, check if there are files that would make it obsolete
3796- listfiles = [os.path.join(target, "etc/apt/sources.list")]
3797+ listfiles = [target_path(target, "/etc/apt/sources.list")]
3798 listfiles += glob.glob(
3799- os.path.join(target, "etc/apt/sources.list.d/*.list"))
3800+ target_path(target, "etc/apt/sources.list.d/*.list"))
3801
3802 if os.path.exists(marker) and not force:
3803 if len(find_newer(marker, listfiles)) == 0:
3804@@ -612,7 +654,7 @@
3805
3806 restore_perms = []
3807
3808- abs_tmpdir = tempfile.mkdtemp(dir=os.path.join(target, 'tmp'))
3809+ abs_tmpdir = tempfile.mkdtemp(dir=target_path(target, "/tmp"))
3810 try:
3811 abs_slist = abs_tmpdir + "/sources.list"
3812 abs_slistd = abs_tmpdir + "/sources.list.d"
3813@@ -621,8 +663,8 @@
3814 ch_slistd = ch_tmpdir + "/sources.list.d"
3815
3816 # this file gets executed on apt-get update sometimes. (LP: #1527710)
3817- motd_update = os.path.join(
3818- target, "usr/lib/update-notifier/update-motd-updates-available")
3819+ motd_update = target_path(
3820+ target, "/usr/lib/update-notifier/update-motd-updates-available")
3821 pmode = set_unexecutable(motd_update)
3822 if pmode is not None:
3823 restore_perms.append((motd_update, pmode),)
3824@@ -647,8 +689,8 @@
3825 'update']
3826
3827 # do not using 'run_apt_command' so we can use 'retries' to subp
3828- with RunInChroot(target, allow_daemons=True) as inchroot:
3829- inchroot(update_cmd, env=env, retries=retries)
3830+ with ChrootableTarget(target, allow_daemons=True) as inchroot:
3831+ inchroot.subp(update_cmd, env=env, retries=retries)
3832 finally:
3833 for fname, perms in restore_perms:
3834 os.chmod(fname, perms)
3835@@ -685,9 +727,8 @@
3836 return env, cmd
3837
3838 apt_update(target, env=env, comment=' '.join(cmd))
3839- ric = RunInChroot(target, allow_daemons=allow_daemons)
3840- with ric as inchroot:
3841- return inchroot(cmd, env=env)
3842+ with ChrootableTarget(target, allow_daemons=allow_daemons) as inchroot:
3843+ return inchroot.subp(cmd, env=env)
3844
3845
3846 def system_upgrade(aptopts=None, target=None, env=None, allow_daemons=False):
3847@@ -716,7 +757,7 @@
3848 """
3849 Look for "hook" in "target" and run it
3850 """
3851- target_hook = os.path.join(target, 'curtin', hook)
3852+ target_hook = target_path(target, '/curtin/' + hook)
3853 if os.path.isfile(target_hook):
3854 LOG.debug("running %s" % target_hook)
3855 subp([target_hook])
3856@@ -828,6 +869,18 @@
3857 return val
3858
3859
3860+def bytes2human(size):
3861+ """convert size in bytes to human readable"""
3862+ if not (isinstance(size, (int, float)) and
3863+ int(size) == size and
3864+ int(size) >= 0):
3865+ raise ValueError('size must be an integral value')
3866+ mpliers = {'B': 1, 'K': 2 ** 10, 'M': 2 ** 20, 'G': 2 ** 30, 'T': 2 ** 40}
3867+ unit_order = sorted(mpliers, key=lambda x: -1 * mpliers[x])
3868+ unit = next((u for u in unit_order if (size / mpliers[u]) >= 1), 'B')
3869+ return str(int(size / mpliers[unit])) + unit
3870+
3871+
3872 def import_module(import_str):
3873 """Import a module."""
3874 __import__(import_str)
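bytes2human picks the largest unit whose multiplier fits and truncates the count to an integer ::

    from curtin.util import bytes2human

    bytes2human(2048)          # '2K'
    bytes2human(10 * 2 ** 30)  # '10G'
    bytes2human(1000)          # '1000B' - below 1K, stays in bytes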
3875@@ -843,30 +896,42 @@
3876
3877
3878 def is_file_not_found_exc(exc):
3879- return (isinstance(exc, IOError) and exc.errno == errno.ENOENT)
3880-
3881-
3882-def lsb_release():
3883+ return (isinstance(exc, (IOError, OSError)) and
3884+ hasattr(exc, 'errno') and
3885+ exc.errno in (errno.ENOENT, errno.EIO, errno.ENXIO))
3886+
3887+
3888+def _lsb_release(target=None):
3889 fmap = {'Codename': 'codename', 'Description': 'description',
3890 'Distributor ID': 'id', 'Release': 'release'}
3891+
3892+ data = {}
3893+ try:
3894+ out, _ = subp(['lsb_release', '--all'], capture=True, target=target)
3895+ for line in out.splitlines():
3896+ fname, _, val = line.partition(":")
3897+ if fname in fmap:
3898+ data[fmap[fname]] = val.strip()
3899+ missing = [k for k in fmap.values() if k not in data]
3900+ if len(missing):
3901+ LOG.warn("Missing fields in lsb_release --all output: %s",
3902+ ','.join(missing))
3903+
3904+ except ProcessExecutionError as err:
3905+ LOG.warn("Unable to get lsb_release --all: %s", err)
3906+ data = {v: "UNAVAILABLE" for v in fmap.values()}
3907+
3908+ return data
3909+
3910+
3911+def lsb_release(target=None):
3912+ if target_path(target) != "/":
3913+ # do not use or update cache if target is provided
3914+ return _lsb_release(target)
3915+
3916 global _LSB_RELEASE
3917 if not _LSB_RELEASE:
3918- data = {}
3919- try:
3920- out, err = subp(['lsb_release', '--all'], capture=True)
3921- for line in out.splitlines():
3922- fname, tok, val = line.partition(":")
3923- if fname in fmap:
3924- data[fmap[fname]] = val.strip()
3925- missing = [k for k in fmap.values() if k not in data]
3926- if len(missing):
3927- LOG.warn("Missing fields in lsb_release --all output: %s",
3928- ','.join(missing))
3929-
3930- except ProcessExecutionError as e:
3931- LOG.warn("Unable to get lsb_release --all: %s", e)
3932- data = {v: "UNAVAILABLE" for v in fmap.values()}
3933-
3934+ data = _lsb_release()
3935 _LSB_RELEASE.update(data)
3936 return _LSB_RELEASE
3937
3938@@ -881,8 +946,7 @@
3939
3940
3941 def json_dumps(data):
3942- return json.dumps(data, indent=1, sort_keys=True,
3943- separators=(',', ': ')).encode('utf-8')
3944+ return json.dumps(data, indent=1, sort_keys=True, separators=(',', ': '))
3945
3946
3947 def get_platform_arch():
3948@@ -895,4 +959,137 @@
3949 }
3950 return platform2arch.get(platform.machine(), platform.machine())
3951
3952+
3953+def basic_template_render(content, params):
3954+ """This does simple replacement of bash variable like templates.
3955+
3956+ It identifies patterns like ${a} or $a and can also identify patterns like
3957+ ${a.b} or $a.b which will look for a key 'b' in the dictionary rooted
3958+ by key 'a'.
3959+ """
3960+
3961+ def replacer(match):
3962+ """ replacer
3963+ replacer used in regex match to replace content
3964+ """
3965+ # Only 1 of the 2 groups will actually have a valid entry.
3966+ name = match.group(1)
3967+ if name is None:
3968+ name = match.group(2)
3969+ if name is None:
3970+ raise RuntimeError("Match encountered but no valid group present")
3971+ path = collections.deque(name.split("."))
3972+ selected_params = params
3973+ while len(path) > 1:
3974+ key = path.popleft()
3975+ if not isinstance(selected_params, dict):
3976+ raise TypeError("Can not traverse into"
3977+ " non-dictionary '%s' of type %s while"
3978+ " looking for subkey '%s'"
3979+ % (selected_params,
3980+ selected_params.__class__.__name__,
3981+ key))
3982+ selected_params = selected_params[key]
3983+ key = path.popleft()
3984+ if not isinstance(selected_params, dict):
3985+ raise TypeError("Can not extract key '%s' from non-dictionary"
3986+ " '%s' of type %s"
3987+ % (key, selected_params,
3988+ selected_params.__class__.__name__))
3989+ return str(selected_params[key])
3990+
3991+ return BASIC_MATCHER.sub(replacer, content)
3992+
3993+
3994+def render_string(content, params):
3995+ """ render_string
3996+ render a string following replacement rules as defined in
3997+ basic_template_render returning the string
3998+ """
3999+ if not params:
4000+ params = {}
4001+ return basic_template_render(content, params)
4002+
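render_string resolves ${a}/$a style references, including dotted lookups into nested dicts; for example (values illustrative) ::

    from curtin.util import render_string

    params = {'release': 'xenial',
              'mirror': {'primary': 'http://archive.ubuntu.com/ubuntu'}}
    render_string('deb ${mirror.primary} $release main', params)
    # -> 'deb http://archive.ubuntu.com/ubuntu xenial main'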
4003+
4004+def is_resolvable(name):
4005+ """determine if a url is resolvable, return a boolean
4006+ This also attempts to be resilent against dns redirection.
4007+
4008+ Note, that normal nsswitch resolution is used here. So in order
4009+ to avoid any utilization of 'search' entries in /etc/resolv.conf
4010+ we have to append '.'.
4011+
4012+ The top level 'invalid' domain is invalid per RFC. And example.com
4013+ should also not exist. The random entry will be resolved inside
4014+ the search list.
4015+ """
4016+ global _DNS_REDIRECT_IP
4017+ if _DNS_REDIRECT_IP is None:
4018+ badips = set()
4019+ badnames = ("does-not-exist.example.com.", "example.invalid.")
4020+ badresults = {}
4021+ for iname in badnames:
4022+ try:
4023+ result = socket.getaddrinfo(iname, None, 0, 0,
4024+ socket.SOCK_STREAM,
4025+ socket.AI_CANONNAME)
4026+ badresults[iname] = []
4027+ for (_, _, _, cname, sockaddr) in result:
4028+ badresults[iname].append("%s: %s" % (cname, sockaddr[0]))
4029+ badips.add(sockaddr[0])
4030+ except (socket.gaierror, socket.error):
4031+ pass
4032+ _DNS_REDIRECT_IP = badips
4033+ if badresults:
4034+ LOG.debug("detected dns redirection: %s", badresults)
4035+
4036+ try:
4037+ result = socket.getaddrinfo(name, None)
4038+ # check first result's sockaddr field
4039+ addr = result[0][4][0]
4040+ if addr in _DNS_REDIRECT_IP:
4041+ LOG.debug("dns %s in _DNS_REDIRECT_IP", name)
4042+ return False
4043+ LOG.debug("dns %s resolved to '%s'", name, result)
4044+ return True
4045+ except (socket.gaierror, socket.error):
4046+ LOG.debug("dns %s failed to resolve", name)
4047+ return False
4048+
4049+
4050+def is_resolvable_url(url):
4051+ """determine if this url is resolvable (existing or ip)."""
4052+ return is_resolvable(urlparse(url).hostname)
4053+
4054+
4055+def target_path(target, path=None):
4056+ # return 'path' inside target, accepting target as None
4057+ if target in (None, ""):
4058+ target = "/"
4059+ elif not isinstance(target, string_types):
4060+ raise ValueError("Unexpected input for target: %s" % target)
4061+ else:
4062+ target = os.path.abspath(target)
4063+ # abspath("//") returns "//" specifically for 2 slashes.
4064+ if target.startswith("//"):
4065+ target = target[1:]
4066+
4067+ if not path:
4068+ return target
4069+
4070+ # os.path.join("/etc", "/foo") returns "/foo". Chomp all leading /.
4071+ while len(path) and path[0] == "/":
4072+ path = path[1:]
4073+
4074+ return os.path.join(target, path)
4075+
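target_path normalizes the target and joins even absolute paths beneath it ::

    from curtin.util import target_path

    target_path(None)                     # '/'
    target_path('/target', '/etc/hosts')  # '/target/etc/hosts'
    target_path('//target', 'etc')        # '/target/etc'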
4076+
4077+class RunInChroot(ChrootableTarget):
4078+ """Backwards compatibility for RunInChroot (LP: #1617375).
4079+ It needs to work like:
4080+ with RunInChroot("/target") as in_chroot:
4081+ in_chroot(["your", "chrooted", "command"])"""
4082+ __call__ = ChrootableTarget.subp
4083+
4084+
4085 # vi: ts=4 expandtab syntax=python
4086
4087=== modified file 'debian/changelog'
4088--- debian/changelog 2016-10-03 17:23:32 +0000
4089+++ debian/changelog 2016-10-03 18:55:20 +0000
4090@@ -1,8 +1,38 @@
4091-curtin (0.1.0~bzr399-0ubuntu1~16.04.1ubuntu1) UNRELEASED; urgency=medium
4092+curtin (0.1.0~bzr425-0ubuntu1~16.04.1) xenial-proposed; urgency=medium
4093
4094+ [ Scott Moser ]
4095 * debian/new-upstream-snapshot: add writing of debian changelog entries.
4096
4097- -- Scott Moser <smoser@ubuntu.com> Mon, 03 Oct 2016 13:23:11 -0400
4098+ [ Ryan Harper ]
4099+ * New upstream snapshot.
4100+ - unittest,tox.ini: catch and fix issue with trusty-level mock of open
4101+ - block/mdadm: add option to ignore mdadm_assemble errors (LP: #1618429)
4102+ - curtin/doc: overhaul curtin documentation for readthedocs.org (LP: #1351085)
4103+ - curtin.util: re-add support for RunInChroot (LP: #1617375)
4104+ - curtin/net: overhaul of eni rendering to handle mixed ipv4/ipv6 configs
4105+ - curtin.block: refactor clear_holders logic into block.clear_holders and cli cmd
4106+ - curtin.apply_net should exit non-zero upon exception. (LP: #1615780)
4107+ - apt: fix bug in disable_suites if sources.list line is blank.
4108+ - vmtests: disable Wily in vmtests
4109+ - Fix the unittests for test_apt_source.
4110+ - get CURTIN_VMTEST_PARALLEL shown correctly in jenkins-runner output
4111+ - fix vmtest check_file_strippedline to strip lines before comparing
4112+ - fix whitespace damage in tests/vmtests/__init__.py
4113+ - fix dpkg-reconfigure when debconf_selections was provided. (LP: #1609614)
4114+ - fix apt tests on non-intel arch
4115+ - Add apt features to curtin. (LP: #1574113)
4116+ - vmtest: easier use of parallel and controlling timeouts
4117+ - mkfs.vfat: add force flag for formatting whole disks (LP: #1597923)
4118+ - block.mkfs: fix sectorsize flag (LP: #1597522)
4119+ - block_meta: cleanup use of sys_block_path and handle cciss knames (LP: #1562249)
4120+ - block.get_blockdev_sector_size: handle _lsblock multi result return (LP: #1598310)
4121+ - util: add target (chroot) support to subp, add target_path helper.
4122+ - block_meta: fallback to parted if blkid does not produce output (LP: #1524031)
4123+ - commands.block_wipe: correct default wipe mode to 'superblock'
4124+ - tox.ini: run coverage normally rather than separately
4125+ - move uefi boot knowledge from launch and vmtest to xkvm
4126+
4127+ -- Ryan Harper <ryan.harper@canonical.com> Mon, 03 Oct 2016 13:43:54 -0500
4128
4129 curtin (0.1.0~bzr399-0ubuntu1~16.04.1) xenial-proposed; urgency=medium
4130
4131
4132=== modified file 'doc/conf.py'
4133--- doc/conf.py 2015-10-02 16:19:07 +0000
4134+++ doc/conf.py 2016-10-03 18:55:20 +0000
4135@@ -13,6 +13,11 @@
4136
4137 import sys, os
4138
4139+# Fix path so we can import curtin.__version__
4140+sys.path.insert(1, os.path.realpath(os.path.join(
4141+ os.path.dirname(__file__), '..')))
4142+import curtin
4143+
4144 # If extensions (or modules to document with autodoc) are in another directory,
4145 # add these directories to sys.path here. If the directory is relative to the
4146 # documentation root, use os.path.abspath to make it absolute, like shown here.
4147@@ -41,16 +46,16 @@
4148
4149 # General information about the project.
4150 project = u'curtin'
4151-copyright = u'2013, Scott Moser'
4152+copyright = u'2016, Scott Moser, Ryan Harper'
4153
4154 # The version info for the project you're documenting, acts as replacement for
4155 # |version| and |release|, also used in various other places throughout the
4156 # built documents.
4157 #
4158 # The short X.Y version.
4159-version = '0.3'
4160+version = curtin.__version__
4161 # The full version, including alpha/beta/rc tags.
4162-release = '0.3'
4163+release = version
4164
4165 # The language for content autogenerated by Sphinx. Refer to documentation
4166 # for a list of supported languages.
4167@@ -93,6 +98,18 @@
4168 # a list of builtin themes.
4169 html_theme = 'classic'
4170
4171+# on_rtd is whether we are on readthedocs.org, this line of code grabbed from
4172+# docs.readthedocs.org
4173+on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
4174+
4175+if not on_rtd: # only import and set the theme if we're building docs locally
4176+ import sphinx_rtd_theme
4177+ html_theme = 'sphinx_rtd_theme'
4178+ html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
4179+
4180+# otherwise, readthedocs.org uses their theme by default, so no need to specify
4181+# it
4182+
4183 # Theme options are theme-specific and customize the look and feel of a theme
4184 # further. For a list of options available for each theme, see the
4185 # documentation.
4186@@ -120,7 +137,7 @@
4187 # Add any paths that contain custom static files (such as style sheets) here,
4188 # relative to this directory. They are copied after the builtin static files,
4189 # so a file named "default.css" will overwrite the builtin "default.css".
4190-html_static_path = ['static']
4191+#html_static_path = ['static']
4192
4193 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
4194 # using the given strftime format.
4195
4196=== removed file 'doc/devel/README-vmtest.txt'
4197--- doc/devel/README-vmtest.txt 2016-02-12 21:54:46 +0000
4198+++ doc/devel/README-vmtest.txt 1970-01-01 00:00:00 +0000
4199@@ -1,152 +0,0 @@
4200-== Background ==
4201-Curtin includes a mechanism called 'vmtest' that allows it to actually
4202-do installs and validate a number of configurations.
4203-
4204-The general flow of the vmtests is:
4205- 1. each test has an associated yaml config file for curtin in examples/tests
4206- 2. uses curtin-pack to create the user-data for cloud-init to trigger install
4207- 3. create and install a system using 'tools/launch'.
4208- 3.1 The install environment is booted from a maas ephemeral image.
4209- 3.2 kernel & initrd used are from maas images (not part of the image)
4210- 3.3 network by default is handled via user networking
4211- 3.4 It creates all empty disks required
4212- 3.5 cloud-init datasource is provided by launch
4213- a) like: ds=nocloud-net;seedfrom=http://10.7.0.41:41518/
4214- provided by python webserver start_http
4215- b) via -drive file=/tmp/launch.8VOiOn/seed.img,if=virtio,media=cdrom
4216- as a seed disk (if booted without external kernel)
4217- 3.6 dependencies and other preparations are installed at the beginning by
4218- curtin inside the ephemeral image prior to configuring the target
4219- 4. power off the system.
4220- 5. configure a 'NoCloud' datasource seed image that provides scripts that
4221- will run on first boot.
4222- 5.1 this will contain all our code to gather health data on the install
4223- 5.2 by cloud-init design this runs only once per instance, if you start
4224- the system again this won't be called again
4225- 6. boot the installed system with 'tools/xkvm'.
4226- 6.1 reuses the disks that were installed/configured in the former steps
4227- 6.2 also adds an output disk
4228- 6.3 additionally the seed image for the data gathering is added
4229- 6.4 On this boot it will run the provided scripts, write their output to a
4230- "data" disk and then shut itself down.
4231- 7. extract the data from the output disk
4232- 8. vmtest python code now verifies if the output is as expected.
4233-
4234-== Debugging ==
4235-At 3.1
4236- - one can pull data out of the maas image with
4237- sudo mount-image-callback your.img -- sh -c 'COMMAND'
4238- e.g. sudo mount-image-callback your.img -- sh -c 'cp $MOUNTPOINT/boot/* .'
4239-At step 3.6 -> 4.
4240- - tools/launch can be called in a way to give you console access
4241- to do so just call tools/launch but drop the -serial=x parameter.
4242- One might want to change "'power_state': {'mode': 'poweroff'}" to avoid
4243- the auto reboot before getting control
4244- Replace the directory usually seen in the launch calls with a clean fresh
4245- directory
4246- - In /curtin curtin and its config can be found
4247- - if the system gets that far cloud-init will create a user ubuntu/passw0rd
4248- - otherwise one can use a cloud-image from https://cloud-images.ubuntu.com/
4249- and add a backdoor user via
4250- bzr branch lp:~maas-maintainers/maas/backdoor-image backdoor-image
4251- sudo ./backdoor-image -v --user=<USER> --password-auth --password=<PW> IMG
4252-At step 6 -> 7
4253- - You might want to keep all the temporary images around.
4254- To do so you can set CURTIN_VMTEST_KEEP_DATA_PASS=all:
4255- export CURTIN_VMTEST_KEEP_DATA_PASS=all CURTIN_VMTEST_KEEP_DATA_FAIL=all
4256- That will keep the /tmp/tmpXXXXX directories and all files in there for
4257- further execution.
4258-At step 7
4259- - You might want to take a look at the output disk yourself.
4260- It is a normal qcow image, so one can use mount-image-callback as described
4261- above
4262- - to invoke xkvm on your own take the command you see in the output and
4263- remove the "-serial ..." but add -nographic instead
4264- For graphical console one can add --vnc 127.0.0.1:1
4265-
4266-== Setup ==
4267-In order to run vmtest you'll need some dependencies. To get them, you
4268-can run:
4269- make vmtest-deps
4270-
4271-That will install all necessary dependencies.
4272-
4273-== Running ==
4274-Running tests is done most simply by:
4275-
4276- make vmtest
4277-
4278-If you wish to all tests in test_network.py, do so with:
4279- sudo PATH=$PWD/tools:$PATH nosetests3 tests/vmtests/test_network.py
4280-
4281-Or run a single test with:
4282- sudo PATH=$PWD/tools:$PATH nosetests3 tests/vmtests/test_network.py:WilyTestBasic
4283-
4284-Note:
4285- * currently, the tests have to run as root. The reason for this is that
4286- the kernel and initramfs to boot are extracted from the maas ephemeral
4287- image. This should be fixed at some point, and then 'make vmtest'
4288-
4289- The tests themselves don't actually have to run as root, but the
4290- test setup does.
4291- * the 'tools' directory must be in your path.
4292- * test will set apt_proxy in the guests to the value of
4293- 'apt_proxy' environment variable. If that is not set it will
4294- look at the host's apt config and read 'Acquire::HTTP::Proxy'
4295-
4296-== Environment Variables ==
4297-Some environment variables affect the running of vmtest
4298- * apt_proxy:
4299- test will set apt_proxy in the guests to the value of 'apt_proxy'.
4300- If that is not set it will look at the host's apt config and read
4301- 'Acquire::HTTP::Proxy'
4302-
4303- * CURTIN_VMTEST_KEEP_DATA_PASS CURTIN_VMTEST_KEEP_DATA_FAIL:
4304- default:
4305- CURTIN_VMTEST_KEEP_DATA_PASS=none
4306- CURTIN_VMTEST_KEEP_DATA_FAIL=all
4307- These 2 variables determine what portions of the temporary
4308- test data are kept.
4309-
4310- The variables contain a comma ',' delimited list of directories
4311- that should be kept in the case of pass or fail. Additionally,
4312- the values 'all' and 'none' are accepted.
4313-
4314- Each vmtest that runs has its own sub-directory under the top level
4315- CURTIN_VMTEST_TOPDIR. In that directory are directories:
4316- boot: inputs to the system boot (after install)
4317- install: install phase related files
4318- disks: the disks used for installation and boot
4319- logs: install and boot logs
4320- collect: data collected by the boot phase
4321-
4322- * CURTIN_VMTEST_TOPDIR: default $TMPDIR/vmtest-<timestamp>
4323- vmtest puts all test data under this value. By default, it creates
4324- a directory in TMPDIR (/tmp) named with as "vmtest-<timestamp>"
4325-
4326- If you set this value, you must ensure that the directory is either
4327- non-existant or clean.
4328-
4329- * CURTIN_VMTEST_LOG: default $TMPDIR/vmtest-<timestamp>.log
4330- vmtest writes extended log information to this file.
4331- The default puts the log along side the TOPDIR.
4332-
4333- * CURTIN_VMTEST_IMAGE_SYNC: default false (boolean)
4334- if set to true, each run will attempt a sync of images.
4335- If you want to make sure images are always up to date, then set to true.
4336-
4337- * CURTIN_VMTEST_BRIDGE: default 'user'
4338- the network devices will be attached to this bridge. The default is
4339- 'user', which means to use qemu user mode networking. Set it to
4340- 'virbr0' or 'lxcbr0' to use those bridges and then be able to ssh
4341- in directly.
4342-
4343- * IMAGE_DIR: default /srv/images
4344- vmtest keeps a mirror of maas ephemeral images in this directory.
4345-
4346- * IMAGES_TO_KEEP: default 1
4347- keep this number of images of each release in the IMAGE_DIR.
4348-
4349-Environment 'boolean' values:
4350- For boolean environment variables the value is considered True
4351- if it is any value other than case insensitive 'false', '' or "0"
4352
4353=== removed file 'doc/devel/README.txt'
4354--- doc/devel/README.txt 2015-03-11 13:19:43 +0000
4355+++ doc/devel/README.txt 1970-01-01 00:00:00 +0000
4356@@ -1,55 +0,0 @@
4357-## curtin development ##
4358-
4359-This document describes how to use kvm and ubuntu cloud images
4360-to develop curtin or test install configurations inside kvm.
4361-
4362-## get some dependencies ##
4363-sudo apt-get -qy install kvm libvirt-bin cloud-utils bzr
4364-
4365-## get cloud image to boot (-disk1.img) and one to install (-root.tar.gz)
4366-mkdir -p ~/download
4367-DLDIR=$( cd ~/download && pwd )
4368-rel="trusty"
4369-arch=amd64
4370-burl="http://cloud-images.ubuntu.com/$rel/current/"
4371-for f in $rel-server-cloudimg-${arch}-root.tar.gz $rel-server-cloudimg-${arch}-disk1.img; do
4372- wget "$burl/$f" -O $DLDIR/$f; done
4373-( cd $DLDIR && qemu-img convert -O qcow $rel-server-cloudimg-${arch}-disk1.img $rel-server-cloudimg-${arch}-disk1.qcow2)
4374-
4375-BOOTIMG="$DLDIR/$rel-server-cloudimg-${arch}-disk1.qcow2"
4376-ROOTTGZ="$DLDIR/$rel-server-cloudimg-${arch}-root.tar.gz"
4377-
4378-## get curtin
4379-mkdir -p ~/src
4380-bzr init-repo ~/src/curtin
4381-( cd ~/src/curtin && bzr branch lp:curtin trunk.dist )
4382-( cd ~/src/curtin && bzr branch trunk.dist trunk )
4383-
4384-## work with curtin
4385-cd ~/src/curtin/trunk
4386-# use 'launch' to launch a kvm instance with user data to pack
4387-# up local curtin and run it inside instance.
4388-./tools/launch $BOOTIMG --publish $ROOTTGZ -- curtin install "PUBURL/${ROOTTGZ##*/}"
4389-
4390-## notes about 'launch' ##
4391- * launch has --help so you can see that for some info.
4392- * '--publish' adds a web server at ${HTTP_PORT:-9923}
4393- and puts the files you want available there. You can reference
4394- this url in config or cmdline with 'PUBURL'. For example
4395- '--publish foo.img' will put 'foo.img' at PUBURL/foo.img.
4396- * launch sets 'ubuntu' user password to 'passw0rd'
4397- * launch runs 'kvm -curses'
4398- kvm -curses keyboard info:
4399- 'alt-2' to go to qemu console
4400- * launch puts serial console to 'serial.log' (look there for stuff)
4401- * when logged in
4402- * you can look at /var/log/cloud-init-output.log
4403- * archive should be extracted in /curtin
4404- * shell archive should be in /var/lib/cloud/instance/scripts/part-002
4405- * when logged in, and archive available at
4406-
4407-
4408-## other notes ##
4409- * need to add '--install-deps' or something for curtin
4410- cloud-image in 12.04 has no 'python3'
4411- ideally 'curtin --install-deps install' would get the things it needs
4412
4413=== added file 'doc/devel/clear_holders_doc.txt'
4414--- doc/devel/clear_holders_doc.txt 1970-01-01 00:00:00 +0000
4415+++ doc/devel/clear_holders_doc.txt 2016-10-03 18:55:20 +0000
4416@@ -0,0 +1,85 @@
4417+The new version of clear_holders is based around a data structure called a
4418+holder_tree which represents the current storage hierarchy above a specified
4419+starting device. Each node in a holders tree contains data about the node and a
4420+key 'holders' which contains a list of all nodes that depend on it. The keys in
4421+a holders_tree node are:
4422+ - device: the path to the device in /sys/class/block
4423+ - dev_type: what type of storage layer the device is. possible values:
4424+ - disk
4425+ - lvm
4426+ - crypt
4427+ - raid
4428+ - bcache
4430+ - name: the kname of the device (used for display)
4431+ - holders: holders_trees for devices depending on the current device
4432+
4433+A holders tree can be generated for a device using the function
4434+clear_holders.gen_holders_tree. The device can be specified either as a path in
4435+/sys/class/block or as a path in /dev.
4436+
4437+The new implementation of block.clear_holders shuts down storage devices in a
4438+holders tree starting from the leaves of the tree and ascending towards the
4439+root. The old implementation of clear_holders ascended up each path of the tree
4440+separately, in a pattern similar to depth first search. The problem with the
4441+old implementation is that in some cases either an attempt would be made to
4442+remove one storage device while other devices depended on it or clear_holders
4443+would attempt to shut down the same storage device several times. In order to
4444+cope with this the old version of clear_holders had logic to handle expected
4445+failures and hope for the best moving forward. The new version of clear_holders
4446+is able to run without many anticipated failures.
4447+
4448+The logic to plan what order to shut down storage layers in is in
4449+clear_holders.plan_shutdown_holders_trees. This function accepts either a
4450+single holders tree or a list of holders trees. When run with a list of holders
4451+trees, it assumes that all of these trees start at basically the same layer in
4452+the overall storage hierarchy for the system (i.e. a list of holders trees
4453+starting from all of the target installation disks). This function returns a
4454+list of dictionaries, with each dictionary containing the keys:
4455+ - device: the path to the device in /sys/class/block
4456+ - dev_type: what type of storage layer the device is. possible values:
4457+ - disk
4458+ - lvm
4459+ - crypt
4460+ - raid
4461+ - bcache
4463+ - level: the level of the device in the current storage hierarchy
4464+ (starting from 0)
4465+
4466+The items in the list returned by clear_holders.plan_shutdown_holders_trees
4467+should be processed in order to make sure the holders trees are shut down fully.
4468+
4469+The main interface for clear_holders is the function
4470+clear_holders.clear_holders. If the system has just been booted it could be
4471+beneficial to run the function clear_holders.start_clear_holders_deps before
4472+using clear_holders.clear_holders. This ensures clear_holders will be able to
4473+properly shut down storage devices. The function clear_holders.clear_holders can be
4474+passed either a single device or a list of devices and will shut down all
4475+storage devices above the device(s). The devices can be specified either by
4476+path in /dev or by path in /sys/class/block.
4477+
4478+In order to test if a device or devices are free to be partitioned/formatted,
4479+the function clear_holders.assert_clear can be passed either a single device or
4480+a list of devices, with devices specified either by path in /dev or by path in
4481+/sys/class/block. If there are any storage devices that depend on one of the
4482+devices passed to clear_holders.assert_clear, then an OSError will be raised.
4483+If clear_holders.assert_clear does not raise any errors, then the devices
4484+specified should be ready for partitioning.
4485+
4486+It is possible to query further information about storage devices using
4487+clear_holders.
4488+
4489+Holders for an individual device can be queried using clear_holders.get_holders.
4490+Results are returned as a list of knames for holding devices.
4491+
4492+A holders tree can be printed in a human readable format using
4493+clear_holders.format_holders_tree(). Example output:
4494+sda
4495+|-- sda1
4496+|-- sda2
4497+`-- sda5
4498+ `-- dm-0
4499+ |-- dm-1
4500+ `-- dm-2
4501+ `-- dm-3
4502
4503=== modified file 'doc/index.rst'
4504--- doc/index.rst 2015-10-02 16:19:07 +0000
4505+++ doc/index.rst 2016-10-03 18:55:20 +0000
4506@@ -13,7 +13,13 @@
4507 :maxdepth: 2
4508
4509 topics/overview
4510+ topics/config
4511+ topics/apt_source
4512+ topics/networking
4513+ topics/storage
4514 topics/reporting
4515+ topics/development
4516+ topics/integration-testing
4517
4518
4519
4520
4521=== added file 'doc/topics/apt_source.rst'
4522--- doc/topics/apt_source.rst 1970-01-01 00:00:00 +0000
4523+++ doc/topics/apt_source.rst 2016-10-03 18:55:20 +0000
4524@@ -0,0 +1,164 @@
4525+==========
4526+APT Source
4527+==========
4528+
4529+This part of curtin is meant to allow influencing the apt behaviour and configuration.
4530+
4531+By default - if no apt config is provided - it does nothing. That keeps behavior compatible on upgrades.
4532+
4533+The feature has an optional target argument which - by default - is used to modify the environment that curtin currently installs (@TARGET_MOUNT_POINT).
4534+
4535+Features
4536+~~~~~~~~
4537+
4538+* Add PGP keys to the APT trusted keyring
4539+
4540+ - add via short keyid
4541+
4542+ - add via long key fingerprint
4543+
4544+ - specify a custom keyserver to pull from
4545+
4546+ - add raw keys (which makes you independent of keyservers)
4547+
4548+* Influence global apt configuration
4549+
4550+ - adding PPAs
4551+
4552+ - replacing mirror, security mirror and release in sources.list
4553+
4554+ - able to provide a fully custom template for sources.list
4555+
4556+ - add arbitrary apt.conf settings
4557+
4558+ - provide debconf configurations
4559+
4560+ - disabling suites (=pockets)
4561+
4562+ - per architecture mirror definition
4563+
4564+
4565+Configuration
4566+~~~~~~~~~~~~~
4567+
4568+The general configuration of the apt feature is under an element called ``apt``.
4569+
4570+This can have various "global" subelements as listed in the examples below.
4571+The file ``apt-source.yaml`` holds more examples.
4572+
4573+These global configurations are valid throughout all of the apt feature.
4574+So for example a global specification of a ``primary`` mirror will apply to all rendered sources entries.
4575+
4576+Then there is a section ``sources`` which can hold any number of source subelements itself.
4577+The key is the filename and will be prefixed with /etc/apt/sources.list.d/ if it doesn't start with a ``/``.
4578+In certain cases - where no content is written into a sources.list file - the filename will be ignored, yet it can still be used as an index for merging.
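+
+For example (filenames and entries are illustrative)::
+
+ sources:
+   curtin-local.list:                  # becomes /etc/apt/sources.list.d/curtin-local.list
+     source: deb $MIRROR $RELEASE main
+   /etc/apt/sources.list.d/other.list: # absolute path, used as-is
+     source: deb $MIRROR $RELEASE universe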
4579+
4580+The values inside the entries consist of the following optional entries
4581+
4582+* ``source``: a sources.list entry (some variable replacements apply)
4583+
4584+* ``keyid``: providing a key to import via shortid or fingerprint
4585+
4586+* ``key``: providing a raw PGP key
4587+
4588+* ``keyserver``: specify an alternate keyserver to pull keys from that were specified by keyid
4589+
4590+The section "sources" is a dictionary (unlike most block/net configs, which are lists). This format allows merging between multiple input files better than a list would ::
4591+
4592+ sources:
4593+ s1: {'key': 'key1', 'source': 'source1'}
4594+
4595+ sources:
4596+ s2: {'key': 'key2'}
4597+ s1: {'keyserver': 'foo'}
4598+
4599+ This would be merged into
4600+ s1: {'key': 'key1', 'source': 'source1', 'keyserver': 'foo'}
4601+ s2: {'key': 'key2'}
4602+
4603+Here is one of the most common examples for this feature: installing with curtin in an isolated environment (derived repository):
4604+
4605+For that we need to:
+
4606+* insert the PGP key of the local repository to be trusted
4607+
4608+ - since you are locked down you can't pull from keyserver.ubuntu.com
4609+
4610+ - if you have an internal keyserver you could pull from there, but let us assume you don't even have that; so you have to provide the raw key
4611+
4612+ - in the example we use the key of the "Ubuntu CD Image Automatic Signing Key", which is redundant as it is already in the trusted keyring, but it makes a good example (the key is shortened to stay readable)
4613+
4614+::
4615+
4616+ -----BEGIN PGP PUBLIC KEY BLOCK-----
4617+ Version: GnuPG v1
4618+ mQGiBEFEnz8RBAC7LstGsKD7McXZgd58oN68KquARLBl6rjA2vdhwl77KkPPOr3O
4619+ RwIbDAAKCRBAl26vQ30FtdxYAJsFjU+xbex7gevyGQ2/mhqidES4MwCggqQyo+w1
4620+ Twx6DKLF+3rF5nf1F3Q=
4621+ =PBAe
4622+ -----END PGP PUBLIC KEY BLOCK-----
4623+
4624+* replace the mirrors used to some mirrors available inside the isolated environment for apt to pull repository data from.
4625+
4626+ - let's assume we have a local mirror at ``mymirror.local``, but otherwise following the usual paths
4627+
4628+ - as an example, the mirror is partial and doesn't carry the backports suite, so backports have to be disabled
4629+
4630+That would be specified as ::
4631+
4632+ apt:
4633+   primary:
4634+     - arches: [default]
4635+       uri: http://mymirror.local/ubuntu/
4636+   disable_suites: [backports]
4637+ sources:
4638+ localrepokey:
4639+ key: | # full key as block
4640+ -----BEGIN PGP PUBLIC KEY BLOCK-----
4641+ Version: GnuPG v1
4642+
4643+ mQGiBEFEnz8RBAC7LstGsKD7McXZgd58oN68KquARLBl6rjA2vdhwl77KkPPOr3O
4644+ RwIbDAAKCRBAl26vQ30FtdxYAJsFjU+xbex7gevyGQ2/mhqidES4MwCggqQyo+w1
4645+ Twx6DKLF+3rF5nf1F3Q=
4646+ =PBAe
4647+ -----END PGP PUBLIC KEY BLOCK-----
4648+
4649+The file examples/apt-source.yaml holds various further examples that can be configured with this feature.
4650+
4651+
4652+Common snippets
4653+~~~~~~~~~~~~~~~
4654+This is a collection of additional ideas for using this feature to customize the to-be-installed system.
4655+
4656+* enable proposed on installing
4657+
4658+::
4659+
4660+ apt:
4661+ sources:
4662+ proposed.list: deb $MIRROR $RELEASE-proposed main restricted universe multiverse
4663+
4664+* Make debug symbols available
4665+
4666+::
4667+
4668+ apt:
4669+ sources:
4670+   ddebs.list: |
4671+     deb http://ddebs.ubuntu.com $RELEASE main restricted universe multiverse
4672+     deb http://ddebs.ubuntu.com $RELEASE-updates main restricted universe multiverse
4673+     deb http://ddebs.ubuntu.com $RELEASE-security main restricted universe multiverse
4674+     deb http://ddebs.ubuntu.com $RELEASE-proposed main restricted universe multiverse
4675+
4676+Timing
4677+~~~~~~
4678+The feature is implemented at the stage of curthooks_commands, which runs just after curtin has extracted the image to the target.
4679+Additionally it can be run as a standalone command "curtin -v --config <yourconfigfile> apt-config".
4680+
4681+This will pick up the target from the environment variable that is set by curtin. If you want to use it against a different target, or outside of the usual curtin handling, you can add ``--target <path>`` to override the target path.
4682+This target should have at least a minimal system with apt, apt-add-repository and dpkg installed for the functionality to work.
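+
+For example (config path and target directory are illustrative)::
+
+ curtin -v --config /tmp/apt.yaml apt-config --target /tmp/mytarget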
4683+
4684+
4685+Dependencies
4686+~~~~~~~~~~~~
4687+Cloud-init might need to resolve dependencies and install packages in the ephemeral environment to run curtin.
4688+Therefore it is recommended to not only provide an apt configuration to curtin for the target, but also one to the install environment via cloud-init.
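+
+A minimal sketch of such a cloud-init user-data snippet, assuming the booted
+cloud-init version understands the same ``apt`` configuration format::
+
+ #cloud-config
+ apt:
+   primary:
+     - arches: [default]
+       uri: http://mymirror.local/ubuntu/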
4689
4690=== added file 'doc/topics/config.rst'
4691--- doc/topics/config.rst 1970-01-01 00:00:00 +0000
4692+++ doc/topics/config.rst 2016-10-03 18:55:20 +0000
4693@@ -0,0 +1,551 @@
4694+====================
4695+Curtin Configuration
4696+====================
4697+
4698+Curtin exposes a number of configuration options for controlling Curtin
4699+behavior during installation.
4700+
4701+
4702+Configuration options
4703+---------------------
4704+Curtin's top level config keys are as follows:
4705+
4706+
4707+- apt_mirrors (``apt_mirrors``)
4708+- apt_proxy (``apt_proxy``)
4709+- block-meta (``block``)
4710+- debconf_selections (``debconf_selections``)
4711+- disable_overlayroot (``disable_overlayroot``)
4712+- grub (``grub``)
4713+- http_proxy (``http_proxy``)
4714+- install (``install``)
4715+- kernel (``kernel``)
4716+- kexec (``kexec``)
4717+- multipath (``multipath``)
4718+- network (``network``)
4719+- power_state (``power_state``)
4720+- reporting (``reporting``)
4721+- restore_dist_interfaces: (``restore_dist_interfaces``)
4722+- sources (``sources``)
4723+- stages (``stages``)
4724+- storage (``storage``)
4725+- swap (``swap``)
4726+- system_upgrade (``system_upgrade``)
4727+- write_files (``write_files``)
4728+
4729+
4730+apt_mirrors
4731+~~~~~~~~~~~
4732+Configure APT mirrors for ``ubuntu_archive`` and ``ubuntu_security``
4733+
4734+**ubuntu_archive**: *<http://local.archive/ubuntu>*
4735+
4736+**ubuntu_security**: *<http://local.archive/ubuntu>*
4737+
4738+If the target OS includes /etc/apt/sources.list, Curtin will replace
4739+the default values for each key set with the supplied mirror URL.
4740+
4741+**Example**::
4742+
4743+ apt_mirrors:
4744+ ubuntu_archive: http://local.archive/ubuntu
4745+ ubuntu_security: http://local.archive/ubuntu
4746+
4747+
4748+apt_proxy
4749+~~~~~~~~~
4750+Curtin will configure an APT HTTP proxy in the target OS
4751+
4752+**apt_proxy**: *<URL to APT proxy>*
4753+
4754+**Example**::
4755+
4756+ apt_proxy: http://squid.mirror:3267/
4757+
4758+
4759+block-meta
4760+~~~~~~~~~~
4761+Configure how Curtin selects and configures disks on the target
4762+system when no custom storage configuration is provided (``mode=simple``).
4763+
4764+**devices**: *<List of block devices for use>*
4765+
4766+The ``devices`` parameter is a list of block device paths that Curtin may
4767+select from when choosing where to install the OS.
4768+
4769+**boot-partition**: *<dictionary of configuration>*
4770+
4771+The ``boot-partition`` parameter controls how to configure the boot partition
4772+with the following parameters:
4773+
4774+**enabled**: *<boolean>*
4775+
4776+If enabled, Curtin will forcibly set up a partition on the target device for booting.
4777+
4778+**format**: *<['uefi', 'gpt', 'prep', 'mbr']>*
4779+
4780+Specify the partition format. Some formats, like ``uefi`` and ``prep``,
4781+are restricted by platform characteristics.
4782+
4783+**fstype**: *<filesystem type: one of ['ext3', 'ext4'], defaults to 'ext4'>*
4784+
4785+Specify the filesystem format on the boot partition.
4786+
4787+**label**: *<filesystem label: defaults to 'boot'>*
4788+
4789+Specify the filesystem label on the boot partition.
4790+
4791+**Example**::
4792+
4793+ block-meta:
4794+ devices:
4795+ - /dev/sda
4796+ - /dev/sdb
4797+ boot-partition:
4798+     enabled: True
4799+ format: gpt
4800+ fstype: ext4
4801+ label: my-boot-partition
4802+
4803+
4804+debconf_selections
4805+~~~~~~~~~~~~~~~~~~
4806+Curtin will update the target with debconf set-selection values. Users will
4807+need to be familiar with the package's debconf options. Users can probe a
4808+package's debconf settings by using ``debconf-get-selections``.
4809+
4810+**selection_name**: *<debconf-set-selections input>*
4811+
4812+``debconf-set-selections`` is in the form::
4813+
4814+ <packagename> <packagename/option-name> <type> <value>
4815+
4816+**Example**::
4817+
4818+ debconf_selections:
4819+ set1: |
4820+ cloud-init cloud-init/datasources multiselect MAAS
4821+ lxd lxd/bridge-name string lxdbr0
4822+ set2: lxd lxd/setup-bridge boolean true
4823+
4824+
4825+
4826+disable_overlayroot
4827+~~~~~~~~~~~~~~~~~~~
4828+Curtin disables overlayroot in the target by default.
4829+
4830+**disable_overlayroot**: *<boolean: default True>*
4831+
4832+**Example**::
4833+
4834+ disable_overlayroot: False
4835+
4836+
4837+grub
4838+~~~~
4839+Curtin configures grub as the target machine's boot loader. Users
4840+can control a few options to tailor how the system will boot after
4841+installation.
4842+
4843+**install_devices**: *<list of block device names to install grub>*
4844+
4845+Specify a list of devices onto which grub will attempt to install.
4846+
4847+**replace_linux_default**: *<boolean: default True>*
4848+
4849+Controls whether grub-install will update the Linux Default target
4850+value during installation.
4851+
4852+**update_nvram**: *<boolean: default False>*
4853+
4854+Certain platforms, like ``uefi`` and ``prep`` systems, utilize
4855+NVRAM to hold boot configuration settings which control the order in
4856+which devices are booted. Curtin by default will not attempt to
4857+update the NVRAM settings to preserve the system configuration.
4858+Users may want to force NVRAM to be updated such that the next boot
4859+of the system will boot from the installed device.
4860+
4861+**Example**::
4862+
4863+ grub:
4864+ install_devices:
4865+ - /dev/sda1
4866+ replace_linux_default: False
4867+ update_nvram: True
4868+
4869+
4870+http_proxy
4871+~~~~~~~~~~
4872+Curtin will export the ``http_proxy`` value into the installer environment.
4873+
4874+**http_proxy**: *<HTTP Proxy URL>*
4875+
4876+**Example**::
4877+
4878+ http_proxy: http://squid.proxy:3728/
4879+
4880+
4881+
4882+install
4883+~~~~~~~
4884+Configure Curtin's install options.
4885+
4886+**log_file**: *<path to write Curtin's install.log data>*
4887+
4888+Curtin logs install progress by default to /var/log/curtin/install.log
4889+
4890+**post_files**: *<List of files to read from host to include in reporting data>*
4891+
4892+Curtin by default will post the ``log_file`` value to any configured reporter.
4893+
4894+**save_install_config**: *<Path to save merged curtin configuration file>*
4895+
4896+Curtin will save the merged configuration data into the target OS at
4897+the path of ``save_install_config``. This defaults to /root/curtin-install-cfg.yaml
4898+
4899+**Example**::
4900+
4901+ install:
4902+ log_file: /tmp/install.log
4903+ post_files:
4904+ - /tmp/install.log
4905+ - /var/log/syslog
4906+ save_install_config: /root/myconf.yaml
4907+
4908+
4909+kernel
4910+~~~~~~
4911+Configure how Curtin selects which kernel to install into the target image.
4912+If ``kernel`` is not configured, Curtin will use the default mapping below
4913+and determine the ``package`` value by looking up the current release
4914+and the kernel version currently running.
4915+
4916+
4917+**fallback-package**: *<kernel package-name to be used as fallback>*
4918+
4919+Specify a kernel package name to be used if the default package is not
4920+available.
4921+
4922+**mapping**: *<Dictionary mapping Ubuntu release to HWE kernel names>*
4923+
4924+Default mapping for Releases to package names is as follows::
4925+
4926+ precise:
4927+ 3.2.0:
4928+ 3.5.0: -lts-quantal
4929+ 3.8.0: -lts-raring
4930+ 3.11.0: -lts-saucy
4931+ 3.13.0: -lts-trusty
4932+ trusty:
4933+ 3.13.0:
4934+ 3.16.0: -lts-utopic
4935+ 3.19.0: -lts-vivid
4936+ 4.2.0: -lts-wily
4937+ 4.4.0: -lts-xenial
4938+ xenial:
4939+ 4.3.0:
4940+ 4.4.0:
4941+
4942+
4943+**package**: *<Linux kernel package name>*
4944+
4945+Specify the exact package to install in the target OS.
4946+
4947+**Example**::
4948+
4949+ kernel:
4950+ fallback-package: linux-image-generic
4951+ package: linux-image-generic-lts-xenial
4952+ mapping:
4953+     xenial:
4954+       4.4.0: -my-custom-kernel
4955+
4956+
4957+kexec
4958+~~~~~
4959+Curtin can use kexec to "reboot" into the target OS.
4960+
4961+**mode**: *<on>*
4962+
4963+Enable rebooting with kexec.
4964+
4965+**Example**::
4966+
4967+ kexec: on
4968+
4969+
4970+multipath
4971+~~~~~~~~~
4972+Curtin will detect and autoconfigure multipath by default to enable
4973+boot for systems with multipath. Curtin does not apply any advanced
4974+configuration or tuning, rather it uses distro defaults and provides
4975+enough configuration to enable booting.
4976+
4977+**mode**: *<['auto', 'disabled']>*
4978+
4979+Defaults to ``auto``, which will configure enough to enable booting on multipath
4980+devices. ``disabled`` will prevent curtin from installing or configuring
4981+multipath.
4982+
4983+**overwrite_bindings**: *<boolean>*
4984+
4985+If ``overwrite_bindings`` is True then Curtin will generate a new bindings
4986+file for multipath, overriding any existing bindings in the target image.
4987+
4988+**Example**::
4989+
4990+ multipath:
4991+ mode: auto
4992+ overwrite_bindings: True
4993+
4994+
4995+network
4996+~~~~~~~
4997+Configure networking (see Networking section for details).
4998+
4999+**network_option_1**: *<option value>*
5000+
The diff has been truncated for viewing.
