Merge lp:~raharper/ubuntu/xenial/curtin/pkg-sru-revno425 into lp:~smoser/ubuntu/xenial/curtin/pkg

Proposed by Ryan Harper
Status: Merged
Merged at revision: 56
Proposed branch: lp:~raharper/ubuntu/xenial/curtin/pkg-sru-revno425
Merge into: lp:~smoser/ubuntu/xenial/curtin/pkg
Diff against target: 14513 lines (+10324/-1893)
94 files modified
Makefile (+3/-1)
curtin/__init__.py (+4/-0)
curtin/block/__init__.py (+249/-61)
curtin/block/clear_holders.py (+387/-0)
curtin/block/lvm.py (+96/-0)
curtin/block/mdadm.py (+18/-5)
curtin/block/mkfs.py (+10/-5)
curtin/commands/apply_net.py (+156/-1)
curtin/commands/apt_config.py (+668/-0)
curtin/commands/block_info.py (+75/-0)
curtin/commands/block_meta.py (+134/-263)
curtin/commands/block_wipe.py (+1/-2)
curtin/commands/clear_holders.py (+48/-0)
curtin/commands/curthooks.py (+61/-235)
curtin/commands/main.py (+4/-3)
curtin/config.py (+2/-3)
curtin/gpg.py (+74/-0)
curtin/net/__init__.py (+67/-30)
curtin/net/network_state.py (+45/-1)
curtin/util.py (+278/-81)
debian/changelog (+32/-2)
doc/conf.py (+21/-4)
doc/devel/README-vmtest.txt (+0/-152)
doc/devel/README.txt (+0/-55)
doc/devel/clear_holders_doc.txt (+85/-0)
doc/index.rst (+6/-0)
doc/topics/apt_source.rst (+164/-0)
doc/topics/config.rst (+551/-0)
doc/topics/development.rst (+68/-0)
doc/topics/integration-testing.rst (+245/-0)
doc/topics/networking.rst (+522/-0)
doc/topics/overview.rst (+7/-7)
doc/topics/reporting.rst (+3/-3)
doc/topics/storage.rst (+894/-0)
examples/apt-source.yaml (+267/-0)
examples/network-ipv6-bond-vlan.yaml (+56/-0)
examples/tests/apt_config_command.yaml (+85/-0)
examples/tests/apt_source_custom.yaml (+97/-0)
examples/tests/apt_source_modify.yaml (+92/-0)
examples/tests/apt_source_modify_arches.yaml (+102/-0)
examples/tests/apt_source_modify_disable_suite.yaml (+92/-0)
examples/tests/apt_source_preserve.yaml (+98/-0)
examples/tests/apt_source_search.yaml (+97/-0)
examples/tests/basic.yaml (+5/-1)
examples/tests/basic_network_static_ipv6.yaml (+22/-0)
examples/tests/basic_scsi.yaml (+1/-1)
examples/tests/network_alias.yaml (+125/-0)
examples/tests/network_mtu.yaml (+88/-0)
examples/tests/network_source_ipv6.yaml (+31/-0)
examples/tests/test_old_apt_features.yaml (+11/-0)
examples/tests/test_old_apt_features_ports.yaml (+10/-0)
examples/tests/uefi_basic.yaml (+15/-0)
examples/tests/vlan_network_ipv6.yaml (+92/-0)
setup.py (+2/-2)
tests/unittests/helpers.py (+41/-0)
tests/unittests/test_apt_custom_sources_list.py (+170/-0)
tests/unittests/test_apt_source.py (+1032/-0)
tests/unittests/test_block.py (+210/-0)
tests/unittests/test_block_lvm.py (+94/-0)
tests/unittests/test_block_mdadm.py (+28/-23)
tests/unittests/test_block_mkfs.py (+2/-2)
tests/unittests/test_clear_holders.py (+329/-0)
tests/unittests/test_make_dname.py (+200/-0)
tests/unittests/test_net.py (+54/-13)
tests/unittests/test_util.py (+180/-2)
tests/vmtests/__init__.py (+38/-38)
tests/vmtests/helpers.py (+129/-166)
tests/vmtests/test_apt_config_cmd.py (+55/-0)
tests/vmtests/test_apt_source.py (+238/-0)
tests/vmtests/test_basic.py (+21/-41)
tests/vmtests/test_bcache_basic.py (+5/-8)
tests/vmtests/test_bonding.py (+0/-204)
tests/vmtests/test_lvm.py (+2/-1)
tests/vmtests/test_mdadm_bcache.py (+21/-17)
tests/vmtests/test_multipath.py (+5/-13)
tests/vmtests/test_network.py (+205/-348)
tests/vmtests/test_network_alias.py (+40/-0)
tests/vmtests/test_network_bonding.py (+63/-0)
tests/vmtests/test_network_enisource.py (+91/-0)
tests/vmtests/test_network_ipv6.py (+53/-0)
tests/vmtests/test_network_ipv6_enisource.py (+26/-0)
tests/vmtests/test_network_ipv6_static.py (+42/-0)
tests/vmtests/test_network_ipv6_vlan.py (+34/-0)
tests/vmtests/test_network_mtu.py (+155/-0)
tests/vmtests/test_network_static.py (+44/-0)
tests/vmtests/test_network_vlan.py (+77/-0)
tests/vmtests/test_nvme.py (+2/-3)
tests/vmtests/test_old_apt_features.py (+89/-0)
tests/vmtests/test_raid5_bcache.py (+5/-8)
tests/vmtests/test_uefi_basic.py (+16/-18)
tools/jenkins-runner (+33/-7)
tools/launch (+9/-48)
tools/xkvm (+90/-2)
tox.ini (+30/-13)
To merge this branch: bzr merge lp:~raharper/ubuntu/xenial/curtin/pkg-sru-revno425
Reviewer: Scott Moser (status: Pending)
Review via email: mp+307473@code.launchpad.net

Description of the change

Import new upstream snapshot (revno 425)

New Upstream snapshot:
- unittest,tox.ini: catch and fix issue with trusty-level mock of open
- block/mdadm: add option to ignore mdadm_assemble errors (LP: #1618429)
- curtin/doc: overhaul curtin documentation for readthedocs.org (LP: #1351085)
- curtin.util: re-add support for RunInChroot (LP: #1617375)
- curtin/net: overhaul of eni rendering to handle mixed ipv4/ipv6 configs
- curtin.block: refactor clear_holders logic into block.clear_holders and cli cmd (a usage sketch follows this change list)
- curtin.apply_net should exit non-zero upon exception. (LP: #1615780)
- apt: fix bug in disable_suites if sources.list line is blank.
- vmtests: disable Wily in vmtests
- Fix the unittests for test_apt_source.
- get CURTIN_VMTEST_PARALLEL shown correctly in jenkins-runner output
- fix vmtest check_file_strippedline to strip lines before comparing
- fix whitespace damage in tests/vmtests/__init__.py
- fix dpkg-reconfigure when debconf_selections was provided. (LP: #1609614)
- fix apt tests on non-intel arch
- Add apt features to curtin. (LP: #1574113) (a config sketch follows this change list)
- vmtest: easier use of parallel and controlling timeouts
- mkfs.vfat: add force flag for formatting whole disks (LP: #1597923)
- block.mkfs: fix sectorsize flag (LP: #1597522)
- block_meta: cleanup use of sys_block_path and handle cciss knames (LP: #1562249)
- block.get_blockdev_sector_size: handle _lsblock multi result return (LP: #1598310)
- util: add target (chroot) support to subp, add target_path helper.
- block_meta: fallback to parted if blkid does not produce output (LP: #1524031)
- commands.block_wipe: correct default wipe mode to 'superblock'
- tox.ini: run coverage normally rather than separately
- move uefi boot knowledge from launch and vmtest to xkvm
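
For reference, a minimal usage sketch of the refactored clear_holders API (all
functions shown appear in curtin/block/clear_holders.py in the diff below; the
device path /dev/sda is illustrative, not taken from this branch):

    from curtin.block import clear_holders

    # assemble mdadm arrays and load the bcache module so that any old
    # storage devices on the system can be detected
    clear_holders.start_clear_holders_deps()

    # inspect the storage hierarchy above the disk before tearing it down
    tree = clear_holders.gen_holders_tree('/dev/sda')
    print(clear_holders.format_holders_tree(tree))

    # shut down bcache/raid/lvm/crypt layers from the top down, then verify
    # that nothing but plain disks and partitions remain
    clear_holders.clear_holders('/dev/sda')
    clear_holders.assert_clear('/dev/sda')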
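Likewise, a hedged sketch of the new apt configuration (APT_CONFIG_V1) being
applied from Python. handle_apt() is defined in curtin/commands/apt_config.py
below; the config keys mirror this branch's examples/apt-source.yaml, but the
mirror URI, suite name, PPA and keyid shown here are illustrative assumptions:

    from curtin.commands.apt_config import handle_apt

    apt_cfg = {
        'preserve_sources_list': False,   # allow sources.list to be rewritten
        'primary': [{'arches': ['default'],
                     'uri': 'http://archive.ubuntu.com/ubuntu/'}],
        'disable_suites': ['backports'],  # comment matching suites out
        'sources': {
            # hypothetical PPA; its key would be fetched via the new curtin.gpg
            'my-ppa.list': {
                'source': ('deb http://ppa.launchpad.net/example/ppa/ubuntu '
                           'xenial main'),
                'keyid': '0123456789ABCDEF',
            },
        },
    }
    handle_apt(apt_cfg, target='/target')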


Preview Diff

=== modified file 'Makefile'
--- Makefile 2016-05-10 16:13:29 +0000
+++ Makefile 2016-10-03 18:55:20 +0000
@@ -49,5 +49,7 @@
 sync-images:
 	@$(CWD)/tools/vmtest-sync-images
 
+clean:
+	rm -rf doc/_build
 
-.PHONY: all test pyflakes pyflakes3 pep8 build
+.PHONY: all clean test pyflakes pyflakes3 pep8 build
 
=== modified file 'curtin/__init__.py'
--- curtin/__init__.py 2015-11-23 16:22:09 +0000
+++ curtin/__init__.py 2016-10-03 18:55:20 +0000
@@ -33,6 +33,10 @@
     'SUBCOMMAND_SYSTEM_INSTALL',
     # subcommand 'system-upgrade' is present
     'SUBCOMMAND_SYSTEM_UPGRADE',
+    # supports new format of apt configuration
+    'APT_CONFIG_V1',
 ]
 
+__version__ = "0.1.0"
+
 # vi: ts=4 expandtab syntax=python
 
=== modified file 'curtin/block/__init__.py'
--- curtin/block/__init__.py 2016-10-03 18:00:41 +0000
+++ curtin/block/__init__.py 2016-10-03 18:55:20 +0000
@@ -23,21 +23,31 @@
 import itertools
 
 from curtin import util
+from curtin.block import lvm
+from curtin.log import LOG
 from curtin.udev import udevadm_settle
-from curtin.log import LOG
 
 
 def get_dev_name_entry(devname):
+    """
+    convert device name to path in /dev
+    """
     bname = devname.split('/dev/')[-1]
     return (bname, "/dev/" + bname)
 
 
 def is_valid_device(devname):
+    """
+    check if device is a valid device
+    """
     devent = get_dev_name_entry(devname)[1]
     return is_block_device(devent)
 
 
 def is_block_device(path):
+    """
+    check if path is a block device
+    """
     try:
         return stat.S_ISBLK(os.stat(path).st_mode)
     except OSError as e:
@@ -47,26 +57,99 @@
 
 
 def dev_short(devname):
+    """
+    get short form of device name
+    """
+    devname = os.path.normpath(devname)
     if os.path.sep in devname:
         return os.path.basename(devname)
     return devname
 
 
 def dev_path(devname):
+    """
+    convert device name to path in /dev
+    """
     if devname.startswith('/dev/'):
         return devname
     else:
         return '/dev/' + devname
 
 
+def path_to_kname(path):
+    """
+    converts a path in /dev or a path in /sys/block to the device kname,
+    taking special devices and unusual naming schemes into account
+    """
+    # if path given is a link, get real path
+    # only do this if given a path though, if kname is already specified then
+    # this would cause a failure where the function should still be able to run
+    if os.path.sep in path:
+        path = os.path.realpath(path)
+    # using basename here ensures that the function will work given a path in
+    # /dev, a kname, or a path in /sys/block as an arg
+    dev_kname = os.path.basename(path)
+    # cciss devices need to have 'cciss!' prepended
+    if path.startswith('/dev/cciss'):
+        dev_kname = 'cciss!' + dev_kname
+    LOG.debug("path_to_kname input: '{}' output: '{}'".format(path, dev_kname))
+    return dev_kname
+
+
+def kname_to_path(kname):
+    """
+    converts a kname to a path in /dev, taking special devices and unusual
+    naming schemes into account
+    """
+    # if given something that is already a dev path, return it
+    if os.path.exists(kname) and is_valid_device(kname):
+        path = kname
+        LOG.debug("kname_to_path input: '{}' output: '{}'".format(kname, path))
+        return os.path.realpath(path)
+    # adding '/dev' to path is not sufficient to handle cciss devices and
+    # possibly other special devices which have not been encountered yet
+    path = os.path.realpath(os.sep.join(['/dev'] + kname.split('!')))
+    # make sure path we get is correct
+    if not (os.path.exists(path) and is_valid_device(path)):
+        raise OSError('could not get path to dev from kname: {}'.format(kname))
+    LOG.debug("kname_to_path input: '{}' output: '{}'".format(kname, path))
+    return path
+
+
+def partition_kname(disk_kname, partition_number):
+    """
+    Add number to disk_kname prepending a 'p' if needed
+    """
+    for dev_type in ['nvme', 'mmcblk', 'cciss', 'mpath', 'dm']:
+        if disk_kname.startswith(dev_type):
+            partition_number = "p%s" % partition_number
+            break
+    return "%s%s" % (disk_kname, partition_number)
+
+
+def sysfs_to_devpath(sysfs_path):
+    """
+    convert a path in /sys/class/block to a path in /dev
+    """
+    path = kname_to_path(path_to_kname(sysfs_path))
+    if not is_block_device(path):
+        raise ValueError('could not find blockdev for sys path: {}'
+                         .format(sysfs_path))
+    return path
+
+
 def sys_block_path(devname, add=None, strict=True):
+    """
+    get path to device in /sys/class/block
+    """
     toks = ['/sys/class/block']
     # insert parent dev if devname is partition
+    devname = os.path.normpath(devname)
     (parent, partnum) = get_blockdev_for_partition(devname)
     if partnum:
-        toks.append(dev_short(parent))
+        toks.append(path_to_kname(parent))
 
-    toks.append(dev_short(devname))
+    toks.append(path_to_kname(devname))
 
     if add is not None:
         toks.append(add)
@@ -83,6 +166,9 @@
 
 
 def _lsblock_pairs_to_dict(lines):
+    """
+    parse lsblock output and convert to dict
+    """
     ret = {}
     for line in lines.splitlines():
         toks = shlex.split(line)
@@ -98,6 +184,9 @@
 
 
 def _lsblock(args=None):
+    """
+    get lsblock data as dict
+    """
     # lsblk --help | sed -n '/Available/,/^$/p' |
     #     sed -e 1d -e '$d' -e 's,^[ ]\+,,' -e 's, .*,,' | sort
     keys = ['ALIGNMENT', 'DISC-ALN', 'DISC-GRAN', 'DISC-MAX', 'DISC-ZERO',
@@ -120,8 +209,10 @@
 
 
 def get_unused_blockdev_info():
-    # return a list of unused block devices. These are devices that
-    # do not have anything mounted on them.
+    """
+    return a list of unused block devices.
+    These are devices that do not have anything mounted on them.
+    """
 
     # get a list of top level block devices, then iterate over it to get
     # devices dependent on those. If the lsblk call for that specific
@@ -137,7 +228,9 @@
 
 
 def get_devices_for_mp(mountpoint):
-    # return a list of devices (full paths) used by the provided mountpoint
+    """
+    return a list of devices (full paths) used by the provided mountpoint
+    """
     bdinfo = _lsblock()
     found = set()
     for devname, data in bdinfo.items():
@@ -158,6 +251,9 @@
 
 
 def get_installable_blockdevs(include_removable=False, min_size=1024**3):
+    """
+    find blockdevs suitable for installation
+    """
     good = []
     unused = get_unused_blockdev_info()
     for devname, data in unused.items():
@@ -172,21 +268,25 @@
 
 
 def get_blockdev_for_partition(devpath):
+    """
+    find the parent device for a partition.
+    returns a tuple of the parent block device and the partition number
+    if device is not a partition, None will be returned for partition number
+    """
+    # normalize path
+    rpath = os.path.realpath(devpath)
+
     # convert an entry in /dev/ to parent disk and partition number
     # if devpath is a block device and not a partition, return (devpath, None)
-
-    # input of /dev/vdb or /dev/disk/by-label/foo
-    # rpath is hopefully a real-ish path in /dev (vda, sdb..)
-    rpath = os.path.realpath(devpath)
-
-    bname = os.path.basename(rpath)
-    syspath = "/sys/class/block/%s" % bname
-
+    base = '/sys/class/block'
+
+    # input of /dev/vdb, /dev/disk/by-label/foo, /sys/block/foo,
+    # /sys/block/class/foo, or just foo
+    syspath = os.path.join(base, path_to_kname(devpath))
+
+    # don't need to try out multiple sysfs paths as path_to_kname handles cciss
     if not os.path.exists(syspath):
-        syspath2 = "/sys/class/block/cciss!%s" % bname
-        if not os.path.exists(syspath2):
-            raise ValueError("%s had no syspath (%s)" % (devpath, syspath))
-        syspath = syspath2
+        raise OSError("%s had no syspath (%s)" % (devpath, syspath))
 
     ptpath = os.path.join(syspath, "partition")
     if not os.path.exists(ptpath):
@@ -207,8 +307,21 @@
     return (diskdevpath, ptnum)
 
 
+def get_sysfs_partitions(device):
+    """
+    get a list of sysfs paths for partitions under a block device
+    accepts input as a device kname, sysfs path, or dev path
+    returns empty list if no partitions available
+    """
+    sysfs_path = sys_block_path(device)
+    return [sys_block_path(kname) for kname in os.listdir(sysfs_path)
+            if os.path.exists(os.path.join(sysfs_path, kname, 'partition'))]
+
+
 def get_pardevs_on_blockdevs(devs):
-    # return a dict of partitions with their info that are on provided devs
+    """
+    return a dict of partitions with their info that are on provided devs
+    """
     if devs is None:
         devs = []
     devs = [get_dev_name_entry(d)[1] for d in devs]
@@ -243,7 +356,9 @@
 
 
 def rescan_block_devices():
-    # run 'blockdev --rereadpt' for all block devices not currently mounted
+    """
+    run 'blockdev --rereadpt' for all block devices not currently mounted
+    """
     unused = get_unused_blockdev_info()
     devices = []
     for devname, data in unused.items():
@@ -271,6 +386,9 @@
 
 
 def blkid(devs=None, cache=True):
+    """
+    get data about block devices from blkid and convert to dict
+    """
     if devs is None:
         devs = []
 
@@ -423,7 +541,18 @@
     """
     info = _lsblock([devpath])
     LOG.debug('get_blockdev_sector_size: info:\n%s' % util.json_dumps(info))
-    [parent] = info
+    # (LP: 1598310) The call to _lsblock() may return multiple results.
+    # If it does, then search for a result with the correct device path.
+    # If no such device is found among the results, then fall back to previous
+    # behavior, which was taking the first of the results
+    assert len(info) > 0
+    for (k, v) in info.items():
+        if v.get('device_path') == devpath:
+            parent = k
+            break
+    else:
+        parent = list(info.keys())[0]
+
     return (int(info[parent]['LOG-SEC']), int(info[parent]['PHY-SEC']))
 
 
@@ -499,50 +628,108 @@
 def sysfs_partition_data(blockdev=None, sysfs_path=None):
     # given block device or sysfs_path, return a list of tuples
     # of (kernel_name, number, offset, size)
-    if blockdev is None and sysfs_path is None:
-        raise ValueError("Blockdev and sysfs_path cannot both be None")
-
     if blockdev:
+        blockdev = os.path.normpath(blockdev)
         sysfs_path = sys_block_path(blockdev)
-
-    ptdata = []
-    # /sys/class/block/dev has entries of 'kname' for each partition
+    elif sysfs_path:
+        # use normpath to ensure that paths with trailing slash work
+        sysfs_path = os.path.normpath(sysfs_path)
+        blockdev = os.path.join('/dev', os.path.basename(sysfs_path))
+    else:
+        raise ValueError("Blockdev and sysfs_path cannot both be None")
 
     # queue property is only on parent devices, ie, we can't read
     # /sys/class/block/vda/vda1/queue/* as queue is only on the
     # parent device
+    sysfs_prefix = sysfs_path
     (parent, partnum) = get_blockdev_for_partition(blockdev)
-    sysfs_prefix = sysfs_path
     if partnum:
         sysfs_prefix = sys_block_path(parent)
-
-    block_size = int(util.load_file(os.path.join(sysfs_prefix,
-                                    'queue/logical_block_size')))
-
-    block_size = int(
-        util.load_file(os.path.join(sysfs_path, 'queue/logical_block_size')))
+        partnum = int(partnum)
+
+    block_size = int(util.load_file(os.path.join(
+        sysfs_prefix, 'queue/logical_block_size')))
     unit = block_size
-    for d in os.listdir(sysfs_path):
-        partd = os.path.join(sysfs_path, d)
+
+    ptdata = []
+    for part_sysfs in get_sysfs_partitions(sysfs_prefix):
         data = {}
         for sfile in ('partition', 'start', 'size'):
-            dfile = os.path.join(partd, sfile)
+            dfile = os.path.join(part_sysfs, sfile)
             if not os.path.isfile(dfile):
                 continue
             data[sfile] = int(util.load_file(dfile))
-        if 'partition' not in data:
-            continue
-        ptdata.append((d, data['partition'], data['start'] * unit,
-                       data['size'] * unit,))
+        if partnum is None or data['partition'] == partnum:
+            ptdata.append((path_to_kname(part_sysfs), data['partition'],
+                           data['start'] * unit, data['size'] * unit,))
 
     return ptdata
 
 
+def get_part_table_type(device):
+    """
+    check the type of partition table present on the specified device
+    returns None if no ptable was present or device could not be read
+    """
+    # it is neccessary to look for the gpt signature first, then the dos
+    # signature, because a gpt formatted disk usually has a valid mbr to
+    # protect the disk from being modified by older partitioning tools
+    return ('gpt' if check_efi_signature(device) else
+            'dos' if check_dos_signature(device) else None)
+
+
+def check_dos_signature(device):
+    """
+    check if there is a dos partition table signature present on device
+    """
+    # the last 2 bytes of a dos partition table have the signature with the
+    # value 0xAA55. the dos partition table is always 0x200 bytes long, even if
+    # the underlying disk uses a larger logical block size, so the start of
+    # this signature must be at 0x1fe
+    # https://en.wikipedia.org/wiki/Master_boot_record#Sector_layout
+    return (is_block_device(device) and util.file_size(device) >= 0x200 and
+            (util.load_file(device, mode='rb', read_len=2, offset=0x1fe) ==
+             b'\x55\xAA'))
+
+
+def check_efi_signature(device):
+    """
+    check if there is a gpt partition table signature present on device
+    """
+    # the gpt partition table header is always on lba 1, regardless of the
+    # logical block size used by the underlying disk. therefore, a static
+    # offset cannot be used, the offset to the start of the table header is
+    # always the sector size of the disk
+    # the start of the gpt partition table header shoult have the signaure
+    # 'EFI PART'.
+    # https://en.wikipedia.org/wiki/GUID_Partition_Table
+    sector_size = get_blockdev_sector_size(device)[0]
+    return (is_block_device(device) and
+            util.file_size(device) >= 2 * sector_size and
+            (util.load_file(device, mode='rb', read_len=8,
+                            offset=sector_size) == b'EFI PART'))
+
+
+def is_extended_partition(device):
+    """
+    check if the specified device path is a dos extended partition
+    """
+    # an extended partition must be on a dos disk, must be a partition, must be
+    # within the first 4 partitions and will have a valid dos signature,
+    # because the format of the extended partition matches that of a real mbr
+    (parent_dev, part_number) = get_blockdev_for_partition(device)
+    return (get_part_table_type(parent_dev) in ['dos', 'msdos'] and
+            part_number is not None and int(part_number) <= 4 and
+            check_dos_signature(device))
+
+
 def wipe_file(path, reader=None, buflen=4 * 1024 * 1024):
-    # wipe the existing file at path.
-    # if reader is provided, it will be called as a 'reader(buflen)'
-    # to provide data for each write. Otherwise, zeros are used.
-    # writes will be done in size of buflen.
+    """
+    wipe the existing file at path.
+    if reader is provided, it will be called as a 'reader(buflen)'
+    to provide data for each write. Otherwise, zeros are used.
+    writes will be done in size of buflen.
+    """
     if reader:
         readfunc = reader
     else:
@@ -551,13 +738,11 @@
         def readfunc(size):
             return buf
 
+    size = util.file_size(path)
+    LOG.debug("%s is %s bytes. wiping with buflen=%s",
+              path, size, buflen)
+
     with open(path, "rb+") as fp:
-        # get the size by seeking to end.
-        fp.seek(0, 2)
-        size = fp.tell()
-        LOG.debug("%s is %s bytes. wiping with buflen=%s",
-                  path, size, buflen)
-        fp.seek(0)
         while True:
             pbuf = readfunc(buflen)
             pos = fp.tell()
@@ -574,16 +759,18 @@
 
 
 def quick_zero(path, partitions=True):
-    # zero 1M at front, 1M at end, and 1M at front
-    # if this is a block device and partitions is true, then
-    # zero 1M at front and end of each partition.
+    """
+    zero 1M at front, 1M at end, and 1M at front
+    if this is a block device and partitions is true, then
+    zero 1M at front and end of each partition.
+    """
     buflen = 1024
     count = 1024
     zero_size = buflen * count
     offsets = [0, -zero_size]
     is_block = is_block_device(path)
     if not (is_block or os.path.isfile(path)):
-        raise ValueError("%s: not an existing file or block device")
+        raise ValueError("%s: not an existing file or block device", path)
 
     if partitions and is_block:
         ptdata = sysfs_partition_data(path)
@@ -596,6 +783,9 @@
 
 
 def zero_file_at_offsets(path, offsets, buflen=1024, count=1024, strict=False):
+    """
+    write zeros to file at specified offsets
+    """
     bmsg = "{path} (size={size}): "
     m_short = bmsg + "{tot} bytes from {offset} > size."
     m_badoff = bmsg + "invalid offset {offset}."
@@ -657,15 +847,13 @@
     if mode == "pvremove":
         # We need to use --force --force in case it's already in a volgroup and
         # pvremove doesn't want to remove it
-        cmds = []
-        cmds.append(["pvremove", "--force", "--force", "--yes", path])
-        cmds.append(["pvscan", "--cache"])
-        cmds.append(["vgscan", "--mknodes", "--cache"])
+
         # If pvremove is run and there is no label on the system,
         # then it exits with 5. That is also okay, because we might be
         # wiping something that is already blank
-        for cmd in cmds:
-            util.subp(cmd, rcs=[0, 5], capture=True)
+        util.subp(['pvremove', '--force', '--force', '--yes', path],
+                  rcs=[0, 5], capture=True)
+        lvm.lvm_scan()
     elif mode == "zero":
         wipe_file(path)
     elif mode == "random":
 
=== added file 'curtin/block/clear_holders.py'
--- curtin/block/clear_holders.py 1970-01-01 00:00:00 +0000
+++ curtin/block/clear_holders.py 2016-10-03 18:55:20 +0000
@@ -0,0 +1,387 @@
+# Copyright (C) 2016 Canonical Ltd.
+#
+# Author: Wesley Wiedenmeier <wesley.wiedenmeier@canonical.com>
+#
+# Curtin is free software: you can redistribute it and/or modify it under
+# the terms of the GNU Affero General Public License as published by the
+# Free Software Foundation, either version 3 of the License, or (at your
+# option) any later version.
+#
+# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
+# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
+# more details.
+#
+# You should have received a copy of the GNU Affero General Public License
+# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
+
+"""
+This module provides a mechanism for shutting down virtual storage layers on
+top of a block device, making it possible to reuse the block device without
+having to reboot the system
+"""
+
+import os
+
+from curtin import (block, udev, util)
+from curtin.block import lvm
+from curtin.log import LOG
+
+
+def _define_handlers_registry():
+    """
+    returns instantiated dev_types
+    """
+    return {
+        'partition': {'shutdown': wipe_superblock,
+                      'ident': identify_partition},
+        'lvm': {'shutdown': shutdown_lvm, 'ident': identify_lvm},
+        'crypt': {'shutdown': shutdown_crypt, 'ident': identify_crypt},
+        'raid': {'shutdown': shutdown_mdadm, 'ident': identify_mdadm},
+        'bcache': {'shutdown': shutdown_bcache, 'ident': identify_bcache},
+        'disk': {'ident': lambda x: False, 'shutdown': wipe_superblock},
+    }
+
+
+def get_dmsetup_uuid(device):
+    """
+    get the dm uuid for a specified dmsetup device
+    """
+    blockdev = block.sysfs_to_devpath(device)
+    (out, _) = util.subp(['dmsetup', 'info', blockdev, '-C', '-o', 'uuid',
+                          '--noheadings'], capture=True)
+    return out.strip()
+
+
+def get_bcache_using_dev(device):
+    """
+    Get the /sys/fs/bcache/ path of the bcache volume using specified device
+    """
+    # FIXME: when block.bcache is written this should be moved there
+    sysfs_path = block.sys_block_path(device)
+    return os.path.realpath(os.path.join(sysfs_path, 'bcache', 'cache'))
+
+
+def shutdown_bcache(device):
+    """
+    Shut down bcache for specified bcache device
+    """
+    bcache_shutdown_message = ('shutdown_bcache running on {} has determined '
+                               'that the device has already been shut down '
+                               'during handling of another bcache dev. '
+                               'skipping'.format(device))
+    if not os.path.exists(device):
+        LOG.info(bcache_shutdown_message)
+        return
+
+    bcache_sysfs = get_bcache_using_dev(device)
+    if not os.path.exists(bcache_sysfs):
+        LOG.info(bcache_shutdown_message)
+        return
+
+    LOG.debug('stopping bcache at: %s', bcache_sysfs)
+    util.write_file(os.path.join(bcache_sysfs, 'stop'), '1', mode=None)
+
+
+def shutdown_lvm(device):
+    """
+    Shutdown specified lvm device.
+    """
+    device = block.sys_block_path(device)
+    # lvm devices have a dm directory that containes a file 'name' containing
+    # '{volume group}-{logical volume}'. The volume can be freed using lvremove
+    name_file = os.path.join(device, 'dm', 'name')
+    (vg_name, lv_name) = lvm.split_lvm_name(util.load_file(name_file))
+    # use two --force flags here in case the volume group that this lv is
+    # attached two has been damaged
+    LOG.debug('running lvremove on %s/%s', vg_name, lv_name)
+    util.subp(['lvremove', '--force', '--force',
+               '{}/{}'.format(vg_name, lv_name)], rcs=[0, 5])
+    # if that was the last lvol in the volgroup, get rid of volgroup
+    if len(lvm.get_lvols_in_volgroup(vg_name)) == 0:
+        util.subp(['vgremove', '--force', '--force', vg_name], rcs=[0, 5])
+    # refresh lvmetad
+    lvm.lvm_scan()
+
+
+def shutdown_crypt(device):
+    """
+    Shutdown specified cryptsetup device
+    """
+    blockdev = block.sysfs_to_devpath(device)
+    util.subp(['cryptsetup', 'remove', blockdev], capture=True)
+
+
+def shutdown_mdadm(device):
+    """
+    Shutdown specified mdadm device.
+    """
+    blockdev = block.sysfs_to_devpath(device)
+    LOG.debug('using mdadm.mdadm_stop on dev: %s', blockdev)
+    block.mdadm.mdadm_stop(blockdev)
+    block.mdadm.mdadm_remove(blockdev)
+
+
+def wipe_superblock(device):
+    """
+    Wrapper for block.wipe_volume compatible with shutdown function interface
+    """
+    blockdev = block.sysfs_to_devpath(device)
+    # when operating on a disk that used to have a dos part table with an
+    # extended partition, attempting to wipe the extended partition will fail
+    if block.is_extended_partition(blockdev):
+        LOG.info("extended partitions do not need wiping, so skipping: '%s'",
+                 blockdev)
+    else:
+        LOG.info('wiping superblock on %s', blockdev)
+        block.wipe_volume(blockdev, mode='superblock')
+
+
+def identify_lvm(device):
+    """
+    determine if specified device is a lvm device
+    """
+    return (block.path_to_kname(device).startswith('dm') and
+            get_dmsetup_uuid(device).startswith('LVM'))
+
+
+def identify_crypt(device):
+    """
+    determine if specified device is dm-crypt device
+    """
+    return (block.path_to_kname(device).startswith('dm') and
+            get_dmsetup_uuid(device).startswith('CRYPT'))
+
+
+def identify_mdadm(device):
+    """
+    determine if specified device is a mdadm device
+    """
+    return block.path_to_kname(device).startswith('md')
+
+
+def identify_bcache(device):
+    """
+    determine if specified device is a bcache device
+    """
+    return block.path_to_kname(device).startswith('bcache')
+
+
+def identify_partition(device):
+    """
+    determine if specified device is a partition
+    """
+    path = os.path.join(block.sys_block_path(device), 'partition')
+    return os.path.exists(path)
+
+
+def get_holders(device):
+    """
+    Look up any block device holders, return list of knames
+    """
+    # block.sys_block_path works when given a /sys or /dev path
+    sysfs_path = block.sys_block_path(device)
+    # get holders
+    holders = os.listdir(os.path.join(sysfs_path, 'holders'))
+    LOG.debug("devname '%s' had holders: %s", device, holders)
+    return holders
+
+
+def gen_holders_tree(device):
+    """
+    generate a tree representing the current storage hirearchy above 'device'
+    """
+    device = block.sys_block_path(device)
+    dev_name = block.path_to_kname(device)
+    # the holders for a device should consist of the devices in the holders/
+    # dir in sysfs and any partitions on the device. this ensures that a
+    # storage tree starting from a disk will include all devices holding the
+    # disk's partitions
+    holder_paths = ([block.sys_block_path(h) for h in get_holders(device)] +
+                    block.get_sysfs_partitions(device))
+    # the DEV_TYPE registry contains a function under the key 'ident' for each
+    # device type entry that returns true if the device passed to it is of the
+    # correct type. there should never be a situation in which multiple
+    # identify functions return true. therefore, it will always work to take
+    # the device type with the first identify function that returns true as the
+    # device type for the current device. in the event that no identify
+    # functions return true, the device will be treated as a disk
+    # (DEFAULT_DEV_TYPE). the identify function for disk never returns true.
+    # the next() builtin in python will not raise a StopIteration exception if
+    # there is a default value defined
+    dev_type = next((k for k, v in DEV_TYPES.items() if v['ident'](device)),
+                    DEFAULT_DEV_TYPE)
+    return {
+        'device': device, 'dev_type': dev_type, 'name': dev_name,
+        'holders': [gen_holders_tree(h) for h in holder_paths],
+    }
+
+
+def plan_shutdown_holder_trees(holders_trees):
+    """
+    plan best order to shut down holders in, taking into account high level
+    storage layers that may have many devices below them
+
+    returns a sorted list of descriptions of storage config entries including
+    their path in /sys/block and their dev type
+
+    can accept either a single storage tree or a list of storage trees assumed
+    to start at an equal place in storage hirearchy (i.e. a list of trees
+    starting from disk)
+    """
+    # holds a temporary registry of holders to allow cross references
+    # key = device sysfs path, value = {} of priority level, shutdown function
+    reg = {}
+
+    # normalize to list of trees
+    if not isinstance(holders_trees, (list, tuple)):
+        holders_trees = [holders_trees]
+
+    def flatten_holders_tree(tree, level=0):
+        """
+        add entries from holders tree to registry with level key corresponding
+        to how many layers from raw disks the current device is at
+        """
+        device = tree['device']
+
+        # always go with highest level if current device has been
+        # encountered already. since the device and everything above it is
+        # re-added to the registry it ensures that any increase of level
+        # required here will propagate down the tree
+        # this handles a scenario like mdadm + bcache, where the backing
+        # device for bcache is a 3nd level item like mdadm, but the cache
+        # device is 1st level (disk) or second level (partition), ensuring
+        # that the bcache item is always considered higher level than
+        # anything else regardless of whether it was added to the tree via
+        # the cache device or backing device first
+        if device in reg:
+            level = max(reg[device]['level'], level)
+
+        reg[device] = {'level': level, 'device': device,
+                       'dev_type': tree['dev_type']}
+
+        # handle holders above this level
+        for holder in tree['holders']:
+            flatten_holders_tree(holder, level=level + 1)
+
+    # flatten the holders tree into the registry
+    for holders_tree in holders_trees:
+        flatten_holders_tree(holders_tree)
+
+    # return list of entry dicts with highest level first
+    return [reg[k] for k in sorted(reg, key=lambda x: reg[x]['level'] * -1)]
+
+
+def format_holders_tree(holders_tree):
+    """
+    draw a nice dirgram of the holders tree
+    """
+    # spacer styles based on output of 'tree --charset=ascii'
+    spacers = (('`-- ', ' ' * 4), ('|-- ', '|' + ' ' * 3))
+
+    def format_tree(tree):
+        """
+        format entry and any subentries
+        """
+        result = [tree['name']]
+        holders = tree['holders']
+        for (holder_no, holder) in enumerate(holders):
+            spacer_style = spacers[min(len(holders) - (holder_no + 1), 1)]
+            subtree_lines = format_tree(holder)
+            for (line_no, line) in enumerate(subtree_lines):
+                result.append(spacer_style[min(line_no, 1)] + line)
+        return result
+
+    return '\n'.join(format_tree(holders_tree))
+
+
+def get_holder_types(tree):
+    """
+    get flattened list of types of holders in holders tree and the devices
+    they correspond to
+    """
+    types = {(tree['dev_type'], tree['device'])}
+    for holder in tree['holders']:
+        types.update(get_holder_types(holder))
+    return types
+
+
+def assert_clear(base_paths):
+    """
+    Check if all paths in base_paths are clear to use
+    """
+    valid = ('disk', 'partition')
+    if not isinstance(base_paths, (list, tuple)):
+        base_paths = [base_paths]
+    base_paths = [block.sys_block_path(path) for path in base_paths]
+    for holders_tree in [gen_holders_tree(p) for p in base_paths]:
+        if any(holder_type not in valid and path not in base_paths
+               for (holder_type, path) in get_holder_types(holders_tree)):
+            raise OSError('Storage not clear, remaining:\n{}'
+                          .format(format_holders_tree(holders_tree)))
+
+
+def clear_holders(base_paths, try_preserve=False):
+    """
+    Clear all storage layers depending on the devices specified in 'base_paths'
+    A single device or list of devices can be specified.
+    Device paths can be specified either as paths in /dev or /sys/block
+    Will throw OSError if any holders could not be shut down
+    """
+    # handle single path
+    if not isinstance(base_paths, (list, tuple)):
+        base_paths = [base_paths]
+
+    # get current holders and plan how to shut them down
+    holder_trees = [gen_holders_tree(path) for path in base_paths]
+    LOG.info('Current device storage tree:\n%s',
+             '\n'.join(format_holders_tree(tree) for tree in holder_trees))
+    ordered_devs = plan_shutdown_holder_trees(holder_trees)
+
+    # run shutdown functions
+    for dev_info in ordered_devs:
+        dev_type = DEV_TYPES.get(dev_info['dev_type'])
+        shutdown_function = dev_type.get('shutdown')
+        if not shutdown_function:
+            continue
+        if try_preserve and shutdown_function in DATA_DESTROYING_HANDLERS:
+            LOG.info('shutdown function for holder type: %s is destructive. '
+                     'attempting to preserve data, so not skipping' %
+                     dev_info['dev_type'])
+            continue
+        LOG.info("shutdown running on holder type: '%s' syspath: '%s'",
+                 dev_info['dev_type'], dev_info['device'])
+        shutdown_function(dev_info['device'])
+        udev.udevadm_settle()
+
+
+def start_clear_holders_deps():
+    """
+    prepare system for clear holders to be able to scan old devices
+    """
+    # a mdadm scan has to be started in case there is a md device that needs to
+    # be detected. if the scan fails, it is either because there are no mdadm
+    # devices on the system, or because there is a mdadm device in a damaged
+    # state that could not be started. due to the nature of mdadm tools, it is
+    # difficult to know which is the case. if any errors did occur, then ignore
+    # them, since no action needs to be taken if there were no mdadm devices on
+    # the system, and in the case where there is some mdadm metadata on a disk,
+    # but there was not enough to start the array, the call to wipe_volume on
+    # all disks and partitions should be sufficient to remove the mdadm
+    # metadata
+    block.mdadm.mdadm_assemble(scan=True, ignore_errors=True)
+    # the bcache module needs to be present to properly detect bcache devs
+    # on some systems (precise without hwe kernel) it may not be possible to
+    # lad the bcache module bcause it is not present in the kernel. if this
+    # happens then there is no need to halt installation, as the bcache devices
+    # will never appear and will never prevent the disk from being reformatted
+    util.subp(['modprobe', 'bcache'], rcs=[0, 1])
+
+
+# anything that is not identified can assumed to be a 'disk' or similar
+DEFAULT_DEV_TYPE = 'disk'
+# handlers that should not be run if an attempt is being made to preserve data
+DATA_DESTROYING_HANDLERS = [wipe_superblock]
+# types of devices that could be encountered by clear holders and functions to
+# identify them and shut them down
+DEV_TYPES = _define_handlers_registry()
=== added file 'curtin/block/lvm.py'
--- curtin/block/lvm.py 1970-01-01 00:00:00 +0000
+++ curtin/block/lvm.py 2016-10-03 18:55:20 +0000
@@ -0,0 +1,96 @@
+# Copyright (C) 2016 Canonical Ltd.
+#
+# Author: Wesley Wiedenmeier <wesley.wiedenmeier@canonical.com>
+#
+# Curtin is free software: you can redistribute it and/or modify it under
+# the terms of the GNU Affero General Public License as published by the
+# Free Software Foundation, either version 3 of the License, or (at your
+# option) any later version.
+#
+# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
+# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
+# more details.
+#
+# You should have received a copy of the GNU Affero General Public License
+# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
+
+"""
+This module provides some helper functions for manipulating lvm devices
+"""
+
+from curtin import util
+from curtin.log import LOG
+import os
+
+# separator to use for lvm/dm tools
+_SEP = '='
+
+
+def _filter_lvm_info(lvtool, match_field, query_field, match_key):
+    """
+    filter output of pv/vg/lvdisplay tools
+    """
+    (out, _) = util.subp([lvtool, '-C', '--separator', _SEP, '--noheadings',
+                          '-o', ','.join([match_field, query_field])],
+                         capture=True)
+    return [qf for (mf, qf) in
+            [l.strip().split(_SEP) for l in out.strip().splitlines()]
+            if mf == match_key]
+
+
+def get_pvols_in_volgroup(vg_name):
+    """
+    get physical volumes used by volgroup
+    """
+    return _filter_lvm_info('pvdisplay', 'vg_name', 'pv_name', vg_name)
+
+
+def get_lvols_in_volgroup(vg_name):
+    """
+    get logical volumes in volgroup
+    """
+    return _filter_lvm_info('lvdisplay', 'vg_name', 'lv_name', vg_name)
+
+
+def split_lvm_name(full):
+    """
+    split full lvm name into tuple of (volgroup, lv_name)
+    """
+    # 'dmsetup splitname' is the authoratative source for lvm name parsing
+    (out, _) = util.subp(['dmsetup', 'splitname', full, '-c', '--noheadings',
+                          '--separator', _SEP, '-o', 'vg_name,lv_name'],
+                         capture=True)
+    return out.strip().split(_SEP)
+
+
+def lvmetad_running():
+    """
+    check if lvmetad is running
+    """
+    return os.path.exists(os.environ.get('LVM_LVMETAD_PIDFILE',
+                                         '/run/lvmetad.pid'))
+
+
+def lvm_scan():
+    """
+    run full scan for volgroups, logical volumes and physical volumes
+    """
+    # the lvm tools lvscan, vgscan and pvscan on ubuntu precise do not
+    # support the flag --cache. the flag is present for the tools in ubuntu
+    # trusty and later. since lvmetad is used in current releases of
+    # ubuntu, the --cache flag is needed to ensure that the data cached by
+    # lvmetad is updated.
+
+    # before appending the cache flag though, check if lvmetad is running. this
+    # ensures that we do the right thing even if lvmetad is supported but is
+    # not running
+    release = util.lsb_release().get('codename')
+    if release in [None, 'UNAVAILABLE']:
+        LOG.warning('unable to find release number, assuming xenial or later')
+        release = 'xenial'
+
+    for cmd in [['pvscan'], ['vgscan', '--mknodes']]:
+        if release != 'precise' and lvmetad_running():
+            cmd.append('--cache')
+        util.subp(cmd, capture=True)
=== modified file 'curtin/block/mdadm.py'
--- curtin/block/mdadm.py 2016-05-10 16:13:29 +0000
+++ curtin/block/mdadm.py 2016-10-03 18:55:20 +0000
@@ -28,7 +28,7 @@
 from subprocess import CalledProcessError
 
 from curtin.block import (dev_short, dev_path, is_valid_device, sys_block_path)
-from curtin import util
+from curtin import (util, udev)
 from curtin.log import LOG
 
 NOSPARE_RAID_LEVELS = [
@@ -117,21 +117,34 @@
 #
 
 
-def mdadm_assemble(md_devname=None, devices=[], spares=[], scan=False):
+def mdadm_assemble(md_devname=None, devices=[], spares=[], scan=False,
+                   ignore_errors=False):
     # md_devname is a /dev/XXXX
     # devices is non-empty list of /dev/xxx
     # if spares is non-empt list append of /dev/xxx
     cmd = ["mdadm", "--assemble"]
     if scan:
-        cmd += ['--scan']
+        cmd += ['--scan', '-v']
     else:
         valid_mdname(md_devname)
         cmd += [md_devname, "--run"] + devices
         if spares:
             cmd += spares
 
-    util.subp(cmd, capture=True, rcs=[0, 1, 2])
-    util.subp(["udevadm", "settle"])
+    try:
+        # mdadm assemble returns 1 when no arrays are found. this might not be
+        # an error depending on the situation this function was called in, so
+        # accept a return code of 1
+        # mdadm assemble returns 2 when called on an array that is already
+        # assembled. this is not an error, so accept return code of 2
+        # all other return codes can be accepted with ignore_error set to true
+        util.subp(cmd, capture=True, rcs=[0, 1, 2])
+    except util.ProcessExecutionError:
+        LOG.warning("mdadm_assemble had unexpected return code")
+        if not ignore_errors:
+            raise
+
+    udev.udevadm_settle()
 
 
 def mdadm_create(md_devname, raidlevel, devices, spares=None, md_name=""):
 
=== modified file 'curtin/block/mkfs.py'
--- curtin/block/mkfs.py 2016-05-10 16:13:29 +0000
+++ curtin/block/mkfs.py 2016-10-03 18:55:20 +0000
@@ -78,6 +78,7 @@
              "swap": "--uuid"},
    "force": {"btrfs": "--force",
              "ext": "-F",
+             "fat": "-I",
              "ntfs": "--force",
              "reiserfs": "-f",
              "swap": "--force",
@@ -91,6 +92,7 @@
                   "btrfs": "--sectorsize",
                   "ext": "-b",
                   "fat": "-S",
+                  "xfs": "-s",
                   "ntfs": "--sector-size",
                   "reiserfs": "--block-size"}
 }
@@ -165,12 +167,15 @@
     # use device logical block size to ensure properly formated filesystems
     (logical_bsize, physical_bsize) = block.get_blockdev_sector_size(path)
     if logical_bsize > 512:
+        lbs_str = ('size={}'.format(logical_bsize) if fs_family == "xfs"
+                   else str(logical_bsize))
         cmd.extend(get_flag_mapping("sectorsize", fs_family,
-                                    param=str(logical_bsize),
-                                    strict=strict))
-        # mkfs.vfat doesn't calculate this right for non-512b sector size
-        # lp:1569576 , d-i uses the same setting.
-        cmd.extend(["-s", "1"])
+                                    param=lbs_str, strict=strict))
+
+    if fs_family == 'fat':
+        # mkfs.vfat doesn't calculate this right for non-512b sector size
+        # lp:1569576 , d-i uses the same setting.
+        cmd.extend(["-s", "1"])
 
     if force:
         cmd.extend(get_flag_mapping("force", fs_family, strict=strict))
 
=== modified file 'curtin/commands/apply_net.py'
--- curtin/commands/apply_net.py 2016-05-10 16:13:29 +0000
+++ curtin/commands/apply_net.py 2016-10-03 18:55:20 +0000
@@ -26,6 +26,57 @@
 
 LOG = log.LOG
 
+IFUPDOWN_IPV6_MTU_PRE_HOOK = """#!/bin/bash -e
+# injected by curtin installer
+
+[ "${IFACE}" != "lo" ] || exit 0
+
+# Trigger only if MTU configured
+[ -n "${IF_MTU}" ] || exit 0
+
+read CUR_DEV_MTU </sys/class/net/${IFACE}/mtu ||:
+read CUR_IPV6_MTU </proc/sys/net/ipv6/conf/${IFACE}/mtu ||:
+[ -n "${CUR_DEV_MTU}" ] && echo ${CUR_DEV_MTU} > /run/network/${IFACE}_dev.mtu
+[ -n "${CUR_IPV6_MTU}" ] &&
+    echo ${CUR_IPV6_MTU} > /run/network/${IFACE}_ipv6.mtu
+exit 0
+"""
+
+IFUPDOWN_IPV6_MTU_POST_HOOK = """#!/bin/bash -e
+# injected by curtin installer
+
+[ "${IFACE}" != "lo" ] || exit 0
+
+# Trigger only if MTU configured
+[ -n "${IF_MTU}" ] || exit 0
+
+read PRE_DEV_MTU </run/network/${IFACE}_dev.mtu ||:
+read CUR_DEV_MTU </sys/class/net/${IFACE}/mtu ||:
+read PRE_IPV6_MTU </run/network/${IFACE}_ipv6.mtu ||:
+read CUR_IPV6_MTU </proc/sys/net/ipv6/conf/${IFACE}/mtu ||:
+
+if [ "${ADDRFAM}" = "inet6" ]; then
+    # We need to check the underlying interface MTU and
+    # raise it if the IPV6 mtu is larger
+    if [ ${CUR_DEV_MTU} -lt ${IF_MTU} ]; then
+        ip link set ${IFACE} mtu ${IF_MTU}
+    fi
+    # sysctl -q -e -w net.ipv6.conf.${IFACE}.mtu=${IF_MTU}
+    echo ${IF_MTU} >/proc/sys/net/ipv6/conf/${IFACE}/mtu ||:
+
+elif [ "${ADDRFAM}" = "inet" ]; then
+    # handle the clobber case where inet mtu changes v6 mtu.
+    # ifupdown will already have set dev mtu, so lower mtu
+    # if needed. If v6 mtu was larger, it get's clamped down
+    # to the dev MTU value.
+    if [ ${PRE_IPV6_MTU} -lt ${CUR_IPV6_MTU} ]; then
+        # sysctl -q -e -w net.ipv6.conf.${IFACE}.mtu=${PRE_IPV6_MTU}
+        echo ${PRE_IPV6_MTU} >/proc/sys/net/ipv6/conf/${IFACE}/mtu ||:
+    fi
+fi
+exit 0
+"""
+
 
 def apply_net(target, network_state=None, network_config=None):
     if network_state is None and network_config is None:
@@ -45,6 +96,108 @@
 
     net.render_network_state(target=target, network_state=ns)
 
+    _maybe_remove_legacy_eth0(target)
+    LOG.info('Attempting to remove ipv6 privacy extensions')
+    _disable_ipv6_privacy_extensions(target)
+    _patch_ifupdown_ipv6_mtu_hook(target)
+
+
+def _patch_ifupdown_ipv6_mtu_hook(target,
+                                  prehookfn="etc/network/if-pre-up.d/mtuipv6",
+                                  posthookfn="etc/network/if-up.d/mtuipv6"):
+
+    contents = {
+        'prehook': IFUPDOWN_IPV6_MTU_PRE_HOOK,
+        'posthook': IFUPDOWN_IPV6_MTU_POST_HOOK,
+    }
+
+    hookfn = {
+        'prehook': prehookfn,
+        'posthook': posthookfn,
+    }
+
+    for hook in ['prehook', 'posthook']:
+        fn = hookfn[hook]
+        cfg = util.target_path(target, path=fn)
+        LOG.info('Injecting fix for ipv6 mtu settings: %s', cfg)
+        util.write_file(cfg, contents[hook], mode=0o755)
+
+
+def _disable_ipv6_privacy_extensions(target,
+                                     path="etc/sysctl.d/10-ipv6-privacy.conf"):
+
+    """Ubuntu server image sets a preference to use IPv6 privacy extensions
+    by default; this races with the cloud-image desire to disable them.
+    Resolve this by allowing the cloud-image setting to win. """
+
+    cfg = util.target_path(target, path=path)
+    if not os.path.exists(cfg):
+        LOG.warn('Failed to find ipv6 privacy conf file %s', cfg)
+        return
+
+    bmsg = "Disabling IPv6 privacy extensions config may not apply."
+    try:
+        contents = util.load_file(cfg)
+        known_contents = ["net.ipv6.conf.all.use_tempaddr = 2",
+                          "net.ipv6.conf.default.use_tempaddr = 2"]
+        lines = [f.strip() for f in contents.splitlines()
+                 if not f.startswith("#")]
+        if lines == known_contents:
+            LOG.info('deleting file: %s', cfg)
+            util.del_file(cfg)
+            msg = "removed %s with known contents" % cfg
+            curtin_contents = '\n'.join(
+                ["# IPv6 Privacy Extensions (RFC 4941)",
+                 "# Disabled by curtin",
+                 "# net.ipv6.conf.all.use_tempaddr = 2",
+                 "# net.ipv6.conf.default.use_tempaddr = 2"])
+            util.write_file(cfg, curtin_contents)
+        else:
+            LOG.info('skipping, content didnt match')
+            LOG.debug("found content:\n%s", lines)
+            LOG.debug("expected contents:\n%s", known_contents)
+            msg = (bmsg + " '%s' exists with user configured content." % cfg)
+    except:
+        msg = bmsg + " %s exists, but could not be read." % cfg
+        LOG.exception(msg)
+        return
+
+
+def _maybe_remove_legacy_eth0(target,
+                              path="etc/network/interfaces.d/eth0.cfg"):
+    """Ubuntu cloud images previously included a 'eth0.cfg' that had
+    hard coded content. That file would interfere with the rendered
+    configuration if it was present.
+
+    if the file does not exist do nothing.
+    If the file exists:
+        - with known content, remove it and warn
+        - with unknown content, leave it and warn
+    """
+
+    cfg = util.target_path(target, path=path)
+    if not os.path.exists(cfg):
+        LOG.warn('Failed to find legacy network conf file %s', cfg)
+        return
+
+    bmsg = "Dynamic networking config may not apply."
+    try:
+        contents = util.load_file(cfg)
+        known_contents = ["auto eth0", "iface eth0 inet dhcp"]
+        lines = [f.strip() for f in contents.splitlines()
+                 if not f.startswith("#")]
+        if lines == known_contents:
+            util.del_file(cfg)
+            msg = "removed %s with known contents" % cfg
+        else:
+            msg = (bmsg + " '%s' exists with user configured content." % cfg)
+    except:
+        msg = bmsg + " %s exists, but could not be read." % cfg
+        LOG.exception(msg)
+        return
+
+    LOG.warn(msg)
+
 
 def apply_net_main(args):
     # curtin apply_net [--net-state=/config/netstate.yml] [--target=/]
@@ -76,8 +229,10 @@
         apply_net(target=state['target'],
                   network_state=state['network_state'],
                   network_config=state['network_config'])
+
     except Exception:
         LOG.exception('failed to apply network config')
+        return 1
 
     LOG.info('Applied network configuration successfully')
     sys.exit(0)
@@ -90,7 +245,7 @@
90 'metavar': 'NETSTATE', 'action': 'store',245 'metavar': 'NETSTATE', 'action': 'store',
91 'default': os.environ.get('OUTPUT_NETWORK_STATE')}),246 'default': os.environ.get('OUTPUT_NETWORK_STATE')}),
92 (('-t', '--target'),247 (('-t', '--target'),
93 {'help': ('target filesystem root to add swap file to. '248 {'help': ('target filesystem root to configure networking to. '
94 'default is env["TARGET_MOUNT_POINT"]'),249 'default is env["TARGET_MOUNT_POINT"]'),
95 'metavar': 'TARGET', 'action': 'store',250 'metavar': 'TARGET', 'action': 'store',
96 'default': os.environ.get('TARGET_MOUNT_POINT')}),251 'default': os.environ.get('TARGET_MOUNT_POINT')}),
97252
=== added file 'curtin/commands/apt_config.py'
--- curtin/commands/apt_config.py 1970-01-01 00:00:00 +0000
+++ curtin/commands/apt_config.py 2016-10-03 18:55:20 +0000
@@ -0,0 +1,668 @@
1# Copyright (C) 2016 Canonical Ltd.
2#
3# Author: Christian Ehrhardt <christian.ehrhardt@canonical.com>
4#
5# Curtin is free software: you can redistribute it and/or modify it under
6# the terms of the GNU Affero General Public License as published by the
7# Free Software Foundation, either version 3 of the License, or (at your
8# option) any later version.
9#
10# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
11# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
12# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
13# more details.
14#
15# You should have received a copy of the GNU Affero General Public License
16# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
17"""
18apt_config.py
19Handle the setup of apt-related tasks like proxies, mirrors, repositories.
20"""
21
22import argparse
23import glob
24import os
25import re
26import sys
27import yaml
28
29from curtin.log import LOG
30from curtin import (config, util, gpg)
31
32from . import populate_one_subcmd
33
34# this will match 'XXX:YYY' (ie, 'cloud-archive:foo' or 'ppa:bar')
35ADD_APT_REPO_MATCH = r"^[\w-]+:\w"
36
37# place where apt stores cached repository data
38APT_LISTS = "/var/lib/apt/lists"
39
40# Files to store proxy information
41APT_CONFIG_FN = "/etc/apt/apt.conf.d/94curtin-config"
42APT_PROXY_FN = "/etc/apt/apt.conf.d/90curtin-aptproxy"
43
44# Default keyserver to use
45DEFAULT_KEYSERVER = "keyserver.ubuntu.com"
46
47# Default archive mirrors
48PRIMARY_ARCH_MIRRORS = {"PRIMARY": "http://archive.ubuntu.com/ubuntu/",
49 "SECURITY": "http://security.ubuntu.com/ubuntu/"}
50PORTS_MIRRORS = {"PRIMARY": "http://ports.ubuntu.com/ubuntu-ports",
51 "SECURITY": "http://ports.ubuntu.com/ubuntu-ports"}
52PRIMARY_ARCHES = ['amd64', 'i386']
53PORTS_ARCHES = ['s390x', 'arm64', 'armhf', 'powerpc', 'ppc64el']
54
55
56def get_default_mirrors(arch=None):
57 """returns the default mirrors for the target. These depend on the
58 architecture, for more see:
59 https://wiki.ubuntu.com/UbuntuDevelopment/PackageArchive#Ports"""
60 if arch is None:
61 arch = util.get_architecture()
62 if arch in PRIMARY_ARCHES:
63 return PRIMARY_ARCH_MIRRORS.copy()
64 if arch in PORTS_ARCHES:
65 return PORTS_MIRRORS.copy()
66 raise ValueError("No default mirror known for arch %s" % arch)
67
68
69def handle_apt(cfg, target=None):
70 """ handle_apt
71 process the config for apt_config. This can be called from
72 curthooks if a global apt config was provided or via the "apt"
73 standalone command.
74 """
75 release = util.lsb_release(target=target)['codename']
76 arch = util.get_architecture(target)
77 mirrors = find_apt_mirror_info(cfg, arch)
78 LOG.debug("Apt Mirror info: %s", mirrors)
79
80 apply_debconf_selections(cfg, target)
81
82 if not config.value_as_boolean(cfg.get('preserve_sources_list',
83 True)):
84 generate_sources_list(cfg, release, mirrors, target)
85 rename_apt_lists(mirrors, target)
86
87 try:
88 apply_apt_proxy_config(cfg, target + APT_PROXY_FN,
89 target + APT_CONFIG_FN)
90 except (IOError, OSError):
91 LOG.exception("Failed to apply proxy or apt config info:")
92
93 # Process 'apt_source -> sources {dict}'
94 if 'sources' in cfg:
95 params = mirrors
96 params['RELEASE'] = release
97 params['MIRROR'] = mirrors["MIRROR"]
98
99 matcher = None
100 matchcfg = cfg.get('add_apt_repo_match', ADD_APT_REPO_MATCH)
101 if matchcfg:
102 matcher = re.compile(matchcfg).search
103
104 add_apt_sources(cfg['sources'], target,
105 template_params=params, aa_repo_match=matcher)
106
107
108def debconf_set_selections(selections, target=None):
109 util.subp(['debconf-set-selections'], data=selections, target=target,
110 capture=True)
111
112
113def dpkg_reconfigure(packages, target=None):
114    # For packages that are already installed and have preseed data, we
115    # populate the debconf database; the filesystem configuration would
116    # still be preferred on a subsequent dpkg-reconfigure. So we have to
117    # "know" how to unconfigure certain packages before reconfiguring
118    # them.
119 unhandled = []
120 to_config = []
121 for pkg in packages:
122 if pkg in CONFIG_CLEANERS:
123 LOG.debug("unconfiguring %s", pkg)
124 CONFIG_CLEANERS[pkg](target)
125 to_config.append(pkg)
126 else:
127 unhandled.append(pkg)
128
129 if len(unhandled):
130 LOG.warn("The following packages were installed and preseeded, "
131 "but cannot be unconfigured: %s", unhandled)
132
133 if len(to_config):
134 util.subp(['dpkg-reconfigure', '--frontend=noninteractive'] +
135 list(to_config), data=None, target=target, capture=True)
136
137
138def apply_debconf_selections(cfg, target=None):
139 """apply_debconf_selections - push content to debconf"""
140 # debconf_selections:
141 # set1: |
142 # cloud-init cloud-init/datasources multiselect MAAS
143 # set2: pkg pkg/value string bar
144 selsets = cfg.get('debconf_selections')
145 if not selsets:
146 LOG.debug("debconf_selections was not set in config")
147 return
148
149 selections = '\n'.join(
150 [selsets[key] for key in sorted(selsets.keys())])
151 debconf_set_selections(selections.encode() + b"\n", target=target)
152
153 # get a complete list of packages listed in input
154 pkgs_cfgd = set()
155 for key, content in selsets.items():
156 for line in content.splitlines():
157 if line.startswith("#"):
158 continue
159 pkg = re.sub(r"[:\s].*", "", line)
160 pkgs_cfgd.add(pkg)
161
162 pkgs_installed = util.get_installed_packages(target)
163
164 LOG.debug("pkgs_cfgd: %s", pkgs_cfgd)
165 LOG.debug("pkgs_installed: %s", pkgs_installed)
166 need_reconfig = pkgs_cfgd.intersection(pkgs_installed)
167
168 if len(need_reconfig) == 0:
169 LOG.debug("no need for reconfig")
170 return
171
172 dpkg_reconfigure(need_reconfig, target=target)
173
174
175def clean_cloud_init(target):
176 """clean out any local cloud-init config"""
177 flist = glob.glob(
178 util.target_path(target, "/etc/cloud/cloud.cfg.d/*dpkg*"))
179
180 LOG.debug("cleaning cloud-init config from: %s", flist)
181 for dpkg_cfg in flist:
182 os.unlink(dpkg_cfg)
183
184
185def mirrorurl_to_apt_fileprefix(mirror):
186 """ mirrorurl_to_apt_fileprefix
187 Convert a mirror url to the file prefix used by apt on disk to
188 store cache information for that mirror.
189    To do so:
190    - take off the URL scheme ('???://')
191    - drop any trailing /
192    - convert each remaining / in the string to _
193 """
194 string = mirror
195 if string.endswith("/"):
196 string = string[0:-1]
197 pos = string.find("://")
198 if pos >= 0:
199 string = string[pos + 3:]
200 string = string.replace("/", "_")
201 return string
202
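For illustration (derived from the steps above, not part of the diff), a default
mirror URL maps to the prefix apt uses under /var/lib/apt/lists:

    # hypothetical use of the helper above
    mirrorurl_to_apt_fileprefix("http://archive.ubuntu.com/ubuntu/")
    # -> 'archive.ubuntu.com_ubuntu'
    # apt list files then look like 'archive.ubuntu.com_ubuntu_dists_xenial_Release'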
203
204def rename_apt_lists(new_mirrors, target=None):
205 """rename_apt_lists - rename apt lists to preserve old cache data"""
206 default_mirrors = get_default_mirrors(util.get_architecture(target))
207
208 pre = util.target_path(target, APT_LISTS)
209 for (name, omirror) in default_mirrors.items():
210 nmirror = new_mirrors.get(name)
211 if not nmirror:
212 continue
213
214 oprefix = pre + os.path.sep + mirrorurl_to_apt_fileprefix(omirror)
215 nprefix = pre + os.path.sep + mirrorurl_to_apt_fileprefix(nmirror)
216 if oprefix == nprefix:
217 continue
218 olen = len(oprefix)
219 for filename in glob.glob("%s_*" % oprefix):
220 newname = "%s%s" % (nprefix, filename[olen:])
221 LOG.debug("Renaming apt list %s to %s", filename, newname)
222 try:
223 os.rename(filename, newname)
224 except OSError:
225                # since this is a best-effort task, warn but don't fail
226 LOG.warn("Failed to rename apt list:", exc_info=True)
227
228
229def mirror_to_placeholder(tmpl, mirror, placeholder):
230 """ mirror_to_placeholder
231 replace the specified mirror in a template with a placeholder string
232    Checks for existence of the expected mirror and warns if not found
233 """
234 if mirror not in tmpl:
235 LOG.warn("Expected mirror '%s' not found in: %s", mirror, tmpl)
236 return tmpl.replace(mirror, placeholder)
237
238
239def map_known_suites(suite):
240 """there are a few default names which will be auto-extended.
241 This comes at the inability to use those names literally as suites,
242 but on the other hand increases readability of the cfg quite a lot"""
243 mapping = {'updates': '$RELEASE-updates',
244 'backports': '$RELEASE-backports',
245 'security': '$RELEASE-security',
246 'proposed': '$RELEASE-proposed',
247 'release': '$RELEASE'}
248 try:
249 retsuite = mapping[suite]
250 except KeyError:
251 retsuite = suite
252 return retsuite
253
254
255def disable_suites(disabled, src, release):
256 """reads the config for suites to be disabled and removes those
257 from the template"""
258 if not disabled:
259 return src
260
261 retsrc = src
262 for suite in disabled:
263 suite = map_known_suites(suite)
264 releasesuite = util.render_string(suite, {'RELEASE': release})
265 LOG.debug("Disabling suite %s as %s", suite, releasesuite)
266
267 newsrc = ""
268 for line in retsrc.splitlines(True):
269 if line.startswith("#"):
270 newsrc += line
271 continue
272
273 # sources.list allow options in cols[1] which can have spaces
274 # so the actual suite can be [2] or later. example:
275 # deb [ arch=amd64,armel k=v ] http://example.com/debian
276 cols = line.split()
277 if len(cols) > 1:
278 pcol = 2
279 if cols[1].startswith("["):
280 for col in cols[1:]:
281 pcol += 1
282 if col.endswith("]"):
283 break
284
285 if cols[pcol] == releasesuite:
286 line = '# suite disabled by curtin: %s' % line
287 newsrc += line
288 retsrc = newsrc
289
290 return retsrc
291
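For illustration, with release 'xenial' and disable_suites: [backports], the
suite expands to 'xenial-backports' and a matching line is commented out
(example input/output only):

    # before
    deb http://archive.ubuntu.com/ubuntu xenial-backports main restricted
    # after
    # suite disabled by curtin: deb http://archive.ubuntu.com/ubuntu xenial-backports main restricted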
292
293def generate_sources_list(cfg, release, mirrors, target=None):
294 """ generate_sources_list
295 create a source.list file based on a custom or default template
296 by replacing mirrors and release in the template
297 """
298 default_mirrors = get_default_mirrors(util.get_architecture(target))
299 aptsrc = "/etc/apt/sources.list"
300 params = {'RELEASE': release}
301 for k in mirrors:
302 params[k] = mirrors[k]
303
304 tmpl = cfg.get('sources_list', None)
305 if tmpl is None:
306 LOG.info("No custom template provided, fall back to modify"
307 "mirrors in %s on the target system", aptsrc)
308 tmpl = util.load_file(util.target_path(target, aptsrc))
309 # Strategy if no custom template was provided:
310 # - Only replacing mirrors
311 # - no reason to replace "release" as it is from target anyway
312 # - The less we depend upon, the more stable this is against changes
313 # - warn if expected original content wasn't found
314 tmpl = mirror_to_placeholder(tmpl, default_mirrors['PRIMARY'],
315 "$MIRROR")
316 tmpl = mirror_to_placeholder(tmpl, default_mirrors['SECURITY'],
317 "$SECURITY")
318
319 orig = util.target_path(target, aptsrc)
320 if os.path.exists(orig):
321 os.rename(orig, orig + ".curtin.old")
322
323 rendered = util.render_string(tmpl, params)
324 disabled = disable_suites(cfg.get('disable_suites'), rendered, release)
325 util.write_file(util.target_path(target, aptsrc), disabled, mode=0o644)
326
327 # protect the just generated sources.list from cloud-init
328 cloudfile = "/etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg"
329 # this has to work with older cloud-init as well, so use old key
330 cloudconf = yaml.dump({'apt_preserve_sources_list': True}, indent=1)
331 try:
332 util.write_file(util.target_path(target, cloudfile),
333 cloudconf, mode=0o644)
334 except IOError:
335 LOG.exception("Failed to protect source.list from cloud-init in (%s)",
336 util.target_path(target, cloudfile))
337 raise
338
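A minimal sketch of the apt config this function consumes (example values
only); $RELEASE, $MIRROR and $SECURITY are filled in from the release and the
mirror dict described above:

    apt:
      preserve_sources_list: false
      sources_list: |
        deb $MIRROR $RELEASE main restricted universe
        deb $SECURITY $RELEASE-security main restricted universe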
339
340def add_apt_key_raw(key, target=None):
341 """
342 actual adding of a key as defined in key argument
343 to the system
344 """
345 LOG.debug("Adding key:\n'%s'", key)
346 try:
347 util.subp(['apt-key', 'add', '-'], data=key.encode(), target=target)
348 except util.ProcessExecutionError:
349 LOG.exception("failed to add apt GPG Key to apt keyring")
350 raise
351
352
353def add_apt_key(ent, target=None):
354 """
355 Add key to the system as defined in ent (if any).
356    Supports raw keys or keyids.
357    The latter will first be fetched from a keyserver to get the raw key.
358 """
359 if 'keyid' in ent and 'key' not in ent:
360 keyserver = DEFAULT_KEYSERVER
361 if 'keyserver' in ent:
362 keyserver = ent['keyserver']
363
364 ent['key'] = gpg.getkeybyid(ent['keyid'], keyserver)
365
366 if 'key' in ent:
367 add_apt_key_raw(ent['key'], target)
368
369
370def add_apt_sources(srcdict, target=None, template_params=None,
371 aa_repo_match=None):
372 """
373 add entries in /etc/apt/sources.list.d for each abbreviated
374    sources.list entry in 'srcdict'. When rendering the template, also
375    include the values from the 'template_params' dictionary.
376 """
377 if template_params is None:
378 template_params = {}
379
380 if aa_repo_match is None:
381 raise ValueError('did not get a valid repo matcher')
382
383 if not isinstance(srcdict, dict):
384 raise TypeError('unknown apt format: %s' % (srcdict))
385
386 for filename in srcdict:
387 ent = srcdict[filename]
388 if 'filename' not in ent:
389 ent['filename'] = filename
390
391 add_apt_key(ent, target)
392
393 if 'source' not in ent:
394 continue
395 source = ent['source']
396 source = util.render_string(source, template_params)
397
398 if not ent['filename'].startswith("/"):
399 ent['filename'] = os.path.join("/etc/apt/sources.list.d/",
400 ent['filename'])
401 if not ent['filename'].endswith(".list"):
402 ent['filename'] += ".list"
403
404 if aa_repo_match(source):
405 try:
406 with util.ChrootableTarget(
407 target, sys_resolvconf=True) as in_chroot:
408 in_chroot.subp(["add-apt-repository", source])
409 except util.ProcessExecutionError:
410 LOG.exception("add-apt-repository failed.")
411 raise
412 continue
413
414 sourcefn = util.target_path(target, ent['filename'])
415 try:
416 contents = "%s\n" % (source)
417 util.write_file(sourcefn, contents, omode="a")
418 except IOError as detail:
419 LOG.exception("failed write to file %s: %s", sourcefn, detail)
420 raise
421
422 util.apt_update(target=target, force=True,
423 comment="apt-source changed config")
424
425 return
426
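An illustrative 'sources' dict (names and URIs are examples): entries matching
ADD_APT_REPO_MATCH (such as 'ppa:') are handed to add-apt-repository inside
the chroot, while anything else is rendered and appended to the named .list
file:

    apt:
      sources:
        my-ppa:
          source: "ppa:some-team/some-archive"
        my-repo.list:
          source: "deb $MIRROR $RELEASE multiverse"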
427
428def search_for_mirror(candidates):
429 """
430 Search through a list of mirror urls for one that works
431 This needs to return quickly.
432 """
433 if candidates is None:
434 return None
435
436 LOG.debug("search for mirror in candidates: '%s'", candidates)
437 for cand in candidates:
438 try:
439 if util.is_resolvable_url(cand):
440 LOG.debug("found working mirror: '%s'", cand)
441 return cand
442 except Exception:
443 pass
444 return None
445
446
447def update_mirror_info(pmirror, smirror, arch):
448 """sets security mirror to primary if not defined.
449 returns defaults if no mirrors are defined"""
450 if pmirror is not None:
451 if smirror is None:
452 smirror = pmirror
453 return {'PRIMARY': pmirror,
454 'SECURITY': smirror}
455 return get_default_mirrors(arch)
456
457
458def get_arch_mirrorconfig(cfg, mirrortype, arch):
459 """out of a list of potential mirror configurations select
460 and return the one matching the architecture (or default)"""
461 # select the mirror specification (if-any)
462 mirror_cfg_list = cfg.get(mirrortype, None)
463 if mirror_cfg_list is None:
464 return None
465
466 # select the specification matching the target arch
467 default = None
468 for mirror_cfg_elem in mirror_cfg_list:
469 arches = mirror_cfg_elem.get("arches")
470 if arch in arches:
471 return mirror_cfg_elem
472 if "default" in arches:
473 default = mirror_cfg_elem
474 return default
475
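A sketch of the mirror config matched here (URIs are examples): the entry
whose 'arches' contains the target architecture wins, with 'default' as the
fallback, and a direct 'uri' takes precedence over a 'search' list:

    apt:
      primary:
        - arches: [amd64, i386]
          uri: "http://local.mirror/ubuntu"
        - arches: [default]
          search:
            - "http://ports.mirror/ubuntu-ports"
            - "http://ports.ubuntu.com/ubuntu-ports"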
476
477def get_mirror(cfg, mirrortype, arch):
478 """pass the three potential stages of mirror specification
479 returns None is neither of them found anything otherwise the first
480 hit is returned"""
481 mcfg = get_arch_mirrorconfig(cfg, mirrortype, arch)
482 if mcfg is None:
483 return None
484
485 # directly specified
486 mirror = mcfg.get("uri", None)
487
488 # fallback to search if specified
489 if mirror is None:
490 # list of mirrors to try to resolve
491 mirror = search_for_mirror(mcfg.get("search", None))
492
493 return mirror
494
495
496def find_apt_mirror_info(cfg, arch=None):
497 """find_apt_mirror_info
498 find an apt_mirror given the cfg provided.
499 It can check for separate config of primary and security mirrors
500 If only primary is given security is assumed to be equal to primary
501 If the generic apt_mirror is given that is defining for both
502 """
503
504 if arch is None:
505 arch = util.get_architecture()
506 LOG.debug("got arch for mirror selection: %s", arch)
507 pmirror = get_mirror(cfg, "primary", arch)
508 LOG.debug("got primary mirror: %s", pmirror)
509 smirror = get_mirror(cfg, "security", arch)
510 LOG.debug("got security mirror: %s", smirror)
511
512 # Note: curtin has no cloud-datasource fallback
513
514 mirror_info = update_mirror_info(pmirror, smirror, arch)
515
516 # less complex replacements use only MIRROR, derive from primary
517 mirror_info["MIRROR"] = mirror_info["PRIMARY"]
518
519 return mirror_info
520
521
522def apply_apt_proxy_config(cfg, proxy_fname, config_fname):
523 """apply_apt_proxy_config
524    Applies any apt proxy and apt config settings from cfg, if specified
525 """
526 # Set up any apt proxy
527 cfgs = (('proxy', 'Acquire::http::Proxy "%s";'),
528 ('http_proxy', 'Acquire::http::Proxy "%s";'),
529 ('ftp_proxy', 'Acquire::ftp::Proxy "%s";'),
530 ('https_proxy', 'Acquire::https::Proxy "%s";'))
531
532 proxies = [fmt % cfg.get(name) for (name, fmt) in cfgs if cfg.get(name)]
533 if len(proxies):
534 LOG.debug("write apt proxy info to %s", proxy_fname)
535 util.write_file(proxy_fname, '\n'.join(proxies) + '\n')
536 elif os.path.isfile(proxy_fname):
537 util.del_file(proxy_fname)
538 LOG.debug("no apt proxy configured, removed %s", proxy_fname)
539
540 if cfg.get('conf', None):
541 LOG.debug("write apt config info to %s", config_fname)
542 util.write_file(config_fname, cfg.get('conf'))
543 elif os.path.isfile(config_fname):
544 util.del_file(config_fname)
545 LOG.debug("no apt config configured, removed %s", config_fname)
546
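For example, a config of proxy: "http://squid.internal:3128" (example URL)
results in a proxy file containing, per the format strings above:

    Acquire::http::Proxy "http://squid.internal:3128";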
547
548def apt_command(args):
549 """ Main entry point for curtin apt-config standalone command
550 This does not read the global config as handled by curthooks, but
551 instead one can specify a different "target" and a new cfg via --config
552 """
553 cfg = config.load_command_config(args, {})
554
555 if args.target is not None:
556 target = args.target
557 else:
558 state = util.load_command_environment()
559 target = state['target']
560
561 if target is None:
562 sys.stderr.write("Unable to find target. "
563 "Use --target or set TARGET_MOUNT_POINT\n")
564 sys.exit(2)
565
566 apt_cfg = cfg.get("apt")
567 # if no apt config section is available, do nothing
568 if apt_cfg is not None:
569 LOG.debug("Handling apt to target %s with config %s",
570 target, apt_cfg)
571 try:
572 with util.ChrootableTarget(target, sys_resolvconf=True):
573 handle_apt(apt_cfg, target)
574 except (RuntimeError, TypeError, ValueError, IOError):
575 LOG.exception("Failed to configure apt features '%s'", apt_cfg)
576 sys.exit(1)
577 else:
578 LOG.info("No apt config provided, skipping")
579
580 sys.exit(0)
581
582
583def translate_old_apt_features(cfg):
584 """translate the few old apt related features into the new config format"""
585 predef_apt_cfg = cfg.get("apt")
586 if predef_apt_cfg is None:
587 cfg['apt'] = {}
588 predef_apt_cfg = cfg.get("apt")
589
590 if cfg.get('apt_proxy') is not None:
591 if predef_apt_cfg.get('proxy') is not None:
592 msg = ("Error in apt_proxy configuration: "
593 "old and new format of apt features "
594 "are mutually exclusive")
595 LOG.error(msg)
596 raise ValueError(msg)
597
598 cfg['apt']['proxy'] = cfg.get('apt_proxy')
599 LOG.debug("Transferred %s into new format: %s", cfg.get('apt_proxy'),
600              cfg.get('apt'))
601 del cfg['apt_proxy']
602
603 if cfg.get('apt_mirrors') is not None:
604 if predef_apt_cfg.get('mirrors') is not None:
605 msg = ("Error in apt_mirror configuration: "
606 "old and new format of apt features "
607 "are mutually exclusive")
608 LOG.error(msg)
609 raise ValueError(msg)
610
611 old = cfg.get('apt_mirrors')
612 cfg['apt']['primary'] = [{"arches": ["default"],
613 "uri": old.get('ubuntu_archive')}]
614 cfg['apt']['security'] = [{"arches": ["default"],
615 "uri": old.get('ubuntu_security')}]
616 LOG.debug("Transferred %s into new format: %s", cfg.get('apt_mirror'),
617 cfg.get('apt'))
618 del cfg['apt_mirrors']
619 # to work this also needs to disable the default protection
620 psl = predef_apt_cfg.get('preserve_sources_list')
621 if psl is not None:
622 if config.value_as_boolean(psl) is True:
623 msg = ("Error in apt_mirror configuration: "
624 "apt_mirrors and preserve_sources_list: True "
625 "are mutually exclusive")
626 LOG.error(msg)
627 raise ValueError(msg)
628 cfg['apt']['preserve_sources_list'] = False
629
630 if cfg.get('debconf_selections') is not None:
631 if predef_apt_cfg.get('debconf_selections') is not None:
632 msg = ("Error in debconf_selections configuration: "
633 "old and new format of apt features "
634 "are mutually exclusive")
635 LOG.error(msg)
636 raise ValueError(msg)
637
638 selsets = cfg.get('debconf_selections')
639 cfg['apt']['debconf_selections'] = selsets
640 LOG.info("Transferred %s into new format: %s",
641 cfg.get('debconf_selections'),
642 cfg.get('apt'))
643 del cfg['debconf_selections']
644
645 return cfg
646
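For example (illustrative values), a legacy top-level key is moved under the
new 'apt' namespace:

    # old format
    apt_proxy: http://proxy.example.com:3128
    # new format produced by this function
    apt:
      proxy: http://proxy.example.com:3128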
647
648CMD_ARGUMENTS = (
649 ((('-c', '--config'),
650 {'help': 'read configuration from cfg', 'action': util.MergedCmdAppend,
651 'metavar': 'FILE', 'type': argparse.FileType("rb"),
652 'dest': 'cfgopts', 'default': []}),
653 (('-t', '--target'),
654 {'help': 'chroot to target. default is env[TARGET_MOUNT_POINT]',
655 'action': 'store', 'metavar': 'TARGET',
656 'default': os.environ.get('TARGET_MOUNT_POINT')}),)
657)
658
659
660def POPULATE_SUBCMD(parser):
661 """Populate subcommand option parsing for apt-config"""
662 populate_one_subcmd(parser, CMD_ARGUMENTS, apt_command)
663
664CONFIG_CLEANERS = {
665 'cloud-init': clean_cloud_init,
666}
667
668# vi: ts=4 expandtab syntax=python
=== added file 'curtin/commands/block_info.py'
--- curtin/commands/block_info.py 1970-01-01 00:00:00 +0000
+++ curtin/commands/block_info.py 2016-10-03 18:55:20 +0000
@@ -0,0 +1,75 @@
1# Copyright (C) 2016 Canonical Ltd.
2#
3# Author: Wesley Wiedenmeier <wesley.wiedenmeier@canonical.com>
4#
5# Curtin is free software: you can redistribute it and/or modify it under
6# the terms of the GNU Affero General Public License as published by the
7# Free Software Foundation, either version 3 of the License, or (at your
8# option) any later version.
9#
10# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
11# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
12# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
13# more details.
14#
15# You should have received a copy of the GNU Affero General Public License
16# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
17
18import os
19from . import populate_one_subcmd
20from curtin import (block, util)
21
22
23def block_info_main(args):
24 """get information about block devices, similar to lsblk"""
25 if not args.devices:
26 raise ValueError('devices to scan must be specified')
27 if not all(block.is_block_device(d) for d in args.devices):
28 raise ValueError('invalid device(s)')
29
30 def add_size_to_holders_tree(tree):
31 """add size information to generated holders trees"""
32 size_file = os.path.join(tree['device'], 'size')
33 # size file is always represented in 512 byte sectors even if
34 # underlying disk uses a larger logical_block_size
35 size = ((512 * int(util.load_file(size_file)))
36 if os.path.exists(size_file) else None)
37 tree['size'] = util.bytes2human(size) if args.human else str(size)
38 for holder in tree['holders']:
39 add_size_to_holders_tree(holder)
40 return tree
41
42 def format_name(tree):
43 """format information for human readable display"""
44 res = {
45 'name': ' - '.join((tree['name'], tree['dev_type'], tree['size'])),
46 'holders': []
47 }
48 for holder in tree['holders']:
49 res['holders'].append(format_name(holder))
50 return res
51
52 trees = [add_size_to_holders_tree(t) for t in
53 [block.clear_holders.gen_holders_tree(d) for d in args.devices]]
54
55 print(util.json_dumps(trees) if args.json else
56 '\n'.join(block.clear_holders.format_holders_tree(t) for t in
57 [format_name(tree) for tree in trees]))
58
59 return 0
60
61
62CMD_ARGUMENTS = (
63 ('devices',
64 {'help': 'devices to get info for', 'default': [], 'nargs': '+'}),
65 ('--human',
66 {'help': 'output size in human readable format', 'default': False,
67 'action': 'store_true'}),
68 (('-j', '--json'),
69 {'help': 'output data in json format', 'default': False,
70 'action': 'store_true'}),
71)
72
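Assuming the subcommand is registered as 'block-info' (naming is an
assumption, following the other curtin subcommands), a hypothetical
invocation would be:

    curtin block-info --human /dev/sda
    # prints one holders tree per device, e.g. 'sda - disk - 1.00T',
    # with any holders (bcache, raid, lvm, ...) nested beneath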
73
74def POPULATE_SUBCMD(parser):
75 populate_one_subcmd(parser, CMD_ARGUMENTS, block_info_main)
=== modified file 'curtin/commands/block_meta.py'
--- curtin/commands/block_meta.py 2016-10-03 18:00:41 +0000
+++ curtin/commands/block_meta.py 2016-10-03 18:55:20 +0000
@@ -17,9 +17,8 @@
 
 from collections import OrderedDict
 from curtin import (block, config, util)
-from curtin.block import mdadm
+from curtin.block import (mdadm, mkfs, clear_holders, lvm)
 from curtin.log import LOG
-from curtin.block import mkfs
 from curtin.reporter import events
 
 from . import populate_one_subcmd
@@ -28,7 +27,7 @@
 import glob
 import os
 import platform
-import re
+import string
 import sys
 import tempfile
 import time
@@ -129,128 +128,6 @@
     return "mbr"
 
 
132def block_find_sysfs_path(devname):
133 # return the path in sys for device named devname
134 # support either short name ('sda') or full path /dev/sda
135 # sda -> /sys/class/block/sda
136 # sda1 -> /sys/class/block/sda/sda1
137 if not devname:
138 raise ValueError("empty devname provided to find_sysfs_path")
139
140 sys_class_block = '/sys/class/block/'
141 basename = os.path.basename(devname)
142 # try without parent blockdevice, then prepend parent
143 paths = [
144 os.path.join(sys_class_block, basename),
145 os.path.join(sys_class_block,
146 re.split('[\d+]', basename)[0], basename),
147 ]
148
149 # find path to devname directory in sysfs
150 devname_sysfs = None
151 for path in paths:
152 if os.path.exists(path):
153 devname_sysfs = path
154
155 if devname_sysfs is None:
156 err = ('No sysfs path to device:'
157 ' {}'.format(devname_sysfs))
158 LOG.error(err)
159 raise ValueError(err)
160
161 return devname_sysfs
162
163
164def get_holders(devname):
165 # Look up any block device holders.
166 # Handle devices and partitions as devnames (vdb, md0, vdb7)
167 devname_sysfs = block_find_sysfs_path(devname)
168 if devname_sysfs:
169 holders = os.listdir(os.path.join(devname_sysfs, 'holders'))
170 LOG.debug("devname '%s' had holders: %s", devname, ','.join(holders))
171 return holders
172
173 LOG.debug('get_holders: did not find sysfs path for %s', devname)
174 return []
175
176
177def clear_holders(sys_block_path):
178 holders = os.listdir(os.path.join(sys_block_path, "holders"))
179 LOG.info("clear_holders running on '%s', with holders '%s'" %
180 (sys_block_path, holders))
181 for holder in holders:
182 # get path to holder in /sys/block, then clear it
183 try:
184 holder_realpath = os.path.realpath(
185 os.path.join(sys_block_path, "holders", holder))
186 clear_holders(holder_realpath)
187 except IOError as e:
188 # something might have already caused the holder to go away
189 if util.is_file_not_found_exc(e):
190 pass
191 pass
192
193 # detect what type of holder is using this volume and shut it down, need to
194 # find more robust name of doing detection
195 if "bcache" in sys_block_path:
196 # bcache device
197 part_devs = []
198 for part_dev in glob.glob(os.path.join(sys_block_path,
199 "slaves", "*", "dev")):
200 with open(part_dev, "r") as fp:
201 part_dev_id = fp.read().rstrip()
202 part_devs.append(
203 os.path.split(os.path.realpath(os.path.join("/dev/block",
204 part_dev_id)))[-1])
205 for cache_dev in glob.glob("/sys/fs/bcache/*/bdev*"):
206 for part_dev in part_devs:
207 if part_dev in os.path.realpath(cache_dev):
208 # This is our bcache device, stop it, wait for udev to
209 # settle
210 with open(os.path.join(os.path.split(cache_dev)[0],
211 "stop"), "w") as fp:
212 LOG.info("stopping: %s" % fp)
213 fp.write("1")
214 udevadm_settle()
215 break
216 for part_dev in part_devs:
217 block.wipe_volume(os.path.join("/dev", part_dev),
218 mode="superblock")
219
220 if os.path.exists(os.path.join(sys_block_path, "bcache")):
221 # bcache device that isn't running, if it were, we would have found it
222 # when we looked for holders
223 try:
224 with open(os.path.join(sys_block_path, "bcache", "set", "stop"),
225 "w") as fp:
226 LOG.info("stopping: %s" % fp)
227 fp.write("1")
228 except IOError as e:
229 if not util.is_file_not_found_exc(e):
230 raise e
231 with open(os.path.join(sys_block_path, "bcache", "stop"),
232 "w") as fp:
233 LOG.info("stopping: %s" % fp)
234 fp.write("1")
235 udevadm_settle()
236
237 if os.path.exists(os.path.join(sys_block_path, "md")):
238 # md device
239 block_dev = os.path.join("/dev/", os.path.split(sys_block_path)[-1])
240 # if these fail its okay, the array might not be assembled and thats
241 # fine
242 mdadm.mdadm_stop(block_dev)
243 mdadm.mdadm_remove(block_dev)
244
245 elif os.path.exists(os.path.join(sys_block_path, "dm")):
246 # Shut down any volgroups
247 with open(os.path.join(sys_block_path, "dm", "name"), "r") as fp:
248 name = fp.read().split('-')
249 util.subp(["lvremove", "--force", name[0].rstrip(), name[1].rstrip()],
250 rcs=[0, 5])
251 util.subp(["vgremove", name[0].rstrip()], rcs=[0, 5, 6])
252
253
 def devsync(devpath):
     LOG.debug('devsync for %s', devpath)
     util.subp(['partprobe', devpath], rcs=[0, 1])
@@ -265,14 +142,6 @@
     raise OSError('Failed to find device at path: %s', devpath)
 
 
268def determine_partition_kname(disk_kname, partition_number):
269 for dev_type in ["nvme", "mmcblk"]:
270 if disk_kname.startswith(dev_type):
271 partition_number = "p%s" % partition_number
272 break
273 return "%s%s" % (disk_kname, partition_number)
274
275
 def determine_partition_number(partition_id, storage_config):
     vol = storage_config.get(partition_id)
     partnumber = vol.get('number')
@@ -304,6 +173,18 @@
     return partnumber
 
 
176def sanitize_dname(dname):
177 """
178    dnames should be sanitized before writing rule files, in case MAAS has
179    emitted a dname with a special character.
180
181    Only letters, numbers, '-' and '_' are permitted, as this will be
182    used for a device path; spaces are also not permitted.
183 """
184 valid = string.digits + string.ascii_letters + '-_'
185 return ''.join(c if c in valid else '-' for c in dname)
186
187
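For instance (hypothetical input), per the character filter above:

    sanitize_dname('my disk/1')  # -> 'my-disk-1', written as 'my-disk-1.rules'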
 def make_dname(volume, storage_config):
     state = util.load_command_environment()
     rules_dir = os.path.join(state['scratch'], "rules.d")
@@ -321,7 +202,7 @@
     # we may not always be able to find a uniq identifier on devices with names
     if not ptuuid and vol.get('type') in ["disk", "partition"]:
         LOG.warning("Can't find a uuid for volume: {}. Skipping dname.".format(
-            dname))
+            volume))
         return
 
     rule = [
@@ -346,11 +227,24 @@
         volgroup_name = storage_config.get(vol.get('volgroup')).get('name')
         dname = "%s-%s" % (volgroup_name, dname)
         rule.append(compose_udev_equality("ENV{DM_NAME}", dname))
-    rule.append("SYMLINK+=\"disk/by-dname/%s\"" % dname)
+    else:
+        raise ValueError('cannot make dname for device with type: {}'
+                         .format(vol.get('type')))
+
+    # note: this sanitization is done here instead of for all name attributes
+    # at the beginning of storage configuration, as some devices, such as
+    # lvm devices may use the name attribute and may permit special chars
+    sanitized = sanitize_dname(dname)
+    if sanitized != dname:
+        LOG.warning(
+            "dname modified to remove invalid chars. old: '{}' new: '{}'"
+            .format(dname, sanitized))
+
+    rule.append("SYMLINK+=\"disk/by-dname/%s\"" % sanitized)
     LOG.debug("Writing dname udev rule '{}'".format(str(rule)))
     util.ensure_dir(rules_dir)
-    with open(os.path.join(rules_dir, volume), "w") as fp:
-        fp.write(', '.join(rule))
+    rule_file = os.path.join(rules_dir, '{}.rules'.format(sanitized))
+    util.write_file(rule_file, ', '.join(rule))
 
 
 def get_path_to_storage_volume(volume, storage_config):
@@ -368,9 +262,9 @@
         partnumber = determine_partition_number(vol.get('id'), storage_config)
         disk_block_path = get_path_to_storage_volume(vol.get('device'),
                                                      storage_config)
-        (base_path, disk_kname) = os.path.split(disk_block_path)
-        partition_kname = determine_partition_kname(disk_kname, partnumber)
-        volume_path = os.path.join(base_path, partition_kname)
+        disk_kname = block.path_to_kname(disk_block_path)
+        partition_kname = block.partition_kname(disk_kname, partnumber)
+        volume_path = block.kname_to_path(partition_kname)
         devsync_vol = os.path.join(disk_block_path)
 
     elif vol.get('type') == "disk":
@@ -419,13 +313,15 @@
         # block devs are in the slaves dir there. Then, those blockdevs can be
         # checked against the kname of the devs in the config for the desired
         # bcache device. This is not very elegant though
-        backing_device_kname = os.path.split(get_path_to_storage_volume(
-            vol.get('backing_device'), storage_config))[-1]
+        backing_device_path = get_path_to_storage_volume(
+            vol.get('backing_device'), storage_config)
+        backing_device_kname = block.path_to_kname(backing_device_path)
         sys_path = list(filter(lambda x: backing_device_kname in x,
                                glob.glob("/sys/block/bcache*/slaves/*")))[0]
         while "bcache" not in os.path.split(sys_path)[-1]:
             sys_path = os.path.split(sys_path)[0]
-        volume_path = os.path.join("/dev", os.path.split(sys_path)[-1])
+        bcache_kname = block.path_to_kname(sys_path)
+        volume_path = block.kname_to_path(bcache_kname)
         LOG.debug('got bcache volume path {}'.format(volume_path))
 
     else:
@@ -442,62 +338,35 @@
 
 
 def disk_handler(info, storage_config):
+    _dos_names = ['dos', 'msdos']
     ptable = info.get('ptable')
-
     disk = get_path_to_storage_volume(info.get('id'), storage_config)
 
-    # Handle preserve flag
-    if info.get('preserve'):
-        if not ptable:
-            # Don't need to check state, return
-            return
-
-        # Check state of current ptable
-        try:
-            (out, _err) = util.subp(["blkid", "-o", "export", disk],
-                                    capture=True)
-        except util.ProcessExecutionError:
-            raise ValueError("disk '%s' has no readable partition table or \
-                cannot be accessed, but preserve is set to true, so cannot \
-                continue")
-        current_ptable = list(filter(lambda x: "PTTYPE" in x,
-                                     out.splitlines()))[0].split("=")[-1]
-        if current_ptable == "dos" and ptable != "msdos" or \
-                current_ptable == "gpt" and ptable != "gpt":
-            raise ValueError("disk '%s' does not have correct \
-                partition table, but preserve is set to true, so not \
-                creating table, so not creating table." % info.get('id'))
-        LOG.info("disk '%s' marked to be preserved, so keeping partition \
-            table")
-        return
-
-    # Wipe the disk
-    if info.get('wipe') and info.get('wipe') != "none":
-        # The disk has a lable, clear all partitions
-        mdadm.mdadm_assemble(scan=True)
-        disk_kname = os.path.split(disk)[-1]
-        syspath_partitions = list(
-            os.path.split(prt)[0] for prt in
-            glob.glob("/sys/block/%s/*/partition" % disk_kname))
-        for partition in syspath_partitions:
-            clear_holders(partition)
-            with open(os.path.join(partition, "dev"), "r") as fp:
-                block_no = fp.read().rstrip()
-            partition_path = os.path.realpath(
-                os.path.join("/dev/block", block_no))
-            block.wipe_volume(partition_path, mode=info.get('wipe'))
-
-        clear_holders("/sys/block/%s" % disk_kname)
-        block.wipe_volume(disk, mode=info.get('wipe'))
-
-    # Create partition table on disk
-    if info.get('ptable'):
-        LOG.info("labeling device: '%s' with '%s' partition table", disk,
-                 ptable)
-        if ptable == "gpt":
-            util.subp(["sgdisk", "--clear", disk])
-        elif ptable == "msdos":
-            util.subp(["parted", disk, "--script", "mklabel", "msdos"])
+    if config.value_as_boolean(info.get('preserve')):
+        # Handle preserve flag, verifying if ptable specified in config
+        if config.value_as_boolean(ptable):
+            current_ptable = block.get_part_table_type(disk)
+            if not ((ptable in _dos_names and current_ptable in _dos_names) or
+                    (ptable == 'gpt' and current_ptable == 'gpt')):
+                raise ValueError(
+                    "disk '%s' does not have correct partition table or "
+                    "cannot be read, but preserve is set to true. "
+                    "cannot continue installation." % info.get('id'))
+        LOG.info("disk '%s' marked to be preserved, so keeping partition "
+                 "table" % disk)
+    else:
+        # wipe the disk and create the partition table if instructed to do so
+        if config.value_as_boolean(info.get('wipe')):
+            block.wipe_volume(disk, mode=info.get('wipe'))
+        if config.value_as_boolean(ptable):
+            LOG.info("labeling device: '%s' with '%s' partition table", disk,
+                     ptable)
+            if ptable == "gpt":
+                util.subp(["sgdisk", "--clear", disk])
+            elif ptable in _dos_names:
+                util.subp(["parted", disk, "--script", "mklabel", "msdos"])
+            else:
+                raise ValueError('invalid partition table type: %s', ptable)
 
     # Make the name if needed
     if info.get('name'):
@@ -542,13 +411,12 @@
 
     disk = get_path_to_storage_volume(device, storage_config)
     partnumber = determine_partition_number(info.get('id'), storage_config)
-
-    disk_kname = os.path.split(
-        get_path_to_storage_volume(device, storage_config))[-1]
+    disk_kname = block.path_to_kname(disk)
+    disk_sysfs_path = block.sys_block_path(disk)
     # consider the disks logical sector size when calculating sectors
     try:
-        prefix = "/sys/block/%s/queue/" % disk_kname
-        with open(prefix + "logical_block_size", "r") as f:
+        lbs_path = os.path.join(disk_sysfs_path, 'queue', 'logical_block_size')
+        with open(lbs_path, 'r') as f:
             l = f.readline()
             logical_block_size_bytes = int(l)
     except:
@@ -566,17 +434,14 @@
                 extended_part_no = determine_partition_number(
                     key, storage_config)
                 break
-        partition_kname = determine_partition_kname(
-            disk_kname, extended_part_no)
-        previous_partition = "/sys/block/%s/%s/" % \
-            (disk_kname, partition_kname)
+        pnum = extended_part_no
     else:
         pnum = find_previous_partition(device, info['id'], storage_config)
-        LOG.debug("previous partition number for '%s' found to be '%s'",
-                  info.get('id'), pnum)
-        partition_kname = determine_partition_kname(disk_kname, pnum)
-        previous_partition = "/sys/block/%s/%s/" % \
-            (disk_kname, partition_kname)
+
+    LOG.debug("previous partition number for '%s' found to be '%s'",
+              info.get('id'), pnum)
+    partition_kname = block.partition_kname(disk_kname, pnum)
+    previous_partition = os.path.join(disk_sysfs_path, partition_kname)
     LOG.debug("previous partition: {}".format(previous_partition))
     # XXX: sys/block/X/{size,start} is *ALWAYS* in 512b value
     previous_size = util.load_file(os.path.join(previous_partition,
@@ -629,9 +494,9 @@
     length_sectors = length_sectors + (logdisks * alignment_offset)
 
     # Handle preserve flag
-    if info.get('preserve'):
+    if config.value_as_boolean(info.get('preserve')):
         return
-    elif storage_config.get(device).get('preserve'):
+    elif config.value_as_boolean(storage_config.get(device).get('preserve')):
         raise NotImplementedError("Partition '%s' is not marked to be \
             preserved, but device '%s' is. At this time, preserving devices \
             but not also the partitions on the devices is not supported, \
@@ -674,11 +539,16 @@
     else:
         raise ValueError("parent partition has invalid partition table")
 
-    # Wipe the partition if told to do so
-    if info.get('wipe') and info.get('wipe') != "none":
-        block.wipe_volume(
-            get_path_to_storage_volume(info.get('id'), storage_config),
-            mode=info.get('wipe'))
+    # Wipe the partition if told to do so, do not wipe dos extended partitions
+    # as this may damage the extended partition table
+    if config.value_as_boolean(info.get('wipe')):
+        if info.get('flag') == "extended":
+            LOG.warn("extended partitions do not need wiping, so skipping: "
+                     "'%s'" % info.get('id'))
+        else:
+            block.wipe_volume(
+                get_path_to_storage_volume(info.get('id'), storage_config),
+                mode=info.get('wipe'))
     # Make the name if needed
     if storage_config.get(device).get('name') and partition_type != 'extended':
         make_dname(info.get('id'), storage_config)
@@ -694,7 +564,7 @@
     volume_path = get_path_to_storage_volume(volume, storage_config)
 
     # Handle preserve flag
-    if info.get('preserve'):
+    if config.value_as_boolean(info.get('preserve')):
         # Volume marked to be preserved, not formatting
         return
 
@@ -776,26 +646,21 @@
                                                  storage_config))
 
     # Handle preserve flag
-    if info.get('preserve'):
+    if config.value_as_boolean(info.get('preserve')):
         # LVM will probably be offline, so start it
         util.subp(["vgchange", "-a", "y"])
         # Verify that volgroup exists and contains all specified devices
-        current_paths = []
-        (out, _err) = util.subp(["pvdisplay", "-C", "--separator", "=", "-o",
-                                 "vg_name,pv_name", "--noheadings"],
-                                capture=True)
-        for line in out.splitlines():
-            if name in line:
-                current_paths.append(line.split("=")[-1])
-        if set(current_paths) != set(device_paths):
-            raise ValueError("volgroup '%s' marked to be preserved, but does \
-                not exist or does not contain the right physical \
-                volumes" % info.get('id'))
+        if set(lvm.get_pvols_in_volgroup(name)) != set(device_paths):
+            raise ValueError("volgroup '%s' marked to be preserved, but does "
+                             "not exist or does not contain the right "
+                             "physical volumes" % info.get('id'))
     else:
-        # Create vgcreate command and run
-        cmd = ["vgcreate", name]
-        cmd.extend(device_paths)
-        util.subp(cmd)
+        # Create vgcreate command and run
+        # capture output to avoid printing it to log
+        util.subp(['vgcreate', name] + device_paths, capture=True)
+
+        # refresh lvmetad
+        lvm.lvm_scan()
 
 
 def lvm_partition_handler(info, storage_config):
@@ -805,28 +670,23 @@
         raise ValueError("lvm volgroup for lvm partition must be specified")
     if not name:
         raise ValueError("lvm partition name must be specified")
+    if info.get('ptable'):
+        raise ValueError("Partition tables on top of lvm logical volumes is "
+                         "not supported")
 
     # Handle preserve flag
-    if info.get('preserve'):
-        (out, _err) = util.subp(["lvdisplay", "-C", "--separator", "=", "-o",
-                                 "lv_name,vg_name", "--noheadings"],
-                                capture=True)
-        found = False
-        for line in out.splitlines():
-            if name in line:
-                if volgroup == line.split("=")[-1]:
-                    found = True
-                    break
-        if not found:
-            raise ValueError("lvm partition '%s' marked to be preserved, but \
-                does not exist or does not match storage \
-                configuration" % info.get('id'))
+    if config.value_as_boolean(info.get('preserve')):
+        if name not in lvm.get_lvols_in_volgroup(volgroup):
+            raise ValueError("lvm partition '%s' marked to be preserved, but "
+                             "does not exist or does not match storage "
+                             "configuration" % info.get('id'))
     elif storage_config.get(info.get('volgroup')).get('preserve'):
-        raise NotImplementedError("Lvm Partition '%s' is not marked to be \
-            preserved, but volgroup '%s' is. At this time, preserving \
-            volgroups but not also the lvm partitions on the volgroup is \
-            not supported, because of the possibility of damaging lvm \
-            partitions intended to be preserved." % (info.get('id'), volgroup))
+        raise NotImplementedError(
+            "Lvm Partition '%s' is not marked to be preserved, but volgroup "
+            "'%s' is. At this time, preserving volgroups but not also the lvm "
+            "partitions on the volgroup is not supported, because of the "
+            "possibility of damaging lvm partitions intended to be "
+            "preserved." % (info.get('id'), volgroup))
     else:
         cmd = ["lvcreate", volgroup, "-n", name]
         if info.get('size'):
@@ -836,9 +696,8 @@
 
     util.subp(cmd)
 
-    if info.get('ptable'):
-        raise ValueError("Partition tables on top of lvm logical volumes is \
-            not supported")
+    # refresh lvmetad
+    lvm.lvm_scan()
 
     make_dname(info.get('id'), storage_config)
 
@@ -925,7 +784,7 @@
                          zip(spare_devices, spare_device_paths)))
 
     # Handle preserve flag
-    if info.get('preserve'):
+    if config.value_as_boolean(info.get('preserve')):
         # check if the array is already up, if not try to assemble
         if not mdadm.md_check(md_devname, raidlevel,
                               device_paths, spare_device_paths):
@@ -981,9 +840,6 @@
         raise ValueError("backing device and cache device for bcache"
                          " must be specified")
 
-    # The bcache module is not loaded when bcache is installed by apt-get, so
-    # we will load it now
-    util.subp(["modprobe", "bcache"])
     bcache_sysfs = "/sys/fs/bcache"
     udevadm_settle(exists=bcache_sysfs)
 
@@ -1003,7 +859,7 @@
                       bcache_device, expected)
             return
         LOG.debug('bcache device path not found: %s', expected)
-        local_holders = get_holders(bcache_device)
+        local_holders = clear_holders.get_holders(bcache_device)
         LOG.debug('got initial holders being "%s"', local_holders)
         if len(local_holders) == 0:
             raise ValueError("holders == 0 , expected non-zero")
@@ -1033,7 +889,7 @@
 
     if cache_device:
         # /sys/class/block/XXX/YYY/
-        cache_device_sysfs = block_find_sysfs_path(cache_device)
+        cache_device_sysfs = block.sys_block_path(cache_device)
 
         if os.path.exists(os.path.join(cache_device_sysfs, "bcache")):
             LOG.debug('caching device already exists at {}/bcache. Read '
@@ -1058,7 +914,7 @@
         ensure_bcache_is_registered(cache_device, target_sysfs_path)
 
     if backing_device:
-        backing_device_sysfs = block_find_sysfs_path(backing_device)
+        backing_device_sysfs = block.sys_block_path(backing_device)
         target_sysfs_path = os.path.join(backing_device_sysfs, "bcache")
         if not os.path.exists(os.path.join(backing_device_sysfs, "bcache")):
             util.subp(["make-bcache", "-B", backing_device])
@@ -1066,7 +922,7 @@
 
     # via the holders we can identify which bcache device we just created
     # for a given backing device
-    holders = get_holders(backing_device)
+    holders = clear_holders.get_holders(backing_device)
     if len(holders) != 1:
         err = ('Invalid number {} of holding devices:'
                ' "{}"'.format(len(holders), holders))
@@ -1158,6 +1014,21 @@
     # set up reportstack
     stack_prefix = state.get('report_stack_prefix', '')
 
+    # shut down any already existing storage layers above any disks used in
+    # config that have 'wipe' set
+    with events.ReportEventStack(
+            name=stack_prefix, reporting_enabled=True, level='INFO',
+            description="removing previous storage devices"):
+        clear_holders.start_clear_holders_deps()
+        disk_paths = [get_path_to_storage_volume(k, storage_config_dict)
+                      for (k, v) in storage_config_dict.items()
+                      if v.get('type') == 'disk' and
+                      config.value_as_boolean(v.get('wipe')) and
+                      not config.value_as_boolean(v.get('preserve'))]
+        clear_holders.clear_holders(disk_paths)
+        # if anything was not properly shut down, stop installation
+        clear_holders.assert_clear(disk_paths)
+
     for item_id, command in storage_config_dict.items():
         handler = command_handlers.get(command['type'])
         if not handler:
 
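A minimal storage config sketch (device path is an example) where this new
clear_holders pass would tear down any existing bcache/raid/lvm stacked on the
disk before it is repartitioned:

    storage:
      version: 1
      config:
        - id: sda
          type: disk
          ptable: gpt
          path: /dev/sda
          wipe: superblock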
=== modified file 'curtin/commands/block_wipe.py'
--- curtin/commands/block_wipe.py 2016-05-10 16:13:29 +0000
+++ curtin/commands/block_wipe.py 2016-10-03 18:55:20 +0000
@@ -21,7 +21,6 @@
 
 
 def wipe_main(args):
-    # curtin clear-holders device [device2 [device3]]
     for blockdev in args.devices:
         try:
             block.wipe_volume(blockdev, mode=args.mode)
@@ -36,7 +35,7 @@
 CMD_ARGUMENTS = (
     ((('-m', '--mode'),
       {'help': 'mode for wipe.', 'action': 'store',
-       'default': 'superblocks',
+       'default': 'superblock',
        'choices': ['zero', 'superblock', 'superblock-recursive', 'random']}),
     ('devices',
      {'help': 'devices to wipe', 'default': [], 'nargs': '+'}),
 
=== added file 'curtin/commands/clear_holders.py'
--- curtin/commands/clear_holders.py 1970-01-01 00:00:00 +0000
+++ curtin/commands/clear_holders.py 2016-10-03 18:55:20 +0000
@@ -0,0 +1,48 @@
1# Copyright (C) 2016 Canonical Ltd.
2#
3# Author: Wesley Wiedenmeier <wesley.wiedenmeier@canonical.com>
4#
5# Curtin is free software: you can redistribute it and/or modify it under
6# the terms of the GNU Affero General Public License as published by the
7# Free Software Foundation, either version 3 of the License, or (at your
8# option) any later version.
9#
10# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
11# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
12# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
13# more details.
14#
15# You should have received a copy of the GNU Affero General Public License
16# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
17
18from curtin import block
19from . import populate_one_subcmd
20
21
22def clear_holders_main(args):
23 """
24 wrapper for clear_holders accepting cli args
25 """
26 if (not all(block.is_block_device(device) for device in args.devices) or
27 len(args.devices) == 0):
28 raise ValueError('invalid devices specified')
29 block.clear_holders.start_clear_holders_deps()
30 block.clear_holders.clear_holders(args.devices, try_preserve=args.preserve)
31    if args.preserve:
32        print('ran clear_holders attempting to preserve data. however, '
33              'hotplug support for some devices may cause holders to restart.')
34 block.clear_holders.assert_clear(args.devices)
35
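Hypothetical command-line usage of this new subcommand (device path is an
example):

    curtin clear-holders /dev/sdb
    # or attempt a non-destructive shutdown of the holders:
    curtin clear-holders --preserve /dev/sdb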
36
37CMD_ARGUMENTS = (
38 (('devices',
39 {'help': 'devices to free', 'default': [], 'nargs': '+'}),
40 (('-p', '--preserve'),
41 {'help': 'try to shut down holders without erasing anything',
42 'default': False, 'action': 'store_true'}),
43 )
44)
45
46
47def POPULATE_SUBCMD(parser):
48 populate_one_subcmd(parser, CMD_ARGUMENTS, clear_holders_main)
049
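The same teardown sequence is usable from python as from the cli; a minimal sketch, with a made-up device path:

    from curtin.block import clear_holders

    devices = ['/dev/vdb']
    clear_holders.start_clear_holders_deps()  # prepare deps before teardown
    clear_holders.clear_holders(devices, try_preserve=True)
    clear_holders.assert_clear(devices)       # raises if any holders remain
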
=== modified file 'curtin/commands/curthooks.py'
--- curtin/commands/curthooks.py 2016-10-03 18:00:41 +0000
+++ curtin/commands/curthooks.py 2016-10-03 18:55:20 +0000
@@ -16,10 +16,8 @@
16# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
17
18import copy
19import glob
19import os
20import platform
22import re
21import sys
22import shutil
23import textwrap
@@ -30,8 +28,8 @@
28from curtin.log import LOG
29from curtin import swap
30from curtin import util
33from curtin import net
31from curtin.reporter import events
32from curtin.commands import apply_net, apt_config
33
34from . import populate_one_subcmd
35
@@ -90,45 +88,15 @@
88 info.get('perms', "0644")))
89
90
93def apt_config(cfg, target):
94 # cfg['apt_proxy']
95
96 proxy_cfg_path = os.path.sep.join(
97 [target, '/etc/apt/apt.conf.d/90curtin-aptproxy'])
98 if cfg.get('apt_proxy'):
99 util.write_file(
100 proxy_cfg_path,
101 content='Acquire::HTTP::Proxy "%s";\n' % cfg['apt_proxy'])
102 else:
103 if os.path.isfile(proxy_cfg_path):
104 os.unlink(proxy_cfg_path)
105
106 # cfg['apt_mirrors']
107 # apt_mirrors:
108 # ubuntu_archive: http://local.archive/ubuntu
109 # ubuntu_security: http://local.archive/ubuntu
110 sources_list = os.path.sep.join([target, '/etc/apt/sources.list'])
111 if (isinstance(cfg.get('apt_mirrors'), dict) and
112 os.path.isfile(sources_list)):
113 repls = [
114 ('ubuntu_archive', r'http://\S*[.]*archive.ubuntu.com/\S*'),
115 ('ubuntu_security', r'http://security.ubuntu.com/\S*'),
116 ]
117 content = None
118 for name, regex in repls:
119 mirror = cfg['apt_mirrors'].get(name)
120 if not mirror:
121 continue
122
123 if content is None:
124 with open(sources_list) as fp:
125 content = fp.read()
126 util.write_file(sources_list + ".dist", content)
127
128 content = re.sub(regex, mirror + " ", content)
129
130 if content is not None:
131 util.write_file(sources_list, content)
91def do_apt_config(cfg, target):
92 cfg = apt_config.translate_old_apt_features(cfg)
93 apt_cfg = cfg.get("apt")
94 if apt_cfg is not None:
95 LOG.info("curthooks handling apt to target %s with config %s",
96 target, apt_cfg)
97 apt_config.handle_apt(apt_cfg, target)
98 else:
99 LOG.info("No apt config provided, skipping")
100
101
102def disable_overlayroot(cfg, target):
@@ -140,51 +108,6 @@
108 shutil.move(local_conf, local_conf + ".old")
109
110
143def clean_cloud_init(target):
144 flist = glob.glob(
145 os.path.sep.join([target, "/etc/cloud/cloud.cfg.d/*dpkg*"]))
146
147 LOG.debug("cleaning cloud-init config from: %s" % flist)
148 for dpkg_cfg in flist:
149 os.unlink(dpkg_cfg)
150
151
152def _maybe_remove_legacy_eth0(target,
153 path="/etc/network/interfaces.d/eth0.cfg"):
154 """Ubuntu cloud images previously included a 'eth0.cfg' that had
155 hard coded content. That file would interfere with the rendered
156 configuration if it was present.
157
158 if the file does not exist do nothing.
159 If the file exists:
160 - with known content, remove it and warn
161 - with unknown content, leave it and warn
162 """
163
164 cfg = os.path.sep.join([target, path])
165 if not os.path.exists(cfg):
166 LOG.warn('Failed to find legacy conf file %s', cfg)
167 return
168
169 bmsg = "Dynamic networking config may not apply."
170 try:
171 contents = util.load_file(cfg)
172 known_contents = ["auto eth0", "iface eth0 inet dhcp"]
173 lines = [f.strip() for f in contents.splitlines()
174 if not f.startswith("#")]
175 if lines == known_contents:
176 util.del_file(cfg)
177 msg = "removed %s with known contents" % cfg
178 else:
179 msg = (bmsg + " '%s' exists with user configured content." % cfg)
180 except:
181 msg = bmsg + " %s exists, but could not be read." % cfg
182 LOG.exception(msg)
183 return
184
185 LOG.warn(msg)
186
187
111def setup_zipl(cfg, target):
112 if platform.machine() != 's390x':
113 return
@@ -232,8 +155,8 @@
155def run_zipl(cfg, target):
156 if platform.machine() != 's390x':
157 return
235 with util.RunInChroot(target) as in_chroot:
236 in_chroot(['zipl'])
158 with util.ChrootableTarget(target) as in_chroot:
159 in_chroot.subp(['zipl'])
160
161
162def install_kernel(cfg, target):
@@ -250,126 +173,45 @@
173 mapping = copy.deepcopy(KERNEL_MAPPING)
174 config.merge_config(mapping, kernel_cfg.get('mapping', {}))
175
176 if kernel_package:
177 util.install_packages([kernel_package], target=target)
178 return
179
180 # uname[2] is kernel name (ie: 3.16.0-7-generic)
181 # version gets X.Y.Z, flavor gets anything after second '-'.
182 kernel = os.uname()[2]
183 codename, _ = util.subp(['lsb_release', '--codename', '--short'],
184 capture=True, target=target)
185 codename = codename.strip()
186 version, abi, flavor = kernel.split('-', 2)
187
188 try:
189 map_suffix = mapping[codename][version]
190 except KeyError:
191 LOG.warn("Couldn't detect kernel package to install for %s."
192 % kernel)
193 if kernel_fallback is not None:
194 util.install_packages([kernel_fallback], target=target)
195 return
196
197 package = "linux-{flavor}{map_suffix}".format(
198 flavor=flavor, map_suffix=map_suffix)
199
200 if util.has_pkg_available(package, target):
201 if util.has_pkg_installed(package, target):
202 LOG.debug("Kernel package '%s' already installed", package)
203 else:
204 LOG.debug("installing kernel package '%s'", package)
205 util.install_packages([package], target=target)
206 else:
207 if kernel_fallback is not None:
208 LOG.info("Kernel package '%s' not available. "
209 "Installing fallback package '%s'.",
210 package, kernel_fallback)
211 util.install_packages([kernel_fallback], target=target)
212 else:
213 LOG.warn("Kernel package '%s' not available and no fallback."
214 " System may not boot.", package)
253 with util.RunInChroot(target) as in_chroot:
254
255 if kernel_package:
256 util.install_packages([kernel_package], target=target)
257 return
258
259 # uname[2] is kernel name (ie: 3.16.0-7-generic)
260 # version gets X.Y.Z, flavor gets anything after second '-'.
261 kernel = os.uname()[2]
262 codename, err = in_chroot(['lsb_release', '--codename', '--short'],
263 capture=True)
264 codename = codename.strip()
265 version, abi, flavor = kernel.split('-', 2)
266
267 try:
268 map_suffix = mapping[codename][version]
269 except KeyError:
270 LOG.warn("Couldn't detect kernel package to install for %s."
271 % kernel)
272 if kernel_fallback is not None:
273 util.install_packages([kernel_fallback], target=target)
274 return
275
276 package = "linux-{flavor}{map_suffix}".format(
277 flavor=flavor, map_suffix=map_suffix)
278
279 if util.has_pkg_available(package, target):
280 if util.has_pkg_installed(package, target):
281 LOG.debug("Kernel package '%s' already installed", package)
282 else:
283 LOG.debug("installing kernel package '%s'", package)
284 util.install_packages([package], target=target)
285 else:
286 if kernel_fallback is not None:
287 LOG.info("Kernel package '%s' not available. "
288 "Installing fallback package '%s'.",
289 package, kernel_fallback)
290 util.install_packages([kernel_fallback], target=target)
291 else:
292 LOG.warn("Kernel package '%s' not available and no fallback."
293 " System may not boot.", package)
294
295
296def apply_debconf_selections(cfg, target):
297 # debconf_selections:
298 # set1: |
299 # cloud-init cloud-init/datasources multiselect MAAS
300 # set2: pkg pkg/value string bar
301 selsets = cfg.get('debconf_selections')
302 if not selsets:
303 LOG.debug("debconf_selections was not set in config")
304 return
305
306 # for each entry in selections, chroot and apply them.
307 # keep a running total of packages we've seen.
308 pkgs_cfgd = set()
309 for key, content in selsets.items():
310 LOG.debug("setting for %s, %s" % (key, content))
311 util.subp(['chroot', target, 'debconf-set-selections'],
312 data=content.encode())
313 for line in content.splitlines():
314 if line.startswith("#"):
315 continue
316 pkg = re.sub(r"[:\s].*", "", line)
317 pkgs_cfgd.add(pkg)
318
319 pkgs_installed = get_installed_packages(target)
320
321 LOG.debug("pkgs_cfgd: %s" % pkgs_cfgd)
322 LOG.debug("pkgs_installed: %s" % pkgs_installed)
323 need_reconfig = pkgs_cfgd.intersection(pkgs_installed)
324
325 if len(need_reconfig) == 0:
326 LOG.debug("no need for reconfig")
327 return
328
329 # For any packages that are already installed, but have preseed data
330 # we populate the debconf database, but the filesystem configuration
331 # would be preferred on a subsequent dpkg-reconfigure.
332 # so, what we have to do is "know" information about certain packages
333 # to unconfigure them.
334 unhandled = []
335 to_config = []
336 for pkg in need_reconfig:
337 if pkg in CONFIG_CLEANERS:
338 LOG.debug("unconfiguring %s" % pkg)
339 CONFIG_CLEANERS[pkg](target)
340 to_config.append(pkg)
341 else:
342 unhandled.append(pkg)
343
344 if len(unhandled):
345 LOG.warn("The following packages were installed and preseeded, "
346 "but cannot be unconfigured: %s", unhandled)
347
348 util.subp(['chroot', target, 'dpkg-reconfigure',
349 '--frontend=noninteractive'] +
350 list(to_config), data=None)
351
352
353def get_installed_packages(target=None):
354 cmd = []
355 if target is not None:
356 cmd = ['chroot', target]
357 cmd.extend(['dpkg-query', '--list'])
358
359 (out, _err) = util.subp(cmd, capture=True)
360 if isinstance(out, bytes):
361 out = out.decode()
362
363 pkgs_inst = set()
364 for line in out.splitlines():
365 try:
366 (state, pkg, other) = line.split(None, 2)
367 except ValueError:
368 continue
369 if state.startswith("hi") or state.startswith("ii"):
370 pkgs_inst.add(re.sub(":.*", "", pkg))
371
372 return pkgs_inst
215
216
217def setup_grub(cfg, target):
@@ -498,12 +340,11 @@
340 util.subp(args + instdevs, env=env)
341
342
501def update_initramfs(target, all_kernels=False):
343def update_initramfs(target=None, all_kernels=False):
344 cmd = ['update-initramfs', '-u']
345 if all_kernels:
346 cmd.extend(['-k', 'all'])
505 with util.RunInChroot(target) as in_chroot:
506 in_chroot(cmd)
347 util.subp(cmd, target=target)
348
349
350def copy_fstab(fstab, target):
@@ -533,7 +374,6 @@
374
375
376def apply_networking(target, state):
536 netstate = state.get('network_state')
377 netconf = state.get('network_config')
378 interfaces = state.get('interfaces')
379
@@ -544,22 +384,13 @@
384 return True
385 return False
386
547 ns = None
548 if is_valid_src(netstate):
549 LOG.debug("applying network_state")
550 ns = net.network_state.from_state_file(netstate)
551 elif is_valid_src(netconf):
552 LOG.debug("applying network_config")
553 ns = net.parse_net_config(netconf)
554
555 if ns is not None:
556 net.render_network_state(target=target, network_state=ns)
387 if is_valid_src(netconf):
388 LOG.info("applying network_config")
389 apply_net.apply_net(target, network_state=None, network_config=netconf)
390 else:
391 LOG.debug("copying interfaces")
392 copy_interfaces(interfaces, target)
393
561 _maybe_remove_legacy_eth0(target)
562
394
395def copy_interfaces(interfaces, target):
396 if not interfaces:
@@ -704,8 +535,8 @@
535
536 # FIXME: this assumes grub. need more generic way to update root=
537 util.ensure_dir(os.path.sep.join([target, os.path.dirname(grub_dev)]))
707 with util.RunInChroot(target) as in_chroot:
708 in_chroot(['update-grub'])
538 with util.ChrootableTarget(target) as in_chroot:
539 in_chroot.subp(['update-grub'])
540
541 else:
542 LOG.warn("Not sure how this will boot")
@@ -740,7 +571,7 @@
571 }
572
573 needed_packages = []
743 installed_packages = get_installed_packages(target)
574 installed_packages = util.get_installed_packages(target)
575 for cust_cfg, pkg_reqs in custom_configs.items():
576 if cust_cfg not in cfg:
577 continue
@@ -820,7 +651,7 @@
651 name=stack_prefix, reporting_enabled=True, level="INFO",
652 description="writing config files and configuring apt"):
653 write_files(cfg, target)
823 apt_config(cfg, target)
654 do_apt_config(cfg, target)
655 disable_overlayroot(cfg, target)
656
657 # packages may be needed prior to installing kernel
@@ -834,8 +665,8 @@
665 copy_mdadm_conf(mdadm_location, target)
666 # as per https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/964052
667 # reconfigure mdadm
837 util.subp(['chroot', target, 'dpkg-reconfigure',
838 '--frontend=noninteractive', 'mdadm'], data=None)
668 util.subp(['dpkg-reconfigure', '--frontend=noninteractive', 'mdadm'],
669 data=None, target=target)
670
671 with events.ReportEventStack(
672 name=stack_prefix, reporting_enabled=True, level="INFO",
@@ -843,7 +674,6 @@
674 setup_zipl(cfg, target)
675 install_kernel(cfg, target)
676 run_zipl(cfg, target)
846 apply_debconf_selections(cfg, target)
677
678 restore_dist_interfaces(cfg, target)
679
@@ -906,8 +736,4 @@
736 populate_one_subcmd(parser, CMD_ARGUMENTS, curthooks)
737
738
909CONFIG_CLEANERS = {
910 'cloud-init': clean_cloud_init,
911}
912
739# vi: ts=4 expandtab syntax=python
740
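For orientation, a minimal sketch of the new-style config that do_apt_config() hands to apt_config.handle_apt(); the proxy URL, list name and source line are placeholders, and the full schema is described in doc/topics/apt_source.rst added by this branch:

    cfg = {
        'apt': {
            'proxy': 'http://squid.internal:3128',
            'sources': {
                'example.list': {
                    'source': 'deb http://ppa.launchpad.net/example/ubuntu xenial main',
                },
            },
        },
    }
    # curthooks then calls: do_apt_config(cfg, target)
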
=== modified file 'curtin/commands/main.py'
--- curtin/commands/main.py 2016-05-10 16:13:29 +0000
+++ curtin/commands/main.py 2016-10-03 18:55:20 +0000
@@ -26,9 +26,10 @@
26from ..deps import install_deps
27
28SUB_COMMAND_MODULES = [
29 'apply_net', 'block-meta', 'block-wipe', 'curthooks', 'extract',
30 'hook', 'in-target', 'install', 'mkfs', 'net-meta',
31 'pack', 'swap', 'system-install', 'system-upgrade']
29 'apply_net', 'block-info', 'block-meta', 'block-wipe', 'curthooks',
30 'clear-holders', 'extract', 'hook', 'in-target', 'install', 'mkfs',
31 'net-meta', 'apt-config', 'pack', 'swap', 'system-install',
32 'system-upgrade']
33
34
35def add_subcmd(subparser, subcmd):
36
=== modified file 'curtin/config.py'
--- curtin/config.py 2016-03-18 14:16:45 +0000
+++ curtin/config.py 2016-10-03 18:55:20 +0000
@@ -138,6 +138,5 @@
138
139
140def value_as_boolean(value):
141 if value in (False, None, '0', 0, 'False', 'false', ''):
142 return False
143 return True
141 false_values = (False, None, 0, '0', 'False', 'false', 'None', 'none', '')
142 return value not in false_values
143
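The widened false-value set changes behaviour for the string forms of None; a few illustrative checks:

    from curtin.config import value_as_boolean

    assert value_as_boolean('none') is False    # newly treated as false
    assert value_as_boolean('None') is False    # newly treated as false
    assert value_as_boolean('0') is False
    assert value_as_boolean('') is False
    assert value_as_boolean('superblock') is True
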
=== added file 'curtin/gpg.py'
--- curtin/gpg.py 1970-01-01 00:00:00 +0000
+++ curtin/gpg.py 2016-10-03 18:55:20 +0000
@@ -0,0 +1,74 @@
1# Copyright (C) 2016 Canonical Ltd.
2#
3# Author: Scott Moser <scott.moser@canonical.com>
4# Christian Ehrhardt <christian.ehrhardt@canonical.com>
5#
6# Curtin is free software: you can redistribute it and/or modify it under
7# the terms of the GNU Affero General Public License as published by the
8# Free Software Foundation, either version 3 of the License, or (at your
9# option) any later version.
10#
11# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
12# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
13# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
14# more details.
15#
16# You should have received a copy of the GNU Affero General Public License
17# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
18""" gpg.py
19gpg related utilities for fetching raw key data by key id
20"""
21
22from curtin import util
23
24from .log import LOG
25
26
27def export_armour(key):
28 """Export gpg key, armoured key gets returned"""
29 try:
30 (armour, _) = util.subp(["gpg", "--export", "--armour", key],
31 capture=True)
32 except util.ProcessExecutionError as error:
33 # debug, since it happens for any key not on the system initially
34 LOG.debug('Failed to export armoured key "%s": %s', key, error)
35 armour = None
36 return armour
37
38
39def recv_key(key, keyserver):
40 """Receive gpg key from the specified keyserver"""
41 LOG.debug('Receive gpg key "%s"', key)
42 try:
43 util.subp(["gpg", "--keyserver", keyserver, "--recv", key],
44 capture=True)
45 except util.ProcessExecutionError as error:
46 raise ValueError(('Failed to import key "%s" '
47 'from server "%s" - error %s') %
48 (key, keyserver, error))
49
50
51def delete_key(key):
52 """Delete the specified key from the local gpg ring"""
53 try:
54 util.subp(["gpg", "--batch", "--yes", "--delete-keys", key],
55 capture=True)
56 except util.ProcessExecutionError as error:
57 LOG.warn('Failed to delete key "%s": %s', key, error)
58
59
60def getkeybyid(keyid, keyserver='keyserver.ubuntu.com'):
61 """Get armoured gpg key by id, fetching from a keyserver if needed"""
62 armour = export_armour(keyid)
63 if not armour:
64 try:
65 recv_key(keyid, keyserver=keyserver)
66 armour = export_armour(keyid)
67 except ValueError:
68 LOG.exception('Failed to obtain gpg key %s', keyid)
69 raise
70 finally:
71 # delete just imported key to leave environment as it was before
72 delete_key(keyid)
73
74 return armour
075
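Typical use is via getkeybyid(), which leaves the local keyring as it found it; a minimal sketch with a placeholder key id:

    from curtin import gpg

    armour = gpg.getkeybyid('F430BBA5', keyserver='keyserver.ubuntu.com')
    if armour:
        # armour starts with '-----BEGIN PGP PUBLIC KEY BLOCK-----'
        print(armour.splitlines()[0])
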
=== modified file 'curtin/net/__init__.py'
--- curtin/net/__init__.py 2016-10-03 18:00:41 +0000
+++ curtin/net/__init__.py 2016-10-03 18:55:20 +0000
@@ -299,7 +299,7 @@
299 mac = iface.get('mac_address', '')
300 # len(macaddr) == 2 * 6 + 5 == 17
301 if ifname and mac and len(mac) == 17:
302 content += generate_udev_rule(ifname, mac)
302 content += generate_udev_rule(ifname, mac.lower())
303
304 return content
305
@@ -349,7 +349,7 @@
349 'subnets',
350 'type',
351 ]
352 if iface['type'] not in ['bond', 'bridge']:
352 if iface['type'] not in ['bond', 'bridge', 'vlan']:
353 ignore_map.append('mac_address')
354
355 for key, value in iface.items():
@@ -361,26 +361,52 @@
361 return content361 return content
362362
363363
364def render_route(route):
365 content = "up route add"
364def render_route(route, indent=""):
365 """When rendering routes for an iface, in some cases applying a route
366 may result in the route command returning non-zero, which produces
367 some confusing output for users manually using ifup/ifdown[1]. To
368 that end, we optionally include an '|| true' postfix on each
369 route line, allowing users to work with ifup/ifdown without the
370 --force option.
371
372 We may at some point not want to emit this additional postfix, and
373 instead add a 'strict' flag to this function; when called with
374 strict=True, we would not append the postfix.
375
376 1. http://askubuntu.com/questions/168033/
377 how-to-set-static-routes-in-ubuntu-server
378 """
379 content = []
380 up = indent + "post-up route add"
381 down = indent + "pre-down route del"
382 or_true = " || true"
383 mapping = {
384 'network': '-net',
385 'netmask': 'netmask',
386 'gateway': 'gw',
387 'metric': 'metric',
388 }
372 for k in ['network', 'netmask', 'gateway', 'metric']:
373 if k in route:
374 content += " %s %s" % (mapping[k], route[k])
375
376 content += '\n'
377 return content
378
379
380def iface_start_entry(iface, index):
389 if route['network'] == '0.0.0.0' and route['netmask'] == '0.0.0.0':
390 default_gw = " default gw %s" % route['gateway']
391 content.append(up + default_gw + or_true)
392 content.append(down + default_gw + or_true)
393 elif route['network'] == '::' and route['netmask'] == 0:
394 # ipv6!
395 default_gw = " -A inet6 default gw %s" % route['gateway']
396 content.append(up + default_gw + or_true)
397 content.append(down + default_gw + or_true)
398 else:
399 route_line = ""
400 for k in ['network', 'netmask', 'gateway', 'metric']:
401 if k in route:
402 route_line += " %s %s" % (mapping[k], route[k])
403 content.append(up + route_line + or_true)
404 content.append(down + route_line + or_true)
405 return "\n".join(content)
406
407
408def iface_start_entry(iface):
409 fullname = iface['name']
382 if index != 0:
383 fullname += ":%s" % index
410
411 control = iface['control']
412 if control == "auto":
@@ -397,6 +423,16 @@
423 "iface {fullname} {inet} {mode}\n").format(**subst)
424
425
426def subnet_is_ipv6(subnet):
427 # 'static6' or 'dhcp6'
428 if subnet['type'].endswith('6'):
429 # This is a request for DHCPv6.
430 return True
431 elif subnet['type'] == 'static' and ":" in subnet['address']:
432 return True
433 return False
434
435
436def render_interfaces(network_state):
437 ''' Given state, emit etc/network/interfaces content '''
438
@@ -424,42 +460,43 @@
460 content += "\n"
461 subnets = iface.get('subnets', {})
462 if subnets:
427 for index, subnet in zip(range(0, len(subnets)), subnets):
463 for index, subnet in enumerate(subnets):
464 if content[-2:] != "\n\n":
465 content += "\n"
466 iface['index'] = index
467 iface['mode'] = subnet['type']
468 iface['control'] = subnet.get('control', 'auto')
469 subnet_inet = 'inet'
434 if iface['mode'].endswith('6'):
435 # This is a request for DHCPv6.
436 subnet_inet += '6'
437 elif iface['mode'] == 'static' and ":" in subnet['address']:
438 # This is a static IPv6 address.
439 subnet_inet += '6'
470 if subnet_is_ipv6(subnet):
471 subnet_inet += '6'
472 iface['inet'] = subnet_inet
441 if iface['mode'].startswith('dhcp'):
473 if subnet['type'].startswith('dhcp'):
474 iface['mode'] = 'dhcp'
475
444 content += iface_start_entry(iface, index)
476 # do not emit multiple 'auto $IFACE' lines as older (precise)
477 # ifupdown complains
478 if "auto %s\n" % (iface['name']) in content:
479 iface['control'] = 'alias'
480
481 content += iface_start_entry(iface)
482 content += iface_add_subnet(iface, subnet)
483 content += iface_add_attrs(iface, index)
447 if len(subnets) > 1 and index == 0:
448 for i in range(1, len(subnets)):
449 content += " post-up ifup %s:%s\n" % (iface['name'],
450 i)
484
485 for route in subnet.get('routes', []):
486 content += render_route(route, indent=" ") + '\n'
487
488 else:
489 # ifenslave docs say to auto the slave devices
453 if 'bond-master' in iface:
490 if 'bond-master' in iface or 'bond-slaves' in iface:
491 content += "auto {name}\n".format(**iface)
492 content += "iface {name} {inet} {mode}\n".format(**iface)
456 content += iface_add_attrs(iface, index)
493 content += iface_add_attrs(iface, 0)
494
495 for route in network_state.get('routes'):
496 content += render_route(route)
497
498 # global replacements until v2 format
462 content = content.replace('mac_address', 'hwaddress')
499 content = content.replace('mac_address', 'hwaddress ether')
500
501 # Play nice with others and source eni config files
502 content += "\nsource /etc/network/interfaces.d/*.cfg\n"
503
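To make the route rendering concrete, a default ipv4 route emits paired post-up/pre-down lines, each guarded by '|| true'; a small illustrative call:

    from curtin.net import render_route

    route = {'network': '0.0.0.0', 'netmask': '0.0.0.0',
             'gateway': '192.168.0.1'}
    print(render_route(route, indent="    "))
    #     post-up route add default gw 192.168.0.1 || true
    #     pre-down route del default gw 192.168.0.1 || true
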
=== modified file 'curtin/net/network_state.py'
--- curtin/net/network_state.py 2015-10-02 16:19:07 +0000
+++ curtin/net/network_state.py 2016-10-03 18:55:20 +0000
@@ -121,6 +121,18 @@
121 iface = interfaces.get(command['name'], {})
122 for param, val in command.get('params', {}).items():
123 iface.update({param: val})
124
125 # convert subnet ipv6 netmask to cidr as needed
126 subnets = command.get('subnets')
127 if subnets:
128 for subnet in subnets:
129 if subnet['type'] == 'static':
130 if 'netmask' in subnet and ':' in subnet['address']:
131 subnet['netmask'] = mask2cidr(subnet['netmask'])
132 for route in subnet.get('routes', []):
133 if 'netmask' in route:
134 route['netmask'] = mask2cidr(route['netmask'])
135
136 iface.update({
137 'name': command.get('name'),
138 'type': command.get('type'),
@@ -130,7 +142,7 @@
142 'mtu': command.get('mtu'),
143 'address': None,
144 'gateway': None,
133 'subnets': command.get('subnets'),
145 'subnets': subnets,
146 })
147 self.network_state['interfaces'].update({command.get('name'): iface})
148 self.dump_network_state()
@@ -141,6 +153,7 @@
153 iface eth0.222 inet static
154 address 10.10.10.1
155 netmask 255.255.255.0
156 hwaddress ether BC:76:4E:06:96:B3
157 vlan-raw-device eth0
158 '''
159 required_keys = [
@@ -332,6 +345,37 @@
345 return ".".join([str(x) for x in mask])
346
347
348def ipv4mask2cidr(mask):
349 if '.' not in mask:
350 return mask
351 return sum([bin(int(x)).count('1') for x in mask.split('.')])
352
353
354def ipv6mask2cidr(mask):
355 if ':' not in mask:
356 return mask
357
358 bitCount = [0, 0x8000, 0xc000, 0xe000, 0xf000, 0xf800, 0xfc00, 0xfe00,
359 0xff00, 0xff80, 0xffc0, 0xffe0, 0xfff0, 0xfff8, 0xfffc,
360 0xfffe, 0xffff]
361 cidr = 0
362 for word in mask.split(':'):
363 if not word or int(word, 16) == 0:
364 break
365 cidr += bitCount.index(int(word, 16))
366
367 return cidr
368
369
370def mask2cidr(mask):
371 if ':' in mask:
372 return ipv6mask2cidr(mask)
373 elif '.' in mask:
374 return ipv4mask2cidr(mask)
375 else:
376 return mask
377
378
379if __name__ == '__main__':
380 import sys
381 import random
382
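A few illustrative conversions for the helpers above:

    from curtin.net.network_state import mask2cidr

    assert mask2cidr('255.255.255.0') == 24            # ipv4 dotted-quad
    assert mask2cidr('ffff:ffff:ffff:ffff::') == 64    # ipv6 netmask
    assert mask2cidr('24') == '24'                     # already cidr: passed through
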
=== modified file 'curtin/util.py'
--- curtin/util.py 2016-10-03 18:00:41 +0000
+++ curtin/util.py 2016-10-03 18:55:20 +0000
@@ -16,18 +16,35 @@
16# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
17
18import argparse
19import collections
20import errno
21import glob
22import json
23import os
24import platform
25import re
26import shutil
27import socket
28import subprocess
29import stat
30import sys
31import tempfile
32import time
33
34# avoid the dependency to python3-six as used in cloud-init
35try:
36 from urlparse import urlparse
37except ImportError:
38 # python3
39 # avoid triggering pylint, https://github.com/PyCQA/pylint/issues/769
40 # pylint:disable=import-error,no-name-in-module
41 from urllib.parse import urlparse
42
43try:
44 string_types = (basestring,)
45except NameError:
46 string_types = (str,)
47
48from .log import LOG
49
50_INSTALLED_HELPERS_PATH = '/usr/lib/curtin/helpers'
@@ -35,14 +52,22 @@
52
53_LSB_RELEASE = {}
54
55_DNS_REDIRECT_IP = None
56
57# matcher used in template rendering functions
58BASIC_MATCHER = re.compile(r'\$\{([A-Za-z0-9_.]+)\}|\$([A-Za-z0-9_.]+)')
59
60
61def _subp(args, data=None, rcs=None, env=None, capture=False, shell=False,
40 logstring=False, decode="replace"):
62 logstring=False, decode="replace", target=None):
63 if rcs is None:
64 rcs = [0]
65
66 devnull_fp = None
67 try:
68 if target_path(target) != "/":
69 args = ['chroot', target] + list(args)
70
71 if not logstring:
72 LOG.debug(("Running command %s with allowed return codes %s"
73 " (shell=%s, capture=%s)"), args, rcs, shell, capture)
@@ -118,6 +143,8 @@
143 a list of times to sleep in between retries. After each failure
144 subp will sleep for N seconds and then try again. A value of [1, 3]
145 means to run, sleep 1, run, sleep 3, run and then return exit code.
146 :param target:
147 run the command as 'chroot target <args>'
148 """
149 retries = []
150 if "retries" in kwargs:
@@ -277,15 +304,29 @@
304
305
306def write_file(filename, content, mode=0o644, omode="w"):
307 """
308 write 'content' to file at 'filename' using python open mode 'omode'.
309 if mode is set (default 0o644) chmod the file to mode; mode=None skips.
310 """
311 ensure_dir(os.path.dirname(filename))
312 with open(filename, omode) as fp:
313 fp.write(content)
283 os.chmod(filename, mode)
314 if mode:
315 os.chmod(filename, mode)
316
286def load_file(path, mode="r"):
317
318def load_file(path, mode="r", read_len=None, offset=0):
319 with open(path, mode) as fp:
288 return fp.read()
320 if offset:
321 fp.seek(offset)
322 return fp.read(read_len) if read_len else fp.read()
323
324
325def file_size(path):
326 """get the size of a file"""
327 with open(path, 'rb') as fp:
328 fp.seek(0, 2)
329 return fp.tell()
330
331
332def del_file(path):
@@ -311,7 +352,7 @@
352 'done',
353 ''])
354
314 fpath = os.path.join(target, "usr/sbin/policy-rc.d")
355 fpath = target_path(target, "/usr/sbin/policy-rc.d")
356
357 if os.path.isfile(fpath):
358 return False
@@ -322,7 +363,7 @@
363
364def undisable_daemons_in_root(target):
365 try:
325 os.unlink(os.path.join(target, "usr/sbin/policy-rc.d"))
366 os.unlink(target_path(target, "/usr/sbin/policy-rc.d"))
367 except OSError as e:
368 if e.errno != errno.ENOENT:
369 raise
@@ -334,7 +375,7 @@
375 def __init__(self, target, allow_daemons=False, sys_resolvconf=True):
376 if target is None:
377 target = "/"
337 self.target = os.path.abspath(target)
378 self.target = target_path(target)
379 self.mounts = ["/dev", "/proc", "/sys"]
380 self.umounts = []
381 self.disabled_daemons = False
@@ -344,20 +385,21 @@
385
386 def __enter__(self):
387 for p in self.mounts:
347 tpath = os.path.join(self.target, p[1:])
388 tpath = target_path(self.target, p)
389 if do_mount(p, tpath, opts='--bind'):
390 self.umounts.append(tpath)
391
392 if not self.allow_daemons:
393 self.disabled_daemons = disable_daemons_in_root(self.target)
394
354 target_etc = os.path.join(self.target, "etc")
395 rconf = target_path(self.target, "/etc/resolv.conf")
396 target_etc = os.path.dirname(rconf)
397 if self.target != "/" and os.path.isdir(target_etc):
398 # never muck with resolv.conf on /
399 rconf = os.path.join(target_etc, "resolv.conf")
400 rtd = None
401 try:
360 rtd = tempfile.mkdtemp(dir=os.path.dirname(rconf))
402 rtd = tempfile.mkdtemp(dir=target_etc)
403 tmp = os.path.join(rtd, "resolv.conf")
404 os.rename(rconf, tmp)
405 self.rconf_d = rtd
@@ -375,25 +417,23 @@
417 undisable_daemons_in_root(self.target)
418
419 # if /dev is to be unmounted, udevadm settle (LP: #1462139)
378 if os.path.join(self.target, "dev") in self.umounts:
420 if target_path(self.target, "/dev") in self.umounts:
421 subp(['udevadm', 'settle'])
422
423 for p in reversed(self.umounts):
424 do_umount(p)
425
384 rconf = os.path.join(self.target, "etc", "resolv.conf")
426 rconf = target_path(self.target, "/etc/resolv.conf")
427 if self.sys_resolvconf and self.rconf_d:
428 os.rename(os.path.join(self.rconf_d, "resolv.conf"), rconf)
429 shutil.rmtree(self.rconf_d)
430
431 def subp(self, *args, **kwargs):
432 kwargs['target'] = self.target
433 return subp(*args, **kwargs)
434
390class RunInChroot(ChrootableTarget):
391 def __call__(self, args, **kwargs):
392 if self.target != "/":
393 chroot = ["chroot", self.target]
394 else:
395 chroot = []
396 return subp(chroot + args, **kwargs)
435 def path(self, path):
436 return target_path(self.target, path)
437
438
439def is_exe(fpath):
@@ -402,14 +442,13 @@
442
443
444def which(program, search=None, target=None):
405 if target is None or os.path.realpath(target) == "/":
406 target = "/"
445 target = target_path(target)
446
447 if os.path.sep in program:
448 # if program had a '/' in it, then do not search PATH
449 # 'which' does consider cwd here. (cd / && which bin/ls) = bin/ls
450 # so effectively we set cwd to / (or target)
412 if is_exe(os.path.sep.join((target, program,))):
451 if is_exe(target_path(target, program)):
452 return program
453
454 if search is None:
@@ -424,8 +463,9 @@
463 search = [os.path.abspath(p) for p in search]
464
465 for path in search:
427 if is_exe(os.path.sep.join((target, path, program,))):
428 return os.path.sep.join((path, program,))
466 ppath = os.path.sep.join((path, program))
467 if is_exe(target_path(target, ppath)):
468 return ppath
469
470 return None
471
@@ -467,33 +507,39 @@
507
508
509def get_architecture(target=None):
470 chroot = []
471 if target is not None:
472 chroot = ['chroot', target]
473 out, _ = subp(chroot + ['dpkg', '--print-architecture'],
474 capture=True)
510 out, _ = subp(['dpkg', '--print-architecture'], capture=True,
511 target=target)
512 return out.strip()
513
514
515def has_pkg_available(pkg, target=None):
479 chroot = []
480 if target is not None:
481 chroot = ['chroot', target]
482 out, _ = subp(chroot + ['apt-cache', 'pkgnames'], capture=True)
516 out, _ = subp(['apt-cache', 'pkgnames'], capture=True, target=target)
517 for item in out.splitlines():
518 if pkg == item.strip():
519 return True
520 return False
521
522
523def get_installed_packages(target=None):
524 (out, _) = subp(['dpkg-query', '--list'], target=target, capture=True)
525
526 pkgs_inst = set()
527 for line in out.splitlines():
528 try:
529 (state, pkg, other) = line.split(None, 2)
530 except ValueError:
531 continue
532 if state.startswith("hi") or state.startswith("ii"):
533 pkgs_inst.add(re.sub(":.*", "", pkg))
534
535 return pkgs_inst
536
537
538def has_pkg_installed(pkg, target=None):
490 chroot = []
491 if target is not None:
492 chroot = ['chroot', target]
539 try:
494 out, _ = subp(chroot + ['dpkg-query', '--show', '--showformat',
495 '${db:Status-Abbrev}', pkg],
496 capture=True)
540 out, _ = subp(['dpkg-query', '--show', '--showformat',
541 '${db:Status-Abbrev}', pkg],
542 capture=True, target=target)
543 return out.rstrip() == "ii"
544 except ProcessExecutionError:
545 return False
@@ -542,13 +588,9 @@
588 """Use dpkg-query to extract package pkg's version string
589 and parse the version string into a dictionary
590 """
545 chroot = []
546 if target is not None:
547 chroot = ['chroot', target]
591 try:
549 out, _ = subp(chroot + ['dpkg-query', '--show', '--showformat',
550 '${Version}', pkg],
551 capture=True)
592 out, _ = subp(['dpkg-query', '--show', '--showformat',
593 '${Version}', pkg], capture=True, target=target)
594 raw = out.rstrip()
595 return parse_dpkg_version(raw, name=pkg, semx=semx)
596 except ProcessExecutionError:
@@ -600,11 +642,11 @@
642 if comment.endswith("\n"):
643 comment = comment[:-1]
644
603 marker = os.path.join(target, marker)
645 marker = target_path(target, marker)
646 # if marker exists, check if there are files that would make it obsolete
605 listfiles = [os.path.join(target, "etc/apt/sources.list")]
647 listfiles = [target_path(target, "/etc/apt/sources.list")]
648 listfiles += glob.glob(
607 os.path.join(target, "etc/apt/sources.list.d/*.list"))
649 target_path(target, "etc/apt/sources.list.d/*.list"))
650
651 if os.path.exists(marker) and not force:
652 if len(find_newer(marker, listfiles)) == 0:
@@ -612,7 +654,7 @@
654
655 restore_perms = []
656
615 abs_tmpdir = tempfile.mkdtemp(dir=os.path.join(target, 'tmp'))
657 abs_tmpdir = tempfile.mkdtemp(dir=target_path(target, "/tmp"))
658 try:
659 abs_slist = abs_tmpdir + "/sources.list"
660 abs_slistd = abs_tmpdir + "/sources.list.d"
@@ -621,8 +663,8 @@
663 ch_slistd = ch_tmpdir + "/sources.list.d"
664
665 # this file gets executed on apt-get update sometimes. (LP: #1527710)
624 motd_update = os.path.join(
625 target, "usr/lib/update-notifier/update-motd-updates-available")
666 motd_update = target_path(
667 target, "/usr/lib/update-notifier/update-motd-updates-available")
668 pmode = set_unexecutable(motd_update)
669 if pmode is not None:
670 restore_perms.append((motd_update, pmode),)
@@ -647,8 +689,8 @@
689 'update']
690
691 # not using 'run_apt_command' here so we can pass 'retries' to subp
650 with RunInChroot(target, allow_daemons=True) as inchroot:
651 inchroot(update_cmd, env=env, retries=retries)
692 with ChrootableTarget(target, allow_daemons=True) as inchroot:
693 inchroot.subp(update_cmd, env=env, retries=retries)
694 finally:
695 for fname, perms in restore_perms:
696 os.chmod(fname, perms)
@@ -685,9 +727,8 @@
727 return env, cmd
728
729 apt_update(target, env=env, comment=' '.join(cmd))
688 ric = RunInChroot(target, allow_daemons=allow_daemons)
689 with ric as inchroot:
690 return inchroot(cmd, env=env)
730 with ChrootableTarget(target, allow_daemons=allow_daemons) as inchroot:
731 return inchroot.subp(cmd, env=env)
732
692733
734def system_upgrade(aptopts=None, target=None, env=None, allow_daemons=False):
@@ -716,7 +757,7 @@
757 """
758 Look for "hook" in "target" and run it
759 """
719 target_hook = os.path.join(target, 'curtin', hook)
760 target_hook = target_path(target, '/curtin/' + hook)
761 if os.path.isfile(target_hook):
762 LOG.debug("running %s" % target_hook)
763 subp([target_hook])
@@ -828,6 +869,18 @@
869 return val
870
871
872def bytes2human(size):
873 """convert size in bytes to human readable"""
874 if not (isinstance(size, (int, float)) and
875 int(size) == size and
876 int(size) >= 0):
877 raise ValueError('size must be an integral value')
878 mpliers = {'B': 1, 'K': 2 ** 10, 'M': 2 ** 20, 'G': 2 ** 30, 'T': 2 ** 40}
879 unit_order = sorted(mpliers, key=lambda x: -1 * mpliers[x])
880 unit = next((u for u in unit_order if (size / mpliers[u]) >= 1), 'B')
881 return str(int(size / mpliers[unit])) + unit
882
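Illustrative values for bytes2human (note it truncates toward the chosen unit rather than rounding):

    from curtin.util import bytes2human

    assert bytes2human(0) == '0B'
    assert bytes2human(1024) == '1K'
    assert bytes2human(1500) == '1K'     # truncated, not rounded
    assert bytes2human(3 * 2 ** 20) == '3M'
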
883
884def import_module(import_str):
885 """Import a module."""
886 __import__(import_str)
@@ -843,30 +896,42 @@
896
897
898def is_file_not_found_exc(exc):
846 return (isinstance(exc, IOError) and exc.errno == errno.ENOENT)
849def lsb_release():
899 return (isinstance(exc, (IOError, OSError)) and
900 hasattr(exc, 'errno') and
901 exc.errno in (errno.ENOENT, errno.EIO, errno.ENXIO))
902
903
904def _lsb_release(target=None):
905 fmap = {'Codename': 'codename', 'Description': 'description',
906 'Distributor ID': 'id', 'Release': 'release'}
907
908 data = {}
909 try:
910 out, _ = subp(['lsb_release', '--all'], capture=True, target=target)
911 for line in out.splitlines():
912 fname, _, val = line.partition(":")
913 if fname in fmap:
914 data[fmap[fname]] = val.strip()
915 missing = [k for k in fmap.values() if k not in data]
916 if len(missing):
917 LOG.warn("Missing fields in lsb_release --all output: %s",
918 ','.join(missing))
919
920 except ProcessExecutionError as err:
921 LOG.warn("Unable to get lsb_release --all: %s", err)
922 data = {v: "UNAVAILABLE" for v in fmap.values()}
923
924 return data
925
926
927def lsb_release(target=None):
928 if target_path(target) != "/":
929 # do not use or update cache if target is provided
930 return _lsb_release(target)
931
932 global _LSB_RELEASE
933 if not _LSB_RELEASE:
934 data = _lsb_release()
854 data = {}
855 try:
856 out, err = subp(['lsb_release', '--all'], capture=True)
857 for line in out.splitlines():
858 fname, tok, val = line.partition(":")
859 if fname in fmap:
860 data[fmap[fname]] = val.strip()
861 missing = [k for k in fmap.values() if k not in data]
862 if len(missing):
863 LOG.warn("Missing fields in lsb_release --all output: %s",
864 ','.join(missing))
865
866 except ProcessExecutionError as e:
867 LOG.warn("Unable to get lsb_release --all: %s", e)
868 data = {v: "UNAVAILABLE" for v in fmap.values()}
869
935 _LSB_RELEASE.update(data)
936 return _LSB_RELEASE
937
@@ -881,8 +946,7 @@
946
947
948def json_dumps(data):
884 return json.dumps(data, indent=1, sort_keys=True,
885 separators=(',', ': ')).encode('utf-8')
949 return json.dumps(data, indent=1, sort_keys=True, separators=(',', ': '))
950
951
952def get_platform_arch():
@@ -895,4 +959,137 @@
959 }
960 return platform2arch.get(platform.machine(), platform.machine())
961
962
963def basic_template_render(content, params):
964 """This does simple replacement of bash variable like templates.
965
966 It identifies patterns like ${a} or $a and can also identify patterns like
967 ${a.b} or $a.b which will look for a key 'b' in the dictionary rooted
968 by key 'a'.
969 """
970
971 def replacer(match):
972 """ replacer
973 replacer used in regex match to replace content
974 """
975 # Only 1 of the 2 groups will actually have a valid entry.
976 name = match.group(1)
977 if name is None:
978 name = match.group(2)
979 if name is None:
980 raise RuntimeError("Match encountered but no valid group present")
981 path = collections.deque(name.split("."))
982 selected_params = params
983 while len(path) > 1:
984 key = path.popleft()
985 if not isinstance(selected_params, dict):
986 raise TypeError("Can not traverse into"
987 " non-dictionary '%s' of type %s while"
988 " looking for subkey '%s'"
989 % (selected_params,
990 selected_params.__class__.__name__,
991 key))
992 selected_params = selected_params[key]
993 key = path.popleft()
994 if not isinstance(selected_params, dict):
995 raise TypeError("Can not extract key '%s' from non-dictionary"
996 " '%s' of type %s"
997 % (key, selected_params,
998 selected_params.__class__.__name__))
999 return str(selected_params[key])
1000
1001 return BASIC_MATCHER.sub(replacer, content)
1002
1003
1004def render_string(content, params):
1005 """ render_string
1006 render a string following replacement rules as defined in
1007 basic_template_render returning the string
1008 """
1009 if not params:
1010 params = {}
1011 return basic_template_render(content, params)
1012
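A small illustrative rendering; the parameter names here are made up:

    from curtin.util import render_string

    params = {'release': 'xenial',
              'mirror': {'primary': 'http://archive.ubuntu.com/ubuntu'}}
    line = render_string('deb ${mirror.primary} $release main', params)
    # line == 'deb http://archive.ubuntu.com/ubuntu xenial main'
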
1013
1014def is_resolvable(name):
1015 """determine if a name is resolvable, return a boolean
1016 This also attempts to be resilient against dns redirection.
1017
1018 Note that normal nsswitch resolution is used here. So in order
1019 to avoid any utilization of 'search' entries in /etc/resolv.conf
1020 we have to append '.'.
1021
1022 The top level 'invalid' domain is invalid per RFC, and example.com
1023 should also not exist. The random entry will be resolved inside
1024 the search list.
1025 """
1026 global _DNS_REDIRECT_IP
1027 if _DNS_REDIRECT_IP is None:
1028 badips = set()
1029 badnames = ("does-not-exist.example.com.", "example.invalid.")
1030 badresults = {}
1031 for iname in badnames:
1032 try:
1033 result = socket.getaddrinfo(iname, None, 0, 0,
1034 socket.SOCK_STREAM,
1035 socket.AI_CANONNAME)
1036 badresults[iname] = []
1037 for (_, _, _, cname, sockaddr) in result:
1038 badresults[iname].append("%s: %s" % (cname, sockaddr[0]))
1039 badips.add(sockaddr[0])
1040 except (socket.gaierror, socket.error):
1041 pass
1042 _DNS_REDIRECT_IP = badips
1043 if badresults:
1044 LOG.debug("detected dns redirection: %s", badresults)
1045
1046 try:
1047 result = socket.getaddrinfo(name, None)
1048 # check first result's sockaddr field
1049 addr = result[0][4][0]
1050 if addr in _DNS_REDIRECT_IP:
1051 LOG.debug("dns %s in _DNS_REDIRECT_IP", name)
1052 return False
1053 LOG.debug("dns %s resolved to '%s'", name, result)
1054 return True
1055 except (socket.gaierror, socket.error):
1056 LOG.debug("dns %s failed to resolve", name)
1057 return False
1058
1059
1060def is_resolvable_url(url):
1061 """determine if this url is resolvable (existing or ip)."""
1062 return is_resolvable(urlparse(url).hostname)
1063
1064
1065def target_path(target, path=None):
1066 # return 'path' inside target, accepting target as None
1067 if target in (None, ""):
1068 target = "/"
1069 elif not isinstance(target, string_types):
1070 raise ValueError("Unexpected input for target: %s" % target)
1071 else:
1072 target = os.path.abspath(target)
1073 # abspath("//") returns "//" specifically for 2 slashes.
1074 if target.startswith("//"):
1075 target = target[1:]
1076
1077 if not path:
1078 return target
1079
1080 # os.path.join("/etc", "/foo") returns "/foo". Chomp all leading /.
1081 while len(path) and path[0] == "/":
1082 path = path[1:]
1083
1084 return os.path.join(target, path)
1085
1086
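Illustrative behaviour of target_path (the paths are placeholders):

    from curtin.util import target_path

    assert target_path(None) == '/'
    assert target_path('/tmp/target') == '/tmp/target'
    assert target_path('/tmp/target', '/etc/hosts') == '/tmp/target/etc/hosts'
    assert target_path('/', '/etc/hosts') == '/etc/hosts'
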
1087class RunInChroot(ChrootableTarget):
1088 """Backwards compatibility for RunInChroot (LP: #1617375).
1089 It needs to work like:
1090 with RunInChroot("/target") as in_chroot:
1091 in_chroot(["your", "chrooted", "command"])"""
1092 __call__ = ChrootableTarget.subp
1093
1094
1095# vi: ts=4 expandtab syntax=python
1096
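Both call styles end up in the same chrooted subp; a minimal sketch, with '/target' as a placeholder mount point:

    from curtin import util

    # new style: ChrootableTarget grew a subp() method
    with util.ChrootableTarget('/target') as in_chroot:
        in_chroot.subp(['update-grub'])

    # legacy style keeps working via the RunInChroot shim (LP: #1617375)
    with util.RunInChroot('/target') as in_chroot:
        in_chroot(['update-grub'])
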
=== modified file 'debian/changelog'
--- debian/changelog 2016-10-03 17:23:32 +0000
+++ debian/changelog 2016-10-03 18:55:20 +0000
@@ -1,8 +1,38 @@
1curtin (0.1.0~bzr399-0ubuntu1~16.04.1ubuntu1) UNRELEASED; urgency=medium
1curtin (0.1.0~bzr425-0ubuntu1~16.04.1) xenial-proposed; urgency=medium
2
3 [ Scott Moser ]
4 * debian/new-upstream-snapshot: add writing of debian changelog entries.
5
5 -- Scott Moser <smoser@ubuntu.com> Mon, 03 Oct 2016 13:23:11 -0400
6 [ Ryan Harper ]
7 * New upstream snapshot.
8 - unittest,tox.ini: catch and fix issue with trusty-level mock of open
9 - block/mdadm: add option to ignore mdadm_assemble errors (LP: #1618429)
10 - curtin/doc: overhaul curtin documentation for readthedocs.org (LP: #1351085)
11 - curtin.util: re-add support for RunInChroot (LP: #1617375)
12 - curtin/net: overhaul of eni rendering to handle mixed ipv4/ipv6 configs
13 - curtin.block: refactor clear_holders logic into block.clear_holders and cli cmd
14 - curtin.apply_net should exit non-zero upon exception. (LP: #1615780)
15 - apt: fix bug in disable_suites if sources.list line is blank.
16 - vmtests: disable Wily in vmtests
17 - Fix the unittests for test_apt_source.
18 - get CURTIN_VMTEST_PARALLEL shown correctly in jenkins-runner output
19 - fix vmtest check_file_strippedline to strip lines before comparing
20 - fix whitespace damage in tests/vmtests/__init__.py
21 - fix dpkg-reconfigure when debconf_selections was provided. (LP: #1609614)
22 - fix apt tests on non-intel arch
23 - Add apt features to curtin. (LP: #1574113)
24 - vmtest: easier use of parallel and controlling timeouts
25 - mkfs.vfat: add force flag for formatting whole disks (LP: #1597923)
26 - block.mkfs: fix sectorsize flag (LP: #1597522)
27 - block_meta: cleanup use of sys_block_path and handle cciss knames (LP: #1562249)
28 - block.get_blockdev_sector_size: handle _lsblock multi result return (LP: #1598310)
29 - util: add target (chroot) support to subp, add target_path helper.
30 - block_meta: fallback to parted if blkid does not produce output (LP: #1524031)
31 - commands.block_wipe: correct default wipe mode to 'superblock'
32 - tox.ini: run coverage normally rather than separately
33 - move uefi boot knowledge from launch and vmtest to xkvm
34
35 -- Ryan Harper <ryan.harper@canonical.com> Mon, 03 Oct 2016 13:43:54 -0500
36
37curtin (0.1.0~bzr399-0ubuntu1~16.04.1) xenial-proposed; urgency=medium
38
39
=== modified file 'doc/conf.py'
--- doc/conf.py 2015-10-02 16:19:07 +0000
+++ doc/conf.py 2016-10-03 18:55:20 +0000
@@ -13,6 +13,11 @@
1313
14import sys, os
15
16# Fix path so we can import curtin.__version__
17sys.path.insert(1, os.path.realpath(os.path.join(
18 os.path.dirname(__file__), '..')))
19import curtin
20
21# If extensions (or modules to document with autodoc) are in another directory,
22# add these directories to sys.path here. If the directory is relative to the
23# documentation root, use os.path.abspath to make it absolute, like shown here.
@@ -41,16 +46,16 @@
46
47# General information about the project.
48project = u'curtin'
44copyright = u'2013, Scott Moser'
49copyright = u'2016, Scott Moser, Ryan Harper'
50
51# The version info for the project you're documenting, acts as replacement for
52# |version| and |release|, also used in various other places throughout the
53# built documents.
54#
55# The short X.Y version.
51version = '0.3'
56version = curtin.__version__
57# The full version, including alpha/beta/rc tags.
53release = '0.3'
58release = version
59
60# The language for content autogenerated by Sphinx. Refer to documentation
61# for a list of supported languages.
@@ -93,6 +98,18 @@
98# a list of builtin themes.
99html_theme = 'classic'
100
101# on_rtd is whether we are on readthedocs.org, this line of code grabbed from
102# docs.readthedocs.org
103on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
104
105if not on_rtd: # only import and set the theme if we're building docs locally
106 import sphinx_rtd_theme
107 html_theme = 'sphinx_rtd_theme'
108 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
109
110# otherwise, readthedocs.org uses their theme by default, so no need to specify
111# it
112
113# Theme options are theme-specific and customize the look and feel of a theme
114# further. For a list of options available for each theme, see the
115# documentation.
@@ -120,7 +137,7 @@
120# Add any paths that contain custom static files (such as style sheets) here,137# Add any paths that contain custom static files (such as style sheets) here,
121# relative to this directory. They are copied after the builtin static files,138# relative to this directory. They are copied after the builtin static files,
122# so a file named "default.css" will overwrite the builtin "default.css".139# so a file named "default.css" will overwrite the builtin "default.css".
123html_static_path = ['static']140#html_static_path = ['static']
124141
125# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,142# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
126# using the given strftime format.143# using the given strftime format.
127144
=== removed file 'doc/devel/README-vmtest.txt'
--- doc/devel/README-vmtest.txt 2016-02-12 21:54:46 +0000
+++ doc/devel/README-vmtest.txt 1970-01-01 00:00:00 +0000
@@ -1,152 +0,0 @@
1== Background ==
2Curtin includes a mechanism called 'vmtest' that allows it to actually
3do installs and validate a number of configurations.
4
5The general flow of the vmtests is:
6 1. each test has an associated yaml config file for curtin in examples/tests
7 2. uses curtin-pack to create the user-data for cloud-init to trigger install
8 3. create and install a system using 'tools/launch'.
9 3.1 The install environment is booted from a maas ephemeral image.
10 3.2 kernel & initrd used are from maas images (not part of the image)
11 3.3 network by default is handled via user networking
12 3.4 It creates all empty disks required
13 3.5 cloud-init datasource is provided by launch
14 a) like: ds=nocloud-net;seedfrom=http://10.7.0.41:41518/
15 provided by python webserver start_http
16 b) via -drive file=/tmp/launch.8VOiOn/seed.img,if=virtio,media=cdrom
17 as a seed disk (if booted without external kernel)
18 3.6 dependencies and other preparations are installed at the beginning by
19 curtin inside the ephemeral image prior to configuring the target
20 4. power off the system.
21 5. configure a 'NoCloud' datasource seed image that provides scripts that
22 will run on first boot.
23 5.1 this will contain all our code to gather health data on the install
24 5.2 by cloud-init design this runs only once per instance, if you start
25 the system again this won't be called again
26 6. boot the installed system with 'tools/xkvm'.
27 6.1 reuses the disks that were installed/configured in the former steps
28 6.2 also adds an output disk
29 6.3 additionally the seed image for the data gathering is added
30 6.4 On this boot it will run the provided scripts, write their output to a
31 "data" disk and then shut itself down.
32 7. extract the data from the output disk
33 8. vmtest python code now verifies if the output is as expected.
34
35== Debugging ==
36At 3.1
37 - one can pull data out of the maas image with
38 sudo mount-image-callback your.img -- sh -c 'COMMAND'
39 e.g. sudo mount-image-callback your.img -- sh -c 'cp $MOUNTPOINT/boot/* .'
40At step 3.6 -> 4.
41 - tools/launch can be called in a way to give you console access
42 to do so just call tools/launch but drop the -serial=x parameter.
43 One might want to change "'power_state': {'mode': 'poweroff'}" to avoid
44 the auto reboot before getting control
45 Replace the directory usually seen in the launch calls with a clean fresh
46 directory
47 - curtin and its config can be found in /curtin
48 - if the system gets that far cloud-init will create a user ubuntu/passw0rd
49 - otherwise one can use a cloud-image from https://cloud-images.ubuntu.com/
50 and add a backdoor user via
51 bzr branch lp:~maas-maintainers/maas/backdoor-image backdoor-image
52 sudo ./backdoor-image -v --user=<USER> --password-auth --password=<PW> IMG
53At step 6 -> 7
54 - You might want to keep all the temporary images around.
55 To do so you can set CURTIN_VMTEST_KEEP_DATA_PASS=all:
56 export CURTIN_VMTEST_KEEP_DATA_PASS=all CURTIN_VMTEST_KEEP_DATA_FAIL=all
57 That will keep the /tmp/tmpXXXXX directories and all files in there for
58 further execution.
59At step 7
60 - You might want to take a look at the output disk yourself.
61 It is a normal qcow image, so one can use mount-image-callback as described
62 above
63 - to invoke xkvm on your own take the command you see in the output and
64 remove the "-serial ..." but add -nographic instead
65 For graphical console one can add --vnc 127.0.0.1:1
66
67== Setup ==
68In order to run vmtest you'll need some dependencies. To get them, you
69can run:
70 make vmtest-deps
71
72That will install all necessary dependencies.
73
74== Running ==
75Running tests is done most simply by:
76
77 make vmtest
78
79If you wish to run all tests in test_network.py, do so with:
80 sudo PATH=$PWD/tools:$PATH nosetests3 tests/vmtests/test_network.py
81
82Or run a single test with:
83 sudo PATH=$PWD/tools:$PATH nosetests3 tests/vmtests/test_network.py:WilyTestBasic
84
85Note:
86 * currently, the tests have to run as root. The reason for this is that
87 the kernel and initramfs to boot are extracted from the maas ephemeral
88 image. This should be fixed at some point, and then 'make vmtest' will not require root.
89
90 The tests themselves don't actually have to run as root, but the
91 test setup does.
92 * the 'tools' directory must be in your path.
93 * test will set apt_proxy in the guests to the value of
94 'apt_proxy' environment variable. If that is not set it will
95 look at the host's apt config and read 'Acquire::HTTP::Proxy'
96
97== Environment Variables ==
98Some environment variables affect the running of vmtest
99 * apt_proxy:
100 test will set apt_proxy in the guests to the value of 'apt_proxy'.
101 If that is not set it will look at the host's apt config and read
102 'Acquire::HTTP::Proxy'
103
104 * CURTIN_VMTEST_KEEP_DATA_PASS CURTIN_VMTEST_KEEP_DATA_FAIL:
105 default:
106 CURTIN_VMTEST_KEEP_DATA_PASS=none
107 CURTIN_VMTEST_KEEP_DATA_FAIL=all
108 These 2 variables determine what portions of the temporary
109 test data are kept.
110
111 The variables contain a comma ',' delimited list of directories
112 that should be kept in the case of pass or fail. Additionally,
113 the values 'all' and 'none' are accepted.
114
115 Each vmtest that runs has its own sub-directory under the top level
116 CURTIN_VMTEST_TOPDIR. In that directory are directories:
117 boot: inputs to the system boot (after install)
118 install: install phase related files
119 disks: the disks used for installation and boot
120 logs: install and boot logs
121 collect: data collected by the boot phase
122
123 * CURTIN_VMTEST_TOPDIR: default $TMPDIR/vmtest-<timestamp>
124 vmtest puts all test data under this value. By default, it creates
125 a directory in TMPDIR (/tmp) named "vmtest-<timestamp>"
126
127 If you set this value, you must ensure that the directory is either
128 non-existent or clean.
129
130 * CURTIN_VMTEST_LOG: default $TMPDIR/vmtest-<timestamp>.log
131 vmtest writes extended log information to this file.
132 The default puts the log along side the TOPDIR.
133
134 * CURTIN_VMTEST_IMAGE_SYNC: default false (boolean)
135 if set to true, each run will attempt a sync of images.
136 If you want to make sure images are always up to date, then set to true.
137
138 * CURTIN_VMTEST_BRIDGE: default 'user'
139 the network devices will be attached to this bridge. The default is
140 'user', which means to use qemu user mode networking. Set it to
141 'virbr0' or 'lxcbr0' to use those bridges and then be able to ssh
142 in directly.
143
144 * IMAGE_DIR: default /srv/images
145 vmtest keeps a mirror of maas ephemeral images in this directory.
146
147 * IMAGES_TO_KEEP: default 1
148 keep this number of images of each release in the IMAGE_DIR.
149
150Environment 'boolean' values:
151 For boolean environment variables the value is considered True
152 if it is any value other than a case-insensitive 'false', '' or "0"
=== removed file 'doc/devel/README.txt'
--- doc/devel/README.txt 2015-03-11 13:19:43 +0000
+++ doc/devel/README.txt 1970-01-01 00:00:00 +0000
@@ -1,55 +0,0 @@
1## curtin development ##
2
3This document describes how to use kvm and ubuntu cloud images
4to develop curtin or test install configurations inside kvm.
5
6## get some dependencies ##
7sudo apt-get -qy install kvm libvirt-bin cloud-utils bzr
8
9## get cloud image to boot (-disk1.img) and one to install (-root.tar.gz)
10mkdir -p ~/download
11DLDIR=$( cd ~/download && pwd )
12rel="trusty"
13arch=amd64
14burl="http://cloud-images.ubuntu.com/$rel/current/"
15for f in $rel-server-cloudimg-${arch}-root.tar.gz $rel-server-cloudimg-${arch}-disk1.img; do
16 wget "$burl/$f" -O $DLDIR/$f; done
17( cd $DLDIR && qemu-img convert -O qcow $rel-server-cloudimg-${arch}-disk1.img $rel-server-cloudimg-${arch}-disk1.qcow2)
18
19BOOTIMG="$DLDIR/$rel-server-cloudimg-${arch}-disk1.qcow2"
20ROOTTGZ="$DLDIR/$rel-server-cloudimg-${arch}-root.tar.gz"
21
22## get curtin
23mkdir -p ~/src
24bzr init-repo ~/src/curtin
25( cd ~/src/curtin && bzr branch lp:curtin trunk.dist )
26( cd ~/src/curtin && bzr branch trunk.dist trunk )
27
28## work with curtin
29cd ~/src/curtin/trunk
30# use 'launch' to launch a kvm instance with user data to pack
31# up local curtin and run it inside instance.
32./tools/launch $BOOTIMG --publish $ROOTTGZ -- curtin install "PUBURL/${ROOTTGZ##*/}"
33
34## notes about 'launch' ##
35 * launch has --help so you can see that for some info.
36 * '--publish' adds a web server at ${HTTP_PORT:-9923}
37 and puts the files you want available there. You can reference
38 this url in config or cmdline with 'PUBURL'. For example
39 '--publish foo.img' will put 'foo.img' at PUBURL/foo.img.
40 * launch sets 'ubuntu' user password to 'passw0rd'
41 * launch runs 'kvm -curses'
42 kvm -curses keyboard info:
43 'alt-2' to go to qemu console
44 * launch puts serial console to 'serial.log' (look there for stuff)
45 * when logged in
46 * you can look at /var/log/cloud-init-output.log
47 * archive should be extracted in /curtin
48 * shell archive should be in /var/lib/cloud/instance/scripts/part-002
49 * when logged in, and archive available at
50
51
52## other notes ##
53 * need to add '--install-deps' or something for curtin
54 cloud-image in 12.04 has no 'python3'
55 ideally 'curtin --install-deps install' would get the things it needs
=== added file 'doc/devel/clear_holders_doc.txt'
--- doc/devel/clear_holders_doc.txt 1970-01-01 00:00:00 +0000
+++ doc/devel/clear_holders_doc.txt 2016-10-03 18:55:20 +0000
@@ -0,0 +1,85 @@
1The new version of clear_holders is based around a data structure called a
2holder_tree which represents the current storage hierarchy above a specified
3starting device. Each node in a holders tree contains data about the node and a
4key 'holders' which contains a list of all nodes that depend on it. The keys in
5a holders_tree node are:
6 - device: the path to the device in /sys/class/block
7 - dev_type: what type of storage layer the device is. possible values:
8 - disk
9 - lvm
10 - crypt
11 - raid
12 - bcache
13 - partition
14 - name: the kname of the device (used for display)
15 - holders: holders_trees for devices depending on the current device
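
As an illustration, a holders tree for a disk holding a single partition
might look like the following (paths and names are illustrative):

  {'device': '/sys/class/block/sda', 'dev_type': 'disk', 'name': 'sda',
   'holders': [{'device': '/sys/class/block/sda1', 'dev_type': 'partition',
                'name': 'sda1', 'holders': []}]}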
16
17A holders tree can be generated for a device using the function
18clear_holders.gen_holders_tree. The device can be specified either as a path in
19/sys/class/block or as a path in /dev.
20
21The new implementation of block.clear_holders shuts down storage devices in a
22holders tree starting from the leaves of the tree and ascending towards the
23root. The old implementation of clear_holders ascended each path of the tree
24separately, in a pattern similar to depth first search. The problem with the
25old implementation is that in some cases either an attempt would be made to
26remove one storage device while other devices depended on it or clear_holders
27would attempt to shut down the same storage device several times. In order to
28cope with this, the old version of clear_holders had logic to handle expected
29failures and hope for the best moving forward. The new version of clear_holders
30is able to run without many anticipated failures.
31
32The logic to plan what order to shut down storage layers in is in
33clear_holders.plan_shutdown_holders_trees. This function accepts either a
34single holders tree or a list of holders trees. When run with a list of holders
35trees, it assumes that all of these trees start at basically the same layer in
36the overall storage hierarchy for the system (i.e. a list of holders trees
37starting from all of the target installation disks). This function returns a
38list of dictionaries, with each dictionary containing the keys:
39 - device: the path to the device in /sys/class/block
40 - dev_type: what type of storage layer the device is. possible values:
41 - disk
42 - lvm
43 - crypt
44 - raid
45 - bcache
46 - partition
47 - level: the level of the device in the current storage hierarchy
48 (starting from 0)
49
50The items in the list returned by clear_holders.plan_shutdown_holders_trees
51should be processed in order to make sure the holders trees are shut down fully.
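
For illustration, a shutdown plan for the tree above might look like this,
ordered so that devices further from the root are shut down first (paths
are illustrative):

  [{'device': '/sys/class/block/sda1', 'dev_type': 'partition', 'level': 1},
   {'device': '/sys/class/block/sda', 'dev_type': 'disk', 'level': 0}]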
52
53The main interface for clear_holders is the function
54clear_holders.clear_holders. If the system has just been booted it could be
55beneficial to run the function clear_holders.start_clear_holders_deps before
56using clear_holders.clear_holders. This ensures clear_holders will be able to
57properly shut down storage devices. The function clear_holders.clear_holders can be
58passed either a single device or a list of devices and will shut down all
59storage devices above the device(s). The devices can be specified either by
60path in /dev or by path in /sys/class/block.
61
62In order to test if a device or devices are free to be partitioned/formatted,
63the function clear_holders.assert_clear can be passed either a single device or
64a list of devices, with devices specified either by path in /dev or by path in
65/sys/class/block. If there are any storage devices that depend on one of the
66devices passed to clear_holders.assert_clear, then an OSError will be raised.
67If clear_holders.assert_clear does not raise any errors, then the devices
68specified should be ready for partitioning.
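
As a minimal sketch, clearing a single installation disk from python could
look like this (run as root, with curtin importable; the disk path is
illustrative):

  from curtin.block import clear_holders

  # prepare the environment so storage devices can be shut down cleanly
  clear_holders.start_clear_holders_deps()

  # inspect what is currently stacked on top of the disk
  tree = clear_holders.gen_holders_tree('/dev/sda')
  print(clear_holders.format_holders_tree(tree))

  # shut down all storage layers above the disk, then verify it is clear
  clear_holders.clear_holders('/dev/sda')
  clear_holders.assert_clear('/dev/sda')  # raises OSError if holders remain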
69
70It is possible to query further information about storage devices using
71clear_holders.
72
73Holders for an individual device can be queried using clear_holders.get_holders.
74Results are returned as a list of knames for the holding devices.
75
76A holders tree can be printed in a human readable format using
77clear_holders.format_holders_tree(). Example output:
78sda
79|-- sda1
80|-- sda2
81`-- sda5
82 `-- dm-0
83 |-- dm-1
84 `-- dm-2
85 `-- dm-3
=== modified file 'doc/index.rst'
--- doc/index.rst 2015-10-02 16:19:07 +0000
+++ doc/index.rst 2016-10-03 18:55:20 +0000
@@ -13,7 +13,13 @@
    :maxdepth: 2

    topics/overview
+   topics/config
+   topics/apt_source
+   topics/networking
+   topics/storage
    topics/reporting
+   topics/development
+   topics/integration-testing




=== added file 'doc/topics/apt_source.rst'
--- doc/topics/apt_source.rst 1970-01-01 00:00:00 +0000
+++ doc/topics/apt_source.rst 2016-10-03 18:55:20 +0000
@@ -0,0 +1,164 @@
1==========
2APT Source
3==========
4
5This part of curtin allows influencing the apt behaviour and configuration.
6
7By default - if no apt config is provided - it does nothing. That keeps behavior compatible on upgrades.
8
9The feature has an optional target argument which - by default - is used to modify the environment that curtin currently installs (@TARGET_MOUNT_POINT).
10
11Features
12~~~~~~~~
13
14* Add PGP keys to the APT trusted keyring
15
16 - add via short keyid
17
18 - add via long key fingerprint
19
20 - specify a custom keyserver to pull from
21
22 - add raw keys (which makes you independent of keyservers)
23
24* Influence global apt configuration
25
26 - adding ppa's
27
28 - replacing mirror, security mirror and release in sources.list
29
30 - able to provide a fully custom template for sources.list
31
32 - add arbitrary apt.conf settings
33
34 - provide debconf configurations
35
36 - disabling suites (=pockets)
37
38 - per architecture mirror definition
39
40
41Configuration
42~~~~~~~~~~~~~
43
44The general configuration of the apt feature is under an element called ``apt``.
45
46This can have various "global" subelements as listed in the examples below.
47The file ``apt-source.yaml`` holds more examples.
48
49These global configurations are valid throughout all of the apt feature.
50So for example a global specification of a ``primary`` mirror will apply to all rendered sources entries.
51
52Then there is a section ``sources`` which can hold any number of source subelements itself.
53The key is the filename and will be prepended by /etc/apt/sources.list.d/ if it doesn't start with a ``/``.
54There are certain cases - where no content is written into a sources.list file - in which the filename will be ignored, yet it can still be used as an index for merging.
55
56The values inside the entries consist of the following optional entries
57
58* ``source``: a sources.list entry (some variable replacements apply)
59
60* ``keyid``: providing a key to import via shortid or fingerprint
61
62* ``key``: providing a raw PGP key
63
64* ``keyserver``: specify an alternate keyserver to pull keys from that were specified by keyid
65
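For instance, a single source entry combining these fields might look like
this (the repository URL, keyid and keyserver are placeholders)::

 sources:
   myrepo.list:
     source: "deb http://repo.example.com/ubuntu $RELEASE main"
     keyid: F430BBA5
     keyserver: keyserver.example.com
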
66The section "sources" is a dictionary (unlike most block/net configs, which are lists). This format allows better merging between multiple input files than a list would ::
67
68 sources:
69 s1: {'key': 'key1', 'source': 'source1'}
70
71 sources:
72 s2: {'key': 'key2'}
73 s1: {'keyserver': 'foo'}
74
75 This would be merged into
76 s1: {'key': 'key1', 'source': 'source1', keyserver: 'foo'}
77 s2: {'key': 'key2'}
78
79Here is just one of the most common examples for this feature: install with curtin in an isolated environment (derived repository):
80
81For that we need to:
82* insert the PGP key of the local repository to be trusted
83
84 - since you are locked down you can't pull from keyserver.ubuntu.com
85
86 - if you have an internal keyserver you could pull from there, but let us assume you don't even have that; so you have to provide the raw key
87
88 - in the example I'll use the key of the "Ubuntu CD Image Automatic Signing Key" which makes no sense as it is in the trusted keyring anyway, but it is a good example. (Also the key is shortened to stay readable)
89
90::
91
92 -----BEGIN PGP PUBLIC KEY BLOCK-----
93 Version: GnuPG v1
94 mQGiBEFEnz8RBAC7LstGsKD7McXZgd58oN68KquARLBl6rjA2vdhwl77KkPPOr3O
95 RwIbDAAKCRBAl26vQ30FtdxYAJsFjU+xbex7gevyGQ2/mhqidES4MwCggqQyo+w1
96 Twx6DKLF+3rF5nf1F3Q=
97 =PBAe
98 -----END PGP PUBLIC KEY BLOCK-----
99
100* replace the mirrors used to some mirrors available inside the isolated environment for apt to pull repository data from.
101
102 - let's consider we have a local mirror at ``mymirror.local`` but otherwise following the usual paths
103
104 - make an example with a partial mirror that doesn't mirror the backports suite, so backports have to be disabled
105
106That would be specified as ::
107
108 apt:
109 primary:
110 - arches: [default]
111 uri: http://mymirror.local/ubuntu/
112 disable_suites: [backports]
113 sources:
114 localrepokey:
115 key: | # full key as block
116 -----BEGIN PGP PUBLIC KEY BLOCK-----
117 Version: GnuPG v1
118
119 mQGiBEFEnz8RBAC7LstGsKD7McXZgd58oN68KquARLBl6rjA2vdhwl77KkPPOr3O
120 RwIbDAAKCRBAl26vQ30FtdxYAJsFjU+xbex7gevyGQ2/mhqidES4MwCggqQyo+w1
121 Twx6DKLF+3rF5nf1F3Q=
122 =PBAe
123 -----END PGP PUBLIC KEY BLOCK-----
124
125The file examples/apt-source.yaml holds various further examples that can be configured with this feature.
126
127
128Common snippets
129~~~~~~~~~~~~~~~
130This is a collection of additional ideas people can use the feature for customizing their to-be-installed system.
131
132* enable proposed on installing
133
134::
135
136 apt:
137 sources:
138 proposed.list: deb $MIRROR $RELEASE-proposed main restricted universe multiverse
139
140* Make debug symbols available
141
142::
143
144 apt:
145 sources:
146 ddebs.list: |
147 deb http://ddebs.ubuntu.com $RELEASE main restricted universe multiverse
148 deb http://ddebs.ubuntu.com $RELEASE-updates main restricted universe multiverse
149 deb http://ddebs.ubuntu.com $RELEASE-security main restricted universe multiverse
150 deb http://ddebs.ubuntu.com $RELEASE-proposed main restricted universe multiverse
151
152Timing
153~~~~~~
154The feature is implemented at the stage of curthooks_commands, which runs just after curtin has extracted the image to the target.
155Additionally it can be run as the standalone command "curtin -v --config <yourconfigfile> apt-config".
156
157This will pick up the target from the environment variable set by curtin. If you want to use it against a different target, or outside of the usual curtin handling, you can add ``--target <path>`` to override the target path.
158This target should have at least a minimal system with apt, apt-add-repository and dpkg installed for the functionality to work.
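
For example, a standalone run against an alternate target could look like
this (the config file name and target path are placeholders)::

 curtin -v --config myconf.yaml apt-config --target /tmp/mytarget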
159
160
161Dependencies
162~~~~~~~~~~~~
163Cloud-init might need to resolve dependencies and install packages in the ephemeral environment to run curtin.
164Therefore it is recommended to not only provide an apt configuration to curtin for the target, but also one to the install environment via cloud-init.
=== added file 'doc/topics/config.rst'
--- doc/topics/config.rst 1970-01-01 00:00:00 +0000
+++ doc/topics/config.rst 2016-10-03 18:55:20 +0000
@@ -0,0 +1,551 @@
1====================
2Curtin Configuration
3====================
4
5Curtin exposes a number of configuration options for controlling Curtin
6behavior during installation.
7
8
9Configuration options
10---------------------
11Curtin's top level config keys are as follows:
12
13
14- apt_mirrors (``apt_mirrors``)
15- apt_proxy (``apt_proxy``)
16- block-meta (``block``)
17- debconf_selections (``debconf_selections``)
18- disable_overlayroot (``disable_overlayroot``)
19- grub (``grub``)
20- http_proxy (``http_proxy``)
21- install (``install``)
22- kernel (``kernel``)
23- kexec (``kexec``)
24- multipath (``multipath``)
25- network (``network``)
26- power_state (``power_state``)
27- reporting (``reporting``)
28- restore_dist_interfaces: (``restore_dist_interfaces``)
29- sources (``sources``)
30- stages (``stages``)
31- storage (``storage``)
32- swap (``swap``)
33- system_upgrade (``system_upgrade``)
34- write_files (``write_files``)
35
36
37apt_mirrors
38~~~~~~~~~~~
39Configure APT mirrors for ``ubuntu_archive`` and ``ubuntu_security``
40
41**ubuntu_archive**: *<http://local.archive/ubuntu>*
42
43**ubuntu_security**: *<http://local.archive/ubuntu>*
44
45If the target OS includes /etc/apt/sources.list, Curtin will replace
46the default values for each key set with the supplied mirror URL.
47
48**Example**::
49
50 apt_mirrors:
51 ubuntu_archive: http://local.archive/ubuntu
52 ubuntu_security: http://local.archive/ubuntu
53
54
55apt_proxy
56~~~~~~~~~
57Curtin will configure an APT HTTP proxy in the target OS
58
59**apt_proxy**: *<URL to APT proxy>*
60
61**Example**::
62
63 apt_proxy: http://squid.mirror:3267/
64
65
66block-meta
67~~~~~~~~~~
68Configure how Curtin selects and configures disks on the target
69system without providing a custom configuration (mode=simple).
70
71**devices**: *<List of block devices for use>*
72
73The ``devices`` parameter is a list of block device paths that Curtin may
74select from when choosing where to install the OS.
75
76**boot-partition**: *<dictionary of configuration>*
77
78The ``boot-partition`` parameter controls how to configure the boot partition
79with the following parameters:
80
81**enabled**: *<boolean>*
82
83If enabled, Curtin will forcibly set up a partition on the target device for booting.
84
85**format**: *<['uefi', 'gpt', 'prep', 'mbr']>*
86
87Specify the partition format. Some formats, like ``uefi`` and ``prep``,
88are restricted by platform characteristics.
89
90**fstype**: *<filesystem type: one of ['ext3', 'ext4'], defaults to 'ext4'>*
91
92Specify the filesystem format on the boot partition.
93
94**label**: *<filesystem label: defaults to 'boot'>*
95
96Specify the filesystem label on the boot partition.
97
98**Example**::
99
100 block-meta:
101 devices:
102 - /dev/sda
103 - /dev/sdb
104 boot-partition:
105 - enabled: True
106 format: gpt
107 fstype: ext4
108 label: my-boot-partition
109
110
111debconf_selections
112~~~~~~~~~~~~~~~~~~
113Curtin will update the target with debconf set-selection values. Users will
114need to be familiar with the package debconf options. Users can probe a
115package's debconf settings by using ``debconf-get-selections``.
116
117**selection_name**: *<debconf-set-selections input>*
118
119``debconf-set-selections`` is in the form::
120
121 <packagename> <packagename/option-name> <type> <value>
122
123**Example**::
124
125 debconf_selections:
126 set1: |
127 cloud-init cloud-init/datasources multiselect MAAS
128 lxd lxd/bridge-name string lxdbr0
129 set2: lxd lxd/setup-bridge boolean true
130
131
132
133disable_overlayroot
134~~~~~~~~~~~~~~~~~~~
135Curtin disables overlayroot in the target by default.
136
137**disable_overlayroot**: *<boolean: default True>*
138
139**Example**::
140
141 disable_overlayroot: False
142
143
144grub
145~~~~
146Curtin configures grub as the target machine's boot loader. Users
147can control a few options to tailor how the system will boot after
148installation.
149
150**install_devices**: *<list of block device names to install grub>*
151
152Specify a list of devices onto which grub will attempt to install.
153
154**replace_linux_default**: *<boolean: default True>*
155
156Controls whether grub-install will update the Linux Default target
157value during installation.
158
159**update_nvram**: *<boolean: default False>*
160
161Certain platforms, like ``uefi`` and ``prep`` systems, utilize
162NVRAM to hold boot configuration settings which control the order in
163which devices are booted. By default, Curtin will not attempt to
164update the NVRAM settings, preserving the existing system configuration.
165Users may want to force NVRAM to be updated such that the next boot
166of the system will boot from the installed device.
167
168**Example**::
169
170 grub:
171 install_devices:
172 - /dev/sda1
173 replace_linux_default: False
174 update_nvram: True
175
176
177http_proxy
178~~~~~~~~~~
179Curtin will export ``http_proxy`` value into the installer environment.
180
181**http_proxy**: *<HTTP Proxy URL>*
182
183**Example**::
184
185 http_proxy: http://squid.proxy:3728/
186
187
188
189install
190~~~~~~~
191Configure Curtin's install options.
192
193**log_file**: *<path to write Curtin's install.log data>*
194
195Curtin logs install progress by default to /var/log/curtin/install.log
196
197**post_files**: *<List of files to read from host to include in reporting data>*
198
199Curtin by default will post the ``log_file`` contents to any configured reporter.
200
201**save_install_config**: *<Path to save merged curtin configuration file>*
202
203Curtin will save the merged configuration data into the target OS at
204the path of ``save_install_config``. This defaults to /root/curtin-install-cfg.yaml
205
206**Example**::
207
208 install:
209 log_file: /tmp/install.log
210 post_files:
211 - /tmp/install.log
212 - /var/log/syslog
213 save_install_config: /root/myconf.yaml
214
215
216kernel
217~~~~~~
218Configure how Curtin selects which kernel to install into the target image.
219If ``kernel`` is not configured, Curtin will use the default mapping below
220and determine the ``package`` value by looking up the current release
221and the currently running kernel version.
222
223
224**fallback-package**: *<kernel package-name to be used as fallback>*
225
226Specify a kernel package name to be used if the default package is not
227available.
228
229**mapping**: *<Dictionary mapping Ubuntu release to HWE kernel names>*
230
231Default mapping for Releases to package names is as follows::
232
233 precise:
234 3.2.0:
235 3.5.0: -lts-quantal
236 3.8.0: -lts-raring
237 3.11.0: -lts-saucy
238 3.13.0: -lts-trusty
239 trusty:
240 3.13.0:
241 3.16.0: -lts-utopic
242 3.19.0: -lts-vivid
243 4.2.0: -lts-wily
244 4.4.0: -lts-xenial
245 xenial:
246 4.3.0:
247 4.4.0:
248
249
250**package**: *<Linux kernel package name>*
251
252Specify the exact package to install in the target OS.
253
254**Example**::
255
256 kernel:
257 fallback-package: linux-image-generic
258 package: linux-image-generic-lts-xenial
259 mapping:
260 - xenial:
261 - 4.4.0: -my-custom-kernel
262
263
264kexec
265~~~~~
266Curtin can use kexec to "reboot" into the target OS.
267
268**mode**: *<on>*
269
270Enable rebooting with kexec.
271
272**Example**::
273
274 kexec: on
275
276
277multipath
278~~~~~~~~~
279Curtin will detect and autoconfigure multipath by default to enable
280boot for systems with multipath. Curtin does not apply any advanced
281configuration or tuning; rather, it uses distro defaults and provides
282enough configuration to enable booting.
283
284**mode**: *<['auto', 'disabled']>*
285
286Defaults to auto which will configure enough to enable booting on multipath
287devices. Disabled will prevent curtin from installing or configuring
288multipath.
289
290**overwrite_bindings**: *<boolean>*
291
292If ``overwrite_bindings`` is True, Curtin will generate a new bindings
293file for multipath, overriding any existing bindings in the target image.
294
295**Example**::
296
297 multipath:
298 mode: auto
299 overwrite_bindings: True
300
301
302network
303~~~~~~~
304Configure networking (see Networking section for details).
305
306**network_option_1**: *<option value>*
307
The diff has been truncated for viewing.
