Merge lp:~raharper/ubuntu/xenial/curtin/pkg-sru-revno425 into lp:~smoser/ubuntu/xenial/curtin/pkg
Status: Merged
Merged at revision: 56
Proposed branch: lp:~raharper/ubuntu/xenial/curtin/pkg-sru-revno425
Merge into: lp:~smoser/ubuntu/xenial/curtin/pkg
Diff against target: 14513 lines (+10324/-1893), 94 files modified
Makefile (+3/-1) curtin/__init__.py (+4/-0) curtin/block/__init__.py (+249/-61) curtin/block/clear_holders.py (+387/-0) curtin/block/lvm.py (+96/-0) curtin/block/mdadm.py (+18/-5) curtin/block/mkfs.py (+10/-5) curtin/commands/apply_net.py (+156/-1) curtin/commands/apt_config.py (+668/-0) curtin/commands/block_info.py (+75/-0) curtin/commands/block_meta.py (+134/-263) curtin/commands/block_wipe.py (+1/-2) curtin/commands/clear_holders.py (+48/-0) curtin/commands/curthooks.py (+61/-235) curtin/commands/main.py (+4/-3) curtin/config.py (+2/-3) curtin/gpg.py (+74/-0) curtin/net/__init__.py (+67/-30) curtin/net/network_state.py (+45/-1) curtin/util.py (+278/-81) debian/changelog (+32/-2) doc/conf.py (+21/-4) doc/devel/README-vmtest.txt (+0/-152) doc/devel/README.txt (+0/-55) doc/devel/clear_holders_doc.txt (+85/-0) doc/index.rst (+6/-0) doc/topics/apt_source.rst (+164/-0) doc/topics/config.rst (+551/-0) doc/topics/development.rst (+68/-0) doc/topics/integration-testing.rst (+245/-0) doc/topics/networking.rst (+522/-0) doc/topics/overview.rst (+7/-7) doc/topics/reporting.rst (+3/-3) doc/topics/storage.rst (+894/-0) examples/apt-source.yaml (+267/-0) examples/network-ipv6-bond-vlan.yaml (+56/-0) examples/tests/apt_config_command.yaml (+85/-0) examples/tests/apt_source_custom.yaml (+97/-0) examples/tests/apt_source_modify.yaml (+92/-0) examples/tests/apt_source_modify_arches.yaml (+102/-0) examples/tests/apt_source_modify_disable_suite.yaml (+92/-0) examples/tests/apt_source_preserve.yaml (+98/-0) examples/tests/apt_source_search.yaml (+97/-0) examples/tests/basic.yaml (+5/-1) examples/tests/basic_network_static_ipv6.yaml (+22/-0) examples/tests/basic_scsi.yaml (+1/-1) examples/tests/network_alias.yaml (+125/-0) examples/tests/network_mtu.yaml (+88/-0) examples/tests/network_source_ipv6.yaml (+31/-0) examples/tests/test_old_apt_features.yaml (+11/-0) examples/tests/test_old_apt_features_ports.yaml (+10/-0) examples/tests/uefi_basic.yaml (+15/-0) 
examples/tests/vlan_network_ipv6.yaml (+92/-0) setup.py (+2/-2) tests/unittests/helpers.py (+41/-0) tests/unittests/test_apt_custom_sources_list.py (+170/-0) tests/unittests/test_apt_source.py (+1032/-0) tests/unittests/test_block.py (+210/-0) tests/unittests/test_block_lvm.py (+94/-0) tests/unittests/test_block_mdadm.py (+28/-23) tests/unittests/test_block_mkfs.py (+2/-2) tests/unittests/test_clear_holders.py (+329/-0) tests/unittests/test_make_dname.py (+200/-0) tests/unittests/test_net.py (+54/-13) tests/unittests/test_util.py (+180/-2) tests/vmtests/__init__.py (+38/-38) tests/vmtests/helpers.py (+129/-166) tests/vmtests/test_apt_config_cmd.py (+55/-0) tests/vmtests/test_apt_source.py (+238/-0) tests/vmtests/test_basic.py (+21/-41) tests/vmtests/test_bcache_basic.py (+5/-8) tests/vmtests/test_bonding.py (+0/-204) tests/vmtests/test_lvm.py (+2/-1) tests/vmtests/test_mdadm_bcache.py (+21/-17) tests/vmtests/test_multipath.py (+5/-13) tests/vmtests/test_network.py (+205/-348) tests/vmtests/test_network_alias.py (+40/-0) tests/vmtests/test_network_bonding.py (+63/-0) tests/vmtests/test_network_enisource.py (+91/-0) tests/vmtests/test_network_ipv6.py (+53/-0) tests/vmtests/test_network_ipv6_enisource.py (+26/-0) tests/vmtests/test_network_ipv6_static.py (+42/-0) tests/vmtests/test_network_ipv6_vlan.py (+34/-0) tests/vmtests/test_network_mtu.py (+155/-0) tests/vmtests/test_network_static.py (+44/-0) tests/vmtests/test_network_vlan.py (+77/-0) tests/vmtests/test_nvme.py (+2/-3) tests/vmtests/test_old_apt_features.py (+89/-0) tests/vmtests/test_raid5_bcache.py (+5/-8) tests/vmtests/test_uefi_basic.py (+16/-18) tools/jenkins-runner (+33/-7) tools/launch (+9/-48) tools/xkvm (+90/-2) tox.ini (+30/-13) |
To merge this branch: `bzr merge lp:~raharper/ubuntu/xenial/curtin/pkg-sru-revno425`
Related bugs: (none listed)
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Scott Moser | | | Pending |
Review via email: mp+307473@code.launchpad.net |
Commit message
Description of the change
Import new upstream snapshot (revno 425)
New Upstream snapshot:
- unittest,tox.ini: catch and fix issue with trusty-level mock of open
- block/mdadm: add option to ignore mdadm_assemble errors (LP: #1618429)
- curtin/doc: overhaul curtin documentation for readthedocs.org (LP: #1351085)
- curtin.util: re-add support for RunInChroot (LP: #1617375)
- curtin/net: overhaul of eni rendering to handle mixed ipv4/ipv6 configs
- curtin.block: refactor clear_holders logic into block.clear_holders and cli cmd
- curtin.apply_net should exit non-zero upon exception. (LP: #1615780)
- apt: fix bug in disable_suites if sources.list line is blank.
- vmtests: disable Wily in vmtests
- Fix the unittests for test_apt_source.
- get CURTIN_
- fix vmtest check_file_
- fix whitespace damage in tests/vmtests/
- fix dpkg-reconfigure when debconf_selections was provided. (LP: #1609614)
- fix apt tests on non-intel arch
- Add apt features to curtin. (LP: #1574113)
- vmtest: easier use of parallel and controlling timeouts
- mkfs.vfat: add force flag for formatting whole disks (LP: #1597923)
- block.mkfs: fix sectorsize flag (LP: #1597522)
- block_meta: cleanup use of sys_block_path and handle cciss knames (LP: #1562249)
- block.get_
- util: add target (chroot) support to subp, add target_path helper.
- block_meta: fallback to parted if blkid does not produce output (LP: #1524031)
- commands.
- tox.ini: run coverage normally rather than separately
- move uefi boot knowledge from launch and vmtest to xkvm
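The entry above, "apt: fix bug in disable_suites if sources.list line is blank", comes down to guarding field access before indexing a parsed line. This is an illustrative reimplementation, not curtin's actual `apt_config` code; the function name and comment marker are assumptions:

```python
def disable_suites(disabled, sources_list):
    """Comment out deb lines whose suite is disabled, tolerating blank lines."""
    out = []
    for line in sources_list.splitlines():
        fields = line.split()
        # the fix: a blank or whitespace-only line has no fields to inspect,
        # so it must be passed through before indexing fields[2]
        if not fields or fields[0].startswith('#'):
            out.append(line)
            continue
        # deb <uri> <suite> <component>...
        if len(fields) >= 3 and fields[2] in disabled:
            line = '# suite disabled by curtin: ' + line
        out.append(line)
    return '\n'.join(out)


src = ("deb http://archive.ubuntu.com/ubuntu xenial main\n"
       "\n"
       "deb http://archive.ubuntu.com/ubuntu xenial-updates main")
result = disable_suites(['xenial-updates'], src)
print(result)
```

Before the guard, a blank line would have produced an empty field list and an `IndexError` on the suite lookup.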
Preview Diff
```diff
=== modified file 'Makefile'
--- Makefile	2016-05-10 16:13:29 +0000
+++ Makefile	2016-10-03 18:55:20 +0000
@@ -49,5 +49,7 @@
 sync-images:
 	@$(CWD)/tools/vmtest-sync-images
 
+clean:
+	rm -rf doc/_build
 
-.PHONY: all test pyflakes pyflakes3 pep8 build
+.PHONY: all clean test pyflakes pyflakes3 pep8 build
 
```
```diff
=== modified file 'curtin/__init__.py'
--- curtin/__init__.py	2015-11-23 16:22:09 +0000
+++ curtin/__init__.py	2016-10-03 18:55:20 +0000
@@ -33,6 +33,10 @@
     'SUBCOMMAND_SYSTEM_INSTALL',
     # subcommand 'system-upgrade' is present
     'SUBCOMMAND_SYSTEM_UPGRADE',
+    # supports new format of apt configuration
+    'APT_CONFIG_V1',
 ]
 
+__version__ = "0.1.0"
+
 # vi: ts=4 expandtab syntax=python
 
```
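The `APT_CONFIG_V1` entry added to `FEATURES` exists so that a consumer (for example, an installer driving curtin) can probe capabilities before emitting new-style apt configuration. A minimal sketch of such a probe; the helper name is illustrative, not part of curtin:

```python
# FEATURES as extended by this snapshot (values taken from the diff above)
FEATURES = [
    'SUBCOMMAND_SYSTEM_INSTALL',
    'SUBCOMMAND_SYSTEM_UPGRADE',
    # supports new format of apt configuration
    'APT_CONFIG_V1',
]


def supports_apt_config_v1(features):
    """Return True if this curtin build accepts the new apt config format."""
    return 'APT_CONFIG_V1' in features


print(supports_apt_config_v1(FEATURES))  # True
```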
```diff
=== modified file 'curtin/block/__init__.py'
--- curtin/block/__init__.py	2016-10-03 18:00:41 +0000
+++ curtin/block/__init__.py	2016-10-03 18:55:20 +0000
@@ -23,21 +23,31 @@
 import itertools
 
 from curtin import util
+from curtin.block import lvm
+from curtin.log import LOG
 from curtin.udev import udevadm_settle
-from curtin.log import LOG
 
 
 def get_dev_name_entry(devname):
+    """
+    convert device name to path in /dev
+    """
     bname = devname.split('/dev/')[-1]
     return (bname, "/dev/" + bname)
 
 
 def is_valid_device(devname):
+    """
+    check if device is a valid device
+    """
     devent = get_dev_name_entry(devname)[1]
     return is_block_device(devent)
 
 
 def is_block_device(path):
+    """
+    check if path is a block device
+    """
     try:
         return stat.S_ISBLK(os.stat(path).st_mode)
     except OSError as e:
@@ -47,26 +57,99 @@
 
 
 def dev_short(devname):
+    """
+    get short form of device name
+    """
+    devname = os.path.normpath(devname)
     if os.path.sep in devname:
         return os.path.basename(devname)
     return devname
 
 
 def dev_path(devname):
+    """
+    convert device name to path in /dev
+    """
     if devname.startswith('/dev/'):
         return devname
     else:
         return '/dev/' + devname
 
 
+def path_to_kname(path):
+    """
+    converts a path in /dev or a path in /sys/block to the device kname,
+    taking special devices and unusual naming schemes into account
+    """
+    # if path given is a link, get real path
+    # only do this if given a path though, if kname is already specified then
+    # this would cause a failure where the function should still be able to run
+    if os.path.sep in path:
+        path = os.path.realpath(path)
+    # using basename here ensures that the function will work given a path in
+    # /dev, a kname, or a path in /sys/block as an arg
+    dev_kname = os.path.basename(path)
+    # cciss devices need to have 'cciss!' prepended
+    if path.startswith('/dev/cciss'):
+        dev_kname = 'cciss!' + dev_kname
+    LOG.debug("path_to_kname input: '{}' output: '{}'".format(path, dev_kname))
+    return dev_kname
+
+
+def kname_to_path(kname):
+    """
+    converts a kname to a path in /dev, taking special devices and unusual
+    naming schemes into account
+    """
+    # if given something that is already a dev path, return it
+    if os.path.exists(kname) and is_valid_device(kname):
+        path = kname
+        LOG.debug("kname_to_path input: '{}' output: '{}'".format(kname, path))
+        return os.path.realpath(path)
+    # adding '/dev' to path is not sufficient to handle cciss devices and
+    # possibly other special devices which have not been encountered yet
+    path = os.path.realpath(os.sep.join(['/dev'] + kname.split('!')))
+    # make sure path we get is correct
+    if not (os.path.exists(path) and is_valid_device(path)):
+        raise OSError('could not get path to dev from kname: {}'.format(kname))
+    LOG.debug("kname_to_path input: '{}' output: '{}'".format(kname, path))
+    return path
+
+
+def partition_kname(disk_kname, partition_number):
+    """
+    Add number to disk_kname prepending a 'p' if needed
+    """
+    for dev_type in ['nvme', 'mmcblk', 'cciss', 'mpath', 'dm']:
+        if disk_kname.startswith(dev_type):
+            partition_number = "p%s" % partition_number
+            break
+    return "%s%s" % (disk_kname, partition_number)
+
+
+def sysfs_to_devpath(sysfs_path):
+    """
+    convert a path in /sys/class/block to a path in /dev
+    """
+    path = kname_to_path(path_to_kname(sysfs_path))
+    if not is_block_device(path):
+        raise ValueError('could not find blockdev for sys path: {}'
+                         .format(sysfs_path))
+    return path
+
+
 def sys_block_path(devname, add=None, strict=True):
+    """
+    get path to device in /sys/class/block
+    """
     toks = ['/sys/class/block']
     # insert parent dev if devname is partition
+    devname = os.path.normpath(devname)
     (parent, partnum) = get_blockdev_for_partition(devname)
     if partnum:
-        toks.append(dev_short(parent))
+        toks.append(path_to_kname(parent))
 
-    toks.append(dev_short(devname))
+    toks.append(path_to_kname(devname))
 
     if add is not None:
         toks.append(add)
```
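`partition_kname` above is a pure string function, so its naming rule (device types such as nvme and mmcblk insert a `p` between disk kname and partition number) can be exercised standalone; the body here is copied from the diff:

```python
def partition_kname(disk_kname, partition_number):
    """Add number to disk_kname, prepending a 'p' if needed."""
    for dev_type in ['nvme', 'mmcblk', 'cciss', 'mpath', 'dm']:
        if disk_kname.startswith(dev_type):
            partition_number = "p%s" % partition_number
            break
    return "%s%s" % (disk_kname, partition_number)


print(partition_kname('sda', 1))      # sda1
print(partition_kname('nvme0n1', 2))  # nvme0n1p2
print(partition_kname('mmcblk0', 1))  # mmcblk0p1
```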
```diff
@@ -83,6 +166,9 @@
 
 
 def _lsblock_pairs_to_dict(lines):
+    """
+    parse lsblock output and convert to dict
+    """
     ret = {}
     for line in lines.splitlines():
         toks = shlex.split(line)
@@ -98,6 +184,9 @@
 
 
 def _lsblock(args=None):
+    """
+    get lsblock data as dict
+    """
     # lsblk --help | sed -n '/Available/,/^$/p' |
     #     sed -e 1d -e '$d' -e 's,^[ ]\+,,' -e 's, .*,,' | sort
     keys = ['ALIGNMENT', 'DISC-ALN', 'DISC-GRAN', 'DISC-MAX', 'DISC-ZERO',
@@ -120,8 +209,10 @@
 
 
 def get_unused_blockdev_info():
-    # return a list of unused block devices. These are devices that
-    # do not have anything mounted on them.
+    """
+    return a list of unused block devices.
+    These are devices that do not have anything mounted on them.
+    """
 
     # get a list of top level block devices, then iterate over it to get
     # devices dependent on those.  If the lsblk call for that specific
@@ -137,7 +228,9 @@
 
 
 def get_devices_for_mp(mountpoint):
-    # return a list of devices (full paths) used by the provided mountpoint
+    """
+    return a list of devices (full paths) used by the provided mountpoint
+    """
     bdinfo = _lsblock()
     found = set()
     for devname, data in bdinfo.items():
@@ -158,6 +251,9 @@
 
 
 def get_installable_blockdevs(include_removable=False, min_size=1024**3):
+    """
+    find blockdevs suitable for installation
+    """
     good = []
     unused = get_unused_blockdev_info()
     for devname, data in unused.items():
@@ -172,21 +268,25 @@
 
 
 def get_blockdev_for_partition(devpath):
+    """
+    find the parent device for a partition.
+    returns a tuple of the parent block device and the partition number
+    if device is not a partition, None will be returned for partition number
+    """
+    # normalize path
+    rpath = os.path.realpath(devpath)
+
     # convert an entry in /dev/ to parent disk and partition number
     # if devpath is a block device and not a partition, return (devpath, None)
-
-    # input of /dev/vdb or /dev/disk/by-label/foo
-    # rpath is hopefully a real-ish path in /dev (vda, sdb..)
-    rpath = os.path.realpath(devpath)
-
-    bname = os.path.basename(rpath)
-    syspath = "/sys/class/block/%s" % bname
-
-    if not os.path.exists(syspath):
-        syspath2 = "/sys/class/block/cciss!%s" % bname
-        if not os.path.exists(syspath2):
-            raise ValueError("%s had no syspath (%s)" % (devpath, syspath))
-        syspath = syspath2
+    base = '/sys/class/block'
+
+    # input of /dev/vdb, /dev/disk/by-label/foo, /sys/block/foo,
+    # /sys/block/class/foo, or just foo
+    syspath = os.path.join(base, path_to_kname(devpath))
+
+    # don't need to try out multiple sysfs paths as path_to_kname handles cciss
+    if not os.path.exists(syspath):
+        raise OSError("%s had no syspath (%s)" % (devpath, syspath))
 
     ptpath = os.path.join(syspath, "partition")
     if not os.path.exists(ptpath):
```
260 | 207 | return (diskdevpath, ptnum) | 307 | return (diskdevpath, ptnum) |
261 | 208 | 308 | ||
262 | 209 | 309 | ||
263 | 310 | def get_sysfs_partitions(device): | ||
264 | 311 | """ | ||
265 | 312 | get a list of sysfs paths for partitions under a block device | ||
266 | 313 | accepts input as a device kname, sysfs path, or dev path | ||
267 | 314 | returns empty list if no partitions available | ||
268 | 315 | """ | ||
269 | 316 | sysfs_path = sys_block_path(device) | ||
270 | 317 | return [sys_block_path(kname) for kname in os.listdir(sysfs_path) | ||
271 | 318 | if os.path.exists(os.path.join(sysfs_path, kname, 'partition'))] | ||
272 | 319 | |||
273 | 320 | |||
274 | 210 | def get_pardevs_on_blockdevs(devs): | 321 | def get_pardevs_on_blockdevs(devs): |
276 | 211 | # return a dict of partitions with their info that are on provided devs | 322 | """ |
277 | 323 | return a dict of partitions with their info that are on provided devs | ||
278 | 324 | """ | ||
279 | 212 | if devs is None: | 325 | if devs is None: |
280 | 213 | devs = [] | 326 | devs = [] |
281 | 214 | devs = [get_dev_name_entry(d)[1] for d in devs] | 327 | devs = [get_dev_name_entry(d)[1] for d in devs] |
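The detection rule `get_sysfs_partitions` relies on (a sysfs entry is a partition iff it contains a `partition` file) can be demonstrated against a throwaway directory standing in for `/sys/class/block/<disk>`. The scanning helper here is a parameterized stand-in for illustration, not curtin's function:

```python
import os
import tempfile


def list_partition_knames(sysfs_dev_dir):
    # same detection rule as get_sysfs_partitions, but taking the directory
    # as an argument so it can run against a mock sysfs tree
    return sorted(kname for kname in os.listdir(sysfs_dev_dir)
                  if os.path.exists(os.path.join(sysfs_dev_dir, kname,
                                                 'partition')))


with tempfile.TemporaryDirectory() as sysfs:
    # mimic /sys/class/block/vda: two partitions plus a non-partition entry
    for part in ('vda1', 'vda2'):
        os.makedirs(os.path.join(sysfs, part))
        open(os.path.join(sysfs, part, 'partition'), 'w').close()
    os.makedirs(os.path.join(sysfs, 'queue'))  # no 'partition' file
    result = list_partition_knames(sysfs)

print(result)  # ['vda1', 'vda2']
```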
```diff
@@ -243,7 +356,9 @@
 
 
 def rescan_block_devices():
-    # run 'blockdev --rereadpt' for all block devices not currently mounted
+    """
+    run 'blockdev --rereadpt' for all block devices not currently mounted
+    """
     unused = get_unused_blockdev_info()
     devices = []
     for devname, data in unused.items():
@@ -271,6 +386,9 @@
 
 
 def blkid(devs=None, cache=True):
+    """
+    get data about block devices from blkid and convert to dict
+    """
     if devs is None:
         devs = []
 
```
```diff
@@ -423,7 +541,18 @@
     """
     info = _lsblock([devpath])
     LOG.debug('get_blockdev_sector_size: info:\n%s' % util.json_dumps(info))
-    [parent] = info
+    # (LP: 1598310) The call to _lsblock() may return multiple results.
+    # If it does, then search for a result with the correct device path.
+    # If no such device is found among the results, then fall back to previous
+    # behavior, which was taking the first of the results
+    assert len(info) > 0
+    for (k, v) in info.items():
+        if v.get('device_path') == devpath:
+            parent = k
+            break
+    else:
+        parent = list(info.keys())[0]
+
     return (int(info[parent]['LOG-SEC']), int(info[parent]['PHY-SEC']))
 
 
```
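The LP: 1598310 change above is a selection rule over the dict `_lsblock()` returns. Isolated here with a hand-built `info` dict in the same shape (the device names and values are illustrative):

```python
def pick_parent(info, devpath):
    # prefer the entry whose device_path matches devpath; otherwise fall
    # back to the first entry, which was the previous behavior
    assert len(info) > 0
    for (kname, data) in info.items():
        if data.get('device_path') == devpath:
            return kname
    return list(info.keys())[0]


info = {
    'vda': {'device_path': '/dev/vda', 'LOG-SEC': '512', 'PHY-SEC': '512'},
    'vda1': {'device_path': '/dev/vda1', 'LOG-SEC': '512', 'PHY-SEC': '512'},
}
print(pick_parent(info, '/dev/vda1'))  # vda1
print(pick_parent(info, '/dev/vdz'))   # vda  (fallback to first key)
```

The old `[parent] = info` unpacking raised `ValueError` whenever lsblk returned more than one row for the device.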
323 | @@ -499,50 +628,108 @@ | |||
324 | 499 | def sysfs_partition_data(blockdev=None, sysfs_path=None): | 628 | def sysfs_partition_data(blockdev=None, sysfs_path=None): |
325 | 500 | # given block device or sysfs_path, return a list of tuples | 629 | # given block device or sysfs_path, return a list of tuples |
326 | 501 | # of (kernel_name, number, offset, size) | 630 | # of (kernel_name, number, offset, size) |
327 | 502 | if blockdev is None and sysfs_path is None: | ||
328 | 503 | raise ValueError("Blockdev and sysfs_path cannot both be None") | ||
329 | 504 | |||
330 | 505 | if blockdev: | 631 | if blockdev: |
331 | 632 | blockdev = os.path.normpath(blockdev) | ||
332 | 506 | sysfs_path = sys_block_path(blockdev) | 633 | sysfs_path = sys_block_path(blockdev) |
336 | 507 | 634 | elif sysfs_path: | |
337 | 508 | ptdata = [] | 635 | # use normpath to ensure that paths with trailing slash work |
338 | 509 | # /sys/class/block/dev has entries of 'kname' for each partition | 636 | sysfs_path = os.path.normpath(sysfs_path) |
339 | 637 | blockdev = os.path.join('/dev', os.path.basename(sysfs_path)) | ||
340 | 638 | else: | ||
341 | 639 | raise ValueError("Blockdev and sysfs_path cannot both be None") | ||
342 | 510 | 640 | ||
343 | 511 | # queue property is only on parent devices, ie, we can't read | 641 | # queue property is only on parent devices, ie, we can't read |
344 | 512 | # /sys/class/block/vda/vda1/queue/* as queue is only on the | 642 | # /sys/class/block/vda/vda1/queue/* as queue is only on the |
345 | 513 | # parent device | 643 | # parent device |
346 | 644 | sysfs_prefix = sysfs_path | ||
347 | 514 | (parent, partnum) = get_blockdev_for_partition(blockdev) | 645 | (parent, partnum) = get_blockdev_for_partition(blockdev) |
348 | 515 | sysfs_prefix = sysfs_path | ||
349 | 516 | if partnum: | 646 | if partnum: |
350 | 517 | sysfs_prefix = sys_block_path(parent) | 647 | sysfs_prefix = sys_block_path(parent) |
357 | 518 | 648 | partnum = int(partnum) | |
358 | 519 | block_size = int(util.load_file(os.path.join(sysfs_prefix, | 649 | |
359 | 520 | 'queue/logical_block_size'))) | 650 | block_size = int(util.load_file(os.path.join( |
360 | 521 | 651 | sysfs_prefix, 'queue/logical_block_size'))) | |
355 | 522 | block_size = int( | ||
356 | 523 | util.load_file(os.path.join(sysfs_path, 'queue/logical_block_size'))) | ||
361 | 524 | unit = block_size | 652 | unit = block_size |
364 | 525 | for d in os.listdir(sysfs_path): | 653 | |
365 | 526 | partd = os.path.join(sysfs_path, d) | 654 | ptdata = [] |
366 | 655 | for part_sysfs in get_sysfs_partitions(sysfs_prefix): | ||
367 | 527 | data = {} | 656 | data = {} |
368 | 528 | for sfile in ('partition', 'start', 'size'): | 657 | for sfile in ('partition', 'start', 'size'): |
370 | 529 | dfile = os.path.join(partd, sfile) | 658 | dfile = os.path.join(part_sysfs, sfile) |
371 | 530 | if not os.path.isfile(dfile): | 659 | if not os.path.isfile(dfile): |
372 | 531 | continue | 660 | continue |
373 | 532 | data[sfile] = int(util.load_file(dfile)) | 661 | data[sfile] = int(util.load_file(dfile)) |
378 | 533 | if 'partition' not in data: | 662 | if partnum is None or data['partition'] == partnum: |
379 | 534 | continue | 663 | ptdata.append((path_to_kname(part_sysfs), data['partition'], |
380 | 535 | ptdata.append((d, data['partition'], data['start'] * unit, | 664 | data['start'] * unit, data['size'] * unit,)) |
377 | 536 | data['size'] * unit,)) | ||
381 | 537 | 665 | ||
382 | 538 | return ptdata | 666 | return ptdata |
383 | 539 | 667 | ||
384 | 540 | 668 | ||
385 | 669 | def get_part_table_type(device): | ||
386 | 670 | """ | ||
387 | 671 | check the type of partition table present on the specified device | ||
388 | 672 | returns None if no ptable was present or device could not be read | ||
389 | 673 | """ | ||
390 | 674 | # it is neccessary to look for the gpt signature first, then the dos | ||
391 | 675 | # signature, because a gpt formatted disk usually has a valid mbr to | ||
392 | 676 | # protect the disk from being modified by older partitioning tools | ||
393 | 677 | return ('gpt' if check_efi_signature(device) else | ||
394 | 678 | 'dos' if check_dos_signature(device) else None) | ||
395 | 679 | |||
396 | 680 | |||
397 | 681 | def check_dos_signature(device): | ||
398 | 682 | """ | ||
399 | 683 | check if there is a dos partition table signature present on device | ||
400 | 684 | """ | ||
401 | 685 | # the last 2 bytes of a dos partition table have the signature with the | ||
402 | 686 | # value 0xAA55. the dos partition table is always 0x200 bytes long, even if | ||
403 | 687 | # the underlying disk uses a larger logical block size, so the start of | ||
404 | 688 | # this signature must be at 0x1fe | ||
405 | 689 | # https://en.wikipedia.org/wiki/Master_boot_record#Sector_layout | ||
406 | 690 | return (is_block_device(device) and util.file_size(device) >= 0x200 and | ||
407 | 691 | (util.load_file(device, mode='rb', read_len=2, offset=0x1fe) == | ||
408 | 692 | b'\x55\xAA')) | ||
409 | 693 | |||
410 | 694 | |||
411 | 695 | def check_efi_signature(device): | ||
412 | 696 | """ | ||
413 | 697 | check if there is a gpt partition table signature present on device | ||
414 | 698 | """ | ||
415 | 699 | # the gpt partition table header is always on lba 1, regardless of the | ||
416 | 700 | # logical block size used by the underlying disk. therefore, a static | ||
417 | 701 | # offset cannot be used, the offset to the start of the table header is | ||
418 | 702 | # always the sector size of the disk | ||
419 | 703 | # the start of the gpt partition table header shoult have the signaure | ||
420 | 704 | # 'EFI PART'. | ||
421 | 705 | # https://en.wikipedia.org/wiki/GUID_Partition_Table | ||
422 | 706 | sector_size = get_blockdev_sector_size(device)[0] | ||
423 | 707 | return (is_block_device(device) and | ||
424 | 708 | util.file_size(device) >= 2 * sector_size and | ||
425 | 709 | (util.load_file(device, mode='rb', read_len=8, | ||
426 | 710 | offset=sector_size) == b'EFI PART')) | ||
427 | 711 | |||
428 | 712 | |||
429 | 713 | def is_extended_partition(device): | ||
430 | 714 | """ | ||
431 | 715 | check if the specified device path is a dos extended partition | ||
432 | 716 | """ | ||
433 | 717 | # an extended partition must be on a dos disk, must be a partition, must be | ||
434 | 718 | # within the first 4 partitions and will have a valid dos signature, | ||
435 | 719 | # because the format of the extended partition matches that of a real mbr | ||
436 | 720 | (parent_dev, part_number) = get_blockdev_for_partition(device) | ||
437 | 721 | return (get_part_table_type(parent_dev) in ['dos', 'msdos'] and | ||
438 | 722 | part_number is not None and int(part_number) <= 4 and | ||
439 | 723 | check_dos_signature(device)) | ||
440 | 724 | |||
441 | 725 | |||
442 | 541 | def wipe_file(path, reader=None, buflen=4 * 1024 * 1024): | 726 | def wipe_file(path, reader=None, buflen=4 * 1024 * 1024): |
447 | 542 | # wipe the existing file at path. | 727 | """ |
448 | 543 | # if reader is provided, it will be called as a 'reader(buflen)' | 728 | wipe the existing file at path. |
449 | 544 | # to provide data for each write. Otherwise, zeros are used. | 729 | if reader is provided, it will be called as a 'reader(buflen)' |
450 | 545 | # writes will be done in size of buflen. | 730 | to provide data for each write. Otherwise, zeros are used. |
451 | 731 | writes will be done in size of buflen. | ||
452 | 732 | """ | ||
453 | 546 | if reader: | 733 | if reader: |
454 | 547 | readfunc = reader | 734 | readfunc = reader |
455 | 548 | else: | 735 | else: |
456 | @@ -551,13 +738,11 @@ | |||
457 | 551 | def readfunc(size): | 738 | def readfunc(size): |
458 | 552 | return buf | 739 | return buf |
459 | 553 | 740 | ||
460 | 741 | size = util.file_size(path) | ||
461 | 742 | LOG.debug("%s is %s bytes. wiping with buflen=%s", | ||
462 | 743 | path, size, buflen) | ||
463 | 744 | |||
464 | 554 | with open(path, "rb+") as fp: | 745 | with open(path, "rb+") as fp: |
465 | 555 | # get the size by seeking to end. | ||
466 | 556 | fp.seek(0, 2) | ||
467 | 557 | size = fp.tell() | ||
468 | 558 | LOG.debug("%s is %s bytes. wiping with buflen=%s", | ||
469 | 559 | path, size, buflen) | ||
470 | 560 | fp.seek(0) | ||
471 | 561 | while True: | 746 | while True: |
472 | 562 | pbuf = readfunc(buflen) | 747 | pbuf = readfunc(buflen) |
473 | 563 | pos = fp.tell() | 748 | pos = fp.tell() |
474 | @@ -574,16 +759,18 @@ | |||
475 | 574 | 759 | ||
476 | 575 | 760 | ||
477 | 576 | def quick_zero(path, partitions=True): | 761 | def quick_zero(path, partitions=True): |
481 | 577 | # zero 1M at front, 1M at end, and 1M at front | 762 | """ |
482 | 578 | # if this is a block device and partitions is true, then | 763 | zero 1M at front, 1M at end, and 1M at front |
483 | 579 | # zero 1M at front and end of each partition. | 764 | if this is a block device and partitions is true, then |
484 | 765 | zero 1M at front and end of each partition. | ||
485 | 766 | """ | ||
486 | 580 | buflen = 1024 | 767 | buflen = 1024 |
487 | 581 | count = 1024 | 768 | count = 1024 |
488 | 582 | zero_size = buflen * count | 769 | zero_size = buflen * count |
489 | 583 | offsets = [0, -zero_size] | 770 | offsets = [0, -zero_size] |
490 | 584 | is_block = is_block_device(path) | 771 | is_block = is_block_device(path) |
491 | 585 | if not (is_block or os.path.isfile(path)): | 772 | if not (is_block or os.path.isfile(path)): |
493 | 586 | raise ValueError("%s: not an existing file or block device") | 773 | raise ValueError("%s: not an existing file or block device", path) |
494 | 587 | 774 | ||
495 | 588 | if partitions and is_block: | 775 | if partitions and is_block: |
496 | 589 | ptdata = sysfs_partition_data(path) | 776 | ptdata = sysfs_partition_data(path) |
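The `offsets = [0, -zero_size]` list above drives the zeroing: one write at the front and one at the end, with negative offsets measured from EOF. A minimal standalone sketch of that scheme (demo path and `zero_at` helper are illustrative, not curtin code):

```python
import os
import tempfile

buflen, count = 1024, 1024
zero_size = buflen * count          # 1 MiB, as in quick_zero
offsets = [0, -zero_size]           # negative offset means "from the end"

def zero_at(fp, offset, length):
    # seek from start for offsets >= 0, from EOF for negative offsets
    fp.seek(offset, 2 if offset < 0 else 0)
    fp.write(b'\x00' * length)

path = os.path.join(tempfile.gettempdir(), 'quick_zero_demo.img')
with open(path, 'wb') as fp:
    fp.write(b'\xff' * (4 * zero_size))   # 4 MiB of non-zero data
with open(path, 'rb+') as fp:
    for off in offsets:
        zero_at(fp, off, zero_size)
```

After this runs, the first and last MiB are zeroed while the middle of the file is untouched.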
497 | @@ -596,6 +783,9 @@ | |||
498 | 596 | 783 | ||
499 | 597 | 784 | ||
500 | 598 | def zero_file_at_offsets(path, offsets, buflen=1024, count=1024, strict=False): | 785 | def zero_file_at_offsets(path, offsets, buflen=1024, count=1024, strict=False): |
501 | 786 | """ | ||
502 | 787 | write zeros to file at specified offsets | ||
503 | 788 | """ | ||
504 | 599 | bmsg = "{path} (size={size}): " | 789 | bmsg = "{path} (size={size}): " |
505 | 600 | m_short = bmsg + "{tot} bytes from {offset} > size." | 790 | m_short = bmsg + "{tot} bytes from {offset} > size." |
506 | 601 | m_badoff = bmsg + "invalid offset {offset}." | 791 | m_badoff = bmsg + "invalid offset {offset}." |
507 | @@ -657,15 +847,13 @@ | |||
508 | 657 | if mode == "pvremove": | 847 | if mode == "pvremove": |
509 | 658 | # We need to use --force --force in case it's already in a volgroup and | 848 | # We need to use --force --force in case it's already in a volgroup and |
510 | 659 | # pvremove doesn't want to remove it | 849 | # pvremove doesn't want to remove it |
515 | 660 | cmds = [] | 850 | |
512 | 661 | cmds.append(["pvremove", "--force", "--force", "--yes", path]) | ||
513 | 662 | cmds.append(["pvscan", "--cache"]) | ||
514 | 663 | cmds.append(["vgscan", "--mknodes", "--cache"]) | ||
516 | 664 | # If pvremove is run and there is no label on the system, | 851 | # If pvremove is run and there is no label on the system, |
517 | 665 | # then it exits with 5. That is also okay, because we might be | 852 | # then it exits with 5. That is also okay, because we might be |
518 | 666 | # wiping something that is already blank | 853 | # wiping something that is already blank |
521 | 667 | for cmd in cmds: | 854 | util.subp(['pvremove', '--force', '--force', '--yes', path], |
522 | 668 | util.subp(cmd, rcs=[0, 5], capture=True) | 855 | rcs=[0, 5], capture=True) |
523 | 856 | lvm.lvm_scan() | ||
524 | 669 | elif mode == "zero": | 857 | elif mode == "zero": |
525 | 670 | wipe_file(path) | 858 | wipe_file(path) |
526 | 671 | elif mode == "random": | 859 | elif mode == "random": |
527 | 672 | 860 | ||
528 | === added file 'curtin/block/clear_holders.py' | |||
529 | --- curtin/block/clear_holders.py 1970-01-01 00:00:00 +0000 | |||
530 | +++ curtin/block/clear_holders.py 2016-10-03 18:55:20 +0000 | |||
531 | @@ -0,0 +1,387 @@ | |||
532 | 1 | # Copyright (C) 2016 Canonical Ltd. | ||
533 | 2 | # | ||
534 | 3 | # Author: Wesley Wiedenmeier <wesley.wiedenmeier@canonical.com> | ||
535 | 4 | # | ||
536 | 5 | # Curtin is free software: you can redistribute it and/or modify it under | ||
537 | 6 | # the terms of the GNU Affero General Public License as published by the | ||
538 | 7 | # Free Software Foundation, either version 3 of the License, or (at your | ||
539 | 8 | # option) any later version. | ||
540 | 9 | # | ||
541 | 10 | # Curtin is distributed in the hope that it will be useful, but WITHOUT ANY | ||
542 | 11 | # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS | ||
543 | 12 | # FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for | ||
544 | 13 | # more details. | ||
545 | 14 | # | ||
546 | 15 | # You should have received a copy of the GNU Affero General Public License | ||
547 | 16 | # along with Curtin. If not, see <http://www.gnu.org/licenses/>. | ||
548 | 17 | |||
549 | 18 | """ | ||
550 | 19 | This module provides a mechanism for shutting down virtual storage layers on | ||
551 | 20 | top of a block device, making it possible to reuse the block device without | ||
552 | 21 | having to reboot the system | ||
553 | 22 | """ | ||
554 | 23 | |||
555 | 24 | import os | ||
556 | 25 | |||
557 | 26 | from curtin import (block, udev, util) | ||
558 | 27 | from curtin.block import lvm | ||
559 | 28 | from curtin.log import LOG | ||
560 | 29 | |||
561 | 30 | |||
562 | 31 | def _define_handlers_registry(): | ||
563 | 32 | """ | ||
564 | 33 | returns instantiated dev_types | ||
565 | 34 | """ | ||
566 | 35 | return { | ||
567 | 36 | 'partition': {'shutdown': wipe_superblock, | ||
568 | 37 | 'ident': identify_partition}, | ||
569 | 38 | 'lvm': {'shutdown': shutdown_lvm, 'ident': identify_lvm}, | ||
570 | 39 | 'crypt': {'shutdown': shutdown_crypt, 'ident': identify_crypt}, | ||
571 | 40 | 'raid': {'shutdown': shutdown_mdadm, 'ident': identify_mdadm}, | ||
572 | 41 | 'bcache': {'shutdown': shutdown_bcache, 'ident': identify_bcache}, | ||
573 | 42 | 'disk': {'ident': lambda x: False, 'shutdown': wipe_superblock}, | ||
574 | 43 | } | ||
575 | 44 | |||
576 | 45 | |||
577 | 46 | def get_dmsetup_uuid(device): | ||
578 | 47 | """ | ||
579 | 48 | get the dm uuid for a specified dmsetup device | ||
580 | 49 | """ | ||
581 | 50 | blockdev = block.sysfs_to_devpath(device) | ||
582 | 51 | (out, _) = util.subp(['dmsetup', 'info', blockdev, '-C', '-o', 'uuid', | ||
583 | 52 | '--noheadings'], capture=True) | ||
584 | 53 | return out.strip() | ||
585 | 54 | |||
586 | 55 | |||
587 | 56 | def get_bcache_using_dev(device): | ||
588 | 57 | """ | ||
589 | 58 | Get the /sys/fs/bcache/ path of the bcache volume using specified device | ||
590 | 59 | """ | ||
591 | 60 | # FIXME: when block.bcache is written this should be moved there | ||
592 | 61 | sysfs_path = block.sys_block_path(device) | ||
593 | 62 | return os.path.realpath(os.path.join(sysfs_path, 'bcache', 'cache')) | ||
594 | 63 | |||
595 | 64 | |||
596 | 65 | def shutdown_bcache(device): | ||
597 | 66 | """ | ||
598 | 67 | Shut down bcache for specified bcache device | ||
599 | 68 | """ | ||
600 | 69 | bcache_shutdown_message = ('shutdown_bcache running on {} has determined ' | ||
601 | 70 | 'that the device has already been shut down ' | ||
602 | 71 | 'during handling of another bcache dev. ' | ||
603 | 72 | 'skipping'.format(device)) | ||
604 | 73 | if not os.path.exists(device): | ||
605 | 74 | LOG.info(bcache_shutdown_message) | ||
606 | 75 | return | ||
607 | 76 | |||
608 | 77 | bcache_sysfs = get_bcache_using_dev(device) | ||
609 | 78 | if not os.path.exists(bcache_sysfs): | ||
610 | 79 | LOG.info(bcache_shutdown_message) | ||
611 | 80 | return | ||
612 | 81 | |||
613 | 82 | LOG.debug('stopping bcache at: %s', bcache_sysfs) | ||
614 | 83 | util.write_file(os.path.join(bcache_sysfs, 'stop'), '1', mode=None) | ||
615 | 84 | |||
616 | 85 | |||
617 | 86 | def shutdown_lvm(device): | ||
618 | 87 | """ | ||
619 | 88 | Shutdown specified lvm device. | ||
620 | 89 | """ | ||
621 | 90 | device = block.sys_block_path(device) | ||
622 | 91 | # lvm devices have a dm directory that contains a file 'name' containing | ||
623 | 92 | # '{volume group}-{logical volume}'. The volume can be freed using lvremove | ||
624 | 93 | name_file = os.path.join(device, 'dm', 'name') | ||
625 | 94 | (vg_name, lv_name) = lvm.split_lvm_name(util.load_file(name_file)) | ||
626 | 95 | # use two --force flags here in case the volume group that this lv is | ||
627 | 96 | # attached to has been damaged | ||
628 | 97 | LOG.debug('running lvremove on %s/%s', vg_name, lv_name) | ||
629 | 98 | util.subp(['lvremove', '--force', '--force', | ||
630 | 99 | '{}/{}'.format(vg_name, lv_name)], rcs=[0, 5]) | ||
631 | 100 | # if that was the last lvol in the volgroup, get rid of volgroup | ||
632 | 101 | if len(lvm.get_lvols_in_volgroup(vg_name)) == 0: | ||
633 | 102 | util.subp(['vgremove', '--force', '--force', vg_name], rcs=[0, 5]) | ||
634 | 103 | # refresh lvmetad | ||
635 | 104 | lvm.lvm_scan() | ||
636 | 105 | |||
637 | 106 | |||
638 | 107 | def shutdown_crypt(device): | ||
639 | 108 | """ | ||
640 | 109 | Shutdown specified cryptsetup device | ||
641 | 110 | """ | ||
642 | 111 | blockdev = block.sysfs_to_devpath(device) | ||
643 | 112 | util.subp(['cryptsetup', 'remove', blockdev], capture=True) | ||
644 | 113 | |||
645 | 114 | |||
646 | 115 | def shutdown_mdadm(device): | ||
647 | 116 | """ | ||
648 | 117 | Shutdown specified mdadm device. | ||
649 | 118 | """ | ||
650 | 119 | blockdev = block.sysfs_to_devpath(device) | ||
651 | 120 | LOG.debug('using mdadm.mdadm_stop on dev: %s', blockdev) | ||
652 | 121 | block.mdadm.mdadm_stop(blockdev) | ||
653 | 122 | block.mdadm.mdadm_remove(blockdev) | ||
654 | 123 | |||
655 | 124 | |||
656 | 125 | def wipe_superblock(device): | ||
657 | 126 | """ | ||
658 | 127 | Wrapper for block.wipe_volume compatible with shutdown function interface | ||
659 | 128 | """ | ||
660 | 129 | blockdev = block.sysfs_to_devpath(device) | ||
661 | 130 | # when operating on a disk that used to have a dos part table with an | ||
662 | 131 | # extended partition, attempting to wipe the extended partition will fail | ||
663 | 132 | if block.is_extended_partition(blockdev): | ||
664 | 133 | LOG.info("extended partitions do not need wiping, so skipping: '%s'", | ||
665 | 134 | blockdev) | ||
666 | 135 | else: | ||
667 | 136 | LOG.info('wiping superblock on %s', blockdev) | ||
668 | 137 | block.wipe_volume(blockdev, mode='superblock') | ||
669 | 138 | |||
670 | 139 | |||
671 | 140 | def identify_lvm(device): | ||
672 | 141 | """ | ||
673 | 142 | determine if specified device is a lvm device | ||
674 | 143 | """ | ||
675 | 144 | return (block.path_to_kname(device).startswith('dm') and | ||
676 | 145 | get_dmsetup_uuid(device).startswith('LVM')) | ||
677 | 146 | |||
678 | 147 | |||
679 | 148 | def identify_crypt(device): | ||
680 | 149 | """ | ||
681 | 150 | determine if specified device is dm-crypt device | ||
682 | 151 | """ | ||
683 | 152 | return (block.path_to_kname(device).startswith('dm') and | ||
684 | 153 | get_dmsetup_uuid(device).startswith('CRYPT')) | ||
685 | 154 | |||
686 | 155 | |||
687 | 156 | def identify_mdadm(device): | ||
688 | 157 | """ | ||
689 | 158 | determine if specified device is a mdadm device | ||
690 | 159 | """ | ||
691 | 160 | return block.path_to_kname(device).startswith('md') | ||
692 | 161 | |||
693 | 162 | |||
694 | 163 | def identify_bcache(device): | ||
695 | 164 | """ | ||
696 | 165 | determine if specified device is a bcache device | ||
697 | 166 | """ | ||
698 | 167 | return block.path_to_kname(device).startswith('bcache') | ||
699 | 168 | |||
700 | 169 | |||
701 | 170 | def identify_partition(device): | ||
702 | 171 | """ | ||
703 | 172 | determine if specified device is a partition | ||
704 | 173 | """ | ||
705 | 174 | path = os.path.join(block.sys_block_path(device), 'partition') | ||
706 | 175 | return os.path.exists(path) | ||
707 | 176 | |||
708 | 177 | |||
709 | 178 | def get_holders(device): | ||
710 | 179 | """ | ||
711 | 180 | Look up any block device holders, return list of knames | ||
712 | 181 | """ | ||
713 | 182 | # block.sys_block_path works when given a /sys or /dev path | ||
714 | 183 | sysfs_path = block.sys_block_path(device) | ||
715 | 184 | # get holders | ||
716 | 185 | holders = os.listdir(os.path.join(sysfs_path, 'holders')) | ||
717 | 186 | LOG.debug("devname '%s' had holders: %s", device, holders) | ||
718 | 187 | return holders | ||
719 | 188 | |||
720 | 189 | |||
721 | 190 | def gen_holders_tree(device): | ||
722 | 191 | """ | ||
723 | 192 | generate a tree representing the current storage hierarchy above 'device' | ||
724 | 193 | """ | ||
725 | 194 | device = block.sys_block_path(device) | ||
726 | 195 | dev_name = block.path_to_kname(device) | ||
727 | 196 | # the holders for a device should consist of the devices in the holders/ | ||
728 | 197 | # dir in sysfs and any partitions on the device. this ensures that a | ||
729 | 198 | # storage tree starting from a disk will include all devices holding the | ||
730 | 199 | # disk's partitions | ||
731 | 200 | holder_paths = ([block.sys_block_path(h) for h in get_holders(device)] + | ||
732 | 201 | block.get_sysfs_partitions(device)) | ||
733 | 202 | # the DEV_TYPE registry contains a function under the key 'ident' for each | ||
734 | 203 | # device type entry that returns true if the device passed to it is of the | ||
735 | 204 | # correct type. there should never be a situation in which multiple | ||
736 | 205 | # identify functions return true. therefore, it will always work to take | ||
737 | 206 | # the device type with the first identify function that returns true as the | ||
738 | 207 | # device type for the current device. in the event that no identify | ||
739 | 208 | # functions return true, the device will be treated as a disk | ||
740 | 209 | # (DEFAULT_DEV_TYPE). the identify function for disk never returns true. | ||
741 | 210 | # the next() builtin in python will not raise a StopIteration exception if | ||
742 | 211 | # there is a default value defined | ||
743 | 212 | dev_type = next((k for k, v in DEV_TYPES.items() if v['ident'](device)), | ||
744 | 213 | DEFAULT_DEV_TYPE) | ||
745 | 214 | return { | ||
746 | 215 | 'device': device, 'dev_type': dev_type, 'name': dev_name, | ||
747 | 216 | 'holders': [gen_holders_tree(h) for h in holder_paths], | ||
748 | 217 | } | ||
749 | 218 | |||
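A hand-written example of the nested dict shape `gen_holders_tree` returns, for a disk with one partition holding an LVM volume (the sysfs paths and device names here are assumptions for illustration):

```python
# disk -> partition -> lvm, expressed in the tree shape described above
tree = {
    'device': '/sys/class/block/sda', 'dev_type': 'disk', 'name': 'sda',
    'holders': [
        {'device': '/sys/class/block/sda1', 'dev_type': 'partition',
         'name': 'sda1',
         'holders': [
             {'device': '/sys/class/block/dm-0', 'dev_type': 'lvm',
              'name': 'dm-0', 'holders': []},
         ]},
    ],
}

def all_dev_types(t):
    # walk the tree top-down collecting every dev_type
    found = [t['dev_type']]
    for holder in t['holders']:
        found.extend(all_dev_types(holder))
    return found
```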
750 | 219 | |||
751 | 220 | def plan_shutdown_holder_trees(holders_trees): | ||
752 | 221 | """ | ||
753 | 222 | plan best order to shut down holders in, taking into account high level | ||
754 | 223 | storage layers that may have many devices below them | ||
755 | 224 | |||
756 | 225 | returns a sorted list of descriptions of storage config entries including | ||
757 | 226 | their path in /sys/block and their dev type | ||
758 | 227 | |||
759 | 228 | can accept either a single storage tree or a list of storage trees assumed | ||
760 | 229 | to start at an equal place in the storage hierarchy (i.e. a list of trees | ||
761 | 230 | starting from disk) | ||
762 | 231 | """ | ||
763 | 232 | # holds a temporary registry of holders to allow cross references | ||
764 | 233 | # key = device sysfs path, value = {} of priority level, shutdown function | ||
765 | 234 | reg = {} | ||
766 | 235 | |||
767 | 236 | # normalize to list of trees | ||
768 | 237 | if not isinstance(holders_trees, (list, tuple)): | ||
769 | 238 | holders_trees = [holders_trees] | ||
770 | 239 | |||
771 | 240 | def flatten_holders_tree(tree, level=0): | ||
772 | 241 | """ | ||
773 | 242 | add entries from holders tree to registry with level key corresponding | ||
774 | 243 | to how many layers from raw disks the current device is at | ||
775 | 244 | """ | ||
776 | 245 | device = tree['device'] | ||
777 | 246 | |||
778 | 247 | # always go with highest level if current device has been | ||
779 | 248 | # encountered already. since the device and everything above it is | ||
780 | 249 | # re-added to the registry it ensures that any increase of level | ||
781 | 250 | # required here will propagate down the tree | ||
782 | 251 | # this handles a scenario like mdadm + bcache, where the backing | ||
783 | 252 | # device for bcache is a 3rd level item like mdadm, but the cache | ||
784 | 253 | # device is 1st level (disk) or second level (partition), ensuring | ||
785 | 254 | # that the bcache item is always considered higher level than | ||
786 | 255 | # anything else regardless of whether it was added to the tree via | ||
787 | 256 | # the cache device or backing device first | ||
788 | 257 | if device in reg: | ||
789 | 258 | level = max(reg[device]['level'], level) | ||
790 | 259 | |||
791 | 260 | reg[device] = {'level': level, 'device': device, | ||
792 | 261 | 'dev_type': tree['dev_type']} | ||
793 | 262 | |||
794 | 263 | # handle holders above this level | ||
795 | 264 | for holder in tree['holders']: | ||
796 | 265 | flatten_holders_tree(holder, level=level + 1) | ||
797 | 266 | |||
798 | 267 | # flatten the holders tree into the registry | ||
799 | 268 | for holders_tree in holders_trees: | ||
800 | 269 | flatten_holders_tree(holders_tree) | ||
801 | 270 | |||
802 | 271 | # return list of entry dicts with highest level first | ||
803 | 272 | return [reg[k] for k in sorted(reg, key=lambda x: reg[x]['level'] * -1)] | ||
804 | 273 | |||
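The flatten-then-sort ordering above can be exercised offline. This is a minimal re-implementation of the level logic for illustration (simplified tree shape, made-up device names; not the curtin function itself):

```python
def plan_shutdown(trees):
    reg = {}

    def flatten(tree, level=0):
        dev = tree['device']
        if dev in reg:
            # keep the deepest level seen, as the comment above describes
            level = max(reg[dev]['level'], level)
        reg[dev] = {'level': level, 'device': dev,
                    'dev_type': tree['dev_type']}
        for holder in tree['holders']:
            flatten(holder, level + 1)

    for t in (trees if isinstance(trees, (list, tuple)) else [trees]):
        flatten(t)
    # highest level (furthest from the raw disk) shuts down first
    return [reg[k] for k in sorted(reg, key=lambda d: -reg[d]['level'])]

disk = {'device': 'sda', 'dev_type': 'disk', 'holders': [
    {'device': 'sda1', 'dev_type': 'partition', 'holders': [
        {'device': 'dm-0', 'dev_type': 'lvm', 'holders': []}]}]}
order = [d['device'] for d in plan_shutdown(disk)]
```

The LVM device comes out first and the disk last, which is the order the shutdown loop needs.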
805 | 274 | |||
806 | 275 | def format_holders_tree(holders_tree): | ||
807 | 276 | """ | ||
808 | 277 | draw a nice diagram of the holders tree | ||
809 | 278 | """ | ||
810 | 279 | # spacer styles based on output of 'tree --charset=ascii' | ||
811 | 280 | spacers = (('`-- ', ' ' * 4), ('|-- ', '|' + ' ' * 3)) | ||
812 | 281 | |||
813 | 282 | def format_tree(tree): | ||
814 | 283 | """ | ||
815 | 284 | format entry and any subentries | ||
816 | 285 | """ | ||
817 | 286 | result = [tree['name']] | ||
818 | 287 | holders = tree['holders'] | ||
819 | 288 | for (holder_no, holder) in enumerate(holders): | ||
820 | 289 | spacer_style = spacers[min(len(holders) - (holder_no + 1), 1)] | ||
821 | 290 | subtree_lines = format_tree(holder) | ||
822 | 291 | for (line_no, line) in enumerate(subtree_lines): | ||
823 | 292 | result.append(spacer_style[min(line_no, 1)] + line) | ||
824 | 293 | return result | ||
825 | 294 | |||
826 | 295 | return '\n'.join(format_tree(holders_tree)) | ||
827 | 296 | |||
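The spacer scheme above mimics `tree --charset=ascii`. A standalone copy of the same recursion on a made-up two-branch tree shows the kind of diagram it draws:

```python
# spacer styles: index 0 for the last holder, index 1 for holders with siblings
spacers = (('`-- ', ' ' * 4), ('|-- ', '|' + ' ' * 3))

def format_tree(tree):
    result = [tree['name']]
    holders = tree['holders']
    for holder_no, holder in enumerate(holders):
        style = spacers[min(len(holders) - (holder_no + 1), 1)]
        for line_no, line in enumerate(format_tree(holder)):
            # first subtree line gets the branch marker, the rest get padding
            result.append(style[min(line_no, 1)] + line)
    return result

tree = {'name': 'sda', 'holders': [
    {'name': 'sda1', 'holders': [{'name': 'dm-0', 'holders': []}]},
    {'name': 'sda2', 'holders': []}]}
lines = format_tree(tree)
```

`'\n'.join(lines)` renders sda with `|-- sda1`, an indented `` `-- dm-0 `` beneath it, and `` `-- sda2 `` as the last branch.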
828 | 297 | |||
829 | 298 | def get_holder_types(tree): | ||
830 | 299 | """ | ||
831 | 300 | get flattened list of types of holders in holders tree and the devices | ||
832 | 301 | they correspond to | ||
833 | 302 | """ | ||
834 | 303 | types = {(tree['dev_type'], tree['device'])} | ||
835 | 304 | for holder in tree['holders']: | ||
836 | 305 | types.update(get_holder_types(holder)) | ||
837 | 306 | return types | ||
838 | 307 | |||
839 | 308 | |||
840 | 309 | def assert_clear(base_paths): | ||
841 | 310 | """ | ||
842 | 311 | Check if all paths in base_paths are clear to use | ||
843 | 312 | """ | ||
844 | 313 | valid = ('disk', 'partition') | ||
845 | 314 | if not isinstance(base_paths, (list, tuple)): | ||
846 | 315 | base_paths = [base_paths] | ||
847 | 316 | base_paths = [block.sys_block_path(path) for path in base_paths] | ||
848 | 317 | for holders_tree in [gen_holders_tree(p) for p in base_paths]: | ||
849 | 318 | if any(holder_type not in valid and path not in base_paths | ||
850 | 319 | for (holder_type, path) in get_holder_types(holders_tree)): | ||
851 | 320 | raise OSError('Storage not clear, remaining:\n{}' | ||
852 | 321 | .format(format_holders_tree(holders_tree))) | ||
853 | 322 | |||
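The check in `assert_clear` boils down to filtering the `(dev_type, path)` set from `get_holder_types`: anything that is neither a plain disk/partition nor one of the requested base paths blocks reuse. A small offline sketch of that filter (the sysfs paths are invented):

```python
valid = ('disk', 'partition')
base_paths = ['/sys/class/block/sda']
holder_types = {('disk', '/sys/class/block/sda'),
                ('partition', '/sys/class/block/sda1'),
                ('lvm', '/sys/class/block/dm-0')}
# anything not in 'valid' and not itself a base path means storage isn't clear
blocked = sorted(t for t in holder_types
                 if t[0] not in valid and t[1] not in base_paths)
```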
854 | 323 | |||
855 | 324 | def clear_holders(base_paths, try_preserve=False): | ||
856 | 325 | """ | ||
857 | 326 | Clear all storage layers depending on the devices specified in 'base_paths' | ||
858 | 327 | A single device or list of devices can be specified. | ||
859 | 328 | Device paths can be specified either as paths in /dev or /sys/block | ||
860 | 329 | Will throw OSError if any holders could not be shut down | ||
861 | 330 | """ | ||
862 | 331 | # handle single path | ||
863 | 332 | if not isinstance(base_paths, (list, tuple)): | ||
864 | 333 | base_paths = [base_paths] | ||
865 | 334 | |||
866 | 335 | # get current holders and plan how to shut them down | ||
867 | 336 | holder_trees = [gen_holders_tree(path) for path in base_paths] | ||
868 | 337 | LOG.info('Current device storage tree:\n%s', | ||
869 | 338 | '\n'.join(format_holders_tree(tree) for tree in holder_trees)) | ||
870 | 339 | ordered_devs = plan_shutdown_holder_trees(holder_trees) | ||
871 | 340 | |||
872 | 341 | # run shutdown functions | ||
873 | 342 | for dev_info in ordered_devs: | ||
874 | 343 | dev_type = DEV_TYPES.get(dev_info['dev_type']) | ||
875 | 344 | shutdown_function = dev_type.get('shutdown') | ||
876 | 345 | if not shutdown_function: | ||
877 | 346 | continue | ||
878 | 347 | if try_preserve and shutdown_function in DATA_DESTROYING_HANDLERS: | ||
879 | 348 | LOG.info('shutdown function for holder type: %s is destructive. ' | ||
880 | 349 | 'attempting to preserve data, so skipping' % | ||
881 | 350 | dev_info['dev_type']) | ||
882 | 351 | continue | ||
883 | 352 | LOG.info("shutdown running on holder type: '%s' syspath: '%s'", | ||
884 | 353 | dev_info['dev_type'], dev_info['device']) | ||
885 | 354 | shutdown_function(dev_info['device']) | ||
886 | 355 | udev.udevadm_settle() | ||
887 | 356 | |||
888 | 357 | |||
889 | 358 | def start_clear_holders_deps(): | ||
890 | 359 | """ | ||
891 | 360 | prepare system for clear holders to be able to scan old devices | ||
892 | 361 | """ | ||
893 | 362 | # a mdadm scan has to be started in case there is a md device that needs to | ||
894 | 363 | # be detected. if the scan fails, it is either because there are no mdadm | ||
895 | 364 | # devices on the system, or because there is a mdadm device in a damaged | ||
896 | 365 | # state that could not be started. due to the nature of mdadm tools, it is | ||
897 | 366 | # difficult to know which is the case. if any errors did occur, then ignore | ||
898 | 367 | # them, since no action needs to be taken if there were no mdadm devices on | ||
899 | 368 | # the system, and in the case where there is some mdadm metadata on a disk, | ||
900 | 369 | # but there was not enough to start the array, the call to wipe_volume on | ||
901 | 370 | # all disks and partitions should be sufficient to remove the mdadm | ||
902 | 371 | # metadata | ||
903 | 372 | block.mdadm.mdadm_assemble(scan=True, ignore_errors=True) | ||
904 | 373 | # the bcache module needs to be present to properly detect bcache devs | ||
905 | 374 | # on some systems (precise without hwe kernel) it may not be possible to | ||
906 | 375 | # load the bcache module because it is not present in the kernel. if this | ||
907 | 376 | # happens then there is no need to halt installation, as the bcache devices | ||
908 | 377 | # will never appear and will never prevent the disk from being reformatted | ||
909 | 378 | util.subp(['modprobe', 'bcache'], rcs=[0, 1]) | ||
910 | 379 | |||
911 | 380 | |||
912 | 381 | # anything that is not identified can be assumed to be a 'disk' or similar | ||
913 | 382 | DEFAULT_DEV_TYPE = 'disk' | ||
914 | 383 | # handlers that should not be run if an attempt is being made to preserve data | ||
915 | 384 | DATA_DESTROYING_HANDLERS = [wipe_superblock] | ||
916 | 385 | # types of devices that could be encountered by clear holders and functions to | ||
917 | 386 | # identify them and shut them down | ||
918 | 387 | DEV_TYPES = _define_handlers_registry() | ||
919 | 0 | 388 | ||
920 | === added file 'curtin/block/lvm.py' | |||
921 | --- curtin/block/lvm.py 1970-01-01 00:00:00 +0000 | |||
922 | +++ curtin/block/lvm.py 2016-10-03 18:55:20 +0000 | |||
923 | @@ -0,0 +1,96 @@ | |||
924 | 1 | # Copyright (C) 2016 Canonical Ltd. | ||
925 | 2 | # | ||
926 | 3 | # Author: Wesley Wiedenmeier <wesley.wiedenmeier@canonical.com> | ||
927 | 4 | # | ||
928 | 5 | # Curtin is free software: you can redistribute it and/or modify it under | ||
929 | 6 | # the terms of the GNU Affero General Public License as published by the | ||
930 | 7 | # Free Software Foundation, either version 3 of the License, or (at your | ||
931 | 8 | # option) any later version. | ||
932 | 9 | # | ||
933 | 10 | # Curtin is distributed in the hope that it will be useful, but WITHOUT ANY | ||
934 | 11 | # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS | ||
935 | 12 | # FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for | ||
936 | 13 | # more details. | ||
937 | 14 | # | ||
938 | 15 | # You should have received a copy of the GNU Affero General Public License | ||
939 | 16 | # along with Curtin. If not, see <http://www.gnu.org/licenses/>. | ||
940 | 17 | |||
941 | 18 | """ | ||
942 | 19 | This module provides some helper functions for manipulating lvm devices | ||
943 | 20 | """ | ||
944 | 21 | |||
945 | 22 | from curtin import util | ||
946 | 23 | from curtin.log import LOG | ||
947 | 24 | import os | ||
948 | 25 | |||
949 | 26 | # separator to use for lvm/dm tools | ||
950 | 27 | _SEP = '=' | ||
951 | 28 | |||
952 | 29 | |||
953 | 30 | def _filter_lvm_info(lvtool, match_field, query_field, match_key): | ||
954 | 31 | """ | ||
955 | 32 | filter output of pv/vg/lvdisplay tools | ||
956 | 33 | """ | ||
957 | 34 | (out, _) = util.subp([lvtool, '-C', '--separator', _SEP, '--noheadings', | ||
958 | 35 | '-o', ','.join([match_field, query_field])], | ||
959 | 36 | capture=True) | ||
960 | 37 | return [qf for (mf, qf) in | ||
961 | 38 | [l.strip().split(_SEP) for l in out.strip().splitlines()] | ||
962 | 39 | if mf == match_key] | ||
963 | 40 | |||
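The list comprehension in `_filter_lvm_info` can be tried against canned `--separator =` output; the sample text below is made up, shaped like what `pvdisplay -C --noheadings -o vg_name,pv_name` might emit:

```python
_SEP = '='
# fake pvdisplay output: one "match_field=query_field" pair per line
out = "vg0=/dev/sda1\nvg0=/dev/sdb1\nvg1=/dev/sdc1\n"

# keep the query field of every row whose match field equals the key
matches = [qf for (mf, qf) in
           [line.strip().split(_SEP) for line in out.strip().splitlines()]
           if mf == 'vg0']
```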
964 | 41 | |||
965 | 42 | def get_pvols_in_volgroup(vg_name): | ||
966 | 43 | """ | ||
967 | 44 | get physical volumes used by volgroup | ||
968 | 45 | """ | ||
969 | 46 | return _filter_lvm_info('pvdisplay', 'vg_name', 'pv_name', vg_name) | ||
970 | 47 | |||
971 | 48 | |||
972 | 49 | def get_lvols_in_volgroup(vg_name): | ||
973 | 50 | """ | ||
974 | 51 | get logical volumes in volgroup | ||
975 | 52 | """ | ||
976 | 53 | return _filter_lvm_info('lvdisplay', 'vg_name', 'lv_name', vg_name) | ||
977 | 54 | |||
978 | 55 | |||
979 | 56 | def split_lvm_name(full): | ||
980 | 57 | """ | ||
981 | 58 | split full lvm name into tuple of (volgroup, lv_name) | ||
982 | 59 | """ | ||
983 | 60 | # 'dmsetup splitname' is the authoritative source for lvm name parsing | ||
984 | 61 | (out, _) = util.subp(['dmsetup', 'splitname', full, '-c', '--noheadings', | ||
985 | 62 | '--separator', _SEP, '-o', 'vg_name,lv_name'], | ||
986 | 63 | capture=True) | ||
987 | 64 | return out.strip().split(_SEP) | ||
988 | 65 | |||
989 | 66 | |||
990 | 67 | def lvmetad_running(): | ||
991 | 68 | """ | ||
992 | 69 | check if lvmetad is running | ||
993 | 70 | """ | ||
994 | 71 | return os.path.exists(os.environ.get('LVM_LVMETAD_PIDFILE', | ||
995 | 72 | '/run/lvmetad.pid')) | ||
996 | 73 | |||
997 | 74 | |||
998 | 75 | def lvm_scan(): | ||
999 | 76 | """ | ||
1000 | 77 | run full scan for volgroups, logical volumes and physical volumes | ||
1001 | 78 | """ | ||
1002 | 79 | # the lvm tools lvscan, vgscan and pvscan on ubuntu precise do not | ||
1003 | 80 | # support the flag --cache. the flag is present for the tools in ubuntu | ||
1004 | 81 | # trusty and later. since lvmetad is used in current releases of | ||
1005 | 82 | # ubuntu, the --cache flag is needed to ensure that the data cached by | ||
1006 | 83 | # lvmetad is updated. | ||
1007 | 84 | |||
1008 | 85 | # before appending the cache flag though, check if lvmetad is running. this | ||
1009 | 86 | # ensures that we do the right thing even if lvmetad is supported but is | ||
1010 | 87 | # not running | ||
1011 | 88 | release = util.lsb_release().get('codename') | ||
1012 | 89 | if release in [None, 'UNAVAILABLE']: | ||
1013 | 90 | LOG.warning('unable to find release number, assuming xenial or later') | ||
1014 | 91 | release = 'xenial' | ||
1015 | 92 | |||
1016 | 93 | for cmd in [['pvscan'], ['vgscan', '--mknodes']]: | ||
1017 | 94 | if release != 'precise' and lvmetad_running(): | ||
1018 | 95 | cmd.append('--cache') | ||
1019 | 96 | util.subp(cmd, capture=True) | ||
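The release/lvmetad gating above is easy to isolate. A sketch of just the command-list construction, with the release string and lvmetad state passed in as plain parameters (this refactoring is for illustration, not the module's API):

```python
def scan_cmds(release, lvmetad_is_running):
    # base scan commands; precise's lvm tools lack the --cache flag
    cmds = [['pvscan'], ['vgscan', '--mknodes']]
    if release != 'precise' and lvmetad_is_running:
        # refresh lvmetad's cache too
        cmds = [cmd + ['--cache'] for cmd in cmds]
    return cmds
```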
1020 | 0 | 97 | ||
1021 | === modified file 'curtin/block/mdadm.py' | |||
1022 | --- curtin/block/mdadm.py 2016-05-10 16:13:29 +0000 | |||
1023 | +++ curtin/block/mdadm.py 2016-10-03 18:55:20 +0000 | |||
1024 | @@ -28,7 +28,7 @@ | |||
1025 | 28 | from subprocess import CalledProcessError | 28 | from subprocess import CalledProcessError |
1026 | 29 | 29 | ||
1027 | 30 | from curtin.block import (dev_short, dev_path, is_valid_device, sys_block_path) | 30 | from curtin.block import (dev_short, dev_path, is_valid_device, sys_block_path) |
1029 | 31 | from curtin import util | 31 | from curtin import (util, udev) |
1030 | 32 | from curtin.log import LOG | 32 | from curtin.log import LOG |
1031 | 33 | 33 | ||
1032 | 34 | NOSPARE_RAID_LEVELS = [ | 34 | NOSPARE_RAID_LEVELS = [ |
1033 | @@ -117,21 +117,34 @@ | |||
1034 | 117 | # | 117 | # |
1035 | 118 | 118 | ||
1036 | 119 | 119 | ||
1038 | 120 | def mdadm_assemble(md_devname=None, devices=[], spares=[], scan=False): | 120 | def mdadm_assemble(md_devname=None, devices=[], spares=[], scan=False, |
1039 | 121 | ignore_errors=False): | ||
1040 | 121 | # md_devname is a /dev/XXXX | 122 | # md_devname is a /dev/XXXX |
1041 | 122 | # devices is non-empty list of /dev/xxx | 123 | # devices is non-empty list of /dev/xxx |
1042 | 123 | # if spares is a non-empty list, append /dev/xxx entries | 124 | # if spares is a non-empty list, append /dev/xxx entries |
1043 | 124 | cmd = ["mdadm", "--assemble"] | 125 | cmd = ["mdadm", "--assemble"] |
1044 | 125 | if scan: | 126 | if scan: |
1046 | 126 | cmd += ['--scan'] | 127 | cmd += ['--scan', '-v'] |
1047 | 127 | else: | 128 | else: |
1048 | 128 | valid_mdname(md_devname) | 129 | valid_mdname(md_devname) |
1049 | 129 | cmd += [md_devname, "--run"] + devices | 130 | cmd += [md_devname, "--run"] + devices |
1050 | 130 | if spares: | 131 | if spares: |
1051 | 131 | cmd += spares | 132 | cmd += spares |
1052 | 132 | 133 | ||
1055 | 133 | util.subp(cmd, capture=True, rcs=[0, 1, 2]) | 134 | try: |
1056 | 134 | util.subp(["udevadm", "settle"]) | 135 | # mdadm assemble returns 1 when no arrays are found. this might not be |
1057 | 136 | # an error depending on the situation this function was called in, so | ||
1058 | 137 | # accept a return code of 1 | ||
1059 | 138 | # mdadm assemble returns 2 when called on an array that is already | ||
1060 | 139 | # assembled. this is not an error, so accept return code of 2 | ||
1061 | 140 | # all other return codes can be accepted with ignore_errors set to true | ||
1062 | 141 | util.subp(cmd, capture=True, rcs=[0, 1, 2]) | ||
1063 | 142 | except util.ProcessExecutionError: | ||
1064 | 143 | LOG.warning("mdadm_assemble had unexpected return code") | ||
1065 | 144 | if not ignore_errors: | ||
1066 | 145 | raise | ||
1067 | 146 | |||
1068 | 147 | udev.udevadm_settle() | ||
1069 | 135 | 148 | ||
1070 | 136 | 149 | ||
1071 | 137 | def mdadm_create(md_devname, raidlevel, devices, spares=None, md_name=""): | 150 | def mdadm_create(md_devname, raidlevel, devices, spares=None, md_name=""): |
1072 | 138 | 151 | ||
1073 | === modified file 'curtin/block/mkfs.py' | |||
1074 | --- curtin/block/mkfs.py 2016-05-10 16:13:29 +0000 | |||
1075 | +++ curtin/block/mkfs.py 2016-10-03 18:55:20 +0000 | |||
1076 | @@ -78,6 +78,7 @@ | |||
1077 | 78 | "swap": "--uuid"}, | 78 | "swap": "--uuid"}, |
1078 | 79 | "force": {"btrfs": "--force", | 79 | "force": {"btrfs": "--force", |
1079 | 80 | "ext": "-F", | 80 | "ext": "-F", |
1080 | 81 | "fat": "-I", | ||
1081 | 81 | "ntfs": "--force", | 82 | "ntfs": "--force", |
1082 | 82 | "reiserfs": "-f", | 83 | "reiserfs": "-f", |
1083 | 83 | "swap": "--force", | 84 | "swap": "--force", |
1084 | @@ -91,6 +92,7 @@ | |||
1085 | 91 | "btrfs": "--sectorsize", | 92 | "btrfs": "--sectorsize", |
1086 | 92 | "ext": "-b", | 93 | "ext": "-b", |
1087 | 93 | "fat": "-S", | 94 | "fat": "-S", |
1088 | 95 | "xfs": "-s", | ||
1089 | 94 | "ntfs": "--sector-size", | 96 | "ntfs": "--sector-size", |
1090 | 95 | "reiserfs": "--block-size"} | 97 | "reiserfs": "--block-size"} |
1091 | 96 | } | 98 | } |
1092 | @@ -165,12 +167,15 @@ | |||
1093 | 165 | # use device logical block size to ensure properly formatted filesystems | 167 | # use device logical block size to ensure properly formatted filesystems |
1094 | 166 | (logical_bsize, physical_bsize) = block.get_blockdev_sector_size(path) | 168 | (logical_bsize, physical_bsize) = block.get_blockdev_sector_size(path) |
1095 | 167 | if logical_bsize > 512: | 169 | if logical_bsize > 512: |
1096 | 170 | lbs_str = ('size={}'.format(logical_bsize) if fs_family == "xfs" | ||
1097 | 171 | else str(logical_bsize)) | ||
1098 | 168 | cmd.extend(get_flag_mapping("sectorsize", fs_family, | 172 | cmd.extend(get_flag_mapping("sectorsize", fs_family, |
1104 | 169 | param=str(logical_bsize), | 173 | param=lbs_str, strict=strict)) |
1105 | 170 | strict=strict)) | 174 | |
1106 | 171 | # mkfs.vfat doesn't calculate this right for non-512b sector size | 175 | if fs_family == 'fat': |
1107 | 172 | # lp:1569576 , d-i uses the same setting. | 176 | # mkfs.vfat doesn't calculate this right for non-512b sector size |
1108 | 173 | cmd.extend(["-s", "1"]) | 177 | # lp:1569576 , d-i uses the same setting. |
1109 | 178 | cmd.extend(["-s", "1"]) | ||
1110 | 174 | 179 | ||
1111 | 175 | if force: | 180 | if force: |
1112 | 176 | cmd.extend(get_flag_mapping("force", fs_family, strict=strict)) | 181 | cmd.extend(get_flag_mapping("force", fs_family, strict=strict)) |
1113 | 177 | 182 | ||
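For review convenience, the per-family sector-size handling this hunk introduces can be exercised standalone. The helper name `sectorsize_args` is hypothetical; the flag table is copied from the hunk above. It shows the two special cases: xfs takes `-s size=N` rather than a bare byte count, and fat additionally forces one sector per cluster (lp:1569576).

```python
# Standalone sketch of the sector-size flag selection in the hunk above.
# `sectorsize_args` is an illustrative name, not a curtin function.
SECTORSIZE_FLAGS = {"btrfs": "--sectorsize", "ext": "-b", "fat": "-S",
                    "xfs": "-s", "ntfs": "--sector-size",
                    "reiserfs": "--block-size"}


def sectorsize_args(fs_family, logical_bsize):
    """Return the extra mkfs arguments for a non-512b logical block size."""
    flag = SECTORSIZE_FLAGS.get(fs_family)
    if flag is None:
        return []
    # xfs expects "size=N"; every other family takes the bare byte count
    value = ('size={}'.format(logical_bsize) if fs_family == "xfs"
             else str(logical_bsize))
    args = [flag, value]
    if fs_family == "fat":
        # mkfs.vfat miscalculates sectors-per-cluster for 4k sectors
        # (lp:1569576); force 1 sector per cluster, as d-i does
        args += ["-s", "1"]
    return args
```

So a 4k-sector disk yields `-s size=4096` for xfs but `-S 4096 -s 1` for fat.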
=== modified file 'curtin/commands/apply_net.py'
--- curtin/commands/apply_net.py	2016-05-10 16:13:29 +0000
+++ curtin/commands/apply_net.py	2016-10-03 18:55:20 +0000
@@ -26,6 +26,57 @@
 
 LOG = log.LOG
 
+IFUPDOWN_IPV6_MTU_PRE_HOOK = """#!/bin/bash -e
+# injected by curtin installer
+
+[ "${IFACE}" != "lo" ] || exit 0
+
+# Trigger only if MTU is configured
+[ -n "${IF_MTU}" ] || exit 0
+
+read CUR_DEV_MTU </sys/class/net/${IFACE}/mtu ||:
+read CUR_IPV6_MTU </proc/sys/net/ipv6/conf/${IFACE}/mtu ||:
+[ -n "${CUR_DEV_MTU}" ] && echo ${CUR_DEV_MTU} > /run/network/${IFACE}_dev.mtu
+[ -n "${CUR_IPV6_MTU}" ] &&
+    echo ${CUR_IPV6_MTU} > /run/network/${IFACE}_ipv6.mtu
+exit 0
+"""
+
+IFUPDOWN_IPV6_MTU_POST_HOOK = """#!/bin/bash -e
+# injected by curtin installer
+
+[ "${IFACE}" != "lo" ] || exit 0
+
+# Trigger only if MTU is configured
+[ -n "${IF_MTU}" ] || exit 0
+
+read PRE_DEV_MTU </run/network/${IFACE}_dev.mtu ||:
+read CUR_DEV_MTU </sys/class/net/${IFACE}/mtu ||:
+read PRE_IPV6_MTU </run/network/${IFACE}_ipv6.mtu ||:
+read CUR_IPV6_MTU </proc/sys/net/ipv6/conf/${IFACE}/mtu ||:
+
+if [ "${ADDRFAM}" = "inet6" ]; then
+    # We need to check the underlying interface MTU and
+    # raise it if the IPv6 MTU is larger
+    if [ ${CUR_DEV_MTU} -lt ${IF_MTU} ]; then
+        ip link set ${IFACE} mtu ${IF_MTU}
+    fi
+    # sysctl -q -e -w net.ipv6.conf.${IFACE}.mtu=${IF_MTU}
+    echo ${IF_MTU} >/proc/sys/net/ipv6/conf/${IFACE}/mtu ||:
+
+elif [ "${ADDRFAM}" = "inet" ]; then
+    # Handle the clobber case where the inet MTU changes the v6 MTU.
+    # ifupdown will already have set the dev MTU, so lower the MTU
+    # if needed. If the v6 MTU was larger, it gets clamped down
+    # to the dev MTU value.
+    if [ ${PRE_IPV6_MTU} -lt ${CUR_IPV6_MTU} ]; then
+        # sysctl -q -e -w net.ipv6.conf.${IFACE}.mtu=${PRE_IPV6_MTU}
+        echo ${PRE_IPV6_MTU} >/proc/sys/net/ipv6/conf/${IFACE}/mtu ||:
+    fi
+fi
+exit 0
+"""
+
 
 def apply_net(target, network_state=None, network_config=None):
     if network_state is None and network_config is None:
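The decision the post-up hook makes per address family is easier to review as a table of inputs and outcomes. A rough Python rendering (the helper name `ipv6_mtu_result` is illustrative, not part of curtin) of the shell logic above:

```python
def ipv6_mtu_result(addrfam, if_mtu, cur_dev_mtu, pre_ipv6_mtu, cur_ipv6_mtu):
    """Return the (device_mtu, ipv6_mtu) pair the post-up hook leaves behind.

    Mirrors the shell hook above: an inet6 run raises the device MTU if the
    configured IPv6 MTU exceeds it; an inet run restores the saved IPv6 MTU
    that ifupdown's device-MTU write clobbered.
    """
    if addrfam == "inet6":
        # raise the device MTU if the configured IPv6 MTU is larger
        return max(cur_dev_mtu, if_mtu), if_mtu
    if addrfam == "inet":
        # the inet pass clobbered the v6 MTU; restore the smaller saved value
        if pre_ipv6_mtu < cur_ipv6_mtu:
            return cur_dev_mtu, pre_ipv6_mtu
    return cur_dev_mtu, cur_ipv6_mtu
```

E.g. an inet6 interface configured with mtu 9000 on a 1500-byte device ends up with both device and IPv6 MTU at 9000.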
@@ -45,6 +96,108 @@
 
     net.render_network_state(target=target, network_state=ns)
 
+    _maybe_remove_legacy_eth0(target)
+    LOG.info('Attempting to remove ipv6 privacy extensions')
+    _disable_ipv6_privacy_extensions(target)
+    _patch_ifupdown_ipv6_mtu_hook(target)
+
+
+def _patch_ifupdown_ipv6_mtu_hook(target,
+                                  prehookfn="etc/network/if-pre-up.d/mtuipv6",
+                                  posthookfn="etc/network/if-up.d/mtuipv6"):
+
+    contents = {
+        'prehook': IFUPDOWN_IPV6_MTU_PRE_HOOK,
+        'posthook': IFUPDOWN_IPV6_MTU_POST_HOOK,
+    }
+
+    hookfn = {
+        'prehook': prehookfn,
+        'posthook': posthookfn,
+    }
+
+    for hook in ['prehook', 'posthook']:
+        fn = hookfn[hook]
+        cfg = util.target_path(target, path=fn)
+        LOG.info('Injecting fix for ipv6 mtu settings: %s', cfg)
+        util.write_file(cfg, contents[hook], mode=0o755)
+
+
+def _disable_ipv6_privacy_extensions(target,
+                                     path="etc/sysctl.d/10-ipv6-privacy.conf"):
+
+    """Ubuntu server images set a preference to use IPv6 privacy extensions
+    by default; this races with the cloud-image desire to disable them.
+    Resolve this by allowing the cloud-image setting to win."""
+
+    cfg = util.target_path(target, path=path)
+    if not os.path.exists(cfg):
+        LOG.warn('Failed to find ipv6 privacy conf file %s', cfg)
+        return
+
+    bmsg = "Disabling IPv6 privacy extensions config may not apply."
+    try:
+        contents = util.load_file(cfg)
+        known_contents = ["net.ipv6.conf.all.use_tempaddr = 2",
+                          "net.ipv6.conf.default.use_tempaddr = 2"]
+        lines = [f.strip() for f in contents.splitlines()
+                 if not f.startswith("#")]
+        if lines == known_contents:
+            LOG.info('deleting file: %s', cfg)
+            util.del_file(cfg)
+            msg = "removed %s with known contents" % cfg
+            curtin_contents = '\n'.join(
+                ["# IPv6 Privacy Extensions (RFC 4941)",
+                 "# Disabled by curtin",
+                 "# net.ipv6.conf.all.use_tempaddr = 2",
+                 "# net.ipv6.conf.default.use_tempaddr = 2"])
+            util.write_file(cfg, curtin_contents)
+        else:
+            LOG.info("skipping, content didn't match")
+            LOG.debug("found content:\n%s", lines)
+            LOG.debug("expected contents:\n%s", known_contents)
+            msg = (bmsg + " '%s' exists with user configured content." % cfg)
+    except Exception:
+        msg = bmsg + " %s exists, but could not be read." % cfg
+        LOG.exception(msg)
+        return
+
+
+def _maybe_remove_legacy_eth0(target,
+                              path="etc/network/interfaces.d/eth0.cfg"):
+    """Ubuntu cloud images previously included an 'eth0.cfg' that had
+    hard-coded content. That file would interfere with the rendered
+    configuration if it was present.
+
+    If the file does not exist, do nothing.
+    If the file exists:
+      - with known content, remove it and warn
+      - with unknown content, leave it and warn
+    """
+
+    cfg = util.target_path(target, path=path)
+    if not os.path.exists(cfg):
+        LOG.warn('Failed to find legacy network conf file %s', cfg)
+        return
+
+    bmsg = "Dynamic networking config may not apply."
+    try:
+        contents = util.load_file(cfg)
+        known_contents = ["auto eth0", "iface eth0 inet dhcp"]
+        lines = [f.strip() for f in contents.splitlines()
+                 if not f.startswith("#")]
+        if lines == known_contents:
+            util.del_file(cfg)
+            msg = "removed %s with known contents" % cfg
+        else:
+            msg = (bmsg + " '%s' exists with user configured content." % cfg)
+    except Exception:
+        msg = bmsg + " %s exists, but could not be read." % cfg
+        LOG.exception(msg)
+        return
+
+    LOG.warn(msg)
+
1281 | 48 | 201 | ||
1282 | 49 | def apply_net_main(args): | 202 | def apply_net_main(args): |
1283 | 50 | # curtin apply_net [--net-state=/config/netstate.yml] [--target=/] | 203 | # curtin apply_net [--net-state=/config/netstate.yml] [--target=/] |
1284 | @@ -76,8 +229,10 @@ | |||
1285 | 76 | apply_net(target=state['target'], | 229 | apply_net(target=state['target'], |
1286 | 77 | network_state=state['network_state'], | 230 | network_state=state['network_state'], |
1287 | 78 | network_config=state['network_config']) | 231 | network_config=state['network_config']) |
1288 | 232 | |||
1289 | 79 | except Exception: | 233 | except Exception: |
1290 | 80 | LOG.exception('failed to apply network config') | 234 | LOG.exception('failed to apply network config') |
1291 | 235 | return 1 | ||
1292 | 81 | 236 | ||
1293 | 82 | LOG.info('Applied network configuration successfully') | 237 | LOG.info('Applied network configuration successfully') |
1294 | 83 | sys.exit(0) | 238 | sys.exit(0) |
1295 | @@ -90,7 +245,7 @@ | |||
1296 | 90 | 'metavar': 'NETSTATE', 'action': 'store', | 245 | 'metavar': 'NETSTATE', 'action': 'store', |
1297 | 91 | 'default': os.environ.get('OUTPUT_NETWORK_STATE')}), | 246 | 'default': os.environ.get('OUTPUT_NETWORK_STATE')}), |
1298 | 92 | (('-t', '--target'), | 247 | (('-t', '--target'), |
1300 | 93 | {'help': ('target filesystem root to add swap file to. ' | 248 | {'help': ('target filesystem root to configure networking to. ' |
1301 | 94 | 'default is env["TARGET_MOUNT_POINT"]'), | 249 | 'default is env["TARGET_MOUNT_POINT"]'), |
1302 | 95 | 'metavar': 'TARGET', 'action': 'store', | 250 | 'metavar': 'TARGET', 'action': 'store', |
1303 | 96 | 'default': os.environ.get('TARGET_MOUNT_POINT')}), | 251 | 'default': os.environ.get('TARGET_MOUNT_POINT')}), |
1304 | 97 | 252 | ||
=== added file 'curtin/commands/apt_config.py'
--- curtin/commands/apt_config.py	1970-01-01 00:00:00 +0000
+++ curtin/commands/apt_config.py	2016-10-03 18:55:20 +0000
@@ -0,0 +1,668 @@
+# Copyright (C) 2016 Canonical Ltd.
+#
+# Author: Christian Ehrhardt <christian.ehrhardt@canonical.com>
+#
+# Curtin is free software: you can redistribute it and/or modify it under
+# the terms of the GNU Affero General Public License as published by the
+# Free Software Foundation, either version 3 of the License, or (at your
+# option) any later version.
+#
+# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
+# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public License for
+# more details.
+#
+# You should have received a copy of the GNU Affero General Public License
+# along with Curtin.  If not, see <http://www.gnu.org/licenses/>.
+"""
+apt_config.py
+Handle the setup of apt-related tasks like proxies, mirrors, repositories.
+"""
+
+import argparse
+import glob
+import os
+import re
+import sys
+import yaml
+
+from curtin.log import LOG
+from curtin import (config, util, gpg)
+
+from . import populate_one_subcmd
+
+# this will match 'XXX:YYY' (ie, 'cloud-archive:foo' or 'ppa:bar')
+ADD_APT_REPO_MATCH = r"^[\w-]+:\w"
+
+# place where apt stores cached repository data
+APT_LISTS = "/var/lib/apt/lists"
+
+# Files to store proxy information
+APT_CONFIG_FN = "/etc/apt/apt.conf.d/94curtin-config"
+APT_PROXY_FN = "/etc/apt/apt.conf.d/90curtin-aptproxy"
+
+# Default keyserver to use
+DEFAULT_KEYSERVER = "keyserver.ubuntu.com"
+
+# Default archive mirrors
+PRIMARY_ARCH_MIRRORS = {"PRIMARY": "http://archive.ubuntu.com/ubuntu/",
+                        "SECURITY": "http://security.ubuntu.com/ubuntu/"}
+PORTS_MIRRORS = {"PRIMARY": "http://ports.ubuntu.com/ubuntu-ports",
+                 "SECURITY": "http://ports.ubuntu.com/ubuntu-ports"}
+PRIMARY_ARCHES = ['amd64', 'i386']
+PORTS_ARCHES = ['s390x', 'arm64', 'armhf', 'powerpc', 'ppc64el']
+
+
+def get_default_mirrors(arch=None):
+    """Return the default mirrors for the target. These depend on the
+    architecture; for more see:
+    https://wiki.ubuntu.com/UbuntuDevelopment/PackageArchive#Ports"""
+    if arch is None:
+        arch = util.get_architecture()
+    if arch in PRIMARY_ARCHES:
+        return PRIMARY_ARCH_MIRRORS.copy()
+    if arch in PORTS_ARCHES:
+        return PORTS_MIRRORS.copy()
+    raise ValueError("No default mirror known for arch %s" % arch)
+
+
+def handle_apt(cfg, target=None):
+    """handle_apt
+    Process the config for apt_config. This can be called from
+    curthooks if a global apt config was provided, or via the "apt"
+    standalone command.
+    """
+    release = util.lsb_release(target=target)['codename']
+    arch = util.get_architecture(target)
+    mirrors = find_apt_mirror_info(cfg, arch)
+    LOG.debug("Apt Mirror info: %s", mirrors)
+
+    apply_debconf_selections(cfg, target)
+
+    if not config.value_as_boolean(cfg.get('preserve_sources_list',
+                                           True)):
+        generate_sources_list(cfg, release, mirrors, target)
+        rename_apt_lists(mirrors, target)
+
+    try:
+        apply_apt_proxy_config(cfg, target + APT_PROXY_FN,
+                               target + APT_CONFIG_FN)
+    except (IOError, OSError):
+        LOG.exception("Failed to apply proxy or apt config info:")
+
+    # Process 'apt_source -> sources {dict}'
+    if 'sources' in cfg:
+        params = mirrors
+        params['RELEASE'] = release
+        params['MIRROR'] = mirrors["MIRROR"]
+
+        matcher = None
+        matchcfg = cfg.get('add_apt_repo_match', ADD_APT_REPO_MATCH)
+        if matchcfg:
+            matcher = re.compile(matchcfg).search
+
+        add_apt_sources(cfg['sources'], target,
+                        template_params=params, aa_repo_match=matcher)
+
+
+def debconf_set_selections(selections, target=None):
+    util.subp(['debconf-set-selections'], data=selections, target=target,
+              capture=True)
+
+
+def dpkg_reconfigure(packages, target=None):
+    # For any packages that are already installed but have preseed data,
+    # we populate the debconf database; however, the filesystem
+    # configuration would be preferred on a subsequent dpkg-reconfigure.
+    # So what we have to do is "know" information about certain packages
+    # in order to unconfigure them.
+    unhandled = []
+    to_config = []
+    for pkg in packages:
+        if pkg in CONFIG_CLEANERS:
+            LOG.debug("unconfiguring %s", pkg)
+            CONFIG_CLEANERS[pkg](target)
+            to_config.append(pkg)
+        else:
+            unhandled.append(pkg)
+
+    if len(unhandled):
+        LOG.warn("The following packages were installed and preseeded, "
+                 "but cannot be unconfigured: %s", unhandled)
+
+    if len(to_config):
+        util.subp(['dpkg-reconfigure', '--frontend=noninteractive'] +
+                  list(to_config), data=None, target=target, capture=True)
+
+
+def apply_debconf_selections(cfg, target=None):
+    """apply_debconf_selections - push content to debconf"""
+    # debconf_selections:
+    #  set1: |
+    #   cloud-init cloud-init/datasources multiselect MAAS
+    #  set2: pkg pkg/value string bar
+    selsets = cfg.get('debconf_selections')
+    if not selsets:
+        LOG.debug("debconf_selections was not set in config")
+        return
+
+    selections = '\n'.join(
+        [selsets[key] for key in sorted(selsets.keys())])
+    debconf_set_selections(selections.encode() + b"\n", target=target)
+
+    # get a complete list of packages listed in input
+    pkgs_cfgd = set()
+    for key, content in selsets.items():
+        for line in content.splitlines():
+            if line.startswith("#"):
+                continue
+            pkg = re.sub(r"[:\s].*", "", line)
+            pkgs_cfgd.add(pkg)
+
+    pkgs_installed = util.get_installed_packages(target)
+
+    LOG.debug("pkgs_cfgd: %s", pkgs_cfgd)
+    LOG.debug("pkgs_installed: %s", pkgs_installed)
+    need_reconfig = pkgs_cfgd.intersection(pkgs_installed)
+
+    if len(need_reconfig) == 0:
+        LOG.debug("no need for reconfig")
+        return
+
+    dpkg_reconfigure(need_reconfig, target=target)
+
+
1484 | 176 | """clean out any local cloud-init config""" | ||
1485 | 177 | flist = glob.glob( | ||
1486 | 178 | util.target_path(target, "/etc/cloud/cloud.cfg.d/*dpkg*")) | ||
1487 | 179 | |||
1488 | 180 | LOG.debug("cleaning cloud-init config from: %s", flist) | ||
1489 | 181 | for dpkg_cfg in flist: | ||
1490 | 182 | os.unlink(dpkg_cfg) | ||
1491 | 183 | |||
1492 | 184 | |||
1493 | 185 | def mirrorurl_to_apt_fileprefix(mirror): | ||
1494 | 186 | """ mirrorurl_to_apt_fileprefix | ||
1495 | 187 | Convert a mirror url to the file prefix used by apt on disk to | ||
1496 | 188 | store cache information for that mirror. | ||
1497 | 189 | To do so do: | ||
1498 | 190 | - take off ???:// | ||
1499 | 191 | - drop tailing / | ||
1500 | 192 | - convert in string / to _ | ||
1501 | 193 | """ | ||
1502 | 194 | string = mirror | ||
1503 | 195 | if string.endswith("/"): | ||
1504 | 196 | string = string[0:-1] | ||
1505 | 197 | pos = string.find("://") | ||
1506 | 198 | if pos >= 0: | ||
1507 | 199 | string = string[pos + 3:] | ||
1508 | 200 | string = string.replace("/", "_") | ||
1509 | 201 | return string | ||
1510 | 202 | |||
1511 | 203 | |||
1512 | 204 | def rename_apt_lists(new_mirrors, target=None): | ||
1513 | 205 | """rename_apt_lists - rename apt lists to preserve old cache data""" | ||
1514 | 206 | default_mirrors = get_default_mirrors(util.get_architecture(target)) | ||
1515 | 207 | |||
1516 | 208 | pre = util.target_path(target, APT_LISTS) | ||
1517 | 209 | for (name, omirror) in default_mirrors.items(): | ||
1518 | 210 | nmirror = new_mirrors.get(name) | ||
1519 | 211 | if not nmirror: | ||
1520 | 212 | continue | ||
1521 | 213 | |||
1522 | 214 | oprefix = pre + os.path.sep + mirrorurl_to_apt_fileprefix(omirror) | ||
1523 | 215 | nprefix = pre + os.path.sep + mirrorurl_to_apt_fileprefix(nmirror) | ||
1524 | 216 | if oprefix == nprefix: | ||
1525 | 217 | continue | ||
1526 | 218 | olen = len(oprefix) | ||
1527 | 219 | for filename in glob.glob("%s_*" % oprefix): | ||
1528 | 220 | newname = "%s%s" % (nprefix, filename[olen:]) | ||
1529 | 221 | LOG.debug("Renaming apt list %s to %s", filename, newname) | ||
1530 | 222 | try: | ||
1531 | 223 | os.rename(filename, newname) | ||
1532 | 224 | except OSError: | ||
1533 | 225 | # since this is a best effort task, warn with but don't fail | ||
1534 | 226 | LOG.warn("Failed to rename apt list:", exc_info=True) | ||
1535 | 227 | |||
1536 | 228 | |||
1537 | 229 | def mirror_to_placeholder(tmpl, mirror, placeholder): | ||
1538 | 230 | """ mirror_to_placeholder | ||
1539 | 231 | replace the specified mirror in a template with a placeholder string | ||
1540 | 232 | Checks for existance of the expected mirror and warns if not found | ||
1541 | 233 | """ | ||
1542 | 234 | if mirror not in tmpl: | ||
1543 | 235 | LOG.warn("Expected mirror '%s' not found in: %s", mirror, tmpl) | ||
1544 | 236 | return tmpl.replace(mirror, placeholder) | ||
1545 | 237 | |||
1546 | 238 | |||
1547 | 239 | def map_known_suites(suite): | ||
1548 | 240 | """there are a few default names which will be auto-extended. | ||
1549 | 241 | This comes at the inability to use those names literally as suites, | ||
1550 | 242 | but on the other hand increases readability of the cfg quite a lot""" | ||
1551 | 243 | mapping = {'updates': '$RELEASE-updates', | ||
1552 | 244 | 'backports': '$RELEASE-backports', | ||
1553 | 245 | 'security': '$RELEASE-security', | ||
1554 | 246 | 'proposed': '$RELEASE-proposed', | ||
1555 | 247 | 'release': '$RELEASE'} | ||
1556 | 248 | try: | ||
1557 | 249 | retsuite = mapping[suite] | ||
1558 | 250 | except KeyError: | ||
1559 | 251 | retsuite = suite | ||
1560 | 252 | return retsuite | ||
1561 | 253 | |||
1562 | 254 | |||
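The shorthand-to-suite expansion can be sketched end to end. Here `string.Template` stands in for curtin's `util.render_string` (an assumption: the real helper performs `$RELEASE`-style substitution); the mapping table is copied from the function above:

```python
from string import Template

# shorthand names auto-extended with the release codename (from the diff)
SUITE_MAP = {'updates': '$RELEASE-updates',
             'backports': '$RELEASE-backports',
             'security': '$RELEASE-security',
             'proposed': '$RELEASE-proposed',
             'release': '$RELEASE'}


def expand_suite(suite, release):
    """Map a shorthand suite name, then substitute $RELEASE.

    string.Template is a stand-in for util.render_string here."""
    return Template(SUITE_MAP.get(suite, suite)).safe_substitute(
        RELEASE=release)
```

With `release='xenial'`, the config entry `updates` expands to `xenial-updates`, while an unknown name like `my-own-suite` passes through untouched.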
+def disable_suites(disabled, src, release):
+    """Read the config for suites to be disabled and comment those
+    out of the template."""
+    if not disabled:
+        return src
+
+    retsrc = src
+    for suite in disabled:
+        suite = map_known_suites(suite)
+        releasesuite = util.render_string(suite, {'RELEASE': release})
+        LOG.debug("Disabling suite %s as %s", suite, releasesuite)
+
+        newsrc = ""
+        for line in retsrc.splitlines(True):
+            if line.startswith("#"):
+                newsrc += line
+                continue
+
+            # sources.list allows options in cols[1] which can have spaces,
+            # so the actual suite can be [2] or later. example:
+            # deb [ arch=amd64,armel k=v ] http://example.com/debian
+            cols = line.split()
+            if len(cols) > 1:
+                pcol = 2
+                if cols[1].startswith("["):
+                    for col in cols[1:]:
+                        pcol += 1
+                        if col.endswith("]"):
+                            break
+
+                if cols[pcol] == releasesuite:
+                    line = '# suite disabled by curtin: %s' % line
+            newsrc += line
+        retsrc = newsrc
+
+    return retsrc
+
+
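The trickiest part of `disable_suites` is locating the suite column when a bracketed options block is present. A standalone sketch of the per-line decision (helper names are illustrative; the column-walking logic is lifted from the function above):

```python
def suite_column(cols):
    """Index of the suite column in a split sources.list line.

    Normally 2 ('deb URL suite ...'), but a bracketed options block
    ('deb [ arch=amd64 ] URL suite ...') shifts it right."""
    pcol = 2
    if cols[1].startswith("["):
        for col in cols[1:]:
            pcol += 1
            if col.endswith("]"):
                break
    return pcol


def disable_suite_line(line, releasesuite):
    """Comment out a sources.list line whose suite matches releasesuite."""
    cols = line.split()
    if (not line.startswith("#") and len(cols) > 1 and
            cols[suite_column(cols)] == releasesuite):
        return '# suite disabled by curtin: %s' % line
    return line
```

So `deb [ arch=amd64 ] http://example.com/debian xenial main` is correctly matched on `xenial` despite the three extra option columns, while `xenial-updates` lines are left alone.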
+def generate_sources_list(cfg, release, mirrors, target=None):
+    """generate_sources_list
+    Create a sources.list file based on a custom or default template
+    by replacing mirrors and release in the template.
+    """
+    default_mirrors = get_default_mirrors(util.get_architecture(target))
+    aptsrc = "/etc/apt/sources.list"
+    params = {'RELEASE': release}
+    for k in mirrors:
+        params[k] = mirrors[k]
+
+    tmpl = cfg.get('sources_list', None)
+    if tmpl is None:
+        LOG.info("No custom template provided, fall back to modifying "
+                 "mirrors in %s on the target system", aptsrc)
+        tmpl = util.load_file(util.target_path(target, aptsrc))
+        # Strategy if no custom template was provided:
+        # - only replace mirrors
+        # - no reason to replace "release" as it is from the target anyway
+        # - the less we depend upon, the more stable this is against changes
+        # - warn if the expected original content wasn't found
+        tmpl = mirror_to_placeholder(tmpl, default_mirrors['PRIMARY'],
+                                     "$MIRROR")
+        tmpl = mirror_to_placeholder(tmpl, default_mirrors['SECURITY'],
+                                     "$SECURITY")
+
+    orig = util.target_path(target, aptsrc)
+    if os.path.exists(orig):
+        os.rename(orig, orig + ".curtin.old")
+
+    rendered = util.render_string(tmpl, params)
+    disabled = disable_suites(cfg.get('disable_suites'), rendered, release)
+    util.write_file(util.target_path(target, aptsrc), disabled, mode=0o644)
+
+    # protect the just-generated sources.list from cloud-init
+    cloudfile = "/etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg"
+    # this has to work with older cloud-init as well, so use the old key
+    cloudconf = yaml.dump({'apt_preserve_sources_list': True}, indent=1)
+    try:
+        util.write_file(util.target_path(target, cloudfile),
+                        cloudconf, mode=0o644)
+    except IOError:
+        LOG.exception("Failed to protect sources.list from cloud-init in (%s)",
+                      util.target_path(target, cloudfile))
+        raise
+
+
+def add_apt_key_raw(key, target=None):
+    """
+    Actually add a key, as given in the key argument, to the system.
+    """
+    LOG.debug("Adding key:\n'%s'", key)
+    try:
+        util.subp(['apt-key', 'add', '-'], data=key.encode(), target=target)
+    except util.ProcessExecutionError:
+        LOG.exception("failed to add apt GPG Key to apt keyring")
+        raise
+
+
+def add_apt_key(ent, target=None):
+    """
+    Add a key to the system as defined in ent (if any).
+    Supports raw keys or keyids.
+    The latter will first be fetched to obtain the raw key.
+    """
+    if 'keyid' in ent and 'key' not in ent:
+        keyserver = DEFAULT_KEYSERVER
+        if 'keyserver' in ent:
+            keyserver = ent['keyserver']
+
+        ent['key'] = gpg.getkeybyid(ent['keyid'], keyserver)
+
+    if 'key' in ent:
+        add_apt_key_raw(ent['key'], target)
+
+
+def add_apt_sources(srcdict, target=None, template_params=None,
+                    aa_repo_match=None):
+    """
+    Add entries in /etc/apt/sources.list.d for each abbreviated
+    sources.list entry in 'srcdict'. When rendering the template, also
+    include the values in the template_params dictionary.
+    """
+    if template_params is None:
+        template_params = {}
+
+    if aa_repo_match is None:
+        raise ValueError('did not get a valid repo matcher')
+
+    if not isinstance(srcdict, dict):
+        raise TypeError('unknown apt format: %s' % (srcdict))
+
+    for filename in srcdict:
+        ent = srcdict[filename]
+        if 'filename' not in ent:
+            ent['filename'] = filename
+
+        add_apt_key(ent, target)
+
+        if 'source' not in ent:
+            continue
+        source = ent['source']
+        source = util.render_string(source, template_params)
+
+        if not ent['filename'].startswith("/"):
+            ent['filename'] = os.path.join("/etc/apt/sources.list.d/",
+                                           ent['filename'])
+        if not ent['filename'].endswith(".list"):
+            ent['filename'] += ".list"
+
+        if aa_repo_match(source):
+            try:
+                with util.ChrootableTarget(
+                        target, sys_resolvconf=True) as in_chroot:
+                    in_chroot.subp(["add-apt-repository", source])
+            except util.ProcessExecutionError:
+                LOG.exception("add-apt-repository failed.")
+                raise
+            continue
+
+        sourcefn = util.target_path(target, ent['filename'])
+        try:
+            contents = "%s\n" % (source)
+            util.write_file(sourcefn, contents, omode="a")
+        except IOError as detail:
+            LOG.exception("failed write to file %s: %s", sourcefn, detail)
+            raise
+
+    util.apt_update(target=target, force=True,
+                    comment="apt-source changed config")
+
+    return
+
+
+def search_for_mirror(candidates):
+    """
+    Search through a list of mirror urls for one that works.
+    This needs to return quickly.
+    """
+    if candidates is None:
+        return None
+
+    LOG.debug("search for mirror in candidates: '%s'", candidates)
+    for cand in candidates:
+        try:
+            if util.is_resolvable_url(cand):
+                LOG.debug("found working mirror: '%s'", cand)
+                return cand
+        except Exception:
+            pass
+    return None
+
+
+def update_mirror_info(pmirror, smirror, arch):
+    """Set the security mirror to the primary if not defined.
+    Return the defaults if no mirrors are defined."""
+    if pmirror is not None:
+        if smirror is None:
+            smirror = pmirror
+        return {'PRIMARY': pmirror,
+                'SECURITY': smirror}
+    return get_default_mirrors(arch)
+
+
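The fallback chain in `update_mirror_info` above is two short rules: security defaults to primary, and with no primary at all the arch defaults win. A self-contained sketch (the `defaults` dict parameter replaces the `get_default_mirrors(arch)` call so the example needs no curtin imports):

```python
def update_mirror_info(pmirror, smirror, defaults):
    """Mirror-selection fallback, as in the function above: security
    falls back to primary; with no primary, the passed-in arch defaults
    are returned unchanged."""
    if pmirror is not None:
        if smirror is None:
            smirror = pmirror
        return {'PRIMARY': pmirror, 'SECURITY': smirror}
    return defaults
```

So configuring only a primary mirror silently reuses it for security pockets as well, which matches the "if only primary is given, security is assumed equal" behaviour described by `find_apt_mirror_info` below.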
1766 | 458 | def get_arch_mirrorconfig(cfg, mirrortype, arch): | ||
1767 | 459 | """out of a list of potential mirror configurations select | ||
1768 | 460 | and return the one matching the architecture (or default)""" | ||
1769 | 461 | # select the mirror specification (if-any) | ||
1770 | 462 | mirror_cfg_list = cfg.get(mirrortype, None) | ||
1771 | 463 | if mirror_cfg_list is None: | ||
1772 | 464 | return None | ||
1773 | 465 | |||
1774 | 466 | # select the specification matching the target arch | ||
1775 | 467 | default = None | ||
1776 | 468 | for mirror_cfg_elem in mirror_cfg_list: | ||
1777 | 469 | arches = mirror_cfg_elem.get("arches") | ||
1778 | 470 | if arch in arches: | ||
1779 | 471 | return mirror_cfg_elem | ||
1780 | 472 | if "default" in arches: | ||
1781 | 473 | default = mirror_cfg_elem | ||
1782 | 474 | return default | ||
1783 | 475 | |||
1784 | 476 | |||
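The arch matching above lets one `primary` (or `security`) entry target specific architectures while a `default` entry catches the rest. A minimal re-creation of the selection loop, run against a hypothetical two-entry config:

```python
def get_arch_mirrorconfig(cfg, mirrortype, arch):
    """Pick the mirror spec matching arch, falling back to 'default'."""
    mirror_cfg_list = cfg.get(mirrortype)
    if mirror_cfg_list is None:
        return None
    default = None
    for elem in mirror_cfg_list:
        arches = elem.get("arches")
        if arch in arches:
            return elem          # exact architecture match wins
        if "default" in arches:
            default = elem       # remember the catch-all entry
    return default

cfg = {"primary": [
    {"arches": ["default"], "uri": "http://archive.ubuntu.com/ubuntu"},
    {"arches": ["s390x", "ppc64el"],
     "uri": "http://ports.ubuntu.com/ubuntu-ports"},
]}
print(get_arch_mirrorconfig(cfg, "primary", "amd64")["uri"])
# http://archive.ubuntu.com/ubuntu
print(get_arch_mirrorconfig(cfg, "primary", "s390x")["uri"])
# http://ports.ubuntu.com/ubuntu-ports
```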
1785 | 477 | def get_mirror(cfg, mirrortype, arch): | ||
1786 | 478 | """pass the three potential stages of mirror specification | ||
1787 | 479 | returns None if none of them found anything, otherwise the first | ||
1788 | 480 | hit is returned""" | ||
1789 | 481 | mcfg = get_arch_mirrorconfig(cfg, mirrortype, arch) | ||
1790 | 482 | if mcfg is None: | ||
1791 | 483 | return None | ||
1792 | 484 | |||
1793 | 485 | # directly specified | ||
1794 | 486 | mirror = mcfg.get("uri", None) | ||
1795 | 487 | |||
1796 | 488 | # fallback to search if specified | ||
1797 | 489 | if mirror is None: | ||
1798 | 490 | # list of mirrors to try to resolve | ||
1799 | 491 | mirror = search_for_mirror(mcfg.get("search", None)) | ||
1800 | 492 | |||
1801 | 493 | return mirror | ||
1802 | 494 | |||
1803 | 495 | |||
1804 | 496 | def find_apt_mirror_info(cfg, arch=None): | ||
1805 | 497 | """find_apt_mirror_info | ||
1806 | 498 | find an apt_mirror given the cfg provided. | ||
1807 | 499 | It can check for separate config of primary and security mirrors | ||
1808 | 500 | If only primary is given, security is assumed to be equal to primary | ||
1809 | 501 | If the generic apt_mirror is given, it defines both | ||
1810 | 502 | """ | ||
1811 | 503 | |||
1812 | 504 | if arch is None: | ||
1813 | 505 | arch = util.get_architecture() | ||
1814 | 506 | LOG.debug("got arch for mirror selection: %s", arch) | ||
1815 | 507 | pmirror = get_mirror(cfg, "primary", arch) | ||
1816 | 508 | LOG.debug("got primary mirror: %s", pmirror) | ||
1817 | 509 | smirror = get_mirror(cfg, "security", arch) | ||
1818 | 510 | LOG.debug("got security mirror: %s", smirror) | ||
1819 | 511 | |||
1820 | 512 | # Note: curtin has no cloud-datasource fallback | ||
1821 | 513 | |||
1822 | 514 | mirror_info = update_mirror_info(pmirror, smirror, arch) | ||
1823 | 515 | |||
1824 | 516 | # less complex replacements use only MIRROR, derive from primary | ||
1825 | 517 | mirror_info["MIRROR"] = mirror_info["PRIMARY"] | ||
1826 | 518 | |||
1827 | 519 | return mirror_info | ||
1828 | 520 | |||
1829 | 521 | |||
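Taken together with `update_mirror_info`, this means a config that only sets a primary mirror gets the same URI for `PRIMARY`, `SECURITY` and `MIRROR`. A sketch of that fallback chain (the defaults dict stands in for `get_default_mirrors`):

```python
def mirror_info(pmirror, smirror, defaults):
    """Security falls back to primary; both fall back to arch defaults."""
    if pmirror is not None:
        info = {"PRIMARY": pmirror, "SECURITY": smirror or pmirror}
    else:
        info = dict(defaults)
    # less complex replacements use only MIRROR, derived from primary
    info["MIRROR"] = info["PRIMARY"]
    return info

defaults = {"PRIMARY": "http://archive.ubuntu.com/ubuntu",
            "SECURITY": "http://security.ubuntu.com/ubuntu"}
info = mirror_info("http://mymirror.example/ubuntu", None, defaults)
print(info["SECURITY"])
# http://mymirror.example/ubuntu
```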
1830 | 522 | def apply_apt_proxy_config(cfg, proxy_fname, config_fname): | ||
1831 | 523 | """apply_apt_proxy_config | ||
1832 | 524 | Applies any apt*proxy config if specified | ||
1833 | 525 | """ | ||
1834 | 526 | # Set up any apt proxy | ||
1835 | 527 | cfgs = (('proxy', 'Acquire::http::Proxy "%s";'), | ||
1836 | 528 | ('http_proxy', 'Acquire::http::Proxy "%s";'), | ||
1837 | 529 | ('ftp_proxy', 'Acquire::ftp::Proxy "%s";'), | ||
1838 | 530 | ('https_proxy', 'Acquire::https::Proxy "%s";')) | ||
1839 | 531 | |||
1840 | 532 | proxies = [fmt % cfg.get(name) for (name, fmt) in cfgs if cfg.get(name)] | ||
1841 | 533 | if len(proxies): | ||
1842 | 534 | LOG.debug("write apt proxy info to %s", proxy_fname) | ||
1843 | 535 | util.write_file(proxy_fname, '\n'.join(proxies) + '\n') | ||
1844 | 536 | elif os.path.isfile(proxy_fname): | ||
1845 | 537 | util.del_file(proxy_fname) | ||
1846 | 538 | LOG.debug("no apt proxy configured, removed %s", proxy_fname) | ||
1847 | 539 | |||
1848 | 540 | if cfg.get('conf', None): | ||
1849 | 541 | LOG.debug("write apt config info to %s", config_fname) | ||
1850 | 542 | util.write_file(config_fname, cfg.get('conf')) | ||
1851 | 543 | elif os.path.isfile(config_fname): | ||
1852 | 544 | util.del_file(config_fname) | ||
1853 | 545 | LOG.debug("no apt config configured, removed %s", config_fname) | ||
1854 | 546 | |||
1855 | 547 | |||
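The proxy keys above render directly into an apt configuration fragment, one `Acquire::*::Proxy` line per configured key. For a hypothetical squid proxy the written file would contain:

```python
cfgs = (('proxy', 'Acquire::http::Proxy "%s";'),
        ('http_proxy', 'Acquire::http::Proxy "%s";'),
        ('ftp_proxy', 'Acquire::ftp::Proxy "%s";'),
        ('https_proxy', 'Acquire::https::Proxy "%s";'))

# hypothetical config; 'proxy' and 'http_proxy' both map to http
cfg = {'proxy': 'http://squid.example:3128',
       'https_proxy': 'https://squid.example:3128'}
proxies = [fmt % cfg.get(name) for (name, fmt) in cfgs if cfg.get(name)]
print('\n'.join(proxies))
# Acquire::http::Proxy "http://squid.example:3128";
# Acquire::https::Proxy "https://squid.example:3128";
```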
1856 | 548 | def apt_command(args): | ||
1857 | 549 | """ Main entry point for curtin apt-config standalone command | ||
1858 | 550 | This does not read the global config as handled by curthooks, but | ||
1859 | 551 | instead one can specify a different "target" and a new cfg via --config | ||
1860 | 552 | """ | ||
1861 | 553 | cfg = config.load_command_config(args, {}) | ||
1862 | 554 | |||
1863 | 555 | if args.target is not None: | ||
1864 | 556 | target = args.target | ||
1865 | 557 | else: | ||
1866 | 558 | state = util.load_command_environment() | ||
1867 | 559 | target = state['target'] | ||
1868 | 560 | |||
1869 | 561 | if target is None: | ||
1870 | 562 | sys.stderr.write("Unable to find target. " | ||
1871 | 563 | "Use --target or set TARGET_MOUNT_POINT\n") | ||
1872 | 564 | sys.exit(2) | ||
1873 | 565 | |||
1874 | 566 | apt_cfg = cfg.get("apt") | ||
1875 | 567 | # if no apt config section is available, do nothing | ||
1876 | 568 | if apt_cfg is not None: | ||
1877 | 569 | LOG.debug("Handling apt to target %s with config %s", | ||
1878 | 570 | target, apt_cfg) | ||
1879 | 571 | try: | ||
1880 | 572 | with util.ChrootableTarget(target, sys_resolvconf=True): | ||
1881 | 573 | handle_apt(apt_cfg, target) | ||
1882 | 574 | except (RuntimeError, TypeError, ValueError, IOError): | ||
1883 | 575 | LOG.exception("Failed to configure apt features '%s'", apt_cfg) | ||
1884 | 576 | sys.exit(1) | ||
1885 | 577 | else: | ||
1886 | 578 | LOG.info("No apt config provided, skipping") | ||
1887 | 579 | |||
1888 | 580 | sys.exit(0) | ||
1889 | 581 | |||
1890 | 582 | |||
1891 | 583 | def translate_old_apt_features(cfg): | ||
1892 | 584 | """translate the few old apt related features into the new config format""" | ||
1893 | 585 | predef_apt_cfg = cfg.get("apt") | ||
1894 | 586 | if predef_apt_cfg is None: | ||
1895 | 587 | cfg['apt'] = {} | ||
1896 | 588 | predef_apt_cfg = cfg.get("apt") | ||
1897 | 589 | |||
1898 | 590 | if cfg.get('apt_proxy') is not None: | ||
1899 | 591 | if predef_apt_cfg.get('proxy') is not None: | ||
1900 | 592 | msg = ("Error in apt_proxy configuration: " | ||
1901 | 593 | "old and new format of apt features " | ||
1902 | 594 | "are mutually exclusive") | ||
1903 | 595 | LOG.error(msg) | ||
1904 | 596 | raise ValueError(msg) | ||
1905 | 597 | |||
1906 | 598 | cfg['apt']['proxy'] = cfg.get('apt_proxy') | ||
1907 | 599 | LOG.debug("Transferred %s into new format: %s", cfg.get('apt_proxy'), | ||
1908 | 600 | cfg.get('apt')) | ||
1909 | 601 | del cfg['apt_proxy'] | ||
1910 | 602 | |||
1911 | 603 | if cfg.get('apt_mirrors') is not None: | ||
1912 | 604 | if predef_apt_cfg.get('mirrors') is not None: | ||
1913 | 605 | msg = ("Error in apt_mirror configuration: " | ||
1914 | 606 | "old and new format of apt features " | ||
1915 | 607 | "are mutually exclusive") | ||
1916 | 608 | LOG.error(msg) | ||
1917 | 609 | raise ValueError(msg) | ||
1918 | 610 | |||
1919 | 611 | old = cfg.get('apt_mirrors') | ||
1920 | 612 | cfg['apt']['primary'] = [{"arches": ["default"], | ||
1921 | 613 | "uri": old.get('ubuntu_archive')}] | ||
1922 | 614 | cfg['apt']['security'] = [{"arches": ["default"], | ||
1923 | 615 | "uri": old.get('ubuntu_security')}] | ||
1924 | 616 | LOG.debug("Transferred %s into new format: %s", cfg.get('apt_mirrors'), | ||
1925 | 617 | cfg.get('apt')) | ||
1926 | 618 | del cfg['apt_mirrors'] | ||
1927 | 619 | # to work this also needs to disable the default protection | ||
1928 | 620 | psl = predef_apt_cfg.get('preserve_sources_list') | ||
1929 | 621 | if psl is not None: | ||
1930 | 622 | if config.value_as_boolean(psl) is True: | ||
1931 | 623 | msg = ("Error in apt_mirror configuration: " | ||
1932 | 624 | "apt_mirrors and preserve_sources_list: True " | ||
1933 | 625 | "are mutually exclusive") | ||
1934 | 626 | LOG.error(msg) | ||
1935 | 627 | raise ValueError(msg) | ||
1936 | 628 | cfg['apt']['preserve_sources_list'] = False | ||
1937 | 629 | |||
1938 | 630 | if cfg.get('debconf_selections') is not None: | ||
1939 | 631 | if predef_apt_cfg.get('debconf_selections') is not None: | ||
1940 | 632 | msg = ("Error in debconf_selections configuration: " | ||
1941 | 633 | "old and new format of apt features " | ||
1942 | 634 | "are mutually exclusive") | ||
1943 | 635 | LOG.error(msg) | ||
1944 | 636 | raise ValueError(msg) | ||
1945 | 637 | |||
1946 | 638 | selsets = cfg.get('debconf_selections') | ||
1947 | 639 | cfg['apt']['debconf_selections'] = selsets | ||
1948 | 640 | LOG.info("Transferred %s into new format: %s", | ||
1949 | 641 | cfg.get('debconf_selections'), | ||
1950 | 642 | cfg.get('apt')) | ||
1951 | 643 | del cfg['debconf_selections'] | ||
1952 | 644 | |||
1953 | 645 | return cfg | ||
1954 | 646 | |||
1955 | 647 | |||
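A reduced sketch of the `apt_proxy` branch above, showing the legacy flat key being moved under the new `apt` section, including the mutual-exclusion check (simplified; not curtin's full function):

```python
def translate_old_apt_proxy(cfg):
    """Move the legacy top-level apt_proxy key under cfg['apt']."""
    apt = cfg.setdefault('apt', {})
    if cfg.get('apt_proxy') is not None:
        if apt.get('proxy') is not None:
            # refusing to guess which of the two values was intended
            raise ValueError("old and new format of apt features "
                             "are mutually exclusive")
        apt['proxy'] = cfg.pop('apt_proxy')
    return cfg

print(translate_old_apt_proxy({'apt_proxy': 'http://squid.example:3128'}))
# {'apt': {'proxy': 'http://squid.example:3128'}}
```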
1956 | 648 | CMD_ARGUMENTS = ( | ||
1957 | 649 | ((('-c', '--config'), | ||
1958 | 650 | {'help': 'read configuration from cfg', 'action': util.MergedCmdAppend, | ||
1959 | 651 | 'metavar': 'FILE', 'type': argparse.FileType("rb"), | ||
1960 | 652 | 'dest': 'cfgopts', 'default': []}), | ||
1961 | 653 | (('-t', '--target'), | ||
1962 | 654 | {'help': 'chroot to target. default is env[TARGET_MOUNT_POINT]', | ||
1963 | 655 | 'action': 'store', 'metavar': 'TARGET', | ||
1964 | 656 | 'default': os.environ.get('TARGET_MOUNT_POINT')}),) | ||
1965 | 657 | ) | ||
1966 | 658 | |||
1967 | 659 | |||
1968 | 660 | def POPULATE_SUBCMD(parser): | ||
1969 | 661 | """Populate subcommand option parsing for apt-config""" | ||
1970 | 662 | populate_one_subcmd(parser, CMD_ARGUMENTS, apt_command) | ||
1971 | 663 | |||
1972 | 664 | CONFIG_CLEANERS = { | ||
1973 | 665 | 'cloud-init': clean_cloud_init, | ||
1974 | 666 | } | ||
1975 | 667 | |||
1976 | 668 | # vi: ts=4 expandtab syntax=python | ||
1977 | 0 | 669 | ||
1978 | === added file 'curtin/commands/block_info.py' | |||
1979 | --- curtin/commands/block_info.py 1970-01-01 00:00:00 +0000 | |||
1980 | +++ curtin/commands/block_info.py 2016-10-03 18:55:20 +0000 | |||
1981 | @@ -0,0 +1,75 @@ | |||
1982 | 1 | # Copyright (C) 2016 Canonical Ltd. | ||
1983 | 2 | # | ||
1984 | 3 | # Author: Wesley Wiedenmeier <wesley.wiedenmeier@canonical.com> | ||
1985 | 4 | # | ||
1986 | 5 | # Curtin is free software: you can redistribute it and/or modify it under | ||
1987 | 6 | # the terms of the GNU Affero General Public License as published by the | ||
1988 | 7 | # Free Software Foundation, either version 3 of the License, or (at your | ||
1989 | 8 | # option) any later version. | ||
1990 | 9 | # | ||
1991 | 10 | # Curtin is distributed in the hope that it will be useful, but WITHOUT ANY | ||
1992 | 11 | # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS | ||
1993 | 12 | # FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for | ||
1994 | 13 | # more details. | ||
1995 | 14 | # | ||
1996 | 15 | # You should have received a copy of the GNU Affero General Public License | ||
1997 | 16 | # along with Curtin. If not, see <http://www.gnu.org/licenses/>. | ||
1998 | 17 | |||
1999 | 18 | import os | ||
2000 | 19 | from . import populate_one_subcmd | ||
2001 | 20 | from curtin import (block, util) | ||
2002 | 21 | |||
2003 | 22 | |||
2004 | 23 | def block_info_main(args): | ||
2005 | 24 | """get information about block devices, similar to lsblk""" | ||
2006 | 25 | if not args.devices: | ||
2007 | 26 | raise ValueError('devices to scan must be specified') | ||
2008 | 27 | if not all(block.is_block_device(d) for d in args.devices): | ||
2009 | 28 | raise ValueError('invalid device(s)') | ||
2010 | 29 | |||
2011 | 30 | def add_size_to_holders_tree(tree): | ||
2012 | 31 | """add size information to generated holders trees""" | ||
2013 | 32 | size_file = os.path.join(tree['device'], 'size') | ||
2014 | 33 | # size file is always represented in 512 byte sectors even if | ||
2015 | 34 | # underlying disk uses a larger logical_block_size | ||
2016 | 35 | size = ((512 * int(util.load_file(size_file))) | ||
2017 | 36 | if os.path.exists(size_file) else None) | ||
2018 | 37 | tree['size'] = util.bytes2human(size) if args.human else str(size) | ||
2019 | 38 | for holder in tree['holders']: | ||
2020 | 39 | add_size_to_holders_tree(holder) | ||
2021 | 40 | return tree | ||
2022 | 41 | |||
2023 | 42 | def format_name(tree): | ||
2024 | 43 | """format information for human readable display""" | ||
2025 | 44 | res = { | ||
2026 | 45 | 'name': ' - '.join((tree['name'], tree['dev_type'], tree['size'])), | ||
2027 | 46 | 'holders': [] | ||
2028 | 47 | } | ||
2029 | 48 | for holder in tree['holders']: | ||
2030 | 49 | res['holders'].append(format_name(holder)) | ||
2031 | 50 | return res | ||
2032 | 51 | |||
2033 | 52 | trees = [add_size_to_holders_tree(t) for t in | ||
2034 | 53 | [block.clear_holders.gen_holders_tree(d) for d in args.devices]] | ||
2035 | 54 | |||
2036 | 55 | print(util.json_dumps(trees) if args.json else | ||
2037 | 56 | '\n'.join(block.clear_holders.format_holders_tree(t) for t in | ||
2038 | 57 | [format_name(tree) for tree in trees])) | ||
2039 | 58 | |||
2040 | 59 | return 0 | ||
2041 | 60 | |||
2042 | 61 | |||
2043 | 62 | CMD_ARGUMENTS = ( | ||
2044 | 63 | ('devices', | ||
2045 | 64 | {'help': 'devices to get info for', 'default': [], 'nargs': '+'}), | ||
2046 | 65 | ('--human', | ||
2047 | 66 | {'help': 'output size in human readable format', 'default': False, | ||
2048 | 67 | 'action': 'store_true'}), | ||
2049 | 68 | (('-j', '--json'), | ||
2050 | 69 | {'help': 'output data in json format', 'default': False, | ||
2051 | 70 | 'action': 'store_true'}), | ||
2052 | 71 | ) | ||
2053 | 72 | |||
2054 | 73 | |||
2055 | 74 | def POPULATE_SUBCMD(parser): | ||
2056 | 75 | populate_one_subcmd(parser, CMD_ARGUMENTS, block_info_main) | ||
2057 | 0 | 76 | ||
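As the comment in `add_size_to_holders_tree` notes, sysfs `size` files count 512-byte sectors regardless of the disk's logical block size, so the byte size is always `sectors * 512`. A toy version of the recursive annotation, with a hypothetical tree and sector counts standing in for the sysfs files:

```python
def add_size(tree, sector_counts):
    """Annotate a holders tree with byte sizes (512-byte sectors)."""
    sectors = sector_counts.get(tree['device'])
    tree['size'] = None if sectors is None else 512 * sectors
    for holder in tree['holders']:
        add_size(holder, sector_counts)
    return tree

# toy tree: sda holding one partition; counts as sysfs would report them
tree = {'device': 'sda', 'holders': [{'device': 'sda1', 'holders': []}]}
sizes = {'sda': 41943040, 'sda1': 41940992}
print(add_size(tree, sizes)['size'])  # 21474836480 bytes (20 GiB)
```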
2058 | === modified file 'curtin/commands/block_meta.py' | |||
2059 | --- curtin/commands/block_meta.py 2016-10-03 18:00:41 +0000 | |||
2060 | +++ curtin/commands/block_meta.py 2016-10-03 18:55:20 +0000 | |||
2061 | @@ -17,9 +17,8 @@ | |||
2062 | 17 | 17 | ||
2063 | 18 | from collections import OrderedDict | 18 | from collections import OrderedDict |
2064 | 19 | from curtin import (block, config, util) | 19 | from curtin import (block, config, util) |
2066 | 20 | from curtin.block import mdadm | 20 | from curtin.block import (mdadm, mkfs, clear_holders, lvm) |
2067 | 21 | from curtin.log import LOG | 21 | from curtin.log import LOG |
2068 | 22 | from curtin.block import mkfs | ||
2069 | 23 | from curtin.reporter import events | 22 | from curtin.reporter import events |
2070 | 24 | 23 | ||
2071 | 25 | from . import populate_one_subcmd | 24 | from . import populate_one_subcmd |
2072 | @@ -28,7 +27,7 @@ | |||
2073 | 28 | import glob | 27 | import glob |
2074 | 29 | import os | 28 | import os |
2075 | 30 | import platform | 29 | import platform |
2077 | 31 | import re | 30 | import string |
2078 | 32 | import sys | 31 | import sys |
2079 | 33 | import tempfile | 32 | import tempfile |
2080 | 34 | import time | 33 | import time |
2081 | @@ -129,128 +128,6 @@ | |||
2082 | 129 | return "mbr" | 128 | return "mbr" |
2083 | 130 | 129 | ||
2084 | 131 | 130 | ||
2085 | 132 | def block_find_sysfs_path(devname): | ||
2086 | 133 | # return the path in sys for device named devname | ||
2087 | 134 | # support either short name ('sda') or full path /dev/sda | ||
2088 | 135 | # sda -> /sys/class/block/sda | ||
2089 | 136 | # sda1 -> /sys/class/block/sda/sda1 | ||
2090 | 137 | if not devname: | ||
2091 | 138 | raise ValueError("empty devname provided to find_sysfs_path") | ||
2092 | 139 | |||
2093 | 140 | sys_class_block = '/sys/class/block/' | ||
2094 | 141 | basename = os.path.basename(devname) | ||
2095 | 142 | # try without parent blockdevice, then prepend parent | ||
2096 | 143 | paths = [ | ||
2097 | 144 | os.path.join(sys_class_block, basename), | ||
2098 | 145 | os.path.join(sys_class_block, | ||
2099 | 146 | re.split('[\d+]', basename)[0], basename), | ||
2100 | 147 | ] | ||
2101 | 148 | |||
2102 | 149 | # find path to devname directory in sysfs | ||
2103 | 150 | devname_sysfs = None | ||
2104 | 151 | for path in paths: | ||
2105 | 152 | if os.path.exists(path): | ||
2106 | 153 | devname_sysfs = path | ||
2107 | 154 | |||
2108 | 155 | if devname_sysfs is None: | ||
2109 | 156 | err = ('No sysfs path to device:' | ||
2110 | 157 | ' {}'.format(devname_sysfs)) | ||
2111 | 158 | LOG.error(err) | ||
2112 | 159 | raise ValueError(err) | ||
2113 | 160 | |||
2114 | 161 | return devname_sysfs | ||
2115 | 162 | |||
2116 | 163 | |||
2117 | 164 | def get_holders(devname): | ||
2118 | 165 | # Look up any block device holders. | ||
2119 | 166 | # Handle devices and partitions as devnames (vdb, md0, vdb7) | ||
2120 | 167 | devname_sysfs = block_find_sysfs_path(devname) | ||
2121 | 168 | if devname_sysfs: | ||
2122 | 169 | holders = os.listdir(os.path.join(devname_sysfs, 'holders')) | ||
2123 | 170 | LOG.debug("devname '%s' had holders: %s", devname, ','.join(holders)) | ||
2124 | 171 | return holders | ||
2125 | 172 | |||
2126 | 173 | LOG.debug('get_holders: did not find sysfs path for %s', devname) | ||
2127 | 174 | return [] | ||
2128 | 175 | |||
2129 | 176 | |||
2130 | 177 | def clear_holders(sys_block_path): | ||
2131 | 178 | holders = os.listdir(os.path.join(sys_block_path, "holders")) | ||
2132 | 179 | LOG.info("clear_holders running on '%s', with holders '%s'" % | ||
2133 | 180 | (sys_block_path, holders)) | ||
2134 | 181 | for holder in holders: | ||
2135 | 182 | # get path to holder in /sys/block, then clear it | ||
2136 | 183 | try: | ||
2137 | 184 | holder_realpath = os.path.realpath( | ||
2138 | 185 | os.path.join(sys_block_path, "holders", holder)) | ||
2139 | 186 | clear_holders(holder_realpath) | ||
2140 | 187 | except IOError as e: | ||
2141 | 188 | # something might have already caused the holder to go away | ||
2142 | 189 | if util.is_file_not_found_exc(e): | ||
2143 | 190 | pass | ||
2144 | 191 | pass | ||
2145 | 192 | |||
2146 | 193 | # detect what type of holder is using this volume and shut it down, need to | ||
2147 | 194 | # find more robust name of doing detection | ||
2148 | 195 | if "bcache" in sys_block_path: | ||
2149 | 196 | # bcache device | ||
2150 | 197 | part_devs = [] | ||
2151 | 198 | for part_dev in glob.glob(os.path.join(sys_block_path, | ||
2152 | 199 | "slaves", "*", "dev")): | ||
2153 | 200 | with open(part_dev, "r") as fp: | ||
2154 | 201 | part_dev_id = fp.read().rstrip() | ||
2155 | 202 | part_devs.append( | ||
2156 | 203 | os.path.split(os.path.realpath(os.path.join("/dev/block", | ||
2157 | 204 | part_dev_id)))[-1]) | ||
2158 | 205 | for cache_dev in glob.glob("/sys/fs/bcache/*/bdev*"): | ||
2159 | 206 | for part_dev in part_devs: | ||
2160 | 207 | if part_dev in os.path.realpath(cache_dev): | ||
2161 | 208 | # This is our bcache device, stop it, wait for udev to | ||
2162 | 209 | # settle | ||
2163 | 210 | with open(os.path.join(os.path.split(cache_dev)[0], | ||
2164 | 211 | "stop"), "w") as fp: | ||
2165 | 212 | LOG.info("stopping: %s" % fp) | ||
2166 | 213 | fp.write("1") | ||
2167 | 214 | udevadm_settle() | ||
2168 | 215 | break | ||
2169 | 216 | for part_dev in part_devs: | ||
2170 | 217 | block.wipe_volume(os.path.join("/dev", part_dev), | ||
2171 | 218 | mode="superblock") | ||
2172 | 219 | |||
2173 | 220 | if os.path.exists(os.path.join(sys_block_path, "bcache")): | ||
2174 | 221 | # bcache device that isn't running, if it were, we would have found it | ||
2175 | 222 | # when we looked for holders | ||
2176 | 223 | try: | ||
2177 | 224 | with open(os.path.join(sys_block_path, "bcache", "set", "stop"), | ||
2178 | 225 | "w") as fp: | ||
2179 | 226 | LOG.info("stopping: %s" % fp) | ||
2180 | 227 | fp.write("1") | ||
2181 | 228 | except IOError as e: | ||
2182 | 229 | if not util.is_file_not_found_exc(e): | ||
2183 | 230 | raise e | ||
2184 | 231 | with open(os.path.join(sys_block_path, "bcache", "stop"), | ||
2185 | 232 | "w") as fp: | ||
2186 | 233 | LOG.info("stopping: %s" % fp) | ||
2187 | 234 | fp.write("1") | ||
2188 | 235 | udevadm_settle() | ||
2189 | 236 | |||
2190 | 237 | if os.path.exists(os.path.join(sys_block_path, "md")): | ||
2191 | 238 | # md device | ||
2192 | 239 | block_dev = os.path.join("/dev/", os.path.split(sys_block_path)[-1]) | ||
2193 | 240 | # if these fail its okay, the array might not be assembled and thats | ||
2194 | 241 | # fine | ||
2195 | 242 | mdadm.mdadm_stop(block_dev) | ||
2196 | 243 | mdadm.mdadm_remove(block_dev) | ||
2197 | 244 | |||
2198 | 245 | elif os.path.exists(os.path.join(sys_block_path, "dm")): | ||
2199 | 246 | # Shut down any volgroups | ||
2200 | 247 | with open(os.path.join(sys_block_path, "dm", "name"), "r") as fp: | ||
2201 | 248 | name = fp.read().split('-') | ||
2202 | 249 | util.subp(["lvremove", "--force", name[0].rstrip(), name[1].rstrip()], | ||
2203 | 250 | rcs=[0, 5]) | ||
2204 | 251 | util.subp(["vgremove", name[0].rstrip()], rcs=[0, 5, 6]) | ||
2205 | 252 | |||
2206 | 253 | |||
2207 | 254 | def devsync(devpath): | 131 | def devsync(devpath): |
2208 | 255 | LOG.debug('devsync for %s', devpath) | 132 | LOG.debug('devsync for %s', devpath) |
2209 | 256 | util.subp(['partprobe', devpath], rcs=[0, 1]) | 133 | util.subp(['partprobe', devpath], rcs=[0, 1]) |
2210 | @@ -265,14 +142,6 @@ | |||
2211 | 265 | raise OSError('Failed to find device at path: %s', devpath) | 142 | raise OSError('Failed to find device at path: %s', devpath) |
2212 | 266 | 143 | ||
2213 | 267 | 144 | ||
2214 | 268 | def determine_partition_kname(disk_kname, partition_number): | ||
2215 | 269 | for dev_type in ["nvme", "mmcblk"]: | ||
2216 | 270 | if disk_kname.startswith(dev_type): | ||
2217 | 271 | partition_number = "p%s" % partition_number | ||
2218 | 272 | break | ||
2219 | 273 | return "%s%s" % (disk_kname, partition_number) | ||
2220 | 274 | |||
2221 | 275 | |||
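The helper removed here (its role now filled by `block.partition_kname`) encodes the kernel naming convention in which nvme and mmcblk devices insert a `p` before the partition number. As a standalone sketch:

```python
def partition_kname(disk_kname, partition_number):
    """sda + 1 -> sda1, but nvme0n1 + 1 -> nvme0n1p1."""
    for dev_type in ("nvme", "mmcblk"):
        if disk_kname.startswith(dev_type):
            # these device types separate disk and partition with 'p'
            return "%sp%s" % (disk_kname, partition_number)
    return "%s%s" % (disk_kname, partition_number)

print(partition_kname("sda", 1))       # sda1
print(partition_kname("nvme0n1", 2))   # nvme0n1p2
print(partition_kname("mmcblk0", 1))   # mmcblk0p1
```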
2222 | 276 | def determine_partition_number(partition_id, storage_config): | 145 | def determine_partition_number(partition_id, storage_config): |
2223 | 277 | vol = storage_config.get(partition_id) | 146 | vol = storage_config.get(partition_id) |
2224 | 278 | partnumber = vol.get('number') | 147 | partnumber = vol.get('number') |
2225 | @@ -304,6 +173,18 @@ | |||
2226 | 304 | return partnumber | 173 | return partnumber |
2227 | 305 | 174 | ||
2228 | 306 | 175 | ||
2229 | 176 | def sanitize_dname(dname): | ||
2230 | 177 | """ | ||
2231 | 178 | dnames should be sanitized before writing rule files, in case maas has | ||
2232 | 179 | emitted a dname with a special character | ||
2233 | 180 | |||
2234 | 181 | only letters, numbers and '-' and '_' are permitted, as this will be | ||
2235 | 182 | used for a device path. spaces are also not permitted | ||
2236 | 183 | """ | ||
2237 | 184 | valid = string.digits + string.ascii_letters + '-_' | ||
2238 | 185 | return ''.join(c if c in valid else '-' for c in dname) | ||
2239 | 186 | |||
2240 | 187 | |||
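`sanitize_dname` above maps every character outside `[A-Za-z0-9_-]` to `-`, so a MAAS-supplied name containing spaces or punctuation still yields a valid `/dev/disk/by-dname/` symlink. For example:

```python
import string

def sanitize_dname(dname):
    # only letters, digits, '-' and '_' survive; everything else -> '-'
    valid = string.digits + string.ascii_letters + '-_'
    return ''.join(c if c in valid else '-' for c in dname)

print(sanitize_dname('my disk #1'))   # my-disk--1
print(sanitize_dname('root_ssd'))     # root_ssd  (already valid)
```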
2241 | 307 | def make_dname(volume, storage_config): | 188 | def make_dname(volume, storage_config): |
2242 | 308 | state = util.load_command_environment() | 189 | state = util.load_command_environment() |
2243 | 309 | rules_dir = os.path.join(state['scratch'], "rules.d") | 190 | rules_dir = os.path.join(state['scratch'], "rules.d") |
2244 | @@ -321,7 +202,7 @@ | |||
2245 | 321 | # we may not always be able to find a uniq identifier on devices with names | 202 | # we may not always be able to find a uniq identifier on devices with names |
2246 | 322 | if not ptuuid and vol.get('type') in ["disk", "partition"]: | 203 | if not ptuuid and vol.get('type') in ["disk", "partition"]: |
2247 | 323 | LOG.warning("Can't find a uuid for volume: {}. Skipping dname.".format( | 204 | LOG.warning("Can't find a uuid for volume: {}. Skipping dname.".format( |
2249 | 324 | dname)) | 205 | volume)) |
2250 | 325 | return | 206 | return |
2251 | 326 | 207 | ||
2252 | 327 | rule = [ | 208 | rule = [ |
2253 | @@ -346,11 +227,24 @@ | |||
2254 | 346 | volgroup_name = storage_config.get(vol.get('volgroup')).get('name') | 227 | volgroup_name = storage_config.get(vol.get('volgroup')).get('name') |
2255 | 347 | dname = "%s-%s" % (volgroup_name, dname) | 228 | dname = "%s-%s" % (volgroup_name, dname) |
2256 | 348 | rule.append(compose_udev_equality("ENV{DM_NAME}", dname)) | 229 | rule.append(compose_udev_equality("ENV{DM_NAME}", dname)) |
2258 | 349 | rule.append("SYMLINK+=\"disk/by-dname/%s\"" % dname) | 230 | else: |
2259 | 231 | raise ValueError('cannot make dname for device with type: {}' | ||
2260 | 232 | .format(vol.get('type'))) | ||
2261 | 233 | |||
2262 | 234 | # note: this sanitization is done here instead of for all name attributes | ||
2263 | 235 | # at the beginning of storage configuration, as some devices, such as | ||
2264 | 236 | # lvm devices may use the name attribute and may permit special chars | ||
2265 | 237 | sanitized = sanitize_dname(dname) | ||
2266 | 238 | if sanitized != dname: | ||
2267 | 239 | LOG.warning( | ||
2268 | 240 | "dname modified to remove invalid chars. old: '{}' new: '{}'" | ||
2269 | 241 | .format(dname, sanitized)) | ||
2270 | 242 | |||
2271 | 243 | rule.append("SYMLINK+=\"disk/by-dname/%s\"" % sanitized) | ||
2272 | 350 | LOG.debug("Writing dname udev rule '{}'".format(str(rule))) | 244 | LOG.debug("Writing dname udev rule '{}'".format(str(rule))) |
2273 | 351 | util.ensure_dir(rules_dir) | 245 | util.ensure_dir(rules_dir) |
2276 | 352 | with open(os.path.join(rules_dir, volume), "w") as fp: | 246 | rule_file = os.path.join(rules_dir, '{}.rules'.format(sanitized)) |
2277 | 353 | fp.write(', '.join(rule)) | 247 | util.write_file(rule_file, ', '.join(rule)) |
2278 | 354 | 248 | ||
2279 | 355 | 249 | ||
2280 | 356 | def get_path_to_storage_volume(volume, storage_config): | 250 | def get_path_to_storage_volume(volume, storage_config): |
2281 | @@ -368,9 +262,9 @@ | |||
2282 | 368 | partnumber = determine_partition_number(vol.get('id'), storage_config) | 262 | partnumber = determine_partition_number(vol.get('id'), storage_config) |
2283 | 369 | disk_block_path = get_path_to_storage_volume(vol.get('device'), | 263 | disk_block_path = get_path_to_storage_volume(vol.get('device'), |
2284 | 370 | storage_config) | 264 | storage_config) |
2288 | 371 | (base_path, disk_kname) = os.path.split(disk_block_path) | 265 | disk_kname = block.path_to_kname(disk_block_path) |
2289 | 372 | partition_kname = determine_partition_kname(disk_kname, partnumber) | 266 | partition_kname = block.partition_kname(disk_kname, partnumber) |
2290 | 373 | volume_path = os.path.join(base_path, partition_kname) | 267 | volume_path = block.kname_to_path(partition_kname) |
2291 | 374 | devsync_vol = os.path.join(disk_block_path) | 268 | devsync_vol = os.path.join(disk_block_path) |
2292 | 375 | 269 | ||
2293 | 376 | elif vol.get('type') == "disk": | 270 | elif vol.get('type') == "disk": |
2294 | @@ -419,13 +313,15 @@ | |||
2295 | 419 | # block devs are in the slaves dir there. Then, those blockdevs can be | 313 | # block devs are in the slaves dir there. Then, those blockdevs can be |
2296 | 420 | # checked against the kname of the devs in the config for the desired | 314 | # checked against the kname of the devs in the config for the desired |
2297 | 421 | # bcache device. This is not very elegant though | 315 | # bcache device. This is not very elegant though |
2300 | 422 | backing_device_kname = os.path.split(get_path_to_storage_volume( | 316 | backing_device_path = get_path_to_storage_volume( |
2301 | 423 | vol.get('backing_device'), storage_config))[-1] | 317 | vol.get('backing_device'), storage_config) |
2302 | 318 | backing_device_kname = block.path_to_kname(backing_device_path) | ||
2303 | 424 | sys_path = list(filter(lambda x: backing_device_kname in x, | 319 | sys_path = list(filter(lambda x: backing_device_kname in x, |
2304 | 425 | glob.glob("/sys/block/bcache*/slaves/*")))[0] | 320 | glob.glob("/sys/block/bcache*/slaves/*")))[0] |
2305 | 426 | while "bcache" not in os.path.split(sys_path)[-1]: | 321 | while "bcache" not in os.path.split(sys_path)[-1]: |
2306 | 427 | sys_path = os.path.split(sys_path)[0] | 322 | sys_path = os.path.split(sys_path)[0] |
2308 | 428 | volume_path = os.path.join("/dev", os.path.split(sys_path)[-1]) | 323 | bcache_kname = block.path_to_kname(sys_path) |
2309 | 324 | volume_path = block.kname_to_path(bcache_kname) | ||
2310 | 429 | LOG.debug('got bcache volume path {}'.format(volume_path)) | 325 | LOG.debug('got bcache volume path {}'.format(volume_path)) |
2311 | 430 | 326 | ||
2312 | 431 | else: | 327 | else: |
2313 | @@ -442,62 +338,35 @@ | |||
2314 | 442 | 338 | ||
2315 | 443 | 339 | ||
2316 | 444 | def disk_handler(info, storage_config): | 340 | def disk_handler(info, storage_config): |
2317 | 341 | _dos_names = ['dos', 'msdos'] | ||
2318 | 445 | ptable = info.get('ptable') | 342 | ptable = info.get('ptable') |
2319 | 446 | |||
2320 | 447 | disk = get_path_to_storage_volume(info.get('id'), storage_config) | 343 | disk = get_path_to_storage_volume(info.get('id'), storage_config) |
2321 | 448 | 344 | ||
2374 | 449 | # Handle preserve flag | 345 | if config.value_as_boolean(info.get('preserve')): |
2375 | 450 | if info.get('preserve'): | 346 | # Handle preserve flag, verifying if ptable specified in config |
2376 | 451 | if not ptable: | 347 | if config.value_as_boolean(ptable): |
2377 | 452 | # Don't need to check state, return | 348 | current_ptable = block.get_part_table_type(disk) |
2378 | 453 | return | 349 | if not ((ptable in _dos_names and current_ptable in _dos_names) or |
2379 | 454 | 350 | (ptable == 'gpt' and current_ptable == 'gpt')): | |
2380 | 455 | # Check state of current ptable | 351 | raise ValueError( |
2381 | 456 | try: | 352 | "disk '%s' does not have correct partition table or " |
2382 | 457 | (out, _err) = util.subp(["blkid", "-o", "export", disk], | 353 | "cannot be read, but preserve is set to true. " |
2383 | 458 | capture=True) | 354 | "cannot continue installation." % info.get('id')) |
2384 | 459 | except util.ProcessExecutionError: | 355 | LOG.info("disk '%s' marked to be preserved, so keeping partition " |
2385 | 460 | raise ValueError("disk '%s' has no readable partition table or \ | 356 | "table" % disk) |
-                             cannot be accessed, but preserve is set to true, so cannot \
-                             continue")
-        current_ptable = list(filter(lambda x: "PTTYPE" in x,
-                                     out.splitlines()))[0].split("=")[-1]
-        if current_ptable == "dos" and ptable != "msdos" or \
-                current_ptable == "gpt" and ptable != "gpt":
-            raise ValueError("disk '%s' does not have correct \
-                partition table, but preserve is set to true, so not \
-                creating table, so not creating table." % info.get('id'))
-        LOG.info("disk '%s' marked to be preserved, so keeping partition \
-                 table")
-        return
-
-    # Wipe the disk
-    if info.get('wipe') and info.get('wipe') != "none":
-        # The disk has a lable, clear all partitions
-        mdadm.mdadm_assemble(scan=True)
-        disk_kname = os.path.split(disk)[-1]
-        syspath_partitions = list(
-            os.path.split(prt)[0] for prt in
-            glob.glob("/sys/block/%s/*/partition" % disk_kname))
-        for partition in syspath_partitions:
-            clear_holders(partition)
-            with open(os.path.join(partition, "dev"), "r") as fp:
-                block_no = fp.read().rstrip()
-            partition_path = os.path.realpath(
-                os.path.join("/dev/block", block_no))
-            block.wipe_volume(partition_path, mode=info.get('wipe'))
-
-        clear_holders("/sys/block/%s" % disk_kname)
-        block.wipe_volume(disk, mode=info.get('wipe'))
-
-    # Create partition table on disk
-    if info.get('ptable'):
-        LOG.info("labeling device: '%s' with '%s' partition table", disk,
-                 ptable)
-        if ptable == "gpt":
-            util.subp(["sgdisk", "--clear", disk])
-        elif ptable == "msdos":
-            util.subp(["parted", disk, "--script", "mklabel", "msdos"])
+    else:
+        # wipe the disk and create the partition table if instructed to do so
+        if config.value_as_boolean(info.get('wipe')):
+            block.wipe_volume(disk, mode=info.get('wipe'))
+        if config.value_as_boolean(ptable):
+            LOG.info("labeling device: '%s' with '%s' partition table", disk,
+                     ptable)
+            if ptable == "gpt":
+                util.subp(["sgdisk", "--clear", disk])
+            elif ptable in _dos_names:
+                util.subp(["parted", disk, "--script", "mklabel", "msdos"])
+            else:
+                raise ValueError('invalid partition table type: %s', ptable)
 
     # Make the name if needed
     if info.get('name'):
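The rewritten disk_handler dispatches on the table type: GPT labels go through `sgdisk --clear`, DOS-style names through `parted ... mklabel msdos`. A minimal, side-effect-free sketch of that dispatch (the `_dos_names` membership test and command shapes are taken from the hunk; returning the argv instead of executing it is my addition, so nothing destructive runs):

```python
_dos_names = ['dos', 'msdos']


def ptable_command(disk, ptable):
    """Return the argv that would write a fresh partition table,
    mirroring the dispatch in the new disk_handler code."""
    if ptable == 'gpt':
        return ['sgdisk', '--clear', disk]
    elif ptable in _dos_names:
        return ['parted', disk, '--script', 'mklabel', 'msdos']
    raise ValueError('invalid partition table type: %s' % ptable)
```

In curtin itself the returned argv would be handed to `util.subp`.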
@@ -542,13 +411,12 @@
 
     disk = get_path_to_storage_volume(device, storage_config)
     partnumber = determine_partition_number(info.get('id'), storage_config)
-
-    disk_kname = os.path.split(
-        get_path_to_storage_volume(device, storage_config))[-1]
+    disk_kname = block.path_to_kname(disk)
+    disk_sysfs_path = block.sys_block_path(disk)
     # consider the disks logical sector size when calculating sectors
     try:
-        prefix = "/sys/block/%s/queue/" % disk_kname
-        with open(prefix + "logical_block_size", "r") as f:
+        lbs_path = os.path.join(disk_sysfs_path, 'queue', 'logical_block_size')
+        with open(lbs_path, 'r') as f:
             l = f.readline()
             logical_block_size_bytes = int(l)
     except:
@@ -566,17 +434,14 @@
             extended_part_no = determine_partition_number(
                 key, storage_config)
             break
-        partition_kname = determine_partition_kname(
-            disk_kname, extended_part_no)
-        previous_partition = "/sys/block/%s/%s/" % \
-            (disk_kname, partition_kname)
+        pnum = extended_part_no
     else:
         pnum = find_previous_partition(device, info['id'], storage_config)
-        LOG.debug("previous partition number for '%s' found to be '%s'",
-                  info.get('id'), pnum)
-        partition_kname = determine_partition_kname(disk_kname, pnum)
-        previous_partition = "/sys/block/%s/%s/" % \
-            (disk_kname, partition_kname)
+
+    LOG.debug("previous partition number for '%s' found to be '%s'",
+              info.get('id'), pnum)
+    partition_kname = block.partition_kname(disk_kname, pnum)
+    previous_partition = os.path.join(disk_sysfs_path, partition_kname)
     LOG.debug("previous partition: {}".format(previous_partition))
     # XXX: sys/block/X/{size,start} is *ALWAYS* in 512b value
     previous_size = util.load_file(os.path.join(previous_partition,
@@ -629,9 +494,9 @@
     length_sectors = length_sectors + (logdisks * alignment_offset)
 
     # Handle preserve flag
-    if info.get('preserve'):
+    if config.value_as_boolean(info.get('preserve')):
         return
-    elif storage_config.get(device).get('preserve'):
+    elif config.value_as_boolean(storage_config.get(device).get('preserve')):
         raise NotImplementedError("Partition '%s' is not marked to be \
             preserved, but device '%s' is. At this time, preserving devices \
             but not also the partitions on the devices is not supported, \
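Several hunks in this patch replace plain truthiness checks like `info.get('preserve')` with `config.value_as_boolean(...)`. The helper's body is not part of this diff; the sketch below shows plausible semantics (treating false-y strings such as `'false'` and `'none'` as False), which is what makes YAML storage configs carrying string values behave sanely:

```python
def value_as_boolean(value):
    """Interpret a config value as a boolean, treating common
    false-y strings as False.  (Assumed semantics; the real
    curtin.config helper is not shown in this diff.)"""
    false_values = ('false', 'None', 'none', '', '0', 0, False, None)
    return value not in false_values
```

With plain `bool()`, a config carrying `preserve: 'false'` from YAML would count as truthy; this helper avoids that class of bug.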
@@ -674,11 +539,16 @@
     else:
         raise ValueError("parent partition has invalid partition table")
 
-    # Wipe the partition if told to do so
-    if info.get('wipe') and info.get('wipe') != "none":
-        block.wipe_volume(
-            get_path_to_storage_volume(info.get('id'), storage_config),
-            mode=info.get('wipe'))
+    # Wipe the partition if told to do so, do not wipe dos extended partitions
+    # as this may damage the extended partition table
+    if config.value_as_boolean(info.get('wipe')):
+        if info.get('flag') == "extended":
+            LOG.warn("extended partitions do not need wiping, so skipping: "
+                     "'%s'" % info.get('id'))
+        else:
+            block.wipe_volume(
+                get_path_to_storage_volume(info.get('id'), storage_config),
+                mode=info.get('wipe'))
     # Make the name if needed
     if storage_config.get(device).get('name') and partition_type != 'extended':
         make_dname(info.get('id'), storage_config)
@@ -694,7 +564,7 @@
     volume_path = get_path_to_storage_volume(volume, storage_config)
 
     # Handle preserve flag
-    if info.get('preserve'):
+    if config.value_as_boolean(info.get('preserve')):
         # Volume marked to be preserved, not formatting
         return
 
@@ -776,26 +646,21 @@
                                                  storage_config))
 
     # Handle preserve flag
-    if info.get('preserve'):
+    if config.value_as_boolean(info.get('preserve')):
         # LVM will probably be offline, so start it
         util.subp(["vgchange", "-a", "y"])
         # Verify that volgroup exists and contains all specified devices
-        current_paths = []
-        (out, _err) = util.subp(["pvdisplay", "-C", "--separator", "=", "-o",
-                                 "vg_name,pv_name", "--noheadings"],
-                                capture=True)
-        for line in out.splitlines():
-            if name in line:
-                current_paths.append(line.split("=")[-1])
-        if set(current_paths) != set(device_paths):
-            raise ValueError("volgroup '%s' marked to be preserved, but does \
-                             not exist or does not contain the right physical \
-                             volumes" % info.get('id'))
+        if set(lvm.get_pvols_in_volgroup(name)) != set(device_paths):
+            raise ValueError("volgroup '%s' marked to be preserved, but does "
+                             "not exist or does not contain the right "
+                             "physical volumes" % info.get('id'))
     else:
         # Create vgrcreate command and run
-        cmd = ["vgcreate", name]
-        cmd.extend(device_paths)
-        util.subp(cmd)
+        # capture output to avoid printing it to log
+        util.subp(['vgcreate', name] + device_paths, capture=True)
+
+        # refresh lvmetad
+        lvm.lvm_scan()
 
 
 def lvm_partition_handler(info, storage_config):
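The preserve path for volume groups used to parse `pvdisplay` output inline with a substring match (`if name in line`), which can mismatch when one VG name is a substring of another (`vg0` vs `vg01`); the diff swaps it for `lvm.get_pvols_in_volgroup`. A sketch of the safer exact-match parse over the same `pvdisplay -C --separator = -o vg_name,pv_name --noheadings` output format (the helper's real body lives in the new curtin/block/lvm.py and is not shown in this chunk):

```python
def pvols_in_volgroup(pvdisplay_output, volgroup):
    """Collect the physical volumes of one volume group from
    'vg_name=pv_name' lines, matching the VG name exactly."""
    pvols = []
    for line in pvdisplay_output.splitlines():
        fields = line.strip().split('=')
        if len(fields) == 2 and fields[0] == volgroup:
            pvols.append(fields[1])
    return pvols
```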
@@ -805,28 +670,23 @@
         raise ValueError("lvm volgroup for lvm partition must be specified")
     if not name:
         raise ValueError("lvm partition name must be specified")
+    if info.get('ptable'):
+        raise ValueError("Partition tables on top of lvm logical volumes is "
+                         "not supported")
 
     # Handle preserve flag
-    if info.get('preserve'):
-        (out, _err) = util.subp(["lvdisplay", "-C", "--separator", "=", "-o",
-                                 "lv_name,vg_name", "--noheadings"],
-                                capture=True)
-        found = False
-        for line in out.splitlines():
-            if name in line:
-                if volgroup == line.split("=")[-1]:
-                    found = True
-                    break
-        if not found:
-            raise ValueError("lvm partition '%s' marked to be preserved, but \
-                             does not exist or does not mach storage \
-                             configuration" % info.get('id'))
+    if config.value_as_boolean(info.get('preserve')):
+        if name not in lvm.get_lvols_in_volgroup(volgroup):
+            raise ValueError("lvm partition '%s' marked to be preserved, but "
+                             "does not exist or does not mach storage "
+                             "configuration" % info.get('id'))
     elif storage_config.get(info.get('volgroup')).get('preserve'):
-        raise NotImplementedError("Lvm Partition '%s' is not marked to be \
-            preserved, but volgroup '%s' is. At this time, preserving \
-            volgroups but not also the lvm partitions on the volgroup is \
-            not supported, because of the possibility of damaging lvm \
-            partitions intended to be preserved." % (info.get('id'), volgroup))
+        raise NotImplementedError(
+            "Lvm Partition '%s' is not marked to be preserved, but volgroup "
+            "'%s' is. At this time, preserving volgroups but not also the lvm "
+            "partitions on the volgroup is not supported, because of the "
+            "possibility of damaging lvm partitions intended to be "
+            "preserved." % (info.get('id'), volgroup))
     else:
         cmd = ["lvcreate", volgroup, "-n", name]
         if info.get('size'):
@@ -836,9 +696,8 @@
 
     util.subp(cmd)
 
-    if info.get('ptable'):
-        raise ValueError("Partition tables on top of lvm logical volumes is \
-            not supported")
+    # refresh lvmetad
+    lvm.lvm_scan()
 
     make_dname(info.get('id'), storage_config)
 
@@ -925,7 +784,7 @@
                         zip(spare_devices, spare_device_paths)))
 
     # Handle preserve flag
-    if info.get('preserve'):
+    if config.value_as_boolean(info.get('preserve')):
         # check if the array is already up, if not try to assemble
         if not mdadm.md_check(md_devname, raidlevel,
                               device_paths, spare_device_paths):
@@ -981,9 +840,6 @@
         raise ValueError("backing device and cache device for bcache"
                          " must be specified")
 
-    # The bcache module is not loaded when bcache is installed by apt-get, so
-    # we will load it now
-    util.subp(["modprobe", "bcache"])
     bcache_sysfs = "/sys/fs/bcache"
     udevadm_settle(exists=bcache_sysfs)
 
@@ -1003,7 +859,7 @@
                       bcache_device, expected)
             return
         LOG.debug('bcache device path not found: %s', expected)
-        local_holders = get_holders(bcache_device)
+        local_holders = clear_holders.get_holders(bcache_device)
         LOG.debug('got initial holders being "%s"', local_holders)
         if len(local_holders) == 0:
             raise ValueError("holders == 0 , expected non-zero")
@@ -1033,7 +889,7 @@
 
     if cache_device:
         # /sys/class/block/XXX/YYY/
-        cache_device_sysfs = block_find_sysfs_path(cache_device)
+        cache_device_sysfs = block.sys_block_path(cache_device)
 
         if os.path.exists(os.path.join(cache_device_sysfs, "bcache")):
             LOG.debug('caching device already exists at {}/bcache. Read '
@@ -1058,7 +914,7 @@
         ensure_bcache_is_registered(cache_device, target_sysfs_path)
 
     if backing_device:
-        backing_device_sysfs = block_find_sysfs_path(backing_device)
+        backing_device_sysfs = block.sys_block_path(backing_device)
         target_sysfs_path = os.path.join(backing_device_sysfs, "bcache")
         if not os.path.exists(os.path.join(backing_device_sysfs, "bcache")):
             util.subp(["make-bcache", "-B", backing_device])
@@ -1066,7 +922,7 @@
 
     # via the holders we can identify which bcache device we just created
     # for a given backing device
-    holders = get_holders(backing_device)
+    holders = clear_holders.get_holders(backing_device)
     if len(holders) != 1:
         err = ('Invalid number {} of holding devices:'
                ' "{}"'.format(len(holders), holders))
@@ -1158,6 +1014,21 @@
     # set up reportstack
     stack_prefix = state.get('report_stack_prefix', '')
 
+    # shut down any already existing storage layers above any disks used in
+    # config that have 'wipe' set
+    with events.ReportEventStack(
+            name=stack_prefix, reporting_enabled=True, level='INFO',
+            description="removing previous storage devices"):
+        clear_holders.start_clear_holders_deps()
+        disk_paths = [get_path_to_storage_volume(k, storage_config_dict)
+                      for (k, v) in storage_config_dict.items()
+                      if v.get('type') == 'disk' and
+                      config.value_as_boolean(v.get('wipe')) and
+                      not config.value_as_boolean(v.get('preserve'))]
+        clear_holders.clear_holders(disk_paths)
+        # if anything was not properly shut down, stop installation
+        clear_holders.assert_clear(disk_paths)
+
     for item_id, command in storage_config_dict.items():
         handler = command_handlers.get(command['type'])
         if not handler:
 
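The new meta_custom preamble tears down existing storage layers only above disks that are both marked for wiping and not preserved. That filter can be isolated as below (with `value_as_boolean` injected as a parameter, since the real helper lives in curtin.config and is not part of this sketch):

```python
def disks_to_clear(storage_config_dict, value_as_boolean=bool):
    """Return ids of disks whose holders must be shut down before
    install: type 'disk', 'wipe' set, and not 'preserve' -- the same
    filter the new meta_custom code applies before clear_holders."""
    return [item_id for item_id, v in storage_config_dict.items()
            if v.get('type') == 'disk' and
            value_as_boolean(v.get('wipe')) and
            not value_as_boolean(v.get('preserve'))]
```

In curtin the matching ids are then resolved to device paths and handed to `clear_holders.clear_holders`, with `assert_clear` aborting the install if anything could not be shut down.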
=== modified file 'curtin/commands/block_wipe.py'
--- curtin/commands/block_wipe.py 2016-05-10 16:13:29 +0000
+++ curtin/commands/block_wipe.py 2016-10-03 18:55:20 +0000
@@ -21,7 +21,6 @@
 
 
 def wipe_main(args):
-    # curtin clear-holders device [device2 [device3]]
     for blockdev in args.devices:
         try:
             block.wipe_volume(blockdev, mode=args.mode)
@@ -36,7 +35,7 @@
 CMD_ARGUMENTS = (
     ((('-m', '--mode'),
       {'help': 'mode for wipe.', 'action': 'store',
-       'default': 'superblocks',
+       'default': 'superblock',
        'choices': ['zero', 'superblock', 'superblock-recursive', 'random']}),
      ('devices',
       {'help': 'devices to wipe', 'default': [], 'nargs': '+'}),
 
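The one-character fix in CMD_ARGUMENTS (`'superblocks'` → `'superblock'`) matters because argparse validates `choices` only for values given on the command line, never for the `default`; the stale default would have reached `block.wipe_volume` unvalidated. A demonstration of that behavior:

```python
import argparse

# Same option shape as block_wipe's CMD_ARGUMENTS.
parser = argparse.ArgumentParser(prog='block-wipe')
parser.add_argument('-m', '--mode', action='store', default='superblock',
                    choices=['zero', 'superblock',
                             'superblock-recursive', 'random'])

args = parser.parse_args([])               # no -m: default used, never checked
assert args.mode == 'superblock'
args = parser.parse_args(['-m', 'zero'])   # explicit values are checked
assert args.mode == 'zero'

# A bad default like the old 'superblocks' slips through silently:
bad = argparse.ArgumentParser(prog='block-wipe')
bad.add_argument('-m', '--mode', default='superblocks',
                 choices=['zero', 'superblock'])
assert bad.parse_args([]).mode == 'superblocks'
```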
=== added file 'curtin/commands/clear_holders.py'
--- curtin/commands/clear_holders.py 1970-01-01 00:00:00 +0000
+++ curtin/commands/clear_holders.py 2016-10-03 18:55:20 +0000
@@ -0,0 +1,48 @@
+# Copyright (C) 2016 Canonical Ltd.
+#
+# Author: Wesley Wiedenmeier <wesley.wiedenmeier@canonical.com>
+#
+# Curtin is free software: you can redistribute it and/or modify it under
+# the terms of the GNU Affero General Public License as published by the
+# Free Software Foundation, either version 3 of the License, or (at your
+# option) any later version.
+#
+# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
+# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public License for
+# more details.
+#
+# You should have received a copy of the GNU Affero General Public License
+# along with Curtin.  If not, see <http://www.gnu.org/licenses/>.
+
+from curtin import block
+from . import populate_one_subcmd
+
+
+def clear_holders_main(args):
+    """
+    wrapper for clear_holders accepting cli args
+    """
+    if (not all(block.is_block_device(device) for device in args.devices) or
+            len(args.devices) == 0):
+        raise ValueError('invalid devices specified')
+    block.clear_holders.start_clear_holders_deps()
+    block.clear_holders.clear_holders(args.devices, try_preserve=args.preserve)
+    if args.try_preserve:
+        print('ran clear_holders attempting to preserve data. however, '
+              'hotplug support for some devices may cause holders to restart ')
+    block.clear_holders.assert_clear(args.devices)
+
+
+CMD_ARGUMENTS = (
+    (('devices',
+      {'help': 'devices to free', 'default': [], 'nargs': '+'}),
+     (('-p', '--preserve'),
+      {'help': 'try to shut down holders without erasing anything',
+       'default': False, 'action': 'store_true'}),
+     )
+)
+
+
+def POPULATE_SUBCMD(parser):
+    populate_one_subcmd(parser, CMD_ARGUMENTS, clear_holders_main)
=== modified file 'curtin/commands/curthooks.py'
--- curtin/commands/curthooks.py 2016-10-03 18:00:41 +0000
+++ curtin/commands/curthooks.py 2016-10-03 18:55:20 +0000
@@ -16,10 +16,8 @@
 # along with Curtin.  If not, see <http://www.gnu.org/licenses/>.
 
 import copy
-import glob
 import os
 import platform
-import re
 import sys
 import shutil
 import textwrap
@@ -30,8 +28,8 @@
 from curtin.log import LOG
 from curtin import swap
 from curtin import util
-from curtin import net
 from curtin.reporter import events
+from curtin.commands import apply_net, apt_config
 
 from . import populate_one_subcmd
 
@@ -90,45 +88,15 @@
              info.get('perms', "0644")))
 
 
-def apt_config(cfg, target):
-    # cfg['apt_proxy']
-
-    proxy_cfg_path = os.path.sep.join(
-        [target, '/etc/apt/apt.conf.d/90curtin-aptproxy'])
-    if cfg.get('apt_proxy'):
-        util.write_file(
-            proxy_cfg_path,
-            content='Acquire::HTTP::Proxy "%s";\n' % cfg['apt_proxy'])
-    else:
-        if os.path.isfile(proxy_cfg_path):
-            os.unlink(proxy_cfg_path)
-
-    # cfg['apt_mirrors']
-    # apt_mirrors:
-    #  ubuntu_archive: http://local.archive/ubuntu
-    #  ubuntu_security: http://local.archive/ubuntu
-    sources_list = os.path.sep.join([target, '/etc/apt/sources.list'])
-    if (isinstance(cfg.get('apt_mirrors'), dict) and
-            os.path.isfile(sources_list)):
-        repls = [
-            ('ubuntu_archive', r'http://\S*[.]*archive.ubuntu.com/\S*'),
-            ('ubuntu_security', r'http://security.ubuntu.com/\S*'),
-        ]
-        content = None
-        for name, regex in repls:
-            mirror = cfg['apt_mirrors'].get(name)
-            if not mirror:
-                continue
-
-            if content is None:
-                with open(sources_list) as fp:
-                    content = fp.read()
-                util.write_file(sources_list + ".dist", content)
-
-            content = re.sub(regex, mirror + " ", content)
-
-        if content is not None:
-            util.write_file(sources_list, content)
+def do_apt_config(cfg, target):
+    cfg = apt_config.translate_old_apt_features(cfg)
+    apt_cfg = cfg.get("apt")
+    if apt_cfg is not None:
+        LOG.info("curthooks handling apt to target %s with config %s",
+                 target, apt_cfg)
+        apt_config.handle_apt(apt_cfg, target)
+    else:
+        LOG.info("No apt config provided, skipping")
 
 
 def disable_overlayroot(cfg, target):
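The removed apt_config() handled the legacy `apt_proxy` key directly by writing an apt.conf.d drop-in; the new code routes legacy keys through `apt_config.translate_old_apt_features` and a dedicated handler instead. For reference, the drop-in the old code produced can be sketched without side effects (path and stanza copied from the removed lines; returning instead of writing is my change):

```python
import os


def apt_proxy_conf(target, apt_proxy):
    """Path and contents of the 90curtin-aptproxy drop-in the
    removed apt_config() wrote for cfg['apt_proxy']."""
    path = os.path.sep.join(
        [target, '/etc/apt/apt.conf.d/90curtin-aptproxy'])
    content = 'Acquire::HTTP::Proxy "%s";\n' % apt_proxy
    return path, content
```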
@@ -140,51 +108,6 @@
         shutil.move(local_conf, local_conf + ".old")
 
 
-def clean_cloud_init(target):
-    flist = glob.glob(
-        os.path.sep.join([target, "/etc/cloud/cloud.cfg.d/*dpkg*"]))
-
-    LOG.debug("cleaning cloud-init config from: %s" % flist)
-    for dpkg_cfg in flist:
-        os.unlink(dpkg_cfg)
-
-
-def _maybe_remove_legacy_eth0(target,
-                              path="/etc/network/interfaces.d/eth0.cfg"):
-    """Ubuntu cloud images previously included a 'eth0.cfg' that had
-    hard coded content. That file would interfere with the rendered
-    configuration if it was present.
-
-    if the file does not exist do nothing.
-    If the file exists:
-      - with known content, remove it and warn
-      - with unknown content, leave it and warn
-    """
-
-    cfg = os.path.sep.join([target, path])
-    if not os.path.exists(cfg):
-        LOG.warn('Failed to find legacy conf file %s', cfg)
-        return
-
-    bmsg = "Dynamic networking config may not apply."
-    try:
-        contents = util.load_file(cfg)
-        known_contents = ["auto eth0", "iface eth0 inet dhcp"]
-        lines = [f.strip() for f in contents.splitlines()
-                 if not f.startswith("#")]
-        if lines == known_contents:
-            util.del_file(cfg)
-            msg = "removed %s with known contents" % cfg
-        else:
-            msg = (bmsg + " '%s' exists with user configured content." % cfg)
-    except:
-        msg = bmsg + " %s exists, but could not be read." % cfg
-        LOG.exception(msg)
-        return
-
-    LOG.warn(msg)
-
-
 def setup_zipl(cfg, target):
     if platform.machine() != 's390x':
         return
@@ -232,8 +155,8 @@
 def run_zipl(cfg, target):
     if platform.machine() != 's390x':
         return
-    with util.RunInChroot(target) as in_chroot:
-        in_chroot(['zipl'])
+    with util.ChrootableTarget(target) as in_chroot:
+        in_chroot.subp(['zipl'])
 
 
 def install_kernel(cfg, target):
2872 | @@ -250,126 +173,45 @@ | |||
2873 | 250 | mapping = copy.deepcopy(KERNEL_MAPPING) | 173 | mapping = copy.deepcopy(KERNEL_MAPPING) |
2874 | 251 | config.merge_config(mapping, kernel_cfg.get('mapping', {})) | 174 | config.merge_config(mapping, kernel_cfg.get('mapping', {})) |
2875 | 252 | 175 | ||
2996 | 253 | with util.RunInChroot(target) as in_chroot: | 176 | if kernel_package: |
2997 | 254 | 177 | util.install_packages([kernel_package], target=target) | |
2998 | 255 | if kernel_package: | 178 | return |
2999 | 256 | util.install_packages([kernel_package], target=target) | 179 | |
3000 | 257 | return | 180 | # uname[2] is kernel name (ie: 3.16.0-7-generic) |
3001 | 258 | 181 | # version gets X.Y.Z, flavor gets anything after second '-'. | |
3002 | 259 | # uname[2] is kernel name (ie: 3.16.0-7-generic) | 182 | kernel = os.uname()[2] |
3003 | 260 | # version gets X.Y.Z, flavor gets anything after second '-'. | 183 | codename, _ = util.subp(['lsb_release', '--codename', '--short'], |
3004 | 261 | kernel = os.uname()[2] | 184 | capture=True, target=target) |
3005 | 262 | codename, err = in_chroot(['lsb_release', '--codename', '--short'], | 185 | codename = codename.strip() |
3006 | 263 | capture=True) | 186 | version, abi, flavor = kernel.split('-', 2) |
3007 | 264 | codename = codename.strip() | 187 | |
3008 | 265 | version, abi, flavor = kernel.split('-', 2) | 188 | try: |
3009 | 266 | 189 | map_suffix = mapping[codename][version] | |
3010 | 267 | try: | 190 | except KeyError: |
3011 | 268 | map_suffix = mapping[codename][version] | 191 | LOG.warn("Couldn't detect kernel package to install for %s." |
3012 | 269 | except KeyError: | 192 | % kernel) |
3013 | 270 | LOG.warn("Couldn't detect kernel package to install for %s." | 193 | if kernel_fallback is not None: |
3014 | 271 | % kernel) | 194 | util.install_packages([kernel_fallback], target=target) |
3015 | 272 | if kernel_fallback is not None: | 195 | return |
3016 | 273 | util.install_packages([kernel_fallback], target=target) | 196 | |
3017 | 274 | return | 197 | package = "linux-{flavor}{map_suffix}".format( |
3018 | 275 | 198 | flavor=flavor, map_suffix=map_suffix) | |
3019 | 276 | package = "linux-{flavor}{map_suffix}".format( | 199 | |
3020 | 277 | flavor=flavor, map_suffix=map_suffix) | 200 | if util.has_pkg_available(package, target): |
3021 | 278 | 201 | if util.has_pkg_installed(package, target): | |
3022 | 279 | if util.has_pkg_available(package, target): | 202 | LOG.debug("Kernel package '%s' already installed", package) |
3023 | 280 | if util.has_pkg_installed(package, target): | 203 | else: |
3024 | 281 | LOG.debug("Kernel package '%s' already installed", package) | 204 | LOG.debug("installing kernel package '%s'", package) |
3025 | 282 | else: | 205 | util.install_packages([package], target=target) |
3026 | 283 | LOG.debug("installing kernel package '%s'", package) | 206 | else: |
3027 | 284 | util.install_packages([package], target=target) | 207 | if kernel_fallback is not None: |
3028 | 285 | else: | 208 | LOG.info("Kernel package '%s' not available. " |
3029 | 286 | if kernel_fallback is not None: | 209 | "Installing fallback package '%s'.", |
3030 | 287 | LOG.info("Kernel package '%s' not available. " | 210 | package, kernel_fallback) |
3031 | 288 | "Installing fallback package '%s'.", | 211 | util.install_packages([kernel_fallback], target=target) |
3032 | 289 | package, kernel_fallback) | 212 | else: |
3033 | 290 | util.install_packages([kernel_fallback], target=target) | 213 | LOG.warn("Kernel package '%s' not available and no fallback." |
3034 | 291 | else: | 214 | " System may not boot.", package) |
2915 | 292 | LOG.warn("Kernel package '%s' not available and no fallback." | ||
2916 | 293 | " System may not boot.", package) | ||
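The kernel-install logic shown above boils down to: build `linux-{flavor}{map_suffix}`, prefer it when available in the archive, skip it when already installed, and otherwise fall back. A minimal sketch of that decision (the helper name `pick_kernel_package` and the sample package sets are hypothetical, not part of curtin):

```python
def pick_kernel_package(flavor, map_suffix, available, installed, fallback=None):
    # Mirrors the selection order above: exact flavor package first,
    # None if it is already installed, else the fallback (may be None).
    package = "linux-{flavor}{map_suffix}".format(
        flavor=flavor, map_suffix=map_suffix)
    if package in available:
        return None if package in installed else package
    return fallback

# Available and not yet installed: install the flavor package.
pkg = pick_kernel_package('generic', '-lts-xenial',
                          available={'linux-generic-lts-xenial'},
                          installed=set())
assert pkg == 'linux-generic-lts-xenial'

# Not available in the archive: use the configured fallback.
pkg = pick_kernel_package('generic', '-lts-xenial',
                          available=set(), installed=set(),
                          fallback='linux-generic')
assert pkg == 'linux-generic'
```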
2917 | 294 | |||
2918 | 295 | |||
2919 | 296 | def apply_debconf_selections(cfg, target): | ||
2920 | 297 | # debconf_selections: | ||
2921 | 298 | # set1: | | ||
2922 | 299 | # cloud-init cloud-init/datasources multiselect MAAS | ||
2923 | 300 | # set2: pkg pkg/value string bar | ||
2924 | 301 | selsets = cfg.get('debconf_selections') | ||
2925 | 302 | if not selsets: | ||
2926 | 303 | LOG.debug("debconf_selections was not set in config") | ||
2927 | 304 | return | ||
2928 | 305 | |||
2929 | 306 | # for each entry in selections, chroot and apply them. | ||
2930 | 307 | # keep a running total of packages we've seen. | ||
2931 | 308 | pkgs_cfgd = set() | ||
2932 | 309 | for key, content in selsets.items(): | ||
2933 | 310 | LOG.debug("setting for %s, %s" % (key, content)) | ||
2934 | 311 | util.subp(['chroot', target, 'debconf-set-selections'], | ||
2935 | 312 | data=content.encode()) | ||
2936 | 313 | for line in content.splitlines(): | ||
2937 | 314 | if line.startswith("#"): | ||
2938 | 315 | continue | ||
2939 | 316 | pkg = re.sub(r"[:\s].*", "", line) | ||
2940 | 317 | pkgs_cfgd.add(pkg) | ||
2941 | 318 | |||
2942 | 319 | pkgs_installed = get_installed_packages(target) | ||
2943 | 320 | |||
2944 | 321 | LOG.debug("pkgs_cfgd: %s" % pkgs_cfgd) | ||
2945 | 322 | LOG.debug("pkgs_installed: %s" % pkgs_installed) | ||
2946 | 323 | need_reconfig = pkgs_cfgd.intersection(pkgs_installed) | ||
2947 | 324 | |||
2948 | 325 | if len(need_reconfig) == 0: | ||
2949 | 326 | LOG.debug("no need for reconfig") | ||
2950 | 327 | return | ||
2951 | 328 | |||
2952 | 329 | # For any packages that are already installed, but have preseed data | ||
2953 | 330 | # we populate the debconf database, but the filesystem configuration | ||
2954 | 331 | # would be preferred on a subsequent dpkg-reconfigure. | ||
2955 | 332 | # so, what we have to do is "know" information about certain packages | ||
2956 | 333 | # to unconfigure them. | ||
2957 | 334 | unhandled = [] | ||
2958 | 335 | to_config = [] | ||
2959 | 336 | for pkg in need_reconfig: | ||
2960 | 337 | if pkg in CONFIG_CLEANERS: | ||
2961 | 338 | LOG.debug("unconfiguring %s" % pkg) | ||
2962 | 339 | CONFIG_CLEANERS[pkg](target) | ||
2963 | 340 | to_config.append(pkg) | ||
2964 | 341 | else: | ||
2965 | 342 | unhandled.append(pkg) | ||
2966 | 343 | |||
2967 | 344 | if len(unhandled): | ||
2968 | 345 | LOG.warn("The following packages were installed and preseeded, " | ||
2969 | 346 | "but cannot be unconfigured: %s", unhandled) | ||
2970 | 347 | |||
2971 | 348 | util.subp(['chroot', target, 'dpkg-reconfigure', | ||
2972 | 349 | '--frontend=noninteractive'] + | ||
2973 | 350 | list(to_config), data=None) | ||
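The package-name extraction in `apply_debconf_selections` relies on the preseed line format `pkg question type value` (with an optional `pkg:arch` qualifier), taking everything up to the first colon or whitespace. A runnable sketch with illustrative preseed content:

```python
import re

# Sample debconf preseed content; comment lines are skipped,
# and the leading token of each remaining line is the package name.
content = """\
# preseed for the installed target
cloud-init cloud-init/datasources multiselect MAAS
grub-pc grub-pc/install_devices string /dev/sda
"""

pkgs_cfgd = set()
for line in content.splitlines():
    if line.startswith("#"):
        continue
    # strip from the first ':' or whitespace onward, leaving the package
    pkg = re.sub(r"[:\s].*", "", line)
    pkgs_cfgd.add(pkg)

assert pkgs_cfgd == {"cloud-init", "grub-pc"}
```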
2974 | 351 | |||
2975 | 352 | |||
2976 | 353 | def get_installed_packages(target=None): | ||
2977 | 354 | cmd = [] | ||
2978 | 355 | if target is not None: | ||
2979 | 356 | cmd = ['chroot', target] | ||
2980 | 357 | cmd.extend(['dpkg-query', '--list']) | ||
2981 | 358 | |||
2982 | 359 | (out, _err) = util.subp(cmd, capture=True) | ||
2983 | 360 | if isinstance(out, bytes): | ||
2984 | 361 | out = out.decode() | ||
2985 | 362 | |||
2986 | 363 | pkgs_inst = set() | ||
2987 | 364 | for line in out.splitlines(): | ||
2988 | 365 | try: | ||
2989 | 366 | (state, pkg, other) = line.split(None, 2) | ||
2990 | 367 | except ValueError: | ||
2991 | 368 | continue | ||
2992 | 369 | if state.startswith("hi") or state.startswith("ii"): | ||
2993 | 370 | pkgs_inst.add(re.sub(":.*", "", pkg)) | ||
2994 | 371 | |||
2995 | 372 | return pkgs_inst | ||
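`get_installed_packages` parses `dpkg-query --list` output: lines whose state column starts with `ii` (installed) or `hi` (hold-installed) count, and any `:arch` qualifier is stripped from the package name. The parsing loop can be exercised directly against illustrative output:

```python
import re

# Illustrative dpkg-query --list output: a header, two installed
# packages (one with an :arch qualifier) and one removed-but-configured.
out = """\
Desired=Unknown/Install/Remove/Purge/Hold
ii  bash               4.3-14ubuntu1  amd64  GNU Bourne Again SHell
ii  libssl1.0.0:amd64  1.0.2g         amd64  SSL shared libraries
rc  oldpkg             1.0            amd64  removed, config remains
"""

pkgs_inst = set()
for line in out.splitlines():
    try:
        (state, pkg, other) = line.split(None, 2)
    except ValueError:
        # header/short lines don't split into three fields; skip them
        continue
    if state.startswith("hi") or state.startswith("ii"):
        pkgs_inst.add(re.sub(":.*", "", pkg))

assert pkgs_inst == {"bash", "libssl1.0.0"}
```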
3035 | 373 | 215 | ||
3036 | 374 | 216 | ||
3037 | 375 | def setup_grub(cfg, target): | 217 | def setup_grub(cfg, target): |
3038 | @@ -498,12 +340,11 @@ | |||
3039 | 498 | util.subp(args + instdevs, env=env) | 340 | util.subp(args + instdevs, env=env) |
3040 | 499 | 341 | ||
3041 | 500 | 342 | ||
3043 | 501 | def update_initramfs(target, all_kernels=False): | 343 | def update_initramfs(target=None, all_kernels=False): |
3044 | 502 | cmd = ['update-initramfs', '-u'] | 344 | cmd = ['update-initramfs', '-u'] |
3045 | 503 | if all_kernels: | 345 | if all_kernels: |
3046 | 504 | cmd.extend(['-k', 'all']) | 346 | cmd.extend(['-k', 'all']) |
3049 | 505 | with util.RunInChroot(target) as in_chroot: | 347 | util.subp(cmd, target=target) |
3048 | 506 | in_chroot(cmd) | ||
3050 | 507 | 348 | ||
3051 | 508 | 349 | ||
3052 | 509 | def copy_fstab(fstab, target): | 350 | def copy_fstab(fstab, target): |
3053 | @@ -533,7 +374,6 @@ | |||
3054 | 533 | 374 | ||
3055 | 534 | 375 | ||
3056 | 535 | def apply_networking(target, state): | 376 | def apply_networking(target, state): |
3057 | 536 | netstate = state.get('network_state') | ||
3058 | 537 | netconf = state.get('network_config') | 377 | netconf = state.get('network_config') |
3059 | 538 | interfaces = state.get('interfaces') | 378 | interfaces = state.get('interfaces') |
3060 | 539 | 379 | ||
3061 | @@ -544,22 +384,13 @@ | |||
3062 | 544 | return True | 384 | return True |
3063 | 545 | return False | 385 | return False |
3064 | 546 | 386 | ||
3075 | 547 | ns = None | 387 | if is_valid_src(netconf): |
3076 | 548 | if is_valid_src(netstate): | 388 | LOG.info("applying network_config") |
3077 | 549 | LOG.debug("applying network_state") | 389 | apply_net.apply_net(target, network_state=None, network_config=netconf) |
3068 | 550 | ns = net.network_state.from_state_file(netstate) | ||
3069 | 551 | elif is_valid_src(netconf): | ||
3070 | 552 | LOG.debug("applying network_config") | ||
3071 | 553 | ns = net.parse_net_config(netconf) | ||
3072 | 554 | |||
3073 | 555 | if ns is not None: | ||
3074 | 556 | net.render_network_state(target=target, network_state=ns) | ||
3078 | 557 | else: | 390 | else: |
3079 | 558 | LOG.debug("copying interfaces") | 391 | LOG.debug("copying interfaces") |
3080 | 559 | copy_interfaces(interfaces, target) | 392 | copy_interfaces(interfaces, target) |
3081 | 560 | 393 | ||
3082 | 561 | _maybe_remove_legacy_eth0(target) | ||
3083 | 562 | |||
3084 | 563 | 394 | ||
3085 | 564 | def copy_interfaces(interfaces, target): | 395 | def copy_interfaces(interfaces, target): |
3086 | 565 | if not interfaces: | 396 | if not interfaces: |
3087 | @@ -704,8 +535,8 @@ | |||
3088 | 704 | 535 | ||
3089 | 705 | # FIXME: this assumes grub. need more generic way to update root= | 536 | # FIXME: this assumes grub. need more generic way to update root= |
3090 | 706 | util.ensure_dir(os.path.sep.join([target, os.path.dirname(grub_dev)])) | 537 | util.ensure_dir(os.path.sep.join([target, os.path.dirname(grub_dev)])) |
3093 | 707 | with util.RunInChroot(target) as in_chroot: | 538 | with util.ChrootableTarget(target) as in_chroot: |
3094 | 708 | in_chroot(['update-grub']) | 539 | in_chroot.subp(['update-grub']) |
3095 | 709 | 540 | ||
3096 | 710 | else: | 541 | else: |
3097 | 711 | LOG.warn("Not sure how this will boot") | 542 | LOG.warn("Not sure how this will boot") |
3098 | @@ -740,7 +571,7 @@ | |||
3099 | 740 | } | 571 | } |
3100 | 741 | 572 | ||
3101 | 742 | needed_packages = [] | 573 | needed_packages = [] |
3103 | 743 | installed_packages = get_installed_packages(target) | 574 | installed_packages = util.get_installed_packages(target) |
3104 | 744 | for cust_cfg, pkg_reqs in custom_configs.items(): | 575 | for cust_cfg, pkg_reqs in custom_configs.items(): |
3105 | 745 | if cust_cfg not in cfg: | 576 | if cust_cfg not in cfg: |
3106 | 746 | continue | 577 | continue |
3107 | @@ -820,7 +651,7 @@ | |||
3108 | 820 | name=stack_prefix, reporting_enabled=True, level="INFO", | 651 | name=stack_prefix, reporting_enabled=True, level="INFO", |
3109 | 821 | description="writing config files and configuring apt"): | 652 | description="writing config files and configuring apt"): |
3110 | 822 | write_files(cfg, target) | 653 | write_files(cfg, target) |
3112 | 823 | apt_config(cfg, target) | 654 | do_apt_config(cfg, target) |
3113 | 824 | disable_overlayroot(cfg, target) | 655 | disable_overlayroot(cfg, target) |
3114 | 825 | 656 | ||
3115 | 826 | # packages may be needed prior to installing kernel | 657 | # packages may be needed prior to installing kernel |
3116 | @@ -834,8 +665,8 @@ | |||
3117 | 834 | copy_mdadm_conf(mdadm_location, target) | 665 | copy_mdadm_conf(mdadm_location, target) |
3118 | 835 | # as per https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/964052 | 666 | # as per https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/964052 |
3119 | 836 | # reconfigure mdadm | 667 | # reconfigure mdadm |
3122 | 837 | util.subp(['chroot', target, 'dpkg-reconfigure', | 668 | util.subp(['dpkg-reconfigure', '--frontend=noninteractive', 'mdadm'], |
3123 | 838 | '--frontend=noninteractive', 'mdadm'], data=None) | 669 | data=None, target=target) |
3124 | 839 | 670 | ||
3125 | 840 | with events.ReportEventStack( | 671 | with events.ReportEventStack( |
3126 | 841 | name=stack_prefix, reporting_enabled=True, level="INFO", | 672 | name=stack_prefix, reporting_enabled=True, level="INFO", |
3127 | @@ -843,7 +674,6 @@ | |||
3128 | 843 | setup_zipl(cfg, target) | 674 | setup_zipl(cfg, target) |
3129 | 844 | install_kernel(cfg, target) | 675 | install_kernel(cfg, target) |
3130 | 845 | run_zipl(cfg, target) | 676 | run_zipl(cfg, target) |
3131 | 846 | apply_debconf_selections(cfg, target) | ||
3132 | 847 | 677 | ||
3133 | 848 | restore_dist_interfaces(cfg, target) | 678 | restore_dist_interfaces(cfg, target) |
3134 | 849 | 679 | ||
3135 | @@ -906,8 +736,4 @@ | |||
3136 | 906 | populate_one_subcmd(parser, CMD_ARGUMENTS, curthooks) | 736 | populate_one_subcmd(parser, CMD_ARGUMENTS, curthooks) |
3137 | 907 | 737 | ||
3138 | 908 | 738 | ||
3139 | 909 | CONFIG_CLEANERS = { | ||
3140 | 910 | 'cloud-init': clean_cloud_init, | ||
3141 | 911 | } | ||
3142 | 912 | |||
3143 | 913 | # vi: ts=4 expandtab syntax=python | 739 | # vi: ts=4 expandtab syntax=python |
3144 | 914 | 740 | ||
3145 | === modified file 'curtin/commands/main.py' | |||
3146 | --- curtin/commands/main.py 2016-05-10 16:13:29 +0000 | |||
3147 | +++ curtin/commands/main.py 2016-10-03 18:55:20 +0000 | |||
3148 | @@ -26,9 +26,10 @@ | |||
3149 | 26 | from ..deps import install_deps | 26 | from ..deps import install_deps |
3150 | 27 | 27 | ||
3151 | 28 | SUB_COMMAND_MODULES = [ | 28 | SUB_COMMAND_MODULES = [ |
3155 | 29 | 'apply_net', 'block-meta', 'block-wipe', 'curthooks', 'extract', | 29 | 'apply_net', 'block-info', 'block-meta', 'block-wipe', 'curthooks', |
3156 | 30 | 'hook', 'in-target', 'install', 'mkfs', 'net-meta', | 30 | 'clear-holders', 'extract', 'hook', 'in-target', 'install', 'mkfs', |
3157 | 31 | 'pack', 'swap', 'system-install', 'system-upgrade'] | 31 | 'net-meta', 'apt-config', 'pack', 'swap', 'system-install', |
3158 | 32 | 'system-upgrade'] | ||
3159 | 32 | 33 | ||
3160 | 33 | 34 | ||
3161 | 34 | def add_subcmd(subparser, subcmd): | 35 | def add_subcmd(subparser, subcmd): |
3162 | 35 | 36 | ||
3163 | === modified file 'curtin/config.py' | |||
3164 | --- curtin/config.py 2016-03-18 14:16:45 +0000 | |||
3165 | +++ curtin/config.py 2016-10-03 18:55:20 +0000 | |||
3166 | @@ -138,6 +138,5 @@ | |||
3167 | 138 | 138 | ||
3168 | 139 | 139 | ||
3169 | 140 | def value_as_boolean(value): | 140 | def value_as_boolean(value): |
3173 | 141 | if value in (False, None, '0', 0, 'False', 'false', ''): | 141 | false_values = (False, None, 0, '0', 'False', 'false', 'None', 'none', '') |
3174 | 142 | return False | 142 | return value not in false_values |
3172 | 143 | return True | ||
3175 | 144 | 143 | ||
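The rewritten `value_as_boolean` collapses the old two-branch form into a single membership test; note the new tuple also treats `'None'`/`'none'` as false, which the previous version did not:

```python
def value_as_boolean(value):
    # everything outside this tuple is considered true
    false_values = (False, None, 0, '0', 'False', 'false', 'None', 'none', '')
    return value not in false_values

assert value_as_boolean('true') is True
assert value_as_boolean('0') is False
assert value_as_boolean('none') is False   # newly false in this revision
assert value_as_boolean(1) is True
```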
3176 | === added file 'curtin/gpg.py' | |||
3177 | --- curtin/gpg.py 1970-01-01 00:00:00 +0000 | |||
3178 | +++ curtin/gpg.py 2016-10-03 18:55:20 +0000 | |||
3179 | @@ -0,0 +1,74 @@ | |||
3180 | 1 | # Copyright (C) 2016 Canonical Ltd. | ||
3181 | 2 | # | ||
3182 | 3 | # Author: Scott Moser <scott.moser@canonical.com> | ||
3183 | 4 | # Christian Ehrhardt <christian.ehrhardt@canonical.com> | ||
3184 | 5 | # | ||
3185 | 6 | # Curtin is free software: you can redistribute it and/or modify it under | ||
3186 | 7 | # the terms of the GNU Affero General Public License as published by the | ||
3187 | 8 | # Free Software Foundation, either version 3 of the License, or (at your | ||
3188 | 9 | # option) any later version. | ||
3189 | 10 | # | ||
3190 | 11 | # Curtin is distributed in the hope that it will be useful, but WITHOUT ANY | ||
3191 | 12 | # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS | ||
3192 | 13 | # FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for | ||
3193 | 14 | # more details. | ||
3194 | 15 | # | ||
3195 | 16 | # You should have received a copy of the GNU Affero General Public License | ||
3196 | 17 | # along with Curtin. If not, see <http://www.gnu.org/licenses/>. | ||
3197 | 18 | """ gpg.py | ||
3198 | 19 | gpg related utilities to get raw keys data by their id | ||
3199 | 20 | """ | ||
3200 | 21 | |||
3201 | 22 | from curtin import util | ||
3202 | 23 | |||
3203 | 24 | from .log import LOG | ||
3204 | 25 | |||
3205 | 26 | |||
3206 | 27 | def export_armour(key): | ||
3207 | 28 | """Export gpg key, armoured key gets returned""" | ||
3208 | 29 | try: | ||
3209 | 30 | (armour, _) = util.subp(["gpg", "--export", "--armour", key], | ||
3210 | 31 | capture=True) | ||
3211 | 32 | except util.ProcessExecutionError as error: | ||
3212 | 33 | # debug, since it happens for any key not on the system initially | ||
3213 | 34 | LOG.debug('Failed to export armoured key "%s": %s', key, error) | ||
3214 | 35 | armour = None | ||
3215 | 36 | return armour | ||
3216 | 37 | |||
3217 | 38 | |||
3218 | 39 | def recv_key(key, keyserver): | ||
3219 | 40 | """Receive gpg key from the specified keyserver""" | ||
3220 | 41 | LOG.debug('Receive gpg key "%s"', key) | ||
3221 | 42 | try: | ||
3222 | 43 | util.subp(["gpg", "--keyserver", keyserver, "--recv", key], | ||
3223 | 44 | capture=True) | ||
3224 | 45 | except util.ProcessExecutionError as error: | ||
3225 | 46 | raise ValueError(('Failed to import key "%s" ' | ||
3226 | 47 | 'from server "%s" - error %s') % | ||
3227 | 48 | (key, keyserver, error)) | ||
3228 | 49 | |||
3229 | 50 | |||
3230 | 51 | def delete_key(key): | ||
3231 | 52 | """Delete the specified key from the local gpg ring""" | ||
3232 | 53 | try: | ||
3233 | 54 | util.subp(["gpg", "--batch", "--yes", "--delete-keys", key], | ||
3234 | 55 | capture=True) | ||
3235 | 56 | except util.ProcessExecutionError as error: | ||
3236 | 57 | LOG.warn('Failed delete key "%s": %s', key, error) | ||
3237 | 58 | |||
3238 | 59 | |||
3239 | 60 | def getkeybyid(keyid, keyserver='keyserver.ubuntu.com'): | ||
3240 | 61 | """get gpg keyid from keyserver""" | ||
3241 | 62 | armour = export_armour(keyid) | ||
3242 | 63 | if not armour: | ||
3243 | 64 | try: | ||
3244 | 65 | recv_key(keyid, keyserver=keyserver) | ||
3245 | 66 | armour = export_armour(keyid) | ||
3246 | 67 | except ValueError: | ||
3247 | 68 | LOG.exception('Failed to obtain gpg key %s', keyid) | ||
3248 | 69 | raise | ||
3249 | 70 | finally: | ||
3250 | 71 | # delete just imported key to leave environment as it was before | ||
3251 | 72 | delete_key(keyid) | ||
3252 | 73 | |||
3253 | 74 | return armour | ||
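Stubbing the three gpg operations with an in-memory dict (the `ring` stand-in is hypothetical, replacing the real `gpg` subprocess calls) shows the fetch-export-cleanup flow `getkeybyid` implements, including that a key imported only for export is deleted again afterwards:

```python
ring = {}  # stand-in for the local gpg keyring

def export_armour(key):
    return ring.get(key)

def recv_key(key, keyserver):
    ring[key] = "-----BEGIN PGP PUBLIC KEY BLOCK-----..."

def delete_key(key):
    ring.pop(key, None)

def getkeybyid(keyid, keyserver='keyserver.ubuntu.com'):
    armour = export_armour(keyid)
    if not armour:
        try:
            recv_key(keyid, keyserver=keyserver)
            armour = export_armour(keyid)
        finally:
            # delete just-imported key to leave environment as before
            delete_key(keyid)
    return armour

assert getkeybyid('0xDEADBEEF').startswith('-----BEGIN')
assert '0xDEADBEEF' not in ring  # imported key was cleaned up
```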
3254 | 0 | 75 | ||
3255 | === modified file 'curtin/net/__init__.py' | |||
3256 | --- curtin/net/__init__.py 2016-10-03 18:00:41 +0000 | |||
3257 | +++ curtin/net/__init__.py 2016-10-03 18:55:20 +0000 | |||
3258 | @@ -299,7 +299,7 @@ | |||
3259 | 299 | mac = iface.get('mac_address', '') | 299 | mac = iface.get('mac_address', '') |
3260 | 300 | # len(macaddr) == 2 * 6 + 5 == 17 | 300 | # len(macaddr) == 2 * 6 + 5 == 17 |
3261 | 301 | if ifname and mac and len(mac) == 17: | 301 | if ifname and mac and len(mac) == 17: |
3263 | 302 | content += generate_udev_rule(ifname, mac) | 302 | content += generate_udev_rule(ifname, mac.lower()) |
3264 | 303 | 303 | ||
3265 | 304 | return content | 304 | return content |
3266 | 305 | 305 | ||
3267 | @@ -349,7 +349,7 @@ | |||
3268 | 349 | 'subnets', | 349 | 'subnets', |
3269 | 350 | 'type', | 350 | 'type', |
3270 | 351 | ] | 351 | ] |
3272 | 352 | if iface['type'] not in ['bond', 'bridge']: | 352 | if iface['type'] not in ['bond', 'bridge', 'vlan']: |
3273 | 353 | ignore_map.append('mac_address') | 353 | ignore_map.append('mac_address') |
3274 | 354 | 354 | ||
3275 | 355 | for key, value in iface.items(): | 355 | for key, value in iface.items(): |
3276 | @@ -361,26 +361,52 @@ | |||
3277 | 361 | return content | 361 | return content |
3278 | 362 | 362 | ||
3279 | 363 | 363 | ||
3282 | 364 | def render_route(route): | 364 | def render_route(route, indent=""): |
3283 | 365 | content = "up route add" | 365 | """When rendering routes for an iface, in some cases applying a route |
3284 | 366 | may result in the route command returning non-zero which produces | ||
3285 | 367 | some confusing output for users manually using ifup/ifdown[1]. To | ||
3286 | 368 | that end, we will optionally include an '|| true' postfix to each | ||
3287 | 369 | route line allowing users to work with ifup/ifdown without using | ||
3288 | 370 | --force option. | ||
3289 | 371 | |||
3290 | 372 | We may at some point not want to emit this additional postfix, and |||
3291 | 373 | add a 'strict' flag to this function. When called with strict=True, | ||
3292 | 374 | then we will not append the postfix. | ||
3293 | 375 | |||
3294 | 376 | 1. http://askubuntu.com/questions/168033/ | ||
3295 | 377 | how-to-set-static-routes-in-ubuntu-server | ||
3296 | 378 | """ | ||
3297 | 379 | content = [] | ||
3298 | 380 | up = indent + "post-up route add" | ||
3299 | 381 | down = indent + "pre-down route del" | ||
3300 | 382 | or_true = " || true" | ||
3301 | 366 | mapping = { | 383 | mapping = { |
3302 | 367 | 'network': '-net', | 384 | 'network': '-net', |
3303 | 368 | 'netmask': 'netmask', | 385 | 'netmask': 'netmask', |
3304 | 369 | 'gateway': 'gw', | 386 | 'gateway': 'gw', |
3305 | 370 | 'metric': 'metric', | 387 | 'metric': 'metric', |
3306 | 371 | } | 388 | } |
3316 | 372 | for k in ['network', 'netmask', 'gateway', 'metric']: | 389 | if route['network'] == '0.0.0.0' and route['netmask'] == '0.0.0.0': |
3317 | 373 | if k in route: | 390 | default_gw = " default gw %s" % route['gateway'] |
3318 | 374 | content += " %s %s" % (mapping[k], route[k]) | 391 | content.append(up + default_gw + or_true) |
3319 | 375 | 392 | content.append(down + default_gw + or_true) | |
3320 | 376 | content += '\n' | 393 | elif route['network'] == '::' and route['netmask'] == 0: |
3321 | 377 | return content | 394 | # ipv6! |
3322 | 378 | 395 | default_gw = " -A inet6 default gw %s" % route['gateway'] | |
3323 | 379 | 396 | content.append(up + default_gw + or_true) | |
3324 | 380 | def iface_start_entry(iface, index): | 397 | content.append(down + default_gw + or_true) |
3325 | 398 | else: | ||
3326 | 399 | route_line = "" | ||
3327 | 400 | for k in ['network', 'netmask', 'gateway', 'metric']: | ||
3328 | 401 | if k in route: | ||
3329 | 402 | route_line += " %s %s" % (mapping[k], route[k]) | ||
3330 | 403 | content.append(up + route_line + or_true) | ||
3331 | 404 | content.append(down + route_line + or_true) | ||
3332 | 405 | return "\n".join(content) | ||
3333 | 406 | |||
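For a default IPv4 route, the rewritten `render_route` now emits paired `post-up`/`pre-down` lines with the `|| true` postfix the docstring describes. A trimmed copy of the default-gateway branch (gateway value illustrative) demonstrates the output:

```python
def render_route(route, indent=""):
    # default-gateway branch only; the full function also handles
    # the IPv6 default and explicit -net/netmask/gw/metric routes
    up = indent + "post-up route add"
    down = indent + "pre-down route del"
    or_true = " || true"
    content = []
    if route['network'] == '0.0.0.0' and route['netmask'] == '0.0.0.0':
        default_gw = " default gw %s" % route['gateway']
        content.append(up + default_gw + or_true)
        content.append(down + default_gw + or_true)
    return "\n".join(content)

route = {'network': '0.0.0.0', 'netmask': '0.0.0.0', 'gateway': '10.0.2.2'}
assert render_route(route, indent="    ") == (
    "    post-up route add default gw 10.0.2.2 || true\n"
    "    pre-down route del default gw 10.0.2.2 || true")
```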
3334 | 407 | |||
3335 | 408 | def iface_start_entry(iface): | ||
3336 | 381 | fullname = iface['name'] | 409 | fullname = iface['name'] |
3337 | 382 | if index != 0: | ||
3338 | 383 | fullname += ":%s" % index | ||
3339 | 384 | 410 | ||
3340 | 385 | control = iface['control'] | 411 | control = iface['control'] |
3341 | 386 | if control == "auto": | 412 | if control == "auto": |
3342 | @@ -397,6 +423,16 @@ | |||
3343 | 397 | "iface {fullname} {inet} {mode}\n").format(**subst) | 423 | "iface {fullname} {inet} {mode}\n").format(**subst) |
3344 | 398 | 424 | ||
3345 | 399 | 425 | ||
3346 | 426 | def subnet_is_ipv6(subnet): | ||
3347 | 427 | # 'static6' or 'dhcp6' | ||
3348 | 428 | if subnet['type'].endswith('6'): | ||
3349 | 429 | # This is a request for DHCPv6. | ||
3350 | 430 | return True | ||
3351 | 431 | elif subnet['type'] == 'static' and ":" in subnet['address']: | ||
3352 | 432 | return True | ||
3353 | 433 | return False | ||
3354 | 434 | |||
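The new `subnet_is_ipv6` helper centralizes the two checks the old inline code did per subnet: a `*6` type (`dhcp6`, `static6`) or a static subnet whose address contains a colon. Copied from the hunk above with illustrative subnets:

```python
def subnet_is_ipv6(subnet):
    # 'static6' or 'dhcp6'
    if subnet['type'].endswith('6'):
        return True
    elif subnet['type'] == 'static' and ":" in subnet['address']:
        return True
    return False

assert subnet_is_ipv6({'type': 'dhcp6'}) is True
assert subnet_is_ipv6({'type': 'static', 'address': '2001:db8::1/64'}) is True
assert subnet_is_ipv6({'type': 'static', 'address': '10.0.0.1/24'}) is False
assert subnet_is_ipv6({'type': 'dhcp'}) is False
```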
3355 | 435 | |||
3356 | 400 | def render_interfaces(network_state): | 436 | def render_interfaces(network_state): |
3357 | 401 | ''' Given state, emit etc/network/interfaces content ''' | 437 | ''' Given state, emit etc/network/interfaces content ''' |
3358 | 402 | 438 | ||
3359 | @@ -424,42 +460,43 @@ | |||
3360 | 424 | content += "\n" | 460 | content += "\n" |
3361 | 425 | subnets = iface.get('subnets', {}) | 461 | subnets = iface.get('subnets', {}) |
3362 | 426 | if subnets: | 462 | if subnets: |
3364 | 427 | for index, subnet in zip(range(0, len(subnets)), subnets): | 463 | for index, subnet in enumerate(subnets): |
3365 | 428 | if content[-2:] != "\n\n": | 464 | if content[-2:] != "\n\n": |
3366 | 429 | content += "\n" | 465 | content += "\n" |
3367 | 430 | iface['index'] = index | 466 | iface['index'] = index |
3368 | 431 | iface['mode'] = subnet['type'] | 467 | iface['mode'] = subnet['type'] |
3369 | 432 | iface['control'] = subnet.get('control', 'auto') | 468 | iface['control'] = subnet.get('control', 'auto') |
3370 | 433 | subnet_inet = 'inet' | 469 | subnet_inet = 'inet' |
3376 | 434 | if iface['mode'].endswith('6'): | 470 | if subnet_is_ipv6(subnet): |
3372 | 435 | # This is a request for DHCPv6. | ||
3373 | 436 | subnet_inet += '6' | ||
3374 | 437 | elif iface['mode'] == 'static' and ":" in subnet['address']: | ||
3375 | 438 | # This is a static IPv6 address. | ||
3377 | 439 | subnet_inet += '6' | 471 | subnet_inet += '6' |
3378 | 440 | iface['inet'] = subnet_inet | 472 | iface['inet'] = subnet_inet |
3380 | 441 | if iface['mode'].startswith('dhcp'): | 473 | if subnet['type'].startswith('dhcp'): |
3381 | 442 | iface['mode'] = 'dhcp' | 474 | iface['mode'] = 'dhcp' |
3382 | 443 | 475 | ||
3384 | 444 | content += iface_start_entry(iface, index) | 476 | # do not emit multiple 'auto $IFACE' lines as older (precise) |
3385 | 477 | # ifupdown complains | ||
3386 | 478 | if "auto %s\n" % (iface['name']) in content: | ||
3387 | 479 | iface['control'] = 'alias' | ||
3388 | 480 | |||
3389 | 481 | content += iface_start_entry(iface) | ||
3390 | 445 | content += iface_add_subnet(iface, subnet) | 482 | content += iface_add_subnet(iface, subnet) |
3391 | 446 | content += iface_add_attrs(iface, index) | 483 | content += iface_add_attrs(iface, index) |
3396 | 447 | if len(subnets) > 1 and index == 0: | 484 | |
3397 | 448 | for i in range(1, len(subnets)): | 485 | for route in subnet.get('routes', []): |
3398 | 449 | content += " post-up ifup %s:%s\n" % (iface['name'], | 486 | content += render_route(route, indent=" ") + '\n' |
3399 | 450 | i) | 487 | |
3400 | 451 | else: | 488 | else: |
3401 | 452 | # ifenslave docs say to auto the slave devices | 489 | # ifenslave docs say to auto the slave devices |
3403 | 453 | if 'bond-master' in iface: | 490 | if 'bond-master' in iface or 'bond-slaves' in iface: |
3404 | 454 | content += "auto {name}\n".format(**iface) | 491 | content += "auto {name}\n".format(**iface) |
3405 | 455 | content += "iface {name} {inet} {mode}\n".format(**iface) | 492 | content += "iface {name} {inet} {mode}\n".format(**iface) |
3407 | 456 | content += iface_add_attrs(iface, index) | 493 | content += iface_add_attrs(iface, 0) |
3408 | 457 | 494 | ||
3409 | 458 | for route in network_state.get('routes'): | 495 | for route in network_state.get('routes'): |
3410 | 459 | content += render_route(route) | 496 | content += render_route(route) |
3411 | 460 | 497 | ||
3412 | 461 | # global replacements until v2 format | 498 | # global replacements until v2 format |
3414 | 462 | content = content.replace('mac_address', 'hwaddress') | 499 | content = content.replace('mac_address', 'hwaddress ether') |
3415 | 463 | 500 | ||
3416 | 464 | # Play nice with others and source eni config files | 501 | # Play nice with others and source eni config files |
3417 | 465 | content += "\nsource /etc/network/interfaces.d/*.cfg\n" | 502 | content += "\nsource /etc/network/interfaces.d/*.cfg\n" |
3418 | 466 | 503 | ||
3419 | === modified file 'curtin/net/network_state.py' | |||
3420 | --- curtin/net/network_state.py 2015-10-02 16:19:07 +0000 | |||
3421 | +++ curtin/net/network_state.py 2016-10-03 18:55:20 +0000 | |||
3422 | @@ -121,6 +121,18 @@ | |||
3423 | 121 | iface = interfaces.get(command['name'], {}) | 121 | iface = interfaces.get(command['name'], {}) |
3424 | 122 | for param, val in command.get('params', {}).items(): | 122 | for param, val in command.get('params', {}).items(): |
3425 | 123 | iface.update({param: val}) | 123 | iface.update({param: val}) |
3426 | 124 | |||
3427 | 125 | # convert subnet ipv6 netmask to cidr as needed | ||
3428 | 126 | subnets = command.get('subnets') | ||
3429 | 127 | if subnets: | ||
3430 | 128 | for subnet in subnets: | ||
3431 | 129 | if subnet['type'] == 'static': | ||
3432 | 130 | if 'netmask' in subnet and ':' in subnet['address']: | ||
3433 | 131 | subnet['netmask'] = mask2cidr(subnet['netmask']) | ||
3434 | 132 | for route in subnet.get('routes', []): | ||
3435 | 133 | if 'netmask' in route: | ||
3436 | 134 | route['netmask'] = mask2cidr(route['netmask']) | ||
3437 | 135 | |||
3438 | 124 | iface.update({ | 136 | iface.update({ |
3439 | 125 | 'name': command.get('name'), | 137 | 'name': command.get('name'), |
3440 | 126 | 'type': command.get('type'), | 138 | 'type': command.get('type'), |
3441 | @@ -130,7 +142,7 @@ | |||
3442 | 130 | 'mtu': command.get('mtu'), | 142 | 'mtu': command.get('mtu'), |
3443 | 131 | 'address': None, | 143 | 'address': None, |
3444 | 132 | 'gateway': None, | 144 | 'gateway': None, |
3446 | 133 | 'subnets': command.get('subnets'), | 145 | 'subnets': subnets, |
3447 | 134 | }) | 146 | }) |
3448 | 135 | self.network_state['interfaces'].update({command.get('name'): iface}) | 147 | self.network_state['interfaces'].update({command.get('name'): iface}) |
3449 | 136 | self.dump_network_state() | 148 | self.dump_network_state() |
3450 | @@ -141,6 +153,7 @@ | |||
3451 | 141 | iface eth0.222 inet static | 153 | iface eth0.222 inet static |
3452 | 142 | address 10.10.10.1 | 154 | address 10.10.10.1 |
3453 | 143 | netmask 255.255.255.0 | 155 | netmask 255.255.255.0 |
3454 | 156 | hwaddress ether BC:76:4E:06:96:B3 | ||
3455 | 144 | vlan-raw-device eth0 | 157 | vlan-raw-device eth0 |
3456 | 145 | ''' | 158 | ''' |
3457 | 146 | required_keys = [ | 159 | required_keys = [ |
3458 | @@ -332,6 +345,37 @@ | |||
3459 | 332 | return ".".join([str(x) for x in mask]) | 345 | return ".".join([str(x) for x in mask]) |
3460 | 333 | 346 | ||
3461 | 334 | 347 | ||
3462 | 348 | def ipv4mask2cidr(mask): | ||
3463 | 349 | if '.' not in mask: | ||
3464 | 350 | return mask | ||
3465 | 351 | return sum([bin(int(x)).count('1') for x in mask.split('.')]) | ||
3466 | 352 | |||
3467 | 353 | |||
3468 | 354 | def ipv6mask2cidr(mask): | ||
3469 | 355 | if ':' not in mask: | ||
3470 | 356 | return mask | ||
3471 | 357 | |||
3472 | 358 | bitCount = [0, 0x8000, 0xc000, 0xe000, 0xf000, 0xf800, 0xfc00, 0xfe00, | ||
3473 | 359 | 0xff00, 0xff80, 0xffc0, 0xffe0, 0xfff0, 0xfff8, 0xfffc, | ||
3474 | 360 | 0xfffe, 0xffff] | ||
3475 | 361 | cidr = 0 | ||
3476 | 362 | for word in mask.split(':'): | ||
3477 | 363 | if not word or int(word, 16) == 0: | ||
3478 | 364 | break | ||
3479 | 365 | cidr += bitCount.index(int(word, 16)) | ||
3480 | 366 | |||
3481 | 367 | return cidr | ||
3482 | 368 | |||
3483 | 369 | |||
3484 | 370 | def mask2cidr(mask): | ||
3485 | 371 | if ':' in mask: | ||
3486 | 372 | return ipv6mask2cidr(mask) | ||
3487 | 373 | elif '.' in mask: | ||
3488 | 374 | return ipv4mask2cidr(mask) | ||
3489 | 375 | else: | ||
3490 | 376 | return mask | ||
3491 | 377 | |||
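The added netmask-to-CIDR helpers are pure functions, so the hunk above is easy to exercise directly (copied here, with the original's `bitCount` renamed `bit_count`; already-CIDR values pass through untouched):

```python
def ipv4mask2cidr(mask):
    if '.' not in mask:
        return mask
    # count set bits across the four dotted-quad octets
    return sum(bin(int(x)).count('1') for x in mask.split('.'))

def ipv6mask2cidr(mask):
    if ':' not in mask:
        return mask
    # index in this table == number of leading 1-bits in a 16-bit word
    bit_count = [0, 0x8000, 0xc000, 0xe000, 0xf000, 0xf800, 0xfc00, 0xfe00,
                 0xff00, 0xff80, 0xffc0, 0xffe0, 0xfff0, 0xfff8, 0xfffc,
                 0xfffe, 0xffff]
    cidr = 0
    for word in mask.split(':'):
        if not word or int(word, 16) == 0:
            break
        cidr += bit_count.index(int(word, 16))
    return cidr

assert ipv4mask2cidr('255.255.255.0') == 24
assert ipv6mask2cidr('ffff:ffff:ffff:ffff::') == 64
assert ipv4mask2cidr('24') == '24'  # pass-through for non-dotted input
```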
3492 | 378 | |||
3493 | 335 | if __name__ == '__main__': | 379 | if __name__ == '__main__': |
3494 | 336 | import sys | 380 | import sys |
3495 | 337 | import random | 381 | import random |
3496 | 338 | 382 | ||
3497 | === modified file 'curtin/util.py' | |||
3498 | --- curtin/util.py 2016-10-03 18:00:41 +0000 | |||
3499 | +++ curtin/util.py 2016-10-03 18:55:20 +0000 | |||
3500 | @@ -16,18 +16,35 @@ | |||
3501 | 16 | # along with Curtin. If not, see <http://www.gnu.org/licenses/>. | 16 | # along with Curtin. If not, see <http://www.gnu.org/licenses/>. |
3502 | 17 | 17 | ||
3503 | 18 | import argparse | 18 | import argparse |
3504 | 19 | import collections | ||
3505 | 19 | import errno | 20 | import errno |
3506 | 20 | import glob | 21 | import glob |
3507 | 21 | import json | 22 | import json |
3508 | 22 | import os | 23 | import os |
3509 | 23 | import platform | 24 | import platform |
3510 | 25 | import re | ||
3511 | 24 | import shutil | 26 | import shutil |
3512 | 27 | import socket | ||
3513 | 25 | import subprocess | 28 | import subprocess |
3514 | 26 | import stat | 29 | import stat |
3515 | 27 | import sys | 30 | import sys |
3516 | 28 | import tempfile | 31 | import tempfile |
3517 | 29 | import time | 32 | import time |
3518 | 30 | 33 | ||
3519 | 34 | # avoid the dependency to python3-six as used in cloud-init | ||
3520 | 35 | try: | ||
3521 | 36 | from urlparse import urlparse | ||
3522 | 37 | except ImportError: | ||
3523 | 38 | # python3 | ||
3524 | 39 | # avoid triggering pylint, https://github.com/PyCQA/pylint/issues/769 | ||
3525 | 40 | # pylint:disable=import-error,no-name-in-module | ||
3526 | 41 | from urllib.parse import urlparse | ||
3527 | 42 | |||
3528 | 43 | try: | ||
3529 | 44 | string_types = (basestring,) | ||
3530 | 45 | except NameError: | ||
3531 | 46 | string_types = (str,) | ||
3532 | 47 | |||
3533 | 31 | from .log import LOG | 48 | from .log import LOG |
3534 | 32 | 49 | ||
3535 | 33 | _INSTALLED_HELPERS_PATH = '/usr/lib/curtin/helpers' | 50 | _INSTALLED_HELPERS_PATH = '/usr/lib/curtin/helpers' |
3536 | @@ -35,14 +52,22 @@ | |||
3537 | 35 | 52 | ||
3538 | 36 | _LSB_RELEASE = {} | 53 | _LSB_RELEASE = {} |
3539 | 37 | 54 | ||
3540 | 55 | _DNS_REDIRECT_IP = None | ||
3541 | 56 | |||
3542 | 57 | # matcher used in template rendering functions | ||
3543 | 58 | BASIC_MATCHER = re.compile(r'\$\{([A-Za-z0-9_.]+)\}|\$([A-Za-z0-9_.]+)') | ||
3544 | 59 | |||
3545 | 38 | 60 | ||
3546 | 39 | def _subp(args, data=None, rcs=None, env=None, capture=False, shell=False, | 61 | def _subp(args, data=None, rcs=None, env=None, capture=False, shell=False, |
3548 | 40 | logstring=False, decode="replace"): | 62 | logstring=False, decode="replace", target=None): |
3549 | 41 | if rcs is None: | 63 | if rcs is None: |
3550 | 42 | rcs = [0] | 64 | rcs = [0] |
3551 | 43 | 65 | ||
3552 | 44 | devnull_fp = None | 66 | devnull_fp = None |
3553 | 45 | try: | 67 | try: |
3554 | 68 | if target_path(target) != "/": | ||
3555 | 69 | args = ['chroot', target] + list(args) | ||
3556 | 70 | |||
3557 | 46 | if not logstring: | 71 | if not logstring: |
3558 | 47 | LOG.debug(("Running command %s with allowed return codes %s" | 72 | LOG.debug(("Running command %s with allowed return codes %s" |
3559 | 48 | " (shell=%s, capture=%s)"), args, rcs, shell, capture) | 73 | " (shell=%s, capture=%s)"), args, rcs, shell, capture) |
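Threading `target` through `_subp` replaces the many explicit `['chroot', target] + cmd` constructions at call sites (as in the curthooks hunks earlier in this diff). The prefixing itself reduces to the following sketch, where `target_path` is simplified to its None-or-`/` check and `chrootable_args` is a hypothetical name for the inline logic:

```python
def target_path(target, path=None):
    # simplified stand-in: curtin's real target_path also joins 'path'
    # and normalizes; only the "is this the host?" check matters here
    return '/' if target in (None, '/') else target

def chrootable_args(args, target=None):
    # prepend 'chroot <target>' unless we are operating on the host
    if target_path(target) != '/':
        return ['chroot', target] + list(args)
    return list(args)

assert chrootable_args(['update-grub']) == ['update-grub']
assert chrootable_args(['update-grub'], '/target') == [
    'chroot', '/target', 'update-grub']
```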
3560 | @@ -118,6 +143,8 @@ | |||
3561 | 118 | a list of times to sleep in between retries. After each failure | 143 | a list of times to sleep in between retries. After each failure |
3562 | 119 | subp will sleep for N seconds and then try again. A value of [1, 3] | 144 | subp will sleep for N seconds and then try again. A value of [1, 3] |
3563 | 120 | means to run, sleep 1, run, sleep 3, run and then return exit code. | 145 | means to run, sleep 1, run, sleep 3, run and then return exit code. |
3564 | 146 | :param target: | ||
3565 | 147 | run the command as 'chroot target <args>' | ||
3566 | 121 | """ | 148 | """ |
3567 | 122 | retries = [] | 149 | retries = [] |
3568 | 123 | if "retries" in kwargs: | 150 | if "retries" in kwargs: |
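The new `target` keyword makes `subp` prefix the command with `chroot` whenever the target is not the host root. A minimal sketch of that wrapping (the helper name `chroot_args` is hypothetical, not part of curtin):

```python
def chroot_args(args, target=None):
    # Sketch of subp()'s new 'target' handling: when target is anything
    # other than the host root, the command is wrapped as
    # 'chroot <target> <args>'; otherwise args pass through unchanged.
    if target in (None, "", "/"):
        return list(args)
    return ['chroot', target] + list(args)
```

For example, `chroot_args(['apt-get', 'update'], '/target')` yields `['chroot', '/target', 'apt-get', 'update']`, while a `None` or `/` target leaves the command untouched.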
3569 | @@ -277,15 +304,29 @@ | |||
3570 | 277 | 304 | ||
3571 | 278 | 305 | ||
3572 | 279 | def write_file(filename, content, mode=0o644, omode="w"): | 306 | def write_file(filename, content, mode=0o644, omode="w"): |
3573 | 307 | """ | ||
3574 | 308 | write 'content' to file at 'filename' using python open mode 'omode'. | ||
3575 | 309 | if mode is set (default 0o644), chmod the file to mode; None skips chmod. | ||
3576 | 310 | """ | ||
3577 | 280 | ensure_dir(os.path.dirname(filename)) | 311 | ensure_dir(os.path.dirname(filename)) |
3578 | 281 | with open(filename, omode) as fp: | 312 | with open(filename, omode) as fp: |
3579 | 282 | fp.write(content) | 313 | fp.write(content) |
3584 | 283 | os.chmod(filename, mode) | 314 | if mode: |
3585 | 284 | 315 | os.chmod(filename, mode) | |
3586 | 285 | 316 | ||
3587 | 286 | def load_file(path, mode="r"): | 317 | |
3588 | 318 | def load_file(path, mode="r", read_len=None, offset=0): | ||
3589 | 287 | with open(path, mode) as fp: | 319 | with open(path, mode) as fp: |
3591 | 288 | return fp.read() | 320 | if offset: |
3592 | 321 | fp.seek(offset) | ||
3593 | 322 | return fp.read(read_len) if read_len else fp.read() | ||
3594 | 323 | |||
3595 | 324 | |||
3596 | 325 | def file_size(path): | ||
3597 | 326 | """get the size of a file""" | ||
3598 | 327 | with open(path, 'rb') as fp: | ||
3599 | 328 | fp.seek(0, 2) | ||
3600 | 329 | return fp.tell() | ||
3601 | 289 | 330 | ||
3602 | 290 | 331 | ||
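The extended `load_file` (offset/length reads) and the new `file_size` can be exercised together; a self-contained sketch duplicating the two functions shown above (the temp-file contents are arbitrary):

```python
import os
import tempfile

def file_size(path):
    """get the size of a file by seeking to its end"""
    with open(path, 'rb') as fp:
        fp.seek(0, 2)
        return fp.tell()

def load_file(path, mode="r", read_len=None, offset=0):
    """read a file, optionally starting at 'offset' for 'read_len' bytes"""
    with open(path, mode) as fp:
        if offset:
            fp.seek(offset)
        return fp.read(read_len) if read_len else fp.read()

# write ten bytes, then read a four-byte slice back
fd, path = tempfile.mkstemp()
os.write(fd, b"0123456789")
os.close(fd)
size = file_size(path)                                     # 10
chunk = load_file(path, mode="rb", read_len=4, offset=2)   # b"2345"
os.unlink(path)
```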
3603 | 291 | def del_file(path): | 332 | def del_file(path): |
3604 | @@ -311,7 +352,7 @@ | |||
3605 | 311 | 'done', | 352 | 'done', |
3606 | 312 | '']) | 353 | '']) |
3607 | 313 | 354 | ||
3609 | 314 | fpath = os.path.join(target, "usr/sbin/policy-rc.d") | 355 | fpath = target_path(target, "/usr/sbin/policy-rc.d") |
3610 | 315 | 356 | ||
3611 | 316 | if os.path.isfile(fpath): | 357 | if os.path.isfile(fpath): |
3612 | 317 | return False | 358 | return False |
3613 | @@ -322,7 +363,7 @@ | |||
3614 | 322 | 363 | ||
3615 | 323 | def undisable_daemons_in_root(target): | 364 | def undisable_daemons_in_root(target): |
3616 | 324 | try: | 365 | try: |
3618 | 325 | os.unlink(os.path.join(target, "usr/sbin/policy-rc.d")) | 366 | os.unlink(target_path(target, "/usr/sbin/policy-rc.d")) |
3619 | 326 | except OSError as e: | 367 | except OSError as e: |
3620 | 327 | if e.errno != errno.ENOENT: | 368 | if e.errno != errno.ENOENT: |
3621 | 328 | raise | 369 | raise |
3622 | @@ -334,7 +375,7 @@ | |||
3623 | 334 | def __init__(self, target, allow_daemons=False, sys_resolvconf=True): | 375 | def __init__(self, target, allow_daemons=False, sys_resolvconf=True): |
3624 | 335 | if target is None: | 376 | if target is None: |
3625 | 336 | target = "/" | 377 | target = "/" |
3627 | 337 | self.target = os.path.abspath(target) | 378 | self.target = target_path(target) |
3628 | 338 | self.mounts = ["/dev", "/proc", "/sys"] | 379 | self.mounts = ["/dev", "/proc", "/sys"] |
3629 | 339 | self.umounts = [] | 380 | self.umounts = [] |
3630 | 340 | self.disabled_daemons = False | 381 | self.disabled_daemons = False |
3631 | @@ -344,20 +385,21 @@ | |||
3632 | 344 | 385 | ||
3633 | 345 | def __enter__(self): | 386 | def __enter__(self): |
3634 | 346 | for p in self.mounts: | 387 | for p in self.mounts: |
3636 | 347 | tpath = os.path.join(self.target, p[1:]) | 388 | tpath = target_path(self.target, p) |
3637 | 348 | if do_mount(p, tpath, opts='--bind'): | 389 | if do_mount(p, tpath, opts='--bind'): |
3638 | 349 | self.umounts.append(tpath) | 390 | self.umounts.append(tpath) |
3639 | 350 | 391 | ||
3640 | 351 | if not self.allow_daemons: | 392 | if not self.allow_daemons: |
3641 | 352 | self.disabled_daemons = disable_daemons_in_root(self.target) | 393 | self.disabled_daemons = disable_daemons_in_root(self.target) |
3642 | 353 | 394 | ||
3644 | 354 | target_etc = os.path.join(self.target, "etc") | 395 | rconf = target_path(self.target, "/etc/resolv.conf") |
3645 | 396 | target_etc = os.path.dirname(rconf) | ||
3646 | 355 | if self.target != "/" and os.path.isdir(target_etc): | 397 | if self.target != "/" and os.path.isdir(target_etc): |
3647 | 356 | # never muck with resolv.conf on / | 398 | # never muck with resolv.conf on / |
3648 | 357 | rconf = os.path.join(target_etc, "resolv.conf") | 399 | rconf = os.path.join(target_etc, "resolv.conf") |
3649 | 358 | rtd = None | 400 | rtd = None |
3650 | 359 | try: | 401 | try: |
3652 | 360 | rtd = tempfile.mkdtemp(dir=os.path.dirname(rconf)) | 402 | rtd = tempfile.mkdtemp(dir=target_etc) |
3653 | 361 | tmp = os.path.join(rtd, "resolv.conf") | 403 | tmp = os.path.join(rtd, "resolv.conf") |
3654 | 362 | os.rename(rconf, tmp) | 404 | os.rename(rconf, tmp) |
3655 | 363 | self.rconf_d = rtd | 405 | self.rconf_d = rtd |
3656 | @@ -375,25 +417,23 @@ | |||
3657 | 375 | undisable_daemons_in_root(self.target) | 417 | undisable_daemons_in_root(self.target) |
3658 | 376 | 418 | ||
3659 | 377 | # if /dev is to be unmounted, udevadm settle (LP: #1462139) | 419 | # if /dev is to be unmounted, udevadm settle (LP: #1462139) |
3661 | 378 | if os.path.join(self.target, "dev") in self.umounts: | 420 | if target_path(self.target, "/dev") in self.umounts: |
3662 | 379 | subp(['udevadm', 'settle']) | 421 | subp(['udevadm', 'settle']) |
3663 | 380 | 422 | ||
3664 | 381 | for p in reversed(self.umounts): | 423 | for p in reversed(self.umounts): |
3665 | 382 | do_umount(p) | 424 | do_umount(p) |
3666 | 383 | 425 | ||
3668 | 384 | rconf = os.path.join(self.target, "etc", "resolv.conf") | 426 | rconf = target_path(self.target, "/etc/resolv.conf") |
3669 | 385 | if self.sys_resolvconf and self.rconf_d: | 427 | if self.sys_resolvconf and self.rconf_d: |
3670 | 386 | os.rename(os.path.join(self.rconf_d, "resolv.conf"), rconf) | 428 | os.rename(os.path.join(self.rconf_d, "resolv.conf"), rconf) |
3671 | 387 | shutil.rmtree(self.rconf_d) | 429 | shutil.rmtree(self.rconf_d) |
3672 | 388 | 430 | ||
3673 | 431 | def subp(self, *args, **kwargs): | ||
3674 | 432 | kwargs['target'] = self.target | ||
3675 | 433 | return subp(*args, **kwargs) | ||
3676 | 389 | 434 | ||
3684 | 390 | class RunInChroot(ChrootableTarget): | 435 | def path(self, path): |
3685 | 391 | def __call__(self, args, **kwargs): | 436 | return target_path(self.target, path) |
3679 | 392 | if self.target != "/": | ||
3680 | 393 | chroot = ["chroot", self.target] | ||
3681 | 394 | else: | ||
3682 | 395 | chroot = [] | ||
3683 | 396 | return subp(chroot + args, **kwargs) | ||
3686 | 397 | 437 | ||
3687 | 398 | 438 | ||
3688 | 399 | def is_exe(fpath): | 439 | def is_exe(fpath): |
3689 | @@ -402,14 +442,13 @@ | |||
3690 | 402 | 442 | ||
3691 | 403 | 443 | ||
3692 | 404 | def which(program, search=None, target=None): | 444 | def which(program, search=None, target=None): |
3695 | 405 | if target is None or os.path.realpath(target) == "/": | 445 | target = target_path(target) |
3694 | 406 | target = "/" | ||
3696 | 407 | 446 | ||
3697 | 408 | if os.path.sep in program: | 447 | if os.path.sep in program: |
3698 | 409 | # if program had a '/' in it, then do not search PATH | 448 | # if program had a '/' in it, then do not search PATH |
3699 | 410 | # 'which' does consider cwd here. (cd / && which bin/ls) = bin/ls | 449 | # 'which' does consider cwd here. (cd / && which bin/ls) = bin/ls |
3700 | 411 | # so effectively we set cwd to / (or target) | 450 | # so effectively we set cwd to / (or target) |
3702 | 412 | if is_exe(os.path.sep.join((target, program,))): | 451 | if is_exe(target_path(target, program)): |
3703 | 413 | return program | 452 | return program |
3704 | 414 | 453 | ||
3705 | 415 | if search is None: | 454 | if search is None: |
3706 | @@ -424,8 +463,9 @@ | |||
3707 | 424 | search = [os.path.abspath(p) for p in search] | 463 | search = [os.path.abspath(p) for p in search] |
3708 | 425 | 464 | ||
3709 | 426 | for path in search: | 465 | for path in search: |
3712 | 427 | if is_exe(os.path.sep.join((target, path, program,))): | 466 | ppath = os.path.sep.join((path, program)) |
3713 | 428 | return os.path.sep.join((path, program,)) | 467 | if is_exe(target_path(target, ppath)): |
3714 | 468 | return ppath | ||
3715 | 429 | 469 | ||
3716 | 430 | return None | 470 | return None |
3717 | 431 | 471 | ||
3718 | @@ -467,33 +507,39 @@ | |||
3719 | 467 | 507 | ||
3720 | 468 | 508 | ||
3721 | 469 | def get_architecture(target=None): | 509 | def get_architecture(target=None): |
3727 | 470 | chroot = [] | 510 | out, _ = subp(['dpkg', '--print-architecture'], capture=True, |
3728 | 471 | if target is not None: | 511 | target=target) |
3724 | 472 | chroot = ['chroot', target] | ||
3725 | 473 | out, _ = subp(chroot + ['dpkg', '--print-architecture'], | ||
3726 | 474 | capture=True) | ||
3729 | 475 | return out.strip() | 512 | return out.strip() |
3730 | 476 | 513 | ||
3731 | 477 | 514 | ||
3732 | 478 | def has_pkg_available(pkg, target=None): | 515 | def has_pkg_available(pkg, target=None): |
3737 | 479 | chroot = [] | 516 | out, _ = subp(['apt-cache', 'pkgnames'], capture=True, target=target) |
3734 | 480 | if target is not None: | ||
3735 | 481 | chroot = ['chroot', target] | ||
3736 | 482 | out, _ = subp(chroot + ['apt-cache', 'pkgnames'], capture=True) | ||
3738 | 483 | for item in out.splitlines(): | 517 | for item in out.splitlines(): |
3739 | 484 | if pkg == item.strip(): | 518 | if pkg == item.strip(): |
3740 | 485 | return True | 519 | return True |
3741 | 486 | return False | 520 | return False |
3742 | 487 | 521 | ||
3743 | 488 | 522 | ||
3744 | 523 | def get_installed_packages(target=None): | ||
3745 | 524 | (out, _) = subp(['dpkg-query', '--list'], target=target, capture=True) | ||
3746 | 525 | |||
3747 | 526 | pkgs_inst = set() | ||
3748 | 527 | for line in out.splitlines(): | ||
3749 | 528 | try: | ||
3750 | 529 | (state, pkg, other) = line.split(None, 2) | ||
3751 | 530 | except ValueError: | ||
3752 | 531 | continue | ||
3753 | 532 | if state.startswith("hi") or state.startswith("ii"): | ||
3754 | 533 | pkgs_inst.add(re.sub(":.*", "", pkg)) | ||
3755 | 534 | |||
3756 | 535 | return pkgs_inst | ||
3757 | 536 | |||
3758 | 537 | |||
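The parsing inside the new `get_installed_packages` keeps only packages whose dpkg state is installed (`ii`) or on hold (`hi`) and strips any `:arch` qualifier. A sketch of that parsing against canned `dpkg-query --list` output (the helper name and sample lines are illustrative):

```python
import re

def parse_installed(dpkg_list_output):
    # Sketch of get_installed_packages' line parsing: keep 'ii'/'hi'
    # states, drop the ':arch' suffix from package names.
    pkgs = set()
    for line in dpkg_list_output.splitlines():
        try:
            state, pkg, _other = line.split(None, 2)
        except ValueError:
            continue  # headers and short lines are skipped
        if state.startswith("hi") or state.startswith("ii"):
            pkgs.add(re.sub(":.*", "", pkg))
    return pkgs

sample = "\n".join([
    "ii  bash:amd64      4.3-14   amd64  GNU Bourne Again SHell",
    "rc  removed-pkg     1.0      all    removed but configured",
    "hi  held-pkg:i386   2.0      i386   held package",
])
```

Here `parse_installed(sample)` returns `{"bash", "held-pkg"}`; the removed-but-configured (`rc`) entry is excluded.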
3759 | 489 | def has_pkg_installed(pkg, target=None): | 538 | def has_pkg_installed(pkg, target=None): |
3760 | 490 | chroot = [] | ||
3761 | 491 | if target is not None: | ||
3762 | 492 | chroot = ['chroot', target] | ||
3763 | 493 | try: | 539 | try: |
3767 | 494 | out, _ = subp(chroot + ['dpkg-query', '--show', '--showformat', | 540 | out, _ = subp(['dpkg-query', '--show', '--showformat', |
3768 | 495 | '${db:Status-Abbrev}', pkg], | 541 | '${db:Status-Abbrev}', pkg], |
3769 | 496 | capture=True) | 542 | capture=True, target=target) |
3770 | 497 | return out.rstrip() == "ii" | 543 | return out.rstrip() == "ii" |
3771 | 498 | except ProcessExecutionError: | 544 | except ProcessExecutionError: |
3772 | 499 | return False | 545 | return False |
3773 | @@ -542,13 +588,9 @@ | |||
3774 | 542 | """Use dpkg-query to extract package pkg's version string | 588 | """Use dpkg-query to extract package pkg's version string |
3775 | 543 | and parse the version string into a dictionary | 589 | and parse the version string into a dictionary |
3776 | 544 | """ | 590 | """ |
3777 | 545 | chroot = [] | ||
3778 | 546 | if target is not None: | ||
3779 | 547 | chroot = ['chroot', target] | ||
3780 | 548 | try: | 591 | try: |
3784 | 549 | out, _ = subp(chroot + ['dpkg-query', '--show', '--showformat', | 592 | out, _ = subp(['dpkg-query', '--show', '--showformat', |
3785 | 550 | '${Version}', pkg], | 593 | '${Version}', pkg], capture=True, target=target) |
3783 | 551 | capture=True) | ||
3786 | 552 | raw = out.rstrip() | 594 | raw = out.rstrip() |
3787 | 553 | return parse_dpkg_version(raw, name=pkg, semx=semx) | 595 | return parse_dpkg_version(raw, name=pkg, semx=semx) |
3788 | 554 | except ProcessExecutionError: | 596 | except ProcessExecutionError: |
3789 | @@ -600,11 +642,11 @@ | |||
3790 | 600 | if comment.endswith("\n"): | 642 | if comment.endswith("\n"): |
3791 | 601 | comment = comment[:-1] | 643 | comment = comment[:-1] |
3792 | 602 | 644 | ||
3794 | 603 | marker = os.path.join(target, marker) | 645 | marker = target_path(target, marker) |
3795 | 604 | # if marker exists, check if there are files that would make it obsolete | 646 | # if marker exists, check if there are files that would make it obsolete |
3797 | 605 | listfiles = [os.path.join(target, "etc/apt/sources.list")] | 647 | listfiles = [target_path(target, "/etc/apt/sources.list")] |
3798 | 606 | listfiles += glob.glob( | 648 | listfiles += glob.glob( |
3800 | 607 | os.path.join(target, "etc/apt/sources.list.d/*.list")) | 649 | target_path(target, "etc/apt/sources.list.d/*.list")) |
3801 | 608 | 650 | ||
3802 | 609 | if os.path.exists(marker) and not force: | 651 | if os.path.exists(marker) and not force: |
3803 | 610 | if len(find_newer(marker, listfiles)) == 0: | 652 | if len(find_newer(marker, listfiles)) == 0: |
3804 | @@ -612,7 +654,7 @@ | |||
3805 | 612 | 654 | ||
3806 | 613 | restore_perms = [] | 655 | restore_perms = [] |
3807 | 614 | 656 | ||
3809 | 615 | abs_tmpdir = tempfile.mkdtemp(dir=os.path.join(target, 'tmp')) | 657 | abs_tmpdir = tempfile.mkdtemp(dir=target_path(target, "/tmp")) |
3810 | 616 | try: | 658 | try: |
3811 | 617 | abs_slist = abs_tmpdir + "/sources.list" | 659 | abs_slist = abs_tmpdir + "/sources.list" |
3812 | 618 | abs_slistd = abs_tmpdir + "/sources.list.d" | 660 | abs_slistd = abs_tmpdir + "/sources.list.d" |
3813 | @@ -621,8 +663,8 @@ | |||
3814 | 621 | ch_slistd = ch_tmpdir + "/sources.list.d" | 663 | ch_slistd = ch_tmpdir + "/sources.list.d" |
3815 | 622 | 664 | ||
3816 | 623 | # this file gets executed on apt-get update sometimes. (LP: #1527710) | 665 | # this file gets executed on apt-get update sometimes. (LP: #1527710) |
3819 | 624 | motd_update = os.path.join( | 666 | motd_update = target_path( |
3820 | 625 | target, "usr/lib/update-notifier/update-motd-updates-available") | 667 | target, "/usr/lib/update-notifier/update-motd-updates-available") |
3821 | 626 | pmode = set_unexecutable(motd_update) | 668 | pmode = set_unexecutable(motd_update) |
3822 | 627 | if pmode is not None: | 669 | if pmode is not None: |
3823 | 628 | restore_perms.append((motd_update, pmode),) | 670 | restore_perms.append((motd_update, pmode),) |
3824 | @@ -647,8 +689,8 @@ | |||
3825 | 647 | 'update'] | 689 | 'update'] |
3826 | 648 | 690 | ||
3827 | 649 | # not using 'run_apt_command' here so we can pass 'retries' to subp | 691 | # not using 'run_apt_command' here so we can pass 'retries' to subp |
3830 | 650 | with RunInChroot(target, allow_daemons=True) as inchroot: | 692 | with ChrootableTarget(target, allow_daemons=True) as inchroot: |
3831 | 651 | inchroot(update_cmd, env=env, retries=retries) | 693 | inchroot.subp(update_cmd, env=env, retries=retries) |
3832 | 652 | finally: | 694 | finally: |
3833 | 653 | for fname, perms in restore_perms: | 695 | for fname, perms in restore_perms: |
3834 | 654 | os.chmod(fname, perms) | 696 | os.chmod(fname, perms) |
3835 | @@ -685,9 +727,8 @@ | |||
3836 | 685 | return env, cmd | 727 | return env, cmd |
3837 | 686 | 728 | ||
3838 | 687 | apt_update(target, env=env, comment=' '.join(cmd)) | 729 | apt_update(target, env=env, comment=' '.join(cmd)) |
3842 | 688 | ric = RunInChroot(target, allow_daemons=allow_daemons) | 730 | with ChrootableTarget(target, allow_daemons=allow_daemons) as inchroot: |
3843 | 689 | with ric as inchroot: | 731 | return inchroot.subp(cmd, env=env) |
3841 | 690 | return inchroot(cmd, env=env) | ||
3844 | 691 | 732 | ||
3845 | 692 | 733 | ||
3846 | 693 | def system_upgrade(aptopts=None, target=None, env=None, allow_daemons=False): | 734 | def system_upgrade(aptopts=None, target=None, env=None, allow_daemons=False): |
3847 | @@ -716,7 +757,7 @@ | |||
3848 | 716 | """ | 757 | """ |
3849 | 717 | Look for "hook" in "target" and run it | 758 | Look for "hook" in "target" and run it |
3850 | 718 | """ | 759 | """ |
3852 | 719 | target_hook = os.path.join(target, 'curtin', hook) | 760 | target_hook = target_path(target, '/curtin/' + hook) |
3853 | 720 | if os.path.isfile(target_hook): | 761 | if os.path.isfile(target_hook): |
3854 | 721 | LOG.debug("running %s" % target_hook) | 762 | LOG.debug("running %s" % target_hook) |
3855 | 722 | subp([target_hook]) | 763 | subp([target_hook]) |
3856 | @@ -828,6 +869,18 @@ | |||
3857 | 828 | return val | 869 | return val |
3858 | 829 | 870 | ||
3859 | 830 | 871 | ||
3860 | 872 | def bytes2human(size): | ||
3861 | 873 | """convert size in bytes to human readable""" | ||
3862 | 874 | if not (isinstance(size, (int, float)) and | ||
3863 | 875 | int(size) == size and | ||
3864 | 876 | int(size) >= 0): | ||
3865 | 877 | raise ValueError('size must be an integral value') | ||
3866 | 878 | mpliers = {'B': 1, 'K': 2 ** 10, 'M': 2 ** 20, 'G': 2 ** 30, 'T': 2 ** 40} | ||
3867 | 879 | unit_order = sorted(mpliers, key=lambda x: -1 * mpliers[x]) | ||
3868 | 880 | unit = next((u for u in unit_order if (size / mpliers[u]) >= 1), 'B') | ||
3869 | 881 | return str(int(size / mpliers[unit])) + unit | ||
3870 | 882 | |||
3871 | 883 | |||
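`bytes2human` picks the largest unit that fits and truncates toward it; a sketch duplicating the function above for illustration:

```python
def bytes2human(size):
    """convert a byte count to a human readable string (truncating)"""
    if not (isinstance(size, (int, float)) and
            int(size) == size and int(size) >= 0):
        raise ValueError('size must be an integral value')
    mpliers = {'B': 1, 'K': 2 ** 10, 'M': 2 ** 20, 'G': 2 ** 30, 'T': 2 ** 40}
    # try units largest-first; fall back to 'B' for sizes below 1K
    unit_order = sorted(mpliers, key=lambda x: -1 * mpliers[x])
    unit = next((u for u in unit_order if (size / mpliers[u]) >= 1), 'B')
    return str(int(size / mpliers[unit])) + unit
```

Note the truncation: 1536 bytes renders as `1K`, not `1.5K`, and anything under 1K stays in bytes.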
3872 | 831 | def import_module(import_str): | 884 | def import_module(import_str): |
3873 | 832 | """Import a module.""" | 885 | """Import a module.""" |
3874 | 833 | __import__(import_str) | 886 | __import__(import_str) |
3875 | @@ -843,30 +896,42 @@ | |||
3876 | 843 | 896 | ||
3877 | 844 | 897 | ||
3878 | 845 | def is_file_not_found_exc(exc): | 898 | def is_file_not_found_exc(exc): |
3883 | 846 | return (isinstance(exc, IOError) and exc.errno == errno.ENOENT) | 899 | return (isinstance(exc, (IOError, OSError)) and |
3884 | 847 | 900 | hasattr(exc, 'errno') and | |
3885 | 848 | 901 | exc.errno in (errno.ENOENT, errno.EIO, errno.ENXIO)) | |
3886 | 849 | def lsb_release(): | 902 | |
3887 | 903 | |||
3888 | 904 | def _lsb_release(target=None): | ||
3889 | 850 | fmap = {'Codename': 'codename', 'Description': 'description', | 905 | fmap = {'Codename': 'codename', 'Description': 'description', |
3890 | 851 | 'Distributor ID': 'id', 'Release': 'release'} | 906 | 'Distributor ID': 'id', 'Release': 'release'} |
3891 | 907 | |||
3892 | 908 | data = {} | ||
3893 | 909 | try: | ||
3894 | 910 | out, _ = subp(['lsb_release', '--all'], capture=True, target=target) | ||
3895 | 911 | for line in out.splitlines(): | ||
3896 | 912 | fname, _, val = line.partition(":") | ||
3897 | 913 | if fname in fmap: | ||
3898 | 914 | data[fmap[fname]] = val.strip() | ||
3899 | 915 | missing = [k for k in fmap.values() if k not in data] | ||
3900 | 916 | if len(missing): | ||
3901 | 917 | LOG.warn("Missing fields in lsb_release --all output: %s", | ||
3902 | 918 | ','.join(missing)) | ||
3903 | 919 | |||
3904 | 920 | except ProcessExecutionError as err: | ||
3905 | 921 | LOG.warn("Unable to get lsb_release --all: %s", err) | ||
3906 | 922 | data = {v: "UNAVAILABLE" for v in fmap.values()} | ||
3907 | 923 | |||
3908 | 924 | return data | ||
3909 | 925 | |||
3910 | 926 | |||
3911 | 927 | def lsb_release(target=None): | ||
3912 | 928 | if target_path(target) != "/": | ||
3913 | 929 | # do not use or update cache if target is provided | ||
3914 | 930 | return _lsb_release(target) | ||
3915 | 931 | |||
3916 | 852 | global _LSB_RELEASE | 932 | global _LSB_RELEASE |
3917 | 853 | if not _LSB_RELEASE: | 933 | if not _LSB_RELEASE: |
3934 | 854 | data = {} | 934 | data = _lsb_release() |
3919 | 855 | try: | ||
3920 | 856 | out, err = subp(['lsb_release', '--all'], capture=True) | ||
3921 | 857 | for line in out.splitlines(): | ||
3922 | 858 | fname, tok, val = line.partition(":") | ||
3923 | 859 | if fname in fmap: | ||
3924 | 860 | data[fmap[fname]] = val.strip() | ||
3925 | 861 | missing = [k for k in fmap.values() if k not in data] | ||
3926 | 862 | if len(missing): | ||
3927 | 863 | LOG.warn("Missing fields in lsb_release --all output: %s", | ||
3928 | 864 | ','.join(missing)) | ||
3929 | 865 | |||
3930 | 866 | except ProcessExecutionError as e: | ||
3931 | 867 | LOG.warn("Unable to get lsb_release --all: %s", e) | ||
3932 | 868 | data = {v: "UNAVAILABLE" for v in fmap.values()} | ||
3933 | 869 | |||
3935 | 870 | _LSB_RELEASE.update(data) | 935 | _LSB_RELEASE.update(data) |
3936 | 871 | return _LSB_RELEASE | 936 | return _LSB_RELEASE |
3937 | 872 | 937 | ||
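The field parsing inside `_lsb_release` splits each `lsb_release --all` line on the first colon and keeps only the four fields named in `fmap`. A sketch of that parsing with canned output (the helper name and sample text are illustrative):

```python
def parse_lsb(out):
    # Sketch of _lsb_release's line parsing: partition on ':' and map
    # the known field names to their short keys.
    fmap = {'Codename': 'codename', 'Description': 'description',
            'Distributor ID': 'id', 'Release': 'release'}
    data = {}
    for line in out.splitlines():
        fname, _, val = line.partition(":")
        if fname in fmap:
            data[fmap[fname]] = val.strip()
    return data

sample = ("Distributor ID:\tUbuntu\n"
          "Description:\tUbuntu 16.04 LTS\n"
          "Release:\t16.04\n"
          "Codename:\txenial\n")
```

`parse_lsb(sample)` yields `{'id': 'Ubuntu', 'description': 'Ubuntu 16.04 LTS', 'release': '16.04', 'codename': 'xenial'}`; unknown lines are ignored.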
3938 | @@ -881,8 +946,7 @@ | |||
3939 | 881 | 946 | ||
3940 | 882 | 947 | ||
3941 | 883 | def json_dumps(data): | 948 | def json_dumps(data): |
3944 | 884 | return json.dumps(data, indent=1, sort_keys=True, | 949 | return json.dumps(data, indent=1, sort_keys=True, separators=(',', ': ')) |
3943 | 885 | separators=(',', ': ')).encode('utf-8') | ||
3945 | 886 | 950 | ||
3946 | 887 | 951 | ||
3947 | 888 | def get_platform_arch(): | 952 | def get_platform_arch(): |
3948 | @@ -895,4 +959,137 @@ | |||
3949 | 895 | } | 959 | } |
3950 | 896 | return platform2arch.get(platform.machine(), platform.machine()) | 960 | return platform2arch.get(platform.machine(), platform.machine()) |
3951 | 897 | 961 | ||
3952 | 962 | |||
3953 | 963 | def basic_template_render(content, params): | ||
3954 | 964 | """This does simple replacement of bash variable like templates. | ||
3955 | 965 | |||
3956 | 966 | It identifies patterns like ${a} or $a and can also identify patterns like | ||
3957 | 967 | ${a.b} or $a.b which will look for a key 'b' in the dictionary rooted | ||
3958 | 968 | by key 'a'. | ||
3959 | 969 | """ | ||
3960 | 970 | |||
3961 | 971 | def replacer(match): | ||
3962 | 972 | """ replacer | ||
3963 | 973 | replacer used in regex match to replace content | ||
3964 | 974 | """ | ||
3965 | 975 | # Only 1 of the 2 groups will actually have a valid entry. | ||
3966 | 976 | name = match.group(1) | ||
3967 | 977 | if name is None: | ||
3968 | 978 | name = match.group(2) | ||
3969 | 979 | if name is None: | ||
3970 | 980 | raise RuntimeError("Match encountered but no valid group present") | ||
3971 | 981 | path = collections.deque(name.split(".")) | ||
3972 | 982 | selected_params = params | ||
3973 | 983 | while len(path) > 1: | ||
3974 | 984 | key = path.popleft() | ||
3975 | 985 | if not isinstance(selected_params, dict): | ||
3976 | 986 | raise TypeError("Can not traverse into" | ||
3977 | 987 | " non-dictionary '%s' of type %s while" | ||
3978 | 988 | " looking for subkey '%s'" | ||
3979 | 989 | % (selected_params, | ||
3980 | 990 | selected_params.__class__.__name__, | ||
3981 | 991 | key)) | ||
3982 | 992 | selected_params = selected_params[key] | ||
3983 | 993 | key = path.popleft() | ||
3984 | 994 | if not isinstance(selected_params, dict): | ||
3985 | 995 | raise TypeError("Can not extract key '%s' from non-dictionary" | ||
3986 | 996 | " '%s' of type %s" | ||
3987 | 997 | % (key, selected_params, | ||
3988 | 998 | selected_params.__class__.__name__)) | ||
3989 | 999 | return str(selected_params[key]) | ||
3990 | 1000 | |||
3991 | 1001 | return BASIC_MATCHER.sub(replacer, content) | ||
3992 | 1002 | |||
3993 | 1003 | |||
3994 | 1004 | def render_string(content, params): | ||
3995 | 1005 | """ render_string | ||
3996 | 1006 | render a string following replacement rules as defined in | ||
3997 | 1007 | basic_template_render returning the string | ||
3998 | 1008 | """ | ||
3999 | 1009 | if not params: | ||
4000 | 1010 | params = {} | ||
4001 | 1011 | return basic_template_render(content, params) | ||
4002 | 1012 | |||
4003 | 1013 | |||
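`render_string` substitutes `${a}`, `$a` and dotted `${a.b}` forms against a params dictionary. A self-contained sketch using the same matcher (simplified: the original's descriptive TypeError messages are omitted):

```python
import re

# same pattern as curtin's BASIC_MATCHER: ${name} or $name
BASIC_MATCHER = re.compile(r'\$\{([A-Za-z0-9_.]+)\}|\$([A-Za-z0-9_.]+)')

def render_string(content, params):
    """Sketch of curtin's basic template rendering."""
    params = params or {}

    def replacer(match):
        # only one of the two groups matches for a given occurrence
        name = match.group(1) or match.group(2)
        value = params
        for key in name.split("."):
            value = value[key]  # dotted names walk nested dictionaries
        return str(value)

    return BASIC_MATCHER.sub(replacer, content)
```

For example, `render_string("ip=${net.addr} host=$host", {"net": {"addr": "10.0.0.2"}, "host": "node1"})` produces `"ip=10.0.0.2 host=node1"`.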
4004 | 1014 | def is_resolvable(name): | ||
4005 | 1015 | """determine if a url is resolvable, return a boolean | ||
4006 | 1016 | This also attempts to be resilient against dns redirection. | ||
4007 | 1017 | |||
4008 | 1018 | Note that normal nsswitch resolution is used here. So in order | ||
4009 | 1019 | to avoid any utilization of 'search' entries in /etc/resolv.conf | ||
4010 | 1020 | we have to append '.'. | ||
4011 | 1021 | |||
4012 | 1022 | The top-level 'invalid' domain is invalid per RFC, and example.com | ||
4013 | 1023 | should not exist either. The random entry will be resolved inside | ||
4014 | 1024 | the search list. | ||
4015 | 1025 | """ | ||
4016 | 1026 | global _DNS_REDIRECT_IP | ||
4017 | 1027 | if _DNS_REDIRECT_IP is None: | ||
4018 | 1028 | badips = set() | ||
4019 | 1029 | badnames = ("does-not-exist.example.com.", "example.invalid.") | ||
4020 | 1030 | badresults = {} | ||
4021 | 1031 | for iname in badnames: | ||
4022 | 1032 | try: | ||
4023 | 1033 | result = socket.getaddrinfo(iname, None, 0, 0, | ||
4024 | 1034 | socket.SOCK_STREAM, | ||
4025 | 1035 | socket.AI_CANONNAME) | ||
4026 | 1036 | badresults[iname] = [] | ||
4027 | 1037 | for (_, _, _, cname, sockaddr) in result: | ||
4028 | 1038 | badresults[iname].append("%s: %s" % (cname, sockaddr[0])) | ||
4029 | 1039 | badips.add(sockaddr[0]) | ||
4030 | 1040 | except (socket.gaierror, socket.error): | ||
4031 | 1041 | pass | ||
4032 | 1042 | _DNS_REDIRECT_IP = badips | ||
4033 | 1043 | if badresults: | ||
4034 | 1044 | LOG.debug("detected dns redirection: %s", badresults) | ||
4035 | 1045 | |||
4036 | 1046 | try: | ||
4037 | 1047 | result = socket.getaddrinfo(name, None) | ||
4038 | 1048 | # check first result's sockaddr field | ||
4039 | 1049 | addr = result[0][4][0] | ||
4040 | 1050 | if addr in _DNS_REDIRECT_IP: | ||
4041 | 1051 | LOG.debug("dns %s in _DNS_REDIRECT_IP", name) | ||
4042 | 1052 | return False | ||
4043 | 1053 | LOG.debug("dns %s resolved to '%s'", name, result) | ||
4044 | 1054 | return True | ||
4045 | 1055 | except (socket.gaierror, socket.error): | ||
4046 | 1056 | LOG.debug("dns %s failed to resolve", name) | ||
4047 | 1057 | return False | ||
4048 | 1058 | |||
4049 | 1059 | |||
4050 | 1060 | def is_resolvable_url(url): | ||
4051 | 1061 | """determine if this url is resolvable (existing or ip).""" | ||
4052 | 1062 | return is_resolvable(urlparse(url).hostname) | ||
4053 | 1063 | |||
4054 | 1064 | |||
4055 | 1065 | def target_path(target, path=None): | ||
4056 | 1066 | # return 'path' inside target, accepting target as None | ||
4057 | 1067 | if target in (None, ""): | ||
4058 | 1068 | target = "/" | ||
4059 | 1069 | elif not isinstance(target, string_types): | ||
4060 | 1070 | raise ValueError("Unexpected input for target: %s" % target) | ||
4061 | 1071 | else: | ||
4062 | 1072 | target = os.path.abspath(target) | ||
4063 | 1073 | # abspath("//") returns "//" specifically for 2 slashes. | ||
4064 | 1074 | if target.startswith("//"): | ||
4065 | 1075 | target = target[1:] | ||
4066 | 1076 | |||
4067 | 1077 | if not path: | ||
4068 | 1078 | return target | ||
4069 | 1079 | |||
4070 | 1080 | # os.path.join("/etc", "/foo") returns "/foo". Chomp all leading /. | ||
4071 | 1081 | while len(path) and path[0] == "/": | ||
4072 | 1082 | path = path[1:] | ||
4073 | 1083 | |||
4074 | 1084 | return os.path.join(target, path) | ||
4075 | 1085 | |||
4076 | 1086 | |||
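`target_path` normalizes its inputs so a `None` or empty target means the host root and absolute paths cannot escape the target. A sketch duplicating the function above (simplified: plain `str` stands in for the original's six `string_types`):

```python
import os

def target_path(target, path=None):
    """return 'path' inside target, accepting target of None as '/'"""
    if target in (None, ""):
        target = "/"
    elif not isinstance(target, str):
        raise ValueError("Unexpected input for target: %s" % target)
    else:
        target = os.path.abspath(target)
        # abspath("//") returns "//" specifically for 2 slashes
        if target.startswith("//"):
            target = target[1:]

    if not path:
        return target

    # os.path.join("/etc", "/foo") returns "/foo": chomp leading '/'
    # so the joined path stays inside the target
    while len(path) and path[0] == "/":
        path = path[1:]

    return os.path.join(target, path)
```

So `target_path("/target", "/etc/resolv.conf")` gives `"/target/etc/resolv.conf"` rather than the host's `/etc/resolv.conf`, and `target_path(None, "dev")` gives `"/dev"`.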
4077 | 1087 | class RunInChroot(ChrootableTarget): | ||
4078 | 1088 | """Backwards compatibility for RunInChroot (LP: #1617375). | ||
4079 | 1089 | It needs to work like: | ||
4080 | 1090 | with RunInChroot("/target") as in_chroot: | ||
4081 | 1091 | in_chroot(["your", "chrooted", "command"])""" | ||
4082 | 1092 | __call__ = ChrootableTarget.subp | ||
4083 | 1093 | |||
4084 | 1094 | |||
4085 | 898 | # vi: ts=4 expandtab syntax=python | 1095 | # vi: ts=4 expandtab syntax=python |
4086 | 899 | 1096 | ||
4087 | === modified file 'debian/changelog' | |||
4088 | --- debian/changelog 2016-10-03 17:23:32 +0000 | |||
4089 | +++ debian/changelog 2016-10-03 18:55:20 +0000 | |||
4090 | @@ -1,8 +1,38 @@ | |||
4092 | 1 | curtin (0.1.0~bzr399-0ubuntu1~16.04.1ubuntu1) UNRELEASED; urgency=medium | 1 | curtin (0.1.0~bzr425-0ubuntu1~16.04.1) xenial-proposed; urgency=medium |
4093 | 2 | 2 | ||
4094 | 3 | [ Scott Moser ] | ||
4095 | 3 | * debian/new-upstream-snapshot: add writing of debian changelog entries. | 4 | * debian/new-upstream-snapshot: add writing of debian changelog entries. |
4096 | 4 | 5 | ||
4098 | 5 | -- Scott Moser <smoser@ubuntu.com> Mon, 03 Oct 2016 13:23:11 -0400 | 6 | [ Ryan Harper ] |
4099 | 7 | * New upstream snapshot. | ||
4100 | 8 | - unittest,tox.ini: catch and fix issue with trusty-level mock of open | ||
4101 | 9 | - block/mdadm: add option to ignore mdadm_assemble errors (LP: #1618429) | ||
4102 | 10 | - curtin/doc: overhaul curtin documentation for readthedocs.org (LP: #1351085) | ||
4103 | 11 | - curtin.util: re-add support for RunInChroot (LP: #1617375) | ||
4104 | 12 | - curtin/net: overhaul of eni rendering to handle mixed ipv4/ipv6 configs | ||
4105 | 13 | - curtin.block: refactor clear_holders logic into block.clear_holders and cli cmd | ||
4106 | 14 | - curtin.apply_net should exit non-zero upon exception. (LP: #1615780) | ||
4107 | 15 | - apt: fix bug in disable_suites if sources.list line is blank. | ||
4108 | 16 | - vmtests: disable Wily in vmtests | ||
4109 | 17 | - Fix the unittests for test_apt_source. | ||
4110 | 18 | - get CURTIN_VMTEST_PARALLEL shown correctly in jenkins-runner output | ||
4111 | 19 | - fix vmtest check_file_strippedline to strip lines before comparing | ||
4112 | 20 | - fix whitespace damage in tests/vmtests/__init__.py | ||
4113 | 21 | - fix dpkg-reconfigure when debconf_selections was provided. (LP: #1609614) | ||
4114 | 22 | - fix apt tests on non-intel arch | ||
4115 | 23 | - Add apt features to curtin. (LP: #1574113) | ||
4116 | 24 | - vmtest: easier use of parallel and controlling timeouts | ||
4117 | 25 | - mkfs.vfat: add force flag for formatting whole disks (LP: #1597923) | ||
4118 | 26 | - block.mkfs: fix sectorsize flag (LP: #1597522) | ||
4119 | 27 | - block_meta: cleanup use of sys_block_path and handle cciss knames (LP: #1562249) | ||
4120 | 28 | - block.get_blockdev_sector_size: handle _lsblock multi result return (LP: #1598310) | ||
4121 | 29 | - util: add target (chroot) support to subp, add target_path helper. | ||
4122 | 30 | - block_meta: fallback to parted if blkid does not produce output (LP: #1524031) | ||
4123 | 31 | - commands.block_wipe: correct default wipe mode to 'superblock' | ||
4124 | 32 | - tox.ini: run coverage normally rather than separately | ||
4125 | 33 | - move uefi boot knowledge from launch and vmtest to xkvm | ||
4126 | 34 | |||
4127 | 35 | -- Ryan Harper <ryan.harper@canonical.com> Mon, 03 Oct 2016 13:43:54 -0500 | ||
4128 | 6 | 36 | ||
4129 | 7 | curtin (0.1.0~bzr399-0ubuntu1~16.04.1) xenial-proposed; urgency=medium | 37 | curtin (0.1.0~bzr399-0ubuntu1~16.04.1) xenial-proposed; urgency=medium |
4130 | 8 | 38 | ||
4131 | 9 | 39 | ||
4132 | === modified file 'doc/conf.py' | |||
4133 | --- doc/conf.py 2015-10-02 16:19:07 +0000 | |||
4134 | +++ doc/conf.py 2016-10-03 18:55:20 +0000 | |||
4135 | @@ -13,6 +13,11 @@ | |||
4136 | 13 | 13 | ||
4137 | 14 | import sys, os | 14 | import sys, os |
4138 | 15 | 15 | ||
4139 | 16 | # Fix path so we can import curtin.__version__ | ||
4140 | 17 | sys.path.insert(1, os.path.realpath(os.path.join( | ||
4141 | 18 | os.path.dirname(__file__), '..'))) | ||
4142 | 19 | import curtin | ||
4143 | 20 | |||
4144 | 16 | # If extensions (or modules to document with autodoc) are in another directory, | 21 | # If extensions (or modules to document with autodoc) are in another directory, |
4145 | 17 | # add these directories to sys.path here. If the directory is relative to the | 22 | # add these directories to sys.path here. If the directory is relative to the |
4146 | 18 | # documentation root, use os.path.abspath to make it absolute, like shown here. | 23 | # documentation root, use os.path.abspath to make it absolute, like shown here. |
4147 | @@ -41,16 +46,16 @@ | |||
4148 | 41 | 46 | ||
4149 | 42 | # General information about the project. | 47 | # General information about the project. |
4150 | 43 | project = u'curtin' | 48 | project = u'curtin' |
4152 | 44 | copyright = u'2013, Scott Moser' | 49 | copyright = u'2016, Scott Moser, Ryan Harper' |
4153 | 45 | 50 | ||
4154 | 46 | # The version info for the project you're documenting, acts as replacement for | 51 | # The version info for the project you're documenting, acts as replacement for |
4155 | 47 | # |version| and |release|, also used in various other places throughout the | 52 | # |version| and |release|, also used in various other places throughout the |
4156 | 48 | # built documents. | 53 | # built documents. |
4157 | 49 | # | 54 | # |
4158 | 50 | # The short X.Y version. | 55 | # The short X.Y version. |
4160 | 51 | version = '0.3' | 56 | version = curtin.__version__ |
4161 | 52 | # The full version, including alpha/beta/rc tags. | 57 | # The full version, including alpha/beta/rc tags. |
4163 | 53 | release = '0.3' | 58 | release = version |
4164 | 54 | 59 | ||
4165 | 55 | # The language for content autogenerated by Sphinx. Refer to documentation | 60 | # The language for content autogenerated by Sphinx. Refer to documentation |
4166 | 56 | # for a list of supported languages. | 61 | # for a list of supported languages. |
4167 | @@ -93,6 +98,18 @@ | |||
4168 | 93 | # a list of builtin themes. | 98 | # a list of builtin themes. |
4169 | 94 | html_theme = 'classic' | 99 | html_theme = 'classic' |
4170 | 95 | 100 | ||
4171 | 101 | # on_rtd is whether we are on readthedocs.org, this line of code grabbed from | ||
4172 | 102 | # docs.readthedocs.org | ||
4173 | 103 | on_rtd = os.environ.get('READTHEDOCS', None) == 'True' | ||
4174 | 104 | |||
4175 | 105 | if not on_rtd: # only import and set the theme if we're building docs locally | ||
4176 | 106 | import sphinx_rtd_theme | ||
4177 | 107 | html_theme = 'sphinx_rtd_theme' | ||
4178 | 108 | html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] | ||
4179 | 109 | |||
4180 | 110 | # otherwise, readthedocs.org uses their theme by default, so no need to specify | ||
4181 | 111 | # it | ||
4182 | 112 | |||
4183 | 96 | # Theme options are theme-specific and customize the look and feel of a theme | 113 | # Theme options are theme-specific and customize the look and feel of a theme |
4184 | 97 | # further. For a list of options available for each theme, see the | 114 | # further. For a list of options available for each theme, see the |
4185 | 98 | # documentation. | 115 | # documentation. |
4186 | @@ -120,7 +137,7 @@ | |||
4187 | 120 | # Add any paths that contain custom static files (such as style sheets) here, | 137 | # Add any paths that contain custom static files (such as style sheets) here, |
4188 | 121 | # relative to this directory. They are copied after the builtin static files, | 138 | # relative to this directory. They are copied after the builtin static files, |
4189 | 122 | # so a file named "default.css" will overwrite the builtin "default.css". | 139 | # so a file named "default.css" will overwrite the builtin "default.css". |
4191 | 123 | html_static_path = ['static'] | 140 | #html_static_path = ['static'] |
4192 | 124 | 141 | ||
4193 | 125 | # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, | 142 | # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, |
4194 | 126 | # using the given strftime format. | 143 | # using the given strftime format. |
4195 | 127 | 144 | ||
4196 | === removed file 'doc/devel/README-vmtest.txt' | |||
4197 | --- doc/devel/README-vmtest.txt 2016-02-12 21:54:46 +0000 | |||
4198 | +++ doc/devel/README-vmtest.txt 1970-01-01 00:00:00 +0000 | |||
4199 | @@ -1,152 +0,0 @@ | |||
4200 | 1 | == Background == | ||
4201 | 2 | Curtin includes a mechanism called 'vmtest' that allows it to actually | ||
4202 | 3 | do installs and validate a number of configurations. | ||
4203 | 4 | |||
4204 | 5 | The general flow of the vmtests is: | ||
4205 | 6 | 1. each test has an associated yaml config file for curtin in examples/tests | ||
4206 | 7 | 2. uses curtin-pack to create the user-data for cloud-init to trigger install | ||
4207 | 8 | 3. create and install a system using 'tools/launch'. | ||
4208 | 9 | 3.1 The install environment is booted from a maas ephemeral image. | ||
4209 | 10 | 3.2 kernel & initrd used are from maas images (not part of the image) | ||
4210 | 11 | 3.3 network by default is handled via user networking | ||
4211 | 12 | 3.4 It creates all empty disks required | ||
4212 | 13 | 3.5 cloud-init datasource is provided by launch | ||
4213 | 14 | a) like: ds=nocloud-net;seedfrom=http://10.7.0.41:41518/ | ||
4214 | 15 | provided by python webserver start_http | ||
4215 | 16 | b) via -drive file=/tmp/launch.8VOiOn/seed.img,if=virtio,media=cdrom | ||
4216 | 17 | as a seed disk (if booted without external kernel) | ||
4217 | 18 | 3.6 dependencies and other preparations are installed at the beginning by | ||
4218 | 19 | curtin inside the ephemeral image prior to configuring the target | ||
4219 | 20 | 4. power off the system. | ||
4220 | 21 | 5. configure a 'NoCloud' datasource seed image that provides scripts that | ||
4221 | 22 | will run on first boot. | ||
4222 | 23 | 5.1 this will contain all our code to gather health data on the install | ||
4223 | 24 | 5.2 by cloud-init design this runs only once per instance, if you start | ||
4224 | 25 | the system again this won't be called again | ||
4225 | 26 | 6. boot the installed system with 'tools/xkvm'. | ||
4226 | 27 | 6.1 reuses the disks that were installed/configured in the former steps | ||
4227 | 28 | 6.2 also adds an output disk | ||
4228 | 29 | 6.3 additionally the seed image for the data gathering is added | ||
4229 | 30 | 6.4 On this boot it will run the provided scripts, write their output to a | ||
4230 | 31 | "data" disk and then shut itself down. | ||
4231 | 32 | 7. extract the data from the output disk | ||
4232 | 33 | 8. vmtest python code now verifies if the output is as expected. | ||
4233 | 34 | |||
4234 | 35 | == Debugging == | ||
4235 | 36 | At 3.1 | ||
4236 | 37 | - one can pull data out of the maas image with | ||
4237 | 38 | sudo mount-image-callback your.img -- sh -c 'COMMAND' | ||
4238 | 39 | e.g. sudo mount-image-callback your.img -- sh -c 'cp $MOUNTPOINT/boot/* .' | ||
4239 | 40 | At step 3.6 -> 4. | ||
4240 | 41 | - tools/launch can be called in a way to give you console access | ||
4241 | 42 | to do so just call tools/launch but drop the -serial=x parameter. | ||
4242 | 43 | One might want to change "'power_state': {'mode': 'poweroff'}" to avoid | ||
4243 | 44 | the auto reboot before getting control | ||
4244 | 45 | Replace the directory usually seen in the launch calls with a clean fresh | ||
4245 | 46 | directory | ||
4246 | 47 | - In /curtin curtin and its config can be found | ||
4247 | 48 | - if the system gets that far cloud-init will create a user ubuntu/passw0rd | ||
4248 | 49 | - otherwise one can use a cloud-image from https://cloud-images.ubuntu.com/ | ||
4249 | 50 | and add a backdoor user via | ||
4250 | 51 | bzr branch lp:~maas-maintainers/maas/backdoor-image backdoor-image | ||
4251 | 52 | sudo ./backdoor-image -v --user=<USER> --password-auth --password=<PW> IMG | ||
4252 | 53 | At step 6 -> 7 | ||
4253 | 54 | - You might want to keep all the temporary images around. | ||
4254 | 55 | To do so you can set CURTIN_VMTEST_KEEP_DATA_PASS=all: | ||
4255 | 56 | export CURTIN_VMTEST_KEEP_DATA_PASS=all CURTIN_VMTEST_KEEP_DATA_FAIL=all | ||
4256 | 57 | That will keep the /tmp/tmpXXXXX directories and all files in there for | ||
4257 | 58 | further execution. | ||
4258 | 59 | At step 7 | ||
4259 | 60 | - You might want to take a look at the output disk yourself. | ||
4260 | 61 | It is a normal qcow image, so one can use mount-image-callback as described | ||
4261 | 62 | above | ||
4262 | 63 | - to invoke xkvm on your own take the command you see in the output and | ||
4263 | 64 | remove the "-serial ..." but add -nographic instead | ||
4264 | 65 | For graphical console one can add --vnc 127.0.0.1:1 | ||
4265 | 66 | |||
4266 | 67 | == Setup == | ||
4267 | 68 | In order to run vmtest you'll need some dependencies. To get them, you | ||
4268 | 69 | can run: | ||
4269 | 70 | make vmtest-deps | ||
4270 | 71 | |||
4271 | 72 | That will install all necessary dependencies. | ||
4272 | 73 | |||
4273 | 74 | == Running == | ||
4274 | 75 | Running tests is done most simply by: | ||
4275 | 76 | |||
4276 | 77 | make vmtest | ||
4277 | 78 | |||
4278 | 79 | If you wish to all tests in test_network.py, do so with: | ||
4279 | 80 | sudo PATH=$PWD/tools:$PATH nosetests3 tests/vmtests/test_network.py | ||
4280 | 81 | |||
4281 | 82 | Or run a single test with: | ||
4282 | 83 | sudo PATH=$PWD/tools:$PATH nosetests3 tests/vmtests/test_network.py:WilyTestBasic | ||
4283 | 84 | |||
4284 | 85 | Note: | ||
4285 | 86 | * currently, the tests have to run as root. The reason for this is that | ||
4286 | 87 | the kernel and initramfs to boot are extracted from the maas ephemeral | ||
4287 | 88 | image. This should be fixed at some point, and then 'make vmtest' | ||
4288 | 89 | |||
4289 | 90 | The tests themselves don't actually have to run as root, but the | ||
4290 | 91 | test setup does. | ||
4291 | 92 | * the 'tools' directory must be in your path. | ||
4292 | 93 | * test will set apt_proxy in the guests to the value of | ||
4293 | 94 | 'apt_proxy' environment variable. If that is not set it will | ||
4294 | 95 | look at the host's apt config and read 'Acquire::HTTP::Proxy' | ||
4295 | 96 | |||
4296 | 97 | == Environment Variables == | ||
4297 | 98 | Some environment variables affect the running of vmtest | ||
4298 | 99 | * apt_proxy: | ||
4299 | 100 | test will set apt_proxy in the guests to the value of 'apt_proxy'. | ||
4300 | 101 | If that is not set it will look at the host's apt config and read | ||
4301 | 102 | 'Acquire::HTTP::Proxy' | ||
4302 | 103 | |||
4303 | 104 | * CURTIN_VMTEST_KEEP_DATA_PASS CURTIN_VMTEST_KEEP_DATA_FAIL: | ||
4304 | 105 | default: | ||
4305 | 106 | CURTIN_VMTEST_KEEP_DATA_PASS=none | ||
4306 | 107 | CURTIN_VMTEST_KEEP_DATA_FAIL=all | ||
4307 | 108 | These 2 variables determine what portions of the temporary | ||
4308 | 109 | test data are kept. | ||
4309 | 110 | |||
4310 | 111 | The variables contain a comma ',' delimited list of directories | ||
4311 | 112 | that should be kept in the case of pass or fail. Additionally, | ||
4312 | 113 | the values 'all' and 'none' are accepted. | ||
4313 | 114 | |||
4314 | 115 | Each vmtest that runs has its own sub-directory under the top level | ||
4315 | 116 | CURTIN_VMTEST_TOPDIR. In that directory are directories: | ||
4316 | 117 | boot: inputs to the system boot (after install) | ||
4317 | 118 | install: install phase related files | ||
4318 | 119 | disks: the disks used for installation and boot | ||
4319 | 120 | logs: install and boot logs | ||
4320 | 121 | collect: data collected by the boot phase | ||
4321 | 122 | |||
4322 | 123 | * CURTIN_VMTEST_TOPDIR: default $TMPDIR/vmtest-<timestamp> | ||
4323 | 124 | vmtest puts all test data under this value. By default, it creates | ||
4324 | 125 | a directory in TMPDIR (/tmp) named with as "vmtest-<timestamp>" | ||
4325 | 126 | |||
4326 | 127 | If you set this value, you must ensure that the directory is either | ||
4327 | 128 | non-existant or clean. | ||
4328 | 129 | |||
4329 | 130 | * CURTIN_VMTEST_LOG: default $TMPDIR/vmtest-<timestamp>.log | ||
4330 | 131 | vmtest writes extended log information to this file. | ||
4331 | 132 | The default puts the log along side the TOPDIR. | ||
4332 | 133 | |||
4333 | 134 | * CURTIN_VMTEST_IMAGE_SYNC: default false (boolean) | ||
4334 | 135 | if set to true, each run will attempt a sync of images. | ||
4335 | 136 | If you want to make sure images are always up to date, then set to true. | ||
4336 | 137 | |||
4337 | 138 | * CURTIN_VMTEST_BRIDGE: default 'user' | ||
4338 | 139 | the network devices will be attached to this bridge. The default is | ||
4339 | 140 | 'user', which means to use qemu user mode networking. Set it to | ||
4340 | 141 | 'virbr0' or 'lxcbr0' to use those bridges and then be able to ssh | ||
4341 | 142 | in directly. | ||
4342 | 143 | |||
4343 | 144 | * IMAGE_DIR: default /srv/images | ||
4344 | 145 | vmtest keeps a mirror of maas ephemeral images in this directory. | ||
4345 | 146 | |||
4346 | 147 | * IMAGES_TO_KEEP: default 1 | ||
4347 | 148 | keep this number of images of each release in the IMAGE_DIR. | ||
4348 | 149 | |||
4349 | 150 | Environment 'boolean' values: | ||
4350 | 151 | For boolean environment variables the value is considered True | ||
4351 | 152 | if it is any value other than case insensitive 'false', '' or "0" | ||
4352 | 153 | 0 | ||
4353 | === removed file 'doc/devel/README.txt' | |||
4354 | --- doc/devel/README.txt 2015-03-11 13:19:43 +0000 | |||
4355 | +++ doc/devel/README.txt 1970-01-01 00:00:00 +0000 | |||
4356 | @@ -1,55 +0,0 @@ | |||
4357 | 1 | ## curtin development ## | ||
4358 | 2 | |||
4359 | 3 | This document describes how to use kvm and ubuntu cloud images | ||
4360 | 4 | to develop curtin or test install configurations inside kvm. | ||
4361 | 5 | |||
4362 | 6 | ## get some dependencies ## | ||
4363 | 7 | sudo apt-get -qy install kvm libvirt-bin cloud-utils bzr | ||
4364 | 8 | |||
4365 | 9 | ## get cloud image to boot (-disk1.img) and one to install (-root.tar.gz) | ||
4366 | 10 | mkdir -p ~/download | ||
4367 | 11 | DLDIR=$( cd ~/download && pwd ) | ||
4368 | 12 | rel="trusty" | ||
4369 | 13 | arch=amd64 | ||
4370 | 14 | burl="http://cloud-images.ubuntu.com/$rel/current/" | ||
4371 | 15 | for f in $rel-server-cloudimg-${arch}-root.tar.gz $rel-server-cloudimg-${arch}-disk1.img; do | ||
4372 | 16 | wget "$burl/$f" -O $DLDIR/$f; done | ||
4373 | 17 | ( cd $DLDIR && qemu-img convert -O qcow $rel-server-cloudimg-${arch}-disk1.img $rel-server-cloudimg-${arch}-disk1.qcow2) | ||
4374 | 18 | |||
4375 | 19 | BOOTIMG="$DLDIR/$rel-server-cloudimg-${arch}-disk1.qcow2" | ||
4376 | 20 | ROOTTGZ="$DLDIR/$rel-server-cloudimg-${arch}-root.tar.gz" | ||
4377 | 21 | |||
4378 | 22 | ## get curtin | ||
4379 | 23 | mkdir -p ~/src | ||
4380 | 24 | bzr init-repo ~/src/curtin | ||
4381 | 25 | ( cd ~/src/curtin && bzr branch lp:curtin trunk.dist ) | ||
4382 | 26 | ( cd ~/src/curtin && bzr branch trunk.dist trunk ) | ||
4383 | 27 | |||
4384 | 28 | ## work with curtin | ||
4385 | 29 | cd ~/src/curtin/trunk | ||
4386 | 30 | # use 'launch' to launch a kvm instance with user data to pack | ||
4387 | 31 | # up local curtin and run it inside instance. | ||
4388 | 32 | ./tools/launch $BOOTIMG --publish $ROOTTGZ -- curtin install "PUBURL/${ROOTTGZ##*/}" | ||
4389 | 33 | |||
4390 | 34 | ## notes about 'launch' ## | ||
4391 | 35 | * launch has --help so you can see that for some info. | ||
4392 | 36 | * '--publish' adds a web server at ${HTTP_PORT:-9923} | ||
4393 | 37 | and puts the files you want available there. You can reference | ||
4394 | 38 | this url in config or cmdline with 'PUBURL'. For example | ||
4395 | 39 | '--publish foo.img' will put 'foo.img' at PUBURL/foo.img. | ||
4396 | 40 | * launch sets 'ubuntu' user password to 'passw0rd' | ||
4397 | 41 | * launch runs 'kvm -curses' | ||
4398 | 42 | kvm -curses keyboard info: | ||
4399 | 43 | 'alt-2' to go to qemu console | ||
4400 | 44 | * launch puts serial console to 'serial.log' (look there for stuff) | ||
4401 | 45 | * when logged in | ||
4402 | 46 | * you can look at /var/log/cloud-init-output.log | ||
4403 | 47 | * archive should be extracted in /curtin | ||
4404 | 48 | * shell archive should be in /var/lib/cloud/instance/scripts/part-002 | ||
4405 | 49 | * when logged in, and archive available at | ||
4406 | 50 | |||
4407 | 51 | |||
4408 | 52 | ## other notes ## | ||
4409 | 53 | * need to add '--install-deps' or something for curtin | ||
4410 | 54 | cloud-image in 12.04 has no 'python3' | ||
4411 | 55 | ideally 'curtin --install-deps install' would get the things it needs | ||
4412 | 56 | 0 | ||
4413 | === added file 'doc/devel/clear_holders_doc.txt' | |||
4414 | --- doc/devel/clear_holders_doc.txt 1970-01-01 00:00:00 +0000 | |||
4415 | +++ doc/devel/clear_holders_doc.txt 2016-10-03 18:55:20 +0000 | |||
4416 | @@ -0,0 +1,85 @@ | |||
4417 | 1 | The new version of clear_holders is based around a data structure called a | ||
4418 | 2 | holder_tree which represents the current storage hirearchy above a specified | ||
4419 | 3 | starting device. Each node in a holders tree contains data about the node and a | ||
4420 | 4 | key 'holders' which contains a list of all nodes that depend on it. The keys in | ||
4421 | 5 | a holders_tree node are: | ||
4422 | 6 | - device: the path to the device in /sys/class/block | ||
4423 | 7 | - dev_type: what type of storage layer the device is. possible values: | ||
4424 | 8 | - disk | ||
4425 | 9 | - lvm | ||
4426 | 10 | - crypt | ||
4427 | 11 | - raid | ||
4428 | 12 | - bcache | ||
4429 | 13 | - partition | ||
4430 | 14 | - name: the kname of the device (used for display) | ||
4431 | 15 | - holders: holders_trees for devices depending on the current device | ||
4432 | 16 | |||
4433 | 17 | A holders tree can be generated for a device using the function | ||
4434 | 18 | clear_holders.gen_holders_tree. The device can be specified either as a path in | ||
4435 | 19 | /sys/class/block or as a path in /dev. | ||
4436 | 20 | |||
4437 | 21 | The new implementation of block.clear_holders shuts down storage devices in a | ||
4438 | 22 | holders tree starting from the leaves of the tree and ascending towards the | ||
4439 | 23 | root. The old implementation of clear_holders ascended up each path of the tree | ||
4440 | 24 | separately, in a pattern similar to depth first search. The problem with the | ||
4441 | 25 | old implementation is that in some cases either an attempt would be made to | ||
4442 | 26 | remove one storage device while other devices depended on it or clear_holders | ||
4443 | 27 | would attempt to shut down the same storage device several times. In order to | ||
4444 | 28 | cope with this the old version of clear_holders had logic to handle expected | ||
4445 | 29 | failures and hope for the best moving forward. The new version of clear_holders | ||
4446 | 30 | is able to run without many anticipated failures. | ||
4447 | 31 | |||
4448 | 32 | The logic to plan what order to shut down storage layers in is in | ||
4449 | 33 | clear_holders.plan_shutdown_holders_trees. This function accepts either a | ||
4450 | 34 | single holders tree or a list of holders trees. When run with a list of holders | ||
4451 | 35 | trees, it assumes that all of these trees start at basically the same layer in | ||
4452 | 36 | the overall storage hierarchy for the system (i.e. a list of holders trees | ||
4453 | 37 | starting from all of the target installation disks). This function returns a | ||
4454 | 38 | list of dictionaries, with each dictionary containing the keys: | ||
4455 | 39 | - device: the path to the device in /sys/class/block | ||
4456 | 40 | - dev_type: what type of storage layer the device is. possible values: | ||
4457 | 41 | - disk | ||
4458 | 42 | - lvm | ||
4459 | 43 | - crypt | ||
4460 | 44 | - raid | ||
4461 | 45 | - bcache | ||
4462 | 46 | - partition | ||
4463 | 47 | - level: the level of the device in the current storage hierarchy | ||
4464 | 48 | (starting from 0) | ||
4465 | 49 | |||
4466 | 50 | The items in the list returned by clear_holders.plan_shutdown_holders_trees | ||
4467 | 51 | should be processed in order to make sure the holders trees are shut down fully. | ||
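The level-ordered planning described above can be sketched as follows. This is a hypothetical illustration whose node layout mirrors this document's description, not curtin's actual clear_holders implementation:

```python
# Hypothetical sketch: flatten holders trees into per-device entries
# carrying their tree level, ordered so the deepest (leaf-most)
# storage layers are shut down before the layers they depend on.

def plan_shutdown_holders_trees(trees):
    """Return a flat shutdown plan for one tree or a list of trees."""
    if isinstance(trees, dict):
        trees = [trees]
    plan = []

    def walk(node, level):
        plan.append({'device': node['device'],
                     'dev_type': node['dev_type'],
                     'level': level})
        for holder in node.get('holders', []):
            walk(holder, level + 1)

    for tree in trees:
        walk(tree, 0)
    # higher level == further from the root disk, so shut down first
    return sorted(plan, key=lambda entry: entry['level'], reverse=True)
```

Processing the returned list in order then guarantees every holding device is shut down before the device it sits on.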
4468 | 52 | |||
4469 | 53 | The main interface for clear_holders is the function | ||
4470 | 54 | clear_holders.clear_holders. If the system has just been booted it could be | ||
4471 | 55 | beneficial to run the function clear_holders.start_clear_holders_deps before | ||
4472 | 56 | using clear_holders.clear_holders. This ensures clear_holders will be able to | ||
4473 | 57 | properly shut down storage devices. The function clear_holders.clear_holders can be | ||
4474 | 58 | passed either a single device or a list of devices and will shut down all | ||
4475 | 59 | storage devices above the device(s). The devices can be specified either by | ||
4476 | 60 | path in /dev or by path in /sys/class/block. | ||
4477 | 61 | |||
4478 | 62 | In order to test if a device or devices are free to be partitioned/formatted, | ||
4479 | 63 | the function clear_holders.assert_clear can be passed either a single device or | ||
4480 | 64 | a list of devices, with devices specified either by path in /dev or by path in | ||
4481 | 65 | /sys/class/block. If there are any storage devices that depend on one of the | ||
4482 | 66 | devices passed to clear_holders.assert_clear, then an OSError will be raised. | ||
4483 | 67 | If clear_holders.assert_clear does not raise any errors, then the devices | ||
4484 | 68 | specified should be ready for partitioning. | ||
4485 | 69 | |||
4486 | 70 | It is possible to query further information about storage devices using | ||
4487 | 71 | clear_holders. | ||
4488 | 72 | |||
4489 | 73 | Holders for an individual device can be queried using clear_holders.get_holders. | ||
4490 | 74 | Results are returned as a list of knames for the holding devices. | ||
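A minimal sketch of such a query, assuming the standard sysfs layout where each block device exposes a ``holders`` directory (illustrative only, not curtin's implementation):

```python
import os

def get_holders(device):
    """Return knames of devices directly holding `device`.

    Reads the sysfs 'holders' directory for the given kname;
    returns an empty list if the directory does not exist.
    """
    holders_dir = os.path.join('/sys/class/block', device, 'holders')
    if not os.path.isdir(holders_dir):
        return []
    return sorted(os.listdir(holders_dir))
```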
4491 | 75 | |||
4492 | 76 | A holders tree can be printed in a human readable format using | ||
4493 | 77 | clear_holders.format_holders_tree(). Example output: | ||
4494 | 78 | sda | ||
4495 | 79 | |-- sda1 | ||
4496 | 80 | |-- sda2 | ||
4497 | 81 | `-- sda5 | ||
4498 | 82 | `-- dm-0 | ||
4499 | 83 | |-- dm-1 | ||
4500 | 84 | `-- dm-2 | ||
4501 | 85 | `-- dm-3 | ||
4502 | 0 | 86 | ||
4503 | === modified file 'doc/index.rst' | |||
4504 | --- doc/index.rst 2015-10-02 16:19:07 +0000 | |||
4505 | +++ doc/index.rst 2016-10-03 18:55:20 +0000 | |||
4506 | @@ -13,7 +13,13 @@ | |||
4507 | 13 | :maxdepth: 2 | 13 | :maxdepth: 2 |
4508 | 14 | 14 | ||
4509 | 15 | topics/overview | 15 | topics/overview |
4510 | 16 | topics/config | ||
4511 | 17 | topics/apt_source | ||
4512 | 18 | topics/networking | ||
4513 | 19 | topics/storage | ||
4514 | 16 | topics/reporting | 20 | topics/reporting |
4515 | 21 | topics/development | ||
4516 | 22 | topics/integration-testing | ||
4517 | 17 | 23 | ||
4518 | 18 | 24 | ||
4519 | 19 | 25 | ||
4520 | 20 | 26 | ||
4521 | === added file 'doc/topics/apt_source.rst' | |||
4522 | --- doc/topics/apt_source.rst 1970-01-01 00:00:00 +0000 | |||
4523 | +++ doc/topics/apt_source.rst 2016-10-03 18:55:20 +0000 | |||
4524 | @@ -0,0 +1,164 @@ | |||
4525 | 1 | ========== | ||
4526 | 2 | APT Source | ||
4527 | 3 | ========== | ||
4528 | 4 | |||
4529 | 5 | This part of curtin is meant to allow influencing the apt behaviour and configuration. | ||
4530 | 6 | |||
4531 | 7 | By default - if no apt config is provided - it does nothing. That keeps behaviour compatible on upgrades. | ||
4532 | 8 | |||
4533 | 9 | The feature has an optional target argument which - by default - is used to modify the environment that curtin currently installs (@TARGET_MOUNT_POINT). | ||
4534 | 10 | |||
4535 | 11 | Features | ||
4536 | 12 | ~~~~~~~~ | ||
4537 | 13 | |||
4538 | 14 | * Add PGP keys to the APT trusted keyring | ||
4539 | 15 | |||
4540 | 16 | - add via short keyid | ||
4541 | 17 | |||
4542 | 18 | - add via long key fingerprint | ||
4543 | 19 | |||
4544 | 20 | - specify a custom keyserver to pull from | ||
4545 | 21 | |||
4546 | 22 | - add raw keys (which makes you independent of keyservers) | ||
4547 | 23 | |||
4548 | 24 | * Influence global apt configuration | ||
4549 | 25 | |||
4550 | 26 | - adding ppa's | ||
4551 | 27 | |||
4552 | 28 | - replacing mirror, security mirror and release in sources.list | ||
4553 | 29 | |||
4554 | 30 | - able to provide a fully custom template for sources.list | ||
4555 | 31 | |||
4556 | 32 | - add arbitrary apt.conf settings | ||
4557 | 33 | |||
4558 | 34 | - provide debconf configurations | ||
4559 | 35 | |||
4560 | 36 | - disabling suites (=pockets) | ||
4561 | 37 | |||
4562 | 38 | - per architecture mirror definition | ||
4563 | 39 | |||
4564 | 40 | |||
4565 | 41 | Configuration | ||
4566 | 42 | ~~~~~~~~~~~~~ | ||
4567 | 43 | |||
4568 | 44 | The general configuration of the apt feature is under an element called ``apt``. | ||
4569 | 45 | |||
4570 | 46 | This can have various "global" subelements as listed in the examples below. | ||
4571 | 47 | The file ``apt-source.yaml`` holds more examples. | ||
4572 | 48 | |||
4573 | 49 | These global configurations are valid throughout all of the apt feature. | ||
4574 | 50 | So for example a global specification of a ``primary`` mirror will apply to all rendered sources entries. | ||
4575 | 51 | |||
4576 | 52 | Then there is a section ``sources`` which can hold any number of source subelements itself. | ||
4577 | 53 | The key is the filename and will be prefixed with /etc/apt/sources.list.d/ if it doesn't start with a ``/``. | ||
4578 | 54 | In certain cases - where no content is written into a sources.list file - the filename will be ignored, yet it can still be used as an index for merging. | ||
4579 | 55 | |||
4580 | 56 | The values inside the entries consist of the following optional entries | ||
4581 | 57 | |||
4582 | 58 | * ``source``: a sources.list entry (some variable replacements apply) | ||
4583 | 59 | |||
4584 | 60 | * ``keyid``: providing a key to import via shortid or fingerprint | ||
4585 | 61 | |||
4586 | 62 | * ``key``: providing a raw PGP key | ||
4587 | 63 | |||
4588 | 64 | * ``keyserver``: specify an alternate keyserver to pull keys from that were specified by keyid | ||
4589 | 65 | |||
4590 | 66 | The section "sources" is a dictionary (unlike most block/net configs, which are lists). This format allows better merging between multiple input files than a list would, for example :: | ||
4591 | 67 | |||
4592 | 68 | sources: | ||
4593 | 69 | s1: {'key': 'key1', 'source': 'source1'} | ||
4594 | 70 | |||
4595 | 71 | sources: | ||
4596 | 72 | s2: {'key': 'key2'} | ||
4597 | 73 | s1: {'keyserver': 'foo'} | ||
4598 | 74 | |||
4599 | 75 | This would be merged into | ||
4600 | 76 | s1: {'key': 'key1', 'source': 'source1', 'keyserver': 'foo'} | ||
4601 | 77 | s2: {'key': 'key2'} | ||
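The per-key merge shown above can be sketched as follows; this is an illustration of the merge semantics only, not curtin's actual config-merging code:

```python
def merge_sources(base, overlay):
    """Merge two 'sources' dictionaries.

    Entries in the overlay update any entry in the base that shares
    the same key (filename); new keys are simply added.
    """
    merged = {key: dict(entry) for key, entry in base.items()}
    for key, entry in overlay.items():
        merged.setdefault(key, {}).update(entry)
    return merged
```

With a list instead of a dictionary there would be no stable key to match entries across input files, which is why the dictionary form merges more cleanly.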
4602 | 78 | |||
4603 | 79 | Here is one of the most common examples for this feature: installing with curtin in an isolated environment (a derived repository): | ||
4604 | 80 | |||
4605 | 81 | For that we need to: | ||
4606 | 82 | * insert the PGP key of the local repository to be trusted | ||
4607 | 83 | |||
4608 | 84 | - since you are locked down you can't pull from keyserver.ubuntu.com | ||
4609 | 85 | |||
4610 | 86 | - if you have an internal keyserver you could pull from there, but let us assume you don't even have that; so you have to provide the raw key | ||
4611 | 87 | |||
4612 | 88 | - in the example I'll use the key of the "Ubuntu CD Image Automatic Signing Key" which makes no sense as it is in the trusted keyring anyway, but it is a good example. (Also the key is shortened to stay readable) | ||
4613 | 89 | |||
4614 | 90 | :: | ||
4615 | 91 | |||
4616 | 92 | -----BEGIN PGP PUBLIC KEY BLOCK----- | ||
4617 | 93 | Version: GnuPG v1 | ||
4618 | 94 | mQGiBEFEnz8RBAC7LstGsKD7McXZgd58oN68KquARLBl6rjA2vdhwl77KkPPOr3O | ||
4619 | 95 | RwIbDAAKCRBAl26vQ30FtdxYAJsFjU+xbex7gevyGQ2/mhqidES4MwCggqQyo+w1 | ||
4620 | 96 | Twx6DKLF+3rF5nf1F3Q= | ||
4621 | 97 | =PBAe | ||
4622 | 98 | -----END PGP PUBLIC KEY BLOCK----- | ||
4623 | 99 | |||
4624 | 100 | * replace the mirrors used with mirrors available inside the isolated environment for apt to pull repository data from. | ||
4625 | 101 | |||
4626 | 102 | - let's assume we have a local mirror at ``mymirror.local`` that otherwise follows the usual paths | ||
4627 | 103 | |||
4628 | 104 | - make an example with a partial mirror that doesn't mirror the backports suite, so backports have to be disabled | ||
4629 | 105 | |||
4630 | 106 | That would be specified as :: | ||
4631 | 107 | |||
4632 | 108 | apt: | ||
4633 | 109 | primary: | ||
4634 | 110 | - arches: [default] | ||
4635 | 111 | uri: http://mymirror.local/ubuntu/ | ||
4636 | 112 | disable_suites: [backports] | ||
4637 | 113 | sources: | ||
4638 | 114 | localrepokey: | ||
4639 | 115 | key: | # full key as block | ||
4640 | 116 | -----BEGIN PGP PUBLIC KEY BLOCK----- | ||
4641 | 117 | Version: GnuPG v1 | ||
4642 | 118 | |||
4643 | 119 | mQGiBEFEnz8RBAC7LstGsKD7McXZgd58oN68KquARLBl6rjA2vdhwl77KkPPOr3O | ||
4644 | 120 | RwIbDAAKCRBAl26vQ30FtdxYAJsFjU+xbex7gevyGQ2/mhqidES4MwCggqQyo+w1 | ||
4645 | 121 | Twx6DKLF+3rF5nf1F3Q= | ||
4646 | 122 | =PBAe | ||
4647 | 123 | -----END PGP PUBLIC KEY BLOCK----- | ||
4648 | 124 | |||
4649 | 125 | The file examples/apt-source.yaml holds various further examples that can be configured with this feature. | ||
4650 | 126 | |||
4651 | 127 | |||
4652 | 128 | Common snippets | ||
4653 | 129 | ~~~~~~~~~~~~~~~ | ||
4654 | 130 | This is a collection of additional ideas for using the feature to customize the to-be-installed system. | ||
4655 | 131 | |||
4656 | 132 | * enable proposed on installing | ||
4657 | 133 | |||
4658 | 134 | :: | ||
4659 | 135 | |||
4660 | 136 | apt: | ||
4661 | 137 | sources: | ||
4662 | 138 | proposed.list: deb $MIRROR $RELEASE-proposed main restricted universe multiverse | ||
4663 | 139 | |||
4664 | 140 | * Make debug symbols available | ||
4665 | 141 | |||
4666 | 142 | :: | ||
4667 | 143 | |||
4668 | 144 | apt: | ||
4669 | 145 | sources: | ||
4670 | 146 | ddebs.list: | | ||
4671 | 147 | deb http://ddebs.ubuntu.com $RELEASE main restricted universe multiverse | ||
4672 | 148 | deb http://ddebs.ubuntu.com $RELEASE-updates main restricted universe multiverse | ||
4673 | 149 | deb http://ddebs.ubuntu.com $RELEASE-security main restricted universe multiverse | ||
4674 | 150 | deb http://ddebs.ubuntu.com $RELEASE-proposed main restricted universe multiverse | ||
4675 | 151 | |||
4676 | 152 | Timing | ||
4677 | 153 | ~~~~~~ | ||
4678 | 154 | The feature is implemented at the stage of curthooks_commands, which runs just after curtin has extracted the image to the target. | ||
4679 | 155 | Additionally it can be run as the standalone command "curtin -v --config <yourconfigfile> apt-config". | ||
4680 | 156 | |||
4681 | 157 | This will pick up the target from the environment variable that curtin sets. If you want to use it against a different target, or outside of the usual curtin handling, you can add ``--target <path>`` to override the target path. | ||
4682 | 158 | This target should have at least a minimal system with apt, apt-add-repository and dpkg being installed for the functionality to work. | ||
4683 | 159 | |||
4684 | 160 | |||
4685 | 161 | Dependencies | ||
4686 | 162 | ~~~~~~~~~~~~ | ||
4687 | 163 | Cloud-init might need to resolve dependencies and install packages in the ephemeral environment to run curtin. | ||
4688 | 164 | Therefore it is recommended to not only provide an apt configuration to curtin for the target, but also one to the install environment via cloud-init. | ||
4689 | 0 | 165 | ||
4690 | === added file 'doc/topics/config.rst' | |||
4691 | --- doc/topics/config.rst 1970-01-01 00:00:00 +0000 | |||
4692 | +++ doc/topics/config.rst 2016-10-03 18:55:20 +0000 | |||
4693 | @@ -0,0 +1,551 @@ | |||
4694 | 1 | ==================== | ||
4695 | 2 | Curtin Configuration | ||
4696 | 3 | ==================== | ||
4697 | 4 | |||
4698 | 5 | Curtin exposes a number of configuration options for controlling Curtin | ||
4699 | 6 | behavior during installation. | ||
4700 | 7 | |||
4701 | 8 | |||
4702 | 9 | Configuration options | ||
4703 | 10 | --------------------- | ||
4704 | 11 | Curtin's top level config keys are as follows: | ||
4705 | 12 | |||
4706 | 13 | |||
4707 | 14 | - apt_mirrors (``apt_mirrors``) | ||
4708 | 15 | - apt_proxy (``apt_proxy``) | ||
4709 | 16 | - block-meta (``block``) | ||
4710 | 17 | - debconf_selections (``debconf_selections``) | ||
4711 | 18 | - disable_overlayroot (``disable_overlayroot``) | ||
4712 | 19 | - grub (``grub``) | ||
4713 | 20 | - http_proxy (``http_proxy``) | ||
4714 | 21 | - install (``install``) | ||
4715 | 22 | - kernel (``kernel``) | ||
4716 | 23 | - kexec (``kexec``) | ||
4717 | 24 | - multipath (``multipath``) | ||
4718 | 25 | - network (``network``) | ||
4719 | 26 | - power_state (``power_state``) | ||
4720 | 27 | - reporting (``reporting``) | ||
4721 | 28 | - restore_dist_interfaces: (``restore_dist_interfaces``) | ||
4722 | 29 | - sources (``sources``) | ||
4723 | 30 | - stages (``stages``) | ||
4724 | 31 | - storage (``storage``) | ||
4725 | 32 | - swap (``swap``) | ||
4726 | 33 | - system_upgrade (``system_upgrade``) | ||
4727 | 34 | - write_files (``write_files``) | ||
4728 | 35 | |||
4729 | 36 | |||
4730 | 37 | apt_mirrors | ||
4731 | 38 | ~~~~~~~~~~~ | ||
4732 | 39 | Configure APT mirrors for ``ubuntu_archive`` and ``ubuntu_security`` | ||
4733 | 40 | |||
4734 | 41 | **ubuntu_archive**: *<http://local.archive/ubuntu>* | ||
4735 | 42 | |||
4736 | 43 | **ubuntu_security**: *<http://local.archive/ubuntu>* | ||
4737 | 44 | |||
4738 | 45 | If the target OS includes /etc/apt/sources.list, Curtin will replace | ||
4739 | 46 | the default value for each key that is set with the supplied mirror URL. | ||
4740 | 47 | |||
4741 | 48 | **Example**:: | ||
4742 | 49 | |||
4743 | 50 | apt_mirrors: | ||
4744 | 51 | ubuntu_archive: http://local.archive/ubuntu | ||
4745 | 52 | ubuntu_security: http://local.archive/ubuntu | ||
4746 | 53 | |||
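The replacement can be pictured with a short sketch (a hypothetical helper, not Curtin's actual implementation; the stock archive URLs and the helper name are assumptions for illustration):

```python
# Hypothetical sketch of the kind of mirror substitution curtin performs
# on /etc/apt/sources.list; not curtin's actual code.
def apply_mirror(sources_list, mirrors):
    replacements = {
        'http://archive.ubuntu.com/ubuntu': mirrors.get('ubuntu_archive'),
        'http://security.ubuntu.com/ubuntu': mirrors.get('ubuntu_security'),
    }
    for stock_url, mirror_url in replacements.items():
        if mirror_url:
            sources_list = sources_list.replace(stock_url, mirror_url)
    return sources_list

src = ("deb http://archive.ubuntu.com/ubuntu xenial main\n"
       "deb http://security.ubuntu.com/ubuntu xenial-security main\n")
print(apply_mirror(src, {'ubuntu_archive': 'http://local.archive/ubuntu',
                         'ubuntu_security': 'http://local.archive/ubuntu'}))
```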
4747 | 54 | |||
4748 | 55 | apt_proxy | ||
4749 | 56 | ~~~~~~~~~ | ||
4750 | 57 | Curtin will configure an APT HTTP proxy in the target OS | ||
4751 | 58 | |||
4752 | 59 | **apt_proxy**: *<URL to APT proxy>* | ||
4753 | 60 | |||
4754 | 61 | **Example**:: | ||
4755 | 62 | |||
4756 | 63 | apt_proxy: http://squid.mirror:3267/ | ||
4757 | 64 | |||
4758 | 65 | |||
4759 | 66 | block-meta | ||
4760 | 67 | ~~~~~~~~~~ | ||
4761 | 68 | Configure how Curtin selects and configures disks on the target | ||
4762 | 69 | system when a custom storage configuration is not provided (mode=simple). | ||
4763 | 70 | |||
4764 | 71 | **devices**: *<List of block devices for use>* | ||
4765 | 72 | |||
4766 | 73 | The ``devices`` parameter is a list of block device paths that Curtin may | ||
4767 | 74 | select from when choosing where to install the OS. | ||
4768 | 75 | |||
4769 | 76 | **boot-partition**: *<dictionary of configuration>* | ||
4770 | 77 | |||
4771 | 78 | The ``boot-partition`` parameter controls how to configure the boot partition | ||
4772 | 79 | with the following parameters: | ||
4773 | 80 | |||
4774 | 81 | **enabled**: *<boolean>* | ||
4775 | 82 | |||
4776 | 83 | If enabled, Curtin will forcibly set up a partition on the target device for booting. | ||
4777 | 84 | |||
4778 | 85 | **format**: *<['uefi', 'gpt', 'prep', 'mbr']>* | ||
4779 | 86 | |||
4780 | 87 | Specify the partition format. Some formats, like ``uefi`` and ``prep``, | ||
4781 | 88 | are restricted by platform characteristics. | ||
4782 | 89 | |||
4783 | 90 | **fstype**: *<filesystem type: one of ['ext3', 'ext4'], defaults to 'ext4'>* | ||
4784 | 91 | |||
4785 | 92 | Specify the filesystem format on the boot partition. | ||
4786 | 93 | |||
4787 | 94 | **label**: *<filesystem label: defaults to 'boot'>* | ||
4788 | 95 | |||
4789 | 96 | Specify the filesystem label on the boot partition. | ||
4790 | 97 | |||
4791 | 98 | **Example**:: | ||
4792 | 99 | |||
4793 | 100 | block-meta: | ||
4794 | 101 | devices: | ||
4795 | 102 | - /dev/sda | ||
4796 | 103 | - /dev/sdb | ||
4797 | 104 | boot-partition: | ||
4798 | 105 | - enabled: True | ||
4799 | 106 | format: gpt | ||
4800 | 107 | fstype: ext4 | ||
4801 | 108 | label: my-boot-partition | ||
4802 | 109 | |||
4803 | 110 | |||
4804 | 111 | debconf_selections | ||
4805 | 112 | ~~~~~~~~~~~~~~~~~~ | ||
4806 | 113 | Curtin will update the target with debconf set-selection values. Users will | ||
4807 | 114 | need to be familiar with the package debconf options. Users can probe a | ||
4808 | 115 | package's debconf settings by using ``debconf-get-selections``. | ||
4809 | 116 | |||
4810 | 117 | **selection_name**: *<debconf-set-selections input>* | ||
4811 | 118 | |||
4812 | 119 | ``debconf-set-selections`` is in the form:: | ||
4813 | 120 | |||
4814 | 121 | <packagename> <packagename/option-name> <type> <value> | ||
4815 | 122 | |||
4816 | 123 | **Example**:: | ||
4817 | 124 | |||
4818 | 125 | debconf_selections: | ||
4819 | 126 | set1: | | ||
4820 | 127 | cloud-init cloud-init/datasources multiselect MAAS | ||
4821 | 128 | lxd lxd/bridge-name string lxdbr0 | ||
4822 | 129 | set2: lxd lxd/setup-bridge boolean true | ||
4823 | 130 | |||
4824 | 131 | |||
4825 | 132 | |||
4826 | 133 | disable_overlayroot | ||
4827 | 134 | ~~~~~~~~~~~~~~~~~~~ | ||
4828 | 135 | Curtin disables overlayroot in the target by default. | ||
4829 | 136 | |||
4830 | 137 | **disable_overlayroot**: *<boolean: default True>* | ||
4831 | 138 | |||
4832 | 139 | **Example**:: | ||
4833 | 140 | |||
4834 | 141 | disable_overlayroot: False | ||
4835 | 142 | |||
4836 | 143 | |||
4837 | 144 | grub | ||
4838 | 145 | ~~~~ | ||
4839 | 146 | Curtin configures grub as the target machine's boot loader. Users | ||
4840 | 147 | can control a few options to tailor how the system will boot after | ||
4841 | 148 | installation. | ||
4842 | 149 | |||
4843 | 150 | **install_devices**: *<list of block device names to install grub>* | ||
4844 | 151 | |||
4845 | 152 | Specify a list of devices onto which grub will attempt to install. | ||
4846 | 153 | |||
4847 | 154 | **replace_linux_default**: *<boolean: default True>* | ||
4848 | 155 | |||
4849 | 156 | Controls whether grub-install will update the Linux Default target | ||
4850 | 157 | value during installation. | ||
4851 | 158 | |||
4852 | 159 | **update_nvram**: *<boolean: default False>* | ||
4853 | 160 | |||
4854 | 161 | Certain platforms, like ``uefi`` and ``prep`` systems, utilize | ||
4855 | 162 | NVRAM to hold boot configuration settings which control the order in | ||
4856 | 163 | which devices are booted. By default, Curtin will not attempt to | ||
4857 | 164 | update the NVRAM settings, in order to preserve the existing system configuration. | ||
4858 | 165 | Users may want to force NVRAM to be updated such that the next boot | ||
4859 | 166 | of the system will boot from the installed device. | ||
4860 | 167 | |||
4861 | 168 | **Example**:: | ||
4862 | 169 | |||
4863 | 170 | grub: | ||
4864 | 171 | install_devices: | ||
4865 | 172 | - /dev/sda1 | ||
4866 | 173 | replace_linux_default: False | ||
4867 | 174 | update_nvram: True | ||
4868 | 175 | |||
4869 | 176 | |||
4870 | 177 | http_proxy | ||
4871 | 178 | ~~~~~~~~~~ | ||
4872 | 179 | Curtin will export the ``http_proxy`` value into the installer environment. | ||
4873 | 180 | |||
4874 | 181 | **http_proxy**: *<HTTP Proxy URL>* | ||
4875 | 182 | |||
4876 | 183 | **Example**:: | ||
4877 | 184 | |||
4878 | 185 | http_proxy: http://squid.proxy:3728/ | ||
4879 | 186 | |||
4880 | 187 | |||
4881 | 188 | |||
4882 | 189 | install | ||
4883 | 190 | ~~~~~~~ | ||
4884 | 191 | Configure Curtin's install options. | ||
4885 | 192 | |||
4886 | 193 | **log_file**: *<path to write Curtin's install.log data>* | ||
4887 | 194 | |||
4888 | 195 | Curtin logs install progress by default to /var/log/curtin/install.log. | ||
4889 | 196 | |||
4890 | 197 | **post_files**: *<List of files to read from host to include in reporting data>* | ||
4891 | 198 | |||
4892 | 199 | Curtin by default will post the ``log_file`` value to any configured reporter. | ||
4893 | 200 | |||
4894 | 201 | **save_install_config**: *<Path to save merged curtin configuration file>* | ||
4895 | 202 | |||
4896 | 203 | Curtin will save the merged configuration data into the target OS at | ||
4897 | 204 | the path of ``save_install_config``. This defaults to /root/curtin-install-cfg.yaml. | ||
4898 | 205 | |||
4899 | 206 | **Example**:: | ||
4900 | 207 | |||
4901 | 208 | install: | ||
4902 | 209 | log_file: /tmp/install.log | ||
4903 | 210 | post_files: | ||
4904 | 211 | - /tmp/install.log | ||
4905 | 212 | - /var/log/syslog | ||
4906 | 213 | save_install_config: /root/myconf.yaml | ||
4907 | 214 | |||
4908 | 215 | |||
4909 | 216 | kernel | ||
4910 | 217 | ~~~~~~ | ||
4911 | 218 | Configure how Curtin selects which kernel to install into the target image. | ||
4912 | 219 | If ``kernel`` is not configured, Curtin will use the default mapping below | ||
4913 | 220 | and determine the ``package`` value by looking up the current release | ||
4914 | 221 | and the kernel version currently running. | ||
4915 | 222 | |||
4916 | 223 | |||
4917 | 224 | **fallback-package**: *<kernel package-name to be used as fallback>* | ||
4918 | 225 | |||
4919 | 226 | Specify a kernel package name to be used if the default package is not | ||
4920 | 227 | available. | ||
4921 | 228 | |||
4922 | 229 | **mapping**: *<Dictionary mapping Ubuntu release to HWE kernel names>* | ||
4923 | 230 | |||
4924 | 231 | Default mapping for Releases to package names is as follows:: | ||
4925 | 232 | |||
4926 | 233 | precise: | ||
4927 | 234 | 3.2.0: | ||
4928 | 235 | 3.5.0: -lts-quantal | ||
4929 | 236 | 3.8.0: -lts-raring | ||
4930 | 237 | 3.11.0: -lts-saucy | ||
4931 | 238 | 3.13.0: -lts-trusty | ||
4932 | 239 | trusty: | ||
4933 | 240 | 3.13.0: | ||
4934 | 241 | 3.16.0: -lts-utopic | ||
4935 | 242 | 3.19.0: -lts-vivid | ||
4936 | 243 | 4.2.0: -lts-wily | ||
4937 | 244 | 4.4.0: -lts-xenial | ||
4938 | 245 | xenial: | ||
4939 | 246 | 4.3.0: | ||
4940 | 247 | 4.4.0: | ||
4941 | 248 | |||
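The release/version lookup can be sketched as follows (a hypothetical helper, not Curtin's actual code; the generic package prefix is an assumption for illustration):

```python
# Hypothetical sketch of how the kernel mapping resolves a package name
# from the release and the running kernel version; not curtin's code.
KERNEL_MAPPING = {
    'trusty': {'3.13.0': '', '3.16.0': '-lts-utopic', '3.19.0': '-lts-vivid',
               '4.2.0': '-lts-wily', '4.4.0': '-lts-xenial'},
    'xenial': {'4.3.0': '', '4.4.0': ''},
}

def kernel_package(release, kver, fallback='linux-image-generic'):
    # Use the fallback-package when no mapping entry matches.
    suffix = KERNEL_MAPPING.get(release, {}).get(kver)
    if suffix is None:
        return fallback
    return 'linux-image-generic' + suffix

print(kernel_package('trusty', '4.4.0'))  # linux-image-generic-lts-xenial
print(kernel_package('xenial', '4.4.0'))  # linux-image-generic
```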
4942 | 249 | |||
4943 | 250 | **package**: *<Linux kernel package name>* | ||
4944 | 251 | |||
4945 | 252 | Specify the exact package to install in the target OS. | ||
4946 | 253 | |||
4947 | 254 | **Example**:: | ||
4948 | 255 | |||
4949 | 256 | kernel: | ||
4950 | 257 | fallback-package: linux-image-generic | ||
4951 | 258 | package: linux-image-generic-lts-xenial | ||
4952 | 259 | mapping: | ||
4953 | 260 | - xenial: | ||
4954 | 261 | - 4.4.0: -my-custom-kernel | ||
4955 | 262 | |||
4956 | 263 | |||
4957 | 264 | kexec | ||
4958 | 265 | ~~~~~ | ||
4959 | 266 | Curtin can use kexec to "reboot" into the target OS. | ||
4960 | 267 | |||
4961 | 268 | **mode**: *<on>* | ||
4962 | 269 | |||
4963 | 270 | Enable rebooting with kexec. | ||
4964 | 271 | |||
4965 | 272 | **Example**:: | ||
4966 | 273 | |||
4967 | 274 | kexec: on | ||
4968 | 275 | |||
4969 | 276 | |||
4970 | 277 | multipath | ||
4971 | 278 | ~~~~~~~~~ | ||
4972 | 279 | Curtin will detect and autoconfigure multipath by default to enable | ||
4973 | 280 | boot for systems with multipath. Curtin does not apply any advanced | ||
4974 | 281 | configuration or tuning, rather it uses distro defaults and provides | ||
4975 | 282 | enough configuration to enable booting. | ||
4976 | 283 | |||
4977 | 284 | **mode**: *<['auto', 'disabled']>* | ||
4978 | 285 | |||
4979 | 286 | Defaults to ``auto``, which will configure enough to enable booting on multipath | ||
4980 | 287 | devices. ``disabled`` will prevent Curtin from installing or configuring | ||
4981 | 288 | multipath. | ||
4982 | 289 | |||
4983 | 290 | **overwrite_bindings**: *<boolean>* | ||
4984 | 291 | |||
4985 | 292 | If ``overwrite_bindings`` is True, Curtin will generate a new bindings | ||
4986 | 293 | file for multipath, overriding any existing bindings in the target image. | ||
4987 | 294 | |||
4988 | 295 | **Example**:: | ||
4989 | 296 | |||
4990 | 297 | multipath: | ||
4991 | 298 | mode: auto | ||
4992 | 299 | overwrite_bindings: True | ||
4993 | 300 | |||
4994 | 301 | |||
4995 | 302 | network | ||
4996 | 303 | ~~~~~~~ | ||
4997 | 304 | Configure networking (see Networking section for details). | ||
4998 | 305 | |||
4999 | 306 | **network_option_1**: *<option value>* | ||
5000 | 307 |