Merge ~smoser/cloud-init:bug/1686514-azure-reformat-large into cloud-init:master
Status: Merged
Merged at revision: 31b6f173280fcc8e9be2732ae2e9b6f6c89679d4
Proposed branch: ~smoser/cloud-init:bug/1686514-azure-reformat-large
Merge into: cloud-init:master
Related bugs: LP: #1686514
Diff against target: 588 lines (+307/-77), 4 files modified:
- cloudinit/config/cc_disk_setup.py (+14/-5)
- cloudinit/sources/DataSourceAzure.py (+49/-35)
- tests/unittests/test_datasource/test_azure.py (+228/-37)
- tests/unittests/test_handler/test_handler_disk_setup.py (+16/-0)
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Scott Moser | | | Approve |
| Server Team CI bot | continuous-integration | | Approve |
| Ryan Harper | | | Needs Fixing |

Review via email: mp+323420@code.launchpad.net
Commit message
Azure: fix reformatting of ephemeral disks on resize to large types.
Large instance types have a different disk format on the newly
partitioned ephemeral drive. So we have to adjust the logic in the
Azure datasource to recognize that a disk with 2 partitions and
an empty ntfs filesystem on the second one is acceptable.
This also adjusts the datasource's builtin fs_setup config to remove
the 'replace_fs' entry, which was previously ignored and was confusing.
I've clarified the documentation on that as well.
LP: #1686514
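
As a reading aid, here is a minimal Python sketch of that acceptance rule. It is an illustration only: `looks_newly_formatted`, `has_ntfs`, and `count_files` are hypothetical stand-ins for the blkid and mount checks done by the real `can_dev_be_reformatted` in the diff below.

```python
def looks_newly_formatted(partitions, has_ntfs, count_files):
    """Sketch of the rule: `partitions` is a list of (number, path)
    tuples.  Accept one ntfs partition, or two partitions with the
    second ntfs (large gpt-labeled disks carry a Microsoft reserved
    partition as part 1), and only if that filesystem is empty."""
    if len(partitions) == 0:
        return False, 'not partitioned'
    if len(partitions) > 2:
        return False, '3 or more partitions'
    # The candidate ntfs partition is the last one probed.
    cand_num, cand_path = partitions[-1]
    if not has_ntfs(cand_path):
        return False, 'partition %s is not ntfs' % cand_num
    if count_files(cand_path) != 0:
        return False, 'partition %s has files on it' % cand_num
    return True, 'partition %s is empty ntfs; safe to reformat' % cand_num
```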
Description of the change
Server Team CI bot (server-team-bot) wrote:
Ryan Harper (raharper):
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:225163c43eb
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Scott Moser (smoser):
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:f3efa89cce9
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:d6cfcd4f0ec
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Ryan Harper (raharper) wrote:
Please look at the discussion of realpath; I believe the code works as you have it but I'd prefer to be explicit in where we use realpath.
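
(The pattern under discussion, roughly: `/dev/disk/cloud/azure_resource` is a udev symlink whose partitions are named `<devpath>-part1`, while kernel device names use `<devpath>1` or `<devpath>p1`. The branch settles on resolving `os.path.realpath` explicitly at the point where partition candidates are collected, as in this condensed sketch of `_partitions_on_device` from the diff below; the function name here is shortened for illustration.)

```python
import os

def partitions_on(devpath, maxnum=16):
    # Probe candidate partition names using devpath as given (symlink
    # or kernel name), and record the explicitly resolved real path so
    # that later blkid lookups and mounts never rely on an implicit
    # symlink resolution.
    for suff in ("-part", "p", ""):
        found = [(num, os.path.realpath(devpath + suff + str(num)))
                 for num in range(1, maxnum)
                 if os.path.exists(devpath + suff + str(num))]
        if found:
            return found
    return []
```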
Scott Moser (smoser):
Ryan Harper (raharper):
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:e4d051d05a6
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Scott Moser (smoser):
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:b5722bd1358
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Scott Moser (smoser) wrote:
Going to merge this based on this IRC exchange:
05/17/17 10:09:05 <smoser> rharper, thanks. can you read my comments on my azure branch ? want to get that landed today.
05/17/17 10:09:12 <smoser> if you have something you want me to review, please point.
05/17/17 10:09:58 <rharper> smoser: ack; I read them; I still don't like the side-effects but I won't object any more;
Scott Moser (smoser):
Preview Diff
```diff
diff --git a/cloudinit/config/cc_disk_setup.py b/cloudinit/config/cc_disk_setup.py
index 6f827dd..29eb5dd 100644
--- a/cloudinit/config/cc_disk_setup.py
+++ b/cloudinit/config/cc_disk_setup.py
@@ -68,6 +68,9 @@ specified using ``filesystem``.
 Using ``overwrite: true`` for filesystems is dangerous and can lead to data
 loss, so double check the entry in ``fs_setup``.
 
+.. note::
+    ``replace_fs`` is ignored unless ``partition`` is ``auto`` or ``any``.
+
 **Internal name:** ``cc_disk_setup``
 
 **Module frequency:** per instance
@@ -127,7 +130,7 @@ def handle(_name, cfg, cloud, log, _args):
         log.debug("Partitioning disks: %s", str(disk_setup))
         for disk, definition in disk_setup.items():
             if not isinstance(definition, dict):
-                log.warn("Invalid disk definition for %s" % disk)
+                log.warning("Invalid disk definition for %s" % disk)
                 continue
 
             try:
@@ -144,7 +147,7 @@ def handle(_name, cfg, cloud, log, _args):
         update_fs_setup_devices(fs_setup, cloud.device_name_to_device)
         for definition in fs_setup:
             if not isinstance(definition, dict):
-                log.warn("Invalid file system definition: %s" % definition)
+                log.warning("Invalid file system definition: %s" % definition)
                 continue
 
             try:
@@ -199,8 +202,13 @@ def update_fs_setup_devices(disk_setup, tformer):
             definition['_origname'] = origname
             definition['device'] = tformed
 
-        if part and 'partition' in definition:
-            definition['_partition'] = definition['partition']
+        if part:
+            # In origname with <dev>.N, N overrides 'partition' key.
+            if 'partition' in definition:
+                LOG.warning("Partition '%s' from dotted device name '%s' "
+                            "overrides 'partition' key in %s", part, origname,
+                            definition)
+                definition['_partition'] = definition['partition']
             definition['partition'] = part
 
 
@@ -849,7 +857,8 @@ def mkfs(fs_cfg):
     # Check to see if the fs already exists
     LOG.debug("Checking device %s", device)
    check_label, check_fstype, _ = check_fs(device)
-    LOG.debug("Device %s has %s %s", device, check_label, check_fstype)
+    LOG.debug("Device '%s' has check_label='%s' check_fstype=%s",
+              device, check_label, check_fstype)
 
     if check_label == label and check_fstype == fs_type:
         LOG.debug("Existing file system found at %s", device)
```
```diff
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index 5254e18..44857c0 100644
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -196,8 +196,7 @@ BUILTIN_CLOUD_CONFIG = {
                        'overwrite': True},
     },
     'fs_setup': [{'filesystem': DEFAULT_FS,
-                  'device': 'ephemeral0.1',
-                  'replace_fs': 'ntfs'}],
+                  'device': 'ephemeral0.1'}],
 }
 
 DS_CFG_PATH = ['datasource', DS_NAME]
@@ -413,56 +412,71 @@ class DataSourceAzureNet(sources.DataSource):
         return
 
 
+def _partitions_on_device(devpath, maxnum=16):
+    # return a list of tuples (ptnum, path) for each part on devpath
+    for suff in ("-part", "p", ""):
+        found = []
+        for pnum in range(1, maxnum):
+            ppath = devpath + suff + str(pnum)
+            if os.path.exists(ppath):
+                found.append((pnum, os.path.realpath(ppath)))
+        if found:
+            return found
+    return []
+
+
+def _has_ntfs_filesystem(devpath):
+    ntfs_devices = util.find_devs_with("TYPE=ntfs", no_cache=True)
+    LOG.debug('ntfs_devices found = %s', ntfs_devices)
+    return os.path.realpath(devpath) in ntfs_devices
+
+
 def can_dev_be_reformatted(devpath):
-    # determine if the ephemeral block device path devpath
-    # is newly formatted after a resize.
+    """Determine if block device devpath is newly formatted ephemeral.
+
+    A newly formatted disk will:
+      a.) have a partition table (dos or gpt)
+      b.) have 1 partition that is ntfs formatted, or
+          have 2 partitions with the second partition ntfs formatted.
+          (larger instances with >2TB ephemeral disk have gpt, and will
+           have a microsoft reserved partition as part 1.  LP: #1686514)
+      c.) the ntfs partition will have no files other than possibly
+          'dataloss_warning_readme.txt'"""
     if not os.path.exists(devpath):
         return False, 'device %s does not exist' % devpath
 
-    realpath = os.path.realpath(devpath)
-    LOG.debug('Resolving realpath of %s -> %s', devpath, realpath)
-
-    # it is possible that the block device might exist, but the kernel
-    # have not yet read the partition table and sent events.  we udevadm settle
-    # to hope to resolve that.  Better here would probably be to test and see,
-    # and then settle if we didn't find anything and try again.
-    if util.which("udevadm"):
-        util.subp(["udevadm", "settle"])
+    LOG.debug('Resolving realpath of %s -> %s', devpath,
+              os.path.realpath(devpath))
 
     # devpath of /dev/sd[a-z] or /dev/disk/cloud/azure_resource
     # where partitions are "<devpath>1" or "<devpath>-part1" or "<devpath>p1"
-    part1path = None
-    for suff in ("-part", "p", ""):
-        cand = devpath + suff + "1"
-        if os.path.exists(cand):
-            if os.path.exists(devpath + suff + "2"):
-                msg = ('device %s had more than 1 partition: %s, %s' %
-                       devpath, cand, devpath + suff + "2")
-                return False, msg
-            part1path = cand
-            break
-
-    if part1path is None:
+    partitions = _partitions_on_device(devpath)
+    if len(partitions) == 0:
         return False, 'device %s was not partitioned' % devpath
+    elif len(partitions) > 2:
+        msg = ('device %s had 3 or more partitions: %s' %
+               (devpath, ' '.join([p[1] for p in partitions])))
+        return False, msg
+    elif len(partitions) == 2:
+        cand_part, cand_path = partitions[1]
+    else:
+        cand_part, cand_path = partitions[0]
 
-    real_part1path = os.path.realpath(part1path)
-    ntfs_devices = util.find_devs_with("TYPE=ntfs", no_cache=True)
-    LOG.debug('ntfs_devices found = %s', ntfs_devices)
-    if real_part1path not in ntfs_devices:
-        msg = ('partition 1 (%s -> %s) on device %s was not ntfs formatted' %
-               (part1path, real_part1path, devpath))
+    if not _has_ntfs_filesystem(cand_path):
+        msg = ('partition %s (%s) on device %s was not ntfs formatted' %
+               (cand_part, cand_path, devpath))
         return False, msg
 
     def count_files(mp):
         ignored = set(['dataloss_warning_readme.txt'])
         return len([f for f in os.listdir(mp) if f.lower() not in ignored])
 
-    bmsg = ('partition 1 (%s -> %s) on device %s was ntfs formatted' %
-            (part1path, real_part1path, devpath))
+    bmsg = ('partition %s (%s) on device %s was ntfs formatted' %
+            (cand_part, cand_path, devpath))
     try:
-        file_count = util.mount_cb(part1path, count_files)
+        file_count = util.mount_cb(cand_path, count_files)
     except util.MountFailedError as e:
-        return False, bmsg + ' but mount of %s failed: %s' % (part1path, e)
+        return False, bmsg + ' but mount of %s failed: %s' % (cand_part, e)
 
     if file_count != 0:
         return False, bmsg + ' but had %d files on it.' % file_count
```
```diff
diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
index e6b0dcb..67cddeb 100644
--- a/tests/unittests/test_datasource/test_azure.py
+++ b/tests/unittests/test_datasource/test_azure.py
@@ -1,12 +1,13 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
 from cloudinit import helpers
-from cloudinit.util import b64e, decode_binary, load_file
-from cloudinit.sources import DataSourceAzure
+from cloudinit.util import b64e, decode_binary, load_file, write_file
+from cloudinit.sources import DataSourceAzure as dsaz
 from cloudinit.util import find_freebsd_part
 from cloudinit.util import get_path_dev_freebsd
 
-from ..helpers import TestCase, populate_dir, mock, ExitStack, PY26, SkipTest
+from ..helpers import (CiTestCase, TestCase, populate_dir, mock,
+                       ExitStack, PY26, SkipTest)
 
 import crypt
 import os
@@ -98,7 +99,6 @@ class TestAzureDataSource(TestCase):
         self.patches.enter_context(mock.patch.object(module, name, new))
 
     def _get_mockds(self):
-        mod = DataSourceAzure
         sysctl_out = "dev.storvsc.3.%pnpinfo: "\
             "classid=ba6163d9-04a1-4d29-b605-72e2ffb1dc7f "\
             "deviceid=f8b3781b-1e82-4818-a1c3-63d806ec15bb\n"
@@ -123,14 +123,14 @@ scbus-1 on xpt0 bus 0
 <Msft Virtual Disk 1.0> at scbus3 target 1 lun 0 (da1,pass2)
 """
         self.apply_patches([
-            (mod, 'get_dev_storvsc_sysctl', mock.MagicMock(
+            (dsaz, 'get_dev_storvsc_sysctl', mock.MagicMock(
                 return_value=sysctl_out)),
-            (mod, 'get_camcontrol_dev_bus', mock.MagicMock(
+            (dsaz, 'get_camcontrol_dev_bus', mock.MagicMock(
                 return_value=camctl_devbus)),
-            (mod, 'get_camcontrol_dev', mock.MagicMock(
+            (dsaz, 'get_camcontrol_dev', mock.MagicMock(
                 return_value=camctl_dev))
         ])
-        return mod
+        return dsaz
 
     def _get_ds(self, data, agent_command=None):
 
@@ -152,8 +152,7 @@ scbus-1 on xpt0 bus 0
         populate_dir(os.path.join(self.paths.seed_dir, "azure"),
                      {'ovf-env.xml': data['ovfcontent']})
 
-        mod = DataSourceAzure
-        mod.BUILTIN_DS_CONFIG['data_dir'] = self.waagent_d
+        dsaz.BUILTIN_DS_CONFIG['data_dir'] = self.waagent_d
 
         self.get_metadata_from_fabric = mock.MagicMock(return_value={
             'public-keys': [],
@@ -162,19 +161,19 @@ scbus-1 on xpt0 bus 0
         self.instance_id = 'test-instance-id'
 
         self.apply_patches([
-            (mod, 'list_possible_azure_ds_devs', dsdevs),
-            (mod, 'invoke_agent', _invoke_agent),
-            (mod, 'wait_for_files', _wait_for_files),
-            (mod, 'pubkeys_from_crt_files', _pubkeys_from_crt_files),
-            (mod, 'perform_hostname_bounce', mock.MagicMock()),
-            (mod, 'get_hostname', mock.MagicMock()),
-            (mod, 'set_hostname', mock.MagicMock()),
-            (mod, 'get_metadata_from_fabric', self.get_metadata_from_fabric),
-            (mod.util, 'read_dmi_data', mock.MagicMock(
+            (dsaz, 'list_possible_azure_ds_devs', dsdevs),
+            (dsaz, 'invoke_agent', _invoke_agent),
+            (dsaz, 'wait_for_files', _wait_for_files),
+            (dsaz, 'pubkeys_from_crt_files', _pubkeys_from_crt_files),
+            (dsaz, 'perform_hostname_bounce', mock.MagicMock()),
+            (dsaz, 'get_hostname', mock.MagicMock()),
+            (dsaz, 'set_hostname', mock.MagicMock()),
+            (dsaz, 'get_metadata_from_fabric', self.get_metadata_from_fabric),
+            (dsaz.util, 'read_dmi_data', mock.MagicMock(
                 return_value=self.instance_id)),
         ])
 
-        dsrc = mod.DataSourceAzureNet(
+        dsrc = dsaz.DataSourceAzureNet(
             data.get('sys_cfg', {}), distro=None, paths=self.paths)
         if agent_command is not None:
             dsrc.ds_cfg['agent_command'] = agent_command
@@ -418,7 +417,7 @@ fdescfs /dev/fd fdescfs rw 0 0
         cfg = dsrc.get_config_obj()
 
         self.assertEqual(dsrc.device_name_to_device("ephemeral0"),
-                         DataSourceAzure.RESOURCE_DISK_PATH)
+                         dsaz.RESOURCE_DISK_PATH)
         assert 'disk_setup' in cfg
         assert 'fs_setup' in cfg
         self.assertIsInstance(cfg['disk_setup'], dict)
@@ -468,14 +467,13 @@ fdescfs /dev/fd fdescfs rw 0 0
 
         # Make sure that the redacted password on disk is not used by CI
         self.assertNotEqual(dsrc.cfg.get('password'),
-                            DataSourceAzure.DEF_PASSWD_REDACTION)
+                            dsaz.DEF_PASSWD_REDACTION)
 
         # Make sure that the password was really encrypted
         et = ET.fromstring(on_disk_ovf)
         for elem in et.iter():
             if 'UserPassword' in elem.tag:
-                self.assertEqual(DataSourceAzure.DEF_PASSWD_REDACTION,
-                                 elem.text)
+                self.assertEqual(dsaz.DEF_PASSWD_REDACTION, elem.text)
 
     def test_ovf_env_arrives_in_waagent_dir(self):
         xml = construct_valid_ovf_env(data={}, userdata="FOODATA")
@@ -524,17 +522,17 @@ class TestAzureBounce(TestCase):
 
     def mock_out_azure_moving_parts(self):
         self.patches.enter_context(
-            mock.patch.object(DataSourceAzure, 'invoke_agent'))
+            mock.patch.object(dsaz, 'invoke_agent'))
         self.patches.enter_context(
-            mock.patch.object(DataSourceAzure, 'wait_for_files'))
+            mock.patch.object(dsaz, 'wait_for_files'))
         self.patches.enter_context(
-            mock.patch.object(DataSourceAzure, 'list_possible_azure_ds_devs',
+            mock.patch.object(dsaz, 'list_possible_azure_ds_devs',
                               mock.MagicMock(return_value=[])))
         self.patches.enter_context(
-            mock.patch.object(DataSourceAzure, 'get_metadata_from_fabric',
+            mock.patch.object(dsaz, 'get_metadata_from_fabric',
                               mock.MagicMock(return_value={})))
         self.patches.enter_context(
-            mock.patch.object(DataSourceAzure.util, 'read_dmi_data',
+            mock.patch.object(dsaz.util, 'read_dmi_data',
                               mock.MagicMock(return_value='test-instance-id')))
 
     def setUp(self):
@@ -543,13 +541,13 @@ class TestAzureBounce(TestCase):
         self.waagent_d = os.path.join(self.tmp, 'var', 'lib', 'waagent')
         self.paths = helpers.Paths({'cloud_dir': self.tmp})
         self.addCleanup(shutil.rmtree, self.tmp)
-        DataSourceAzure.BUILTIN_DS_CONFIG['data_dir'] = self.waagent_d
+        dsaz.BUILTIN_DS_CONFIG['data_dir'] = self.waagent_d
         self.patches = ExitStack()
         self.mock_out_azure_moving_parts()
         self.get_hostname = self.patches.enter_context(
-            mock.patch.object(DataSourceAzure, 'get_hostname'))
+            mock.patch.object(dsaz, 'get_hostname'))
         self.set_hostname = self.patches.enter_context(
-            mock.patch.object(DataSourceAzure, 'set_hostname'))
+            mock.patch.object(dsaz, 'set_hostname'))
         self.subp = self.patches.enter_context(
             mock.patch('cloudinit.sources.DataSourceAzure.util.subp'))
 
@@ -560,7 +558,7 @@ class TestAzureBounce(TestCase):
         if ovfcontent is not None:
             populate_dir(os.path.join(self.paths.seed_dir, "azure"),
                          {'ovf-env.xml': ovfcontent})
-        dsrc = DataSourceAzure.DataSourceAzureNet(
+        dsrc = dsaz.DataSourceAzureNet(
             {}, distro=None, paths=self.paths)
         if agent_command is not None:
             dsrc.ds_cfg['agent_command'] = agent_command
@@ -673,7 +671,7 @@ class TestAzureBounce(TestCase):
 
     def test_default_bounce_command_used_by_default(self):
         cmd = 'default-bounce-command'
-        DataSourceAzure.BUILTIN_DS_CONFIG['hostname_bounce']['command'] = cmd
+        dsaz.BUILTIN_DS_CONFIG['hostname_bounce']['command'] = cmd
         cfg = {'hostname_bounce': {'policy': 'force'}}
         data = self.get_ovf_env_with_dscfg('some-hostname', cfg)
         self._get_ds(data, agent_command=['not', '__builtin__']).get_data()
@@ -701,15 +699,208 @@
 class TestReadAzureOvf(TestCase):
     def test_invalid_xml_raises_non_azure_ds(self):
         invalid_xml = "<foo>" + construct_valid_ovf_env(data={})
-        self.assertRaises(DataSourceAzure.BrokenAzureDataSource,
-                          DataSourceAzure.read_azure_ovf, invalid_xml)
+        self.assertRaises(dsaz.BrokenAzureDataSource,
+                          dsaz.read_azure_ovf, invalid_xml)
 
     def test_load_with_pubkeys(self):
         mypklist = [{'fingerprint': 'fp1', 'path': 'path1', 'value': ''}]
         pubkeys = [(x['fingerprint'], x['path'], x['value']) for x in mypklist]
         content = construct_valid_ovf_env(pubkeys=pubkeys)
-        (_md, _ud, cfg) = DataSourceAzure.read_azure_ovf(content)
+        (_md, _ud, cfg) = dsaz.read_azure_ovf(content)
         for mypk in mypklist:
             self.assertIn(mypk, cfg['_pubkeys'])
 
+
+class TestCanDevBeReformatted(CiTestCase):
+    warning_file = 'dataloss_warning_readme.txt'
+
+    def _domock(self, mockpath, sattr=None):
+        patcher = mock.patch(mockpath)
+        setattr(self, sattr, patcher.start())
+        self.addCleanup(patcher.stop)
+
+    def setUp(self):
+        super(TestCanDevBeReformatted, self).setUp()
+
+    def patchup(self, devs):
+        bypath = {}
+        for path, data in devs.items():
+            bypath[path] = data
+            if 'realpath' in data:
+                bypath[data['realpath']] = data
+            for ppath, pdata in data.get('partitions', {}).items():
+                bypath[ppath] = pdata
+                if 'realpath' in data:
+                    bypath[pdata['realpath']] = pdata
+
+        def realpath(d):
+            return bypath[d].get('realpath', d)
+
+        def partitions_on_device(devpath):
+            parts = bypath.get(devpath, {}).get('partitions', {})
+            ret = []
+            for path, data in parts.items():
+                ret.append((data.get('num'), realpath(path)))
+            # return sorted by partition number
+            return sorted(ret, key=lambda d: d[0])
+
+        def mount_cb(device, callback):
+            p = self.tmp_dir()
+            for f in bypath.get(device).get('files', []):
+                write_file(os.path.join(p, f), content=f)
+            return callback(p)
+
+        def has_ntfs_fs(device):
+            return bypath.get(device, {}).get('fs') == 'ntfs'
+
+        p = 'cloudinit.sources.DataSourceAzure'
+        self._domock(p + "._partitions_on_device", 'm_partitions_on_device')
+        self._domock(p + "._has_ntfs_filesystem", 'm_has_ntfs_filesystem')
+        self._domock(p + ".util.mount_cb", 'm_mount_cb')
+        self._domock(p + ".os.path.realpath", 'm_realpath')
+        self._domock(p + ".os.path.exists", 'm_exists')
+
+        self.m_exists.side_effect = lambda p: p in bypath
+        self.m_realpath.side_effect = realpath
+        self.m_has_ntfs_filesystem.side_effect = has_ntfs_fs
+        self.m_mount_cb.side_effect = mount_cb
+        self.m_partitions_on_device.side_effect = partitions_on_device
+
+    def test_three_partitions_is_false(self):
+        """A disk with 3 partitions can not be formatted."""
+        self.patchup({
+            '/dev/sda': {
+                'partitions': {
+                    '/dev/sda1': {'num': 1},
+                    '/dev/sda2': {'num': 2},
+                    '/dev/sda3': {'num': 3},
+                }}})
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
+        self.assertFalse(False, value)
+        self.assertIn("3 or more", msg.lower())
+
+    def test_no_partitions_is_false(self):
+        """A disk with no partitions can not be formatted."""
+        self.patchup({'/dev/sda': {}})
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
+        self.assertEqual(False, value)
+        self.assertIn("not partitioned", msg.lower())
+
+    def test_two_partitions_not_ntfs_false(self):
+        """2 partitions and 2nd not ntfs can not be formatted."""
+        self.patchup({
+            '/dev/sda': {
+                'partitions': {
+                    '/dev/sda1': {'num': 1},
+                    '/dev/sda2': {'num': 2, 'fs': 'ext4', 'files': []},
+                }}})
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
+        self.assertFalse(False, value)
+        self.assertIn("not ntfs", msg.lower())
+
+    def test_two_partitions_ntfs_populated_false(self):
+        """2 partitions and populated ntfs fs on 2nd can not be formatted."""
+        self.patchup({
+            '/dev/sda': {
+                'partitions': {
+                    '/dev/sda1': {'num': 1},
+                    '/dev/sda2': {'num': 2, 'fs': 'ntfs',
+                                  'files': ['secret.txt']},
+                }}})
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
+        self.assertFalse(False, value)
+        self.assertIn("files on it", msg.lower())
+
+    def test_two_partitions_ntfs_empty_is_true(self):
+        """2 partitions and empty ntfs fs on 2nd can be formatted."""
+        self.patchup({
+            '/dev/sda': {
+                'partitions': {
+                    '/dev/sda1': {'num': 1},
+                    '/dev/sda2': {'num': 2, 'fs': 'ntfs', 'files': []},
+                }}})
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
+        self.assertEqual(True, value)
+        self.assertIn("safe for", msg.lower())
+
+    def test_one_partition_not_ntfs_false(self):
+        """1 partition witih fs other than ntfs can not be formatted."""
+        self.patchup({
+            '/dev/sda': {
+                'partitions': {
+                    '/dev/sda1': {'num': 1, 'fs': 'zfs'},
+                }}})
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
+        self.assertEqual(False, value)
+        self.assertIn("not ntfs", msg.lower())
+
+    def test_one_partition_ntfs_populated_false(self):
+        """1 mountable ntfs partition with many files can not be formatted."""
+        self.patchup({
+            '/dev/sda': {
+                'partitions': {
+                    '/dev/sda1': {'num': 1, 'fs': 'ntfs',
+                                  'files': ['file1.txt', 'file2.exe']},
+                }}})
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
+        self.assertEqual(False, value)
+        self.assertIn("files on it", msg.lower())
+
+    def test_one_partition_ntfs_empty_is_true(self):
+        """1 mountable ntfs partition and no files can be formatted."""
+        self.patchup({
+            '/dev/sda': {
+                'partitions': {
+                    '/dev/sda1': {'num': 1, 'fs': 'ntfs', 'files': []}
+                }}})
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
+        self.assertEqual(True, value)
+        self.assertIn("safe for", msg.lower())
+
+    def test_one_partition_ntfs_empty_with_dataloss_file_is_true(self):
+        """1 mountable ntfs partition and only warn file can be formatted."""
+        self.patchup({
+            '/dev/sda': {
+                'partitions': {
+                    '/dev/sda1': {'num': 1, 'fs': 'ntfs',
+                                  'files': ['dataloss_warning_readme.txt']}
+                }}})
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
+        self.assertEqual(True, value)
+        self.assertIn("safe for", msg.lower())
+
+    def test_one_partition_through_realpath_is_true(self):
+        """A symlink to a device with 1 ntfs partition can be formatted."""
+        epath = '/dev/disk/cloud/azure_resource'
+        self.patchup({
+            epath: {
+                'realpath': '/dev/sdb',
+                'partitions': {
+                    epath + '-part1': {
+                        'num': 1, 'fs': 'ntfs', 'files': [self.warning_file],
+                        'realpath': '/dev/sdb1'}
+                }}})
+        value, msg = dsaz.can_dev_be_reformatted(epath)
+        self.assertEqual(True, value)
+        self.assertIn("safe for", msg.lower())
+
+    def test_three_partition_through_realpath_is_false(self):
+        """A symlink to a device with 3 partitions can not be formatted."""
+        epath = '/dev/disk/cloud/azure_resource'
+        self.patchup({
+            epath: {
+                'realpath': '/dev/sdb',
+                'partitions': {
+                    epath + '-part1': {
+                        'num': 1, 'fs': 'ntfs', 'files': [self.warning_file],
+                        'realpath': '/dev/sdb1'},
+                    epath + '-part2': {'num': 2, 'fs': 'ext3',
+                                       'realpath': '/dev/sdb2'},
+                    epath + '-part3': {'num': 3, 'fs': 'ext',
+                                       'realpath': '/dev/sdb3'}
+                }}})
+        value, msg = dsaz.can_dev_be_reformatted(epath)
+        self.assertEqual(False, value)
+        self.assertIn("3 or more", msg.lower())
+
 # vi: ts=4 expandtab
```
```diff
diff --git a/tests/unittests/test_handler/test_handler_disk_setup.py b/tests/unittests/test_handler/test_handler_disk_setup.py
index 9f00d46..68fc6aa 100644
--- a/tests/unittests/test_handler/test_handler_disk_setup.py
+++ b/tests/unittests/test_handler/test_handler_disk_setup.py
@@ -151,6 +151,22 @@ class TestUpdateFsSetupDevices(TestCase):
             'filesystem': 'xfs'
         }, fs_setup)
 
+    def test_dotted_devname_populates_partition(self):
+        fs_setup = {
+            'device': 'ephemeral0.1',
+            'label': 'test2',
+            'filesystem': 'xfs'
+        }
+        cc_disk_setup.update_fs_setup_devices([fs_setup],
+                                              lambda device: device)
+        self.assertEqual({
+            '_origname': 'ephemeral0.1',
+            'device': 'ephemeral0',
+            'partition': '1',
+            'label': 'test2',
+            'filesystem': 'xfs'
+        }, fs_setup)
+
 
 @mock.patch('cloudinit.config.cc_disk_setup.find_device_node',
             return_value=('/dev/xdb1', False))
```
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:66073f870cc2a7a3794c59293ca9bf715dddf1f6
https://jenkins.ubuntu.com/server/job/cloud-init-ci/325/
Executed test runs:
SUCCESS: https://jenkins.ubuntu.com/server/job/cloud-init-ci/nodes=metal-amd64/325
SUCCESS: https://jenkins.ubuntu.com/server/job/cloud-init-ci/nodes=metal-arm64/325
SUCCESS: https://jenkins.ubuntu.com/server/job/cloud-init-ci/nodes=metal-ppc64el/325
SUCCESS: https://jenkins.ubuntu.com/server/job/cloud-init-ci/nodes=vm-i386/325