Merge ~raharper/cloud-init:ubuntu-devel-new-artful-release-v2 into cloud-init:ubuntu/devel
Proposed by Ryan Harper on 2017-08-21
| Status: | Merged |
|---|---|
| Merged at revision: | 46afa1266e511cd3e48f4703fed49650e108a33b |
| Proposed branch: | ~raharper/cloud-init:ubuntu-devel-new-artful-release-v2 |
| Merge into: | cloud-init:ubuntu/devel |
| Diff against target: | 4127 lines (+2892/-256), 41 files modified |

Files modified: Makefile (+3/-3), cloudinit/analyze/__init__.py (+0/-0), cloudinit/analyze/__main__.py (+155/-0), cloudinit/analyze/dump.py (+176/-0), cloudinit/analyze/show.py (+207/-0), cloudinit/analyze/tests/test_dump.py (+210/-0), cloudinit/cmd/main.py (+15/-29), cloudinit/config/cc_ntp.py (+45/-13), cloudinit/distros/arch.py (+59/-31), cloudinit/net/__init__.py (+134/-46), cloudinit/net/dhcp.py (+119/-0), cloudinit/net/netplan.py (+9/-26), cloudinit/net/network_state.py (+69/-16), cloudinit/net/sysconfig.py (+5/-1), cloudinit/net/tests/__init__.py (+0/-0), cloudinit/net/tests/test_dhcp.py (+144/-0), cloudinit/net/tests/test_init.py (+522/-0), cloudinit/sources/DataSourceAliYun.py (+6/-3), cloudinit/sources/DataSourceEc2.py (+99/-22), cloudinit/sources/DataSourceOVF.py (+62/-1), cloudinit/sources/helpers/vmware/imc/config.py (+21/-3), cloudinit/sources/helpers/vmware/imc/config_passwd.py (+67/-0), debian/changelog (+28/-0), doc/rtd/index.rst (+1/-0), doc/rtd/topics/capabilities.rst (+40/-10), doc/rtd/topics/debugging.rst (+146/-0), setup.py (+1/-1), templates/timesyncd.conf.tmpl (+8/-0), tests/cloud_tests/bddeb.py (+9/-7), tests/unittests/helpers.py (+1/-1), tests/unittests/test_cli.py (+84/-3), tests/unittests/test_datasource/test_aliyun.py (+6/-5), tests/unittests/test_datasource/test_common.py (+1/-0), tests/unittests/test_datasource/test_ec2.py (+112/-24), tests/unittests/test_distros/__init__.py (+21/-0), tests/unittests/test_distros/test_arch.py (+45/-0), tests/unittests/test_distros/test_netconfig.py (+2/-2), tests/unittests/test_handler/test_handler_ntp.py (+101/-4), tests/unittests/test_net.py (+118/-0), tests/unittests/test_vmware_config_file.py (+30/-2), tox.ini (+11/-3)

Related bugs:
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Server Team CI bot | continuous-integration | 2017-08-21 | Approve |
| Scott Moser | | 2017-08-21 | Pending |
Commit Message
Description of the Change
Create a new artful release to ubuntu/devel
Following new upstream release instructions:
https:/
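Among the changes pulled in by this release, the new cloudinit/analyze/dump.py recognizes three timestamp flavors in cloud-init logs: Python logging's asctime, journalctl's short-precise output, and default syslog. A minimal, self-contained sketch of that dispatch (simplified from the diff: the real module also falls back to date(1) for unrecognized formats; stamps lacking a year assume the current year, as the real code does):

```python
import calendar
from datetime import datetime

# Formats recognized by the new dump.py (names mirror the diff)
ASCTIME_FMT = "%Y-%m-%d %H:%M:%S,%f"      # python logging asctime
JOURNALCTL_FMT = "%b %d %H:%M:%S.%f %Y"   # journalctl -o short-precise
SYSLOG_FMT = "%b %d %H:%M:%S %Y"          # default syslog

def parse_timestamp(stamp):
    """Return an epoch float for the known cloud-init log stamp styles."""
    months = [calendar.month_abbr[m] for m in range(1, 13)]
    if stamp.split()[0] in months:
        # syslog/journalctl stamps omit the year; assume the current one
        fmt = JOURNALCTL_FMT if '.' in stamp else SYSLOG_FMT
        dt = datetime.strptime("%s %d" % (stamp, datetime.now().year), fmt)
    else:
        dt = datetime.strptime(stamp, ASCTIME_FMT)
    return dt.timestamp()

print(parse_timestamp('2016-09-12 14:39:20,839'))  # asctime style
print(parse_timestamp('Aug 08 17:15:50.606811'))   # journalctl style
```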
Scott Moser (smoser) wrote:
PASSED: Continuous integration, rev:46afa1266e5
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
SUCCESS: MAAS Compatability Testing
IN_PROGRESS: Declarative: Post Actions
review: Approve (continuous-integration)
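The show subcommand's default format string, '%I%D @%Es +%ds', is expanded per event by show.py's format_record(), which maps %-tokens to event fields and gives time-valued fields fixed-width float formatting. A standalone sketch of that substitution (the token table is a subset of the one in the diff; the event dict is illustrative, not a real log record):

```python
# Subset of the %-token table from cloudinit/analyze/show.py
format_key = {
    '%d': 'delta', '%D': 'description', '%E': 'elapsed',
    '%I': 'indent', '%n': 'name',
}

def format_record(msg, event):
    """Expand %-tokens in msg from the event dict, as show.py does."""
    for token, field in format_key.items():
        if token in msg:
            # time-valued fields get consistent fixed-width formatting
            if field in ('delta', 'elapsed'):
                msg = msg.replace(token, "{%s:08.5f}" % field)
            else:
                msg = msg.replace(token, "{%s}" % field)
    return msg.format(**event)

# Illustrative event record
event = {'delta': 30.21, 'elapsed': 1.0, 'indent': '',
         'name': 'init-local', 'description': 'searching for datasource'}
print(format_record('%I%D @%Es +%ds', event))
# → searching for datasource @01.00000s +30.21000s
```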
Preview Diff
| 1 | diff --git a/Makefile b/Makefile |
| 2 | index e9f5498..9e7f4ee 100644 |
| 3 | --- a/Makefile |
| 4 | +++ b/Makefile |
| 5 | @@ -46,15 +46,15 @@ pyflakes: |
| 6 | |
| 7 | pyflakes3: |
| 8 | @$(CWD)/tools/run-pyflakes3 |
| 9 | - |
| 10 | + |
| 11 | unittest: clean_pyc |
| 12 | - nosetests $(noseopts) tests/unittests |
| 13 | + nosetests $(noseopts) tests/unittests cloudinit |
| 14 | |
| 15 | unittest3: clean_pyc |
| 16 | nosetests3 $(noseopts) tests/unittests |
| 17 | |
| 18 | ci-deps-ubuntu: |
| 19 | - @$(PYVER) $(CWD)/tools/read-dependencies --distro-ubuntu --test-distro |
| 20 | + @$(PYVER) $(CWD)/tools/read-dependencies --distro ubuntu --test-distro |
| 21 | |
| 22 | ci-deps-centos: |
| 23 | @$(PYVER) $(CWD)/tools/read-dependencies --distro centos --test-distro |
| 24 | diff --git a/cloudinit/analyze/__init__.py b/cloudinit/analyze/__init__.py |
| 25 | new file mode 100644 |
| 26 | index 0000000..e69de29 |
| 27 | --- /dev/null |
| 28 | +++ b/cloudinit/analyze/__init__.py |
| 29 | diff --git a/cloudinit/analyze/__main__.py b/cloudinit/analyze/__main__.py |
| 30 | new file mode 100644 |
| 31 | index 0000000..71cba4f |
| 32 | --- /dev/null |
| 33 | +++ b/cloudinit/analyze/__main__.py |
| 34 | @@ -0,0 +1,155 @@ |
| 35 | +# Copyright (C) 2017 Canonical Ltd. |
| 36 | +# |
| 37 | +# This file is part of cloud-init. See LICENSE file for license information. |
| 38 | + |
| 39 | +import argparse |
| 40 | +import re |
| 41 | +import sys |
| 42 | + |
| 43 | +from . import dump |
| 44 | +from . import show |
| 45 | + |
| 46 | + |
| 47 | +def get_parser(parser=None): |
| 48 | + if not parser: |
| 49 | + parser = argparse.ArgumentParser( |
| 50 | + prog='cloudinit-analyze', |
| 51 | + description='Devel tool: Analyze cloud-init logs and data') |
| 52 | + subparsers = parser.add_subparsers(title='Subcommands', dest='subcommand') |
| 53 | + subparsers.required = True |
| 54 | + |
| 55 | + parser_blame = subparsers.add_parser( |
| 56 | + 'blame', help='Print list of executed stages ordered by time to init') |
| 57 | + parser_blame.add_argument( |
| 58 | + '-i', '--infile', action='store', dest='infile', |
| 59 | + default='/var/log/cloud-init.log', |
| 60 | + help='specify where to read input.') |
| 61 | + parser_blame.add_argument( |
| 62 | + '-o', '--outfile', action='store', dest='outfile', default='-', |
| 63 | + help='specify where to write output. ') |
| 64 | + parser_blame.set_defaults(action=('blame', analyze_blame)) |
| 65 | + |
| 66 | + parser_show = subparsers.add_parser( |
| 67 | + 'show', help='Print list of in-order events during execution') |
| 68 | + parser_show.add_argument('-f', '--format', action='store', |
| 69 | + dest='print_format', default='%I%D @%Es +%ds', |
| 70 | + help='specify formatting of output.') |
| 71 | + parser_show.add_argument('-i', '--infile', action='store', |
| 72 | + dest='infile', default='/var/log/cloud-init.log', |
| 73 | + help='specify where to read input.') |
| 74 | + parser_show.add_argument('-o', '--outfile', action='store', |
| 75 | + dest='outfile', default='-', |
| 76 | + help='specify where to write output.') |
| 77 | + parser_show.set_defaults(action=('show', analyze_show)) |
| 78 | + parser_dump = subparsers.add_parser( |
| 79 | + 'dump', help='Dump cloud-init events in JSON format') |
| 80 | + parser_dump.add_argument('-i', '--infile', action='store', |
| 81 | + dest='infile', default='/var/log/cloud-init.log', |
| 82 | + help='specify where to read input. ') |
| 83 | + parser_dump.add_argument('-o', '--outfile', action='store', |
| 84 | + dest='outfile', default='-', |
| 85 | + help='specify where to write output. ') |
| 86 | + parser_dump.set_defaults(action=('dump', analyze_dump)) |
| 87 | + return parser |
| 88 | + |
| 89 | + |
| 90 | +def analyze_blame(name, args): |
| 91 | + """Report a list of records sorted by largest time delta. |
| 92 | + |
| 93 | + For example: |
| 94 | + 30.210s (init-local) searching for datasource |
| 95 | + 8.706s (init-network) reading and applying user-data |
| 96 | + 166ms (modules-config) .... |
| 97 | + 807us (modules-final) ... |
| 98 | + |
| 99 | + We generate event records parsing cloud-init logs, formatting the output |
| 100 | + and sorting by record data ('delta') |
| 101 | + """ |
| 102 | + (infh, outfh) = configure_io(args) |
| 103 | + blame_format = ' %ds (%n)' |
| 104 | + r = re.compile('(^\s+\d+\.\d+)', re.MULTILINE) |
| 105 | + for idx, record in enumerate(show.show_events(_get_events(infh), |
| 106 | + blame_format)): |
| 107 | + srecs = sorted(filter(r.match, record), reverse=True) |
| 108 | + outfh.write('-- Boot Record %02d --\n' % (idx + 1)) |
| 109 | + outfh.write('\n'.join(srecs) + '\n') |
| 110 | + outfh.write('\n') |
| 111 | + outfh.write('%d boot records analyzed\n' % (idx + 1)) |
| 112 | + |
| 113 | + |
| 114 | +def analyze_show(name, args): |
| 115 | + """Generate output records using the 'standard' format to printing events. |
| 116 | + |
| 117 | + Example output follows: |
| 118 | + Starting stage: (init-local) |
| 119 | + ... |
| 120 | + Finished stage: (init-local) 0.105195 seconds |
| 121 | + |
| 122 | + Starting stage: (init-network) |
| 123 | + ... |
| 124 | + Finished stage: (init-network) 0.339024 seconds |
| 125 | + |
| 126 | + Starting stage: (modules-config) |
| 127 | + ... |
| 128 | + Finished stage: (modules-config) 0.NNN seconds |
| 129 | + |
| 130 | + Starting stage: (modules-final) |
| 131 | + ... |
| 132 | + Finished stage: (modules-final) 0.NNN seconds |
| 133 | + """ |
| 134 | + (infh, outfh) = configure_io(args) |
| 135 | + for idx, record in enumerate(show.show_events(_get_events(infh), |
| 136 | + args.print_format)): |
| 137 | + outfh.write('-- Boot Record %02d --\n' % (idx + 1)) |
| 138 | + outfh.write('The total time elapsed since completing an event is' |
| 139 | + ' printed after the "@" character.\n') |
| 140 | + outfh.write('The time the event takes is printed after the "+" ' |
| 141 | + 'character.\n\n') |
| 142 | + outfh.write('\n'.join(record) + '\n') |
| 143 | + outfh.write('%d boot records analyzed\n' % (idx + 1)) |
| 144 | + |
| 145 | + |
| 146 | +def analyze_dump(name, args): |
| 147 | + """Dump cloud-init events in json format""" |
| 148 | + (infh, outfh) = configure_io(args) |
| 149 | + outfh.write(dump.json_dumps(_get_events(infh)) + '\n') |
| 150 | + |
| 151 | + |
| 152 | +def _get_events(infile): |
| 153 | + rawdata = None |
| 154 | + events, rawdata = show.load_events(infile, None) |
| 155 | + if not events: |
| 156 | + events, _ = dump.dump_events(rawdata=rawdata) |
| 157 | + return events |
| 158 | + |
| 159 | + |
| 160 | +def configure_io(args): |
| 161 | + """Common parsing and setup of input/output files""" |
| 162 | + if args.infile == '-': |
| 163 | + infh = sys.stdin |
| 164 | + else: |
| 165 | + try: |
| 166 | + infh = open(args.infile, 'r') |
| 167 | + except (FileNotFoundError, PermissionError): |
| 168 | + sys.stderr.write('Cannot open file %s\n' % args.infile) |
| 169 | + sys.exit(1) |
| 170 | + |
| 171 | + if args.outfile == '-': |
| 172 | + outfh = sys.stdout |
| 173 | + else: |
| 174 | + try: |
| 175 | + outfh = open(args.outfile, 'w') |
| 176 | + except PermissionError: |
| 177 | + sys.stderr.write('Cannot open file %s\n' % args.outfile) |
| 178 | + sys.exit(1) |
| 179 | + |
| 180 | + return (infh, outfh) |
| 181 | + |
| 182 | + |
| 183 | +if __name__ == '__main__': |
| 184 | + parser = get_parser() |
| 185 | + args = parser.parse_args() |
| 186 | + (name, action_functor) = args.action |
| 187 | + action_functor(name, args) |
| 188 | + |
| 189 | +# vi: ts=4 expandtab |
| 190 | diff --git a/cloudinit/analyze/dump.py b/cloudinit/analyze/dump.py |
| 191 | new file mode 100644 |
| 192 | index 0000000..ca4da49 |
| 193 | --- /dev/null |
| 194 | +++ b/cloudinit/analyze/dump.py |
| 195 | @@ -0,0 +1,176 @@ |
| 196 | +# This file is part of cloud-init. See LICENSE file for license information. |
| 197 | + |
| 198 | +import calendar |
| 199 | +from datetime import datetime |
| 200 | +import json |
| 201 | +import sys |
| 202 | + |
| 203 | +from cloudinit import util |
| 204 | + |
| 205 | +stage_to_description = { |
| 206 | + 'finished': 'finished running cloud-init', |
| 207 | + 'init-local': 'starting search for local datasources', |
| 208 | + 'init-network': 'searching for network datasources', |
| 209 | + 'init': 'searching for network datasources', |
| 210 | + 'modules-config': 'running config modules', |
| 211 | + 'modules-final': 'finalizing modules', |
| 212 | + 'modules': 'running modules for', |
| 213 | + 'single': 'running single module ', |
| 214 | +} |
| 215 | + |
| 216 | +# logger's asctime format |
| 217 | +CLOUD_INIT_ASCTIME_FMT = "%Y-%m-%d %H:%M:%S,%f" |
| 218 | + |
| 219 | +# journctl -o short-precise |
| 220 | +CLOUD_INIT_JOURNALCTL_FMT = "%b %d %H:%M:%S.%f %Y" |
| 221 | + |
| 222 | +# other |
| 223 | +DEFAULT_FMT = "%b %d %H:%M:%S %Y" |
| 224 | + |
| 225 | + |
| 226 | +def parse_timestamp(timestampstr): |
| 227 | + # default syslog time does not include the current year |
| 228 | + months = [calendar.month_abbr[m] for m in range(1, 13)] |
| 229 | + if timestampstr.split()[0] in months: |
| 230 | + # Aug 29 22:55:26 |
| 231 | + FMT = DEFAULT_FMT |
| 232 | + if '.' in timestampstr: |
| 233 | + FMT = CLOUD_INIT_JOURNALCTL_FMT |
| 234 | + dt = datetime.strptime(timestampstr + " " + |
| 235 | + str(datetime.now().year), |
| 236 | + FMT) |
| 237 | + timestamp = dt.strftime("%s.%f") |
| 238 | + elif "," in timestampstr: |
| 239 | + # 2016-09-12 14:39:20,839 |
| 240 | + dt = datetime.strptime(timestampstr, CLOUD_INIT_ASCTIME_FMT) |
| 241 | + timestamp = dt.strftime("%s.%f") |
| 242 | + else: |
| 243 | + # allow date(1) to handle other formats we don't expect |
| 244 | + timestamp = parse_timestamp_from_date(timestampstr) |
| 245 | + |
| 246 | + return float(timestamp) |
| 247 | + |
| 248 | + |
| 249 | +def parse_timestamp_from_date(timestampstr): |
| 250 | + out, _ = util.subp(['date', '+%s.%3N', '-d', timestampstr]) |
| 251 | + timestamp = out.strip() |
| 252 | + return float(timestamp) |
| 253 | + |
| 254 | + |
| 255 | +def parse_ci_logline(line): |
| 256 | + # Stage Starts: |
| 257 | + # Cloud-init v. 0.7.7 running 'init-local' at \ |
| 258 | + # Fri, 02 Sep 2016 19:28:07 +0000. Up 1.0 seconds. |
| 259 | + # Cloud-init v. 0.7.7 running 'init' at \ |
| 260 | + # Fri, 02 Sep 2016 19:28:08 +0000. Up 2.0 seconds. |
| 261 | + # Cloud-init v. 0.7.7 finished at |
| 262 | + # Aug 29 22:55:26 test1 [CLOUDINIT] handlers.py[DEBUG]: \ |
| 263 | + # finish: modules-final: SUCCESS: running modules for final |
| 264 | + # 2016-08-30T21:53:25.972325+00:00 y1 [CLOUDINIT] handlers.py[DEBUG]: \ |
| 265 | + # finish: modules-final: SUCCESS: running modules for final |
| 266 | + # |
| 267 | + # Nov 03 06:51:06.074410 x2 cloud-init[106]: [CLOUDINIT] util.py[DEBUG]: \ |
| 268 | + # Cloud-init v. 0.7.8 running 'init-local' at \ |
| 269 | + # Thu, 03 Nov 2016 06:51:06 +0000. Up 1.0 seconds. |
| 270 | + # |
| 271 | + # 2017-05-22 18:02:01,088 - util.py[DEBUG]: Cloud-init v. 0.7.9 running \ |
| 272 | + # 'init-local' at Mon, 22 May 2017 18:02:01 +0000. Up 2.0 seconds. |
| 273 | + |
| 274 | + separators = [' - ', ' [CLOUDINIT] '] |
| 275 | + found = False |
| 276 | + for sep in separators: |
| 277 | + if sep in line: |
| 278 | + found = True |
| 279 | + break |
| 280 | + |
| 281 | + if not found: |
| 282 | + return None |
| 283 | + |
| 284 | + (timehost, eventstr) = line.split(sep) |
| 285 | + |
| 286 | + # journalctl -o short-precise |
| 287 | + if timehost.endswith(":"): |
| 288 | + timehost = " ".join(timehost.split()[0:-1]) |
| 289 | + |
| 290 | + if "," in timehost: |
| 291 | + timestampstr, extra = timehost.split(",") |
| 292 | + timestampstr += ",%s" % extra.split()[0] |
| 293 | + if ' ' in extra: |
| 294 | + hostname = extra.split()[-1] |
| 295 | + else: |
| 296 | + hostname = timehost.split()[-1] |
| 297 | + timestampstr = timehost.split(hostname)[0].strip() |
| 298 | + if 'Cloud-init v.' in eventstr: |
| 299 | + event_type = 'start' |
| 300 | + if 'running' in eventstr: |
| 301 | + stage_and_timestamp = eventstr.split('running')[1].lstrip() |
| 302 | + event_name, _ = stage_and_timestamp.split(' at ') |
| 303 | + event_name = event_name.replace("'", "").replace(":", "-") |
| 304 | + if event_name == "init": |
| 305 | + event_name = "init-network" |
| 306 | + else: |
| 307 | + # don't generate a start for the 'finished at' banner |
| 308 | + return None |
| 309 | + event_description = stage_to_description[event_name] |
| 310 | + else: |
| 311 | + (pymodloglvl, event_type, event_name) = eventstr.split()[0:3] |
| 312 | + event_description = eventstr.split(event_name)[1].strip() |
| 313 | + |
| 314 | + event = { |
| 315 | + 'name': event_name.rstrip(":"), |
| 316 | + 'description': event_description, |
| 317 | + 'timestamp': parse_timestamp(timestampstr), |
| 318 | + 'origin': 'cloudinit', |
| 319 | + 'event_type': event_type.rstrip(":"), |
| 320 | + } |
| 321 | + if event['event_type'] == "finish": |
| 322 | + result = event_description.split(":")[0] |
| 323 | + desc = event_description.split(result)[1].lstrip(':').strip() |
| 324 | + event['result'] = result |
| 325 | + event['description'] = desc.strip() |
| 326 | + |
| 327 | + return event |
| 328 | + |
| 329 | + |
| 330 | +def json_dumps(data): |
| 331 | + return json.dumps(data, indent=1, sort_keys=True, |
| 332 | + separators=(',', ': ')) |
| 333 | + |
| 334 | + |
| 335 | +def dump_events(cisource=None, rawdata=None): |
| 336 | + events = [] |
| 337 | + event = None |
| 338 | + CI_EVENT_MATCHES = ['start:', 'finish:', 'Cloud-init v.'] |
| 339 | + |
| 340 | + if not any([cisource, rawdata]): |
| 341 | + raise ValueError('Either cisource or rawdata parameters are required') |
| 342 | + |
| 343 | + if rawdata: |
| 344 | + data = rawdata.splitlines() |
| 345 | + else: |
| 346 | + data = cisource.readlines() |
| 347 | + |
| 348 | + for line in data: |
| 349 | + for match in CI_EVENT_MATCHES: |
| 350 | + if match in line: |
| 351 | + try: |
| 352 | + event = parse_ci_logline(line) |
| 353 | + except ValueError: |
| 354 | + sys.stderr.write('Skipping invalid entry\n') |
| 355 | + if event: |
| 356 | + events.append(event) |
| 357 | + |
| 358 | + return events, data |
| 359 | + |
| 360 | + |
| 361 | +def main(): |
| 362 | + if len(sys.argv) > 1: |
| 363 | + cisource = open(sys.argv[1]) |
| 364 | + else: |
| 365 | + cisource = sys.stdin |
| 366 | + |
| 367 | + return json_dumps(dump_events(cisource)) |
| 368 | + |
| 369 | + |
| 370 | +if __name__ == "__main__": |
| 371 | + print(main()) |
| 372 | diff --git a/cloudinit/analyze/show.py b/cloudinit/analyze/show.py |
| 373 | new file mode 100644 |
| 374 | index 0000000..3b356bb |
| 375 | --- /dev/null |
| 376 | +++ b/cloudinit/analyze/show.py |
| 377 | @@ -0,0 +1,207 @@ |
| 378 | +# Copyright (C) 2016 Canonical Ltd. |
| 379 | +# |
| 380 | +# Author: Ryan Harper <ryan.harper@canonical.com> |
| 381 | +# |
| 382 | +# This file is part of cloud-init. See LICENSE file for license information. |
| 383 | + |
| 384 | +import base64 |
| 385 | +import datetime |
| 386 | +import json |
| 387 | +import os |
| 388 | + |
| 389 | +from cloudinit import util |
| 390 | + |
| 391 | +# An event: |
| 392 | +''' |
| 393 | +{ |
| 394 | + "description": "executing late commands", |
| 395 | + "event_type": "start", |
| 396 | + "level": "INFO", |
| 397 | + "name": "cmd-install/stage-late" |
| 398 | + "origin": "cloudinit", |
| 399 | + "timestamp": 1461164249.1590767, |
| 400 | +}, |
| 401 | + |
| 402 | + { |
| 403 | + "description": "executing late commands", |
| 404 | + "event_type": "finish", |
| 405 | + "level": "INFO", |
| 406 | + "name": "cmd-install/stage-late", |
| 407 | + "origin": "cloudinit", |
| 408 | + "result": "SUCCESS", |
| 409 | + "timestamp": 1461164249.1590767 |
| 410 | + } |
| 411 | + |
| 412 | +''' |
| 413 | +format_key = { |
| 414 | + '%d': 'delta', |
| 415 | + '%D': 'description', |
| 416 | + '%E': 'elapsed', |
| 417 | + '%e': 'event_type', |
| 418 | + '%I': 'indent', |
| 419 | + '%l': 'level', |
| 420 | + '%n': 'name', |
| 421 | + '%o': 'origin', |
| 422 | + '%r': 'result', |
| 423 | + '%t': 'timestamp', |
| 424 | + '%T': 'total_time', |
| 425 | +} |
| 426 | + |
| 427 | +formatting_help = " ".join(["{0}: {1}".format(k.replace('%', '%%'), v) |
| 428 | + for k, v in format_key.items()]) |
| 429 | + |
| 430 | + |
| 431 | +def format_record(msg, event): |
| 432 | + for i, j in format_key.items(): |
| 433 | + if i in msg: |
| 434 | + # ensure consistent formatting of time values |
| 435 | + if j in ['delta', 'elapsed', 'timestamp']: |
| 436 | + msg = msg.replace(i, "{%s:08.5f}" % j) |
| 437 | + else: |
| 438 | + msg = msg.replace(i, "{%s}" % j) |
| 439 | + return msg.format(**event) |
| 440 | + |
| 441 | + |
| 442 | +def dump_event_files(event): |
| 443 | + content = dict((k, v) for k, v in event.items() if k not in ['content']) |
| 444 | + files = content['files'] |
| 445 | + saved = [] |
| 446 | + for f in files: |
| 447 | + fname = f['path'] |
| 448 | + fn_local = os.path.basename(fname) |
| 449 | + fcontent = base64.b64decode(f['content']).decode('ascii') |
| 450 | + util.write_file(fn_local, fcontent) |
| 451 | + saved.append(fn_local) |
| 452 | + |
| 453 | + return saved |
| 454 | + |
| 455 | + |
| 456 | +def event_name(event): |
| 457 | + if event: |
| 458 | + return event.get('name') |
| 459 | + return None |
| 460 | + |
| 461 | + |
| 462 | +def event_type(event): |
| 463 | + if event: |
| 464 | + return event.get('event_type') |
| 465 | + return None |
| 466 | + |
| 467 | + |
| 468 | +def event_parent(event): |
| 469 | + if event: |
| 470 | + return event_name(event).split("/")[0] |
| 471 | + return None |
| 472 | + |
| 473 | + |
| 474 | +def event_timestamp(event): |
| 475 | + return float(event.get('timestamp')) |
| 476 | + |
| 477 | + |
| 478 | +def event_datetime(event): |
| 479 | + return datetime.datetime.utcfromtimestamp(event_timestamp(event)) |
| 480 | + |
| 481 | + |
| 482 | +def delta_seconds(t1, t2): |
| 483 | + return (t2 - t1).total_seconds() |
| 484 | + |
| 485 | + |
| 486 | +def event_duration(start, finish): |
| 487 | + return delta_seconds(event_datetime(start), event_datetime(finish)) |
| 488 | + |
| 489 | + |
| 490 | +def event_record(start_time, start, finish): |
| 491 | + record = finish.copy() |
| 492 | + record.update({ |
| 493 | + 'delta': event_duration(start, finish), |
| 494 | + 'elapsed': delta_seconds(start_time, event_datetime(start)), |
| 495 | + 'indent': '|' + ' ' * (event_name(start).count('/') - 1) + '`->', |
| 496 | + }) |
| 497 | + |
| 498 | + return record |
| 499 | + |
| 500 | + |
| 501 | +def total_time_record(total_time): |
| 502 | + return 'Total Time: %3.5f seconds\n' % total_time |
| 503 | + |
| 504 | + |
| 505 | +def generate_records(events, blame_sort=False, |
| 506 | + print_format="(%n) %d seconds in %I%D", |
| 507 | + dump_files=False, log_datafiles=False): |
| 508 | + |
| 509 | + sorted_events = sorted(events, key=lambda x: x['timestamp']) |
| 510 | + records = [] |
| 511 | + start_time = None |
| 512 | + total_time = 0.0 |
| 513 | + stage_start_time = {} |
| 514 | + stages_seen = [] |
| 515 | + boot_records = [] |
| 516 | + |
| 517 | + unprocessed = [] |
| 518 | + for e in range(0, len(sorted_events)): |
| 519 | + event = events[e] |
| 520 | + try: |
| 521 | + next_evt = events[e + 1] |
| 522 | + except IndexError: |
| 523 | + next_evt = None |
| 524 | + |
| 525 | + if event_type(event) == 'start': |
| 526 | + if event.get('name') in stages_seen: |
| 527 | + records.append(total_time_record(total_time)) |
| 528 | + boot_records.append(records) |
| 529 | + records = [] |
| 530 | + start_time = None |
| 531 | + total_time = 0.0 |
| 532 | + |
| 533 | + if start_time is None: |
| 534 | + stages_seen = [] |
| 535 | + start_time = event_datetime(event) |
| 536 | + stage_start_time[event_parent(event)] = start_time |
| 537 | + |
| 538 | + # see if we have a pair |
| 539 | + if event_name(event) == event_name(next_evt): |
| 540 | + if event_type(next_evt) == 'finish': |
| 541 | + records.append(format_record(print_format, |
| 542 | + event_record(start_time, |
| 543 | + event, |
| 544 | + next_evt))) |
| 545 | + else: |
| 546 | + # This is a parent event |
| 547 | + records.append("Starting stage: %s" % event.get('name')) |
| 548 | + unprocessed.append(event) |
| 549 | + stages_seen.append(event.get('name')) |
| 550 | + continue |
| 551 | + else: |
| 552 | + prev_evt = unprocessed.pop() |
| 553 | + if event_name(event) == event_name(prev_evt): |
| 554 | + record = event_record(start_time, prev_evt, event) |
| 555 | + records.append(format_record("Finished stage: " |
| 556 | + "(%n) %d seconds ", |
| 557 | + record) + "\n") |
| 558 | + total_time += record.get('delta') |
| 559 | + else: |
| 560 | + # not a match, put it back |
| 561 | + unprocessed.append(prev_evt) |
| 562 | + |
| 563 | + records.append(total_time_record(total_time)) |
| 564 | + boot_records.append(records) |
| 565 | + return boot_records |
| 566 | + |
| 567 | + |
| 568 | +def show_events(events, print_format): |
| 569 | + return generate_records(events, print_format=print_format) |
| 570 | + |
| 571 | + |
| 572 | +def load_events(infile, rawdata=None): |
| 573 | + if rawdata: |
| 574 | + data = rawdata.read() |
| 575 | + else: |
| 576 | + data = infile.read() |
| 577 | + |
| 578 | + j = None |
| 579 | + try: |
| 580 | + j = json.loads(data) |
| 581 | + except json.JSONDecodeError: |
| 582 | + pass |
| 583 | + |
| 584 | + return j, data |
| 585 | diff --git a/cloudinit/analyze/tests/test_dump.py b/cloudinit/analyze/tests/test_dump.py |
| 586 | new file mode 100644 |
| 587 | index 0000000..2c0885d |
| 588 | --- /dev/null |
| 589 | +++ b/cloudinit/analyze/tests/test_dump.py |
| 590 | @@ -0,0 +1,210 @@ |
| 591 | +# This file is part of cloud-init. See LICENSE file for license information. |
| 592 | + |
| 593 | +from datetime import datetime |
| 594 | +from textwrap import dedent |
| 595 | + |
| 596 | +from cloudinit.analyze.dump import ( |
| 597 | + dump_events, parse_ci_logline, parse_timestamp) |
| 598 | +from cloudinit.util import subp, write_file |
| 599 | +from tests.unittests.helpers import CiTestCase |
| 600 | + |
| 601 | + |
| 602 | +class TestParseTimestamp(CiTestCase): |
| 603 | + |
| 604 | + def test_parse_timestamp_handles_cloud_init_default_format(self): |
| 605 | + """Logs with cloud-init detailed formats will be properly parsed.""" |
| 606 | + trusty_fmt = '%Y-%m-%d %H:%M:%S,%f' |
| 607 | + trusty_stamp = '2016-09-12 14:39:20,839' |
| 608 | + |
| 609 | + parsed = parse_timestamp(trusty_stamp) |
| 610 | + |
| 611 | + # convert ourselves |
| 612 | + dt = datetime.strptime(trusty_stamp, trusty_fmt) |
| 613 | + expected = float(dt.strftime('%s.%f')) |
| 614 | + |
| 615 | + # use date(1) |
| 616 | + out, _err = subp(['date', '+%s.%3N', '-d', trusty_stamp]) |
| 617 | + timestamp = out.strip() |
| 618 | + date_ts = float(timestamp) |
| 619 | + |
| 620 | + self.assertEqual(expected, parsed) |
| 621 | + self.assertEqual(expected, date_ts) |
| 622 | + self.assertEqual(date_ts, parsed) |
| 623 | + |
| 624 | + def test_parse_timestamp_handles_syslog_adding_year(self): |
| 625 | + """Syslog timestamps lack a year. Add year and properly parse.""" |
| 626 | + syslog_fmt = '%b %d %H:%M:%S %Y' |
| 627 | + syslog_stamp = 'Aug 08 15:12:51' |
| 628 | + |
| 629 | + # convert stamp ourselves by adding the missing year value |
| 630 | + year = datetime.now().year |
| 631 | + dt = datetime.strptime(syslog_stamp + " " + str(year), syslog_fmt) |
| 632 | + expected = float(dt.strftime('%s.%f')) |
| 633 | + parsed = parse_timestamp(syslog_stamp) |
| 634 | + |
| 635 | + # use date(1) |
| 636 | + out, _ = subp(['date', '+%s.%3N', '-d', syslog_stamp]) |
| 637 | + timestamp = out.strip() |
| 638 | + date_ts = float(timestamp) |
| 639 | + |
| 640 | + self.assertEqual(expected, parsed) |
| 641 | + self.assertEqual(expected, date_ts) |
| 642 | + self.assertEqual(date_ts, parsed) |
| 643 | + |
| 644 | + def test_parse_timestamp_handles_journalctl_format_adding_year(self): |
| 645 | + """Journalctl precise timestamps lack a year. Add year and parse.""" |
| 646 | + journal_fmt = '%b %d %H:%M:%S.%f %Y' |
| 647 | + journal_stamp = 'Aug 08 17:15:50.606811' |
| 648 | + |
| 649 | + # convert stamp ourselves by adding the missing year value |
| 650 | + year = datetime.now().year |
| 651 | + dt = datetime.strptime(journal_stamp + " " + str(year), journal_fmt) |
| 652 | + expected = float(dt.strftime('%s.%f')) |
| 653 | + parsed = parse_timestamp(journal_stamp) |
| 654 | + |
| 655 | + # use date(1) |
| 656 | + out, _ = subp(['date', '+%s.%6N', '-d', journal_stamp]) |
| 657 | + timestamp = out.strip() |
| 658 | + date_ts = float(timestamp) |
| 659 | + |
| 660 | + self.assertEqual(expected, parsed) |
| 661 | + self.assertEqual(expected, date_ts) |
| 662 | + self.assertEqual(date_ts, parsed) |
| 663 | + |
| 664 | + def test_parse_unexpected_timestamp_format_with_date_command(self): |
| 665 | + """Dump sends unexpected timestamp formats to data for processing.""" |
| 666 | + new_fmt = '%H:%M %m/%d %Y' |
| 667 | + new_stamp = '17:15 08/08' |
| 668 | + |
| 669 | + # convert stamp ourselves by adding the missing year value |
| 670 | + year = datetime.now().year |
| 671 | + dt = datetime.strptime(new_stamp + " " + str(year), new_fmt) |
| 672 | + expected = float(dt.strftime('%s.%f')) |
| 673 | + parsed = parse_timestamp(new_stamp) |
| 674 | + |
| 675 | + # use date(1) |
| 676 | + out, _ = subp(['date', '+%s.%6N', '-d', new_stamp]) |
| 677 | + timestamp = out.strip() |
| 678 | + date_ts = float(timestamp) |
| 679 | + |
| 680 | + self.assertEqual(expected, parsed) |
| 681 | + self.assertEqual(expected, date_ts) |
| 682 | + self.assertEqual(date_ts, parsed) |
| 683 | + |
| 684 | + |
| 685 | +class TestParseCILogLine(CiTestCase): |
| 686 | + |
| 687 | + def test_parse_logline_returns_none_without_separators(self): |
| 688 | + """When no separators are found, parse_ci_logline returns None.""" |
| 689 | + expected_parse_ignores = [ |
| 690 | + '', '-', 'adsf-asdf', '2017-05-22 18:02:01,088', 'CLOUDINIT'] |
| 691 | + for parse_ignores in expected_parse_ignores: |
| 692 | + self.assertIsNone(parse_ci_logline(parse_ignores)) |
| 693 | + |
| 694 | + def test_parse_logline_returns_event_for_cloud_init_logs(self): |
| 695 | + """parse_ci_logline returns an event parse from cloud-init format.""" |
| 696 | + line = ( |
| 697 | + "2017-08-08 20:05:07,147 - util.py[DEBUG]: Cloud-init v. 0.7.9" |
| 698 | + " running 'init-local' at Tue, 08 Aug 2017 20:05:07 +0000. Up" |
| 699 | + " 6.26 seconds.") |
| 700 | + dt = datetime.strptime( |
| 701 | + '2017-08-08 20:05:07,147', '%Y-%m-%d %H:%M:%S,%f') |
| 702 | + timestamp = float(dt.strftime('%s.%f')) |
| 703 | + expected = { |
| 704 | + 'description': 'starting search for local datasources', |
| 705 | + 'event_type': 'start', |
| 706 | + 'name': 'init-local', |
| 707 | + 'origin': 'cloudinit', |
| 708 | + 'timestamp': timestamp} |
| 709 | + self.assertEqual(expected, parse_ci_logline(line)) |
| 710 | + |
| 711 | + def test_parse_logline_returns_event_for_journalctl_logs(self): |
| 712 | + """parse_ci_logline returns an event parse from journalctl format.""" |
| 713 | + line = ("Nov 03 06:51:06.074410 x2 cloud-init[106]: [CLOUDINIT]" |
| 714 | + " util.py[DEBUG]: Cloud-init v. 0.7.8 running 'init-local' at" |
| 715 | + " Thu, 03 Nov 2016 06:51:06 +0000. Up 1.0 seconds.") |
| 716 | + year = datetime.now().year |
| 717 | + dt = datetime.strptime( |
| 718 | + 'Nov 03 06:51:06.074410 %d' % year, '%b %d %H:%M:%S.%f %Y') |
| 719 | + timestamp = float(dt.strftime('%s.%f')) |
| 720 | + expected = { |
| 721 | + 'description': 'starting search for local datasources', |
| 722 | + 'event_type': 'start', |
| 723 | + 'name': 'init-local', |
| 724 | + 'origin': 'cloudinit', |
| 725 | + 'timestamp': timestamp} |
| 726 | + self.assertEqual(expected, parse_ci_logline(line)) |
| 727 | + |
| 728 | + def test_parse_logline_returns_event_for_finish_events(self): |
| 729 | + """parse_ci_logline returns a finish event for a parsed log line.""" |
| 730 | + line = ('2016-08-30 21:53:25.972325+00:00 y1 [CLOUDINIT]' |
| 731 | + ' handlers.py[DEBUG]: finish: modules-final: SUCCESS: running' |
| 732 | + ' modules for final') |
| 733 | + expected = { |
| 734 | + 'description': 'running modules for final', |
| 735 | + 'event_type': 'finish', |
| 736 | + 'name': 'modules-final', |
| 737 | + 'origin': 'cloudinit', |
| 738 | + 'result': 'SUCCESS', |
| 739 | + 'timestamp': 1472594005.972} |
| 740 | + self.assertEqual(expected, parse_ci_logline(line)) |
| 741 | + |
| 742 | + |
| 743 | +SAMPLE_LOGS = dedent("""\ |
| 744 | +Nov 03 06:51:06.074410 x2 cloud-init[106]: [CLOUDINIT] util.py[DEBUG]:\ |
| 745 | + Cloud-init v. 0.7.8 running 'init-local' at Thu, 03 Nov 2016\ |
| 746 | + 06:51:06 +0000. Up 1.0 seconds. |
| 747 | +2016-08-30 21:53:25.972325+00:00 y1 [CLOUDINIT] handlers.py[DEBUG]: finish:\ |
| 748 | + modules-final: SUCCESS: running modules for final |
| 749 | +""") |
| 750 | + |
| 751 | + |
| 752 | +class TestDumpEvents(CiTestCase): |
| 753 | + maxDiff = None |
| 754 | + |
| 755 | + def test_dump_events_with_rawdata(self): |
| 756 | + """Rawdata is split and parsed into a tuple of events and data.""" |
| 757 | + events, data = dump_events(rawdata=SAMPLE_LOGS) |
| 758 | + expected_data = SAMPLE_LOGS.splitlines() |
| 759 | + year = datetime.now().year |
| 760 | + dt1 = datetime.strptime( |
| 761 | + 'Nov 03 06:51:06.074410 %d' % year, '%b %d %H:%M:%S.%f %Y') |
| 762 | + timestamp1 = float(dt1.strftime('%s.%f')) |
| 763 | + expected_events = [{ |
| 764 | + 'description': 'starting search for local datasources', |
| 765 | + 'event_type': 'start', |
| 766 | + 'name': 'init-local', |
| 767 | + 'origin': 'cloudinit', |
| 768 | + 'timestamp': timestamp1}, { |
| 769 | + 'description': 'running modules for final', |
| 770 | + 'event_type': 'finish', |
| 771 | + 'name': 'modules-final', |
| 772 | + 'origin': 'cloudinit', |
| 773 | + 'result': 'SUCCESS', |
| 774 | + 'timestamp': 1472594005.972}] |
| 775 | + self.assertEqual(expected_events, events) |
| 776 | + self.assertEqual(expected_data, data) |
| 777 | + |
| 778 | + def test_dump_events_with_cisource(self): |
| 779 | + """Cisource file is read and parsed into a tuple of events and data.""" |
| 780 | + tmpfile = self.tmp_path('logfile') |
| 781 | + write_file(tmpfile, SAMPLE_LOGS) |
| 782 | + events, data = dump_events(cisource=open(tmpfile)) |
| 783 | + year = datetime.now().year |
| 784 | + dt1 = datetime.strptime( |
| 785 | + 'Nov 03 06:51:06.074410 %d' % year, '%b %d %H:%M:%S.%f %Y') |
| 786 | + timestamp1 = float(dt1.strftime('%s.%f')) |
| 787 | + expected_events = [{ |
| 788 | + 'description': 'starting search for local datasources', |
| 789 | + 'event_type': 'start', |
| 790 | + 'name': 'init-local', |
| 791 | + 'origin': 'cloudinit', |
| 792 | + 'timestamp': timestamp1}, { |
| 793 | + 'description': 'running modules for final', |
| 794 | + 'event_type': 'finish', |
| 795 | + 'name': 'modules-final', |
| 796 | + 'origin': 'cloudinit', |
| 797 | + 'result': 'SUCCESS', |
| 798 | + 'timestamp': 1472594005.972}] |
| 799 | + self.assertEqual(expected_events, events) |
| 800 | + self.assertEqual(SAMPLE_LOGS.splitlines(), [d.strip() for d in data]) |
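The two log formats handled above differ in one important way: journalctl lines omit the year, so the tests reconstruct it from `datetime.now().year`. A standalone sketch of that conversion, using the portable `datetime.timestamp()` in place of the glibc-specific `strftime('%s.%f')` the tests rely on (function name is illustrative, not cloud-init's):

```python
from datetime import datetime

def journalctl_to_epoch(stamp, year=None):
    """Convert a journalctl timestamp such as 'Nov 03 06:51:06.074410',
    which carries no year, into epoch seconds by assuming one."""
    if year is None:
        year = datetime.now().year  # same assumption the tests make
    dt = datetime.strptime('%s %d' % (stamp, year), '%b %d %H:%M:%S.%f %Y')
    # datetime.timestamp() treats a naive datetime as local time, matching
    # the strftime('%s.%f') trick but without depending on glibc.
    return dt.timestamp()
```

Note the microsecond part survives the round trip, which is why the expected event timestamps above carry sub-second precision.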
| 801 | diff --git a/cloudinit/cmd/main.py b/cloudinit/cmd/main.py |
| 802 | index 139e03b..9c0ac86 100644 |
| 803 | --- a/cloudinit/cmd/main.py |
| 804 | +++ b/cloudinit/cmd/main.py |
| 805 | @@ -50,13 +50,6 @@ WELCOME_MSG_TPL = ("Cloud-init v. {version} running '{action}' at " |
| 806 | # Module section template |
| 807 | MOD_SECTION_TPL = "cloud_%s_modules" |
| 808 | |
| 809 | -# Things u can query on |
| 810 | -QUERY_DATA_TYPES = [ |
| 811 | - 'data', |
| 812 | - 'data_raw', |
| 813 | - 'instance_id', |
| 814 | -] |
| 815 | - |
| 816 | # Frequency shortname to full name |
| 817 | # (so users don't have to remember the full name...) |
| 818 | FREQ_SHORT_NAMES = { |
| 819 | @@ -510,11 +503,6 @@ def main_modules(action_name, args): |
| 820 | return run_module_section(mods, name, name) |
| 821 | |
| 822 | |
| 823 | -def main_query(name, _args): |
| 824 | - raise NotImplementedError(("Action '%s' is not" |
| 825 | - " currently implemented") % (name)) |
| 826 | - |
| 827 | - |
| 828 | def main_single(name, args): |
| 829 | # Cloud-init single stage is broken up into the following sub-stages |
| 830 | # 1. Ensure that the init object fetches its config without errors |
| 831 | @@ -713,9 +701,11 @@ def main(sysv_args=None): |
| 832 | default=False) |
| 833 | |
| 834 | parser.set_defaults(reporter=None) |
| 835 | - subparsers = parser.add_subparsers() |
| 836 | + subparsers = parser.add_subparsers(title='Subcommands', dest='subcommand') |
| 837 | + subparsers.required = True |
| 838 | |
| 839 | # Each action and its sub-options (if any) |
| 840 | + |
| 841 | parser_init = subparsers.add_parser('init', |
| 842 | help=('initializes cloud-init and' |
| 843 | ' performs initial modules')) |
| 844 | @@ -737,17 +727,6 @@ def main(sysv_args=None): |
| 845 | choices=('init', 'config', 'final')) |
| 846 | parser_mod.set_defaults(action=('modules', main_modules)) |
| 847 | |
| 848 | - # These settings are used when you want to query information |
| 849 | - # stored in the cloud-init data objects/directories/files |
| 850 | - parser_query = subparsers.add_parser('query', |
| 851 | - help=('query information stored ' |
| 852 | - 'in cloud-init')) |
| 853 | - parser_query.add_argument("--name", '-n', action="store", |
| 854 | - help="item name to query on", |
| 855 | - required=True, |
| 856 | - choices=QUERY_DATA_TYPES) |
| 857 | - parser_query.set_defaults(action=('query', main_query)) |
| 858 | - |
| 859 | # This subcommand allows you to run a single module |
| 860 | parser_single = subparsers.add_parser('single', |
| 861 | help=('run a single module ')) |
| 862 | @@ -781,15 +760,22 @@ def main(sysv_args=None): |
| 863 | help=('list defined features')) |
| 864 | parser_features.set_defaults(action=('features', main_features)) |
| 865 | |
| 866 | + parser_analyze = subparsers.add_parser( |
| 867 | + 'analyze', help='Devel tool: Analyze cloud-init logs and data') |
| 868 | + if sysv_args and sysv_args[0] == 'analyze': |
| 869 | + # Only load this parser if analyze is specified to avoid file load cost |
| 870 | + # FIXME put this under 'devel' subcommand (coming in next branch) |
| 871 | + from cloudinit.analyze.__main__ import get_parser as analyze_parser |
| 872 | + # Construct analyze subcommand parser |
| 873 | + analyze_parser(parser_analyze) |
| 874 | + |
| 875 | args = parser.parse_args(args=sysv_args) |
| 876 | |
| 877 | - try: |
| 878 | - (name, functor) = args.action |
| 879 | - except AttributeError: |
| 880 | - parser.error('too few arguments') |
| 881 | + # subparsers.required = True and each subparser sets action=(name, functor) |
| 882 | + (name, functor) = args.action |
| 883 | |
| 884 | # Setup basic logging to start (until reinitialized) |
| 885 | - # iff in debug mode... |
| 886 | + # iff in debug mode. |
| 887 | if args.debug: |
| 888 | logging.setupBasicLogging() |
| 889 | |
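The main.py change above does two things: it makes the subcommand mandatory (restoring the Python 2 argparse behavior, so the `parser.error('too few arguments')` fallback is no longer needed) and it defers wiring up the `analyze` sub-parser until that subcommand is actually requested. A minimal sketch of the pattern with hypothetical subcommands (none of these names are cloud-init's):

```python
import argparse

def build_parser(sysv_args=None):
    parser = argparse.ArgumentParser(prog='demo')
    subparsers = parser.add_subparsers(title='Subcommands', dest='subcommand')
    # On Python 3, subparsers are optional unless explicitly required.
    subparsers.required = True

    parser_init = subparsers.add_parser('init', help='run init stage')
    parser_init.set_defaults(action=('init', lambda args: 'init ran'))

    parser_analyze = subparsers.add_parser('analyze', help='analyze logs')
    if sysv_args and sysv_args[0] == 'analyze':
        # Expensive argument wiring happens only when requested, mirroring
        # the deferred import of cloudinit.analyze.__main__ in the diff.
        parser_analyze.add_argument('--infile',
                                    default='/var/log/cloud-init.log')
        parser_analyze.set_defaults(
            action=('analyze', lambda args: args.infile))
    return parser
```

With `subparsers.required = True`, parsing an empty argument list now produces a proper argparse error rather than a namespace missing the `action` attribute.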
| 890 | diff --git a/cloudinit/config/cc_ntp.py b/cloudinit/config/cc_ntp.py |
| 891 | index 31ed64e..a02b4bf 100644 |
| 892 | --- a/cloudinit/config/cc_ntp.py |
| 893 | +++ b/cloudinit/config/cc_ntp.py |
| 894 | @@ -50,6 +50,7 @@ LOG = logging.getLogger(__name__) |
| 895 | |
| 896 | frequency = PER_INSTANCE |
| 897 | NTP_CONF = '/etc/ntp.conf' |
| 898 | +TIMESYNCD_CONF = '/etc/systemd/timesyncd.conf.d/cloud-init.conf' |
| 899 | NR_POOL_SERVERS = 4 |
| 900 | distros = ['centos', 'debian', 'fedora', 'opensuse', 'ubuntu'] |
| 901 | |
| 902 | @@ -132,20 +133,50 @@ def handle(name, cfg, cloud, log, _args): |
| 903 | " is a %s %instead"), type_utils.obj_name(ntp_cfg)) |
| 904 | |
| 905 | validate_cloudconfig_schema(cfg, schema) |
| 906 | + if ntp_installable(): |
| 907 | + service_name = 'ntp' |
| 908 | + confpath = NTP_CONF |
| 909 | + template_name = None |
| 910 | + packages = ['ntp'] |
| 911 | + check_exe = 'ntpd' |
| 912 | + else: |
| 913 | + service_name = 'systemd-timesyncd' |
| 914 | + confpath = TIMESYNCD_CONF |
| 915 | + template_name = 'timesyncd.conf' |
| 916 | + packages = [] |
| 917 | + check_exe = '/lib/systemd/systemd-timesyncd' |
| 918 | + |
| 919 | rename_ntp_conf() |
| 920 | # ensure when ntp is installed it has a configuration file |
| 921 | # to use instead of starting up with packaged defaults |
| 922 | - write_ntp_config_template(ntp_cfg, cloud) |
| 923 | - install_ntp(cloud.distro.install_packages, packages=['ntp'], |
| 924 | - check_exe="ntpd") |
| 925 | - # if ntp was already installed, it may not have started |
| 926 | + write_ntp_config_template(ntp_cfg, cloud, confpath, template=template_name) |
| 927 | + install_ntp(cloud.distro.install_packages, packages=packages, |
| 928 | + check_exe=check_exe) |
| 929 | + |
| 930 | try: |
| 931 | - reload_ntp(systemd=cloud.distro.uses_systemd()) |
| 932 | + reload_ntp(service_name, systemd=cloud.distro.uses_systemd()) |
| 933 | except util.ProcessExecutionError as e: |
| 934 | LOG.exception("Failed to reload/start ntp service: %s", e) |
| 935 | raise |
| 936 | |
| 937 | |
| 938 | +def ntp_installable(): |
| 939 | + """Check if we can install the ntp package |
| 940 | + |
| 941 | + Ubuntu-Core systems do not have an ntp package available, so |
| 942 | + we always return False. Other systems require package managers to install |
| 943 | + the ntp package. If we fail to find one of the package managers, then we |
| 944 | + cannot install ntp. |
| 945 | + """ |
| 946 | + if util.system_is_snappy(): |
| 947 | + return False |
| 948 | + |
| 949 | + if any(map(util.which, ['apt-get', 'dnf', 'yum', 'zypper'])): |
| 950 | + return True |
| 951 | + |
| 952 | + return False |
| 953 | + |
| 954 | + |
| 955 | def install_ntp(install_func, packages=None, check_exe="ntpd"): |
| 956 | if util.which(check_exe): |
| 957 | return |
| 958 | @@ -156,7 +187,7 @@ def install_ntp(install_func, packages=None, check_exe="ntpd"): |
| 959 | |
| 960 | |
| 961 | def rename_ntp_conf(config=None): |
| 962 | - """Rename any existing ntp.conf file and render from template""" |
| 963 | + """Rename any existing ntp.conf file""" |
| 964 | if config is None: # For testing |
| 965 | config = NTP_CONF |
| 966 | if os.path.exists(config): |
| 967 | @@ -171,7 +202,7 @@ def generate_server_names(distro): |
| 968 | return names |
| 969 | |
| 970 | |
| 971 | -def write_ntp_config_template(cfg, cloud): |
| 972 | +def write_ntp_config_template(cfg, cloud, path, template=None): |
| 973 | servers = cfg.get('servers', []) |
| 974 | pools = cfg.get('pools', []) |
| 975 | |
| 976 | @@ -185,19 +216,20 @@ def write_ntp_config_template(cfg, cloud): |
| 977 | 'pools': pools, |
| 978 | } |
| 979 | |
| 980 | - template_fn = cloud.get_template_filename('ntp.conf.%s' % |
| 981 | - (cloud.distro.name)) |
| 982 | + if template is None: |
| 983 | + template = 'ntp.conf.%s' % cloud.distro.name |
| 984 | + |
| 985 | + template_fn = cloud.get_template_filename(template) |
| 986 | if not template_fn: |
| 987 | template_fn = cloud.get_template_filename('ntp.conf') |
| 988 | if not template_fn: |
| 989 | raise RuntimeError(("No template found, " |
| 990 | - "not rendering %s"), NTP_CONF) |
| 991 | + "not rendering %s"), path) |
| 992 | |
| 993 | - templater.render_to_file(template_fn, NTP_CONF, params) |
| 994 | + templater.render_to_file(template_fn, path, params) |
| 995 | |
| 996 | |
| 997 | -def reload_ntp(systemd=False): |
| 998 | - service = 'ntp' |
| 999 | +def reload_ntp(service, systemd=False): |
| 1000 | if systemd: |
| 1001 | cmd = ['systemctl', 'reload-or-restart', service] |
| 1002 | else: |
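The template lookup in `write_ntp_config_template` now follows a fallback chain: an explicit template name if given (e.g. `timesyncd.conf` for the systemd-timesyncd path), else the distro-specific `ntp.conf.<distro>`, else plain `ntp.conf`, else error out. A sketch of that selection logic, with an `available` set standing in for the filesystem lookup that `cloud.get_template_filename` actually performs (that substitution is an assumption for the sake of a self-contained example):

```python
def pick_template(available, distro_name, template=None):
    """Return the first template name found, mimicking the fallback
    chain in write_ntp_config_template."""
    if template is None:
        template = 'ntp.conf.%s' % distro_name
    for candidate in (template, 'ntp.conf'):
        if candidate in available:
            return candidate
    raise RuntimeError('No template found, not rendering')
```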
| 1003 | diff --git a/cloudinit/distros/arch.py b/cloudinit/distros/arch.py |
| 1004 | index b4c0ba7..f87a343 100644 |
| 1005 | --- a/cloudinit/distros/arch.py |
| 1006 | +++ b/cloudinit/distros/arch.py |
| 1007 | @@ -14,6 +14,8 @@ from cloudinit.distros.parsers.hostname import HostnameConf |
| 1008 | |
| 1009 | from cloudinit.settings import PER_INSTANCE |
| 1010 | |
| 1011 | +import os |
| 1012 | + |
| 1013 | LOG = logging.getLogger(__name__) |
| 1014 | |
| 1015 | |
| 1016 | @@ -52,31 +54,10 @@ class Distro(distros.Distro): |
| 1017 | entries = net_util.translate_network(settings) |
| 1018 | LOG.debug("Translated ubuntu style network settings %s into %s", |
| 1019 | settings, entries) |
| 1020 | - dev_names = entries.keys() |
| 1021 | - # Format for netctl |
| 1022 | - for (dev, info) in entries.items(): |
| 1023 | - nameservers = [] |
| 1024 | - net_fn = self.network_conf_dir + dev |
| 1025 | - net_cfg = { |
| 1026 | - 'Connection': 'ethernet', |
| 1027 | - 'Interface': dev, |
| 1028 | - 'IP': info.get('bootproto'), |
| 1029 | - 'Address': "('%s/%s')" % (info.get('address'), |
| 1030 | - info.get('netmask')), |
| 1031 | - 'Gateway': info.get('gateway'), |
| 1032 | - 'DNS': str(tuple(info.get('dns-nameservers'))).replace(',', '') |
| 1033 | - } |
| 1034 | - util.write_file(net_fn, convert_netctl(net_cfg)) |
| 1035 | - if info.get('auto'): |
| 1036 | - self._enable_interface(dev) |
| 1037 | - if 'dns-nameservers' in info: |
| 1038 | - nameservers.extend(info['dns-nameservers']) |
| 1039 | - |
| 1040 | - if nameservers: |
| 1041 | - util.write_file(self.resolve_conf_fn, |
| 1042 | - convert_resolv_conf(nameservers)) |
| 1043 | - |
| 1044 | - return dev_names |
| 1045 | + return _render_network( |
| 1046 | + entries, resolv_conf=self.resolve_conf_fn, |
| 1047 | + conf_dir=self.network_conf_dir, |
| 1048 | + enable_func=self._enable_interface) |
| 1049 | |
| 1050 | def _enable_interface(self, device_name): |
| 1051 | cmd = ['netctl', 'reenable', device_name] |
| 1052 | @@ -173,13 +154,60 @@ class Distro(distros.Distro): |
| 1053 | ["-y"], freq=PER_INSTANCE) |
| 1054 | |
| 1055 | |
| 1056 | +def _render_network(entries, target="/", conf_dir="etc/netctl", |
| 1057 | + resolv_conf="etc/resolv.conf", enable_func=None): |
| 1058 | + """Render the translate_network format into netctl files in target. |
| 1059 | + Paths will be rendered under target. |
| 1060 | + """ |
| 1061 | + |
| 1062 | + devs = [] |
| 1063 | + nameservers = [] |
| 1064 | + resolv_conf = util.target_path(target, resolv_conf) |
| 1065 | + conf_dir = util.target_path(target, conf_dir) |
| 1066 | + |
| 1067 | + for (dev, info) in entries.items(): |
| 1068 | + if dev == 'lo': |
| 1069 | + # no configuration should be rendered for 'lo' |
| 1070 | + continue |
| 1071 | + devs.append(dev) |
| 1072 | + net_fn = os.path.join(conf_dir, dev) |
| 1073 | + net_cfg = { |
| 1074 | + 'Connection': 'ethernet', |
| 1075 | + 'Interface': dev, |
| 1076 | + 'IP': info.get('bootproto'), |
| 1077 | + 'Address': "%s/%s" % (info.get('address'), |
| 1078 | + info.get('netmask')), |
| 1079 | + 'Gateway': info.get('gateway'), |
| 1080 | + 'DNS': info.get('dns-nameservers', []), |
| 1081 | + } |
| 1082 | + util.write_file(net_fn, convert_netctl(net_cfg)) |
| 1083 | + if enable_func and info.get('auto'): |
| 1084 | + enable_func(dev) |
| 1085 | + if 'dns-nameservers' in info: |
| 1086 | + nameservers.extend(info['dns-nameservers']) |
| 1087 | + |
| 1088 | + if nameservers: |
| 1089 | + util.write_file(resolv_conf, |
| 1090 | + convert_resolv_conf(nameservers)) |
| 1091 | + return devs |
| 1092 | + |
| 1093 | + |
| 1094 | def convert_netctl(settings): |
| 1095 | - """Returns a settings string formatted for netctl.""" |
| 1096 | - result = '' |
| 1097 | - if isinstance(settings, dict): |
| 1098 | - for k, v in settings.items(): |
| 1099 | - result = result + '%s=%s\n' % (k, v) |
| 1100 | - return result |
| 1101 | + """Given a dictionary, returns a string in netctl profile format. |
| 1102 | + |
| 1103 | + netctl profile is described at: |
| 1104 | + https://git.archlinux.org/netctl.git/tree/docs/netctl.profile.5.txt |
| 1105 | + |
| 1106 | + Note that the 'Special Quoting Rules' are not handled here.""" |
| 1107 | + result = [] |
| 1108 | + for key in sorted(settings): |
| 1109 | + val = settings[key] |
| 1110 | + if val is None: |
| 1111 | + val = "" |
| 1112 | + elif isinstance(val, (tuple, list)): |
| 1113 | + val = "(" + ' '.join("'%s'" % v for v in val) + ")" |
| 1114 | + result.append("%s=%s\n" % (key, val)) |
| 1115 | + return ''.join(result) |
| 1116 | |
| 1117 | |
| 1118 | def convert_resolv_conf(settings): |
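The rewritten `convert_netctl` emits keys in sorted order, renders `None` as an empty value, and turns lists into netctl's array syntax. Replaying the function body from the diff against a sample profile dictionary (the values are illustrative) shows the resulting profile text:

```python
def convert_netctl(settings):
    # Body as added in the diff above ('Special Quoting Rules' still
    # not handled).
    result = []
    for key in sorted(settings):
        val = settings[key]
        if val is None:
            val = ""
        elif isinstance(val, (tuple, list)):
            val = "(" + ' '.join("'%s'" % v for v in val) + ")"
        result.append("%s=%s\n" % (key, val))
    return ''.join(result)

profile = convert_netctl({
    'Connection': 'ethernet',
    'Interface': 'eth0',
    'IP': 'static',
    'Address': '192.168.1.5/24',
    'Gateway': None,
    'DNS': ['10.0.0.1', '10.0.0.2'],
})
```

This is what lets `'DNS': info.get('dns-nameservers', [])` in `_render_network` pass a plain list instead of the old pre-stringified tuple.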
| 1119 | diff --git a/cloudinit/net/__init__.py b/cloudinit/net/__init__.py |
| 1120 | index d1740e5..a1b0db1 100644 |
| 1121 | --- a/cloudinit/net/__init__.py |
| 1122 | +++ b/cloudinit/net/__init__.py |
| 1123 | @@ -10,6 +10,7 @@ import logging |
| 1124 | import os |
| 1125 | import re |
| 1126 | |
| 1127 | +from cloudinit.net.network_state import mask_to_net_prefix |
| 1128 | from cloudinit import util |
| 1129 | |
| 1130 | LOG = logging.getLogger(__name__) |
| 1131 | @@ -28,8 +29,13 @@ def _natural_sort_key(s, _nsre=re.compile('([0-9]+)')): |
| 1132 | for text in re.split(_nsre, s)] |
| 1133 | |
| 1134 | |
| 1135 | +def get_sys_class_path(): |
| 1136 | + """Simple function to return the global SYS_CLASS_NET.""" |
| 1137 | + return SYS_CLASS_NET |
| 1138 | + |
| 1139 | + |
| 1140 | def sys_dev_path(devname, path=""): |
| 1141 | - return SYS_CLASS_NET + devname + "/" + path |
| 1142 | + return get_sys_class_path() + devname + "/" + path |
| 1143 | |
| 1144 | |
| 1145 | def read_sys_net(devname, path, translate=None, |
| 1146 | @@ -77,7 +83,7 @@ def read_sys_net_int(iface, field): |
| 1147 | return None |
| 1148 | try: |
| 1149 | return int(val) |
| 1150 | - except TypeError: |
| 1151 | + except ValueError: |
| 1152 | return None |
| 1153 | |
| 1154 | |
| 1155 | @@ -149,7 +155,14 @@ def device_devid(devname): |
| 1156 | |
| 1157 | |
| 1158 | def get_devicelist(): |
| 1159 | - return os.listdir(SYS_CLASS_NET) |
| 1160 | + try: |
| 1161 | + devs = os.listdir(get_sys_class_path()) |
| 1162 | + except OSError as e: |
| 1163 | + if e.errno == errno.ENOENT: |
| 1164 | + devs = [] |
| 1165 | + else: |
| 1166 | + raise |
| 1167 | + return devs |
| 1168 | |
| 1169 | |
| 1170 | class ParserError(Exception): |
| 1171 | @@ -162,13 +175,8 @@ def is_disabled_cfg(cfg): |
| 1172 | return cfg.get('config') == "disabled" |
| 1173 | |
| 1174 | |
| 1175 | -def generate_fallback_config(blacklist_drivers=None, config_driver=None): |
| 1176 | - """Determine which attached net dev is most likely to have a connection and |
| 1177 | - generate network state to run dhcp on that interface""" |
| 1178 | - |
| 1179 | - if not config_driver: |
| 1180 | - config_driver = False |
| 1181 | - |
| 1182 | +def find_fallback_nic(blacklist_drivers=None): |
| 1183 | + """Return the name of the 'fallback' network device.""" |
| 1184 | if not blacklist_drivers: |
| 1185 | blacklist_drivers = [] |
| 1186 | |
| 1187 | @@ -220,15 +228,24 @@ def generate_fallback_config(blacklist_drivers=None, config_driver=None): |
| 1188 | if DEFAULT_PRIMARY_INTERFACE in names: |
| 1189 | names.remove(DEFAULT_PRIMARY_INTERFACE) |
| 1190 | names.insert(0, DEFAULT_PRIMARY_INTERFACE) |
| 1191 | - target_name = None |
| 1192 | - target_mac = None |
| 1193 | + |
| 1194 | + # pick the first that has a mac-address |
| 1195 | for name in names: |
| 1196 | - mac = read_sys_net_safe(name, 'address') |
| 1197 | - if mac: |
| 1198 | - target_name = name |
| 1199 | - target_mac = mac |
| 1200 | - break |
| 1201 | - if target_mac and target_name: |
| 1202 | + if read_sys_net_safe(name, 'address'): |
| 1203 | + return name |
| 1204 | + return None |
| 1205 | + |
| 1206 | + |
| 1207 | +def generate_fallback_config(blacklist_drivers=None, config_driver=None): |
| 1208 | + """Determine which attached net dev is most likely to have a connection and |
| 1209 | + generate network state to run dhcp on that interface""" |
| 1210 | + |
| 1211 | + if not config_driver: |
| 1212 | + config_driver = False |
| 1213 | + |
| 1214 | + target_name = find_fallback_nic(blacklist_drivers=blacklist_drivers) |
| 1215 | + if target_name: |
| 1216 | + target_mac = read_sys_net_safe(target_name, 'address') |
| 1217 | nconf = {'config': [], 'version': 1} |
| 1218 | cfg = {'type': 'physical', 'name': target_name, |
| 1219 | 'mac_address': target_mac, 'subnets': [{'type': 'dhcp'}]} |
| 1220 | @@ -497,28 +514,8 @@ def get_interfaces_by_mac(): |
| 1221 | """Build a dictionary of tuples {mac: name}. |
| 1222 | |
| 1223 | Bridges and any devices that have a 'stolen' mac are excluded.""" |
| 1224 | - try: |
| 1225 | - devs = get_devicelist() |
| 1226 | - except OSError as e: |
| 1227 | - if e.errno == errno.ENOENT: |
| 1228 | - devs = [] |
| 1229 | - else: |
| 1230 | - raise |
| 1231 | ret = {} |
| 1232 | - empty_mac = '00:00:00:00:00:00' |
| 1233 | - for name in devs: |
| 1234 | - if not interface_has_own_mac(name): |
| 1235 | - continue |
| 1236 | - if is_bridge(name): |
| 1237 | - continue |
| 1238 | - if is_vlan(name): |
| 1239 | - continue |
| 1240 | - mac = get_interface_mac(name) |
| 1241 | - # some devices may not have a mac (tun0) |
| 1242 | - if not mac: |
| 1243 | - continue |
| 1244 | - if mac == empty_mac and name != 'lo': |
| 1245 | - continue |
| 1246 | + for name, mac, _driver, _devid in get_interfaces(): |
| 1247 | if mac in ret: |
| 1248 | raise RuntimeError( |
| 1249 | "duplicate mac found! both '%s' and '%s' have mac '%s'" % |
| 1250 | @@ -531,14 +528,8 @@ def get_interfaces(): |
| 1251 | """Return list of interface tuples (name, mac, driver, device_id) |
| 1252 | |
| 1253 | Bridges and any devices that have a 'stolen' mac are excluded.""" |
| 1254 | - try: |
| 1255 | - devs = get_devicelist() |
| 1256 | - except OSError as e: |
| 1257 | - if e.errno == errno.ENOENT: |
| 1258 | - devs = [] |
| 1259 | - else: |
| 1260 | - raise |
| 1261 | ret = [] |
| 1262 | + devs = get_devicelist() |
| 1263 | empty_mac = '00:00:00:00:00:00' |
| 1264 | for name in devs: |
| 1265 | if not interface_has_own_mac(name): |
| 1266 | @@ -557,6 +548,103 @@ def get_interfaces(): |
| 1267 | return ret |
| 1268 | |
| 1269 | |
| 1270 | +class EphemeralIPv4Network(object): |
| 1271 | + """Context manager which sets up temporary static network configuration. |
| 1272 | + |
| 1273 | + No operations are performed if the provided interface is already connected. |
| 1274 | + If unconnected, bring up the interface with valid ip, prefix and broadcast. |
| 1275 | + If router is provided setup a default route for that interface. Upon |
| 1276 | + context exit, clean up the interface leaving no configuration behind. |
| 1277 | + """ |
| 1278 | + |
| 1279 | + def __init__(self, interface, ip, prefix_or_mask, broadcast, router=None): |
| 1280 | + """Setup context manager and validate call signature. |
| 1281 | + |
| 1282 | + @param interface: Name of the network interface to bring up. |
| 1283 | + @param ip: IP address to assign to the interface. |
| 1284 | + @param prefix_or_mask: Either netmask of the format X.X.X.X or an int |
| 1285 | + prefix. |
| 1286 | + @param broadcast: Broadcast address for the IPv4 network. |
| 1287 | + @param router: Optionally the default gateway IP. |
| 1288 | + """ |
| 1289 | + if not all([interface, ip, prefix_or_mask, broadcast]): |
| 1290 | + raise ValueError( |
| 1291 | + 'Cannot init network on {0} with {1}/{2} and bcast {3}'.format( |
| 1292 | + interface, ip, prefix_or_mask, broadcast)) |
| 1293 | + try: |
| 1294 | + self.prefix = mask_to_net_prefix(prefix_or_mask) |
| 1295 | + except ValueError as e: |
| 1296 | + raise ValueError( |
| 1297 | + 'Cannot setup network: {0}'.format(e)) |
| 1298 | + self.interface = interface |
| 1299 | + self.ip = ip |
| 1300 | + self.broadcast = broadcast |
| 1301 | + self.router = router |
| 1302 | + self.cleanup_cmds = [] # List of commands to run to cleanup state. |
| 1303 | + |
| 1304 | + def __enter__(self): |
| 1305 | + """Perform ephemeral network setup if interface is not connected.""" |
| 1306 | + self._bringup_device() |
| 1307 | + if self.router: |
| 1308 | + self._bringup_router() |
| 1309 | + |
| 1310 | + def __exit__(self, excp_type, excp_value, excp_traceback): |
| 1311 | + """Teardown anything we set up.""" |
| 1312 | + for cmd in self.cleanup_cmds: |
| 1313 | + util.subp(cmd, capture=True) |
| 1314 | + |
| 1315 | + def _delete_address(self, address, prefix): |
| 1316 | + """Perform the ip command to remove the specified address.""" |
| 1317 | + util.subp( |
| 1318 | + ['ip', '-family', 'inet', 'addr', 'del', |
| 1319 | + '%s/%s' % (address, prefix), 'dev', self.interface], |
| 1320 | + capture=True) |
| 1321 | + |
| 1322 | + def _bringup_device(self): |
| 1323 | + """Perform the ip commands to fully set up the device.""" |
| 1324 | + cidr = '{0}/{1}'.format(self.ip, self.prefix) |
| 1325 | + LOG.debug( |
| 1326 | + 'Attempting setup of ephemeral network on %s with %s brd %s', |
| 1327 | + self.interface, cidr, self.broadcast) |
| 1328 | + try: |
| 1329 | + util.subp( |
| 1330 | + ['ip', '-family', 'inet', 'addr', 'add', cidr, 'broadcast', |
| 1331 | + self.broadcast, 'dev', self.interface], |
| 1332 | + capture=True, update_env={'LANG': 'C'}) |
| 1333 | + except util.ProcessExecutionError as e: |
| 1334 | + if "File exists" not in e.stderr: |
| 1335 | + raise |
| 1336 | + LOG.debug( |
| 1337 | + 'Skip ephemeral network setup, %s already has address %s', |
| 1338 | + self.interface, self.ip) |
| 1339 | + else: |
| 1340 | + # Address creation success, bring up device and queue cleanup |
| 1341 | + util.subp( |
| 1342 | + ['ip', '-family', 'inet', 'link', 'set', 'dev', self.interface, |
| 1343 | + 'up'], capture=True) |
| 1344 | + self.cleanup_cmds.append( |
| 1345 | + ['ip', '-family', 'inet', 'link', 'set', 'dev', self.interface, |
| 1346 | + 'down']) |
| 1347 | + self.cleanup_cmds.append( |
| 1348 | + ['ip', '-family', 'inet', 'addr', 'del', cidr, 'dev', |
| 1349 | + self.interface]) |
| 1350 | + |
| 1351 | + def _bringup_router(self): |
| 1352 | + """Perform the ip commands to fully set up the router if needed.""" |
| 1353 | + # Check if a default route exists and exit if it does |
| 1354 | + out, _ = util.subp(['ip', 'route', 'show', '0.0.0.0/0'], capture=True) |
| 1355 | + if 'default' in out: |
| 1356 | + LOG.debug( |
| 1357 | + 'Skip ephemeral route setup. %s already has default route: %s', |
| 1358 | + self.interface, out.strip()) |
| 1359 | + return |
| 1360 | + util.subp( |
| 1361 | + ['ip', '-4', 'route', 'add', 'default', 'via', self.router, |
| 1362 | + 'dev', self.interface], capture=True) |
| 1363 | + self.cleanup_cmds.insert( |
| 1364 | + 0, ['ip', '-4', 'route', 'del', 'default', 'dev', self.interface]) |
| 1365 | + |
| 1366 | + |
| 1367 | class RendererNotFoundError(RuntimeError): |
| 1368 | pass |
| 1369 | |
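`EphemeralIPv4Network` above uses a "cleanup stack" pattern: each successful setup step appends its undo command to `cleanup_cmds` (the route teardown is prepended so it runs before the address teardown), and `__exit__` replays the list. A toy version of that pattern with a pluggable runner standing in for `util.subp` (all names here are illustrative, not cloud-init's):

```python
class EphemeralSetup(object):
    """Toy cleanup-stack context manager mirroring EphemeralIPv4Network."""

    def __init__(self, runner):
        self.runner = runner  # stand-in for util.subp
        self.cleanup_cmds = []  # list of commands to run on exit

    def __enter__(self):
        self.runner(['addr', 'add'])
        self.cleanup_cmds.append(['addr', 'del'])
        self.runner(['route', 'add'])
        # Prepend so the route is removed before the address, as the
        # diff does with cleanup_cmds.insert(0, ...).
        self.cleanup_cmds.insert(0, ['route', 'del'])
        return self

    def __exit__(self, exc_type, exc_value, exc_tb):
        for cmd in self.cleanup_cmds:
            self.runner(cmd)

calls = []
with EphemeralSetup(calls.append):
    pass  # metadata scraping would happen here
```

Because steps only queue their own undo command after succeeding, a partially completed setup tears down exactly what it created and nothing more.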
| 1370 | diff --git a/cloudinit/net/dhcp.py b/cloudinit/net/dhcp.py |
| 1371 | new file mode 100644 |
| 1372 | index 0000000..c7febc5 |
| 1373 | --- /dev/null |
| 1374 | +++ b/cloudinit/net/dhcp.py |
| 1375 | @@ -0,0 +1,119 @@ |
| 1376 | +# Copyright (C) 2017 Canonical Ltd. |
| 1377 | +# |
| 1378 | +# Author: Chad Smith <chad.smith@canonical.com> |
| 1379 | +# |
| 1380 | +# This file is part of cloud-init. See LICENSE file for license information. |
| 1381 | + |
| 1382 | +import logging |
| 1383 | +import os |
| 1384 | +import re |
| 1385 | + |
| 1386 | +from cloudinit.net import find_fallback_nic, get_devicelist |
| 1387 | +from cloudinit import util |
| 1388 | + |
| 1389 | +LOG = logging.getLogger(__name__) |
| 1390 | + |
| 1391 | + |
| 1392 | +class InvalidDHCPLeaseFileError(Exception): |
| 1393 | + """Raised when parsing an empty or invalid dhcp.leases file. |
| 1394 | + |
| 1395 | + Current uses are DataSourceAzure and DataSourceEc2 during ephemeral |
| 1396 | + boot to scrape metadata. |
| 1397 | + """ |
| 1398 | + pass |
| 1399 | + |
| 1400 | + |
| 1401 | +def maybe_perform_dhcp_discovery(nic=None): |
| 1402 | + """Perform dhcp discovery if the nic is valid and dhclient exists. |
| 1403 | + |
| 1404 | + If the nic is invalid or undiscoverable or dhclient command is not found, |
| 1405 | + skip dhcp_discovery and return an empty dict. |
| 1406 | + |
| 1407 | + @param nic: Name of the network interface we want to run dhclient on. |
| 1408 | + @return: A dict of dhcp options from the dhclient discovery if run, |
| 1409 | + otherwise an empty dict is returned. |
| 1410 | + """ |
| 1411 | + if nic is None: |
| 1412 | + nic = find_fallback_nic() |
| 1413 | + if nic is None: |
| 1414 | + LOG.debug( |
| 1415 | + 'Skip dhcp_discovery: Unable to find fallback nic.') |
| 1416 | + return {} |
| 1417 | + elif nic not in get_devicelist(): |
| 1418 | + LOG.debug( |
| 1419 | + 'Skip dhcp_discovery: nic %s not found in get_devicelist.', nic) |
| 1420 | + return {} |
| 1421 | + dhclient_path = util.which('dhclient') |
| 1422 | + if not dhclient_path: |
| 1423 | + LOG.debug('Skip dhclient configuration: No dhclient command found.') |
| 1424 | + return {} |
| 1425 | + with util.tempdir(prefix='cloud-init-dhcp-') as tmpdir: |
| 1426 | + return dhcp_discovery(dhclient_path, nic, tmpdir) |
| 1427 | + |
| 1428 | + |
| 1429 | +def parse_dhcp_lease_file(lease_file): |
| 1430 | + """Parse the given dhcp lease file for the most recent lease. |
| 1431 | + |
| 1432 | + Return a dict of dhcp options as key value pairs for the most recent lease |
| 1433 | + block. |
| 1434 | + |
| 1435 | + @raises: InvalidDHCPLeaseFileError on empty or unparseable leasefile |
| 1436 | + content. |
| 1437 | + """ |
| 1438 | + lease_regex = re.compile(r"lease {(?P<lease>[^}]*)}\n") |
| 1439 | + dhcp_leases = [] |
| 1440 | + lease_content = util.load_file(lease_file) |
| 1441 | + if len(lease_content) == 0: |
| 1442 | + raise InvalidDHCPLeaseFileError( |
| 1443 | + 'Cannot parse empty dhcp lease file {0}'.format(lease_file)) |
| 1444 | + for lease in lease_regex.findall(lease_content): |
| 1445 | + lease_options = [] |
| 1446 | + for line in lease.split(';'): |
| 1447 | + # Strip newlines, double-quotes and option prefix |
| 1448 | + line = line.strip().replace('"', '').replace('option ', '') |
| 1449 | + if not line: |
| 1450 | + continue |
| 1451 | + lease_options.append(line.split(' ', 1)) |
| 1452 | + dhcp_leases.append(dict(lease_options)) |
| 1453 | + if not dhcp_leases: |
| 1454 | + raise InvalidDHCPLeaseFileError( |
| 1455 | + 'Cannot parse dhcp lease file {0}. No leases found'.format( |
| 1456 | + lease_file)) |
| 1457 | + return dhcp_leases |
| 1458 | + |
| 1459 | + |
| 1460 | +def dhcp_discovery(dhclient_cmd_path, interface, cleandir): |
| 1461 | + """Run dhclient on the interface without scripts or filesystem artifacts. |
| 1462 | + |
| 1463 | + @param dhclient_cmd_path: Full path to the dhclient used. |
| 1464 | + @param interface: Name of the network interface on which to dhclient. |
| 1465 | + @param cleandir: The directory from which to run dhclient as well as store |
| 1466 | + dhcp leases. |
| 1467 | + |
| 1468 | + @return: A dict of dhcp options parsed from the dhcp.leases file or empty |
| 1469 | + dict. |
| 1470 | + """ |
| 1471 | + LOG.debug('Performing a dhcp discovery on %s', interface) |
| 1472 | + |
| 1473 | + # XXX We copy dhclient out of /sbin/dhclient to avoid dealing with strict |
| 1474 | + # AppArmor profiles which disallow running dhclient -sf <our-script-file>. |
| 1475 | + # We want to avoid running /sbin/dhclient-script because of side-effects in |
| 1476 | + # /etc/resolv.conf and any other vendor-specific scripts in |
| 1477 | + # /etc/dhcp/dhclient*hooks.d. |
| 1478 | + sandbox_dhclient_cmd = os.path.join(cleandir, 'dhclient') |
| 1479 | + util.copy(dhclient_cmd_path, sandbox_dhclient_cmd) |
| 1480 | + pid_file = os.path.join(cleandir, 'dhclient.pid') |
| 1481 | + lease_file = os.path.join(cleandir, 'dhcp.leases') |
| 1482 | + |
| 1483 | + # ISC dhclient needs the interface up to send initial discovery packets. |
| 1484 | + # Generally dhclient relies on dhclient-script PREINIT action to bring the |
| 1485 | + # link up before attempting discovery. Since we are using -sf /bin/true, |
| 1486 | + # we need to do that "link up" ourselves first. |
| 1487 | + util.subp(['ip', 'link', 'set', 'dev', interface, 'up'], capture=True) |
| 1488 | + cmd = [sandbox_dhclient_cmd, '-1', '-v', '-lf', lease_file, |
| 1489 | + '-pf', pid_file, interface, '-sf', '/bin/true'] |
| 1490 | + util.subp(cmd, capture=True) |
| 1491 | + return parse_dhcp_lease_file(lease_file) |
| 1492 | + |
| 1493 | + |
| 1494 | +# vi: ts=4 expandtab |
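The lease parsing in `parse_dhcp_lease_file` splits each `lease { ... }` block on semicolons and strips quotes and the `option ` prefix, yielding one dict per lease. Run standalone against a sample dhclient leases file (the content below is illustrative), the core loop behaves like this (error handling for empty or unparseable files omitted):

```python
import re

SAMPLE_LEASES = '''\
lease {
  interface "eth0";
  fixed-address 192.168.2.74;
  option subnet-mask 255.255.255.0;
  option routers 192.168.2.1;
}
'''

def parse_leases(content):
    """Standalone restatement of parse_dhcp_lease_file's parsing loop."""
    lease_regex = re.compile(r"lease {(?P<lease>[^}]*)}\n")
    leases = []
    for lease in lease_regex.findall(content):
        options = []
        for line in lease.split(';'):
            # Strip newlines, double-quotes and the 'option ' prefix
            line = line.strip().replace('"', '').replace('option ', '')
            if not line:
                continue
            options.append(line.split(' ', 1))
        leases.append(dict(options))
    return leases
```

Callers such as `maybe_perform_dhcp_discovery` then read the most recent lease as the last dict in the returned list.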
| 1495 | diff --git a/cloudinit/net/netplan.py b/cloudinit/net/netplan.py |
| 1496 | index 9f35b72..3b06fbf 100644 |
| 1497 | --- a/cloudinit/net/netplan.py |
| 1498 | +++ b/cloudinit/net/netplan.py |
| 1499 | @@ -4,7 +4,7 @@ import copy |
| 1500 | import os |
| 1501 | |
| 1502 | from . import renderer |
| 1503 | -from .network_state import subnet_is_ipv6 |
| 1504 | +from .network_state import subnet_is_ipv6, NET_CONFIG_TO_V2 |
| 1505 | |
| 1506 | from cloudinit import log as logging |
| 1507 | from cloudinit import util |
| 1508 | @@ -27,31 +27,6 @@ network: |
| 1509 | """ |
| 1510 | |
| 1511 | LOG = logging.getLogger(__name__) |
| 1512 | -NET_CONFIG_TO_V2 = { |
| 1513 | - 'bond': {'bond-ad-select': 'ad-select', |
| 1514 | - 'bond-arp-interval': 'arp-interval', |
| 1515 | - 'bond-arp-ip-target': 'arp-ip-target', |
| 1516 | - 'bond-arp-validate': 'arp-validate', |
| 1517 | - 'bond-downdelay': 'down-delay', |
| 1518 | - 'bond-fail-over-mac': 'fail-over-mac-policy', |
| 1519 | - 'bond-lacp-rate': 'lacp-rate', |
| 1520 | - 'bond-miimon': 'mii-monitor-interval', |
| 1521 | - 'bond-min-links': 'min-links', |
| 1522 | - 'bond-mode': 'mode', |
| 1523 | - 'bond-num-grat-arp': 'gratuitious-arp', |
| 1524 | - 'bond-primary-reselect': 'primary-reselect-policy', |
| 1525 | - 'bond-updelay': 'up-delay', |
| 1526 | - 'bond-xmit-hash-policy': 'transmit-hash-policy'}, |
| 1527 | - 'bridge': {'bridge_ageing': 'ageing-time', |
| 1528 | - 'bridge_bridgeprio': 'priority', |
| 1529 | - 'bridge_fd': 'forward-delay', |
| 1530 | - 'bridge_gcint': None, |
| 1531 | - 'bridge_hello': 'hello-time', |
| 1532 | - 'bridge_maxage': 'max-age', |
| 1533 | - 'bridge_maxwait': None, |
| 1534 | - 'bridge_pathcost': 'path-cost', |
| 1535 | - 'bridge_portprio': None, |
| 1536 | - 'bridge_waitport': None}} |
| 1537 | |
| 1538 | |
| 1539 | def _get_params_dict_by_match(config, match): |
| 1540 | @@ -247,6 +222,14 @@ class Renderer(renderer.Renderer): |
| 1541 | util.subp(cmd, capture=True) |
| 1542 | |
| 1543 | def _render_content(self, network_state): |
| 1544 | + |
| 1545 | + # if content already in netplan format, pass it back |
| 1546 | + if network_state.version == 2: |
| 1547 | + LOG.debug('V2 to V2 passthrough') |
| 1548 | + return util.yaml_dumps({'network': network_state.config}, |
| 1549 | + explicit_start=False, |
| 1550 | + explicit_end=False) |
| 1551 | + |
| 1552 | ethernets = {} |
| 1553 | wifis = {} |
| 1554 | bridges = {} |
| 1555 | diff --git a/cloudinit/net/network_state.py b/cloudinit/net/network_state.py |
| 1556 | index 87a7222..6faf01b 100644 |
| 1557 | --- a/cloudinit/net/network_state.py |
| 1558 | +++ b/cloudinit/net/network_state.py |
| 1559 | @@ -23,6 +23,33 @@ NETWORK_V2_KEY_FILTER = [ |
| 1560 | 'match', 'mtu', 'nameservers', 'renderer', 'set-name', 'wakeonlan' |
| 1561 | ] |
| 1562 | |
| 1563 | +NET_CONFIG_TO_V2 = { |
| 1564 | + 'bond': {'bond-ad-select': 'ad-select', |
| 1565 | + 'bond-arp-interval': 'arp-interval', |
| 1566 | + 'bond-arp-ip-target': 'arp-ip-target', |
| 1567 | + 'bond-arp-validate': 'arp-validate', |
| 1568 | + 'bond-downdelay': 'down-delay', |
| 1569 | + 'bond-fail-over-mac': 'fail-over-mac-policy', |
| 1570 | + 'bond-lacp-rate': 'lacp-rate', |
| 1571 | + 'bond-miimon': 'mii-monitor-interval', |
| 1572 | + 'bond-min-links': 'min-links', |
| 1573 | + 'bond-mode': 'mode', |
| 1574 | + 'bond-num-grat-arp': 'gratuitious-arp', |
| 1575 | + 'bond-primary': 'primary', |
| 1576 | + 'bond-primary-reselect': 'primary-reselect-policy', |
| 1577 | + 'bond-updelay': 'up-delay', |
| 1578 | + 'bond-xmit-hash-policy': 'transmit-hash-policy'}, |
| 1579 | + 'bridge': {'bridge_ageing': 'ageing-time', |
| 1580 | + 'bridge_bridgeprio': 'priority', |
| 1581 | + 'bridge_fd': 'forward-delay', |
| 1582 | + 'bridge_gcint': None, |
| 1583 | + 'bridge_hello': 'hello-time', |
| 1584 | + 'bridge_maxage': 'max-age', |
| 1585 | + 'bridge_maxwait': None, |
| 1586 | + 'bridge_pathcost': 'path-cost', |
| 1587 | + 'bridge_portprio': None, |
| 1588 | + 'bridge_waitport': None}} |
| 1589 | + |
| 1590 | |
| 1591 | def parse_net_config_data(net_config, skip_broken=True): |
| 1592 | """Parses the config, returns NetworkState object |
| 1593 | @@ -120,6 +147,10 @@ class NetworkState(object): |
| 1594 | self.use_ipv6 = network_state.get('use_ipv6', False) |
| 1595 | |
| 1596 | @property |
| 1597 | + def config(self): |
| 1598 | + return self._network_state['config'] |
| 1599 | + |
| 1600 | + @property |
| 1601 | def version(self): |
| 1602 | return self._version |
| 1603 | |
| 1604 | @@ -166,12 +197,14 @@ class NetworkStateInterpreter(object): |
| 1605 | 'search': [], |
| 1606 | }, |
| 1607 | 'use_ipv6': False, |
| 1608 | + 'config': None, |
| 1609 | } |
| 1610 | |
| 1611 | def __init__(self, version=NETWORK_STATE_VERSION, config=None): |
| 1612 | self._version = version |
| 1613 | self._config = config |
| 1614 | self._network_state = copy.deepcopy(self.initial_network_state) |
| 1615 | + self._network_state['config'] = config |
| 1616 | self._parsed = False |
| 1617 | |
| 1618 | @property |
| 1619 | @@ -460,12 +493,15 @@ class NetworkStateInterpreter(object): |
| 1620 | v2_command = { |
| 1621 | bond0: { |
| 1622 | 'interfaces': ['interface0', 'interface1'], |
| 1623 | - 'miimon': 100, |
| 1624 | - 'mode': '802.3ad', |
| 1625 | - 'xmit_hash_policy': 'layer3+4'}, |
| 1626 | + 'parameters': { |
| 1627 | + 'mii-monitor-interval': 100, |
| 1628 | + 'mode': '802.3ad', |
| 1629 | + 'xmit_hash_policy': 'layer3+4'}}, |
| 1630 | bond1: { |
| 1631 | 'bond-slaves': ['interface2', 'interface7'], |
| 1632 | - 'mode': 1 |
| 1633 | + 'parameters': { |
| 1634 | + 'mode': 1, |
| 1635 | + } |
| 1636 | } |
| 1637 | } |
| 1638 | |
| 1639 | @@ -554,6 +590,7 @@ class NetworkStateInterpreter(object): |
| 1640 | if not mac_address: |
| 1641 | LOG.debug('NetworkState Version2: missing "macaddress" info ' |
| 1642 | 'in config entry: %s: %s', eth, str(cfg)) |
| 1643 | + phy_cmd.update({'mac_address': mac_address}) |
| 1644 | |
| 1645 | for key in ['mtu', 'match', 'wakeonlan']: |
| 1646 | if key in cfg: |
| 1647 | @@ -598,8 +635,8 @@ class NetworkStateInterpreter(object): |
| 1648 | self.handle_vlan(vlan_cmd) |
| 1649 | |
| 1650 | def handle_wifis(self, command): |
| 1651 | - raise NotImplementedError("NetworkState V2: " |
| 1652 | - "Skipping wifi configuration") |
| 1653 | + LOG.warning('Wifi configuration is only available to distros with ' |
| 1654 | + 'netplan rendering support.') |
| 1655 | |
| 1656 | def _v2_common(self, cfg): |
| 1657 | LOG.debug('v2_common: handling config:\n%s', cfg) |
| 1658 | @@ -616,6 +653,11 @@ class NetworkStateInterpreter(object): |
| 1659 | |
| 1660 | def _handle_bond_bridge(self, command, cmd_type=None): |
| 1661 | """Common handler for bond and bridge types""" |
| 1662 | + |
| 1663 | + # inverse mapping for v2 keynames to v1 keynames |
| 1664 | + v2key_to_v1 = dict((v, k) for k, v in |
| 1665 | + NET_CONFIG_TO_V2.get(cmd_type).items()) |
| 1666 | + |
| 1667 | for item_name, item_cfg in command.items(): |
| 1668 | item_params = dict((key, value) for (key, value) in |
| 1669 | item_cfg.items() if key not in |
| 1670 | @@ -624,14 +666,20 @@ class NetworkStateInterpreter(object): |
| 1671 | 'type': cmd_type, |
| 1672 | 'name': item_name, |
| 1673 | cmd_type + '_interfaces': item_cfg.get('interfaces'), |
| 1674 | - 'params': item_params, |
| 1675 | + 'params': dict((v2key_to_v1[k], v) for k, v in |
| 1676 | + item_params.get('parameters', {}).items()) |
| 1677 | } |
| 1678 | subnets = self._v2_to_v1_ipcfg(item_cfg) |
| 1679 | if len(subnets) > 0: |
| 1680 | v1_cmd.update({'subnets': subnets}) |
| 1681 | |
| 1682 | - LOG.debug('v2(%ss) -> v1(%s):\n%s', cmd_type, cmd_type, v1_cmd) |
| 1683 | - self.handle_bridge(v1_cmd) |
| 1684 | + LOG.debug('v2(%s) -> v1(%s):\n%s', cmd_type, cmd_type, v1_cmd) |
| 1685 | + if cmd_type == "bridge": |
| 1686 | + self.handle_bridge(v1_cmd) |
| 1687 | + elif cmd_type == "bond": |
| 1688 | + self.handle_bond(v1_cmd) |
| 1689 | + else: |
| 1690 | + raise ValueError('Unknown command type: %s' % cmd_type) |
| 1691 | |
| 1692 | def _v2_to_v1_ipcfg(self, cfg): |
| 1693 | """Common ipconfig extraction from v2 to v1 subnets array.""" |
| 1694 | @@ -651,12 +699,6 @@ class NetworkStateInterpreter(object): |
| 1695 | 'address': address, |
| 1696 | } |
| 1697 | |
| 1698 | - routes = [] |
| 1699 | - for route in cfg.get('routes', []): |
| 1700 | - routes.append(_normalize_route( |
| 1701 | - {'address': route.get('to'), 'gateway': route.get('via')})) |
| 1702 | - subnet['routes'] = routes |
| 1703 | - |
| 1704 | if ":" in address: |
| 1705 | if 'gateway6' in cfg and gateway6 is None: |
| 1706 | gateway6 = cfg.get('gateway6') |
| 1707 | @@ -667,6 +709,17 @@ class NetworkStateInterpreter(object): |
| 1708 | subnet.update({'gateway': gateway4}) |
| 1709 | |
| 1710 | subnets.append(subnet) |
| 1711 | + |
| 1712 | + routes = [] |
| 1713 | + for route in cfg.get('routes', []): |
| 1714 | + routes.append(_normalize_route( |
| 1715 | + {'destination': route.get('to'), 'gateway': route.get('via')})) |
| 1716 | + |
| 1717 | + # v2 routes are bound to the interface, in v1 we add them under |
| 1718 | + # the first subnet since there isn't an equivalent interface level. |
| 1719 | + if len(subnets) and len(routes): |
| 1720 | + subnets[0]['routes'] = routes |
| 1721 | + |
| 1722 | return subnets |
| 1723 | |
| 1724 | |
| 1725 | @@ -721,7 +774,7 @@ def _normalize_net_keys(network, address_keys=()): |
| 1726 | elif netmask: |
| 1727 | prefix = mask_to_net_prefix(netmask) |
| 1728 | elif 'prefix' in net: |
| 1729 | - prefix = int(prefix) |
| 1730 | + prefix = int(net['prefix']) |
| 1731 | else: |
| 1732 | prefix = 64 if ipv6 else 24 |
| 1733 | |
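The `_handle_bond_bridge` change above translates netplan v2 parameter names back to their v1 equivalents by inverting the `NET_CONFIG_TO_V2` map. A sketch of that translation, using a small subset of the bond map for illustration:

```python
# Subset of the NET_CONFIG_TO_V2 bond map from the diff above,
# reproduced here so the sketch is self-contained.
NET_CONFIG_TO_V2 = {
    'bond': {'bond-miimon': 'mii-monitor-interval',
             'bond-mode': 'mode',
             'bond-xmit-hash-policy': 'transmit-hash-policy'},
}


def v2_params_to_v1(cmd_type, parameters):
    """Rename each v2 'parameters' key back to its v1 name,
    mirroring the inverse-mapping step in _handle_bond_bridge."""
    # inverse mapping for v2 keynames to v1 keynames
    v2key_to_v1 = dict(
        (v, k) for k, v in NET_CONFIG_TO_V2[cmd_type].items())
    return dict((v2key_to_v1[k], v) for k, v in parameters.items())


params = v2_params_to_v1(
    'bond', {'mii-monitor-interval': 100, 'mode': '802.3ad'})
print(params)  # v2 names mapped back to bond-miimon / bond-mode
```

Keys whose v2 value is `None` in the full map (e.g. `bridge_gcint`) have no netplan equivalent, so they never appear in v2 input and the inversion stays well-defined.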
| 1734 | diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py |
| 1735 | index a550f97..f572796 100644 |
| 1736 | --- a/cloudinit/net/sysconfig.py |
| 1737 | +++ b/cloudinit/net/sysconfig.py |
| 1738 | @@ -484,7 +484,11 @@ class Renderer(renderer.Renderer): |
| 1739 | content.add_nameserver(nameserver) |
| 1740 | for searchdomain in network_state.dns_searchdomains: |
| 1741 | content.add_search_domain(searchdomain) |
| 1742 | - return "\n".join([_make_header(';'), str(content)]) |
| 1743 | + header = _make_header(';') |
| 1744 | + content_str = str(content) |
| 1745 | + if not content_str.startswith(header): |
| 1746 | + content_str = header + '\n' + content_str |
| 1747 | + return content_str |
| 1748 | |
| 1749 | @staticmethod |
| 1750 | def _render_networkmanager_conf(network_state): |
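The sysconfig hunk above replaces an unconditional header prepend with a startswith check, so rendering the same content twice no longer stacks headers. A sketch of that idempotency pattern (the header string here is illustrative, not the exact output of `_make_header(';')`):

```python
# Illustrative header text; _make_header(';') emits cloud-init's real one.
HEADER = '; Created by cloud-init'


def prepend_header_once(content_str, header=HEADER):
    """Prepend the header only when the content doesn't already start
    with it, mirroring the check added to _render_dns above."""
    if not content_str.startswith(header):
        content_str = header + '\n' + content_str
    return content_str


once = prepend_header_once('nameserver 8.8.8.8\n')
twice = prepend_header_once(once)
assert once == twice  # safe to apply repeatedly
```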
| 1751 | diff --git a/cloudinit/net/tests/__init__.py b/cloudinit/net/tests/__init__.py |
| 1752 | new file mode 100644 |
| 1753 | index 0000000..e69de29 |
| 1754 | --- /dev/null |
| 1755 | +++ b/cloudinit/net/tests/__init__.py |
| 1756 | diff --git a/cloudinit/net/tests/test_dhcp.py b/cloudinit/net/tests/test_dhcp.py |
| 1757 | new file mode 100644 |
| 1758 | index 0000000..47d8d46 |
| 1759 | --- /dev/null |
| 1760 | +++ b/cloudinit/net/tests/test_dhcp.py |
| 1761 | @@ -0,0 +1,144 @@ |
| 1762 | +# This file is part of cloud-init. See LICENSE file for license information. |
| 1763 | + |
| 1764 | +import mock |
| 1765 | +import os |
| 1766 | +from textwrap import dedent |
| 1767 | + |
| 1768 | +from cloudinit.net.dhcp import ( |
| 1769 | + InvalidDHCPLeaseFileError, maybe_perform_dhcp_discovery, |
| 1770 | + parse_dhcp_lease_file, dhcp_discovery) |
| 1771 | +from cloudinit.util import ensure_file, write_file |
| 1772 | +from tests.unittests.helpers import CiTestCase |
| 1773 | + |
| 1774 | + |
| 1775 | +class TestParseDHCPLeasesFile(CiTestCase): |
| 1776 | + |
| 1777 | + def test_parse_empty_lease_file_errors(self): |
| 1778 | + """parse_dhcp_lease_file errors when file content is empty.""" |
| 1779 | + empty_file = self.tmp_path('leases') |
| 1780 | + ensure_file(empty_file) |
| 1781 | + with self.assertRaises(InvalidDHCPLeaseFileError) as context_manager: |
| 1782 | + parse_dhcp_lease_file(empty_file) |
| 1783 | + error = context_manager.exception |
| 1784 | + self.assertIn('Cannot parse empty dhcp lease file', str(error)) |
| 1785 | + |
| 1786 | + def test_parse_malformed_lease_file_content_errors(self): |
| 1787 | + """parse_dhcp_lease_file errors when file content isn't dhcp leases.""" |
| 1788 | + non_lease_file = self.tmp_path('leases') |
| 1789 | + write_file(non_lease_file, 'hi mom.') |
| 1790 | + with self.assertRaises(InvalidDHCPLeaseFileError) as context_manager: |
| 1791 | + parse_dhcp_lease_file(non_lease_file) |
| 1792 | + error = context_manager.exception |
| 1793 | + self.assertIn('Cannot parse dhcp lease file', str(error)) |
| 1794 | + |
| 1795 | + def test_parse_multiple_leases(self): |
| 1796 | + """parse_dhcp_lease_file returns a list of all leases within.""" |
| 1797 | + lease_file = self.tmp_path('leases') |
| 1798 | + content = dedent(""" |
| 1799 | + lease { |
| 1800 | + interface "wlp3s0"; |
| 1801 | + fixed-address 192.168.2.74; |
| 1802 | + option subnet-mask 255.255.255.0; |
| 1803 | + option routers 192.168.2.1; |
| 1804 | + renew 4 2017/07/27 18:02:30; |
| 1805 | + expire 5 2017/07/28 07:08:15; |
| 1806 | + } |
| 1807 | + lease { |
| 1808 | + interface "wlp3s0"; |
| 1809 | + fixed-address 192.168.2.74; |
| 1810 | + option subnet-mask 255.255.255.0; |
| 1811 | + option routers 192.168.2.1; |
| 1812 | + } |
| 1813 | + """) |
| 1814 | + expected = [ |
| 1815 | + {'interface': 'wlp3s0', 'fixed-address': '192.168.2.74', |
| 1816 | + 'subnet-mask': '255.255.255.0', 'routers': '192.168.2.1', |
| 1817 | + 'renew': '4 2017/07/27 18:02:30', |
| 1818 | + 'expire': '5 2017/07/28 07:08:15'}, |
| 1819 | + {'interface': 'wlp3s0', 'fixed-address': '192.168.2.74', |
| 1820 | + 'subnet-mask': '255.255.255.0', 'routers': '192.168.2.1'}] |
| 1821 | + write_file(lease_file, content) |
| 1822 | + self.assertItemsEqual(expected, parse_dhcp_lease_file(lease_file)) |
| 1823 | + |
| 1824 | + |
| 1825 | +class TestDHCPDiscoveryClean(CiTestCase): |
| 1826 | + with_logs = True |
| 1827 | + |
| 1828 | + @mock.patch('cloudinit.net.dhcp.find_fallback_nic') |
| 1829 | + def test_no_fallback_nic_found(self, m_fallback_nic): |
| 1830 | + """Log and do nothing when nic is absent and no fallback is found.""" |
| 1831 | + m_fallback_nic.return_value = None # No fallback nic found |
| 1832 | + self.assertEqual({}, maybe_perform_dhcp_discovery()) |
| 1833 | + self.assertIn( |
| 1834 | + 'Skip dhcp_discovery: Unable to find fallback nic.', |
| 1835 | + self.logs.getvalue()) |
| 1836 | + |
| 1837 | + def test_provided_nic_does_not_exist(self): |
| 1838 | + """When the provided nic doesn't exist, log a message and no-op.""" |
| 1839 | + self.assertEqual({}, maybe_perform_dhcp_discovery('idontexist')) |
| 1840 | + self.assertIn( |
| 1841 | + 'Skip dhcp_discovery: nic idontexist not found in get_devicelist.', |
| 1842 | + self.logs.getvalue()) |
| 1843 | + |
| 1844 | + @mock.patch('cloudinit.net.dhcp.util.which') |
| 1845 | + @mock.patch('cloudinit.net.dhcp.find_fallback_nic') |
| 1846 | + def test_absent_dhclient_command(self, m_fallback, m_which): |
| 1847 | + """When dhclient doesn't exist in the OS, log the issue and no-op.""" |
| 1848 | + m_fallback.return_value = 'eth9' |
| 1849 | + m_which.return_value = None # dhclient isn't found |
| 1850 | + self.assertEqual({}, maybe_perform_dhcp_discovery()) |
| 1851 | + self.assertIn( |
| 1852 | + 'Skip dhclient configuration: No dhclient command found.', |
| 1853 | + self.logs.getvalue()) |
| 1854 | + |
| 1855 | + @mock.patch('cloudinit.net.dhcp.dhcp_discovery') |
| 1856 | + @mock.patch('cloudinit.net.dhcp.util.which') |
| 1857 | + @mock.patch('cloudinit.net.dhcp.find_fallback_nic') |
| 1858 | + def test_dhclient_run_with_tmpdir(self, m_fallback, m_which, m_dhcp): |
| 1859 | + """maybe_perform_dhcp_discovery passes tmpdir to dhcp_discovery.""" |
| 1860 | + m_fallback.return_value = 'eth9' |
| 1861 | + m_which.return_value = '/sbin/dhclient' |
| 1862 | + m_dhcp.return_value = {'address': '192.168.2.2'} |
| 1863 | + self.assertEqual( |
| 1864 | + {'address': '192.168.2.2'}, maybe_perform_dhcp_discovery()) |
| 1865 | + m_dhcp.assert_called_once() |
| 1866 | + call = m_dhcp.call_args_list[0] |
| 1867 | + self.assertEqual('/sbin/dhclient', call[0][0]) |
| 1868 | + self.assertEqual('eth9', call[0][1]) |
| 1869 | + self.assertIn('/tmp/cloud-init-dhcp-', call[0][2]) |
| 1870 | + |
| 1871 | + @mock.patch('cloudinit.net.dhcp.util.subp') |
| 1872 | + def test_dhcp_discovery_run_in_sandbox(self, m_subp): |
| 1873 | + """dhcp_discovery brings up the interface and runs dhclient. |
| 1874 | + |
| 1875 | + It also returns the parsed dhcp.leases file generated in the sandbox. |
| 1876 | + """ |
| 1877 | + tmpdir = self.tmp_dir() |
| 1878 | + dhclient_script = os.path.join(tmpdir, 'dhclient.orig') |
| 1879 | + script_content = '#!/bin/bash\necho fake-dhclient' |
| 1880 | + write_file(dhclient_script, script_content, mode=0o755) |
| 1881 | + lease_content = dedent(""" |
| 1882 | + lease { |
| 1883 | + interface "eth9"; |
| 1884 | + fixed-address 192.168.2.74; |
| 1885 | + option subnet-mask 255.255.255.0; |
| 1886 | + option routers 192.168.2.1; |
| 1887 | + } |
| 1888 | + """) |
| 1889 | + lease_file = os.path.join(tmpdir, 'dhcp.leases') |
| 1890 | + write_file(lease_file, lease_content) |
| 1891 | + self.assertItemsEqual( |
| 1892 | + [{'interface': 'eth9', 'fixed-address': '192.168.2.74', |
| 1893 | + 'subnet-mask': '255.255.255.0', 'routers': '192.168.2.1'}], |
| 1894 | + dhcp_discovery(dhclient_script, 'eth9', tmpdir)) |
| 1895 | + # dhclient script got copied |
| 1896 | + with open(os.path.join(tmpdir, 'dhclient')) as stream: |
| 1897 | + self.assertEqual(script_content, stream.read()) |
| 1898 | + # Interface was brought up before dhclient called from sandbox |
| 1899 | + m_subp.assert_has_calls([ |
| 1900 | + mock.call( |
| 1901 | + ['ip', 'link', 'set', 'dev', 'eth9', 'up'], capture=True), |
| 1902 | + mock.call( |
| 1903 | + [os.path.join(tmpdir, 'dhclient'), '-1', '-v', '-lf', |
| 1904 | + lease_file, '-pf', os.path.join(tmpdir, 'dhclient.pid'), |
| 1905 | + 'eth9', '-sf', '/bin/true'], capture=True)]) |
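The final assertion above pins down the two subprocess calls `dhcp_discovery` makes: bring the link up, then run the sandboxed dhclient copy with `-sf /bin/true` so no vendor `dhclient-script` hooks touch the host. A sketch of composing those two command lines (a hypothetical helper; the real logic lives inline in `cloudinit.net.dhcp.dhcp_discovery`):

```python
import os


def build_sandbox_commands(interface, tmpdir):
    """Return the (link-up, dhclient) argv lists the tests assert on."""
    sandbox_dhclient = os.path.join(tmpdir, 'dhclient')
    pid_file = os.path.join(tmpdir, 'dhclient.pid')
    lease_file = os.path.join(tmpdir, 'dhcp.leases')
    # dhclient needs the link up before discovery; normally the
    # dhclient-script PREINIT action does this, but -sf /bin/true
    # replaces that script, so the link is raised explicitly first.
    link_up = ['ip', 'link', 'set', 'dev', interface, 'up']
    discover = [sandbox_dhclient, '-1', '-v', '-lf', lease_file,
                '-pf', pid_file, interface, '-sf', '/bin/true']
    return link_up, discover
```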
| 1906 | diff --git a/cloudinit/net/tests/test_init.py b/cloudinit/net/tests/test_init.py |
| 1907 | new file mode 100644 |
| 1908 | index 0000000..cc052a7 |
| 1909 | --- /dev/null |
| 1910 | +++ b/cloudinit/net/tests/test_init.py |
| 1911 | @@ -0,0 +1,522 @@ |
| 1912 | +# This file is part of cloud-init. See LICENSE file for license information. |
| 1913 | + |
| 1914 | +import copy |
| 1915 | +import errno |
| 1916 | +import mock |
| 1917 | +import os |
| 1918 | + |
| 1919 | +import cloudinit.net as net |
| 1920 | +from cloudinit.util import ensure_file, write_file, ProcessExecutionError |
| 1921 | +from tests.unittests.helpers import CiTestCase |
| 1922 | + |
| 1923 | + |
| 1924 | +class TestSysDevPath(CiTestCase): |
| 1925 | + |
| 1926 | + def test_sys_dev_path(self): |
| 1927 | + """sys_dev_path returns a path under SYS_CLASS_NET for a device.""" |
| 1928 | + dev = 'something' |
| 1929 | + path = 'attribute' |
| 1930 | + expected = net.SYS_CLASS_NET + dev + '/' + path |
| 1931 | + self.assertEqual(expected, net.sys_dev_path(dev, path)) |
| 1932 | + |
| 1933 | + def test_sys_dev_path_without_path(self): |
| 1934 | + """When path param isn't provided it defaults to empty string.""" |
| 1935 | + dev = 'something' |
| 1936 | + expected = net.SYS_CLASS_NET + dev + '/' |
| 1937 | + self.assertEqual(expected, net.sys_dev_path(dev)) |
| 1938 | + |
| 1939 | + |
| 1940 | +class TestReadSysNet(CiTestCase): |
| 1941 | + with_logs = True |
| 1942 | + |
| 1943 | + def setUp(self): |
| 1944 | + super(TestReadSysNet, self).setUp() |
| 1945 | + sys_mock = mock.patch('cloudinit.net.get_sys_class_path') |
| 1946 | + self.m_sys_path = sys_mock.start() |
| 1947 | + self.sysdir = self.tmp_dir() + '/' |
| 1948 | + self.m_sys_path.return_value = self.sysdir |
| 1949 | + self.addCleanup(sys_mock.stop) |
| 1950 | + |
| 1951 | + def test_read_sys_net_strips_contents_of_sys_path(self): |
| 1952 | + """read_sys_net strips whitespace from the contents of a sys file.""" |
| 1953 | + content = 'some stuff with trailing whitespace\t\r\n' |
| 1954 | + write_file(os.path.join(self.sysdir, 'dev', 'attr'), content) |
| 1955 | + self.assertEqual(content.strip(), net.read_sys_net('dev', 'attr')) |
| 1956 | + |
| 1957 | + def test_read_sys_net_reraises_oserror(self): |
| 1958 | + """read_sys_net raises OSError/IOError when file doesn't exist.""" |
| 1959 | + # Assert a non-specific Exception: Python versions differ on OSError vs IOError. |
| 1960 | + with self.assertRaises(Exception) as context_manager: # noqa: H202 |
| 1961 | + net.read_sys_net('dev', 'attr') |
| 1962 | + error = context_manager.exception |
| 1963 | + self.assertIn('No such file or directory', str(error)) |
| 1964 | + |
| 1965 | + def test_read_sys_net_handles_error_with_on_enoent(self): |
| 1966 | + """read_sys_net handles OSError/IOError with on_enoent if provided.""" |
| 1967 | + handled_errors = [] |
| 1968 | + |
| 1969 | + def on_enoent(e): |
| 1970 | + handled_errors.append(e) |
| 1971 | + |
| 1972 | + net.read_sys_net('dev', 'attr', on_enoent=on_enoent) |
| 1973 | + error = handled_errors[0] |
| 1974 | + self.assertIsInstance(error, Exception) |
| 1975 | + self.assertIn('No such file or directory', str(error)) |
| 1976 | + |
| 1977 | + def test_read_sys_net_translates_content(self): |
| 1978 | + """read_sys_net translates content when translate dict is provided.""" |
| 1979 | + content = "you're welcome\n" |
| 1980 | + write_file(os.path.join(self.sysdir, 'dev', 'attr'), content) |
| 1981 | + translate = {"you're welcome": 'de nada'} |
| 1982 | + self.assertEqual( |
| 1983 | + 'de nada', |
| 1984 | + net.read_sys_net('dev', 'attr', translate=translate)) |
| 1985 | + |
| 1986 | + def test_read_sys_net_errors_on_translation_failures(self): |
| 1987 | + """read_sys_net raises a KeyError and logs details on failure.""" |
| 1988 | + content = "you're welcome\n" |
| 1989 | + write_file(os.path.join(self.sysdir, 'dev', 'attr'), content) |
| 1990 | + with self.assertRaises(KeyError) as context_manager: |
| 1991 | + net.read_sys_net('dev', 'attr', translate={}) |
| 1992 | + error = context_manager.exception |
| 1993 | + self.assertEqual('"you\'re welcome"', str(error)) |
| 1994 | + self.assertIn( |
| 1995 | + "Found unexpected (not translatable) value 'you're welcome' in " |
| 1996 | + "'{0}dev/attr".format(self.sysdir), |
| 1997 | + self.logs.getvalue()) |
| 1998 | + |
| 1999 | + def test_read_sys_net_handles_keyerror_with_on_keyerror(self): |
| 2000 | + """read_sys_net handles translation errors by calling on_keyerror.""" |
| 2001 | + content = "you're welcome\n" |
| 2002 | + write_file(os.path.join(self.sysdir, 'dev', 'attr'), content) |
| 2003 | + handled_errors = [] |
| 2004 | + |
| 2005 | + def on_keyerror(e): |
| 2006 | + handled_errors.append(e) |
| 2007 | + |
| 2008 | + net.read_sys_net('dev', 'attr', translate={}, on_keyerror=on_keyerror) |
| 2009 | + error = handled_errors[0] |
| 2010 | + self.assertIsInstance(error, KeyError) |
| 2011 | + self.assertEqual('"you\'re welcome"', str(error)) |
| 2012 | + |
| 2013 | + def test_read_sys_net_safe_false_on_translate_failure(self): |
| 2014 | + """read_sys_net_safe returns False on translation failures.""" |
| 2015 | + content = "you're welcome\n" |
| 2016 | + write_file(os.path.join(self.sysdir, 'dev', 'attr'), content) |
| 2017 | + self.assertFalse(net.read_sys_net_safe('dev', 'attr', translate={})) |
| 2018 | + |
| 2019 | + def test_read_sys_net_safe_returns_false_on_noent_failure(self): |
| 2020 | + """read_sys_net_safe returns False on file not found failures.""" |
| 2021 | + self.assertFalse(net.read_sys_net_safe('dev', 'attr')) |
| 2022 | + |
| 2023 | + def test_read_sys_net_int_returns_none_on_error(self): |
| 2024 | + """read_sys_net_safe returns None on failures.""" |
| 2025 | + self.assertFalse(net.read_sys_net_int('dev', 'attr')) |
| 2026 | + |
| 2027 | + def test_read_sys_net_int_returns_none_on_valueerror(self): |
| 2028 | + """read_sys_net_safe returns None when content is not an int.""" |
| 2029 | + write_file(os.path.join(self.sysdir, 'dev', 'attr'), 'NOTINT\n') |
| 2030 | + self.assertFalse(net.read_sys_net_int('dev', 'attr')) |
| 2031 | + |
| 2032 | + def test_read_sys_net_int_returns_integer_from_content(self): |
| 2033 | + """read_sys_net_safe returns None on failures.""" |
| 2034 | + write_file(os.path.join(self.sysdir, 'dev', 'attr'), '1\n') |
| 2035 | + self.assertEqual(1, net.read_sys_net_int('dev', 'attr')) |
| 2036 | + |
| 2037 | + def test_is_up_true(self): |
| 2038 | + """is_up is True if sys/net/devname/operstate is 'up' or 'unknown'.""" |
| 2039 | + for state in ['up', 'unknown']: |
| 2040 | + write_file(os.path.join(self.sysdir, 'eth0', 'operstate'), state) |
| 2041 | + self.assertTrue(net.is_up('eth0')) |
| 2042 | + |
| 2043 | + def test_is_up_false(self): |
| 2044 | + """is_up is False if sys/net/devname/operstate is 'down' or invalid.""" |
| 2045 | + for state in ['down', 'incomprehensible']: |
| 2046 | + write_file(os.path.join(self.sysdir, 'eth0', 'operstate'), state) |
| 2047 | + self.assertFalse(net.is_up('eth0')) |
| 2048 | + |
| 2049 | + def test_is_wireless(self): |
| 2050 | + """is_wireless is True when /sys/net/devname/wireless exists.""" |
| 2051 | + self.assertFalse(net.is_wireless('eth0')) |
| 2052 | + ensure_file(os.path.join(self.sysdir, 'eth0', 'wireless')) |
| 2053 | + self.assertTrue(net.is_wireless('eth0')) |
| 2054 | + |
| 2055 | + def test_is_bridge(self): |
| 2056 | + """is_bridge is True when /sys/net/devname/bridge exists.""" |
| 2057 | + self.assertFalse(net.is_bridge('eth0')) |
| 2058 | + ensure_file(os.path.join(self.sysdir, 'eth0', 'bridge')) |
| 2059 | + self.assertTrue(net.is_bridge('eth0')) |
| 2060 | + |
| 2061 | + def test_is_bond(self): |
| 2062 | + """is_bond is True when /sys/net/devname/bonding exists.""" |
| 2063 | + self.assertFalse(net.is_bond('eth0')) |
| 2064 | + ensure_file(os.path.join(self.sysdir, 'eth0', 'bonding')) |
| 2065 | + self.assertTrue(net.is_bond('eth0')) |
| 2066 | + |
| 2067 | + def test_is_vlan(self): |
| 2068 | + """is_vlan is True when /sys/net/devname/uevent has DEVTYPE=vlan.""" |
| 2069 | + ensure_file(os.path.join(self.sysdir, 'eth0', 'uevent')) |
| 2070 | + self.assertFalse(net.is_vlan('eth0')) |
| 2071 | + content = 'junk\nDEVTYPE=vlan\njunk\n' |
| 2072 | + write_file(os.path.join(self.sysdir, 'eth0', 'uevent'), content) |
| 2073 | + self.assertTrue(net.is_vlan('eth0')) |
| 2074 | + |
| 2075 | + def test_is_connected_when_physically_connected(self): |
| 2076 | + """is_connected is True when /sys/net/devname/iflink reports 2.""" |
| 2077 | + self.assertFalse(net.is_connected('eth0')) |
| 2078 | + write_file(os.path.join(self.sysdir, 'eth0', 'iflink'), "2") |
| 2079 | + self.assertTrue(net.is_connected('eth0')) |
| 2080 | + |
| 2081 | + def test_is_connected_when_wireless_and_carrier_active(self): |
| 2082 | + """is_connected is True if wireless /sys/net/devname/carrier is 1.""" |
| 2083 | + self.assertFalse(net.is_connected('eth0')) |
| 2084 | + ensure_file(os.path.join(self.sysdir, 'eth0', 'wireless')) |
| 2085 | + self.assertFalse(net.is_connected('eth0')) |
| 2086 | + write_file(os.path.join(self.sysdir, 'eth0', 'carrier'), "1") |
| 2087 | + self.assertTrue(net.is_connected('eth0')) |
| 2088 | + |
| 2089 | + def test_is_physical(self): |
| 2090 | + """is_physical is True when /sys/net/devname/device exists.""" |
| 2091 | + self.assertFalse(net.is_physical('eth0')) |
| 2092 | + ensure_file(os.path.join(self.sysdir, 'eth0', 'device')) |
| 2093 | + self.assertTrue(net.is_physical('eth0')) |
| 2094 | + |
| 2095 | + def test_is_present(self): |
| 2096 | + """is_present is True when /sys/net/devname exists.""" |
| 2097 | + self.assertFalse(net.is_present('eth0')) |
| 2098 | + ensure_file(os.path.join(self.sysdir, 'eth0', 'device')) |
| 2099 | + self.assertTrue(net.is_present('eth0')) |
| 2100 | + |
| 2101 | + |
| 2102 | +class TestGenerateFallbackConfig(CiTestCase): |
| 2103 | + |
| 2104 | + def setUp(self): |
| 2105 | + super(TestGenerateFallbackConfig, self).setUp() |
| 2106 | + sys_mock = mock.patch('cloudinit.net.get_sys_class_path') |
| 2107 | + self.m_sys_path = sys_mock.start() |
| 2108 | + self.sysdir = self.tmp_dir() + '/' |
| 2109 | + self.m_sys_path.return_value = self.sysdir |
| 2110 | + self.addCleanup(sys_mock.stop) |
| 2111 | + |
| 2112 | + def test_generate_fallback_finds_connected_eth_with_mac(self): |
| 2113 | + """generate_fallback_config finds any connected device with a mac.""" |
| 2114 | + write_file(os.path.join(self.sysdir, 'eth0', 'carrier'), '1') |
| 2115 | + write_file(os.path.join(self.sysdir, 'eth1', 'carrier'), '1') |
| 2116 | + mac = 'aa:bb:cc:aa:bb:cc' |
| 2117 | + write_file(os.path.join(self.sysdir, 'eth1', 'address'), mac) |
| 2118 | + expected = { |
| 2119 | + 'config': [{'type': 'physical', 'mac_address': mac, |
| 2120 | + 'name': 'eth1', 'subnets': [{'type': 'dhcp'}]}], |
| 2121 | + 'version': 1} |
| 2122 | + self.assertEqual(expected, net.generate_fallback_config()) |
| 2123 | + |
| 2124 | + def test_generate_fallback_finds_dormant_eth_with_mac(self): |
| 2125 | + """generate_fallback_config finds any dormant device with a mac.""" |
| 2126 | + write_file(os.path.join(self.sysdir, 'eth0', 'dormant'), '1') |
| 2127 | + mac = 'aa:bb:cc:aa:bb:cc' |
| 2128 | + write_file(os.path.join(self.sysdir, 'eth0', 'address'), mac) |
| 2129 | + expected = { |
| 2130 | + 'config': [{'type': 'physical', 'mac_address': mac, |
| 2131 | + 'name': 'eth0', 'subnets': [{'type': 'dhcp'}]}], |
| 2132 | + 'version': 1} |
| 2133 | + self.assertEqual(expected, net.generate_fallback_config()) |
| 2134 | + |
| 2135 | + def test_generate_fallback_finds_eth_by_operstate(self): |
| 2136 | + """generate_fallback_config finds any dormant device with a mac.""" |
| 2137 | + mac = 'aa:bb:cc:aa:bb:cc' |
| 2138 | + write_file(os.path.join(self.sysdir, 'eth0', 'address'), mac) |
| 2139 | + expected = { |
| 2140 | + 'config': [{'type': 'physical', 'mac_address': mac, |
| 2141 | + 'name': 'eth0', 'subnets': [{'type': 'dhcp'}]}], |
| 2142 | + 'version': 1} |
| 2143 | + valid_operstates = ['dormant', 'down', 'lowerlayerdown', 'unknown'] |
| 2144 | + for state in valid_operstates: |
| 2145 | + write_file(os.path.join(self.sysdir, 'eth0', 'operstate'), state) |
| 2146 | + self.assertEqual(expected, net.generate_fallback_config()) |
| 2147 | + write_file(os.path.join(self.sysdir, 'eth0', 'operstate'), 'noworky') |
| 2148 | + self.assertIsNone(net.generate_fallback_config()) |
| 2149 | + |
| 2150 | + def test_generate_fallback_config_skips_veth(self): |
| 2151 | + """generate_fallback_config will skip any veth interfaces.""" |
| 2152 | + # A connected veth which gets ignored |
| 2153 | + write_file(os.path.join(self.sysdir, 'veth0', 'carrier'), '1') |
| 2154 | + self.assertIsNone(net.generate_fallback_config()) |
| 2155 | + |
| 2156 | + def test_generate_fallback_config_skips_bridges(self): |
| 2157 | + """generate_fallback_config will skip any bridges interfaces.""" |
| 2158 | + # A connected veth which gets ignored |
| 2159 | + write_file(os.path.join(self.sysdir, 'eth0', 'carrier'), '1') |
| 2160 | + mac = 'aa:bb:cc:aa:bb:cc' |
| 2161 | + write_file(os.path.join(self.sysdir, 'eth0', 'address'), mac) |
| 2162 | + ensure_file(os.path.join(self.sysdir, 'eth0', 'bridge')) |
| 2163 | + self.assertIsNone(net.generate_fallback_config()) |
| 2164 | + |
| 2165 | + def test_generate_fallback_config_skips_bonds(self): |
| 2166 | + """generate_fallback_config will skip any bonded interfaces.""" |
| 2167 | + # A connected bond which gets ignored |
| 2168 | + write_file(os.path.join(self.sysdir, 'eth0', 'carrier'), '1') |
| 2169 | + mac = 'aa:bb:cc:aa:bb:cc' |
| 2170 | + write_file(os.path.join(self.sysdir, 'eth0', 'address'), mac) |
| 2171 | + ensure_file(os.path.join(self.sysdir, 'eth0', 'bonding')) |
| 2172 | + self.assertIsNone(net.generate_fallback_config()) |
| 2173 | + |
| 2174 | + |
| 2175 | +class TestGetDeviceList(CiTestCase): |
| 2176 | + |
| 2177 | + def setUp(self): |
| 2178 | + super(TestGetDeviceList, self).setUp() |
| 2179 | + sys_mock = mock.patch('cloudinit.net.get_sys_class_path') |
| 2180 | + self.m_sys_path = sys_mock.start() |
| 2181 | + self.sysdir = self.tmp_dir() + '/' |
| 2182 | + self.m_sys_path.return_value = self.sysdir |
| 2183 | + self.addCleanup(sys_mock.stop) |
| 2184 | + |
| 2185 | + def test_get_devicelist_raise_oserror(self): |
| 2186 | + """get_devicelist raise any non-ENOENT OSerror.""" |
| 2187 | + error = OSError('Can not do it') |
| 2188 | + error.errno = errno.EPERM # Set non-ENOENT |
| 2189 | + self.m_sys_path.side_effect = error |
| 2190 | + with self.assertRaises(OSError) as context_manager: |
| 2191 | + net.get_devicelist() |
| 2192 | + exception = context_manager.exception |
| 2193 | + self.assertEqual('Can not do it', str(exception)) |
| 2194 | + |
| 2195 | + def test_get_devicelist_empty_without_sys_net(self): |
| 2196 | + """get_devicelist returns empty list when missing SYS_CLASS_NET.""" |
| 2197 | + self.m_sys_path.return_value = 'idontexist' |
| 2198 | + self.assertEqual([], net.get_devicelist()) |
| 2199 | + |
| 2200 | + def test_get_devicelist_empty_with_no_devices_in_sys_net(self): |
| 2201 | + """get_devicelist returns empty directoty listing for SYS_CLASS_NET.""" |
| 2202 | + self.assertEqual([], net.get_devicelist()) |
| 2203 | + |
| 2204 | + def test_get_devicelist_lists_any_subdirectories_in_sys_net(self): |
| 2205 | + """get_devicelist returns a directory listing for SYS_CLASS_NET.""" |
| 2206 | + write_file(os.path.join(self.sysdir, 'eth0', 'operstate'), 'up') |
| 2207 | + write_file(os.path.join(self.sysdir, 'eth1', 'operstate'), 'up') |
| 2208 | + self.assertItemsEqual(['eth0', 'eth1'], net.get_devicelist()) |
| 2209 | + |
| 2210 | + |
| 2211 | +class TestGetInterfaceMAC(CiTestCase): |
| 2212 | + |
| 2213 | + def setUp(self): |
| 2214 | + super(TestGetInterfaceMAC, self).setUp() |
| 2215 | + sys_mock = mock.patch('cloudinit.net.get_sys_class_path') |
| 2216 | + self.m_sys_path = sys_mock.start() |
| 2217 | + self.sysdir = self.tmp_dir() + '/' |
| 2218 | + self.m_sys_path.return_value = self.sysdir |
| 2219 | + self.addCleanup(sys_mock.stop) |
| 2220 | + |
| 2221 | + def test_get_interface_mac_false_with_no_mac(self): |
| 2222 | + """get_device_list returns False when no mac is reported.""" |
| 2223 | + ensure_file(os.path.join(self.sysdir, 'eth0', 'bonding')) |
| 2224 | + mac_path = os.path.join(self.sysdir, 'eth0', 'address') |
| 2225 | + self.assertFalse(os.path.exists(mac_path)) |
| 2226 | + self.assertFalse(net.get_interface_mac('eth0')) |
| 2227 | + |
| 2228 | + def test_get_interface_mac(self): |
| 2229 | + """get_interfaces returns the mac from SYS_CLASS_NET/dev/address.""" |
| 2230 | + mac = 'aa:bb:cc:aa:bb:cc' |
| 2231 | + write_file(os.path.join(self.sysdir, 'eth1', 'address'), mac) |
| 2232 | + self.assertEqual(mac, net.get_interface_mac('eth1')) |
| 2233 | + |
| 2234 | + def test_get_interface_mac_grabs_bonding_address(self): |
| 2235 | + """get_interfaces returns the source device mac for bonded devices.""" |
| 2236 | + source_dev_mac = 'aa:bb:cc:aa:bb:cc' |
| 2237 | + bonded_mac = 'dd:ee:ff:dd:ee:ff' |
| 2238 | + write_file(os.path.join(self.sysdir, 'eth1', 'address'), bonded_mac) |
| 2239 | + write_file( |
| 2240 | + os.path.join(self.sysdir, 'eth1', 'bonding_slave', 'perm_hwaddr'), |
| 2241 | + source_dev_mac) |
| 2242 | + self.assertEqual(source_dev_mac, net.get_interface_mac('eth1')) |
| 2243 | + |
| 2244 | + def test_get_interfaces_empty_list_without_sys_net(self): |
| 2245 | + """get_interfaces returns an empty list when missing SYS_CLASS_NET.""" |
| 2246 | + self.m_sys_path.return_value = 'idontexist' |
| 2247 | + self.assertEqual([], net.get_interfaces()) |
| 2248 | + |
| 2249 | + def test_get_interfaces_by_mac_skips_empty_mac(self): |
| 2250 | + """Ignore 00:00:00:00:00:00 addresses from get_interfaces_by_mac.""" |
| 2251 | + empty_mac = '00:00:00:00:00:00' |
| 2252 | + mac = 'aa:bb:cc:aa:bb:cc' |
| 2253 | + write_file(os.path.join(self.sysdir, 'eth1', 'address'), empty_mac) |
| 2254 | + write_file(os.path.join(self.sysdir, 'eth1', 'addr_assign_type'), '0') |
| 2255 | + write_file(os.path.join(self.sysdir, 'eth2', 'addr_assign_type'), '0') |
| 2256 | + write_file(os.path.join(self.sysdir, 'eth2', 'address'), mac) |
| 2257 | + expected = [('eth2', 'aa:bb:cc:aa:bb:cc', None, None)] |
| 2258 | + self.assertEqual(expected, net.get_interfaces()) |
| 2259 | + |
| 2260 | + def test_get_interfaces_by_mac_skips_missing_mac(self): |
| 2261 | + """Ignore interfaces without an address from get_interfaces_by_mac.""" |
| 2262 | + write_file(os.path.join(self.sysdir, 'eth1', 'addr_assign_type'), '0') |
| 2263 | + address_path = os.path.join(self.sysdir, 'eth1', 'address') |
| 2264 | + self.assertFalse(os.path.exists(address_path)) |
| 2265 | + mac = 'aa:bb:cc:aa:bb:cc' |
| 2266 | + write_file(os.path.join(self.sysdir, 'eth2', 'addr_assign_type'), '0') |
| 2267 | + write_file(os.path.join(self.sysdir, 'eth2', 'address'), mac) |
| 2268 | + expected = [('eth2', 'aa:bb:cc:aa:bb:cc', None, None)] |
| 2269 | + self.assertEqual(expected, net.get_interfaces()) |
| 2270 | + |
| 2271 | + |
| 2272 | +class TestInterfaceHasOwnMAC(CiTestCase): |
| 2273 | + |
| 2274 | + def setUp(self): |
| 2275 | + super(TestInterfaceHasOwnMAC, self).setUp() |
| 2276 | + sys_mock = mock.patch('cloudinit.net.get_sys_class_path') |
| 2277 | + self.m_sys_path = sys_mock.start() |
| 2278 | + self.sysdir = self.tmp_dir() + '/' |
| 2279 | + self.m_sys_path.return_value = self.sysdir |
| 2280 | + self.addCleanup(sys_mock.stop) |
| 2281 | + |
| 2282 | + def test_interface_has_own_mac_false_when_stolen(self): |
| 2283 | + """Return False from interface_has_own_mac when address is stolen.""" |
| 2284 | + write_file(os.path.join(self.sysdir, 'eth1', 'addr_assign_type'), '2') |
| 2285 | + self.assertFalse(net.interface_has_own_mac('eth1')) |
| 2286 | + |
| 2287 | + def test_interface_has_own_mac_true_when_not_stolen(self): |
| 2288 | + """Return False from interface_has_own_mac when mac isn't stolen.""" |
| 2289 | + valid_assign_types = ['0', '1', '3'] |
| 2290 | + assign_path = os.path.join(self.sysdir, 'eth1', 'addr_assign_type') |
| 2291 | + for _type in valid_assign_types: |
| 2292 | + write_file(assign_path, _type) |
| 2293 | + self.assertTrue(net.interface_has_own_mac('eth1')) |
| 2294 | + |
| 2295 | + def test_interface_has_own_mac_strict_errors_on_absent_assign_type(self): |
| 2296 | + """When addr_assign_type is absent, interface_has_own_mac errors.""" |
| 2297 | + with self.assertRaises(ValueError): |
| 2298 | + net.interface_has_own_mac('eth1', strict=True) |
| 2299 | + |
| 2300 | + |
| 2301 | +@mock.patch('cloudinit.net.util.subp') |
| 2302 | +class TestEphemeralIPV4Network(CiTestCase): |
| 2303 | + |
| 2304 | + with_logs = True |
| 2305 | + |
| 2306 | + def setUp(self): |
| 2307 | + super(TestEphemeralIPV4Network, self).setUp() |
| 2308 | + sys_mock = mock.patch('cloudinit.net.get_sys_class_path') |
| 2309 | + self.m_sys_path = sys_mock.start() |
| 2310 | + self.sysdir = self.tmp_dir() + '/' |
| 2311 | + self.m_sys_path.return_value = self.sysdir |
| 2312 | + self.addCleanup(sys_mock.stop) |
| 2313 | + |
| 2314 | + def test_ephemeral_ipv4_network_errors_on_missing_params(self, m_subp): |
| 2315 | + """No required params for EphemeralIPv4Network can be None.""" |
| 2316 | + required_params = { |
| 2317 | + 'interface': 'eth0', 'ip': '192.168.2.2', |
| 2318 | + 'prefix_or_mask': '255.255.255.0', 'broadcast': '192.168.2.255'} |
| 2319 | + for key in required_params.keys(): |
| 2320 | + params = copy.deepcopy(required_params) |
| 2321 | + params[key] = None |
| 2322 | + with self.assertRaises(ValueError) as context_manager: |
| 2323 | + net.EphemeralIPv4Network(**params) |
| 2324 | + error = context_manager.exception |
| 2325 | + self.assertIn('Cannot init network on', str(error)) |
| 2326 | + self.assertEqual(0, m_subp.call_count) |
| 2327 | + |
| 2328 | + def test_ephemeral_ipv4_network_errors_invalid_mask_prefix(self, m_subp): |
| 2329 | + """Raise an error when prefix_or_mask is not a netmask or prefix.""" |
| 2330 | + params = { |
| 2331 | + 'interface': 'eth0', 'ip': '192.168.2.2', |
| 2332 | + 'broadcast': '192.168.2.255'} |
| 2333 | + invalid_masks = ('invalid', 'invalid.', '123.123.123') |
| 2334 | + for error_val in invalid_masks: |
| 2335 | + params['prefix_or_mask'] = error_val |
| 2336 | + with self.assertRaises(ValueError) as context_manager: |
| 2337 | + with net.EphemeralIPv4Network(**params): |
| 2338 | + pass |
| 2339 | + error = context_manager.exception |
| 2340 | + self.assertIn('Cannot setup network: netmask', str(error)) |
| 2341 | + self.assertEqual(0, m_subp.call_count) |
| 2342 | + |
| 2343 | + def test_ephemeral_ipv4_network_performs_teardown(self, m_subp): |
| 2344 | + """EphemeralIPv4Network performs teardown on the device if setup.""" |
| 2345 | + expected_setup_calls = [ |
| 2346 | + mock.call( |
| 2347 | + ['ip', '-family', 'inet', 'addr', 'add', '192.168.2.2/24', |
| 2348 | + 'broadcast', '192.168.2.255', 'dev', 'eth0'], |
| 2349 | + capture=True, update_env={'LANG': 'C'}), |
| 2350 | + mock.call( |
| 2351 | + ['ip', '-family', 'inet', 'link', 'set', 'dev', 'eth0', 'up'], |
| 2352 | + capture=True)] |
| 2353 | + expected_teardown_calls = [ |
| 2354 | + mock.call( |
| 2355 | + ['ip', '-family', 'inet', 'link', 'set', 'dev', 'eth0', |
| 2356 | + 'down'], capture=True), |
| 2357 | + mock.call( |
| 2358 | + ['ip', '-family', 'inet', 'addr', 'del', '192.168.2.2/24', |
| 2359 | + 'dev', 'eth0'], capture=True)] |
| 2360 | + params = { |
| 2361 | + 'interface': 'eth0', 'ip': '192.168.2.2', |
| 2362 | + 'prefix_or_mask': '255.255.255.0', 'broadcast': '192.168.2.255'} |
| 2363 | + with net.EphemeralIPv4Network(**params): |
| 2364 | + self.assertEqual(expected_setup_calls, m_subp.call_args_list) |
| 2365 | + m_subp.assert_has_calls(expected_teardown_calls) |
| 2366 | + |
| 2367 | + def test_ephemeral_ipv4_network_noop_when_configured(self, m_subp): |
| 2368 | + """EphemeralIPv4Network handles exception when address is setup. |
| 2369 | + |
| 2370 | + It performs no cleanup as the interface was already setup. |
| 2371 | + """ |
| 2372 | + params = { |
| 2373 | + 'interface': 'eth0', 'ip': '192.168.2.2', |
| 2374 | + 'prefix_or_mask': '255.255.255.0', 'broadcast': '192.168.2.255'} |
| 2375 | + m_subp.side_effect = ProcessExecutionError( |
| 2376 | + '', 'RTNETLINK answers: File exists', 2) |
| 2377 | + expected_calls = [ |
| 2378 | + mock.call( |
| 2379 | + ['ip', '-family', 'inet', 'addr', 'add', '192.168.2.2/24', |
| 2380 | + 'broadcast', '192.168.2.255', 'dev', 'eth0'], |
| 2381 | + capture=True, update_env={'LANG': 'C'})] |
| 2382 | + with net.EphemeralIPv4Network(**params): |
| 2383 | + pass |
| 2384 | + self.assertEqual(expected_calls, m_subp.call_args_list) |
| 2385 | + self.assertIn( |
| 2386 | + 'Skip ephemeral network setup, eth0 already has address', |
| 2387 | + self.logs.getvalue()) |
| 2388 | + |
| 2389 | + def test_ephemeral_ipv4_network_with_prefix(self, m_subp): |
| 2390 | + """EphemeralIPv4Network takes a valid prefix to setup the network.""" |
| 2391 | + params = { |
| 2392 | + 'interface': 'eth0', 'ip': '192.168.2.2', |
| 2393 | + 'prefix_or_mask': '24', 'broadcast': '192.168.2.255'} |
| 2394 | + for prefix_val in ['24', 16]: # prefix can be int or string |
| 2395 | + params['prefix_or_mask'] = prefix_val |
| 2396 | + with net.EphemeralIPv4Network(**params): |
| 2397 | + pass |
| 2398 | + m_subp.assert_has_calls([mock.call( |
| 2399 | + ['ip', '-family', 'inet', 'addr', 'add', '192.168.2.2/24', |
| 2400 | + 'broadcast', '192.168.2.255', 'dev', 'eth0'], |
| 2401 | + capture=True, update_env={'LANG': 'C'})]) |
| 2402 | + m_subp.assert_has_calls([mock.call( |
| 2403 | + ['ip', '-family', 'inet', 'addr', 'add', '192.168.2.2/16', |
| 2404 | + 'broadcast', '192.168.2.255', 'dev', 'eth0'], |
| 2405 | + capture=True, update_env={'LANG': 'C'})]) |
| 2406 | + |
| 2407 | + def test_ephemeral_ipv4_network_with_new_default_route(self, m_subp): |
| 2408 | + """Add the route when router is set and no default route exists.""" |
| 2409 | + params = { |
| 2410 | + 'interface': 'eth0', 'ip': '192.168.2.2', |
| 2411 | + 'prefix_or_mask': '255.255.255.0', 'broadcast': '192.168.2.255', |
| 2412 | + 'router': '192.168.2.1'} |
| 2413 | + m_subp.return_value = '', '' # Empty response from ip route gw check |
| 2414 | + expected_setup_calls = [ |
| 2415 | + mock.call( |
| 2416 | + ['ip', '-family', 'inet', 'addr', 'add', '192.168.2.2/24', |
| 2417 | + 'broadcast', '192.168.2.255', 'dev', 'eth0'], |
| 2418 | + capture=True, update_env={'LANG': 'C'}), |
| 2419 | + mock.call( |
| 2420 | + ['ip', '-family', 'inet', 'link', 'set', 'dev', 'eth0', 'up'], |
| 2421 | + capture=True), |
| 2422 | + mock.call( |
| 2423 | + ['ip', 'route', 'show', '0.0.0.0/0'], capture=True), |
| 2424 | + mock.call( |
| 2425 | + ['ip', '-4', 'route', 'add', 'default', 'via', |
| 2426 | + '192.168.2.1', 'dev', 'eth0'], capture=True)] |
| 2427 | + expected_teardown_calls = [mock.call( |
| 2428 | + ['ip', '-4', 'route', 'del', 'default', 'dev', 'eth0'], |
| 2429 | + capture=True)] |
| 2430 | + |
| 2431 | + with net.EphemeralIPv4Network(**params): |
| 2432 | + self.assertEqual(expected_setup_calls, m_subp.call_args_list) |
| 2433 | + m_subp.assert_has_calls(expected_teardown_calls) |
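The setup and teardown assertions above rely on `EphemeralIPv4Network` being a context manager that reverses whatever it configured, even when the body raises. A minimal sketch of that pattern (`EphemeralNetSketch` and its `runner` callable are illustrative stand-ins, not cloud-init's actual class):

```python
class EphemeralNetSketch:
    """Context manager that applies setup commands and reverses them on exit."""

    def __init__(self, runner, interface, cidr):
        self.runner = runner      # callable taking a command list, e.g. util.subp
        self.interface = interface
        self.cidr = cidr
        self.cleanup = []         # teardown commands, executed in reverse order

    def __enter__(self):
        self.runner(['ip', 'addr', 'add', self.cidr, 'dev', self.interface])
        self.cleanup.append(
            ['ip', 'addr', 'del', self.cidr, 'dev', self.interface])
        self.runner(['ip', 'link', 'set', 'dev', self.interface, 'up'])
        self.cleanup.append(
            ['ip', 'link', 'set', 'dev', self.interface, 'down'])
        return self

    def __exit__(self, exc_type, exc, tb):
        # Tear down in reverse order, whether or not the body raised.
        for cmd in reversed(self.cleanup):
            self.runner(cmd)


calls = []
with EphemeralNetSketch(calls.append, 'eth0', '192.168.2.2/24'):
    pass  # network is "up" only inside this block
```

Recording commands through a plain callable is what lets the tests above substitute `mock.patch` for `util.subp` and assert on exact command ordering.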
| 2434 | diff --git a/cloudinit/sources/DataSourceAliYun.py b/cloudinit/sources/DataSourceAliYun.py |
| 2435 | index 380e27c..43a7e42 100644 |
| 2436 | --- a/cloudinit/sources/DataSourceAliYun.py |
| 2437 | +++ b/cloudinit/sources/DataSourceAliYun.py |
| 2438 | @@ -6,17 +6,20 @@ from cloudinit import sources |
| 2439 | from cloudinit.sources import DataSourceEc2 as EC2 |
| 2440 | from cloudinit import util |
| 2441 | |
| 2442 | -DEF_MD_VERSION = "2016-01-01" |
| 2443 | ALIYUN_PRODUCT = "Alibaba Cloud ECS" |
| 2444 | |
| 2445 | |
| 2446 | class DataSourceAliYun(EC2.DataSourceEc2): |
| 2447 | - metadata_urls = ["http://100.100.100.200"] |
| 2448 | + |
| 2449 | + metadata_urls = ['http://100.100.100.200'] |
| 2450 | + |
| 2451 | + # The minimum supported metadata_version from the ec2 metadata apis |
| 2452 | + min_metadata_version = '2016-01-01' |
| 2453 | + extended_metadata_versions = [] |
| 2454 | |
| 2455 | def __init__(self, sys_cfg, distro, paths): |
| 2456 | super(DataSourceAliYun, self).__init__(sys_cfg, distro, paths) |
| 2457 | self.seed_dir = os.path.join(paths.seed_dir, "AliYun") |
| 2458 | - self.api_ver = DEF_MD_VERSION |
| 2459 | |
| 2460 | def get_hostname(self, fqdn=False, _resolve_ip=False): |
| 2461 | return self.metadata.get('hostname', 'localhost.localdomain') |
| 2462 | diff --git a/cloudinit/sources/DataSourceEc2.py b/cloudinit/sources/DataSourceEc2.py |
| 2463 | index 4ec9592..8e5f8ee 100644 |
| 2464 | --- a/cloudinit/sources/DataSourceEc2.py |
| 2465 | +++ b/cloudinit/sources/DataSourceEc2.py |
| 2466 | @@ -13,6 +13,8 @@ import time |
| 2467 | |
| 2468 | from cloudinit import ec2_utils as ec2 |
| 2469 | from cloudinit import log as logging |
| 2470 | +from cloudinit import net |
| 2471 | +from cloudinit.net import dhcp |
| 2472 | from cloudinit import sources |
| 2473 | from cloudinit import url_helper as uhelp |
| 2474 | from cloudinit import util |
| 2475 | @@ -20,8 +22,7 @@ from cloudinit import warnings |
| 2476 | |
| 2477 | LOG = logging.getLogger(__name__) |
| 2478 | |
| 2479 | -# Which version we are requesting of the ec2 metadata apis |
| 2480 | -DEF_MD_VERSION = '2009-04-04' |
| 2481 | +SKIP_METADATA_URL_CODES = frozenset([uhelp.NOT_FOUND]) |
| 2482 | |
| 2483 | STRICT_ID_PATH = ("datasource", "Ec2", "strict_id") |
| 2484 | STRICT_ID_DEFAULT = "warn" |
| 2485 | @@ -41,17 +42,28 @@ class Platforms(object): |
| 2486 | |
| 2487 | |
| 2488 | class DataSourceEc2(sources.DataSource): |
| 2489 | + |
| 2490 | # Default metadata urls that will be used if none are provided |
| 2491 | # They will be checked for 'resolveability' and some of the |
| 2492 | # following may be discarded if they do not resolve |
| 2493 | metadata_urls = ["http://169.254.169.254", "http://instance-data.:8773"] |
| 2494 | + |
| 2495 | + # The minimum supported metadata_version from the ec2 metadata apis |
| 2496 | + min_metadata_version = '2009-04-04' |
| 2497 | + |
| 2498 | + # Priority ordered list of additional metadata versions which will be tried |
| 2499 | + # for extended metadata content. IPv6 support comes in 2016-09-02 |
| 2500 | + extended_metadata_versions = ['2016-09-02'] |
| 2501 | + |
| 2502 | _cloud_platform = None |
| 2503 | |
| 2504 | + # Whether we want to get network configuration from the metadata service. |
| 2505 | + get_network_metadata = False |
| 2506 | + |
| 2507 | def __init__(self, sys_cfg, distro, paths): |
| 2508 | sources.DataSource.__init__(self, sys_cfg, distro, paths) |
| 2509 | self.metadata_address = None |
| 2510 | self.seed_dir = os.path.join(paths.seed_dir, "ec2") |
| 2511 | - self.api_ver = DEF_MD_VERSION |
| 2512 | |
| 2513 | def get_data(self): |
| 2514 | seed_ret = {} |
| 2515 | @@ -73,21 +85,27 @@ class DataSourceEc2(sources.DataSource): |
| 2516 | elif self.cloud_platform == Platforms.NO_EC2_METADATA: |
| 2517 | return False |
| 2518 | |
| 2519 | - try: |
| 2520 | - if not self.wait_for_metadata_service(): |
| 2521 | + if self.get_network_metadata: # Setup networking in init-local stage. |
| 2522 | + if util.is_FreeBSD(): |
| 2523 | + LOG.debug("FreeBSD doesn't support running dhclient with -sf") |
| 2524 | return False |
| 2525 | - start_time = time.time() |
| 2526 | - self.userdata_raw = \ |
| 2527 | - ec2.get_instance_userdata(self.api_ver, self.metadata_address) |
| 2528 | - self.metadata = ec2.get_instance_metadata(self.api_ver, |
| 2529 | - self.metadata_address) |
| 2530 | - LOG.debug("Crawl of metadata service took %.3f seconds", |
| 2531 | - time.time() - start_time) |
| 2532 | - return True |
| 2533 | - except Exception: |
| 2534 | - util.logexc(LOG, "Failed reading from metadata address %s", |
| 2535 | - self.metadata_address) |
| 2536 | - return False |
| 2537 | + dhcp_leases = dhcp.maybe_perform_dhcp_discovery() |
| 2538 | + if not dhcp_leases: |
| 2539 | + # DataSourceEc2Local failed in init-local stage. DataSourceEc2 |
| 2540 | + # will still run in init-network stage. |
| 2541 | + return False |
| 2542 | + dhcp_opts = dhcp_leases[-1] |
| 2543 | + net_params = {'interface': dhcp_opts.get('interface'), |
| 2544 | + 'ip': dhcp_opts.get('fixed-address'), |
| 2545 | + 'prefix_or_mask': dhcp_opts.get('subnet-mask'), |
| 2546 | + 'broadcast': dhcp_opts.get('broadcast-address'), |
| 2547 | + 'router': dhcp_opts.get('routers')} |
| 2548 | + with net.EphemeralIPv4Network(**net_params): |
| 2549 | + return util.log_time( |
| 2550 | + logfunc=LOG.debug, msg='Crawl of metadata service', |
| 2551 | + func=self._crawl_metadata) |
| 2552 | + else: |
| 2553 | + return self._crawl_metadata() |
| 2554 | |
| 2555 | @property |
| 2556 | def launch_index(self): |
| 2557 | @@ -95,6 +113,32 @@ class DataSourceEc2(sources.DataSource): |
| 2558 | return None |
| 2559 | return self.metadata.get('ami-launch-index') |
| 2560 | |
| 2561 | + def get_metadata_api_version(self): |
| 2562 | + """Get the best supported api version from the metadata service. |
| 2563 | + |
| 2564 | + Loop through all extended support metadata versions in order and |
| 2565 | + return the most-fully featured metadata api version discovered. |
| 2566 | + |
| 2567 | + If extended_metadata_versions aren't present, return the datasource's |
| 2568 | + min_metadata_version. |
| 2569 | + """ |
| 2570 | + # Assumes metadata service is already up |
| 2571 | + for api_ver in self.extended_metadata_versions: |
| 2572 | + url = '{0}/{1}/meta-data/instance-id'.format( |
| 2573 | + self.metadata_address, api_ver) |
| 2574 | + try: |
| 2575 | + resp = uhelp.readurl(url=url) |
| 2576 | + except uhelp.UrlError as e: |
| 2577 | + LOG.debug('url %s raised exception %s', url, e) |
| 2578 | + else: |
| 2579 | + if resp.code == 200: |
| 2580 | + LOG.debug('Found preferred metadata version %s', api_ver) |
| 2581 | + return api_ver |
| 2582 | + elif resp.code == 404: |
| 2583 | + msg = 'Metadata api version %s not present. Headers: %s' |
| 2584 | + LOG.debug(msg, api_ver, resp.headers) |
| 2585 | + return self.min_metadata_version |
| 2586 | + |
| 2587 | def get_instance_id(self): |
| 2588 | return self.metadata['instance-id'] |
| 2589 | |
| 2590 | @@ -138,21 +182,22 @@ class DataSourceEc2(sources.DataSource): |
| 2591 | urls = [] |
| 2592 | url2base = {} |
| 2593 | for url in mdurls: |
| 2594 | - cur = "%s/%s/meta-data/instance-id" % (url, self.api_ver) |
| 2595 | + cur = '{0}/{1}/meta-data/instance-id'.format( |
| 2596 | + url, self.min_metadata_version) |
| 2597 | urls.append(cur) |
| 2598 | url2base[cur] = url |
| 2599 | |
| 2600 | start_time = time.time() |
| 2601 | - url = uhelp.wait_for_url(urls=urls, max_wait=max_wait, |
| 2602 | - timeout=timeout, status_cb=LOG.warn) |
| 2603 | + url = uhelp.wait_for_url( |
| 2604 | + urls=urls, max_wait=max_wait, timeout=timeout, status_cb=LOG.warn) |
| 2605 | |
| 2606 | if url: |
| 2607 | - LOG.debug("Using metadata source: '%s'", url2base[url]) |
| 2608 | + self.metadata_address = url2base[url] |
| 2609 | + LOG.debug("Using metadata source: '%s'", self.metadata_address) |
| 2610 | else: |
| 2611 | LOG.critical("Giving up on md from %s after %s seconds", |
| 2612 | urls, int(time.time() - start_time)) |
| 2613 | |
| 2614 | - self.metadata_address = url2base.get(url) |
| 2615 | return bool(url) |
| 2616 | |
| 2617 | def device_name_to_device(self, name): |
| 2618 | @@ -234,6 +279,37 @@ class DataSourceEc2(sources.DataSource): |
| 2619 | util.get_cfg_by_path(cfg, STRICT_ID_PATH, STRICT_ID_DEFAULT), |
| 2620 | cfg) |
| 2621 | |
| 2622 | + def _crawl_metadata(self): |
| 2623 | + """Crawl metadata service when available. |
| 2624 | + |
| 2625 | + @returns: True on success, False otherwise. |
| 2626 | + """ |
| 2627 | + if not self.wait_for_metadata_service(): |
| 2628 | + return False |
| 2629 | + api_version = self.get_metadata_api_version() |
| 2630 | + try: |
| 2631 | + self.userdata_raw = ec2.get_instance_userdata( |
| 2632 | + api_version, self.metadata_address) |
| 2633 | + self.metadata = ec2.get_instance_metadata( |
| 2634 | + api_version, self.metadata_address) |
| 2635 | + except Exception: |
| 2636 | + util.logexc( |
| 2637 | + LOG, "Failed reading from metadata address %s", |
| 2638 | + self.metadata_address) |
| 2639 | + return False |
| 2640 | + return True |
| 2641 | + |
| 2642 | + |
| 2643 | +class DataSourceEc2Local(DataSourceEc2): |
| 2644 | + """Datasource run at init-local which sets up network to query metadata. |
| 2645 | + |
| 2646 | + In init-local, no network is available. This subclass sets up minimal |
| 2647 | + networking with dhclient on a viable nic so that it can talk to the |
| 2648 | + metadata service. If the metadata service provides network configuration |
| 2649 | + then render the network configuration for that instance based on metadata. |
| 2650 | + """ |
| 2651 | + get_network_metadata = True # Get metadata network config if present |
| 2652 | + |
| 2653 | |
| 2654 | def read_strict_mode(cfgval, default): |
| 2655 | try: |
| 2656 | @@ -349,6 +425,7 @@ def _collect_platform_data(): |
| 2657 | |
| 2658 | # Used to match classes to dependencies |
| 2659 | datasources = [ |
| 2660 | + (DataSourceEc2Local, (sources.DEP_FILESYSTEM,)), # Run at init-local |
| 2661 | (DataSourceEc2, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK)), |
| 2662 | ] |
| 2663 | |
| 2664 | diff --git a/cloudinit/sources/DataSourceOVF.py b/cloudinit/sources/DataSourceOVF.py |
| 2665 | index f20c9a6..73d3877 100644 |
| 2666 | --- a/cloudinit/sources/DataSourceOVF.py |
| 2667 | +++ b/cloudinit/sources/DataSourceOVF.py |
| 2668 | @@ -25,6 +25,8 @@ from cloudinit.sources.helpers.vmware.imc.config_file \ |
| 2669 | import ConfigFile |
| 2670 | from cloudinit.sources.helpers.vmware.imc.config_nic \ |
| 2671 | import NicConfigurator |
| 2672 | +from cloudinit.sources.helpers.vmware.imc.config_passwd \ |
| 2673 | + import PasswordConfigurator |
| 2674 | from cloudinit.sources.helpers.vmware.imc.guestcust_error \ |
| 2675 | import GuestCustErrorEnum |
| 2676 | from cloudinit.sources.helpers.vmware.imc.guestcust_event \ |
| 2677 | @@ -117,6 +119,8 @@ class DataSourceOVF(sources.DataSource): |
| 2678 | (md, ud, cfg) = read_vmware_imc(conf) |
| 2679 | dirpath = os.path.dirname(vmwareImcConfigFilePath) |
| 2680 | nics = get_nics_to_enable(dirpath) |
| 2681 | + markerid = conf.marker_id |
| 2682 | + markerexists = check_marker_exists(markerid) |
| 2683 | except Exception as e: |
| 2684 | LOG.debug("Error parsing the customization Config File") |
| 2685 | LOG.exception(e) |
| 2686 | @@ -127,7 +131,6 @@ class DataSourceOVF(sources.DataSource): |
| 2687 | return False |
| 2688 | finally: |
| 2689 | util.del_dir(os.path.dirname(vmwareImcConfigFilePath)) |
| 2690 | - |
| 2691 | try: |
| 2692 | LOG.debug("Applying the Network customization") |
| 2693 | nicConfigurator = NicConfigurator(conf.nics) |
| 2694 | @@ -140,6 +143,35 @@ class DataSourceOVF(sources.DataSource): |
| 2695 | GuestCustEventEnum.GUESTCUST_EVENT_NETWORK_SETUP_FAILED) |
| 2696 | enable_nics(nics) |
| 2697 | return False |
| 2698 | + if markerid and not markerexists: |
| 2699 | + LOG.debug("Applying password customization") |
| 2700 | + pwdConfigurator = PasswordConfigurator() |
| 2701 | + adminpwd = conf.admin_password |
| 2702 | + try: |
| 2703 | + resetpwd = conf.reset_password |
| 2704 | + if adminpwd or resetpwd: |
| 2705 | + pwdConfigurator.configure(adminpwd, resetpwd, |
| 2706 | + self.distro) |
| 2707 | + else: |
| 2708 | + LOG.debug("Changing password is not needed") |
| 2709 | + except Exception as e: |
| 2710 | + LOG.debug("Error applying Password Configuration: %s", e) |
| 2711 | + set_customization_status( |
| 2712 | + GuestCustStateEnum.GUESTCUST_STATE_RUNNING, |
| 2713 | + GuestCustEventEnum.GUESTCUST_EVENT_CUSTOMIZE_FAILED) |
| 2714 | + enable_nics(nics) |
| 2715 | + return False |
| 2716 | + if markerid: |
| 2717 | + LOG.debug("Handle marker creation") |
| 2718 | + try: |
| 2719 | + setup_marker_files(markerid) |
| 2720 | + except Exception as e: |
| 2721 | + LOG.debug("Error creating marker files: %s", e) |
| 2722 | + set_customization_status( |
| 2723 | + GuestCustStateEnum.GUESTCUST_STATE_RUNNING, |
| 2724 | + GuestCustEventEnum.GUESTCUST_EVENT_CUSTOMIZE_FAILED) |
| 2725 | + enable_nics(nics) |
| 2726 | + return False |
| 2727 | |
| 2728 | vmwarePlatformFound = True |
| 2729 | set_customization_status( |
| 2730 | @@ -445,4 +477,33 @@ datasources = ( |
| 2731 | def get_datasource_list(depends): |
| 2732 | return sources.list_from_depends(depends, datasources) |
| 2733 | |
| 2734 | + |
| 2735 | +# To check if marker file exists |
| 2736 | +def check_marker_exists(markerid): |
| 2737 | + """ |
| 2738 | + Check the existence of a marker file. |
| 2739 | + Presence of marker file determines whether a certain code path is to be |
| 2740 | + executed. It is needed for partial guest customization in VMware. |
| 2741 | + """ |
| 2742 | + if not markerid: |
| 2743 | + return False |
| 2744 | + markerfile = "/.markerfile-" + markerid |
| 2745 | + if os.path.exists(markerfile): |
| 2746 | + return True |
| 2747 | + return False |
| 2748 | + |
| 2749 | + |
| 2750 | +# Create a marker file |
| 2751 | +def setup_marker_files(markerid): |
| 2752 | + """ |
| 2753 | + Create a new marker file. |
| 2754 | + Marker files are unique to a full customization workflow in VMware |
| 2755 | + environment. |
| 2756 | + """ |
| 2757 | + if not markerid: |
| 2758 | + return |
| 2759 | + markerfile = "/.markerfile-" + markerid |
| 2760 | + util.del_file("/.markerfile-*.txt") |
| 2761 | + open(markerfile, 'w').close() |
| 2762 | + |
| 2763 | # vi: ts=4 expandtab |
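The marker helpers at the end of this hunk gate partial guest customization on a per-id marker file. A sketch of the same check/create flow, using a temporary directory in place of the filesystem root that the real code writes to (the `base` parameter is an adaptation for illustration):

```python
import os
import tempfile


def marker_path(base, markerid):
    return os.path.join(base, '.markerfile-' + markerid)


def check_marker_exists(base, markerid):
    """True when the marker for this customization id already exists."""
    return bool(markerid) and os.path.exists(marker_path(base, markerid))


def setup_marker_files(base, markerid):
    """Create the marker; a later run with the same id skips customization."""
    if not markerid:
        return
    open(marker_path(base, markerid), 'w').close()


base = tempfile.mkdtemp()
seen_before = check_marker_exists(base, 'abc123')  # first run: no marker yet
setup_marker_files(base, 'abc123')
```

This mirrors how `DataSourceOVF.get_data` above only applies password customization when `markerid and not markerexists`, then records the marker afterward.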
| 2764 | diff --git a/cloudinit/sources/helpers/vmware/imc/config.py b/cloudinit/sources/helpers/vmware/imc/config.py |
| 2765 | index 9a5e3a8..49d441d 100644 |
| 2766 | --- a/cloudinit/sources/helpers/vmware/imc/config.py |
| 2767 | +++ b/cloudinit/sources/helpers/vmware/imc/config.py |
| 2768 | @@ -5,6 +5,7 @@ |
| 2769 | # |
| 2770 | # This file is part of cloud-init. See LICENSE file for license information. |
| 2771 | |
| 2772 | + |
| 2773 | from .nic import Nic |
| 2774 | |
| 2775 | |
| 2776 | @@ -14,13 +15,16 @@ class Config(object): |
| 2777 | Specification file. |
| 2778 | """ |
| 2779 | |
| 2780 | + CUSTOM_SCRIPT = 'CUSTOM-SCRIPT|SCRIPT-NAME' |
| 2781 | DNS = 'DNS|NAMESERVER|' |
| 2782 | - SUFFIX = 'DNS|SUFFIX|' |
| 2783 | + DOMAINNAME = 'NETWORK|DOMAINNAME' |
| 2784 | + HOSTNAME = 'NETWORK|HOSTNAME' |
| 2785 | + MARKERID = 'MISC|MARKER-ID' |
| 2786 | PASS = 'PASSWORD|-PASS' |
| 2787 | + RESETPASS = 'PASSWORD|RESET' |
| 2788 | + SUFFIX = 'DNS|SUFFIX|' |
| 2789 | TIMEZONE = 'DATETIME|TIMEZONE' |
| 2790 | UTC = 'DATETIME|UTC' |
| 2791 | - HOSTNAME = 'NETWORK|HOSTNAME' |
| 2792 | - DOMAINNAME = 'NETWORK|DOMAINNAME' |
| 2793 | |
| 2794 | def __init__(self, configFile): |
| 2795 | self._configFile = configFile |
| 2796 | @@ -82,4 +86,18 @@ class Config(object): |
| 2797 | |
| 2798 | return res |
| 2799 | |
| 2800 | + @property |
| 2801 | + def reset_password(self): |
| 2802 | + """Retreives if the root password needs to be reset.""" |
| 2803 | + resetPass = self._configFile.get(Config.RESETPASS, 'no') |
| 2804 | + resetPass = resetPass.lower() |
| 2805 | + if resetPass not in ('yes', 'no'): |
| 2806 | + raise ValueError('ResetPassword value should be yes/no') |
| 2807 | + return resetPass == 'yes' |
| 2808 | + |
| 2809 | + @property |
| 2810 | + def marker_id(self): |
| 2811 | + """Returns marker id.""" |
| 2812 | + return self._configFile.get(Config.MARKERID, None) |
| 2813 | + |
| 2814 | # vi: ts=4 expandtab |
| 2815 | diff --git a/cloudinit/sources/helpers/vmware/imc/config_passwd.py b/cloudinit/sources/helpers/vmware/imc/config_passwd.py |
| 2816 | new file mode 100644 |
| 2817 | index 0000000..75cfbaa |
| 2818 | --- /dev/null |
| 2819 | +++ b/cloudinit/sources/helpers/vmware/imc/config_passwd.py |
| 2820 | @@ -0,0 +1,67 @@ |
| 2821 | +# Copyright (C) 2016 Canonical Ltd. |
| 2822 | +# Copyright (C) 2016 VMware INC. |
| 2823 | +# |
| 2824 | +# Author: Maitreyee Saikia <msaikia@vmware.com> |
| 2825 | +# |
| 2826 | +# This file is part of cloud-init. See LICENSE file for license information. |
| 2827 | + |
| 2828 | + |
| 2829 | +import logging |
| 2830 | +import os |
| 2831 | + |
| 2832 | +from cloudinit import util |
| 2833 | + |
| 2834 | +LOG = logging.getLogger(__name__) |
| 2835 | + |
| 2836 | + |
| 2837 | +class PasswordConfigurator(object): |
| 2838 | + """ |
| 2839 | + Class for changing configurations related to passwords in a VM. Includes |
| 2840 | + setting and expiring passwords. |
| 2841 | + """ |
| 2842 | + def configure(self, passwd, resetPasswd, distro): |
| 2843 | + """ |
| 2844 | + Main method to perform all functionalities based on configuration file |
| 2845 | + inputs. |
| 2846 | + @param passwd: encoded admin password. |
| 2847 | + @param resetPasswd: boolean to determine if password needs to be reset. |
| 2848 | + @return cfg: dict to be used by cloud-init set_passwd code. |
| 2849 | + """ |
| 2850 | + LOG.info('Starting password configuration') |
| 2851 | + if passwd: |
| 2852 | + passwd = util.b64d(passwd) |
| 2853 | + allRootUsers = [] |
| 2854 | + for line in open('/etc/passwd', 'r'): |
| 2855 | + if line.split(':')[2] == '0': |
| 2856 | + allRootUsers.append(line.split(':')[0]) |
| 2857 | + # read shadow file and check, for each user, if it is uid 0 or root. |
| 2858 | + uidUsersList = [] |
| 2859 | + for line in open('/etc/shadow', 'r'): |
| 2860 | + user = line.split(':')[0] |
| 2861 | + if user in allRootUsers: |
| 2862 | + uidUsersList.append(user) |
| 2863 | + if passwd: |
| 2864 | + LOG.info('Setting admin password') |
| 2865 | + distro.set_passwd('root', passwd) |
| 2866 | + if resetPasswd: |
| 2867 | + self.reset_password(uidUsersList) |
| 2868 | + LOG.info('Configure Password completed!') |
| 2869 | + |
| 2870 | + def reset_password(self, uidUserList): |
| 2871 | + """ |
| 2872 | + Method to reset password. Use passwd --expire command. Use chage if |
| 2873 | + not succeeded using passwd command. Log failure message otherwise. |
| 2874 | + @param: list of users for which to expire password. |
| 2875 | + """ |
| 2876 | + LOG.info('Expiring password.') |
| 2877 | + for user in uidUserList: |
| 2878 | + try: |
| 2879 | + out, err = util.subp(['passwd', '--expire', user]) |
| 2880 | + except util.ProcessExecutionError as e: |
| 2881 | + if os.path.exists('/usr/bin/chage'): |
| 2882 | +                    out, err = util.subp(['chage', '-d', '0', user]) |
| 2883 | + else: |
| 2884 | + LOG.warning('Failed to expire password for %s with error: ' |
| 2885 | + '%s', user, e) |
| 2886 | + |
| 2887 | +# vi: ts=4 expandtab |
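The uid-0 scan in ``configure`` above can be factored into a small helper for illustration. This is a hypothetical sketch (``uid0_users`` is not part of cloud-init) that takes passwd-format lines as input, so it can be exercised without reading /etc/passwd:

```python
def uid0_users(passwd_lines):
    """Return login names whose numeric uid (third colon field) is 0.

    Mirrors the /etc/passwd scan in PasswordConfigurator.configure, but
    operates on supplied lines so it is testable without system files.
    """
    users = []
    for line in passwd_lines:
        fields = line.strip().split(':')
        if len(fields) > 2 and fields[2] == '0':
            users.append(fields[0])
    return users
```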
| 2888 | diff --git a/debian/changelog b/debian/changelog |
| 2889 | index c5f5136..ea6872e 100644 |
| 2890 | --- a/debian/changelog |
| 2891 | +++ b/debian/changelog |
| 2892 | @@ -1,3 +1,31 @@ |
| 2893 | +cloud-init (0.7.9-243-ge74d775-0ubuntu1) artful; urgency=medium |
| 2894 | + |
| 2895 | + * New upstream snapshot. |
| 2896 | + - tools: Add tooling for basic cloud-init performance analysis. |
| 2897 | + [Chad Smith] (LP: #1709761) |
| 2898 | + - network: add v2 passthrough and fix parsing v2 config with bonds/bridge |
| 2899 | + params [Ryan Harper] (LP: #1709180) |
| 2900 | + - doc: update capabilities with features available, link doc reference, |
| 2901 | + cli example [Ryan Harper] |
| 2902 | + - vcloud directory: Guest Customization support for passwords |
| 2903 | + [Maitreyee Saikia] |
| 2904 | + - ec2: Allow Ec2 to run in init-local using dhclient in a sandbox. |
| 2905 | + [Chad Smith] (LP: #1709772) |
| 2906 | + - cc_ntp: fallback on timesyncd configuration if ntp is not installable |
| 2907 | + [Ryan Harper] (LP: #1686485) |
| 2908 | + - net: Reduce duplicate code. Have get_interfaces_by_mac use |
| 2909 | + get_interfaces. |
| 2910 | + - tests: Fix build tree integration tests [Joshua Powers] |
| 2911 | + - sysconfig: Dont repeat header when rendering resolv.conf |
| 2912 | + [Ryan Harper] (LP: #1701420) |
| 2913 | + - archlinux: Fix bug with empty dns, do not render 'lo' devices. |
| 2914 | + (LP: #1663045, #1706593) |
| 2915 | + - cloudinit.net: add initialize_network_device function and tests |
| 2916 | + [Chad Smith] |
| 2917 | + - makefile: fix ci-deps-ubuntu target [Chad Smith] |
| 2918 | + |
| 2919 | + -- Ryan Harper <ryan.harper@canonical.com> Mon, 21 Aug 2017 15:09:36 -0500 |
| 2920 | + |
| 2921 | cloud-init (0.7.9-231-g80bf98b9-0ubuntu1) artful; urgency=medium |
| 2922 | |
| 2923 | * New upstream snapshot. |
| 2924 | diff --git a/doc/rtd/index.rst b/doc/rtd/index.rst |
| 2925 | index a691103..de67f36 100644 |
| 2926 | --- a/doc/rtd/index.rst |
| 2927 | +++ b/doc/rtd/index.rst |
| 2928 | @@ -40,6 +40,7 @@ initialization of a cloud instance. |
| 2929 | topics/merging.rst |
| 2930 | topics/network-config.rst |
| 2931 | topics/vendordata.rst |
| 2932 | + topics/debugging.rst |
| 2933 | topics/moreinfo.rst |
| 2934 | topics/hacking.rst |
| 2935 | topics/tests.rst |
| 2936 | diff --git a/doc/rtd/topics/capabilities.rst b/doc/rtd/topics/capabilities.rst |
| 2937 | index 2c8770b..b8034b0 100644 |
| 2938 | --- a/doc/rtd/topics/capabilities.rst |
| 2939 | +++ b/doc/rtd/topics/capabilities.rst |
| 2940 | @@ -31,19 +31,49 @@ support. This allows other applications to detect what features the installed |
| 2941 | cloud-init supports without having to parse its version number. If present, |
| 2942 | this list of features will be located at ``cloudinit.version.FEATURES``. |
| 2943 | |
| 2944 | -When checking if cloud-init supports a feature, in order to not break the |
| 2945 | -detection script on older versions of cloud-init without the features list, a |
| 2946 | -script similar to the following should be used. Note that this will exit 0 if |
| 2947 | -the feature is supported and 1 otherwise:: |
| 2948 | +Currently defined feature names include: |
| 2949 | |
| 2950 | - import sys |
| 2951 | - from cloudinit import version |
| 2952 | - sys.exit('<FEATURE_NAME>' not in getattr(version, 'FEATURES', [])) |
| 2953 | + - ``NETWORK_CONFIG_V1`` support for v1 networking configuration, |
| 2954 | + see :ref:`network_config_v1` documentation for examples. |
| 2955 | + - ``NETWORK_CONFIG_V2`` support for v2 networking configuration, |
| 2956 | + see :ref:`network_config_v2` documentation for examples. |
| 2957 | |
| 2958 | -Currently defined feature names include: |
| 2959 | |
| 2960 | - - ``NETWORK_CONFIG_V1`` support for v1 networking configuration, see curtin |
| 2961 | - documentation for examples. |
| 2962 | +CLI Interface: |
| 2963 | + |
| 2964 | +``cloud-init features`` will print out each feature supported. If cloud-init |
| 2965 | +does not have the features subcommand, it also does not support any features |
| 2966 | +described in this document. |
| 2967 | + |
| 2968 | +.. code-block:: bash |
| 2969 | + |
| 2970 | + % cloud-init --help |
| 2971 | + usage: cloud-init [-h] [--version] [--file FILES] [--debug] [--force] |
| 2972 | + {init,modules,query,single,dhclient-hook,features} ... |
| 2973 | + |
| 2974 | + positional arguments: |
| 2975 | + {init,modules,query,single,dhclient-hook,features} |
| 2976 | + init initializes cloud-init and performs initial modules |
| 2977 | + modules activates modules using a given configuration key |
| 2978 | + query query information stored in cloud-init |
| 2979 | + single run a single module |
| 2980 | +    dhclient-hook        run the dhclient hook to record network info |
| 2981 | + features list defined features |
| 2982 | + |
| 2983 | + optional arguments: |
| 2984 | + -h, --help show this help message and exit |
| 2985 | + --version, -v show program's version number and exit |
| 2986 | + --file FILES, -f FILES |
| 2987 | + additional yaml configuration files to use |
| 2988 | + --debug, -d show additional pre-action logging (default: False) |
| 2989 | + --force force running even if no datasource is found (use at |
| 2990 | + your own risk) |
| 2991 | + |
| 2992 | + |
| 2993 | + % cloud-init features |
| 2994 | + NETWORK_CONFIG_V1 |
| 2995 | + NETWORK_CONFIG_V2 |
| 2996 | + |
| 2997 | |
| 2998 | .. _Cloud-init: https://launchpad.net/cloud-init |
| 2999 | .. vi: textwidth=78 |
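For scripted feature detection (the pattern the removed inline script used), a hedged Python sketch; ``fake_version`` stands in for ``cloudinit.version`` so the example is self-contained and does not require cloud-init to be installed:

```python
import types


def supports(feature, version_module):
    """True if `feature` appears in the module's FEATURES list.

    getattr with a default keeps this safe on cloud-init versions that
    predate the FEATURES list.
    """
    return feature in getattr(version_module, 'FEATURES', [])


# Stand-in for `from cloudinit import version`, for illustration only.
fake_version = types.SimpleNamespace(
    FEATURES=['NETWORK_CONFIG_V1', 'NETWORK_CONFIG_V2'])
```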
| 3000 | diff --git a/doc/rtd/topics/debugging.rst b/doc/rtd/topics/debugging.rst |
| 3001 | new file mode 100644 |
| 3002 | index 0000000..4e43dd5 |
| 3003 | --- /dev/null |
| 3004 | +++ b/doc/rtd/topics/debugging.rst |
| 3005 | @@ -0,0 +1,146 @@ |
| 3006 | +******************************** |
| 3007 | +Testing and debugging cloud-init |
| 3008 | +******************************** |
| 3009 | + |
| 3010 | +Overview |
| 3011 | +======== |
| 3012 | +This topic discusses general approaches for testing and debugging cloud-init |
| 3013 | +on deployed instances. |
| 3014 | + |
| 3015 | + |
| 3016 | +Boot Time Analysis - cloud-init analyze |
| 3017 | +======================================= |
| 3018 | +Occasionally instances boot or configure more slowly than expected, and |
| 3019 | +cloud-init packages a simple facility to inspect which operations took |
| 3020 | +cloud-init the longest during boot and setup. |
| 3021 | + |
| 3022 | +The **cloud-init** command provides an **analyze** subcommand which parses any |
| 3023 | +cloud-init.log file into formatted and sorted events. It allows for detailed |
| 3024 | +analysis of the most costly cloud-init operations, to determine the long pole |
| 3025 | +in cloud-init configuration and setup. These subcommands default to reading |
| 3026 | +/var/log/cloud-init.log. |
| 3027 | + |
| 3028 | +* ``analyze show`` Parse and organize cloud-init.log events by stage and |
| 3029 | +  include each sub-stage granularity with time delta reports. |
| 3030 | + |
| 3031 | +.. code-block:: bash |
| 3032 | + |
| 3033 | + $ cloud-init analyze show -i my-cloud-init.log |
| 3034 | + -- Boot Record 01 -- |
| 3035 | + The total time elapsed since completing an event is printed after the "@" |
| 3036 | + character. |
| 3037 | + The time the event takes is printed after the "+" character. |
| 3038 | + |
| 3039 | + Starting stage: modules-config |
| 3040 | + |`->config-emit_upstart ran successfully @05.47600s +00.00100s |
| 3041 | + |`->config-snap_config ran successfully @05.47700s +00.00100s |
| 3042 | + |`->config-ssh-import-id ran successfully @05.47800s +00.00200s |
| 3043 | + |`->config-locale ran successfully @05.48000s +00.00100s |
| 3044 | + ... |
| 3045 | + |
| 3046 | + |
| 3047 | +* ``analyze dump`` Parse cloud-init.log into event records and return a list of |
| 3048 | +  dictionaries that can be consumed for other reporting needs. |
| 3049 | + |
| 3050 | +.. code-block:: bash |
| 3051 | + |
| 3052 | +    $ cloud-init analyze dump -i my-cloud-init.log |
| 3053 | + [ |
| 3054 | + { |
| 3055 | + "description": "running config modules", |
| 3056 | + "event_type": "start", |
| 3057 | + "name": "modules-config", |
| 3058 | + "origin": "cloudinit", |
| 3059 | + "timestamp": 1510807493.0 |
| 3060 | + },... |
| 3061 | + |
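Since ``analyze dump`` emits paired ``start``/``finish`` records, per-event durations can be computed from the dumped list. A minimal sketch, assuming the record shape shown above (``stage_durations`` is not part of cloud-init):

```python
def stage_durations(events):
    """Map event name -> elapsed seconds between its start and finish records.

    Assumes each record has 'name', 'event_type' and 'timestamp' keys, as in
    the `analyze dump` output above.
    """
    starts = {}
    durations = {}
    for event in events:
        if event['event_type'] == 'start':
            starts[event['name']] = event['timestamp']
        elif event['event_type'] == 'finish' and event['name'] in starts:
            durations[event['name']] = (
                event['timestamp'] - starts[event['name']])
    return durations
```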
| 3062 | +* ``analyze blame`` Parse cloud-init.log into event records and sort them based |
| 3063 | +  on highest time cost for quick assessment of areas of cloud-init that may |
| 3064 | +  need improvement. |
| 3065 | + |
| 3066 | +.. code-block:: bash |
| 3067 | + |
| 3068 | + $ cloud-init analyze blame -i my-cloud-init.log |
| 3069 | + -- Boot Record 11 -- |
| 3070 | + 00.01300s (modules-final/config-scripts-per-boot) |
| 3071 | + 00.00400s (modules-final/config-final-message) |
| 3072 | + 00.00100s (modules-final/config-rightscale_userdata) |
| 3073 | + ... |
| 3074 | + |
| 3075 | + |
| 3076 | +Analyze quickstart - LXC |
| 3077 | +--------------------------- |
| 3078 | +To quickly obtain a cloud-init log, try using LXC on any Ubuntu system: |
| 3079 | + |
| 3080 | +.. code-block:: bash |
| 3081 | + |
| 3082 | + $ lxc init ubuntu-daily:xenial x1 |
| 3083 | + $ lxc start x1 |
| 3084 | + # Take lxc's cloud-init.log and pipe it to the analyzer |
| 3085 | + $ lxc file pull x1/var/log/cloud-init.log - | cloud-init analyze dump -i - |
| 3086 | + $ lxc file pull x1/var/log/cloud-init.log - | \ |
| 3087 | + python3 -m cloudinit.analyze dump -i - |
| 3088 | + |
| 3089 | +Analyze quickstart - KVM |
| 3090 | +--------------------------- |
| 3091 | +To quickly analyze a cloud-init log from a KVM instance: |
| 3092 | + |
| 3093 | +1. Download the current cloud image: |
| 3094 | +   ``wget https://cloud-images.ubuntu.com/daily/server/xenial/current/xenial-server-cloudimg-amd64.img`` |
| 3095 | +2. Create a snapshot image to preserve the original cloud-image |
| 3096 | + |
| 3097 | +.. code-block:: bash |
| 3098 | + |
| 3099 | + $ qemu-img create -b xenial-server-cloudimg-amd64.img -f qcow2 \ |
| 3100 | + test-cloudinit.qcow2 |
| 3101 | + |
| 3102 | +3. Create a seed image with metadata using `cloud-localds` |
| 3103 | + |
| 3104 | +.. code-block:: bash |
| 3105 | + |
| 3106 | + $ cat > user-data <<EOF |
| 3107 | + #cloud-config |
| 3108 | + password: passw0rd |
| 3109 | + chpasswd: { expire: False } |
| 3110 | + EOF |
| 3111 | + $ cloud-localds my-seed.img user-data |
| 3112 | + |
| 3113 | +4. Launch your modified VM |
| 3114 | + |
| 3115 | +.. code-block:: bash |
| 3116 | + |
| 3117 | + $ kvm -m 512 -net nic -net user -redir tcp:2222::22 \ |
| 3118 | + -drive file=test-cloudinit.qcow2,if=virtio,format=qcow2 \ |
| 3119 | + -drive file=my-seed.img,if=virtio,format=raw |
| 3120 | + |
| 3121 | +5. Analyze the boot (blame, dump, show) |
| 3122 | + |
| 3123 | +.. code-block:: bash |
| 3124 | + |
| 3125 | + $ ssh -p 2222 ubuntu@localhost 'cat /var/log/cloud-init.log' | \ |
| 3126 | + cloud-init analyze blame -i - |
| 3127 | + |
| 3128 | + |
| 3129 | +Running single cloud config modules |
| 3130 | +=================================== |
| 3131 | +The ``cloud-init single`` subcommand is not called by the init system. It can |
| 3132 | +be called manually after boot to load the configured datasource and run a |
| 3133 | +single cloud-config module once using cached userdata and metadata. Each |
| 3134 | +cloud-config module has a module FREQUENCY configured: PER_INSTANCE, PER_BOOT, |
| 3135 | +PER_ONCE or PER_ALWAYS. When a module is run by cloud-init, it stores a |
| 3136 | +semaphore file in |
| 3137 | +``/var/lib/cloud/instance/sem/config_<module_name>.<frequency>`` which marks |
| 3138 | +when the module last successfully ran. Presence of this semaphore file |
| 3139 | +prevents a module from running again if it has already been run. To ensure that |
| 3140 | +a module is run again, the desired frequency can be overridden on the |
| 3141 | +commandline: |
| 3142 | + |
| 3143 | +.. code-block:: bash |
| 3144 | + |
| 3145 | + $ sudo cloud-init single --name cc_ssh --frequency always |
| 3146 | + ... |
| 3147 | + Generating public/private ed25519 key pair |
| 3148 | + ... |
| 3149 | + |
| 3150 | +Inspect cloud-init.log for output of what operations were performed as a |
| 3151 | +result. |
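The semaphore naming scheme described above can be inspected programmatically. A hedged sketch (these helpers are not part of cloud-init; the ``instance`` frequency suffix in the test is an assumed example):

```python
import glob
import os


def parse_semaphore(filename):
    """Split a 'config_<module_name>.<frequency>' file name into its parts."""
    stem = filename[len('config_'):]
    module, _, frequency = stem.rpartition('.')
    return module, frequency


def ran_config_modules(sem_dir='/var/lib/cloud/instance/sem'):
    """List (module, frequency) pairs recorded as semaphore files."""
    return [parse_semaphore(os.path.basename(path))
            for path in sorted(glob.glob(os.path.join(sem_dir, 'config_*')))]
```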
| 3152 | diff --git a/setup.py b/setup.py |
| 3153 | index b1bde43..5c65c7f 100755 |
| 3154 | --- a/setup.py |
| 3155 | +++ b/setup.py |
| 3156 | @@ -240,7 +240,7 @@ setuptools.setup( |
| 3157 | author='Scott Moser', |
| 3158 | author_email='scott.moser@canonical.com', |
| 3159 | url='http://launchpad.net/cloud-init/', |
| 3160 | - packages=setuptools.find_packages(exclude=['tests']), |
| 3161 | + packages=setuptools.find_packages(exclude=['tests.*', '*.tests', 'tests']), |
| 3162 | scripts=['tools/cloud-init-per'], |
| 3163 | license='Dual-licensed under GPLv3 or Apache 2.0', |
| 3164 | data_files=data_files, |
| 3165 | diff --git a/templates/timesyncd.conf.tmpl b/templates/timesyncd.conf.tmpl |
| 3166 | new file mode 100644 |
| 3167 | index 0000000..6b98301 |
| 3168 | --- /dev/null |
| 3169 | +++ b/templates/timesyncd.conf.tmpl |
| 3170 | @@ -0,0 +1,8 @@ |
| 3171 | +## template:jinja |
| 3172 | +# cloud-init generated file |
| 3173 | +# See timesyncd.conf(5) for details. |
| 3174 | + |
| 3175 | +[Time] |
| 3176 | +{% if servers or pools -%} |
| 3177 | +NTP={% for host in servers|list + pools|list %}{{ host }} {% endfor -%} |
| 3178 | +{% endif -%} |
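The Jinja logic in this template can be emulated in plain Python to see what the timesyncd fallback would render. This is only a sketch of the template's behavior; the real renderer uses cloud-init's jinja templating:

```python
def render_timesyncd(servers=(), pools=()):
    """Emulate templates/timesyncd.conf.tmpl: one NTP= line if any hosts."""
    lines = ['# cloud-init generated file',
             '# See timesyncd.conf(5) for details.',
             '',
             '[Time]']
    hosts = list(servers) + list(pools)
    if hosts:
        # The template emits each host followed by a space.
        lines.append('NTP=' + ''.join(host + ' ' for host in hosts))
    return '\n'.join(lines) + '\n'
```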
| 3179 | diff --git a/tests/cloud_tests/bddeb.py b/tests/cloud_tests/bddeb.py |
| 3180 | index 53dbf74..fe80535 100644 |
| 3181 | --- a/tests/cloud_tests/bddeb.py |
| 3182 | +++ b/tests/cloud_tests/bddeb.py |
| 3183 | @@ -11,7 +11,7 @@ from tests.cloud_tests import (config, LOG) |
| 3184 | from tests.cloud_tests import (platforms, images, snapshots, instances) |
| 3185 | from tests.cloud_tests.stage import (PlatformComponent, run_stage, run_single) |
| 3186 | |
| 3187 | -build_deps = ['devscripts', 'equivs', 'git', 'tar'] |
| 3188 | +pre_reqs = ['devscripts', 'equivs', 'git', 'tar'] |
| 3189 | |
| 3190 | |
| 3191 | def _out(cmd_res): |
| 3192 | @@ -26,13 +26,10 @@ def build_deb(args, instance): |
| 3193 | @return_value: tuple of results and fail count |
| 3194 | """ |
| 3195 | # update remote system package list and install build deps |
| 3196 | - LOG.debug('installing build deps') |
| 3197 | - pkgs = ' '.join(build_deps) |
| 3198 | + LOG.debug('installing pre-reqs') |
| 3199 | + pkgs = ' '.join(pre_reqs) |
| 3200 | cmd = 'apt-get update && apt-get install --yes {}'.format(pkgs) |
| 3201 | instance.execute(['/bin/sh', '-c', cmd]) |
| 3202 | - # TODO Remove this call once we have a ci-deps Makefile target |
| 3203 | - instance.execute(['mk-build-deps', '--install', '-t', |
| 3204 | - 'apt-get --no-install-recommends --yes', 'cloud-init']) |
| 3205 | |
| 3206 | # local tmpfile that must be deleted |
| 3207 | local_tarball = tempfile.NamedTemporaryFile().name |
| 3208 | @@ -40,7 +37,7 @@ def build_deb(args, instance): |
| 3209 | # paths to use in remote system |
| 3210 | output_link = '/root/cloud-init_all.deb' |
| 3211 | remote_tarball = _out(instance.execute(['mktemp'])) |
| 3212 | - extract_dir = _out(instance.execute(['mktemp', '--directory'])) |
| 3213 | + extract_dir = '/root' |
| 3214 | bddeb_path = os.path.join(extract_dir, 'packages', 'bddeb') |
| 3215 | git_env = {'GIT_DIR': os.path.join(extract_dir, '.git'), |
| 3216 | 'GIT_WORK_TREE': extract_dir} |
| 3217 | @@ -56,6 +53,11 @@ def build_deb(args, instance): |
| 3218 | instance.execute(['git', 'commit', '-a', '-m', 'tmp', '--allow-empty'], |
| 3219 | env=git_env) |
| 3220 | |
| 3221 | + LOG.debug('installing deps') |
| 3222 | + deps_path = os.path.join(extract_dir, 'tools', 'read-dependencies') |
| 3223 | + instance.execute([deps_path, '--install', '--test-distro', |
| 3224 | + '--distro', 'ubuntu', '--python-version', '3']) |
| 3225 | + |
| 3226 | LOG.debug('building deb in remote system at: %s', output_link) |
| 3227 | bddeb_args = args.bddeb_args.split() if args.bddeb_args else [] |
| 3228 | instance.execute([bddeb_path, '-d'] + bddeb_args, env=git_env) |
| 3229 | diff --git a/tests/unittests/helpers.py b/tests/unittests/helpers.py |
| 3230 | index 08c5c46..bf1dc5d 100644 |
| 3231 | --- a/tests/unittests/helpers.py |
| 3232 | +++ b/tests/unittests/helpers.py |
| 3233 | @@ -278,7 +278,7 @@ class FilesystemMockingTestCase(ResourceUsingTestCase): |
| 3234 | return root |
| 3235 | |
| 3236 | |
| 3237 | -class HttprettyTestCase(TestCase): |
| 3238 | +class HttprettyTestCase(CiTestCase): |
| 3239 | # necessary as http_proxy gets in the way of httpretty |
| 3240 | # https://github.com/gabrielfalcao/HTTPretty/issues/122 |
| 3241 | def setUp(self): |
| 3242 | diff --git a/tests/unittests/test_cli.py b/tests/unittests/test_cli.py |
| 3243 | index 06f366b..7780f16 100644 |
| 3244 | --- a/tests/unittests/test_cli.py |
| 3245 | +++ b/tests/unittests/test_cli.py |
| 3246 | @@ -31,9 +31,90 @@ class TestCLI(test_helpers.FilesystemMockingTestCase): |
| 3247 | |
| 3248 | def test_no_arguments_shows_error_message(self): |
| 3249 | exit_code = self._call_main() |
| 3250 | - self.assertIn('cloud-init: error: too few arguments', |
| 3251 | - self.stderr.getvalue()) |
| 3252 | + missing_subcommand_message = [ |
| 3253 | + 'too few arguments', # python2.7 msg |
| 3254 | + 'the following arguments are required: subcommand' # python3 msg |
| 3255 | + ] |
| 3256 | + error = self.stderr.getvalue() |
| 3257 | + matches = ([msg in error for msg in missing_subcommand_message]) |
| 3258 | + self.assertTrue( |
| 3259 | + any(matches), 'Did not find error message for missing subcommand') |
| 3260 | self.assertEqual(2, exit_code) |
| 3261 | |
| 3262 | + def test_all_subcommands_represented_in_help(self): |
| 3263 | +        """All known subparsers are represented in the cloud-init help doc.""" |
| 3264 | + self._call_main() |
| 3265 | + error = self.stderr.getvalue() |
| 3266 | + expected_subcommands = ['analyze', 'init', 'modules', 'single', |
| 3267 | + 'dhclient-hook', 'features'] |
| 3268 | + for subcommand in expected_subcommands: |
| 3269 | + self.assertIn(subcommand, error) |
| 3270 | |
| 3271 | -# vi: ts=4 expandtab |
| 3272 | + @mock.patch('cloudinit.cmd.main.status_wrapper') |
| 3273 | + def test_init_subcommand_parser(self, m_status_wrapper): |
| 3274 | + """The subcommand 'init' calls status_wrapper passing init.""" |
| 3275 | + self._call_main(['cloud-init', 'init']) |
| 3276 | + (name, parseargs) = m_status_wrapper.call_args_list[0][0] |
| 3277 | + self.assertEqual('init', name) |
| 3278 | + self.assertEqual('init', parseargs.subcommand) |
| 3279 | + self.assertEqual('init', parseargs.action[0]) |
| 3280 | + self.assertEqual('main_init', parseargs.action[1].__name__) |
| 3281 | + |
| 3282 | + @mock.patch('cloudinit.cmd.main.status_wrapper') |
| 3283 | + def test_modules_subcommand_parser(self, m_status_wrapper): |
| 3284 | + """The subcommand 'modules' calls status_wrapper passing modules.""" |
| 3285 | + self._call_main(['cloud-init', 'modules']) |
| 3286 | + (name, parseargs) = m_status_wrapper.call_args_list[0][0] |
| 3287 | + self.assertEqual('modules', name) |
| 3288 | + self.assertEqual('modules', parseargs.subcommand) |
| 3289 | + self.assertEqual('modules', parseargs.action[0]) |
| 3290 | + self.assertEqual('main_modules', parseargs.action[1].__name__) |
| 3291 | + |
| 3292 | + def test_analyze_subcommand_parser(self): |
| 3293 | + """The subcommand cloud-init analyze calls the correct subparser.""" |
| 3294 | + self._call_main(['cloud-init', 'analyze']) |
| 3295 | + # These subcommands only valid for cloud-init analyze script |
| 3296 | + expected_subcommands = ['blame', 'show', 'dump'] |
| 3297 | + error = self.stderr.getvalue() |
| 3298 | + for subcommand in expected_subcommands: |
| 3299 | + self.assertIn(subcommand, error) |
| 3300 | + |
| 3301 | + @mock.patch('cloudinit.cmd.main.main_single') |
| 3302 | + def test_single_subcommand(self, m_main_single): |
| 3303 | + """The subcommand 'single' calls main_single with valid args.""" |
| 3304 | + self._call_main(['cloud-init', 'single', '--name', 'cc_ntp']) |
| 3305 | + (name, parseargs) = m_main_single.call_args_list[0][0] |
| 3306 | + self.assertEqual('single', name) |
| 3307 | + self.assertEqual('single', parseargs.subcommand) |
| 3308 | + self.assertEqual('single', parseargs.action[0]) |
| 3309 | + self.assertFalse(parseargs.debug) |
| 3310 | + self.assertFalse(parseargs.force) |
| 3311 | + self.assertIsNone(parseargs.frequency) |
| 3312 | + self.assertEqual('cc_ntp', parseargs.name) |
| 3313 | + self.assertFalse(parseargs.report) |
| 3314 | + |
| 3315 | + @mock.patch('cloudinit.cmd.main.dhclient_hook') |
| 3316 | + def test_dhclient_hook_subcommand(self, m_dhclient_hook): |
| 3317 | + """The subcommand 'dhclient-hook' calls dhclient_hook with args.""" |
| 3318 | + self._call_main(['cloud-init', 'dhclient-hook', 'net_action', 'eth0']) |
| 3319 | + (name, parseargs) = m_dhclient_hook.call_args_list[0][0] |
| 3320 | + self.assertEqual('dhclient_hook', name) |
| 3321 | + self.assertEqual('dhclient-hook', parseargs.subcommand) |
| 3322 | + self.assertEqual('dhclient_hook', parseargs.action[0]) |
| 3323 | + self.assertFalse(parseargs.debug) |
| 3324 | + self.assertFalse(parseargs.force) |
| 3325 | + self.assertEqual('net_action', parseargs.net_action) |
| 3326 | + self.assertEqual('eth0', parseargs.net_interface) |
| 3327 | + |
| 3328 | + @mock.patch('cloudinit.cmd.main.main_features') |
| 3329 | + def test_features_hook_subcommand(self, m_features): |
| 3330 | + """The subcommand 'features' calls main_features with args.""" |
| 3331 | + self._call_main(['cloud-init', 'features']) |
| 3332 | + (name, parseargs) = m_features.call_args_list[0][0] |
| 3333 | + self.assertEqual('features', name) |
| 3334 | + self.assertEqual('features', parseargs.subcommand) |
| 3335 | + self.assertEqual('features', parseargs.action[0]) |
| 3336 | + self.assertFalse(parseargs.debug) |
| 3337 | + self.assertFalse(parseargs.force) |
| 3338 | + |
| 3339 | +# vi: ts=4 expandtab |
| 3340 | diff --git a/tests/unittests/test_datasource/test_aliyun.py b/tests/unittests/test_datasource/test_aliyun.py |
| 3341 | index 990bff2..996560e 100644 |
| 3342 | --- a/tests/unittests/test_datasource/test_aliyun.py |
| 3343 | +++ b/tests/unittests/test_datasource/test_aliyun.py |
| 3344 | @@ -70,7 +70,6 @@ class TestAliYunDatasource(test_helpers.HttprettyTestCase): |
| 3345 | paths = helpers.Paths({}) |
| 3346 | self.ds = ay.DataSourceAliYun(cfg, distro, paths) |
| 3347 | self.metadata_address = self.ds.metadata_urls[0] |
| 3348 | - self.api_ver = self.ds.api_ver |
| 3349 | |
| 3350 | @property |
| 3351 | def default_metadata(self): |
| 3352 | @@ -82,13 +81,15 @@ class TestAliYunDatasource(test_helpers.HttprettyTestCase): |
| 3353 | |
| 3354 | @property |
| 3355 | def metadata_url(self): |
| 3356 | - return os.path.join(self.metadata_address, |
| 3357 | - self.api_ver, 'meta-data') + '/' |
| 3358 | + return os.path.join( |
| 3359 | + self.metadata_address, |
| 3360 | + self.ds.min_metadata_version, 'meta-data') + '/' |
| 3361 | |
| 3362 | @property |
| 3363 | def userdata_url(self): |
| 3364 | - return os.path.join(self.metadata_address, |
| 3365 | - self.api_ver, 'user-data') |
| 3366 | + return os.path.join( |
| 3367 | + self.metadata_address, |
| 3368 | + self.ds.min_metadata_version, 'user-data') |
| 3369 | |
| 3370 | def regist_default_server(self): |
| 3371 | register_mock_metaserver(self.metadata_url, self.default_metadata) |
| 3372 | diff --git a/tests/unittests/test_datasource/test_common.py b/tests/unittests/test_datasource/test_common.py |
| 3373 | index 413e87a..4802f10 100644 |
| 3374 | --- a/tests/unittests/test_datasource/test_common.py |
| 3375 | +++ b/tests/unittests/test_datasource/test_common.py |
| 3376 | @@ -35,6 +35,7 @@ DEFAULT_LOCAL = [ |
| 3377 | OpenNebula.DataSourceOpenNebula, |
| 3378 | OVF.DataSourceOVF, |
| 3379 | SmartOS.DataSourceSmartOS, |
| 3380 | + Ec2.DataSourceEc2Local, |
| 3381 | ] |
| 3382 | |
| 3383 | DEFAULT_NETWORK = [ |
| 3384 | diff --git a/tests/unittests/test_datasource/test_ec2.py b/tests/unittests/test_datasource/test_ec2.py |
| 3385 | index 12230ae..33d0261 100644 |
| 3386 | --- a/tests/unittests/test_datasource/test_ec2.py |
| 3387 | +++ b/tests/unittests/test_datasource/test_ec2.py |
| 3388 | @@ -8,35 +8,67 @@ from cloudinit import helpers |
| 3389 | from cloudinit.sources import DataSourceEc2 as ec2 |
| 3390 | |
| 3391 | |
| 3392 | -# collected from api version 2009-04-04/ with |
| 3393 | +# collected from api version 2016-09-02/ with |
| 3394 | # python3 -c 'import json |
| 3395 | # from cloudinit.ec2_utils import get_instance_metadata as gm |
| 3396 | -# print(json.dumps(gm("2009-04-04"), indent=1, sort_keys=True))' |
| 3397 | +# print(json.dumps(gm("2016-09-02"), indent=1, sort_keys=True))' |
| 3398 | DEFAULT_METADATA = { |
| 3399 | - "ami-id": "ami-80861296", |
| 3400 | + "ami-id": "ami-8b92b4ee", |
| 3401 | "ami-launch-index": "0", |
| 3402 | "ami-manifest-path": "(unknown)", |
| 3403 | "block-device-mapping": {"ami": "/dev/sda1", "root": "/dev/sda1"}, |
| 3404 | - "hostname": "ip-10-0-0-149", |
| 3405 | + "hostname": "ip-172-31-31-158.us-east-2.compute.internal", |
| 3406 | "instance-action": "none", |
| 3407 | - "instance-id": "i-0052913950685138c", |
| 3408 | - "instance-type": "t2.micro", |
| 3409 | - "local-hostname": "ip-10-0-0-149", |
| 3410 | - "local-ipv4": "10.0.0.149", |
| 3411 | - "placement": {"availability-zone": "us-east-1b"}, |
| 3412 | + "instance-id": "i-0a33f80f09c96477f", |
| 3413 | + "instance-type": "t2.small", |
| 3414 | + "local-hostname": "ip-172-3-3-15.us-east-2.compute.internal", |
| 3415 | + "local-ipv4": "172.3.3.15", |
| 3416 | + "mac": "06:17:04:d7:26:09", |
| 3417 | + "metrics": {"vhostmd": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"}, |
| 3418 | + "network": { |
| 3419 | + "interfaces": { |
| 3420 | + "macs": { |
| 3421 | + "06:17:04:d7:26:09": { |
| 3422 | + "device-number": "0", |
| 3423 | + "interface-id": "eni-e44ef49e", |
| 3424 | + "ipv4-associations": {"13.59.77.202": "172.3.3.15"}, |
| 3425 | + "ipv6s": "2600:1f16:aeb:b20b:9d87:a4af:5cc9:73dc", |
| 3426 | + "local-hostname": ("ip-172-3-3-15.us-east-2." |
| 3427 | + "compute.internal"), |
| 3428 | + "local-ipv4s": "172.3.3.15", |
| 3429 | + "mac": "06:17:04:d7:26:09", |
| 3430 | + "owner-id": "950047163771", |
| 3431 | + "public-hostname": ("ec2-13-59-77-202.us-east-2." |
| 3432 | + "compute.amazonaws.com"), |
| 3433 | + "public-ipv4s": "13.59.77.202", |
| 3434 | + "security-group-ids": "sg-5a61d333", |
| 3435 | + "security-groups": "wide-open", |
| 3436 | + "subnet-id": "subnet-20b8565b", |
| 3437 | + "subnet-ipv4-cidr-block": "172.31.16.0/20", |
| 3438 | + "subnet-ipv6-cidr-blocks": "2600:1f16:aeb:b20b::/64", |
| 3439 | + "vpc-id": "vpc-87e72bee", |
| 3440 | + "vpc-ipv4-cidr-block": "172.31.0.0/16", |
| 3441 | + "vpc-ipv4-cidr-blocks": "172.31.0.0/16", |
| 3442 | + "vpc-ipv6-cidr-blocks": "2600:1f16:aeb:b200::/56" |
| 3443 | + } |
| 3444 | + } |
| 3445 | + } |
| 3446 | + }, |
| 3447 | + "placement": {"availability-zone": "us-east-2b"}, |
| 3448 | "profile": "default-hvm", |
| 3449 | - "public-hostname": "", |
| 3450 | - "public-ipv4": "107.23.188.247", |
| 3451 | + "public-hostname": "ec2-13-59-77-202.us-east-2.compute.amazonaws.com", |
| 3452 | + "public-ipv4": "13.59.77.202", |
| 3453 | "public-keys": {"brickies": ["ssh-rsa AAAAB3Nz....w== brickies"]}, |
| 3454 | - "reservation-id": "r-00a2c173fb5782a08", |
| 3455 | - "security-groups": "wide-open" |
| 3456 | + "reservation-id": "r-01efbc9996bac1bd6", |
| 3457 | + "security-groups": "my-wide-open", |
| 3458 | + "services": {"domain": "amazonaws.com", "partition": "aws"} |
| 3459 | } |
| 3460 | |
| 3461 | |
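The nested ``network/interfaces/macs`` layout in DEFAULT_METADATA above is what the EC2 datasource crawls for network configuration. A hedged sketch (``macs_to_device_numbers`` is illustrative, not part of cloud-init) of extracting per-MAC device numbers from such a dict:

```python
def macs_to_device_numbers(metadata):
    """Map each MAC in ec2 metadata to its 'device-number' value.

    Assumes the nested network/interfaces/macs layout shown above;
    missing keys yield an empty dict.
    """
    macs = metadata.get('network', {}).get('interfaces', {}).get('macs', {})
    return {mac: info.get('device-number') for mac, info in macs.items()}
```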
| 3462 | def _register_ssh_keys(rfunc, base_url, keys_data): |
| 3463 | """handle ssh key inconsistencies. |
| 3464 | |
| 3465 | - public-keys in the ec2 metadata is inconsistently formatted compared |
| 3466 | +    public-keys in the ec2 metadata is inconsistently formatted compared |
| 3467 | to other entries. |
| 3468 | Given keys_data of {name1: pubkey1, name2: pubkey2} |
| 3469 | |
| 3470 | @@ -115,6 +147,8 @@ def register_mock_metaserver(base_url, data): |
| 3471 | |
| 3472 | |
| 3473 | class TestEc2(test_helpers.HttprettyTestCase): |
| 3474 | + with_logs = True |
| 3475 | + |
| 3476 | valid_platform_data = { |
| 3477 | 'uuid': 'ec212f79-87d1-2f1d-588f-d86dc0fd5412', |
| 3478 | 'uuid_source': 'dmi', |
| 3479 | @@ -123,16 +157,20 @@ class TestEc2(test_helpers.HttprettyTestCase): |
| 3480 | |
| 3481 | def setUp(self): |
| 3482 | super(TestEc2, self).setUp() |
| 3483 | - self.metadata_addr = ec2.DataSourceEc2.metadata_urls[0] |
| 3484 | - self.api_ver = '2009-04-04' |
| 3485 | + self.datasource = ec2.DataSourceEc2 |
| 3486 | + self.metadata_addr = self.datasource.metadata_urls[0] |
| 3487 | |
| 3488 | @property |
| 3489 | def metadata_url(self): |
| 3490 | - return '/'.join([self.metadata_addr, self.api_ver, 'meta-data', '']) |
| 3491 | + return '/'.join([ |
| 3492 | + self.metadata_addr, |
| 3493 | + self.datasource.min_metadata_version, 'meta-data', '']) |
| 3494 | |
| 3495 | @property |
| 3496 | def userdata_url(self): |
| 3497 | - return '/'.join([self.metadata_addr, self.api_ver, 'user-data']) |
| 3498 | + return '/'.join([ |
| 3499 | + self.metadata_addr, |
| 3500 | + self.datasource.min_metadata_version, 'user-data']) |
| 3501 | |
| 3502 | def _patch_add_cleanup(self, mpath, *args, **kwargs): |
| 3503 | p = mock.patch(mpath, *args, **kwargs) |
| 3504 | @@ -144,7 +182,7 @@ class TestEc2(test_helpers.HttprettyTestCase): |
| 3505 | paths = helpers.Paths({}) |
| 3506 | if sys_cfg is None: |
| 3507 | sys_cfg = {} |
| 3508 | - ds = ec2.DataSourceEc2(sys_cfg=sys_cfg, distro=distro, paths=paths) |
| 3509 | + ds = self.datasource(sys_cfg=sys_cfg, distro=distro, paths=paths) |
| 3510 | if platform_data is not None: |
| 3511 | self._patch_add_cleanup( |
| 3512 | "cloudinit.sources.DataSourceEc2._collect_platform_data", |
| 3513 | @@ -157,14 +195,16 @@ class TestEc2(test_helpers.HttprettyTestCase): |
| 3514 | return ds |
| 3515 | |
| 3516 | @httpretty.activate |
| 3517 | - def test_valid_platform_with_strict_true(self): |
| 3518 | + @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery') |
| 3519 | + def test_valid_platform_with_strict_true(self, m_dhcp): |
| 3520 | """Valid platform data should return true with strict_id true.""" |
| 3521 | ds = self._setup_ds( |
| 3522 | platform_data=self.valid_platform_data, |
| 3523 | sys_cfg={'datasource': {'Ec2': {'strict_id': True}}}, |
| 3524 | md=DEFAULT_METADATA) |
| 3525 | ret = ds.get_data() |
| 3526 | - self.assertEqual(True, ret) |
| 3527 | + self.assertTrue(ret) |
| 3528 | + self.assertEqual(0, m_dhcp.call_count) |
| 3529 | |
| 3530 | @httpretty.activate |
| 3531 | def test_valid_platform_with_strict_false(self): |
| 3532 | @@ -174,7 +214,7 @@ class TestEc2(test_helpers.HttprettyTestCase): |
| 3533 | sys_cfg={'datasource': {'Ec2': {'strict_id': False}}}, |
| 3534 | md=DEFAULT_METADATA) |
| 3535 | ret = ds.get_data() |
| 3536 | - self.assertEqual(True, ret) |
| 3537 | + self.assertTrue(ret) |
| 3538 | |
| 3539 | @httpretty.activate |
| 3540 | def test_unknown_platform_with_strict_true(self): |
| 3541 | @@ -185,7 +225,7 @@ class TestEc2(test_helpers.HttprettyTestCase): |
| 3542 | sys_cfg={'datasource': {'Ec2': {'strict_id': True}}}, |
| 3543 | md=DEFAULT_METADATA) |
| 3544 | ret = ds.get_data() |
| 3545 | - self.assertEqual(False, ret) |
| 3546 | + self.assertFalse(ret) |
| 3547 | |
| 3548 | @httpretty.activate |
| 3549 | def test_unknown_platform_with_strict_false(self): |
| 3550 | @@ -196,7 +236,55 @@ class TestEc2(test_helpers.HttprettyTestCase): |
| 3551 | sys_cfg={'datasource': {'Ec2': {'strict_id': False}}}, |
| 3552 | md=DEFAULT_METADATA) |
| 3553 | ret = ds.get_data() |
| 3554 | - self.assertEqual(True, ret) |
| 3555 | + self.assertTrue(ret) |
| 3556 | + |
| 3557 | + @httpretty.activate |
| 3558 | + @mock.patch('cloudinit.sources.DataSourceEc2.util.is_FreeBSD') |
| 3559 | + def test_ec2_local_returns_false_on_bsd(self, m_is_freebsd): |
| 3560 | + """DataSourceEc2Local returns False on BSD. |
| 3561 | + |
| 3562 | + FreeBSD dhclient doesn't support dhclient -sf to run in a sandbox. |
| 3563 | + """ |
| 3564 | + m_is_freebsd.return_value = True |
| 3565 | + self.datasource = ec2.DataSourceEc2Local |
| 3566 | + ds = self._setup_ds( |
| 3567 | + platform_data=self.valid_platform_data, |
| 3568 | + sys_cfg={'datasource': {'Ec2': {'strict_id': False}}}, |
| 3569 | + md=DEFAULT_METADATA) |
| 3570 | + ret = ds.get_data() |
| 3571 | + self.assertFalse(ret) |
| 3572 | + self.assertIn( |
| 3573 | + "FreeBSD doesn't support running dhclient with -sf", |
| 3574 | + self.logs.getvalue()) |
| 3575 | + |
| 3576 | + @httpretty.activate |
| 3577 | + @mock.patch('cloudinit.net.EphemeralIPv4Network') |
| 3578 | + @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery') |
| 3579 | + @mock.patch('cloudinit.sources.DataSourceEc2.util.is_FreeBSD') |
| 3580 | + def test_ec2_local_performs_dhcp_on_non_bsd(self, m_is_bsd, m_dhcp, m_net): |
| 3581 | + """Ec2Local returns True for valid platform data on non-BSD with dhcp. |
| 3582 | + |
| 3583 | + DataSourceEc2Local will setup initial IPv4 network via dhcp discovery. |
| 3584 | + Then the metadata services is crawled for more network config info. |
| 3585 | + When the platform data is valid, return True. |
| 3586 | + """ |
| 3587 | + m_is_bsd.return_value = False |
| 3588 | + m_dhcp.return_value = [{ |
| 3589 | + 'interface': 'eth9', 'fixed-address': '192.168.2.9', |
| 3590 | + 'routers': '192.168.2.1', 'subnet-mask': '255.255.255.0', |
| 3591 | + 'broadcast-address': '192.168.2.255'}] |
| 3592 | + self.datasource = ec2.DataSourceEc2Local |
| 3593 | + ds = self._setup_ds( |
| 3594 | + platform_data=self.valid_platform_data, |
| 3595 | + sys_cfg={'datasource': {'Ec2': {'strict_id': False}}}, |
| 3596 | + md=DEFAULT_METADATA) |
| 3597 | + ret = ds.get_data() |
| 3598 | + self.assertTrue(ret) |
| 3599 | + m_dhcp.assert_called_once_with() |
| 3600 | + m_net.assert_called_once_with( |
| 3601 | + broadcast='192.168.2.255', interface='eth9', ip='192.168.2.9', |
| 3602 | + prefix_or_mask='255.255.255.0', router='192.168.2.1') |
| 3603 | + self.assertIn('Crawl of metadata service took', self.logs.getvalue()) |
| 3604 | |
| 3605 | |
| 3606 | # vi: ts=4 expandtab |
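The new `test_ec2_local_performs_dhcp_on_non_bsd` test above pins down how a dhclient lease dict is translated into `EphemeralIPv4Network` keyword arguments. The key names on both sides are taken straight from the diff; the helper function itself is an illustrative sketch, not cloud-init's actual implementation.

```python
# Sketch of the lease-to-network mapping exercised by the test above.
# Lease keys ('interface', 'fixed-address', ...) and EphemeralIPv4Network
# kwargs (interface, ip, prefix_or_mask, broadcast, router) both appear
# verbatim in the diff; this helper is illustrative only.


def lease_to_ephemeral_kwargs(lease):
    """Map a dhclient lease dict onto EphemeralIPv4Network kwargs."""
    return {
        'interface': lease['interface'],
        'ip': lease['fixed-address'],
        'prefix_or_mask': lease['subnet-mask'],
        'broadcast': lease['broadcast-address'],
        'router': lease['routers'],
    }


lease = {
    'interface': 'eth9', 'fixed-address': '192.168.2.9',
    'routers': '192.168.2.1', 'subnet-mask': '255.255.255.0',
    'broadcast-address': '192.168.2.255'}
kwargs = lease_to_ephemeral_kwargs(lease)
```

With the lease from the test, this yields exactly the kwargs asserted by `m_net.assert_called_once_with(...)`.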
| 3607 | diff --git a/tests/unittests/test_distros/__init__.py b/tests/unittests/test_distros/__init__.py |
| 3608 | index e69de29..5394aa5 100644 |
| 3609 | --- a/tests/unittests/test_distros/__init__.py |
| 3610 | +++ b/tests/unittests/test_distros/__init__.py |
| 3611 | @@ -0,0 +1,21 @@ |
| 3612 | +# This file is part of cloud-init. See LICENSE file for license information. |
| 3613 | +import copy |
| 3614 | + |
| 3615 | +from cloudinit import distros |
| 3616 | +from cloudinit import helpers |
| 3617 | +from cloudinit import settings |
| 3618 | + |
| 3619 | + |
| 3620 | +def _get_distro(dtype, system_info=None): |
| 3621 | + """Return a Distro class of distro 'dtype'. |
| 3622 | + |
| 3623 | + cfg is format of CFG_BUILTIN['system_info']. |
| 3624 | + |
| 3625 | + example: _get_distro("debian") |
| 3626 | + """ |
| 3627 | + if system_info is None: |
| 3628 | + system_info = copy.deepcopy(settings.CFG_BUILTIN['system_info']) |
| 3629 | + system_info['distro'] = dtype |
| 3630 | + paths = helpers.Paths(system_info['paths']) |
| 3631 | + distro_cls = distros.fetch(dtype) |
| 3632 | + return distro_cls(dtype, system_info, paths) |
| 3633 | diff --git a/tests/unittests/test_distros/test_arch.py b/tests/unittests/test_distros/test_arch.py |
| 3634 | new file mode 100644 |
| 3635 | index 0000000..3d4c9a7 |
| 3636 | --- /dev/null |
| 3637 | +++ b/tests/unittests/test_distros/test_arch.py |
| 3638 | @@ -0,0 +1,45 @@ |
| 3639 | +# This file is part of cloud-init. See LICENSE file for license information. |
| 3640 | + |
| 3641 | +from cloudinit.distros.arch import _render_network |
| 3642 | +from cloudinit import util |
| 3643 | + |
| 3644 | +from ..helpers import (CiTestCase, dir2dict) |
| 3645 | + |
| 3646 | +from . import _get_distro |
| 3647 | + |
| 3648 | + |
| 3649 | +class TestArch(CiTestCase): |
| 3650 | + |
| 3651 | + def test_get_distro(self): |
| 3652 | + distro = _get_distro("arch") |
| 3653 | + hostname = "myhostname" |
| 3654 | + hostfile = self.tmp_path("hostfile") |
| 3655 | + distro._write_hostname(hostname, hostfile) |
| 3656 | + self.assertEqual(hostname + "\n", util.load_file(hostfile)) |
| 3657 | + |
| 3658 | + |
| 3659 | +class TestRenderNetwork(CiTestCase): |
| 3660 | + def test_basic_static(self): |
| 3661 | + """Just the most basic static config. |
| 3662 | + |
| 3663 | + note 'lo' should not be rendered as an interface.""" |
| 3664 | + entries = {'eth0': {'auto': True, |
| 3665 | + 'dns-nameservers': ['8.8.8.8'], |
| 3666 | + 'bootproto': 'static', |
| 3667 | + 'address': '10.0.0.2', |
| 3668 | + 'gateway': '10.0.0.1', |
| 3669 | + 'netmask': '255.255.255.0'}, |
| 3670 | + 'lo': {'auto': True}} |
| 3671 | + target = self.tmp_dir() |
| 3672 | + devs = _render_network(entries, target=target) |
| 3673 | + files = dir2dict(target, prefix=target) |
| 3674 | + self.assertEqual(['eth0'], devs) |
| 3675 | + self.assertEqual( |
| 3676 | + {'/etc/netctl/eth0': '\n'.join([ |
| 3677 | + "Address=10.0.0.2/255.255.255.0", |
| 3678 | + "Connection=ethernet", |
| 3679 | + "DNS=('8.8.8.8')", |
| 3680 | + "Gateway=10.0.0.1", |
| 3681 | + "IP=static", |
| 3682 | + "Interface=eth0", ""]), |
| 3683 | + '/etc/resolv.conf': 'nameserver 8.8.8.8\n'}, files) |
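The expected `/etc/netctl/eth0` content in `TestRenderNetwork` above is simply the profile settings emitted as alphabetically sorted `Key=Value` lines. A minimal sketch of that ordering (not cloud-init's `_render_network` code):

```python
# Illustrative: emit netctl profile settings as sorted Key=Value lines,
# matching the ordering seen in the expected /etc/netctl/eth0 above.


def render_netctl_profile(settings):
    return ''.join('%s=%s\n' % (k, settings[k]) for k in sorted(settings))


profile = render_netctl_profile({
    'Address': '10.0.0.2/255.255.255.0',
    'Connection': 'ethernet',
    'DNS': "('8.8.8.8')",
    'Gateway': '10.0.0.1',
    'IP': 'static',
    'Interface': 'eth0',
})
```

Note that ASCII sorting places `IP` before `Interface` (`P` < `n`), which is why the expected file lists them in that order.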
| 3684 | diff --git a/tests/unittests/test_distros/test_netconfig.py b/tests/unittests/test_distros/test_netconfig.py |
| 3685 | index 2f505d9..6d89dba 100644 |
| 3686 | --- a/tests/unittests/test_distros/test_netconfig.py |
| 3687 | +++ b/tests/unittests/test_distros/test_netconfig.py |
| 3688 | @@ -135,7 +135,7 @@ network: |
| 3689 | V2_NET_CFG = { |
| 3690 | 'ethernets': { |
| 3691 | 'eth7': { |
| 3692 | - 'addresses': ['192.168.1.5/255.255.255.0'], |
| 3693 | + 'addresses': ['192.168.1.5/24'], |
| 3694 | 'gateway4': '192.168.1.254'}, |
| 3695 | 'eth9': { |
| 3696 | 'dhcp4': True} |
| 3697 | @@ -151,7 +151,6 @@ V2_TO_V2_NET_CFG_OUTPUT = """ |
| 3698 | # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following: |
| 3699 | # network: {config: disabled} |
| 3700 | network: |
| 3701 | - version: 2 |
| 3702 | ethernets: |
| 3703 | eth7: |
| 3704 | addresses: |
| 3705 | @@ -159,6 +158,7 @@ network: |
| 3706 | gateway4: 192.168.1.254 |
| 3707 | eth9: |
| 3708 | dhcp4: true |
| 3709 | + version: 2 |
| 3710 | """ |
| 3711 | |
| 3712 | |
| 3713 | diff --git a/tests/unittests/test_handler/test_handler_ntp.py b/tests/unittests/test_handler/test_handler_ntp.py |
| 3714 | index 7f27864..83d5faa 100644 |
| 3715 | --- a/tests/unittests/test_handler/test_handler_ntp.py |
| 3716 | +++ b/tests/unittests/test_handler/test_handler_ntp.py |
| 3717 | @@ -16,6 +16,14 @@ servers {{servers}} |
| 3718 | pools {{pools}} |
| 3719 | """ |
| 3720 | |
| 3721 | +TIMESYNCD_TEMPLATE = b"""\ |
| 3722 | +## template:jinja |
| 3723 | +[Time] |
| 3724 | +{% if servers or pools -%} |
| 3725 | +NTP={% for host in servers|list + pools|list %}{{ host }} {% endfor -%} |
| 3726 | +{% endif -%} |
| 3727 | +""" |
| 3728 | + |
| 3729 | try: |
| 3730 | import jsonschema |
| 3731 | assert jsonschema # avoid pyflakes error F401: import unused |
| 3732 | @@ -59,6 +67,14 @@ class TestNtp(FilesystemMockingTestCase): |
| 3733 | cc_ntp.install_ntp(install_func, packages=['ntp'], check_exe='ntpd') |
| 3734 | install_func.assert_not_called() |
| 3735 | |
| 3736 | + @mock.patch("cloudinit.config.cc_ntp.util") |
| 3737 | + def test_ntp_install_no_op_with_empty_pkg_list(self, mock_util): |
| 3738 | + """ntp_install calls install_func with empty list""" |
| 3739 | + mock_util.which.return_value = None # check_exe not found |
| 3740 | + install_func = mock.MagicMock() |
| 3741 | + cc_ntp.install_ntp(install_func, packages=[], check_exe='timesyncd') |
| 3742 | + install_func.assert_called_once_with([]) |
| 3743 | + |
| 3744 | def test_ntp_rename_ntp_conf(self): |
| 3745 | """When NTP_CONF exists, rename_ntp moves it.""" |
| 3746 | ntpconf = self.tmp_path("ntp.conf", self.new_root) |
| 3747 | @@ -68,6 +84,30 @@ class TestNtp(FilesystemMockingTestCase): |
| 3748 | self.assertFalse(os.path.exists(ntpconf)) |
| 3749 | self.assertTrue(os.path.exists("{0}.dist".format(ntpconf))) |
| 3750 | |
| 3751 | + @mock.patch("cloudinit.config.cc_ntp.util") |
| 3752 | + def test_reload_ntp_defaults(self, mock_util): |
| 3753 | + """Test service is restarted/reloaded (defaults)""" |
| 3754 | + service = 'ntp' |
| 3755 | + cmd = ['service', service, 'restart'] |
| 3756 | + cc_ntp.reload_ntp(service) |
| 3757 | + mock_util.subp.assert_called_with(cmd, capture=True) |
| 3758 | + |
| 3759 | + @mock.patch("cloudinit.config.cc_ntp.util") |
| 3760 | + def test_reload_ntp_systemd(self, mock_util): |
| 3761 | + """Test service is restarted/reloaded (systemd)""" |
| 3762 | + service = 'ntp' |
| 3763 | + cmd = ['systemctl', 'reload-or-restart', service] |
| 3764 | + cc_ntp.reload_ntp(service, systemd=True) |
| 3765 | + mock_util.subp.assert_called_with(cmd, capture=True) |
| 3766 | + |
| 3767 | + @mock.patch("cloudinit.config.cc_ntp.util") |
| 3768 | + def test_reload_ntp_systemd_timesycnd(self, mock_util): |
| 3769 | + """Test service is restarted/reloaded (systemd/timesyncd)""" |
| 3770 | + service = 'systemd-timesycnd' |
| 3771 | + cmd = ['systemctl', 'reload-or-restart', service] |
| 3772 | + cc_ntp.reload_ntp(service, systemd=True) |
| 3773 | + mock_util.subp.assert_called_with(cmd, capture=True) |
| 3774 | + |
| 3775 | def test_ntp_rename_ntp_conf_skip_missing(self): |
| 3776 | """When NTP_CONF doesn't exist rename_ntp doesn't create a file.""" |
| 3777 | ntpconf = self.tmp_path("ntp.conf", self.new_root) |
| 3778 | @@ -94,7 +134,7 @@ class TestNtp(FilesystemMockingTestCase): |
| 3779 | with open('{0}.tmpl'.format(ntp_conf), 'wb') as stream: |
| 3780 | stream.write(NTP_TEMPLATE) |
| 3781 | with mock.patch('cloudinit.config.cc_ntp.NTP_CONF', ntp_conf): |
| 3782 | - cc_ntp.write_ntp_config_template(cfg, mycloud) |
| 3783 | + cc_ntp.write_ntp_config_template(cfg, mycloud, ntp_conf) |
| 3784 | content = util.read_file_or_url('file://' + ntp_conf).contents |
| 3785 | self.assertEqual( |
| 3786 | "servers ['192.168.2.1', '192.168.2.2']\npools []\n", |
| 3787 | @@ -120,7 +160,7 @@ class TestNtp(FilesystemMockingTestCase): |
| 3788 | with open('{0}.{1}.tmpl'.format(ntp_conf, distro), 'wb') as stream: |
| 3789 | stream.write(NTP_TEMPLATE) |
| 3790 | with mock.patch('cloudinit.config.cc_ntp.NTP_CONF', ntp_conf): |
| 3791 | - cc_ntp.write_ntp_config_template(cfg, mycloud) |
| 3792 | + cc_ntp.write_ntp_config_template(cfg, mycloud, ntp_conf) |
| 3793 | content = util.read_file_or_url('file://' + ntp_conf).contents |
| 3794 | self.assertEqual( |
| 3795 | "servers []\npools ['10.0.0.1', '10.0.0.2']\n", |
| 3796 | @@ -139,7 +179,7 @@ class TestNtp(FilesystemMockingTestCase): |
| 3797 | with open('{0}.tmpl'.format(ntp_conf), 'wb') as stream: |
| 3798 | stream.write(NTP_TEMPLATE) |
| 3799 | with mock.patch('cloudinit.config.cc_ntp.NTP_CONF', ntp_conf): |
| 3800 | - cc_ntp.write_ntp_config_template({}, mycloud) |
| 3801 | + cc_ntp.write_ntp_config_template({}, mycloud, ntp_conf) |
| 3802 | content = util.read_file_or_url('file://' + ntp_conf).contents |
| 3803 | default_pools = [ |
| 3804 | "{0}.{1}.pool.ntp.org".format(x, distro) |
| 3805 | @@ -152,7 +192,8 @@ class TestNtp(FilesystemMockingTestCase): |
| 3806 | ",".join(default_pools)), |
| 3807 | self.logs.getvalue()) |
| 3808 | |
| 3809 | - def test_ntp_handler_mocked_template(self): |
| 3810 | + @mock.patch("cloudinit.config.cc_ntp.ntp_installable") |
| 3811 | + def test_ntp_handler_mocked_template(self, m_ntp_install): |
| 3812 | """Test ntp handler renders ubuntu ntp.conf template.""" |
| 3813 | pools = ['0.mycompany.pool.ntp.org', '3.mycompany.pool.ntp.org'] |
| 3814 | servers = ['192.168.23.3', '192.168.23.4'] |
| 3815 | @@ -164,6 +205,8 @@ class TestNtp(FilesystemMockingTestCase): |
| 3816 | } |
| 3817 | mycloud = self._get_cloud('ubuntu') |
| 3818 | ntp_conf = self.tmp_path('ntp.conf', self.new_root) # Doesn't exist |
| 3819 | + m_ntp_install.return_value = True |
| 3820 | + |
| 3821 | # Create ntp.conf.tmpl |
| 3822 | with open('{0}.tmpl'.format(ntp_conf), 'wb') as stream: |
| 3823 | stream.write(NTP_TEMPLATE) |
| 3824 | @@ -176,6 +219,34 @@ class TestNtp(FilesystemMockingTestCase): |
| 3825 | 'servers {0}\npools {1}\n'.format(servers, pools), |
| 3826 | content.decode()) |
| 3827 | |
| 3828 | + @mock.patch("cloudinit.config.cc_ntp.util") |
| 3829 | + def test_ntp_handler_mocked_template_snappy(self, m_util): |
| 3830 | + """Test ntp handler renders timesycnd.conf template on snappy.""" |
| 3831 | + pools = ['0.mycompany.pool.ntp.org', '3.mycompany.pool.ntp.org'] |
| 3832 | + servers = ['192.168.23.3', '192.168.23.4'] |
| 3833 | + cfg = { |
| 3834 | + 'ntp': { |
| 3835 | + 'pools': pools, |
| 3836 | + 'servers': servers |
| 3837 | + } |
| 3838 | + } |
| 3839 | + mycloud = self._get_cloud('ubuntu') |
| 3840 | + m_util.system_is_snappy.return_value = True |
| 3841 | + |
| 3842 | + # Create timesyncd.conf.tmpl |
| 3843 | + tsyncd_conf = self.tmp_path("timesyncd.conf", self.new_root) |
| 3844 | + template = '{0}.tmpl'.format(tsyncd_conf) |
| 3845 | + with open(template, 'wb') as stream: |
| 3846 | + stream.write(TIMESYNCD_TEMPLATE) |
| 3847 | + |
| 3848 | + with mock.patch('cloudinit.config.cc_ntp.TIMESYNCD_CONF', tsyncd_conf): |
| 3849 | + cc_ntp.handle('notimportant', cfg, mycloud, None, None) |
| 3850 | + |
| 3851 | + content = util.read_file_or_url('file://' + tsyncd_conf).contents |
| 3852 | + self.assertEqual( |
| 3853 | + "[Time]\nNTP=%s %s \n" % (" ".join(servers), " ".join(pools)), |
| 3854 | + content.decode()) |
| 3855 | + |
| 3856 | def test_ntp_handler_real_distro_templates(self): |
| 3857 | """Test ntp handler renders the shipped distro ntp.conf templates.""" |
| 3858 | pools = ['0.mycompany.pool.ntp.org', '3.mycompany.pool.ntp.org'] |
| 3859 | @@ -333,4 +404,30 @@ class TestNtp(FilesystemMockingTestCase): |
| 3860 | "pools ['0.mypool.org', '0.mypool.org']\n", |
| 3861 | content) |
| 3862 | |
| 3863 | + @mock.patch("cloudinit.config.cc_ntp.ntp_installable") |
| 3864 | + def test_ntp_handler_timesyncd(self, m_ntp_install): |
| 3865 | + """Test ntp handler configures timesyncd""" |
| 3866 | + m_ntp_install.return_value = False |
| 3867 | + distro = 'ubuntu' |
| 3868 | + cfg = { |
| 3869 | + 'servers': ['192.168.2.1', '192.168.2.2'], |
| 3870 | + 'pools': ['0.mypool.org'], |
| 3871 | + } |
| 3872 | + mycloud = self._get_cloud(distro) |
| 3873 | + tsyncd_conf = self.tmp_path("timesyncd.conf", self.new_root) |
| 3874 | + # Create timesyncd.conf.tmpl |
| 3875 | + template = '{0}.tmpl'.format(tsyncd_conf) |
| 3876 | + print(template) |
| 3877 | + with open(template, 'wb') as stream: |
| 3878 | + stream.write(TIMESYNCD_TEMPLATE) |
| 3879 | + with mock.patch('cloudinit.config.cc_ntp.TIMESYNCD_CONF', tsyncd_conf): |
| 3880 | + cc_ntp.write_ntp_config_template(cfg, mycloud, tsyncd_conf, |
| 3881 | + template='timesyncd.conf') |
| 3882 | + |
| 3883 | + content = util.read_file_or_url('file://' + tsyncd_conf).contents |
| 3884 | + self.assertEqual( |
| 3885 | + "[Time]\nNTP=192.168.2.1 192.168.2.2 0.mypool.org \n", |
| 3886 | + content.decode()) |
| 3887 | + |
| 3888 | + |
| 3889 | # vi: ts=4 expandtab |
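The `TIMESYNCD_TEMPLATE` used by the tests above concatenates servers first, then pools, onto a single space-separated `NTP=` line; the jinja for-loop leaves a trailing space after the last host, which is why the expected strings end in `" \n"`. A plain-Python mirror of that rendering (illustrative, not the cloud-init templater):

```python
# Mirrors what TIMESYNCD_TEMPLATE renders to in the tests above:
# servers first, then pools, space-separated on one NTP= line, with the
# trailing space the jinja for-loop leaves after the final host.


def render_timesyncd(servers, pools):
    hosts = list(servers) + list(pools)
    out = '[Time]\n'
    if hosts:
        out += 'NTP=%s \n' % ' '.join(hosts)
    return out


content = render_timesyncd(['192.168.2.1', '192.168.2.2'], ['0.mypool.org'])
# -> "[Time]\nNTP=192.168.2.1 192.168.2.2 0.mypool.org \n"
```

This reproduces the exact string asserted in `test_ntp_handler_timesyncd`.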
| 3890 | diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py |
| 3891 | index e49abcc..f251024 100644 |
| 3892 | --- a/tests/unittests/test_net.py |
| 3893 | +++ b/tests/unittests/test_net.py |
| 3894 | @@ -1059,6 +1059,100 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true |
| 3895 | - type: static |
| 3896 | address: 2001:1::1/92 |
| 3897 | """), |
| 3898 | + 'expected_netplan': textwrap.dedent(""" |
| 3899 | + network: |
| 3900 | + version: 2 |
| 3901 | + ethernets: |
| 3902 | + bond0s0: |
| 3903 | + match: |
| 3904 | + macaddress: aa:bb:cc:dd:e8:00 |
| 3905 | + set-name: bond0s0 |
| 3906 | + bond0s1: |
| 3907 | + match: |
| 3908 | + macaddress: aa:bb:cc:dd:e8:01 |
| 3909 | + set-name: bond0s1 |
| 3910 | + bonds: |
| 3911 | + bond0: |
| 3912 | + addresses: |
| 3913 | + - 192.168.0.2/24 |
| 3914 | + - 192.168.1.2/24 |
| 3915 | + - 2001:1::1/92 |
| 3916 | + gateway4: 192.168.0.1 |
| 3917 | + interfaces: |
| 3918 | + - bond0s0 |
| 3919 | + - bond0s1 |
| 3920 | + parameters: |
| 3921 | + mii-monitor-interval: 100 |
| 3922 | + mode: active-backup |
| 3923 | + transmit-hash-policy: layer3+4 |
| 3924 | + routes: |
| 3925 | + - to: 10.1.3.0/24 |
| 3926 | + via: 192.168.0.3 |
| 3927 | + """), |
| 3928 | + 'yaml-v2': textwrap.dedent(""" |
| 3929 | + version: 2 |
| 3930 | + ethernets: |
| 3931 | + eth0: |
| 3932 | + match: |
| 3933 | + driver: "virtio_net" |
| 3934 | + macaddress: "aa:bb:cc:dd:e8:00" |
| 3935 | + vf0: |
| 3936 | + set-name: vf0 |
| 3937 | + match: |
| 3938 | + driver: "e1000" |
| 3939 | + macaddress: "aa:bb:cc:dd:e8:01" |
| 3940 | + bonds: |
| 3941 | + bond0: |
| 3942 | + addresses: |
| 3943 | + - 192.168.0.2/24 |
| 3944 | + - 192.168.1.2/24 |
| 3945 | + - 2001:1::1/92 |
| 3946 | + gateway4: 192.168.0.1 |
| 3947 | + interfaces: |
| 3948 | + - eth0 |
| 3949 | + - vf0 |
| 3950 | + parameters: |
| 3951 | + mii-monitor-interval: 100 |
| 3952 | + mode: active-backup |
| 3953 | + primary: vf0 |
| 3954 | + transmit-hash-policy: "layer3+4" |
| 3955 | + routes: |
| 3956 | + - to: 10.1.3.0/24 |
| 3957 | + via: 192.168.0.3 |
| 3958 | + """), |
| 3959 | + 'expected_netplan-v2': textwrap.dedent(""" |
| 3960 | + network: |
| 3961 | + bonds: |
| 3962 | + bond0: |
| 3963 | + addresses: |
| 3964 | + - 192.168.0.2/24 |
| 3965 | + - 192.168.1.2/24 |
| 3966 | + - 2001:1::1/92 |
| 3967 | + gateway4: 192.168.0.1 |
| 3968 | + interfaces: |
| 3969 | + - eth0 |
| 3970 | + - vf0 |
| 3971 | + parameters: |
| 3972 | + mii-monitor-interval: 100 |
| 3973 | + mode: active-backup |
| 3974 | + primary: vf0 |
| 3975 | + transmit-hash-policy: layer3+4 |
| 3976 | + routes: |
| 3977 | + - to: 10.1.3.0/24 |
| 3978 | + via: 192.168.0.3 |
| 3979 | + ethernets: |
| 3980 | + eth0: |
| 3981 | + match: |
| 3982 | + driver: virtio_net |
| 3983 | + macaddress: aa:bb:cc:dd:e8:00 |
| 3984 | + vf0: |
| 3985 | + match: |
| 3986 | + driver: e1000 |
| 3987 | + macaddress: aa:bb:cc:dd:e8:01 |
| 3988 | + set-name: vf0 |
| 3989 | + version: 2 |
| 3990 | + """), |
| 3991 | + |
| 3992 | 'expected_sysconfig': { |
| 3993 | 'ifcfg-bond0': textwrap.dedent("""\ |
| 3994 | BONDING_MASTER=yes |
| 3995 | @@ -1683,6 +1777,9 @@ USERCTL=no |
| 3996 | ns = network_state.parse_net_config_data(network_cfg, |
| 3997 | skip_broken=False) |
| 3998 | renderer = sysconfig.Renderer() |
| 3999 | + # render a multiple times to simulate reboots |
| 4000 | + renderer.render_network_state(ns, render_dir) |
| 4001 | + renderer.render_network_state(ns, render_dir) |
| 4002 | renderer.render_network_state(ns, render_dir) |
| 4003 | for fn, expected_content in os_sample.get('out_sysconfig', []): |
| 4004 | with open(os.path.join(render_dir, fn)) as fh: |
| 4005 | @@ -2156,6 +2253,27 @@ class TestNetplanRoundTrip(CiTestCase): |
| 4006 | renderer.render_network_state(ns, target) |
| 4007 | return dir2dict(target) |
| 4008 | |
| 4009 | + def testsimple_render_bond_netplan(self): |
| 4010 | + entry = NETWORK_CONFIGS['bond'] |
| 4011 | + files = self._render_and_read(network_config=yaml.load(entry['yaml'])) |
| 4012 | + print(entry['expected_netplan']) |
| 4013 | + print('-- expected ^ | v rendered --') |
| 4014 | + print(files['/etc/netplan/50-cloud-init.yaml']) |
| 4015 | + self.assertEqual( |
| 4016 | + entry['expected_netplan'].splitlines(), |
| 4017 | + files['/etc/netplan/50-cloud-init.yaml'].splitlines()) |
| 4018 | + |
| 4019 | + def testsimple_render_bond_v2_input_netplan(self): |
| 4020 | + entry = NETWORK_CONFIGS['bond'] |
| 4021 | + files = self._render_and_read( |
| 4022 | + network_config=yaml.load(entry['yaml-v2'])) |
| 4023 | + print(entry['expected_netplan-v2']) |
| 4024 | + print('-- expected ^ | v rendered --') |
| 4025 | + print(files['/etc/netplan/50-cloud-init.yaml']) |
| 4026 | + self.assertEqual( |
| 4027 | + entry['expected_netplan-v2'].splitlines(), |
| 4028 | + files['/etc/netplan/50-cloud-init.yaml'].splitlines()) |
| 4029 | + |
| 4030 | def testsimple_render_small_netplan(self): |
| 4031 | entry = NETWORK_CONFIGS['small'] |
| 4032 | files = self._render_and_read(network_config=yaml.load(entry['yaml'])) |
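The `expected_netplan-v2` block above (and the `test_netconfig.py` change that moves `version: 2` below `ethernets`) reflects that the rendered netplan YAML emits mapping keys in alphabetical order. A small sketch of that key ordering, assuming sorted emission; this is not cloud-init's renderer:

```python
# Illustrative: dump a nested dict with alphabetically sorted keys,
# which is why 'version: 2' lands after 'ethernets' in the expected
# netplan output above.


def dump_sorted(d, indent=0):
    lines = []
    for key in sorted(d):
        value = d[key]
        pad = ' ' * indent
        if isinstance(value, dict):
            lines.append('%s%s:' % (pad, key))
            lines.extend(dump_sorted(value, indent + 4))
        else:
            lines.append('%s%s: %s' % (pad, key, value))
    return lines


doc = {'version': 2, 'ethernets': {'eth9': {'dhcp4': True}}}
```

Here `dump_sorted(doc)` starts with `ethernets:` and ends with `version: 2`, matching the reordering seen in the diff.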
| 4033 | diff --git a/tests/unittests/test_vmware_config_file.py b/tests/unittests/test_vmware_config_file.py |
| 4034 | index 18475f1..03b36d3 100644 |
| 4035 | --- a/tests/unittests/test_vmware_config_file.py |
| 4036 | +++ b/tests/unittests/test_vmware_config_file.py |
| 4037 | @@ -7,8 +7,8 @@ |
| 4038 | |
| 4039 | import logging |
| 4040 | import sys |
| 4041 | -import unittest |
| 4042 | |
| 4043 | +from .helpers import CiTestCase |
| 4044 | from cloudinit.sources.helpers.vmware.imc.boot_proto import BootProtoEnum |
| 4045 | from cloudinit.sources.helpers.vmware.imc.config import Config |
| 4046 | from cloudinit.sources.helpers.vmware.imc.config_file import ConfigFile |
| 4047 | @@ -17,7 +17,7 @@ logging.basicConfig(level=logging.DEBUG, stream=sys.stdout) |
| 4048 | logger = logging.getLogger(__name__) |
| 4049 | |
| 4050 | |
| 4051 | -class TestVmwareConfigFile(unittest.TestCase): |
| 4052 | +class TestVmwareConfigFile(CiTestCase): |
| 4053 | |
| 4054 | def test_utility_methods(self): |
| 4055 | cf = ConfigFile("tests/data/vmware/cust-dhcp-2nic.cfg") |
| 4056 | @@ -90,4 +90,32 @@ class TestVmwareConfigFile(unittest.TestCase): |
| 4057 | self.assertEqual('00:50:56:a6:8c:08', nics[0].mac, "mac0") |
| 4058 | self.assertEqual(BootProtoEnum.DHCP, nics[0].bootProto, "bootproto0") |
| 4059 | |
| 4060 | + def test_config_password(self): |
| 4061 | + cf = ConfigFile("tests/data/vmware/cust-dhcp-2nic.cfg") |
| 4062 | + |
| 4063 | + cf._insertKey("PASSWORD|-PASS", "test-password") |
| 4064 | + cf._insertKey("PASSWORD|RESET", "no") |
| 4065 | + |
| 4066 | + conf = Config(cf) |
| 4067 | + self.assertEqual('test-password', conf.admin_password, "password") |
| 4068 | + self.assertFalse(conf.reset_password, "do not reset password") |
| 4069 | + |
| 4070 | + def test_config_reset_passwd(self): |
| 4071 | + cf = ConfigFile("tests/data/vmware/cust-dhcp-2nic.cfg") |
| 4072 | + |
| 4073 | + cf._insertKey("PASSWORD|-PASS", "test-password") |
| 4074 | + cf._insertKey("PASSWORD|RESET", "random") |
| 4075 | + |
| 4076 | + conf = Config(cf) |
| 4077 | + with self.assertRaises(ValueError): |
| 4078 | + conf.reset_password() |
| 4079 | + |
| 4080 | + cf.clear() |
| 4081 | + cf._insertKey("PASSWORD|RESET", "yes") |
| 4082 | + self.assertEqual(1, len(cf), "insert size") |
| 4083 | + |
| 4084 | + conf = Config(cf) |
| 4085 | + self.assertTrue(conf.reset_password, "reset password") |
| 4086 | + |
| 4087 | + |
| 4088 | # vi: ts=4 expandtab |
| 4089 | diff --git a/tox.ini b/tox.ini |
| 4090 | index 1140f9b..1e7ca2d 100644 |
| 4091 | --- a/tox.ini |
| 4092 | +++ b/tox.ini |
| 4093 | @@ -21,7 +21,11 @@ setenv = |
| 4094 | LC_ALL = en_US.utf-8 |
| 4095 | |
| 4096 | [testenv:pylint] |
| 4097 | -deps = pylint==1.7.1 |
| 4098 | +deps = |
| 4099 | + # requirements |
| 4100 | + pylint==1.7.1 |
| 4101 | + # test-requirements because unit tests are now present in cloudinit tree |
| 4102 | + -r{toxinidir}/test-requirements.txt |
| 4103 | commands = {envpython} -m pylint {posargs:cloudinit} |
| 4104 | |
| 4105 | [testenv:py3] |
| 4106 | @@ -29,7 +33,7 @@ basepython = python3 |
| 4107 | deps = -r{toxinidir}/test-requirements.txt |
| 4108 | commands = {envpython} -m nose {posargs:--with-coverage \ |
| 4109 | --cover-erase --cover-branches --cover-inclusive \ |
| 4110 | - --cover-package=cloudinit tests/unittests} |
| 4111 | + --cover-package=cloudinit tests/unittests cloudinit} |
| 4112 | |
| 4113 | [testenv:py27] |
| 4114 | basepython = python2.7 |
| 4115 | @@ -98,7 +102,11 @@ deps = pyflakes |
| 4116 | |
| 4117 | [testenv:tip-pylint] |
| 4118 | commands = {envpython} -m pylint {posargs:cloudinit} |
| 4119 | -deps = pylint |
| 4120 | +deps = |
| 4121 | + # requirements |
| 4122 | + pylint |
| 4123 | + # test-requirements |
| 4124 | + -r{toxinidir}/test-requirements.txt |
| 4125 | |
| 4126 | [testenv:citest] |
| 4127 | basepython = python3 |


Uploaded.
Thank you Ryan, https://lists.ubuntu.com/archives/artful-changes/2017-August/008659.html