Merge lp:~raharper/cloud-init/make-check-cleanup into lp:~cloud-init-dev/cloud-init/trunk
Proposed by Ryan Harper on 2016-03-03
| Status: | Merged |
|---|---|
| Merged at revision: | 1172 |
| Proposed branch: | lp:~raharper/cloud-init/make-check-cleanup |
| Merge into: | lp:~cloud-init-dev/cloud-init/trunk |
| Diff against target: | 1658 lines (+297/-265), 54 files modified |
| To merge this branch: | bzr merge lp:~raharper/cloud-init/make-check-cleanup |
| Related bugs: | |

Files modified (54):
  Makefile (+14/-8)
  bin/cloud-init (+22/-21)
  cloudinit/config/cc_apt_configure.py (+4/-2)
  cloudinit/config/cc_disk_setup.py (+17/-14)
  cloudinit/config/cc_grub_dpkg.py (+4/-4)
  cloudinit/config/cc_keys_to_console.py (+1/-1)
  cloudinit/config/cc_mounts.py (+6/-6)
  cloudinit/config/cc_power_state_change.py (+1/-1)
  cloudinit/config/cc_puppet.py (+3/-3)
  cloudinit/config/cc_resizefs.py (+1/-1)
  cloudinit/config/cc_rh_subscription.py (+2/-2)
  cloudinit/config/cc_set_hostname.py (+1/-1)
  cloudinit/config/cc_ssh.py (+4/-3)
  cloudinit/config/cc_update_etc_hosts.py (+3/-3)
  cloudinit/config/cc_update_hostname.py (+1/-1)
  cloudinit/config/cc_yum_add_repo.py (+1/-1)
  cloudinit/distros/__init__.py (+6/-6)
  cloudinit/distros/arch.py (+3/-3)
  cloudinit/distros/debian.py (+3/-2)
  cloudinit/distros/freebsd.py (+2/-2)
  cloudinit/distros/gentoo.py (+2/-2)
  cloudinit/distros/parsers/hostname.py (+1/-1)
  cloudinit/distros/parsers/resolv_conf.py (+1/-1)
  cloudinit/distros/parsers/sys_conf.py (+3/-4)
  cloudinit/filters/launch_index.py (+1/-1)
  cloudinit/helpers.py (+4/-3)
  cloudinit/sources/DataSourceAzure.py (+12/-9)
  cloudinit/sources/DataSourceConfigDrive.py (+1/-1)
  cloudinit/sources/DataSourceEc2.py (+5/-5)
  cloudinit/sources/DataSourceMAAS.py (+8/-7)
  cloudinit/sources/DataSourceOVF.py (+19/-13)
  cloudinit/sources/DataSourceOpenNebula.py (+2/-1)
  cloudinit/sources/DataSourceSmartOS.py (+3/-4)
  cloudinit/sources/helpers/vmware/imc/config_nic.py (+5/-5)
  cloudinit/ssh_util.py (+2/-1)
  cloudinit/stages.py (+9/-9)
  cloudinit/url_helper.py (+3/-3)
  cloudinit/util.py (+8/-7)
  tests/unittests/test_data.py (+3/-2)
  tests/unittests/test_datasource/test_altcloud.py (+10/-13)
  tests/unittests/test_datasource/test_azure.py (+8/-7)
  tests/unittests/test_datasource/test_configdrive.py (+6/-6)
  tests/unittests/test_datasource/test_maas.py (+8/-8)
  tests/unittests/test_datasource/test_smartos.py (+3/-3)
  tests/unittests/test_handler/test_handler_power_state.py (+1/-2)
  tests/unittests/test_handler/test_handler_seed_random.py (+2/-1)
  tests/unittests/test_handler/test_handler_snappy.py (+1/-2)
  tests/unittests/test_sshutil.py (+2/-1)
  tests/unittests/test_templating.py (+2/-1)
  tools/hacking.py (+8/-8)
  tools/mock-meta.py (+15/-12)
  tools/run-pep8 (+20/-37)
  tools/run-pyflakes (+18/-0)
  tools/run-pyflakes3 (+2/-0)
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Scott Moser | | 2016-03-03 | Pending |
Commit Message
Description of the Change
Apply pep8 and pyflakes fixes for Python 2 and 3.
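
Most of the diff is mechanical (continuation-line indentation and binary-operator placement), but a few fixes change code shape. One recurring case is pep8's E731 ("do not assign a lambda expression, use a def"), which this branch resolves in cc_apt_configure.py by replacing assigned lambdas with named functions. Below is a minimal runnable sketch of that pattern; the `build_matcher` wrapper is hypothetical, added only to make the example self-contained:

```python
import re


def build_matcher(matchcfg=None):
    """Return a callable that searches text for the configured regex.

    Hypothetical helper for illustration; in cc_apt_configure.py the
    same replacement is written inline.
    """
    if matchcfg:
        return re.compile(matchcfg).search

    # Previously written as `matcher = lambda f: False`, which pep8
    # flags as E731; a nested def is the idiomatic fix.
    def matcher(x):
        return False
    return matcher


print(bool(build_matcher(r"^deb\s")("deb http://archive.ubuntu.com/ubuntu")))
print(bool(build_matcher()("anything")))
```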
Update the make check target to run pep8, pyflakes, and pyflakes3.
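
Per the Makefile hunk below, `make check` now runs `check_version pep8 pyflakes pyflakes3 test`, with `test` delegating to the new `unittest` target that runs nosetests under both Python 2 and 3. A rough Python sketch of the equivalent pipeline (version check omitted), assuming the pep8, pyflakes, pyflakes3, and nosetests commands are on PATH; the real targets shell out through tools/run-pep8 and friends:

```python
import subprocess
import sys

# Order mirrors the reworked `check` target: static analysis first,
# then the unit tests under Python 2 and Python 3.
STEPS = [
    ["pep8", "cloudinit"],
    ["pyflakes", "cloudinit"],
    ["pyflakes3", "cloudinit"],
    ["nosetests", "-vv", "--nologcapture", "tests/unittests"],
    ["nosetests3", "-vv", "--nologcapture", "tests/unittests"],
]

for cmd in STEPS:
    print("Running:", " ".join(cmd))
    if subprocess.call(cmd) != 0:
        sys.exit(1)  # fail fast, like make does
```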
Preview Diff
| 1 | === modified file 'Makefile' |
| 2 | --- Makefile 2016-03-03 17:53:41 +0000 |
| 3 | +++ Makefile 2016-03-03 23:20:30 +0000 |
| 4 | @@ -1,13 +1,13 @@ |
| 5 | CWD=$(shell pwd) |
| 6 | PY_FILES=$(shell find cloudinit bin tests tools -name "*.py" -type f ) |
| 7 | PY_FILES+="bin/cloud-init" |
| 8 | -noseopts ?= -v |
| 9 | |
| 10 | YAML_FILES=$(shell find cloudinit bin tests tools -name "*.yaml" -type f ) |
| 11 | YAML_FILES+=$(shell find doc/examples -name "cloud-config*.txt" -type f ) |
| 12 | |
| 13 | CHANGELOG_VERSION=$(shell $(CWD)/tools/read-version) |
| 14 | CODE_VERSION=$(shell python -c "from cloudinit import version; print version.version_string()") |
| 15 | +noseopts ?= -vv --nologcapture |
| 16 | |
| 17 | PIP_INSTALL := pip install |
| 18 | |
| 19 | @@ -17,13 +17,20 @@ |
| 20 | |
| 21 | all: check |
| 22 | |
| 23 | -check: test check_version pyflakes |
| 24 | +check: check_version pep8 pyflakes pyflakes3 test |
| 25 | |
| 26 | pep8: |
| 27 | - @$(CWD)/tools/run-pep8 $(PY_FILES) |
| 28 | + @$(CWD)/tools/run-pep8 |
| 29 | |
| 30 | pyflakes: |
| 31 | - @pyflakes $(PY_FILES) |
| 32 | + @$(CWD)/tools/run-pyflakes |
| 33 | + |
| 34 | +pyflakes3: |
| 35 | + @$(CWD)/tools/run-pyflakes3 |
| 36 | + |
| 37 | +unittest: clean_pyc |
| 38 | + nosetests $(noseopts) tests/unittests |
| 39 | + nosetests3 $(noseopts) tests/unittests |
| 40 | |
| 41 | pip-requirements: |
| 42 | @echo "Installing cloud-init dependencies..." |
| 43 | @@ -33,8 +40,7 @@ |
| 44 | @echo "Installing cloud-init test dependencies..." |
| 45 | $(PIP_INSTALL) -r "$@.txt" -q |
| 46 | |
| 47 | -test: clean_pyc |
| 48 | - @n=$$(which nosetests3) || n=nosetests; set -- $$n $(noseopts) tests/; echo "Running $$*"; "$$@" |
| 49 | +test: unittest |
| 50 | |
| 51 | check_version: |
| 52 | @if [ "$(CHANGELOG_VERSION)" != "$(CODE_VERSION)" ]; then \ |
| 53 | @@ -60,5 +66,5 @@ |
| 54 | deb: |
| 55 | ./packages/bddeb |
| 56 | |
| 57 | -.PHONY: test pyflakes 2to3 clean pep8 rpm deb yaml check_version |
| 58 | -.PHONY: pip-test-requirements pip-requirements clean_pyc |
| 59 | +.PHONY: test pyflakes pyflakes3 2to3 clean pep8 rpm deb yaml check_version |
| 60 | +.PHONY: pip-test-requirements pip-requirements clean_pyc unittest |
| 61 | |
| 62 | === modified file 'bin/cloud-init' |
| 63 | --- bin/cloud-init 2015-09-03 14:47:28 +0000 |
| 64 | +++ bin/cloud-init 2016-03-03 23:20:30 +0000 |
| 65 | @@ -194,7 +194,7 @@ |
| 66 | if args.debug: |
| 67 | # Reset so that all the debug handlers are closed out |
| 68 | LOG.debug(("Logging being reset, this logger may no" |
| 69 | - " longer be active shortly")) |
| 70 | + " longer be active shortly")) |
| 71 | logging.resetLogging() |
| 72 | logging.setupLogging(init.cfg) |
| 73 | apply_reporting_cfg(init.cfg) |
| 74 | @@ -276,9 +276,9 @@ |
| 75 | # This may run user-data handlers and/or perform |
| 76 | # url downloads and such as needed. |
| 77 | (ran, _results) = init.cloudify().run('consume_data', |
| 78 | - init.consume_data, |
| 79 | - args=[PER_INSTANCE], |
| 80 | - freq=PER_INSTANCE) |
| 81 | + init.consume_data, |
| 82 | + args=[PER_INSTANCE], |
| 83 | + freq=PER_INSTANCE) |
| 84 | if not ran: |
| 85 | # Just consume anything that is set to run per-always |
| 86 | # if nothing ran in the per-instance code |
| 87 | @@ -349,7 +349,7 @@ |
| 88 | if args.debug: |
| 89 | # Reset so that all the debug handlers are closed out |
| 90 | LOG.debug(("Logging being reset, this logger may no" |
| 91 | - " longer be active shortly")) |
| 92 | + " longer be active shortly")) |
| 93 | logging.resetLogging() |
| 94 | logging.setupLogging(mods.cfg) |
| 95 | apply_reporting_cfg(init.cfg) |
| 96 | @@ -534,7 +534,8 @@ |
| 97 | errors.extend(v1[m].get('errors', [])) |
| 98 | |
| 99 | atomic_write_json(result_path, |
| 100 | - {'v1': {'datasource': v1['datasource'], 'errors': errors}}) |
| 101 | + {'v1': {'datasource': v1['datasource'], |
| 102 | + 'errors': errors}}) |
| 103 | util.sym_link(os.path.relpath(result_path, link_d), result_link, |
| 104 | force=True) |
| 105 | |
| 106 | @@ -578,13 +579,13 @@ |
| 107 | |
| 108 | # These settings are used for the 'config' and 'final' stages |
| 109 | parser_mod = subparsers.add_parser('modules', |
| 110 | - help=('activates modules ' |
| 111 | - 'using a given configuration key')) |
| 112 | + help=('activates modules using ' |
| 113 | + 'a given configuration key')) |
| 114 | parser_mod.add_argument("--mode", '-m', action='store', |
| 115 | - help=("module configuration name " |
| 116 | - "to use (default: %(default)s)"), |
| 117 | - default='config', |
| 118 | - choices=('init', 'config', 'final')) |
| 119 | + help=("module configuration name " |
| 120 | + "to use (default: %(default)s)"), |
| 121 | + default='config', |
| 122 | + choices=('init', 'config', 'final')) |
| 123 | parser_mod.set_defaults(action=('modules', main_modules)) |
| 124 | |
| 125 | # These settings are used when you want to query information |
| 126 | @@ -600,22 +601,22 @@ |
| 127 | |
| 128 | # This subcommand allows you to run a single module |
| 129 | parser_single = subparsers.add_parser('single', |
| 130 | - help=('run a single module ')) |
| 131 | + help=('run a single module ')) |
| 132 | parser_single.set_defaults(action=('single', main_single)) |
| 133 | parser_single.add_argument("--name", '-n', action="store", |
| 134 | - help="module name to run", |
| 135 | - required=True) |
| 136 | + help="module name to run", |
| 137 | + required=True) |
| 138 | parser_single.add_argument("--frequency", action="store", |
| 139 | - help=("frequency of the module"), |
| 140 | - required=False, |
| 141 | - choices=list(FREQ_SHORT_NAMES.keys())) |
| 142 | + help=("frequency of the module"), |
| 143 | + required=False, |
| 144 | + choices=list(FREQ_SHORT_NAMES.keys())) |
| 145 | parser_single.add_argument("--report", action="store_true", |
| 146 | help="enable reporting", |
| 147 | required=False) |
| 148 | parser_single.add_argument("module_args", nargs="*", |
| 149 | - metavar='argument', |
| 150 | - help=('any additional arguments to' |
| 151 | - ' pass to this module')) |
| 152 | + metavar='argument', |
| 153 | + help=('any additional arguments to' |
| 154 | + ' pass to this module')) |
| 155 | parser_single.set_defaults(action=('single', main_single)) |
| 156 | |
| 157 | args = parser.parse_args() |
| 158 | |
| 159 | === modified file 'cloudinit/config/cc_apt_configure.py' |
| 160 | --- cloudinit/config/cc_apt_configure.py 2015-06-15 21:20:51 +0000 |
| 161 | +++ cloudinit/config/cc_apt_configure.py 2016-03-03 23:20:30 +0000 |
| 162 | @@ -91,7 +91,8 @@ |
| 163 | if matchcfg: |
| 164 | matcher = re.compile(matchcfg).search |
| 165 | else: |
| 166 | - matcher = lambda f: False |
| 167 | + def matcher(x): |
| 168 | + return False |
| 169 | |
| 170 | errors = add_sources(cfg['apt_sources'], params, |
| 171 | aa_repo_match=matcher) |
| 172 | @@ -173,7 +174,8 @@ |
| 173 | template_params = {} |
| 174 | |
| 175 | if aa_repo_match is None: |
| 176 | - aa_repo_match = lambda f: False |
| 177 | + def aa_repo_match(x): |
| 178 | + return False |
| 179 | |
| 180 | errorlist = [] |
| 181 | for ent in srclist: |
| 182 | |
| 183 | === modified file 'cloudinit/config/cc_disk_setup.py' |
| 184 | --- cloudinit/config/cc_disk_setup.py 2015-07-22 19:14:33 +0000 |
| 185 | +++ cloudinit/config/cc_disk_setup.py 2016-03-03 23:20:30 +0000 |
| 186 | @@ -167,11 +167,12 @@ |
| 187 | parts = [x for x in (info.strip()).splitlines() if len(x.split()) > 0] |
| 188 | |
| 189 | for part in parts: |
| 190 | - d = {'name': None, |
| 191 | - 'type': None, |
| 192 | - 'fstype': None, |
| 193 | - 'label': None, |
| 194 | - } |
| 195 | + d = { |
| 196 | + 'name': None, |
| 197 | + 'type': None, |
| 198 | + 'fstype': None, |
| 199 | + 'label': None, |
| 200 | + } |
| 201 | |
| 202 | for key, value in value_splitter(part): |
| 203 | d[key.lower()] = value |
| 204 | @@ -701,11 +702,12 @@ |
| 205 | """ |
| 206 | A force flag might be -F or -F, this look it up |
| 207 | """ |
| 208 | - flags = {'ext': '-F', |
| 209 | - 'btrfs': '-f', |
| 210 | - 'xfs': '-f', |
| 211 | - 'reiserfs': '-f', |
| 212 | - } |
| 213 | + flags = { |
| 214 | + 'ext': '-F', |
| 215 | + 'btrfs': '-f', |
| 216 | + 'xfs': '-f', |
| 217 | + 'reiserfs': '-f', |
| 218 | + } |
| 219 | |
| 220 | if 'ext' in fs.lower(): |
| 221 | fs = 'ext' |
| 222 | @@ -824,10 +826,11 @@ |
| 223 | |
| 224 | # Create the commands |
| 225 | if fs_cmd: |
| 226 | - fs_cmd = fs_cfg['cmd'] % {'label': label, |
| 227 | - 'filesystem': fs_type, |
| 228 | - 'device': device, |
| 229 | - } |
| 230 | + fs_cmd = fs_cfg['cmd'] % { |
| 231 | + 'label': label, |
| 232 | + 'filesystem': fs_type, |
| 233 | + 'device': device, |
| 234 | + } |
| 235 | else: |
| 236 | # Find the mkfs command |
| 237 | mkfs_cmd = util.which("mkfs.%s" % fs_type) |
| 238 | |
| 239 | === modified file 'cloudinit/config/cc_grub_dpkg.py' |
| 240 | --- cloudinit/config/cc_grub_dpkg.py 2015-03-04 19:49:59 +0000 |
| 241 | +++ cloudinit/config/cc_grub_dpkg.py 2016-03-03 23:20:30 +0000 |
| 242 | @@ -38,11 +38,11 @@ |
| 243 | |
| 244 | idevs = util.get_cfg_option_str(mycfg, "grub-pc/install_devices", None) |
| 245 | idevs_empty = util.get_cfg_option_str(mycfg, |
| 246 | - "grub-pc/install_devices_empty", None) |
| 247 | + "grub-pc/install_devices_empty", |
| 248 | + None) |
| 249 | |
| 250 | if ((os.path.exists("/dev/sda1") and not os.path.exists("/dev/sda")) or |
| 251 | - (os.path.exists("/dev/xvda1") |
| 252 | - and not os.path.exists("/dev/xvda"))): |
| 253 | + (os.path.exists("/dev/xvda1") and not os.path.exists("/dev/xvda"))): |
| 254 | if idevs is None: |
| 255 | idevs = "" |
| 256 | if idevs_empty is None: |
| 257 | @@ -66,7 +66,7 @@ |
| 258 | (idevs, idevs_empty)) |
| 259 | |
| 260 | log.debug("Setting grub debconf-set-selections with '%s','%s'" % |
| 261 | - (idevs, idevs_empty)) |
| 262 | + (idevs, idevs_empty)) |
| 263 | |
| 264 | try: |
| 265 | util.subp(['debconf-set-selections'], dconf_sel) |
| 266 | |
| 267 | === modified file 'cloudinit/config/cc_keys_to_console.py' |
| 268 | --- cloudinit/config/cc_keys_to_console.py 2014-11-25 19:54:51 +0000 |
| 269 | +++ cloudinit/config/cc_keys_to_console.py 2016-03-03 23:20:30 +0000 |
| 270 | @@ -48,7 +48,7 @@ |
| 271 | "ssh_fp_console_blacklist", []) |
| 272 | key_blacklist = util.get_cfg_option_list(cfg, |
| 273 | "ssh_key_console_blacklist", |
| 274 | - ["ssh-dss"]) |
| 275 | + ["ssh-dss"]) |
| 276 | |
| 277 | try: |
| 278 | cmd = [helper_path] |
| 279 | |
| 280 | === modified file 'cloudinit/config/cc_mounts.py' |
| 281 | --- cloudinit/config/cc_mounts.py 2015-11-09 23:40:43 +0000 |
| 282 | +++ cloudinit/config/cc_mounts.py 2016-03-03 23:20:30 +0000 |
| 283 | @@ -204,12 +204,12 @@ |
| 284 | try: |
| 285 | util.ensure_dir(tdir) |
| 286 | util.log_time(LOG.debug, msg, func=util.subp, |
| 287 | - args=[['sh', '-c', |
| 288 | - ('rm -f "$1" && umask 0066 && ' |
| 289 | - '{ fallocate -l "${2}M" "$1" || ' |
| 290 | - ' dd if=/dev/zero "of=$1" bs=1M "count=$2"; } && ' |
| 291 | - 'mkswap "$1" || { r=$?; rm -f "$1"; exit $r; }'), |
| 292 | - 'setup_swap', fname, mbsize]]) |
| 293 | + args=[['sh', '-c', |
| 294 | + ('rm -f "$1" && umask 0066 && ' |
| 295 | + '{ fallocate -l "${2}M" "$1" || ' |
| 296 | + ' dd if=/dev/zero "of=$1" bs=1M "count=$2"; } && ' |
| 297 | + 'mkswap "$1" || { r=$?; rm -f "$1"; exit $r; }'), |
| 298 | + 'setup_swap', fname, mbsize]]) |
| 299 | |
| 300 | except Exception as e: |
| 301 | raise IOError("Failed %s: %s" % (msg, e)) |
| 302 | |
| 303 | === modified file 'cloudinit/config/cc_power_state_change.py' |
| 304 | --- cloudinit/config/cc_power_state_change.py 2015-09-08 20:53:59 +0000 |
| 305 | +++ cloudinit/config/cc_power_state_change.py 2016-03-03 23:20:30 +0000 |
| 306 | @@ -105,7 +105,7 @@ |
| 307 | |
| 308 | log.debug("After pid %s ends, will execute: %s" % (mypid, ' '.join(args))) |
| 309 | |
| 310 | - util.fork_cb(run_after_pid_gone, mypid, cmdline, timeout, log, |
| 311 | + util.fork_cb(run_after_pid_gone, mypid, cmdline, timeout, log, |
| 312 | condition, execmd, [args, devnull_fp]) |
| 313 | |
| 314 | |
| 315 | |
| 316 | === modified file 'cloudinit/config/cc_puppet.py' |
| 317 | --- cloudinit/config/cc_puppet.py 2015-01-27 19:24:22 +0000 |
| 318 | +++ cloudinit/config/cc_puppet.py 2016-03-03 23:20:30 +0000 |
| 319 | @@ -36,8 +36,8 @@ |
| 320 | # Set puppet to automatically start |
| 321 | if os.path.exists('/etc/default/puppet'): |
| 322 | util.subp(['sed', '-i', |
| 323 | - '-e', 's/^START=.*/START=yes/', |
| 324 | - '/etc/default/puppet'], capture=False) |
| 325 | + '-e', 's/^START=.*/START=yes/', |
| 326 | + '/etc/default/puppet'], capture=False) |
| 327 | elif os.path.exists('/bin/systemctl'): |
| 328 | util.subp(['/bin/systemctl', 'enable', 'puppet.service'], |
| 329 | capture=False) |
| 330 | @@ -65,7 +65,7 @@ |
| 331 | " doing nothing.")) |
| 332 | elif install: |
| 333 | log.debug(("Attempting to install puppet %s,"), |
| 334 | - version if version else 'latest') |
| 335 | + version if version else 'latest') |
| 336 | cloud.distro.install_packages(('puppet', version)) |
| 337 | |
| 338 | # ... and then update the puppet configuration |
| 339 | |
| 340 | === modified file 'cloudinit/config/cc_resizefs.py' |
| 341 | --- cloudinit/config/cc_resizefs.py 2014-09-16 00:13:07 +0000 |
| 342 | +++ cloudinit/config/cc_resizefs.py 2016-03-03 23:20:30 +0000 |
| 343 | @@ -166,7 +166,7 @@ |
| 344 | func=do_resize, args=(resize_cmd, log)) |
| 345 | else: |
| 346 | util.log_time(logfunc=log.debug, msg="Resizing", |
| 347 | - func=do_resize, args=(resize_cmd, log)) |
| 348 | + func=do_resize, args=(resize_cmd, log)) |
| 349 | |
| 350 | action = 'Resized' |
| 351 | if resize_root == NOBLOCK: |
| 352 | |
| 353 | === modified file 'cloudinit/config/cc_rh_subscription.py' |
| 354 | --- cloudinit/config/cc_rh_subscription.py 2015-08-05 02:57:57 +0000 |
| 355 | +++ cloudinit/config/cc_rh_subscription.py 2016-03-03 23:20:30 +0000 |
| 356 | @@ -127,8 +127,8 @@ |
| 357 | return False, not_bool |
| 358 | |
| 359 | if (self.servicelevel is not None) and \ |
| 360 | - ((not self.auto_attach) |
| 361 | - or (util.is_false(str(self.auto_attach)))): |
| 362 | + ((not self.auto_attach) or |
| 363 | + (util.is_false(str(self.auto_attach)))): |
| 364 | |
| 365 | no_auto = ("The service-level key must be used in conjunction " |
| 366 | "with the auto-attach key. Please re-run with " |
| 367 | |
| 368 | === modified file 'cloudinit/config/cc_set_hostname.py' |
| 369 | --- cloudinit/config/cc_set_hostname.py 2014-02-05 15:36:47 +0000 |
| 370 | +++ cloudinit/config/cc_set_hostname.py 2016-03-03 23:20:30 +0000 |
| 371 | @@ -24,7 +24,7 @@ |
| 372 | def handle(name, cfg, cloud, log, _args): |
| 373 | if util.get_cfg_option_bool(cfg, "preserve_hostname", False): |
| 374 | log.debug(("Configuration option 'preserve_hostname' is set," |
| 375 | - " not setting the hostname in module %s"), name) |
| 376 | + " not setting the hostname in module %s"), name) |
| 377 | return |
| 378 | |
| 379 | (hostname, fqdn) = util.get_hostname_fqdn(cfg, cloud) |
| 380 | |
| 381 | === modified file 'cloudinit/config/cc_ssh.py' |
| 382 | --- cloudinit/config/cc_ssh.py 2015-07-22 18:47:19 +0000 |
| 383 | +++ cloudinit/config/cc_ssh.py 2016-03-03 23:20:30 +0000 |
| 384 | @@ -30,9 +30,10 @@ |
| 385 | from cloudinit import ssh_util |
| 386 | from cloudinit import util |
| 387 | |
| 388 | -DISABLE_ROOT_OPTS = ("no-port-forwarding,no-agent-forwarding," |
| 389 | -"no-X11-forwarding,command=\"echo \'Please login as the user \\\"$USER\\\" " |
| 390 | -"rather than the user \\\"root\\\".\';echo;sleep 10\"") |
| 391 | +DISABLE_ROOT_OPTS = ( |
| 392 | + "no-port-forwarding,no-agent-forwarding," |
| 393 | + "no-X11-forwarding,command=\"echo \'Please login as the user \\\"$USER\\\"" |
| 394 | + " rather than the user \\\"root\\\".\';echo;sleep 10\"") |
| 395 | |
| 396 | GENERATE_KEY_NAMES = ['rsa', 'dsa', 'ecdsa', 'ed25519'] |
| 397 | KEY_FILE_TPL = '/etc/ssh/ssh_host_%s_key' |
| 398 | |
| 399 | === modified file 'cloudinit/config/cc_update_etc_hosts.py' |
| 400 | --- cloudinit/config/cc_update_etc_hosts.py 2014-02-05 15:36:47 +0000 |
| 401 | +++ cloudinit/config/cc_update_etc_hosts.py 2016-03-03 23:20:30 +0000 |
| 402 | @@ -41,10 +41,10 @@ |
| 403 | if not tpl_fn_name: |
| 404 | raise RuntimeError(("No hosts template could be" |
| 405 | " found for distro %s") % |
| 406 | - (cloud.distro.osfamily)) |
| 407 | + (cloud.distro.osfamily)) |
| 408 | |
| 409 | templater.render_to_file(tpl_fn_name, '/etc/hosts', |
| 410 | - {'hostname': hostname, 'fqdn': fqdn}) |
| 411 | + {'hostname': hostname, 'fqdn': fqdn}) |
| 412 | |
| 413 | elif manage_hosts == "localhost": |
| 414 | (hostname, fqdn) = util.get_hostname_fqdn(cfg, cloud) |
| 415 | @@ -57,4 +57,4 @@ |
| 416 | cloud.distro.update_etc_hosts(hostname, fqdn) |
| 417 | else: |
| 418 | log.debug(("Configuration option 'manage_etc_hosts' is not set," |
| 419 | - " not managing /etc/hosts in module %s"), name) |
| 420 | + " not managing /etc/hosts in module %s"), name) |
| 421 | |
| 422 | === modified file 'cloudinit/config/cc_update_hostname.py' |
| 423 | --- cloudinit/config/cc_update_hostname.py 2014-02-05 15:36:47 +0000 |
| 424 | +++ cloudinit/config/cc_update_hostname.py 2016-03-03 23:20:30 +0000 |
| 425 | @@ -29,7 +29,7 @@ |
| 426 | def handle(name, cfg, cloud, log, _args): |
| 427 | if util.get_cfg_option_bool(cfg, "preserve_hostname", False): |
| 428 | log.debug(("Configuration option 'preserve_hostname' is set," |
| 429 | - " not updating the hostname in module %s"), name) |
| 430 | + " not updating the hostname in module %s"), name) |
| 431 | return |
| 432 | |
| 433 | (hostname, fqdn) = util.get_hostname_fqdn(cfg, cloud) |
| 434 | |
| 435 | === modified file 'cloudinit/config/cc_yum_add_repo.py' |
| 436 | --- cloudinit/config/cc_yum_add_repo.py 2015-01-21 22:56:53 +0000 |
| 437 | +++ cloudinit/config/cc_yum_add_repo.py 2016-03-03 23:20:30 +0000 |
| 438 | @@ -92,7 +92,7 @@ |
| 439 | for req_field in ['baseurl']: |
| 440 | if req_field not in repo_config: |
| 441 | log.warn(("Repository %s does not contain a %s" |
| 442 | - " configuration 'required' entry"), |
| 443 | + " configuration 'required' entry"), |
| 444 | repo_id, req_field) |
| 445 | missing_required += 1 |
| 446 | if not missing_required: |
| 447 | |
| 448 | === modified file 'cloudinit/distros/__init__.py' |
| 449 | --- cloudinit/distros/__init__.py 2016-03-01 17:30:31 +0000 |
| 450 | +++ cloudinit/distros/__init__.py 2016-03-03 23:20:30 +0000 |
| 451 | @@ -211,8 +211,8 @@ |
| 452 | |
| 453 | # If the system hostname is different than the previous |
| 454 | # one or the desired one lets update it as well |
| 455 | - if (not sys_hostname) or (sys_hostname == prev_hostname |
| 456 | - and sys_hostname != hostname): |
| 457 | + if ((not sys_hostname) or (sys_hostname == prev_hostname and |
| 458 | + sys_hostname != hostname)): |
| 459 | update_files.append(sys_fn) |
| 460 | |
| 461 | # If something else has changed the hostname after we set it |
| 462 | @@ -221,7 +221,7 @@ |
| 463 | if (sys_hostname and prev_hostname and |
| 464 | sys_hostname != prev_hostname): |
| 465 | LOG.info("%s differs from %s, assuming user maintained hostname.", |
| 466 | - prev_hostname_fn, sys_fn) |
| 467 | + prev_hostname_fn, sys_fn) |
| 468 | return |
| 469 | |
| 470 | # Remove duplicates (incase the previous config filename) |
| 471 | @@ -289,7 +289,7 @@ |
| 472 | def _bring_up_interface(self, device_name): |
| 473 | cmd = ['ifup', device_name] |
| 474 | LOG.debug("Attempting to run bring up interface %s using command %s", |
| 475 | - device_name, cmd) |
| 476 | + device_name, cmd) |
| 477 | try: |
| 478 | (_out, err) = util.subp(cmd) |
| 479 | if len(err): |
| 480 | @@ -548,7 +548,7 @@ |
| 481 | for member in members: |
| 482 | if not util.is_user(member): |
| 483 | LOG.warn("Unable to add group member '%s' to group '%s'" |
| 484 | - "; user does not exist.", member, name) |
| 485 | + "; user does not exist.", member, name) |
| 486 | continue |
| 487 | |
| 488 | util.subp(['usermod', '-a', '-G', name, member]) |
| 489 | @@ -886,7 +886,7 @@ |
| 490 | locs, looked_locs = importer.find_module(name, ['', __name__], ['Distro']) |
| 491 | if not locs: |
| 492 | raise ImportError("No distribution found for distro %s (searched %s)" |
| 493 | - % (name, looked_locs)) |
| 494 | + % (name, looked_locs)) |
| 495 | mod = importer.import_module(locs[0]) |
| 496 | cls = getattr(mod, 'Distro') |
| 497 | return cls |
| 498 | |
| 499 | === modified file 'cloudinit/distros/arch.py' |
| 500 | --- cloudinit/distros/arch.py 2015-01-27 19:24:22 +0000 |
| 501 | +++ cloudinit/distros/arch.py 2016-03-03 23:20:30 +0000 |
| 502 | @@ -74,7 +74,7 @@ |
| 503 | 'Interface': dev, |
| 504 | 'IP': info.get('bootproto'), |
| 505 | 'Address': "('%s/%s')" % (info.get('address'), |
| 506 | - info.get('netmask')), |
| 507 | + info.get('netmask')), |
| 508 | 'Gateway': info.get('gateway'), |
| 509 | 'DNS': str(tuple(info.get('dns-nameservers'))).replace(',', '') |
| 510 | } |
| 511 | @@ -86,7 +86,7 @@ |
| 512 | |
| 513 | if nameservers: |
| 514 | util.write_file(self.resolve_conf_fn, |
| 515 | - convert_resolv_conf(nameservers)) |
| 516 | + convert_resolv_conf(nameservers)) |
| 517 | |
| 518 | return dev_names |
| 519 | |
| 520 | @@ -102,7 +102,7 @@ |
| 521 | def _bring_up_interface(self, device_name): |
| 522 | cmd = ['netctl', 'restart', device_name] |
| 523 | LOG.debug("Attempting to run bring up interface %s using command %s", |
| 524 | - device_name, cmd) |
| 525 | + device_name, cmd) |
| 526 | try: |
| 527 | (_out, err) = util.subp(cmd) |
| 528 | if len(err): |
| 529 | |
| 530 | === modified file 'cloudinit/distros/debian.py' |
| 531 | --- cloudinit/distros/debian.py 2015-01-23 02:21:04 +0000 |
| 532 | +++ cloudinit/distros/debian.py 2016-03-03 23:20:30 +0000 |
| 533 | @@ -159,8 +159,9 @@ |
| 534 | |
| 535 | # Allow the output of this to flow outwards (ie not be captured) |
| 536 | util.log_time(logfunc=LOG.debug, |
| 537 | - msg="apt-%s [%s]" % (command, ' '.join(cmd)), func=util.subp, |
| 538 | - args=(cmd,), kwargs={'env': e, 'capture': False}) |
| 539 | + msg="apt-%s [%s]" % (command, ' '.join(cmd)), |
| 540 | + func=util.subp, |
| 541 | + args=(cmd,), kwargs={'env': e, 'capture': False}) |
| 542 | |
| 543 | def update_package_sources(self): |
| 544 | self._runner.run("update-sources", self.package_command, |
| 545 | |
| 546 | === modified file 'cloudinit/distros/freebsd.py' |
| 547 | --- cloudinit/distros/freebsd.py 2015-01-21 22:56:53 +0000 |
| 548 | +++ cloudinit/distros/freebsd.py 2016-03-03 23:20:30 +0000 |
| 549 | @@ -205,8 +205,8 @@ |
| 550 | redact_opts = ['passwd'] |
| 551 | |
| 552 | for key, val in kwargs.items(): |
| 553 | - if (key in adduser_opts and val |
| 554 | - and isinstance(val, six.string_types)): |
| 555 | + if (key in adduser_opts and val and |
| 556 | + isinstance(val, six.string_types)): |
| 557 | adduser_cmd.extend([adduser_opts[key], val]) |
| 558 | |
| 559 | # Redact certain fields from the logs |
| 560 | |
| 561 | === modified file 'cloudinit/distros/gentoo.py' |
| 562 | --- cloudinit/distros/gentoo.py 2015-01-27 19:24:22 +0000 |
| 563 | +++ cloudinit/distros/gentoo.py 2016-03-03 23:20:30 +0000 |
| 564 | @@ -66,7 +66,7 @@ |
| 565 | def _bring_up_interface(self, device_name): |
| 566 | cmd = ['/etc/init.d/net.%s' % device_name, 'restart'] |
| 567 | LOG.debug("Attempting to run bring up interface %s using command %s", |
| 568 | - device_name, cmd) |
| 569 | + device_name, cmd) |
| 570 | try: |
| 571 | (_out, err) = util.subp(cmd) |
| 572 | if len(err): |
| 573 | @@ -88,7 +88,7 @@ |
| 574 | (_out, err) = util.subp(cmd) |
| 575 | if len(err): |
| 576 | LOG.warn("Running %s resulted in stderr output: %s", cmd, |
| 577 | - err) |
| 578 | + err) |
| 579 | except util.ProcessExecutionError: |
| 580 | util.logexc(LOG, "Running interface command %s failed", cmd) |
| 581 | return False |
| 582 | |
| 583 | === modified file 'cloudinit/distros/parsers/hostname.py' |
| 584 | --- cloudinit/distros/parsers/hostname.py 2015-01-21 22:56:53 +0000 |
| 585 | +++ cloudinit/distros/parsers/hostname.py 2016-03-03 23:20:30 +0000 |
| 586 | @@ -84,5 +84,5 @@ |
| 587 | hostnames_found.add(head) |
| 588 | if len(hostnames_found) > 1: |
| 589 | raise IOError("Multiple hostnames (%s) found!" |
| 590 | - % (hostnames_found)) |
| 591 | + % (hostnames_found)) |
| 592 | return entries |
| 593 | |
| 594 | === modified file 'cloudinit/distros/parsers/resolv_conf.py' |
| 595 | --- cloudinit/distros/parsers/resolv_conf.py 2015-01-21 22:56:53 +0000 |
| 596 | +++ cloudinit/distros/parsers/resolv_conf.py 2016-03-03 23:20:30 +0000 |
| 597 | @@ -132,7 +132,7 @@ |
| 598 | # Some hard limit on 256 chars total |
| 599 | raise ValueError(("Adding %r would go beyond the " |
| 600 | "256 maximum search list character limit") |
| 601 | - % (search_domain)) |
| 602 | + % (search_domain)) |
| 603 | self._remove_option('search') |
| 604 | self._contents.append(('option', ['search', s_list, ''])) |
| 605 | return flat_sds |
| 606 | |
| 607 | === modified file 'cloudinit/distros/parsers/sys_conf.py' |
| 608 | --- cloudinit/distros/parsers/sys_conf.py 2015-01-21 22:56:53 +0000 |
| 609 | +++ cloudinit/distros/parsers/sys_conf.py 2016-03-03 23:20:30 +0000 |
| 610 | @@ -77,8 +77,7 @@ |
| 611 | quot_func = None |
| 612 | if value[0] in ['"', "'"] and value[-1] in ['"', "'"]: |
| 613 | if len(value) == 1: |
| 614 | - quot_func = (lambda x: |
| 615 | - self._get_single_quote(x) % x) |
| 616 | + quot_func = (lambda x: self._get_single_quote(x) % x) |
| 617 | else: |
| 618 | # Quote whitespace if it isn't the start + end of a shell command |
| 619 | if value.strip().startswith("$(") and value.strip().endswith(")"): |
| 620 | @@ -91,10 +90,10 @@ |
| 621 | # to use single quotes which won't get expanded... |
| 622 | if re.search(r"[\n\"']", value): |
| 623 | quot_func = (lambda x: |
| 624 | - self._get_triple_quote(x) % x) |
| 625 | + self._get_triple_quote(x) % x) |
| 626 | else: |
| 627 | quot_func = (lambda x: |
| 628 | - self._get_single_quote(x) % x) |
| 629 | + self._get_single_quote(x) % x) |
| 630 | else: |
| 631 | quot_func = pipes.quote |
| 632 | if not quot_func: |
| 633 | |
| 634 | === modified file 'cloudinit/filters/launch_index.py' |
| 635 | --- cloudinit/filters/launch_index.py 2012-09-02 00:00:34 +0000 |
| 636 | +++ cloudinit/filters/launch_index.py 2016-03-03 23:20:30 +0000 |
| 637 | @@ -61,7 +61,7 @@ |
| 638 | discarded += 1 |
| 639 | LOG.debug(("Discarding %s multipart messages " |
| 640 | "which do not match launch index %s"), |
| 641 | - discarded, self.wanted_idx) |
| 642 | + discarded, self.wanted_idx) |
| 643 | new_message = copy.copy(message) |
| 644 | new_message.set_payload(new_msgs) |
| 645 | new_message[ud.ATTACHMENT_FIELD] = str(len(new_msgs)) |
| 646 | |
| 647 | === modified file 'cloudinit/helpers.py' |
| 648 | --- cloudinit/helpers.py 2015-01-27 19:34:25 +0000 |
| 649 | +++ cloudinit/helpers.py 2016-03-03 23:20:30 +0000 |
| 650 | @@ -139,9 +139,10 @@ |
| 651 | # but the item had run before we did canon_sem_name. |
| 652 | if cname != name and os.path.exists(self._get_path(name, freq)): |
| 653 | LOG.warn("%s has run without canonicalized name [%s].\n" |
| 654 | - "likely the migrator has not yet run. It will run next boot.\n" |
| 655 | - "run manually with: cloud-init single --name=migrator" |
| 656 | - % (name, cname)) |
| 657 | + "likely the migrator has not yet run. " |
| 658 | + "It will run next boot.\n" |
| 659 | + "run manually with: cloud-init single --name=migrator" |
| 660 | + % (name, cname)) |
| 661 | return True |
| 662 | |
| 663 | return False |
| 664 | |
| 665 | === modified file 'cloudinit/sources/DataSourceAzure.py' |
| 666 | --- cloudinit/sources/DataSourceAzure.py 2015-10-30 16:26:31 +0000 |
| 667 | +++ cloudinit/sources/DataSourceAzure.py 2016-03-03 23:20:30 +0000 |
| 668 | @@ -38,7 +38,8 @@ |
| 669 | DS_NAME = 'Azure' |
| 670 | DEFAULT_METADATA = {"instance-id": "iid-AZURE-NODE"} |
| 671 | AGENT_START = ['service', 'walinuxagent', 'start'] |
| 672 | -BOUNCE_COMMAND = ['sh', '-xc', |
| 673 | +BOUNCE_COMMAND = [ |
| 674 | + 'sh', '-xc', |
| 675 | "i=$interface; x=0; ifdown $i || x=$?; ifup $i || x=$?; exit $x"] |
| 676 | |
| 677 | BUILTIN_DS_CONFIG = { |
| 678 | @@ -91,9 +92,9 @@ |
| 679 | """ |
| 680 | policy = cfg['hostname_bounce']['policy'] |
| 681 | previous_hostname = get_hostname(hostname_command) |
| 682 | - if (not util.is_true(cfg.get('set_hostname')) |
| 683 | - or util.is_false(policy) |
| 684 | - or (previous_hostname == temp_hostname and policy != 'force')): |
| 685 | + if (not util.is_true(cfg.get('set_hostname')) or |
| 686 | + util.is_false(policy) or |
| 687 | + (previous_hostname == temp_hostname and policy != 'force')): |
| 688 | yield None |
| 689 | return |
| 690 | set_hostname(temp_hostname, hostname_command) |
| 691 | @@ -123,8 +124,8 @@ |
| 692 | with temporary_hostname(temp_hostname, self.ds_cfg, |
| 693 | hostname_command=hostname_command) \ |
| 694 | as previous_hostname: |
| 695 | - if (previous_hostname is not None |
| 696 | - and util.is_true(self.ds_cfg.get('set_hostname'))): |
| 697 | + if (previous_hostname is not None and |
| 698 | + util.is_true(self.ds_cfg.get('set_hostname'))): |
| 699 | cfg = self.ds_cfg['hostname_bounce'] |
| 700 | try: |
| 701 | perform_hostname_bounce(hostname=temp_hostname, |
| 702 | @@ -152,7 +153,8 @@ |
| 703 | else: |
| 704 | bname = str(pk['fingerprint'] + ".crt") |
| 705 | fp_files += [os.path.join(ddir, bname)] |
| 706 | - LOG.debug("ssh authentication: using fingerprint from fabirc") |
| 707 | + LOG.debug("ssh authentication: " |
| 708 | + "using fingerprint from fabirc") |
| 709 | |
| 710 | missing = util.log_time(logfunc=LOG.debug, msg="waiting for files", |
| 711 | func=wait_for_files, |
| 712 | @@ -506,7 +508,7 @@ |
| 713 | raise BrokenAzureDataSource("invalid xml: %s" % e) |
| 714 | |
| 715 | results = find_child(dom.documentElement, |
| 716 | - lambda n: n.localName == "ProvisioningSection") |
| 717 | + lambda n: n.localName == "ProvisioningSection") |
| 718 | |
| 719 | if len(results) == 0: |
| 720 | raise NonAzureDataSource("No ProvisioningSection") |
| 721 | @@ -516,7 +518,8 @@ |
| 722 | provSection = results[0] |
| 723 | |
| 724 | lpcs_nodes = find_child(provSection, |
| 725 | - lambda n: n.localName == "LinuxProvisioningConfigurationSet") |
| 726 | + lambda n: |
| 727 | + n.localName == "LinuxProvisioningConfigurationSet") |
| 728 | |
| 729 | if len(results) == 0: |
| 730 | raise NonAzureDataSource("No LinuxProvisioningConfigurationSet") |
| 731 | |
| 732 | === modified file 'cloudinit/sources/DataSourceConfigDrive.py' |
| 733 | --- cloudinit/sources/DataSourceConfigDrive.py 2015-01-21 22:56:53 +0000 |
| 734 | +++ cloudinit/sources/DataSourceConfigDrive.py 2016-03-03 23:20:30 +0000 |
| 735 | @@ -39,7 +39,7 @@ |
| 736 | LABEL_TYPES = ('config-2',) |
| 737 | POSSIBLE_MOUNTS = ('sr', 'cd') |
| 738 | OPTICAL_DEVICES = tuple(('/dev/%s%s' % (z, i) for z in POSSIBLE_MOUNTS |
| 739 | - for i in range(0, 2))) |
| 740 | + for i in range(0, 2))) |
| 741 | |
| 742 | |
| 743 | class DataSourceConfigDrive(openstack.SourceMixin, sources.DataSource): |
| 744 | |
| 745 | === modified file 'cloudinit/sources/DataSourceEc2.py' |
| 746 | --- cloudinit/sources/DataSourceEc2.py 2015-07-22 12:06:34 +0000 |
| 747 | +++ cloudinit/sources/DataSourceEc2.py 2016-03-03 23:20:30 +0000 |
| 748 | @@ -61,12 +61,12 @@ |
| 749 | if not self.wait_for_metadata_service(): |
| 750 | return False |
| 751 | start_time = time.time() |
| 752 | - self.userdata_raw = ec2.get_instance_userdata(self.api_ver, |
| 753 | - self.metadata_address) |
| 754 | + self.userdata_raw = \ |
| 755 | + ec2.get_instance_userdata(self.api_ver, self.metadata_address) |
| 756 | self.metadata = ec2.get_instance_metadata(self.api_ver, |
| 757 | self.metadata_address) |
| 758 | LOG.debug("Crawl of metadata service took %s seconds", |
| 759 | - int(time.time() - start_time)) |
| 760 | + int(time.time() - start_time)) |
| 761 | return True |
| 762 | except Exception: |
| 763 | util.logexc(LOG, "Failed reading from metadata address %s", |
| 764 | @@ -132,13 +132,13 @@ |
| 765 | |
| 766 | start_time = time.time() |
| 767 | url = uhelp.wait_for_url(urls=urls, max_wait=max_wait, |
| 768 | - timeout=timeout, status_cb=LOG.warn) |
| 769 | + timeout=timeout, status_cb=LOG.warn) |
| 770 | |
| 771 | if url: |
| 772 | LOG.debug("Using metadata source: '%s'", url2base[url]) |
| 773 | else: |
| 774 | LOG.critical("Giving up on md from %s after %s seconds", |
| 775 | - urls, int(time.time() - start_time)) |
| 776 | + urls, int(time.time() - start_time)) |
| 777 | |
| 778 | self.metadata_address = url2base.get(url) |
| 779 | return bool(url) |
| 780 | |
| 781 | === modified file 'cloudinit/sources/DataSourceMAAS.py' |
| 782 | --- cloudinit/sources/DataSourceMAAS.py 2015-09-29 21:17:49 +0000 |
| 783 | +++ cloudinit/sources/DataSourceMAAS.py 2016-03-03 23:20:30 +0000 |
| 784 | @@ -275,17 +275,18 @@ |
| 785 | |
| 786 | parser = argparse.ArgumentParser(description='Interact with MAAS DS') |
| 787 | parser.add_argument("--config", metavar="file", |
| 788 | - help="specify DS config file", default=None) |
| 789 | + help="specify DS config file", default=None) |
| 790 | parser.add_argument("--ckey", metavar="key", |
| 791 | - help="the consumer key to auth with", default=None) |
| 792 | + help="the consumer key to auth with", default=None) |
| 793 | parser.add_argument("--tkey", metavar="key", |
| 794 | - help="the token key to auth with", default=None) |
| 795 | + help="the token key to auth with", default=None) |
| 796 | parser.add_argument("--csec", metavar="secret", |
| 797 | - help="the consumer secret (likely '')", default="") |
| 798 | + help="the consumer secret (likely '')", default="") |
| 799 | parser.add_argument("--tsec", metavar="secret", |
| 800 | - help="the token secret to auth with", default=None) |
| 801 | + help="the token secret to auth with", default=None) |
| 802 | parser.add_argument("--apiver", metavar="version", |
| 803 | - help="the apiver to use ("" can be used)", default=MD_VERSION) |
| 804 | + help="the apiver to use ("" can be used)", |
| 805 | + default=MD_VERSION) |
| 806 | |
| 807 | subcmds = parser.add_subparsers(title="subcommands", dest="subcmd") |
| 808 | subcmds.add_parser('crawl', help="crawl the datasource") |
| 809 | @@ -297,7 +298,7 @@ |
| 810 | args = parser.parse_args() |
| 811 | |
| 812 | creds = {'consumer_key': args.ckey, 'token_key': args.tkey, |
| 813 | - 'token_secret': args.tsec, 'consumer_secret': args.csec} |
| 814 | + 'token_secret': args.tsec, 'consumer_secret': args.csec} |
| 815 | |
| 816 | if args.config: |
| 817 | cfg = util.read_conf(args.config) |
| 818 | |
| 819 | === modified file 'cloudinit/sources/DataSourceOVF.py' |
| 820 | --- cloudinit/sources/DataSourceOVF.py 2016-03-03 17:20:48 +0000 |
| 821 | +++ cloudinit/sources/DataSourceOVF.py 2016-03-03 23:20:30 +0000 |
| 822 | @@ -66,18 +66,21 @@ |
| 823 | |
| 824 | system_type = util.read_dmi_data("system-product-name") |
| 825 | if system_type is None: |
| 826 | - LOG.debug("No system-product-name found") |
| 827 | + LOG.debug("No system-product-name found") |
| 828 | elif 'vmware' in system_type.lower(): |
| 829 | LOG.debug("VMware Virtual Platform found") |
| 830 | - deployPkgPluginPath = search_file("/usr/lib/vmware-tools", "libdeployPkgPlugin.so") |
| 831 | + deployPkgPluginPath = search_file("/usr/lib/vmware-tools", |
| 832 | + "libdeployPkgPlugin.so") |
| 833 | if deployPkgPluginPath: |
| 834 | - vmwareImcConfigFilePath = util.log_time(logfunc=LOG.debug, |
| 835 | + vmwareImcConfigFilePath = \ |
| 836 | + util.log_time(logfunc=LOG.debug, |
| 837 | msg="waiting for configuration file", |
| 838 | func=wait_for_imc_cfg_file, |
| 839 | args=("/tmp", "cust.cfg")) |
| 840 | |
| 841 | if vmwareImcConfigFilePath: |
| 842 | - LOG.debug("Found VMware DeployPkg Config File Path at %s" % vmwareImcConfigFilePath) |
| 843 | + LOG.debug("Found VMware DeployPkg Config File Path at %s" % |
| 844 | + vmwareImcConfigFilePath) |
| 845 | else: |
| 846 | LOG.debug("Didn't find VMware DeployPkg Config File Path") |
| 847 | |
| 848 | @@ -147,7 +150,7 @@ |
| 849 | |
| 850 | def get_public_ssh_keys(self): |
| 851 | if 'public-keys' not in self.metadata: |
| 852 | - return [] |
| 853 | + return [] |
| 854 | pks = self.metadata['public-keys'] |
| 855 | if isinstance(pks, (list)): |
| 856 | return pks |
| 857 | @@ -170,7 +173,7 @@ |
| 858 | |
| 859 | def wait_for_imc_cfg_file(dirpath, filename, maxwait=180, naplen=5): |
| 860 | waited = 0 |
| 861 | - |
| 862 | + |
| 863 | while waited < maxwait: |
| 864 | fileFullPath = search_file(dirpath, filename) |
| 865 | if fileFullPath: |
| 866 | @@ -179,6 +182,7 @@ |
| 867 | waited += naplen |
| 868 | return None |
| 869 | |
| 870 | + |
| 871 | # This will return a dict with some content |
| 872 | # meta-data, user-data, some config |
| 873 | def read_vmware_imc(config): |
| 874 | @@ -186,13 +190,14 @@ |
| 875 | cfg = {} |
| 876 | ud = "" |
| 877 | if config.host_name: |
| 878 | - if config.domain_name: |
| 879 | - md['local-hostname'] = config.host_name + "." + config.domain_name |
| 880 | - else: |
| 881 | - md['local-hostname'] = config.host_name |
| 882 | + if config.domain_name: |
| 883 | + md['local-hostname'] = config.host_name + "." + config.domain_name |
| 884 | + else: |
| 885 | + md['local-hostname'] = config.host_name |
| 886 | |
| 887 | return (md, ud, cfg) |
| 888 | |
| 889 | + |
| 890 | # This will return a dict with some content |
| 891 | # meta-data, user-data, some config |
| 892 | def read_ovf_environment(contents): |
| 893 | @@ -328,14 +333,14 @@ |
| 894 | # could also check here that elem.namespaceURI == |
| 895 | # "http://schemas.dmtf.org/ovf/environment/1" |
| 896 | propSections = find_child(dom.documentElement, |
| 897 | - lambda n: n.localName == "PropertySection") |
| 898 | + lambda n: n.localName == "PropertySection") |
| 899 | |
| 900 | if len(propSections) == 0: |
| 901 | raise XmlError("No 'PropertySection's") |
| 902 | |
| 903 | props = {} |
| 904 | propElems = find_child(propSections[0], |
| 905 | - (lambda n: n.localName == "Property")) |
| 906 | + (lambda n: n.localName == "Property")) |
| 907 | |
| 908 | for elem in propElems: |
| 909 | key = elem.attributes.getNamedItemNS(envNsURI, "key").value |
| 910 | @@ -347,7 +352,7 @@ |
| 911 | |
| 912 | def search_file(dirpath, filename): |
| 913 | if not dirpath or not filename: |
| 914 | - return None |
| 915 | + return None |
| 916 | |
| 917 | for root, dirs, files in os.walk(dirpath): |
| 918 | if filename in files: |
| 919 | @@ -355,6 +360,7 @@ |
| 920 | |
| 921 | return None |
| 922 | |
| 923 | + |
| 924 | class XmlError(Exception): |
| 925 | pass |
| 926 | |
| 927 | |
| 928 | === modified file 'cloudinit/sources/DataSourceOpenNebula.py' |
| 929 | --- cloudinit/sources/DataSourceOpenNebula.py 2015-05-01 09:38:56 +0000 |
| 930 | +++ cloudinit/sources/DataSourceOpenNebula.py 2016-03-03 23:20:30 +0000 |
| 931 | @@ -404,7 +404,8 @@ |
| 932 | if ssh_key_var: |
| 933 | lines = context.get(ssh_key_var).splitlines() |
| 934 | results['metadata']['public-keys'] = [l for l in lines |
| 935 | - if len(l) and not l.startswith("#")] |
| 936 | + if len(l) and not |
| 937 | + l.startswith("#")] |
| 938 | |
| 939 | # custom hostname -- try hostname or leave cloud-init |
| 940 | # itself create hostname from IP address later |
| 941 | |
| 942 | === modified file 'cloudinit/sources/DataSourceSmartOS.py' |
| 943 | --- cloudinit/sources/DataSourceSmartOS.py 2016-02-04 21:52:08 +0000 |
| 944 | +++ cloudinit/sources/DataSourceSmartOS.py 2016-03-03 23:20:30 +0000 |
| 945 | @@ -90,8 +90,7 @@ |
| 946 | 'user-data', |
| 947 | 'user-script', |
| 948 | 'sdc:datacenter_name', |
| 949 | - 'sdc:uuid', |
| 950 | - ], |
| 951 | + 'sdc:uuid'], |
| 952 | 'base64_keys': [], |
| 953 | 'base64_all': False, |
| 954 | 'disk_aliases': {'ephemeral0': '/dev/vdb'}, |
| 955 | @@ -450,7 +449,7 @@ |
| 956 | |
| 957 | response = bytearray() |
| 958 | response.extend(self.metasource.read(1)) |
| 959 | - while response[-1:] != b'\n': |
| 960 | + while response[-1:] != b'\n': |
| 961 | response.extend(self.metasource.read(1)) |
| 962 | response = response.rstrip().decode('ascii') |
| 963 | LOG.debug('Read "%s" from metadata transport.', response) |
| 964 | @@ -513,7 +512,7 @@ |
| 965 | |
| 966 | except Exception as e: |
| 967 | util.logexc(LOG, ("Failed to identify script type for %s" % |
| 968 | - content_f, e)) |
| 969 | + content_f, e)) |
| 970 | |
| 971 | if link: |
| 972 | try: |
| 973 | |
| 974 | === modified file 'cloudinit/sources/helpers/vmware/imc/config_nic.py' |
| 975 | --- cloudinit/sources/helpers/vmware/imc/config_nic.py 2016-03-03 17:20:48 +0000 |
| 976 | +++ cloudinit/sources/helpers/vmware/imc/config_nic.py 2016-03-03 23:20:30 +0000 |
| 977 | @@ -46,12 +46,12 @@ |
| 978 | """ |
| 979 | primary_nics = [nic for nic in self.nics if nic.primary] |
| 980 | if not primary_nics: |
| 981 | - return None |
| 982 | + return None |
| 983 | elif len(primary_nics) > 1: |
| 984 | - raise Exception('There can only be one primary nic', |
| 985 | + raise Exception('There can only be one primary nic', |
| 986 | [nic.mac for nic in primary_nics]) |
| 987 | else: |
| 988 | - return primary_nics[0] |
| 989 | + return primary_nics[0] |
| 990 | |
| 991 | def find_devices(self): |
| 992 | """ |
| 993 | @@ -185,8 +185,8 @@ |
| 994 | lines = [] |
| 995 | |
| 996 | for addr in addrs: |
| 997 | - lines.append(' up route -A inet6 add default gw %s metric 10000' % |
| 998 | - addr.gateway) |
| 999 | + lines.append(' up route -A inet6 add default gw ' |
| 1000 | + '%s metric 10000' % addr.gateway) |
| 1001 | |
| 1002 | return lines |
| 1003 | |
| 1004 | |
| 1005 | === modified file 'cloudinit/ssh_util.py' |
| 1006 | --- cloudinit/ssh_util.py 2015-01-21 22:56:53 +0000 |
| 1007 | +++ cloudinit/ssh_util.py 2016-03-03 23:20:30 +0000 |
| 1008 | @@ -31,7 +31,8 @@ |
| 1009 | DEF_SSHD_CFG = "/etc/ssh/sshd_config" |
| 1010 | |
| 1011 | # taken from openssh source key.c/key_type_from_name |
| 1012 | -VALID_KEY_TYPES = ("rsa", "dsa", "ssh-rsa", "ssh-dss", "ecdsa", |
| 1013 | +VALID_KEY_TYPES = ( |
| 1014 | + "rsa", "dsa", "ssh-rsa", "ssh-dss", "ecdsa", |
| 1015 | "ssh-rsa-cert-v00@openssh.com", "ssh-dss-cert-v00@openssh.com", |
| 1016 | "ssh-rsa-cert-v00@openssh.com", "ssh-dss-cert-v00@openssh.com", |
| 1017 | "ssh-rsa-cert-v01@openssh.com", "ssh-dss-cert-v01@openssh.com", |
| 1018 | |
| 1019 | === modified file 'cloudinit/stages.py' |
| 1020 | --- cloudinit/stages.py 2015-08-31 17:33:30 +0000 |
| 1021 | +++ cloudinit/stages.py 2016-03-03 23:20:30 +0000 |
| 1022 | @@ -509,13 +509,13 @@ |
| 1023 | def consume_data(self, frequency=PER_INSTANCE): |
| 1024 | # Consume the userdata first, because we need want to let the part |
| 1025 | # handlers run first (for merging stuff) |
| 1026 | - with events.ReportEventStack( |
| 1027 | - "consume-user-data", "reading and applying user-data", |
| 1028 | - parent=self.reporter): |
| 1029 | + with events.ReportEventStack("consume-user-data", |
| 1030 | + "reading and applying user-data", |
| 1031 | + parent=self.reporter): |
| 1032 | self._consume_userdata(frequency) |
| 1033 | - with events.ReportEventStack( |
| 1034 | - "consume-vendor-data", "reading and applying vendor-data", |
| 1035 | - parent=self.reporter): |
| 1036 | + with events.ReportEventStack("consume-vendor-data", |
| 1037 | + "reading and applying vendor-data", |
| 1038 | + parent=self.reporter): |
| 1039 | self._consume_vendordata(frequency) |
| 1040 | |
| 1041 | # Perform post-consumption adjustments so that |
| 1042 | @@ -655,7 +655,7 @@ |
| 1043 | else: |
| 1044 | raise TypeError(("Failed to read '%s' item in config," |
| 1045 | " unknown type %s") % |
| 1046 | - (item, type_utils.obj_name(item))) |
| 1047 | + (item, type_utils.obj_name(item))) |
| 1048 | return module_list |
| 1049 | |
| 1050 | def _fixup_modules(self, raw_mods): |
| 1051 | @@ -762,8 +762,8 @@ |
| 1052 | |
| 1053 | if skipped: |
| 1054 | LOG.info("Skipping modules %s because they are not verified " |
| 1055 | - "on distro '%s'. To run anyway, add them to " |
| 1056 | - "'unverified_modules' in config.", skipped, d_name) |
| 1057 | + "on distro '%s'. To run anyway, add them to " |
| 1058 | + "'unverified_modules' in config.", skipped, d_name) |
| 1059 | if forced: |
| 1060 | LOG.info("running unverified_modules: %s", forced) |
| 1061 | |
| 1062 | |
| 1063 | === modified file 'cloudinit/url_helper.py' |
| 1064 | --- cloudinit/url_helper.py 2015-09-29 21:17:49 +0000 |
| 1065 | +++ cloudinit/url_helper.py 2016-03-03 23:20:30 +0000 |
| 1066 | @@ -252,9 +252,9 @@ |
| 1067 | # attrs |
| 1068 | return UrlResponse(r) |
| 1069 | except exceptions.RequestException as e: |
| 1070 | - if (isinstance(e, (exceptions.HTTPError)) |
| 1071 | - and hasattr(e, 'response') # This appeared in v 0.10.8 |
| 1072 | - and hasattr(e.response, 'status_code')): |
| 1073 | + if (isinstance(e, (exceptions.HTTPError)) and |
| 1074 | + hasattr(e, 'response') and # This appeared in v 0.10.8 |
| 1075 | + hasattr(e.response, 'status_code')): |
| 1076 | excps.append(UrlError(e, code=e.response.status_code, |
| 1077 | headers=e.response.headers, |
| 1078 | url=url)) |
| 1079 | |
| 1080 | === modified file 'cloudinit/util.py' |
| 1081 | --- cloudinit/util.py 2016-03-03 17:20:48 +0000 |
| 1082 | +++ cloudinit/util.py 2016-03-03 23:20:30 +0000 |
| 1083 | @@ -612,7 +612,7 @@ |
| 1084 | |
| 1085 | |
| 1086 | def make_url(scheme, host, port=None, |
| 1087 | - path='', params='', query='', fragment=''): |
| 1088 | + path='', params='', query='', fragment=''): |
| 1089 | |
| 1090 | pieces = [] |
| 1091 | pieces.append(scheme or '') |
| 1092 | @@ -804,8 +804,8 @@ |
| 1093 | blob = decode_binary(blob) |
| 1094 | try: |
| 1095 | LOG.debug("Attempting to load yaml from string " |
| 1096 | - "of length %s with allowed root types %s", |
| 1097 | - len(blob), allowed) |
| 1098 | + "of length %s with allowed root types %s", |
| 1099 | + len(blob), allowed) |
| 1100 | converted = safeyaml.load(blob) |
| 1101 | if not isinstance(converted, allowed): |
| 1102 | # Yes this will just be caught, but thats ok for now... |
| 1103 | @@ -878,7 +878,7 @@ |
| 1104 | if not isinstance(confd, six.string_types): |
| 1105 | raise TypeError(("Config file %s contains 'conf_d' " |
| 1106 | "with non-string type %s") % |
| 1107 | - (cfgfile, type_utils.obj_name(confd))) |
| 1108 | + (cfgfile, type_utils.obj_name(confd))) |
| 1109 | else: |
| 1110 | confd = str(confd).strip() |
| 1111 | elif os.path.isdir("%s.d" % cfgfile): |
| 1112 | @@ -1041,7 +1041,8 @@ |
| 1113 | for iname in badnames: |
| 1114 | try: |
| 1115 | result = socket.getaddrinfo(iname, None, 0, 0, |
| 1116 | - socket.SOCK_STREAM, socket.AI_CANONNAME) |
| 1117 | + socket.SOCK_STREAM, |
| 1118 | + socket.AI_CANONNAME) |
| 1119 | badresults[iname] = [] |
| 1120 | for (_fam, _stype, _proto, cname, sockaddr) in result: |
| 1121 | badresults[iname].append("%s: %s" % (cname, sockaddr[0])) |
| 1122 | @@ -1109,7 +1110,7 @@ |
| 1123 | |
| 1124 | |
| 1125 | def find_devs_with(criteria=None, oformat='device', |
| 1126 | - tag=None, no_cache=False, path=None): |
| 1127 | + tag=None, no_cache=False, path=None): |
| 1128 | """ |
| 1129 | find devices matching given criteria (via blkid) |
| 1130 | criteria can be *one* of: |
| 1131 | @@ -1628,7 +1629,7 @@ |
| 1132 | content = decode_binary(content) |
| 1133 | write_type = 'characters' |
| 1134 | LOG.debug("Writing to %s - %s: [%s] %s %s", |
| 1135 | - filename, omode, mode, len(content), write_type) |
| 1136 | + filename, omode, mode, len(content), write_type) |
| 1137 | with SeLinuxGuard(path=filename): |
| 1138 | with open(filename, omode) as fh: |
| 1139 | fh.write(content) |
| 1140 | |
| 1141 | === modified file 'tests/unittests/test_data.py' |
| 1142 | --- tests/unittests/test_data.py 2015-05-01 09:26:29 +0000 |
| 1143 | +++ tests/unittests/test_data.py 2016-03-03 23:20:30 +0000 |
| 1144 | @@ -27,10 +27,11 @@ |
| 1145 | from cloudinit import user_data as ud |
| 1146 | from cloudinit import util |
| 1147 | |
| 1148 | +from . import helpers |
| 1149 | + |
| 1150 | + |
| 1151 | INSTANCE_ID = "i-testing" |
| 1152 | |
| 1153 | -from . import helpers |
| 1154 | - |
| 1155 | |
| 1156 | class FakeDataSource(sources.DataSource): |
| 1157 | |
| 1158 | |
| 1159 | === modified file 'tests/unittests/test_datasource/test_altcloud.py' |
| 1160 | --- tests/unittests/test_datasource/test_altcloud.py 2015-01-26 22:05:21 +0000 |
| 1161 | +++ tests/unittests/test_datasource/test_altcloud.py 2016-03-03 23:20:30 +0000 |
| 1162 | @@ -134,8 +134,7 @@ |
| 1163 | ''' |
| 1164 | util.read_dmi_data = _dmi_data('RHEV') |
| 1165 | dsrc = DataSourceAltCloud({}, None, self.paths) |
| 1166 | - self.assertEquals('RHEV', \ |
| 1167 | - dsrc.get_cloud_type()) |
| 1168 | + self.assertEquals('RHEV', dsrc.get_cloud_type()) |
| 1169 | |
| 1170 | def test_vsphere(self): |
| 1171 | ''' |
| 1172 | @@ -144,8 +143,7 @@ |
| 1173 | ''' |
| 1174 | util.read_dmi_data = _dmi_data('VMware Virtual Platform') |
| 1175 | dsrc = DataSourceAltCloud({}, None, self.paths) |
| 1176 | - self.assertEquals('VSPHERE', \ |
| 1177 | - dsrc.get_cloud_type()) |
| 1178 | + self.assertEquals('VSPHERE', dsrc.get_cloud_type()) |
| 1179 | |
| 1180 | def test_unknown(self): |
| 1181 | ''' |
| 1182 | @@ -154,8 +152,7 @@ |
| 1183 | ''' |
| 1184 | util.read_dmi_data = _dmi_data('Unrecognized Platform') |
| 1185 | dsrc = DataSourceAltCloud({}, None, self.paths) |
| 1186 | - self.assertEquals('UNKNOWN', \ |
| 1187 | - dsrc.get_cloud_type()) |
| 1188 | + self.assertEquals('UNKNOWN', dsrc.get_cloud_type()) |
| 1189 | |
| 1190 | |
| 1191 | class TestGetDataCloudInfoFile(TestCase): |
| 1192 | @@ -412,27 +409,27 @@ |
| 1193 | '''Test read_user_data_callback() with both files.''' |
| 1194 | |
| 1195 | self.assertEquals('test user data', |
| 1196 | - read_user_data_callback(self.mount_dir)) |
| 1197 | + read_user_data_callback(self.mount_dir)) |
| 1198 | |
| 1199 | def test_callback_dc(self): |
| 1200 | '''Test read_user_data_callback() with only DC file.''' |
| 1201 | |
| 1202 | _remove_user_data_files(self.mount_dir, |
| 1203 | - dc_file=False, |
| 1204 | - non_dc_file=True) |
| 1205 | + dc_file=False, |
| 1206 | + non_dc_file=True) |
| 1207 | |
| 1208 | self.assertEquals('test user data', |
| 1209 | - read_user_data_callback(self.mount_dir)) |
| 1210 | + read_user_data_callback(self.mount_dir)) |
| 1211 | |
| 1212 | def test_callback_non_dc(self): |
| 1213 | '''Test read_user_data_callback() with only non-DC file.''' |
| 1214 | |
| 1215 | _remove_user_data_files(self.mount_dir, |
| 1216 | - dc_file=True, |
| 1217 | - non_dc_file=False) |
| 1218 | + dc_file=True, |
| 1219 | + non_dc_file=False) |
| 1220 | |
| 1221 | self.assertEquals('test user data', |
| 1222 | - read_user_data_callback(self.mount_dir)) |
| 1223 | + read_user_data_callback(self.mount_dir)) |
| 1224 | |
| 1225 | def test_callback_none(self): |
| 1226 | '''Test read_user_data_callback() no files are found.''' |
| 1227 | |
| 1228 | === modified file 'tests/unittests/test_datasource/test_azure.py' |
| 1229 | --- tests/unittests/test_datasource/test_azure.py 2015-10-30 16:26:31 +0000 |
| 1230 | +++ tests/unittests/test_datasource/test_azure.py 2016-03-03 23:20:30 +0000 |
| 1231 | @@ -207,7 +207,7 @@ |
| 1232 | yaml_cfg = "{agent_command: my_command}\n" |
| 1233 | cfg = yaml.safe_load(yaml_cfg) |
| 1234 | odata = {'HostName': "myhost", 'UserName': "myuser", |
| 1235 | - 'dscfg': {'text': yaml_cfg, 'encoding': 'plain'}} |
| 1236 | + 'dscfg': {'text': yaml_cfg, 'encoding': 'plain'}} |
| 1237 | data = {'ovfcontent': construct_valid_ovf_env(data=odata)} |
| 1238 | |
| 1239 | dsrc = self._get_ds(data) |
| 1240 | @@ -219,8 +219,8 @@ |
| 1241 | # set dscfg in via base64 encoded yaml |
| 1242 | cfg = {'agent_command': "my_command"} |
| 1243 | odata = {'HostName': "myhost", 'UserName': "myuser", |
| 1244 | - 'dscfg': {'text': b64e(yaml.dump(cfg)), |
| 1245 | - 'encoding': 'base64'}} |
| 1246 | + 'dscfg': {'text': b64e(yaml.dump(cfg)), |
| 1247 | + 'encoding': 'base64'}} |
| 1248 | data = {'ovfcontent': construct_valid_ovf_env(data=odata)} |
| 1249 | |
| 1250 | dsrc = self._get_ds(data) |
| 1251 | @@ -267,7 +267,8 @@ |
| 1252 | # should equal that after the '$' |
| 1253 | pos = defuser['passwd'].rfind("$") + 1 |
| 1254 | self.assertEqual(defuser['passwd'], |
| 1255 | - crypt.crypt(odata['UserPassword'], defuser['passwd'][0:pos])) |
| 1256 | + crypt.crypt(odata['UserPassword'], |
| 1257 | + defuser['passwd'][0:pos])) |
| 1258 | |
| 1259 | def test_userdata_plain(self): |
| 1260 | mydata = "FOOBAR" |
| 1261 | @@ -364,8 +365,8 @@ |
| 1262 | # Make sure that user can affect disk aliases |
| 1263 | dscfg = {'disk_aliases': {'ephemeral0': '/dev/sdc'}} |
| 1264 | odata = {'HostName': "myhost", 'UserName': "myuser", |
| 1265 | - 'dscfg': {'text': b64e(yaml.dump(dscfg)), |
| 1266 | - 'encoding': 'base64'}} |
| 1267 | + 'dscfg': {'text': b64e(yaml.dump(dscfg)), |
| 1268 | + 'encoding': 'base64'}} |
| 1269 | usercfg = {'disk_setup': {'/dev/sdc': {'something': '...'}, |
| 1270 | 'ephemeral0': False}} |
| 1271 | userdata = '#cloud-config' + yaml.dump(usercfg) + "\n" |
| 1272 | @@ -634,7 +635,7 @@ |
| 1273 | def test_invalid_xml_raises_non_azure_ds(self): |
| 1274 | invalid_xml = "<foo>" + construct_valid_ovf_env(data={}) |
| 1275 | self.assertRaises(DataSourceAzure.BrokenAzureDataSource, |
| 1276 | - DataSourceAzure.read_azure_ovf, invalid_xml) |
| 1277 | + DataSourceAzure.read_azure_ovf, invalid_xml) |
| 1278 | |
| 1279 | def test_load_with_pubkeys(self): |
| 1280 | mypklist = [{'fingerprint': 'fp1', 'path': 'path1', 'value': ''}] |
| 1281 | |
| 1282 | === modified file 'tests/unittests/test_datasource/test_configdrive.py' |
| 1283 | --- tests/unittests/test_datasource/test_configdrive.py 2015-02-26 00:40:33 +0000 |
| 1284 | +++ tests/unittests/test_datasource/test_configdrive.py 2016-03-03 23:20:30 +0000 |
| 1285 | @@ -293,9 +293,8 @@ |
| 1286 | util.is_partition = my_is_partition |
| 1287 | |
| 1288 | devs_with_answers = {"TYPE=vfat": [], |
| 1289 | - "TYPE=iso9660": ["/dev/vdb"], |
| 1290 | - "LABEL=config-2": ["/dev/vdb"], |
| 1291 | - } |
| 1292 | + "TYPE=iso9660": ["/dev/vdb"], |
| 1293 | + "LABEL=config-2": ["/dev/vdb"]} |
| 1294 | self.assertEqual(["/dev/vdb"], ds.find_candidate_devs()) |
| 1295 | |
| 1296 | # add a vfat item |
| 1297 | @@ -306,9 +305,10 @@ |
| 1298 | |
| 1299 | # verify that partitions are considered, that have correct label. |
| 1300 | devs_with_answers = {"TYPE=vfat": ["/dev/sda1"], |
| 1301 | - "TYPE=iso9660": [], "LABEL=config-2": ["/dev/vdb3"]} |
| 1302 | + "TYPE=iso9660": [], |
| 1303 | + "LABEL=config-2": ["/dev/vdb3"]} |
| 1304 | self.assertEqual(["/dev/vdb3"], |
| 1305 | - ds.find_candidate_devs()) |
| 1306 | + ds.find_candidate_devs()) |
| 1307 | |
| 1308 | finally: |
| 1309 | util.find_devs_with = orig_find_devs_with |
| 1310 | @@ -319,7 +319,7 @@ |
| 1311 | populate_dir(self.tmp, CFG_DRIVE_FILES_V2) |
| 1312 | myds = cfg_ds_from_dir(self.tmp) |
| 1313 | self.assertEqual(myds.get_public_ssh_keys(), |
| 1314 | - [OSTACK_META['public_keys']['mykey']]) |
| 1315 | + [OSTACK_META['public_keys']['mykey']]) |
| 1316 | |
| 1317 | |
| 1318 | def cfg_ds_from_dir(seed_d): |
| 1319 | |
| 1320 | === modified file 'tests/unittests/test_datasource/test_maas.py' |
| 1321 | --- tests/unittests/test_datasource/test_maas.py 2015-08-07 05:22:49 +0000 |
| 1322 | +++ tests/unittests/test_datasource/test_maas.py 2016-03-03 23:20:30 +0000 |
| 1323 | @@ -25,9 +25,9 @@ |
| 1324 | """Verify a valid seeddir is read as such.""" |
| 1325 | |
| 1326 | data = {'instance-id': 'i-valid01', |
| 1327 | - 'local-hostname': 'valid01-hostname', |
| 1328 | - 'user-data': b'valid01-userdata', |
| 1329 | - 'public-keys': 'ssh-rsa AAAAB3Nz...aC1yc2E= keyname'} |
| 1330 | + 'local-hostname': 'valid01-hostname', |
| 1331 | + 'user-data': b'valid01-userdata', |
| 1332 | + 'public-keys': 'ssh-rsa AAAAB3Nz...aC1yc2E= keyname'} |
| 1333 | |
| 1334 | my_d = os.path.join(self.tmp, "valid") |
| 1335 | populate_dir(my_d, data) |
| 1336 | @@ -45,8 +45,8 @@ |
| 1337 | """Verify extra files do not affect seed_dir validity.""" |
| 1338 | |
| 1339 | data = {'instance-id': 'i-valid-extra', |
| 1340 | - 'local-hostname': 'valid-extra-hostname', |
| 1341 | - 'user-data': b'valid-extra-userdata', 'foo': 'bar'} |
| 1342 | + 'local-hostname': 'valid-extra-hostname', |
| 1343 | + 'user-data': b'valid-extra-userdata', 'foo': 'bar'} |
| 1344 | |
| 1345 | my_d = os.path.join(self.tmp, "valid_extra") |
| 1346 | populate_dir(my_d, data) |
| 1347 | @@ -64,7 +64,7 @@ |
| 1348 | """Verify that invalid seed_dir raises MAASSeedDirMalformed.""" |
| 1349 | |
| 1350 | valid = {'instance-id': 'i-instanceid', |
| 1351 | - 'local-hostname': 'test-hostname', 'user-data': ''} |
| 1352 | + 'local-hostname': 'test-hostname', 'user-data': ''} |
| 1353 | |
| 1354 | my_based = os.path.join(self.tmp, "valid_extra") |
| 1355 | |
| 1356 | @@ -94,8 +94,8 @@ |
| 1357 | def test_seed_dir_missing(self): |
| 1358 | """Verify that missing seed_dir raises MAASSeedDirNone.""" |
| 1359 | self.assertRaises(DataSourceMAAS.MAASSeedDirNone, |
| 1360 | - DataSourceMAAS.read_maas_seed_dir, |
| 1361 | - os.path.join(self.tmp, "nonexistantdirectory")) |
| 1362 | + DataSourceMAAS.read_maas_seed_dir, |
| 1363 | + os.path.join(self.tmp, "nonexistantdirectory")) |
| 1364 | |
| 1365 | def test_seed_url_valid(self): |
| 1366 | """Verify that valid seed_url is read as such.""" |
| 1367 | |
| 1368 | === modified file 'tests/unittests/test_datasource/test_smartos.py' |
| 1369 | --- tests/unittests/test_datasource/test_smartos.py 2016-03-03 17:20:48 +0000 |
| 1370 | +++ tests/unittests/test_datasource/test_smartos.py 2016-03-03 23:20:30 +0000 |
| 1371 | @@ -462,8 +462,8 @@ |
| 1372 | payloadstr = ' {0}'.format(self.response_parts['payload']) |
| 1373 | return ('V2 {length} {crc} {request_id} ' |
| 1374 | '{command}{payloadstr}\n'.format( |
| 1375 | - payloadstr=payloadstr, |
| 1376 | - **self.response_parts).encode('ascii')) |
| 1377 | + payloadstr=payloadstr, |
| 1378 | + **self.response_parts).encode('ascii')) |
| 1379 | |
| 1380 | self.metasource_data = None |
| 1381 | |
| 1382 | @@ -500,7 +500,7 @@ |
| 1383 | written_line = self.serial.write.call_args[0][0] |
| 1384 | print(type(written_line)) |
| 1385 | self.assertEndsWith(written_line.decode('ascii'), |
| 1386 | - b'\n'.decode('ascii')) |
| 1387 | + b'\n'.decode('ascii')) |
| 1388 | self.assertEqual(1, written_line.count(b'\n')) |
| 1389 | |
| 1390 | def _get_written_line(self, key='some_key'): |
| 1391 | |
| 1392 | === modified file 'tests/unittests/test_handler/test_handler_power_state.py' |
| 1393 | --- tests/unittests/test_handler/test_handler_power_state.py 2016-03-03 17:20:48 +0000 |
| 1394 | +++ tests/unittests/test_handler/test_handler_power_state.py 2016-03-03 23:20:30 +0000 |
| 1395 | @@ -74,7 +74,7 @@ |
| 1396 | class TestCheckCondition(t_help.TestCase): |
| 1397 | def cmd_with_exit(self, rc): |
| 1398 | return([sys.executable, '-c', 'import sys; sys.exit(%s)' % rc]) |
| 1399 | - |
| 1400 | + |
| 1401 | def test_true_is_true(self): |
| 1402 | self.assertEqual(psc.check_condition(True), True) |
| 1403 | |
| 1404 | @@ -94,7 +94,6 @@ |
| 1405 | self.assertEqual(mocklog.warn.call_count, 1) |
| 1406 | |
| 1407 | |
| 1408 | - |
| 1409 | def check_lps_ret(psc_return, mode=None): |
| 1410 | if len(psc_return) != 3: |
| 1411 | raise TypeError("length returned = %d" % len(psc_return)) |
| 1412 | |
| 1413 | === modified file 'tests/unittests/test_handler/test_handler_seed_random.py' |
| 1414 | --- tests/unittests/test_handler/test_handler_seed_random.py 2015-01-27 20:03:52 +0000 |
| 1415 | +++ tests/unittests/test_handler/test_handler_seed_random.py 2016-03-03 23:20:30 +0000 |
| 1416 | @@ -190,7 +190,8 @@ |
| 1417 | c = self._get_cloud('ubuntu', {}) |
| 1418 | self.whichdata = {} |
| 1419 | self.assertRaises(ValueError, cc_seed_random.handle, |
| 1420 | - 'test', {'random_seed': {'command_required': True}}, c, LOG, []) |
| 1421 | + 'test', {'random_seed': {'command_required': True}}, |
| 1422 | + c, LOG, []) |
| 1423 | |
| 1424 | def test_seed_command_and_required(self): |
| 1425 | c = self._get_cloud('ubuntu', {}) |
| 1426 | |
| 1427 | === modified file 'tests/unittests/test_handler/test_handler_snappy.py' |
| 1428 | --- tests/unittests/test_handler/test_handler_snappy.py 2015-05-01 09:38:56 +0000 |
| 1429 | +++ tests/unittests/test_handler/test_handler_snappy.py 2016-03-03 23:20:30 +0000 |
| 1430 | @@ -125,8 +125,7 @@ |
| 1431 | "pkg1.smoser.config": "pkg1.smoser.config-data", |
| 1432 | "pkg1.config": "pkg1.config-data", |
| 1433 | "pkg2.smoser_0.0_amd64.snap": "pkg2-snapdata", |
| 1434 | - "pkg2.smoser_0.0_amd64.config": "pkg2.config", |
| 1435 | - }) |
| 1436 | + "pkg2.smoser_0.0_amd64.config": "pkg2.config"}) |
| 1437 | |
| 1438 | ret = get_package_ops( |
| 1439 | packages=[], configs={}, installed=[], fspath=self.tmp) |
| 1440 | |
| 1441 | === modified file 'tests/unittests/test_sshutil.py' |
| 1442 | --- tests/unittests/test_sshutil.py 2014-11-13 10:35:46 +0000 |
| 1443 | +++ tests/unittests/test_sshutil.py 2016-03-03 23:20:30 +0000 |
| 1444 | @@ -32,7 +32,8 @@ |
| 1445 | ), |
| 1446 | } |
| 1447 | |
| 1448 | -TEST_OPTIONS = ("no-port-forwarding,no-agent-forwarding,no-X11-forwarding," |
| 1449 | +TEST_OPTIONS = ( |
| 1450 | + "no-port-forwarding,no-agent-forwarding,no-X11-forwarding," |
| 1451 | 'command="echo \'Please login as the user \"ubuntu\" rather than the' |
| 1452 | 'user \"root\".\';echo;sleep 10"') |
| 1453 | |
| 1454 | |
| 1455 | === modified file 'tests/unittests/test_templating.py' |
| 1456 | --- tests/unittests/test_templating.py 2015-05-01 09:38:56 +0000 |
| 1457 | +++ tests/unittests/test_templating.py 2016-03-03 23:20:30 +0000 |
| 1458 | @@ -114,5 +114,6 @@ |
| 1459 | codename) |
| 1460 | |
| 1461 | out_data = templater.basic_render(in_data, |
| 1462 | - {'mirror': mirror, 'codename': codename}) |
| 1463 | + {'mirror': mirror, |
| 1464 | + 'codename': codename}) |
| 1465 | self.assertEqual(ex_data, out_data) |
| 1466 | |
| 1467 | === modified file 'tools/hacking.py' |
| 1468 | --- tools/hacking.py 2015-05-01 09:38:56 +0000 |
| 1469 | +++ tools/hacking.py 2016-03-03 23:20:30 +0000 |
| 1470 | @@ -47,10 +47,10 @@ |
| 1471 | # handle "from x import y as z" to "import x.y as z" |
| 1472 | split_line = line.split() |
| 1473 | if (line.startswith("from ") and "," not in line and |
| 1474 | - split_line[2] == "import" and split_line[3] != "*" and |
| 1475 | - split_line[1] != "__future__" and |
| 1476 | - (len(split_line) == 4 or |
| 1477 | - (len(split_line) == 6 and split_line[4] == "as"))): |
| 1478 | + split_line[2] == "import" and split_line[3] != "*" and |
| 1479 | + split_line[1] != "__future__" and |
| 1480 | + (len(split_line) == 4 or |
| 1481 | + (len(split_line) == 6 and split_line[4] == "as"))): |
| 1482 | return "import %s.%s" % (split_line[1], split_line[3]) |
| 1483 | else: |
| 1484 | return line |
| 1485 | @@ -74,7 +74,7 @@ |
| 1486 | split_line[0] == "import" and split_previous[0] == "import"): |
| 1487 | if split_line[1] < split_previous[1]: |
| 1488 | return (0, "N306: imports not in alphabetical order (%s, %s)" |
| 1489 | - % (split_previous[1], split_line[1])) |
| 1490 | + % (split_previous[1], split_line[1])) |
| 1491 | |
| 1492 | |
| 1493 | def cloud_docstring_start_space(physical_line): |
| 1494 | @@ -87,8 +87,8 @@ |
| 1495 | pos = max([physical_line.find(i) for i in DOCSTRING_TRIPLE]) # start |
| 1496 | if (pos != -1 and len(physical_line) > pos + 1): |
| 1497 | if (physical_line[pos + 3] == ' '): |
| 1498 | - return (pos, "N401: one line docstring should not start with" |
| 1499 | - " a space") |
| 1500 | + return (pos, |
| 1501 | + "N401: one line docstring should not start with a space") |
| 1502 | |
| 1503 | |
| 1504 | def cloud_todo_format(physical_line): |
| 1505 | @@ -167,4 +167,4 @@ |
| 1506 | finally: |
| 1507 | if len(_missingImport) > 0: |
| 1508 | print >> sys.stderr, ("%i imports missing in this test environment" |
| 1509 | - % len(_missingImport)) |
| 1510 | + % len(_missingImport)) |
| 1511 | |
| 1512 | === modified file 'tools/mock-meta.py' |
| 1513 | --- tools/mock-meta.py 2014-08-26 19:53:41 +0000 |
| 1514 | +++ tools/mock-meta.py 2016-03-03 23:20:30 +0000 |
| 1515 | @@ -126,11 +126,11 @@ |
| 1516 | |
| 1517 | def yamlify(data): |
| 1518 | formatted = yaml.dump(data, |
| 1519 | - line_break="\n", |
| 1520 | - indent=4, |
| 1521 | - explicit_start=True, |
| 1522 | - explicit_end=True, |
| 1523 | - default_flow_style=False) |
| 1524 | + line_break="\n", |
| 1525 | + indent=4, |
| 1526 | + explicit_start=True, |
| 1527 | + explicit_end=True, |
| 1528 | + default_flow_style=False) |
| 1529 | return formatted |
| 1530 | |
| 1531 | |
| 1532 | @@ -282,7 +282,7 @@ |
| 1533 | else: |
| 1534 | log.warn(("Did not implement action %s, " |
| 1535 | "returning empty response: %r"), |
| 1536 | - action, NOT_IMPL_RESPONSE) |
| 1537 | + action, NOT_IMPL_RESPONSE) |
| 1538 | return NOT_IMPL_RESPONSE |
| 1539 | |
| 1540 | |
| 1541 | @@ -404,14 +404,17 @@ |
| 1542 | def extract_opts(): |
| 1543 | parser = OptionParser() |
| 1544 | parser.add_option("-p", "--port", dest="port", action="store", type=int, |
| 1545 | - default=80, metavar="PORT", |
| 1546 | - help="port from which to serve traffic (default: %default)") |
| 1547 | + default=80, metavar="PORT", |
| 1548 | + help=("port from which to serve traffic" |
| 1549 | + " (default: %default)")) |
| 1550 | parser.add_option("-a", "--addr", dest="address", action="store", type=str, |
| 1551 | - default='0.0.0.0', metavar="ADDRESS", |
| 1552 | - help="address from which to serve traffic (default: %default)") |
| 1553 | + default='0.0.0.0', metavar="ADDRESS", |
| 1554 | + help=("address from which to serve traffic" |
| 1555 | + " (default: %default)")) |
| 1556 | parser.add_option("-f", '--user-data-file', dest='user_data_file', |
| 1557 | - action='store', metavar='FILE', |
| 1558 | - help="user data filename to serve back to incoming requests") |
| 1559 | + action='store', metavar='FILE', |
| 1560 | + help=("user data filename to serve back to" |
| 1561 | +                      " incoming requests")) |
| 1562 | (options, args) = parser.parse_args() |
| 1563 | out = dict() |
| 1564 | out['extra'] = args |
| 1565 | |
| 1566 | === modified file 'tools/run-pep8' |
| 1567 | --- tools/run-pep8 2015-01-06 17:02:38 +0000 |
| 1568 | +++ tools/run-pep8 2016-03-03 23:20:30 +0000 |
| 1569 | @@ -1,39 +1,22 @@ |
| 1570 | #!/bin/bash |
| 1571 | |
| 1572 | -if [ $# -eq 0 ]; then |
| 1573 | - files=( bin/cloud-init $(find * -name "*.py" -type f) ) |
| 1574 | -else |
| 1575 | - files=( "$@" ); |
| 1576 | -fi |
| 1577 | - |
| 1578 | -if [ -f 'hacking.py' ] |
| 1579 | -then |
| 1580 | - base=`pwd` |
| 1581 | -else |
| 1582 | - base=`pwd`/tools/ |
| 1583 | -fi |
| 1584 | - |
| 1585 | -IGNORE="" |
| 1586 | - |
| 1587 | -# King Arthur: Be quiet! ... Be Quiet! I Order You to Be Quiet. |
| 1588 | -IGNORE="$IGNORE,E121" # Continuation line indentation is not a multiple of four |
| 1589 | -IGNORE="$IGNORE,E123" # Closing bracket does not match indentation of opening bracket's line |
| 1590 | -IGNORE="$IGNORE,E124" # Closing bracket missing visual indentation |
| 1591 | -IGNORE="$IGNORE,E125" # Continuation line does not distinguish itself from next logical line |
| 1592 | -IGNORE="$IGNORE,E126" # Continuation line over-indented for hanging indent |
| 1593 | -IGNORE="$IGNORE,E127" # Continuation line over-indented for visual indent |
| 1594 | -IGNORE="$IGNORE,E128" # Continuation line under-indented for visual indent |
| 1595 | -IGNORE="$IGNORE,E502" # The backslash is redundant between brackets |
| 1596 | -IGNORE="${IGNORE#,}" # remove the leading ',' added above |
| 1597 | - |
| 1598 | -cmd=( |
| 1599 | - ${base}/hacking.py |
| 1600 | - |
| 1601 | - --ignore="$IGNORE" |
| 1602 | - |
| 1603 | - "${files[@]}" |
| 1604 | -) |
| 1605 | - |
| 1606 | -echo -e "\nRunning 'cloudinit' pep8:" |
| 1607 | -echo "${cmd[@]}" |
| 1608 | -"${cmd[@]}" |
| 1609 | +pycheck_dirs=( "cloudinit/" "bin/" "tests/" "tools/" ) |
| 1610 | +# FIXME: cloud-init modifies sys module path, pep8 does not like |
| 1611 | +# bin_files=( "bin/cloud-init" ) |
| 1612 | +CR=" |
| 1613 | +" |
| 1614 | +[ "$1" = "-v" ] && { verbose="$1"; shift; } || verbose="" |
| 1615 | + |
| 1616 | +set -f |
| 1617 | +if [ $# -eq 0 ]; then unset IFS |
| 1618 | + IFS="$CR" |
| 1619 | + files=( "${bin_files[@]}" "${pycheck_dirs[@]}" ) |
| 1620 | + unset IFS |
| 1621 | +else |
| 1622 | + files=( "$@" ) |
| 1623 | +fi |
| 1624 | + |
| 1625 | +myname=${0##*/} |
| 1626 | +cmd=( "${myname#run-}" $verbose "${files[@]}" ) |
| 1627 | +echo "Running: " "${cmd[@]}" 1>&2 |
| 1628 | +exec "${cmd[@]}" |
| 1629 | |
| 1630 | === added file 'tools/run-pyflakes' |
| 1631 | --- tools/run-pyflakes 1970-01-01 00:00:00 +0000 |
| 1632 | +++ tools/run-pyflakes 2016-03-03 23:20:30 +0000 |
| 1633 | @@ -0,0 +1,18 @@ |
| 1634 | +#!/bin/bash |
| 1635 | + |
| 1636 | +PYTHON_VERSION=${PYTHON_VERSION:-2} |
| 1637 | +CR=" |
| 1638 | +" |
| 1639 | +pycheck_dirs=( "cloudinit/" "bin/" "tests/" "tools/" ) |
| 1640 | + |
| 1641 | +set -f |
| 1642 | +if [ $# -eq 0 ]; then |
| 1643 | + files=( "${pycheck_dirs[@]}" ) |
| 1644 | +else |
| 1645 | + files=( "$@" ) |
| 1646 | +fi |
| 1647 | + |
| 1648 | +cmd=( "python${PYTHON_VERSION}" -m "pyflakes" "${files[@]}" ) |
| 1649 | + |
| 1650 | +echo "Running: " "${cmd[@]}" 1>&2 |
| 1651 | +exec "${cmd[@]}" |
| 1652 | |
| 1653 | === added file 'tools/run-pyflakes3' |
| 1654 | --- tools/run-pyflakes3 1970-01-01 00:00:00 +0000 |
| 1655 | +++ tools/run-pyflakes3 2016-03-03 23:20:30 +0000 |
| 1656 | @@ -0,0 +1,2 @@ |
| 1657 | +#!/bin/sh |
| 1658 | +PYTHON_VERSION=3 exec "${0%/*}/run-pyflakes" "$@" |


On a Xenial host, make check passes.
Unit tests can fail if not all Python deps are installed; we should probably reuse the existing requirements files to make it easy to install the needed deps.
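
For anyone hitting those unit-test failures, a minimal sketch of one way to pull the deps in before running the checks. This is a hedged example, not part of the branch: it assumes pip-style requirements files such as requirements.txt and test-requirements.txt exist at the top of the tree.

    # Hypothetical dep install; adjust the file names to whatever
    # requirements files the tree actually ships.
    pip install -r requirements.txt -r test-requirements.txt

    # The updated check target chains pep8, pyflakes, and pyflakes3.
    make check

    # The helpers added in this branch can also be invoked directly;
    # run-pyflakes defaults to python2, and run-pyflakes3 re-execs it
    # with PYTHON_VERSION=3.
    ./tools/run-pep8
    ./tools/run-pyflakes
    ./tools/run-pyflakes3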