Merge lp:~simpoir/landscape-client-charm/sync-charmhelpers-and-keys into lp:landscape-client-charm
Status: Merged
Approved by: Simon Poirier
Approved revision: 69
Merged at revision: 69
Proposed branch: lp:~simpoir/landscape-client-charm/sync-charmhelpers-and-keys
Merge into: lp:landscape-client-charm
Diff against target:
2622 lines (+1686/-161), 24 files modified:

- Makefile (+1/-2)
- hooks/charmhelpers/__init__.py (+65/-4)
- hooks/charmhelpers/core/hookenv.py (+450/-28)
- hooks/charmhelpers/core/host.py (+170/-11)
- hooks/charmhelpers/core/host_factory/centos.py (+16/-0)
- hooks/charmhelpers/core/host_factory/ubuntu.py (+58/-0)
- hooks/charmhelpers/core/kernel.py (+2/-2)
- hooks/charmhelpers/core/services/base.py (+18/-7)
- hooks/charmhelpers/core/strutils.py (+64/-5)
- hooks/charmhelpers/core/sysctl.py (+21/-10)
- hooks/charmhelpers/core/templating.py (+18/-9)
- hooks/charmhelpers/core/unitdata.py (+8/-1)
- hooks/charmhelpers/fetch/__init__.py (+19/-9)
- hooks/charmhelpers/fetch/archiveurl.py (+1/-1)
- hooks/charmhelpers/fetch/bzrurl.py (+2/-2)
- hooks/charmhelpers/fetch/centos.py (+1/-1)
- hooks/charmhelpers/fetch/giturl.py (+2/-2)
- hooks/charmhelpers/fetch/python/__init__.py (+13/-0)
- hooks/charmhelpers/fetch/python/debug.py (+54/-0)
- hooks/charmhelpers/fetch/python/packages.py (+154/-0)
- hooks/charmhelpers/fetch/python/rpdb.py (+56/-0)
- hooks/charmhelpers/fetch/python/version.py (+32/-0)
- hooks/charmhelpers/fetch/snap.py (+33/-5)
- hooks/charmhelpers/fetch/ubuntu.py (+428/-62)
To merge this branch: bzr merge lp:~simpoir/landscape-client-charm/sync-charmhelpers-and-keys
Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Daniel Manrique (community) | | | Approve
🤖 Landscape Builder | test results | | Approve
Review via email:
Commit message
Sync charmhelpers to avoid key errors on refresh.
Description of the change
Sync charmhelpers. This pulls an upstream change which imports repository signing keys before updating the package repositories, thus avoiding NO_PUBKEY errors.
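The ordering matters because `apt-get update` emits NO_PUBKEY errors for any configured repository whose signing key has not yet been imported. The sketch below illustrates the import-keys-first pattern; it is a hypothetical helper for illustration only, not the actual charmhelpers code (which lives in hooks/charmhelpers/fetch/ubuntu.py), and the PPA/key values in the usage note are made up:

```python
import subprocess


def configure_sources(sources, dry_run=False):
    """Build (and optionally run) the command sequence for adding APT
    sources. Signing keys are imported *before* 'apt-get update' so the
    index refresh never hits NO_PUBKEY errors.

    sources: list of (repo_line, key_id) tuples; key_id may be None.
    Returns the list of commands in execution order.
    """
    commands = []
    # 1. Import every signing key first.
    for _, key in sources:
        if key:
            commands.append(['apt-key', 'adv', '--keyserver',
                             'keyserver.ubuntu.com', '--recv-keys', key])
    # 2. Register the repositories.
    for repo, _ in sources:
        commands.append(['add-apt-repository', '--yes', repo])
    # 3. Only now refresh the package index.
    commands.append(['apt-get', 'update'])
    if not dry_run:
        for cmd in commands:
            subprocess.check_call(cmd)
    return commands
```

With `dry_run=True` the helper just returns the planned command list, which makes the ordering easy to verify without root access, e.g. `configure_sources([('ppa:example/repo', 'DEADBEEF')], dry_run=True)`.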
Testing instructions:
juju deploy ubuntu
juju deploy . --config install_
juju relate landscape-client ubuntu
juju debug-log --replay | grep NO_PUBKEY
# no output after deployment means success.
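The grep step above can also be wrapped in a small check, for example in CI. This is a sketch (the script name and usage are assumptions); it expects the `juju debug-log --replay` output on stdin:

```python
import sys


def has_pubkey_errors(log_text):
    """Return True if any log line mentions NO_PUBKEY, i.e. a repository
    index was refreshed before its signing key was imported."""
    return any('NO_PUBKEY' in line for line in log_text.splitlines())


def main():
    # Intended usage: juju debug-log --replay | python3 check_pubkey.py
    if has_pubkey_errors(sys.stdin.read()):
        sys.exit('FAIL: NO_PUBKEY errors found in the log')
    print('OK: no NO_PUBKEY errors')
```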
🤖 Landscape Builder (landscape-builder) wrote:
Command: make ci-test
Result: Fail
Revno: 69
Branch: lp:~simpoir/landscape-client-charm/sync-charmhelpers-and-keys
Jenkins: https:/
🤖 Landscape Builder (landscape-builder) wrote:
Command: make ci-test
Result: Success
Revno: 69
Branch: lp:~simpoir/landscape-client-charm/sync-charmhelpers-and-keys
Jenkins: https:/
🤖 Landscape Builder (landscape-builder) wrote:
Command: make ci-test
Result: Success
Revno: 69
Branch: lp:~simpoir/landscape-client-charm/sync-charmhelpers-and-keys
Jenkins: https:/
Daniel Manrique (roadmr) wrote:
+1. I'm not reviewing the charmhelpers code itself, but the sync seems OK to me and all the automated tests passed.
Preview Diff
1 | === modified file 'Makefile' |
2 | --- Makefile 2019-02-07 15:13:07 +0000 |
3 | +++ Makefile 2019-05-24 20:38:05 +0000 |
4 | @@ -26,8 +26,7 @@ |
5 | |
6 | $(SYNC_SCRIPT): |
7 | @mkdir -p bin |
8 | - @bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \ |
9 | - > $(SYNC_SCRIPT) |
10 | + @curl https://git.launchpad.net/charm-helpers/plain/tools/charm_helpers_sync/charm_helpers_sync.py > $(SYNC_SCRIPT) |
11 | |
12 | # Note: The target name is unfortunate, but that's what other charms use. |
13 | sync: $(SYNC_SCRIPT) |
14 | |
15 | === modified file 'hooks/charmhelpers/__init__.py' |
16 | --- hooks/charmhelpers/__init__.py 2017-03-03 19:56:10 +0000 |
17 | +++ hooks/charmhelpers/__init__.py 2019-05-24 20:38:05 +0000 |
18 | @@ -14,23 +14,84 @@ |
19 | |
20 | # Bootstrap charm-helpers, installing its dependencies if necessary using |
21 | # only standard libraries. |
22 | +from __future__ import print_function |
23 | +from __future__ import absolute_import |
24 | + |
25 | +import functools |
26 | +import inspect |
27 | import subprocess |
28 | import sys |
29 | |
30 | try: |
31 | - import six # flake8: noqa |
32 | + import six # NOQA:F401 |
33 | except ImportError: |
34 | if sys.version_info.major == 2: |
35 | subprocess.check_call(['apt-get', 'install', '-y', 'python-six']) |
36 | else: |
37 | subprocess.check_call(['apt-get', 'install', '-y', 'python3-six']) |
38 | - import six # flake8: noqa |
39 | + import six # NOQA:F401 |
40 | |
41 | try: |
42 | - import yaml # flake8: noqa |
43 | + import yaml # NOQA:F401 |
44 | except ImportError: |
45 | if sys.version_info.major == 2: |
46 | subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml']) |
47 | else: |
48 | subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml']) |
49 | - import yaml # flake8: noqa |
50 | + import yaml # NOQA:F401 |
51 | + |
52 | + |
53 | +# Holds a list of mapping of mangled function names that have been deprecated |
54 | +# using the @deprecate decorator below. This is so that the warning is only |
55 | +# printed once for each usage of the function. |
56 | +__deprecated_functions = {} |
57 | + |
58 | + |
59 | +def deprecate(warning, date=None, log=None): |
60 | + """Add a deprecation warning the first time the function is used. |
61 | + The date, which is a string in semi-ISO8601 format, indicates the year-month |
62 | + that the function is officially going to be removed. |
63 | + |
64 | + usage: |
65 | + |
66 | + @deprecate('use core/fetch/add_source() instead', '2017-04') |
67 | + def contributed_add_source_thing(...): |
68 | + ... |
69 | + |
70 | + And it then prints to the log ONCE that the function is deprecated. |
71 | + The reason for passing the logging function (log) is so that hookenv.log |
72 | + can be used for a charm if needed. |
73 | + |
74 | + :param warning: String to indicate where it has moved to. |
75 | + :param date: optional string, in YYYY-MM format to indicate when the |
76 | + function will definitely (probably) be removed. |
77 | + :param log: The log function to call to log. If not, logs to stdout |
78 | + """ |
79 | + def wrap(f): |
80 | + |
81 | + @functools.wraps(f) |
82 | + def wrapped_f(*args, **kwargs): |
83 | + try: |
84 | + module = inspect.getmodule(f) |
85 | + file = inspect.getsourcefile(f) |
86 | + lines = inspect.getsourcelines(f) |
87 | + f_name = "{}-{}-{}..{}-{}".format( |
88 | + module.__name__, file, lines[0], lines[-1], f.__name__) |
89 | + except (IOError, TypeError): |
90 | + # assume it was local, so just use the name of the function |
91 | + f_name = f.__name__ |
92 | + if f_name not in __deprecated_functions: |
93 | + __deprecated_functions[f_name] = True |
94 | + s = "DEPRECATION WARNING: Function {} is being removed".format( |
95 | + f.__name__) |
96 | + if date: |
97 | + s = "{} on/around {}".format(s, date) |
98 | + if warning: |
99 | + s = "{} : {}".format(s, warning) |
100 | + if log: |
101 | + log(s) |
102 | + else: |
103 | + print(s) |
104 | + return f(*args, **kwargs) |
105 | + return wrapped_f |
106 | + return wrap |
107 | |
108 | === modified file 'hooks/charmhelpers/core/hookenv.py' |
109 | --- hooks/charmhelpers/core/hookenv.py 2017-03-03 19:56:10 +0000 |
110 | +++ hooks/charmhelpers/core/hookenv.py 2019-05-24 20:38:05 +0000 |
111 | @@ -22,10 +22,12 @@ |
112 | import copy |
113 | from distutils.version import LooseVersion |
114 | from functools import wraps |
115 | +from collections import namedtuple |
116 | import glob |
117 | import os |
118 | import json |
119 | import yaml |
120 | +import re |
121 | import subprocess |
122 | import sys |
123 | import errno |
124 | @@ -38,12 +40,20 @@ |
125 | else: |
126 | from collections import UserDict |
127 | |
128 | + |
129 | CRITICAL = "CRITICAL" |
130 | ERROR = "ERROR" |
131 | WARNING = "WARNING" |
132 | INFO = "INFO" |
133 | DEBUG = "DEBUG" |
134 | +TRACE = "TRACE" |
135 | MARKER = object() |
136 | +SH_MAX_ARG = 131071 |
137 | + |
138 | + |
139 | +RANGE_WARNING = ('Passing NO_PROXY string that includes a cidr. ' |
140 | + 'This may not be compatible with software you are ' |
141 | + 'running in your shell.') |
142 | |
143 | cache = {} |
144 | |
145 | @@ -64,7 +74,7 @@ |
146 | @wraps(func) |
147 | def wrapper(*args, **kwargs): |
148 | global cache |
149 | - key = str((func, args, kwargs)) |
150 | + key = json.dumps((func, args, kwargs), sort_keys=True, default=str) |
151 | try: |
152 | return cache[key] |
153 | except KeyError: |
154 | @@ -94,7 +104,7 @@ |
155 | command += ['-l', level] |
156 | if not isinstance(message, six.string_types): |
157 | message = repr(message) |
158 | - command += [message] |
159 | + command += [message[:SH_MAX_ARG]] |
160 | # Missing juju-log should not cause failures in unit tests |
161 | # Send log output to stderr |
162 | try: |
163 | @@ -197,9 +207,56 @@ |
164 | return os.environ.get('JUJU_REMOTE_UNIT', None) |
165 | |
166 | |
167 | +def application_name(): |
168 | + """ |
169 | + The name of the deployed application this unit belongs to. |
170 | + """ |
171 | + return local_unit().split('/')[0] |
172 | + |
173 | + |
174 | def service_name(): |
175 | - """The name service group this unit belongs to""" |
176 | - return local_unit().split('/')[0] |
177 | + """ |
178 | + .. deprecated:: 0.19.1 |
179 | + Alias for :func:`application_name`. |
180 | + """ |
181 | + return application_name() |
182 | + |
183 | + |
184 | +def model_name(): |
185 | + """ |
186 | + Name of the model that this unit is deployed in. |
187 | + """ |
188 | + return os.environ['JUJU_MODEL_NAME'] |
189 | + |
190 | + |
191 | +def model_uuid(): |
192 | + """ |
193 | + UUID of the model that this unit is deployed in. |
194 | + """ |
195 | + return os.environ['JUJU_MODEL_UUID'] |
196 | + |
197 | + |
198 | +def principal_unit(): |
199 | + """Returns the principal unit of this unit, otherwise None""" |
200 | + # Juju 2.2 and above provides JUJU_PRINCIPAL_UNIT |
201 | + principal_unit = os.environ.get('JUJU_PRINCIPAL_UNIT', None) |
202 | + # If it's empty, then this unit is the principal |
203 | + if principal_unit == '': |
204 | + return os.environ['JUJU_UNIT_NAME'] |
205 | + elif principal_unit is not None: |
206 | + return principal_unit |
207 | + # For Juju 2.1 and below, let's try to work out the principal unit by |
208 | + # the various charms' metadata.yaml. |
209 | + for reltype in relation_types(): |
210 | + for rid in relation_ids(reltype): |
211 | + for unit in related_units(rid): |
212 | + md = _metadata_unit(unit) |
213 | + if not md: |
214 | + continue |
215 | + subordinate = md.pop('subordinate', None) |
216 | + if not subordinate: |
217 | + return unit |
218 | + return None |
219 | |
220 | |
221 | @cached |
222 | @@ -263,7 +320,7 @@ |
223 | self.implicit_save = True |
224 | self._prev_dict = None |
225 | self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME) |
226 | - if os.path.exists(self.path): |
227 | + if os.path.exists(self.path) and os.stat(self.path).st_size: |
228 | self.load_previous() |
229 | atexit(self._implicit_save) |
230 | |
231 | @@ -283,7 +340,11 @@ |
232 | """ |
233 | self.path = path or self.path |
234 | with open(self.path) as f: |
235 | - self._prev_dict = json.load(f) |
236 | + try: |
237 | + self._prev_dict = json.load(f) |
238 | + except ValueError as e: |
239 | + log('Unable to parse previous config data - {}'.format(str(e)), |
240 | + level=ERROR) |
241 | for k, v in copy.deepcopy(self._prev_dict).items(): |
242 | if k not in self: |
243 | self[k] = v |
244 | @@ -319,6 +380,7 @@ |
245 | |
246 | """ |
247 | with open(self.path, 'w') as f: |
248 | + os.fchmod(f.fileno(), 0o600) |
249 | json.dump(self, f) |
250 | |
251 | def _implicit_save(self): |
252 | @@ -326,22 +388,40 @@ |
253 | self.save() |
254 | |
255 | |
256 | -@cached |
257 | +_cache_config = None |
258 | + |
259 | + |
260 | def config(scope=None): |
261 | - """Juju charm configuration""" |
262 | - config_cmd_line = ['config-get'] |
263 | - if scope is not None: |
264 | - config_cmd_line.append(scope) |
265 | - else: |
266 | - config_cmd_line.append('--all') |
267 | - config_cmd_line.append('--format=json') |
268 | - try: |
269 | - config_data = json.loads( |
270 | - subprocess.check_output(config_cmd_line).decode('UTF-8')) |
271 | + """ |
272 | + Get the juju charm configuration (scope==None) or individual key, |
273 | + (scope=str). The returned value is a Python data structure loaded as |
274 | + JSON from the Juju config command. |
275 | + |
276 | + :param scope: If set, return the value for the specified key. |
277 | + :type scope: Optional[str] |
278 | + :returns: Either the whole config as a Config, or a key from it. |
279 | + :rtype: Any |
280 | + """ |
281 | + global _cache_config |
282 | + config_cmd_line = ['config-get', '--all', '--format=json'] |
283 | + try: |
284 | + # JSON Decode Exception for Python3.5+ |
285 | + exc_json = json.decoder.JSONDecodeError |
286 | + except AttributeError: |
287 | + # JSON Decode Exception for Python2.7 through Python3.4 |
288 | + exc_json = ValueError |
289 | + try: |
290 | + if _cache_config is None: |
291 | + config_data = json.loads( |
292 | + subprocess.check_output(config_cmd_line).decode('UTF-8')) |
293 | + _cache_config = Config(config_data) |
294 | if scope is not None: |
295 | - return config_data |
296 | - return Config(config_data) |
297 | - except ValueError: |
298 | + return _cache_config.get(scope) |
299 | + return _cache_config |
300 | + except (exc_json, UnicodeDecodeError) as e: |
301 | + log('Unable to parse output from config-get: config_cmd_line="{}" ' |
302 | + 'message="{}"' |
303 | + .format(config_cmd_line, str(e)), level=ERROR) |
304 | return None |
305 | |
306 | |
307 | @@ -435,6 +515,67 @@ |
308 | subprocess.check_output(units_cmd_line).decode('UTF-8')) or [] |
309 | |
310 | |
311 | +def expected_peer_units(): |
312 | + """Get a generator for units we expect to join peer relation based on |
313 | + goal-state. |
314 | + |
315 | + The local unit is excluded from the result to make it easy to gauge |
316 | + completion of all peers joining the relation with existing hook tools. |
317 | + |
318 | + Example usage: |
319 | + log('peer {} of {} joined peer relation' |
320 | + .format(len(related_units()), |
321 | + len(list(expected_peer_units())))) |
322 | + |
323 | + This function will raise NotImplementedError if used with juju versions |
324 | + without goal-state support. |
325 | + |
326 | + :returns: iterator |
327 | + :rtype: types.GeneratorType |
328 | + :raises: NotImplementedError |
329 | + """ |
330 | + if not has_juju_version("2.4.0"): |
331 | + # goal-state first appeared in 2.4.0. |
332 | + raise NotImplementedError("goal-state") |
333 | + _goal_state = goal_state() |
334 | + return (key for key in _goal_state['units'] |
335 | + if '/' in key and key != local_unit()) |
336 | + |
337 | + |
338 | +def expected_related_units(reltype=None): |
339 | + """Get a generator for units we expect to join relation based on |
340 | + goal-state. |
341 | + |
342 | + Note that you can not use this function for the peer relation, take a look |
343 | + at expected_peer_units() for that. |
344 | + |
345 | + This function will raise KeyError if you request information for a |
346 | + relation type for which juju goal-state does not have information. It will |
347 | + raise NotImplementedError if used with juju versions without goal-state |
348 | + support. |
349 | + |
350 | + Example usage: |
351 | + log('participant {} of {} joined relation {}' |
352 | + .format(len(related_units()), |
353 | + len(list(expected_related_units())), |
354 | + relation_type())) |
355 | + |
356 | + :param reltype: Relation type to list data for, default is to list data for |
357 | + the relation type we are currently executing a hook for. |
358 | + :type reltype: str |
359 | + :returns: iterator |
360 | + :rtype: types.GeneratorType |
361 | + :raises: KeyError, NotImplementedError |
362 | + """ |
363 | + if not has_juju_version("2.4.4"): |
364 | + # goal-state existed in 2.4.0, but did not list individual units to |
365 | + # join a relation in 2.4.1 through 2.4.3. (LP: #1794739) |
366 | + raise NotImplementedError("goal-state relation unit count") |
367 | + reltype = reltype or relation_type() |
368 | + _goal_state = goal_state() |
369 | + return (key for key in _goal_state['relations'][reltype] if '/' in key) |
370 | + |
371 | + |
372 | @cached |
373 | def relation_for_unit(unit=None, rid=None): |
374 | """Get the json represenation of a unit's relation""" |
375 | @@ -478,6 +619,24 @@ |
376 | return yaml.safe_load(md) |
377 | |
378 | |
379 | +def _metadata_unit(unit): |
380 | + """Given the name of a unit (e.g. apache2/0), get the unit charm's |
381 | + metadata.yaml. Very similar to metadata() but allows us to inspect |
382 | + other units. Unit needs to be co-located, such as a subordinate or |
383 | + principal/primary. |
384 | + |
385 | + :returns: metadata.yaml as a python object. |
386 | + |
387 | + """ |
388 | + basedir = os.sep.join(charm_dir().split(os.sep)[:-2]) |
389 | + unitdir = 'unit-{}'.format(unit.replace(os.sep, '-')) |
390 | + joineddir = os.path.join(basedir, unitdir, 'charm', 'metadata.yaml') |
391 | + if not os.path.exists(joineddir): |
392 | + return None |
393 | + with open(joineddir) as md: |
394 | + return yaml.safe_load(md) |
395 | + |
396 | + |
397 | @cached |
398 | def relation_types(): |
399 | """Get a list of relation types supported by this charm""" |
400 | @@ -602,18 +761,31 @@ |
401 | return False |
402 | |
403 | |
404 | +def _port_op(op_name, port, protocol="TCP"): |
405 | + """Open or close a service network port""" |
406 | + _args = [op_name] |
407 | + icmp = protocol.upper() == "ICMP" |
408 | + if icmp: |
409 | + _args.append(protocol) |
410 | + else: |
411 | + _args.append('{}/{}'.format(port, protocol)) |
412 | + try: |
413 | + subprocess.check_call(_args) |
414 | + except subprocess.CalledProcessError: |
415 | + # Older Juju pre 2.3 doesn't support ICMP |
416 | + # so treat it as a no-op if it fails. |
417 | + if not icmp: |
418 | + raise |
419 | + |
420 | + |
421 | def open_port(port, protocol="TCP"): |
422 | """Open a service network port""" |
423 | - _args = ['open-port'] |
424 | - _args.append('{}/{}'.format(port, protocol)) |
425 | - subprocess.check_call(_args) |
426 | + _port_op('open-port', port, protocol) |
427 | |
428 | |
429 | def close_port(port, protocol="TCP"): |
430 | """Close a service network port""" |
431 | - _args = ['close-port'] |
432 | - _args.append('{}/{}'.format(port, protocol)) |
433 | - subprocess.check_call(_args) |
434 | + _port_op('close-port', port, protocol) |
435 | |
436 | |
437 | def open_ports(start, end, protocol="TCP"): |
438 | @@ -630,6 +802,17 @@ |
439 | subprocess.check_call(_args) |
440 | |
441 | |
442 | +def opened_ports(): |
443 | + """Get the opened ports |
444 | + |
445 | + *Note that this will only show ports opened in a previous hook* |
446 | + |
447 | + :returns: Opened ports as a list of strings: ``['8080/tcp', '8081-8083/tcp']`` |
448 | + """ |
449 | + _args = ['opened-ports', '--format=json'] |
450 | + return json.loads(subprocess.check_output(_args).decode('UTF-8')) |
451 | + |
452 | + |
453 | @cached |
454 | def unit_get(attribute): |
455 | """Get the unit ID for the remote unit""" |
456 | @@ -751,8 +934,15 @@ |
457 | return wrapper |
458 | |
459 | |
460 | +class NoNetworkBinding(Exception): |
461 | + pass |
462 | + |
463 | + |
464 | def charm_dir(): |
465 | """Return the root directory of the current charm""" |
466 | + d = os.environ.get('JUJU_CHARM_DIR') |
467 | + if d is not None: |
468 | + return d |
469 | return os.environ.get('CHARM_DIR') |
470 | |
471 | |
472 | @@ -874,6 +1064,14 @@ |
473 | |
474 | |
475 | @translate_exc(from_exc=OSError, to_exc=NotImplementedError) |
476 | +@cached |
477 | +def goal_state(): |
478 | + """Juju goal state values""" |
479 | + cmd = ['goal-state', '--format=json'] |
480 | + return json.loads(subprocess.check_output(cmd).decode('UTF-8')) |
481 | + |
482 | + |
483 | +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) |
484 | def is_leader(): |
485 | """Does the current unit hold the juju leadership |
486 | |
487 | @@ -967,7 +1165,6 @@ |
488 | universal_newlines=True).strip() |
489 | |
490 | |
491 | -@cached |
492 | def has_juju_version(minimum_version): |
493 | """Return True if the Juju version is at least the provided version""" |
494 | return LooseVersion(juju_version()) >= LooseVersion(minimum_version) |
495 | @@ -1027,6 +1224,8 @@ |
496 | @translate_exc(from_exc=OSError, to_exc=NotImplementedError) |
497 | def network_get_primary_address(binding): |
498 | ''' |
499 | + Deprecated since Juju 2.3; use network_get() |
500 | + |
501 | Retrieve the primary network address for a named binding |
502 | |
503 | :param binding: string. The name of a relation of extra-binding |
504 | @@ -1034,7 +1233,41 @@ |
505 | :raise: NotImplementedError if run on Juju < 2.0 |
506 | ''' |
507 | cmd = ['network-get', '--primary-address', binding] |
508 | - return subprocess.check_output(cmd).decode('UTF-8').strip() |
509 | + try: |
510 | + response = subprocess.check_output( |
511 | + cmd, |
512 | + stderr=subprocess.STDOUT).decode('UTF-8').strip() |
513 | + except CalledProcessError as e: |
514 | + if 'no network config found for binding' in e.output.decode('UTF-8'): |
515 | + raise NoNetworkBinding("No network binding for {}" |
516 | + .format(binding)) |
517 | + else: |
518 | + raise |
519 | + return response |
520 | + |
521 | + |
522 | +def network_get(endpoint, relation_id=None): |
523 | + """ |
524 | + Retrieve the network details for a relation endpoint |
525 | + |
526 | + :param endpoint: string. The name of a relation endpoint |
527 | + :param relation_id: int. The ID of the relation for the current context. |
528 | + :return: dict. The loaded YAML output of the network-get query. |
529 | + :raise: NotImplementedError if request not supported by the Juju version. |
530 | + """ |
531 | + if not has_juju_version('2.2'): |
532 | + raise NotImplementedError(juju_version()) # earlier versions require --primary-address |
533 | + if relation_id and not has_juju_version('2.3'): |
534 | + raise NotImplementedError # 2.3 added the -r option |
535 | + |
536 | + cmd = ['network-get', endpoint, '--format', 'yaml'] |
537 | + if relation_id: |
538 | + cmd.append('-r') |
539 | + cmd.append(relation_id) |
540 | + response = subprocess.check_output( |
541 | + cmd, |
542 | + stderr=subprocess.STDOUT).decode('UTF-8').strip() |
543 | + return yaml.safe_load(response) |
544 | |
545 | |
546 | def add_metric(*args, **kwargs): |
547 | @@ -1066,3 +1299,192 @@ |
548 | """Get the meter status information, if running in the meter-status-changed |
549 | hook.""" |
550 | return os.environ.get('JUJU_METER_INFO') |
551 | + |
552 | + |
553 | +def iter_units_for_relation_name(relation_name): |
554 | + """Iterate through all units in a relation |
555 | + |
556 | + Generator that iterates through all the units in a relation and yields |
557 | + a named tuple with rid and unit field names. |
558 | + |
559 | + Usage: |
560 | + data = [(u.rid, u.unit) |
561 | + for u in iter_units_for_relation_name(relation_name)] |
562 | + |
563 | + :param relation_name: string relation name |
564 | + :yield: Named Tuple with rid and unit field names |
565 | + """ |
566 | + RelatedUnit = namedtuple('RelatedUnit', 'rid, unit') |
567 | + for rid in relation_ids(relation_name): |
568 | + for unit in related_units(rid): |
569 | + yield RelatedUnit(rid, unit) |
570 | + |
571 | + |
572 | +def ingress_address(rid=None, unit=None): |
573 | + """ |
574 | + Retrieve the ingress-address from a relation when available. |
575 | + Otherwise, return the private-address. |
576 | + |
577 | + When used on the consuming side of the relation (unit is a remote |
578 | + unit), the ingress-address is the IP address that this unit needs |
579 | + to use to reach the provided service on the remote unit. |
580 | + |
581 | + When used on the providing side of the relation (unit == local_unit()), |
582 | + the ingress-address is the IP address that is advertised to remote |
583 | + units on this relation. Remote units need to use this address to |
584 | + reach the local provided service on this unit. |
585 | + |
586 | + Note that charms may document some other method to use in |
587 | + preference to the ingress_address(), such as an address provided |
588 | + on a different relation attribute or a service discovery mechanism. |
589 | + This allows charms to redirect inbound connections to their peers |
590 | + or different applications such as load balancers. |
591 | + |
592 | + Usage: |
593 | + addresses = [ingress_address(rid=u.rid, unit=u.unit) |
594 | + for u in iter_units_for_relation_name(relation_name)] |
595 | + |
596 | + :param rid: string relation id |
597 | + :param unit: string unit name |
598 | + :side effect: calls relation_get |
599 | + :return: string IP address |
600 | + """ |
601 | + settings = relation_get(rid=rid, unit=unit) |
602 | + return (settings.get('ingress-address') or |
603 | + settings.get('private-address')) |
604 | + |
605 | + |
606 | +def egress_subnets(rid=None, unit=None): |
607 | + """ |
608 | + Retrieve the egress-subnets from a relation. |
609 | + |
610 | + This function is to be used on the providing side of the |
611 | + relation, and provides the ranges of addresses that client |
612 | + connections may come from. The result is uninteresting on |
613 | + the consuming side of a relation (unit == local_unit()). |
614 | + |
615 | + Returns a stable list of subnets in CIDR format. |
616 | + eg. ['192.168.1.0/24', '2001::F00F/128'] |
617 | + |
618 | + If egress-subnets is not available, falls back to using the published |
619 | + ingress-address, or finally private-address. |
620 | + |
621 | + :param rid: string relation id |
622 | + :param unit: string unit name |
623 | + :side effect: calls relation_get |
624 | + :return: list of subnets in CIDR format. eg. ['192.168.1.0/24', '2001::F00F/128'] |
625 | + """ |
626 | + def _to_range(addr): |
627 | + if re.search(r'^(?:\d{1,3}\.){3}\d{1,3}$', addr) is not None: |
628 | + addr += '/32' |
629 | + elif ':' in addr and '/' not in addr: # IPv6 |
630 | + addr += '/128' |
631 | + return addr |
632 | + |
633 | + settings = relation_get(rid=rid, unit=unit) |
634 | + if 'egress-subnets' in settings: |
635 | + return [n.strip() for n in settings['egress-subnets'].split(',') if n.strip()] |
636 | + if 'ingress-address' in settings: |
637 | + return [_to_range(settings['ingress-address'])] |
638 | + if 'private-address' in settings: |
639 | + return [_to_range(settings['private-address'])] |
640 | + return [] # Should never happen |
641 | + |
642 | + |
643 | +def unit_doomed(unit=None): |
644 | + """Determines if the unit is being removed from the model |
645 | + |
646 | + Requires Juju 2.4.1. |
647 | + |
648 | + :param unit: string unit name, defaults to local_unit |
649 | + :side effect: calls goal_state |
650 | + :side effect: calls local_unit |
651 | + :side effect: calls has_juju_version |
652 | + :return: True if the unit is being removed, already gone, or never existed |
653 | + """ |
654 | + if not has_juju_version("2.4.1"): |
655 | + # We cannot risk blindly returning False for 'we don't know', |
656 | + # because that could cause data loss; if call sites don't |
657 | + # need an accurate answer, they likely don't need this helper |
658 | + # at all. |
659 | + # goal-state existed in 2.4.0, but did not handle removals |
660 | + # correctly until 2.4.1. |
661 | + raise NotImplementedError("is_doomed") |
662 | + if unit is None: |
663 | + unit = local_unit() |
664 | + gs = goal_state() |
665 | + units = gs.get('units', {}) |
666 | + if unit not in units: |
667 | + return True |
668 | + # I don't think 'dead' units ever show up in the goal-state, but |
669 | + # check anyway in addition to 'dying'. |
670 | + return units[unit]['status'] in ('dying', 'dead') |
671 | + |
672 | + |
673 | +def env_proxy_settings(selected_settings=None): |
674 | + """Get proxy settings from process environment variables. |
675 | + |
676 | + Get charm proxy settings from environment variables that correspond to |
677 | + juju-http-proxy, juju-https-proxy and juju-no-proxy (available as of 2.4.2, |
678 | + see lp:1782236) in a format suitable for passing to an application that |
679 | + reacts to proxy settings passed as environment variables. Some applications |
680 | + support lowercase or uppercase notation (e.g. curl), some support only |
681 | + lowercase (e.g. wget), there are also subjectively rare cases of only |
682 | + uppercase notation support. no_proxy CIDR and wildcard support also varies |
683 | + between runtimes and applications as there is no enforced standard. |
684 | + |
685 | + Some applications may connect to multiple destinations and expose config |
686 | + options that would affect only proxy settings for a specific destination |
687 | + these should be handled in charms in an application-specific manner. |
688 | + |
689 | + :param selected_settings: format only a subset of possible settings |
690 | + :type selected_settings: list |
691 | + :rtype: Option(None, dict[str, str]) |
692 | + """ |
693 | + SUPPORTED_SETTINGS = { |
694 | + 'http': 'HTTP_PROXY', |
695 | + 'https': 'HTTPS_PROXY', |
696 | + 'no_proxy': 'NO_PROXY', |
697 | + 'ftp': 'FTP_PROXY' |
698 | + } |
699 | + if selected_settings is None: |
700 | + selected_settings = SUPPORTED_SETTINGS |
701 | + |
702 | + selected_vars = [v for k, v in SUPPORTED_SETTINGS.items() |
703 | + if k in selected_settings] |
704 | + proxy_settings = {} |
705 | + for var in selected_vars: |
706 | + var_val = os.getenv(var) |
707 | + if var_val: |
708 | + proxy_settings[var] = var_val |
709 | + proxy_settings[var.lower()] = var_val |
710 | + # Now handle juju-prefixed environment variables. The legacy vs new |
711 | + # environment variable usage is mutually exclusive |
712 | + charm_var_val = os.getenv('JUJU_CHARM_{}'.format(var)) |
713 | + if charm_var_val: |
714 | + proxy_settings[var] = charm_var_val |
715 | + proxy_settings[var.lower()] = charm_var_val |
716 | + if 'no_proxy' in proxy_settings: |
717 | + if _contains_range(proxy_settings['no_proxy']): |
718 | + log(RANGE_WARNING, level=WARNING) |
719 | + return proxy_settings if proxy_settings else None |
720 | + |
721 | + |
722 | +def _contains_range(addresses): |
723 | + """Check for cidr or wildcard domain in a string. |
724 | + |
725 | + Given a string comprising a comma-separated list of IP addresses |
726 | + and domain names, determine whether the string contains IP ranges |
727 | + or wildcard domains. |
728 | + |
729 | + :param addresses: comma-separated list of domains and IP addresses. |
730 | + :type addresses: str |
731 | + """ |
732 | + return ( |
733 | + # Test for cidr (e.g. 10.20.20.0/24) |
734 | + "/" in addresses or |
735 | + # Test for wildcard domains (*.foo.com or .foo.com) |
736 | + "*" in addresses or |
737 | + addresses.startswith(".") or |
738 | + ",." in addresses or |
739 | + " ." in addresses) |
740 | |
741 | === modified file 'hooks/charmhelpers/core/host.py' |
742 | --- hooks/charmhelpers/core/host.py 2017-03-03 19:56:10 +0000 |
743 | +++ hooks/charmhelpers/core/host.py 2019-05-24 20:38:05 +0000 |
744 | @@ -34,28 +34,33 @@ |
745 | |
746 | from contextlib import contextmanager |
747 | from collections import OrderedDict |
748 | -from .hookenv import log |
749 | +from .hookenv import log, INFO, DEBUG, local_unit, charm_name |
750 | from .fstab import Fstab |
751 | from charmhelpers.osplatform import get_platform |
752 | |
753 | __platform__ = get_platform() |
754 | if __platform__ == "ubuntu": |
755 | - from charmhelpers.core.host_factory.ubuntu import ( |
756 | + from charmhelpers.core.host_factory.ubuntu import ( # NOQA:F401 |
757 | service_available, |
758 | add_new_group, |
759 | lsb_release, |
760 | cmp_pkgrevno, |
761 | + CompareHostReleases, |
762 | + get_distrib_codename, |
763 | + arch |
764 | ) # flake8: noqa -- ignore F401 for this import |
765 | elif __platform__ == "centos": |
766 | - from charmhelpers.core.host_factory.centos import ( |
767 | + from charmhelpers.core.host_factory.centos import ( # NOQA:F401 |
768 | service_available, |
769 | add_new_group, |
770 | lsb_release, |
771 | cmp_pkgrevno, |
772 | + CompareHostReleases, |
773 | ) # flake8: noqa -- ignore F401 for this import |
774 | |
775 | UPDATEDB_PATH = '/etc/updatedb.conf' |
776 | |
777 | + |
778 | def service_start(service_name, **kwargs): |
779 | """Start a system service. |
780 | |
781 | @@ -190,6 +195,7 @@ |
782 | sysv_file = os.path.join(initd_dir, service_name) |
783 | if init_is_systemd(): |
784 | service('disable', service_name) |
785 | + service('mask', service_name) |
786 | elif os.path.exists(upstart_file): |
787 | override_path = os.path.join( |
788 | init_dir, '{}.override'.format(service_name)) |
789 | @@ -222,6 +228,7 @@ |
790 | upstart_file = os.path.join(init_dir, "{}.conf".format(service_name)) |
791 | sysv_file = os.path.join(initd_dir, service_name) |
792 | if init_is_systemd(): |
793 | + service('unmask', service_name) |
794 | service('enable', service_name) |
795 | elif os.path.exists(upstart_file): |
796 | override_path = os.path.join( |
797 | @@ -283,8 +290,8 @@ |
798 | for key, value in six.iteritems(kwargs): |
799 | parameter = '%s=%s' % (key, value) |
800 | cmd.append(parameter) |
801 | - output = subprocess.check_output(cmd, |
802 | - stderr=subprocess.STDOUT).decode('UTF-8') |
803 | + output = subprocess.check_output( |
804 | + cmd, stderr=subprocess.STDOUT).decode('UTF-8') |
805 | except subprocess.CalledProcessError: |
806 | return False |
807 | else: |
808 | @@ -306,6 +313,8 @@ |
809 | |
810 | def init_is_systemd(): |
811 | """Return True if the host system uses systemd, False otherwise.""" |
812 | + if lsb_release()['DISTRIB_CODENAME'] == 'trusty': |
813 | + return False |
814 | return os.path.isdir(SYSTEMD_SYSTEM) |
815 | |
816 | |
817 | @@ -435,6 +444,51 @@ |
818 | subprocess.check_call(cmd) |
819 | |
820 | |
821 | +def chage(username, lastday=None, expiredate=None, inactive=None, |
822 | + mindays=None, maxdays=None, root=None, warndays=None): |
823 | + """Change user password expiry information |
824 | + |
825 | + :param str username: User to update |
826 | + :param str lastday: Set when password was changed in YYYY-MM-DD format |
827 | + :param str expiredate: Set when user's account will no longer be |
828 | + accessible in YYYY-MM-DD format. |
829 | + -1 will remove an account expiration date. |
830 | + :param str inactive: Set the number of days of inactivity after a password |
831 | + has expired before the account is locked. |
832 | + -1 will remove an account's inactivity. |
833 | + :param str mindays: Set the minimum number of days between password |
834 | + changes to MIN_DAYS. |
835 | + 0 indicates the password can be changed anytime. |
836 | + :param str maxdays: Set the maximum number of days during which a |
837 | + password is valid. |
838 | + -1 as MAX_DAYS will remove checking maxdays |
839 | + :param str root: Apply changes in the CHROOT_DIR directory |
840 | + :param str warndays: Set the number of days of warning before a password |
841 | + change is required |
842 | + :raises subprocess.CalledProcessError: if call to chage fails |
843 | + """ |
844 | + cmd = ['chage'] |
845 | + if root: |
846 | + cmd.extend(['--root', root]) |
847 | + if lastday: |
848 | + cmd.extend(['--lastday', lastday]) |
849 | + if expiredate: |
850 | + cmd.extend(['--expiredate', expiredate]) |
851 | + if inactive: |
852 | + cmd.extend(['--inactive', inactive]) |
853 | + if mindays: |
854 | + cmd.extend(['--mindays', mindays]) |
855 | + if maxdays: |
856 | + cmd.extend(['--maxdays', maxdays]) |
857 | + if warndays: |
858 | + cmd.extend(['--warndays', warndays]) |
859 | + cmd.append(username) |
860 | + subprocess.check_call(cmd) |
861 | + |
862 | + |
863 | +remove_password_expiry = functools.partial(chage, expiredate='-1', inactive='-1', mindays='0', maxdays='-1') |
864 | + |
865 | + |
866 | def rsync(from_path, to_path, flags='-r', options=None, timeout=None): |
867 | """Replicate the contents of a path""" |
868 | options = options or ['--delete', '--executability'] |
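The new `chage` helper maps each optional keyword to a `--flag value` pair and only then runs the command. A sketch of that assembly step alone (no subprocess call; the function name is hypothetical):

```python
def build_chage_cmd(username, expiredate=None, inactive=None,
                    mindays=None, maxdays=None):
    """Assemble a chage(1) command line the way the helper does:
    each supplied optional argument becomes a --flag value pair."""
    cmd = ['chage']
    for flag, value in (('--expiredate', expiredate),
                        ('--inactive', inactive),
                        ('--mindays', mindays),
                        ('--maxdays', maxdays)):
        if value:  # string values like '0' are truthy, so they pass through
            cmd.extend([flag, value])
    cmd.append(username)
    return cmd
```

With the arguments `remove_password_expiry` binds, this yields the exact argv the helper would execute.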
869 | @@ -481,13 +535,45 @@ |
870 | |
871 | def write_file(path, content, owner='root', group='root', perms=0o444): |
872 | """Create or overwrite a file with the contents of a byte string.""" |
873 | - log("Writing file {} {}:{} {:o}".format(path, owner, group, perms)) |
874 | uid = pwd.getpwnam(owner).pw_uid |
875 | gid = grp.getgrnam(group).gr_gid |
876 | - with open(path, 'wb') as target: |
877 | - os.fchown(target.fileno(), uid, gid) |
878 | - os.fchmod(target.fileno(), perms) |
879 | - target.write(content) |
880 | + # let's see if we can grab the file and compare the content, to avoid doing |
881 | + # a write. |
882 | + existing_content = None |
883 | + existing_uid, existing_gid, existing_perms = None, None, None |
884 | + try: |
885 | + with open(path, 'rb') as target: |
886 | + existing_content = target.read() |
887 | + stat = os.stat(path) |
888 | + existing_uid, existing_gid, existing_perms = ( |
889 | + stat.st_uid, stat.st_gid, stat.st_mode |
890 | + ) |
891 | + except Exception: |
892 | + pass |
893 | + if content != existing_content: |
894 | + log("Writing file {} {}:{} {:o}".format(path, owner, group, perms), |
895 | + level=DEBUG) |
896 | + with open(path, 'wb') as target: |
897 | + os.fchown(target.fileno(), uid, gid) |
898 | + os.fchmod(target.fileno(), perms) |
899 | + if six.PY3 and isinstance(content, six.string_types): |
900 | + content = content.encode('UTF-8') |
901 | + target.write(content) |
902 | + return |
903 | + # the contents were the same, but we might still need to change the |
904 | + # ownership or permissions. |
905 | + if existing_uid != uid: |
906 | + log("Changing uid on already existing content: {} -> {}" |
907 | + .format(existing_uid, uid), level=DEBUG) |
908 | + os.chown(path, uid, -1) |
909 | + if existing_gid != gid: |
910 | + log("Changing gid on already existing content: {} -> {}" |
911 | + .format(existing_gid, gid), level=DEBUG) |
912 | + os.chown(path, -1, gid) |
913 | + if existing_perms != perms: |
914 | + log("Changing permissions on existing content: {} -> {}" |
915 | + .format(existing_perms, perms), level=DEBUG) |
916 | + os.chmod(path, perms) |
917 | |
918 | |
919 | def fstab_remove(mp): |
920 | @@ -752,7 +838,7 @@ |
921 | ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n') |
922 | ip_output = (line.strip() for line in ip_output if line) |
923 | |
924 | - key = re.compile('^[0-9]+:\s+(.+):') |
925 | + key = re.compile(r'^[0-9]+:\s+(.+):') |
926 | for line in ip_output: |
927 | matched = re.search(key, line) |
928 | if matched: |
929 | @@ -897,6 +983,20 @@ |
930 | |
931 | |
932 | def add_to_updatedb_prunepath(path, updatedb_path=UPDATEDB_PATH): |
933 | + """Adds the specified path to mlocate's updatedb.conf PRUNEPATHS list. |
934 | + |
935 | + This method has no effect if the path specified by updatedb_path does not |
936 | + exist or is not a file. |
937 | + |
938 | + @param path: string the path to add to the updatedb.conf PRUNEPATHS value |
939 | + @param updatedb_path: the path to the updatedb.conf file |
940 | + """ |
941 | + if not os.path.exists(updatedb_path) or os.path.isdir(updatedb_path): |
942 | + # If the updatedb.conf file doesn't exist then don't attempt to update |
943 | + # the file as the package providing mlocate may not be installed on |
944 | + # the local system |
945 | + return |
946 | + |
947 | with open(updatedb_path, 'r+') as f_id: |
948 | updatedb_text = f_id.read() |
949 | output = updatedb(updatedb_text, path) |
950 | @@ -916,3 +1016,62 @@ |
951 | lines[i] = 'PRUNEPATHS="{}"'.format(' '.join(paths)) |
952 | output = "\n".join(lines) |
953 | return output |
954 | + |
955 | + |
956 | +def modulo_distribution(modulo=3, wait=30, non_zero_wait=False): |
957 | + """ Modulo distribution |
958 | + |
959 | + This helper uses the unit number, a modulo value and a constant wait time |
960 | + to produce a calculated wait time distribution. This is useful in large |
961 | + scale deployments to distribute load during an expensive operation such as |
962 | + service restarts. |
963 | + |
964 | + If you have 1000 nodes that need to restart, 100 at a time, with a |
965 | + one-minute spacing: |
966 | + |
967 | + time.wait(modulo_distribution(modulo=100, wait=60)) |
968 | + restart() |
969 | + |
970 | + If you need restarts to happen serially set modulo to the exact number of |
971 | + nodes and set a high constant wait time: |
972 | + |
973 | + time.wait(modulo_distribution(modulo=10, wait=120)) |
974 | + restart() |
975 | + |
976 | + @param modulo: int The modulo number creates the group distribution |
977 | + @param wait: int The constant time wait value |
978 | + @param non_zero_wait: boolean Override unit % modulo == 0, |
979 | + return modulo * wait. Used to avoid collisions with |
980 | + leader nodes which are often given priority. |
981 | + @return: int Calculated time to wait for unit operation |
982 | + """ |
983 | + unit_number = int(local_unit().split('/')[1]) |
984 | + calculated_wait_time = (unit_number % modulo) * wait |
985 | + if non_zero_wait and calculated_wait_time == 0: |
986 | + return modulo * wait |
987 | + else: |
988 | + return calculated_wait_time |
989 | + |
990 | + |
991 | +def install_ca_cert(ca_cert, name=None): |
992 | + """ |
993 | + Install the given cert as a trusted CA. |
994 | + |
995 | + The ``name`` is the stem of the filename where the cert is written, and if |
996 | + not provided, it will default to ``juju-{charm_name}``. |
997 | + |
998 | + If the cert is empty or None, or is unchanged, nothing is done. |
999 | + """ |
1000 | + if not ca_cert: |
1001 | + return |
1002 | + if not isinstance(ca_cert, bytes): |
1003 | + ca_cert = ca_cert.encode('utf8') |
1004 | + if not name: |
1005 | + name = 'juju-{}'.format(charm_name()) |
1006 | + cert_file = '/usr/local/share/ca-certificates/{}.crt'.format(name) |
1007 | + new_hash = hashlib.md5(ca_cert).hexdigest() |
1008 | + if file_hash(cert_file) == new_hash: |
1009 | + return |
1010 | + log("Installing new CA cert at: {}".format(cert_file), level=INFO) |
1011 | + write_file(cert_file, ca_cert) |
1012 | + subprocess.check_call(['update-ca-certificates', '--fresh']) |
1013 | |
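`modulo_distribution` above is a pure calculation once the unit number is known; in the helper that number comes from `local_unit()`. A sketch with the unit number made an explicit parameter so the arithmetic can be exercised directly:

```python
def modulo_distribution(unit_number, modulo=3, wait=30, non_zero_wait=False):
    """Spread an expensive operation across units: each unit waits
    (unit_number % modulo) * wait seconds; non_zero_wait pushes the
    zero-offset group (often leaders) to the back of the queue."""
    calculated_wait = (unit_number % modulo) * wait
    if non_zero_wait and calculated_wait == 0:
        return modulo * wait
    return calculated_wait
```

So with modulo=100 and wait=60, units 0 and 100 restart immediately, unit 101 waits one minute, and so on.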
1014 | === modified file 'hooks/charmhelpers/core/host_factory/centos.py' |
1015 | --- hooks/charmhelpers/core/host_factory/centos.py 2017-03-03 19:56:10 +0000 |
1016 | +++ hooks/charmhelpers/core/host_factory/centos.py 2019-05-24 20:38:05 +0000 |
1017 | @@ -2,6 +2,22 @@ |
1018 | import yum |
1019 | import os |
1020 | |
1021 | +from charmhelpers.core.strutils import BasicStringComparator |
1022 | + |
1023 | + |
1024 | +class CompareHostReleases(BasicStringComparator): |
1025 | + """Provide comparisons of Host releases. |
1026 | + |
1027 | + Use in the form of |
1028 | + |
1029 | + if CompareHostReleases(release) > 'trusty': |
1030 | + # do something for releases newer than trusty |
1031 | + """ |
1032 | + |
1033 | + def __init__(self, item): |
1034 | + raise NotImplementedError( |
1035 | + "CompareHostReleases() is not implemented for CentOS") |
1036 | + |
1037 | |
1038 | def service_available(service_name): |
1039 | # """Determine whether a system service is available.""" |
1040 | |
1041 | === modified file 'hooks/charmhelpers/core/host_factory/ubuntu.py' |
1042 | --- hooks/charmhelpers/core/host_factory/ubuntu.py 2017-03-03 19:56:10 +0000 |
1043 | +++ hooks/charmhelpers/core/host_factory/ubuntu.py 2019-05-24 20:38:05 +0000 |
1044 | @@ -1,5 +1,42 @@ |
1045 | import subprocess |
1046 | |
1047 | +from charmhelpers.core.hookenv import cached |
1048 | +from charmhelpers.core.strutils import BasicStringComparator |
1049 | + |
1050 | + |
1051 | +UBUNTU_RELEASES = ( |
1052 | + 'lucid', |
1053 | + 'maverick', |
1054 | + 'natty', |
1055 | + 'oneiric', |
1056 | + 'precise', |
1057 | + 'quantal', |
1058 | + 'raring', |
1059 | + 'saucy', |
1060 | + 'trusty', |
1061 | + 'utopic', |
1062 | + 'vivid', |
1063 | + 'wily', |
1064 | + 'xenial', |
1065 | + 'yakkety', |
1066 | + 'zesty', |
1067 | + 'artful', |
1068 | + 'bionic', |
1069 | + 'cosmic', |
1070 | + 'disco', |
1071 | +) |
1072 | + |
1073 | + |
1074 | +class CompareHostReleases(BasicStringComparator): |
1075 | + """Provide comparisons of Ubuntu releases. |
1076 | + |
1077 | + Use in the form of |
1078 | + |
1079 | + if CompareHostReleases(release) > 'trusty': |
1080 | + # do something for releases newer than trusty |
1081 | + """ |
1082 | + _list = UBUNTU_RELEASES |
1083 | + |
1084 | |
1085 | def service_available(service_name): |
1086 | """Determine whether a system service is available""" |
1087 | @@ -37,6 +74,14 @@ |
1088 | return d |
1089 | |
1090 | |
1091 | +def get_distrib_codename(): |
1092 | + """Return the codename of the distribution |
1093 | + :returns: The codename |
1094 | + :rtype: str |
1095 | + """ |
1096 | + return lsb_release()['DISTRIB_CODENAME'].lower() |
1097 | + |
1098 | + |
1099 | def cmp_pkgrevno(package, revno, pkgcache=None): |
1100 | """Compare supplied revno with the revno of the installed package. |
1101 | |
1102 | @@ -54,3 +99,16 @@ |
1103 | pkgcache = apt_cache() |
1104 | pkg = pkgcache[package] |
1105 | return apt_pkg.version_compare(pkg.current_ver.ver_str, revno) |
1106 | + |
1107 | + |
1108 | +@cached |
1109 | +def arch(): |
1110 | + """Return the package architecture as a string. |
1111 | + |
1112 | + :returns: the architecture |
1113 | + :rtype: str |
1114 | + :raises: subprocess.CalledProcessError if dpkg command fails |
1115 | + """ |
1116 | + return subprocess.check_output( |
1117 | + ['dpkg', '--print-architecture'] |
1118 | + ).rstrip().decode('UTF-8') |
1119 | |
1120 | === modified file 'hooks/charmhelpers/core/kernel.py' |
1121 | --- hooks/charmhelpers/core/kernel.py 2017-03-03 19:56:10 +0000 |
1122 | +++ hooks/charmhelpers/core/kernel.py 2019-05-24 20:38:05 +0000 |
1123 | @@ -26,12 +26,12 @@ |
1124 | |
1125 | __platform__ = get_platform() |
1126 | if __platform__ == "ubuntu": |
1127 | - from charmhelpers.core.kernel_factory.ubuntu import ( |
1128 | + from charmhelpers.core.kernel_factory.ubuntu import ( # NOQA:F401 |
1129 | persistent_modprobe, |
1130 | update_initramfs, |
1131 | ) # flake8: noqa -- ignore F401 for this import |
1132 | elif __platform__ == "centos": |
1133 | - from charmhelpers.core.kernel_factory.centos import ( |
1134 | + from charmhelpers.core.kernel_factory.centos import ( # NOQA:F401 |
1135 | persistent_modprobe, |
1136 | update_initramfs, |
1137 | ) # flake8: noqa -- ignore F401 for this import |
1138 | |
1139 | === modified file 'hooks/charmhelpers/core/services/base.py' |
1140 | --- hooks/charmhelpers/core/services/base.py 2017-03-03 19:56:10 +0000 |
1141 | +++ hooks/charmhelpers/core/services/base.py 2019-05-24 20:38:05 +0000 |
1142 | @@ -307,23 +307,34 @@ |
1143 | """ |
1144 | def __call__(self, manager, service_name, event_name): |
1145 | service = manager.get_service(service_name) |
1146 | - new_ports = service.get('ports', []) |
1147 | + # turn this generator into a list, |
1148 | + # as we'll be going over it multiple times |
1149 | + new_ports = list(service.get('ports', [])) |
1150 | port_file = os.path.join(hookenv.charm_dir(), '.{}.ports'.format(service_name)) |
1151 | if os.path.exists(port_file): |
1152 | with open(port_file) as fp: |
1153 | old_ports = fp.read().split(',') |
1154 | for old_port in old_ports: |
1155 | - if bool(old_port): |
1156 | - old_port = int(old_port) |
1157 | - if old_port not in new_ports: |
1158 | - hookenv.close_port(old_port) |
1159 | + if bool(old_port) and not self.ports_contains(old_port, new_ports): |
1160 | + hookenv.close_port(old_port) |
1161 | with open(port_file, 'w') as fp: |
1162 | fp.write(','.join(str(port) for port in new_ports)) |
1163 | for port in new_ports: |
1164 | + # A port is either a number or 'ICMP' |
1165 | + protocol = 'TCP' |
1166 | + if str(port).upper() == 'ICMP': |
1167 | + protocol = 'ICMP' |
1168 | if event_name == 'start': |
1169 | - hookenv.open_port(port) |
1170 | + hookenv.open_port(port, protocol) |
1171 | elif event_name == 'stop': |
1172 | - hookenv.close_port(port) |
1173 | + hookenv.close_port(port, protocol) |
1174 | + |
1175 | + def ports_contains(self, port, ports): |
1176 | + if not bool(port): |
1177 | + return False |
1178 | + if str(port).upper() != 'ICMP': |
1179 | + port = int(port) |
1180 | + return port in ports |
1181 | |
1182 | |
1183 | def service_stop(service_name): |
1184 | |
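The `ports_contains` method added above exists because ports are now either integers or the literal string 'ICMP', so a naive `int(old_port) in new_ports` would crash. A standalone sketch of the same membership test:

```python
def ports_contains(port, ports):
    """Membership test that treats 'ICMP' as a protocol marker and
    everything else as an integer port, as PortManagerCallback does."""
    if not port:
        return False
    if str(port).upper() != 'ICMP':
        port = int(port)
    return port in ports
```

Old ports read back from the `.ports` file are strings, which is why the conversion happens inside the check rather than at the call site.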
1185 | === modified file 'hooks/charmhelpers/core/strutils.py' |
1186 | --- hooks/charmhelpers/core/strutils.py 2017-03-03 19:56:10 +0000 |
1187 | +++ hooks/charmhelpers/core/strutils.py 2019-05-24 20:38:05 +0000 |
1188 | @@ -61,10 +61,69 @@ |
1189 | if isinstance(value, six.string_types): |
1190 | value = six.text_type(value) |
1191 | else: |
1192 | - msg = "Unable to interpret non-string value '%s' as boolean" % (value) |
1193 | + msg = "Unable to interpret non-string value '%s' as bytes" % (value) |
1194 | raise ValueError(msg) |
1195 | matches = re.match("([0-9]+)([a-zA-Z]+)", value) |
1196 | - if not matches: |
1197 | - msg = "Unable to interpret string value '%s' as bytes" % (value) |
1198 | - raise ValueError(msg) |
1199 | - return int(matches.group(1)) * (1024 ** BYTE_POWER[matches.group(2)]) |
1200 | + if matches: |
1201 | + size = int(matches.group(1)) * (1024 ** BYTE_POWER[matches.group(2)]) |
1202 | + else: |
1203 | + # Assume that value passed in is bytes |
1204 | + try: |
1205 | + size = int(value) |
1206 | + except ValueError: |
1207 | + msg = "Unable to interpret string value '%s' as bytes" % (value) |
1208 | + raise ValueError(msg) |
1209 | + return size |
1210 | + |
1211 | + |
1212 | +class BasicStringComparator(object): |
1213 | + """Provides a class that will compare strings from an iterator type object. |
1214 | + Used to provide > and < comparisons on strings that may not necessarily be |
1215 | + alphanumerically ordered. e.g. OpenStack or Ubuntu releases AFTER the |
1216 | + z-wrap. |
1217 | + """ |
1218 | + |
1219 | + _list = None |
1220 | + |
1221 | + def __init__(self, item): |
1222 | + if self._list is None: |
1223 | + raise Exception("Must define the _list in the class definition!") |
1224 | + try: |
1225 | + self.index = self._list.index(item) |
1226 | + except Exception: |
1227 | + raise KeyError("Item '{}' is not in list '{}'" |
1228 | + .format(item, self._list)) |
1229 | + |
1230 | + def __eq__(self, other): |
1231 | + assert isinstance(other, str) or isinstance(other, self.__class__) |
1232 | + return self.index == self._list.index(other) |
1233 | + |
1234 | + def __ne__(self, other): |
1235 | + return not self.__eq__(other) |
1236 | + |
1237 | + def __lt__(self, other): |
1238 | + assert isinstance(other, str) or isinstance(other, self.__class__) |
1239 | + return self.index < self._list.index(other) |
1240 | + |
1241 | + def __ge__(self, other): |
1242 | + return not self.__lt__(other) |
1243 | + |
1244 | + def __gt__(self, other): |
1245 | + assert isinstance(other, str) or isinstance(other, self.__class__) |
1246 | + return self.index > self._list.index(other) |
1247 | + |
1248 | + def __le__(self, other): |
1249 | + return not self.__gt__(other) |
1250 | + |
1251 | + def __str__(self): |
1252 | + """Always give back the item at the index so it can be used in |
1253 | + comparisons like: |
1254 | + |
1255 | + s_mitaka = CompareOpenStack('mitaka') |
1256 | + s_newton = CompareOpenStack('newton') |
1257 | + |
1258 | + assert s_newton > s_mitaka |
1259 | + |
1260 | + @returns: <string> |
1261 | + """ |
1262 | + return self._list[self.index] |
1263 | |
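`BasicStringComparator` orders strings by their position in a fixed tuple rather than alphabetically, which is what makes post-z-wrap release names comparable. A minimal sketch of the same trick with a short hypothetical release list:

```python
class ReleaseComparator:
    """Compare release names by position in a fixed tuple, not
    alphabetically, in the style of BasicStringComparator."""
    _list = ('trusty', 'xenial', 'bionic', 'disco')

    def __init__(self, item):
        self.index = self._list.index(item)  # raises ValueError if unknown

    def _idx(self, other):
        return other.index if isinstance(other, ReleaseComparator) \
            else self._list.index(other)

    def __eq__(self, other):
        return self.index == self._idx(other)

    def __lt__(self, other):
        return self.index < self._idx(other)

    def __gt__(self, other):
        return self.index > self._idx(other)
```

This matters because plain string comparison gets release order wrong: 'bionic' sorts before 'xenial' alphabetically even though it is the newer release.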
1264 | === modified file 'hooks/charmhelpers/core/sysctl.py' |
1265 | --- hooks/charmhelpers/core/sysctl.py 2017-03-03 19:56:10 +0000 |
1266 | +++ hooks/charmhelpers/core/sysctl.py 2019-05-24 20:38:05 +0000 |
1267 | @@ -28,27 +28,38 @@ |
1268 | __author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>' |
1269 | |
1270 | |
1271 | -def create(sysctl_dict, sysctl_file): |
1272 | +def create(sysctl_dict, sysctl_file, ignore=False): |
1273 | """Creates a sysctl.conf file from a YAML associative array |
1274 | |
1275 | - :param sysctl_dict: a YAML-formatted string of sysctl options eg "{ 'kernel.max_pid': 1337 }" |
1276 | + :param sysctl_dict: a dict or YAML-formatted string of sysctl |
1277 | + options eg "{ 'kernel.max_pid': 1337 }" |
1278 | :type sysctl_dict: str |
1279 | :param sysctl_file: path to the sysctl file to be saved |
1280 | :type sysctl_file: str or unicode |
1281 | + :param ignore: If True, ignore "unknown variable" errors. |
1282 | + :type ignore: bool |
1283 | :returns: None |
1284 | """ |
1285 | - try: |
1286 | - sysctl_dict_parsed = yaml.safe_load(sysctl_dict) |
1287 | - except yaml.YAMLError: |
1288 | - log("Error parsing YAML sysctl_dict: {}".format(sysctl_dict), |
1289 | - level=ERROR) |
1290 | - return |
1291 | + if type(sysctl_dict) is not dict: |
1292 | + try: |
1293 | + sysctl_dict_parsed = yaml.safe_load(sysctl_dict) |
1294 | + except yaml.YAMLError: |
1295 | + log("Error parsing YAML sysctl_dict: {}".format(sysctl_dict), |
1296 | + level=ERROR) |
1297 | + return |
1298 | + else: |
1299 | + sysctl_dict_parsed = sysctl_dict |
1300 | |
1301 | with open(sysctl_file, "w") as fd: |
1302 | for key, value in sysctl_dict_parsed.items(): |
1303 | fd.write("{}={}\n".format(key, value)) |
1304 | |
1305 | - log("Updating sysctl_file: %s values: %s" % (sysctl_file, sysctl_dict_parsed), |
1306 | + log("Updating sysctl_file: {} values: {}".format(sysctl_file, |
1307 | + sysctl_dict_parsed), |
1308 | level=DEBUG) |
1309 | |
1310 | - check_call(["sysctl", "-p", sysctl_file]) |
1311 | + call = ["sysctl", "-p", sysctl_file] |
1312 | + if ignore: |
1313 | + call.append("-e") |
1314 | + |
1315 | + check_call(call) |
1316 | |
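The `create()` change above boils down to "accept either a dict or a serialized mapping, normalize to a dict, then emit key=value lines". A sketch of that normalization using JSON from the stdlib as a stand-in for the YAML parsing the helper actually does (function names are illustrative):

```python
import json

def normalize_settings(settings):
    """Return a dict whether given a dict or a serialized mapping
    (JSON here, standing in for the YAML create() parses)."""
    if not isinstance(settings, dict):
        settings = json.loads(settings)
    return settings

def render_sysctl(settings):
    """Produce sysctl.conf-style key=value lines."""
    return "".join("{}={}\n".format(k, v)
                   for k, v in sorted(normalize_settings(settings).items()))
```

The `ignore=True` path in the real helper then simply appends `-e` to the `sysctl -p` invocation so unknown keys are skipped rather than fatal.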
1317 | === modified file 'hooks/charmhelpers/core/templating.py' |
1318 | --- hooks/charmhelpers/core/templating.py 2017-03-03 19:56:10 +0000 |
1319 | +++ hooks/charmhelpers/core/templating.py 2019-05-24 20:38:05 +0000 |
1320 | @@ -20,7 +20,8 @@ |
1321 | |
1322 | |
1323 | def render(source, target, context, owner='root', group='root', |
1324 | - perms=0o444, templates_dir=None, encoding='UTF-8', template_loader=None): |
1325 | + perms=0o444, templates_dir=None, encoding='UTF-8', |
1326 | + template_loader=None, config_template=None): |
1327 | """ |
1328 | Render a template. |
1329 | |
1330 | @@ -32,6 +33,9 @@ |
1331 | The context should be a dict containing the values to be replaced in the |
1332 | template. |
1333 | |
1334 | + config_template may be provided to render from a provided template instead |
1335 | + of loading from a file. |
1336 | + |
1337 | The `owner`, `group`, and `perms` options will be passed to `write_file`. |
1338 | |
1339 | If omitted, `templates_dir` defaults to the `templates` folder in the charm. |
1340 | @@ -65,14 +69,19 @@ |
1341 | if templates_dir is None: |
1342 | templates_dir = os.path.join(hookenv.charm_dir(), 'templates') |
1343 | template_env = Environment(loader=FileSystemLoader(templates_dir)) |
1344 | - try: |
1345 | - source = source |
1346 | - template = template_env.get_template(source) |
1347 | - except exceptions.TemplateNotFound as e: |
1348 | - hookenv.log('Could not load template %s from %s.' % |
1349 | - (source, templates_dir), |
1350 | - level=hookenv.ERROR) |
1351 | - raise e |
1352 | + |
1353 | + # load from a string if provided explicitly |
1354 | + if config_template is not None: |
1355 | + template = template_env.from_string(config_template) |
1356 | + else: |
1357 | + try: |
1358 | + source = source |
1359 | + template = template_env.get_template(source) |
1360 | + except exceptions.TemplateNotFound as e: |
1361 | + hookenv.log('Could not load template %s from %s.' % |
1362 | + (source, templates_dir), |
1363 | + level=hookenv.ERROR) |
1364 | + raise e |
1365 | content = template.render(context) |
1366 | if target is not None: |
1367 | target_dir = os.path.dirname(target) |
1368 | |
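The templating change gives `config_template` precedence: render from the provided string if set, otherwise look the template up by name. A sketch of that precedence using `string.Template` from the stdlib as a stand-in for the Jinja2 environment (names and the dict-based template store are illustrative):

```python
from string import Template

def render_sketch(source=None, context=None, config_template=None,
                  templates=None):
    """Pick the template body from an explicit string if provided,
    else from a named store -- the precedence render() now applies."""
    templates = templates or {}
    if config_template is not None:
        body = config_template
    else:
        # KeyError here plays the role of Jinja2's TemplateNotFound
        body = templates[source]
    return Template(body).substitute(context or {})
```

In the real helper the fallback path still logs and re-raises `TemplateNotFound`, so existing callers see no behaviour change.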
1369 | === modified file 'hooks/charmhelpers/core/unitdata.py' |
1370 | --- hooks/charmhelpers/core/unitdata.py 2017-03-03 19:56:10 +0000 |
1371 | +++ hooks/charmhelpers/core/unitdata.py 2019-05-24 20:38:05 +0000 |
1372 | @@ -166,6 +166,10 @@ |
1373 | |
1374 | To support dicts, lists, integer, floats, and booleans values |
1375 | are automatically json encoded/decoded. |
1376 | + |
1377 | + Note: to facilitate unit testing, ':memory:' can be passed as the |
1378 | + path parameter which causes sqlite3 to only build the db in memory. |
1379 | + This should only be used for testing purposes. |
1380 | """ |
1381 | def __init__(self, path=None): |
1382 | self.db_path = path |
1383 | @@ -175,6 +179,9 @@ |
1384 | else: |
1385 | self.db_path = os.path.join( |
1386 | os.environ.get('CHARM_DIR', ''), '.unit-state.db') |
1387 | + if self.db_path != ':memory:': |
1388 | + with open(self.db_path, 'a') as f: |
1389 | + os.fchmod(f.fileno(), 0o600) |
1390 | self.conn = sqlite3.connect('%s' % self.db_path) |
1391 | self.cursor = self.conn.cursor() |
1392 | self.revision = None |
1393 | @@ -358,7 +365,7 @@ |
1394 | try: |
1395 | yield self.revision |
1396 | self.revision = None |
1397 | - except: |
1398 | + except Exception: |
1399 | self.flush(False) |
1400 | self.revision = None |
1401 | raise |
1402 | |
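The unitdata note above points out that `':memory:'` now skips the on-disk chmod and builds the sqlite database in RAM for tests. A tiny key/value sketch of that pattern (not the real `Storage` API, just the same `':memory:'` idea):

```python
import sqlite3

class KVSketch:
    """Minimal key/value store on sqlite3; the default ':memory:'
    path keeps everything in RAM, as unitdata allows for testing.
    On-disk paths would additionally get chmod 0600, as in the diff."""
    def __init__(self, path=':memory:'):
        self.conn = sqlite3.connect(path)
        self.conn.execute('CREATE TABLE IF NOT EXISTS kv '
                          '(key TEXT PRIMARY KEY, value TEXT)')

    def set(self, key, value):
        self.conn.execute('REPLACE INTO kv (key, value) VALUES (?, ?)',
                          (key, value))

    def get(self, key):
        row = self.conn.execute(
            'SELECT value FROM kv WHERE key = ?', (key,)).fetchone()
        return row[0] if row else None
```
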
1403 | === modified file 'hooks/charmhelpers/fetch/__init__.py' |
1404 | --- hooks/charmhelpers/fetch/__init__.py 2017-03-03 19:56:10 +0000 |
1405 | +++ hooks/charmhelpers/fetch/__init__.py 2019-05-24 20:38:05 +0000 |
1406 | @@ -48,6 +48,13 @@ |
1407 | pass |
1408 | |
1409 | |
1410 | +class GPGKeyError(Exception): |
1411 | + """Exception occurs when a GPG key cannot be fetched or used. The message |
1412 | + indicates what the problem is. |
1413 | + """ |
1414 | + pass |
1415 | + |
1416 | + |
1417 | class BaseFetchHandler(object): |
1418 | |
1419 | """Base class for FetchHandler implementations in fetch plugins""" |
1420 | @@ -77,21 +84,24 @@ |
1421 | fetch = importlib.import_module(module) |
1422 | |
1423 | filter_installed_packages = fetch.filter_installed_packages |
1424 | -install = fetch.install |
1425 | -upgrade = fetch.upgrade |
1426 | -update = fetch.update |
1427 | -purge = fetch.purge |
1428 | +filter_missing_packages = fetch.filter_missing_packages |
1429 | +install = fetch.apt_install |
1430 | +upgrade = fetch.apt_upgrade |
1431 | +update = _fetch_update = fetch.apt_update |
1432 | +purge = fetch.apt_purge |
1433 | add_source = fetch.add_source |
1434 | |
1435 | if __platform__ == "ubuntu": |
1436 | apt_cache = fetch.apt_cache |
1437 | - apt_install = fetch.install |
1438 | - apt_update = fetch.update |
1439 | - apt_upgrade = fetch.upgrade |
1440 | - apt_purge = fetch.purge |
1441 | + apt_install = fetch.apt_install |
1442 | + apt_update = fetch.apt_update |
1443 | + apt_upgrade = fetch.apt_upgrade |
1444 | + apt_purge = fetch.apt_purge |
1445 | + apt_autoremove = fetch.apt_autoremove |
1446 | apt_mark = fetch.apt_mark |
1447 | apt_hold = fetch.apt_hold |
1448 | apt_unhold = fetch.apt_unhold |
1449 | + import_key = fetch.import_key |
1450 | get_upstream_version = fetch.get_upstream_version |
1451 | elif __platform__ == "centos": |
1452 | yum_search = fetch.yum_search |
1453 | @@ -135,7 +145,7 @@ |
1454 | for source, key in zip(sources, keys): |
1455 | add_source(source, key) |
1456 | if update: |
1457 | - fetch.update(fatal=True) |
1458 | + _fetch_update(fatal=True) |
1459 | |
1460 | |
1461 | def install_remote(source, *args, **kwargs): |
1462 | |
1463 | === modified file 'hooks/charmhelpers/fetch/archiveurl.py' |
1464 | --- hooks/charmhelpers/fetch/archiveurl.py 2017-03-03 19:56:10 +0000 |
1465 | +++ hooks/charmhelpers/fetch/archiveurl.py 2019-05-24 20:38:05 +0000 |
1466 | @@ -89,7 +89,7 @@ |
1467 | :param str source: URL pointing to an archive file. |
1468 | :param str dest: Local path location to download archive file to. |
1469 | """ |
1470 | - # propogate all exceptions |
1471 | + # propagate all exceptions |
1472 | # URLError, OSError, etc |
1473 | proto, netloc, path, params, query, fragment = urlparse(source) |
1474 | if proto in ('http', 'https'): |
1475 | |
1476 | === modified file 'hooks/charmhelpers/fetch/bzrurl.py' |
1477 | --- hooks/charmhelpers/fetch/bzrurl.py 2017-03-03 19:56:10 +0000 |
1478 | +++ hooks/charmhelpers/fetch/bzrurl.py 2019-05-24 20:38:05 +0000 |
1479 | @@ -13,7 +13,7 @@ |
1480 | # limitations under the License. |
1481 | |
1482 | import os |
1483 | -from subprocess import check_call |
1484 | +from subprocess import STDOUT, check_output |
1485 | from charmhelpers.fetch import ( |
1486 | BaseFetchHandler, |
1487 | UnhandledSource, |
1488 | @@ -55,7 +55,7 @@ |
1489 | cmd = ['bzr', 'branch'] |
1490 | cmd += cmd_opts |
1491 | cmd += [source, dest] |
1492 | - check_call(cmd) |
1493 | + check_output(cmd, stderr=STDOUT) |
1494 | |
1495 | def install(self, source, dest=None, revno=None): |
1496 | url_parts = self.parse_url(source) |
1497 | |
1498 | === modified file 'hooks/charmhelpers/fetch/centos.py' |
1499 | --- hooks/charmhelpers/fetch/centos.py 2017-03-03 19:56:10 +0000 |
1500 | +++ hooks/charmhelpers/fetch/centos.py 2019-05-24 20:38:05 +0000 |
1501 | @@ -132,7 +132,7 @@ |
1502 | key_file.write(key) |
1503 | key_file.flush() |
1504 | key_file.seek(0) |
1505 | - subprocess.check_call(['rpm', '--import', key_file]) |
1506 | + subprocess.check_call(['rpm', '--import', key_file.name]) |
1507 | else: |
1508 | subprocess.check_call(['rpm', '--import', key]) |
1509 | |
1510 | |
1511 | === modified file 'hooks/charmhelpers/fetch/giturl.py' |
1512 | --- hooks/charmhelpers/fetch/giturl.py 2017-03-03 19:56:10 +0000 |
1513 | +++ hooks/charmhelpers/fetch/giturl.py 2019-05-24 20:38:05 +0000 |
1514 | @@ -13,7 +13,7 @@ |
1515 | # limitations under the License. |
1516 | |
1517 | import os |
1518 | -from subprocess import check_call, CalledProcessError |
1519 | +from subprocess import check_output, CalledProcessError, STDOUT |
1520 | from charmhelpers.fetch import ( |
1521 | BaseFetchHandler, |
1522 | UnhandledSource, |
1523 | @@ -50,7 +50,7 @@ |
1524 | cmd = ['git', 'clone', source, dest, '--branch', branch] |
1525 | if depth: |
1526 | cmd.extend(['--depth', depth]) |
1527 | - check_call(cmd) |
1528 | + check_output(cmd, stderr=STDOUT) |
1529 | |
1530 | def install(self, source, branch="master", dest=None, depth=None): |
1531 | url_parts = self.parse_url(source) |
1532 | |
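The bzr and git handlers both switch from `check_call` to `check_output(..., stderr=STDOUT)`, so command chatter is captured (and attached to any `CalledProcessError`) instead of leaking into the hook log. A small demonstration of that capture:

```python
import sys
from subprocess import check_output, STDOUT

# Run a command that writes only to stderr; with stderr=STDOUT the
# text is merged into the captured return value rather than printed.
out = check_output(
    [sys.executable, '-c', "import sys; sys.stderr.write('cloning...')"],
    stderr=STDOUT)
```

On failure, the same redirection means `CalledProcessError.output` carries the tool's error text, which is far more useful in a charm log than a bare exit status.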
1533 | === added directory 'hooks/charmhelpers/fetch/python' |
1534 | === added file 'hooks/charmhelpers/fetch/python/__init__.py' |
1535 | --- hooks/charmhelpers/fetch/python/__init__.py 1970-01-01 00:00:00 +0000 |
1536 | +++ hooks/charmhelpers/fetch/python/__init__.py 2019-05-24 20:38:05 +0000 |
1537 | @@ -0,0 +1,13 @@ |
1538 | +# Copyright 2014-2019 Canonical Limited. |
1539 | +# |
1540 | +# Licensed under the Apache License, Version 2.0 (the "License"); |
1541 | +# you may not use this file except in compliance with the License. |
1542 | +# You may obtain a copy of the License at |
1543 | +# |
1544 | +# http://www.apache.org/licenses/LICENSE-2.0 |
1545 | +# |
1546 | +# Unless required by applicable law or agreed to in writing, software |
1547 | +# distributed under the License is distributed on an "AS IS" BASIS, |
1548 | +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
1549 | +# See the License for the specific language governing permissions and |
1550 | +# limitations under the License. |
1551 | |
1552 | === added file 'hooks/charmhelpers/fetch/python/debug.py' |
1553 | --- hooks/charmhelpers/fetch/python/debug.py 1970-01-01 00:00:00 +0000 |
1554 | +++ hooks/charmhelpers/fetch/python/debug.py 2019-05-24 20:38:05 +0000 |
1555 | @@ -0,0 +1,54 @@ |
1556 | +#!/usr/bin/env python |
1557 | +# coding: utf-8 |
1558 | + |
1559 | +# Copyright 2014-2015 Canonical Limited. |
1560 | +# |
1561 | +# Licensed under the Apache License, Version 2.0 (the "License"); |
1562 | +# you may not use this file except in compliance with the License. |
1563 | +# You may obtain a copy of the License at |
1564 | +# |
1565 | +# http://www.apache.org/licenses/LICENSE-2.0 |
1566 | +# |
1567 | +# Unless required by applicable law or agreed to in writing, software |
1568 | +# distributed under the License is distributed on an "AS IS" BASIS, |
1569 | +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
1570 | +# See the License for the specific language governing permissions and |
1571 | +# limitations under the License. |
1572 | + |
1573 | +from __future__ import print_function |
1574 | + |
1575 | +import atexit |
1576 | +import sys |
1577 | + |
1578 | +from charmhelpers.fetch.python.rpdb import Rpdb |
1579 | +from charmhelpers.core.hookenv import ( |
1580 | + open_port, |
1581 | + close_port, |
1582 | + ERROR, |
1583 | + log |
1584 | +) |
1585 | + |
1586 | +__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>" |
1587 | + |
1588 | +DEFAULT_ADDR = "0.0.0.0" |
1589 | +DEFAULT_PORT = 4444 |
1590 | + |
1591 | + |
1592 | +def _error(message): |
1593 | + log(message, level=ERROR) |
1594 | + |
1595 | + |
1596 | +def set_trace(addr=DEFAULT_ADDR, port=DEFAULT_PORT): |
1597 | + """ |
1598 | + Set a trace point using the remote debugger |
1599 | + """ |
1600 | + atexit.register(close_port, port) |
1601 | + try: |
1602 | + log("Starting a remote python debugger session on %s:%s" % (addr, |
1603 | + port)) |
1604 | + open_port(port) |
1605 | + debugger = Rpdb(addr=addr, port=port) |
1606 | + debugger.set_trace(sys._getframe().f_back) |
1607 | + except Exception: |
1608 | + _error("Cannot start a remote debug session on %s:%s" % (addr, |
1609 | + port)) |
1610 | |
1611 | === added file 'hooks/charmhelpers/fetch/python/packages.py' |
1612 | --- hooks/charmhelpers/fetch/python/packages.py 1970-01-01 00:00:00 +0000 |
1613 | +++ hooks/charmhelpers/fetch/python/packages.py 2019-05-24 20:38:05 +0000 |
1614 | @@ -0,0 +1,154 @@ |
1615 | +#!/usr/bin/env python |
1616 | +# coding: utf-8 |
1617 | + |
1618 | +# Copyright 2014-2015 Canonical Limited. |
1619 | +# |
1620 | +# Licensed under the Apache License, Version 2.0 (the "License"); |
1621 | +# you may not use this file except in compliance with the License. |
1622 | +# You may obtain a copy of the License at |
1623 | +# |
1624 | +# http://www.apache.org/licenses/LICENSE-2.0 |
1625 | +# |
1626 | +# Unless required by applicable law or agreed to in writing, software |
1627 | +# distributed under the License is distributed on an "AS IS" BASIS, |
1628 | +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
1629 | +# See the License for the specific language governing permissions and |
1630 | +# limitations under the License. |
1631 | + |
1632 | +import os |
1633 | +import six |
1634 | +import subprocess |
1635 | +import sys |
1636 | + |
1637 | +from charmhelpers.fetch import apt_install, apt_update |
1638 | +from charmhelpers.core.hookenv import charm_dir, log |
1639 | + |
1640 | +__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>" |
1641 | + |
1642 | + |
1643 | +def pip_execute(*args, **kwargs): |
1644 | + """Overriden pip_execute() to stop sys.path being changed. |
1645 | + |
1646 | + The act of importing main from the pip module seems to cause add wheels |
1647 | + from the /usr/share/python-wheels which are installed by various tools. |
1648 | + This function ensures that sys.path remains the same after the call is |
1649 | + executed. |
1650 | + """ |
1651 | + try: |
1652 | + _path = sys.path |
1653 | + try: |
1654 | + from pip import main as _pip_execute |
1655 | + except ImportError: |
1656 | + apt_update() |
1657 | + if six.PY2: |
1658 | + apt_install('python-pip') |
1659 | + else: |
1660 | + apt_install('python3-pip') |
1661 | + from pip import main as _pip_execute |
1662 | + _pip_execute(*args, **kwargs) |
1663 | + finally: |
1664 | + sys.path = _path |
1665 | + |
1666 | + |
1667 | +def parse_options(given, available): |
1668 | + """Given a set of options, check if available""" |
1669 | + for key, value in sorted(given.items()): |
1670 | + if not value: |
1671 | + continue |
1672 | + if key in available: |
1673 | + yield "--{0}={1}".format(key, value) |
1674 | + |
1675 | + |
1676 | +def pip_install_requirements(requirements, constraints=None, **options): |
1677 | + """Install a requirements file. |
1678 | + |
1679 | + :param constraints: Path to pip constraints file. |
1680 | + http://pip.readthedocs.org/en/stable/user_guide/#constraints-files |
1681 | + """ |
1682 | + command = ["install"] |
1683 | + |
1684 | + available_options = ('proxy', 'src', 'log', ) |
1685 | + for option in parse_options(options, available_options): |
1686 | + command.append(option) |
1687 | + |
1688 | + command.append("-r {0}".format(requirements)) |
1689 | + if constraints: |
1690 | + command.append("-c {0}".format(constraints)) |
1691 | + log("Installing from file: {} with constraints {} " |
1692 | + "and options: {}".format(requirements, constraints, command)) |
1693 | + else: |
1694 | + log("Installing from file: {} with options: {}".format(requirements, |
1695 | + command)) |
1696 | + pip_execute(command) |
1697 | + |
1698 | + |
1699 | +def pip_install(package, fatal=False, upgrade=False, venv=None, |
1700 | + constraints=None, **options): |
1701 | + """Install a python package""" |
1702 | + if venv: |
1703 | + venv_python = os.path.join(venv, 'bin/pip') |
1704 | + command = [venv_python, "install"] |
1705 | + else: |
1706 | + command = ["install"] |
1707 | + |
1708 | + available_options = ('proxy', 'src', 'log', 'index-url', ) |
1709 | + for option in parse_options(options, available_options): |
1710 | + command.append(option) |
1711 | + |
1712 | + if upgrade: |
1713 | + command.append('--upgrade') |
1714 | + |
1715 | + if constraints: |
1716 | + command.extend(['-c', constraints]) |
1717 | + |
1718 | + if isinstance(package, list): |
1719 | + command.extend(package) |
1720 | + else: |
1721 | + command.append(package) |
1722 | + |
1723 | + log("Installing {} package with options: {}".format(package, |
1724 | + command)) |
1725 | + if venv: |
1726 | + subprocess.check_call(command) |
1727 | + else: |
1728 | + pip_execute(command) |
1729 | + |
1730 | + |
1731 | +def pip_uninstall(package, **options): |
1732 | + """Uninstall a python package""" |
1733 | + command = ["uninstall", "-q", "-y"] |
1734 | + |
1735 | + available_options = ('proxy', 'log', ) |
1736 | + for option in parse_options(options, available_options): |
1737 | + command.append(option) |
1738 | + |
1739 | + if isinstance(package, list): |
1740 | + command.extend(package) |
1741 | + else: |
1742 | + command.append(package) |
1743 | + |
1744 | + log("Uninstalling {} package with options: {}".format(package, |
1745 | + command)) |
1746 | + pip_execute(command) |
1747 | + |
1748 | + |
1749 | +def pip_list(): |
1750 | + """Returns the list of current python installed packages |
1751 | + """ |
1752 | + return pip_execute(["list"]) |
1753 | + |
1754 | + |
1755 | +def pip_create_virtualenv(path=None): |
1756 | + """Create an isolated Python environment.""" |
1757 | + if six.PY2: |
1758 | + apt_install('python-virtualenv') |
1759 | + else: |
1760 | + apt_install('python3-virtualenv') |
1761 | + |
1762 | + if path: |
1763 | + venv_path = path |
1764 | + else: |
1765 | + venv_path = os.path.join(charm_dir(), 'venv') |
1766 | + |
1767 | + if not os.path.exists(venv_path): |
1768 | + subprocess.check_call(['virtualenv', venv_path]) |
1769 | |
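The `pip_execute()` wrapper above exists because importing `pip.main` can pull wheels from `/usr/share/python-wheels` onto `sys.path`. The guard pattern generalises to any path-polluting call; this sketch snapshots a *copy* of the list, which is slightly stricter than the helper (restoring a copy also undoes in-place `append`/`insert` mutations, not just rebinding):

```python
import sys

def run_with_stable_sys_path(fn, *args, **kwargs):
    _path = sys.path[:]            # copy, so in-place edits are undone too
    try:
        return fn(*args, **kwargs)
    finally:
        sys.path = _path           # restore regardless of success/failure
```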
1770 | === added file 'hooks/charmhelpers/fetch/python/rpdb.py' |
1771 | --- hooks/charmhelpers/fetch/python/rpdb.py 1970-01-01 00:00:00 +0000 |
1772 | +++ hooks/charmhelpers/fetch/python/rpdb.py 2019-05-24 20:38:05 +0000 |
1773 | @@ -0,0 +1,56 @@ |
1774 | +# Copyright 2014-2015 Canonical Limited. |
1775 | +# |
1776 | +# Licensed under the Apache License, Version 2.0 (the "License"); |
1777 | +# you may not use this file except in compliance with the License. |
1778 | +# You may obtain a copy of the License at |
1779 | +# |
1780 | +# http://www.apache.org/licenses/LICENSE-2.0 |
1781 | +# |
1782 | +# Unless required by applicable law or agreed to in writing, software |
1783 | +# distributed under the License is distributed on an "AS IS" BASIS, |
1784 | +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
1785 | +# See the License for the specific language governing permissions and |
1786 | +# limitations under the License. |
1787 | + |
1788 | +"""Remote Python Debugger (pdb wrapper).""" |
1789 | + |
1790 | +import pdb |
1791 | +import socket |
1792 | +import sys |
1793 | + |
1794 | +__author__ = "Bertrand Janin <b@janin.com>" |
1795 | +__version__ = "0.1.3" |
1796 | + |
1797 | + |
1798 | +class Rpdb(pdb.Pdb): |
1799 | + |
1800 | + def __init__(self, addr="127.0.0.1", port=4444): |
1801 | + """Initialize the socket and initialize pdb.""" |
1802 | + |
1803 | + # Backup stdin and stdout before replacing them by the socket handle |
1804 | + self.old_stdout = sys.stdout |
1805 | + self.old_stdin = sys.stdin |
1806 | + |
1807 | + # Open a 'reusable' socket to let the webapp reload on the same port |
1808 | + self.skt = socket.socket(socket.AF_INET, socket.SOCK_STREAM) |
1809 | + self.skt.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, True) |
1810 | + self.skt.bind((addr, port)) |
1811 | + self.skt.listen(1) |
1812 | + (clientsocket, address) = self.skt.accept() |
1813 | + handle = clientsocket.makefile('rw') |
1814 | + pdb.Pdb.__init__(self, completekey='tab', stdin=handle, stdout=handle) |
1815 | + sys.stdout = sys.stdin = handle |
1816 | + |
1817 | + def shutdown(self): |
1818 | + """Revert stdin and stdout, close the socket.""" |
1819 | + sys.stdout = self.old_stdout |
1820 | + sys.stdin = self.old_stdin |
1821 | + self.skt.close() |
1822 | + self.set_continue() |
1823 | + |
1824 | + def do_continue(self, arg): |
1825 | + """Stop all operation on ``continue``.""" |
1826 | + self.shutdown() |
1827 | + return 1 |
1828 | + |
1829 | + do_EOF = do_quit = do_exit = do_c = do_cont = do_continue |
1830 | |
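The core trick in rpdb.py is wrapping an accepted TCP connection with `makefile('rw')` so pdb can treat the socket as ordinary stdin/stdout, on a `SO_REUSEADDR` listener so the port can be rebound across restarts. A self-contained demonstration of that handshake (port 0 lets the OS pick a free port; a background thread plays the debugging client):

```python
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, True)
server.bind(('127.0.0.1', 0))          # port 0: any free port
server.listen(1)
port = server.getsockname()[1]

def client():
    c = socket.create_connection(('127.0.0.1', port))
    c.sendall(b'hello\n')              # what a pdb command would look like
    c.close()

t = threading.Thread(target=client)
t.start()
conn, _ = server.accept()
handle = conn.makefile('rw')           # file-like view, usable as pdb's stdin/stdout
line = handle.readline()
t.join()
handle.close(); conn.close(); server.close()
```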
1831 | === added file 'hooks/charmhelpers/fetch/python/version.py' |
1832 | --- hooks/charmhelpers/fetch/python/version.py 1970-01-01 00:00:00 +0000 |
1833 | +++ hooks/charmhelpers/fetch/python/version.py 2019-05-24 20:38:05 +0000 |
1834 | @@ -0,0 +1,32 @@ |
1835 | +#!/usr/bin/env python |
1836 | +# coding: utf-8 |
1837 | + |
1838 | +# Copyright 2014-2015 Canonical Limited. |
1839 | +# |
1840 | +# Licensed under the Apache License, Version 2.0 (the "License"); |
1841 | +# you may not use this file except in compliance with the License. |
1842 | +# You may obtain a copy of the License at |
1843 | +# |
1844 | +# http://www.apache.org/licenses/LICENSE-2.0 |
1845 | +# |
1846 | +# Unless required by applicable law or agreed to in writing, software |
1847 | +# distributed under the License is distributed on an "AS IS" BASIS, |
1848 | +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
1849 | +# See the License for the specific language governing permissions and |
1850 | +# limitations under the License. |
1851 | + |
1852 | +import sys |
1853 | + |
1854 | +__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>" |
1855 | + |
1856 | + |
1857 | +def current_version(): |
1858 | + """Current system python version""" |
1859 | + return sys.version_info |
1860 | + |
1861 | + |
1862 | +def current_version_string(): |
1863 | + """Current system python version as string major.minor.micro""" |
1864 | + return "{0}.{1}.{2}".format(sys.version_info.major, |
1865 | + sys.version_info.minor, |
1866 | + sys.version_info.micro) |
1867 | |
1868 | === modified file 'hooks/charmhelpers/fetch/snap.py' |
1869 | --- hooks/charmhelpers/fetch/snap.py 2017-03-03 22:25:32 +0000 |
1870 | +++ hooks/charmhelpers/fetch/snap.py 2019-05-24 20:38:05 +0000 |
1871 | @@ -18,21 +18,33 @@ |
1872 | https://lists.ubuntu.com/archives/snapcraft/2016-September/001114.html |
1873 | """ |
1874 | import subprocess |
1875 | -from os import environ |
1876 | +import os |
1877 | from time import sleep |
1878 | from charmhelpers.core.hookenv import log |
1879 | |
1880 | __author__ = 'Joseph Borg <joseph.borg@canonical.com>' |
1881 | |
1882 | -SNAP_NO_LOCK = 1 # The return code for "couldn't acquire lock" in Snap (hopefully this will be improved). |
1883 | +# The return code for "couldn't acquire lock" in Snap |
1884 | +# (hopefully this will be improved). |
1885 | +SNAP_NO_LOCK = 1 |
1886 | SNAP_NO_LOCK_RETRY_DELAY = 10 # Wait X seconds between Snap lock checks. |
1887 | SNAP_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times. |
1888 | +SNAP_CHANNELS = [ |
1889 | + 'edge', |
1890 | + 'beta', |
1891 | + 'candidate', |
1892 | + 'stable', |
1893 | +] |
1894 | |
1895 | |
1896 | class CouldNotAcquireLockException(Exception): |
1897 | pass |
1898 | |
1899 | |
1900 | +class InvalidSnapChannel(Exception): |
1901 | + pass |
1902 | + |
1903 | + |
1904 | def _snap_exec(commands): |
1905 | """ |
1906 | Execute snap commands. |
1907 | @@ -47,13 +59,17 @@ |
1908 | |
1909 | while return_code is None or return_code == SNAP_NO_LOCK: |
1910 | try: |
1911 | - return_code = subprocess.check_call(['snap'] + commands, env=environ) |
1912 | + return_code = subprocess.check_call(['snap'] + commands, |
1913 | + env=os.environ) |
1914 | except subprocess.CalledProcessError as e: |
1915 | retry_count += + 1 |
1916 | if retry_count > SNAP_NO_LOCK_RETRY_COUNT: |
1917 | - raise CouldNotAcquireLockException('Could not aquire lock after %s attempts' % SNAP_NO_LOCK_RETRY_COUNT) |
1918 | + raise CouldNotAcquireLockException( |
1919 | + 'Could not aquire lock after {} attempts' |
1920 | + .format(SNAP_NO_LOCK_RETRY_COUNT)) |
1921 | return_code = e.returncode |
1922 | - log('Snap failed to acquire lock, trying again in %s seconds.' % SNAP_NO_LOCK_RETRY_DELAY, level='WARN') |
1923 | + log('Snap failed to acquire lock, trying again in {} seconds.' |
1924 | + .format(SNAP_NO_LOCK_RETRY_DELAY, level='WARN')) |
1925 | sleep(SNAP_NO_LOCK_RETRY_DELAY) |
1926 | |
1927 | return return_code |
1928 | @@ -120,3 +136,15 @@ |
1929 | |
1930 | log(message, level='INFO') |
1931 | return _snap_exec(['refresh'] + flags + packages) |
1932 | + |
1933 | + |
1934 | +def valid_snap_channel(channel): |
1935 | + """ Validate snap channel exists |
1936 | + |
1937 | + :raises InvalidSnapChannel: When channel does not exist |
1938 | + :return: Boolean |
1939 | + """ |
1940 | + if channel.lower() in SNAP_CHANNELS: |
1941 | + return True |
1942 | + else: |
1943 | + raise InvalidSnapChannel("Invalid Snap Channel: {}".format(channel)) |
1944 | |
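The reworked `_snap_exec()` keeps its lock-retry loop through the reformatting. Condensed, with the subprocess call made injectable for illustration (the real helper additionally logs a warning and sleeps `SNAP_NO_LOCK_RETRY_DELAY` seconds between attempts):

```python
import subprocess

SNAP_NO_LOCK = 1                # snap's "couldn't acquire lock" return code
SNAP_NO_LOCK_RETRY_COUNT = 30

class CouldNotAcquireLockException(Exception):
    pass

def snap_exec_sketch(run):
    """Retry loop from _snap_exec(); `run` stands in for
    subprocess.check_call(['snap'] + commands, env=os.environ)."""
    retry_count = 0
    return_code = None
    while return_code is None or return_code == SNAP_NO_LOCK:
        try:
            return_code = run()
        except subprocess.CalledProcessError as e:
            retry_count += 1
            if retry_count > SNAP_NO_LOCK_RETRY_COUNT:
                raise CouldNotAcquireLockException(
                    'Could not acquire lock after {} attempts'
                    .format(SNAP_NO_LOCK_RETRY_COUNT))
            return_code = e.returncode
            # (the real helper logs and sleeps here before retrying)
    return return_code
```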
1945 | === modified file 'hooks/charmhelpers/fetch/ubuntu.py' |
1946 | --- hooks/charmhelpers/fetch/ubuntu.py 2017-03-03 20:50:28 +0000 |
1947 | +++ hooks/charmhelpers/fetch/ubuntu.py 2019-05-24 20:38:05 +0000 |
1948 | @@ -12,29 +12,48 @@ |
1949 | # See the License for the specific language governing permissions and |
1950 | # limitations under the License. |
1951 | |
1952 | +from collections import OrderedDict |
1953 | import os |
1954 | +import platform |
1955 | +import re |
1956 | import six |
1957 | import time |
1958 | import subprocess |
1959 | |
1960 | -from tempfile import NamedTemporaryFile |
1961 | -from charmhelpers.core.host import ( |
1962 | - lsb_release |
1963 | +from charmhelpers.core.host import get_distrib_codename |
1964 | + |
1965 | +from charmhelpers.core.hookenv import ( |
1966 | + log, |
1967 | + DEBUG, |
1968 | + WARNING, |
1969 | + env_proxy_settings, |
1970 | ) |
1971 | -from charmhelpers.core.hookenv import log |
1972 | -from charmhelpers.fetch import SourceConfigError |
1973 | +from charmhelpers.fetch import SourceConfigError, GPGKeyError |
1974 | |
1975 | +PROPOSED_POCKET = ( |
1976 | + "# Proposed\n" |
1977 | + "deb http://archive.ubuntu.com/ubuntu {}-proposed main universe " |
1978 | + "multiverse restricted\n") |
1979 | +PROPOSED_PORTS_POCKET = ( |
1980 | + "# Proposed\n" |
1981 | + "deb http://ports.ubuntu.com/ubuntu-ports {}-proposed main universe " |
1982 | + "multiverse restricted\n") |
1983 | +# Only supports 64bit and ppc64 at the moment. |
1984 | +ARCH_TO_PROPOSED_POCKET = { |
1985 | + 'x86_64': PROPOSED_POCKET, |
1986 | + 'ppc64le': PROPOSED_PORTS_POCKET, |
1987 | + 'aarch64': PROPOSED_PORTS_POCKET, |
1988 | + 's390x': PROPOSED_PORTS_POCKET, |
1989 | +} |
1990 | +CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu" |
1991 | +CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA' |
1992 | CLOUD_ARCHIVE = """# Ubuntu Cloud Archive |
1993 | deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main |
1994 | """ |
1995 | - |
1996 | -PROPOSED_POCKET = """# Proposed |
1997 | -deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted |
1998 | -""" |
1999 | - |
2000 | CLOUD_ARCHIVE_POCKETS = { |
2001 | # Folsom |
2002 | 'folsom': 'precise-updates/folsom', |
2003 | + 'folsom/updates': 'precise-updates/folsom', |
2004 | 'precise-folsom': 'precise-updates/folsom', |
2005 | 'precise-folsom/updates': 'precise-updates/folsom', |
2006 | 'precise-updates/folsom': 'precise-updates/folsom', |
2007 | @@ -43,6 +62,7 @@ |
2008 | 'precise-proposed/folsom': 'precise-proposed/folsom', |
2009 | # Grizzly |
2010 | 'grizzly': 'precise-updates/grizzly', |
2011 | + 'grizzly/updates': 'precise-updates/grizzly', |
2012 | 'precise-grizzly': 'precise-updates/grizzly', |
2013 | 'precise-grizzly/updates': 'precise-updates/grizzly', |
2014 | 'precise-updates/grizzly': 'precise-updates/grizzly', |
2015 | @@ -51,6 +71,7 @@ |
2016 | 'precise-proposed/grizzly': 'precise-proposed/grizzly', |
2017 | # Havana |
2018 | 'havana': 'precise-updates/havana', |
2019 | + 'havana/updates': 'precise-updates/havana', |
2020 | 'precise-havana': 'precise-updates/havana', |
2021 | 'precise-havana/updates': 'precise-updates/havana', |
2022 | 'precise-updates/havana': 'precise-updates/havana', |
2023 | @@ -59,6 +80,7 @@ |
2024 | 'precise-proposed/havana': 'precise-proposed/havana', |
2025 | # Icehouse |
2026 | 'icehouse': 'precise-updates/icehouse', |
2027 | + 'icehouse/updates': 'precise-updates/icehouse', |
2028 | 'precise-icehouse': 'precise-updates/icehouse', |
2029 | 'precise-icehouse/updates': 'precise-updates/icehouse', |
2030 | 'precise-updates/icehouse': 'precise-updates/icehouse', |
2031 | @@ -67,6 +89,7 @@ |
2032 | 'precise-proposed/icehouse': 'precise-proposed/icehouse', |
2033 | # Juno |
2034 | 'juno': 'trusty-updates/juno', |
2035 | + 'juno/updates': 'trusty-updates/juno', |
2036 | 'trusty-juno': 'trusty-updates/juno', |
2037 | 'trusty-juno/updates': 'trusty-updates/juno', |
2038 | 'trusty-updates/juno': 'trusty-updates/juno', |
2039 | @@ -75,6 +98,7 @@ |
2040 | 'trusty-proposed/juno': 'trusty-proposed/juno', |
2041 | # Kilo |
2042 | 'kilo': 'trusty-updates/kilo', |
2043 | + 'kilo/updates': 'trusty-updates/kilo', |
2044 | 'trusty-kilo': 'trusty-updates/kilo', |
2045 | 'trusty-kilo/updates': 'trusty-updates/kilo', |
2046 | 'trusty-updates/kilo': 'trusty-updates/kilo', |
2047 | @@ -83,6 +107,7 @@ |
2048 | 'trusty-proposed/kilo': 'trusty-proposed/kilo', |
2049 | # Liberty |
2050 | 'liberty': 'trusty-updates/liberty', |
2051 | + 'liberty/updates': 'trusty-updates/liberty', |
2052 | 'trusty-liberty': 'trusty-updates/liberty', |
2053 | 'trusty-liberty/updates': 'trusty-updates/liberty', |
2054 | 'trusty-updates/liberty': 'trusty-updates/liberty', |
2055 | @@ -91,6 +116,7 @@ |
2056 | 'trusty-proposed/liberty': 'trusty-proposed/liberty', |
2057 | # Mitaka |
2058 | 'mitaka': 'trusty-updates/mitaka', |
2059 | + 'mitaka/updates': 'trusty-updates/mitaka', |
2060 | 'trusty-mitaka': 'trusty-updates/mitaka', |
2061 | 'trusty-mitaka/updates': 'trusty-updates/mitaka', |
2062 | 'trusty-updates/mitaka': 'trusty-updates/mitaka', |
2063 | @@ -99,6 +125,7 @@ |
2064 | 'trusty-proposed/mitaka': 'trusty-proposed/mitaka', |
2065 | # Newton |
2066 | 'newton': 'xenial-updates/newton', |
2067 | + 'newton/updates': 'xenial-updates/newton', |
2068 | 'xenial-newton': 'xenial-updates/newton', |
2069 | 'xenial-newton/updates': 'xenial-updates/newton', |
2070 | 'xenial-updates/newton': 'xenial-updates/newton', |
2071 | @@ -107,17 +134,51 @@ |
2072 | 'xenial-proposed/newton': 'xenial-proposed/newton', |
2073 | # Ocata |
2074 | 'ocata': 'xenial-updates/ocata', |
2075 | + 'ocata/updates': 'xenial-updates/ocata', |
2076 | 'xenial-ocata': 'xenial-updates/ocata', |
2077 | 'xenial-ocata/updates': 'xenial-updates/ocata', |
2078 | 'xenial-updates/ocata': 'xenial-updates/ocata', |
2079 | 'ocata/proposed': 'xenial-proposed/ocata', |
2080 | 'xenial-ocata/proposed': 'xenial-proposed/ocata', |
2081 | - 'xenial-ocata/newton': 'xenial-proposed/ocata', |
2082 | + 'xenial-proposed/ocata': 'xenial-proposed/ocata', |
2083 | + # Pike |
2084 | + 'pike': 'xenial-updates/pike', |
2085 | + 'xenial-pike': 'xenial-updates/pike', |
2086 | + 'xenial-pike/updates': 'xenial-updates/pike', |
2087 | + 'xenial-updates/pike': 'xenial-updates/pike', |
2088 | + 'pike/proposed': 'xenial-proposed/pike', |
2089 | + 'xenial-pike/proposed': 'xenial-proposed/pike', |
2090 | + 'xenial-proposed/pike': 'xenial-proposed/pike', |
2091 | + # Queens |
2092 | + 'queens': 'xenial-updates/queens', |
2093 | + 'xenial-queens': 'xenial-updates/queens', |
2094 | + 'xenial-queens/updates': 'xenial-updates/queens', |
2095 | + 'xenial-updates/queens': 'xenial-updates/queens', |
2096 | + 'queens/proposed': 'xenial-proposed/queens', |
2097 | + 'xenial-queens/proposed': 'xenial-proposed/queens', |
2098 | + 'xenial-proposed/queens': 'xenial-proposed/queens', |
2099 | + # Rocky |
2100 | + 'rocky': 'bionic-updates/rocky', |
2101 | + 'bionic-rocky': 'bionic-updates/rocky', |
2102 | + 'bionic-rocky/updates': 'bionic-updates/rocky', |
2103 | + 'bionic-updates/rocky': 'bionic-updates/rocky', |
2104 | + 'rocky/proposed': 'bionic-proposed/rocky', |
2105 | + 'bionic-rocky/proposed': 'bionic-proposed/rocky', |
2106 | + 'bionic-proposed/rocky': 'bionic-proposed/rocky', |
2107 | + # Stein |
2108 | + 'stein': 'bionic-updates/stein', |
2109 | + 'bionic-stein': 'bionic-updates/stein', |
2110 | + 'bionic-stein/updates': 'bionic-updates/stein', |
2111 | + 'bionic-updates/stein': 'bionic-updates/stein', |
2112 | + 'stein/proposed': 'bionic-proposed/stein', |
2113 | + 'bionic-stein/proposed': 'bionic-proposed/stein', |
2114 | + 'bionic-proposed/stein': 'bionic-proposed/stein', |
2115 | } |
2116 | |
2117 | + |
2118 | APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT. |
2119 | CMD_RETRY_DELAY = 10 # Wait 10 seconds between command retries. |
2120 | -CMD_RETRY_COUNT = 30 # Retry a failing fatal command X times. |
2121 | +CMD_RETRY_COUNT = 3 # Retry a failing fatal command X times. |
2122 | |
2123 | |
2124 | def filter_installed_packages(packages): |
2125 | @@ -135,6 +196,18 @@ |
2126 | return _pkgs |
2127 | |
2128 | |
2129 | +def filter_missing_packages(packages): |
2130 | + """Return a list of packages that are installed. |
2131 | + |
2132 | + :param packages: list of packages to evaluate. |
2133 | + :returns list: Packages that are installed. |
2134 | + """ |
2135 | + return list( |
2136 | + set(packages) - |
2137 | + set(filter_installed_packages(packages)) |
2138 | + ) |
2139 | + |
2140 | + |
2141 | def apt_cache(in_memory=True, progress=None): |
2142 | """Build and return an apt cache.""" |
2143 | from apt import apt_pkg |
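Despite its name, the new `filter_missing_packages()` returns the packages that *are* installed: the set complement of `filter_installed_packages()`, useful e.g. for deciding what `apt_purge()` should be handed. A self-contained sketch (the `installed` argument is a stand-in for the apt-cache query the real helpers perform):

```python
def filter_installed_sketch(packages, installed):
    """Stand-in for filter_installed_packages(): packages NOT installed."""
    return [p for p in packages if p not in installed]

def filter_missing_packages(packages, installed):
    """Same set arithmetic as the new helper: the installed subset."""
    return list(set(packages) -
                set(filter_installed_sketch(packages, installed)))
```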
2144 | @@ -145,7 +218,7 @@ |
2145 | return apt_pkg.Cache(progress) |
2146 | |
2147 | |
2148 | -def install(packages, options=None, fatal=False): |
2149 | +def apt_install(packages, options=None, fatal=False): |
2150 | """Install one or more packages.""" |
2151 | if options is None: |
2152 | options = ['--option=Dpkg::Options::=--force-confold'] |
2153 | @@ -162,7 +235,7 @@ |
2154 | _run_apt_command(cmd, fatal) |
2155 | |
2156 | |
2157 | -def upgrade(options=None, fatal=False, dist=False): |
2158 | +def apt_upgrade(options=None, fatal=False, dist=False): |
2159 | """Upgrade all packages.""" |
2160 | if options is None: |
2161 | options = ['--option=Dpkg::Options::=--force-confold'] |
2162 | @@ -177,13 +250,13 @@ |
2163 | _run_apt_command(cmd, fatal) |
2164 | |
2165 | |
2166 | -def update(fatal=False): |
2167 | +def apt_update(fatal=False): |
2168 | """Update local apt cache.""" |
2169 | cmd = ['apt-get', 'update'] |
2170 | _run_apt_command(cmd, fatal) |
2171 | |
2172 | |
2173 | -def purge(packages, fatal=False): |
2174 | +def apt_purge(packages, fatal=False): |
2175 | """Purge one or more packages.""" |
2176 | cmd = ['apt-get', '--assume-yes', 'purge'] |
2177 | if isinstance(packages, six.string_types): |
2178 | @@ -194,6 +267,14 @@ |
2179 | _run_apt_command(cmd, fatal) |
2180 | |
2181 | |
2182 | +def apt_autoremove(purge=True, fatal=False): |
2183 | + """Purge one or more packages.""" |
2184 | + cmd = ['apt-get', '--assume-yes', 'autoremove'] |
2185 | + if purge: |
2186 | + cmd.append('--purge') |
2187 | + _run_apt_command(cmd, fatal) |
2188 | + |
2189 | + |
2190 | def apt_mark(packages, mark, fatal=False): |
2191 | """Flag one or more packages using apt-mark.""" |
2192 | log("Marking {} as {}".format(packages, mark)) |
2193 | @@ -217,7 +298,159 @@ |
2194 | return apt_mark(packages, 'unhold', fatal=fatal) |
2195 | |
2196 | |
2197 | -def add_source(source, key=None): |
2198 | +def import_key(key): |
2199 | + """Import an ASCII Armor key. |
2200 | + |
2201 | + A Radix64 format keyid is also supported for backwards |
2202 | + compatibility. In this case Ubuntu keyserver will be |
2203 | + queried for a key via HTTPS by its keyid. This method |
2204 | + is less preferrable because https proxy servers may |
2205 | + require traffic decryption which is equivalent to a |
2206 | + man-in-the-middle attack (a proxy server impersonates |
2207 | + keyserver TLS certificates and has to be explicitly |
2208 | + trusted by the system). |
2209 | + |
2210 | + :param key: A GPG key in ASCII armor format, |
2211 | + including BEGIN and END markers or a keyid. |
2212 | + :type key: (bytes, str) |
2213 | + :raises: GPGKeyError if the key could not be imported |
2214 | + """ |
2215 | + key = key.strip() |
2216 | + if '-' in key or '\n' in key: |
2217 | + # Send everything not obviously a keyid to GPG to import, as |
2218 | + # we trust its validation better than our own. eg. handling |
2219 | + # comments before the key. |
2220 | + log("PGP key found (looks like ASCII Armor format)", level=DEBUG) |
2221 | + if ('-----BEGIN PGP PUBLIC KEY BLOCK-----' in key and |
2222 | + '-----END PGP PUBLIC KEY BLOCK-----' in key): |
2223 | + log("Writing provided PGP key in the binary format", level=DEBUG) |
2224 | + if six.PY3: |
2225 | + key_bytes = key.encode('utf-8') |
2226 | + else: |
2227 | + key_bytes = key |
2228 | + key_name = _get_keyid_by_gpg_key(key_bytes) |
2229 | + key_gpg = _dearmor_gpg_key(key_bytes) |
2230 | + _write_apt_gpg_keyfile(key_name=key_name, key_material=key_gpg) |
2231 | + else: |
2232 | + raise GPGKeyError("ASCII armor markers missing from GPG key") |
2233 | + else: |
2234 | + log("PGP key found (looks like Radix64 format)", level=WARNING) |
2235 | + log("SECURELY importing PGP key from keyserver; " |
2236 | + "full key not provided.", level=WARNING) |
2237 | + # as of bionic add-apt-repository uses curl with an HTTPS keyserver URL |
2238 | + # to retrieve GPG keys. `apt-key adv` command is deprecated as is |
2239 | + # apt-key in general as noted in its manpage. See lp:1433761 for more |
2240 | + # history. Instead, /etc/apt/trusted.gpg.d is used directly to drop |
2241 | + # gpg |
2242 | + key_asc = _get_key_by_keyid(key) |
2243 | + # write the key in GPG format so that apt-key list shows it |
2244 | + key_gpg = _dearmor_gpg_key(key_asc) |
2245 | + _write_apt_gpg_keyfile(key_name=key, key_material=key_gpg) |
2246 | + |
2247 | + |
2248 | +def _get_keyid_by_gpg_key(key_material): |
2249 | + """Get a GPG key fingerprint by GPG key material. |
2250 | + Gets a GPG key fingerprint (40-digit, 160-bit) by the ASCII armor-encoded |
2251 | + or binary GPG key material. Can be used, for example, to generate file |
2252 | + names for keys passed via charm options. |
2253 | + |
2254 | + :param key_material: ASCII armor-encoded or binary GPG key material |
2255 | + :type key_material: bytes |
2256 | + :raises: GPGKeyError if invalid key material has been provided |
2257 | + :returns: A GPG key fingerprint |
2258 | + :rtype: str |
2259 | + """ |
2260 | + # Use the same gpg command for both Xenial and Bionic |
2261 | + cmd = 'gpg --with-colons --with-fingerprint' |
2262 | + ps = subprocess.Popen(cmd.split(), |
2263 | + stdout=subprocess.PIPE, |
2264 | + stderr=subprocess.PIPE, |
2265 | + stdin=subprocess.PIPE) |
2266 | + out, err = ps.communicate(input=key_material) |
2267 | + if six.PY3: |
2268 | + out = out.decode('utf-8') |
2269 | + err = err.decode('utf-8') |
2270 | + if 'gpg: no valid OpenPGP data found.' in err: |
2271 | + raise GPGKeyError('Invalid GPG key material provided') |
2272 | + # from gnupg2 docs: fpr :: Fingerprint (fingerprint is in field 10) |
2273 | + return re.search(r"^fpr:{9}([0-9A-F]{40}):$", out, re.MULTILINE).group(1) |
2274 | + |
2275 | + |
2276 | +def _get_key_by_keyid(keyid): |
2277 | + """Get a key via HTTPS from the Ubuntu keyserver. |
2278 | + Different key ID formats are supported by SKS keyservers (the longer ones |
2279 | + are more secure, see "dead beef attack" and https://evil32.com/). Since |
2280 | + HTTPS is used, if SSLBump-like HTTPS proxies are in place, they will |
2281 | + impersonate keyserver.ubuntu.com and generate a certificate with |
2282 | + keyserver.ubuntu.com in the CN field or in SubjAltName fields of a |
2283 | + certificate. If such proxy behavior is expected it is necessary to add the |
2284 | + CA certificate chain containing the intermediate CA of the SSLBump proxy to |
2285 | + every machine that this code runs on via ca-certs cloud-init directive (via |
2286 | + cloudinit-userdata model-config) or via other means (such as through a |
2287 | + custom charm option). Also note that DNS resolution for the hostname in a |
2288 | + URL is done at a proxy server - not at the client side. |
2289 | + |
2290 | + 8-digit (32 bit) key ID |
2291 | + https://keyserver.ubuntu.com/pks/lookup?search=0x4652B4E6 |
2292 | + 16-digit (64 bit) key ID |
2293 | + https://keyserver.ubuntu.com/pks/lookup?search=0x6E85A86E4652B4E6 |
2294 | + 40-digit key ID: |
2295 | + https://keyserver.ubuntu.com/pks/lookup?search=0x35F77D63B5CEC106C577ED856E85A86E4652B4E6 |
2296 | + |
2297 | + :param keyid: An 8, 16 or 40 hex digit keyid to find a key for |
2298 | + :type keyid: (bytes, str) |
2299 | + :returns: A key material for the specified GPG key id |
2300 | + :rtype: (str, bytes) |
2301 | + :raises: subprocess.CalledProcessError |
2302 | + """ |
2303 | + # options=mr - machine-readable output (disables html wrappers) |
2304 | + keyserver_url = ('https://keyserver.ubuntu.com' |
2305 | + '/pks/lookup?op=get&options=mr&exact=on&search=0x{}') |
2306 | + curl_cmd = ['curl', keyserver_url.format(keyid)] |
2307 | + # use proxy server settings in order to retrieve the key |
2308 | + return subprocess.check_output(curl_cmd, |
2309 | + env=env_proxy_settings(['https'])) |
2310 | + |
2311 | + |
2312 | +def _dearmor_gpg_key(key_asc): |
2313 | + """Converts a GPG key in the ASCII armor format to the binary format. |
2314 | + |
2315 | + :param key_asc: A GPG key in ASCII armor format. |
2316 | + :type key_asc: (str, bytes) |
2317 | + :returns: A GPG key in binary format |
2318 | + :rtype: (str, bytes) |
2319 | + :raises: GPGKeyError |
2320 | + """ |
2321 | + ps = subprocess.Popen(['gpg', '--dearmor'], |
2322 | + stdout=subprocess.PIPE, |
2323 | + stderr=subprocess.PIPE, |
2324 | + stdin=subprocess.PIPE) |
2325 | + out, err = ps.communicate(input=key_asc) |
2326 | + # no need to decode output as it is binary (invalid utf-8), only error |
2327 | + if six.PY3: |
2328 | + err = err.decode('utf-8') |
2329 | + if 'gpg: no valid OpenPGP data found.' in err: |
2330 | + raise GPGKeyError('Invalid GPG key material. Check your network setup' |
2331 | + ' (MTU, routing, DNS) and/or proxy server settings' |
2332 | + ' as well as destination keyserver status.') |
2333 | + else: |
2334 | + return out |
2335 | + |
2336 | + |
2337 | +def _write_apt_gpg_keyfile(key_name, key_material): |
2338 | + """Writes GPG key material into a file at a provided path. |
2339 | + |
2340 | + :param key_name: A key name to use for a key file (could be a fingerprint) |
2341 | + :type key_name: str |
2342 | + :param key_material: A GPG key material (binary) |
2343 | + :type key_material: (str, bytes) |
2344 | + """ |
2345 | + with open('/etc/apt/trusted.gpg.d/{}.gpg'.format(key_name), |
2346 | + 'wb') as keyf: |
2347 | + keyf.write(key_material) |
2348 | + |
2349 | + |
2350 | +def add_source(source, key=None, fail_invalid=False): |
2351 | """Add a package source to this system. |
2352 | |
2353 | @param source: a URL or sources.list entry, as supported by |
2354 | @@ -233,6 +466,33 @@ |
2355 | such as 'cloud:icehouse' |
2356 | 'distro' may be used as a noop |
2357 | |
2358 | + The full list of source specifications supported by the function is: |
2359 | + |
2360 | + 'distro': A NOP; i.e. it has no effect. |
2361 | + 'proposed': the proposed deb spec [2] is written to |
2362 | + /etc/apt/sources.list.d/proposed.list |
2363 | + 'distro-proposed': adds <version>-proposed to the debs [2] |
2364 | + 'ppa:<ppa-name>': add-apt-repository --yes <ppa_name> |
2365 | + 'deb <deb-spec>': add-apt-repository --yes deb <deb-spec> |
2366 | + 'http://....': add-apt-repository --yes http://... |
2367 | + 'cloud-archive:<spec>': add-apt-repository --yes cloud-archive:<spec> |
2368 | + 'cloud:<release>[-staging]': specify a Cloud Archive pocket <release> with |
2369 | + optional staging version. If staging is used then the staging PPA [2] |
2370 | + will be used. If staging is NOT used then the cloud archive [3] will be |
2371 | + added, and the 'ubuntu-cloud-keyring' package will be added for the |
2372 | + current distro. |
2373 | + |
2374 | + Otherwise the source is not recognised and this is logged to the juju log. |
2375 | + However, no error is raised, unless fail_invalid is True. |
2376 | + |
2377 | + [1] deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main |
2378 | + where {} is replaced with the derived pocket name. |
2379 | + [2] deb http://archive.ubuntu.com/ubuntu {}-proposed \ |
2380 | + main universe multiverse restricted |
2381 | + where {} is replaced with the lsb_release codename (e.g. xenial) |
2382 | + [3] deb http://ubuntu-cloud.archive.canonical.com/ubuntu <pocket> |
2383 | + to /etc/apt/sources.list.d/cloud-archive-list |
2384 | + |
2385 | @param key: A key to be added to the system's APT keyring and used |
2386 | to verify the signatures on packages. Ideally, this should be an |
2387 | ASCII format GPG public key including the block headers. A GPG key |
2388 | @@ -240,51 +500,152 @@ |
2389 | available to retrieve the actual public key from a public keyserver |
2390 | placing your Juju environment at risk. ppa and cloud archive keys |
2391 | + are securely added automatically, so should not be provided. |
2392 | + |
2393 | + @param fail_invalid: (boolean) if True, then the function raises a |
2394 | + SourceConfigError if there is no matching installation source. |
2395 | + |
2396 | + @raises SourceConfigError() if for cloud:<pocket>, the <pocket> is not a |
2397 | + valid pocket in CLOUD_ARCHIVE_POCKETS |
2398 | """ |
2399 | + _mapping = OrderedDict([ |
2400 | + (r"^distro$", lambda: None), # This is a NOP |
2401 | + (r"^(?:proposed|distro-proposed)$", _add_proposed), |
2402 | + (r"^cloud-archive:(.*)$", _add_apt_repository), |
2403 | + (r"^((?:deb |http:|https:|ppa:).*)$", _add_apt_repository), |
2404 | + (r"^cloud:(.*)-(.*)\/staging$", _add_cloud_staging), |
2405 | + (r"^cloud:(.*)-(.*)$", _add_cloud_distro_check), |
2406 | + (r"^cloud:(.*)$", _add_cloud_pocket), |
2407 | + (r"^snap:.*-(.*)-(.*)$", _add_cloud_distro_check), |
2408 | + ]) |
2409 | if source is None: |
2410 | - log('Source is not present. Skipping') |
2411 | - return |
2412 | - |
2413 | - if (source.startswith('ppa:') or |
2414 | - source.startswith('http') or |
2415 | - source.startswith('deb ') or |
2416 | - source.startswith('cloud-archive:')): |
2417 | - cmd = ['add-apt-repository', '--yes', source] |
2418 | - _run_with_retries(cmd) |
2419 | - elif source.startswith('cloud:'): |
2420 | - install(filter_installed_packages(['ubuntu-cloud-keyring']), |
2421 | + source = '' |
2422 | + for r, fn in six.iteritems(_mapping): |
2423 | + m = re.match(r, source) |
2424 | + if m: |
2425 | + if key: |
2426 | + # Import key before adding the source which depends on it, |
2427 | + # as refreshing packages could fail otherwise. |
2428 | + try: |
2429 | + import_key(key) |
2430 | + except GPGKeyError as e: |
2431 | + raise SourceConfigError(str(e)) |
2432 | + # call the associated function with the captured groups |
2433 | + # raises SourceConfigError on error. |
2434 | + fn(*m.groups()) |
2435 | + break |
2436 | + else: |
2437 | + # nothing matched. log an error and maybe sys.exit |
2438 | + err = "Unknown source: {!r}".format(source) |
2439 | + log(err) |
2440 | + if fail_invalid: |
2441 | + raise SourceConfigError(err) |
2442 | + |
2443 | + |
2444 | +def _add_proposed(): |
2445 | + """Add the PROPOSED_POCKET as /etc/apt/source.list.d/proposed.list |
2446 | + |
2447 | + Uses get_distrib_codename to determine the correct stanza for |
2448 | + the deb line. |
2449 | + |
2450 | + For Intel architectures PROPOSED_POCKET is used for the release, but for |
2451 | + other architectures PROPOSED_PORTS_POCKET is used for the release. |
2452 | + """ |
2453 | + release = get_distrib_codename() |
2454 | + arch = platform.machine() |
2455 | + if arch not in six.iterkeys(ARCH_TO_PROPOSED_POCKET): |
2456 | + raise SourceConfigError("Arch {} not supported for (distro-)proposed" |
2457 | + .format(arch)) |
2458 | + with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt: |
2459 | + apt.write(ARCH_TO_PROPOSED_POCKET[arch].format(release)) |
2460 | + |
2461 | + |
2462 | +def _add_apt_repository(spec): |
2463 | + """Add the spec using add_apt_repository |
2464 | + |
2465 | + :param spec: the parameter to pass to add_apt_repository |
2466 | + :type spec: str |
2467 | + """ |
2468 | + if '{series}' in spec: |
2469 | + series = get_distrib_codename() |
2470 | + spec = spec.replace('{series}', series) |
2471 | + # software-properties package for bionic properly reacts to proxy settings |
2472 | + # passed as environment variables (See lp:1433761). This is not the case |
2473 | + for LTS and non-LTS releases earlier than bionic. |
2474 | + _run_with_retries(['add-apt-repository', '--yes', spec], |
2475 | + cmd_env=env_proxy_settings(['https'])) |
2476 | + |
2477 | + |
2478 | +def _add_cloud_pocket(pocket): |
2479 | + """Add a cloud pocket as /etc/apt/sources.d/cloud-archive.list |
2480 | + |
2481 | + Note that this overwrites the existing file if there is one. |
2482 | + |
2483 | + This function also converts the simple pocket into the actual pocket using |
2484 | + the CLOUD_ARCHIVE_POCKETS mapping. |
2485 | + |
2486 | + :param pocket: string representing the pocket to add a deb spec for. |
2487 | + :raises: SourceConfigError if the cloud pocket doesn't exist or the |
2488 | + requested release doesn't match the current distro version. |
2489 | + """ |
2490 | + apt_install(filter_installed_packages(['ubuntu-cloud-keyring']), |
2491 | fatal=True) |
2492 | - pocket = source.split(':')[-1] |
2493 | - if pocket not in CLOUD_ARCHIVE_POCKETS: |
2494 | - raise SourceConfigError( |
2495 | - 'Unsupported cloud: source option %s' % |
2496 | - pocket) |
2497 | - actual_pocket = CLOUD_ARCHIVE_POCKETS[pocket] |
2498 | - with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt: |
2499 | - apt.write(CLOUD_ARCHIVE.format(actual_pocket)) |
2500 | - elif source == 'proposed': |
2501 | - release = lsb_release()['DISTRIB_CODENAME'] |
2502 | - with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt: |
2503 | - apt.write(PROPOSED_POCKET.format(release)) |
2504 | - elif source == 'distro': |
2505 | - pass |
2506 | - else: |
2507 | - log("Unknown source: {!r}".format(source)) |
2508 | - |
2509 | - if key: |
2510 | - if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key: |
2511 | - with NamedTemporaryFile('w+') as key_file: |
2512 | - key_file.write(key) |
2513 | - key_file.flush() |
2514 | - key_file.seek(0) |
2515 | - subprocess.check_call(['apt-key', 'add', '-'], stdin=key_file) |
2516 | - else: |
2517 | - # Note that hkp: is in no way a secure protocol. Using a |
2518 | - # GPG key id is pointless from a security POV unless you |
2519 | - # absolutely trust your network and DNS. |
2520 | - subprocess.check_call(['apt-key', 'adv', '--keyserver', |
2521 | - 'hkp://keyserver.ubuntu.com:80', '--recv', |
2522 | - key]) |
2523 | + if pocket not in CLOUD_ARCHIVE_POCKETS: |
2524 | + raise SourceConfigError( |
2525 | + 'Unsupported cloud: source option %s' % |
2526 | + pocket) |
2527 | + actual_pocket = CLOUD_ARCHIVE_POCKETS[pocket] |
2528 | + with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt: |
2529 | + apt.write(CLOUD_ARCHIVE.format(actual_pocket)) |
2530 | + |
2531 | + |
2532 | +def _add_cloud_staging(cloud_archive_release, openstack_release): |
2533 | + """Add the cloud staging repository which is in |
2534 | + ppa:ubuntu-cloud-archive/<openstack_release>-staging |
2535 | + |
2536 | + This function checks that the cloud_archive_release matches the current |
2537 | + codename of the distro that the charm is being installed on. |
2538 | + |
2539 | + :param cloud_archive_release: string, codename for the release. |
2540 | + :param openstack_release: String, codename for the openstack release. |
2541 | + :raises: SourceConfigError if the cloud_archive_release doesn't match the |
2542 | + current version of the os. |
2543 | + """ |
2544 | + _verify_is_ubuntu_rel(cloud_archive_release, openstack_release) |
2545 | + ppa = 'ppa:ubuntu-cloud-archive/{}-staging'.format(openstack_release) |
2546 | + cmd = 'add-apt-repository -y {}'.format(ppa) |
2547 | + _run_with_retries(cmd.split(' ')) |
2548 | + |
2549 | + |
2550 | +def _add_cloud_distro_check(cloud_archive_release, openstack_release): |
2551 | + """Add the cloud pocket, but also check the cloud_archive_release against |
2552 | + the current distro, and use the openstack_release as the full lookup. |
2553 | + |
2554 | + This just calls _add_cloud_pocket() with the openstack_release as pocket |
2555 | + to get the correct cloud-archive.list for dpkg to work with. |
2556 | + |
2557 | + :param cloud_archive_release: String, codename for the distro release. |
2558 | + :param openstack_release: String, spec for the release to look up in the |
2559 | + CLOUD_ARCHIVE_POCKETS |
2560 | + :raises: SourceConfigError if this is the wrong distro, or the pocket spec |
2561 | + doesn't exist. |
2562 | + """ |
2563 | + _verify_is_ubuntu_rel(cloud_archive_release, openstack_release) |
2564 | + _add_cloud_pocket("{}-{}".format(cloud_archive_release, openstack_release)) |
2565 | + |
2566 | + |
2567 | +def _verify_is_ubuntu_rel(release, os_release): |
2568 | + """Verify that the release is in the same as the current ubuntu release. |
2569 | + |
2570 | + :param release: String, lowercase for the release. |
2571 | + :param os_release: String, the os_release being asked for |
2572 | + :raises: SourceConfigError if the release is not the same as the ubuntu |
2573 | + release. |
2574 | + """ |
2575 | + ubuntu_rel = get_distrib_codename() |
2576 | + if release != ubuntu_rel: |
2577 | + raise SourceConfigError( |
2578 | + 'Invalid Cloud Archive release specified: {}-{} on this Ubuntu ' |
2579 | + 'version ({})'.format(release, os_release, ubuntu_rel)) |
2580 | |
2581 | |
2582 | def _run_with_retries(cmd, max_retries=CMD_RETRY_COUNT, retry_exitcodes=(1,), |
2583 | @@ -300,9 +661,12 @@ |
2584 | :param: cmd_env: dict: Environment variables to add to the command run. |
2585 | """ |
2586 | |
2587 | - env = os.environ.copy() |
2588 | + env = None |
2589 | + kwargs = {} |
2590 | if cmd_env: |
2591 | + env = os.environ.copy() |
2592 | env.update(cmd_env) |
2593 | + kwargs['env'] = env |
2594 | |
2595 | if not retry_message: |
2596 | retry_message = "Failed executing '{}'".format(" ".join(cmd)) |
2597 | @@ -314,7 +678,8 @@ |
2598 | retry_results = (None,) + retry_exitcodes |
2599 | while result in retry_results: |
2600 | try: |
2601 | - result = subprocess.check_call(cmd, env=env) |
2602 | + # result = subprocess.check_call(cmd, env=env) |
2603 | + result = subprocess.check_call(cmd, **kwargs) |
2604 | except subprocess.CalledProcessError as e: |
2605 | retry_count = retry_count + 1 |
2606 | if retry_count > max_retries: |
2607 | @@ -327,6 +692,7 @@ |
2608 | def _run_apt_command(cmd, fatal=False): |
2609 | """Run an apt command with optional retries. |
2610 | |
2611 | + :param: cmd: str: The apt command to run. |
2612 | :param: fatal: bool: Whether the command's output should be checked and |
2613 | retried. |
2614 | """ |
2615 | @@ -353,7 +719,7 @@ |
2616 | cache = apt_cache() |
2617 | try: |
2618 | pkg = cache[package] |
2619 | - except: |
2620 | + except Exception: |
2621 | # the package is unknown to the current apt cache. |
2622 | return None |
2623 |
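Reviewer note: the rewritten add_source() above replaces the old chain of startswith() checks with an ordered mapping of regexes to handler functions, where captured groups become the handler's arguments. A minimal standalone sketch of that dispatch pattern — the handler bodies here are illustrative stand-ins, not the real charmhelpers helpers such as _add_proposed or _add_cloud_pocket:

```python
import re
from collections import OrderedDict


def dispatch_source(source):
    """Sketch of add_source()'s ordered regex-to-handler dispatch."""
    handlers = OrderedDict([
        (r"^distro$", lambda: "nop"),  # 'distro' is a no-op source
        (r"^(?:proposed|distro-proposed)$", lambda: "proposed"),
        # the more specific staging pattern must precede the catch-all
        (r"^cloud:(.*)-(.*)\/staging$",
         lambda rel, os_rel: "staging {} {}".format(rel, os_rel)),
        (r"^cloud:(.*)$", lambda pocket: "pocket {}".format(pocket)),
    ])
    for pattern, handler in handlers.items():
        m = re.match(pattern, source)
        if m:
            # captured groups become positional handler arguments
            return handler(*m.groups())
    raise ValueError("Unknown source: {!r}".format(source))
```

Ordering matters here: the OrderedDict guarantees the `cloud:...-.../staging` pattern is tried before the generic `cloud:<pocket>` one, which would otherwise swallow it.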
Command: make ci-test
Result: Success
Revno: 69
Branch: lp:~simpoir/landscape-client-charm/sync-charmhelpers-and-keys
Jenkins: https://ci.lscape.net/job/latch-test-xenial/3943/
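A final note on the _run_with_retries() change near the end of the diff: env is now only passed to subprocess when extra variables are actually supplied, so by default the child simply inherits the parent environment instead of receiving a copy. A sketch of that pattern — run_with_env is a hypothetical name for illustration, not a charmhelpers function:

```python
import os
import subprocess
import sys


def run_with_env(cmd, cmd_env=None):
    """Run cmd, overriding the environment only when cmd_env is given.

    Building kwargs lazily leaves the default subprocess call untouched,
    so the child inherits os.environ unless extras are supplied.
    """
    kwargs = {}
    if cmd_env:
        env = os.environ.copy()
        env.update(cmd_env)  # merge the extras over the inherited env
        kwargs['env'] = env
    return subprocess.check_call(cmd, **kwargs)


# The child only sees FOO because cmd_env supplies it.
run_with_env([sys.executable, '-c',
              "import os; assert os.environ.get('FOO') == 'bar'"],
             cmd_env={'FOO': 'bar'})
```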