Merge ~aieri/charm-nagios:bug/1864192 into ~nagios-charmers/charm-nagios:master

Proposed by Andrea Ieri
Status: Superseded
Proposed branch: ~aieri/charm-nagios:bug/1864192
Merge into: ~nagios-charmers/charm-nagios:master
Diff against target: 3760 lines (+2529/-266)
32 files modified
Makefile (+1/-2)
bin/charm_helpers_sync.py (+20/-11)
charm-helpers.yaml (+2/-1)
hooks/charmhelpers/__init__.py (+4/-4)
hooks/charmhelpers/contrib/charmsupport/__init__.py (+13/-0)
hooks/charmhelpers/contrib/charmsupport/nrpe.py (+500/-0)
hooks/charmhelpers/contrib/charmsupport/volumes.py (+173/-0)
hooks/charmhelpers/core/hookenv.py (+525/-56)
hooks/charmhelpers/core/host.py (+166/-10)
hooks/charmhelpers/core/host_factory/ubuntu.py (+28/-1)
hooks/charmhelpers/core/kernel.py (+2/-2)
hooks/charmhelpers/core/services/base.py (+18/-7)
hooks/charmhelpers/core/strutils.py (+11/-5)
hooks/charmhelpers/core/sysctl.py (+32/-11)
hooks/charmhelpers/core/templating.py (+18/-9)
hooks/charmhelpers/core/unitdata.py (+8/-1)
hooks/charmhelpers/fetch/__init__.py (+4/-0)
hooks/charmhelpers/fetch/archiveurl.py (+1/-1)
hooks/charmhelpers/fetch/bzrurl.py (+2/-2)
hooks/charmhelpers/fetch/giturl.py (+2/-2)
hooks/charmhelpers/fetch/python/__init__.py (+13/-0)
hooks/charmhelpers/fetch/python/debug.py (+54/-0)
hooks/charmhelpers/fetch/python/packages.py (+154/-0)
hooks/charmhelpers/fetch/python/rpdb.py (+56/-0)
hooks/charmhelpers/fetch/python/version.py (+32/-0)
hooks/charmhelpers/fetch/snap.py (+17/-1)
hooks/charmhelpers/fetch/ubuntu.py (+305/-83)
hooks/charmhelpers/fetch/ubuntu_apt_pkg.py (+267/-0)
hooks/charmhelpers/osplatform.py (+24/-3)
hooks/common.py (+6/-15)
hooks/install (+1/-1)
hooks/monitors-relation-changed (+70/-38)
Reviewer Review Type Date Requested Status
Peter Sabaini Pending
Adam Dyess Pending
Chris Sanders Pending
Review via email: mp+387302@code.launchpad.net

This proposal supersedes a proposal from 2020-06-29.

This proposal has been superseded by a proposal from 2020-07-13.

Revision history for this message
Peter Sabaini (peter-sabaini) wrote : Posted in a previous version of this proposal

LGTM, some nits but nothing blocking imo

review: Approve
Andrea Ieri (aieri) wrote : Posted in a previous version of this proposal

Thanks. The charm-helpers sync script actually comes from https://raw.githubusercontent.com/juju/charm-helpers/master/tools/charm_helpers_sync/charm_helpers_sync.py; it's not part of this repo.

Adam Dyess (addyess) wrote : Posted in a previous version of this proposal

Great. No issues

review: Approve
Chris Sanders (chris.sanders) wrote : Posted in a previous version of this proposal

A few comments inline, and while I hate to do this I think the charmhelpers and this change need to be split. While reviewing it I'm not convinced that this merge isn't confusing local vs charmhelpers functions. For example 'ingress_address' is defined in this change and I *believe* is only actually used from charmhelpers.

If I'm just having difficulty understanding and you *do* think the change is dependent on charmhelpers, you can make this MR dependent on the charmhelpers MR so the changes specific to this bug can be reviewed separately.

review: Needs Fixing

Unmerged commits

e4bb62e... by Andrea Ieri

Gracefully handle incorrect relation data sent over the nagios relation

Closes-Bug: 1864192
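
The commit message is terse, so as a rough sketch of what "gracefully handle incorrect relation data" can mean in practice, a helper along these lines tolerates missing or malformed monitor definitions instead of raising. The function name and shape are illustrative only; the real fix lives in hooks/monitors-relation-changed.

```python
def extract_nrpe_monitors(reldata):
    """Return the nested nrpe monitors dict from deserialized relation
    data, or an empty dict when any level is missing or malformed.

    Illustrative sketch, not code from this diff.
    """
    node = reldata
    # Walk monitors -> remote -> nrpe, bailing out on any bad level.
    for key in ('monitors', 'remote', 'nrpe'):
        node = node.get(key) if isinstance(node, dict) else None
    return node if isinstance(node, dict) else {}
```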

cc67cb4... by Andrea Ieri

Charmhelpers sync

Install enum for python2 as this is needed by hookenv

3a6dc7c... by Andrea Ieri

Revert "Fully switch to the network-get primitives"

This reverts commit 66b8e0577d7f7f5761da4ff7dd50a0d01e04029c.
The fix was completely wrong, because network-get can only retrieve
local data; learning the ingress-address of a remote unit must be done
via relation-get.
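
In other words, a unit can only ask network-get about itself; a peer's address has to be read out of the relation data returned by relation-get. A minimal sketch of that selection logic (the helper name and fallback order are assumptions for illustration, not code from this charm):

```python
def remote_unit_address(relation_data):
    """Return the address to use for a remote unit, preferring the
    ingress-address key and falling back to private-address.

    relation_data is the dict returned by relation-get for that unit.
    """
    for key in ('ingress-address', 'private-address'):
        address = relation_data.get(key)
        if address:
            return address
    return None
```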

Preview Diff

diff --git a/Makefile b/Makefile
index dbbeab3..5ed72eb 100644
--- a/Makefile
+++ b/Makefile
@@ -35,8 +35,7 @@ test:
 
 bin/charm_helpers_sync.py:
 	@mkdir -p bin
-	@bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
-	> bin/charm_helpers_sync.py
+	@curl -o bin/charm_helpers_sync.py https://raw.githubusercontent.com/juju/charm-helpers/master/tools/charm_helpers_sync/charm_helpers_sync.py
 
 sync: bin/charm_helpers_sync.py
 	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers.yaml
diff --git a/bin/charm_helpers_sync.py b/bin/charm_helpers_sync.py
index bd79460..7c0c194 100644
--- a/bin/charm_helpers_sync.py
+++ b/bin/charm_helpers_sync.py
@@ -29,7 +29,7 @@ from fnmatch import fnmatch
 
 import six
 
-CHARM_HELPERS_BRANCH = 'lp:charm-helpers'
+CHARM_HELPERS_REPO = 'https://github.com/juju/charm-helpers'
 
 
 def parse_config(conf_file):
@@ -39,10 +39,16 @@ def parse_config(conf_file):
     return yaml.load(open(conf_file).read())
 
 
-def clone_helpers(work_dir, branch):
+def clone_helpers(work_dir, repo):
     dest = os.path.join(work_dir, 'charm-helpers')
-    logging.info('Checking out %s to %s.' % (branch, dest))
-    cmd = ['bzr', 'checkout', '--lightweight', branch, dest]
+    logging.info('Cloning out %s to %s.' % (repo, dest))
+    branch = None
+    if '@' in repo:
+        repo, branch = repo.split('@', 1)
+    cmd = ['git', 'clone', '--depth=1']
+    if branch is not None:
+        cmd += ['--branch', branch]
+    cmd += [repo, dest]
     subprocess.check_call(cmd)
     return dest
 
@@ -174,6 +180,9 @@ def extract_options(inc, global_options=None):
 
 
 def sync_helpers(include, src, dest, options=None):
+    if os.path.exists(dest):
+        logging.debug('Removing existing directory: %s' % dest)
+        shutil.rmtree(dest)
     if not os.path.isdir(dest):
         os.makedirs(dest)
 
@@ -198,8 +207,8 @@ if __name__ == '__main__':
                       default=None, help='helper config file')
     parser.add_option('-D', '--debug', action='store_true', dest='debug',
                       default=False, help='debug')
-    parser.add_option('-b', '--branch', action='store', dest='branch',
-                      help='charm-helpers bzr branch (overrides config)')
+    parser.add_option('-r', '--repository', action='store', dest='repo',
+                      help='charm-helpers git repository (overrides config)')
     parser.add_option('-d', '--destination', action='store', dest='dest_dir',
                       help='sync destination dir (overrides config)')
     (opts, args) = parser.parse_args()
@@ -218,10 +227,10 @@ if __name__ == '__main__':
     else:
         config = {}
 
-    if 'branch' not in config:
-        config['branch'] = CHARM_HELPERS_BRANCH
-    if opts.branch:
-        config['branch'] = opts.branch
+    if 'repo' not in config:
+        config['repo'] = CHARM_HELPERS_REPO
+    if opts.repo:
+        config['repo'] = opts.repo
     if opts.dest_dir:
         config['destination'] = opts.dest_dir
 
@@ -241,7 +250,7 @@ if __name__ == '__main__':
         sync_options = config['options']
     tmpd = tempfile.mkdtemp()
     try:
-        checkout = clone_helpers(tmpd, config['branch'])
+        checkout = clone_helpers(tmpd, config['repo'])
         sync_helpers(config['include'], checkout, config['destination'],
                      options=sync_options)
     except Exception as e:
diff --git a/charm-helpers.yaml b/charm-helpers.yaml
index e5f7760..640679e 100644
--- a/charm-helpers.yaml
+++ b/charm-helpers.yaml
@@ -1,7 +1,8 @@
+repo: https://github.com/juju/charm-helpers
 destination: hooks/charmhelpers
-branch: lp:charm-helpers
 include:
     - core
    - fetch
    - osplatform
    - contrib.ssl
+    - contrib.charmsupport
diff --git a/hooks/charmhelpers/__init__.py b/hooks/charmhelpers/__init__.py
index e7aa471..61ef907 100644
--- a/hooks/charmhelpers/__init__.py
+++ b/hooks/charmhelpers/__init__.py
@@ -23,22 +23,22 @@ import subprocess
 import sys
 
 try:
-    import six  # flake8: noqa
+    import six  # NOQA:F401
 except ImportError:
     if sys.version_info.major == 2:
         subprocess.check_call(['apt-get', 'install', '-y', 'python-six'])
     else:
         subprocess.check_call(['apt-get', 'install', '-y', 'python3-six'])
-    import six  # flake8: noqa
+    import six  # NOQA:F401
 
 try:
-    import yaml  # flake8: noqa
+    import yaml  # NOQA:F401
 except ImportError:
     if sys.version_info.major == 2:
         subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml'])
     else:
         subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml'])
-    import yaml  # flake8: noqa
+    import yaml  # NOQA:F401
 
 
 # Holds a list of mapping of mangled function names that have been deprecated
diff --git a/hooks/charmhelpers/contrib/charmsupport/__init__.py b/hooks/charmhelpers/contrib/charmsupport/__init__.py
new file mode 100644
index 0000000..d7567b8
--- /dev/null
+++ b/hooks/charmhelpers/contrib/charmsupport/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/hooks/charmhelpers/contrib/charmsupport/nrpe.py b/hooks/charmhelpers/contrib/charmsupport/nrpe.py
new file mode 100644
index 0000000..d775861
--- /dev/null
+++ b/hooks/charmhelpers/contrib/charmsupport/nrpe.py
@@ -0,0 +1,500 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Compatibility with the nrpe-external-master charm"""
+# Copyright 2012 Canonical Ltd.
+#
+# Authors:
+#  Matthew Wedgwood <matthew.wedgwood@canonical.com>
+
+import subprocess
+import pwd
+import grp
+import os
+import glob
+import shutil
+import re
+import shlex
+import yaml
+
+from charmhelpers.core.hookenv import (
+    config,
+    hook_name,
+    local_unit,
+    log,
+    relation_get,
+    relation_ids,
+    relation_set,
+    relations_of_type,
+)
+
+from charmhelpers.core.host import service
+from charmhelpers.core import host
+
+# This module adds compatibility with the nrpe-external-master and plain nrpe
+# subordinate charms. To use it in your charm:
+#
+# 1. Update metadata.yaml
+#
+#   provides:
+#     (...)
+#     nrpe-external-master:
+#       interface: nrpe-external-master
+#       scope: container
+#
+#   and/or
+#
+#   provides:
+#     (...)
+#     local-monitors:
+#       interface: local-monitors
+#       scope: container
+
+#
+# 2. Add the following to config.yaml
+#
+#    nagios_context:
+#      default: "juju"
+#      type: string
+#      description: |
+#        Used by the nrpe subordinate charms.
+#        A string that will be prepended to instance name to set the host name
+#        in nagios. So for instance the hostname would be something like:
+#            juju-myservice-0
+#        If you're running multiple environments with the same services in them
+#        this allows you to differentiate between them.
+#    nagios_servicegroups:
+#      default: ""
+#      type: string
+#      description: |
+#        A comma-separated list of nagios servicegroups.
+#        If left empty, the nagios_context will be used as the servicegroup
+#
+# 3. Add custom checks (Nagios plugins) to files/nrpe-external-master
+#
+# 4. Update your hooks.py with something like this:
+#
+#    from charmsupport.nrpe import NRPE
+#    (...)
+#    def update_nrpe_config():
+#        nrpe_compat = NRPE()
+#        nrpe_compat.add_check(
+#            shortname = "myservice",
+#            description = "Check MyService",
+#            check_cmd = "check_http -w 2 -c 10 http://localhost"
+#            )
+#        nrpe_compat.add_check(
+#            "myservice_other",
+#            "Check for widget failures",
+#            check_cmd = "/srv/myapp/scripts/widget_check"
+#            )
+#        nrpe_compat.write()
+#
+#    def config_changed():
+#        (...)
+#        update_nrpe_config()
+#
+#    def nrpe_external_master_relation_changed():
+#        update_nrpe_config()
+#
+#    def local_monitors_relation_changed():
+#        update_nrpe_config()
+#
+# 4.a If your charm is a subordinate charm set primary=False
+#
+#    from charmsupport.nrpe import NRPE
+#    (...)
+#    def update_nrpe_config():
+#        nrpe_compat = NRPE(primary=False)
+#
+# 5. ln -s hooks.py nrpe-external-master-relation-changed
+#    ln -s hooks.py local-monitors-relation-changed
+
+
+class CheckException(Exception):
+    pass
+
+
+class Check(object):
+    shortname_re = '[A-Za-z0-9-_.@]+$'
+    service_template = ("""
+#---------------------------------------------------
+# This file is Juju managed
+#---------------------------------------------------
+define service {{
+    use                             active-service
+    host_name                       {nagios_hostname}
+    service_description             {nagios_hostname}[{shortname}] """
+                        """{description}
+    check_command                   check_nrpe!{command}
+    servicegroups                   {nagios_servicegroup}
+}}
+""")
+
+    def __init__(self, shortname, description, check_cmd):
+        super(Check, self).__init__()
+        # XXX: could be better to calculate this from the service name
+        if not re.match(self.shortname_re, shortname):
+            raise CheckException("shortname must match {}".format(
+                Check.shortname_re))
+        self.shortname = shortname
+        self.command = "check_{}".format(shortname)
+        # Note: a set of invalid characters is defined by the
+        # Nagios server config
+        # The default is: illegal_object_name_chars=`~!$%^&*"|'<>?,()=
+        self.description = description
+        self.check_cmd = self._locate_cmd(check_cmd)
+
+    def _get_check_filename(self):
+        return os.path.join(NRPE.nrpe_confdir, '{}.cfg'.format(self.command))
+
+    def _get_service_filename(self, hostname):
+        return os.path.join(NRPE.nagios_exportdir,
+                            'service__{}_{}.cfg'.format(hostname, self.command))
+
+    def _locate_cmd(self, check_cmd):
+        search_path = (
+            '/usr/lib/nagios/plugins',
+            '/usr/local/lib/nagios/plugins',
+        )
+        parts = shlex.split(check_cmd)
+        for path in search_path:
+            if os.path.exists(os.path.join(path, parts[0])):
+                command = os.path.join(path, parts[0])
+                if len(parts) > 1:
+                    command += " " + " ".join(parts[1:])
+                return command
+        log('Check command not found: {}'.format(parts[0]))
+        return ''
+
+    def _remove_service_files(self):
+        if not os.path.exists(NRPE.nagios_exportdir):
+            return
+        for f in os.listdir(NRPE.nagios_exportdir):
+            if f.endswith('_{}.cfg'.format(self.command)):
+                os.remove(os.path.join(NRPE.nagios_exportdir, f))
+
+    def remove(self, hostname):
+        nrpe_check_file = self._get_check_filename()
+        if os.path.exists(nrpe_check_file):
+            os.remove(nrpe_check_file)
+        self._remove_service_files()
+
+    def write(self, nagios_context, hostname, nagios_servicegroups):
+        nrpe_check_file = self._get_check_filename()
+        with open(nrpe_check_file, 'w') as nrpe_check_config:
+            nrpe_check_config.write("# check {}\n".format(self.shortname))
+            if nagios_servicegroups:
+                nrpe_check_config.write(
+                    "# The following header was added automatically by juju\n")
+                nrpe_check_config.write(
+                    "# Modifying it will affect nagios monitoring and alerting\n")
+                nrpe_check_config.write(
+                    "# servicegroups: {}\n".format(nagios_servicegroups))
+            nrpe_check_config.write("command[{}]={}\n".format(
+                self.command, self.check_cmd))
+
+        if not os.path.exists(NRPE.nagios_exportdir):
+            log('Not writing service config as {} is not accessible'.format(
+                NRPE.nagios_exportdir))
+        else:
+            self.write_service_config(nagios_context, hostname,
+                                      nagios_servicegroups)
+
+    def write_service_config(self, nagios_context, hostname,
+                             nagios_servicegroups):
+        self._remove_service_files()
+
+        templ_vars = {
+            'nagios_hostname': hostname,
+            'nagios_servicegroup': nagios_servicegroups,
+            'description': self.description,
+            'shortname': self.shortname,
+            'command': self.command,
+        }
+        nrpe_service_text = Check.service_template.format(**templ_vars)
+        nrpe_service_file = self._get_service_filename(hostname)
+        with open(nrpe_service_file, 'w') as nrpe_service_config:
+            nrpe_service_config.write(str(nrpe_service_text))
+
+    def run(self):
+        subprocess.call(self.check_cmd)
+
+
+class NRPE(object):
+    nagios_logdir = '/var/log/nagios'
+    nagios_exportdir = '/var/lib/nagios/export'
+    nrpe_confdir = '/etc/nagios/nrpe.d'
+    homedir = '/var/lib/nagios'  # home dir provided by nagios-nrpe-server
+
+    def __init__(self, hostname=None, primary=True):
+        super(NRPE, self).__init__()
+        self.config = config()
+        self.primary = primary
+        self.nagios_context = self.config['nagios_context']
+        if 'nagios_servicegroups' in self.config and self.config['nagios_servicegroups']:
+            self.nagios_servicegroups = self.config['nagios_servicegroups']
+        else:
+            self.nagios_servicegroups = self.nagios_context
+        self.unit_name = local_unit().replace('/', '-')
+        if hostname:
+            self.hostname = hostname
+        else:
+            nagios_hostname = get_nagios_hostname()
+            if nagios_hostname:
+                self.hostname = nagios_hostname
+            else:
+                self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
+        self.checks = []
+        # Iff in an nrpe-external-master relation hook, set primary status
+        relation = relation_ids('nrpe-external-master')
+        if relation:
+            log("Setting charm primary status {}".format(primary))
+            for rid in relation:
+                relation_set(relation_id=rid, relation_settings={'primary': self.primary})
+        self.remove_check_queue = set()
+
+    def add_check(self, *args, **kwargs):
+        shortname = None
+        if kwargs.get('shortname') is None:
+            if len(args) > 0:
+                shortname = args[0]
+        else:
+            shortname = kwargs['shortname']
+
+        self.checks.append(Check(*args, **kwargs))
+        try:
+            self.remove_check_queue.remove(shortname)
+        except KeyError:
+            pass
+
+    def remove_check(self, *args, **kwargs):
+        if kwargs.get('shortname') is None:
+            raise ValueError('shortname of check must be specified')
+
+        # Use sensible defaults if they're not specified - these are not
+        # actually used during removal, but they're required for constructing
+        # the Check object; check_disk is chosen because it's part of the
+        # nagios-plugins-basic package.
+        if kwargs.get('check_cmd') is None:
+            kwargs['check_cmd'] = 'check_disk'
+        if kwargs.get('description') is None:
+            kwargs['description'] = ''
+
+        check = Check(*args, **kwargs)
+        check.remove(self.hostname)
+        self.remove_check_queue.add(kwargs['shortname'])
+
+    def write(self):
+        try:
+            nagios_uid = pwd.getpwnam('nagios').pw_uid
+            nagios_gid = grp.getgrnam('nagios').gr_gid
+        except Exception:
+            log("Nagios user not set up, nrpe checks not updated")
+            return
+
+        if not os.path.exists(NRPE.nagios_logdir):
+            os.mkdir(NRPE.nagios_logdir)
+            os.chown(NRPE.nagios_logdir, nagios_uid, nagios_gid)
+
+        nrpe_monitors = {}
+        monitors = {"monitors": {"remote": {"nrpe": nrpe_monitors}}}
+        for nrpecheck in self.checks:
+            nrpecheck.write(self.nagios_context, self.hostname,
+                            self.nagios_servicegroups)
+            nrpe_monitors[nrpecheck.shortname] = {
+                "command": nrpecheck.command,
+            }
+
+        # update-status hooks are configured to firing every 5 minutes by
+        # default. When nagios-nrpe-server is restarted, the nagios server
+        # reports checks failing causing unnecessary alerts. Let's not restart
+        # on update-status hooks.
+        if not hook_name() == 'update-status':
+            service('restart', 'nagios-nrpe-server')
+
+        monitor_ids = relation_ids("local-monitors") + \
+            relation_ids("nrpe-external-master")
+        for rid in monitor_ids:
+            reldata = relation_get(unit=local_unit(), rid=rid)
+            if 'monitors' in reldata:
+                # update the existing set of monitors with the new data
+                old_monitors = yaml.safe_load(reldata['monitors'])
+                old_nrpe_monitors = old_monitors['monitors']['remote']['nrpe']
+                # remove keys that are in the remove_check_queue
+                old_nrpe_monitors = {k: v for k, v in old_nrpe_monitors.items()
+                                     if k not in self.remove_check_queue}
+                # update/add nrpe_monitors
+                old_nrpe_monitors.update(nrpe_monitors)
+                old_monitors['monitors']['remote']['nrpe'] = old_nrpe_monitors
+                # write back to the relation
+                relation_set(relation_id=rid, monitors=yaml.dump(old_monitors))
+            else:
+                # write a brand new set of monitors, as no existing ones.
+                relation_set(relation_id=rid, monitors=yaml.dump(monitors))
+
+        self.remove_check_queue.clear()
+
+
+def get_nagios_hostcontext(relation_name='nrpe-external-master'):
+    """
+    Query relation with nrpe subordinate, return the nagios_host_context
+
+    :param str relation_name: Name of relation nrpe sub joined to
+    """
+    for rel in relations_of_type(relation_name):
+        if 'nagios_host_context' in rel:
+            return rel['nagios_host_context']
+
+
+def get_nagios_hostname(relation_name='nrpe-external-master'):
+    """
+    Query relation with nrpe subordinate, return the nagios_hostname
+
+    :param str relation_name: Name of relation nrpe sub joined to
+    """
+    for rel in relations_of_type(relation_name):
+        if 'nagios_hostname' in rel:
+            return rel['nagios_hostname']
+
+
+def get_nagios_unit_name(relation_name='nrpe-external-master'):
+    """
+    Return the nagios unit name prepended with host_context if needed
+
+    :param str relation_name: Name of relation nrpe sub joined to
+    """
+    host_context = get_nagios_hostcontext(relation_name)
+    if host_context:
+        unit = "%s:%s" % (host_context, local_unit())
+    else:
+        unit = local_unit()
+    return unit
+
+
+def add_init_service_checks(nrpe, services, unit_name, immediate_check=True):
+    """
+    Add checks for each service in list
+
+    :param NRPE nrpe: NRPE object to add check to
+    :param list services: List of services to check
+    :param str unit_name: Unit name to use in check description
+    :param bool immediate_check: For sysv init, run the service check immediately
+    """
+    for svc in services:
+        # Don't add a check for these services from neutron-gateway
+        if svc in ['ext-port', 'os-charm-phy-nic-mtu']:
+            next
+
+        upstart_init = '/etc/init/%s.conf' % svc
+        sysv_init = '/etc/init.d/%s' % svc
+
+        if host.init_is_systemd():
+            nrpe.add_check(
+                shortname=svc,
+                description='process check {%s}' % unit_name,
+                check_cmd='check_systemd.py %s' % svc
+            )
+        elif os.path.exists(upstart_init):
+            nrpe.add_check(
+                shortname=svc,
+                description='process check {%s}' % unit_name,
+                check_cmd='check_upstart_job %s' % svc
+            )
+        elif os.path.exists(sysv_init):
+            cronpath = '/etc/cron.d/nagios-service-check-%s' % svc
+            checkpath = '%s/service-check-%s.txt' % (nrpe.homedir, svc)
+            croncmd = (
+                '/usr/local/lib/nagios/plugins/check_exit_status.pl '
+                '-e -s /etc/init.d/%s status' % svc
+            )
+            cron_file = '*/5 * * * * root %s > %s\n' % (croncmd, checkpath)
+            f = open(cronpath, 'w')
+            f.write(cron_file)
+            f.close()
+            nrpe.add_check(
+                shortname=svc,
+                description='service check {%s}' % unit_name,
+                check_cmd='check_status_file.py -f %s' % checkpath,
+            )
+            # if /var/lib/nagios doesn't exist open(checkpath, 'w') will fail
+            # (LP: #1670223).
+            if immediate_check and os.path.isdir(nrpe.homedir):
+                f = open(checkpath, 'w')
+                subprocess.call(
+                    croncmd.split(),
+                    stdout=f,
+                    stderr=subprocess.STDOUT
+                )
+                f.close()
+                os.chmod(checkpath, 0o644)
+
+
+def copy_nrpe_checks(nrpe_files_dir=None):
+    """
+    Copy the nrpe checks into place
+
+    """
+    NAGIOS_PLUGINS = '/usr/local/lib/nagios/plugins'
+    if nrpe_files_dir is None:
+        # determine if "charmhelpers" is in CHARMDIR or CHARMDIR/hooks
+        for segment in ['.', 'hooks']:
+            nrpe_files_dir = os.path.abspath(os.path.join(
+                os.getenv('CHARM_DIR'),
+                segment,
+                'charmhelpers',
+                'contrib',
+                'openstack',
+                'files'))
+            if os.path.isdir(nrpe_files_dir):
+                break
+        else:
+            raise RuntimeError("Couldn't find charmhelpers directory")
+    if not os.path.exists(NAGIOS_PLUGINS):
+        os.makedirs(NAGIOS_PLUGINS)
+    for fname in glob.glob(os.path.join(nrpe_files_dir, "check_*")):
+        if os.path.isfile(fname):
+            shutil.copy2(fname,
+                         os.path.join(NAGIOS_PLUGINS, os.path.basename(fname)))
+
+
+def add_haproxy_checks(nrpe, unit_name):
+    """
+    Add checks for each service in list
+
+    :param NRPE nrpe: NRPE object to add check to
+    :param str unit_name: Unit name to use in check description
+    """
+    nrpe.add_check(
+        shortname='haproxy_servers',
+        description='Check HAProxy {%s}' % unit_name,
+        check_cmd='check_haproxy.sh')
+    nrpe.add_check(
+        shortname='haproxy_queue',
+        description='Check HAProxy queue depth {%s}' % unit_name,
+        check_cmd='check_haproxy_queue_depth.sh')
+
+
+def remove_deprecated_check(nrpe, deprecated_services):
+    """
+    Remove checks fro deprecated services in list
+
+    :param nrpe: NRPE object to remove check from
+    :type nrpe: NRPE
+    :param deprecated_services: List of deprecated services that are removed
+    :type deprecated_services: list
+    """
+    for dep_svc in deprecated_services:
+        log('Deprecated service: {}'.format(dep_svc))
+        nrpe.remove_check(shortname=dep_svc)
diff --git a/hooks/charmhelpers/contrib/charmsupport/volumes.py b/hooks/charmhelpers/contrib/charmsupport/volumes.py
new file mode 100644
index 0000000..7ea43f0
--- /dev/null
+++ b/hooks/charmhelpers/contrib/charmsupport/volumes.py
@@ -0,0 +1,173 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# Licensed under the Apache License, Version 2.0 (the "License");
4# you may not use this file except in compliance with the License.
5# You may obtain a copy of the License at
6#
7# http://www.apache.org/licenses/LICENSE-2.0
8#
9# Unless required by applicable law or agreed to in writing, software
10# distributed under the License is distributed on an "AS IS" BASIS,
11# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12# See the License for the specific language governing permissions and
13# limitations under the License.
14
15'''
16Functions for managing volumes in juju units. One volume is supported per unit.
17Subordinates may have their own storage, provided it is on its own partition.
18
19Configuration stanzas::
20
21 volume-ephemeral:
22 type: boolean
23 default: true
24 description: >
25 If false, a volume is mounted as sepecified in "volume-map"
26 If true, ephemeral storage will be used, meaning that log data
27 will only exist as long as the machine. YOU HAVE BEEN WARNED.
28 volume-map:
29 type: string
30 default: {}
31 description: >
32 YAML map of units to device names, e.g:
33 "{ rsyslog/0: /dev/vdb, rsyslog/1: /dev/vdb }"
34 Service units will raise a configure-error if volume-ephemeral
35 is 'true' and no volume-map value is set. Use 'juju set' to set a
36 value and 'juju resolved' to complete configuration.
37
38Usage::
39
40 from charmsupport.volumes import configure_volume, VolumeConfigurationError
41 from charmsupport.hookenv import log, ERROR
42 def post_mount_hook():
43 stop_service('myservice')
44 def post_mount_hook():
45 start_service('myservice')
46
47 if __name__ == '__main__':
48 try:
49 configure_volume(before_change=pre_mount_hook,
50 after_change=post_mount_hook)
51 except VolumeConfigurationError:
52 log('Storage could not be configured', ERROR)
53
54'''
55
56# XXX: Known limitations
57# - fstab is neither consulted nor updated
58
59import os
60from charmhelpers.core import hookenv
61from charmhelpers.core import host
62import yaml
63
64
65MOUNT_BASE = '/srv/juju/volumes'
66
67
68class VolumeConfigurationError(Exception):
69 '''Volume configuration data is missing or invalid'''
70 pass
71
72
73def get_config():
74 '''Gather and sanity-check volume configuration data'''
75 volume_config = {}
76 config = hookenv.config()
77
78 errors = False
79
80 if config.get('volume-ephemeral') in (True, 'True', 'true', 'Yes', 'yes'):
81 volume_config['ephemeral'] = True
82 else:
83 volume_config['ephemeral'] = False
84
85 try:
86 volume_map = yaml.safe_load(config.get('volume-map', '{}'))
87 except yaml.YAMLError as e:
88 hookenv.log("Error parsing YAML volume-map: {}".format(e),
89 hookenv.ERROR)
90 errors = True
91 if volume_map is None:
92 # probably an empty string
93 volume_map = {}
94 elif not isinstance(volume_map, dict):
95 hookenv.log("Volume-map should be a dictionary, not {}".format(
96 type(volume_map)))
97 errors = True
98
99 volume_config['device'] = volume_map.get(os.environ['JUJU_UNIT_NAME'])
100 if volume_config['device'] and volume_config['ephemeral']:
101 # asked for ephemeral storage but also defined a volume ID
102 hookenv.log('A volume is defined for this unit, but ephemeral '
103 'storage was requested', hookenv.ERROR)
104 errors = True
105 elif not volume_config['device'] and not volume_config['ephemeral']:
106 # asked for permanent storage but did not define volume ID
107 hookenv.log('Ephemeral storage was requested, but there is no volume '
108 'defined for this unit.', hookenv.ERROR)
109 errors = True
110
111 unit_mount_name = hookenv.local_unit().replace('/', '-')
112 volume_config['mountpoint'] = os.path.join(MOUNT_BASE, unit_mount_name)
113
114 if errors:
115 return None
116 return volume_config
117
118
119def mount_volume(config):
120 if os.path.exists(config['mountpoint']):
121 if not os.path.isdir(config['mountpoint']):
122 hookenv.log('Not a directory: {}'.format(config['mountpoint']))
123 raise VolumeConfigurationError()
124 else:
125 host.mkdir(config['mountpoint'])
126 if os.path.ismount(config['mountpoint']):
127 unmount_volume(config)
128 if not host.mount(config['device'], config['mountpoint'], persist=True):
129 raise VolumeConfigurationError()
130
131
132def unmount_volume(config):
133 if os.path.ismount(config['mountpoint']):
134 if not host.umount(config['mountpoint'], persist=True):
135 raise VolumeConfigurationError()
136
137
138def managed_mounts():
139 '''List of all mounted managed volumes'''
140 return filter(lambda mount: mount[0].startswith(MOUNT_BASE), host.mounts())
141
142
143def configure_volume(before_change=lambda: None, after_change=lambda: None):
144 '''Set up storage (or don't) according to the charm's volume configuration.
145 Returns the mount point or "ephemeral". before_change and after_change
146 are optional functions to be called if the volume configuration changes.
147 '''
148
149 config = get_config()
150 if not config:
151 hookenv.log('Failed to read volume configuration', hookenv.CRITICAL)
152 raise VolumeConfigurationError()
153
154 if config['ephemeral']:
155 if os.path.ismount(config['mountpoint']):
156 before_change()
157 unmount_volume(config)
158 after_change()
159 return 'ephemeral'
160 else:
161 # persistent storage
162 if os.path.ismount(config['mountpoint']):
163 mounts = dict(managed_mounts())
164 if mounts.get(config['mountpoint']) != config['device']:
165 before_change()
166 unmount_volume(config)
167 mount_volume(config)
168 after_change()
169 else:
170 before_change()
171 mount_volume(config)
172 after_change()
173 return config['mountpoint']
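The per-unit mountpoint naming used by `get_config()` above is simple enough to sketch standalone (the `MOUNT_BASE` value here is an assumption; the real constant is defined earlier in volumes.py, outside this excerpt):

```python
import os.path

MOUNT_BASE = '/srv/juju/volumes'  # assumed value for illustration

def unit_mountpoint(unit_name):
    # Mirrors get_config() above: "nagios/0" -> "<MOUNT_BASE>/nagios-0"
    return os.path.join(MOUNT_BASE, unit_name.replace('/', '-'))

print(unit_mountpoint('nagios/0'))  # /srv/juju/volumes/nagios-0
```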
diff --git a/hooks/charmhelpers/core/hookenv.py b/hooks/charmhelpers/core/hookenv.py
index 67ad691..d7c37c1 100644
--- a/hooks/charmhelpers/core/hookenv.py
+++ b/hooks/charmhelpers/core/hookenv.py
@@ -21,23 +21,29 @@
 from __future__ import print_function
 import copy
 from distutils.version import LooseVersion
+from enum import Enum
 from functools import wraps
+from collections import namedtuple
 import glob
 import os
 import json
 import yaml
+import re
 import subprocess
 import sys
 import errno
 import tempfile
 from subprocess import CalledProcessError

+from charmhelpers import deprecate
+
 import six
 if not six.PY3:
     from UserDict import UserDict
 else:
     from collections import UserDict

+
 CRITICAL = "CRITICAL"
 ERROR = "ERROR"
 WARNING = "WARNING"
@@ -45,6 +51,20 @@ INFO = "INFO"
 DEBUG = "DEBUG"
 TRACE = "TRACE"
 MARKER = object()
+SH_MAX_ARG = 131071
+
+
+RANGE_WARNING = ('Passing NO_PROXY string that includes a cidr. '
+                 'This may not be compatible with software you are '
+                 'running in your shell.')
+
+
+class WORKLOAD_STATES(Enum):
+    ACTIVE = 'active'
+    BLOCKED = 'blocked'
+    MAINTENANCE = 'maintenance'
+    WAITING = 'waiting'
+

 cache = {}

@@ -65,7 +85,7 @@ def cached(func):
     @wraps(func)
     def wrapper(*args, **kwargs):
         global cache
-        key = str((func, args, kwargs))
+        key = json.dumps((func, args, kwargs), sort_keys=True, default=str)
         try:
             return cache[key]
         except KeyError:
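The cache-key change in this hunk swaps `str(...)` for `json.dumps(..., sort_keys=True, default=str)` so that the kwargs dict serializes deterministically. A minimal standalone sketch of the same memoization pattern (names are illustrative):

```python
import json
from functools import wraps

cache = {}

def cached(func):
    # Same pattern as the patched decorator: json.dumps with sort_keys=True
    # gives a deterministic key regardless of kwargs ordering; default=str
    # handles the non-JSON-serializable function object.
    @wraps(func)
    def wrapper(*args, **kwargs):
        key = json.dumps((func, args, kwargs), sort_keys=True, default=str)
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]
    return wrapper

@cached
def double(x):
    return 2 * x

assert double(21) == 42
assert double(21) == 42 and len(cache) == 1  # second call served from cache
```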
@@ -95,7 +115,7 @@ def log(message, level=None):
         command += ['-l', level]
     if not isinstance(message, six.string_types):
         message = repr(message)
-    command += [message]
+    command += [message[:SH_MAX_ARG]]
     # Missing juju-log should not cause failures in unit tests
     # Send log output to stderr
     try:
@@ -110,6 +130,24 @@ def log(message, level=None):
         raise


+def function_log(message):
+    """Write a function progress message"""
+    command = ['function-log']
+    if not isinstance(message, six.string_types):
+        message = repr(message)
+    command += [message[:SH_MAX_ARG]]
+    # Missing function-log should not cause failures in unit tests
+    # Send function_log output to stderr
+    try:
+        subprocess.call(command)
+    except OSError as e:
+        if e.errno == errno.ENOENT:
+            message = "function-log: {}".format(message)
+            print(message, file=sys.stderr)
+        else:
+            raise
+
+
 class Serializable(UserDict):
     """Wrapper, an object that can be serialized to yaml or json"""

@@ -198,11 +236,35 @@ def remote_unit():
     return os.environ.get('JUJU_REMOTE_UNIT', None)


-def service_name():
-    """The name service group this unit belongs to"""
+def application_name():
+    """
+    The name of the deployed application this unit belongs to.
+    """
     return local_unit().split('/')[0]


+def service_name():
+    """
+    .. deprecated:: 0.19.1
+       Alias for :func:`application_name`.
+    """
+    return application_name()
+
+
+def model_name():
+    """
+    Name of the model that this unit is deployed in.
+    """
+    return os.environ['JUJU_MODEL_NAME']
+
+
+def model_uuid():
+    """
+    UUID of the model that this unit is deployed in.
+    """
+    return os.environ['JUJU_MODEL_UUID']
+
+
 def principal_unit():
     """Returns the principal unit of this unit, otherwise None"""
     # Juju 2.2 and above provides JUJU_PRINCIPAL_UNIT
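The new `application_name()` helper is just a split on the unit name; a standalone illustration (a local sketch, not the charmhelpers import):

```python
def application_name(unit):
    # Unit "nagios/0" belongs to the application "nagios".
    return unit.split('/')[0]

print(application_name('nagios/0'))  # nagios
```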
@@ -287,7 +349,7 @@ class Config(dict):
         self.implicit_save = True
         self._prev_dict = None
         self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
-        if os.path.exists(self.path):
+        if os.path.exists(self.path) and os.stat(self.path).st_size:
             self.load_previous()
         atexit(self._implicit_save)

@@ -307,7 +369,11 @@ class Config(dict):
         """
         self.path = path or self.path
         with open(self.path) as f:
-            self._prev_dict = json.load(f)
+            try:
+                self._prev_dict = json.load(f)
+            except ValueError as e:
+                log('Unable to parse previous config data - {}'.format(str(e)),
+                    level=ERROR)
         for k, v in copy.deepcopy(self._prev_dict).items():
             if k not in self:
                 self[k] = v
@@ -343,6 +409,7 @@ class Config(dict):

         """
         with open(self.path, 'w') as f:
+            os.fchmod(f.fileno(), 0o600)
             json.dump(self, f)

     def _implicit_save(self):
@@ -350,22 +417,40 @@ class Config(dict):
         self.save()


-@cached
+_cache_config = None
+
+
 def config(scope=None):
-    """Juju charm configuration"""
-    config_cmd_line = ['config-get']
-    if scope is not None:
-        config_cmd_line.append(scope)
-    else:
-        config_cmd_line.append('--all')
-    config_cmd_line.append('--format=json')
+    """
+    Get the juju charm configuration (scope==None) or individual key,
+    (scope=str). The returned value is a Python data structure loaded as
+    JSON from the Juju config command.
+
+    :param scope: If set, return the value for the specified key.
+    :type scope: Optional[str]
+    :returns: Either the whole config as a Config, or a key from it.
+    :rtype: Any
+    """
+    global _cache_config
+    config_cmd_line = ['config-get', '--all', '--format=json']
+    try:
+        # JSON Decode Exception for Python3.5+
+        exc_json = json.decoder.JSONDecodeError
+    except AttributeError:
+        # JSON Decode Exception for Python2.7 through Python3.4
+        exc_json = ValueError
     try:
-        config_data = json.loads(
-            subprocess.check_output(config_cmd_line).decode('UTF-8'))
+        if _cache_config is None:
+            config_data = json.loads(
+                subprocess.check_output(config_cmd_line).decode('UTF-8'))
+            _cache_config = Config(config_data)
         if scope is not None:
-            return config_data
-        return Config(config_data)
-    except ValueError:
+            return _cache_config.get(scope)
+        return _cache_config
+    except (exc_json, UnicodeDecodeError) as e:
+        log('Unable to parse output from config-get: config_cmd_line="{}" '
+            'message="{}"'
+            .format(config_cmd_line, str(e)), level=ERROR)
         return None


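The rewritten `config()` trades per-key `config-get` calls for a single cached `--all` call. The caching contract can be sketched with the hook tool stubbed out (names and data here are illustrative, not charmhelpers API):

```python
import json

_cache_config = None

def _config_get():
    # Stand-in for `config-get --all --format=json`; the payload is made up.
    return json.dumps({'extra_contacts': '[]', 'ssl': 'off'})

def config(scope=None):
    # The first call populates the module-level cache; later calls reuse it,
    # whether the caller asks for the whole config or a single key.
    global _cache_config
    if _cache_config is None:
        _cache_config = json.loads(_config_get())
    return _cache_config if scope is None else _cache_config.get(scope)

assert config('ssl') == 'off'
assert config()['extra_contacts'] == '[]'
```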
@@ -459,6 +544,67 @@ def related_units(relid=None):
         subprocess.check_output(units_cmd_line).decode('UTF-8')) or []


547def expected_peer_units():
548 """Get a generator for units we expect to join peer relation based on
549 goal-state.
550
551 The local unit is excluded from the result to make it easy to gauge
552 completion of all peers joining the relation with existing hook tools.
553
554 Example usage:
555 log('peer {} of {} joined peer relation'
556 .format(len(related_units()),
557 len(list(expected_peer_units()))))
558
559 This function will raise NotImplementedError if used with juju versions
560 without goal-state support.
561
562 :returns: iterator
563 :rtype: types.GeneratorType
564 :raises: NotImplementedError
565 """
566 if not has_juju_version("2.4.0"):
567 # goal-state first appeared in 2.4.0.
568 raise NotImplementedError("goal-state")
569 _goal_state = goal_state()
570 return (key for key in _goal_state['units']
571 if '/' in key and key != local_unit())
572
573
574def expected_related_units(reltype=None):
575 """Get a generator for units we expect to join relation based on
576 goal-state.
577
578 Note that you can not use this function for the peer relation, take a look
579 at expected_peer_units() for that.
580
581 This function will raise KeyError if you request information for a
582 relation type for which juju goal-state does not have information. It will
583 raise NotImplementedError if used with juju versions without goal-state
584 support.
585
586 Example usage:
587 log('participant {} of {} joined relation {}'
588 .format(len(related_units()),
589 len(list(expected_related_units())),
590 relation_type()))
591
592 :param reltype: Relation type to list data for, default is to list data for
593 the realtion type we are currently executing a hook for.
594 :type reltype: str
595 :returns: iterator
596 :rtype: types.GeneratorType
597 :raises: KeyError, NotImplementedError
598 """
599 if not has_juju_version("2.4.4"):
600 # goal-state existed in 2.4.0, but did not list individual units to
601 # join a relation in 2.4.1 through 2.4.3. (LP: #1794739)
602 raise NotImplementedError("goal-state relation unit count")
603 reltype = reltype or relation_type()
604 _goal_state = goal_state()
605 return (key for key in _goal_state['relations'][reltype] if '/' in key)
606
607
 @cached
 def relation_for_unit(unit=None, rid=None):
     """Get the json represenation of a unit's relation"""
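`expected_peer_units()` above filters goal-state keys: unit names contain a `/`, and the local unit is excluded so callers can compare against `related_units()`. With sample goal-state data (illustrative; real output comes from `goal-state --format=json`):

```python
# A made-up goal-state payload for a three-unit application.
goal = {'units': {'nagios/0': {}, 'nagios/1': {}, 'nagios/2': {}}}
local = 'nagios/0'

# Unit keys contain '/'; the local unit is excluded, as in expected_peer_units().
expected_peers = [k for k in goal['units'] if '/' in k and k != local]
assert expected_peers == ['nagios/1', 'nagios/2']
```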
@@ -644,18 +790,31 @@ def is_relation_made(relation, keys='private-address'):
     return False


+def _port_op(op_name, port, protocol="TCP"):
+    """Open or close a service network port"""
+    _args = [op_name]
+    icmp = protocol.upper() == "ICMP"
+    if icmp:
+        _args.append(protocol)
+    else:
+        _args.append('{}/{}'.format(port, protocol))
+    try:
+        subprocess.check_call(_args)
+    except subprocess.CalledProcessError:
+        # Older Juju pre 2.3 doesn't support ICMP
+        # so treat it as a no-op if it fails.
+        if not icmp:
+            raise
+
+
 def open_port(port, protocol="TCP"):
     """Open a service network port"""
-    _args = ['open-port']
-    _args.append('{}/{}'.format(port, protocol))
-    subprocess.check_call(_args)
+    _port_op('open-port', port, protocol)


 def close_port(port, protocol="TCP"):
     """Close a service network port"""
-    _args = ['close-port']
-    _args.append('{}/{}'.format(port, protocol))
-    subprocess.check_call(_args)
+    _port_op('close-port', port, protocol)


 def open_ports(start, end, protocol="TCP"):
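`_port_op()` builds its arguments differently for ICMP, which has no port number. The argument construction in isolation (a sketch; `port_args` is not a charmhelpers name):

```python
def port_args(op_name, port, protocol='TCP'):
    # Mirrors _port_op(): ICMP gets only the protocol token, while
    # everything else is formatted as "port/protocol".
    if protocol.upper() == 'ICMP':
        return [op_name, protocol]
    return [op_name, '{}/{}'.format(port, protocol)]

assert port_args('open-port', 80) == ['open-port', '80/TCP']
assert port_args('open-port', None, 'ICMP') == ['open-port', 'ICMP']
```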
@@ -672,6 +831,17 @@ def close_ports(start, end, protocol="TCP"):
     subprocess.check_call(_args)


+def opened_ports():
+    """Get the opened ports
+
+    *Note that this will only show ports opened in a previous hook*
+
+    :returns: Opened ports as a list of strings: ``['8080/tcp', '8081-8083/tcp']``
+    """
+    _args = ['opened-ports', '--format=json']
+    return json.loads(subprocess.check_output(_args).decode('UTF-8'))
+
+
 @cached
 def unit_get(attribute):
     """Get the unit ID for the remote unit"""
@@ -793,6 +963,10 @@ class Hooks(object):
         return wrapper


+class NoNetworkBinding(Exception):
+    pass
+
+
 def charm_dir():
     """Return the root directory of the current charm"""
     d = os.environ.get('JUJU_CHARM_DIR')
@@ -801,9 +975,23 @@ def charm_dir():
     return os.environ.get('CHARM_DIR')


+def cmd_exists(cmd):
+    """Return True if the specified cmd exists in the path"""
+    return any(
+        os.access(os.path.join(path, cmd), os.X_OK)
+        for path in os.environ["PATH"].split(os.pathsep)
+    )
+
+
 @cached
+@deprecate("moved to function_get()", log=log)
 def action_get(key=None):
-    """Gets the value of an action parameter, or all key/value param pairs"""
+    """
+    .. deprecated:: 0.20.7
+       Alias for :func:`function_get`.
+
+    Gets the value of an action parameter, or all key/value param pairs.
+    """
     cmd = ['action-get']
     if key is not None:
         cmd.append(key)
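`cmd_exists()` is a plain `os.access` scan over `PATH`. The same scan over an explicit directory list, exercised against a temporary directory (tool names are illustrative):

```python
import os
import stat
import tempfile

def cmd_exists_in(cmd, dirs):
    # The same executable-bit scan as cmd_exists(), over an explicit dir list.
    return any(os.access(os.path.join(d, cmd), os.X_OK) for d in dirs)

tmp = tempfile.mkdtemp()
tool = os.path.join(tmp, 'function-get')  # stand-in for a hook tool
open(tool, 'w').close()
os.chmod(tool, os.stat(tool).st_mode | stat.S_IXUSR)

assert cmd_exists_in('function-get', [tmp])
assert not cmd_exists_in('no-such-tool', [tmp])
```

This is how the `function_get`/`function_set` fallbacks below decide whether the Juju version provides the newer hook tools.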
@@ -812,52 +1000,130 @@ def action_get(key=None):
     return action_data


+@cached
+def function_get(key=None):
+    """Gets the value of an action parameter, or all key/value param pairs"""
+    cmd = ['function-get']
+    # Fallback for older charms.
+    if not cmd_exists('function-get'):
+        cmd = ['action-get']
+
+    if key is not None:
+        cmd.append(key)
+    cmd.append('--format=json')
+    function_data = json.loads(subprocess.check_output(cmd).decode('UTF-8'))
+    return function_data
+
+
+@deprecate("moved to function_set()", log=log)
 def action_set(values):
-    """Sets the values to be returned after the action finishes"""
+    """
+    .. deprecated:: 0.20.7
+       Alias for :func:`function_set`.
+
+    Sets the values to be returned after the action finishes.
+    """
     cmd = ['action-set']
     for k, v in list(values.items()):
         cmd.append('{}={}'.format(k, v))
     subprocess.check_call(cmd)


+def function_set(values):
+    """Sets the values to be returned after the function finishes"""
+    cmd = ['function-set']
+    # Fallback for older charms.
+    if not cmd_exists('function-get'):
+        cmd = ['action-set']
+
+    for k, v in list(values.items()):
+        cmd.append('{}={}'.format(k, v))
+    subprocess.check_call(cmd)
+
+
+@deprecate("moved to function_fail()", log=log)
 def action_fail(message):
-    """Sets the action status to failed and sets the error message.
+    """
+    .. deprecated:: 0.20.7
+       Alias for :func:`function_fail`.
+
+    Sets the action status to failed and sets the error message.

-    The results set by action_set are preserved."""
+    The results set by action_set are preserved.
+    """
     subprocess.check_call(['action-fail', message])


+def function_fail(message):
+    """Sets the function status to failed and sets the error message.
+
+    The results set by function_set are preserved."""
+    cmd = ['function-fail']
+    # Fallback for older charms.
+    if not cmd_exists('function-fail'):
+        cmd = ['action-fail']
+    cmd.append(message)
+
+    subprocess.check_call(cmd)
+
+
 def action_name():
     """Get the name of the currently executing action."""
     return os.environ.get('JUJU_ACTION_NAME')


+def function_name():
+    """Get the name of the currently executing function."""
+    return os.environ.get('JUJU_FUNCTION_NAME') or action_name()
+
+
 def action_uuid():
     """Get the UUID of the currently executing action."""
     return os.environ.get('JUJU_ACTION_UUID')


+def function_id():
+    """Get the ID of the currently executing function."""
+    return os.environ.get('JUJU_FUNCTION_ID') or action_uuid()
+
+
 def action_tag():
     """Get the tag for the currently executing action."""
     return os.environ.get('JUJU_ACTION_TAG')


-def status_set(workload_state, message):
+def function_tag():
+    """Get the tag for the currently executing function."""
+    return os.environ.get('JUJU_FUNCTION_TAG') or action_tag()
+
+
+def status_set(workload_state, message, application=False):
     """Set the workload state with a message

     Use status-set to set the workload state with a message which is visible
     to the user via juju status. If the status-set command is not found then
-    assume this is juju < 1.23 and juju-log the message unstead.
+    assume this is juju < 1.23 and juju-log the message instead.

-    workload_state -- valid juju workload state.
+    workload_state -- valid juju workload state. str or WORKLOAD_STATES
     message -- status update message
+    application -- Whether this is an application state set
     """
-    valid_states = ['maintenance', 'blocked', 'waiting', 'active']
-    if workload_state not in valid_states:
-        raise ValueError(
-            '{!r} is not a valid workload state'.format(workload_state)
-        )
-    cmd = ['status-set', workload_state, message]
+    bad_state_msg = '{!r} is not a valid workload state'
+
+    if isinstance(workload_state, str):
+        try:
+            # Convert string to enum.
+            workload_state = WORKLOAD_STATES[workload_state.upper()]
+        except KeyError:
+            raise ValueError(bad_state_msg.format(workload_state))
+
+    if workload_state not in WORKLOAD_STATES:
+        raise ValueError(bad_state_msg.format(workload_state))
+
+    cmd = ['status-set']
+    if application:
+        cmd.append('--application')
+    cmd.extend([workload_state.value, message])
     try:
         ret = subprocess.call(cmd)
         if ret == 0:
@@ -865,7 +1131,7 @@ def status_set(workload_state, message):
     except OSError as e:
         if e.errno != errno.ENOENT:
             raise
-    log_message = 'status-set failed: {} {}'.format(workload_state,
+    log_message = 'status-set failed: {} {}'.format(workload_state.value,
                                                     message)
     log(log_message, level='INFO')

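The patched `status_set()` accepts either a string or a `WORKLOAD_STATES` member and normalises to the enum before building the command. The conversion logic in isolation (`normalize` is a local sketch name):

```python
from enum import Enum

class WORKLOAD_STATES(Enum):
    ACTIVE = 'active'
    BLOCKED = 'blocked'
    MAINTENANCE = 'maintenance'
    WAITING = 'waiting'

def normalize(workload_state):
    # String-or-enum handling as in the patched status_set() above.
    if isinstance(workload_state, str):
        try:
            # Convert string to enum by member name.
            workload_state = WORKLOAD_STATES[workload_state.upper()]
        except KeyError:
            raise ValueError(
                '{!r} is not a valid workload state'.format(workload_state))
    if workload_state not in WORKLOAD_STATES:
        raise ValueError(
            '{!r} is not a valid workload state'.format(workload_state))
    return workload_state.value

assert normalize('blocked') == 'blocked'
assert normalize(WORKLOAD_STATES.ACTIVE) == 'active'
```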
@@ -919,6 +1185,14 @@ def application_version_set(version):


 @translate_exc(from_exc=OSError, to_exc=NotImplementedError)
+@cached
+def goal_state():
+    """Juju goal state values"""
+    cmd = ['goal-state', '--format=json']
+    return json.loads(subprocess.check_output(cmd).decode('UTF-8'))
+
+
+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
 def is_leader():
     """Does the current unit hold the juju leadership

@@ -1012,7 +1286,6 @@ def juju_version():
                                    universal_newlines=True).strip()


-@cached
 def has_juju_version(minimum_version):
     """Return True if the Juju version is at least the provided version"""
     return LooseVersion(juju_version()) >= LooseVersion(minimum_version)
@@ -1072,6 +1345,8 @@ def _run_atexit():
 @translate_exc(from_exc=OSError, to_exc=NotImplementedError)
 def network_get_primary_address(binding):
     '''
+    Deprecated since Juju 2.3; use network_get()
+
     Retrieve the primary network address for a named binding

     :param binding: string. The name of a relation of extra-binding
@@ -1079,10 +1354,19 @@ def network_get_primary_address(binding):
     :raise: NotImplementedError if run on Juju < 2.0
     '''
     cmd = ['network-get', '--primary-address', binding]
-    return subprocess.check_output(cmd).decode('UTF-8').strip()
+    try:
+        response = subprocess.check_output(
+            cmd,
+            stderr=subprocess.STDOUT).decode('UTF-8').strip()
+    except CalledProcessError as e:
+        if 'no network config found for binding' in e.output.decode('UTF-8'):
+            raise NoNetworkBinding("No network binding for {}"
+                                   .format(binding))
+        else:
+            raise
+    return response


-@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
 def network_get(endpoint, relation_id=None):
     """
     Retrieve the network details for a relation endpoint
@@ -1090,24 +1374,20 @@ def network_get(endpoint, relation_id=None):
     :param endpoint: string. The name of a relation endpoint
     :param relation_id: int. The ID of the relation for the current context.
     :return: dict. The loaded YAML output of the network-get query.
-    :raise: NotImplementedError if run on Juju < 2.1
+    :raise: NotImplementedError if request not supported by the Juju version.
     """
+    if not has_juju_version('2.2'):
+        raise NotImplementedError(juju_version())  # earlier versions require --primary-address
+    if relation_id and not has_juju_version('2.3'):
+        raise NotImplementedError  # 2.3 added the -r option
+
     cmd = ['network-get', endpoint, '--format', 'yaml']
     if relation_id:
         cmd.append('-r')
         cmd.append(relation_id)
-    try:
-        response = subprocess.check_output(
-            cmd,
-            stderr=subprocess.STDOUT).decode('UTF-8').strip()
-    except CalledProcessError as e:
-        # Early versions of Juju 2.0.x required the --primary-address argument.
-        # We catch that condition here and raise NotImplementedError since
-        # the requested semantics are not available - the caller can then
-        # use the network_get_primary_address() method instead.
-        if '--primary-address is currently required' in e.output.decode('UTF-8'):
-            raise NotImplementedError
-        raise
+    response = subprocess.check_output(
+        cmd,
+        stderr=subprocess.STDOUT).decode('UTF-8').strip()
     return yaml.safe_load(response)


@@ -1140,3 +1420,192 @@ def meter_info():
     """Get the meter status information, if running in the meter-status-changed
     hook."""
     return os.environ.get('JUJU_METER_INFO')
1423
1424
1425def iter_units_for_relation_name(relation_name):
1426 """Iterate through all units in a relation
1427
1428 Generator that iterates through all the units in a relation and yields
1429 a named tuple with rid and unit field names.
1430
1431 Usage:
1432 data = [(u.rid, u.unit)
1433 for u in iter_units_for_relation_name(relation_name)]
1434
1435 :param relation_name: string relation name
1436 :yield: Named Tuple with rid and unit field names
1437 """
1438 RelatedUnit = namedtuple('RelatedUnit', 'rid, unit')
1439 for rid in relation_ids(relation_name):
1440 for unit in related_units(rid):
1441 yield RelatedUnit(rid, unit)
1442
1443
1444def ingress_address(rid=None, unit=None):
1445 """
1446 Retrieve the ingress-address from a relation when available.
1447 Otherwise, return the private-address.
1448
1449 When used on the consuming side of the relation (unit is a remote
1450 unit), the ingress-address is the IP address that this unit needs
1451 to use to reach the provided service on the remote unit.
1452
1453 When used on the providing side of the relation (unit == local_unit()),
1454 the ingress-address is the IP address that is advertised to remote
1455 units on this relation. Remote units need to use this address to
1456 reach the local provided service on this unit.
1457
1458 Note that charms may document some other method to use in
1459 preference to the ingress_address(), such as an address provided
1460 on a different relation attribute or a service discovery mechanism.
1461 This allows charms to redirect inbound connections to their peers
1462 or different applications such as load balancers.
1463
1464 Usage:
1465 addresses = [ingress_address(rid=u.rid, unit=u.unit)
1466 for u in iter_units_for_relation_name(relation_name)]
1467
1468 :param rid: string relation id
1469 :param unit: string unit name
1470 :side effect: calls relation_get
1471 :return: string IP address
1472 """
1473 settings = relation_get(rid=rid, unit=unit)
1474 return (settings.get('ingress-address') or
1475 settings.get('private-address'))
1476
1477
1478def egress_subnets(rid=None, unit=None):
1479 """
1480 Retrieve the egress-subnets from a relation.
1481
1482 This function is to be used on the providing side of the
1483 relation, and provides the ranges of addresses that client
1484 connections may come from. The result is uninteresting on
1485 the consuming side of a relation (unit == local_unit()).
1486
1487 Returns a stable list of subnets in CIDR format.
1488 eg. ['192.168.1.0/24', '2001::F00F/128']
1489
1490 If egress-subnets is not available, falls back to using the published
1491 ingress-address, or finally private-address.
1492
1493 :param rid: string relation id
1494 :param unit: string unit name
1495 :side effect: calls relation_get
1496 :return: list of subnets in CIDR format. eg. ['192.168.1.0/24', '2001::F00F/128']
1497 """
1498 def _to_range(addr):
1499 if re.search(r'^(?:\d{1,3}\.){3}\d{1,3}$', addr) is not None:
1500 addr += '/32'
1501 elif ':' in addr and '/' not in addr: # IPv6
1502 addr += '/128'
1503 return addr
1504
1505 settings = relation_get(rid=rid, unit=unit)
1506 if 'egress-subnets' in settings:
1507 return [n.strip() for n in settings['egress-subnets'].split(',') if n.strip()]
1508 if 'ingress-address' in settings:
1509 return [_to_range(settings['ingress-address'])]
1510 if 'private-address' in settings:
1511 return [_to_range(settings['private-address'])]
1512 return [] # Should never happen
1513
1514
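`egress_subnets()` widens bare addresses to CIDR notation via its `_to_range` helper; reproduced standalone:

```python
import re

def to_range(addr):
    # CIDR-ify bare addresses the way egress_subnets()'s _to_range does:
    # bare IPv4 becomes a /32, bare IPv6 a /128, existing CIDRs pass through.
    if re.search(r'^(?:\d{1,3}\.){3}\d{1,3}$', addr) is not None:
        addr += '/32'
    elif ':' in addr and '/' not in addr:  # IPv6
        addr += '/128'
    return addr

assert to_range('192.168.1.7') == '192.168.1.7/32'
assert to_range('2001::F00F') == '2001::F00F/128'
assert to_range('10.0.0.0/24') == '10.0.0.0/24'
```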
1515def unit_doomed(unit=None):
1516 """Determines if the unit is being removed from the model
1517
1518 Requires Juju 2.4.1.
1519
1520 :param unit: string unit name, defaults to local_unit
1521 :side effect: calls goal_state
1522 :side effect: calls local_unit
1523 :side effect: calls has_juju_version
1524 :return: True if the unit is being removed, already gone, or never existed
1525 """
1526 if not has_juju_version("2.4.1"):
1527 # We cannot risk blindly returning False for 'we don't know',
1528 # because that could cause data loss; if call sites don't
1529 # need an accurate answer, they likely don't need this helper
1530 # at all.
1531 # goal-state existed in 2.4.0, but did not handle removals
+        # correctly until 2.4.1.
+        raise NotImplementedError("is_doomed")
+    if unit is None:
+        unit = local_unit()
+    gs = goal_state()
+    units = gs.get('units', {})
+    if unit not in units:
+        return True
+    # I don't think 'dead' units ever show up in the goal-state, but
+    # check anyway in addition to 'dying'.
+    return units[unit]['status'] in ('dying', 'dead')
+
+
+def env_proxy_settings(selected_settings=None):
+    """Get proxy settings from process environment variables.
+
+    Get charm proxy settings from environment variables that correspond to
+    juju-http-proxy, juju-https-proxy juju-no-proxy (available as of 2.4.2, see
+    lp:1782236) and juju-ftp-proxy in a format suitable for passing to an
+    application that reacts to proxy settings passed as environment variables.
+    Some applications support lowercase or uppercase notation (e.g. curl), some
+    support only lowercase (e.g. wget), there are also subjectively rare cases
+    of only uppercase notation support. no_proxy CIDR and wildcard support also
+    varies between runtimes and applications as there is no enforced standard.
+
+    Some applications may connect to multiple destinations and expose config
+    options that would affect only proxy settings for a specific destination
+    these should be handled in charms in an application-specific manner.
+
+    :param selected_settings: format only a subset of possible settings
+    :type selected_settings: list
+    :rtype: Option(None, dict[str, str])
+    """
+    SUPPORTED_SETTINGS = {
+        'http': 'HTTP_PROXY',
+        'https': 'HTTPS_PROXY',
+        'no_proxy': 'NO_PROXY',
+        'ftp': 'FTP_PROXY'
+    }
+    if selected_settings is None:
+        selected_settings = SUPPORTED_SETTINGS
+
+    selected_vars = [v for k, v in SUPPORTED_SETTINGS.items()
+                     if k in selected_settings]
+    proxy_settings = {}
+    for var in selected_vars:
+        var_val = os.getenv(var)
+        if var_val:
+            proxy_settings[var] = var_val
+            proxy_settings[var.lower()] = var_val
+        # Now handle juju-prefixed environment variables. The legacy vs new
+        # environment variable usage is mutually exclusive
+        charm_var_val = os.getenv('JUJU_CHARM_{}'.format(var))
+        if charm_var_val:
+            proxy_settings[var] = charm_var_val
+            proxy_settings[var.lower()] = charm_var_val
+    if 'no_proxy' in proxy_settings:
+        if _contains_range(proxy_settings['no_proxy']):
+            log(RANGE_WARNING, level=WARNING)
+    return proxy_settings if proxy_settings else None
+
+
+def _contains_range(addresses):
+    """Check for cidr or wildcard domain in a string.
+
+    Given a string comprising a comma seperated list of ip addresses
+    and domain names, determine whether the string contains IP ranges
+    or wildcard domains.
+
+    :param addresses: comma seperated list of domains and ip addresses.
+    :type addresses: str
+    """
+    return (
+        # Test for cidr (e.g. 10.20.20.0/24)
+        "/" in addresses or
+        # Test for wildcard domains (*.foo.com or .foo.com)
+        "*" in addresses or
+        addresses.startswith(".") or
+        ",." in addresses or
+        " ." in addresses)
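For reviewers, the `_contains_range` check added to hookenv.py above is a handful of substring tests; a standalone sketch mirroring the diff (the module-level `contains_range` name here is hypothetical, used only so it runs outside the charm):

```python
def contains_range(addresses):
    """Return True when a comma-separated no_proxy string contains
    a CIDR range or a wildcard domain, mirroring _contains_range above."""
    return (
        "/" in addresses or           # CIDR notation, e.g. 10.20.20.0/24
        "*" in addresses or           # wildcard domain, e.g. *.foo.com
        addresses.startswith(".") or  # leading-dot domain, e.g. .foo.com
        ",." in addresses or          # leading-dot domain after a comma
        " ." in addresses)            # leading-dot domain after a space

print(contains_range("10.20.20.0/24,host.internal"))  # True
print(contains_range("host.one,host.two"))            # False
```

This is why `env_proxy_settings` only warns on such values: whether a runtime honours CIDR or wildcard `no_proxy` entries varies by application.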
diff --git a/hooks/charmhelpers/core/host.py b/hooks/charmhelpers/core/host.py
index 5656e2f..b33ac90 100644
--- a/hooks/charmhelpers/core/host.py
+++ b/hooks/charmhelpers/core/host.py
@@ -34,21 +34,23 @@ import six
 
 from contextlib import contextmanager
 from collections import OrderedDict
-from .hookenv import log, DEBUG
+from .hookenv import log, INFO, DEBUG, local_unit, charm_name
 from .fstab import Fstab
 from charmhelpers.osplatform import get_platform
 
 __platform__ = get_platform()
 if __platform__ == "ubuntu":
-    from charmhelpers.core.host_factory.ubuntu import (
+    from charmhelpers.core.host_factory.ubuntu import (  # NOQA:F401
         service_available,
         add_new_group,
         lsb_release,
         cmp_pkgrevno,
         CompareHostReleases,
+        get_distrib_codename,
+        arch
     )  # flake8: noqa -- ignore F401 for this import
 elif __platform__ == "centos":
-    from charmhelpers.core.host_factory.centos import (
+    from charmhelpers.core.host_factory.centos import (  # NOQA:F401
         service_available,
         add_new_group,
         lsb_release,
@@ -58,6 +60,7 @@ elif __platform__ == "centos":
 
 UPDATEDB_PATH = '/etc/updatedb.conf'
 
+
 def service_start(service_name, **kwargs):
     """Start a system service.
 
@@ -287,8 +290,8 @@ def service_running(service_name, **kwargs):
         for key, value in six.iteritems(kwargs):
             parameter = '%s=%s' % (key, value)
             cmd.append(parameter)
-        output = subprocess.check_output(cmd,
-                                         stderr=subprocess.STDOUT).decode('UTF-8')
+        output = subprocess.check_output(
+            cmd, stderr=subprocess.STDOUT).decode('UTF-8')
     except subprocess.CalledProcessError:
         return False
     else:
@@ -441,6 +444,51 @@ def add_user_to_group(username, group):
     subprocess.check_call(cmd)
 
 
+def chage(username, lastday=None, expiredate=None, inactive=None,
+          mindays=None, maxdays=None, root=None, warndays=None):
+    """Change user password expiry information
+
+    :param str username: User to update
+    :param str lastday: Set when password was changed in YYYY-MM-DD format
+    :param str expiredate: Set when user's account will no longer be
+                           accessible in YYYY-MM-DD format.
+                           -1 will remove an account expiration date.
+    :param str inactive: Set the number of days of inactivity after a password
+                         has expired before the account is locked.
+                         -1 will remove an account's inactivity.
+    :param str mindays: Set the minimum number of days between password
+                        changes to MIN_DAYS.
+                        0 indicates the password can be changed anytime.
+    :param str maxdays: Set the maximum number of days during which a
+                        password is valid.
+                        -1 as MAX_DAYS will remove checking maxdays
+    :param str root: Apply changes in the CHROOT_DIR directory
+    :param str warndays: Set the number of days of warning before a password
+                         change is required
+    :raises subprocess.CalledProcessError: if call to chage fails
+    """
+    cmd = ['chage']
+    if root:
+        cmd.extend(['--root', root])
+    if lastday:
+        cmd.extend(['--lastday', lastday])
+    if expiredate:
+        cmd.extend(['--expiredate', expiredate])
+    if inactive:
+        cmd.extend(['--inactive', inactive])
+    if mindays:
+        cmd.extend(['--mindays', mindays])
+    if maxdays:
+        cmd.extend(['--maxdays', maxdays])
+    if warndays:
+        cmd.extend(['--warndays', warndays])
+    cmd.append(username)
+    subprocess.check_call(cmd)
+
+
+remove_password_expiry = functools.partial(chage, expiredate='-1', inactive='-1', mindays='0', maxdays='-1')
+
+
 def rsync(from_path, to_path, flags='-r', options=None, timeout=None):
     """Replicate the contents of a path"""
     options = options or ['--delete', '--executability']
@@ -492,13 +540,15 @@ def write_file(path, content, owner='root', group='root', perms=0o444):
     # lets see if we can grab the file and compare the context, to avoid doing
     # a write.
     existing_content = None
-    existing_uid, existing_gid = None, None
+    existing_uid, existing_gid, existing_perms = None, None, None
     try:
         with open(path, 'rb') as target:
             existing_content = target.read()
         stat = os.stat(path)
-        existing_uid, existing_gid = stat.st_uid, stat.st_gid
-    except:
+        existing_uid, existing_gid, existing_perms = (
+            stat.st_uid, stat.st_gid, stat.st_mode
+        )
+    except Exception:
         pass
     if content != existing_content:
         log("Writing file {} {}:{} {:o}".format(path, owner, group, perms),
@@ -506,10 +556,12 @@ def write_file(path, content, owner='root', group='root', perms=0o444):
         with open(path, 'wb') as target:
             os.fchown(target.fileno(), uid, gid)
             os.fchmod(target.fileno(), perms)
+            if six.PY3 and isinstance(content, six.string_types):
+                content = content.encode('UTF-8')
             target.write(content)
         return
     # the contents were the same, but we might still need to change the
-    # ownership.
+    # ownership or permissions.
     if existing_uid != uid:
         log("Changing uid on already existing content: {} -> {}"
             .format(existing_uid, uid), level=DEBUG)
@@ -518,6 +570,10 @@ def write_file(path, content, owner='root', group='root', perms=0o444):
         log("Changing gid on already existing content: {} -> {}"
             .format(existing_gid, gid), level=DEBUG)
         os.chown(path, -1, gid)
+    if existing_perms != perms:
+        log("Changing permissions on existing content: {} -> {}"
+            .format(existing_perms, perms), level=DEBUG)
+        os.chmod(path, perms)
 
 
 def fstab_remove(mp):
@@ -782,7 +838,7 @@ def list_nics(nic_type=None):
         ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
         ip_output = (line.strip() for line in ip_output if line)
 
-        key = re.compile('^[0-9]+:\s+(.+):')
+        key = re.compile(r'^[0-9]+:\s+(.+):')
         for line in ip_output:
             matched = re.search(key, line)
             if matched:
@@ -927,6 +983,20 @@ def is_container():
 
 
 def add_to_updatedb_prunepath(path, updatedb_path=UPDATEDB_PATH):
+    """Adds the specified path to the mlocate's udpatedb.conf PRUNEPATH list.
+
+    This method has no effect if the path specified by updatedb_path does not
+    exist or is not a file.
+
+    @param path: string the path to add to the updatedb.conf PRUNEPATHS value
+    @param updatedb_path: the path the updatedb.conf file
+    """
+    if not os.path.exists(updatedb_path) or os.path.isdir(updatedb_path):
+        # If the updatedb.conf file doesn't exist then don't attempt to update
+        # the file as the package providing mlocate may not be installed on
+        # the local system
+        return
+
     with open(updatedb_path, 'r+') as f_id:
         updatedb_text = f_id.read()
         output = updatedb(updatedb_text, path)
@@ -946,3 +1016,89 @@ def updatedb(updatedb_text, new_path):
                 lines[i] = 'PRUNEPATHS="{}"'.format(' '.join(paths))
     output = "\n".join(lines)
     return output
+
+
+def modulo_distribution(modulo=3, wait=30, non_zero_wait=False):
+    """ Modulo distribution
+
+    This helper uses the unit number, a modulo value and a constant wait time
+    to produce a calculated wait time distribution. This is useful in large
+    scale deployments to distribute load during an expensive operation such as
+    service restarts.
+
+    If you have 1000 nodes that need to restart 100 at a time 1 minute at a
+    time:
+
+      time.wait(modulo_distribution(modulo=100, wait=60))
+      restart()
+
+    If you need restarts to happen serially set modulo to the exact number of
+    nodes and set a high constant wait time:
+
+      time.wait(modulo_distribution(modulo=10, wait=120))
+      restart()
+
+    @param modulo: int The modulo number creates the group distribution
+    @param wait: int The constant time wait value
+    @param non_zero_wait: boolean Override unit % modulo == 0,
+                          return modulo * wait. Used to avoid collisions with
+                          leader nodes which are often given priority.
+    @return: int Calculated time to wait for unit operation
+    """
+    unit_number = int(local_unit().split('/')[1])
+    calculated_wait_time = (unit_number % modulo) * wait
+    if non_zero_wait and calculated_wait_time == 0:
+        return modulo * wait
+    else:
+        return calculated_wait_time
+
+
+def install_ca_cert(ca_cert, name=None):
+    """
+    Install the given cert as a trusted CA.
+
+    The ``name`` is the stem of the filename where the cert is written, and if
+    not provided, it will default to ``juju-{charm_name}``.
+
+    If the cert is empty or None, or is unchanged, nothing is done.
+    """
+    if not ca_cert:
+        return
+    if not isinstance(ca_cert, bytes):
+        ca_cert = ca_cert.encode('utf8')
+    if not name:
+        name = 'juju-{}'.format(charm_name())
+    cert_file = '/usr/local/share/ca-certificates/{}.crt'.format(name)
+    new_hash = hashlib.md5(ca_cert).hexdigest()
+    if file_hash(cert_file) == new_hash:
+        return
+    log("Installing new CA cert at: {}".format(cert_file), level=INFO)
+    write_file(cert_file, ca_cert)
+    subprocess.check_call(['update-ca-certificates', '--fresh'])
+
+
+def get_system_env(key, default=None):
+    """Get data from system environment as represented in ``/etc/environment``.
+
+    :param key: Key to look up
+    :type key: str
+    :param default: Value to return if key is not found
+    :type default: any
+    :returns: Value for key if found or contents of default parameter
+    :rtype: any
+    :raises: subprocess.CalledProcessError
+    """
+    env_file = '/etc/environment'
+    # use the shell and env(1) to parse the global environments file. This is
+    # done to get the correct result even if the user has shell variable
+    # substitutions or other shell logic in that file.
+    output = subprocess.check_output(
+        ['env', '-i', '/bin/bash', '-c',
+         'set -a && source {} && env'.format(env_file)],
+        universal_newlines=True)
+    for k, v in (line.split('=', 1)
+                 for line in output.splitlines() if '=' in line):
+        if k == key:
+            return v
+    else:
+        return default
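The `modulo_distribution` helper added to host.py above is easy to sanity-check in isolation; a sketch with the unit number passed in explicitly (in the charm it comes from `local_unit()`):

```python
def modulo_distribution(unit_number, modulo=3, wait=30, non_zero_wait=False):
    """Stagger an operation across units: wait (unit % modulo) * wait seconds.
    With non_zero_wait, units in group 0 wait a full cycle instead of 0s."""
    calculated_wait_time = (unit_number % modulo) * wait
    if non_zero_wait and calculated_wait_time == 0:
        return modulo * wait
    return calculated_wait_time

print(modulo_distribution(7, modulo=3, wait=30))  # unit 7 -> group 1 -> 30
print(modulo_distribution(6, modulo=3, wait=30))  # unit 6 -> group 0 -> 0
```

So with `modulo=100, wait=60`, a 1000-unit deployment restarts in 100-unit waves a minute apart, and `non_zero_wait=True` keeps the zero-group from colliding with the leader.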
diff --git a/hooks/charmhelpers/core/host_factory/ubuntu.py b/hooks/charmhelpers/core/host_factory/ubuntu.py
index d8dc378..3edc068 100644
--- a/hooks/charmhelpers/core/host_factory/ubuntu.py
+++ b/hooks/charmhelpers/core/host_factory/ubuntu.py
@@ -1,5 +1,6 @@
 import subprocess
 
+from charmhelpers.core.hookenv import cached
 from charmhelpers.core.strutils import BasicStringComparator
 
 
@@ -20,6 +21,11 @@ UBUNTU_RELEASES = (
     'yakkety',
     'zesty',
     'artful',
+    'bionic',
+    'cosmic',
+    'disco',
+    'eoan',
+    'focal'
 )
 
 
@@ -70,6 +76,14 @@ def lsb_release():
     return d
 
 
+def get_distrib_codename():
+    """Return the codename of the distribution
+    :returns: The codename
+    :rtype: str
+    """
+    return lsb_release()['DISTRIB_CODENAME'].lower()
+
+
 def cmp_pkgrevno(package, revno, pkgcache=None):
     """Compare supplied revno with the revno of the installed package.
 
@@ -81,9 +95,22 @@ def cmp_pkgrevno(package, revno, pkgcache=None):
     the pkgcache argument is None. Be sure to add charmhelpers.fetch if
     you call this function, or pass an apt_pkg.Cache() instance.
     """
-    import apt_pkg
+    from charmhelpers.fetch import apt_pkg
     if not pkgcache:
         from charmhelpers.fetch import apt_cache
         pkgcache = apt_cache()
     pkg = pkgcache[package]
     return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
+
+
+@cached
+def arch():
+    """Return the package architecture as a string.
+
+    :returns: the architecture
+    :rtype: str
+    :raises: subprocess.CalledProcessError if dpkg command fails
+    """
+    return subprocess.check_output(
+        ['dpkg', '--print-architecture']
+    ).rstrip().decode('UTF-8')
diff --git a/hooks/charmhelpers/core/kernel.py b/hooks/charmhelpers/core/kernel.py
index 2d40452..e01f4f8 100644
--- a/hooks/charmhelpers/core/kernel.py
+++ b/hooks/charmhelpers/core/kernel.py
@@ -26,12 +26,12 @@ from charmhelpers.core.hookenv import (
 
 __platform__ = get_platform()
 if __platform__ == "ubuntu":
-    from charmhelpers.core.kernel_factory.ubuntu import (
+    from charmhelpers.core.kernel_factory.ubuntu import (  # NOQA:F401
         persistent_modprobe,
         update_initramfs,
     )  # flake8: noqa -- ignore F401 for this import
 elif __platform__ == "centos":
-    from charmhelpers.core.kernel_factory.centos import (
+    from charmhelpers.core.kernel_factory.centos import (  # NOQA:F401
         persistent_modprobe,
         update_initramfs,
     )  # flake8: noqa -- ignore F401 for this import
diff --git a/hooks/charmhelpers/core/services/base.py b/hooks/charmhelpers/core/services/base.py
index ca9dc99..179ad4f 100644
--- a/hooks/charmhelpers/core/services/base.py
+++ b/hooks/charmhelpers/core/services/base.py
@@ -307,23 +307,34 @@ class PortManagerCallback(ManagerCallback):
     """
     def __call__(self, manager, service_name, event_name):
         service = manager.get_service(service_name)
-        new_ports = service.get('ports', [])
+        # turn this generator into a list,
+        # as we'll be going over it multiple times
+        new_ports = list(service.get('ports', []))
         port_file = os.path.join(hookenv.charm_dir(), '.{}.ports'.format(service_name))
         if os.path.exists(port_file):
             with open(port_file) as fp:
                 old_ports = fp.read().split(',')
             for old_port in old_ports:
-                if bool(old_port):
-                    old_port = int(old_port)
-                    if old_port not in new_ports:
-                        hookenv.close_port(old_port)
+                if bool(old_port) and not self.ports_contains(old_port, new_ports):
+                    hookenv.close_port(old_port)
         with open(port_file, 'w') as fp:
             fp.write(','.join(str(port) for port in new_ports))
         for port in new_ports:
+            # A port is either a number or 'ICMP'
+            protocol = 'TCP'
+            if str(port).upper() == 'ICMP':
+                protocol = 'ICMP'
             if event_name == 'start':
-                hookenv.open_port(port)
+                hookenv.open_port(port, protocol)
             elif event_name == 'stop':
-                hookenv.close_port(port)
+                hookenv.close_port(port, protocol)
+
+    def ports_contains(self, port, ports):
+        if not bool(port):
+            return False
+        if str(port).upper() != 'ICMP':
+            port = int(port)
+        return port in ports
 
 
 def service_stop(service_name):
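The new `ports_contains` method above exists because the old code crashed on `int('ICMP')` when a service declared an ICMP "port"; its logic is self-contained enough to test directly (a module-level sketch of the same method body):

```python
def ports_contains(port, ports):
    """Membership test that tolerates the 'ICMP' pseudo-port and the
    empty strings produced by splitting an empty .ports file."""
    if not bool(port):
        return False          # '' from ''.split(',') is never a real port
    if str(port).upper() != 'ICMP':
        port = int(port)      # numeric ports are compared as ints
    return port in ports

print(ports_contains('8080', [8080, 'ICMP']))  # True
print(ports_contains('', [8080]))              # False
```

Note the old-port values come from a comma-joined file, so they arrive as strings, while `new_ports` holds ints (plus possibly `'ICMP'`); the `int()` conversion is what makes the comparison meaningful.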
diff --git a/hooks/charmhelpers/core/strutils.py b/hooks/charmhelpers/core/strutils.py
index 685dabd..e8df045 100644
--- a/hooks/charmhelpers/core/strutils.py
+++ b/hooks/charmhelpers/core/strutils.py
@@ -61,13 +61,19 @@ def bytes_from_string(value):
     if isinstance(value, six.string_types):
         value = six.text_type(value)
     else:
-        msg = "Unable to interpret non-string value '%s' as boolean" % (value)
+        msg = "Unable to interpret non-string value '%s' as bytes" % (value)
         raise ValueError(msg)
     matches = re.match("([0-9]+)([a-zA-Z]+)", value)
-    if not matches:
-        msg = "Unable to interpret string value '%s' as bytes" % (value)
-        raise ValueError(msg)
-    return int(matches.group(1)) * (1024 ** BYTE_POWER[matches.group(2)])
+    if matches:
+        size = int(matches.group(1)) * (1024 ** BYTE_POWER[matches.group(2)])
+    else:
+        # Assume that value passed in is bytes
+        try:
+            size = int(value)
+        except ValueError:
+            msg = "Unable to interpret string value '%s' as bytes" % (value)
+            raise ValueError(msg)
+    return size
 
 
 class BasicStringComparator(object):
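The `bytes_from_string` change above makes a bare number fall through as plain bytes instead of raising. A standalone sketch of the new control flow (the `BYTE_POWER` table here is assumed to match charm-helpers' suffix map):

```python
import re

# Assumed suffix -> power-of-1024 table, mirroring strutils.BYTE_POWER
BYTE_POWER = {'K': 1, 'KB': 1, 'M': 2, 'MB': 2, 'G': 3, 'GB': 3,
              'T': 4, 'TB': 4, 'P': 5, 'PB': 5}

def bytes_from_string(value):
    """Interpret a human-readable size ('10KB', '1M') or a bare
    byte count ('512') as an integer number of bytes."""
    matches = re.match("([0-9]+)([a-zA-Z]+)", value)
    if matches:
        return int(matches.group(1)) * (1024 ** BYTE_POWER[matches.group(2)])
    # No suffix matched: assume the value is already in bytes
    return int(value)

print(bytes_from_string('10KB'))  # 10240
print(bytes_from_string('512'))   # 512
```

Before this change, `bytes_from_string('512')` raised `ValueError` even though the intent was obvious.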
diff --git a/hooks/charmhelpers/core/sysctl.py b/hooks/charmhelpers/core/sysctl.py
index 6e413e3..386428d 100644
--- a/hooks/charmhelpers/core/sysctl.py
+++ b/hooks/charmhelpers/core/sysctl.py
@@ -17,38 +17,59 @@
 
 import yaml
 
-from subprocess import check_call
+from subprocess import check_call, CalledProcessError
 
 from charmhelpers.core.hookenv import (
     log,
     DEBUG,
     ERROR,
+    WARNING,
 )
 
+from charmhelpers.core.host import is_container
+
 __author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
 
 
-def create(sysctl_dict, sysctl_file):
+def create(sysctl_dict, sysctl_file, ignore=False):
     """Creates a sysctl.conf file from a YAML associative array
 
-    :param sysctl_dict: a YAML-formatted string of sysctl options eg "{ 'kernel.max_pid': 1337 }"
+    :param sysctl_dict: a dict or YAML-formatted string of sysctl
+        options eg "{ 'kernel.max_pid': 1337 }"
     :type sysctl_dict: str
     :param sysctl_file: path to the sysctl file to be saved
     :type sysctl_file: str or unicode
+    :param ignore: If True, ignore "unknown variable" errors.
+    :type ignore: bool
     :returns: None
     """
-    try:
-        sysctl_dict_parsed = yaml.safe_load(sysctl_dict)
-    except yaml.YAMLError:
-        log("Error parsing YAML sysctl_dict: {}".format(sysctl_dict),
-            level=ERROR)
-        return
+    if type(sysctl_dict) is not dict:
+        try:
+            sysctl_dict_parsed = yaml.safe_load(sysctl_dict)
+        except yaml.YAMLError:
+            log("Error parsing YAML sysctl_dict: {}".format(sysctl_dict),
+                level=ERROR)
+            return
+    else:
+        sysctl_dict_parsed = sysctl_dict
 
     with open(sysctl_file, "w") as fd:
         for key, value in sysctl_dict_parsed.items():
             fd.write("{}={}\n".format(key, value))
 
-    log("Updating sysctl_file: %s values: %s" % (sysctl_file, sysctl_dict_parsed),
+    log("Updating sysctl_file: {} values: {}".format(sysctl_file,
+                                                     sysctl_dict_parsed),
         level=DEBUG)
 
-    check_call(["sysctl", "-p", sysctl_file])
+    call = ["sysctl", "-p", sysctl_file]
+    if ignore:
+        call.append("-e")
+
+    try:
+        check_call(call)
+    except CalledProcessError as e:
+        if is_container():
+            log("Error setting some sysctl keys in this container: {}".format(e.output),
+                level=WARNING)
+        else:
+            raise e
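The file-writing step of `sysctl.create` above is just key=value serialisation; a sketch of that step alone (hypothetical `render_sysctl_conf` name, skipping the YAML parsing and the `sysctl -p` call, which need a real host):

```python
def render_sysctl_conf(sysctl_dict):
    """Serialise a dict of sysctl settings into sysctl.conf lines,
    one 'key=value' per line, as create() writes them."""
    return "".join("{}={}\n".format(key, value)
                   for key, value in sysctl_dict.items())

print(render_sysctl_conf({'kernel.max_pid': 1337}))  # kernel.max_pid=1337
```

The rest of the new behaviour is host-side: `-e` is passed to `sysctl -p` when `ignore=True`, and a `CalledProcessError` is downgraded to a warning inside containers, where many keys are read-only.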
diff --git a/hooks/charmhelpers/core/templating.py b/hooks/charmhelpers/core/templating.py
index 7b801a3..9014015 100644
--- a/hooks/charmhelpers/core/templating.py
+++ b/hooks/charmhelpers/core/templating.py
@@ -20,7 +20,8 @@ from charmhelpers.core import hookenv
 
 
 def render(source, target, context, owner='root', group='root',
-           perms=0o444, templates_dir=None, encoding='UTF-8', template_loader=None):
+           perms=0o444, templates_dir=None, encoding='UTF-8',
+           template_loader=None, config_template=None):
     """
     Render a template.
 
@@ -32,6 +33,9 @@ def render(source, target, context, owner='root', group='root',
     The context should be a dict containing the values to be replaced in the
     template.
 
+    config_template may be provided to render from a provided template instead
+    of loading from a file.
+
     The `owner`, `group`, and `perms` options will be passed to `write_file`.
 
     If omitted, `templates_dir` defaults to the `templates` folder in the charm.
@@ -65,14 +69,19 @@ def render(source, target, context, owner='root', group='root',
     if templates_dir is None:
         templates_dir = os.path.join(hookenv.charm_dir(), 'templates')
     template_env = Environment(loader=FileSystemLoader(templates_dir))
-    try:
-        source = source
-        template = template_env.get_template(source)
-    except exceptions.TemplateNotFound as e:
-        hookenv.log('Could not load template %s from %s.' %
-                    (source, templates_dir),
-                    level=hookenv.ERROR)
-        raise e
+
+    # load from a string if provided explicitly
+    if config_template is not None:
+        template = template_env.from_string(config_template)
+    else:
+        try:
+            source = source
+            template = template_env.get_template(source)
+        except exceptions.TemplateNotFound as e:
+            hookenv.log('Could not load template %s from %s.' %
+                        (source, templates_dir),
+                        level=hookenv.ERROR)
+            raise e
     content = template.render(context)
     if target is not None:
         target_dir = os.path.dirname(target)
diff --git a/hooks/charmhelpers/core/unitdata.py b/hooks/charmhelpers/core/unitdata.py
index 54ec969..ab55432 100644
--- a/hooks/charmhelpers/core/unitdata.py
+++ b/hooks/charmhelpers/core/unitdata.py
@@ -166,6 +166,10 @@ class Storage(object):
 
     To support dicts, lists, integer, floats, and booleans values
     are automatically json encoded/decoded.
+
+    Note: to facilitate unit testing, ':memory:' can be passed as the
+    path parameter which causes sqlite3 to only build the db in memory.
+    This should only be used for testing purposes.
     """
     def __init__(self, path=None):
         self.db_path = path
@@ -175,6 +179,9 @@ class Storage(object):
         else:
             self.db_path = os.path.join(
                 os.environ.get('CHARM_DIR', ''), '.unit-state.db')
+        if self.db_path != ':memory:':
+            with open(self.db_path, 'a') as f:
+                os.fchmod(f.fileno(), 0o600)
         self.conn = sqlite3.connect('%s' % self.db_path)
         self.cursor = self.conn.cursor()
         self.revision = None
@@ -358,7 +365,7 @@ class Storage(object):
         try:
             yield self.revision
             self.revision = None
-        except:
+        except Exception:
             self.flush(False)
             self.revision = None
             raise
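The unitdata change above special-cases `':memory:'` so tests can skip the on-disk `.unit-state.db` (and the new `fchmod 0o600`). A minimal sketch of what that buys, using plain sqlite3 plus the json encode/decode the docstring describes (table layout here is illustrative, not `Storage`'s actual schema):

```python
import json
import sqlite3

# ':memory:' keeps the whole database in RAM -- no file, no chmod needed,
# which is exactly why the Storage docstring recommends it for unit tests.
conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE kv (key TEXT PRIMARY KEY, data TEXT)')

# Values are stored json-encoded so lists/dicts round-trip transparently
cur.execute('INSERT INTO kv VALUES (?, ?)', ('ports', json.dumps([80, 443])))
row = cur.execute('SELECT data FROM kv WHERE key = ?', ('ports',)).fetchone()
value = json.loads(row[0])
print(value)  # [80, 443]
conn.close()
```

On disk, the same `Storage` now tightens permissions to 0o600 before connecting, since unit state can contain secrets.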
diff --git a/hooks/charmhelpers/fetch/__init__.py b/hooks/charmhelpers/fetch/__init__.py
index 480a627..0cc7fc8 100644
--- a/hooks/charmhelpers/fetch/__init__.py
+++ b/hooks/charmhelpers/fetch/__init__.py
@@ -84,6 +84,7 @@ module = "charmhelpers.fetch.%s" % __platform__
 fetch = importlib.import_module(module)
 
 filter_installed_packages = fetch.filter_installed_packages
+filter_missing_packages = fetch.filter_missing_packages
 install = fetch.apt_install
 upgrade = fetch.apt_upgrade
 update = _fetch_update = fetch.apt_update
@@ -96,11 +97,14 @@ if __platform__ == "ubuntu":
     apt_update = fetch.apt_update
     apt_upgrade = fetch.apt_upgrade
     apt_purge = fetch.apt_purge
+    apt_autoremove = fetch.apt_autoremove
     apt_mark = fetch.apt_mark
     apt_hold = fetch.apt_hold
     apt_unhold = fetch.apt_unhold
     import_key = fetch.import_key
     get_upstream_version = fetch.get_upstream_version
+    apt_pkg = fetch.ubuntu_apt_pkg
+    get_apt_dpkg_env = fetch.get_apt_dpkg_env
 elif __platform__ == "centos":
     yum_search = fetch.yum_search
 
diff --git a/hooks/charmhelpers/fetch/archiveurl.py b/hooks/charmhelpers/fetch/archiveurl.py
index dd24f9e..d25587a 100644
--- a/hooks/charmhelpers/fetch/archiveurl.py
+++ b/hooks/charmhelpers/fetch/archiveurl.py
@@ -89,7 +89,7 @@ class ArchiveUrlFetchHandler(BaseFetchHandler):
     :param str source: URL pointing to an archive file.
     :param str dest: Local path location to download archive file to.
     """
-    # propogate all exceptions
+    # propagate all exceptions
     # URLError, OSError, etc
     proto, netloc, path, params, query, fragment = urlparse(source)
     if proto in ('http', 'https'):
diff --git a/hooks/charmhelpers/fetch/bzrurl.py b/hooks/charmhelpers/fetch/bzrurl.py
index 07cd029..c4ab3ff 100644
--- a/hooks/charmhelpers/fetch/bzrurl.py
+++ b/hooks/charmhelpers/fetch/bzrurl.py
@@ -13,7 +13,7 @@
 # limitations under the License.
 
 import os
-from subprocess import check_call
+from subprocess import STDOUT, check_output
 from charmhelpers.fetch import (
     BaseFetchHandler,
     UnhandledSource,
@@ -55,7 +55,7 @@ class BzrUrlFetchHandler(BaseFetchHandler):
         cmd = ['bzr', 'branch']
         cmd += cmd_opts
         cmd += [source, dest]
-        check_call(cmd)
+        check_output(cmd, stderr=STDOUT)
 
     def install(self, source, dest=None, revno=None):
         url_parts = self.parse_url(source)
diff --git a/hooks/charmhelpers/fetch/giturl.py b/hooks/charmhelpers/fetch/giturl.py
index 4cf21bc..070ca9b 100644
--- a/hooks/charmhelpers/fetch/giturl.py
+++ b/hooks/charmhelpers/fetch/giturl.py
@@ -13,7 +13,7 @@
 # limitations under the License.
 
 import os
-from subprocess import check_call, CalledProcessError
+from subprocess import check_output, CalledProcessError, STDOUT
 from charmhelpers.fetch import (
     BaseFetchHandler,
     UnhandledSource,
@@ -50,7 +50,7 @@ class GitUrlFetchHandler(BaseFetchHandler):
         cmd = ['git', 'clone', source, dest, '--branch', branch]
         if depth:
             cmd.extend(['--depth', depth])
-        check_call(cmd)
+        check_output(cmd, stderr=STDOUT)
 
     def install(self, source, branch="master", dest=None, depth=None):
         url_parts = self.parse_url(source)
diff --git a/hooks/charmhelpers/fetch/python/__init__.py b/hooks/charmhelpers/fetch/python/__init__.py
new file mode 100644
index 0000000..bff99dc
--- /dev/null
+++ b/hooks/charmhelpers/fetch/python/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2014-2019 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/hooks/charmhelpers/fetch/python/debug.py b/hooks/charmhelpers/fetch/python/debug.py
new file mode 100644
index 0000000..757135e
--- /dev/null
+++ b/hooks/charmhelpers/fetch/python/debug.py
@@ -0,0 +1,54 @@
+#!/usr/bin/env python
+# coding: utf-8
+
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import print_function
+
+import atexit
+import sys
+
+from charmhelpers.fetch.python.rpdb import Rpdb
+from charmhelpers.core.hookenv import (
+    open_port,
+    close_port,
+    ERROR,
+    log
+)
+
+__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
+
+DEFAULT_ADDR = "0.0.0.0"
+DEFAULT_PORT = 4444
+
+
+def _error(message):
+    log(message, level=ERROR)
+
+
+def set_trace(addr=DEFAULT_ADDR, port=DEFAULT_PORT):
+    """
+    Set a trace point using the remote debugger
+    """
+    atexit.register(close_port, port)
+    try:
+        log("Starting a remote python debugger session on %s:%s" % (addr,
+                                                                    port))
+        open_port(port)
+        debugger = Rpdb(addr=addr, port=port)
+        debugger.set_trace(sys._getframe().f_back)
+    except Exception:
+        _error("Cannot start a remote debug session on %s:%s" % (addr,
+                                                                 port))
diff --git a/hooks/charmhelpers/fetch/python/packages.py b/hooks/charmhelpers/fetch/python/packages.py
new file mode 100644
index 0000000..6e95028
--- /dev/null
+++ b/hooks/charmhelpers/fetch/python/packages.py
@@ -0,0 +1,154 @@
+#!/usr/bin/env python
+# coding: utf-8
+
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import six
+import subprocess
+import sys
+
+from charmhelpers.fetch import apt_install, apt_update
+from charmhelpers.core.hookenv import charm_dir, log
+
+__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
+
+
+def pip_execute(*args, **kwargs):
+    """Overriden pip_execute() to stop sys.path being changed.
+
+    The act of importing main from the pip module seems to cause add wheels
+    from the /usr/share/python-wheels which are installed by various tools.
+    This function ensures that sys.path remains the same after the call is
+    executed.
+    """
+    try:
+        _path = sys.path
+        try:
+            from pip import main as _pip_execute
+        except ImportError:
+            apt_update()
+            if six.PY2:
+                apt_install('python-pip')
+            else:
+                apt_install('python3-pip')
+            from pip import main as _pip_execute
+        _pip_execute(*args, **kwargs)
+    finally:
+        sys.path = _path
+
+
+def parse_options(given, available):
+    """Given a set of options, check if available"""
+    for key, value in sorted(given.items()):
+        if not value:
+            continue
+        if key in available:
+            yield "--{0}={1}".format(key, value)
+
+
+def pip_install_requirements(requirements, constraints=None, **options):
+    """Install a requirements file.
+
+    :param constraints: Path to pip constraints file.
+    http://pip.readthedocs.org/en/stable/user_guide/#constraints-files
+    """
+    command = ["install"]
+
+    available_options = ('proxy', 'src', 'log', )
+    for option in parse_options(options, available_options):
+        command.append(option)
+
+    command.append("-r {0}".format(requirements))
+    if constraints:
+        command.append("-c {0}".format(constraints))
+        log("Installing from file: {} with constraints {} "
+            "and options: {}".format(requirements, constraints, command))
+    else:
+        log("Installing from file: {} with options: {}".format(requirements,
+                                                               command))
+    pip_execute(command)
+
+
+def pip_install(package, fatal=False, upgrade=False, venv=None,
+                constraints=None, **options):
+    """Install a python package"""
+    if venv:
+        venv_python = os.path.join(venv, 'bin/pip')
+        command = [venv_python, "install"]
+    else:
+        command = ["install"]
+
+    available_options = ('proxy', 'src', 'log', 'index-url', )
+    for option in parse_options(options, available_options):
+        command.append(option)
+
+    if upgrade:
+        command.append('--upgrade')
+
+    if constraints:
+        command.extend(['-c', constraints])
+
+    if isinstance(package, list):
+        command.extend(package)
+    else:
+        command.append(package)
+
+    log("Installing {} package with options: {}".format(package,
+                                                        command))
+    if venv:
+        subprocess.check_call(command)
+    else:
+        pip_execute(command)
+
+
+def pip_uninstall(package, **options):
+    """Uninstall a python package"""
+    command = ["uninstall", "-q", "-y"]
+
+    available_options = ('proxy', 'log', )
+    for option in parse_options(options, available_options):
+        command.append(option)
+
+    if isinstance(package, list):
+        command.extend(package)
+    else:
+        command.append(package)
+
+    log("Uninstalling {} package with options: {}".format(package,
+                                                          command))
+    pip_execute(command)
+
+
+def pip_list():
+    """Returns the list of current python installed packages
+    """
+    return pip_execute(["list"])
+
+
+def pip_create_virtualenv(path=None):
+    """Create an isolated Python environment."""
+    if six.PY2:
+        apt_install('python-virtualenv')
+    else:
+        apt_install('python3-virtualenv')
+
+    if path:
+        venv_path = path
+    else:
+        venv_path = os.path.join(charm_dir(), 'venv')
+
+    if not os.path.exists(venv_path):
+        subprocess.check_call(['virtualenv', venv_path])
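The `parse_options()` generator above drives every `pip_*` helper: it drops options whose value is falsy and options not in the per-command whitelist, and turns the rest into `--key=value` flags. A standalone sketch (the option values here are made up for illustration):

```python
def parse_options(given, available):
    """Yield --key=value flags for options that are set and whitelisted."""
    for key, value in sorted(given.items()):
        if not value:
            continue          # unset options are silently dropped
        if key in available:  # only whitelisted keys become flags
            yield "--{0}={1}".format(key, value)

# 'log' is unset and 'color' is not whitelisted, so only 'proxy' survives.
opts = list(parse_options(
    {'proxy': 'http://squid.internal:3128', 'log': None, 'color': 'auto'},
    ('proxy', 'src', 'log')))
print(opts)  # -> ['--proxy=http://squid.internal:3128']
```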
diff --git a/hooks/charmhelpers/fetch/python/rpdb.py b/hooks/charmhelpers/fetch/python/rpdb.py
new file mode 100644
index 0000000..9b31610
--- /dev/null
+++ b/hooks/charmhelpers/fetch/python/rpdb.py
@@ -0,0 +1,56 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Remote Python Debugger (pdb wrapper)."""
+
+import pdb
+import socket
+import sys
+
+__author__ = "Bertrand Janin <b@janin.com>"
+__version__ = "0.1.3"
+
+
+class Rpdb(pdb.Pdb):
+
+    def __init__(self, addr="127.0.0.1", port=4444):
+        """Initialize the socket and initialize pdb."""
+
+        # Backup stdin and stdout before replacing them by the socket handle
+        self.old_stdout = sys.stdout
+        self.old_stdin = sys.stdin
+
+        # Open a 'reusable' socket to let the webapp reload on the same port
+        self.skt = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+        self.skt.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, True)
+        self.skt.bind((addr, port))
+        self.skt.listen(1)
+        (clientsocket, address) = self.skt.accept()
+        handle = clientsocket.makefile('rw')
+        pdb.Pdb.__init__(self, completekey='tab', stdin=handle, stdout=handle)
+        sys.stdout = sys.stdin = handle
+
+    def shutdown(self):
+        """Revert stdin and stdout, close the socket."""
+        sys.stdout = self.old_stdout
+        sys.stdin = self.old_stdin
+        self.skt.close()
+        self.set_continue()
+
+    def do_continue(self, arg):
+        """Stop all operation on ``continue``."""
+        self.shutdown()
+        return 1
+
+    do_EOF = do_quit = do_exit = do_c = do_cont = do_continue
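The `Rpdb` constructor blocks in `accept()` until a debugger client connects, then swaps `sys.stdin`/`sys.stdout` for a file handle made from the client socket, so pdb talks over TCP. A self-contained sketch of that socket/`makefile` pattern; it binds port 0 so the demo picks a free port, whereas the real class binds the configured port and hands the handle to `pdb.Pdb`:

```python
import socket
import threading

# SO_REUSEADDR lets a new debugger session bind the same port right
# after a previous one closed (the 'reusable' socket comment above).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, True)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
addr, port = srv.getsockname()

def client():
    # Stand-in for `nc`/`telnet` attaching to the debugger.
    with socket.create_connection((addr, port)) as c:
        c.sendall(b"continue\n")

t = threading.Thread(target=client)
t.start()
conn, _ = srv.accept()
handle = conn.makefile("rw")     # same file-handle trick Rpdb hands to pdb
line = handle.readline().strip()
print(line)  # -> continue
t.join()
conn.close()
srv.close()
```

In the real class, `do_continue` restores the original stdin/stdout and closes the socket, which is why `continue`, `quit`, and EOF all detach the session.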
diff --git a/hooks/charmhelpers/fetch/python/version.py b/hooks/charmhelpers/fetch/python/version.py
new file mode 100644
index 0000000..3eb4210
--- /dev/null
+++ b/hooks/charmhelpers/fetch/python/version.py
@@ -0,0 +1,32 @@
+#!/usr/bin/env python
+# coding: utf-8
+
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import sys
+
+__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
+
+
+def current_version():
+    """Current system python version"""
+    return sys.version_info
+
+
+def current_version_string():
+    """Current system python version as string major.minor.micro"""
+    return "{0}.{1}.{2}".format(sys.version_info.major,
+                                sys.version_info.minor,
+                                sys.version_info.micro)
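`current_version_string()` is just a formatting of `sys.version_info`; restated standalone for illustration:

```python
import sys

# Minimal restatement of current_version_string() from version.py above.
def current_version_string():
    return "{0}.{1}.{2}".format(sys.version_info.major,
                                sys.version_info.minor,
                                sys.version_info.micro)

# Always yields a dotted triple, e.g. "3.6.9" on a bionic system python.
print(current_version_string())
```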
diff --git a/hooks/charmhelpers/fetch/snap.py b/hooks/charmhelpers/fetch/snap.py
index 112a54c..fc70aa9 100644
--- a/hooks/charmhelpers/fetch/snap.py
+++ b/hooks/charmhelpers/fetch/snap.py
@@ -41,6 +41,10 @@ class CouldNotAcquireLockException(Exception):
     pass


+class InvalidSnapChannel(Exception):
+    pass
+
+
 def _snap_exec(commands):
     """
     Execute snap commands.
@@ -65,7 +69,7 @@ def _snap_exec(commands):
                     .format(SNAP_NO_LOCK_RETRY_COUNT))
             return_code = e.returncode
             log('Snap failed to acquire lock, trying again in {} seconds.'
-                .format(SNAP_NO_LOCK_RETRY_DELAY, level='WARN'))
+                .format(SNAP_NO_LOCK_RETRY_DELAY), level='WARN')
             sleep(SNAP_NO_LOCK_RETRY_DELAY)

     return return_code
@@ -132,3 +136,15 @@ def snap_refresh(packages, *flags):

     log(message, level='INFO')
     return _snap_exec(['refresh'] + flags + packages)
+
+
+def valid_snap_channel(channel):
+    """ Validate snap channel exists
+
+    :raises InvalidSnapChannel: When channel does not exist
+    :return: Boolean
+    """
+    if channel.lower() in SNAP_CHANNELS:
+        return True
+    else:
+        raise InvalidSnapChannel("Invalid Snap Channel: {}".format(channel))
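The new `valid_snap_channel()` helper normalizes the channel with `lower()` and either returns `True` or raises, so callers never get `False` back. A standalone sketch; the `SNAP_CHANNELS` list here is an assumption for the demo (the real constant is defined elsewhere in `charmhelpers/fetch/snap.py`):

```python
# Assumed channel list for this sketch only; not the authoritative value.
SNAP_CHANNELS = ['edge', 'beta', 'candidate', 'stable']


class InvalidSnapChannel(Exception):
    pass


def valid_snap_channel(channel):
    # lower() makes the check case-insensitive; bad input raises rather
    # than returning False, so callers can let the error propagate.
    if channel.lower() in SNAP_CHANNELS:
        return True
    raise InvalidSnapChannel("Invalid Snap Channel: {}".format(channel))


print(valid_snap_channel('Stable'))  # -> True
try:
    valid_snap_channel('nightly')
except InvalidSnapChannel as e:
    print(e)  # -> Invalid Snap Channel: nightly
```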
diff --git a/hooks/charmhelpers/fetch/ubuntu.py b/hooks/charmhelpers/fetch/ubuntu.py
index 40e1cb5..3ddaf0d 100644
--- a/hooks/charmhelpers/fetch/ubuntu.py
+++ b/hooks/charmhelpers/fetch/ubuntu.py
@@ -13,23 +13,23 @@
 # limitations under the License.

 from collections import OrderedDict
-import os
 import platform
 import re
 import six
-import time
 import subprocess
-from tempfile import NamedTemporaryFile
+import sys
+import time
+
+from charmhelpers.core.host import get_distrib_codename, get_system_env

-from charmhelpers.core.host import (
-    lsb_release
-)
 from charmhelpers.core.hookenv import (
     log,
     DEBUG,
     WARNING,
+    env_proxy_settings,
 )
 from charmhelpers.fetch import SourceConfigError, GPGKeyError
+from charmhelpers.fetch import ubuntu_apt_pkg

 PROPOSED_POCKET = (
     "# Proposed\n"
@@ -44,6 +44,7 @@ ARCH_TO_PROPOSED_POCKET = {
     'x86_64': PROPOSED_POCKET,
     'ppc64le': PROPOSED_PORTS_POCKET,
     'aarch64': PROPOSED_PORTS_POCKET,
+    's390x': PROPOSED_PORTS_POCKET,
 }
 CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
 CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
@@ -157,6 +158,38 @@ CLOUD_ARCHIVE_POCKETS = {
     'queens/proposed': 'xenial-proposed/queens',
     'xenial-queens/proposed': 'xenial-proposed/queens',
     'xenial-proposed/queens': 'xenial-proposed/queens',
+    # Rocky
+    'rocky': 'bionic-updates/rocky',
+    'bionic-rocky': 'bionic-updates/rocky',
+    'bionic-rocky/updates': 'bionic-updates/rocky',
+    'bionic-updates/rocky': 'bionic-updates/rocky',
+    'rocky/proposed': 'bionic-proposed/rocky',
+    'bionic-rocky/proposed': 'bionic-proposed/rocky',
+    'bionic-proposed/rocky': 'bionic-proposed/rocky',
+    # Stein
+    'stein': 'bionic-updates/stein',
+    'bionic-stein': 'bionic-updates/stein',
+    'bionic-stein/updates': 'bionic-updates/stein',
+    'bionic-updates/stein': 'bionic-updates/stein',
+    'stein/proposed': 'bionic-proposed/stein',
+    'bionic-stein/proposed': 'bionic-proposed/stein',
+    'bionic-proposed/stein': 'bionic-proposed/stein',
+    # Train
+    'train': 'bionic-updates/train',
+    'bionic-train': 'bionic-updates/train',
+    'bionic-train/updates': 'bionic-updates/train',
+    'bionic-updates/train': 'bionic-updates/train',
+    'train/proposed': 'bionic-proposed/train',
+    'bionic-train/proposed': 'bionic-proposed/train',
+    'bionic-proposed/train': 'bionic-proposed/train',
+    # Ussuri
+    'ussuri': 'bionic-updates/ussuri',
+    'bionic-ussuri': 'bionic-updates/ussuri',
+    'bionic-ussuri/updates': 'bionic-updates/ussuri',
+    'bionic-updates/ussuri': 'bionic-updates/ussuri',
+    'ussuri/proposed': 'bionic-proposed/ussuri',
+    'bionic-ussuri/proposed': 'bionic-proposed/ussuri',
+    'bionic-proposed/ussuri': 'bionic-proposed/ussuri',
 }

@@ -180,18 +213,54 @@ def filter_installed_packages(packages):
     return _pkgs


-def apt_cache(in_memory=True, progress=None):
-    """Build and return an apt cache."""
-    from apt import apt_pkg
-    apt_pkg.init()
-    if in_memory:
-        apt_pkg.config.set("Dir::Cache::pkgcache", "")
-        apt_pkg.config.set("Dir::Cache::srcpkgcache", "")
-    return apt_pkg.Cache(progress)
+def filter_missing_packages(packages):
+    """Return a list of packages that are installed.
+
+    :param packages: list of packages to evaluate.
+    :returns list: Packages that are installed.
+    """
+    return list(
+        set(packages) -
+        set(filter_installed_packages(packages))
+    )
+
+
+def apt_cache(*_, **__):
+    """Shim returning an object simulating the apt_pkg Cache.
+
+    :param _: Accept arguments for compability, not used.
+    :type _: any
+    :param __: Accept keyword arguments for compability, not used.
+    :type __: any
+    :returns: Object used to interrogate the system apt and dpkg databases.
+    :rtype: ubuntu_apt_pkg.Cache
+    """
+    if 'apt_pkg' in sys.modules:
+        # NOTE(fnordahl): When our consumer use the upstream ``apt_pkg`` module
+        # in conjunction with the apt_cache helper function, they may expect us
+        # to call ``apt_pkg.init()`` for them.
+        #
+        # Detect this situation, log a warning and make the call to
+        # ``apt_pkg.init()`` to avoid the consumer Python interpreter from
+        # crashing with a segmentation fault.
+        log('Support for use of upstream ``apt_pkg`` module in conjunction'
+            'with charm-helpers is deprecated since 2019-06-25', level=WARNING)
+        sys.modules['apt_pkg'].init()
+    return ubuntu_apt_pkg.Cache()

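The new `filter_missing_packages()` is the set complement of `filter_installed_packages()`: the latter returns the subset that still needs installing, so subtracting it leaves the packages already installed (useful when deciding what to purge). A sketch with the dpkg state stubbed out (the real helper queries the package database):

```python
# Pretend dpkg state for this sketch; the real helper asks apt/dpkg.
INSTALLED = {'curl', 'openssh-server'}


def filter_installed_packages(packages):
    # Subset of `packages` that is NOT yet installed.
    return [p for p in packages if p not in INSTALLED]


def filter_missing_packages(packages):
    # Complement: subset of `packages` that IS already installed.
    return list(set(packages) - set(filter_installed_packages(packages)))


print(sorted(filter_missing_packages(['curl', 'vim'])))  # -> ['curl']
```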
 def apt_install(packages, options=None, fatal=False):
-    """Install one or more packages."""
+    """Install one or more packages.
+
+    :param packages: Package(s) to install
+    :type packages: Option[str, List[str]]
+    :param options: Options to pass on to apt-get
+    :type options: Option[None, List[str]]
+    :param fatal: Whether the command's output should be checked and
+        retried.
+    :type fatal: bool
+    :raises: subprocess.CalledProcessError
+    """
     if options is None:
         options = ['--option=Dpkg::Options::=--force-confold']

@@ -208,7 +277,17 @@ def apt_install(packages, options=None, fatal=False):


 def apt_upgrade(options=None, fatal=False, dist=False):
-    """Upgrade all packages."""
+    """Upgrade all packages.
+
+    :param options: Options to pass on to apt-get
+    :type options: Option[None, List[str]]
+    :param fatal: Whether the command's output should be checked and
+        retried.
+    :type fatal: bool
+    :param dist: Whether ``dist-upgrade`` should be used over ``upgrade``
+    :type dist: bool
+    :raises: subprocess.CalledProcessError
+    """
     if options is None:
         options = ['--option=Dpkg::Options::=--force-confold']

@@ -229,7 +308,15 @@ def apt_update(fatal=False):


 def apt_purge(packages, fatal=False):
-    """Purge one or more packages."""
+    """Purge one or more packages.
+
+    :param packages: Package(s) to install
+    :type packages: Option[str, List[str]]
+    :param fatal: Whether the command's output should be checked and
+        retried.
+    :type fatal: bool
+    :raises: subprocess.CalledProcessError
+    """
     cmd = ['apt-get', '--assume-yes', 'purge']
     if isinstance(packages, six.string_types):
         cmd.append(packages)
@@ -239,6 +326,21 @@ def apt_purge(packages, fatal=False):
     _run_apt_command(cmd, fatal)


+def apt_autoremove(purge=True, fatal=False):
+    """Purge one or more packages.
+
+    :param purge: Whether the ``--purge`` option should be passed on or not.
+    :type purge: bool
+    :param fatal: Whether the command's output should be checked and
+        retried.
+    :type fatal: bool
+    :raises: subprocess.CalledProcessError
+    """
+    cmd = ['apt-get', '--assume-yes', 'autoremove']
+    if purge:
+        cmd.append('--purge')
+    _run_apt_command(cmd, fatal)
+
+
 def apt_mark(packages, mark, fatal=False):
     """Flag one or more packages using apt-mark."""
     log("Marking {} as {}".format(packages, mark))
@@ -265,13 +367,18 @@ def apt_unhold(packages, fatal=False):
 def import_key(key):
     """Import an ASCII Armor key.

-    /!\ A Radix64 format keyid is also supported for backwards
-    compatibility, but should never be used; the key retrieval
-    mechanism is insecure and subject to man-in-the-middle attacks
-    voiding all signature checks using that key.
-
-    :param keyid: The key in ASCII armor format,
-                  including BEGIN and END markers.
+    A Radix64 format keyid is also supported for backwards
+    compatibility. In this case Ubuntu keyserver will be
+    queried for a key via HTTPS by its keyid. This method
+    is less preferrable because https proxy servers may
+    require traffic decryption which is equivalent to a
+    man-in-the-middle attack (a proxy server impersonates
+    keyserver TLS certificates and has to be explicitly
+    trusted by the system).
+
+    :param key: A GPG key in ASCII armor format,
+                including BEGIN and END markers or a keyid.
+    :type key: (bytes, str)
     :raises: GPGKeyError if the key could not be imported
     """
     key = key.strip()
@@ -282,35 +389,131 @@ def import_key(key):
         log("PGP key found (looks like ASCII Armor format)", level=DEBUG)
         if ('-----BEGIN PGP PUBLIC KEY BLOCK-----' in key and
                 '-----END PGP PUBLIC KEY BLOCK-----' in key):
-            log("Importing ASCII Armor PGP key", level=DEBUG)
-            with NamedTemporaryFile() as keyfile:
-                with open(keyfile.name, 'w') as fd:
-                    fd.write(key)
-                    fd.write("\n")
-                cmd = ['apt-key', 'add', keyfile.name]
-                try:
-                    subprocess.check_call(cmd)
-                except subprocess.CalledProcessError:
-                    error = "Error importing PGP key '{}'".format(key)
-                    log(error)
-                    raise GPGKeyError(error)
+            log("Writing provided PGP key in the binary format", level=DEBUG)
+            if six.PY3:
+                key_bytes = key.encode('utf-8')
+            else:
+                key_bytes = key
+            key_name = _get_keyid_by_gpg_key(key_bytes)
+            key_gpg = _dearmor_gpg_key(key_bytes)
+            _write_apt_gpg_keyfile(key_name=key_name, key_material=key_gpg)
         else:
             raise GPGKeyError("ASCII armor markers missing from GPG key")
     else:
-        # We should only send things obviously not a keyid offsite
-        # via this unsecured protocol, as it may be a secret or part
-        # of one.
         log("PGP key found (looks like Radix64 format)", level=WARNING)
-        log("INSECURLY importing PGP key from keyserver; "
-            "full key not provided.", level=WARNING)
-        cmd = ['apt-key', 'adv', '--keyserver',
-               'hkp://keyserver.ubuntu.com:80', '--recv-keys', key]
-        try:
-            subprocess.check_call(cmd)
-        except subprocess.CalledProcessError:
-            error = "Error importing PGP key '{}'".format(key)
-            log(error)
-            raise GPGKeyError(error)
+        log("SECURELY importing PGP key from keyserver; "
+            "full key not provided.", level=WARNING)
+        # as of bionic add-apt-repository uses curl with an HTTPS keyserver URL
+        # to retrieve GPG keys. `apt-key adv` command is deprecated as is
+        # apt-key in general as noted in its manpage. See lp:1433761 for more
+        # history. Instead, /etc/apt/trusted.gpg.d is used directly to drop
+        # gpg
+        key_asc = _get_key_by_keyid(key)
+        # write the key in GPG format so that apt-key list shows it
+        key_gpg = _dearmor_gpg_key(key_asc)
+        _write_apt_gpg_keyfile(key_name=key, key_material=key_gpg)
+
+
+def _get_keyid_by_gpg_key(key_material):
+    """Get a GPG key fingerprint by GPG key material.
+
+    Gets a GPG key fingerprint (40-digit, 160-bit) by the ASCII armor-encoded
+    or binary GPG key material. Can be used, for example, to generate file
+    names for keys passed via charm options.
+
+    :param key_material: ASCII armor-encoded or binary GPG key material
+    :type key_material: bytes
+    :raises: GPGKeyError if invalid key material has been provided
+    :returns: A GPG key fingerprint
+    :rtype: str
+    """
+    # Use the same gpg command for both Xenial and Bionic
+    cmd = 'gpg --with-colons --with-fingerprint'
+    ps = subprocess.Popen(cmd.split(),
+                          stdout=subprocess.PIPE,
+                          stderr=subprocess.PIPE,
+                          stdin=subprocess.PIPE)
+    out, err = ps.communicate(input=key_material)
+    if six.PY3:
+        out = out.decode('utf-8')
+        err = err.decode('utf-8')
+    if 'gpg: no valid OpenPGP data found.' in err:
+        raise GPGKeyError('Invalid GPG key material provided')
+    # from gnupg2 docs: fpr :: Fingerprint (fingerprint is in field 10)
+    return re.search(r"^fpr:{9}([0-9A-F]{40}):$", out, re.MULTILINE).group(1)
+
+
+def _get_key_by_keyid(keyid):
+    """Get a key via HTTPS from the Ubuntu keyserver.
+
+    Different key ID formats are supported by SKS keyservers (the longer ones
+    are more secure, see "dead beef attack" and https://evil32.com/). Since
+    HTTPS is used, if SSLBump-like HTTPS proxies are in place, they will
+    impersonate keyserver.ubuntu.com and generate a certificate with
+    keyserver.ubuntu.com in the CN field or in SubjAltName fields of a
+    certificate. If such proxy behavior is expected it is necessary to add the
+    CA certificate chain containing the intermediate CA of the SSLBump proxy to
+    every machine that this code runs on via ca-certs cloud-init directive (via
+    cloudinit-userdata model-config) or via other means (such as through a
+    custom charm option). Also note that DNS resolution for the hostname in a
+    URL is done at a proxy server - not at the client side.
+
+    8-digit (32 bit) key ID
+    https://keyserver.ubuntu.com/pks/lookup?search=0x4652B4E6
+    16-digit (64 bit) key ID
+    https://keyserver.ubuntu.com/pks/lookup?search=0x6E85A86E4652B4E6
+    40-digit key ID:
+    https://keyserver.ubuntu.com/pks/lookup?search=0x35F77D63B5CEC106C577ED856E85A86E4652B4E6
+
+    :param keyid: An 8, 16 or 40 hex digit keyid to find a key for
+    :type keyid: (bytes, str)
+    :returns: A key material for the specified GPG key id
+    :rtype: (str, bytes)
+    :raises: subprocess.CalledProcessError
+    """
+    # options=mr - machine-readable output (disables html wrappers)
+    keyserver_url = ('https://keyserver.ubuntu.com'
+                     '/pks/lookup?op=get&options=mr&exact=on&search=0x{}')
+    curl_cmd = ['curl', keyserver_url.format(keyid)]
+    # use proxy server settings in order to retrieve the key
+    return subprocess.check_output(curl_cmd,
+                                   env=env_proxy_settings(['https']))
+
+
+def _dearmor_gpg_key(key_asc):
+    """Converts a GPG key in the ASCII armor format to the binary format.
+
+    :param key_asc: A GPG key in ASCII armor format.
+    :type key_asc: (str, bytes)
+    :returns: A GPG key in binary format
+    :rtype: (str, bytes)
+    :raises: GPGKeyError
+    """
+    ps = subprocess.Popen(['gpg', '--dearmor'],
+                          stdout=subprocess.PIPE,
+                          stderr=subprocess.PIPE,
+                          stdin=subprocess.PIPE)
+    out, err = ps.communicate(input=key_asc)
+    # no need to decode output as it is binary (invalid utf-8), only error
+    if six.PY3:
+        err = err.decode('utf-8')
+    if 'gpg: no valid OpenPGP data found.' in err:
+        raise GPGKeyError('Invalid GPG key material. Check your network setup'
+                          ' (MTU, routing, DNS) and/or proxy server settings'
+                          ' as well as destination keyserver status.')
+    else:
+        return out
+
+
+def _write_apt_gpg_keyfile(key_name, key_material):
+    """Writes GPG key material into a file at a provided path.
+
+    :param key_name: A key name to use for a key file (could be a fingerprint)
+    :type key_name: str
+    :param key_material: A GPG key material (binary)
+    :type key_material: (str, bytes)
+    """
+    with open('/etc/apt/trusted.gpg.d/{}.gpg'.format(key_name),
+              'wb') as keyf:
+        keyf.write(key_material)


 def add_source(source, key=None, fail_invalid=False):
@@ -385,14 +588,16 @@ def add_source(source, key=None, fail_invalid=False):
     for r, fn in six.iteritems(_mapping):
         m = re.match(r, source)
         if m:
-            # call the assoicated function with the captured groups
-            # raises SourceConfigError on error.
-            fn(*m.groups())
             if key:
+                # Import key before adding the source which depends on it,
+                # as refreshing packages could fail otherwise.
                 try:
                     import_key(key)
                 except GPGKeyError as e:
                     raise SourceConfigError(str(e))
+            # call the associated function with the captured groups
+            # raises SourceConfigError on error.
+            fn(*m.groups())
             break
     else:
         # nothing matched. log an error and maybe sys.exit
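The fingerprint extraction in `_get_keyid_by_gpg_key()` above relies on gnupg's colon-record output: the `fpr` record carries the 40-hex-digit fingerprint in field 10, which is why the regex expects exactly nine colons after `fpr`. A sketch against fabricated `gpg --with-colons` output (the fingerprint is the 40-digit example from the docstring, not a real key's):

```python
import re

# Two colon-delimited records as gpg --with-colons would emit them; only
# the `fpr` line matters here, and field 10 holds the fingerprint.
out = (
    "pub:-:4096:1:6E85A86E4652B4E6:1415141711:::-:\n"
    "fpr:::::::::35F77D63B5CEC106C577ED856E85A86E4652B4E6:\n"
)

# Same pattern as in _get_keyid_by_gpg_key(): 'fpr' + 9 empty fields,
# then 40 uppercase hex digits.
m = re.search(r"^fpr:{9}([0-9A-F]{40}):$", out, re.MULTILINE)
print(m.group(1))  # -> 35F77D63B5CEC106C577ED856E85A86E4652B4E6
```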
@@ -405,13 +610,13 @@ def _add_proposed():
 def _add_proposed():
     """Add the PROPOSED_POCKET as /etc/apt/source.list.d/proposed.list

-    Uses lsb_release()['DISTRIB_CODENAME'] to determine the correct staza for
+    Uses get_distrib_codename to determine the correct stanza for
     the deb line.

     For intel architecutres PROPOSED_POCKET is used for the release, but for
     other architectures PROPOSED_PORTS_POCKET is used for the release.
     """
-    release = lsb_release()['DISTRIB_CODENAME']
+    release = get_distrib_codename()
     arch = platform.machine()
     if arch not in six.iterkeys(ARCH_TO_PROPOSED_POCKET):
         raise SourceConfigError("Arch {} not supported for (distro-)proposed"
@@ -424,8 +629,16 @@ def _add_apt_repository(spec):
     """Add the spec using add_apt_repository

     :param spec: the parameter to pass to add_apt_repository
+    :type spec: str
     """
-    _run_with_retries(['add-apt-repository', '--yes', spec])
+    if '{series}' in spec:
+        series = get_distrib_codename()
+        spec = spec.replace('{series}', series)
+    # software-properties package for bionic properly reacts to proxy settings
+    # passed as environment variables (See lp:1433761). This is not the case
+    # LTS and non-LTS releases below bionic.
+    _run_with_retries(['add-apt-repository', '--yes', spec],
+                      cmd_env=env_proxy_settings(['https']))


 def _add_cloud_pocket(pocket):
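`_add_apt_repository()` now expands a `{series}` placeholder before shelling out, so one configured source line can serve several Ubuntu releases. A sketch with `get_distrib_codename()` stubbed to a fixed release (the real helper reads the codename from the host), and a made-up repository URL:

```python
def get_distrib_codename():
    return 'bionic'  # stub; the real helper reads the host's release

# Hypothetical deb line a charm config might carry.
spec = 'deb http://repo.example.com/ubuntu {series} main'
if '{series}' in spec:
    spec = spec.replace('{series}', get_distrib_codename())
print(spec)  # -> deb http://repo.example.com/ubuntu bionic main
```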
@@ -494,7 +707,7 @@ def _verify_is_ubuntu_rel(release, os_release):
     :raises: SourceConfigError if the release is not the same as the ubuntu
         release.
     """
-    ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
+    ubuntu_rel = get_distrib_codename()
     if release != ubuntu_rel:
         raise SourceConfigError(
             'Invalid Cloud Archive release specified: {}-{} on this Ubuntu'
@@ -505,21 +718,22 @@ def _run_with_retries(cmd, max_retries=CMD_RETRY_COUNT, retry_exitcodes=(1,),
                       retry_message="", cmd_env=None):
     """Run a command and retry until success or max_retries is reached.

-    :param: cmd: str: The apt command to run.
-    :param: max_retries: int: The number of retries to attempt on a fatal
-        command. Defaults to CMD_RETRY_COUNT.
-    :param: retry_exitcodes: tuple: Optional additional exit codes to retry.
-        Defaults to retry on exit code 1.
-    :param: retry_message: str: Optional log prefix emitted during retries.
-    :param: cmd_env: dict: Environment variables to add to the command run.
+    :param cmd: The apt command to run.
+    :type cmd: str
+    :param max_retries: The number of retries to attempt on a fatal
+        command. Defaults to CMD_RETRY_COUNT.
+    :type max_retries: int
+    :param retry_exitcodes: Optional additional exit codes to retry.
+        Defaults to retry on exit code 1.
+    :type retry_exitcodes: tuple
+    :param retry_message: Optional log prefix emitted during retries.
+    :type retry_message: str
+    :param: cmd_env: Environment variables to add to the command run.
+    :type cmd_env: Option[None, Dict[str, str]]
     """
-
-    env = None
-    kwargs = {}
+    env = get_apt_dpkg_env()
     if cmd_env:
-        env = os.environ.copy()
-        env.update(cmd_env)
-        kwargs['env'] = env
+        env.update(cmd_env)

     if not retry_message:
         retry_message = "Failed executing '{}'".format(" ".join(cmd))
@@ -531,8 +745,7 @@ def _run_with_retries(cmd, max_retries=CMD_RETRY_COUNT, retry_exitcodes=(1,),
     retry_results = (None,) + retry_exitcodes
     while result in retry_results:
         try:
-            # result = subprocess.check_call(cmd, env=env)
-            result = subprocess.check_call(cmd, **kwargs)
+            result = subprocess.check_call(cmd, env=env)
         except subprocess.CalledProcessError as e:
537 retry_count = retry_count + 1750 retry_count = retry_count + 1
538 if retry_count > max_retries:751 if retry_count > max_retries:
@@ -545,22 +758,18 @@ def _run_with_retries(cmd, max_retries=CMD_RETRY_COUNT, retry_exitcodes=(1,),
545def _run_apt_command(cmd, fatal=False):758def _run_apt_command(cmd, fatal=False):
546 """Run an apt command with optional retries.759 """Run an apt command with optional retries.
547760
548 :param: cmd: str: The apt command to run.761 :param cmd: The apt command to run.
549 :param: fatal: bool: Whether the command's output should be checked and762 :type cmd: str
550 retried.763 :param fatal: Whether the command's output should be checked and
764 retried.
765 :type fatal: bool
551 """766 """
552 # Provide DEBIAN_FRONTEND=noninteractive if not present in the environment.
553 cmd_env = {
554 'DEBIAN_FRONTEND': os.environ.get('DEBIAN_FRONTEND', 'noninteractive')}
555
556 if fatal:767 if fatal:
557 _run_with_retries(768 _run_with_retries(
558 cmd, cmd_env=cmd_env, retry_exitcodes=(1, APT_NO_LOCK,),769 cmd, retry_exitcodes=(1, APT_NO_LOCK,),
559 retry_message="Couldn't acquire DPKG lock")770 retry_message="Couldn't acquire DPKG lock")
560 else:771 else:
561 env = os.environ.copy()772 subprocess.call(cmd, env=get_apt_dpkg_env())
562 env.update(cmd_env)
563 subprocess.call(cmd, env=env)
564773
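The retry behaviour shared by `_run_with_retries` and `_run_apt_command` can be sketched with an injected runner. `run_with_retries` and its `runner` parameter are illustrative stand-ins: the real code calls `subprocess.check_call` and sleeps `CMD_RETRY_DELAY` seconds between attempts, which is omitted here:

```python
def run_with_retries(cmd, runner, max_retries=3, retry_exitcodes=(1,)):
    """Retry cmd while runner reports a retryable exit code.

    runner is a stand-in for subprocess.check_call so the loop can be
    exercised without shelling out; it returns the command's exit code.
    The real helper also sleeps between attempts.
    """
    retry_count = 0
    result = None
    # None seeds the first iteration, as in the diff above.
    retry_results = (None,) + tuple(retry_exitcodes)
    while result in retry_results:
        result = runner(cmd)
        if result in retry_exitcodes:
            retry_count += 1
            if retry_count > max_retries:
                raise RuntimeError('max retries exceeded for %r' % (cmd,))
    return result

# A runner that fails twice with the retryable code, then succeeds.
codes = iter([1, 1, 0])
print(run_with_retries(['apt-get', 'update'], lambda cmd: next(codes)))
# -> 0
```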
565774
566def get_upstream_version(package):775def get_upstream_version(package):
@@ -568,11 +777,10 @@ def get_upstream_version(package):
568777
569 @returns None (if not installed) or the upstream version778 @returns None (if not installed) or the upstream version
570 """779 """
571 import apt_pkg
572 cache = apt_cache()780 cache = apt_cache()
573 try:781 try:
574 pkg = cache[package]782 pkg = cache[package]
575 except:783 except Exception:
576 # the package is unknown to the current apt cache.784 # the package is unknown to the current apt cache.
577 return None785 return None
578786
@@ -580,4 +788,18 @@ def get_upstream_version(package):
580 # package is known, but no version is currently installed.788 # package is known, but no version is currently installed.
581 return None789 return None
582790
583 return apt_pkg.upstream_version(pkg.current_ver.ver_str)791 return ubuntu_apt_pkg.upstream_version(pkg.current_ver.ver_str)
792
793
794def get_apt_dpkg_env():
795 """Get environment suitable for execution of APT and DPKG tools.
796
797 We keep this in a helper function instead of in a global constant to
798 avoid execution on import of the library.
799 :returns: Environment suitable for execution of APT and DPKG tools.
800 :rtype: Dict[str, str]
801 """
802 # The fallback is used in the event of ``/etc/environment`` not containing
 803 # a valid PATH variable.
804 return {'DEBIAN_FRONTEND': 'noninteractive',
805 'PATH': get_system_env('PATH', '/usr/sbin:/usr/bin:/sbin:/bin')}
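A minimal sketch of how the `get_apt_dpkg_env` result is meant to be merged with a caller-supplied `cmd_env`, as `_run_with_retries` does above. The `system_path` parameter and the proxy value are made up for illustration; the real helper reads the fallback via `get_system_env`:

```python
def get_apt_dpkg_env(system_path=None):
    """Minimal environment for APT and DPKG tools.

    Deliberately not a copy of os.environ; system_path stands in for
    get_system_env('PATH', ...) with the same fallback as the diff above.
    """
    return {'DEBIAN_FRONTEND': 'noninteractive',
            'PATH': system_path or '/usr/sbin:/usr/bin:/sbin:/bin'}

env = get_apt_dpkg_env()
# Callers layer extra variables on top, e.g. proxy settings (made-up value).
env.update({'https_proxy': 'http://squid.internal:3128'})
print(sorted(env))
```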
diff --git a/hooks/charmhelpers/fetch/ubuntu_apt_pkg.py b/hooks/charmhelpers/fetch/ubuntu_apt_pkg.py
584new file mode 100644806new file mode 100644
index 0000000..929a75d
--- /dev/null
+++ b/hooks/charmhelpers/fetch/ubuntu_apt_pkg.py
@@ -0,0 +1,267 @@
1# Copyright 2019 Canonical Ltd
2#
3# Licensed under the Apache License, Version 2.0 (the "License");
4# you may not use this file except in compliance with the License.
5# You may obtain a copy of the License at
6#
7# http://www.apache.org/licenses/LICENSE-2.0
8#
9# Unless required by applicable law or agreed to in writing, software
10# distributed under the License is distributed on an "AS IS" BASIS,
11# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12# See the License for the specific language governing permissions and
13# limitations under the License.
14
15"""Provide a subset of the ``python-apt`` module API.
16
17Data collection is done through subprocess calls to ``apt-cache`` and
18``dpkg-query`` commands.
19
20The main purpose for this module is to avoid dependency on the
21``python-apt`` python module.
22
23The indicated python module is a wrapper around the ``apt`` C++ library
24which is tightly connected to the version of the distribution it was
25shipped on. It is not developed in a backward/forward compatible manner.
26
27This in turn makes it incredibly hard to distribute as a wheel for a piece
28of python software that supports a span of distro releases [0][1].
29
30Upstream feedback like [2] does not give confidence that this will ever
31change, so with this module we drop the dependency.
32
330: https://github.com/juju-solutions/layer-basic/pull/135
341: https://bugs.launchpad.net/charm-octavia/+bug/1824112
352: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=845330#10
36"""
37
38import locale
39import os
40import subprocess
41import sys
42
43
44class _container(dict):
45 """Simple container for attributes."""
46 __getattr__ = dict.__getitem__
47 __setattr__ = dict.__setitem__
48
49
50class Package(_container):
51 """Simple container for package attributes."""
52
53
54class Version(_container):
55 """Simple container for version attributes."""
56
57
58class Cache(object):
59 """Simulation of ``apt_pkg`` Cache object."""
60 def __init__(self, progress=None):
61 pass
62
63 def __contains__(self, package):
64 try:
65 pkg = self.__getitem__(package)
66 return pkg is not None
67 except KeyError:
68 return False
69
70 def __getitem__(self, package):
71 """Get information about a package from apt and dpkg databases.
72
73 :param package: Name of package
74 :type package: str
75 :returns: Package object
76 :rtype: object
77 :raises: KeyError, subprocess.CalledProcessError
78 """
79 apt_result = self._apt_cache_show([package])[package]
80 apt_result['name'] = apt_result.pop('package')
81 pkg = Package(apt_result)
82 dpkg_result = self._dpkg_list([package]).get(package, {})
83 current_ver = None
84 installed_version = dpkg_result.get('version')
85 if installed_version:
86 current_ver = Version({'ver_str': installed_version})
87 pkg.current_ver = current_ver
88 pkg.architecture = dpkg_result.get('architecture')
89 return pkg
90
91 def _dpkg_list(self, packages):
92 """Get data from system dpkg database for package.
93
94 :param packages: Packages to get data from
95 :type packages: List[str]
96 :returns: Structured data about installed packages, keys like
97 ``dpkg-query --list``
98 :rtype: dict
99 :raises: subprocess.CalledProcessError
100 """
101 pkgs = {}
102 cmd = ['dpkg-query', '--list']
103 cmd.extend(packages)
104 if locale.getlocale() == (None, None):
105 # subprocess calls out to locale.getpreferredencoding(False) to
106 # determine encoding. Workaround for Trusty where the
107 # environment appears to not be set up correctly.
108 locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')
109 try:
110 output = subprocess.check_output(cmd,
111 stderr=subprocess.STDOUT,
112 universal_newlines=True)
113 except subprocess.CalledProcessError as cp:
114 # ``dpkg-query`` may return error and at the same time have
115 # produced useful output, for example when asked for multiple
116 # packages where some are not installed
117 if cp.returncode != 1:
118 raise
119 output = cp.output
120 headings = []
121 for line in output.splitlines():
122 if line.startswith('||/'):
123 headings = line.split()
124 headings.pop(0)
125 continue
126 elif (line.startswith('|') or line.startswith('+') or
127 line.startswith('dpkg-query:')):
128 continue
129 else:
130 data = line.split(None, 4)
131 status = data.pop(0)
132 if status != 'ii':
133 continue
134 pkg = {}
135 pkg.update({k.lower(): v for k, v in zip(headings, data)})
136 if 'name' in pkg:
137 pkgs.update({pkg['name']: pkg})
138 return pkgs
139
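The `dpkg-query --list` parsing in `_dpkg_list` can be checked against canned output. `parse_dpkg_list` is a standalone sketch of the same loop; the sample text follows the `dpkg-query` layout (header rows, then one `ii` row per installed package):

```python
def parse_dpkg_list(output):
    """Parse `dpkg-query --list` output into {name: info} for installed
    ('ii') packages, mirroring Cache._dpkg_list in the diff above."""
    pkgs = {}
    headings = []
    for line in output.splitlines():
        if line.startswith('||/'):
            headings = line.split()
            headings.pop(0)  # drop the '||/' status-column marker
            continue
        if line.startswith(('|', '+', 'dpkg-query:')) or not line.strip():
            continue
        data = line.split(None, 4)
        status = data.pop(0)
        if status != 'ii':  # also skips the 'Desired=...' legend row
            continue
        pkg = {k.lower(): v for k, v in zip(headings, data)}
        if 'name' in pkg:
            pkgs[pkg['name']] = pkg
    return pkgs

SAMPLE = """\
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-====-=======-============-===========
ii  dpkg 1.19.7  amd64        Debian package management system
"""
print(parse_dpkg_list(SAMPLE)['dpkg']['version'])
# -> 1.19.7
```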
140 def _apt_cache_show(self, packages):
141 """Get data from system apt cache for package.
142
143 :param packages: Packages to get data from
144 :type packages: List[str]
145 :returns: Structured data about package, keys like
146 ``apt-cache show``
147 :rtype: dict
148 :raises: subprocess.CalledProcessError
149 """
150 pkgs = {}
151 cmd = ['apt-cache', 'show', '--no-all-versions']
152 cmd.extend(packages)
153 if locale.getlocale() == (None, None):
154 # subprocess calls out to locale.getpreferredencoding(False) to
155 # determine encoding. Workaround for Trusty where the
156 # environment appears to not be set up correctly.
157 locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')
158 try:
159 output = subprocess.check_output(cmd,
160 stderr=subprocess.STDOUT,
161 universal_newlines=True)
162 previous = None
163 pkg = {}
164 for line in output.splitlines():
165 if not line:
166 if 'package' in pkg:
167 pkgs.update({pkg['package']: pkg})
168 pkg = {}
169 continue
170 if line.startswith(' '):
171 if previous and previous in pkg:
172 pkg[previous] += os.linesep + line.lstrip()
173 continue
174 if ':' in line:
175 kv = line.split(':', 1)
176 key = kv[0].lower()
177 if key == 'n':
178 continue
179 previous = key
180 pkg.update({key: kv[1].lstrip()})
181 except subprocess.CalledProcessError as cp:
182 # ``apt-cache`` returns 100 if none of the packages asked for
183 # exist in the apt cache.
184 if cp.returncode != 100:
185 raise
186 return pkgs
187
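The record parsing in `_apt_cache_show` can likewise be sketched standalone: blank lines terminate a record, indented lines continue the previous field, and `N:` translation notes are skipped. `parse_apt_cache_show` and the sample record are illustrative:

```python
import os


def parse_apt_cache_show(output):
    """Parse `apt-cache show` records into {package: fields}, mirroring
    Cache._apt_cache_show in the diff above."""
    pkgs = {}
    pkg = {}
    previous = None
    for line in output.splitlines():
        if not line:
            if 'package' in pkg:
                pkgs[pkg['package']] = pkg
            pkg = {}
            continue
        if line.startswith(' '):
            # Continuation line: fold into the previous field.
            if previous and previous in pkg:
                pkg[previous] += os.linesep + line.lstrip()
            continue
        if ':' in line:
            key, value = line.split(':', 1)
            key = key.lower()
            if key == 'n':  # translation note, not a package field
                continue
            previous = key
            pkg[key] = value.lstrip()
    if 'package' in pkg:  # flush a record not ended by a blank line
        pkgs[pkg['package']] = pkg
    return pkgs

SAMPLE = ("Package: nagios3\n"
          "Version: 3.5.1.dfsg-2.1ubuntu4\n"
          "Description: host/service/network monitoring system\n"
          " A flexible monitoring system.\n")
print(parse_apt_cache_show(SAMPLE)['nagios3']['version'])
# -> 3.5.1.dfsg-2.1ubuntu4
```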
188
189class Config(_container):
190 def __init__(self):
191 super(Config, self).__init__(self._populate())
192
193 def _populate(self):
194 cfgs = {}
195 cmd = ['apt-config', 'dump']
196 output = subprocess.check_output(cmd,
197 stderr=subprocess.STDOUT,
198 universal_newlines=True)
199 for line in output.splitlines():
200 if not line.startswith("CommandLine"):
201 k, v = line.split(" ", 1)
202 cfgs[k] = v.strip(";").strip("\"")
203
204 return cfgs
205
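`Config._populate` turns `apt-config dump` output into a flat dict; a standalone sketch of that parsing, with a made-up sample of the dump format:

```python
def parse_apt_config_dump(output):
    """Parse `apt-config dump` output into a dict, mirroring
    Config._populate in the diff above."""
    cfgs = {}
    for line in output.splitlines():
        if not line.startswith("CommandLine"):
            # Each line is: Key "value";
            k, v = line.split(" ", 1)
            cfgs[k] = v.strip(";").strip('"')
    return cfgs

SAMPLE = ('APT "";\n'
          'APT::Architecture "amd64";\n'
          'Dir::Etc::main "apt.conf";\n')
print(parse_apt_config_dump(SAMPLE)['APT::Architecture'])
# -> amd64
```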
206
207# Backwards compatibility with old apt_pkg module
208sys.modules[__name__].config = Config()
209
210
211def init():
 212 """Compatibility shim that does nothing."""
213 pass
214
215
216def upstream_version(version):
 217 """Extract the upstream version from a version string.
218
219 Upstream reference: https://salsa.debian.org/apt-team/apt/blob/master/
220 apt-pkg/deb/debversion.cc#L259
221
222 :param version: Version string
223 :type version: str
224 :returns: Upstream version
225 :rtype: str
226 """
227 if version:
228 version = version.split(':')[-1]
229 version = version.split('-')[0]
230 return version
231
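The behaviour of `upstream_version` on typical Debian version strings, reproduced as a sketch. Note that it splits at the first hyphen, whereas dpkg strips only the part after the last hyphen; for the common case the results agree:

```python
def upstream_version(version):
    """Strip the epoch and Debian revision from a version string,
    as ubuntu_apt_pkg.upstream_version does in the diff above."""
    if version:
        version = version.split(':')[-1]  # drop epoch, e.g. '1:'
        version = version.split('-')[0]   # drop revision, e.g. '-1ubuntu1'
    return version

print(upstream_version('1:2.25.1-1ubuntu1'))
# -> 2.25.1
```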
232
233def version_compare(a, b):
234 """Compare the given versions.
235
236 Call out to ``dpkg`` to make sure the code doing the comparison is
237 compatible with what the ``apt`` library would do. Mimic the return
238 values.
239
240 Upstream reference:
241 https://apt-team.pages.debian.net/python-apt/library/apt_pkg.html
242 ?highlight=version_compare#apt_pkg.version_compare
243
244 :param a: version string
245 :type a: str
246 :param b: version string
247 :type b: str
248 :returns: >0 if ``a`` is greater than ``b``, 0 if a equals b,
249 <0 if ``a`` is smaller than ``b``
250 :rtype: int
251 :raises: subprocess.CalledProcessError, RuntimeError
252 """
253 for op in ('gt', 1), ('eq', 0), ('lt', -1):
254 try:
255 subprocess.check_call(['dpkg', '--compare-versions',
256 a, op[0], b],
257 stderr=subprocess.STDOUT,
258 universal_newlines=True)
259 return op[1]
260 except subprocess.CalledProcessError as cp:
261 if cp.returncode == 1:
262 continue
263 raise
264 else:
 265 raise RuntimeError('Unable to compare "{}" and "{}", according to '
 266 'our logic they are neither greater, equal nor '
 267 'less than each other.'.format(a, b))
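`version_compare` delegates the ordering to `dpkg --compare-versions` and maps the three outcomes to apt's -1/0/1 convention. A sketch with an injectable `check_call` (a stand-in for `subprocess.check_call`, so the mapping can be exercised without dpkg; `fake_dpkg` uses plain string comparison, which suffices for these inputs):

```python
import subprocess


def version_compare(a, b, check_call=None):
    """Mimic apt_pkg.version_compare by asking dpkg, as in the diff above.

    check_call should raise CalledProcessError with returncode 1 when
    the requested comparison is false, like dpkg --compare-versions.
    """
    check_call = check_call or subprocess.check_call
    for op, result in (('gt', 1), ('eq', 0), ('lt', -1)):
        try:
            check_call(['dpkg', '--compare-versions', a, op, b])
            return result
        except subprocess.CalledProcessError as cp:
            if cp.returncode == 1:
                continue
            raise
    raise RuntimeError('cannot order {!r} and {!r}'.format(a, b))


def fake_dpkg(cmd):
    # String comparison stands in for real dpkg version ordering here.
    a, op, b = cmd[2], cmd[3], cmd[4]
    ok = {'gt': a > b, 'eq': a == b, 'lt': a < b}[op]
    if not ok:
        raise subprocess.CalledProcessError(1, cmd)

print(version_compare('2.0', '1.0', check_call=fake_dpkg))
# -> 1
```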
diff --git a/hooks/charmhelpers/osplatform.py b/hooks/charmhelpers/osplatform.py
index d9a4d5c..78c81af 100644
--- a/hooks/charmhelpers/osplatform.py
+++ b/hooks/charmhelpers/osplatform.py
@@ -1,4 +1,5 @@
1import platform1import platform
2import os
23
34
4def get_platform():5def get_platform():
@@ -9,9 +10,13 @@ def get_platform():
9 This string is used to decide which platform module should be imported.10 This string is used to decide which platform module should be imported.
10 """11 """
11 # linux_distribution is deprecated and will be removed in Python 3.712 # linux_distribution is deprecated and will be removed in Python 3.7
12 # Warings *not* disabled, as we certainly need to fix this.13 # Warnings *not* disabled, as we certainly need to fix this.
13 tuple_platform = platform.linux_distribution()14 if hasattr(platform, 'linux_distribution'):
14 current_platform = tuple_platform[0]15 tuple_platform = platform.linux_distribution()
16 current_platform = tuple_platform[0]
17 else:
18 current_platform = _get_platform_from_fs()
19
15 if "Ubuntu" in current_platform:20 if "Ubuntu" in current_platform:
16 return "ubuntu"21 return "ubuntu"
17 elif "CentOS" in current_platform:22 elif "CentOS" in current_platform:
@@ -20,6 +25,22 @@ def get_platform():
20 # Stock Python does not detect Ubuntu and instead returns debian.25 # Stock Python does not detect Ubuntu and instead returns debian.
21 # Or at least it does in some build environments like Travis CI26 # Or at least it does in some build environments like Travis CI
22 return "ubuntu"27 return "ubuntu"
28 elif "elementary" in current_platform:
29 # ElementaryOS fails to run tests locally without this.
30 return "ubuntu"
23 else:31 else:
24 raise RuntimeError("This module is not supported on {}."32 raise RuntimeError("This module is not supported on {}."
25 .format(current_platform))33 .format(current_platform))
34
35
36def _get_platform_from_fs():
37 """Get Platform from /etc/os-release."""
38 with open(os.path.join(os.sep, 'etc', 'os-release')) as fin:
39 content = dict(
40 line.split('=', 1)
41 for line in fin.read().splitlines()
42 if '=' in line
43 )
44 for k, v in content.items():
45 content[k] = v.strip('"')
46 return content["NAME"]
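The `/etc/os-release` fallback added in `_get_platform_from_fs` boils down to parsing `KEY=value` lines and unquoting values. A standalone sketch with a made-up sample file body:

```python
def platform_name_from_os_release(text):
    """Extract NAME from /etc/os-release content, mirroring
    _get_platform_from_fs in the diff above."""
    content = dict(
        line.split('=', 1)
        for line in text.splitlines()
        if '=' in line
    )
    # Values in os-release may be double-quoted.
    return content["NAME"].strip('"')

SAMPLE = 'NAME="Ubuntu"\nVERSION="18.04.4 LTS (Bionic Beaver)"\nID=ubuntu\n'
print(platform_name_from_os_release(SAMPLE))
# -> Ubuntu
```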
diff --git a/hooks/common.py b/hooks/common.py
index 66d41ec..c2280a3 100644
--- a/hooks/common.py
+++ b/hooks/common.py
@@ -43,6 +43,12 @@ def check_ip(n):
43 return False43 return False
4444
4545
46def ingress_address(relation_data):
47 if 'ingress-address' in relation_data:
48 return relation_data['ingress-address']
49 return relation_data['private-address']
50
51
46def get_local_ingress_address(binding='website'):52def get_local_ingress_address(binding='website'):
47 # using network-get to retrieve the address details if available.53 # using network-get to retrieve the address details if available.
48 log('Getting hostname for binding %s' % binding)54 log('Getting hostname for binding %s' % binding)
@@ -342,21 +348,6 @@ def apply_host_policy(target_id, owner_unit, owner_relation):
342 ssh_service.save()348 ssh_service.save()
343349
344350
345def get_valid_relations():
346 for x in subprocess.Popen(['relation-ids', 'monitors'],
347 stdout=subprocess.PIPE).stdout:
348 yield x.strip()
349 for x in subprocess.Popen(['relation-ids', 'nagios'],
350 stdout=subprocess.PIPE).stdout:
351 yield x.strip()
352
353
354def get_valid_units(relation_id):
355 for x in subprocess.Popen(['relation-list', '-r', relation_id],
356 stdout=subprocess.PIPE).stdout:
357 yield x.strip()
358
359
360def _replace_in_config(find_me, replacement):351def _replace_in_config(find_me, replacement):
361 with open(INPROGRESS_CFG) as cf:352 with open(INPROGRESS_CFG) as cf:
362 with tempfile.NamedTemporaryFile(dir=INPROGRESS_DIR, delete=False) as new_cf:353 with tempfile.NamedTemporaryFile(dir=INPROGRESS_DIR, delete=False) as new_cf:
diff --git a/hooks/install b/hooks/install
index f002e46..a8900a3 100755
--- a/hooks/install
+++ b/hooks/install
@@ -29,7 +29,7 @@ echo nagios3-cgi nagios3/adminpassword password $PASSWORD | debconf-set-selectio
29echo nagios3-cgi nagios3/adminpassword-repeat password $PASSWORD | debconf-set-selections29echo nagios3-cgi nagios3/adminpassword-repeat password $PASSWORD | debconf-set-selections
3030
31DEBIAN_FRONTEND=noninteractive apt-get -qy \31DEBIAN_FRONTEND=noninteractive apt-get -qy \
32 install nagios3 nagios-plugins python-cheetah python-jinja2 dnsutils debconf-utils nagios-nrpe-plugin pynag python-apt python-yaml32 install nagios3 nagios-plugins python-cheetah python-jinja2 dnsutils debconf-utils nagios-nrpe-plugin pynag python-apt python-yaml python-enum34
3333
34scripts/postfix_loopback_only.sh34scripts/postfix_loopback_only.sh
3535
diff --git a/hooks/monitors-relation-changed b/hooks/monitors-relation-changed
index 13cb96c..e16589d 100755
--- a/hooks/monitors-relation-changed
+++ b/hooks/monitors-relation-changed
@@ -18,17 +18,77 @@
1818
19import sys19import sys
20import os20import os
21import subprocess
22import yaml21import yaml
23import json
24import re22import re
2523from collections import defaultdict
2624
27from common import (customize_service, get_pynag_host,25from charmhelpers.core.hookenv import (
28 get_pynag_service, refresh_hostgroups,26 relation_get,
29 get_valid_relations, get_valid_units,27 ingress_address,
30 initialize_inprogress_config, flush_inprogress_config,28 related_units,
31 get_local_ingress_address)29 relation_ids,
30 log,
31 DEBUG
32)
33
34from common import (
35 customize_service,
36 get_pynag_host,
37 get_pynag_service,
38 refresh_hostgroups,
39 initialize_inprogress_config,
40 flush_inprogress_config
41)
42
43
44REQUIRED_REL_DATA_KEYS = [
45 'target-address',
46 'monitors',
47 'target-id',
48]
49
50
51def _prepare_relation_data(unit, rid):
52 relation_data = relation_get(unit=unit, rid=rid)
53
54 if not relation_data:
55 msg = (
56 'no relation data found for unit {} in relation {} - '
57 'skipping'.format(unit, rid)
58 )
59 log(msg, level=DEBUG)
60 return {}
61
62 if rid.split(':')[0] == 'nagios':
63 # Fake it for the more generic 'nagios' relation'
64 relation_data['target-id'] = unit.replace('/', '-')
65 relation_data['monitors'] = {'monitors': {'remote': {}}}
66
67 if not relation_data.get('target-address'):
68 relation_data['target-address'] = ingress_address(unit=unit, rid=rid)
69
70 for key in REQUIRED_REL_DATA_KEYS:
71 if not relation_data.get(key):
72 msg = (
73 '{} not found for unit {} in relation {} - '
74 'skipping'.format(key, unit, rid)
75 )
76 log(msg, level=DEBUG)
77 return {}
78
79 return relation_data
80
81
82def _collect_relation_data():
83 all_relations = defaultdict(dict)
84 for relname in ['nagios', 'monitors']:
85 for relid in relation_ids(relname):
86 for unit in related_units(relid):
87 relation_data = _prepare_relation_data(unit=unit, rid=relid)
88 if relation_data:
89 all_relations[relid][unit] = relation_data
90
91 return all_relations
3292
3393
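The validation flow of `_prepare_relation_data` above can be sketched with the Juju hook tools replaced by plain arguments: `relation_data` stands in for the `relation_get` result and `fallback_address` for `ingress_address`. The function name and unit/relation names below are illustrative:

```python
REQUIRED_REL_DATA_KEYS = [
    'target-address',
    'monitors',
    'target-id',
]


def prepare_relation_data(unit, rid, relation_data, fallback_address):
    """Validation logic of _prepare_relation_data, hook tools injected."""
    if not relation_data:
        return {}  # no relation data found - skip this unit

    if rid.split(':')[0] == 'nagios':
        # Fake it for the more generic 'nagios' relation
        relation_data['target-id'] = unit.replace('/', '-')
        relation_data['monitors'] = {'monitors': {'remote': {}}}

    if not relation_data.get('target-address'):
        relation_data['target-address'] = fallback_address

    if any(not relation_data.get(k) for k in REQUIRED_REL_DATA_KEYS):
        return {}  # a required key is missing - skip this unit

    return relation_data

print(prepare_relation_data('mysql/0', 'nagios:1', {}, '10.0.0.1'))
# -> {}
d = prepare_relation_data('mysql/0', 'nagios:1', {'x': 1}, '10.0.0.1')
print(d['target-id'])
# -> mysql-0
```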
34def main(argv):94def main(argv):
@@ -43,35 +103,7 @@ def main(argv):
43 relation_settings['target-address'] = argv[3]103 relation_settings['target-address'] = argv[3]
44 all_relations = {'monitors:99': {'testing/0': relation_settings}}104 all_relations = {'monitors:99': {'testing/0': relation_settings}}
45 else:105 else:
46 all_relations = {}106 all_relations = _collect_relation_data()
47 for relid in get_valid_relations():
48 (relname, relnum) = relid.split(':')
49 for unit in get_valid_units(relid):
50 relation_settings = json.loads(
51 subprocess.check_output(['relation-get', '--format=json',
52 '-r', relid,
53 '-',unit]).strip())
54
55 if relation_settings is None or relation_settings == '':
56 continue
57
58 if relname == 'monitors':
59 if ('monitors' not in relation_settings
60 or 'target-id' not in relation_settings):
61 continue
62 if ('target-id' in relation_settings and 'target-address' not in relation_settings):
63 relation_settings['target-address'] = get_local_ingress_address('monitors')
64
65 else:
66 # Fake it for the more generic 'nagios' relation'
67 relation_settings['target-id'] = unit.replace('/','-')
68 relation_settings['target-address'] = get_local_ingress_address('monitors')
69 relation_settings['monitors'] = {'monitors': {'remote': {} } }
70
71 if relid not in all_relations:
72 all_relations[relid] = {}
73
74 all_relations[relid][unit] = relation_settings
75107
76 # Hack to work around http://pad.lv/1025478108 # Hack to work around http://pad.lv/1025478
77 targets_with_addresses = set()109 targets_with_addresses = set()

Subscribers

People subscribed via source and target branches