Merge lp:~axino/charms/trusty/ubuntu-repository-cache/xenial-ready into lp:charms/trusty/ubuntu-repository-cache

Proposed by Junien F
Status: Merged
Merged at revision: 212
Proposed branch: lp:~axino/charms/trusty/ubuntu-repository-cache/xenial-ready
Merge into: lp:charms/trusty/ubuntu-repository-cache
Diff against target: 3038 lines (+1893/-292)
23 files modified
hooks/hooks.py (+4/-4)
lib/charmhelpers/contrib/charmsupport/nrpe.py (+52/-14)
lib/charmhelpers/contrib/storage/linux/ceph.py (+864/-61)
lib/charmhelpers/contrib/storage/linux/loopback.py (+10/-0)
lib/charmhelpers/contrib/storage/linux/utils.py (+8/-7)
lib/charmhelpers/core/hookenv.py (+220/-13)
lib/charmhelpers/core/host.py (+349/-79)
lib/charmhelpers/core/hugepage.py (+71/-0)
lib/charmhelpers/core/kernel.py (+68/-0)
lib/charmhelpers/core/services/helpers.py (+30/-5)
lib/charmhelpers/core/strutils.py (+30/-0)
lib/charmhelpers/core/templating.py (+21/-8)
lib/charmhelpers/core/unitdata.py (+61/-17)
lib/charmhelpers/fetch/__init__.py (+26/-2)
lib/charmhelpers/fetch/archiveurl.py (+1/-1)
lib/charmhelpers/fetch/bzrurl.py (+22/-32)
lib/charmhelpers/fetch/giturl.py (+20/-23)
lib/ubuntu_repository_cache/apache.py (+1/-1)
lib/ubuntu_repository_cache/mirror.py (+10/-7)
lib/ubuntu_repository_cache/service.py (+5/-5)
lib/ubuntu_repository_cache/squid.py (+17/-10)
lib/ubuntu_repository_cache/storage.py (+2/-2)
lib/ubuntu_repository_cache/util.py (+1/-1)
To merge this branch: bzr merge lp:~axino/charms/trusty/ubuntu-repository-cache/xenial-ready
Reviewer Review Type Date Requested Status
Review Queue (community) automated testing Needs Fixing
Robert C Jennings (community) Approve
Stuart Bishop (community) Approve
Review via email: mp+298880@code.launchpad.net

Description of the change

Make the charm ready for Xenial and Python 3.
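A recurring pattern in a Python 3 port like this one is that subprocess output arrives as bytes rather than str, which is why the charmhelpers changes in the diff below decode command output (e.g. `.decode('UTF-8')`) before parsing it. A minimal sketch of the pitfall, using a stand-in `echo` command purely for illustration:

```python
import subprocess

# Under Python 3, check_output() returns bytes, not str.
out = subprocess.check_output(['echo', 'trusty xenial'])
assert isinstance(out, bytes)

# Code that parses the output must decode it first, as the
# charmhelpers changes in this branch do with .decode('UTF-8').
names = out.decode('UTF-8').split()
assert names == ['trusty', 'xenial']
```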

Junien F (axino) wrote :

Note that ideally, this should go in lp:charms/xenial/ubuntu-repository-cache

Stuart Bishop (stub) wrote :

All good, apart from a minor nit mentioned inline that doesn't really matter.

Stuart Bishop (stub) wrote :

This branch supports both Xenial and Trusty. It should probably be declared as multiseries in metadata.yaml, and moved to a new series-independent home like https://launchpad.net/ubuntu-repository-cache or https://launchpad.net/ubuntu-repository-cache-charm.

Where it ends up is the maintainer's choice (Robert Jennings is listed, but perhaps this should be changed to a team since you are doing this fix). They will then need to publish the charm to the charm store and ask the ecosystem team to promulgate it to cs:ubuntu-repository-cache (ingestion has been replaced by maintainers publishing directly).
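Declaring the branch multiseries is a small metadata.yaml change; a hedged sketch of what the declaration might look like (field values here are assumptions based on the charm name, not taken from this branch):

```yaml
# Sketch only: the rest of metadata.yaml (summary, description,
# maintainer, etc.) is omitted here.
name: ubuntu-repository-cache
series:
  - trusty
  - xenial
```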

Stuart Bishop (stub) wrote :

Approve, pending test run.

review: Approve
Robert C Jennings (rcj) wrote :

@axino, thanks for doing this work to enable the charm for xenial.

@stub, we'll change the maintainer to ~cloudware and work on where this code needs to live.

review: Approve
Robert C Jennings (rcj) wrote :

stub, can this be merged soon so that we can move on to MP#299472, which will add the multiseries metadata that makes use of this change? What are the next actions?

Stuart Bishop (stub) wrote :

I have merged this into lp:charms/trusty/ubuntu-repository-cache as requested.

Next steps are to push this to a new non-charmers home (launchpad.net/ubuntu-repository-cache?), make it multiseries, push and publish it to the charm store, and get the ecosystem team to promulgate it.

Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws/job/charm-bundle-test-aws/5012/

review: Needs Fixing (automated testing)
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws/job/charm-bundle-test-aws/5035/

review: Needs Fixing (automated testing)
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws/job/charm-bundle-test-lxc/4816/

review: Needs Fixing (automated testing)

Preview Diff

1=== modified file 'hooks/hooks.py'
2--- hooks/hooks.py 2015-10-20 11:21:23 +0000
3+++ hooks/hooks.py 2016-07-01 11:47:59 +0000
4@@ -1,4 +1,4 @@
5-#!/usr/bin/python
6+#!/usr/bin/python3
7 '''ubuntu-repository-cache charm
8
9 This 'hooks' source constitutes all of the code for handle charm hooks.
10@@ -11,9 +11,9 @@
11 sys.path.insert(0, os.path.join(os.environ['CHARM_DIR'], 'lib'))
12
13 # pylint can't find the modules # pylint: disable=F0401
14-from charmhelpers.core import hookenv
15-from charmhelpers.contrib import unison
16-from ubuntu_repository_cache import (
17+from charmhelpers.core import hookenv # noqa: E402
18+from charmhelpers.contrib import unison # noqa: E402
19+from ubuntu_repository_cache import ( # noqa: E402
20 mirror,
21 service,
22 util,
23
24=== modified file 'lib/charmhelpers/contrib/charmsupport/nrpe.py'
25--- lib/charmhelpers/contrib/charmsupport/nrpe.py 2015-06-03 10:05:41 +0000
26+++ lib/charmhelpers/contrib/charmsupport/nrpe.py 2016-07-01 11:47:59 +0000
27@@ -148,6 +148,13 @@
28 self.description = description
29 self.check_cmd = self._locate_cmd(check_cmd)
30
31+ def _get_check_filename(self):
32+ return os.path.join(NRPE.nrpe_confdir, '{}.cfg'.format(self.command))
33+
34+ def _get_service_filename(self, hostname):
35+ return os.path.join(NRPE.nagios_exportdir,
36+ 'service__{}_{}.cfg'.format(hostname, self.command))
37+
38 def _locate_cmd(self, check_cmd):
39 search_path = (
40 '/usr/lib/nagios/plugins',
41@@ -163,9 +170,21 @@
42 log('Check command not found: {}'.format(parts[0]))
43 return ''
44
45+ def _remove_service_files(self):
46+ if not os.path.exists(NRPE.nagios_exportdir):
47+ return
48+ for f in os.listdir(NRPE.nagios_exportdir):
49+ if f.endswith('_{}.cfg'.format(self.command)):
50+ os.remove(os.path.join(NRPE.nagios_exportdir, f))
51+
52+ def remove(self, hostname):
53+ nrpe_check_file = self._get_check_filename()
54+ if os.path.exists(nrpe_check_file):
55+ os.remove(nrpe_check_file)
56+ self._remove_service_files()
57+
58 def write(self, nagios_context, hostname, nagios_servicegroups):
59- nrpe_check_file = '/etc/nagios/nrpe.d/{}.cfg'.format(
60- self.command)
61+ nrpe_check_file = self._get_check_filename()
62 with open(nrpe_check_file, 'w') as nrpe_check_config:
63 nrpe_check_config.write("# check {}\n".format(self.shortname))
64 nrpe_check_config.write("command[{}]={}\n".format(
65@@ -180,9 +199,7 @@
66
67 def write_service_config(self, nagios_context, hostname,
68 nagios_servicegroups):
69- for f in os.listdir(NRPE.nagios_exportdir):
70- if re.search('.*{}.cfg'.format(self.command), f):
71- os.remove(os.path.join(NRPE.nagios_exportdir, f))
72+ self._remove_service_files()
73
74 templ_vars = {
75 'nagios_hostname': hostname,
76@@ -192,8 +209,7 @@
77 'command': self.command,
78 }
79 nrpe_service_text = Check.service_template.format(**templ_vars)
80- nrpe_service_file = '{}/service__{}_{}.cfg'.format(
81- NRPE.nagios_exportdir, hostname, self.command)
82+ nrpe_service_file = self._get_service_filename(hostname)
83 with open(nrpe_service_file, 'w') as nrpe_service_config:
84 nrpe_service_config.write(str(nrpe_service_text))
85
86@@ -218,12 +234,32 @@
87 if hostname:
88 self.hostname = hostname
89 else:
90- self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
91+ nagios_hostname = get_nagios_hostname()
92+ if nagios_hostname:
93+ self.hostname = nagios_hostname
94+ else:
95+ self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
96 self.checks = []
97
98 def add_check(self, *args, **kwargs):
99 self.checks.append(Check(*args, **kwargs))
100
101+ def remove_check(self, *args, **kwargs):
102+ if kwargs.get('shortname') is None:
103+ raise ValueError('shortname of check must be specified')
104+
105+ # Use sensible defaults if they're not specified - these are not
106+ # actually used during removal, but they're required for constructing
107+ # the Check object; check_disk is chosen because it's part of the
108+ # nagios-plugins-basic package.
109+ if kwargs.get('check_cmd') is None:
110+ kwargs['check_cmd'] = 'check_disk'
111+ if kwargs.get('description') is None:
112+ kwargs['description'] = ''
113+
114+ check = Check(*args, **kwargs)
115+ check.remove(self.hostname)
116+
117 def write(self):
118 try:
119 nagios_uid = pwd.getpwnam('nagios').pw_uid
120@@ -260,7 +296,7 @@
121 :param str relation_name: Name of relation nrpe sub joined to
122 """
123 for rel in relations_of_type(relation_name):
124- if 'nagios_hostname' in rel:
125+ if 'nagios_host_context' in rel:
126 return rel['nagios_host_context']
127
128
129@@ -301,11 +337,13 @@
130 upstart_init = '/etc/init/%s.conf' % svc
131 sysv_init = '/etc/init.d/%s' % svc
132 if os.path.exists(upstart_init):
133- nrpe.add_check(
134- shortname=svc,
135- description='process check {%s}' % unit_name,
136- check_cmd='check_upstart_job %s' % svc
137- )
138+ # Don't add a check for these services from neutron-gateway
139+ if svc not in ['ext-port', 'os-charm-phy-nic-mtu']:
140+ nrpe.add_check(
141+ shortname=svc,
142+ description='process check {%s}' % unit_name,
143+ check_cmd='check_upstart_job %s' % svc
144+ )
145 elif os.path.exists(sysv_init):
146 cronpath = '/etc/cron.d/nagios-service-check-%s' % svc
147 cron_file = ('*/5 * * * * root '
148
149=== modified file 'lib/charmhelpers/contrib/storage/linux/ceph.py'
150--- lib/charmhelpers/contrib/storage/linux/ceph.py 2015-07-23 11:40:02 +0000
151+++ lib/charmhelpers/contrib/storage/linux/ceph.py 2016-07-01 11:47:59 +0000
152@@ -23,11 +23,16 @@
153 # James Page <james.page@ubuntu.com>
154 # Adam Gandelman <adamg@ubuntu.com>
155 #
156+import bisect
157+import errno
158+import hashlib
159+import six
160
161 import os
162 import shutil
163 import json
164 import time
165+import uuid
166
167 from subprocess import (
168 check_call,
169@@ -35,8 +40,11 @@
170 CalledProcessError,
171 )
172 from charmhelpers.core.hookenv import (
173+ config,
174+ local_unit,
175 relation_get,
176 relation_ids,
177+ relation_set,
178 related_units,
179 log,
180 DEBUG,
181@@ -56,6 +64,9 @@
182 apt_install,
183 )
184
185+from charmhelpers.core.kernel import modprobe
186+from charmhelpers.contrib.openstack.utils import config_flags_parser
187+
188 KEYRING = '/etc/ceph/ceph.client.{}.keyring'
189 KEYFILE = '/etc/ceph/ceph.client.{}.key'
190
191@@ -67,6 +78,559 @@
192 err to syslog = {use_syslog}
193 clog to syslog = {use_syslog}
194 """
195+# For 50 < osds < 240,000 OSDs (Roughly 1 Exabyte at 6T OSDs)
196+powers_of_two = [8192, 16384, 32768, 65536, 131072, 262144, 524288, 1048576, 2097152, 4194304, 8388608]
197+
198+
199+def validator(value, valid_type, valid_range=None):
200+ """
201+ Used to validate these: http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values
202+ Example input:
203+ validator(value=1,
204+ valid_type=int,
205+ valid_range=[0, 2])
206+ This says I'm testing value=1. It must be an int inclusive in [0,2]
207+
208+ :param value: The value to validate
209+ :param valid_type: The type that value should be.
210+ :param valid_range: A range of values that value can assume.
211+ :return:
212+ """
213+ assert isinstance(value, valid_type), "{} is not a {}".format(
214+ value,
215+ valid_type)
216+ if valid_range is not None:
217+ assert isinstance(valid_range, list), \
218+ "valid_range must be a list, was given {}".format(valid_range)
219+ # If we're dealing with strings
220+ if valid_type is six.string_types:
221+ assert value in valid_range, \
222+ "{} is not in the list {}".format(value, valid_range)
223+ # Integer, float should have a min and max
224+ else:
225+ if len(valid_range) != 2:
226+ raise ValueError(
227+ "Invalid valid_range list of {} for {}. "
228+ "List must be [min,max]".format(valid_range, value))
229+ assert value >= valid_range[0], \
230+ "{} is less than minimum allowed value of {}".format(
231+ value, valid_range[0])
232+ assert value <= valid_range[1], \
233+ "{} is greater than maximum allowed value of {}".format(
234+ value, valid_range[1])
235+
236+
237+class PoolCreationError(Exception):
238+ """
239+ A custom error to inform the caller that a pool creation failed. Provides an error message
240+ """
241+
242+ def __init__(self, message):
243+ super(PoolCreationError, self).__init__(message)
244+
245+
246+class Pool(object):
247+ """
248+ An object oriented approach to Ceph pool creation. This base class is inherited by ReplicatedPool and ErasurePool.
249+ Do not call create() on this base class as it will not do anything. Instantiate a child class and call create().
250+ """
251+
252+ def __init__(self, service, name):
253+ self.service = service
254+ self.name = name
255+
256+ # Create the pool if it doesn't exist already
257+ # To be implemented by subclasses
258+ def create(self):
259+ pass
260+
261+ def add_cache_tier(self, cache_pool, mode):
262+ """
263+ Adds a new cache tier to an existing pool.
264+ :param cache_pool: six.string_types. The cache tier pool name to add.
265+ :param mode: six.string_types. The caching mode to use for this pool. valid range = ["readonly", "writeback"]
266+ :return: None
267+ """
268+ # Check the input types and values
269+ validator(value=cache_pool, valid_type=six.string_types)
270+ validator(value=mode, valid_type=six.string_types, valid_range=["readonly", "writeback"])
271+
272+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'add', self.name, cache_pool])
273+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, mode])
274+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'set-overlay', self.name, cache_pool])
275+ check_call(['ceph', '--id', self.service, 'osd', 'pool', 'set', cache_pool, 'hit_set_type', 'bloom'])
276+
277+ def remove_cache_tier(self, cache_pool):
278+ """
279+ Removes a cache tier from Ceph. Flushes all dirty objects from writeback pools and waits for that to complete.
280+ :param cache_pool: six.string_types. The cache tier pool name to remove.
281+ :return: None
282+ """
283+ # read-only is easy, writeback is much harder
284+ mode = get_cache_mode(self.service, cache_pool)
285+ version = ceph_version()
286+ if mode == 'readonly':
287+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, 'none'])
288+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool])
289+
290+ elif mode == 'writeback':
291+ pool_forward_cmd = ['ceph', '--id', self.service, 'osd', 'tier',
292+ 'cache-mode', cache_pool, 'forward']
293+ if version >= '10.1':
294+ # Jewel added a mandatory flag
295+ pool_forward_cmd.append('--yes-i-really-mean-it')
296+
297+ check_call(pool_forward_cmd)
298+ # Flush the cache and wait for it to return
299+ check_call(['rados', '--id', self.service, '-p', cache_pool, 'cache-flush-evict-all'])
300+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove-overlay', self.name])
301+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool])
302+
303+ def get_pgs(self, pool_size):
304+ """
305+ :param pool_size: int. pool_size is either the number of replicas for replicated pools or the K+M sum for
306+ erasure coded pools
307+ :return: int. The number of pgs to use.
308+ """
309+ validator(value=pool_size, valid_type=int)
310+ osd_list = get_osds(self.service)
311+ if not osd_list:
312+ # NOTE(james-page): Default to 200 for older ceph versions
313+ # which don't support OSD query from cli
314+ return 200
315+
316+ osd_list_length = len(osd_list)
317+ # Calculate based on Ceph best practices
318+ if osd_list_length < 5:
319+ return 128
320+ elif 5 < osd_list_length < 10:
321+ return 512
322+ elif 10 < osd_list_length < 50:
323+ return 4096
324+ else:
325+ estimate = (osd_list_length * 100) / pool_size
326+ # Return the next nearest power of 2
327+ index = bisect.bisect_right(powers_of_two, estimate)
328+ return powers_of_two[index]
329+
330+
331+class ReplicatedPool(Pool):
332+ def __init__(self, service, name, pg_num=None, replicas=2):
333+ super(ReplicatedPool, self).__init__(service=service, name=name)
334+ self.replicas = replicas
335+ if pg_num is None:
336+ self.pg_num = self.get_pgs(self.replicas)
337+ else:
338+ self.pg_num = pg_num
339+
340+ def create(self):
341+ if not pool_exists(self.service, self.name):
342+ # Create it
343+ cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create',
344+ self.name, str(self.pg_num)]
345+ try:
346+ check_call(cmd)
347+ # Set the pool replica size
348+ update_pool(client=self.service,
349+ pool=self.name,
350+ settings={'size': str(self.replicas)})
351+ except CalledProcessError:
352+ raise
353+
354+
355+# Default jerasure erasure coded pool
356+class ErasurePool(Pool):
357+ def __init__(self, service, name, erasure_code_profile="default"):
358+ super(ErasurePool, self).__init__(service=service, name=name)
359+ self.erasure_code_profile = erasure_code_profile
360+
361+ def create(self):
362+ if not pool_exists(self.service, self.name):
363+ # Try to find the erasure profile information so we can properly size the pgs
364+ erasure_profile = get_erasure_profile(service=self.service, name=self.erasure_code_profile)
365+
366+ # Check for errors
367+ if erasure_profile is None:
368+ log(message='Failed to discover erasure_profile named={}'.format(self.erasure_code_profile),
369+ level=ERROR)
370+ raise PoolCreationError(message='unable to find erasure profile {}'.format(self.erasure_code_profile))
371+ if 'k' not in erasure_profile or 'm' not in erasure_profile:
372+ # Error
373+ log(message='Unable to find k (data chunks) or m (coding chunks) in {}'.format(erasure_profile),
374+ level=ERROR)
375+ raise PoolCreationError(
376+ message='unable to find k (data chunks) or m (coding chunks) in {}'.format(erasure_profile))
377+
378+ pgs = self.get_pgs(int(erasure_profile['k']) + int(erasure_profile['m']))
379+ # Create it
380+ cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create', self.name, str(pgs), str(pgs),
381+ 'erasure', self.erasure_code_profile]
382+ try:
383+ check_call(cmd)
384+ except CalledProcessError:
385+ raise
386+
387+ """Get an existing erasure code profile if it already exists.
388+ Returns json formatted output"""
389+
390+
391+def get_mon_map(service):
392+ """
393+ Returns the current monitor map.
394+ :param service: six.string_types. The Ceph user name to run the command under
395+ :return: json string. :raise: ValueError if the monmap fails to parse.
396+ Also raises CalledProcessError if our ceph command fails
397+ """
398+ try:
399+ mon_status = check_output(
400+ ['ceph', '--id', service,
401+ 'mon_status', '--format=json'])
402+ try:
403+ return json.loads(mon_status)
404+ except ValueError as v:
405+ log("Unable to parse mon_status json: {}. Error: {}".format(
406+ mon_status, v.message))
407+ raise
408+ except CalledProcessError as e:
409+ log("mon_status command failed with message: {}".format(
410+ e.message))
411+ raise
412+
413+
414+def hash_monitor_names(service):
415+ """
416+ Uses the get_mon_map() function to get information about the monitor
417+ cluster.
418+ Hash the name of each monitor. Return a sorted list of monitor hashes
419+ in an ascending order.
420+ :param service: six.string_types. The Ceph user name to run the command under
421+ :rtype : dict. json dict of monitor name, ip address and rank
422+ example: {
423+ 'name': 'ip-172-31-13-165',
424+ 'rank': 0,
425+ 'addr': '172.31.13.165:6789/0'}
426+ """
427+ try:
428+ hash_list = []
429+ monitor_list = get_mon_map(service=service)
430+ if monitor_list['monmap']['mons']:
431+ for mon in monitor_list['monmap']['mons']:
432+ hash_list.append(
433+ hashlib.sha224(mon['name'].encode('utf-8')).hexdigest())
434+ return sorted(hash_list)
435+ else:
436+ return None
437+ except (ValueError, CalledProcessError):
438+ raise
439+
440+
441+def monitor_key_delete(service, key):
442+ """
443+ Delete a key and value pair from the monitor cluster
444+ :param service: six.string_types. The Ceph user name to run the command under
445+ Deletes a key value pair on the monitor cluster.
446+ :param key: six.string_types. The key to delete.
447+ """
448+ try:
449+ check_output(
450+ ['ceph', '--id', service,
451+ 'config-key', 'del', str(key)])
452+ except CalledProcessError as e:
453+ log("Monitor config-key put failed with message: {}".format(
454+ e.output))
455+ raise
456+
457+
458+def monitor_key_set(service, key, value):
459+ """
460+ Sets a key value pair on the monitor cluster.
461+ :param service: six.string_types. The Ceph user name to run the command under
462+ :param key: six.string_types. The key to set.
463+ :param value: The value to set. This will be converted to a string
464+ before setting
465+ """
466+ try:
467+ check_output(
468+ ['ceph', '--id', service,
469+ 'config-key', 'put', str(key), str(value)])
470+ except CalledProcessError as e:
471+ log("Monitor config-key put failed with message: {}".format(
472+ e.output))
473+ raise
474+
475+
476+def monitor_key_get(service, key):
477+ """
478+ Gets the value of an existing key in the monitor cluster.
479+ :param service: six.string_types. The Ceph user name to run the command under
480+ :param key: six.string_types. The key to search for.
481+ :return: Returns the value of that key or None if not found.
482+ """
483+ try:
484+ output = check_output(
485+ ['ceph', '--id', service,
486+ 'config-key', 'get', str(key)])
487+ return output
488+ except CalledProcessError as e:
489+ log("Monitor config-key get failed with message: {}".format(
490+ e.output))
491+ return None
492+
493+
494+def monitor_key_exists(service, key):
495+ """
496+ Searches for the existence of a key in the monitor cluster.
497+ :param service: six.string_types. The Ceph user name to run the command under
498+ :param key: six.string_types. The key to search for
499+ :return: Returns True if the key exists, False if not and raises an
500+ exception if an unknown error occurs. :raise: CalledProcessError if
501+ an unknown error occurs
502+ """
503+ try:
504+ check_call(
505+ ['ceph', '--id', service,
506+ 'config-key', 'exists', str(key)])
507+ # I can return true here regardless because Ceph returns
508+ # ENOENT if the key wasn't found
509+ return True
510+ except CalledProcessError as e:
511+ if e.returncode == errno.ENOENT:
512+ return False
513+ else:
514+ log("Unknown error from ceph config-get exists: {} {}".format(
515+ e.returncode, e.output))
516+ raise
517+
518+
519+def get_erasure_profile(service, name):
520+ """
521+ :param service: six.string_types. The Ceph user name to run the command under
522+ :param name:
523+ :return:
524+ """
525+ try:
526+ out = check_output(['ceph', '--id', service,
527+ 'osd', 'erasure-code-profile', 'get',
528+ name, '--format=json'])
529+ return json.loads(out)
530+ except (CalledProcessError, OSError, ValueError):
531+ return None
532+
533+
534+def pool_set(service, pool_name, key, value):
535+ """
536+ Sets a value for a RADOS pool in ceph.
537+ :param service: six.string_types. The Ceph user name to run the command under
538+ :param pool_name: six.string_types
539+ :param key: six.string_types
540+ :param value:
541+ :return: None. Can raise CalledProcessError
542+ """
543+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', pool_name, key, value]
544+ try:
545+ check_call(cmd)
546+ except CalledProcessError:
547+ raise
548+
549+
550+def snapshot_pool(service, pool_name, snapshot_name):
551+ """
552+ Snapshots a RADOS pool in ceph.
553+ :param service: six.string_types. The Ceph user name to run the command under
554+ :param pool_name: six.string_types
555+ :param snapshot_name: six.string_types
556+ :return: None. Can raise CalledProcessError
557+ """
558+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'mksnap', pool_name, snapshot_name]
559+ try:
560+ check_call(cmd)
561+ except CalledProcessError:
562+ raise
563+
564+
565+def remove_pool_snapshot(service, pool_name, snapshot_name):
566+ """
567+ Remove a snapshot from a RADOS pool in ceph.
568+ :param service: six.string_types. The Ceph user name to run the command under
569+ :param pool_name: six.string_types
570+ :param snapshot_name: six.string_types
571+ :return: None. Can raise CalledProcessError
572+ """
573+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'rmsnap', pool_name, snapshot_name]
574+ try:
575+ check_call(cmd)
576+ except CalledProcessError:
577+ raise
578+
579+
580+# max_bytes should be an int or long
581+def set_pool_quota(service, pool_name, max_bytes):
582+ """
583+ :param service: six.string_types. The Ceph user name to run the command under
584+ :param pool_name: six.string_types
585+ :param max_bytes: int or long
586+ :return: None. Can raise CalledProcessError
587+ """
588+ # Set a byte quota on a RADOS pool in ceph.
589+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name,
590+ 'max_bytes', str(max_bytes)]
591+ try:
592+ check_call(cmd)
593+ except CalledProcessError:
594+ raise
595+
596+
597+def remove_pool_quota(service, pool_name):
598+ """
599+ Set a byte quota on a RADOS pool in ceph.
600+ :param service: six.string_types. The Ceph user name to run the command under
601+ :param pool_name: six.string_types
602+ :return: None. Can raise CalledProcessError
603+ """
604+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name, 'max_bytes', '0']
605+ try:
606+ check_call(cmd)
607+ except CalledProcessError:
608+ raise
609+
610+
611+def remove_erasure_profile(service, profile_name):
612+ """
613+ Create a new erasure code profile if one does not already exist for it. Updates
614+ the profile if it exists. Please see http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/
615+ for more details
616+ :param service: six.string_types. The Ceph user name to run the command under
617+ :param profile_name: six.string_types
618+ :return: None. Can raise CalledProcessError
619+ """
620+ cmd = ['ceph', '--id', service, 'osd', 'erasure-code-profile', 'rm',
621+ profile_name]
622+ try:
623+ check_call(cmd)
624+ except CalledProcessError:
625+ raise
626+
627+
628+def create_erasure_profile(service, profile_name, erasure_plugin_name='jerasure',
629+ failure_domain='host',
630+ data_chunks=2, coding_chunks=1,
631+ locality=None, durability_estimator=None):
632+ """
633+ Create a new erasure code profile if one does not already exist for it. Updates
634+ the profile if it exists. Please see http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/
635+ for more details
636+ :param service: six.string_types. The Ceph user name to run the command under
637+ :param profile_name: six.string_types
638+ :param erasure_plugin_name: six.string_types
639+ :param failure_domain: six.string_types. One of ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region',
640+ 'room', 'root', 'row'])
641+ :param data_chunks: int
642+ :param coding_chunks: int
643+ :param locality: int
644+ :param durability_estimator: int
645+ :return: None. Can raise CalledProcessError
646+ """
647+ # Ensure this failure_domain is allowed by Ceph
648+ validator(failure_domain, six.string_types,
649+ ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region', 'room', 'root', 'row'])
650+
651+ cmd = ['ceph', '--id', service, 'osd', 'erasure-code-profile', 'set', profile_name,
652+ 'plugin=' + erasure_plugin_name, 'k=' + str(data_chunks), 'm=' + str(coding_chunks),
653+ 'ruleset_failure_domain=' + failure_domain]
654+ if locality is not None and durability_estimator is not None:
655+ raise ValueError("create_erasure_profile should be called with k, m and one of l or c but not both.")
656+
657+ # Add plugin specific information
658+ if locality is not None:
659+ # For local erasure codes
660+ cmd.append('l=' + str(locality))
661+ if durability_estimator is not None:
662+ # For Shec erasure codes
663+ cmd.append('c=' + str(durability_estimator))
664+
665+ if erasure_profile_exists(service, profile_name):
666+ cmd.append('--force')
667+
668+ try:
669+ check_call(cmd)
670+ except CalledProcessError:
671+ raise
672+
673+
674+def rename_pool(service, old_name, new_name):
675+ """
676+ Rename a Ceph pool from old_name to new_name
677+ :param service: six.string_types. The Ceph user name to run the command under
678+ :param old_name: six.string_types
679+ :param new_name: six.string_types
680+ :return: None
681+ """
682+ validator(value=old_name, valid_type=six.string_types)
683+ validator(value=new_name, valid_type=six.string_types)
684+
685+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'rename', old_name, new_name]
686+ check_call(cmd)
687+
688+
689+def erasure_profile_exists(service, name):
690+ """
691+ Check to see if an Erasure code profile already exists.
692+ :param service: six.string_types. The Ceph user name to run the command under
693+ :param name: six.string_types
694+ :return: int or None
695+ """
696+ validator(value=name, valid_type=six.string_types)
697+ try:
698+ check_call(['ceph', '--id', service,
699+ 'osd', 'erasure-code-profile', 'get',
700+ name])
701+ return True
702+ except CalledProcessError:
703+ return False
704+
705+
706+def get_cache_mode(service, pool_name):
707+ """
708+ Find the current caching mode of the pool_name given.
709+ :param service: six.string_types. The Ceph user name to run the command under
710+ :param pool_name: six.string_types
711+ :return: int or None
712+ """
713+ validator(value=service, valid_type=six.string_types)
714+ validator(value=pool_name, valid_type=six.string_types)
715+ out = check_output(['ceph', '--id', service, 'osd', 'dump', '--format=json'])
716+ try:
717+ osd_json = json.loads(out)
718+ for pool in osd_json['pools']:
719+ if pool['pool_name'] == pool_name:
720+ return pool['cache_mode']
721+ return None
722+ except ValueError:
723+ raise
724+
725+
726+def pool_exists(service, name):
727+ """Check to see if a RADOS pool already exists."""
728+ try:
729+ out = check_output(['rados', '--id', service,
730+ 'lspools']).decode('UTF-8')
731+ except CalledProcessError:
732+ return False
733+
734+ return name in out.split()
735+
736+
737+def get_osds(service):
738+ """Return a list of all Ceph Object Storage Daemons currently in the
739+ cluster.
740+ """
741+ version = ceph_version()
742+ if version and version >= '0.56':
743+ return json.loads(check_output(['ceph', '--id', service,
744+ 'osd', 'ls',
745+ '--format=json']).decode('UTF-8'))
746+
747+ return None
748
749
750 def install():
751@@ -96,53 +660,37 @@
752 check_call(cmd)
753
754
755-def pool_exists(service, name):
756- """Check to see if a RADOS pool already exists."""
757- try:
758- out = check_output(['rados', '--id', service,
759- 'lspools']).decode('UTF-8')
760- except CalledProcessError:
761- return False
762-
763- return name in out
764-
765-
766-def get_osds(service):
767- """Return a list of all Ceph Object Storage Daemons currently in the
768- cluster.
769- """
770- version = ceph_version()
771- if version and version >= '0.56':
772- return json.loads(check_output(['ceph', '--id', service,
773- 'osd', 'ls',
774- '--format=json']).decode('UTF-8'))
775-
776- return None
777-
778-
779-def create_pool(service, name, replicas=3):
780+def update_pool(client, pool, settings):
781+ cmd = ['ceph', '--id', client, 'osd', 'pool', 'set', pool]
782+ for k, v in six.iteritems(settings):
783+ cmd.append(k)
784+ cmd.append(v)
785+
786+ check_call(cmd)
787+
788+
789+def create_pool(service, name, replicas=3, pg_num=None):
790 """Create a new RADOS pool."""
791 if pool_exists(service, name):
792 log("Ceph pool {} already exists, skipping creation".format(name),
793 level=WARNING)
794 return
795
796- # Calculate the number of placement groups based
797- # on upstream recommended best practices.
798- osds = get_osds(service)
799- if osds:
800- pgnum = (len(osds) * 100 // replicas)
801- else:
802- # NOTE(james-page): Default to 200 for older ceph versions
803- # which don't support OSD query from cli
804- pgnum = 200
805-
806- cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
807- check_call(cmd)
808-
809- cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',
810- str(replicas)]
811- check_call(cmd)
812+ if not pg_num:
813+ # Calculate the number of placement groups based
814+ # on upstream recommended best practices.
815+ osds = get_osds(service)
816+ if osds:
817+ pg_num = (len(osds) * 100 // replicas)
818+ else:
819+ # NOTE(james-page): Default to 200 for older ceph versions
820+ # which don't support OSD query from cli
821+ pg_num = 200
822+
823+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pg_num)]
824+ check_call(cmd)
825+
826+ update_pool(service, name, settings={'size': str(replicas)})
827
828
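The placement-group sizing in create_pool() above follows the upstream heuristic: 100 PGs per OSD divided by the replica count, falling back to 200 when the OSD count is unavailable. A standalone sketch of that calculation (the helper name is ours, not charmhelpers'):

```python
def pg_count(osd_count, replicas=3):
    """Placement-group count per the upstream heuristic used by
    create_pool(). Falls back to 200 when the OSD count is unknown
    (older ceph CLIs cannot report it over the command line)."""
    if osd_count:
        return osd_count * 100 // replicas
    return 200
```

Note that the merge also lets callers override this entirely via the new pg_num argument.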
829 def delete_pool(service, name):
830@@ -197,10 +745,10 @@
831 log('Created new keyfile at %s.' % keyfile, level=INFO)
832
833
834-def get_ceph_nodes():
835- """Query named relation 'ceph' to determine current nodes."""
836+def get_ceph_nodes(relation='ceph'):
837+ """Query named relation to determine current nodes."""
838 hosts = []
839- for r_id in relation_ids('ceph'):
840+ for r_id in relation_ids(relation):
841 for unit in related_units(r_id):
842 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
843
844@@ -288,17 +836,6 @@
845 os.chown(data_src_dst, uid, gid)
846
847
848-# TODO: re-use
849-def modprobe(module):
850- """Load a kernel module and configure for auto-load on reboot."""
851- log('Loading kernel module', level=INFO)
852- cmd = ['modprobe', module]
853- check_call(cmd)
854- with open('/etc/modules', 'r+') as modules:
855- if module not in modules.read():
856- modules.write(module)
857-
858-
859 def copy_files(src, dst, symlinks=False, ignore=None):
860 """Copy files from src to dst."""
861 for item in os.listdir(src):
862@@ -363,14 +900,14 @@
863 service_start(svc)
864
865
866-def ensure_ceph_keyring(service, user=None, group=None):
867+def ensure_ceph_keyring(service, user=None, group=None, relation='ceph'):
868 """Ensures a ceph keyring is created for a named service and optionally
869 ensures user and group ownership.
870
871 Returns False if no ceph key is available in relation state.
872 """
873 key = None
874- for rid in relation_ids('ceph'):
875+ for rid in relation_ids(relation):
876 for unit in related_units(rid):
877 key = relation_get('key', rid=rid, unit=unit)
878 if key:
879@@ -411,17 +948,60 @@
880
881 The API is versioned and defaults to version 1.
882 """
883- def __init__(self, api_version=1):
884+
885+ def __init__(self, api_version=1, request_id=None):
886 self.api_version = api_version
887+ if request_id:
888+ self.request_id = request_id
889+ else:
890+ self.request_id = str(uuid.uuid1())
891 self.ops = []
892
893- def add_op_create_pool(self, name, replica_count=3):
894+ def add_op_create_pool(self, name, replica_count=3, pg_num=None):
895+ """Adds an operation to create a pool.
896+
897+ @param pg_num: optional setting. If not provided, this value
898+ will be calculated by the broker based on how many OSDs are in the
899+ cluster at the time of creation. Note that, if provided, this value
900+ will be capped at the current available maximum.
901+ """
902 self.ops.append({'op': 'create-pool', 'name': name,
903- 'replicas': replica_count})
904+ 'replicas': replica_count, 'pg_num': pg_num})
905+
906+ def set_ops(self, ops):
907+ """Set request ops to provided value.
908+
909+ Useful for injecting ops that come from a previous request
910+ to allow comparisons to ensure validity.
911+ """
912+ self.ops = ops
913
914 @property
915 def request(self):
916- return json.dumps({'api-version': self.api_version, 'ops': self.ops})
917+ return json.dumps({'api-version': self.api_version, 'ops': self.ops,
918+ 'request-id': self.request_id})
919+
920+ def _ops_equal(self, other):
921+ if len(self.ops) == len(other.ops):
922+ for req_no in range(0, len(self.ops)):
923+ for key in ['replicas', 'name', 'op', 'pg_num']:
924+ if self.ops[req_no].get(key) != other.ops[req_no].get(key):
925+ return False
926+ else:
927+ return False
928+ return True
929+
930+ def __eq__(self, other):
931+ if not isinstance(other, self.__class__):
932+ return False
933+ if self.api_version == other.api_version and \
934+ self._ops_equal(other):
935+ return True
936+ else:
937+ return False
938+
939+ def __ne__(self, other):
940+ return not self.__eq__(other)
941
942
943 class CephBrokerRsp(object):
944@@ -431,14 +1011,237 @@
945
946 The API is versioned and defaults to version 1.
947 """
948+
949 def __init__(self, encoded_rsp):
950 self.api_version = None
951 self.rsp = json.loads(encoded_rsp)
952
953 @property
954+ def request_id(self):
955+ return self.rsp.get('request-id')
956+
957+ @property
958 def exit_code(self):
959 return self.rsp.get('exit-code')
960
961 @property
962 def exit_msg(self):
963 return self.rsp.get('stderr')
964+
965+
966+# Ceph Broker Conversation:
967+# If a charm needs an action to be taken by ceph it can create a CephBrokerRq
968+# and send that request to ceph via the ceph relation. The CephBrokerRq has a
969+# unique id so that the client can identify which CephBrokerRsp is associated
970+# with the request. Ceph will also respond to each client unit individually
971+# creating a response key per client unit, e.g. glance/0 will get a CephBrokerRsp
972+# via key broker-rsp-glance-0
973+#
974+# To use this the charm can just do something like:
975+#
976+# from charmhelpers.contrib.storage.linux.ceph import (
977+# send_request_if_needed,
978+# is_request_complete,
979+# CephBrokerRq,
980+# )
981+#
982+# @hooks.hook('ceph-relation-changed')
983+# def ceph_changed():
984+# rq = CephBrokerRq()
985+# rq.add_op_create_pool(name='poolname', replica_count=3)
986+#
987+# if is_request_complete(rq):
988+# <Request complete actions>
989+# else:
990+# send_request_if_needed(get_ceph_request())
991+#
992+# CephBrokerRq and CephBrokerRsp are serialized into JSON. Below is an example
993+# of glance having sent a request to ceph which ceph has successfully processed
994+# 'ceph:8': {
995+# 'ceph/0': {
996+# 'auth': 'cephx',
997+# 'broker-rsp-glance-0': '{"request-id": "0bc7dc54", "exit-code": 0}',
998+# 'broker_rsp': '{"request-id": "0da543b8", "exit-code": 0}',
999+# 'ceph-public-address': '10.5.44.103',
1000+# 'key': 'AQCLDttVuHXINhAAvI144CB09dYchhHyTUY9BQ==',
1001+# 'private-address': '10.5.44.103',
1002+# },
1003+# 'glance/0': {
1004+# 'broker_req': ('{"api-version": 1, "request-id": "0bc7dc54", '
1005+# '"ops": [{"replicas": 3, "name": "glance", '
1006+# '"op": "create-pool"}]}'),
1007+# 'private-address': '10.5.44.109',
1008+# },
1009+# }
1010+
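The duplicate-detection described above compares only the op payloads — the request-id is deliberately excluded from equality, since a resent request gets a fresh uuid. A minimal standalone sketch of the comparison CephBrokerRq._ops_equal performs, assuming ops are plain dicts:

```python
def ops_equal(ops_a, ops_b):
    """Compare broker op lists the way CephBrokerRq._ops_equal does:
    field-by-field on the keys that matter, ignoring request-id."""
    if len(ops_a) != len(ops_b):
        return False
    keys = ('replicas', 'name', 'op', 'pg_num')
    # missing keys compare as None on both sides, so an op without
    # pg_num still matches one that carries pg_num=None
    return all(a.get(k) == b.get(k)
               for a, b in zip(ops_a, ops_b) for k in keys)
```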
1011+def get_previous_request(rid):
1012+ """Return the last ceph broker request sent on a given relation
1013+
1014+ @param rid: Relation id to query for request
1015+ """
1016+ request = None
1017+ broker_req = relation_get(attribute='broker_req', rid=rid,
1018+ unit=local_unit())
1019+ if broker_req:
1020+ request_data = json.loads(broker_req)
1021+ request = CephBrokerRq(api_version=request_data['api-version'],
1022+ request_id=request_data['request-id'])
1023+ request.set_ops(request_data['ops'])
1024+
1025+ return request
1026+
1027+
1028+def get_request_states(request, relation='ceph'):
1029+ """Return a dict of requests per relation id with their corresponding
1030+ completion state.
1031+
1032+ This allows a charm, which has a request for ceph, to see whether there is
1033+ an equivalent request already being processed and if so what state that
1034+ request is in.
1035+
1036+ @param request: A CephBrokerRq object
1037+ """
1038+ complete = []
1039+ requests = {}
1040+ for rid in relation_ids(relation):
1041+ complete = False
1042+ previous_request = get_previous_request(rid)
1043+ if request == previous_request:
1044+ sent = True
1045+ complete = is_request_complete_for_rid(previous_request, rid)
1046+ else:
1047+ sent = False
1048+ complete = False
1049+
1050+ requests[rid] = {
1051+ 'sent': sent,
1052+ 'complete': complete,
1053+ }
1054+
1055+ return requests
1056+
1057+
1058+def is_request_sent(request, relation='ceph'):
1059+ """Check to see if a functionally equivalent request has already been sent
1060+
1061+ Returns True if a similar request has been sent
1062+
1063+ @param request: A CephBrokerRq object
1064+ """
1065+ states = get_request_states(request, relation=relation)
1066+ for rid in states.keys():
1067+ if not states[rid]['sent']:
1068+ return False
1069+
1070+ return True
1071+
1072+
1073+def is_request_complete(request, relation='ceph'):
1074+ """Check to see if a functionally equivalent request has already been
1075+ completed
1076+
1077+ Returns True if a similar request has been completed
1078+
1079+ @param request: A CephBrokerRq object
1080+ """
1081+ states = get_request_states(request, relation=relation)
1082+ for rid in states.keys():
1083+ if not states[rid]['complete']:
1084+ return False
1085+
1086+ return True
1087+
1088+
1089+def is_request_complete_for_rid(request, rid):
1090+ """Check if a given request has been completed on the given relation
1091+
1092+ @param request: A CephBrokerRq object
1093+ @param rid: Relation ID
1094+ """
1095+ broker_key = get_broker_rsp_key()
1096+ for unit in related_units(rid):
1097+ rdata = relation_get(rid=rid, unit=unit)
1098+ if rdata.get(broker_key):
1099+ rsp = CephBrokerRsp(rdata.get(broker_key))
1100+ if rsp.request_id == request.request_id:
1101+ if not rsp.exit_code:
1102+ return True
1103+ else:
1104+ # The remote unit sent no reply targeted at this unit so either the
1105+ # remote ceph cluster does not support unit targeted replies or it
1106+ # has not processed our request yet.
1107+ if rdata.get('broker_rsp'):
1108+ request_data = json.loads(rdata['broker_rsp'])
1109+ if request_data.get('request-id'):
1110+ log('Ignoring legacy broker_rsp without unit key as remote '
1111+ 'service supports unit specific replies', level=DEBUG)
1112+ else:
1113+ log('Using legacy broker_rsp as remote service does not '
1114+ 'support unit specific replies', level=DEBUG)
1115+ rsp = CephBrokerRsp(rdata['broker_rsp'])
1116+ if not rsp.exit_code:
1117+ return True
1118+
1119+ return False
1120+
1121+
1122+def get_broker_rsp_key():
1123+ """Return broker response key for this unit
1124+
1125+ This is the key that ceph is going to use to pass request status
1126+ information back to this unit
1127+ """
1128+ return 'broker-rsp-' + local_unit().replace('/', '-')
1129+
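The unit-specific response key is derived purely from the local unit name; a dependency-free sketch of what get_broker_rsp_key() computes (unit name passed in instead of read from hookenv):

```python
def broker_rsp_key(unit_name):
    """Response key ceph uses to target this unit,
    e.g. 'glance/0' -> 'broker-rsp-glance-0'."""
    return 'broker-rsp-' + unit_name.replace('/', '-')
```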
1130+
1131+def send_request_if_needed(request, relation='ceph'):
1132+ """Send broker request if an equivalent request has not already been sent
1133+
1134+ @param request: A CephBrokerRq object
1135+ """
1136+ if is_request_sent(request, relation=relation):
1137+ log('Request already sent but not complete, not sending new request',
1138+ level=DEBUG)
1139+ else:
1140+ for rid in relation_ids(relation):
1141+ log('Sending request {}'.format(request.request_id), level=DEBUG)
1142+ relation_set(relation_id=rid, broker_req=request.request)
1143+
1144+
1145+class CephConfContext(object):
1146+ """Ceph config (ceph.conf) context.
1147+
1148+ Supports user-provided Ceph configuration settings. Users can provide a
1149+ dictionary as the value for the config-flags charm option containing
1150+ Ceph configuration settings keyed by their section in ceph.conf.
1151+ """
1152+ def __init__(self, permitted_sections=None):
1153+ self.permitted_sections = permitted_sections or []
1154+
1155+ def __call__(self):
1156+ conf = config('config-flags')
1157+ if not conf:
1158+ return {}
1159+
1160+ conf = config_flags_parser(conf)
1161+ if type(conf) != dict:
1162+ log("Provided config-flags is not a dictionary - ignoring",
1163+ level=WARNING)
1164+ return {}
1165+
1166+ permitted = self.permitted_sections
1167+ if permitted:
1168+ diff = set(conf.keys()).difference(set(permitted))
1169+ if diff:
1170+ log("Config-flags contains invalid keys '%s' - they will be "
1171+ "ignored" % (', '.join(diff)), level=WARNING)
1172+
1173+ ceph_conf = {}
1174+ for key in conf:
1175+ if permitted and key not in permitted:
1176+ log("Ignoring key '%s'" % key, level=WARNING)
1177+ continue
1178+
1179+ ceph_conf[key] = conf[key]
1180+
1181+ return ceph_conf
1182
1183=== modified file 'lib/charmhelpers/contrib/storage/linux/loopback.py'
1184--- lib/charmhelpers/contrib/storage/linux/loopback.py 2015-06-03 10:05:41 +0000
1185+++ lib/charmhelpers/contrib/storage/linux/loopback.py 2016-07-01 11:47:59 +0000
1186@@ -76,3 +76,13 @@
1187 check_call(cmd)
1188
1189 return create_loopback(path)
1190+
1191+
1192+def is_mapped_loopback_device(device):
1193+ """
1194+ Checks if a given device name is an existing/mapped loopback device.
1195+ :param device: str: Full path to the device (eg, /dev/loop1).
1196+ :returns: str: Path to the backing file if it is a loopback device,
1197+ empty string otherwise
1198+ """
1199+ return loopback_devices().get(device, "")
1200
1201=== modified file 'lib/charmhelpers/contrib/storage/linux/utils.py'
1202--- lib/charmhelpers/contrib/storage/linux/utils.py 2015-07-23 11:40:02 +0000
1203+++ lib/charmhelpers/contrib/storage/linux/utils.py 2016-07-01 11:47:59 +0000
1204@@ -43,9 +43,10 @@
1205
1206 :param block_device: str: Full path of block device to clean.
1207 '''
1208+ # https://github.com/ceph/ceph/commit/fdd7f8d83afa25c4e09aaedd90ab93f3b64a677b
1209 # sometimes sgdisk exits non-zero; this is OK, dd will clean up
1210- call(['sgdisk', '--zap-all', '--mbrtogpt',
1211- '--clear', block_device])
1212+ call(['sgdisk', '--zap-all', '--', block_device])
1213+ call(['sgdisk', '--clear', '--mbrtogpt', '--', block_device])
1214 dev_end = check_output(['blockdev', '--getsz',
1215 block_device]).decode('UTF-8')
1216 gpt_end = int(dev_end.split()[0]) - 100
1217@@ -63,8 +64,8 @@
1218 :returns: boolean: True if the path represents a mounted device, False if
1219 it doesn't.
1220 '''
1221- is_partition = bool(re.search(r".*[0-9]+\b", device))
1222- out = check_output(['mount']).decode('UTF-8')
1223- if is_partition:
1224- return bool(re.search(device + r"\b", out))
1225- return bool(re.search(device + r"[0-9]*\b", out))
1226+ try:
1227+ out = check_output(['lsblk', '-P', device]).decode('UTF-8')
1228+ except:
1229+ return False
1230+ return bool(re.search(r'MOUNTPOINT=".+"', out))
1231
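The reworked is_device_mounted() above stops grepping `mount` output and instead checks lsblk's key="value" (-P) output for a non-empty MOUNTPOINT. The detection step in isolation, with illustrative sample output rather than a live lsblk call:

```python
import re

def output_shows_mounted(lsblk_output):
    """True when `lsblk -P <dev>` output carries a non-empty
    MOUNTPOINT field, mirroring the regex in is_device_mounted()."""
    return bool(re.search(r'MOUNTPOINT=".+"', lsblk_output))
```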
1232=== modified file 'lib/charmhelpers/core/hookenv.py'
1233--- lib/charmhelpers/core/hookenv.py 2015-07-23 11:40:02 +0000
1234+++ lib/charmhelpers/core/hookenv.py 2016-07-01 11:47:59 +0000
1235@@ -74,6 +74,7 @@
1236 res = func(*args, **kwargs)
1237 cache[key] = res
1238 return res
1239+ wrapper._wrapped = func
1240 return wrapper
1241
1242
1243@@ -173,9 +174,19 @@
1244 return os.environ.get('JUJU_RELATION', None)
1245
1246
1247-def relation_id():
1248- """The relation ID for the current relation hook"""
1249- return os.environ.get('JUJU_RELATION_ID', None)
1250+@cached
1251+def relation_id(relation_name=None, service_or_unit=None):
1252+ """The relation ID for the current or a specified relation"""
1253+ if not relation_name and not service_or_unit:
1254+ return os.environ.get('JUJU_RELATION_ID', None)
1255+ elif relation_name and service_or_unit:
1256+ service_name = service_or_unit.split('/')[0]
1257+ for relid in relation_ids(relation_name):
1258+ remote_service = remote_service_name(relid)
1259+ if remote_service == service_name:
1260+ return relid
1261+ else:
1262+ raise ValueError('Must specify neither or both of relation_name and service_or_unit')
1263
1264
1265 def local_unit():
1266@@ -193,9 +204,20 @@
1267 return local_unit().split('/')[0]
1268
1269
1270+@cached
1271+def remote_service_name(relid=None):
1272+ """The remote service name for a given relation-id (or the current relation)"""
1273+ if relid is None:
1274+ unit = remote_unit()
1275+ else:
1276+ units = related_units(relid)
1277+ unit = units[0] if units else None
1278+ return unit.split('/')[0] if unit else None
1279+
1280+
1281 def hook_name():
1282 """The name of the currently executing hook"""
1283- return os.path.basename(sys.argv[0])
1284+ return os.environ.get('JUJU_HOOK_NAME', os.path.basename(sys.argv[0]))
1285
1286
1287 class Config(dict):
1288@@ -469,6 +491,76 @@
1289
1290
1291 @cached
1292+def peer_relation_id():
1293+ '''Get the peers relation id if a peers relation has been joined, else None.'''
1294+ md = metadata()
1295+ section = md.get('peers')
1296+ if section:
1297+ for key in section:
1298+ relids = relation_ids(key)
1299+ if relids:
1300+ return relids[0]
1301+ return None
1302+
1303+
1304+@cached
1305+def relation_to_interface(relation_name):
1306+ """
1307+ Given the name of a relation, return the interface that relation uses.
1308+
1309+ :returns: The interface name, or ``None``.
1310+ """
1311+ return relation_to_role_and_interface(relation_name)[1]
1312+
1313+
1314+@cached
1315+def relation_to_role_and_interface(relation_name):
1316+ """
1317+ Given the name of a relation, return the role and the name of the interface
1318+ that relation uses (where role is one of ``provides``, ``requires``, or ``peers``).
1319+
1320+ :returns: A tuple containing ``(role, interface)``, or ``(None, None)``.
1321+ """
1322+ _metadata = metadata()
1323+ for role in ('provides', 'requires', 'peers'):
1324+ interface = _metadata.get(role, {}).get(relation_name, {}).get('interface')
1325+ if interface:
1326+ return role, interface
1327+ return None, None
1328+
1329+
1330+@cached
1331+def role_and_interface_to_relations(role, interface_name):
1332+ """
1333+ Given a role and interface name, return a list of relation names for the
1334+ current charm that use that interface under that role (where role is one
1335+ of ``provides``, ``requires``, or ``peers``).
1336+
1337+ :returns: A list of relation names.
1338+ """
1339+ _metadata = metadata()
1340+ results = []
1341+ for relation_name, relation in _metadata.get(role, {}).items():
1342+ if relation['interface'] == interface_name:
1343+ results.append(relation_name)
1344+ return results
1345+
1346+
1347+@cached
1348+def interface_to_relations(interface_name):
1349+ """
1350+ Given an interface, return a list of relation names for the current
1351+ charm that use that interface.
1352+
1353+ :returns: A list of relation names.
1354+ """
1355+ results = []
1356+ for role in ('provides', 'requires', 'peers'):
1357+ results.extend(role_and_interface_to_relations(role, interface_name))
1358+ return results
1359+
1360+
1361+@cached
1362 def charm_name():
1363 """Get the name of the current charm as is specified on metadata.yaml"""
1364 return metadata().get('name')
1365@@ -544,6 +636,38 @@
1366 return unit_get('private-address')
1367
1368
1369+@cached
1370+def storage_get(attribute=None, storage_id=None):
1371+ """Get storage attributes"""
1372+ _args = ['storage-get', '--format=json']
1373+ if storage_id:
1374+ _args.extend(('-s', storage_id))
1375+ if attribute:
1376+ _args.append(attribute)
1377+ try:
1378+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
1379+ except ValueError:
1380+ return None
1381+
1382+
1383+@cached
1384+def storage_list(storage_name=None):
1385+ """List the storage IDs for the unit"""
1386+ _args = ['storage-list', '--format=json']
1387+ if storage_name:
1388+ _args.append(storage_name)
1389+ try:
1390+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
1391+ except ValueError:
1392+ return None
1393+ except OSError as e:
1394+ import errno
1395+ if e.errno == errno.ENOENT:
1396+ # storage-list does not exist
1397+ return []
1398+ raise
1399+
1400+
1401 class UnregisteredHookError(Exception):
1402 """Raised when an undefined hook is called"""
1403 pass
1404@@ -644,6 +768,21 @@
1405 subprocess.check_call(['action-fail', message])
1406
1407
1408+def action_name():
1409+ """Get the name of the currently executing action."""
1410+ return os.environ.get('JUJU_ACTION_NAME')
1411+
1412+
1413+def action_uuid():
1414+ """Get the UUID of the currently executing action."""
1415+ return os.environ.get('JUJU_ACTION_UUID')
1416+
1417+
1418+def action_tag():
1419+ """Get the tag for the currently executing action."""
1420+ return os.environ.get('JUJU_ACTION_TAG')
1421+
1422+
1423 def status_set(workload_state, message):
1424 """Set the workload state with a message
1425
1426@@ -673,25 +812,28 @@
1427
1428
1429 def status_get():
1430- """Retrieve the previously set juju workload state
1431-
1432- If the status-set command is not found then assume this is juju < 1.23 and
1433- return 'unknown'
1434+ """Retrieve the previously set juju workload state and message
1435+
1436+ If the status-get command is not found then assume this is juju < 1.23 and
1437+ return 'unknown', ""
1438+
1439 """
1440- cmd = ['status-get']
1441+ cmd = ['status-get', "--format=json", "--include-data"]
1442 try:
1443- raw_status = subprocess.check_output(cmd, universal_newlines=True)
1444- status = raw_status.rstrip()
1445- return status
1446+ raw_status = subprocess.check_output(cmd)
1447 except OSError as e:
1448 if e.errno == errno.ENOENT:
1449- return 'unknown'
1450+ return ('unknown', "")
1451 else:
1452 raise
1453+ else:
1454+ status = json.loads(raw_status.decode("UTF-8"))
1455+ return (status["status"], status["message"])
1456
1457
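status_get() now requests JSON with --include-data and returns a (state, message) tuple instead of a bare string. The parsing step on its own (sample payload is illustrative):

```python
import json

def parse_status(raw):
    """Decode `status-get --format=json --include-data` output into
    the (workload_state, message) tuple status_get() now returns."""
    status = json.loads(raw.decode('UTF-8'))
    return (status['status'], status['message'])
```

Callers that previously compared the return value to a plain string need updating for the tuple.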
1458 def translate_exc(from_exc, to_exc):
1459 def inner_translate_exc1(f):
1460+ @wraps(f)
1461 def inner_translate_exc2(*args, **kwargs):
1462 try:
1463 return f(*args, **kwargs)
1464@@ -736,6 +878,58 @@
1465 subprocess.check_call(cmd)
1466
1467
1468+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1469+def payload_register(ptype, klass, pid):
1470+ """Used while a hook is running to let Juju know that a
1471+ payload has been started."""
1472+ cmd = ['payload-register']
1473+ for x in [ptype, klass, pid]:
1474+ cmd.append(x)
1475+ subprocess.check_call(cmd)
1476+
1477+
1478+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1479+def payload_unregister(klass, pid):
1480+ """Used while a hook is running to let Juju know
1481+ that a payload has been manually stopped. The <class> and <id> provided
1482+ must match a payload that has been previously registered with juju using
1483+ payload-register."""
1484+ cmd = ['payload-unregister']
1485+ for x in [klass, pid]:
1486+ cmd.append(x)
1487+ subprocess.check_call(cmd)
1488+
1489+
1490+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1491+def payload_status_set(klass, pid, status):
1492+ """Used to update the current status of a registered payload.
1493+ The <class> and <id> provided must match a payload that has been previously
1494+ registered with juju using payload-register. The <status> must be one of
1495+ the following: starting, started, stopping, stopped"""
1496+ cmd = ['payload-status-set']
1497+ for x in [klass, pid, status]:
1498+ cmd.append(x)
1499+ subprocess.check_call(cmd)
1500+
1501+
1502+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1503+def resource_get(name):
1504+ """Used to fetch the resource path of the given name.
1505+
1506+ <name> must match the name of a resource defined in metadata.yaml
1507+
1508+ returns either a path or False if the resource is not available
1509+ """
1510+ if not name:
1511+ return False
1512+
1513+ cmd = ['resource-get', name]
1514+ try:
1515+ return subprocess.check_output(cmd).decode('UTF-8')
1516+ except subprocess.CalledProcessError:
1517+ return False
1518+
1519+
1520 @cached
1521 def juju_version():
1522 """Full version string (eg. '1.23.3.1-trusty-amd64')"""
1523@@ -800,3 +994,16 @@
1524 for callback, args, kwargs in reversed(_atexit):
1525 callback(*args, **kwargs)
1526 del _atexit[:]
1527+
1528+
1529+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1530+def network_get_primary_address(binding):
1531+ '''
1532+ Retrieve the primary network address for a named binding
1533+
1534+ :param binding: string. The name of a relation or extra-binding
1535+ :return: string. The primary IP address for the named binding
1536+ :raise: NotImplementedError if run on Juju < 2.0
1537+ '''
1538+ cmd = ['network-get', '--primary-address', binding]
1539+ return subprocess.check_output(cmd).strip()
1540
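network_get_primary_address, resource_get and the payload helpers all lean on the translate_exc decorator (also given @wraps in this merge) to convert a missing hook tool — an OSError from exec — into NotImplementedError on older Juju. A self-contained sketch of that decorator:

```python
from functools import wraps

def translate_exc(from_exc, to_exc):
    """Re-raise from_exc as to_exc, as charmhelpers does for hook
    tools that do not exist on older Juju versions."""
    def outer(f):
        @wraps(f)
        def inner(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            except from_exc:
                raise to_exc()
        return inner
    return outer
```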
1541=== modified file 'lib/charmhelpers/core/host.py'
1542--- lib/charmhelpers/core/host.py 2015-07-23 11:40:02 +0000
1543+++ lib/charmhelpers/core/host.py 2016-07-01 11:47:59 +0000
1544@@ -30,6 +30,8 @@
1545 import string
1546 import subprocess
1547 import hashlib
1548+import functools
1549+import itertools
1550 from contextlib import contextmanager
1551 from collections import OrderedDict
1552
1553@@ -63,55 +65,94 @@
1554 return service_result
1555
1556
1557-def service_pause(service_name, init_dir=None):
1558+def service_pause(service_name, init_dir="/etc/init", initd_dir="/etc/init.d"):
1559 """Pause a system service.
1560
1561 Stop it, and prevent it from starting again at boot."""
1562- if init_dir is None:
1563- init_dir = "/etc/init"
1564- stopped = service_stop(service_name)
1565- # XXX: Support systemd too
1566- override_path = os.path.join(
1567- init_dir, '{}.conf.override'.format(service_name))
1568- with open(override_path, 'w') as fh:
1569- fh.write("manual\n")
1570+ stopped = True
1571+ if service_running(service_name):
1572+ stopped = service_stop(service_name)
1573+ upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
1574+ sysv_file = os.path.join(initd_dir, service_name)
1575+ if init_is_systemd():
1576+ service('disable', service_name)
1577+ elif os.path.exists(upstart_file):
1578+ override_path = os.path.join(
1579+ init_dir, '{}.override'.format(service_name))
1580+ with open(override_path, 'w') as fh:
1581+ fh.write("manual\n")
1582+ elif os.path.exists(sysv_file):
1583+ subprocess.check_call(["update-rc.d", service_name, "disable"])
1584+ else:
1585+ raise ValueError(
1586+ "Unable to detect {0} as SystemD, Upstart {1} or"
1587+ " SysV {2}".format(
1588+ service_name, upstart_file, sysv_file))
1589 return stopped
1590
1591
1592-def service_resume(service_name, init_dir=None):
1593+def service_resume(service_name, init_dir="/etc/init",
1594+ initd_dir="/etc/init.d"):
1595 """Resume a system service.
1596
1597 Reenable starting again at boot. Start the service"""
1598- # XXX: Support systemd too
1599- if init_dir is None:
1600- init_dir = "/etc/init"
1601- override_path = os.path.join(
1602- init_dir, '{}.conf.override'.format(service_name))
1603- if os.path.exists(override_path):
1604- os.unlink(override_path)
1605- started = service_start(service_name)
1606+ upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
1607+ sysv_file = os.path.join(initd_dir, service_name)
1608+ if init_is_systemd():
1609+ service('enable', service_name)
1610+ elif os.path.exists(upstart_file):
1611+ override_path = os.path.join(
1612+ init_dir, '{}.override'.format(service_name))
1613+ if os.path.exists(override_path):
1614+ os.unlink(override_path)
1615+ elif os.path.exists(sysv_file):
1616+ subprocess.check_call(["update-rc.d", service_name, "enable"])
1617+ else:
1618+ raise ValueError(
1619+ "Unable to detect {0} as SystemD, Upstart {1} or"
1620+ " SysV {2}".format(
1621+ service_name, upstart_file, sysv_file))
1622+
1623+ started = service_running(service_name)
1624+ if not started:
1625+ started = service_start(service_name)
1626 return started
1627
1628
1629 def service(action, service_name):
1630 """Control a system service"""
1631- cmd = ['service', service_name, action]
1632+ if init_is_systemd():
1633+ cmd = ['systemctl', action, service_name]
1634+ else:
1635+ cmd = ['service', service_name, action]
1636 return subprocess.call(cmd) == 0
1637
1638
1639-def service_running(service):
1640+_UPSTART_CONF = "/etc/init/{}.conf"
1641+_INIT_D_CONF = "/etc/init.d/{}"
1642+
1643+
1644+def service_running(service_name):
1645 """Determine whether a system service is running"""
1646- try:
1647- output = subprocess.check_output(
1648- ['service', service, 'status'],
1649- stderr=subprocess.STDOUT).decode('UTF-8')
1650- except subprocess.CalledProcessError:
1651+ if init_is_systemd():
1652+ return service('is-active', service_name)
1653+ else:
1654+ if os.path.exists(_UPSTART_CONF.format(service_name)):
1655+ try:
1656+ output = subprocess.check_output(
1657+ ['status', service_name],
1658+ stderr=subprocess.STDOUT).decode('UTF-8')
1659+ except subprocess.CalledProcessError:
1660+ return False
1661+ else:
1662+ # This works for upstart scripts where the 'service' command
1663+ # returns a consistent string to represent running 'start/running'
1664+ if "start/running" in output:
1665+ return True
1666+ elif os.path.exists(_INIT_D_CONF.format(service_name)):
1667+ # Check System V scripts init script return codes
1668+ return service('status', service_name)
1669 return False
1670- else:
1671- if ("start/running" in output or "is running" in output):
1672- return True
1673- else:
1674- return False
1675
1676
1677 def service_available(service_name):
1678@@ -126,14 +167,41 @@
1679 return True
1680
1681
1682-def adduser(username, password=None, shell='/bin/bash', system_user=False):
1683- """Add a user to the system"""
1684+SYSTEMD_SYSTEM = '/run/systemd/system'
1685+
1686+
1687+def init_is_systemd():
1688+ """Return True if the host system uses systemd, False otherwise."""
1689+ return os.path.isdir(SYSTEMD_SYSTEM)
1690+
1691+
1692+def adduser(username, password=None, shell='/bin/bash', system_user=False,
1693+ primary_group=None, secondary_groups=None, uid=None):
1694+ """Add a user to the system.
1695+
1696+ Will log but otherwise succeed if the user already exists.
1697+
1698+ :param str username: Username to create
1699+ :param str password: Password for user; if ``None``, create a system user
1700+ :param str shell: The default shell for the user
1701+ :param bool system_user: Whether to create a login or system user
1702+ :param str primary_group: Primary group for user; defaults to username
1703+ :param list secondary_groups: Optional list of additional groups
1704+ :param int uid: UID for user being created
1705+
1706+ :returns: The password database entry struct, as returned by `pwd.getpwnam`
1707+ """
1708 try:
1709 user_info = pwd.getpwnam(username)
1710 log('user {0} already exists!'.format(username))
1711+ if uid:
1712+ user_info = pwd.getpwuid(int(uid))
1713+ log('user with uid {0} already exists!'.format(uid))
1714 except KeyError:
1715 log('creating user {0}'.format(username))
1716 cmd = ['useradd']
1717+ if uid:
1718+ cmd.extend(['--uid', str(uid)])
1719 if system_user or password is None:
1720 cmd.append('--system')
1721 else:
1722@@ -142,20 +210,84 @@
1723 '--shell', shell,
1724 '--password', password,
1725 ])
1726+ if not primary_group:
1727+ try:
1728+ grp.getgrnam(username)
1729+ primary_group = username # avoid "group exists" error
1730+ except KeyError:
1731+ pass
1732+ if primary_group:
1733+ cmd.extend(['-g', primary_group])
1734+ if secondary_groups:
1735+ cmd.extend(['-G', ','.join(secondary_groups)])
1736 cmd.append(username)
1737 subprocess.check_call(cmd)
1738 user_info = pwd.getpwnam(username)
1739 return user_info
1740
1741
1742-def add_group(group_name, system_group=False):
1743- """Add a group to the system"""
1744+def user_exists(username):
1745+ """Check if a user exists"""
1746+ try:
1747+ pwd.getpwnam(username)
1748+ user_exists = True
1749+ except KeyError:
1750+ user_exists = False
1751+ return user_exists
1752+
1753+
1754+def uid_exists(uid):
1755+ """Check if a uid exists"""
1756+ try:
1757+ pwd.getpwuid(uid)
1758+ uid_exists = True
1759+ except KeyError:
1760+ uid_exists = False
1761+ return uid_exists
1762+
1763+
1764+def group_exists(groupname):
1765+ """Check if a group exists"""
1766+ try:
1767+ grp.getgrnam(groupname)
1768+ group_exists = True
1769+ except KeyError:
1770+ group_exists = False
1771+ return group_exists
1772+
1773+
1774+def gid_exists(gid):
1775+ """Check if a gid exists"""
1776+ try:
1777+ grp.getgrgid(gid)
1778+ gid_exists = True
1779+ except KeyError:
1780+ gid_exists = False
1781+ return gid_exists
1782+
1783+
1784+def add_group(group_name, system_group=False, gid=None):
1785+ """Add a group to the system
1786+
1787+ Will log but otherwise succeed if the group already exists.
1788+
1789+ :param str group_name: group to create
1790+ :param bool system_group: Create system group
1791+ :param int gid: GID for group being created
1792+
1793+ :returns: The group database entry struct, as returned by `grp.getgrnam`
1794+ """
1795 try:
1796 group_info = grp.getgrnam(group_name)
1797 log('group {0} already exists!'.format(group_name))
1798+ if gid:
1799+ group_info = grp.getgrgid(gid)
1800+ log('group with gid {0} already exists!'.format(gid))
1801 except KeyError:
1802 log('creating group {0}'.format(group_name))
1803 cmd = ['addgroup']
1804+ if gid:
1805+ cmd.extend(['--gid', str(gid)])
1806 if system_group:
1807 cmd.append('--system')
1808 else:
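For review purposes, the `useradd` invocation that the new `uid`, `primary_group` and `secondary_groups` parameters assemble can be sketched standalone; the function name and sample values below are illustrative only, not charm-helpers API:

```python
# Minimal sketch of the command-line assembly performed by the
# extended add_user() in this diff (flags mirror the added code).
def build_useradd_cmd(username, uid=None, system_user=False,
                      primary_group=None, secondary_groups=None):
    cmd = ['useradd']
    if uid:
        cmd.extend(['--uid', str(uid)])
    if system_user:
        cmd.append('--system')
    if primary_group:
        cmd.extend(['-g', primary_group])
    if secondary_groups:
        cmd.extend(['-G', ','.join(secondary_groups)])
    cmd.append(username)
    return cmd

cmd = build_useradd_cmd('squid', uid=501, system_user=True,
                        primary_group='proxy',
                        secondary_groups=['www-data'])
# → ['useradd', '--uid', '501', '--system', '-g', 'proxy',
#    '-G', 'www-data', 'squid']
```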
1809@@ -229,14 +361,12 @@
1810
1811
1812 def fstab_remove(mp):
1813- """Remove the given mountpoint entry from /etc/fstab
1814- """
1815+ """Remove the given mountpoint entry from /etc/fstab"""
1816 return Fstab.remove_by_mountpoint(mp)
1817
1818
1819 def fstab_add(dev, mp, fs, options=None):
1820- """Adds the given device entry to the /etc/fstab file
1821- """
1822+ """Adds the given device entry to the /etc/fstab file"""
1823 return Fstab.add(dev, mp, fs, options=options)
1824
1825
1826@@ -280,9 +410,19 @@
1827 return system_mounts
1828
1829
1830+def fstab_mount(mountpoint):
1831+ """Mount filesystem using fstab"""
1832+ cmd_args = ['mount', mountpoint]
1833+ try:
1834+ subprocess.check_output(cmd_args)
1835+ except subprocess.CalledProcessError as e:
1836+ log('Error mounting {}\n{}'.format(mountpoint, e.output))
1837+ return False
1838+ return True
1839+
1840+
1841 def file_hash(path, hash_type='md5'):
1842- """
1843- Generate a hash checksum of the contents of 'path' or None if not found.
1844+ """Generate a hash checksum of the contents of 'path' or None if not found.
1845
1846 :param str hash_type: Any hash algorithm supported by :mod:`hashlib`,
1847 such as md5, sha1, sha256, sha512, etc.
1848@@ -297,10 +437,9 @@
1849
1850
1851 def path_hash(path):
1852- """
1853- Generate a hash checksum of all files matching 'path'. Standard wildcards
1854- like '*' and '?' are supported, see documentation for the 'glob' module for
1855- more information.
1856+ """Generate a hash checksum of all files matching 'path'. Standard
1857+ wildcards like '*' and '?' are supported, see documentation for the 'glob'
1858+ module for more information.
1859
1860 :return: dict: A { filename: hash } dictionary for all matched files.
1861 Empty if none found.
1862@@ -312,8 +451,7 @@
1863
1864
1865 def check_hash(path, checksum, hash_type='md5'):
1866- """
1867- Validate a file using a cryptographic checksum.
1868+ """Validate a file using a cryptographic checksum.
1869
1870 :param str checksum: Value of the checksum used to validate the file.
1871 :param str hash_type: Hash algorithm used to generate `checksum`.
1872@@ -328,10 +466,11 @@
1873
1874
1875 class ChecksumError(ValueError):
1876+ """A class derived from Value error to indicate the checksum failed."""
1877 pass
1878
1879
1880-def restart_on_change(restart_map, stopstart=False):
1881+def restart_on_change(restart_map, stopstart=False, restart_functions=None):
1882 """Restart services based on configuration files changing
1883
1884 This function is used as a decorator, for example::
1885@@ -349,27 +488,58 @@
1886 restarted if any file matching the pattern got changed, created
1887 or removed. Standard wildcards are supported, see documentation
1888 for the 'glob' module for more information.
1889+
1890+ @param restart_map: {path_file_name: [service_name, ...]}
1891+ @param stopstart: DEFAULT false; whether to stop and start instead of restarting
1892+ @param restart_functions: nonstandard functions to use to restart services
1893+ {svc: func, ...}
1894+ @returns result from decorated function
1895 """
1896 def wrap(f):
1897+ @functools.wraps(f)
1898 def wrapped_f(*args, **kwargs):
1899- checksums = {path: path_hash(path) for path in restart_map}
1900- f(*args, **kwargs)
1901- restarts = []
1902- for path in restart_map:
1903- if path_hash(path) != checksums[path]:
1904- restarts += restart_map[path]
1905- services_list = list(OrderedDict.fromkeys(restarts))
1906- if not stopstart:
1907- for service_name in services_list:
1908- service('restart', service_name)
1909- else:
1910- for action in ['stop', 'start']:
1911- for service_name in services_list:
1912- service(action, service_name)
1913+ return restart_on_change_helper(
1914+ (lambda: f(*args, **kwargs)), restart_map, stopstart,
1915+ restart_functions)
1916 return wrapped_f
1917 return wrap
1918
1919
1920+def restart_on_change_helper(lambda_f, restart_map, stopstart=False,
1921+ restart_functions=None):
1922+ """Helper function to perform the restart_on_change function.
1923+
1924+ This is provided for decorators to restart services if files described
1925+ in the restart_map have changed after an invocation of lambda_f().
1926+
1927+ @param lambda_f: function to call.
1928+ @param restart_map: {file: [service, ...]}
1929+ @param stopstart: whether to stop, start or restart a service
1930+ @param restart_functions: nonstandard functions to use to restart services
1931+ {svc: func, ...}
1932+ @returns result of lambda_f()
1933+ """
1934+ if restart_functions is None:
1935+ restart_functions = {}
1936+ checksums = {path: path_hash(path) for path in restart_map}
1937+ r = lambda_f()
1938+ # create a list of lists of the services to restart
1939+ restarts = [restart_map[path]
1940+ for path in restart_map
1941+ if path_hash(path) != checksums[path]]
1942+ # create a flat list of ordered services without duplicates from lists
1943+ services_list = list(OrderedDict.fromkeys(itertools.chain(*restarts)))
1944+ if services_list:
1945+ actions = ('stop', 'start') if stopstart else ('restart',)
1946+ for service_name in services_list:
1947+ if service_name in restart_functions:
1948+ restart_functions[service_name](service_name)
1949+ else:
1950+ for action in actions:
1951+ service(action, service_name)
1952+ return r
1953+
1954+
1955 def lsb_release():
1956 """Return /etc/lsb-release in a dict"""
1957 d = {}
1958@@ -396,36 +566,92 @@
1959 return(''.join(random_chars))
1960
1961
1962-def list_nics(nic_type):
1963- '''Return a list of nics of given type(s)'''
1964+def is_phy_iface(interface):
1965+ """Returns True if interface is not virtual, otherwise False."""
1966+ if interface:
1967+ sys_net = '/sys/class/net'
1968+ if os.path.isdir(sys_net):
1969+ for iface in glob.glob(os.path.join(sys_net, '*')):
1970+ if '/virtual/' in os.path.realpath(iface):
1971+ continue
1972+
1973+ if interface == os.path.basename(iface):
1974+ return True
1975+
1976+ return False
1977+
1978+
1979+def get_bond_master(interface):
1980+ """Returns bond master if interface is bond slave otherwise None.
1981+
1982+ NOTE: the provided interface is expected to be physical
1983+ """
1984+ if interface:
1985+ iface_path = '/sys/class/net/%s' % (interface)
1986+ if os.path.exists(iface_path):
1987+ if '/virtual/' in os.path.realpath(iface_path):
1988+ return None
1989+
1990+ master = os.path.join(iface_path, 'master')
1991+ if os.path.exists(master):
1992+ master = os.path.realpath(master)
1993+ # make sure it is a bond master
1994+ if os.path.exists(os.path.join(master, 'bonding')):
1995+ return os.path.basename(master)
1996+
1997+ return None
1998+
1999+
2000+def list_nics(nic_type=None):
2001+ """Return a list of nics of given type(s)"""
2002 if isinstance(nic_type, six.string_types):
2003 int_types = [nic_type]
2004 else:
2005 int_types = nic_type
2006+
2007 interfaces = []
2008- for int_type in int_types:
2009- cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
2010+ if nic_type:
2011+ for int_type in int_types:
2012+ cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
2013+ ip_output = subprocess.check_output(cmd).decode('UTF-8')
2014+ ip_output = ip_output.split('\n')
2015+ ip_output = (line for line in ip_output if line)
2016+ for line in ip_output:
2017+ if line.split()[1].startswith(int_type):
2018+ matched = re.search('.*: (' + int_type +
2019+ r'[0-9]+\.[0-9]+)@.*', line)
2020+ if matched:
2021+ iface = matched.groups()[0]
2022+ else:
2023+ iface = line.split()[1].replace(":", "")
2024+
2025+ if iface not in interfaces:
2026+ interfaces.append(iface)
2027+ else:
2028+ cmd = ['ip', 'a']
2029 ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
2030- ip_output = (line for line in ip_output if line)
2031+ ip_output = (line.strip() for line in ip_output if line)
2032+
2033+ key = re.compile(r'^[0-9]+:\s+(.+):')
2034 for line in ip_output:
2035- if line.split()[1].startswith(int_type):
2036- matched = re.search('.*: (' + int_type + r'[0-9]+\.[0-9]+)@.*', line)
2037- if matched:
2038- interface = matched.groups()[0]
2039- else:
2040- interface = line.split()[1].replace(":", "")
2041- interfaces.append(interface)
2042+ matched = re.search(key, line)
2043+ if matched:
2044+ iface = matched.group(1)
2045+ iface = iface.partition("@")[0]
2046+ if iface not in interfaces:
2047+ interfaces.append(iface)
2048
2049 return interfaces
2050
2051
2052 def set_nic_mtu(nic, mtu):
2053- '''Set MTU on a network interface'''
2054+ """Set the Maximum Transmission Unit (MTU) on a network interface."""
2055 cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
2056 subprocess.check_call(cmd)
2057
2058
2059 def get_nic_mtu(nic):
2060+ """Return the Maximum Transmission Unit (MTU) for a network interface."""
2061 cmd = ['ip', 'addr', 'show', nic]
2062 ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
2063 mtu = ""
2064@@ -437,6 +663,7 @@
2065
2066
2067 def get_nic_hwaddr(nic):
2068+ """Return the Media Access Control (MAC) for a network interface."""
2069 cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
2070 ip_output = subprocess.check_output(cmd).decode('UTF-8')
2071 hwaddr = ""
2072@@ -447,7 +674,7 @@
2073
2074
2075 def cmp_pkgrevno(package, revno, pkgcache=None):
2076- '''Compare supplied revno with the revno of the installed package
2077+ """Compare supplied revno with the revno of the installed package
2078
2079 * 1 => Installed revno is greater than supplied arg
2080 * 0 => Installed revno is the same as supplied arg
2081@@ -456,7 +683,7 @@
2082 This function imports apt_cache function from charmhelpers.fetch if
2083 the pkgcache argument is None. Be sure to add charmhelpers.fetch if
2084 you call this function, or pass an apt_pkg.Cache() instance.
2085- '''
2086+ """
2087 import apt_pkg
2088 if not pkgcache:
2089 from charmhelpers.fetch import apt_cache
2090@@ -466,15 +693,30 @@
2091
2092
2093 @contextmanager
2094-def chdir(d):
2095+def chdir(directory):
2096+ """Change the current working directory to a different directory for a code
2097+ block and return the previous directory after the block exits. Useful to
2098+ run commands from a specified directory.
2099+
2100+ :param str directory: The directory path to change to for this context.
2101+ """
2102 cur = os.getcwd()
2103 try:
2104- yield os.chdir(d)
2105+ yield os.chdir(directory)
2106 finally:
2107 os.chdir(cur)
2108
2109
2110-def chownr(path, owner, group, follow_links=True):
2111+def chownr(path, owner, group, follow_links=True, chowntopdir=False):
2112+ """Recursively change user and group ownership of files and directories
2113+ in given path. Doesn't chown path itself by default, only its children.
2114+
2115+ :param str path: The string path to start changing ownership.
2116+ :param str owner: The owner string to use when looking up the uid.
2117+ :param str group: The group string to use when looking up the gid.
2118+ :param bool follow_links: Also chown links if True
2119+ :param bool chowntopdir: Also chown path itself if True
2120+ """
2121 uid = pwd.getpwnam(owner).pw_uid
2122 gid = grp.getgrnam(group).gr_gid
2123 if follow_links:
2124@@ -482,6 +724,10 @@
2125 else:
2126 chown = os.lchown
2127
2128+ if chowntopdir:
2129+ broken_symlink = os.path.lexists(path) and not os.path.exists(path)
2130+ if not broken_symlink:
2131+ chown(path, uid, gid)
2132 for root, dirs, files in os.walk(path):
2133 for name in dirs + files:
2134 full = os.path.join(root, name)
2135@@ -491,4 +737,28 @@
2136
2137
2138 def lchownr(path, owner, group):
2139+ """Recursively change user and group ownership of files and directories
2140+ in a given path, not following symbolic links. See the documentation for
2141+ 'os.lchown' for more information.
2142+
2143+ :param str path: The string path to start changing ownership.
2144+ :param str owner: The owner string to use when looking up the uid.
2145+ :param str group: The group string to use when looking up the gid.
2146+ """
2147 chownr(path, owner, group, follow_links=False)
2148+
2149+
2150+def get_total_ram():
2151+ """The total amount of system RAM in bytes.
2152+
2153+ This is what is reported by the OS, and may be overcommitted when
2154+ there are multiple containers hosted on the same machine.
2155+ """
2156+ with open('/proc/meminfo', 'r') as f:
2157+ for line in f.readlines():
2158+ if line:
2159+ key, value, unit = line.split()
2160+ if key == 'MemTotal:':
2161+ assert unit == 'kB', 'Unknown unit'
2162+ return int(value) * 1024 # Classic, not KiB.
2163+ raise NotImplementedError()
2164
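The `restart_on_change` refactor above moves the checksum-compare-restart cycle into `restart_on_change_helper()`. Its behaviour can be demonstrated with a self-contained simulation; the hashing and restart callables below are stand-ins for `path_hash()` and `service()`, and the config paths are illustrative:

```python
import itertools
from collections import OrderedDict

# Core logic of restart_on_change_helper(): hash watched paths, run
# the wrapped function, restart only services whose files changed.
def run_with_restarts(fn, restart_map, hash_fn, restart_fn):
    before = {p: hash_fn(p) for p in restart_map}
    result = fn()
    changed = [restart_map[p] for p in restart_map
               if hash_fn(p) != before[p]]
    # flatten, preserving order and dropping duplicate service names
    for svc in OrderedDict.fromkeys(itertools.chain(*changed)):
        restart_fn(svc)
    return result

hashes = {'/etc/squid.conf': 1, '/etc/apache2.conf': 7}
actions = []

def reconfigure():
    hashes['/etc/squid.conf'] = 2  # simulate rewriting one config file
    return 'done'

out = run_with_restarts(
    reconfigure,
    {'/etc/squid.conf': ['squid3'], '/etc/apache2.conf': ['apache2']},
    lambda p: hashes[p],
    actions.append)
# only squid3 is restarted; apache2's file did not change
```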
2165=== added file 'lib/charmhelpers/core/hugepage.py'
2166--- lib/charmhelpers/core/hugepage.py 1970-01-01 00:00:00 +0000
2167+++ lib/charmhelpers/core/hugepage.py 2016-07-01 11:47:59 +0000
2168@@ -0,0 +1,71 @@
2169+# -*- coding: utf-8 -*-
2170+
2171+# Copyright 2014-2015 Canonical Limited.
2172+#
2173+# This file is part of charm-helpers.
2174+#
2175+# charm-helpers is free software: you can redistribute it and/or modify
2176+# it under the terms of the GNU Lesser General Public License version 3 as
2177+# published by the Free Software Foundation.
2178+#
2179+# charm-helpers is distributed in the hope that it will be useful,
2180+# but WITHOUT ANY WARRANTY; without even the implied warranty of
2181+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
2182+# GNU Lesser General Public License for more details.
2183+#
2184+# You should have received a copy of the GNU Lesser General Public License
2185+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
2186+
2187+import yaml
2188+from charmhelpers.core import fstab
2189+from charmhelpers.core import sysctl
2190+from charmhelpers.core.host import (
2191+ add_group,
2192+ add_user_to_group,
2193+ fstab_mount,
2194+ mkdir,
2195+)
2196+from charmhelpers.core.strutils import bytes_from_string
2197+from subprocess import check_output
2198+
2199+
2200+def hugepage_support(user, group='hugetlb', nr_hugepages=256,
2201+ max_map_count=65536, mnt_point='/run/hugepages/kvm',
2202+ pagesize='2MB', mount=True, set_shmmax=False):
2203+ """Enable hugepages on system.
2204+
2205+ Args:
2206+ user (str) -- Username to allow access to hugepages to
2207+ group (str) -- Group name to own hugepages
2208+ nr_hugepages (int) -- Number of pages to reserve
2209+ max_map_count (int) -- Number of Virtual Memory Areas a process can own
2210+ mnt_point (str) -- Directory to mount hugepages on
2211+ pagesize (str) -- Size of hugepages
2212+ mount (bool) -- Whether to mount hugepages
2213+ """
2214+ group_info = add_group(group)
2215+ gid = group_info.gr_gid
2216+ add_user_to_group(user, group)
2217+ if max_map_count < 2 * nr_hugepages:
2218+ max_map_count = 2 * nr_hugepages
2219+ sysctl_settings = {
2220+ 'vm.nr_hugepages': nr_hugepages,
2221+ 'vm.max_map_count': max_map_count,
2222+ 'vm.hugetlb_shm_group': gid,
2223+ }
2224+ if set_shmmax:
2225+ shmmax_current = int(check_output(['sysctl', '-n', 'kernel.shmmax']))
2226+ shmmax_minsize = bytes_from_string(pagesize) * nr_hugepages
2227+ if shmmax_minsize > shmmax_current:
2228+ sysctl_settings['kernel.shmmax'] = shmmax_minsize
2229+ sysctl.create(yaml.dump(sysctl_settings), '/etc/sysctl.d/10-hugepage.conf')
2230+ mkdir(mnt_point, owner='root', group='root', perms=0o755, force=False)
2231+ lfstab = fstab.Fstab()
2232+ fstab_entry = lfstab.get_entry_by_attr('mountpoint', mnt_point)
2233+ if fstab_entry:
2234+ lfstab.remove_entry(fstab_entry)
2235+ entry = lfstab.Entry('nodev', mnt_point, 'hugetlbfs',
2236+ 'mode=1770,gid={},pagesize={}'.format(gid, pagesize), 0, 0)
2237+ lfstab.add_entry(entry)
2238+ if mount:
2239+ fstab_mount(mnt_point)
2240
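The sysctl values `hugepage_support()` writes to `/etc/sysctl.d/10-hugepage.conf` follow a small piece of sizing arithmetic; a standalone sketch of it (parameter defaults and the `page_bytes` stand-in for `bytes_from_string(pagesize)` are illustrative):

```python
# Sizing logic mirrored from hugepage_support() in this diff.
def hugepage_sysctl(nr_hugepages=256, max_map_count=65536, gid=1001,
                    set_shmmax=False, shmmax_current=0,
                    page_bytes=2 * 1024 ** 2):
    # each hugepage may need up to two VMAs, so raise max_map_count
    if max_map_count < 2 * nr_hugepages:
        max_map_count = 2 * nr_hugepages
    settings = {
        'vm.nr_hugepages': nr_hugepages,
        'vm.max_map_count': max_map_count,
        'vm.hugetlb_shm_group': gid,
    }
    if set_shmmax:
        # shmmax must cover the whole hugepage pool
        shmmax_minsize = page_bytes * nr_hugepages
        if shmmax_minsize > shmmax_current:
            settings['kernel.shmmax'] = shmmax_minsize
    return settings

settings = hugepage_sysctl(nr_hugepages=40000, set_shmmax=True)
# 2 * 40000 exceeds the 65536 default, so max_map_count is raised
```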
2241=== added file 'lib/charmhelpers/core/kernel.py'
2242--- lib/charmhelpers/core/kernel.py 1970-01-01 00:00:00 +0000
2243+++ lib/charmhelpers/core/kernel.py 2016-07-01 11:47:59 +0000
2244@@ -0,0 +1,68 @@
2245+#!/usr/bin/env python
2246+# -*- coding: utf-8 -*-
2247+
2248+# Copyright 2014-2015 Canonical Limited.
2249+#
2250+# This file is part of charm-helpers.
2251+#
2252+# charm-helpers is free software: you can redistribute it and/or modify
2253+# it under the terms of the GNU Lesser General Public License version 3 as
2254+# published by the Free Software Foundation.
2255+#
2256+# charm-helpers is distributed in the hope that it will be useful,
2257+# but WITHOUT ANY WARRANTY; without even the implied warranty of
2258+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
2259+# GNU Lesser General Public License for more details.
2260+#
2261+# You should have received a copy of the GNU Lesser General Public License
2262+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
2263+
2264+__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
2265+
2266+from charmhelpers.core.hookenv import (
2267+ log,
2268+ INFO
2269+)
2270+
2271+from subprocess import check_call, check_output
2272+import re
2273+
2274+
2275+def modprobe(module, persist=True):
2276+ """Load a kernel module and configure for auto-load on reboot."""
2277+ cmd = ['modprobe', module]
2278+
2279+ log('Loading kernel module %s' % module, level=INFO)
2280+
2281+ check_call(cmd)
2282+ if persist:
2283+ with open('/etc/modules', 'r+') as modules:
2284+ if module not in modules.read():
2285+ modules.write('%s\n' % module)
2286+
2287+
2288+def rmmod(module, force=False):
2289+ """Remove a module from the linux kernel"""
2290+ cmd = ['rmmod']
2291+ if force:
2292+ cmd.append('-f')
2293+ cmd.append(module)
2294+ log('Removing kernel module %s' % module, level=INFO)
2295+ return check_call(cmd)
2296+
2297+
2298+def lsmod():
2299+ """Shows what kernel modules are currently loaded"""
2300+ return check_output(['lsmod'],
2301+ universal_newlines=True)
2302+
2303+
2304+def is_module_loaded(module):
2305+ """Checks if a kernel module is already loaded"""
2306+ matches = re.findall('^%s[ ]+' % module, lsmod(), re.M)
2307+ return len(matches) > 0
2308+
2309+
2310+def update_initramfs(version='all'):
2311+ """Updates an initramfs image"""
2312+ return check_call(["update-initramfs", "-k", version, "-u"])
2313
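`is_module_loaded()` above greps `lsmod` output with an anchored pattern; a sketch against a canned capture shows why the `^` anchor and trailing `[ ]+` matter (the module names in the sample output are illustrative):

```python
import re

LSMOD_OUTPUT = """Module                  Size  Used by
ip_tables              32768  1
nf_nat                 49152  0
"""

# Same pattern as the new kernel.py helper, with lsmod() stubbed out.
def is_module_loaded(module, lsmod_output=LSMOD_OUTPUT):
    return bool(re.findall('^%s[ ]+' % module, lsmod_output, re.M))
```

Without the anchor and trailing space, `'ip'` would match `'ip_tables'` and report an unloaded module as loaded.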
2314=== modified file 'lib/charmhelpers/core/services/helpers.py'
2315--- lib/charmhelpers/core/services/helpers.py 2015-07-23 11:40:02 +0000
2316+++ lib/charmhelpers/core/services/helpers.py 2016-07-01 11:47:59 +0000
2317@@ -16,7 +16,9 @@
2318
2319 import os
2320 import yaml
2321+
2322 from charmhelpers.core import hookenv
2323+from charmhelpers.core import host
2324 from charmhelpers.core import templating
2325
2326 from charmhelpers.core.services.base import ManagerCallback
2327@@ -240,27 +242,50 @@
2328
2329 :param str source: The template source file, relative to
2330 `$CHARM_DIR/templates`
2331- :param str target: The target to write the rendered template to
2332+
2333+ :param str target: The target to write the rendered template to (or None)
2334 :param str owner: The owner of the rendered file
2335 :param str group: The group of the rendered file
2336 :param int perms: The permissions of the rendered file
2337+ :param partial on_change_action: functools partial to be executed when
2338+ rendered file changes
2339+ :param jinja2 loader template_loader: A jinja2 template loader
2340
2341+ :return str: The rendered template
2342 """
2343 def __init__(self, source, target,
2344- owner='root', group='root', perms=0o444):
2345+ owner='root', group='root', perms=0o444,
2346+ on_change_action=None, template_loader=None):
2347 self.source = source
2348 self.target = target
2349 self.owner = owner
2350 self.group = group
2351 self.perms = perms
2352+ self.on_change_action = on_change_action
2353+ self.template_loader = template_loader
2354
2355 def __call__(self, manager, service_name, event_name):
2356+ pre_checksum = ''
2357+ if self.on_change_action and os.path.isfile(self.target):
2358+ pre_checksum = host.file_hash(self.target)
2359 service = manager.get_service(service_name)
2360- context = {}
2361+ context = {'ctx': {}}
2362 for ctx in service.get('required_data', []):
2363 context.update(ctx)
2364- templating.render(self.source, self.target, context,
2365- self.owner, self.group, self.perms)
2366+ context['ctx'].update(ctx)
2367+
2368+ result = templating.render(self.source, self.target, context,
2369+ self.owner, self.group, self.perms,
2370+ template_loader=self.template_loader)
2371+ if self.on_change_action:
2372+ if pre_checksum == host.file_hash(self.target):
2373+ hookenv.log(
2374+ 'No change detected: {}'.format(self.target),
2375+ hookenv.DEBUG)
2376+ else:
2377+ self.on_change_action()
2378+
2379+ return result
2380
2381
2382 # Convenience aliases for templates
2383
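The `on_change_action` support added to `render_template` above compares a checksum of the target before and after rendering and only fires the callback on a real change. A self-contained sketch of that pattern (the in-memory `fs` dict stands in for the filesystem, and names are illustrative):

```python
import hashlib

# Hash-before / hash-after gate, mirroring the new __call__ logic.
def write_if_changed(fs, target, new_content, on_change):
    pre = hashlib.md5(fs.get(target, b'')).hexdigest()
    fs[target] = new_content
    post = hashlib.md5(new_content).hexdigest()
    if pre != post:
        on_change()

fs = {'/etc/squid.conf': b'old'}
calls = []
write_if_changed(fs, '/etc/squid.conf', b'new',
                 lambda: calls.append('restart'))
# rendering identical content a second time does not fire the action
write_if_changed(fs, '/etc/squid.conf', b'new',
                 lambda: calls.append('restart'))
```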
2384=== modified file 'lib/charmhelpers/core/strutils.py'
2385--- lib/charmhelpers/core/strutils.py 2015-07-23 11:40:02 +0000
2386+++ lib/charmhelpers/core/strutils.py 2016-07-01 11:47:59 +0000
2387@@ -18,6 +18,7 @@
2388 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
2389
2390 import six
2391+import re
2392
2393
2394 def bool_from_string(value):
2395@@ -40,3 +41,32 @@
2396
2397 msg = "Unable to interpret string value '%s' as boolean" % (value)
2398 raise ValueError(msg)
2399+
2400+
2401+def bytes_from_string(value):
2402+ """Interpret human readable string value as bytes.
2403+
2404+ Returns int
2405+ """
2406+ BYTE_POWER = {
2407+ 'K': 1,
2408+ 'KB': 1,
2409+ 'M': 2,
2410+ 'MB': 2,
2411+ 'G': 3,
2412+ 'GB': 3,
2413+ 'T': 4,
2414+ 'TB': 4,
2415+ 'P': 5,
2416+ 'PB': 5,
2417+ }
2418+ if isinstance(value, six.string_types):
2419+ value = six.text_type(value)
2420+ else:
2421+ msg = "Unable to interpret non-string value '%s' as boolean" % (value)
2422+ raise ValueError(msg)
2423+ matches = re.match("([0-9]+)([a-zA-Z]+)", value)
2424+ if not matches:
2425+ msg = "Unable to interpret string value '%s' as bytes" % (value)
2426+ raise ValueError(msg)
2427+ return int(matches.group(1)) * (1024 ** BYTE_POWER[matches.group(2)])
2428
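The new `bytes_from_string()` can be exercised standalone; this sketch drops the `six` shim for brevity but keeps the same suffix table and arithmetic:

```python
import re

BYTE_POWER = {'K': 1, 'KB': 1, 'M': 2, 'MB': 2, 'G': 3, 'GB': 3,
              'T': 4, 'TB': 4, 'P': 5, 'PB': 5}

# Mirrors the strutils helper added in this diff, minus six.
def bytes_from_string(value):
    if not isinstance(value, str):
        raise ValueError("Unable to interpret non-string value %r" % (value,))
    matches = re.match(r'([0-9]+)([a-zA-Z]+)', value)
    if not matches:
        raise ValueError("Unable to interpret %r as bytes" % (value,))
    return int(matches.group(1)) * 1024 ** BYTE_POWER[matches.group(2)]
```

Note that, as in the diff, an unrecognised suffix raises `KeyError` rather than `ValueError`.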
2429=== modified file 'lib/charmhelpers/core/templating.py'
2430--- lib/charmhelpers/core/templating.py 2015-06-03 10:05:41 +0000
2431+++ lib/charmhelpers/core/templating.py 2016-07-01 11:47:59 +0000
2432@@ -21,13 +21,14 @@
2433
2434
2435 def render(source, target, context, owner='root', group='root',
2436- perms=0o444, templates_dir=None, encoding='UTF-8'):
2437+ perms=0o444, templates_dir=None, encoding='UTF-8', template_loader=None):
2438 """
2439 Render a template.
2440
2441 The `source` path, if not absolute, is relative to the `templates_dir`.
2442
2443- The `target` path should be absolute.
2444+ The `target` path should be absolute. It can also be `None`, in which
2445+ case no file will be written.
2446
2447 The context should be a dict containing the values to be replaced in the
2448 template.
2449@@ -36,6 +37,9 @@
2450
2451 If omitted, `templates_dir` defaults to the `templates` folder in the charm.
2452
2453+ The rendered template will be written to the file as well as being returned
2454+ as a string.
2455+
2456 Note: Using this requires python-jinja2; if it is not installed, calling
2457 this will attempt to use charmhelpers.fetch.apt_install to install it.
2458 """
2459@@ -52,17 +56,26 @@
2460 apt_install('python-jinja2', fatal=True)
2461 from jinja2 import FileSystemLoader, Environment, exceptions
2462
2463- if templates_dir is None:
2464- templates_dir = os.path.join(hookenv.charm_dir(), 'templates')
2465- loader = Environment(loader=FileSystemLoader(templates_dir))
2466+ if template_loader:
2467+ template_env = Environment(loader=template_loader)
2468+ else:
2469+ if templates_dir is None:
2470+ templates_dir = os.path.join(hookenv.charm_dir(), 'templates')
2471+ template_env = Environment(loader=FileSystemLoader(templates_dir))
2472 try:
2473 source = source
2474- template = loader.get_template(source)
2475+ template = template_env.get_template(source)
2476 except exceptions.TemplateNotFound as e:
2477 hookenv.log('Could not load template %s from %s.' %
2478 (source, templates_dir),
2479 level=hookenv.ERROR)
2480 raise e
2481 content = template.render(context)
2482- host.mkdir(os.path.dirname(target), owner, group, perms=0o755)
2483- host.write_file(target, content.encode(encoding), owner, group, perms)
2484+ if target is not None:
2485+ target_dir = os.path.dirname(target)
2486+ if not os.path.exists(target_dir):
2487+ # This is a terrible default directory permission, as the file
2488+ # or its siblings will often contain secrets.
2489+ host.mkdir(os.path.dirname(target), owner, group, perms=0o755)
2490+ host.write_file(target, content.encode(encoding), owner, group, perms)
2491+ return content
2492
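The templating change above makes `render()` return the rendered string and skip all filesystem writes when `target` is `None`. That contract can be sketched with the stdlib `string.Template` standing in for jinja2, so the example stays self-contained:

```python
import os
from string import Template

# Sketch of the new render() contract: always return the content,
# only write (and create the parent directory) when target is set.
def render(source_text, target, context):
    content = Template(source_text).substitute(context)
    if target is not None:
        os.makedirs(os.path.dirname(target), exist_ok=True)
        with open(target, 'w') as f:
            f.write(content)
    return content

out = render('listen ${port};', None, {'port': 3128})
# returns the rendered string without touching the filesystem
```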
2493=== modified file 'lib/charmhelpers/core/unitdata.py'
2494--- lib/charmhelpers/core/unitdata.py 2015-07-23 11:40:02 +0000
2495+++ lib/charmhelpers/core/unitdata.py 2016-07-01 11:47:59 +0000
2496@@ -152,6 +152,7 @@
2497 import collections
2498 import contextlib
2499 import datetime
2500+import itertools
2501 import json
2502 import os
2503 import pprint
2504@@ -164,8 +165,7 @@
2505 class Storage(object):
2506 """Simple key value database for local unit state within charms.
2507
2508- Modifications are automatically committed at hook exit. That's
2509- currently regardless of exit code.
2510+ Modifications are not persisted unless :meth:`flush` is called.
2511
2512 To support dicts, lists, integer, floats, and booleans values
2513 are automatically json encoded/decoded.
2514@@ -173,8 +173,11 @@
2515 def __init__(self, path=None):
2516 self.db_path = path
2517 if path is None:
2518- self.db_path = os.path.join(
2519- os.environ.get('CHARM_DIR', ''), '.unit-state.db')
2520+ if 'UNIT_STATE_DB' in os.environ:
2521+ self.db_path = os.environ['UNIT_STATE_DB']
2522+ else:
2523+ self.db_path = os.path.join(
2524+ os.environ.get('CHARM_DIR', ''), '.unit-state.db')
2525 self.conn = sqlite3.connect('%s' % self.db_path)
2526 self.cursor = self.conn.cursor()
2527 self.revision = None
2528@@ -189,15 +192,8 @@
2529 self.conn.close()
2530 self._closed = True
2531
2532- def _scoped_query(self, stmt, params=None):
2533- if params is None:
2534- params = []
2535- return stmt, params
2536-
2537 def get(self, key, default=None, record=False):
2538- self.cursor.execute(
2539- *self._scoped_query(
2540- 'select data from kv where key=?', [key]))
2541+ self.cursor.execute('select data from kv where key=?', [key])
2542 result = self.cursor.fetchone()
2543 if not result:
2544 return default
2545@@ -206,33 +202,81 @@
2546 return json.loads(result[0])
2547
2548 def getrange(self, key_prefix, strip=False):
2549- stmt = "select key, data from kv where key like '%s%%'" % key_prefix
2550- self.cursor.execute(*self._scoped_query(stmt))
2551+ """
2552+ Get a range of keys starting with a common prefix as a mapping of
2553+ keys to values.
2554+
2555+ :param str key_prefix: Common prefix among all keys
2556+ :param bool strip: Optionally strip the common prefix from the key
2557+ names in the returned dict
2558+ :return dict: A (possibly empty) dict of key-value mappings
2559+ """
2560+ self.cursor.execute("select key, data from kv where key like ?",
2561+ ['%s%%' % key_prefix])
2562 result = self.cursor.fetchall()
2563
2564 if not result:
2565- return None
2566+ return {}
2567 if not strip:
2568 key_prefix = ''
2569 return dict([
2570 (k[len(key_prefix):], json.loads(v)) for k, v in result])
2571
2572 def update(self, mapping, prefix=""):
2573+ """
2574+ Set the values of multiple keys at once.
2575+
2576+ :param dict mapping: Mapping of keys to values
2577+ :param str prefix: Optional prefix to apply to all keys in `mapping`
2578+ before setting
2579+ """
2580 for k, v in mapping.items():
2581 self.set("%s%s" % (prefix, k), v)
2582
2583 def unset(self, key):
2584+ """
2585+ Remove a key from the database entirely.
2586+ """
2587 self.cursor.execute('delete from kv where key=?', [key])
2588 if self.revision and self.cursor.rowcount:
2589 self.cursor.execute(
2590 'insert into kv_revisions values (?, ?, ?)',
2591 [key, self.revision, json.dumps('DELETED')])
2592
2593+ def unsetrange(self, keys=None, prefix=""):
2594+ """
2595+ Remove a range of keys starting with a common prefix, from the database
2596+ entirely.
2597+
2598+ :param list keys: List of keys to remove.
2599+ :param str prefix: Optional prefix to apply to all keys in ``keys``
2600+ before removing.
2601+ """
2602+ if keys is not None:
2603+ keys = ['%s%s' % (prefix, key) for key in keys]
2604+ self.cursor.execute('delete from kv where key in (%s)' % ','.join(['?'] * len(keys)), keys)
2605+ if self.revision and self.cursor.rowcount:
2606+ self.cursor.execute(
2607+ 'insert into kv_revisions values %s' % ','.join(['(?, ?, ?)'] * len(keys)),
2608+ list(itertools.chain.from_iterable((key, self.revision, json.dumps('DELETED')) for key in keys)))
2609+ else:
2610+ self.cursor.execute('delete from kv where key like ?',
2611+ ['%s%%' % prefix])
2612+ if self.revision and self.cursor.rowcount:
2613+ self.cursor.execute(
2614+ 'insert into kv_revisions values (?, ?, ?)',
2615+ ['%s%%' % prefix, self.revision, json.dumps('DELETED')])
2616+
2617 def set(self, key, value):
2618+ """
2619+ Set a value in the database.
2620+
2621+ :param str key: Key to set the value for
2622+ :param value: Any JSON-serializable value to be set
2623+ """
2624 serialized = json.dumps(value)
2625
2626- self.cursor.execute(
2627- 'select data from kv where key=?', [key])
2628+ self.cursor.execute('select data from kv where key=?', [key])
2629 exists = self.cursor.fetchone()
2630
2631 # Skip mutations to the same value
2632
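A notable unitdata change above: `getrange()` now binds the prefix as a SQL parameter (`key like ?`) instead of interpolating it into the statement, and returns `{}` rather than `None` when nothing matches. A minimal in-memory table demonstrates both (keys and values are illustrative, and JSON decoding is omitted for brevity):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('create table kv (key text primary key, data text)')
conn.executemany('insert into kv values (?, ?)',
                 [('net.eth0', 'up'), ('net.eth1', 'down'),
                  ('disk.sda', 'ok')])

# Parameterized prefix query, mirroring the new getrange().
def getrange(prefix, strip=False):
    rows = conn.execute('select key, data from kv where key like ?',
                        ['%s%%' % prefix]).fetchall()
    if not strip:
        prefix = ''
    # empty dict, not None, when no keys share the prefix
    return {k[len(prefix):]: v for k, v in rows}
```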
2633=== modified file 'lib/charmhelpers/fetch/__init__.py'
2634--- lib/charmhelpers/fetch/__init__.py 2015-07-23 11:40:02 +0000
2635+++ lib/charmhelpers/fetch/__init__.py 2016-07-01 11:47:59 +0000
2636@@ -90,6 +90,30 @@
2637 'kilo/proposed': 'trusty-proposed/kilo',
2638 'trusty-kilo/proposed': 'trusty-proposed/kilo',
2639 'trusty-proposed/kilo': 'trusty-proposed/kilo',
2640+ # Liberty
2641+ 'liberty': 'trusty-updates/liberty',
2642+ 'trusty-liberty': 'trusty-updates/liberty',
2643+ 'trusty-liberty/updates': 'trusty-updates/liberty',
2644+ 'trusty-updates/liberty': 'trusty-updates/liberty',
2645+ 'liberty/proposed': 'trusty-proposed/liberty',
2646+ 'trusty-liberty/proposed': 'trusty-proposed/liberty',
2647+ 'trusty-proposed/liberty': 'trusty-proposed/liberty',
2648+ # Mitaka
2649+ 'mitaka': 'trusty-updates/mitaka',
2650+ 'trusty-mitaka': 'trusty-updates/mitaka',
2651+ 'trusty-mitaka/updates': 'trusty-updates/mitaka',
2652+ 'trusty-updates/mitaka': 'trusty-updates/mitaka',
2653+ 'mitaka/proposed': 'trusty-proposed/mitaka',
2654+ 'trusty-mitaka/proposed': 'trusty-proposed/mitaka',
2655+ 'trusty-proposed/mitaka': 'trusty-proposed/mitaka',
2656+ # Newton
2657+ 'newton': 'xenial-updates/newton',
2658+ 'xenial-newton': 'xenial-updates/newton',
2659+ 'xenial-newton/updates': 'xenial-updates/newton',
2660+ 'xenial-updates/newton': 'xenial-updates/newton',
2661+ 'newton/proposed': 'xenial-proposed/newton',
2662+ 'xenial-newton/proposed': 'xenial-proposed/newton',
2663+ 'xenial-proposed/newton': 'xenial-proposed/newton',
2664 }
2665
2666 # The order of this list is very important. Handlers should be listed in from
2667@@ -217,12 +241,12 @@
2668
2669 def apt_mark(packages, mark, fatal=False):
2670 """Flag one or more packages using apt-mark"""
2671+ log("Marking {} as {}".format(packages, mark))
2672 cmd = ['apt-mark', mark]
2673 if isinstance(packages, six.string_types):
2674 cmd.append(packages)
2675 else:
2676 cmd.extend(packages)
2677- log("Holding {}".format(packages))
2678
2679 if fatal:
2680 subprocess.check_call(cmd, universal_newlines=True)
2681@@ -403,7 +427,7 @@
2682 importlib.import_module(package),
2683 classname)
2684 plugin_list.append(handler_class())
2685- except (ImportError, AttributeError):
2686+ except NotImplementedError:
2687 # Skip missing plugins so that they can be ommitted from
2688 # installation if desired
2689 log("FetchHandler {} not found, skipping plugin".format(
2690
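The exception change in the hunk above is deliberate: fetch handlers now raise NotImplementedError at import time when their underlying tool cannot be installed, and the plugin loader skips those handlers instead of aborting the charm. A simplified sketch of that loader pattern (the function name and structure here are illustrative, not the charm-helpers code):

```python
import importlib


def load_available_plugins(module_names):
    """Import each plugin module, skipping any that raise NotImplementedError."""
    plugins = []
    for name in module_names:
        try:
            plugins.append(importlib.import_module(name))
        except NotImplementedError:
            # The handler's tool (bzr, git, ...) is unavailable; omit it.
            pass
    return plugins
```

This keeps optional fetch backends optional: a unit without bzr installed simply loses the bzr handler rather than failing every hook.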
2691=== modified file 'lib/charmhelpers/fetch/archiveurl.py'
2692--- lib/charmhelpers/fetch/archiveurl.py 2015-07-23 11:40:02 +0000
2693+++ lib/charmhelpers/fetch/archiveurl.py 2016-07-01 11:47:59 +0000
2694@@ -108,7 +108,7 @@
2695 install_opener(opener)
2696 response = urlopen(source)
2697 try:
2698- with open(dest, 'w') as dest_file:
2699+ with open(dest, 'wb') as dest_file:
2700 dest_file.write(response.read())
2701 except Exception as e:
2702 if os.path.isfile(dest):
2703
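The `'w'` to `'wb'` change above matters on Python 3, where `urlopen(...).read()` returns `bytes` and a text-mode file refuses them. A quick local illustration of the difference (no network involved; the payload is a stand-in for a downloaded response body):

```python
import os
import tempfile

payload = b'\x00\x01binary payload'  # stand-in for response.read()

path = os.path.join(tempfile.mkdtemp(), 'dest')

# Binary mode accepts bytes, as the fixed handler uses.
with open(path, 'wb') as dest_file:
    dest_file.write(payload)

# Text mode does not: writing bytes raises TypeError on Python 3.
try:
    with open(path + '.txt', 'w') as dest_file:
        dest_file.write(payload)
    text_mode_ok = True
except TypeError:
    text_mode_ok = False
```

Under Python 2 both modes silently accepted the payload, which is why the bug only surfaced with the python3 port.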
2704=== modified file 'lib/charmhelpers/fetch/bzrurl.py'
2705--- lib/charmhelpers/fetch/bzrurl.py 2015-06-03 10:05:41 +0000
2706+++ lib/charmhelpers/fetch/bzrurl.py 2016-07-01 11:47:59 +0000
2707@@ -15,60 +15,50 @@
2708 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
2709
2710 import os
2711+from subprocess import check_call
2712 from charmhelpers.fetch import (
2713 BaseFetchHandler,
2714- UnhandledSource
2715+ UnhandledSource,
2716+ filter_installed_packages,
2717+ apt_install,
2718 )
2719 from charmhelpers.core.host import mkdir
2720
2721-import six
2722-if six.PY3:
2723- raise ImportError('bzrlib does not support Python3')
2724
2725-try:
2726- from bzrlib.branch import Branch
2727- from bzrlib import bzrdir, workingtree, errors
2728-except ImportError:
2729- from charmhelpers.fetch import apt_install
2730- apt_install("python-bzrlib")
2731- from bzrlib.branch import Branch
2732- from bzrlib import bzrdir, workingtree, errors
2733+if filter_installed_packages(['bzr']) != []:
2734+ apt_install(['bzr'])
2735+ if filter_installed_packages(['bzr']) != []:
2736+ raise NotImplementedError('Unable to install bzr')
2737
2738
2739 class BzrUrlFetchHandler(BaseFetchHandler):
2740 """Handler for bazaar branches via generic and lp URLs"""
2741 def can_handle(self, source):
2742 url_parts = self.parse_url(source)
2743- if url_parts.scheme not in ('bzr+ssh', 'lp'):
2744+ if url_parts.scheme not in ('bzr+ssh', 'lp', ''):
2745 return False
2746+ elif not url_parts.scheme:
2747+ return os.path.exists(os.path.join(source, '.bzr'))
2748 else:
2749 return True
2750
2751 def branch(self, source, dest):
2752- url_parts = self.parse_url(source)
2753- # If we use lp:branchname scheme we need to load plugins
2754 if not self.can_handle(source):
2755 raise UnhandledSource("Cannot handle {}".format(source))
2756- if url_parts.scheme == "lp":
2757- from bzrlib.plugin import load_plugins
2758- load_plugins()
2759- try:
2760- local_branch = bzrdir.BzrDir.create_branch_convenience(dest)
2761- except errors.AlreadyControlDirError:
2762- local_branch = Branch.open(dest)
2763- try:
2764- remote_branch = Branch.open(source)
2765- remote_branch.push(local_branch)
2766- tree = workingtree.WorkingTree.open(dest)
2767- tree.update()
2768- except Exception as e:
2769- raise e
2770+ if os.path.exists(dest):
2771+ check_call(['bzr', 'pull', '--overwrite', '-d', dest, source])
2772+ else:
2773+ check_call(['bzr', 'branch', source, dest])
2774
2775- def install(self, source):
2776+ def install(self, source, dest=None):
2777 url_parts = self.parse_url(source)
2778 branch_name = url_parts.path.strip("/").split("/")[-1]
2779- dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
2780- branch_name)
2781+ if dest:
2782+ dest_dir = os.path.join(dest, branch_name)
2783+ else:
2784+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
2785+ branch_name)
2786+
2787 if not os.path.exists(dest_dir):
2788 mkdir(dest_dir, perms=0o755)
2789 try:
2790
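Dropping bzrlib (Python 2 only) in favour of the bzr CLI reduces branching to plain command construction: pull with `--overwrite` into an existing checkout, otherwise create a fresh branch. A sketch of just the command-building logic (the helper name is hypothetical; the real handler passes these lists straight to `check_call`):

```python
import os


def bzr_fetch_command(source, dest):
    """Return the bzr CLI command for fetching source into dest."""
    if os.path.exists(dest):
        # Existing checkout: update it in place.
        return ['bzr', 'pull', '--overwrite', '-d', dest, source]
    # No checkout yet: create one.
    return ['bzr', 'branch', source, dest]
```

Separating command construction from execution also makes the branch/pull decision trivially testable without bzr installed.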
2791=== modified file 'lib/charmhelpers/fetch/giturl.py'
2792--- lib/charmhelpers/fetch/giturl.py 2015-07-23 11:40:02 +0000
2793+++ lib/charmhelpers/fetch/giturl.py 2016-07-01 11:47:59 +0000
2794@@ -15,24 +15,18 @@
2795 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
2796
2797 import os
2798+from subprocess import check_call, CalledProcessError
2799 from charmhelpers.fetch import (
2800 BaseFetchHandler,
2801- UnhandledSource
2802+ UnhandledSource,
2803+ filter_installed_packages,
2804+ apt_install,
2805 )
2806-from charmhelpers.core.host import mkdir
2807-
2808-import six
2809-if six.PY3:
2810- raise ImportError('GitPython does not support Python 3')
2811-
2812-try:
2813- from git import Repo
2814-except ImportError:
2815- from charmhelpers.fetch import apt_install
2816- apt_install("python-git")
2817- from git import Repo
2818-
2819-from git.exc import GitCommandError # noqa E402
2820+
2821+if filter_installed_packages(['git']) != []:
2822+ apt_install(['git'])
2823+ if filter_installed_packages(['git']) != []:
2824+ raise NotImplementedError('Unable to install git')
2825
2826
2827 class GitUrlFetchHandler(BaseFetchHandler):
2828@@ -40,19 +34,24 @@
2829 def can_handle(self, source):
2830 url_parts = self.parse_url(source)
2831 # TODO (mattyw) no support for ssh git@ yet
2832- if url_parts.scheme not in ('http', 'https', 'git'):
2833+ if url_parts.scheme not in ('http', 'https', 'git', ''):
2834 return False
2835+ elif not url_parts.scheme:
2836+ return os.path.exists(os.path.join(source, '.git'))
2837 else:
2838 return True
2839
2840- def clone(self, source, dest, branch, depth=None):
2841+ def clone(self, source, dest, branch="master", depth=None):
2842 if not self.can_handle(source):
2843 raise UnhandledSource("Cannot handle {}".format(source))
2844
2845- if depth:
2846- Repo.clone_from(source, dest, branch=branch, depth=depth)
2847+ if os.path.exists(dest):
2848+ cmd = ['git', '-C', dest, 'pull', source, branch]
2849 else:
2850- Repo.clone_from(source, dest, branch=branch)
2851+ cmd = ['git', 'clone', source, dest, '--branch', branch]
2852+ if depth:
2853+ cmd.extend(['--depth', depth])
2854+ check_call(cmd)
2855
2856 def install(self, source, branch="master", dest=None, depth=None):
2857 url_parts = self.parse_url(source)
2858@@ -62,11 +61,9 @@
2859 else:
2860 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
2861 branch_name)
2862- if not os.path.exists(dest_dir):
2863- mkdir(dest_dir, perms=0o755)
2864 try:
2865 self.clone(source, dest_dir, branch, depth)
2866- except GitCommandError as e:
2867+ except CalledProcessError as e:
2868 raise UnhandledSource(e)
2869 except OSError as e:
2870 raise UnhandledSource(e.strerror)
2871
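The widened `can_handle()` above also accepts scheme-less sources, treating them as local paths that qualify only when they contain a `.git` directory. A standalone sketch of that check (function name is illustrative):

```python
import os
from urllib.parse import urlparse


def can_handle_git(source):
    """Accept http/https/git URLs, or a local path with a .git directory."""
    scheme = urlparse(source).scheme
    if scheme not in ('http', 'https', 'git', ''):
        return False
    if not scheme:
        # Scheme-less source: only a real local git checkout qualifies.
        return os.path.exists(os.path.join(source, '.git'))
    return True
```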
2872=== modified file 'lib/ubuntu_repository_cache/apache.py'
2873--- lib/ubuntu_repository_cache/apache.py 2015-04-29 13:09:48 +0000
2874+++ lib/ubuntu_repository_cache/apache.py 2016-07-01 11:47:59 +0000
2875@@ -13,7 +13,7 @@
2876 host,
2877 templating
2878 )
2879-import util
2880+from . import util
2881 # pylint: enable=F0401
2882
2883
2884
2885=== modified file 'lib/ubuntu_repository_cache/mirror.py'
2886--- lib/ubuntu_repository_cache/mirror.py 2015-09-28 20:43:45 +0000
2887+++ lib/ubuntu_repository_cache/mirror.py 2016-07-01 11:47:59 +0000
2888@@ -22,15 +22,15 @@
2889 host,
2890 )
2891
2892-import service
2893-import util
2894+from . import service
2895+from . import util
2896 # pylint: enable=F0401
2897
2898 LOG = hookenv.log
2899 SERVICE = 'ubuntu-repository-cache'
2900
2901
2902-class ContentBad(StandardError):
2903+class ContentBad(Exception):
2904 '''Mirror content is inconsistent'''
2905 pass
2906
2907@@ -202,7 +202,7 @@
2908 # link_active will not exist on first run, need to create it
2909 try:
2910 os.stat(link_active)
2911- except OSError, err:
2912+ except OSError as err:
2913 if err.errno is errno.ENOENT:
2914 host.symlink(dest, link_active)
2915 else:
2916@@ -211,7 +211,7 @@
2917 # The ubuntu_next link indicates the latest mirrored content
2918 try:
2919 os.stat(link_next)
2920- except OSError, err:
2921+ except OSError as err:
2922 if err.errno is errno.ENOENT:
2923 host.symlink(dest, link_next)
2924 else:
2925@@ -282,8 +282,11 @@
2926
2927 with open(filepath, 'rb') as infile:
2928 sha256 = hashlib.sha256()
2929- for data in infile.read():
2930- sha256.update(data)
2931+ while True:
2932+ buf = infile.read(1024*1024*1024)
2933+ if not buf:
2934+ break
2935+ sha256.update(buf)
2936 digest = sha256.hexdigest()
2937 if digest != files[filename]['SHA256']:
2938 file_mtime = os.stat(filepath).st_mtime
2939
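The hashing fix above is worth dwelling on: the old `for data in infile.read()` iterated over an already-fully-read file, and on Python 3 iterating `bytes` yields ints, which `sha256.update()` rejects. Reading in fixed-size chunks fixes both the memory use and the type error. As a standalone sketch (the chunk size here is illustrative; the charm reads 1 GiB at a time):

```python
import hashlib


def sha256_of_file(filepath, chunk_size=1024 * 1024):
    """Hash a file incrementally so huge mirror files never sit fully in RAM."""
    sha256 = hashlib.sha256()
    with open(filepath, 'rb') as infile:
        while True:
            buf = infile.read(chunk_size)
            if not buf:
                break
            sha256.update(buf)
    return sha256.hexdigest()
```

Incremental `update()` calls produce exactly the same digest as hashing the whole file in one shot.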
2940=== modified file 'lib/ubuntu_repository_cache/service.py'
2941--- lib/ubuntu_repository_cache/service.py 2016-01-28 12:03:26 +0000
2942+++ lib/ubuntu_repository_cache/service.py 2016-07-01 11:47:59 +0000
2943@@ -15,12 +15,12 @@
2944 templating,
2945 )
2946
2947-import apache
2948+from . import apache
2949 import glob
2950-import mirror
2951-import squid
2952-import storage
2953-import util
2954+from . import mirror
2955+from . import squid
2956+from . import storage
2957+from . import util
2958 # pylint: enable=F0401
2959
2960 LOG = hookenv.log
2961
2962=== modified file 'lib/ubuntu_repository_cache/squid.py'
2963--- lib/ubuntu_repository_cache/squid.py 2015-03-02 18:31:05 +0000
2964+++ lib/ubuntu_repository_cache/squid.py 2016-07-01 11:47:59 +0000
2965@@ -12,7 +12,11 @@
2966 templating,
2967 )
2968
2969-import util
2970+from charmhelpers.core.host import (
2971+ lsb_release
2972+)
2973+
2974+from . import util
2975 # pylint: enable=F0401
2976
2977 LOG = hookenv.log
2978@@ -85,15 +89,18 @@
2979 LOG('Installing squid', hookenv.INFO)
2980 fetch.apt_install(SERVICE, fatal=True)
2981
2982- # We don't need avahi-daemon or squid3 running
2983- host.service_stop('avahi-daemon')
2984- with open('/etc/init/avahi-daemon.override', 'w') as avahi_override_file:
2985- avahi_override_file.write('manual\n')
2986-
2987- host.service_stop('squid3')
2988- with open('/etc/init/squid3.override', 'w') as squid3_override_file:
2989- squid3_override_file.write('manual\n')
2990-
2991+ release = lsb_release()['DISTRIB_RELEASE']
2992+
2993+ # We don't need avahi-daemon ...
2994+ host.service_pause('avahi-daemon')
2995+
2996+ # ... or squid3 running
2997+ if release < '16.04': # before xenial
2998+ squid_service = 'squid3'
2999+ else:
3000+ squid_service = 'squid'
3001+
3002+ host.service_pause(squid_service)
3003 stop()
3004
3005 size_caches()
3006
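The service rename handled above (squid3 before xenial, squid from 16.04 on) relies on a plain string comparison, which is safe here because both releases are five-character YY.MM strings; a version-aware comparison would be needed if single-digit years were ever in play. A sketch of the selection logic on its own (function name is illustrative):

```python
def squid_service_name(distrib_release):
    """Pick the squid service name for an Ubuntu release string like '14.04'."""
    # Lexicographic compare is fine for YY.MM releases ('14.04' < '16.04').
    if distrib_release < '16.04':  # before xenial
        return 'squid3'
    return 'squid'
```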
3007=== modified file 'lib/ubuntu_repository_cache/storage.py'
3008--- lib/ubuntu_repository_cache/storage.py 2015-08-05 11:11:48 +0000
3009+++ lib/ubuntu_repository_cache/storage.py 2016-07-01 11:47:59 +0000
3010@@ -166,7 +166,7 @@
3011 config.save()
3012
3013
3014-def _mkdir(path, owner='root', group='root', perms=0555, force=False):
3015+def _mkdir(path, owner='root', group='root', perms=0o555, force=False):
3016 '''Make directories recursively with a umask=0o000.
3017
3018 This storage module makes calls to mkdir repeatedly with permissions
3019@@ -183,7 +183,7 @@
3020 def _disk_free(path):
3021 '''Return disk space free, in MB, for the specified path'''
3022
3023- output = subprocess.check_output(['df', '-BM', path]).split('\n')
3024+ output = subprocess.check_output(['df', '-BM', path]).decode().split('\n')
3025 _, _, _, avail, _, _ = output[1].split()
3026 LOG('disk free for {} is {}MB'.format(path, avail), hookenv.DEBUG)
3027 return int(avail.rstrip('M'))
3028
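On Python 3 `subprocess.check_output()` returns `bytes`, so the df output must be decoded before `split('\n')`, which is the fix above. The helper in isolation (this mirrors the charm's `_disk_free`, minus logging; it assumes a df that prints a single data row for the path):

```python
import subprocess


def disk_free_mb(path):
    """Return free disk space in MB for path, parsing `df -BM` output."""
    # decode() turns the bytes from check_output() into str on Python 3.
    output = subprocess.check_output(['df', '-BM', path]).decode().split('\n')
    _, _, _, avail, _, _ = output[1].split()
    return int(avail.rstrip('M'))
```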
3029=== modified file 'lib/ubuntu_repository_cache/util.py'
3030--- lib/ubuntu_repository_cache/util.py 2015-09-24 13:09:29 +0000
3031+++ lib/ubuntu_repository_cache/util.py 2016-07-01 11:47:59 +0000
3032@@ -130,7 +130,7 @@
3033 def system_total_mem():
3034 '''Return the total system memory in megabytes'''
3035
3036- output = subprocess.check_output(['free', '-m']).split('\n')
3037+ output = subprocess.check_output(['free', '-m']).decode().split('\n')
3038 return int(output[1].split()[1])
3039
3040