Merge lp:~niedbalski/charms/precise/mysql/precise-syncup into lp:charms/mysql

Proposed by Jorge Niedbalski
Status: Merged
Merged at revision: 129
Proposed branch: lp:~niedbalski/charms/precise/mysql/precise-syncup
Merge into: lp:charms/mysql
Diff against target: 2881 lines (+1954/-183)
26 files modified
.bzrignore (+1/-0)
Makefile (+7/-2)
README.md (+1/-1)
charm-helpers.yaml (+2/-0)
config.yaml (+17/-1)
hooks/charmhelpers/__init__.py (+22/-0)
hooks/charmhelpers/contrib/network/ip.py (+351/-0)
hooks/charmhelpers/core/fstab.py (+118/-0)
hooks/charmhelpers/core/hookenv.py (+155/-16)
hooks/charmhelpers/core/host.py (+126/-27)
hooks/charmhelpers/core/services/__init__.py (+2/-0)
hooks/charmhelpers/core/services/base.py (+313/-0)
hooks/charmhelpers/core/services/helpers.py (+243/-0)
hooks/charmhelpers/core/sysctl.py (+34/-0)
hooks/charmhelpers/core/templating.py (+52/-0)
hooks/charmhelpers/fetch/__init__.py (+205/-97)
hooks/charmhelpers/fetch/archiveurl.py (+99/-17)
hooks/charmhelpers/fetch/bzrurl.py (+7/-2)
hooks/charmhelpers/fetch/giturl.py (+48/-0)
hooks/common.py (+3/-1)
hooks/config-changed (+12/-3)
hooks/ha_relations.py (+11/-3)
hooks/lib/ceph_utils.py (+12/-2)
hooks/lib/utils.py (+9/-0)
hooks/monitors-relation-joined (+2/-0)
hooks/shared_db_relations.py (+102/-11)
To merge this branch: bzr merge lp:~niedbalski/charms/precise/mysql/precise-syncup
Reviewer                          Review Type          Date Requested  Status
Tim Van Steenburgh (community)                                         Approve
Review Queue (community)          automated testing                    Approve
Review via email: mp+244436@code.launchpad.net

Description of the change

Sync-up from trusty to precise.
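
This brings the current charm-helpers tree into the precise branch (adding contrib.network.ip, core/fstab, core/services, core/sysctl and core/templating) and exposes new prefer-ipv6 and bind-address options. As a rough, hypothetical sketch only (my_bind_address below is not part of the charm's hooks), the newly synced helpers could be combined along these lines:

    # Hypothetical sketch; not code from this charm's hooks.
    from charmhelpers.core.hookenv import config, unit_get
    from charmhelpers.contrib.network.ip import get_ipv6_addr, format_ipv6_addr

    def my_bind_address():
        cfg = config()  # the new Config dict, with changed()/previous()/save()
        if cfg.get('prefer-ipv6'):
            # get_ipv6_addr() returns scope-global, non-temporary IPv6 addresses
            addr = get_ipv6_addr()[0]
            return format_ipv6_addr(addr)  # wrapped in '[]' for config files
        return cfg.get('bind-address') or unit_get('private-address')

A hook could then template the returned value into mysqld's bind-address setting before restarting the service.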

Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing! Results are available here: http://reports.vapour.ws/charm-tests/charm-bundle-test-10690-results

review: Needs Fixing (automated testing)
Revision history for this message
Whit Morriss (whitmo) wrote :

The test failure here seems to be incidental rather than an issue of concern. Confirmed that manual deployment works for precise on local and AWS, and that replication as described in the README works on AWS.

LGTM? +1

Revision history for this message
Review Queue (review-queue) wrote :

The results (PASS) are in and available here: http://reports.vapour.ws/charm-tests/charm-bundle-test-10913-results

review: Approve (automated testing)
Revision history for this message
Review Queue (review-queue) wrote :

The results (PASS) are in and available here: http://reports.vapour.ws/charm-tests/charm-bundle-test-10919-results

review: Approve (automated testing)
Revision history for this message
Tim Van Steenburgh (tvansteenburgh) wrote :

+1 LGTM.

review: Approve

Preview Diff

1=== added file '.bzrignore'
2--- .bzrignore 1970-01-01 00:00:00 +0000
3+++ .bzrignore 2014-12-11 14:18:46 +0000
4@@ -0,0 +1,1 @@
5+bin/
6
7=== modified file 'Makefile'
8--- Makefile 2014-03-04 17:28:10 +0000
9+++ Makefile 2014-12-11 14:18:46 +0000
10@@ -11,5 +11,10 @@
11 # @echo Starting tests...
12 # @$(PYTHON) /usr/bin/nosetests --nologcapture unit_tests
13
14-sync:
15- @charm-helper-sync -c charm-helpers.yaml
16+bin/charm_helpers_sync.py:
17+ @mkdir -p bin
18+ @bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
19+ > bin/charm_helpers_sync.py
20+
21+sync: bin/charm_helpers_sync.py
22+ $(PYTHON) bin/charm_helpers_sync.py -c charm-helpers.yaml
23
24=== modified file 'README.md'
25--- README.md 2014-06-12 22:06:54 +0000
26+++ README.md 2014-12-11 14:18:46 +0000
27@@ -12,7 +12,7 @@
28
29 juju deploy mysql
30
31-Once deployed, you can retrive the MySQL root user password by logging in to the machine via `juju ssh` and readin the `/var/lib/mysql/mysql.passwd` file. To log in as root MySQL User at the MySQL console you can issue the following:
32+Once deployed, you can retrieve the MySQL root user password by logging in to the machine via `juju ssh` and readin the `/var/lib/mysql/mysql.passwd` file. To log in as root MySQL User at the MySQL console you can issue the following:
33
34 juju ssh mysql/0
35 mysql -u root -p`sudo cat /var/lib/mysql/mysql.passwd`
36
37=== modified file 'charm-helpers.yaml'
38--- charm-helpers.yaml 2014-02-19 14:49:31 +0000
39+++ charm-helpers.yaml 2014-12-11 14:18:46 +0000
40@@ -1,5 +1,7 @@
41 branch: lp:charm-helpers
42 destination: hooks/charmhelpers
43 include:
44+ - __init__
45 - core
46 - fetch
47+ - contrib.network.ip
48
49=== modified file 'config.yaml'
50--- config.yaml 2014-11-05 14:54:57 +0000
51+++ config.yaml 2014-12-11 14:18:46 +0000
52@@ -69,7 +69,7 @@
53 image name exists in Ceph, it will be re-used and the data will be
54 overwritten.
55 ceph-osd-replication-count:
56- default: 2
57+ default: 3
58 type: int
59 description: |
60 This value dictates the number of replicas ceph must make of any
61@@ -106,3 +106,19 @@
62 juju-myservice-0
63 If you're running multiple environments with the same services in them
64 this allows you to differentiate between them.
65+ prefer-ipv6:
66+ type: boolean
67+ default: False
68+ description: |
69+ If True enables IPv6 support. The charm will expect network interfaces
70+ to be configured with an IPv6 address. If set to False (default) IPv4
71+ is expected.
72+ .
73+ NOTE: these charms do not currently support IPv6 privacy extension. In
74+ order for this charm to function correctly, the privacy extension must
75+ be disabled and a non-temporary address must be configured/available
76+ on your network interface.
77+ bind-address:
78+ default: '0.0.0.0'
79+ type: string
80+ description: "mysql bind host address"
81
82=== modified file 'hooks/charmhelpers/__init__.py'
83--- hooks/charmhelpers/__init__.py 2014-02-19 14:49:31 +0000
84+++ hooks/charmhelpers/__init__.py 2014-12-11 14:18:46 +0000
85@@ -0,0 +1,22 @@
86+# Bootstrap charm-helpers, installing its dependencies if necessary using
87+# only standard libraries.
88+import subprocess
89+import sys
90+
91+try:
92+ import six # flake8: noqa
93+except ImportError:
94+ if sys.version_info.major == 2:
95+ subprocess.check_call(['apt-get', 'install', '-y', 'python-six'])
96+ else:
97+ subprocess.check_call(['apt-get', 'install', '-y', 'python3-six'])
98+ import six # flake8: noqa
99+
100+try:
101+ import yaml # flake8: noqa
102+except ImportError:
103+ if sys.version_info.major == 2:
104+ subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml'])
105+ else:
106+ subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml'])
107+ import yaml # flake8: noqa
108
109=== added directory 'hooks/charmhelpers/contrib/network'
110=== added file 'hooks/charmhelpers/contrib/network/__init__.py'
111=== added file 'hooks/charmhelpers/contrib/network/ip.py'
112--- hooks/charmhelpers/contrib/network/ip.py 1970-01-01 00:00:00 +0000
113+++ hooks/charmhelpers/contrib/network/ip.py 2014-12-11 14:18:46 +0000
114@@ -0,0 +1,351 @@
115+import glob
116+import re
117+import subprocess
118+
119+from functools import partial
120+
121+from charmhelpers.core.hookenv import unit_get
122+from charmhelpers.fetch import apt_install
123+from charmhelpers.core.hookenv import (
124+ log
125+)
126+
127+try:
128+ import netifaces
129+except ImportError:
130+ apt_install('python-netifaces')
131+ import netifaces
132+
133+try:
134+ import netaddr
135+except ImportError:
136+ apt_install('python-netaddr')
137+ import netaddr
138+
139+
140+def _validate_cidr(network):
141+ try:
142+ netaddr.IPNetwork(network)
143+ except (netaddr.core.AddrFormatError, ValueError):
144+ raise ValueError("Network (%s) is not in CIDR presentation format" %
145+ network)
146+
147+
148+def no_ip_found_error_out(network):
149+ errmsg = ("No IP address found in network: %s" % network)
150+ raise ValueError(errmsg)
151+
152+
153+def get_address_in_network(network, fallback=None, fatal=False):
154+ """Get an IPv4 or IPv6 address within the network from the host.
155+
156+ :param network (str): CIDR presentation format. For example,
157+ '192.168.1.0/24'.
158+ :param fallback (str): If no address is found, return fallback.
159+ :param fatal (boolean): If no address is found, fallback is not
160+ set and fatal is True then exit(1).
161+ """
162+ if network is None:
163+ if fallback is not None:
164+ return fallback
165+
166+ if fatal:
167+ no_ip_found_error_out(network)
168+ else:
169+ return None
170+
171+ _validate_cidr(network)
172+ network = netaddr.IPNetwork(network)
173+ for iface in netifaces.interfaces():
174+ addresses = netifaces.ifaddresses(iface)
175+ if network.version == 4 and netifaces.AF_INET in addresses:
176+ addr = addresses[netifaces.AF_INET][0]['addr']
177+ netmask = addresses[netifaces.AF_INET][0]['netmask']
178+ cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
179+ if cidr in network:
180+ return str(cidr.ip)
181+
182+ if network.version == 6 and netifaces.AF_INET6 in addresses:
183+ for addr in addresses[netifaces.AF_INET6]:
184+ if not addr['addr'].startswith('fe80'):
185+ cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
186+ addr['netmask']))
187+ if cidr in network:
188+ return str(cidr.ip)
189+
190+ if fallback is not None:
191+ return fallback
192+
193+ if fatal:
194+ no_ip_found_error_out(network)
195+
196+ return None
197+
198+
199+def is_ipv6(address):
200+ """Determine whether provided address is IPv6 or not."""
201+ try:
202+ address = netaddr.IPAddress(address)
203+ except netaddr.AddrFormatError:
204+ # probably a hostname - so not an address at all!
205+ return False
206+
207+ return address.version == 6
208+
209+
210+def is_address_in_network(network, address):
211+ """
212+ Determine whether the provided address is within a network range.
213+
214+ :param network (str): CIDR presentation format. For example,
215+ '192.168.1.0/24'.
216+ :param address: An individual IPv4 or IPv6 address without a net
217+ mask or subnet prefix. For example, '192.168.1.1'.
218+ :returns boolean: Flag indicating whether address is in network.
219+ """
220+ try:
221+ network = netaddr.IPNetwork(network)
222+ except (netaddr.core.AddrFormatError, ValueError):
223+ raise ValueError("Network (%s) is not in CIDR presentation format" %
224+ network)
225+
226+ try:
227+ address = netaddr.IPAddress(address)
228+ except (netaddr.core.AddrFormatError, ValueError):
229+ raise ValueError("Address (%s) is not in correct presentation format" %
230+ address)
231+
232+ if address in network:
233+ return True
234+ else:
235+ return False
236+
237+
238+def _get_for_address(address, key):
239+ """Retrieve an attribute of or the physical interface that
240+ the IP address provided could be bound to.
241+
242+ :param address (str): An individual IPv4 or IPv6 address without a net
243+ mask or subnet prefix. For example, '192.168.1.1'.
244+ :param key: 'iface' for the physical interface name or an attribute
245+ of the configured interface, for example 'netmask'.
246+ :returns str: Requested attribute or None if address is not bindable.
247+ """
248+ address = netaddr.IPAddress(address)
249+ for iface in netifaces.interfaces():
250+ addresses = netifaces.ifaddresses(iface)
251+ if address.version == 4 and netifaces.AF_INET in addresses:
252+ addr = addresses[netifaces.AF_INET][0]['addr']
253+ netmask = addresses[netifaces.AF_INET][0]['netmask']
254+ network = netaddr.IPNetwork("%s/%s" % (addr, netmask))
255+ cidr = network.cidr
256+ if address in cidr:
257+ if key == 'iface':
258+ return iface
259+ else:
260+ return addresses[netifaces.AF_INET][0][key]
261+
262+ if address.version == 6 and netifaces.AF_INET6 in addresses:
263+ for addr in addresses[netifaces.AF_INET6]:
264+ if not addr['addr'].startswith('fe80'):
265+ network = netaddr.IPNetwork("%s/%s" % (addr['addr'],
266+ addr['netmask']))
267+ cidr = network.cidr
268+ if address in cidr:
269+ if key == 'iface':
270+ return iface
271+ elif key == 'netmask' and cidr:
272+ return str(cidr).split('/')[1]
273+ else:
274+ return addr[key]
275+
276+ return None
277+
278+
279+get_iface_for_address = partial(_get_for_address, key='iface')
280+
281+
282+get_netmask_for_address = partial(_get_for_address, key='netmask')
283+
284+
285+def format_ipv6_addr(address):
286+ """If address is IPv6, wrap it in '[]' otherwise return None.
287+
288+ This is required by most configuration files when specifying IPv6
289+ addresses.
290+ """
291+ if is_ipv6(address):
292+ return "[%s]" % address
293+
294+ return None
295+
296+
297+def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False,
298+ fatal=True, exc_list=None):
299+ """Return the assigned IP address for a given interface, if any."""
300+ # Extract nic if passed /dev/ethX
301+ if '/' in iface:
302+ iface = iface.split('/')[-1]
303+
304+ if not exc_list:
305+ exc_list = []
306+
307+ try:
308+ inet_num = getattr(netifaces, inet_type)
309+ except AttributeError:
310+ raise Exception("Unknown inet type '%s'" % str(inet_type))
311+
312+ interfaces = netifaces.interfaces()
313+ if inc_aliases:
314+ ifaces = []
315+ for _iface in interfaces:
316+ if iface == _iface or _iface.split(':')[0] == iface:
317+ ifaces.append(_iface)
318+
319+ if fatal and not ifaces:
320+ raise Exception("Invalid interface '%s'" % iface)
321+
322+ ifaces.sort()
323+ else:
324+ if iface not in interfaces:
325+ if fatal:
326+ raise Exception("Interface '%s' not found " % (iface))
327+ else:
328+ return []
329+
330+ else:
331+ ifaces = [iface]
332+
333+ addresses = []
334+ for netiface in ifaces:
335+ net_info = netifaces.ifaddresses(netiface)
336+ if inet_num in net_info:
337+ for entry in net_info[inet_num]:
338+ if 'addr' in entry and entry['addr'] not in exc_list:
339+ addresses.append(entry['addr'])
340+
341+ if fatal and not addresses:
342+ raise Exception("Interface '%s' doesn't have any %s addresses." %
343+ (iface, inet_type))
344+
345+ return sorted(addresses)
346+
347+
348+get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET')
349+
350+
351+def get_iface_from_addr(addr):
352+ """Work out on which interface the provided address is configured."""
353+ for iface in netifaces.interfaces():
354+ addresses = netifaces.ifaddresses(iface)
355+ for inet_type in addresses:
356+ for _addr in addresses[inet_type]:
357+ _addr = _addr['addr']
358+ # link local
359+ ll_key = re.compile("(.+)%.*")
360+ raw = re.match(ll_key, _addr)
361+ if raw:
362+ _addr = raw.group(1)
363+
364+ if _addr == addr:
365+ log("Address '%s' is configured on iface '%s'" %
366+ (addr, iface))
367+ return iface
368+
369+ msg = "Unable to infer net iface on which '%s' is configured" % (addr)
370+ raise Exception(msg)
371+
372+
373+def sniff_iface(f):
374+ """Ensure decorated function is called with a value for iface.
375+
376+ If no iface provided, inject net iface inferred from unit private address.
377+ """
378+ def iface_sniffer(*args, **kwargs):
379+ if not kwargs.get('iface', None):
380+ kwargs['iface'] = get_iface_from_addr(unit_get('private-address'))
381+
382+ return f(*args, **kwargs)
383+
384+ return iface_sniffer
385+
386+
387+@sniff_iface
388+def get_ipv6_addr(iface=None, inc_aliases=False, fatal=True, exc_list=None,
389+ dynamic_only=True):
390+ """Get assigned IPv6 address for a given interface.
391+
392+ Returns list of addresses found. If no address found, returns empty list.
393+
394+ If iface is None, we infer the current primary interface by doing a reverse
395+ lookup on the unit private-address.
396+
397+ We currently only support scope global IPv6 addresses i.e. non-temporary
398+ addresses. If no global IPv6 address is found, return the first one found
399+ in the ipv6 address list.
400+ """
401+ addresses = get_iface_addr(iface=iface, inet_type='AF_INET6',
402+ inc_aliases=inc_aliases, fatal=fatal,
403+ exc_list=exc_list)
404+
405+ if addresses:
406+ global_addrs = []
407+ for addr in addresses:
408+ key_scope_link_local = re.compile("^fe80::..(.+)%(.+)")
409+ m = re.match(key_scope_link_local, addr)
410+ if m:
411+ eui_64_mac = m.group(1)
412+ iface = m.group(2)
413+ else:
414+ global_addrs.append(addr)
415+
416+ if global_addrs:
417+ # Make sure any found global addresses are not temporary
418+ cmd = ['ip', 'addr', 'show', iface]
419+ out = subprocess.check_output(cmd).decode('UTF-8')
420+ if dynamic_only:
421+ key = re.compile("inet6 (.+)/[0-9]+ scope global dynamic.*")
422+ else:
423+ key = re.compile("inet6 (.+)/[0-9]+ scope global.*")
424+
425+ addrs = []
426+ for line in out.split('\n'):
427+ line = line.strip()
428+ m = re.match(key, line)
429+ if m and 'temporary' not in line:
430+ # Return the first valid address we find
431+ for addr in global_addrs:
432+ if m.group(1) == addr:
433+ if not dynamic_only or \
434+ m.group(1).endswith(eui_64_mac):
435+ addrs.append(addr)
436+
437+ if addrs:
438+ return addrs
439+
440+ if fatal:
441+ raise Exception("Interface '%s' does not have a scope global "
442+ "non-temporary ipv6 address." % iface)
443+
444+ return []
445+
446+
447+def get_bridges(vnic_dir='/sys/devices/virtual/net'):
448+ """Return a list of bridges on the system."""
449+ b_regex = "%s/*/bridge" % vnic_dir
450+ return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)]
451+
452+
453+def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'):
454+ """Return a list of nics comprising a given bridge on the system."""
455+ brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge)
456+ return [x.split('/')[-1] for x in glob.glob(brif_regex)]
457+
458+
459+def is_bridge_member(nic):
460+ """Check if a given nic is a member of a bridge."""
461+ for bridge in get_bridges():
462+ if nic in get_bridge_nics(bridge):
463+ return True
464+
465+ return False
466
467=== added file 'hooks/charmhelpers/core/fstab.py'
468--- hooks/charmhelpers/core/fstab.py 1970-01-01 00:00:00 +0000
469+++ hooks/charmhelpers/core/fstab.py 2014-12-11 14:18:46 +0000
470@@ -0,0 +1,118 @@
471+#!/usr/bin/env python
472+# -*- coding: utf-8 -*-
473+
474+__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
475+
476+import io
477+import os
478+
479+
480+class Fstab(io.FileIO):
481+ """This class extends file in order to implement a file reader/writer
482+ for file `/etc/fstab`
483+ """
484+
485+ class Entry(object):
486+ """Entry class represents a non-comment line on the `/etc/fstab` file
487+ """
488+ def __init__(self, device, mountpoint, filesystem,
489+ options, d=0, p=0):
490+ self.device = device
491+ self.mountpoint = mountpoint
492+ self.filesystem = filesystem
493+
494+ if not options:
495+ options = "defaults"
496+
497+ self.options = options
498+ self.d = int(d)
499+ self.p = int(p)
500+
501+ def __eq__(self, o):
502+ return str(self) == str(o)
503+
504+ def __str__(self):
505+ return "{} {} {} {} {} {}".format(self.device,
506+ self.mountpoint,
507+ self.filesystem,
508+ self.options,
509+ self.d,
510+ self.p)
511+
512+ DEFAULT_PATH = os.path.join(os.path.sep, 'etc', 'fstab')
513+
514+ def __init__(self, path=None):
515+ if path:
516+ self._path = path
517+ else:
518+ self._path = self.DEFAULT_PATH
519+ super(Fstab, self).__init__(self._path, 'rb+')
520+
521+ def _hydrate_entry(self, line):
522+ # NOTE: use split with no arguments to split on any
523+ # whitespace including tabs
524+ return Fstab.Entry(*filter(
525+ lambda x: x not in ('', None),
526+ line.strip("\n").split()))
527+
528+ @property
529+ def entries(self):
530+ self.seek(0)
531+ for line in self.readlines():
532+ line = line.decode('us-ascii')
533+ try:
534+ if line.strip() and not line.startswith("#"):
535+ yield self._hydrate_entry(line)
536+ except ValueError:
537+ pass
538+
539+ def get_entry_by_attr(self, attr, value):
540+ for entry in self.entries:
541+ e_attr = getattr(entry, attr)
542+ if e_attr == value:
543+ return entry
544+ return None
545+
546+ def add_entry(self, entry):
547+ if self.get_entry_by_attr('device', entry.device):
548+ return False
549+
550+ self.write((str(entry) + '\n').encode('us-ascii'))
551+ self.truncate()
552+ return entry
553+
554+ def remove_entry(self, entry):
555+ self.seek(0)
556+
557+ lines = [l.decode('us-ascii') for l in self.readlines()]
558+
559+ found = False
560+ for index, line in enumerate(lines):
561+ if not line.startswith("#"):
562+ if self._hydrate_entry(line) == entry:
563+ found = True
564+ break
565+
566+ if not found:
567+ return False
568+
569+ lines.remove(line)
570+
571+ self.seek(0)
572+ self.write(''.join(lines).encode('us-ascii'))
573+ self.truncate()
574+ return True
575+
576+ @classmethod
577+ def remove_by_mountpoint(cls, mountpoint, path=None):
578+ fstab = cls(path=path)
579+ entry = fstab.get_entry_by_attr('mountpoint', mountpoint)
580+ if entry:
581+ return fstab.remove_entry(entry)
582+ return False
583+
584+ @classmethod
585+ def add(cls, device, mountpoint, filesystem, options=None, path=None):
586+ return cls(path=path).add_entry(Fstab.Entry(device,
587+ mountpoint, filesystem,
588+ options=options))
589
590=== modified file 'hooks/charmhelpers/core/hookenv.py'
591--- hooks/charmhelpers/core/hookenv.py 2014-02-19 14:49:31 +0000
592+++ hooks/charmhelpers/core/hookenv.py 2014-12-11 14:18:46 +0000
593@@ -9,9 +9,14 @@
594 import yaml
595 import subprocess
596 import sys
597-import UserDict
598 from subprocess import CalledProcessError
599
600+import six
601+if not six.PY3:
602+ from UserDict import UserDict
603+else:
604+ from collections import UserDict
605+
606 CRITICAL = "CRITICAL"
607 ERROR = "ERROR"
608 WARNING = "WARNING"
609@@ -25,7 +30,7 @@
610 def cached(func):
611 """Cache return values for multiple executions of func + args
612
613- For example:
614+ For example::
615
616 @cached
617 def unit_get(attribute):
618@@ -67,12 +72,12 @@
619 subprocess.call(command)
620
621
622-class Serializable(UserDict.IterableUserDict):
623+class Serializable(UserDict):
624 """Wrapper, an object that can be serialized to yaml or json"""
625
626 def __init__(self, obj):
627 # wrap the object
628- UserDict.IterableUserDict.__init__(self)
629+ UserDict.__init__(self)
630 self.data = obj
631
632 def __getattr__(self, attr):
633@@ -155,6 +160,127 @@
634 return os.path.basename(sys.argv[0])
635
636
637+class Config(dict):
638+ """A dictionary representation of the charm's config.yaml, with some
639+ extra features:
640+
641+ - See which values in the dictionary have changed since the previous hook.
642+ - For values that have changed, see what the previous value was.
643+ - Store arbitrary data for use in a later hook.
644+
645+ NOTE: Do not instantiate this object directly - instead call
646+ ``hookenv.config()``, which will return an instance of :class:`Config`.
647+
648+ Example usage::
649+
650+ >>> # inside a hook
651+ >>> from charmhelpers.core import hookenv
652+ >>> config = hookenv.config()
653+ >>> config['foo']
654+ 'bar'
655+ >>> # store a new key/value for later use
656+ >>> config['mykey'] = 'myval'
657+
658+
659+ >>> # user runs `juju set mycharm foo=baz`
660+ >>> # now we're inside subsequent config-changed hook
661+ >>> config = hookenv.config()
662+ >>> config['foo']
663+ 'baz'
664+ >>> # test to see if this val has changed since last hook
665+ >>> config.changed('foo')
666+ True
667+ >>> # what was the previous value?
668+ >>> config.previous('foo')
669+ 'bar'
670+ >>> # keys/values that we add are preserved across hooks
671+ >>> config['mykey']
672+ 'myval'
673+
674+ """
675+ CONFIG_FILE_NAME = '.juju-persistent-config'
676+
677+ def __init__(self, *args, **kw):
678+ super(Config, self).__init__(*args, **kw)
679+ self.implicit_save = True
680+ self._prev_dict = None
681+ self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
682+ if os.path.exists(self.path):
683+ self.load_previous()
684+
685+ def __getitem__(self, key):
686+ """For regular dict lookups, check the current juju config first,
687+ then the previous (saved) copy. This ensures that user-saved values
688+ will be returned by a dict lookup.
689+
690+ """
691+ try:
692+ return dict.__getitem__(self, key)
693+ except KeyError:
694+ return (self._prev_dict or {})[key]
695+
696+ def keys(self):
697+ prev_keys = []
698+ if self._prev_dict is not None:
699+ prev_keys = self._prev_dict.keys()
700+ return list(set(prev_keys + list(dict.keys(self))))
701+
702+ def load_previous(self, path=None):
703+ """Load previous copy of config from disk.
704+
705+ In normal usage you don't need to call this method directly - it
706+ is called automatically at object initialization.
707+
708+ :param path:
709+
710+ File path from which to load the previous config. If `None`,
711+ config is loaded from the default location. If `path` is
712+ specified, subsequent `save()` calls will write to the same
713+ path.
714+
715+ """
716+ self.path = path or self.path
717+ with open(self.path) as f:
718+ self._prev_dict = json.load(f)
719+
720+ def changed(self, key):
721+ """Return True if the current value for this key is different from
722+ the previous value.
723+
724+ """
725+ if self._prev_dict is None:
726+ return True
727+ return self.previous(key) != self.get(key)
728+
729+ def previous(self, key):
730+ """Return previous value for this key, or None if there
731+ is no previous value.
732+
733+ """
734+ if self._prev_dict:
735+ return self._prev_dict.get(key)
736+ return None
737+
738+ def save(self):
739+ """Save this config to disk.
740+
741+ If the charm is using the :mod:`Services Framework <services.base>`
742+ or :meth:'@hook <Hooks.hook>' decorator, this
743+ is called automatically at the end of successful hook execution.
744+ Otherwise, it should be called directly by user code.
745+
746+ To disable automatic saves, set ``implicit_save=False`` on this
747+ instance.
748+
749+ """
750+ if self._prev_dict:
751+ for k, v in six.iteritems(self._prev_dict):
752+ if k not in self:
753+ self[k] = v
754+ with open(self.path, 'w') as f:
755+ json.dump(self, f)
756+
757+
758 @cached
759 def config(scope=None):
760 """Juju charm configuration"""
761@@ -163,7 +289,11 @@
762 config_cmd_line.append(scope)
763 config_cmd_line.append('--format=json')
764 try:
765- return json.loads(subprocess.check_output(config_cmd_line))
766+ config_data = json.loads(
767+ subprocess.check_output(config_cmd_line).decode('UTF-8'))
768+ if scope is not None:
769+ return config_data
770+ return Config(config_data)
771 except ValueError:
772 return None
773
774@@ -179,21 +309,22 @@
775 if unit:
776 _args.append(unit)
777 try:
778- return json.loads(subprocess.check_output(_args))
779+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
780 except ValueError:
781 return None
782- except CalledProcessError, e:
783+ except CalledProcessError as e:
784 if e.returncode == 2:
785 return None
786 raise
787
788
789-def relation_set(relation_id=None, relation_settings={}, **kwargs):
790+def relation_set(relation_id=None, relation_settings=None, **kwargs):
791 """Set relation information for the current unit"""
792+ relation_settings = relation_settings if relation_settings else {}
793 relation_cmd_line = ['relation-set']
794 if relation_id is not None:
795 relation_cmd_line.extend(('-r', relation_id))
796- for k, v in (relation_settings.items() + kwargs.items()):
797+ for k, v in (list(relation_settings.items()) + list(kwargs.items())):
798 if v is None:
799 relation_cmd_line.append('{}='.format(k))
800 else:
801@@ -210,7 +341,8 @@
802 relid_cmd_line = ['relation-ids', '--format=json']
803 if reltype is not None:
804 relid_cmd_line.append(reltype)
805- return json.loads(subprocess.check_output(relid_cmd_line)) or []
806+ return json.loads(
807+ subprocess.check_output(relid_cmd_line).decode('UTF-8')) or []
808 return []
809
810
811@@ -221,7 +353,8 @@
812 units_cmd_line = ['relation-list', '--format=json']
813 if relid is not None:
814 units_cmd_line.extend(('-r', relid))
815- return json.loads(subprocess.check_output(units_cmd_line)) or []
816+ return json.loads(
817+ subprocess.check_output(units_cmd_line).decode('UTF-8')) or []
818
819
820 @cached
821@@ -330,7 +463,7 @@
822 """Get the unit ID for the remote unit"""
823 _args = ['unit-get', '--format=json', attribute]
824 try:
825- return json.loads(subprocess.check_output(_args))
826+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
827 except ValueError:
828 return None
829
830@@ -348,27 +481,29 @@
831 class Hooks(object):
832 """A convenient handler for hook functions.
833
834- Example:
835+ Example::
836+
837 hooks = Hooks()
838
839 # register a hook, taking its name from the function name
840 @hooks.hook()
841 def install():
842- ...
843+ pass # your code here
844
845 # register a hook, providing a custom hook name
846 @hooks.hook("config-changed")
847 def config_changed():
848- ...
849+ pass # your code here
850
851 if __name__ == "__main__":
852 # execute a hook based on the name the program is called by
853 hooks.execute(sys.argv)
854 """
855
856- def __init__(self):
857+ def __init__(self, config_save=True):
858 super(Hooks, self).__init__()
859 self._hooks = {}
860+ self._config_save = config_save
861
862 def register(self, name, function):
863 """Register a hook"""
864@@ -379,6 +514,10 @@
865 hook_name = os.path.basename(args[0])
866 if hook_name in self._hooks:
867 self._hooks[hook_name]()
868+ if self._config_save:
869+ cfg = config()
870+ if cfg.implicit_save:
871+ cfg.save()
872 else:
873 raise UnregisteredHookError(hook_name)
874
875
876=== modified file 'hooks/charmhelpers/core/host.py'
877--- hooks/charmhelpers/core/host.py 2014-02-19 14:49:31 +0000
878+++ hooks/charmhelpers/core/host.py 2014-12-11 14:18:46 +0000
879@@ -6,16 +6,20 @@
880 # Matthew Wedgwood <matthew.wedgwood@canonical.com>
881
882 import os
883+import re
884 import pwd
885 import grp
886 import random
887 import string
888 import subprocess
889 import hashlib
890-
891+from contextlib import contextmanager
892 from collections import OrderedDict
893
894-from hookenv import log
895+import six
896+
897+from .hookenv import log
898+from .fstab import Fstab
899
900
901 def service_start(service_name):
902@@ -34,7 +38,8 @@
903
904
905 def service_reload(service_name, restart_on_failure=False):
906- """Reload a system service, optionally falling back to restart if reload fails"""
907+ """Reload a system service, optionally falling back to restart if
908+ reload fails"""
909 service_result = service('reload', service_name)
910 if not service_result and restart_on_failure:
911 service_result = service('restart', service_name)
912@@ -50,7 +55,9 @@
913 def service_running(service):
914 """Determine whether a system service is running"""
915 try:
916- output = subprocess.check_output(['service', service, 'status'])
917+ output = subprocess.check_output(
918+ ['service', service, 'status'],
919+ stderr=subprocess.STDOUT).decode('UTF-8')
920 except subprocess.CalledProcessError:
921 return False
922 else:
923@@ -60,6 +67,18 @@
924 return False
925
926
927+def service_available(service_name):
928+ """Determine whether a system service is available"""
929+ try:
930+ subprocess.check_output(
931+ ['service', service_name, 'status'],
932+ stderr=subprocess.STDOUT).decode('UTF-8')
933+ except subprocess.CalledProcessError as e:
934+ return 'unrecognized service' not in e.output
935+ else:
936+ return True
937+
938+
939 def adduser(username, password=None, shell='/bin/bash', system_user=False):
940 """Add a user to the system"""
941 try:
942@@ -101,7 +120,7 @@
943 cmd.append(from_path)
944 cmd.append(to_path)
945 log(" ".join(cmd))
946- return subprocess.check_output(cmd).strip()
947+ return subprocess.check_output(cmd).decode('UTF-8').strip()
948
949
950 def symlink(source, destination):
951@@ -116,7 +135,7 @@
952 subprocess.check_call(cmd)
953
954
955-def mkdir(path, owner='root', group='root', perms=0555, force=False):
956+def mkdir(path, owner='root', group='root', perms=0o555, force=False):
957 """Create a directory"""
958 log("Making dir {} {}:{} {:o}".format(path, owner, group,
959 perms))
960@@ -132,7 +151,7 @@
961 os.chown(realpath, uid, gid)
962
963
964-def write_file(path, content, owner='root', group='root', perms=0444):
965+def write_file(path, content, owner='root', group='root', perms=0o444):
966 """Create or overwrite a file with the contents of a string"""
967 log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
968 uid = pwd.getpwnam(owner).pw_uid
969@@ -143,7 +162,19 @@
970 target.write(content)
971
972
973-def mount(device, mountpoint, options=None, persist=False):
974+def fstab_remove(mp):
975+ """Remove the given mountpoint entry from /etc/fstab
976+ """
977+ return Fstab.remove_by_mountpoint(mp)
978+
979+
980+def fstab_add(dev, mp, fs, options=None):
981+ """Adds the given device entry to the /etc/fstab file
982+ """
983+ return Fstab.add(dev, mp, fs, options=options)
984+
985+
986+def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"):
987 """Mount a filesystem at a particular mountpoint"""
988 cmd_args = ['mount']
989 if options is not None:
990@@ -151,12 +182,12 @@
991 cmd_args.extend([device, mountpoint])
992 try:
993 subprocess.check_output(cmd_args)
994- except subprocess.CalledProcessError, e:
995+ except subprocess.CalledProcessError as e:
996 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
997 return False
998+
999 if persist:
1000- # TODO: update fstab
1001- pass
1002+ return fstab_add(device, mountpoint, filesystem, options=options)
1003 return True
1004
1005
1006@@ -165,12 +196,12 @@
1007 cmd_args = ['umount', mountpoint]
1008 try:
1009 subprocess.check_output(cmd_args)
1010- except subprocess.CalledProcessError, e:
1011+ except subprocess.CalledProcessError as e:
1012 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
1013 return False
1014+
1015 if persist:
1016- # TODO: update fstab
1017- pass
1018+ return fstab_remove(mountpoint)
1019 return True
1020
1021
1022@@ -183,27 +214,52 @@
1023 return system_mounts
1024
1025
1026-def file_hash(path):
1027- """Generate a md5 hash of the contents of 'path' or None if not found """
1028+def file_hash(path, hash_type='md5'):
1029+ """
1030+ Generate a hash checksum of the contents of 'path' or None if not found.
1031+
1032+ :param str hash_type: Any hash alrgorithm supported by :mod:`hashlib`,
1033+ such as md5, sha1, sha256, sha512, etc.
1034+ """
1035 if os.path.exists(path):
1036- h = hashlib.md5()
1037- with open(path, 'r') as source:
1038- h.update(source.read()) # IGNORE:E1101 - it does have update
1039+ h = getattr(hashlib, hash_type)()
1040+ with open(path, 'rb') as source:
1041+ h.update(source.read())
1042 return h.hexdigest()
1043 else:
1044 return None
1045
1046
1047+def check_hash(path, checksum, hash_type='md5'):
1048+ """
1049+ Validate a file using a cryptographic checksum.
1050+
1051+ :param str checksum: Value of the checksum used to validate the file.
1052+ :param str hash_type: Hash algorithm used to generate `checksum`.
1053+ Can be any hash alrgorithm supported by :mod:`hashlib`,
1054+ such as md5, sha1, sha256, sha512, etc.
1055+ :raises ChecksumError: If the file fails the checksum
1056+
1057+ """
1058+ actual_checksum = file_hash(path, hash_type)
1059+ if checksum != actual_checksum:
1060+ raise ChecksumError("'%s' != '%s'" % (checksum, actual_checksum))
1061+
1062+
1063+class ChecksumError(ValueError):
1064+ pass
1065+
1066+
1067 def restart_on_change(restart_map, stopstart=False):
1068 """Restart services based on configuration files changing
1069
1070- This function is used a decorator, for example
1071+ This function is used a decorator, for example::
1072
1073 @restart_on_change({
1074 '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
1075 })
1076 def ceph_client_changed():
1077- ...
1078+ pass # your code here
1079
1080 In this example, the cinder-api and cinder-volume services
1081 would be restarted if /etc/ceph/ceph.conf is changed by the
1082@@ -246,7 +302,7 @@
1083 if length is None:
1084 length = random.choice(range(35, 45))
1085 alphanumeric_chars = [
1086- l for l in (string.letters + string.digits)
1087+ l for l in (string.ascii_letters + string.digits)
1088 if l not in 'l0QD1vAEIOUaeiou']
1089 random_chars = [
1090 random.choice(alphanumeric_chars) for _ in range(length)]
1091@@ -255,18 +311,24 @@
1092
1093 def list_nics(nic_type):
1094 '''Return a list of nics of given type(s)'''
1095- if isinstance(nic_type, basestring):
1096+ if isinstance(nic_type, six.string_types):
1097 int_types = [nic_type]
1098 else:
1099 int_types = nic_type
1100 interfaces = []
1101 for int_type in int_types:
1102 cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
1103- ip_output = subprocess.check_output(cmd).split('\n')
1104+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
1105 ip_output = (line for line in ip_output if line)
1106 for line in ip_output:
1107 if line.split()[1].startswith(int_type):
1108- interfaces.append(line.split()[1].replace(":", ""))
1109+ matched = re.search('.*: (bond[0-9]+\.[0-9]+)@.*', line)
1110+ if matched:
1111+ interface = matched.groups()[0]
1112+ else:
1113+ interface = line.split()[1].replace(":", "")
1114+ interfaces.append(interface)
1115+
1116 return interfaces
1117
1118
1119@@ -278,7 +340,7 @@
1120
1121 def get_nic_mtu(nic):
1122 cmd = ['ip', 'addr', 'show', nic]
1123- ip_output = subprocess.check_output(cmd).split('\n')
1124+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
1125 mtu = ""
1126 for line in ip_output:
1127 words = line.split()
1128@@ -289,9 +351,46 @@
1129
1130 def get_nic_hwaddr(nic):
1131 cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
1132- ip_output = subprocess.check_output(cmd)
1133+ ip_output = subprocess.check_output(cmd).decode('UTF-8')
1134 hwaddr = ""
1135 words = ip_output.split()
1136 if 'link/ether' in words:
1137 hwaddr = words[words.index('link/ether') + 1]
1138 return hwaddr
1139+
1140+
1141+def cmp_pkgrevno(package, revno, pkgcache=None):
1142+ '''Compare supplied revno with the revno of the installed package
1143+
1144+ * 1 => Installed revno is greater than supplied arg
1145+ * 0 => Installed revno is the same as supplied arg
1146+ * -1 => Installed revno is less than supplied arg
1147+
1148+ '''
1149+ import apt_pkg
1150+ from charmhelpers.fetch import apt_cache
1151+ if not pkgcache:
1152+ pkgcache = apt_cache()
1153+ pkg = pkgcache[package]
1154+ return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
1155+
1156+
1157+@contextmanager
1158+def chdir(d):
1159+ cur = os.getcwd()
1160+ try:
1161+ yield os.chdir(d)
1162+ finally:
1163+ os.chdir(cur)
1164+
1165+
1166+def chownr(path, owner, group):
1167+ uid = pwd.getpwnam(owner).pw_uid
1168+ gid = grp.getgrnam(group).gr_gid
1169+
1170+ for root, dirs, files in os.walk(path):
1171+ for name in dirs + files:
1172+ full = os.path.join(root, name)
1173+ broken_symlink = os.path.lexists(full) and not os.path.exists(full)
1174+ if not broken_symlink:
1175+ os.chown(full, uid, gid)
1176
1177=== added directory 'hooks/charmhelpers/core/services'
1178=== added file 'hooks/charmhelpers/core/services/__init__.py'
1179--- hooks/charmhelpers/core/services/__init__.py 1970-01-01 00:00:00 +0000
1180+++ hooks/charmhelpers/core/services/__init__.py 2014-12-11 14:18:46 +0000
1181@@ -0,0 +1,2 @@
1182+from .base import * # NOQA
1183+from .helpers import * # NOQA
1184
1185=== added file 'hooks/charmhelpers/core/services/base.py'
1186--- hooks/charmhelpers/core/services/base.py 1970-01-01 00:00:00 +0000
1187+++ hooks/charmhelpers/core/services/base.py 2014-12-11 14:18:46 +0000
1188@@ -0,0 +1,313 @@
1189+import os
1190+import re
1191+import json
1192+from collections import Iterable
1193+
1194+from charmhelpers.core import host
1195+from charmhelpers.core import hookenv
1196+
1197+
1198+__all__ = ['ServiceManager', 'ManagerCallback',
1199+ 'PortManagerCallback', 'open_ports', 'close_ports', 'manage_ports',
1200+ 'service_restart', 'service_stop']
1201+
1202+
1203+class ServiceManager(object):
1204+ def __init__(self, services=None):
1205+ """
1206+ Register a list of services, given their definitions.
1207+
1208+ Service definitions are dicts in the following formats (all keys except
1209+ 'service' are optional)::
1210+
1211+ {
1212+ "service": <service name>,
1213+ "required_data": <list of required data contexts>,
1214+ "provided_data": <list of provided data contexts>,
1215+ "data_ready": <one or more callbacks>,
1216+ "data_lost": <one or more callbacks>,
1217+ "start": <one or more callbacks>,
1218+ "stop": <one or more callbacks>,
1219+ "ports": <list of ports to manage>,
1220+ }
1221+
1222+ The 'required_data' list should contain dicts of required data (or
1223+ dependency managers that act like dicts and know how to collect the data).
1224+ Only when all items in the 'required_data' list are populated are the list
1225+ of 'data_ready' and 'start' callbacks executed. See `is_ready()` for more
1226+ information.
1227+
1228+ The 'provided_data' list should contain relation data providers, most likely
1229+ a subclass of :class:`charmhelpers.core.services.helpers.RelationContext`,
1230+ that will indicate a set of data to set on a given relation.
1231+
1232+ The 'data_ready' value should be either a single callback, or a list of
1233+ callbacks, to be called when all items in 'required_data' pass `is_ready()`.
1234+ Each callback will be called with the service name as the only parameter.
1235+ After all of the 'data_ready' callbacks are called, the 'start' callbacks
1236+ are fired.
1237+
1238+ The 'data_lost' value should be either a single callback, or a list of
1239+ callbacks, to be called when a 'required_data' item no longer passes
1240+ `is_ready()`. Each callback will be called with the service name as the
1241+ only parameter. After all of the 'data_lost' callbacks are called,
1242+ the 'stop' callbacks are fired.
1243+
1244+ The 'start' value should be either a single callback, or a list of
1245+ callbacks, to be called when starting the service, after the 'data_ready'
1246+ callbacks are complete. Each callback will be called with the service
1247+ name as the only parameter. This defaults to
1248+ `[host.service_start, services.open_ports]`.
1249+
1250+ The 'stop' value should be either a single callback, or a list of
1251+ callbacks, to be called when stopping the service. If the service is
1252+ being stopped because it no longer has all of its 'required_data', this
1253+ will be called after all of the 'data_lost' callbacks are complete.
1254+ Each callback will be called with the service name as the only parameter.
1255+ This defaults to `[services.close_ports, host.service_stop]`.
1256+
1257+ The 'ports' value should be a list of ports to manage. The default
1258+ 'start' handler will open the ports after the service is started,
1259+ and the default 'stop' handler will close the ports prior to stopping
1260+ the service.
1261+
1262+
1263+ Examples:
1264+
1265+ The following registers an Upstart service called bingod that depends on
1266+ a mongodb relation and which runs a custom `db_migrate` function prior to
1267+ restarting the service, and a Runit service called spadesd::
1268+
1269+ manager = services.ServiceManager([
1270+ {
1271+ 'service': 'bingod',
1272+ 'ports': [80, 443],
1273+ 'required_data': [MongoRelation(), config(), {'my': 'data'}],
1274+ 'data_ready': [
1275+ services.template(source='bingod.conf'),
1276+ services.template(source='bingod.ini',
1277+ target='/etc/bingod.ini',
1278+ owner='bingo', perms=0400),
1279+ ],
1280+ },
1281+ {
1282+ 'service': 'spadesd',
1283+ 'data_ready': services.template(source='spadesd_run.j2',
1284+ target='/etc/sv/spadesd/run',
1285+ perms=0555),
1286+ 'start': runit_start,
1287+ 'stop': runit_stop,
1288+ },
1289+ ])
1290+ manager.manage()
1291+ """
1292+ self._ready_file = os.path.join(hookenv.charm_dir(), 'READY-SERVICES.json')
1293+ self._ready = None
1294+ self.services = {}
1295+ for service in services or []:
1296+ service_name = service['service']
1297+ self.services[service_name] = service
1298+
1299+ def manage(self):
1300+ """
1301+ Handle the current hook by doing The Right Thing with the registered services.
1302+ """
1303+ hook_name = hookenv.hook_name()
1304+ if hook_name == 'stop':
1305+ self.stop_services()
1306+ else:
1307+ self.provide_data()
1308+ self.reconfigure_services()
1309+ cfg = hookenv.config()
1310+ if cfg.implicit_save:
1311+ cfg.save()
1312+
1313+ def provide_data(self):
1314+ """
1315+ Set the relation data for each provider in the ``provided_data`` list.
1316+
1317+ A provider must have a `name` attribute, which indicates which relation
1318+ to set data on, and a `provide_data()` method, which returns a dict of
1319+ data to set.
1320+ """
1321+ hook_name = hookenv.hook_name()
1322+ for service in self.services.values():
1323+ for provider in service.get('provided_data', []):
1324+ if re.match(r'{}-relation-(joined|changed)'.format(provider.name), hook_name):
1325+ data = provider.provide_data()
1326+ _ready = provider._is_ready(data) if hasattr(provider, '_is_ready') else data
1327+ if _ready:
1328+ hookenv.relation_set(None, data)
1329+
1330+ def reconfigure_services(self, *service_names):
1331+ """
1332+ Update all files for one or more registered services, and,
1333+ if ready, optionally restart them.
1334+
1335+ If no service names are given, reconfigures all registered services.
1336+ """
1337+ for service_name in service_names or self.services.keys():
1338+ if self.is_ready(service_name):
1339+ self.fire_event('data_ready', service_name)
1340+ self.fire_event('start', service_name, default=[
1341+ service_restart,
1342+ manage_ports])
1343+ self.save_ready(service_name)
1344+ else:
1345+ if self.was_ready(service_name):
1346+ self.fire_event('data_lost', service_name)
1347+ self.fire_event('stop', service_name, default=[
1348+ manage_ports,
1349+ service_stop])
1350+ self.save_lost(service_name)
1351+
1352+ def stop_services(self, *service_names):
1353+ """
1354+ Stop one or more registered services, by name.
1355+
1356+ If no service names are given, stops all registered services.
1357+ """
1358+ for service_name in service_names or self.services.keys():
1359+ self.fire_event('stop', service_name, default=[
1360+ manage_ports,
1361+ service_stop])
1362+
1363+ def get_service(self, service_name):
1364+ """
1365+ Given the name of a registered service, return its service definition.
1366+ """
1367+ service = self.services.get(service_name)
1368+ if not service:
1369+ raise KeyError('Service not registered: %s' % service_name)
1370+ return service
1371+
1372+ def fire_event(self, event_name, service_name, default=None):
1373+ """
1374+ Fire a data_ready, data_lost, start, or stop event on a given service.
1375+ """
1376+ service = self.get_service(service_name)
1377+ callbacks = service.get(event_name, default)
1378+ if not callbacks:
1379+ return
1380+ if not isinstance(callbacks, Iterable):
1381+ callbacks = [callbacks]
1382+ for callback in callbacks:
1383+ if isinstance(callback, ManagerCallback):
1384+ callback(self, service_name, event_name)
1385+ else:
1386+ callback(service_name)
1387+
1388+ def is_ready(self, service_name):
1389+ """
1390+ Determine if a registered service is ready, by checking its 'required_data'.
1391+
1392+ A 'required_data' item can be any mapping type, and is considered ready
1393+ if `bool(item)` evaluates as True.
1394+ """
1395+ service = self.get_service(service_name)
1396+ reqs = service.get('required_data', [])
1397+ return all(bool(req) for req in reqs)
1398+
1399+ def _load_ready_file(self):
1400+ if self._ready is not None:
1401+ return
1402+ if os.path.exists(self._ready_file):
1403+ with open(self._ready_file) as fp:
1404+ self._ready = set(json.load(fp))
1405+ else:
1406+ self._ready = set()
1407+
1408+ def _save_ready_file(self):
1409+ if self._ready is None:
1410+ return
1411+ with open(self._ready_file, 'w') as fp:
1412+ json.dump(list(self._ready), fp)
1413+
1414+ def save_ready(self, service_name):
1415+ """
1416+ Save an indicator that the given service is now data_ready.
1417+ """
1418+ self._load_ready_file()
1419+ self._ready.add(service_name)
1420+ self._save_ready_file()
1421+
1422+ def save_lost(self, service_name):
1423+ """
1424+ Save an indicator that the given service is no longer data_ready.
1425+ """
1426+ self._load_ready_file()
1427+ self._ready.discard(service_name)
1428+ self._save_ready_file()
1429+
1430+ def was_ready(self, service_name):
1431+ """
1432+ Determine if the given service was previously data_ready.
1433+ """
1434+ self._load_ready_file()
1435+ return service_name in self._ready
1436+
1437+
1438+class ManagerCallback(object):
1439+ """
1440+ Special case of a callback that takes the `ServiceManager` instance
1441+ in addition to the service name.
1442+
1443+ Subclasses should implement `__call__` which should accept three parameters:
1444+
1445+ * `manager` The `ServiceManager` instance
1446+ * `service_name` The name of the service it's being triggered for
1447+ * `event_name` The name of the event that this callback is handling
1448+ """
1449+ def __call__(self, manager, service_name, event_name):
1450+ raise NotImplementedError()
1451+
1452+
1453+class PortManagerCallback(ManagerCallback):
1454+ """
1455+ Callback class that will open or close ports, for use as either
1456+ a start or stop action.
1457+ """
1458+ def __call__(self, manager, service_name, event_name):
1459+ service = manager.get_service(service_name)
1460+ new_ports = service.get('ports', [])
1461+ port_file = os.path.join(hookenv.charm_dir(), '.{}.ports'.format(service_name))
1462+ if os.path.exists(port_file):
1463+ with open(port_file) as fp:
1464+ old_ports = fp.read().split(',')
1465+ for old_port in old_ports:
1466+ if bool(old_port):
1467+ old_port = int(old_port)
1468+ if old_port not in new_ports:
1469+ hookenv.close_port(old_port)
1470+ with open(port_file, 'w') as fp:
1471+ fp.write(','.join(str(port) for port in new_ports))
1472+ for port in new_ports:
1473+ if event_name == 'start':
1474+ hookenv.open_port(port)
1475+ elif event_name == 'stop':
1476+ hookenv.close_port(port)
1477+
1478+
1479+def service_stop(service_name):
1480+ """
1481+ Wrapper around host.service_stop to prevent spurious "unknown service"
1482+ messages in the logs.
1483+ """
1484+ if host.service_running(service_name):
1485+ host.service_stop(service_name)
1486+
1487+
1488+def service_restart(service_name):
1489+ """
1490+ Wrapper around host.service_restart to prevent spurious "unknown service"
1491+ messages in the logs.
1492+ """
1493+ if host.service_available(service_name):
1494+ if host.service_running(service_name):
1495+ host.service_restart(service_name)
1496+ else:
1497+ host.service_start(service_name)
1498+
1499+
1500+# Convenience aliases
1501+open_ports = close_ports = manage_ports = PortManagerCallback()
1502
1503=== added file 'hooks/charmhelpers/core/services/helpers.py'
1504--- hooks/charmhelpers/core/services/helpers.py 1970-01-01 00:00:00 +0000
1505+++ hooks/charmhelpers/core/services/helpers.py 2014-12-11 14:18:46 +0000
1506@@ -0,0 +1,243 @@
1507+import os
1508+import yaml
1509+from charmhelpers.core import hookenv
1510+from charmhelpers.core import templating
1511+
1512+from charmhelpers.core.services.base import ManagerCallback
1513+
1514+
1515+__all__ = ['RelationContext', 'TemplateCallback',
1516+ 'render_template', 'template']
1517+
1518+
1519+class RelationContext(dict):
1520+ """
1521+ Base class for a context generator that gets relation data from juju.
1522+
1523+ Subclasses must provide the attributes `name`, which is the name of the
1524+ interface of interest, `interface`, which is the type of the interface of
1525+ interest, and `required_keys`, which is the set of keys required for the
1526+ relation to be considered complete. The data for all interfaces matching
1527+ the `name` attribute that are complete will used to populate the dictionary
1528+ values (see `get_data`, below).
1529+
1530+ The generated context will be namespaced under the relation :attr:`name`,
1531+ to prevent potential naming conflicts.
1532+
1533+ :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
1534+ :param list additional_required_keys: Extend the list of :attr:`required_keys`
1535+ """
1536+ name = None
1537+ interface = None
1538+ required_keys = []
1539+
1540+ def __init__(self, name=None, additional_required_keys=None):
1541+ if name is not None:
1542+ self.name = name
1543+ if additional_required_keys is not None:
1544+ self.required_keys.extend(additional_required_keys)
1545+ self.get_data()
1546+
1547+ def __bool__(self):
1548+ """
1549+ Returns True if all of the required_keys are available.
1550+ """
1551+ return self.is_ready()
1552+
1553+ __nonzero__ = __bool__
1554+
1555+ def __repr__(self):
1556+ return super(RelationContext, self).__repr__()
1557+
1558+ def is_ready(self):
1559+ """
1560+ Returns True if all of the `required_keys` are available from any units.
1561+ """
1562+ ready = len(self.get(self.name, [])) > 0
1563+ if not ready:
1564+ hookenv.log('Incomplete relation: {}'.format(self.__class__.__name__), hookenv.DEBUG)
1565+ return ready
1566+
1567+ def _is_ready(self, unit_data):
1568+ """
1569+ Helper method that tests a set of relation data and returns True if
1570+ all of the `required_keys` are present.
1571+ """
1572+ return set(unit_data.keys()).issuperset(set(self.required_keys))
1573+
1574+ def get_data(self):
1575+ """
1576+ Retrieve the relation data for each unit involved in a relation and,
1577+ if complete, store it in a list under `self[self.name]`. This
1578+ is automatically called when the RelationContext is instantiated.
1579+
1580+ The units are sorted lexographically first by the service ID, then by
1581+ the unit ID. Thus, if an interface has two other services, 'db:1'
1582+ and 'db:2', with 'db:1' having two units, 'wordpress/0' and 'wordpress/1',
1583+ and 'db:2' having one unit, 'mediawiki/0', all of which have a complete
1584+ set of data, the relation data for the units will be stored in the
1585+ order: 'wordpress/0', 'wordpress/1', 'mediawiki/0'.
1586+
1587+ If you only care about a single unit on the relation, you can just
1588+ access it as `{{ interface[0]['key'] }}`. However, if you can at all
1589+ support multiple units on a relation, you should iterate over the list,
1590+ like::
1591+
1592+ {% for unit in interface -%}
1593+ {{ unit['key'] }}{% if not loop.last %},{% endif %}
1594+ {%- endfor %}
1595+
1596+ Note that since all sets of relation data from all related services and
1597+ units are in a single list, if you need to know which service or unit a
1598+ set of data came from, you'll need to extend this class to preserve
1599+ that information.
1600+ """
1601+ if not hookenv.relation_ids(self.name):
1602+ return
1603+
1604+ ns = self.setdefault(self.name, [])
1605+ for rid in sorted(hookenv.relation_ids(self.name)):
1606+ for unit in sorted(hookenv.related_units(rid)):
1607+ reldata = hookenv.relation_get(rid=rid, unit=unit)
1608+ if self._is_ready(reldata):
1609+ ns.append(reldata)
1610+
1611+ def provide_data(self):
1612+ """
1613+ Return data to be relation_set for this interface.
1614+ """
1615+ return {}
1616+
1617+
1618+class MysqlRelation(RelationContext):
1619+ """
1620+ Relation context for the `mysql` interface.
1621+
1622+ :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
1623+ :param list additional_required_keys: Extend the list of :attr:`required_keys`
1624+ """
1625+ name = 'db'
1626+ interface = 'mysql'
1627+ required_keys = ['host', 'user', 'password', 'database']
1628+
1629+
1630+class HttpRelation(RelationContext):
1631+ """
1632+ Relation context for the `http` interface.
1633+
1634+ :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
1635+ :param list additional_required_keys: Extend the list of :attr:`required_keys`
1636+ """
1637+ name = 'website'
1638+ interface = 'http'
1639+ required_keys = ['host', 'port']
1640+
1641+ def provide_data(self):
1642+ return {
1643+ 'host': hookenv.unit_get('private-address'),
1644+ 'port': 80,
1645+ }
1646+
1647+
1648+class RequiredConfig(dict):
1649+ """
1650+ Data context that loads config options with one or more mandatory options.
1651+
1652+ Once the required options have been changed from their default values, all
1653+ config options will be available, namespaced under `config` to prevent
1654+ potential naming conflicts (for example, between a config option and a
1655+ relation property).
1656+
1657+ :param list *args: List of options that must be changed from their default values.
1658+ """
1659+
1660+ def __init__(self, *args):
1661+ self.required_options = args
1662+ self['config'] = hookenv.config()
1663+ with open(os.path.join(hookenv.charm_dir(), 'config.yaml')) as fp:
1664+ self.config = yaml.load(fp).get('options', {})
1665+
1666+ def __bool__(self):
1667+ for option in self.required_options:
1668+ if option not in self['config']:
1669+ return False
1670+ current_value = self['config'][option]
1671+ default_value = self.config[option].get('default')
1672+ if current_value == default_value:
1673+ return False
1674+ if current_value in (None, '') and default_value in (None, ''):
1675+ return False
1676+ return True
1677+
1678+ def __nonzero__(self):
1679+ return self.__bool__()
1680+
1681+
1682+class StoredContext(dict):
1683+ """
1684+ A data context that always returns the data that it was first created with.
1685+
1686+ This is useful to do a one-time generation of things like passwords, that
1687+ will thereafter use the same value that was originally generated, instead
1688+ of generating a new value each time it is run.
1689+ """
1690+ def __init__(self, file_name, config_data):
1691+ """
1692+ If the file exists, populate `self` with the data from the file.
1693+ Otherwise, populate with the given data and persist it to the file.
1694+ """
1695+ if os.path.exists(file_name):
1696+ self.update(self.read_context(file_name))
1697+ else:
1698+ self.store_context(file_name, config_data)
1699+ self.update(config_data)
1700+
1701+ def store_context(self, file_name, config_data):
1702+ if not os.path.isabs(file_name):
1703+ file_name = os.path.join(hookenv.charm_dir(), file_name)
1704+ with open(file_name, 'w') as file_stream:
1705+ os.fchmod(file_stream.fileno(), 0o600)
1706+ yaml.dump(config_data, file_stream)
1707+
1708+ def read_context(self, file_name):
1709+ if not os.path.isabs(file_name):
1710+ file_name = os.path.join(hookenv.charm_dir(), file_name)
1711+ with open(file_name, 'r') as file_stream:
1712+ data = yaml.load(file_stream)
1713+ if not data:
1714+ raise OSError("%s is empty" % file_name)
1715+ return data
1716+
1717+
1718+class TemplateCallback(ManagerCallback):
1719+ """
1720+ Callback class that will render a Jinja2 template, for use as a ready
1721+ action.
1722+
1723+ :param str source: The template source file, relative to
1724+ `$CHARM_DIR/templates`
1725+
1726+ :param str target: The target to write the rendered template to
1727+ :param str owner: The owner of the rendered file
1728+ :param str group: The group of the rendered file
1729+ :param int perms: The permissions of the rendered file
1730+ """
1731+ def __init__(self, source, target,
1732+ owner='root', group='root', perms=0o444):
1733+ self.source = source
1734+ self.target = target
1735+ self.owner = owner
1736+ self.group = group
1737+ self.perms = perms
1738+
1739+ def __call__(self, manager, service_name, event_name):
1740+ service = manager.get_service(service_name)
1741+ context = {}
1742+ for ctx in service.get('required_data', []):
1743+ context.update(ctx)
1744+ templating.render(self.source, self.target, context,
1745+ self.owner, self.group, self.perms)
1746+
1747+
1748+# Convenience aliases for templates
1749+render_template = template = TemplateCallback
1750
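
The get_data() docstring above notes that the flat list stored under
self[self.name] does not record which service or unit each data set came
from. As a rough, hypothetical sketch (not part of this merge), a charm
that needs that information could subclass RelationContext and tag each
entry with its source unit:

    from charmhelpers.core import hookenv
    from charmhelpers.core.services.helpers import RelationContext

    class TaggedMysqlRelation(RelationContext):
        # Hypothetical subclass; mirrors MysqlRelation above.
        name = 'db'
        interface = 'mysql'
        required_keys = ['host', 'user', 'password', 'database']

        def get_data(self):
            if not hookenv.relation_ids(self.name):
                return
            ns = self.setdefault(self.name, [])
            for rid in sorted(hookenv.relation_ids(self.name)):
                for unit in sorted(hookenv.related_units(rid)):
                    reldata = hookenv.relation_get(rid=rid, unit=unit)
                    if self._is_ready(reldata):
                        # Remember which unit provided this data set.
                        reldata['__unit__'] = unit
                        ns.append(reldata)
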
1751=== added file 'hooks/charmhelpers/core/sysctl.py'
1752--- hooks/charmhelpers/core/sysctl.py 1970-01-01 00:00:00 +0000
1753+++ hooks/charmhelpers/core/sysctl.py 2014-12-11 14:18:46 +0000
1754@@ -0,0 +1,34 @@
1755+#!/usr/bin/env python
1756+# -*- coding: utf-8 -*-
1757+
1758+__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
1759+
1760+import yaml
1761+
1762+from subprocess import check_call
1763+
1764+from charmhelpers.core.hookenv import (
1765+ log,
1766+ DEBUG,
1767+)
1768+
1769+
1770+def create(sysctl_dict, sysctl_file):
1771+ """Creates a sysctl.conf file from a YAML associative array
1772+
1773+ :param sysctl_dict: a YAML-formatted string of sysctl options, eg "{ 'kernel.max_pid': 1337 }"
1774+ :type sysctl_dict: str or unicode
1775+ :param sysctl_file: path to the sysctl file to be saved
1776+ :type sysctl_file: str or unicode
1777+ :returns: None
1778+ """
1779+ sysctl_dict = yaml.load(sysctl_dict)
1780+
1781+ with open(sysctl_file, "w") as fd:
1782+ for key, value in sysctl_dict.items():
1783+ fd.write("{}={}\n".format(key, value))
1784+
1785+ log("Updating sysctl_file: %s values: %s" % (sysctl_file, sysctl_dict),
1786+ level=DEBUG)
1787+
1788+ check_call(["sysctl", "-p", sysctl_file])
1789
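
Note that create() expects a YAML-formatted string rather than a Python
dict: it runs yaml.load() on its argument before writing the key=value
pairs and running `sysctl -p` on the resulting file. A minimal sketch of a
call from a hook (hypothetical values; requires root):

    from charmhelpers.core.sysctl import create

    create("{ 'vm.swappiness': 10, 'net.core.somaxconn': 1024 }",
           '/etc/sysctl.d/50-mysql-charm.conf')
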
1790=== added file 'hooks/charmhelpers/core/templating.py'
1791--- hooks/charmhelpers/core/templating.py 1970-01-01 00:00:00 +0000
1792+++ hooks/charmhelpers/core/templating.py 2014-12-11 14:18:46 +0000
1793@@ -0,0 +1,52 @@
1794+import os
1795+
1796+from charmhelpers.core import host
1797+from charmhelpers.core import hookenv
1798+
1799+
1800+def render(source, target, context, owner='root', group='root',
1801+ perms=0o444, templates_dir=None):
1802+ """
1803+ Render a template.
1804+
1805+ The `source` path, if not absolute, is relative to the `templates_dir`.
1806+
1807+ The `target` path should be absolute.
1808+
1809+ The context should be a dict containing the values to be replaced in the
1810+ template.
1811+
1812+ The `owner`, `group`, and `perms` options will be passed to `write_file`.
1813+
1814+ If omitted, `templates_dir` defaults to the `templates` folder in the charm.
1815+
1816+ Note: Using this requires python-jinja2; if it is not installed, calling
1817+ this will attempt to use charmhelpers.fetch.apt_install to install it.
1818+ """
1819+ try:
1820+ from jinja2 import FileSystemLoader, Environment, exceptions
1821+ except ImportError:
1822+ try:
1823+ from charmhelpers.fetch import apt_install
1824+ except ImportError:
1825+ hookenv.log('Could not import jinja2, and could not import '
1826+ 'charmhelpers.fetch to install it',
1827+ level=hookenv.ERROR)
1828+ raise
1829+ apt_install('python-jinja2', fatal=True)
1830+ from jinja2 import FileSystemLoader, Environment, exceptions
1831+
1832+ if templates_dir is None:
1833+ templates_dir = os.path.join(hookenv.charm_dir(), 'templates')
1834+ loader = Environment(loader=FileSystemLoader(templates_dir))
1835+ try:
1836+ source = source
1837+ template = loader.get_template(source)
1838+ except exceptions.TemplateNotFound as e:
1839+ hookenv.log('Could not load template %s from %s.' %
1840+ (source, templates_dir),
1841+ level=hookenv.ERROR)
1842+ raise e
1843+ content = template.render(context)
1844+ host.mkdir(os.path.dirname(target))
1845+ host.write_file(target, content, owner, group, perms)
1846
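
A minimal sketch of how a hook might call render() (the template name and
context values are hypothetical; the template would live in the charm's
templates/ directory):

    from charmhelpers.core.templating import render

    render(source='binlog.cnf.j2',
           target='/etc/mysql/conf.d/binlog.cnf',
           context={'server_id': 1, 'binlog_format': 'MIXED'},
           perms=0o644)
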
1847=== modified file 'hooks/charmhelpers/fetch/__init__.py'
1848--- hooks/charmhelpers/fetch/__init__.py 2014-05-08 10:22:43 +0000
1849+++ hooks/charmhelpers/fetch/__init__.py 2014-12-11 14:18:46 +0000
1850@@ -1,20 +1,24 @@
1851 import importlib
1852+from tempfile import NamedTemporaryFile
1853+import time
1854 from yaml import safe_load
1855 from charmhelpers.core.host import (
1856 lsb_release
1857 )
1858-from urlparse import (
1859- urlparse,
1860- urlunparse,
1861-)
1862 import subprocess
1863 from charmhelpers.core.hookenv import (
1864 config,
1865 log,
1866 )
1867-import apt_pkg
1868 import os
1869
1870+import six
1871+if six.PY3:
1872+ from urllib.parse import urlparse, urlunparse
1873+else:
1874+ from urlparse import urlparse, urlunparse
1875+
1876+
1877 CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
1878 deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
1879 """
1880@@ -54,13 +58,69 @@
1881 'icehouse/proposed': 'precise-proposed/icehouse',
1882 'precise-icehouse/proposed': 'precise-proposed/icehouse',
1883 'precise-proposed/icehouse': 'precise-proposed/icehouse',
1884+ # Juno
1885+ 'juno': 'trusty-updates/juno',
1886+ 'trusty-juno': 'trusty-updates/juno',
1887+ 'trusty-juno/updates': 'trusty-updates/juno',
1888+ 'trusty-updates/juno': 'trusty-updates/juno',
1889+ 'juno/proposed': 'trusty-proposed/juno',
1890+ 'juno/proposed': 'trusty-proposed/juno',
1891+ 'trusty-juno/proposed': 'trusty-proposed/juno',
1892+ 'trusty-proposed/juno': 'trusty-proposed/juno',
1893 }
1894
1895+# The order of this list is very important. Handlers should be listed in order from
1896+# least- to most-specific URL matching.
1897+FETCH_HANDLERS = (
1898+ 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
1899+ 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
1900+ 'charmhelpers.fetch.giturl.GitUrlFetchHandler',
1901+)
1902+
1903+APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
1904+APT_NO_LOCK_RETRY_DELAY = 10 # Wait 10 seconds between apt lock checks.
1905+APT_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times.
1906+
1907+
1908+class SourceConfigError(Exception):
1909+ pass
1910+
1911+
1912+class UnhandledSource(Exception):
1913+ pass
1914+
1915+
1916+class AptLockError(Exception):
1917+ pass
1918+
1919+
1920+class BaseFetchHandler(object):
1921+
1922+ """Base class for FetchHandler implementations in fetch plugins"""
1923+
1924+ def can_handle(self, source):
1925+ """Returns True if the source can be handled. Otherwise returns
1926+ a string explaining why it cannot"""
1927+ return "Wrong source type"
1928+
1929+ def install(self, source):
1930+ """Try to download and unpack the source. Return the path to the
1931+ unpacked files or raise UnhandledSource."""
1932+ raise UnhandledSource("Wrong source type {}".format(source))
1933+
1934+ def parse_url(self, url):
1935+ return urlparse(url)
1936+
1937+ def base_url(self, url):
1938+ """Return url without querystring or fragment"""
1939+ parts = list(self.parse_url(url))
1940+ parts[4:] = ['' for i in parts[4:]]
1941+ return urlunparse(parts)
1942+
1943
1944 def filter_installed_packages(packages):
1945 """Returns a list of packages that require installation"""
1946- apt_pkg.init()
1947- cache = apt_pkg.Cache()
1948+ cache = apt_cache()
1949 _pkgs = []
1950 for package in packages:
1951 try:
1952@@ -73,6 +133,16 @@
1953 return _pkgs
1954
1955
1956+def apt_cache(in_memory=True):
1957+ """Build and return an apt cache"""
1958+ import apt_pkg
1959+ apt_pkg.init()
1960+ if in_memory:
1961+ apt_pkg.config.set("Dir::Cache::pkgcache", "")
1962+ apt_pkg.config.set("Dir::Cache::srcpkgcache", "")
1963+ return apt_pkg.Cache()
1964+
1965+
1966 def apt_install(packages, options=None, fatal=False):
1967 """Install one or more packages"""
1968 if options is None:
1969@@ -81,20 +151,13 @@
1970 cmd = ['apt-get', '--assume-yes']
1971 cmd.extend(options)
1972 cmd.append('install')
1973- if isinstance(packages, basestring):
1974+ if isinstance(packages, six.string_types):
1975 cmd.append(packages)
1976 else:
1977 cmd.extend(packages)
1978 log("Installing {} with options: {}".format(packages,
1979 options))
1980- env = os.environ.copy()
1981- if 'DEBIAN_FRONTEND' not in env:
1982- env['DEBIAN_FRONTEND'] = 'noninteractive'
1983-
1984- if fatal:
1985- subprocess.check_call(cmd, env=env)
1986- else:
1987- subprocess.call(cmd, env=env)
1988+ _run_apt_command(cmd, fatal)
1989
1990
1991 def apt_upgrade(options=None, fatal=False, dist=False):
1992@@ -109,48 +172,35 @@
1993 else:
1994 cmd.append('upgrade')
1995 log("Upgrading with options: {}".format(options))
1996-
1997- env = os.environ.copy()
1998- if 'DEBIAN_FRONTEND' not in env:
1999- env['DEBIAN_FRONTEND'] = 'noninteractive'
2000-
2001- if fatal:
2002- subprocess.check_call(cmd, env=env)
2003- else:
2004- subprocess.call(cmd, env=env)
2005+ _run_apt_command(cmd, fatal)
2006
2007
2008 def apt_update(fatal=False):
2009 """Update local apt cache"""
2010 cmd = ['apt-get', 'update']
2011- if fatal:
2012- subprocess.check_call(cmd)
2013- else:
2014- subprocess.call(cmd)
2015+ _run_apt_command(cmd, fatal)
2016
2017
2018 def apt_purge(packages, fatal=False):
2019 """Purge one or more packages"""
2020 cmd = ['apt-get', '--assume-yes', 'purge']
2021- if isinstance(packages, basestring):
2022+ if isinstance(packages, six.string_types):
2023 cmd.append(packages)
2024 else:
2025 cmd.extend(packages)
2026 log("Purging {}".format(packages))
2027- if fatal:
2028- subprocess.check_call(cmd)
2029- else:
2030- subprocess.call(cmd)
2031+ _run_apt_command(cmd, fatal)
2032
2033
2034 def apt_hold(packages, fatal=False):
2035 """Hold one or more packages"""
2036 cmd = ['apt-mark', 'hold']
2037- if isinstance(packages, basestring):
2038+ if isinstance(packages, six.string_types):
2039 cmd.append(packages)
2040 else:
2041 cmd.extend(packages)
2042 log("Holding {}".format(packages))
2043+
2044 if fatal:
2045 subprocess.check_call(cmd)
2046 else:
2047@@ -158,6 +208,29 @@
2048
2049
2050 def add_source(source, key=None):
2051+ """Add a package source to this system.
2052+
2053+ @param source: a URL or sources.list entry, as supported by
2054+ add-apt-repository(1). Examples::
2055+
2056+ ppa:charmers/example
2057+ deb https://stub:key@private.example.com/ubuntu trusty main
2058+
2059+ In addition:
2060+ 'proposed:' may be used to enable the standard 'proposed'
2061+ pocket for the release.
2062+ 'cloud:' may be used to activate official cloud archive pockets,
2063+ such as 'cloud:icehouse'
2064+ 'distro' may be used as a noop
2065+
2066+ @param key: A key to be added to the system's APT keyring and used
2067+ to verify the signatures on packages. Ideally, this should be an
2068+ ASCII format GPG public key including the block headers. A GPG key
2069+ id may also be used, but be aware that only insecure protocols are
2070+ available to retrieve the actual public key from a public keyserver,
2071+ placing your Juju environment at risk. PPA and cloud archive keys
2072+ are securely added automatically, so should not be provided.
2073+ """
2074 if source is None:
2075 log('Source is not present. Skipping')
2076 return
2077@@ -182,76 +255,98 @@
2078 release = lsb_release()['DISTRIB_CODENAME']
2079 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
2080 apt.write(PROPOSED_POCKET.format(release))
2081+ elif source == 'distro':
2082+ pass
2083+ else:
2084+ log("Unknown source: {!r}".format(source))
2085+
2086 if key:
2087- subprocess.check_call(['apt-key', 'adv', '--keyserver',
2088- 'hkp://keyserver.ubuntu.com:80', '--recv',
2089- key])
2090-
2091-
2092-class SourceConfigError(Exception):
2093- pass
2094+ if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
2095+ with NamedTemporaryFile('w+') as key_file:
2096+ key_file.write(key)
2097+ key_file.flush()
2098+ key_file.seek(0)
2099+ subprocess.check_call(['apt-key', 'add', '-'], stdin=key_file)
2100+ else:
2101+ # Note that hkp: is in no way a secure protocol. Using a
2102+ # GPG key id is pointless from a security POV unless you
2103+ # absolutely trust your network and DNS.
2104+ subprocess.check_call(['apt-key', 'adv', '--keyserver',
2105+ 'hkp://keyserver.ubuntu.com:80', '--recv',
2106+ key])
2107
2108
2109 def configure_sources(update=False,
2110 sources_var='install_sources',
2111 keys_var='install_keys'):
2112 """
2113- Configure multiple sources from charm configuration
2114+ Configure multiple sources from charm configuration.
2115+
2116+ The lists are encoded as yaml fragments in the configuration.
2117+ The fragment needs to be included as a string. Sources and their
2118+ corresponding keys are of the types supported by add_source().
2119
2120 Example config:
2121- install_sources:
2122+ install_sources: |
2123 - "ppa:foo"
2124 - "http://example.com/repo precise main"
2125- install_keys:
2126+ install_keys: |
2127 - null
2128 - "a1b2c3d4"
2129
2130 Note that 'null' (a.k.a. None) should not be quoted.
2131 """
2132- sources = safe_load(config(sources_var))
2133- keys = config(keys_var)
2134- if keys is not None:
2135- keys = safe_load(keys)
2136- if isinstance(sources, basestring) and (
2137- keys is None or isinstance(keys, basestring)):
2138- add_source(sources, keys)
2139+ sources = safe_load((config(sources_var) or '').strip()) or []
2140+ keys = safe_load((config(keys_var) or '').strip()) or None
2141+
2142+ if isinstance(sources, six.string_types):
2143+ sources = [sources]
2144+
2145+ if keys is None:
2146+ for source in sources:
2147+ add_source(source, None)
2148 else:
2149- if not len(sources) == len(keys):
2150- msg = 'Install sources and keys lists are different lengths'
2151- raise SourceConfigError(msg)
2152- for src_num in range(len(sources)):
2153- add_source(sources[src_num], keys[src_num])
2154+ if isinstance(keys, six.string_types):
2155+ keys = [keys]
2156+
2157+ if len(sources) != len(keys):
2158+ raise SourceConfigError(
2159+ 'Install sources and keys lists are different lengths')
2160+ for source, key in zip(sources, keys):
2161+ add_source(source, key)
2162 if update:
2163 apt_update(fatal=True)
2164
2165-# The order of this list is very important. Handlers should be listed in from
2166-# least- to most-specific URL matching.
2167-FETCH_HANDLERS = (
2168- 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
2169- 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
2170-)
2171-
2172-
2173-class UnhandledSource(Exception):
2174- pass
2175-
2176-
2177-def install_remote(source):
2178+
2179+def install_remote(source, *args, **kwargs):
2180 """
2181 Install a file tree from a remote source
2182
2183 The specified source should be a url of the form:
2184 scheme://[host]/path[#[option=value][&...]]
2185
2186- Schemes supported are based on this modules submodules
2187- Options supported are submodule-specific"""
2188+ Schemes supported are based on this modules submodules.
2189+ Options supported are submodule-specific.
2190+ Additional arguments are passed through to the submodule.
2191+
2192+ For example::
2193+
2194+ dest = install_remote('http://example.com/archive.tgz',
2195+ checksum='deadbeef',
2196+ hash_type='sha1')
2197+
2198+ This will download `archive.tgz`, validate it using SHA1 and, if
2199+ the file is ok, extract it and return the directory in which it
2200+ was extracted. If the checksum fails, it will raise
2201+ :class:`charmhelpers.core.host.ChecksumError`.
2202+ """
2203 # We ONLY check for True here because can_handle may return a string
2204 # explaining why it can't handle a given source.
2205 handlers = [h for h in plugins() if h.can_handle(source) is True]
2206 installed_to = None
2207 for handler in handlers:
2208 try:
2209- installed_to = handler.install(source)
2210+ installed_to = handler.install(source, *args, **kwargs)
2211 except UnhandledSource:
2212 pass
2213 if not installed_to:
2214@@ -265,30 +360,6 @@
2215 return install_remote(source)
2216
2217
2218-class BaseFetchHandler(object):
2219-
2220- """Base class for FetchHandler implementations in fetch plugins"""
2221-
2222- def can_handle(self, source):
2223- """Returns True if the source can be handled. Otherwise returns
2224- a string explaining why it cannot"""
2225- return "Wrong source type"
2226-
2227- def install(self, source):
2228- """Try to download and unpack the source. Return the path to the
2229- unpacked files or raise UnhandledSource."""
2230- raise UnhandledSource("Wrong source type {}".format(source))
2231-
2232- def parse_url(self, url):
2233- return urlparse(url)
2234-
2235- def base_url(self, url):
2236- """Return url without querystring or fragment"""
2237- parts = list(self.parse_url(url))
2238- parts[4:] = ['' for i in parts[4:]]
2239- return urlunparse(parts)
2240-
2241-
2242 def plugins(fetch_handlers=None):
2243 if not fetch_handlers:
2244 fetch_handlers = FETCH_HANDLERS
2245@@ -306,3 +377,40 @@
2246 log("FetchHandler {} not found, skipping plugin".format(
2247 handler_name))
2248 return plugin_list
2249+
2250+
2251+def _run_apt_command(cmd, fatal=False):
2252+ """
2253+ Run an APT command, checking output and retrying if the fatal flag is set
2254+ to True.
2255+
2256+ :param cmd: str: The apt command to run.
2257+ :param fatal: bool: Whether the command's output should be checked and
2258+ retried.
2259+ """
2260+ env = os.environ.copy()
2261+
2262+ if 'DEBIAN_FRONTEND' not in env:
2263+ env['DEBIAN_FRONTEND'] = 'noninteractive'
2264+
2265+ if fatal:
2266+ retry_count = 0
2267+ result = None
2268+
2269+ # If the command is considered "fatal", we need to retry if the apt
2270+ # lock was not acquired.
2271+
2272+ while result is None or result == APT_NO_LOCK:
2273+ try:
2274+ result = subprocess.check_call(cmd, env=env)
2275+ except subprocess.CalledProcessError as e:
2276+ retry_count = retry_count + 1
2277+ if retry_count > APT_NO_LOCK_RETRY_COUNT:
2278+ raise
2279+ result = e.returncode
2280+ log("Couldn't acquire DPKG lock. Will retry in {} seconds."
2281+ "".format(APT_NO_LOCK_RETRY_DELAY))
2282+ time.sleep(APT_NO_LOCK_RETRY_DELAY)
2283+
2284+ else:
2285+ subprocess.call(cmd, env=env)
2286
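
A minimal sketch of how a hook might use the reworked fetch helpers (the
source, key and package names are hypothetical). With fatal=True,
_run_apt_command() retries while another process holds the apt/dpkg lock,
up to APT_NO_LOCK_RETRY_COUNT attempts:

    from charmhelpers.fetch import add_source, apt_update, apt_install

    add_source('cloud:icehouse')
    add_source('ppa:example/ppa', key=None)
    apt_update(fatal=True)
    apt_install(['mysql-server'], fatal=True)
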
2287=== modified file 'hooks/charmhelpers/fetch/archiveurl.py'
2288--- hooks/charmhelpers/fetch/archiveurl.py 2014-04-17 10:53:00 +0000
2289+++ hooks/charmhelpers/fetch/archiveurl.py 2014-12-11 14:18:46 +0000
2290@@ -1,6 +1,23 @@
2291 import os
2292-import urllib2
2293-import urlparse
2294+import hashlib
2295+import re
2296+
2297+import six
2298+if six.PY3:
2299+ from urllib.request import (
2300+ build_opener, install_opener, urlopen, urlretrieve,
2301+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
2302+ )
2303+ from urllib.parse import urlparse, urlunparse, parse_qs
2304+ from urllib.error import URLError
2305+else:
2306+ from urllib import urlretrieve
2307+ from urllib2 import (
2308+ build_opener, install_opener, urlopen,
2309+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
2310+ URLError
2311+ )
2312+ from urlparse import urlparse, urlunparse, parse_qs
2313
2314 from charmhelpers.fetch import (
2315 BaseFetchHandler,
2316@@ -10,11 +27,37 @@
2317 get_archive_handler,
2318 extract,
2319 )
2320-from charmhelpers.core.host import mkdir
2321+from charmhelpers.core.host import mkdir, check_hash
2322+
2323+
2324+def splituser(host):
2325+ '''urllib.splituser(), but six's support of this seems broken'''
2326+ _userprog = re.compile('^(.*)@(.*)$')
2327+ match = _userprog.match(host)
2328+ if match:
2329+ return match.group(1, 2)
2330+ return None, host
2331+
2332+
2333+def splitpasswd(user):
2334+ '''urllib.splitpasswd(), but six's support of this is missing'''
2335+ _passwdprog = re.compile('^([^:]*):(.*)$', re.S)
2336+ match = _passwdprog.match(user)
2337+ if match:
2338+ return match.group(1, 2)
2339+ return user, None
2340
2341
2342 class ArchiveUrlFetchHandler(BaseFetchHandler):
2343- """Handler for archives via generic URLs"""
2344+ """
2345+ Handler to download archive files from arbitrary URLs.
2346+
2347+ Can fetch from http, https, ftp, and file URLs.
2348+
2349+ Can install either tarballs (.tar, .tgz, .tbz2, etc) or zip files.
2350+
2351+ Installs the contents of the archive in $CHARM_DIR/fetched/.
2352+ """
2353 def can_handle(self, source):
2354 url_parts = self.parse_url(source)
2355 if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
2356@@ -24,22 +67,28 @@
2357 return False
2358
2359 def download(self, source, dest):
2360+ """
2361+ Download an archive file.
2362+
2363+ :param str source: URL pointing to an archive file.
2364+ :param str dest: Local path location to download archive file to.
2365+ """
2366 # propagate all exceptions
2367 # URLError, OSError, etc
2368- proto, netloc, path, params, query, fragment = urlparse.urlparse(source)
2369+ proto, netloc, path, params, query, fragment = urlparse(source)
2370 if proto in ('http', 'https'):
2371- auth, barehost = urllib2.splituser(netloc)
2372+ auth, barehost = splituser(netloc)
2373 if auth is not None:
2374- source = urlparse.urlunparse((proto, barehost, path, params, query, fragment))
2375- username, password = urllib2.splitpasswd(auth)
2376- passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
2377+ source = urlunparse((proto, barehost, path, params, query, fragment))
2378+ username, password = splitpasswd(auth)
2379+ passman = HTTPPasswordMgrWithDefaultRealm()
2380 # Realm is set to None in add_password to force the username and password
2381 # to be used whatever the realm
2382 passman.add_password(None, source, username, password)
2383- authhandler = urllib2.HTTPBasicAuthHandler(passman)
2384- opener = urllib2.build_opener(authhandler)
2385- urllib2.install_opener(opener)
2386- response = urllib2.urlopen(source)
2387+ authhandler = HTTPBasicAuthHandler(passman)
2388+ opener = build_opener(authhandler)
2389+ install_opener(opener)
2390+ response = urlopen(source)
2391 try:
2392 with open(dest, 'w') as dest_file:
2393 dest_file.write(response.read())
2394@@ -48,16 +97,49 @@
2395 os.unlink(dest)
2396 raise e
2397
2398- def install(self, source):
2399+ # Mandatory file validation via Sha1 or MD5 hashing.
2400+ def download_and_validate(self, url, hashsum, validate="sha1"):
2401+ tempfile, headers = urlretrieve(url)
2402+ check_hash(tempfile, hashsum, validate)
2403+ return tempfile
2404+
2405+ def install(self, source, dest=None, checksum=None, hash_type='sha1'):
2406+ """
2407+ Download and install an archive file, with optional checksum validation.
2408+
2409+ The checksum can also be given on the `source` URL's fragment.
2410+ For example::
2411+
2412+ handler.install('http://example.com/file.tgz#sha1=deadbeef')
2413+
2414+ :param str source: URL pointing to an archive file.
2415+ :param str dest: Local destination path to install to. If not given,
2416+ installs to `$CHARM_DIR/archives/archive_file_name`.
2417+ :param str checksum: If given, validate the archive file after download.
2418+ :param str hash_type: Algorithm used to generate `checksum`.
2419+ Can be any hash algorithm supported by :mod:`hashlib`,
2420+ such as md5, sha1, sha256, sha512, etc.
2421+
2422+ """
2423 url_parts = self.parse_url(source)
2424 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
2425 if not os.path.exists(dest_dir):
2426- mkdir(dest_dir, perms=0755)
2427+ mkdir(dest_dir, perms=0o755)
2428 dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
2429 try:
2430 self.download(source, dld_file)
2431- except urllib2.URLError as e:
2432+ except URLError as e:
2433 raise UnhandledSource(e.reason)
2434 except OSError as e:
2435 raise UnhandledSource(e.strerror)
2436- return extract(dld_file)
2437+ options = parse_qs(url_parts.fragment)
2438+ for key, value in options.items():
2439+ if not six.PY3:
2440+ algorithms = hashlib.algorithms
2441+ else:
2442+ algorithms = hashlib.algorithms_available
2443+ if key in algorithms:
2444+ check_hash(dld_file, value, key)
2445+ if checksum:
2446+ check_hash(dld_file, checksum, hash_type)
2447+ return extract(dld_file, dest)
2448
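
The new download_and_validate() helper pairs urlretrieve() with
check_hash() for callers that only need a verified download. A rough
sketch (URL and digest are hypothetical; a mismatch raises
charmhelpers.core.host.ChecksumError):

    from charmhelpers.fetch.archiveurl import ArchiveUrlFetchHandler

    handler = ArchiveUrlFetchHandler()
    path = handler.download_and_validate(
        'http://example.com/payload.tgz',
        'da39a3ee5e6b4b0d3255bfef95601890afd80709',
        validate='sha1')
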
2449=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
2450--- hooks/charmhelpers/fetch/bzrurl.py 2014-02-19 14:49:31 +0000
2451+++ hooks/charmhelpers/fetch/bzrurl.py 2014-12-11 14:18:46 +0000
2452@@ -5,6 +5,10 @@
2453 )
2454 from charmhelpers.core.host import mkdir
2455
2456+import six
2457+if six.PY3:
2458+ raise ImportError('bzrlib does not support Python3')
2459+
2460 try:
2461 from bzrlib.branch import Branch
2462 except ImportError:
2463@@ -39,9 +43,10 @@
2464 def install(self, source):
2465 url_parts = self.parse_url(source)
2466 branch_name = url_parts.path.strip("/").split("/")[-1]
2467- dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name)
2468+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
2469+ branch_name)
2470 if not os.path.exists(dest_dir):
2471- mkdir(dest_dir, perms=0755)
2472+ mkdir(dest_dir, perms=0o755)
2473 try:
2474 self.branch(source, dest_dir)
2475 except OSError as e:
2476
2477=== added file 'hooks/charmhelpers/fetch/giturl.py'
2478--- hooks/charmhelpers/fetch/giturl.py 1970-01-01 00:00:00 +0000
2479+++ hooks/charmhelpers/fetch/giturl.py 2014-12-11 14:18:46 +0000
2480@@ -0,0 +1,48 @@
2481+import os
2482+from charmhelpers.fetch import (
2483+ BaseFetchHandler,
2484+ UnhandledSource
2485+)
2486+from charmhelpers.core.host import mkdir
2487+
2488+import six
2489+if six.PY3:
2490+ raise ImportError('GitPython does not support Python 3')
2491+
2492+try:
2493+ from git import Repo
2494+except ImportError:
2495+ from charmhelpers.fetch import apt_install
2496+ apt_install("python-git")
2497+ from git import Repo
2498+
2499+
2500+class GitUrlFetchHandler(BaseFetchHandler):
2501+ """Handler for git branches via generic and github URLs"""
2502+ def can_handle(self, source):
2503+ url_parts = self.parse_url(source)
2504+ # TODO (mattyw) no support for ssh git@ yet
2505+ if url_parts.scheme not in ('http', 'https', 'git'):
2506+ return False
2507+ else:
2508+ return True
2509+
2510+ def clone(self, source, dest, branch):
2511+ if not self.can_handle(source):
2512+ raise UnhandledSource("Cannot handle {}".format(source))
2513+
2514+ repo = Repo.clone_from(source, dest)
2515+ repo.git.checkout(branch)
2516+
2517+ def install(self, source, branch="master"):
2518+ url_parts = self.parse_url(source)
2519+ branch_name = url_parts.path.strip("/").split("/")[-1]
2520+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
2521+ branch_name)
2522+ if not os.path.exists(dest_dir):
2523+ mkdir(dest_dir, perms=0o755)
2524+ try:
2525+ self.clone(source, dest_dir, branch)
2526+ except OSError as e:
2527+ raise UnhandledSource(e.strerror)
2528+ return dest_dir
2529
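
A rough sketch of using the new git handler directly (repository URL and
branch are hypothetical; install() clones into $CHARM_DIR/fetched/ and
checks out the requested branch, so CHARM_DIR must be set and GitPython
available):

    from charmhelpers.fetch.giturl import GitUrlFetchHandler

    handler = GitUrlFetchHandler()
    dest_dir = handler.install('https://github.com/example/tools',
                               branch='stable')
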
2530=== modified file 'hooks/common.py'
2531--- hooks/common.py 2014-03-04 17:28:10 +0000
2532+++ hooks/common.py 2014-12-11 14:18:46 +0000
2533@@ -84,6 +84,8 @@
2534
2535 def grant_exists(db_name, db_user, remote_ip):
2536 cursor = get_db_cursor()
2537+ priv_string = "GRANT ALL PRIVILEGES ON `{}`.* " \
2538+ "TO '{}'@'{}'".format(db_name, db_user, remote_ip)
2539 try:
2540 cursor.execute("SHOW GRANTS for '{}'@'{}'".format(db_user,
2541 remote_ip))
2542@@ -93,7 +95,7 @@
2543 return False
2544 finally:
2545 cursor.close()
2546- return "GRANT ALL PRIVILEGES ON `{}`".format(db_name) in grants
2547+ return priv_string in grants
2548
2549
2550 def create_grant(db_name, db_user,
2551
2552=== modified file 'hooks/config-changed'
2553--- hooks/config-changed 2014-05-09 11:17:36 +0000
2554+++ hooks/config-changed 2014-12-11 14:18:46 +0000
2555@@ -12,7 +12,8 @@
2556 from string import upper
2557 from charmhelpers.fetch import (
2558 add_source,
2559- apt_update
2560+ apt_update,
2561+ apt_install
2562 )
2563 from charmhelpers.core.hookenv import relations_of_type
2564
2565@@ -71,6 +72,9 @@
2566 return '%s%s' % (mtot, upper(modifier[0]))
2567
2568
2569+if configs['prefer-ipv6']:
2570+ utils.check_ipv6_compatibility()
2571+
2572 # There is preliminary code for mariadb, but switching
2573 # from mariadb -> mysql fails badly, so it is disabled for now.
2574 valid_flavors = ['distro','percona']
2575@@ -116,7 +120,7 @@
2576
2577 if len(remove_pkgs):
2578 check_call(['apt-get','-y','remove'] + remove_pkgs)
2579-check_call(['apt-get','-y','install','-qq',package])
2580+apt_install(package)
2581
2582 # smart-calc stuff in the configs
2583 dataset_bytes = human_to_bytes(configs['dataset-size'])
2584@@ -171,6 +175,11 @@
2585 else:
2586 configs['max-connections'] = 'max_connections = %s' % configs['max-connections']
2587
2588+if configs['prefer-ipv6']:
2589+ configs['bind-address'] = '::'
2590+else:
2591+ configs['bind-address'] = '0.0.0.0'
2592+
2593 template="""
2594 ######################################
2595 #
2596@@ -234,7 +243,7 @@
2597 #
2598 # Instead of skip-networking the default is now to listen only on
2599 # localhost which is more compatible and is not less secure.
2600-bind-address = 0.0.0.0
2601+bind-address = %(bind-address)s
2602 #
2603 # * Fine Tuning
2604 #
2605
2606=== modified file 'hooks/ha_relations.py'
2607--- hooks/ha_relations.py 2014-03-04 17:28:10 +0000
2608+++ hooks/ha_relations.py 2014-12-11 14:18:46 +0000
2609@@ -42,10 +42,18 @@
2610
2611 block_storage = 'ceph'
2612
2613+ if utils.config_get('prefer-ipv6'):
2614+ res_mysql_vip = 'ocf:heartbeat:IPv6addr'
2615+ vip_params = 'ipv6addr'
2616+ vip_cidr = '64'
2617+ else:
2618+ res_mysql_vip = 'ocf:heartbeat:IPaddr2'
2619+ vip_params = 'ip'
2620+
2621 resources = {
2622 'res_mysql_rbd': 'ocf:ceph:rbd',
2623 'res_mysql_fs': 'ocf:heartbeat:Filesystem',
2624- 'res_mysql_vip': 'ocf:heartbeat:IPaddr2',
2625+ 'res_mysql_vip': res_mysql_vip,
2626 'res_mysqld': 'upstart:mysql'}
2627
2628 rbd_name = utils.config_get('rbd-name')
2629@@ -57,8 +65,8 @@
2630 'res_mysql_fs': 'params device="/dev/rbd/%s/%s" directory="%s" '
2631 'fstype="ext4" op start start-delay="10s"' %
2632 (POOL_NAME, rbd_name, DATA_SRC_DST),
2633- 'res_mysql_vip': 'params ip="%s" cidr_netmask="%s" nic="%s"' %
2634- (vip, vip_cidr, vip_iface),
2635+ 'res_mysql_vip': 'params "%s"="%s" cidr_netmask="%s" nic="%s"' %
2636+ (vip_params, vip, vip_cidr, vip_iface),
2637 'res_mysqld': 'op start start-delay="5s" op monitor interval="5s"'}
2638
2639 groups = {
2640
2641=== modified file 'hooks/lib/ceph_utils.py'
2642--- hooks/lib/ceph_utils.py 2014-03-04 17:28:10 +0000
2643+++ hooks/lib/ceph_utils.py 2014-12-11 14:18:46 +0000
2644@@ -16,6 +16,10 @@
2645 import time
2646 import lib.utils as utils
2647
2648+from charmhelpers.contrib.network.ip import (
2649+ format_ipv6_addr
2650+)
2651+
2652 KEYRING = '/etc/ceph/ceph.client.%s.keyring'
2653 KEYFILE = '/etc/ceph/ceph.client.%s.key'
2654
2655@@ -163,8 +167,14 @@
2656 hosts = []
2657 for r_id in utils.relation_ids('ceph'):
2658 for unit in utils.relation_list(r_id):
2659- hosts.append(utils.relation_get('private-address',
2660- unit=unit, rid=r_id))
2661+ ceph_addr = \
2662+ utils.relation_get('ceph-public-address', rid=r_id,
2663+ unit=unit) or \
2664+ utils.relation_get('private-address', rid=r_id, unit=unit)
2665+ # If the host is an IPv6 address we need to wrap it in []
2666+ ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr
2667+ hosts.append(ceph_addr)
2668+
2669 return hosts
2670
2671
2672
2673=== modified file 'hooks/lib/utils.py'
2674--- hooks/lib/utils.py 2014-03-04 17:28:10 +0000
2675+++ hooks/lib/utils.py 2014-12-11 14:18:46 +0000
2676@@ -14,6 +14,9 @@
2677 import subprocess
2678 import socket
2679 import sys
2680+from charmhelpers.core.host import (
2681+ lsb_release
2682+)
2683
2684
2685 def do_hooks(hooks):
2686@@ -219,3 +222,9 @@
2687 if relation_get(key, rid=r_id, unit=unit):
2688 return True
2689 return False
2690+
2691+
2692+def check_ipv6_compatibility():
2693+ if lsb_release()['DISTRIB_CODENAME'].lower() < "trusty":
2694+ raise Exception("IPv6 is not supported in charms for Ubuntu "
2695+ "versions less than Trusty 14.04")
2696
2697=== modified file 'hooks/monitors-relation-joined'
2698--- hooks/monitors-relation-joined 2012-07-12 21:58:36 +0000
2699+++ hooks/monitors-relation-joined 2014-12-11 14:18:46 +0000
2700@@ -3,5 +3,7 @@
2701 . hooks/monitors.common.bash
2702 echo "'${monitor_user}'@'${remote_addr}'" >> $revoke_todo
2703 $MYSQL -e "GRANT USAGE ON *.* TO '${monitor_user}'@'${remote_addr}'"
2704+# We won't get the real remote address from the NRPE subordinate; NRPE needs to tell the charm the remote address in the near future.
2705+$MYSQL -e "GRANT USAGE ON *.* TO '${monitor_user}'@'%'"
2706
2707 relation-set monitors="$(cat monitors.yaml)" target-id=${JUJU_UNIT_NAME//\//-} target-address=$(unit-get private-address)
2708
2709=== modified file 'hooks/shared_db_relations.py'
2710--- hooks/shared_db_relations.py 2014-10-09 10:18:28 +0000
2711+++ hooks/shared_db_relations.py 2014-12-11 14:18:46 +0000
2712@@ -19,6 +19,10 @@
2713 import os
2714 import lib.utils as utils
2715 import lib.cluster_utils as cluster
2716+from charmhelpers.core import hookenv
2717+from charmhelpers.contrib.network.ip import (
2718+ get_ipv6_addr
2719+)
2720
2721 LEADER_RES = 'res_mysql_vip'
2722
2723@@ -33,14 +37,62 @@
2724 'json']))
2725
2726
2727+def unit_sorted(units):
2728+ """Return a sorted list of unit names."""
2729+ return sorted(
2730+ units, lambda a, b: cmp(int(a.split('/')[-1]), int(b.split('/')[-1])))
2731+
2732+
2733+def get_unit_addr(relid, unitid):
2734+ return hookenv.relation_get(attribute='private-address',
2735+ unit=unitid,
2736+ rid=relid)
2737+
2738+
2739 def shared_db_changed():
2740
2741+ def get_allowed_units(database, username):
2742+ allowed_units = set()
2743+ for relid in hookenv.relation_ids('shared-db'):
2744+ for unit in hookenv.related_units(relid):
2745+ attr = "%s_%s" % (database, 'hostname')
2746+ hosts = hookenv.relation_get(attribute=attr, unit=unit,
2747+ rid=relid)
2748+ if not hosts:
2749+ hosts = [hookenv.relation_get(attribute='private-address',
2750+ unit=unit, rid=relid)]
2751+ else:
2752+ # hostname can be json-encoded list of hostnames
2753+ try:
2754+ hosts = json.loads(hosts)
2755+ except ValueError:
2756+ pass
2757+
2758+ if not isinstance(hosts, list):
2759+ hosts = [hosts]
2760+
2761+ if hosts:
2762+ for host in hosts:
2763+ utils.juju_log('INFO', "Checking host '%s' grant" %
2764+ (host))
2765+ if grant_exists(database, username, host):
2766+ if unit not in allowed_units:
2767+ allowed_units.add(unit)
2768+ else:
2769+ utils.juju_log('INFO', "No hosts found for grant check")
2770+
2771+ return allowed_units
2772+
2773 def configure_db(hostname,
2774 database,
2775 username):
2776 passwd_file = "/var/lib/mysql/mysql-{}.passwd".format(username)
2777 if hostname != local_hostname:
2778- remote_ip = socket.gethostbyname(hostname)
2779+ try:
2780+ remote_ip = socket.gethostbyname(hostname)
2781+ except Exception:
2782+ # socket.gethostbyname doesn't support ipv6
2783+ remote_ip = hostname
2784 else:
2785 remote_ip = '127.0.0.1'
2786
2787@@ -69,8 +121,12 @@
2788 ' as this service unit is not the leader')
2789 return
2790
2791+ if utils.config_get('prefer-ipv6'):
2792+ local_hostname = get_ipv6_addr(exc_list=[utils.config_get('vip')])[0]
2793+ else:
2794+ local_hostname = utils.unit_get('private-address')
2795+
2796 settings = relation_get()
2797- local_hostname = utils.unit_get('private-address')
2798 singleset = set([
2799 'database',
2800 'username',
2801@@ -78,15 +134,33 @@
2802
2803 if singleset.issubset(settings):
2804 # Process a single database configuration
2805- password = configure_db(settings['hostname'],
2806- settings['database'],
2807- settings['username'])
2808+ hostname = settings['hostname']
2809+ database = settings['database']
2810+ username = settings['username']
2811+
2812+ # Hostname can be json-encoded list of hostnames
2813+ try:
2814+ hostname = json.loads(hostname)
2815+ except ValueError:
2816+ pass
2817+
2818+ if isinstance(hostname, list):
2819+ for host in hostname:
2820+ password = configure_db(host, database, username)
2821+ else:
2822+ password = configure_db(hostname, database, username)
2823+
2824+ allowed_units = " ".join(unit_sorted(get_allowed_units(database,
2825+ username)))
2826+
2827 if not cluster.is_clustered():
2828 utils.relation_set(db_host=local_hostname,
2829- password=password)
2830+ password=password,
2831+ allowed_units=allowed_units)
2832 else:
2833 utils.relation_set(db_host=utils.config_get("vip"),
2834- password=password)
2835+ password=password,
2836+ allowed_units=allowed_units)
2837
2838 else:
2839 # Process multiple database setup requests.
2840@@ -114,13 +188,29 @@
2841 if db not in databases:
2842 databases[db] = {}
2843 databases[db][x] = v
2844+
2845 return_data = {}
2846 for db in databases:
2847 if singleset.issubset(databases[db]):
2848- return_data['_'.join([db, 'password'])] = \
2849- configure_db(databases[db]['hostname'],
2850- databases[db]['database'],
2851- databases[db]['username'])
2852+ database = databases[db]['database']
2853+ hostname = databases[db]['hostname']
2854+ username = databases[db]['username']
2855+ try:
2856+ hostname = json.loads(hostname)
2857+ except ValueError:
2858+ hostname = hostname
2859+
2860+ if isinstance(hostname, list):
2861+ for host in hostname:
2862+ password = configure_db(host, database, username)
2863+ else:
2864+ password = configure_db(hostname, database, username)
2865+
2866+ return_data['_'.join([db, 'password'])] = password
2867+ allowed_units = unit_sorted(get_allowed_units(database,
2868+ username))
2869+ return_data['_'.join([db, 'allowed_units'])] = \
2870+ " ".join(allowed_units)
2871 if len(return_data) > 0:
2872 utils.relation_set(**return_data)
2873 if not cluster.is_clustered():
2874@@ -128,6 +218,7 @@
2875 else:
2876 utils.relation_set(db_host=utils.config_get("vip"))
2877
2878+
2879 hooks = {"shared-db-relation-changed": shared_db_changed}
2880
2881 utils.do_hooks(hooks)
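
The shared-db handling above now accepts either a plain hostname or a
JSON-encoded list of hostnames, in both the single-database and the
prefixed multi-database cases. As a hypothetical sketch, a clustered
client could advertise all of its addresses so that each one is granted
access:

    from charmhelpers.core import hookenv

    # Hypothetical client-side hook for the single-database case.
    hookenv.relation_set(
        database='wordpress',
        username='wp',
        hostname='["10.0.0.10", "10.0.0.11"]')
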
