Merge lp:~freyes/charms/trusty/memcached/python-rewrite into lp:charms/trusty/memcached

Proposed by Felipe Reyes
Status: Merged
Merged at revision: 61
Proposed branch: lp:~freyes/charms/trusty/memcached/python-rewrite
Merge into: lp:charms/trusty/memcached
Diff against target: 4436 lines (+3861/-264)
39 files modified
.bzrignore (+5/-0)
Makefile (+39/-0)
charm-helpers.yaml (+6/-0)
config.yaml (+6/-6)
hooks/cache-relation-joined (+0/-9)
hooks/charmhelpers/contrib/network/ip.py (+351/-0)
hooks/charmhelpers/contrib/network/ovs/__init__.py (+80/-0)
hooks/charmhelpers/contrib/network/ufw.py (+189/-0)
hooks/charmhelpers/core/fstab.py (+118/-0)
hooks/charmhelpers/core/hookenv.py (+542/-0)
hooks/charmhelpers/core/host.py (+416/-0)
hooks/charmhelpers/core/services/__init__.py (+2/-0)
hooks/charmhelpers/core/services/base.py (+313/-0)
hooks/charmhelpers/core/services/helpers.py (+243/-0)
hooks/charmhelpers/core/sysctl.py (+34/-0)
hooks/charmhelpers/core/templating.py (+52/-0)
hooks/charmhelpers/fetch/__init__.py (+416/-0)
hooks/charmhelpers/fetch/archiveurl.py (+145/-0)
hooks/charmhelpers/fetch/bzrurl.py (+54/-0)
hooks/charmhelpers/fetch/giturl.py (+51/-0)
hooks/config-changed (+0/-155)
hooks/install (+0/-20)
hooks/memcached_hooks.py (+217/-0)
hooks/memcached_utils.py (+27/-0)
hooks/munin-relation-changed (+0/-29)
hooks/nrpe-external-master-relation-changed (+0/-19)
hooks/start (+0/-2)
hooks/stop (+0/-3)
hooks/upgrade-charm (+0/-4)
metadata.yaml (+2/-2)
templates/memcached.conf (+62/-0)
templates/munin-node.conf (+66/-0)
templates/nrpe_check_memcached.cfg.tmpl (+1/-1)
templates/nrpe_export.cfg.tmpl (+3/-3)
test_requirements.txt (+7/-0)
tests/10_deploy_test.py (+22/-11)
unit_tests/__init__.py (+2/-0)
unit_tests/test_memcached_hooks.py (+268/-0)
unit_tests/test_utils.py (+122/-0)
To merge this branch: bzr merge lp:~freyes/charms/trusty/memcached/python-rewrite
Reviewer: Jorge Niedbalski
Review status: Pending
Review via email: mp+244233@code.launchpad.net

This proposal supersedes a proposal from 2014-12-09.

Description of the change

Dear Charmers,

This MP is a rewrite of the charm in Python that leverages charmhelpers and secures the memcached service with ufw rules, allowing access only from machines related to the service.

Summary of changes:

* Dropped all the Bash code in favor of Python
* Added unit tests
* Changed the config type of the following keys to boolean:
  * disable-auto-cleanup
  * disable-cas
  * disable-large-pages
* Use ufw to secure memcached[0]

[0] root@ubuntu:~# ufw status # before relating memcached with mediawiki
Status: active

To                         Action      From
--                         ------      ----
22                         ALLOW       Anywhere
4949                       ALLOW       Anywhere
22 (v6)                    ALLOW       Anywhere (v6)
4949 (v6)                  ALLOW       Anywhere (v6)
21/tcp                     ALLOW       ::1

root@ubuntu:~# ufw status # after juju add-relation memcached mediawiki
Status: active

To                         Action      From
--                         ------      ----
22                         ALLOW       Anywhere
4949                       ALLOW       Anywhere
11211/tcp                  ALLOW       192.168.0.120
22 (v6)                    ALLOW       Anywhere (v6)
4949 (v6)                  ALLOW       Anywhere (v6)

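The 11211/tcp rule that appears after `juju add-relation` (allowing only the related unit's address) is produced by the `modify_access` helper added in hooks/charmhelpers/contrib/network/ufw.py. As a rough sketch of what that helper assembles before invoking `ufw`, the command can be modeled as a pure function (`build_ufw_rule` is a name invented here for illustration; the real helper also runs the command and logs its output):

```python
def build_ufw_rule(src, dst='any', port=None, proto=None, action='allow'):
    """Assemble the argument list for a ufw rule, mirroring modify_access.

    src:    source address or subnet (e.g. '192.168.0.120')
    dst:    destination address ('any' by default)
    port:   destination port as a string
    proto:  'tcp' or 'udp'
    action: 'allow' to grant access, 'delete' to revoke a previous allow
    """
    if action == 'delete':
        # Revoking reuses the original 'allow' rule, prefixed with 'delete'.
        cmd = ['ufw', 'delete', 'allow']
    else:
        cmd = ['ufw', action]
    if src is not None:
        cmd += ['from', src]
    if dst is not None:
        cmd += ['to', dst]
    if port is not None:
        cmd += ['port', str(port)]
    if proto is not None:
        cmd += ['proto', proto]
    return cmd
```

For the relation shown above, `build_ufw_rule('192.168.0.120', port='11211', proto='tcp')` yields the rule that opens memcached only to the mediawiki unit, and `action='delete'` yields the corresponding revocation when the relation departs.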
Revision history for this message
Jorge Niedbalski (niedbalski) wrote : Posted in a previous version of this proposal

Hello Felipe,

Thanks for this contribution. I made some inline comments that need to be fixed, and I also have a few suggestions for improvement.

On the lint side,

- Remove the 'tags' section from the metadata.yaml file.
- Add categories: ["system"] to the metadata.yaml file.

On the test side,

- Remove the pre_install_hooks code.
- Add test coverage for the method cache_relation_departed
- Add test coverage for the method upgrade_charm

On the memcached_utils file,

- Instead of creating your own template rendering method, please see:
hooks/charmhelpers/core/templating.py
- Add test coverage for method revoke_access
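For context, the suggestion is to replace a hand-rolled rendering method with the shared helper in hooks/charmhelpers/core/templating.py, which renders templates from the charm's templates/ directory into target files. As a stdlib-only illustration of the idea (this is not the charmhelpers API; `render` here is an invented name and the real helper uses a proper template engine with file I/O, ownership, and permissions):

```python
from string import Template


def render(template_text, context):
    # Substitute $-style placeholders from a context dict,
    # instead of concatenating config strings by hand.
    return Template(template_text).substitute(context)


# e.g. a fragment of a memcached.conf-style template:
conf = render("-p $port\n-m $size\n", {'port': 11211, 'size': 768})
```

Centralizing rendering this way keeps the option-to-config mapping in one template file rather than scattered through hook code, which is what the charmhelpers helper provides.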

review: Needs Fixing
Revision history for this message
Jorge Niedbalski (niedbalski) wrote : Posted in a previous version of this proposal

Hello Felipe,

The 'make test' target doesn't work properly; in fact, it references files that no longer exist in the project.

Please fix them; other than that, LGTM.

review: Needs Fixing
Revision history for this message
Felipe Reyes (freyes) wrote : Posted in a previous version of this proposal

@niedbalski, I fixed the amulet test, please try running 'make test' again. Thanks.

Revision history for this message
Review Queue (review-queue) wrote : Posted in a previous version of this proposal

This item has failed automated testing! Results are available here: http://reports.vapour.ws/charm-tests/charm-bundle-test-10637-results

review: Needs Fixing (automated testing)
Revision history for this message
Jorge Niedbalski (niedbalski) wrote : Posted in a previous version of this proposal

Felipe,

Your Makefile assumes nosetests is already installed.
Could you please modify your Makefile to look like this one: https://pastebin.canonical.com/122033/ ?

review: Needs Fixing
Revision history for this message
Jorge Niedbalski (niedbalski) wrote : Posted in a previous version of this proposal

Felipe,

Everything looks OK. Tests and lint are passing, and you followed the overall suggestions.

Thanks

review: Approve
Revision history for this message
Review Queue (review-queue) wrote : Posted in a previous version of this proposal

This item has failed automated testing! Results are available here: http://reports.vapour.ws/charm-tests/charm-bundle-test-10646-results

review: Needs Fixing (automated testing)

Preview Diff

1=== added file '.bzrignore'
2--- .bzrignore 1970-01-01 00:00:00 +0000
3+++ .bzrignore 2014-12-09 22:04:35 +0000
4@@ -0,0 +1,5 @@
5+*.py[co]
6+*~
7+bin/
8+.coverage
9+.venv
10
11=== added file 'Makefile'
12--- Makefile 1970-01-01 00:00:00 +0000
13+++ Makefile 2014-12-09 22:04:35 +0000
14@@ -0,0 +1,39 @@
15+#!/usr/bin/make
16+PYTHON := /usr/bin/env python
17+EXTRA :=
18+
19+
20+clean:
21+ rm -f .coverage
22+ find . -name '*.pyc' -delete
23+ rm -rf .venv
24+ (which dh_clean && dh_clean) || true
25+
26+.venv:
27+ sudo apt-get install -y gcc python-dev python-virtualenv python-apt
28+ virtualenv .venv --system-site-packages
29+ .venv/bin/pip install -I -r test_requirements.txt
30+
31+lint: .venv
32+ .venv/bin/flake8 --exclude hooks/charmhelpers hooks tests unit_tests
33+ @charm proof
34+
35+test: clean .venv
36+ @echo Starting unit tests...
37+ .venv/bin/nosetests -s --nologcapture --with-coverage $(EXTRA) unit_tests/
38+
39+functional_test:
40+ @echo Starting amulet tests...
41+ @juju test -v -p AMULET_HTTP_PROXY --timeout 900
42+
43+bin/charm_helpers_sync.py:
44+ @mkdir -p bin
45+ @bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
46+ > bin/charm_helpers_sync.py
47+
48+sync: bin/charm_helpers_sync.py
49+ @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers.yaml
50+
51+publish: lint unit_test
52+ bzr push lp:charms/memcached
53+ bzr push lp:charms/trusty/memcached
54
55=== added file 'charm-helpers.yaml'
56--- charm-helpers.yaml 1970-01-01 00:00:00 +0000
57+++ charm-helpers.yaml 2014-12-09 22:04:35 +0000
58@@ -0,0 +1,6 @@
59+destination: hooks/charmhelpers
60+branch: lp:charm-helpers
61+include:
62+ - fetch
63+ - core
64+ - contrib.network
65
66=== modified file 'config.yaml'
67--- config.yaml 2013-07-30 12:19:47 +0000
68+++ config.yaml 2014-12-09 22:04:35 +0000
69@@ -28,8 +28,8 @@
70 description: |
71 do not remove things automatically from the cache on OOM
72 (memcached option -M)
73- default: "no"
74- type: string
75+ default: False
76+ type: boolean
77 factor:
78 description: |
79 Use <factor> as the multiplier for computing the sizes of memory
80@@ -52,8 +52,8 @@
81 default: -1
82 disable-cas:
83 description: disable use of CAS (and reduce the per-item size by 8 bytes)
84- default: "no"
85- type: string
86+ default: False
87+ type: boolean
88 slab-page-size:
89 description: |
90 Override the size of each slab page in bytes. In mundane
91@@ -72,8 +72,8 @@
92 type: int
93 disable-large-pages:
94 description: The charm will will try to use large pages if given more than 2GB of RAM. You may want to disable this behavior. (memcached option -L)
95- type: string
96- default: "no"
97+ type: boolean
98+ default: False
99 extra-options:
100 description: memcached has many other options documented in its man page. You may pass them here as a string which will be appended to memcached's execution.
101 type: string
102
103=== added symlink 'hooks/cache-relation-departed'
104=== target is u'memcached_hooks.py'
105=== modified file 'hooks/cache-relation-joined'
106--- hooks/cache-relation-joined 2012-09-20 11:43:24 +0000
107+++ hooks/cache-relation-joined 1970-01-01 00:00:00 +0000
108@@ -1,9 +0,0 @@
109-#!/bin/sh
110-if [ -z "$JUJU_RELATION_ID" ] ; then
111- rids=$(relation-ids cache)
112-else
113- rids=$JUJU_RELATION_ID
114-fi
115-for rid in $rids ; do
116- relation-set -r $rid host=`unit-get private-address` port=`config-get tcp-port` udp-port=`config-get udp-port`
117-done
118
119=== target is u'memcached_hooks.py'
120=== added directory 'hooks/charmhelpers'
121=== added file 'hooks/charmhelpers/__init__.py'
122=== added directory 'hooks/charmhelpers/contrib'
123=== added file 'hooks/charmhelpers/contrib/__init__.py'
124=== added directory 'hooks/charmhelpers/contrib/network'
125=== added file 'hooks/charmhelpers/contrib/network/__init__.py'
126=== added file 'hooks/charmhelpers/contrib/network/ip.py'
127--- hooks/charmhelpers/contrib/network/ip.py 1970-01-01 00:00:00 +0000
128+++ hooks/charmhelpers/contrib/network/ip.py 2014-12-09 22:04:35 +0000
129@@ -0,0 +1,351 @@
130+import glob
131+import re
132+import subprocess
133+
134+from functools import partial
135+
136+from charmhelpers.core.hookenv import unit_get
137+from charmhelpers.fetch import apt_install
138+from charmhelpers.core.hookenv import (
139+ log
140+)
141+
142+try:
143+ import netifaces
144+except ImportError:
145+ apt_install('python-netifaces')
146+ import netifaces
147+
148+try:
149+ import netaddr
150+except ImportError:
151+ apt_install('python-netaddr')
152+ import netaddr
153+
154+
155+def _validate_cidr(network):
156+ try:
157+ netaddr.IPNetwork(network)
158+ except (netaddr.core.AddrFormatError, ValueError):
159+ raise ValueError("Network (%s) is not in CIDR presentation format" %
160+ network)
161+
162+
163+def no_ip_found_error_out(network):
164+ errmsg = ("No IP address found in network: %s" % network)
165+ raise ValueError(errmsg)
166+
167+
168+def get_address_in_network(network, fallback=None, fatal=False):
169+ """Get an IPv4 or IPv6 address within the network from the host.
170+
171+ :param network (str): CIDR presentation format. For example,
172+ '192.168.1.0/24'.
173+ :param fallback (str): If no address is found, return fallback.
174+ :param fatal (boolean): If no address is found, fallback is not
175+ set and fatal is True then exit(1).
176+ """
177+ if network is None:
178+ if fallback is not None:
179+ return fallback
180+
181+ if fatal:
182+ no_ip_found_error_out(network)
183+ else:
184+ return None
185+
186+ _validate_cidr(network)
187+ network = netaddr.IPNetwork(network)
188+ for iface in netifaces.interfaces():
189+ addresses = netifaces.ifaddresses(iface)
190+ if network.version == 4 and netifaces.AF_INET in addresses:
191+ addr = addresses[netifaces.AF_INET][0]['addr']
192+ netmask = addresses[netifaces.AF_INET][0]['netmask']
193+ cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
194+ if cidr in network:
195+ return str(cidr.ip)
196+
197+ if network.version == 6 and netifaces.AF_INET6 in addresses:
198+ for addr in addresses[netifaces.AF_INET6]:
199+ if not addr['addr'].startswith('fe80'):
200+ cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
201+ addr['netmask']))
202+ if cidr in network:
203+ return str(cidr.ip)
204+
205+ if fallback is not None:
206+ return fallback
207+
208+ if fatal:
209+ no_ip_found_error_out(network)
210+
211+ return None
212+
213+
214+def is_ipv6(address):
215+ """Determine whether provided address is IPv6 or not."""
216+ try:
217+ address = netaddr.IPAddress(address)
218+ except netaddr.AddrFormatError:
219+ # probably a hostname - so not an address at all!
220+ return False
221+
222+ return address.version == 6
223+
224+
225+def is_address_in_network(network, address):
226+ """
227+ Determine whether the provided address is within a network range.
228+
229+ :param network (str): CIDR presentation format. For example,
230+ '192.168.1.0/24'.
231+ :param address: An individual IPv4 or IPv6 address without a net
232+ mask or subnet prefix. For example, '192.168.1.1'.
233+ :returns boolean: Flag indicating whether address is in network.
234+ """
235+ try:
236+ network = netaddr.IPNetwork(network)
237+ except (netaddr.core.AddrFormatError, ValueError):
238+ raise ValueError("Network (%s) is not in CIDR presentation format" %
239+ network)
240+
241+ try:
242+ address = netaddr.IPAddress(address)
243+ except (netaddr.core.AddrFormatError, ValueError):
244+ raise ValueError("Address (%s) is not in correct presentation format" %
245+ address)
246+
247+ if address in network:
248+ return True
249+ else:
250+ return False
251+
252+
253+def _get_for_address(address, key):
254+ """Retrieve an attribute of or the physical interface that
255+ the IP address provided could be bound to.
256+
257+ :param address (str): An individual IPv4 or IPv6 address without a net
258+ mask or subnet prefix. For example, '192.168.1.1'.
259+ :param key: 'iface' for the physical interface name or an attribute
260+ of the configured interface, for example 'netmask'.
261+ :returns str: Requested attribute or None if address is not bindable.
262+ """
263+ address = netaddr.IPAddress(address)
264+ for iface in netifaces.interfaces():
265+ addresses = netifaces.ifaddresses(iface)
266+ if address.version == 4 and netifaces.AF_INET in addresses:
267+ addr = addresses[netifaces.AF_INET][0]['addr']
268+ netmask = addresses[netifaces.AF_INET][0]['netmask']
269+ network = netaddr.IPNetwork("%s/%s" % (addr, netmask))
270+ cidr = network.cidr
271+ if address in cidr:
272+ if key == 'iface':
273+ return iface
274+ else:
275+ return addresses[netifaces.AF_INET][0][key]
276+
277+ if address.version == 6 and netifaces.AF_INET6 in addresses:
278+ for addr in addresses[netifaces.AF_INET6]:
279+ if not addr['addr'].startswith('fe80'):
280+ network = netaddr.IPNetwork("%s/%s" % (addr['addr'],
281+ addr['netmask']))
282+ cidr = network.cidr
283+ if address in cidr:
284+ if key == 'iface':
285+ return iface
286+ elif key == 'netmask' and cidr:
287+ return str(cidr).split('/')[1]
288+ else:
289+ return addr[key]
290+
291+ return None
292+
293+
294+get_iface_for_address = partial(_get_for_address, key='iface')
295+
296+
297+get_netmask_for_address = partial(_get_for_address, key='netmask')
298+
299+
300+def format_ipv6_addr(address):
301+ """If address is IPv6, wrap it in '[]' otherwise return None.
302+
303+ This is required by most configuration files when specifying IPv6
304+ addresses.
305+ """
306+ if is_ipv6(address):
307+ return "[%s]" % address
308+
309+ return None
310+
311+
312+def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False,
313+ fatal=True, exc_list=None):
314+ """Return the assigned IP address for a given interface, if any."""
315+ # Extract nic if passed /dev/ethX
316+ if '/' in iface:
317+ iface = iface.split('/')[-1]
318+
319+ if not exc_list:
320+ exc_list = []
321+
322+ try:
323+ inet_num = getattr(netifaces, inet_type)
324+ except AttributeError:
325+ raise Exception("Unknown inet type '%s'" % str(inet_type))
326+
327+ interfaces = netifaces.interfaces()
328+ if inc_aliases:
329+ ifaces = []
330+ for _iface in interfaces:
331+ if iface == _iface or _iface.split(':')[0] == iface:
332+ ifaces.append(_iface)
333+
334+ if fatal and not ifaces:
335+ raise Exception("Invalid interface '%s'" % iface)
336+
337+ ifaces.sort()
338+ else:
339+ if iface not in interfaces:
340+ if fatal:
341+ raise Exception("Interface '%s' not found " % (iface))
342+ else:
343+ return []
344+
345+ else:
346+ ifaces = [iface]
347+
348+ addresses = []
349+ for netiface in ifaces:
350+ net_info = netifaces.ifaddresses(netiface)
351+ if inet_num in net_info:
352+ for entry in net_info[inet_num]:
353+ if 'addr' in entry and entry['addr'] not in exc_list:
354+ addresses.append(entry['addr'])
355+
356+ if fatal and not addresses:
357+ raise Exception("Interface '%s' doesn't have any %s addresses." %
358+ (iface, inet_type))
359+
360+ return sorted(addresses)
361+
362+
363+get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET')
364+
365+
366+def get_iface_from_addr(addr):
367+ """Work out on which interface the provided address is configured."""
368+ for iface in netifaces.interfaces():
369+ addresses = netifaces.ifaddresses(iface)
370+ for inet_type in addresses:
371+ for _addr in addresses[inet_type]:
372+ _addr = _addr['addr']
373+ # link local
374+ ll_key = re.compile("(.+)%.*")
375+ raw = re.match(ll_key, _addr)
376+ if raw:
377+ _addr = raw.group(1)
378+
379+ if _addr == addr:
380+ log("Address '%s' is configured on iface '%s'" %
381+ (addr, iface))
382+ return iface
383+
384+ msg = "Unable to infer net iface on which '%s' is configured" % (addr)
385+ raise Exception(msg)
386+
387+
388+def sniff_iface(f):
389+ """Ensure decorated function is called with a value for iface.
390+
391+ If no iface provided, inject net iface inferred from unit private address.
392+ """
393+ def iface_sniffer(*args, **kwargs):
394+ if not kwargs.get('iface', None):
395+ kwargs['iface'] = get_iface_from_addr(unit_get('private-address'))
396+
397+ return f(*args, **kwargs)
398+
399+ return iface_sniffer
400+
401+
402+@sniff_iface
403+def get_ipv6_addr(iface=None, inc_aliases=False, fatal=True, exc_list=None,
404+ dynamic_only=True):
405+ """Get assigned IPv6 address for a given interface.
406+
407+ Returns list of addresses found. If no address found, returns empty list.
408+
409+ If iface is None, we infer the current primary interface by doing a reverse
410+ lookup on the unit private-address.
411+
412+ We currently only support scope global IPv6 addresses i.e. non-temporary
413+ addresses. If no global IPv6 address is found, return the first one found
414+ in the ipv6 address list.
415+ """
416+ addresses = get_iface_addr(iface=iface, inet_type='AF_INET6',
417+ inc_aliases=inc_aliases, fatal=fatal,
418+ exc_list=exc_list)
419+
420+ if addresses:
421+ global_addrs = []
422+ for addr in addresses:
423+ key_scope_link_local = re.compile("^fe80::..(.+)%(.+)")
424+ m = re.match(key_scope_link_local, addr)
425+ if m:
426+ eui_64_mac = m.group(1)
427+ iface = m.group(2)
428+ else:
429+ global_addrs.append(addr)
430+
431+ if global_addrs:
432+ # Make sure any found global addresses are not temporary
433+ cmd = ['ip', 'addr', 'show', iface]
434+ out = subprocess.check_output(cmd).decode('UTF-8')
435+ if dynamic_only:
436+ key = re.compile("inet6 (.+)/[0-9]+ scope global dynamic.*")
437+ else:
438+ key = re.compile("inet6 (.+)/[0-9]+ scope global.*")
439+
440+ addrs = []
441+ for line in out.split('\n'):
442+ line = line.strip()
443+ m = re.match(key, line)
444+ if m and 'temporary' not in line:
445+ # Return the first valid address we find
446+ for addr in global_addrs:
447+ if m.group(1) == addr:
448+ if not dynamic_only or \
449+ m.group(1).endswith(eui_64_mac):
450+ addrs.append(addr)
451+
452+ if addrs:
453+ return addrs
454+
455+ if fatal:
456+ raise Exception("Interface '%s' does not have a scope global "
457+ "non-temporary ipv6 address." % iface)
458+
459+ return []
460+
461+
462+def get_bridges(vnic_dir='/sys/devices/virtual/net'):
463+ """Return a list of bridges on the system."""
464+ b_regex = "%s/*/bridge" % vnic_dir
465+ return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)]
466+
467+
468+def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'):
469+ """Return a list of nics comprising a given bridge on the system."""
470+ brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge)
471+ return [x.split('/')[-1] for x in glob.glob(brif_regex)]
472+
473+
474+def is_bridge_member(nic):
475+ """Check if a given nic is a member of a bridge."""
476+ for bridge in get_bridges():
477+ if nic in get_bridge_nics(bridge):
478+ return True
479+
480+ return False
481
482=== added directory 'hooks/charmhelpers/contrib/network/ovs'
483=== added file 'hooks/charmhelpers/contrib/network/ovs/__init__.py'
484--- hooks/charmhelpers/contrib/network/ovs/__init__.py 1970-01-01 00:00:00 +0000
485+++ hooks/charmhelpers/contrib/network/ovs/__init__.py 2014-12-09 22:04:35 +0000
486@@ -0,0 +1,80 @@
487+''' Helpers for interacting with OpenvSwitch '''
488+import subprocess
489+import os
490+from charmhelpers.core.hookenv import (
491+ log, WARNING
492+)
493+from charmhelpers.core.host import (
494+ service
495+)
496+
497+
498+def add_bridge(name):
499+ ''' Add the named bridge to openvswitch '''
500+ log('Creating bridge {}'.format(name))
501+ subprocess.check_call(["ovs-vsctl", "--", "--may-exist", "add-br", name])
502+
503+
504+def del_bridge(name):
505+ ''' Delete the named bridge from openvswitch '''
506+ log('Deleting bridge {}'.format(name))
507+ subprocess.check_call(["ovs-vsctl", "--", "--if-exists", "del-br", name])
508+
509+
510+def add_bridge_port(name, port, promisc=False):
511+ ''' Add a port to the named openvswitch bridge '''
512+ log('Adding port {} to bridge {}'.format(port, name))
513+ subprocess.check_call(["ovs-vsctl", "--", "--may-exist", "add-port",
514+ name, port])
515+ subprocess.check_call(["ip", "link", "set", port, "up"])
516+ if promisc:
517+ subprocess.check_call(["ip", "link", "set", port, "promisc", "on"])
518+ else:
519+ subprocess.check_call(["ip", "link", "set", port, "promisc", "off"])
520+
521+
522+def del_bridge_port(name, port):
523+ ''' Delete a port from the named openvswitch bridge '''
524+ log('Deleting port {} from bridge {}'.format(port, name))
525+ subprocess.check_call(["ovs-vsctl", "--", "--if-exists", "del-port",
526+ name, port])
527+ subprocess.check_call(["ip", "link", "set", port, "down"])
528+ subprocess.check_call(["ip", "link", "set", port, "promisc", "off"])
529+
530+
531+def set_manager(manager):
532+ ''' Set the controller for the local openvswitch '''
533+ log('Setting manager for local ovs to {}'.format(manager))
534+ subprocess.check_call(['ovs-vsctl', 'set-manager',
535+ 'ssl:{}'.format(manager)])
536+
537+
538+CERT_PATH = '/etc/openvswitch/ovsclient-cert.pem'
539+
540+
541+def get_certificate():
542+ ''' Read openvswitch certificate from disk '''
543+ if os.path.exists(CERT_PATH):
544+ log('Reading ovs certificate from {}'.format(CERT_PATH))
545+ with open(CERT_PATH, 'r') as cert:
546+ full_cert = cert.read()
547+ begin_marker = "-----BEGIN CERTIFICATE-----"
548+ end_marker = "-----END CERTIFICATE-----"
549+ begin_index = full_cert.find(begin_marker)
550+ end_index = full_cert.rfind(end_marker)
551+ if end_index == -1 or begin_index == -1:
552+ raise RuntimeError("Certificate does not contain valid begin"
553+ " and end markers.")
554+ full_cert = full_cert[begin_index:(end_index + len(end_marker))]
555+ return full_cert
556+ else:
557+ log('Certificate not found', level=WARNING)
558+ return None
559+
560+
561+def full_restart():
562+ ''' Full restart and reload of openvswitch '''
563+ if os.path.exists('/etc/init/openvswitch-force-reload-kmod.conf'):
564+ service('start', 'openvswitch-force-reload-kmod')
565+ else:
566+ service('force-reload-kmod', 'openvswitch-switch')
567
568=== added file 'hooks/charmhelpers/contrib/network/ufw.py'
569--- hooks/charmhelpers/contrib/network/ufw.py 1970-01-01 00:00:00 +0000
570+++ hooks/charmhelpers/contrib/network/ufw.py 2014-12-09 22:04:35 +0000
571@@ -0,0 +1,189 @@
572+"""
573+This module contains helpers to add and remove ufw rules.
574+
575+Examples:
576+
577+- open SSH port for subnet 10.0.3.0/24:
578+
579+ >>> from charmhelpers.contrib.network import ufw
580+ >>> ufw.enable()
581+ >>> ufw.grant_access(src='10.0.3.0/24', dst='any', port='22', proto='tcp')
582+
583+- open service by name as defined in /etc/services:
584+
585+ >>> from charmhelpers.contrib.network import ufw
586+ >>> ufw.enable()
587+ >>> ufw.service('ssh', 'open')
588+
589+- close service by port number:
590+
591+ >>> from charmhelpers.contrib.network import ufw
592+ >>> ufw.enable()
593+ >>> ufw.service('4949', 'close') # munin
594+"""
595+
596+__author__ = "Felipe Reyes <felipe.reyes@canonical.com>"
597+
598+import re
599+import os
600+import subprocess
601+from charmhelpers.core import hookenv
602+
603+
604+def is_enabled():
605+ """
606+ Check if `ufw` is enabled
607+
608+ :returns: True if ufw is enabled
609+ """
610+ output = subprocess.check_output(['ufw', 'status'],
611+ env={'LANG': 'en_US',
612+ 'PATH': os.environ['PATH']})
613+
614+ m = re.findall(r'^Status: active\n', output, re.M)
615+
616+ return len(m) >= 1
617+
618+
619+def enable():
620+ """
621+ Enable ufw
622+
623+ :returns: True if ufw is successfully enabled
624+ """
625+ if is_enabled():
626+ return True
627+
628+ output = subprocess.check_output(['ufw', 'enable'],
629+ env={'LANG': 'en_US',
630+ 'PATH': os.environ['PATH']})
631+
632+ m = re.findall('^Firewall is active and enabled on system startup\n',
633+ output, re.M)
634+ hookenv.log(output, level='DEBUG')
635+
636+ if len(m) == 0:
637+ hookenv.log("ufw couldn't be enabled", level='WARN')
638+ return False
639+ else:
640+ hookenv.log("ufw enabled", level='INFO')
641+ return True
642+
643+
644+def disable():
645+ """
646+ Disable ufw
647+
648+ :returns: True if ufw is successfully disabled
649+ """
650+ if not is_enabled():
651+ return True
652+
653+ output = subprocess.check_output(['ufw', 'disable'],
654+ env={'LANG': 'en_US',
655+ 'PATH': os.environ['PATH']})
656+
657+ m = re.findall(r'^Firewall stopped and disabled on system startup\n',
658+ output, re.M)
659+ hookenv.log(output, level='DEBUG')
660+
661+ if len(m) == 0:
662+ hookenv.log("ufw couldn't be disabled", level='WARN')
663+ return False
664+ else:
665+ hookenv.log("ufw disabled", level='INFO')
666+ return True
667+
668+
669+def modify_access(src, dst='any', port=None, proto=None, action='allow'):
670+ """
671+ Grant access to an address or subnet
672+
673+ :param src: address (e.g. 192.168.1.234) or subnet
674+ (e.g. 192.168.1.0/24).
675+ :param dst: destiny of the connection, if the machine has multiple IPs and
676+ connections to only one of those have to accepted this is the
677+ field has to be set.
678+ :param port: destiny port
679+ :param proto: protocol (tcp or udp)
680+ :param action: `allow` or `delete`
681+ """
682+ if not is_enabled():
683+ hookenv.log('ufw is disabled, skipping modify_access()', level='WARN')
684+ return
685+
686+ if action == 'delete':
687+ cmd = ['ufw', 'delete', 'allow']
688+ else:
689+ cmd = ['ufw', action]
690+
691+ if src is not None:
692+ cmd += ['from', src]
693+
694+ if dst is not None:
695+ cmd += ['to', dst]
696+
697+ if port is not None:
698+ cmd += ['port', port]
699+
700+ if proto is not None:
701+ cmd += ['proto', proto]
702+
703+ hookenv.log('ufw {}: {}'.format(action, ' '.join(cmd)), level='DEBUG')
704+ p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
705+ (stdout, stderr) = p.communicate()
706+
707+ hookenv.log(stdout, level='INFO')
708+
709+ if p.returncode != 0:
710+ hookenv.log(stderr, level='ERROR')
711+ hookenv.log('Error running: {}, exit code: {}'.format(' '.join(cmd),
712+ p.returncode),
713+ level='ERROR')
714+
715+
716+def grant_access(src, dst='any', port=None, proto=None):
717+ """
718+ Grant access to an address or subnet
719+
720+ :param src: address (e.g. 192.168.1.234) or subnet
721+ (e.g. 192.168.1.0/24).
722+ :param dst: destiny of the connection, if the machine has multiple IPs and
723+ connections to only one of those have to accepted this is the
724+ field has to be set.
725+ :param port: destiny port
726+ :param proto: protocol (tcp or udp)
727+ """
728+ return modify_access(src, dst=dst, port=port, proto=proto, action='allow')
729+
730+
731+def revoke_access(src, dst='any', port=None, proto=None):
732+ """
733+ Revoke access to an address or subnet
734+
735+ :param src: address (e.g. 192.168.1.234) or subnet
736+ (e.g. 192.168.1.0/24).
737+ :param dst: destiny of the connection, if the machine has multiple IPs and
738+ connections to only one of those have to accepted this is the
739+ field has to be set.
740+ :param port: destiny port
741+ :param proto: protocol (tcp or udp)
742+ """
743+ return modify_access(src, dst=dst, port=port, proto=proto, action='delete')
744+
745+
746+def service(name, action):
747+ """
748+ Open/close access to a service
749+
750+ :param name: could be a service name defined in `/etc/services` or a port
751+ number.
752+ :param action: `open` or `close`
753+ """
754+ if action == 'open':
755+ subprocess.check_output(['ufw', 'allow', name])
756+ elif action == 'close':
757+ subprocess.check_output(['ufw', 'delete', 'allow', name])
758+ else:
759+ raise Exception(("'{}' not supported, use 'allow' "
760+ "or 'delete'").format(action))
761
762=== added directory 'hooks/charmhelpers/core'
763=== added file 'hooks/charmhelpers/core/__init__.py'
764=== added file 'hooks/charmhelpers/core/fstab.py'
765--- hooks/charmhelpers/core/fstab.py 1970-01-01 00:00:00 +0000
766+++ hooks/charmhelpers/core/fstab.py 2014-12-09 22:04:35 +0000
767@@ -0,0 +1,118 @@
768+#!/usr/bin/env python
769+# -*- coding: utf-8 -*-
770+
771+__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
772+
773+import io
774+import os
775+
776+
777+class Fstab(io.FileIO):
778+ """This class extends file in order to implement a file reader/writer
779+ for file `/etc/fstab`
780+ """
781+
782+ class Entry(object):
783+ """Entry class represents a non-comment line on the `/etc/fstab` file
784+ """
785+ def __init__(self, device, mountpoint, filesystem,
786+ options, d=0, p=0):
787+ self.device = device
788+ self.mountpoint = mountpoint
789+ self.filesystem = filesystem
790+
791+ if not options:
792+ options = "defaults"
793+
794+ self.options = options
795+ self.d = int(d)
796+ self.p = int(p)
797+
798+ def __eq__(self, o):
799+ return str(self) == str(o)
800+
801+ def __str__(self):
802+ return "{} {} {} {} {} {}".format(self.device,
803+ self.mountpoint,
804+ self.filesystem,
805+ self.options,
806+ self.d,
807+ self.p)
808+
809+ DEFAULT_PATH = os.path.join(os.path.sep, 'etc', 'fstab')
810+
811+ def __init__(self, path=None):
812+ if path:
813+ self._path = path
814+ else:
815+ self._path = self.DEFAULT_PATH
816+ super(Fstab, self).__init__(self._path, 'rb+')
817+
818+ def _hydrate_entry(self, line):
819+ # NOTE: use split with no arguments to split on any
820+ # whitespace including tabs
821+ return Fstab.Entry(*filter(
822+ lambda x: x not in ('', None),
823+ line.strip("\n").split()))
824+
825+ @property
826+ def entries(self):
827+ self.seek(0)
828+ for line in self.readlines():
829+ line = line.decode('us-ascii')
830+ try:
831+ if line.strip() and not line.startswith("#"):
832+ yield self._hydrate_entry(line)
833+ except ValueError:
834+ pass
835+
836+ def get_entry_by_attr(self, attr, value):
837+ for entry in self.entries:
838+ e_attr = getattr(entry, attr)
839+ if e_attr == value:
840+ return entry
841+ return None
842+
843+ def add_entry(self, entry):
844+ if self.get_entry_by_attr('device', entry.device):
845+ return False
846+
847+ self.write((str(entry) + '\n').encode('us-ascii'))
848+ self.truncate()
849+ return entry
850+
851+ def remove_entry(self, entry):
852+ self.seek(0)
853+
854+ lines = [l.decode('us-ascii') for l in self.readlines()]
855+
856+ found = False
857+ for index, line in enumerate(lines):
858+ if not line.startswith("#"):
859+ if self._hydrate_entry(line) == entry:
860+ found = True
861+ break
862+
863+ if not found:
864+ return False
865+
866+ lines.remove(line)
867+
868+ self.seek(0)
869+ self.write(''.join(lines).encode('us-ascii'))
870+ self.truncate()
871+ return True
872+
873+ @classmethod
874+ def remove_by_mountpoint(cls, mountpoint, path=None):
875+ fstab = cls(path=path)
876+ entry = fstab.get_entry_by_attr('mountpoint', mountpoint)
877+ if entry:
878+ return fstab.remove_entry(entry)
879+ return False
880+
881+ @classmethod
882+ def add(cls, device, mountpoint, filesystem, options=None, path=None):
883+ return cls(path=path).add_entry(Fstab.Entry(device,
884+ mountpoint, filesystem,
885+ options=options))
886
887=== added file 'hooks/charmhelpers/core/hookenv.py'
888--- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000
889+++ hooks/charmhelpers/core/hookenv.py 2014-12-09 22:04:35 +0000
890@@ -0,0 +1,542 @@
891+"Interactions with the Juju environment"
892+# Copyright 2013 Canonical Ltd.
893+#
894+# Authors:
895+# Charm Helpers Developers <juju@lists.ubuntu.com>
896+
897+import os
898+import json
899+import yaml
900+import subprocess
901+import sys
902+from subprocess import CalledProcessError
903+
904+import six
905+if not six.PY3:
906+ from UserDict import UserDict
907+else:
908+ from collections import UserDict
909+
910+CRITICAL = "CRITICAL"
911+ERROR = "ERROR"
912+WARNING = "WARNING"
913+INFO = "INFO"
914+DEBUG = "DEBUG"
915+MARKER = object()
916+
917+cache = {}
918+
919+
920+def cached(func):
921+ """Cache return values for multiple executions of func + args
922+
923+ For example::
924+
925+ @cached
926+ def unit_get(attribute):
927+ pass
928+
929+ unit_get('test')
930+
931+ will cache the result of unit_get + 'test' for future calls.
932+ """
933+ def wrapper(*args, **kwargs):
934+ global cache
935+ key = str((func, args, kwargs))
936+ try:
937+ return cache[key]
938+ except KeyError:
939+ res = func(*args, **kwargs)
940+ cache[key] = res
941+ return res
942+ return wrapper
943+
944+
945+def flush(key):
946+    """Flush any entries from the function cache where the
947+    given key appears in the cached function+args key"""
948+ flush_list = []
949+ for item in cache:
950+ if key in item:
951+ flush_list.append(item)
952+ for item in flush_list:
953+ del cache[item]
954+
955+
956+def log(message, level=None):
957+ """Write a message to the juju log"""
958+ command = ['juju-log']
959+ if level:
960+ command += ['-l', level]
961+ if not isinstance(message, six.string_types):
962+ message = repr(message)
963+ command += [message]
964+ subprocess.call(command)
965+
966+
967+class Serializable(UserDict):
968+ """Wrapper, an object that can be serialized to yaml or json"""
969+
970+ def __init__(self, obj):
971+ # wrap the object
972+ UserDict.__init__(self)
973+ self.data = obj
974+
975+ def __getattr__(self, attr):
976+ # See if this object has attribute.
977+ if attr in ("json", "yaml", "data"):
978+ return self.__dict__[attr]
979+ # Check for attribute in wrapped object.
980+ got = getattr(self.data, attr, MARKER)
981+ if got is not MARKER:
982+ return got
983+ # Proxy to the wrapped object via dict interface.
984+ try:
985+ return self.data[attr]
986+ except KeyError:
987+ raise AttributeError(attr)
988+
989+ def __getstate__(self):
990+ # Pickle as a standard dictionary.
991+ return self.data
992+
993+ def __setstate__(self, state):
994+ # Unpickle into our wrapper.
995+ self.data = state
996+
997+ def json(self):
998+ """Serialize the object to json"""
999+ return json.dumps(self.data)
1000+
1001+ def yaml(self):
1002+ """Serialize the object to yaml"""
1003+ return yaml.dump(self.data)
1004+
1005+
1006+def execution_environment():
1007+ """A convenient bundling of the current execution context"""
1008+ context = {}
1009+ context['conf'] = config()
1010+ if relation_id():
1011+ context['reltype'] = relation_type()
1012+ context['relid'] = relation_id()
1013+ context['rel'] = relation_get()
1014+ context['unit'] = local_unit()
1015+ context['rels'] = relations()
1016+ context['env'] = os.environ
1017+ return context
1018+
1019+
1020+def in_relation_hook():
1021+ """Determine whether we're running in a relation hook"""
1022+ return 'JUJU_RELATION' in os.environ
1023+
1024+
1025+def relation_type():
1026+ """The scope for the current relation hook"""
1027+ return os.environ.get('JUJU_RELATION', None)
1028+
1029+
1030+def relation_id():
1031+ """The relation ID for the current relation hook"""
1032+ return os.environ.get('JUJU_RELATION_ID', None)
1033+
1034+
1035+def local_unit():
1036+ """Local unit ID"""
1037+ return os.environ['JUJU_UNIT_NAME']
1038+
1039+
1040+def remote_unit():
1041+ """The remote unit for the current relation hook"""
1042+ return os.environ['JUJU_REMOTE_UNIT']
1043+
1044+
1045+def service_name():
1046+    """The name of the service group this unit belongs to"""
1047+ return local_unit().split('/')[0]
1048+
1049+
1050+def hook_name():
1051+ """The name of the currently executing hook"""
1052+ return os.path.basename(sys.argv[0])
1053+
1054+
1055+class Config(dict):
1056+ """A dictionary representation of the charm's config.yaml, with some
1057+ extra features:
1058+
1059+ - See which values in the dictionary have changed since the previous hook.
1060+ - For values that have changed, see what the previous value was.
1061+ - Store arbitrary data for use in a later hook.
1062+
1063+ NOTE: Do not instantiate this object directly - instead call
1064+ ``hookenv.config()``, which will return an instance of :class:`Config`.
1065+
1066+ Example usage::
1067+
1068+ >>> # inside a hook
1069+ >>> from charmhelpers.core import hookenv
1070+ >>> config = hookenv.config()
1071+ >>> config['foo']
1072+ 'bar'
1073+ >>> # store a new key/value for later use
1074+ >>> config['mykey'] = 'myval'
1075+
1076+
1077+ >>> # user runs `juju set mycharm foo=baz`
1078+ >>> # now we're inside subsequent config-changed hook
1079+ >>> config = hookenv.config()
1080+ >>> config['foo']
1081+ 'baz'
1082+ >>> # test to see if this val has changed since last hook
1083+ >>> config.changed('foo')
1084+ True
1085+ >>> # what was the previous value?
1086+ >>> config.previous('foo')
1087+ 'bar'
1088+ >>> # keys/values that we add are preserved across hooks
1089+ >>> config['mykey']
1090+ 'myval'
1091+
1092+ """
1093+ CONFIG_FILE_NAME = '.juju-persistent-config'
1094+
1095+ def __init__(self, *args, **kw):
1096+ super(Config, self).__init__(*args, **kw)
1097+ self.implicit_save = True
1098+ self._prev_dict = None
1099+ self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
1100+ if os.path.exists(self.path):
1101+ self.load_previous()
1102+
1103+ def __getitem__(self, key):
1104+ """For regular dict lookups, check the current juju config first,
1105+ then the previous (saved) copy. This ensures that user-saved values
1106+ will be returned by a dict lookup.
1107+
1108+ """
1109+ try:
1110+ return dict.__getitem__(self, key)
1111+ except KeyError:
1112+ return (self._prev_dict or {})[key]
1113+
1114+ def keys(self):
1115+ prev_keys = []
1116+ if self._prev_dict is not None:
1117+            prev_keys = list(self._prev_dict.keys())
1118+ return list(set(prev_keys + list(dict.keys(self))))
1119+
1120+ def load_previous(self, path=None):
1121+ """Load previous copy of config from disk.
1122+
1123+ In normal usage you don't need to call this method directly - it
1124+ is called automatically at object initialization.
1125+
1126+ :param path:
1127+
1128+ File path from which to load the previous config. If `None`,
1129+ config is loaded from the default location. If `path` is
1130+ specified, subsequent `save()` calls will write to the same
1131+ path.
1132+
1133+ """
1134+ self.path = path or self.path
1135+ with open(self.path) as f:
1136+ self._prev_dict = json.load(f)
1137+
1138+ def changed(self, key):
1139+ """Return True if the current value for this key is different from
1140+ the previous value.
1141+
1142+ """
1143+ if self._prev_dict is None:
1144+ return True
1145+ return self.previous(key) != self.get(key)
1146+
1147+ def previous(self, key):
1148+ """Return previous value for this key, or None if there
1149+ is no previous value.
1150+
1151+ """
1152+ if self._prev_dict:
1153+ return self._prev_dict.get(key)
1154+ return None
1155+
1156+ def save(self):
1157+ """Save this config to disk.
1158+
1159+ If the charm is using the :mod:`Services Framework <services.base>`
1160+        or :meth:`@hook <Hooks.hook>` decorator, this
1161+ is called automatically at the end of successful hook execution.
1162+ Otherwise, it should be called directly by user code.
1163+
1164+ To disable automatic saves, set ``implicit_save=False`` on this
1165+ instance.
1166+
1167+ """
1168+ if self._prev_dict:
1169+ for k, v in six.iteritems(self._prev_dict):
1170+ if k not in self:
1171+ self[k] = v
1172+ with open(self.path, 'w') as f:
1173+ json.dump(self, f)
1174+
1175+
1176+@cached
1177+def config(scope=None):
1178+ """Juju charm configuration"""
1179+ config_cmd_line = ['config-get']
1180+ if scope is not None:
1181+ config_cmd_line.append(scope)
1182+ config_cmd_line.append('--format=json')
1183+ try:
1184+ config_data = json.loads(
1185+ subprocess.check_output(config_cmd_line).decode('UTF-8'))
1186+ if scope is not None:
1187+ return config_data
1188+ return Config(config_data)
1189+ except ValueError:
1190+ return None
1191+
1192+
1193+@cached
1194+def relation_get(attribute=None, unit=None, rid=None):
1195+ """Get relation information"""
1196+ _args = ['relation-get', '--format=json']
1197+ if rid:
1198+ _args.append('-r')
1199+ _args.append(rid)
1200+ _args.append(attribute or '-')
1201+ if unit:
1202+ _args.append(unit)
1203+ try:
1204+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
1205+ except ValueError:
1206+ return None
1207+ except CalledProcessError as e:
1208+ if e.returncode == 2:
1209+ return None
1210+ raise
1211+
1212+
1213+def relation_set(relation_id=None, relation_settings=None, **kwargs):
1214+ """Set relation information for the current unit"""
1215+ relation_settings = relation_settings if relation_settings else {}
1216+ relation_cmd_line = ['relation-set']
1217+ if relation_id is not None:
1218+ relation_cmd_line.extend(('-r', relation_id))
1219+ for k, v in (list(relation_settings.items()) + list(kwargs.items())):
1220+ if v is None:
1221+ relation_cmd_line.append('{}='.format(k))
1222+ else:
1223+ relation_cmd_line.append('{}={}'.format(k, v))
1224+ subprocess.check_call(relation_cmd_line)
1225+ # Flush cache of any relation-gets for local unit
1226+ flush(local_unit())
1227+
1228+
1229+@cached
1230+def relation_ids(reltype=None):
1231+ """A list of relation_ids"""
1232+ reltype = reltype or relation_type()
1233+ relid_cmd_line = ['relation-ids', '--format=json']
1234+ if reltype is not None:
1235+ relid_cmd_line.append(reltype)
1236+ return json.loads(
1237+ subprocess.check_output(relid_cmd_line).decode('UTF-8')) or []
1239+
1240+
1241+@cached
1242+def related_units(relid=None):
1243+ """A list of related units"""
1244+ relid = relid or relation_id()
1245+ units_cmd_line = ['relation-list', '--format=json']
1246+ if relid is not None:
1247+ units_cmd_line.extend(('-r', relid))
1248+ return json.loads(
1249+ subprocess.check_output(units_cmd_line).decode('UTF-8')) or []
1250+
1251+
1252+@cached
1253+def relation_for_unit(unit=None, rid=None):
1254+    """Get the JSON representation of a unit's relation"""
1255+ unit = unit or remote_unit()
1256+ relation = relation_get(unit=unit, rid=rid)
1257+ for key in relation:
1258+ if key.endswith('-list'):
1259+ relation[key] = relation[key].split()
1260+ relation['__unit__'] = unit
1261+ return relation
1262+
1263+
1264+@cached
1265+def relations_for_id(relid=None):
1266+ """Get relations of a specific relation ID"""
1267+ relation_data = []
1268+ relid = relid or relation_ids()
1269+ for unit in related_units(relid):
1270+ unit_data = relation_for_unit(unit, relid)
1271+ unit_data['__relid__'] = relid
1272+ relation_data.append(unit_data)
1273+ return relation_data
1274+
1275+
1276+@cached
1277+def relations_of_type(reltype=None):
1278+ """Get relations of a specific type"""
1279+ relation_data = []
1280+ reltype = reltype or relation_type()
1281+ for relid in relation_ids(reltype):
1282+ for relation in relations_for_id(relid):
1283+ relation['__relid__'] = relid
1284+ relation_data.append(relation)
1285+ return relation_data
1286+
1287+
1288+@cached
1289+def relation_types():
1290+ """Get a list of relation types supported by this charm"""
1291+ charmdir = os.environ.get('CHARM_DIR', '')
1292+ mdf = open(os.path.join(charmdir, 'metadata.yaml'))
1293+ md = yaml.safe_load(mdf)
1294+ rel_types = []
1295+ for key in ('provides', 'requires', 'peers'):
1296+ section = md.get(key)
1297+ if section:
1298+ rel_types.extend(section.keys())
1299+ mdf.close()
1300+ return rel_types
1301+
1302+
1303+@cached
1304+def relations():
1305+ """Get a nested dictionary of relation data for all related units"""
1306+ rels = {}
1307+ for reltype in relation_types():
1308+ relids = {}
1309+ for relid in relation_ids(reltype):
1310+ units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
1311+ for unit in related_units(relid):
1312+ reldata = relation_get(unit=unit, rid=relid)
1313+ units[unit] = reldata
1314+ relids[relid] = units
1315+ rels[reltype] = relids
1316+ return rels
1317+
1318+
1319+@cached
1320+def is_relation_made(relation, keys='private-address'):
1321+ '''
1322+ Determine whether a relation is established by checking for
1323+ presence of key(s). If a list of keys is provided, they
1324+ must all be present for the relation to be identified as made
1325+ '''
1326+ if isinstance(keys, str):
1327+ keys = [keys]
1328+ for r_id in relation_ids(relation):
1329+ for unit in related_units(r_id):
1330+ context = {}
1331+ for k in keys:
1332+ context[k] = relation_get(k, rid=r_id,
1333+ unit=unit)
1334+ if None not in context.values():
1335+ return True
1336+ return False
1337+
1338+
1339+def open_port(port, protocol="TCP"):
1340+ """Open a service network port"""
1341+ _args = ['open-port']
1342+ _args.append('{}/{}'.format(port, protocol))
1343+ subprocess.check_call(_args)
1344+
1345+
1346+def close_port(port, protocol="TCP"):
1347+ """Close a service network port"""
1348+ _args = ['close-port']
1349+ _args.append('{}/{}'.format(port, protocol))
1350+ subprocess.check_call(_args)
1351+
1352+
1353+@cached
1354+def unit_get(attribute):
1355+    """Get a named attribute (e.g. private-address) of this unit"""
1356+ _args = ['unit-get', '--format=json', attribute]
1357+ try:
1358+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
1359+ except ValueError:
1360+ return None
1361+
1362+
1363+def unit_private_ip():
1364+ """Get this unit's private IP address"""
1365+ return unit_get('private-address')
1366+
1367+
1368+class UnregisteredHookError(Exception):
1369+ """Raised when an undefined hook is called"""
1370+ pass
1371+
1372+
1373+class Hooks(object):
1374+ """A convenient handler for hook functions.
1375+
1376+ Example::
1377+
1378+ hooks = Hooks()
1379+
1380+ # register a hook, taking its name from the function name
1381+ @hooks.hook()
1382+ def install():
1383+ pass # your code here
1384+
1385+ # register a hook, providing a custom hook name
1386+ @hooks.hook("config-changed")
1387+ def config_changed():
1388+ pass # your code here
1389+
1390+ if __name__ == "__main__":
1391+ # execute a hook based on the name the program is called by
1392+ hooks.execute(sys.argv)
1393+ """
1394+
1395+ def __init__(self, config_save=True):
1396+ super(Hooks, self).__init__()
1397+ self._hooks = {}
1398+ self._config_save = config_save
1399+
1400+ def register(self, name, function):
1401+ """Register a hook"""
1402+ self._hooks[name] = function
1403+
1404+ def execute(self, args):
1405+ """Execute a registered hook based on args[0]"""
1406+ hook_name = os.path.basename(args[0])
1407+ if hook_name in self._hooks:
1408+ self._hooks[hook_name]()
1409+ if self._config_save:
1410+ cfg = config()
1411+ if cfg.implicit_save:
1412+ cfg.save()
1413+ else:
1414+ raise UnregisteredHookError(hook_name)
1415+
1416+ def hook(self, *hook_names):
1417+        """Decorator, registering the decorated function under each given hook name (and its own name)"""
1418+ def wrapper(decorated):
1419+ for hook_name in hook_names:
1420+ self.register(hook_name, decorated)
1421+ else:
1422+ self.register(decorated.__name__, decorated)
1423+ if '_' in decorated.__name__:
1424+ self.register(
1425+ decorated.__name__.replace('_', '-'), decorated)
1426+ return decorated
1427+ return wrapper
1428+
1429+
1430+def charm_dir():
1431+ """Return the root directory of the current charm"""
1432+ return os.environ.get('CHARM_DIR')
1433
1434=== added file 'hooks/charmhelpers/core/host.py'
1435--- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000
1436+++ hooks/charmhelpers/core/host.py 2014-12-09 22:04:35 +0000
1437@@ -0,0 +1,416 @@
1438+"""Tools for working with the host system"""
1439+# Copyright 2012 Canonical Ltd.
1440+#
1441+# Authors:
1442+# Nick Moffitt <nick.moffitt@canonical.com>
1443+# Matthew Wedgwood <matthew.wedgwood@canonical.com>
1444+
1445+import os
1446+import re
1447+import pwd
1448+import grp
1449+import random
1450+import string
1451+import subprocess
1452+import hashlib
1453+from contextlib import contextmanager
1454+from collections import OrderedDict
1455+
1456+import six
1457+
1458+from .hookenv import log
1459+from .fstab import Fstab
1460+
1461+
1462+def service_start(service_name):
1463+ """Start a system service"""
1464+ return service('start', service_name)
1465+
1466+
1467+def service_stop(service_name):
1468+ """Stop a system service"""
1469+ return service('stop', service_name)
1470+
1471+
1472+def service_restart(service_name):
1473+ """Restart a system service"""
1474+ return service('restart', service_name)
1475+
1476+
1477+def service_reload(service_name, restart_on_failure=False):
1478+ """Reload a system service, optionally falling back to restart if
1479+ reload fails"""
1480+ service_result = service('reload', service_name)
1481+ if not service_result and restart_on_failure:
1482+ service_result = service('restart', service_name)
1483+ return service_result
1484+
1485+
1486+def service(action, service_name):
1487+ """Control a system service"""
1488+ cmd = ['service', service_name, action]
1489+ return subprocess.call(cmd) == 0
1490+
1491+
1492+def service_running(service_name):
1493+    """Determine whether a system service is running"""
1494+    try:
1495+        output = subprocess.check_output(
1496+            ['service', service_name, 'status'],
1497+ stderr=subprocess.STDOUT).decode('UTF-8')
1498+ except subprocess.CalledProcessError:
1499+ return False
1500+ else:
1501+ if ("start/running" in output or "is running" in output):
1502+ return True
1503+ else:
1504+ return False
1505+
1506+
1507+def service_available(service_name):
1508+ """Determine whether a system service is available"""
1509+ try:
1510+ subprocess.check_output(
1511+ ['service', service_name, 'status'],
1512+ stderr=subprocess.STDOUT).decode('UTF-8')
1513+ except subprocess.CalledProcessError as e:
1514+        return b'unrecognized service' not in e.output
1515+ else:
1516+ return True
1517+
1518+
1519+def adduser(username, password=None, shell='/bin/bash', system_user=False):
1520+ """Add a user to the system"""
1521+ try:
1522+ user_info = pwd.getpwnam(username)
1523+ log('user {0} already exists!'.format(username))
1524+ except KeyError:
1525+ log('creating user {0}'.format(username))
1526+ cmd = ['useradd']
1527+ if system_user or password is None:
1528+ cmd.append('--system')
1529+ else:
1530+ cmd.extend([
1531+ '--create-home',
1532+ '--shell', shell,
1533+ '--password', password,
1534+ ])
1535+ cmd.append(username)
1536+ subprocess.check_call(cmd)
1537+ user_info = pwd.getpwnam(username)
1538+ return user_info
1539+
1540+
1541+def add_group(group_name, system_group=False):
1542+ """Add a group to the system"""
1543+ try:
1544+ group_info = grp.getgrnam(group_name)
1545+ log('group {0} already exists!'.format(group_name))
1546+ except KeyError:
1547+ log('creating group {0}'.format(group_name))
1548+ cmd = ['addgroup']
1549+ if system_group:
1550+ cmd.append('--system')
1551+ else:
1552+ cmd.extend([
1553+ '--group',
1554+ ])
1555+ cmd.append(group_name)
1556+ subprocess.check_call(cmd)
1557+ group_info = grp.getgrnam(group_name)
1558+ return group_info
1559+
1560+
1561+def add_user_to_group(username, group):
1562+ """Add a user to a group"""
1563+ cmd = [
1564+ 'gpasswd', '-a',
1565+ username,
1566+ group
1567+ ]
1568+ log("Adding user {} to group {}".format(username, group))
1569+ subprocess.check_call(cmd)
1570+
1571+
1572+def rsync(from_path, to_path, flags='-r', options=None):
1573+ """Replicate the contents of a path"""
1574+ options = options or ['--delete', '--executability']
1575+ cmd = ['/usr/bin/rsync', flags]
1576+ cmd.extend(options)
1577+ cmd.append(from_path)
1578+ cmd.append(to_path)
1579+ log(" ".join(cmd))
1580+ return subprocess.check_output(cmd).decode('UTF-8').strip()
1581+
1582+
1583+def symlink(source, destination):
1584+ """Create a symbolic link"""
1585+ log("Symlinking {} as {}".format(source, destination))
1586+ cmd = [
1587+ 'ln',
1588+ '-sf',
1589+ source,
1590+ destination,
1591+ ]
1592+ subprocess.check_call(cmd)
1593+
1594+
1595+def mkdir(path, owner='root', group='root', perms=0o555, force=False):
1596+ """Create a directory"""
1597+ log("Making dir {} {}:{} {:o}".format(path, owner, group,
1598+ perms))
1599+ uid = pwd.getpwnam(owner).pw_uid
1600+ gid = grp.getgrnam(group).gr_gid
1601+ realpath = os.path.abspath(path)
1602+ if os.path.exists(realpath):
1603+ if force and not os.path.isdir(realpath):
1604+ log("Removing non-directory file {} prior to mkdir()".format(path))
1605+ os.unlink(realpath)
1606+ else:
1607+ os.makedirs(realpath, perms)
1608+ os.chown(realpath, uid, gid)
1609+
1610+
1611+def write_file(path, content, owner='root', group='root', perms=0o444):
1612+ """Create or overwrite a file with the contents of a string"""
1613+ log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
1614+ uid = pwd.getpwnam(owner).pw_uid
1615+ gid = grp.getgrnam(group).gr_gid
1616+ with open(path, 'w') as target:
1617+ os.fchown(target.fileno(), uid, gid)
1618+ os.fchmod(target.fileno(), perms)
1619+ target.write(content)
1620+
1621+
1622+def fstab_remove(mp):
1623+ """Remove the given mountpoint entry from /etc/fstab
1624+ """
1625+ return Fstab.remove_by_mountpoint(mp)
1626+
1627+
1628+def fstab_add(dev, mp, fs, options=None):
1629+ """Adds the given device entry to the /etc/fstab file
1630+ """
1631+ return Fstab.add(dev, mp, fs, options=options)
1632+
1633+
1634+def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"):
1635+ """Mount a filesystem at a particular mountpoint"""
1636+ cmd_args = ['mount']
1637+ if options is not None:
1638+ cmd_args.extend(['-o', options])
1639+ cmd_args.extend([device, mountpoint])
1640+ try:
1641+ subprocess.check_output(cmd_args)
1642+ except subprocess.CalledProcessError as e:
1643+ log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
1644+ return False
1645+
1646+ if persist:
1647+ return fstab_add(device, mountpoint, filesystem, options=options)
1648+ return True
1649+
1650+
1651+def umount(mountpoint, persist=False):
1652+ """Unmount a filesystem"""
1653+ cmd_args = ['umount', mountpoint]
1654+ try:
1655+ subprocess.check_output(cmd_args)
1656+ except subprocess.CalledProcessError as e:
1657+ log('Error unmounting {}\n{}'.format(mountpoint, e.output))
1658+ return False
1659+
1660+ if persist:
1661+ return fstab_remove(mountpoint)
1662+ return True
1663+
1664+
1665+def mounts():
1666+ """Get a list of all mounted volumes as [[mountpoint,device],[...]]"""
1667+ with open('/proc/mounts') as f:
1668+ # [['/mount/point','/dev/path'],[...]]
1669+ system_mounts = [m[1::-1] for m in [l.strip().split()
1670+ for l in f.readlines()]]
1671+ return system_mounts
1672+
1673+
1674+def file_hash(path, hash_type='md5'):
1675+ """
1676+ Generate a hash checksum of the contents of 'path' or None if not found.
1677+
1678+    :param str hash_type: Any hash algorithm supported by :mod:`hashlib`,
1679+ such as md5, sha1, sha256, sha512, etc.
1680+ """
1681+ if os.path.exists(path):
1682+ h = getattr(hashlib, hash_type)()
1683+ with open(path, 'rb') as source:
1684+ h.update(source.read())
1685+ return h.hexdigest()
1686+ else:
1687+ return None
1688+
1689+
1690+def check_hash(path, checksum, hash_type='md5'):
1691+ """
1692+ Validate a file using a cryptographic checksum.
1693+
1694+ :param str checksum: Value of the checksum used to validate the file.
1695+ :param str hash_type: Hash algorithm used to generate `checksum`.
1696+        Can be any hash algorithm supported by :mod:`hashlib`,
1697+ such as md5, sha1, sha256, sha512, etc.
1698+ :raises ChecksumError: If the file fails the checksum
1699+
1700+ """
1701+ actual_checksum = file_hash(path, hash_type)
1702+ if checksum != actual_checksum:
1703+ raise ChecksumError("'%s' != '%s'" % (checksum, actual_checksum))
1704+
1705+
1706+class ChecksumError(ValueError):
1707+ pass
1708+
1709+
1710+def restart_on_change(restart_map, stopstart=False):
1711+ """Restart services based on configuration files changing
1712+
1713+    This function is used as a decorator, for example::
1714+
1715+ @restart_on_change({
1716+ '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
1717+ })
1718+ def ceph_client_changed():
1719+ pass # your code here
1720+
1721+ In this example, the cinder-api and cinder-volume services
1722+ would be restarted if /etc/ceph/ceph.conf is changed by the
1723+ ceph_client_changed function.
1724+ """
1725+ def wrap(f):
1726+ def wrapped_f(*args):
1727+ checksums = {}
1728+ for path in restart_map:
1729+ checksums[path] = file_hash(path)
1730+ f(*args)
1731+ restarts = []
1732+ for path in restart_map:
1733+ if checksums[path] != file_hash(path):
1734+ restarts += restart_map[path]
1735+ services_list = list(OrderedDict.fromkeys(restarts))
1736+ if not stopstart:
1737+ for service_name in services_list:
1738+ service('restart', service_name)
1739+ else:
1740+ for action in ['stop', 'start']:
1741+ for service_name in services_list:
1742+ service(action, service_name)
1743+ return wrapped_f
1744+ return wrap
1745+
1746+
1747+def lsb_release():
1748+ """Return /etc/lsb-release in a dict"""
1749+ d = {}
1750+ with open('/etc/lsb-release', 'r') as lsb:
1751+ for l in lsb:
1752+ k, v = l.split('=')
1753+ d[k.strip()] = v.strip()
1754+ return d
1755+
1756+
1757+def pwgen(length=None):
1758+    """Generate a random password."""
1759+ if length is None:
1760+ length = random.choice(range(35, 45))
1761+ alphanumeric_chars = [
1762+ l for l in (string.ascii_letters + string.digits)
1763+ if l not in 'l0QD1vAEIOUaeiou']
1764+ random_chars = [
1765+ random.choice(alphanumeric_chars) for _ in range(length)]
1766+ return(''.join(random_chars))
1767+
1768+
1769+def list_nics(nic_type):
1770+ '''Return a list of nics of given type(s)'''
1771+ if isinstance(nic_type, six.string_types):
1772+ int_types = [nic_type]
1773+ else:
1774+ int_types = nic_type
1775+ interfaces = []
1776+ for int_type in int_types:
1777+ cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
1778+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
1779+ ip_output = (line for line in ip_output if line)
1780+ for line in ip_output:
1781+ if line.split()[1].startswith(int_type):
1782+                matched = re.search(r'.*: (bond[0-9]+\.[0-9]+)@.*', line)
1783+ if matched:
1784+ interface = matched.groups()[0]
1785+ else:
1786+ interface = line.split()[1].replace(":", "")
1787+ interfaces.append(interface)
1788+
1789+ return interfaces
1790+
1791+
1792+def set_nic_mtu(nic, mtu):
1793+ '''Set MTU on a network interface'''
1794+ cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
1795+ subprocess.check_call(cmd)
1796+
1797+
1798+def get_nic_mtu(nic):
1799+ cmd = ['ip', 'addr', 'show', nic]
1800+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
1801+ mtu = ""
1802+ for line in ip_output:
1803+ words = line.split()
1804+ if 'mtu' in words:
1805+ mtu = words[words.index("mtu") + 1]
1806+ return mtu
1807+
1808+
1809+def get_nic_hwaddr(nic):
1810+ cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
1811+ ip_output = subprocess.check_output(cmd).decode('UTF-8')
1812+ hwaddr = ""
1813+ words = ip_output.split()
1814+ if 'link/ether' in words:
1815+ hwaddr = words[words.index('link/ether') + 1]
1816+ return hwaddr
1817+
1818+
1819+def cmp_pkgrevno(package, revno, pkgcache=None):
1820+ '''Compare supplied revno with the revno of the installed package
1821+
1822+ * 1 => Installed revno is greater than supplied arg
1823+ * 0 => Installed revno is the same as supplied arg
1824+ * -1 => Installed revno is less than supplied arg
1825+
1826+ '''
1827+ import apt_pkg
1828+ if not pkgcache:
1829+ from charmhelpers.fetch import apt_cache
1830+ pkgcache = apt_cache()
1831+ pkg = pkgcache[package]
1832+ return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
1833+
1834+
1835+@contextmanager
1836+def chdir(d):
1837+ cur = os.getcwd()
1838+ try:
1839+ yield os.chdir(d)
1840+ finally:
1841+ os.chdir(cur)
1842+
1843+
1844+def chownr(path, owner, group):
1845+ uid = pwd.getpwnam(owner).pw_uid
1846+ gid = grp.getgrnam(group).gr_gid
1847+
1848+ for root, dirs, files in os.walk(path):
1849+ for name in dirs + files:
1850+ full = os.path.join(root, name)
1851+ broken_symlink = os.path.lexists(full) and not os.path.exists(full)
1852+ if not broken_symlink:
1853+ os.chown(full, uid, gid)
1854
1855=== added directory 'hooks/charmhelpers/core/services'
1856=== added file 'hooks/charmhelpers/core/services/__init__.py'
1857--- hooks/charmhelpers/core/services/__init__.py 1970-01-01 00:00:00 +0000
1858+++ hooks/charmhelpers/core/services/__init__.py 2014-12-09 22:04:35 +0000
1859@@ -0,0 +1,2 @@
1860+from .base import * # NOQA
1861+from .helpers import * # NOQA
1862
1863=== added file 'hooks/charmhelpers/core/services/base.py'
1864--- hooks/charmhelpers/core/services/base.py 1970-01-01 00:00:00 +0000
1865+++ hooks/charmhelpers/core/services/base.py 2014-12-09 22:04:35 +0000
1866@@ -0,0 +1,313 @@
1867+import os
1868+import re
1869+import json
1870+from collections import Iterable
1871+
1872+from charmhelpers.core import host
1873+from charmhelpers.core import hookenv
1874+
1875+
1876+__all__ = ['ServiceManager', 'ManagerCallback',
1877+ 'PortManagerCallback', 'open_ports', 'close_ports', 'manage_ports',
1878+ 'service_restart', 'service_stop']
1879+
1880+
1881+class ServiceManager(object):
1882+ def __init__(self, services=None):
1883+ """
1884+ Register a list of services, given their definitions.
1885+
1886+ Service definitions are dicts in the following formats (all keys except
1887+ 'service' are optional)::
1888+
1889+ {
1890+ "service": <service name>,
1891+ "required_data": <list of required data contexts>,
1892+ "provided_data": <list of provided data contexts>,
1893+ "data_ready": <one or more callbacks>,
1894+ "data_lost": <one or more callbacks>,
1895+ "start": <one or more callbacks>,
1896+ "stop": <one or more callbacks>,
1897+ "ports": <list of ports to manage>,
1898+ }
1899+
1900+ The 'required_data' list should contain dicts of required data (or
1901+ dependency managers that act like dicts and know how to collect the data).
1902+ Only when all items in the 'required_data' list are populated are the list
1903+ of 'data_ready' and 'start' callbacks executed. See `is_ready()` for more
1904+ information.
1905+
1906+ The 'provided_data' list should contain relation data providers, most likely
1907+ a subclass of :class:`charmhelpers.core.services.helpers.RelationContext`,
1908+ that will indicate a set of data to set on a given relation.
1909+
1910+ The 'data_ready' value should be either a single callback, or a list of
1911+ callbacks, to be called when all items in 'required_data' pass `is_ready()`.
1912+ Each callback will be called with the service name as the only parameter.
1913+ After all of the 'data_ready' callbacks are called, the 'start' callbacks
1914+ are fired.
1915+
1916+ The 'data_lost' value should be either a single callback, or a list of
1917+ callbacks, to be called when a 'required_data' item no longer passes
1918+ `is_ready()`. Each callback will be called with the service name as the
1919+ only parameter. After all of the 'data_lost' callbacks are called,
1920+ the 'stop' callbacks are fired.
1921+
1922+ The 'start' value should be either a single callback, or a list of
1923+ callbacks, to be called when starting the service, after the 'data_ready'
1924+ callbacks are complete. Each callback will be called with the service
1925+ name as the only parameter. This defaults to
1926+ `[host.service_start, services.open_ports]`.
1927+
1928+ The 'stop' value should be either a single callback, or a list of
1929+ callbacks, to be called when stopping the service. If the service is
1930+ being stopped because it no longer has all of its 'required_data', this
1931+ will be called after all of the 'data_lost' callbacks are complete.
1932+ Each callback will be called with the service name as the only parameter.
1933+ This defaults to `[services.close_ports, host.service_stop]`.
1934+
1935+ The 'ports' value should be a list of ports to manage. The default
1936+ 'start' handler will open the ports after the service is started,
1937+ and the default 'stop' handler will close the ports prior to stopping
1938+ the service.
1939+
1940+
1941+ Examples:
1942+
1943+ The following registers an Upstart service called bingod that depends on
1944+ a mongodb relation and which runs a custom `db_migrate` function prior to
1945+ restarting the service, and a Runit service called spadesd::
1946+
1947+ manager = services.ServiceManager([
1948+ {
1949+ 'service': 'bingod',
1950+ 'ports': [80, 443],
1951+ 'required_data': [MongoRelation(), config(), {'my': 'data'}],
1952+ 'data_ready': [
1953+ services.template(source='bingod.conf'),
1954+ services.template(source='bingod.ini',
1955+ target='/etc/bingod.ini',
1956+ owner='bingo', perms=0400),
1957+ ],
1958+ },
1959+ {
1960+ 'service': 'spadesd',
1961+ 'data_ready': services.template(source='spadesd_run.j2',
1962+ target='/etc/sv/spadesd/run',
1963+ perms=0555),
1964+ 'start': runit_start,
1965+ 'stop': runit_stop,
1966+ },
1967+ ])
1968+ manager.manage()
1969+ """
1970+ self._ready_file = os.path.join(hookenv.charm_dir(), 'READY-SERVICES.json')
1971+ self._ready = None
1972+ self.services = {}
1973+ for service in services or []:
1974+ service_name = service['service']
1975+ self.services[service_name] = service
1976+
1977+ def manage(self):
1978+ """
1979+ Handle the current hook by doing The Right Thing with the registered services.
1980+ """
1981+ hook_name = hookenv.hook_name()
1982+ if hook_name == 'stop':
1983+ self.stop_services()
1984+ else:
1985+ self.provide_data()
1986+ self.reconfigure_services()
1987+ cfg = hookenv.config()
1988+ if cfg.implicit_save:
1989+ cfg.save()
1990+
1991+ def provide_data(self):
1992+ """
1993+ Set the relation data for each provider in the ``provided_data`` list.
1994+
1995+ A provider must have a `name` attribute, which indicates which relation
1996+ to set data on, and a `provide_data()` method, which returns a dict of
1997+ data to set.
1998+ """
1999+ hook_name = hookenv.hook_name()
2000+ for service in self.services.values():
2001+ for provider in service.get('provided_data', []):
2002+ if re.match(r'{}-relation-(joined|changed)'.format(provider.name), hook_name):
2003+ data = provider.provide_data()
2004+ _ready = provider._is_ready(data) if hasattr(provider, '_is_ready') else data
2005+ if _ready:
2006+ hookenv.relation_set(None, data)
2007+
2008+ def reconfigure_services(self, *service_names):
2009+ """
2010+ Update all files for one or more registered services, and,
2011+ if ready, optionally restart them.
2012+
2013+ If no service names are given, reconfigures all registered services.
2014+ """
2015+ for service_name in service_names or self.services.keys():
2016+ if self.is_ready(service_name):
2017+ self.fire_event('data_ready', service_name)
2018+ self.fire_event('start', service_name, default=[
2019+ service_restart,
2020+ manage_ports])
2021+ self.save_ready(service_name)
2022+ else:
2023+ if self.was_ready(service_name):
2024+ self.fire_event('data_lost', service_name)
2025+ self.fire_event('stop', service_name, default=[
2026+ manage_ports,
2027+ service_stop])
2028+ self.save_lost(service_name)
2029+
2030+ def stop_services(self, *service_names):
2031+ """
2032+ Stop one or more registered services, by name.
2033+
2034+ If no service names are given, stops all registered services.
2035+ """
2036+ for service_name in service_names or self.services.keys():
2037+ self.fire_event('stop', service_name, default=[
2038+ manage_ports,
2039+ service_stop])
2040+
2041+ def get_service(self, service_name):
2042+ """
2043+ Given the name of a registered service, return its service definition.
2044+ """
2045+ service = self.services.get(service_name)
2046+ if not service:
2047+ raise KeyError('Service not registered: %s' % service_name)
2048+ return service
2049+
2050+ def fire_event(self, event_name, service_name, default=None):
2051+ """
2052+ Fire a data_ready, data_lost, start, or stop event on a given service.
2053+ """
2054+ service = self.get_service(service_name)
2055+ callbacks = service.get(event_name, default)
2056+ if not callbacks:
2057+ return
2058+ if not isinstance(callbacks, Iterable):
2059+ callbacks = [callbacks]
2060+ for callback in callbacks:
2061+ if isinstance(callback, ManagerCallback):
2062+ callback(self, service_name, event_name)
2063+ else:
2064+ callback(service_name)
2065+
2066+ def is_ready(self, service_name):
2067+ """
2068+ Determine if a registered service is ready, by checking its 'required_data'.
2069+
2070+ A 'required_data' item can be any mapping type, and is considered ready
2071+ if `bool(item)` evaluates as True.
2072+ """
2073+ service = self.get_service(service_name)
2074+ reqs = service.get('required_data', [])
2075+ return all(bool(req) for req in reqs)
2076+
2077+ def _load_ready_file(self):
2078+ if self._ready is not None:
2079+ return
2080+ if os.path.exists(self._ready_file):
2081+ with open(self._ready_file) as fp:
2082+ self._ready = set(json.load(fp))
2083+ else:
2084+ self._ready = set()
2085+
2086+ def _save_ready_file(self):
2087+ if self._ready is None:
2088+ return
2089+ with open(self._ready_file, 'w') as fp:
2090+ json.dump(list(self._ready), fp)
2091+
2092+ def save_ready(self, service_name):
2093+ """
2094+ Save an indicator that the given service is now data_ready.
2095+ """
2096+ self._load_ready_file()
2097+ self._ready.add(service_name)
2098+ self._save_ready_file()
2099+
2100+ def save_lost(self, service_name):
2101+ """
2102+ Save an indicator that the given service is no longer data_ready.
2103+ """
2104+ self._load_ready_file()
2105+ self._ready.discard(service_name)
2106+ self._save_ready_file()
2107+
2108+ def was_ready(self, service_name):
2109+ """
2110+ Determine if the given service was previously data_ready.
2111+ """
2112+ self._load_ready_file()
2113+ return service_name in self._ready
2114+
2115+
2116+class ManagerCallback(object):
2117+ """
2118+ Special case of a callback that takes the `ServiceManager` instance
2119+ in addition to the service name.
2120+
2121+ Subclasses should implement `__call__` which should accept three parameters:
2122+
2123+ * `manager` The `ServiceManager` instance
2124+ * `service_name` The name of the service it's being triggered for
2125+ * `event_name` The name of the event that this callback is handling
2126+ """
2127+ def __call__(self, manager, service_name, event_name):
2128+ raise NotImplementedError()
2129+
2130+
2131+class PortManagerCallback(ManagerCallback):
2132+ """
2133+ Callback class that will open or close ports, for use as either
2134+ a start or stop action.
2135+ """
2136+ def __call__(self, manager, service_name, event_name):
2137+ service = manager.get_service(service_name)
2138+ new_ports = service.get('ports', [])
2139+ port_file = os.path.join(hookenv.charm_dir(), '.{}.ports'.format(service_name))
2140+ if os.path.exists(port_file):
2141+ with open(port_file) as fp:
2142+ old_ports = fp.read().split(',')
2143+ for old_port in old_ports:
2144+ if bool(old_port):
2145+ old_port = int(old_port)
2146+ if old_port not in new_ports:
2147+ hookenv.close_port(old_port)
2148+ with open(port_file, 'w') as fp:
2149+ fp.write(','.join(str(port) for port in new_ports))
2150+ for port in new_ports:
2151+ if event_name == 'start':
2152+ hookenv.open_port(port)
2153+ elif event_name == 'stop':
2154+ hookenv.close_port(port)
2155+
2156+
2157+def service_stop(service_name):
2158+ """
2159+ Wrapper around host.service_stop to prevent spurious "unknown service"
2160+ messages in the logs.
2161+ """
2162+ if host.service_running(service_name):
2163+ host.service_stop(service_name)
2164+
2165+
2166+def service_restart(service_name):
2167+ """
2168+ Wrapper around host.service_restart to prevent spurious "unknown service"
2169+ messages in the logs.
2170+ """
2171+ if host.service_available(service_name):
2172+ if host.service_running(service_name):
2173+ host.service_restart(service_name)
2174+ else:
2175+ host.service_start(service_name)
2176+
2177+
2178+# Convenience aliases
2179+open_ports = close_ports = manage_ports = PortManagerCallback()
2180
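For reviewers skimming the vendored services framework above: the gating in `reconfigure_services()` reduces to the truthiness test in `is_ready()`. A minimal standalone sketch of that check (service and context names are illustrative, not part of this charm):

```python
# Standalone sketch of ServiceManager.is_ready(): a service may start only
# when every item in its 'required_data' list is truthy (i.e. populated).
def is_ready(service_definition):
    reqs = service_definition.get('required_data', [])
    return all(bool(req) for req in reqs)

svc = {'service': 'memcached',
       'required_data': [{'cache': 'configured'}, {}]}  # second context empty
before = is_ready(svc)   # False: the empty dict is falsy

svc['required_data'][1]['munin'] = 'related'
after = is_ready(svc)    # True: all contexts are now populated
print(before, after)
```

This is why an empty `RelationContext` (no complete units yet) keeps a service stopped: the dict subclass is falsy until `get_data()` stores at least one complete unit's data.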
2181=== added file 'hooks/charmhelpers/core/services/helpers.py'
2182--- hooks/charmhelpers/core/services/helpers.py 1970-01-01 00:00:00 +0000
2183+++ hooks/charmhelpers/core/services/helpers.py 2014-12-09 22:04:35 +0000
2184@@ -0,0 +1,243 @@
2185+import os
2186+import yaml
2187+from charmhelpers.core import hookenv
2188+from charmhelpers.core import templating
2189+
2190+from charmhelpers.core.services.base import ManagerCallback
2191+
2192+
2193+__all__ = ['RelationContext', 'TemplateCallback',
2194+ 'render_template', 'template']
2195+
2196+
2197+class RelationContext(dict):
2198+ """
2199+ Base class for a context generator that gets relation data from juju.
2200+
2201+ Subclasses must provide the attributes `name`, which is the name of the
2202+ interface of interest, `interface`, which is the type of the interface of
2203+ interest, and `required_keys`, which is the set of keys required for the
2204+ relation to be considered complete. The data for all interfaces matching
2205+ the `name` attribute that are complete will be used to populate the dictionary
2206+ values (see `get_data`, below).
2207+
2208+ The generated context will be namespaced under the relation :attr:`name`,
2209+ to prevent potential naming conflicts.
2210+
2211+ :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
2212+ :param list additional_required_keys: Extend the list of :attr:`required_keys`
2213+ """
2214+ name = None
2215+ interface = None
2216+ required_keys = []
2217+
2218+ def __init__(self, name=None, additional_required_keys=None):
2219+ if name is not None:
2220+ self.name = name
2221+ if additional_required_keys is not None:
2222+ self.required_keys.extend(additional_required_keys)
2223+ self.get_data()
2224+
2225+ def __bool__(self):
2226+ """
2227+ Returns True if all of the required_keys are available.
2228+ """
2229+ return self.is_ready()
2230+
2231+ __nonzero__ = __bool__
2232+
2233+ def __repr__(self):
2234+ return super(RelationContext, self).__repr__()
2235+
2236+ def is_ready(self):
2237+ """
2238+ Returns True if all of the `required_keys` are available from any units.
2239+ """
2240+ ready = len(self.get(self.name, [])) > 0
2241+ if not ready:
2242+ hookenv.log('Incomplete relation: {}'.format(self.__class__.__name__), hookenv.DEBUG)
2243+ return ready
2244+
2245+ def _is_ready(self, unit_data):
2246+ """
2247+ Helper method that tests a set of relation data and returns True if
2248+ all of the `required_keys` are present.
2249+ """
2250+ return set(unit_data.keys()).issuperset(set(self.required_keys))
2251+
2252+ def get_data(self):
2253+ """
2254+ Retrieve the relation data for each unit involved in a relation and,
2255+ if complete, store it in a list under `self[self.name]`. This
2256+ is automatically called when the RelationContext is instantiated.
2257+
2258+ The units are sorted lexicographically first by the service ID, then by
2259+ the unit ID. Thus, if an interface has two other services, 'db:1'
2260+ and 'db:2', with 'db:1' having two units, 'wordpress/0' and 'wordpress/1',
2261+ and 'db:2' having one unit, 'mediawiki/0', all of which have a complete
2262+ set of data, the relation data for the units will be stored in the
2263+ order: 'wordpress/0', 'wordpress/1', 'mediawiki/0'.
2264+
2265+ If you only care about a single unit on the relation, you can just
2266+ access it as `{{ interface[0]['key'] }}`. However, if you can at all
2267+ support multiple units on a relation, you should iterate over the list,
2268+ like::
2269+
2270+ {% for unit in interface -%}
2271+ {{ unit['key'] }}{% if not loop.last %},{% endif %}
2272+ {%- endfor %}
2273+
2274+ Note that since all sets of relation data from all related services and
2275+ units are in a single list, if you need to know which service or unit a
2276+ set of data came from, you'll need to extend this class to preserve
2277+ that information.
2278+ """
2279+ if not hookenv.relation_ids(self.name):
2280+ return
2281+
2282+ ns = self.setdefault(self.name, [])
2283+ for rid in sorted(hookenv.relation_ids(self.name)):
2284+ for unit in sorted(hookenv.related_units(rid)):
2285+ reldata = hookenv.relation_get(rid=rid, unit=unit)
2286+ if self._is_ready(reldata):
2287+ ns.append(reldata)
2288+
2289+ def provide_data(self):
2290+ """
2291+ Return data to be relation_set for this interface.
2292+ """
2293+ return {}
2294+
2295+
2296+class MysqlRelation(RelationContext):
2297+ """
2298+ Relation context for the `mysql` interface.
2299+
2300+ :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
2301+ :param list additional_required_keys: Extend the list of :attr:`required_keys`
2302+ """
2303+ name = 'db'
2304+ interface = 'mysql'
2305+ required_keys = ['host', 'user', 'password', 'database']
2306+
2307+
2308+class HttpRelation(RelationContext):
2309+ """
2310+ Relation context for the `http` interface.
2311+
2312+ :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
2313+ :param list additional_required_keys: Extend the list of :attr:`required_keys`
2314+ """
2315+ name = 'website'
2316+ interface = 'http'
2317+ required_keys = ['host', 'port']
2318+
2319+ def provide_data(self):
2320+ return {
2321+ 'host': hookenv.unit_get('private-address'),
2322+ 'port': 80,
2323+ }
2324+
2325+
2326+class RequiredConfig(dict):
2327+ """
2328+ Data context that loads config options with one or more mandatory options.
2329+
2330+ Once the required options have been changed from their default values, all
2331+ config options will be available, namespaced under `config` to prevent
2332+ potential naming conflicts (for example, between a config option and a
2333+ relation property).
2334+
2335+ :param list *args: List of options that must be changed from their default values.
2336+ """
2337+
2338+ def __init__(self, *args):
2339+ self.required_options = args
2340+ self['config'] = hookenv.config()
2341+ with open(os.path.join(hookenv.charm_dir(), 'config.yaml')) as fp:
2342+ self.config = yaml.load(fp).get('options', {})
2343+
2344+ def __bool__(self):
2345+ for option in self.required_options:
2346+ if option not in self['config']:
2347+ return False
2348+ current_value = self['config'][option]
2349+ default_value = self.config[option].get('default')
2350+ if current_value == default_value:
2351+ return False
2352+ if current_value in (None, '') and default_value in (None, ''):
2353+ return False
2354+ return True
2355+
2356+ def __nonzero__(self):
2357+ return self.__bool__()
2358+
2359+
2360+class StoredContext(dict):
2361+ """
2362+ A data context that always returns the data that it was first created with.
2363+
2364+ This is useful to do a one-time generation of things like passwords, that
2365+ will thereafter use the same value that was originally generated, instead
2366+ of generating a new value each time it is run.
2367+ """
2368+ def __init__(self, file_name, config_data):
2369+ """
2370+ If the file exists, populate `self` with the data from the file.
2371+ Otherwise, populate with the given data and persist it to the file.
2372+ """
2373+ if os.path.exists(file_name):
2374+ self.update(self.read_context(file_name))
2375+ else:
2376+ self.store_context(file_name, config_data)
2377+ self.update(config_data)
2378+
2379+ def store_context(self, file_name, config_data):
2380+ if not os.path.isabs(file_name):
2381+ file_name = os.path.join(hookenv.charm_dir(), file_name)
2382+ with open(file_name, 'w') as file_stream:
2383+ os.fchmod(file_stream.fileno(), 0o600)
2384+ yaml.dump(config_data, file_stream)
2385+
2386+ def read_context(self, file_name):
2387+ if not os.path.isabs(file_name):
2388+ file_name = os.path.join(hookenv.charm_dir(), file_name)
2389+ with open(file_name, 'r') as file_stream:
2390+ data = yaml.load(file_stream)
2391+ if not data:
2392+ raise OSError("%s is empty" % file_name)
2393+ return data
2394+
2395+
2396+class TemplateCallback(ManagerCallback):
2397+ """
2398+ Callback class that will render a Jinja2 template, for use as a ready
2399+ action.
2400+
2401+ :param str source: The template source file, relative to
2402+ `$CHARM_DIR/templates`
2403+
2404+ :param str target: The target to write the rendered template to
2405+ :param str owner: The owner of the rendered file
2406+ :param str group: The group of the rendered file
2407+ :param int perms: The permissions of the rendered file
2408+ """
2409+ def __init__(self, source, target,
2410+ owner='root', group='root', perms=0o444):
2411+ self.source = source
2412+ self.target = target
2413+ self.owner = owner
2414+ self.group = group
2415+ self.perms = perms
2416+
2417+ def __call__(self, manager, service_name, event_name):
2418+ service = manager.get_service(service_name)
2419+ context = {}
2420+ for ctx in service.get('required_data', []):
2421+ context.update(ctx)
2422+ templating.render(self.source, self.target, context,
2423+ self.owner, self.group, self.perms)
2424+
2425+
2426+# Convenience aliases for templates
2427+render_template = template = TemplateCallback
2428
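The per-unit completeness test behind `RelationContext._is_ready()` is a plain superset check against `required_keys`. A standalone sketch using the `MysqlRelation` keys from the file above (unit data values are illustrative):

```python
# Sketch of RelationContext._is_ready(): a unit's relation data is complete
# only when it carries every key in required_keys.
def unit_is_ready(unit_data, required_keys):
    return set(unit_data.keys()).issuperset(set(required_keys))

mysql_keys = ['host', 'user', 'password', 'database']
partial = {'host': '10.0.0.2', 'user': 'wp'}
complete = dict(partial, password='s3cret', database='wordpress')

print(unit_is_ready(partial, mysql_keys), unit_is_ready(complete, mysql_keys))
```

Incomplete units are silently skipped by `get_data()`, so templates only ever see fully-populated unit dicts.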
2429=== added file 'hooks/charmhelpers/core/sysctl.py'
2430--- hooks/charmhelpers/core/sysctl.py 1970-01-01 00:00:00 +0000
2431+++ hooks/charmhelpers/core/sysctl.py 2014-12-09 22:04:35 +0000
2432@@ -0,0 +1,34 @@
2433+#!/usr/bin/env python
2434+# -*- coding: utf-8 -*-
2435+
2436+__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
2437+
2438+import yaml
2439+
2440+from subprocess import check_call
2441+
2442+from charmhelpers.core.hookenv import (
2443+ log,
2444+ DEBUG,
2445+)
2446+
2447+
2448+def create(sysctl_dict, sysctl_file):
2449+ """Creates a sysctl.conf file from a YAML associative array
2450+
2451+ :param sysctl_dict: a dict of sysctl options eg { 'kernel.max_pid': 1337 }
2452+ :type sysctl_dict: dict
2453+ :param sysctl_file: path to the sysctl file to be saved
2454+ :type sysctl_file: str or unicode
2455+ :returns: None
2456+ """
2457+ sysctl_dict = yaml.load(sysctl_dict)
2458+
2459+ with open(sysctl_file, "w") as fd:
2460+ for key, value in sysctl_dict.items():
2461+ fd.write("{}={}\n".format(key, value))
2462+
2463+ log("Updating sysctl_file: %s values: %s" % (sysctl_file, sysctl_dict),
2464+ level=DEBUG)
2465+
2466+ check_call(["sysctl", "-p", sysctl_file])
2467
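Note that `create()` above `yaml.load()`s its argument, so it takes a YAML string, not a dict. A stdlib-only sketch of the file format it then writes (settings are illustrative; the real function ends by running `sysctl -p <file>`):

```python
# Sketch of the file create() emits: one "key=value" line per sysctl
# setting, in the format `sysctl -p <file>` expects.
settings = {'net.ipv4.ip_forward': 1, 'vm.swappiness': 10}
content = "".join("{}={}\n".format(k, v) for k, v in sorted(settings.items()))
print(content, end="")
```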
2468=== added file 'hooks/charmhelpers/core/templating.py'
2469--- hooks/charmhelpers/core/templating.py 1970-01-01 00:00:00 +0000
2470+++ hooks/charmhelpers/core/templating.py 2014-12-09 22:04:35 +0000
2471@@ -0,0 +1,52 @@
2472+import os
2473+
2474+from charmhelpers.core import host
2475+from charmhelpers.core import hookenv
2476+
2477+
2478+def render(source, target, context, owner='root', group='root',
2479+ perms=0o444, templates_dir=None):
2480+ """
2481+ Render a template.
2482+
2483+ The `source` path, if not absolute, is relative to the `templates_dir`.
2484+
2485+ The `target` path should be absolute.
2486+
2487+ The context should be a dict containing the values to be replaced in the
2488+ template.
2489+
2490+ The `owner`, `group`, and `perms` options will be passed to `write_file`.
2491+
2492+ If omitted, `templates_dir` defaults to the `templates` folder in the charm.
2493+
2494+ Note: Using this requires python-jinja2; if it is not installed, calling
2495+ this will attempt to use charmhelpers.fetch.apt_install to install it.
2496+ """
2497+ try:
2498+ from jinja2 import FileSystemLoader, Environment, exceptions
2499+ except ImportError:
2500+ try:
2501+ from charmhelpers.fetch import apt_install
2502+ except ImportError:
2503+ hookenv.log('Could not import jinja2, and could not import '
2504+ 'charmhelpers.fetch to install it',
2505+ level=hookenv.ERROR)
2506+ raise
2507+ apt_install('python-jinja2', fatal=True)
2508+ from jinja2 import FileSystemLoader, Environment, exceptions
2509+
2510+ if templates_dir is None:
2511+ templates_dir = os.path.join(hookenv.charm_dir(), 'templates')
2512+ loader = Environment(loader=FileSystemLoader(templates_dir))
2513+ try:
2514+ source = source
2515+ template = loader.get_template(source)
2516+ except exceptions.TemplateNotFound as e:
2517+ hookenv.log('Could not load template %s from %s.' %
2518+ (source, templates_dir),
2519+ level=hookenv.ERROR)
2520+ raise e
2521+ content = template.render(context)
2522+ host.mkdir(os.path.dirname(target))
2523+ host.write_file(target, content, owner, group, perms)
2524
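Since jinja2 may be absent outside the unit, this stand-in mirrors `render()`'s flow (resolve `source` under a templates dir, substitute the context, write `target`) with stdlib `string.Template`. File names and the context are illustrative, not the charm's actual templates:

```python
import os
import tempfile
from string import Template

# Create a throwaway templates dir with one $-style template in it.
templates_dir = tempfile.mkdtemp()
with open(os.path.join(templates_dir, 'memcached.conf'), 'w') as fp:
    fp.write('-m $size\n-p $port\n')

def render_sketch(source, target, context, templates_dir):
    # Load the template relative to templates_dir, fill it, write target.
    with open(os.path.join(templates_dir, source)) as fp:
        content = Template(fp.read()).substitute(context)
    with open(target, 'w') as fp:  # real render() also sets owner/group/perms
        fp.write(content)
    return content

rendered = render_sketch('memcached.conf',
                         os.path.join(templates_dir, 'out.conf'),
                         {'size': 64, 'port': 11211}, templates_dir)
print(rendered, end='')
```

The real helper additionally creates the target's parent directory via `host.mkdir` and falls back to `apt_install('python-jinja2')` when the import fails.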
2525=== added directory 'hooks/charmhelpers/fetch'
2526=== added file 'hooks/charmhelpers/fetch/__init__.py'
2527--- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000
2528+++ hooks/charmhelpers/fetch/__init__.py 2014-12-09 22:04:35 +0000
2529@@ -0,0 +1,416 @@
2530+import importlib
2531+from tempfile import NamedTemporaryFile
2532+import time
2533+from yaml import safe_load
2534+from charmhelpers.core.host import (
2535+ lsb_release
2536+)
2537+import subprocess
2538+from charmhelpers.core.hookenv import (
2539+ config,
2540+ log,
2541+)
2542+import os
2543+
2544+import six
2545+if six.PY3:
2546+ from urllib.parse import urlparse, urlunparse
2547+else:
2548+ from urlparse import urlparse, urlunparse
2549+
2550+
2551+CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
2552+deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
2553+"""
2554+PROPOSED_POCKET = """# Proposed
2555+deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
2556+"""
2557+CLOUD_ARCHIVE_POCKETS = {
2558+ # Folsom
2559+ 'folsom': 'precise-updates/folsom',
2560+ 'precise-folsom': 'precise-updates/folsom',
2561+ 'precise-folsom/updates': 'precise-updates/folsom',
2562+ 'precise-updates/folsom': 'precise-updates/folsom',
2563+ 'folsom/proposed': 'precise-proposed/folsom',
2564+ 'precise-folsom/proposed': 'precise-proposed/folsom',
2565+ 'precise-proposed/folsom': 'precise-proposed/folsom',
2566+ # Grizzly
2567+ 'grizzly': 'precise-updates/grizzly',
2568+ 'precise-grizzly': 'precise-updates/grizzly',
2569+ 'precise-grizzly/updates': 'precise-updates/grizzly',
2570+ 'precise-updates/grizzly': 'precise-updates/grizzly',
2571+ 'grizzly/proposed': 'precise-proposed/grizzly',
2572+ 'precise-grizzly/proposed': 'precise-proposed/grizzly',
2573+ 'precise-proposed/grizzly': 'precise-proposed/grizzly',
2574+ # Havana
2575+ 'havana': 'precise-updates/havana',
2576+ 'precise-havana': 'precise-updates/havana',
2577+ 'precise-havana/updates': 'precise-updates/havana',
2578+ 'precise-updates/havana': 'precise-updates/havana',
2579+ 'havana/proposed': 'precise-proposed/havana',
2580+ 'precise-havana/proposed': 'precise-proposed/havana',
2581+ 'precise-proposed/havana': 'precise-proposed/havana',
2582+ # Icehouse
2583+ 'icehouse': 'precise-updates/icehouse',
2584+ 'precise-icehouse': 'precise-updates/icehouse',
2585+ 'precise-icehouse/updates': 'precise-updates/icehouse',
2586+ 'precise-updates/icehouse': 'precise-updates/icehouse',
2587+ 'icehouse/proposed': 'precise-proposed/icehouse',
2588+ 'precise-icehouse/proposed': 'precise-proposed/icehouse',
2589+ 'precise-proposed/icehouse': 'precise-proposed/icehouse',
2590+ # Juno
2591+ 'juno': 'trusty-updates/juno',
2592+ 'trusty-juno': 'trusty-updates/juno',
2593+ 'trusty-juno/updates': 'trusty-updates/juno',
2594+ 'trusty-updates/juno': 'trusty-updates/juno',
2595+ 'juno/proposed': 'trusty-proposed/juno',
2596+ 'juno/proposed': 'trusty-proposed/juno',
2597+ 'trusty-juno/proposed': 'trusty-proposed/juno',
2598+ 'trusty-proposed/juno': 'trusty-proposed/juno',
2599+}
2600+
2601+# The order of this list is very important. Handlers should be listed
2602+# from least- to most-specific URL matching.
2603+FETCH_HANDLERS = (
2604+ 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
2605+ 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
2606+ 'charmhelpers.fetch.giturl.GitUrlFetchHandler',
2607+)
2608+
2609+APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
2610+APT_NO_LOCK_RETRY_DELAY = 10 # Wait 10 seconds between apt lock checks.
2611+APT_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times.
2612+
2613+
2614+class SourceConfigError(Exception):
2615+ pass
2616+
2617+
2618+class UnhandledSource(Exception):
2619+ pass
2620+
2621+
2622+class AptLockError(Exception):
2623+ pass
2624+
2625+
2626+class BaseFetchHandler(object):
2627+
2628+ """Base class for FetchHandler implementations in fetch plugins"""
2629+
2630+ def can_handle(self, source):
2631+ """Returns True if the source can be handled. Otherwise returns
2632+ a string explaining why it cannot"""
2633+ return "Wrong source type"
2634+
2635+ def install(self, source):
2636+ """Try to download and unpack the source. Return the path to the
2637+ unpacked files or raise UnhandledSource."""
2638+ raise UnhandledSource("Wrong source type {}".format(source))
2639+
2640+ def parse_url(self, url):
2641+ return urlparse(url)
2642+
2643+ def base_url(self, url):
2644+ """Return url without querystring or fragment"""
2645+ parts = list(self.parse_url(url))
2646+ parts[4:] = ['' for i in parts[4:]]
2647+ return urlunparse(parts)
2648+
2649+
2650+def filter_installed_packages(packages):
2651+ """Returns a list of packages that require installation"""
2652+ cache = apt_cache()
2653+ _pkgs = []
2654+ for package in packages:
2655+ try:
2656+ p = cache[package]
2657+ p.current_ver or _pkgs.append(package)
2658+ except KeyError:
2659+ log('Package {} has no installation candidate.'.format(package),
2660+ level='WARNING')
2661+ _pkgs.append(package)
2662+ return _pkgs
2663+
2664+
2665+def apt_cache(in_memory=True):
2666+ """Build and return an apt cache"""
2667+ import apt_pkg
2668+ apt_pkg.init()
2669+ if in_memory:
2670+ apt_pkg.config.set("Dir::Cache::pkgcache", "")
2671+ apt_pkg.config.set("Dir::Cache::srcpkgcache", "")
2672+ return apt_pkg.Cache()
2673+
2674+
2675+def apt_install(packages, options=None, fatal=False):
2676+ """Install one or more packages"""
2677+ if options is None:
2678+ options = ['--option=Dpkg::Options::=--force-confold']
2679+
2680+ cmd = ['apt-get', '--assume-yes']
2681+ cmd.extend(options)
2682+ cmd.append('install')
2683+ if isinstance(packages, six.string_types):
2684+ cmd.append(packages)
2685+ else:
2686+ cmd.extend(packages)
2687+ log("Installing {} with options: {}".format(packages,
2688+ options))
2689+ _run_apt_command(cmd, fatal)
2690+
2691+
2692+def apt_upgrade(options=None, fatal=False, dist=False):
2693+ """Upgrade all packages"""
2694+ if options is None:
2695+ options = ['--option=Dpkg::Options::=--force-confold']
2696+
2697+ cmd = ['apt-get', '--assume-yes']
2698+ cmd.extend(options)
2699+ if dist:
2700+ cmd.append('dist-upgrade')
2701+ else:
2702+ cmd.append('upgrade')
2703+ log("Upgrading with options: {}".format(options))
2704+ _run_apt_command(cmd, fatal)
2705+
2706+
2707+def apt_update(fatal=False):
2708+ """Update local apt cache"""
2709+ cmd = ['apt-get', 'update']
2710+ _run_apt_command(cmd, fatal)
2711+
2712+
2713+def apt_purge(packages, fatal=False):
2714+ """Purge one or more packages"""
2715+ cmd = ['apt-get', '--assume-yes', 'purge']
2716+ if isinstance(packages, six.string_types):
2717+ cmd.append(packages)
2718+ else:
2719+ cmd.extend(packages)
2720+ log("Purging {}".format(packages))
2721+ _run_apt_command(cmd, fatal)
2722+
2723+
2724+def apt_hold(packages, fatal=False):
2725+ """Hold one or more packages"""
2726+ cmd = ['apt-mark', 'hold']
2727+ if isinstance(packages, six.string_types):
2728+ cmd.append(packages)
2729+ else:
2730+ cmd.extend(packages)
2731+ log("Holding {}".format(packages))
2732+
2733+ if fatal:
2734+ subprocess.check_call(cmd)
2735+ else:
2736+ subprocess.call(cmd)
2737+
2738+
2739+def add_source(source, key=None):
2740+ """Add a package source to this system.
2741+
2742+ @param source: a URL or sources.list entry, as supported by
2743+ add-apt-repository(1). Examples::
2744+
2745+ ppa:charmers/example
2746+ deb https://stub:key@private.example.com/ubuntu trusty main
2747+
2748+ In addition:
2749+ 'proposed:' may be used to enable the standard 'proposed'
2750+ pocket for the release.
2751+ 'cloud:' may be used to activate official cloud archive pockets,
2752+ such as 'cloud:icehouse'
2753+ 'distro' may be used as a noop
2754+
2755+ @param key: A key to be added to the system's APT keyring and used
2756+ to verify the signatures on packages. Ideally, this should be an
2757+ ASCII format GPG public key including the block headers. A GPG key
2758+ id may also be used, but be aware that only insecure protocols are
2759+ available to retrieve the actual public key from a public keyserver
2760+ placing your Juju environment at risk. ppa and cloud archive keys
2761+ are securely added automtically, so sould not be provided.
2762+ """
2763+ if source is None:
2764+ log('Source is not present. Skipping')
2765+ return
2766+
2767+ if (source.startswith('ppa:') or
2768+ source.startswith('http') or
2769+ source.startswith('deb ') or
2770+ source.startswith('cloud-archive:')):
2771+ subprocess.check_call(['add-apt-repository', '--yes', source])
2772+ elif source.startswith('cloud:'):
2773+ apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
2774+ fatal=True)
2775+ pocket = source.split(':')[-1]
2776+ if pocket not in CLOUD_ARCHIVE_POCKETS:
2777+ raise SourceConfigError(
2778+ 'Unsupported cloud: source option %s' %
2779+ pocket)
2780+ actual_pocket = CLOUD_ARCHIVE_POCKETS[pocket]
2781+ with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
2782+ apt.write(CLOUD_ARCHIVE.format(actual_pocket))
2783+ elif source == 'proposed':
2784+ release = lsb_release()['DISTRIB_CODENAME']
2785+ with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
2786+ apt.write(PROPOSED_POCKET.format(release))
2787+ elif source == 'distro':
2788+ pass
2789+ else:
2790+ log("Unknown source: {!r}".format(source))
2791+
2792+ if key:
2793+ if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
2794+ with NamedTemporaryFile('w+') as key_file:
2795+ key_file.write(key)
2796+ key_file.flush()
2797+ key_file.seek(0)
2798+ subprocess.check_call(['apt-key', 'add', '-'], stdin=key_file)
2799+ else:
2800+ # Note that hkp: is in no way a secure protocol. Using a
2801+ # GPG key id is pointless from a security POV unless you
2802+ # absolutely trust your network and DNS.
2803+ subprocess.check_call(['apt-key', 'adv', '--keyserver',
2804+ 'hkp://keyserver.ubuntu.com:80', '--recv',
2805+ key])
2806+
2807+
2808+def configure_sources(update=False,
2809+ sources_var='install_sources',
2810+ keys_var='install_keys'):
2811+ """
2812+ Configure multiple sources from charm configuration.
2813+
2814+ The lists are encoded as yaml fragments in the configuration.
2815+ The fragment needs to be included as a string. Sources and their
2816+ corresponding keys are of the types supported by add_source().
2817+
2818+ Example config:
2819+ install_sources: |
2820+ - "ppa:foo"
2821+ - "http://example.com/repo precise main"
2822+ install_keys: |
2823+ - null
2824+ - "a1b2c3d4"
2825+
2826+ Note that 'null' (a.k.a. None) should not be quoted.
2827+ """
2828+ sources = safe_load((config(sources_var) or '').strip()) or []
2829+ keys = safe_load((config(keys_var) or '').strip()) or None
2830+
2831+ if isinstance(sources, six.string_types):
2832+ sources = [sources]
2833+
2834+ if keys is None:
2835+ for source in sources:
2836+ add_source(source, None)
2837+ else:
2838+ if isinstance(keys, six.string_types):
2839+ keys = [keys]
2840+
2841+ if len(sources) != len(keys):
2842+ raise SourceConfigError(
2843+ 'Install sources and keys lists are different lengths')
2844+ for source, key in zip(sources, keys):
2845+ add_source(source, key)
2846+ if update:
2847+ apt_update(fatal=True)
2848+
2849+
2850+def install_remote(source, *args, **kwargs):
2851+ """
2852+ Install a file tree from a remote source
2853+
2854+ The specified source should be a url of the form:
2855+ scheme://[host]/path[#[option=value][&...]]
2856+
2857+ Schemes supported are based on this module's submodules.
2858+ Options supported are submodule-specific.
2859+ Additional arguments are passed through to the submodule.
2860+
2861+ For example::
2862+
2863+ dest = install_remote('http://example.com/archive.tgz',
2864+ checksum='deadbeef',
2865+ hash_type='sha1')
2866+
2867+ This will download `archive.tgz`, validate it using SHA1 and, if
2868+ the file is ok, extract it and return the directory in which it
2869+ was extracted. If the checksum fails, it will raise
2870+ :class:`charmhelpers.core.host.ChecksumError`.
2871+ """
2872+ # We ONLY check for True here because can_handle may return a string
2873+ # explaining why it can't handle a given source.
2874+ handlers = [h for h in plugins() if h.can_handle(source) is True]
2875+ installed_to = None
2876+ for handler in handlers:
2877+ try:
2878+ installed_to = handler.install(source, *args, **kwargs)
2879+ except UnhandledSource:
2880+ pass
2881+ if not installed_to:
2882+ raise UnhandledSource("No handler found for source {}".format(source))
2883+ return installed_to
2884+
2885+
2886+def install_from_config(config_var_name):
2887+ charm_config = config()
2888+ source = charm_config[config_var_name]
2889+ return install_remote(source)
2890+
2891+
2892+def plugins(fetch_handlers=None):
2893+ if not fetch_handlers:
2894+ fetch_handlers = FETCH_HANDLERS
2895+ plugin_list = []
2896+ for handler_name in fetch_handlers:
2897+ package, classname = handler_name.rsplit('.', 1)
2898+ try:
2899+ handler_class = getattr(
2900+ importlib.import_module(package),
2901+ classname)
2902+ plugin_list.append(handler_class())
2903+ except (ImportError, AttributeError):
2904+ # Skip missing plugins so that they can be omitted from
2905+ # installation if desired
2906+ log("FetchHandler {} not found, skipping plugin".format(
2907+ handler_name))
2908+ return plugin_list
2909+
2910+
2911+def _run_apt_command(cmd, fatal=False):
2912+ """
2913+ Run an APT command, checking the exit status and retrying while the
2914+ dpkg lock is held if the fatal flag is set to True.
2915+
2916+ :param: cmd: str: The apt command to run.
2917+ :param: fatal: bool: Whether the command's exit status should be
2918+ checked and the command retried while the dpkg lock is held.
2919+ """
2920+ env = os.environ.copy()
2921+
2922+ if 'DEBIAN_FRONTEND' not in env:
2923+ env['DEBIAN_FRONTEND'] = 'noninteractive'
2924+
2925+ if fatal:
2926+ retry_count = 0
2927+ result = None
2928+
2929+ # If the command is considered "fatal", we need to retry if the apt
2930+ # lock was not acquired.
2931+
2932+ while result is None or result == APT_NO_LOCK:
2933+ try:
2934+ result = subprocess.check_call(cmd, env=env)
2935+ except subprocess.CalledProcessError as e:
2936+ retry_count = retry_count + 1
2937+ if retry_count > APT_NO_LOCK_RETRY_COUNT:
2938+ raise
2939+ result = e.returncode
2940+ log("Couldn't acquire DPKG lock. Will retry in {} seconds."
2941+ "".format(APT_NO_LOCK_RETRY_DELAY))
2942+ time.sleep(APT_NO_LOCK_RETRY_DELAY)
2943+
2944+ else:
2945+ subprocess.call(cmd, env=env)
2946
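The retry loop in `_run_apt_command` is easiest to reason about in isolation. A minimal standalone sketch, with a stubbed command runner in place of `subprocess.check_call` (the `CommandError` class, `run_with_retry` name, and the 10-second delay are illustrative assumptions, not charm-helpers API):

```python
# Sketch of the dpkg-lock retry loop: keep retrying while the command
# fails with the apt "could not get lock" exit code.
APT_NO_LOCK = 100            # apt's exit status when the dpkg lock is held
APT_NO_LOCK_RETRY_COUNT = 30


class CommandError(Exception):
    """Stand-in for subprocess.CalledProcessError."""
    def __init__(self, returncode):
        self.returncode = returncode


def run_with_retry(run, sleep=lambda seconds: None):
    retry_count = 0
    result = None
    # Retry only while the lock error is returned; any other failure
    # falls out of the loop, and too many retries re-raises.
    while result is None or result == APT_NO_LOCK:
        try:
            result = run()
        except CommandError as e:
            retry_count += 1
            if retry_count > APT_NO_LOCK_RETRY_COUNT:
                raise
            result = e.returncode
            sleep(10)  # APT_NO_LOCK_RETRY_DELAY
    return result
```

A runner that fails twice with the lock error and then succeeds is retried until it returns 0.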
2947=== added file 'hooks/charmhelpers/fetch/archiveurl.py'
2948--- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000
2949+++ hooks/charmhelpers/fetch/archiveurl.py 2014-12-09 22:04:35 +0000
2950@@ -0,0 +1,145 @@
2951+import os
2952+import hashlib
2953+import re
2954+
2955+import six
2956+if six.PY3:
2957+ from urllib.request import (
2958+ build_opener, install_opener, urlopen, urlretrieve,
2959+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
2960+ )
2961+ from urllib.parse import urlparse, urlunparse, parse_qs
2962+ from urllib.error import URLError
2963+else:
2964+ from urllib import urlretrieve
2965+ from urllib2 import (
2966+ build_opener, install_opener, urlopen,
2967+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
2968+ URLError
2969+ )
2970+ from urlparse import urlparse, urlunparse, parse_qs
2971+
2972+from charmhelpers.fetch import (
2973+ BaseFetchHandler,
2974+ UnhandledSource
2975+)
2976+from charmhelpers.payload.archive import (
2977+ get_archive_handler,
2978+ extract,
2979+)
2980+from charmhelpers.core.host import mkdir, check_hash
2981+
2982+
2983+def splituser(host):
2984+ '''urllib.splituser(), but six's support of this seems broken'''
2985+ _userprog = re.compile('^(.*)@(.*)$')
2986+ match = _userprog.match(host)
2987+ if match:
2988+ return match.group(1, 2)
2989+ return None, host
2990+
2991+
2992+def splitpasswd(user):
2993+ '''urllib.splitpasswd(), but six's support of this is missing'''
2994+ _passwdprog = re.compile('^([^:]*):(.*)$', re.S)
2995+ match = _passwdprog.match(user)
2996+ if match:
2997+ return match.group(1, 2)
2998+ return user, None
2999+
3000+
3001+class ArchiveUrlFetchHandler(BaseFetchHandler):
3002+ """
3003+ Handler to download archive files from arbitrary URLs.
3004+
3005+ Can fetch from http, https, ftp, and file URLs.
3006+
3007+ Can install either tarballs (.tar, .tgz, .tbz2, etc) or zip files.
3008+
3009+ Installs the contents of the archive in $CHARM_DIR/fetched/.
3010+ """
3011+ def can_handle(self, source):
3012+ url_parts = self.parse_url(source)
3013+ if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
3014+ return "Wrong source type"
3015+ if get_archive_handler(self.base_url(source)):
3016+ return True
3017+ return False
3018+
3019+ def download(self, source, dest):
3020+ """
3021+ Download an archive file.
3022+
3023+ :param str source: URL pointing to an archive file.
3024+ :param str dest: Local path location to download archive file to.
3025+ """
3026+ # propagate all exceptions
3027+ # URLError, OSError, etc
3028+ proto, netloc, path, params, query, fragment = urlparse(source)
3029+ if proto in ('http', 'https'):
3030+ auth, barehost = splituser(netloc)
3031+ if auth is not None:
3032+ source = urlunparse((proto, barehost, path, params, query, fragment))
3033+ username, password = splitpasswd(auth)
3034+ passman = HTTPPasswordMgrWithDefaultRealm()
3035+ # Realm is set to None in add_password to force the username and password
3036+ # to be used whatever the realm
3037+ passman.add_password(None, source, username, password)
3038+ authhandler = HTTPBasicAuthHandler(passman)
3039+ opener = build_opener(authhandler)
3040+ install_opener(opener)
3041+ response = urlopen(source)
3042+ try:
3043+ with open(dest, 'w') as dest_file:
3044+ dest_file.write(response.read())
3045+ except Exception as e:
3046+ if os.path.isfile(dest):
3047+ os.unlink(dest)
3048+ raise e
3049+
3050+ # Mandatory file validation via SHA-1 or MD5 hashing.
3051+ def download_and_validate(self, url, hashsum, validate="sha1"):
3052+ tempfile, headers = urlretrieve(url)
3053+ check_hash(tempfile, hashsum, validate)
3054+ return tempfile
3055+
3056+ def install(self, source, dest=None, checksum=None, hash_type='sha1'):
3057+ """
3058+ Download and install an archive file, with optional checksum validation.
3059+
3060+ The checksum can also be given on the `source` URL's fragment.
3061+ For example::
3062+
3063+ handler.install('http://example.com/file.tgz#sha1=deadbeef')
3064+
3065+ :param str source: URL pointing to an archive file.
3066+ :param str dest: Local destination path to install to. If not given,
3067+ installs to `$CHARM_DIR/archives/archive_file_name`.
3068+ :param str checksum: If given, validate the archive file after download.
3069+ :param str hash_type: Algorithm used to generate `checksum`.
3070+ Can be any hash algorithm supported by :mod:`hashlib`,
3071+ such as md5, sha1, sha256, sha512, etc.
3072+
3073+ """
3074+ url_parts = self.parse_url(source)
3075+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
3076+ if not os.path.exists(dest_dir):
3077+ mkdir(dest_dir, perms=0o755)
3078+ dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
3079+ try:
3080+ self.download(source, dld_file)
3081+ except URLError as e:
3082+ raise UnhandledSource(e.reason)
3083+ except OSError as e:
3084+ raise UnhandledSource(e.strerror)
3085+ options = parse_qs(url_parts.fragment)
3086+ for key, value in options.items():
3087+ if not six.PY3:
3088+ algorithms = hashlib.algorithms
3089+ else:
3090+ algorithms = hashlib.algorithms_available
3091+ if key in algorithms:
3092+ check_hash(dld_file, value, key)
3093+ if checksum:
3094+ check_hash(dld_file, checksum, hash_type)
3095+ return extract(dld_file, dest)
3096
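The two netloc helpers above (`splituser`/`splitpasswd`) carry the basic-auth handling in `download()`; reproduced standalone so their behaviour on a `user:pass@host` netloc can be checked outside the charm tree:

```python
import re


def splituser(host):
    """Split 'user[:pass]@host' into (credentials, bare host)."""
    match = re.match(r'^(.*)@(.*)$', host)
    if match:
        return match.group(1, 2)
    return None, host


def splitpasswd(user):
    """Split 'user:pass' credentials into (user, password)."""
    match = re.match(r'^([^:]*):(.*)$', user, re.S)
    if match:
        return match.group(1, 2)
    return user, None
```

For `alice:s3cret@example.com`, `splituser` yields `('alice:s3cret', 'example.com')` and `splitpasswd` then yields `('alice', 's3cret')`, which is what `download()` feeds into `HTTPPasswordMgrWithDefaultRealm`.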
3097=== added file 'hooks/charmhelpers/fetch/bzrurl.py'
3098--- hooks/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000
3099+++ hooks/charmhelpers/fetch/bzrurl.py 2014-12-09 22:04:35 +0000
3100@@ -0,0 +1,54 @@
3101+import os
3102+from charmhelpers.fetch import (
3103+ BaseFetchHandler,
3104+ UnhandledSource
3105+)
3106+from charmhelpers.core.host import mkdir
3107+
3108+import six
3109+if six.PY3:
3110+ raise ImportError('bzrlib does not support Python3')
3111+
3112+try:
3113+ from bzrlib.branch import Branch
3114+except ImportError:
3115+ from charmhelpers.fetch import apt_install
3116+ apt_install("python-bzrlib")
3117+ from bzrlib.branch import Branch
3118+
3119+
3120+class BzrUrlFetchHandler(BaseFetchHandler):
3121+ """Handler for bazaar branches via generic and lp URLs"""
3122+ def can_handle(self, source):
3123+ url_parts = self.parse_url(source)
3124+ if url_parts.scheme not in ('bzr+ssh', 'lp'):
3125+ return False
3126+ else:
3127+ return True
3128+
3129+ def branch(self, source, dest):
3130+ url_parts = self.parse_url(source)
3131+ # If we use lp:branchname scheme we need to load plugins
3132+ if not self.can_handle(source):
3133+ raise UnhandledSource("Cannot handle {}".format(source))
3134+ if url_parts.scheme == "lp":
3135+ from bzrlib.plugin import load_plugins
3136+ load_plugins()
3137+ try:
3138+ remote_branch = Branch.open(source)
3139+ remote_branch.bzrdir.sprout(dest).open_branch()
3140+ except Exception as e:
3141+ raise e
3142+
3143+ def install(self, source):
3144+ url_parts = self.parse_url(source)
3145+ branch_name = url_parts.path.strip("/").split("/")[-1]
3146+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
3147+ branch_name)
3148+ if not os.path.exists(dest_dir):
3149+ mkdir(dest_dir, perms=0o755)
3150+ try:
3151+ self.branch(source, dest_dir)
3152+ except OSError as e:
3153+ raise UnhandledSource(e.strerror)
3154+ return dest_dir
3155
3156=== added file 'hooks/charmhelpers/fetch/giturl.py'
3157--- hooks/charmhelpers/fetch/giturl.py 1970-01-01 00:00:00 +0000
3158+++ hooks/charmhelpers/fetch/giturl.py 2014-12-09 22:04:35 +0000
3159@@ -0,0 +1,51 @@
3160+import os
3161+from charmhelpers.fetch import (
3162+ BaseFetchHandler,
3163+ UnhandledSource
3164+)
3165+from charmhelpers.core.host import mkdir
3166+
3167+import six
3168+if six.PY3:
3169+ raise ImportError('GitPython does not support Python 3')
3170+
3171+try:
3172+ from git import Repo
3173+except ImportError:
3174+ from charmhelpers.fetch import apt_install
3175+ apt_install("python-git")
3176+ from git import Repo
3177+
3178+
3179+class GitUrlFetchHandler(BaseFetchHandler):
3180+ """Handler for git branches via generic and github URLs"""
3181+ def can_handle(self, source):
3182+ url_parts = self.parse_url(source)
3183+ # TODO (mattyw) no support for ssh git@ yet
3184+ if url_parts.scheme not in ('http', 'https', 'git'):
3185+ return False
3186+ else:
3187+ return True
3188+
3189+ def clone(self, source, dest, branch):
3190+ if not self.can_handle(source):
3191+ raise UnhandledSource("Cannot handle {}".format(source))
3192+
3193+ repo = Repo.clone_from(source, dest)
3194+ repo.git.checkout(branch)
3195+
3196+ def install(self, source, branch="master", dest=None):
3197+ url_parts = self.parse_url(source)
3198+ branch_name = url_parts.path.strip("/").split("/")[-1]
3199+ if dest:
3200+ dest_dir = os.path.join(dest, branch_name)
3201+ else:
3202+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
3203+ branch_name)
3204+ if not os.path.exists(dest_dir):
3205+ mkdir(dest_dir, perms=0o755)
3206+ try:
3207+ self.clone(source, dest_dir, branch)
3208+ except OSError as e:
3209+ raise UnhandledSource(e.strerror)
3210+ return dest_dir
3211
3212=== modified file 'hooks/config-changed'
3213--- hooks/config-changed 2012-09-13 18:43:22 +0000
3214+++ hooks/config-changed 1970-01-01 00:00:00 +0000
3215@@ -1,155 +0,0 @@
3216-#!/bin/sh
3217-
3218-set -ue
3219-
3220-tfile=`mktemp /etc/.memcached.conf.XXXXXXX`
3221-cat > $tfile <<EOF
3222-################################
3223-#
3224-# This config file generated by the juju memcached charm. Changes
3225-# may not be preserved.
3226-#
3227-################################
3228-#
3229-# memcached default config file
3230-# 2003 - Jay Bonci <jaybonci@debian.org>
3231-# This configuration file is read by the start-memcached script provided as
3232-# part of the Debian GNU/Linux distribution.
3233-
3234-# Run memcached as a daemon. This command is implied, and is not needed for the
3235-# daemon to run. See the README.Debian that comes with this package for more
3236-# information.
3237--d
3238-
3239-# Log memcached's output to /var/log/memcached
3240-logfile /var/log/memcached.log
3241-
3242-# Be verbose
3243-# -v
3244-
3245-# Be even more verbose (print client commands as well)
3246-# -vv
3247-
3248-# Start with a cap of 64 megs of memory. It's reasonable, and the daemon default
3249-# Note that the daemon will grow to this size, but does not start out holding this much
3250-# memory
3251--m `config-get size`
3252-
3253-# Default connection port is 11211
3254--p `config-get tcp-port`
3255-
3256-# Run the daemon as root. The start-memcached will default to running as root if no
3257-# -u command is present in this config file
3258--u memcache
3259-
3260-# Specify which IP address to listen on. The default is to listen on all IP addresses
3261-# This parameter is one of the only security measures that memcached has, so make sure
3262-# it's listening on a firewalled interface.
3263--l 0.0.0.0
3264-
3265-# Limit the number of simultaneous incoming connections. The daemon default is 1024
3266--c `config-get connection-limit`
3267-
3268-# Lock down all paged memory. Consult with the README and homepage before you do this
3269-# -k
3270-
3271-# Return error when memory is exhausted (rather than removing items)
3272-# -M
3273-
3274-# Maximize core file limit
3275-# -r
3276-EOF
3277-
3278-append_numeric() {
3279- local name=$1
3280- local opt=$2
3281- local target=$3
3282- val=`config-get $name`
3283- if [ "$val" != "-1" ] ; then
3284- cat >> $target <<EOF
3285-# $name
3286-$opt $val
3287-EOF
3288- fi
3289-}
3290-
3291-append_bool() {
3292- local name=$1
3293- local opt=$2
3294- local target=$3
3295- val=`config-get $name`
3296- if [ "$val" = "yes" ] || [ "$val" = "Y" ] ; then
3297- cat >> $target <<EOF
3298-# $name
3299-$opt
3300-EOF
3301- fi
3302-}
3303-
3304-append_numeric "request-limit" "-R" $tfile
3305-append_numeric "min-item-size" "-n" $tfile
3306-append_numeric "slab-page-size" "-I" $tfile
3307-append_numeric "threads" "-t" $tfile
3308-append_bool "disable-auto-cleanup" "-M" $tfile
3309-append_bool "disable-cas" "-C" $tfile
3310-
3311-factor=`config-get factor`
3312-if [ "$factor" != "-1.0" ] ; then
3313- cat >> $tfile <<EOF
3314-# factor
3315--f $factor
3316-EOF
3317-fi
3318-
3319-size=`config-get size`
3320-if [ $size -ge 2048 ] ; then
3321- disableLP=`config-get disable-large-pages`
3322- if [ "$disableLP" != "yes" ] && [ "$disableLP" != "Y" ] ; then
3323- juju-log "Memory >= 2GB, using large-pages to speed up access"
3324- # First try to reserve them
3325- save_hugepages=`sysctl -n vm.nr_hugepages`
3326- npages=$(($size/2))
3327- sysctl -w vm.nr_hugepages=$npages
3328- allocated=`sysctl -n vm.nr_hugepages`
3329- if [ $allocated -lt $npages ] ; then
3330- juju-log -l WARNING "Cannot allocate $npages contiguous pages for huge page allocation"
3331- juju-log -l WARNING "Setting vm.nr_hugepages back to $save_hugepages"
3332- sysctl -w vm.nr_hugepages=$save_hugepages
3333- juju-log -l WARNING "Will disable large pages and fall back to regular memory"
3334- else
3335- juju-log "Allocated $npages contiguous blocks only for huge pages."
3336- cat >> $tfile <<EOF
3337-# large pages can be disabled with disable-large-pages
3338--L
3339-EOF
3340- fi
3341- fi
3342-fi
3343-
3344-new_hash=`md5sum $tfile|cut -d' ' -f1`
3345-old_hash=`md5sum /etc/memcached.conf|cut -d' ' -f1`
3346-if [ "$new_hash" != "$old_hash" ] ; then
3347- juju-log "New config generated. hash=$new_hash"
3348- mv -f /etc/memcached.conf /etc/memcached.conf.$old_hash
3349- mv -f $tfile /etc/memcached.conf
3350- service memcached restart
3351-else
3352- juju-log "Config not changed. hash=$new_hash"
3353-fi
3354-
3355-tcp_port=`config-get tcp-port`
3356-udp_port=`config-get udp-port`
3357-
3358-# Work around http://pad.lv/900517
3359-[ -n "$tcp_port" ] || tcp_port=0
3360-[ -n "$udp_port" ] || udp_port=0
3361-
3362-if [ $tcp_port -gt 0 ] ; then
3363- open-port $tcp_port/TCP
3364-fi
3365-
3366-if [ $udp_port -gt 0 ] ; then
3367- open-port $udp_port/UDP
3368-fi
3369-# In case port changed, inform consumers
3370-hooks/cache-relation-joined
3371
3372=== target is u'memcached_hooks.py'
3373=== modified file 'hooks/install'
3374--- hooks/install 2013-04-17 17:04:41 +0000
3375+++ hooks/install 1970-01-01 00:00:00 +0000
3376@@ -1,20 +0,0 @@
3377-#!/bin/bash
3378-
3379-set -e
3380-if [[ -d exec.d ]]; then
3381- shopt -s nullglob
3382- for f in exec.d/*/charm-pre-install; do
3383- [[ -x "$f" ]] || continue
3384- ${SHELL} -c "$f"|| {
3385- ## bail out if anyone fails
3386- juju-log -l ERROR "$f: returned exit_status=$? "
3387- exit 1
3388- }
3389- done
3390-fi
3391-
3392-DEBIAN_FRONTEND=noninteractive apt-get -y install -qq memcached python-cheetah python-memcache
3393-
3394-cat > /etc/default/memcached <<EOF
3395-ENABLE_MEMCACHED=yes
3396-EOF
3397
3398=== target is u'memcached_hooks.py'
3399=== added file 'hooks/memcached_hooks.py'
3400--- hooks/memcached_hooks.py 1970-01-01 00:00:00 +0000
3401+++ hooks/memcached_hooks.py 2014-12-09 22:04:35 +0000
3402@@ -0,0 +1,217 @@
3403+#!/usr/bin/env python
3404+__author__ = 'Felipe Reyes <felipe.reyes@canonical.com>'
3405+
3406+import re
3407+import os
3408+import shutil
3409+import subprocess
3410+import sys
3411+
3412+from charmhelpers.core.hookenv import (
3413+ config,
3414+ local_unit,
3415+ log,
3416+ relation_ids,
3417+ relation_get,
3418+ relation_set,
3419+ unit_get,
3420+ Hooks,
3421+ UnregisteredHookError,
3422+)
3423+from charmhelpers.core import templating
3424+from charmhelpers.core.host import (
3425+ restart_on_change,
3426+ service_reload,
3427+ service_start,
3428+ service_stop,
3429+)
3430+from charmhelpers.fetch import apt_install
3431+from charmhelpers.contrib.network import ufw
3432+import memcached_utils
3433+
3434+
3435+DOT = os.path.dirname(os.path.abspath(__file__))
3436+ETC_DEFAULT_MEMCACHED = '/etc/default/memcached'
3437+ETC_MEMCACHED_CONF = '/etc/memcached.conf'
3438+ETC_MUNIN_NODE_CONF = '/etc/munin/munin-node.conf'
3439+LOCAL_NAGIOS_PLUGINS = '/usr/local/lib/nagios/plugins'
3440+MEMCACHE_NAGIOS_PLUGIN = '/usr/local/lib/nagios/plugins/check_memcache.py'
3441+RESTART_MAP = {ETC_MEMCACHED_CONF: ['memcached'],
3442+ ETC_MUNIN_NODE_CONF: ['munin-node']}
3443+hooks = Hooks()
3444+
3445+
3446+class MemcachedError(Exception):
3447+ pass
3448+
3449+
3450+@hooks.hook('install')
3451+def install():
3452+ apt_install(["memcached", "python-cheetah", "python-memcache"], fatal=True)
3453+
3454+ with open(ETC_DEFAULT_MEMCACHED, 'w') as f:
3455+ f.write('ENABLE_MEMCACHED=yes\n')
3456+
3457+ ufw.enable()
3458+ ufw.service('ssh', 'open')
3459+
3460+
3461+@hooks.hook('start')
3462+def start():
3463+ service_start('memcached')
3464+
3465+
3466+@hooks.hook('stop')
3467+def stop():
3468+ service_stop('memcached')
3469+
3470+
3471+@hooks.hook('cache-relation-joined')
3472+def cache_relation_joined():
3473+
3474+ settings = {'host': unit_get('private-address'),
3475+ 'port': config('tcp-port'),
3476+ 'udp-port': config('udp-port')}
3477+
3478+ for rid in relation_ids('cache'):
3479+ relation_set(rid, **settings)
3480+
3481+ addr = relation_get('private-address')
3482+ if addr:
3483+ log('Granting memcached access to {}'.format(addr), level='INFO')
3484+ memcached_utils.grant_access(addr)
3485+
3486+
3487+@hooks.hook('cache-relation-departed')
3488+def cache_relation_departed():
3489+ addr = relation_get('private-address')
3490+ log('Revoking memcached access to {}'.format(addr))
3491+ memcached_utils.revoke_access(addr)
3492+
3493+
3494+@hooks.hook('config-changed')
3495+@restart_on_change(RESTART_MAP)
3496+def config_changed():
3497+ mem_size = config('size')
3498+
3499+ if mem_size == 0:
3500+ output = subprocess.check_output(['free', '-m'])
3501+ mem_size = int(re.findall(r'\d+', output.split('\n')[2])[1])
3502+ mem_size = int(mem_size * 0.9)
3503+
3504+ configs = {'mem_size': mem_size,
3505+ 'large_pages_enabled': False}
3506+
3507+ for key in ['request-limit', 'min-item-size', 'slab-page-size', 'threads',
3508+ 'disable-auto-cleanup', 'disable-cas', 'factor',
3509+ 'connection-limit', 'tcp-port', 'udp-port']:
3510+ configs[key.replace('-', '_')] = config(key)
3511+ log('Config: {} = {}'.format(key, configs[key.replace('-', '_')]),
3512+ level='DEBUG')
3513+
3514+ if mem_size >= 2048:
3515+ disable_lp = config('disable-large-pages')
3516+ if not disable_lp:
3517+ log('Memory >= 2GB, using large-pages to speed up access',
3518+ level='INFO')
3519+ save_hp = int(subprocess.check_output(['sysctl', '-n',
3520+ 'vm.nr_hugepages']))
3521+
3522+ npages = mem_size / 2
3523+ subprocess.check_output(['sysctl', '-w',
3524+ 'vm.nr_hugepages={}'.format(npages)])
3525+ allocated = int(subprocess.check_output(['sysctl', '-n',
3526+ 'vm.nr_hugepages']))
3527+
3528+ if allocated < npages:
3529+ log('Cannot allocate {} contiguous pages for huge '
3530+ 'page allocation'.format(npages), level='WARNING')
3531+ log('Setting vm.nr_hugepages back to {}'.format(save_hp),
3532+ level='WARNING')
3533+ subprocess.check_output(['sysctl', '-w',
3534+ 'vm.nr_hugepages={}'.format(save_hp)])
3535+ log('Will disable large pages and fall back to regular memory',
3536+ level='WARNING')
3537+ configs['large_pages_enabled'] = False
3538+ else:
3539+ log(('Allocated {} contiguous blocks only '
3540+ 'for huge pages.'.format(npages)), level='WARNING')
3541+ configs['large_pages_enabled'] = True
3542+
3543+ templating.render('memcached.conf', ETC_MEMCACHED_CONF, configs)
3544+
3545+ # In case port changed, inform consumers
3546+ cache_relation_joined()
3547+
3548+
3549+@hooks.hook('munin-relation-changed')
3550+@restart_on_change(RESTART_MAP)
3551+def munin_relation_changed():
3552+ remote_ip = relation_get('private-address')
3553+
3554+ if not remote_ip:
3555+ log('Remote node must provide IP', level='INFO')
3556+ sys.exit(0)
3557+
3558+ munin_server_ip = memcached_utils.munin_format_ip(remote_ip)
3559+
3560+ # make sure the munin port is open; access is restricted by munin
3561+ ufw.service('4949', 'open')
3562+
3563+ # make sure it's installed
3564+ apt_install(['munin-node'], fatal=True)
3565+ configs = {'munin_server': munin_server_ip}
3566+ templating.render('munin-node.conf', ETC_MUNIN_NODE_CONF, configs)
3567+ relation_set(ip=unit_get('private-address'))
3568+
3569+
3570+@hooks.hook('nrpe-external-master-relation-changed')
3571+@restart_on_change(RESTART_MAP)
3572+def nrpe_external_master_relation_changed():
3573+
3574+ # make sure it's installed
3575+ apt_install(['nagios-nrpe-server'], fatal=True)
3576+ ufw.service('5666', 'open')
3577+
3578+ if not os.path.isdir(LOCAL_NAGIOS_PLUGINS):
3579+ os.makedirs(LOCAL_NAGIOS_PLUGINS)
3580+
3581+ if not os.path.isfile(MEMCACHE_NAGIOS_PLUGIN):
3582+ shutil.copy(os.path.join(DOT, "..", "files", "nrpe-external-master",
3583+ "check_memcache.py"),
3584+ os.path.join(LOCAL_NAGIOS_PLUGINS, "check_memcache.py"))
3585+ os.chmod(os.path.join(LOCAL_NAGIOS_PLUGINS, "check_memcache.py"),
3586+ 0o555)
3587+
3588+ nagios_hostname = "{}-{}".format(config('nagios_context'),
3589+ local_unit().replace('/', '-'))
3590+ configs = {'tcp_port': config('tcp-port'),
3591+ 'servicegroup': config('nagios-context'),
3592+ 'hostname': nagios_hostname}
3593+
3594+ templating.render('nrpe_check_memcached.cfg.tmpl',
3595+ '/etc/nagios/nrpe.d/check_memcached.cfg',
3596+ configs)
3597+ templating.render('nrpe_export.cfg.tmpl',
3598+ ('/var/lib/nagios/export/service__{}_check_'
3599+ 'memcached.cfg').format(nagios_hostname),
3600+ configs)
3601+
3602+ service_reload('nagios-nrpe-server')
3603+
3604+
3605+@hooks.hook('upgrade-charm')
3606+def upgrade_charm():
3607+ install()
3608+ config_changed()
3609+
3610+
3611+def main():
3612+ try:
3613+ hooks.execute(sys.argv)
3614+ except UnregisteredHookError as e:
3615+ log('Unknown hook {} - skipping.'.format(e))
3616+
3617+
3618+if __name__ == '__main__':
3619+ main()
3620
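The auto-sizing branch in `config_changed` (taken when `size` is 0) takes 90% of the free column from the `-/+ buffers/cache:` row of `free -m`. A standalone sketch of that parsing; the sample output below is made up, in the trusty-era `free` format the hook expects:

```python
import re


def auto_mem_size(free_output):
    """Derive the memcached size (MB) from `free -m` output, as the hook does."""
    row = free_output.split('\n')[2]           # "-/+ buffers/cache:  used  free"
    free_mb = int(re.findall(r'\d+', row)[1])  # second number = free MB
    return int(free_mb * 0.9)                  # leave 10% headroom


SAMPLE = """\
             total       used       free     shared    buffers     cached
Mem:          7976       4028       3948        120        224       1804
-/+ buffers/cache:       2000       5976
Swap:         4095          0       4095
"""
```

With 5976 MB free after buffers/cache, the sketch sizes memcached at 5378 MB. Note the indexing assumes the older three-row `free` layout; newer `free` versions drop the `-/+ buffers/cache:` row.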
3621=== added file 'hooks/memcached_utils.py'
3622--- hooks/memcached_utils.py 1970-01-01 00:00:00 +0000
3623+++ hooks/memcached_utils.py 2014-12-09 22:04:35 +0000
3624@@ -0,0 +1,27 @@
3625+__author__ = 'Felipe Reyes <felipe.reyes@canonical.com>'
3626+
3627+from charmhelpers.contrib.network import ufw
3628+from charmhelpers.core.hookenv import (
3629+ config,
3630+ log,
3631+)
3632+
3633+
3634+def munin_format_ip(ip):
3635+ return "^{}$".format(ip.replace('.', '\\.'))
3636+
3637+
3638+def grant_access(address):
3639+ log('granting access: {}'.format(address), level='DEBUG')
3640+ ufw.grant_access(address, port=str(config('tcp-port')), proto='tcp')
3641+
3642+ if config('udp-port') > 0:
3643+ ufw.grant_access(address, port=str(config('udp-port')), proto='udp')
3644+
3645+
3646+def revoke_access(address):
3647+ log('revoking access: {}'.format(address), level='DEBUG')
3648+ ufw.revoke_access(address, port=str(config('tcp-port')), proto='tcp')
3649+
3650+ if config('udp-port') > 0:
3651+ ufw.revoke_access(address, port=str(config('udp-port')), proto='udp')
3652
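`munin_format_ip` builds the pattern used in munin-node `allow` lines, which are regexes: the address must be anchored and its dots escaped so `10.0.0.1` cannot also match `10.0.0.11` or `1010.0.0.1`. Standalone, for quick verification:

```python
def munin_format_ip(ip):
    """Turn a dotted-quad into an anchored munin-node allow regex."""
    return "^{}$".format(ip.replace('.', '\\.'))
```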
3653=== modified file 'hooks/munin-relation-changed'
3654--- hooks/munin-relation-changed 2011-06-01 06:33:38 +0000
3655+++ hooks/munin-relation-changed 1970-01-01 00:00:00 +0000
3656@@ -1,29 +0,0 @@
3657-#!/bin/sh
3658-
3659-set -ue
3660-
3661-ip=`relation-get ip`
3662-
3663-if [ -z "$ip" ] ; then
3664- echo "Remote node must provide IP"
3665- exit 0
3666-fi
3667-
3668-reip="^`echo $ip | sed -e 's,\.,\\.,g'`$"
3669-
3670-# Make sure its installed
3671-apt-get -y install munin-node
3672-
3673-if grep -q "^allow $reip$" /etc/munin/munin-node.conf ; then
3674- echo $ip already has access.
3675-else
3676- echo "# added by $0 `date`" >> /etc/munin/munin-node.conf
3677- echo allow $reip >> /etc/munin/munin-node.conf
3678-fi
3679-
3680-service munin-node reload
3681-
3682-# Ubuntu package already enables all plugins at install time.
3683-
3684-# now tell remote server about our IP
3685-relation-set ip=`ifconfig | grep 'inet addr:'| grep -v '127.0.0.1' | cut -d: -f2 | awk '{ print $1}'|head -n 1`
3686
3687=== target is u'memcached_hooks.py'
3688=== modified file 'hooks/nrpe-external-master-relation-changed'
3689--- hooks/nrpe-external-master-relation-changed 2013-04-17 15:19:48 +0000
3690+++ hooks/nrpe-external-master-relation-changed 1970-01-01 00:00:00 +0000
3691@@ -1,19 +0,0 @@
3692-#!/bin/bash
3693-set -eux
3694-
3695-if [ ! -d /usr/local/lib/nagios/plugins ]; then
3696- mkdir -p /usr/local/lib/nagios/plugins
3697-fi
3698-if [ ! -f /usr/local/lib/nagios/plugins/check_memcache.py ]; then
3699- cp files/nrpe-external-master/check_memcache.py /usr/local/lib/nagios/plugins/check_memcache.py
3700- chmod 555 /usr/local/lib/nagios/plugins/check_memcache.py
3701-fi
3702-
3703-export NAGIOS_HOSTNAME="$(config-get nagios_context)-${JUJU_UNIT_NAME//\//-}"
3704-export NAGIOS_SERVICEGROUP="$(config-get nagios-context)"
3705-export TCP_PORT="$(config-get tcp-port)"
3706-
3707-cheetah fill --env -p templates/nrpe_check_memcached.cfg.tmpl > /etc/nagios/nrpe.d/check_memcached.cfg
3708-cheetah fill --env -p templates/nrpe_export.cfg.tmpl > /var/lib/nagios/export/service__${NAGIOS_HOSTNAME}_check_memcached.cfg
3709-
3710-/etc/init.d/nagios-nrpe-server reload
3711
3712=== target is u'memcached_hooks.py'
3713=== modified file 'hooks/start'
3714--- hooks/start 2011-12-05 22:55:46 +0000
3715+++ hooks/start 1970-01-01 00:00:00 +0000
3716@@ -1,2 +0,0 @@
3717-#!/bin/bash
3718-service memcached start
3719
3720=== target is u'memcached_hooks.py'
3721=== modified file 'hooks/stop'
3722--- hooks/stop 2011-12-05 22:55:46 +0000
3723+++ hooks/stop 1970-01-01 00:00:00 +0000
3724@@ -1,3 +0,0 @@
3725-#!/bin/bash
3726-
3727-service memcached stop
3728
3729=== target is u'memcached_hooks.py'
3730=== modified file 'hooks/upgrade-charm'
3731--- hooks/upgrade-charm 2011-12-05 22:55:46 +0000
3732+++ hooks/upgrade-charm 1970-01-01 00:00:00 +0000
3733@@ -1,4 +0,0 @@
3734-#!/bin/sh
3735-base=`dirname $0`
3736-$base/install
3737-exec $base/config-changed
3738
3739=== target is u'memcached_hooks.py'
3740=== modified file 'metadata.yaml'
3741--- metadata.yaml 2014-01-15 18:42:43 +0000
3742+++ metadata.yaml 2014-12-09 22:04:35 +0000
3743@@ -1,6 +1,6 @@
3744 name: memcached
3745 summary: "A high-performance memory object caching system"
3746-maintainer: Clint Byrum <clint@ubuntu.com>
3747+maintainer: Felipe Reyes <felipe.reyes@canonical.com>
3748 description:
3749 memcached optimizes specific high-load serving applications that are designed
3750 to take advantage of its versatile no-locking memory access system. Clients
3751@@ -8,7 +8,7 @@
3752 of the specific application. Traditionally this has been used in mod_perl
3753 apps to avoid storing large chunks of data in Apache memory, and to share
3754 this burden across several machines.
3755-categories: ["applications"]
3756+tags: ["system"]
3757 provides:
3758 cache:
3759 interface: memcache
3760
3761=== added file 'templates/memcached.conf'
3762--- templates/memcached.conf 1970-01-01 00:00:00 +0000
3763+++ templates/memcached.conf 2014-12-09 22:04:35 +0000
3764@@ -0,0 +1,62 @@
3765+################################
3766+#
3767+# This config file generated by the juju memcached charm. Changes
3768+# may not be preserved.
3769+#
3770+################################
3771+#
3772+# memcached default config file
3773+# 2003 - Jay Bonci <jaybonci@debian.org>
3774+# This configuration file is read by the start-memcached script provided as
3775+# part of the Debian GNU/Linux distribution.
3776+
3777+# Run memcached as a daemon. This command is implied, and is not needed for the
3778+# daemon to run. See the README.Debian that comes with this package for more
3779+# information.
3780+-d
3781+
3782+# Log memcached's output to /var/log/memcached
3783+logfile /var/log/memcached.log
3784+
3785+# Be verbose
3786+# -v
3787+
3788+# Be even more verbose (print client commands as well)
3789+# -vv
3790+
3791+# Start with a cap of 64 megs of memory. It's reasonable, and the daemon default
3792+# Note that the daemon will grow to this size, but does not start out holding this much
3793+# memory
3794+-m {{mem_size}}
3795+
3796+# Default connection port is 11211
3797+-p {{tcp_port}}
3798+
3799+# Run the daemon as root. The start-memcached will default to running as root if no
3800+# -u command is present in this config file
3801+-u memcache
3802+
3803+# Specify which IP address to listen on. The default is to listen on all IP addresses
3804+# This parameter is one of the only security measures that memcached has, so make sure
3805+# it's listening on a firewalled interface.
3806+-l 0.0.0.0
3807+
3808+# Limit the number of simultaneous incoming connections. The daemon default is 1024
3809+-c {{connection_limit}}
3810+
3811+# Lock down all paged memory. Consult with the README and homepage before you do this
3812+# -k
3813+
3814+# Return error when memory is exhausted (rather than removing items)
3815+# -M
3816+
3817+# Maximize core file limit
3818+# -r
3819+
3820+{% if request_limit != -1 %}-R {{request_limit}}{% endif %}
3821+{% if min_item_size != -1 %}-n {{min_item_size}}{% endif %}
3822+{% if slab_page_size != -1 %}-I {{slab_page_size}}{% endif %}
3823+{% if threads != -1 %}-t {{threads}}{% endif %}
3824+{% if disable_auto_cleanup %}-M{% endif %}
3825+{% if disable_cas %}-C{% endif %}
3826+{% if factor != -1.0 %}-f {{factor}}{% endif %}
3827
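The Jinja2 conditionals at the end of this template emit a flag only when the option differs from its sentinel default (`-1` for integers, `-1.0` for `factor`). The same logic can be sketched in plain Python (a hypothetical helper for illustration, not part of the charm):

```python
def optional_flags(cfg):
    """Build the optional memcached flags the template emits conditionally.

    Mirrors the Jinja2 '{% if option != sentinel %}' blocks above: a flag
    is included only when the config value differs from its sentinel.
    """
    flags = []
    if cfg.get('request_limit', -1) != -1:
        flags.append('-R {}'.format(cfg['request_limit']))
    if cfg.get('min_item_size', -1) != -1:
        flags.append('-n {}'.format(cfg['min_item_size']))
    if cfg.get('slab_page_size', -1) != -1:
        flags.append('-I {}'.format(cfg['slab_page_size']))
    if cfg.get('threads', -1) != -1:
        flags.append('-t {}'.format(cfg['threads']))
    if cfg.get('disable_auto_cleanup'):
        flags.append('-M')
    if cfg.get('disable_cas'):
        flags.append('-C')
    if cfg.get('factor', -1.0) != -1.0:
        flags.append('-f {}'.format(cfg['factor']))
    return flags
```

With all sentinels in place nothing is emitted, which keeps the rendered `/etc/memcached.conf` at the daemon defaults.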
3828=== added file 'templates/munin-node.conf'
3829--- templates/munin-node.conf 1970-01-01 00:00:00 +0000
3830+++ templates/munin-node.conf 2014-12-09 22:04:35 +0000
3831@@ -0,0 +1,66 @@
3832+################################
3833+#
3834+# This config file generated by the juju memcached charm. Changes
3835+# may not be preserved.
3836+#
3837+################################
3838+
3839+log_level 4
3840+log_file /var/log/munin/munin-node.log
3841+pid_file /var/run/munin/munin-node.pid
3842+
3843+background 1
3844+setsid 1
3845+
3846+user root
3847+group root
3848+
3849+# This is the timeout for the whole transaction.
3850+# Units are in sec. Default is 15 min
3851+#
3852+# global_timeout 900
3853+
3854+# This is the timeout for each plugin.
3855+# Units are in sec. Default is 1 min
3856+#
3857+# timeout 60
3858+
3859+# Regexps for files to ignore
3860+ignore_file [\#~]$
3861+ignore_file DEADJOE$
3862+ignore_file \.bak$
3863+ignore_file %$
3864+ignore_file \.dpkg-(tmp|new|old|dist)$
3865+ignore_file \.rpm(save|new)$
3866+ignore_file \.pod$
3867+
3868+# Set this if the client doesn't report the correct hostname when
3869+# telnetting to localhost, port 4949
3870+#
3871+#host_name localhost.localdomain
3872+
3873+# A list of addresses that are allowed to connect. This must be a
3874+# regular expression, since Net::Server does not understand CIDR-style
3875+# network notation unless the perl module Net::CIDR is installed. You
3876+# may repeat the allow line as many times as you'd like
3877+
3878+allow ^127\.0\.0\.1$
3879+allow ^::1$
3880+{% if munin_server %}allow {{munin_server}}{% endif %}
3881+# If you have installed the Net::CIDR perl module, you can use one or more
3882+# cidr_allow and cidr_deny address/mask patterns. A connecting client must
3883+# match any cidr_allow, and not match any cidr_deny. Note that a netmask
3884+# *must* be provided, even if it's /32
3885+#
3886+# Example:
3887+#
3888+# cidr_allow 127.0.0.1/32
3889+# cidr_allow 192.0.2.0/24
3890+# cidr_deny 192.0.2.42/32
3891+
3892+# Which address to bind to;
3893+host *
3894+# host 127.0.0.1
3895+
3896+# And which port
3897+port 4949
3898
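munin-node matches `allow` lines as regular expressions, so the `{{munin_server}}` value substituted above must be regex-safe: an unescaped IP like `192.0.2.1` would also match `192a0b2c1`. A minimal sketch of building an anchored, escaped pattern (hypothetical helper, not charm code):

```python
import re


def munin_allow_pattern(ip):
    """Anchored regex for munin-node's 'allow' directive.

    Literal dots in the address are escaped so the pattern matches
    only that exact IP, matching the style of the ^127\\.0\\.0\\.1$
    entry shipped in the template.
    """
    return '^{}$'.format(re.escape(ip))
```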
3899=== modified file 'templates/nrpe_check_memcached.cfg.tmpl'
3900--- templates/nrpe_check_memcached.cfg.tmpl 2013-04-17 15:19:48 +0000
3901+++ templates/nrpe_check_memcached.cfg.tmpl 2014-12-09 22:04:35 +0000
3902@@ -1,4 +1,4 @@
3903 #---------------------------------------------------
3904 # This file is Juju managed
3905 #---------------------------------------------------
3906-command[check_memcached]=/usr/local/lib/nagios/plugins/check_memcache.py -H 127.0.0.1 -p ${TCP_PORT} -c version
3907+command[check_memcached]=/usr/local/lib/nagios/plugins/check_memcache.py -H 127.0.0.1 -p {{tcp_port}} -c version
3908
3909=== modified file 'templates/nrpe_export.cfg.tmpl'
3910--- templates/nrpe_export.cfg.tmpl 2013-04-15 11:11:09 +0000
3911+++ templates/nrpe_export.cfg.tmpl 2014-12-09 22:04:35 +0000
3912@@ -3,8 +3,8 @@
3913 #---------------------------------------------------
3914 define service {
3915 use active-service
3916- host_name ${NAGIOS_HOSTNAME}
3917- service_description ${NAGIOS_HOSTNAME} memcached
3918+ host_name {{NAGIOS_HOSTNAME}}
3919+ service_description {{NAGIOS_HOSTNAME}} memcached
3920 check_command check_nrpe!check_memcached
3921- servicegroups ${NAGIOS_SERVICEGROUP},
3922+ servicegroups {{NAGIOS_SERVICEGROUP}},
3923 }
3924
3925=== added file 'test_requirements.txt'
3926--- test_requirements.txt 1970-01-01 00:00:00 +0000
3927+++ test_requirements.txt 2014-12-09 22:04:35 +0000
3928@@ -0,0 +1,7 @@
3929+nose
3930+Jinja2
3931+mock
3932+PyYAML
3933+six
3934+coverage
3935+flake8
3936
3937=== modified file 'tests/10_deploy_test.py'
3938--- tests/10_deploy_test.py 2014-02-06 18:10:07 +0000
3939+++ tests/10_deploy_test.py 2014-12-09 22:04:35 +0000
3940@@ -1,6 +1,6 @@
3941 #!/usr/bin/python3
3942
3943-## This Amulet test deploys memcached.
3944+# This Amulet test deploys memcached.
3945
3946 import amulet
3947 import telnetlib
3948@@ -9,7 +9,7 @@
3949 # The number of seconds to wait for the environment to setup.
3950 seconds = 1200
3951
3952-d = amulet.Deployment()
3953+d = amulet.Deployment(series="trusty")
3954 # Add the memcached charm to the deployment.
3955 d.add('memcached')
3956 # Add the mediawiki charm to the deployment.
3957@@ -39,7 +39,7 @@
3958 # Get the sentry for memcached.
3959 memcached_unit = d.sentry.unit['memcached/0']
3960
3961-## Test if the memcached service is running.
3962+# Test if the memcached service is running.
3963
3964 # Run the command that checks if the memcached server instance is running.
3965 command = 'service memcached status'
3966@@ -55,12 +55,22 @@
3967 print(output)
3968 print(message)
3969
3970-## Test memcached using telnet commands.
3971+# Test memcached using telnet commands.
3972
3973 # Get the public address for memcached instance.
3974 memcached_address = memcached_unit.info['public-address']
3975 # Get the port for memcached instance.
3976-memcached_port = configuration['tcp-port']
3977+memcached_port = configuration['tcp-port']
3978+
3979+try:
3980+ telnetlib.Telnet(memcached_address, memcached_port)
3981+ raise Exception(('memcached tcp port {} '
3982+ 'is not closed').format(memcached_port))
3983+except TimeoutError: # noqa this exception only available in py3
3984+ pass # this is good
3985+
3986+# open memcache to be able to connect from this machine
3987+memcached_unit.run('ufw allow {}'.format(memcached_port))
3988
3989 try:
3990 # Connect to memcached via telnet.
3991@@ -95,13 +105,13 @@
3992 amulet.raise_status(amulet.FAIL, msg=message)
3993 tn.write(b'quit\n')
3994 except Exception as e:
3995- message = 'An error occurred communicating with memcached over telnet ' \
3996- '{0}:{1} {3}'.format(memcached_address, memcached_port, str(e))
3997+ message = ('An error occurred communicating with memcached over telnet '
3998+ '{0}:{1} {2}').format(memcached_address, memcached_port, str(e))
3999 amulet.raise_status(amulet.FAIL, msg=message)
4000 finally:
4001 tn.close()
4002
4003-## Test if the memcached service is configured properly.
4004+# Test if the memcached service is configured properly.
4005
4006 # Get the contents of the memcached configuration file.
4007 config_string = memcached_unit.file_contents('/etc/memcached.conf')
4008@@ -125,7 +135,7 @@
4009 message = 'The memcached deployment was configured correctly.'
4010 print(message)
4011
4012-## Test if the relation is complete and data was exchanged properly.
4013+# Test if the relation is complete and data was exchanged properly.
4014
4015 memcached_unit = d.sentry.unit['memcached/0']
4016 # Get the relation from memcached to mediawiki.
4017@@ -134,8 +144,9 @@
4018 # Make sure the relation got the port information set by the configuration.
4019 if (configuration['tcp-port'] != int(relation['port']) or
4020 configuration['udp-port'] != int(relation['udp-port'])):
4021- message = 'The memcached relation was not configured correctly, port: ' \
4022- '{0} udp-port: {1}'.format(relation['port'], relation['udp-port'])
4023+ message = ('The memcached relation was not configured correctly, port: '
4024+ '{0} udp-port: {1}').format(relation['port'],
4025+ relation['udp-port'])
4026 amulet.raise_status(amulet.FAIL, msg=message)
4027 else:
4028 message = 'The memcached relation was configured correctly.'
4029
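The Amulet test above probes the port twice: first it expects the Telnet connection to fail (the charm firewalls memcached with ufw by default), then it runs `ufw allow` on the unit and expects the connection to succeed. The same probe can be sketched with the stdlib `socket` module (an illustrative helper, assuming a reachable host):

```python
import socket


def port_is_open(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds.

    Equivalent to the telnetlib probe in the test: a firewalled port
    times out (or is refused), an open port accepts the connection.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, ...
        return False
```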
4030=== added directory 'unit_tests'
4031=== added file 'unit_tests/__init__.py'
4032--- unit_tests/__init__.py 1970-01-01 00:00:00 +0000
4033+++ unit_tests/__init__.py 2014-12-09 22:04:35 +0000
4034@@ -0,0 +1,2 @@
4035+import sys
4036+sys.path.append('hooks/')
4037
4038=== added file 'unit_tests/test_memcached_hooks.py'
4039--- unit_tests/test_memcached_hooks.py 1970-01-01 00:00:00 +0000
4040+++ unit_tests/test_memcached_hooks.py 2014-12-09 22:04:35 +0000
4041@@ -0,0 +1,268 @@
4042+__author__ = 'Felipe Reyes <felipe.reyes@canonical.com>'
4043+
4044+import mock
4045+import os
4046+import shutil
4047+import tempfile
4048+from test_utils import CharmTestCase
4049+import memcached_hooks
4050+
4051+
4052+DOT = os.path.dirname(os.path.abspath(__file__))
4053+TO_PATCH = [
4054+ 'apt_install',
4055+ 'service_start',
4056+ 'service_stop',
4057+ 'relation_ids',
4058+ 'relation_set',
4059+ 'relation_get',
4060+ 'unit_get',
4061+ 'config',
4062+ 'log',
4063+]
4064+FREE_MEM_SMALL = """ total used free shared \
4065+buffers cached
4066+Mem: 2002 744 1257 0 82 433
4067+-/+ buffers/cache: 227 1774
4068+Swap: 0 0 0
4069+"""
4070+FREE_MEM_BIG = """ total used free shared \
4071+buffers cached
4072+Mem: 24010 13371 10639 207 484 6669
4073+-/+ buffers/cache: 6216 17793
4074+Swap: 7811 0 7811
4075+"""
4076+
4077+
4078+@mock.patch('charmhelpers.contrib.network.ufw.is_enabled', lambda: True)
4079+@mock.patch('charmhelpers.core.hookenv.log', mock.MagicMock())
4080+class TestMemcachedHooks(CharmTestCase):
4081+ def setUp(self):
4082+ self.fake_repo = {}
4083+ super(TestMemcachedHooks, self).setUp(memcached_hooks, TO_PATCH)
4084+ self.relation_get.return_value = '127.0.0.1'
4085+ self.tmpdir = tempfile.mkdtemp()
4086+ memcached_hooks.ETC_DEFAULT_MEMCACHED = os.path.join(self.tmpdir,
4087+ "memcached")
4088+ memcached_hooks.ETC_MEMCACHED_CONF = os.path.join(self.tmpdir,
4089+ "memcached.conf")
4090+
4091+ def tearDown(self):
4092+ super(TestMemcachedHooks, self).tearDown()
4093+ shutil.rmtree(self.tmpdir, ignore_errors=True)
4094+
4095+ @mock.patch('subprocess.check_output')
4096+ def test_install(self, check_output):
4097+ memcached_hooks.install()
4098+
4099+ self.apt_install.assert_called_with(["memcached", "python-cheetah",
4100+ "python-memcache"], fatal=True)
4101+ check_output.assert_any_call(['ufw', 'allow', 'ssh'])
4102+
4103+ def test_start(self):
4104+ memcached_hooks.start()
4105+ self.service_start.assert_called_with('memcached')
4106+
4107+ def test_stop(self):
4108+ memcached_hooks.stop()
4109+ self.service_stop.assert_called_with('memcached')
4110+
4111+ @mock.patch('subprocess.check_output')
4112+ @mock.patch('memcached_hooks.relation_get')
4113+ @mock.patch('memcached_utils.grant_access')
4114+ @mock.patch('memcached_utils.config')
4115+ @mock.patch('memcached_utils.log')
4116+ def test_cache_relation_joined(self, log, config, grant_access,
4117+ relation_get, check_output):
4118+ configs = {'tcp-port': '1234', 'udp-port': '3456'}
4119+
4120+ def f(c):
4121+ return configs.get(c, None)
4122+
4123+ self.config.side_effect = f
4124+ config.side_effect = f
4125+ relation_get.return_value = '127.0.1.1'
4126+ self.unit_get.return_value = '127.0.0.1'
4127+ self.relation_ids.return_value = ['cache:1', 'cache:2']
4128+ memcached_hooks.cache_relation_joined()
4129+ self.relation_set.assert_any_call('cache:1', **{'host': '127.0.0.1',
4130+ 'port': '1234',
4131+ 'udp-port': '3456'})
4132+ self.relation_set.assert_any_call('cache:2', **{'host': '127.0.0.1',
4133+ 'port': '1234',
4134+ 'udp-port': '3456'})
4135+ grant_access.assert_called_with('127.0.1.1')
4136+
4137+ @mock.patch('subprocess.check_output')
4138+ @mock.patch('memcached_hooks.relation_get')
4139+ @mock.patch('memcached_utils.config')
4140+ @mock.patch('memcached_utils.log')
4141+ @mock.patch('charmhelpers.contrib.network.ufw.revoke_access')
4142+ def test_cache_relation_departed(self, revoke_access, log, config,
4143+ relation_get, check_output):
4144+ config.return_value = 1234
4145+ relation_get.return_value = '127.0.1.1'
4146+ memcached_hooks.cache_relation_departed()
4147+
4148+ revoke_access.assert_any_call('127.0.1.1', port='1234', proto='tcp')
4149+ revoke_access.assert_any_call('127.0.1.1', port='1234', proto='udp')
4150+
4151+ @mock.patch('subprocess.Popen')
4152+ @mock.patch('subprocess.check_output')
4153+ @mock.patch('charmhelpers.core.templating.render')
4154+ @mock.patch('memcached_utils.config')
4155+ @mock.patch('memcached_utils.log')
4156+ def test_config_changed_defaults(self, log, config, render, check_output,
4157+ popen):
4158+ p = mock.Mock()
4159+ p.configure_mock(**{'communicate.return_value': ('stdout', 'stderr'),
4160+ 'returncode': 0})
4161+ popen.return_value = p
4162+
4163+ configs = self.test_config.config
4164+
4165+ def f(c):
4166+ return configs.get(c, None)
4167+
4168+ self.config.side_effect = f
4169+ config.side_effect = f
4170+
4171+ memcached_hooks.config_changed()
4172+
4173+ passed_vars = render.call_args[0][2]
4174+
4175+ self.assertEqual(passed_vars['mem_size'],
4176+ self.test_config.config['size'])
4177+
4178+ @mock.patch('subprocess.Popen')
4179+ @mock.patch('charmhelpers.core.templating.render')
4180+ @mock.patch('subprocess.check_output')
4181+ @mock.patch('memcached_utils.log')
4182+ def test_config_changed_size_set_0_small(self, log, check_output, render,
4183+ popen):
4184+ p = mock.Mock()
4185+ p.configure_mock(**{'communicate.return_value': ('stdout', 'stderr'),
4186+ 'returncode': 0})
4187+ popen.return_value = p
4188+
4189+ configs = {'size': 0}
4190+
4191+ def f(c):
4192+ return configs.get(c, None)
4193+
4194+ self.config.side_effect = f
4195+
4196+ def g(*args, **kwargs):
4197+ if args[0] == ['free', '-m']:
4198+ return FREE_MEM_SMALL
4199+ else:
4200+ return ""
4201+
4202+ check_output.side_effect = g
4203+
4204+ memcached_hooks.config_changed()
4205+ passed_vars = render.call_args[0][2]
4206+
4207+ self.assertEqual(passed_vars['mem_size'], 1596)
4208+ self.assertFalse(passed_vars['large_pages_enabled'])
4209+
4210+ @mock.patch('subprocess.Popen')
4211+ @mock.patch('charmhelpers.core.templating.render')
4212+ @mock.patch('subprocess.check_output')
4213+ @mock.patch('memcached_utils.log')
4214+ def test_config_changed_size_set_0_big(self, log, check_output, render,
4215+ popen):
4216+ p = mock.Mock()
4217+ p.configure_mock(**{'communicate.return_value': ('stdout', 'stderr'),
4218+ 'returncode': 0})
4219+ popen.return_value = p
4220+
4221+ configs = {'size': 0,
4222+ 'disable-large-pages': False}
4223+
4224+ def f(c):
4225+ return configs.get(c, None)
4226+
4227+ self.config.side_effect = f
4228+
4229+ def g(*args, **kwargs):
4230+ if args[0] == ['free', '-m']:
4231+ return FREE_MEM_BIG
4232+ if args[0] == ['sysctl', '-n', 'vm.nr_hugepages']:
4233+ return int(16013 / 2)
4234+ else:
4235+ return ""
4236+
4237+ check_output.side_effect = g
4238+
4239+ memcached_hooks.config_changed()
4240+ passed_vars = render.call_args[0][2]
4241+
4242+ self.assertEqual(passed_vars['mem_size'], 16013)
4243+ check_output.assert_any_call(['sysctl', '-w',
4244+ 'vm.nr_hugepages={}'.format(16013/2)])
4245+ self.assertTrue(passed_vars['large_pages_enabled'])
4246+
4247+ @mock.patch('subprocess.Popen')
4248+ @mock.patch('charmhelpers.core.templating.render')
4249+ @mock.patch('subprocess.check_output')
4250+ @mock.patch('memcached_utils.log')
4251+ def test_config_changed_size_failed_set_hpages(self, log, check_output,
4252+ render, popen):
4253+ p = mock.Mock()
4254+ p.configure_mock(**{'communicate.return_value': ('stdout', 'stderr'),
4255+ 'returncode': 0})
4256+ popen.return_value = p
4257+
4258+ configs = {'size': 0,
4259+ 'disable-large-pages': False}
4260+
4261+ def f(c):
4262+ return configs.get(c, None)
4263+
4264+ self.config.side_effect = f
4265+
4266+ def g(*args, **kwargs):
4267+ if args[0] == ['free', '-m']:
4268+ return FREE_MEM_BIG
4269+ if args[0] == ['sysctl', '-n', 'vm.nr_hugepages']:
4270+ return 16
4271+ else:
4272+ return ""
4273+
4274+ check_output.side_effect = g
4275+
4276+ memcached_hooks.config_changed()
4277+ passed_vars = render.call_args[0][2]
4278+
4279+ self.assertEqual(passed_vars['mem_size'], 16013)
4280+ self.assertFalse(passed_vars['large_pages_enabled'])
4281+ check_output.assert_any_call(['sysctl', '-w',
4282+ 'vm.nr_hugepages={}'.format(16013/2)])
4283+ check_output.assert_any_call(['sysctl', '-w',
4284+ 'vm.nr_hugepages=16'])
4285+
4286+ @mock.patch('os.fchown')
4287+ @mock.patch('os.chown')
4288+ @mock.patch('memcached_utils.log')
4289+ @mock.patch('subprocess.Popen')
4290+ @mock.patch('memcached_utils.config')
4291+ @mock.patch('subprocess.check_output')
4292+ @mock.patch('charmhelpers.contrib.network.ufw.enable')
4293+ @mock.patch('charmhelpers.contrib.network.ufw.service')
4294+ @mock.patch('charmhelpers.core.hookenv.charm_dir')
4295+ @mock.patch('charmhelpers.core.host.log')
4296+ def test_upgrade_charm(self, log, charm_dir, service, enable, check_output,
4297+ config, popen, *args):
4298+ p = mock.Mock()
4299+ p.configure_mock(**{'communicate.return_value': ('stdout', 'stderr'),
4300+ 'returncode': 0})
4301+ popen.return_value = p
4302+
4303+ charm_dir.return_value = os.path.join(DOT, '..')
4304+ config.return_value = 12111
4305+ memcached_hooks.upgrade_charm()
4306+ self.apt_install.assert_any_call(['memcached', 'python-cheetah',
4307+ 'python-memcache'], fatal=True)
4308+ enable.assert_called_with()
4309+ service.assert_called_with('ssh', 'open')
4310
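Several of these tests feed canned `free -m` output (FREE_MEM_SMALL, FREE_MEM_BIG) through the mocked `subprocess.check_output` so the hook can size the cache when `size` is 0. A sketch of pulling the total-memory column out of that output (hypothetical parser; the charm's actual sizing formula is not reproduced here):

```python
def total_mem_mb(free_output):
    """Extract the total-memory column (MB) from `free -m` output."""
    for line in free_output.splitlines():
        if line.startswith('Mem:'):
            # Columns: total, used, free, shared, buffers, cached
            return int(line.split()[1])
    raise ValueError('no "Mem:" line in free output')


# Same shape as the FREE_MEM_SMALL fixture used by the tests above.
FREE_MEM_SMALL = """             total       used       free     shared    buffers     cached
Mem:          2002        744       1257          0         82        433
-/+ buffers/cache:        227       1774
Swap:            0          0          0
"""
```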
4311=== added file 'unit_tests/test_utils.py'
4312--- unit_tests/test_utils.py 1970-01-01 00:00:00 +0000
4313+++ unit_tests/test_utils.py 2014-12-09 22:04:35 +0000
4314@@ -0,0 +1,122 @@
4315+# grabbed from nova-cloud-controller/unit_tests/test_utils.py
4316+import logging
4317+import unittest
4318+import os
4319+import yaml
4320+
4321+from contextlib import contextmanager
4322+from mock import patch, MagicMock
4323+
4324+
4325+def load_config():
4326+ '''
4327+ Walk backwords from __file__ looking for config.yaml, load and return the
4328+ 'options' section'
4329+ '''
4330+ config = None
4331+ f = __file__
4332+ while config is None:
4333+ d = os.path.dirname(f)
4334+ if os.path.isfile(os.path.join(d, 'config.yaml')):
4335+ config = os.path.join(d, 'config.yaml')
4336+ break
4337+ f = d
4338+
4339+ if not config:
4340+        logging.error('Could not find config.yaml in any parent directory '
4341+                      'of %s.' % f)
4342+ raise Exception
4343+
4344+ return yaml.safe_load(open(config).read())['options']
4345+
4346+
4347+def get_default_config():
4348+ '''
4349+    Load the default charm config from config.yaml and return it as a dict.
4350+ If no default is set in config.yaml, its value is None.
4351+ '''
4352+ default_config = {}
4353+ config = load_config()
4354+ for k, v in config.iteritems():
4355+ if 'default' in v:
4356+ default_config[k] = v['default']
4357+ else:
4358+ default_config[k] = None
4359+ return default_config
4360+
4361+
4362+class CharmTestCase(unittest.TestCase):
4363+
4364+ def setUp(self, obj, patches):
4365+ super(CharmTestCase, self).setUp()
4366+ self.patches = patches
4367+ self.obj = obj
4368+ self.test_config = TestConfig()
4369+ self.test_relation = TestRelation()
4370+ self.patch_all()
4371+
4372+ def patch(self, method):
4373+ _m = patch.object(self.obj, method)
4374+ mock = _m.start()
4375+ self.addCleanup(_m.stop)
4376+ return mock
4377+
4378+ def patch_all(self):
4379+ for method in self.patches:
4380+ setattr(self, method, self.patch(method))
4381+
4382+
4383+class TestConfig(object):
4384+
4385+ def __init__(self):
4386+ self.config = get_default_config()
4387+
4388+ def get(self, attr=None):
4389+ if not attr:
4390+ return self.get_all()
4391+ try:
4392+ return self.config[attr]
4393+ except KeyError:
4394+ return None
4395+
4396+ def get_all(self):
4397+ return self.config
4398+
4399+ def set(self, attr, value):
4400+ if attr not in self.config:
4401+ raise KeyError
4402+ self.config[attr] = value
4403+
4404+
4405+class TestRelation(object):
4406+
4407+ def __init__(self, relation_data={}):
4408+ self.relation_data = relation_data
4409+
4410+ def set(self, relation_data):
4411+ self.relation_data = relation_data
4412+
4413+ def get(self, attr=None, unit=None, rid=None):
4414+ if attr is None:
4415+ return self.relation_data
4416+ elif attr in self.relation_data:
4417+ return self.relation_data[attr]
4418+ return None
4419+
4420+
4421+@contextmanager
4422+def patch_open():
4423+ '''Patch open() to allow mocking both open() itself and the file that is
4424+ yielded.
4425+
4426+ Yields the mock for "open" and "file", respectively.'''
4427+ mock_open = MagicMock(spec=open)
4428+ mock_file = MagicMock(spec=file)
4429+
4430+ @contextmanager
4431+ def stub_open(*args, **kwargs):
4432+ mock_open(*args, **kwargs)
4433+ yield mock_file
4434+
4435+ with patch('__builtin__.open', stub_open):
4436+ yield mock_open, mock_file
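`CharmTestCase.patch` wraps `mock.patch.object` so that every name in `TO_PATCH` becomes a mock attribute on the test case, restored automatically via `addCleanup`. A standalone illustration of that pattern using the stdlib `unittest.mock` (the charm's tests use the external `mock` package) and a toy module, not the real `memcached_hooks`:

```python
import types
from unittest import mock

# Toy stand-in for a hooks module (hypothetical; the real tests patch
# names on memcached_hooks itself).
hooks = types.ModuleType('toy_hooks')
hooks.service_start = lambda name: None


def _start():
    # Looks up hooks.service_start at call time, so patch.object works.
    return hooks.service_start('memcached')


hooks.start = _start

# The same pattern CharmTestCase.patch uses: patch.object + .start(),
# keeping the returned mock around for assertions.
patcher = mock.patch.object(hooks, 'service_start')
service_start = patcher.start()
hooks.start()
patcher.stop()
```

The mock records the call even after the patch is undone, which is why the test-case attributes set in `patch_all` remain usable for assertions throughout a test.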
