Merge lp:~lazypower/charms/trusty/mariadb/enterprise-upgrade-option into lp:~dbart/charms/trusty/mariadb/trunk

Proposed by Charles Butler
Status: Merged
Merged at revision: 19
Proposed branch: lp:~lazypower/charms/trusty/mariadb/enterprise-upgrade-option
Merge into: lp:~dbart/charms/trusty/mariadb/trunk
Diff against target: 3229 lines (+2918/-133)
24 files modified
ENTERPRISE-LICENSE.md (+72/-0)
README.md (+13/-1)
charm-helpers.yaml (+6/-0)
config.yaml (+17/-1)
hooks/config-changed (+40/-39)
hooks/install (+0/-2)
lib/charmhelpers/core/fstab.py (+118/-0)
lib/charmhelpers/core/hookenv.py (+540/-0)
lib/charmhelpers/core/host.py (+396/-0)
lib/charmhelpers/core/services/__init__.py (+2/-0)
lib/charmhelpers/core/services/base.py (+313/-0)
lib/charmhelpers/core/services/helpers.py (+243/-0)
lib/charmhelpers/core/sysctl.py (+34/-0)
lib/charmhelpers/core/templating.py (+52/-0)
lib/charmhelpers/fetch/__init__.py (+416/-0)
lib/charmhelpers/fetch/archiveurl.py (+145/-0)
lib/charmhelpers/fetch/bzrurl.py (+54/-0)
lib/charmhelpers/fetch/giturl.py (+48/-0)
lib/charmhelpers/payload/__init__.py (+1/-0)
lib/charmhelpers/payload/archive.py (+57/-0)
lib/charmhelpers/payload/execd.py (+50/-0)
scripts/charm_helpers_sync.py (+223/-0)
tests/10-deploy-and-upgrade (+78/-0)
tests/10-deploy-test.py (+0/-90)
To merge this branch: bzr merge lp:~lazypower/charms/trusty/mariadb/enterprise-upgrade-option
Reviewer: Daniel Bartholomew
Review status: Needs Fixing
Review via email: mp+243454@code.launchpad.net

Description of the change

Adds an enterprise upgrade option, along with revised tests and charm-helpers inclusion (a precursor to additional charm-helpers cleanup in the charm).

Revision history for this message
Daniel Bartholomew (dbart) wrote :

This is looking really good, but there are a couple of things that need to be fixed before it's ready for prime time.

First off, the MariaDB Enterprise signing key needs to be imported so that installing or upgrading to MariaDB Enterprise works. The gpg ID of the enterprise key is: D324876EBE6A595F
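For example, the import could be sketched in the style of the charm's Python hooks like this (the keyserver choice and helper names are my assumptions; `fetch.add_source(source, key)` from charm-helpers performs an equivalent import when handed the key ID):

```python
from subprocess import check_call

MARIADB_ENTERPRISE_KEY = "D324876EBE6A595F"


def apt_key_import_cmd(key_id, keyserver="keyserver.ubuntu.com"):
    """Build the apt-key command that fetches a signing key by ID."""
    return ["apt-key", "adv", "--keyserver", keyserver, "--recv-keys", key_id]


def import_enterprise_key():
    """Import the MariaDB Enterprise signing key (requires root)."""
    check_call(apt_key_import_cmd(MARIADB_ENTERPRISE_KEY))
```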

Next, the MariaDB community repository needs to be disabled or removed so that it doesn't conflict with the Enterprise repository.
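One way to sketch that (a hypothetical heuristic, not the charm's current behaviour: it assumes community entries are plain `deb` lines pointing at a mariadb mirror, while enterprise lines point at code.mariadb.com):

```python
def disable_community_repo(sources_text):
    """Comment out MariaDB community 'deb' lines in a sources.list body.

    Enterprise lines (host code.mariadb.com) are left untouched, and
    lines that are already comments are left alone.
    """
    out = []
    for line in sources_text.splitlines():
        stripped = line.strip()
        if (stripped.startswith("deb")
                and "mariadb" in stripped
                and "code.mariadb" not in stripped):
            out.append("# " + line)  # disable the community entry
        else:
            out.append(line)
    return "\n".join(out)
```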

Lastly, MariaDB needs to be uninstalled and reinstalled to replace the community packages with Enterprise, using commands similar to the following:

  sudo apt-get remove mariadb-common mariadb-client

  sudo apt-get install mariadb-server mariadb-client

Looking at the above two commands, it would be much better if the MariaDB Enterprise packages simply replaced the MariaDB community packages, and I think they would if they were at the same version number. The issue is that the Enterprise packages, which go through extra tweaks and development above and beyond the community version, are usually a version behind the community packages, so I don't know if there is an easy way to do it. What I don't want to happen is for the removal of MariaDB community to also trigger the removal of some other package that depends on MariaDB. Any ideas?
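One way to answer that empirically is an apt dry run before committing to the removal: `apt-get -s remove` simulates without touching the system and prints a `Remv <pkg> [<version>]` line for each package it would take out, so any unexpected dependents show up first. A sketch (the helper names are mine):

```python
from subprocess import check_output


def parse_simulated_removals(apt_output):
    """Extract package names from `apt-get -s remove` output lines."""
    return [line.split()[1] for line in apt_output.splitlines()
            if line.startswith("Remv ")]


def packages_removed_with(*packages):
    """Dry-run a removal and report everything apt would remove."""
    out = check_output(["apt-get", "-s", "remove"] + list(packages))
    return parse_simulated_removals(out.decode("utf-8"))
```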

Thanks!

review: Needs Fixing

Preview Diff

=== added file 'ENTERPRISE-LICENSE.md'
--- ENTERPRISE-LICENSE.md 1970-01-01 00:00:00 +0000
+++ ENTERPRISE-LICENSE.md 2014-12-02 20:02:04 +0000
@@ -0,0 +1,72 @@
1# Evaluation Agreement
2
3
4Our fee-bearing, annual subscriptions for MariaDB Enterprise, MariaDB Enterprise
5 Cluster, MariaDB Galera Cluster, MariaDB MaxScale or other MariaDB products
6 (“MariaDB Subscriptions”) includes access to: (a) software; and (b) services,
7 such as software support, access to our customer portal and related product
8 documentation, and the right to receive executable binaries of our software
9 patches, updates and security fixes (“Services”).
10
11The purpose of this Evaluation Agreement is to provide you with access to a
12MariaDB Subscription on an evaluation basis (an “Evaluation Subscription”).
13Some of the differences between an Evaluation Subscription and a MariaDB
14Subscription are described below.
15
16An Evaluation Subscription for MariaDB Enterprise, MariaDB Enterprise Cluster,
17MariaDB Galera Cluster, MariaDB MaxScale entitles you to:
18
19- executable binary code
20- patches
21- bug fixes
22- security updates
23- the customer portal
24- product documentation
25- software trials for MONYog
26- limited support from MariaDB's Sales Engineering team
27
28A full, annual MariaDB Subscription for MariaDB Enterprise or MariaDB Enterprise
29 Cluster entitles you to:
30
31- executable binary code
32- patches
33- bug fixes
34- security updates
35- the customer portal
36- product documentation
37- Full access to MONYog Ultimate Monitor
38- product roadmaps
39- 24/7 help desk support from MariaDB’s Technical Support team
40- Other items that may be added at MariaDB’s discretion
41
42While access to software components of an Evaluation Subscription are subject
43to underlying applicable open source or proprietary license(s), as the case may
44be (and this Evaluation Agreement does not limit or further restrict any open
45 source license rights), access to any Services or non-open source software
46 components is for the sole purpose of evaluating and testing
47 (“Evaluation Purpose”) the suitability of a MariaDB Subscription for your own
48 use for a defined time period and support level.
49
50Your right to access an Evaluation Subscription without charge is conditioned on
51 the following: (a) you agree to the MariaDB Enterprise Terms and Conditions
52 (the “Subscription Agreement”);
53(b) you agree that this Evaluation Subscription does not grant any right or
54license, express or implied, for the use of any MariaDB or third party trade
55names, service marks or trademarks, including, without limitation, the right to
56distribute any software using any such marks; and (c) if you use our Services
57for any purpose other than Evaluation Purposes, you agree to pay MariaDB
58per-unit Subscription Fee(s) pursuant to the Subscription Agreement. Using our
59Services in ways that do not constitute an Evaluation Purpose include (but are
60 not limited to) using Services in connection with Production Purposes or third
61 parties, or as a complement or supplement to third party support services.
62 Capitalized terms not defined in this Evaluation Agreement shall have the
63 meaning provided in the Subscription Agreement, which is incorporated by
64 reference in its entirety.
65
66By using any of our Services, you affirm that you have read, understood, and
67agree to all of the terms and conditions of this Evaluation Agreement
68(including the Subscription Agreement). If you are an individual acting on
69behalf of an entity, you represent that you have the authority to enter into
70this Evaluation Agreement on behalf of that entity. If you do not accept the
71terms and conditions of this Evaluation Agreement, then you must not use any
72of our Services.
=== modified file 'README.md'
--- README.md 2014-09-26 18:37:52 +0000
+++ README.md 2014-12-02 20:02:04 +0000
@@ -32,7 +32,19 @@
     juju ssh mariadb/0
     mysql -u root -p$(sudo cat /var/lib/mysql/mysql.passwd)
 
-# Scale Out Usage
+## To upgrade from Community to Enterprise Evaluation
+
+Once you have obtained a username/password from the [MariaDB Portal](http://mariadb.com)
+there will be a repository provided for your enterprise trial installation. You can enable
+this in the charm with the following configuration:
+
+    juju set mariadb enterprise-eula=true source="deb https://username:password@code.mariadb.com/mariadb-enterprise/10.0/repo/ubuntu trusty main"
+
+This will perform an in-place binary upgrade on all the mariadb nodes from community
+edition to the Enterprise Evaluation. You must agree to all terms contained in
+`ENTERPRISE-LICENSE.md` in the charm directory
+
+# Scale Out Usage
 
 ## Replication
 
 
=== added file 'charm-helpers.yaml'
--- charm-helpers.yaml 1970-01-01 00:00:00 +0000
+++ charm-helpers.yaml 2014-12-02 20:02:04 +0000
@@ -0,0 +1,6 @@
1destination: lib/charmhelpers
2branch: lp:charm-helpers
3include:
4 - core
5 - fetch
6 - payload
=== modified file 'config.yaml'
--- config.yaml 2014-09-25 20:29:40 +0000
+++ config.yaml 2014-12-02 20:02:04 +0000
@@ -100,4 +100,20 @@
       rbd pool has been created, changing this value will not have any
       effect (although it can be changed in ceph by manually configuring
       your ceph cluster).
-
+  enterprise-eula:
+    type: boolean
+    default: false
+    description: |
+      I have read and agree to the ENTERPRISE TRIAL agreement, located
+      in ENTERPRISE-LICENSE.md located in the charm, or on the web here:
+      https://mariadb.com/about/legal/evaluation-agreement
+  source:
+    type: string
+    default: "deb http://mirror.jmu.edu/pub/mariadb/repo/10.0/ubuntu trusty main"
+    description: |
+      Repository Mirror string to install MariaDB from
+  key:
+    type: string
+    default: "0xcbcb082a1bb943db"
+    description: |
+      GPG Key used to verify apt packages.
 
=== modified file 'hooks/config-changed'
--- hooks/config-changed 2014-09-25 20:29:40 +0000
+++ hooks/config-changed 2014-12-02 20:02:04 +0000
@@ -1,6 +1,6 @@
 #!/usr/bin/python
 
-from subprocess import check_output,check_call, CalledProcessError, Popen, PIPE
+from subprocess import check_output,check_call, CalledProcessError
 import tempfile
 import json
 import re
@@ -10,6 +10,18 @@
 import platform
 from string import upper
 
+sys.path.insert(0, os.path.join(os.environ['CHARM_DIR'], 'lib'))
+
+from charmhelpers import fetch
+
+from charmhelpers.core import (
+    hookenv,
+    host,
+)
+
+log = hookenv.log
+config = hookenv.config()
+
 num_re = re.compile('^[0-9]+$')
 
 # There should be a library for this
@@ -52,7 +64,6 @@
 
 def get_memtotal():
     with open('/proc/meminfo') as meminfo_file:
-        meminfo = {}
         for line in meminfo_file:
             (key, mem) = line.split(':', 2)
             if key == 'MemTotal':
@@ -60,41 +71,31 @@
     return '%s%s' % (mtot, upper(modifier[0]))
 
 
-# There is preliminary code for mariadb, but switching
-# from mariadb -> mysql fails badly, so it is disabled for now.
-valid_flavors = ['distro']
-
-#remove_pkgs=[]
-#apt_sources = []
-#package = 'mariadb-server'
-#
-#series = check_output(['lsb_release','-cs'])
-#
-#for source in apt_sources:
-#    server = source.split('/')[0]
-#    if os.path.exists('keys/%s' % server):
-#        check_call(['apt-key','add','keys/%s' % server])
-#    else:
-#        check_call(['juju-log','-l','ERROR',
-#                    'No key for %s' % (server)])
-#        sys.exit(1)
-#    check_call(['add-apt-repository','-y','deb http://%s %s main' % (source, series)])
-#    check_call(['apt-get','update'])
-#
-#with open('/var/lib/mysql/mysql.passwd','r') as rpw:
-#    root_pass = rpw.read()
-#
-#dconf = Popen(['debconf-set-selections'], stdin=PIPE)
-#dconf.stdin.write("%s %s/root_password password %s\n" % (package, package, root_pass))
-#dconf.stdin.write("%s %s/root_password_again password %s\n" % (package, package, root_pass))
-#dconf.stdin.write("%s-5.5 mysql-server/root_password password %s\n" % (package, root_pass))
-#dconf.stdin.write("%s-5.5 mysql-server/root_password_again password %s\n" % (package, root_pass))
-#dconf.communicate()
-#dconf.wait()
-#
-#if len(remove_pkgs):
-#    check_call(['apt-get','-y','remove'] + remove_pkgs)
-#check_call(['apt-get','-y','install','-qq',package])
+
+# Enterprise MariaDB Bits
+# Clean up bintar when upgrading to enterprise
+def cleanup_bintar():
+    opath = os.path.join(os.path.sep, 'usr', 'local', 'mysql')
+    if os.path.exists(opath):
+        log("Cleaning up Bin/Tar installation", "INFO")
+        os.remove(os.path.join(os.path.sep, 'etc', 'init.d', 'mysql'))
+        npath = os.path.join(os.path.sep, 'mnt', 'mysql.old')
+        os.rename(opath, npath)
+
+source = config['source']
+accepted = config['enterprise-eula']
+
+# assumption of mariadb packages being delivered from code.mariadb
+if accepted and "code.mariadb" in source:
+    cleanup_bintar()
+    host.service_stop('mysql')
+    fetch.add_source(source, config['key'])
+    fetch.apt_update()
+
+    packages = ['mariadb-server', 'mariadb-client']
+    fetch.apt_install(packages)
+
 
 # smart-calc stuff in the configs
 dataset_bytes = human_to_bytes(configs['dataset-size'])
@@ -160,7 +161,7 @@
 # You can copy this to one of:
 # - "/etc/mysql/my.cnf" to set global options,
 # - "~/.my.cnf" to set user-specific options.
-# 
+#
 # One can use all long options that the program supports.
 # Run program with --help to get a list of available options and with
 # --print-defaults to see which it would actually understand and use.
@@ -314,7 +315,7 @@
 
 need_restart = False
 for target,content in targets.iteritems():
-    tdir = os.path.dirname(target) 
+    tdir = os.path.dirname(target)
     if len(content) == 0 and os.path.exists(target):
         os.unlink(target)
         need_restart = True
 
=== modified file 'hooks/install'
--- hooks/install 2014-11-12 18:30:06 +0000
+++ hooks/install 2014-12-02 20:02:04 +0000
@@ -173,5 +173,3 @@
 # As the last step of the install process, stop MariaDB (the start trigger
 # handles starting MariaDB).
 /etc/init.d/mysql stop
-
-
=== added directory 'lib/charmhelpers'
=== added file 'lib/charmhelpers/__init__.py'
=== added directory 'lib/charmhelpers/core'
=== added file 'lib/charmhelpers/core/__init__.py'
=== added file 'lib/charmhelpers/core/fstab.py'
--- lib/charmhelpers/core/fstab.py 1970-01-01 00:00:00 +0000
+++ lib/charmhelpers/core/fstab.py 2014-12-02 20:02:04 +0000
@@ -0,0 +1,118 @@
1#!/usr/bin/env python
2# -*- coding: utf-8 -*-
3
4__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
5
6import io
7import os
8
9
10class Fstab(io.FileIO):
11 """This class extends file in order to implement a file reader/writer
12 for file `/etc/fstab`
13 """
14
15 class Entry(object):
16 """Entry class represents a non-comment line on the `/etc/fstab` file
17 """
18 def __init__(self, device, mountpoint, filesystem,
19 options, d=0, p=0):
20 self.device = device
21 self.mountpoint = mountpoint
22 self.filesystem = filesystem
23
24 if not options:
25 options = "defaults"
26
27 self.options = options
28 self.d = int(d)
29 self.p = int(p)
30
31 def __eq__(self, o):
32 return str(self) == str(o)
33
34 def __str__(self):
35 return "{} {} {} {} {} {}".format(self.device,
36 self.mountpoint,
37 self.filesystem,
38 self.options,
39 self.d,
40 self.p)
41
42 DEFAULT_PATH = os.path.join(os.path.sep, 'etc', 'fstab')
43
44 def __init__(self, path=None):
45 if path:
46 self._path = path
47 else:
48 self._path = self.DEFAULT_PATH
49 super(Fstab, self).__init__(self._path, 'rb+')
50
51 def _hydrate_entry(self, line):
52 # NOTE: use split with no arguments to split on any
53 # whitespace including tabs
54 return Fstab.Entry(*filter(
55 lambda x: x not in ('', None),
56 line.strip("\n").split()))
57
58 @property
59 def entries(self):
60 self.seek(0)
61 for line in self.readlines():
62 line = line.decode('us-ascii')
63 try:
64 if line.strip() and not line.startswith("#"):
65 yield self._hydrate_entry(line)
66 except ValueError:
67 pass
68
69 def get_entry_by_attr(self, attr, value):
70 for entry in self.entries:
71 e_attr = getattr(entry, attr)
72 if e_attr == value:
73 return entry
74 return None
75
76 def add_entry(self, entry):
77 if self.get_entry_by_attr('device', entry.device):
78 return False
79
80 self.write((str(entry) + '\n').encode('us-ascii'))
81 self.truncate()
82 return entry
83
84 def remove_entry(self, entry):
85 self.seek(0)
86
87 lines = [l.decode('us-ascii') for l in self.readlines()]
88
89 found = False
90 for index, line in enumerate(lines):
91 if not line.startswith("#"):
92 if self._hydrate_entry(line) == entry:
93 found = True
94 break
95
96 if not found:
97 return False
98
99 lines.remove(line)
100
101 self.seek(0)
102 self.write(''.join(lines).encode('us-ascii'))
103 self.truncate()
104 return True
105
106 @classmethod
107 def remove_by_mountpoint(cls, mountpoint, path=None):
108 fstab = cls(path=path)
109 entry = fstab.get_entry_by_attr('mountpoint', mountpoint)
110 if entry:
111 return fstab.remove_entry(entry)
112 return False
113
114 @classmethod
115 def add(cls, device, mountpoint, filesystem, options=None, path=None):
116 return cls(path=path).add_entry(Fstab.Entry(device,
117 mountpoint, filesystem,
118 options=options))
=== added file 'lib/charmhelpers/core/hookenv.py'
--- lib/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000
+++ lib/charmhelpers/core/hookenv.py 2014-12-02 20:02:04 +0000
@@ -0,0 +1,540 @@
1"Interactions with the Juju environment"
2# Copyright 2013 Canonical Ltd.
3#
4# Authors:
5# Charm Helpers Developers <juju@lists.ubuntu.com>
6
7import os
8import json
9import yaml
10import subprocess
11import sys
12from subprocess import CalledProcessError
13
14import six
15if not six.PY3:
16 from UserDict import UserDict
17else:
18 from collections import UserDict
19
20CRITICAL = "CRITICAL"
21ERROR = "ERROR"
22WARNING = "WARNING"
23INFO = "INFO"
24DEBUG = "DEBUG"
25MARKER = object()
26
27cache = {}
28
29
30def cached(func):
31 """Cache return values for multiple executions of func + args
32
33 For example::
34
35 @cached
36 def unit_get(attribute):
37 pass
38
39 unit_get('test')
40
41 will cache the result of unit_get + 'test' for future calls.
42 """
43 def wrapper(*args, **kwargs):
44 global cache
45 key = str((func, args, kwargs))
46 try:
47 return cache[key]
48 except KeyError:
49 res = func(*args, **kwargs)
50 cache[key] = res
51 return res
52 return wrapper
53
54
55def flush(key):
56 """Flushes any entries from function cache where the
57 key is found in the function+args """
58 flush_list = []
59 for item in cache:
60 if key in item:
61 flush_list.append(item)
62 for item in flush_list:
63 del cache[item]
64
65
66def log(message, level=None):
67 """Write a message to the juju log"""
68 command = ['juju-log']
69 if level:
70 command += ['-l', level]
71 command += [message]
72 subprocess.call(command)
73
74
75class Serializable(UserDict):
76 """Wrapper, an object that can be serialized to yaml or json"""
77
78 def __init__(self, obj):
79 # wrap the object
80 UserDict.__init__(self)
81 self.data = obj
82
83 def __getattr__(self, attr):
84 # See if this object has attribute.
85 if attr in ("json", "yaml", "data"):
86 return self.__dict__[attr]
87 # Check for attribute in wrapped object.
88 got = getattr(self.data, attr, MARKER)
89 if got is not MARKER:
90 return got
91 # Proxy to the wrapped object via dict interface.
92 try:
93 return self.data[attr]
94 except KeyError:
95 raise AttributeError(attr)
96
97 def __getstate__(self):
98 # Pickle as a standard dictionary.
99 return self.data
100
101 def __setstate__(self, state):
102 # Unpickle into our wrapper.
103 self.data = state
104
105 def json(self):
106 """Serialize the object to json"""
107 return json.dumps(self.data)
108
109 def yaml(self):
110 """Serialize the object to yaml"""
111 return yaml.dump(self.data)
112
113
114def execution_environment():
115 """A convenient bundling of the current execution context"""
116 context = {}
117 context['conf'] = config()
118 if relation_id():
119 context['reltype'] = relation_type()
120 context['relid'] = relation_id()
121 context['rel'] = relation_get()
122 context['unit'] = local_unit()
123 context['rels'] = relations()
124 context['env'] = os.environ
125 return context
126
127
128def in_relation_hook():
129 """Determine whether we're running in a relation hook"""
130 return 'JUJU_RELATION' in os.environ
131
132
133def relation_type():
134 """The scope for the current relation hook"""
135 return os.environ.get('JUJU_RELATION', None)
136
137
138def relation_id():
139 """The relation ID for the current relation hook"""
140 return os.environ.get('JUJU_RELATION_ID', None)
141
142
143def local_unit():
144 """Local unit ID"""
145 return os.environ['JUJU_UNIT_NAME']
146
147
148def remote_unit():
149 """The remote unit for the current relation hook"""
150 return os.environ['JUJU_REMOTE_UNIT']
151
152
153def service_name():
154 """The name service group this unit belongs to"""
155 return local_unit().split('/')[0]
156
157
158def hook_name():
159 """The name of the currently executing hook"""
160 return os.path.basename(sys.argv[0])
161
162
163class Config(dict):
164 """A dictionary representation of the charm's config.yaml, with some
165 extra features:
166
167 - See which values in the dictionary have changed since the previous hook.
168 - For values that have changed, see what the previous value was.
169 - Store arbitrary data for use in a later hook.
170
171 NOTE: Do not instantiate this object directly - instead call
172 ``hookenv.config()``, which will return an instance of :class:`Config`.
173
174 Example usage::
175
176 >>> # inside a hook
177 >>> from charmhelpers.core import hookenv
178 >>> config = hookenv.config()
179 >>> config['foo']
180 'bar'
181 >>> # store a new key/value for later use
182 >>> config['mykey'] = 'myval'
183
184
185 >>> # user runs `juju set mycharm foo=baz`
186 >>> # now we're inside subsequent config-changed hook
187 >>> config = hookenv.config()
188 >>> config['foo']
189 'baz'
190 >>> # test to see if this val has changed since last hook
191 >>> config.changed('foo')
192 True
193 >>> # what was the previous value?
194 >>> config.previous('foo')
195 'bar'
196 >>> # keys/values that we add are preserved across hooks
197 >>> config['mykey']
198 'myval'
199
200 """
201 CONFIG_FILE_NAME = '.juju-persistent-config'
202
203 def __init__(self, *args, **kw):
204 super(Config, self).__init__(*args, **kw)
205 self.implicit_save = True
206 self._prev_dict = None
207 self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
208 if os.path.exists(self.path):
209 self.load_previous()
210
211 def __getitem__(self, key):
212 """For regular dict lookups, check the current juju config first,
213 then the previous (saved) copy. This ensures that user-saved values
214 will be returned by a dict lookup.
215
216 """
217 try:
218 return dict.__getitem__(self, key)
219 except KeyError:
220 return (self._prev_dict or {})[key]
221
222 def keys(self):
223 prev_keys = []
224 if self._prev_dict is not None:
225 prev_keys = self._prev_dict.keys()
226 return list(set(prev_keys + list(dict.keys(self))))
227
228 def load_previous(self, path=None):
229 """Load previous copy of config from disk.
230
231 In normal usage you don't need to call this method directly - it
232 is called automatically at object initialization.
233
234 :param path:
235
236 File path from which to load the previous config. If `None`,
237 config is loaded from the default location. If `path` is
238 specified, subsequent `save()` calls will write to the same
239 path.
240
241 """
242 self.path = path or self.path
243 with open(self.path) as f:
244 self._prev_dict = json.load(f)
245
246 def changed(self, key):
247 """Return True if the current value for this key is different from
248 the previous value.
249
250 """
251 if self._prev_dict is None:
252 return True
253 return self.previous(key) != self.get(key)
254
255 def previous(self, key):
256 """Return previous value for this key, or None if there
257 is no previous value.
258
259 """
260 if self._prev_dict:
261 return self._prev_dict.get(key)
262 return None
263
264 def save(self):
265 """Save this config to disk.
266
267 If the charm is using the :mod:`Services Framework <services.base>`
268 or :meth:'@hook <Hooks.hook>' decorator, this
269 is called automatically at the end of successful hook execution.
270 Otherwise, it should be called directly by user code.
271
272 To disable automatic saves, set ``implicit_save=False`` on this
273 instance.
274
275 """
276 if self._prev_dict:
277 for k, v in six.iteritems(self._prev_dict):
278 if k not in self:
279 self[k] = v
280 with open(self.path, 'w') as f:
281 json.dump(self, f)
282
283
284@cached
285def config(scope=None):
286 """Juju charm configuration"""
287 config_cmd_line = ['config-get']
288 if scope is not None:
289 config_cmd_line.append(scope)
290 config_cmd_line.append('--format=json')
291 try:
292 config_data = json.loads(
293 subprocess.check_output(config_cmd_line).decode('UTF-8'))
294 if scope is not None:
295 return config_data
296 return Config(config_data)
297 except ValueError:
298 return None
299
300
301@cached
302def relation_get(attribute=None, unit=None, rid=None):
303 """Get relation information"""
304 _args = ['relation-get', '--format=json']
305 if rid:
306 _args.append('-r')
307 _args.append(rid)
308 _args.append(attribute or '-')
309 if unit:
310 _args.append(unit)
311 try:
312 return json.loads(subprocess.check_output(_args).decode('UTF-8'))
313 except ValueError:
314 return None
315 except CalledProcessError as e:
316 if e.returncode == 2:
317 return None
318 raise
319
320
321def relation_set(relation_id=None, relation_settings=None, **kwargs):
322 """Set relation information for the current unit"""
323 relation_settings = relation_settings if relation_settings else {}
324 relation_cmd_line = ['relation-set']
325 if relation_id is not None:
326 relation_cmd_line.extend(('-r', relation_id))
327 for k, v in (list(relation_settings.items()) + list(kwargs.items())):
328 if v is None:
329 relation_cmd_line.append('{}='.format(k))
330 else:
331 relation_cmd_line.append('{}={}'.format(k, v))
332 subprocess.check_call(relation_cmd_line)
333 # Flush cache of any relation-gets for local unit
334 flush(local_unit())
335
336
337@cached
338def relation_ids(reltype=None):
339 """A list of relation_ids"""
340 reltype = reltype or relation_type()
341 relid_cmd_line = ['relation-ids', '--format=json']
342 if reltype is not None:
343 relid_cmd_line.append(reltype)
344 return json.loads(
345 subprocess.check_output(relid_cmd_line).decode('UTF-8')) or []
346 return []
347
348
349@cached
350def related_units(relid=None):
351 """A list of related units"""
352 relid = relid or relation_id()
353 units_cmd_line = ['relation-list', '--format=json']
354 if relid is not None:
355 units_cmd_line.extend(('-r', relid))
356 return json.loads(
357 subprocess.check_output(units_cmd_line).decode('UTF-8')) or []
358
359
360@cached
361def relation_for_unit(unit=None, rid=None):
362 """Get the json represenation of a unit's relation"""
363 unit = unit or remote_unit()
364 relation = relation_get(unit=unit, rid=rid)
365 for key in relation:
366 if key.endswith('-list'):
367 relation[key] = relation[key].split()
368 relation['__unit__'] = unit
369 return relation
370
371
372@cached
373def relations_for_id(relid=None):
374 """Get relations of a specific relation ID"""
375 relation_data = []
376 relid = relid or relation_ids()
377 for unit in related_units(relid):
378 unit_data = relation_for_unit(unit, relid)
379 unit_data['__relid__'] = relid
380 relation_data.append(unit_data)
381 return relation_data
382
383
384@cached
385def relations_of_type(reltype=None):
386 """Get relations of a specific type"""
387 relation_data = []
388 reltype = reltype or relation_type()
389 for relid in relation_ids(reltype):
390 for relation in relations_for_id(relid):
391 relation['__relid__'] = relid
392 relation_data.append(relation)
393 return relation_data
394
395
396@cached
397def relation_types():
398 """Get a list of relation types supported by this charm"""
399 charmdir = os.environ.get('CHARM_DIR', '')
400 mdf = open(os.path.join(charmdir, 'metadata.yaml'))
401 md = yaml.safe_load(mdf)
402 rel_types = []
403 for key in ('provides', 'requires', 'peers'):
404 section = md.get(key)
405 if section:
406 rel_types.extend(section.keys())
407 mdf.close()
408 return rel_types
409
410
411@cached
412def relations():
413 """Get a nested dictionary of relation data for all related units"""
414 rels = {}
415 for reltype in relation_types():
416 relids = {}
417 for relid in relation_ids(reltype):
418 units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
419 for unit in related_units(relid):
420 reldata = relation_get(unit=unit, rid=relid)
421 units[unit] = reldata
422 relids[relid] = units
423 rels[reltype] = relids
424 return rels
425
426
427@cached
428def is_relation_made(relation, keys='private-address'):
429 '''
430 Determine whether a relation is established by checking for
431 presence of key(s). If a list of keys is provided, they
432 must all be present for the relation to be identified as made
433 '''
434 if isinstance(keys, str):
435 keys = [keys]
436 for r_id in relation_ids(relation):
437 for unit in related_units(r_id):
438 context = {}
439 for k in keys:
440 context[k] = relation_get(k, rid=r_id,
441 unit=unit)
442 if None not in context.values():
443 return True
444 return False
445
446
447def open_port(port, protocol="TCP"):
448 """Open a service network port"""
449 _args = ['open-port']
450 _args.append('{}/{}'.format(port, protocol))
451 subprocess.check_call(_args)
452
453
454def close_port(port, protocol="TCP"):
455 """Close a service network port"""
456 _args = ['close-port']
457 _args.append('{}/{}'.format(port, protocol))
458 subprocess.check_call(_args)
459
460
461@cached
462def unit_get(attribute):
463 """Get the unit ID for the remote unit"""
464 _args = ['unit-get', '--format=json', attribute]
465 try:
466 return json.loads(subprocess.check_output(_args).decode('UTF-8'))
467 except ValueError:
468 return None
469
470
471def unit_private_ip():
472 """Get this unit's private IP address"""
473 return unit_get('private-address')
474
475
476class UnregisteredHookError(Exception):
477 """Raised when an undefined hook is called"""
478 pass
479
480
481class Hooks(object):
482 """A convenient handler for hook functions.
483
484 Example::
485
486 hooks = Hooks()
487
488 # register a hook, taking its name from the function name
489 @hooks.hook()
490 def install():
491 pass # your code here
492
493 # register a hook, providing a custom hook name
494 @hooks.hook("config-changed")
495 def config_changed():
496 pass # your code here
497
498 if __name__ == "__main__":
499 # execute a hook based on the name the program is called by
500 hooks.execute(sys.argv)
501 """
502
503 def __init__(self, config_save=True):
504 super(Hooks, self).__init__()
505 self._hooks = {}
506 self._config_save = config_save
507
508 def register(self, name, function):
509 """Register a hook"""
510 self._hooks[name] = function
511
512 def execute(self, args):
513 """Execute a registered hook based on args[0]"""
514 hook_name = os.path.basename(args[0])
515 if hook_name in self._hooks:
516 self._hooks[hook_name]()
517 if self._config_save:
518 cfg = config()
519 if cfg.implicit_save:
520 cfg.save()
521 else:
522 raise UnregisteredHookError(hook_name)
523
524 def hook(self, *hook_names):
525 """Decorator, registering the wrapped function as a hook"""
526 def wrapper(decorated):
527 for hook_name in hook_names:
528 self.register(hook_name, decorated)
529 else:
530 self.register(decorated.__name__, decorated)
531 if '_' in decorated.__name__:
532 self.register(
533 decorated.__name__.replace('_', '-'), decorated)
534 return decorated
535 return wrapper
536
537
538def charm_dir():
539 """Return the root directory of the current charm"""
540 return os.environ.get('CHARM_DIR')
0541
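The `Hooks` class added above dispatches on the program name: each hook is a symlink to one script, and `execute()` looks up the handler by `basename(argv[0])`. A decorated function is registered under any explicit names given, under its own `__name__`, and under a dash-separated alias when the name contains underscores. A minimal standalone sketch of that dispatch pattern (an imitation for illustration, not charm-helpers itself):

```python
import os

# Standalone imitation of the Hooks registry above: register each decorated
# function under its own name plus a dashed alias, then dispatch on the
# basename of the invoked program.
class MiniHooks:
    def __init__(self):
        self._hooks = {}

    def hook(self, *hook_names):
        def wrapper(decorated):
            for name in hook_names:
                self._hooks[name] = decorated
            # mirror the for/else in the real implementation: the function's
            # own name (and its dashed alias) is always registered as well
            self._hooks[decorated.__name__] = decorated
            if '_' in decorated.__name__:
                self._hooks[decorated.__name__.replace('_', '-')] = decorated
            return decorated
        return wrapper

    def execute(self, args):
        # hooks are symlinks to one script; the basename selects the handler
        hook_name = os.path.basename(args[0])
        if hook_name not in self._hooks:
            raise KeyError('unregistered hook: %s' % hook_name)
        self._hooks[hook_name]()


hooks = MiniHooks()
calls = []

@hooks.hook()
def config_changed():
    calls.append('config-changed')

hooks.execute(['hooks/config-changed'])
print(calls)  # ['config-changed']
```

Because `config_changed` contains an underscore, it answers to both `config_changed` and the `config-changed` filename Juju actually invokes.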
=== added file 'lib/charmhelpers/core/host.py'
--- lib/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000
+++ lib/charmhelpers/core/host.py 2014-12-02 20:02:04 +0000
@@ -0,0 +1,396 @@
1"""Tools for working with the host system"""
2# Copyright 2012 Canonical Ltd.
3#
4# Authors:
5# Nick Moffitt <nick.moffitt@canonical.com>
6# Matthew Wedgwood <matthew.wedgwood@canonical.com>
7
8import os
9import re
10import pwd
11import grp
12import random
13import string
14import subprocess
15import hashlib
16from contextlib import contextmanager
17from collections import OrderedDict
18
19import six
20
21from .hookenv import log
22from .fstab import Fstab
23
24
25def service_start(service_name):
26 """Start a system service"""
27 return service('start', service_name)
28
29
30def service_stop(service_name):
31 """Stop a system service"""
32 return service('stop', service_name)
33
34
35def service_restart(service_name):
36 """Restart a system service"""
37 return service('restart', service_name)
38
39
40def service_reload(service_name, restart_on_failure=False):
41 """Reload a system service, optionally falling back to restart if
42 reload fails"""
43 service_result = service('reload', service_name)
44 if not service_result and restart_on_failure:
45 service_result = service('restart', service_name)
46 return service_result
47
48
49def service(action, service_name):
50 """Control a system service"""
51 cmd = ['service', service_name, action]
52 return subprocess.call(cmd) == 0
53
54
55def service_running(service):
56 """Determine whether a system service is running"""
57 try:
58 output = subprocess.check_output(
59 ['service', service, 'status'],
60 stderr=subprocess.STDOUT).decode('UTF-8')
61 except subprocess.CalledProcessError:
62 return False
63 else:
64 if ("start/running" in output or "is running" in output):
65 return True
66 else:
67 return False
68
69
70def service_available(service_name):
71 """Determine whether a system service is available"""
72 try:
73 subprocess.check_output(
74 ['service', service_name, 'status'],
75 stderr=subprocess.STDOUT).decode('UTF-8')
76 except subprocess.CalledProcessError as e:
77 return 'unrecognized service' not in e.output
78 else:
79 return True
80
81
82def adduser(username, password=None, shell='/bin/bash', system_user=False):
83 """Add a user to the system"""
84 try:
85 user_info = pwd.getpwnam(username)
86 log('user {0} already exists!'.format(username))
87 except KeyError:
88 log('creating user {0}'.format(username))
89 cmd = ['useradd']
90 if system_user or password is None:
91 cmd.append('--system')
92 else:
93 cmd.extend([
94 '--create-home',
95 '--shell', shell,
96 '--password', password,
97 ])
98 cmd.append(username)
99 subprocess.check_call(cmd)
100 user_info = pwd.getpwnam(username)
101 return user_info
102
103
104def add_user_to_group(username, group):
105 """Add a user to a group"""
106 cmd = [
107 'gpasswd', '-a',
108 username,
109 group
110 ]
111 log("Adding user {} to group {}".format(username, group))
112 subprocess.check_call(cmd)
113
114
115def rsync(from_path, to_path, flags='-r', options=None):
116 """Replicate the contents of a path"""
117 options = options or ['--delete', '--executability']
118 cmd = ['/usr/bin/rsync', flags]
119 cmd.extend(options)
120 cmd.append(from_path)
121 cmd.append(to_path)
122 log(" ".join(cmd))
123 return subprocess.check_output(cmd).decode('UTF-8').strip()
124
125
126def symlink(source, destination):
127 """Create a symbolic link"""
128 log("Symlinking {} as {}".format(source, destination))
129 cmd = [
130 'ln',
131 '-sf',
132 source,
133 destination,
134 ]
135 subprocess.check_call(cmd)
136
137
138def mkdir(path, owner='root', group='root', perms=0o555, force=False):
139 """Create a directory"""
140 log("Making dir {} {}:{} {:o}".format(path, owner, group,
141 perms))
142 uid = pwd.getpwnam(owner).pw_uid
143 gid = grp.getgrnam(group).gr_gid
144 realpath = os.path.abspath(path)
145 if os.path.exists(realpath):
146 if force and not os.path.isdir(realpath):
147 log("Removing non-directory file {} prior to mkdir()".format(path))
148 os.unlink(realpath)
149 else:
150 os.makedirs(realpath, perms)
151 os.chown(realpath, uid, gid)
152
153
154def write_file(path, content, owner='root', group='root', perms=0o444):
155 """Create or overwrite a file with the contents of a string"""
156 log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
157 uid = pwd.getpwnam(owner).pw_uid
158 gid = grp.getgrnam(group).gr_gid
159 with open(path, 'w') as target:
160 os.fchown(target.fileno(), uid, gid)
161 os.fchmod(target.fileno(), perms)
162 target.write(content)
163
164
165def fstab_remove(mp):
166 """Remove the given mountpoint entry from /etc/fstab
167 """
168 return Fstab.remove_by_mountpoint(mp)
169
170
171def fstab_add(dev, mp, fs, options=None):
172 """Adds the given device entry to the /etc/fstab file
173 """
174 return Fstab.add(dev, mp, fs, options=options)
175
176
177def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"):
178 """Mount a filesystem at a particular mountpoint"""
179 cmd_args = ['mount']
180 if options is not None:
181 cmd_args.extend(['-o', options])
182 cmd_args.extend([device, mountpoint])
183 try:
184 subprocess.check_output(cmd_args)
185 except subprocess.CalledProcessError as e:
186 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
187 return False
188
189 if persist:
190 return fstab_add(device, mountpoint, filesystem, options=options)
191 return True
192
193
194def umount(mountpoint, persist=False):
195 """Unmount a filesystem"""
196 cmd_args = ['umount', mountpoint]
197 try:
198 subprocess.check_output(cmd_args)
199 except subprocess.CalledProcessError as e:
200 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
201 return False
202
203 if persist:
204 return fstab_remove(mountpoint)
205 return True
206
207
208def mounts():
209 """Get a list of all mounted volumes as [[mountpoint,device],[...]]"""
210 with open('/proc/mounts') as f:
211 # [['/mount/point','/dev/path'],[...]]
212 system_mounts = [m[1::-1] for m in [l.strip().split()
213 for l in f.readlines()]]
214 return system_mounts
215
216
217def file_hash(path, hash_type='md5'):
218 """
219 Generate a hash checksum of the contents of 'path' or None if not found.
220
221 :param str hash_type: Any hash algorithm supported by :mod:`hashlib`,
222 such as md5, sha1, sha256, sha512, etc.
223 """
224 if os.path.exists(path):
225 h = getattr(hashlib, hash_type)()
226 with open(path, 'rb') as source:
227 h.update(source.read())
228 return h.hexdigest()
229 else:
230 return None
231
232
233def check_hash(path, checksum, hash_type='md5'):
234 """
235 Validate a file using a cryptographic checksum.
236
237 :param str checksum: Value of the checksum used to validate the file.
238 :param str hash_type: Hash algorithm used to generate `checksum`.
239 Can be any hash algorithm supported by :mod:`hashlib`,
240 such as md5, sha1, sha256, sha512, etc.
241 :raises ChecksumError: If the file fails the checksum
242
243 """
244 actual_checksum = file_hash(path, hash_type)
245 if checksum != actual_checksum:
246 raise ChecksumError("'%s' != '%s'" % (checksum, actual_checksum))
247
248
249class ChecksumError(ValueError):
250 pass
251
252
253def restart_on_change(restart_map, stopstart=False):
254 """Restart services based on configuration files changing
255
256 This function is used as a decorator, for example::
257
258 @restart_on_change({
259 '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
260 })
261 def ceph_client_changed():
262 pass # your code here
263
264 In this example, the cinder-api and cinder-volume services
265 would be restarted if /etc/ceph/ceph.conf is changed by the
266 ceph_client_changed function.
267 """
268 def wrap(f):
269 def wrapped_f(*args):
270 checksums = {}
271 for path in restart_map:
272 checksums[path] = file_hash(path)
273 f(*args)
274 restarts = []
275 for path in restart_map:
276 if checksums[path] != file_hash(path):
277 restarts += restart_map[path]
278 services_list = list(OrderedDict.fromkeys(restarts))
279 if not stopstart:
280 for service_name in services_list:
281 service('restart', service_name)
282 else:
283 for action in ['stop', 'start']:
284 for service_name in services_list:
285 service(action, service_name)
286 return wrapped_f
287 return wrap
288
289
290def lsb_release():
291 """Return /etc/lsb-release in a dict"""
292 d = {}
293 with open('/etc/lsb-release', 'r') as lsb:
294 for l in lsb:
295 k, v = l.split('=')
296 d[k.strip()] = v.strip()
297 return d
298
299
300def pwgen(length=None):
301 """Generate a random password."""
302 if length is None:
303 length = random.choice(range(35, 45))
304 alphanumeric_chars = [
305 l for l in (string.ascii_letters + string.digits)
306 if l not in 'l0QD1vAEIOUaeiou']
307 random_chars = [
308 random.choice(alphanumeric_chars) for _ in range(length)]
309 return(''.join(random_chars))
310
311
312def list_nics(nic_type):
313 '''Return a list of nics of given type(s)'''
314 if isinstance(nic_type, six.string_types):
315 int_types = [nic_type]
316 else:
317 int_types = nic_type
318 interfaces = []
319 for int_type in int_types:
320 cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
321 ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
322 ip_output = (line for line in ip_output if line)
323 for line in ip_output:
324 if line.split()[1].startswith(int_type):
325 matched = re.search('.*: (bond[0-9]+\.[0-9]+)@.*', line)
326 if matched:
327 interface = matched.groups()[0]
328 else:
329 interface = line.split()[1].replace(":", "")
330 interfaces.append(interface)
331
332 return interfaces
333
334
335def set_nic_mtu(nic, mtu):
336 '''Set MTU on a network interface'''
337 cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
338 subprocess.check_call(cmd)
339
340
341def get_nic_mtu(nic):
342 cmd = ['ip', 'addr', 'show', nic]
343 ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
344 mtu = ""
345 for line in ip_output:
346 words = line.split()
347 if 'mtu' in words:
348 mtu = words[words.index("mtu") + 1]
349 return mtu
350
351
352def get_nic_hwaddr(nic):
353 cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
354 ip_output = subprocess.check_output(cmd).decode('UTF-8')
355 hwaddr = ""
356 words = ip_output.split()
357 if 'link/ether' in words:
358 hwaddr = words[words.index('link/ether') + 1]
359 return hwaddr
360
361
362def cmp_pkgrevno(package, revno, pkgcache=None):
363 '''Compare supplied revno with the revno of the installed package
364
365 * 1 => Installed revno is greater than supplied arg
366 * 0 => Installed revno is the same as supplied arg
367 * -1 => Installed revno is less than supplied arg
368
369 '''
370 import apt_pkg
371 from charmhelpers.fetch import apt_cache
372 if not pkgcache:
373 pkgcache = apt_cache()
374 pkg = pkgcache[package]
375 return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
376
377
378@contextmanager
379def chdir(d):
380 cur = os.getcwd()
381 try:
382 yield os.chdir(d)
383 finally:
384 os.chdir(cur)
385
386
387def chownr(path, owner, group):
388 uid = pwd.getpwnam(owner).pw_uid
389 gid = grp.getgrnam(group).gr_gid
390
391 for root, dirs, files in os.walk(path):
392 for name in dirs + files:
393 full = os.path.join(root, name)
394 broken_symlink = os.path.lexists(full) and not os.path.exists(full)
395 if not broken_symlink:
396 os.chown(full, uid, gid)
0397
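The `restart_on_change` decorator added in host.py works by checksumming each watched file before and after the wrapped hook runs; only the services mapped to a file whose hash actually changed are restarted, de-duplicated in order. A self-contained sketch of that diffing step (hypothetical helper names; charm-helpers is not imported):

```python
import hashlib
import os
import tempfile

def file_md5(path):
    # same idea as file_hash() in the diff: hash the file contents
    with open(path, 'rb') as source:
        return hashlib.md5(source.read()).hexdigest()

def services_to_restart(restart_map, mutate):
    """Run `mutate` and return the services whose config files it changed."""
    before = {path: file_md5(path) for path in restart_map}
    mutate()
    restarts = []
    for path in restart_map:
        if before[path] != file_md5(path):
            restarts += restart_map[path]
    # de-duplicate while preserving order, as the decorator does with
    # OrderedDict.fromkeys()
    return list(dict.fromkeys(restarts))

# demo: one watched file changes, the other does not
tmp = tempfile.mkdtemp()
ceph = os.path.join(tmp, 'ceph.conf')
other = os.path.join(tmp, 'other.conf')
for p in (ceph, other):
    with open(p, 'w') as f:
        f.write('original\n')

def mutate():
    with open(ceph, 'w') as f:
        f.write('changed\n')

result = services_to_restart(
    {ceph: ['cinder-api', 'cinder-volume'], other: ['nova-compute']},
    mutate,
)
print(result)  # ['cinder-api', 'cinder-volume']
```

`nova-compute` is left alone because its config file's checksum did not change across the wrapped call.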
=== added directory 'lib/charmhelpers/core/services'
=== added file 'lib/charmhelpers/core/services/__init__.py'
--- lib/charmhelpers/core/services/__init__.py 1970-01-01 00:00:00 +0000
+++ lib/charmhelpers/core/services/__init__.py 2014-12-02 20:02:04 +0000
@@ -0,0 +1,2 @@
1from .base import * # NOQA
2from .helpers import * # NOQA
03
=== added file 'lib/charmhelpers/core/services/base.py'
--- lib/charmhelpers/core/services/base.py 1970-01-01 00:00:00 +0000
+++ lib/charmhelpers/core/services/base.py 2014-12-02 20:02:04 +0000
@@ -0,0 +1,313 @@
1import os
2import re
3import json
4from collections import Iterable
5
6from charmhelpers.core import host
7from charmhelpers.core import hookenv
8
9
10__all__ = ['ServiceManager', 'ManagerCallback',
11 'PortManagerCallback', 'open_ports', 'close_ports', 'manage_ports',
12 'service_restart', 'service_stop']
13
14
15class ServiceManager(object):
16 def __init__(self, services=None):
17 """
18 Register a list of services, given their definitions.
19
20 Service definitions are dicts in the following formats (all keys except
21 'service' are optional)::
22
23 {
24 "service": <service name>,
25 "required_data": <list of required data contexts>,
26 "provided_data": <list of provided data contexts>,
27 "data_ready": <one or more callbacks>,
28 "data_lost": <one or more callbacks>,
29 "start": <one or more callbacks>,
30 "stop": <one or more callbacks>,
31 "ports": <list of ports to manage>,
32 }
33
34 The 'required_data' list should contain dicts of required data (or
35 dependency managers that act like dicts and know how to collect the data).
36 Only when all items in the 'required_data' list are populated are the list
37 of 'data_ready' and 'start' callbacks executed. See `is_ready()` for more
38 information.
39
40 The 'provided_data' list should contain relation data providers, most likely
41 a subclass of :class:`charmhelpers.core.services.helpers.RelationContext`,
42 that will indicate a set of data to set on a given relation.
43
44 The 'data_ready' value should be either a single callback, or a list of
45 callbacks, to be called when all items in 'required_data' pass `is_ready()`.
46 Each callback will be called with the service name as the only parameter.
47 After all of the 'data_ready' callbacks are called, the 'start' callbacks
48 are fired.
49
50 The 'data_lost' value should be either a single callback, or a list of
51 callbacks, to be called when a 'required_data' item no longer passes
52 `is_ready()`. Each callback will be called with the service name as the
53 only parameter. After all of the 'data_lost' callbacks are called,
54 the 'stop' callbacks are fired.
55
56 The 'start' value should be either a single callback, or a list of
57 callbacks, to be called when starting the service, after the 'data_ready'
58 callbacks are complete. Each callback will be called with the service
59 name as the only parameter. This defaults to
60 `[host.service_start, services.open_ports]`.
61
62 The 'stop' value should be either a single callback, or a list of
63 callbacks, to be called when stopping the service. If the service is
64 being stopped because it no longer has all of its 'required_data', this
65 will be called after all of the 'data_lost' callbacks are complete.
66 Each callback will be called with the service name as the only parameter.
67 This defaults to `[services.close_ports, host.service_stop]`.
68
69 The 'ports' value should be a list of ports to manage. The default
70 'start' handler will open the ports after the service is started,
71 and the default 'stop' handler will close the ports prior to stopping
72 the service.
73
74
75 Examples:
76
77 The following registers an Upstart service called bingod that depends on
78 a mongodb relation and which runs a custom `db_migrate` function prior to
79 restarting the service, and a Runit service called spadesd::
80
81 manager = services.ServiceManager([
82 {
83 'service': 'bingod',
84 'ports': [80, 443],
85 'required_data': [MongoRelation(), config(), {'my': 'data'}],
86 'data_ready': [
87 services.template(source='bingod.conf'),
88 services.template(source='bingod.ini',
89 target='/etc/bingod.ini',
90 owner='bingo', perms=0400),
91 ],
92 },
93 {
94 'service': 'spadesd',
95 'data_ready': services.template(source='spadesd_run.j2',
96 target='/etc/sv/spadesd/run',
97 perms=0555),
98 'start': runit_start,
99 'stop': runit_stop,
100 },
101 ])
102 manager.manage()
103 """
104 self._ready_file = os.path.join(hookenv.charm_dir(), 'READY-SERVICES.json')
105 self._ready = None
106 self.services = {}
107 for service in services or []:
108 service_name = service['service']
109 self.services[service_name] = service
110
111 def manage(self):
112 """
113 Handle the current hook by doing The Right Thing with the registered services.
114 """
115 hook_name = hookenv.hook_name()
116 if hook_name == 'stop':
117 self.stop_services()
118 else:
119 self.provide_data()
120 self.reconfigure_services()
121 cfg = hookenv.config()
122 if cfg.implicit_save:
123 cfg.save()
124
125 def provide_data(self):
126 """
127 Set the relation data for each provider in the ``provided_data`` list.
128
129 A provider must have a `name` attribute, which indicates which relation
130 to set data on, and a `provide_data()` method, which returns a dict of
131 data to set.
132 """
133 hook_name = hookenv.hook_name()
134 for service in self.services.values():
135 for provider in service.get('provided_data', []):
136 if re.match(r'{}-relation-(joined|changed)'.format(provider.name), hook_name):
137 data = provider.provide_data()
138 _ready = provider._is_ready(data) if hasattr(provider, '_is_ready') else data
139 if _ready:
140 hookenv.relation_set(None, data)
141
142 def reconfigure_services(self, *service_names):
143 """
144 Update all files for one or more registered services, and,
145 if ready, optionally restart them.
146
147 If no service names are given, reconfigures all registered services.
148 """
149 for service_name in service_names or self.services.keys():
150 if self.is_ready(service_name):
151 self.fire_event('data_ready', service_name)
152 self.fire_event('start', service_name, default=[
153 service_restart,
154 manage_ports])
155 self.save_ready(service_name)
156 else:
157 if self.was_ready(service_name):
158 self.fire_event('data_lost', service_name)
159 self.fire_event('stop', service_name, default=[
160 manage_ports,
161 service_stop])
162 self.save_lost(service_name)
163
164 def stop_services(self, *service_names):
165 """
166 Stop one or more registered services, by name.
167
168 If no service names are given, stops all registered services.
169 """
170 for service_name in service_names or self.services.keys():
171 self.fire_event('stop', service_name, default=[
172 manage_ports,
173 service_stop])
174
175 def get_service(self, service_name):
176 """
177 Given the name of a registered service, return its service definition.
178 """
179 service = self.services.get(service_name)
180 if not service:
181 raise KeyError('Service not registered: %s' % service_name)
182 return service
183
184 def fire_event(self, event_name, service_name, default=None):
185 """
186 Fire a data_ready, data_lost, start, or stop event on a given service.
187 """
188 service = self.get_service(service_name)
189 callbacks = service.get(event_name, default)
190 if not callbacks:
191 return
192 if not isinstance(callbacks, Iterable):
193 callbacks = [callbacks]
194 for callback in callbacks:
195 if isinstance(callback, ManagerCallback):
196 callback(self, service_name, event_name)
197 else:
198 callback(service_name)
199
200 def is_ready(self, service_name):
201 """
202 Determine if a registered service is ready, by checking its 'required_data'.
203
204 A 'required_data' item can be any mapping type, and is considered ready
205 if `bool(item)` evaluates as True.
206 """
207 service = self.get_service(service_name)
208 reqs = service.get('required_data', [])
209 return all(bool(req) for req in reqs)
210
211 def _load_ready_file(self):
212 if self._ready is not None:
213 return
214 if os.path.exists(self._ready_file):
215 with open(self._ready_file) as fp:
216 self._ready = set(json.load(fp))
217 else:
218 self._ready = set()
219
220 def _save_ready_file(self):
221 if self._ready is None:
222 return
223 with open(self._ready_file, 'w') as fp:
224 json.dump(list(self._ready), fp)
225
226 def save_ready(self, service_name):
227 """
228 Save an indicator that the given service is now data_ready.
229 """
230 self._load_ready_file()
231 self._ready.add(service_name)
232 self._save_ready_file()
233
234 def save_lost(self, service_name):
235 """
236 Save an indicator that the given service is no longer data_ready.
237 """
238 self._load_ready_file()
239 self._ready.discard(service_name)
240 self._save_ready_file()
241
242 def was_ready(self, service_name):
243 """
244 Determine if the given service was previously data_ready.
245 """
246 self._load_ready_file()
247 return service_name in self._ready
248
249
250class ManagerCallback(object):
251 """
252 Special case of a callback that takes the `ServiceManager` instance
253 in addition to the service name.
254
255 Subclasses should implement `__call__` which should accept three parameters:
256
257 * `manager` The `ServiceManager` instance
258 * `service_name` The name of the service it's being triggered for
259 * `event_name` The name of the event that this callback is handling
260 """
261 def __call__(self, manager, service_name, event_name):
262 raise NotImplementedError()
263
264
265class PortManagerCallback(ManagerCallback):
266 """
267 Callback class that will open or close ports, for use as either
268 a start or stop action.
269 """
270 def __call__(self, manager, service_name, event_name):
271 service = manager.get_service(service_name)
272 new_ports = service.get('ports', [])
273 port_file = os.path.join(hookenv.charm_dir(), '.{}.ports'.format(service_name))
274 if os.path.exists(port_file):
275 with open(port_file) as fp:
276 old_ports = fp.read().split(',')
277 for old_port in old_ports:
278 if bool(old_port):
279 old_port = int(old_port)
280 if old_port not in new_ports:
281 hookenv.close_port(old_port)
282 with open(port_file, 'w') as fp:
283 fp.write(','.join(str(port) for port in new_ports))
284 for port in new_ports:
285 if event_name == 'start':
286 hookenv.open_port(port)
287 elif event_name == 'stop':
288 hookenv.close_port(port)
289
290
291def service_stop(service_name):
292 """
293 Wrapper around host.service_stop to prevent spurious "unknown service"
294 messages in the logs.
295 """
296 if host.service_running(service_name):
297 host.service_stop(service_name)
298
299
300def service_restart(service_name):
301 """
302 Wrapper around host.service_restart to prevent spurious "unknown service"
303 messages in the logs.
304 """
305 if host.service_available(service_name):
306 if host.service_running(service_name):
307 host.service_restart(service_name)
308 else:
309 host.service_start(service_name)
310
311
312# Convenience aliases
313open_ports = close_ports = manage_ports = PortManagerCallback()
0314
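`PortManagerCallback` above remembers previously opened ports in a `.{service}.ports` file so that ports dropped from the service definition get closed on the next start event. The pure bookkeeping can be sketched on its own, with `hookenv.open_port`/`close_port` left out (standalone illustration, not the charm-helpers code):

```python
# Given the comma-separated contents of the remembered ".<service>.ports"
# file and the current 'ports' list from the service definition, work out
# which ports must be closed and what the file should be rewritten to.
def reconcile_ports(old_contents, new_ports):
    old_ports = [int(p) for p in old_contents.split(',') if p]
    to_close = [p for p in old_ports if p not in new_ports]
    new_contents = ','.join(str(p) for p in new_ports)
    return to_close, new_contents

print(reconcile_ports('80,443', [443, 8080]))  # ([80], '443,8080')
```

Port 80 was opened on a previous run but is no longer declared, so it is closed; 443 is kept and 8080 will be newly opened by the start handler.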
=== added file 'lib/charmhelpers/core/services/helpers.py'
--- lib/charmhelpers/core/services/helpers.py 1970-01-01 00:00:00 +0000
+++ lib/charmhelpers/core/services/helpers.py 2014-12-02 20:02:04 +0000
@@ -0,0 +1,243 @@
1import os
2import yaml
3from charmhelpers.core import hookenv
4from charmhelpers.core import templating
5
6from charmhelpers.core.services.base import ManagerCallback
7
8
9__all__ = ['RelationContext', 'TemplateCallback',
10 'render_template', 'template']
11
12
13class RelationContext(dict):
14 """
15 Base class for a context generator that gets relation data from juju.
16
17 Subclasses must provide the attributes `name`, which is the name of the
18 interface of interest, `interface`, which is the type of the interface of
19 interest, and `required_keys`, which is the set of keys required for the
20 relation to be considered complete. The data for all interfaces matching
21 the `name` attribute that are complete will be used to populate the dictionary
22 values (see `get_data`, below).
23
24 The generated context will be namespaced under the relation :attr:`name`,
25 to prevent potential naming conflicts.
26
27 :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
28 :param list additional_required_keys: Extend the list of :attr:`required_keys`
29 """
30 name = None
31 interface = None
32 required_keys = []
33
34 def __init__(self, name=None, additional_required_keys=None):
35 if name is not None:
36 self.name = name
37 if additional_required_keys is not None:
38 self.required_keys.extend(additional_required_keys)
39 self.get_data()
40
41 def __bool__(self):
42 """
43 Returns True if all of the required_keys are available.
44 """
45 return self.is_ready()
46
47 __nonzero__ = __bool__
48
49 def __repr__(self):
50 return super(RelationContext, self).__repr__()
51
52 def is_ready(self):
53 """
54 Returns True if all of the `required_keys` are available from any units.
55 """
56 ready = len(self.get(self.name, [])) > 0
57 if not ready:
58 hookenv.log('Incomplete relation: {}'.format(self.__class__.__name__), hookenv.DEBUG)
59 return ready
60
61 def _is_ready(self, unit_data):
62 """
63 Helper method that tests a set of relation data and returns True if
64 all of the `required_keys` are present.
65 """
66 return set(unit_data.keys()).issuperset(set(self.required_keys))
67
68 def get_data(self):
69 """
70 Retrieve the relation data for each unit involved in a relation and,
71 if complete, store it in a list under `self[self.name]`. This
72 is automatically called when the RelationContext is instantiated.
73
74 The units are sorted lexicographically first by the service ID, then by
75 the unit ID. Thus, if an interface has two other services, 'db:1'
76 and 'db:2', with 'db:1' having two units, 'wordpress/0' and 'wordpress/1',
77 and 'db:2' having one unit, 'mediawiki/0', all of which have a complete
78 set of data, the relation data for the units will be stored in the
79 order: 'wordpress/0', 'wordpress/1', 'mediawiki/0'.
80
81 If you only care about a single unit on the relation, you can just
82 access it as `{{ interface[0]['key'] }}`. However, if you can at all
83 support multiple units on a relation, you should iterate over the list,
84 like::
85
86 {% for unit in interface -%}
87 {{ unit['key'] }}{% if not loop.last %},{% endif %}
88 {%- endfor %}
89
90 Note that since all sets of relation data from all related services and
91 units are in a single list, if you need to know which service or unit a
92 set of data came from, you'll need to extend this class to preserve
93 that information.
94 """
95 if not hookenv.relation_ids(self.name):
96 return
97
98 ns = self.setdefault(self.name, [])
99 for rid in sorted(hookenv.relation_ids(self.name)):
100 for unit in sorted(hookenv.related_units(rid)):
101 reldata = hookenv.relation_get(rid=rid, unit=unit)
102 if self._is_ready(reldata):
103 ns.append(reldata)
104
105 def provide_data(self):
106 """
107 Return data to be relation_set for this interface.
108 """
109 return {}
110
111
112class MysqlRelation(RelationContext):
113 """
114 Relation context for the `mysql` interface.
115
116 :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
117 :param list additional_required_keys: Extend the list of :attr:`required_keys`
118 """
119 name = 'db'
120 interface = 'mysql'
121 required_keys = ['host', 'user', 'password', 'database']
122
123
124class HttpRelation(RelationContext):
125 """
126 Relation context for the `http` interface.
127
128 :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
129 :param list additional_required_keys: Extend the list of :attr:`required_keys`
130 """
131 name = 'website'
132 interface = 'http'
133 required_keys = ['host', 'port']
134
135 def provide_data(self):
136 return {
137 'host': hookenv.unit_get('private-address'),
138 'port': 80,
139 }
140
141
142class RequiredConfig(dict):
143 """
144 Data context that loads config options with one or more mandatory options.
145
146 Once the required options have been changed from their default values, all
147 config options will be available, namespaced under `config` to prevent
148 potential naming conflicts (for example, between a config option and a
149 relation property).
150
151 :param list *args: List of options that must be changed from their default values.
152 """
153
154 def __init__(self, *args):
155 self.required_options = args
156 self['config'] = hookenv.config()
157 with open(os.path.join(hookenv.charm_dir(), 'config.yaml')) as fp:
158 self.config = yaml.load(fp).get('options', {})
159
160 def __bool__(self):
161 for option in self.required_options:
162 if option not in self['config']:
163 return False
164 current_value = self['config'][option]
165 default_value = self.config[option].get('default')
166 if current_value == default_value:
167 return False
168 if current_value in (None, '') and default_value in (None, ''):
169 return False
170 return True
171
172 def __nonzero__(self):
173 return self.__bool__()
174
175
176class StoredContext(dict):
177 """
178 A data context that always returns the data that it was first created with.
179
180 This is useful to do a one-time generation of things like passwords, that
181 will thereafter use the same value that was originally generated, instead
182 of generating a new value each time it is run.
183 """
184 def __init__(self, file_name, config_data):
185 """
186 If the file exists, populate `self` with the data from the file.
187 Otherwise, populate with the given data and persist it to the file.
188 """
189 if os.path.exists(file_name):
190 self.update(self.read_context(file_name))
191 else:
192 self.store_context(file_name, config_data)
193 self.update(config_data)
194
195 def store_context(self, file_name, config_data):
196 if not os.path.isabs(file_name):
197 file_name = os.path.join(hookenv.charm_dir(), file_name)
198 with open(file_name, 'w') as file_stream:
199 os.fchmod(file_stream.fileno(), 0o600)
200 yaml.dump(config_data, file_stream)
201
202 def read_context(self, file_name):
203 if not os.path.isabs(file_name):
204 file_name = os.path.join(hookenv.charm_dir(), file_name)
205 with open(file_name, 'r') as file_stream:
206 data = yaml.load(file_stream)
207 if not data:
208 raise OSError("%s is empty" % file_name)
209 return data
210
211
212class TemplateCallback(ManagerCallback):
213 """
214 Callback class that will render a Jinja2 template, for use as a ready
215 action.
216
217 :param str source: The template source file, relative to
218 `$CHARM_DIR/templates`
219
220 :param str target: The target to write the rendered template to
221 :param str owner: The owner of the rendered file
222 :param str group: The group of the rendered file
223 :param int perms: The permissions of the rendered file
224 """
225 def __init__(self, source, target,
226 owner='root', group='root', perms=0o444):
227 self.source = source
228 self.target = target
229 self.owner = owner
230 self.group = group
231 self.perms = perms
232
233 def __call__(self, manager, service_name, event_name):
234 service = manager.get_service(service_name)
235 context = {}
236 for ctx in service.get('required_data', []):
237 context.update(ctx)
238 templating.render(self.source, self.target, context,
239 self.owner, self.group, self.perms)
240
241
242# Convenience aliases for templates
243render_template = template = TemplateCallback
0244
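The readiness test used by `RelationContext._is_ready()` above is a plain superset check: a remote unit's relation data counts as complete once every key in `required_keys` is present, and extra keys are ignored. Standalone illustration using the `MysqlRelation` keys from the diff:

```python
# Superset check as in RelationContext._is_ready(): a unit's relation data
# is complete once it carries at least every required key.
required_keys = ['host', 'user', 'password', 'database']

def unit_is_ready(unit_data):
    return set(unit_data.keys()).issuperset(set(required_keys))

complete = {'host': '10.0.0.2', 'user': 'wp', 'password': 's3cret',
            'database': 'wordpress', 'extra': 'ignored'}
partial = {'host': '10.0.0.2', 'user': 'wp'}

print(unit_is_ready(complete), unit_is_ready(partial))  # True False
```

Only units passing this test are appended to `self[self.name]` in `get_data()`, which is why an incomplete relation simply yields an empty list and `bool(context)` stays False.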
=== added file 'lib/charmhelpers/core/sysctl.py'
--- lib/charmhelpers/core/sysctl.py 1970-01-01 00:00:00 +0000
+++ lib/charmhelpers/core/sysctl.py 2014-12-02 20:02:04 +0000
@@ -0,0 +1,34 @@
1#!/usr/bin/env python
2# -*- coding: utf-8 -*-
3
4__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
5
6import yaml
7
8from subprocess import check_call
9
10from charmhelpers.core.hookenv import (
11 log,
12 DEBUG,
13)
14
15
16def create(sysctl_dict, sysctl_file):
17 """Creates a sysctl.conf file from a YAML associative array
18
19 :param sysctl_dict: a dict of sysctl options eg { 'kernel.max_pid': 1337 }
20 :type sysctl_dict: dict
21 :param sysctl_file: path to the sysctl file to be saved
22 :type sysctl_file: str or unicode
23 :returns: None
24 """
25 sysctl_dict = yaml.load(sysctl_dict)
26
27 with open(sysctl_file, "w") as fd:
28 for key, value in sysctl_dict.items():
29 fd.write("{}={}\n".format(key, value))
30
31 log("Updating sysctl_file: %s values: %s" % (sysctl_file, sysctl_dict),
32 level=DEBUG)
33
34 check_call(["sysctl", "-p", sysctl_file])
035
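sysctl.create() above reduces to serializing a mapping as `key=value` lines and then reloading them with `sysctl -p`. A minimal stdlib-only sketch of the serialization step — it takes a plain dict rather than the YAML string the real helper parses, sorts keys only for a deterministic result, and skips the `check_call`:

```python
def render_sysctl(sysctl_dict):
    # Serialize a mapping as the "key=value" lines sysctl.create() writes
    # to its target file; sorted() is just for deterministic output here.
    return "".join("{}={}\n".format(key, value)
                   for key, value in sorted(sysctl_dict.items()))
```

The real helper writes this text to `sysctl_file` and then runs `sysctl -p <file>` to apply it.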
=== added file 'lib/charmhelpers/core/templating.py'
--- lib/charmhelpers/core/templating.py 1970-01-01 00:00:00 +0000
+++ lib/charmhelpers/core/templating.py 2014-12-02 20:02:04 +0000
@@ -0,0 +1,52 @@
1import os
2
3from charmhelpers.core import host
4from charmhelpers.core import hookenv
5
6
7def render(source, target, context, owner='root', group='root',
8 perms=0o444, templates_dir=None):
9 """
10 Render a template.
11
12 The `source` path, if not absolute, is relative to the `templates_dir`.
13
14 The `target` path should be absolute.
15
16 The context should be a dict containing the values to be replaced in the
17 template.
18
19 The `owner`, `group`, and `perms` options will be passed to `write_file`.
20
21 If omitted, `templates_dir` defaults to the `templates` folder in the charm.
22
23 Note: Using this requires python-jinja2; if it is not installed, calling
24 this will attempt to use charmhelpers.fetch.apt_install to install it.
25 """
26 try:
27 from jinja2 import FileSystemLoader, Environment, exceptions
28 except ImportError:
29 try:
30 from charmhelpers.fetch import apt_install
31 except ImportError:
32 hookenv.log('Could not import jinja2, and could not import '
33 'charmhelpers.fetch to install it',
34 level=hookenv.ERROR)
35 raise
36 apt_install('python-jinja2', fatal=True)
37 from jinja2 import FileSystemLoader, Environment, exceptions
38
39 if templates_dir is None:
40 templates_dir = os.path.join(hookenv.charm_dir(), 'templates')
41 loader = Environment(loader=FileSystemLoader(templates_dir))
42 try:
43 source = source
44 template = loader.get_template(source)
45 except exceptions.TemplateNotFound as e:
46 hookenv.log('Could not load template %s from %s.' %
47 (source, templates_dir),
48 level=hookenv.ERROR)
49 raise e
50 content = template.render(context)
51 host.mkdir(os.path.dirname(target))
52 host.write_file(target, content, owner, group, perms)
053
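render() resolves `source` against the templates dir, renders it with the context, ensures the target's parent directory exists, and writes the result. A rough stand-alone analogue using stdlib `string.Template` in place of Jinja2, and plain file writes in place of `host.mkdir`/`host.write_file` (so no owner/group/perms handling); the `my.cnf.tmpl` name is just an example:

```python
import os
import tempfile
from string import Template

def render_sketch(source, target, context, templates_dir):
    # Resolve the template relative to templates_dir, substitute the
    # context, create the target's parent directory, write the result.
    with open(os.path.join(templates_dir, source)) as template_file:
        content = Template(template_file.read()).substitute(context)
    os.makedirs(os.path.dirname(target), exist_ok=True)
    with open(target, 'w') as target_file:
        target_file.write(content)
    return content

templates_dir = tempfile.mkdtemp()
with open(os.path.join(templates_dir, 'my.cnf.tmpl'), 'w') as f:
    f.write('bind-address = $address\n')
rendered = render_sketch('my.cnf.tmpl',
                         os.path.join(templates_dir, 'out', 'my.cnf'),
                         {'address': '0.0.0.0'}, templates_dir)
```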
=== added directory 'lib/charmhelpers/fetch'
=== added file 'lib/charmhelpers/fetch/__init__.py'
--- lib/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000
+++ lib/charmhelpers/fetch/__init__.py 2014-12-02 20:02:04 +0000
@@ -0,0 +1,416 @@
1import importlib
2from tempfile import NamedTemporaryFile
3import time
4from yaml import safe_load
5from charmhelpers.core.host import (
6 lsb_release
7)
8import subprocess
9from charmhelpers.core.hookenv import (
10 config,
11 log,
12)
13import os
14
15import six
16if six.PY3:
17 from urllib.parse import urlparse, urlunparse
18else:
19 from urlparse import urlparse, urlunparse
20
21
22CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
23deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
24"""
25PROPOSED_POCKET = """# Proposed
26deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
27"""
28CLOUD_ARCHIVE_POCKETS = {
29 # Folsom
30 'folsom': 'precise-updates/folsom',
31 'precise-folsom': 'precise-updates/folsom',
32 'precise-folsom/updates': 'precise-updates/folsom',
33 'precise-updates/folsom': 'precise-updates/folsom',
34 'folsom/proposed': 'precise-proposed/folsom',
35 'precise-folsom/proposed': 'precise-proposed/folsom',
36 'precise-proposed/folsom': 'precise-proposed/folsom',
37 # Grizzly
38 'grizzly': 'precise-updates/grizzly',
39 'precise-grizzly': 'precise-updates/grizzly',
40 'precise-grizzly/updates': 'precise-updates/grizzly',
41 'precise-updates/grizzly': 'precise-updates/grizzly',
42 'grizzly/proposed': 'precise-proposed/grizzly',
43 'precise-grizzly/proposed': 'precise-proposed/grizzly',
44 'precise-proposed/grizzly': 'precise-proposed/grizzly',
45 # Havana
46 'havana': 'precise-updates/havana',
47 'precise-havana': 'precise-updates/havana',
48 'precise-havana/updates': 'precise-updates/havana',
49 'precise-updates/havana': 'precise-updates/havana',
50 'havana/proposed': 'precise-proposed/havana',
51 'precise-havana/proposed': 'precise-proposed/havana',
52 'precise-proposed/havana': 'precise-proposed/havana',
53 # Icehouse
54 'icehouse': 'precise-updates/icehouse',
55 'precise-icehouse': 'precise-updates/icehouse',
56 'precise-icehouse/updates': 'precise-updates/icehouse',
57 'precise-updates/icehouse': 'precise-updates/icehouse',
58 'icehouse/proposed': 'precise-proposed/icehouse',
59 'precise-icehouse/proposed': 'precise-proposed/icehouse',
60 'precise-proposed/icehouse': 'precise-proposed/icehouse',
61 # Juno
62 'juno': 'trusty-updates/juno',
63 'trusty-juno': 'trusty-updates/juno',
64 'trusty-juno/updates': 'trusty-updates/juno',
65 'trusty-updates/juno': 'trusty-updates/juno',
66 'juno/proposed': 'trusty-proposed/juno',
67 'juno/proposed': 'trusty-proposed/juno',
68 'trusty-juno/proposed': 'trusty-proposed/juno',
69 'trusty-proposed/juno': 'trusty-proposed/juno',
70}
71
72# The order of this list is very important. Handlers should be listed from
73# least- to most-specific URL matching.
74FETCH_HANDLERS = (
75 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
76 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
77 'charmhelpers.fetch.giturl.GitUrlFetchHandler',
78)
79
80APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
81APT_NO_LOCK_RETRY_DELAY = 10 # Wait 10 seconds between apt lock checks.
82APT_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times.
83
84
85class SourceConfigError(Exception):
86 pass
87
88
89class UnhandledSource(Exception):
90 pass
91
92
93class AptLockError(Exception):
94 pass
95
96
97class BaseFetchHandler(object):
98
99 """Base class for FetchHandler implementations in fetch plugins"""
100
101 def can_handle(self, source):
102 """Returns True if the source can be handled. Otherwise returns
103 a string explaining why it cannot"""
104 return "Wrong source type"
105
106 def install(self, source):
107 """Try to download and unpack the source. Return the path to the
108 unpacked files or raise UnhandledSource."""
109 raise UnhandledSource("Wrong source type {}".format(source))
110
111 def parse_url(self, url):
112 return urlparse(url)
113
114 def base_url(self, url):
115 """Return url without querystring or fragment"""
116 parts = list(self.parse_url(url))
117 parts[4:] = ['' for i in parts[4:]]
118 return urlunparse(parts)
119
120
121def filter_installed_packages(packages):
122 """Returns a list of packages that require installation"""
123 cache = apt_cache()
124 _pkgs = []
125 for package in packages:
126 try:
127 p = cache[package]
128 p.current_ver or _pkgs.append(package)
129 except KeyError:
130 log('Package {} has no installation candidate.'.format(package),
131 level='WARNING')
132 _pkgs.append(package)
133 return _pkgs
134
135
136def apt_cache(in_memory=True):
137 """Build and return an apt cache"""
138 import apt_pkg
139 apt_pkg.init()
140 if in_memory:
141 apt_pkg.config.set("Dir::Cache::pkgcache", "")
142 apt_pkg.config.set("Dir::Cache::srcpkgcache", "")
143 return apt_pkg.Cache()
144
145
146def apt_install(packages, options=None, fatal=False):
147 """Install one or more packages"""
148 if options is None:
149 options = ['--option=Dpkg::Options::=--force-confold']
150
151 cmd = ['apt-get', '--assume-yes']
152 cmd.extend(options)
153 cmd.append('install')
154 if isinstance(packages, six.string_types):
155 cmd.append(packages)
156 else:
157 cmd.extend(packages)
158 log("Installing {} with options: {}".format(packages,
159 options))
160 _run_apt_command(cmd, fatal)
161
162
163def apt_upgrade(options=None, fatal=False, dist=False):
164 """Upgrade all packages"""
165 if options is None:
166 options = ['--option=Dpkg::Options::=--force-confold']
167
168 cmd = ['apt-get', '--assume-yes']
169 cmd.extend(options)
170 if dist:
171 cmd.append('dist-upgrade')
172 else:
173 cmd.append('upgrade')
174 log("Upgrading with options: {}".format(options))
175 _run_apt_command(cmd, fatal)
176
177
178def apt_update(fatal=False):
179 """Update local apt cache"""
180 cmd = ['apt-get', 'update']
181 _run_apt_command(cmd, fatal)
182
183
184def apt_purge(packages, fatal=False):
185 """Purge one or more packages"""
186 cmd = ['apt-get', '--assume-yes', 'purge']
187 if isinstance(packages, six.string_types):
188 cmd.append(packages)
189 else:
190 cmd.extend(packages)
191 log("Purging {}".format(packages))
192 _run_apt_command(cmd, fatal)
193
194
195def apt_hold(packages, fatal=False):
196 """Hold one or more packages"""
197 cmd = ['apt-mark', 'hold']
198 if isinstance(packages, six.string_types):
199 cmd.append(packages)
200 else:
201 cmd.extend(packages)
202 log("Holding {}".format(packages))
203
204 if fatal:
205 subprocess.check_call(cmd)
206 else:
207 subprocess.call(cmd)
208
209
210def add_source(source, key=None):
211 """Add a package source to this system.
212
213 @param source: a URL or sources.list entry, as supported by
214 add-apt-repository(1). Examples::
215
216 ppa:charmers/example
217 deb https://stub:key@private.example.com/ubuntu trusty main
218
219 In addition:
220 'proposed:' may be used to enable the standard 'proposed'
221 pocket for the release.
222 'cloud:' may be used to activate official cloud archive pockets,
223 such as 'cloud:icehouse'
224 'distro' may be used as a noop
225
226 @param key: A key to be added to the system's APT keyring and used
227 to verify the signatures on packages. Ideally, this should be an
228 ASCII format GPG public key including the block headers. A GPG key
229 id may also be used, but be aware that only insecure protocols are
230 available to retrieve the actual public key from a public keyserver
231 placing your Juju environment at risk. ppa and cloud archive keys
232 are securely added automatically, so should not be provided.
233 """
234 if source is None:
235 log('Source is not present. Skipping')
236 return
237
238 if (source.startswith('ppa:') or
239 source.startswith('http') or
240 source.startswith('deb ') or
241 source.startswith('cloud-archive:')):
242 subprocess.check_call(['add-apt-repository', '--yes', source])
243 elif source.startswith('cloud:'):
244 apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
245 fatal=True)
246 pocket = source.split(':')[-1]
247 if pocket not in CLOUD_ARCHIVE_POCKETS:
248 raise SourceConfigError(
249 'Unsupported cloud: source option %s' %
250 pocket)
251 actual_pocket = CLOUD_ARCHIVE_POCKETS[pocket]
252 with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
253 apt.write(CLOUD_ARCHIVE.format(actual_pocket))
254 elif source == 'proposed':
255 release = lsb_release()['DISTRIB_CODENAME']
256 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
257 apt.write(PROPOSED_POCKET.format(release))
258 elif source == 'distro':
259 pass
260 else:
261 log("Unknown source: {!r}".format(source))
262
263 if key:
264 if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
265 with NamedTemporaryFile('w+') as key_file:
266 key_file.write(key)
267 key_file.flush()
268 key_file.seek(0)
269 subprocess.check_call(['apt-key', 'add', '-'], stdin=key_file)
270 else:
271 # Note that hkp: is in no way a secure protocol. Using a
272 # GPG key id is pointless from a security POV unless you
273 # absolutely trust your network and DNS.
274 subprocess.check_call(['apt-key', 'adv', '--keyserver',
275 'hkp://keyserver.ubuntu.com:80', '--recv',
276 key])
277
278
279def configure_sources(update=False,
280 sources_var='install_sources',
281 keys_var='install_keys'):
282 """
283 Configure multiple sources from charm configuration.
284
285 The lists are encoded as yaml fragments in the configuration.
286 The fragment needs to be included as a string. Sources and their
287 corresponding keys are of the types supported by add_source().
288
289 Example config:
290 install_sources: |
291 - "ppa:foo"
292 - "http://example.com/repo precise main"
293 install_keys: |
294 - null
295 - "a1b2c3d4"
296
297 Note that 'null' (a.k.a. None) should not be quoted.
298 """
299 sources = safe_load((config(sources_var) or '').strip()) or []
300 keys = safe_load((config(keys_var) or '').strip()) or None
301
302 if isinstance(sources, six.string_types):
303 sources = [sources]
304
305 if keys is None:
306 for source in sources:
307 add_source(source, None)
308 else:
309 if isinstance(keys, six.string_types):
310 keys = [keys]
311
312 if len(sources) != len(keys):
313 raise SourceConfigError(
314 'Install sources and keys lists are different lengths')
315 for source, key in zip(sources, keys):
316 add_source(source, key)
317 if update:
318 apt_update(fatal=True)
319
320
321def install_remote(source, *args, **kwargs):
322 """
323 Install a file tree from a remote source
324
325 The specified source should be a url of the form:
326 scheme://[host]/path[#[option=value][&...]]
327
328 Schemes supported are based on this module's submodules.
329 Options supported are submodule-specific.
330 Additional arguments are passed through to the submodule.
331
332 For example::
333
334 dest = install_remote('http://example.com/archive.tgz',
335 checksum='deadbeef',
336 hash_type='sha1')
337
338 This will download `archive.tgz`, validate it using SHA1 and, if
339 the file is ok, extract it and return the directory in which it
340 was extracted. If the checksum fails, it will raise
341 :class:`charmhelpers.core.host.ChecksumError`.
342 """
343 # We ONLY check for True here because can_handle may return a string
344 # explaining why it can't handle a given source.
345 handlers = [h for h in plugins() if h.can_handle(source) is True]
346 installed_to = None
347 for handler in handlers:
348 try:
349 installed_to = handler.install(source, *args, **kwargs)
350 except UnhandledSource:
351 pass
352 if not installed_to:
353 raise UnhandledSource("No handler found for source {}".format(source))
354 return installed_to
355
356
357def install_from_config(config_var_name):
358 charm_config = config()
359 source = charm_config[config_var_name]
360 return install_remote(source)
361
362
363def plugins(fetch_handlers=None):
364 if not fetch_handlers:
365 fetch_handlers = FETCH_HANDLERS
366 plugin_list = []
367 for handler_name in fetch_handlers:
368 package, classname = handler_name.rsplit('.', 1)
369 try:
370 handler_class = getattr(
371 importlib.import_module(package),
372 classname)
373 plugin_list.append(handler_class())
374 except (ImportError, AttributeError):
375 # Skip missing plugins so that they can be omitted from
376 # installation if desired
377 log("FetchHandler {} not found, skipping plugin".format(
378 handler_name))
379 return plugin_list
380
381
382def _run_apt_command(cmd, fatal=False):
383 """
384 Run an APT command, checking output and retrying if the fatal flag is set
385 to True.
386
387 :param: cmd: str: The apt command to run.
388 :param: fatal: bool: Whether the command's output should be checked and
389 retried.
390 """
391 env = os.environ.copy()
392
393 if 'DEBIAN_FRONTEND' not in env:
394 env['DEBIAN_FRONTEND'] = 'noninteractive'
395
396 if fatal:
397 retry_count = 0
398 result = None
399
400 # If the command is considered "fatal", we need to retry if the apt
401 # lock was not acquired.
402
403 while result is None or result == APT_NO_LOCK:
404 try:
405 result = subprocess.check_call(cmd, env=env)
406 except subprocess.CalledProcessError as e:
407 retry_count = retry_count + 1
408 if retry_count > APT_NO_LOCK_RETRY_COUNT:
409 raise
410 result = e.returncode
411 log("Couldn't acquire DPKG lock. Will retry in {} seconds."
412 "".format(APT_NO_LOCK_RETRY_DELAY))
413 time.sleep(APT_NO_LOCK_RETRY_DELAY)
414
415 else:
416 subprocess.call(cmd, env=env)
0417
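configure_sources() accepts a bare string or a list for both sources and keys, allows keys to be omitted entirely, and refuses mismatched list lengths. Those pairing rules can be sketched on their own, without the YAML decoding or the actual `add_source` calls:

```python
class SourceConfigError(Exception):
    pass

def pair_sources_and_keys(sources, keys=None):
    # Promote bare strings to one-element lists, allow keys to be absent,
    # and require the two lists to line up when keys are given -- the same
    # rules configure_sources() applies after YAML-decoding the config.
    if isinstance(sources, str):
        sources = [sources]
    if keys is None:
        return [(source, None) for source in sources]
    if isinstance(keys, str):
        keys = [keys]
    if len(sources) != len(keys):
        raise SourceConfigError(
            'Install sources and keys lists are different lengths')
    return list(zip(sources, keys))
```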
=== added file 'lib/charmhelpers/fetch/archiveurl.py'
--- lib/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000
+++ lib/charmhelpers/fetch/archiveurl.py 2014-12-02 20:02:04 +0000
@@ -0,0 +1,145 @@
1import os
2import hashlib
3import re
4
5import six
6if six.PY3:
7 from urllib.request import (
8 build_opener, install_opener, urlopen, urlretrieve,
9 HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
10 )
11 from urllib.parse import urlparse, urlunparse, parse_qs
12 from urllib.error import URLError
13else:
14 from urllib import urlretrieve
15 from urllib2 import (
16 build_opener, install_opener, urlopen,
17 HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
18 URLError
19 )
20 from urlparse import urlparse, urlunparse, parse_qs
21
22from charmhelpers.fetch import (
23 BaseFetchHandler,
24 UnhandledSource
25)
26from charmhelpers.payload.archive import (
27 get_archive_handler,
28 extract,
29)
30from charmhelpers.core.host import mkdir, check_hash
31
32
33def splituser(host):
34 '''urllib.splituser(), but six's support of this seems broken'''
35 _userprog = re.compile('^(.*)@(.*)$')
36 match = _userprog.match(host)
37 if match:
38 return match.group(1, 2)
39 return None, host
40
41
42def splitpasswd(user):
43 '''urllib.splitpasswd(), but six's support of this is missing'''
44 _passwdprog = re.compile('^([^:]*):(.*)$', re.S)
45 match = _passwdprog.match(user)
46 if match:
47 return match.group(1, 2)
48 return user, None
49
50
51class ArchiveUrlFetchHandler(BaseFetchHandler):
52 """
53 Handler to download archive files from arbitrary URLs.
54
55 Can fetch from http, https, ftp, and file URLs.
56
57 Can install either tarballs (.tar, .tgz, .tbz2, etc) or zip files.
58
59 Installs the contents of the archive in $CHARM_DIR/fetched/.
60 """
61 def can_handle(self, source):
62 url_parts = self.parse_url(source)
63 if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
64 return "Wrong source type"
65 if get_archive_handler(self.base_url(source)):
66 return True
67 return False
68
69 def download(self, source, dest):
70 """
71 Download an archive file.
72
73 :param str source: URL pointing to an archive file.
74 :param str dest: Local path location to download archive file to.
75 """
76 # propagate all exceptions
77 # URLError, OSError, etc
78 proto, netloc, path, params, query, fragment = urlparse(source)
79 if proto in ('http', 'https'):
80 auth, barehost = splituser(netloc)
81 if auth is not None:
82 source = urlunparse((proto, barehost, path, params, query, fragment))
83 username, password = splitpasswd(auth)
84 passman = HTTPPasswordMgrWithDefaultRealm()
85 # Realm is set to None in add_password to force the username and password
86 # to be used whatever the realm
87 passman.add_password(None, source, username, password)
88 authhandler = HTTPBasicAuthHandler(passman)
89 opener = build_opener(authhandler)
90 install_opener(opener)
91 response = urlopen(source)
92 try:
93 with open(dest, 'w') as dest_file:
94 dest_file.write(response.read())
95 except Exception as e:
96 if os.path.isfile(dest):
97 os.unlink(dest)
98 raise e
99
100 # Mandatory file validation via Sha1 or MD5 hashing.
101 def download_and_validate(self, url, hashsum, validate="sha1"):
102 tempfile, headers = urlretrieve(url)
103 check_hash(tempfile, hashsum, validate)
104 return tempfile
105
106 def install(self, source, dest=None, checksum=None, hash_type='sha1'):
107 """
108 Download and install an archive file, with optional checksum validation.
109
110 The checksum can also be given on the `source` URL's fragment.
111 For example::
112
113 handler.install('http://example.com/file.tgz#sha1=deadbeef')
114
115 :param str source: URL pointing to an archive file.
116 :param str dest: Local destination path to install to. If not given,
117 installs to `$CHARM_DIR/archives/archive_file_name`.
118 :param str checksum: If given, validate the archive file after download.
119 :param str hash_type: Algorithm used to generate `checksum`.
120 Can be any hash algorithm supported by :mod:`hashlib`,
121 such as md5, sha1, sha256, sha512, etc.
122
123 """
124 url_parts = self.parse_url(source)
125 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
126 if not os.path.exists(dest_dir):
127 mkdir(dest_dir, perms=0o755)
128 dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
129 try:
130 self.download(source, dld_file)
131 except URLError as e:
132 raise UnhandledSource(e.reason)
133 except OSError as e:
134 raise UnhandledSource(e.strerror)
135 options = parse_qs(url_parts.fragment)
136 for key, value in options.items():
137 if not six.PY3:
138 algorithms = hashlib.algorithms
139 else:
140 algorithms = hashlib.algorithms_available
141 if key in algorithms:
142 check_hash(dld_file, value, key)
143 if checksum:
144 check_hash(dld_file, checksum, hash_type)
145 return extract(dld_file, dest)
0146
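The credential handling in download() leans on two small regex helpers, and install() reads checksums out of the URL fragment via parse_qs. Both pieces are pure functions and easy to exercise stand-alone (Python 3 imports shown here; the module itself also supports Python 2 through six):

```python
import re
from urllib.parse import urlparse, parse_qs

def splituser(host):
    # Same greedy regex as the helper above: everything before the last
    # '@' is treated as credentials, the rest as the bare host.
    match = re.match('^(.*)@(.*)$', host)
    if match:
        return match.group(1, 2)
    return None, host

def checksums_from_fragment(url):
    # install() feeds the URL fragment through parse_qs, so a source like
    # 'file.tgz#sha1=deadbeef' yields a sha1 checksum to verify.
    return {algo: values[0]
            for algo, values in parse_qs(urlparse(url).fragment).items()}
```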
=== added file 'lib/charmhelpers/fetch/bzrurl.py'
--- lib/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000
+++ lib/charmhelpers/fetch/bzrurl.py 2014-12-02 20:02:04 +0000
@@ -0,0 +1,54 @@
1import os
2from charmhelpers.fetch import (
3 BaseFetchHandler,
4 UnhandledSource
5)
6from charmhelpers.core.host import mkdir
7
8import six
9if six.PY3:
10 raise ImportError('bzrlib does not support Python3')
11
12try:
13 from bzrlib.branch import Branch
14except ImportError:
15 from charmhelpers.fetch import apt_install
16 apt_install("python-bzrlib")
17 from bzrlib.branch import Branch
18
19
20class BzrUrlFetchHandler(BaseFetchHandler):
21 """Handler for bazaar branches via generic and lp URLs"""
22 def can_handle(self, source):
23 url_parts = self.parse_url(source)
24 if url_parts.scheme not in ('bzr+ssh', 'lp'):
25 return False
26 else:
27 return True
28
29 def branch(self, source, dest):
30 url_parts = self.parse_url(source)
31 # If we use lp:branchname scheme we need to load plugins
32 if not self.can_handle(source):
33 raise UnhandledSource("Cannot handle {}".format(source))
34 if url_parts.scheme == "lp":
35 from bzrlib.plugin import load_plugins
36 load_plugins()
37 try:
38 remote_branch = Branch.open(source)
39 remote_branch.bzrdir.sprout(dest).open_branch()
40 except Exception as e:
41 raise e
42
43 def install(self, source):
44 url_parts = self.parse_url(source)
45 branch_name = url_parts.path.strip("/").split("/")[-1]
46 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
47 branch_name)
48 if not os.path.exists(dest_dir):
49 mkdir(dest_dir, perms=0o755)
50 try:
51 self.branch(source, dest_dir)
52 except OSError as e:
53 raise UnhandledSource(e.strerror)
54 return dest_dir
055
=== added file 'lib/charmhelpers/fetch/giturl.py'
--- lib/charmhelpers/fetch/giturl.py 1970-01-01 00:00:00 +0000
+++ lib/charmhelpers/fetch/giturl.py 2014-12-02 20:02:04 +0000
@@ -0,0 +1,48 @@
1import os
2from charmhelpers.fetch import (
3 BaseFetchHandler,
4 UnhandledSource
5)
6from charmhelpers.core.host import mkdir
7
8import six
9if six.PY3:
10 raise ImportError('GitPython does not support Python 3')
11
12try:
13 from git import Repo
14except ImportError:
15 from charmhelpers.fetch import apt_install
16 apt_install("python-git")
17 from git import Repo
18
19
20class GitUrlFetchHandler(BaseFetchHandler):
21 """Handler for git branches via generic and github URLs"""
22 def can_handle(self, source):
23 url_parts = self.parse_url(source)
24 # TODO (mattyw) no support for ssh git@ yet
25 if url_parts.scheme not in ('http', 'https', 'git'):
26 return False
27 else:
28 return True
29
30 def clone(self, source, dest, branch):
31 if not self.can_handle(source):
32 raise UnhandledSource("Cannot handle {}".format(source))
33
34 repo = Repo.clone_from(source, dest)
35 repo.git.checkout(branch)
36
37 def install(self, source, branch="master"):
38 url_parts = self.parse_url(source)
39 branch_name = url_parts.path.strip("/").split("/")[-1]
40 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
41 branch_name)
42 if not os.path.exists(dest_dir):
43 mkdir(dest_dir, perms=0o755)
44 try:
45 self.clone(source, dest_dir, branch)
46 except OSError as e:
47 raise UnhandledSource(e.strerror)
48 return dest_dir
049
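Both the bzr and git handlers compute the same destination: `$CHARM_DIR/fetched/<last path component of the URL>`. A sketch of that derivation — the `charm_dir` default below is a made-up example value; the real handlers read `os.environ['CHARM_DIR']`:

```python
import os
from urllib.parse import urlparse

def fetch_dest_dir(source, charm_dir='/var/lib/juju/example-charm'):
    # The last path component of the URL names the branch; everything
    # lands under $CHARM_DIR/fetched/.
    branch_name = urlparse(source).path.strip('/').split('/')[-1]
    return os.path.join(charm_dir, 'fetched', branch_name)
```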
=== added directory 'lib/charmhelpers/payload'
=== added file 'lib/charmhelpers/payload/__init__.py'
--- lib/charmhelpers/payload/__init__.py 1970-01-01 00:00:00 +0000
+++ lib/charmhelpers/payload/__init__.py 2014-12-02 20:02:04 +0000
@@ -0,0 +1,1 @@
1"Tools for working with files injected into a charm just before deployment."
02
=== added file 'lib/charmhelpers/payload/archive.py'
--- lib/charmhelpers/payload/archive.py 1970-01-01 00:00:00 +0000
+++ lib/charmhelpers/payload/archive.py 2014-12-02 20:02:04 +0000
@@ -0,0 +1,57 @@
1import os
2import tarfile
3import zipfile
4from charmhelpers.core import (
5 host,
6 hookenv,
7)
8
9
10class ArchiveError(Exception):
11 pass
12
13
14def get_archive_handler(archive_name):
15 if os.path.isfile(archive_name):
16 if tarfile.is_tarfile(archive_name):
17 return extract_tarfile
18 elif zipfile.is_zipfile(archive_name):
19 return extract_zipfile
20 else:
21 # look at the file name
22 for ext in ('.tar', '.tar.gz', '.tgz', '.tar.bz2', '.tbz2', '.tbz'):
23 if archive_name.endswith(ext):
24 return extract_tarfile
25 for ext in ('.zip', '.jar'):
26 if archive_name.endswith(ext):
27 return extract_zipfile
28
29
30def archive_dest_default(archive_name):
31 archive_file = os.path.basename(archive_name)
32 return os.path.join(hookenv.charm_dir(), "archives", archive_file)
33
34
35def extract(archive_name, destpath=None):
36 handler = get_archive_handler(archive_name)
37 if handler:
38 if not destpath:
39 destpath = archive_dest_default(archive_name)
40 if not os.path.isdir(destpath):
41 host.mkdir(destpath)
42 handler(archive_name, destpath)
43 return destpath
44 else:
45 raise ArchiveError("No handler for archive")
46
47
48def extract_tarfile(archive_name, destpath):
49 "Unpack a tar archive, optionally compressed"
50 archive = tarfile.open(archive_name)
51 archive.extractall(destpath)
52
53
54def extract_zipfile(archive_name, destpath):
55 "Unpack a zip file"
56 archive = zipfile.ZipFile(archive_name)
57 archive.extractall(destpath)
058
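When the archive does not yet exist on disk, get_archive_handler() falls back to dispatching on the file name alone. The same extension-based dispatch, sketched with labels standing in for the extract functions (extension tuples normalized here so each entry carries a leading dot):

```python
def handler_for(archive_name):
    # Name-based fallback dispatch from get_archive_handler(), with
    # 'tar' / 'zip' labels in place of extract_tarfile / extract_zipfile.
    for ext in ('.tar', '.tar.gz', '.tgz', '.tar.bz2', '.tbz2', '.tbz'):
        if archive_name.endswith(ext):
            return 'tar'
    for ext in ('.zip', '.jar'):
        if archive_name.endswith(ext):
            return 'zip'
    return None  # extract() turns this into an ArchiveError
```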
=== added file 'lib/charmhelpers/payload/execd.py'
--- lib/charmhelpers/payload/execd.py 1970-01-01 00:00:00 +0000
+++ lib/charmhelpers/payload/execd.py 2014-12-02 20:02:04 +0000
@@ -0,0 +1,50 @@
1#!/usr/bin/env python
2
3import os
4import sys
5import subprocess
6from charmhelpers.core import hookenv
7
8
9def default_execd_dir():
10 return os.path.join(os.environ['CHARM_DIR'], 'exec.d')
11
12
13def execd_module_paths(execd_dir=None):
14 """Generate a list of full paths to modules within execd_dir."""
15 if not execd_dir:
16 execd_dir = default_execd_dir()
17
18 if not os.path.exists(execd_dir):
19 return
20
21 for subpath in os.listdir(execd_dir):
22 module = os.path.join(execd_dir, subpath)
23 if os.path.isdir(module):
24 yield module
25
26
27def execd_submodule_paths(command, execd_dir=None):
28 """Generate a list of full paths to the specified command within execd_dir.
29 """
30 for module_path in execd_module_paths(execd_dir):
31 path = os.path.join(module_path, command)
32 if os.access(path, os.X_OK) and os.path.isfile(path):
33 yield path
34
35
36def execd_run(command, execd_dir=None, die_on_error=False, stderr=None):
37 """Run command for each module within execd_dir which defines it."""
38 for submodule_path in execd_submodule_paths(command, execd_dir):
39 try:
40 subprocess.check_call(submodule_path, shell=True, stderr=stderr)
41 except subprocess.CalledProcessError as e:
42 hookenv.log("Error ({}) running {}. Output: {}".format(
43 e.returncode, e.cmd, e.output))
44 if die_on_error:
45 sys.exit(e.returncode)
46
47
48def execd_preinstall(execd_dir=None):
49 """Run charm-pre-install for each module within execd_dir."""
50 execd_run('charm-pre-install', execd_dir=execd_dir)
051
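execd_submodule_paths() yields `<execd_dir>/<module>/<command>` for every module directory that ships an executable file of that name; execd_run() then executes each hit. A stand-alone version, exercised against a throwaway `exec.d` layout (the `mariadb-prep` module name is just an example):

```python
import os
import tempfile

def runnable_submodules(command, execd_dir):
    # Yield <execd_dir>/<module>/<command> for each module directory that
    # contains an executable file of that name (sorted for determinism).
    if not os.path.exists(execd_dir):
        return
    for subpath in sorted(os.listdir(execd_dir)):
        module = os.path.join(execd_dir, subpath)
        candidate = os.path.join(module, command)
        if (os.path.isdir(module) and os.path.isfile(candidate)
                and os.access(candidate, os.X_OK)):
            yield candidate

execd_dir = tempfile.mkdtemp()
module_dir = os.path.join(execd_dir, 'mariadb-prep')
os.makedirs(module_dir)
script = os.path.join(module_dir, 'charm-pre-install')
with open(script, 'w') as f:
    f.write('#!/bin/sh\nexit 0\n')
os.chmod(script, 0o755)
open(os.path.join(execd_dir, 'README'), 'w').close()  # plain files are skipped
found = list(runnable_submodules('charm-pre-install', execd_dir))
```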
=== added file 'scripts/charm_helpers_sync.py'
--- scripts/charm_helpers_sync.py 1970-01-01 00:00:00 +0000
+++ scripts/charm_helpers_sync.py 2014-12-02 20:02:04 +0000
@@ -0,0 +1,223 @@
1#!/usr/bin/env python
2# Copyright 2013 Canonical Ltd.
3
4# Authors:
5# Adam Gandelman <adamg@ubuntu.com>
6
7import logging
8import optparse
9import os
10import subprocess
11import shutil
12import sys
13import tempfile
14import yaml
15
16from fnmatch import fnmatch
17
18CHARM_HELPERS_BRANCH = 'lp:charm-helpers'
19
20
21def parse_config(conf_file):
22 if not os.path.isfile(conf_file):
23 logging.error('Invalid config file: %s.' % conf_file)
24 return False
25 return yaml.load(open(conf_file).read())
26
27
28def clone_helpers(work_dir, branch):
29 dest = os.path.join(work_dir, 'charm-helpers')
30 logging.info('Checking out %s to %s.' % (branch, dest))
31 cmd = ['bzr', 'branch', branch, dest]
32 subprocess.check_call(cmd)
33 return dest
34
35
36def _module_path(module):
37 return os.path.join(*module.split('.'))
38
39
40def _src_path(src, module):
41 return os.path.join(src, 'charmhelpers', _module_path(module))
42
43
44def _dest_path(dest, module):
45 return os.path.join(dest, _module_path(module))
46
47
48def _is_pyfile(path):
49 return os.path.isfile(path + '.py')
50
51
52def ensure_init(path):
53 '''
54 ensure directories leading up to path are importable, omitting
55 parent directory, eg path='/hooks/helpers/foo/':
56 hooks/
57 hooks/helpers/__init__.py
58 hooks/helpers/foo/__init__.py
59 '''
60 for d, dirs, files in os.walk(os.path.join(*path.split('/')[:2])):
61 _i = os.path.join(d, '__init__.py')
62 if not os.path.exists(_i):
63 logging.info('Adding missing __init__.py: %s' % _i)
64 open(_i, 'wb').close()
65
66
67def sync_pyfile(src, dest):
68 src = src + '.py'
69 src_dir = os.path.dirname(src)
70 logging.info('Syncing pyfile: %s -> %s.' % (src, dest))
71 if not os.path.exists(dest):
72 os.makedirs(dest)
73 shutil.copy(src, dest)
74 if os.path.isfile(os.path.join(src_dir, '__init__.py')):
75 shutil.copy(os.path.join(src_dir, '__init__.py'),
76 dest)
77 ensure_init(dest)
78
79
80def get_filter(opts=None):
81 opts = opts or []
82 if 'inc=*' in opts:
83 # do not filter any files, include everything
84 return None
85
86 def _filter(dir, ls):
87 incs = [opt.split('=').pop() for opt in opts if 'inc=' in opt]
88 _filter = []
89 for f in ls:
90 _f = os.path.join(dir, f)
91
92 if not os.path.isdir(_f) and not _f.endswith('.py') and incs:
93 if True not in [fnmatch(_f, inc) for inc in incs]:
94 logging.debug('Not syncing %s, does not match include '
95 'filters (%s)' % (_f, incs))
96 _filter.append(f)
97 else:
98 logging.debug('Including file, which matches include '
99 'filters (%s): %s' % (incs, _f))
100 elif (os.path.isfile(_f) and not _f.endswith('.py')):
101 logging.debug('Not syncing file: %s' % f)
102 _filter.append(f)
103 elif (os.path.isdir(_f) and not
104 os.path.isfile(os.path.join(_f, '__init__.py'))):
105 logging.debug('Not syncing directory: %s' % f)
106 _filter.append(f)
107 return _filter
108 return _filter
109
110
111def sync_directory(src, dest, opts=None):
112 if os.path.exists(dest):
113 logging.debug('Removing existing directory: %s' % dest)
114 shutil.rmtree(dest)
115 logging.info('Syncing directory: %s -> %s.' % (src, dest))
116
117 shutil.copytree(src, dest, ignore=get_filter(opts))
118 ensure_init(dest)
119
120
121def sync(src, dest, module, opts=None):
122 if os.path.isdir(_src_path(src, module)):
123 sync_directory(_src_path(src, module), _dest_path(dest, module), opts)
124 elif _is_pyfile(_src_path(src, module)):
125 sync_pyfile(_src_path(src, module),
126 os.path.dirname(_dest_path(dest, module)))
127 else:
128 logging.warn('Could not sync: %s. Neither a pyfile nor directory, '
129 'does it even exist?' % module)
130
131
132def parse_sync_options(options):
133 if not options:
134 return []
135 return options.split(',')
136
137
138def extract_options(inc, global_options=None):
139 global_options = global_options or []
140 if global_options and isinstance(global_options, basestring):
141 global_options = [global_options]
142 if '|' not in inc:
143 return (inc, global_options)
144 inc, opts = inc.split('|')
145 return (inc, parse_sync_options(opts) + global_options)
146
147
148def sync_helpers(include, src, dest, options=None):
149 if not os.path.isdir(dest):
150 os.mkdir(dest)
151
152 global_options = parse_sync_options(options)
153
154 for inc in include:
155 if isinstance(inc, str):
156 inc, opts = extract_options(inc, global_options)
157 sync(src, dest, inc, opts)
158 elif isinstance(inc, dict):
159 # could also do nested dicts here.
160 for k, v in inc.iteritems():
161 if isinstance(v, list):
162 for m in v:
163 inc, opts = extract_options(m, global_options)
164 sync(src, dest, '%s.%s' % (k, inc), opts)
165
166if __name__ == '__main__':
167 parser = optparse.OptionParser()
168 parser.add_option('-c', '--config', action='store', dest='config',
169 default=None, help='helper config file')
170 parser.add_option('-D', '--debug', action='store_true', dest='debug',
171 default=False, help='debug')
172 parser.add_option('-b', '--branch', action='store', dest='branch',
173 help='charm-helpers bzr branch (overrides config)')
174 parser.add_option('-d', '--destination', action='store', dest='dest_dir',
175 help='sync destination dir (overrides config)')
176 (opts, args) = parser.parse_args()
177
178 if opts.debug:
179 logging.basicConfig(level=logging.DEBUG)
180 else:
181 logging.basicConfig(level=logging.INFO)
182
183 if opts.config:
184 logging.info('Loading charm helper config from %s.' % opts.config)
185 config = parse_config(opts.config)
186 if not config:
187 logging.error('Could not parse config from %s.' % opts.config)
188 sys.exit(1)
189 else:
190 config = {}
191
192 if 'branch' not in config:
193 config['branch'] = CHARM_HELPERS_BRANCH
194 if opts.branch:
195 config['branch'] = opts.branch
196 if opts.dest_dir:
197 config['destination'] = opts.dest_dir
198
199 if 'destination' not in config:
200 logging.error('No destination dir. specified as option or config.')
201 sys.exit(1)
202
203 if 'include' not in config:
204 if not args:
205 logging.error('No modules to sync specified as option or config.')
206 sys.exit(1)
207 config['include'] = []
208 [config['include'].append(a) for a in args]
209
210 sync_options = None
211 if 'options' in config:
212 sync_options = config['options']
213 tmpd = tempfile.mkdtemp()
214 try:
215 checkout = clone_helpers(tmpd, config['branch'])
216 sync_helpers(config['include'], checkout, config['destination'],
217 options=sync_options)
218 except Exception, e:
219 logging.error("Could not sync: %s" % e)
220 raise e
221 finally:
222 logging.debug('Cleaning up %s' % tmpd)
223 shutil.rmtree(tmpd)
0224
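The include-list syntax handled by `parse_sync_options` and `extract_options` above (`module|opt1,opt2`, merged with any global options) is easy to misread in the diff. A minimal standalone sketch, with the two helpers re-stated for illustration (ported to `str` checks; the script itself targets Python 2 and uses `basestring`):

```python
# Standalone sketch of the include-option parsing from charm_helpers_sync.py,
# re-stated here for illustration only.

def parse_sync_options(options):
    # Split a comma-separated option string into a list.
    if not options:
        return []
    return options.split(',')

def extract_options(inc, global_options=None):
    # Split 'module|opt1,opt2' into (module, per-module options + globals).
    global_options = global_options or []
    if isinstance(global_options, str):
        global_options = [global_options]
    if '|' not in inc:
        return (inc, global_options)
    inc, opts = inc.split('|')
    return (inc, parse_sync_options(opts) + global_options)

# A plain include inherits only the global options:
print(extract_options('core.hookenv', ['inc=*.py']))
# → ('core.hookenv', ['inc=*.py'])

# A piped include prepends its own options to the globals:
print(extract_options('fetch|inc=*.txt', ['inc=*.py']))
# → ('fetch', ['inc=*.txt', 'inc=*.py'])
```

This is the same shape `sync_helpers` relies on when it walks the `include:` entries from `charm-helpers.yaml`.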
=== added file 'tests/10-deploy-and-upgrade'
--- tests/10-deploy-and-upgrade 1970-01-01 00:00:00 +0000
+++ tests/10-deploy-and-upgrade 2014-12-02 20:02:04 +0000
@@ -0,0 +1,78 @@
1#!/usr/bin/env python3
2
3import amulet
4import requests
5import unittest
6
7
8class TestDeployment(unittest.TestCase):
9 @classmethod
10 def setUpClass(cls):
11 cls.deployment = amulet.Deployment(series='trusty')
12
13 mw_config = {'name': 'MariaDB Test'}
14
15 cls.deployment.add('mariadb')
16 cls.deployment.add('mediawiki')
17 cls.deployment.configure('mediawiki', mw_config)
18 cls.deployment.relate('mediawiki:db', 'mariadb:db')
19 cls.deployment.expose('mediawiki')
20
21
22 try:
23 cls.deployment.setup(timeout=1200)
24 cls.deployment.sentry.wait()
25 except amulet.helpers.TimeoutError:
26 amulet.raise_status(amulet.SKIP, msg="Environment wasn't stood up in time")
27 except:
28 raise
29
30 '''
31 test_credentials: Verify the credentials sent over the wire are valid
32 by checking the MariaDB service status with them
33 '''
34 def test_credentials(self):
35 dbunit = self.deployment.sentry.unit['mariadb/0']
36 db_relation = dbunit.relation('db', 'mediawiki:db')
37 db_ip = db_relation['host']
38 db_user = db_relation['user']
39 db_pass = db_relation['password']
40 ctmp = 'mysqladmin status -h {0} -u {1} --password={2}'
41 cmd = ctmp.format(db_ip, db_user, db_pass)
42
43 output, code = dbunit.run(cmd)
44 if code != 0:
45 message = 'Unable to get status of the mariadb server at %s' % db_ip
46 amulet.raise_status(amulet.FAIL, msg=message)
47
48 '''
49 test_wiki: Verify Mediawiki setup was successful with MariaDB. No page will
50 be available if setup did not complete
51 '''
52 def test_wiki(self):
53 wikiunit = self.deployment.sentry.unit['mediawiki/0']
54 wiki_url = "http://{}".format(wikiunit.info['public-address'])
55 response = requests.get(wiki_url)
56 response.raise_for_status()
57
58
59 def test_enterprise_eval(self):
60 self.deployment.configure('mariadb', {'enterprise-eula': True,
61 'source': 'deb https://charlesbutler:foobarbaz@code.mariadb.com/mariadb-enterprise/10.0/repo/ubuntu trusty main'})
62
63 # Ensure the bintar was relocated
64 dbunit = self.deployment.sentry.unit['mariadb/0']
65
66 try:
67 dbunit.directory_stat('/usr/local/mysql')
68 amulet.raise_status(amulet.SKIP, 'bintar directory found, uncertain results ahead')
69 except:
70 # this is what we want to happen
71 pass
72
73 # re-run the test after in-place upgrade
74 self.test_credentials()
75
76
77if __name__ == '__main__':
78 unittest.main()
079
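The credential check in `test_credentials` above just formats a `mysqladmin status` command from the relation data and fails the test on a non-zero exit code. A sketch of that command construction, using hypothetical stand-in values for what `dbunit.relation('db', 'mediawiki:db')` would return on a live deployment:

```python
# Sketch of the mysqladmin check assembled by the amulet test; the host,
# user, and password values here are hypothetical stand-ins for the
# relation data a real unit would provide.
db_relation = {
    'host': '10.0.3.17',
    'user': 'mediawiki',
    'password': 's3cret',
}

ctmp = 'mysqladmin status -h {0} -u {1} --password={2}'
cmd = ctmp.format(db_relation['host'], db_relation['user'],
                  db_relation['password'])
print(cmd)
# → mysqladmin status -h 10.0.3.17 -u mediawiki --password=s3cret

# In the test, this string is passed to dbunit.run(); a non-zero exit
# code is treated as a credential failure and raises amulet.FAIL.
```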
=== removed file 'tests/10-deploy-test.py'
--- tests/10-deploy-test.py 2014-11-12 23:12:00 +0000
+++ tests/10-deploy-test.py 1970-01-01 00:00:00 +0000
@@ -1,90 +0,0 @@
1#!/usr/bin/python3
2
3# This amulet code is to test the mariadb charm.
4
5import amulet
6import requests
7
8# The number of seconds to wait for Juju to set up the environment.
9seconds = 1200
10
11# The mediawiki configuration to test.
12mediawiki_configuration = {
13 'name': 'MariaDB test'
14}
15
16d = amulet.Deployment()
17
18# Add the mediawiki charm to the deployment.
19d.add('mediawiki')
20# Add the MariaDB charm to the deployment.
21d.add('mariadb', charm='lp:~dbart/charms/trusty/mariadb/trunk')
22# Configure the mediawiki charm.
23d.configure('mediawiki', mediawiki_configuration)
24# Relate the mediawiki and mariadb charms.
25d.relate('mediawiki:db', 'mariadb:db')
26# Expose the open ports on mediawiki.
27d.expose('mediawiki')
28
29# Deploy the environment and wait for it to setup.
30try:
31 d.setup(timeout=seconds)
32 d.sentry.wait(seconds)
33except amulet.helpers.TimeoutError:
34 message = 'The environment did not setup in %d seconds.' % seconds
35 # The SKIP status enables skip or fail the test based on configuration.
36 amulet.raise_status(amulet.SKIP, msg=message)
37except:
38 raise
39
40# Get the sentry unit for mariadb.
41mariadb_unit = d.sentry.unit['mariadb/0']
42
43# Get the sentry unit for mediawiki.
44mediawiki_unit = d.sentry.unit['mediawiki/0']
45
46# Get the public address for the system running the mediawiki charm.
47mediawiki_address = mediawiki_unit.info['public-address']
48
49###############################################################################
50## Verify MariaDB
51###############################################################################
52# Verify that mediawiki was related to mariadb
53mediawiki_relation = mediawiki_unit.relation('db', 'mariadb:db')
54print('mediawiki relation to mariadb')
55for key, value in mediawiki_relation.items():
56 print(key, value)
57# Verify that mariadb was related to mediawiki
58mariadb_relation = mariadb_unit.relation('db', 'mediawiki:db')
59print('mariadb relation to mediawiki')
60for key, value in mariadb_relation.items():
61 print(key, value)
62# Get the db_host from the mediawiki relation to mariadb
63mariadb_ip = mariadb_relation['host']
64# Get the user from the mediawiki relation to mariadb
65mariadb_user = mariadb_relation['user']
66# Get the password from the mediawiki relation to mariadb
67mariadb_password = mariadb_relation['password']
68# Create the command to get the mariadb status with username and password.
69command = '/usr/local/mysql/bin/mysqladmin status -h {0} -u {1} --password={2}'.format(mariadb_ip,
70 mariadb_user, mariadb_password)
71print(command)
72output, code = mariadb_unit.run(command)
73print(output)
74if code != 0:
75 message = 'Unable to get the status of mariadb server at %s' % mariadb_ip
76 amulet.raise_status(amulet.FAIL, msg=message)
77
78###############################################################################
79## Verify mediawiki
80###############################################################################
81# Create a URL string to the mediawiki server.
82mediawiki_url = 'http://%s' % mediawiki_address
83print(mediawiki_url)
84# Get the mediawiki url with the authentication for guest.
85response = requests.get(mediawiki_url)
86# Raise an exception if response is not 200 OK.
87response.raise_for_status()
88
89print('The MariaDB deploy test completed successfully.')
90

Subscribers

People subscribed via source and target branches