Merge lp:~hopem/charms/trusty/jenkins/python-redux into lp:charms/trusty/jenkins

Proposed by Edward Hope-Morley
Status: Superseded
Proposed branch: lp:~hopem/charms/trusty/jenkins/python-redux
Merge into: lp:charms/trusty/jenkins
Diff against target: 3754 lines (+3253/-274)
38 files modified
Makefile (+30/-0)
bin/charm_helpers_sync.py (+225/-0)
charm-helpers-hooks.yaml (+7/-0)
config.yaml (+1/-1)
hooks/addnode (+0/-21)
hooks/charmhelpers/__init__.py (+22/-0)
hooks/charmhelpers/core/decorators.py (+41/-0)
hooks/charmhelpers/core/fstab.py (+118/-0)
hooks/charmhelpers/core/hookenv.py (+552/-0)
hooks/charmhelpers/core/host.py (+419/-0)
hooks/charmhelpers/core/services/__init__.py (+2/-0)
hooks/charmhelpers/core/services/base.py (+313/-0)
hooks/charmhelpers/core/services/helpers.py (+243/-0)
hooks/charmhelpers/core/sysctl.py (+34/-0)
hooks/charmhelpers/core/templating.py (+52/-0)
hooks/charmhelpers/fetch/__init__.py (+423/-0)
hooks/charmhelpers/fetch/archiveurl.py (+145/-0)
hooks/charmhelpers/fetch/bzrurl.py (+54/-0)
hooks/charmhelpers/fetch/giturl.py (+51/-0)
hooks/charmhelpers/payload/__init__.py (+1/-0)
hooks/charmhelpers/payload/execd.py (+50/-0)
hooks/config-changed (+0/-7)
hooks/delnode (+0/-16)
hooks/install (+0/-151)
hooks/jenkins_hooks.py (+220/-0)
hooks/jenkins_utils.py (+178/-0)
hooks/master-relation-broken (+0/-17)
hooks/master-relation-changed (+0/-24)
hooks/master-relation-departed (+0/-12)
hooks/master-relation-joined (+0/-5)
hooks/start (+0/-3)
hooks/stop (+0/-3)
hooks/upgrade-charm (+0/-7)
hooks/website-relation-joined (+0/-5)
tests/100-deploy-trusty (+4/-2)
tests/README (+56/-0)
unit_tests/test_jenkins_hooks.py (+6/-0)
unit_tests/test_jenkins_utils.py (+6/-0)
To merge this branch: bzr merge lp:~hopem/charms/trusty/jenkins/python-redux
Reviewer Review Type Date Requested Status
Whit Morriss (community) Needs Fixing
Review Queue (community) automated testing Needs Fixing
Ryan Beisner Pending
Felipe Reyes Pending
Paul Larson Pending
Jorge Niedbalski Pending
James Page Pending
Review via email: mp+245769@code.launchpad.net

This proposal supersedes a proposal from 2014-12-12.

This proposal has been superseded by a proposal from 2015-01-20.

Revision history for this message
Review Queue (review-queue) wrote : Posted in a previous version of this proposal

This item has failed automated testing! Results available here: http://reports.vapour.ws/charm-tests/charm-bundle-test-10336-results

review: Needs Fixing (automated testing)
Revision history for this message
Felipe Reyes (freyes) wrote : Posted in a previous version of this proposal

Setting the password doesn't work: deploying as below doesn't allow you to log in with admin/admin. Also, first deploying from the charm store and then upgrading to this branch breaks the password.

---
jenkins:
    password: "admin"
---

$ juju deploy --config config.yaml local:trusty/jenkins

review: Needs Fixing
Revision history for this message
Edward Hope-Morley (hopem) wrote : Posted in a previous version of this proposal

Thanks Felipe, taking a look.

Revision history for this message
Review Queue (review-queue) wrote : Posted in a previous version of this proposal

This item has failed automated testing! Results available here: http://reports.vapour.ws/charm-tests/charm-bundle-test-10636-results

review: Needs Fixing (automated testing)
Revision history for this message
Review Queue (review-queue) wrote : Posted in a previous version of this proposal

This item has failed automated testing! Results available here: http://reports.vapour.ws/charm-tests/charm-bundle-test-10684-results

review: Needs Fixing (automated testing)
Revision history for this message
Review Queue (review-queue) wrote : Posted in a previous version of this proposal

This item has failed automated testing! Results available here: http://reports.vapour.ws/charm-tests/charm-bundle-test-10704-results

review: Needs Fixing (automated testing)
Revision history for this message
Review Queue (review-queue) wrote : Posted in a previous version of this proposal

This item has failed automated testing! Results available here: http://reports.vapour.ws/charm-tests/charm-bundle-test-10869-results

review: Needs Fixing (automated testing)
Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here: http://reports.vapour.ws/charm-tests/charm-bundle-test-10876-results

review: Needs Fixing (automated testing)
Revision history for this message
Whit Morriss (whitmo) wrote :

Thanks Edward, your Python rewrite generally looks good at a glance.

I confirmed the test failures reported by automated testing: jenkins-slave is not found because it exists only in the precise series.

Changing tests/100-deploy-trusty, line 19, to "d.add('jenkins-slave', 'cs:precise/jenkins-slave')" remedies this issue until there is a trusty version of jenkins-slave.

The tests also hit an error in "master-relation-changed" due to what appears to be a race condition between the configuration/restart of the Jenkins slave and adding a node to the master. Running the hook via debug-hooks works fine.

See logs: https://gist.githubusercontent.com/anonymous/0067138ce2cc697b8c88/raw/3f4c03688400b32f3aa66e1bd3bad5b7398f80a5/jenkins-race-condition

I confirmed this is an issue in the merge target's bash implementation as well. Adding some retry logic to add_node should fix this issue.
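That retry approach can be sketched with the retry_on_exception decorator this branch syncs in via charmhelpers.core.decorators. The add_node below is a hypothetical stand-in for the charm's jenkins_utils helper, not its actual implementation, and the real decorator also logs via hookenv (omitted here since it needs a Juju hook environment):

```python
import time


def retry_on_exception(num_retries, base_delay=0, exc_type=Exception):
    # Mirrors charmhelpers.core.decorators.retry_on_exception from the
    # synced helpers: retry the wrapped call up to num_retries times,
    # sleeping base_delay * attempt between tries (linear backoff).
    def wrap(f):
        def wrapped(*args, **kwargs):
            retries = num_retries
            multiplier = 1
            while True:
                try:
                    return f(*args, **kwargs)
                except exc_type:
                    if not retries:
                        raise
                    retries -= 1
                    delay = base_delay * multiplier
                    multiplier += 1
                    if delay:
                        time.sleep(delay)
        return wrapped
    return wrap


# Hypothetical add_node stand-in: fails while the freshly restarted
# slave is still coming up, succeeds once it is reachable.
attempts = {'n': 0}


@retry_on_exception(num_retries=5, base_delay=0, exc_type=RuntimeError)
def add_node(host):
    attempts['n'] += 1
    if attempts['n'] < 3:
        raise RuntimeError('slave not ready yet')
    return 'added %s' % host


print(add_node('slave-0'))  # succeeds on the third attempt
```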

-1 for test fixes, but otherwise looks good. Thanks again!

review: Needs Fixing
Revision history for this message
Edward Hope-Morley (hopem) wrote :

@whitmo awesome, thanks for reviewing. I'll see if I can improve the add_node issue and I'll get the amulet test fixed up. Thanks!

48. By Edward Hope-Morley

fix amulet test

49. By Edward Hope-Morley

synced charmhelpers

50. By Edward Hope-Morley

allow retries when adding node

51. By Edward Hope-Morley

 * Fixed Makefile amulet test filename
 * Synced charm-helpers python-six deps
 * Synced charm-helpers test deps

52. By Edward Hope-Morley

added precise and trusty amulet

53. By Edward Hope-Morley

switch makefile rules to names that juju ci (hopefully) understands

54. By Edward Hope-Morley

ensure apt update prior to install

55. By Edward Hope-Morley

added venv for tests and lint

Unmerged revisions

Preview Diff

=== added file 'Makefile'
--- Makefile 1970-01-01 00:00:00 +0000
+++ Makefile 2015-01-20 18:33:44 +0000
@@ -0,0 +1,30 @@
+#!/usr/bin/make
+PYTHON := /usr/bin/env python
+
+lint:
+	@flake8 --exclude hooks/charmhelpers hooks unit_tests tests
+	@charm proof
+
+test:
+	@echo Starting Amulet tests...
+	# coreycb note: The -v should only be temporary until Amulet sends
+	# raise_status() messages to stderr:
+	#   https://bugs.launchpad.net/amulet/+bug/1320357
+	@juju test -v -p AMULET_HTTP_PROXY --timeout 900 \
+	    00-setup 100-deploy
+
+unit_test:
+	@echo Starting unit tests...
+	@$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests
+
+bin/charm_helpers_sync.py:
+	@mkdir -p bin
+	@bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
+	    > bin/charm_helpers_sync.py
+
+sync: bin/charm_helpers_sync.py
+	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-hooks.yaml
+
+publish: lint unit_test
+	bzr push lp:charms/jenkins
+	bzr push lp:charms/trusty/jenkins
=== added directory 'bin'
=== added file 'bin/charm_helpers_sync.py'
--- bin/charm_helpers_sync.py 1970-01-01 00:00:00 +0000
+++ bin/charm_helpers_sync.py 2015-01-20 18:33:44 +0000
@@ -0,0 +1,225 @@
1#!/usr/bin/python
2#
3# Copyright 2013 Canonical Ltd.
4
5# Authors:
6# Adam Gandelman <adamg@ubuntu.com>
7#
8
9import logging
10import optparse
11import os
12import subprocess
13import shutil
14import sys
15import tempfile
16import yaml
17
18from fnmatch import fnmatch
19
20CHARM_HELPERS_BRANCH = 'lp:charm-helpers'
21
22
23def parse_config(conf_file):
24 if not os.path.isfile(conf_file):
25 logging.error('Invalid config file: %s.' % conf_file)
26 return False
27 return yaml.load(open(conf_file).read())
28
29
30def clone_helpers(work_dir, branch):
31 dest = os.path.join(work_dir, 'charm-helpers')
32 logging.info('Checking out %s to %s.' % (branch, dest))
33 cmd = ['bzr', 'checkout', '--lightweight', branch, dest]
34 subprocess.check_call(cmd)
35 return dest
36
37
38def _module_path(module):
39 return os.path.join(*module.split('.'))
40
41
42def _src_path(src, module):
43 return os.path.join(src, 'charmhelpers', _module_path(module))
44
45
46def _dest_path(dest, module):
47 return os.path.join(dest, _module_path(module))
48
49
50def _is_pyfile(path):
51 return os.path.isfile(path + '.py')
52
53
54def ensure_init(path):
55 '''
56 ensure directories leading up to path are importable, omitting
57 parent directory, eg path='/hooks/helpers/foo'/:
58 hooks/
59 hooks/helpers/__init__.py
60 hooks/helpers/foo/__init__.py
61 '''
62 for d, dirs, files in os.walk(os.path.join(*path.split('/')[:2])):
63 _i = os.path.join(d, '__init__.py')
64 if not os.path.exists(_i):
65 logging.info('Adding missing __init__.py: %s' % _i)
66 open(_i, 'wb').close()
67
68
69def sync_pyfile(src, dest):
70 src = src + '.py'
71 src_dir = os.path.dirname(src)
72 logging.info('Syncing pyfile: %s -> %s.' % (src, dest))
73 if not os.path.exists(dest):
74 os.makedirs(dest)
75 shutil.copy(src, dest)
76 if os.path.isfile(os.path.join(src_dir, '__init__.py')):
77 shutil.copy(os.path.join(src_dir, '__init__.py'),
78 dest)
79 ensure_init(dest)
80
81
82def get_filter(opts=None):
83 opts = opts or []
84 if 'inc=*' in opts:
85 # do not filter any files, include everything
86 return None
87
88 def _filter(dir, ls):
89 incs = [opt.split('=').pop() for opt in opts if 'inc=' in opt]
90 _filter = []
91 for f in ls:
92 _f = os.path.join(dir, f)
93
94 if not os.path.isdir(_f) and not _f.endswith('.py') and incs:
95 if True not in [fnmatch(_f, inc) for inc in incs]:
96 logging.debug('Not syncing %s, does not match include '
97 'filters (%s)' % (_f, incs))
98 _filter.append(f)
99 else:
100 logging.debug('Including file, which matches include '
101 'filters (%s): %s' % (incs, _f))
102 elif (os.path.isfile(_f) and not _f.endswith('.py')):
103 logging.debug('Not syncing file: %s' % f)
104 _filter.append(f)
105 elif (os.path.isdir(_f) and not
106 os.path.isfile(os.path.join(_f, '__init__.py'))):
107 logging.debug('Not syncing directory: %s' % f)
108 _filter.append(f)
109 return _filter
110 return _filter
111
112
113def sync_directory(src, dest, opts=None):
114 if os.path.exists(dest):
115 logging.debug('Removing existing directory: %s' % dest)
116 shutil.rmtree(dest)
117 logging.info('Syncing directory: %s -> %s.' % (src, dest))
118
119 shutil.copytree(src, dest, ignore=get_filter(opts))
120 ensure_init(dest)
121
122
123def sync(src, dest, module, opts=None):
124 if os.path.isdir(_src_path(src, module)):
125 sync_directory(_src_path(src, module), _dest_path(dest, module), opts)
126 elif _is_pyfile(_src_path(src, module)):
127 sync_pyfile(_src_path(src, module),
128 os.path.dirname(_dest_path(dest, module)))
129 else:
130 logging.warn('Could not sync: %s. Neither a pyfile or directory, '
131 'does it even exist?' % module)
132
133
134def parse_sync_options(options):
135 if not options:
136 return []
137 return options.split(',')
138
139
140def extract_options(inc, global_options=None):
141 global_options = global_options or []
142 if global_options and isinstance(global_options, basestring):
143 global_options = [global_options]
144 if '|' not in inc:
145 return (inc, global_options)
146 inc, opts = inc.split('|')
147 return (inc, parse_sync_options(opts) + global_options)
148
149
150def sync_helpers(include, src, dest, options=None):
151 if not os.path.isdir(dest):
152 os.makedirs(dest)
153
154 global_options = parse_sync_options(options)
155
156 for inc in include:
157 if isinstance(inc, str):
158 inc, opts = extract_options(inc, global_options)
159 sync(src, dest, inc, opts)
160 elif isinstance(inc, dict):
161 # could also do nested dicts here.
162 for k, v in inc.iteritems():
163 if isinstance(v, list):
164 for m in v:
165 inc, opts = extract_options(m, global_options)
166 sync(src, dest, '%s.%s' % (k, inc), opts)
167
168if __name__ == '__main__':
169 parser = optparse.OptionParser()
170 parser.add_option('-c', '--config', action='store', dest='config',
171 default=None, help='helper config file')
172 parser.add_option('-D', '--debug', action='store_true', dest='debug',
173 default=False, help='debug')
174 parser.add_option('-b', '--branch', action='store', dest='branch',
175 help='charm-helpers bzr branch (overrides config)')
176 parser.add_option('-d', '--destination', action='store', dest='dest_dir',
177 help='sync destination dir (overrides config)')
178 (opts, args) = parser.parse_args()
179
180 if opts.debug:
181 logging.basicConfig(level=logging.DEBUG)
182 else:
183 logging.basicConfig(level=logging.INFO)
184
185 if opts.config:
186 logging.info('Loading charm helper config from %s.' % opts.config)
187 config = parse_config(opts.config)
188 if not config:
189 logging.error('Could not parse config from %s.' % opts.config)
190 sys.exit(1)
191 else:
192 config = {}
193
194 if 'branch' not in config:
195 config['branch'] = CHARM_HELPERS_BRANCH
196 if opts.branch:
197 config['branch'] = opts.branch
198 if opts.dest_dir:
199 config['destination'] = opts.dest_dir
200
201 if 'destination' not in config:
202 logging.error('No destination dir. specified as option or config.')
203 sys.exit(1)
204
205 if 'include' not in config:
206 if not args:
207 logging.error('No modules to sync specified as option or config.')
208 sys.exit(1)
209 config['include'] = []
210 [config['include'].append(a) for a in args]
211
212 sync_options = None
213 if 'options' in config:
214 sync_options = config['options']
215 tmpd = tempfile.mkdtemp()
216 try:
217 checkout = clone_helpers(tmpd, config['branch'])
218 sync_helpers(config['include'], checkout, config['destination'],
219 options=sync_options)
220 except Exception, e:
221 logging.error("Could not sync: %s" % e)
222 raise e
223 finally:
224 logging.debug('Cleaning up %s' % tmpd)
225 shutil.rmtree(tmpd)
0226
=== added file 'charm-helpers-hooks.yaml'
--- charm-helpers-hooks.yaml 1970-01-01 00:00:00 +0000
+++ charm-helpers-hooks.yaml 2015-01-20 18:33:44 +0000
@@ -0,0 +1,7 @@
+branch: lp:charm-helpers
+destination: hooks/charmhelpers
+include:
+    - __init__
+    - core
+    - fetch
+    - payload.execd
=== modified file 'config.yaml'
--- config.yaml 2014-08-14 19:53:02 +0000
+++ config.yaml 2015-01-20 18:33:44 +0000
@@ -17,9 +17,9 @@
       slave nodes so please don't change in Jenkins.
   password:
     type: string
+    default: ""
     description: Admin user password - used to manage
       slave nodes so please don't change in Jenkins.
-    default:
   plugins:
     type: string
     default: ""
=== removed file 'hooks/addnode'
--- hooks/addnode 2012-04-27 13:04:33 +0000
+++ hooks/addnode 1970-01-01 00:00:00 +0000
@@ -1,21 +0,0 @@
-#!/usr/bin/python
-
-import jenkins
-import sys
-
-host=sys.argv[1]
-executors=sys.argv[2]
-labels=sys.argv[3]
-username=sys.argv[4]
-password=sys.argv[5]
-
-l_jenkins = jenkins.Jenkins("http://localhost:8080/",username,password)
-
-if l_jenkins.node_exists(host):
-    print "Node exists - not adding"
-else:
-    print "Adding node to Jenkins master"
-    l_jenkins.create_node(host, int(executors) * 2, host , labels=labels)
-
-if not l_jenkins.node_exists(host):
-    print "Failed to create node"
=== added directory 'hooks/charmhelpers'
=== added file 'hooks/charmhelpers/__init__.py'
--- hooks/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/__init__.py 2015-01-20 18:33:44 +0000
@@ -0,0 +1,22 @@
+# Bootstrap charm-helpers, installing its dependencies if necessary using
+# only standard libraries.
+import subprocess
+import sys
+
+try:
+    import six  # flake8: noqa
+except ImportError:
+    if sys.version_info.major == 2:
+        subprocess.check_call(['apt-get', 'install', '-y', 'python-six'])
+    else:
+        subprocess.check_call(['apt-get', 'install', '-y', 'python3-six'])
+    import six  # flake8: noqa
+
+try:
+    import yaml  # flake8: noqa
+except ImportError:
+    if sys.version_info.major == 2:
+        subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml'])
+    else:
+        subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml'])
+    import yaml  # flake8: noqa
=== added directory 'hooks/charmhelpers/contrib'
=== added file 'hooks/charmhelpers/contrib/__init__.py'
=== added directory 'hooks/charmhelpers/core'
=== added file 'hooks/charmhelpers/core/__init__.py'
=== added file 'hooks/charmhelpers/core/decorators.py'
--- hooks/charmhelpers/core/decorators.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/decorators.py 2015-01-20 18:33:44 +0000
@@ -0,0 +1,41 @@
+#
+# Copyright 2014 Canonical Ltd.
+#
+# Authors:
+#  Edward Hope-Morley <opentastic@gmail.com>
+#
+
+import time
+
+from charmhelpers.core.hookenv import (
+    log,
+    INFO,
+)
+
+
+def retry_on_exception(num_retries, base_delay=0, exc_type=Exception):
+    """If the decorated function raises exception exc_type, allow num_retries
+    retry attempts before raise the exception.
+    """
+    def _retry_on_exception_inner_1(f):
+        def _retry_on_exception_inner_2(*args, **kwargs):
+            retries = num_retries
+            multiplier = 1
+            while True:
+                try:
+                    return f(*args, **kwargs)
+                except exc_type:
+                    if not retries:
+                        raise
+
+                    delay = base_delay * multiplier
+                    multiplier += 1
+                    log("Retrying '%s' %d more times (delay=%s)" %
+                        (f.__name__, retries, delay), level=INFO)
+                    retries -= 1
+                    if delay:
+                        time.sleep(delay)
+
+        return _retry_on_exception_inner_2
+
+    return _retry_on_exception_inner_1
=== added file 'hooks/charmhelpers/core/fstab.py'
--- hooks/charmhelpers/core/fstab.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/fstab.py 2015-01-20 18:33:44 +0000
@@ -0,0 +1,118 @@
1#!/usr/bin/env python
2# -*- coding: utf-8 -*-
3
4__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
5
6import io
7import os
8
9
10class Fstab(io.FileIO):
11 """This class extends file in order to implement a file reader/writer
12 for file `/etc/fstab`
13 """
14
15 class Entry(object):
16 """Entry class represents a non-comment line on the `/etc/fstab` file
17 """
18 def __init__(self, device, mountpoint, filesystem,
19 options, d=0, p=0):
20 self.device = device
21 self.mountpoint = mountpoint
22 self.filesystem = filesystem
23
24 if not options:
25 options = "defaults"
26
27 self.options = options
28 self.d = int(d)
29 self.p = int(p)
30
31 def __eq__(self, o):
32 return str(self) == str(o)
33
34 def __str__(self):
35 return "{} {} {} {} {} {}".format(self.device,
36 self.mountpoint,
37 self.filesystem,
38 self.options,
39 self.d,
40 self.p)
41
42 DEFAULT_PATH = os.path.join(os.path.sep, 'etc', 'fstab')
43
44 def __init__(self, path=None):
45 if path:
46 self._path = path
47 else:
48 self._path = self.DEFAULT_PATH
49 super(Fstab, self).__init__(self._path, 'rb+')
50
51 def _hydrate_entry(self, line):
52 # NOTE: use split with no arguments to split on any
53 # whitespace including tabs
54 return Fstab.Entry(*filter(
55 lambda x: x not in ('', None),
56 line.strip("\n").split()))
57
58 @property
59 def entries(self):
60 self.seek(0)
61 for line in self.readlines():
62 line = line.decode('us-ascii')
63 try:
64 if line.strip() and not line.startswith("#"):
65 yield self._hydrate_entry(line)
66 except ValueError:
67 pass
68
69 def get_entry_by_attr(self, attr, value):
70 for entry in self.entries:
71 e_attr = getattr(entry, attr)
72 if e_attr == value:
73 return entry
74 return None
75
76 def add_entry(self, entry):
77 if self.get_entry_by_attr('device', entry.device):
78 return False
79
80 self.write((str(entry) + '\n').encode('us-ascii'))
81 self.truncate()
82 return entry
83
84 def remove_entry(self, entry):
85 self.seek(0)
86
87 lines = [l.decode('us-ascii') for l in self.readlines()]
88
89 found = False
90 for index, line in enumerate(lines):
91 if not line.startswith("#"):
92 if self._hydrate_entry(line) == entry:
93 found = True
94 break
95
96 if not found:
97 return False
98
99 lines.remove(line)
100
101 self.seek(0)
102 self.write(''.join(lines).encode('us-ascii'))
103 self.truncate()
104 return True
105
106 @classmethod
107 def remove_by_mountpoint(cls, mountpoint, path=None):
108 fstab = cls(path=path)
109 entry = fstab.get_entry_by_attr('mountpoint', mountpoint)
110 if entry:
111 return fstab.remove_entry(entry)
112 return False
113
114 @classmethod
115 def add(cls, device, mountpoint, filesystem, options=None, path=None):
116 return cls(path=path).add_entry(Fstab.Entry(device,
117 mountpoint, filesystem,
118 options=options))
0119
=== added file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/hookenv.py 2015-01-20 18:33:44 +0000
@@ -0,0 +1,552 @@
1"Interactions with the Juju environment"
2# Copyright 2013 Canonical Ltd.
3#
4# Authors:
5# Charm Helpers Developers <juju@lists.ubuntu.com>
6
7import os
8import json
9import yaml
10import subprocess
11import sys
12from subprocess import CalledProcessError
13
14import six
15if not six.PY3:
16 from UserDict import UserDict
17else:
18 from collections import UserDict
19
20CRITICAL = "CRITICAL"
21ERROR = "ERROR"
22WARNING = "WARNING"
23INFO = "INFO"
24DEBUG = "DEBUG"
25MARKER = object()
26
27cache = {}
28
29
30def cached(func):
31 """Cache return values for multiple executions of func + args
32
33 For example::
34
35 @cached
36 def unit_get(attribute):
37 pass
38
39 unit_get('test')
40
41 will cache the result of unit_get + 'test' for future calls.
42 """
43 def wrapper(*args, **kwargs):
44 global cache
45 key = str((func, args, kwargs))
46 try:
47 return cache[key]
48 except KeyError:
49 res = func(*args, **kwargs)
50 cache[key] = res
51 return res
52 return wrapper
53
54
55def flush(key):
56 """Flushes any entries from function cache where the
57 key is found in the function+args """
58 flush_list = []
59 for item in cache:
60 if key in item:
61 flush_list.append(item)
62 for item in flush_list:
63 del cache[item]
64
65
66def log(message, level=None):
67 """Write a message to the juju log"""
68 command = ['juju-log']
69 if level:
70 command += ['-l', level]
71 if not isinstance(message, six.string_types):
72 message = repr(message)
73 command += [message]
74 subprocess.call(command)
75
76
77class Serializable(UserDict):
78 """Wrapper, an object that can be serialized to yaml or json"""
79
80 def __init__(self, obj):
81 # wrap the object
82 UserDict.__init__(self)
83 self.data = obj
84
85 def __getattr__(self, attr):
86 # See if this object has attribute.
87 if attr in ("json", "yaml", "data"):
88 return self.__dict__[attr]
89 # Check for attribute in wrapped object.
90 got = getattr(self.data, attr, MARKER)
91 if got is not MARKER:
92 return got
93 # Proxy to the wrapped object via dict interface.
94 try:
95 return self.data[attr]
96 except KeyError:
97 raise AttributeError(attr)
98
99 def __getstate__(self):
100 # Pickle as a standard dictionary.
101 return self.data
102
103 def __setstate__(self, state):
104 # Unpickle into our wrapper.
105 self.data = state
106
107 def json(self):
108 """Serialize the object to json"""
109 return json.dumps(self.data)
110
111 def yaml(self):
112 """Serialize the object to yaml"""
113 return yaml.dump(self.data)
114
115
116def execution_environment():
117 """A convenient bundling of the current execution context"""
118 context = {}
119 context['conf'] = config()
120 if relation_id():
121 context['reltype'] = relation_type()
122 context['relid'] = relation_id()
123 context['rel'] = relation_get()
124 context['unit'] = local_unit()
125 context['rels'] = relations()
126 context['env'] = os.environ
127 return context
128
129
130def in_relation_hook():
131 """Determine whether we're running in a relation hook"""
132 return 'JUJU_RELATION' in os.environ
133
134
135def relation_type():
136 """The scope for the current relation hook"""
137 return os.environ.get('JUJU_RELATION', None)
138
139
140def relation_id():
141 """The relation ID for the current relation hook"""
142 return os.environ.get('JUJU_RELATION_ID', None)
143
144
145def local_unit():
146 """Local unit ID"""
147 return os.environ['JUJU_UNIT_NAME']
148
149
150def remote_unit():
151 """The remote unit for the current relation hook"""
152 return os.environ['JUJU_REMOTE_UNIT']
153
154
155def service_name():
156 """The name service group this unit belongs to"""
157 return local_unit().split('/')[0]
158
159
160def hook_name():
161 """The name of the currently executing hook"""
162 return os.path.basename(sys.argv[0])
163
164
165class Config(dict):
166 """A dictionary representation of the charm's config.yaml, with some
167 extra features:
168
169 - See which values in the dictionary have changed since the previous hook.
170 - For values that have changed, see what the previous value was.
171 - Store arbitrary data for use in a later hook.
172
173 NOTE: Do not instantiate this object directly - instead call
174 ``hookenv.config()``, which will return an instance of :class:`Config`.
175
176 Example usage::
177
178 >>> # inside a hook
179 >>> from charmhelpers.core import hookenv
180 >>> config = hookenv.config()
181 >>> config['foo']
182 'bar'
183 >>> # store a new key/value for later use
184 >>> config['mykey'] = 'myval'
185
186
187 >>> # user runs `juju set mycharm foo=baz`
188 >>> # now we're inside subsequent config-changed hook
189 >>> config = hookenv.config()
190 >>> config['foo']
191 'baz'
192 >>> # test to see if this val has changed since last hook
193 >>> config.changed('foo')
194 True
195 >>> # what was the previous value?
196 >>> config.previous('foo')
197 'bar'
198 >>> # keys/values that we add are preserved across hooks
199 >>> config['mykey']
200 'myval'
201
202 """
203 CONFIG_FILE_NAME = '.juju-persistent-config'
204
205 def __init__(self, *args, **kw):
206 super(Config, self).__init__(*args, **kw)
207 self.implicit_save = True
208 self._prev_dict = None
209 self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
210 if os.path.exists(self.path):
211 self.load_previous()
212
213 def __getitem__(self, key):
214 """For regular dict lookups, check the current juju config first,
215 then the previous (saved) copy. This ensures that user-saved values
216 will be returned by a dict lookup.
217
218 """
219 try:
220 return dict.__getitem__(self, key)
221 except KeyError:
222 return (self._prev_dict or {})[key]
223
224 def keys(self):
225 prev_keys = []
226 if self._prev_dict is not None:
227 prev_keys = self._prev_dict.keys()
228 return list(set(prev_keys + list(dict.keys(self))))
229
230 def load_previous(self, path=None):
231 """Load previous copy of config from disk.
232
233 In normal usage you don't need to call this method directly - it
234 is called automatically at object initialization.
235
236 :param path:
237
238 File path from which to load the previous config. If `None`,
239 config is loaded from the default location. If `path` is
240 specified, subsequent `save()` calls will write to the same
241 path.
242
243 """
244 self.path = path or self.path
245 with open(self.path) as f:
246 self._prev_dict = json.load(f)
247
248 def changed(self, key):
249 """Return True if the current value for this key is different from
250 the previous value.
251
252 """
253 if self._prev_dict is None:
254 return True
255 return self.previous(key) != self.get(key)
256
257 def previous(self, key):
258 """Return previous value for this key, or None if there
259 is no previous value.
260
261 """
262 if self._prev_dict:
263 return self._prev_dict.get(key)
264 return None
265
266 def save(self):
267 """Save this config to disk.
268
269 If the charm is using the :mod:`Services Framework <services.base>`
270 or :meth:'@hook <Hooks.hook>' decorator, this
271 is called automatically at the end of successful hook execution.
272 Otherwise, it should be called directly by user code.
273
274 To disable automatic saves, set ``implicit_save=False`` on this
275 instance.
276
277 """
278 if self._prev_dict:
279 for k, v in six.iteritems(self._prev_dict):
280 if k not in self:
281 self[k] = v
282 with open(self.path, 'w') as f:
283 json.dump(self, f)
284
285
286@cached
287def config(scope=None):
288 """Juju charm configuration"""
289 config_cmd_line = ['config-get']
290 if scope is not None:
291 config_cmd_line.append(scope)
292 config_cmd_line.append('--format=json')
293 try:
294 config_data = json.loads(
295 subprocess.check_output(config_cmd_line).decode('UTF-8'))
296 if scope is not None:
297 return config_data
298 return Config(config_data)
299 except ValueError:
300 return None
301
302
303@cached
304def relation_get(attribute=None, unit=None, rid=None):
305 """Get relation information"""
306 _args = ['relation-get', '--format=json']
307 if rid:
308 _args.append('-r')
309 _args.append(rid)
310 _args.append(attribute or '-')
311 if unit:
312 _args.append(unit)
313 try:
314 return json.loads(subprocess.check_output(_args).decode('UTF-8'))
315 except ValueError:
316 return None
317 except CalledProcessError as e:
318 if e.returncode == 2:
319 return None
320 raise
321
322
323def relation_set(relation_id=None, relation_settings=None, **kwargs):
324 """Set relation information for the current unit"""
325 relation_settings = relation_settings if relation_settings else {}
326 relation_cmd_line = ['relation-set']
327 if relation_id is not None:
328 relation_cmd_line.extend(('-r', relation_id))
329 for k, v in (list(relation_settings.items()) + list(kwargs.items())):
330 if v is None:
331 relation_cmd_line.append('{}='.format(k))
332 else:
333 relation_cmd_line.append('{}={}'.format(k, v))
334 subprocess.check_call(relation_cmd_line)
335 # Flush cache of any relation-gets for local unit
336 flush(local_unit())
337
338
339@cached
340def relation_ids(reltype=None):
341 """A list of relation_ids"""
342 reltype = reltype or relation_type()
343 relid_cmd_line = ['relation-ids', '--format=json']
344 if reltype is not None:
345 relid_cmd_line.append(reltype)
346 return json.loads(
347 subprocess.check_output(relid_cmd_line).decode('UTF-8')) or []
348 return []
349
350
351@cached
352def related_units(relid=None):
353 """A list of related units"""
354 relid = relid or relation_id()
355 units_cmd_line = ['relation-list', '--format=json']
356 if relid is not None:
357 units_cmd_line.extend(('-r', relid))
358 return json.loads(
359 subprocess.check_output(units_cmd_line).decode('UTF-8')) or []
360
361
362@cached
363def relation_for_unit(unit=None, rid=None):
364 """Get the json represenation of a unit's relation"""
365 unit = unit or remote_unit()
366 relation = relation_get(unit=unit, rid=rid)
367 for key in relation:
368 if key.endswith('-list'):
369 relation[key] = relation[key].split()
370 relation['__unit__'] = unit
371 return relation
372
373
374@cached
375def relations_for_id(relid=None):
376 """Get relations of a specific relation ID"""
377 relation_data = []
378 relid = relid or relation_ids()
379 for unit in related_units(relid):
380 unit_data = relation_for_unit(unit, relid)
381 unit_data['__relid__'] = relid
382 relation_data.append(unit_data)
383 return relation_data
384
385
386@cached
387def relations_of_type(reltype=None):
388 """Get relations of a specific type"""
389 relation_data = []
390 reltype = reltype or relation_type()
391 for relid in relation_ids(reltype):
392 for relation in relations_for_id(relid):
393 relation['__relid__'] = relid
394 relation_data.append(relation)
395 return relation_data
396
397
398@cached
399def metadata():
400 """Get the current charm metadata.yaml contents as a python object"""
401 with open(os.path.join(charm_dir(), 'metadata.yaml')) as md:
402 return yaml.safe_load(md)
403
404
405@cached
406def relation_types():
407 """Get a list of relation types supported by this charm"""
408 rel_types = []
409 md = metadata()
410 for key in ('provides', 'requires', 'peers'):
411 section = md.get(key)
412 if section:
413 rel_types.extend(section.keys())
414 return rel_types
415
416
417@cached
418def charm_name():
419    """Get the name of the current charm as specified in metadata.yaml"""
420 return metadata().get('name')
421
422
423@cached
424def relations():
425 """Get a nested dictionary of relation data for all related units"""
426 rels = {}
427 for reltype in relation_types():
428 relids = {}
429 for relid in relation_ids(reltype):
430 units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
431 for unit in related_units(relid):
432 reldata = relation_get(unit=unit, rid=relid)
433 units[unit] = reldata
434 relids[relid] = units
435 rels[reltype] = relids
436 return rels
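The nested structure `relations()` returns can be hard to picture from the loop above. A purely illustrative example of its shape (the unit names and addresses below are hypothetical, not taken from this charm):

```python
# relations() returns a three-level mapping:
#   relation type -> relation id -> unit name -> that unit's settings.
# The names and addresses here are illustrative only.
EXAMPLE_RELATIONS = {
    'website': {
        'website:0': {
            'jenkins/0': {'private-address': '10.0.0.4'},
            'haproxy/0': {'private-address': '10.0.0.9'},
        },
    },
}

# Settings a remote unit published on the 'website:0' relation:
unit_settings = EXAMPLE_RELATIONS['website']['website:0']['haproxy/0']
```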
437
438
439@cached
440def is_relation_made(relation, keys='private-address'):
441 '''
442 Determine whether a relation is established by checking for
443 presence of key(s). If a list of keys is provided, they
444 must all be present for the relation to be identified as made
445 '''
446 if isinstance(keys, str):
447 keys = [keys]
448 for r_id in relation_ids(relation):
449 for unit in related_units(r_id):
450 context = {}
451 for k in keys:
452 context[k] = relation_get(k, rid=r_id,
453 unit=unit)
454 if None not in context.values():
455 return True
456 return False
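The rule `is_relation_made` implements can be sketched without a live Juju environment: a relation counts as "made" once any related unit has published a non-None value for every required key. The stub data and helper below are hypothetical stand-ins for the real relation-get/relation-list tools:

```python
# Standalone sketch of the is_relation_made() check, using stubbed
# relation data in place of the juju hook tools. Names are illustrative.
STUB_UNITS = {
    'mysql/0': {'private-address': '10.0.0.5', 'password': 's3cret'},
    'mysql/1': {'private-address': None},  # key not yet set by this unit
}

def is_relation_made_stub(units, keys=('private-address',)):
    """True once any unit has set all of the given keys to a real value."""
    for settings in units.values():
        if all(settings.get(k) is not None for k in keys):
            return True
    return False
```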
457
458
459def open_port(port, protocol="TCP"):
460 """Open a service network port"""
461 _args = ['open-port']
462 _args.append('{}/{}'.format(port, protocol))
463 subprocess.check_call(_args)
464
465
466def close_port(port, protocol="TCP"):
467 """Close a service network port"""
468 _args = ['close-port']
469 _args.append('{}/{}'.format(port, protocol))
470 subprocess.check_call(_args)
471
472
473@cached
474def unit_get(attribute):
475    """Get the requested attribute (e.g. 'private-address') of the local unit"""
476 _args = ['unit-get', '--format=json', attribute]
477 try:
478 return json.loads(subprocess.check_output(_args).decode('UTF-8'))
479 except ValueError:
480 return None
481
482
483def unit_private_ip():
484 """Get this unit's private IP address"""
485 return unit_get('private-address')
486
487
488class UnregisteredHookError(Exception):
489 """Raised when an undefined hook is called"""
490 pass
491
492
493class Hooks(object):
494 """A convenient handler for hook functions.
495
496 Example::
497
498 hooks = Hooks()
499
500 # register a hook, taking its name from the function name
501 @hooks.hook()
502 def install():
503 pass # your code here
504
505 # register a hook, providing a custom hook name
506 @hooks.hook("config-changed")
507 def config_changed():
508 pass # your code here
509
510 if __name__ == "__main__":
511 # execute a hook based on the name the program is called by
512 hooks.execute(sys.argv)
513 """
514
515 def __init__(self, config_save=True):
516 super(Hooks, self).__init__()
517 self._hooks = {}
518 self._config_save = config_save
519
520 def register(self, name, function):
521 """Register a hook"""
522 self._hooks[name] = function
523
524 def execute(self, args):
525 """Execute a registered hook based on args[0]"""
526 hook_name = os.path.basename(args[0])
527 if hook_name in self._hooks:
528 self._hooks[hook_name]()
529 if self._config_save:
530 cfg = config()
531 if cfg.implicit_save:
532 cfg.save()
533 else:
534 raise UnregisteredHookError(hook_name)
535
536 def hook(self, *hook_names):
537        """Decorator registering the decorated function for the given hook names (and its own name)"""
538 def wrapper(decorated):
539 for hook_name in hook_names:
540 self.register(hook_name, decorated)
541 else:
542 self.register(decorated.__name__, decorated)
543 if '_' in decorated.__name__:
544 self.register(
545 decorated.__name__.replace('_', '-'), decorated)
546 return decorated
547 return wrapper
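The dispatch pattern used by `Hooks` can be shown in a self-contained form: hooks are looked up by the basename of the invoked program, and functions with underscores also get a dashed alias so `config_changed` handles the `config-changed` hook. This is a minimal sketch that omits the `config()` auto-save step:

```python
import os

# Minimal, self-contained sketch of the Hooks dispatch pattern above.
class MiniHooks(object):
    def __init__(self):
        self._hooks = {}

    def hook(self, *hook_names):
        def wrapper(decorated):
            for name in hook_names:
                self._hooks[name] = decorated
            # Always register under the function's own name too,
            # plus a dashed alias: config_changed -> config-changed.
            self._hooks[decorated.__name__] = decorated
            if '_' in decorated.__name__:
                self._hooks[decorated.__name__.replace('_', '-')] = decorated
            return decorated
        return wrapper

    def execute(self, args):
        # Dispatch on the basename of the program, as Hooks.execute does.
        hook_name = os.path.basename(args[0])
        if hook_name not in self._hooks:
            raise KeyError('Unregistered hook: %s' % hook_name)
        return self._hooks[hook_name]()

hooks = MiniHooks()

@hooks.hook('config-changed')
def config_changed():
    return 'reconfigured'
```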
548
549
550def charm_dir():
551 """Return the root directory of the current charm"""
552 return os.environ.get('CHARM_DIR')
0553
=== added file 'hooks/charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/host.py 2015-01-20 18:33:44 +0000
@@ -0,0 +1,419 @@
1"""Tools for working with the host system"""
2# Copyright 2012 Canonical Ltd.
3#
4# Authors:
5# Nick Moffitt <nick.moffitt@canonical.com>
6# Matthew Wedgwood <matthew.wedgwood@canonical.com>
7
8import os
9import re
10import pwd
11import grp
12import random
13import string
14import subprocess
15import hashlib
16from contextlib import contextmanager
17from collections import OrderedDict
18
19import six
20
21from .hookenv import log
22from .fstab import Fstab
23
24
25def service_start(service_name):
26 """Start a system service"""
27 return service('start', service_name)
28
29
30def service_stop(service_name):
31 """Stop a system service"""
32 return service('stop', service_name)
33
34
35def service_restart(service_name):
36 """Restart a system service"""
37 return service('restart', service_name)
38
39
40def service_reload(service_name, restart_on_failure=False):
41 """Reload a system service, optionally falling back to restart if
42 reload fails"""
43 service_result = service('reload', service_name)
44 if not service_result and restart_on_failure:
45 service_result = service('restart', service_name)
46 return service_result
47
48
49def service(action, service_name):
50 """Control a system service"""
51 cmd = ['service', service_name, action]
52 return subprocess.call(cmd) == 0
53
54
55def service_running(service):
56 """Determine whether a system service is running"""
57 try:
58 output = subprocess.check_output(
59 ['service', service, 'status'],
60 stderr=subprocess.STDOUT).decode('UTF-8')
61 except subprocess.CalledProcessError:
62 return False
63 else:
64 if ("start/running" in output or "is running" in output):
65 return True
66 else:
67 return False
68
69
70def service_available(service_name):
71 """Determine whether a system service is available"""
72 try:
73 subprocess.check_output(
74 ['service', service_name, 'status'],
75 stderr=subprocess.STDOUT).decode('UTF-8')
76 except subprocess.CalledProcessError as e:
77        return b'unrecognized service' not in e.output
78 else:
79 return True
80
81
82def adduser(username, password=None, shell='/bin/bash', system_user=False):
83 """Add a user to the system"""
84 try:
85 user_info = pwd.getpwnam(username)
86 log('user {0} already exists!'.format(username))
87 except KeyError:
88 log('creating user {0}'.format(username))
89 cmd = ['useradd']
90 if system_user or password is None:
91 cmd.append('--system')
92 else:
93 cmd.extend([
94 '--create-home',
95 '--shell', shell,
96 '--password', password,
97 ])
98 cmd.append(username)
99 subprocess.check_call(cmd)
100 user_info = pwd.getpwnam(username)
101 return user_info
102
103
104def add_group(group_name, system_group=False):
105 """Add a group to the system"""
106 try:
107 group_info = grp.getgrnam(group_name)
108 log('group {0} already exists!'.format(group_name))
109 except KeyError:
110 log('creating group {0}'.format(group_name))
111 cmd = ['addgroup']
112 if system_group:
113 cmd.append('--system')
114 else:
115 cmd.extend([
116 '--group',
117 ])
118 cmd.append(group_name)
119 subprocess.check_call(cmd)
120 group_info = grp.getgrnam(group_name)
121 return group_info
122
123
124def add_user_to_group(username, group):
125 """Add a user to a group"""
126 cmd = [
127 'gpasswd', '-a',
128 username,
129 group
130 ]
131 log("Adding user {} to group {}".format(username, group))
132 subprocess.check_call(cmd)
133
134
135def rsync(from_path, to_path, flags='-r', options=None):
136 """Replicate the contents of a path"""
137 options = options or ['--delete', '--executability']
138 cmd = ['/usr/bin/rsync', flags]
139 cmd.extend(options)
140 cmd.append(from_path)
141 cmd.append(to_path)
142 log(" ".join(cmd))
143 return subprocess.check_output(cmd).decode('UTF-8').strip()
144
145
146def symlink(source, destination):
147 """Create a symbolic link"""
148 log("Symlinking {} as {}".format(source, destination))
149 cmd = [
150 'ln',
151 '-sf',
152 source,
153 destination,
154 ]
155 subprocess.check_call(cmd)
156
157
158def mkdir(path, owner='root', group='root', perms=0o555, force=False):
159 """Create a directory"""
160 log("Making dir {} {}:{} {:o}".format(path, owner, group,
161 perms))
162 uid = pwd.getpwnam(owner).pw_uid
163 gid = grp.getgrnam(group).gr_gid
164 realpath = os.path.abspath(path)
165 path_exists = os.path.exists(realpath)
166 if path_exists and force:
167 if not os.path.isdir(realpath):
168 log("Removing non-directory file {} prior to mkdir()".format(path))
169 os.unlink(realpath)
170 os.makedirs(realpath, perms)
171 os.chown(realpath, uid, gid)
172 elif not path_exists:
173 os.makedirs(realpath, perms)
174 os.chown(realpath, uid, gid)
175
176
177def write_file(path, content, owner='root', group='root', perms=0o444):
178 """Create or overwrite a file with the contents of a string"""
179 log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
180 uid = pwd.getpwnam(owner).pw_uid
181 gid = grp.getgrnam(group).gr_gid
182 with open(path, 'w') as target:
183 os.fchown(target.fileno(), uid, gid)
184 os.fchmod(target.fileno(), perms)
185 target.write(content)
186
187
188def fstab_remove(mp):
189 """Remove the given mountpoint entry from /etc/fstab
190 """
191 return Fstab.remove_by_mountpoint(mp)
192
193
194def fstab_add(dev, mp, fs, options=None):
195 """Adds the given device entry to the /etc/fstab file
196 """
197 return Fstab.add(dev, mp, fs, options=options)
198
199
200def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"):
201 """Mount a filesystem at a particular mountpoint"""
202 cmd_args = ['mount']
203 if options is not None:
204 cmd_args.extend(['-o', options])
205 cmd_args.extend([device, mountpoint])
206 try:
207 subprocess.check_output(cmd_args)
208 except subprocess.CalledProcessError as e:
209 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
210 return False
211
212 if persist:
213 return fstab_add(device, mountpoint, filesystem, options=options)
214 return True
215
216
217def umount(mountpoint, persist=False):
218 """Unmount a filesystem"""
219 cmd_args = ['umount', mountpoint]
220 try:
221 subprocess.check_output(cmd_args)
222 except subprocess.CalledProcessError as e:
223 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
224 return False
225
226 if persist:
227 return fstab_remove(mountpoint)
228 return True
229
230
231def mounts():
232 """Get a list of all mounted volumes as [[mountpoint,device],[...]]"""
233 with open('/proc/mounts') as f:
234 # [['/mount/point','/dev/path'],[...]]
235 system_mounts = [m[1::-1] for m in [l.strip().split()
236 for l in f.readlines()]]
237 return system_mounts
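The `m[1::-1]` slice in `mounts()` is terse: it takes fields 1 and 0 of each `/proc/mounts` line in reverse order, turning `"device mountpoint fstype ..."` into `[mountpoint, device]`. The same parsing logic against a couple of sample lines:

```python
# Sample /proc/mounts content, illustrative only.
SAMPLE_PROC_MOUNTS = """\
/dev/sda1 / ext4 rw,relatime 0 0
tmpfs /run tmpfs rw,nosuid 0 0
"""

def parse_mounts(text):
    # split()[1::-1] keeps fields 1 and 0, reversed: [mountpoint, device]
    return [line.split()[1::-1] for line in text.splitlines() if line.strip()]
```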
238
239
240def file_hash(path, hash_type='md5'):
241 """
242 Generate a hash checksum of the contents of 'path' or None if not found.
243
244    :param str hash_type: Any hash algorithm supported by :mod:`hashlib`,
245 such as md5, sha1, sha256, sha512, etc.
246 """
247 if os.path.exists(path):
248 h = getattr(hashlib, hash_type)()
249 with open(path, 'rb') as source:
250 h.update(source.read())
251 return h.hexdigest()
252 else:
253 return None
254
255
256def check_hash(path, checksum, hash_type='md5'):
257 """
258 Validate a file using a cryptographic checksum.
259
260 :param str checksum: Value of the checksum used to validate the file.
261 :param str hash_type: Hash algorithm used to generate `checksum`.
262        Can be any hash algorithm supported by :mod:`hashlib`,
263 such as md5, sha1, sha256, sha512, etc.
264 :raises ChecksumError: If the file fails the checksum
265
266 """
267 actual_checksum = file_hash(path, hash_type)
268 if checksum != actual_checksum:
269 raise ChecksumError("'%s' != '%s'" % (checksum, actual_checksum))
270
271
272class ChecksumError(ValueError):
273 pass
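The `file_hash`/`check_hash` pair amounts to: hash the file's bytes with the named hashlib algorithm and compare against an expected digest. A runnable demonstration of the same pattern against a temporary file:

```python
import hashlib
import os
import tempfile

def file_digest(path, hash_type='md5'):
    """Hash a file's contents, or return None if the path is missing."""
    if not os.path.exists(path):
        return None
    h = getattr(hashlib, hash_type)()
    with open(path, 'rb') as source:
        h.update(source.read())
    return h.hexdigest()

# Write a known payload and hash it.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'wb') as f:
    f.write(b'hello')
digest = file_digest(path, 'sha256')
os.unlink(path)
```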
274
275
276def restart_on_change(restart_map, stopstart=False):
277 """Restart services based on configuration files changing
278
279    This function is used as a decorator, for example::
280
281 @restart_on_change({
282 '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
283 })
284 def ceph_client_changed():
285 pass # your code here
286
287 In this example, the cinder-api and cinder-volume services
288 would be restarted if /etc/ceph/ceph.conf is changed by the
289 ceph_client_changed function.
290 """
291 def wrap(f):
292 def wrapped_f(*args):
293 checksums = {}
294 for path in restart_map:
295 checksums[path] = file_hash(path)
296 f(*args)
297 restarts = []
298 for path in restart_map:
299 if checksums[path] != file_hash(path):
300 restarts += restart_map[path]
301 services_list = list(OrderedDict.fromkeys(restarts))
302 if not stopstart:
303 for service_name in services_list:
304 service('restart', service_name)
305 else:
306 for action in ['stop', 'start']:
307 for service_name in services_list:
308 service(action, service_name)
309 return wrapped_f
310 return wrap
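The decorator above snapshots each watched file's hash before the wrapped hook runs and restarts only the services whose files actually changed, de-duplicating with `OrderedDict.fromkeys`. A standalone sketch of that flow, using an in-memory "filesystem" and a recording list in place of `file_hash()` and `service()`:

```python
from collections import OrderedDict

# Stub filesystem and restart log, illustrative only.
FILES = {'/etc/app.conf': 'a=1', '/etc/other.conf': 'b=2'}
RESTARTED = []

def restart_on_change_stub(restart_map):
    def wrap(f):
        def wrapped_f(*args):
            # Snapshot contents before the hook runs.
            before = {path: FILES.get(path) for path in restart_map}
            f(*args)
            restarts = []
            for path in restart_map:
                if before[path] != FILES.get(path):
                    restarts += restart_map[path]
            # Dedupe while preserving order, then "restart".
            for name in OrderedDict.fromkeys(restarts):
                RESTARTED.append(name)
        return wrapped_f
    return wrap

@restart_on_change_stub({'/etc/app.conf': ['app-api', 'app-worker']})
def change_config():
    FILES['/etc/app.conf'] = 'a=2'

change_config()
```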
311
312
313def lsb_release():
314 """Return /etc/lsb-release in a dict"""
315 d = {}
316 with open('/etc/lsb-release', 'r') as lsb:
317 for l in lsb:
318            k, v = l.split('=', 1)
319 d[k.strip()] = v.strip()
320 return d
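`lsb_release()` is a simple KEY=value parser. The same logic run against a sample `/etc/lsb-release` body (splitting on the first `=` only, so quoted descriptions containing `=` survive intact):

```python
# Sample file content, illustrative of a trusty system.
SAMPLE = """\
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.1 LTS"
"""

def parse_lsb(text):
    d = {}
    for line in text.splitlines():
        if line.strip():
            k, v = line.split('=', 1)  # first '=' only
            d[k.strip()] = v.strip()
    return d

info = parse_lsb(SAMPLE)
```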
321
322
323def pwgen(length=None):
324    """Generate a random password."""
325 if length is None:
326 length = random.choice(range(35, 45))
327 alphanumeric_chars = [
328 l for l in (string.ascii_letters + string.digits)
329 if l not in 'l0QD1vAEIOUaeiou']
330 random_chars = [
331 random.choice(alphanumeric_chars) for _ in range(length)]
332    return ''.join(random_chars)
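The character filter in `pwgen()` drops easily-confused glyphs (`l`, `0`, `1`, `Q`, `D`) and vowels, so generated passwords avoid look-alike characters and accidental words. The same selection logic, standalone:

```python
import random
import string

# Characters pwgen() excludes, as in the original list above.
EXCLUDED = set('l0QD1vAEIOUaeiou')
ALPHABET = [c for c in string.ascii_letters + string.digits
            if c not in EXCLUDED]

def pwgen_stub(length=40):
    """Draw `length` characters uniformly from the filtered alphabet."""
    return ''.join(random.choice(ALPHABET) for _ in range(length))

pw = pwgen_stub(12)
```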
333
334
335def list_nics(nic_type):
336 '''Return a list of nics of given type(s)'''
337 if isinstance(nic_type, six.string_types):
338 int_types = [nic_type]
339 else:
340 int_types = nic_type
341 interfaces = []
342 for int_type in int_types:
343 cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
344 ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
345 ip_output = (line for line in ip_output if line)
346 for line in ip_output:
347 if line.split()[1].startswith(int_type):
348                matched = re.search(r'.*: (bond[0-9]+\.[0-9]+)@.*', line)
349 if matched:
350 interface = matched.groups()[0]
351 else:
352 interface = line.split()[1].replace(":", "")
353 interfaces.append(interface)
354
355 return interfaces
356
357
358def set_nic_mtu(nic, mtu):
359 '''Set MTU on a network interface'''
360    cmd = ['ip', 'link', 'set', nic, 'mtu', str(mtu)]
361 subprocess.check_call(cmd)
362
363
364def get_nic_mtu(nic):
365 cmd = ['ip', 'addr', 'show', nic]
366 ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
367 mtu = ""
368 for line in ip_output:
369 words = line.split()
370 if 'mtu' in words:
371 mtu = words[words.index("mtu") + 1]
372 return mtu
373
374
375def get_nic_hwaddr(nic):
376 cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
377 ip_output = subprocess.check_output(cmd).decode('UTF-8')
378 hwaddr = ""
379 words = ip_output.split()
380 if 'link/ether' in words:
381 hwaddr = words[words.index('link/ether') + 1]
382 return hwaddr
383
384
385def cmp_pkgrevno(package, revno, pkgcache=None):
386 '''Compare supplied revno with the revno of the installed package
387
388 * 1 => Installed revno is greater than supplied arg
389 * 0 => Installed revno is the same as supplied arg
390 * -1 => Installed revno is less than supplied arg
391
392 '''
393 import apt_pkg
394 if not pkgcache:
395 from charmhelpers.fetch import apt_cache
396 pkgcache = apt_cache()
397 pkg = pkgcache[package]
398 return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
399
400
401@contextmanager
402def chdir(d):
403 cur = os.getcwd()
404 try:
405 yield os.chdir(d)
406 finally:
407 os.chdir(cur)
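The `chdir()` context manager guarantees the working directory is restored even if the body raises, because the restore happens in `finally`. A runnable check of the same pattern:

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def chdir_cm(d):
    # Save the current directory, switch, and always switch back.
    cur = os.getcwd()
    try:
        yield os.chdir(d)
    finally:
        os.chdir(cur)

start = os.getcwd()
target = tempfile.mkdtemp()
with chdir_cm(target):
    inside = os.getcwd()   # inside the block: the temp directory
restored = os.getcwd()     # after the block: back where we started
os.rmdir(target)
```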
408
409
410def chownr(path, owner, group):
411 uid = pwd.getpwnam(owner).pw_uid
412 gid = grp.getgrnam(group).gr_gid
413
414 for root, dirs, files in os.walk(path):
415 for name in dirs + files:
416 full = os.path.join(root, name)
417 broken_symlink = os.path.lexists(full) and not os.path.exists(full)
418 if not broken_symlink:
419 os.chown(full, uid, gid)
0420
=== added directory 'hooks/charmhelpers/core/services'
=== added file 'hooks/charmhelpers/core/services/__init__.py'
--- hooks/charmhelpers/core/services/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/services/__init__.py 2015-01-20 18:33:44 +0000
@@ -0,0 +1,2 @@
1from .base import * # NOQA
2from .helpers import * # NOQA
03
=== added file 'hooks/charmhelpers/core/services/base.py'
--- hooks/charmhelpers/core/services/base.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/services/base.py 2015-01-20 18:33:44 +0000
@@ -0,0 +1,313 @@
1import os
2import re
3import json
4from collections import Iterable
5
6from charmhelpers.core import host
7from charmhelpers.core import hookenv
8
9
10__all__ = ['ServiceManager', 'ManagerCallback',
11 'PortManagerCallback', 'open_ports', 'close_ports', 'manage_ports',
12 'service_restart', 'service_stop']
13
14
15class ServiceManager(object):
16 def __init__(self, services=None):
17 """
18 Register a list of services, given their definitions.
19
20 Service definitions are dicts in the following formats (all keys except
21 'service' are optional)::
22
23 {
24 "service": <service name>,
25 "required_data": <list of required data contexts>,
26 "provided_data": <list of provided data contexts>,
27 "data_ready": <one or more callbacks>,
28 "data_lost": <one or more callbacks>,
29 "start": <one or more callbacks>,
30 "stop": <one or more callbacks>,
31 "ports": <list of ports to manage>,
32 }
33
34 The 'required_data' list should contain dicts of required data (or
35 dependency managers that act like dicts and know how to collect the data).
36 Only when all items in the 'required_data' list are populated are the list
37 of 'data_ready' and 'start' callbacks executed. See `is_ready()` for more
38 information.
39
40 The 'provided_data' list should contain relation data providers, most likely
41 a subclass of :class:`charmhelpers.core.services.helpers.RelationContext`,
42 that will indicate a set of data to set on a given relation.
43
44 The 'data_ready' value should be either a single callback, or a list of
45 callbacks, to be called when all items in 'required_data' pass `is_ready()`.
46 Each callback will be called with the service name as the only parameter.
47 After all of the 'data_ready' callbacks are called, the 'start' callbacks
48 are fired.
49
50 The 'data_lost' value should be either a single callback, or a list of
51 callbacks, to be called when a 'required_data' item no longer passes
52 `is_ready()`. Each callback will be called with the service name as the
53 only parameter. After all of the 'data_lost' callbacks are called,
54 the 'stop' callbacks are fired.
55
56 The 'start' value should be either a single callback, or a list of
57 callbacks, to be called when starting the service, after the 'data_ready'
58 callbacks are complete. Each callback will be called with the service
59 name as the only parameter. This defaults to
60 `[host.service_start, services.open_ports]`.
61
62 The 'stop' value should be either a single callback, or a list of
63 callbacks, to be called when stopping the service. If the service is
64 being stopped because it no longer has all of its 'required_data', this
65 will be called after all of the 'data_lost' callbacks are complete.
66 Each callback will be called with the service name as the only parameter.
67 This defaults to `[services.close_ports, host.service_stop]`.
68
69 The 'ports' value should be a list of ports to manage. The default
70 'start' handler will open the ports after the service is started,
71 and the default 'stop' handler will close the ports prior to stopping
72 the service.
73
74
75 Examples:
76
77 The following registers an Upstart service called bingod that depends on
78 a mongodb relation and which runs a custom `db_migrate` function prior to
79 restarting the service, and a Runit service called spadesd::
80
81 manager = services.ServiceManager([
82 {
83 'service': 'bingod',
84 'ports': [80, 443],
85 'required_data': [MongoRelation(), config(), {'my': 'data'}],
86 'data_ready': [
87 services.template(source='bingod.conf'),
88 services.template(source='bingod.ini',
89 target='/etc/bingod.ini',
90 owner='bingo', perms=0400),
91 ],
92 },
93 {
94 'service': 'spadesd',
95 'data_ready': services.template(source='spadesd_run.j2',
96 target='/etc/sv/spadesd/run',
97 perms=0555),
98 'start': runit_start,
99 'stop': runit_stop,
100 },
101 ])
102 manager.manage()
103 """
104 self._ready_file = os.path.join(hookenv.charm_dir(), 'READY-SERVICES.json')
105 self._ready = None
106 self.services = {}
107 for service in services or []:
108 service_name = service['service']
109 self.services[service_name] = service
110
111 def manage(self):
112 """
113 Handle the current hook by doing The Right Thing with the registered services.
114 """
115 hook_name = hookenv.hook_name()
116 if hook_name == 'stop':
117 self.stop_services()
118 else:
119 self.provide_data()
120 self.reconfigure_services()
121 cfg = hookenv.config()
122 if cfg.implicit_save:
123 cfg.save()
124
125 def provide_data(self):
126 """
127 Set the relation data for each provider in the ``provided_data`` list.
128
129 A provider must have a `name` attribute, which indicates which relation
130 to set data on, and a `provide_data()` method, which returns a dict of
131 data to set.
132 """
133 hook_name = hookenv.hook_name()
134 for service in self.services.values():
135 for provider in service.get('provided_data', []):
136 if re.match(r'{}-relation-(joined|changed)'.format(provider.name), hook_name):
137 data = provider.provide_data()
138 _ready = provider._is_ready(data) if hasattr(provider, '_is_ready') else data
139 if _ready:
140 hookenv.relation_set(None, data)
141
142 def reconfigure_services(self, *service_names):
143 """
144 Update all files for one or more registered services, and,
145 if ready, optionally restart them.
146
147 If no service names are given, reconfigures all registered services.
148 """
149 for service_name in service_names or self.services.keys():
150 if self.is_ready(service_name):
151 self.fire_event('data_ready', service_name)
152 self.fire_event('start', service_name, default=[
153 service_restart,
154 manage_ports])
155 self.save_ready(service_name)
156 else:
157 if self.was_ready(service_name):
158 self.fire_event('data_lost', service_name)
159 self.fire_event('stop', service_name, default=[
160 manage_ports,
161 service_stop])
162 self.save_lost(service_name)
163
164 def stop_services(self, *service_names):
165 """
166 Stop one or more registered services, by name.
167
168 If no service names are given, stops all registered services.
169 """
170 for service_name in service_names or self.services.keys():
171 self.fire_event('stop', service_name, default=[
172 manage_ports,
173 service_stop])
174
175 def get_service(self, service_name):
176 """
177 Given the name of a registered service, return its service definition.
178 """
179 service = self.services.get(service_name)
180 if not service:
181 raise KeyError('Service not registered: %s' % service_name)
182 return service
183
184 def fire_event(self, event_name, service_name, default=None):
185 """
186 Fire a data_ready, data_lost, start, or stop event on a given service.
187 """
188 service = self.get_service(service_name)
189 callbacks = service.get(event_name, default)
190 if not callbacks:
191 return
192 if not isinstance(callbacks, Iterable):
193 callbacks = [callbacks]
194 for callback in callbacks:
195 if isinstance(callback, ManagerCallback):
196 callback(self, service_name, event_name)
197 else:
198 callback(service_name)
199
200 def is_ready(self, service_name):
201 """
202 Determine if a registered service is ready, by checking its 'required_data'.
203
204 A 'required_data' item can be any mapping type, and is considered ready
205 if `bool(item)` evaluates as True.
206 """
207 service = self.get_service(service_name)
208 reqs = service.get('required_data', [])
209 return all(bool(req) for req in reqs)
210
211 def _load_ready_file(self):
212 if self._ready is not None:
213 return
214 if os.path.exists(self._ready_file):
215 with open(self._ready_file) as fp:
216 self._ready = set(json.load(fp))
217 else:
218 self._ready = set()
219
220 def _save_ready_file(self):
221 if self._ready is None:
222 return
223 with open(self._ready_file, 'w') as fp:
224 json.dump(list(self._ready), fp)
225
226 def save_ready(self, service_name):
227 """
228 Save an indicator that the given service is now data_ready.
229 """
230 self._load_ready_file()
231 self._ready.add(service_name)
232 self._save_ready_file()
233
234 def save_lost(self, service_name):
235 """
236 Save an indicator that the given service is no longer data_ready.
237 """
238 self._load_ready_file()
239 self._ready.discard(service_name)
240 self._save_ready_file()
241
242 def was_ready(self, service_name):
243 """
244 Determine if the given service was previously data_ready.
245 """
246 self._load_ready_file()
247 return service_name in self._ready
248
249
250class ManagerCallback(object):
251 """
252 Special case of a callback that takes the `ServiceManager` instance
253 in addition to the service name.
254
255 Subclasses should implement `__call__` which should accept three parameters:
256
257 * `manager` The `ServiceManager` instance
258 * `service_name` The name of the service it's being triggered for
259 * `event_name` The name of the event that this callback is handling
260 """
261 def __call__(self, manager, service_name, event_name):
262 raise NotImplementedError()
263
264
265class PortManagerCallback(ManagerCallback):
266 """
267 Callback class that will open or close ports, for use as either
268 a start or stop action.
269 """
270 def __call__(self, manager, service_name, event_name):
271 service = manager.get_service(service_name)
272 new_ports = service.get('ports', [])
273 port_file = os.path.join(hookenv.charm_dir(), '.{}.ports'.format(service_name))
274 if os.path.exists(port_file):
275 with open(port_file) as fp:
276 old_ports = fp.read().split(',')
277 for old_port in old_ports:
278 if bool(old_port):
279 old_port = int(old_port)
280 if old_port not in new_ports:
281 hookenv.close_port(old_port)
282 with open(port_file, 'w') as fp:
283 fp.write(','.join(str(port) for port in new_ports))
284 for port in new_ports:
285 if event_name == 'start':
286 hookenv.open_port(port)
287 elif event_name == 'stop':
288 hookenv.close_port(port)
289
290
291def service_stop(service_name):
292 """
293 Wrapper around host.service_stop to prevent spurious "unknown service"
294 messages in the logs.
295 """
296 if host.service_running(service_name):
297 host.service_stop(service_name)
298
299
300def service_restart(service_name):
301 """
302 Wrapper around host.service_restart to prevent spurious "unknown service"
303 messages in the logs.
304 """
305 if host.service_available(service_name):
306 if host.service_running(service_name):
307 host.service_restart(service_name)
308 else:
309 host.service_start(service_name)
310
311
312# Convenience aliases
313open_ports = close_ports = manage_ports = PortManagerCallback()
0314
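`PortManagerCallback` persists the last-opened ports in a `.{service}.ports` dotfile so a later run can close ports that were removed from the service definition. The diffing logic can be sketched standalone, with a recording list standing in for the hookenv open-port/close-port calls (the helper name and call log are hypothetical):

```python
import os
import tempfile

CALLS = []  # records ('open'|'close', port) in place of hookenv calls

def manage_ports_stub(port_file, new_ports, event_name):
    # Close any previously-opened port that is no longer wanted.
    if os.path.exists(port_file):
        with open(port_file) as fp:
            old_ports = [int(p) for p in fp.read().split(',') if p]
        for old_port in old_ports:
            if old_port not in new_ports:
                CALLS.append(('close', old_port))
    # Persist the current port list for the next invocation.
    with open(port_file, 'w') as fp:
        fp.write(','.join(str(port) for port in new_ports))
    for port in new_ports:
        CALLS.append(('open' if event_name == 'start' else 'close', port))

fd, pf = tempfile.mkstemp()
os.close(fd)
os.unlink(pf)  # begin with no persisted port file
manage_ports_stub(pf, [80, 443], 'start')
manage_ports_stub(pf, [443, 8080], 'start')  # 80 dropped, 8080 added
os.unlink(pf)
```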
=== added file 'hooks/charmhelpers/core/services/helpers.py'
--- hooks/charmhelpers/core/services/helpers.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/services/helpers.py 2015-01-20 18:33:44 +0000
@@ -0,0 +1,243 @@
1import os
2import yaml
3from charmhelpers.core import hookenv
4from charmhelpers.core import templating
5
6from charmhelpers.core.services.base import ManagerCallback
7
8
9__all__ = ['RelationContext', 'TemplateCallback',
10 'render_template', 'template']
11
12
13class RelationContext(dict):
14 """
15 Base class for a context generator that gets relation data from juju.
16
17 Subclasses must provide the attributes `name`, which is the name of the
18 interface of interest, `interface`, which is the type of the interface of
19 interest, and `required_keys`, which is the set of keys required for the
20 relation to be considered complete. The data for all interfaces matching
21    the `name` attribute that are complete will be used to populate the dictionary
22 values (see `get_data`, below).
23
24 The generated context will be namespaced under the relation :attr:`name`,
25 to prevent potential naming conflicts.
26
27 :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
28 :param list additional_required_keys: Extend the list of :attr:`required_keys`
29 """
30 name = None
31 interface = None
32 required_keys = []
33
34 def __init__(self, name=None, additional_required_keys=None):
35 if name is not None:
36 self.name = name
37 if additional_required_keys is not None:
38            self.required_keys = self.required_keys + additional_required_keys
39 self.get_data()
40
41 def __bool__(self):
42 """
43 Returns True if all of the required_keys are available.
44 """
45 return self.is_ready()
46
47 __nonzero__ = __bool__
48
49 def __repr__(self):
50 return super(RelationContext, self).__repr__()
51
52 def is_ready(self):
53 """
54 Returns True if all of the `required_keys` are available from any units.
55 """
56 ready = len(self.get(self.name, [])) > 0
57 if not ready:
58 hookenv.log('Incomplete relation: {}'.format(self.__class__.__name__), hookenv.DEBUG)
59 return ready
60
61 def _is_ready(self, unit_data):
62 """
63 Helper method that tests a set of relation data and returns True if
64 all of the `required_keys` are present.
65 """
66 return set(unit_data.keys()).issuperset(set(self.required_keys))
67
68 def get_data(self):
69 """
70 Retrieve the relation data for each unit involved in a relation and,
71 if complete, store it in a list under `self[self.name]`. This
72 is automatically called when the RelationContext is instantiated.
73
74        The units are sorted lexicographically, first by the service ID, then by
75        the unit ID. Thus, if an interface has two relations, 'db:1'
76        and 'db:2', with 'db:1' having two units, 'wordpress/0' and 'wordpress/1',
77 and 'db:2' having one unit, 'mediawiki/0', all of which have a complete
78 set of data, the relation data for the units will be stored in the
79 order: 'wordpress/0', 'wordpress/1', 'mediawiki/0'.
80
81 If you only care about a single unit on the relation, you can just
82 access it as `{{ interface[0]['key'] }}`. However, if you can at all
83 support multiple units on a relation, you should iterate over the list,
84 like::
85
86 {% for unit in interface -%}
87 {{ unit['key'] }}{% if not loop.last %},{% endif %}
88 {%- endfor %}
89
90 Note that since all sets of relation data from all related services and
91 units are in a single list, if you need to know which service or unit a
92 set of data came from, you'll need to extend this class to preserve
93 that information.
94 """
95 if not hookenv.relation_ids(self.name):
96 return
97
98 ns = self.setdefault(self.name, [])
99 for rid in sorted(hookenv.relation_ids(self.name)):
100 for unit in sorted(hookenv.related_units(rid)):
101 reldata = hookenv.relation_get(rid=rid, unit=unit)
102 if self._is_ready(reldata):
103 ns.append(reldata)
104
105 def provide_data(self):
106 """
107 Return data to be relation_set for this interface.
108 """
109 return {}
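The readiness rule `RelationContext` applies can be shown without a live relation: a unit's data counts only when it contains every key in `required_keys`, and the context is truthy once at least one complete unit was collected. The sample units below are hypothetical:

```python
# required_keys as used by MysqlRelation below.
REQUIRED_KEYS = ['host', 'user', 'password', 'database']

SAMPLE_UNITS = {
    'mysql/0': {'host': '10.0.0.2', 'user': 'jenkins',
                'password': 'hunter2', 'database': 'jenkins'},
    'mysql/1': {'host': '10.0.0.3'},  # incomplete: still settling
}

def complete_units(units, required_keys):
    """Keep only units whose data covers all required keys, sorted by unit."""
    return [data for _, data in sorted(units.items())
            if set(data).issuperset(required_keys)]

ready = complete_units(SAMPLE_UNITS, REQUIRED_KEYS)
```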
110
111
112class MysqlRelation(RelationContext):
113 """
114 Relation context for the `mysql` interface.
115
116 :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
117 :param list additional_required_keys: Extend the list of :attr:`required_keys`
118 """
119 name = 'db'
120 interface = 'mysql'
121 required_keys = ['host', 'user', 'password', 'database']
122
123
124class HttpRelation(RelationContext):
125 """
126 Relation context for the `http` interface.
127
128 :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
129 :param list additional_required_keys: Extend the list of :attr:`required_keys`
130 """
131 name = 'website'
132 interface = 'http'
133 required_keys = ['host', 'port']
134
135 def provide_data(self):
136 return {
137 'host': hookenv.unit_get('private-address'),
138 'port': 80,
139 }
140
141
142class RequiredConfig(dict):
143 """
 144 Data context that loads config options, one or more of which are mandatory.
145
146 Once the required options have been changed from their default values, all
147 config options will be available, namespaced under `config` to prevent
148 potential naming conflicts (for example, between a config option and a
149 relation property).
150
151 :param list *args: List of options that must be changed from their default values.
152 """
153
154 def __init__(self, *args):
155 self.required_options = args
156 self['config'] = hookenv.config()
157 with open(os.path.join(hookenv.charm_dir(), 'config.yaml')) as fp:
158 self.config = yaml.load(fp).get('options', {})
159
160 def __bool__(self):
161 for option in self.required_options:
162 if option not in self['config']:
163 return False
164 current_value = self['config'][option]
165 default_value = self.config[option].get('default')
166 if current_value == default_value:
167 return False
168 if current_value in (None, '') and default_value in (None, ''):
169 return False
170 return True
171
172 def __nonzero__(self):
173 return self.__bool__()
174
175
176class StoredContext(dict):
177 """
178 A data context that always returns the data that it was first created with.
179
 180 This is useful for one-time generation of values such as passwords,
 181 which should thereafter reuse the originally generated value instead
 182 of generating a new value on each run.
183 """
184 def __init__(self, file_name, config_data):
185 """
186 If the file exists, populate `self` with the data from the file.
187 Otherwise, populate with the given data and persist it to the file.
188 """
189 if os.path.exists(file_name):
190 self.update(self.read_context(file_name))
191 else:
192 self.store_context(file_name, config_data)
193 self.update(config_data)
194
195 def store_context(self, file_name, config_data):
196 if not os.path.isabs(file_name):
197 file_name = os.path.join(hookenv.charm_dir(), file_name)
198 with open(file_name, 'w') as file_stream:
199 os.fchmod(file_stream.fileno(), 0o600)
200 yaml.dump(config_data, file_stream)
201
202 def read_context(self, file_name):
203 if not os.path.isabs(file_name):
204 file_name = os.path.join(hookenv.charm_dir(), file_name)
205 with open(file_name, 'r') as file_stream:
206 data = yaml.load(file_stream)
207 if not data:
208 raise OSError("%s is empty" % file_name)
209 return data
210
211
212class TemplateCallback(ManagerCallback):
213 """
214 Callback class that will render a Jinja2 template, for use as a ready
215 action.
216
217 :param str source: The template source file, relative to
218 `$CHARM_DIR/templates`
219
220 :param str target: The target to write the rendered template to
221 :param str owner: The owner of the rendered file
222 :param str group: The group of the rendered file
223 :param int perms: The permissions of the rendered file
224 """
225 def __init__(self, source, target,
226 owner='root', group='root', perms=0o444):
227 self.source = source
228 self.target = target
229 self.owner = owner
230 self.group = group
231 self.perms = perms
232
233 def __call__(self, manager, service_name, event_name):
234 service = manager.get_service(service_name)
235 context = {}
236 for ctx in service.get('required_data', []):
237 context.update(ctx)
238 templating.render(self.source, self.target, context,
239 self.owner, self.group, self.perms)
240
241
242# Convenience aliases for templates
243render_template = template = TemplateCallback
0244
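For reviewers: the unit ordering that `RelationContext.get_data`'s docstring describes (sorted relation ids, then sorted units within each id) can be sketched outside of a hook context. The relation layout below is the docstring's own example, not live hook data:

```python
# Sketch of RelationContext.get_data's documented ordering: relation ids
# are visited in sorted order, then units within each id in sorted order.
# This dict stands in for hookenv.relation_ids()/related_units().
fake_relations = {
    'db:1': ['wordpress/0', 'wordpress/1'],
    'db:2': ['mediawiki/0'],
}

ordered_units = [unit
                 for rid in sorted(fake_relations)
                 for unit in sorted(fake_relations[rid])]

print(ordered_units)
```

This matches the order the docstring promises: 'wordpress/0', 'wordpress/1', 'mediawiki/0'.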
=== added file 'hooks/charmhelpers/core/sysctl.py'
--- hooks/charmhelpers/core/sysctl.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/sysctl.py 2015-01-20 18:33:44 +0000
@@ -0,0 +1,34 @@
1#!/usr/bin/env python
2# -*- coding: utf-8 -*-
3
4__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
5
6import yaml
7
8from subprocess import check_call
9
10from charmhelpers.core.hookenv import (
11 log,
12 DEBUG,
13)
14
15
16def create(sysctl_dict, sysctl_file):
17 """Creates a sysctl.conf file from a YAML associative array
18
 19 :param sysctl_dict: a YAML string of options, eg "{ 'kernel.max_pid': 1337 }"
 20 :type sysctl_dict: str or unicode
21 :param sysctl_file: path to the sysctl file to be saved
22 :type sysctl_file: str or unicode
23 :returns: None
24 """
25 sysctl_dict = yaml.load(sysctl_dict)
26
27 with open(sysctl_file, "w") as fd:
28 for key, value in sysctl_dict.items():
29 fd.write("{}={}\n".format(key, value))
30
31 log("Updating sysctl_file: %s values: %s" % (sysctl_file, sysctl_dict),
32 level=DEBUG)
33
34 check_call(["sysctl", "-p", sysctl_file])
035
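Note that `create()` calls `yaml.load()` on its first argument, so it actually expects a YAML string rather than a dict. Its file-writing step can be sketched on its own (the `render_sysctl` helper below is hypothetical; the real function also runs `sysctl -p`, which is skipped here):

```python
import yaml


def render_sysctl(sysctl_yaml):
    # Mirror of create()'s file-writing step: parse the YAML string and
    # emit one key=value line per option. The real helper then applies
    # the file with `sysctl -p`, which this sketch omits.
    opts = yaml.safe_load(sysctl_yaml)
    return ''.join('{}={}\n'.format(k, v) for k, v in sorted(opts.items()))


content = render_sysctl("{kernel.pid_max: 4194303, vm.swappiness: 10}")
print(content)
```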
=== added file 'hooks/charmhelpers/core/templating.py'
--- hooks/charmhelpers/core/templating.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/templating.py 2015-01-20 18:33:44 +0000
@@ -0,0 +1,52 @@
1import os
2
3from charmhelpers.core import host
4from charmhelpers.core import hookenv
5
6
7def render(source, target, context, owner='root', group='root',
8 perms=0o444, templates_dir=None):
9 """
10 Render a template.
11
12 The `source` path, if not absolute, is relative to the `templates_dir`.
13
14 The `target` path should be absolute.
15
16 The context should be a dict containing the values to be replaced in the
17 template.
18
19 The `owner`, `group`, and `perms` options will be passed to `write_file`.
20
21 If omitted, `templates_dir` defaults to the `templates` folder in the charm.
22
23 Note: Using this requires python-jinja2; if it is not installed, calling
24 this will attempt to use charmhelpers.fetch.apt_install to install it.
25 """
26 try:
27 from jinja2 import FileSystemLoader, Environment, exceptions
28 except ImportError:
29 try:
30 from charmhelpers.fetch import apt_install
31 except ImportError:
32 hookenv.log('Could not import jinja2, and could not import '
33 'charmhelpers.fetch to install it',
34 level=hookenv.ERROR)
35 raise
36 apt_install('python-jinja2', fatal=True)
37 from jinja2 import FileSystemLoader, Environment, exceptions
38
39 if templates_dir is None:
40 templates_dir = os.path.join(hookenv.charm_dir(), 'templates')
41 loader = Environment(loader=FileSystemLoader(templates_dir))
42 try:
 43 template = loader.get_template(source)
45 except exceptions.TemplateNotFound as e:
46 hookenv.log('Could not load template %s from %s.' %
47 (source, templates_dir),
48 level=hookenv.ERROR)
49 raise e
50 content = template.render(context)
51 host.mkdir(os.path.dirname(target), owner, group)
52 host.write_file(target, content, owner, group, perms)
053
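The rendering step delegates to a standard Jinja2 environment. A minimal sketch of the template semantics, using an in-memory `DictLoader` as a stand-in for the charm's `templates/` directory (`render()` itself uses a `FileSystemLoader` rooted at `$CHARM_DIR/templates`; the template name below is illustrative):

```python
from jinja2 import DictLoader, Environment

# In-memory stand-in for $CHARM_DIR/templates; the hypothetical
# 'jenkins-url.conf' template plays the role of `source`.
env = Environment(loader=DictLoader({
    'jenkins-url.conf': 'JENKINS_URL={{ url }}',
}))

# `context` is the dict of values substituted into the template.
rendered = env.get_template('jenkins-url.conf').render({'url': 'http://localhost:8080'})
print(rendered)
```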
=== added directory 'hooks/charmhelpers/fetch'
=== added file 'hooks/charmhelpers/fetch/__init__.py'
--- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/__init__.py 2015-01-20 18:33:44 +0000
@@ -0,0 +1,423 @@
1import importlib
2from tempfile import NamedTemporaryFile
3import time
4from yaml import safe_load
5from charmhelpers.core.host import (
6 lsb_release
7)
8import subprocess
9from charmhelpers.core.hookenv import (
10 config,
11 log,
12)
13import os
14
15import six
16if six.PY3:
17 from urllib.parse import urlparse, urlunparse
18else:
19 from urlparse import urlparse, urlunparse
20
21
22CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
23deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
24"""
25PROPOSED_POCKET = """# Proposed
26deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
27"""
28CLOUD_ARCHIVE_POCKETS = {
29 # Folsom
30 'folsom': 'precise-updates/folsom',
31 'precise-folsom': 'precise-updates/folsom',
32 'precise-folsom/updates': 'precise-updates/folsom',
33 'precise-updates/folsom': 'precise-updates/folsom',
34 'folsom/proposed': 'precise-proposed/folsom',
35 'precise-folsom/proposed': 'precise-proposed/folsom',
36 'precise-proposed/folsom': 'precise-proposed/folsom',
37 # Grizzly
38 'grizzly': 'precise-updates/grizzly',
39 'precise-grizzly': 'precise-updates/grizzly',
40 'precise-grizzly/updates': 'precise-updates/grizzly',
41 'precise-updates/grizzly': 'precise-updates/grizzly',
42 'grizzly/proposed': 'precise-proposed/grizzly',
43 'precise-grizzly/proposed': 'precise-proposed/grizzly',
44 'precise-proposed/grizzly': 'precise-proposed/grizzly',
45 # Havana
46 'havana': 'precise-updates/havana',
47 'precise-havana': 'precise-updates/havana',
48 'precise-havana/updates': 'precise-updates/havana',
49 'precise-updates/havana': 'precise-updates/havana',
50 'havana/proposed': 'precise-proposed/havana',
51 'precise-havana/proposed': 'precise-proposed/havana',
52 'precise-proposed/havana': 'precise-proposed/havana',
53 # Icehouse
54 'icehouse': 'precise-updates/icehouse',
55 'precise-icehouse': 'precise-updates/icehouse',
56 'precise-icehouse/updates': 'precise-updates/icehouse',
57 'precise-updates/icehouse': 'precise-updates/icehouse',
58 'icehouse/proposed': 'precise-proposed/icehouse',
59 'precise-icehouse/proposed': 'precise-proposed/icehouse',
60 'precise-proposed/icehouse': 'precise-proposed/icehouse',
61 # Juno
62 'juno': 'trusty-updates/juno',
63 'trusty-juno': 'trusty-updates/juno',
64 'trusty-juno/updates': 'trusty-updates/juno',
65 'trusty-updates/juno': 'trusty-updates/juno',
66 'juno/proposed': 'trusty-proposed/juno',
67 'trusty-juno/proposed': 'trusty-proposed/juno',
68 'trusty-proposed/juno': 'trusty-proposed/juno',
69 # Kilo
70 'kilo': 'trusty-updates/kilo',
71 'trusty-kilo': 'trusty-updates/kilo',
72 'trusty-kilo/updates': 'trusty-updates/kilo',
73 'trusty-updates/kilo': 'trusty-updates/kilo',
74 'kilo/proposed': 'trusty-proposed/kilo',
75 'trusty-kilo/proposed': 'trusty-proposed/kilo',
76 'trusty-proposed/kilo': 'trusty-proposed/kilo',
77}
78
 79# The order of this list is very important. Handlers should be listed from
80# least- to most-specific URL matching.
81FETCH_HANDLERS = (
82 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
83 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
84 'charmhelpers.fetch.giturl.GitUrlFetchHandler',
85)
86
87APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
88APT_NO_LOCK_RETRY_DELAY = 10 # Wait 10 seconds between apt lock checks.
89APT_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times.
90
91
92class SourceConfigError(Exception):
93 pass
94
95
96class UnhandledSource(Exception):
97 pass
98
99
100class AptLockError(Exception):
101 pass
102
103
104class BaseFetchHandler(object):
105
106 """Base class for FetchHandler implementations in fetch plugins"""
107
108 def can_handle(self, source):
109 """Returns True if the source can be handled. Otherwise returns
110 a string explaining why it cannot"""
111 return "Wrong source type"
112
113 def install(self, source):
114 """Try to download and unpack the source. Return the path to the
115 unpacked files or raise UnhandledSource."""
116 raise UnhandledSource("Wrong source type {}".format(source))
117
118 def parse_url(self, url):
119 return urlparse(url)
120
121 def base_url(self, url):
122 """Return url without querystring or fragment"""
123 parts = list(self.parse_url(url))
124 parts[4:] = ['' for i in parts[4:]]
125 return urlunparse(parts)
126
127
128def filter_installed_packages(packages):
129 """Returns a list of packages that require installation"""
130 cache = apt_cache()
131 _pkgs = []
132 for package in packages:
133 try:
134 p = cache[package]
135 p.current_ver or _pkgs.append(package)
136 except KeyError:
137 log('Package {} has no installation candidate.'.format(package),
138 level='WARNING')
139 _pkgs.append(package)
140 return _pkgs
141
142
143def apt_cache(in_memory=True):
144 """Build and return an apt cache"""
145 import apt_pkg
146 apt_pkg.init()
147 if in_memory:
148 apt_pkg.config.set("Dir::Cache::pkgcache", "")
149 apt_pkg.config.set("Dir::Cache::srcpkgcache", "")
150 return apt_pkg.Cache()
151
152
153def apt_install(packages, options=None, fatal=False):
154 """Install one or more packages"""
155 if options is None:
156 options = ['--option=Dpkg::Options::=--force-confold']
157
158 cmd = ['apt-get', '--assume-yes']
159 cmd.extend(options)
160 cmd.append('install')
161 if isinstance(packages, six.string_types):
162 cmd.append(packages)
163 else:
164 cmd.extend(packages)
165 log("Installing {} with options: {}".format(packages,
166 options))
167 _run_apt_command(cmd, fatal)
168
169
170def apt_upgrade(options=None, fatal=False, dist=False):
171 """Upgrade all packages"""
172 if options is None:
173 options = ['--option=Dpkg::Options::=--force-confold']
174
175 cmd = ['apt-get', '--assume-yes']
176 cmd.extend(options)
177 if dist:
178 cmd.append('dist-upgrade')
179 else:
180 cmd.append('upgrade')
181 log("Upgrading with options: {}".format(options))
182 _run_apt_command(cmd, fatal)
183
184
185def apt_update(fatal=False):
186 """Update local apt cache"""
187 cmd = ['apt-get', 'update']
188 _run_apt_command(cmd, fatal)
189
190
191def apt_purge(packages, fatal=False):
192 """Purge one or more packages"""
193 cmd = ['apt-get', '--assume-yes', 'purge']
194 if isinstance(packages, six.string_types):
195 cmd.append(packages)
196 else:
197 cmd.extend(packages)
198 log("Purging {}".format(packages))
199 _run_apt_command(cmd, fatal)
200
201
202def apt_hold(packages, fatal=False):
203 """Hold one or more packages"""
204 cmd = ['apt-mark', 'hold']
205 if isinstance(packages, six.string_types):
206 cmd.append(packages)
207 else:
208 cmd.extend(packages)
209 log("Holding {}".format(packages))
210
211 if fatal:
212 subprocess.check_call(cmd)
213 else:
214 subprocess.call(cmd)
215
216
217def add_source(source, key=None):
218 """Add a package source to this system.
219
220 @param source: a URL or sources.list entry, as supported by
221 add-apt-repository(1). Examples::
222
223 ppa:charmers/example
224 deb https://stub:key@private.example.com/ubuntu trusty main
225
226 In addition:
227 'proposed:' may be used to enable the standard 'proposed'
228 pocket for the release.
229 'cloud:' may be used to activate official cloud archive pockets,
230 such as 'cloud:icehouse'
231 'distro' may be used as a noop
232
233 @param key: A key to be added to the system's APT keyring and used
234 to verify the signatures on packages. Ideally, this should be an
235 ASCII format GPG public key including the block headers. A GPG key
236 id may also be used, but be aware that only insecure protocols are
 237 available to retrieve the actual public key from a public keyserver,
 238 placing your Juju environment at risk. PPA and cloud archive keys
 239 are added securely and automatically, so should not be provided.
240 """
241 if source is None:
242 log('Source is not present. Skipping')
243 return
244
245 if (source.startswith('ppa:') or
246 source.startswith('http') or
247 source.startswith('deb ') or
248 source.startswith('cloud-archive:')):
249 subprocess.check_call(['add-apt-repository', '--yes', source])
250 elif source.startswith('cloud:'):
251 apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
252 fatal=True)
253 pocket = source.split(':')[-1]
254 if pocket not in CLOUD_ARCHIVE_POCKETS:
255 raise SourceConfigError(
256 'Unsupported cloud: source option %s' %
257 pocket)
258 actual_pocket = CLOUD_ARCHIVE_POCKETS[pocket]
259 with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
260 apt.write(CLOUD_ARCHIVE.format(actual_pocket))
261 elif source == 'proposed':
262 release = lsb_release()['DISTRIB_CODENAME']
263 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
264 apt.write(PROPOSED_POCKET.format(release))
265 elif source == 'distro':
266 pass
267 else:
268 log("Unknown source: {!r}".format(source))
269
270 if key:
271 if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
272 with NamedTemporaryFile('w+') as key_file:
273 key_file.write(key)
274 key_file.flush()
275 key_file.seek(0)
276 subprocess.check_call(['apt-key', 'add', '-'], stdin=key_file)
277 else:
278 # Note that hkp: is in no way a secure protocol. Using a
279 # GPG key id is pointless from a security POV unless you
280 # absolutely trust your network and DNS.
281 subprocess.check_call(['apt-key', 'adv', '--keyserver',
282 'hkp://keyserver.ubuntu.com:80', '--recv',
283 key])
284
285
286def configure_sources(update=False,
287 sources_var='install_sources',
288 keys_var='install_keys'):
289 """
290 Configure multiple sources from charm configuration.
291
292 The lists are encoded as yaml fragments in the configuration.
 293 The fragment needs to be included as a string. Sources and their
294 corresponding keys are of the types supported by add_source().
295
296 Example config:
297 install_sources: |
298 - "ppa:foo"
299 - "http://example.com/repo precise main"
300 install_keys: |
301 - null
302 - "a1b2c3d4"
303
304 Note that 'null' (a.k.a. None) should not be quoted.
305 """
306 sources = safe_load((config(sources_var) or '').strip()) or []
307 keys = safe_load((config(keys_var) or '').strip()) or None
308
309 if isinstance(sources, six.string_types):
310 sources = [sources]
311
312 if keys is None:
313 for source in sources:
314 add_source(source, None)
315 else:
316 if isinstance(keys, six.string_types):
317 keys = [keys]
318
319 if len(sources) != len(keys):
320 raise SourceConfigError(
321 'Install sources and keys lists are different lengths')
322 for source, key in zip(sources, keys):
323 add_source(source, key)
324 if update:
325 apt_update(fatal=True)
326
327
328def install_remote(source, *args, **kwargs):
329 """
330 Install a file tree from a remote source
331
332 The specified source should be a url of the form:
333 scheme://[host]/path[#[option=value][&...]]
334
 335 Schemes supported are based on this module's submodules.
336 Options supported are submodule-specific.
337 Additional arguments are passed through to the submodule.
338
339 For example::
340
341 dest = install_remote('http://example.com/archive.tgz',
342 checksum='deadbeef',
343 hash_type='sha1')
344
345 This will download `archive.tgz`, validate it using SHA1 and, if
346 the file is ok, extract it and return the directory in which it
347 was extracted. If the checksum fails, it will raise
348 :class:`charmhelpers.core.host.ChecksumError`.
349 """
350 # We ONLY check for True here because can_handle may return a string
351 # explaining why it can't handle a given source.
352 handlers = [h for h in plugins() if h.can_handle(source) is True]
353 installed_to = None
354 for handler in handlers:
355 try:
356 installed_to = handler.install(source, *args, **kwargs)
357 except UnhandledSource:
358 pass
359 if not installed_to:
360 raise UnhandledSource("No handler found for source {}".format(source))
361 return installed_to
362
363
364def install_from_config(config_var_name):
365 charm_config = config()
366 source = charm_config[config_var_name]
367 return install_remote(source)
368
369
370def plugins(fetch_handlers=None):
371 if not fetch_handlers:
372 fetch_handlers = FETCH_HANDLERS
373 plugin_list = []
374 for handler_name in fetch_handlers:
375 package, classname = handler_name.rsplit('.', 1)
376 try:
377 handler_class = getattr(
378 importlib.import_module(package),
379 classname)
380 plugin_list.append(handler_class())
381 except (ImportError, AttributeError):
 382 # Skip missing plugins so that they can be omitted from
383 # installation if desired
384 log("FetchHandler {} not found, skipping plugin".format(
385 handler_name))
386 return plugin_list
387
388
389def _run_apt_command(cmd, fatal=False):
390 """
391 Run an APT command, checking output and retrying if the fatal flag is set
392 to True.
393
 394 :param cmd: str: The apt command to run.
 395 :param fatal: bool: Whether to check the command's return code and
 396 retry if the apt lock cannot be acquired.
397 """
398 env = os.environ.copy()
399
400 if 'DEBIAN_FRONTEND' not in env:
401 env['DEBIAN_FRONTEND'] = 'noninteractive'
402
403 if fatal:
404 retry_count = 0
405 result = None
406
407 # If the command is considered "fatal", we need to retry if the apt
408 # lock was not acquired.
409
410 while result is None or result == APT_NO_LOCK:
411 try:
412 result = subprocess.check_call(cmd, env=env)
413 except subprocess.CalledProcessError as e:
414 retry_count = retry_count + 1
415 if retry_count > APT_NO_LOCK_RETRY_COUNT:
416 raise
417 result = e.returncode
418 log("Couldn't acquire DPKG lock. Will retry in {} seconds."
419 "".format(APT_NO_LOCK_RETRY_DELAY))
420 time.sleep(APT_NO_LOCK_RETRY_DELAY)
421
422 else:
423 subprocess.call(cmd, env=env)
0424
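The `install_sources`/`install_keys` config fragments that `configure_sources()` documents are parsed with `yaml.safe_load` before sources and keys are zipped together. A sketch using the docstring's own example values:

```python
from yaml import safe_load

# The charm config fragments from configure_sources()'s docstring,
# parsed the same way the helper parses them.
install_sources = '- "ppa:foo"\n- "http://example.com/repo precise main"\n'
install_keys = '- null\n- "a1b2c3d4"\n'

sources = safe_load(install_sources.strip()) or []
keys = safe_load(install_keys.strip()) or None

# Unquoted `null` becomes None, so the first source gets no key.
print(list(zip(sources, keys)))
```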
=== added file 'hooks/charmhelpers/fetch/archiveurl.py'
--- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/archiveurl.py 2015-01-20 18:33:44 +0000
@@ -0,0 +1,145 @@
1import os
2import hashlib
3import re
4
5import six
6if six.PY3:
7 from urllib.request import (
8 build_opener, install_opener, urlopen, urlretrieve,
9 HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
10 )
11 from urllib.parse import urlparse, urlunparse, parse_qs
12 from urllib.error import URLError
13else:
14 from urllib import urlretrieve
15 from urllib2 import (
16 build_opener, install_opener, urlopen,
17 HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
18 URLError
19 )
20 from urlparse import urlparse, urlunparse, parse_qs
21
22from charmhelpers.fetch import (
23 BaseFetchHandler,
24 UnhandledSource
25)
26from charmhelpers.payload.archive import (
27 get_archive_handler,
28 extract,
29)
30from charmhelpers.core.host import mkdir, check_hash
31
32
33def splituser(host):
34 '''urllib.splituser(), but six's support of this seems broken'''
35 _userprog = re.compile('^(.*)@(.*)$')
36 match = _userprog.match(host)
37 if match:
38 return match.group(1, 2)
39 return None, host
40
41
42def splitpasswd(user):
43 '''urllib.splitpasswd(), but six's support of this is missing'''
44 _passwdprog = re.compile('^([^:]*):(.*)$', re.S)
45 match = _passwdprog.match(user)
46 if match:
47 return match.group(1, 2)
48 return user, None
49
50
51class ArchiveUrlFetchHandler(BaseFetchHandler):
52 """
53 Handler to download archive files from arbitrary URLs.
54
55 Can fetch from http, https, ftp, and file URLs.
56
57 Can install either tarballs (.tar, .tgz, .tbz2, etc) or zip files.
58
59 Installs the contents of the archive in $CHARM_DIR/fetched/.
60 """
61 def can_handle(self, source):
62 url_parts = self.parse_url(source)
63 if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
64 return "Wrong source type"
65 if get_archive_handler(self.base_url(source)):
66 return True
67 return False
68
69 def download(self, source, dest):
70 """
71 Download an archive file.
72
73 :param str source: URL pointing to an archive file.
74 :param str dest: Local path location to download archive file to.
75 """
 76 # propagate all exceptions
77 # URLError, OSError, etc
78 proto, netloc, path, params, query, fragment = urlparse(source)
79 if proto in ('http', 'https'):
80 auth, barehost = splituser(netloc)
81 if auth is not None:
82 source = urlunparse((proto, barehost, path, params, query, fragment))
83 username, password = splitpasswd(auth)
84 passman = HTTPPasswordMgrWithDefaultRealm()
85 # Realm is set to None in add_password to force the username and password
 87 # to be used regardless of the realm
87 passman.add_password(None, source, username, password)
88 authhandler = HTTPBasicAuthHandler(passman)
89 opener = build_opener(authhandler)
90 install_opener(opener)
91 response = urlopen(source)
92 try:
93 with open(dest, 'w') as dest_file:
94 dest_file.write(response.read())
95 except Exception as e:
96 if os.path.isfile(dest):
97 os.unlink(dest)
98 raise e
99
 100 # Mandatory file validation via SHA-1 or MD5 hashing.
101 def download_and_validate(self, url, hashsum, validate="sha1"):
102 tempfile, headers = urlretrieve(url)
103 check_hash(tempfile, hashsum, validate)
104 return tempfile
105
106 def install(self, source, dest=None, checksum=None, hash_type='sha1'):
107 """
108 Download and install an archive file, with optional checksum validation.
109
110 The checksum can also be given on the `source` URL's fragment.
111 For example::
112
113 handler.install('http://example.com/file.tgz#sha1=deadbeef')
114
115 :param str source: URL pointing to an archive file.
116 :param str dest: Local destination path to install to. If not given,
117 installs to `$CHARM_DIR/archives/archive_file_name`.
118 :param str checksum: If given, validate the archive file after download.
119 :param str hash_type: Algorithm used to generate `checksum`.
 120 Can be any hash algorithm supported by :mod:`hashlib`,
121 such as md5, sha1, sha256, sha512, etc.
122
123 """
124 url_parts = self.parse_url(source)
125 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
126 if not os.path.exists(dest_dir):
127 mkdir(dest_dir, perms=0o755)
128 dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
129 try:
130 self.download(source, dld_file)
131 except URLError as e:
132 raise UnhandledSource(e.reason)
133 except OSError as e:
134 raise UnhandledSource(e.strerror)
135 options = parse_qs(url_parts.fragment)
136 for key, value in options.items():
137 if not six.PY3:
138 algorithms = hashlib.algorithms
139 else:
140 algorithms = hashlib.algorithms_available
141 if key in algorithms:
142 check_hash(dld_file, value, key)
143 if checksum:
144 check_hash(dld_file, checksum, hash_type)
145 return extract(dld_file, dest)
0146
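The checksum-in-fragment convention that `ArchiveUrlFetchHandler.install()` documents (`#sha1=deadbeef`) is just `parse_qs` applied to the URL fragment, filtered against the algorithms `hashlib` knows about. A sketch with Python 3 imports (the handler itself selects six-compatible equivalents at import time; the URL is illustrative):

```python
import hashlib
from urllib.parse import parse_qs, urlparse

# How install() reads a checksum from the URL fragment.
url = 'http://example.com/file.tgz#sha1=deadbeef'
options = parse_qs(urlparse(url).fragment)

# Only keys naming a real hashlib algorithm are used for validation.
valid = {k: v for k, v in options.items() if k in hashlib.algorithms_available}
print(valid)
```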
=== added file 'hooks/charmhelpers/fetch/bzrurl.py'
--- hooks/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/bzrurl.py 2015-01-20 18:33:44 +0000
@@ -0,0 +1,54 @@
1import os
2from charmhelpers.fetch import (
3 BaseFetchHandler,
4 UnhandledSource
5)
6from charmhelpers.core.host import mkdir
7
8import six
9if six.PY3:
10 raise ImportError('bzrlib does not support Python3')
11
12try:
13 from bzrlib.branch import Branch
14except ImportError:
15 from charmhelpers.fetch import apt_install
16 apt_install("python-bzrlib")
17 from bzrlib.branch import Branch
18
19
20class BzrUrlFetchHandler(BaseFetchHandler):
21 """Handler for bazaar branches via generic and lp URLs"""
22 def can_handle(self, source):
23 url_parts = self.parse_url(source)
24 if url_parts.scheme not in ('bzr+ssh', 'lp'):
25 return False
26 else:
27 return True
28
29 def branch(self, source, dest):
30 url_parts = self.parse_url(source)
31 # If we use lp:branchname scheme we need to load plugins
32 if not self.can_handle(source):
33 raise UnhandledSource("Cannot handle {}".format(source))
34 if url_parts.scheme == "lp":
35 from bzrlib.plugin import load_plugins
36 load_plugins()
37 try:
38 remote_branch = Branch.open(source)
39 remote_branch.bzrdir.sprout(dest).open_branch()
40 except Exception as e:
41 raise e
42
43 def install(self, source):
44 url_parts = self.parse_url(source)
45 branch_name = url_parts.path.strip("/").split("/")[-1]
46 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
47 branch_name)
48 if not os.path.exists(dest_dir):
49 mkdir(dest_dir, perms=0o755)
50 try:
51 self.branch(source, dest_dir)
52 except OSError as e:
53 raise UnhandledSource(e.strerror)
54 return dest_dir
055
=== added file 'hooks/charmhelpers/fetch/giturl.py'
--- hooks/charmhelpers/fetch/giturl.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/giturl.py 2015-01-20 18:33:44 +0000
@@ -0,0 +1,51 @@
1import os
2from charmhelpers.fetch import (
3 BaseFetchHandler,
4 UnhandledSource
5)
6from charmhelpers.core.host import mkdir
7
8import six
9if six.PY3:
10 raise ImportError('GitPython does not support Python 3')
11
12try:
13 from git import Repo
14except ImportError:
15 from charmhelpers.fetch import apt_install
16 apt_install("python-git")
17 from git import Repo
18
19
20class GitUrlFetchHandler(BaseFetchHandler):
21 """Handler for git branches via generic and github URLs"""
22 def can_handle(self, source):
23 url_parts = self.parse_url(source)
24 # TODO (mattyw) no support for ssh git@ yet
25 if url_parts.scheme not in ('http', 'https', 'git'):
26 return False
27 else:
28 return True
29
30 def clone(self, source, dest, branch):
31 if not self.can_handle(source):
32 raise UnhandledSource("Cannot handle {}".format(source))
33
34 repo = Repo.clone_from(source, dest)
35 repo.git.checkout(branch)
36
37 def install(self, source, branch="master", dest=None):
38 url_parts = self.parse_url(source)
39 branch_name = url_parts.path.strip("/").split("/")[-1]
40 if dest:
41 dest_dir = os.path.join(dest, branch_name)
42 else:
43 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
44 branch_name)
45 if not os.path.exists(dest_dir):
46 mkdir(dest_dir, perms=0o755)
47 try:
48 self.clone(source, dest_dir, branch)
49 except OSError as e:
50 raise UnhandledSource(e.strerror)
51 return dest_dir
052
=== added directory 'hooks/charmhelpers/payload'
=== added file 'hooks/charmhelpers/payload/__init__.py'
--- hooks/charmhelpers/payload/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/payload/__init__.py 2015-01-20 18:33:44 +0000
@@ -0,0 +1,1 @@
1"Tools for working with files injected into a charm just before deployment."
02
=== added file 'hooks/charmhelpers/payload/execd.py'
--- hooks/charmhelpers/payload/execd.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/payload/execd.py 2015-01-20 18:33:44 +0000
@@ -0,0 +1,50 @@
1#!/usr/bin/env python
2
3import os
4import sys
5import subprocess
6from charmhelpers.core import hookenv
7
8
9def default_execd_dir():
10 return os.path.join(os.environ['CHARM_DIR'], 'exec.d')
11
12
13def execd_module_paths(execd_dir=None):
14 """Generate a list of full paths to modules within execd_dir."""
15 if not execd_dir:
16 execd_dir = default_execd_dir()
17
18 if not os.path.exists(execd_dir):
19 return
20
21 for subpath in os.listdir(execd_dir):
22 module = os.path.join(execd_dir, subpath)
23 if os.path.isdir(module):
24 yield module
25
26
27def execd_submodule_paths(command, execd_dir=None):
 28 """Generate a list of full paths to the specified command within execd_dir.
29 """
30 for module_path in execd_module_paths(execd_dir):
31 path = os.path.join(module_path, command)
32 if os.access(path, os.X_OK) and os.path.isfile(path):
33 yield path
34
35
36def execd_run(command, execd_dir=None, die_on_error=False, stderr=None):
37 """Run command for each module within execd_dir which defines it."""
38 for submodule_path in execd_submodule_paths(command, execd_dir):
39 try:
40 subprocess.check_call(submodule_path, shell=True, stderr=stderr)
41 except subprocess.CalledProcessError as e:
42 hookenv.log("Error ({}) running {}. Output: {}".format(
43 e.returncode, e.cmd, e.output))
44 if die_on_error:
45 sys.exit(e.returncode)
46
47
48def execd_preinstall(execd_dir=None):
49 """Run charm-pre-install for each module within execd_dir."""
50 execd_run('charm-pre-install', execd_dir=execd_dir)
051
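The execd discovery step is a simple directory walk: only subdirectories of `exec.d` count as modules, plain files are ignored. A self-contained sketch of the same generator against a throwaway directory (the `basenode` name is illustrative):

```python
import os
import tempfile


def execd_module_paths(execd_dir):
    # Same generator as charmhelpers.payload.execd: yield each
    # subdirectory of execd_dir; plain files are skipped.
    if not os.path.exists(execd_dir):
        return
    for subpath in os.listdir(execd_dir):
        module = os.path.join(execd_dir, subpath)
        if os.path.isdir(module):
            yield module


base = tempfile.mkdtemp()
os.mkdir(os.path.join(base, 'basenode'))          # counts as a module
open(os.path.join(base, 'not-a-module'), 'w').close()  # ignored: not a dir

modules = list(execd_module_paths(base))
print(modules)
```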
=== modified file 'hooks/config-changed'
--- hooks/config-changed 2011-09-22 14:46:56 +0000
+++ hooks/config-changed 1970-01-01 00:00:00 +0000
@@ -1,7 +0,0 @@
1#!/bin/sh
2set -e
3
4home=`dirname $0`
5
6juju-log "Reconfiguring charm by installing hook again."
7exec $home/install
80
=== target is u'jenkins_hooks.py'
=== removed file 'hooks/delnode'
--- hooks/delnode 2011-09-22 14:46:56 +0000
+++ hooks/delnode 1970-01-01 00:00:00 +0000
@@ -1,16 +0,0 @@
1#!/usr/bin/python
2
3import jenkins
4import sys
5
6host=sys.argv[1]
7username=sys.argv[2]
8password=sys.argv[3]
9
10l_jenkins = jenkins.Jenkins("http://localhost:8080/",username,password)
11
12if l_jenkins.node_exists(host):
13 print "Node exists"
14 l_jenkins.delete_node(host)
15else:
16 print "Node does not exist - not deleting"
170
=== modified file 'hooks/install'
--- hooks/install 2014-04-17 12:35:18 +0000
+++ hooks/install 1970-01-01 00:00:00 +0000
@@ -1,151 +0,0 @@
1#!/bin/bash
2
3set -eu
4
5RELEASE=$(config-get release)
6ADMIN_USERNAME=$(config-get username)
7ADMIN_PASSWORD=$(config-get password)
8PLUGINS=$(config-get plugins)
9PLUGINS_SITE=$(config-get plugins-site)
10PLUGINS_CHECK_CERT=$(config-get plugins-check-certificate)
11REMOVE_UNLISTED_PLUGINS=$(config-get remove-unlisted-plugins)
12CWD=$(dirname $0)
13JENKINS_HOME=/var/lib/jenkins
14
15setup_source () {
16 # Do something with < Oneiric releases - maybe PPA
17 # apt-get -y install python-software-properties
18 # add-apt-repository ppa:hudson-ubuntu/testing
19 juju-log "Configuring source of jenkins as $RELEASE"
20 # Configure to use upstream archives
21 # lts - debian-stable
22 # trunk - debian
23 case $RELEASE in
24 lts)
25 SOURCE="debian-stable";;
26 trunk)
27 SOURCE="debian";;
28 *)
29 juju-log "release configuration not recognised" && exit 1;;
30 esac
31 # Setup archive to use appropriate jenkins upstream
32 wget -q -O - http://pkg.jenkins-ci.org/$SOURCE/jenkins-ci.org.key | apt-key add -
33 echo "deb http://pkg.jenkins-ci.org/$SOURCE binary/" \
34 > /etc/apt/sources.list.d/jenkins.list
35 apt-get update || true
36}
37# Only setup the source if jenkins is not already installed
38# this makes the config 'release' immutable - i.e. you
39# can change source once deployed
40[[ -d /var/lib/jenkins ]] || setup_source
41
42# Install jenkins
43install_jenkins () {
44 juju-log "Installing/upgrading jenkins..."
45 apt-get -y install -qq jenkins default-jre-headless
46}
47# Re-run whenever called to pickup any updates
48install_jenkins
49
50configure_jenkins_user () {
51 juju-log "Configuring user for jenkins..."
52 # Check to see if password provided
53 if [ -z "$ADMIN_PASSWORD" ]
54 then
55 # Generate a random one for security
56 # User can then override using juju set
57 ADMIN_PASSWORD=$(< /dev/urandom tr -dc A-Za-z | head -c16)
58 echo $ADMIN_PASSWORD > $JENKINS_HOME/.admin_password
59 chmod 0600 $JENKINS_HOME/.admin_password
60 fi
61 # Generate Salt and Hash Password for Jenkins
62 SALT="$(< /dev/urandom tr -dc A-Za-z | head -c6)"
63 PASSWORD="$SALT:$(echo -n "$ADMIN_PASSWORD{$SALT}" | shasum -a 256 | awk '{ print $1 }')"
64 mkdir -p $JENKINS_HOME/users/$ADMIN_USERNAME
65 sed -e s#__USERNAME__#$ADMIN_USERNAME# -e s#__PASSWORD__#$PASSWORD# \
66 $CWD/../templates/user-config.xml > $JENKINS_HOME/users/$ADMIN_USERNAME/config.xml
67 chown -R jenkins:nogroup $JENKINS_HOME/users
68}
69# Always run - even if config has not changed, its safe
70configure_jenkins_user
71
72boostrap_jenkins_configuration (){
73 juju-log "Bootstrapping secure initial configuration in Jenkins..."
74 cp $CWD/../templates/jenkins-config.xml $JENKINS_HOME/config.xml
75 chown jenkins:nogroup $JENKINS_HOME/config.xml
76 touch /var/lib/jenkins/config.bootstrapped
77}
78# Only run on first invocation otherwise we blast
79# any configuration changes made
80[[ -f /var/lib/jenkins/config.bootstrapped ]] || boostrap_jenkins_configuration
81
82install_plugins(){
83 juju-log "Installing plugins ($PLUGINS)"
84 mkdir -p $JENKINS_HOME/plugins
85 chmod a+rx $JENKINS_HOME/plugins
86 chown jenkins:nogroup $JENKINS_HOME/plugins
87 track_dir=`mktemp -d /tmp/plugins.installed.XXXXXXXX`
88 installed_plugins=`find $JENKINS_HOME/plugins -name '*.hpi'`
89 [ -z "$installed_plugins" ] || ln -s $installed_plugins $track_dir
90 local plugin=""
91 local plugin_file=""
92 local opts=""
93 pushd $JENKINS_HOME/plugins
94 for plugin in $PLUGINS ; do
95 plugin_file=$JENKINS_HOME/plugins/$plugin.hpi
96 # Note that by default wget verifies certificates as of 1.10.
97 if [ "$PLUGINS_CHECK_CERT" = "no" ] ; then
98 opts="--no-check-certificate"
99 fi
100 wget $opts --timestamping $PLUGINS_SITE/latest/$plugin.hpi
101 chmod a+r $plugin_file
102 rm -f $track_dir/$plugin.hpi
103 done
104 popd
105 # Warn about undesirable plugins, or remove them.
106 unlisted_plugins=`ls $track_dir`
107 [[ -n "$unlisted_plugins" ]] || return 0
108 if [[ $REMOVE_UNLISTED_PLUGINS = "yes" ]] ; then
109 for plugin_file in `ls $track_dir` ; do
110 rm -vf $JENKINS_HOME/plugins/$plugin_file
111 done
112 else
113 juju-log -l WARNING "Unlisted plugins: (`ls $track_dir`) Not removed. Set remove-unlisted-plugins to yes to clear them away."
114 fi
115}
116
117install_plugins
118
119juju-log "Restarting jenkins to pickup configuration changes"
120service jenkins restart
121
122# Install helpers - python jenkins ++
123install_python_jenkins () {
124 juju-log "Installing python-jenkins..."
125 apt-get -y install -qq python-jenkins
126}
127# Only install once
128[[ -d /usr/share/pyshared/jenkins ]] || install_python_jenkins
129
130# Install some tools - can get set up deployment time
131install_tools () {
132 juju-log "Installing tools..."
133 apt-get -y install -qq `config-get tools`
134}
135# Always run - tools might get re-configured
136install_tools
137
138juju-log "Opening ports"
139open-port 8080
140
141# Execute any hook overlay which may be provided
142# by forks of this charm
143if [ -d hooks/install.d ]
144then
145 for i in `ls -1 hooks/install.d/*`
146 do
147 [[ -x $i ]] && . ./$i
148 done
149fi
150
151exit 0
1520
=== target is u'jenkins_hooks.py'
=== added file 'hooks/jenkins_hooks.py'
--- hooks/jenkins_hooks.py 1970-01-01 00:00:00 +0000
+++ hooks/jenkins_hooks.py 2015-01-20 18:33:44 +0000
@@ -0,0 +1,220 @@
1#!/usr/bin/python
2import grp
3import hashlib
4import os
5import pwd
6import shutil
7import subprocess
8import sys
9
10from charmhelpers.core.hookenv import (
11 Hooks,
12 UnregisteredHookError,
13 config,
14 remote_unit,
15 relation_get,
16 relation_set,
17 relation_ids,
18 unit_get,
19 open_port,
20 log,
21 DEBUG,
22 INFO,
23)
24from charmhelpers.fetch import apt_install
25from charmhelpers.core.host import (
26 service_start,
27 service_stop,
28)
29from charmhelpers.payload.execd import execd_preinstall
30from jenkins_utils import (
31 JENKINS_HOME,
32 JENKINS_USERS,
33 TEMPLATES_DIR,
34 add_node,
35 del_node,
36 setup_source,
37 install_jenkins_plugins,
38)
39
40hooks = Hooks()
41
42
43@hooks.hook('install')
44def install():
45 execd_preinstall('hooks/install.d')
46 # Only setup the source if jenkins is not already installed i.e. makes the
47 # config 'release' immutable so you can't change source once deployed
48 setup_source(config('release'))
49 config_changed()
50 open_port(8080)
51
52
53@hooks.hook('config-changed')
54def config_changed():
55 # Re-run whenever called to pickup any updates
56 log("Installing/upgrading jenkins.", level=DEBUG)
57 apt_install(['jenkins', 'default-jre-headless', 'pwgen'], fatal=True)
58
59 # Always run - even if config has not changed, its safe
60 log("Configuring user for jenkins.", level=DEBUG)
61 # Check to see if password provided
62 admin_passwd = config('password')
63 if not admin_passwd:
64 # Generate a random one for security. User can then override using juju
65 # set.
66 admin_passwd = subprocess.check_output(['pwgen', '-N1', '15'])
67 admin_passwd = admin_passwd.strip()
68
69 passwd_file = os.path.join(JENKINS_HOME, '.admin_password')
70 with open(passwd_file, 'w+') as fd:
71 fd.write(admin_passwd)
72
73    os.chmod(passwd_file, 0o600)
74
75 jenkins_uid = pwd.getpwnam('jenkins').pw_uid
76 jenkins_gid = grp.getgrnam('jenkins').gr_gid
77 nogroup_gid = grp.getgrnam('nogroup').gr_gid
78
79 # Generate Salt and Hash Password for Jenkins
80 salt = subprocess.check_output(['pwgen', '-N1', '6']).strip()
81 csum = hashlib.sha256("%s{%s}" % (admin_passwd, salt)).hexdigest()
82 salty_password = "%s:%s" % (salt, csum)
83
84 admin_username = config('username')
85 admin_user_home = os.path.join(JENKINS_USERS, admin_username)
86 if not os.path.isdir(admin_user_home):
87 os.makedirs(admin_user_home, 0o0700)
88 os.chown(JENKINS_USERS, jenkins_uid, nogroup_gid)
89 os.chown(admin_user_home, jenkins_uid, nogroup_gid)
90
91 # NOTE: overwriting will destroy any data added by jenkins or via the ui
92 admin_user_config = os.path.join(admin_user_home, 'config.xml')
93 with open(os.path.join(TEMPLATES_DIR, 'user-config.xml')) as src_fd:
94 with open(admin_user_config, 'w') as dst_fd:
95 lines = src_fd.readlines()
96 for line in lines:
97 kvs = {'__USERNAME__': admin_username,
98 '__PASSWORD__': salty_password}
99
100 for key, val in kvs.iteritems():
101 if key in line:
102 line = line.replace(key, val)
103
104 dst_fd.write(line)
105 os.chown(admin_user_config, jenkins_uid, nogroup_gid)
106
107 # Only run on first invocation otherwise we blast
108 # any configuration changes made
109 jenkins_bootstrap_flag = '/var/lib/jenkins/config.bootstrapped'
110 if not os.path.exists(jenkins_bootstrap_flag):
111 log("Bootstrapping secure initial configuration in Jenkins.",
112 level=DEBUG)
113 src = os.path.join(TEMPLATES_DIR, 'jenkins-config.xml')
114 dst = os.path.join(JENKINS_HOME, 'config.xml')
115 shutil.copy(src, dst)
116 os.chown(dst, jenkins_uid, nogroup_gid)
117        # Touch the flag file so bootstrap only runs once
118 with open(jenkins_bootstrap_flag, 'w'):
119 pass
120
121 log("Stopping jenkins for plugin update(s)", level=DEBUG)
122 service_stop('jenkins')
123 install_jenkins_plugins(jenkins_uid, jenkins_gid)
124 log("Starting jenkins to pickup configuration changes", level=DEBUG)
125 service_start('jenkins')
126
127 apt_install(['python-jenkins'], fatal=True)
128 tools = config('tools')
129 if tools:
130 log("Installing tools.", level=DEBUG)
131 apt_install(tools.split(), fatal=True)
132
133
134@hooks.hook('start')
135def start():
136 service_start('jenkins')
137
138
139@hooks.hook('stop')
140def stop():
141 service_stop('jenkins')
142
143
144@hooks.hook('upgrade-charm')
145def upgrade_charm():
146 log("Upgrading charm.", level=DEBUG)
147 config_changed()
148
149
150@hooks.hook('master-relation-joined')
151def master_relation_joined():
152 HOSTNAME = unit_get('private-address')
153 log("Setting url relation to http://%s:8080" % (HOSTNAME), level=DEBUG)
154 relation_set(url="http://%s:8080" % (HOSTNAME))
155
156
157@hooks.hook('master-relation-changed')
158def master_relation_changed():
159 PASSWORD = config('password')
160    if not PASSWORD:
161 with open('/var/lib/jenkins/.admin_password', 'r') as fd:
162 PASSWORD = fd.read()
163
164 required_settings = ['executors', 'labels', 'slavehost']
165 settings = relation_get()
166 missing = [s for s in required_settings if s not in settings]
167 if missing:
168 log("Not all required relation settings received yet (missing=%s) - "
169 "skipping" % (', '.join(missing)), level=INFO)
170 return
171
172 slavehost = settings['slavehost']
173 executors = settings['executors']
174 labels = settings['labels']
175
176 # Double check to see if this has happened yet
177    if not slavehost:
178 log("Slave host not yet defined - skipping", level=INFO)
179 return
180
181 log("Adding slave with hostname %s." % (slavehost), level=DEBUG)
182 add_node(slavehost, executors, labels, config('username'), PASSWORD)
183 log("Node slave %s added." % (slavehost), level=DEBUG)
184
185
186@hooks.hook('master-relation-departed')
187def master_relation_departed():
188 # Slave hostname is derived from unit name so
189 # this is pretty safe
190 slavehost = remote_unit()
191 log("Deleting slave with hostname %s." % (slavehost), level=DEBUG)
192 del_node(slavehost, config('username'), config('password'))
193
194
195@hooks.hook('master-relation-broken')
196def master_relation_broken():
197    password = config('password')
198    if not password:
199        passwd_file = os.path.join(JENKINS_HOME, '.admin_password')
200        with open(passwd_file, 'r') as fd:
201            password = fd.read()
202
203    for member in relation_ids():
204        member = member.replace('/', '-')
205        log("Removing node %s from Jenkins master." % (member), level=DEBUG)
206        del_node(member, config('username'), password)
207
208
209@hooks.hook('website-relation-joined')
210def website_relation_joined():
211 hostname = unit_get('private-address')
212 log("Setting website URL to %s:8080" % (hostname), level=DEBUG)
213 relation_set(port=8080, hostname=hostname)
214
215
216if __name__ == '__main__':
217 try:
218 hooks.execute(sys.argv)
219 except UnregisteredHookError as e:
220 log('Unknown hook {} - skipping.'.format(e), level=INFO)
0221
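Reviewer note: the salt-and-hash logic in `config_changed` above replaces the old shell `shasum` pipeline. A standalone sketch of the same transformation (the function name is illustrative and not part of the charm; shown in Python 3 form with an explicit encode):

```python
import hashlib


def jenkins_salted_password(password, salt):
    # Jenkins' legacy password store keeps passwordHash as
    # "<salt>:<sha256 hex digest of 'password{salt}'>".
    digest = hashlib.sha256(
        ("%s{%s}" % (password, salt)).encode("utf-8")).hexdigest()
    return "%s:%s" % (salt, digest)
```

The resulting string is what gets substituted for `__PASSWORD__` in the `user-config.xml` template.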
=== added file 'hooks/jenkins_utils.py'
--- hooks/jenkins_utils.py 1970-01-01 00:00:00 +0000
+++ hooks/jenkins_utils.py 2015-01-20 18:33:44 +0000
@@ -0,0 +1,178 @@
1#!/usr/bin/python
2import glob
3import os
4import shutil
5import subprocess
6import tempfile
7
8from charmhelpers.core.hookenv import (
9 config,
10 log,
11 DEBUG,
12 INFO,
13 WARNING,
14)
15from charmhelpers.fetch import (
16 apt_update,
17 add_source,
18)
19
20from charmhelpers.core.decorators import (
21 retry_on_exception,
22)
23
24JENKINS_HOME = '/var/lib/jenkins'
25JENKINS_USERS = os.path.join(JENKINS_HOME, 'users')
26JENKINS_PLUGINS = os.path.join(JENKINS_HOME, 'plugins')
27TEMPLATES_DIR = 'templates'
28
29
30def add_node(host, executors, labels, username, password):
31 import jenkins
32
33 @retry_on_exception(2, 2, exc_type=jenkins.JenkinsException)
34 def _add_node(*args, **kwargs):
35 l_jenkins = jenkins.Jenkins("http://localhost:8080/", username,
36 password)
37
38 if l_jenkins.node_exists(host):
39 log("Node exists - not adding", level=DEBUG)
40 return
41
42 log("Adding node '%s' to Jenkins master" % (host), level=INFO)
43 l_jenkins.create_node(host, int(executors) * 2, host, labels=labels)
44
45 if not l_jenkins.node_exists(host):
46 log("Failed to create node '%s'" % (host), level=WARNING)
47
48 return _add_node()
49
50
51def del_node(host, username, password):
52 import jenkins
53
54 l_jenkins = jenkins.Jenkins("http://localhost:8080/", username, password)
55
56 if l_jenkins.node_exists(host):
57 log("Node '%s' exists" % (host), level=DEBUG)
58 l_jenkins.delete_node(host)
59 else:
60 log("Node '%s' does not exist - not deleting" % (host), level=INFO)
61
62
63def setup_source(release):
64 """Install Jenkins archive."""
65 log("Configuring source of jenkins as %s" % release, level=INFO)
66
67 # Configure to use upstream archives
68 # lts - debian-stable
69 # trunk - debian
70 if release == 'lts':
71 source = "debian-stable"
72 elif release == 'trunk':
73 source = "debian"
74 else:
75 errmsg = "Release '%s' configuration not recognised" % (release)
76 raise Exception(errmsg)
77
78 # Setup archive to use appropriate jenkins upstream
79 key = 'http://pkg.jenkins-ci.org/%s/jenkins-ci.org.key' % source
80 target = "%s-%s" % (source, 'jenkins-ci.org.key')
81 subprocess.check_call(['wget', '-q', '-O', target, key])
82 with open(target, 'r') as fd:
83 key = fd.read()
84
85 deb = "deb http://pkg.jenkins-ci.org/%s binary/" % (source)
86 sources_file = "/etc/apt/sources.list.d/jenkins.list"
87
88 found = False
89 if os.path.exists(sources_file):
90 with open(sources_file, 'r') as fd:
91 for line in fd:
92 if deb in line:
93 found = True
94 break
95
96 if not found:
97 with open(sources_file, 'a') as fd:
98 fd.write("%s\n" % deb)
99 else:
100 with open(sources_file, 'w') as fd:
101 fd.write("%s\n" % deb)
102
103 if not found:
104 # NOTE: don't use add_source for adding source since it adds deb and
105 # deb-src entries but pkg.jenkins-ci.org has no deb-src.
106 add_source("#dummy-source", key=key)
107
108 apt_update(fatal=True)
109
110
111def install_jenkins_plugins(jenkins_uid, jenkins_gid):
112 plugins = config('plugins')
113 if plugins:
114 plugins = plugins.split()
115 else:
116 plugins = []
117
118 log("Installing plugins (%s)" % (' '.join(plugins)), level=DEBUG)
119 if not os.path.isdir(JENKINS_PLUGINS):
120 os.makedirs(JENKINS_PLUGINS)
121
122 os.chmod(JENKINS_PLUGINS, 0o0755)
123 os.chown(JENKINS_PLUGINS, jenkins_uid, jenkins_gid)
124
125 track_dir = tempfile.mkdtemp(prefix='/tmp/plugins.installed')
126 try:
127 installed_plugins = glob.glob("%s/*.hpi" % (JENKINS_PLUGINS))
128 for plugin in installed_plugins:
129 # Create a ref of installed plugin
130 with open(os.path.join(track_dir, os.path.basename(plugin)),
131 'w'):
132 pass
133
134 plugins_site = config('plugins-site')
135 log("Fetching plugins from %s" % (plugins_site), level=DEBUG)
136 # NOTE: by default wget verifies certificates as of 1.10.
137 if config('plugins-check-certificate') == "no":
138 opts = ["--no-check-certificate"]
139 else:
140 opts = []
141
142 for plugin in plugins:
143 plugin_filename = "%s.hpi" % (plugin)
144 url = os.path.join(plugins_site, 'latest', plugin_filename)
145 plugin_path = os.path.join(JENKINS_PLUGINS, plugin_filename)
146 if not os.path.isfile(plugin_path):
147 log("Installing plugin %s" % (plugin_filename), level=DEBUG)
148 cmd = ['wget'] + opts + ['--timestamping', url, '-O',
149 plugin_path]
150 subprocess.check_call(cmd)
151                os.chmod(plugin_path, 0o744)
152 os.chown(plugin_path, jenkins_uid, jenkins_gid)
153
154 else:
155 log("Plugin %s already installed" % (plugin_filename),
156 level=DEBUG)
157
158 ref = os.path.join(track_dir, plugin_filename)
159 if os.path.exists(ref):
160 # Delete ref since plugin is installed.
161 os.remove(ref)
162
163        unlisted_plugins = os.listdir(track_dir)
164        if unlisted_plugins:
165            if config('remove-unlisted-plugins') == "yes":
166                for plugin in unlisted_plugins:
167                    path = os.path.join(JENKINS_HOME, 'plugins', plugin)
168                    if os.path.isfile(path):
169                        log("Deleting unlisted plugin '%s'" % (path),
170                            level=INFO)
171                        os.remove(path)
172            else:
173                log("Unlisted plugins: (%s) Not removed. Set "
174                    "remove-unlisted-plugins to 'yes' to clear them away." %
175                    ', '.join(unlisted_plugins), level=INFO)
176 finally:
177 # Delete install refs
178 shutil.rmtree(track_dir)
0179
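Reviewer note: `add_node` above relies on the newly synced `charmhelpers.core.decorators.retry_on_exception` to ride out Jenkins' slow start-up after a restart. A minimal sketch of the behaviour that decorator provides, as an illustrative reimplementation rather than the charm-helpers code (the real helper's delay/backoff policy may differ):

```python
import functools
import time


def retry_on_exception(num_retries, base_delay=0, exc_type=Exception):
    """Retry the wrapped call up to num_retries extra times, sleeping
    base_delay seconds between attempts; re-raise on final failure."""
    def _decorator(f):
        @functools.wraps(f)
        def _wrapped(*args, **kwargs):
            for attempt in range(num_retries + 1):
                try:
                    return f(*args, **kwargs)
                except exc_type:
                    if attempt == num_retries:
                        raise
                    time.sleep(base_delay)
        return _wrapped
    return _decorator
```

With `@retry_on_exception(2, 2, exc_type=jenkins.JenkinsException)`, a transient connection error during node creation is retried twice before propagating.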
=== modified file 'hooks/master-relation-broken'
--- hooks/master-relation-broken 2012-07-31 10:32:36 +0000
+++ hooks/master-relation-broken 1970-01-01 00:00:00 +0000
@@ -1,17 +0,0 @@
1#!/bin/sh
2
3PASSWORD=`config-get password`
4if [ -z "$PASSWORD" ]
5then
6 PASSWORD=`cat /var/lib/jenkins/.admin_password`
7fi
8
9MEMBERS=`relation-list`
10
11for MEMBER in $MEMBERS
12do
13 juju-log "Removing node $MEMBER from Jenkins master..."
14 $(dirname $0)/delnode `echo $MEMBER | sed s,/,-,` `config-get username` $PASSWORD
15done
16
17exit 0
180
=== target is u'jenkins_hooks.py'
=== modified file 'hooks/master-relation-changed'
--- hooks/master-relation-changed 2012-07-31 10:32:36 +0000
+++ hooks/master-relation-changed 1970-01-01 00:00:00 +0000
@@ -1,24 +0,0 @@
1#!/bin/bash
2
3set -ue
4
5PASSWORD=`config-get password`
6if [ -z "$PASSWORD" ]
7then
8 PASSWORD=`cat /var/lib/jenkins/.admin_password`
9fi
10
11# Grab information that remote unit has posted to relation
12slavehost=$(relation-get slavehost)
13executors=$(relation-get executors)
14labels=$(relation-get labels)
15
16# Double check to see if this has happened yet
17if [ "x$slavehost" = "x" ]; then
18 juju-log "Slave host not yet defined, exiting..."
19 exit 0
20fi
21
22juju-log "Adding slave with hostname $slavehost..."
23$(dirname $0)/addnode $slavehost $executors "$labels" `config-get username` $PASSWORD
24juju-log "Node slave $slavehost added..."
250
=== target is u'jenkins_hooks.py'
=== modified file 'hooks/master-relation-departed'
--- hooks/master-relation-departed 2011-09-22 14:46:56 +0000
+++ hooks/master-relation-departed 1970-01-01 00:00:00 +0000
@@ -1,12 +0,0 @@
1#!/bin/bash
2
3set -ue
4
5# Slave hostname is derived from unit name so
6# this is pretty safe
7slavehost=`echo $JUJU_REMOTE_UNIT | sed s,/,-,`
8
9juju-log "Deleting slave with hostname $slavehost..."
10$(dirname $0)/delnode $slavehost `config-get username` `config-get password`
11
12exit 0
130
=== target is u'jenkins_hooks.py'
=== modified file 'hooks/master-relation-joined'
--- hooks/master-relation-joined 2011-10-07 13:43:19 +0000
+++ hooks/master-relation-joined 1970-01-01 00:00:00 +0000
@@ -1,5 +0,0 @@
1#!/bin/sh
2
3HOSTNAME=`unit-get private-address`
4juju-log "Setting url relation to http://$HOSTNAME:8080"
5relation-set url="http://$HOSTNAME:8080"
60
=== target is u'jenkins_hooks.py'
=== modified file 'hooks/start'
--- hooks/start 2011-09-22 14:46:56 +0000
+++ hooks/start 1970-01-01 00:00:00 +0000
@@ -1,3 +0,0 @@
1#!/bin/bash
2
3service jenkins start || true
40
=== target is u'jenkins_hooks.py'
=== modified file 'hooks/stop'
--- hooks/stop 2011-09-22 14:46:56 +0000
+++ hooks/stop 1970-01-01 00:00:00 +0000
@@ -1,3 +0,0 @@
1#!/bin/bash
2
3service jenkins stop
40
=== target is u'jenkins_hooks.py'
=== modified file 'hooks/upgrade-charm'
--- hooks/upgrade-charm 2011-09-22 14:46:56 +0000
+++ hooks/upgrade-charm 1970-01-01 00:00:00 +0000
@@ -1,7 +0,0 @@
1#!/bin/sh
2set -e
3
4home=`dirname $0`
5
6juju-log "Upgrading charm by running install hook again."
7exec $home/install
80
=== target is u'jenkins_hooks.py'
=== modified file 'hooks/website-relation-joined'
--- hooks/website-relation-joined 2011-10-07 13:43:19 +0000
+++ hooks/website-relation-joined 1970-01-01 00:00:00 +0000
@@ -1,5 +0,0 @@
1#!/bin/sh
2
3HOSTNAME=`unit-get private-address`
4juju-log "Setting website URL to $HOSTNAME:8080"
5relation-set port=8080 hostname=$HOSTNAME
60
=== target is u'jenkins_hooks.py'
=== renamed file 'tests/100-deploy' => 'tests/100-deploy-trusty'
--- tests/100-deploy 2014-03-05 19:18:19 +0000
+++ tests/100-deploy-trusty 2015-01-20 18:33:44 +0000
@@ -12,11 +12,13 @@
 ###
 # Deployment Setup
 ###
-d = amulet.Deployment()
+d = amulet.Deployment(series='trusty')
 
 d.add('haproxy') # website-relation
 d.add('jenkins') # Subject matter
-d.add('jenkins-slave') # Job Runner
+# TODO(hopem): we don't yet have a precise version of jenkins-slave
+# so use the precise version for now.
+d.add('jenkins-slave', 'cs:precise/jenkins-slave') # Job Runner
 
 
 d.relate('jenkins:website', 'haproxy:reverseproxy')
 
=== added file 'tests/README'
--- tests/README 1970-01-01 00:00:00 +0000
+++ tests/README 2015-01-20 18:33:44 +0000
@@ -0,0 +1,56 @@
1This directory provides Amulet tests that focus on verification of Jenkins
2deployments.
3
4In order to run tests, you'll need charm-tools installed (in addition to
5juju, of course):
6
7 sudo add-apt-repository ppa:juju/stable
8 sudo apt-get update
9 sudo apt-get install charm-tools
10
11If you use a web proxy server to access the web, you'll need to set the
12AMULET_HTTP_PROXY environment variable to the http URL of the proxy server.
13
14The following examples demonstrate different ways that tests can be executed.
15All examples are run from the charm's root directory.
16
17 * To run all tests (starting with 00-setup):
18
19 make test
20
21 * To run a specific test module (or modules):
22
23 juju test -v -p AMULET_HTTP_PROXY 100-deploy
24
25 * To run a specific test module (or modules), and keep the environment
26 deployed after a failure:
27
28 juju test --set-e -v -p AMULET_HTTP_PROXY 100-deploy
29
30 * To re-run a test module against an already deployed environment (one
31 that was deployed by a previous call to 'juju test --set-e'):
32
33 ./tests/100-deploy
34
35
36For debugging and test development purposes, all code should be idempotent.
37In other words, the code should have the ability to be re-run without changing
38the results beyond the initial run. This enables editing and re-running of a
39test module against an already deployed environment, as described above.
40
41
42Notes for additional test writing:
43
44 * Use DEBUG to turn on debug logging, use ERROR otherwise.
45 u = OpenStackAmuletUtils(ERROR)
46 u = OpenStackAmuletUtils(DEBUG)
47
48 * Preserving the deployed environment:
49 Even with juju --set-e, amulet will tear down the juju environment
50 when all tests pass. This force_fail 'test' can be used in basic_deployment.py
51 to simulate a failed test and keep the environment.
52
53 def test_zzzz_fake_fail(self):
54 '''Force a fake fail to keep juju environment after a successful test run'''
55 # Useful in test writing, when used with: juju test --set-e
56 amulet.raise_status(amulet.FAIL, msg='using fake fail to keep juju environment')
057
=== added directory 'unit_tests'
=== added file 'unit_tests/__init__.py'
=== added file 'unit_tests/test_jenkins_hooks.py'
--- unit_tests/test_jenkins_hooks.py 1970-01-01 00:00:00 +0000
+++ unit_tests/test_jenkins_hooks.py 2015-01-20 18:33:44 +0000
@@ -0,0 +1,6 @@
1import unittest
2
3
4class JenkinsHooksTests(unittest.TestCase):
5 def setUp(self):
6 super(JenkinsHooksTests, self).setUp()
07
=== added file 'unit_tests/test_jenkins_utils.py'
--- unit_tests/test_jenkins_utils.py 1970-01-01 00:00:00 +0000
+++ unit_tests/test_jenkins_utils.py 2015-01-20 18:33:44 +0000
@@ -0,0 +1,6 @@
1import unittest
2
3
4class JenkinsUtilsTests(unittest.TestCase):
5 def setUp(self):
6 super(JenkinsUtilsTests, self).setUp()
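Reviewer note: the two unit-test modules above are still empty shells. One cheap way to grow coverage is to test pure logic such as the lts/trunk archive mapping inside `setup_source`; the helper below is a hypothetical extraction for illustration only, not code in this branch:

```python
import unittest


def jenkins_apt_line(release):
    # Hypothetical pure-function restatement of the lts/trunk mapping in
    # setup_source(), extracted so it can be tested without apt access.
    sources = {'lts': 'debian-stable', 'trunk': 'debian'}
    if release not in sources:
        raise ValueError("Release '%s' configuration not recognised" % release)
    return "deb http://pkg.jenkins-ci.org/%s binary/" % sources[release]


class JenkinsSourceMappingTests(unittest.TestCase):
    def test_lts_maps_to_debian_stable(self):
        self.assertEqual(
            "deb http://pkg.jenkins-ci.org/debian-stable binary/",
            jenkins_apt_line('lts'))

    def test_unknown_release_raises(self):
        self.assertRaises(ValueError, jenkins_apt_line, 'bogus')
```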

Subscribers

People subscribed via source and target branches

to all changes: