Merge lp:~lazypower/charms/trusty/mariadb/enterprise-upgrade-option into lp:~dbart/charms/trusty/mariadb/trunk

Proposed by Charles Butler
Status: Merged
Merged at revision: 19
Proposed branch: lp:~lazypower/charms/trusty/mariadb/enterprise-upgrade-option
Merge into: lp:~dbart/charms/trusty/mariadb/trunk
Diff against target: 3229 lines (+2918/-133)
24 files modified
ENTERPRISE-LICENSE.md (+72/-0)
README.md (+13/-1)
charm-helpers.yaml (+6/-0)
config.yaml (+17/-1)
hooks/config-changed (+40/-39)
hooks/install (+0/-2)
lib/charmhelpers/core/fstab.py (+118/-0)
lib/charmhelpers/core/hookenv.py (+540/-0)
lib/charmhelpers/core/host.py (+396/-0)
lib/charmhelpers/core/services/__init__.py (+2/-0)
lib/charmhelpers/core/services/base.py (+313/-0)
lib/charmhelpers/core/services/helpers.py (+243/-0)
lib/charmhelpers/core/sysctl.py (+34/-0)
lib/charmhelpers/core/templating.py (+52/-0)
lib/charmhelpers/fetch/__init__.py (+416/-0)
lib/charmhelpers/fetch/archiveurl.py (+145/-0)
lib/charmhelpers/fetch/bzrurl.py (+54/-0)
lib/charmhelpers/fetch/giturl.py (+48/-0)
lib/charmhelpers/payload/__init__.py (+1/-0)
lib/charmhelpers/payload/archive.py (+57/-0)
lib/charmhelpers/payload/execd.py (+50/-0)
scripts/charm_helpers_sync.py (+223/-0)
tests/10-deploy-and-upgrade (+78/-0)
tests/10-deploy-test.py (+0/-90)
To merge this branch: bzr merge lp:~lazypower/charms/trusty/mariadb/enterprise-upgrade-option
Reviewer Review Type Date Requested Status
Daniel Bartholomew Needs Fixing
Review via email: mp+243454@code.launchpad.net

Description of the change

Adds an enterprise upgrade option, along with revised tests and charm-helpers inclusion (a precursor to additional charm-helpers cleanup in the charm).

Revision history for this message
Daniel Bartholomew (dbart) wrote :

This is looking really good, but there are a couple of things that need to be fixed before it's ready for prime time.

First off, the MariaDB Enterprise signing key needs to be imported so that installing or upgrading to MariaDB Enterprise works. The gpg ID of the enterprise key is: D324876EBE6A595F

Next, the MariaDB community repository needs to be disabled or removed so that it doesn't conflict with the Enterprise repository.

Lastly, the MariaDB community packages need to be uninstalled and the Enterprise packages installed in their place, with commands similar to the following:

  sudo apt-get remove mariadb-common mariadb-client

  sudo apt-get install mariadb-server mariadb-client

Looking at the above two commands, it would be much better if the MariaDB Enterprise packages simply replaced the MariaDB community packages, and I think they would if they were at the same version number. The issue is that the Enterprise packages, which go through extra tweaks and development beyond the community version, are usually a version behind the community packages, so I don't know if there is an easy way to do it. What I don't want is for the removal of MariaDB community to also trigger the removal of some other package that depends on MariaDB. Any ideas?
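A sketch of how the config-changed hook might sequence these three fixes. This is a hypothetical helper that only builds the command list; the keyserver host and the community sources.list path are assumptions, and in the real charm the key import and package operations would go through the charm-helpers `fetch` API rather than raw commands:

```python
# Hypothetical sketch of the community -> Enterprise switch the review asks for.
# The key ID and package names come from the review comment; the keyserver and
# the community repo file path are assumed for illustration.

ENTERPRISE_KEY = 'D324876EBE6A595F'


def enterprise_upgrade_commands(community_list='/etc/apt/sources.list.d/mariadb.list'):
    """Return the command sequence for a community -> Enterprise switch:
    import the Enterprise signing key, disable the community repository,
    then remove the community packages and install the Enterprise ones."""
    return [
        # (a) import the MariaDB Enterprise signing key
        ['apt-key', 'adv', '--recv-keys',
         '--keyserver', 'keyserver.ubuntu.com', ENTERPRISE_KEY],
        # (b) drop the community repository so it cannot conflict
        ['rm', '-f', community_list],
        ['apt-get', 'update'],
        # (c) swap the community packages for the Enterprise ones
        ['apt-get', '-y', 'remove', 'mariadb-common', 'mariadb-client'],
        ['apt-get', '-y', 'install', 'mariadb-server', 'mariadb-client'],
    ]


if __name__ == '__main__':
    for cmd in enterprise_upgrade_commands():
        print(' '.join(cmd))
```

The open question about dependent packages would still apply: `apt-get remove` on the community packages could drag along anything depending on them, so the real hook would need to check reverse dependencies first.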

Thanks!

review: Needs Fixing

Preview Diff

1=== added file 'ENTERPRISE-LICENSE.md'
2--- ENTERPRISE-LICENSE.md 1970-01-01 00:00:00 +0000
3+++ ENTERPRISE-LICENSE.md 2014-12-02 20:02:04 +0000
4@@ -0,0 +1,72 @@
5+# Evaluation Agreement
6+
7+
8+Our fee-bearing, annual subscriptions for MariaDB Enterprise, MariaDB Enterprise
9+ Cluster, MariaDB Galera Cluster, MariaDB MaxScale or other MariaDB products
10+ (“MariaDB Subscriptions”) includes access to: (a) software; and (b) services,
11+ such as software support, access to our customer portal and related product
12+ documentation, and the right to receive executable binaries of our software
13+ patches, updates and security fixes (“Services”).
14+
15+The purpose of this Evaluation Agreement is to provide you with access to a
16+MariaDB Subscription on an evaluation basis (an “Evaluation Subscription”).
17+Some of the differences between an Evaluation Subscription and a MariaDB
18+Subscription are described below.
19+
20+An Evaluation Subscription for MariaDB Enterprise, MariaDB Enterprise Cluster,
21+MariaDB Galera Cluster, MariaDB MaxScale entitles you to:
22+
23+- executable binary code
24+- patches
25+- bug fixes
26+- security updates
27+- the customer portal
28+- product documentation
29+- software trials for MONYog
30+- limited support from MariaDB's Sales Engineering team
31+
32+A full, annual MariaDB Subscription for MariaDB Enterprise or MariaDB Enterprise
33+ Cluster entitles you to:
34+
35+- executable binary code
36+- patches
37+- bug fixes
38+- security updates
39+- the customer portal
40+- product documentation
41+- Full access to MONYog Ultimate Monitor
42+- product roadmaps
43+- 24/7 help desk support from MariaDB’s Technical Support team
44+- Other items that may be added at MariaDB’s discretion
45+
46+While access to software components of an Evaluation Subscription are subject
47+to underlying applicable open source or proprietary license(s), as the case may
48+be (and this Evaluation Agreement does not limit or further restrict any open
49+ source license rights), access to any Services or non-open source software
50+ components is for the sole purpose of evaluating and testing
51+ (“Evaluation Purpose”) the suitability of a MariaDB Subscription for your own
52+ use for a defined time period and support level.
53+
54+Your right to access an Evaluation Subscription without charge is conditioned on
55+ the following: (a) you agree to the MariaDB Enterprise Terms and Conditions
56+ (the “Subscription Agreement”);
57+(b) you agree that this Evaluation Subscription does not grant any right or
58+license, express or implied, for the use of any MariaDB or third party trade
59+names, service marks or trademarks, including, without limitation, the right to
60+distribute any software using any such marks; and (c) if you use our Services
61+for any purpose other than Evaluation Purposes, you agree to pay MariaDB
62+per-unit Subscription Fee(s) pursuant to the Subscription Agreement. Using our
63+Services in ways that do not constitute an Evaluation Purpose include (but are
64+ not limited to) using Services in connection with Production Purposes or third
65+ parties, or as a complement or supplement to third party support services.
66+ Capitalized terms not defined in this Evaluation Agreement shall have the
67+ meaning provided in the Subscription Agreement, which is incorporated by
68+ reference in its entirety.
69+
70+By using any of our Services, you affirm that you have read, understood, and
71+agree to all of the terms and conditions of this Evaluation Agreement
72+(including the Subscription Agreement). If you are an individual acting on
73+behalf of an entity, you represent that you have the authority to enter into
74+this Evaluation Agreement on behalf of that entity. If you do not accept the
75+terms and conditions of this Evaluation Agreement, then you must not use any
76+of our Services.
77
78=== modified file 'README.md'
79--- README.md 2014-09-26 18:37:52 +0000
80+++ README.md 2014-12-02 20:02:04 +0000
81@@ -32,7 +32,19 @@
82 juju ssh mariadb/0
83 mysql -u root -p$(sudo cat /var/lib/mysql/mysql.passwd)
84
85-# Scale Out Usage
86+## To upgrade from Community to Enterprise Evaluation
87+
88+Once you have obtained a username/password from the [MariaDB Portal](http://mariadb.com)
89+there will be a repository provided for your enterprise trial installation. You can enable
90+this in the charm with the following configuration:
91+
92+ juju set mariadb enterprise-eula=true source="deb https://username:password@code.mariadb.com/mariadb-enterprise/10.0/repo/ubuntu trusty main"
93+
94+This will perform an in-place binary upgrade on all the mariadb nodes from community
95+edition to the Enterprise Evaluation. You must agree to all terms contained in
96+ `ENTERPRISE-LICENSE.md` in the charm directory
97+
98+# Scale Out Usage
99
100 ## Replication
101
102
103=== added file 'charm-helpers.yaml'
104--- charm-helpers.yaml 1970-01-01 00:00:00 +0000
105+++ charm-helpers.yaml 2014-12-02 20:02:04 +0000
106@@ -0,0 +1,6 @@
107+destination: lib/charmhelpers
108+branch: lp:charm-helpers
109+include:
110+ - core
111+ - fetch
112+ - payload
113
114=== modified file 'config.yaml'
115--- config.yaml 2014-09-25 20:29:40 +0000
116+++ config.yaml 2014-12-02 20:02:04 +0000
117@@ -100,4 +100,20 @@
118 rbd pool has been created, changing this value will not have any
119 effect (although it can be changed in ceph by manually configuring
120 your ceph cluster).
121-
122+ enterprise-eula:
123+ type: boolean
124+ default: false
125+ description: |
126+ I have read and agree to the ENTERPRISE TRIAL agreement, located
127+ in ENTERPRISE-LICENSE.md located in the charm, or on the web here:
128+ https://mariadb.com/about/legal/evaluation-agreement
129+ source:
130+ type: string
131+ default: "deb http://mirror.jmu.edu/pub/mariadb/repo/10.0/ubuntu trusty main"
132+ description: |
133+ Repository Mirror string to install MariaDB from
134+ key:
135+ type: string
136+ default: "0xcbcb082a1bb943db"
137+ description: |
138+ GPG Key used to verify apt packages.
139
140=== modified file 'hooks/config-changed'
141--- hooks/config-changed 2014-09-25 20:29:40 +0000
142+++ hooks/config-changed 2014-12-02 20:02:04 +0000
143@@ -1,6 +1,6 @@
144 #!/usr/bin/python
145
146-from subprocess import check_output,check_call, CalledProcessError, Popen, PIPE
147+from subprocess import check_output,check_call, CalledProcessError
148 import tempfile
149 import json
150 import re
151@@ -10,6 +10,18 @@
152 import platform
153 from string import upper
154
155+sys.path.insert(0, os.path.join(os.environ['CHARM_DIR'], 'lib'))
156+
157+from charmhelpers import fetch
158+
159+from charmhelpers.core import (
160+ hookenv,
161+ host,
162+)
163+
164+log = hookenv.log
165+config = hookenv.config()
166+
167 num_re = re.compile('^[0-9]+$')
168
169 # There should be a library for this
170@@ -52,7 +64,6 @@
171
172 def get_memtotal():
173 with open('/proc/meminfo') as meminfo_file:
174- meminfo = {}
175 for line in meminfo_file:
176 (key, mem) = line.split(':', 2)
177 if key == 'MemTotal':
178@@ -60,41 +71,31 @@
179 return '%s%s' % (mtot, upper(modifier[0]))
180
181
182-# There is preliminary code for mariadb, but switching
183-# from mariadb -> mysql fails badly, so it is disabled for now.
184-valid_flavors = ['distro']
185-
186-#remove_pkgs=[]
187-#apt_sources = []
188-#package = 'mariadb-server'
189-#
190-#series = check_output(['lsb_release','-cs'])
191-#
192-#for source in apt_sources:
193-# server = source.split('/')[0]
194-# if os.path.exists('keys/%s' % server):
195-# check_call(['apt-key','add','keys/%s' % server])
196-# else:
197-# check_call(['juju-log','-l','ERROR',
198-# 'No key for %s' % (server)])
199-# sys.exit(1)
200-# check_call(['add-apt-repository','-y','deb http://%s %s main' % (source, series)])
201-# check_call(['apt-get','update'])
202-#
203-#with open('/var/lib/mysql/mysql.passwd','r') as rpw:
204-# root_pass = rpw.read()
205-#
206-#dconf = Popen(['debconf-set-selections'], stdin=PIPE)
207-#dconf.stdin.write("%s %s/root_password password %s\n" % (package, package, root_pass))
208-#dconf.stdin.write("%s %s/root_password_again password %s\n" % (package, package, root_pass))
209-#dconf.stdin.write("%s-5.5 mysql-server/root_password password %s\n" % (package, root_pass))
210-#dconf.stdin.write("%s-5.5 mysql-server/root_password_again password %s\n" % (package, root_pass))
211-#dconf.communicate()
212-#dconf.wait()
213-#
214-#if len(remove_pkgs):
215-# check_call(['apt-get','-y','remove'] + remove_pkgs)
216-#check_call(['apt-get','-y','install','-qq',package])
217+
218+# Enterprise MariaDB Bits
219+# Clean up bintar when upgrading to enterprise
220+def cleanup_bintar():
221+ opath = os.path.join(os.path.sep, 'usr', 'local', 'mysql')
222+ if os.path.exists(opath):
223+ log("Cleaning up Bin/Tar installation", "INFO")
224+ os.remove(os.path.join(os.path.sep, 'etc', 'init.d', 'mysql'))
225+ npath = os.path.join(os.path.sep, 'mnt', 'mysql.old')
226+ os.rename(opath, npath)
227+
228+source = config['source']
229+accepted = config['enterprise-eula']
230+
231+# assumption of mariadb packages being delivered from code.mariadb
232+if accepted and "code.mariadb" in source:
233+ cleanup_bintar()
234+ host.service_stop('mysql')
235+ fetch.add_source(source, config['key'])
236+ fetch.apt_update()
237+
238+ packages = ['mariadb-server', 'mariadb-client']
239+ fetch.apt_install(packages)
240+
241+
242
243 # smart-calc stuff in the configs
244 dataset_bytes = human_to_bytes(configs['dataset-size'])
245@@ -160,7 +161,7 @@
246 # You can copy this to one of:
247 # - "/etc/mysql/my.cnf" to set global options,
248 # - "~/.my.cnf" to set user-specific options.
249-#
250+#
251 # One can use all long options that the program supports.
252 # Run program with --help to get a list of available options and with
253 # --print-defaults to see which it would actually understand and use.
254@@ -314,7 +315,7 @@
255
256 need_restart = False
257 for target,content in targets.iteritems():
258- tdir = os.path.dirname(target)
259+ tdir = os.path.dirname(target)
260 if len(content) == 0 and os.path.exists(target):
261 os.unlink(target)
262 need_restart = True
263
264=== modified file 'hooks/install'
265--- hooks/install 2014-11-12 18:30:06 +0000
266+++ hooks/install 2014-12-02 20:02:04 +0000
267@@ -173,5 +173,3 @@
268 # As the last step of the install process, stop MariaDB (the start trigger
269 # handles starting MariaDB).
270 /etc/init.d/mysql stop
271-
272-
273
274=== added directory 'lib/charmhelpers'
275=== added file 'lib/charmhelpers/__init__.py'
276=== added directory 'lib/charmhelpers/core'
277=== added file 'lib/charmhelpers/core/__init__.py'
278=== added file 'lib/charmhelpers/core/fstab.py'
279--- lib/charmhelpers/core/fstab.py 1970-01-01 00:00:00 +0000
280+++ lib/charmhelpers/core/fstab.py 2014-12-02 20:02:04 +0000
281@@ -0,0 +1,118 @@
282+#!/usr/bin/env python
283+# -*- coding: utf-8 -*-
284+
285+__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
286+
287+import io
288+import os
289+
290+
291+class Fstab(io.FileIO):
292+ """This class extends file in order to implement a file reader/writer
293+ for file `/etc/fstab`
294+ """
295+
296+ class Entry(object):
297+ """Entry class represents a non-comment line on the `/etc/fstab` file
298+ """
299+ def __init__(self, device, mountpoint, filesystem,
300+ options, d=0, p=0):
301+ self.device = device
302+ self.mountpoint = mountpoint
303+ self.filesystem = filesystem
304+
305+ if not options:
306+ options = "defaults"
307+
308+ self.options = options
309+ self.d = int(d)
310+ self.p = int(p)
311+
312+ def __eq__(self, o):
313+ return str(self) == str(o)
314+
315+ def __str__(self):
316+ return "{} {} {} {} {} {}".format(self.device,
317+ self.mountpoint,
318+ self.filesystem,
319+ self.options,
320+ self.d,
321+ self.p)
322+
323+ DEFAULT_PATH = os.path.join(os.path.sep, 'etc', 'fstab')
324+
325+ def __init__(self, path=None):
326+ if path:
327+ self._path = path
328+ else:
329+ self._path = self.DEFAULT_PATH
330+ super(Fstab, self).__init__(self._path, 'rb+')
331+
332+ def _hydrate_entry(self, line):
333+ # NOTE: use split with no arguments to split on any
334+ # whitespace including tabs
335+ return Fstab.Entry(*filter(
336+ lambda x: x not in ('', None),
337+ line.strip("\n").split()))
338+
339+ @property
340+ def entries(self):
341+ self.seek(0)
342+ for line in self.readlines():
343+ line = line.decode('us-ascii')
344+ try:
345+ if line.strip() and not line.startswith("#"):
346+ yield self._hydrate_entry(line)
347+ except ValueError:
348+ pass
349+
350+ def get_entry_by_attr(self, attr, value):
351+ for entry in self.entries:
352+ e_attr = getattr(entry, attr)
353+ if e_attr == value:
354+ return entry
355+ return None
356+
357+ def add_entry(self, entry):
358+ if self.get_entry_by_attr('device', entry.device):
359+ return False
360+
361+ self.write((str(entry) + '\n').encode('us-ascii'))
362+ self.truncate()
363+ return entry
364+
365+ def remove_entry(self, entry):
366+ self.seek(0)
367+
368+ lines = [l.decode('us-ascii') for l in self.readlines()]
369+
370+ found = False
371+ for index, line in enumerate(lines):
372+ if not line.startswith("#"):
373+ if self._hydrate_entry(line) == entry:
374+ found = True
375+ break
376+
377+ if not found:
378+ return False
379+
380+ lines.remove(line)
381+
382+ self.seek(0)
383+ self.write(''.join(lines).encode('us-ascii'))
384+ self.truncate()
385+ return True
386+
387+ @classmethod
388+ def remove_by_mountpoint(cls, mountpoint, path=None):
389+ fstab = cls(path=path)
390+ entry = fstab.get_entry_by_attr('mountpoint', mountpoint)
391+ if entry:
392+ return fstab.remove_entry(entry)
393+ return False
394+
395+ @classmethod
396+ def add(cls, device, mountpoint, filesystem, options=None, path=None):
397+ return cls(path=path).add_entry(Fstab.Entry(device,
398+ mountpoint, filesystem,
399+ options=options))
400
401=== added file 'lib/charmhelpers/core/hookenv.py'
402--- lib/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000
403+++ lib/charmhelpers/core/hookenv.py 2014-12-02 20:02:04 +0000
404@@ -0,0 +1,540 @@
405+"Interactions with the Juju environment"
406+# Copyright 2013 Canonical Ltd.
407+#
408+# Authors:
409+# Charm Helpers Developers <juju@lists.ubuntu.com>
410+
411+import os
412+import json
413+import yaml
414+import subprocess
415+import sys
416+from subprocess import CalledProcessError
417+
418+import six
419+if not six.PY3:
420+ from UserDict import UserDict
421+else:
422+ from collections import UserDict
423+
424+CRITICAL = "CRITICAL"
425+ERROR = "ERROR"
426+WARNING = "WARNING"
427+INFO = "INFO"
428+DEBUG = "DEBUG"
429+MARKER = object()
430+
431+cache = {}
432+
433+
434+def cached(func):
435+ """Cache return values for multiple executions of func + args
436+
437+ For example::
438+
439+ @cached
440+ def unit_get(attribute):
441+ pass
442+
443+ unit_get('test')
444+
445+ will cache the result of unit_get + 'test' for future calls.
446+ """
447+ def wrapper(*args, **kwargs):
448+ global cache
449+ key = str((func, args, kwargs))
450+ try:
451+ return cache[key]
452+ except KeyError:
453+ res = func(*args, **kwargs)
454+ cache[key] = res
455+ return res
456+ return wrapper
457+
458+
459+def flush(key):
460+ """Flushes any entries from function cache where the
461+ key is found in the function+args """
462+ flush_list = []
463+ for item in cache:
464+ if key in item:
465+ flush_list.append(item)
466+ for item in flush_list:
467+ del cache[item]
468+
469+
470+def log(message, level=None):
471+ """Write a message to the juju log"""
472+ command = ['juju-log']
473+ if level:
474+ command += ['-l', level]
475+ command += [message]
476+ subprocess.call(command)
477+
478+
479+class Serializable(UserDict):
480+ """Wrapper, an object that can be serialized to yaml or json"""
481+
482+ def __init__(self, obj):
483+ # wrap the object
484+ UserDict.__init__(self)
485+ self.data = obj
486+
487+ def __getattr__(self, attr):
488+ # See if this object has attribute.
489+ if attr in ("json", "yaml", "data"):
490+ return self.__dict__[attr]
491+ # Check for attribute in wrapped object.
492+ got = getattr(self.data, attr, MARKER)
493+ if got is not MARKER:
494+ return got
495+ # Proxy to the wrapped object via dict interface.
496+ try:
497+ return self.data[attr]
498+ except KeyError:
499+ raise AttributeError(attr)
500+
501+ def __getstate__(self):
502+ # Pickle as a standard dictionary.
503+ return self.data
504+
505+ def __setstate__(self, state):
506+ # Unpickle into our wrapper.
507+ self.data = state
508+
509+ def json(self):
510+ """Serialize the object to json"""
511+ return json.dumps(self.data)
512+
513+ def yaml(self):
514+ """Serialize the object to yaml"""
515+ return yaml.dump(self.data)
516+
517+
518+def execution_environment():
519+ """A convenient bundling of the current execution context"""
520+ context = {}
521+ context['conf'] = config()
522+ if relation_id():
523+ context['reltype'] = relation_type()
524+ context['relid'] = relation_id()
525+ context['rel'] = relation_get()
526+ context['unit'] = local_unit()
527+ context['rels'] = relations()
528+ context['env'] = os.environ
529+ return context
530+
531+
532+def in_relation_hook():
533+ """Determine whether we're running in a relation hook"""
534+ return 'JUJU_RELATION' in os.environ
535+
536+
537+def relation_type():
538+ """The scope for the current relation hook"""
539+ return os.environ.get('JUJU_RELATION', None)
540+
541+
542+def relation_id():
543+ """The relation ID for the current relation hook"""
544+ return os.environ.get('JUJU_RELATION_ID', None)
545+
546+
547+def local_unit():
548+ """Local unit ID"""
549+ return os.environ['JUJU_UNIT_NAME']
550+
551+
552+def remote_unit():
553+ """The remote unit for the current relation hook"""
554+ return os.environ['JUJU_REMOTE_UNIT']
555+
556+
557+def service_name():
558+ """The name service group this unit belongs to"""
559+ return local_unit().split('/')[0]
560+
561+
562+def hook_name():
563+ """The name of the currently executing hook"""
564+ return os.path.basename(sys.argv[0])
565+
566+
567+class Config(dict):
568+ """A dictionary representation of the charm's config.yaml, with some
569+ extra features:
570+
571+ - See which values in the dictionary have changed since the previous hook.
572+ - For values that have changed, see what the previous value was.
573+ - Store arbitrary data for use in a later hook.
574+
575+ NOTE: Do not instantiate this object directly - instead call
576+ ``hookenv.config()``, which will return an instance of :class:`Config`.
577+
578+ Example usage::
579+
580+ >>> # inside a hook
581+ >>> from charmhelpers.core import hookenv
582+ >>> config = hookenv.config()
583+ >>> config['foo']
584+ 'bar'
585+ >>> # store a new key/value for later use
586+ >>> config['mykey'] = 'myval'
587+
588+
589+ >>> # user runs `juju set mycharm foo=baz`
590+ >>> # now we're inside subsequent config-changed hook
591+ >>> config = hookenv.config()
592+ >>> config['foo']
593+ 'baz'
594+ >>> # test to see if this val has changed since last hook
595+ >>> config.changed('foo')
596+ True
597+ >>> # what was the previous value?
598+ >>> config.previous('foo')
599+ 'bar'
600+ >>> # keys/values that we add are preserved across hooks
601+ >>> config['mykey']
602+ 'myval'
603+
604+ """
605+ CONFIG_FILE_NAME = '.juju-persistent-config'
606+
607+ def __init__(self, *args, **kw):
608+ super(Config, self).__init__(*args, **kw)
609+ self.implicit_save = True
610+ self._prev_dict = None
611+ self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
612+ if os.path.exists(self.path):
613+ self.load_previous()
614+
615+ def __getitem__(self, key):
616+ """For regular dict lookups, check the current juju config first,
617+ then the previous (saved) copy. This ensures that user-saved values
618+ will be returned by a dict lookup.
619+
620+ """
621+ try:
622+ return dict.__getitem__(self, key)
623+ except KeyError:
624+ return (self._prev_dict or {})[key]
625+
626+ def keys(self):
627+ prev_keys = []
628+ if self._prev_dict is not None:
629+ prev_keys = self._prev_dict.keys()
630+ return list(set(prev_keys + list(dict.keys(self))))
631+
632+ def load_previous(self, path=None):
633+ """Load previous copy of config from disk.
634+
635+ In normal usage you don't need to call this method directly - it
636+ is called automatically at object initialization.
637+
638+ :param path:
639+
640+ File path from which to load the previous config. If `None`,
641+ config is loaded from the default location. If `path` is
642+ specified, subsequent `save()` calls will write to the same
643+ path.
644+
645+ """
646+ self.path = path or self.path
647+ with open(self.path) as f:
648+ self._prev_dict = json.load(f)
649+
650+ def changed(self, key):
651+ """Return True if the current value for this key is different from
652+ the previous value.
653+
654+ """
655+ if self._prev_dict is None:
656+ return True
657+ return self.previous(key) != self.get(key)
658+
659+ def previous(self, key):
660+ """Return previous value for this key, or None if there
661+ is no previous value.
662+
663+ """
664+ if self._prev_dict:
665+ return self._prev_dict.get(key)
666+ return None
667+
668+ def save(self):
669+ """Save this config to disk.
670+
671+ If the charm is using the :mod:`Services Framework <services.base>`
672+ or :meth:'@hook <Hooks.hook>' decorator, this
673+ is called automatically at the end of successful hook execution.
674+ Otherwise, it should be called directly by user code.
675+
676+ To disable automatic saves, set ``implicit_save=False`` on this
677+ instance.
678+
679+ """
680+ if self._prev_dict:
681+ for k, v in six.iteritems(self._prev_dict):
682+ if k not in self:
683+ self[k] = v
684+ with open(self.path, 'w') as f:
685+ json.dump(self, f)
686+
687+
688+@cached
689+def config(scope=None):
690+ """Juju charm configuration"""
691+ config_cmd_line = ['config-get']
692+ if scope is not None:
693+ config_cmd_line.append(scope)
694+ config_cmd_line.append('--format=json')
695+ try:
696+ config_data = json.loads(
697+ subprocess.check_output(config_cmd_line).decode('UTF-8'))
698+ if scope is not None:
699+ return config_data
700+ return Config(config_data)
701+ except ValueError:
702+ return None
703+
704+
705+@cached
706+def relation_get(attribute=None, unit=None, rid=None):
707+ """Get relation information"""
708+ _args = ['relation-get', '--format=json']
709+ if rid:
710+ _args.append('-r')
711+ _args.append(rid)
712+ _args.append(attribute or '-')
713+ if unit:
714+ _args.append(unit)
715+ try:
716+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
717+ except ValueError:
718+ return None
719+ except CalledProcessError as e:
720+ if e.returncode == 2:
721+ return None
722+ raise
723+
724+
725+def relation_set(relation_id=None, relation_settings=None, **kwargs):
726+ """Set relation information for the current unit"""
727+ relation_settings = relation_settings if relation_settings else {}
728+ relation_cmd_line = ['relation-set']
729+ if relation_id is not None:
730+ relation_cmd_line.extend(('-r', relation_id))
731+ for k, v in (list(relation_settings.items()) + list(kwargs.items())):
732+ if v is None:
733+ relation_cmd_line.append('{}='.format(k))
734+ else:
735+ relation_cmd_line.append('{}={}'.format(k, v))
736+ subprocess.check_call(relation_cmd_line)
737+ # Flush cache of any relation-gets for local unit
738+ flush(local_unit())
739+
740+
741+@cached
742+def relation_ids(reltype=None):
743+ """A list of relation_ids"""
744+ reltype = reltype or relation_type()
745+ relid_cmd_line = ['relation-ids', '--format=json']
746+ if reltype is not None:
747+ relid_cmd_line.append(reltype)
748+ return json.loads(
749+ subprocess.check_output(relid_cmd_line).decode('UTF-8')) or []
750+ return []
751+
752+
753+@cached
754+def related_units(relid=None):
755+ """A list of related units"""
756+ relid = relid or relation_id()
757+ units_cmd_line = ['relation-list', '--format=json']
758+ if relid is not None:
759+ units_cmd_line.extend(('-r', relid))
760+ return json.loads(
761+ subprocess.check_output(units_cmd_line).decode('UTF-8')) or []
762+
763+
764+@cached
765+def relation_for_unit(unit=None, rid=None):
766+ """Get the json represenation of a unit's relation"""
767+ unit = unit or remote_unit()
768+ relation = relation_get(unit=unit, rid=rid)
769+ for key in relation:
770+ if key.endswith('-list'):
771+ relation[key] = relation[key].split()
772+ relation['__unit__'] = unit
773+ return relation
774+
775+
776+@cached
777+def relations_for_id(relid=None):
778+ """Get relations of a specific relation ID"""
779+ relation_data = []
780+ relid = relid or relation_ids()
781+ for unit in related_units(relid):
782+ unit_data = relation_for_unit(unit, relid)
783+ unit_data['__relid__'] = relid
784+ relation_data.append(unit_data)
785+ return relation_data
786+
787+
788+@cached
789+def relations_of_type(reltype=None):
790+ """Get relations of a specific type"""
791+ relation_data = []
792+ reltype = reltype or relation_type()
793+ for relid in relation_ids(reltype):
794+ for relation in relations_for_id(relid):
795+ relation['__relid__'] = relid
796+ relation_data.append(relation)
797+ return relation_data
798+
799+
800+@cached
801+def relation_types():
802+ """Get a list of relation types supported by this charm"""
803+ charmdir = os.environ.get('CHARM_DIR', '')
804+ mdf = open(os.path.join(charmdir, 'metadata.yaml'))
805+ md = yaml.safe_load(mdf)
806+ rel_types = []
807+ for key in ('provides', 'requires', 'peers'):
808+ section = md.get(key)
809+ if section:
810+ rel_types.extend(section.keys())
811+ mdf.close()
812+ return rel_types
813+
814+
815+@cached
816+def relations():
817+ """Get a nested dictionary of relation data for all related units"""
818+ rels = {}
819+ for reltype in relation_types():
820+ relids = {}
821+ for relid in relation_ids(reltype):
822+ units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
823+ for unit in related_units(relid):
824+ reldata = relation_get(unit=unit, rid=relid)
825+ units[unit] = reldata
826+ relids[relid] = units
827+ rels[reltype] = relids
828+ return rels
829+
830+
831+@cached
832+def is_relation_made(relation, keys='private-address'):
833+ '''
834+ Determine whether a relation is established by checking for
835+ presence of key(s). If a list of keys is provided, they
836+ must all be present for the relation to be identified as made
837+ '''
838+ if isinstance(keys, str):
839+ keys = [keys]
840+ for r_id in relation_ids(relation):
841+ for unit in related_units(r_id):
842+ context = {}
843+ for k in keys:
844+ context[k] = relation_get(k, rid=r_id,
845+ unit=unit)
846+ if None not in context.values():
847+ return True
848+ return False
849+
850+
851+def open_port(port, protocol="TCP"):
852+ """Open a service network port"""
853+ _args = ['open-port']
854+ _args.append('{}/{}'.format(port, protocol))
855+ subprocess.check_call(_args)
856+
857+
858+def close_port(port, protocol="TCP"):
859+ """Close a service network port"""
860+ _args = ['close-port']
861+ _args.append('{}/{}'.format(port, protocol))
862+ subprocess.check_call(_args)
863+
864+
865+@cached
866+def unit_get(attribute):
867+ """Get the unit ID for the remote unit"""
868+ _args = ['unit-get', '--format=json', attribute]
869+ try:
870+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
871+ except ValueError:
872+ return None
873+
874+
875+def unit_private_ip():
876+ """Get this unit's private IP address"""
877+ return unit_get('private-address')
878+
879+
880+class UnregisteredHookError(Exception):
881+ """Raised when an undefined hook is called"""
882+ pass
883+
884+
885+class Hooks(object):
886+ """A convenient handler for hook functions.
887+
888+ Example::
889+
890+ hooks = Hooks()
891+
892+ # register a hook, taking its name from the function name
893+ @hooks.hook()
894+ def install():
895+ pass # your code here
896+
897+ # register a hook, providing a custom hook name
898+ @hooks.hook("config-changed")
899+ def config_changed():
900+ pass # your code here
901+
902+ if __name__ == "__main__":
903+ # execute a hook based on the name the program is called by
904+ hooks.execute(sys.argv)
905+ """
906+
907+ def __init__(self, config_save=True):
908+ super(Hooks, self).__init__()
909+ self._hooks = {}
910+ self._config_save = config_save
911+
912+ def register(self, name, function):
913+ """Register a hook"""
914+ self._hooks[name] = function
915+
916+ def execute(self, args):
917+ """Execute a registered hook based on args[0]"""
918+ hook_name = os.path.basename(args[0])
919+ if hook_name in self._hooks:
920+ self._hooks[hook_name]()
921+ if self._config_save:
922+ cfg = config()
923+ if cfg.implicit_save:
924+ cfg.save()
925+ else:
926+ raise UnregisteredHookError(hook_name)
927+
928+ def hook(self, *hook_names):
929+ """Decorator, registering the decorated function under the given hook names"""
930+ def wrapper(decorated):
931+ for hook_name in hook_names:
932+ self.register(hook_name, decorated)
933+ else:
934+ self.register(decorated.__name__, decorated)
935+ if '_' in decorated.__name__:
936+ self.register(
937+ decorated.__name__.replace('_', '-'), decorated)
938+ return decorated
939+ return wrapper
940+
941+
942+def charm_dir():
943+ """Return the root directory of the current charm"""
944+ return os.environ.get('CHARM_DIR')
945
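
The `Hooks` class above is a simple name-to-function dispatch keyed on the basename of the invoking script. A minimal, self-contained sketch of that pattern (a stand-in for illustration, not the charmhelpers implementation; the hook path is hypothetical):

```python
import os

class Hooks:
    """Minimal registry: hook names map to functions; execute() dispatches on argv[0]."""
    def __init__(self):
        self._hooks = {}

    def hook(self, *hook_names):
        def wrapper(decorated):
            # With no explicit names, derive one from the function name.
            names = hook_names or [decorated.__name__.replace('_', '-')]
            for name in names:
                self._hooks[name] = decorated
            return decorated
        return wrapper

    def execute(self, args):
        hook_name = os.path.basename(args[0])
        if hook_name not in self._hooks:
            raise KeyError('unregistered hook: %s' % hook_name)
        self._hooks[hook_name]()

hooks = Hooks()
calls = []

@hooks.hook('config-changed')
def config_changed():
    calls.append('config-changed')

# Juju invokes the hook script by path; the basename selects the handler.
hooks.execute(['/var/lib/juju/hooks/config-changed'])
```

This is why a charm can symlink every hook file to one script: the script's own name selects the handler at runtime.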
946=== added file 'lib/charmhelpers/core/host.py'
947--- lib/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000
948+++ lib/charmhelpers/core/host.py 2014-12-02 20:02:04 +0000
949@@ -0,0 +1,396 @@
950+"""Tools for working with the host system"""
951+# Copyright 2012 Canonical Ltd.
952+#
953+# Authors:
954+# Nick Moffitt <nick.moffitt@canonical.com>
955+# Matthew Wedgwood <matthew.wedgwood@canonical.com>
956+
957+import os
958+import re
959+import pwd
960+import grp
961+import random
962+import string
963+import subprocess
964+import hashlib
965+from contextlib import contextmanager
966+from collections import OrderedDict
967+
968+import six
969+
970+from .hookenv import log
971+from .fstab import Fstab
972+
973+
974+def service_start(service_name):
975+ """Start a system service"""
976+ return service('start', service_name)
977+
978+
979+def service_stop(service_name):
980+ """Stop a system service"""
981+ return service('stop', service_name)
982+
983+
984+def service_restart(service_name):
985+ """Restart a system service"""
986+ return service('restart', service_name)
987+
988+
989+def service_reload(service_name, restart_on_failure=False):
990+ """Reload a system service, optionally falling back to restart if
991+ reload fails"""
992+ service_result = service('reload', service_name)
993+ if not service_result and restart_on_failure:
994+ service_result = service('restart', service_name)
995+ return service_result
996+
997+
998+def service(action, service_name):
999+ """Control a system service"""
1000+ cmd = ['service', service_name, action]
1001+ return subprocess.call(cmd) == 0
1002+
1003+
1004+def service_running(service):
1005+ """Determine whether a system service is running"""
1006+ try:
1007+ output = subprocess.check_output(
1008+ ['service', service, 'status'],
1009+ stderr=subprocess.STDOUT).decode('UTF-8')
1010+ except subprocess.CalledProcessError:
1011+ return False
1012+ else:
1013+ if ("start/running" in output or "is running" in output):
1014+ return True
1015+ else:
1016+ return False
1017+
1018+
1019+def service_available(service_name):
1020+ """Determine whether a system service is available"""
1021+ try:
1022+ subprocess.check_output(
1023+ ['service', service_name, 'status'],
1024+ stderr=subprocess.STDOUT).decode('UTF-8')
1025+ except subprocess.CalledProcessError as e:
1026+ return 'unrecognized service' not in e.output
1027+ else:
1028+ return True
1029+
1030+
1031+def adduser(username, password=None, shell='/bin/bash', system_user=False):
1032+ """Add a user to the system"""
1033+ try:
1034+ user_info = pwd.getpwnam(username)
1035+ log('user {0} already exists!'.format(username))
1036+ except KeyError:
1037+ log('creating user {0}'.format(username))
1038+ cmd = ['useradd']
1039+ if system_user or password is None:
1040+ cmd.append('--system')
1041+ else:
1042+ cmd.extend([
1043+ '--create-home',
1044+ '--shell', shell,
1045+ '--password', password,
1046+ ])
1047+ cmd.append(username)
1048+ subprocess.check_call(cmd)
1049+ user_info = pwd.getpwnam(username)
1050+ return user_info
1051+
1052+
1053+def add_user_to_group(username, group):
1054+ """Add a user to a group"""
1055+ cmd = [
1056+ 'gpasswd', '-a',
1057+ username,
1058+ group
1059+ ]
1060+ log("Adding user {} to group {}".format(username, group))
1061+ subprocess.check_call(cmd)
1062+
1063+
1064+def rsync(from_path, to_path, flags='-r', options=None):
1065+ """Replicate the contents of a path"""
1066+ options = options or ['--delete', '--executability']
1067+ cmd = ['/usr/bin/rsync', flags]
1068+ cmd.extend(options)
1069+ cmd.append(from_path)
1070+ cmd.append(to_path)
1071+ log(" ".join(cmd))
1072+ return subprocess.check_output(cmd).decode('UTF-8').strip()
1073+
1074+
1075+def symlink(source, destination):
1076+ """Create a symbolic link"""
1077+ log("Symlinking {} as {}".format(source, destination))
1078+ cmd = [
1079+ 'ln',
1080+ '-sf',
1081+ source,
1082+ destination,
1083+ ]
1084+ subprocess.check_call(cmd)
1085+
1086+
1087+def mkdir(path, owner='root', group='root', perms=0o555, force=False):
1088+ """Create a directory"""
1089+ log("Making dir {} {}:{} {:o}".format(path, owner, group,
1090+ perms))
1091+ uid = pwd.getpwnam(owner).pw_uid
1092+ gid = grp.getgrnam(group).gr_gid
1093+ realpath = os.path.abspath(path)
1094+ if os.path.exists(realpath):
1095+ if force and not os.path.isdir(realpath):
1096+ log("Removing non-directory file {} prior to mkdir()".format(path))
1097+ os.unlink(realpath)
1098+ else:
1099+ os.makedirs(realpath, perms)
1100+ os.chown(realpath, uid, gid)
1101+
1102+
1103+def write_file(path, content, owner='root', group='root', perms=0o444):
1104+ """Create or overwrite a file with the contents of a string"""
1105+ log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
1106+ uid = pwd.getpwnam(owner).pw_uid
1107+ gid = grp.getgrnam(group).gr_gid
1108+ with open(path, 'w') as target:
1109+ os.fchown(target.fileno(), uid, gid)
1110+ os.fchmod(target.fileno(), perms)
1111+ target.write(content)
1112+
1113+
1114+def fstab_remove(mp):
1115+ """Remove the given mountpoint entry from /etc/fstab
1116+ """
1117+ return Fstab.remove_by_mountpoint(mp)
1118+
1119+
1120+def fstab_add(dev, mp, fs, options=None):
1121+ """Adds the given device entry to the /etc/fstab file
1122+ """
1123+ return Fstab.add(dev, mp, fs, options=options)
1124+
1125+
1126+def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"):
1127+ """Mount a filesystem at a particular mountpoint"""
1128+ cmd_args = ['mount']
1129+ if options is not None:
1130+ cmd_args.extend(['-o', options])
1131+ cmd_args.extend([device, mountpoint])
1132+ try:
1133+ subprocess.check_output(cmd_args)
1134+ except subprocess.CalledProcessError as e:
1135+ log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
1136+ return False
1137+
1138+ if persist:
1139+ return fstab_add(device, mountpoint, filesystem, options=options)
1140+ return True
1141+
1142+
1143+def umount(mountpoint, persist=False):
1144+ """Unmount a filesystem"""
1145+ cmd_args = ['umount', mountpoint]
1146+ try:
1147+ subprocess.check_output(cmd_args)
1148+ except subprocess.CalledProcessError as e:
1149+ log('Error unmounting {}\n{}'.format(mountpoint, e.output))
1150+ return False
1151+
1152+ if persist:
1153+ return fstab_remove(mountpoint)
1154+ return True
1155+
1156+
1157+def mounts():
1158+ """Get a list of all mounted volumes as [[mountpoint,device],[...]]"""
1159+ with open('/proc/mounts') as f:
1160+ # [['/mount/point','/dev/path'],[...]]
1161+ system_mounts = [m[1::-1] for m in [l.strip().split()
1162+ for l in f.readlines()]]
1163+ return system_mounts
1164+
1165+
1166+def file_hash(path, hash_type='md5'):
1167+ """
1168+ Generate a hash checksum of the contents of 'path' or None if not found.
1169+
1170+ :param str hash_type: Any hash algorithm supported by :mod:`hashlib`,
1171+ such as md5, sha1, sha256, sha512, etc.
1172+ """
1173+ if os.path.exists(path):
1174+ h = getattr(hashlib, hash_type)()
1175+ with open(path, 'rb') as source:
1176+ h.update(source.read())
1177+ return h.hexdigest()
1178+ else:
1179+ return None
1180+
1181+
1182+def check_hash(path, checksum, hash_type='md5'):
1183+ """
1184+ Validate a file using a cryptographic checksum.
1185+
1186+ :param str checksum: Value of the checksum used to validate the file.
1187+ :param str hash_type: Hash algorithm used to generate `checksum`.
1188+ Can be any hash algorithm supported by :mod:`hashlib`,
1189+ such as md5, sha1, sha256, sha512, etc.
1190+ :raises ChecksumError: If the file fails the checksum
1191+
1192+ """
1193+ actual_checksum = file_hash(path, hash_type)
1194+ if checksum != actual_checksum:
1195+ raise ChecksumError("'%s' != '%s'" % (checksum, actual_checksum))
1196+
1197+
1198+class ChecksumError(ValueError):
1199+ pass
1200+
1201+
1202+def restart_on_change(restart_map, stopstart=False):
1203+ """Restart services based on configuration files changing
1204+
1205+ This function is used as a decorator, for example::
1206+
1207+ @restart_on_change({
1208+ '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
1209+ })
1210+ def ceph_client_changed():
1211+ pass # your code here
1212+
1213+ In this example, the cinder-api and cinder-volume services
1214+ would be restarted if /etc/ceph/ceph.conf is changed by the
1215+ ceph_client_changed function.
1216+ """
1217+ def wrap(f):
1218+ def wrapped_f(*args):
1219+ checksums = {}
1220+ for path in restart_map:
1221+ checksums[path] = file_hash(path)
1222+ f(*args)
1223+ restarts = []
1224+ for path in restart_map:
1225+ if checksums[path] != file_hash(path):
1226+ restarts += restart_map[path]
1227+ services_list = list(OrderedDict.fromkeys(restarts))
1228+ if not stopstart:
1229+ for service_name in services_list:
1230+ service('restart', service_name)
1231+ else:
1232+ for action in ['stop', 'start']:
1233+ for service_name in services_list:
1234+ service(action, service_name)
1235+ return wrapped_f
1236+ return wrap
1237+
1238+
1239+def lsb_release():
1240+ """Return /etc/lsb-release in a dict"""
1241+ d = {}
1242+ with open('/etc/lsb-release', 'r') as lsb:
1243+ for l in lsb:
1244+ k, v = l.split('=')
1245+ d[k.strip()] = v.strip()
1246+ return d
1247+
1248+
1249+def pwgen(length=None):
1250+ """Generate a random password."""
1251+ if length is None:
1252+ length = random.choice(range(35, 45))
1253+ alphanumeric_chars = [
1254+ l for l in (string.ascii_letters + string.digits)
1255+ if l not in 'l0QD1vAEIOUaeiou']
1256+ random_chars = [
1257+ random.choice(alphanumeric_chars) for _ in range(length)]
1258+ return(''.join(random_chars))
1259+
1260+
1261+def list_nics(nic_type):
1262+ '''Return a list of nics of given type(s)'''
1263+ if isinstance(nic_type, six.string_types):
1264+ int_types = [nic_type]
1265+ else:
1266+ int_types = nic_type
1267+ interfaces = []
1268+ for int_type in int_types:
1269+ cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
1270+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
1271+ ip_output = (line for line in ip_output if line)
1272+ for line in ip_output:
1273+ if line.split()[1].startswith(int_type):
1274+ matched = re.search('.*: (bond[0-9]+\.[0-9]+)@.*', line)
1275+ if matched:
1276+ interface = matched.groups()[0]
1277+ else:
1278+ interface = line.split()[1].replace(":", "")
1279+ interfaces.append(interface)
1280+
1281+ return interfaces
1282+
1283+
1284+def set_nic_mtu(nic, mtu):
1285+ '''Set MTU on a network interface'''
1286+ cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
1287+ subprocess.check_call(cmd)
1288+
1289+
1290+def get_nic_mtu(nic):
1291+ cmd = ['ip', 'addr', 'show', nic]
1292+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
1293+ mtu = ""
1294+ for line in ip_output:
1295+ words = line.split()
1296+ if 'mtu' in words:
1297+ mtu = words[words.index("mtu") + 1]
1298+ return mtu
1299+
1300+
1301+def get_nic_hwaddr(nic):
1302+ cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
1303+ ip_output = subprocess.check_output(cmd).decode('UTF-8')
1304+ hwaddr = ""
1305+ words = ip_output.split()
1306+ if 'link/ether' in words:
1307+ hwaddr = words[words.index('link/ether') + 1]
1308+ return hwaddr
1309+
1310+
1311+def cmp_pkgrevno(package, revno, pkgcache=None):
1312+ '''Compare supplied revno with the revno of the installed package
1313+
1314+ * 1 => Installed revno is greater than supplied arg
1315+ * 0 => Installed revno is the same as supplied arg
1316+ * -1 => Installed revno is less than supplied arg
1317+
1318+ '''
1319+ import apt_pkg
1320+ from charmhelpers.fetch import apt_cache
1321+ if not pkgcache:
1322+ pkgcache = apt_cache()
1323+ pkg = pkgcache[package]
1324+ return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
1325+
1326+
1327+@contextmanager
1328+def chdir(d):
1329+ cur = os.getcwd()
1330+ try:
1331+ yield os.chdir(d)
1332+ finally:
1333+ os.chdir(cur)
1334+
1335+
1336+def chownr(path, owner, group):
1337+ uid = pwd.getpwnam(owner).pw_uid
1338+ gid = grp.getgrnam(group).gr_gid
1339+
1340+ for root, dirs, files in os.walk(path):
1341+ for name in dirs + files:
1342+ full = os.path.join(root, name)
1343+ broken_symlink = os.path.lexists(full) and not os.path.exists(full)
1344+ if not broken_symlink:
1345+ os.chown(full, uid, gid)
1346
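
The `restart_on_change` decorator in host.py works by hashing each watched file before and after the wrapped function runs, then restarting the mapped services for any file whose hash changed. A hedged, self-contained sketch of that mechanism (service restarts are replaced with a list append so it runs anywhere; paths and names are illustrative):

```python
import hashlib
import os
import tempfile

def file_hash(path):
    """md5 of a file's contents, or None if the file does not exist."""
    if not os.path.exists(path):
        return None
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()

restarted = []  # stand-in for actually restarting services

def restart_on_change(restart_map):
    """Decorator: 'restart' mapped services when their config files change."""
    def wrap(f):
        def wrapped(*args):
            before = {path: file_hash(path) for path in restart_map}
            f(*args)
            for path, services in restart_map.items():
                if before[path] != file_hash(path):
                    restarted.extend(services)
        return wrapped
    return wrap

cfg_path = os.path.join(tempfile.mkdtemp(), 'demo.conf')

@restart_on_change({cfg_path: ['demo-service']})
def change_config():
    with open(cfg_path, 'w') as f:
        f.write('new contents\n')

change_config()
```

Because the comparison is content-based, rewriting a file with identical contents triggers no restart, which keeps hooks idempotent.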
1347=== added directory 'lib/charmhelpers/core/services'
1348=== added file 'lib/charmhelpers/core/services/__init__.py'
1349--- lib/charmhelpers/core/services/__init__.py 1970-01-01 00:00:00 +0000
1350+++ lib/charmhelpers/core/services/__init__.py 2014-12-02 20:02:04 +0000
1351@@ -0,0 +1,2 @@
1352+from .base import * # NOQA
1353+from .helpers import * # NOQA
1354
1355=== added file 'lib/charmhelpers/core/services/base.py'
1356--- lib/charmhelpers/core/services/base.py 1970-01-01 00:00:00 +0000
1357+++ lib/charmhelpers/core/services/base.py 2014-12-02 20:02:04 +0000
1358@@ -0,0 +1,313 @@
1359+import os
1360+import re
1361+import json
1362+from collections import Iterable
1363+
1364+from charmhelpers.core import host
1365+from charmhelpers.core import hookenv
1366+
1367+
1368+__all__ = ['ServiceManager', 'ManagerCallback',
1369+ 'PortManagerCallback', 'open_ports', 'close_ports', 'manage_ports',
1370+ 'service_restart', 'service_stop']
1371+
1372+
1373+class ServiceManager(object):
1374+ def __init__(self, services=None):
1375+ """
1376+ Register a list of services, given their definitions.
1377+
1378+ Service definitions are dicts in the following formats (all keys except
1379+ 'service' are optional)::
1380+
1381+ {
1382+ "service": <service name>,
1383+ "required_data": <list of required data contexts>,
1384+ "provided_data": <list of provided data contexts>,
1385+ "data_ready": <one or more callbacks>,
1386+ "data_lost": <one or more callbacks>,
1387+ "start": <one or more callbacks>,
1388+ "stop": <one or more callbacks>,
1389+ "ports": <list of ports to manage>,
1390+ }
1391+
1392+ The 'required_data' list should contain dicts of required data (or
1393+ dependency managers that act like dicts and know how to collect the data).
1394+ Only when all items in the 'required_data' list are populated are the
1395+ 'data_ready' and 'start' callbacks executed. See `is_ready()` for more
1396+ information.
1397+
1398+ The 'provided_data' list should contain relation data providers, most likely
1399+ a subclass of :class:`charmhelpers.core.services.helpers.RelationContext`,
1400+ that will indicate a set of data to set on a given relation.
1401+
1402+ The 'data_ready' value should be either a single callback, or a list of
1403+ callbacks, to be called when all items in 'required_data' pass `is_ready()`.
1404+ Each callback will be called with the service name as the only parameter.
1405+ After all of the 'data_ready' callbacks are called, the 'start' callbacks
1406+ are fired.
1407+
1408+ The 'data_lost' value should be either a single callback, or a list of
1409+ callbacks, to be called when a 'required_data' item no longer passes
1410+ `is_ready()`. Each callback will be called with the service name as the
1411+ only parameter. After all of the 'data_lost' callbacks are called,
1412+ the 'stop' callbacks are fired.
1413+
1414+ The 'start' value should be either a single callback, or a list of
1415+ callbacks, to be called when starting the service, after the 'data_ready'
1416+ callbacks are complete. Each callback will be called with the service
1417+ name as the only parameter. This defaults to
1418+ `[host.service_start, services.open_ports]`.
1419+
1420+ The 'stop' value should be either a single callback, or a list of
1421+ callbacks, to be called when stopping the service. If the service is
1422+ being stopped because it no longer has all of its 'required_data', this
1423+ will be called after all of the 'data_lost' callbacks are complete.
1424+ Each callback will be called with the service name as the only parameter.
1425+ This defaults to `[services.close_ports, host.service_stop]`.
1426+
1427+ The 'ports' value should be a list of ports to manage. The default
1428+ 'start' handler will open the ports after the service is started,
1429+ and the default 'stop' handler will close the ports prior to stopping
1430+ the service.
1431+
1432+
1433+ Examples:
1434+
1435+ The following registers an Upstart service called bingod that depends on
1436+ a mongodb relation and which runs a custom `db_migrate` function prior to
1437+ restarting the service, and a Runit service called spadesd::
1438+
1439+ manager = services.ServiceManager([
1440+ {
1441+ 'service': 'bingod',
1442+ 'ports': [80, 443],
1443+ 'required_data': [MongoRelation(), config(), {'my': 'data'}],
1444+ 'data_ready': [
1445+ services.template(source='bingod.conf'),
1446+ services.template(source='bingod.ini',
1447+ target='/etc/bingod.ini',
1448+ owner='bingo', perms=0400),
1449+ ],
1450+ },
1451+ {
1452+ 'service': 'spadesd',
1453+ 'data_ready': services.template(source='spadesd_run.j2',
1454+ target='/etc/sv/spadesd/run',
1455+ perms=0555),
1456+ 'start': runit_start,
1457+ 'stop': runit_stop,
1458+ },
1459+ ])
1460+ manager.manage()
1461+ """
1462+ self._ready_file = os.path.join(hookenv.charm_dir(), 'READY-SERVICES.json')
1463+ self._ready = None
1464+ self.services = {}
1465+ for service in services or []:
1466+ service_name = service['service']
1467+ self.services[service_name] = service
1468+
1469+ def manage(self):
1470+ """
1471+ Handle the current hook by doing The Right Thing with the registered services.
1472+ """
1473+ hook_name = hookenv.hook_name()
1474+ if hook_name == 'stop':
1475+ self.stop_services()
1476+ else:
1477+ self.provide_data()
1478+ self.reconfigure_services()
1479+ cfg = hookenv.config()
1480+ if cfg.implicit_save:
1481+ cfg.save()
1482+
1483+ def provide_data(self):
1484+ """
1485+ Set the relation data for each provider in the ``provided_data`` list.
1486+
1487+ A provider must have a `name` attribute, which indicates which relation
1488+ to set data on, and a `provide_data()` method, which returns a dict of
1489+ data to set.
1490+ """
1491+ hook_name = hookenv.hook_name()
1492+ for service in self.services.values():
1493+ for provider in service.get('provided_data', []):
1494+ if re.match(r'{}-relation-(joined|changed)'.format(provider.name), hook_name):
1495+ data = provider.provide_data()
1496+ _ready = provider._is_ready(data) if hasattr(provider, '_is_ready') else data
1497+ if _ready:
1498+ hookenv.relation_set(None, data)
1499+
1500+ def reconfigure_services(self, *service_names):
1501+ """
1502+ Update all files for one or more registered services, and,
1503+ if ready, optionally restart them.
1504+
1505+ If no service names are given, reconfigures all registered services.
1506+ """
1507+ for service_name in service_names or self.services.keys():
1508+ if self.is_ready(service_name):
1509+ self.fire_event('data_ready', service_name)
1510+ self.fire_event('start', service_name, default=[
1511+ service_restart,
1512+ manage_ports])
1513+ self.save_ready(service_name)
1514+ else:
1515+ if self.was_ready(service_name):
1516+ self.fire_event('data_lost', service_name)
1517+ self.fire_event('stop', service_name, default=[
1518+ manage_ports,
1519+ service_stop])
1520+ self.save_lost(service_name)
1521+
1522+ def stop_services(self, *service_names):
1523+ """
1524+ Stop one or more registered services, by name.
1525+
1526+ If no service names are given, stops all registered services.
1527+ """
1528+ for service_name in service_names or self.services.keys():
1529+ self.fire_event('stop', service_name, default=[
1530+ manage_ports,
1531+ service_stop])
1532+
1533+ def get_service(self, service_name):
1534+ """
1535+ Given the name of a registered service, return its service definition.
1536+ """
1537+ service = self.services.get(service_name)
1538+ if not service:
1539+ raise KeyError('Service not registered: %s' % service_name)
1540+ return service
1541+
1542+ def fire_event(self, event_name, service_name, default=None):
1543+ """
1544+ Fire a data_ready, data_lost, start, or stop event on a given service.
1545+ """
1546+ service = self.get_service(service_name)
1547+ callbacks = service.get(event_name, default)
1548+ if not callbacks:
1549+ return
1550+ if not isinstance(callbacks, Iterable):
1551+ callbacks = [callbacks]
1552+ for callback in callbacks:
1553+ if isinstance(callback, ManagerCallback):
1554+ callback(self, service_name, event_name)
1555+ else:
1556+ callback(service_name)
1557+
1558+ def is_ready(self, service_name):
1559+ """
1560+ Determine if a registered service is ready, by checking its 'required_data'.
1561+
1562+ A 'required_data' item can be any mapping type, and is considered ready
1563+ if `bool(item)` evaluates as True.
1564+ """
1565+ service = self.get_service(service_name)
1566+ reqs = service.get('required_data', [])
1567+ return all(bool(req) for req in reqs)
1568+
1569+ def _load_ready_file(self):
1570+ if self._ready is not None:
1571+ return
1572+ if os.path.exists(self._ready_file):
1573+ with open(self._ready_file) as fp:
1574+ self._ready = set(json.load(fp))
1575+ else:
1576+ self._ready = set()
1577+
1578+ def _save_ready_file(self):
1579+ if self._ready is None:
1580+ return
1581+ with open(self._ready_file, 'w') as fp:
1582+ json.dump(list(self._ready), fp)
1583+
1584+ def save_ready(self, service_name):
1585+ """
1586+ Save an indicator that the given service is now data_ready.
1587+ """
1588+ self._load_ready_file()
1589+ self._ready.add(service_name)
1590+ self._save_ready_file()
1591+
1592+ def save_lost(self, service_name):
1593+ """
1594+ Save an indicator that the given service is no longer data_ready.
1595+ """
1596+ self._load_ready_file()
1597+ self._ready.discard(service_name)
1598+ self._save_ready_file()
1599+
1600+ def was_ready(self, service_name):
1601+ """
1602+ Determine if the given service was previously data_ready.
1603+ """
1604+ self._load_ready_file()
1605+ return service_name in self._ready
1606+
1607+
1608+class ManagerCallback(object):
1609+ """
1610+ Special case of a callback that takes the `ServiceManager` instance
1611+ in addition to the service name.
1612+
1613+ Subclasses should implement `__call__` which should accept three parameters:
1614+
1615+ * `manager` The `ServiceManager` instance
1616+ * `service_name` The name of the service it's being triggered for
1617+ * `event_name` The name of the event that this callback is handling
1618+ """
1619+ def __call__(self, manager, service_name, event_name):
1620+ raise NotImplementedError()
1621+
1622+
1623+class PortManagerCallback(ManagerCallback):
1624+ """
1625+ Callback class that will open or close ports, for use as either
1626+ a start or stop action.
1627+ """
1628+ def __call__(self, manager, service_name, event_name):
1629+ service = manager.get_service(service_name)
1630+ new_ports = service.get('ports', [])
1631+ port_file = os.path.join(hookenv.charm_dir(), '.{}.ports'.format(service_name))
1632+ if os.path.exists(port_file):
1633+ with open(port_file) as fp:
1634+ old_ports = fp.read().split(',')
1635+ for old_port in old_ports:
1636+ if bool(old_port):
1637+ old_port = int(old_port)
1638+ if old_port not in new_ports:
1639+ hookenv.close_port(old_port)
1640+ with open(port_file, 'w') as fp:
1641+ fp.write(','.join(str(port) for port in new_ports))
1642+ for port in new_ports:
1643+ if event_name == 'start':
1644+ hookenv.open_port(port)
1645+ elif event_name == 'stop':
1646+ hookenv.close_port(port)
1647+
1648+
1649+def service_stop(service_name):
1650+ """
1651+ Wrapper around host.service_stop to prevent spurious "unknown service"
1652+ messages in the logs.
1653+ """
1654+ if host.service_running(service_name):
1655+ host.service_stop(service_name)
1656+
1657+
1658+def service_restart(service_name):
1659+ """
1660+ Wrapper around host.service_restart to prevent spurious "unknown service"
1661+ messages in the logs.
1662+ """
1663+ if host.service_available(service_name):
1664+ if host.service_running(service_name):
1665+ host.service_restart(service_name)
1666+ else:
1667+ host.service_start(service_name)
1668+
1669+
1670+# Convenience aliases
1671+open_ports = close_ports = manage_ports = PortManagerCallback()
1672
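
The core of `ServiceManager.reconfigure_services` is the readiness gate: a service's `data_ready`/`start` callbacks fire only when every item in its `required_data` list is truthy. A minimal sketch of that gating, using plain dicts and a lambda in place of real contexts and callbacks (all names here are illustrative):

```python
def is_ready(service):
    """A service is ready when every required_data item is truthy."""
    return all(bool(req) for req in service.get('required_data', []))

events = []
service = {
    'service': 'demo',
    'required_data': [{'db': {'host': '10.0.0.1'}}],  # non-empty dict => truthy => ready
    'data_ready': lambda name: events.append(('data_ready', name)),
}

if is_ready(service):
    service['data_ready'](service['service'])
```

An empty dict (for example, a relation with no complete units yet) would evaluate falsy, and the service would instead follow the `data_lost`/`stop` path.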
1673=== added file 'lib/charmhelpers/core/services/helpers.py'
1674--- lib/charmhelpers/core/services/helpers.py 1970-01-01 00:00:00 +0000
1675+++ lib/charmhelpers/core/services/helpers.py 2014-12-02 20:02:04 +0000
1676@@ -0,0 +1,243 @@
1677+import os
1678+import yaml
1679+from charmhelpers.core import hookenv
1680+from charmhelpers.core import templating
1681+
1682+from charmhelpers.core.services.base import ManagerCallback
1683+
1684+
1685+__all__ = ['RelationContext', 'TemplateCallback',
1686+ 'render_template', 'template']
1687+
1688+
1689+class RelationContext(dict):
1690+ """
1691+ Base class for a context generator that gets relation data from juju.
1692+
1693+ Subclasses must provide the attributes `name`, which is the name of the
1694+ interface of interest, `interface`, which is the type of the interface of
1695+ interest, and `required_keys`, which is the set of keys required for the
1696+ relation to be considered complete. The data for all interfaces matching
1697+ the `name` attribute that are complete will be used to populate the dictionary
1698+ values (see `get_data`, below).
1699+
1700+ The generated context will be namespaced under the relation :attr:`name`,
1701+ to prevent potential naming conflicts.
1702+
1703+ :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
1704+ :param list additional_required_keys: Extend the list of :attr:`required_keys`
1705+ """
1706+ name = None
1707+ interface = None
1708+ required_keys = []
1709+
1710+ def __init__(self, name=None, additional_required_keys=None):
1711+ if name is not None:
1712+ self.name = name
1713+ if additional_required_keys is not None:
1714+ self.required_keys.extend(additional_required_keys)
1715+ self.get_data()
1716+
1717+ def __bool__(self):
1718+ """
1719+ Returns True if all of the required_keys are available.
1720+ """
1721+ return self.is_ready()
1722+
1723+ __nonzero__ = __bool__
1724+
1725+ def __repr__(self):
1726+ return super(RelationContext, self).__repr__()
1727+
1728+ def is_ready(self):
1729+ """
1730+ Returns True if all of the `required_keys` are available from any units.
1731+ """
1732+ ready = len(self.get(self.name, [])) > 0
1733+ if not ready:
1734+ hookenv.log('Incomplete relation: {}'.format(self.__class__.__name__), hookenv.DEBUG)
1735+ return ready
1736+
1737+ def _is_ready(self, unit_data):
1738+ """
1739+ Helper method that tests a set of relation data and returns True if
1740+ all of the `required_keys` are present.
1741+ """
1742+ return set(unit_data.keys()).issuperset(set(self.required_keys))
1743+
1744+ def get_data(self):
1745+ """
1746+ Retrieve the relation data for each unit involved in a relation and,
1747+ if complete, store it in a list under `self[self.name]`. This
1748+ is automatically called when the RelationContext is instantiated.
1749+
1750+ The units are sorted lexicographically first by the service ID, then by
1751+ the unit ID. Thus, if an interface has two other services, 'db:1'
1752+ and 'db:2', with 'db:1' having two units, 'wordpress/0' and 'wordpress/1',
1753+ and 'db:2' having one unit, 'mediawiki/0', all of which have a complete
1754+ set of data, the relation data for the units will be stored in the
1755+ order: 'wordpress/0', 'wordpress/1', 'mediawiki/0'.
1756+
1757+ If you only care about a single unit on the relation, you can just
1758+ access it as `{{ interface[0]['key'] }}`. However, if you can at all
1759+ support multiple units on a relation, you should iterate over the list,
1760+ like::
1761+
1762+ {% for unit in interface -%}
1763+ {{ unit['key'] }}{% if not loop.last %},{% endif %}
1764+ {%- endfor %}
1765+
1766+ Note that since all sets of relation data from all related services and
1767+ units are in a single list, if you need to know which service or unit a
1768+ set of data came from, you'll need to extend this class to preserve
1769+ that information.
1770+ """
1771+ if not hookenv.relation_ids(self.name):
1772+ return
1773+
1774+ ns = self.setdefault(self.name, [])
1775+ for rid in sorted(hookenv.relation_ids(self.name)):
1776+ for unit in sorted(hookenv.related_units(rid)):
1777+ reldata = hookenv.relation_get(rid=rid, unit=unit)
1778+ if self._is_ready(reldata):
1779+ ns.append(reldata)
1780+
1781+ def provide_data(self):
1782+ """
1783+ Return data to be relation_set for this interface.
1784+ """
1785+ return {}
1786+
1787+
1788+class MysqlRelation(RelationContext):
1789+ """
1790+ Relation context for the `mysql` interface.
1791+
1792+ :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
1793+ :param list additional_required_keys: Extend the list of :attr:`required_keys`
1794+ """
1795+ name = 'db'
1796+ interface = 'mysql'
1797+ required_keys = ['host', 'user', 'password', 'database']
1798+
1799+
1800+class HttpRelation(RelationContext):
1801+ """
1802+ Relation context for the `http` interface.
1803+
1804+ :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
1805+ :param list additional_required_keys: Extend the list of :attr:`required_keys`
1806+ """
1807+ name = 'website'
1808+ interface = 'http'
1809+ required_keys = ['host', 'port']
1810+
1811+ def provide_data(self):
1812+ return {
1813+ 'host': hookenv.unit_get('private-address'),
1814+ 'port': 80,
1815+ }
1816+
1817+
1818+class RequiredConfig(dict):
1819+ """
1820+ Data context that loads config options with one or more mandatory options.
1821+
1822+ Once the required options have been changed from their default values, all
1823+ config options will be available, namespaced under `config` to prevent
1824+ potential naming conflicts (for example, between a config option and a
1825+ relation property).
1826+
1827+ :param list *args: List of options that must be changed from their default values.
1828+ """
1829+
1830+ def __init__(self, *args):
1831+ self.required_options = args
1832+ self['config'] = hookenv.config()
1833+ with open(os.path.join(hookenv.charm_dir(), 'config.yaml')) as fp:
1834+ self.config = yaml.load(fp).get('options', {})
1835+
1836+ def __bool__(self):
1837+ for option in self.required_options:
1838+ if option not in self['config']:
1839+ return False
1840+ current_value = self['config'][option]
1841+ default_value = self.config[option].get('default')
1842+ if current_value == default_value:
1843+ return False
1844+ if current_value in (None, '') and default_value in (None, ''):
1845+ return False
1846+ return True
1847+
1848+ def __nonzero__(self):
1849+ return self.__bool__()
1850+
1851+
1852+class StoredContext(dict):
1853+ """
1854+ A data context that always returns the data that it was first created with.
1855+
1856+ This is useful to do a one-time generation of things like passwords, that
1857+ will thereafter use the same value that was originally generated, instead
1858+ of generating a new value each time it is run.
1859+ """
1860+ def __init__(self, file_name, config_data):
1861+ """
1862+ If the file exists, populate `self` with the data from the file.
1863+ Otherwise, populate with the given data and persist it to the file.
1864+ """
1865+ if os.path.exists(file_name):
1866+ self.update(self.read_context(file_name))
1867+ else:
1868+ self.store_context(file_name, config_data)
1869+ self.update(config_data)
1870+
1871+ def store_context(self, file_name, config_data):
1872+ if not os.path.isabs(file_name):
1873+ file_name = os.path.join(hookenv.charm_dir(), file_name)
1874+ with open(file_name, 'w') as file_stream:
1875+ os.fchmod(file_stream.fileno(), 0o600)
1876+ yaml.dump(config_data, file_stream)
1877+
1878+ def read_context(self, file_name):
1879+ if not os.path.isabs(file_name):
1880+ file_name = os.path.join(hookenv.charm_dir(), file_name)
1881+ with open(file_name, 'r') as file_stream:
1882+ data = yaml.load(file_stream)
1883+ if not data:
1884+ raise OSError("%s is empty" % file_name)
1885+ return data
1886+
1887+
1888+class TemplateCallback(ManagerCallback):
1889+ """
1890+ Callback class that will render a Jinja2 template, for use as a ready
1891+ action.
1892+
1893+ :param str source: The template source file, relative to
1894+ `$CHARM_DIR/templates`
1895+
1896+ :param str target: The target to write the rendered template to
1897+ :param str owner: The owner of the rendered file
1898+ :param str group: The group of the rendered file
1899+ :param int perms: The permissions of the rendered file
1900+ """
1901+ def __init__(self, source, target,
1902+ owner='root', group='root', perms=0o444):
1903+ self.source = source
1904+ self.target = target
1905+ self.owner = owner
1906+ self.group = group
1907+ self.perms = perms
1908+
1909+ def __call__(self, manager, service_name, event_name):
1910+ service = manager.get_service(service_name)
1911+ context = {}
1912+ for ctx in service.get('required_data', []):
1913+ context.update(ctx)
1914+ templating.render(self.source, self.target, context,
1915+ self.owner, self.group, self.perms)
1916+
1917+
1918+# Convenience aliases for templates
1919+render_template = template = TemplateCallback
1920
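The `RequiredConfig` context above evaluates as truthy only once every required option has been changed from its declared default. A minimal standalone sketch of that readiness check, using plain dicts in place of `hookenv.config()` and the parsed `config.yaml` (the `license-key` option name is illustrative, not one of this charm's options):

```python
def options_ready(current, defaults, required):
    """Mimic RequiredConfig.__bool__: ready only when every required
    option differs from its declared default and is not empty."""
    for option in required:
        if option not in current:
            return False
        current_value = current[option]
        default_value = defaults.get(option)
        if current_value == default_value:
            return False
        if current_value in (None, '') and default_value in (None, ''):
            return False
    return True

defaults = {'license-key': ''}           # as declared in config.yaml
unset = {'license-key': ''}              # operator has not configured it yet
configured = {'license-key': 'ABC-123'}  # operator supplied a value

ready_before = options_ready(unset, defaults, ['license-key'])
ready_after = options_ready(configured, defaults, ['license-key'])
```

The empty-string/None guard matters because an option whose default is `null` in `config.yaml` but which Juju reports as `''` would otherwise look "changed".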
1921=== added file 'lib/charmhelpers/core/sysctl.py'
1922--- lib/charmhelpers/core/sysctl.py 1970-01-01 00:00:00 +0000
1923+++ lib/charmhelpers/core/sysctl.py 2014-12-02 20:02:04 +0000
1924@@ -0,0 +1,34 @@
1925+#!/usr/bin/env python
1926+# -*- coding: utf-8 -*-
1927+
1928+__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
1929+
1930+import yaml
1931+
1932+from subprocess import check_call
1933+
1934+from charmhelpers.core.hookenv import (
1935+ log,
1936+ DEBUG,
1937+)
1938+
1939+
1940+def create(sysctl_dict, sysctl_file):
1941+ """Creates a sysctl.conf file from a YAML associative array
1942+
1943+ :param sysctl_dict: a YAML-formatted string of sysctl options, eg "{ kernel.max_pid: 1337 }"
1944+ :type sysctl_dict: str or unicode
1945+ :param sysctl_file: path to the sysctl file to be saved
1946+ :type sysctl_file: str or unicode
1947+ :returns: None
1948+ """
1949+ sysctl_dict = yaml.load(sysctl_dict)
1950+
1951+ with open(sysctl_file, "w") as fd:
1952+ for key, value in sysctl_dict.items():
1953+ fd.write("{}={}\n".format(key, value))
1954+
1955+ log("Updating sysctl_file: %s values: %s" % (sysctl_file, sysctl_dict),
1956+ level=DEBUG)
1957+
1958+ check_call(["sysctl", "-p", sysctl_file])
1959
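`create()` above parses its first argument with `yaml.load` and then writes plain `key=value` lines before running `sysctl -p`. A sketch of just the file-writing step, skipping the YAML parsing and the privileged `sysctl` invocation (the option names and values are illustrative):

```python
import tempfile

def write_sysctl_file(sysctl_dict, sysctl_file):
    # Same key=value format that charmhelpers.core.sysctl.create writes,
    # minus the YAML parsing and the `sysctl -p` call (which needs root).
    with open(sysctl_file, "w") as fd:
        for key, value in sorted(sysctl_dict.items()):
            fd.write("{}={}\n".format(key, value))

tmp = tempfile.NamedTemporaryFile(suffix='.conf', delete=False)
tmp.close()
write_sysctl_file({'kernel.pid_max': 65536, 'vm.swappiness': 10}, tmp.name)
with open(tmp.name) as fd:
    content = fd.read()
```

The resulting file is valid `sysctl.conf(5)` syntax, so `sysctl -p <file>` can apply it in one shot.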
1960=== added file 'lib/charmhelpers/core/templating.py'
1961--- lib/charmhelpers/core/templating.py 1970-01-01 00:00:00 +0000
1962+++ lib/charmhelpers/core/templating.py 2014-12-02 20:02:04 +0000
1963@@ -0,0 +1,52 @@
1964+import os
1965+
1966+from charmhelpers.core import host
1967+from charmhelpers.core import hookenv
1968+
1969+
1970+def render(source, target, context, owner='root', group='root',
1971+ perms=0o444, templates_dir=None):
1972+ """
1973+ Render a template.
1974+
1975+ The `source` path, if not absolute, is relative to the `templates_dir`.
1976+
1977+ The `target` path should be absolute.
1978+
1979+ The context should be a dict containing the values to be replaced in the
1980+ template.
1981+
1982+ The `owner`, `group`, and `perms` options will be passed to `write_file`.
1983+
1984+ If omitted, `templates_dir` defaults to the `templates` folder in the charm.
1985+
1986+ Note: Using this requires python-jinja2; if it is not installed, calling
1987+ this will attempt to use charmhelpers.fetch.apt_install to install it.
1988+ """
1989+ try:
1990+ from jinja2 import FileSystemLoader, Environment, exceptions
1991+ except ImportError:
1992+ try:
1993+ from charmhelpers.fetch import apt_install
1994+ except ImportError:
1995+ hookenv.log('Could not import jinja2, and could not import '
1996+ 'charmhelpers.fetch to install it',
1997+ level=hookenv.ERROR)
1998+ raise
1999+ apt_install('python-jinja2', fatal=True)
2000+ from jinja2 import FileSystemLoader, Environment, exceptions
2001+
2002+ if templates_dir is None:
2003+ templates_dir = os.path.join(hookenv.charm_dir(), 'templates')
2004+ template_env = Environment(loader=FileSystemLoader(templates_dir))
2005+ try:
2006+ template = template_env.get_template(source)
2008+ except exceptions.TemplateNotFound as e:
2009+ hookenv.log('Could not load template %s from %s.' %
2010+ (source, templates_dir),
2011+ level=hookenv.ERROR)
2012+ raise e
2013+ content = template.render(context)
2014+ host.mkdir(os.path.dirname(target))
2015+ host.write_file(target, content, owner, group, perms)
2016
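`render()` above resolves the template relative to `$CHARM_DIR/templates`, renders it with the context dict, and writes the result with the requested ownership and permissions. Since jinja2 may not be importable outside a deployed charm, this sketch substitutes `str.format` for the Jinja2 rendering step to show the same flow; the target path and context keys are illustrative:

```python
import os
import tempfile

def render_sketch(source_text, target, context, perms=0o444):
    # Stand-in for templating.render: substitute the context into the
    # template text, create the parent directory (as host.mkdir would),
    # write the file, and set its permissions (as host.write_file would).
    content = source_text.format(**context)
    os.makedirs(os.path.dirname(target), exist_ok=True)
    with open(target, 'w') as fd:
        fd.write(content)
    os.chmod(target, perms)
    return content

tmpdir = tempfile.mkdtemp()
target = os.path.join(tmpdir, 'etc', 'demo.cnf')
rendered = render_sketch("[mysqld]\nmax_connections = {max_conn}\n",
                         target, {'max_conn': 200}, perms=0o644)
```

In the real helper the default `perms=0o444` gives a read-only rendered file, which is usually what you want for generated configuration.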
2017=== added directory 'lib/charmhelpers/fetch'
2018=== added file 'lib/charmhelpers/fetch/__init__.py'
2019--- lib/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000
2020+++ lib/charmhelpers/fetch/__init__.py 2014-12-02 20:02:04 +0000
2021@@ -0,0 +1,416 @@
2022+import importlib
2023+from tempfile import NamedTemporaryFile
2024+import time
2025+from yaml import safe_load
2026+from charmhelpers.core.host import (
2027+ lsb_release
2028+)
2029+import subprocess
2030+from charmhelpers.core.hookenv import (
2031+ config,
2032+ log,
2033+)
2034+import os
2035+
2036+import six
2037+if six.PY3:
2038+ from urllib.parse import urlparse, urlunparse
2039+else:
2040+ from urlparse import urlparse, urlunparse
2041+
2042+
2043+CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
2044+deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
2045+"""
2046+PROPOSED_POCKET = """# Proposed
2047+deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
2048+"""
2049+CLOUD_ARCHIVE_POCKETS = {
2050+ # Folsom
2051+ 'folsom': 'precise-updates/folsom',
2052+ 'precise-folsom': 'precise-updates/folsom',
2053+ 'precise-folsom/updates': 'precise-updates/folsom',
2054+ 'precise-updates/folsom': 'precise-updates/folsom',
2055+ 'folsom/proposed': 'precise-proposed/folsom',
2056+ 'precise-folsom/proposed': 'precise-proposed/folsom',
2057+ 'precise-proposed/folsom': 'precise-proposed/folsom',
2058+ # Grizzly
2059+ 'grizzly': 'precise-updates/grizzly',
2060+ 'precise-grizzly': 'precise-updates/grizzly',
2061+ 'precise-grizzly/updates': 'precise-updates/grizzly',
2062+ 'precise-updates/grizzly': 'precise-updates/grizzly',
2063+ 'grizzly/proposed': 'precise-proposed/grizzly',
2064+ 'precise-grizzly/proposed': 'precise-proposed/grizzly',
2065+ 'precise-proposed/grizzly': 'precise-proposed/grizzly',
2066+ # Havana
2067+ 'havana': 'precise-updates/havana',
2068+ 'precise-havana': 'precise-updates/havana',
2069+ 'precise-havana/updates': 'precise-updates/havana',
2070+ 'precise-updates/havana': 'precise-updates/havana',
2071+ 'havana/proposed': 'precise-proposed/havana',
2072+ 'precise-havana/proposed': 'precise-proposed/havana',
2073+ 'precise-proposed/havana': 'precise-proposed/havana',
2074+ # Icehouse
2075+ 'icehouse': 'precise-updates/icehouse',
2076+ 'precise-icehouse': 'precise-updates/icehouse',
2077+ 'precise-icehouse/updates': 'precise-updates/icehouse',
2078+ 'precise-updates/icehouse': 'precise-updates/icehouse',
2079+ 'icehouse/proposed': 'precise-proposed/icehouse',
2080+ 'precise-icehouse/proposed': 'precise-proposed/icehouse',
2081+ 'precise-proposed/icehouse': 'precise-proposed/icehouse',
2082+ # Juno
2083+ 'juno': 'trusty-updates/juno',
2084+ 'trusty-juno': 'trusty-updates/juno',
2085+ 'trusty-juno/updates': 'trusty-updates/juno',
2086+ 'trusty-updates/juno': 'trusty-updates/juno',
2087+ 'juno/proposed': 'trusty-proposed/juno',
2089+ 'trusty-juno/proposed': 'trusty-proposed/juno',
2090+ 'trusty-proposed/juno': 'trusty-proposed/juno',
2091+}
2092+
2093+# The order of this list is very important. Handlers should be listed from
2094+# least- to most-specific URL matching.
2095+FETCH_HANDLERS = (
2096+ 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
2097+ 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
2098+ 'charmhelpers.fetch.giturl.GitUrlFetchHandler',
2099+)
2100+
2101+APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
2102+APT_NO_LOCK_RETRY_DELAY = 10 # Wait 10 seconds between apt lock checks.
2103+APT_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times.
2104+
2105+
2106+class SourceConfigError(Exception):
2107+ pass
2108+
2109+
2110+class UnhandledSource(Exception):
2111+ pass
2112+
2113+
2114+class AptLockError(Exception):
2115+ pass
2116+
2117+
2118+class BaseFetchHandler(object):
2119+
2120+ """Base class for FetchHandler implementations in fetch plugins"""
2121+
2122+ def can_handle(self, source):
2123+ """Returns True if the source can be handled. Otherwise returns
2124+ a string explaining why it cannot"""
2125+ return "Wrong source type"
2126+
2127+ def install(self, source):
2128+ """Try to download and unpack the source. Return the path to the
2129+ unpacked files or raise UnhandledSource."""
2130+ raise UnhandledSource("Wrong source type {}".format(source))
2131+
2132+ def parse_url(self, url):
2133+ return urlparse(url)
2134+
2135+ def base_url(self, url):
2136+ """Return url without querystring or fragment"""
2137+ parts = list(self.parse_url(url))
2138+ parts[4:] = ['' for i in parts[4:]]
2139+ return urlunparse(parts)
2140+
2141+
2142+def filter_installed_packages(packages):
2143+ """Returns a list of packages that require installation"""
2144+ cache = apt_cache()
2145+ _pkgs = []
2146+ for package in packages:
2147+ try:
2148+ p = cache[package]
2149+ p.current_ver or _pkgs.append(package)
2150+ except KeyError:
2151+ log('Package {} has no installation candidate.'.format(package),
2152+ level='WARNING')
2153+ _pkgs.append(package)
2154+ return _pkgs
2155+
2156+
2157+def apt_cache(in_memory=True):
2158+ """Build and return an apt cache"""
2159+ import apt_pkg
2160+ apt_pkg.init()
2161+ if in_memory:
2162+ apt_pkg.config.set("Dir::Cache::pkgcache", "")
2163+ apt_pkg.config.set("Dir::Cache::srcpkgcache", "")
2164+ return apt_pkg.Cache()
2165+
2166+
2167+def apt_install(packages, options=None, fatal=False):
2168+ """Install one or more packages"""
2169+ if options is None:
2170+ options = ['--option=Dpkg::Options::=--force-confold']
2171+
2172+ cmd = ['apt-get', '--assume-yes']
2173+ cmd.extend(options)
2174+ cmd.append('install')
2175+ if isinstance(packages, six.string_types):
2176+ cmd.append(packages)
2177+ else:
2178+ cmd.extend(packages)
2179+ log("Installing {} with options: {}".format(packages,
2180+ options))
2181+ _run_apt_command(cmd, fatal)
2182+
2183+
2184+def apt_upgrade(options=None, fatal=False, dist=False):
2185+ """Upgrade all packages"""
2186+ if options is None:
2187+ options = ['--option=Dpkg::Options::=--force-confold']
2188+
2189+ cmd = ['apt-get', '--assume-yes']
2190+ cmd.extend(options)
2191+ if dist:
2192+ cmd.append('dist-upgrade')
2193+ else:
2194+ cmd.append('upgrade')
2195+ log("Upgrading with options: {}".format(options))
2196+ _run_apt_command(cmd, fatal)
2197+
2198+
2199+def apt_update(fatal=False):
2200+ """Update local apt cache"""
2201+ cmd = ['apt-get', 'update']
2202+ _run_apt_command(cmd, fatal)
2203+
2204+
2205+def apt_purge(packages, fatal=False):
2206+ """Purge one or more packages"""
2207+ cmd = ['apt-get', '--assume-yes', 'purge']
2208+ if isinstance(packages, six.string_types):
2209+ cmd.append(packages)
2210+ else:
2211+ cmd.extend(packages)
2212+ log("Purging {}".format(packages))
2213+ _run_apt_command(cmd, fatal)
2214+
2215+
2216+def apt_hold(packages, fatal=False):
2217+ """Hold one or more packages"""
2218+ cmd = ['apt-mark', 'hold']
2219+ if isinstance(packages, six.string_types):
2220+ cmd.append(packages)
2221+ else:
2222+ cmd.extend(packages)
2223+ log("Holding {}".format(packages))
2224+
2225+ if fatal:
2226+ subprocess.check_call(cmd)
2227+ else:
2228+ subprocess.call(cmd)
2229+
2230+
2231+def add_source(source, key=None):
2232+ """Add a package source to this system.
2233+
2234+ @param source: a URL or sources.list entry, as supported by
2235+ add-apt-repository(1). Examples::
2236+
2237+ ppa:charmers/example
2238+ deb https://stub:key@private.example.com/ubuntu trusty main
2239+
2240+ In addition:
2241+ 'proposed:' may be used to enable the standard 'proposed'
2242+ pocket for the release.
2243+ 'cloud:' may be used to activate official cloud archive pockets,
2244+ such as 'cloud:icehouse'
2245+ 'distro' may be used as a noop
2246+
2247+ @param key: A key to be added to the system's APT keyring and used
2248+ to verify the signatures on packages. Ideally, this should be an
2249+ ASCII format GPG public key including the block headers. A GPG key
2250+ id may also be used, but be aware that only insecure protocols are
2251+ available to retrieve the actual public key from a public keyserver,
2252+ placing your Juju environment at risk. PPA and cloud archive keys
2253+ are securely added automatically, so should not be provided.
2254+ """
2255+ if source is None:
2256+ log('Source is not present. Skipping')
2257+ return
2258+
2259+ if (source.startswith('ppa:') or
2260+ source.startswith('http') or
2261+ source.startswith('deb ') or
2262+ source.startswith('cloud-archive:')):
2263+ subprocess.check_call(['add-apt-repository', '--yes', source])
2264+ elif source.startswith('cloud:'):
2265+ apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
2266+ fatal=True)
2267+ pocket = source.split(':')[-1]
2268+ if pocket not in CLOUD_ARCHIVE_POCKETS:
2269+ raise SourceConfigError(
2270+ 'Unsupported cloud: source option %s' %
2271+ pocket)
2272+ actual_pocket = CLOUD_ARCHIVE_POCKETS[pocket]
2273+ with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
2274+ apt.write(CLOUD_ARCHIVE.format(actual_pocket))
2275+ elif source == 'proposed':
2276+ release = lsb_release()['DISTRIB_CODENAME']
2277+ with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
2278+ apt.write(PROPOSED_POCKET.format(release))
2279+ elif source == 'distro':
2280+ pass
2281+ else:
2282+ log("Unknown source: {!r}".format(source))
2283+
2284+ if key:
2285+ if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
2286+ with NamedTemporaryFile('w+') as key_file:
2287+ key_file.write(key)
2288+ key_file.flush()
2289+ key_file.seek(0)
2290+ subprocess.check_call(['apt-key', 'add', '-'], stdin=key_file)
2291+ else:
2292+ # Note that hkp: is in no way a secure protocol. Using a
2293+ # GPG key id is pointless from a security POV unless you
2294+ # absolutely trust your network and DNS.
2295+ subprocess.check_call(['apt-key', 'adv', '--keyserver',
2296+ 'hkp://keyserver.ubuntu.com:80', '--recv',
2297+ key])
2298+
2299+
2300+def configure_sources(update=False,
2301+ sources_var='install_sources',
2302+ keys_var='install_keys'):
2303+ """
2304+ Configure multiple sources from charm configuration.
2305+
2306+ The lists are encoded as yaml fragments in the configuration.
2307+ The fragment needs to be included as a string. Sources and their
2308+ corresponding keys are of the types supported by add_source().
2309+
2310+ Example config:
2311+ install_sources: |
2312+ - "ppa:foo"
2313+ - "http://example.com/repo precise main"
2314+ install_keys: |
2315+ - null
2316+ - "a1b2c3d4"
2317+
2318+ Note that 'null' (a.k.a. None) should not be quoted.
2319+ """
2320+ sources = safe_load((config(sources_var) or '').strip()) or []
2321+ keys = safe_load((config(keys_var) or '').strip()) or None
2322+
2323+ if isinstance(sources, six.string_types):
2324+ sources = [sources]
2325+
2326+ if keys is None:
2327+ for source in sources:
2328+ add_source(source, None)
2329+ else:
2330+ if isinstance(keys, six.string_types):
2331+ keys = [keys]
2332+
2333+ if len(sources) != len(keys):
2334+ raise SourceConfigError(
2335+ 'Install sources and keys lists are different lengths')
2336+ for source, key in zip(sources, keys):
2337+ add_source(source, key)
2338+ if update:
2339+ apt_update(fatal=True)
2340+
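`configure_sources` normalises a single string to a one-element list, allows `keys` to be omitted entirely, and otherwise requires the two lists to pair up one-to-one. A sketch of just that normalisation and pairing (the repository URLs and key id are the docstring's own examples, not real ones):

```python
def pair_sources_with_keys(sources, keys):
    # Mirror configure_sources' normalisation and pairing rules.
    if isinstance(sources, str):
        sources = [sources]
    if keys is None:
        return [(source, None) for source in sources]
    if isinstance(keys, str):
        keys = [keys]
    if len(sources) != len(keys):
        raise ValueError('Install sources and keys lists are different lengths')
    return list(zip(sources, keys))

pairs = pair_sources_with_keys(
    ['ppa:foo', 'http://example.com/repo precise main'],
    [None, 'a1b2c3d4'])
```

Each resulting `(source, key)` pair is then handed to `add_source()`, with a `None` key meaning "no key to import for this source".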
2341+
2342+def install_remote(source, *args, **kwargs):
2343+ """
2344+ Install a file tree from a remote source
2345+
2346+ The specified source should be a url of the form:
2347+ scheme://[host]/path[#[option=value][&...]]
2348+
2349+ Schemes supported are based on this module's submodules.
2350+ Options supported are submodule-specific.
2351+ Additional arguments are passed through to the submodule.
2352+
2353+ For example::
2354+
2355+ dest = install_remote('http://example.com/archive.tgz',
2356+ checksum='deadbeef',
2357+ hash_type='sha1')
2358+
2359+ This will download `archive.tgz`, validate it using SHA1 and, if
2360+ the file is ok, extract it and return the directory in which it
2361+ was extracted. If the checksum fails, it will raise
2362+ :class:`charmhelpers.core.host.ChecksumError`.
2363+ """
2364+ # We ONLY check for True here because can_handle may return a string
2365+ # explaining why it can't handle a given source.
2366+ handlers = [h for h in plugins() if h.can_handle(source) is True]
2367+ installed_to = None
2368+ for handler in handlers:
2369+ try:
2370+ installed_to = handler.install(source, *args, **kwargs)
2371+ except UnhandledSource:
2372+ pass
2373+ if not installed_to:
2374+ raise UnhandledSource("No handler found for source {}".format(source))
2375+ return installed_to
2376+
2377+
2378+def install_from_config(config_var_name):
2379+ charm_config = config()
2380+ source = charm_config[config_var_name]
2381+ return install_remote(source)
2382+
2383+
2384+def plugins(fetch_handlers=None):
2385+ if not fetch_handlers:
2386+ fetch_handlers = FETCH_HANDLERS
2387+ plugin_list = []
2388+ for handler_name in fetch_handlers:
2389+ package, classname = handler_name.rsplit('.', 1)
2390+ try:
2391+ handler_class = getattr(
2392+ importlib.import_module(package),
2393+ classname)
2394+ plugin_list.append(handler_class())
2395+ except (ImportError, AttributeError):
2396+ # Skip missing plugins so that they can be omitted from
2397+ # installation if desired
2398+ log("FetchHandler {} not found, skipping plugin".format(
2399+ handler_name))
2400+ return plugin_list
2401+
2402+
2403+def _run_apt_command(cmd, fatal=False):
2404+ """
2405+ Run an APT command, checking output and retrying if the fatal flag is set
2406+ to True.
2407+
2408+ :param cmd: str: The apt command to run.
2409+ :param fatal: bool: Whether the command's output should be checked and
2410+ retried.
2411+ """
2412+ env = os.environ.copy()
2413+
2414+ if 'DEBIAN_FRONTEND' not in env:
2415+ env['DEBIAN_FRONTEND'] = 'noninteractive'
2416+
2417+ if fatal:
2418+ retry_count = 0
2419+ result = None
2420+
2421+ # If the command is considered "fatal", we need to retry if the apt
2422+ # lock was not acquired.
2423+
2424+ while result is None or result == APT_NO_LOCK:
2425+ try:
2426+ result = subprocess.check_call(cmd, env=env)
2427+ except subprocess.CalledProcessError as e:
2428+ retry_count = retry_count + 1
2429+ if retry_count > APT_NO_LOCK_RETRY_COUNT:
2430+ raise
2431+ result = e.returncode
2432+ log("Couldn't acquire DPKG lock. Will retry in {} seconds."
2433+ "".format(APT_NO_LOCK_RETRY_DELAY))
2434+ time.sleep(APT_NO_LOCK_RETRY_DELAY)
2435+
2436+ else:
2437+ subprocess.call(cmd, env=env)
2438
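The retry loop in `_run_apt_command` keeps re-running a fatal command while it exits with `APT_NO_LOCK` (100), up to `APT_NO_LOCK_RETRY_COUNT` attempts. A sketch of that control flow with the `subprocess.check_call` replaced by an injected callable so the loop itself can be exercised without apt:

```python
APT_NO_LOCK = 100
APT_NO_LOCK_RETRY_COUNT = 30

class CalledProcessError(Exception):
    # Minimal stand-in for subprocess.CalledProcessError.
    def __init__(self, returncode):
        self.returncode = returncode

def run_with_lock_retries(run, sleep=lambda seconds: None):
    """run() raises CalledProcessError on failure, as check_call does;
    retry only while the failure is the apt lock exit code."""
    retry_count = 0
    result = None
    while result is None or result == APT_NO_LOCK:
        try:
            result = run()  # check_call returns 0 on success
        except CalledProcessError as e:
            retry_count += 1
            if retry_count > APT_NO_LOCK_RETRY_COUNT:
                raise
            result = e.returncode
            sleep(10)
    return retry_count

attempts = {'n': 0}
def flaky():
    # Fail with the lock code twice, then succeed.
    attempts['n'] += 1
    if attempts['n'] <= 2:
        raise CalledProcessError(APT_NO_LOCK)
    return 0

retries = run_with_lock_retries(flaky)
```

Any non-lock failure (a returncode other than 100) falls out of the `while` condition immediately, matching the helper's behaviour of retrying only on lock contention.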
2439=== added file 'lib/charmhelpers/fetch/archiveurl.py'
2440--- lib/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000
2441+++ lib/charmhelpers/fetch/archiveurl.py 2014-12-02 20:02:04 +0000
2442@@ -0,0 +1,145 @@
2443+import os
2444+import hashlib
2445+import re
2446+
2447+import six
2448+if six.PY3:
2449+ from urllib.request import (
2450+ build_opener, install_opener, urlopen, urlretrieve,
2451+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
2452+ )
2453+ from urllib.parse import urlparse, urlunparse, parse_qs
2454+ from urllib.error import URLError
2455+else:
2456+ from urllib import urlretrieve
2457+ from urllib2 import (
2458+ build_opener, install_opener, urlopen,
2459+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
2460+ URLError
2461+ )
2462+ from urlparse import urlparse, urlunparse, parse_qs
2463+
2464+from charmhelpers.fetch import (
2465+ BaseFetchHandler,
2466+ UnhandledSource
2467+)
2468+from charmhelpers.payload.archive import (
2469+ get_archive_handler,
2470+ extract,
2471+)
2472+from charmhelpers.core.host import mkdir, check_hash
2473+
2474+
2475+def splituser(host):
2476+ '''urllib.splituser(), but six's support of this seems broken'''
2477+ _userprog = re.compile('^(.*)@(.*)$')
2478+ match = _userprog.match(host)
2479+ if match:
2480+ return match.group(1, 2)
2481+ return None, host
2482+
2483+
2484+def splitpasswd(user):
2485+ '''urllib.splitpasswd(), but six's support of this is missing'''
2486+ _passwdprog = re.compile('^([^:]*):(.*)$', re.S)
2487+ match = _passwdprog.match(user)
2488+ if match:
2489+ return match.group(1, 2)
2490+ return user, None
2491+
2492+
2493+class ArchiveUrlFetchHandler(BaseFetchHandler):
2494+ """
2495+ Handler to download archive files from arbitrary URLs.
2496+
2497+ Can fetch from http, https, ftp, and file URLs.
2498+
2499+ Can install either tarballs (.tar, .tgz, .tbz2, etc) or zip files.
2500+
2501+ Installs the contents of the archive in $CHARM_DIR/fetched/.
2502+ """
2503+ def can_handle(self, source):
2504+ url_parts = self.parse_url(source)
2505+ if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
2506+ return "Wrong source type"
2507+ if get_archive_handler(self.base_url(source)):
2508+ return True
2509+ return False
2510+
2511+ def download(self, source, dest):
2512+ """
2513+ Download an archive file.
2514+
2515+ :param str source: URL pointing to an archive file.
2516+ :param str dest: Local path location to download archive file to.
2517+ """
2518+ # propagate all exceptions
2519+ # URLError, OSError, etc
2520+ proto, netloc, path, params, query, fragment = urlparse(source)
2521+ if proto in ('http', 'https'):
2522+ auth, barehost = splituser(netloc)
2523+ if auth is not None:
2524+ source = urlunparse((proto, barehost, path, params, query, fragment))
2525+ username, password = splitpasswd(auth)
2526+ passman = HTTPPasswordMgrWithDefaultRealm()
2527+ # Realm is set to None in add_password to force the username and password
2528+ # to be used whatever the realm
2529+ passman.add_password(None, source, username, password)
2530+ authhandler = HTTPBasicAuthHandler(passman)
2531+ opener = build_opener(authhandler)
2532+ install_opener(opener)
2533+ response = urlopen(source)
2534+ try:
2535+ with open(dest, 'w') as dest_file:
2536+ dest_file.write(response.read())
2537+ except Exception as e:
2538+ if os.path.isfile(dest):
2539+ os.unlink(dest)
2540+ raise e
2541+
2542+ # Mandatory file validation via SHA1 or MD5 hashing.
2543+ def download_and_validate(self, url, hashsum, validate="sha1"):
2544+ tempfile, headers = urlretrieve(url)
2545+ check_hash(tempfile, hashsum, validate)
2546+ return tempfile
2547+
2548+ def install(self, source, dest=None, checksum=None, hash_type='sha1'):
2549+ """
2550+ Download and install an archive file, with optional checksum validation.
2551+
2552+ The checksum can also be given on the `source` URL's fragment.
2553+ For example::
2554+
2555+ handler.install('http://example.com/file.tgz#sha1=deadbeef')
2556+
2557+ :param str source: URL pointing to an archive file.
2558+ :param str dest: Local destination path to install to. If not given,
2559+ installs to `$CHARM_DIR/archives/archive_file_name`.
2560+ :param str checksum: If given, validate the archive file after download.
2561+ :param str hash_type: Algorithm used to generate `checksum`.
2562+ Can be any hash algorithm supported by :mod:`hashlib`,
2563+ such as md5, sha1, sha256, sha512, etc.
2564+
2565+ """
2566+ url_parts = self.parse_url(source)
2567+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
2568+ if not os.path.exists(dest_dir):
2569+ mkdir(dest_dir, perms=0o755)
2570+ dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
2571+ try:
2572+ self.download(source, dld_file)
2573+ except URLError as e:
2574+ raise UnhandledSource(e.reason)
2575+ except OSError as e:
2576+ raise UnhandledSource(e.strerror)
2577+ options = parse_qs(url_parts.fragment)
2578+ for key, value in options.items():
2579+ if not six.PY3:
2580+ algorithms = hashlib.algorithms
2581+ else:
2582+ algorithms = hashlib.algorithms_available
2583+ if key in algorithms:
2584+ check_hash(dld_file, value, key)
2585+ if checksum:
2586+ check_hash(dld_file, checksum, hash_type)
2587+ return extract(dld_file, dest)
2588
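`ArchiveUrlFetchHandler.install` accepts a checksum in the URL fragment, e.g. `#sha1=deadbeef`, and validates the download with any `hashlib` algorithm. A runnable sketch of that fragment handling against a local file (the payload and its digest are computed inline here, not real release values):

```python
import hashlib
import tempfile
from urllib.parse import urlparse, parse_qs  # py2 would use urlparse

def check_hash(path, checksum, hash_type):
    # Equivalent of charmhelpers.core.host.check_hash.
    digest = hashlib.new(hash_type)
    with open(path, 'rb') as fd:
        digest.update(fd.read())
    if digest.hexdigest() != checksum:
        raise ValueError('Hash mismatch for {}'.format(path))
    return True

# Write a known payload and build a URL carrying its sha256 in the fragment.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b'payload')
tmp.close()
expected = hashlib.sha256(b'payload').hexdigest()
url = 'http://example.com/file.tgz#sha256=' + expected

# parse_qs yields {'sha256': ['<digest>']}; validate with each known algorithm.
options = parse_qs(urlparse(url).fragment)
results = {key: check_hash(tmp.name, value[0], key)
           for key, value in options.items()
           if key in hashlib.algorithms_available}
```

Unknown fragment keys are simply ignored, which is why the handler filters on `hashlib.algorithms_available` before validating.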
2589=== added file 'lib/charmhelpers/fetch/bzrurl.py'
2590--- lib/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000
2591+++ lib/charmhelpers/fetch/bzrurl.py 2014-12-02 20:02:04 +0000
2592@@ -0,0 +1,54 @@
2593+import os
2594+from charmhelpers.fetch import (
2595+ BaseFetchHandler,
2596+ UnhandledSource
2597+)
2598+from charmhelpers.core.host import mkdir
2599+
2600+import six
2601+if six.PY3:
2602+ raise ImportError('bzrlib does not support Python3')
2603+
2604+try:
2605+ from bzrlib.branch import Branch
2606+except ImportError:
2607+ from charmhelpers.fetch import apt_install
2608+ apt_install("python-bzrlib")
2609+ from bzrlib.branch import Branch
2610+
2611+
2612+class BzrUrlFetchHandler(BaseFetchHandler):
2613+ """Handler for bazaar branches via generic and lp URLs"""
2614+ def can_handle(self, source):
2615+ url_parts = self.parse_url(source)
2616+ if url_parts.scheme not in ('bzr+ssh', 'lp'):
2617+ return False
2618+ else:
2619+ return True
2620+
2621+ def branch(self, source, dest):
2622+ url_parts = self.parse_url(source)
2623+ # If we use lp:branchname scheme we need to load plugins
2624+ if not self.can_handle(source):
2625+ raise UnhandledSource("Cannot handle {}".format(source))
2626+ if url_parts.scheme == "lp":
2627+ from bzrlib.plugin import load_plugins
2628+ load_plugins()
2629+ remote_branch = Branch.open(source)
2630+ remote_branch.bzrdir.sprout(dest).open_branch()
2634+
2635+ def install(self, source):
2636+ url_parts = self.parse_url(source)
2637+ branch_name = url_parts.path.strip("/").split("/")[-1]
2638+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
2639+ branch_name)
2640+ if not os.path.exists(dest_dir):
2641+ mkdir(dest_dir, perms=0o755)
2642+ try:
2643+ self.branch(source, dest_dir)
2644+ except OSError as e:
2645+ raise UnhandledSource(e.strerror)
2646+ return dest_dir
2647
2648=== added file 'lib/charmhelpers/fetch/giturl.py'
2649--- lib/charmhelpers/fetch/giturl.py 1970-01-01 00:00:00 +0000
2650+++ lib/charmhelpers/fetch/giturl.py 2014-12-02 20:02:04 +0000
2651@@ -0,0 +1,48 @@
2652+import os
2653+from charmhelpers.fetch import (
2654+ BaseFetchHandler,
2655+ UnhandledSource
2656+)
2657+from charmhelpers.core.host import mkdir
2658+
2659+import six
2660+if six.PY3:
2661+ raise ImportError('GitPython does not support Python 3')
2662+
2663+try:
2664+ from git import Repo
2665+except ImportError:
2666+ from charmhelpers.fetch import apt_install
2667+ apt_install("python-git")
2668+ from git import Repo
2669+
2670+
2671+class GitUrlFetchHandler(BaseFetchHandler):
2672+ """Handler for git branches via generic and github URLs"""
2673+ def can_handle(self, source):
2674+ url_parts = self.parse_url(source)
2675+ # TODO (mattyw) no support for ssh git@ yet
2676+ if url_parts.scheme not in ('http', 'https', 'git'):
2677+ return False
2678+ else:
2679+ return True
2680+
2681+ def clone(self, source, dest, branch):
2682+ if not self.can_handle(source):
2683+ raise UnhandledSource("Cannot handle {}".format(source))
2684+
2685+ repo = Repo.clone_from(source, dest)
2686+ repo.git.checkout(branch)
2687+
2688+ def install(self, source, branch="master"):
2689+ url_parts = self.parse_url(source)
2690+ branch_name = url_parts.path.strip("/").split("/")[-1]
2691+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
2692+ branch_name)
2693+ if not os.path.exists(dest_dir):
2694+ mkdir(dest_dir, perms=0o755)
2695+ try:
2696+ self.clone(source, dest_dir, branch)
2697+ except OSError as e:
2698+ raise UnhandledSource(e.strerror)
2699+ return dest_dir
2700
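Both URL handlers above gate on the URL scheme in `can_handle`, and `install_remote` walks `FETCH_HANDLERS` in least- to most-specific order. A sketch of that scheme dispatch using only `urlparse` (handler names only; no bzr or git libraries are imported, and the dict below is an illustrative summary, not the module's own table):

```python
from urllib.parse import urlparse  # py2 would use urlparse.urlparse

# Scheme-to-handler pairing, in the same least- to most-specific order
# as FETCH_HANDLERS. http(s) archive URLs also match the archive handler
# before the git handler is consulted.
SCHEME_HANDLERS = {
    ('http', 'https', 'ftp', 'file'): 'ArchiveUrlFetchHandler',
    ('bzr+ssh', 'lp'): 'BzrUrlFetchHandler',
    ('git',): 'GitUrlFetchHandler',
}

def pick_handler(source):
    scheme = urlparse(source).scheme
    for schemes, name in SCHEME_HANDLERS.items():
        if scheme in schemes:
            return name
    return None

bzr_handler = pick_handler('lp:~lazypower/charms/trusty/mariadb')
git_handler = pick_handler('git://example.com/repo.git')
archive_handler = pick_handler('http://example.com/file.tgz')
```

A source that matches no scheme returns `None` here; the real `install_remote` raises `UnhandledSource` in that case.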
2701=== added directory 'lib/charmhelpers/payload'
2702=== added file 'lib/charmhelpers/payload/__init__.py'
2703--- lib/charmhelpers/payload/__init__.py 1970-01-01 00:00:00 +0000
2704+++ lib/charmhelpers/payload/__init__.py 2014-12-02 20:02:04 +0000
2705@@ -0,0 +1,1 @@
2706+"Tools for working with files injected into a charm just before deployment."
2707
2708=== added file 'lib/charmhelpers/payload/archive.py'
2709--- lib/charmhelpers/payload/archive.py 1970-01-01 00:00:00 +0000
2710+++ lib/charmhelpers/payload/archive.py 2014-12-02 20:02:04 +0000
2711@@ -0,0 +1,57 @@
2712+import os
2713+import tarfile
2714+import zipfile
2715+from charmhelpers.core import (
2716+ host,
2717+ hookenv,
2718+)
2719+
2720+
2721+class ArchiveError(Exception):
2722+ pass
2723+
2724+
2725+def get_archive_handler(archive_name):
2726+ if os.path.isfile(archive_name):
2727+ if tarfile.is_tarfile(archive_name):
2728+ return extract_tarfile
2729+ elif zipfile.is_zipfile(archive_name):
2730+ return extract_zipfile
2731+ else:
2732+ # look at the file name
2733+ for ext in ('.tar', '.tar.gz', '.tgz', '.tar.bz2', '.tbz2', '.tbz'):
2734+ if archive_name.endswith(ext):
2735+ return extract_tarfile
2736+ for ext in ('.zip', '.jar'):
2737+ if archive_name.endswith(ext):
2738+ return extract_zipfile
2739+
2740+
2741+def archive_dest_default(archive_name):
2742+ archive_file = os.path.basename(archive_name)
2743+ return os.path.join(hookenv.charm_dir(), "archives", archive_file)
2744+
2745+
2746+def extract(archive_name, destpath=None):
2747+ handler = get_archive_handler(archive_name)
2748+ if handler:
2749+ if not destpath:
2750+ destpath = archive_dest_default(archive_name)
2751+ if not os.path.isdir(destpath):
2752+ host.mkdir(destpath)
2753+ handler(archive_name, destpath)
2754+ return destpath
2755+ else:
2756+ raise ArchiveError("No handler for archive")
2757+
2758+
2759+def extract_tarfile(archive_name, destpath):
2760+ "Unpack a tar archive, optionally compressed"
2761+ archive = tarfile.open(archive_name)
2762+ archive.extractall(destpath)
2763+
2764+
2765+def extract_zipfile(archive_name, destpath):
2766+ "Unpack a zip file"
2767+ archive = zipfile.ZipFile(archive_name)
2768+ archive.extractall(destpath)
2769
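`extract()` above picks an unpacker by content sniffing (`tarfile.is_tarfile` / `zipfile.is_zipfile`) and only falls back to the file extension when the file does not exist yet. A runnable round-trip using the same stdlib calls (all paths are temporary, created by the sketch itself):

```python
import os
import tarfile
import tempfile
import zipfile

def get_archive_handler(archive_name):
    # Content sniffing first, as in charmhelpers.payload.archive.
    if tarfile.is_tarfile(archive_name):
        return lambda name, dest: tarfile.open(name).extractall(dest)
    if zipfile.is_zipfile(archive_name):
        return lambda name, dest: zipfile.ZipFile(name).extractall(dest)
    return None

workdir = tempfile.mkdtemp()
payload = os.path.join(workdir, 'hello.txt')
with open(payload, 'w') as fd:
    fd.write('hi')

# Build a compressed tarball, then dispatch and unpack it.
archive = os.path.join(workdir, 'demo.tgz')
with tarfile.open(archive, 'w:gz') as tar:
    tar.add(payload, arcname='hello.txt')

dest = os.path.join(workdir, 'unpacked')
os.makedirs(dest)
handler = get_archive_handler(archive)
handler(archive, dest)
extracted = open(os.path.join(dest, 'hello.txt')).read()
```

Because sniffing inspects file contents, a misnamed archive (say a zip with a `.tgz` extension) still gets the correct unpacker.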
2770=== added file 'lib/charmhelpers/payload/execd.py'
2771--- lib/charmhelpers/payload/execd.py 1970-01-01 00:00:00 +0000
2772+++ lib/charmhelpers/payload/execd.py 2014-12-02 20:02:04 +0000
2773@@ -0,0 +1,50 @@
2774+#!/usr/bin/env python
2775+
2776+import os
2777+import sys
2778+import subprocess
2779+from charmhelpers.core import hookenv
2780+
2781+
2782+def default_execd_dir():
2783+ return os.path.join(os.environ['CHARM_DIR'], 'exec.d')
2784+
2785+
2786+def execd_module_paths(execd_dir=None):
2787+ """Generate a list of full paths to modules within execd_dir."""
2788+ if not execd_dir:
2789+ execd_dir = default_execd_dir()
2790+
2791+ if not os.path.exists(execd_dir):
2792+ return
2793+
2794+ for subpath in os.listdir(execd_dir):
2795+ module = os.path.join(execd_dir, subpath)
2796+ if os.path.isdir(module):
2797+ yield module
2798+
2799+
2800+def execd_submodule_paths(command, execd_dir=None):
2801+ """Generate a list of full paths to the specified command within exec_dir.
2802+ """
2803+ for module_path in execd_module_paths(execd_dir):
2804+ path = os.path.join(module_path, command)
2805+ if os.access(path, os.X_OK) and os.path.isfile(path):
2806+ yield path
2807+
2808+
2809+def execd_run(command, execd_dir=None, die_on_error=False, stderr=None):
2810+ """Run command for each module within execd_dir which defines it."""
2811+ for submodule_path in execd_submodule_paths(command, execd_dir):
2812+ try:
2813+ subprocess.check_call(submodule_path, shell=True, stderr=stderr)
2814+ except subprocess.CalledProcessError as e:
2815+ hookenv.log("Error ({}) running {}. Output: {}".format(
2816+ e.returncode, e.cmd, e.output))
2817+ if die_on_error:
2818+ sys.exit(e.returncode)
2819+
2820+
2821+def execd_preinstall(execd_dir=None):
2822+ """Run charm-pre-install for each module within execd_dir."""
2823+ execd_run('charm-pre-install', execd_dir=execd_dir)
2824
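execd.py above implements the charm exec.d convention: each subdirectory of `$CHARM_DIR/exec.d` may ship an executable `charm-pre-install` script, and every one found is run before package installation. The discovery-and-run loop can be sketched standalone (directory and function names below are illustrative):

```python
# Standalone sketch of the exec.d discovery pattern from execd.py above:
# every executable <execd_dir>/<module>/charm-pre-install is run in turn.
import os
import subprocess


def run_execd(execd_dir, command='charm-pre-install'):
    """Run `command` from each subdirectory of execd_dir; map name -> exit code."""
    results = {}
    if not os.path.isdir(execd_dir):
        return results
    for subpath in sorted(os.listdir(execd_dir)):
        script = os.path.join(execd_dir, subpath, command)
        # Same discovery rule as execd_submodule_paths(): a regular,
        # executable file named after the command.
        if os.path.isfile(script) and os.access(script, os.X_OK):
            results[subpath] = subprocess.call(script)
    return results
```

Where execd_run() logs failures and can `sys.exit`, this sketch just collects exit codes; the discovery rules are the same.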
2825=== added file 'scripts/charm_helpers_sync.py'
2826--- scripts/charm_helpers_sync.py 1970-01-01 00:00:00 +0000
2827+++ scripts/charm_helpers_sync.py 2014-12-02 20:02:04 +0000
2828@@ -0,0 +1,223 @@
2829+#!/usr/bin/env python
2830+# Copyright 2013 Canonical Ltd.
2831+
2832+# Authors:
2833+# Adam Gandelman <adamg@ubuntu.com>
2834+
2835+import logging
2836+import optparse
2837+import os
2838+import subprocess
2839+import shutil
2840+import sys
2841+import tempfile
2842+import yaml
2843+
2844+from fnmatch import fnmatch
2845+
2846+CHARM_HELPERS_BRANCH = 'lp:charm-helpers'
2847+
2848+
2849+def parse_config(conf_file):
2850+ if not os.path.isfile(conf_file):
2851+ logging.error('Invalid config file: %s.' % conf_file)
2852+ return False
2853+ return yaml.load(open(conf_file).read())
2854+
2855+
2856+def clone_helpers(work_dir, branch):
2857+ dest = os.path.join(work_dir, 'charm-helpers')
2858+ logging.info('Checking out %s to %s.' % (branch, dest))
2859+ cmd = ['bzr', 'branch', branch, dest]
2860+ subprocess.check_call(cmd)
2861+ return dest
2862+
2863+
2864+def _module_path(module):
2865+ return os.path.join(*module.split('.'))
2866+
2867+
2868+def _src_path(src, module):
2869+ return os.path.join(src, 'charmhelpers', _module_path(module))
2870+
2871+
2872+def _dest_path(dest, module):
2873+ return os.path.join(dest, _module_path(module))
2874+
2875+
2876+def _is_pyfile(path):
2877+ return os.path.isfile(path + '.py')
2878+
2879+
2880+def ensure_init(path):
2881+ '''
2882+ ensure directories leading up to path are importable, omitting
2883+ parent directory, eg path='/hooks/helpers/foo'/:
2884+ hooks/
2885+ hooks/helpers/__init__.py
2886+ hooks/helpers/foo/__init__.py
2887+ '''
2888+ for d, dirs, files in os.walk(os.path.join(*path.split('/')[:2])):
2889+ _i = os.path.join(d, '__init__.py')
2890+ if not os.path.exists(_i):
2891+ logging.info('Adding missing __init__.py: %s' % _i)
2892+ open(_i, 'wb').close()
2893+
2894+
2895+def sync_pyfile(src, dest):
2896+ src = src + '.py'
2897+ src_dir = os.path.dirname(src)
2898+ logging.info('Syncing pyfile: %s -> %s.' % (src, dest))
2899+ if not os.path.exists(dest):
2900+ os.makedirs(dest)
2901+ shutil.copy(src, dest)
2902+ if os.path.isfile(os.path.join(src_dir, '__init__.py')):
2903+ shutil.copy(os.path.join(src_dir, '__init__.py'),
2904+ dest)
2905+ ensure_init(dest)
2906+
2907+
2908+def get_filter(opts=None):
2909+ opts = opts or []
2910+ if 'inc=*' in opts:
2911+ # do not filter any files, include everything
2912+ return None
2913+
2914+ def _filter(dir, ls):
2915+ incs = [opt.split('=').pop() for opt in opts if 'inc=' in opt]
2916+ _filter = []
2917+ for f in ls:
2918+ _f = os.path.join(dir, f)
2919+
2920+ if not os.path.isdir(_f) and not _f.endswith('.py') and incs:
2921+ if True not in [fnmatch(_f, inc) for inc in incs]:
2922+ logging.debug('Not syncing %s, does not match include '
2923+ 'filters (%s)' % (_f, incs))
2924+ _filter.append(f)
2925+ else:
2926+ logging.debug('Including file, which matches include '
2927+ 'filters (%s): %s' % (incs, _f))
2928+ elif (os.path.isfile(_f) and not _f.endswith('.py')):
2929+ logging.debug('Not syncing file: %s' % f)
2930+ _filter.append(f)
2931+ elif (os.path.isdir(_f) and not
2932+ os.path.isfile(os.path.join(_f, '__init__.py'))):
2933+ logging.debug('Not syncing directory: %s' % f)
2934+ _filter.append(f)
2935+ return _filter
2936+ return _filter
2937+
2938+
2939+def sync_directory(src, dest, opts=None):
2940+ if os.path.exists(dest):
2941+ logging.debug('Removing existing directory: %s' % dest)
2942+ shutil.rmtree(dest)
2943+ logging.info('Syncing directory: %s -> %s.' % (src, dest))
2944+
2945+ shutil.copytree(src, dest, ignore=get_filter(opts))
2946+ ensure_init(dest)
2947+
2948+
2949+def sync(src, dest, module, opts=None):
2950+ if os.path.isdir(_src_path(src, module)):
2951+ sync_directory(_src_path(src, module), _dest_path(dest, module), opts)
2952+ elif _is_pyfile(_src_path(src, module)):
2953+ sync_pyfile(_src_path(src, module),
2954+ os.path.dirname(_dest_path(dest, module)))
2955+ else:
2956+ logging.warn('Could not sync: %s. Neither a pyfile nor a directory, '
2957+ 'does it even exist?' % module)
2958+
2959+
2960+def parse_sync_options(options):
2961+ if not options:
2962+ return []
2963+ return options.split(',')
2964+
2965+
2966+def extract_options(inc, global_options=None):
2967+ global_options = global_options or []
2968+ if global_options and isinstance(global_options, basestring):
2969+ global_options = [global_options]
2970+ if '|' not in inc:
2971+ return (inc, global_options)
2972+ inc, opts = inc.split('|')
2973+ return (inc, parse_sync_options(opts) + global_options)
2974+
2975+
2976+def sync_helpers(include, src, dest, options=None):
2977+ if not os.path.isdir(dest):
2978+ os.mkdir(dest)
2979+
2980+ global_options = parse_sync_options(options)
2981+
2982+ for inc in include:
2983+ if isinstance(inc, str):
2984+ inc, opts = extract_options(inc, global_options)
2985+ sync(src, dest, inc, opts)
2986+ elif isinstance(inc, dict):
2987+ # could also do nested dicts here.
2988+ for k, v in inc.iteritems():
2989+ if isinstance(v, list):
2990+ for m in v:
2991+ inc, opts = extract_options(m, global_options)
2992+ sync(src, dest, '%s.%s' % (k, inc), opts)
2993+
2994+if __name__ == '__main__':
2995+ parser = optparse.OptionParser()
2996+ parser.add_option('-c', '--config', action='store', dest='config',
2997+ default=None, help='helper config file')
2998+ parser.add_option('-D', '--debug', action='store_true', dest='debug',
2999+ default=False, help='debug')
3000+ parser.add_option('-b', '--branch', action='store', dest='branch',
3001+ help='charm-helpers bzr branch (overrides config)')
3002+ parser.add_option('-d', '--destination', action='store', dest='dest_dir',
3003+ help='sync destination dir (overrides config)')
3004+ (opts, args) = parser.parse_args()
3005+
3006+ if opts.debug:
3007+ logging.basicConfig(level=logging.DEBUG)
3008+ else:
3009+ logging.basicConfig(level=logging.INFO)
3010+
3011+ if opts.config:
3012+ logging.info('Loading charm helper config from %s.' % opts.config)
3013+ config = parse_config(opts.config)
3014+ if not config:
3015+ logging.error('Could not parse config from %s.' % opts.config)
3016+ sys.exit(1)
3017+ else:
3018+ config = {}
3019+
3020+ if 'branch' not in config:
3021+ config['branch'] = CHARM_HELPERS_BRANCH
3022+ if opts.branch:
3023+ config['branch'] = opts.branch
3024+ if opts.dest_dir:
3025+ config['destination'] = opts.dest_dir
3026+
3027+ if 'destination' not in config:
3028+ logging.error('No destination dir. specified as option or config.')
3029+ sys.exit(1)
3030+
3031+ if 'include' not in config:
3032+ if not args:
3033+ logging.error('No modules to sync specified as option or config.')
3034+ sys.exit(1)
3035+ config['include'] = []
3036+ [config['include'].append(a) for a in args]
3037+
3038+ sync_options = None
3039+ if 'options' in config:
3040+ sync_options = config['options']
3041+ tmpd = tempfile.mkdtemp()
3042+ try:
3043+ checkout = clone_helpers(tmpd, config['branch'])
3044+ sync_helpers(config['include'], checkout, config['destination'],
3045+ options=sync_options)
3046+ except Exception, e:
3047+ logging.error("Could not sync: %s" % e)
3048+ raise e
3049+ finally:
3050+ logging.debug('Cleaning up %s' % tmpd)
3051+ shutil.rmtree(tmpd)
3052
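charm_helpers_sync.py reads a charm-helpers.yaml whose include entries may append per-module sync options after a `|` (the script's own example of an option is `inc=*`); extract_options() splits the entry and merges in any global options. That parsing can be sketched in isolation:

```python
# Sketch of the "module|opt1,opt2" include syntax handled by
# extract_options() and parse_sync_options() above, written for
# Python 3 (the script itself is Python 2: basestring, iteritems).
def parse_include(entry, global_options=None):
    """Split 'module|a,b' into ('module', ['a', 'b'] + global options)."""
    global_options = list(global_options or [])
    if '|' not in entry:
        return entry, global_options
    module, opts = entry.split('|', 1)
    return module, [o for o in opts.split(',') if o] + global_options
```

Note that the script as merged uses Python 2 constructs (`basestring`, `iteritems`, `except Exception, e`), so it must be run under python2 until ported.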
3053=== added file 'tests/10-deploy-and-upgrade'
3054--- tests/10-deploy-and-upgrade 1970-01-01 00:00:00 +0000
3055+++ tests/10-deploy-and-upgrade 2014-12-02 20:02:04 +0000
3056@@ -0,0 +1,78 @@
3057+#!/usr/bin/env python3
3058+
3059+import amulet
3060+import requests
3061+import unittest
3062+
3063+
3064+class TestDeployment(unittest.TestCase):
3065+ @classmethod
3066+ def setUpClass(cls):
3067+ cls.deployment = amulet.Deployment(series='trusty')
3068+
3069+ mw_config = { 'name': 'MariaDB Test'}
3070+
3071+ cls.deployment.add('mariadb')
3072+ cls.deployment.add('mediawiki')
3073+ cls.deployment.configure('mediawiki', mw_config)
3074+ cls.deployment.relate('mediawiki:db', 'mariadb:db')
3075+ cls.deployment.expose('mediawiki')
3076+
3077+
3078+ try:
3079+ cls.deployment.setup(timeout=1200)
3080+ cls.deployment.sentry.wait()
3081+ except amulet.helpers.TimeoutError:
3082+ amulet.raise_status(amulet.SKIP, msg="Environment wasn't stood up in time")
3083+ except:
3084+ raise
3085+
3086+ '''
3087+ test_relation: Verify the credentials being sent over the wire were valid
3088+ when attempting to verify the MariaDB Service Status
3089+ '''
3090+ def test_credentials(self):
3091+ dbunit = self.deployment.sentry.unit['mariadb/0']
3092+ db_relation = dbunit.relation('db', 'mediawiki:db')
3093+ db_ip = db_relation['host']
3094+ db_user = db_relation['user']
3095+ db_pass = db_relation['password']
3096+ ctmp = 'mysqladmin status -h {0} -u {1} --password={2}'
3097+ cmd = ctmp.format(db_ip, db_user, db_pass)
3098+
3099+ output, code = dbunit.run(cmd)
3100+ if code != 0:
3101+ message = 'Unable to get status of the mariadb server at %s' % db_ip
3102+ amulet.raise_status(amulet.FAIL, msg=message)
3103+
3104+ '''
3105+ test_wiki: Verify Mediawiki setup was successful with MariaDB. No page will
3106+ be available if setup did not complete
3107+ '''
3108+ def test_wiki(self):
3109+ wikiunit = self.deployment.sentry.unit['mediawiki/0']
3110+ wiki_url = "http://{}".format(wikiunit.info['public-address'])
3111+ response = requests.get(wiki_url)
3112+ response.raise_for_status()
3113+
3114+
3115+ def test_enterprise_eval(self):
3116+ self.deployment.configure('mariadb', {'enterprise-eula': True,
3117+ 'source': 'deb https://charlesbutler:foobarbaz@code.mariadb.com/mariadb-enterprise/10.0/repo/ubuntu trusty main'})
3118+
3119+ # Ensure the bintar was relocated
3120+ dbunit = self.deployment.sentry.unit['mariadb/0']
3121+
3122+ try:
3123+ dbunit.directory_stat('/usr/local/mysql')
3124+ amulet.raise_status(amulet.SKIP, 'bintar directory found, uncertain results ahead')
3125+ except:
3126+ # this is what we want to happen
3127+ pass
3128+
3129+ # re-run the test after in-place upgrade
3130+ self.test_credentials()
3131+
3132+
3133+if __name__ == '__main__':
3134+ unittest.main()
3135
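test_credentials above shells out to `mysqladmin status` on the unit and fails the test on a nonzero exit code. Outside amulet, the same check reduces to the following sketch (host, user, and password are placeholders, not real credentials):

```python
# Standalone sketch of the credential check in test_credentials above.
# Host, user and password here are placeholders, not real credentials.
import subprocess


def check_db_status(host, user, password):
    """Return True iff `mysqladmin status` succeeds against host."""
    cmd = ['mysqladmin', 'status',
           '-h', host, '-u', user, '--password=%s' % password]
    try:
        subprocess.check_output(cmd, stderr=subprocess.STDOUT)
        return True
    except (subprocess.CalledProcessError, OSError):
        # Nonzero exit (bad credentials / unreachable) or mysqladmin missing.
        return False
```

Passing the password on the command line exposes it to other local users via the process list; that is acceptable inside a throwaway test unit, but not a pattern for production scripts.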
3136=== removed file 'tests/10-deploy-test.py'
3137--- tests/10-deploy-test.py 2014-11-12 23:12:00 +0000
3138+++ tests/10-deploy-test.py 1970-01-01 00:00:00 +0000
3139@@ -1,90 +0,0 @@
3140-#!/usr/bin/python3
3141-
3142-# This amulet code is to test the mariadb charm.
3143-
3144-import amulet
3145-import requests
3146-
3147-# The number of seconds to wait for Juju to set up the environment.
3148-seconds = 1200
3149-
3150-# The mediawiki configuration to test.
3151-mediawiki_configuration = {
3152- 'name': 'MariaDB test'
3153-}
3154-
3155-d = amulet.Deployment()
3156-
3157-# Add the mediawiki charm to the deployment.
3158-d.add('mediawiki')
3159-# Add the MariaDB charm to the deployment.
3160-d.add('mariadb', charm='lp:~dbart/charms/trusty/mariadb/trunk')
3161-# Configure the mediawiki charm.
3162-d.configure('mediawiki', mediawiki_configuration)
3163-# Relate the mediawiki and mariadb charms.
3164-d.relate('mediawiki:db', 'mariadb:db')
3165-# Expose the open ports on mediawiki.
3166-d.expose('mediawiki')
3167-
3168-# Deploy the environment and wait for it to setup.
3169-try:
3170- d.setup(timeout=seconds)
3171- d.sentry.wait(seconds)
3172-except amulet.helpers.TimeoutError:
3173- message = 'The environment did not setup in %d seconds.' % seconds
3174- # The SKIP status enables skip or fail the test based on configuration.
3175- amulet.raise_status(amulet.SKIP, msg=message)
3176-except:
3177- raise
3178-
3179-# Get the sentry unit for mariadb.
3180-mariadb_unit = d.sentry.unit['mariadb/0']
3181-
3182-# Get the sentry unit for mediawiki.
3183-mediawiki_unit = d.sentry.unit['mediawiki/0']
3184-
3185-# Get the public address for the system running the mediawiki charm.
3186-mediawiki_address = mediawiki_unit.info['public-address']
3187-
3188-###############################################################################
3189-## Verify MariaDB
3190-###############################################################################
3191-# Verify that mediawiki was related to mariadb
3192-mediawiki_relation = mediawiki_unit.relation('db', 'mariadb:db')
3193-print('mediawiki relation to mariadb')
3194-for key, value in mediawiki_relation.items():
3195- print(key, value)
3196-# Verify that mariadb was related to mediawiki
3197-mariadb_relation = mariadb_unit.relation('db', 'mediawiki:db')
3198-print('mariadb relation to mediawiki')
3199-for key, value in mariadb_relation.items():
3200- print(key, value)
3201-# Get the db_host from the mediawiki relation to mariadb
3202-mariadb_ip = mariadb_relation['host']
3203-# Get the user from the mediawiki relation to mariadb
3204-mariadb_user = mariadb_relation['user']
3205-# Get the password from the mediawiki relation to mariadb
3206-mariadb_password = mariadb_relation['password']
3207-# Create the command to get the mariadb status with username and password.
3208-command = '/usr/local/mysql/bin/mysqladmin status -h {0} -u {1} --password={2}'.format(mariadb_ip,
3209- mariadb_user, mariadb_password)
3210-print(command)
3211-output, code = mariadb_unit.run(command)
3212-print(output)
3213-if code != 0:
3214- message = 'Unable to get the status of mariadb server at %s' % mariadb_ip
3215- amulet.raise_status(amulet.FAIL, msg=message)
3216-
3217-###############################################################################
3218-## Verify mediawiki
3219-###############################################################################
3220-# Create a URL string to the mediawiki server.
3221-mediawiki_url = 'http://%s' % mediawiki_address
3222-print(mediawiki_url)
3223-# Get the mediawiki url with the authentication for guest.
3224-response = requests.get(mediawiki_url)
3225-# Raise an exception if response is not 200 OK.
3226-response.raise_for_status()
3227-
3228-print('The MariaDB deploy test completed successfully.')
3229-
