Merge lp:~lazypower/charms/trusty/mariadb/replace-bintar-with-repository into lp:~dbart/charms/trusty/mariadb/trunk

Proposed by Charles Butler
Status: Merged
Approved by: Daniel Bartholomew
Approved revision: 26
Merged at revision: 20
Proposed branch: lp:~lazypower/charms/trusty/mariadb/replace-bintar-with-repository
Merge into: lp:~dbart/charms/trusty/mariadb/trunk
Diff against target: 3588 lines (+3005/-333)
24 files modified
ENTERPRISE-LICENSE.md (+72/-0)
README.md (+13/-1)
charm-helpers.yaml (+6/-0)
config.yaml (+17/-1)
hooks/config-changed (+70/-64)
hooks/install (+57/-177)
lib/charmhelpers/core/fstab.py (+118/-0)
lib/charmhelpers/core/hookenv.py (+540/-0)
lib/charmhelpers/core/host.py (+396/-0)
lib/charmhelpers/core/services/__init__.py (+2/-0)
lib/charmhelpers/core/services/base.py (+313/-0)
lib/charmhelpers/core/services/helpers.py (+243/-0)
lib/charmhelpers/core/sysctl.py (+34/-0)
lib/charmhelpers/core/templating.py (+52/-0)
lib/charmhelpers/fetch/__init__.py (+416/-0)
lib/charmhelpers/fetch/archiveurl.py (+145/-0)
lib/charmhelpers/fetch/bzrurl.py (+54/-0)
lib/charmhelpers/fetch/giturl.py (+48/-0)
lib/charmhelpers/payload/__init__.py (+1/-0)
lib/charmhelpers/payload/archive.py (+57/-0)
lib/charmhelpers/payload/execd.py (+50/-0)
scripts/charm_helpers_sync.py (+223/-0)
tests/10-deploy-and-upgrade (+78/-0)
tests/10-deploy-test.py (+0/-90)
To merge this branch: bzr merge lp:~lazypower/charms/trusty/mariadb/replace-bintar-with-repository
Reviewer: Daniel Bartholomew, Status: Approve
Review via email: mp+244779@code.launchpad.net

Description of the change

Replace bintar delivery with a pure PPA based approach
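The core of the change is swapping the hand-rolled bintar download for apt repository configuration driven by the charm's `source` and `key` options. As a rough sketch (assumed command shapes; the charm actually delegates to `charmhelpers.fetch.add_source`, whose exact mechanics may differ), the repository setup reduces to composing a few apt commands:

```python
# Hypothetical sketch of the PPA/repository-based delivery this branch
# introduces: build the apt commands implied by the charm's `source`
# and `key` config options. The command shapes are assumptions, not
# what charmhelpers.fetch necessarily runs verbatim.

def apt_commands(source, key=None):
    """Return the shell command lists needed to enable an apt source."""
    cmds = [['add-apt-repository', '--yes', source]]
    if key:
        # Keys like "0xcbcb082a1bb943db" would be fetched from a keyserver.
        cmds.append(['apt-key', 'adv', '--keyserver',
                     'keyserver.ubuntu.com', '--recv-keys', key])
    cmds.append(['apt-get', 'update'])
    return cmds

cmds = apt_commands(
    "deb http://mirror.jmu.edu/pub/mariadb/repo/10.0/ubuntu trusty main",
    key="0xcbcb082a1bb943db")
```

Because the source string is plain charm config, switching between community and enterprise repositories becomes a `juju set` away rather than a new charm revision.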

Revision history for this message
Daniel Bartholomew (dbart) wrote :

This looks good and works

review: Approve

Preview Diff

1=== added file 'ENTERPRISE-LICENSE.md'
2--- ENTERPRISE-LICENSE.md 1970-01-01 00:00:00 +0000
3+++ ENTERPRISE-LICENSE.md 2014-12-15 18:28:33 +0000
4@@ -0,0 +1,72 @@
5+# Evaluation Agreement
6+
7+
8+Our fee-bearing, annual subscriptions for MariaDB Enterprise, MariaDB Enterprise
9+ Cluster, MariaDB Galera Cluster, MariaDB MaxScale or other MariaDB products
10+ (“MariaDB Subscriptions”) includes access to: (a) software; and (b) services,
11+ such as software support, access to our customer portal and related product
12+ documentation, and the right to receive executable binaries of our software
13+ patches, updates and security fixes (“Services”).
14+
15+The purpose of this Evaluation Agreement is to provide you with access to a
16+MariaDB Subscription on an evaluation basis (an “Evaluation Subscription”).
17+Some of the differences between an Evaluation Subscription and a MariaDB
18+Subscription are described below.
19+
20+An Evaluation Subscription for MariaDB Enterprise, MariaDB Enterprise Cluster,
21+MariaDB Galera Cluster, MariaDB MaxScale entitles you to:
22+
23+- executable binary code
24+- patches
25+- bug fixes
26+- security updates
27+- the customer portal
28+- product documentation
29+- software trials for MONYog
30+- limited support from MariaDB's Sales Engineering team
31+
32+A full, annual MariaDB Subscription for MariaDB Enterprise or MariaDB Enterprise
33+ Cluster entitles you to:
34+
35+- executable binary code
36+- patches
37+- bug fixes
38+- security updates
39+- the customer portal
40+- product documentation
41+- Full access to MONYog Ultimate Monitor
42+- product roadmaps
43+- 24/7 help desk support from MariaDB’s Technical Support team
44+- Other items that may be added at MariaDB’s discretion
45+
46+While access to software components of an Evaluation Subscription are subject
47+to underlying applicable open source or proprietary license(s), as the case may
48+be (and this Evaluation Agreement does not limit or further restrict any open
49+ source license rights), access to any Services or non-open source software
50+ components is for the sole purpose of evaluating and testing
51+ (“Evaluation Purpose”) the suitability of a MariaDB Subscription for your own
52+ use for a defined time period and support level.
53+
54+Your right to access an Evaluation Subscription without charge is conditioned on
55+ the following: (a) you agree to the MariaDB Enterprise Terms and Conditions
56+ (the “Subscription Agreement”);
57+(b) you agree that this Evaluation Subscription does not grant any right or
58+license, express or implied, for the use of any MariaDB or third party trade
59+names, service marks or trademarks, including, without limitation, the right to
60+distribute any software using any such marks; and (c) if you use our Services
61+for any purpose other than Evaluation Purposes, you agree to pay MariaDB
62+per-unit Subscription Fee(s) pursuant to the Subscription Agreement. Using our
63+Services in ways that do not constitute an Evaluation Purpose include (but are
64+ not limited to) using Services in connection with Production Purposes or third
65+ parties, or as a complement or supplement to third party support services.
66+ Capitalized terms not defined in this Evaluation Agreement shall have the
67+ meaning provided in the Subscription Agreement, which is incorporated by
68+ reference in its entirety.
69+
70+By using any of our Services, you affirm that you have read, understood, and
71+agree to all of the terms and conditions of this Evaluation Agreement
72+(including the Subscription Agreement). If you are an individual acting on
73+behalf of an entity, you represent that you have the authority to enter into
74+this Evaluation Agreement on behalf of that entity. If you do not accept the
75+terms and conditions of this Evaluation Agreement, then you must not use any
76+of our Services.
77
78=== modified file 'README.md'
79--- README.md 2014-09-26 18:37:52 +0000
80+++ README.md 2014-12-15 18:28:33 +0000
81@@ -32,7 +32,19 @@
82 juju ssh mariadb/0
83 mysql -u root -p$(sudo cat /var/lib/mysql/mysql.passwd)
84
85-# Scale Out Usage
86+## To upgrade from Community to Enterprise Evaluation
87+
88+Once you have obtained a username and password from the [MariaDB Portal](http://mariadb.com),
89+a repository will be provided for your enterprise trial installation. You can enable
90+it in the charm with the following configuration:
91+
92+ juju set mariadb enterprise-eula=true source="deb https://username:password@code.mariadb.com/mariadb-enterprise/10.0/repo/ubuntu trusty main"
93+
94+This will perform an in-place binary upgrade on all of the MariaDB nodes from the
95+community edition to the Enterprise Evaluation. You must agree to all terms contained in
96+`ENTERPRISE-LICENSE.md` in the charm directory.
97+
98+# Scale Out Usage
99
100 ## Replication
101
102
103=== added file 'charm-helpers.yaml'
104--- charm-helpers.yaml 1970-01-01 00:00:00 +0000
105+++ charm-helpers.yaml 2014-12-15 18:28:33 +0000
106@@ -0,0 +1,6 @@
107+destination: lib/charmhelpers
108+branch: lp:charm-helpers
109+include:
110+ - core
111+ - fetch
112+ - payload
113
114=== modified file 'config.yaml'
115--- config.yaml 2014-09-25 20:29:40 +0000
116+++ config.yaml 2014-12-15 18:28:33 +0000
117@@ -100,4 +100,20 @@
118 rbd pool has been created, changing this value will not have any
119 effect (although it can be changed in ceph by manually configuring
120 your ceph cluster).
121-
122+ enterprise-eula:
123+ type: boolean
124+ default: false
125+ description: |
126+ I have read and agree to the ENTERPRISE TRIAL agreement, located
127+ in ENTERPRISE-LICENSE.md located in the charm, or on the web here:
128+ https://mariadb.com/about/legal/evaluation-agreement
129+ source:
130+ type: string
131+ default: "deb http://mirror.jmu.edu/pub/mariadb/repo/10.0/ubuntu trusty main"
132+ description: |
133+ Repository mirror string to install MariaDB from.
134+ key:
135+ type: string
136+ default: "0xcbcb082a1bb943db"
137+ description: |
138+ GPG Key used to verify apt packages.
139
140=== modified file 'hooks/config-changed'
141--- hooks/config-changed 2014-09-25 20:29:40 +0000
142+++ hooks/config-changed 2014-12-15 18:28:33 +0000
143@@ -1,6 +1,6 @@
144 #!/usr/bin/python
145
146-from subprocess import check_output,check_call, CalledProcessError, Popen, PIPE
147+from subprocess import check_output, check_call, CalledProcessError
148 import tempfile
149 import json
150 import re
151@@ -9,15 +9,30 @@
152 import sys
153 import platform
154 from string import upper
155+from subprocess import Popen, PIPE
156+
157+sys.path.insert(0, os.path.join(os.environ['CHARM_DIR'], 'lib'))
158+
159+from charmhelpers import fetch
160+
161+from charmhelpers.core import (
162+ hookenv,
163+ host,
164+)
165+
166+log = hookenv.log
167+config = hookenv.config()
168
169 num_re = re.compile('^[0-9]+$')
170+package = 'mysql-server'
171+
172
173 # There should be a library for this
174 def human_to_bytes(human):
175 if num_re.match(human):
176 return human
177- factors = { 'K' : 1024 , 'M' : 1048576, 'G' : 1073741824, 'T' : 1099511627776 }
178- modifier=human[-1]
179+ factors = {'K': 1024, 'M': 1048576, 'G': 1073741824, 'T': 1099511627776}
180+ modifier = human[-1]
181 if modifier in factors:
182 return int(human[:-1]) * factors[modifier]
183 if modifier == '%':
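The cleaned-up `human_to_bytes` helper above converts sizes like `4G` into bytes. A standalone sketch of it (the `%`-of-system-memory branch is omitted here since it depends on the host, and plain numeric strings are normalized to `int`, a small departure from the hook):

```python
import re

num_re = re.compile(r'^[0-9]+$')

def human_to_bytes(human):
    """Convert a size such as '512M' or '4G' into bytes.

    Sketch of the charm's helper; the '%' modifier (fraction of
    system RAM) is intentionally left out.
    """
    if num_re.match(human):
        return int(human)
    factors = {'K': 1024, 'M': 1048576, 'G': 1073741824, 'T': 1099511627776}
    modifier = human[-1]
    if modifier in factors:
        return int(human[:-1]) * factors[modifier]
    raise ValueError("unknown size modifier: %r" % modifier)
```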
184@@ -41,18 +56,17 @@
185 IS_32BIT_SYSTEM = True
186
187 if platform.machine() in ['armv7l']:
188- SYS_MEM_LIMIT = human_to_bytes('2700M') # experimentally determined
189+ SYS_MEM_LIMIT = human_to_bytes('2700M') # experimentally determined
190 else:
191 SYS_MEM_LIMIT = human_to_bytes('4G')
192
193 if IS_32BIT_SYSTEM:
194- check_call(['juju-log','-l','INFO','32bit system restrictions in play'])
195+ log("32bit system restrictions in play", "INFO")
196
197-configs=json.loads(check_output(['config-get','--format=json']))
198+configs = json.loads(check_output(['config-get','--format=json']))
199
200 def get_memtotal():
201 with open('/proc/meminfo') as meminfo_file:
202- meminfo = {}
203 for line in meminfo_file:
204 (key, mem) = line.split(':', 2)
205 if key == 'MemTotal':
206@@ -60,48 +74,40 @@
207 return '%s%s' % (mtot, upper(modifier[0]))
208
209
210-# There is preliminary code for mariadb, but switching
211-# from mariadb -> mysql fails badly, so it is disabled for now.
212-valid_flavors = ['distro']
213-
214-#remove_pkgs=[]
215-#apt_sources = []
216-#package = 'mariadb-server'
217-#
218-#series = check_output(['lsb_release','-cs'])
219-#
220-#for source in apt_sources:
221-# server = source.split('/')[0]
222-# if os.path.exists('keys/%s' % server):
223-# check_call(['apt-key','add','keys/%s' % server])
224-# else:
225-# check_call(['juju-log','-l','ERROR',
226-# 'No key for %s' % (server)])
227-# sys.exit(1)
228-# check_call(['add-apt-repository','-y','deb http://%s %s main' % (source, series)])
229-# check_call(['apt-get','update'])
230-#
231-#with open('/var/lib/mysql/mysql.passwd','r') as rpw:
232-# root_pass = rpw.read()
233-#
234-#dconf = Popen(['debconf-set-selections'], stdin=PIPE)
235-#dconf.stdin.write("%s %s/root_password password %s\n" % (package, package, root_pass))
236-#dconf.stdin.write("%s %s/root_password_again password %s\n" % (package, package, root_pass))
237-#dconf.stdin.write("%s-5.5 mysql-server/root_password password %s\n" % (package, root_pass))
238-#dconf.stdin.write("%s-5.5 mysql-server/root_password_again password %s\n" % (package, root_pass))
239-#dconf.communicate()
240-#dconf.wait()
241-#
242-#if len(remove_pkgs):
243-# check_call(['apt-get','-y','remove'] + remove_pkgs)
244-#check_call(['apt-get','-y','install','-qq',package])
245+
246+source = config['source']
247+accepted = config['enterprise-eula']
248+
249+
250+# set MariaDB Root Password
251+with open('/var/lib/mysql/mysql.passwd', 'r') as rpw:
252+ root_pass = rpw.read()
253+
254+
255+# preseed debconf with our admin password
256+dconf = Popen(['debconf-set-selections'], stdin=PIPE)
257+dconf.stdin.write("%s %s/root_password password %s\n" % (package, package, root_pass))
258+dconf.stdin.write("%s %s/root_password_again password %s\n" % (package, package, root_pass))
259+dconf.communicate()
260+dconf.wait()
261+
262+# assumption of mariadb packages being delivered from code.mariadb
263+if not accepted and "code.mariadb" in source:
264+ log('EULA not accepted - doing nothing', 'WARNING')
265+ host.service_stop('mysql')
266+else:
267+ fetch.add_source(source, config['key'])
268+ fetch.apt_update()
269+
270+ packages = ['mariadb-server', 'mariadb-client']
271+ fetch.apt_install(packages)
272
273 # smart-calc stuff in the configs
274 dataset_bytes = human_to_bytes(configs['dataset-size'])
275
276-check_call(['juju-log','-l','INFO','dataset size in bytes: %d' % dataset_bytes])
277+log('dataset size in bytes: %d' % dataset_bytes)
278
279-if configs['query-cache-size'] == -1 and configs['query-cache-type'] in ['ON','DEMAND']:
280+if configs['query-cache-size'] == -1 and configs['query-cache-type'] in ['ON', 'DEMAND']:
281 qcache_bytes = (dataset_bytes * 0.20)
282 qcache_bytes = int(qcache_bytes - (qcache_bytes % PAGE_SIZE))
283 configs['query-cache-size'] = qcache_bytes
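The "smart-calc" step above defaults the query cache to 20% of the configured dataset size, rounded down to a whole number of memory pages. A minimal sketch, assuming a 4 KiB `PAGE_SIZE` (the hook derives the real value from the system):

```python
def qcache_size(dataset_bytes, page_size=4096):
    """Default query cache: 20% of the dataset, rounded down so it is
    a multiple of the page size (mirrors the charm's smart-calc step)."""
    qcache_bytes = dataset_bytes * 0.20
    return int(qcache_bytes - (qcache_bytes % page_size))

# e.g. a 1 GiB dataset yields a page-aligned cache of roughly 205 MiB
size = qcache_size(1073741824)
```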
284@@ -109,20 +115,20 @@
285
286 # 5.5 allows the words, but not 5.1
287 if configs['query-cache-type'] == 'ON':
288- configs['query-cache-type']=1
289+ configs['query-cache-type'] = 1
290 elif configs['query-cache-type'] == 'DEMAND':
291- configs['query-cache-type']=2
292+ configs['query-cache-type'] = 2
293 else:
294- configs['query-cache-type']=0
295+ configs['query-cache-type'] = 0
296
297-preferred_engines=configs['preferred-storage-engine'].split(',')
298+preferred_engines = configs['preferred-storage-engine'].split(',')
299 chunk_size = int(dataset_bytes / len(preferred_engines))
300-configs['innodb-flush-log-at-trx-commit']=1
301-configs['sync-binlog']=1
302+configs['innodb-flush-log-at-trx-commit'] = 1
303+configs['sync-binlog'] = 1
304 if 'InnoDB' in preferred_engines:
305 configs['innodb-buffer-pool-size'] = chunk_size
306 if configs['tuning-level'] == 'fast':
307- configs['innodb-flush-log-at-trx-commit']=2
308+ configs['innodb-flush-log-at-trx-commit'] = 2
309 else:
310 configs['innodb-buffer-pool-size'] = 0
311
312@@ -135,14 +141,14 @@
313 configs['key-buffer'] = human_to_bytes('8M')
314
315 if configs['tuning-level'] == 'fast':
316- configs['sync-binlog']=0
317+ configs['sync-binlog'] = 0
318
319 if configs['max-connections'] == -1:
320 configs['max-connections'] = '# max_connections = ?'
321 else:
322 configs['max-connections'] = 'max_connections = %s' % configs['max-connections']
323
324-template="""
325+template = """
326 ######################################
327 #
328 #
329@@ -160,7 +166,7 @@
330 # You can copy this to one of:
331 # - "/etc/mysql/my.cnf" to set global options,
332 # - "~/.my.cnf" to set user-specific options.
333-#
334+#
335 # One can use all long options that the program supports.
336 # Run program with --help to get a list of available options and with
337 # --print-defaults to see which it would actually understand and use.
338@@ -299,38 +305,38 @@
339 binlog_template = """
340 [mysqld]
341 server_id = %s
342-log_bin = /usr/local/mysql/data/mysql-bin.log
343+log_bin = /var/log/mysql/mysql-bin.log
344 binlog_format = %s
345 """
346
347 binlog_cnf = binlog_template % (unit_id,
348- configs.get('binlog-format','MIXED'))
349+ configs.get('binlog-format', 'MIXED'))
350
351-mycnf=template % configs
352+mycnf = template % configs
353
354 targets = {'/etc/my.cnf.d/binlog.cnf': binlog_cnf,
355 '/etc/my.cnf': mycnf,
356 }
357
358 need_restart = False
359-for target,content in targets.iteritems():
360- tdir = os.path.dirname(target)
361+for target, content in targets.iteritems():
362+ tdir = os.path.dirname(target)
363 if len(content) == 0 and os.path.exists(target):
364 os.unlink(target)
365 need_restart = True
366 continue
367- with tempfile.NamedTemporaryFile(mode='w',dir=tdir,delete=False) as t:
368+ with tempfile.NamedTemporaryFile(mode='w', dir=tdir, delete=False) as t:
369 t.write(content)
370 t.flush()
371 tmd5 = hashlib.md5()
372 tmd5.update(content)
373 if os.path.exists(target):
374- with open(target,'r') as old:
375- md5=hashlib.md5()
376+ with open(target, 'r') as old:
377+ md5 = hashlib.md5()
378 md5.update(old.read())
379 oldhash = md5.digest()
380 if oldhash != tmd5.digest():
381- os.rename(target,'%s.%s' % (target, md5.hexdigest()))
382+ os.rename(target, '%s.%s' % (target, md5.hexdigest()))
383 need_restart = True
384 else:
385 need_restart = True
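The loop above only restarts MySQL when a rendered config file actually changed, backing up the previous file under its md5. A trimmed sketch of that write-if-changed pattern (the empty-content deletion branch and the tempfile staging are left out for brevity):

```python
import hashlib
import os
import tempfile

def write_if_changed(target, content):
    """Write `content` to `target` only when it differs from what is
    on disk, renaming the old file to `<target>.<md5>` as a backup.
    Returns True when the caller should restart the service."""
    if os.path.exists(target):
        with open(target) as old:
            oldhash = hashlib.md5(old.read().encode()).hexdigest()
        if oldhash == hashlib.md5(content.encode()).hexdigest():
            return False              # identical content: nothing to do
        os.rename(target, '%s.%s' % (target, oldhash))
    with open(target, 'w') as f:
        f.write(content)
    return True

tmpdir = tempfile.mkdtemp()
cnf = os.path.join(tmpdir, 'my.cnf')
first = write_if_changed(cnf, '[mysqld]\n')   # new file: restart needed
second = write_if_changed(cnf, '[mysqld]\n')  # unchanged: no restart
```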
386@@ -340,5 +346,5 @@
387 try:
388 check_call(['service', 'mysql', 'restart'])
389 except CalledProcessError:
390- check_call(['juju-log', '-l', 'INFO', 'Restart failed, trying again'])
391+ log('Restart failed, trying again', 'INFO')
392 check_call(['service', 'mysql', 'restart'])
393
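The config-changed hook gates installation on the EULA: enterprise packages are served from `code.mariadb.com`, so an enterprise source without `enterprise-eula=true` stops MySQL rather than installing. The decision can be sketched as a pure function (the `source` strings below are examples; the enterprise URL's credentials are placeholders):

```python
def eula_action(source, accepted):
    """Decide what config-changed should do for a given apt source.

    Sketch of the gating logic in the hook: an enterprise repository
    (hosted at code.mariadb.com) with an unaccepted EULA means the
    service is stopped instead of packages being installed."""
    if not accepted and "code.mariadb" in source:
        return "stop"
    return "install"

enterprise = ("deb https://user:pw@code.mariadb.com/"
              "mariadb-enterprise/10.0/repo/ubuntu trusty main")
community = "deb http://mirror.jmu.edu/pub/mariadb/repo/10.0/ubuntu trusty main"
```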
394=== modified file 'hooks/install'
395--- hooks/install 2014-11-12 18:30:06 +0000
396+++ hooks/install 2014-12-15 18:28:33 +0000
397@@ -1,177 +1,57 @@
398-#!/bin/bash
399-#===============================================================================
400-#
401-# FILE: install
402-#
403-# DESCRIPTION: Install hook for MariaDB-bintar charm
404-#
405-# AUTHOR: Daniel Bartholomew <dbart@mariadb.com>
406-# ORGANIZATION: MariaDB
407-# CREATED: 09/18/2014 10:36
408-# REVISION: 003
409-#===============================================================================
410-
411-# mbruzek: we recommend set -e so the script stops when an error is encountered.
412-set -ex
413-
414-#-------------------------------------------------------------------------------
415-# Variables
416-#-------------------------------------------------------------------------------
417-
418-TREE="10.0" # MariaDB tree to use
419-REV="4416" # Revision in the tree to use
420-VER="10.0.14" # MariaDB version number
421-ARCH="$(uname -p)" # MariaDB CPU architechture
422-LOCATION="http://bb01.mariadb.net/10.0" # Where the builds are
423-PASSFILE="/var/lib/mysql/mysql.passwd" # Password file for mysql "root" user
424-
425-#-------------------------------------------------------------------------------
426-# Main Script
427-#-------------------------------------------------------------------------------
428-
429-# Install necessary packages
430-apt-get update -qq
431-apt-get -y install uuid pwgen dnsutils libaio1 python-dev python-pip libssl-dev
432-
433-# Install MariaDB under /usr/local/
434-cd /usr/local
435-if [ ! -d "bintar_${TREE}_rev${REV}" ]; then
436- mkdir bintar_${TREE}_rev${REV}
437-fi
438-
439-cd bintar_${TREE}_rev${REV}
440-# If the file exists in the charm directory
441-if [ -f ${CHARM_DIR}/files/mariadb-${VER}-linux-${ARCH}.tar.gz ]; then
442- # if the file also exists in the current directory
443- if [ -f "mariadb-${VER}-linux-${ARCH}.tar.gz" ]; then
444- echo "File is already in place."
445- else
446- # Copy the file from the charm directory.
447- cp -v ${CHARM_DIR}/files/mariadb-${VER}-linux-${ARCH}.tar.gz .
448- fi
449-else
450- # Download the bintar using -nv to avoid printing progress to the log files.
451- wget -Nnv ${LOCATION}/bintar_${TREE}_rev${REV}/mariadb-${VER}-linux-${ARCH}.tar.gz
452- wget -Nnv ${LOCATION}/bintar_${TREE}_rev${REV}/mariadb-${VER}-linux-${ARCH}.tar.gz.sha1sum
453- # check the sha1sum to make sure the file we downloaded is good
454- if sha1sum -c --quiet mariadb-${VER}-linux-${ARCH}.tar.gz.sha1sum;then
455- echo "file is good"
456- else
457- echo "file is corrupt, replace"
458- exit 1
459- fi
460-fi
461-
462-# Expand the bintar to /usr/local/bintar_${TREE}_rev${REV}/
463-tar zxf mariadb-${VER}-linux-${ARCH}.tar.gz
464-
465-cd /usr/local
466-
467-# If the mysql symlink already exists, remove it so we know we're pointing at
468-# the version we just untarred
469-if [ -e "/usr/local/mysql" ]; then
470- rm /usr/local/mysql
471-fi
472-
473-# Create a symlink to the real MariaDB directory.
474-# The symlink is named "mysql" for compatibility.
475-ln -s bintar_${TREE}_rev${REV}/mariadb-${VER}-linux-${ARCH} mysql
476-
477-# Check if the lib dir has been added to /etc/ld.so.conf and if not, add it,
478-# then run ldconfig to apply the change
479-if grep -q -e "/usr/local/mysql/lib" /etc/ld.so.conf; then
480- echo "ld.so.conf entry already exists"
481-else
482- echo "/usr/local/mysql/lib" >> /etc/ld.so.conf
483-fi
484-ldconfig
485-
486-cd /usr/local/mysql
487-
488-# Check if the "mysql" group has already been created. If not, create it.
489-if grep -q -e "^mysql" /etc/group; then
490- echo "Group mysql already created."
491-else
492- groupadd mysql
493-fi
494-
495-# Check if the user has already been created. If not, create it.
496-if grep -q -e "^mysql" /etc/passwd; then
497- echo "User mysql already created."
498-else
499- useradd -r -g mysql mysql
500-fi
501-
502-# Install the default mysql database and privilege tables. If they already
503-# exist, the script will not overwrite them, so it's always safe to run.
504-# It's also always safe to chown the directories.
505-./scripts/mysql_install_db --user=mysql
506-chown -R root:root .
507-chown -R mysql data
508-
509-# Add /usr/local/mysql/bin to the PATH
510-if grep -q -e '/usr/local/mysql/bin:$PATH' /etc/profile; then
511- echo "/etc/profile PATH entry already exists."
512-else
513- echo 'export PATH=/usr/local/mysql/bin:$PATH' >> /etc/profile
514-fi
515-
516-# Add /usr/local/mysql/man to the MANPATH
517-if grep -q -e '/usr/local/mysql/man:$MANPATH' /etc/profile; then
518- echo "/etc/profile MANPATH entry already exists."
519-else
520- echo 'export MANPATH=/usr/local/mysql/man:$MANPATH' >> /etc/profile
521-fi
522-
523-# Apply the above changes to PATH and MANPATH
524-source /etc/profile
525-
526-# Add init script to /etc/init.d/, removing any existing version, if it exists
527-if [ -f "/etc/init.d/mysql" ]; then
528- rm /etc/init.d/mysql
529-fi
530-cp support-files/mysql.server /etc/init.d/mysql
531-chmod 755 /etc/init.d/mysql
532-update-rc.d mysql defaults
533-
534-
535-# Start MariaDB so we can set the MariaDB root user password
536-/etc/init.d/mysql start
537-
538-# If there is no existing ${PASSFILE}, use the UUID command to generate a new
539-# MariaDB root user password and store it in the ${PASSFILE}
540-if ! [ -f ${PASSFILE} ] ; then
541- mkdir -p /var/lib/mysql
542- touch ${PASSFILE}
543-fi
544-chmod 0600 ${PASSFILE}
545-if ! [ -s ${PASSFILE} ] ; then
546- uuid > ${PASSFILE}
547-fi
548-
549-# if the root password is not set, set it
550-if mysql -u root -e "quit";then
551- mysqladmin -u root password "$(cat ${PASSFILE})"
552-fi
553-
554-# if the password in ${PASSFILE} doesn't work, we have a problem
555-if ! mysql -u root -p$(cat ${PASSFILE}) -e "quit";then
556- echo "+ The password in ${PASSFILE} doesn't work."
557- exit 1
558-else
559- echo "+ Password tested, it works!"
560-fi
561-
562-# Create /etc/my.cnf.d, if it doesn't exist
563-if [ ! -d "/etc/my.cnf.d" ]; then
564- mkdir -p /etc/my.cnf.d
565-fi
566-
567-# Install python MySQLdb module
568-pip install MySQL-python
569-
570-# As the last step of the install process, stop MariaDB (the start trigger
571-# handles starting MariaDB).
572-/etc/init.d/mysql stop
573-
574-
575+#!/usr/bin/env python
576+
577+import os
578+import sys
579+import uuid
580+
581+sys.path.insert(0, os.path.join(os.environ['CHARM_DIR'], 'lib'))
582+
583+from charmhelpers import fetch
584+from charmhelpers.core import hookenv
585+
586+hooks = hookenv.Hooks()
587+log = hookenv.log
588+config = hookenv.config()
589+PASSFILE = "/var/lib/mysql/mysql.passwd"
590+
591+''' Do all preinstallation work here, as the app is installed in config-changed
592+ hook to handle repository updates, and code de-duplication '''
593+@hooks.hook('install')
594+def install():
595+ fetch.apt_update()
596+
597+ packages = ['debconf-utils',
598+ 'python-mysqldb',
599+ 'uuid',
600+ 'pwgen',
601+ 'dnsutils']
602+ fetch.apt_install(packages)
603+ admin_maria()
604+
605+def admin_maria():
606+ # If there is no existing ${PASSFILE}, use the UUID command to generate a
607+ # new MariaDB root user password and store it in the ${PASSFILE}
608+ varpath = os.path.join(os.path.sep, 'var', 'lib', 'mysql')
609+ if not os.path.exists(PASSFILE):
610+ try:
611+ os.makedirs(varpath)
612+ with open(PASSFILE, 'a'):
613+ os.utime(PASSFILE, None)
614+ os.chmod(PASSFILE, 0600)
615+ except:
616+ pass
617+ # Touch the passfile
618+
619+ with open(PASSFILE, 'a') as fd:
620+ fd.seek(0, os.SEEK_END)
621+ if fd.tell() == 0:
622+ fd.seek(0)
623+ fd.write(str(uuid.uuid4()))
624+ etc_path = os.path.join(os.path.sep, 'etc', 'my.cnf.d')
625+ if not os.path.exists(etc_path):
626+ os.makedirs(os.path.join(os.path.sep, 'etc', 'my.cnf.d'))
627+
628+
629+if __name__ == "__main__":
630+ # execute a hook based on the name the program is called by
631+ hooks.execute(sys.argv)
632
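The rewritten install hook's `admin_maria` step creates the root password file once and leaves it untouched on re-runs. A self-contained sketch of that idempotent behavior (using a temp directory instead of `/var/lib/mysql`, and without the hook's broad `except` guard):

```python
import os
import tempfile
import uuid

def ensure_passfile(passfile):
    """Create the MariaDB root password file if needed and write a
    UUID password only when the file is empty, so repeated hook runs
    never rotate the password. Returns the current password."""
    dirpath = os.path.dirname(passfile)
    if not os.path.isdir(dirpath):
        os.makedirs(dirpath)
    if not os.path.exists(passfile):
        open(passfile, 'a').close()
        os.chmod(passfile, 0o600)     # root password: keep it private
    with open(passfile, 'r+') as fd:
        fd.seek(0, os.SEEK_END)
        if fd.tell() == 0:            # only seed a password once
            fd.write(str(uuid.uuid4()))
    with open(passfile) as fd:
        return fd.read()

passfile = os.path.join(tempfile.mkdtemp(), 'mysql', 'mysql.passwd')
first = ensure_passfile(passfile)
second = ensure_passfile(passfile)    # idempotent: same password back
```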
633=== added directory 'lib/charmhelpers'
634=== added file 'lib/charmhelpers/__init__.py'
635=== added directory 'lib/charmhelpers/core'
636=== added file 'lib/charmhelpers/core/__init__.py'
637=== added file 'lib/charmhelpers/core/fstab.py'
638--- lib/charmhelpers/core/fstab.py 1970-01-01 00:00:00 +0000
639+++ lib/charmhelpers/core/fstab.py 2014-12-15 18:28:33 +0000
640@@ -0,0 +1,118 @@
641+#!/usr/bin/env python
642+# -*- coding: utf-8 -*-
643+
644+__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
645+
646+import io
647+import os
648+
649+
650+class Fstab(io.FileIO):
651+ """This class extends file in order to implement a file reader/writer
652+ for file `/etc/fstab`
653+ """
654+
655+ class Entry(object):
656+ """Entry class represents a non-comment line on the `/etc/fstab` file
657+ """
658+ def __init__(self, device, mountpoint, filesystem,
659+ options, d=0, p=0):
660+ self.device = device
661+ self.mountpoint = mountpoint
662+ self.filesystem = filesystem
663+
664+ if not options:
665+ options = "defaults"
666+
667+ self.options = options
668+ self.d = int(d)
669+ self.p = int(p)
670+
671+ def __eq__(self, o):
672+ return str(self) == str(o)
673+
674+ def __str__(self):
675+ return "{} {} {} {} {} {}".format(self.device,
676+ self.mountpoint,
677+ self.filesystem,
678+ self.options,
679+ self.d,
680+ self.p)
681+
682+ DEFAULT_PATH = os.path.join(os.path.sep, 'etc', 'fstab')
683+
684+ def __init__(self, path=None):
685+ if path:
686+ self._path = path
687+ else:
688+ self._path = self.DEFAULT_PATH
689+ super(Fstab, self).__init__(self._path, 'rb+')
690+
691+ def _hydrate_entry(self, line):
692+ # NOTE: use split with no arguments to split on any
693+ # whitespace including tabs
694+ return Fstab.Entry(*filter(
695+ lambda x: x not in ('', None),
696+ line.strip("\n").split()))
697+
698+ @property
699+ def entries(self):
700+ self.seek(0)
701+ for line in self.readlines():
702+ line = line.decode('us-ascii')
703+ try:
704+ if line.strip() and not line.startswith("#"):
705+ yield self._hydrate_entry(line)
706+ except ValueError:
707+ pass
708+
709+ def get_entry_by_attr(self, attr, value):
710+ for entry in self.entries:
711+ e_attr = getattr(entry, attr)
712+ if e_attr == value:
713+ return entry
714+ return None
715+
716+ def add_entry(self, entry):
717+ if self.get_entry_by_attr('device', entry.device):
718+ return False
719+
720+ self.write((str(entry) + '\n').encode('us-ascii'))
721+ self.truncate()
722+ return entry
723+
724+ def remove_entry(self, entry):
725+ self.seek(0)
726+
727+ lines = [l.decode('us-ascii') for l in self.readlines()]
728+
729+ found = False
730+ for index, line in enumerate(lines):
731+ if not line.startswith("#"):
732+ if self._hydrate_entry(line) == entry:
733+ found = True
734+ break
735+
736+ if not found:
737+ return False
738+
739+ lines.remove(line)
740+
741+ self.seek(0)
742+ self.write(''.join(lines).encode('us-ascii'))
743+ self.truncate()
744+ return True
745+
746+ @classmethod
747+ def remove_by_mountpoint(cls, mountpoint, path=None):
748+ fstab = cls(path=path)
749+ entry = fstab.get_entry_by_attr('mountpoint', mountpoint)
750+ if entry:
751+ return fstab.remove_entry(entry)
752+ return False
753+
754+ @classmethod
755+ def add(cls, device, mountpoint, filesystem, options=None, path=None):
756+ return cls(path=path).add_entry(Fstab.Entry(device,
757+ mountpoint, filesystem,
758+ options=options))
759
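The `Fstab` helper above round-trips `/etc/fstab` lines through `Entry.__str__` and `_hydrate_entry`. A tiny sketch of that serialization (standalone functions rather than the class, with a hypothetical device and mountpoint):

```python
def format_entry(device, mountpoint, filesystem, options=None, d=0, p=0):
    """Render one /etc/fstab line the way Fstab.Entry.__str__ does,
    defaulting missing options to "defaults"."""
    return "{} {} {} {} {} {}".format(device, mountpoint, filesystem,
                                      options or "defaults", d, p)

def parse_entry(line):
    """Split a non-comment fstab line on any whitespace (including
    tabs), like Fstab._hydrate_entry."""
    return [field for field in line.strip("\n").split() if field]

line = format_entry("/dev/vdb", "/var/lib/mysql", "ext4")
```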
760=== added file 'lib/charmhelpers/core/hookenv.py'
761--- lib/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000
762+++ lib/charmhelpers/core/hookenv.py 2014-12-15 18:28:33 +0000
763@@ -0,0 +1,540 @@
764+"Interactions with the Juju environment"
765+# Copyright 2013 Canonical Ltd.
766+#
767+# Authors:
768+# Charm Helpers Developers <juju@lists.ubuntu.com>
769+
770+import os
771+import json
772+import yaml
773+import subprocess
774+import sys
775+from subprocess import CalledProcessError
776+
777+import six
778+if not six.PY3:
779+ from UserDict import UserDict
780+else:
781+ from collections import UserDict
782+
783+CRITICAL = "CRITICAL"
784+ERROR = "ERROR"
785+WARNING = "WARNING"
786+INFO = "INFO"
787+DEBUG = "DEBUG"
788+MARKER = object()
789+
790+cache = {}
791+
792+
793+def cached(func):
794+ """Cache return values for multiple executions of func + args
795+
796+ For example::
797+
798+ @cached
799+ def unit_get(attribute):
800+ pass
801+
802+ unit_get('test')
803+
804+ will cache the result of unit_get + 'test' for future calls.
805+ """
806+ def wrapper(*args, **kwargs):
807+ global cache
808+ key = str((func, args, kwargs))
809+ try:
810+ return cache[key]
811+ except KeyError:
812+ res = func(*args, **kwargs)
813+ cache[key] = res
814+ return res
815+ return wrapper
816+
817+
818+def flush(key):
819+ """Flushes any entries from function cache where the
820+ key is found in the function+args """
821+ flush_list = []
822+ for item in cache:
823+ if key in item:
824+ flush_list.append(item)
825+ for item in flush_list:
826+ del cache[item]
827+
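The `cached`/`flush` pair above memoizes expensive Juju CLI calls (like `unit_get`) per hook invocation, keyed on the function plus its arguments. A compact sketch of the same pattern, with a counter to make the caching observable:

```python
cache = {}

def cached(func):
    """Memoize func by (func, args, kwargs), as hookenv.cached does."""
    def wrapper(*args, **kwargs):
        key = str((func, args, kwargs))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]
    return wrapper

calls = []

@cached
def unit_get(attribute):
    calls.append(attribute)        # side effect to observe caching
    return "value-for-" + attribute

unit_get('test')
unit_get('test')                   # second call served from the cache
```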
828+
829+def log(message, level=None):
830+ """Write a message to the juju log"""
831+ command = ['juju-log']
832+ if level:
833+ command += ['-l', level]
834+ command += [message]
835+ subprocess.call(command)
836+
837+
838+class Serializable(UserDict):
839+ """Wrapper, an object that can be serialized to yaml or json"""
840+
841+ def __init__(self, obj):
842+ # wrap the object
843+ UserDict.__init__(self)
844+ self.data = obj
845+
846+ def __getattr__(self, attr):
847+ # See if this object has attribute.
848+ if attr in ("json", "yaml", "data"):
849+ return self.__dict__[attr]
850+ # Check for attribute in wrapped object.
851+ got = getattr(self.data, attr, MARKER)
852+ if got is not MARKER:
853+ return got
854+ # Proxy to the wrapped object via dict interface.
855+ try:
856+ return self.data[attr]
857+ except KeyError:
858+ raise AttributeError(attr)
859+
860+ def __getstate__(self):
861+ # Pickle as a standard dictionary.
862+ return self.data
863+
864+ def __setstate__(self, state):
865+ # Unpickle into our wrapper.
866+ self.data = state
867+
868+ def json(self):
869+ """Serialize the object to json"""
870+ return json.dumps(self.data)
871+
872+ def yaml(self):
873+ """Serialize the object to yaml"""
874+ return yaml.dump(self.data)
875+
876+
877+def execution_environment():
878+ """A convenient bundling of the current execution context"""
879+ context = {}
880+ context['conf'] = config()
881+ if relation_id():
882+ context['reltype'] = relation_type()
883+ context['relid'] = relation_id()
884+ context['rel'] = relation_get()
885+ context['unit'] = local_unit()
886+ context['rels'] = relations()
887+ context['env'] = os.environ
888+ return context
889+
890+
891+def in_relation_hook():
892+ """Determine whether we're running in a relation hook"""
893+ return 'JUJU_RELATION' in os.environ
894+
895+
896+def relation_type():
897+ """The scope for the current relation hook"""
898+ return os.environ.get('JUJU_RELATION', None)
899+
900+
901+def relation_id():
902+ """The relation ID for the current relation hook"""
903+ return os.environ.get('JUJU_RELATION_ID', None)
904+
905+
906+def local_unit():
907+ """Local unit ID"""
908+ return os.environ['JUJU_UNIT_NAME']
909+
910+
911+def remote_unit():
912+ """The remote unit for the current relation hook"""
913+ return os.environ['JUJU_REMOTE_UNIT']
914+
915+
916+def service_name():
 917+ """The name of the service group this unit belongs to"""
918+ return local_unit().split('/')[0]
919+
920+
921+def hook_name():
922+ """The name of the currently executing hook"""
923+ return os.path.basename(sys.argv[0])
924+
925+
926+class Config(dict):
927+ """A dictionary representation of the charm's config.yaml, with some
928+ extra features:
929+
930+ - See which values in the dictionary have changed since the previous hook.
931+ - For values that have changed, see what the previous value was.
932+ - Store arbitrary data for use in a later hook.
933+
934+ NOTE: Do not instantiate this object directly - instead call
935+ ``hookenv.config()``, which will return an instance of :class:`Config`.
936+
937+ Example usage::
938+
939+ >>> # inside a hook
940+ >>> from charmhelpers.core import hookenv
941+ >>> config = hookenv.config()
942+ >>> config['foo']
943+ 'bar'
944+ >>> # store a new key/value for later use
945+ >>> config['mykey'] = 'myval'
946+
947+
948+ >>> # user runs `juju set mycharm foo=baz`
949+ >>> # now we're inside subsequent config-changed hook
950+ >>> config = hookenv.config()
951+ >>> config['foo']
952+ 'baz'
953+ >>> # test to see if this val has changed since last hook
954+ >>> config.changed('foo')
955+ True
956+ >>> # what was the previous value?
957+ >>> config.previous('foo')
958+ 'bar'
959+ >>> # keys/values that we add are preserved across hooks
960+ >>> config['mykey']
961+ 'myval'
962+
963+ """
964+ CONFIG_FILE_NAME = '.juju-persistent-config'
965+
966+ def __init__(self, *args, **kw):
967+ super(Config, self).__init__(*args, **kw)
968+ self.implicit_save = True
969+ self._prev_dict = None
970+ self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
971+ if os.path.exists(self.path):
972+ self.load_previous()
973+
974+ def __getitem__(self, key):
975+ """For regular dict lookups, check the current juju config first,
976+ then the previous (saved) copy. This ensures that user-saved values
977+ will be returned by a dict lookup.
978+
979+ """
980+ try:
981+ return dict.__getitem__(self, key)
982+ except KeyError:
983+ return (self._prev_dict or {})[key]
984+
985+ def keys(self):
986+ prev_keys = []
987+ if self._prev_dict is not None:
988+ prev_keys = self._prev_dict.keys()
989+ return list(set(prev_keys + list(dict.keys(self))))
990+
991+ def load_previous(self, path=None):
992+ """Load previous copy of config from disk.
993+
994+ In normal usage you don't need to call this method directly - it
995+ is called automatically at object initialization.
996+
997+ :param path:
998+
999+ File path from which to load the previous config. If `None`,
1000+ config is loaded from the default location. If `path` is
1001+ specified, subsequent `save()` calls will write to the same
1002+ path.
1003+
1004+ """
1005+ self.path = path or self.path
1006+ with open(self.path) as f:
1007+ self._prev_dict = json.load(f)
1008+
1009+ def changed(self, key):
1010+ """Return True if the current value for this key is different from
1011+ the previous value.
1012+
1013+ """
1014+ if self._prev_dict is None:
1015+ return True
1016+ return self.previous(key) != self.get(key)
1017+
1018+ def previous(self, key):
1019+ """Return previous value for this key, or None if there
1020+ is no previous value.
1021+
1022+ """
1023+ if self._prev_dict:
1024+ return self._prev_dict.get(key)
1025+ return None
1026+
1027+ def save(self):
1028+ """Save this config to disk.
1029+
1030+ If the charm is using the :mod:`Services Framework <services.base>`
1031+ or :meth:'@hook <Hooks.hook>' decorator, this
1032+ is called automatically at the end of successful hook execution.
1033+ Otherwise, it should be called directly by user code.
1034+
1035+ To disable automatic saves, set ``implicit_save=False`` on this
1036+ instance.
1037+
1038+ """
1039+ if self._prev_dict:
1040+ for k, v in six.iteritems(self._prev_dict):
1041+ if k not in self:
1042+ self[k] = v
1043+ with open(self.path, 'w') as f:
1044+ json.dump(self, f)
1045+
1046+
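The change-tracking semantics documented in the `Config` docstring above can be exercised standalone. The sketch below is a simplified stand-in (the `MiniConfig` name and its reduced method set are hypothetical, not the charmhelpers implementation), mirroring the `previous()`/`changed()` logic and the carry-forward behaviour of `save()`:

```python
import json
import os
import tempfile


class MiniConfig(dict):
    """Simplified sketch of Config's previous/changed semantics."""

    def __init__(self, data, path):
        super(MiniConfig, self).__init__(data)
        self.path = path
        self._prev_dict = None
        if os.path.exists(path):
            with open(path) as f:
                self._prev_dict = json.load(f)

    def __getitem__(self, key):
        # Current juju config first, then the previous (saved) copy.
        try:
            return dict.__getitem__(self, key)
        except KeyError:
            return (self._prev_dict or {})[key]

    def previous(self, key):
        return self._prev_dict.get(key) if self._prev_dict else None

    def changed(self, key):
        if self._prev_dict is None:
            return True
        return self.previous(key) != self.get(key)

    def save(self):
        # Carry previously stored keys forward, as the real save() does.
        if self._prev_dict:
            for k, v in self._prev_dict.items():
                if k not in self:
                    self[k] = v
        with open(self.path, 'w') as f:
            json.dump(self, f)


path = os.path.join(tempfile.mkdtemp(), '.juju-persistent-config')

cfg = MiniConfig({'foo': 'bar'}, path)   # first hook run
cfg['mykey'] = 'myval'                   # store extra data for later hooks
cfg.save()

cfg = MiniConfig({'foo': 'baz'}, path)   # after `juju set mycharm foo=baz`
```

After the second construction, `cfg.changed('foo')` is true, `cfg.previous('foo')` returns `'bar'`, and the user-stored `'mykey'` is still readable via the fallback lookup.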
1047+@cached
1048+def config(scope=None):
1049+ """Juju charm configuration"""
1050+ config_cmd_line = ['config-get']
1051+ if scope is not None:
1052+ config_cmd_line.append(scope)
1053+ config_cmd_line.append('--format=json')
1054+ try:
1055+ config_data = json.loads(
1056+ subprocess.check_output(config_cmd_line).decode('UTF-8'))
1057+ if scope is not None:
1058+ return config_data
1059+ return Config(config_data)
1060+ except ValueError:
1061+ return None
1062+
1063+
1064+@cached
1065+def relation_get(attribute=None, unit=None, rid=None):
1066+ """Get relation information"""
1067+ _args = ['relation-get', '--format=json']
1068+ if rid:
1069+ _args.append('-r')
1070+ _args.append(rid)
1071+ _args.append(attribute or '-')
1072+ if unit:
1073+ _args.append(unit)
1074+ try:
1075+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
1076+ except ValueError:
1077+ return None
1078+ except CalledProcessError as e:
1079+ if e.returncode == 2:
1080+ return None
1081+ raise
1082+
1083+
1084+def relation_set(relation_id=None, relation_settings=None, **kwargs):
1085+ """Set relation information for the current unit"""
1086+ relation_settings = relation_settings if relation_settings else {}
1087+ relation_cmd_line = ['relation-set']
1088+ if relation_id is not None:
1089+ relation_cmd_line.extend(('-r', relation_id))
1090+ for k, v in (list(relation_settings.items()) + list(kwargs.items())):
1091+ if v is None:
1092+ relation_cmd_line.append('{}='.format(k))
1093+ else:
1094+ relation_cmd_line.append('{}={}'.format(k, v))
1095+ subprocess.check_call(relation_cmd_line)
1096+ # Flush cache of any relation-gets for local unit
1097+ flush(local_unit())
1098+
1099+
1100+@cached
1101+def relation_ids(reltype=None):
1102+ """A list of relation_ids"""
1103+ reltype = reltype or relation_type()
1104+ relid_cmd_line = ['relation-ids', '--format=json']
1105+ if reltype is not None:
1106+ relid_cmd_line.append(reltype)
1107+ return json.loads(
1108+ subprocess.check_output(relid_cmd_line).decode('UTF-8')) or []
1109+ return []
1110+
1111+
1112+@cached
1113+def related_units(relid=None):
1114+ """A list of related units"""
1115+ relid = relid or relation_id()
1116+ units_cmd_line = ['relation-list', '--format=json']
1117+ if relid is not None:
1118+ units_cmd_line.extend(('-r', relid))
1119+ return json.loads(
1120+ subprocess.check_output(units_cmd_line).decode('UTF-8')) or []
1121+
1122+
1123+@cached
1124+def relation_for_unit(unit=None, rid=None):
 851+ """Get the JSON representation of a unit's relation"""
1126+ unit = unit or remote_unit()
1127+ relation = relation_get(unit=unit, rid=rid)
1128+ for key in relation:
1129+ if key.endswith('-list'):
1130+ relation[key] = relation[key].split()
1131+ relation['__unit__'] = unit
1132+ return relation
1133+
1134+
1135+@cached
1136+def relations_for_id(relid=None):
1137+ """Get relations of a specific relation ID"""
1138+ relation_data = []
1139+ relid = relid or relation_ids()
1140+ for unit in related_units(relid):
1141+ unit_data = relation_for_unit(unit, relid)
1142+ unit_data['__relid__'] = relid
1143+ relation_data.append(unit_data)
1144+ return relation_data
1145+
1146+
1147+@cached
1148+def relations_of_type(reltype=None):
1149+ """Get relations of a specific type"""
1150+ relation_data = []
1151+ reltype = reltype or relation_type()
1152+ for relid in relation_ids(reltype):
1153+ for relation in relations_for_id(relid):
1154+ relation['__relid__'] = relid
1155+ relation_data.append(relation)
1156+ return relation_data
1157+
1158+
1159+@cached
1160+def relation_types():
1161+ """Get a list of relation types supported by this charm"""
1162+ charmdir = os.environ.get('CHARM_DIR', '')
1163+ mdf = open(os.path.join(charmdir, 'metadata.yaml'))
1164+ md = yaml.safe_load(mdf)
1165+ rel_types = []
1166+ for key in ('provides', 'requires', 'peers'):
1167+ section = md.get(key)
1168+ if section:
1169+ rel_types.extend(section.keys())
1170+ mdf.close()
1171+ return rel_types
1172+
1173+
1174+@cached
1175+def relations():
1176+ """Get a nested dictionary of relation data for all related units"""
1177+ rels = {}
1178+ for reltype in relation_types():
1179+ relids = {}
1180+ for relid in relation_ids(reltype):
1181+ units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
1182+ for unit in related_units(relid):
1183+ reldata = relation_get(unit=unit, rid=relid)
1184+ units[unit] = reldata
1185+ relids[relid] = units
1186+ rels[reltype] = relids
1187+ return rels
1188+
1189+
1190+@cached
1191+def is_relation_made(relation, keys='private-address'):
1192+ '''
1193+ Determine whether a relation is established by checking for
1194+ presence of key(s). If a list of keys is provided, they
1195+ must all be present for the relation to be identified as made
1196+ '''
1197+ if isinstance(keys, str):
1198+ keys = [keys]
1199+ for r_id in relation_ids(relation):
1200+ for unit in related_units(r_id):
1201+ context = {}
1202+ for k in keys:
1203+ context[k] = relation_get(k, rid=r_id,
1204+ unit=unit)
1205+ if None not in context.values():
1206+ return True
1207+ return False
1208+
1209+
1210+def open_port(port, protocol="TCP"):
1211+ """Open a service network port"""
1212+ _args = ['open-port']
1213+ _args.append('{}/{}'.format(port, protocol))
1214+ subprocess.check_call(_args)
1215+
1216+
1217+def close_port(port, protocol="TCP"):
1218+ """Close a service network port"""
1219+ _args = ['close-port']
1220+ _args.append('{}/{}'.format(port, protocol))
1221+ subprocess.check_call(_args)
1222+
1223+
1224+@cached
1225+def unit_get(attribute):
1226+ """Get an attribute of the local unit (e.g. private-address)"""
1227+ _args = ['unit-get', '--format=json', attribute]
1228+ try:
1229+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
1230+ except ValueError:
1231+ return None
1232+
1233+
1234+def unit_private_ip():
1235+ """Get this unit's private IP address"""
1236+ return unit_get('private-address')
1237+
1238+
1239+class UnregisteredHookError(Exception):
1240+ """Raised when an undefined hook is called"""
1241+ pass
1242+
1243+
1244+class Hooks(object):
1245+ """A convenient handler for hook functions.
1246+
1247+ Example::
1248+
1249+ hooks = Hooks()
1250+
1251+ # register a hook, taking its name from the function name
1252+ @hooks.hook()
1253+ def install():
1254+ pass # your code here
1255+
1256+ # register a hook, providing a custom hook name
1257+ @hooks.hook("config-changed")
1258+ def config_changed():
1259+ pass # your code here
1260+
1261+ if __name__ == "__main__":
1262+ # execute a hook based on the name the program is called by
1263+ hooks.execute(sys.argv)
1264+ """
1265+
1266+ def __init__(self, config_save=True):
1267+ super(Hooks, self).__init__()
1268+ self._hooks = {}
1269+ self._config_save = config_save
1270+
1271+ def register(self, name, function):
1272+ """Register a hook"""
1273+ self._hooks[name] = function
1274+
1275+ def execute(self, args):
1276+ """Execute a registered hook based on args[0]"""
1277+ hook_name = os.path.basename(args[0])
1278+ if hook_name in self._hooks:
1279+ self._hooks[hook_name]()
1280+ if self._config_save:
1281+ cfg = config()
1282+ if cfg.implicit_save:
1283+ cfg.save()
1284+ else:
1285+ raise UnregisteredHookError(hook_name)
1286+
1287+ def hook(self, *hook_names):
1288+ """Decorator, registering the wrapped function under the given hook names"""
1289+ def wrapper(decorated):
1290+ for hook_name in hook_names:
1291+ self.register(hook_name, decorated)
1292+ else:
1293+ self.register(decorated.__name__, decorated)
1294+ if '_' in decorated.__name__:
1295+ self.register(
1296+ decorated.__name__.replace('_', '-'), decorated)
1297+ return decorated
1298+ return wrapper
1299+
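The name-based dispatch in `Hooks` above (register under the function's own name, plus a dashed alias for underscored names, then execute whichever hook `argv[0]` names) can be sketched standalone. `MiniHooks` is a hypothetical reduction for illustration, not the charmhelpers class:

```python
import os


class MiniHooks(object):
    """Simplified sketch of the Hooks registration/dispatch pattern."""

    def __init__(self):
        self._hooks = {}

    def hook(self, *hook_names):
        def wrapper(decorated):
            for name in hook_names:
                self._hooks[name] = decorated
            # Always register under the function's own name too,
            # with a dashed alias for underscored names.
            self._hooks[decorated.__name__] = decorated
            if '_' in decorated.__name__:
                self._hooks[decorated.__name__.replace('_', '-')] = decorated
            return decorated
        return wrapper

    def execute(self, args):
        # Juju invokes the hook file directly, so argv[0] carries the name.
        hook_name = os.path.basename(args[0])
        if hook_name not in self._hooks:
            raise KeyError('unregistered hook: %s' % hook_name)
        return self._hooks[hook_name]()


hooks = MiniHooks()
calls = []


@hooks.hook()
def config_changed():
    calls.append('config-changed')


hooks.execute(['hooks/config-changed'])  # dispatched via the dashed alias
```

This is why a single Python file symlinked as `hooks/install`, `hooks/config-changed`, etc. can serve every hook: dispatch keys off the basename it was invoked as.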
1300+
1301+def charm_dir():
1302+ """Return the root directory of the current charm"""
1303+ return os.environ.get('CHARM_DIR')
1304
1305=== added file 'lib/charmhelpers/core/host.py'
1306--- lib/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000
1307+++ lib/charmhelpers/core/host.py 2014-12-15 18:28:33 +0000
1308@@ -0,0 +1,396 @@
1309+"""Tools for working with the host system"""
1310+# Copyright 2012 Canonical Ltd.
1311+#
1312+# Authors:
1313+# Nick Moffitt <nick.moffitt@canonical.com>
1314+# Matthew Wedgwood <matthew.wedgwood@canonical.com>
1315+
1316+import os
1317+import re
1318+import pwd
1319+import grp
1320+import random
1321+import string
1322+import subprocess
1323+import hashlib
1324+from contextlib import contextmanager
1325+from collections import OrderedDict
1326+
1327+import six
1328+
1329+from .hookenv import log
1330+from .fstab import Fstab
1331+
1332+
1333+def service_start(service_name):
1334+ """Start a system service"""
1335+ return service('start', service_name)
1336+
1337+
1338+def service_stop(service_name):
1339+ """Stop a system service"""
1340+ return service('stop', service_name)
1341+
1342+
1343+def service_restart(service_name):
1344+ """Restart a system service"""
1345+ return service('restart', service_name)
1346+
1347+
1348+def service_reload(service_name, restart_on_failure=False):
1349+ """Reload a system service, optionally falling back to restart if
1350+ reload fails"""
1351+ service_result = service('reload', service_name)
1352+ if not service_result and restart_on_failure:
1353+ service_result = service('restart', service_name)
1354+ return service_result
1355+
1356+
1357+def service(action, service_name):
1358+ """Control a system service"""
1359+ cmd = ['service', service_name, action]
1360+ return subprocess.call(cmd) == 0
1361+
1362+
1363+def service_running(service):
1364+ """Determine whether a system service is running"""
1365+ try:
1366+ output = subprocess.check_output(
1367+ ['service', service, 'status'],
1368+ stderr=subprocess.STDOUT).decode('UTF-8')
1369+ except subprocess.CalledProcessError:
1370+ return False
1371+ else:
1372+ if ("start/running" in output or "is running" in output):
1373+ return True
1374+ else:
1375+ return False
1376+
1377+
1378+def service_available(service_name):
1379+ """Determine whether a system service is available"""
1380+ try:
1381+ subprocess.check_output(
1382+ ['service', service_name, 'status'],
1383+ stderr=subprocess.STDOUT).decode('UTF-8')
1384+ except subprocess.CalledProcessError as e:
1385+ return 'unrecognized service' not in e.output
1386+ else:
1387+ return True
1388+
1389+
1390+def adduser(username, password=None, shell='/bin/bash', system_user=False):
1391+ """Add a user to the system"""
1392+ try:
1393+ user_info = pwd.getpwnam(username)
1394+ log('user {0} already exists!'.format(username))
1395+ except KeyError:
1396+ log('creating user {0}'.format(username))
1397+ cmd = ['useradd']
1398+ if system_user or password is None:
1399+ cmd.append('--system')
1400+ else:
1401+ cmd.extend([
1402+ '--create-home',
1403+ '--shell', shell,
1404+ '--password', password,
1405+ ])
1406+ cmd.append(username)
1407+ subprocess.check_call(cmd)
1408+ user_info = pwd.getpwnam(username)
1409+ return user_info
1410+
1411+
1412+def add_user_to_group(username, group):
1413+ """Add a user to a group"""
1414+ cmd = [
1415+ 'gpasswd', '-a',
1416+ username,
1417+ group
1418+ ]
1419+ log("Adding user {} to group {}".format(username, group))
1420+ subprocess.check_call(cmd)
1421+
1422+
1423+def rsync(from_path, to_path, flags='-r', options=None):
1424+ """Replicate the contents of a path"""
1425+ options = options or ['--delete', '--executability']
1426+ cmd = ['/usr/bin/rsync', flags]
1427+ cmd.extend(options)
1428+ cmd.append(from_path)
1429+ cmd.append(to_path)
1430+ log(" ".join(cmd))
1431+ return subprocess.check_output(cmd).decode('UTF-8').strip()
1432+
1433+
1434+def symlink(source, destination):
1435+ """Create a symbolic link"""
1436+ log("Symlinking {} as {}".format(source, destination))
1437+ cmd = [
1438+ 'ln',
1439+ '-sf',
1440+ source,
1441+ destination,
1442+ ]
1443+ subprocess.check_call(cmd)
1444+
1445+
1446+def mkdir(path, owner='root', group='root', perms=0o555, force=False):
1447+ """Create a directory"""
1448+ log("Making dir {} {}:{} {:o}".format(path, owner, group,
1449+ perms))
1450+ uid = pwd.getpwnam(owner).pw_uid
1451+ gid = grp.getgrnam(group).gr_gid
1452+ realpath = os.path.abspath(path)
1453+ if os.path.exists(realpath):
1454+ if force and not os.path.isdir(realpath):
1455+ log("Removing non-directory file {} prior to mkdir()".format(path))
1456+ os.unlink(realpath)
1457+ else:
1458+ os.makedirs(realpath, perms)
1459+ os.chown(realpath, uid, gid)
1460+
1461+
1462+def write_file(path, content, owner='root', group='root', perms=0o444):
1463+ """Create or overwrite a file with the contents of a string"""
1464+ log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
1465+ uid = pwd.getpwnam(owner).pw_uid
1466+ gid = grp.getgrnam(group).gr_gid
1467+ with open(path, 'w') as target:
1468+ os.fchown(target.fileno(), uid, gid)
1469+ os.fchmod(target.fileno(), perms)
1470+ target.write(content)
1471+
1472+
1473+def fstab_remove(mp):
1474+ """Remove the given mountpoint entry from /etc/fstab
1475+ """
1476+ return Fstab.remove_by_mountpoint(mp)
1477+
1478+
1479+def fstab_add(dev, mp, fs, options=None):
1480+ """Adds the given device entry to the /etc/fstab file
1481+ """
1482+ return Fstab.add(dev, mp, fs, options=options)
1483+
1484+
1485+def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"):
1486+ """Mount a filesystem at a particular mountpoint"""
1487+ cmd_args = ['mount']
1488+ if options is not None:
1489+ cmd_args.extend(['-o', options])
1490+ cmd_args.extend([device, mountpoint])
1491+ try:
1492+ subprocess.check_output(cmd_args)
1493+ except subprocess.CalledProcessError as e:
1494+ log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
1495+ return False
1496+
1497+ if persist:
1498+ return fstab_add(device, mountpoint, filesystem, options=options)
1499+ return True
1500+
1501+
1502+def umount(mountpoint, persist=False):
1503+ """Unmount a filesystem"""
1504+ cmd_args = ['umount', mountpoint]
1505+ try:
1506+ subprocess.check_output(cmd_args)
1507+ except subprocess.CalledProcessError as e:
1508+ log('Error unmounting {}\n{}'.format(mountpoint, e.output))
1509+ return False
1510+
1511+ if persist:
1512+ return fstab_remove(mountpoint)
1513+ return True
1514+
1515+
1516+def mounts():
1517+ """Get a list of all mounted volumes as [[mountpoint,device],[...]]"""
1518+ with open('/proc/mounts') as f:
1519+ # [['/mount/point','/dev/path'],[...]]
1520+ system_mounts = [m[1::-1] for m in [l.strip().split()
1521+ for l in f.readlines()]]
1522+ return system_mounts
1523+
1524+
1525+def file_hash(path, hash_type='md5'):
1526+ """
1527+ Generate a hash checksum of the contents of 'path' or None if not found.
1528+
1529+    :param str hash_type: Any hash algorithm supported by :mod:`hashlib`,
1530+ such as md5, sha1, sha256, sha512, etc.
1531+ """
1532+ if os.path.exists(path):
1533+ h = getattr(hashlib, hash_type)()
1534+ with open(path, 'rb') as source:
1535+ h.update(source.read())
1536+ return h.hexdigest()
1537+ else:
1538+ return None
1539+
1540+
1541+def check_hash(path, checksum, hash_type='md5'):
1542+ """
1543+ Validate a file using a cryptographic checksum.
1544+
1545+ :param str checksum: Value of the checksum used to validate the file.
1546+ :param str hash_type: Hash algorithm used to generate `checksum`.
1547+         Can be any hash algorithm supported by :mod:`hashlib`,
1548+ such as md5, sha1, sha256, sha512, etc.
1549+ :raises ChecksumError: If the file fails the checksum
1550+
1551+ """
1552+ actual_checksum = file_hash(path, hash_type)
1553+ if checksum != actual_checksum:
1554+ raise ChecksumError("'%s' != '%s'" % (checksum, actual_checksum))
1555+
1556+
1557+class ChecksumError(ValueError):
1558+ pass
1559+
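The `file_hash` contract above (digest of the file's contents, `None` if the path does not exist) is easy to verify in isolation. A minimal re-statement of the same logic, exercised against a temporary file (`sketch_file_hash` is an illustrative name, not part of charmhelpers):

```python
import hashlib
import os
import tempfile


def sketch_file_hash(path, hash_type='md5'):
    """Mirror of the file_hash logic above: hex digest, or None if missing."""
    if not os.path.exists(path):
        return None
    h = getattr(hashlib, hash_type)()
    with open(path, 'rb') as source:
        h.update(source.read())
    return h.hexdigest()


fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'wb') as f:
    f.write(b'hello\n')

digest = sketch_file_hash(path, 'sha256')
expected = hashlib.sha256(b'hello\n').hexdigest()
```

`check_hash` is then just this comparison with a `ChecksumError` raised on mismatch, which is how fetched payloads are validated against a configured checksum.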
1560+
1561+def restart_on_change(restart_map, stopstart=False):
1562+ """Restart services based on configuration files changing
1563+
1564+    This function is used as a decorator, for example::
1565+
1566+ @restart_on_change({
1567+ '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
1568+ })
1569+ def ceph_client_changed():
1570+ pass # your code here
1571+
1572+ In this example, the cinder-api and cinder-volume services
1573+ would be restarted if /etc/ceph/ceph.conf is changed by the
1574+ ceph_client_changed function.
1575+ """
1576+ def wrap(f):
1577+ def wrapped_f(*args):
1578+ checksums = {}
1579+ for path in restart_map:
1580+ checksums[path] = file_hash(path)
1581+ f(*args)
1582+ restarts = []
1583+ for path in restart_map:
1584+ if checksums[path] != file_hash(path):
1585+ restarts += restart_map[path]
1586+ services_list = list(OrderedDict.fromkeys(restarts))
1587+ if not stopstart:
1588+ for service_name in services_list:
1589+ service('restart', service_name)
1590+ else:
1591+ for action in ['stop', 'start']:
1592+ for service_name in services_list:
1593+ service(action, service_name)
1594+ return wrapped_f
1595+ return wrap
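The core idea of `restart_on_change` — hash the watched files before running the wrapped function, hash again afterwards, and restart only the services whose files changed, deduplicated in order — can be exercised without real services. In this sketch (the `demo_restart_on_change` helper and service names are hypothetical), restarts are returned as a list rather than issued via `service`:

```python
import hashlib
import os
import tempfile
from collections import OrderedDict


def file_hash(path):
    if not os.path.exists(path):
        return None
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()


def demo_restart_on_change(restart_map, mutate):
    """Standalone sketch of the restart_on_change flow above."""
    before = {path: file_hash(path) for path in restart_map}
    mutate()  # stands in for the decorated hook body
    restarts = []
    for path in restart_map:
        if before[path] != file_hash(path):
            restarts += restart_map[path]
    # Preserve order while dropping duplicate service names.
    return list(OrderedDict.fromkeys(restarts))


tmp = tempfile.mkdtemp()
conf_a = os.path.join(tmp, 'a.conf')
conf_b = os.path.join(tmp, 'b.conf')
for p in (conf_a, conf_b):
    with open(p, 'w') as f:
        f.write('v1')


def mutate():
    with open(conf_a, 'w') as f:  # only a.conf changes
        f.write('v2')


restart_map = OrderedDict([(conf_a, ['svc-api', 'svc-worker']),
                           (conf_b, ['svc-worker'])])
to_restart = demo_restart_on_change(restart_map, mutate)
```

Only `a.conf` changed, so `to_restart` is `['svc-api', 'svc-worker']`; `svc-worker` appears once despite being mapped from both files.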
1596+
1597+
1598+def lsb_release():
1599+ """Return /etc/lsb-release in a dict"""
1600+ d = {}
1601+ with open('/etc/lsb-release', 'r') as lsb:
1602+ for l in lsb:
1603+ k, v = l.split('=')
1604+ d[k.strip()] = v.strip()
1605+ return d
1606+
1607+
1608+def pwgen(length=None):
1609+    """Generate a random password."""
1610+ if length is None:
1611+ length = random.choice(range(35, 45))
1612+ alphanumeric_chars = [
1613+ l for l in (string.ascii_letters + string.digits)
1614+ if l not in 'l0QD1vAEIOUaeiou']
1615+ random_chars = [
1616+ random.choice(alphanumeric_chars) for _ in range(length)]
1617+ return(''.join(random_chars))
1618+
1619+
1620+def list_nics(nic_type):
1621+ '''Return a list of nics of given type(s)'''
1622+ if isinstance(nic_type, six.string_types):
1623+ int_types = [nic_type]
1624+ else:
1625+ int_types = nic_type
1626+ interfaces = []
1627+ for int_type in int_types:
1628+ cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
1629+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
1630+ ip_output = (line for line in ip_output if line)
1631+ for line in ip_output:
1632+ if line.split()[1].startswith(int_type):
1633+ matched = re.search('.*: (bond[0-9]+\.[0-9]+)@.*', line)
1634+ if matched:
1635+ interface = matched.groups()[0]
1636+ else:
1637+ interface = line.split()[1].replace(":", "")
1638+ interfaces.append(interface)
1639+
1640+ return interfaces
1641+
1642+
1643+def set_nic_mtu(nic, mtu):
1644+ '''Set MTU on a network interface'''
1645+ cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
1646+ subprocess.check_call(cmd)
1647+
1648+
1649+def get_nic_mtu(nic):
1650+ cmd = ['ip', 'addr', 'show', nic]
1651+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
1652+ mtu = ""
1653+ for line in ip_output:
1654+ words = line.split()
1655+ if 'mtu' in words:
1656+ mtu = words[words.index("mtu") + 1]
1657+ return mtu
1658+
1659+
1660+def get_nic_hwaddr(nic):
1661+ cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
1662+ ip_output = subprocess.check_output(cmd).decode('UTF-8')
1663+ hwaddr = ""
1664+ words = ip_output.split()
1665+ if 'link/ether' in words:
1666+ hwaddr = words[words.index('link/ether') + 1]
1667+ return hwaddr
1668+
1669+
1670+def cmp_pkgrevno(package, revno, pkgcache=None):
1671+ '''Compare supplied revno with the revno of the installed package
1672+
1673+ * 1 => Installed revno is greater than supplied arg
1674+ * 0 => Installed revno is the same as supplied arg
1675+ * -1 => Installed revno is less than supplied arg
1676+
1677+ '''
1678+ import apt_pkg
1679+ from charmhelpers.fetch import apt_cache
1680+ if not pkgcache:
1681+ pkgcache = apt_cache()
1682+ pkg = pkgcache[package]
1683+ return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
1684+
1685+
1686+@contextmanager
1687+def chdir(d):
1688+ cur = os.getcwd()
1689+ try:
1690+ yield os.chdir(d)
1691+ finally:
1692+ os.chdir(cur)
1693+
1694+
1695+def chownr(path, owner, group):
1696+ uid = pwd.getpwnam(owner).pw_uid
1697+ gid = grp.getgrnam(group).gr_gid
1698+
1699+ for root, dirs, files in os.walk(path):
1700+ for name in dirs + files:
1701+ full = os.path.join(root, name)
1702+ broken_symlink = os.path.lexists(full) and not os.path.exists(full)
1703+ if not broken_symlink:
1704+ os.chown(full, uid, gid)
1705
1706=== added directory 'lib/charmhelpers/core/services'
1707=== added file 'lib/charmhelpers/core/services/__init__.py'
1708--- lib/charmhelpers/core/services/__init__.py 1970-01-01 00:00:00 +0000
1709+++ lib/charmhelpers/core/services/__init__.py 2014-12-15 18:28:33 +0000
1710@@ -0,0 +1,2 @@
1711+from .base import * # NOQA
1712+from .helpers import * # NOQA
1713
1714=== added file 'lib/charmhelpers/core/services/base.py'
1715--- lib/charmhelpers/core/services/base.py 1970-01-01 00:00:00 +0000
1716+++ lib/charmhelpers/core/services/base.py 2014-12-15 18:28:33 +0000
1717@@ -0,0 +1,313 @@
1718+import os
1719+import re
1720+import json
1721+from collections import Iterable
1722+
1723+from charmhelpers.core import host
1724+from charmhelpers.core import hookenv
1725+
1726+
1727+__all__ = ['ServiceManager', 'ManagerCallback',
1728+ 'PortManagerCallback', 'open_ports', 'close_ports', 'manage_ports',
1729+ 'service_restart', 'service_stop']
1730+
1731+
1732+class ServiceManager(object):
1733+ def __init__(self, services=None):
1734+ """
1735+ Register a list of services, given their definitions.
1736+
1737+ Service definitions are dicts in the following formats (all keys except
1738+ 'service' are optional)::
1739+
1740+ {
1741+ "service": <service name>,
1742+ "required_data": <list of required data contexts>,
1743+ "provided_data": <list of provided data contexts>,
1744+ "data_ready": <one or more callbacks>,
1745+ "data_lost": <one or more callbacks>,
1746+ "start": <one or more callbacks>,
1747+ "stop": <one or more callbacks>,
1748+ "ports": <list of ports to manage>,
1749+ }
1750+
1751+ The 'required_data' list should contain dicts of required data (or
1752+ dependency managers that act like dicts and know how to collect the data).
1753+ Only when all items in the 'required_data' list are populated are the list
1754+ of 'data_ready' and 'start' callbacks executed. See `is_ready()` for more
1755+ information.
1756+
1757+ The 'provided_data' list should contain relation data providers, most likely
1758+ a subclass of :class:`charmhelpers.core.services.helpers.RelationContext`,
1759+ that will indicate a set of data to set on a given relation.
1760+
1761+ The 'data_ready' value should be either a single callback, or a list of
1762+ callbacks, to be called when all items in 'required_data' pass `is_ready()`.
1763+ Each callback will be called with the service name as the only parameter.
1764+ After all of the 'data_ready' callbacks are called, the 'start' callbacks
1765+ are fired.
1766+
1767+ The 'data_lost' value should be either a single callback, or a list of
1768+ callbacks, to be called when a 'required_data' item no longer passes
1769+ `is_ready()`. Each callback will be called with the service name as the
1770+ only parameter. After all of the 'data_lost' callbacks are called,
1771+ the 'stop' callbacks are fired.
1772+
1773+ The 'start' value should be either a single callback, or a list of
1774+ callbacks, to be called when starting the service, after the 'data_ready'
1775+ callbacks are complete. Each callback will be called with the service
1776+ name as the only parameter. This defaults to
1777+ `[host.service_start, services.open_ports]`.
1778+
1779+ The 'stop' value should be either a single callback, or a list of
1780+ callbacks, to be called when stopping the service. If the service is
1781+ being stopped because it no longer has all of its 'required_data', this
1782+ will be called after all of the 'data_lost' callbacks are complete.
1783+ Each callback will be called with the service name as the only parameter.
1784+ This defaults to `[services.close_ports, host.service_stop]`.
1785+
1786+ The 'ports' value should be a list of ports to manage. The default
1787+ 'start' handler will open the ports after the service is started,
1788+ and the default 'stop' handler will close the ports prior to stopping
1789+ the service.
1790+
1791+
1792+ Examples:
1793+
1794+ The following registers an Upstart service called bingod that depends on
1795+ a mongodb relation and which runs a custom `db_migrate` function prior to
1796+ restarting the service, and a Runit service called spadesd::
1797+
1798+ manager = services.ServiceManager([
1799+ {
1800+ 'service': 'bingod',
1801+ 'ports': [80, 443],
1802+ 'required_data': [MongoRelation(), config(), {'my': 'data'}],
1803+ 'data_ready': [
1804+ services.template(source='bingod.conf'),
1805+ services.template(source='bingod.ini',
1806+ target='/etc/bingod.ini',
1807+ owner='bingo', perms=0400),
1808+ ],
1809+ },
1810+ {
1811+ 'service': 'spadesd',
1812+ 'data_ready': services.template(source='spadesd_run.j2',
1813+ target='/etc/sv/spadesd/run',
1814+ perms=0555),
1815+ 'start': runit_start,
1816+ 'stop': runit_stop,
1817+ },
1818+ ])
1819+ manager.manage()
1820+ """
1821+ self._ready_file = os.path.join(hookenv.charm_dir(), 'READY-SERVICES.json')
1822+ self._ready = None
1823+ self.services = {}
1824+ for service in services or []:
1825+ service_name = service['service']
1826+ self.services[service_name] = service
1827+
1828+ def manage(self):
1829+ """
1830+ Handle the current hook by doing The Right Thing with the registered services.
1831+ """
1832+ hook_name = hookenv.hook_name()
1833+ if hook_name == 'stop':
1834+ self.stop_services()
1835+ else:
1836+ self.provide_data()
1837+ self.reconfigure_services()
1838+ cfg = hookenv.config()
1839+ if cfg.implicit_save:
1840+ cfg.save()
1841+
1842+ def provide_data(self):
1843+ """
1844+ Set the relation data for each provider in the ``provided_data`` list.
1845+
1846+ A provider must have a `name` attribute, which indicates which relation
1847+ to set data on, and a `provide_data()` method, which returns a dict of
1848+ data to set.
1849+ """
1850+ hook_name = hookenv.hook_name()
1851+ for service in self.services.values():
1852+ for provider in service.get('provided_data', []):
1853+ if re.match(r'{}-relation-(joined|changed)'.format(provider.name), hook_name):
1854+ data = provider.provide_data()
1855+ _ready = provider._is_ready(data) if hasattr(provider, '_is_ready') else data
1856+ if _ready:
1857+ hookenv.relation_set(None, data)
1858+
1859+ def reconfigure_services(self, *service_names):
1860+ """
1861+ Update all files for one or more registered services, and,
1862+ if ready, optionally restart them.
1863+
1864+ If no service names are given, reconfigures all registered services.
1865+ """
1866+ for service_name in service_names or self.services.keys():
1867+ if self.is_ready(service_name):
1868+ self.fire_event('data_ready', service_name)
1869+ self.fire_event('start', service_name, default=[
1870+ service_restart,
1871+ manage_ports])
1872+ self.save_ready(service_name)
1873+ else:
1874+ if self.was_ready(service_name):
1875+ self.fire_event('data_lost', service_name)
1876+ self.fire_event('stop', service_name, default=[
1877+ manage_ports,
1878+ service_stop])
1879+ self.save_lost(service_name)
1880+
1881+ def stop_services(self, *service_names):
1882+ """
1883+ Stop one or more registered services, by name.
1884+
1885+ If no service names are given, stops all registered services.
1886+ """
1887+ for service_name in service_names or self.services.keys():
1888+ self.fire_event('stop', service_name, default=[
1889+ manage_ports,
1890+ service_stop])
1891+
1892+ def get_service(self, service_name):
1893+ """
1894+ Given the name of a registered service, return its service definition.
1895+ """
1896+ service = self.services.get(service_name)
1897+ if not service:
1898+ raise KeyError('Service not registered: %s' % service_name)
1899+ return service
1900+
1901+ def fire_event(self, event_name, service_name, default=None):
1902+ """
1903+ Fire a data_ready, data_lost, start, or stop event on a given service.
1904+ """
1905+ service = self.get_service(service_name)
1906+ callbacks = service.get(event_name, default)
1907+ if not callbacks:
1908+ return
1909+ if not isinstance(callbacks, Iterable):
1910+ callbacks = [callbacks]
1911+ for callback in callbacks:
1912+ if isinstance(callback, ManagerCallback):
1913+ callback(self, service_name, event_name)
1914+ else:
1915+ callback(service_name)
1916+
1917+ def is_ready(self, service_name):
1918+ """
1919+ Determine if a registered service is ready, by checking its 'required_data'.
1920+
1921+ A 'required_data' item can be any mapping type, and is considered ready
1922+ if `bool(item)` evaluates as True.
1923+ """
1924+ service = self.get_service(service_name)
1925+ reqs = service.get('required_data', [])
1926+ return all(bool(req) for req in reqs)
1927+
1928+ def _load_ready_file(self):
1929+ if self._ready is not None:
1930+ return
1931+ if os.path.exists(self._ready_file):
1932+ with open(self._ready_file) as fp:
1933+ self._ready = set(json.load(fp))
1934+ else:
1935+ self._ready = set()
1936+
1937+ def _save_ready_file(self):
1938+ if self._ready is None:
1939+ return
1940+ with open(self._ready_file, 'w') as fp:
1941+ json.dump(list(self._ready), fp)
1942+
1943+ def save_ready(self, service_name):
1944+ """
1945+ Save an indicator that the given service is now data_ready.
1946+ """
1947+ self._load_ready_file()
1948+ self._ready.add(service_name)
1949+ self._save_ready_file()
1950+
1951+ def save_lost(self, service_name):
1952+ """
1953+ Save an indicator that the given service is no longer data_ready.
1954+ """
1955+ self._load_ready_file()
1956+ self._ready.discard(service_name)
1957+ self._save_ready_file()
1958+
1959+ def was_ready(self, service_name):
1960+ """
1961+ Determine if the given service was previously data_ready.
1962+ """
1963+ self._load_ready_file()
1964+ return service_name in self._ready
1965+
1966+
1967+class ManagerCallback(object):
1968+ """
1969+ Special case of a callback that takes the `ServiceManager` instance
1970+ in addition to the service name.
1971+
1972+ Subclasses should implement `__call__` which should accept three parameters:
1973+
1974+ * `manager` The `ServiceManager` instance
1975+ * `service_name` The name of the service it's being triggered for
1976+ * `event_name` The name of the event that this callback is handling
1977+ """
1978+ def __call__(self, manager, service_name, event_name):
1979+ raise NotImplementedError()
1980+
1981+
1982+class PortManagerCallback(ManagerCallback):
1983+ """
1984+ Callback class that will open or close ports, for use as either
1985+ a start or stop action.
1986+ """
1987+ def __call__(self, manager, service_name, event_name):
1988+ service = manager.get_service(service_name)
1989+ new_ports = service.get('ports', [])
1990+ port_file = os.path.join(hookenv.charm_dir(), '.{}.ports'.format(service_name))
1991+ if os.path.exists(port_file):
1992+ with open(port_file) as fp:
1993+ old_ports = fp.read().split(',')
1994+ for old_port in old_ports:
1995+ if bool(old_port):
1996+ old_port = int(old_port)
1997+ if old_port not in new_ports:
1998+ hookenv.close_port(old_port)
1999+ with open(port_file, 'w') as fp:
2000+ fp.write(','.join(str(port) for port in new_ports))
2001+ for port in new_ports:
2002+ if event_name == 'start':
2003+ hookenv.open_port(port)
2004+ elif event_name == 'stop':
2005+ hookenv.close_port(port)
2006+
2007+
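The diffing in `PortManagerCallback` above can be sketched on its own: close any previously recorded port that is no longer wanted, then open (or close, on `stop`) the new set. The `open_port`/`close_port` stubs below merely record calls and stand in for the real `hookenv` functions; the port numbers are hypothetical.

```python
# Sketch of PortManagerCallback's port diffing: close ports that were
# previously recorded but are absent from the new set, then apply the
# start/stop action to the new ports. open_port/close_port are stubs.

opened, closed = [], []

def open_port(port):
    opened.append(port)

def close_port(port):
    closed.append(port)

def manage_ports(old_ports, new_ports, event_name):
    for old_port in old_ports:
        if old_port not in new_ports:
            close_port(old_port)
    for port in new_ports:
        if event_name == 'start':
            open_port(port)
        elif event_name == 'stop':
            close_port(port)

# 4567 was recorded last time but is no longer wanted; 3306 stays open.
manage_ports([3306, 4567], [3306], 'start')
```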
2008+def service_stop(service_name):
2009+ """
2010+ Wrapper around host.service_stop to prevent spurious "unknown service"
2011+ messages in the logs.
2012+ """
2013+ if host.service_running(service_name):
2014+ host.service_stop(service_name)
2015+
2016+
2017+def service_restart(service_name):
2018+ """
2019+ Wrapper around host.service_restart to prevent spurious "unknown service"
2020+ messages in the logs.
2021+ """
2022+ if host.service_available(service_name):
2023+ if host.service_running(service_name):
2024+ host.service_restart(service_name)
2025+ else:
2026+ host.service_start(service_name)
2027+
2028+
2029+# Convenience aliases
2030+open_ports = close_ports = manage_ports = PortManagerCallback()
2031
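`ServiceManager.reconfigure_services` above chooses between the `data_ready`/`start` and `data_lost`/`stop` paths based on `is_ready`, which simply requires every entry in a service's `required_data` list to be truthy. A minimal standalone sketch of that check, using plain dicts and hypothetical service names:

```python
# Minimal sketch of the ServiceManager readiness check: a service is "ready"
# when every item in its required_data list is truthy. The service
# definitions below are hypothetical, not taken from the charm.

def is_ready(service):
    return all(bool(req) for req in service.get('required_data', []))

services = {
    # Non-empty relation data -> ready.
    'mariadb': {'required_data': [{'host': '10.0.0.1', 'port': 3306}]},
    # An empty dict is falsy -> not ready.
    'web': {'required_data': [{}]},
}

ready = {name for name, svc in services.items() if is_ready(svc)}
```

Note that, as in the original, a service with no `required_data` at all is considered ready.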
2032=== added file 'lib/charmhelpers/core/services/helpers.py'
2033--- lib/charmhelpers/core/services/helpers.py 1970-01-01 00:00:00 +0000
2034+++ lib/charmhelpers/core/services/helpers.py 2014-12-15 18:28:33 +0000
2035@@ -0,0 +1,243 @@
2036+import os
2037+import yaml
2038+from charmhelpers.core import hookenv
2039+from charmhelpers.core import templating
2040+
2041+from charmhelpers.core.services.base import ManagerCallback
2042+
2043+
2044+__all__ = ['RelationContext', 'TemplateCallback',
2045+ 'render_template', 'template']
2046+
2047+
2048+class RelationContext(dict):
2049+ """
2050+ Base class for a context generator that gets relation data from juju.
2051+
2052+ Subclasses must provide the attributes `name`, which is the name of the
2053+ interface of interest, `interface`, which is the type of the interface of
2054+ interest, and `required_keys`, which is the set of keys required for the
2055+ relation to be considered complete. The data for all interfaces matching
2056+ the `name` attribute that are complete will be used to populate the dictionary
2057+ values (see `get_data`, below).
2058+
2059+ The generated context will be namespaced under the relation :attr:`name`,
2060+ to prevent potential naming conflicts.
2061+
2062+ :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
2063+ :param list additional_required_keys: Extend the list of :attr:`required_keys`
2064+ """
2065+ name = None
2066+ interface = None
2067+ required_keys = []
2068+
2069+ def __init__(self, name=None, additional_required_keys=None):
2070+ if name is not None:
2071+ self.name = name
2072+ if additional_required_keys is not None:
2073+ self.required_keys.extend(additional_required_keys)
2074+ self.get_data()
2075+
2076+ def __bool__(self):
2077+ """
2078+ Returns True if all of the required_keys are available.
2079+ """
2080+ return self.is_ready()
2081+
2082+ __nonzero__ = __bool__
2083+
2084+ def __repr__(self):
2085+ return super(RelationContext, self).__repr__()
2086+
2087+ def is_ready(self):
2088+ """
2089+ Returns True if all of the `required_keys` are available from any units.
2090+ """
2091+ ready = len(self.get(self.name, [])) > 0
2092+ if not ready:
2093+ hookenv.log('Incomplete relation: {}'.format(self.__class__.__name__), hookenv.DEBUG)
2094+ return ready
2095+
2096+ def _is_ready(self, unit_data):
2097+ """
2098+ Helper method that tests a set of relation data and returns True if
2099+ all of the `required_keys` are present.
2100+ """
2101+ return set(unit_data.keys()).issuperset(set(self.required_keys))
2102+
2103+ def get_data(self):
2104+ """
2105+ Retrieve the relation data for each unit involved in a relation and,
2106+ if complete, store it in a list under `self[self.name]`. This
2107+ is automatically called when the RelationContext is instantiated.
2108+
2109+ The units are sorted lexicographically first by the service ID, then by
2110+ the unit ID. Thus, if an interface has two other services, 'db:1'
2111+ and 'db:2', with 'db:1' having two units, 'wordpress/0' and 'wordpress/1',
2112+ and 'db:2' having one unit, 'mediawiki/0', all of which have a complete
2113+ set of data, the relation data for the units will be stored in the
2114+ order: 'wordpress/0', 'wordpress/1', 'mediawiki/0'.
2115+
2116+ If you only care about a single unit on the relation, you can just
2117+ access it as `{{ interface[0]['key'] }}`. However, if you can at all
2118+ support multiple units on a relation, you should iterate over the list,
2119+ like::
2120+
2121+ {% for unit in interface -%}
2122+ {{ unit['key'] }}{% if not loop.last %},{% endif %}
2123+ {%- endfor %}
2124+
2125+ Note that since all sets of relation data from all related services and
2126+ units are in a single list, if you need to know which service or unit a
2127+ set of data came from, you'll need to extend this class to preserve
2128+ that information.
2129+ """
2130+ if not hookenv.relation_ids(self.name):
2131+ return
2132+
2133+ ns = self.setdefault(self.name, [])
2134+ for rid in sorted(hookenv.relation_ids(self.name)):
2135+ for unit in sorted(hookenv.related_units(rid)):
2136+ reldata = hookenv.relation_get(rid=rid, unit=unit)
2137+ if self._is_ready(reldata):
2138+ ns.append(reldata)
2139+
2140+ def provide_data(self):
2141+ """
2142+ Return data to be relation_set for this interface.
2143+ """
2144+ return {}
2145+
2146+
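The per-unit completeness test that `get_data` applies via `_is_ready` is a plain superset check over the relation keys. A self-contained sketch, using the `mysql` interface's required keys and hypothetical unit data:

```python
# Sketch of RelationContext._is_ready: a unit's relation data is complete
# when it carries every required key (extra keys are fine).

def unit_is_ready(unit_data, required_keys):
    return set(unit_data.keys()).issuperset(set(required_keys))

required = ['host', 'user', 'password', 'database']
complete = {'host': '10.0.0.1', 'user': 'u', 'password': 'p',
            'database': 'd', 'extra': 'ignored'}
partial = {'host': '10.0.0.1'}
```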
2147+class MysqlRelation(RelationContext):
2148+ """
2149+ Relation context for the `mysql` interface.
2150+
2151+ :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
2152+ :param list additional_required_keys: Extend the list of :attr:`required_keys`
2153+ """
2154+ name = 'db'
2155+ interface = 'mysql'
2156+ required_keys = ['host', 'user', 'password', 'database']
2157+
2158+
2159+class HttpRelation(RelationContext):
2160+ """
2161+ Relation context for the `http` interface.
2162+
2163+ :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
2164+ :param list additional_required_keys: Extend the list of :attr:`required_keys`
2165+ """
2166+ name = 'website'
2167+ interface = 'http'
2168+ required_keys = ['host', 'port']
2169+
2170+ def provide_data(self):
2171+ return {
2172+ 'host': hookenv.unit_get('private-address'),
2173+ 'port': 80,
2174+ }
2175+
2176+
2177+class RequiredConfig(dict):
2178+ """
2179+ Data context that loads config options with one or more mandatory options.
2180+
2181+ Once the required options have been changed from their default values, all
2182+ config options will be available, namespaced under `config` to prevent
2183+ potential naming conflicts (for example, between a config option and a
2184+ relation property).
2185+
2186+ :param list *args: List of options that must be changed from their default values.
2187+ """
2188+
2189+ def __init__(self, *args):
2190+ self.required_options = args
2191+ self['config'] = hookenv.config()
2192+ with open(os.path.join(hookenv.charm_dir(), 'config.yaml')) as fp:
2193+ self.config = yaml.load(fp).get('options', {})
2194+
2195+ def __bool__(self):
2196+ for option in self.required_options:
2197+ if option not in self['config']:
2198+ return False
2199+ current_value = self['config'][option]
2200+ default_value = self.config[option].get('default')
2201+ if current_value == default_value:
2202+ return False
2203+ if current_value in (None, '') and default_value in (None, ''):
2204+ return False
2205+ return True
2206+
2207+ def __nonzero__(self):
2208+ return self.__bool__()
2209+
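`RequiredConfig.__bool__` above treats an option as "set" only once it differs from its declared default and is non-empty. That logic can be sketched standalone; the option names and values below are hypothetical:

```python
# Standalone sketch of RequiredConfig's readiness test: a required option
# counts only if present, different from its default, and not merely an
# empty-vs-None change.

def options_set(current, defaults, required):
    for option in required:
        if option not in current:
            return False
        cur, default = current[option], defaults.get(option)
        if cur == default:
            return False
        if cur in (None, '') and default in (None, ''):
            return False
    return True

current = {'root-password': 's3cret', 'port': 3306}
defaults = {'root-password': '', 'port': 3306}
```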
2210+
2211+class StoredContext(dict):
2212+ """
2213+ A data context that always returns the data that it was first created with.
2214+
2215+ This is useful to do a one-time generation of things like passwords, that
2216+ will thereafter use the same value that was originally generated, instead
2217+ of generating a new value each time it is run.
2218+ """
2219+ def __init__(self, file_name, config_data):
2220+ """
2221+ If the file exists, populate `self` with the data from the file.
2222+ Otherwise, populate with the given data and persist it to the file.
2223+ """
2224+ if os.path.exists(file_name):
2225+ self.update(self.read_context(file_name))
2226+ else:
2227+ self.store_context(file_name, config_data)
2228+ self.update(config_data)
2229+
2230+ def store_context(self, file_name, config_data):
2231+ if not os.path.isabs(file_name):
2232+ file_name = os.path.join(hookenv.charm_dir(), file_name)
2233+ with open(file_name, 'w') as file_stream:
2234+ os.fchmod(file_stream.fileno(), 0o600)
2235+ yaml.dump(config_data, file_stream)
2236+
2237+ def read_context(self, file_name):
2238+ if not os.path.isabs(file_name):
2239+ file_name = os.path.join(hookenv.charm_dir(), file_name)
2240+ with open(file_name, 'r') as file_stream:
2241+ data = yaml.load(file_stream)
2242+ if not data:
2243+ raise OSError("%s is empty" % file_name)
2244+ return data
2245+
2246+
2247+class TemplateCallback(ManagerCallback):
2248+ """
2249+ Callback class that will render a Jinja2 template, for use as a ready
2250+ action.
2251+
2252+ :param str source: The template source file, relative to
2253+ `$CHARM_DIR/templates`
2254+
2255+ :param str target: The target to write the rendered template to
2256+ :param str owner: The owner of the rendered file
2257+ :param str group: The group of the rendered file
2258+ :param int perms: The permissions of the rendered file
2259+ """
2260+ def __init__(self, source, target,
2261+ owner='root', group='root', perms=0o444):
2262+ self.source = source
2263+ self.target = target
2264+ self.owner = owner
2265+ self.group = group
2266+ self.perms = perms
2267+
2268+ def __call__(self, manager, service_name, event_name):
2269+ service = manager.get_service(service_name)
2270+ context = {}
2271+ for ctx in service.get('required_data', []):
2272+ context.update(ctx)
2273+ templating.render(self.source, self.target, context,
2274+ self.owner, self.group, self.perms)
2275+
2276+
2277+# Convenience aliases for templates
2278+render_template = template = TemplateCallback
2279
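The one-time-generation behaviour of `StoredContext` above is worth seeing in isolation: the first construction persists the data, and later constructions return what was first stored, ignoring new input. This sketch uses `json` from the stdlib instead of `yaml` (and skips the `fchmod` permissions step) so it needs no third-party packages:

```python
import json
import os
import tempfile

# Sketch of the StoredContext pattern: persist on first use, then always
# return the originally stored data. json replaces yaml here for a
# dependency-free example.

class StoredDict(dict):
    def __init__(self, file_name, data):
        if os.path.exists(file_name):
            with open(file_name) as fp:
                self.update(json.load(fp))
        else:
            with open(file_name, 'w') as fp:
                json.dump(data, fp)
            self.update(data)

tmp = os.path.join(tempfile.mkdtemp(), 'ctx.json')
first = StoredDict(tmp, {'password': 'abc123'})
second = StoredDict(tmp, {'password': 'changed'})  # ignored: file exists
```

This is exactly why the class suits one-time password generation: re-running a hook cannot rotate the stored secret by accident.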
2280=== added file 'lib/charmhelpers/core/sysctl.py'
2281--- lib/charmhelpers/core/sysctl.py 1970-01-01 00:00:00 +0000
2282+++ lib/charmhelpers/core/sysctl.py 2014-12-15 18:28:33 +0000
2283@@ -0,0 +1,34 @@
2284+#!/usr/bin/env python
2285+# -*- coding: utf-8 -*-
2286+
2287+__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
2288+
2289+import yaml
2290+
2291+from subprocess import check_call
2292+
2293+from charmhelpers.core.hookenv import (
2294+ log,
2295+ DEBUG,
2296+)
2297+
2298+
2299+def create(sysctl_dict, sysctl_file):
2300+ """Creates a sysctl.conf file from a YAML associative array
2301+
2302+ :param sysctl_dict: a YAML string of sysctl options eg "{ 'kernel.max_pid': 1337 }"
2303+ :type sysctl_dict: str or unicode
2304+ :param sysctl_file: path to the sysctl file to be saved
2305+ :type sysctl_file: str or unicode
2306+ :returns: None
2307+ """
2308+ sysctl_dict = yaml.load(sysctl_dict)
2309+
2310+ with open(sysctl_file, "w") as fd:
2311+ for key, value in sysctl_dict.items():
2312+ fd.write("{}={}\n".format(key, value))
2313+
2314+ log("Updating sysctl_file: %s values: %s" % (sysctl_file, sysctl_dict),
2315+ level=DEBUG)
2316+
2317+ check_call(["sysctl", "-p", sysctl_file])
2318
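The file-writing core of `create()` above, minus the `yaml.load` of the input and the final `sysctl -p` call, reduces to emitting `key=value` lines from a mapping. A runnable sketch against a temporary file:

```python
import os
import tempfile

# The core of sysctl.create(): turn a mapping into key=value lines.
# (create() itself accepts a YAML string and also runs `sysctl -p`,
# both omitted here.)

def write_sysctl(sysctl_dict, sysctl_file):
    with open(sysctl_file, "w") as fd:
        for key, value in sysctl_dict.items():
            fd.write("{}={}\n".format(key, value))

path = os.path.join(tempfile.mkdtemp(), "sysctl.conf")
write_sysctl({"kernel.max_pid": 1337}, path)
content = open(path).read()
```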
2319=== added file 'lib/charmhelpers/core/templating.py'
2320--- lib/charmhelpers/core/templating.py 1970-01-01 00:00:00 +0000
2321+++ lib/charmhelpers/core/templating.py 2014-12-15 18:28:33 +0000
2322@@ -0,0 +1,52 @@
2323+import os
2324+
2325+from charmhelpers.core import host
2326+from charmhelpers.core import hookenv
2327+
2328+
2329+def render(source, target, context, owner='root', group='root',
2330+ perms=0o444, templates_dir=None):
2331+ """
2332+ Render a template.
2333+
2334+ The `source` path, if not absolute, is relative to the `templates_dir`.
2335+
2336+ The `target` path should be absolute.
2337+
2338+ The context should be a dict containing the values to be replaced in the
2339+ template.
2340+
2341+ The `owner`, `group`, and `perms` options will be passed to `write_file`.
2342+
2343+ If omitted, `templates_dir` defaults to the `templates` folder in the charm.
2344+
2345+ Note: Using this requires python-jinja2; if it is not installed, calling
2346+ this will attempt to use charmhelpers.fetch.apt_install to install it.
2347+ """
2348+ try:
2349+ from jinja2 import FileSystemLoader, Environment, exceptions
2350+ except ImportError:
2351+ try:
2352+ from charmhelpers.fetch import apt_install
2353+ except ImportError:
2354+ hookenv.log('Could not import jinja2, and could not import '
2355+ 'charmhelpers.fetch to install it',
2356+ level=hookenv.ERROR)
2357+ raise
2358+ apt_install('python-jinja2', fatal=True)
2359+ from jinja2 import FileSystemLoader, Environment, exceptions
2360+
2361+ if templates_dir is None:
2362+ templates_dir = os.path.join(hookenv.charm_dir(), 'templates')
2363+ loader = Environment(loader=FileSystemLoader(templates_dir))
2364+ try:
2365+ source = source
2366+ template = loader.get_template(source)
2367+ except exceptions.TemplateNotFound as e:
2368+ hookenv.log('Could not load template %s from %s.' %
2369+ (source, templates_dir),
2370+ level=hookenv.ERROR)
2371+ raise e
2372+ content = template.render(context)
2373+ host.mkdir(os.path.dirname(target))
2374+ host.write_file(target, content, owner, group, perms)
2375
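`render()` above depends on Jinja2, but the render-a-template-to-a-target-file flow it implements can be sketched with the stdlib `string.Template`. The `host.mkdir`/`host.write_file` owner, group, and permission handling is deliberately omitted; the template text and target path below are hypothetical:

```python
import os
import tempfile
from string import Template

# Stdlib sketch of the templating.render() flow: substitute context into a
# template, ensure the target directory exists, write the result.

def render_simple(source_text, target, context):
    content = Template(source_text).substitute(context)
    os.makedirs(os.path.dirname(target), exist_ok=True)
    with open(target, 'w') as fp:
        fp.write(content)
    return content

target = os.path.join(tempfile.mkdtemp(), 'etc', 'my.cnf')
out = render_simple('bind-address = $host\n', target, {'host': '0.0.0.0'})
```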
2376=== added directory 'lib/charmhelpers/fetch'
2377=== added file 'lib/charmhelpers/fetch/__init__.py'
2378--- lib/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000
2379+++ lib/charmhelpers/fetch/__init__.py 2014-12-15 18:28:33 +0000
2380@@ -0,0 +1,416 @@
2381+import importlib
2382+from tempfile import NamedTemporaryFile
2383+import time
2384+from yaml import safe_load
2385+from charmhelpers.core.host import (
2386+ lsb_release
2387+)
2388+import subprocess
2389+from charmhelpers.core.hookenv import (
2390+ config,
2391+ log,
2392+)
2393+import os
2394+
2395+import six
2396+if six.PY3:
2397+ from urllib.parse import urlparse, urlunparse
2398+else:
2399+ from urlparse import urlparse, urlunparse
2400+
2401+
2402+CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
2403+deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
2404+"""
2405+PROPOSED_POCKET = """# Proposed
2406+deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
2407+"""
2408+CLOUD_ARCHIVE_POCKETS = {
2409+ # Folsom
2410+ 'folsom': 'precise-updates/folsom',
2411+ 'precise-folsom': 'precise-updates/folsom',
2412+ 'precise-folsom/updates': 'precise-updates/folsom',
2413+ 'precise-updates/folsom': 'precise-updates/folsom',
2414+ 'folsom/proposed': 'precise-proposed/folsom',
2415+ 'precise-folsom/proposed': 'precise-proposed/folsom',
2416+ 'precise-proposed/folsom': 'precise-proposed/folsom',
2417+ # Grizzly
2418+ 'grizzly': 'precise-updates/grizzly',
2419+ 'precise-grizzly': 'precise-updates/grizzly',
2420+ 'precise-grizzly/updates': 'precise-updates/grizzly',
2421+ 'precise-updates/grizzly': 'precise-updates/grizzly',
2422+ 'grizzly/proposed': 'precise-proposed/grizzly',
2423+ 'precise-grizzly/proposed': 'precise-proposed/grizzly',
2424+ 'precise-proposed/grizzly': 'precise-proposed/grizzly',
2425+ # Havana
2426+ 'havana': 'precise-updates/havana',
2427+ 'precise-havana': 'precise-updates/havana',
2428+ 'precise-havana/updates': 'precise-updates/havana',
2429+ 'precise-updates/havana': 'precise-updates/havana',
2430+ 'havana/proposed': 'precise-proposed/havana',
2431+ 'precise-havana/proposed': 'precise-proposed/havana',
2432+ 'precise-proposed/havana': 'precise-proposed/havana',
2433+ # Icehouse
2434+ 'icehouse': 'precise-updates/icehouse',
2435+ 'precise-icehouse': 'precise-updates/icehouse',
2436+ 'precise-icehouse/updates': 'precise-updates/icehouse',
2437+ 'precise-updates/icehouse': 'precise-updates/icehouse',
2438+ 'icehouse/proposed': 'precise-proposed/icehouse',
2439+ 'precise-icehouse/proposed': 'precise-proposed/icehouse',
2440+ 'precise-proposed/icehouse': 'precise-proposed/icehouse',
2441+ # Juno
2442+ 'juno': 'trusty-updates/juno',
2443+ 'trusty-juno': 'trusty-updates/juno',
2444+ 'trusty-juno/updates': 'trusty-updates/juno',
2445+ 'trusty-updates/juno': 'trusty-updates/juno',
2446+ 'juno/proposed': 'trusty-proposed/juno',
2447+ 'juno/proposed': 'trusty-proposed/juno',
2448+ 'trusty-juno/proposed': 'trusty-proposed/juno',
2449+ 'trusty-proposed/juno': 'trusty-proposed/juno',
2450+}
2451+
2452+# The order of this list is very important. Handlers should be listed in from
2453+# least- to most-specific URL matching.
2454+FETCH_HANDLERS = (
2455+ 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
2456+ 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
2457+ 'charmhelpers.fetch.giturl.GitUrlFetchHandler',
2458+)
2459+
2460+APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
2461+APT_NO_LOCK_RETRY_DELAY = 10 # Wait 10 seconds between apt lock checks.
2462+APT_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times.
2463+
2464+
2465+class SourceConfigError(Exception):
2466+ pass
2467+
2468+
2469+class UnhandledSource(Exception):
2470+ pass
2471+
2472+
2473+class AptLockError(Exception):
2474+ pass
2475+
2476+
2477+class BaseFetchHandler(object):
2478+
2479+ """Base class for FetchHandler implementations in fetch plugins"""
2480+
2481+ def can_handle(self, source):
2482+ """Returns True if the source can be handled. Otherwise returns
2483+ a string explaining why it cannot"""
2484+ return "Wrong source type"
2485+
2486+ def install(self, source):
2487+ """Try to download and unpack the source. Return the path to the
2488+ unpacked files or raise UnhandledSource."""
2489+ raise UnhandledSource("Wrong source type {}".format(source))
2490+
2491+ def parse_url(self, url):
2492+ return urlparse(url)
2493+
2494+ def base_url(self, url):
2495+ """Return url without querystring or fragment"""
2496+ parts = list(self.parse_url(url))
2497+ parts[4:] = ['' for i in parts[4:]]
2498+ return urlunparse(parts)
2499+
2500+
2501+def filter_installed_packages(packages):
2502+ """Returns a list of packages that require installation"""
2503+ cache = apt_cache()
2504+ _pkgs = []
2505+ for package in packages:
2506+ try:
2507+ p = cache[package]
2508+ p.current_ver or _pkgs.append(package)
2509+ except KeyError:
2510+ log('Package {} has no installation candidate.'.format(package),
2511+ level='WARNING')
2512+ _pkgs.append(package)
2513+ return _pkgs
2514+
2515+
2516+def apt_cache(in_memory=True):
2517+ """Build and return an apt cache"""
2518+ import apt_pkg
2519+ apt_pkg.init()
2520+ if in_memory:
2521+ apt_pkg.config.set("Dir::Cache::pkgcache", "")
2522+ apt_pkg.config.set("Dir::Cache::srcpkgcache", "")
2523+ return apt_pkg.Cache()
2524+
2525+
2526+def apt_install(packages, options=None, fatal=False):
2527+ """Install one or more packages"""
2528+ if options is None:
2529+ options = ['--option=Dpkg::Options::=--force-confold']
2530+
2531+ cmd = ['apt-get', '--assume-yes']
2532+ cmd.extend(options)
2533+ cmd.append('install')
2534+ if isinstance(packages, six.string_types):
2535+ cmd.append(packages)
2536+ else:
2537+ cmd.extend(packages)
2538+ log("Installing {} with options: {}".format(packages,
2539+ options))
2540+ _run_apt_command(cmd, fatal)
2541+
2542+
2543+def apt_upgrade(options=None, fatal=False, dist=False):
2544+ """Upgrade all packages"""
2545+ if options is None:
2546+ options = ['--option=Dpkg::Options::=--force-confold']
2547+
2548+ cmd = ['apt-get', '--assume-yes']
2549+ cmd.extend(options)
2550+ if dist:
2551+ cmd.append('dist-upgrade')
2552+ else:
2553+ cmd.append('upgrade')
2554+ log("Upgrading with options: {}".format(options))
2555+ _run_apt_command(cmd, fatal)
2556+
2557+
2558+def apt_update(fatal=False):
2559+ """Update local apt cache"""
2560+ cmd = ['apt-get', 'update']
2561+ _run_apt_command(cmd, fatal)
2562+
2563+
2564+def apt_purge(packages, fatal=False):
2565+ """Purge one or more packages"""
2566+ cmd = ['apt-get', '--assume-yes', 'purge']
2567+ if isinstance(packages, six.string_types):
2568+ cmd.append(packages)
2569+ else:
2570+ cmd.extend(packages)
2571+ log("Purging {}".format(packages))
2572+ _run_apt_command(cmd, fatal)
2573+
2574+
2575+def apt_hold(packages, fatal=False):
2576+ """Hold one or more packages"""
2577+ cmd = ['apt-mark', 'hold']
2578+ if isinstance(packages, six.string_types):
2579+ cmd.append(packages)
2580+ else:
2581+ cmd.extend(packages)
2582+ log("Holding {}".format(packages))
2583+
2584+ if fatal:
2585+ subprocess.check_call(cmd)
2586+ else:
2587+ subprocess.call(cmd)
2588+
2589+
2590+def add_source(source, key=None):
2591+ """Add a package source to this system.
2592+
2593+ @param source: a URL or sources.list entry, as supported by
2594+ add-apt-repository(1). Examples::
2595+
2596+ ppa:charmers/example
2597+ deb https://stub:key@private.example.com/ubuntu trusty main
2598+
2599+ In addition:
2600+ 'proposed:' may be used to enable the standard 'proposed'
2601+ pocket for the release.
2602+ 'cloud:' may be used to activate official cloud archive pockets,
2603+ such as 'cloud:icehouse'
2604+ 'distro' may be used as a noop
2605+
2606+ @param key: A key to be added to the system's APT keyring and used
2607+ to verify the signatures on packages. Ideally, this should be an
2608+ ASCII format GPG public key including the block headers. A GPG key
2609+ id may also be used, but be aware that only insecure protocols are
2610+ available to retrieve the actual public key from a public keyserver
2611+ placing your Juju environment at risk. ppa and cloud archive keys
2612+ are securely added automatically, so should not be provided.
2613+ """
2614+ if source is None:
2615+ log('Source is not present. Skipping')
2616+ return
2617+
2618+ if (source.startswith('ppa:') or
2619+ source.startswith('http') or
2620+ source.startswith('deb ') or
2621+ source.startswith('cloud-archive:')):
2622+ subprocess.check_call(['add-apt-repository', '--yes', source])
2623+ elif source.startswith('cloud:'):
2624+ apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
2625+ fatal=True)
2626+ pocket = source.split(':')[-1]
2627+ if pocket not in CLOUD_ARCHIVE_POCKETS:
2628+ raise SourceConfigError(
2629+ 'Unsupported cloud: source option %s' %
2630+ pocket)
2631+ actual_pocket = CLOUD_ARCHIVE_POCKETS[pocket]
2632+ with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
2633+ apt.write(CLOUD_ARCHIVE.format(actual_pocket))
2634+ elif source == 'proposed':
2635+ release = lsb_release()['DISTRIB_CODENAME']
2636+ with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
2637+ apt.write(PROPOSED_POCKET.format(release))
2638+ elif source == 'distro':
2639+ pass
2640+ else:
2641+ log("Unknown source: {!r}".format(source))
2642+
2643+ if key:
2644+ if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
2645+ with NamedTemporaryFile('w+') as key_file:
2646+ key_file.write(key)
2647+ key_file.flush()
2648+ key_file.seek(0)
2649+ subprocess.check_call(['apt-key', 'add', '-'], stdin=key_file)
2650+ else:
2651+ # Note that hkp: is in no way a secure protocol. Using a
2652+ # GPG key id is pointless from a security POV unless you
2653+ # absolutely trust your network and DNS.
2654+ subprocess.check_call(['apt-key', 'adv', '--keyserver',
2655+ 'hkp://keyserver.ubuntu.com:80', '--recv',
2656+ key])
2657+
2658+
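The branching in `add_source` above is a prefix dispatch over the source string. Reduced to a pure classifier (so it can run without touching apt), with category names of our own choosing:

```python
# The dispatch in add_source(), as a side-effect-free classifier. The
# returned category names are ours, not charmhelpers identifiers.

def classify_source(source):
    if source is None:
        return 'skip'
    if source.startswith(('ppa:', 'http', 'deb ', 'cloud-archive:')):
        return 'add-apt-repository'
    if source.startswith('cloud:'):
        return 'cloud-archive'
    if source == 'proposed':
        return 'proposed-pocket'
    if source == 'distro':
        return 'noop'
    return 'unknown'
```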
2659+def configure_sources(update=False,
2660+ sources_var='install_sources',
2661+ keys_var='install_keys'):
2662+ """
2663+ Configure multiple sources from charm configuration.
2664+
2665+ The lists are encoded as yaml fragments in the configuration.
2666+ The fragment needs to be included as a string. Sources and their
2667+ corresponding keys are of the types supported by add_source().
2668+
2669+ Example config:
2670+ install_sources: |
2671+ - "ppa:foo"
2672+ - "http://example.com/repo precise main"
2673+ install_keys: |
2674+ - null
2675+ - "a1b2c3d4"
2676+
2677+ Note that 'null' (a.k.a. None) should not be quoted.
2678+ """
2679+ sources = safe_load((config(sources_var) or '').strip()) or []
2680+ keys = safe_load((config(keys_var) or '').strip()) or None
2681+
2682+ if isinstance(sources, six.string_types):
2683+ sources = [sources]
2684+
2685+ if keys is None:
2686+ for source in sources:
2687+ add_source(source, None)
2688+ else:
2689+ if isinstance(keys, six.string_types):
2690+ keys = [keys]
2691+
2692+ if len(sources) != len(keys):
2693+ raise SourceConfigError(
2694+ 'Install sources and keys lists are different lengths')
2695+ for source, key in zip(sources, keys):
2696+ add_source(source, key)
2697+ if update:
2698+ apt_update(fatal=True)
2699+
2700+
2701+def install_remote(source, *args, **kwargs):
2702+ """
2703+ Install a file tree from a remote source
2704+
2705+ The specified source should be a url of the form:
2706+ scheme://[host]/path[#[option=value][&...]]
2707+
2708+ Schemes supported are based on this module's submodules.
2709+ Options supported are submodule-specific.
2710+ Additional arguments are passed through to the submodule.
2711+
2712+ For example::
2713+
2714+ dest = install_remote('http://example.com/archive.tgz',
2715+ checksum='deadbeef',
2716+ hash_type='sha1')
2717+
2718+ This will download `archive.tgz`, validate it using SHA1 and, if
2719+ the file is ok, extract it and return the directory in which it
2720+ was extracted. If the checksum fails, it will raise
2721+ :class:`charmhelpers.core.host.ChecksumError`.
2722+ """
2723+ # We ONLY check for True here because can_handle may return a string
2724+ # explaining why it can't handle a given source.
2725+ handlers = [h for h in plugins() if h.can_handle(source) is True]
2726+ installed_to = None
2727+ for handler in handlers:
2728+ try:
2729+ installed_to = handler.install(source, *args, **kwargs)
2730+ except UnhandledSource:
2731+ pass
2732+ if not installed_to:
2733+ raise UnhandledSource("No handler found for source {}".format(source))
2734+ return installed_to
2735+
2736+
2737+def install_from_config(config_var_name):
2738+ charm_config = config()
2739+ source = charm_config[config_var_name]
2740+ return install_remote(source)
2741+
2742+
2743+def plugins(fetch_handlers=None):
2744+ if not fetch_handlers:
2745+ fetch_handlers = FETCH_HANDLERS
2746+ plugin_list = []
2747+ for handler_name in fetch_handlers:
2748+ package, classname = handler_name.rsplit('.', 1)
2749+ try:
2750+ handler_class = getattr(
2751+ importlib.import_module(package),
2752+ classname)
2753+ plugin_list.append(handler_class())
2754+ except (ImportError, AttributeError):
2755+ # Skip missing plugins so that they can be omitted from
2756+ # installation if desired
2757+ log("FetchHandler {} not found, skipping plugin".format(
2758+ handler_name))
2759+ return plugin_list
2760+
2761+
2762+def _run_apt_command(cmd, fatal=False):
2763+ """
2764+ Run an APT command, checking output and retrying if the fatal flag is set
2765+ to True.
2766+
2767+ :param cmd: list: The apt command to run.
2768+ :param fatal: bool: Whether the command's output should be checked and
2769+ retried.
2770+ """
2771+ env = os.environ.copy()
2772+
2773+ if 'DEBIAN_FRONTEND' not in env:
2774+ env['DEBIAN_FRONTEND'] = 'noninteractive'
2775+
2776+ if fatal:
2777+ retry_count = 0
2778+ result = None
2779+
2780+ # If the command is considered "fatal", we need to retry if the apt
2781+ # lock was not acquired.
2782+
2783+ while result is None or result == APT_NO_LOCK:
2784+ try:
2785+ result = subprocess.check_call(cmd, env=env)
2786+ except subprocess.CalledProcessError as e:
2787+ retry_count = retry_count + 1
2788+ if retry_count > APT_NO_LOCK_RETRY_COUNT:
2789+ raise
2790+ result = e.returncode
2791+ log("Couldn't acquire DPKG lock. Will retry in {} seconds."
2792+ "".format(APT_NO_LOCK_RETRY_DELAY))
2793+ time.sleep(APT_NO_LOCK_RETRY_DELAY)
2794+
2795+ else:
2796+ subprocess.call(cmd, env=env)
2797
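The lock-retry loop in `_run_apt_command` above can be sketched on its own: when the command is fatal, keep retrying while it exits with `APT_NO_LOCK` (100), up to a fixed retry count. The sketch drops the `time.sleep` delay and takes a stub callable returning an exit code in place of a real `apt-get` invocation:

```python
# Sketch of _run_apt_command's fatal path: retry while the exit code is
# APT_NO_LOCK (100), give up after APT_NO_LOCK_RETRY_COUNT attempts.
# run_cmd is a stub returning an exit code; the sleep between retries
# is omitted.

APT_NO_LOCK = 100
APT_NO_LOCK_RETRY_COUNT = 30

def run_with_lock_retry(run_cmd):
    retry_count = 0
    result = None
    while result is None or result == APT_NO_LOCK:
        result = run_cmd()
        if result == APT_NO_LOCK:
            retry_count += 1
            if retry_count > APT_NO_LOCK_RETRY_COUNT:
                raise RuntimeError("could not acquire dpkg lock")
    return result

attempts = {'n': 0}

def flaky():
    # Simulate a held dpkg lock that frees on the third attempt.
    attempts['n'] += 1
    return APT_NO_LOCK if attempts['n'] < 3 else 0

rc = run_with_lock_retry(flaky)
```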
2798=== added file 'lib/charmhelpers/fetch/archiveurl.py'
2799--- lib/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000
2800+++ lib/charmhelpers/fetch/archiveurl.py 2014-12-15 18:28:33 +0000
2801@@ -0,0 +1,145 @@
2802+import os
2803+import hashlib
2804+import re
2805+
2806+import six
2807+if six.PY3:
2808+ from urllib.request import (
2809+ build_opener, install_opener, urlopen, urlretrieve,
2810+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
2811+ )
2812+ from urllib.parse import urlparse, urlunparse, parse_qs
2813+ from urllib.error import URLError
2814+else:
2815+ from urllib import urlretrieve
2816+ from urllib2 import (
2817+ build_opener, install_opener, urlopen,
2818+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
2819+ URLError
2820+ )
2821+ from urlparse import urlparse, urlunparse, parse_qs
2822+
2823+from charmhelpers.fetch import (
2824+ BaseFetchHandler,
2825+ UnhandledSource
2826+)
2827+from charmhelpers.payload.archive import (
2828+ get_archive_handler,
2829+ extract,
2830+)
2831+from charmhelpers.core.host import mkdir, check_hash
2832+
2833+
2834+def splituser(host):
2835+ '''urllib.splituser(), but six's support of this seems broken'''
2836+ _userprog = re.compile('^(.*)@(.*)$')
2837+ match = _userprog.match(host)
2838+ if match:
2839+ return match.group(1, 2)
2840+ return None, host
2841+
2842+
2843+def splitpasswd(user):
2844+ '''urllib.splitpasswd(), but six's support of this is missing'''
2845+ _passwdprog = re.compile('^([^:]*):(.*)$', re.S)
2846+ match = _passwdprog.match(user)
2847+ if match:
2848+ return match.group(1, 2)
2849+ return user, None
2850+
2851+
2852+class ArchiveUrlFetchHandler(BaseFetchHandler):
2853+ """
2854+ Handler to download archive files from arbitrary URLs.
2855+
2856+ Can fetch from http, https, ftp, and file URLs.
2857+
2858+ Can install either tarballs (.tar, .tgz, .tbz2, etc) or zip files.
2859+
2860+ Installs the contents of the archive in $CHARM_DIR/fetched/.
2861+ """
2862+ def can_handle(self, source):
2863+ url_parts = self.parse_url(source)
2864+ if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
2865+ return "Wrong source type"
2866+ if get_archive_handler(self.base_url(source)):
2867+ return True
2868+ return False
2869+
2870+ def download(self, source, dest):
2871+ """
2872+ Download an archive file.
2873+
2874+ :param str source: URL pointing to an archive file.
2875+ :param str dest: Local path location to download archive file to.
2876+ """
2877+ # propagate all exceptions
2878+ # URLError, OSError, etc
2879+ proto, netloc, path, params, query, fragment = urlparse(source)
2880+ if proto in ('http', 'https'):
2881+ auth, barehost = splituser(netloc)
2882+ if auth is not None:
2883+ source = urlunparse((proto, barehost, path, params, query, fragment))
2884+ username, password = splitpasswd(auth)
2885+ passman = HTTPPasswordMgrWithDefaultRealm()
2886+ # Realm is set to None in add_password to force the username and password
2887+ # to be used whatever the realm
2888+ passman.add_password(None, source, username, password)
2889+ authhandler = HTTPBasicAuthHandler(passman)
2890+ opener = build_opener(authhandler)
2891+ install_opener(opener)
2892+ response = urlopen(source)
2893+ try:
2894+ with open(dest, 'w') as dest_file:
2895+ dest_file.write(response.read())
2896+ except Exception as e:
2897+ if os.path.isfile(dest):
2898+ os.unlink(dest)
2899+ raise e
2900+
2901+ # Mandatory file validation via SHA-1 or MD5 hashing.
2902+ def download_and_validate(self, url, hashsum, validate="sha1"):
2903+ tempfile, headers = urlretrieve(url)
2904+ check_hash(tempfile, hashsum, validate)
2905+ return tempfile
2906+
2907+ def install(self, source, dest=None, checksum=None, hash_type='sha1'):
2908+ """
2909+ Download and install an archive file, with optional checksum validation.
2910+
2911+ The checksum can also be given on the `source` URL's fragment.
2912+ For example::
2913+
2914+ handler.install('http://example.com/file.tgz#sha1=deadbeef')
2915+
2916+ :param str source: URL pointing to an archive file.
2917+ :param str dest: Local destination path to install to. If not given,
2918+ installs to `$CHARM_DIR/archives/archive_file_name`.
2919+ :param str checksum: If given, validate the archive file after download.
2920+ :param str hash_type: Algorithm used to generate `checksum`.
2921+ Can be any hash algorithm supported by :mod:`hashlib`,
2922+ such as md5, sha1, sha256, sha512, etc.
2923+
2924+ """
2925+ url_parts = self.parse_url(source)
2926+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
2927+ if not os.path.exists(dest_dir):
2928+ mkdir(dest_dir, perms=0o755)
2929+ dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
2930+ try:
2931+ self.download(source, dld_file)
2932+ except URLError as e:
2933+ raise UnhandledSource(e.reason)
2934+ except OSError as e:
2935+ raise UnhandledSource(e.strerror)
2936+ options = parse_qs(url_parts.fragment)
2937+ for key, value in options.items():
2938+ if not six.PY3:
2939+ algorithms = hashlib.algorithms
2940+ else:
2941+ algorithms = hashlib.algorithms_available
2942+ if key in algorithms:
2943+ check_hash(dld_file, value, key)
2944+ if checksum:
2945+ check_hash(dld_file, checksum, hash_type)
2946+ return extract(dld_file, dest)
2947
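`ArchiveUrlFetchHandler.install()` also honours a checksum embedded in the source URL's fragment (`#sha1=...`). A self-contained sketch of that convention using only the standard library — the `check_hash` stand-in below hashes raw bytes and is illustrative, not the `charmhelpers.core.host` implementation:

```python
import hashlib
from urllib.parse import urlparse, parse_qs

def fragment_checksums(url):
    """Extract hash checks from a URL fragment, e.g. '#sha1=deadbeef',
    keeping only keys that name a hashlib algorithm (as install() does)."""
    options = parse_qs(urlparse(url).fragment)
    return {key: value[0] for key, value in options.items()
            if key in hashlib.algorithms_available}

def check_hash(payload, hashsum, hash_type='sha1'):
    """Minimal stand-in for charmhelpers.core.host.check_hash over bytes."""
    actual = hashlib.new(hash_type, payload).hexdigest()
    if actual != hashsum:
        raise ValueError('hash mismatch: {} != {}'.format(actual, hashsum))

payload = b'example archive contents'
digest = hashlib.sha1(payload).hexdigest()
url = 'http://example.com/file.tgz#sha1=' + digest
for algo, expected in fragment_checksums(url).items():
    check_hash(payload, expected, algo)   # passes silently
```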
2948=== added file 'lib/charmhelpers/fetch/bzrurl.py'
2949--- lib/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000
2950+++ lib/charmhelpers/fetch/bzrurl.py 2014-12-15 18:28:33 +0000
2951@@ -0,0 +1,54 @@
2952+import os
2953+from charmhelpers.fetch import (
2954+ BaseFetchHandler,
2955+ UnhandledSource
2956+)
2957+from charmhelpers.core.host import mkdir
2958+
2959+import six
2960+if six.PY3:
2961+ raise ImportError('bzrlib does not support Python3')
2962+
2963+try:
2964+ from bzrlib.branch import Branch
2965+except ImportError:
2966+ from charmhelpers.fetch import apt_install
2967+ apt_install("python-bzrlib")
2968+ from bzrlib.branch import Branch
2969+
2970+
2971+class BzrUrlFetchHandler(BaseFetchHandler):
2972+ """Handler for bazaar branches via generic and lp URLs"""
2973+ def can_handle(self, source):
2974+ url_parts = self.parse_url(source)
2975+ if url_parts.scheme not in ('bzr+ssh', 'lp'):
2976+ return False
2977+ else:
2978+ return True
2979+
2980+ def branch(self, source, dest):
2981+ url_parts = self.parse_url(source)
2982+ # If we use lp:branchname scheme we need to load plugins
2983+ if not self.can_handle(source):
2984+ raise UnhandledSource("Cannot handle {}".format(source))
2985+ if url_parts.scheme == "lp":
2986+ from bzrlib.plugin import load_plugins
2987+ load_plugins()
2988+ try:
2989+ remote_branch = Branch.open(source)
2990+ remote_branch.bzrdir.sprout(dest).open_branch()
2991+ except Exception as e:
2992+ raise e
2993+
2994+ def install(self, source):
2995+ url_parts = self.parse_url(source)
2996+ branch_name = url_parts.path.strip("/").split("/")[-1]
2997+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
2998+ branch_name)
2999+ if not os.path.exists(dest_dir):
3000+ mkdir(dest_dir, perms=0o755)
3001+ try:
3002+ self.branch(source, dest_dir)
3003+ except OSError as e:
3004+ raise UnhandledSource(e.strerror)
3005+ return dest_dir
3006
3007=== added file 'lib/charmhelpers/fetch/giturl.py'
3008--- lib/charmhelpers/fetch/giturl.py 1970-01-01 00:00:00 +0000
3009+++ lib/charmhelpers/fetch/giturl.py 2014-12-15 18:28:33 +0000
3010@@ -0,0 +1,48 @@
3011+import os
3012+from charmhelpers.fetch import (
3013+ BaseFetchHandler,
3014+ UnhandledSource
3015+)
3016+from charmhelpers.core.host import mkdir
3017+
3018+import six
3019+if six.PY3:
3020+ raise ImportError('GitPython does not support Python 3')
3021+
3022+try:
3023+ from git import Repo
3024+except ImportError:
3025+ from charmhelpers.fetch import apt_install
3026+ apt_install("python-git")
3027+ from git import Repo
3028+
3029+
3030+class GitUrlFetchHandler(BaseFetchHandler):
3031+ """Handler for git branches via generic and github URLs"""
3032+ def can_handle(self, source):
3033+ url_parts = self.parse_url(source)
3034+ # TODO (mattyw) no support for ssh git@ yet
3035+ if url_parts.scheme not in ('http', 'https', 'git'):
3036+ return False
3037+ else:
3038+ return True
3039+
3040+ def clone(self, source, dest, branch):
3041+ if not self.can_handle(source):
3042+ raise UnhandledSource("Cannot handle {}".format(source))
3043+
3044+ repo = Repo.clone_from(source, dest)
3045+ repo.git.checkout(branch)
3046+
3047+ def install(self, source, branch="master"):
3048+ url_parts = self.parse_url(source)
3049+ branch_name = url_parts.path.strip("/").split("/")[-1]
3050+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
3051+ branch_name)
3052+ if not os.path.exists(dest_dir):
3053+ mkdir(dest_dir, perms=0o755)
3054+ try:
3055+ self.clone(source, dest_dir, branch)
3056+ except OSError as e:
3057+ raise UnhandledSource(e.strerror)
3058+ return dest_dir
3059
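Both VCS handlers above derive their destination directory from the last path segment of the source URL. A small sketch of that rule (`charm_dir` here is a stand-in for `os.environ['CHARM_DIR']`, which the real handlers read):

```python
import os
from urllib.parse import urlparse

def fetched_dest(source, charm_dir='/tmp/charm'):
    """Mirror how the git/bzr handlers pick their destination:
    $CHARM_DIR/fetched/<last path segment of the URL>."""
    branch_name = urlparse(source).path.strip('/').split('/')[-1]
    return os.path.join(charm_dir, 'fetched', branch_name)

print(fetched_dest('https://github.com/juju/charm-helpers'))
# → /tmp/charm/fetched/charm-helpers
```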
3060=== added directory 'lib/charmhelpers/payload'
3061=== added file 'lib/charmhelpers/payload/__init__.py'
3062--- lib/charmhelpers/payload/__init__.py 1970-01-01 00:00:00 +0000
3063+++ lib/charmhelpers/payload/__init__.py 2014-12-15 18:28:33 +0000
3064@@ -0,0 +1,1 @@
3065+"Tools for working with files injected into a charm just before deployment."
3066
3067=== added file 'lib/charmhelpers/payload/archive.py'
3068--- lib/charmhelpers/payload/archive.py 1970-01-01 00:00:00 +0000
3069+++ lib/charmhelpers/payload/archive.py 2014-12-15 18:28:33 +0000
3070@@ -0,0 +1,57 @@
3071+import os
3072+import tarfile
3073+import zipfile
3074+from charmhelpers.core import (
3075+ host,
3076+ hookenv,
3077+)
3078+
3079+
3080+class ArchiveError(Exception):
3081+ pass
3082+
3083+
3084+def get_archive_handler(archive_name):
3085+ if os.path.isfile(archive_name):
3086+ if tarfile.is_tarfile(archive_name):
3087+ return extract_tarfile
3088+ elif zipfile.is_zipfile(archive_name):
3089+ return extract_zipfile
3090+ else:
3091+ # look at the file name
3092+ for ext in ('.tar', '.tar.gz', '.tgz', '.tar.bz2', '.tbz2', '.tbz'):
3093+ if archive_name.endswith(ext):
3094+ return extract_tarfile
3095+ for ext in ('.zip', '.jar'):
3096+ if archive_name.endswith(ext):
3097+ return extract_zipfile
3098+
3099+
3100+def archive_dest_default(archive_name):
3101+ archive_file = os.path.basename(archive_name)
3102+ return os.path.join(hookenv.charm_dir(), "archives", archive_file)
3103+
3104+
3105+def extract(archive_name, destpath=None):
3106+ handler = get_archive_handler(archive_name)
3107+ if handler:
3108+ if not destpath:
3109+ destpath = archive_dest_default(archive_name)
3110+ if not os.path.isdir(destpath):
3111+ host.mkdir(destpath)
3112+ handler(archive_name, destpath)
3113+ return destpath
3114+ else:
3115+ raise ArchiveError("No handler for archive")
3116+
3117+
3118+def extract_tarfile(archive_name, destpath):
3119+ "Unpack a tar archive, optionally compressed"
3120+ archive = tarfile.open(archive_name)
3121+ archive.extractall(destpath)
3122+
3123+
3124+def extract_zipfile(archive_name, destpath):
3125+ "Unpack a zip file"
3126+ archive = zipfile.ZipFile(archive_name)
3127+ archive.extractall(destpath)
3128
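`get_archive_handler()` falls back to suffix matching when the file cannot be sniffed. A self-contained round trip showing the same name-based selection and what `extract_tarfile()` ultimately does (paths and names are made up for the demo):

```python
import os
import tarfile
import tempfile

# Name-based handler selection as in get_archive_handler's fallback branch.
TAR_EXTS = ('.tar', '.tar.gz', '.tgz', '.tar.bz2', '.tbz2', '.tbz')

def pick_handler(name):
    return 'tar' if name.endswith(TAR_EXTS) else None

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, 'payload.txt')
with open(src, 'w') as f:
    f.write('hello')

archive = os.path.join(workdir, 'payload.tgz')
with tarfile.open(archive, 'w:gz') as tar:
    tar.add(src, arcname='payload.txt')

dest = os.path.join(workdir, 'out')
assert pick_handler(archive) == 'tar'
os.makedirs(dest)
with tarfile.open(archive) as tar:
    tar.extractall(dest)   # what extract_tarfile() does
print(sorted(os.listdir(dest)))  # → ['payload.txt']
```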
3129=== added file 'lib/charmhelpers/payload/execd.py'
3130--- lib/charmhelpers/payload/execd.py 1970-01-01 00:00:00 +0000
3131+++ lib/charmhelpers/payload/execd.py 2014-12-15 18:28:33 +0000
3132@@ -0,0 +1,50 @@
3133+#!/usr/bin/env python
3134+
3135+import os
3136+import sys
3137+import subprocess
3138+from charmhelpers.core import hookenv
3139+
3140+
3141+def default_execd_dir():
3142+ return os.path.join(os.environ['CHARM_DIR'], 'exec.d')
3143+
3144+
3145+def execd_module_paths(execd_dir=None):
3146+ """Generate a list of full paths to modules within execd_dir."""
3147+ if not execd_dir:
3148+ execd_dir = default_execd_dir()
3149+
3150+ if not os.path.exists(execd_dir):
3151+ return
3152+
3153+ for subpath in os.listdir(execd_dir):
3154+ module = os.path.join(execd_dir, subpath)
3155+ if os.path.isdir(module):
3156+ yield module
3157+
3158+
3159+def execd_submodule_paths(command, execd_dir=None):
3160+ """Generate a list of full paths to the specified command within exec_dir.
3161+ """
3162+ for module_path in execd_module_paths(execd_dir):
3163+ path = os.path.join(module_path, command)
3164+ if os.access(path, os.X_OK) and os.path.isfile(path):
3165+ yield path
3166+
3167+
3168+def execd_run(command, execd_dir=None, die_on_error=False, stderr=None):
3169+ """Run command for each module within execd_dir which defines it."""
3170+ for submodule_path in execd_submodule_paths(command, execd_dir):
3171+ try:
3172+ subprocess.check_call(submodule_path, shell=True, stderr=stderr)
3173+ except subprocess.CalledProcessError as e:
3174+ hookenv.log("Error ({}) running {}. Output: {}".format(
3175+ e.returncode, e.cmd, e.output))
3176+ if die_on_error:
3177+ sys.exit(e.returncode)
3178+
3179+
3180+def execd_preinstall(execd_dir=None):
3181+ """Run charm-pre-install for each module within execd_dir."""
3182+ execd_run('charm-pre-install', execd_dir=execd_dir)
3183
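`execd_submodule_paths()` walks `$CHARM_DIR/exec.d` for per-module executables named after the command. A stand-alone sketch of the same discovery against a temporary tree (the module name `my-payload` is made up for the demo):

```python
import os
import stat
import tempfile

def submodule_paths(command, execd_dir):
    """Mirror execd_submodule_paths(): every subdirectory of execd_dir that
    contains an executable file named `command` yields that file's path."""
    if not os.path.exists(execd_dir):
        return
    for subpath in sorted(os.listdir(execd_dir)):
        path = os.path.join(execd_dir, subpath, command)
        if os.path.isfile(path) and os.access(path, os.X_OK):
            yield path

execd = tempfile.mkdtemp()
module = os.path.join(execd, 'my-payload')
os.makedirs(module)
script = os.path.join(module, 'charm-pre-install')
with open(script, 'w') as f:
    f.write('#!/bin/sh\ntrue\n')
os.chmod(script, os.stat(script).st_mode | stat.S_IXUSR)

print(list(submodule_paths('charm-pre-install', execd)))
```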
3184=== added file 'scripts/charm_helpers_sync.py'
3185--- scripts/charm_helpers_sync.py 1970-01-01 00:00:00 +0000
3186+++ scripts/charm_helpers_sync.py 2014-12-15 18:28:33 +0000
3187@@ -0,0 +1,223 @@
3188+#!/usr/bin/env python
3189+# Copyright 2013 Canonical Ltd.
3190+
3191+# Authors:
3192+# Adam Gandelman <adamg@ubuntu.com>
3193+
3194+import logging
3195+import optparse
3196+import os
3197+import subprocess
3198+import shutil
3199+import sys
3200+import tempfile
3201+import yaml
3202+
3203+from fnmatch import fnmatch
3204+
3205+CHARM_HELPERS_BRANCH = 'lp:charm-helpers'
3206+
3207+
3208+def parse_config(conf_file):
3209+ if not os.path.isfile(conf_file):
3210+ logging.error('Invalid config file: %s.' % conf_file)
3211+ return False
3212+ return yaml.load(open(conf_file).read())
3213+
3214+
3215+def clone_helpers(work_dir, branch):
3216+ dest = os.path.join(work_dir, 'charm-helpers')
3217+ logging.info('Checking out %s to %s.' % (branch, dest))
3218+ cmd = ['bzr', 'branch', branch, dest]
3219+ subprocess.check_call(cmd)
3220+ return dest
3221+
3222+
3223+def _module_path(module):
3224+ return os.path.join(*module.split('.'))
3225+
3226+
3227+def _src_path(src, module):
3228+ return os.path.join(src, 'charmhelpers', _module_path(module))
3229+
3230+
3231+def _dest_path(dest, module):
3232+ return os.path.join(dest, _module_path(module))
3233+
3234+
3235+def _is_pyfile(path):
3236+ return os.path.isfile(path + '.py')
3237+
3238+
3239+def ensure_init(path):
3240+ '''
3241+ ensure directories leading up to path are importable, omitting the
3242+ parent directory, e.g. path='hooks/helpers/foo':
3243+ hooks/
3244+ hooks/helpers/__init__.py
3245+ hooks/helpers/foo/__init__.py
3246+ '''
3247+ for d, dirs, files in os.walk(os.path.join(*path.split('/')[:2])):
3248+ _i = os.path.join(d, '__init__.py')
3249+ if not os.path.exists(_i):
3250+ logging.info('Adding missing __init__.py: %s' % _i)
3251+ open(_i, 'wb').close()
3252+
3253+
3254+def sync_pyfile(src, dest):
3255+ src = src + '.py'
3256+ src_dir = os.path.dirname(src)
3257+ logging.info('Syncing pyfile: %s -> %s.' % (src, dest))
3258+ if not os.path.exists(dest):
3259+ os.makedirs(dest)
3260+ shutil.copy(src, dest)
3261+ if os.path.isfile(os.path.join(src_dir, '__init__.py')):
3262+ shutil.copy(os.path.join(src_dir, '__init__.py'),
3263+ dest)
3264+ ensure_init(dest)
3265+
3266+
3267+def get_filter(opts=None):
3268+ opts = opts or []
3269+ if 'inc=*' in opts:
3270+ # do not filter any files, include everything
3271+ return None
3272+
3273+ def _filter(dir, ls):
3274+ incs = [opt.split('=').pop() for opt in opts if 'inc=' in opt]
3275+ _filter = []
3276+ for f in ls:
3277+ _f = os.path.join(dir, f)
3278+
3279+ if not os.path.isdir(_f) and not _f.endswith('.py') and incs:
3280+ if True not in [fnmatch(_f, inc) for inc in incs]:
3281+ logging.debug('Not syncing %s, does not match include '
3282+ 'filters (%s)' % (_f, incs))
3283+ _filter.append(f)
3284+ else:
3285+ logging.debug('Including file, which matches include '
3286+ 'filters (%s): %s' % (incs, _f))
3287+ elif (os.path.isfile(_f) and not _f.endswith('.py')):
3288+ logging.debug('Not syncing file: %s' % f)
3289+ _filter.append(f)
3290+ elif (os.path.isdir(_f) and not
3291+ os.path.isfile(os.path.join(_f, '__init__.py'))):
3292+ logging.debug('Not syncing directory: %s' % f)
3293+ _filter.append(f)
3294+ return _filter
3295+ return _filter
3296+
3297+
3298+def sync_directory(src, dest, opts=None):
3299+ if os.path.exists(dest):
3300+ logging.debug('Removing existing directory: %s' % dest)
3301+ shutil.rmtree(dest)
3302+ logging.info('Syncing directory: %s -> %s.' % (src, dest))
3303+
3304+ shutil.copytree(src, dest, ignore=get_filter(opts))
3305+ ensure_init(dest)
3306+
3307+
3308+def sync(src, dest, module, opts=None):
3309+ if os.path.isdir(_src_path(src, module)):
3310+ sync_directory(_src_path(src, module), _dest_path(dest, module), opts)
3311+ elif _is_pyfile(_src_path(src, module)):
3312+ sync_pyfile(_src_path(src, module),
3313+ os.path.dirname(_dest_path(dest, module)))
3314+ else:
3315+ logging.warn('Could not sync: %s. Neither a pyfile nor a directory, '
3316+ 'does it even exist?' % module)
3317+
3318+
3319+def parse_sync_options(options):
3320+ if not options:
3321+ return []
3322+ return options.split(',')
3323+
3324+
3325+def extract_options(inc, global_options=None):
3326+ global_options = global_options or []
3327+ if global_options and isinstance(global_options, basestring):
3328+ global_options = [global_options]
3329+ if '|' not in inc:
3330+ return (inc, global_options)
3331+ inc, opts = inc.split('|')
3332+ return (inc, parse_sync_options(opts) + global_options)
3333+
3334+
3335+def sync_helpers(include, src, dest, options=None):
3336+ if not os.path.isdir(dest):
3337+ os.mkdir(dest)
3338+
3339+ global_options = parse_sync_options(options)
3340+
3341+ for inc in include:
3342+ if isinstance(inc, str):
3343+ inc, opts = extract_options(inc, global_options)
3344+ sync(src, dest, inc, opts)
3345+ elif isinstance(inc, dict):
3346+ # could also do nested dicts here.
3347+ for k, v in inc.iteritems():
3348+ if isinstance(v, list):
3349+ for m in v:
3350+ inc, opts = extract_options(m, global_options)
3351+ sync(src, dest, '%s.%s' % (k, inc), opts)
3352+
3353+if __name__ == '__main__':
3354+ parser = optparse.OptionParser()
3355+ parser.add_option('-c', '--config', action='store', dest='config',
3356+ default=None, help='helper config file')
3357+ parser.add_option('-D', '--debug', action='store_true', dest='debug',
3358+ default=False, help='debug')
3359+ parser.add_option('-b', '--branch', action='store', dest='branch',
3360+ help='charm-helpers bzr branch (overrides config)')
3361+ parser.add_option('-d', '--destination', action='store', dest='dest_dir',
3362+ help='sync destination dir (overrides config)')
3363+ (opts, args) = parser.parse_args()
3364+
3365+ if opts.debug:
3366+ logging.basicConfig(level=logging.DEBUG)
3367+ else:
3368+ logging.basicConfig(level=logging.INFO)
3369+
3370+ if opts.config:
3371+ logging.info('Loading charm helper config from %s.' % opts.config)
3372+ config = parse_config(opts.config)
3373+ if not config:
3374+ logging.error('Could not parse config from %s.' % opts.config)
3375+ sys.exit(1)
3376+ else:
3377+ config = {}
3378+
3379+ if 'branch' not in config:
3380+ config['branch'] = CHARM_HELPERS_BRANCH
3381+ if opts.branch:
3382+ config['branch'] = opts.branch
3383+ if opts.dest_dir:
3384+ config['destination'] = opts.dest_dir
3385+
3386+ if 'destination' not in config:
3387+ logging.error('No destination dir. specified as option or config.')
3388+ sys.exit(1)
3389+
3390+ if 'include' not in config:
3391+ if not args:
3392+ logging.error('No modules to sync specified as option or config.')
3393+ sys.exit(1)
3394+ config['include'] = []
3395+ [config['include'].append(a) for a in args]
3396+
3397+ sync_options = None
3398+ if 'options' in config:
3399+ sync_options = config['options']
3400+ tmpd = tempfile.mkdtemp()
3401+ try:
3402+ checkout = clone_helpers(tmpd, config['branch'])
3403+ sync_helpers(config['include'], checkout, config['destination'],
3404+ options=sync_options)
3405+ except Exception, e:
3406+ logging.error("Could not sync: %s" % e)
3407+ raise e
3408+ finally:
3409+ logging.debug('Cleaning up %s' % tmpd)
3410+ shutil.rmtree(tmpd)
3411
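The sync script reads includes such as `fetch|inc=*` from charm-helpers.yaml; the module name and its options are split on `|` and merged with any global options. That parsing can be sketched stand-alone (a Python 3 mirror of `extract_options` above; the include strings are examples, not this charm's config):

```python
def parse_sync_options(options):
    """Comma-separated option string -> list, as in charm_helpers_sync."""
    return options.split(',') if options else []

def extract_options(inc, global_options=None):
    """Split an include like 'fetch|inc=*' into (module, options),
    appending any global options."""
    global_options = global_options or []
    if '|' not in inc:
        return inc, global_options
    inc, opts = inc.split('|')
    return inc, parse_sync_options(opts) + global_options

print(extract_options('core.host'))
# → ('core.host', [])
print(extract_options('fetch|inc=*,inc=*.sh', ['inc=*.txt']))
# → ('fetch', ['inc=*', 'inc=*.sh', 'inc=*.txt'])
```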
3412=== added file 'tests/10-deploy-and-upgrade'
3413--- tests/10-deploy-and-upgrade 1970-01-01 00:00:00 +0000
3414+++ tests/10-deploy-and-upgrade 2014-12-15 18:28:33 +0000
3415@@ -0,0 +1,78 @@
3416+#!/usr/bin/env python3
3417+
3418+import amulet
3419+import requests
3420+import unittest
3421+
3422+
3423+class TestDeployment(unittest.TestCase):
3424+ @classmethod
3425+ def setUpClass(cls):
3426+ cls.deployment = amulet.Deployment(series='trusty')
3427+
3428+ mw_config = { 'name': 'MariaDB Test'}
3429+
3430+ cls.deployment.add('mariadb')
3431+ cls.deployment.add('mediawiki')
3432+ cls.deployment.configure('mediawiki', mw_config)
3433+ cls.deployment.relate('mediawiki:db', 'mariadb:db')
3434+ cls.deployment.expose('mediawiki')
3435+
3436+
3437+ try:
3438+ cls.deployment.setup(timeout=1200)
3439+ cls.deployment.sentry.wait()
3440+ except amulet.helpers.TimeoutError:
3441+ amulet.raise_status(amulet.SKIP, msg="Environment wasn't stood up in time")
3442+ except:
3443+ raise
3444+
3445+ '''
3446+ test_credentials: Verify the credentials being sent over the wire were valid
3447+ when attempting to verify the MariaDB service status
3448+ '''
3449+ def test_credentials(self):
3450+ dbunit = self.deployment.sentry.unit['mariadb/0']
3451+ db_relation = dbunit.relation('db', 'mediawiki:db')
3452+ db_ip = db_relation['host']
3453+ db_user = db_relation['user']
3454+ db_pass = db_relation['password']
3455+ ctmp = 'mysqladmin status -h {0} -u {1} --password={2}'
3456+ cmd = ctmp.format(db_ip, db_user, db_pass)
3457+
3458+ output, code = dbunit.run(cmd)
3459+ if code != 0:
3460+ message = 'Unable to get status of the mariadb server at %s' % db_ip
3461+ amulet.raise_status(amulet.FAIL, msg=message)
3462+
3463+ '''
3464+ test_wiki: Verify Mediawiki setup was successful with MariaDB. No page will
3465+ be available if setup did not complete
3466+ '''
3467+ def test_wiki(self):
3468+ wikiunit = self.deployment.sentry.unit['mediawiki/0']
3469+ wiki_url = "http://{}".format(wikiunit.info['public-address'])
3470+ response = requests.get(wiki_url)
3471+ response.raise_for_status()
3472+
3473+
3474+ def test_enterprise_eval(self):
3475+ self.deployment.configure('mariadb', {'enterprise-eula': True,
3476+ 'source': 'deb https://charlesbutler:foobarbaz@code.mariadb.com/mariadb-enterprise/10.0/repo/ubuntu trusty main'})
3477+
3478+ # Ensure the bintar install directory is gone
3479+ dbunit = self.deployment.sentry.unit['mariadb/0']
3480+
3481+ try:
3482+ dbunit.directory_stat('/usr/local/mysql')
3483+ amulet.raise_status(amulet.SKIP, 'bintar directory found, uncertain results ahead')
3484+ except:
3485+ # this is what we want to happen
3486+ pass
3487+
3488+ # re-run the test after in-place upgrade
3489+ self.test_credentials()
3490+
3491+
3492+if __name__ == '__main__':
3493+ unittest.main()
3494
3495=== removed file 'tests/10-deploy-test.py'
3496--- tests/10-deploy-test.py 2014-11-12 23:12:00 +0000
3497+++ tests/10-deploy-test.py 1970-01-01 00:00:00 +0000
3498@@ -1,90 +0,0 @@
3499-#!/usr/bin/python3
3500-
3501-# This amulet code is to test the mariadb charm.
3502-
3503-import amulet
3504-import requests
3505-
3506-# The number of seconds to wait for Juju to set up the environment.
3507-seconds = 1200
3508-
3509-# The mediawiki configuration to test.
3510-mediawiki_configuration = {
3511- 'name': 'MariaDB test'
3512-}
3513-
3514-d = amulet.Deployment()
3515-
3516-# Add the mediawiki charm to the deployment.
3517-d.add('mediawiki')
3518-# Add the MariaDB charm to the deployment.
3519-d.add('mariadb', charm='lp:~dbart/charms/trusty/mariadb/trunk')
3520-# Configure the mediawiki charm.
3521-d.configure('mediawiki', mediawiki_configuration)
3522-# Relate the mediawiki and mariadb charms.
3523-d.relate('mediawiki:db', 'mariadb:db')
3524-# Expose the open ports on mediawiki.
3525-d.expose('mediawiki')
3526-
3527-# Deploy the environment and wait for it to setup.
3528-try:
3529- d.setup(timeout=seconds)
3530- d.sentry.wait(seconds)
3531-except amulet.helpers.TimeoutError:
3532- message = 'The environment did not setup in %d seconds.' % seconds
3533- # The SKIP status enables skip or fail the test based on configuration.
3534- amulet.raise_status(amulet.SKIP, msg=message)
3535-except:
3536- raise
3537-
3538-# Get the sentry unit for mariadb.
3539-mariadb_unit = d.sentry.unit['mariadb/0']
3540-
3541-# Get the sentry unit for mediawiki.
3542-mediawiki_unit = d.sentry.unit['mediawiki/0']
3543-
3544-# Get the public address for the system running the mediawiki charm.
3545-mediawiki_address = mediawiki_unit.info['public-address']
3546-
3547-###############################################################################
3548-## Verify MariaDB
3549-###############################################################################
3550-# Verify that mediawiki was related to mariadb
3551-mediawiki_relation = mediawiki_unit.relation('db', 'mariadb:db')
3552-print('mediawiki relation to mariadb')
3553-for key, value in mediawiki_relation.items():
3554- print(key, value)
3555-# Verify that mariadb was related to mediawiki
3556-mariadb_relation = mariadb_unit.relation('db', 'mediawiki:db')
3557-print('mariadb relation to mediawiki')
3558-for key, value in mariadb_relation.items():
3559- print(key, value)
3560-# Get the db_host from the mediawiki relation to mariadb
3561-mariadb_ip = mariadb_relation['host']
3562-# Get the user from the mediawiki relation to mariadb
3563-mariadb_user = mariadb_relation['user']
3564-# Get the password from the mediawiki relation to mariadb
3565-mariadb_password = mariadb_relation['password']
3566-# Create the command to get the mariadb status with username and password.
3567-command = '/usr/local/mysql/bin/mysqladmin status -h {0} -u {1} --password={2}'.format(mariadb_ip,
3568- mariadb_user, mariadb_password)
3569-print(command)
3570-output, code = mariadb_unit.run(command)
3571-print(output)
3572-if code != 0:
3573- message = 'Unable to get the status of mariadb server at %s' % mariadb_ip
3574- amulet.raise_status(amulet.FAIL, msg=message)
3575-
3576-###############################################################################
3577-## Verify mediawiki
3578-###############################################################################
3579-# Create a URL string to the mediawiki server.
3580-mediawiki_url = 'http://%s' % mediawiki_address
3581-print(mediawiki_url)
3582-# Get the mediawiki url with the authentication for guest.
3583-response = requests.get(mediawiki_url)
3584-# Raise an exception if response is not 200 OK.
3585-response.raise_for_status()
3586-
3587-print('The MariaDB deploy test completed successfully.')
3588-
