Merge lp:~sidnei/charms/precise/haproxy/trunk into lp:charms/haproxy

Proposed by Sidnei da Silva
Status: Merged
Merged at revision: 68
Proposed branch: lp:~sidnei/charms/precise/haproxy/trunk
Merge into: lp:charms/haproxy
Diff against target: 5102 lines (+3700/-904)
30 files modified
.bzrignore (+10/-0)
Makefile (+39/-0)
README.md (+22/-15)
charm-helpers.yaml (+4/-0)
cm.py (+193/-0)
config-manager.txt (+6/-0)
config.yaml (+13/-1)
files/nrpe/check_haproxy.sh (+2/-3)
hooks/charmhelpers/contrib/charmsupport/nrpe.py (+218/-0)
hooks/charmhelpers/contrib/charmsupport/volumes.py (+156/-0)
hooks/charmhelpers/core/hookenv.py (+340/-0)
hooks/charmhelpers/core/host.py (+239/-0)
hooks/charmhelpers/fetch/__init__.py (+209/-0)
hooks/charmhelpers/fetch/archiveurl.py (+48/-0)
hooks/charmhelpers/fetch/bzrurl.py (+44/-0)
hooks/hooks.py (+522/-450)
hooks/install (+13/-0)
hooks/nrpe.py (+0/-170)
hooks/test_hooks.py (+0/-263)
hooks/tests/test_config_changed_hooks.py (+120/-0)
hooks/tests/test_helpers.py (+750/-0)
hooks/tests/test_nrpe_hooks.py (+24/-0)
hooks/tests/test_peer_hooks.py (+200/-0)
hooks/tests/test_reverseproxy_hooks.py (+345/-0)
hooks/tests/test_website_hooks.py (+145/-0)
hooks/tests/utils_for_tests.py (+21/-0)
metadata.yaml (+7/-1)
revision (+0/-1)
setup.cfg (+4/-0)
tarmac_tests.sh (+6/-0)
To merge this branch: bzr merge lp:~sidnei/charms/precise/haproxy/trunk
Reviewer: Marco Ceppi (community), review type: none specified, status: Approve
Review via email: mp+190501@code.launchpad.net

Description of the change

* The 'all_services' config now supports a static list of servers to be used *in addition* to the ones provided via the relation.

* When more than one haproxy unit exists, the configured service is upgraded in place to a mode where traffic is routed to a single haproxy unit (the first one in unit-name order) and the remaining units are configured as 'backup'. This makes it possible to enforce a 'maxconn' limit on the configured services, which could not be enforced otherwise.

* Changes to the configured services are properly propagated to the upstream relation.
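A minimal, self-contained sketch of the first two behaviours above (all unit names, addresses, and helper names here are hypothetical; the real logic lives in hooks/hooks.py in this diff): static servers from the config are merged with relation-provided ones, and when several haproxy units exist the first in unit-name order stays active while the rest are marked 'backup'.

```python
def merge_servers(static_servers, relation_servers):
    """Static servers from the charm config are used *in addition*
    to servers announced over the reverseproxy relation."""
    return list(static_servers) + list(relation_servers)


def assign_roles(haproxy_units):
    """The first haproxy unit in unit-name order receives traffic;
    the remaining units become 'backup' servers, so a single
    'maxconn' limit can be enforced for the service."""
    ordered = sorted(haproxy_units)
    return {unit: ('active' if i == 0 else 'backup')
            for i, unit in enumerate(ordered)}


servers = merge_servers(
    [('static-0', '10.0.0.10', 8080, 'maxconn 50')],   # from config
    [('app-0', '10.0.0.20', 8080, 'maxconn 50')],      # from relation
)
roles = assign_roles(['haproxy/2', 'haproxy/0', 'haproxy/1'])
```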

87. By JuanJo Ciarlante

[sidnei, r=jjo] Dupe mode http/tcp and option httplog/tcplog between frontend and backend

Revision history for this message
Marco Ceppi (marcoceppi) wrote :

LGTM +1, only comment is from the previous merge about using Hooks.hook() https://code.launchpad.net/~sidnei/charms/precise/apache2/trunk/+merge/190504/comments/440317

review: Approve
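The Hooks.hook() pattern the reviewer points to is provided by charmhelpers; the class below is a self-contained imitation for illustration only, not the real implementation. Hook scripts are symlinks named after the hook, so dispatch happens on the basename of the invoked path:

```python
import os


class Hooks(object):
    """Minimal imitation of the charmhelpers Hooks registry: decorate a
    function with the hook name(s) it handles, then dispatch on argv[0]."""

    def __init__(self):
        self._registry = {}

    def hook(self, *hook_names):
        def wrapper(func):
            for name in hook_names:
                self._registry[name] = func
            return func
        return wrapper

    def execute(self, args):
        # The basename of the invoked hook path selects the handler.
        return self._registry[os.path.basename(args[0])]()


hooks = Hooks()


@hooks.hook('config-changed')
def config_changed():
    return 'config-changed ran'


result = hooks.execute(['hooks/config-changed'])
```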

Preview Diff

1=== added file '.bzrignore'
2--- .bzrignore 1970-01-01 00:00:00 +0000
3+++ .bzrignore 2013-10-16 14:05:24 +0000
4@@ -0,0 +1,10 @@
5+revision
6+_trial_temp
7+.coverage
8+coverage.xml
9+*.crt
10+*.key
11+lib/*
12+*.pyc
13+exec.d
14+build/charm-helpers
15
16=== added file 'Makefile'
17--- Makefile 1970-01-01 00:00:00 +0000
18+++ Makefile 2013-10-16 14:05:24 +0000
19@@ -0,0 +1,39 @@
20+PWD := $(shell pwd)
21+SOURCEDEPS_DIR ?= $(shell dirname $(PWD))/.sourcecode
22+HOOKS_DIR := $(PWD)/hooks
23+TEST_PREFIX := PYTHONPATH=$(HOOKS_DIR)
24+TEST_DIR := $(PWD)/hooks/tests
25+CHARM_DIR := $(PWD)
26+PYTHON := /usr/bin/env python
27+
28+
29+build: test lint proof
30+
31+revision:
32+ @test -f revision || echo 0 > revision
33+
34+proof: revision
35+ @echo Proofing charm...
36+ @(charm proof $(PWD) || [ $$? -eq 100 ]) && echo OK
37+ @test `cat revision` = 0 && rm revision
38+
39+test:
40+ @echo Starting tests...
41+ @CHARM_DIR=$(CHARM_DIR) $(TEST_PREFIX) nosetests $(TEST_DIR)
42+
43+lint:
44+ @echo Checking for Python syntax...
45+ @flake8 $(HOOKS_DIR) --ignore=E123 --exclude=$(HOOKS_DIR)/charmhelpers && echo OK
46+
47+sourcedeps: $(PWD)/config-manager.txt
48+ @echo Updating source dependencies...
49+ @$(PYTHON) cm.py -c $(PWD)/config-manager.txt \
50+ -p $(SOURCEDEPS_DIR) \
51+ -t $(PWD)
52+ @$(PYTHON) build/charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
53+ -c charm-helpers.yaml \
54+ -b build/charm-helpers \
55+ -d hooks/charmhelpers
56+ @echo Do not forget to commit the updated files if any.
57+
58+.PHONY: revision proof test lint sourcedeps charm-payload
59
60=== modified file 'README.md'
61--- README.md 2013-02-12 23:43:54 +0000
62+++ README.md 2013-10-16 14:05:24 +0000
63@@ -1,5 +1,5 @@
64-Juju charm haproxy
65-==================
66+Juju charm for HAProxy
67+======================
68
69 HAProxy is a free, very fast and reliable solution offering high availability,
70 load balancing, and proxying for TCP and HTTP-based applications. It is
71@@ -9,6 +9,23 @@
72 integration into existing architectures very easy and riskless, while still
73 offering the possibility not to expose fragile web servers to the Net.
74
75+Development
76+-----------
77+The following steps are needed for testing and development of the charm,
78+but **not** for deployment:
79+
80+ sudo apt-get install python-software-properties
81+ sudo add-apt-repository ppa:cjohnston/flake8
82+ sudo apt-get update
83+ sudo apt-get install python-mock python-flake8 python-nose python-nosexcover
84+
85+To run the tests:
86+
87+ make build
88+
89+... will run the unit tests, run flake8 over the source to warn about
90+formatting issues and output a code coverage summary of the 'hooks.py' module.
91+
92 How to deploy the charm
93 -----------------------
94 juju deploy haproxy
95@@ -27,7 +44,7 @@
96 the "Website Relation" section for more information about that.
97
98 When your charm hooks into reverseproxy you have two general approaches
99-which can be used to notify haproxy about what services you are running.
100+which can be used to notify haproxy about what services you are running.
101 1) Single-service proxying or 2) Multi-service or relation-driven proxying.
102
103 ** 1) Single-Service Proxying **
104@@ -67,7 +84,7 @@
105
106 #!/bin/bash
107 # hooks/website-relation-changed
108-
109+
110 host=$(unit-get private-address)
111 port=80
112
113@@ -80,7 +97,7 @@
114 "
115
116 Once set, haproxy will union multiple `servers` stanzas from any units
117-joining with the same `service_name` under one listen stanza.
118+joining with the same `service_name` under one listen stanza.
119 `service-options` and `server_options` will be overwritten, so ensure they
120 are set uniformly on all services with the same name.
121
122@@ -102,18 +119,8 @@
123 Many of the haproxy settings can be altered via the standard juju configuration
124 settings. Please see the config.yaml file as each is fairly clearly documented.
125
126-Testing
127--------
128-This charm has a simple unit-test program. Please expand it and make sure new
129-changes are covered by simple unit tests. To run the unit tests:
130-
131- sudo apt-get install python-mocker
132- sudo apt-get install python-twisted-core
133- cd hooks; trial test_hooks
134-
135 TODO:
136 -----
137
138 * Expand Single-Service section as I have not tested that mode fully.
139 * Trigger website-relation-changed when the reverse-proxy relation changes
140-
141
142=== added directory 'build'
143=== added file 'charm-helpers.yaml'
144--- charm-helpers.yaml 1970-01-01 00:00:00 +0000
145+++ charm-helpers.yaml 2013-10-16 14:05:24 +0000
146@@ -0,0 +1,4 @@
147+include:
148+ - core
149+ - fetch
150+ - contrib.charmsupport
151\ No newline at end of file
152
153=== added file 'cm.py'
154--- cm.py 1970-01-01 00:00:00 +0000
155+++ cm.py 2013-10-16 14:05:24 +0000
156@@ -0,0 +1,193 @@
157+# Copyright 2010-2013 Canonical Ltd. All rights reserved.
158+import os
159+import re
160+import sys
161+import errno
162+import hashlib
163+import subprocess
164+import optparse
165+
166+from os import curdir
167+from bzrlib.branch import Branch
168+from bzrlib.plugin import load_plugins
169+load_plugins()
170+from bzrlib.plugins.launchpad import account as lp_account
171+
172+if 'GlobalConfig' in dir(lp_account):
173+ from bzrlib.config import LocationConfig as LocationConfiguration
174+ _ = LocationConfiguration
175+else:
176+ from bzrlib.config import LocationStack as LocationConfiguration
177+ _ = LocationConfiguration
178+
179+
180+def get_branch_config(config_file):
181+ """
182+ Retrieves the sourcedeps configuration for an source dir.
183+ Returns a dict of (branch, revspec) tuples, keyed by branch name.
184+ """
185+ branches = {}
186+ with open(config_file, 'r') as stream:
187+ for line in stream:
188+ line = line.split('#')[0].strip()
189+ bzr_match = re.match(r'(\S+)\s+'
190+ 'lp:([^;]+)'
191+ '(?:;revno=(\d+))?', line)
192+ if bzr_match:
193+ name, branch, revno = bzr_match.group(1, 2, 3)
194+ if revno is None:
195+ revspec = -1
196+ else:
197+ revspec = revno
198+ branches[name] = (branch, revspec)
199+ continue
200+ dir_match = re.match(r'(\S+)\s+'
201+ '\(directory\)', line)
202+ if dir_match:
203+ name = dir_match.group(1)
204+ branches[name] = None
205+ return branches
206+
207+
208+def main(config_file, parent_dir, target_dir, verbose):
209+ """Do the deed."""
210+
211+ try:
212+ os.makedirs(parent_dir)
213+ except OSError, e:
214+ if e.errno != errno.EEXIST:
215+ raise
216+
217+ branches = sorted(get_branch_config(config_file).items())
218+ for branch_name, spec in branches:
219+ if spec is None:
220+ # It's a directory, just create it and move on.
221+ destination_path = os.path.join(target_dir, branch_name)
222+ if not os.path.isdir(destination_path):
223+ os.makedirs(destination_path)
224+ continue
225+
226+ (quoted_branch_spec, revspec) = spec
227+ revno = int(revspec)
228+
229+ # qualify mirror branch name with hash of remote repo path to deal
230+ # with changes to the remote branch URL over time
231+ branch_spec_digest = hashlib.sha1(quoted_branch_spec).hexdigest()
232+ branch_directory = branch_spec_digest
233+
234+ source_path = os.path.join(parent_dir, branch_directory)
235+ destination_path = os.path.join(target_dir, branch_name)
236+
237+ # Remove leftover symlinks/stray files.
238+ try:
239+ os.remove(destination_path)
240+ except OSError, e:
241+ if e.errno != errno.EISDIR and e.errno != errno.ENOENT:
242+ raise
243+
244+ lp_url = "lp:" + quoted_branch_spec
245+
246+ # Create the local mirror branch if it doesn't already exist
247+ if verbose:
248+ sys.stderr.write('%30s: ' % (branch_name,))
249+ sys.stderr.flush()
250+
251+ fresh = False
252+ if not os.path.exists(source_path):
253+ subprocess.check_call(['bzr', 'branch', '-q', '--no-tree',
254+ '--', lp_url, source_path])
255+ fresh = True
256+
257+ if not fresh:
258+ source_branch = Branch.open(source_path)
259+ if revno == -1:
260+ orig_branch = Branch.open(lp_url)
261+ fresh = source_branch.revno() == orig_branch.revno()
262+ else:
263+ fresh = source_branch.revno() == revno
264+
265+ # Freshen the source branch if required.
266+ if not fresh:
267+ subprocess.check_call(['bzr', 'pull', '-q', '--overwrite', '-r',
268+ str(revno), '-d', source_path,
269+ '--', lp_url])
270+
271+ if os.path.exists(destination_path):
272+ # Overwrite the destination with the appropriate revision.
273+ subprocess.check_call(['bzr', 'clean-tree', '--force', '-q',
274+ '--ignored', '-d', destination_path])
275+ subprocess.check_call(['bzr', 'pull', '-q', '--overwrite',
276+ '-r', str(revno),
277+ '-d', destination_path, '--', source_path])
278+ else:
279+ # Create a new branch.
280+ subprocess.check_call(['bzr', 'branch', '-q', '--hardlink',
281+ '-r', str(revno),
282+ '--', source_path, destination_path])
283+
284+ # Check the state of the destination branch.
285+ destination_branch = Branch.open(destination_path)
286+ destination_revno = destination_branch.revno()
287+
288+ if verbose:
289+ sys.stderr.write('checked out %4s of %s\n' %
290+ ("r" + str(destination_revno), lp_url))
291+ sys.stderr.flush()
292+
293+ if revno != -1 and destination_revno != revno:
294+ raise RuntimeError("Expected revno %d but got revno %d" %
295+ (revno, destination_revno))
296+
297+if __name__ == '__main__':
298+ parser = optparse.OptionParser(
299+ usage="%prog [options]",
300+ description=(
301+ "Add a lightweight checkout in <target> for each "
302+ "corresponding file in <parent>."),
303+ add_help_option=False)
304+ parser.add_option(
305+ '-p', '--parent', dest='parent',
306+ default=None,
307+ help=("The directory of the parent tree."),
308+ metavar="DIR")
309+ parser.add_option(
310+ '-t', '--target', dest='target', default=curdir,
311+ help=("The directory of the target tree."),
312+ metavar="DIR")
313+ parser.add_option(
314+ '-c', '--config', dest='config', default=None,
315+ help=("The config file to be used for config-manager."),
316+ metavar="DIR")
317+ parser.add_option(
318+ '-q', '--quiet', dest='verbose', action='store_false',
319+ help="Be less verbose.")
320+ parser.add_option(
321+ '-v', '--verbose', dest='verbose', action='store_true',
322+ help="Be more verbose.")
323+ parser.add_option(
324+ '-h', '--help', action='help',
325+ help="Show this help message and exit.")
326+ parser.set_defaults(verbose=True)
327+
328+ options, args = parser.parse_args()
329+
330+ if options.parent is None:
331+ options.parent = os.environ.get(
332+ "SOURCEDEPS_DIR",
333+ os.path.join(curdir, ".sourcecode"))
334+
335+ if options.target is None:
336+ parser.error(
337+ "Target directory not specified.")
338+
339+ if options.config is None:
340+ config = [arg for arg in args
341+ if arg != "update"]
342+ if not config or len(config) > 1:
343+ parser.error("Config not specified")
344+ options.config = config[0]
345+
346+ sys.exit(main(config_file=options.config,
347+ parent_dir=options.parent,
348+ target_dir=options.target,
349+ verbose=options.verbose))
350
351=== added file 'config-manager.txt'
352--- config-manager.txt 1970-01-01 00:00:00 +0000
353+++ config-manager.txt 2013-10-16 14:05:24 +0000
354@@ -0,0 +1,6 @@
355+# After making changes to this file, to ensure that your sourcedeps are
356+# up-to-date do:
357+#
358+# make sourcedeps
359+
360+./build/charm-helpers lp:charm-helpers;revno=70
361
362=== modified file 'config.yaml'
363--- config.yaml 2012-10-10 14:38:47 +0000
364+++ config.yaml 2013-10-16 14:05:24 +0000
365@@ -59,7 +59,7 @@
366 restarting, a turn-around timer of 1 second is applied before a retry
367 occurs.
368 default_timeouts:
369- default: "queue 1000, connect 1000, client 1000, server 1000"
370+ default: "queue 20000, client 50000, connect 5000, server 50000"
371 type: string
372 description: Default timeouts
373 enable_monitoring:
374@@ -90,6 +90,12 @@
375 default: 3
376 type: int
377 description: Monitoring interface refresh interval (in seconds)
378+ package_status:
379+ default: "install"
380+ type: "string"
381+ description: |
382+ The status of service-affecting packages will be set to this value in the dpkg database.
383+ Useful valid values are "install" and "hold".
384 services:
385 default: |
386 - service_name: haproxy_service
387@@ -106,6 +112,12 @@
388 before the first variable, service_name, as above. Service options is a
389 comma separated list, server options will be appended as a string to
390 the individual server lines for a given listen stanza.
391+ sysctl:
392+ default: ""
393+ type: string
394+ description: >
395+ YAML-formatted list of sysctl values, e.g.:
396+ '{ net.ipv4.tcp_max_syn_backlog : 65536 }'
397 nagios_context:
398 default: "juju"
399 type: string
400
401=== renamed directory 'files/nrpe-external-master' => 'files/nrpe'
402=== modified file 'files/nrpe/check_haproxy.sh'
403--- files/nrpe-external-master/check_haproxy.sh 2012-11-07 22:32:06 +0000
404+++ files/nrpe/check_haproxy.sh 2013-10-16 14:05:24 +0000
405@@ -2,7 +2,7 @@
406 #--------------------------------------------
407 # This file is managed by Juju
408 #--------------------------------------------
409-#
410+#
411 # Copyright 2009,2012 Canonical Ltd.
412 # Author: Tom Haddon
413
414@@ -13,7 +13,7 @@
415
416 for appserver in $(grep ' server' /etc/haproxy/haproxy.cfg | awk '{print $2'});
417 do
418- output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 10000 --regex="class=\"active(2|3).*${appserver}" -e ' 200 OK')
419+ output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 10000 --regex="class=\"(active|backup)(2|3).*${appserver}" -e ' 200 OK')
420 if [ $? != 0 ]; then
421 date >> $LOGFILE
422 echo $output >> $LOGFILE
423@@ -30,4 +30,3 @@
424
425 echo "OK: All haproxy instances looking good"
426 exit 0
427-
428
429=== added directory 'hooks/charmhelpers'
430=== added file 'hooks/charmhelpers/__init__.py'
431=== added directory 'hooks/charmhelpers/contrib'
432=== added file 'hooks/charmhelpers/contrib/__init__.py'
433=== added directory 'hooks/charmhelpers/contrib/charmsupport'
434=== added file 'hooks/charmhelpers/contrib/charmsupport/__init__.py'
435=== added file 'hooks/charmhelpers/contrib/charmsupport/nrpe.py'
436--- hooks/charmhelpers/contrib/charmsupport/nrpe.py 1970-01-01 00:00:00 +0000
437+++ hooks/charmhelpers/contrib/charmsupport/nrpe.py 2013-10-16 14:05:24 +0000
438@@ -0,0 +1,218 @@
439+"""Compatibility with the nrpe-external-master charm"""
440+# Copyright 2012 Canonical Ltd.
441+#
442+# Authors:
443+# Matthew Wedgwood <matthew.wedgwood@canonical.com>
444+
445+import subprocess
446+import pwd
447+import grp
448+import os
449+import re
450+import shlex
451+import yaml
452+
453+from charmhelpers.core.hookenv import (
454+ config,
455+ local_unit,
456+ log,
457+ relation_ids,
458+ relation_set,
459+)
460+
461+from charmhelpers.core.host import service
462+
463+# This module adds compatibility with the nrpe-external-master and plain nrpe
464+# subordinate charms. To use it in your charm:
465+#
466+# 1. Update metadata.yaml
467+#
468+# provides:
469+# (...)
470+# nrpe-external-master:
471+# interface: nrpe-external-master
472+# scope: container
473+#
474+# and/or
475+#
476+# provides:
477+# (...)
478+# local-monitors:
479+# interface: local-monitors
480+# scope: container
481+
482+#
483+# 2. Add the following to config.yaml
484+#
485+# nagios_context:
486+# default: "juju"
487+# type: string
488+# description: |
489+# Used by the nrpe subordinate charms.
490+# A string that will be prepended to instance name to set the host name
491+# in nagios. So for instance the hostname would be something like:
492+# juju-myservice-0
493+# If you're running multiple environments with the same services in them
494+# this allows you to differentiate between them.
495+#
496+# 3. Add custom checks (Nagios plugins) to files/nrpe-external-master
497+#
498+# 4. Update your hooks.py with something like this:
499+#
500+# from charmsupport.nrpe import NRPE
501+# (...)
502+# def update_nrpe_config():
503+# nrpe_compat = NRPE()
504+# nrpe_compat.add_check(
505+# shortname = "myservice",
506+# description = "Check MyService",
507+# check_cmd = "check_http -w 2 -c 10 http://localhost"
508+# )
509+# nrpe_compat.add_check(
510+# "myservice_other",
511+# "Check for widget failures",
512+# check_cmd = "/srv/myapp/scripts/widget_check"
513+# )
514+# nrpe_compat.write()
515+#
516+# def config_changed():
517+# (...)
518+# update_nrpe_config()
519+#
520+# def nrpe_external_master_relation_changed():
521+# update_nrpe_config()
522+#
523+# def local_monitors_relation_changed():
524+# update_nrpe_config()
525+#
526+# 5. ln -s hooks.py nrpe-external-master-relation-changed
527+# ln -s hooks.py local-monitors-relation-changed
528+
529+
530+class CheckException(Exception):
531+ pass
532+
533+
534+class Check(object):
535+ shortname_re = '[A-Za-z0-9-_]+$'
536+ service_template = ("""
537+#---------------------------------------------------
538+# This file is Juju managed
539+#---------------------------------------------------
540+define service {{
541+ use active-service
542+ host_name {nagios_hostname}
543+ service_description {nagios_hostname}[{shortname}] """
544+ """{description}
545+ check_command check_nrpe!{command}
546+ servicegroups {nagios_servicegroup}
547+}}
548+""")
549+
550+ def __init__(self, shortname, description, check_cmd):
551+ super(Check, self).__init__()
552+ # XXX: could be better to calculate this from the service name
553+ if not re.match(self.shortname_re, shortname):
554+ raise CheckException("shortname must match {}".format(
555+ Check.shortname_re))
556+ self.shortname = shortname
557+ self.command = "check_{}".format(shortname)
558+ # Note: a set of invalid characters is defined by the
559+ # Nagios server config
560+ # The default is: illegal_object_name_chars=`~!$%^&*"|'<>?,()=
561+ self.description = description
562+ self.check_cmd = self._locate_cmd(check_cmd)
563+
564+ def _locate_cmd(self, check_cmd):
565+ search_path = (
566+ '/',
567+ os.path.join(os.environ['CHARM_DIR'],
568+ 'files/nrpe-external-master'),
569+ '/usr/lib/nagios/plugins',
570+ )
571+ parts = shlex.split(check_cmd)
572+ for path in search_path:
573+ if os.path.exists(os.path.join(path, parts[0])):
574+ command = os.path.join(path, parts[0])
575+ if len(parts) > 1:
576+ command += " " + " ".join(parts[1:])
577+ return command
578+ log('Check command not found: {}'.format(parts[0]))
579+ return ''
580+
581+ def write(self, nagios_context, hostname):
582+ nrpe_check_file = '/etc/nagios/nrpe.d/{}.cfg'.format(
583+ self.command)
584+ with open(nrpe_check_file, 'w') as nrpe_check_config:
585+ nrpe_check_config.write("# check {}\n".format(self.shortname))
586+ nrpe_check_config.write("command[{}]={}\n".format(
587+ self.command, self.check_cmd))
588+
589+ if not os.path.exists(NRPE.nagios_exportdir):
590+ log('Not writing service config as {} is not accessible'.format(
591+ NRPE.nagios_exportdir))
592+ else:
593+ self.write_service_config(nagios_context, hostname)
594+
595+ def write_service_config(self, nagios_context, hostname):
596+ for f in os.listdir(NRPE.nagios_exportdir):
597+ if re.search('.*{}.cfg'.format(self.command), f):
598+ os.remove(os.path.join(NRPE.nagios_exportdir, f))
599+
600+ templ_vars = {
601+ 'nagios_hostname': hostname,
602+ 'nagios_servicegroup': nagios_context,
603+ 'description': self.description,
604+ 'shortname': self.shortname,
605+ 'command': self.command,
606+ }
607+ nrpe_service_text = Check.service_template.format(**templ_vars)
608+ nrpe_service_file = '{}/service__{}_{}.cfg'.format(
609+ NRPE.nagios_exportdir, hostname, self.command)
610+ with open(nrpe_service_file, 'w') as nrpe_service_config:
611+ nrpe_service_config.write(str(nrpe_service_text))
612+
613+ def run(self):
614+ subprocess.call(self.check_cmd)
615+
616+
617+class NRPE(object):
618+ nagios_logdir = '/var/log/nagios'
619+ nagios_exportdir = '/var/lib/nagios/export'
620+ nrpe_confdir = '/etc/nagios/nrpe.d'
621+
622+ def __init__(self):
623+ super(NRPE, self).__init__()
624+ self.config = config()
625+ self.nagios_context = self.config['nagios_context']
626+ self.unit_name = local_unit().replace('/', '-')
627+ self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
628+ self.checks = []
629+
630+ def add_check(self, *args, **kwargs):
631+ self.checks.append(Check(*args, **kwargs))
632+
633+ def write(self):
634+ try:
635+ nagios_uid = pwd.getpwnam('nagios').pw_uid
636+ nagios_gid = grp.getgrnam('nagios').gr_gid
637+ except:
638+ log("Nagios user not set up, nrpe checks not updated")
639+ return
640+
641+ if not os.path.exists(NRPE.nagios_logdir):
642+ os.mkdir(NRPE.nagios_logdir)
643+ os.chown(NRPE.nagios_logdir, nagios_uid, nagios_gid)
644+
645+ nrpe_monitors = {}
646+ monitors = {"monitors": {"remote": {"nrpe": nrpe_monitors}}}
647+ for nrpecheck in self.checks:
648+ nrpecheck.write(self.nagios_context, self.hostname)
649+ nrpe_monitors[nrpecheck.shortname] = {
650+ "command": nrpecheck.command,
651+ }
652+
653+ service('restart', 'nagios-nrpe-server')
654+
655+ for rid in relation_ids("local-monitors"):
656+ relation_set(relation_id=rid, monitors=yaml.dump(monitors))
657
658=== added file 'hooks/charmhelpers/contrib/charmsupport/volumes.py'
659--- hooks/charmhelpers/contrib/charmsupport/volumes.py 1970-01-01 00:00:00 +0000
660+++ hooks/charmhelpers/contrib/charmsupport/volumes.py 2013-10-16 14:05:24 +0000
661@@ -0,0 +1,156 @@
662+'''
663+Functions for managing volumes in juju units. One volume is supported per unit.
664+Subordinates may have their own storage, provided it is on its own partition.
665+
666+Configuration stanzas:
667+ volume-ephemeral:
668+ type: boolean
669+ default: true
670+ description: >
671+ If false, a volume is mounted as sepecified in "volume-map"
672+ If true, ephemeral storage will be used, meaning that log data
673+ will only exist as long as the machine. YOU HAVE BEEN WARNED.
674+ volume-map:
675+ type: string
676+ default: {}
677+ description: >
678+ YAML map of units to device names, e.g:
679+ "{ rsyslog/0: /dev/vdb, rsyslog/1: /dev/vdb }"
680+ Service units will raise a configure-error if volume-ephemeral
681+ is 'true' and no volume-map value is set. Use 'juju set' to set a
682+ value and 'juju resolved' to complete configuration.
683+
684+Usage:
685+ from charmsupport.volumes import configure_volume, VolumeConfigurationError
686+ from charmsupport.hookenv import log, ERROR
687+ def post_mount_hook():
688+ stop_service('myservice')
689+ def post_mount_hook():
690+ start_service('myservice')
691+
692+ if __name__ == '__main__':
693+ try:
694+ configure_volume(before_change=pre_mount_hook,
695+ after_change=post_mount_hook)
696+ except VolumeConfigurationError:
697+ log('Storage could not be configured', ERROR)
698+'''
699+
700+# XXX: Known limitations
701+# - fstab is neither consulted nor updated
702+
703+import os
704+from charmhelpers.core import hookenv
705+from charmhelpers.core import host
706+import yaml
707+
708+
709+MOUNT_BASE = '/srv/juju/volumes'
710+
711+
712+class VolumeConfigurationError(Exception):
713+ '''Volume configuration data is missing or invalid'''
714+ pass
715+
716+
717+def get_config():
718+ '''Gather and sanity-check volume configuration data'''
719+ volume_config = {}
720+ config = hookenv.config()
721+
722+ errors = False
723+
724+ if config.get('volume-ephemeral') in (True, 'True', 'true', 'Yes', 'yes'):
725+ volume_config['ephemeral'] = True
726+ else:
727+ volume_config['ephemeral'] = False
728+
729+ try:
730+ volume_map = yaml.safe_load(config.get('volume-map', '{}'))
731+ except yaml.YAMLError as e:
732+ hookenv.log("Error parsing YAML volume-map: {}".format(e),
733+ hookenv.ERROR)
734+ errors = True
735+ if volume_map is None:
736+ # probably an empty string
737+ volume_map = {}
738+ elif not isinstance(volume_map, dict):
739+ hookenv.log("Volume-map should be a dictionary, not {}".format(
740+ type(volume_map)))
741+ errors = True
742+
743+ volume_config['device'] = volume_map.get(os.environ['JUJU_UNIT_NAME'])
744+ if volume_config['device'] and volume_config['ephemeral']:
745+ # asked for ephemeral storage but also defined a volume ID
746+ hookenv.log('A volume is defined for this unit, but ephemeral '
747+ 'storage was requested', hookenv.ERROR)
748+ errors = True
749+ elif not volume_config['device'] and not volume_config['ephemeral']:
750+ # asked for permanent storage but did not define volume ID
751+ hookenv.log('Ephemeral storage was requested, but there is no volume '
752+ 'defined for this unit.', hookenv.ERROR)
753+ errors = True
754+
755+ unit_mount_name = hookenv.local_unit().replace('/', '-')
756+ volume_config['mountpoint'] = os.path.join(MOUNT_BASE, unit_mount_name)
757+
758+ if errors:
759+ return None
760+ return volume_config
761+
762+
763+def mount_volume(config):
764+ if os.path.exists(config['mountpoint']):
765+ if not os.path.isdir(config['mountpoint']):
766+ hookenv.log('Not a directory: {}'.format(config['mountpoint']))
767+ raise VolumeConfigurationError()
768+ else:
769+ host.mkdir(config['mountpoint'])
770+ if os.path.ismount(config['mountpoint']):
771+ unmount_volume(config)
772+ if not host.mount(config['device'], config['mountpoint'], persist=True):
773+ raise VolumeConfigurationError()
774+
775+
776+def unmount_volume(config):
777+ if os.path.ismount(config['mountpoint']):
778+ if not host.umount(config['mountpoint'], persist=True):
779+ raise VolumeConfigurationError()
780+
781+
782+def managed_mounts():
783+ '''List of all mounted managed volumes'''
784+ return filter(lambda mount: mount[0].startswith(MOUNT_BASE), host.mounts())
785+
786+
787+def configure_volume(before_change=lambda: None, after_change=lambda: None):
788+ '''Set up storage (or don't) according to the charm's volume configuration.
789+ Returns the mount point or "ephemeral". before_change and after_change
790+ are optional functions to be called if the volume configuration changes.
791+ '''
792+
793+ config = get_config()
794+ if not config:
795+ hookenv.log('Failed to read volume configuration', hookenv.CRITICAL)
796+ raise VolumeConfigurationError()
797+
798+ if config['ephemeral']:
799+ if os.path.ismount(config['mountpoint']):
800+ before_change()
801+ unmount_volume(config)
802+ after_change()
803+ return 'ephemeral'
804+ else:
805+ # persistent storage
806+ if os.path.ismount(config['mountpoint']):
807+ mounts = dict(managed_mounts())
808+ if mounts.get(config['mountpoint']) != config['device']:
809+ before_change()
810+ unmount_volume(config)
811+ mount_volume(config)
812+ after_change()
813+ else:
814+ before_change()
815+ mount_volume(config)
816+ after_change()
817+ return config['mountpoint']
818
819=== added directory 'hooks/charmhelpers/core'
820=== added file 'hooks/charmhelpers/core/__init__.py'
821=== added file 'hooks/charmhelpers/core/hookenv.py'
822--- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000
823+++ hooks/charmhelpers/core/hookenv.py 2013-10-16 14:05:24 +0000
824@@ -0,0 +1,340 @@
825+"Interactions with the Juju environment"
826+# Copyright 2013 Canonical Ltd.
827+#
828+# Authors:
829+# Charm Helpers Developers <juju@lists.ubuntu.com>
830+
831+import os
832+import json
833+import yaml
834+import subprocess
835+import UserDict
836+
837+CRITICAL = "CRITICAL"
838+ERROR = "ERROR"
839+WARNING = "WARNING"
840+INFO = "INFO"
841+DEBUG = "DEBUG"
842+MARKER = object()
843+
844+cache = {}
845+
846+
847+def cached(func):
848+ ''' Cache return values for multiple executions of func + args
849+
850+ For example:
851+
852+ @cached
853+ def unit_get(attribute):
854+ pass
855+
856+ unit_get('test')
857+
858+ will cache the result of unit_get + 'test' for future calls.
859+ '''
860+ def wrapper(*args, **kwargs):
861+ global cache
862+ key = str((func, args, kwargs))
863+ try:
864+ return cache[key]
865+ except KeyError:
866+ res = func(*args, **kwargs)
867+ cache[key] = res
868+ return res
869+ return wrapper
870+
871+
872+def flush(key):
873+ ''' Flushes any entries from function cache where the
874+ key is found in the function+args '''
875+ flush_list = []
876+ for item in cache:
877+ if key in item:
878+ flush_list.append(item)
879+ for item in flush_list:
880+ del cache[item]
881+
882+
883+def log(message, level=None):
884+ "Write a message to the juju log"
885+ command = ['juju-log']
886+ if level:
887+ command += ['-l', level]
888+ command += [message]
889+ subprocess.call(command)
890+
891+
892+class Serializable(UserDict.IterableUserDict):
893+ "Wrapper, an object that can be serialized to yaml or json"
894+
895+ def __init__(self, obj):
896+ # wrap the object
897+ UserDict.IterableUserDict.__init__(self)
898+ self.data = obj
899+
900+ def __getattr__(self, attr):
901+ # See if this object has attribute.
902+ if attr in ("json", "yaml", "data"):
903+ return self.__dict__[attr]
904+ # Check for attribute in wrapped object.
905+ got = getattr(self.data, attr, MARKER)
906+ if got is not MARKER:
907+ return got
908+ # Proxy to the wrapped object via dict interface.
909+ try:
910+ return self.data[attr]
911+ except KeyError:
912+ raise AttributeError(attr)
913+
914+ def __getstate__(self):
915+ # Pickle as a standard dictionary.
916+ return self.data
917+
918+ def __setstate__(self, state):
919+ # Unpickle into our wrapper.
920+ self.data = state
921+
922+ def json(self):
923+ "Serialize the object to json"
924+ return json.dumps(self.data)
925+
926+ def yaml(self):
927+ "Serialize the object to yaml"
928+ return yaml.dump(self.data)
929+
930+
931+def execution_environment():
932+ """A convenient bundling of the current execution context"""
933+ context = {}
934+ context['conf'] = config()
935+ if relation_id():
936+ context['reltype'] = relation_type()
937+ context['relid'] = relation_id()
938+ context['rel'] = relation_get()
939+ context['unit'] = local_unit()
940+ context['rels'] = relations()
941+ context['env'] = os.environ
942+ return context
943+
944+
945+def in_relation_hook():
946+ "Determine whether we're running in a relation hook"
947+ return 'JUJU_RELATION' in os.environ
948+
949+
950+def relation_type():
951+ "The relation type for the current relation hook"
952+ return os.environ.get('JUJU_RELATION', None)
953+
954+
955+def relation_id():
956+ "The relation ID for the current relation hook"
957+ return os.environ.get('JUJU_RELATION_ID', None)
958+
959+
960+def local_unit():
961+ "Local unit ID"
962+ return os.environ['JUJU_UNIT_NAME']
963+
964+
965+def remote_unit():
966+ "The remote unit for the current relation hook"
967+ return os.environ['JUJU_REMOTE_UNIT']
968+
969+
970+def service_name():
971+ "The name of the service group this unit belongs to"
972+ return local_unit().split('/')[0]
973+
974+
975+@cached
976+def config(scope=None):
977+ "Juju charm configuration"
978+ config_cmd_line = ['config-get']
979+ if scope is not None:
980+ config_cmd_line.append(scope)
981+ config_cmd_line.append('--format=json')
982+ try:
983+ return json.loads(subprocess.check_output(config_cmd_line))
984+ except ValueError:
985+ return None
986+
987+
988+@cached
989+def relation_get(attribute=None, unit=None, rid=None):
990+ _args = ['relation-get', '--format=json']
991+ if rid:
992+ _args.append('-r')
993+ _args.append(rid)
994+ _args.append(attribute or '-')
995+ if unit:
996+ _args.append(unit)
997+ try:
998+ return json.loads(subprocess.check_output(_args))
999+ except ValueError:
1000+ return None
1001+
1002+
1003+def relation_set(relation_id=None, relation_settings={}, **kwargs):
1004+ relation_cmd_line = ['relation-set']
1005+ if relation_id is not None:
1006+ relation_cmd_line.extend(('-r', relation_id))
1007+ for k, v in (relation_settings.items() + kwargs.items()):
1008+ if v is None:
1009+ relation_cmd_line.append('{}='.format(k))
1010+ else:
1011+ relation_cmd_line.append('{}={}'.format(k, v))
1012+ subprocess.check_call(relation_cmd_line)
1013+ # Flush cache of any relation-gets for local unit
1014+ flush(local_unit())
1015+
1016+
1017+@cached
1018+def relation_ids(reltype=None):
1019+ "A list of relation_ids"
1020+ reltype = reltype or relation_type()
1021+ relid_cmd_line = ['relation-ids', '--format=json']
1022+ if reltype is not None:
1023+ relid_cmd_line.append(reltype)
1024+ return json.loads(subprocess.check_output(relid_cmd_line)) or []
1026+
1027+
1028+@cached
1029+def related_units(relid=None):
1030+ "A list of related units"
1031+ relid = relid or relation_id()
1032+ units_cmd_line = ['relation-list', '--format=json']
1033+ if relid is not None:
1034+ units_cmd_line.extend(('-r', relid))
1035+ return json.loads(subprocess.check_output(units_cmd_line)) or []
1036+
1037+
1038+@cached
1039+def relation_for_unit(unit=None, rid=None):
1040+ "Get the json representation of a unit's relation"
1041+ unit = unit or remote_unit()
1042+ relation = relation_get(unit=unit, rid=rid)
1043+ for key in relation:
1044+ if key.endswith('-list'):
1045+ relation[key] = relation[key].split()
1046+ relation['__unit__'] = unit
1047+ return relation
1048+
1049+
1050+@cached
1051+def relations_for_id(relid=None):
1052+ "Get relations of a specific relation ID"
1053+ relation_data = []
1054+ relid = relid or relation_id()
1055+ for unit in related_units(relid):
1056+ unit_data = relation_for_unit(unit, relid)
1057+ unit_data['__relid__'] = relid
1058+ relation_data.append(unit_data)
1059+ return relation_data
1060+
1061+
1062+@cached
1063+def relations_of_type(reltype=None):
1064+ "Get relations of a specific type"
1065+ relation_data = []
1066+ reltype = reltype or relation_type()
1067+ for relid in relation_ids(reltype):
1068+ for relation in relations_for_id(relid):
1069+ relation['__relid__'] = relid
1070+ relation_data.append(relation)
1071+ return relation_data
1072+
1073+
1074+@cached
1075+def relation_types():
1076+ "Get a list of relation types supported by this charm"
1077+ charmdir = os.environ.get('CHARM_DIR', '')
1078+ mdf = open(os.path.join(charmdir, 'metadata.yaml'))
1079+ md = yaml.safe_load(mdf)
1080+ rel_types = []
1081+ for key in ('provides', 'requires', 'peers'):
1082+ section = md.get(key)
1083+ if section:
1084+ rel_types.extend(section.keys())
1085+ mdf.close()
1086+ return rel_types
1087+
1088+
1089+@cached
1090+def relations():
1091+ rels = {}
1092+ for reltype in relation_types():
1093+ relids = {}
1094+ for relid in relation_ids(reltype):
1095+ units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
1096+ for unit in related_units(relid):
1097+ reldata = relation_get(unit=unit, rid=relid)
1098+ units[unit] = reldata
1099+ relids[relid] = units
1100+ rels[reltype] = relids
1101+ return rels
1102+
1103+
1104+def open_port(port, protocol="TCP"):
1105+ "Open a service network port"
1106+ _args = ['open-port']
1107+ _args.append('{}/{}'.format(port, protocol))
1108+ subprocess.check_call(_args)
1109+
1110+
1111+def close_port(port, protocol="TCP"):
1112+ "Close a service network port"
1113+ _args = ['close-port']
1114+ _args.append('{}/{}'.format(port, protocol))
1115+ subprocess.check_call(_args)
1116+
1117+
1118+@cached
1119+def unit_get(attribute):
1120+ _args = ['unit-get', '--format=json', attribute]
1121+ try:
1122+ return json.loads(subprocess.check_output(_args))
1123+ except ValueError:
1124+ return None
1125+
1126+
1127+def unit_private_ip():
1128+ return unit_get('private-address')
1129+
1130+
1131+class UnregisteredHookError(Exception):
1132+ pass
1133+
1134+
1135+class Hooks(object):
1136+ def __init__(self):
1137+ super(Hooks, self).__init__()
1138+ self._hooks = {}
1139+
1140+ def register(self, name, function):
1141+ self._hooks[name] = function
1142+
1143+ def execute(self, args):
1144+ hook_name = os.path.basename(args[0])
1145+ if hook_name in self._hooks:
1146+ self._hooks[hook_name]()
1147+ else:
1148+ raise UnregisteredHookError(hook_name)
1149+
1150+ def hook(self, *hook_names):
1151+ def wrapper(decorated):
1152+ for hook_name in hook_names:
1153+ self.register(hook_name, decorated)
1154+ else:
1155+ self.register(decorated.__name__, decorated)
1156+ if '_' in decorated.__name__:
1157+ self.register(
1158+ decorated.__name__.replace('_', '-'), decorated)
1159+ return decorated
1160+ return wrapper
1161+
1162+
1163+def charm_dir():
1164+ return os.environ.get('CHARM_DIR')
1165
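[Review note: the `@cached`/`flush` pair added to hookenv.py above memoizes hook-tool output on `str((func, args, kwargs))`, and `relation_set` relies on `flush(local_unit())` to invalidate cached `relation-get` results. A minimal standalone sketch of the same pattern, runnable outside a Juju unit (hypothetical reimplementation for illustration, not the charm's module):]

```python
# Sketch of hookenv's module-level cache keyed on str((func, args, kwargs)).
cache = {}


def cached(func):
    """Memoize func; repeated calls with the same args hit the cache."""
    def wrapper(*args, **kwargs):
        key = str((func, args, kwargs))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]
    return wrapper


def flush(key):
    """Drop every cached entry whose composite key mentions `key`."""
    for item in [k for k in cache if key in k]:
        del cache[item]


calls = []


@cached
def unit_get(attribute):
    calls.append(attribute)           # side effect to observe cache hits
    return 'value-for-' + attribute


unit_get('test')
unit_get('test')                      # cache hit: the body runs only once
assert calls == ['test']
flush('unit_get')                     # the function repr appears in the key
unit_get('test')                      # re-executed after the flush
assert calls == ['test', 'test']
```

[Because the cache key is the stringified call, flushing on any substring works: flushing on a unit name invalidates every cached `relation_get` that mentioned that unit, which is exactly what `relation_set` does.]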
1166=== added file 'hooks/charmhelpers/core/host.py'
1167--- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000
1168+++ hooks/charmhelpers/core/host.py 2013-10-16 14:05:24 +0000
1169@@ -0,0 +1,239 @@
1170+"""Tools for working with the host system"""
1171+# Copyright 2012 Canonical Ltd.
1172+#
1173+# Authors:
1174+# Nick Moffitt <nick.moffitt@canonical.com>
1175+# Matthew Wedgwood <matthew.wedgwood@canonical.com>
1176+
1177+import os
1178+import pwd
1179+import grp
1180+import random
1181+import string
1182+import subprocess
1183+import hashlib
1184+
1185+from collections import OrderedDict
1186+
1187+from hookenv import log
1188+
1189+
1190+def service_start(service_name):
1191+ service('start', service_name)
1192+
1193+
1194+def service_stop(service_name):
1195+ service('stop', service_name)
1196+
1197+
1198+def service_restart(service_name):
1199+ service('restart', service_name)
1200+
1201+
1202+def service_reload(service_name, restart_on_failure=False):
1203+ if not service('reload', service_name) and restart_on_failure:
1204+ service('restart', service_name)
1205+
1206+
1207+def service(action, service_name):
1208+ cmd = ['service', service_name, action]
1209+ return subprocess.call(cmd) == 0
1210+
1211+
1212+def service_running(service):
1213+ try:
1214+ output = subprocess.check_output(['service', service, 'status'])
1215+ except subprocess.CalledProcessError:
1216+ return False
1217+ else:
1218+ if ("start/running" in output or "is running" in output):
1219+ return True
1220+ else:
1221+ return False
1222+
1223+
1224+def adduser(username, password=None, shell='/bin/bash', system_user=False):
1225+ """Add a user"""
1226+ try:
1227+ user_info = pwd.getpwnam(username)
1228+ log('user {0} already exists!'.format(username))
1229+ except KeyError:
1230+ log('creating user {0}'.format(username))
1231+ cmd = ['useradd']
1232+ if system_user or password is None:
1233+ cmd.append('--system')
1234+ else:
1235+ cmd.extend([
1236+ '--create-home',
1237+ '--shell', shell,
1238+ '--password', password,
1239+ ])
1240+ cmd.append(username)
1241+ subprocess.check_call(cmd)
1242+ user_info = pwd.getpwnam(username)
1243+ return user_info
1244+
1245+
1246+def add_user_to_group(username, group):
1247+ """Add a user to a group"""
1248+ cmd = [
1249+ 'gpasswd', '-a',
1250+ username,
1251+ group
1252+ ]
1253+ log("Adding user {} to group {}".format(username, group))
1254+ subprocess.check_call(cmd)
1255+
1256+
1257+def rsync(from_path, to_path, flags='-r', options=None):
1258+ """Replicate the contents of a path"""
1259+ options = options or ['--delete', '--executability']
1260+ cmd = ['/usr/bin/rsync', flags]
1261+ cmd.extend(options)
1262+ cmd.append(from_path)
1263+ cmd.append(to_path)
1264+ log(" ".join(cmd))
1265+ return subprocess.check_output(cmd).strip()
1266+
1267+
1268+def symlink(source, destination):
1269+ """Create a symbolic link"""
1270+ log("Symlinking {} as {}".format(source, destination))
1271+ cmd = [
1272+ 'ln',
1273+ '-sf',
1274+ source,
1275+ destination,
1276+ ]
1277+ subprocess.check_call(cmd)
1278+
1279+
1280+def mkdir(path, owner='root', group='root', perms=0555, force=False):
1281+ """Create a directory"""
1282+ log("Making dir {} {}:{} {:o}".format(path, owner, group,
1283+ perms))
1284+ uid = pwd.getpwnam(owner).pw_uid
1285+ gid = grp.getgrnam(group).gr_gid
1286+ realpath = os.path.abspath(path)
1287+ if force and os.path.exists(realpath) and not os.path.isdir(realpath):
1288+ # Remove a non-directory so the directory can be created below
1289+ log("Removing non-directory file {} prior to mkdir()".format(path))
1290+ os.unlink(realpath)
1291+ if not os.path.exists(realpath):
1292+ os.makedirs(realpath, perms)
1293+ os.chown(realpath, uid, gid)
1294+
1295+
1296+def write_file(path, content, owner='root', group='root', perms=0444):
1297+ """Create or overwrite a file with the contents of a string"""
1298+ log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
1299+ uid = pwd.getpwnam(owner).pw_uid
1300+ gid = grp.getgrnam(group).gr_gid
1301+ with open(path, 'w') as target:
1302+ os.fchown(target.fileno(), uid, gid)
1303+ os.fchmod(target.fileno(), perms)
1304+ target.write(content)
1305+
1306+
1307+def mount(device, mountpoint, options=None, persist=False):
1308+ '''Mount a filesystem'''
1309+ cmd_args = ['mount']
1310+ if options is not None:
1311+ cmd_args.extend(['-o', options])
1312+ cmd_args.extend([device, mountpoint])
1313+ try:
1314+ subprocess.check_output(cmd_args)
1315+ except subprocess.CalledProcessError, e:
1316+ log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
1317+ return False
1318+ if persist:
1319+ # TODO: update fstab
1320+ pass
1321+ return True
1322+
1323+
1324+def umount(mountpoint, persist=False):
1325+ '''Unmount a filesystem'''
1326+ cmd_args = ['umount', mountpoint]
1327+ try:
1328+ subprocess.check_output(cmd_args)
1329+ except subprocess.CalledProcessError, e:
1330+ log('Error unmounting {}\n{}'.format(mountpoint, e.output))
1331+ return False
1332+ if persist:
1333+ # TODO: update fstab
1334+ pass
1335+ return True
1336+
1337+
1338+def mounts():
1339+ '''List of all mounted volumes as [[mountpoint,device],[...]]'''
1340+ with open('/proc/mounts') as f:
1341+ # [['/mount/point','/dev/path'],[...]]
1342+ system_mounts = [m[1::-1] for m in [l.strip().split()
1343+ for l in f.readlines()]]
1344+ return system_mounts
1345+
1346+
1347+def file_hash(path):
1348+ ''' Generate an md5 hash of the contents of 'path' or None if not found '''
1349+ if os.path.exists(path):
1350+ h = hashlib.md5()
1351+ with open(path, 'r') as source:
1352+ h.update(source.read()) # IGNORE:E1101 - it does have update
1353+ return h.hexdigest()
1354+ else:
1355+ return None
1356+
1357+
1358+def restart_on_change(restart_map):
1359+ ''' Restart services based on configuration files changing
1360+
1361+ This function is used as a decorator, for example:
1362+
1363+ @restart_on_change({
1364+ '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
1365+ })
1366+ def ceph_client_changed():
1367+ ...
1368+
1369+ In this example, the cinder-api and cinder-volume services
1370+ would be restarted if /etc/ceph/ceph.conf is changed by the
1371+ ceph_client_changed function.
1372+ '''
1373+ def wrap(f):
1374+ def wrapped_f(*args):
1375+ checksums = {}
1376+ for path in restart_map:
1377+ checksums[path] = file_hash(path)
1378+ f(*args)
1379+ restarts = []
1380+ for path in restart_map:
1381+ if checksums[path] != file_hash(path):
1382+ restarts += restart_map[path]
1383+ for service_name in list(OrderedDict.fromkeys(restarts)):
1384+ service('restart', service_name)
1385+ return wrapped_f
1386+ return wrap
1387+
1388+
1389+def lsb_release():
1390+ '''Return /etc/lsb-release in a dict'''
1391+ d = {}
1392+ with open('/etc/lsb-release', 'r') as lsb:
1393+ for l in lsb:
1394+ k, v = l.split('=')
1395+ d[k.strip()] = v.strip()
1396+ return d
1397+
1398+
1399+def pwgen(length=None):
1400+ '''Generate a random password.'''
1401+ if length is None:
1402+ length = random.choice(range(35, 45))
1403+ alphanumeric_chars = [
1404+ l for l in (string.letters + string.digits)
1405+ if l not in 'l0QD1vAEIOUaeiou']
1406+ random_chars = [
1407+ random.choice(alphanumeric_chars) for _ in range(length)]
1408+ return(''.join(random_chars))
1409
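[Review note: `restart_on_change` above snapshots the checksum of each watched config file before the wrapped hook runs, then restarts only the services whose files actually changed, de-duplicating while preserving order. A runnable sketch of the same flow, with `service()` stubbed out so no real init system is touched (the stub and temp-file names are illustrative, not part of the charm):]

```python
import hashlib
import os
import tempfile
from collections import OrderedDict

restarted = []


def service(action, name):
    # Stub for host.service(); records calls instead of shelling out.
    restarted.append((action, name))


def file_hash(path):
    if not os.path.exists(path):
        return None
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()


def restart_on_change(restart_map):
    def wrap(f):
        def wrapped_f(*args):
            checksums = {path: file_hash(path) for path in restart_map}
            f(*args)
            restarts = []
            for path in restart_map:
                if checksums[path] != file_hash(path):
                    restarts += restart_map[path]
            # De-duplicate while preserving order, as the original does
            for name in OrderedDict.fromkeys(restarts):
                service('restart', name)
        return wrapped_f
    return wrap


conf = tempfile.NamedTemporaryFile(delete=False)
conf.close()


@restart_on_change({conf.name: ['haproxy']})
def config_changed():
    with open(conf.name, 'w') as f:
        f.write('bind 0.0.0.0:80')


config_changed()
assert restarted == [('restart', 'haproxy')]
os.unlink(conf.name)
```

[If the hook leaves the file byte-identical, the md5 comparison matches and no restart is issued, which is what lets config-changed run safely on every hook invocation.]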
1410=== added directory 'hooks/charmhelpers/fetch'
1411=== added file 'hooks/charmhelpers/fetch/__init__.py'
1412--- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000
1413+++ hooks/charmhelpers/fetch/__init__.py 2013-10-16 14:05:24 +0000
1414@@ -0,0 +1,209 @@
1415+import importlib
1416+from yaml import safe_load
1417+from charmhelpers.core.host import (
1418+ lsb_release
1419+)
1420+from urlparse import (
1421+ urlparse,
1422+ urlunparse,
1423+)
1424+import subprocess
1425+from charmhelpers.core.hookenv import (
1426+ config,
1427+ log,
1428+)
1429+import apt_pkg
1430+
1431+CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
1432+deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
1433+"""
1434+PROPOSED_POCKET = """# Proposed
1435+deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
1436+"""
1437+
1438+
1439+def filter_installed_packages(packages):
1440+ """Returns a list of packages that require installation"""
1441+ apt_pkg.init()
1442+ cache = apt_pkg.Cache()
1443+ _pkgs = []
1444+ for package in packages:
1445+ try:
1446+ p = cache[package]
1447+ p.current_ver or _pkgs.append(package)
1448+ except KeyError:
1449+ log('Package {} has no installation candidate.'.format(package),
1450+ level='WARNING')
1451+ _pkgs.append(package)
1452+ return _pkgs
1453+
1454+
1455+def apt_install(packages, options=None, fatal=False):
1456+ """Install one or more packages"""
1457+ options = options or []
1458+ cmd = ['apt-get', '-y']
1459+ cmd.extend(options)
1460+ cmd.append('install')
1461+ if isinstance(packages, basestring):
1462+ cmd.append(packages)
1463+ else:
1464+ cmd.extend(packages)
1465+ log("Installing {} with options: {}".format(packages,
1466+ options))
1467+ if fatal:
1468+ subprocess.check_call(cmd)
1469+ else:
1470+ subprocess.call(cmd)
1471+
1472+
1473+def apt_update(fatal=False):
1474+ """Update local apt cache"""
1475+ cmd = ['apt-get', 'update']
1476+ if fatal:
1477+ subprocess.check_call(cmd)
1478+ else:
1479+ subprocess.call(cmd)
1480+
1481+
1482+def apt_purge(packages, fatal=False):
1483+ """Purge one or more packages"""
1484+ cmd = ['apt-get', '-y', 'purge']
1485+ if isinstance(packages, basestring):
1486+ cmd.append(packages)
1487+ else:
1488+ cmd.extend(packages)
1489+ log("Purging {}".format(packages))
1490+ if fatal:
1491+ subprocess.check_call(cmd)
1492+ else:
1493+ subprocess.call(cmd)
1494+
1495+
1496+def add_source(source, key=None):
1497+ if ((source.startswith('ppa:') or
1498+ source.startswith('http:'))):
1499+ subprocess.check_call(['add-apt-repository', '--yes', source])
1500+ elif source.startswith('cloud:'):
1501+ apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
1502+ fatal=True)
1503+ pocket = source.split(':')[-1]
1504+ with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
1505+ apt.write(CLOUD_ARCHIVE.format(pocket))
1506+ elif source == 'proposed':
1507+ release = lsb_release()['DISTRIB_CODENAME']
1508+ with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
1509+ apt.write(PROPOSED_POCKET.format(release))
1510+ if key:
1511+ subprocess.check_call(['apt-key', 'import', key])
1512+
1513+
1514+class SourceConfigError(Exception):
1515+ pass
1516+
1517+
1518+def configure_sources(update=False,
1519+ sources_var='install_sources',
1520+ keys_var='install_keys'):
1521+ """
1522+ Configure multiple sources from charm configuration
1523+
1524+ Example config:
1525+ install_sources:
1526+ - "ppa:foo"
1527+ - "http://example.com/repo precise main"
1528+ install_keys:
1529+ - null
1530+ - "a1b2c3d4"
1531+
1532+ Note that 'null' (a.k.a. None) should not be quoted.
1533+ """
1534+ sources = safe_load(config(sources_var))
1535+ keys = safe_load(config(keys_var))
1536+ if isinstance(sources, basestring) and isinstance(keys, basestring):
1537+ add_source(sources, keys)
1538+ else:
1539+ if not len(sources) == len(keys):
1540+ msg = 'Install sources and keys lists are different lengths'
1541+ raise SourceConfigError(msg)
1542+ for src_num in range(len(sources)):
1543+ add_source(sources[src_num], keys[src_num])
1544+ if update:
1545+ apt_update(fatal=True)
1546+
1547+# The order of this list is very important. Handlers should be listed from
1548+# least- to most-specific URL matching.
1549+FETCH_HANDLERS = (
1550+ 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
1551+ 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
1552+)
1553+
1554+
1555+class UnhandledSource(Exception):
1556+ pass
1557+
1558+
1559+def install_remote(source):
1560+ """
1561+ Install a file tree from a remote source
1562+
1563+ The specified source should be a url of the form:
1564+ scheme://[host]/path[#[option=value][&...]]
1565+
1566+ Schemes supported are based on this module's submodules
1567+ Options supported are submodule-specific"""
1568+ # We ONLY check for True here because can_handle may return a string
1569+ # explaining why it can't handle a given source.
1570+ handlers = [h for h in plugins() if h.can_handle(source) is True]
1571+ installed_to = None
1572+ for handler in handlers:
1573+ try:
1574+ installed_to = handler.install(source)
1575+ except UnhandledSource:
1576+ pass
1577+ if not installed_to:
1578+ raise UnhandledSource("No handler found for source {}".format(source))
1579+ return installed_to
1580+
1581+
1582+def install_from_config(config_var_name):
1583+ charm_config = config()
1584+ source = charm_config[config_var_name]
1585+ return install_remote(source)
1586+
1587+
1588+class BaseFetchHandler(object):
1589+ """Base class for FetchHandler implementations in fetch plugins"""
1590+ def can_handle(self, source):
1591+ """Returns True if the source can be handled. Otherwise returns
1592+ a string explaining why it cannot"""
1593+ return "Wrong source type"
1594+
1595+ def install(self, source):
1596+ """Try to download and unpack the source. Return the path to the
1597+ unpacked files or raise UnhandledSource."""
1598+ raise UnhandledSource("Wrong source type {}".format(source))
1599+
1600+ def parse_url(self, url):
1601+ return urlparse(url)
1602+
1603+ def base_url(self, url):
1604+ """Return url without querystring or fragment"""
1605+ parts = list(self.parse_url(url))
1606+ parts[4:] = ['' for i in parts[4:]]
1607+ return urlunparse(parts)
1608+
1609+
1610+def plugins(fetch_handlers=None):
1611+ if not fetch_handlers:
1612+ fetch_handlers = FETCH_HANDLERS
1613+ plugin_list = []
1614+ for handler_name in fetch_handlers:
1615+ package, classname = handler_name.rsplit('.', 1)
1616+ try:
1617+ handler_class = getattr(importlib.import_module(package), classname)
1618+ plugin_list.append(handler_class())
1619+ except (ImportError, AttributeError):
1620+ # Skip missing plugins so that they can be omitted from
1621+ # installation if desired
1622+ log("FetchHandler {} not found, skipping plugin".format(handler_name))
1623+ return plugin_list
1624
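[Review note: `install_remote` above dispatches on `can_handle`, which deliberately returns either `True` or a *string* explaining the rejection; only an identity check against `True` selects a handler. A Python 3 sketch of that dispatch with a hypothetical handler standing in for `BzrUrlFetchHandler` (names and paths are illustrative only):]

```python
from urllib.parse import urlparse


class UnhandledSource(Exception):
    pass


class BaseFetchHandler:
    def can_handle(self, source):
        # A string return value explains *why* the source is rejected
        return "Wrong source type"

    def install(self, source):
        raise UnhandledSource("Wrong source type {}".format(source))


class FakeBzrHandler(BaseFetchHandler):
    """Illustrative handler; accepts lp:/bzr+ssh URLs like BzrUrlFetchHandler."""
    def can_handle(self, source):
        if urlparse(source).scheme in ('bzr+ssh', 'lp'):
            return True
        return "Wrong source type"

    def install(self, source):
        return '/fetched/' + source.rsplit('/', 1)[-1]


def install_remote(source, handlers):
    # Only `is True` counts: can_handle may return an explanatory string,
    # which is truthy but must not select the handler.
    chosen = [h for h in handlers if h.can_handle(source) is True]
    for handler in chosen:
        try:
            return handler.install(source)
        except UnhandledSource:
            pass
    raise UnhandledSource("No handler found for source {}".format(source))


result = install_remote('lp:charms/haproxy', [FakeBzrHandler()])
assert result == '/fetched/haproxy'
```

[A plain truthiness test here would treat every rejection string as acceptance, which is why the `is True` comparison is load-bearing in the original.]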
1625=== added file 'hooks/charmhelpers/fetch/archiveurl.py'
1626--- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000
1627+++ hooks/charmhelpers/fetch/archiveurl.py 2013-10-16 14:05:24 +0000
1628@@ -0,0 +1,48 @@
1629+import os
1630+import urllib2
1631+from charmhelpers.fetch import (
1632+ BaseFetchHandler,
1633+ UnhandledSource
1634+)
1635+from charmhelpers.payload.archive import (
1636+ get_archive_handler,
1637+ extract,
1638+)
1639+from charmhelpers.core.host import mkdir
1640+
1641+
1642+class ArchiveUrlFetchHandler(BaseFetchHandler):
1643+ """Handler for archives via generic URLs"""
1644+ def can_handle(self, source):
1645+ url_parts = self.parse_url(source)
1646+ if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
1647+ return "Wrong source type"
1648+ if get_archive_handler(self.base_url(source)):
1649+ return True
1650+ return False
1651+
1652+ def download(self, source, dest):
1653+ # propagate all exceptions
1654+ # URLError, OSError, etc
1655+ response = urllib2.urlopen(source)
1656+ try:
1657+ with open(dest, 'w') as dest_file:
1658+ dest_file.write(response.read())
1659+ except Exception as e:
1660+ if os.path.isfile(dest):
1661+ os.unlink(dest)
1662+ raise e
1663+
1664+ def install(self, source):
1665+ url_parts = self.parse_url(source)
1666+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
1667+ if not os.path.exists(dest_dir):
1668+ mkdir(dest_dir, perms=0755)
1669+ dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
1670+ try:
1671+ self.download(source, dld_file)
1672+ except urllib2.URLError as e:
1673+ raise UnhandledSource(e.reason)
1674+ except OSError as e:
1675+ raise UnhandledSource(e.strerror)
1676+ return extract(dld_file)
1677
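[Review note: `ArchiveUrlFetchHandler.download` above writes the response body to the destination and unlinks any partial file before re-raising on failure. A Python 3 sketch of that download-with-cleanup pattern, exercised against a local `file://` URL so it is runnable without a network (file names are illustrative):]

```python
import os
import tempfile
import urllib.request


def download(source, dest):
    # Mirrors ArchiveUrlFetchHandler.download: write the response to dest,
    # removing the partial file if the write fails, then re-raising.
    response = urllib.request.urlopen(source)
    try:
        with open(dest, 'wb') as dest_file:
            dest_file.write(response.read())
    except Exception:
        if os.path.isfile(dest):
            os.unlink(dest)
        raise


src = tempfile.NamedTemporaryFile(delete=False, suffix='.tar')
src.write(b'archive-bytes')
src.close()
dest = src.name + '.fetched'
download('file://' + src.name, dest)
with open(dest, 'rb') as f:
    fetched = f.read()
assert fetched == b'archive-bytes'
os.unlink(src.name)
os.unlink(dest)
```

[The cleanup-on-failure step matters because `install()` later hands the downloaded path to `extract()`; leaving a truncated archive behind would poison the next attempt.]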
1678=== added file 'hooks/charmhelpers/fetch/bzrurl.py'
1679--- hooks/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000
1680+++ hooks/charmhelpers/fetch/bzrurl.py 2013-10-16 14:05:24 +0000
1681@@ -0,0 +1,44 @@
1682+import os
1683+from bzrlib.branch import Branch
1684+from charmhelpers.fetch import (
1685+ BaseFetchHandler,
1686+ UnhandledSource
1687+)
1688+from charmhelpers.core.host import mkdir
1689+
1690+
1691+class BzrUrlFetchHandler(BaseFetchHandler):
1692+ """Handler for bazaar branches via generic and lp URLs"""
1693+ def can_handle(self, source):
1694+ url_parts = self.parse_url(source)
1695+ if url_parts.scheme not in ('bzr+ssh', 'lp'):
1696+ return False
1697+ else:
1698+ return True
1699+
1700+ def branch(self, source, dest):
1701+ url_parts = self.parse_url(source)
1702+ if not self.can_handle(source):
1703+ raise UnhandledSource("Cannot handle {}".format(source))
1704+ # If we use the lp:branchname scheme we need to load plugins
1705+ if url_parts.scheme == "lp":
1706+ from bzrlib.plugin import load_plugins
1707+ load_plugins()
1708+ try:
1709+ remote_branch = Branch.open(source)
1710+ remote_branch.bzrdir.sprout(dest).open_branch()
1711+ except Exception as e:
1712+ raise e
1713+
1714+ def install(self, source):
1715+ url_parts = self.parse_url(source)
1716+ branch_name = url_parts.path.strip("/").split("/")[-1]
1717+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name)
1718+ if not os.path.exists(dest_dir):
1719+ mkdir(dest_dir, perms=0755)
1720+ try:
1721+ self.branch(source, dest_dir)
1722+ except OSError as e:
1723+ raise UnhandledSource(e.strerror)
1724+ return dest_dir
1725+
1726
1727=== modified file 'hooks/hooks.py'
1728--- hooks/hooks.py 2013-05-23 21:52:06 +0000
1729+++ hooks/hooks.py 2013-10-16 14:05:24 +0000
1730@@ -1,17 +1,31 @@
1731 #!/usr/bin/env python
1732
1733-import json
1734 import glob
1735 import os
1736-import random
1737 import re
1738 import socket
1739-import string
1740+import shutil
1741 import subprocess
1742 import sys
1743 import yaml
1744-import nrpe
1745-import time
1746+
1747+from itertools import izip, tee
1748+
1749+from charmhelpers.core.host import pwgen
1750+from charmhelpers.core.hookenv import (
1751+ log,
1752+ config as config_get,
1753+ relation_set,
1754+ relation_ids as get_relation_ids,
1755+ relations_of_type,
1756+ relations_for_id,
1757+ relation_id,
1758+ open_port,
1759+ close_port,
1760+ unit_get,
1761+ )
1762+from charmhelpers.fetch import apt_install
1763+from charmhelpers.contrib.charmsupport import nrpe
1764
1765
1766 ###############################################################################
1767@@ -20,92 +34,59 @@
1768 default_haproxy_config_dir = "/etc/haproxy"
1769 default_haproxy_config = "%s/haproxy.cfg" % default_haproxy_config_dir
1770 default_haproxy_service_config_dir = "/var/run/haproxy"
1771-HOOK_NAME = os.path.basename(sys.argv[0])
1772+service_affecting_packages = ['haproxy']
1773+
1774+dupe_options = [
1775+ "mode tcp",
1776+ "option tcplog",
1777+ "mode http",
1778+ "option httplog",
1779+ ]
1780+
1781+frontend_only_options = [
1782+ "backlog",
1783+ "bind",
1784+ "capture cookie",
1785+ "capture request header",
1786+ "capture response header",
1787+ "clitimeout",
1788+ "default_backend",
1789+ "maxconn",
1790+ "monitor fail",
1791+ "monitor-net",
1792+ "monitor-uri",
1793+ "option accept-invalid-http-request",
1794+ "option clitcpka",
1795+ "option contstats",
1796+ "option dontlog-normal",
1797+ "option dontlognull",
1798+ "option http-use-proxy-header",
1799+ "option log-separate-errors",
1800+ "option logasap",
1801+ "option socket-stats",
1802+ "option tcp-smart-accept",
1803+ "rate-limit sessions",
1804+ "tcp-request content accept",
1805+ "tcp-request content reject",
1806+ "tcp-request inspect-delay",
1807+ "timeout client",
1808+ "timeout clitimeout",
1809+ "use_backend",
1810+ ]
1811+
1812
1813 ###############################################################################
1814 # Supporting functions
1815 ###############################################################################
1816
1817-def unit_get(*args):
1818- """Simple wrapper around unit-get, all arguments passed untouched"""
1819- get_args = ["unit-get"]
1820- get_args.extend(args)
1821- return subprocess.check_output(get_args)
1822-
1823-def juju_log(*args):
1824- """Simple wrapper around juju-log, all arguments are passed untouched"""
1825- log_args = ["juju-log"]
1826- log_args.extend(args)
1827- subprocess.call(log_args)
1828-
1829-#------------------------------------------------------------------------------
1830-# config_get: Returns a dictionary containing all of the config information
1831-# Optional parameter: scope
1832-# scope: limits the scope of the returned configuration to the
1833-# desired config item.
1834-#------------------------------------------------------------------------------
1835-def config_get(scope=None):
1836- try:
1837- config_cmd_line = ['config-get']
1838- if scope is not None:
1839- config_cmd_line.append(scope)
1840- config_cmd_line.append('--format=json')
1841- config_data = json.loads(subprocess.check_output(config_cmd_line))
1842- except Exception, e:
1843- subprocess.call(['juju-log', str(e)])
1844- config_data = None
1845- finally:
1846- return(config_data)
1847-
1848-
1849-#------------------------------------------------------------------------------
1850-# relation_get: Returns a dictionary containing the relation information
1851-# Optional parameters: scope, relation_id
1852-# scope: limits the scope of the returned data to the
1853-# desired item.
1854-# unit_name: limits the data ( and optionally the scope )
1855-# to the specified unit
1856-# relation_id: specify relation id for out of context usage.
1857-#------------------------------------------------------------------------------
1858-def relation_get(scope=None, unit_name=None, relation_id=None):
1859- try:
1860- relation_cmd_line = ['relation-get', '--format=json']
1861- if relation_id is not None:
1862- relation_cmd_line.extend(('-r', relation_id))
1863- if scope is not None:
1864- relation_cmd_line.append(scope)
1865- else:
1866- relation_cmd_line.append('')
1867- if unit_name is not None:
1868- relation_cmd_line.append(unit_name)
1869- relation_data = json.loads(subprocess.check_output(relation_cmd_line))
1870- except Exception, e:
1871- subprocess.call(['juju-log', str(e)])
1872- relation_data = None
1873- finally:
1874- return(relation_data)
1875-
1876-def relation_set(arguments, relation_id=None):
1877- """
1878- Wrapper around relation-set
1879- @param arguments: list of command line arguments
1880- @param relation_id: optional relation-id (passed to -r parameter) to use
1881- """
1882- set_args = ["relation-set"]
1883- if relation_id is not None:
1884- set_args.extend(["-r", str(relation_id)])
1885- set_args.extend(arguments)
1886- subprocess.check_call(set_args)
1887-
1888-#------------------------------------------------------------------------------
1889-# apt_get_install( package ): Installs a package
1890-#------------------------------------------------------------------------------
1891-def apt_get_install(packages=None):
1892- if packages is None:
1893- return(False)
1894- cmd_line = ['apt-get', '-y', 'install', '-qq']
1895- cmd_line.append(packages)
1896- return(subprocess.call(cmd_line))
1897+
1898+def ensure_package_status(packages, status):
1899+ if status in ['install', 'hold']:
1900+ selections = ''.join(['{} {}\n'.format(package, status)
1901+ for package in packages])
1902+ dpkg = subprocess.Popen(['dpkg', '--set-selections'],
1903+ stdin=subprocess.PIPE)
1904+ dpkg.communicate(input=selections)
1905
1906
1907 #------------------------------------------------------------------------------
1908@@ -113,8 +94,8 @@
1909 #------------------------------------------------------------------------------
1910 def enable_haproxy():
1911 default_haproxy = "/etc/default/haproxy"
1912- enabled_haproxy = \
1913- open(default_haproxy).read().replace('ENABLED=0', 'ENABLED=1')
1914+ with open(default_haproxy) as f:
1915+ enabled_haproxy = f.read().replace('ENABLED=0', 'ENABLED=1')
1916 with open(default_haproxy, 'w') as f:
1917 f.write(enabled_haproxy)
1918
1919@@ -137,8 +118,8 @@
1920 if config_data['global_quiet'] is True:
1921 haproxy_globals.append(" quiet")
1922 haproxy_globals.append(" spread-checks %d" %
1923- config_data['global_spread_checks'])
1924- return('\n'.join(haproxy_globals))
1925+ config_data['global_spread_checks'])
1926+ return '\n'.join(haproxy_globals)
1927
1928
1929 #------------------------------------------------------------------------------
1930@@ -157,7 +138,7 @@
1931 haproxy_defaults.append(" retries %d" % config_data['default_retries'])
1932 for timeout_item in default_timeouts:
1933 haproxy_defaults.append(" timeout %s" % timeout_item.strip())
1934- return('\n'.join(haproxy_defaults))
1935+ return '\n'.join(haproxy_defaults)
1936
1937
1938 #------------------------------------------------------------------------------
1939@@ -168,9 +149,9 @@
1940 #------------------------------------------------------------------------------
1941 def load_haproxy_config(haproxy_config_file="/etc/haproxy/haproxy.cfg"):
1942 if os.path.isfile(haproxy_config_file):
1943- return(open(haproxy_config_file).read())
1944+ return open(haproxy_config_file).read()
1945 else:
1946- return(None)
1947+ return None
1948
1949
1950 #------------------------------------------------------------------------------
1951@@ -182,12 +163,12 @@
1952 def get_monitoring_password(haproxy_config_file="/etc/haproxy/haproxy.cfg"):
1953 haproxy_config = load_haproxy_config(haproxy_config_file)
1954 if haproxy_config is None:
1955- return(None)
1956+ return None
1957 m = re.search("stats auth\s+(\w+):(\w+)", haproxy_config)
1958 if m is not None:
1959- return(m.group(2))
1960+ return m.group(2)
1961 else:
1962- return(None)
1963+ return None
1964
1965
1966 #------------------------------------------------------------------------------
1967@@ -197,32 +178,29 @@
1968 # to open and close when exposing/unexposing a service
1969 #------------------------------------------------------------------------------
1970 def get_service_ports(haproxy_config_file="/etc/haproxy/haproxy.cfg"):
1971+ stanzas = get_listen_stanzas(haproxy_config_file=haproxy_config_file)
1972+ return tuple((int(port) for service, addr, port in stanzas))
1973+
1974+
1975+#------------------------------------------------------------------------------
1976+# get_listen_stanzas: Convenience function that scans the existing haproxy
1977+# configuration file and returns a list of the existing
1978+#                     listen stanzas configured.
1979+#------------------------------------------------------------------------------
1980+def get_listen_stanzas(haproxy_config_file="/etc/haproxy/haproxy.cfg"):
1981 haproxy_config = load_haproxy_config(haproxy_config_file)
1982 if haproxy_config is None:
1983- return(None)
1984- return(re.findall("listen.*:(.*)", haproxy_config))
1985-
1986-
1987-#------------------------------------------------------------------------------
1988-# open_port: Convenience function to open a port in juju to
1989-# expose a service
1990-#------------------------------------------------------------------------------
1991-def open_port(port=None, protocol="TCP"):
1992- if port is None:
1993- return(None)
1994- return(subprocess.call(['open-port', "%d/%s" %
1995- (int(port), protocol)]))
1996-
1997-
1998-#------------------------------------------------------------------------------
1999-# close_port: Convenience function to close a port in juju to
2000-# unexpose a service
2001-#------------------------------------------------------------------------------
2002-def close_port(port=None, protocol="TCP"):
2003- if port is None:
2004- return(None)
2005- return(subprocess.call(['close-port', "%d/%s" %
2006- (int(port), protocol)]))
2007+ return ()
2008+ listen_stanzas = re.findall(
2009+ "listen\s+([^\s]+)\s+([^:]+):(.*)",
2010+ haproxy_config)
2011+ bind_stanzas = re.findall(
2012+ "\s+bind\s+([^:]+):(\d+)\s*\n\s+default_backend\s+([^\s]+)",
2013+ haproxy_config, re.M)
2014+ return (tuple(((service, addr, int(port))
2015+ for service, addr, port in listen_stanzas)) +
2016+ tuple(((service, addr, int(port))
2017+ for addr, port, service in bind_stanzas)))
2018
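The two regexes in `get_listen_stanzas` cover both stanza styles the charm now writes: plain `listen <name> <addr>:<port>` lines, and `bind <addr>:<port>` lines paired with a `default_backend <name>`. A self-contained sketch of the same parsing, run against a hypothetical config snippet:

```python
import re

# Hypothetical snippet showing both stanza styles the regexes match.
SAMPLE = """\
listen monitoring 0.0.0.0:10000
frontend haproxy-0-80
    bind 0.0.0.0:80
    default_backend app_service
"""

def get_listen_stanzas(haproxy_config):
    # 'listen <name> <addr>:<port>' stanzas
    listen_stanzas = re.findall(r"listen\s+([^\s]+)\s+([^:]+):(.*)",
                                haproxy_config)
    # 'bind <addr>:<port>' followed by 'default_backend <name>'
    bind_stanzas = re.findall(
        r"\s+bind\s+([^:]+):(\d+)\s*\n\s+default_backend\s+([^\s]+)",
        haproxy_config, re.M)
    # Normalize both into (service, addr, port) tuples.
    return (tuple((service, addr, int(port))
                  for service, addr, port in listen_stanzas) +
            tuple((service, addr, int(port))
                  for addr, port, service in bind_stanzas))

print(get_listen_stanzas(SAMPLE))
# → (('monitoring', '0.0.0.0', 10000), ('app_service', '0.0.0.0', 80))
```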
2019
2020 #------------------------------------------------------------------------------
2021@@ -232,26 +210,25 @@
2022 #------------------------------------------------------------------------------
2023 def update_service_ports(old_service_ports=None, new_service_ports=None):
2024 if old_service_ports is None or new_service_ports is None:
2025- return(None)
2026+ return None
2027 for port in old_service_ports:
2028 if port not in new_service_ports:
2029 close_port(port)
2030 for port in new_service_ports:
2031- if port not in old_service_ports:
2032- open_port(port)
2033-
2034-
2035-#------------------------------------------------------------------------------
2036-# pwgen: Generates a random password
2037-# pwd_length: Defines the length of the password to generate
2038-# default: 20
2039-#------------------------------------------------------------------------------
2040-def pwgen(pwd_length=20):
2041- alphanumeric_chars = [l for l in (string.letters + string.digits)
2042- if l not in 'Iil0oO1']
2043- random_chars = [random.choice(alphanumeric_chars)
2044- for i in range(pwd_length)]
2045- return(''.join(random_chars))
2046+ open_port(port)
2047+
2048+
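The port reconciliation above closes ports that dropped out of the config and opens newly added ones. A sketch with juju's `open-port`/`close-port` calls stubbed out and hypothetical port tuples:

```python
opened, closed = [], []

def open_port(port):
    # Stand-in for juju's open-port invocation.
    opened.append(port)

def close_port(port):
    # Stand-in for juju's close-port invocation.
    closed.append(port)

def update_service_ports(old_service_ports, new_service_ports):
    # Close ports no longer configured, open ports that are new.
    for port in old_service_ports:
        if port not in new_service_ports:
            close_port(port)
    for port in new_service_ports:
        if port not in old_service_ports:
            open_port(port)

update_service_ports((80, 443), (80, 8080))
print(closed, opened)
# → [443] [8080]
```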
2049+#------------------------------------------------------------------------------
2050+# update_sysctl: create a sysctl.conf file from YAML-formatted 'sysctl' config
2051+#------------------------------------------------------------------------------
2052+def update_sysctl(config_data):
2053+ sysctl_dict = yaml.load(config_data.get("sysctl", "{}"))
2054+ if sysctl_dict:
2055+ sysctl_file = open("/etc/sysctl.d/50-haproxy.conf", "w")
2056+ for key in sysctl_dict:
2057+ sysctl_file.write("{}={}\n".format(key, sysctl_dict[key]))
2058+ sysctl_file.close()
2059+ subprocess.call(["sysctl", "-p", "/etc/sysctl.d/50-haproxy.conf"])
2060
2061
2062 #------------------------------------------------------------------------------
2063@@ -271,22 +248,47 @@
2064 service_port=None, service_options=None,
2065 server_entries=None):
2066 if service_name is None or service_ip is None or service_port is None:
2067- return(None)
2068+ return None
2069+ fe_options = []
2070+ be_options = []
2071+ if service_options is not None:
2072+ # For options that should be duplicated in both frontend and backend,
2073+ # copy them to both.
2074+ for o in dupe_options:
2075+ if any(map(o.strip().startswith, service_options)):
2076+ fe_options.append(o)
2077+ be_options.append(o)
2078+ # Filter provided service options into frontend-only and backend-only.
2079+ results = izip(
2080+ (fe_options, be_options),
2081+ (True, False),
2082+ tee((o, any(map(o.strip().startswith,
2083+ frontend_only_options)))
2084+ for o in service_options))
2085+ for out, cond, result in results:
2086+ out.extend(option for option, match in result
2087+ if match is cond and option not in out)
2088 service_config = []
2089- service_config.append("listen %s %s:%s" %
2090- (service_name, service_ip, service_port))
2091- if service_options is not None:
2092- for service_option in service_options:
2093- service_config.append(" %s" % service_option.strip())
2094- if server_entries is not None and isinstance(server_entries, list):
2095- for (server_name, server_ip, server_port, server_options) \
2096- in server_entries:
2097+ unit_name = os.environ["JUJU_UNIT_NAME"].replace("/", "-")
2098+ service_config.append("frontend %s-%s" % (unit_name, service_port))
2099+ service_config.append(" bind %s:%s" %
2100+ (service_ip, service_port))
2101+ service_config.append(" default_backend %s" % (service_name,))
2102+ service_config.extend(" %s" % service_option.strip()
2103+ for service_option in fe_options)
2104+ service_config.append("")
2105+ service_config.append("backend %s" % (service_name,))
2106+ service_config.extend(" %s" % service_option.strip()
2107+ for service_option in be_options)
2108+ if isinstance(server_entries, (list, tuple)):
2109+ for (server_name, server_ip, server_port,
2110+ server_options) in server_entries:
2111 server_line = " server %s %s:%s" % \
2112- (server_name, server_ip, server_port)
2113+ (server_name, server_ip, server_port)
2114 if server_options is not None:
2115- server_line += " %s" % server_options
2116+ server_line += " %s" % " ".join(server_options)
2117 service_config.append(server_line)
2118- return('\n'.join(service_config))
2119+ return '\n'.join(service_config)
2120
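`create_listen_stanza` now emits a separate frontend (named after the unit and port) and backend, instead of a single `listen` block. A simplified sketch of the output shape, omitting the frontend/backend option filtering, with a hypothetical unit name and server entry:

```python
import os

os.environ["JUJU_UNIT_NAME"] = "haproxy/0"  # hypothetical unit

def create_listen_stanza(service_name, service_ip, service_port,
                         fe_options=(), be_options=(), server_entries=()):
    # Frontend is named after the unit and port; traffic is routed to
    # the named default backend.
    unit_name = os.environ["JUJU_UNIT_NAME"].replace("/", "-")
    lines = ["frontend %s-%s" % (unit_name, service_port),
             "    bind %s:%s" % (service_ip, service_port),
             "    default_backend %s" % service_name]
    lines.extend("    %s" % option for option in fe_options)
    lines.append("")
    lines.append("backend %s" % service_name)
    lines.extend("    %s" % option for option in be_options)
    for name, ip, port, options in server_entries:
        lines.append("    server %s %s:%s %s" % (name, ip, port,
                                                 " ".join(options)))
    return "\n".join(lines)

stanza = create_listen_stanza(
    "app", "0.0.0.0", 80,
    server_entries=[("web-0", "10.0.0.5", 8080, ["check"])])
print(stanza)
```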
2121
2122 #------------------------------------------------------------------------------
2123@@ -296,216 +298,234 @@
2124 def create_monitoring_stanza(service_name="haproxy_monitoring"):
2125 config_data = config_get()
2126 if config_data['enable_monitoring'] is False:
2127- return(None)
2128+ return None
2129 monitoring_password = get_monitoring_password()
2130 if config_data['monitoring_password'] != "changeme":
2131 monitoring_password = config_data['monitoring_password']
2132- elif monitoring_password is None and \
2133- config_data['monitoring_password'] == "changeme":
2134- monitoring_password = pwgen()
2135+ elif (monitoring_password is None and
2136+ config_data['monitoring_password'] == "changeme"):
2137+ monitoring_password = pwgen(length=20)
2138 monitoring_config = []
2139 monitoring_config.append("mode http")
2140 monitoring_config.append("acl allowed_cidr src %s" %
2141- config_data['monitoring_allowed_cidr'])
2142+ config_data['monitoring_allowed_cidr'])
2143 monitoring_config.append("block unless allowed_cidr")
2144 monitoring_config.append("stats enable")
2145 monitoring_config.append("stats uri /")
2146 monitoring_config.append("stats realm Haproxy\ Statistics")
2147 monitoring_config.append("stats auth %s:%s" %
2148- (config_data['monitoring_username'], monitoring_password))
2149+ (config_data['monitoring_username'],
2150+ monitoring_password))
2151 monitoring_config.append("stats refresh %d" %
2152- config_data['monitoring_stats_refresh'])
2153- return(create_listen_stanza(service_name,
2154+ config_data['monitoring_stats_refresh'])
2155+ return create_listen_stanza(service_name,
2156 "0.0.0.0",
2157 config_data['monitoring_port'],
2158- monitoring_config))
2159-
2160-def get_host_port(services_list):
2161- """
2162- Given a services list and global juju information, get a host
2163- and port for this system.
2164- """
2165- host = services_list[0]["service_host"]
2166- port = int(services_list[0]["service_port"])
2167- return (host, port)
2168-
2169+ monitoring_config)
2170+
2171+
2172+#------------------------------------------------------------------------------
2173+# get_config_services: Convenience function that returns a mapping containing
2174+# all of the services configuration
2175+#------------------------------------------------------------------------------
2176 def get_config_services():
2177- """
2178- Return dict of all services in the configuration, and in the relation
2179- where appropriate. If a relation contains a "services" key, read
2180- it in as yaml as is the case with the configuration. Set the host and
2181- port for any relation initiated service entry as those items cannot be
2182- known by the other side of the relation. In the case of a
2183- proxy configuration found, ensure the forward for option is set.
2184- """
2185 config_data = config_get()
2186- config_services_list = yaml.load(config_data['services'])
2187- (host, port) = get_host_port(config_services_list)
2188- all_relations = relation_get_all("reverseproxy")
2189- services_list = []
2190- if hasattr(all_relations, "iteritems"):
2191- for relid, reldata in all_relations.iteritems():
2192- for unit, relation_info in reldata.iteritems():
2193- if relation_info.has_key("services"):
2194- rservices = yaml.load(relation_info["services"])
2195- for r in rservices:
2196- r["service_host"] = host
2197- r["service_port"] = port
2198- port += 1
2199- services_list.extend(rservices)
2200- if len(services_list) == 0:
2201- services_list = config_services_list
2202- return(services_list)
2203+ services = {}
2204+ for service in yaml.safe_load(config_data['services']):
2205+ service_name = service["service_name"]
2206+ if not services:
2207+ # 'None' is used as a marker for the first service defined, which
2208+ # is used as the default service if a proxied server doesn't
2209+ # specify which service it is bound to.
2210+ services[None] = {"service_name": service_name}
2211+ if is_proxy(service_name) and ("option forwardfor" not in
2212+ service["service_options"]):
2213+ service["service_options"].append("option forwardfor")
2214+
2215+ if isinstance(service["server_options"], basestring):
2216+ service["server_options"] = service["server_options"].split()
2217+
2218+ services[service_name] = service
2219+
2220+ return services
2221
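The `None` key in the mapping built by `get_config_services` marks the first configured service as the default for proxied servers that don't name one, and string `server_options` are normalized into lists. A sketch of that transformation (Python 3 here; the service entries are hypothetical and the proxy flag check is omitted):

```python
# Hypothetical parsed 'services' config (as yaml.safe_load would return).
services_config = [
    {"service_name": "app", "server_options": "check inter 2000"},
    {"service_name": "api", "server_options": ["check"]},
]

services = {}
for service in services_config:
    service_name = service["service_name"]
    if not services:
        # First service defined becomes the default service marker.
        services[None] = {"service_name": service_name}
    # Normalize string server_options into a list of tokens.
    if isinstance(service["server_options"], str):
        service["server_options"] = service["server_options"].split()
    services[service_name] = service

print(services[None]["service_name"])
# → app
print(services["app"]["server_options"])
# → ['check', 'inter', '2000']
```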
2222
2223 #------------------------------------------------------------------------------
2224 # get_config_service: Convenience function that returns a dictionary
2225-# of the configuration of a given services configuration
2226+#                     of the configuration of a given service
2227 #------------------------------------------------------------------------------
2228 def get_config_service(service_name=None):
2229- services_list = get_config_services()
2230- for service_item in services_list:
2231- if service_item['service_name'] == service_name:
2232- return(service_item)
2233- return(None)
2234-
2235-
2236-def relation_get_all(relation_name):
2237- """
2238- Iterate through all relations, and return large data structure with the
2239- relation data set:
2240-
2241- @param relation_name: The name of the relation to check
2242-
2243- Returns:
2244-
2245- relation_id:
2246- unit:
2247- key: value
2248- key2: value
2249- """
2250- result = {}
2251- try:
2252- relids = subprocess.Popen(
2253- ['relation-ids', relation_name], stdout=subprocess.PIPE)
2254- for relid in [x.strip() for x in relids.stdout]:
2255- result[relid] = {}
2256- for unit in json.loads(
2257- subprocess.check_output(
2258- ['relation-list', '--format=json', '-r', relid])):
2259- result[relid][unit] = relation_get(None, unit, relid)
2260- return result
2261- except Exception, e:
2262- subprocess.call(['juju-log', str(e)])
2263-
2264-def get_services_dict():
2265- """
2266- Transform the services list into a dict for easier comprehension,
2267- and to ensure that we have only one entry per service type. If multiple
2268- relations specify the same server_name, try to union the servers
2269- entries.
2270- """
2271- services_list = get_config_services()
2272- services_dict = {}
2273-
2274- for service_item in services_list:
2275- if not hasattr(service_item, "iteritems"):
2276- juju_log("Each 'services' entry must be a dict: %s" % service_item)
2277- continue;
2278- if "service_name" not in service_item:
2279- juju_log("Missing 'service_name': %s" % service_item)
2280- continue;
2281- name = service_item["service_name"]
2282- options = service_item["service_options"]
2283- if name in services_dict:
2284- if "servers" in services_dict[name]:
2285- services_dict[name]["servers"].extend(service_item["servers"])
2286- else:
2287- services_dict[name] = service_item
2288- if os.path.exists("%s/%s.is.proxy" % (
2289- default_haproxy_service_config_dir, name)):
2290- if 'option forwardfor' not in options:
2291- options.append("option forwardfor")
2292-
2293- return services_dict
2294-
2295-def get_all_services():
2296- """
2297- Transform a services dict into an "all_services" relation setting expected
2298- by apache2. This is needed to ensure we have the port and hostname setting
2299- correct and in the proper format
2300- """
2301- services = get_services_dict()
2302- all_services = []
2303- for name in services:
2304- s = {"service_name": name,
2305- "service_port": services[name]["service_port"]}
2306- all_services.append(s)
2307- return all_services
2308+ return get_config_services().get(service_name, None)
2309+
2310+
2311+def is_proxy(service_name):
2312+ flag_path = os.path.join(default_haproxy_service_config_dir,
2313+ "%s.is.proxy" % service_name)
2314+ return os.path.exists(flag_path)
2315+
2316
2317 #------------------------------------------------------------------------------
2318 # create_services: Function that will create the services configuration
2319 # from the config data and/or relation information
2320 #------------------------------------------------------------------------------
2321 def create_services():
2322- services_list = get_config_services()
2323- services_dict = get_services_dict()
2324-
2325- # service definitions overwrites user specified haproxy file in
2326- # a pseudo-template form
2327- all_relations = relation_get_all("reverseproxy")
2328- for relid, reldata in all_relations.iteritems():
2329- for unit, relation_info in reldata.iteritems():
2330- if not isinstance(relation_info, dict):
2331- sys.exit(0)
2332- if "services" in relation_info:
2333- juju_log("Relation %s has services override defined" % relid)
2334- continue;
2335- if "hostname" not in relation_info or "port" not in relation_info:
2336- juju_log("Relation %s needs hostname and port defined" % relid)
2337- continue;
2338- juju_service_name = unit.rpartition('/')[0]
2339- # Mandatory switches ( hostname, port )
2340- server_name = "%s__%s" % (
2341- relation_info['hostname'].replace('.', '_'),
2342- relation_info['port'])
2343- server_ip = relation_info['hostname']
2344- server_port = relation_info['port']
2345- # Optional switches ( service_name )
2346- if 'service_name' in relation_info:
2347- if relation_info['service_name'] in services_dict:
2348- service_name = relation_info['service_name']
2349- else:
2350- juju_log("service %s does not exist." % (
2351- relation_info['service_name']))
2352- sys.exit(1)
2353- elif juju_service_name + '_service' in services_dict:
2354- service_name = juju_service_name + '_service'
2355+ services_dict = get_config_services()
2356+ if len(services_dict) == 0:
2357+ log("No services configured, exiting.")
2358+ return
2359+
2360+ relation_data = relations_of_type("reverseproxy")
2361+
2362+ for relation_info in relation_data:
2363+ unit = relation_info['__unit__']
2364+ juju_service_name = unit.rpartition('/')[0]
2365+
2366+ relation_ok = True
2367+ for required in ("port", "private-address", "hostname"):
2368+ if not required in relation_info:
2369+ log("No %s in relation data for '%s', skipping." %
2370+ (required, unit))
2371+ relation_ok = False
2372+ break
2373+
2374+ if not relation_ok:
2375+ continue
2376+
2377+ # Mandatory switches ( private-address, port )
2378+ host = relation_info['private-address']
2379+ port = relation_info['port']
2380+ server_name = ("%s-%s" % (unit.replace("/", "-"), port))
2381+
2382+ # Optional switches ( service_name, sitenames )
2383+ service_names = set()
2384+ if 'service_name' in relation_info:
2385+ if relation_info['service_name'] in services_dict:
2386+ service_names.add(relation_info['service_name'])
2387 else:
2388- service_name = services_list[0]['service_name']
2389+ log("Service '%s' does not exist." %
2390+ relation_info['service_name'])
2391+ continue
2392+
2393+ if 'sitenames' in relation_info:
2394+ sitenames = relation_info['sitenames'].split()
2395+ for sitename in sitenames:
2396+ if sitename in services_dict:
2397+ service_names.add(sitename)
2398+
2399+ if juju_service_name + "_service" in services_dict:
2400+ service_names.add(juju_service_name + "_service")
2401+
2402+ if juju_service_name in services_dict:
2403+ service_names.add(juju_service_name)
2404+
2405+ if not service_names:
2406+ service_names.add(services_dict[None]["service_name"])
2407+
2408+ for service_name in service_names:
2409+ service = services_dict[service_name]
2410+
2411 # Add the server entries
2412- if not 'servers' in services_dict[service_name]:
2413- services_dict[service_name]['servers'] = []
2414- services_dict[service_name]['servers'].append((
2415- server_name, server_ip, server_port,
2416- services_dict[service_name]['server_options']))
2417-
2418+ servers = service.setdefault("servers", [])
2419+ servers.append((server_name, host, port,
2420+ services_dict[service_name].get(
2421+ 'server_options', [])))
2422+
2423+ has_servers = False
2424+ for service_name, service in services_dict.iteritems():
2425+ if service.get("servers", []):
2426+ has_servers = True
2427+
2428+ if not has_servers:
2429+ log("No backend servers, exiting.")
2430+ return
2431+
2432+ del services_dict[None]
2433+ services_dict = apply_peer_config(services_dict)
2434+ write_service_config(services_dict)
2435+ return services_dict
2436+
2437+
2438+def apply_peer_config(services_dict):
2439+ peer_data = relations_of_type("peer")
2440+
2441+ peer_services = {}
2442+ for relation_info in peer_data:
2443+ unit_name = relation_info["__unit__"]
2444+ peer_services_data = relation_info.get("all_services")
2445+ if peer_services_data is None:
2446+ continue
2447+ service_data = yaml.safe_load(peer_services_data)
2448+ for service in service_data:
2449+ service_name = service["service_name"]
2450+ if service_name in services_dict:
2451+ peer_service = peer_services.setdefault(service_name, {})
2452+ peer_service["service_name"] = service_name
2453+ peer_service["service_host"] = service["service_host"]
2454+ peer_service["service_port"] = service["service_port"]
2455+ peer_service["service_options"] = ["balance leastconn",
2456+ "mode tcp",
2457+ "option tcplog"]
2458+ servers = peer_service.setdefault("servers", [])
2459+ servers.append((unit_name.replace("/", "-"),
2460+ relation_info["private-address"],
2461+ service["service_port"] + 1, ["check"]))
2462+
2463+ if not peer_services:
2464+ return services_dict
2465+
2466+ unit_name = os.environ["JUJU_UNIT_NAME"].replace("/", "-")
2467+ private_address = unit_get("private-address")
2468+ for service_name, peer_service in peer_services.iteritems():
2469+ original_service = services_dict[service_name]
2470+
2471+ # If the original service has timeout settings, copy them over to the
2472+ # peer service.
2473+ for option in original_service.get("service_options", ()):
2474+ if "timeout" in option:
2475+ peer_service["service_options"].append(option)
2476+
2477+ servers = peer_service["servers"]
2478+ # Add ourselves to the list of servers for the peer listen stanza.
2479+ servers.append((unit_name, private_address,
2480+ original_service["service_port"] + 1,
2481+ ["check"]))
2482+
2483+ # Make all but the first server in the peer listen stanza a backup
2484+ # server.
2485+ servers.sort()
2486+ for server in servers[1:]:
2487+ server[3].append("backup")
2488+
2489+ # Remap original service port, will now be used by peer listen stanza.
2490+ original_service["service_port"] += 1
2491+
2492+ # Remap original service to a new name, stuff peer listen stanza into
2493+        # its place.
2494+ be_service = service_name + "_be"
2495+ original_service["service_name"] = be_service
2496+ services_dict[be_service] = original_service
2497+ services_dict[service_name] = peer_service
2498+
2499+ return services_dict
2500+
2501+
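The heart of the peer rewiring described in this merge is the sort-then-mark step: servers in the peer listen stanza are sorted by unit name, the first stays active, and every other unit is marked `backup` so traffic funnels through a single haproxy unit (allowing `maxconn` to be enforced). A sketch with hypothetical unit entries, using mutable records as the charm does:

```python
# Hypothetical peer server records: [name, address, port, options].
# Note the relation arrives unordered.
servers = [
    ["haproxy-1", "10.0.0.2", 81, ["check"]],
    ["haproxy-0", "10.0.0.1", 81, ["check"]],
]

# Sort by unit name; all but the first become backup servers.
servers.sort()
for server in servers[1:]:
    server[3].append("backup")

print(servers)
# → [['haproxy-0', '10.0.0.1', 81, ['check']],
#    ['haproxy-1', '10.0.0.2', 81, ['check', 'backup']]]
```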
2502+def write_service_config(services_dict):
2503 # Construct the new haproxy.cfg file
2504- for service in services_dict:
2505- juju_log("Service: ", service)
2506- server_entries = None
2507- if 'servers' in services_dict[service]:
2508- server_entries = services_dict[service]['servers']
2509- service_config_file = "%s/%s.service" % (
2510- default_haproxy_service_config_dir,
2511- services_dict[service]['service_name'])
2512- with open(service_config_file, 'w') as service_config:
2513- service_config.write(
2514- create_listen_stanza(services_dict[service]['service_name'],
2515- services_dict[service]['service_host'],
2516- services_dict[service]['service_port'],
2517- services_dict[service]['service_options'],
2518- server_entries))
2519+ for service_key, service_config in services_dict.items():
2520+ log("Service: %s" % service_key)
2521+ server_entries = service_config.get('servers')
2522+
2523+ service_name = service_config["service_name"]
2524+ if not os.path.exists(default_haproxy_service_config_dir):
2525+ os.mkdir(default_haproxy_service_config_dir, 0600)
2526+ with open(os.path.join(default_haproxy_service_config_dir,
2527+ "%s.service" % service_name), 'w') as config:
2528+ config.write(create_listen_stanza(
2529+ service_name,
2530+ service_config['service_host'],
2531+ service_config['service_port'],
2532+ service_config['service_options'],
2533+ server_entries))
2534
2535
2536 #------------------------------------------------------------------------------
2537@@ -516,17 +536,19 @@
2538 services = ''
2539 if service_name is not None:
2540 if os.path.exists("%s/%s.service" %
2541- (default_haproxy_service_config_dir, service_name)):
2542- services = open("%s/%s.service" %
2543- (default_haproxy_service_config_dir, service_name)).read()
2544+ (default_haproxy_service_config_dir, service_name)):
2545+ with open("%s/%s.service" % (default_haproxy_service_config_dir,
2546+ service_name)) as f:
2547+ services = f.read()
2548 else:
2549 services = None
2550 else:
2551 for service in glob.glob("%s/*.service" %
2552- default_haproxy_service_config_dir):
2553- services += open(service).read()
2554- services += "\n\n"
2555- return(services)
2556+ default_haproxy_service_config_dir):
2557+ with open(service) as f:
2558+ services += f.read()
2559+ services += "\n\n"
2560+ return services
2561
2562
2563 #------------------------------------------------------------------------------
2564@@ -537,24 +559,24 @@
2565 #------------------------------------------------------------------------------
2566 def remove_services(service_name=None):
2567 if service_name is not None:
2568- if os.path.exists("%s/%s.service" %
2569- (default_haproxy_service_config_dir, service_name)):
2570+ path = "%s/%s.service" % (default_haproxy_service_config_dir,
2571+ service_name)
2572+ if os.path.exists(path):
2573 try:
2574- os.remove("%s/%s.service" %
2575- (default_haproxy_service_config_dir, service_name))
2576- return(True)
2577+ os.remove(path)
2578 except Exception, e:
2579- subprocess.call(['juju-log', str(e)])
2580- return(False)
2581+ log(str(e))
2582+ return False
2583+ return True
2584 else:
2585 for service in glob.glob("%s/*.service" %
2586- default_haproxy_service_config_dir):
2587+ default_haproxy_service_config_dir):
2588 try:
2589 os.remove(service)
2590 except Exception, e:
2591- subprocess.call(['juju-log', str(e)])
2592+ log(str(e))
2593 pass
2594- return(True)
2595+ return True
2596
2597
2598 #------------------------------------------------------------------------------
2599@@ -567,27 +589,18 @@
2600 # optional arguments
2601 #------------------------------------------------------------------------------
2602 def construct_haproxy_config(haproxy_globals=None,
2603- haproxy_defaults=None,
2604- haproxy_monitoring=None,
2605- haproxy_services=None):
2606- if haproxy_globals is None or \
2607- haproxy_defaults is None:
2608- return(None)
2609+ haproxy_defaults=None,
2610+ haproxy_monitoring=None,
2611+ haproxy_services=None):
2612+ if None in (haproxy_globals, haproxy_defaults):
2613+ return
2614 with open(default_haproxy_config, 'w') as haproxy_config:
2615- haproxy_config.write(haproxy_globals)
2616- haproxy_config.write("\n")
2617- haproxy_config.write("\n")
2618- haproxy_config.write(haproxy_defaults)
2619- haproxy_config.write("\n")
2620- haproxy_config.write("\n")
2621- if haproxy_monitoring is not None:
2622- haproxy_config.write(haproxy_monitoring)
2623- haproxy_config.write("\n")
2624- haproxy_config.write("\n")
2625- if haproxy_services is not None:
2626- haproxy_config.write(haproxy_services)
2627- haproxy_config.write("\n")
2628- haproxy_config.write("\n")
2629+ config_string = ''
2630+ for config in (haproxy_globals, haproxy_defaults, haproxy_monitoring,
2631+ haproxy_services):
2632+ if config is not None:
2633+ config_string += config + '\n\n'
2634+ haproxy_config.write(config_string)
2635
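The rewritten `construct_haproxy_config` collapses the repeated `write`/`"\n"` calls into a single pass that joins the non-None sections with blank-line separators. A sketch of that assembly, with hypothetical section bodies and the file write omitted:

```python
# Sections in the order haproxy.cfg expects; None entries are skipped.
sections = ("global\n    daemon",       # haproxy_globals (hypothetical)
            "defaults\n    mode http",  # haproxy_defaults (hypothetical)
            None,                       # no monitoring stanza
            None)                       # no services yet

config_string = ''
for config in sections:
    if config is not None:
        config_string += config + '\n\n'
print(config_string)
```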
2636
2637 #------------------------------------------------------------------------------
2638@@ -595,50 +608,37 @@
2639 # the haproxy service
2640 #------------------------------------------------------------------------------
2641 def service_haproxy(action=None, haproxy_config=default_haproxy_config):
2642- if action is None or haproxy_config is None:
2643- return(None)
2644+ if None in (action, haproxy_config):
2645+ return None
2646 elif action == "check":
2647- retVal = subprocess.call(
2648- ['/usr/sbin/haproxy', '-f', haproxy_config, '-c'])
2649- if retVal == 1:
2650- return(False)
2651- elif retVal == 0:
2652- return(True)
2653- else:
2654- return(False)
2655+ command = ['/usr/sbin/haproxy', '-f', haproxy_config, '-c']
2656 else:
2657- retVal = subprocess.call(['service', 'haproxy', action])
2658- if retVal == 0:
2659- return(True)
2660- else:
2661- return(False)
2662-
2663-def website_notify():
2664- """
2665- Notify any webiste relations of any configuration changes.
2666- """
2667- juju_log("Notifying all website relations of change")
2668- all_relations = relation_get_all("website")
2669- if hasattr(all_relations, "iteritems"):
2670- for relid, reldata in all_relations.iteritems():
2671- relation_set(["time=%s" % time.time()], relation_id=relid)
2672+ command = ['service', 'haproxy', action]
2673+ return_value = subprocess.call(command)
2674+ return return_value == 0
2675
2676
2677 ###############################################################################
2678 # Hook functions
2679 ###############################################################################
2680 def install_hook():
2681- for f in glob.glob('exec.d/*/charm-pre-install'):
2682- if os.path.isfile(f) and os.access(f, os.X_OK):
2683- subprocess.check_call(['sh', '-c', f])
2684 if not os.path.exists(default_haproxy_service_config_dir):
2685 os.mkdir(default_haproxy_service_config_dir, 0600)
2686- return ((apt_get_install("haproxy") == enable_haproxy()) is True)
2687+
2688+ apt_install('haproxy', fatal=True)
2689+ ensure_package_status(service_affecting_packages,
2690+ config_get('package_status'))
2691+ enable_haproxy()
2692
2693
2694 def config_changed():
2695 config_data = config_get()
2696- current_service_ports = get_service_ports()
2697+
2698+ ensure_package_status(service_affecting_packages,
2699+ config_data['package_status'])
2700+
2701+ old_service_ports = get_service_ports()
2702+ old_stanzas = get_listen_stanzas()
2703 haproxy_globals = create_haproxy_globals()
2704 haproxy_defaults = create_haproxy_defaults()
2705 if config_data['enable_monitoring'] is True:
2706@@ -648,105 +648,177 @@
2707 remove_services()
2708 create_services()
2709 haproxy_services = load_services()
2710+ update_sysctl(config_data)
2711 construct_haproxy_config(haproxy_globals,
2712 haproxy_defaults,
2713 haproxy_monitoring,
2714 haproxy_services)
2715
2716 if service_haproxy("check"):
2717- updated_service_ports = get_service_ports()
2718- update_service_ports(current_service_ports, updated_service_ports)
2719+ update_service_ports(old_service_ports, get_service_ports())
2720 service_haproxy("reload")
2721+ if not (get_listen_stanzas() == old_stanzas):
2722+ notify_website()
2723+ notify_peer()
2724+ else:
2725+ # XXX Ideally the config should be restored to a working state if the
2726+ # check fails, otherwise an inadvertent reload will cause the service
2727+ # to be broken.
2728+ log("HAProxy configuration check failed, exiting.")
2729+ sys.exit(1)
2730
2731
2732 def start_hook():
2733 if service_haproxy("status"):
2734- return(service_haproxy("restart"))
2735+ return service_haproxy("restart")
2736 else:
2737- return(service_haproxy("start"))
2738+ return service_haproxy("start")
2739
2740
2741 def stop_hook():
2742 if service_haproxy("status"):
2743- return(service_haproxy("stop"))
2744+ return service_haproxy("stop")
2745
2746
2747 def reverseproxy_interface(hook_name=None):
2748 if hook_name is None:
2749- return(None)
2750- elif hook_name == "changed":
2751- config_changed()
2752- website_notify()
2753- elif hook_name=="departed":
2754- config_changed()
2755- website_notify()
2756+ return None
2757+ if hook_name in ("changed", "departed"):
2758+ config_changed()
2759+
2760
2761 def website_interface(hook_name=None):
2762 if hook_name is None:
2763- return(None)
2764+ return None
2765+ # Notify website relation but only for the current relation in context.
2766+ notify_website(changed=hook_name == "changed",
2767+ relation_ids=(relation_id(),))
2768+
2769+
2770+def get_hostname(host=None):
2771+ my_host = socket.gethostname()
2772+ if host is None or host == "0.0.0.0":
2773+ # If the listen ip has been set to 0.0.0.0 then pass back the hostname
2774+ return socket.getfqdn(my_host)
2775+ elif host == "localhost":
2776+ # If the fqdn lookup has returned localhost (lxc setups) then return
2777+ # hostname
2778+ return my_host
2779+ return host
2780+
2781+
2782+def notify_relation(relation, changed=False, relation_ids=None):
2783+ config_data = config_get()
2784+ default_host = get_hostname()
2785 default_port = 80
2786- relation_data = relation_get()
2787-
2788- # If a specfic service has been asked for then return the ip:port for
2789- # that service, else pass back the default
2790- if 'service_name' in relation_data:
2791- service_name = relation_data['service_name']
2792- requestedservice = get_config_service(service_name)
2793- my_host = requestedservice['service_host']
2794- my_port = requestedservice['service_port']
2795- else:
2796- my_host = socket.getfqdn(socket.gethostname())
2797- my_port = default_port
2798-
2799- # If the listen ip has been set to 0.0.0.0 then pass back the hostname
2800- if my_host == "0.0.0.0":
2801- my_host = socket.getfqdn(socket.gethostname())
2802-
2803- # If the fqdn lookup has returned localhost (lxc setups) then return
2804- # hostname
2805- if my_host == "localhost":
2806- my_host = socket.gethostname()
2807- subprocess.call(
2808- ['relation-set', 'port=%d' % my_port, 'hostname=%s' % my_host,
2809- 'all_services=%s' % yaml.safe_dump(get_all_services())])
2810- if hook_name == "changed":
2811- if 'is-proxy' in relation_data:
2812- service_name = "%s__%d" % \
2813- (relation_data['hostname'], relation_data['port'])
2814- open("%s/%s.is.proxy" %
2815- (default_haproxy_service_config_dir, service_name), 'a').close()
2816+
2817+ for rid in relation_ids or get_relation_ids(relation):
2818+ service_names = set()
2819+ if rid is None:
2820+ rid = relation_id()
2821+ for relation_data in relations_for_id(rid):
2822+ if 'service_name' in relation_data:
2823+ service_names.add(relation_data['service_name'])
2824+
2825+ if changed:
2826+ if 'is-proxy' in relation_data:
2827+ remote_service = ("%s__%d" % (relation_data['hostname'],
2828+ relation_data['port']))
2829+ open("%s/%s.is.proxy" % (
2830+ default_haproxy_service_config_dir,
2831+ remote_service), 'a').close()
2832+
2833+ service_name = None
2834+ if len(service_names) == 1:
2835+ service_name = service_names.pop()
2836+ elif len(service_names) > 1:
2837+            log("Remote units requested more than a single service name. "
2838+                "Falling back to default host/port.")
2839+
2840+ if service_name is not None:
2841+            # If a specific service has been asked for then return the ip:port
2842+ # for that service, else pass back the default
2843+ requestedservice = get_config_service(service_name)
2844+ my_host = get_hostname(requestedservice['service_host'])
2845+ my_port = requestedservice['service_port']
2846+ else:
2847+ my_host = default_host
2848+ my_port = default_port
2849+
2850+ relation_set(relation_id=rid, port=str(my_port),
2851+ hostname=my_host,
2852+ all_services=config_data['services'])
2853+
2854+
2855+def notify_website(changed=False, relation_ids=None):
2856+ notify_relation("website", changed=changed, relation_ids=relation_ids)
2857+
2858+
2859+def notify_peer(changed=False, relation_ids=None):
2860+ notify_relation("peer", changed=changed, relation_ids=relation_ids)
2861+
2862+
2863+def install_nrpe_scripts():
2864+ scripts_src = os.path.join(os.environ["CHARM_DIR"], "files",
2865+ "nrpe")
2866+ scripts_dst = "/usr/lib/nagios/plugins"
2867+ if not os.path.exists(scripts_dst):
2868+ os.makedirs(scripts_dst)
2869+ for fname in glob.glob(os.path.join(scripts_src, "*.sh")):
2870+ shutil.copy2(fname,
2871+ os.path.join(scripts_dst, os.path.basename(fname)))
2872+
2873
2874 def update_nrpe_config():
2875+ install_nrpe_scripts()
2876 nrpe_compat = nrpe.NRPE()
2877- nrpe_compat.add_check('haproxy','Check HAProxy', 'check_haproxy.sh')
2878- nrpe_compat.add_check('haproxy_queue','Check HAProxy queue depth', 'check_haproxy_queue_depth.sh')
2879+ nrpe_compat.add_check('haproxy', 'Check HAProxy', 'check_haproxy.sh')
2880+ nrpe_compat.add_check('haproxy_queue', 'Check HAProxy queue depth',
2881+ 'check_haproxy_queue_depth.sh')
2882 nrpe_compat.write()
2883
2884+
2885 ###############################################################################
2886 # Main section
2887 ###############################################################################
2888-if __name__ == "__main__":
2889- if HOOK_NAME == "install":
2890+
2891+
2892+def main(hook_name):
2893+ if hook_name == "install":
2894 install_hook()
2895- elif HOOK_NAME == "config-changed":
2896+ elif hook_name in ("config-changed", "upgrade-charm"):
2897 config_changed()
2898 update_nrpe_config()
2899- elif HOOK_NAME == "start":
2900+ elif hook_name == "start":
2901 start_hook()
2902- elif HOOK_NAME == "stop":
2903+ elif hook_name == "stop":
2904 stop_hook()
2905- elif HOOK_NAME == "reverseproxy-relation-broken":
2906+ elif hook_name == "reverseproxy-relation-broken":
2907 config_changed()
2908- elif HOOK_NAME == "reverseproxy-relation-changed":
2909+ elif hook_name == "reverseproxy-relation-changed":
2910 reverseproxy_interface("changed")
2911- elif HOOK_NAME == "reverseproxy-relation-departed":
2912+ elif hook_name == "reverseproxy-relation-departed":
2913 reverseproxy_interface("departed")
2914- elif HOOK_NAME == "website-relation-joined":
2915+ elif hook_name == "website-relation-joined":
2916 website_interface("joined")
2917- elif HOOK_NAME == "website-relation-changed":
2918+ elif hook_name == "website-relation-changed":
2919 website_interface("changed")
2920- elif HOOK_NAME == "nrpe-external-master-relation-changed":
2921+ elif hook_name == "peer-relation-joined":
2922+ website_interface("joined")
2923+ elif hook_name == "peer-relation-changed":
2924+ reverseproxy_interface("changed")
2925+ elif hook_name in ("nrpe-external-master-relation-joined",
2926+ "local-monitors-relation-joined"):
2927 update_nrpe_config()
2928 else:
2929 print "Unknown hook"
2930 sys.exit(1)
2931+
2932+if __name__ == "__main__":
2933+ hook_name = os.path.basename(sys.argv[0])
2934+    # Also support being invoked directly with the hook name as an argument.
2935+ if hook_name == "hooks.py":
2936+ if len(sys.argv) < 2:
2937+ sys.exit("Missing required hook name argument.")
2938+ hook_name = sys.argv[1]
2939+ main(hook_name)
2940
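Review note: the new `get_hostname` helper above collapses three cases (wildcard bind address, lxc setups where the fqdn lookup yields `localhost`, and an explicit host). A minimal standalone sketch of the same fallback chain, with the `socket` calls made injectable so the behaviour is deterministic; the parameter names here are illustrative, not part of the charm:

```python
import socket


def get_hostname(host=None, gethostname=socket.gethostname,
                 getfqdn=socket.getfqdn):
    # Resolver functions are injectable so a test need not touch the network.
    my_host = gethostname()
    if host is None or host == "0.0.0.0":
        # A 0.0.0.0 bind address is meaningless to peers; advertise the fqdn.
        return getfqdn(my_host)
    elif host == "localhost":
        # lxc setups resolve the fqdn to localhost; fall back to the hostname.
        return my_host
    return host


print(get_hostname("10.0.1.2",
                   lambda: "unit-0",
                   lambda h: h + ".example.com"))  # -> 10.0.1.2
```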
2941=== modified symlink 'hooks/install' (properties changed: -x to +x)
2942=== target was u'./hooks.py'
2943--- hooks/install 1970-01-01 00:00:00 +0000
2944+++ hooks/install 2013-10-16 14:05:24 +0000
2945@@ -0,0 +1,13 @@
2946+#!/bin/sh
2947+
2948+set -eu
2949+
2950+apt_get_install() {
2951+ DEBIAN_FRONTEND=noninteractive apt-get -y -qq -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install $@
2952+}
2953+
2954+juju-log 'Invoking charm-pre-install hooks'
2955+[ -d exec.d ] && ( for f in exec.d/*/charm-pre-install; do [ -x $f ] && /bin/sh -c "$f"; done )
2956+
2957+juju-log 'Invoking python-based install hook'
2958+python hooks/hooks.py install
2959
2960=== added symlink 'hooks/local-monitors-relation-joined'
2961=== target is u'./hooks.py'
2962=== renamed symlink 'hooks/nrpe-external-master-relation-changed' => 'hooks/nrpe-external-master-relation-joined'
2963=== removed file 'hooks/nrpe.py'
2964--- hooks/nrpe.py 2012-12-21 11:08:58 +0000
2965+++ hooks/nrpe.py 1970-01-01 00:00:00 +0000
2966@@ -1,170 +0,0 @@
2967-import json
2968-import subprocess
2969-import pwd
2970-import grp
2971-import os
2972-import re
2973-import shlex
2974-
2975-# This module adds compatibility with the nrpe_external_master
2976-# subordinate charm. To use it in your charm:
2977-#
2978-# 1. Update metadata.yaml
2979-#
2980-# provides:
2981-# (...)
2982-# nrpe-external-master:
2983-# interface: nrpe-external-master
2984-# scope: container
2985-#
2986-# 2. Add the following to config.yaml
2987-#
2988-# nagios_context:
2989-# default: "juju"
2990-# type: string
2991-# description: |
2992-# Used by the nrpe-external-master subordinate charm.
2993-# A string that will be prepended to instance name to set the host name
2994-# in nagios. So for instance the hostname would be something like:
2995-# juju-myservice-0
2996-# If you're running multiple environments with the same services in them
2997-# this allows you to differentiate between them.
2998-#
2999-# 3. Add custom checks (Nagios plugins) to files/nrpe-external-master
3000-#
3001-# 4. Update your hooks.py with something like this:
3002-#
3003-# import nrpe
3004-# (...)
3005-# def update_nrpe_config():
3006-# nrpe_compat = NRPE("myservice")
3007-# nrpe_compat.add_check(
3008-# shortname = "myservice",
3009-# description = "Check MyService",
3010-# check_cmd = "check_http -w 2 -c 10 http://localhost"
3011-# )
3012-# nrpe_compat.add_check(
3013-# "myservice_other",
3014-# "Check for widget failures",
3015-# check_cmd = "/srv/myapp/scripts/widget_check"
3016-# )
3017-# nrpe_compat.write()
3018-#
3019-# def config_changed():
3020-# (...)
3021-# update_nrpe_config()
3022-
3023-class ConfigurationError(Exception):
3024- '''An error interacting with the Juju config'''
3025- pass
3026-def config_get(scope=None):
3027- '''Return the Juju config as a dictionary'''
3028- try:
3029- config_cmd_line = ['config-get']
3030- if scope is not None:
3031- config_cmd_line.append(scope)
3032- config_cmd_line.append('--format=json')
3033- return json.loads(subprocess.check_output(config_cmd_line))
3034- except (ValueError, OSError, subprocess.CalledProcessError) as error:
3035- subprocess.call(['juju-log', str(error)])
3036- raise ConfigurationError(str(error))
3037-
3038-class CheckException(Exception): pass
3039-class Check(object):
3040- shortname_re = '[A-Za-z0-9-_]*'
3041- service_template = """
3042-#---------------------------------------------------
3043-# This file is Juju managed
3044-#---------------------------------------------------
3045-define service {{
3046- use active-service
3047- host_name {nagios_hostname}
3048- service_description {nagios_hostname} {shortname} {description}
3049- check_command check_nrpe!check_{shortname}
3050- servicegroups {nagios_servicegroup}
3051-}}
3052-"""
3053- def __init__(self, shortname, description, check_cmd):
3054- super(Check, self).__init__()
3055- # XXX: could be better to calculate this from the service name
3056- if not re.match(self.shortname_re, shortname):
3057- raise CheckException("shortname must match {}".format(Check.shortname_re))
3058- self.shortname = shortname
3059- self.description = description
3060- self.check_cmd = self._locate_cmd(check_cmd)
3061-
3062- def _locate_cmd(self, check_cmd):
3063- search_path = (
3064- '/',
3065- os.path.join(os.environ['CHARM_DIR'], 'files/nrpe-external-master'),
3066- '/usr/lib/nagios/plugins',
3067- )
3068- command = shlex.split(check_cmd)
3069- for path in search_path:
3070- if os.path.exists(os.path.join(path,command[0])):
3071- return os.path.join(path, command[0]) + " " + " ".join(command[1:])
3072- subprocess.call(['juju-log', 'Check command not found: {}'.format(command[0])])
3073- return ''
3074-
3075- def write(self, nagios_context, hostname):
3076- for f in os.listdir(NRPE.nagios_exportdir):
3077- if re.search('.*check_{}.cfg'.format(self.shortname), f):
3078- os.remove(os.path.join(NRPE.nagios_exportdir, f))
3079-
3080- templ_vars = {
3081- 'nagios_hostname': hostname,
3082- 'nagios_servicegroup': nagios_context,
3083- 'description': self.description,
3084- 'shortname': self.shortname,
3085- }
3086- nrpe_service_text = Check.service_template.format(**templ_vars)
3087- nrpe_service_file = '{}/service__{}_check_{}.cfg'.format(
3088- NRPE.nagios_exportdir, hostname, self.shortname)
3089- with open(nrpe_service_file, 'w') as nrpe_service_config:
3090- nrpe_service_config.write(str(nrpe_service_text))
3091-
3092- nrpe_check_file = '/etc/nagios/nrpe.d/check_{}.cfg'.format(self.shortname)
3093- with open(nrpe_check_file, 'w') as nrpe_check_config:
3094- nrpe_check_config.write("# check {}\n".format(self.shortname))
3095- nrpe_check_config.write("command[check_{}]={}\n".format(
3096- self.shortname, self.check_cmd))
3097-
3098- def run(self):
3099- subprocess.call(self.check_cmd)
3100-
3101-class NRPE(object):
3102- nagios_logdir = '/var/log/nagios'
3103- nagios_exportdir = '/var/lib/nagios/export'
3104- nrpe_confdir = '/etc/nagios/nrpe.d'
3105- def __init__(self):
3106- super(NRPE, self).__init__()
3107- self.config = config_get()
3108- self.nagios_context = self.config['nagios_context']
3109- self.unit_name = os.environ['JUJU_UNIT_NAME'].replace('/', '-')
3110- self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
3111- self.checks = []
3112-
3113- def add_check(self, *args, **kwargs):
3114- self.checks.append( Check(*args, **kwargs) )
3115-
3116- def write(self):
3117- try:
3118- nagios_uid = pwd.getpwnam('nagios').pw_uid
3119- nagios_gid = grp.getgrnam('nagios').gr_gid
3120- except:
3121- subprocess.call(['juju-log', "Nagios user not set up, nrpe checks not updated"])
3122- return
3123-
3124- if not os.path.exists(NRPE.nagios_exportdir):
3125- subprocess.call(['juju-log', 'Exiting as {} is not accessible'.format(NRPE.nagios_exportdir)])
3126- return
3127-
3128- if not os.path.exists(NRPE.nagios_logdir):
3129- os.mkdir(NRPE.nagios_logdir)
3130- os.chown(NRPE.nagios_logdir, nagios_uid, nagios_gid)
3131-
3132- for nrpecheck in self.checks:
3133- nrpecheck.write(self.nagios_context, self.hostname)
3134-
3135- if os.path.isfile('/etc/init.d/nagios-nrpe-server'):
3136- subprocess.call(['service', 'nagios-nrpe-server', 'reload'])
3137
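Review note: the removed `hooks/nrpe.py` is superseded by `charmhelpers.contrib.charmsupport.nrpe`, but the on-disk format it produced is worth keeping in mind. Per the deleted `Check.write` above, each check renders a small nrpe.d fragment; a sketch of that rendering (a helper name assumed for illustration, not a charmhelpers API):

```python
def render_nrpe_check(shortname, check_cmd):
    # Mirrors the fragment the removed Check.write() placed under
    # /etc/nagios/nrpe.d/check_<shortname>.cfg.
    return ("# check {0}\n"
            "command[check_{0}]={1}\n").format(shortname, check_cmd)


print(render_nrpe_check("haproxy",
                        "/usr/lib/nagios/plugins/check_haproxy.sh"))
```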
3138=== added symlink 'hooks/peer-relation-changed'
3139=== target is u'./hooks.py'
3140=== added symlink 'hooks/peer-relation-joined'
3141=== target is u'./hooks.py'
3142=== removed file 'hooks/test_hooks.py'
3143--- hooks/test_hooks.py 2013-02-14 21:35:47 +0000
3144+++ hooks/test_hooks.py 1970-01-01 00:00:00 +0000
3145@@ -1,263 +0,0 @@
3146-import hooks
3147-import yaml
3148-from textwrap import dedent
3149-from mocker import MockerTestCase, ARGS
3150-
3151-class JujuHookTest(MockerTestCase):
3152-
3153- def setUp(self):
3154- self.config_services = [{
3155- "service_name": "haproxy_test",
3156- "service_host": "0.0.0.0",
3157- "service_port": "88",
3158- "service_options": ["balance leastconn"],
3159- "server_options": "maxconn 25"}]
3160- self.config_services_extended = [
3161- {"service_name": "unit_service",
3162- "service_host": "supplied-hostname",
3163- "service_port": "999",
3164- "service_options": ["balance leastconn"],
3165- "server_options": "maxconn 99"}]
3166- self.relation_services = [
3167- {"service_name": "foo_svc",
3168- "service_options": ["balance leastconn"],
3169- "servers": [("A", "hA", "1", "oA1 oA2")]},
3170- {"service_name": "bar_svc",
3171- "service_options": ["balance leastconn"],
3172- "servers": [
3173- ("A", "hA", "1", "oA1 oA2"), ("B", "hB", "2", "oB1 oB2")]}]
3174- self.relation_services2 = [
3175- {"service_name": "foo_svc",
3176- "service_options": ["balance leastconn"],
3177- "servers": [("A2", "hA2", "12", "oA12 oA22")]}]
3178- hooks.default_haproxy_config_dir = self.makeDir()
3179- hooks.default_haproxy_config = self.makeFile()
3180- hooks.default_haproxy_service_config_dir = self.makeDir()
3181- obj = self.mocker.replace("hooks.juju_log")
3182- obj(ARGS)
3183- self.mocker.count(0, None)
3184- obj = self.mocker.replace("hooks.unit_get")
3185- obj("public-address")
3186- self.mocker.result("test-host.example.com")
3187- self.mocker.count(0, None)
3188- self.maxDiff = None
3189-
3190- def _expect_config_get(self, **kwargs):
3191- result = {
3192- "default_timeouts": "queue 1000, connect 1000, client 1000, server 1000",
3193- "global_log": "127.0.0.1 local0, 127.0.0.1 local1 notice",
3194- "global_spread_checks": 0,
3195- "monitoring_allowed_cidr": "127.0.0.1/32",
3196- "monitoring_username": "haproxy",
3197- "default_log": "global",
3198- "global_group": "haproxy",
3199- "monitoring_stats_refresh": 3,
3200- "default_retries": 3,
3201- "services": yaml.dump(self.config_services),
3202- "global_maxconn": 4096,
3203- "global_user": "haproxy",
3204- "default_options": "httplog, dontlognull",
3205- "monitoring_port": 10000,
3206- "global_debug": False,
3207- "nagios_context": "juju",
3208- "global_quiet": False,
3209- "enable_monitoring": False,
3210- "monitoring_password": "changeme",
3211- "default_mode": "http"}
3212- obj = self.mocker.replace("hooks.config_get")
3213- obj()
3214- result.update(kwargs)
3215- self.mocker.result(result)
3216- self.mocker.count(1, None)
3217-
3218- def _expect_relation_get_all(self, relation, extra={}):
3219- obj = self.mocker.replace("hooks.relation_get_all")
3220- obj(relation)
3221- relation = {"hostname": "10.0.1.2",
3222- "private-address": "10.0.1.2",
3223- "port": "10000"}
3224- relation.update(extra)
3225- result = {"1": {"unit/0": relation}}
3226- self.mocker.result(result)
3227- self.mocker.count(1, None)
3228-
3229- def _expect_relation_get_all_multiple(self, relation_name):
3230- obj = self.mocker.replace("hooks.relation_get_all")
3231- obj(relation_name)
3232- result = {
3233- "1": {"unit/0": {
3234- "hostname": "10.0.1.2",
3235- "private-address": "10.0.1.2",
3236- "port": "10000",
3237- "services": yaml.dump(self.relation_services)}},
3238- "2": {"unit/1": {
3239- "hostname": "10.0.1.3",
3240- "private-address": "10.0.1.3",
3241- "port": "10001",
3242- "services": yaml.dump(self.relation_services2)}}}
3243- self.mocker.result(result)
3244- self.mocker.count(1, None)
3245-
3246- def _expect_relation_get_all_with_services(self, relation, extra={}):
3247- extra.update({"services": yaml.dump(self.relation_services)})
3248- return self._expect_relation_get_all(relation, extra)
3249-
3250- def _expect_relation_get(self):
3251- obj = self.mocker.replace("hooks.relation_get")
3252- obj()
3253- result = {}
3254- self.mocker.result(result)
3255- self.mocker.count(1, None)
3256-
3257- def test_create_services(self):
3258- """
3259- Simplest use case, config stanza seeded in config file, server line
3260- added through simple relation. Many servers can join this, but
3261- multiple services will not be presented to the outside
3262- """
3263- self._expect_config_get()
3264- self._expect_relation_get_all("reverseproxy")
3265- self.mocker.replay()
3266- hooks.create_services()
3267- services = hooks.load_services()
3268- stanza = """\
3269- listen haproxy_test 0.0.0.0:88
3270- balance leastconn
3271- server 10_0_1_2__10000 10.0.1.2:10000 maxconn 25
3272-
3273- """
3274- self.assertEquals(services, dedent(stanza))
3275-
3276- def test_create_services_extended_with_relation(self):
3277- """
3278- This case covers specifying an up-front services file to ha-proxy
3279- in the config. The relation then specifies a singular hostname,
3280- port and server_options setting which is filled into the appropriate
3281- haproxy stanza based on multiple criteria.
3282- """
3283- self._expect_config_get(
3284- services=yaml.dump(self.config_services_extended))
3285- self._expect_relation_get_all("reverseproxy")
3286- self.mocker.replay()
3287- hooks.create_services()
3288- services = hooks.load_services()
3289- stanza = """\
3290- listen unit_service supplied-hostname:999
3291- balance leastconn
3292- server 10_0_1_2__10000 10.0.1.2:10000 maxconn 99
3293-
3294- """
3295- self.assertEquals(dedent(stanza), services)
3296-
3297- def test_create_services_pure_relation(self):
3298- """
3299- In this case, the relation is in control of the haproxy config file.
3300- Each relation chooses what server it creates in the haproxy file, it
3301- relies on the haproxy service only for the hostname and front-end port.
3302- Each member of the relation will put a backend server entry under in
3303- the desired stanza. Each realtion can in fact supply multiple
3304- entries from the same juju service unit if desired.
3305- """
3306- self._expect_config_get()
3307- self._expect_relation_get_all_with_services("reverseproxy")
3308- self.mocker.replay()
3309- hooks.create_services()
3310- services = hooks.load_services()
3311- stanza = """\
3312- listen foo_svc 0.0.0.0:88
3313- balance leastconn
3314- server A hA:1 oA1 oA2
3315- """
3316- self.assertIn(dedent(stanza), services)
3317- stanza = """\
3318- listen bar_svc 0.0.0.0:89
3319- balance leastconn
3320- server A hA:1 oA1 oA2
3321- server B hB:2 oB1 oB2
3322- """
3323- self.assertIn(dedent(stanza), services)
3324-
3325- def test_create_services_pure_relation_multiple(self):
3326- """
3327- This is much liek the pure_relation case, where the relation specifies
3328- a "services" override. However, in this case we have multiple relations
3329- that partially override each other. We expect that the created haproxy
3330- conf file will combine things appropriately.
3331- """
3332- self._expect_config_get()
3333- self._expect_relation_get_all_multiple("reverseproxy")
3334- self.mocker.replay()
3335- hooks.create_services()
3336- result = hooks.load_services()
3337- stanza = """\
3338- listen foo_svc 0.0.0.0:88
3339- balance leastconn
3340- server A hA:1 oA1 oA2
3341- server A2 hA2:12 oA12 oA22
3342- """
3343- self.assertIn(dedent(stanza), result)
3344- stanza = """\
3345- listen bar_svc 0.0.0.0:89
3346- balance leastconn
3347- server A hA:1 oA1 oA2
3348- server B hB:2 oB1 oB2
3349- """
3350- self.assertIn(dedent(stanza), result)
3351-
3352- def test_get_config_services_config_only(self):
3353- """
3354- Attempting to catch the case where a relation is not joined yet
3355- """
3356- self._expect_config_get()
3357- obj = self.mocker.replace("hooks.relation_get_all")
3358- obj("reverseproxy")
3359- self.mocker.result(None)
3360- self.mocker.replay()
3361- result = hooks.get_config_services()
3362- self.assertEquals(result, self.config_services)
3363-
3364- def test_get_config_services_relation_no_services(self):
3365- """
3366- If the config specifies services and the realtion does not, just the
3367- config services should come through.
3368- """
3369- self._expect_config_get()
3370- self._expect_relation_get_all("reverseproxy")
3371- self.mocker.replay()
3372- result = hooks.get_config_services()
3373- self.assertEquals(result, self.config_services)
3374-
3375- def test_get_config_services_relation_with_services(self):
3376- """
3377- Testing with both the config and relation providing services should
3378- yield the just the relation
3379- """
3380- self._expect_config_get()
3381- self._expect_relation_get_all_with_services("reverseproxy")
3382- self.mocker.replay()
3383- result = hooks.get_config_services()
3384- # Just test "servers" since hostname and port and maybe other keys
3385- # will be added by the hook
3386- self.assertEquals(result[0]["servers"],
3387- self.relation_services[0]["servers"])
3388-
3389- def test_config_generation_indempotent(self):
3390- self._expect_config_get()
3391- self._expect_relation_get_all_multiple("reverseproxy")
3392- self.mocker.replay()
3393-
3394- # Test that we generate the same haproxy.conf file each time
3395- hooks.create_services()
3396- result1 = hooks.load_services()
3397- hooks.create_services()
3398- result2 = hooks.load_services()
3399- self.assertEqual(result1, result2)
3400-
3401- def test_get_all_services(self):
3402- self._expect_config_get()
3403- self._expect_relation_get_all_multiple("reverseproxy")
3404- self.mocker.replay()
3405- baseline = [{"service_name": "foo_svc", "service_port": 88},
3406- {"service_name": "bar_svc", "service_port": 89}]
3407- services = hooks.get_all_services()
3408- self.assertEqual(baseline, services)
3409
3410=== added directory 'hooks/tests'
3411=== added file 'hooks/tests/__init__.py'
3412=== added file 'hooks/tests/test_config_changed_hooks.py'
3413--- hooks/tests/test_config_changed_hooks.py 1970-01-01 00:00:00 +0000
3414+++ hooks/tests/test_config_changed_hooks.py 2013-10-16 14:05:24 +0000
3415@@ -0,0 +1,120 @@
3416+import sys
3417+
3418+from testtools import TestCase
3419+from mock import patch
3420+
3421+import hooks
3422+from utils_for_tests import patch_open
3423+
3424+
3425+class ConfigChangedTest(TestCase):
3426+
3427+ def setUp(self):
3428+ super(ConfigChangedTest, self).setUp()
3429+ self.config_get = self.patch_hook("config_get")
3430+ self.get_service_ports = self.patch_hook("get_service_ports")
3431+ self.get_listen_stanzas = self.patch_hook("get_listen_stanzas")
3432+ self.create_haproxy_globals = self.patch_hook(
3433+ "create_haproxy_globals")
3434+ self.create_haproxy_defaults = self.patch_hook(
3435+ "create_haproxy_defaults")
3436+ self.remove_services = self.patch_hook("remove_services")
3437+ self.create_services = self.patch_hook("create_services")
3438+ self.load_services = self.patch_hook("load_services")
3439+ self.construct_haproxy_config = self.patch_hook(
3440+ "construct_haproxy_config")
3441+ self.service_haproxy = self.patch_hook(
3442+ "service_haproxy")
3443+ self.update_sysctl = self.patch_hook(
3444+ "update_sysctl")
3445+ self.notify_website = self.patch_hook("notify_website")
3446+ self.notify_peer = self.patch_hook("notify_peer")
3447+ self.log = self.patch_hook("log")
3448+ sys_exit = patch.object(sys, "exit")
3449+ self.sys_exit = sys_exit.start()
3450+ self.addCleanup(sys_exit.stop)
3451+
3452+ def patch_hook(self, hook_name):
3453+ mock_controller = patch.object(hooks, hook_name)
3454+ mock = mock_controller.start()
3455+ self.addCleanup(mock_controller.stop)
3456+ return mock
3457+
3458+ def test_config_changed_notify_website_changed_stanzas(self):
3459+ self.service_haproxy.return_value = True
3460+ self.get_listen_stanzas.side_effect = (
3461+ (('foo.internal', '1.2.3.4', 123),),
3462+ (('foo.internal', '1.2.3.4', 123),
3463+ ('bar.internal', '1.2.3.5', 234),))
3464+
3465+ hooks.config_changed()
3466+
3467+ self.notify_website.assert_called_once_with()
3468+ self.notify_peer.assert_called_once_with()
3469+
3470+ def test_config_changed_no_notify_website_not_changed(self):
3471+ self.service_haproxy.return_value = True
3472+ self.get_listen_stanzas.side_effect = (
3473+ (('foo.internal', '1.2.3.4', 123),),
3474+ (('foo.internal', '1.2.3.4', 123),))
3475+
3476+ hooks.config_changed()
3477+
3478+ self.notify_website.assert_not_called()
3479+ self.notify_peer.assert_not_called()
3480+
3481+ def test_config_changed_no_notify_website_failed_check(self):
3482+ self.service_haproxy.return_value = False
3483+ self.get_listen_stanzas.side_effect = (
3484+ (('foo.internal', '1.2.3.4', 123),),
3485+ (('foo.internal', '1.2.3.4', 123),
3486+ ('bar.internal', '1.2.3.5', 234),))
3487+
3488+ hooks.config_changed()
3489+
3490+ self.notify_website.assert_not_called()
3491+ self.notify_peer.assert_not_called()
3492+ self.log.assert_called_once_with(
3493+ "HAProxy configuration check failed, exiting.")
3494+ self.sys_exit.assert_called_once_with(1)
3495+
3496+
3497+class HelpersTest(TestCase):
3498+ def test_constructs_haproxy_config(self):
3499+ with patch_open() as (mock_open, mock_file):
3500+ hooks.construct_haproxy_config('foo-globals', 'foo-defaults',
3501+ 'foo-monitoring', 'foo-services')
3502+
3503+ mock_file.write.assert_called_with(
3504+ 'foo-globals\n\n'
3505+ 'foo-defaults\n\n'
3506+ 'foo-monitoring\n\n'
3507+ 'foo-services\n\n'
3508+ )
3509+ mock_open.assert_called_with(hooks.default_haproxy_config, 'w')
3510+
3511+ def test_constructs_nothing_if_globals_is_none(self):
3512+ with patch_open() as (mock_open, mock_file):
3513+ hooks.construct_haproxy_config(None, 'foo-defaults',
3514+ 'foo-monitoring', 'foo-services')
3515+
3516+ self.assertFalse(mock_open.called)
3517+ self.assertFalse(mock_file.called)
3518+
3519+ def test_constructs_nothing_if_defaults_is_none(self):
3520+ with patch_open() as (mock_open, mock_file):
3521+ hooks.construct_haproxy_config('foo-globals', None,
3522+ 'foo-monitoring', 'foo-services')
3523+
3524+ self.assertFalse(mock_open.called)
3525+ self.assertFalse(mock_file.called)
3526+
3527+ def test_constructs_haproxy_config_without_optionals(self):
3528+ with patch_open() as (mock_open, mock_file):
3529+ hooks.construct_haproxy_config('foo-globals', 'foo-defaults')
3530+
3531+ mock_file.write.assert_called_with(
3532+ 'foo-globals\n\n'
3533+ 'foo-defaults\n\n'
3534+ )
3535+ mock_open.assert_called_with(hooks.default_haproxy_config, 'w')
3536
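Review note: the tests above exercise the new notify-on-change behaviour: `notify_website`/`notify_peer` fire only when the set of listen stanzas differs before and after a config run, and never when the configuration check fails (which instead exits). A sketch of that decision, with an illustrative function name that is not one of the charm's helpers:

```python
def should_notify(old_stanzas, new_stanzas, check_passed):
    # Notify related units only when the config check passed and the
    # listen stanzas actually changed across the run.
    return check_passed and tuple(old_stanzas) != tuple(new_stanzas)


old = (("foo.internal", "1.2.3.4", 123),)
new = (("foo.internal", "1.2.3.4", 123),
       ("bar.internal", "1.2.3.5", 234))
print(should_notify(old, new, True))  # -> True
```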
3537=== added file 'hooks/tests/test_helpers.py'
3538--- hooks/tests/test_helpers.py 1970-01-01 00:00:00 +0000
3539+++ hooks/tests/test_helpers.py 2013-10-16 14:05:24 +0000
3540@@ -0,0 +1,750 @@
3541+import os
3542+
3543+from contextlib import contextmanager
3544+from StringIO import StringIO
3545+
3546+from testtools import TestCase
3547+from mock import patch, call, MagicMock
3548+
3549+import hooks
3550+from utils_for_tests import patch_open
3551+
3552+
3553+class HelpersTest(TestCase):
3554+
3555+ @patch('hooks.config_get')
3556+ def test_creates_haproxy_globals(self, config_get):
3557+ config_get.return_value = {
3558+ 'global_log': 'foo-log, bar-log',
3559+ 'global_maxconn': 123,
3560+ 'global_user': 'foo-user',
3561+ 'global_group': 'foo-group',
3562+ 'global_spread_checks': 234,
3563+ 'global_debug': False,
3564+ 'global_quiet': False,
3565+ }
3566+ result = hooks.create_haproxy_globals()
3567+
3568+ expected = '\n'.join([
3569+ 'global',
3570+ ' log foo-log',
3571+ ' log bar-log',
3572+ ' maxconn 123',
3573+ ' user foo-user',
3574+ ' group foo-group',
3575+ ' spread-checks 234',
3576+ ])
3577+ self.assertEqual(result, expected)
3578+
3579+ @patch('hooks.config_get')
3580+ def test_creates_haproxy_globals_quietly_with_debug(self, config_get):
3581+ config_get.return_value = {
3582+ 'global_log': 'foo-log, bar-log',
3583+ 'global_maxconn': 123,
3584+ 'global_user': 'foo-user',
3585+ 'global_group': 'foo-group',
3586+ 'global_spread_checks': 234,
3587+ 'global_debug': True,
3588+ 'global_quiet': True,
3589+ }
3590+ result = hooks.create_haproxy_globals()
3591+
3592+ expected = '\n'.join([
3593+ 'global',
3594+ ' log foo-log',
3595+ ' log bar-log',
3596+ ' maxconn 123',
3597+ ' user foo-user',
3598+ ' group foo-group',
3599+ ' debug',
3600+ ' quiet',
3601+ ' spread-checks 234',
3602+ ])
3603+ self.assertEqual(result, expected)
3604+
3605+ def test_enables_haproxy(self):
3606+ mock_file = MagicMock()
3607+
3608+ @contextmanager
3609+ def mock_open(*args, **kwargs):
3610+ yield mock_file
3611+
3612+ initial_content = """
3613+ foo
3614+ ENABLED=0
3615+ bar
3616+ """
3617+ ending_content = initial_content.replace('ENABLED=0', 'ENABLED=1')
3618+
3619+ with patch('__builtin__.open', mock_open):
3620+ mock_file.read.return_value = initial_content
3621+
3622+ hooks.enable_haproxy()
3623+
3624+ mock_file.write.assert_called_with(ending_content)
3625+
3626+ @patch('hooks.config_get')
3627+ def test_creates_haproxy_defaults(self, config_get):
3628+ config_get.return_value = {
3629+ 'default_options': 'foo-option, bar-option',
3630+ 'default_timeouts': '234, 456',
3631+ 'default_log': 'foo-log',
3632+ 'default_mode': 'foo-mode',
3633+ 'default_retries': 321,
3634+ }
3635+ result = hooks.create_haproxy_defaults()
3636+
3637+ expected = '\n'.join([
3638+ 'defaults',
3639+ ' log foo-log',
3640+ ' mode foo-mode',
3641+ ' option foo-option',
3642+ ' option bar-option',
3643+ ' retries 321',
3644+ ' timeout 234',
3645+ ' timeout 456',
3646+ ])
3647+ self.assertEqual(result, expected)
3648+
3649+ def test_returns_none_when_haproxy_config_doesnt_exist(self):
3650+ self.assertIsNone(hooks.load_haproxy_config('/some/foo/file'))
3651+
3652+ @patch('__builtin__.open')
3653+ @patch('os.path.isfile')
3654+ def test_loads_haproxy_config_file(self, isfile, mock_open):
3655+ content = 'some content'
3656+ config_file = '/etc/haproxy/haproxy.cfg'
3657+ file_object = StringIO(content)
3658+ isfile.return_value = True
3659+ mock_open.return_value = file_object
3660+
3661+ result = hooks.load_haproxy_config()
3662+
3663+ self.assertEqual(result, content)
3664+ isfile.assert_called_with(config_file)
3665+ mock_open.assert_called_with(config_file)
3666+
3667+ @patch('hooks.load_haproxy_config')
3668+ def test_gets_monitoring_password(self, load_haproxy_config):
3669+ load_haproxy_config.return_value = 'stats auth foo:bar'
3670+
3671+ password = hooks.get_monitoring_password()
3672+
3673+ self.assertEqual(password, 'bar')
3674+
3675+ @patch('hooks.load_haproxy_config')
3676+ def test_gets_none_if_different_pattern(self, load_haproxy_config):
3677+ load_haproxy_config.return_value = 'some other pattern'
3678+
3679+ password = hooks.get_monitoring_password()
3680+
3681+ self.assertIsNone(password)
3682+
3683+ def test_gets_none_pass_if_config_doesnt_exist(self):
3684+ password = hooks.get_monitoring_password('/some/foo/path')
3685+
3686+ self.assertIsNone(password)
3687+
3688+ @patch('hooks.load_haproxy_config')
3689+ def test_gets_service_ports(self, load_haproxy_config):
3690+ load_haproxy_config.return_value = '''
3691+ listen foo.internal 1.2.3.4:123
3692+ listen bar.internal 1.2.3.5:234
3693+ '''
3694+
3695+ ports = hooks.get_service_ports()
3696+
3697+ self.assertEqual(ports, (123, 234))
3698+
3699+ @patch('hooks.load_haproxy_config')
3700+ def test_get_listen_stanzas(self, load_haproxy_config):
3701+ load_haproxy_config.return_value = '''
3702+ listen foo.internal 1.2.3.4:123
3703+ listen bar.internal 1.2.3.5:234
3704+ '''
3705+
3706+ stanzas = hooks.get_listen_stanzas()
3707+
3708+ self.assertEqual((('foo.internal', '1.2.3.4', 123),
3709+ ('bar.internal', '1.2.3.5', 234)),
3710+ stanzas)
3711+
3712+ @patch('hooks.load_haproxy_config')
3713+ def test_get_listen_stanzas_with_frontend(self, load_haproxy_config):
3714+ load_haproxy_config.return_value = '''
3715+ frontend foo-2-123
3716+ bind 1.2.3.4:123
3717+ default_backend foo.internal
3718+ frontend foo-2-234
3719+ bind 1.2.3.5:234
3720+ default_backend bar.internal
3721+ '''
3722+
3723+ stanzas = hooks.get_listen_stanzas()
3724+
3725+ self.assertEqual((('foo.internal', '1.2.3.4', 123),
3726+ ('bar.internal', '1.2.3.5', 234)),
3727+ stanzas)
3728+
3729+ @patch('hooks.load_haproxy_config')
3730+ def test_get_empty_tuple_when_no_stanzas(self, load_haproxy_config):
3731+ load_haproxy_config.return_value = '''
3732+ '''
3733+
3734+ stanzas = hooks.get_listen_stanzas()
3735+
3736+ self.assertEqual((), stanzas)
3737+
3738+ @patch('hooks.load_haproxy_config')
3739+ def test_get_listen_stanzas_none_configured(self, load_haproxy_config):
3740+ load_haproxy_config.return_value = ""
3741+
3742+ stanzas = hooks.get_listen_stanzas()
3743+
3744+ self.assertEqual((), stanzas)
3745+
3746+ def test_gets_no_ports_if_config_doesnt_exist(self):
3747+ ports = hooks.get_service_ports('/some/foo/path')
3748+ self.assertEqual((), ports)
3749+
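get_service_ports and get_listen_stanzas both rest on parsing the rendered config, covering the old-style `listen <name> <ip>:<port>` form and the newer `frontend`/`bind`/`default_backend` form exercised above. A rough regex-based sketch of the stanza parser; the exact patterns the charm uses may differ:

```python
import re


def get_listen_stanzas(config_text):
    """Parse (name, ip, port) tuples from 'listen' and 'frontend' stanzas."""
    if not config_text:
        return ()
    entries = []
    # Old-style one-line stanzas: 'listen <name> <ip>:<port>'
    for name, ip, port in re.findall(
            r"listen\s+(\S+)\s+(\S+):(\d+)", config_text):
        entries.append((name, ip, int(port)))
    # New-style blocks: frontend <x> / bind <ip>:<port> / default_backend <name>
    for ip, port, name in re.findall(
            r"frontend\s+\S+\s+bind\s+(\S+):(\d+)\s+default_backend\s+(\S+)",
            config_text):
        entries.append((name, ip, int(port)))
    return tuple(entries)
```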
3750+ @patch('hooks.open_port')
3751+ @patch('hooks.close_port')
3752+ def test_updates_service_ports(self, close_port, open_port):
3753+ old_service_ports = [123, 234, 345]
3754+ new_service_ports = [345, 456, 567]
3755+
3756+ hooks.update_service_ports(old_service_ports, new_service_ports)
3757+
3758+ self.assertEqual(close_port.mock_calls, [call(123), call(234)])
3759+ self.assertEqual(open_port.mock_calls,
3760+ [call(345), call(456), call(567)])
3761+
3762+ @patch('hooks.open_port')
3763+ @patch('hooks.close_port')
3764+ def test_updates_none_if_service_ports_not_provided(self, close_port,
3765+ open_port):
3766+ hooks.update_service_ports()
3767+
3768+ self.assertFalse(close_port.called)
3769+ self.assertFalse(open_port.called)
3770+
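The two tests above describe update_service_ports as a diff: every old port missing from the new list is closed, and every new port is (re)opened, with a no-op when either list is absent. The set arithmetic can be sketched on its own; `diff_service_ports` is a hypothetical helper name, as the charm calls close_port/open_port directly:

```python
def diff_service_ports(old_ports=None, new_ports=None):
    """Return (to_close, to_open) port lists for a config change."""
    if old_ports is None or new_ports is None:
        return [], []
    to_close = [port for port in old_ports if port not in new_ports]
    # Re-opening an already-open port is harmless, so all new ports are opened.
    return to_close, list(new_ports)
```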
3771+ @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"})
3772+ def test_creates_a_listen_stanza(self):
3773+ service_name = 'some-name'
3774+ service_ip = '10.11.12.13'
3775+ service_port = 1234
3776+ service_options = ('foo', 'bar')
3777+ server_entries = [
3778+ ('name-1', 'ip-1', 'port-1', ('foo1', 'bar1')),
3779+ ('name-2', 'ip-2', 'port-2', ('foo2', 'bar2')),
3780+ ]
3781+
3782+ result = hooks.create_listen_stanza(service_name, service_ip,
3783+ service_port, service_options,
3784+ server_entries)
3785+
3786+ expected = '\n'.join((
3787+ 'frontend haproxy-2-1234',
3788+ ' bind 10.11.12.13:1234',
3789+ ' default_backend some-name',
3790+ '',
3791+ 'backend some-name',
3792+ ' foo',
3793+ ' bar',
3794+ ' server name-1 ip-1:port-1 foo1 bar1',
3795+ ' server name-2 ip-2:port-2 foo2 bar2',
3796+ ))
3797+
3798+ self.assertEqual(expected, result)
3799+
3800+ @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"})
3801+ def test_create_listen_stanza_filters_frontend_options(self):
3802+ service_name = 'some-name'
3803+ service_ip = '10.11.12.13'
3804+ service_port = 1234
3805+ service_options = ('capture request header X-Man', 'mode http',
3806+ 'option httplog', 'retries 3', 'balance uri',
3807+ 'option logasap')
3808+ server_entries = [
3809+ ('name-1', 'ip-1', 'port-1', ('foo1', 'bar1')),
3810+ ('name-2', 'ip-2', 'port-2', ('foo2', 'bar2')),
3811+ ]
3812+
3813+ result = hooks.create_listen_stanza(service_name, service_ip,
3814+ service_port, service_options,
3815+ server_entries)
3816+
3817+ expected = '\n'.join((
3818+ 'frontend haproxy-2-1234',
3819+ ' bind 10.11.12.13:1234',
3820+ ' default_backend some-name',
3821+ ' mode http',
3822+ ' option httplog',
3823+ ' capture request header X-Man',
3824+ ' option logasap',
3825+ '',
3826+ 'backend some-name',
3827+ ' mode http',
3828+ ' option httplog',
3829+ ' retries 3',
3830+ ' balance uri',
3831+ ' server name-1 ip-1:port-1 foo1 bar1',
3832+ ' server name-2 ip-2:port-2 foo2 bar2',
3833+ ))
3834+
3835+ self.assertEqual(expected, result)
3836+
3837+ @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"})
3838+ def test_creates_a_listen_stanza_with_tuple_entries(self):
3839+ service_name = 'some-name'
3840+ service_ip = '10.11.12.13'
3841+ service_port = 1234
3842+ service_options = ('foo', 'bar')
3843+ server_entries = (
3844+ ('name-1', 'ip-1', 'port-1', ('foo1', 'bar1')),
3845+ ('name-2', 'ip-2', 'port-2', ('foo2', 'bar2')),
3846+ )
3847+
3848+ result = hooks.create_listen_stanza(service_name, service_ip,
3849+ service_port, service_options,
3850+ server_entries)
3851+
3852+ expected = '\n'.join((
3853+ 'frontend haproxy-2-1234',
3854+ ' bind 10.11.12.13:1234',
3855+ ' default_backend some-name',
3856+ '',
3857+ 'backend some-name',
3858+ ' foo',
3859+ ' bar',
3860+ ' server name-1 ip-1:port-1 foo1 bar1',
3861+ ' server name-2 ip-2:port-2 foo2 bar2',
3862+ ))
3863+
3864+ self.assertEqual(expected, result)
3865+
3866+ def test_doesnt_create_listen_stanza_if_args_not_provided(self):
3867+ self.assertIsNone(hooks.create_listen_stanza())
3868+
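The expectations above fully specify create_listen_stanza's output shape: a `frontend` block bound to the service address that defaults to a `backend` block carrying the server lines, with service options split between the two sections. A condensed sketch; the prefix lists used to classify options are inferred from the test data, not the charm's actual filtering rules:

```python
# Assumed classification rules, inferred from the expected stanzas above.
SHARED_PREFIXES = ('mode ', 'option httplog')
FRONTEND_ONLY_PREFIXES = ('capture ', 'option logasap')


def create_listen_stanza(service_name=None, service_ip=None,
                         service_port=None, service_options=(),
                         server_entries=(), unit_name='haproxy/2'):
    """Render a frontend/backend pair for one service."""
    if None in (service_name, service_ip, service_port):
        return None
    frontend_name = '%s-%s' % (unit_name.replace('/', '-'), service_port)
    lines = ['frontend %s' % frontend_name,
             '    bind %s:%s' % (service_ip, service_port),
             '    default_backend %s' % service_name]
    options = list(service_options or ())
    # Shared options appear in both sections; frontend-only ones up front.
    lines += ['    %s' % o for o in options if o.startswith(SHARED_PREFIXES)]
    lines += ['    %s' % o for o in options
              if o.startswith(FRONTEND_ONLY_PREFIXES)]
    lines += ['', 'backend %s' % service_name]
    lines += ['    %s' % o for o in options
              if not o.startswith(FRONTEND_ONLY_PREFIXES)]
    for name, ip, port, opts in (server_entries or ()):
        lines.append('    server %s %s:%s %s' % (name, ip, port,
                                                 ' '.join(opts)))
    return '\n'.join(lines)
```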
3869+ @patch('hooks.create_listen_stanza')
3870+ @patch('hooks.config_get')
3871+ @patch('hooks.get_monitoring_password')
3872+ def test_creates_a_monitoring_stanza(self, get_monitoring_password,
3873+ config_get, create_listen_stanza):
3874+ config_get.return_value = {
3875+ 'enable_monitoring': True,
3876+ 'monitoring_allowed_cidr': 'some-cidr',
3877+ 'monitoring_password': 'some-pass',
3878+ 'monitoring_username': 'some-user',
3879+ 'monitoring_stats_refresh': 123,
3880+ 'monitoring_port': 1234,
3881+ }
3882+ create_listen_stanza.return_value = 'some result'
3883+
3884+ result = hooks.create_monitoring_stanza(service_name="some-service")
3885+
3886+ self.assertEqual('some result', result)
3887+ get_monitoring_password.assert_called_with()
3888+ create_listen_stanza.assert_called_with(
3889+ 'some-service', '0.0.0.0', 1234, [
3890+ 'mode http',
3891+ 'acl allowed_cidr src some-cidr',
3892+ 'block unless allowed_cidr',
3893+ 'stats enable',
3894+ 'stats uri /',
3895+ 'stats realm Haproxy\\ Statistics',
3896+ 'stats auth some-user:some-pass',
3897+ 'stats refresh 123',
3898+ ])
3899+
3900+ @patch('hooks.create_listen_stanza')
3901+ @patch('hooks.config_get')
3902+ @patch('hooks.get_monitoring_password')
3903+ def test_doesnt_create_a_monitoring_stanza_if_monitoring_disabled(
3904+ self, get_monitoring_password, config_get, create_listen_stanza):
3905+ config_get.return_value = {
3906+ 'enable_monitoring': False,
3907+ }
3908+
3909+ result = hooks.create_monitoring_stanza(service_name="some-service")
3910+
3911+ self.assertIsNone(result)
3912+ self.assertFalse(get_monitoring_password.called)
3913+ self.assertFalse(create_listen_stanza.called)
3914+
3915+ @patch('hooks.create_listen_stanza')
3916+ @patch('hooks.config_get')
3917+ @patch('hooks.get_monitoring_password')
3918+ def test_uses_monitoring_password_for_stanza(self, get_monitoring_password,
3919+ config_get,
3920+ create_listen_stanza):
3921+ config_get.return_value = {
3922+ 'enable_monitoring': True,
3923+ 'monitoring_allowed_cidr': 'some-cidr',
3924+ 'monitoring_password': 'changeme',
3925+ 'monitoring_username': 'some-user',
3926+ 'monitoring_stats_refresh': 123,
3927+ 'monitoring_port': 1234,
3928+ }
3929+ create_listen_stanza.return_value = 'some result'
3930+ get_monitoring_password.return_value = 'some-monitoring-pass'
3931+
3932+ hooks.create_monitoring_stanza(service_name="some-service")
3933+
3934+ get_monitoring_password.assert_called_with()
3935+ create_listen_stanza.assert_called_with(
3936+ 'some-service', '0.0.0.0', 1234, [
3937+ 'mode http',
3938+ 'acl allowed_cidr src some-cidr',
3939+ 'block unless allowed_cidr',
3940+ 'stats enable',
3941+ 'stats uri /',
3942+ 'stats realm Haproxy\\ Statistics',
3943+ 'stats auth some-user:some-monitoring-pass',
3944+ 'stats refresh 123',
3945+ ])
3946+
3947+ @patch('hooks.pwgen')
3948+ @patch('hooks.create_listen_stanza')
3949+ @patch('hooks.config_get')
3950+ @patch('hooks.get_monitoring_password')
3951+ def test_uses_new_password_for_stanza(self, get_monitoring_password,
3952+ config_get, create_listen_stanza,
3953+ pwgen):
3954+ config_get.return_value = {
3955+ 'enable_monitoring': True,
3956+ 'monitoring_allowed_cidr': 'some-cidr',
3957+ 'monitoring_password': 'changeme',
3958+ 'monitoring_username': 'some-user',
3959+ 'monitoring_stats_refresh': 123,
3960+ 'monitoring_port': 1234,
3961+ }
3962+ create_listen_stanza.return_value = 'some result'
3963+ get_monitoring_password.return_value = None
3964+ pwgen.return_value = 'some-new-pass'
3965+
3966+ hooks.create_monitoring_stanza(service_name="some-service")
3967+
3968+ get_monitoring_password.assert_called_with()
3969+ create_listen_stanza.assert_called_with(
3970+ 'some-service', '0.0.0.0', 1234, [
3971+ 'mode http',
3972+ 'acl allowed_cidr src some-cidr',
3973+ 'block unless allowed_cidr',
3974+ 'stats enable',
3975+ 'stats uri /',
3976+ 'stats realm Haproxy\\ Statistics',
3977+ 'stats auth some-user:some-new-pass',
3978+ 'stats refresh 123',
3979+ ])
3980+
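Taken together, the monitoring tests encode a password-selection rule: an explicitly configured monitoring_password wins; the 'changeme' placeholder falls back to the password already present in the running config; and with neither available a fresh one is generated (pwgen in the charm). As a standalone sketch, with `choose_monitoring_password` a hypothetical name for logic that lives inline in create_monitoring_stanza:

```python
def choose_monitoring_password(configured, existing, generate):
    """Pick the stats-auth password for the monitoring stanza."""
    if configured != 'changeme':
        # An explicitly configured password always wins.
        return configured
    if existing is not None:
        # Keep the password already baked into the running config.
        return existing
    # No usable password anywhere: generate a fresh one.
    return generate()
```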
3981+ @patch('hooks.is_proxy')
3982+ @patch('hooks.config_get')
3983+ @patch('yaml.safe_load')
3984+ def test_gets_config_services(self, safe_load, config_get, is_proxy):
3985+ config_get.return_value = {
3986+ 'services': 'some-services',
3987+ }
3988+ safe_load.return_value = [
3989+ {
3990+ 'service_name': 'foo',
3991+                'service_options': ['foo1', 'foo2'],
3995+ 'server_options': ['baz1', 'baz2'],
3996+ },
3997+ {
3998+ 'service_name': 'bar',
3999+ 'service_options': ['bar1', 'bar2'],
4000+ 'server_options': ['baz1', 'baz2'],
4001+ },
4002+ ]
4003+ is_proxy.return_value = False
4004+
4005+ result = hooks.get_config_services()
4006+ expected = {
4007+ None: {
4008+ 'service_name': 'foo',
4009+ },
4010+ 'foo': {
4011+ 'service_name': 'foo',
4012+ 'service_options': ['foo1', 'foo2'],
4013+ 'server_options': ['baz1', 'baz2'],
4014+ },
4015+ 'bar': {
4016+ 'service_name': 'bar',
4017+ 'service_options': ['bar1', 'bar2'],
4018+ 'server_options': ['baz1', 'baz2'],
4019+ },
4020+ }
4021+
4022+ self.assertEqual(expected, result)
4023+
4024+ @patch('hooks.is_proxy')
4025+ @patch('hooks.config_get')
4026+ @patch('yaml.safe_load')
4027+ def test_gets_config_services_with_forward_option(self, safe_load,
4028+ config_get, is_proxy):
4029+ config_get.return_value = {
4030+ 'services': 'some-services',
4031+ }
4032+ safe_load.return_value = [
4033+ {
4034+ 'service_name': 'foo',
4035+                'service_options': ['foo1', 'foo2'],
4039+ 'server_options': ['baz1', 'baz2'],
4040+ },
4041+ {
4042+ 'service_name': 'bar',
4043+ 'service_options': ['bar1', 'bar2'],
4044+ 'server_options': ['baz1', 'baz2'],
4045+ },
4046+ ]
4047+ is_proxy.return_value = True
4048+
4049+ result = hooks.get_config_services()
4050+ expected = {
4051+ None: {
4052+ 'service_name': 'foo',
4053+ },
4054+ 'foo': {
4055+ 'service_name': 'foo',
4056+ 'service_options': ['foo1', 'foo2', 'option forwardfor'],
4057+ 'server_options': ['baz1', 'baz2'],
4058+ },
4059+ 'bar': {
4060+ 'service_name': 'bar',
4061+ 'service_options': ['bar1', 'bar2', 'option forwardfor'],
4062+ 'server_options': ['baz1', 'baz2'],
4063+ },
4064+ }
4065+
4066+ self.assertEqual(expected, result)
4067+
4068+ @patch('hooks.is_proxy')
4069+ @patch('hooks.config_get')
4070+ @patch('yaml.safe_load')
4071+ def test_gets_config_services_with_options_string(self, safe_load,
4072+ config_get, is_proxy):
4073+ config_get.return_value = {
4074+ 'services': 'some-services',
4075+ }
4076+ safe_load.return_value = [
4077+ {
4078+ 'service_name': 'foo',
4079+                'service_options': ['foo1', 'foo2'],
4083+ 'server_options': 'baz1 baz2',
4084+ },
4085+ {
4086+ 'service_name': 'bar',
4087+ 'service_options': ['bar1', 'bar2'],
4088+ 'server_options': 'baz1 baz2',
4089+ },
4090+ ]
4091+ is_proxy.return_value = False
4092+
4093+ result = hooks.get_config_services()
4094+ expected = {
4095+ None: {
4096+ 'service_name': 'foo',
4097+ },
4098+ 'foo': {
4099+ 'service_name': 'foo',
4100+ 'service_options': ['foo1', 'foo2'],
4101+ 'server_options': ['baz1', 'baz2'],
4102+ },
4103+ 'bar': {
4104+ 'service_name': 'bar',
4105+ 'service_options': ['bar1', 'bar2'],
4106+ 'server_options': ['baz1', 'baz2'],
4107+ },
4108+ }
4109+
4110+ self.assertEqual(expected, result)
4111+
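This last variant shows that get_config_services accepts server_options either as a YAML list or as a single whitespace-separated string, normalizing the latter into a list. The normalization itself is tiny; sketched here under an assumed helper name:

```python
def normalize_options(options):
    """Return service/server options as a list, splitting plain strings."""
    if isinstance(options, str):
        return options.split()
    return list(options or ())
```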
4112+ @patch('hooks.get_config_services')
4113+ def test_gets_a_service_config(self, get_config_services):
4114+ get_config_services.return_value = {
4115+ 'foo': 'bar',
4116+ }
4117+
4118+ self.assertEqual('bar', hooks.get_config_service('foo'))
4119+
4120+ @patch('hooks.get_config_services')
4121+ def test_gets_a_service_config_from_none(self, get_config_services):
4122+ get_config_services.return_value = {
4123+ None: 'bar',
4124+ }
4125+
4126+ self.assertEqual('bar', hooks.get_config_service())
4127+
4128+ @patch('hooks.get_config_services')
4129+ def test_gets_a_service_config_as_none(self, get_config_services):
4130+ get_config_services.return_value = {
4131+ 'baz': 'bar',
4132+ }
4133+
4134+ self.assertIsNone(hooks.get_config_service())
4135+
4136+ @patch('os.path.exists')
4137+ def test_mark_as_proxy_when_path_exists(self, path_exists):
4138+ path_exists.return_value = True
4139+
4140+ self.assertTrue(hooks.is_proxy('foo'))
4141+ path_exists.assert_called_with('/var/run/haproxy/foo.is.proxy')
4142+
4143+ @patch('os.path.exists')
4144+ def test_doesnt_mark_as_proxy_when_path_doesnt_exist(self, path_exists):
4145+ path_exists.return_value = False
4146+
4147+ self.assertFalse(hooks.is_proxy('foo'))
4148+ path_exists.assert_called_with('/var/run/haproxy/foo.is.proxy')
4149+
4150+ @patch('os.path.exists')
4151+ def test_loads_services_by_name(self, path_exists):
4152+ with patch_open() as (mock_open, mock_file):
4153+ path_exists.return_value = True
4154+ mock_file.read.return_value = 'some content'
4155+
4156+ result = hooks.load_services('some-service')
4157+
4158+ self.assertEqual('some content', result)
4159+ mock_open.assert_called_with(
4160+ '/var/run/haproxy/some-service.service')
4161+ mock_file.read.assert_called_with()
4162+
4163+ @patch('os.path.exists')
4164+ def test_loads_no_service_if_path_doesnt_exist(self, path_exists):
4165+ path_exists.return_value = False
4166+
4167+ result = hooks.load_services('some-service')
4168+
4169+ self.assertIsNone(result)
4170+
4171+ @patch('glob.glob')
4172+ def test_loads_services_within_dir_if_no_name_provided(self, glob):
4173+ with patch_open() as (mock_open, mock_file):
4174+ mock_file.read.side_effect = ['foo', 'bar']
4175+ glob.return_value = ['foo-file', 'bar-file']
4176+
4177+ result = hooks.load_services()
4178+
4179+ self.assertEqual('foo\n\nbar\n\n', result)
4180+ mock_open.assert_has_calls([call('foo-file'), call('bar-file')])
4181+ mock_file.read.assert_has_calls([call(), call()])
4182+
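The load_services tests show both modes: read a single `<name>.service` file, or glob the services directory and join every file with blank-line separators. A filesystem sketch; the directory argument is added here for testability, whereas the charm hardcodes /var/run/haproxy:

```python
import glob
import os


def load_services(services_dir, service_name=None):
    """Read one rendered service stanza, or concatenate all of them."""
    if service_name is not None:
        path = os.path.join(services_dir, '%s.service' % service_name)
        if not os.path.exists(path):
            return None
        with open(path) as service_file:
            return service_file.read()
    content = ''
    for path in sorted(glob.glob(os.path.join(services_dir, '*.service'))):
        with open(path) as service_file:
            content += service_file.read() + '\n\n'
    return content
```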
4183+ @patch('hooks.os')
4184+ def test_removes_services_by_name(self, os_):
4185+ service_path = '/var/run/haproxy/some-service.service'
4186+ os_.path.exists.return_value = True
4187+
4188+ self.assertTrue(hooks.remove_services('some-service'))
4189+
4190+ os_.path.exists.assert_called_with(service_path)
4191+ os_.remove.assert_called_with(service_path)
4192+
4193+ @patch('hooks.os')
4194+ def test_removes_nothing_if_service_doesnt_exist(self, os_):
4195+ service_path = '/var/run/haproxy/some-service.service'
4196+ os_.path.exists.return_value = False
4197+
4198+ self.assertTrue(hooks.remove_services('some-service'))
4199+
4200+ os_.path.exists.assert_called_with(service_path)
4201+
4202+ @patch('hooks.os')
4203+ @patch('glob.glob')
4204+ def test_removes_all_services_in_dir_if_name_not_provided(self, glob, os_):
4205+ glob.return_value = ['foo', 'bar']
4206+
4207+ self.assertTrue(hooks.remove_services())
4208+
4209+ os_.remove.assert_has_calls([call('foo'), call('bar')])
4210+
4211+ @patch('hooks.os')
4212+ @patch('hooks.log')
4213+ def test_logs_error_when_failing_to_remove_service_by_name(self, log, os_):
4214+ error = Exception('some error')
4215+ os_.path.exists.return_value = True
4216+ os_.remove.side_effect = error
4217+
4218+ self.assertFalse(hooks.remove_services('some-service'))
4219+
4220+ log.assert_called_with(str(error))
4221+
4222+ @patch('hooks.os')
4223+ @patch('hooks.log')
4224+ @patch('glob.glob')
4225+ def test_logs_error_when_failing_to_remove_services(self, glob, log, os_):
4226+ errors = [Exception('some error 1'), Exception('some error 2')]
4227+ os_.remove.side_effect = errors
4228+ glob.return_value = ['foo', 'bar']
4229+
4230+ self.assertTrue(hooks.remove_services())
4231+
4232+ log.assert_has_calls([
4233+ call(str(errors[0])),
4234+ call(str(errors[1])),
4235+ ])
4236+
4237+ @patch('subprocess.call')
4238+ def test_calls_check_action(self, mock_call):
4239+ mock_call.return_value = 0
4240+
4241+ result = hooks.service_haproxy('check')
4242+
4243+ self.assertTrue(result)
4244+ mock_call.assert_called_with(['/usr/sbin/haproxy', '-f',
4245+ hooks.default_haproxy_config, '-c'])
4246+
4247+ @patch('subprocess.call')
4248+ def test_calls_check_action_with_different_config(self, mock_call):
4249+ mock_call.return_value = 0
4250+
4251+ result = hooks.service_haproxy('check', 'some-config')
4252+
4253+ self.assertTrue(result)
4254+ mock_call.assert_called_with(['/usr/sbin/haproxy', '-f',
4255+ 'some-config', '-c'])
4256+
4257+ @patch('subprocess.call')
4258+ def test_fails_to_check_config(self, mock_call):
4259+ mock_call.return_value = 1
4260+
4261+ result = hooks.service_haproxy('check')
4262+
4263+ self.assertFalse(result)
4264+
4265+ @patch('subprocess.call')
4266+ def test_calls_different_actions(self, mock_call):
4267+ mock_call.return_value = 0
4268+
4269+ result = hooks.service_haproxy('foo')
4270+
4271+ self.assertTrue(result)
4272+ mock_call.assert_called_with(['service', 'haproxy', 'foo'])
4273+
4274+ @patch('subprocess.call')
4275+ def test_fails_to_call_different_actions(self, mock_call):
4276+ mock_call.return_value = 1
4277+
4278+ result = hooks.service_haproxy('foo')
4279+
4280+ self.assertFalse(result)
4281+
4282+ @patch('subprocess.call')
4283+ def test_doesnt_call_actions_if_action_not_provided(self, mock_call):
4284+ self.assertIsNone(hooks.service_haproxy())
4285+ self.assertFalse(mock_call.called)
4286+
4287+ @patch('subprocess.call')
4288+ def test_doesnt_call_actions_if_config_is_none(self, mock_call):
4289+ self.assertIsNone(hooks.service_haproxy('foo', None))
4290+ self.assertFalse(mock_call.called)
4291
4292=== added file 'hooks/tests/test_nrpe_hooks.py'
4293--- hooks/tests/test_nrpe_hooks.py 1970-01-01 00:00:00 +0000
4294+++ hooks/tests/test_nrpe_hooks.py 2013-10-16 14:05:24 +0000
4295@@ -0,0 +1,24 @@
4296+from testtools import TestCase
4297+from mock import call, patch, MagicMock
4298+
4299+import hooks
4300+
4301+
4302+class NRPEHooksTest(TestCase):
4303+
4304+ @patch('hooks.install_nrpe_scripts')
4305+ @patch('charmhelpers.contrib.charmsupport.nrpe.NRPE')
4306+ def test_update_nrpe_config(self, nrpe, install_nrpe_scripts):
4307+ nrpe_compat = MagicMock()
4308+ nrpe_compat.checks = [MagicMock(shortname="haproxy"),
4309+ MagicMock(shortname="haproxy_queue")]
4310+ nrpe.return_value = nrpe_compat
4311+
4312+ hooks.update_nrpe_config()
4313+
4314+ self.assertEqual(
4315+ nrpe_compat.mock_calls,
4316+ [call.add_check('haproxy', 'Check HAProxy', 'check_haproxy.sh'),
4317+ call.add_check('haproxy_queue', 'Check HAProxy queue depth',
4318+ 'check_haproxy_queue_depth.sh'),
4319+ call.write()])
4320
4321=== added file 'hooks/tests/test_peer_hooks.py'
4322--- hooks/tests/test_peer_hooks.py 1970-01-01 00:00:00 +0000
4323+++ hooks/tests/test_peer_hooks.py 2013-10-16 14:05:24 +0000
4324@@ -0,0 +1,200 @@
4325+import os
4326+import yaml
4327+
4328+from testtools import TestCase
4329+from mock import patch
4330+
4331+import hooks
4332+from utils_for_tests import patch_open
4333+
4334+
4335+class PeerRelationTest(TestCase):
4336+
4337+ def setUp(self):
4338+ super(PeerRelationTest, self).setUp()
4339+
4340+ self.relations_of_type = self.patch_hook("relations_of_type")
4341+ self.log = self.patch_hook("log")
4342+ self.unit_get = self.patch_hook("unit_get")
4343+
4344+ def patch_hook(self, hook_name):
4345+ mock_controller = patch.object(hooks, hook_name)
4346+ mock = mock_controller.start()
4347+ self.addCleanup(mock_controller.stop)
4348+ return mock
4349+
4350+ @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"})
4351+ def test_with_peer_same_services(self):
4352+ self.unit_get.return_value = "1.2.4.5"
4353+ self.relations_of_type.return_value = [
4354+ {"__unit__": "haproxy/1",
4355+ "hostname": "haproxy-1",
4356+ "private-address": "1.2.4.4",
4357+ "all_services": yaml.dump([
4358+ {"service_name": "foo_service",
4359+ "service_host": "0.0.0.0",
4360+ "service_options": ["balance leastconn"],
4361+ "service_port": 4242},
4362+ ])
4363+ }
4364+ ]
4365+
4366+ services_dict = {
4367+ "foo_service": {
4368+ "service_name": "foo_service",
4369+ "service_host": "0.0.0.0",
4370+ "service_port": 4242,
4371+ "service_options": ["balance leastconn"],
4372+ "server_options": ["maxconn 4"],
4373+ "servers": [("backend_1__8080", "1.2.3.4",
4374+ 8080, ["maxconn 4"])],
4375+ },
4376+ }
4377+
4378+ expected = {
4379+ "foo_service": {
4380+ "service_name": "foo_service",
4381+ "service_host": "0.0.0.0",
4382+ "service_port": 4242,
4383+ "service_options": ["balance leastconn",
4384+ "mode tcp",
4385+ "option tcplog"],
4386+ "servers": [
4387+ ("haproxy-1", "1.2.4.4", 4243, ["check"]),
4388+ ("haproxy-2", "1.2.4.5", 4243, ["check", "backup"])
4389+ ],
4390+ },
4391+ "foo_service_be": {
4392+ "service_name": "foo_service_be",
4393+ "service_host": "0.0.0.0",
4394+ "service_port": 4243,
4395+ "service_options": ["balance leastconn"],
4396+ "server_options": ["maxconn 4"],
4397+ "servers": [("backend_1__8080", "1.2.3.4",
4398+ 8080, ["maxconn 4"])],
4399+ },
4400+ }
4401+ self.assertEqual(expected, hooks.apply_peer_config(services_dict))
4402+
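The expected dictionary above captures the peering scheme this branch introduces: each configured service is split into a public frontend and a `_be` backend on port+1, every haproxy unit is listed as a server of the frontend, and only the first unit in unit-name order takes traffic while the rest are marked `backup`, so a service-level maxconn can actually be enforced. The server-list construction can be sketched as follows; `peer_servers` is a hypothetical name, as the real logic lives in apply_peer_config:

```python
def peer_servers(unit_name, private_address, peers, service_port):
    """Build the peered server list: all units on service_port + 1, with
    only the first unit (in name order) active and the rest as 'backup'."""
    backend_port = service_port + 1
    entries = {unit_name.replace('/', '-'): private_address}
    for peer in peers:
        entries[peer['hostname']] = peer['private-address']
    servers = []
    for index, name in enumerate(sorted(entries)):
        options = ['check'] if index == 0 else ['check', 'backup']
        servers.append((name, entries[name], backend_port, options))
    return servers
```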
4403+ @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"})
4404+ def test_inherit_timeout_settings(self):
4405+ self.unit_get.return_value = "1.2.4.5"
4406+ self.relations_of_type.return_value = [
4407+ {"__unit__": "haproxy/1",
4408+ "hostname": "haproxy-1",
4409+ "private-address": "1.2.4.4",
4410+ "all_services": yaml.dump([
4411+ {"service_name": "foo_service",
4412+ "service_host": "0.0.0.0",
4413+ "service_options": ["timeout server 5000"],
4414+ "service_port": 4242},
4415+ ])
4416+ }
4417+ ]
4418+
4419+ services_dict = {
4420+ "foo_service": {
4421+ "service_name": "foo_service",
4422+ "service_host": "0.0.0.0",
4423+ "service_port": 4242,
4424+ "service_options": ["timeout server 5000"],
4425+ "server_options": ["maxconn 4"],
4426+ "servers": [("backend_1__8080", "1.2.3.4",
4427+ 8080, ["maxconn 4"])],
4428+ },
4429+ }
4430+
4431+ expected = {
4432+ "foo_service": {
4433+ "service_name": "foo_service",
4434+ "service_host": "0.0.0.0",
4435+ "service_port": 4242,
4436+ "service_options": ["balance leastconn",
4437+ "mode tcp",
4438+ "option tcplog",
4439+ "timeout server 5000"],
4440+ "servers": [
4441+ ("haproxy-1", "1.2.4.4", 4243, ["check"]),
4442+ ("haproxy-2", "1.2.4.5", 4243, ["check", "backup"])
4443+ ],
4444+ },
4445+ "foo_service_be": {
4446+ "service_name": "foo_service_be",
4447+ "service_host": "0.0.0.0",
4448+ "service_port": 4243,
4449+ "service_options": ["timeout server 5000"],
4450+ "server_options": ["maxconn 4"],
4451+ "servers": [("backend_1__8080", "1.2.3.4",
4452+ 8080, ["maxconn 4"])],
4453+ },
4454+ }
4455+ self.assertEqual(expected, hooks.apply_peer_config(services_dict))
4456+
4457+ @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"})
4458+ def test_with_no_relation_data(self):
4459+ self.unit_get.return_value = "1.2.4.5"
4460+ self.relations_of_type.return_value = []
4461+
4462+ services_dict = {
4463+ "foo_service": {
4464+ "service_name": "foo_service",
4465+ "service_host": "0.0.0.0",
4466+ "service_port": 4242,
4467+ "service_options": ["balance leastconn"],
4468+ "server_options": ["maxconn 4"],
4469+ "servers": [("backend_1__8080", "1.2.3.4",
4470+ 8080, ["maxconn 4"])],
4471+ },
4472+ }
4473+
4474+ expected = services_dict
4475+ self.assertEqual(expected, hooks.apply_peer_config(services_dict))
4476+
4477+ @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"})
4478+ def test_with_missing_all_services(self):
4479+ self.unit_get.return_value = "1.2.4.5"
4480+ self.relations_of_type.return_value = [
4481+ {"__unit__": "haproxy/1",
4482+ "hostname": "haproxy-1",
4483+ "private-address": "1.2.4.4",
4484+ }
4485+ ]
4486+
4487+ services_dict = {
4488+ "foo_service": {
4489+ "service_name": "foo_service",
4490+ "service_host": "0.0.0.0",
4491+ "service_port": 4242,
4492+ "service_options": ["balance leastconn"],
4493+ "server_options": ["maxconn 4"],
4494+ "servers": [("backend_1__8080", "1.2.3.4",
4495+ 8080, ["maxconn 4"])],
4496+ },
4497+ }
4498+
4499+ expected = services_dict
4500+ self.assertEqual(expected, hooks.apply_peer_config(services_dict))
4501+
4502+ @patch('hooks.create_listen_stanza')
4503+ def test_writes_service_config(self, create_listen_stanza):
4504+ create_listen_stanza.return_value = 'some content'
4505+ services_dict = {
4506+ 'foo': {
4507+ 'service_name': 'bar',
4508+ 'service_host': 'some-host',
4509+ 'service_port': 'some-port',
4510+ 'service_options': 'some-options',
4511+ 'servers': (1, 2),
4512+ },
4513+ }
4514+
4515+ with patch.object(os.path, "exists") as exists:
4516+ exists.return_value = True
4517+ with patch_open() as (mock_open, mock_file):
4518+ hooks.write_service_config(services_dict)
4519+
4520+ create_listen_stanza.assert_called_with(
4521+ 'bar', 'some-host', 'some-port', 'some-options', (1, 2))
4522+ mock_open.assert_called_with(
4523+ '/var/run/haproxy/bar.service', 'w')
4524+ mock_file.write.assert_called_with('some content')
4525
4526=== added file 'hooks/tests/test_reverseproxy_hooks.py'
4527--- hooks/tests/test_reverseproxy_hooks.py 1970-01-01 00:00:00 +0000
4528+++ hooks/tests/test_reverseproxy_hooks.py 2013-10-16 14:05:24 +0000
4529@@ -0,0 +1,345 @@
4530+from testtools import TestCase
4531+from mock import patch, call
4532+
4533+import hooks
4534+
4535+
4536+class ReverseProxyRelationTest(TestCase):
4537+
4538+ def setUp(self):
4539+ super(ReverseProxyRelationTest, self).setUp()
4540+
4541+ self.relations_of_type = self.patch_hook("relations_of_type")
4542+ self.get_config_services = self.patch_hook("get_config_services")
4543+ self.log = self.patch_hook("log")
4544+ self.write_service_config = self.patch_hook("write_service_config")
4545+ self.apply_peer_config = self.patch_hook("apply_peer_config")
4546+ self.apply_peer_config.side_effect = lambda value: value
4547+
4548+ def patch_hook(self, hook_name):
4549+ mock_controller = patch.object(hooks, hook_name)
4550+ mock = mock_controller.start()
4551+ self.addCleanup(mock_controller.stop)
4552+ return mock
4553+
4554+ def test_relation_data_returns_none(self):
4555+ self.get_config_services.return_value = {
4556+ "service": {
4557+ "service_name": "service",
4558+ },
4559+ }
4560+ self.relations_of_type.return_value = []
4561+ self.assertIs(None, hooks.create_services())
4562+ self.log.assert_called_once_with("No backend servers, exiting.")
4563+ self.write_service_config.assert_not_called()
4564+
4565+ def test_relation_data_returns_no_relations(self):
4566+ self.get_config_services.return_value = {
4567+ "service": {
4568+ "service_name": "service",
4569+ },
4570+ }
4571+ self.relations_of_type.return_value = []
4572+ self.assertIs(None, hooks.create_services())
4573+ self.log.assert_called_once_with("No backend servers, exiting.")
4574+ self.write_service_config.assert_not_called()
4575+
4576+ def test_relation_no_services(self):
4577+ self.get_config_services.return_value = {}
4578+ self.relations_of_type.return_value = [
4579+ {"port": 4242,
4580+ "__unit__": "foo/0",
4581+ "hostname": "backend.1",
4582+ "private-address": "1.2.3.4"},
4583+ ]
4584+ self.assertIs(None, hooks.create_services())
4585+ self.log.assert_called_once_with("No services configured, exiting.")
4586+ self.write_service_config.assert_not_called()
4587+
4588+ def test_no_port_in_relation_data(self):
4589+ self.get_config_services.return_value = {
4590+ "service": {
4591+ "service_name": "service",
4592+ },
4593+ }
4594+ self.relations_of_type.return_value = [
4595+ {"private-address": "1.2.3.4",
4596+ "__unit__": "foo/0"},
4597+ ]
4598+ self.assertIs(None, hooks.create_services())
4599+        self.log.assert_has_calls([call(
4600+ "No port in relation data for 'foo/0', skipping.")])
4601+ self.write_service_config.assert_not_called()
4602+
4603+ def test_no_private_address_in_relation_data(self):
4604+ self.get_config_services.return_value = {
4605+ "service": {
4606+ "service_name": "service",
4607+ },
4608+ }
4609+ self.relations_of_type.return_value = [
4610+ {"port": 4242,
4611+ "__unit__": "foo/0"},
4612+ ]
4613+ self.assertIs(None, hooks.create_services())
4614+        self.log.assert_has_calls([call(
4615+ "No private-address in relation data for 'foo/0', skipping.")])
4616+ self.write_service_config.assert_not_called()
4617+
4618+ def test_no_hostname_in_relation_data(self):
4619+ self.get_config_services.return_value = {
4620+ "service": {
4621+ "service_name": "service",
4622+ },
4623+ }
4624+ self.relations_of_type.return_value = [
4625+ {"port": 4242,
4626+ "private-address": "1.2.3.4",
4627+ "__unit__": "foo/0"},
4628+ ]
4629+ self.assertIs(None, hooks.create_services())
4630+        self.log.assert_has_calls([call(
4631+ "No hostname in relation data for 'foo/0', skipping.")])
4632+ self.write_service_config.assert_not_called()
4633+
4634+ def test_relation_unknown_service(self):
4635+ self.get_config_services.return_value = {
4636+ "service": {
4637+ "service_name": "service",
4638+ },
4639+ }
4640+ self.relations_of_type.return_value = [
4641+ {"port": 4242,
4642+ "hostname": "backend.1",
4643+ "service_name": "invalid",
4644+ "private-address": "1.2.3.4",
4645+ "__unit__": "foo/0"},
4646+ ]
4647+ self.assertIs(None, hooks.create_services())
4648+        self.log.assert_has_calls([call(
4649+ "Service 'invalid' does not exist.")])
4650+ self.write_service_config.assert_not_called()
4651+
4652+ def test_no_relation_but_has_servers_from_config(self):
4653+ self.get_config_services.return_value = {
4654+ None: {
4655+ "service_name": "service",
4656+ },
4657+ "service": {
4658+ "service_name": "service",
4659+ "servers": [
4660+ ("legacy-backend", "1.2.3.1", 4242, ["maxconn 42"]),
4661+ ]
4662+ },
4663+ }
4664+ self.relations_of_type.return_value = []
4665+
4666+ expected = {
4667+ 'service': {
4668+ 'service_name': 'service',
4669+ 'servers': [
4670+ ("legacy-backend", "1.2.3.1", 4242, ["maxconn 42"]),
4671+ ],
4672+ },
4673+ }
4674+ self.assertEqual(expected, hooks.create_services())
4675+ self.write_service_config.assert_called_with(expected)
4676+
4677+ def test_relation_default_service(self):
4678+ self.get_config_services.return_value = {
4679+ None: {
4680+ "service_name": "service",
4681+ },
4682+ "service": {
4683+ "service_name": "service",
4684+ },
4685+ }
4686+ self.relations_of_type.return_value = [
4687+ {"port": 4242,
4688+ "hostname": "backend.1",
4689+ "private-address": "1.2.3.4",
4690+ "__unit__": "foo/0"},
4691+ ]
4692+
4693+ expected = {
4694+ 'service': {
4695+ 'service_name': 'service',
4696+ 'servers': [('foo-0-4242', '1.2.3.4', 4242, [])],
4697+ },
4698+ }
4699+ self.assertEqual(expected, hooks.create_services())
4700+ self.write_service_config.assert_called_with(expected)
4701+
4702+ def test_with_service_options(self):
4703+ self.get_config_services.return_value = {
4704+ None: {
4705+ "service_name": "service",
4706+ },
4707+ "service": {
4708+ "service_name": "service",
4709+ "server_options": ["maxconn 4"],
4710+ },
4711+ }
4712+ self.relations_of_type.return_value = [
4713+ {"port": 4242,
4714+ "hostname": "backend.1",
4715+ "private-address": "1.2.3.4",
4716+ "__unit__": "foo/0"},
4717+ ]
4718+
4719+ expected = {
4720+ 'service': {
4721+ 'service_name': 'service',
4722+ 'server_options': ["maxconn 4"],
4723+ 'servers': [('foo-0-4242', '1.2.3.4',
4724+ 4242, ["maxconn 4"])],
4725+ },
4726+ }
4727+ self.assertEqual(expected, hooks.create_services())
4728+ self.write_service_config.assert_called_with(expected)
4729+
4730+ def test_with_service_name(self):
4731+ self.get_config_services.return_value = {
4732+ None: {
4733+ "service_name": "service",
4734+ },
4735+ "foo_service": {
4736+ "service_name": "foo_service",
4737+ "server_options": ["maxconn 4"],
4738+ },
4739+ }
4740+ self.relations_of_type.return_value = [
4741+ {"port": 4242,
4742+ "hostname": "backend.1",
4743+ "service_name": "foo_service",
4744+ "private-address": "1.2.3.4",
4745+ "__unit__": "foo/0"},
4746+ ]
4747+
4748+ expected = {
4749+ 'foo_service': {
4750+ 'service_name': 'foo_service',
4751+ 'server_options': ["maxconn 4"],
4752+ 'servers': [('foo-0-4242', '1.2.3.4',
4753+ 4242, ["maxconn 4"])],
4754+ },
4755+ }
4756+ self.assertEqual(expected, hooks.create_services())
4757+ self.write_service_config.assert_called_with(expected)
4758+
4759+ def test_no_service_name_unit_name_match_service_name(self):
4760+ self.get_config_services.return_value = {
4761+ None: {
4762+ "service_name": "foo_service",
4763+ },
4764+ "foo_service": {
4765+ "service_name": "foo_service",
4766+ "server_options": ["maxconn 4"],
4767+ },
4768+ }
4769+ self.relations_of_type.return_value = [
4770+ {"port": 4242,
4771+ "hostname": "backend.1",
4772+ "private-address": "1.2.3.4",
4773+ "__unit__": "foo/1"},
4774+ ]
4775+
4776+ expected = {
4777+ 'foo_service': {
4778+ 'service_name': 'foo_service',
4779+ 'server_options': ["maxconn 4"],
4780+ 'servers': [('foo-1-4242', '1.2.3.4',
4781+ 4242, ["maxconn 4"])],
4782+ },
4783+ }
4784+ self.assertEqual(expected, hooks.create_services())
4785+ self.write_service_config.assert_called_with(expected)
4786+
4787+ def test_with_sitenames_match_service_name(self):
4788+ self.get_config_services.return_value = {
4789+ None: {
4790+ "service_name": "service",
4791+ },
4792+ "foo_srv": {
4793+ "service_name": "foo_srv",
4794+ "server_options": ["maxconn 4"],
4795+ },
4796+ }
4797+ self.relations_of_type.return_value = [
4798+ {"port": 4242,
4799+ "hostname": "backend.1",
4800+ "sitenames": "foo_srv bar_srv",
4801+ "private-address": "1.2.3.4",
4802+ "__unit__": "foo/0"},
4803+ ]
4804+
4805+ expected = {
4806+ 'foo_srv': {
4807+ 'service_name': 'foo_srv',
4808+ 'server_options': ["maxconn 4"],
4809+ 'servers': [('foo-0-4242', '1.2.3.4',
4810+ 4242, ["maxconn 4"])],
4811+ },
4812+ }
4813+ self.assertEqual(expected, hooks.create_services())
4814+ self.write_service_config.assert_called_with(expected)
4815+
4816+ def test_with_juju_services_match_service_name(self):
4817+ self.get_config_services.return_value = {
4818+ None: {
4819+ "service_name": "service",
4820+ },
4821+ "foo_service": {
4822+ "service_name": "foo_service",
4823+ "server_options": ["maxconn 4"],
4824+ },
4825+ }
4826+ self.relations_of_type.return_value = [
4827+ {"port": 4242,
4828+ "hostname": "backend.1",
4829+ "private-address": "1.2.3.4",
4830+ "__unit__": "foo/1"},
4831+ ]
4832+
4833+ expected = {
4834+ 'foo_service': {
4835+ 'service_name': 'foo_service',
4836+ 'server_options': ["maxconn 4"],
4837+ 'servers': [('foo-1-4242', '1.2.3.4',
4838+ 4242, ["maxconn 4"])],
4839+ },
4840+ }
4841+
4842+ result = hooks.create_services()
4843+
4844+ self.assertEqual(expected, result)
4845+ self.write_service_config.assert_called_with(expected)
4846+
4847+ def test_with_sitenames_no_match_but_unit_name(self):
4848+ self.get_config_services.return_value = {
4849+ None: {
4850+ "service_name": "service",
4851+ },
4852+ "foo": {
4853+ "service_name": "foo",
4854+ "server_options": ["maxconn 4"],
4855+ },
4856+ }
4857+ self.relations_of_type.return_value = [
4858+ {"port": 4242,
4859+ "hostname": "backend.1",
4860+ "sitenames": "bar_service baz_service",
4861+ "private-address": "1.2.3.4",
4862+ "__unit__": "foo/0"},
4863+ ]
4864+
4865+ expected = {
4866+ 'foo': {
4867+ 'service_name': 'foo',
4868+ 'server_options': ["maxconn 4"],
4869+ 'servers': [('foo-0-4242', '1.2.3.4',
4870+ 4242, ["maxconn 4"])],
4871+ },
4872+ }
4873+ self.assertEqual(expected, hooks.create_services())
4874+ self.write_service_config.assert_called_with(expected)
4875
4876=== added file 'hooks/tests/test_website_hooks.py'
4877--- hooks/tests/test_website_hooks.py 1970-01-01 00:00:00 +0000
4878+++ hooks/tests/test_website_hooks.py 2013-10-16 14:05:24 +0000
4879@@ -0,0 +1,145 @@
4880+from testtools import TestCase
4881+from mock import patch, call
4882+
4883+import hooks
4884+
4885+
4886+class WebsiteRelationTest(TestCase):
4887+
4888+ def setUp(self):
4889+ super(WebsiteRelationTest, self).setUp()
4890+ self.notify_website = self.patch_hook("notify_website")
4891+
4892+ def patch_hook(self, hook_name):
4893+ mock_controller = patch.object(hooks, hook_name)
4894+ mock = mock_controller.start()
4895+ self.addCleanup(mock_controller.stop)
4896+ return mock
4897+
4898+ def test_website_interface_none(self):
4899+ self.assertEqual(None, hooks.website_interface(hook_name=None))
4900+ self.notify_website.assert_not_called()
4901+
4902+ def test_website_interface_joined(self):
4903+ hooks.website_interface(hook_name="joined")
4904+ self.notify_website.assert_called_once_with(
4905+ changed=False, relation_ids=(None,))
4906+
4907+ def test_website_interface_changed(self):
4908+ hooks.website_interface(hook_name="changed")
4909+ self.notify_website.assert_called_once_with(
4910+ changed=True, relation_ids=(None,))
4911+
4912+
4913+class NotifyRelationTest(TestCase):
4914+
4915+ def setUp(self):
4916+ super(NotifyRelationTest, self).setUp()
4917+
4918+ self.relations_for_id = self.patch_hook("relations_for_id")
4919+ self.relation_set = self.patch_hook("relation_set")
4920+ self.config_get = self.patch_hook("config_get")
4921+ self.get_relation_ids = self.patch_hook("get_relation_ids")
4922+ self.get_hostname = self.patch_hook("get_hostname")
4923+ self.log = self.patch_hook("log")
4924+ self.get_config_services = self.patch_hook("get_config_service")
4925+
4926+ def patch_hook(self, hook_name):
4927+ mock_controller = patch.object(hooks, hook_name)
4928+ mock = mock_controller.start()
4929+ self.addCleanup(mock_controller.stop)
4930+ return mock
4931+
4932+ def test_notify_website_relation_no_relation_ids(self):
4933+ self.get_relation_ids.return_value = ()
4934+ hooks.notify_relation("website")
4935+ self.relation_set.assert_not_called()
4936+ self.get_relation_ids.assert_called_once_with("website")
4937+
4938+ def test_notify_website_relation_with_default_relation(self):
4939+ self.get_relation_ids.return_value = ()
4940+ self.get_hostname.return_value = "foo.local"
4941+ self.relations_for_id.return_value = [{}]
4942+ self.config_get.return_value = {"services": ""}
4943+
4944+ hooks.notify_relation("website", relation_ids=(None,))
4945+
4946+ self.get_hostname.assert_called_once_with()
4947+ self.relations_for_id.assert_called_once_with(None)
4948+ self.relation_set.assert_called_once_with(
4949+ relation_id=None, port="80", hostname="foo.local",
4950+ all_services="")
4951+ self.get_relation_ids.assert_not_called()
4952+
4953+ def test_notify_website_relation_with_relations(self):
4954+ self.get_relation_ids.return_value = ("website:1",
4955+ "website:2")
4956+ self.get_hostname.return_value = "foo.local"
4957+ self.relations_for_id.return_value = [{}]
4958+ self.config_get.return_value = {"services": ""}
4959+
4960+ hooks.notify_relation("website")
4961+
4962+ self.get_hostname.assert_called_once_with()
4963+ self.get_relation_ids.assert_called_once_with("website")
4964+ self.relations_for_id.assert_has_calls([
4965+ call("website:1"),
4966+ call("website:2"),
4967+ ])
4968+
4969+ self.relation_set.assert_has_calls([
4970+ call(relation_id="website:1", port="80", hostname="foo.local",
4971+ all_services=""),
4972+ call(relation_id="website:2", port="80", hostname="foo.local",
4973+ all_services=""),
4974+ ])
4975+
4976+ def test_notify_website_relation_with_different_sitenames(self):
4977+ self.get_relation_ids.return_value = ("website:1",)
4978+ self.get_hostname.return_value = "foo.local"
4979+ self.relations_for_id.return_value = [{"service_name": "foo"},
4980+ {"service_name": "bar"}]
4981+ self.config_get.return_value = {"services": ""}
4982+
4983+ hooks.notify_relation("website")
4984+
4985+ self.get_hostname.assert_called_once_with()
4986+ self.get_relation_ids.assert_called_once_with("website")
4987+ self.relations_for_id.assert_has_calls([
4988+ call("website:1"),
4989+ ])
4990+
4991+ self.relation_set.assert_has_calls([
4992+ call(
4993+ relation_id="website:1", port="80", hostname="foo.local",
4994+ all_services=""),
4995+ ])
4996+ self.log.assert_called_once_with(
4997+ "Remote units requested more than a single service name. "
4998+ "Falling back to default host/port.")
4999+
5000+ def test_notify_website_relation_with_same_sitenames(self):
The diff has been truncated for viewing.
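
The tests above repeatedly use a small `patch_hook()` helper to replace functions on the `hooks` module with mocks and restore them automatically. A minimal, self-contained sketch of that pattern (using a stand-in `fakehooks` module, since the charm's real `hooks.py` is not importable outside the branch):

```python
# Sketch of the patch_hook() helper used throughout these tests.
# patch.object() swaps an attribute on the module for a Mock, and
# addCleanup() guarantees the original is restored even if the test
# fails. `fakehooks` is a hypothetical stand-in for the charm's hooks.py.
import types
from unittest import TestCase
from unittest.mock import patch, call

fakehooks = types.ModuleType("fakehooks")
fakehooks.relation_set = lambda **kw: None


class PatchHookExample(TestCase):

    def patch_hook(self, hook_name):
        mock_controller = patch.object(fakehooks, hook_name)
        mock = mock_controller.start()
        self.addCleanup(mock_controller.stop)
        return mock

    def test_relation_set_is_patched(self):
        relation_set = self.patch_hook("relation_set")
        fakehooks.relation_set(relation_id="website:1", port="80")
        # A plain call() object matches a direct invocation of the mock;
        # call.relation_set(...) would only match attribute-style calls.
        relation_set.assert_has_calls(
            [call(relation_id="website:1", port="80")])
```

Note the comment on `call()`: `mock.call(...)` records a direct call, which is what `assert_has_calls` must be given when the mock itself is invoked.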
