Merge lp:~sidnei/charms/precise/haproxy/trunk into lp:charms/haproxy

Proposed by Sidnei da Silva
Status: Superseded
Proposed branch: lp:~sidnei/charms/precise/haproxy/trunk
Merge into: lp:charms/haproxy
Diff against target: 5083 lines (+3681/-904)
30 files modified
.bzrignore (+10/-0)
Makefile (+39/-0)
README.md (+22/-15)
charm-helpers.yaml (+4/-0)
cm.py (+193/-0)
config-manager.txt (+6/-0)
config.yaml (+13/-1)
files/nrpe/check_haproxy.sh (+2/-3)
hooks/charmhelpers/contrib/charmsupport/nrpe.py (+218/-0)
hooks/charmhelpers/contrib/charmsupport/volumes.py (+156/-0)
hooks/charmhelpers/core/hookenv.py (+340/-0)
hooks/charmhelpers/core/host.py (+239/-0)
hooks/charmhelpers/fetch/__init__.py (+209/-0)
hooks/charmhelpers/fetch/archiveurl.py (+48/-0)
hooks/charmhelpers/fetch/bzrurl.py (+44/-0)
hooks/hooks.py (+508/-450)
hooks/install (+13/-0)
hooks/nrpe.py (+0/-170)
hooks/test_hooks.py (+0/-263)
hooks/tests/test_config_changed_hooks.py (+120/-0)
hooks/tests/test_helpers.py (+745/-0)
hooks/tests/test_nrpe_hooks.py (+24/-0)
hooks/tests/test_peer_hooks.py (+200/-0)
hooks/tests/test_reverseproxy_hooks.py (+345/-0)
hooks/tests/test_website_hooks.py (+145/-0)
hooks/tests/utils_for_tests.py (+21/-0)
metadata.yaml (+7/-1)
revision (+0/-1)
setup.cfg (+4/-0)
tarmac_tests.sh (+6/-0)
To merge this branch: bzr merge lp:~sidnei/charms/precise/haproxy/trunk
Reviewer Review Type Date Requested Status
charmers Pending
Review via email: mp+181421@code.launchpad.net

This proposal supersedes a proposal from 2013-04-18.

Description of the change

* The 'all_services' config now supports a static list of servers to be used *in addition* to the ones provided via relation.

* When more than one haproxy unit exists, the configured service is upgraded in place to a mode where traffic is routed to a single haproxy unit (the first one in unit-name order) and the remaining ones are configured as 'backup'. This is done to allow a 'maxconn' setting to be enforced on the configured services, which would not be possible otherwise (see the sketch after this list).

* Changes to the configured services are properly propagated to the upstream relation.
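
As a minimal sketch of the points above, a services entry carrying a static server list might be handled roughly as follows; the key names, server entries and addresses are assumptions made for illustration, not the exact schema shipped in this branch (see config.yaml in the diff for the real defaults):

    # Sketch only: the 'servers' entries and addresses are invented.
    import yaml

    services_yaml = """
    - service_name: haproxy_service
      service_host: 0.0.0.0
      service_port: 80
      service_options: [balance leastconn]
      # hypothetical static backends, merged with relation-provided ones
      servers:
        - [static_0, 10.0.0.10, 8080, maxconn 50]
        - [static_1, 10.0.0.11, 8080, maxconn 50]
    """

    services = yaml.safe_load(services_yaml)

    # Servers learned from the reverseproxy relation would simply be
    # appended to the same list before the haproxy.cfg stanzas are rendered.
    services[0]["servers"].append(["app-unit-0", "10.0.0.20", 8080, "maxconn 50"])
    print(yaml.dump(services))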

Revision history for this message
Juan L. Negron (negronjl) wrote : Posted in a previous version of this proposal

Reviewing this now.

-Juan

Revision history for this message
Juan L. Negron (negronjl) wrote : Posted in a previous version of this proposal

Hi Sidnei:

I ran charm proof on the charm (after merging it locally) and I get the following:
negronjl@negronjl-laptop:~/src/juju/charms/precise$ charm proof haproxy
W: website-relation-changed not executable
W: nrpe-external-master-relation-changed not executable
W: reverseproxy-relation-changed not executable
W: peer-relation-changed not executable
W: install not executable
W: start not executable
W: stop not executable
W: config-changed not executable

As I try to deploy the charm, I get this:
negronjl@negronjl-laptop:~/src/juju/charms$ juju deploy --repository . local:precise/haproxy
2013-02-27 10:17:30,036 INFO Searching for charm local:precise/haproxy in local charm repository: /home/negronjl/src/juju/charms
2013-02-27 10:17:30,244 INFO Connecting to environment...
2013-02-27 10:17:33,069 INFO Connected to environment.
2013-02-27 10:17:33,337 ERROR [Errno 2] No such file or directory: '/home/negronjl/src/juju/charms/precise/haproxy/lib/charmsupport'

review: Disapprove
Revision history for this message
Sidnei da Silva (sidnei) wrote : Posted in a previous version of this proposal

Ah, lib/charmsupport is a symlink and those don't get included in the charm IIRC. Maybe 'charm proof' should check for that.

I'll add charm proof to our tarmac test step and fix the other issue.
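
Since symlinks are silently dropped at packaging time, a crude pre-upload check along the following lines would have caught the lib/charmsupport case; this is only a sketch of the idea, not something shipped in this branch or implemented in 'charm proof':

    #!/usr/bin/env python
    # Sketch only: warn about symlinks in a charm tree, since they do not
    # survive packaging for the charm store.
    import os
    import sys

    def find_symlinks(charm_dir):
        for root, dirs, files in os.walk(charm_dir):
            for name in dirs + files:
                path = os.path.join(root, name)
                if os.path.islink(path):
                    yield path

    if __name__ == '__main__':
        charm_dir = sys.argv[1] if len(sys.argv) > 1 else '.'
        links = list(find_symlinks(charm_dir))
        for link in links:
            print('W: symlink will not survive packaging: %s' % link)
        sys.exit(1 if links else 0)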

Revision history for this message
Mark Mims (mark-mims) wrote : Posted in a previous version of this proposal

same story as apache... please reserve /tests for charm tests.

review: Needs Resubmitting
Revision history for this message
James Page (james-page) wrote : Posted in a previous version of this proposal

Sidnei

This all looks like great work; I really like the approach you are taking to testing (gonna steal some of that goodness for my own charms), and you have pointed me at charmsupport, which overlaps a lot with the openstack-charm-helpers we use for the OpenStack charms.

I have one gripe right now: as is, this change will make the charm un-deployable from the charm store, as you have introduced a requirement to branch it locally and pull in the other dependencies.

I personally don't think this is acceptable; the haproxy branch should ship the required revisions of its dependencies so that this continues to work.

review: Disapprove
Revision history for this message
James Page (james-page) wrote : Posted in a previous version of this proposal

Sorry - that should have been 'Needs Fixing' not 'Disapprove'.

Not got my head on straight this morning...

review: Needs Fixing
Revision history for this message
Sidnei da Silva (sidnei) wrote : Posted in a previous version of this proposal

Hi James,

re: dependencies, that's actually not the case anymore, and I'll update the README to match. Notice how I've changed the install hook to be a shell script that adds a PPA and then installs the package dependencies, so that it still works if you deploy directly from the charm store.
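
For reference, the same pattern can be sketched with the charmhelpers.fetch helpers bundled in this branch; the actual hooks/install here is a shell script, and the PPA and package names below are placeholders rather than the ones it really uses:

    #!/usr/bin/env python
    # Sketch only: add a PPA, refresh apt and install dependencies so the
    # charm still works when deployed straight from the charm store.
    # The PPA and package names are placeholders.
    from charmhelpers.fetch import add_source, apt_update, apt_install

    if __name__ == '__main__':
        add_source('ppa:example/haproxy-deps')
        apt_update(fatal=True)
        apt_install(['haproxy', 'python-jinja2'], fatal=True)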

Revision history for this message
David Britton (dpb) wrote : Posted in a previous version of this proposal

Sidnei: the README.md needs the following PPA URL:

  sudo add-apt-repository ppa:cjohnston/flake8

It's misspelled right now.

Revision history for this message
David Britton (dpb) wrote : Posted in a previous version of this proposal

[2]: flake8 in the apt-get install line should be python-flake8

Revision history for this message
David Britton (dpb) wrote : Posted in a previous version of this proposal

[3]: also add python-nosexcover

Revision history for this message
David Britton (dpb) wrote : Posted in a previous version of this proposal

Just tested this branch with an upstream apache2 and downstream landscape charm that depends on the relation-driven proxying change in r64. It's missing the functionality in r64 and r64 from trunk right now.

review: Needs Fixing
83. By Tom Haddon

Merge from U1 charms - also an upstream review in process for this. Add in charmhelpers and updating some tests.

84. By Matthias Arnason

[ev r=tiaz] Always write at least one listen stanza, do not pass null relationIDs to subprocess

85. By Matthias Arnason

[sidnei r=tiaz] Switch to using service name instead of hostname in backend server name, filter frontend-only options into frontend, create frontend/backend stanzas instead of a single listen stanza. Still support old listen stanzas when parsing for bw-compatibility.

86. By David Ames

[dames,r=mthaddon] reverting revno 84 and making notify_website call more explicit for relation_ids

87. By JuanJo Ciarlante

[sidnei, r=jjo] Dupe mode http/tcp and option httplog/tcplog between frontend and backend
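
To make revisions 85 and 87 concrete: the rendered configuration moves from a single listen stanza to paired frontend/backend stanzas, with the mode and the matching log option duplicated in both, and (per the change description above) some servers can end up marked 'backup'. The fragment below is a hand-written illustration of that shape only; the names, addresses and ports are invented and not generated by this branch:

    # Illustration only, not output from this charm.
    frontend haproxy-0-80
        bind 0.0.0.0:80
        mode http
        option httplog
        default_backend haproxy_service

    backend haproxy_service
        mode http
        option httplog
        balance leastconn
        server haproxy_service-unit-0 10.0.0.20:8080 maxconn 50
        server haproxy_service-unit-1 10.0.0.21:8080 maxconn 50 backup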


Preview Diff

1=== added file '.bzrignore'
2--- .bzrignore 1970-01-01 00:00:00 +0000
3+++ .bzrignore 2013-10-10 22:34:35 +0000
4@@ -0,0 +1,10 @@
5+revision
6+_trial_temp
7+.coverage
8+coverage.xml
9+*.crt
10+*.key
11+lib/*
12+*.pyc
13+exec.d
14+build/charm-helpers
15
16=== added file 'Makefile'
17--- Makefile 1970-01-01 00:00:00 +0000
18+++ Makefile 2013-10-10 22:34:35 +0000
19@@ -0,0 +1,39 @@
20+PWD := $(shell pwd)
21+SOURCEDEPS_DIR ?= $(shell dirname $(PWD))/.sourcecode
22+HOOKS_DIR := $(PWD)/hooks
23+TEST_PREFIX := PYTHONPATH=$(HOOKS_DIR)
24+TEST_DIR := $(PWD)/hooks/tests
25+CHARM_DIR := $(PWD)
26+PYTHON := /usr/bin/env python
27+
28+
29+build: test lint proof
30+
31+revision:
32+ @test -f revision || echo 0 > revision
33+
34+proof: revision
35+ @echo Proofing charm...
36+ @(charm proof $(PWD) || [ $$? -eq 100 ]) && echo OK
37+ @test `cat revision` = 0 && rm revision
38+
39+test:
40+ @echo Starting tests...
41+ @CHARM_DIR=$(CHARM_DIR) $(TEST_PREFIX) nosetests $(TEST_DIR)
42+
43+lint:
44+ @echo Checking for Python syntax...
45+ @flake8 $(HOOKS_DIR) --ignore=E123 --exclude=$(HOOKS_DIR)/charmhelpers && echo OK
46+
47+sourcedeps: $(PWD)/config-manager.txt
48+ @echo Updating source dependencies...
49+ @$(PYTHON) cm.py -c $(PWD)/config-manager.txt \
50+ -p $(SOURCEDEPS_DIR) \
51+ -t $(PWD)
52+ @$(PYTHON) build/charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
53+ -c charm-helpers.yaml \
54+ -b build/charm-helpers \
55+ -d hooks/charmhelpers
56+ @echo Do not forget to commit the updated files if any.
57+
58+.PHONY: revision proof test lint sourcedeps charm-payload
59
60=== modified file 'README.md'
61--- README.md 2013-02-12 23:43:54 +0000
62+++ README.md 2013-10-10 22:34:35 +0000
63@@ -1,5 +1,5 @@
64-Juju charm haproxy
65-==================
66+Juju charm for HAProxy
67+======================
68
69 HAProxy is a free, very fast and reliable solution offering high availability,
70 load balancing, and proxying for TCP and HTTP-based applications. It is
71@@ -9,6 +9,23 @@
72 integration into existing architectures very easy and riskless, while still
73 offering the possibility not to expose fragile web servers to the Net.
74
75+Development
76+-----------
77+The following steps are needed for testing and development of the charm,
78+but **not** for deployment:
79+
80+ sudo apt-get install python-software-properties
81+ sudo add-apt-repository ppa:cjohnston/flake8
82+ sudo apt-get update
83+ sudo apt-get install python-mock python-flake8 python-nose python-nosexcover
84+
85+To run the tests:
86+
87+ make build
88+
89+... will run the unit tests, run flake8 over the source to warn about
90+formatting issues and output a code coverage summary of the 'hooks.py' module.
91+
92 How to deploy the charm
93 -----------------------
94 juju deploy haproxy
95@@ -27,7 +44,7 @@
96 the "Website Relation" section for more information about that.
97
98 When your charm hooks into reverseproxy you have two general approaches
99-which can be used to notify haproxy about what services you are running.
100+which can be used to notify haproxy about what services you are running.
101 1) Single-service proxying or 2) Multi-service or relation-driven proxying.
102
103 ** 1) Single-Service Proxying **
104@@ -67,7 +84,7 @@
105
106 #!/bin/bash
107 # hooks/website-relation-changed
108-
109+
110 host=$(unit-get private-address)
111 port=80
112
113@@ -80,7 +97,7 @@
114 "
115
116 Once set, haproxy will union multiple `servers` stanzas from any units
117-joining with the same `service_name` under one listen stanza.
118+joining with the same `service_name` under one listen stanza.
119 `service-options` and `server_options` will be overwritten, so ensure they
120 are set uniformly on all services with the same name.
121
122@@ -102,18 +119,8 @@
123 Many of the haproxy settings can be altered via the standard juju configuration
124 settings. Please see the config.yaml file as each is fairly clearly documented.
125
126-Testing
127--------
128-This charm has a simple unit-test program. Please expand it and make sure new
129-changes are covered by simple unit tests. To run the unit tests:
130-
131- sudo apt-get install python-mocker
132- sudo apt-get install python-twisted-core
133- cd hooks; trial test_hooks
134-
135 TODO:
136 -----
137
138 * Expand Single-Service section as I have not tested that mode fully.
139 * Trigger website-relation-changed when the reverse-proxy relation changes
140-
141
142=== added directory 'build'
143=== added file 'charm-helpers.yaml'
144--- charm-helpers.yaml 1970-01-01 00:00:00 +0000
145+++ charm-helpers.yaml 2013-10-10 22:34:35 +0000
146@@ -0,0 +1,4 @@
147+include:
148+ - core
149+ - fetch
150+ - contrib.charmsupport
151\ No newline at end of file
152
153=== added file 'cm.py'
154--- cm.py 1970-01-01 00:00:00 +0000
155+++ cm.py 2013-10-10 22:34:35 +0000
156@@ -0,0 +1,193 @@
157+# Copyright 2010-2013 Canonical Ltd. All rights reserved.
158+import os
159+import re
160+import sys
161+import errno
162+import hashlib
163+import subprocess
164+import optparse
165+
166+from os import curdir
167+from bzrlib.branch import Branch
168+from bzrlib.plugin import load_plugins
169+load_plugins()
170+from bzrlib.plugins.launchpad import account as lp_account
171+
172+if 'GlobalConfig' in dir(lp_account):
173+ from bzrlib.config import LocationConfig as LocationConfiguration
174+ _ = LocationConfiguration
175+else:
176+ from bzrlib.config import LocationStack as LocationConfiguration
177+ _ = LocationConfiguration
178+
179+
180+def get_branch_config(config_file):
181+ """
182+ Retrieves the sourcedeps configuration for an source dir.
183+ Returns a dict of (branch, revspec) tuples, keyed by branch name.
184+ """
185+ branches = {}
186+ with open(config_file, 'r') as stream:
187+ for line in stream:
188+ line = line.split('#')[0].strip()
189+ bzr_match = re.match(r'(\S+)\s+'
190+ 'lp:([^;]+)'
191+ '(?:;revno=(\d+))?', line)
192+ if bzr_match:
193+ name, branch, revno = bzr_match.group(1, 2, 3)
194+ if revno is None:
195+ revspec = -1
196+ else:
197+ revspec = revno
198+ branches[name] = (branch, revspec)
199+ continue
200+ dir_match = re.match(r'(\S+)\s+'
201+ '\(directory\)', line)
202+ if dir_match:
203+ name = dir_match.group(1)
204+ branches[name] = None
205+ return branches
206+
207+
208+def main(config_file, parent_dir, target_dir, verbose):
209+ """Do the deed."""
210+
211+ try:
212+ os.makedirs(parent_dir)
213+ except OSError, e:
214+ if e.errno != errno.EEXIST:
215+ raise
216+
217+ branches = sorted(get_branch_config(config_file).items())
218+ for branch_name, spec in branches:
219+ if spec is None:
220+ # It's a directory, just create it and move on.
221+ destination_path = os.path.join(target_dir, branch_name)
222+ if not os.path.isdir(destination_path):
223+ os.makedirs(destination_path)
224+ continue
225+
226+ (quoted_branch_spec, revspec) = spec
227+ revno = int(revspec)
228+
229+ # qualify mirror branch name with hash of remote repo path to deal
230+ # with changes to the remote branch URL over time
231+ branch_spec_digest = hashlib.sha1(quoted_branch_spec).hexdigest()
232+ branch_directory = branch_spec_digest
233+
234+ source_path = os.path.join(parent_dir, branch_directory)
235+ destination_path = os.path.join(target_dir, branch_name)
236+
237+ # Remove leftover symlinks/stray files.
238+ try:
239+ os.remove(destination_path)
240+ except OSError, e:
241+ if e.errno != errno.EISDIR and e.errno != errno.ENOENT:
242+ raise
243+
244+ lp_url = "lp:" + quoted_branch_spec
245+
246+ # Create the local mirror branch if it doesn't already exist
247+ if verbose:
248+ sys.stderr.write('%30s: ' % (branch_name,))
249+ sys.stderr.flush()
250+
251+ fresh = False
252+ if not os.path.exists(source_path):
253+ subprocess.check_call(['bzr', 'branch', '-q', '--no-tree',
254+ '--', lp_url, source_path])
255+ fresh = True
256+
257+ if not fresh:
258+ source_branch = Branch.open(source_path)
259+ if revno == -1:
260+ orig_branch = Branch.open(lp_url)
261+ fresh = source_branch.revno() == orig_branch.revno()
262+ else:
263+ fresh = source_branch.revno() == revno
264+
265+ # Freshen the source branch if required.
266+ if not fresh:
267+ subprocess.check_call(['bzr', 'pull', '-q', '--overwrite', '-r',
268+ str(revno), '-d', source_path,
269+ '--', lp_url])
270+
271+ if os.path.exists(destination_path):
272+ # Overwrite the destination with the appropriate revision.
273+ subprocess.check_call(['bzr', 'clean-tree', '--force', '-q',
274+ '--ignored', '-d', destination_path])
275+ subprocess.check_call(['bzr', 'pull', '-q', '--overwrite',
276+ '-r', str(revno),
277+ '-d', destination_path, '--', source_path])
278+ else:
279+ # Create a new branch.
280+ subprocess.check_call(['bzr', 'branch', '-q', '--hardlink',
281+ '-r', str(revno),
282+ '--', source_path, destination_path])
283+
284+ # Check the state of the destination branch.
285+ destination_branch = Branch.open(destination_path)
286+ destination_revno = destination_branch.revno()
287+
288+ if verbose:
289+ sys.stderr.write('checked out %4s of %s\n' %
290+ ("r" + str(destination_revno), lp_url))
291+ sys.stderr.flush()
292+
293+ if revno != -1 and destination_revno != revno:
294+ raise RuntimeError("Expected revno %d but got revno %d" %
295+ (revno, destination_revno))
296+
297+if __name__ == '__main__':
298+ parser = optparse.OptionParser(
299+ usage="%prog [options]",
300+ description=(
301+ "Add a lightweight checkout in <target> for each "
302+ "corresponding file in <parent>."),
303+ add_help_option=False)
304+ parser.add_option(
305+ '-p', '--parent', dest='parent',
306+ default=None,
307+ help=("The directory of the parent tree."),
308+ metavar="DIR")
309+ parser.add_option(
310+ '-t', '--target', dest='target', default=curdir,
311+ help=("The directory of the target tree."),
312+ metavar="DIR")
313+ parser.add_option(
314+ '-c', '--config', dest='config', default=None,
315+ help=("The config file to be used for config-manager."),
316+ metavar="DIR")
317+ parser.add_option(
318+ '-q', '--quiet', dest='verbose', action='store_false',
319+ help="Be less verbose.")
320+ parser.add_option(
321+ '-v', '--verbose', dest='verbose', action='store_true',
322+ help="Be more verbose.")
323+ parser.add_option(
324+ '-h', '--help', action='help',
325+ help="Show this help message and exit.")
326+ parser.set_defaults(verbose=True)
327+
328+ options, args = parser.parse_args()
329+
330+ if options.parent is None:
331+ options.parent = os.environ.get(
332+ "SOURCEDEPS_DIR",
333+ os.path.join(curdir, ".sourcecode"))
334+
335+ if options.target is None:
336+ parser.error(
337+ "Target directory not specified.")
338+
339+ if options.config is None:
340+ config = [arg for arg in args
341+ if arg != "update"]
342+ if not config or len(config) > 1:
343+ parser.error("Config not specified")
344+ options.config = config[0]
345+
346+ sys.exit(main(config_file=options.config,
347+ parent_dir=options.parent,
348+ target_dir=options.target,
349+ verbose=options.verbose))
350
351=== added file 'config-manager.txt'
352--- config-manager.txt 1970-01-01 00:00:00 +0000
353+++ config-manager.txt 2013-10-10 22:34:35 +0000
354@@ -0,0 +1,6 @@
355+# After making changes to this file, to ensure that your sourcedeps are
356+# up-to-date do:
357+#
358+# make sourcedeps
359+
360+./build/charm-helpers lp:charm-helpers;revno=70
361
362=== modified file 'config.yaml'
363--- config.yaml 2012-10-10 14:38:47 +0000
364+++ config.yaml 2013-10-10 22:34:35 +0000
365@@ -59,7 +59,7 @@
366 restarting, a turn-around timer of 1 second is applied before a retry
367 occurs.
368 default_timeouts:
369- default: "queue 1000, connect 1000, client 1000, server 1000"
370+ default: "queue 20000, client 50000, connect 5000, server 50000"
371 type: string
372 description: Default timeouts
373 enable_monitoring:
374@@ -90,6 +90,12 @@
375 default: 3
376 type: int
377 description: Monitoring interface refresh interval (in seconds)
378+ package_status:
379+ default: "install"
380+ type: "string"
381+ description: |
382+ The status of service-affecting packages will be set to this value in the dpkg database.
383+ Useful valid values are "install" and "hold".
384 services:
385 default: |
386 - service_name: haproxy_service
387@@ -106,6 +112,12 @@
388 before the first variable, service_name, as above. Service options is a
389 comma separated list, server options will be appended as a string to
390 the individual server lines for a given listen stanza.
391+ sysctl:
392+ default: ""
393+ type: string
394+ description: >
395+ YAML-formatted list of sysctl values, e.g.:
396+ '{ net.ipv4.tcp_max_syn_backlog : 65536 }'
397 nagios_context:
398 default: "juju"
399 type: string
400
401=== renamed directory 'files/nrpe-external-master' => 'files/nrpe'
402=== modified file 'files/nrpe/check_haproxy.sh'
403--- files/nrpe-external-master/check_haproxy.sh 2012-11-07 22:32:06 +0000
404+++ files/nrpe/check_haproxy.sh 2013-10-10 22:34:35 +0000
405@@ -2,7 +2,7 @@
406 #--------------------------------------------
407 # This file is managed by Juju
408 #--------------------------------------------
409-#
410+#
411 # Copyright 2009,2012 Canonical Ltd.
412 # Author: Tom Haddon
413
414@@ -13,7 +13,7 @@
415
416 for appserver in $(grep ' server' /etc/haproxy/haproxy.cfg | awk '{print $2'});
417 do
418- output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 10000 --regex="class=\"active(2|3).*${appserver}" -e ' 200 OK')
419+ output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 10000 --regex="class=\"(active|backup)(2|3).*${appserver}" -e ' 200 OK')
420 if [ $? != 0 ]; then
421 date >> $LOGFILE
422 echo $output >> $LOGFILE
423@@ -30,4 +30,3 @@
424
425 echo "OK: All haproxy instances looking good"
426 exit 0
427-
428
429=== added directory 'hooks/charmhelpers'
430=== added file 'hooks/charmhelpers/__init__.py'
431=== added directory 'hooks/charmhelpers/contrib'
432=== added file 'hooks/charmhelpers/contrib/__init__.py'
433=== added directory 'hooks/charmhelpers/contrib/charmsupport'
434=== added file 'hooks/charmhelpers/contrib/charmsupport/__init__.py'
435=== added file 'hooks/charmhelpers/contrib/charmsupport/nrpe.py'
436--- hooks/charmhelpers/contrib/charmsupport/nrpe.py 1970-01-01 00:00:00 +0000
437+++ hooks/charmhelpers/contrib/charmsupport/nrpe.py 2013-10-10 22:34:35 +0000
438@@ -0,0 +1,218 @@
439+"""Compatibility with the nrpe-external-master charm"""
440+# Copyright 2012 Canonical Ltd.
441+#
442+# Authors:
443+# Matthew Wedgwood <matthew.wedgwood@canonical.com>
444+
445+import subprocess
446+import pwd
447+import grp
448+import os
449+import re
450+import shlex
451+import yaml
452+
453+from charmhelpers.core.hookenv import (
454+ config,
455+ local_unit,
456+ log,
457+ relation_ids,
458+ relation_set,
459+)
460+
461+from charmhelpers.core.host import service
462+
463+# This module adds compatibility with the nrpe-external-master and plain nrpe
464+# subordinate charms. To use it in your charm:
465+#
466+# 1. Update metadata.yaml
467+#
468+# provides:
469+# (...)
470+# nrpe-external-master:
471+# interface: nrpe-external-master
472+# scope: container
473+#
474+# and/or
475+#
476+# provides:
477+# (...)
478+# local-monitors:
479+# interface: local-monitors
480+# scope: container
481+
482+#
483+# 2. Add the following to config.yaml
484+#
485+# nagios_context:
486+# default: "juju"
487+# type: string
488+# description: |
489+# Used by the nrpe subordinate charms.
490+# A string that will be prepended to instance name to set the host name
491+# in nagios. So for instance the hostname would be something like:
492+# juju-myservice-0
493+# If you're running multiple environments with the same services in them
494+# this allows you to differentiate between them.
495+#
496+# 3. Add custom checks (Nagios plugins) to files/nrpe-external-master
497+#
498+# 4. Update your hooks.py with something like this:
499+#
500+# from charmsupport.nrpe import NRPE
501+# (...)
502+# def update_nrpe_config():
503+# nrpe_compat = NRPE()
504+# nrpe_compat.add_check(
505+# shortname = "myservice",
506+# description = "Check MyService",
507+# check_cmd = "check_http -w 2 -c 10 http://localhost"
508+# )
509+# nrpe_compat.add_check(
510+# "myservice_other",
511+# "Check for widget failures",
512+# check_cmd = "/srv/myapp/scripts/widget_check"
513+# )
514+# nrpe_compat.write()
515+#
516+# def config_changed():
517+# (...)
518+# update_nrpe_config()
519+#
520+# def nrpe_external_master_relation_changed():
521+# update_nrpe_config()
522+#
523+# def local_monitors_relation_changed():
524+# update_nrpe_config()
525+#
526+# 5. ln -s hooks.py nrpe-external-master-relation-changed
527+# ln -s hooks.py local-monitors-relation-changed
528+
529+
530+class CheckException(Exception):
531+ pass
532+
533+
534+class Check(object):
535+ shortname_re = '[A-Za-z0-9-_]+$'
536+ service_template = ("""
537+#---------------------------------------------------
538+# This file is Juju managed
539+#---------------------------------------------------
540+define service {{
541+ use active-service
542+ host_name {nagios_hostname}
543+ service_description {nagios_hostname}[{shortname}] """
544+ """{description}
545+ check_command check_nrpe!{command}
546+ servicegroups {nagios_servicegroup}
547+}}
548+""")
549+
550+ def __init__(self, shortname, description, check_cmd):
551+ super(Check, self).__init__()
552+ # XXX: could be better to calculate this from the service name
553+ if not re.match(self.shortname_re, shortname):
554+ raise CheckException("shortname must match {}".format(
555+ Check.shortname_re))
556+ self.shortname = shortname
557+ self.command = "check_{}".format(shortname)
558+ # Note: a set of invalid characters is defined by the
559+ # Nagios server config
560+ # The default is: illegal_object_name_chars=`~!$%^&*"|'<>?,()=
561+ self.description = description
562+ self.check_cmd = self._locate_cmd(check_cmd)
563+
564+ def _locate_cmd(self, check_cmd):
565+ search_path = (
566+ '/',
567+ os.path.join(os.environ['CHARM_DIR'],
568+ 'files/nrpe-external-master'),
569+ '/usr/lib/nagios/plugins',
570+ )
571+ parts = shlex.split(check_cmd)
572+ for path in search_path:
573+ if os.path.exists(os.path.join(path, parts[0])):
574+ command = os.path.join(path, parts[0])
575+ if len(parts) > 1:
576+ command += " " + " ".join(parts[1:])
577+ return command
578+ log('Check command not found: {}'.format(parts[0]))
579+ return ''
580+
581+ def write(self, nagios_context, hostname):
582+ nrpe_check_file = '/etc/nagios/nrpe.d/{}.cfg'.format(
583+ self.command)
584+ with open(nrpe_check_file, 'w') as nrpe_check_config:
585+ nrpe_check_config.write("# check {}\n".format(self.shortname))
586+ nrpe_check_config.write("command[{}]={}\n".format(
587+ self.command, self.check_cmd))
588+
589+ if not os.path.exists(NRPE.nagios_exportdir):
590+ log('Not writing service config as {} is not accessible'.format(
591+ NRPE.nagios_exportdir))
592+ else:
593+ self.write_service_config(nagios_context, hostname)
594+
595+ def write_service_config(self, nagios_context, hostname):
596+ for f in os.listdir(NRPE.nagios_exportdir):
597+ if re.search('.*{}.cfg'.format(self.command), f):
598+ os.remove(os.path.join(NRPE.nagios_exportdir, f))
599+
600+ templ_vars = {
601+ 'nagios_hostname': hostname,
602+ 'nagios_servicegroup': nagios_context,
603+ 'description': self.description,
604+ 'shortname': self.shortname,
605+ 'command': self.command,
606+ }
607+ nrpe_service_text = Check.service_template.format(**templ_vars)
608+ nrpe_service_file = '{}/service__{}_{}.cfg'.format(
609+ NRPE.nagios_exportdir, hostname, self.command)
610+ with open(nrpe_service_file, 'w') as nrpe_service_config:
611+ nrpe_service_config.write(str(nrpe_service_text))
612+
613+ def run(self):
614+ subprocess.call(self.check_cmd)
615+
616+
617+class NRPE(object):
618+ nagios_logdir = '/var/log/nagios'
619+ nagios_exportdir = '/var/lib/nagios/export'
620+ nrpe_confdir = '/etc/nagios/nrpe.d'
621+
622+ def __init__(self):
623+ super(NRPE, self).__init__()
624+ self.config = config()
625+ self.nagios_context = self.config['nagios_context']
626+ self.unit_name = local_unit().replace('/', '-')
627+ self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
628+ self.checks = []
629+
630+ def add_check(self, *args, **kwargs):
631+ self.checks.append(Check(*args, **kwargs))
632+
633+ def write(self):
634+ try:
635+ nagios_uid = pwd.getpwnam('nagios').pw_uid
636+ nagios_gid = grp.getgrnam('nagios').gr_gid
637+ except:
638+ log("Nagios user not set up, nrpe checks not updated")
639+ return
640+
641+ if not os.path.exists(NRPE.nagios_logdir):
642+ os.mkdir(NRPE.nagios_logdir)
643+ os.chown(NRPE.nagios_logdir, nagios_uid, nagios_gid)
644+
645+ nrpe_monitors = {}
646+ monitors = {"monitors": {"remote": {"nrpe": nrpe_monitors}}}
647+ for nrpecheck in self.checks:
648+ nrpecheck.write(self.nagios_context, self.hostname)
649+ nrpe_monitors[nrpecheck.shortname] = {
650+ "command": nrpecheck.command,
651+ }
652+
653+ service('restart', 'nagios-nrpe-server')
654+
655+ for rid in relation_ids("local-monitors"):
656+ relation_set(relation_id=rid, monitors=yaml.dump(monitors))
657
658=== added file 'hooks/charmhelpers/contrib/charmsupport/volumes.py'
659--- hooks/charmhelpers/contrib/charmsupport/volumes.py 1970-01-01 00:00:00 +0000
660+++ hooks/charmhelpers/contrib/charmsupport/volumes.py 2013-10-10 22:34:35 +0000
661@@ -0,0 +1,156 @@
662+'''
663+Functions for managing volumes in juju units. One volume is supported per unit.
664+Subordinates may have their own storage, provided it is on its own partition.
665+
666+Configuration stanzas:
667+ volume-ephemeral:
668+ type: boolean
669+ default: true
670+ description: >
671+ If false, a volume is mounted as specified in "volume-map"
672+ If true, ephemeral storage will be used, meaning that log data
673+ will only exist as long as the machine. YOU HAVE BEEN WARNED.
674+ volume-map:
675+ type: string
676+ default: {}
677+ description: >
678+ YAML map of units to device names, e.g:
679+ "{ rsyslog/0: /dev/vdb, rsyslog/1: /dev/vdb }"
680+ Service units will raise a configure-error if volume-ephemeral
681+ is 'true' and no volume-map value is set. Use 'juju set' to set a
682+ value and 'juju resolved' to complete configuration.
683+
684+Usage:
685+ from charmsupport.volumes import configure_volume, VolumeConfigurationError
686+ from charmsupport.hookenv import log, ERROR
687+ def post_mount_hook():
688+ stop_service('myservice')
689+ def post_mount_hook():
690+ start_service('myservice')
691+
692+ if __name__ == '__main__':
693+ try:
694+ configure_volume(before_change=pre_mount_hook,
695+ after_change=post_mount_hook)
696+ except VolumeConfigurationError:
697+ log('Storage could not be configured', ERROR)
698+'''
699+
700+# XXX: Known limitations
701+# - fstab is neither consulted nor updated
702+
703+import os
704+from charmhelpers.core import hookenv
705+from charmhelpers.core import host
706+import yaml
707+
708+
709+MOUNT_BASE = '/srv/juju/volumes'
710+
711+
712+class VolumeConfigurationError(Exception):
713+ '''Volume configuration data is missing or invalid'''
714+ pass
715+
716+
717+def get_config():
718+ '''Gather and sanity-check volume configuration data'''
719+ volume_config = {}
720+ config = hookenv.config()
721+
722+ errors = False
723+
724+ if config.get('volume-ephemeral') in (True, 'True', 'true', 'Yes', 'yes'):
725+ volume_config['ephemeral'] = True
726+ else:
727+ volume_config['ephemeral'] = False
728+
729+ try:
730+ volume_map = yaml.safe_load(config.get('volume-map', '{}'))
731+ except yaml.YAMLError as e:
732+ hookenv.log("Error parsing YAML volume-map: {}".format(e),
733+ hookenv.ERROR)
734+ errors = True
735+ if volume_map is None:
736+ # probably an empty string
737+ volume_map = {}
738+ elif not isinstance(volume_map, dict):
739+ hookenv.log("Volume-map should be a dictionary, not {}".format(
740+ type(volume_map)))
741+ errors = True
742+
743+ volume_config['device'] = volume_map.get(os.environ['JUJU_UNIT_NAME'])
744+ if volume_config['device'] and volume_config['ephemeral']:
745+ # asked for ephemeral storage but also defined a volume ID
746+ hookenv.log('A volume is defined for this unit, but ephemeral '
747+ 'storage was requested', hookenv.ERROR)
748+ errors = True
749+ elif not volume_config['device'] and not volume_config['ephemeral']:
750+ # asked for permanent storage but did not define volume ID
751+ hookenv.log('Ephemeral storage was requested, but there is no volume '
752+ 'defined for this unit.', hookenv.ERROR)
753+ errors = True
754+
755+ unit_mount_name = hookenv.local_unit().replace('/', '-')
756+ volume_config['mountpoint'] = os.path.join(MOUNT_BASE, unit_mount_name)
757+
758+ if errors:
759+ return None
760+ return volume_config
761+
762+
763+def mount_volume(config):
764+ if os.path.exists(config['mountpoint']):
765+ if not os.path.isdir(config['mountpoint']):
766+ hookenv.log('Not a directory: {}'.format(config['mountpoint']))
767+ raise VolumeConfigurationError()
768+ else:
769+ host.mkdir(config['mountpoint'])
770+ if os.path.ismount(config['mountpoint']):
771+ unmount_volume(config)
772+ if not host.mount(config['device'], config['mountpoint'], persist=True):
773+ raise VolumeConfigurationError()
774+
775+
776+def unmount_volume(config):
777+ if os.path.ismount(config['mountpoint']):
778+ if not host.umount(config['mountpoint'], persist=True):
779+ raise VolumeConfigurationError()
780+
781+
782+def managed_mounts():
783+ '''List of all mounted managed volumes'''
784+ return filter(lambda mount: mount[0].startswith(MOUNT_BASE), host.mounts())
785+
786+
787+def configure_volume(before_change=lambda: None, after_change=lambda: None):
788+ '''Set up storage (or don't) according to the charm's volume configuration.
789+ Returns the mount point or "ephemeral". before_change and after_change
790+ are optional functions to be called if the volume configuration changes.
791+ '''
792+
793+ config = get_config()
794+ if not config:
795+ hookenv.log('Failed to read volume configuration', hookenv.CRITICAL)
796+ raise VolumeConfigurationError()
797+
798+ if config['ephemeral']:
799+ if os.path.ismount(config['mountpoint']):
800+ before_change()
801+ unmount_volume(config)
802+ after_change()
803+ return 'ephemeral'
804+ else:
805+ # persistent storage
806+ if os.path.ismount(config['mountpoint']):
807+ mounts = dict(managed_mounts())
808+ if mounts.get(config['mountpoint']) != config['device']:
809+ before_change()
810+ unmount_volume(config)
811+ mount_volume(config)
812+ after_change()
813+ else:
814+ before_change()
815+ mount_volume(config)
816+ after_change()
817+ return config['mountpoint']
818
819=== added directory 'hooks/charmhelpers/core'
820=== added file 'hooks/charmhelpers/core/__init__.py'
821=== added file 'hooks/charmhelpers/core/hookenv.py'
822--- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000
823+++ hooks/charmhelpers/core/hookenv.py 2013-10-10 22:34:35 +0000
824@@ -0,0 +1,340 @@
825+"Interactions with the Juju environment"
826+# Copyright 2013 Canonical Ltd.
827+#
828+# Authors:
829+# Charm Helpers Developers <juju@lists.ubuntu.com>
830+
831+import os
832+import json
833+import yaml
834+import subprocess
835+import UserDict
836+
837+CRITICAL = "CRITICAL"
838+ERROR = "ERROR"
839+WARNING = "WARNING"
840+INFO = "INFO"
841+DEBUG = "DEBUG"
842+MARKER = object()
843+
844+cache = {}
845+
846+
847+def cached(func):
848+ ''' Cache return values for multiple executions of func + args
849+
850+ For example:
851+
852+ @cached
853+ def unit_get(attribute):
854+ pass
855+
856+ unit_get('test')
857+
858+ will cache the result of unit_get + 'test' for future calls.
859+ '''
860+ def wrapper(*args, **kwargs):
861+ global cache
862+ key = str((func, args, kwargs))
863+ try:
864+ return cache[key]
865+ except KeyError:
866+ res = func(*args, **kwargs)
867+ cache[key] = res
868+ return res
869+ return wrapper
870+
871+
872+def flush(key):
873+ ''' Flushes any entries from function cache where the
874+ key is found in the function+args '''
875+ flush_list = []
876+ for item in cache:
877+ if key in item:
878+ flush_list.append(item)
879+ for item in flush_list:
880+ del cache[item]
881+
882+
883+def log(message, level=None):
884+ "Write a message to the juju log"
885+ command = ['juju-log']
886+ if level:
887+ command += ['-l', level]
888+ command += [message]
889+ subprocess.call(command)
890+
891+
892+class Serializable(UserDict.IterableUserDict):
893+ "Wrapper, an object that can be serialized to yaml or json"
894+
895+ def __init__(self, obj):
896+ # wrap the object
897+ UserDict.IterableUserDict.__init__(self)
898+ self.data = obj
899+
900+ def __getattr__(self, attr):
901+ # See if this object has attribute.
902+ if attr in ("json", "yaml", "data"):
903+ return self.__dict__[attr]
904+ # Check for attribute in wrapped object.
905+ got = getattr(self.data, attr, MARKER)
906+ if got is not MARKER:
907+ return got
908+ # Proxy to the wrapped object via dict interface.
909+ try:
910+ return self.data[attr]
911+ except KeyError:
912+ raise AttributeError(attr)
913+
914+ def __getstate__(self):
915+ # Pickle as a standard dictionary.
916+ return self.data
917+
918+ def __setstate__(self, state):
919+ # Unpickle into our wrapper.
920+ self.data = state
921+
922+ def json(self):
923+ "Serialize the object to json"
924+ return json.dumps(self.data)
925+
926+ def yaml(self):
927+ "Serialize the object to yaml"
928+ return yaml.dump(self.data)
929+
930+
931+def execution_environment():
932+ """A convenient bundling of the current execution context"""
933+ context = {}
934+ context['conf'] = config()
935+ if relation_id():
936+ context['reltype'] = relation_type()
937+ context['relid'] = relation_id()
938+ context['rel'] = relation_get()
939+ context['unit'] = local_unit()
940+ context['rels'] = relations()
941+ context['env'] = os.environ
942+ return context
943+
944+
945+def in_relation_hook():
946+ "Determine whether we're running in a relation hook"
947+ return 'JUJU_RELATION' in os.environ
948+
949+
950+def relation_type():
951+ "The scope for the current relation hook"
952+ return os.environ.get('JUJU_RELATION', None)
953+
954+
955+def relation_id():
956+ "The relation ID for the current relation hook"
957+ return os.environ.get('JUJU_RELATION_ID', None)
958+
959+
960+def local_unit():
961+ "Local unit ID"
962+ return os.environ['JUJU_UNIT_NAME']
963+
964+
965+def remote_unit():
966+ "The remote unit for the current relation hook"
967+ return os.environ['JUJU_REMOTE_UNIT']
968+
969+
970+def service_name():
971+ "The name of the service group this unit belongs to"
972+ return local_unit().split('/')[0]
973+
974+
975+@cached
976+def config(scope=None):
977+ "Juju charm configuration"
978+ config_cmd_line = ['config-get']
979+ if scope is not None:
980+ config_cmd_line.append(scope)
981+ config_cmd_line.append('--format=json')
982+ try:
983+ return json.loads(subprocess.check_output(config_cmd_line))
984+ except ValueError:
985+ return None
986+
987+
988+@cached
989+def relation_get(attribute=None, unit=None, rid=None):
990+ _args = ['relation-get', '--format=json']
991+ if rid:
992+ _args.append('-r')
993+ _args.append(rid)
994+ _args.append(attribute or '-')
995+ if unit:
996+ _args.append(unit)
997+ try:
998+ return json.loads(subprocess.check_output(_args))
999+ except ValueError:
1000+ return None
1001+
1002+
1003+def relation_set(relation_id=None, relation_settings={}, **kwargs):
1004+ relation_cmd_line = ['relation-set']
1005+ if relation_id is not None:
1006+ relation_cmd_line.extend(('-r', relation_id))
1007+ for k, v in (relation_settings.items() + kwargs.items()):
1008+ if v is None:
1009+ relation_cmd_line.append('{}='.format(k))
1010+ else:
1011+ relation_cmd_line.append('{}={}'.format(k, v))
1012+ subprocess.check_call(relation_cmd_line)
1013+ # Flush cache of any relation-gets for local unit
1014+ flush(local_unit())
1015+
1016+
1017+@cached
1018+def relation_ids(reltype=None):
1019+ "A list of relation_ids"
1020+ reltype = reltype or relation_type()
1021+ relid_cmd_line = ['relation-ids', '--format=json']
1022+ if reltype is not None:
1023+ relid_cmd_line.append(reltype)
1024+ return json.loads(subprocess.check_output(relid_cmd_line)) or []
1025+ return []
1026+
1027+
1028+@cached
1029+def related_units(relid=None):
1030+ "A list of related units"
1031+ relid = relid or relation_id()
1032+ units_cmd_line = ['relation-list', '--format=json']
1033+ if relid is not None:
1034+ units_cmd_line.extend(('-r', relid))
1035+ return json.loads(subprocess.check_output(units_cmd_line)) or []
1036+
1037+
1038+@cached
1039+def relation_for_unit(unit=None, rid=None):
1040+ "Get the json representation of a unit's relation"
1041+ unit = unit or remote_unit()
1042+ relation = relation_get(unit=unit, rid=rid)
1043+ for key in relation:
1044+ if key.endswith('-list'):
1045+ relation[key] = relation[key].split()
1046+ relation['__unit__'] = unit
1047+ return relation
1048+
1049+
1050+@cached
1051+def relations_for_id(relid=None):
1052+ "Get relations of a specific relation ID"
1053+ relation_data = []
1054+ relid = relid or relation_ids()
1055+ for unit in related_units(relid):
1056+ unit_data = relation_for_unit(unit, relid)
1057+ unit_data['__relid__'] = relid
1058+ relation_data.append(unit_data)
1059+ return relation_data
1060+
1061+
1062+@cached
1063+def relations_of_type(reltype=None):
1064+ "Get relations of a specific type"
1065+ relation_data = []
1066+ reltype = reltype or relation_type()
1067+ for relid in relation_ids(reltype):
1068+ for relation in relations_for_id(relid):
1069+ relation['__relid__'] = relid
1070+ relation_data.append(relation)
1071+ return relation_data
1072+
1073+
1074+@cached
1075+def relation_types():
1076+ "Get a list of relation types supported by this charm"
1077+ charmdir = os.environ.get('CHARM_DIR', '')
1078+ mdf = open(os.path.join(charmdir, 'metadata.yaml'))
1079+ md = yaml.safe_load(mdf)
1080+ rel_types = []
1081+ for key in ('provides', 'requires', 'peers'):
1082+ section = md.get(key)
1083+ if section:
1084+ rel_types.extend(section.keys())
1085+ mdf.close()
1086+ return rel_types
1087+
1088+
1089+@cached
1090+def relations():
1091+ rels = {}
1092+ for reltype in relation_types():
1093+ relids = {}
1094+ for relid in relation_ids(reltype):
1095+ units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
1096+ for unit in related_units(relid):
1097+ reldata = relation_get(unit=unit, rid=relid)
1098+ units[unit] = reldata
1099+ relids[relid] = units
1100+ rels[reltype] = relids
1101+ return rels
1102+
1103+
1104+def open_port(port, protocol="TCP"):
1105+ "Open a service network port"
1106+ _args = ['open-port']
1107+ _args.append('{}/{}'.format(port, protocol))
1108+ subprocess.check_call(_args)
1109+
1110+
1111+def close_port(port, protocol="TCP"):
1112+ "Close a service network port"
1113+ _args = ['close-port']
1114+ _args.append('{}/{}'.format(port, protocol))
1115+ subprocess.check_call(_args)
1116+
1117+
1118+@cached
1119+def unit_get(attribute):
1120+ _args = ['unit-get', '--format=json', attribute]
1121+ try:
1122+ return json.loads(subprocess.check_output(_args))
1123+ except ValueError:
1124+ return None
1125+
1126+
1127+def unit_private_ip():
1128+ return unit_get('private-address')
1129+
1130+
1131+class UnregisteredHookError(Exception):
1132+ pass
1133+
1134+
1135+class Hooks(object):
1136+ def __init__(self):
1137+ super(Hooks, self).__init__()
1138+ self._hooks = {}
1139+
1140+ def register(self, name, function):
1141+ self._hooks[name] = function
1142+
1143+ def execute(self, args):
1144+ hook_name = os.path.basename(args[0])
1145+ if hook_name in self._hooks:
1146+ self._hooks[hook_name]()
1147+ else:
1148+ raise UnregisteredHookError(hook_name)
1149+
1150+ def hook(self, *hook_names):
1151+ def wrapper(decorated):
1152+ for hook_name in hook_names:
1153+ self.register(hook_name, decorated)
1154+ else:
1155+ self.register(decorated.__name__, decorated)
1156+ if '_' in decorated.__name__:
1157+ self.register(
1158+ decorated.__name__.replace('_', '-'), decorated)
1159+ return decorated
1160+ return wrapper
1161+
1162+
1163+def charm_dir():
1164+ return os.environ.get('CHARM_DIR')
1165
1166=== added file 'hooks/charmhelpers/core/host.py'
1167--- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000
1168+++ hooks/charmhelpers/core/host.py 2013-10-10 22:34:35 +0000
1169@@ -0,0 +1,239 @@
1170+"""Tools for working with the host system"""
1171+# Copyright 2012 Canonical Ltd.
1172+#
1173+# Authors:
1174+# Nick Moffitt <nick.moffitt@canonical.com>
1175+# Matthew Wedgwood <matthew.wedgwood@canonical.com>
1176+
1177+import os
1178+import pwd
1179+import grp
1180+import random
1181+import string
1182+import subprocess
1183+import hashlib
1184+
1185+from collections import OrderedDict
1186+
1187+from hookenv import log
1188+
1189+
1190+def service_start(service_name):
1191+ service('start', service_name)
1192+
1193+
1194+def service_stop(service_name):
1195+ service('stop', service_name)
1196+
1197+
1198+def service_restart(service_name):
1199+ service('restart', service_name)
1200+
1201+
1202+def service_reload(service_name, restart_on_failure=False):
1203+ if not service('reload', service_name) and restart_on_failure:
1204+ service('restart', service_name)
1205+
1206+
1207+def service(action, service_name):
1208+ cmd = ['service', service_name, action]
1209+ return subprocess.call(cmd) == 0
1210+
1211+
1212+def service_running(service):
1213+ try:
1214+ output = subprocess.check_output(['service', service, 'status'])
1215+ except subprocess.CalledProcessError:
1216+ return False
1217+ else:
1218+ if ("start/running" in output or "is running" in output):
1219+ return True
1220+ else:
1221+ return False
1222+
1223+
1224+def adduser(username, password=None, shell='/bin/bash', system_user=False):
1225+ """Add a user"""
1226+ try:
1227+ user_info = pwd.getpwnam(username)
1228+ log('user {0} already exists!'.format(username))
1229+ except KeyError:
1230+ log('creating user {0}'.format(username))
1231+ cmd = ['useradd']
1232+ if system_user or password is None:
1233+ cmd.append('--system')
1234+ else:
1235+ cmd.extend([
1236+ '--create-home',
1237+ '--shell', shell,
1238+ '--password', password,
1239+ ])
1240+ cmd.append(username)
1241+ subprocess.check_call(cmd)
1242+ user_info = pwd.getpwnam(username)
1243+ return user_info
1244+
1245+
1246+def add_user_to_group(username, group):
1247+ """Add a user to a group"""
1248+ cmd = [
1249+ 'gpasswd', '-a',
1250+ username,
1251+ group
1252+ ]
1253+ log("Adding user {} to group {}".format(username, group))
1254+ subprocess.check_call(cmd)
1255+
1256+
1257+def rsync(from_path, to_path, flags='-r', options=None):
1258+ """Replicate the contents of a path"""
1259+ options = options or ['--delete', '--executability']
1260+ cmd = ['/usr/bin/rsync', flags]
1261+ cmd.extend(options)
1262+ cmd.append(from_path)
1263+ cmd.append(to_path)
1264+ log(" ".join(cmd))
1265+ return subprocess.check_output(cmd).strip()
1266+
1267+
1268+def symlink(source, destination):
1269+ """Create a symbolic link"""
1270+ log("Symlinking {} as {}".format(source, destination))
1271+ cmd = [
1272+ 'ln',
1273+ '-sf',
1274+ source,
1275+ destination,
1276+ ]
1277+ subprocess.check_call(cmd)
1278+
1279+
1280+def mkdir(path, owner='root', group='root', perms=0555, force=False):
1281+ """Create a directory"""
1282+ log("Making dir {} {}:{} {:o}".format(path, owner, group,
1283+ perms))
1284+ uid = pwd.getpwnam(owner).pw_uid
1285+ gid = grp.getgrnam(group).gr_gid
1286+ realpath = os.path.abspath(path)
1287+ if os.path.exists(realpath):
1288+ if force and not os.path.isdir(realpath):
1289+ log("Removing non-directory file {} prior to mkdir()".format(path))
1290+ os.unlink(realpath)
1291+ else:
1292+ os.makedirs(realpath, perms)
1293+ os.chown(realpath, uid, gid)
1294+
1295+
1296+def write_file(path, content, owner='root', group='root', perms=0444):
1297+ """Create or overwrite a file with the contents of a string"""
1298+ log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
1299+ uid = pwd.getpwnam(owner).pw_uid
1300+ gid = grp.getgrnam(group).gr_gid
1301+ with open(path, 'w') as target:
1302+ os.fchown(target.fileno(), uid, gid)
1303+ os.fchmod(target.fileno(), perms)
1304+ target.write(content)
1305+
1306+
1307+def mount(device, mountpoint, options=None, persist=False):
1308+ '''Mount a filesystem'''
1309+ cmd_args = ['mount']
1310+ if options is not None:
1311+ cmd_args.extend(['-o', options])
1312+ cmd_args.extend([device, mountpoint])
1313+ try:
1314+ subprocess.check_output(cmd_args)
1315+ except subprocess.CalledProcessError, e:
1316+ log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
1317+ return False
1318+ if persist:
1319+ # TODO: update fstab
1320+ pass
1321+ return True
1322+
1323+
1324+def umount(mountpoint, persist=False):
1325+ '''Unmount a filesystem'''
1326+ cmd_args = ['umount', mountpoint]
1327+ try:
1328+ subprocess.check_output(cmd_args)
1329+ except subprocess.CalledProcessError, e:
1330+ log('Error unmounting {}\n{}'.format(mountpoint, e.output))
1331+ return False
1332+ if persist:
1333+ # TODO: update fstab
1334+ pass
1335+ return True
1336+
1337+
1338+def mounts():
1339+ '''List of all mounted volumes as [[mountpoint,device],[...]]'''
1340+ with open('/proc/mounts') as f:
1341+ # [['/mount/point','/dev/path'],[...]]
1342+ system_mounts = [m[1::-1] for m in [l.strip().split()
1343+ for l in f.readlines()]]
1344+ return system_mounts
1345+
1346+
1347+def file_hash(path):
1348+ ''' Generate a md5 hash of the contents of 'path' or None if not found '''
1349+ if os.path.exists(path):
1350+ h = hashlib.md5()
1351+ with open(path, 'r') as source:
1352+ h.update(source.read()) # IGNORE:E1101 - it does have update
1353+ return h.hexdigest()
1354+ else:
1355+ return None
1356+
1357+
1358+def restart_on_change(restart_map):
1359+ ''' Restart services based on configuration files changing
1360+
1361+ This function is used a decorator, for example
1362+
1363+ @restart_on_change({
1364+ '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
1365+ })
1366+ def ceph_client_changed():
1367+ ...
1368+
1369+ In this example, the cinder-api and cinder-volume services
1370+ would be restarted if /etc/ceph/ceph.conf is changed by the
1371+ ceph_client_changed function.
1372+ '''
1373+ def wrap(f):
1374+ def wrapped_f(*args):
1375+ checksums = {}
1376+ for path in restart_map:
1377+ checksums[path] = file_hash(path)
1378+ f(*args)
1379+ restarts = []
1380+ for path in restart_map:
1381+ if checksums[path] != file_hash(path):
1382+ restarts += restart_map[path]
1383+ for service_name in list(OrderedDict.fromkeys(restarts)):
1384+ service('restart', service_name)
1385+ return wrapped_f
1386+ return wrap
1387+
1388+
1389+def lsb_release():
1390+ '''Return /etc/lsb-release in a dict'''
1391+ d = {}
1392+ with open('/etc/lsb-release', 'r') as lsb:
1393+ for l in lsb:
1394+ k, v = l.split('=')
1395+ d[k.strip()] = v.strip()
1396+ return d
1397+
1398+
1399+def pwgen(length=None):
1400+ '''Generate a random password.'''
1401+ if length is None:
1402+ length = random.choice(range(35, 45))
1403+ alphanumeric_chars = [
1404+ l for l in (string.letters + string.digits)
1405+ if l not in 'l0QD1vAEIOUaeiou']
1406+ random_chars = [
1407+ random.choice(alphanumeric_chars) for _ in range(length)]
1408+ return(''.join(random_chars))
1409
1410=== added directory 'hooks/charmhelpers/fetch'
1411=== added file 'hooks/charmhelpers/fetch/__init__.py'
1412--- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000
1413+++ hooks/charmhelpers/fetch/__init__.py 2013-10-10 22:34:35 +0000
1414@@ -0,0 +1,209 @@
1415+import importlib
1416+from yaml import safe_load
1417+from charmhelpers.core.host import (
1418+ lsb_release
1419+)
1420+from urlparse import (
1421+ urlparse,
1422+ urlunparse,
1423+)
1424+import subprocess
1425+from charmhelpers.core.hookenv import (
1426+ config,
1427+ log,
1428+)
1429+import apt_pkg
1430+
1431+CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
1432+deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
1433+"""
1434+PROPOSED_POCKET = """# Proposed
1435+deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
1436+"""
1437+
1438+
1439+def filter_installed_packages(packages):
1440+ """Returns a list of packages that require installation"""
1441+ apt_pkg.init()
1442+ cache = apt_pkg.Cache()
1443+ _pkgs = []
1444+ for package in packages:
1445+ try:
1446+ p = cache[package]
1447+ p.current_ver or _pkgs.append(package)
1448+ except KeyError:
1449+ log('Package {} has no installation candidate.'.format(package),
1450+ level='WARNING')
1451+ _pkgs.append(package)
1452+ return _pkgs
1453+
1454+
1455+def apt_install(packages, options=None, fatal=False):
1456+ """Install one or more packages"""
1457+ options = options or []
1458+ cmd = ['apt-get', '-y']
1459+ cmd.extend(options)
1460+ cmd.append('install')
1461+ if isinstance(packages, basestring):
1462+ cmd.append(packages)
1463+ else:
1464+ cmd.extend(packages)
1465+ log("Installing {} with options: {}".format(packages,
1466+ options))
1467+ if fatal:
1468+ subprocess.check_call(cmd)
1469+ else:
1470+ subprocess.call(cmd)
1471+
1472+
1473+def apt_update(fatal=False):
1474+ """Update local apt cache"""
1475+ cmd = ['apt-get', 'update']
1476+ if fatal:
1477+ subprocess.check_call(cmd)
1478+ else:
1479+ subprocess.call(cmd)
1480+
1481+
1482+def apt_purge(packages, fatal=False):
1483+ """Purge one or more packages"""
1484+ cmd = ['apt-get', '-y', 'purge']
1485+ if isinstance(packages, basestring):
1486+ cmd.append(packages)
1487+ else:
1488+ cmd.extend(packages)
1489+ log("Purging {}".format(packages))
1490+ if fatal:
1491+ subprocess.check_call(cmd)
1492+ else:
1493+ subprocess.call(cmd)
1494+
1495+
1496+def add_source(source, key=None):
1497+ if ((source.startswith('ppa:') or
1498+ source.startswith('http:'))):
1499+ subprocess.check_call(['add-apt-repository', '--yes', source])
1500+ elif source.startswith('cloud:'):
1501+ apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
1502+ fatal=True)
1503+ pocket = source.split(':')[-1]
1504+ with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
1505+ apt.write(CLOUD_ARCHIVE.format(pocket))
1506+ elif source == 'proposed':
1507+ release = lsb_release()['DISTRIB_CODENAME']
1508+ with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
1509+ apt.write(PROPOSED_POCKET.format(release))
1510+ if key:
1511+ subprocess.check_call(['apt-key', 'import', key])
1512+
1513+
1514+class SourceConfigError(Exception):
1515+ pass
1516+
1517+
1518+def configure_sources(update=False,
1519+ sources_var='install_sources',
1520+ keys_var='install_keys'):
1521+ """
1522+ Configure multiple sources from charm configuration
1523+
1524+ Example config:
1525+ install_sources:
1526+ - "ppa:foo"
1527+ - "http://example.com/repo precise main"
1528+ install_keys:
1529+ - null
1530+ - "a1b2c3d4"
1531+
1532+ Note that 'null' (a.k.a. None) should not be quoted.
1533+ """
1534+ sources = safe_load(config(sources_var))
1535+ keys = safe_load(config(keys_var))
1536+ if isinstance(sources, basestring) and isinstance(keys, basestring):
1537+ add_source(sources, keys)
1538+ else:
1539+ if not len(sources) == len(keys):
1540+ msg = 'Install sources and keys lists are different lengths'
1541+ raise SourceConfigError(msg)
1542+ for src_num in range(len(sources)):
1543+ add_source(sources[src_num], keys[src_num])
1544+ if update:
1545+ apt_update(fatal=True)
1546+
1547+# The order of this list is very important. Handlers should be listed in from
1548+# least- to most-specific URL matching.
1549+FETCH_HANDLERS = (
1550+ 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
1551+ 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
1552+)
1553+
1554+
1555+class UnhandledSource(Exception):
1556+ pass
1557+
1558+
1559+def install_remote(source):
1560+ """
1561+ Install a file tree from a remote source
1562+
1563+ The specified source should be a url of the form:
1564+ scheme://[host]/path[#[option=value][&...]]
1565+
1566+ Schemes supported are based on this modules submodules
1567+ Options supported are submodule-specific"""
1568+ # We ONLY check for True here because can_handle may return a string
1569+ # explaining why it can't handle a given source.
1570+ handlers = [h for h in plugins() if h.can_handle(source) is True]
1571+ installed_to = None
1572+ for handler in handlers:
1573+ try:
1574+ installed_to = handler.install(source)
1575+ except UnhandledSource:
1576+ pass
1577+ if not installed_to:
1578+ raise UnhandledSource("No handler found for source {}".format(source))
1579+ return installed_to
1580+
1581+
1582+def install_from_config(config_var_name):
1583+ charm_config = config()
1584+ source = charm_config[config_var_name]
1585+ return install_remote(source)
1586+
1587+
1588+class BaseFetchHandler(object):
1589+ """Base class for FetchHandler implementations in fetch plugins"""
1590+ def can_handle(self, source):
1591+ """Returns True if the source can be handled. Otherwise returns
1592+ a string explaining why it cannot"""
1593+ return "Wrong source type"
1594+
1595+ def install(self, source):
1596+ """Try to download and unpack the source. Return the path to the
1597+ unpacked files or raise UnhandledSource."""
1598+ raise UnhandledSource("Wrong source type {}".format(source))
1599+
1600+ def parse_url(self, url):
1601+ return urlparse(url)
1602+
1603+ def base_url(self, url):
1604+ """Return url without querystring or fragment"""
1605+ parts = list(self.parse_url(url))
1606+ parts[4:] = ['' for i in parts[4:]]
1607+ return urlunparse(parts)
1608+
1609+
1610+def plugins(fetch_handlers=None):
1611+ if not fetch_handlers:
1612+ fetch_handlers = FETCH_HANDLERS
1613+ plugin_list = []
1614+ for handler_name in fetch_handlers:
1615+ package, classname = handler_name.rsplit('.', 1)
1616+ try:
1617+ handler_class = getattr(importlib.import_module(package), classname)
1618+ plugin_list.append(handler_class())
1619+ except (ImportError, AttributeError):
1620+            # Skip missing plugins so that they can be omitted from
1621+ # installation if desired
1622+ log("FetchHandler {} not found, skipping plugin".format(handler_name))
1623+ return plugin_list
1624
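A short usage sketch (illustrative, not in the branch) of the handler dispatch above: install_remote() walks FETCH_HANDLERS, asks each handler's can_handle(), and uses the first one that returns True; the URL below is a placeholder.

    # Hypothetical example; the URL is a placeholder.
    from charmhelpers.fetch import install_remote, UnhandledSource

    try:
        path = install_remote('http://example.com/payload.tar.gz')
        print 'unpacked to %s' % path
    except UnhandledSource as e:
        print 'no fetch handler accepted the source: %s' % e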
1625=== added file 'hooks/charmhelpers/fetch/archiveurl.py'
1626--- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000
1627+++ hooks/charmhelpers/fetch/archiveurl.py 2013-10-10 22:34:35 +0000
1628@@ -0,0 +1,48 @@
1629+import os
1630+import urllib2
1631+from charmhelpers.fetch import (
1632+ BaseFetchHandler,
1633+ UnhandledSource
1634+)
1635+from charmhelpers.payload.archive import (
1636+ get_archive_handler,
1637+ extract,
1638+)
1639+from charmhelpers.core.host import mkdir
1640+
1641+
1642+class ArchiveUrlFetchHandler(BaseFetchHandler):
1643+ """Handler for archives via generic URLs"""
1644+ def can_handle(self, source):
1645+ url_parts = self.parse_url(source)
1646+ if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
1647+ return "Wrong source type"
1648+ if get_archive_handler(self.base_url(source)):
1649+ return True
1650+ return False
1651+
1652+ def download(self, source, dest):
1653+        # propagate all exceptions
1654+ # URLError, OSError, etc
1655+ response = urllib2.urlopen(source)
1656+ try:
1657+ with open(dest, 'w') as dest_file:
1658+ dest_file.write(response.read())
1659+ except Exception as e:
1660+ if os.path.isfile(dest):
1661+ os.unlink(dest)
1662+ raise e
1663+
1664+ def install(self, source):
1665+ url_parts = self.parse_url(source)
1666+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
1667+ if not os.path.exists(dest_dir):
1668+ mkdir(dest_dir, perms=0755)
1669+ dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
1670+ try:
1671+ self.download(source, dld_file)
1672+ except urllib2.URLError as e:
1673+ raise UnhandledSource(e.reason)
1674+ except OSError as e:
1675+ raise UnhandledSource(e.strerror)
1676+ return extract(dld_file)
1677
1678=== added file 'hooks/charmhelpers/fetch/bzrurl.py'
1679--- hooks/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000
1680+++ hooks/charmhelpers/fetch/bzrurl.py 2013-10-10 22:34:35 +0000
1681@@ -0,0 +1,44 @@
1682+import os
1683+from bzrlib.branch import Branch
1684+from charmhelpers.fetch import (
1685+ BaseFetchHandler,
1686+ UnhandledSource
1687+)
1688+from charmhelpers.core.host import mkdir
1689+
1690+
1691+class BzrUrlFetchHandler(BaseFetchHandler):
1692+ """Handler for bazaar branches via generic and lp URLs"""
1693+ def can_handle(self, source):
1694+ url_parts = self.parse_url(source)
1695+ if url_parts.scheme not in ('bzr+ssh', 'lp'):
1696+ return False
1697+ else:
1698+ return True
1699+
1700+ def branch(self, source, dest):
1701+ url_parts = self.parse_url(source)
1702+ # If we use lp:branchname scheme we need to load plugins
1703+ if not self.can_handle(source):
1704+ raise UnhandledSource("Cannot handle {}".format(source))
1705+ if url_parts.scheme == "lp":
1706+ from bzrlib.plugin import load_plugins
1707+ load_plugins()
1708+ try:
1709+ remote_branch = Branch.open(source)
1710+ remote_branch.bzrdir.sprout(dest).open_branch()
1711+ except Exception as e:
1712+ raise e
1713+
1714+ def install(self, source):
1715+ url_parts = self.parse_url(source)
1716+ branch_name = url_parts.path.strip("/").split("/")[-1]
1717+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name)
1718+ if not os.path.exists(dest_dir):
1719+ mkdir(dest_dir, perms=0755)
1720+ try:
1721+ self.branch(source, dest_dir)
1722+ except OSError as e:
1723+ raise UnhandledSource(e.strerror)
1724+ return dest_dir
1725+
1726
1727=== modified file 'hooks/hooks.py'
1728--- hooks/hooks.py 2013-05-23 21:52:06 +0000
1729+++ hooks/hooks.py 2013-10-10 22:34:35 +0000
1730@@ -1,17 +1,31 @@
1731 #!/usr/bin/env python
1732
1733-import json
1734 import glob
1735 import os
1736-import random
1737 import re
1738 import socket
1739-import string
1740+import shutil
1741 import subprocess
1742 import sys
1743 import yaml
1744-import nrpe
1745-import time
1746+
1747+from itertools import izip, tee
1748+
1749+from charmhelpers.core.host import pwgen
1750+from charmhelpers.core.hookenv import (
1751+ log,
1752+ config as config_get,
1753+ relation_set,
1754+ relation_ids as get_relation_ids,
1755+ relations_of_type,
1756+ relations_for_id,
1757+ relation_id,
1758+ open_port,
1759+ close_port,
1760+ unit_get,
1761+ )
1762+from charmhelpers.fetch import apt_install
1763+from charmhelpers.contrib.charmsupport import nrpe
1764
1765
1766 ###############################################################################
1767@@ -20,92 +34,52 @@
1768 default_haproxy_config_dir = "/etc/haproxy"
1769 default_haproxy_config = "%s/haproxy.cfg" % default_haproxy_config_dir
1770 default_haproxy_service_config_dir = "/var/run/haproxy"
1771-HOOK_NAME = os.path.basename(sys.argv[0])
1772+service_affecting_packages = ['haproxy']
1773+
1774+frontend_only_options = [
1775+ "backlog",
1776+ "bind",
1777+ "capture cookie",
1778+ "capture request header",
1779+ "capture response header",
1780+ "clitimeout",
1781+ "default_backend",
1782+ "maxconn",
1783+ "monitor fail",
1784+ "monitor-net",
1785+ "monitor-uri",
1786+ "option accept-invalid-http-request",
1787+ "option clitcpka",
1788+ "option contstats",
1789+ "option dontlog-normal",
1790+ "option dontlognull",
1791+ "option http-use-proxy-header",
1792+ "option log-separate-errors",
1793+ "option logasap",
1794+ "option socket-stats",
1795+ "option tcp-smart-accept",
1796+ "rate-limit sessions",
1797+ "tcp-request content accept",
1798+ "tcp-request content reject",
1799+ "tcp-request inspect-delay",
1800+ "timeout client",
1801+ "timeout clitimeout",
1802+ "use_backend",
1803+ ]
1804+
1805
1806 ###############################################################################
1807 # Supporting functions
1808 ###############################################################################
1809
1810-def unit_get(*args):
1811- """Simple wrapper around unit-get, all arguments passed untouched"""
1812- get_args = ["unit-get"]
1813- get_args.extend(args)
1814- return subprocess.check_output(get_args)
1815-
1816-def juju_log(*args):
1817- """Simple wrapper around juju-log, all arguments are passed untouched"""
1818- log_args = ["juju-log"]
1819- log_args.extend(args)
1820- subprocess.call(log_args)
1821-
1822-#------------------------------------------------------------------------------
1823-# config_get: Returns a dictionary containing all of the config information
1824-# Optional parameter: scope
1825-# scope: limits the scope of the returned configuration to the
1826-# desired config item.
1827-#------------------------------------------------------------------------------
1828-def config_get(scope=None):
1829- try:
1830- config_cmd_line = ['config-get']
1831- if scope is not None:
1832- config_cmd_line.append(scope)
1833- config_cmd_line.append('--format=json')
1834- config_data = json.loads(subprocess.check_output(config_cmd_line))
1835- except Exception, e:
1836- subprocess.call(['juju-log', str(e)])
1837- config_data = None
1838- finally:
1839- return(config_data)
1840-
1841-
1842-#------------------------------------------------------------------------------
1843-# relation_get: Returns a dictionary containing the relation information
1844-# Optional parameters: scope, relation_id
1845-# scope: limits the scope of the returned data to the
1846-# desired item.
1847-# unit_name: limits the data ( and optionally the scope )
1848-# to the specified unit
1849-# relation_id: specify relation id for out of context usage.
1850-#------------------------------------------------------------------------------
1851-def relation_get(scope=None, unit_name=None, relation_id=None):
1852- try:
1853- relation_cmd_line = ['relation-get', '--format=json']
1854- if relation_id is not None:
1855- relation_cmd_line.extend(('-r', relation_id))
1856- if scope is not None:
1857- relation_cmd_line.append(scope)
1858- else:
1859- relation_cmd_line.append('')
1860- if unit_name is not None:
1861- relation_cmd_line.append(unit_name)
1862- relation_data = json.loads(subprocess.check_output(relation_cmd_line))
1863- except Exception, e:
1864- subprocess.call(['juju-log', str(e)])
1865- relation_data = None
1866- finally:
1867- return(relation_data)
1868-
1869-def relation_set(arguments, relation_id=None):
1870- """
1871- Wrapper around relation-set
1872- @param arguments: list of command line arguments
1873- @param relation_id: optional relation-id (passed to -r parameter) to use
1874- """
1875- set_args = ["relation-set"]
1876- if relation_id is not None:
1877- set_args.extend(["-r", str(relation_id)])
1878- set_args.extend(arguments)
1879- subprocess.check_call(set_args)
1880-
1881-#------------------------------------------------------------------------------
1882-# apt_get_install( package ): Installs a package
1883-#------------------------------------------------------------------------------
1884-def apt_get_install(packages=None):
1885- if packages is None:
1886- return(False)
1887- cmd_line = ['apt-get', '-y', 'install', '-qq']
1888- cmd_line.append(packages)
1889- return(subprocess.call(cmd_line))
1890+
1891+def ensure_package_status(packages, status):
1892+ if status in ['install', 'hold']:
1893+ selections = ''.join(['{} {}\n'.format(package, status)
1894+ for package in packages])
1895+ dpkg = subprocess.Popen(['dpkg', '--set-selections'],
1896+ stdin=subprocess.PIPE)
1897+ dpkg.communicate(input=selections)
1898
1899
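For illustration (not part of the diff), the two package_status values the helper above acts on; 'hold' pins the packages against upgrades via dpkg selections and 'install' clears the hold again:

    # Mirrors what config_changed() does below with the charm's
    # package_status option; other values are ignored by the helper.
    ensure_package_status(['haproxy'], 'hold')     # feeds "haproxy hold" to dpkg --set-selections
    ensure_package_status(['haproxy'], 'install')  # reverts to the normal 'install' selection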
1900 #------------------------------------------------------------------------------
1901@@ -113,8 +87,8 @@
1902 #------------------------------------------------------------------------------
1903 def enable_haproxy():
1904 default_haproxy = "/etc/default/haproxy"
1905- enabled_haproxy = \
1906- open(default_haproxy).read().replace('ENABLED=0', 'ENABLED=1')
1907+ with open(default_haproxy) as f:
1908+ enabled_haproxy = f.read().replace('ENABLED=0', 'ENABLED=1')
1909 with open(default_haproxy, 'w') as f:
1910 f.write(enabled_haproxy)
1911
1912@@ -137,8 +111,8 @@
1913 if config_data['global_quiet'] is True:
1914 haproxy_globals.append(" quiet")
1915 haproxy_globals.append(" spread-checks %d" %
1916- config_data['global_spread_checks'])
1917- return('\n'.join(haproxy_globals))
1918+ config_data['global_spread_checks'])
1919+ return '\n'.join(haproxy_globals)
1920
1921
1922 #------------------------------------------------------------------------------
1923@@ -157,7 +131,7 @@
1924 haproxy_defaults.append(" retries %d" % config_data['default_retries'])
1925 for timeout_item in default_timeouts:
1926 haproxy_defaults.append(" timeout %s" % timeout_item.strip())
1927- return('\n'.join(haproxy_defaults))
1928+ return '\n'.join(haproxy_defaults)
1929
1930
1931 #------------------------------------------------------------------------------
1932@@ -168,9 +142,9 @@
1933 #------------------------------------------------------------------------------
1934 def load_haproxy_config(haproxy_config_file="/etc/haproxy/haproxy.cfg"):
1935 if os.path.isfile(haproxy_config_file):
1936- return(open(haproxy_config_file).read())
1937+ return open(haproxy_config_file).read()
1938 else:
1939- return(None)
1940+ return None
1941
1942
1943 #------------------------------------------------------------------------------
1944@@ -182,12 +156,12 @@
1945 def get_monitoring_password(haproxy_config_file="/etc/haproxy/haproxy.cfg"):
1946 haproxy_config = load_haproxy_config(haproxy_config_file)
1947 if haproxy_config is None:
1948- return(None)
1949+ return None
1950 m = re.search("stats auth\s+(\w+):(\w+)", haproxy_config)
1951 if m is not None:
1952- return(m.group(2))
1953+ return m.group(2)
1954 else:
1955- return(None)
1956+ return None
1957
1958
1959 #------------------------------------------------------------------------------
1960@@ -197,32 +171,29 @@
1961 # to open and close when exposing/unexposing a service
1962 #------------------------------------------------------------------------------
1963 def get_service_ports(haproxy_config_file="/etc/haproxy/haproxy.cfg"):
1964+ stanzas = get_listen_stanzas(haproxy_config_file=haproxy_config_file)
1965+ return tuple((int(port) for service, addr, port in stanzas))
1966+
1967+
1968+#------------------------------------------------------------------------------
1969+# get_listen_stanzas: Convenience function that scans the existing haproxy
1970+# configuration file and returns a list of the existing
1971+#                     listen stanzas configured.
1972+#------------------------------------------------------------------------------
1973+def get_listen_stanzas(haproxy_config_file="/etc/haproxy/haproxy.cfg"):
1974 haproxy_config = load_haproxy_config(haproxy_config_file)
1975 if haproxy_config is None:
1976- return(None)
1977- return(re.findall("listen.*:(.*)", haproxy_config))
1978-
1979-
1980-#------------------------------------------------------------------------------
1981-# open_port: Convenience function to open a port in juju to
1982-# expose a service
1983-#------------------------------------------------------------------------------
1984-def open_port(port=None, protocol="TCP"):
1985- if port is None:
1986- return(None)
1987- return(subprocess.call(['open-port', "%d/%s" %
1988- (int(port), protocol)]))
1989-
1990-
1991-#------------------------------------------------------------------------------
1992-# close_port: Convenience function to close a port in juju to
1993-# unexpose a service
1994-#------------------------------------------------------------------------------
1995-def close_port(port=None, protocol="TCP"):
1996- if port is None:
1997- return(None)
1998- return(subprocess.call(['close-port', "%d/%s" %
1999- (int(port), protocol)]))
2000+ return ()
2001+ listen_stanzas = re.findall(
2002+ "listen\s+([^\s]+)\s+([^:]+):(.*)",
2003+ haproxy_config)
2004+ bind_stanzas = re.findall(
2005+ "\s+bind\s+([^:]+):(\d+)\s*\n\s+default_backend\s+([^\s]+)",
2006+ haproxy_config, re.M)
2007+ return (tuple(((service, addr, int(port))
2008+ for service, addr, port in listen_stanzas)) +
2009+ tuple(((service, addr, int(port))
2010+ for addr, port, service in bind_stanzas)))
2011
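To make the two regexes above concrete, a sketch (assumed input, not from the branch) of what get_listen_stanzas() returns for a config mixing an old-style listen block with the frontend/bind style written by create_listen_stanza() below:

    # Hypothetical haproxy.cfg fragment:
    #
    #     listen haproxy_monitoring 0.0.0.0:10000
    #     frontend haproxy-0-80
    #         bind 0.0.0.0:80
    #         default_backend my_service
    #
    # get_listen_stanzas() would return:
    #
    #     (('haproxy_monitoring', '0.0.0.0', 10000),
    #      ('my_service', '0.0.0.0', 80))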
2012
2013 #------------------------------------------------------------------------------
2014@@ -232,26 +203,25 @@
2015 #------------------------------------------------------------------------------
2016 def update_service_ports(old_service_ports=None, new_service_ports=None):
2017 if old_service_ports is None or new_service_ports is None:
2018- return(None)
2019+ return None
2020 for port in old_service_ports:
2021 if port not in new_service_ports:
2022 close_port(port)
2023 for port in new_service_ports:
2024- if port not in old_service_ports:
2025- open_port(port)
2026-
2027-
2028-#------------------------------------------------------------------------------
2029-# pwgen: Generates a random password
2030-# pwd_length: Defines the length of the password to generate
2031-# default: 20
2032-#------------------------------------------------------------------------------
2033-def pwgen(pwd_length=20):
2034- alphanumeric_chars = [l for l in (string.letters + string.digits)
2035- if l not in 'Iil0oO1']
2036- random_chars = [random.choice(alphanumeric_chars)
2037- for i in range(pwd_length)]
2038- return(''.join(random_chars))
2039+ open_port(port)
2040+
2041+
2042+#------------------------------------------------------------------------------
2043+# update_sysctl: create a sysctl.conf file from YAML-formatted 'sysctl' config
2044+#------------------------------------------------------------------------------
2045+def update_sysctl(config_data):
2046+ sysctl_dict = yaml.load(config_data.get("sysctl", "{}"))
2047+ if sysctl_dict:
2048+ sysctl_file = open("/etc/sysctl.d/50-haproxy.conf", "w")
2049+ for key in sysctl_dict:
2050+ sysctl_file.write("{}={}\n".format(key, sysctl_dict[key]))
2051+ sysctl_file.close()
2052+ subprocess.call(["sysctl", "-p", "/etc/sysctl.d/50-haproxy.conf"])
2053
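An illustrative value for the new 'sysctl' charm option consumed above (the keys are examples, not defaults):

    # Hypothetical YAML passed via `juju set haproxy sysctl=...`:
    #
    #     net.core.somaxconn: 4096
    #     net.ipv4.ip_local_port_range: "1024 65023"
    #
    # update_sysctl() writes each pair as key=value into
    # /etc/sysctl.d/50-haproxy.conf and applies it with `sysctl -p`.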
2054
2055 #------------------------------------------------------------------------------
2056@@ -271,22 +241,40 @@
2057 service_port=None, service_options=None,
2058 server_entries=None):
2059 if service_name is None or service_ip is None or service_port is None:
2060- return(None)
2061+ return None
2062+ fe_options = []
2063+ be_options = []
2064+ if service_options is not None:
2065+ # Filter provided service options into frontend-only and backend-only.
2066+ results = izip(
2067+ (fe_options, be_options),
2068+ (True, False),
2069+ tee((o, any(map(o.strip().startswith,
2070+ frontend_only_options)))
2071+ for o in service_options))
2072+ for out, cond, result in results:
2073+ out.extend(option for option, match in result if match is cond)
2074 service_config = []
2075- service_config.append("listen %s %s:%s" %
2076- (service_name, service_ip, service_port))
2077- if service_options is not None:
2078- for service_option in service_options:
2079- service_config.append(" %s" % service_option.strip())
2080- if server_entries is not None and isinstance(server_entries, list):
2081- for (server_name, server_ip, server_port, server_options) \
2082- in server_entries:
2083+ unit_name = os.environ["JUJU_UNIT_NAME"].replace("/", "-")
2084+ service_config.append("frontend %s-%s" % (unit_name, service_port))
2085+ service_config.append(" bind %s:%s" %
2086+ (service_ip, service_port))
2087+ service_config.append(" default_backend %s" % (service_name,))
2088+ service_config.extend(" %s" % service_option.strip()
2089+ for service_option in fe_options)
2090+ service_config.append("")
2091+ service_config.append("backend %s" % (service_name,))
2092+ service_config.extend(" %s" % service_option.strip()
2093+ for service_option in be_options)
2094+ if isinstance(server_entries, (list, tuple)):
2095+ for (server_name, server_ip, server_port,
2096+ server_options) in server_entries:
2097 server_line = " server %s %s:%s" % \
2098- (server_name, server_ip, server_port)
2099+ (server_name, server_ip, server_port)
2100 if server_options is not None:
2101- server_line += " %s" % server_options
2102+ server_line += " %s" % " ".join(server_options)
2103 service_config.append(server_line)
2104- return('\n'.join(service_config))
2105+ return '\n'.join(service_config)
2106
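A sketch (assumed inputs) of the frontend/backend split performed above: with JUJU_UNIT_NAME=haproxy/0, frontend-only options such as 'maxconn' land in the frontend stanza and everything else in the backend:

    # create_listen_stanza('my_service', '0.0.0.0', 80,
    #                      ['maxconn 100', 'balance leastconn'],
    #                      [('web-0-8080', '10.0.0.2', '8080', ['check'])])
    # would produce roughly:
    #
    #     frontend haproxy-0-80
    #         bind 0.0.0.0:80
    #         default_backend my_service
    #         maxconn 100
    #
    #     backend my_service
    #         balance leastconn
    #         server web-0-8080 10.0.0.2:8080 check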
2107
2108 #------------------------------------------------------------------------------
2109@@ -296,216 +284,234 @@
2110 def create_monitoring_stanza(service_name="haproxy_monitoring"):
2111 config_data = config_get()
2112 if config_data['enable_monitoring'] is False:
2113- return(None)
2114+ return None
2115 monitoring_password = get_monitoring_password()
2116 if config_data['monitoring_password'] != "changeme":
2117 monitoring_password = config_data['monitoring_password']
2118- elif monitoring_password is None and \
2119- config_data['monitoring_password'] == "changeme":
2120- monitoring_password = pwgen()
2121+ elif (monitoring_password is None and
2122+ config_data['monitoring_password'] == "changeme"):
2123+ monitoring_password = pwgen(length=20)
2124 monitoring_config = []
2125 monitoring_config.append("mode http")
2126 monitoring_config.append("acl allowed_cidr src %s" %
2127- config_data['monitoring_allowed_cidr'])
2128+ config_data['monitoring_allowed_cidr'])
2129 monitoring_config.append("block unless allowed_cidr")
2130 monitoring_config.append("stats enable")
2131 monitoring_config.append("stats uri /")
2132 monitoring_config.append("stats realm Haproxy\ Statistics")
2133 monitoring_config.append("stats auth %s:%s" %
2134- (config_data['monitoring_username'], monitoring_password))
2135+ (config_data['monitoring_username'],
2136+ monitoring_password))
2137 monitoring_config.append("stats refresh %d" %
2138- config_data['monitoring_stats_refresh'])
2139- return(create_listen_stanza(service_name,
2140+ config_data['monitoring_stats_refresh'])
2141+ return create_listen_stanza(service_name,
2142 "0.0.0.0",
2143 config_data['monitoring_port'],
2144- monitoring_config))
2145-
2146-def get_host_port(services_list):
2147- """
2148- Given a services list and global juju information, get a host
2149- and port for this system.
2150- """
2151- host = services_list[0]["service_host"]
2152- port = int(services_list[0]["service_port"])
2153- return (host, port)
2154-
2155+ monitoring_config)
2156+
2157+
2158+#------------------------------------------------------------------------------
2159+# get_config_services: Convenience function that returns a mapping containing
2160+# all of the services configuration
2161+#------------------------------------------------------------------------------
2162 def get_config_services():
2163- """
2164- Return dict of all services in the configuration, and in the relation
2165- where appropriate. If a relation contains a "services" key, read
2166- it in as yaml as is the case with the configuration. Set the host and
2167- port for any relation initiated service entry as those items cannot be
2168- known by the other side of the relation. In the case of a
2169- proxy configuration found, ensure the forward for option is set.
2170- """
2171 config_data = config_get()
2172- config_services_list = yaml.load(config_data['services'])
2173- (host, port) = get_host_port(config_services_list)
2174- all_relations = relation_get_all("reverseproxy")
2175- services_list = []
2176- if hasattr(all_relations, "iteritems"):
2177- for relid, reldata in all_relations.iteritems():
2178- for unit, relation_info in reldata.iteritems():
2179- if relation_info.has_key("services"):
2180- rservices = yaml.load(relation_info["services"])
2181- for r in rservices:
2182- r["service_host"] = host
2183- r["service_port"] = port
2184- port += 1
2185- services_list.extend(rservices)
2186- if len(services_list) == 0:
2187- services_list = config_services_list
2188- return(services_list)
2189+ services = {}
2190+ for service in yaml.safe_load(config_data['services']):
2191+ service_name = service["service_name"]
2192+ if not services:
2193+ # 'None' is used as a marker for the first service defined, which
2194+ # is used as the default service if a proxied server doesn't
2195+ # specify which service it is bound to.
2196+ services[None] = {"service_name": service_name}
2197+ if is_proxy(service_name) and ("option forwardfor" not in
2198+ service["service_options"]):
2199+ service["service_options"].append("option forwardfor")
2200+
2201+ if isinstance(service["server_options"], basestring):
2202+ service["server_options"] = service["server_options"].split()
2203+
2204+ services[service_name] = service
2205+
2206+ return services
2207
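For reference (assumed values), the shape of the mapping returned above for a single configured service; note the None key pointing at the first, default service:

    # Hypothetical 'services' charm option (YAML):
    #
    #     - service_name: my_service
    #       service_host: 0.0.0.0
    #       service_port: 80
    #       service_options: [balance leastconn]
    #       server_options: maxconn 100
    #
    # get_config_services() would return roughly:
    #
    #     {None: {'service_name': 'my_service'},
    #      'my_service': {'service_name': 'my_service',
    #                     'service_host': '0.0.0.0',
    #                     'service_port': 80,
    #                     'service_options': ['balance leastconn'],
    #                     'server_options': ['maxconn', '100']}}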
2208
2209 #------------------------------------------------------------------------------
2210 # get_config_service: Convenience function that returns a dictionary
2211-# of the configuration of a given services configuration
2212+#                     of the configuration of a given service
2213 #------------------------------------------------------------------------------
2214 def get_config_service(service_name=None):
2215- services_list = get_config_services()
2216- for service_item in services_list:
2217- if service_item['service_name'] == service_name:
2218- return(service_item)
2219- return(None)
2220-
2221-
2222-def relation_get_all(relation_name):
2223- """
2224- Iterate through all relations, and return large data structure with the
2225- relation data set:
2226-
2227- @param relation_name: The name of the relation to check
2228-
2229- Returns:
2230-
2231- relation_id:
2232- unit:
2233- key: value
2234- key2: value
2235- """
2236- result = {}
2237- try:
2238- relids = subprocess.Popen(
2239- ['relation-ids', relation_name], stdout=subprocess.PIPE)
2240- for relid in [x.strip() for x in relids.stdout]:
2241- result[relid] = {}
2242- for unit in json.loads(
2243- subprocess.check_output(
2244- ['relation-list', '--format=json', '-r', relid])):
2245- result[relid][unit] = relation_get(None, unit, relid)
2246- return result
2247- except Exception, e:
2248- subprocess.call(['juju-log', str(e)])
2249-
2250-def get_services_dict():
2251- """
2252- Transform the services list into a dict for easier comprehension,
2253- and to ensure that we have only one entry per service type. If multiple
2254- relations specify the same server_name, try to union the servers
2255- entries.
2256- """
2257- services_list = get_config_services()
2258- services_dict = {}
2259-
2260- for service_item in services_list:
2261- if not hasattr(service_item, "iteritems"):
2262- juju_log("Each 'services' entry must be a dict: %s" % service_item)
2263- continue;
2264- if "service_name" not in service_item:
2265- juju_log("Missing 'service_name': %s" % service_item)
2266- continue;
2267- name = service_item["service_name"]
2268- options = service_item["service_options"]
2269- if name in services_dict:
2270- if "servers" in services_dict[name]:
2271- services_dict[name]["servers"].extend(service_item["servers"])
2272- else:
2273- services_dict[name] = service_item
2274- if os.path.exists("%s/%s.is.proxy" % (
2275- default_haproxy_service_config_dir, name)):
2276- if 'option forwardfor' not in options:
2277- options.append("option forwardfor")
2278-
2279- return services_dict
2280-
2281-def get_all_services():
2282- """
2283- Transform a services dict into an "all_services" relation setting expected
2284- by apache2. This is needed to ensure we have the port and hostname setting
2285- correct and in the proper format
2286- """
2287- services = get_services_dict()
2288- all_services = []
2289- for name in services:
2290- s = {"service_name": name,
2291- "service_port": services[name]["service_port"]}
2292- all_services.append(s)
2293- return all_services
2294+ return get_config_services().get(service_name, None)
2295+
2296+
2297+def is_proxy(service_name):
2298+ flag_path = os.path.join(default_haproxy_service_config_dir,
2299+ "%s.is.proxy" % service_name)
2300+ return os.path.exists(flag_path)
2301+
2302
2303 #------------------------------------------------------------------------------
2304 # create_services: Function that will create the services configuration
2305 # from the config data and/or relation information
2306 #------------------------------------------------------------------------------
2307 def create_services():
2308- services_list = get_config_services()
2309- services_dict = get_services_dict()
2310-
2311- # service definitions overwrites user specified haproxy file in
2312- # a pseudo-template form
2313- all_relations = relation_get_all("reverseproxy")
2314- for relid, reldata in all_relations.iteritems():
2315- for unit, relation_info in reldata.iteritems():
2316- if not isinstance(relation_info, dict):
2317- sys.exit(0)
2318- if "services" in relation_info:
2319- juju_log("Relation %s has services override defined" % relid)
2320- continue;
2321- if "hostname" not in relation_info or "port" not in relation_info:
2322- juju_log("Relation %s needs hostname and port defined" % relid)
2323- continue;
2324- juju_service_name = unit.rpartition('/')[0]
2325- # Mandatory switches ( hostname, port )
2326- server_name = "%s__%s" % (
2327- relation_info['hostname'].replace('.', '_'),
2328- relation_info['port'])
2329- server_ip = relation_info['hostname']
2330- server_port = relation_info['port']
2331- # Optional switches ( service_name )
2332- if 'service_name' in relation_info:
2333- if relation_info['service_name'] in services_dict:
2334- service_name = relation_info['service_name']
2335- else:
2336- juju_log("service %s does not exist." % (
2337- relation_info['service_name']))
2338- sys.exit(1)
2339- elif juju_service_name + '_service' in services_dict:
2340- service_name = juju_service_name + '_service'
2341+ services_dict = get_config_services()
2342+ if len(services_dict) == 0:
2343+ log("No services configured, exiting.")
2344+ return
2345+
2346+ relation_data = relations_of_type("reverseproxy")
2347+
2348+ for relation_info in relation_data:
2349+ unit = relation_info['__unit__']
2350+ juju_service_name = unit.rpartition('/')[0]
2351+
2352+ relation_ok = True
2353+ for required in ("port", "private-address", "hostname"):
2354+ if not required in relation_info:
2355+ log("No %s in relation data for '%s', skipping." %
2356+ (required, unit))
2357+ relation_ok = False
2358+ break
2359+
2360+ if not relation_ok:
2361+ continue
2362+
2363+ # Mandatory switches ( private-address, port )
2364+ host = relation_info['private-address']
2365+ port = relation_info['port']
2366+ server_name = ("%s-%s" % (unit.replace("/", "-"), port))
2367+
2368+ # Optional switches ( service_name, sitenames )
2369+ service_names = set()
2370+ if 'service_name' in relation_info:
2371+ if relation_info['service_name'] in services_dict:
2372+ service_names.add(relation_info['service_name'])
2373 else:
2374- service_name = services_list[0]['service_name']
2375+ log("Service '%s' does not exist." %
2376+ relation_info['service_name'])
2377+ continue
2378+
2379+ if 'sitenames' in relation_info:
2380+ sitenames = relation_info['sitenames'].split()
2381+ for sitename in sitenames:
2382+ if sitename in services_dict:
2383+ service_names.add(sitename)
2384+
2385+ if juju_service_name + "_service" in services_dict:
2386+ service_names.add(juju_service_name + "_service")
2387+
2388+ if juju_service_name in services_dict:
2389+ service_names.add(juju_service_name)
2390+
2391+ if not service_names:
2392+ service_names.add(services_dict[None]["service_name"])
2393+
2394+ for service_name in service_names:
2395+ service = services_dict[service_name]
2396+
2397 # Add the server entries
2398- if not 'servers' in services_dict[service_name]:
2399- services_dict[service_name]['servers'] = []
2400- services_dict[service_name]['servers'].append((
2401- server_name, server_ip, server_port,
2402- services_dict[service_name]['server_options']))
2403-
2404+ servers = service.setdefault("servers", [])
2405+ servers.append((server_name, host, port,
2406+ services_dict[service_name].get(
2407+ 'server_options', [])))
2408+
2409+ has_servers = False
2410+ for service_name, service in services_dict.iteritems():
2411+ if service.get("servers", []):
2412+ has_servers = True
2413+
2414+ if not has_servers:
2415+ log("No backend servers, exiting.")
2416+ return
2417+
2418+ del services_dict[None]
2419+ services_dict = apply_peer_config(services_dict)
2420+ write_service_config(services_dict)
2421+ return services_dict
2422+
2423+
2424+def apply_peer_config(services_dict):
2425+ peer_data = relations_of_type("peer")
2426+
2427+ peer_services = {}
2428+ for relation_info in peer_data:
2429+ unit_name = relation_info["__unit__"]
2430+ peer_services_data = relation_info.get("all_services")
2431+ if peer_services_data is None:
2432+ continue
2433+ service_data = yaml.safe_load(peer_services_data)
2434+ for service in service_data:
2435+ service_name = service["service_name"]
2436+ if service_name in services_dict:
2437+ peer_service = peer_services.setdefault(service_name, {})
2438+ peer_service["service_name"] = service_name
2439+ peer_service["service_host"] = service["service_host"]
2440+ peer_service["service_port"] = service["service_port"]
2441+ peer_service["service_options"] = ["balance leastconn",
2442+ "mode tcp",
2443+ "option tcplog"]
2444+ servers = peer_service.setdefault("servers", [])
2445+ servers.append((unit_name.replace("/", "-"),
2446+ relation_info["private-address"],
2447+ service["service_port"] + 1, ["check"]))
2448+
2449+ if not peer_services:
2450+ return services_dict
2451+
2452+ unit_name = os.environ["JUJU_UNIT_NAME"].replace("/", "-")
2453+ private_address = unit_get("private-address")
2454+ for service_name, peer_service in peer_services.iteritems():
2455+ original_service = services_dict[service_name]
2456+
2457+ # If the original service has timeout settings, copy them over to the
2458+ # peer service.
2459+ for option in original_service.get("service_options", ()):
2460+ if "timeout" in option:
2461+ peer_service["service_options"].append(option)
2462+
2463+ servers = peer_service["servers"]
2464+ # Add ourselves to the list of servers for the peer listen stanza.
2465+ servers.append((unit_name, private_address,
2466+ original_service["service_port"] + 1,
2467+ ["check"]))
2468+
2469+ # Make all but the first server in the peer listen stanza a backup
2470+ # server.
2471+ servers.sort()
2472+ for server in servers[1:]:
2473+ server[3].append("backup")
2474+
2475+ # Remap original service port, will now be used by peer listen stanza.
2476+ original_service["service_port"] += 1
2477+
2478+ # Remap original service to a new name, stuff peer listen stanza into
2479+    # its place.
2480+ be_service = service_name + "_be"
2481+ original_service["service_name"] = be_service
2482+ services_dict[be_service] = original_service
2483+ services_dict[service_name] = peer_service
2484+
2485+ return services_dict
2486+
2487+
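To summarise the peer handling above with assumed data: with two haproxy units and one configured service on port 80, the original service is renamed to a '_be' backend on port 81 and a tcp-mode stanza takes over port 80, with every unit but the first marked as 'backup':

    # Hypothetical result for units haproxy/0 (10.0.0.10) and haproxy/1
    # (10.0.0.11); values are illustrative only.
    #
    #     'my_service':    {'service_port': 80,
    #                       'service_options': ['balance leastconn', 'mode tcp',
    #                                           'option tcplog'],
    #                       'servers': [('haproxy-0', '10.0.0.10', 81, ['check']),
    #                                   ('haproxy-1', '10.0.0.11', 81,
    #                                    ['check', 'backup'])]},
    #     'my_service_be': {'service_name': 'my_service_be',
    #                       'service_port': 81, ...}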
2488+def write_service_config(services_dict):
2489 # Construct the new haproxy.cfg file
2490- for service in services_dict:
2491- juju_log("Service: ", service)
2492- server_entries = None
2493- if 'servers' in services_dict[service]:
2494- server_entries = services_dict[service]['servers']
2495- service_config_file = "%s/%s.service" % (
2496- default_haproxy_service_config_dir,
2497- services_dict[service]['service_name'])
2498- with open(service_config_file, 'w') as service_config:
2499- service_config.write(
2500- create_listen_stanza(services_dict[service]['service_name'],
2501- services_dict[service]['service_host'],
2502- services_dict[service]['service_port'],
2503- services_dict[service]['service_options'],
2504- server_entries))
2505+ for service_key, service_config in services_dict.items():
2506+ log("Service: %s" % service_key)
2507+ server_entries = service_config.get('servers')
2508+
2509+ service_name = service_config["service_name"]
2510+ if not os.path.exists(default_haproxy_service_config_dir):
2511+ os.mkdir(default_haproxy_service_config_dir, 0600)
2512+ with open(os.path.join(default_haproxy_service_config_dir,
2513+ "%s.service" % service_name), 'w') as config:
2514+ config.write(create_listen_stanza(
2515+ service_name,
2516+ service_config['service_host'],
2517+ service_config['service_port'],
2518+ service_config['service_options'],
2519+ server_entries))
2520
2521
2522 #------------------------------------------------------------------------------
2523@@ -516,17 +522,19 @@
2524 services = ''
2525 if service_name is not None:
2526 if os.path.exists("%s/%s.service" %
2527- (default_haproxy_service_config_dir, service_name)):
2528- services = open("%s/%s.service" %
2529- (default_haproxy_service_config_dir, service_name)).read()
2530+ (default_haproxy_service_config_dir, service_name)):
2531+ with open("%s/%s.service" % (default_haproxy_service_config_dir,
2532+ service_name)) as f:
2533+ services = f.read()
2534 else:
2535 services = None
2536 else:
2537 for service in glob.glob("%s/*.service" %
2538- default_haproxy_service_config_dir):
2539- services += open(service).read()
2540- services += "\n\n"
2541- return(services)
2542+ default_haproxy_service_config_dir):
2543+ with open(service) as f:
2544+ services += f.read()
2545+ services += "\n\n"
2546+ return services
2547
2548
2549 #------------------------------------------------------------------------------
2550@@ -537,24 +545,24 @@
2551 #------------------------------------------------------------------------------
2552 def remove_services(service_name=None):
2553 if service_name is not None:
2554- if os.path.exists("%s/%s.service" %
2555- (default_haproxy_service_config_dir, service_name)):
2556+ path = "%s/%s.service" % (default_haproxy_service_config_dir,
2557+ service_name)
2558+ if os.path.exists(path):
2559 try:
2560- os.remove("%s/%s.service" %
2561- (default_haproxy_service_config_dir, service_name))
2562- return(True)
2563+ os.remove(path)
2564 except Exception, e:
2565- subprocess.call(['juju-log', str(e)])
2566- return(False)
2567+ log(str(e))
2568+ return False
2569+ return True
2570 else:
2571 for service in glob.glob("%s/*.service" %
2572- default_haproxy_service_config_dir):
2573+ default_haproxy_service_config_dir):
2574 try:
2575 os.remove(service)
2576 except Exception, e:
2577- subprocess.call(['juju-log', str(e)])
2578+ log(str(e))
2579 pass
2580- return(True)
2581+ return True
2582
2583
2584 #------------------------------------------------------------------------------
2585@@ -567,27 +575,18 @@
2586 # optional arguments
2587 #------------------------------------------------------------------------------
2588 def construct_haproxy_config(haproxy_globals=None,
2589- haproxy_defaults=None,
2590- haproxy_monitoring=None,
2591- haproxy_services=None):
2592- if haproxy_globals is None or \
2593- haproxy_defaults is None:
2594- return(None)
2595+ haproxy_defaults=None,
2596+ haproxy_monitoring=None,
2597+ haproxy_services=None):
2598+ if None in (haproxy_globals, haproxy_defaults):
2599+ return
2600 with open(default_haproxy_config, 'w') as haproxy_config:
2601- haproxy_config.write(haproxy_globals)
2602- haproxy_config.write("\n")
2603- haproxy_config.write("\n")
2604- haproxy_config.write(haproxy_defaults)
2605- haproxy_config.write("\n")
2606- haproxy_config.write("\n")
2607- if haproxy_monitoring is not None:
2608- haproxy_config.write(haproxy_monitoring)
2609- haproxy_config.write("\n")
2610- haproxy_config.write("\n")
2611- if haproxy_services is not None:
2612- haproxy_config.write(haproxy_services)
2613- haproxy_config.write("\n")
2614- haproxy_config.write("\n")
2615+ config_string = ''
2616+ for config in (haproxy_globals, haproxy_defaults, haproxy_monitoring,
2617+ haproxy_services):
2618+ if config is not None:
2619+ config_string += config + '\n\n'
2620+ haproxy_config.write(config_string)
2621
2622
2623 #------------------------------------------------------------------------------
2624@@ -595,50 +594,37 @@
2625 # the haproxy service
2626 #------------------------------------------------------------------------------
2627 def service_haproxy(action=None, haproxy_config=default_haproxy_config):
2628- if action is None or haproxy_config is None:
2629- return(None)
2630+ if None in (action, haproxy_config):
2631+ return None
2632 elif action == "check":
2633- retVal = subprocess.call(
2634- ['/usr/sbin/haproxy', '-f', haproxy_config, '-c'])
2635- if retVal == 1:
2636- return(False)
2637- elif retVal == 0:
2638- return(True)
2639- else:
2640- return(False)
2641+ command = ['/usr/sbin/haproxy', '-f', haproxy_config, '-c']
2642 else:
2643- retVal = subprocess.call(['service', 'haproxy', action])
2644- if retVal == 0:
2645- return(True)
2646- else:
2647- return(False)
2648-
2649-def website_notify():
2650- """
2651- Notify any webiste relations of any configuration changes.
2652- """
2653- juju_log("Notifying all website relations of change")
2654- all_relations = relation_get_all("website")
2655- if hasattr(all_relations, "iteritems"):
2656- for relid, reldata in all_relations.iteritems():
2657- relation_set(["time=%s" % time.time()], relation_id=relid)
2658+ command = ['service', 'haproxy', action]
2659+ return_value = subprocess.call(command)
2660+ return return_value == 0
2661
2662
2663 ###############################################################################
2664 # Hook functions
2665 ###############################################################################
2666 def install_hook():
2667- for f in glob.glob('exec.d/*/charm-pre-install'):
2668- if os.path.isfile(f) and os.access(f, os.X_OK):
2669- subprocess.check_call(['sh', '-c', f])
2670 if not os.path.exists(default_haproxy_service_config_dir):
2671 os.mkdir(default_haproxy_service_config_dir, 0600)
2672- return ((apt_get_install("haproxy") == enable_haproxy()) is True)
2673+
2674+ apt_install('haproxy', fatal=True)
2675+ ensure_package_status(service_affecting_packages,
2676+ config_get('package_status'))
2677+ enable_haproxy()
2678
2679
2680 def config_changed():
2681 config_data = config_get()
2682- current_service_ports = get_service_ports()
2683+
2684+ ensure_package_status(service_affecting_packages,
2685+ config_data['package_status'])
2686+
2687+ old_service_ports = get_service_ports()
2688+ old_stanzas = get_listen_stanzas()
2689 haproxy_globals = create_haproxy_globals()
2690 haproxy_defaults = create_haproxy_defaults()
2691 if config_data['enable_monitoring'] is True:
2692@@ -648,105 +634,177 @@
2693 remove_services()
2694 create_services()
2695 haproxy_services = load_services()
2696+ update_sysctl(config_data)
2697 construct_haproxy_config(haproxy_globals,
2698 haproxy_defaults,
2699 haproxy_monitoring,
2700 haproxy_services)
2701
2702 if service_haproxy("check"):
2703- updated_service_ports = get_service_ports()
2704- update_service_ports(current_service_ports, updated_service_ports)
2705+ update_service_ports(old_service_ports, get_service_ports())
2706 service_haproxy("reload")
2707+ if not (get_listen_stanzas() == old_stanzas):
2708+ notify_website()
2709+ notify_peer()
2710+ else:
2711+ # XXX Ideally the config should be restored to a working state if the
2712+ # check fails, otherwise an inadvertent reload will cause the service
2713+ # to be broken.
2714+ log("HAProxy configuration check failed, exiting.")
2715+ sys.exit(1)
2716
2717
2718 def start_hook():
2719 if service_haproxy("status"):
2720- return(service_haproxy("restart"))
2721+ return service_haproxy("restart")
2722 else:
2723- return(service_haproxy("start"))
2724+ return service_haproxy("start")
2725
2726
2727 def stop_hook():
2728 if service_haproxy("status"):
2729- return(service_haproxy("stop"))
2730+ return service_haproxy("stop")
2731
2732
2733 def reverseproxy_interface(hook_name=None):
2734 if hook_name is None:
2735- return(None)
2736- elif hook_name == "changed":
2737- config_changed()
2738- website_notify()
2739- elif hook_name=="departed":
2740- config_changed()
2741- website_notify()
2742+ return None
2743+ if hook_name in ("changed", "departed"):
2744+ config_changed()
2745+
2746
2747 def website_interface(hook_name=None):
2748 if hook_name is None:
2749- return(None)
2750+ return None
2751+ # Notify website relation but only for the current relation in context.
2752+ notify_website(changed=hook_name == "changed",
2753+ relation_ids=(relation_id(),))
2754+
2755+
2756+def get_hostname(host=None):
2757+ my_host = socket.gethostname()
2758+ if host is None or host == "0.0.0.0":
2759+ # If the listen ip has been set to 0.0.0.0 then pass back the hostname
2760+ return socket.getfqdn(my_host)
2761+ elif host == "localhost":
2762+ # If the fqdn lookup has returned localhost (lxc setups) then return
2763+ # hostname
2764+ return my_host
2765+ return host
2766+
2767+
2768+def notify_relation(relation, changed=False, relation_ids=None):
2769+ config_data = config_get()
2770+ default_host = get_hostname()
2771 default_port = 80
2772- relation_data = relation_get()
2773-
2774- # If a specfic service has been asked for then return the ip:port for
2775- # that service, else pass back the default
2776- if 'service_name' in relation_data:
2777- service_name = relation_data['service_name']
2778- requestedservice = get_config_service(service_name)
2779- my_host = requestedservice['service_host']
2780- my_port = requestedservice['service_port']
2781- else:
2782- my_host = socket.getfqdn(socket.gethostname())
2783- my_port = default_port
2784-
2785- # If the listen ip has been set to 0.0.0.0 then pass back the hostname
2786- if my_host == "0.0.0.0":
2787- my_host = socket.getfqdn(socket.gethostname())
2788-
2789- # If the fqdn lookup has returned localhost (lxc setups) then return
2790- # hostname
2791- if my_host == "localhost":
2792- my_host = socket.gethostname()
2793- subprocess.call(
2794- ['relation-set', 'port=%d' % my_port, 'hostname=%s' % my_host,
2795- 'all_services=%s' % yaml.safe_dump(get_all_services())])
2796- if hook_name == "changed":
2797- if 'is-proxy' in relation_data:
2798- service_name = "%s__%d" % \
2799- (relation_data['hostname'], relation_data['port'])
2800- open("%s/%s.is.proxy" %
2801- (default_haproxy_service_config_dir, service_name), 'a').close()
2802+
2803+ for rid in relation_ids or get_relation_ids(relation):
2804+ service_names = set()
2805+ if rid is None:
2806+ rid = relation_id()
2807+ for relation_data in relations_for_id(rid):
2808+ if 'service_name' in relation_data:
2809+ service_names.add(relation_data['service_name'])
2810+
2811+ if changed:
2812+ if 'is-proxy' in relation_data:
2813+ remote_service = ("%s__%d" % (relation_data['hostname'],
2814+ relation_data['port']))
2815+ open("%s/%s.is.proxy" % (
2816+ default_haproxy_service_config_dir,
2817+ remote_service), 'a').close()
2818+
2819+ service_name = None
2820+ if len(service_names) == 1:
2821+ service_name = service_names.pop()
2822+ elif len(service_names) > 1:
2823+            log("Remote units requested more than a single service name. "
2824+ "Falling back to default host/port.")
2825+
2826+ if service_name is not None:
2827+            # If a specific service has been asked for then return the ip:port
2828+ # for that service, else pass back the default
2829+ requestedservice = get_config_service(service_name)
2830+ my_host = get_hostname(requestedservice['service_host'])
2831+ my_port = requestedservice['service_port']
2832+ else:
2833+ my_host = default_host
2834+ my_port = default_port
2835+
2836+ relation_set(relation_id=rid, port=str(my_port),
2837+ hostname=my_host,
2838+ all_services=config_data['services'])
2839+
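For illustration (values assumed), the settings a related website unit would see after notify_relation() runs; all_services carries the raw 'services' config string:

    # Hypothetical relation data published on the website relation:
    #
    #     port: "80"
    #     hostname: haproxy-0.example.com
    #     all_services: |
    #         - service_name: my_service
    #           service_host: 0.0.0.0
    #           service_port: 80
    #           ...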
2840+
2841+def notify_website(changed=False, relation_ids=None):
2842+ notify_relation("website", changed=changed, relation_ids=relation_ids)
2843+
2844+
2845+def notify_peer(changed=False, relation_ids=None):
2846+ notify_relation("peer", changed=changed, relation_ids=relation_ids)
2847+
2848+
2849+def install_nrpe_scripts():
2850+ scripts_src = os.path.join(os.environ["CHARM_DIR"], "files",
2851+ "nrpe")
2852+ scripts_dst = "/usr/lib/nagios/plugins"
2853+ if not os.path.exists(scripts_dst):
2854+ os.makedirs(scripts_dst)
2855+ for fname in glob.glob(os.path.join(scripts_src, "*.sh")):
2856+ shutil.copy2(fname,
2857+ os.path.join(scripts_dst, os.path.basename(fname)))
2858+
2859
2860 def update_nrpe_config():
2861+ install_nrpe_scripts()
2862 nrpe_compat = nrpe.NRPE()
2863- nrpe_compat.add_check('haproxy','Check HAProxy', 'check_haproxy.sh')
2864- nrpe_compat.add_check('haproxy_queue','Check HAProxy queue depth', 'check_haproxy_queue_depth.sh')
2865+ nrpe_compat.add_check('haproxy', 'Check HAProxy', 'check_haproxy.sh')
2866+ nrpe_compat.add_check('haproxy_queue', 'Check HAProxy queue depth',
2867+ 'check_haproxy_queue_depth.sh')
2868 nrpe_compat.write()
2869
2870+
2871 ###############################################################################
2872 # Main section
2873 ###############################################################################
2874-if __name__ == "__main__":
2875- if HOOK_NAME == "install":
2876+
2877+
2878+def main(hook_name):
2879+ if hook_name == "install":
2880 install_hook()
2881- elif HOOK_NAME == "config-changed":
2882+ elif hook_name in ("config-changed", "upgrade-charm"):
2883 config_changed()
2884 update_nrpe_config()
2885- elif HOOK_NAME == "start":
2886+ elif hook_name == "start":
2887 start_hook()
2888- elif HOOK_NAME == "stop":
2889+ elif hook_name == "stop":
2890 stop_hook()
2891- elif HOOK_NAME == "reverseproxy-relation-broken":
2892+ elif hook_name == "reverseproxy-relation-broken":
2893 config_changed()
2894- elif HOOK_NAME == "reverseproxy-relation-changed":
2895+ elif hook_name == "reverseproxy-relation-changed":
2896 reverseproxy_interface("changed")
2897- elif HOOK_NAME == "reverseproxy-relation-departed":
2898+ elif hook_name == "reverseproxy-relation-departed":
2899 reverseproxy_interface("departed")
2900- elif HOOK_NAME == "website-relation-joined":
2901+ elif hook_name == "website-relation-joined":
2902 website_interface("joined")
2903- elif HOOK_NAME == "website-relation-changed":
2904+ elif hook_name == "website-relation-changed":
2905 website_interface("changed")
2906- elif HOOK_NAME == "nrpe-external-master-relation-changed":
2907+ elif hook_name == "peer-relation-joined":
2908+ website_interface("joined")
2909+ elif hook_name == "peer-relation-changed":
2910+ reverseproxy_interface("changed")
2911+ elif hook_name in ("nrpe-external-master-relation-joined",
2912+ "local-monitors-relation-joined"):
2913 update_nrpe_config()
2914 else:
2915 print "Unknown hook"
2916 sys.exit(1)
2917+
2918+if __name__ == "__main__":
2919+ hook_name = os.path.basename(sys.argv[0])
2920+ # Also support being invoked directly with hook as argument name.
2921+ if hook_name == "hooks.py":
2922+ if len(sys.argv) < 2:
2923+ sys.exit("Missing required hook name argument.")
2924+ hook_name = sys.argv[1]
2925+ main(hook_name)
2926
2927=== modified symlink 'hooks/install' (properties changed: -x to +x)
2928=== target was u'./hooks.py'
2929--- hooks/install 1970-01-01 00:00:00 +0000
2930+++ hooks/install 2013-10-10 22:34:35 +0000
2931@@ -0,0 +1,13 @@
2932+#!/bin/sh
2933+
2934+set -eu
2935+
2936+apt_get_install() {
2937+ DEBIAN_FRONTEND=noninteractive apt-get -y -qq -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install $@
2938+}
2939+
2940+juju-log 'Invoking charm-pre-install hooks'
2941+[ -d exec.d ] && ( for f in exec.d/*/charm-pre-install; do [ -x $f ] && /bin/sh -c "$f"; done )
2942+
2943+juju-log 'Invoking python-based install hook'
2944+python hooks/hooks.py install
2945
2946=== added symlink 'hooks/local-monitors-relation-joined'
2947=== target is u'./hooks.py'
2948=== renamed symlink 'hooks/nrpe-external-master-relation-changed' => 'hooks/nrpe-external-master-relation-joined'
2949=== removed file 'hooks/nrpe.py'
2950--- hooks/nrpe.py 2012-12-21 11:08:58 +0000
2951+++ hooks/nrpe.py 1970-01-01 00:00:00 +0000
2952@@ -1,170 +0,0 @@
2953-import json
2954-import subprocess
2955-import pwd
2956-import grp
2957-import os
2958-import re
2959-import shlex
2960-
2961-# This module adds compatibility with the nrpe_external_master
2962-# subordinate charm. To use it in your charm:
2963-#
2964-# 1. Update metadata.yaml
2965-#
2966-# provides:
2967-# (...)
2968-# nrpe-external-master:
2969-# interface: nrpe-external-master
2970-# scope: container
2971-#
2972-# 2. Add the following to config.yaml
2973-#
2974-# nagios_context:
2975-# default: "juju"
2976-# type: string
2977-# description: |
2978-# Used by the nrpe-external-master subordinate charm.
2979-# A string that will be prepended to instance name to set the host name
2980-# in nagios. So for instance the hostname would be something like:
2981-# juju-myservice-0
2982-# If you're running multiple environments with the same services in them
2983-# this allows you to differentiate between them.
2984-#
2985-# 3. Add custom checks (Nagios plugins) to files/nrpe-external-master
2986-#
2987-# 4. Update your hooks.py with something like this:
2988-#
2989-# import nrpe
2990-# (...)
2991-# def update_nrpe_config():
2992-# nrpe_compat = NRPE("myservice")
2993-# nrpe_compat.add_check(
2994-# shortname = "myservice",
2995-# description = "Check MyService",
2996-# check_cmd = "check_http -w 2 -c 10 http://localhost"
2997-# )
2998-# nrpe_compat.add_check(
2999-# "myservice_other",
3000-# "Check for widget failures",
3001-# check_cmd = "/srv/myapp/scripts/widget_check"
3002-# )
3003-# nrpe_compat.write()
3004-#
3005-# def config_changed():
3006-# (...)
3007-# update_nrpe_config()
3008-
3009-class ConfigurationError(Exception):
3010- '''An error interacting with the Juju config'''
3011- pass
3012-def config_get(scope=None):
3013- '''Return the Juju config as a dictionary'''
3014- try:
3015- config_cmd_line = ['config-get']
3016- if scope is not None:
3017- config_cmd_line.append(scope)
3018- config_cmd_line.append('--format=json')
3019- return json.loads(subprocess.check_output(config_cmd_line))
3020- except (ValueError, OSError, subprocess.CalledProcessError) as error:
3021- subprocess.call(['juju-log', str(error)])
3022- raise ConfigurationError(str(error))
3023-
3024-class CheckException(Exception): pass
3025-class Check(object):
3026- shortname_re = '[A-Za-z0-9-_]*'
3027- service_template = """
3028-#---------------------------------------------------
3029-# This file is Juju managed
3030-#---------------------------------------------------
3031-define service {{
3032- use active-service
3033- host_name {nagios_hostname}
3034- service_description {nagios_hostname} {shortname} {description}
3035- check_command check_nrpe!check_{shortname}
3036- servicegroups {nagios_servicegroup}
3037-}}
3038-"""
3039- def __init__(self, shortname, description, check_cmd):
3040- super(Check, self).__init__()
3041- # XXX: could be better to calculate this from the service name
3042- if not re.match(self.shortname_re, shortname):
3043- raise CheckException("shortname must match {}".format(Check.shortname_re))
3044- self.shortname = shortname
3045- self.description = description
3046- self.check_cmd = self._locate_cmd(check_cmd)
3047-
3048- def _locate_cmd(self, check_cmd):
3049- search_path = (
3050- '/',
3051- os.path.join(os.environ['CHARM_DIR'], 'files/nrpe-external-master'),
3052- '/usr/lib/nagios/plugins',
3053- )
3054- command = shlex.split(check_cmd)
3055- for path in search_path:
3056- if os.path.exists(os.path.join(path,command[0])):
3057- return os.path.join(path, command[0]) + " " + " ".join(command[1:])
3058- subprocess.call(['juju-log', 'Check command not found: {}'.format(command[0])])
3059- return ''
3060-
3061- def write(self, nagios_context, hostname):
3062- for f in os.listdir(NRPE.nagios_exportdir):
3063- if re.search('.*check_{}.cfg'.format(self.shortname), f):
3064- os.remove(os.path.join(NRPE.nagios_exportdir, f))
3065-
3066- templ_vars = {
3067- 'nagios_hostname': hostname,
3068- 'nagios_servicegroup': nagios_context,
3069- 'description': self.description,
3070- 'shortname': self.shortname,
3071- }
3072- nrpe_service_text = Check.service_template.format(**templ_vars)
3073- nrpe_service_file = '{}/service__{}_check_{}.cfg'.format(
3074- NRPE.nagios_exportdir, hostname, self.shortname)
3075- with open(nrpe_service_file, 'w') as nrpe_service_config:
3076- nrpe_service_config.write(str(nrpe_service_text))
3077-
3078- nrpe_check_file = '/etc/nagios/nrpe.d/check_{}.cfg'.format(self.shortname)
3079- with open(nrpe_check_file, 'w') as nrpe_check_config:
3080- nrpe_check_config.write("# check {}\n".format(self.shortname))
3081- nrpe_check_config.write("command[check_{}]={}\n".format(
3082- self.shortname, self.check_cmd))
3083-
3084- def run(self):
3085- subprocess.call(self.check_cmd)
3086-
3087-class NRPE(object):
3088- nagios_logdir = '/var/log/nagios'
3089- nagios_exportdir = '/var/lib/nagios/export'
3090- nrpe_confdir = '/etc/nagios/nrpe.d'
3091- def __init__(self):
3092- super(NRPE, self).__init__()
3093- self.config = config_get()
3094- self.nagios_context = self.config['nagios_context']
3095- self.unit_name = os.environ['JUJU_UNIT_NAME'].replace('/', '-')
3096- self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
3097- self.checks = []
3098-
3099- def add_check(self, *args, **kwargs):
3100- self.checks.append( Check(*args, **kwargs) )
3101-
3102- def write(self):
3103- try:
3104- nagios_uid = pwd.getpwnam('nagios').pw_uid
3105- nagios_gid = grp.getgrnam('nagios').gr_gid
3106- except:
3107- subprocess.call(['juju-log', "Nagios user not set up, nrpe checks not updated"])
3108- return
3109-
3110- if not os.path.exists(NRPE.nagios_exportdir):
3111- subprocess.call(['juju-log', 'Exiting as {} is not accessible'.format(NRPE.nagios_exportdir)])
3112- return
3113-
3114- if not os.path.exists(NRPE.nagios_logdir):
3115- os.mkdir(NRPE.nagios_logdir)
3116- os.chown(NRPE.nagios_logdir, nagios_uid, nagios_gid)
3117-
3118- for nrpecheck in self.checks:
3119- nrpecheck.write(self.nagios_context, self.hostname)
3120-
3121- if os.path.isfile('/etc/init.d/nagios-nrpe-server'):
3122- subprocess.call(['service', 'nagios-nrpe-server', 'reload'])
3123
3124=== added symlink 'hooks/peer-relation-changed'
3125=== target is u'./hooks.py'
3126=== added symlink 'hooks/peer-relation-joined'
3127=== target is u'./hooks.py'
3128=== removed file 'hooks/test_hooks.py'
3129--- hooks/test_hooks.py 2013-02-14 21:35:47 +0000
3130+++ hooks/test_hooks.py 1970-01-01 00:00:00 +0000
3131@@ -1,263 +0,0 @@
3132-import hooks
3133-import yaml
3134-from textwrap import dedent
3135-from mocker import MockerTestCase, ARGS
3136-
3137-class JujuHookTest(MockerTestCase):
3138-
3139- def setUp(self):
3140- self.config_services = [{
3141- "service_name": "haproxy_test",
3142- "service_host": "0.0.0.0",
3143- "service_port": "88",
3144- "service_options": ["balance leastconn"],
3145- "server_options": "maxconn 25"}]
3146- self.config_services_extended = [
3147- {"service_name": "unit_service",
3148- "service_host": "supplied-hostname",
3149- "service_port": "999",
3150- "service_options": ["balance leastconn"],
3151- "server_options": "maxconn 99"}]
3152- self.relation_services = [
3153- {"service_name": "foo_svc",
3154- "service_options": ["balance leastconn"],
3155- "servers": [("A", "hA", "1", "oA1 oA2")]},
3156- {"service_name": "bar_svc",
3157- "service_options": ["balance leastconn"],
3158- "servers": [
3159- ("A", "hA", "1", "oA1 oA2"), ("B", "hB", "2", "oB1 oB2")]}]
3160- self.relation_services2 = [
3161- {"service_name": "foo_svc",
3162- "service_options": ["balance leastconn"],
3163- "servers": [("A2", "hA2", "12", "oA12 oA22")]}]
3164- hooks.default_haproxy_config_dir = self.makeDir()
3165- hooks.default_haproxy_config = self.makeFile()
3166- hooks.default_haproxy_service_config_dir = self.makeDir()
3167- obj = self.mocker.replace("hooks.juju_log")
3168- obj(ARGS)
3169- self.mocker.count(0, None)
3170- obj = self.mocker.replace("hooks.unit_get")
3171- obj("public-address")
3172- self.mocker.result("test-host.example.com")
3173- self.mocker.count(0, None)
3174- self.maxDiff = None
3175-
3176- def _expect_config_get(self, **kwargs):
3177- result = {
3178- "default_timeouts": "queue 1000, connect 1000, client 1000, server 1000",
3179- "global_log": "127.0.0.1 local0, 127.0.0.1 local1 notice",
3180- "global_spread_checks": 0,
3181- "monitoring_allowed_cidr": "127.0.0.1/32",
3182- "monitoring_username": "haproxy",
3183- "default_log": "global",
3184- "global_group": "haproxy",
3185- "monitoring_stats_refresh": 3,
3186- "default_retries": 3,
3187- "services": yaml.dump(self.config_services),
3188- "global_maxconn": 4096,
3189- "global_user": "haproxy",
3190- "default_options": "httplog, dontlognull",
3191- "monitoring_port": 10000,
3192- "global_debug": False,
3193- "nagios_context": "juju",
3194- "global_quiet": False,
3195- "enable_monitoring": False,
3196- "monitoring_password": "changeme",
3197- "default_mode": "http"}
3198- obj = self.mocker.replace("hooks.config_get")
3199- obj()
3200- result.update(kwargs)
3201- self.mocker.result(result)
3202- self.mocker.count(1, None)
3203-
3204- def _expect_relation_get_all(self, relation, extra={}):
3205- obj = self.mocker.replace("hooks.relation_get_all")
3206- obj(relation)
3207- relation = {"hostname": "10.0.1.2",
3208- "private-address": "10.0.1.2",
3209- "port": "10000"}
3210- relation.update(extra)
3211- result = {"1": {"unit/0": relation}}
3212- self.mocker.result(result)
3213- self.mocker.count(1, None)
3214-
3215- def _expect_relation_get_all_multiple(self, relation_name):
3216- obj = self.mocker.replace("hooks.relation_get_all")
3217- obj(relation_name)
3218- result = {
3219- "1": {"unit/0": {
3220- "hostname": "10.0.1.2",
3221- "private-address": "10.0.1.2",
3222- "port": "10000",
3223- "services": yaml.dump(self.relation_services)}},
3224- "2": {"unit/1": {
3225- "hostname": "10.0.1.3",
3226- "private-address": "10.0.1.3",
3227- "port": "10001",
3228- "services": yaml.dump(self.relation_services2)}}}
3229- self.mocker.result(result)
3230- self.mocker.count(1, None)
3231-
3232- def _expect_relation_get_all_with_services(self, relation, extra={}):
3233- extra.update({"services": yaml.dump(self.relation_services)})
3234- return self._expect_relation_get_all(relation, extra)
3235-
3236- def _expect_relation_get(self):
3237- obj = self.mocker.replace("hooks.relation_get")
3238- obj()
3239- result = {}
3240- self.mocker.result(result)
3241- self.mocker.count(1, None)
3242-
3243- def test_create_services(self):
3244- """
3245- Simplest use case, config stanza seeded in config file, server line
3246- added through simple relation. Many servers can join this, but
3247- multiple services will not be presented to the outside
3248- """
3249- self._expect_config_get()
3250- self._expect_relation_get_all("reverseproxy")
3251- self.mocker.replay()
3252- hooks.create_services()
3253- services = hooks.load_services()
3254- stanza = """\
3255- listen haproxy_test 0.0.0.0:88
3256- balance leastconn
3257- server 10_0_1_2__10000 10.0.1.2:10000 maxconn 25
3258-
3259- """
3260- self.assertEquals(services, dedent(stanza))
3261-
3262- def test_create_services_extended_with_relation(self):
3263- """
3264- This case covers specifying an up-front services file to ha-proxy
3265- in the config. The relation then specifies a singular hostname,
3266- port and server_options setting which is filled into the appropriate
3267- haproxy stanza based on multiple criteria.
3268- """
3269- self._expect_config_get(
3270- services=yaml.dump(self.config_services_extended))
3271- self._expect_relation_get_all("reverseproxy")
3272- self.mocker.replay()
3273- hooks.create_services()
3274- services = hooks.load_services()
3275- stanza = """\
3276- listen unit_service supplied-hostname:999
3277- balance leastconn
3278- server 10_0_1_2__10000 10.0.1.2:10000 maxconn 99
3279-
3280- """
3281- self.assertEquals(dedent(stanza), services)
3282-
3283- def test_create_services_pure_relation(self):
3284- """
3285- In this case, the relation is in control of the haproxy config file.
3286- Each relation chooses what server it creates in the haproxy file; it
3287- relies on the haproxy service only for the hostname and front-end port.
3288- Each member of the relation will put a backend server entry in
3289- the desired stanza. Each relation can in fact supply multiple
3290- entries from the same juju service unit if desired.
3291- """
3292- self._expect_config_get()
3293- self._expect_relation_get_all_with_services("reverseproxy")
3294- self.mocker.replay()
3295- hooks.create_services()
3296- services = hooks.load_services()
3297- stanza = """\
3298- listen foo_svc 0.0.0.0:88
3299- balance leastconn
3300- server A hA:1 oA1 oA2
3301- """
3302- self.assertIn(dedent(stanza), services)
3303- stanza = """\
3304- listen bar_svc 0.0.0.0:89
3305- balance leastconn
3306- server A hA:1 oA1 oA2
3307- server B hB:2 oB1 oB2
3308- """
3309- self.assertIn(dedent(stanza), services)
3310-
3311- def test_create_services_pure_relation_multiple(self):
3312- """
3313- This is much like the pure_relation case, where the relation specifies
3314- a "services" override. However, in this case we have multiple relations
3315- that partially override each other. We expect that the created haproxy
3316- conf file will combine things appropriately.
3317- """
3318- self._expect_config_get()
3319- self._expect_relation_get_all_multiple("reverseproxy")
3320- self.mocker.replay()
3321- hooks.create_services()
3322- result = hooks.load_services()
3323- stanza = """\
3324- listen foo_svc 0.0.0.0:88
3325- balance leastconn
3326- server A hA:1 oA1 oA2
3327- server A2 hA2:12 oA12 oA22
3328- """
3329- self.assertIn(dedent(stanza), result)
3330- stanza = """\
3331- listen bar_svc 0.0.0.0:89
3332- balance leastconn
3333- server A hA:1 oA1 oA2
3334- server B hB:2 oB1 oB2
3335- """
3336- self.assertIn(dedent(stanza), result)
3337-
3338- def test_get_config_services_config_only(self):
3339- """
3340- Attempting to catch the case where a relation is not joined yet
3341- """
3342- self._expect_config_get()
3343- obj = self.mocker.replace("hooks.relation_get_all")
3344- obj("reverseproxy")
3345- self.mocker.result(None)
3346- self.mocker.replay()
3347- result = hooks.get_config_services()
3348- self.assertEquals(result, self.config_services)
3349-
3350- def test_get_config_services_relation_no_services(self):
3351- """
3352- If the config specifies services and the relation does not, just the
3353- config services should come through.
3354- """
3355- self._expect_config_get()
3356- self._expect_relation_get_all("reverseproxy")
3357- self.mocker.replay()
3358- result = hooks.get_config_services()
3359- self.assertEquals(result, self.config_services)
3360-
3361- def test_get_config_services_relation_with_services(self):
3362- """
3363- Testing with both the config and relation providing services should
3364- yield just the relation services.
3365- """
3366- self._expect_config_get()
3367- self._expect_relation_get_all_with_services("reverseproxy")
3368- self.mocker.replay()
3369- result = hooks.get_config_services()
3370- # Just test "servers" since hostname and port and maybe other keys
3371- # will be added by the hook
3372- self.assertEquals(result[0]["servers"],
3373- self.relation_services[0]["servers"])
3374-
3375- def test_config_generation_indempotent(self):
3376- self._expect_config_get()
3377- self._expect_relation_get_all_multiple("reverseproxy")
3378- self.mocker.replay()
3379-
3380- # Test that we generate the same haproxy.conf file each time
3381- hooks.create_services()
3382- result1 = hooks.load_services()
3383- hooks.create_services()
3384- result2 = hooks.load_services()
3385- self.assertEqual(result1, result2)
3386-
3387- def test_get_all_services(self):
3388- self._expect_config_get()
3389- self._expect_relation_get_all_multiple("reverseproxy")
3390- self.mocker.replay()
3391- baseline = [{"service_name": "foo_svc", "service_port": 88},
3392- {"service_name": "bar_svc", "service_port": 89}]
3393- services = hooks.get_all_services()
3394- self.assertEqual(baseline, services)
3395
3396=== added directory 'hooks/tests'
3397=== added file 'hooks/tests/__init__.py'
3398=== added file 'hooks/tests/test_config_changed_hooks.py'
3399--- hooks/tests/test_config_changed_hooks.py 1970-01-01 00:00:00 +0000
3400+++ hooks/tests/test_config_changed_hooks.py 2013-10-10 22:34:35 +0000
3401@@ -0,0 +1,120 @@
3402+import sys
3403+
3404+from testtools import TestCase
3405+from mock import patch
3406+
3407+import hooks
3408+from utils_for_tests import patch_open
3409+
3410+
3411+class ConfigChangedTest(TestCase):
3412+
3413+ def setUp(self):
3414+ super(ConfigChangedTest, self).setUp()
3415+ self.config_get = self.patch_hook("config_get")
3416+ self.get_service_ports = self.patch_hook("get_service_ports")
3417+ self.get_listen_stanzas = self.patch_hook("get_listen_stanzas")
3418+ self.create_haproxy_globals = self.patch_hook(
3419+ "create_haproxy_globals")
3420+ self.create_haproxy_defaults = self.patch_hook(
3421+ "create_haproxy_defaults")
3422+ self.remove_services = self.patch_hook("remove_services")
3423+ self.create_services = self.patch_hook("create_services")
3424+ self.load_services = self.patch_hook("load_services")
3425+ self.construct_haproxy_config = self.patch_hook(
3426+ "construct_haproxy_config")
3427+ self.service_haproxy = self.patch_hook(
3428+ "service_haproxy")
3429+ self.update_sysctl = self.patch_hook(
3430+ "update_sysctl")
3431+ self.notify_website = self.patch_hook("notify_website")
3432+ self.notify_peer = self.patch_hook("notify_peer")
3433+ self.log = self.patch_hook("log")
3434+ sys_exit = patch.object(sys, "exit")
3435+ self.sys_exit = sys_exit.start()
3436+ self.addCleanup(sys_exit.stop)
3437+
3438+ def patch_hook(self, hook_name):
3439+ mock_controller = patch.object(hooks, hook_name)
3440+ mock = mock_controller.start()
3441+ self.addCleanup(mock_controller.stop)
3442+ return mock
3443+
3444+ def test_config_changed_notify_website_changed_stanzas(self):
3445+ self.service_haproxy.return_value = True
3446+ self.get_listen_stanzas.side_effect = (
3447+ (('foo.internal', '1.2.3.4', 123),),
3448+ (('foo.internal', '1.2.3.4', 123),
3449+ ('bar.internal', '1.2.3.5', 234),))
3450+
3451+ hooks.config_changed()
3452+
3453+ self.notify_website.assert_called_once_with()
3454+ self.notify_peer.assert_called_once_with()
3455+
3456+ def test_config_changed_no_notify_website_not_changed(self):
3457+ self.service_haproxy.return_value = True
3458+ self.get_listen_stanzas.side_effect = (
3459+ (('foo.internal', '1.2.3.4', 123),),
3460+ (('foo.internal', '1.2.3.4', 123),))
3461+
3462+ hooks.config_changed()
3463+
3464+ self.notify_website.assert_not_called()
3465+ self.notify_peer.assert_not_called()
3466+
3467+ def test_config_changed_no_notify_website_failed_check(self):
3468+ self.service_haproxy.return_value = False
3469+ self.get_listen_stanzas.side_effect = (
3470+ (('foo.internal', '1.2.3.4', 123),),
3471+ (('foo.internal', '1.2.3.4', 123),
3472+ ('bar.internal', '1.2.3.5', 234),))
3473+
3474+ hooks.config_changed()
3475+
3476+ self.notify_website.assert_not_called()
3477+ self.notify_peer.assert_not_called()
3478+ self.log.assert_called_once_with(
3479+ "HAProxy configuration check failed, exiting.")
3480+ self.sys_exit.assert_called_once_with(1)
3481+
3482+
3483+class HelpersTest(TestCase):
3484+ def test_constructs_haproxy_config(self):
3485+ with patch_open() as (mock_open, mock_file):
3486+ hooks.construct_haproxy_config('foo-globals', 'foo-defaults',
3487+ 'foo-monitoring', 'foo-services')
3488+
3489+ mock_file.write.assert_called_with(
3490+ 'foo-globals\n\n'
3491+ 'foo-defaults\n\n'
3492+ 'foo-monitoring\n\n'
3493+ 'foo-services\n\n'
3494+ )
3495+ mock_open.assert_called_with(hooks.default_haproxy_config, 'w')
3496+
3497+ def test_constructs_nothing_if_globals_is_none(self):
3498+ with patch_open() as (mock_open, mock_file):
3499+ hooks.construct_haproxy_config(None, 'foo-defaults',
3500+ 'foo-monitoring', 'foo-services')
3501+
3502+ self.assertFalse(mock_open.called)
3503+ self.assertFalse(mock_file.called)
3504+
3505+ def test_constructs_nothing_if_defaults_is_none(self):
3506+ with patch_open() as (mock_open, mock_file):
3507+ hooks.construct_haproxy_config('foo-globals', None,
3508+ 'foo-monitoring', 'foo-services')
3509+
3510+ self.assertFalse(mock_open.called)
3511+ self.assertFalse(mock_file.called)
3512+
3513+ def test_constructs_haproxy_config_without_optionals(self):
3514+ with patch_open() as (mock_open, mock_file):
3515+ hooks.construct_haproxy_config('foo-globals', 'foo-defaults')
3516+
3517+ mock_file.write.assert_called_with(
3518+ 'foo-globals\n\n'
3519+ 'foo-defaults\n\n'
3520+ )
3521+ mock_open.assert_called_with(hooks.default_haproxy_config, 'w')
3522
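Taken together, the ConfigChangedTest cases above pin down the notification behaviour: the listen stanzas are read before and after the configuration is rebuilt, the website and peer relations are only notified when that set actually changed, and a failed configuration check logs and exits. A rough sketch of that flow, with the collaborators passed in as arguments so the snippet stands alone (the real hook is hooks.config_changed() and may differ in ordering and arguments):

import sys

def config_changed_sketch(get_listen_stanzas, rebuild_config,
                          service_haproxy, notify_website, notify_peer, log):
    # Stanzas before the rewrite (the first get_listen_stanzas() return
    # value in the tests above).
    before = get_listen_stanzas()
    # Stands in for remove_services()/create_services()/construct_haproxy_config().
    rebuild_config()
    if not service_haproxy("check"):
        log("HAProxy configuration check failed, exiting.")
        sys.exit(1)
    # Stanzas after the rewrite; only notify upstream relations on a real change.
    after = get_listen_stanzas()
    if before != after:
        notify_website()
        notify_peer()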
3523=== added file 'hooks/tests/test_helpers.py'
3524--- hooks/tests/test_helpers.py 1970-01-01 00:00:00 +0000
3525+++ hooks/tests/test_helpers.py 2013-10-10 22:34:35 +0000
3526@@ -0,0 +1,745 @@
3527+import os
3528+
3529+from contextlib import contextmanager
3530+from StringIO import StringIO
3531+
3532+from testtools import TestCase
3533+from mock import patch, call, MagicMock
3534+
3535+import hooks
3536+from utils_for_tests import patch_open
3537+
3538+
3539+class HelpersTest(TestCase):
3540+
3541+ @patch('hooks.config_get')
3542+ def test_creates_haproxy_globals(self, config_get):
3543+ config_get.return_value = {
3544+ 'global_log': 'foo-log, bar-log',
3545+ 'global_maxconn': 123,
3546+ 'global_user': 'foo-user',
3547+ 'global_group': 'foo-group',
3548+ 'global_spread_checks': 234,
3549+ 'global_debug': False,
3550+ 'global_quiet': False,
3551+ }
3552+ result = hooks.create_haproxy_globals()
3553+
3554+ expected = '\n'.join([
3555+ 'global',
3556+ ' log foo-log',
3557+ ' log bar-log',
3558+ ' maxconn 123',
3559+ ' user foo-user',
3560+ ' group foo-group',
3561+ ' spread-checks 234',
3562+ ])
3563+ self.assertEqual(result, expected)
3564+
3565+ @patch('hooks.config_get')
3566+ def test_creates_haproxy_globals_quietly_with_debug(self, config_get):
3567+ config_get.return_value = {
3568+ 'global_log': 'foo-log, bar-log',
3569+ 'global_maxconn': 123,
3570+ 'global_user': 'foo-user',
3571+ 'global_group': 'foo-group',
3572+ 'global_spread_checks': 234,
3573+ 'global_debug': True,
3574+ 'global_quiet': True,
3575+ }
3576+ result = hooks.create_haproxy_globals()
3577+
3578+ expected = '\n'.join([
3579+ 'global',
3580+ ' log foo-log',
3581+ ' log bar-log',
3582+ ' maxconn 123',
3583+ ' user foo-user',
3584+ ' group foo-group',
3585+ ' debug',
3586+ ' quiet',
3587+ ' spread-checks 234',
3588+ ])
3589+ self.assertEqual(result, expected)
3590+
3591+ def test_enables_haproxy(self):
3592+ mock_file = MagicMock()
3593+
3594+ @contextmanager
3595+ def mock_open(*args, **kwargs):
3596+ yield mock_file
3597+
3598+ initial_content = """
3599+ foo
3600+ ENABLED=0
3601+ bar
3602+ """
3603+ ending_content = initial_content.replace('ENABLED=0', 'ENABLED=1')
3604+
3605+ with patch('__builtin__.open', mock_open):
3606+ mock_file.read.return_value = initial_content
3607+
3608+ hooks.enable_haproxy()
3609+
3610+ mock_file.write.assert_called_with(ending_content)
3611+
3612+ @patch('hooks.config_get')
3613+ def test_creates_haproxy_defaults(self, config_get):
3614+ config_get.return_value = {
3615+ 'default_options': 'foo-option, bar-option',
3616+ 'default_timeouts': '234, 456',
3617+ 'default_log': 'foo-log',
3618+ 'default_mode': 'foo-mode',
3619+ 'default_retries': 321,
3620+ }
3621+ result = hooks.create_haproxy_defaults()
3622+
3623+ expected = '\n'.join([
3624+ 'defaults',
3625+ ' log foo-log',
3626+ ' mode foo-mode',
3627+ ' option foo-option',
3628+ ' option bar-option',
3629+ ' retries 321',
3630+ ' timeout 234',
3631+ ' timeout 456',
3632+ ])
3633+ self.assertEqual(result, expected)
3634+
3635+ def test_returns_none_when_haproxy_config_doesnt_exist(self):
3636+ self.assertIsNone(hooks.load_haproxy_config('/some/foo/file'))
3637+
3638+ @patch('__builtin__.open')
3639+ @patch('os.path.isfile')
3640+ def test_loads_haproxy_config_file(self, isfile, mock_open):
3641+ content = 'some content'
3642+ config_file = '/etc/haproxy/haproxy.cfg'
3643+ file_object = StringIO(content)
3644+ isfile.return_value = True
3645+ mock_open.return_value = file_object
3646+
3647+ result = hooks.load_haproxy_config()
3648+
3649+ self.assertEqual(result, content)
3650+ isfile.assert_called_with(config_file)
3651+ mock_open.assert_called_with(config_file)
3652+
3653+ @patch('hooks.load_haproxy_config')
3654+ def test_gets_monitoring_password(self, load_haproxy_config):
3655+ load_haproxy_config.return_value = 'stats auth foo:bar'
3656+
3657+ password = hooks.get_monitoring_password()
3658+
3659+ self.assertEqual(password, 'bar')
3660+
3661+ @patch('hooks.load_haproxy_config')
3662+ def test_gets_none_if_different_pattern(self, load_haproxy_config):
3663+ load_haproxy_config.return_value = 'some other pattern'
3664+
3665+ password = hooks.get_monitoring_password()
3666+
3667+ self.assertIsNone(password)
3668+
3669+ def test_gets_none_pass_if_config_doesnt_exist(self):
3670+ password = hooks.get_monitoring_password('/some/foo/path')
3671+
3672+ self.assertIsNone(password)
3673+
3674+ @patch('hooks.load_haproxy_config')
3675+ def test_gets_service_ports(self, load_haproxy_config):
3676+ load_haproxy_config.return_value = '''
3677+ listen foo.internal 1.2.3.4:123
3678+ listen bar.internal 1.2.3.5:234
3679+ '''
3680+
3681+ ports = hooks.get_service_ports()
3682+
3683+ self.assertEqual(ports, (123, 234))
3684+
3685+ @patch('hooks.load_haproxy_config')
3686+ def test_get_listen_stanzas(self, load_haproxy_config):
3687+ load_haproxy_config.return_value = '''
3688+ listen foo.internal 1.2.3.4:123
3689+ listen bar.internal 1.2.3.5:234
3690+ '''
3691+
3692+ stanzas = hooks.get_listen_stanzas()
3693+
3694+ self.assertEqual((('foo.internal', '1.2.3.4', 123),
3695+ ('bar.internal', '1.2.3.5', 234)),
3696+ stanzas)
3697+
3698+ @patch('hooks.load_haproxy_config')
3699+ def test_get_listen_stanzas_with_frontend(self, load_haproxy_config):
3700+ load_haproxy_config.return_value = '''
3701+ frontend foo-2-123
3702+ bind 1.2.3.4:123
3703+ default_backend foo.internal
3704+ frontend foo-2-234
3705+ bind 1.2.3.5:234
3706+ default_backend bar.internal
3707+ '''
3708+
3709+ stanzas = hooks.get_listen_stanzas()
3710+
3711+ self.assertEqual((('foo.internal', '1.2.3.4', 123),
3712+ ('bar.internal', '1.2.3.5', 234)),
3713+ stanzas)
3714+
3715+ @patch('hooks.load_haproxy_config')
3716+ def test_get_empty_tuple_when_no_stanzas(self, load_haproxy_config):
3717+ load_haproxy_config.return_value = '''
3718+ '''
3719+
3720+ stanzas = hooks.get_listen_stanzas()
3721+
3722+ self.assertEqual((), stanzas)
3723+
3724+ @patch('hooks.load_haproxy_config')
3725+ def test_get_listen_stanzas_none_configured(self, load_haproxy_config):
3726+ load_haproxy_config.return_value = ""
3727+
3728+ stanzas = hooks.get_listen_stanzas()
3729+
3730+ self.assertEqual((), stanzas)
3731+
3732+ def test_gets_no_ports_if_config_doesnt_exist(self):
3733+ ports = hooks.get_service_ports('/some/foo/path')
3734+ self.assertEqual((), ports)
3735+
3736+ @patch('hooks.open_port')
3737+ @patch('hooks.close_port')
3738+ def test_updates_service_ports(self, close_port, open_port):
3739+ old_service_ports = [123, 234, 345]
3740+ new_service_ports = [345, 456, 567]
3741+
3742+ hooks.update_service_ports(old_service_ports, new_service_ports)
3743+
3744+ self.assertEqual(close_port.mock_calls, [call(123), call(234)])
3745+ self.assertEqual(open_port.mock_calls,
3746+ [call(345), call(456), call(567)])
3747+
3748+ @patch('hooks.open_port')
3749+ @patch('hooks.close_port')
3750+ def test_updates_none_if_service_ports_not_provided(self, close_port,
3751+ open_port):
3752+ hooks.update_service_ports()
3753+
3754+ self.assertFalse(close_port.called)
3755+ self.assertFalse(open_port.called)
3756+
3757+ @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"})
3758+ def test_creates_a_listen_stanza(self):
3759+ service_name = 'some-name'
3760+ service_ip = '10.11.12.13'
3761+ service_port = 1234
3762+ service_options = ('foo', 'bar')
3763+ server_entries = [
3764+ ('name-1', 'ip-1', 'port-1', ('foo1', 'bar1')),
3765+ ('name-2', 'ip-2', 'port-2', ('foo2', 'bar2')),
3766+ ]
3767+
3768+ result = hooks.create_listen_stanza(service_name, service_ip,
3769+ service_port, service_options,
3770+ server_entries)
3771+
3772+ expected = '\n'.join((
3773+ 'frontend haproxy-2-1234',
3774+ ' bind 10.11.12.13:1234',
3775+ ' default_backend some-name',
3776+ '',
3777+ 'backend some-name',
3778+ ' foo',
3779+ ' bar',
3780+ ' server name-1 ip-1:port-1 foo1 bar1',
3781+ ' server name-2 ip-2:port-2 foo2 bar2',
3782+ ))
3783+
3784+ self.assertEqual(expected, result)
3785+
3786+ @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"})
3787+ def test_create_listen_stanza_filters_frontend_options(self):
3788+ service_name = 'some-name'
3789+ service_ip = '10.11.12.13'
3790+ service_port = 1234
3791+ service_options = ('capture request header X-Man',
3792+ 'retries 3', 'balance uri', 'option logasap')
3793+ server_entries = [
3794+ ('name-1', 'ip-1', 'port-1', ('foo1', 'bar1')),
3795+ ('name-2', 'ip-2', 'port-2', ('foo2', 'bar2')),
3796+ ]
3797+
3798+ result = hooks.create_listen_stanza(service_name, service_ip,
3799+ service_port, service_options,
3800+ server_entries)
3801+
3802+ expected = '\n'.join((
3803+ 'frontend haproxy-2-1234',
3804+ ' bind 10.11.12.13:1234',
3805+ ' default_backend some-name',
3806+ ' capture request header X-Man',
3807+ ' option logasap',
3808+ '',
3809+ 'backend some-name',
3810+ ' retries 3',
3811+ ' balance uri',
3812+ ' server name-1 ip-1:port-1 foo1 bar1',
3813+ ' server name-2 ip-2:port-2 foo2 bar2',
3814+ ))
3815+
3816+ self.assertEqual(expected, result)
3817+
3818+ @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"})
3819+ def test_creates_a_listen_stanza_with_tuple_entries(self):
3820+ service_name = 'some-name'
3821+ service_ip = '10.11.12.13'
3822+ service_port = 1234
3823+ service_options = ('foo', 'bar')
3824+ server_entries = (
3825+ ('name-1', 'ip-1', 'port-1', ('foo1', 'bar1')),
3826+ ('name-2', 'ip-2', 'port-2', ('foo2', 'bar2')),
3827+ )
3828+
3829+ result = hooks.create_listen_stanza(service_name, service_ip,
3830+ service_port, service_options,
3831+ server_entries)
3832+
3833+ expected = '\n'.join((
3834+ 'frontend haproxy-2-1234',
3835+ ' bind 10.11.12.13:1234',
3836+ ' default_backend some-name',
3837+ '',
3838+ 'backend some-name',
3839+ ' foo',
3840+ ' bar',
3841+ ' server name-1 ip-1:port-1 foo1 bar1',
3842+ ' server name-2 ip-2:port-2 foo2 bar2',
3843+ ))
3844+
3845+ self.assertEqual(expected, result)
3846+
3847+ def test_doesnt_create_listen_stanza_if_args_not_provided(self):
3848+ self.assertIsNone(hooks.create_listen_stanza())
3849+
3850+ @patch('hooks.create_listen_stanza')
3851+ @patch('hooks.config_get')
3852+ @patch('hooks.get_monitoring_password')
3853+ def test_creates_a_monitoring_stanza(self, get_monitoring_password,
3854+ config_get, create_listen_stanza):
3855+ config_get.return_value = {
3856+ 'enable_monitoring': True,
3857+ 'monitoring_allowed_cidr': 'some-cidr',
3858+ 'monitoring_password': 'some-pass',
3859+ 'monitoring_username': 'some-user',
3860+ 'monitoring_stats_refresh': 123,
3861+ 'monitoring_port': 1234,
3862+ }
3863+ create_listen_stanza.return_value = 'some result'
3864+
3865+ result = hooks.create_monitoring_stanza(service_name="some-service")
3866+
3867+ self.assertEqual('some result', result)
3868+ get_monitoring_password.assert_called_with()
3869+ create_listen_stanza.assert_called_with(
3870+ 'some-service', '0.0.0.0', 1234, [
3871+ 'mode http',
3872+ 'acl allowed_cidr src some-cidr',
3873+ 'block unless allowed_cidr',
3874+ 'stats enable',
3875+ 'stats uri /',
3876+ 'stats realm Haproxy\\ Statistics',
3877+ 'stats auth some-user:some-pass',
3878+ 'stats refresh 123',
3879+ ])
3880+
3881+ @patch('hooks.create_listen_stanza')
3882+ @patch('hooks.config_get')
3883+ @patch('hooks.get_monitoring_password')
3884+ def test_doesnt_create_a_monitoring_stanza_if_monitoring_disabled(
3885+ self, get_monitoring_password, config_get, create_listen_stanza):
3886+ config_get.return_value = {
3887+ 'enable_monitoring': False,
3888+ }
3889+
3890+ result = hooks.create_monitoring_stanza(service_name="some-service")
3891+
3892+ self.assertIsNone(result)
3893+ self.assertFalse(get_monitoring_password.called)
3894+ self.assertFalse(create_listen_stanza.called)
3895+
3896+ @patch('hooks.create_listen_stanza')
3897+ @patch('hooks.config_get')
3898+ @patch('hooks.get_monitoring_password')
3899+ def test_uses_monitoring_password_for_stanza(self, get_monitoring_password,
3900+ config_get,
3901+ create_listen_stanza):
3902+ config_get.return_value = {
3903+ 'enable_monitoring': True,
3904+ 'monitoring_allowed_cidr': 'some-cidr',
3905+ 'monitoring_password': 'changeme',
3906+ 'monitoring_username': 'some-user',
3907+ 'monitoring_stats_refresh': 123,
3908+ 'monitoring_port': 1234,
3909+ }
3910+ create_listen_stanza.return_value = 'some result'
3911+ get_monitoring_password.return_value = 'some-monitoring-pass'
3912+
3913+ hooks.create_monitoring_stanza(service_name="some-service")
3914+
3915+ get_monitoring_password.assert_called_with()
3916+ create_listen_stanza.assert_called_with(
3917+ 'some-service', '0.0.0.0', 1234, [
3918+ 'mode http',
3919+ 'acl allowed_cidr src some-cidr',
3920+ 'block unless allowed_cidr',
3921+ 'stats enable',
3922+ 'stats uri /',
3923+ 'stats realm Haproxy\\ Statistics',
3924+ 'stats auth some-user:some-monitoring-pass',
3925+ 'stats refresh 123',
3926+ ])
3927+
3928+ @patch('hooks.pwgen')
3929+ @patch('hooks.create_listen_stanza')
3930+ @patch('hooks.config_get')
3931+ @patch('hooks.get_monitoring_password')
3932+ def test_uses_new_password_for_stanza(self, get_monitoring_password,
3933+ config_get, create_listen_stanza,
3934+ pwgen):
3935+ config_get.return_value = {
3936+ 'enable_monitoring': True,
3937+ 'monitoring_allowed_cidr': 'some-cidr',
3938+ 'monitoring_password': 'changeme',
3939+ 'monitoring_username': 'some-user',
3940+ 'monitoring_stats_refresh': 123,
3941+ 'monitoring_port': 1234,
3942+ }
3943+ create_listen_stanza.return_value = 'some result'
3944+ get_monitoring_password.return_value = None
3945+ pwgen.return_value = 'some-new-pass'
3946+
3947+ hooks.create_monitoring_stanza(service_name="some-service")
3948+
3949+ get_monitoring_password.assert_called_with()
3950+ create_listen_stanza.assert_called_with(
3951+ 'some-service', '0.0.0.0', 1234, [
3952+ 'mode http',
3953+ 'acl allowed_cidr src some-cidr',
3954+ 'block unless allowed_cidr',
3955+ 'stats enable',
3956+ 'stats uri /',
3957+ 'stats realm Haproxy\\ Statistics',
3958+ 'stats auth some-user:some-new-pass',
3959+ 'stats refresh 123',
3960+ ])
3961+
3962+ @patch('hooks.is_proxy')
3963+ @patch('hooks.config_get')
3964+ @patch('yaml.safe_load')
3965+ def test_gets_config_services(self, safe_load, config_get, is_proxy):
3966+ config_get.return_value = {
3967+ 'services': 'some-services',
3968+ }
3969+ safe_load.return_value = [
3970+ {
3971+ 'service_name': 'foo',
3972+ 'service_options': {
3973+ 'foo-1': 123,
3974+ },
3975+ 'service_options': ['foo1', 'foo2'],
3976+ 'server_options': ['baz1', 'baz2'],
3977+ },
3978+ {
3979+ 'service_name': 'bar',
3980+ 'service_options': ['bar1', 'bar2'],
3981+ 'server_options': ['baz1', 'baz2'],
3982+ },
3983+ ]
3984+ is_proxy.return_value = False
3985+
3986+ result = hooks.get_config_services()
3987+ expected = {
3988+ None: {
3989+ 'service_name': 'foo',
3990+ },
3991+ 'foo': {
3992+ 'service_name': 'foo',
3993+ 'service_options': ['foo1', 'foo2'],
3994+ 'server_options': ['baz1', 'baz2'],
3995+ },
3996+ 'bar': {
3997+ 'service_name': 'bar',
3998+ 'service_options': ['bar1', 'bar2'],
3999+ 'server_options': ['baz1', 'baz2'],
4000+ },
4001+ }
4002+
4003+ self.assertEqual(expected, result)
4004+
4005+ @patch('hooks.is_proxy')
4006+ @patch('hooks.config_get')
4007+ @patch('yaml.safe_load')
4008+ def test_gets_config_services_with_forward_option(self, safe_load,
4009+ config_get, is_proxy):
4010+ config_get.return_value = {
4011+ 'services': 'some-services',
4012+ }
4013+ safe_load.return_value = [
4014+ {
4015+ 'service_name': 'foo',
4016+ 'service_options': {
4017+ 'foo-1': 123,
4018+ },
4019+ 'service_options': ['foo1', 'foo2'],
4020+ 'server_options': ['baz1', 'baz2'],
4021+ },
4022+ {
4023+ 'service_name': 'bar',
4024+ 'service_options': ['bar1', 'bar2'],
4025+ 'server_options': ['baz1', 'baz2'],
4026+ },
4027+ ]
4028+ is_proxy.return_value = True
4029+
4030+ result = hooks.get_config_services()
4031+ expected = {
4032+ None: {
4033+ 'service_name': 'foo',
4034+ },
4035+ 'foo': {
4036+ 'service_name': 'foo',
4037+ 'service_options': ['foo1', 'foo2', 'option forwardfor'],
4038+ 'server_options': ['baz1', 'baz2'],
4039+ },
4040+ 'bar': {
4041+ 'service_name': 'bar',
4042+ 'service_options': ['bar1', 'bar2', 'option forwardfor'],
4043+ 'server_options': ['baz1', 'baz2'],
4044+ },
4045+ }
4046+
4047+ self.assertEqual(expected, result)
4048+
4049+ @patch('hooks.is_proxy')
4050+ @patch('hooks.config_get')
4051+ @patch('yaml.safe_load')
4052+ def test_gets_config_services_with_options_string(self, safe_load,
4053+ config_get, is_proxy):
4054+ config_get.return_value = {
4055+ 'services': 'some-services',
4056+ }
4057+ safe_load.return_value = [
4058+ {
4059+ 'service_name': 'foo',
4060+ 'service_options': {
4061+ 'foo-1': 123,
4062+ },
4063+ 'service_options': ['foo1', 'foo2'],
4064+ 'server_options': 'baz1 baz2',
4065+ },
4066+ {
4067+ 'service_name': 'bar',
4068+ 'service_options': ['bar1', 'bar2'],
4069+ 'server_options': 'baz1 baz2',
4070+ },
4071+ ]
4072+ is_proxy.return_value = False
4073+
4074+ result = hooks.get_config_services()
4075+ expected = {
4076+ None: {
4077+ 'service_name': 'foo',
4078+ },
4079+ 'foo': {
4080+ 'service_name': 'foo',
4081+ 'service_options': ['foo1', 'foo2'],
4082+ 'server_options': ['baz1', 'baz2'],
4083+ },
4084+ 'bar': {
4085+ 'service_name': 'bar',
4086+ 'service_options': ['bar1', 'bar2'],
4087+ 'server_options': ['baz1', 'baz2'],
4088+ },
4089+ }
4090+
4091+ self.assertEqual(expected, result)
4092+
4093+ @patch('hooks.get_config_services')
4094+ def test_gets_a_service_config(self, get_config_services):
4095+ get_config_services.return_value = {
4096+ 'foo': 'bar',
4097+ }
4098+
4099+ self.assertEqual('bar', hooks.get_config_service('foo'))
4100+
4101+ @patch('hooks.get_config_services')
4102+ def test_gets_a_service_config_from_none(self, get_config_services):
4103+ get_config_services.return_value = {
4104+ None: 'bar',
4105+ }
4106+
4107+ self.assertEqual('bar', hooks.get_config_service())
4108+
4109+ @patch('hooks.get_config_services')
4110+ def test_gets_a_service_config_as_none(self, get_config_services):
4111+ get_config_services.return_value = {
4112+ 'baz': 'bar',
4113+ }
4114+
4115+ self.assertIsNone(hooks.get_config_service())
4116+
4117+ @patch('os.path.exists')
4118+ def test_mark_as_proxy_when_path_exists(self, path_exists):
4119+ path_exists.return_value = True
4120+
4121+ self.assertTrue(hooks.is_proxy('foo'))
4122+ path_exists.assert_called_with('/var/run/haproxy/foo.is.proxy')
4123+
4124+ @patch('os.path.exists')
4125+ def test_doesnt_mark_as_proxy_when_path_doesnt_exist(self, path_exists):
4126+ path_exists.return_value = False
4127+
4128+ self.assertFalse(hooks.is_proxy('foo'))
4129+ path_exists.assert_called_with('/var/run/haproxy/foo.is.proxy')
4130+
4131+ @patch('os.path.exists')
4132+ def test_loads_services_by_name(self, path_exists):
4133+ with patch_open() as (mock_open, mock_file):
4134+ path_exists.return_value = True
4135+ mock_file.read.return_value = 'some content'
4136+
4137+ result = hooks.load_services('some-service')
4138+
4139+ self.assertEqual('some content', result)
4140+ mock_open.assert_called_with(
4141+ '/var/run/haproxy/some-service.service')
4142+ mock_file.read.assert_called_with()
4143+
4144+ @patch('os.path.exists')
4145+ def test_loads_no_service_if_path_doesnt_exist(self, path_exists):
4146+ path_exists.return_value = False
4147+
4148+ result = hooks.load_services('some-service')
4149+
4150+ self.assertIsNone(result)
4151+
4152+ @patch('glob.glob')
4153+ def test_loads_services_within_dir_if_no_name_provided(self, glob):
4154+ with patch_open() as (mock_open, mock_file):
4155+ mock_file.read.side_effect = ['foo', 'bar']
4156+ glob.return_value = ['foo-file', 'bar-file']
4157+
4158+ result = hooks.load_services()
4159+
4160+ self.assertEqual('foo\n\nbar\n\n', result)
4161+ mock_open.assert_has_calls([call('foo-file'), call('bar-file')])
4162+ mock_file.read.assert_has_calls([call(), call()])
4163+
4164+ @patch('hooks.os')
4165+ def test_removes_services_by_name(self, os_):
4166+ service_path = '/var/run/haproxy/some-service.service'
4167+ os_.path.exists.return_value = True
4168+
4169+ self.assertTrue(hooks.remove_services('some-service'))
4170+
4171+ os_.path.exists.assert_called_with(service_path)
4172+ os_.remove.assert_called_with(service_path)
4173+
4174+ @patch('hooks.os')
4175+ def test_removes_nothing_if_service_doesnt_exist(self, os_):
4176+ service_path = '/var/run/haproxy/some-service.service'
4177+ os_.path.exists.return_value = False
4178+
4179+ self.assertTrue(hooks.remove_services('some-service'))
4180+
4181+ os_.path.exists.assert_called_with(service_path)
4182+
4183+ @patch('hooks.os')
4184+ @patch('glob.glob')
4185+ def test_removes_all_services_in_dir_if_name_not_provided(self, glob, os_):
4186+ glob.return_value = ['foo', 'bar']
4187+
4188+ self.assertTrue(hooks.remove_services())
4189+
4190+ os_.remove.assert_has_calls([call('foo'), call('bar')])
4191+
4192+ @patch('hooks.os')
4193+ @patch('hooks.log')
4194+ def test_logs_error_when_failing_to_remove_service_by_name(self, log, os_):
4195+ error = Exception('some error')
4196+ os_.path.exists.return_value = True
4197+ os_.remove.side_effect = error
4198+
4199+ self.assertFalse(hooks.remove_services('some-service'))
4200+
4201+ log.assert_called_with(str(error))
4202+
4203+ @patch('hooks.os')
4204+ @patch('hooks.log')
4205+ @patch('glob.glob')
4206+ def test_logs_error_when_failing_to_remove_services(self, glob, log, os_):
4207+ errors = [Exception('some error 1'), Exception('some error 2')]
4208+ os_.remove.side_effect = errors
4209+ glob.return_value = ['foo', 'bar']
4210+
4211+ self.assertTrue(hooks.remove_services())
4212+
4213+ log.assert_has_calls([
4214+ call(str(errors[0])),
4215+ call(str(errors[1])),
4216+ ])
4217+
4218+ @patch('subprocess.call')
4219+ def test_calls_check_action(self, mock_call):
4220+ mock_call.return_value = 0
4221+
4222+ result = hooks.service_haproxy('check')
4223+
4224+ self.assertTrue(result)
4225+ mock_call.assert_called_with(['/usr/sbin/haproxy', '-f',
4226+ hooks.default_haproxy_config, '-c'])
4227+
4228+ @patch('subprocess.call')
4229+ def test_calls_check_action_with_different_config(self, mock_call):
4230+ mock_call.return_value = 0
4231+
4232+ result = hooks.service_haproxy('check', 'some-config')
4233+
4234+ self.assertTrue(result)
4235+ mock_call.assert_called_with(['/usr/sbin/haproxy', '-f',
4236+ 'some-config', '-c'])
4237+
4238+ @patch('subprocess.call')
4239+ def test_fails_to_check_config(self, mock_call):
4240+ mock_call.return_value = 1
4241+
4242+ result = hooks.service_haproxy('check')
4243+
4244+ self.assertFalse(result)
4245+
4246+ @patch('subprocess.call')
4247+ def test_calls_different_actions(self, mock_call):
4248+ mock_call.return_value = 0
4249+
4250+ result = hooks.service_haproxy('foo')
4251+
4252+ self.assertTrue(result)
4253+ mock_call.assert_called_with(['service', 'haproxy', 'foo'])
4254+
4255+ @patch('subprocess.call')
4256+ def test_fails_to_call_different_actions(self, mock_call):
4257+ mock_call.return_value = 1
4258+
4259+ result = hooks.service_haproxy('foo')
4260+
4261+ self.assertFalse(result)
4262+
4263+ @patch('subprocess.call')
4264+ def test_doesnt_call_actions_if_action_not_provided(self, mock_call):
4265+ self.assertIsNone(hooks.service_haproxy())
4266+ self.assertFalse(mock_call.called)
4267+
4268+ @patch('subprocess.call')
4269+ def test_doesnt_call_actions_if_config_is_none(self, mock_call):
4270+ self.assertIsNone(hooks.service_haproxy('foo', None))
4271+ self.assertFalse(mock_call.called)
4272
4273=== added file 'hooks/tests/test_nrpe_hooks.py'
4274--- hooks/tests/test_nrpe_hooks.py 1970-01-01 00:00:00 +0000
4275+++ hooks/tests/test_nrpe_hooks.py 2013-10-10 22:34:35 +0000
4276@@ -0,0 +1,24 @@
4277+from testtools import TestCase
4278+from mock import call, patch, MagicMock
4279+
4280+import hooks
4281+
4282+
4283+class NRPEHooksTest(TestCase):
4284+
4285+ @patch('hooks.install_nrpe_scripts')
4286+ @patch('charmhelpers.contrib.charmsupport.nrpe.NRPE')
4287+ def test_update_nrpe_config(self, nrpe, install_nrpe_scripts):
4288+ nrpe_compat = MagicMock()
4289+ nrpe_compat.checks = [MagicMock(shortname="haproxy"),
4290+ MagicMock(shortname="haproxy_queue")]
4291+ nrpe.return_value = nrpe_compat
4292+
4293+ hooks.update_nrpe_config()
4294+
4295+ self.assertEqual(
4296+ nrpe_compat.mock_calls,
4297+ [call.add_check('haproxy', 'Check HAProxy', 'check_haproxy.sh'),
4298+ call.add_check('haproxy_queue', 'Check HAProxy queue depth',
4299+ 'check_haproxy_queue_depth.sh'),
4300+ call.write()])
4301
4302=== added file 'hooks/tests/test_peer_hooks.py'
4303--- hooks/tests/test_peer_hooks.py 1970-01-01 00:00:00 +0000
4304+++ hooks/tests/test_peer_hooks.py 2013-10-10 22:34:35 +0000
4305@@ -0,0 +1,200 @@
4306+import os
4307+import yaml
4308+
4309+from testtools import TestCase
4310+from mock import patch
4311+
4312+import hooks
4313+from utils_for_tests import patch_open
4314+
4315+
4316+class PeerRelationTest(TestCase):
4317+
4318+ def setUp(self):
4319+ super(PeerRelationTest, self).setUp()
4320+
4321+ self.relations_of_type = self.patch_hook("relations_of_type")
4322+ self.log = self.patch_hook("log")
4323+ self.unit_get = self.patch_hook("unit_get")
4324+
4325+ def patch_hook(self, hook_name):
4326+ mock_controller = patch.object(hooks, hook_name)
4327+ mock = mock_controller.start()
4328+ self.addCleanup(mock_controller.stop)
4329+ return mock
4330+
4331+ @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"})
4332+ def test_with_peer_same_services(self):
4333+ self.unit_get.return_value = "1.2.4.5"
4334+ self.relations_of_type.return_value = [
4335+ {"__unit__": "haproxy/1",
4336+ "hostname": "haproxy-1",
4337+ "private-address": "1.2.4.4",
4338+ "all_services": yaml.dump([
4339+ {"service_name": "foo_service",
4340+ "service_host": "0.0.0.0",
4341+ "service_options": ["balance leastconn"],
4342+ "service_port": 4242},
4343+ ])
4344+ }
4345+ ]
4346+
4347+ services_dict = {
4348+ "foo_service": {
4349+ "service_name": "foo_service",
4350+ "service_host": "0.0.0.0",
4351+ "service_port": 4242,
4352+ "service_options": ["balance leastconn"],
4353+ "server_options": ["maxconn 4"],
4354+ "servers": [("backend_1__8080", "1.2.3.4",
4355+ 8080, ["maxconn 4"])],
4356+ },
4357+ }
4358+
4359+ expected = {
4360+ "foo_service": {
4361+ "service_name": "foo_service",
4362+ "service_host": "0.0.0.0",
4363+ "service_port": 4242,
4364+ "service_options": ["balance leastconn",
4365+ "mode tcp",
4366+ "option tcplog"],
4367+ "servers": [
4368+ ("haproxy-1", "1.2.4.4", 4243, ["check"]),
4369+ ("haproxy-2", "1.2.4.5", 4243, ["check", "backup"])
4370+ ],
4371+ },
4372+ "foo_service_be": {
4373+ "service_name": "foo_service_be",
4374+ "service_host": "0.0.0.0",
4375+ "service_port": 4243,
4376+ "service_options": ["balance leastconn"],
4377+ "server_options": ["maxconn 4"],
4378+ "servers": [("backend_1__8080", "1.2.3.4",
4379+ 8080, ["maxconn 4"])],
4380+ },
4381+ }
4382+ self.assertEqual(expected, hooks.apply_peer_config(services_dict))
4383+
4384+ @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"})
4385+ def test_inherit_timeout_settings(self):
4386+ self.unit_get.return_value = "1.2.4.5"
4387+ self.relations_of_type.return_value = [
4388+ {"__unit__": "haproxy/1",
4389+ "hostname": "haproxy-1",
4390+ "private-address": "1.2.4.4",
4391+ "all_services": yaml.dump([
4392+ {"service_name": "foo_service",
4393+ "service_host": "0.0.0.0",
4394+ "service_options": ["timeout server 5000"],
4395+ "service_port": 4242},
4396+ ])
4397+ }
4398+ ]
4399+
4400+ services_dict = {
4401+ "foo_service": {
4402+ "service_name": "foo_service",
4403+ "service_host": "0.0.0.0",
4404+ "service_port": 4242,
4405+ "service_options": ["timeout server 5000"],
4406+ "server_options": ["maxconn 4"],
4407+ "servers": [("backend_1__8080", "1.2.3.4",
4408+ 8080, ["maxconn 4"])],
4409+ },
4410+ }
4411+
4412+ expected = {
4413+ "foo_service": {
4414+ "service_name": "foo_service",
4415+ "service_host": "0.0.0.0",
4416+ "service_port": 4242,
4417+ "service_options": ["balance leastconn",
4418+ "mode tcp",
4419+ "option tcplog",
4420+ "timeout server 5000"],
4421+ "servers": [
4422+ ("haproxy-1", "1.2.4.4", 4243, ["check"]),
4423+ ("haproxy-2", "1.2.4.5", 4243, ["check", "backup"])
4424+ ],
4425+ },
4426+ "foo_service_be": {
4427+ "service_name": "foo_service_be",
4428+ "service_host": "0.0.0.0",
4429+ "service_port": 4243,
4430+ "service_options": ["timeout server 5000"],
4431+ "server_options": ["maxconn 4"],
4432+ "servers": [("backend_1__8080", "1.2.3.4",
4433+ 8080, ["maxconn 4"])],
4434+ },
4435+ }
4436+ self.assertEqual(expected, hooks.apply_peer_config(services_dict))
4437+
4438+ @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"})
4439+ def test_with_no_relation_data(self):
4440+ self.unit_get.return_value = "1.2.4.5"
4441+ self.relations_of_type.return_value = []
4442+
4443+ services_dict = {
4444+ "foo_service": {
4445+ "service_name": "foo_service",
4446+ "service_host": "0.0.0.0",
4447+ "service_port": 4242,
4448+ "service_options": ["balance leastconn"],
4449+ "server_options": ["maxconn 4"],
4450+ "servers": [("backend_1__8080", "1.2.3.4",
4451+ 8080, ["maxconn 4"])],
4452+ },
4453+ }
4454+
4455+ expected = services_dict
4456+ self.assertEqual(expected, hooks.apply_peer_config(services_dict))
4457+
4458+ @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"})
4459+ def test_with_missing_all_services(self):
4460+ self.unit_get.return_value = "1.2.4.5"
4461+ self.relations_of_type.return_value = [
4462+ {"__unit__": "haproxy/1",
4463+ "hostname": "haproxy-1",
4464+ "private-address": "1.2.4.4",
4465+ }
4466+ ]
4467+
4468+ services_dict = {
4469+ "foo_service": {
4470+ "service_name": "foo_service",
4471+ "service_host": "0.0.0.0",
4472+ "service_port": 4242,
4473+ "service_options": ["balance leastconn"],
4474+ "server_options": ["maxconn 4"],
4475+ "servers": [("backend_1__8080", "1.2.3.4",
4476+ 8080, ["maxconn 4"])],
4477+ },
4478+ }
4479+
4480+ expected = services_dict
4481+ self.assertEqual(expected, hooks.apply_peer_config(services_dict))
4482+
4483+ @patch('hooks.create_listen_stanza')
4484+ def test_writes_service_config(self, create_listen_stanza):
4485+ create_listen_stanza.return_value = 'some content'
4486+ services_dict = {
4487+ 'foo': {
4488+ 'service_name': 'bar',
4489+ 'service_host': 'some-host',
4490+ 'service_port': 'some-port',
4491+ 'service_options': 'some-options',
4492+ 'servers': (1, 2),
4493+ },
4494+ }
4495+
4496+ with patch.object(os.path, "exists") as exists:
4497+ exists.return_value = True
4498+ with patch_open() as (mock_open, mock_file):
4499+ hooks.write_service_config(services_dict)
4500+
4501+ create_listen_stanza.assert_called_with(
4502+ 'bar', 'some-host', 'some-port', 'some-options', (1, 2))
4503+ mock_open.assert_called_with(
4504+ '/var/run/haproxy/bar.service', 'w')
4505+ mock_file.write.assert_called_with('some content')
4506
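The peer tests above describe the in-place upgrade mentioned in the change description: with more than one haproxy unit, the originally configured service becomes a routing frontend whose servers are the haproxy units themselves (the first unit in unit-name order is primary, the rest are marked 'backup'), while a new '<name>_be' service on the internal port keeps the real backend servers. A minimal illustration using the values asserted in test_with_peer_same_services (the actual transformation is hooks.apply_peer_config()):

# Values copied from the expected dict in test_with_peer_same_services.
peered = {
    "foo_service": {              # public port 4242, now routes to the peers
        "service_port": 4242,
        "servers": [
            ("haproxy-1", "1.2.4.4", 4243, ["check"]),            # primary unit
            ("haproxy-2", "1.2.4.5", 4243, ["check", "backup"]),  # backup unit
        ],
    },
    "foo_service_be": {           # internal port 4243, still the real backends
        "service_port": 4243,
        "servers": [
            ("backend_1__8080", "1.2.3.4", 8080, ["maxconn 4"]),
        ],
    },
}
for name, service in sorted(peered.items()):
    print(name, service["service_port"], [s[0] for s in service["servers"]])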
4507=== added file 'hooks/tests/test_reverseproxy_hooks.py'
4508--- hooks/tests/test_reverseproxy_hooks.py 1970-01-01 00:00:00 +0000
4509+++ hooks/tests/test_reverseproxy_hooks.py 2013-10-10 22:34:35 +0000
4510@@ -0,0 +1,345 @@
4511+from testtools import TestCase
4512+from mock import patch, call
4513+
4514+import hooks
4515+
4516+
4517+class ReverseProxyRelationTest(TestCase):
4518+
4519+ def setUp(self):
4520+ super(ReverseProxyRelationTest, self).setUp()
4521+
4522+ self.relations_of_type = self.patch_hook("relations_of_type")
4523+ self.get_config_services = self.patch_hook("get_config_services")
4524+ self.log = self.patch_hook("log")
4525+ self.write_service_config = self.patch_hook("write_service_config")
4526+ self.apply_peer_config = self.patch_hook("apply_peer_config")
4527+ self.apply_peer_config.side_effect = lambda value: value
4528+
4529+ def patch_hook(self, hook_name):
4530+ mock_controller = patch.object(hooks, hook_name)
4531+ mock = mock_controller.start()
4532+ self.addCleanup(mock_controller.stop)
4533+ return mock
4534+
4535+ def test_relation_data_returns_none(self):
4536+ self.get_config_services.return_value = {
4537+ "service": {
4538+ "service_name": "service",
4539+ },
4540+ }
4541+ self.relations_of_type.return_value = []
4542+ self.assertIs(None, hooks.create_services())
4543+ self.log.assert_called_once_with("No backend servers, exiting.")
4544+ self.write_service_config.assert_not_called()
4545+
4546+ def test_relation_data_returns_no_relations(self):
4547+ self.get_config_services.return_value = {
4548+ "service": {
4549+ "service_name": "service",
4550+ },
4551+ }
4552+ self.relations_of_type.return_value = []
4553+ self.assertIs(None, hooks.create_services())
4554+ self.log.assert_called_once_with("No backend servers, exiting.")
4555+ self.write_service_config.assert_not_called()
4556+
4557+ def test_relation_no_services(self):
4558+ self.get_config_services.return_value = {}
4559+ self.relations_of_type.return_value = [
4560+ {"port": 4242,
4561+ "__unit__": "foo/0",
4562+ "hostname": "backend.1",
4563+ "private-address": "1.2.3.4"},
4564+ ]
4565+ self.assertIs(None, hooks.create_services())
4566+ self.log.assert_called_once_with("No services configured, exiting.")
4567+ self.write_service_config.assert_not_called()
4568+
4569+ def test_no_port_in_relation_data(self):
4570+ self.get_config_services.return_value = {
4571+ "service": {
4572+ "service_name": "service",
4573+ },
4574+ }
4575+ self.relations_of_type.return_value = [
4576+ {"private-address": "1.2.3.4",
4577+ "__unit__": "foo/0"},
4578+ ]
4579+ self.assertIs(None, hooks.create_services())
4580+ self.log.assert_has_calls([call.log(
4581+ "No port in relation data for 'foo/0', skipping.")])
4582+ self.write_service_config.assert_not_called()
4583+
4584+ def test_no_private_address_in_relation_data(self):
4585+ self.get_config_services.return_value = {
4586+ "service": {
4587+ "service_name": "service",
4588+ },
4589+ }
4590+ self.relations_of_type.return_value = [
4591+ {"port": 4242,
4592+ "__unit__": "foo/0"},
4593+ ]
4594+ self.assertIs(None, hooks.create_services())
4595+ self.log.assert_has_calls([call.log(
4596+ "No private-address in relation data for 'foo/0', skipping.")])
4597+ self.write_service_config.assert_not_called()
4598+
4599+ def test_no_hostname_in_relation_data(self):
4600+ self.get_config_services.return_value = {
4601+ "service": {
4602+ "service_name": "service",
4603+ },
4604+ }
4605+ self.relations_of_type.return_value = [
4606+ {"port": 4242,
4607+ "private-address": "1.2.3.4",
4608+ "__unit__": "foo/0"},
4609+ ]
4610+ self.assertIs(None, hooks.create_services())
4611+ self.log.assert_has_calls([call.log(
4612+ "No hostname in relation data for 'foo/0', skipping.")])
4613+ self.write_service_config.assert_not_called()
4614+
4615+ def test_relation_unknown_service(self):
4616+ self.get_config_services.return_value = {
4617+ "service": {
4618+ "service_name": "service",
4619+ },
4620+ }
4621+ self.relations_of_type.return_value = [
4622+ {"port": 4242,
4623+ "hostname": "backend.1",
4624+ "service_name": "invalid",
4625+ "private-address": "1.2.3.4",
4626+ "__unit__": "foo/0"},
4627+ ]
4628+ self.assertIs(None, hooks.create_services())
4629+ self.log.assert_has_calls([call.log(
4630+ "Service 'invalid' does not exist.")])
4631+ self.write_service_config.assert_not_called()
4632+
4633+ def test_no_relation_but_has_servers_from_config(self):
4634+ self.get_config_services.return_value = {
4635+ None: {
4636+ "service_name": "service",
4637+ },
4638+ "service": {
4639+ "service_name": "service",
4640+ "servers": [
4641+ ("legacy-backend", "1.2.3.1", 4242, ["maxconn 42"]),
4642+ ]
4643+ },
4644+ }
4645+ self.relations_of_type.return_value = []
4646+
4647+ expected = {
4648+ 'service': {
4649+ 'service_name': 'service',
4650+ 'servers': [
4651+ ("legacy-backend", "1.2.3.1", 4242, ["maxconn 42"]),
4652+ ],
4653+ },
4654+ }
4655+ self.assertEqual(expected, hooks.create_services())
4656+ self.write_service_config.assert_called_with(expected)
4657+
4658+ def test_relation_default_service(self):
4659+ self.get_config_services.return_value = {
4660+ None: {
4661+ "service_name": "service",
4662+ },
4663+ "service": {
4664+ "service_name": "service",
4665+ },
4666+ }
4667+ self.relations_of_type.return_value = [
4668+ {"port": 4242,
4669+ "hostname": "backend.1",
4670+ "private-address": "1.2.3.4",
4671+ "__unit__": "foo/0"},
4672+ ]
4673+
4674+ expected = {
4675+ 'service': {
4676+ 'service_name': 'service',
4677+ 'servers': [('foo-0-4242', '1.2.3.4', 4242, [])],
4678+ },
4679+ }
4680+ self.assertEqual(expected, hooks.create_services())
4681+ self.write_service_config.assert_called_with(expected)
4682+
4683+ def test_with_service_options(self):
4684+ self.get_config_services.return_value = {
4685+ None: {
4686+ "service_name": "service",
4687+ },
4688+ "service": {
4689+ "service_name": "service",
4690+ "server_options": ["maxconn 4"],
4691+ },
4692+ }
4693+ self.relations_of_type.return_value = [
4694+ {"port": 4242,
4695+ "hostname": "backend.1",
4696+ "private-address": "1.2.3.4",
4697+ "__unit__": "foo/0"},
4698+ ]
4699+
4700+ expected = {
4701+ 'service': {
4702+ 'service_name': 'service',
4703+ 'server_options': ["maxconn 4"],
4704+ 'servers': [('foo-0-4242', '1.2.3.4',
4705+ 4242, ["maxconn 4"])],
4706+ },
4707+ }
4708+ self.assertEqual(expected, hooks.create_services())
4709+ self.write_service_config.assert_called_with(expected)
4710+
4711+ def test_with_service_name(self):
4712+ self.get_config_services.return_value = {
4713+ None: {
4714+ "service_name": "service",
4715+ },
4716+ "foo_service": {
4717+ "service_name": "foo_service",
4718+ "server_options": ["maxconn 4"],
4719+ },
4720+ }
4721+ self.relations_of_type.return_value = [
4722+ {"port": 4242,
4723+ "hostname": "backend.1",
4724+ "service_name": "foo_service",
4725+ "private-address": "1.2.3.4",
4726+ "__unit__": "foo/0"},
4727+ ]
4728+
4729+ expected = {
4730+ 'foo_service': {
4731+ 'service_name': 'foo_service',
4732+ 'server_options': ["maxconn 4"],
4733+ 'servers': [('foo-0-4242', '1.2.3.4',
4734+ 4242, ["maxconn 4"])],
4735+ },
4736+ }
4737+ self.assertEqual(expected, hooks.create_services())
4738+ self.write_service_config.assert_called_with(expected)
4739+
4740+ def test_no_service_name_unit_name_match_service_name(self):
4741+ self.get_config_services.return_value = {
4742+ None: {
4743+ "service_name": "foo_service",
4744+ },
4745+ "foo_service": {
4746+ "service_name": "foo_service",
4747+ "server_options": ["maxconn 4"],
4748+ },
4749+ }
4750+ self.relations_of_type.return_value = [
4751+ {"port": 4242,
4752+ "hostname": "backend.1",
4753+ "private-address": "1.2.3.4",
4754+ "__unit__": "foo/1"},
4755+ ]
4756+
4757+ expected = {
4758+ 'foo_service': {
4759+ 'service_name': 'foo_service',
4760+ 'server_options': ["maxconn 4"],
4761+ 'servers': [('foo-1-4242', '1.2.3.4',
4762+ 4242, ["maxconn 4"])],
4763+ },
4764+ }
4765+ self.assertEqual(expected, hooks.create_services())
4766+ self.write_service_config.assert_called_with(expected)
4767+
4768+ def test_with_sitenames_match_service_name(self):
4769+ self.get_config_services.return_value = {
4770+ None: {
4771+ "service_name": "service",
4772+ },
4773+ "foo_srv": {
4774+ "service_name": "foo_srv",
4775+ "server_options": ["maxconn 4"],
4776+ },
4777+ }
4778+ self.relations_of_type.return_value = [
4779+ {"port": 4242,
4780+ "hostname": "backend.1",
4781+ "sitenames": "foo_srv bar_srv",
4782+ "private-address": "1.2.3.4",
4783+ "__unit__": "foo/0"},
4784+ ]
4785+
4786+ expected = {
4787+ 'foo_srv': {
4788+ 'service_name': 'foo_srv',
4789+ 'server_options': ["maxconn 4"],
4790+ 'servers': [('foo-0-4242', '1.2.3.4',
4791+ 4242, ["maxconn 4"])],
4792+ },
4793+ }
4794+ self.assertEqual(expected, hooks.create_services())
4795+ self.write_service_config.assert_called_with(expected)
4796+
4797+ def test_with_juju_services_match_service_name(self):
4798+ self.get_config_services.return_value = {
4799+ None: {
4800+ "service_name": "service",
4801+ },
4802+ "foo_service": {
4803+ "service_name": "foo_service",
4804+ "server_options": ["maxconn 4"],
4805+ },
4806+ }
4807+ self.relations_of_type.return_value = [
4808+ {"port": 4242,
4809+ "hostname": "backend.1",
4810+ "private-address": "1.2.3.4",
4811+ "__unit__": "foo/1"},
4812+ ]
4813+
4814+ expected = {
4815+ 'foo_service': {
4816+ 'service_name': 'foo_service',
4817+ 'server_options': ["maxconn 4"],
4818+ 'servers': [('foo-1-4242', '1.2.3.4',
4819+ 4242, ["maxconn 4"])],
4820+ },
4821+ }
4822+
4823+ result = hooks.create_services()
4824+
4825+ self.assertEqual(expected, result)
4826+ self.write_service_config.assert_called_with(expected)
4827+
4828+ def test_with_sitenames_no_match_but_unit_name(self):
4829+ self.get_config_services.return_value = {
4830+ None: {
4831+ "service_name": "service",
4832+ },
4833+ "foo": {
4834+ "service_name": "foo",
4835+ "server_options": ["maxconn 4"],
4836+ },
4837+ }
4838+ self.relations_of_type.return_value = [
4839+ {"port": 4242,
4840+ "hostname": "backend.1",
4841+ "sitenames": "bar_service baz_service",
4842+ "private-address": "1.2.3.4",
4843+ "__unit__": "foo/0"},
4844+ ]
4845+
4846+ expected = {
4847+ 'foo': {
4848+ 'service_name': 'foo',
4849+ 'server_options': ["maxconn 4"],
4850+ 'servers': [('foo-0-4242', '1.2.3.4',
4851+ 4242, ["maxconn 4"])],
4852+ },
4853+ }
4854+ self.assertEqual(expected, hooks.create_services())
4855+ self.write_service_config.assert_called_with(expected)
4856
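For reference, the reverseproxy tests above all start from relation data of the same shape and turn each backend unit into one server entry, with the server name derived from the unit name and port. A minimal illustration using the values from test_relation_default_service (the real mapping lives in hooks.create_services()):

# Relation data as provided by a backend unit joining the reverseproxy relation.
relation_data = {
    "__unit__": "foo/0",
    "hostname": "backend.1",
    "private-address": "1.2.3.4",
    "port": 4242,
}
# unit "foo/0" + port 4242 -> server name "foo-0-4242"; the options list comes
# from the matched service's server_options (empty for the default service).
server_entry = ("foo-0-4242", relation_data["private-address"],
                relation_data["port"], [])
print(server_entry)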
4857=== added file 'hooks/tests/test_website_hooks.py'
4858--- hooks/tests/test_website_hooks.py 1970-01-01 00:00:00 +0000
4859+++ hooks/tests/test_website_hooks.py 2013-10-10 22:34:35 +0000
4860@@ -0,0 +1,145 @@
4861+from testtools import TestCase
4862+from mock import patch, call
4863+
4864+import hooks
4865+
4866+
4867+class WebsiteRelationTest(TestCase):
4868+
4869+ def setUp(self):
4870+ super(WebsiteRelationTest, self).setUp()
4871+ self.notify_website = self.patch_hook("notify_website")
4872+
4873+ def patch_hook(self, hook_name):
4874+ mock_controller = patch.object(hooks, hook_name)
4875+ mock = mock_controller.start()
4876+ self.addCleanup(mock_controller.stop)
4877+ return mock
4878+
4879+ def test_website_interface_none(self):
4880+ self.assertEqual(None, hooks.website_interface(hook_name=None))
4881+ self.notify_website.assert_not_called()
4882+
4883+ def test_website_interface_joined(self):
4884+ hooks.website_interface(hook_name="joined")
4885+ self.notify_website.assert_called_once_with(
4886+ changed=False, relation_ids=(None,))
4887+
4888+ def test_website_interface_changed(self):
4889+ hooks.website_interface(hook_name="changed")
4890+ self.notify_website.assert_called_once_with(
4891+ changed=True, relation_ids=(None,))
4892+
4893+
4894+class NotifyRelationTest(TestCase):
4895+
4896+ def setUp(self):
4897+ super(NotifyRelationTest, self).setUp()
4898+
4899+ self.relations_for_id = self.patch_hook("relations_for_id")
4900+ self.relation_set = self.patch_hook("relation_set")
4901+ self.config_get = self.patch_hook("config_get")
4902+ self.get_relation_ids = self.patch_hook("get_relation_ids")
4903+ self.get_hostname = self.patch_hook("get_hostname")
4904+ self.log = self.patch_hook("log")
4905+ self.get_config_services = self.patch_hook("get_config_service")
4906+
4907+ def patch_hook(self, hook_name):
4908+ mock_controller = patch.object(hooks, hook_name)
4909+ mock = mock_controller.start()
4910+ self.addCleanup(mock_controller.stop)
4911+ return mock
4912+
4913+ def test_notify_website_relation_no_relation_ids(self):
4914+ self.get_relation_ids.return_value = ()
4915+ hooks.notify_relation("website")
4916+ self.relation_set.assert_not_called()
4917+ self.get_relation_ids.assert_called_once_with("website")
4918+
4919+ def test_notify_website_relation_with_default_relation(self):
4920+ self.get_relation_ids.return_value = ()
4921+ self.get_hostname.return_value = "foo.local"
4922+ self.relations_for_id.return_value = [{}]
4923+ self.config_get.return_value = {"services": ""}
4924+
4925+ hooks.notify_relation("website", relation_ids=(None,))
4926+
4927+ self.get_hostname.assert_called_once_with()
4928+ self.relations_for_id.assert_called_once_with(None)
4929+ self.relation_set.assert_called_once_with(
4930+ relation_id=None, port="80", hostname="foo.local",
4931+ all_services="")
4932+ self.get_relation_ids.assert_not_called()
4933+
4934+ def test_notify_website_relation_with_relations(self):
4935+ self.get_relation_ids.return_value = ("website:1",
4936+ "website:2")
4937+ self.get_hostname.return_value = "foo.local"
4938+ self.relations_for_id.return_value = [{}]
4939+ self.config_get.return_value = {"services": ""}
4940+
4941+ hooks.notify_relation("website")
4942+
4943+ self.get_hostname.assert_called_once_with()
4944+ self.get_relation_ids.assert_called_once_with("website")
4945+ self.relations_for_id.assert_has_calls([
4946+ call("website:1"),
4947+ call("website:2"),
4948+ ])
4949+
4950+ self.relation_set.assert_has_calls([
4951+ call(relation_id="website:1", port="80", hostname="foo.local",
4952+ all_services=""),
4953+ call(relation_id="website:2", port="80", hostname="foo.local",
4954+ all_services=""),
4955+ ])
4956+
4957+ def test_notify_website_relation_with_different_sitenames(self):
4958+ self.get_relation_ids.return_value = ("website:1",)
4959+ self.get_hostname.return_value = "foo.local"
4960+ self.relations_for_id.return_value = [{"service_name": "foo"},
4961+ {"service_name": "bar"}]
4962+ self.config_get.return_value = {"services": ""}
4963+
4964+ hooks.notify_relation("website")
4965+
4966+ self.get_hostname.assert_called_once_with()
4967+ self.get_relation_ids.assert_called_once_with("website")
4968+ self.relations_for_id.assert_has_calls([
4969+ call("website:1"),
4970+ ])
4971+
4972+ self.relation_set.assert_has_calls([
4973+ call(
4974+ relation_id="website:1", port="80", hostname="foo.local",
4975+ all_services=""),
4976+ ])
4977+ self.log.assert_called_once_with(
4978+ "Remote units requested than a single service name."
4979+ "Falling back to default host/port.")
4980+
4981+ def test_notify_website_relation_with_same_sitenames(self):
4982+ self.get_relation_ids.return_value = ("website:1",)
4983+ self.get_hostname.side_effect = ["foo.local", "bar.local"]
4984+ self.relations_for_id.return_value = [{"service_name": "bar"},
4985+ {"service_name": "bar"}]
4986+ self.config_get.return_value = {"services": ""}
4987+ self.get_config_services.return_value = {"service_host": "bar.local",
4988+ "service_port": "4242"}
4989+
4990+ hooks.notify_relation("website")
4991+
4992+ self.get_hostname.assert_has_calls([
4993+ call(),
4994+ call("bar.local")])
4995+ self.get_relation_ids.assert_called_once_with("website")
4996+ self.relations_for_id.assert_has_calls([
4997+ call("website:1"),
4998+ ])
4999+
5000+ self.relation_set.assert_has_calls([
The diff has been truncated for viewing.
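For readers skimming the truncated test diff above: the create_services tests encode the backend-naming convention this branch relies on, where the remote unit name and port (for example foo/0 exposing 4242) become the server identifier foo-0-4242, paired with the unit's private-address. The sketch below is only an illustration of that convention as seen in the expected fixtures; the helper name server_entry is hypothetical and is not the actual implementation in hooks.py.

    # Illustrative only: mirrors the expected fixtures in the tests above,
    # e.g. unit "foo/0" on port 4242 -> ("foo-0-4242", "1.2.3.4", 4242, [...]).
    def server_entry(relation_data, server_options):
        """Build a (name, address, port, options) tuple from relation data."""
        unit = relation_data["__unit__"]            # e.g. "foo/0"
        port = relation_data["port"]                # e.g. 4242
        address = relation_data["private-address"]  # e.g. "1.2.3.4"
        name = "%s-%s" % (unit.replace("/", "-"), port)
        return (name, address, port, server_options)

    # Same data as test_with_sitenames_no_match_but_unit_name:
    print(server_entry(
        {"port": 4242, "hostname": "backend.1",
         "private-address": "1.2.3.4", "__unit__": "foo/0"},
        ["maxconn 4"]))
    # -> ('foo-0-4242', '1.2.3.4', 4242, ['maxconn 4'])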
