Merge lp:~sidnei/charms/precise/haproxy/trunk into lp:charms/haproxy

Proposed by Sidnei da Silva
Status: Superseded
Proposed branch: lp:~sidnei/charms/precise/haproxy/trunk
Merge into: lp:charms/haproxy
Diff against target: 5083 lines (+3681/-904)
30 files modified
.bzrignore (+10/-0)
Makefile (+39/-0)
README.md (+22/-15)
charm-helpers.yaml (+4/-0)
cm.py (+193/-0)
config-manager.txt (+6/-0)
config.yaml (+13/-1)
files/nrpe/check_haproxy.sh (+2/-3)
hooks/charmhelpers/contrib/charmsupport/nrpe.py (+218/-0)
hooks/charmhelpers/contrib/charmsupport/volumes.py (+156/-0)
hooks/charmhelpers/core/hookenv.py (+340/-0)
hooks/charmhelpers/core/host.py (+239/-0)
hooks/charmhelpers/fetch/__init__.py (+209/-0)
hooks/charmhelpers/fetch/archiveurl.py (+48/-0)
hooks/charmhelpers/fetch/bzrurl.py (+44/-0)
hooks/hooks.py (+508/-450)
hooks/install (+13/-0)
hooks/nrpe.py (+0/-170)
hooks/test_hooks.py (+0/-263)
hooks/tests/test_config_changed_hooks.py (+120/-0)
hooks/tests/test_helpers.py (+745/-0)
hooks/tests/test_nrpe_hooks.py (+24/-0)
hooks/tests/test_peer_hooks.py (+200/-0)
hooks/tests/test_reverseproxy_hooks.py (+345/-0)
hooks/tests/test_website_hooks.py (+145/-0)
hooks/tests/utils_for_tests.py (+21/-0)
metadata.yaml (+7/-1)
revision (+0/-1)
setup.cfg (+4/-0)
tarmac_tests.sh (+6/-0)
To merge this branch: bzr merge lp:~sidnei/charms/precise/haproxy/trunk
Reviewer Review Type Date Requested Status
charmers Pending
Review via email: mp+181421@code.launchpad.net

This proposal supersedes a proposal from 2013-04-18.

Description of the change

* The 'all_services' config now supports a static list of servers to be used *in addition* to the ones provided via relation.

* When more than one haproxy unit exists, the configured services are upgraded in place to a mode where traffic is routed to a single haproxy unit (the first one in unit-name order) and the remaining units are configured as 'backup'. This makes it possible to enforce a 'maxconn' setting in the configured services, which could not be enforced otherwise.

* Changes to the configured services are properly propagated to the upstream relation.
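The active/backup peer selection described above can be sketched as follows. This is an illustrative sketch, not the charm's actual code; the function name `partition_peers` and the numeric sort of unit indices are assumptions made for the example.

```python
# Hypothetical sketch of the behaviour described above: the first unit in
# unit-name order receives traffic, and the remaining units are configured
# as 'backup' servers, so a 'maxconn' limit applies at a single point.
def partition_peers(unit_names):
    """Return (active, backups) given peer unit names like 'haproxy/2'."""
    # Sort by the numeric unit index so 'haproxy/10' sorts after 'haproxy/2'.
    ordered = sorted(unit_names, key=lambda name: int(name.split('/')[-1]))
    return ordered[0], ordered[1:]

active, backups = partition_peers(['haproxy/2', 'haproxy/0', 'haproxy/10'])
print(active)   # haproxy/0
print(backups)  # ['haproxy/2', 'haproxy/10']
```

Sorting on the numeric index (rather than plain string order) keeps the choice of the active unit stable as more units are added.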

Revision history for this message
Juan L. Negron (negronjl) wrote : Posted in a previous version of this proposal

Reviewing this now.

-Juan

Revision history for this message
Juan L. Negron (negronjl) wrote : Posted in a previous version of this proposal

Hi Sidnei:

I ran charm proof on the charm (after merging it locally) and I get the following:
negronjl@negronjl-laptop:~/src/juju/charms/precise$ charm proof haproxy
W: website-relation-changed not executable
W: nrpe-external-master-relation-changed not executable
W: reverseproxy-relation-changed not executable
W: peer-relation-changed not executable
W: install not executable
W: start not executable
W: stop not executable
W: config-changed not executable

As I try to deploy the charm, I get this:
negronjl@negronjl-laptop:~/src/juju/charms$ juju deploy --repository . local:precise/haproxy
2013-02-27 10:17:30,036 INFO Searching for charm local:precise/haproxy in local charm repository: /home/negronjl/src/juju/charms
2013-02-27 10:17:30,244 INFO Connecting to environment...
2013-02-27 10:17:33,069 INFO Connected to environment.
2013-02-27 10:17:33,337 ERROR [Errno 2] No such file or directory: '/home/negronjl/src/juju/charms/precise/haproxy/lib/charmsupport'

review: Disapprove
Revision history for this message
Sidnei da Silva (sidnei) wrote : Posted in a previous version of this proposal

Ah, lib/charmsupport is a symlink and those don't get included in the charm IIRC. Maybe 'charm proof' should check for that.

I'll add charm proof to our tarmac test step and fix the other issue.

Revision history for this message
Mark Mims (mark-mims) wrote : Posted in a previous version of this proposal

same story as apache... please reserve /tests for charm tests.

review: Needs Resubmitting
Revision history for this message
James Page (james-page) wrote : Posted in a previous version of this proposal

Sidnei

This all looks like great work; I really like the approach you are taking to testing (gonna steal some of that goodness for my own charms) and you have pointed me at charmsupport, which overlaps a lot with the openstack-charm-helpers we use for the OpenStack charms.

I have one gripe right now: as-is, this change will make the charm un-deployable from the charm store, since you have introduced a requirement to branch it locally and pull in the other dependencies.

I personally don't think this is acceptable; the branch for haproxy should ship the required revisions of its dependencies so that this will continue to work.

review: Disapprove
Revision history for this message
James Page (james-page) wrote : Posted in a previous version of this proposal

Sorry - that should have been 'Needs Fixing' not 'Disapprove'.

Not got my head on straight this morning...

review: Needs Fixing
Revision history for this message
Sidnei da Silva (sidnei) wrote : Posted in a previous version of this proposal

Hi James,

re: dependencies, that's actually no longer the case, and I'll update the README to match. Notice how I've changed the install hook to be a shell script that adds a PPA and then installs the package dependencies, so it still works if you deploy directly from the charm store.

Revision history for this message
David Britton (dpb) wrote : Posted in a previous version of this proposal

Sidnei: the README.md needs the following ppa url:

  sudo add-apt-repository ppa:cjohnston/flake8

it's misspelled right now.

Revision history for this message
David Britton (dpb) wrote : Posted in a previous version of this proposal

[2]: flake8 in the apt-get install line should be python-flake8

Revision history for this message
David Britton (dpb) wrote : Posted in a previous version of this proposal

[3]: also add python-nosexcover

Revision history for this message
David Britton (dpb) wrote : Posted in a previous version of this proposal

Just tested this branch with an upstream apache2 and a downstream landscape charm that depends on the relation-driven proxying change in r64. It's missing the functionality in r64 and r64 from trunk right now.

review: Needs Fixing
83. By Tom Haddon

Merge from U1 charms - also an upstream review in process for this. Add in charmhelpers and updating some tests.

84. By Matthias Arnason

[ev r=tiaz] Always write at least one listen stanza, do not pass null relationIDs to subprocess

85. By Matthias Arnason

[sidnei r=tiaz] Switch to using service name instead of hostname in backend server name, filter frontend-only options into frontend, create frontend/backend stanzas instead of a single listen stanza. Still support old listen stanzas when parsing for bw-compatibility.

86. By David Ames

[dames,r=mthaddon] reverting revno 84 and making notify_website call more explicit for relation_ids

87. By JuanJo Ciarlante

[sidnei, r=jjo] Dupe mode http/tcp and option httplog/tcplog between frontend and backend

Unmerged revisions

Preview Diff

=== added file '.bzrignore'
--- .bzrignore 1970-01-01 00:00:00 +0000
+++ .bzrignore 2013-10-10 22:34:35 +0000
@@ -0,0 +1,10 @@
+revision
+_trial_temp
+.coverage
+coverage.xml
+*.crt
+*.key
+lib/*
+*.pyc
+exec.d
+build/charm-helpers
=== added file 'Makefile'
--- Makefile 1970-01-01 00:00:00 +0000
+++ Makefile 2013-10-10 22:34:35 +0000
@@ -0,0 +1,39 @@
+PWD := $(shell pwd)
+SOURCEDEPS_DIR ?= $(shell dirname $(PWD))/.sourcecode
+HOOKS_DIR := $(PWD)/hooks
+TEST_PREFIX := PYTHONPATH=$(HOOKS_DIR)
+TEST_DIR := $(PWD)/hooks/tests
+CHARM_DIR := $(PWD)
+PYTHON := /usr/bin/env python
+
+
+build: test lint proof
+
+revision:
+	@test -f revision || echo 0 > revision
+
+proof: revision
+	@echo Proofing charm...
+	@(charm proof $(PWD) || [ $$? -eq 100 ]) && echo OK
+	@test `cat revision` = 0 && rm revision
+
+test:
+	@echo Starting tests...
+	@CHARM_DIR=$(CHARM_DIR) $(TEST_PREFIX) nosetests $(TEST_DIR)
+
+lint:
+	@echo Checking for Python syntax...
+	@flake8 $(HOOKS_DIR) --ignore=E123 --exclude=$(HOOKS_DIR)/charmhelpers && echo OK
+
+sourcedeps: $(PWD)/config-manager.txt
+	@echo Updating source dependencies...
+	@$(PYTHON) cm.py -c $(PWD)/config-manager.txt \
+		-p $(SOURCEDEPS_DIR) \
+		-t $(PWD)
+	@$(PYTHON) build/charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
+		-c charm-helpers.yaml \
+		-b build/charm-helpers \
+		-d hooks/charmhelpers
+	@echo Do not forget to commit the updated files if any.
+
+.PHONY: revision proof test lint sourcedeps charm-payload
=== modified file 'README.md'
--- README.md 2013-02-12 23:43:54 +0000
+++ README.md 2013-10-10 22:34:35 +0000
@@ -1,5 +1,5 @@
-Juju charm haproxy
-==================
+Juju charm for HAProxy
+======================
 
 HAProxy is a free, very fast and reliable solution offering high availability,
 load balancing, and proxying for TCP and HTTP-based applications. It is
@@ -9,6 +9,23 @@
 integration into existing architectures very easy and riskless, while still
 offering the possibility not to expose fragile web servers to the Net.
 
+Development
+-----------
+The following steps are needed for testing and development of the charm,
+but **not** for deployment:
+
+    sudo apt-get install python-software-properties
+    sudo add-apt-repository ppa:cjohnston/flake8
+    sudo apt-get update
+    sudo apt-get install python-mock python-flake8 python-nose python-nosexcover
+
+To run the tests:
+
+    make build
+
+... will run the unit tests, run flake8 over the source to warn about
+formatting issues and output a code coverage summary of the 'hooks.py' module.
+
 How to deploy the charm
 -----------------------
     juju deploy haproxy
@@ -27,7 +44,7 @@
 the "Website Relation" section for more information about that.
 
 When your charm hooks into reverseproxy you have two general approaches
-which can be used to notify haproxy about what services you are running. 
+which can be used to notify haproxy about what services you are running.
 1) Single-service proxying or 2) Multi-service or relation-driven proxying.
 
 ** 1) Single-Service Proxying **
@@ -67,7 +84,7 @@
 
     #!/bin/bash
     # hooks/website-relation-changed
-    
+
     host=$(unit-get private-address)
    port=80
 
@@ -80,7 +97,7 @@
     "
 
 Once set, haproxy will union multiple `servers` stanzas from any units
-joining with the same `service_name` under one listen stanza. 
+joining with the same `service_name` under one listen stanza.
 `service-options` and `server_options` will be overwritten, so ensure they
 are set uniformly on all services with the same name.
 
@@ -102,18 +119,8 @@
 Many of the haproxy settings can be altered via the standard juju configuration
 settings. Please see the config.yaml file as each is fairly clearly documented.
 
-Testing
--------
-This charm has a simple unit-test program. Please expand it and make sure new
-changes are covered by simple unit tests. To run the unit tests:
-
-    sudo apt-get install python-mocker
-    sudo apt-get install python-twisted-core
-    cd hooks; trial test_hooks
-
 TODO:
 -----
 
     * Expand Single-Service section as I have not tested that mode fully.
     * Trigger website-relation-changed when the reverse-proxy relation changes
-
 
=== added directory 'build'
=== added file 'charm-helpers.yaml'
--- charm-helpers.yaml 1970-01-01 00:00:00 +0000
+++ charm-helpers.yaml 2013-10-10 22:34:35 +0000
@@ -0,0 +1,4 @@
+include:
+  - core
+  - fetch
+  - contrib.charmsupport
\ No newline at end of file
=== added file 'cm.py'
--- cm.py 1970-01-01 00:00:00 +0000
+++ cm.py 2013-10-10 22:34:35 +0000
@@ -0,0 +1,193 @@
+# Copyright 2010-2013 Canonical Ltd. All rights reserved.
+import os
+import re
+import sys
+import errno
+import hashlib
+import subprocess
+import optparse
+
+from os import curdir
+from bzrlib.branch import Branch
+from bzrlib.plugin import load_plugins
+load_plugins()
+from bzrlib.plugins.launchpad import account as lp_account
+
+if 'GlobalConfig' in dir(lp_account):
+    from bzrlib.config import LocationConfig as LocationConfiguration
+    _ = LocationConfiguration
+else:
+    from bzrlib.config import LocationStack as LocationConfiguration
+    _ = LocationConfiguration
+
+
+def get_branch_config(config_file):
+    """
+    Retrieves the sourcedeps configuration for an source dir.
+    Returns a dict of (branch, revspec) tuples, keyed by branch name.
+    """
+    branches = {}
+    with open(config_file, 'r') as stream:
+        for line in stream:
+            line = line.split('#')[0].strip()
+            bzr_match = re.match(r'(\S+)\s+'
+                                 'lp:([^;]+)'
+                                 '(?:;revno=(\d+))?', line)
+            if bzr_match:
+                name, branch, revno = bzr_match.group(1, 2, 3)
+                if revno is None:
+                    revspec = -1
+                else:
+                    revspec = revno
+                branches[name] = (branch, revspec)
+                continue
+            dir_match = re.match(r'(\S+)\s+'
+                                 '\(directory\)', line)
+            if dir_match:
+                name = dir_match.group(1)
+                branches[name] = None
+    return branches
+
+
+def main(config_file, parent_dir, target_dir, verbose):
+    """Do the deed."""
+
+    try:
+        os.makedirs(parent_dir)
+    except OSError, e:
+        if e.errno != errno.EEXIST:
+            raise
+
+    branches = sorted(get_branch_config(config_file).items())
+    for branch_name, spec in branches:
+        if spec is None:
+            # It's a directory, just create it and move on.
+            destination_path = os.path.join(target_dir, branch_name)
+            if not os.path.isdir(destination_path):
+                os.makedirs(destination_path)
+            continue
+
+        (quoted_branch_spec, revspec) = spec
+        revno = int(revspec)
+
+        # qualify mirror branch name with hash of remote repo path to deal
+        # with changes to the remote branch URL over time
+        branch_spec_digest = hashlib.sha1(quoted_branch_spec).hexdigest()
+        branch_directory = branch_spec_digest
+
+        source_path = os.path.join(parent_dir, branch_directory)
+        destination_path = os.path.join(target_dir, branch_name)
+
+        # Remove leftover symlinks/stray files.
+        try:
+            os.remove(destination_path)
+        except OSError, e:
+            if e.errno != errno.EISDIR and e.errno != errno.ENOENT:
+                raise
+
+        lp_url = "lp:" + quoted_branch_spec
+
+        # Create the local mirror branch if it doesn't already exist
+        if verbose:
+            sys.stderr.write('%30s: ' % (branch_name,))
+            sys.stderr.flush()
+
+        fresh = False
+        if not os.path.exists(source_path):
+            subprocess.check_call(['bzr', 'branch', '-q', '--no-tree',
+                                   '--', lp_url, source_path])
+            fresh = True
+
+        if not fresh:
+            source_branch = Branch.open(source_path)
+            if revno == -1:
+                orig_branch = Branch.open(lp_url)
+                fresh = source_branch.revno() == orig_branch.revno()
+            else:
+                fresh = source_branch.revno() == revno
+
+        # Freshen the source branch if required.
+        if not fresh:
+            subprocess.check_call(['bzr', 'pull', '-q', '--overwrite', '-r',
+                                   str(revno), '-d', source_path,
+                                   '--', lp_url])
+
+        if os.path.exists(destination_path):
+            # Overwrite the destination with the appropriate revision.
+            subprocess.check_call(['bzr', 'clean-tree', '--force', '-q',
+                                   '--ignored', '-d', destination_path])
+            subprocess.check_call(['bzr', 'pull', '-q', '--overwrite',
+                                   '-r', str(revno),
+                                   '-d', destination_path, '--', source_path])
+        else:
+            # Create a new branch.
+            subprocess.check_call(['bzr', 'branch', '-q', '--hardlink',
+                                   '-r', str(revno),
+                                   '--', source_path, destination_path])
+
+        # Check the state of the destination branch.
+        destination_branch = Branch.open(destination_path)
+        destination_revno = destination_branch.revno()
+
+        if verbose:
+            sys.stderr.write('checked out %4s of %s\n' %
+                             ("r" + str(destination_revno), lp_url))
+            sys.stderr.flush()
+
+        if revno != -1 and destination_revno != revno:
+            raise RuntimeError("Expected revno %d but got revno %d" %
+                               (revno, destination_revno))
+
+if __name__ == '__main__':
+    parser = optparse.OptionParser(
+        usage="%prog [options]",
+        description=(
+            "Add a lightweight checkout in <target> for each "
+            "corresponding file in <parent>."),
+        add_help_option=False)
+    parser.add_option(
+        '-p', '--parent', dest='parent',
+        default=None,
+        help=("The directory of the parent tree."),
+        metavar="DIR")
+    parser.add_option(
+        '-t', '--target', dest='target', default=curdir,
+        help=("The directory of the target tree."),
+        metavar="DIR")
+    parser.add_option(
+        '-c', '--config', dest='config', default=None,
+        help=("The config file to be used for config-manager."),
+        metavar="DIR")
+    parser.add_option(
+        '-q', '--quiet', dest='verbose', action='store_false',
+        help="Be less verbose.")
+    parser.add_option(
+        '-v', '--verbose', dest='verbose', action='store_true',
+        help="Be more verbose.")
+    parser.add_option(
+        '-h', '--help', action='help',
+        help="Show this help message and exit.")
+    parser.set_defaults(verbose=True)
+
+    options, args = parser.parse_args()
+
+    if options.parent is None:
+        options.parent = os.environ.get(
+            "SOURCEDEPS_DIR",
+            os.path.join(curdir, ".sourcecode"))
+
+    if options.target is None:
+        parser.error(
+            "Target directory not specified.")
+
+    if options.config is None:
+        config = [arg for arg in args
+                  if arg != "update"]
+        if not config or len(config) > 1:
+            parser.error("Config not specified")
+        options.config = config[0]
+
+    sys.exit(main(config_file=options.config,
+                  parent_dir=options.parent,
+                  target_dir=options.target,
+                  verbose=options.verbose))
=== added file 'config-manager.txt'
--- config-manager.txt 1970-01-01 00:00:00 +0000
+++ config-manager.txt 2013-10-10 22:34:35 +0000
@@ -0,0 +1,6 @@
+# After making changes to this file, to ensure that your sourcedeps are
+# up-to-date do:
+#
+#   make sourcedeps
+
+./build/charm-helpers lp:charm-helpers;revno=70
=== modified file 'config.yaml'
--- config.yaml 2012-10-10 14:38:47 +0000
+++ config.yaml 2013-10-10 22:34:35 +0000
@@ -59,7 +59,7 @@
     restarting, a turn-around timer of 1 second is applied before a retry
     occurs.
   default_timeouts:
-    default: "queue 1000, connect 1000, client 1000, server 1000"
+    default: "queue 20000, client 50000, connect 5000, server 50000"
     type: string
     description: Default timeouts
   enable_monitoring:
@@ -90,6 +90,12 @@
     default: 3
     type: int
     description: Monitoring interface refresh interval (in seconds)
+  package_status:
+    default: "install"
+    type: "string"
+    description: |
+      The status of service-affecting packages will be set to this value in the dpkg database.
+      Useful valid values are "install" and "hold".
   services:
     default: |
       - service_name: haproxy_service
@@ -106,6 +112,12 @@
     before the first variable, service_name, as above. Service options is a
     comma separated list, server options will be appended as a string to
     the individual server lines for a given listen stanza.
+  sysctl:
+    default: ""
+    type: string
+    description: >
+      YAML-formatted list of sysctl values, e.g.:
+      '{ net.ipv4.tcp_max_syn_backlog : 65536 }'
   nagios_context:
     default: "juju"
     type: string
 
=== renamed directory 'files/nrpe-external-master' => 'files/nrpe'
=== modified file 'files/nrpe/check_haproxy.sh'
--- files/nrpe-external-master/check_haproxy.sh 2012-11-07 22:32:06 +0000
+++ files/nrpe/check_haproxy.sh 2013-10-10 22:34:35 +0000
@@ -2,7 +2,7 @@
 #--------------------------------------------
 # This file is managed by Juju
 #--------------------------------------------
-# 
+#
 # Copyright 2009,2012 Canonical Ltd.
 # Author: Tom Haddon
 
@@ -13,7 +13,7 @@
 
 for appserver in $(grep ' server' /etc/haproxy/haproxy.cfg | awk '{print $2'});
 do
-    output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 10000 --regex="class=\"active(2|3).*${appserver}" -e ' 200 OK')
+    output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 10000 --regex="class=\"(active|backup)(2|3).*${appserver}" -e ' 200 OK')
     if [ $? != 0 ]; then
        date >> $LOGFILE
        echo $output >> $LOGFILE
@@ -30,4 +30,3 @@
 
 echo "OK: All haproxy instances looking good"
 exit 0
-
=== added directory 'hooks/charmhelpers'
=== added file 'hooks/charmhelpers/__init__.py'
=== added directory 'hooks/charmhelpers/contrib'
=== added file 'hooks/charmhelpers/contrib/__init__.py'
=== added directory 'hooks/charmhelpers/contrib/charmsupport'
=== added file 'hooks/charmhelpers/contrib/charmsupport/__init__.py'
=== added file 'hooks/charmhelpers/contrib/charmsupport/nrpe.py'
--- hooks/charmhelpers/contrib/charmsupport/nrpe.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/charmsupport/nrpe.py 2013-10-10 22:34:35 +0000
@@ -0,0 +1,218 @@
+"""Compatibility with the nrpe-external-master charm"""
+# Copyright 2012 Canonical Ltd.
+#
+# Authors:
+#  Matthew Wedgwood <matthew.wedgwood@canonical.com>
+
+import subprocess
+import pwd
+import grp
+import os
+import re
+import shlex
+import yaml
+
+from charmhelpers.core.hookenv import (
+    config,
+    local_unit,
+    log,
+    relation_ids,
+    relation_set,
+)
+
+from charmhelpers.core.host import service
+
+# This module adds compatibility with the nrpe-external-master and plain nrpe
+# subordinate charms. To use it in your charm:
+#
+# 1. Update metadata.yaml
+#
+#   provides:
+#     (...)
+#     nrpe-external-master:
+#       interface: nrpe-external-master
+#       scope: container
+#
+#   and/or
+#
+#   provides:
+#     (...)
+#     local-monitors:
+#       interface: local-monitors
+#       scope: container
+
+#
+# 2. Add the following to config.yaml
+#
+#    nagios_context:
+#      default: "juju"
+#      type: string
+#      description: |
+#        Used by the nrpe subordinate charms.
+#        A string that will be prepended to instance name to set the host name
+#        in nagios. So for instance the hostname would be something like:
+#            juju-myservice-0
+#        If you're running multiple environments with the same services in them
+#        this allows you to differentiate between them.
+#
+# 3. Add custom checks (Nagios plugins) to files/nrpe-external-master
+#
+# 4. Update your hooks.py with something like this:
+#
+#    from charmsupport.nrpe import NRPE
+#    (...)
+#    def update_nrpe_config():
+#        nrpe_compat = NRPE()
+#        nrpe_compat.add_check(
+#            shortname = "myservice",
+#            description = "Check MyService",
+#            check_cmd = "check_http -w 2 -c 10 http://localhost"
+#            )
+#        nrpe_compat.add_check(
+#            "myservice_other",
+#            "Check for widget failures",
+#            check_cmd = "/srv/myapp/scripts/widget_check"
+#            )
+#        nrpe_compat.write()
+#
+#    def config_changed():
+#        (...)
+#        update_nrpe_config()
+#
+#    def nrpe_external_master_relation_changed():
+#        update_nrpe_config()
+#
+#    def local_monitors_relation_changed():
+#        update_nrpe_config()
+#
+# 5. ln -s hooks.py nrpe-external-master-relation-changed
+#    ln -s hooks.py local-monitors-relation-changed
+
+
+class CheckException(Exception):
+    pass
+
+
+class Check(object):
+    shortname_re = '[A-Za-z0-9-_]+$'
+    service_template = ("""
+#---------------------------------------------------
+# This file is Juju managed
+#---------------------------------------------------
+define service {{
+    use                             active-service
+    host_name                       {nagios_hostname}
+    service_description             {nagios_hostname}[{shortname}] """
+                        """{description}
+    check_command                   check_nrpe!{command}
+    servicegroups                   {nagios_servicegroup}
+}}
+""")
+
+    def __init__(self, shortname, description, check_cmd):
+        super(Check, self).__init__()
+        # XXX: could be better to calculate this from the service name
+        if not re.match(self.shortname_re, shortname):
+            raise CheckException("shortname must match {}".format(
+                Check.shortname_re))
+        self.shortname = shortname
+        self.command = "check_{}".format(shortname)
+        # Note: a set of invalid characters is defined by the
+        # Nagios server config
+        # The default is: illegal_object_name_chars=`~!$%^&*"|'<>?,()=
+        self.description = description
+        self.check_cmd = self._locate_cmd(check_cmd)
+
+    def _locate_cmd(self, check_cmd):
+        search_path = (
+            '/',
+            os.path.join(os.environ['CHARM_DIR'],
+                         'files/nrpe-external-master'),
+            '/usr/lib/nagios/plugins',
+        )
+        parts = shlex.split(check_cmd)
+        for path in search_path:
+            if os.path.exists(os.path.join(path, parts[0])):
+                command = os.path.join(path, parts[0])
+                if len(parts) > 1:
+                    command += " " + " ".join(parts[1:])
+                return command
+        log('Check command not found: {}'.format(parts[0]))
+        return ''
+
+    def write(self, nagios_context, hostname):
+        nrpe_check_file = '/etc/nagios/nrpe.d/{}.cfg'.format(
+            self.command)
+        with open(nrpe_check_file, 'w') as nrpe_check_config:
+            nrpe_check_config.write("# check {}\n".format(self.shortname))
+            nrpe_check_config.write("command[{}]={}\n".format(
+                self.command, self.check_cmd))
+
+        if not os.path.exists(NRPE.nagios_exportdir):
+            log('Not writing service config as {} is not accessible'.format(
+                NRPE.nagios_exportdir))
+        else:
+            self.write_service_config(nagios_context, hostname)
+
+    def write_service_config(self, nagios_context, hostname):
+        for f in os.listdir(NRPE.nagios_exportdir):
+            if re.search('.*{}.cfg'.format(self.command), f):
+                os.remove(os.path.join(NRPE.nagios_exportdir, f))
+
+        templ_vars = {
+            'nagios_hostname': hostname,
+            'nagios_servicegroup': nagios_context,
+            'description': self.description,
+            'shortname': self.shortname,
+            'command': self.command,
+        }
+        nrpe_service_text = Check.service_template.format(**templ_vars)
+        nrpe_service_file = '{}/service__{}_{}.cfg'.format(
+            NRPE.nagios_exportdir, hostname, self.command)
+        with open(nrpe_service_file, 'w') as nrpe_service_config:
+            nrpe_service_config.write(str(nrpe_service_text))
+
+    def run(self):
+        subprocess.call(self.check_cmd)
+
+
+class NRPE(object):
+    nagios_logdir = '/var/log/nagios'
+    nagios_exportdir = '/var/lib/nagios/export'
+    nrpe_confdir = '/etc/nagios/nrpe.d'
+
+    def __init__(self):
+        super(NRPE, self).__init__()
+        self.config = config()
+        self.nagios_context = self.config['nagios_context']
+        self.unit_name = local_unit().replace('/', '-')
+        self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
+        self.checks = []
+
+    def add_check(self, *args, **kwargs):
+        self.checks.append(Check(*args, **kwargs))
+
+    def write(self):
+        try:
+            nagios_uid = pwd.getpwnam('nagios').pw_uid
+            nagios_gid = grp.getgrnam('nagios').gr_gid
+        except:
+            log("Nagios user not set up, nrpe checks not updated")
+            return
+
+        if not os.path.exists(NRPE.nagios_logdir):
+            os.mkdir(NRPE.nagios_logdir)
+            os.chown(NRPE.nagios_logdir, nagios_uid, nagios_gid)
+
+        nrpe_monitors = {}
+        monitors = {"monitors": {"remote": {"nrpe": nrpe_monitors}}}
+        for nrpecheck in self.checks:
+            nrpecheck.write(self.nagios_context, self.hostname)
+            nrpe_monitors[nrpecheck.shortname] = {
+                "command": nrpecheck.command,
+            }
+
+        service('restart', 'nagios-nrpe-server')
+
+        for rid in relation_ids("local-monitors"):
+            relation_set(relation_id=rid, monitors=yaml.dump(monitors))
=== added file 'hooks/charmhelpers/contrib/charmsupport/volumes.py'
--- hooks/charmhelpers/contrib/charmsupport/volumes.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/charmsupport/volumes.py 2013-10-10 22:34:35 +0000
@@ -0,0 +1,156 @@
1'''
2Functions for managing volumes in juju units. One volume is supported per unit.
3Subordinates may have their own storage, provided it is on its own partition.
4
5Configuration stanzas:
6 volume-ephemeral:
7 type: boolean
8 default: true
9 description: >
10 If false, a volume is mounted as sepecified in "volume-map"
11 If true, ephemeral storage will be used, meaning that log data
12 will only exist as long as the machine. YOU HAVE BEEN WARNED.
13 volume-map:
14 type: string
15 default: {}
16 description: >
17 YAML map of units to device names, e.g:
18 "{ rsyslog/0: /dev/vdb, rsyslog/1: /dev/vdb }"
19 Service units will raise a configure-error if volume-ephemeral
20 is 'true' and no volume-map value is set. Use 'juju set' to set a
21 value and 'juju resolved' to complete configuration.
22
23Usage:
24 from charmsupport.volumes import configure_volume, VolumeConfigurationError
25 from charmsupport.hookenv import log, ERROR
26 def post_mount_hook():
27 stop_service('myservice')
28 def post_mount_hook():
29 start_service('myservice')
30
31 if __name__ == '__main__':
32 try:
33 configure_volume(before_change=pre_mount_hook,
34 after_change=post_mount_hook)
35 except VolumeConfigurationError:
36 log('Storage could not be configured', ERROR)
37'''
38
39# XXX: Known limitations
40# - fstab is neither consulted nor updated
41
42import os
43from charmhelpers.core import hookenv
44from charmhelpers.core import host
45import yaml
46
47
48MOUNT_BASE = '/srv/juju/volumes'
49
50
51class VolumeConfigurationError(Exception):
52 '''Volume configuration data is missing or invalid'''
53 pass
54
55
56def get_config():
57 '''Gather and sanity-check volume configuration data'''
58 volume_config = {}
59 config = hookenv.config()
60
61 errors = False
62
63 if config.get('volume-ephemeral') in (True, 'True', 'true', 'Yes', 'yes'):
64 volume_config['ephemeral'] = True
65 else:
66 volume_config['ephemeral'] = False
67
68 try:
69 volume_map = yaml.safe_load(config.get('volume-map', '{}'))
70 except yaml.YAMLError as e:
71 hookenv.log("Error parsing YAML volume-map: {}".format(e),
72 hookenv.ERROR)
73 errors = True
74 volume_map = {}
75 if volume_map is None:
76 volume_map = {}  # probably an empty string
77 elif not isinstance(volume_map, dict):
78 hookenv.log("Volume-map should be a dictionary, not {}".format(
79 type(volume_map)), hookenv.ERROR)
80 errors = True
81 volume_map = {}
82 volume_config['device'] = volume_map.get(os.environ['JUJU_UNIT_NAME'])
83 if volume_config['device'] and volume_config['ephemeral']:
84 # asked for ephemeral storage but also defined a volume ID
85 hookenv.log('A volume is defined for this unit, but ephemeral '
86 'storage was requested', hookenv.ERROR)
87 errors = True
88 elif not volume_config['device'] and not volume_config['ephemeral']:
89 # asked for permanent storage but did not define volume ID
90 hookenv.log('Permanent storage was requested, but there is no volume '
91 'defined for this unit.', hookenv.ERROR)
92 errors = True
93
94 unit_mount_name = hookenv.local_unit().replace('/', '-')
95 volume_config['mountpoint'] = os.path.join(MOUNT_BASE, unit_mount_name)
96
97 if errors:
98 return None
99 return volume_config
100
101
102def mount_volume(config):
103 if os.path.exists(config['mountpoint']):
104 if not os.path.isdir(config['mountpoint']):
105 hookenv.log('Not a directory: {}'.format(config['mountpoint']))
106 raise VolumeConfigurationError()
107 else:
108 host.mkdir(config['mountpoint'])
109 if os.path.ismount(config['mountpoint']):
110 unmount_volume(config)
111 if not host.mount(config['device'], config['mountpoint'], persist=True):
112 raise VolumeConfigurationError()
113
114
115def unmount_volume(config):
116 if os.path.ismount(config['mountpoint']):
117 if not host.umount(config['mountpoint'], persist=True):
118 raise VolumeConfigurationError()
119
120
121def managed_mounts():
122 '''List of all mounted managed volumes'''
123 return filter(lambda mount: mount[0].startswith(MOUNT_BASE), host.mounts())
124
125
126def configure_volume(before_change=lambda: None, after_change=lambda: None):
127 '''Set up storage (or don't) according to the charm's volume configuration.
128 Returns the mount point or "ephemeral". before_change and after_change
129 are optional functions to be called if the volume configuration changes.
130 '''
131
132 config = get_config()
133 if not config:
134 hookenv.log('Failed to read volume configuration', hookenv.CRITICAL)
135 raise VolumeConfigurationError()
136
137 if config['ephemeral']:
138 if os.path.ismount(config['mountpoint']):
139 before_change()
140 unmount_volume(config)
141 after_change()
142 return 'ephemeral'
143 else:
144 # persistent storage
145 if os.path.ismount(config['mountpoint']):
146 mounts = dict(managed_mounts())
147 if mounts.get(config['mountpoint']) != config['device']:
148 before_change()
149 unmount_volume(config)
150 mount_volume(config)
151 after_change()
152 else:
153 before_change()
154 mount_volume(config)
155 after_change()
156 return config['mountpoint']
0157
=== added directory 'hooks/charmhelpers/core'
=== added file 'hooks/charmhelpers/core/__init__.py'
=== added file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/hookenv.py 2013-10-10 22:34:35 +0000
@@ -0,0 +1,340 @@
1"Interactions with the Juju environment"
2# Copyright 2013 Canonical Ltd.
3#
4# Authors:
5# Charm Helpers Developers <juju@lists.ubuntu.com>
6
7import os
8import json
9import yaml
10import subprocess
11import UserDict
12
13CRITICAL = "CRITICAL"
14ERROR = "ERROR"
15WARNING = "WARNING"
16INFO = "INFO"
17DEBUG = "DEBUG"
18MARKER = object()
19
20cache = {}
21
22
23def cached(func):
24 ''' Cache return values for multiple executions of func + args
25
26 For example:
27
28 @cached
29 def unit_get(attribute):
30 pass
31
32 unit_get('test')
33
34 will cache the result of unit_get + 'test' for future calls.
35 '''
36 def wrapper(*args, **kwargs):
37 global cache
38 key = str((func, args, kwargs))
39 try:
40 return cache[key]
41 except KeyError:
42 res = func(*args, **kwargs)
43 cache[key] = res
44 return res
45 return wrapper
46
47
48def flush(key):
49 ''' Flushes any entries from function cache where the
50 key is found in the function+args '''
51 flush_list = []
52 for item in cache:
53 if key in item:
54 flush_list.append(item)
55 for item in flush_list:
56 del cache[item]
57
58
59def log(message, level=None):
60 "Write a message to the juju log"
61 command = ['juju-log']
62 if level:
63 command += ['-l', level]
64 command += [message]
65 subprocess.call(command)
66
67
68class Serializable(UserDict.IterableUserDict):
69 "Wrapper, an object that can be serialized to yaml or json"
70
71 def __init__(self, obj):
72 # wrap the object
73 UserDict.IterableUserDict.__init__(self)
74 self.data = obj
75
76 def __getattr__(self, attr):
77 # See if this object has attribute.
78 if attr in ("json", "yaml", "data"):
79 return self.__dict__[attr]
80 # Check for attribute in wrapped object.
81 got = getattr(self.data, attr, MARKER)
82 if got is not MARKER:
83 return got
84 # Proxy to the wrapped object via dict interface.
85 try:
86 return self.data[attr]
87 except KeyError:
88 raise AttributeError(attr)
89
90 def __getstate__(self):
91 # Pickle as a standard dictionary.
92 return self.data
93
94 def __setstate__(self, state):
95 # Unpickle into our wrapper.
96 self.data = state
97
98 def json(self):
99 "Serialize the object to json"
100 return json.dumps(self.data)
101
102 def yaml(self):
103 "Serialize the object to yaml"
104 return yaml.dump(self.data)
105
106
107def execution_environment():
108 """A convenient bundling of the current execution context"""
109 context = {}
110 context['conf'] = config()
111 if relation_id():
112 context['reltype'] = relation_type()
113 context['relid'] = relation_id()
114 context['rel'] = relation_get()
115 context['unit'] = local_unit()
116 context['rels'] = relations()
117 context['env'] = os.environ
118 return context
119
120
121def in_relation_hook():
122 "Determine whether we're running in a relation hook"
123 return 'JUJU_RELATION' in os.environ
124
125
126def relation_type():
127 "The scope for the current relation hook"
128 return os.environ.get('JUJU_RELATION', None)
129
130
131def relation_id():
132 "The relation ID for the current relation hook"
133 return os.environ.get('JUJU_RELATION_ID', None)
134
135
136def local_unit():
137 "Local unit ID"
138 return os.environ['JUJU_UNIT_NAME']
139
140
141def remote_unit():
142 "The remote unit for the current relation hook"
143 return os.environ['JUJU_REMOTE_UNIT']
144
145
146def service_name():
147 "The name of the service group this unit belongs to"
148 return local_unit().split('/')[0]
149
150
151@cached
152def config(scope=None):
153 "Juju charm configuration"
154 config_cmd_line = ['config-get']
155 if scope is not None:
156 config_cmd_line.append(scope)
157 config_cmd_line.append('--format=json')
158 try:
159 return json.loads(subprocess.check_output(config_cmd_line))
160 except ValueError:
161 return None
162
163
164@cached
165def relation_get(attribute=None, unit=None, rid=None):
166 _args = ['relation-get', '--format=json']
167 if rid:
168 _args.append('-r')
169 _args.append(rid)
170 _args.append(attribute or '-')
171 if unit:
172 _args.append(unit)
173 try:
174 return json.loads(subprocess.check_output(_args))
175 except ValueError:
176 return None
177
178
179def relation_set(relation_id=None, relation_settings={}, **kwargs):
180 relation_cmd_line = ['relation-set']
181 if relation_id is not None:
182 relation_cmd_line.extend(('-r', relation_id))
183 for k, v in (relation_settings.items() + kwargs.items()):
184 if v is None:
185 relation_cmd_line.append('{}='.format(k))
186 else:
187 relation_cmd_line.append('{}={}'.format(k, v))
188 subprocess.check_call(relation_cmd_line)
189 # Flush cache of any relation-gets for local unit
190 flush(local_unit())
191
192
193@cached
194def relation_ids(reltype=None):
195 "A list of relation_ids"
196 reltype = reltype or relation_type()
197 relid_cmd_line = ['relation-ids', '--format=json']
198 if reltype is not None:
199 relid_cmd_line.append(reltype)
200 return json.loads(subprocess.check_output(relid_cmd_line)) or []
201
202
203
204@cached
205def related_units(relid=None):
206 "A list of related units"
207 relid = relid or relation_id()
208 units_cmd_line = ['relation-list', '--format=json']
209 if relid is not None:
210 units_cmd_line.extend(('-r', relid))
211 return json.loads(subprocess.check_output(units_cmd_line)) or []
212
213
214@cached
215def relation_for_unit(unit=None, rid=None):
216 "Get the json representation of a unit's relation"
217 unit = unit or remote_unit()
218 relation = relation_get(unit=unit, rid=rid)
219 for key in relation:
220 if key.endswith('-list'):
221 relation[key] = relation[key].split()
222 relation['__unit__'] = unit
223 return relation
224
225
226@cached
227def relations_for_id(relid=None):
228 "Get relations of a specific relation ID"
229 relation_data = []
230 relid = relid or relation_id()
231 for unit in related_units(relid):
232 unit_data = relation_for_unit(unit, relid)
233 unit_data['__relid__'] = relid
234 relation_data.append(unit_data)
235 return relation_data
236
237
238@cached
239def relations_of_type(reltype=None):
240 "Get relations of a specific type"
241 relation_data = []
242 reltype = reltype or relation_type()
243 for relid in relation_ids(reltype):
244 for relation in relations_for_id(relid):
245 relation['__relid__'] = relid
246 relation_data.append(relation)
247 return relation_data
248
249
250@cached
251def relation_types():
252 "Get a list of relation types supported by this charm"
253 charmdir = os.environ.get('CHARM_DIR', '')
254 mdf = open(os.path.join(charmdir, 'metadata.yaml'))
255 md = yaml.safe_load(mdf)
256 rel_types = []
257 for key in ('provides', 'requires', 'peers'):
258 section = md.get(key)
259 if section:
260 rel_types.extend(section.keys())
261 mdf.close()
262 return rel_types
263
264
265@cached
266def relations():
267 rels = {}
268 for reltype in relation_types():
269 relids = {}
270 for relid in relation_ids(reltype):
271 units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
272 for unit in related_units(relid):
273 reldata = relation_get(unit=unit, rid=relid)
274 units[unit] = reldata
275 relids[relid] = units
276 rels[reltype] = relids
277 return rels
278
279
280def open_port(port, protocol="TCP"):
281 "Open a service network port"
282 _args = ['open-port']
283 _args.append('{}/{}'.format(port, protocol))
284 subprocess.check_call(_args)
285
286
287def close_port(port, protocol="TCP"):
288 "Close a service network port"
289 _args = ['close-port']
290 _args.append('{}/{}'.format(port, protocol))
291 subprocess.check_call(_args)
292
293
294@cached
295def unit_get(attribute):
296 _args = ['unit-get', '--format=json', attribute]
297 try:
298 return json.loads(subprocess.check_output(_args))
299 except ValueError:
300 return None
301
302
303def unit_private_ip():
304 return unit_get('private-address')
305
306
307class UnregisteredHookError(Exception):
308 pass
309
310
311class Hooks(object):
312 def __init__(self):
313 super(Hooks, self).__init__()
314 self._hooks = {}
315
316 def register(self, name, function):
317 self._hooks[name] = function
318
319 def execute(self, args):
320 hook_name = os.path.basename(args[0])
321 if hook_name in self._hooks:
322 self._hooks[hook_name]()
323 else:
324 raise UnregisteredHookError(hook_name)
325
326 def hook(self, *hook_names):
327 def wrapper(decorated):
328 for hook_name in hook_names:
329 self.register(hook_name, decorated)
330 else:
331 self.register(decorated.__name__, decorated)
332 if '_' in decorated.__name__:
333 self.register(
334 decorated.__name__.replace('_', '-'), decorated)
335 return decorated
336 return wrapper
337
338
339def charm_dir():
340 return os.environ.get('CHARM_DIR')
0341
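The cached/flush pair in hookenv.py memoizes Juju CLI calls for the duration of a hook; relation_set relies on flush(local_unit()) matching the unit name embedded in the cache key string. A standalone sketch of that pattern (no Juju tools required; the `calls` list is added here purely for illustration):

```python
# Minimal mirror of hookenv's cached/flush memoization, runnable
# without Juju. Cache keys are str((func, args, kwargs)), so any
# substring of the arguments (e.g. a unit name) can be flushed.
cache = {}


def cached(func):
    """Cache return values for repeated calls with the same arguments."""
    def wrapper(*args, **kwargs):
        key = str((func, args, kwargs))
        try:
            return cache[key]
        except KeyError:
            res = func(*args, **kwargs)
            cache[key] = res
            return res
    return wrapper


def flush(key):
    """Drop every cache entry whose key string contains `key`."""
    for item in [k for k in cache if key in k]:
        del cache[item]


calls = []  # records real (non-cached) invocations, for illustration


@cached
def unit_get(attribute):
    calls.append(attribute)
    return 'value-of-' + attribute


unit_get('private-address')
unit_get('private-address')  # served from cache, no second call
flush('private-address')     # as relation_set does for the local unit
unit_get('private-address')  # re-executed after the flush
```

Note the caveat this sketch makes visible: keying on the stringified arguments means flushing is substring-based, which is exactly why flush(local_unit()) in relation_set invalidates all cached relation-gets mentioning that unit.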
=== added file 'hooks/charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/host.py 2013-10-10 22:34:35 +0000
@@ -0,0 +1,239 @@
1"""Tools for working with the host system"""
2# Copyright 2012 Canonical Ltd.
3#
4# Authors:
5# Nick Moffitt <nick.moffitt@canonical.com>
6# Matthew Wedgwood <matthew.wedgwood@canonical.com>
7
8import os
9import pwd
10import grp
11import random
12import string
13import subprocess
14import hashlib
15
16from collections import OrderedDict
17
18from hookenv import log
19
20
21def service_start(service_name):
22 service('start', service_name)
23
24
25def service_stop(service_name):
26 service('stop', service_name)
27
28
29def service_restart(service_name):
30 service('restart', service_name)
31
32
33def service_reload(service_name, restart_on_failure=False):
34 if not service('reload', service_name) and restart_on_failure:
35 service('restart', service_name)
36
37
38def service(action, service_name):
39 cmd = ['service', service_name, action]
40 return subprocess.call(cmd) == 0
41
42
43def service_running(service):
44 try:
45 output = subprocess.check_output(['service', service, 'status'])
46 except subprocess.CalledProcessError:
47 return False
48 else:
49 if ("start/running" in output or "is running" in output):
50 return True
51 else:
52 return False
53
54
55def adduser(username, password=None, shell='/bin/bash', system_user=False):
56 """Add a user"""
57 try:
58 user_info = pwd.getpwnam(username)
59 log('user {0} already exists!'.format(username))
60 except KeyError:
61 log('creating user {0}'.format(username))
62 cmd = ['useradd']
63 if system_user or password is None:
64 cmd.append('--system')
65 else:
66 cmd.extend([
67 '--create-home',
68 '--shell', shell,
69 '--password', password,
70 ])
71 cmd.append(username)
72 subprocess.check_call(cmd)
73 user_info = pwd.getpwnam(username)
74 return user_info
75
76
77def add_user_to_group(username, group):
78 """Add a user to a group"""
79 cmd = [
80 'gpasswd', '-a',
81 username,
82 group
83 ]
84 log("Adding user {} to group {}".format(username, group))
85 subprocess.check_call(cmd)
86
87
88def rsync(from_path, to_path, flags='-r', options=None):
89 """Replicate the contents of a path"""
90 options = options or ['--delete', '--executability']
91 cmd = ['/usr/bin/rsync', flags]
92 cmd.extend(options)
93 cmd.append(from_path)
94 cmd.append(to_path)
95 log(" ".join(cmd))
96 return subprocess.check_output(cmd).strip()
97
98
99def symlink(source, destination):
100 """Create a symbolic link"""
101 log("Symlinking {} as {}".format(source, destination))
102 cmd = [
103 'ln',
104 '-sf',
105 source,
106 destination,
107 ]
108 subprocess.check_call(cmd)
109
110
111def mkdir(path, owner='root', group='root', perms=0555, force=False):
112 """Create a directory"""
113 log("Making dir {} {}:{} {:o}".format(path, owner, group,
114 perms))
115 uid = pwd.getpwnam(owner).pw_uid
116 gid = grp.getgrnam(group).gr_gid
117 realpath = os.path.abspath(path)
118 if force and os.path.exists(realpath) and not os.path.isdir(realpath):
119 log("Removing non-directory file {} prior to mkdir()".format(path))
120 os.unlink(realpath)
121 if not os.path.exists(realpath):
122 os.makedirs(realpath, perms)
123 os.chown(realpath, uid, gid)
124
125
126
127def write_file(path, content, owner='root', group='root', perms=0444):
128 """Create or overwrite a file with the contents of a string"""
129 log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
130 uid = pwd.getpwnam(owner).pw_uid
131 gid = grp.getgrnam(group).gr_gid
132 with open(path, 'w') as target:
133 os.fchown(target.fileno(), uid, gid)
134 os.fchmod(target.fileno(), perms)
135 target.write(content)
136
137
138def mount(device, mountpoint, options=None, persist=False):
139 '''Mount a filesystem'''
140 cmd_args = ['mount']
141 if options is not None:
142 cmd_args.extend(['-o', options])
143 cmd_args.extend([device, mountpoint])
144 try:
145 subprocess.check_output(cmd_args)
146 except subprocess.CalledProcessError, e:
147 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
148 return False
149 if persist:
150 # TODO: update fstab
151 pass
152 return True
153
154
155def umount(mountpoint, persist=False):
156 '''Unmount a filesystem'''
157 cmd_args = ['umount', mountpoint]
158 try:
159 subprocess.check_output(cmd_args)
160 except subprocess.CalledProcessError, e:
161 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
162 return False
163 if persist:
164 # TODO: update fstab
165 pass
166 return True
167
168
169def mounts():
170 '''List of all mounted volumes as [[mountpoint,device],[...]]'''
171 with open('/proc/mounts') as f:
172 # [['/mount/point','/dev/path'],[...]]
173 system_mounts = [m[1::-1] for m in [l.strip().split()
174 for l in f.readlines()]]
175 return system_mounts
176
177
178def file_hash(path):
179 ''' Generate an md5 hash of the contents of 'path' or None if not found '''
180 if os.path.exists(path):
181 h = hashlib.md5()
182 with open(path, 'r') as source:
183 h.update(source.read()) # IGNORE:E1101 - it does have update
184 return h.hexdigest()
185 else:
186 return None
187
188
189def restart_on_change(restart_map):
190 ''' Restart services based on configuration files changing
191
192 This function is used as a decorator, for example:
193
194 @restart_on_change({
195 '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
196 })
197 def ceph_client_changed():
198 ...
199
200 In this example, the cinder-api and cinder-volume services
201 would be restarted if /etc/ceph/ceph.conf is changed by the
202 ceph_client_changed function.
203 '''
204 def wrap(f):
205 def wrapped_f(*args):
206 checksums = {}
207 for path in restart_map:
208 checksums[path] = file_hash(path)
209 f(*args)
210 restarts = []
211 for path in restart_map:
212 if checksums[path] != file_hash(path):
213 restarts += restart_map[path]
214 for service_name in list(OrderedDict.fromkeys(restarts)):
215 service('restart', service_name)
216 return wrapped_f
217 return wrap
218
219
220def lsb_release():
221 '''Return /etc/lsb-release in a dict'''
222 d = {}
223 with open('/etc/lsb-release', 'r') as lsb:
224 for l in lsb:
225 k, v = l.split('=')
226 d[k.strip()] = v.strip()
227 return d
228
229
230def pwgen(length=None):
231 '''Generate a random password.'''
232 if length is None:
233 length = random.choice(range(35, 45))
234 alphanumeric_chars = [
235 l for l in (string.letters + string.digits)
236 if l not in 'l0QD1vAEIOUaeiou']
237 random_chars = [
238 random.choice(alphanumeric_chars) for _ in range(length)]
239 return(''.join(random_chars))
0240
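The restart_on_change decorator in host.py hashes the watched files before and after the wrapped hook runs, then restarts only the services mapped to files that actually changed. A standalone mirror of that logic, with the `service restart` shell-out replaced by a recording test double (`do_restart`, an assumption of this sketch):

```python
import hashlib
import os
import tempfile
from collections import OrderedDict


def file_hash(path):
    """md5 of a file's contents, or None if it does not exist."""
    if not os.path.exists(path):
        return None
    with open(path, 'rb') as source:
        return hashlib.md5(source.read()).hexdigest()


restarted = []


def do_restart(service_name):
    # Stand-in for host.service('restart', service_name).
    restarted.append(service_name)


def restart_on_change(restart_map):
    """Mirror of host.py: snapshot hashes, run the hook, diff, restart."""
    def wrap(f):
        def wrapped_f(*args):
            checksums = {path: file_hash(path) for path in restart_map}
            f(*args)
            restarts = []
            for path in restart_map:
                if checksums[path] != file_hash(path):
                    restarts += restart_map[path]
            # de-duplicate while preserving order, as host.py does
            for service_name in list(OrderedDict.fromkeys(restarts)):
                do_restart(service_name)
        return wrapped_f
    return wrap


conf = tempfile.NamedTemporaryFile(delete=False)
conf.close()


@restart_on_change({conf.name: ['haproxy']})
def config_changed():
    with open(conf.name, 'w') as f:
        f.write('global\n    maxconn 4096\n')


config_changed()
os.unlink(conf.name)
```

The OrderedDict.fromkeys step matters when several changed files map to the same service: it restarts once, in first-seen order.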
=== added directory 'hooks/charmhelpers/fetch'
=== added file 'hooks/charmhelpers/fetch/__init__.py'
--- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/__init__.py 2013-10-10 22:34:35 +0000
@@ -0,0 +1,209 @@
1import importlib
2from yaml import safe_load
3from charmhelpers.core.host import (
4 lsb_release
5)
6from urlparse import (
7 urlparse,
8 urlunparse,
9)
10import subprocess
11from charmhelpers.core.hookenv import (
12 config,
13 log,
14)
15import apt_pkg
16
17CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
18deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
19"""
20PROPOSED_POCKET = """# Proposed
21deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
22"""
23
24
25def filter_installed_packages(packages):
26 """Returns a list of packages that require installation"""
27 apt_pkg.init()
28 cache = apt_pkg.Cache()
29 _pkgs = []
30 for package in packages:
31 try:
32 p = cache[package]
33 p.current_ver or _pkgs.append(package)
34 except KeyError:
35 log('Package {} has no installation candidate.'.format(package),
36 level='WARNING')
37 _pkgs.append(package)
38 return _pkgs
39
40
41def apt_install(packages, options=None, fatal=False):
42 """Install one or more packages"""
43 options = options or []
44 cmd = ['apt-get', '-y']
45 cmd.extend(options)
46 cmd.append('install')
47 if isinstance(packages, basestring):
48 cmd.append(packages)
49 else:
50 cmd.extend(packages)
51 log("Installing {} with options: {}".format(packages,
52 options))
53 if fatal:
54 subprocess.check_call(cmd)
55 else:
56 subprocess.call(cmd)
57
58
59def apt_update(fatal=False):
60 """Update local apt cache"""
61 cmd = ['apt-get', 'update']
62 if fatal:
63 subprocess.check_call(cmd)
64 else:
65 subprocess.call(cmd)
66
67
68def apt_purge(packages, fatal=False):
69 """Purge one or more packages"""
70 cmd = ['apt-get', '-y', 'purge']
71 if isinstance(packages, basestring):
72 cmd.append(packages)
73 else:
74 cmd.extend(packages)
75 log("Purging {}".format(packages))
76 if fatal:
77 subprocess.check_call(cmd)
78 else:
79 subprocess.call(cmd)
80
81
82def add_source(source, key=None):
83 if ((source.startswith('ppa:') or
84 source.startswith('http:'))):
85 subprocess.check_call(['add-apt-repository', '--yes', source])
86 elif source.startswith('cloud:'):
87 apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
88 fatal=True)
89 pocket = source.split(':')[-1]
90 with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
91 apt.write(CLOUD_ARCHIVE.format(pocket))
92 elif source == 'proposed':
93 release = lsb_release()['DISTRIB_CODENAME']
94 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
95 apt.write(PROPOSED_POCKET.format(release))
96 if key:
97 subprocess.check_call(['apt-key', 'import', key])
98
99
100class SourceConfigError(Exception):
101 pass
102
103
104def configure_sources(update=False,
105 sources_var='install_sources',
106 keys_var='install_keys'):
107 """
108 Configure multiple sources from charm configuration
109
110 Example config:
111 install_sources:
112 - "ppa:foo"
113 - "http://example.com/repo precise main"
114 install_keys:
115 - null
116 - "a1b2c3d4"
117
118 Note that 'null' (a.k.a. None) should not be quoted.
119 """
120 sources = safe_load(config(sources_var))
121 keys = safe_load(config(keys_var))
122 if isinstance(sources, basestring) and isinstance(keys, basestring):
123 add_source(sources, keys)
124 else:
125 if not len(sources) == len(keys):
126 msg = 'Install sources and keys lists are different lengths'
127 raise SourceConfigError(msg)
128 for src_num in range(len(sources)):
129 add_source(sources[src_num], keys[src_num])
130 if update:
131 apt_update(fatal=True)
132
133# The order of this list is very important. Handlers should be listed
134# from least- to most-specific URL matching.
135FETCH_HANDLERS = (
136 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
137 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
138)
139
140
141class UnhandledSource(Exception):
142 pass
143
144
145def install_remote(source):
146 """
147 Install a file tree from a remote source
148
149 The specified source should be a url of the form:
150 scheme://[host]/path[#[option=value][&...]]
151
152 Schemes supported are based on this module's submodules
153 Options supported are submodule-specific"""
154 # We ONLY check for True here because can_handle may return a string
155 # explaining why it can't handle a given source.
156 handlers = [h for h in plugins() if h.can_handle(source) is True]
157 installed_to = None
158 for handler in handlers:
159 try:
160 installed_to = handler.install(source)
161 except UnhandledSource:
162 pass
163 if not installed_to:
164 raise UnhandledSource("No handler found for source {}".format(source))
165 return installed_to
166
167
168def install_from_config(config_var_name):
169 charm_config = config()
170 source = charm_config[config_var_name]
171 return install_remote(source)
172
173
174class BaseFetchHandler(object):
175 """Base class for FetchHandler implementations in fetch plugins"""
176 def can_handle(self, source):
177 """Returns True if the source can be handled. Otherwise returns
178 a string explaining why it cannot"""
179 return "Wrong source type"
180
181 def install(self, source):
182 """Try to download and unpack the source. Return the path to the
183 unpacked files or raise UnhandledSource."""
184 raise UnhandledSource("Wrong source type {}".format(source))
185
186 def parse_url(self, url):
187 return urlparse(url)
188
189 def base_url(self, url):
190 """Return url without querystring or fragment"""
191 parts = list(self.parse_url(url))
192 parts[4:] = ['' for i in parts[4:]]
193 return urlunparse(parts)
194
195
196def plugins(fetch_handlers=None):
197 if not fetch_handlers:
198 fetch_handlers = FETCH_HANDLERS
199 plugin_list = []
200 for handler_name in fetch_handlers:
201 package, classname = handler_name.rsplit('.', 1)
202 try:
203 handler_class = getattr(importlib.import_module(package), classname)
204 plugin_list.append(handler_class())
205 except (ImportError, AttributeError):
206 # Skip missing plugins so that they can be omitted from
207 # installation if desired
208 log("FetchHandler {} not found, skipping plugin".format(handler_name))
209 return plugin_list
0210
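BaseFetchHandler.base_url strips the querystring and fragment so fetch options appended to a source URL (the `#[option=value]` form described in install_remote's docstring) do not leak into the download itself. A sketch of that transformation; note the charm-helpers original is Python 2 and imports from the `urlparse` module, whereas this sketch uses the Python 3 equivalent:

```python
from urllib.parse import urlparse, urlunparse


def base_url(url):
    """Return url without params, querystring or fragment
    (mirrors BaseFetchHandler.base_url)."""
    parts = list(urlparse(url))
    # positions 4 and 5 of the 6-tuple are query and fragment
    parts[4:] = ['' for _ in parts[4:]]
    return urlunparse(parts)
```

For example, base_url('http://example.com/foo.tar.gz?x=1#sha=abc') drops everything after the path, while URLs without options pass through unchanged.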
=== added file 'hooks/charmhelpers/fetch/archiveurl.py'
--- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/archiveurl.py 2013-10-10 22:34:35 +0000
@@ -0,0 +1,48 @@
1import os
2import urllib2
3from charmhelpers.fetch import (
4 BaseFetchHandler,
5 UnhandledSource
6)
7from charmhelpers.payload.archive import (
8 get_archive_handler,
9 extract,
10)
11from charmhelpers.core.host import mkdir
12
13
14class ArchiveUrlFetchHandler(BaseFetchHandler):
15 """Handler for archives via generic URLs"""
16 def can_handle(self, source):
17 url_parts = self.parse_url(source)
18 if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
19 return "Wrong source type"
20 if get_archive_handler(self.base_url(source)):
21 return True
22 return False
23
24 def download(self, source, dest):
25 # propagate all exceptions
26 # URLError, OSError, etc
27 response = urllib2.urlopen(source)
28 try:
29 with open(dest, 'w') as dest_file:
30 dest_file.write(response.read())
31 except Exception as e:
32 if os.path.isfile(dest):
33 os.unlink(dest)
34 raise e
35
36 def install(self, source):
37 url_parts = self.parse_url(source)
38 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
39 if not os.path.exists(dest_dir):
40 mkdir(dest_dir, perms=0755)
41 dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
42 try:
43 self.download(source, dld_file)
44 except urllib2.URLError as e:
45 raise UnhandledSource(e.reason)
46 except OSError as e:
47 raise UnhandledSource(e.strerror)
48 return extract(dld_file)
049
=== added file 'hooks/charmhelpers/fetch/bzrurl.py'
--- hooks/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/bzrurl.py 2013-10-10 22:34:35 +0000
@@ -0,0 +1,44 @@
1import os
2from bzrlib.branch import Branch
3from charmhelpers.fetch import (
4 BaseFetchHandler,
5 UnhandledSource
6)
7from charmhelpers.core.host import mkdir
8
9
10class BzrUrlFetchHandler(BaseFetchHandler):
11 """Handler for bazaar branches via generic and lp URLs"""
12 def can_handle(self, source):
13 url_parts = self.parse_url(source)
14 if url_parts.scheme not in ('bzr+ssh', 'lp'):
15 return False
16 else:
17 return True
18
19 def branch(self, source, dest):
20 url_parts = self.parse_url(source)
21 # If we use lp:branchname scheme we need to load plugins
22 if not self.can_handle(source):
23 raise UnhandledSource("Cannot handle {}".format(source))
24 if url_parts.scheme == "lp":
25 from bzrlib.plugin import load_plugins
26 load_plugins()
27 try:
28 remote_branch = Branch.open(source)
29 remote_branch.bzrdir.sprout(dest).open_branch()
30 except Exception as e:
31 raise e
32
33 def install(self, source):
34 url_parts = self.parse_url(source)
35 branch_name = url_parts.path.strip("/").split("/")[-1]
36 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name)
37 if not os.path.exists(dest_dir):
38 mkdir(dest_dir, perms=0755)
39 try:
40 self.branch(source, dest_dir)
41 except OSError as e:
42 raise UnhandledSource(e.strerror)
43 return dest_dir
44
045
=== modified file 'hooks/hooks.py'
--- hooks/hooks.py 2013-05-23 21:52:06 +0000
+++ hooks/hooks.py 2013-10-10 22:34:35 +0000
@@ -1,17 +1,31 @@
1#!/usr/bin/env python1#!/usr/bin/env python
22
3import json
4import glob3import glob
5import os4import os
6import random
7import re5import re
8import socket6import socket
9import string7import shutil
10import subprocess8import subprocess
11import sys9import sys
12import yaml10import yaml
13import nrpe11
14import time12from itertools import izip, tee
13
14from charmhelpers.core.host import pwgen
15from charmhelpers.core.hookenv import (
16 log,
17 config as config_get,
18 relation_set,
19 relation_ids as get_relation_ids,
20 relations_of_type,
21 relations_for_id,
22 relation_id,
23 open_port,
24 close_port,
25 unit_get,
26 )
27from charmhelpers.fetch import apt_install
28from charmhelpers.contrib.charmsupport import nrpe
1529
1630
17###############################################################################31###############################################################################
@@ -20,92 +34,52 @@
20default_haproxy_config_dir = "/etc/haproxy"34default_haproxy_config_dir = "/etc/haproxy"
21default_haproxy_config = "%s/haproxy.cfg" % default_haproxy_config_dir35default_haproxy_config = "%s/haproxy.cfg" % default_haproxy_config_dir
22default_haproxy_service_config_dir = "/var/run/haproxy"36default_haproxy_service_config_dir = "/var/run/haproxy"
23HOOK_NAME = os.path.basename(sys.argv[0])37service_affecting_packages = ['haproxy']
38
39frontend_only_options = [
40 "backlog",
41 "bind",
42 "capture cookie",
43 "capture request header",
44 "capture response header",
45 "clitimeout",
46 "default_backend",
47 "maxconn",
48 "monitor fail",
49 "monitor-net",
50 "monitor-uri",
51 "option accept-invalid-http-request",
52 "option clitcpka",
53 "option contstats",
54 "option dontlog-normal",
55 "option dontlognull",
56 "option http-use-proxy-header",
57 "option log-separate-errors",
58 "option logasap",
59 "option socket-stats",
60 "option tcp-smart-accept",
61 "rate-limit sessions",
62 "tcp-request content accept",
63 "tcp-request content reject",
64 "tcp-request inspect-delay",
65 "timeout client",
66 "timeout clitimeout",
67 "use_backend",
68 ]
69
2470
 ###############################################################################
 # Supporting functions
 ###############################################################################
 
-def unit_get(*args):
-    """Simple wrapper around unit-get, all arguments passed untouched"""
-    get_args = ["unit-get"]
-    get_args.extend(args)
-    return subprocess.check_output(get_args)
-
-def juju_log(*args):
-    """Simple wrapper around juju-log, all arguments are passed untouched"""
-    log_args = ["juju-log"]
-    log_args.extend(args)
-    subprocess.call(log_args)
-
-#------------------------------------------------------------------------------
-# config_get:  Returns a dictionary containing all of the config information
-#              Optional parameter: scope
-#              scope: limits the scope of the returned configuration to the
-#                     desired config item.
-#------------------------------------------------------------------------------
-def config_get(scope=None):
-    try:
-        config_cmd_line = ['config-get']
-        if scope is not None:
-            config_cmd_line.append(scope)
-        config_cmd_line.append('--format=json')
-        config_data = json.loads(subprocess.check_output(config_cmd_line))
-    except Exception, e:
-        subprocess.call(['juju-log', str(e)])
-        config_data = None
-    finally:
-        return(config_data)
-
-
-#------------------------------------------------------------------------------
-# relation_get:  Returns a dictionary containing the relation information
-#                Optional parameters: scope, relation_id
-#                scope:        limits the scope of the returned data to the
-#                              desired item.
-#                unit_name:    limits the data ( and optionally the scope )
-#                              to the specified unit
-#                relation_id:  specify relation id for out of context usage.
-#------------------------------------------------------------------------------
-def relation_get(scope=None, unit_name=None, relation_id=None):
-    try:
-        relation_cmd_line = ['relation-get', '--format=json']
-        if relation_id is not None:
-            relation_cmd_line.extend(('-r', relation_id))
-        if scope is not None:
-            relation_cmd_line.append(scope)
-        else:
-            relation_cmd_line.append('')
-        if unit_name is not None:
-            relation_cmd_line.append(unit_name)
-        relation_data = json.loads(subprocess.check_output(relation_cmd_line))
-    except Exception, e:
-        subprocess.call(['juju-log', str(e)])
-        relation_data = None
-    finally:
-        return(relation_data)
-
-def relation_set(arguments, relation_id=None):
-    """
-    Wrapper around relation-set
-    @param arguments: list of command line arguments
-    @param relation_id: optional relation-id (passed to -r parameter) to use
-    """
-    set_args = ["relation-set"]
-    if relation_id is not None:
-        set_args.extend(["-r", str(relation_id)])
-    set_args.extend(arguments)
-    subprocess.check_call(set_args)
-
-#------------------------------------------------------------------------------
-# apt_get_install( package ): Installs a package
-#------------------------------------------------------------------------------
-def apt_get_install(packages=None):
-    if packages is None:
-        return(False)
-    cmd_line = ['apt-get', '-y', 'install', '-qq']
-    cmd_line.append(packages)
-    return(subprocess.call(cmd_line))
+
+def ensure_package_status(packages, status):
+    if status in ['install', 'hold']:
+        selections = ''.join(['{} {}\n'.format(package, status)
+                              for package in packages])
+        dpkg = subprocess.Popen(['dpkg', '--set-selections'],
+                                stdin=subprocess.PIPE)
+        dpkg.communicate(input=selections)
 
 
 #------------------------------------------------------------------------------
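The new `ensure_package_status` helper pins (or unpins) packages by feeding a selections list to `dpkg --set-selections` on stdin. A minimal sketch of the selections string it builds, with the `dpkg` invocation itself omitted (`python-jinja2` is just an illustrative package name):

```python
def build_selections(packages, status):
    # One "<package> <status>" line per package, the format
    # dpkg --set-selections reads from stdin.
    return ''.join('{} {}\n'.format(package, status)
                   for package in packages)

selections = build_selections(['haproxy', 'python-jinja2'], 'hold')
```

A status of `hold` keeps the installed version from being upgraded; `install` returns the package to normal candidate selection.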
@@ -113,8 +87,8 @@
 #------------------------------------------------------------------------------
 def enable_haproxy():
     default_haproxy = "/etc/default/haproxy"
-    enabled_haproxy = \
-        open(default_haproxy).read().replace('ENABLED=0', 'ENABLED=1')
+    with open(default_haproxy) as f:
+        enabled_haproxy = f.read().replace('ENABLED=0', 'ENABLED=1')
     with open(default_haproxy, 'w') as f:
         f.write(enabled_haproxy)
 
@@ -137,8 +111,8 @@
     if config_data['global_quiet'] is True:
         haproxy_globals.append("    quiet")
     haproxy_globals.append("    spread-checks %d" %
-        config_data['global_spread_checks'])
-    return('\n'.join(haproxy_globals))
+                           config_data['global_spread_checks'])
+    return '\n'.join(haproxy_globals)
 
 
 #------------------------------------------------------------------------------
@@ -157,7 +131,7 @@
     haproxy_defaults.append("    retries %d" % config_data['default_retries'])
     for timeout_item in default_timeouts:
         haproxy_defaults.append("    timeout %s" % timeout_item.strip())
-    return('\n'.join(haproxy_defaults))
+    return '\n'.join(haproxy_defaults)
 
 
 #------------------------------------------------------------------------------
@@ -168,9 +142,9 @@
 #------------------------------------------------------------------------------
 def load_haproxy_config(haproxy_config_file="/etc/haproxy/haproxy.cfg"):
     if os.path.isfile(haproxy_config_file):
-        return(open(haproxy_config_file).read())
+        return open(haproxy_config_file).read()
     else:
-        return(None)
+        return None
 
 
 #------------------------------------------------------------------------------
@@ -182,12 +156,12 @@
 def get_monitoring_password(haproxy_config_file="/etc/haproxy/haproxy.cfg"):
     haproxy_config = load_haproxy_config(haproxy_config_file)
     if haproxy_config is None:
-        return(None)
+        return None
     m = re.search("stats auth\s+(\w+):(\w+)", haproxy_config)
     if m is not None:
-        return(m.group(2))
+        return m.group(2)
     else:
-        return(None)
+        return None
 
 
 #------------------------------------------------------------------------------
@@ -197,32 +171,29 @@
 #                    to open and close when exposing/unexposing a service
 #------------------------------------------------------------------------------
 def get_service_ports(haproxy_config_file="/etc/haproxy/haproxy.cfg"):
+    stanzas = get_listen_stanzas(haproxy_config_file=haproxy_config_file)
+    return tuple((int(port) for service, addr, port in stanzas))
+
+
+#------------------------------------------------------------------------------
+# get_listen_stanzas: Convenience function that scans the existing haproxy
+#                     configuration file and returns a list of the existing
+#                     listen stanzas configured.
+#------------------------------------------------------------------------------
+def get_listen_stanzas(haproxy_config_file="/etc/haproxy/haproxy.cfg"):
     haproxy_config = load_haproxy_config(haproxy_config_file)
     if haproxy_config is None:
-        return(None)
-    return(re.findall("listen.*:(.*)", haproxy_config))
-
-
-#------------------------------------------------------------------------------
-# open_port: Convenience function to open a port in juju to
-#            expose a service
-#------------------------------------------------------------------------------
-def open_port(port=None, protocol="TCP"):
-    if port is None:
-        return(None)
-    return(subprocess.call(['open-port', "%d/%s" %
-                            (int(port), protocol)]))
-
-
-#------------------------------------------------------------------------------
-# close_port: Convenience function to close a port in juju to
-#             unexpose a service
-#------------------------------------------------------------------------------
-def close_port(port=None, protocol="TCP"):
-    if port is None:
-        return(None)
-    return(subprocess.call(['close-port', "%d/%s" %
-                            (int(port), protocol)]))
+        return ()
+    listen_stanzas = re.findall(
+        "listen\s+([^\s]+)\s+([^:]+):(.*)",
+        haproxy_config)
+    bind_stanzas = re.findall(
+        "\s+bind\s+([^:]+):(\d+)\s*\n\s+default_backend\s+([^\s]+)",
+        haproxy_config, re.M)
+    return (tuple(((service, addr, int(port))
+                   for service, addr, port in listen_stanzas)) +
+            tuple(((service, addr, int(port))
+                   for addr, port, service in bind_stanzas)))
 
 
 #------------------------------------------------------------------------------
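The new `get_listen_stanzas` recognizes both classic `listen` stanzas and the `frontend`/`bind`/`default_backend` form that the rewritten `create_listen_stanza` now emits, normalizing both to `(service, address, port)` tuples. A sketch of the two regexes from the diff against a hypothetical config fragment (`my_service` and the frontend name are made up):

```python
import re

# Hypothetical haproxy.cfg fragment with one stanza of each style.
SAMPLE_CONFIG = """\
listen haproxy_monitoring 0.0.0.0:10000
    mode http

frontend haproxy-0-80
    bind 0.0.0.0:80
    default_backend my_service
"""

listen_stanzas = re.findall(r"listen\s+([^\s]+)\s+([^:]+):(.*)", SAMPLE_CONFIG)
bind_stanzas = re.findall(
    r"\s+bind\s+([^:]+):(\d+)\s*\n\s+default_backend\s+([^\s]+)",
    SAMPLE_CONFIG, re.M)
# Both forms normalize to (service, address, port); note the bind regex
# captures (address, port, service) and is reordered below.
stanzas = (tuple((service, addr, int(port))
                 for service, addr, port in listen_stanzas) +
           tuple((service, addr, int(port))
                 for addr, port, service in bind_stanzas))
```

`get_service_ports` then just projects out the port column of these tuples.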
@@ -232,26 +203,25 @@
 #------------------------------------------------------------------------------
 def update_service_ports(old_service_ports=None, new_service_ports=None):
     if old_service_ports is None or new_service_ports is None:
-        return(None)
+        return None
     for port in old_service_ports:
         if port not in new_service_ports:
             close_port(port)
     for port in new_service_ports:
-        if port not in old_service_ports:
-            open_port(port)
-
-
-#------------------------------------------------------------------------------
-# pwgen: Generates a random password
-#        pwd_length: Defines the length of the password to generate
-#                    default: 20
-#------------------------------------------------------------------------------
-def pwgen(pwd_length=20):
-    alphanumeric_chars = [l for l in (string.letters + string.digits)
-                          if l not in 'Iil0oO1']
-    random_chars = [random.choice(alphanumeric_chars)
-                    for i in range(pwd_length)]
-    return(''.join(random_chars))
+        open_port(port)
+
+
+#------------------------------------------------------------------------------
+# update_sysctl: create a sysctl.conf file from YAML-formatted 'sysctl' config
+#------------------------------------------------------------------------------
+def update_sysctl(config_data):
+    sysctl_dict = yaml.load(config_data.get("sysctl", "{}"))
+    if sysctl_dict:
+        sysctl_file = open("/etc/sysctl.d/50-haproxy.conf", "w")
+        for key in sysctl_dict:
+            sysctl_file.write("{}={}\n".format(key, sysctl_dict[key]))
+        sysctl_file.close()
+        subprocess.call(["sysctl", "-p", "/etc/sysctl.d/50-haproxy.conf"])
 
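The new `update_sysctl` renders the charm's YAML `sysctl` config option into `/etc/sysctl.d/50-haproxy.conf` and applies it with `sysctl -p`. A sketch of the key=value rendering, using a plain dict in place of the YAML-parsed mapping and sorting keys for a deterministic result (the original iterates the dict in its natural order; the setting names below are just examples):

```python
def render_sysctl(sysctl_dict):
    # One "key=value" line per setting, matching the file update_sysctl writes.
    return ''.join('{}={}\n'.format(key, value)
                   for key, value in sorted(sysctl_dict.items()))

content = render_sysctl({'net.ipv4.ip_local_port_range': '1024 65535',
                         'net.core.somaxconn': 4096})
```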
@@ -271,22 +241,40 @@
                          service_port=None, service_options=None,
                          server_entries=None):
     if service_name is None or service_ip is None or service_port is None:
-        return(None)
+        return None
+    fe_options = []
+    be_options = []
+    if service_options is not None:
+        # Filter provided service options into frontend-only and backend-only.
+        results = izip(
+            (fe_options, be_options),
+            (True, False),
+            tee((o, any(map(o.strip().startswith,
+                            frontend_only_options)))
+                for o in service_options))
+        for out, cond, result in results:
+            out.extend(option for option, match in result if match is cond)
     service_config = []
-    service_config.append("listen %s %s:%s" %
-                          (service_name, service_ip, service_port))
-    if service_options is not None:
-        for service_option in service_options:
-            service_config.append("    %s" % service_option.strip())
-    if server_entries is not None and isinstance(server_entries, list):
-        for (server_name, server_ip, server_port, server_options) \
-                in server_entries:
+    unit_name = os.environ["JUJU_UNIT_NAME"].replace("/", "-")
+    service_config.append("frontend %s-%s" % (unit_name, service_port))
+    service_config.append("    bind %s:%s" %
+                          (service_ip, service_port))
+    service_config.append("    default_backend %s" % (service_name,))
+    service_config.extend("    %s" % service_option.strip()
+                          for service_option in fe_options)
+    service_config.append("")
+    service_config.append("backend %s" % (service_name,))
+    service_config.extend("    %s" % service_option.strip()
+                          for service_option in be_options)
+    if isinstance(server_entries, (list, tuple)):
+        for (server_name, server_ip, server_port,
+                server_options) in server_entries:
             server_line = "    server %s %s:%s" % \
                 (server_name, server_ip, server_port)
             if server_options is not None:
-                server_line += " %s" % server_options
+                server_line += " %s" % " ".join(server_options)
             service_config.append(server_line)
-    return('\n'.join(service_config))
+    return '\n'.join(service_config)
 
 
 #------------------------------------------------------------------------------
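The rewritten `create_listen_stanza` emits a `frontend` section that binds the address and forwards to a named `backend`, instead of the old single `listen` block. A simplified sketch of the output shape; the real code additionally splits options between frontend and backend and reads the unit name from `JUJU_UNIT_NAME` (`haproxy-0`, `my_service`, and the server entry below are made up):

```python
def simple_listen_stanza(unit_name, service_name, ip, port, servers):
    # Frontend binds the address and routes to the named backend;
    # backend lists the proxied servers.
    lines = ["frontend %s-%s" % (unit_name, port),
             "    bind %s:%s" % (ip, port),
             "    default_backend %s" % service_name,
             "",
             "backend %s" % service_name]
    for name, host, sport, options in servers:
        lines.append("    server %s %s:%s %s" % (name, host, sport,
                                                 " ".join(options)))
    return "\n".join(lines)

stanza = simple_listen_stanza("haproxy-0", "my_service", "0.0.0.0", 80,
                              [("web-0-8080", "10.0.0.2", 8080, ["check"])])
```

Splitting into frontend/backend is what lets `apply_peer_config` later point a peer frontend at an existing `_be` backend.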
@@ -296,216 +284,234 @@
 def create_monitoring_stanza(service_name="haproxy_monitoring"):
     config_data = config_get()
     if config_data['enable_monitoring'] is False:
-        return(None)
+        return None
     monitoring_password = get_monitoring_password()
     if config_data['monitoring_password'] != "changeme":
         monitoring_password = config_data['monitoring_password']
-    elif monitoring_password is None and \
-            config_data['monitoring_password'] == "changeme":
-        monitoring_password = pwgen()
+    elif (monitoring_password is None and
+          config_data['monitoring_password'] == "changeme"):
+        monitoring_password = pwgen(length=20)
     monitoring_config = []
     monitoring_config.append("mode http")
     monitoring_config.append("acl allowed_cidr src %s" %
                              config_data['monitoring_allowed_cidr'])
     monitoring_config.append("block unless allowed_cidr")
     monitoring_config.append("stats enable")
     monitoring_config.append("stats uri /")
     monitoring_config.append("stats realm Haproxy\ Statistics")
     monitoring_config.append("stats auth %s:%s" %
-                             (config_data['monitoring_username'], monitoring_password))
+                             (config_data['monitoring_username'],
+                              monitoring_password))
     monitoring_config.append("stats refresh %d" %
                              config_data['monitoring_stats_refresh'])
-    return(create_listen_stanza(service_name,
-                                "0.0.0.0",
-                                config_data['monitoring_port'],
-                                monitoring_config))
+    return create_listen_stanza(service_name,
+                                "0.0.0.0",
+                                config_data['monitoring_port'],
+                                monitoring_config)
 
-def get_host_port(services_list):
-    """
-    Given a services list and global juju information, get a host
-    and port for this system.
-    """
-    host = services_list[0]["service_host"]
-    port = int(services_list[0]["service_port"])
-    return (host, port)
-
+
+#------------------------------------------------------------------------------
+# get_config_services: Convenience function that returns a mapping containing
+#                      all of the services configuration
+#------------------------------------------------------------------------------
 def get_config_services():
-    """
-    Return dict of all services in the configuration, and in the relation
-    where appropriate. If a relation contains a "services" key, read
-    it in as yaml as is the case with the configuration. Set the host and
-    port for any relation initiated service entry as those items cannot be
-    known by the other side of the relation. In the case of a
-    proxy configuration found, ensure the forward for option is set.
-    """
     config_data = config_get()
-    config_services_list = yaml.load(config_data['services'])
-    (host, port) = get_host_port(config_services_list)
-    all_relations = relation_get_all("reverseproxy")
-    services_list = []
-    if hasattr(all_relations, "iteritems"):
-        for relid, reldata in all_relations.iteritems():
-            for unit, relation_info in reldata.iteritems():
-                if relation_info.has_key("services"):
-                    rservices = yaml.load(relation_info["services"])
-                    for r in rservices:
-                        r["service_host"] = host
-                        r["service_port"] = port
-                        port += 1
-                    services_list.extend(rservices)
-    if len(services_list) == 0:
-        services_list = config_services_list
-    return(services_list)
+    services = {}
+    for service in yaml.safe_load(config_data['services']):
+        service_name = service["service_name"]
+        if not services:
+            # 'None' is used as a marker for the first service defined, which
+            # is used as the default service if a proxied server doesn't
+            # specify which service it is bound to.
+            services[None] = {"service_name": service_name}
+        if is_proxy(service_name) and ("option forwardfor" not in
+                                       service["service_options"]):
+            service["service_options"].append("option forwardfor")
+
+        if isinstance(service["server_options"], basestring):
+            service["server_options"] = service["server_options"].split()
+
+        services[service_name] = service
+
+    return services
 
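The new `get_config_services` keys the parsed `services` config by name and stores the first entry again under the `None` key, so relation units that don't name a service fall back to it. A sketch of that indexing with inline data instead of YAML, leaving out the proxy and `server_options` handling (`my_service`/`other` are made-up names):

```python
def index_services(service_list):
    services = {}
    for service in service_list:
        if not services:
            # The first service doubles as the default, under the None key.
            services[None] = {"service_name": service["service_name"]}
        services[service["service_name"]] = service
    return services

services = index_services([{"service_name": "my_service"},
                           {"service_name": "other"}])
```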
 
 #------------------------------------------------------------------------------
 # get_config_service: Convenience function that returns a dictionary
-#                     of the configuration of a given services configuration
+#                     of the configuration of a given service's configuration
 #------------------------------------------------------------------------------
 def get_config_service(service_name=None):
-    services_list = get_config_services()
-    for service_item in services_list:
-        if service_item['service_name'] == service_name:
-            return(service_item)
-    return(None)
+    return get_config_services().get(service_name, None)
+
+
+def is_proxy(service_name):
+    flag_path = os.path.join(default_haproxy_service_config_dir,
+                             "%s.is.proxy" % service_name)
+    return os.path.exists(flag_path)
-def relation_get_all(relation_name):
-    """
-    Iterate through all relations, and return large data structure with the
-    relation data set:
-
-    @param relation_name: The name of the relation to check
-
-    Returns:
-
-    relation_id:
-        unit:
-            key: value
-            key2: value
-    """
-    result = {}
-    try:
-        relids = subprocess.Popen(
-            ['relation-ids', relation_name], stdout=subprocess.PIPE)
-        for relid in [x.strip() for x in relids.stdout]:
-            result[relid] = {}
-            for unit in json.loads(
-                subprocess.check_output(
-                    ['relation-list', '--format=json', '-r', relid])):
-                result[relid][unit] = relation_get(None, unit, relid)
-        return result
-    except Exception, e:
-        subprocess.call(['juju-log', str(e)])
-
-def get_services_dict():
-    """
-    Transform the services list into a dict for easier comprehension,
-    and to ensure that we have only one entry per service type. If multiple
-    relations specify the same server_name, try to union the servers
-    entries.
-    """
-    services_list = get_config_services()
-    services_dict = {}
-
-    for service_item in services_list:
-        if not hasattr(service_item, "iteritems"):
-            juju_log("Each 'services' entry must be a dict: %s" % service_item)
-            continue;
-        if "service_name" not in service_item:
-            juju_log("Missing 'service_name': %s" % service_item)
-            continue;
-        name = service_item["service_name"]
-        options = service_item["service_options"]
-        if name in services_dict:
-            if "servers" in services_dict[name]:
-                services_dict[name]["servers"].extend(service_item["servers"])
-        else:
-            services_dict[name] = service_item
-        if os.path.exists("%s/%s.is.proxy" % (
-                default_haproxy_service_config_dir, name)):
-            if 'option forwardfor' not in options:
-                options.append("option forwardfor")
-
-    return services_dict
-
-def get_all_services():
-    """
-    Transform a services dict into an "all_services" relation setting expected
-    by apache2. This is needed to ensure we have the port and hostname setting
-    correct and in the proper format
-    """
-    services = get_services_dict()
-    all_services = []
-    for name in services:
-        s = {"service_name": name,
-             "service_port": services[name]["service_port"]}
-        all_services.append(s)
-    return all_services
 
 #------------------------------------------------------------------------------
 # create_services: Function that will create the services configuration
 #                  from the config data and/or relation information
 #------------------------------------------------------------------------------
 def create_services():
-    services_list = get_config_services()
-    services_dict = get_services_dict()
-
-    # service definitions overwrites user specified haproxy file in
-    # a pseudo-template form
-    all_relations = relation_get_all("reverseproxy")
-    for relid, reldata in all_relations.iteritems():
-        for unit, relation_info in reldata.iteritems():
-            if not isinstance(relation_info, dict):
-                sys.exit(0)
-            if "services" in relation_info:
-                juju_log("Relation %s has services override defined" % relid)
-                continue;
-            if "hostname" not in relation_info or "port" not in relation_info:
-                juju_log("Relation %s needs hostname and port defined" % relid)
-                continue;
-            juju_service_name = unit.rpartition('/')[0]
-            # Mandatory switches ( hostname, port )
-            server_name = "%s__%s" % (
-                relation_info['hostname'].replace('.', '_'),
-                relation_info['port'])
-            server_ip = relation_info['hostname']
-            server_port = relation_info['port']
-            # Optional switches ( service_name )
-            if 'service_name' in relation_info:
-                if relation_info['service_name'] in services_dict:
-                    service_name = relation_info['service_name']
-                else:
-                    juju_log("service %s does not exist." % (
-                        relation_info['service_name']))
-                    sys.exit(1)
-            elif juju_service_name + '_service' in services_dict:
-                service_name = juju_service_name + '_service'
-            else:
-                service_name = services_list[0]['service_name']
-            # Add the server entries
-            if not 'servers' in services_dict[service_name]:
-                services_dict[service_name]['servers'] = []
-            services_dict[service_name]['servers'].append((
-                server_name, server_ip, server_port,
-                services_dict[service_name]['server_options']))
+    services_dict = get_config_services()
+    if len(services_dict) == 0:
+        log("No services configured, exiting.")
+        return
+
+    relation_data = relations_of_type("reverseproxy")
+
+    for relation_info in relation_data:
+        unit = relation_info['__unit__']
+        juju_service_name = unit.rpartition('/')[0]
+
+        relation_ok = True
+        for required in ("port", "private-address", "hostname"):
+            if not required in relation_info:
+                log("No %s in relation data for '%s', skipping." %
+                    (required, unit))
+                relation_ok = False
+                break
+
+        if not relation_ok:
+            continue
+
+        # Mandatory switches ( private-address, port )
+        host = relation_info['private-address']
+        port = relation_info['port']
+        server_name = ("%s-%s" % (unit.replace("/", "-"), port))
+
+        # Optional switches ( service_name, sitenames )
+        service_names = set()
+        if 'service_name' in relation_info:
+            if relation_info['service_name'] in services_dict:
+                service_names.add(relation_info['service_name'])
+            else:
+                log("Service '%s' does not exist." %
+                    relation_info['service_name'])
+                continue
+
+        if 'sitenames' in relation_info:
+            sitenames = relation_info['sitenames'].split()
+            for sitename in sitenames:
+                if sitename in services_dict:
+                    service_names.add(sitename)
+
+        if juju_service_name + "_service" in services_dict:
+            service_names.add(juju_service_name + "_service")
+
+        if juju_service_name in services_dict:
+            service_names.add(juju_service_name)
+
+        if not service_names:
+            service_names.add(services_dict[None]["service_name"])
+
+        for service_name in service_names:
+            service = services_dict[service_name]
+
+            # Add the server entries
+            servers = service.setdefault("servers", [])
+            servers.append((server_name, host, port,
+                            services_dict[service_name].get(
+                                'server_options', [])))
+
+    has_servers = False
+    for service_name, service in services_dict.iteritems():
+        if service.get("servers", []):
+            has_servers = True
+
+    if not has_servers:
+        log("No backend servers, exiting.")
+        return
+
+    del services_dict[None]
+    services_dict = apply_peer_config(services_dict)
+    write_service_config(services_dict)
+    return services_dict
 
 
+def apply_peer_config(services_dict):
+    peer_data = relations_of_type("peer")
+
+    peer_services = {}
+    for relation_info in peer_data:
+        unit_name = relation_info["__unit__"]
+        peer_services_data = relation_info.get("all_services")
+        if peer_services_data is None:
+            continue
+        service_data = yaml.safe_load(peer_services_data)
+        for service in service_data:
+            service_name = service["service_name"]
+            if service_name in services_dict:
+                peer_service = peer_services.setdefault(service_name, {})
+                peer_service["service_name"] = service_name
+                peer_service["service_host"] = service["service_host"]
+                peer_service["service_port"] = service["service_port"]
+                peer_service["service_options"] = ["balance leastconn",
+                                                   "mode tcp",
+                                                   "option tcplog"]
+                servers = peer_service.setdefault("servers", [])
+                servers.append((unit_name.replace("/", "-"),
+                                relation_info["private-address"],
+                                service["service_port"] + 1, ["check"]))
+
+    if not peer_services:
+        return services_dict
+
+    unit_name = os.environ["JUJU_UNIT_NAME"].replace("/", "-")
+    private_address = unit_get("private-address")
+    for service_name, peer_service in peer_services.iteritems():
+        original_service = services_dict[service_name]
+
+        # If the original service has timeout settings, copy them over to the
+        # peer service.
+        for option in original_service.get("service_options", ()):
+            if "timeout" in option:
+                peer_service["service_options"].append(option)
+
+        servers = peer_service["servers"]
+        # Add ourselves to the list of servers for the peer listen stanza.
+        servers.append((unit_name, private_address,
+                        original_service["service_port"] + 1,
+                        ["check"]))
+
+        # Make all but the first server in the peer listen stanza a backup
+        # server.
+        servers.sort()
+        for server in servers[1:]:
+            server[3].append("backup")
+
+        # Remap original service port, will now be used by peer listen stanza.
+        original_service["service_port"] += 1
+
+        # Remap original service to a new name, stuff peer listen stanza into
+        # its place.
+        be_service = service_name + "_be"
+        original_service["service_name"] = be_service
+        services_dict[be_service] = original_service
+        services_dict[service_name] = peer_service
+
+    return services_dict
+
+
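The core of `apply_peer_config` is the backup promotion described in this proposal: peer servers are sorted by unit name, the first one stays active, and every other unit is flagged `backup`, so all traffic (and thus any `maxconn` limit) flows through a single haproxy unit. A sketch of just that step, with made-up peer entries:

```python
def mark_backups(servers):
    # Sort by server name so every peer computes the same ordering, then
    # flag all but the first entry as a backup server.
    servers = sorted(servers)
    for name, host, port, options in servers[1:]:
        options.append("backup")
    return servers

servers = mark_backups([("haproxy-1", "10.0.0.3", 81, ["check"]),
                        ("haproxy-0", "10.0.0.2", 81, ["check"])])
```

Because every unit sees the same relation data and sorts the same way, they all agree on which unit is active without any extra coordination.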
+def write_service_config(services_dict):
     # Construct the new haproxy.cfg file
-    for service in services_dict:
-        juju_log("Service: ", service)
-        server_entries = None
-        if 'servers' in services_dict[service]:
-            server_entries = services_dict[service]['servers']
-        service_config_file = "%s/%s.service" % (
-            default_haproxy_service_config_dir,
-            services_dict[service]['service_name'])
-        with open(service_config_file, 'w') as service_config:
-            service_config.write(
-                create_listen_stanza(services_dict[service]['service_name'],
-                                     services_dict[service]['service_host'],
-                                     services_dict[service]['service_port'],
-                                     services_dict[service]['service_options'],
-                                     server_entries))
+    for service_key, service_config in services_dict.items():
+        log("Service: %s" % service_key)
+        server_entries = service_config.get('servers')
+
+        service_name = service_config["service_name"]
+        if not os.path.exists(default_haproxy_service_config_dir):
+            os.mkdir(default_haproxy_service_config_dir, 0600)
+        with open(os.path.join(default_haproxy_service_config_dir,
+                               "%s.service" % service_name), 'w') as config:
+            config.write(create_listen_stanza(
+                service_name,
+                service_config['service_host'],
+                service_config['service_port'],
+                service_config['service_options'],
+                server_entries))
 
 
 #------------------------------------------------------------------------------
@@ -516,17 +522,19 @@
     services = ''
     if service_name is not None:
         if os.path.exists("%s/%s.service" %
                           (default_haproxy_service_config_dir, service_name)):
-            services = open("%s/%s.service" %
-                            (default_haproxy_service_config_dir, service_name)).read()
+            with open("%s/%s.service" % (default_haproxy_service_config_dir,
+                                         service_name)) as f:
+                services = f.read()
         else:
             services = None
     else:
         for service in glob.glob("%s/*.service" %
                                  default_haproxy_service_config_dir):
-            services += open(service).read()
-            services += "\n\n"
-    return(services)
+            with open(service) as f:
+                services += f.read()
+                services += "\n\n"
+    return services
 
 
 #------------------------------------------------------------------------------
@@ -537,24 +545,24 @@
 #------------------------------------------------------------------------------
 def remove_services(service_name=None):
     if service_name is not None:
-        if os.path.exists("%s/%s.service" %
-                          (default_haproxy_service_config_dir, service_name)):
+        path = "%s/%s.service" % (default_haproxy_service_config_dir,
+                                  service_name)
+        if os.path.exists(path):
             try:
-                os.remove("%s/%s.service" %
-                          (default_haproxy_service_config_dir, service_name))
-                return(True)
+                os.remove(path)
             except Exception, e:
-                subprocess.call(['juju-log', str(e)])
-                return(False)
+                log(str(e))
+                return False
+            return True
     else:
         for service in glob.glob("%s/*.service" %
                                  default_haproxy_service_config_dir):
             try:
                 os.remove(service)
             except Exception, e:
-                subprocess.call(['juju-log', str(e)])
+                log(str(e))
                 pass
-    return(True)
+    return True
 
 
 #------------------------------------------------------------------------------
@@ -567,27 +575,18 @@
 #                           optional arguments
 #------------------------------------------------------------------------------
 def construct_haproxy_config(haproxy_globals=None,
                              haproxy_defaults=None,
                              haproxy_monitoring=None,
                              haproxy_services=None):
-    if haproxy_globals is None or \
-            haproxy_defaults is None:
-        return(None)
+    if None in (haproxy_globals, haproxy_defaults):
+        return
     with open(default_haproxy_config, 'w') as haproxy_config:
-        haproxy_config.write(haproxy_globals)
-        haproxy_config.write("\n")
-        haproxy_config.write("\n")
-        haproxy_config.write(haproxy_defaults)
-        haproxy_config.write("\n")
-        haproxy_config.write("\n")
-        if haproxy_monitoring is not None:
-            haproxy_config.write(haproxy_monitoring)
-            haproxy_config.write("\n")
-            haproxy_config.write("\n")
-        if haproxy_services is not None:
-            haproxy_config.write(haproxy_services)
-            haproxy_config.write("\n")
-            haproxy_config.write("\n")
+        config_string = ''
+        for config in (haproxy_globals, haproxy_defaults, haproxy_monitoring,
+                       haproxy_services):
+            if config is not None:
+                config_string += config + '\n\n'
+        haproxy_config.write(config_string)
 
 
 #------------------------------------------------------------------------------
@@ -595,50 +594,37 @@
595# the haproxy service594# the haproxy service
596#------------------------------------------------------------------------------595#------------------------------------------------------------------------------
597def service_haproxy(action=None, haproxy_config=default_haproxy_config):596def service_haproxy(action=None, haproxy_config=default_haproxy_config):
598 if action is None or haproxy_config is None:597 if None in (action, haproxy_config):
599 return(None)598 return None
600 elif action == "check":599 elif action == "check":
601 retVal = subprocess.call(600 command = ['/usr/sbin/haproxy', '-f', haproxy_config, '-c']
602 ['/usr/sbin/haproxy', '-f', haproxy_config, '-c'])
603 if retVal == 1:
604 return(False)
605 elif retVal == 0:
606 return(True)
607 else:
608 return(False)
609 else:601 else:
610 retVal = subprocess.call(['service', 'haproxy', action])602 command = ['service', 'haproxy', action]
611 if retVal == 0:603 return_value = subprocess.call(command)
612 return(True)604 return return_value == 0
613 else:
614 return(False)
615
616def website_notify():
617 """
618 Notify any webiste relations of any configuration changes.
619 """
620 juju_log("Notifying all website relations of change")
621 all_relations = relation_get_all("website")
622 if hasattr(all_relations, "iteritems"):
623 for relid, reldata in all_relations.iteritems():
624 relation_set(["time=%s" % time.time()], relation_id=relid)
625605
626606
627###############################################################################607###############################################################################
628# Hook functions608# Hook functions
629###############################################################################609###############################################################################
630def install_hook():610def install_hook():
631 for f in glob.glob('exec.d/*/charm-pre-install'):
632 if os.path.isfile(f) and os.access(f, os.X_OK):
633 subprocess.check_call(['sh', '-c', f])
634 if not os.path.exists(default_haproxy_service_config_dir):611 if not os.path.exists(default_haproxy_service_config_dir):
635 os.mkdir(default_haproxy_service_config_dir, 0600)612 os.mkdir(default_haproxy_service_config_dir, 0600)
636 return ((apt_get_install("haproxy") == enable_haproxy()) is True)613
614 apt_install('haproxy', fatal=True)
615 ensure_package_status(service_affecting_packages,
616 config_get('package_status'))
617 enable_haproxy()
637618
638619
639def config_changed():620def config_changed():
640 config_data = config_get()621 config_data = config_get()
641 current_service_ports = get_service_ports()622
623 ensure_package_status(service_affecting_packages,
624 config_data['package_status'])
625
626 old_service_ports = get_service_ports()
627 old_stanzas = get_listen_stanzas()
642 haproxy_globals = create_haproxy_globals()628 haproxy_globals = create_haproxy_globals()
643 haproxy_defaults = create_haproxy_defaults()629 haproxy_defaults = create_haproxy_defaults()
644 if config_data['enable_monitoring'] is True:630 if config_data['enable_monitoring'] is True:
@@ -648,105 +634,177 @@
648 remove_services()634 remove_services()
649 create_services()635 create_services()
650 haproxy_services = load_services()636 haproxy_services = load_services()
637 update_sysctl(config_data)
651 construct_haproxy_config(haproxy_globals,638 construct_haproxy_config(haproxy_globals,
652 haproxy_defaults,639 haproxy_defaults,
653 haproxy_monitoring,640 haproxy_monitoring,
654 haproxy_services)641 haproxy_services)
655642
656 if service_haproxy("check"):643 if service_haproxy("check"):
657 updated_service_ports = get_service_ports()644 update_service_ports(old_service_ports, get_service_ports())
658 update_service_ports(current_service_ports, updated_service_ports)
659 service_haproxy("reload")645 service_haproxy("reload")
646 if not (get_listen_stanzas() == old_stanzas):
647 notify_website()
648 notify_peer()
649 else:
650 # XXX Ideally the config should be restored to a working state if the
651 # check fails, otherwise an inadvertent reload will cause the service
652 # to be broken.
653 log("HAProxy configuration check failed, exiting.")
654 sys.exit(1)
660655
661656
662def start_hook():657def start_hook():
663 if service_haproxy("status"):658 if service_haproxy("status"):
664 return(service_haproxy("restart"))659 return service_haproxy("restart")
665 else:660 else:
666 return(service_haproxy("start"))661 return service_haproxy("start")
667662
668663
669def stop_hook():664def stop_hook():
670 if service_haproxy("status"):665 if service_haproxy("status"):
671 return(service_haproxy("stop"))666 return service_haproxy("stop")
672667
673668
674def reverseproxy_interface(hook_name=None):669def reverseproxy_interface(hook_name=None):
675 if hook_name is None:670 if hook_name is None:
676 return(None)671 return None
677 elif hook_name == "changed":672 if hook_name in ("changed", "departed"):
678 config_changed()673 config_changed()
679 website_notify()674
680 elif hook_name=="departed":
681 config_changed()
682 website_notify()
683675
684def website_interface(hook_name=None):676def website_interface(hook_name=None):
685 if hook_name is None:677 if hook_name is None:
686 return(None)678 return None
679 # Notify website relation but only for the current relation in context.
680 notify_website(changed=hook_name == "changed",
681 relation_ids=(relation_id(),))
682
683
684def get_hostname(host=None):
685 my_host = socket.gethostname()
686 if host is None or host == "0.0.0.0":
687 # If the listen ip has been set to 0.0.0.0 then pass back the hostname
688 return socket.getfqdn(my_host)
689 elif host == "localhost":
690 # If the fqdn lookup has returned localhost (lxc setups) then return
691 # hostname
692 return my_host
693 return host
694
695
696def notify_relation(relation, changed=False, relation_ids=None):
697 config_data = config_get()
698 default_host = get_hostname()
687 default_port = 80699 default_port = 80
688 relation_data = relation_get()700
689701 for rid in relation_ids or get_relation_ids(relation):
690 # If a specfic service has been asked for then return the ip:port for702 service_names = set()
691 # that service, else pass back the default703 if rid is None:
692 if 'service_name' in relation_data:704 rid = relation_id()
693 service_name = relation_data['service_name']705 for relation_data in relations_for_id(rid):
694 requestedservice = get_config_service(service_name)706 if 'service_name' in relation_data:
695 my_host = requestedservice['service_host']707 service_names.add(relation_data['service_name'])
696 my_port = requestedservice['service_port']708
697 else:709 if changed:
698 my_host = socket.getfqdn(socket.gethostname())710 if 'is-proxy' in relation_data:
699 my_port = default_port711 remote_service = ("%s__%d" % (relation_data['hostname'],
700712 relation_data['port']))
701 # If the listen ip has been set to 0.0.0.0 then pass back the hostname713 open("%s/%s.is.proxy" % (
702 if my_host == "0.0.0.0":714 default_haproxy_service_config_dir,
703 my_host = socket.getfqdn(socket.gethostname())715 remote_service), 'a').close()
704716
705 # If the fqdn lookup has returned localhost (lxc setups) then return717 service_name = None
706 # hostname718 if len(service_names) == 1:
707 if my_host == "localhost":719 service_name = service_names.pop()
708 my_host = socket.gethostname()720 elif len(service_names) > 1:
709 subprocess.call(721 log("Remote units requested more than a single service name. "
710 ['relation-set', 'port=%d' % my_port, 'hostname=%s' % my_host,722 "Falling back to default host/port.")
711 'all_services=%s' % yaml.safe_dump(get_all_services())])723
712 if hook_name == "changed":724 if service_name is not None:
713 if 'is-proxy' in relation_data:725 # If a specfic service has been asked for then return the ip:port
714 service_name = "%s__%d" % \726 # for that service, else pass back the default
715 (relation_data['hostname'], relation_data['port'])727 requestedservice = get_config_service(service_name)
716 open("%s/%s.is.proxy" %728 my_host = get_hostname(requestedservice['service_host'])
717 (default_haproxy_service_config_dir, service_name), 'a').close()729 my_port = requestedservice['service_port']
730 else:
731 my_host = default_host
732 my_port = default_port
733
734 relation_set(relation_id=rid, port=str(my_port),
735 hostname=my_host,
736 all_services=config_data['services'])
737
738
739def notify_website(changed=False, relation_ids=None):
740 notify_relation("website", changed=changed, relation_ids=relation_ids)
741
742
743def notify_peer(changed=False, relation_ids=None):
744 notify_relation("peer", changed=changed, relation_ids=relation_ids)
745
746
747def install_nrpe_scripts():
748 scripts_src = os.path.join(os.environ["CHARM_DIR"], "files",
749 "nrpe")
750 scripts_dst = "/usr/lib/nagios/plugins"
751 if not os.path.exists(scripts_dst):
752 os.makedirs(scripts_dst)
753 for fname in glob.glob(os.path.join(scripts_src, "*.sh")):
754 shutil.copy2(fname,
755 os.path.join(scripts_dst, os.path.basename(fname)))
756
718757
719def update_nrpe_config():758def update_nrpe_config():
759 install_nrpe_scripts()
720 nrpe_compat = nrpe.NRPE()760 nrpe_compat = nrpe.NRPE()
721 nrpe_compat.add_check('haproxy','Check HAProxy', 'check_haproxy.sh')761 nrpe_compat.add_check('haproxy', 'Check HAProxy', 'check_haproxy.sh')
722 nrpe_compat.add_check('haproxy_queue','Check HAProxy queue depth', 'check_haproxy_queue_depth.sh')762 nrpe_compat.add_check('haproxy_queue', 'Check HAProxy queue depth',
763 'check_haproxy_queue_depth.sh')
723 nrpe_compat.write()764 nrpe_compat.write()
724765
766
725###############################################################################767###############################################################################
726# Main section768# Main section
727###############################################################################769###############################################################################
728if __name__ == "__main__":770
729 if HOOK_NAME == "install":771
772def main(hook_name):
773 if hook_name == "install":
730 install_hook()774 install_hook()
731 elif HOOK_NAME == "config-changed":775 elif hook_name in ("config-changed", "upgrade-charm"):
732 config_changed()776 config_changed()
733 update_nrpe_config()777 update_nrpe_config()
734 elif HOOK_NAME == "start":778 elif hook_name == "start":
735 start_hook()779 start_hook()
736 elif HOOK_NAME == "stop":780 elif hook_name == "stop":
737 stop_hook()781 stop_hook()
738 elif HOOK_NAME == "reverseproxy-relation-broken":782 elif hook_name == "reverseproxy-relation-broken":
739 config_changed()783 config_changed()
740 elif HOOK_NAME == "reverseproxy-relation-changed":784 elif hook_name == "reverseproxy-relation-changed":
741 reverseproxy_interface("changed")785 reverseproxy_interface("changed")
742 elif HOOK_NAME == "reverseproxy-relation-departed":786 elif hook_name == "reverseproxy-relation-departed":
743 reverseproxy_interface("departed")787 reverseproxy_interface("departed")
744 elif HOOK_NAME == "website-relation-joined":788 elif hook_name == "website-relation-joined":
745 website_interface("joined")789 website_interface("joined")
746 elif HOOK_NAME == "website-relation-changed":790 elif hook_name == "website-relation-changed":
747 website_interface("changed")791 website_interface("changed")
748 elif HOOK_NAME == "nrpe-external-master-relation-changed":792 elif hook_name == "peer-relation-joined":
793 website_interface("joined")
794 elif hook_name == "peer-relation-changed":
795 reverseproxy_interface("changed")
796 elif hook_name in ("nrpe-external-master-relation-joined",
797 "local-monitors-relation-joined"):
749 update_nrpe_config()798 update_nrpe_config()
750 else:799 else:
751 print "Unknown hook"800 print "Unknown hook"
752 sys.exit(1)801 sys.exit(1)
802
803if __name__ == "__main__":
804 hook_name = os.path.basename(sys.argv[0])
805 # Also support being invoked directly with hook as argument name.
806 if hook_name == "hooks.py":
807 if len(sys.argv) < 2:
808 sys.exit("Missing required hook name argument.")
809 hook_name = sys.argv[1]
810 main(hook_name)
753811
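The new `__main__` block above dispatches on the basename of `argv[0]`: every hook file under `hooks/` is a symlink to `hooks.py`, and invoking `hooks.py <hook>` directly is also supported. A minimal sketch of that dispatch pattern (handler names here are illustrative stand-ins, and a dict replaces the charm's if/elif chain):

```python
import os
import sys


def install_hook():
    return "install"


def config_changed():
    return "config-changed"


# Map hook names to handlers; the charm itself uses an if/elif chain,
# but the dispatch idea is the same.
HOOKS = {
    "install": install_hook,
    "config-changed": config_changed,
    "upgrade-charm": config_changed,
}


def main(hook_name):
    handler = HOOKS.get(hook_name)
    if handler is None:
        sys.exit("Unknown hook: %s" % hook_name)
    return handler()


def dispatch(argv):
    # Each hook file (hooks/install, hooks/start, ...) is a symlink to
    # hooks.py, so the basename of argv[0] names the hook being run;
    # invoking "hooks.py <hook>" directly is also supported.
    hook_name = os.path.basename(argv[0])
    if hook_name == "hooks.py":
        hook_name = argv[1]
    return main(hook_name)
```

The symlink scheme means adding a hook is just adding a symlink; the script itself never needs a per-hook entry point.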
=== modified symlink 'hooks/install' (properties changed: -x to +x)
=== target was u'./hooks.py'
--- hooks/install 1970-01-01 00:00:00 +0000
+++ hooks/install 2013-10-10 22:34:35 +0000
@@ -0,0 +1,13 @@
1#!/bin/sh
2
3set -eu
4
5apt_get_install() {
6 DEBIAN_FRONTEND=noninteractive apt-get -y -qq -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install $@
7}
8
9juju-log 'Invoking charm-pre-install hooks'
10[ -d exec.d ] && ( for f in exec.d/*/charm-pre-install; do [ -x $f ] && /bin/sh -c "$f"; done )
11
12juju-log 'Invoking python-based install hook'
13python hooks/hooks.py install
014
=== added symlink 'hooks/local-monitors-relation-joined'
=== target is u'./hooks.py'
=== renamed symlink 'hooks/nrpe-external-master-relation-changed' => 'hooks/nrpe-external-master-relation-joined'
=== removed file 'hooks/nrpe.py'
--- hooks/nrpe.py 2012-12-21 11:08:58 +0000
+++ hooks/nrpe.py 1970-01-01 00:00:00 +0000
@@ -1,170 +0,0 @@
1import json
2import subprocess
3import pwd
4import grp
5import os
6import re
7import shlex
8
9# This module adds compatibility with the nrpe_external_master
10# subordinate charm. To use it in your charm:
11#
12# 1. Update metadata.yaml
13#
14# provides:
15# (...)
16# nrpe-external-master:
17# interface: nrpe-external-master
18# scope: container
19#
20# 2. Add the following to config.yaml
21#
22# nagios_context:
23# default: "juju"
24# type: string
25# description: |
26# Used by the nrpe-external-master subordinate charm.
27# A string that will be prepended to instance name to set the host name
28# in nagios. So for instance the hostname would be something like:
29# juju-myservice-0
30# If you're running multiple environments with the same services in them
31# this allows you to differentiate between them.
32#
33# 3. Add custom checks (Nagios plugins) to files/nrpe-external-master
34#
35# 4. Update your hooks.py with something like this:
36#
37# import nrpe
38# (...)
39# def update_nrpe_config():
40# nrpe_compat = NRPE("myservice")
41# nrpe_compat.add_check(
42# shortname = "myservice",
43# description = "Check MyService",
44# check_cmd = "check_http -w 2 -c 10 http://localhost"
45# )
46# nrpe_compat.add_check(
47# "myservice_other",
48# "Check for widget failures",
49# check_cmd = "/srv/myapp/scripts/widget_check"
50# )
51# nrpe_compat.write()
52#
53# def config_changed():
54# (...)
55# update_nrpe_config()
56
57class ConfigurationError(Exception):
58 '''An error interacting with the Juju config'''
59 pass
60def config_get(scope=None):
61 '''Return the Juju config as a dictionary'''
62 try:
63 config_cmd_line = ['config-get']
64 if scope is not None:
65 config_cmd_line.append(scope)
66 config_cmd_line.append('--format=json')
67 return json.loads(subprocess.check_output(config_cmd_line))
68 except (ValueError, OSError, subprocess.CalledProcessError) as error:
69 subprocess.call(['juju-log', str(error)])
70 raise ConfigurationError(str(error))
71
72class CheckException(Exception): pass
73class Check(object):
74 shortname_re = '[A-Za-z0-9-_]*'
75 service_template = """
76#---------------------------------------------------
77# This file is Juju managed
78#---------------------------------------------------
79define service {{
80 use active-service
81 host_name {nagios_hostname}
82 service_description {nagios_hostname} {shortname} {description}
83 check_command check_nrpe!check_{shortname}
84 servicegroups {nagios_servicegroup}
85}}
86"""
87 def __init__(self, shortname, description, check_cmd):
88 super(Check, self).__init__()
89 # XXX: could be better to calculate this from the service name
90 if not re.match(self.shortname_re, shortname):
91 raise CheckException("shortname must match {}".format(Check.shortname_re))
92 self.shortname = shortname
93 self.description = description
94 self.check_cmd = self._locate_cmd(check_cmd)
95
96 def _locate_cmd(self, check_cmd):
97 search_path = (
98 '/',
99 os.path.join(os.environ['CHARM_DIR'], 'files/nrpe-external-master'),
100 '/usr/lib/nagios/plugins',
101 )
102 command = shlex.split(check_cmd)
103 for path in search_path:
104 if os.path.exists(os.path.join(path,command[0])):
105 return os.path.join(path, command[0]) + " " + " ".join(command[1:])
106 subprocess.call(['juju-log', 'Check command not found: {}'.format(command[0])])
107 return ''
108
109 def write(self, nagios_context, hostname):
110 for f in os.listdir(NRPE.nagios_exportdir):
111 if re.search('.*check_{}.cfg'.format(self.shortname), f):
112 os.remove(os.path.join(NRPE.nagios_exportdir, f))
113
114 templ_vars = {
115 'nagios_hostname': hostname,
116 'nagios_servicegroup': nagios_context,
117 'description': self.description,
118 'shortname': self.shortname,
119 }
120 nrpe_service_text = Check.service_template.format(**templ_vars)
121 nrpe_service_file = '{}/service__{}_check_{}.cfg'.format(
122 NRPE.nagios_exportdir, hostname, self.shortname)
123 with open(nrpe_service_file, 'w') as nrpe_service_config:
124 nrpe_service_config.write(str(nrpe_service_text))
125
126 nrpe_check_file = '/etc/nagios/nrpe.d/check_{}.cfg'.format(self.shortname)
127 with open(nrpe_check_file, 'w') as nrpe_check_config:
128 nrpe_check_config.write("# check {}\n".format(self.shortname))
129 nrpe_check_config.write("command[check_{}]={}\n".format(
130 self.shortname, self.check_cmd))
131
132 def run(self):
133 subprocess.call(self.check_cmd)
134
135class NRPE(object):
136 nagios_logdir = '/var/log/nagios'
137 nagios_exportdir = '/var/lib/nagios/export'
138 nrpe_confdir = '/etc/nagios/nrpe.d'
139 def __init__(self):
140 super(NRPE, self).__init__()
141 self.config = config_get()
142 self.nagios_context = self.config['nagios_context']
143 self.unit_name = os.environ['JUJU_UNIT_NAME'].replace('/', '-')
144 self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
145 self.checks = []
146
147 def add_check(self, *args, **kwargs):
148 self.checks.append( Check(*args, **kwargs) )
149
150 def write(self):
151 try:
152 nagios_uid = pwd.getpwnam('nagios').pw_uid
153 nagios_gid = grp.getgrnam('nagios').gr_gid
154 except:
155 subprocess.call(['juju-log', "Nagios user not set up, nrpe checks not updated"])
156 return
157
158 if not os.path.exists(NRPE.nagios_exportdir):
159 subprocess.call(['juju-log', 'Exiting as {} is not accessible'.format(NRPE.nagios_exportdir)])
160 return
161
162 if not os.path.exists(NRPE.nagios_logdir):
163 os.mkdir(NRPE.nagios_logdir)
164 os.chown(NRPE.nagios_logdir, nagios_uid, nagios_gid)
165
166 for nrpecheck in self.checks:
167 nrpecheck.write(self.nagios_context, self.hostname)
168
169 if os.path.isfile('/etc/init.d/nagios-nrpe-server'):
170 subprocess.call(['service', 'nagios-nrpe-server', 'reload'])
1710
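The module removed above is superseded by `charmhelpers.contrib.charmsupport.nrpe`, which `update_nrpe_config` in this diff imports. A minimal stand-in sketch of the `add_check`/`write` interface the hook relies on (this is not the real implementation, which renders Nagios config files and reloads nagios-nrpe-server):

```python
class NRPE(object):
    """Toy stand-in for charmhelpers.contrib.charmsupport.nrpe.NRPE."""

    def __init__(self):
        self.checks = []

    def add_check(self, shortname, description, check_cmd):
        # The real class validates the shortname and resolves check_cmd
        # against the Nagios plugin search path.
        self.checks.append({"shortname": shortname,
                            "description": description,
                            "check_cmd": check_cmd})

    def write(self):
        # The real class writes service/command definitions under
        # /etc/nagios and /var/lib/nagios/export; here we just report
        # the command names that would be defined.
        return ["check_%s" % c["shortname"] for c in self.checks]


# Mirrors update_nrpe_config in hooks/hooks.py:
nrpe_compat = NRPE()
nrpe_compat.add_check('haproxy', 'Check HAProxy', 'check_haproxy.sh')
nrpe_compat.add_check('haproxy_queue', 'Check HAProxy queue depth',
                      'check_haproxy_queue_depth.sh')
```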
=== added symlink 'hooks/peer-relation-changed'
=== target is u'./hooks.py'
=== added symlink 'hooks/peer-relation-joined'
=== target is u'./hooks.py'
=== removed file 'hooks/test_hooks.py'
--- hooks/test_hooks.py 2013-02-14 21:35:47 +0000
+++ hooks/test_hooks.py 1970-01-01 00:00:00 +0000
@@ -1,263 +0,0 @@
1import hooks
2import yaml
3from textwrap import dedent
4from mocker import MockerTestCase, ARGS
5
6class JujuHookTest(MockerTestCase):
7
8 def setUp(self):
9 self.config_services = [{
10 "service_name": "haproxy_test",
11 "service_host": "0.0.0.0",
12 "service_port": "88",
13 "service_options": ["balance leastconn"],
14 "server_options": "maxconn 25"}]
15 self.config_services_extended = [
16 {"service_name": "unit_service",
17 "service_host": "supplied-hostname",
18 "service_port": "999",
19 "service_options": ["balance leastconn"],
20 "server_options": "maxconn 99"}]
21 self.relation_services = [
22 {"service_name": "foo_svc",
23 "service_options": ["balance leastconn"],
24 "servers": [("A", "hA", "1", "oA1 oA2")]},
25 {"service_name": "bar_svc",
26 "service_options": ["balance leastconn"],
27 "servers": [
28 ("A", "hA", "1", "oA1 oA2"), ("B", "hB", "2", "oB1 oB2")]}]
29 self.relation_services2 = [
30 {"service_name": "foo_svc",
31 "service_options": ["balance leastconn"],
32 "servers": [("A2", "hA2", "12", "oA12 oA22")]}]
33 hooks.default_haproxy_config_dir = self.makeDir()
34 hooks.default_haproxy_config = self.makeFile()
35 hooks.default_haproxy_service_config_dir = self.makeDir()
36 obj = self.mocker.replace("hooks.juju_log")
37 obj(ARGS)
38 self.mocker.count(0, None)
39 obj = self.mocker.replace("hooks.unit_get")
40 obj("public-address")
41 self.mocker.result("test-host.example.com")
42 self.mocker.count(0, None)
43 self.maxDiff = None
44
45 def _expect_config_get(self, **kwargs):
46 result = {
47 "default_timeouts": "queue 1000, connect 1000, client 1000, server 1000",
48 "global_log": "127.0.0.1 local0, 127.0.0.1 local1 notice",
49 "global_spread_checks": 0,
50 "monitoring_allowed_cidr": "127.0.0.1/32",
51 "monitoring_username": "haproxy",
52 "default_log": "global",
53 "global_group": "haproxy",
54 "monitoring_stats_refresh": 3,
55 "default_retries": 3,
56 "services": yaml.dump(self.config_services),
57 "global_maxconn": 4096,
58 "global_user": "haproxy",
59 "default_options": "httplog, dontlognull",
60 "monitoring_port": 10000,
61 "global_debug": False,
62 "nagios_context": "juju",
63 "global_quiet": False,
64 "enable_monitoring": False,
65 "monitoring_password": "changeme",
66 "default_mode": "http"}
67 obj = self.mocker.replace("hooks.config_get")
68 obj()
69 result.update(kwargs)
70 self.mocker.result(result)
71 self.mocker.count(1, None)
72
73 def _expect_relation_get_all(self, relation, extra={}):
74 obj = self.mocker.replace("hooks.relation_get_all")
75 obj(relation)
76 relation = {"hostname": "10.0.1.2",
77 "private-address": "10.0.1.2",
78 "port": "10000"}
79 relation.update(extra)
80 result = {"1": {"unit/0": relation}}
81 self.mocker.result(result)
82 self.mocker.count(1, None)
83
84 def _expect_relation_get_all_multiple(self, relation_name):
85 obj = self.mocker.replace("hooks.relation_get_all")
86 obj(relation_name)
87 result = {
88 "1": {"unit/0": {
89 "hostname": "10.0.1.2",
90 "private-address": "10.0.1.2",
91 "port": "10000",
92 "services": yaml.dump(self.relation_services)}},
93 "2": {"unit/1": {
94 "hostname": "10.0.1.3",
95 "private-address": "10.0.1.3",
96 "port": "10001",
97 "services": yaml.dump(self.relation_services2)}}}
98 self.mocker.result(result)
99 self.mocker.count(1, None)
100
101 def _expect_relation_get_all_with_services(self, relation, extra={}):
102 extra.update({"services": yaml.dump(self.relation_services)})
103 return self._expect_relation_get_all(relation, extra)
104
105 def _expect_relation_get(self):
106 obj = self.mocker.replace("hooks.relation_get")
107 obj()
108 result = {}
109 self.mocker.result(result)
110 self.mocker.count(1, None)
111
112 def test_create_services(self):
113 """
114 Simplest use case, config stanza seeded in config file, server line
115 added through simple relation. Many servers can join this, but
116 multiple services will not be presented to the outside
117 """
118 self._expect_config_get()
119 self._expect_relation_get_all("reverseproxy")
120 self.mocker.replay()
121 hooks.create_services()
122 services = hooks.load_services()
123 stanza = """\
124 listen haproxy_test 0.0.0.0:88
125 balance leastconn
126 server 10_0_1_2__10000 10.0.1.2:10000 maxconn 25
127
128 """
129 self.assertEquals(services, dedent(stanza))
130
131 def test_create_services_extended_with_relation(self):
132 """
133 This case covers specifying an up-front services file to ha-proxy
134 in the config. The relation then specifies a singular hostname,
135 port and server_options setting which is filled into the appropriate
136 haproxy stanza based on multiple criteria.
137 """
138 self._expect_config_get(
139 services=yaml.dump(self.config_services_extended))
140 self._expect_relation_get_all("reverseproxy")
141 self.mocker.replay()
142 hooks.create_services()
143 services = hooks.load_services()
144 stanza = """\
145 listen unit_service supplied-hostname:999
146 balance leastconn
147 server 10_0_1_2__10000 10.0.1.2:10000 maxconn 99
148
149 """
150 self.assertEquals(dedent(stanza), services)
151
152 def test_create_services_pure_relation(self):
153 """
154 In this case, the relation is in control of the haproxy config file.
155 Each relation chooses what server it creates in the haproxy file, it
156 relies on the haproxy service only for the hostname and front-end port.
157 Each member of the relation will put a backend server entry under in
158 the desired stanza. Each realtion can in fact supply multiple
159 entries from the same juju service unit if desired.
160 """
161 self._expect_config_get()
162 self._expect_relation_get_all_with_services("reverseproxy")
163 self.mocker.replay()
164 hooks.create_services()
165 services = hooks.load_services()
166 stanza = """\
167 listen foo_svc 0.0.0.0:88
168 balance leastconn
169 server A hA:1 oA1 oA2
170 """
171 self.assertIn(dedent(stanza), services)
172 stanza = """\
173 listen bar_svc 0.0.0.0:89
174 balance leastconn
175 server A hA:1 oA1 oA2
176 server B hB:2 oB1 oB2
177 """
178 self.assertIn(dedent(stanza), services)
179
180 def test_create_services_pure_relation_multiple(self):
181 """
182 This is much liek the pure_relation case, where the relation specifies
183 a "services" override. However, in this case we have multiple relations
184 that partially override each other. We expect that the created haproxy
185 conf file will combine things appropriately.
186 """
187 self._expect_config_get()
188 self._expect_relation_get_all_multiple("reverseproxy")
189 self.mocker.replay()
190 hooks.create_services()
191 result = hooks.load_services()
192 stanza = """\
193 listen foo_svc 0.0.0.0:88
194 balance leastconn
195 server A hA:1 oA1 oA2
196 server A2 hA2:12 oA12 oA22
197 """
198 self.assertIn(dedent(stanza), result)
199 stanza = """\
200 listen bar_svc 0.0.0.0:89
201 balance leastconn
202 server A hA:1 oA1 oA2
203 server B hB:2 oB1 oB2
204 """
205 self.assertIn(dedent(stanza), result)
206
207 def test_get_config_services_config_only(self):
208 """
209 Attempting to catch the case where a relation is not joined yet
210 """
211 self._expect_config_get()
212 obj = self.mocker.replace("hooks.relation_get_all")
213 obj("reverseproxy")
214 self.mocker.result(None)
215 self.mocker.replay()
216 result = hooks.get_config_services()
217 self.assertEquals(result, self.config_services)
218
219 def test_get_config_services_relation_no_services(self):
220 """
221 If the config specifies services and the realtion does not, just the
222 config services should come through.
223 """
224 self._expect_config_get()
225 self._expect_relation_get_all("reverseproxy")
226 self.mocker.replay()
227 result = hooks.get_config_services()
228 self.assertEquals(result, self.config_services)
229
230 def test_get_config_services_relation_with_services(self):
231 """
232 Testing with both the config and relation providing services should
233 yield the just the relation
234 """
235 self._expect_config_get()
236 self._expect_relation_get_all_with_services("reverseproxy")
237 self.mocker.replay()
238 result = hooks.get_config_services()
239 # Just test "servers" since hostname and port and maybe other keys
240 # will be added by the hook
241 self.assertEquals(result[0]["servers"],
242 self.relation_services[0]["servers"])
243
244 def test_config_generation_indempotent(self):
245 self._expect_config_get()
246 self._expect_relation_get_all_multiple("reverseproxy")
247 self.mocker.replay()
248
249 # Test that we generate the same haproxy.conf file each time
250 hooks.create_services()
251 result1 = hooks.load_services()
252 hooks.create_services()
253 result2 = hooks.load_services()
254 self.assertEqual(result1, result2)
255
256 def test_get_all_services(self):
257 self._expect_config_get()
258 self._expect_relation_get_all_multiple("reverseproxy")
259 self.mocker.replay()
260 baseline = [{"service_name": "foo_svc", "service_port": 88},
261 {"service_name": "bar_svc", "service_port": 89}]
262 services = hooks.get_all_services()
263 self.assertEqual(baseline, services)
2640
=== added directory 'hooks/tests'
=== added file 'hooks/tests/__init__.py'
=== added file 'hooks/tests/test_config_changed_hooks.py'
--- hooks/tests/test_config_changed_hooks.py 1970-01-01 00:00:00 +0000
+++ hooks/tests/test_config_changed_hooks.py 2013-10-10 22:34:35 +0000
@@ -0,0 +1,120 @@
1import sys
2
3from testtools import TestCase
4from mock import patch
5
6import hooks
7from utils_for_tests import patch_open
8
9
10class ConfigChangedTest(TestCase):
11
12 def setUp(self):
13 super(ConfigChangedTest, self).setUp()
14 self.config_get = self.patch_hook("config_get")
15 self.get_service_ports = self.patch_hook("get_service_ports")
16 self.get_listen_stanzas = self.patch_hook("get_listen_stanzas")
17 self.create_haproxy_globals = self.patch_hook(
18 "create_haproxy_globals")
19 self.create_haproxy_defaults = self.patch_hook(
20 "create_haproxy_defaults")
21 self.remove_services = self.patch_hook("remove_services")
22 self.create_services = self.patch_hook("create_services")
23 self.load_services = self.patch_hook("load_services")
24 self.construct_haproxy_config = self.patch_hook(
25 "construct_haproxy_config")
26 self.service_haproxy = self.patch_hook(
27 "service_haproxy")
28 self.update_sysctl = self.patch_hook(
29 "update_sysctl")
30 self.notify_website = self.patch_hook("notify_website")
31 self.notify_peer = self.patch_hook("notify_peer")
32 self.log = self.patch_hook("log")
33 sys_exit = patch.object(sys, "exit")
34 self.sys_exit = sys_exit.start()
35 self.addCleanup(sys_exit.stop)
36
37 def patch_hook(self, hook_name):
38 mock_controller = patch.object(hooks, hook_name)
39 mock = mock_controller.start()
40 self.addCleanup(mock_controller.stop)
41 return mock
42
43 def test_config_changed_notify_website_changed_stanzas(self):
44 self.service_haproxy.return_value = True
45 self.get_listen_stanzas.side_effect = (
46 (('foo.internal', '1.2.3.4', 123),),
47 (('foo.internal', '1.2.3.4', 123),
48 ('bar.internal', '1.2.3.5', 234),))
49
50 hooks.config_changed()
51
52 self.notify_website.assert_called_once_with()
53 self.notify_peer.assert_called_once_with()
54
55 def test_config_changed_no_notify_website_not_changed(self):
56 self.service_haproxy.return_value = True
57 self.get_listen_stanzas.side_effect = (
58 (('foo.internal', '1.2.3.4', 123),),
59 (('foo.internal', '1.2.3.4', 123),))
60
61 hooks.config_changed()
62
63 self.notify_website.assert_not_called()
64 self.notify_peer.assert_not_called()
65
66 def test_config_changed_no_notify_website_failed_check(self):
67 self.service_haproxy.return_value = False
68 self.get_listen_stanzas.side_effect = (
69 (('foo.internal', '1.2.3.4', 123),),
70 (('foo.internal', '1.2.3.4', 123),
71 ('bar.internal', '1.2.3.5', 234),))
72
73 hooks.config_changed()
74
75 self.notify_website.assert_not_called()
76 self.notify_peer.assert_not_called()
77 self.log.assert_called_once_with(
78 "HAProxy configuration check failed, exiting.")
79 self.sys_exit.assert_called_once_with(1)
80
81
82class HelpersTest(TestCase):
83 def test_constructs_haproxy_config(self):
84 with patch_open() as (mock_open, mock_file):
85 hooks.construct_haproxy_config('foo-globals', 'foo-defaults',
86 'foo-monitoring', 'foo-services')
87
88 mock_file.write.assert_called_with(
89 'foo-globals\n\n'
90 'foo-defaults\n\n'
91 'foo-monitoring\n\n'
92 'foo-services\n\n'
93 )
94 mock_open.assert_called_with(hooks.default_haproxy_config, 'w')
95
96 def test_constructs_nothing_if_globals_is_none(self):
97 with patch_open() as (mock_open, mock_file):
98 hooks.construct_haproxy_config(None, 'foo-defaults',
99 'foo-monitoring', 'foo-services')
100
101 self.assertFalse(mock_open.called)
102 self.assertFalse(mock_file.called)
103
104 def test_constructs_nothing_if_defaults_is_none(self):
105 with patch_open() as (mock_open, mock_file):
106 hooks.construct_haproxy_config('foo-globals', None,
107 'foo-monitoring', 'foo-services')
108
109 self.assertFalse(mock_open.called)
110 self.assertFalse(mock_file.called)
111
112 def test_constructs_haproxy_config_without_optionals(self):
113 with patch_open() as (mock_open, mock_file):
114 hooks.construct_haproxy_config('foo-globals', 'foo-defaults')
115
116 mock_file.write.assert_called_with(
117 'foo-globals\n\n'
118 'foo-defaults\n\n'
119 )
120 mock_open.assert_called_with(hooks.default_haproxy_config, 'w')
0121
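The tests above lean heavily on a `patch_open()` helper imported from `utils_for_tests` (added elsewhere in this branch). As an illustration only, a minimal helper satisfying the way these tests use it could look like the sketch below; it uses Python 3 names (`builtins`, `unittest.mock`), whereas the branch itself targets Python 2, where the patch target would be `'__builtin__.open'` and the `mock` package.

```python
# Sketch of a patch_open() test helper: patches open() so tests can
# assert on the path/mode passed to it and on what was written to the
# file, without touching the filesystem. Hypothetical re-creation, not
# the charm's actual utils_for_tests code.
from contextlib import contextmanager
from unittest.mock import MagicMock, patch


@contextmanager
def patch_open():
    """Yield (mock_open, mock_file); open() calls are recorded on mock_open."""
    mock_open = MagicMock()
    mock_file = MagicMock()

    @contextmanager
    def stub_open(*args, **kwargs):
        mock_open(*args, **kwargs)  # record call(path, mode) for assertions
        yield mock_file

    with patch('builtins.open', stub_open):
        yield mock_open, mock_file
```

With this shape, `mock_open.assert_called_with(path, 'w')` and `mock_file.write.assert_called_with(content)` work exactly as the tests above use them.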
=== added file 'hooks/tests/test_helpers.py'
--- hooks/tests/test_helpers.py 1970-01-01 00:00:00 +0000
+++ hooks/tests/test_helpers.py 2013-10-10 22:34:35 +0000
@@ -0,0 +1,745 @@
1import os
2
3from contextlib import contextmanager
4from StringIO import StringIO
5
6from testtools import TestCase
7from mock import patch, call, MagicMock
8
9import hooks
10from utils_for_tests import patch_open
11
12
13class HelpersTest(TestCase):
14
15 @patch('hooks.config_get')
16 def test_creates_haproxy_globals(self, config_get):
17 config_get.return_value = {
18 'global_log': 'foo-log, bar-log',
19 'global_maxconn': 123,
20 'global_user': 'foo-user',
21 'global_group': 'foo-group',
22 'global_spread_checks': 234,
23 'global_debug': False,
24 'global_quiet': False,
25 }
26 result = hooks.create_haproxy_globals()
27
28 expected = '\n'.join([
29 'global',
30 ' log foo-log',
31 ' log bar-log',
32 ' maxconn 123',
33 ' user foo-user',
34 ' group foo-group',
35 ' spread-checks 234',
36 ])
37 self.assertEqual(result, expected)
38
39 @patch('hooks.config_get')
40 def test_creates_haproxy_globals_quietly_with_debug(self, config_get):
41 config_get.return_value = {
42 'global_log': 'foo-log, bar-log',
43 'global_maxconn': 123,
44 'global_user': 'foo-user',
45 'global_group': 'foo-group',
46 'global_spread_checks': 234,
47 'global_debug': True,
48 'global_quiet': True,
49 }
50 result = hooks.create_haproxy_globals()
51
52 expected = '\n'.join([
53 'global',
54 ' log foo-log',
55 ' log bar-log',
56 ' maxconn 123',
57 ' user foo-user',
58 ' group foo-group',
59 ' debug',
60 ' quiet',
61 ' spread-checks 234',
62 ])
63 self.assertEqual(result, expected)
64
65 def test_enables_haproxy(self):
66 mock_file = MagicMock()
67
68 @contextmanager
69 def mock_open(*args, **kwargs):
70 yield mock_file
71
72 initial_content = """
73 foo
74 ENABLED=0
75 bar
76 """
77 ending_content = initial_content.replace('ENABLED=0', 'ENABLED=1')
78
79 with patch('__builtin__.open', mock_open):
80 mock_file.read.return_value = initial_content
81
82 hooks.enable_haproxy()
83
84 mock_file.write.assert_called_with(ending_content)
85
86 @patch('hooks.config_get')
87 def test_creates_haproxy_defaults(self, config_get):
88 config_get.return_value = {
89 'default_options': 'foo-option, bar-option',
90 'default_timeouts': '234, 456',
91 'default_log': 'foo-log',
92 'default_mode': 'foo-mode',
93 'default_retries': 321,
94 }
95 result = hooks.create_haproxy_defaults()
96
97 expected = '\n'.join([
98 'defaults',
99 ' log foo-log',
100 ' mode foo-mode',
101 ' option foo-option',
102 ' option bar-option',
103 ' retries 321',
104 ' timeout 234',
105 ' timeout 456',
106 ])
107 self.assertEqual(result, expected)
108
109 def test_returns_none_when_haproxy_config_doesnt_exist(self):
110 self.assertIsNone(hooks.load_haproxy_config('/some/foo/file'))
111
112 @patch('__builtin__.open')
113 @patch('os.path.isfile')
114 def test_loads_haproxy_config_file(self, isfile, mock_open):
115 content = 'some content'
116 config_file = '/etc/haproxy/haproxy.cfg'
117 file_object = StringIO(content)
118 isfile.return_value = True
119 mock_open.return_value = file_object
120
121 result = hooks.load_haproxy_config()
122
123 self.assertEqual(result, content)
124 isfile.assert_called_with(config_file)
125 mock_open.assert_called_with(config_file)
126
127 @patch('hooks.load_haproxy_config')
128 def test_gets_monitoring_password(self, load_haproxy_config):
129 load_haproxy_config.return_value = 'stats auth foo:bar'
130
131 password = hooks.get_monitoring_password()
132
133 self.assertEqual(password, 'bar')
134
135 @patch('hooks.load_haproxy_config')
136 def test_gets_none_if_different_pattern(self, load_haproxy_config):
137 load_haproxy_config.return_value = 'some other pattern'
138
139 password = hooks.get_monitoring_password()
140
141 self.assertIsNone(password)
142
143 def test_gets_none_pass_if_config_doesnt_exist(self):
144 password = hooks.get_monitoring_password('/some/foo/path')
145
146 self.assertIsNone(password)
147
148 @patch('hooks.load_haproxy_config')
149 def test_gets_service_ports(self, load_haproxy_config):
150 load_haproxy_config.return_value = '''
151 listen foo.internal 1.2.3.4:123
152 listen bar.internal 1.2.3.5:234
153 '''
154
155 ports = hooks.get_service_ports()
156
157 self.assertEqual(ports, (123, 234))
158
159 @patch('hooks.load_haproxy_config')
160 def test_get_listen_stanzas(self, load_haproxy_config):
161 load_haproxy_config.return_value = '''
162 listen foo.internal 1.2.3.4:123
163 listen bar.internal 1.2.3.5:234
164 '''
165
166 stanzas = hooks.get_listen_stanzas()
167
168 self.assertEqual((('foo.internal', '1.2.3.4', 123),
169 ('bar.internal', '1.2.3.5', 234)),
170 stanzas)
171
172 @patch('hooks.load_haproxy_config')
173 def test_get_listen_stanzas_with_frontend(self, load_haproxy_config):
174 load_haproxy_config.return_value = '''
175 frontend foo-2-123
176 bind 1.2.3.4:123
177 default_backend foo.internal
178 frontend foo-2-234
179 bind 1.2.3.5:234
180 default_backend bar.internal
181 '''
182
183 stanzas = hooks.get_listen_stanzas()
184
185 self.assertEqual((('foo.internal', '1.2.3.4', 123),
186 ('bar.internal', '1.2.3.5', 234)),
187 stanzas)
188
189 @patch('hooks.load_haproxy_config')
190 def test_get_empty_tuple_when_no_stanzas(self, load_haproxy_config):
191 load_haproxy_config.return_value = '''
192 '''
193
194 stanzas = hooks.get_listen_stanzas()
195
196 self.assertEqual((), stanzas)
197
198 @patch('hooks.load_haproxy_config')
199 def test_get_listen_stanzas_none_configured(self, load_haproxy_config):
200 load_haproxy_config.return_value = ""
201
202 stanzas = hooks.get_listen_stanzas()
203
204 self.assertEqual((), stanzas)
205
206 def test_gets_no_ports_if_config_doesnt_exist(self):
207 ports = hooks.get_service_ports('/some/foo/path')
208 self.assertEqual((), ports)
209
210 @patch('hooks.open_port')
211 @patch('hooks.close_port')
212 def test_updates_service_ports(self, close_port, open_port):
213 old_service_ports = [123, 234, 345]
214 new_service_ports = [345, 456, 567]
215
216 hooks.update_service_ports(old_service_ports, new_service_ports)
217
218 self.assertEqual(close_port.mock_calls, [call(123), call(234)])
219 self.assertEqual(open_port.mock_calls,
220 [call(345), call(456), call(567)])
221
222 @patch('hooks.open_port')
223 @patch('hooks.close_port')
224 def test_updates_none_if_service_ports_not_provided(self, close_port,
225 open_port):
226 hooks.update_service_ports()
227
228 self.assertFalse(close_port.called)
229 self.assertFalse(open_port.called)
230
231 @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"})
232 def test_creates_a_listen_stanza(self):
233 service_name = 'some-name'
234 service_ip = '10.11.12.13'
235 service_port = 1234
236 service_options = ('foo', 'bar')
237 server_entries = [
238 ('name-1', 'ip-1', 'port-1', ('foo1', 'bar1')),
239 ('name-2', 'ip-2', 'port-2', ('foo2', 'bar2')),
240 ]
241
242 result = hooks.create_listen_stanza(service_name, service_ip,
243 service_port, service_options,
244 server_entries)
245
246 expected = '\n'.join((
247 'frontend haproxy-2-1234',
248 ' bind 10.11.12.13:1234',
249 ' default_backend some-name',
250 '',
251 'backend some-name',
252 ' foo',
253 ' bar',
254 ' server name-1 ip-1:port-1 foo1 bar1',
255 ' server name-2 ip-2:port-2 foo2 bar2',
256 ))
257
258 self.assertEqual(expected, result)
259
260 @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"})
261 def test_create_listen_stanza_filters_frontend_options(self):
262 service_name = 'some-name'
263 service_ip = '10.11.12.13'
264 service_port = 1234
265 service_options = ('capture request header X-Man',
266 'retries 3', 'balance uri', 'option logasap')
267 server_entries = [
268 ('name-1', 'ip-1', 'port-1', ('foo1', 'bar1')),
269 ('name-2', 'ip-2', 'port-2', ('foo2', 'bar2')),
270 ]
271
272 result = hooks.create_listen_stanza(service_name, service_ip,
273 service_port, service_options,
274 server_entries)
275
276 expected = '\n'.join((
277 'frontend haproxy-2-1234',
278 ' bind 10.11.12.13:1234',
279 ' default_backend some-name',
280 ' capture request header X-Man',
281 ' option logasap',
282 '',
283 'backend some-name',
284 ' retries 3',
285 ' balance uri',
286 ' server name-1 ip-1:port-1 foo1 bar1',
287 ' server name-2 ip-2:port-2 foo2 bar2',
288 ))
289
290 self.assertEqual(expected, result)
291
292 @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"})
293 def test_creates_a_listen_stanza_with_tuple_entries(self):
294 service_name = 'some-name'
295 service_ip = '10.11.12.13'
296 service_port = 1234
297 service_options = ('foo', 'bar')
298 server_entries = (
299 ('name-1', 'ip-1', 'port-1', ('foo1', 'bar1')),
300 ('name-2', 'ip-2', 'port-2', ('foo2', 'bar2')),
301 )
302
303 result = hooks.create_listen_stanza(service_name, service_ip,
304 service_port, service_options,
305 server_entries)
306
307 expected = '\n'.join((
308 'frontend haproxy-2-1234',
309 ' bind 10.11.12.13:1234',
310 ' default_backend some-name',
311 '',
312 'backend some-name',
313 ' foo',
314 ' bar',
315 ' server name-1 ip-1:port-1 foo1 bar1',
316 ' server name-2 ip-2:port-2 foo2 bar2',
317 ))
318
319 self.assertEqual(expected, result)
320
321 def test_doesnt_create_listen_stanza_if_args_not_provided(self):
322 self.assertIsNone(hooks.create_listen_stanza())
323
324 @patch('hooks.create_listen_stanza')
325 @patch('hooks.config_get')
326 @patch('hooks.get_monitoring_password')
327 def test_creates_a_monitoring_stanza(self, get_monitoring_password,
328 config_get, create_listen_stanza):
329 config_get.return_value = {
330 'enable_monitoring': True,
331 'monitoring_allowed_cidr': 'some-cidr',
332 'monitoring_password': 'some-pass',
333 'monitoring_username': 'some-user',
334 'monitoring_stats_refresh': 123,
335 'monitoring_port': 1234,
336 }
337 create_listen_stanza.return_value = 'some result'
338
339 result = hooks.create_monitoring_stanza(service_name="some-service")
340
341 self.assertEqual('some result', result)
342 get_monitoring_password.assert_called_with()
343 create_listen_stanza.assert_called_with(
344 'some-service', '0.0.0.0', 1234, [
345 'mode http',
346 'acl allowed_cidr src some-cidr',
347 'block unless allowed_cidr',
348 'stats enable',
349 'stats uri /',
350 'stats realm Haproxy\\ Statistics',
351 'stats auth some-user:some-pass',
352 'stats refresh 123',
353 ])
354
355 @patch('hooks.create_listen_stanza')
356 @patch('hooks.config_get')
357 @patch('hooks.get_monitoring_password')
358 def test_doesnt_create_a_monitoring_stanza_if_monitoring_disabled(
359 self, get_monitoring_password, config_get, create_listen_stanza):
360 config_get.return_value = {
361 'enable_monitoring': False,
362 }
363
364 result = hooks.create_monitoring_stanza(service_name="some-service")
365
366 self.assertIsNone(result)
367 self.assertFalse(get_monitoring_password.called)
368 self.assertFalse(create_listen_stanza.called)
369
370 @patch('hooks.create_listen_stanza')
371 @patch('hooks.config_get')
372 @patch('hooks.get_monitoring_password')
373 def test_uses_monitoring_password_for_stanza(self, get_monitoring_password,
374 config_get,
375 create_listen_stanza):
376 config_get.return_value = {
377 'enable_monitoring': True,
378 'monitoring_allowed_cidr': 'some-cidr',
379 'monitoring_password': 'changeme',
380 'monitoring_username': 'some-user',
381 'monitoring_stats_refresh': 123,
382 'monitoring_port': 1234,
383 }
384 create_listen_stanza.return_value = 'some result'
385 get_monitoring_password.return_value = 'some-monitoring-pass'
386
387 hooks.create_monitoring_stanza(service_name="some-service")
388
389 get_monitoring_password.assert_called_with()
390 create_listen_stanza.assert_called_with(
391 'some-service', '0.0.0.0', 1234, [
392 'mode http',
393 'acl allowed_cidr src some-cidr',
394 'block unless allowed_cidr',
395 'stats enable',
396 'stats uri /',
397 'stats realm Haproxy\\ Statistics',
398 'stats auth some-user:some-monitoring-pass',
399 'stats refresh 123',
400 ])
401
402 @patch('hooks.pwgen')
403 @patch('hooks.create_listen_stanza')
404 @patch('hooks.config_get')
405 @patch('hooks.get_monitoring_password')
406 def test_uses_new_password_for_stanza(self, get_monitoring_password,
407 config_get, create_listen_stanza,
408 pwgen):
409 config_get.return_value = {
410 'enable_monitoring': True,
411 'monitoring_allowed_cidr': 'some-cidr',
412 'monitoring_password': 'changeme',
413 'monitoring_username': 'some-user',
414 'monitoring_stats_refresh': 123,
415 'monitoring_port': 1234,
416 }
417 create_listen_stanza.return_value = 'some result'
418 get_monitoring_password.return_value = None
419 pwgen.return_value = 'some-new-pass'
420
421 hooks.create_monitoring_stanza(service_name="some-service")
422
423 get_monitoring_password.assert_called_with()
424 create_listen_stanza.assert_called_with(
425 'some-service', '0.0.0.0', 1234, [
426 'mode http',
427 'acl allowed_cidr src some-cidr',
428 'block unless allowed_cidr',
429 'stats enable',
430 'stats uri /',
431 'stats realm Haproxy\\ Statistics',
432 'stats auth some-user:some-new-pass',
433 'stats refresh 123',
434 ])
435
436 @patch('hooks.is_proxy')
437 @patch('hooks.config_get')
438 @patch('yaml.safe_load')
439 def test_gets_config_services(self, safe_load, config_get, is_proxy):
440 config_get.return_value = {
441 'services': 'some-services',
442 }
443 safe_load.return_value = [
444 {
445 'service_name': 'foo',
446 'service_options': {
447 'foo-1': 123,
448 },
449 'service_options': ['foo1', 'foo2'],
450 'server_options': ['baz1', 'baz2'],
451 },
452 {
453 'service_name': 'bar',
454 'service_options': ['bar1', 'bar2'],
455 'server_options': ['baz1', 'baz2'],
456 },
457 ]
458 is_proxy.return_value = False
459
460 result = hooks.get_config_services()
461 expected = {
462 None: {
463 'service_name': 'foo',
464 },
465 'foo': {
466 'service_name': 'foo',
467 'service_options': ['foo1', 'foo2'],
468 'server_options': ['baz1', 'baz2'],
469 },
470 'bar': {
471 'service_name': 'bar',
472 'service_options': ['bar1', 'bar2'],
473 'server_options': ['baz1', 'baz2'],
474 },
475 }
476
477 self.assertEqual(expected, result)
478
479 @patch('hooks.is_proxy')
480 @patch('hooks.config_get')
481 @patch('yaml.safe_load')
482 def test_gets_config_services_with_forward_option(self, safe_load,
483 config_get, is_proxy):
484 config_get.return_value = {
485 'services': 'some-services',
486 }
487 safe_load.return_value = [
488 {
489 'service_name': 'foo',
490 'service_options': {
491 'foo-1': 123,
492 },
493 'service_options': ['foo1', 'foo2'],
494 'server_options': ['baz1', 'baz2'],
495 },
496 {
497 'service_name': 'bar',
498 'service_options': ['bar1', 'bar2'],
499 'server_options': ['baz1', 'baz2'],
500 },
501 ]
502 is_proxy.return_value = True
503
504 result = hooks.get_config_services()
505 expected = {
506 None: {
507 'service_name': 'foo',
508 },
509 'foo': {
510 'service_name': 'foo',
511 'service_options': ['foo1', 'foo2', 'option forwardfor'],
512 'server_options': ['baz1', 'baz2'],
513 },
514 'bar': {
515 'service_name': 'bar',
516 'service_options': ['bar1', 'bar2', 'option forwardfor'],
517 'server_options': ['baz1', 'baz2'],
518 },
519 }
520
521 self.assertEqual(expected, result)
522
523 @patch('hooks.is_proxy')
524 @patch('hooks.config_get')
525 @patch('yaml.safe_load')
526 def test_gets_config_services_with_options_string(self, safe_load,
527 config_get, is_proxy):
528 config_get.return_value = {
529 'services': 'some-services',
530 }
531 safe_load.return_value = [
532 {
533 'service_name': 'foo',
534 'service_options': {
535 'foo-1': 123,
536 },
537 'service_options': ['foo1', 'foo2'],
538 'server_options': 'baz1 baz2',
539 },
540 {
541 'service_name': 'bar',
542 'service_options': ['bar1', 'bar2'],
543 'server_options': 'baz1 baz2',
544 },
545 ]
546 is_proxy.return_value = False
547
548 result = hooks.get_config_services()
549 expected = {
550 None: {
551 'service_name': 'foo',
552 },
553 'foo': {
554 'service_name': 'foo',
555 'service_options': ['foo1', 'foo2'],
556 'server_options': ['baz1', 'baz2'],
557 },
558 'bar': {
559 'service_name': 'bar',
560 'service_options': ['bar1', 'bar2'],
561 'server_options': ['baz1', 'baz2'],
562 },
563 }
564
565 self.assertEqual(expected, result)
566
567 @patch('hooks.get_config_services')
568 def test_gets_a_service_config(self, get_config_services):
569 get_config_services.return_value = {
570 'foo': 'bar',
571 }
572
573 self.assertEqual('bar', hooks.get_config_service('foo'))
574
575 @patch('hooks.get_config_services')
576 def test_gets_a_service_config_from_none(self, get_config_services):
577 get_config_services.return_value = {
578 None: 'bar',
579 }
580
581 self.assertEqual('bar', hooks.get_config_service())
582
583 @patch('hooks.get_config_services')
584 def test_gets_a_service_config_as_none(self, get_config_services):
585 get_config_services.return_value = {
586 'baz': 'bar',
587 }
588
589 self.assertIsNone(hooks.get_config_service())
590
591 @patch('os.path.exists')
592 def test_mark_as_proxy_when_path_exists(self, path_exists):
593 path_exists.return_value = True
594
595 self.assertTrue(hooks.is_proxy('foo'))
596 path_exists.assert_called_with('/var/run/haproxy/foo.is.proxy')
597
598 @patch('os.path.exists')
599 def test_doesnt_mark_as_proxy_when_path_doesnt_exist(self, path_exists):
600 path_exists.return_value = False
601
602 self.assertFalse(hooks.is_proxy('foo'))
603 path_exists.assert_called_with('/var/run/haproxy/foo.is.proxy')
604
605 @patch('os.path.exists')
606 def test_loads_services_by_name(self, path_exists):
607 with patch_open() as (mock_open, mock_file):
608 path_exists.return_value = True
609 mock_file.read.return_value = 'some content'
610
611 result = hooks.load_services('some-service')
612
613 self.assertEqual('some content', result)
614 mock_open.assert_called_with(
615 '/var/run/haproxy/some-service.service')
616 mock_file.read.assert_called_with()
617
618 @patch('os.path.exists')
619 def test_loads_no_service_if_path_doesnt_exist(self, path_exists):
620 path_exists.return_value = False
621
622 result = hooks.load_services('some-service')
623
624 self.assertIsNone(result)
625
626 @patch('glob.glob')
627 def test_loads_services_within_dir_if_no_name_provided(self, glob):
628 with patch_open() as (mock_open, mock_file):
629 mock_file.read.side_effect = ['foo', 'bar']
630 glob.return_value = ['foo-file', 'bar-file']
631
632 result = hooks.load_services()
633
634 self.assertEqual('foo\n\nbar\n\n', result)
635 mock_open.assert_has_calls([call('foo-file'), call('bar-file')])
636 mock_file.read.assert_has_calls([call(), call()])
637
638 @patch('hooks.os')
639 def test_removes_services_by_name(self, os_):
640 service_path = '/var/run/haproxy/some-service.service'
641 os_.path.exists.return_value = True
642
643 self.assertTrue(hooks.remove_services('some-service'))
644
645 os_.path.exists.assert_called_with(service_path)
646 os_.remove.assert_called_with(service_path)
647
648 @patch('hooks.os')
649 def test_removes_nothing_if_service_doesnt_exist(self, os_):
650 service_path = '/var/run/haproxy/some-service.service'
651 os_.path.exists.return_value = False
652
653 self.assertTrue(hooks.remove_services('some-service'))
654
655 os_.path.exists.assert_called_with(service_path)
656
657 @patch('hooks.os')
658 @patch('glob.glob')
659 def test_removes_all_services_in_dir_if_name_not_provided(self, glob, os_):
660 glob.return_value = ['foo', 'bar']
661
662 self.assertTrue(hooks.remove_services())
663
664 os_.remove.assert_has_calls([call('foo'), call('bar')])
665
666 @patch('hooks.os')
667 @patch('hooks.log')
668 def test_logs_error_when_failing_to_remove_service_by_name(self, log, os_):
669 error = Exception('some error')
670 os_.path.exists.return_value = True
671 os_.remove.side_effect = error
672
673 self.assertFalse(hooks.remove_services('some-service'))
674
675 log.assert_called_with(str(error))
676
677 @patch('hooks.os')
678 @patch('hooks.log')
679 @patch('glob.glob')
680 def test_logs_error_when_failing_to_remove_services(self, glob, log, os_):
681 errors = [Exception('some error 1'), Exception('some error 2')]
682 os_.remove.side_effect = errors
683 glob.return_value = ['foo', 'bar']
684
685 self.assertTrue(hooks.remove_services())
686
687 log.assert_has_calls([
688 call(str(errors[0])),
689 call(str(errors[1])),
690 ])
691
692 @patch('subprocess.call')
693 def test_calls_check_action(self, mock_call):
694 mock_call.return_value = 0
695
696 result = hooks.service_haproxy('check')
697
698 self.assertTrue(result)
699 mock_call.assert_called_with(['/usr/sbin/haproxy', '-f',
700 hooks.default_haproxy_config, '-c'])
701
702 @patch('subprocess.call')
703 def test_calls_check_action_with_different_config(self, mock_call):
704 mock_call.return_value = 0
705
706 result = hooks.service_haproxy('check', 'some-config')
707
708 self.assertTrue(result)
709 mock_call.assert_called_with(['/usr/sbin/haproxy', '-f',
710 'some-config', '-c'])
711
712 @patch('subprocess.call')
713 def test_fails_to_check_config(self, mock_call):
714 mock_call.return_value = 1
715
716 result = hooks.service_haproxy('check')
717
718 self.assertFalse(result)
719
720 @patch('subprocess.call')
721 def test_calls_different_actions(self, mock_call):
722 mock_call.return_value = 0
723
724 result = hooks.service_haproxy('foo')
725
726 self.assertTrue(result)
727 mock_call.assert_called_with(['service', 'haproxy', 'foo'])
728
729 @patch('subprocess.call')
730 def test_fails_to_call_different_actions(self, mock_call):
731 mock_call.return_value = 1
732
733 result = hooks.service_haproxy('foo')
734
735 self.assertFalse(result)
736
737 @patch('subprocess.call')
738 def test_doesnt_call_actions_if_action_not_provided(self, mock_call):
739 self.assertIsNone(hooks.service_haproxy())
740 self.assertFalse(mock_call.called)
741
742 @patch('subprocess.call')
743 def test_doesnt_call_actions_if_config_is_none(self, mock_call):
744 self.assertIsNone(hooks.service_haproxy('foo', None))
745 self.assertFalse(mock_call.called)
0746
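The `get_listen_stanzas()` tests above pin down two accepted config layouts: one-line `listen <name> <ip>:<port>` entries, and the newer `frontend`/`bind`/`default_backend` triplets. A standalone parser satisfying those tests might look like the sketch below (the charm's real implementation lives in `hooks.py`; this is an illustrative re-creation, not the merged code).

```python
# Parse haproxy config text into ((name, ip, port), ...) tuples,
# accepting both "listen" one-liners and frontend/bind/default_backend
# blocks, as the tests above require.
import re


def get_listen_stanzas(haproxy_config):
    """Return ((name, ip, port), ...) parsed from an haproxy config string."""
    if not haproxy_config:
        return ()
    # one-line form: "listen foo.internal 1.2.3.4:123"
    listens = re.findall(
        r'^\s*listen\s+(\S+)\s+(\S+):(\d+)', haproxy_config, re.M)
    # frontend form: pair each bind address with its default_backend name
    frontends = re.findall(
        r'^\s*bind\s+(\S+):(\d+)\s*\n\s*default_backend\s+(\S+)',
        haproxy_config, re.M)
    stanzas = [(name, ip, int(port)) for name, ip, port in listens]
    stanzas.extend((name, ip, int(port)) for ip, port, name in frontends)
    return tuple(stanzas)
```

Note the port comes back as an int and an empty or whitespace-only config yields `()`, matching `test_get_empty_tuple_when_no_stanzas` and `test_get_listen_stanzas_none_configured`.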
=== added file 'hooks/tests/test_nrpe_hooks.py'
--- hooks/tests/test_nrpe_hooks.py 1970-01-01 00:00:00 +0000
+++ hooks/tests/test_nrpe_hooks.py 2013-10-10 22:34:35 +0000
@@ -0,0 +1,24 @@
1from testtools import TestCase
2from mock import call, patch, MagicMock
3
4import hooks
5
6
7class NRPEHooksTest(TestCase):
8
9 @patch('hooks.install_nrpe_scripts')
10 @patch('charmhelpers.contrib.charmsupport.nrpe.NRPE')
11 def test_update_nrpe_config(self, nrpe, install_nrpe_scripts):
12 nrpe_compat = MagicMock()
13 nrpe_compat.checks = [MagicMock(shortname="haproxy"),
14 MagicMock(shortname="haproxy_queue")]
15 nrpe.return_value = nrpe_compat
16
17 hooks.update_nrpe_config()
18
19 self.assertEqual(
20 nrpe_compat.mock_calls,
21 [call.add_check('haproxy', 'Check HAProxy', 'check_haproxy.sh'),
22 call.add_check('haproxy_queue', 'Check HAProxy queue depth',
23 'check_haproxy_queue_depth.sh'),
24 call.write()])
025
=== added file 'hooks/tests/test_peer_hooks.py'
--- hooks/tests/test_peer_hooks.py 1970-01-01 00:00:00 +0000
+++ hooks/tests/test_peer_hooks.py 2013-10-10 22:34:35 +0000
@@ -0,0 +1,200 @@
1import os
2import yaml
3
4from testtools import TestCase
5from mock import patch
6
7import hooks
8from utils_for_tests import patch_open
9
10
11class PeerRelationTest(TestCase):
12
13 def setUp(self):
14 super(PeerRelationTest, self).setUp()
15
16 self.relations_of_type = self.patch_hook("relations_of_type")
17 self.log = self.patch_hook("log")
18 self.unit_get = self.patch_hook("unit_get")
19
20 def patch_hook(self, hook_name):
21 mock_controller = patch.object(hooks, hook_name)
22 mock = mock_controller.start()
23 self.addCleanup(mock_controller.stop)
24 return mock
25
26 @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"})
27 def test_with_peer_same_services(self):
28 self.unit_get.return_value = "1.2.4.5"
29 self.relations_of_type.return_value = [
30 {"__unit__": "haproxy/1",
31 "hostname": "haproxy-1",
32 "private-address": "1.2.4.4",
33 "all_services": yaml.dump([
34 {"service_name": "foo_service",
35 "service_host": "0.0.0.0",
36 "service_options": ["balance leastconn"],
37 "service_port": 4242},
38 ])
39 }
40 ]
41
42 services_dict = {
43 "foo_service": {
44 "service_name": "foo_service",
45 "service_host": "0.0.0.0",
46 "service_port": 4242,
47 "service_options": ["balance leastconn"],
48 "server_options": ["maxconn 4"],
49 "servers": [("backend_1__8080", "1.2.3.4",
50 8080, ["maxconn 4"])],
51 },
52 }
53
54 expected = {
55 "foo_service": {
56 "service_name": "foo_service",
57 "service_host": "0.0.0.0",
58 "service_port": 4242,
59 "service_options": ["balance leastconn",
60 "mode tcp",
61 "option tcplog"],
62 "servers": [
63 ("haproxy-1", "1.2.4.4", 4243, ["check"]),
64 ("haproxy-2", "1.2.4.5", 4243, ["check", "backup"])
65 ],
66 },
67 "foo_service_be": {
68 "service_name": "foo_service_be",
69 "service_host": "0.0.0.0",
70 "service_port": 4243,
71 "service_options": ["balance leastconn"],
72 "server_options": ["maxconn 4"],
73 "servers": [("backend_1__8080", "1.2.3.4",
74 8080, ["maxconn 4"])],
75 },
76 }
77 self.assertEqual(expected, hooks.apply_peer_config(services_dict))
78
79 @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"})
80 def test_inherit_timeout_settings(self):
81 self.unit_get.return_value = "1.2.4.5"
82 self.relations_of_type.return_value = [
83 {"__unit__": "haproxy/1",
84 "hostname": "haproxy-1",
85 "private-address": "1.2.4.4",
86 "all_services": yaml.dump([
87 {"service_name": "foo_service",
88 "service_host": "0.0.0.0",
89 "service_options": ["timeout server 5000"],
90 "service_port": 4242},
91 ])
92 }
93 ]
94
95 services_dict = {
96 "foo_service": {
97 "service_name": "foo_service",
98 "service_host": "0.0.0.0",
99 "service_port": 4242,
100 "service_options": ["timeout server 5000"],
101 "server_options": ["maxconn 4"],
102 "servers": [("backend_1__8080", "1.2.3.4",
103 8080, ["maxconn 4"])],
104 },
105 }
106
107 expected = {
108 "foo_service": {
109 "service_name": "foo_service",
110 "service_host": "0.0.0.0",
111 "service_port": 4242,
112 "service_options": ["balance leastconn",
113 "mode tcp",
114 "option tcplog",
115 "timeout server 5000"],
116 "servers": [
117 ("haproxy-1", "1.2.4.4", 4243, ["check"]),
118 ("haproxy-2", "1.2.4.5", 4243, ["check", "backup"])
119 ],
120 },
121 "foo_service_be": {
122 "service_name": "foo_service_be",
123 "service_host": "0.0.0.0",
124 "service_port": 4243,
125 "service_options": ["timeout server 5000"],
126 "server_options": ["maxconn 4"],
127 "servers": [("backend_1__8080", "1.2.3.4",
128 8080, ["maxconn 4"])],
129 },
130 }
131 self.assertEqual(expected, hooks.apply_peer_config(services_dict))
132
133 @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"})
134 def test_with_no_relation_data(self):
135 self.unit_get.return_value = "1.2.4.5"
136 self.relations_of_type.return_value = []
137
138 services_dict = {
139 "foo_service": {
140 "service_name": "foo_service",
141 "service_host": "0.0.0.0",
142 "service_port": 4242,
143 "service_options": ["balance leastconn"],
144 "server_options": ["maxconn 4"],
145 "servers": [("backend_1__8080", "1.2.3.4",
146 8080, ["maxconn 4"])],
147 },
148 }
149
150 expected = services_dict
151 self.assertEqual(expected, hooks.apply_peer_config(services_dict))
152
153 @patch.dict(os.environ, {"JUJU_UNIT_NAME": "haproxy/2"})
154 def test_with_missing_all_services(self):
155 self.unit_get.return_value = "1.2.4.5"
156 self.relations_of_type.return_value = [
157 {"__unit__": "haproxy/1",
158 "hostname": "haproxy-1",
159 "private-address": "1.2.4.4",
160 }
161 ]
162
163 services_dict = {
164 "foo_service": {
165 "service_name": "foo_service",
166 "service_host": "0.0.0.0",
167 "service_port": 4242,
168 "service_options": ["balance leastconn"],
169 "server_options": ["maxconn 4"],
170 "servers": [("backend_1__8080", "1.2.3.4",
171 8080, ["maxconn 4"])],
172 },
173 }
174
175 expected = services_dict
176 self.assertEqual(expected, hooks.apply_peer_config(services_dict))
177
178 @patch('hooks.create_listen_stanza')
179 def test_writes_service_config(self, create_listen_stanza):
180 create_listen_stanza.return_value = 'some content'
181 services_dict = {
182 'foo': {
183 'service_name': 'bar',
184 'service_host': 'some-host',
185 'service_port': 'some-port',
186 'service_options': 'some-options',
187 'servers': (1, 2),
188 },
189 }
190
191 with patch.object(os.path, "exists") as exists:
192 exists.return_value = True
193 with patch_open() as (mock_open, mock_file):
194 hooks.write_service_config(services_dict)
195
196 create_listen_stanza.assert_called_with(
197 'bar', 'some-host', 'some-port', 'some-options', (1, 2))
198 mock_open.assert_called_with(
199 '/var/run/haproxy/bar.service', 'w')
200 mock_file.write.assert_called_with('some content')
0201
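The `PeerRelationTest` expectations above encode the peer topology described in this proposal: the public-facing service proxies to every haproxy unit (the first unit in unit-name order active, the rest marked `backup`, so `maxconn` can be enforced on a single unit), while the real backends move to a `<name>_be` service one port up. A simplified, standalone sketch of that split is below; the charm's actual `apply_peer_config` in `hooks.py` also parses relation data and inherits timeout settings, which this omits.

```python
# Split one service dict into a tcp-mode routing frontend (peers as
# servers, first-by-unit-name active, rest backup) plus a "_be" backend
# service on service_port + 1. Illustrative sketch only; inputs mirror
# the test fixtures above.
def apply_peer_split(service, local_unit, peer_units):
    """local_unit/peer_units are (unit_name, hostname, address) tuples."""
    be_port = service["service_port"] + 1
    # sort by unit name so every unit elects the same active peer
    units = sorted(peer_units + [local_unit])
    servers = []
    for i, (unit_name, hostname, address) in enumerate(units):
        opts = ["check"] if i == 0 else ["check", "backup"]
        servers.append((hostname, address, be_port, opts))
    frontend = {
        "service_name": service["service_name"],
        "service_host": service["service_host"],
        "service_port": service["service_port"],
        "service_options": ["balance leastconn", "mode tcp", "option tcplog"],
        "servers": servers,
    }
    backend = dict(service,
                   service_name=service["service_name"] + "_be",
                   service_port=be_port)
    return {frontend["service_name"]: frontend,
            backend["service_name"]: backend}
```

Run against the `test_with_peer_same_services` fixture (haproxy/1 and haproxy/2), this produces haproxy-1 as the active server and haproxy-2 as `backup` on port 4243, matching the expected dict in that test.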
=== added file 'hooks/tests/test_reverseproxy_hooks.py'
--- hooks/tests/test_reverseproxy_hooks.py 1970-01-01 00:00:00 +0000
+++ hooks/tests/test_reverseproxy_hooks.py 2013-10-10 22:34:35 +0000
@@ -0,0 +1,345 @@
1from testtools import TestCase
2from mock import patch, call
3
4import hooks
5
6
7class ReverseProxyRelationTest(TestCase):
8
9 def setUp(self):
10 super(ReverseProxyRelationTest, self).setUp()
11
12 self.relations_of_type = self.patch_hook("relations_of_type")
13 self.get_config_services = self.patch_hook("get_config_services")
14 self.log = self.patch_hook("log")
15 self.write_service_config = self.patch_hook("write_service_config")
16 self.apply_peer_config = self.patch_hook("apply_peer_config")
17 self.apply_peer_config.side_effect = lambda value: value
18
19 def patch_hook(self, hook_name):
20 mock_controller = patch.object(hooks, hook_name)
21 mock = mock_controller.start()
22 self.addCleanup(mock_controller.stop)
23 return mock
24
25 def test_relation_data_returns_none(self):
26 self.get_config_services.return_value = {
27 "service": {
28 "service_name": "service",
29 },
30 }
31 self.relations_of_type.return_value = []
32 self.assertIs(None, hooks.create_services())
33 self.log.assert_called_once_with("No backend servers, exiting.")
34 self.write_service_config.assert_not_called()
35
36 def test_relation_data_returns_no_relations(self):
37 self.get_config_services.return_value = {
38 "service": {
39 "service_name": "service",
40 },
41 }
42 self.relations_of_type.return_value = []
43 self.assertIs(None, hooks.create_services())
44 self.log.assert_called_once_with("No backend servers, exiting.")
45 self.write_service_config.assert_not_called()
46
47 def test_relation_no_services(self):
48 self.get_config_services.return_value = {}
49 self.relations_of_type.return_value = [
50 {"port": 4242,
51 "__unit__": "foo/0",
52 "hostname": "backend.1",
53 "private-address": "1.2.3.4"},
54 ]
55 self.assertIs(None, hooks.create_services())
56 self.log.assert_called_once_with("No services configured, exiting.")
57 self.write_service_config.assert_not_called()
58
59 def test_no_port_in_relation_data(self):
60 self.get_config_services.return_value = {
61 "service": {
62 "service_name": "service",
63 },
64 }
65 self.relations_of_type.return_value = [
66 {"private-address": "1.2.3.4",
67 "__unit__": "foo/0"},
68 ]
69 self.assertIs(None, hooks.create_services())
70 self.log.assert_has_calls([call(
71 "No port in relation data for 'foo/0', skipping.")])
72 self.write_service_config.assert_not_called()
73
74 def test_no_private_address_in_relation_data(self):
75 self.get_config_services.return_value = {
76 "service": {
77 "service_name": "service",
78 },
79 }
80 self.relations_of_type.return_value = [
81 {"port": 4242,
82 "__unit__": "foo/0"},
83 ]
84 self.assertIs(None, hooks.create_services())
85 self.log.assert_has_calls([call(
86 "No private-address in relation data for 'foo/0', skipping.")])
87 self.write_service_config.assert_not_called()
88
89 def test_no_hostname_in_relation_data(self):
90 self.get_config_services.return_value = {
91 "service": {
92 "service_name": "service",
93 },
94 }
95 self.relations_of_type.return_value = [
96 {"port": 4242,
97 "private-address": "1.2.3.4",
98 "__unit__": "foo/0"},
99 ]
100 self.assertIs(None, hooks.create_services())
101 self.log.assert_has_calls([call(
102 "No hostname in relation data for 'foo/0', skipping.")])
103 self.write_service_config.assert_not_called()
104
105 def test_relation_unknown_service(self):
106 self.get_config_services.return_value = {
107 "service": {
108 "service_name": "service",
109 },
110 }
111 self.relations_of_type.return_value = [
112 {"port": 4242,
113 "hostname": "backend.1",
114 "service_name": "invalid",
115 "private-address": "1.2.3.4",
116 "__unit__": "foo/0"},
117 ]
118 self.assertIs(None, hooks.create_services())
119 self.log.assert_has_calls([call(
120 "Service 'invalid' does not exist.")])
121 self.write_service_config.assert_not_called()
122
123 def test_no_relation_but_has_servers_from_config(self):
124 self.get_config_services.return_value = {
125 None: {
126 "service_name": "service",
127 },
128 "service": {
129 "service_name": "service",
130 "servers": [
131 ("legacy-backend", "1.2.3.1", 4242, ["maxconn 42"]),
132 ]
133 },
134 }
135 self.relations_of_type.return_value = []
136
137 expected = {
138 'service': {
139 'service_name': 'service',
140 'servers': [
141 ("legacy-backend", "1.2.3.1", 4242, ["maxconn 42"]),
142 ],
143 },
144 }
145 self.assertEqual(expected, hooks.create_services())
146 self.write_service_config.assert_called_with(expected)
147
148 def test_relation_default_service(self):
149 self.get_config_services.return_value = {
150 None: {
151 "service_name": "service",
152 },
153 "service": {
154 "service_name": "service",
155 },
156 }
157 self.relations_of_type.return_value = [
158 {"port": 4242,
159 "hostname": "backend.1",
160 "private-address": "1.2.3.4",
161 "__unit__": "foo/0"},
162 ]
163
164 expected = {
165 'service': {
166 'service_name': 'service',
167 'servers': [('foo-0-4242', '1.2.3.4', 4242, [])],
168 },
169 }
170 self.assertEqual(expected, hooks.create_services())
171 self.write_service_config.assert_called_with(expected)
172
173 def test_with_service_options(self):
174 self.get_config_services.return_value = {
175 None: {
176 "service_name": "service",
177 },
178 "service": {
179 "service_name": "service",
180 "server_options": ["maxconn 4"],
181 },
182 }
183 self.relations_of_type.return_value = [
184 {"port": 4242,
185 "hostname": "backend.1",
186 "private-address": "1.2.3.4",
187 "__unit__": "foo/0"},
188 ]
189
190 expected = {
191 'service': {
192 'service_name': 'service',
193 'server_options': ["maxconn 4"],
194 'servers': [('foo-0-4242', '1.2.3.4',
195 4242, ["maxconn 4"])],
196 },
197 }
198 self.assertEqual(expected, hooks.create_services())
199 self.write_service_config.assert_called_with(expected)
200
201 def test_with_service_name(self):
202 self.get_config_services.return_value = {
203 None: {
204 "service_name": "service",
205 },
206 "foo_service": {
207 "service_name": "foo_service",
208 "server_options": ["maxconn 4"],
209 },
210 }
211 self.relations_of_type.return_value = [
212 {"port": 4242,
213 "hostname": "backend.1",
214 "service_name": "foo_service",
215 "private-address": "1.2.3.4",
216 "__unit__": "foo/0"},
217 ]
218
219 expected = {
220 'foo_service': {
221 'service_name': 'foo_service',
222 'server_options': ["maxconn 4"],
223 'servers': [('foo-0-4242', '1.2.3.4',
224 4242, ["maxconn 4"])],
225 },
226 }
227 self.assertEqual(expected, hooks.create_services())
228 self.write_service_config.assert_called_with(expected)
229
230 def test_no_service_name_unit_name_match_service_name(self):
231 self.get_config_services.return_value = {
232 None: {
233 "service_name": "foo_service",
234 },
235 "foo_service": {
236 "service_name": "foo_service",
237 "server_options": ["maxconn 4"],
238 },
239 }
240 self.relations_of_type.return_value = [
241 {"port": 4242,
242 "hostname": "backend.1",
243 "private-address": "1.2.3.4",
244 "__unit__": "foo/1"},
245 ]
246
247 expected = {
248 'foo_service': {
249 'service_name': 'foo_service',
250 'server_options': ["maxconn 4"],
251 'servers': [('foo-1-4242', '1.2.3.4',
252 4242, ["maxconn 4"])],
253 },
254 }
255 self.assertEqual(expected, hooks.create_services())
256 self.write_service_config.assert_called_with(expected)
257
258 def test_with_sitenames_match_service_name(self):
259 self.get_config_services.return_value = {
260 None: {
261 "service_name": "service",
262 },
263 "foo_srv": {
264 "service_name": "foo_srv",
265 "server_options": ["maxconn 4"],
266 },
267 }
268 self.relations_of_type.return_value = [
269 {"port": 4242,
270 "hostname": "backend.1",
271 "sitenames": "foo_srv bar_srv",
272 "private-address": "1.2.3.4",
273 "__unit__": "foo/0"},
274 ]
275
276 expected = {
277 'foo_srv': {
278 'service_name': 'foo_srv',
279 'server_options': ["maxconn 4"],
280 'servers': [('foo-0-4242', '1.2.3.4',
281 4242, ["maxconn 4"])],
282 },
283 }
284 self.assertEqual(expected, hooks.create_services())
285 self.write_service_config.assert_called_with(expected)
286
287 def test_with_juju_services_match_service_name(self):
288 self.get_config_services.return_value = {
289 None: {
290 "service_name": "service",
291 },
292 "foo_service": {
293 "service_name": "foo_service",
294 "server_options": ["maxconn 4"],
295 },
296 }
297 self.relations_of_type.return_value = [
298 {"port": 4242,
299 "hostname": "backend.1",
300 "private-address": "1.2.3.4",
301 "__unit__": "foo/1"},
302 ]
303
304 expected = {
305 'foo_service': {
306 'service_name': 'foo_service',
307 'server_options': ["maxconn 4"],
308 'servers': [('foo-1-4242', '1.2.3.4',
309 4242, ["maxconn 4"])],
310 },
311 }
312
313 result = hooks.create_services()
314
315 self.assertEqual(expected, result)
316 self.write_service_config.assert_called_with(expected)
317
318 def test_with_sitenames_no_match_but_unit_name(self):
319 self.get_config_services.return_value = {
320 None: {
321 "service_name": "service",
322 },
323 "foo": {
324 "service_name": "foo",
325 "server_options": ["maxconn 4"],
326 },
327 }
328 self.relations_of_type.return_value = [
329 {"port": 4242,
330 "hostname": "backend.1",
331 "sitenames": "bar_service baz_service",
332 "private-address": "1.2.3.4",
333 "__unit__": "foo/0"},
334 ]
335
336 expected = {
337 'foo': {
338 'service_name': 'foo',
339 'server_options': ["maxconn 4"],
340 'servers': [('foo-0-4242', '1.2.3.4',
341 4242, ["maxconn 4"])],
342 },
343 }
344 self.assertEqual(expected, hooks.create_services())
345 self.write_service_config.assert_called_with(expected)
0346
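Both `ReverseProxyRelationTest` above and the test classes in the next file define an identical `patch_hook` helper. A hypothetical refactor (names illustrative, not part of this proposal) would hoist it into a shared mixin so each `setUp` can patch hooks without repeating the boilerplate:

```python
import types
from unittest import TestCase
from unittest.mock import patch

# Stand-in for the charm's hooks module, so the demo is self-contained.
hooks = types.SimpleNamespace(log=lambda msg: None)


class HookPatcherMixin(object):
    """Shared helper: patch an attribute on hooks for the test's duration."""

    def patch_hook(self, hook_name):
        mock_controller = patch.object(hooks, hook_name)
        mock = mock_controller.start()
        self.addCleanup(mock_controller.stop)
        return mock


class ExampleTest(HookPatcherMixin, TestCase):

    def test_log_is_patched(self):
        log = self.patch_hook("log")
        hooks.log("hello")
        log.assert_called_once_with("hello")
```

`addCleanup` guarantees the patch is undone even if the test fails, which is why this pattern is preferred over stopping patches in `tearDown`.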
=== added file 'hooks/tests/test_website_hooks.py'
--- hooks/tests/test_website_hooks.py 1970-01-01 00:00:00 +0000
+++ hooks/tests/test_website_hooks.py 2013-10-10 22:34:35 +0000
@@ -0,0 +1,145 @@
1from testtools import TestCase
2from mock import patch, call
3
4import hooks
5
6
7class WebsiteRelationTest(TestCase):
8
9 def setUp(self):
10 super(WebsiteRelationTest, self).setUp()
11 self.notify_website = self.patch_hook("notify_website")
12
13 def patch_hook(self, hook_name):
14 mock_controller = patch.object(hooks, hook_name)
15 mock = mock_controller.start()
16 self.addCleanup(mock_controller.stop)
17 return mock
18
19 def test_website_interface_none(self):
20 self.assertEqual(None, hooks.website_interface(hook_name=None))
21 self.notify_website.assert_not_called()
22
23 def test_website_interface_joined(self):
24 hooks.website_interface(hook_name="joined")
25 self.notify_website.assert_called_once_with(
26 changed=False, relation_ids=(None,))
27
28 def test_website_interface_changed(self):
29 hooks.website_interface(hook_name="changed")
30 self.notify_website.assert_called_once_with(
31 changed=True, relation_ids=(None,))
32
33
34class NotifyRelationTest(TestCase):
35
36 def setUp(self):
37 super(NotifyRelationTest, self).setUp()
38
39 self.relations_for_id = self.patch_hook("relations_for_id")
40 self.relation_set = self.patch_hook("relation_set")
41 self.config_get = self.patch_hook("config_get")
42 self.get_relation_ids = self.patch_hook("get_relation_ids")
43 self.get_hostname = self.patch_hook("get_hostname")
44 self.log = self.patch_hook("log")
45 self.get_config_services = self.patch_hook("get_config_service")
46
47 def patch_hook(self, hook_name):
48 mock_controller = patch.object(hooks, hook_name)
49 mock = mock_controller.start()
50 self.addCleanup(mock_controller.stop)
51 return mock
52
53 def test_notify_website_relation_no_relation_ids(self):
54 self.get_relation_ids.return_value = ()
55 hooks.notify_relation("website")
56 self.relation_set.assert_not_called()
57 self.get_relation_ids.assert_called_once_with("website")
58
59 def test_notify_website_relation_with_default_relation(self):
60 self.get_relation_ids.return_value = ()
61 self.get_hostname.return_value = "foo.local"
62 self.relations_for_id.return_value = [{}]
63 self.config_get.return_value = {"services": ""}
64
65 hooks.notify_relation("website", relation_ids=(None,))
66
67 self.get_hostname.assert_called_once_with()
68 self.relations_for_id.assert_called_once_with(None)
69 self.relation_set.assert_called_once_with(
70 relation_id=None, port="80", hostname="foo.local",
71 all_services="")
72 self.get_relation_ids.assert_not_called()
73
74 def test_notify_website_relation_with_relations(self):
75 self.get_relation_ids.return_value = ("website:1",
76 "website:2")
77 self.get_hostname.return_value = "foo.local"
78 self.relations_for_id.return_value = [{}]
79 self.config_get.return_value = {"services": ""}
80
81 hooks.notify_relation("website")
82
83 self.get_hostname.assert_called_once_with()
84 self.get_relation_ids.assert_called_once_with("website")
85 self.relations_for_id.assert_has_calls([
86 call("website:1"),
87 call("website:2"),
88 ])
89
90 self.relation_set.assert_has_calls([
91 call(relation_id="website:1", port="80", hostname="foo.local",
92 all_services=""),
93 call(relation_id="website:2", port="80", hostname="foo.local",
94 all_services=""),
95 ])
96
97 def test_notify_website_relation_with_different_sitenames(self):
98 self.get_relation_ids.return_value = ("website:1",)
99 self.get_hostname.return_value = "foo.local"
100 self.relations_for_id.return_value = [{"service_name": "foo"},
101 {"service_name": "bar"}]
102 self.config_get.return_value = {"services": ""}
103
104 hooks.notify_relation("website")
105
106 self.get_hostname.assert_called_once_with()
107 self.get_relation_ids.assert_called_once_with("website")
108 self.relations_for_id.assert_has_calls([
109 call("website:1"),
110 ])
111
112 self.relation_set.assert_has_calls([
113 call(
114 relation_id="website:1", port="80", hostname="foo.local",
115 all_services=""),
116 ])
117 self.log.assert_called_once_with(
118 "Remote units requested more than a single service name. "
119 "Falling back to default host/port.")
120
121 def test_notify_website_relation_with_same_sitenames(self):
122 self.get_relation_ids.return_value = ("website:1",)
123 self.get_hostname.side_effect = ["foo.local", "bar.local"]
124 self.relations_for_id.return_value = [{"service_name": "bar"},
125 {"service_name": "bar"}]
126 self.config_get.return_value = {"services": ""}
127 self.get_config_services.return_value = {"service_host": "bar.local",
128 "service_port": "4242"}
129
130 hooks.notify_relation("website")
131
132 self.get_hostname.assert_has_calls([
133 call(),
134 call("bar.local")])
135 self.get_relation_ids.assert_called_once_with("website")
136 self.relations_for_id.assert_has_calls([
137 call("website:1"),
138 ])
139
140 self.relation_set.assert_has_calls([
The diff has been truncated for viewing.
