Merge lp:~sidnei/charms/precise/squid-reverseproxy/trunk into lp:charms/squid-reverseproxy

Proposed by Sidnei da Silva
Status: Merged
Merged at revision: 43
Proposed branch: lp:~sidnei/charms/precise/squid-reverseproxy/trunk
Merge into: lp:charms/squid-reverseproxy
Diff against target: 4487 lines (+3137/-946)
27 files modified
.bzrignore (+10/-1)
Makefile (+39/-0)
charm-helpers.yaml (+4/-0)
cm.py (+193/-0)
config-manager.txt (+6/-0)
config.yaml (+43/-6)
files/check_squidpeers (+75/-0)
hooks/charmhelpers/contrib/charmsupport/nrpe.py (+218/-0)
hooks/charmhelpers/contrib/charmsupport/volumes.py (+156/-0)
hooks/charmhelpers/core/hookenv.py (+340/-0)
hooks/charmhelpers/core/host.py (+239/-0)
hooks/charmhelpers/fetch/__init__.py (+209/-0)
hooks/charmhelpers/fetch/archiveurl.py (+48/-0)
hooks/charmhelpers/fetch/bzrurl.py (+44/-0)
hooks/hooks.py (+288/-254)
hooks/install (+9/-0)
hooks/shelltoolbox/__init__.py (+0/-662)
hooks/tests/test_cached_website_hooks.py (+111/-0)
hooks/tests/test_config_changed_hooks.py (+60/-0)
hooks/tests/test_helpers.py (+415/-0)
hooks/tests/test_nrpe_hooks.py (+271/-0)
hooks/tests/test_website_hooks.py (+267/-0)
metadata.yaml (+4/-1)
revision (+0/-1)
setup.cfg (+4/-0)
tarmac_tests.sh (+6/-0)
templates/main_config.template (+78/-21)
To merge this branch: bzr merge lp:~sidnei/charms/precise/squid-reverseproxy/trunk
Reviewer Review Type Date Requested Status
Marco Ceppi (community) Approve
charmers Pending
Review via email: mp+190500@code.launchpad.net

This proposal supersedes a proposal from 2013-08-22.

Description of the change

* Greatly improve test coverage

* Allow the use of an X-Balancer-Name header to select which cache_peer backend will be used for a specific request.

  This is useful if you have multiple applications sitting behind a single hostname, directed to the same Squid by an Apache service using the balancer relation. Since 'dstdomain' can no longer be relied on, the Apache charm sets this special header in the balancer relation and it is used instead.
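
  As a rough illustration (not taken from this branch; the ACL and peer names below are hypothetical), routing on that header in squid.conf looks like:

  ```
  # Hypothetical squid.conf fragment: route requests to a cache_peer
  # based on the X-Balancer-Name header set by the Apache charm.
  acl balancer_app1 req_header X-Balancer-Name ^app1$
  cache_peer app1.internal parent 80 0 no-query originserver name=backend_app1
  cache_peer_access backend_app1 allow balancer_app1
  cache_peer_access backend_app1 deny all
  ```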

* Support 'all-services' being set on the relation, in the same way the haproxy charm sets it, in addition to the previously supported 'sitenames' setting. This makes the charm compatible with the haproxy charm.
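
  For illustration only (the exact relation payload is an assumption, not quoted from either charm), an 'all-services' value on the relation is a YAML-encoded list along these lines:

  ```yaml
  # Hypothetical relation setting; service names are made up.
  all-services: |
    - {service_name: webapp, service_port: 80}
    - {service_name: api, service_port: 8080}
  ```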

* When the list of supported 'sitenames' (computed from dstdomain acls) changes, notify services related via the 'cached-website' relation. This allows new services to be added to the haproxy service (or any other related service), which notifies the squid service; the change then bubbles up to services related via 'cached-website'.
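
  A hedged sketch of the computation described above, assuming a haproxy-style services payload (the field names mirror the config.yaml example in the diff; the payload itself is made up):

  ```python
  import json

  # One entry per proxied service; squid derives one dstdomain acl per
  # service_domain, so the sorted domain list is what changes (and what
  # triggers the cached-website notification) when services are added.
  services = json.loads("""
  [
    {"service_name": "example_proxy", "service_domain": "example.com"},
    {"service_name": "other_proxy", "service_domain": "other.example.org"}
  ]
  """)

  sitenames = sorted(s["service_domain"] for s in services if "service_domain" in s)
  print(sitenames)  # ['example.com', 'other.example.org']
  ```
  
  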

Revision history for this message
Mark Mims (mark-mims) wrote : Posted in a previous version of this proposal

same as apache... please reserve /tests for charm tests

review: Needs Resubmitting
Revision history for this message
James Page (james-page) wrote : Posted in a previous version of this proposal

Same comments as for haproxy and apache2; charm branch needs to include all source dependencies.

review: Needs Fixing
41. By Matthias Arnason

[sidnei r=tiaz] Expose 'via' config setting, default to on.

Revision history for this message
Marco Ceppi (marcoceppi) wrote :

Hi Sidnei! Thanks for submitting all these improvements and clean-ups to the squid-reverseproxy charm! The addition of unit tests for the hooks is fantastic and really speaks to our effort to bring measurable quality into the charm store.

I've reviewed all the code and the changes look fine; you've addressed the concerns from previous proposals. However, I have concerns about changing the names of a charm's configuration options. I fear this will break existing deployments in unknown ways. I've reached out to the mailing list for consensus on what to do in this case: https://lists.ubuntu.com/archives/juju/2013-November/003127.html I'll return with a verdict from that post.

review: Needs Information
Revision history for this message
Sidnei da Silva (sidnei) wrote :

Thanks! As per my reply on the list, that specific change reverts the config option to the name it had prior to r31.

The currently published charm as of r42 is in fact broken: config.yaml defines 'nagios_check_url', but the update_nrpe_checks function refers to 'nagios_check_http_params', causing a KeyError.

$ bzr branch lp:charms/squid-reverseproxy
$ cd squid-reverseproxy
$ bzr revno
42
$ bzr grep nagios_check
config.yaml: nagios_check_url:
hooks/hooks.py: (config_data['nagios_check_http_params']))
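
A minimal illustration of the breakage (the dict stands in for the real charm config; its contents are illustrative):

```python
# config.yaml at r42 defines 'nagios_check_url', but the hook looks up
# 'nagios_check_http_params' -- a key that was never defined.
config_data = {"nagios_check_url": ""}

try:
    config_data["nagios_check_http_params"]
    raised = False
except KeyError:
    raised = True

print(raised)  # True
```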

Revision history for this message
Marco Ceppi (marcoceppi) wrote :

Hi Sidnei, thanks for the clarification!

review: Approve

Preview Diff

=== modified file '.bzrignore'
--- .bzrignore 2013-02-15 02:21:16 +0000
+++ .bzrignore 2013-10-29 21:07:21 +0000
@@ -1,1 +1,10 @@
-basenode/
+revision
+_trial_temp
+.coverage
+coverage.xml
+*.crt
+*.key
+lib/*
+*.pyc
+exec.d
+build/charm-helpers
=== added file 'Makefile'
--- Makefile 1970-01-01 00:00:00 +0000
+++ Makefile 2013-10-29 21:07:21 +0000
@@ -0,0 +1,39 @@
+PWD := $(shell pwd)
+SOURCEDEPS_DIR ?= $(shell dirname $(PWD))/.sourcecode
+HOOKS_DIR := $(PWD)/hooks
+TEST_PREFIX := PYTHONPATH=$(HOOKS_DIR)
+TEST_DIR := $(PWD)/hooks/tests
+CHARM_DIR := $(PWD)
+PYTHON := /usr/bin/env python
+
+
+build: test lint proof
+
+revision:
+	@test -f revision || echo 0 > revision
+
+proof: revision
+	@echo Proofing charm...
+	@(charm proof $(PWD) || [ $$? -eq 100 ]) && echo OK
+	@test `cat revision` = 0 && rm revision
+
+test:
+	@echo Starting tests...
+	@CHARM_DIR=$(CHARM_DIR) $(TEST_PREFIX) nosetests -s $(TEST_DIR)
+
+lint:
+	@echo Checking for Python syntax...
+	@flake8 $(HOOKS_DIR) --ignore=E123 --exclude=$(HOOKS_DIR)/charmhelpers && echo OK
+
+sourcedeps: $(PWD)/config-manager.txt
+	@echo Updating source dependencies...
+	@$(PYTHON) cm.py -c $(PWD)/config-manager.txt \
+		-p $(SOURCEDEPS_DIR) \
+		-t $(PWD)
+	@$(PYTHON) build/charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
+		-c charm-helpers.yaml \
+		-b build/charm-helpers \
+		-d hooks/charmhelpers
+	@echo Do not forget to commit the updated files if any.
+
+.PHONY: revision proof test lint sourcedeps charm-payload
=== added directory 'build'
=== added file 'charm-helpers.yaml'
--- charm-helpers.yaml 1970-01-01 00:00:00 +0000
+++ charm-helpers.yaml 2013-10-29 21:07:21 +0000
@@ -0,0 +1,4 @@
+include:
+    - core
+    - fetch
+    - contrib.charmsupport
\ No newline at end of file
=== added file 'cm.py'
--- cm.py 1970-01-01 00:00:00 +0000
+++ cm.py 2013-10-29 21:07:21 +0000
@@ -0,0 +1,193 @@
+# Copyright 2010-2013 Canonical Ltd. All rights reserved.
+import os
+import re
+import sys
+import errno
+import hashlib
+import subprocess
+import optparse
+
+from os import curdir
+from bzrlib.branch import Branch
+from bzrlib.plugin import load_plugins
+load_plugins()
+from bzrlib.plugins.launchpad import account as lp_account
+
+if 'GlobalConfig' in dir(lp_account):
+    from bzrlib.config import LocationConfig as LocationConfiguration
+    _ = LocationConfiguration
+else:
+    from bzrlib.config import LocationStack as LocationConfiguration
+    _ = LocationConfiguration
+
+
+def get_branch_config(config_file):
+    """
+    Retrieves the sourcedeps configuration for an source dir.
+    Returns a dict of (branch, revspec) tuples, keyed by branch name.
+    """
+    branches = {}
+    with open(config_file, 'r') as stream:
+        for line in stream:
+            line = line.split('#')[0].strip()
+            bzr_match = re.match(r'(\S+)\s+'
+                                 'lp:([^;]+)'
+                                 '(?:;revno=(\d+))?', line)
+            if bzr_match:
+                name, branch, revno = bzr_match.group(1, 2, 3)
+                if revno is None:
+                    revspec = -1
+                else:
+                    revspec = revno
+                branches[name] = (branch, revspec)
+                continue
+            dir_match = re.match(r'(\S+)\s+'
+                                 '\(directory\)', line)
+            if dir_match:
+                name = dir_match.group(1)
+                branches[name] = None
+    return branches
+
+
+def main(config_file, parent_dir, target_dir, verbose):
+    """Do the deed."""
+
+    try:
+        os.makedirs(parent_dir)
+    except OSError, e:
+        if e.errno != errno.EEXIST:
+            raise
+
+    branches = sorted(get_branch_config(config_file).items())
+    for branch_name, spec in branches:
+        if spec is None:
+            # It's a directory, just create it and move on.
+            destination_path = os.path.join(target_dir, branch_name)
+            if not os.path.isdir(destination_path):
+                os.makedirs(destination_path)
+            continue
+
+        (quoted_branch_spec, revspec) = spec
+        revno = int(revspec)
+
+        # qualify mirror branch name with hash of remote repo path to deal
+        # with changes to the remote branch URL over time
+        branch_spec_digest = hashlib.sha1(quoted_branch_spec).hexdigest()
+        branch_directory = branch_spec_digest
+
+        source_path = os.path.join(parent_dir, branch_directory)
+        destination_path = os.path.join(target_dir, branch_name)
+
+        # Remove leftover symlinks/stray files.
+        try:
+            os.remove(destination_path)
+        except OSError, e:
+            if e.errno != errno.EISDIR and e.errno != errno.ENOENT:
+                raise
+
+        lp_url = "lp:" + quoted_branch_spec
+
+        # Create the local mirror branch if it doesn't already exist
+        if verbose:
+            sys.stderr.write('%30s: ' % (branch_name,))
+            sys.stderr.flush()
+
+        fresh = False
+        if not os.path.exists(source_path):
+            subprocess.check_call(['bzr', 'branch', '-q', '--no-tree',
+                                   '--', lp_url, source_path])
+            fresh = True
+
+        if not fresh:
+            source_branch = Branch.open(source_path)
+            if revno == -1:
+                orig_branch = Branch.open(lp_url)
+                fresh = source_branch.revno() == orig_branch.revno()
+            else:
+                fresh = source_branch.revno() == revno
+
+        # Freshen the source branch if required.
+        if not fresh:
+            subprocess.check_call(['bzr', 'pull', '-q', '--overwrite', '-r',
+                                   str(revno), '-d', source_path,
+                                   '--', lp_url])
+
+        if os.path.exists(destination_path):
+            # Overwrite the destination with the appropriate revision.
+            subprocess.check_call(['bzr', 'clean-tree', '--force', '-q',
+                                   '--ignored', '-d', destination_path])
+            subprocess.check_call(['bzr', 'pull', '-q', '--overwrite',
+                                   '-r', str(revno),
+                                   '-d', destination_path, '--', source_path])
+        else:
+            # Create a new branch.
+            subprocess.check_call(['bzr', 'branch', '-q', '--hardlink',
+                                   '-r', str(revno),
+                                   '--', source_path, destination_path])
+
+        # Check the state of the destination branch.
+        destination_branch = Branch.open(destination_path)
+        destination_revno = destination_branch.revno()
+
+        if verbose:
+            sys.stderr.write('checked out %4s of %s\n' %
+                             ("r" + str(destination_revno), lp_url))
+            sys.stderr.flush()
+
+        if revno != -1 and destination_revno != revno:
+            raise RuntimeError("Expected revno %d but got revno %d" %
+                               (revno, destination_revno))
+
+if __name__ == '__main__':
+    parser = optparse.OptionParser(
+        usage="%prog [options]",
+        description=(
+            "Add a lightweight checkout in <target> for each "
+            "corresponding file in <parent>."),
+        add_help_option=False)
+    parser.add_option(
+        '-p', '--parent', dest='parent',
+        default=None,
+        help=("The directory of the parent tree."),
+        metavar="DIR")
+    parser.add_option(
+        '-t', '--target', dest='target', default=curdir,
+        help=("The directory of the target tree."),
+        metavar="DIR")
+    parser.add_option(
+        '-c', '--config', dest='config', default=None,
+        help=("The config file to be used for config-manager."),
+        metavar="DIR")
+    parser.add_option(
+        '-q', '--quiet', dest='verbose', action='store_false',
+        help="Be less verbose.")
+    parser.add_option(
+        '-v', '--verbose', dest='verbose', action='store_true',
+        help="Be more verbose.")
+    parser.add_option(
+        '-h', '--help', action='help',
+        help="Show this help message and exit.")
+    parser.set_defaults(verbose=True)
+
+    options, args = parser.parse_args()
+
+    if options.parent is None:
+        options.parent = os.environ.get(
+            "SOURCEDEPS_DIR",
+            os.path.join(curdir, ".sourcecode"))
+
+    if options.target is None:
+        parser.error(
+            "Target directory not specified.")
+
+    if options.config is None:
+        config = [arg for arg in args
+                  if arg != "update"]
+        if not config or len(config) > 1:
+            parser.error("Config not specified")
+        options.config = config[0]
+
+    sys.exit(main(config_file=options.config,
+                  parent_dir=options.parent,
+                  target_dir=options.target,
+                  verbose=options.verbose))
=== added file 'config-manager.txt'
--- config-manager.txt 1970-01-01 00:00:00 +0000
+++ config-manager.txt 2013-10-29 21:07:21 +0000
@@ -0,0 +1,6 @@
+# After making changes to this file, to ensure that your sourcedeps are
+# up-to-date do:
+#
+# make sourcedeps
+
+./build/charm-helpers lp:charm-helpers;revno=70
=== modified file 'config.yaml'
--- config.yaml 2013-01-11 16:49:35 +0000
+++ config.yaml 2013-10-29 21:07:21 +0000
@@ -3,10 +3,22 @@
     type: int
     default: 3128
     description: Squid listening port.
+  port_options:
+    type: string
+    default: accel vhost
+    description: Squid listening port options
   log_format:
     type: string
     default: '%>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh'
     description: Format of the squid log.
+  via:
+    type: string
+    default: 'on'
+    description: Add 'Via' header to outgoing requests.
+  x_balancer_name_allowed:
+    type: boolean
+    default: false
+    description: Route based on X-Balancer-Name header set by Apache charm.
   cache_mem_mb:
     type: int
     default: 256
@@ -14,7 +26,7 @@
   cache_size_mb:
     type: int
     default: 512
-    description: Maximum size of the on-disk object cache (MB).
+    description: Maximum size of the on-disk object cache (MB). Set to zero to disable caching.
   cache_dir:
     type: string
     default: '/var/spool/squid3'
@@ -51,20 +63,45 @@
         juju-squid-0
       If you're running multiple environments with the same services in them
       this allows you to differentiate between them.
-  nagios_check_url:
+  nagios_check_http_params:
     default: ""
     type: string
     description: >
-      The URL to check squid has access to, most likely inside your web server farm
+      The parameters to pass to the nrpe plugin check_http.
   nagios_service_type:
     default: "generic"
     type: string
     description: >
       What service this component forms part of, e.g. supermassive-squid-cluster. Used by nrpe.
+  package_status:
+    default: "install"
+    type: "string"
+    description: >
+      The status of service-affecting packages will be set to this value in the dpkg database.
+      Useful valid values are "install" and "hold".
   refresh_patterns:
     type: string
     default: ''
     description: >
-      JSON-formatted list of refresh patterns. For example:
-      '{"http://www.ubuntu.com": {"min": 0, "percent": 20, "max": 60}, "http://www.canonical.com": {"min": 0, "percent": 20, "max": 120}}'
+      JSON- or YAML-formatted list of refresh patterns. For example:
+      '{"http://www.ubuntu.com": {"min": 0, "percent": 20, "max": 60},
+      "http://www.canonical.com": {"min": 0, "percent": 20, "max": 120}}'
+  services:
+    default: ''
+    type: string
+    description: |
+      Services definition(s). Although the variable type is a string, this is
+      interpreted by the charm as yaml. To use multiple services within the
+      same instance, specify all of the variables (service_name,
+      service_host, service_port) with a "-" before the first variable,
+      service_name, as below.
 
+        - service_name: example_proxy
+          service_domain: example.com
+          servers:
+          - [foo.internal, 80]
+          - [bar.internal, 80]
+  enable_forward_proxy:
+    default: false
+    type: boolean
+    description: Enables forward proxying
 
=== added file 'files/check_squidpeers'
--- files/check_squidpeers 1970-01-01 00:00:00 +0000
+++ files/check_squidpeers 2013-10-29 21:07:21 +0000
@@ -0,0 +1,75 @@
+#!/usr/bin/python
+
+from operator import itemgetter
+import re
+import subprocess
+import sys
+
+
+parent_re = re.compile(r'^Parent\s+:\s*(?P<name>\S+)')
+status_re = re.compile(r'^Status\s+:\s*(?P<status>\S+)')
+
+STATUS_UP = 'Up'
+
+
+def parse(output):
+    state = 'header'
+    peers = []
+    for line in output.splitlines():
+        if state == 'header':
+            if not line.strip():
+                state = 'header gap'
+        elif state == 'header gap':
+            if not line.strip():
+                state = 'peer'
+                peers.append(dict())
+            elif line.strip() == "There are no neighbors installed.":
+                return peers
+            else:
+                raise AssertionError('Expecting blank line after header?: {}'.format(line))
+        elif state == 'peer':
+            if not line.strip():
+                peers.append(dict())
+            else:
+                match = parent_re.match(line)
+                if match:
+                    peers[-1]['name'] = match.group('name')
+                match = status_re.match(line)
+                if match:
+                    peers[-1]['up'] = match.group('status') == STATUS_UP
+                    peers[-1]['status'] = match.group('status')
+    return peers
+
+
+def get_status(peers):
+    ok = all(map(itemgetter('up'), peers))
+    if not peers:
+        retcode = 1
+        message = 'Squid has no configured peers.'
+    elif ok:
+        retcode = 0
+        message = 'All peers are UP according to squid.'
+    else:
+        retcode = 2
+        peer_info = ["{}: {}".format(p['name'], p['status']) for p in peers
+                     if not p['up']]
+        message = 'The following peers are not UP according to squid: {}'.format(
+            ", ".join(peer_info))
+    return retcode, message
+
+
+def main():
+    proc = subprocess.Popen(['squidclient', 'mgr:server_list'],
+                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+    stdout, stderr = proc.communicate()
+    if proc.returncode != 0:
+        print("Error running squidclient: %s" % stderr)
+        return 2
+    peers = parse(stdout)
+    retcode, message = get_status(peers)
+    print(message)
+    return retcode
+
+
+if __name__ == '__main__':
+    sys.exit(main())
=== added directory 'hooks/charmhelpers'
=== added file 'hooks/charmhelpers/__init__.py'
=== added directory 'hooks/charmhelpers/contrib'
=== added file 'hooks/charmhelpers/contrib/__init__.py'
=== added directory 'hooks/charmhelpers/contrib/charmsupport'
=== added file 'hooks/charmhelpers/contrib/charmsupport/__init__.py'
=== added file 'hooks/charmhelpers/contrib/charmsupport/nrpe.py'
--- hooks/charmhelpers/contrib/charmsupport/nrpe.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/charmsupport/nrpe.py 2013-10-29 21:07:21 +0000
@@ -0,0 +1,218 @@
+"""Compatibility with the nrpe-external-master charm"""
+# Copyright 2012 Canonical Ltd.
+#
+# Authors:
+#  Matthew Wedgwood <matthew.wedgwood@canonical.com>
+
+import subprocess
+import pwd
+import grp
+import os
+import re
+import shlex
+import yaml
+
+from charmhelpers.core.hookenv import (
+    config,
+    local_unit,
+    log,
+    relation_ids,
+    relation_set,
+)
+
+from charmhelpers.core.host import service
+
+# This module adds compatibility with the nrpe-external-master and plain nrpe
+# subordinate charms. To use it in your charm:
+#
+# 1. Update metadata.yaml
+#
+#   provides:
+#     (...)
+#     nrpe-external-master:
+#       interface: nrpe-external-master
+#       scope: container
+#
+#   and/or
+#
+#   provides:
+#     (...)
+#     local-monitors:
+#       interface: local-monitors
+#       scope: container
+
+#
+# 2. Add the following to config.yaml
+#
+#    nagios_context:
+#      default: "juju"
+#      type: string
+#      description: |
+#        Used by the nrpe subordinate charms.
+#        A string that will be prepended to instance name to set the host name
+#        in nagios. So for instance the hostname would be something like:
+#            juju-myservice-0
+#        If you're running multiple environments with the same services in them
+#        this allows you to differentiate between them.
+#
+# 3. Add custom checks (Nagios plugins) to files/nrpe-external-master
+#
+# 4. Update your hooks.py with something like this:
+#
+#    from charmsupport.nrpe import NRPE
+#    (...)
+#    def update_nrpe_config():
+#        nrpe_compat = NRPE()
+#        nrpe_compat.add_check(
+#            shortname = "myservice",
+#            description = "Check MyService",
+#            check_cmd = "check_http -w 2 -c 10 http://localhost"
+#            )
+#        nrpe_compat.add_check(
+#            "myservice_other",
+#            "Check for widget failures",
+#            check_cmd = "/srv/myapp/scripts/widget_check"
+#            )
+#        nrpe_compat.write()
+#
+#    def config_changed():
+#        (...)
+#        update_nrpe_config()
+#
+#    def nrpe_external_master_relation_changed():
+#        update_nrpe_config()
+#
+#    def local_monitors_relation_changed():
+#        update_nrpe_config()
+#
+# 5. ln -s hooks.py nrpe-external-master-relation-changed
+#    ln -s hooks.py local-monitors-relation-changed
+
+
+class CheckException(Exception):
+    pass
+
+
+class Check(object):
+    shortname_re = '[A-Za-z0-9-_]+$'
+    service_template = ("""
+#---------------------------------------------------
+# This file is Juju managed
+#---------------------------------------------------
+define service {{
+    use                             active-service
+    host_name                       {nagios_hostname}
+    service_description             {nagios_hostname}[{shortname}] """
+                        """{description}
+    check_command                   check_nrpe!{command}
+    servicegroups                   {nagios_servicegroup}
+}}
+""")
+
+    def __init__(self, shortname, description, check_cmd):
+        super(Check, self).__init__()
+        # XXX: could be better to calculate this from the service name
+        if not re.match(self.shortname_re, shortname):
+            raise CheckException("shortname must match {}".format(
+                Check.shortname_re))
+        self.shortname = shortname
+        self.command = "check_{}".format(shortname)
+        # Note: a set of invalid characters is defined by the
+        # Nagios server config
+        # The default is: illegal_object_name_chars=`~!$%^&*"|'<>?,()=
+        self.description = description
+        self.check_cmd = self._locate_cmd(check_cmd)
+
+    def _locate_cmd(self, check_cmd):
+        search_path = (
+            '/',
+            os.path.join(os.environ['CHARM_DIR'],
+                         'files/nrpe-external-master'),
+            '/usr/lib/nagios/plugins',
+        )
+        parts = shlex.split(check_cmd)
+        for path in search_path:
+            if os.path.exists(os.path.join(path, parts[0])):
+                command = os.path.join(path, parts[0])
+                if len(parts) > 1:
+                    command += " " + " ".join(parts[1:])
+                return command
+        log('Check command not found: {}'.format(parts[0]))
+        return ''
+
+    def write(self, nagios_context, hostname):
+        nrpe_check_file = '/etc/nagios/nrpe.d/{}.cfg'.format(
+            self.command)
+        with open(nrpe_check_file, 'w') as nrpe_check_config:
+            nrpe_check_config.write("# check {}\n".format(self.shortname))
+            nrpe_check_config.write("command[{}]={}\n".format(
+                self.command, self.check_cmd))
+
+        if not os.path.exists(NRPE.nagios_exportdir):
+            log('Not writing service config as {} is not accessible'.format(
+                NRPE.nagios_exportdir))
+        else:
+            self.write_service_config(nagios_context, hostname)
+
+    def write_service_config(self, nagios_context, hostname):
+        for f in os.listdir(NRPE.nagios_exportdir):
+            if re.search('.*{}.cfg'.format(self.command), f):
+                os.remove(os.path.join(NRPE.nagios_exportdir, f))
+
+        templ_vars = {
+            'nagios_hostname': hostname,
+            'nagios_servicegroup': nagios_context,
+            'description': self.description,
+            'shortname': self.shortname,
+            'command': self.command,
+        }
+        nrpe_service_text = Check.service_template.format(**templ_vars)
+        nrpe_service_file = '{}/service__{}_{}.cfg'.format(
+            NRPE.nagios_exportdir, hostname, self.command)
+        with open(nrpe_service_file, 'w') as nrpe_service_config:
+            nrpe_service_config.write(str(nrpe_service_text))
+
+    def run(self):
+        subprocess.call(self.check_cmd)
+
+
+class NRPE(object):
+    nagios_logdir = '/var/log/nagios'
+    nagios_exportdir = '/var/lib/nagios/export'
+    nrpe_confdir = '/etc/nagios/nrpe.d'
+
+    def __init__(self):
+        super(NRPE, self).__init__()
+        self.config = config()
+        self.nagios_context = self.config['nagios_context']
+        self.unit_name = local_unit().replace('/', '-')
+        self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
+        self.checks = []
+
+    def add_check(self, *args, **kwargs):
+        self.checks.append(Check(*args, **kwargs))
+
+    def write(self):
+        try:
+            nagios_uid = pwd.getpwnam('nagios').pw_uid
+            nagios_gid = grp.getgrnam('nagios').gr_gid
+        except:
+            log("Nagios user not set up, nrpe checks not updated")
+            return
+
+        if not os.path.exists(NRPE.nagios_logdir):
+            os.mkdir(NRPE.nagios_logdir)
+            os.chown(NRPE.nagios_logdir, nagios_uid, nagios_gid)
+
+        nrpe_monitors = {}
+        monitors = {"monitors": {"remote": {"nrpe": nrpe_monitors}}}
+        for nrpecheck in self.checks:
+            nrpecheck.write(self.nagios_context, self.hostname)
+            nrpe_monitors[nrpecheck.shortname] = {
+                "command": nrpecheck.command,
+            }
+
+        service('restart', 'nagios-nrpe-server')
+
+        for rid in relation_ids("local-monitors"):
+            relation_set(relation_id=rid, monitors=yaml.dump(monitors))
=== added file 'hooks/charmhelpers/contrib/charmsupport/volumes.py'
--- hooks/charmhelpers/contrib/charmsupport/volumes.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/charmsupport/volumes.py 2013-10-29 21:07:21 +0000
@@ -0,0 +1,156 @@
+'''
+Functions for managing volumes in juju units. One volume is supported per unit.
+Subordinates may have their own storage, provided it is on its own partition.
+
+Configuration stanzas:
+  volume-ephemeral:
+    type: boolean
+    default: true
+    description: >
+      If false, a volume is mounted as specified in "volume-map"
+      If true, ephemeral storage will be used, meaning that log data
+      will only exist as long as the machine. YOU HAVE BEEN WARNED.
+  volume-map:
+    type: string
+    default: {}
+    description: >
+      YAML map of units to device names, e.g:
+        "{ rsyslog/0: /dev/vdb, rsyslog/1: /dev/vdb }"
+      Service units will raise a configure-error if volume-ephemeral
+      is 'true' and no volume-map value is set. Use 'juju set' to set a
+      value and 'juju resolved' to complete configuration.
+
+Usage:
+    from charmsupport.volumes import configure_volume, VolumeConfigurationError
+    from charmsupport.hookenv import log, ERROR
+    def post_mount_hook():
+        stop_service('myservice')
+    def post_mount_hook():
+        start_service('myservice')
+
+    if __name__ == '__main__':
+        try:
+            configure_volume(before_change=pre_mount_hook,
+                             after_change=post_mount_hook)
+        except VolumeConfigurationError:
+            log('Storage could not be configured', ERROR)
+'''
+
+# XXX: Known limitations
+# - fstab is neither consulted nor updated
+
+import os
+from charmhelpers.core import hookenv
+from charmhelpers.core import host
+import yaml
+
+
+MOUNT_BASE = '/srv/juju/volumes'
+
+
+class VolumeConfigurationError(Exception):
+    '''Volume configuration data is missing or invalid'''
+    pass
+
+
+def get_config():
+    '''Gather and sanity-check volume configuration data'''
+    volume_config = {}
+    config = hookenv.config()
+
+    errors = False
+
+    if config.get('volume-ephemeral') in (True, 'True', 'true', 'Yes', 'yes'):
+        volume_config['ephemeral'] = True
+    else:
+        volume_config['ephemeral'] = False
+
+    try:
+        volume_map = yaml.safe_load(config.get('volume-map', '{}'))
+    except yaml.YAMLError as e:
+        hookenv.log("Error parsing YAML volume-map: {}".format(e),
+                    hookenv.ERROR)
+        errors = True
+    if volume_map is None:
+        # probably an empty string
+        volume_map = {}
+    elif not isinstance(volume_map, dict):
+        hookenv.log("Volume-map should be a dictionary, not {}".format(
+            type(volume_map)))
+        errors = True
+
+    volume_config['device'] = volume_map.get(os.environ['JUJU_UNIT_NAME'])
+    if volume_config['device'] and volume_config['ephemeral']:
+        # asked for ephemeral storage but also defined a volume ID
+        hookenv.log('A volume is defined for this unit, but ephemeral '
+                    'storage was requested', hookenv.ERROR)
+        errors = True
+    elif not volume_config['device'] and not volume_config['ephemeral']:
+        # asked for permanent storage but did not define volume ID
+        hookenv.log('Ephemeral storage was requested, but there is no volume '
+                    'defined for this unit.', hookenv.ERROR)
+        errors = True
+
+    unit_mount_name = hookenv.local_unit().replace('/', '-')
+    volume_config['mountpoint'] = os.path.join(MOUNT_BASE, unit_mount_name)
+
+    if errors:
+        return None
+    return volume_config
+
+
+def mount_volume(config):
+    if os.path.exists(config['mountpoint']):
+        if not os.path.isdir(config['mountpoint']):
+            hookenv.log('Not a directory: {}'.format(config['mountpoint']))
+            raise VolumeConfigurationError()
+    else:
+        host.mkdir(config['mountpoint'])
+    if os.path.ismount(config['mountpoint']):
+        unmount_volume(config)
+    if not host.mount(config['device'], config['mountpoint'], persist=True):
+        raise VolumeConfigurationError()
+
+
+def unmount_volume(config):
+    if os.path.ismount(config['mountpoint']):
+        if not host.umount(config['mountpoint'], persist=True):
+            raise VolumeConfigurationError()
+
+
+def managed_mounts():
+    '''List of all mounted managed volumes'''
+    return filter(lambda mount: mount[0].startswith(MOUNT_BASE), host.mounts())
+
+
+def configure_volume(before_change=lambda: None, after_change=lambda: None):
+    '''Set up storage (or don't) according to the charm's volume configuration.
+    Returns the mount point or "ephemeral". before_change and after_change
+    are optional functions to be called if the volume configuration changes.
+    '''
+
+    config = get_config()
+    if not config:
+        hookenv.log('Failed to read volume configuration', hookenv.CRITICAL)
+        raise VolumeConfigurationError()
+
+    if config['ephemeral']:
+        if os.path.ismount(config['mountpoint']):
+            before_change()
+            unmount_volume(config)
+            after_change()
+        return 'ephemeral'
+    else:
+        # persistent storage
+        if os.path.ismount(config['mountpoint']):
+            mounts = dict(managed_mounts())
+            if mounts.get(config['mountpoint']) != config['device']:
+                before_change()
+                unmount_volume(config)
+                mount_volume(config)
+                after_change()
+        else:
+            before_change()
+            mount_volume(config)
+            after_change()
+        return config['mountpoint']
=== added directory 'hooks/charmhelpers/core'
=== added file 'hooks/charmhelpers/core/__init__.py'
=== added file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/hookenv.py 2013-10-29 21:07:21 +0000
@@ -0,0 +1,340 @@
+"Interactions with the Juju environment"
+# Copyright 2013 Canonical Ltd.
+#
+# Authors:
+#  Charm Helpers Developers <juju@lists.ubuntu.com>
+
+import os
+import json
+import yaml
+import subprocess
+import UserDict
+
+CRITICAL = "CRITICAL"
+ERROR = "ERROR"
+WARNING = "WARNING"
+INFO = "INFO"
+DEBUG = "DEBUG"
+MARKER = object()
+
+cache = {}
+
+
+def cached(func):
+    ''' Cache return values for multiple executions of func + args
+
+    For example:
+
+        @cached
+        def unit_get(attribute):
+            pass
+
+        unit_get('test')
+
+    will cache the result of unit_get + 'test' for future calls.
+    '''
+    def wrapper(*args, **kwargs):
+        global cache
+        key = str((func, args, kwargs))
+        try:
+            return cache[key]
+        except KeyError:
+            res = func(*args, **kwargs)
+            cache[key] = res
+            return res
+    return wrapper
+
+
+def flush(key):
+    ''' Flushes any entries from function cache where the
+    key is found in the function+args '''
+    flush_list = []
+    for item in cache:
+        if key in item:
+            flush_list.append(item)
+    for item in flush_list:
+        del cache[item]
+
+
+def log(message, level=None):
+    "Write a message to the juju log"
+    command = ['juju-log']
+    if level:
+        command += ['-l', level]
+    command += [message]
+    subprocess.call(command)
+
+
+class Serializable(UserDict.IterableUserDict):
+    "Wrapper, an object that can be serialized to yaml or json"
+
+    def __init__(self, obj):
+        # wrap the object
+        UserDict.IterableUserDict.__init__(self)
+        self.data = obj
+
+    def __getattr__(self, attr):
+        # See if this object has attribute.
+        if attr in ("json", "yaml", "data"):
+            return self.__dict__[attr]
+        # Check for attribute in wrapped object.
+        got = getattr(self.data, attr, MARKER)
+        if got is not MARKER:
+            return got
+        # Proxy to the wrapped object via dict interface.
+        try:
+            return self.data[attr]
+        except KeyError:
+            raise AttributeError(attr)
+
+    def __getstate__(self):
+        # Pickle as a standard dictionary.
+        return self.data
+
+    def __setstate__(self, state):
+        # Unpickle into our wrapper.
+        self.data = state
+
+    def json(self):
+        "Serialize the object to json"
+        return json.dumps(self.data)
+
+    def yaml(self):
+        "Serialize the object to yaml"
+        return yaml.dump(self.data)
+
+
+def execution_environment():
+    """A convenient bundling of the current execution context"""
+    context = {}
+    context['conf'] = config()
+    if relation_id():
+        context['reltype'] = relation_type()
+        context['relid'] = relation_id()
+        context['rel'] = relation_get()
+    context['unit'] = local_unit()
+    context['rels'] = relations()
+    context['env'] = os.environ
+    return context
+
+
+def in_relation_hook():
+    "Determine whether we're running in a relation hook"
+    return 'JUJU_RELATION' in os.environ
+
+
+def relation_type():
+    "The scope for the current relation hook"
+    return os.environ.get('JUJU_RELATION', None)
129
130
131def relation_id():
132 "The relation ID for the current relation hook"
133 return os.environ.get('JUJU_RELATION_ID', None)
134
135
136def local_unit():
137 "Local unit ID"
138 return os.environ['JUJU_UNIT_NAME']
139
140
141def remote_unit():
142 "The remote unit for the current relation hook"
143 return os.environ['JUJU_REMOTE_UNIT']
144
145
146def service_name():
147 "The name service group this unit belongs to"
148 return local_unit().split('/')[0]
149
150
151@cached
152def config(scope=None):
153 "Juju charm configuration"
154 config_cmd_line = ['config-get']
155 if scope is not None:
156 config_cmd_line.append(scope)
157 config_cmd_line.append('--format=json')
158 try:
159 return json.loads(subprocess.check_output(config_cmd_line))
160 except ValueError:
161 return None
162
163
164@cached
165def relation_get(attribute=None, unit=None, rid=None):
166 _args = ['relation-get', '--format=json']
167 if rid:
168 _args.append('-r')
169 _args.append(rid)
170 _args.append(attribute or '-')
171 if unit:
172 _args.append(unit)
173 try:
174 return json.loads(subprocess.check_output(_args))
175 except ValueError:
176 return None
177
178
179def relation_set(relation_id=None, relation_settings={}, **kwargs):
180 relation_cmd_line = ['relation-set']
181 if relation_id is not None:
182 relation_cmd_line.extend(('-r', relation_id))
183 for k, v in (relation_settings.items() + kwargs.items()):
184 if v is None:
185 relation_cmd_line.append('{}='.format(k))
186 else:
187 relation_cmd_line.append('{}={}'.format(k, v))
188 subprocess.check_call(relation_cmd_line)
189 # Flush cache of any relation-gets for local unit
190 flush(local_unit())
191
192
193@cached
194def relation_ids(reltype=None):
195 "A list of relation_ids"
196 reltype = reltype or relation_type()
197 relid_cmd_line = ['relation-ids', '--format=json']
198 if reltype is not None:
199 relid_cmd_line.append(reltype)
200 return json.loads(subprocess.check_output(relid_cmd_line)) or []
202
203
204@cached
205def related_units(relid=None):
206 "A list of related units"
207 relid = relid or relation_id()
208 units_cmd_line = ['relation-list', '--format=json']
209 if relid is not None:
210 units_cmd_line.extend(('-r', relid))
211 return json.loads(subprocess.check_output(units_cmd_line)) or []
212
213
214@cached
215def relation_for_unit(unit=None, rid=None):
216 "Get the json represenation of a unit's relation"
217 unit = unit or remote_unit()
218 relation = relation_get(unit=unit, rid=rid)
219 for key in relation:
220 if key.endswith('-list'):
221 relation[key] = relation[key].split()
222 relation['__unit__'] = unit
223 return relation
224
225
226@cached
227def relations_for_id(relid=None):
228 "Get relations of a specific relation ID"
229 relation_data = []
230 relid = relid or relation_ids()
231 for unit in related_units(relid):
232 unit_data = relation_for_unit(unit, relid)
233 unit_data['__relid__'] = relid
234 relation_data.append(unit_data)
235 return relation_data
236
237
238@cached
239def relations_of_type(reltype=None):
240 "Get relations of a specific type"
241 relation_data = []
242 reltype = reltype or relation_type()
243 for relid in relation_ids(reltype):
244 for relation in relations_for_id(relid):
245 relation['__relid__'] = relid
246 relation_data.append(relation)
247 return relation_data
248
249
250@cached
251def relation_types():
252 "Get a list of relation types supported by this charm"
253 charmdir = os.environ.get('CHARM_DIR', '')
254 mdf = open(os.path.join(charmdir, 'metadata.yaml'))
255 md = yaml.safe_load(mdf)
256 rel_types = []
257 for key in ('provides', 'requires', 'peers'):
258 section = md.get(key)
259 if section:
260 rel_types.extend(section.keys())
261 mdf.close()
262 return rel_types
263
264
265@cached
266def relations():
267 rels = {}
268 for reltype in relation_types():
269 relids = {}
270 for relid in relation_ids(reltype):
271 units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
272 for unit in related_units(relid):
273 reldata = relation_get(unit=unit, rid=relid)
274 units[unit] = reldata
275 relids[relid] = units
276 rels[reltype] = relids
277 return rels
278
279
280def open_port(port, protocol="TCP"):
281 "Open a service network port"
282 _args = ['open-port']
283 _args.append('{}/{}'.format(port, protocol))
284 subprocess.check_call(_args)
285
286
287def close_port(port, protocol="TCP"):
288 "Close a service network port"
289 _args = ['close-port']
290 _args.append('{}/{}'.format(port, protocol))
291 subprocess.check_call(_args)
292
293
294@cached
295def unit_get(attribute):
296 _args = ['unit-get', '--format=json', attribute]
297 try:
298 return json.loads(subprocess.check_output(_args))
299 except ValueError:
300 return None
301
302
303def unit_private_ip():
304 return unit_get('private-address')
305
306
307class UnregisteredHookError(Exception):
308 pass
309
310
311class Hooks(object):
312 def __init__(self):
313 super(Hooks, self).__init__()
314 self._hooks = {}
315
316 def register(self, name, function):
317 self._hooks[name] = function
318
319 def execute(self, args):
320 hook_name = os.path.basename(args[0])
321 if hook_name in self._hooks:
322 self._hooks[hook_name]()
323 else:
324 raise UnregisteredHookError(hook_name)
325
326 def hook(self, *hook_names):
327 def wrapper(decorated):
328 for hook_name in hook_names:
329 self.register(hook_name, decorated)
330 else:
331 self.register(decorated.__name__, decorated)
332 if '_' in decorated.__name__:
333 self.register(
334 decorated.__name__.replace('_', '-'), decorated)
335 return decorated
336 return wrapper
337
338
339def charm_dir():
340 return os.environ.get('CHARM_DIR')
0341
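The `cached`/`flush` pair in hookenv.py above memoizes on the stringified `(func, args, kwargs)` triple, so `flush(key)` can invalidate every entry mentioning a given unit or attribute. A self-contained sketch of that interaction (names mirror the helpers above; the fake `unit_get` records calls instead of shelling out to `unit-get`):

```python
cache = {}


def cached(func):
    # Memoize on the stringified (func, args, kwargs) triple,
    # mirroring the hookenv helper above.
    def wrapper(*args, **kwargs):
        key = str((func, args, kwargs))
        try:
            return cache[key]
        except KeyError:
            res = func(*args, **kwargs)
            cache[key] = res
            return res
    return wrapper


def flush(key):
    # Drop every cache entry whose key string mentions `key`.
    for item in [k for k in cache if key in k]:
        del cache[item]


calls = []


@cached
def unit_get(attribute):
    calls.append(attribute)
    return 'value-for-' + attribute


unit_get('private-address')
unit_get('private-address')   # second call served from the cache
flush('private-address')      # substring match invalidates the entry
unit_get('private-address')   # re-executed after flush
```

This is why `relation_set` above calls `flush(local_unit())`: any cached `relation-get` for the local unit is stale once new settings are written.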
=== added file 'hooks/charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/host.py 2013-10-29 21:07:21 +0000
@@ -0,0 +1,239 @@
1"""Tools for working with the host system"""
2# Copyright 2012 Canonical Ltd.
3#
4# Authors:
5# Nick Moffitt <nick.moffitt@canonical.com>
6# Matthew Wedgwood <matthew.wedgwood@canonical.com>
7
8import os
9import pwd
10import grp
11import random
12import string
13import subprocess
14import hashlib
15
16from collections import OrderedDict
17
18from hookenv import log
19
20
21def service_start(service_name):
22 service('start', service_name)
23
24
25def service_stop(service_name):
26 service('stop', service_name)
27
28
29def service_restart(service_name):
30 service('restart', service_name)
31
32
33def service_reload(service_name, restart_on_failure=False):
34 if not service('reload', service_name) and restart_on_failure:
35 service('restart', service_name)
36
37
38def service(action, service_name):
39 cmd = ['service', service_name, action]
40 return subprocess.call(cmd) == 0
41
42
43def service_running(service):
44 try:
45 output = subprocess.check_output(['service', service, 'status'])
46 except subprocess.CalledProcessError:
47 return False
48 else:
49 if ("start/running" in output or "is running" in output):
50 return True
51 else:
52 return False
53
54
55def adduser(username, password=None, shell='/bin/bash', system_user=False):
56 """Add a user"""
57 try:
58 user_info = pwd.getpwnam(username)
59 log('user {0} already exists!'.format(username))
60 except KeyError:
61 log('creating user {0}'.format(username))
62 cmd = ['useradd']
63 if system_user or password is None:
64 cmd.append('--system')
65 else:
66 cmd.extend([
67 '--create-home',
68 '--shell', shell,
69 '--password', password,
70 ])
71 cmd.append(username)
72 subprocess.check_call(cmd)
73 user_info = pwd.getpwnam(username)
74 return user_info
75
76
77def add_user_to_group(username, group):
78 """Add a user to a group"""
79 cmd = [
80 'gpasswd', '-a',
81 username,
82 group
83 ]
84 log("Adding user {} to group {}".format(username, group))
85 subprocess.check_call(cmd)
86
87
88def rsync(from_path, to_path, flags='-r', options=None):
89 """Replicate the contents of a path"""
90 options = options or ['--delete', '--executability']
91 cmd = ['/usr/bin/rsync', flags]
92 cmd.extend(options)
93 cmd.append(from_path)
94 cmd.append(to_path)
95 log(" ".join(cmd))
96 return subprocess.check_output(cmd).strip()
97
98
99def symlink(source, destination):
100 """Create a symbolic link"""
101 log("Symlinking {} as {}".format(source, destination))
102 cmd = [
103 'ln',
104 '-sf',
105 source,
106 destination,
107 ]
108 subprocess.check_call(cmd)
109
110
111def mkdir(path, owner='root', group='root', perms=0555, force=False):
112 """Create a directory"""
113 log("Making dir {} {}:{} {:o}".format(path, owner, group,
114 perms))
115 uid = pwd.getpwnam(owner).pw_uid
116 gid = grp.getgrnam(group).gr_gid
117 realpath = os.path.abspath(path)
118 if os.path.exists(realpath):
119 if force and not os.path.isdir(realpath):
120 log("Removing non-directory file {} prior to mkdir()".format(path))
121 os.unlink(realpath)
122 else:
123 os.makedirs(realpath, perms)
124 os.chown(realpath, uid, gid)
125
126
127def write_file(path, content, owner='root', group='root', perms=0444):
128 """Create or overwrite a file with the contents of a string"""
129 log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
130 uid = pwd.getpwnam(owner).pw_uid
131 gid = grp.getgrnam(group).gr_gid
132 with open(path, 'w') as target:
133 os.fchown(target.fileno(), uid, gid)
134 os.fchmod(target.fileno(), perms)
135 target.write(content)
136
137
138def mount(device, mountpoint, options=None, persist=False):
139 '''Mount a filesystem'''
140 cmd_args = ['mount']
141 if options is not None:
142 cmd_args.extend(['-o', options])
143 cmd_args.extend([device, mountpoint])
144 try:
145 subprocess.check_output(cmd_args)
146 except subprocess.CalledProcessError, e:
147 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
148 return False
149 if persist:
150 # TODO: update fstab
151 pass
152 return True
153
154
155def umount(mountpoint, persist=False):
156 '''Unmount a filesystem'''
157 cmd_args = ['umount', mountpoint]
158 try:
159 subprocess.check_output(cmd_args)
160 except subprocess.CalledProcessError, e:
161 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
162 return False
163 if persist:
164 # TODO: update fstab
165 pass
166 return True
167
168
169def mounts():
170 '''List of all mounted volumes as [[mountpoint,device],[...]]'''
171 with open('/proc/mounts') as f:
172 # [['/mount/point','/dev/path'],[...]]
173 system_mounts = [m[1::-1] for m in [l.strip().split()
174 for l in f.readlines()]]
175 return system_mounts
176
177
178def file_hash(path):
179 ''' Generate an md5 hash of the contents of 'path' or None if not found '''
180 if os.path.exists(path):
181 h = hashlib.md5()
182 with open(path, 'r') as source:
183 h.update(source.read()) # IGNORE:E1101 - it does have update
184 return h.hexdigest()
185 else:
186 return None
187
188
189def restart_on_change(restart_map):
190 ''' Restart services based on configuration files changing
191
192 This function is used as a decorator, for example
193
194 @restart_on_change({
195 '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
196 })
197 def ceph_client_changed():
198 ...
199
200 In this example, the cinder-api and cinder-volume services
201 would be restarted if /etc/ceph/ceph.conf is changed by the
202 ceph_client_changed function.
203 '''
204 def wrap(f):
205 def wrapped_f(*args):
206 checksums = {}
207 for path in restart_map:
208 checksums[path] = file_hash(path)
209 f(*args)
210 restarts = []
211 for path in restart_map:
212 if checksums[path] != file_hash(path):
213 restarts += restart_map[path]
214 for service_name in list(OrderedDict.fromkeys(restarts)):
215 service('restart', service_name)
216 return wrapped_f
217 return wrap
218
219
220def lsb_release():
221 '''Return /etc/lsb-release in a dict'''
222 d = {}
223 with open('/etc/lsb-release', 'r') as lsb:
224 for l in lsb:
225 k, v = l.split('=')
226 d[k.strip()] = v.strip()
227 return d
228
229
230def pwgen(length=None):
231 '''Generate a random password.'''
232 if length is None:
233 length = random.choice(range(35, 45))
234 alphanumeric_chars = [
235 l for l in (string.letters + string.digits)
236 if l not in 'l0QD1vAEIOUaeiou']
237 random_chars = [
238 random.choice(alphanumeric_chars) for _ in range(length)]
239 return(''.join(random_chars))
0240
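The `restart_on_change` decorator in host.py above snapshots file hashes before the hook runs and restarts the mapped services for any file that changed. A runnable sketch of the same pattern, with `service` replaced by a recorder so no real init system is touched (the `squid3` service name is just an example):

```python
import hashlib
import os
import tempfile
from collections import OrderedDict

restarted = []


def service(action, name):
    # Stand-in for host.service(): record instead of running `service`.
    restarted.append((action, name))


def file_hash(path):
    if os.path.exists(path):
        with open(path, 'rb') as source:
            return hashlib.md5(source.read()).hexdigest()
    return None


def restart_on_change(restart_map):
    # Same shape as the helper above: snapshot hashes, run the wrapped
    # hook, then restart each service whose config file changed.
    def wrap(f):
        def wrapped_f(*args):
            checksums = {path: file_hash(path) for path in restart_map}
            f(*args)
            restarts = []
            for path in restart_map:
                if checksums[path] != file_hash(path):
                    restarts += restart_map[path]
            for name in OrderedDict.fromkeys(restarts):
                service('restart', name)
        return wrapped_f
    return wrap


conf = tempfile.NamedTemporaryFile(delete=False)
conf.write(b'old contents')
conf.close()


@restart_on_change({conf.name: ['squid3']})
def config_changed():
    with open(conf.name, 'w') as f:
        f.write('new contents')


config_changed()
os.unlink(conf.name)
```

`OrderedDict.fromkeys` deduplicates the restart list while preserving order, so a service mapped from several files restarts once.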
=== added directory 'hooks/charmhelpers/fetch'
=== added file 'hooks/charmhelpers/fetch/__init__.py'
--- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/__init__.py 2013-10-29 21:07:21 +0000
@@ -0,0 +1,209 @@
1import importlib
2from yaml import safe_load
3from charmhelpers.core.host import (
4 lsb_release
5)
6from urlparse import (
7 urlparse,
8 urlunparse,
9)
10import subprocess
11from charmhelpers.core.hookenv import (
12 config,
13 log,
14)
15import apt_pkg
16
17CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
18deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
19"""
20PROPOSED_POCKET = """# Proposed
21deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
22"""
23
24
25def filter_installed_packages(packages):
26 """Returns a list of packages that require installation"""
27 apt_pkg.init()
28 cache = apt_pkg.Cache()
29 _pkgs = []
30 for package in packages:
31 try:
32 p = cache[package]
33 p.current_ver or _pkgs.append(package)
34 except KeyError:
35 log('Package {} has no installation candidate.'.format(package),
36 level='WARNING')
37 _pkgs.append(package)
38 return _pkgs
39
40
41def apt_install(packages, options=None, fatal=False):
42 """Install one or more packages"""
43 options = options or []
44 cmd = ['apt-get', '-y']
45 cmd.extend(options)
46 cmd.append('install')
47 if isinstance(packages, basestring):
48 cmd.append(packages)
49 else:
50 cmd.extend(packages)
51 log("Installing {} with options: {}".format(packages,
52 options))
53 if fatal:
54 subprocess.check_call(cmd)
55 else:
56 subprocess.call(cmd)
57
58
59def apt_update(fatal=False):
60 """Update local apt cache"""
61 cmd = ['apt-get', 'update']
62 if fatal:
63 subprocess.check_call(cmd)
64 else:
65 subprocess.call(cmd)
66
67
68def apt_purge(packages, fatal=False):
69 """Purge one or more packages"""
70 cmd = ['apt-get', '-y', 'purge']
71 if isinstance(packages, basestring):
72 cmd.append(packages)
73 else:
74 cmd.extend(packages)
75 log("Purging {}".format(packages))
76 if fatal:
77 subprocess.check_call(cmd)
78 else:
79 subprocess.call(cmd)
80
81
82def add_source(source, key=None):
83 if ((source.startswith('ppa:') or
84 source.startswith('http:'))):
85 subprocess.check_call(['add-apt-repository', '--yes', source])
86 elif source.startswith('cloud:'):
87 apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
88 fatal=True)
89 pocket = source.split(':')[-1]
90 with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
91 apt.write(CLOUD_ARCHIVE.format(pocket))
92 elif source == 'proposed':
93 release = lsb_release()['DISTRIB_CODENAME']
94 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
95 apt.write(PROPOSED_POCKET.format(release))
96 if key:
97 subprocess.check_call(['apt-key', 'import', key])
98
99
100class SourceConfigError(Exception):
101 pass
102
103
104def configure_sources(update=False,
105 sources_var='install_sources',
106 keys_var='install_keys'):
107 """
108 Configure multiple sources from charm configuration
109
110 Example config:
111 install_sources:
112 - "ppa:foo"
113 - "http://example.com/repo precise main"
114 install_keys:
115 - null
116 - "a1b2c3d4"
117
118 Note that 'null' (a.k.a. None) should not be quoted.
119 """
120 sources = safe_load(config(sources_var))
121 keys = safe_load(config(keys_var))
122 if isinstance(sources, basestring) and isinstance(keys, basestring):
123 add_source(sources, keys)
124 else:
125 if not len(sources) == len(keys):
126 msg = 'Install sources and keys lists are different lengths'
127 raise SourceConfigError(msg)
128 for src_num in range(len(sources)):
129 add_source(sources[src_num], keys[src_num])
130 if update:
131 apt_update(fatal=True)
132
133# The order of this list is very important. Handlers should be listed in from
134# least- to most-specific URL matching.
135FETCH_HANDLERS = (
136 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
137 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
138)
139
140
141class UnhandledSource(Exception):
142 pass
143
144
145def install_remote(source):
146 """
147 Install a file tree from a remote source
148
149 The specified source should be a url of the form:
150 scheme://[host]/path[#[option=value][&...]]
151
152 Schemes supported are based on this module's submodules
153 Options supported are submodule-specific"""
154 # We ONLY check for True here because can_handle may return a string
155 # explaining why it can't handle a given source.
156 handlers = [h for h in plugins() if h.can_handle(source) is True]
157 installed_to = None
158 for handler in handlers:
159 try:
160 installed_to = handler.install(source)
161 except UnhandledSource:
162 pass
163 if not installed_to:
164 raise UnhandledSource("No handler found for source {}".format(source))
165 return installed_to
166
167
168def install_from_config(config_var_name):
169 charm_config = config()
170 source = charm_config[config_var_name]
171 return install_remote(source)
172
173
174class BaseFetchHandler(object):
175 """Base class for FetchHandler implementations in fetch plugins"""
176 def can_handle(self, source):
177 """Returns True if the source can be handled. Otherwise returns
178 a string explaining why it cannot"""
179 return "Wrong source type"
180
181 def install(self, source):
182 """Try to download and unpack the source. Return the path to the
183 unpacked files or raise UnhandledSource."""
184 raise UnhandledSource("Wrong source type {}".format(source))
185
186 def parse_url(self, url):
187 return urlparse(url)
188
189 def base_url(self, url):
190 """Return url without querystring or fragment"""
191 parts = list(self.parse_url(url))
192 parts[4:] = ['' for i in parts[4:]]
193 return urlunparse(parts)
194
195
196def plugins(fetch_handlers=None):
197 if not fetch_handlers:
198 fetch_handlers = FETCH_HANDLERS
199 plugin_list = []
200 for handler_name in fetch_handlers:
201 package, classname = handler_name.rsplit('.', 1)
202 try:
203 handler_class = getattr(importlib.import_module(package), classname)
204 plugin_list.append(handler_class())
205 except (ImportError, AttributeError):
206 # Skip missing plugins so that they can be omitted from
207 # installation if desired
208 log("FetchHandler {} not found, skipping plugin".format(handler_name))
209 return plugin_list
0210
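`install_remote` above only tries handlers whose `can_handle` returns exactly `True`, because the protocol lets a handler return a string explaining the refusal. A minimal sketch of that dispatch contract (the `HttpHandler` class and its return path are hypothetical, for illustration only):

```python
class UnhandledSource(Exception):
    pass


class BaseFetchHandler(object):
    def can_handle(self, source):
        # Default: a string explaining why the source is rejected.
        return "Wrong source type"


class HttpHandler(BaseFetchHandler):
    # Hypothetical handler used only for this sketch.
    def can_handle(self, source):
        if source.startswith(('http:', 'https:')):
            return True
        return "Not an HTTP URL"

    def install(self, source):
        return '/srv/fetched/example'


def install_remote(source, handlers):
    # As in fetch/__init__.py: check `is True`, not truthiness, since
    # a non-empty refusal string would otherwise count as a match.
    for handler in handlers:
        if handler.can_handle(source) is True:
            try:
                return handler.install(source)
            except UnhandledSource:
                pass
    raise UnhandledSource("No handler found for source {}".format(source))


installed_to = install_remote('http://example.com/x.tgz', [HttpHandler()])
unhandled = False
try:
    install_remote('bzr+ssh://example.com/x', [HttpHandler()])
except UnhandledSource:
    unhandled = True
```

The `is True` check is the detail the in-tree comment calls out; plain `if handler.can_handle(source):` would wrongly select every handler.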
=== added file 'hooks/charmhelpers/fetch/archiveurl.py'
--- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/archiveurl.py 2013-10-29 21:07:21 +0000
@@ -0,0 +1,48 @@
1import os
2import urllib2
3from charmhelpers.fetch import (
4 BaseFetchHandler,
5 UnhandledSource
6)
7from charmhelpers.payload.archive import (
8 get_archive_handler,
9 extract,
10)
11from charmhelpers.core.host import mkdir
12
13
14class ArchiveUrlFetchHandler(BaseFetchHandler):
15 """Handler for archives via generic URLs"""
16 def can_handle(self, source):
17 url_parts = self.parse_url(source)
18 if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
19 return "Wrong source type"
20 if get_archive_handler(self.base_url(source)):
21 return True
22 return False
23
24 def download(self, source, dest):
25 # propagate all exceptions
26 # URLError, OSError, etc
27 response = urllib2.urlopen(source)
28 try:
29 with open(dest, 'w') as dest_file:
30 dest_file.write(response.read())
31 except Exception as e:
32 if os.path.isfile(dest):
33 os.unlink(dest)
34 raise e
35
36 def install(self, source):
37 url_parts = self.parse_url(source)
38 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
39 if not os.path.exists(dest_dir):
40 mkdir(dest_dir, perms=0755)
41 dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
42 try:
43 self.download(source, dld_file)
44 except urllib2.URLError as e:
45 raise UnhandledSource(e.reason)
46 except OSError as e:
47 raise UnhandledSource(e.strerror)
48 return extract(dld_file)
049
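`ArchiveUrlFetchHandler.download` above removes any partially written file before re-raising, so a failed fetch never leaves a truncated archive behind. The cleanup contract in isolation (the `fetch` callable replaces `urllib2.urlopen` so the sketch runs offline):

```python
import os
import tempfile


def download(fetch, dest):
    # Same cleanup contract as ArchiveUrlFetchHandler.download above:
    # on any failure, unlink the partial file before re-raising.
    try:
        with open(dest, 'wb') as dest_file:
            dest_file.write(fetch())
    except Exception:
        if os.path.isfile(dest):
            os.unlink(dest)
        raise


dest = os.path.join(tempfile.mkdtemp(), 'payload.tgz')

# Successful fetch leaves the file in place.
download(lambda: b'archive-bytes', dest)
with open(dest, 'rb') as f:
    fetched = f.read()


def failing_fetch():
    raise IOError("network error")


# Failed fetch removes the partial file and propagates the error.
cleaned_up = False
try:
    download(failing_fetch, dest)
except IOError:
    cleaned_up = not os.path.exists(dest)
```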
=== added file 'hooks/charmhelpers/fetch/bzrurl.py'
--- hooks/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/fetch/bzrurl.py 2013-10-29 21:07:21 +0000
@@ -0,0 +1,44 @@
1import os
2from bzrlib.branch import Branch
3from charmhelpers.fetch import (
4 BaseFetchHandler,
5 UnhandledSource
6)
7from charmhelpers.core.host import mkdir
8
9
10class BzrUrlFetchHandler(BaseFetchHandler):
11 """Handler for bazaar branches via generic and lp URLs"""
12 def can_handle(self, source):
13 url_parts = self.parse_url(source)
14 if url_parts.scheme not in ('bzr+ssh', 'lp'):
15 return False
16 else:
17 return True
18
19 def branch(self, source, dest):
20 url_parts = self.parse_url(source)
21 # If we use lp:branchname scheme we need to load plugins
22 if not self.can_handle(source):
23 raise UnhandledSource("Cannot handle {}".format(source))
24 if url_parts.scheme == "lp":
25 from bzrlib.plugin import load_plugins
26 load_plugins()
27 try:
28 remote_branch = Branch.open(source)
29 remote_branch.bzrdir.sprout(dest).open_branch()
30 except Exception as e:
31 raise e
32
33 def install(self, source):
34 url_parts = self.parse_url(source)
35 branch_name = url_parts.path.strip("/").split("/")[-1]
36 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name)
37 if not os.path.exists(dest_dir):
38 mkdir(dest_dir, perms=0755)
39 try:
40 self.branch(source, dest_dir)
41 except OSError as e:
42 raise UnhandledSource(e.strerror)
43 return dest_dir
44
045
=== modified file 'hooks/hooks.py'
--- hooks/hooks.py 2013-01-11 17:00:40 +0000
+++ hooks/hooks.py 2013-10-29 21:07:21 +0000
@@ -1,17 +1,30 @@
 #!/usr/bin/env python
 
-import glob
-import grp
-import json
 import math
 import os
-import pwd
 import random
 import re
 import shutil
 import string
 import subprocess
 import sys
+import yaml
+import collections
+import socket
+
+from charmhelpers.core.hookenv import (
+    config as config_get,
+    log,
+    relation_set,
+    relation_get,
+    relation_ids as get_relation_ids,
+    relations_of_type,
+    related_units,
+    open_port,
+    close_port,
+)
+from charmhelpers.fetch import apt_install
+from charmhelpers.contrib.charmsupport.nrpe import NRPE
 
 ###############################################################################
 # Global variables
@@ -19,146 +32,112 @@
19default_squid3_config_dir = "/etc/squid3"32default_squid3_config_dir = "/etc/squid3"
20default_squid3_config = "%s/squid.conf" % default_squid3_config_dir33default_squid3_config = "%s/squid.conf" % default_squid3_config_dir
21default_squid3_config_cache_dir = "/var/run/squid3"34default_squid3_config_cache_dir = "/var/run/squid3"
22hook_name = os.path.basename(sys.argv[0])35default_nagios_plugin_dir = "/usr/lib/nagios/plugins"
36Server = collections.namedtuple("Server", "name address port options")
37service_affecting_packages = ['squid3']
2338
24###############################################################################39###############################################################################
25# Supporting functions40# Supporting functions
26###############################################################################41###############################################################################
2742
2843
29#------------------------------------------------------------------------------44def get_id(sitename, relation_id, unit_name):
30# config_get: Returns a dictionary containing all of the config information45 unit_name = unit_name.replace('/', '_').replace('.', '_')
31# Optional parameter: scope46 relation_id = relation_id.replace(':', '_').replace('.', '_')
32# scope: limits the scope of the returned configuration to the47 if sitename is None:
33# desired config item.48 return relation_id + '__' + unit_name
34#------------------------------------------------------------------------------49 return sitename.replace('.', '_') + '__' + relation_id + '__' + unit_name
35def config_get(scope=None):50
36 try:51
37 config_cmd_line = ['config-get']52def get_forward_sites():
38 if scope is not None:53 reldata = []
39 config_cmd_line.append(scope)54 for relid in get_relation_ids('cached-website'):
40 config_cmd_line.append('--format=json')55 units = related_units(relid)
41 config_data = json.loads(subprocess.check_output(config_cmd_line))56 for unit in units:
42 except Exception, e:57 data = relation_get(rid=relid, unit=unit)
43 subprocess.call(['juju-log', str(e)])58 if 'sitenames' in data:
44 config_data = None59 data['sitenames'] = data['sitenames'].split()
45 finally:60 data['relation-id'] = relid
46 return(config_data)61 data['name'] = unit.replace("/", "_")
4762 reldata.append(data)
4863 return reldata
49#------------------------------------------------------------------------------64
50# relation_json: Returns json-formatted relation data65
51# Optional parameters: scope, relation_id66def get_reverse_sites():
52# scope: limits the scope of the returned data to the67 all_sites = {}
53# desired item.68
54# unit_name: limits the data ( and optionally the scope )69 config_data = config_get()
55# to the specified unit70 config_services = yaml.safe_load(config_data.get("services", "")) or ()
56# relation_id: specify relation id for out of context usage.71 server_options_by_site = {}
57#------------------------------------------------------------------------------72
58def relation_json(scope=None, unit_name=None, relation_id=None):73 for service_item in config_services:
59 try:74 site = service_item["service_name"]
60 relation_cmd_line = ['relation-get', '--format=json']75 servers = all_sites.setdefault(site, [])
61 if relation_id is not None:76 options = " ".join(service_item.get("server_options", []))
62 relation_cmd_line.extend(('-r', relation_id))77 server_options_by_site[site] = options
63 if scope is not None:78 for host, port in service_item["servers"]:
64 relation_cmd_line.append(scope)79 servers.append(
65 else:80 Server(name=get_id(None, site, host),
66 relation_cmd_line.append('-')81 address=host, port=port,
67 relation_cmd_line.append(unit_name)82 options=options))
68 relation_data = subprocess.check_output(relation_cmd_line)83
69 except Exception, e:84 relations = relations_of_type("website")
70 subprocess.call(['juju-log', str(e)])85
71 relation_data = None86 for relation_data in relations:
72 finally:87 unit = relation_data["__unit__"]
73 return(relation_data)88 relation_id = relation_data["__relid__"]
7489 if not ("port" in relation_data or "all_services" in relation_data):
7590 log("No port in relation data for '%s', skipping." % unit)
76#------------------------------------------------------------------------------91 continue
77# relation_get: Returns a dictionary containing the relation information92
78# Optional parameters: scope, relation_id93 if not "private-address" in relation_data:
79# scope: limits the scope of the returned data to the94 log("No private-address in relation data "
80# desired item.95 "for '%s', skipping." % unit)
81# unit_name: limits the data ( and optionally the scope )96 continue
82# to the specified unit97
83# relation_id: specify relation id for out of context usage.98 for site in relation_data.get("sitenames", "").split():
84#------------------------------------------------------------------------------99 servers = all_sites.setdefault(site, [])
85def relation_get(scope=None, unit_name=None, relation_id=None):100 servers.append(
86 try:101 Server(name=get_id(site, relation_id, unit),
87 relation_data = json.loads(relation_json())102 address=relation_data["private-address"],
-    except Exception, e:
-        subprocess.call(['juju-log', str(e)])
-        relation_data = None
-    finally:
-        return(relation_data)
-
-
-#------------------------------------------------------------------------------
-# relation_ids:  Returns a list of relation ids
-#                optional parameters: relation_type
-#                relation_type: return relations only of this type
-#------------------------------------------------------------------------------
-def relation_ids(relation_types=['website']):
-    # accept strings or iterators
-    if isinstance(relation_types, basestring):
-        reltypes = [relation_types]
-    else:
-        reltypes = relation_types
-    relids = []
-    for reltype in reltypes:
-        relid_cmd_line = ['relation-ids', '--format=json', reltype]
-        relids.extend(json.loads(subprocess.check_output(relid_cmd_line)))
-    return relids
-
-
-#------------------------------------------------------------------------------
-# relation_get_all: Returns a dictionary containing the relation information
-#                   optional parameters: relation_type
-#                   relation_type: limits the scope of the returned data to the
-#                                  desired item.
-#------------------------------------------------------------------------------
-def relation_get_all():
-    reldata = {}
-    try:
-        relids = relation_ids()
-        for relid in relids:
-            units_cmd_line = ['relation-list', '--format=json', '-r', relid]
-            units = json.loads(subprocess.check_output(units_cmd_line))
-            for unit in units:
-                reldata[unit] = \
-                    json.loads(relation_json(
-                        relation_id=relid, unit_name=unit))
-                if 'sitenames' in reldata[unit]:
-                    reldata[unit]['sitenames'] = \
-                        reldata[unit]['sitenames'].split()
-                reldata[unit]['relation-id'] = relid
-                reldata[unit]['name'] = unit.replace("/", "_")
-    except Exception, e:
-        subprocess.call(['juju-log', str(e)])
-        reldata = []
-    finally:
-        return(reldata)
-
-
-#------------------------------------------------------------------------------
-# apt_get_install( packages ):  Installs a package
-#------------------------------------------------------------------------------
-def apt_get_install(packages=None):
-    if packages is None:
-        return(False)
-    cmd_line = ['apt-get', '-y', 'install', '-qq']
-    try:
-        cmd_line.extend(packages.split())
-    except AttributeError:
-        cmd_line.extend(packages)
-    return(subprocess.call(cmd_line))
-
-
-#------------------------------------------------------------------------------
-# enable_squid3:  Enable squid3 at boot time
-#------------------------------------------------------------------------------
-def enable_squid3():
-    # squid is enabled at boot time
-    return True
+                    port=relation_data["port"],
+                    options=server_options_by_site.get(site, '')))
+
+    services = yaml.safe_load(relation_data.get("all_services", "")) or ()
+    for service_item in services:
+        site = service_item["service_name"]
+        servers = all_sites.setdefault(site, [])
+        servers.append(
+            Server(name=get_id(site, relation_id, unit),
+                   address=relation_data["private-address"],
+                   port=service_item["service_port"],
+                   options=server_options_by_site.get(site, '')))
+
+    if not ("sitenames" in relation_data or
+            "all_services" in relation_data):
+        servers = all_sites.setdefault(None, [])
+        servers.append(
+            Server(name=get_id(None, relation_id, unit),
+                   address=relation_data["private-address"],
+                   port=relation_data["port"],
+                   options=server_options_by_site.get(None, '')))
+
+    if len(all_sites) == 0:
+        return
+
+    for site, servers in all_sites.iteritems():
+        servers.sort()
+
+    return all_sites
+
+
+def ensure_package_status(packages, status):
+    if status in ['install', 'hold']:
+        selections = ''.join(['{} {}\n'.format(package, status)
+                              for package in packages])
+        dpkg = subprocess.Popen(['dpkg', '--set-selections'],
+                                stdin=subprocess.PIPE)
+        dpkg.communicate(input=selections)
 
 
 #------------------------------------------------------------------------------
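The new `ensure_package_status()` helper pins or unpins packages by piping one `<package> <status>` line per package into `dpkg --set-selections`. A minimal, dependency-free sketch of the selections string it builds (the `build_selections` name is ours, for illustration only):

```python
def build_selections(packages, status):
    # Mirrors ensure_package_status(): one "<package> <status>" line
    # per package, newline-terminated, suitable for piping into
    # `dpkg --set-selections` on stdin.
    if status not in ('install', 'hold'):
        return ''
    return ''.join('{} {}\n'.format(package, status)
                   for package in packages)

selections = build_selections(['squid3', 'squidclient'], 'hold')
print(selections)  # squid3 hold / squidclient hold, one per line
```

Only `install` and `hold` are honoured, matching the charm's `package_status` config values; anything else produces no selections.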
@@ -169,9 +148,8 @@
 #------------------------------------------------------------------------------
 def load_squid3_config(squid3_config_file="/etc/squid3/squid.conf"):
     if os.path.isfile(squid3_config_file):
-        return(open(squid3_config_file).read())
-    else:
-        return(None)
+        return open(squid3_config_file).read()
+    return
 
 
 #------------------------------------------------------------------------------
@@ -183,8 +161,8 @@
 def get_service_ports(squid3_config_file="/etc/squid3/squid.conf"):
     squid3_config = load_squid3_config(squid3_config_file)
     if squid3_config is None:
-        return(None)
-    return(re.findall("http_port ([0-9]+)", squid3_config))
+        return
+    return re.findall("http_port ([0-9]+)", squid3_config)
 
 
 #------------------------------------------------------------------------------
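For context, `get_service_ports()` simply scrapes `http_port` directives back out of the rendered squid.conf. A self-contained sketch against an invented config fragment:

```python
import re

# Invented squid.conf fragment for illustration.
SAMPLE_CONFIG = """\
http_port 3128
http_port 3129
cache_mem 256 MB
"""

def get_service_ports(squid3_config):
    # Same pattern the charm uses; returns port numbers as strings.
    return re.findall("http_port ([0-9]+)", squid3_config)

print(get_service_ports(SAMPLE_CONFIG))  # ['3128', '3129']
```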
@@ -195,31 +173,9 @@
 def get_sitenames(squid3_config_file="/etc/squid3/squid.conf"):
     squid3_config = load_squid3_config(squid3_config_file)
     if squid3_config is None:
-        return(None)
-    sitenames = re.findall("cache_peer_domain \w+ ([^!].*)", squid3_config)
-    return(list(set(sitenames)))
-
-
-#------------------------------------------------------------------------------
-# open_port:  Convenience function to open a port in juju to
-#             expose a service
-#------------------------------------------------------------------------------
-def open_port(port=None, protocol="TCP"):
-    if port is None:
-        return(None)
-    return(subprocess.call(['/usr/bin/open-port', "%d/%s" %
-                            (int(port), protocol)]))
-
-
-#------------------------------------------------------------------------------
-# close_port:  Convenience function to close a port in juju to
-#              unexpose a service
-#------------------------------------------------------------------------------
-def close_port(port=None, protocol="TCP"):
-    if port is None:
-        return(None)
-    return(subprocess.call(['/usr/bin/close-port', "%d/%s" %
-                            (int(port), protocol)]))
+        return
+    sitenames = re.findall("acl [\w_-]+ dstdomain ([^!].*)", squid3_config)
+    return list(set(sitenames))
 
 
 #------------------------------------------------------------------------------
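Note the `get_sitenames()` regex change: it now reads sites from `acl <name> dstdomain <site>` lines instead of `cache_peer_domain` lines, since the new template selects peers via ACLs (and, optionally, the X-Balancer-Name header). A sketch against an invented config fragment showing what the new pattern extracts (the leading `[^!]` skips negated domains):

```python
import re

# Invented fragment of a rendered squid.conf for illustration.
SAMPLE_CONFIG = """\
acl s_app_example_com dstdomain app.example.com
acl s_api_example_com dstdomain api.example.com
acl not_cached dstdomain !internal.example.com
"""

sitenames = re.findall(r"acl [\w_-]+ dstdomain ([^!].*)", SAMPLE_CONFIG)
print(sorted(set(sitenames)))  # ['api.example.com', 'app.example.com']
```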
@@ -229,7 +185,7 @@
 #------------------------------------------------------------------------------
 def update_service_ports(old_service_ports=None, new_service_ports=None):
     if old_service_ports is None or new_service_ports is None:
-        return(None)
+        return
     for port in old_service_ports:
         if port not in new_service_ports:
             close_port(port)
@@ -248,7 +204,7 @@
                           if l not in 'Iil0oO1']
     random_chars = [random.choice(alphanumeric_chars)
                     for i in range(pwd_length)]
-    return(''.join(random_chars))
+    return ''.join(random_chars)
 
 
 #------------------------------------------------------------------------------
@@ -257,27 +213,53 @@
 def construct_squid3_config():
     from jinja2 import Environment, FileSystemLoader
     config_data = config_get()
-    relations = relation_get_all()
+    reverse_sites = get_reverse_sites()
+    only_direct = set()
+    if reverse_sites is not None:
+        for site, servers in reverse_sites.iteritems():
+            if not servers:
+                only_direct.add(site)
+
     if config_data['refresh_patterns']:
-        refresh_patterns = json.loads(config_data['refresh_patterns'])
+        refresh_patterns = yaml.safe_load(config_data['refresh_patterns'])
     else:
         refresh_patterns = {}
-    template_env = Environment(loader=FileSystemLoader(
-        os.path.join(os.environ['CHARM_DIR'], 'templates')))
+    # Use default refresh pattern if specified.
+    if '.' in refresh_patterns:
+        default_refresh_pattern = refresh_patterns.pop('.')
+    else:
+        default_refresh_pattern = {
+            'min': 30,
+            'percent': 20,
+            'max': 4320,
+            'options': [],
+        }
+
+    templates_dir = os.path.join(os.environ['CHARM_DIR'], 'templates')
+    template_env = Environment(loader=FileSystemLoader(templates_dir))
+
     config_data['cache_l1'] = int(math.ceil(math.sqrt(
-        int(config_data['cache_size_mb']) * 1024 / (16 *
-            int(config_data['target_objs_per_dir']) * int(
+        int(config_data['cache_size_mb']) * 1024 / (
+            16 * int(config_data['target_objs_per_dir']) * int(
                 config_data['avg_obj_size_kb'])))))
     config_data['cache_l2'] = config_data['cache_l1'] * 16
     templ_vars = {
         'config': config_data,
-        'relations': relations,
+        'sites': reverse_sites,
+        'forward_relations': get_forward_sites(),
+        'only_direct': only_direct,
         'refresh_patterns': refresh_patterns,
+        'default_refresh_pattern': default_refresh_pattern,
     }
     template = template_env.get_template('main_config.template').\
         render(templ_vars)
+    write_squid3_config('\n'.join(
+        (l.strip() for l in str(template).splitlines())))
+
+
+def write_squid3_config(contents):
     with open(default_squid3_config, 'w') as squid3_config:
-        squid3_config.write(str(template))
+        squid3_config.write(contents)
 
 
 #------------------------------------------------------------------------------
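Two details of the `construct_squid3_config()` change are worth seeing in isolation: `refresh_patterns` is now parsed as YAML (via PyYAML's `yaml.safe_load`) rather than JSON, with a `'.'` key overriding the built-in default pattern, and the `cache_l1`/`cache_l2` directory counts are derived from the cache sizing config. A dependency-free sketch with invented config values (a plain dict stands in for `yaml.safe_load`'s output):

```python
import math

# What yaml.safe_load(config_data['refresh_patterns']) might return;
# the values here are invented for illustration.
refresh_patterns = {
    '.': {'min': 60, 'percent': 50, 'max': 7200, 'options': []},
    'http://example.com': {'min': 0, 'percent': 20, 'max': 60,
                           'options': []},
}

# The '.' entry, if present, becomes the default refresh pattern.
if '.' in refresh_patterns:
    default_refresh_pattern = refresh_patterns.pop('.')
else:
    default_refresh_pattern = {'min': 30, 'percent': 20, 'max': 4320,
                               'options': []}

# L1 directory count chosen so L1 * 16 (= L2) directories hold the
# configured objects-per-directory target; same arithmetic as the hook.
cache_size_mb, target_objs_per_dir, avg_obj_size_kb = 512, 1024, 13
cache_l1 = int(math.ceil(math.sqrt(
    cache_size_mb * 1024 / (16 * target_objs_per_dir * avg_obj_size_kb))))
cache_l2 = cache_l1 * 16
print(cache_l1, cache_l2)  # 2 32
```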
@@ -286,147 +268,192 @@
 #------------------------------------------------------------------------------
 def service_squid3(action=None, squid3_config=default_squid3_config):
     if action is None or squid3_config is None:
-        return(None)
+        return
     elif action == "check":
         check_cmd = ['/usr/sbin/squid3', '-f', squid3_config, '-k', 'parse']
         retVal = subprocess.call(check_cmd)
         if retVal == 1:
-            return(False)
+            return False
         elif retVal == 0:
-            return(True)
+            return True
         else:
-            return(False)
+            return False
     elif action == 'status':
         status = subprocess.check_output(['status', 'squid3'])
         if re.search('running', status) is not None:
-            return(True)
+            return True
         else:
-            return(False)
+            return False
     elif action in ('start', 'stop', 'reload', 'restart'):
         retVal = subprocess.call([action, 'squid3'])
         if retVal == 0:
-            return(True)
+            return True
         else:
-            return(False)
+            return False
+
+
+def install_nrpe_scripts():
+    if not os.path.exists(default_nagios_plugin_dir):
+        os.makedirs(default_nagios_plugin_dir)
+    shutil.copy2('%s/files/check_squidpeers' % (
+        os.environ['CHARM_DIR']),
+        '{}/check_squidpeers'.format(default_nagios_plugin_dir))
 
 
 def update_nrpe_checks():
-    config_data = config_get()
-    try:
-        nagios_uid = pwd.getpwnam('nagios').pw_uid
-        nagios_gid = grp.getgrnam('nagios').gr_gid
-    except:
-        subprocess.call(['juju-log', "Nagios user not setup, exiting"])
-        return
-
-    unit_name = os.environ['JUJU_UNIT_NAME'].replace('/', '-')
-    nrpe_check_file = '/etc/nagios/nrpe.d/check_squidrp.cfg'
-    nagios_hostname = "%s-%s" % (config_data['nagios_context'], unit_name)
-    nagios_logdir = '/var/log/nagios'
-    nagios_exportdir = '/var/lib/nagios/export'
-    nrpe_service_file = \
-        '/var/lib/nagios/export/service__%s_check_squidrp.cfg' % \
-        (nagios_hostname)
-    if not os.path.exists(nagios_logdir):
-        os.mkdir(nagios_logdir)
-        os.chown(nagios_logdir, nagios_uid, nagios_gid)
-    if not os.path.exists(nagios_exportdir):
-        subprocess.call(['juju-log', 'Exiting as %s is not accessible' %
-                         (nagios_exportdir)])
-        return
-    for f in os.listdir(nagios_exportdir):
-        if re.search('.*check_squidrp.cfg', f):
-            os.remove(os.path.join(nagios_exportdir, f))
-    from jinja2 import Environment, FileSystemLoader
-    template_env = Environment(
-        loader=FileSystemLoader(os.path.join(os.environ['CHARM_DIR'], 'templates')))
-    templ_vars = {
-        'nagios_hostname': nagios_hostname,
-        'nagios_servicegroup': config_data['nagios_context'],
-    }
-    template = template_env.get_template('nrpe_service.template').\
-        render(templ_vars)
-    with open(nrpe_service_file, 'w') as nrpe_service_config:
-        nrpe_service_config.write(str(template))
-    with open(nrpe_check_file, 'w') as nrpe_check_config:
-        nrpe_check_config.write("# check squid\n")
-        nrpe_check_config.write(
-            "command[check_squidrp]=/usr/lib/nagios/plugins/check_http %s\n" %
-            (config_data['nagios_check_http_params']))
-    if os.path.isfile('/etc/init.d/nagios-nrpe-server'):
-        subprocess.call(['service', 'nagios-nrpe-server', 'reload'])
+    install_nrpe_scripts()
+    nrpe_compat = NRPE()
+    conf = nrpe_compat.config
+    nrpe_compat.add_check(
+        shortname='squidpeers',
+        description='Check Squid Peers',
+        check_cmd='check_squidpeers'
+    )
+    check_http_params = conf.get('nagios_check_http_params')
+    if check_http_params:
+        nrpe_compat.add_check(
+            shortname='squidrp',
+            description='Check Squid',
+            check_cmd='check_http %s' % check_http_params
+        )
+    config_services_str = conf.get('services', '')
+    config_services = yaml.safe_load(config_services_str) or ()
+    for service in config_services:
+        path = service.get('nrpe_check_path')
+        if path is not None:
+            command = 'check_http -I 127.0.0.1 -p 3128 --method=HEAD '
+            service_name = service['service_name']
+            if conf.get('x_balancer_name_allowed'):
+                command += ("-u http://localhost%s "
+                            "-k \\'X-Balancer-Name: %s\\'" % (
+                                path, service_name))
+            else:
+                command += "-u http://%s%s" % (service_name, path)
+            nrpe_compat.add_check(
+                shortname='squid-%s' % service_name.replace(".", "_"),
+                description='Check Squid for site %s' % service_name,
+                check_cmd=command,
+            )
+    nrpe_compat.write()
+
+
+def install_packages():
+    apt_install("squid3 squidclient python-jinja2".split(), fatal=True)
+    ensure_package_status(service_affecting_packages,
+                          config_get('package_status'))
 
 
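The per-site `check_http` command built in `update_nrpe_checks()` is where the new X-Balancer-Name support shows up: when `x_balancer_name_allowed` is set, the probe targets localhost and selects the backend via the header instead of relying on the Host name. A standalone sketch of just that string construction (the `build_check_command` wrapper is ours, for illustration):

```python
def build_check_command(service_name, path, x_balancer_name_allowed):
    # Re-creation of the nagios command assembly in update_nrpe_checks().
    command = 'check_http -I 127.0.0.1 -p 3128 --method=HEAD '
    if x_balancer_name_allowed:
        # Select the cache_peer backend via the X-Balancer-Name header.
        command += ("-u http://localhost%s "
                    "-k \\'X-Balancer-Name: %s\\'" % (path, service_name))
    else:
        # Fall back to addressing the site by name.
        command += "-u http://%s%s" % (service_name, path)
    return command

print(build_check_command('app.example.com', '/check', True))
print(build_check_command('app.example.com', '/check', False))
```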
 ###############################################################################
 # Hook functions
 ###############################################################################
 def install_hook():
-    for f in glob.glob('exec.d/*/charm-pre-install'):
-        if os.path.isfile(f) and os.access(f, os.X_OK):
-            subprocess.check_call(['sh', '-c', f])
     if not os.path.exists(default_squid3_config_dir):
         os.mkdir(default_squid3_config_dir, 0600)
     if not os.path.exists(default_squid3_config_cache_dir):
         os.mkdir(default_squid3_config_cache_dir, 0600)
-    shutil.copy2('%s/files/default.squid3' % (os.environ['CHARM_DIR']), '/etc/default/squid3')
-    return (apt_get_install(
-        "squid3 python-jinja2") == enable_squid3() is not True)
+    shutil.copy2('%s/files/default.squid3' % (
+        os.environ['CHARM_DIR']), '/etc/default/squid3')
+    install_packages()
+    return True
 
 
 def config_changed():
-    current_service_ports = get_service_ports()
+    old_service_ports = get_service_ports()
+    old_sitenames = get_sitenames()
     construct_squid3_config()
     update_nrpe_checks()
+    ensure_package_status(service_affecting_packages,
+                          config_get('package_status'))
 
     if service_squid3("check"):
         updated_service_ports = get_service_ports()
-        update_service_ports(current_service_ports, updated_service_ports)
+        update_service_ports(old_service_ports, updated_service_ports)
         service_squid3("reload")
+        if not (old_sitenames == get_sitenames()):
+            notify_cached_website()
     else:
+        # XXX Ideally the config should be restored to a working state if the
+        # check fails, otherwise an inadvertent reload will cause the service
+        # to be broken.
+        log("squid configuration check failed, exiting")
         sys.exit(1)
 
 
 def start_hook():
     if service_squid3("status"):
-        return(service_squid3("restart"))
+        return service_squid3("restart")
     else:
-        return(service_squid3("start"))
+        return service_squid3("start")
 
 
 def stop_hook():
     if service_squid3("status"):
-        return(service_squid3("stop"))
+        return service_squid3("stop")
 
 
 def website_interface(hook_name=None):
     if hook_name is None:
-        return(None)
-    if hook_name in ["joined", "changed", "broken", "departed"]:
+        return
+    if hook_name in ("joined", "changed", "broken", "departed"):
         config_changed()
 
 
 def cached_website_interface(hook_name=None):
     if hook_name is None:
-        return(None)
-    if hook_name in ["joined", "changed"]:
-        sitenames = ' '.join(get_sitenames())
-        # passing only one port - the first one defined
-        subprocess.call(['relation-set', 'port=%s' % get_service_ports()[0],
-                         'sitenames=%s' % sitenames])
+        return
+    if hook_name in ("joined", "changed"):
+        notify_cached_website(relation_ids=(None,))
+        config_data = config_get()
+        if config_data['enable_forward_proxy']:
+            config_changed()
+
+
+def get_hostname(host=None):
+    my_host = socket.gethostname()
+    if host is None or host == "0.0.0.0":
+        # If the listen ip has been set to 0.0.0.0 then pass back the hostname
+        return socket.getfqdn(my_host)
+    elif host == "localhost":
+        # If the fqdn lookup has returned localhost (lxc setups) then return
+        # hostname
+        return my_host
+    return host
+
+
+def notify_cached_website(relation_ids=None):
+    hostname = get_hostname()
+    # passing only one port - the first one defined
+    port = get_service_ports()[0]
+    sitenames = ' '.join(get_sitenames())
+
+    for rid in relation_ids or get_relation_ids("cached-website"):
+        relation_set(relation_id=rid, port=port,
+                     hostname=hostname,
+                     sitenames=sitenames)
+
+
+def upgrade_charm():
+    # Ensure that all current dependencies are installed.
+    install_packages()
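The `get_hostname()` helper added above normalizes what gets advertised on the cached-website relation: `0.0.0.0` (listen-on-everything) becomes the unit's FQDN, a bare `localhost` (as fqdn lookups return under lxc) falls back to the plain hostname, and any explicit address passes through untouched. The same logic, runnable in isolation:

```python
import socket

def get_hostname(host=None):
    # Same fallback chain as the charm's helper.
    my_host = socket.gethostname()
    if host is None or host == "0.0.0.0":
        return socket.getfqdn(my_host)
    elif host == "localhost":
        return my_host
    return host

print(get_hostname("10.0.3.1"))  # explicit addresses pass through: 10.0.3.1
```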
 
 
 ###############################################################################
 # Main section
 ###############################################################################
-def main():
+def main(hook_name):
     if hook_name == "install":
         install_hook()
     elif hook_name == "config-changed":
         config_changed()
+        update_nrpe_checks()
     elif hook_name == "start":
         start_hook()
     elif hook_name == "stop":
         stop_hook()
+    elif hook_name == "upgrade-charm":
+        upgrade_charm()
+        config_changed()
+        update_nrpe_checks()
 
     elif hook_name == "website-relation-joined":
         website_interface("joined")
@@ -446,14 +473,21 @@
     elif hook_name == "cached-website-relation-departed":
         cached_website_interface("departed")
 
-    elif hook_name == "nrpe-external-master-relation-changed":
+    elif hook_name in ("nrpe-external-master-relation-joined",
+                       "local-monitors-relation-joined"):
         update_nrpe_checks()
 
     elif hook_name == "env-dump":
-        print relation_get_all()
+        print relations_of_type()
     else:
         print "Unknown hook"
         sys.exit(1)
 
 if __name__ == '__main__':
-    main()
+    hook_name = os.path.basename(sys.argv[0])
+    # Also support being invoked directly with hook as argument name.
+    if hook_name == "hooks.py":
+        if len(sys.argv) < 2:
+            sys.exit("Missing required hook name argument.")
+        hook_name = sys.argv[1]
+    main(hook_name)
 
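The new `__main__` block lets every hook symlink (e.g. `hooks/config-changed -> hooks.py`) dispatch on its own basename, while still allowing `python hooks/hooks.py <hook>` for direct invocation, as the rewritten `hooks/install` below relies on. That resolution step, extracted for clarity (the `resolve_hook_name` name is ours):

```python
import os
import sys

def resolve_hook_name(argv):
    # Symlinked hooks dispatch on their basename; invoking hooks.py
    # directly takes the hook name as the first argument instead.
    hook_name = os.path.basename(argv[0])
    if hook_name == "hooks.py":
        if len(argv) < 2:
            sys.exit("Missing required hook name argument.")
        hook_name = argv[1]
    return hook_name

print(resolve_hook_name(["hooks/config-changed"]))       # config-changed
print(resolve_hook_name(["hooks/hooks.py", "install"]))  # install
```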
=== modified symlink 'hooks/install' (properties changed: -x to +x)
=== target was u'hooks.py'
--- hooks/install 1970-01-01 00:00:00 +0000
+++ hooks/install 2013-10-29 21:07:21 +0000
@@ -0,0 +1,9 @@
+#!/bin/sh
+
+set -eu
+
+juju-log 'Invoking charm-pre-install hooks'
+[ -d exec.d ] && ( for f in exec.d/*/charm-pre-install; do [ -x $f ] && /bin/sh -c "$f"; done )
+
+juju-log 'Invoking python-based install hook'
+python hooks/hooks.py install
=== added symlink 'hooks/local-monitors-relation-joined'
=== target is u'hooks.py'
=== renamed symlink 'hooks/nrpe-external-master-relation-changed' => 'hooks/nrpe-external-master-relation-joined'
=== removed directory 'hooks/shelltoolbox'
=== removed file 'hooks/shelltoolbox/__init__.py'
--- hooks/shelltoolbox/__init__.py 2012-10-02 09:42:19 +0000
+++ hooks/shelltoolbox/__init__.py 1970-01-01 00:00:00 +0000
@@ -1,662 +0,0 @@
1# Copyright 2012 Canonical Ltd.
2
3# This file is part of python-shell-toolbox.
4#
5# python-shell-toolbox is free software: you can redistribute it and/or modify
6# it under the terms of the GNU General Public License as published by the
7# Free Software Foundation, version 3 of the License.
8#
9# python-shell-toolbox is distributed in the hope that it will be useful, but
10# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
11# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
12# more details.
13#
14# You should have received a copy of the GNU General Public License
15# along with python-shell-toolbox. If not, see <http://www.gnu.org/licenses/>.
16
17"""Helper functions for accessing shell commands in Python."""
18
19__metaclass__ = type
20__all__ = [
21 'apt_get_install',
22 'bzr_whois',
23 'cd',
24 'command',
25 'DictDiffer',
26 'environ',
27 'file_append',
28 'file_prepend',
29 'generate_ssh_keys',
30 'get_su_command',
31 'get_user_home',
32 'get_user_ids',
33 'install_extra_repositories',
34 'join_command',
35 'mkdirs',
36 'run',
37 'Serializer',
38 'script_name',
39 'search_file',
40 'ssh',
41 'su',
42 'user_exists',
43 'wait_for_page_contents',
44 ]
45
46from collections import namedtuple
47from contextlib import contextmanager
48from email.Utils import parseaddr
49import errno
50import json
51import operator
52import os
53import pipes
54import pwd
55import re
56import subprocess
57import sys
58from textwrap import dedent
59import time
60import urllib2
61
62
63Env = namedtuple('Env', 'uid gid home')
64
65
66def apt_get_install(*args, **kwargs):
67 """Install given packages using apt.
68
69 It is possible to pass environment variables to be set during install
70 using keyword arguments.
71
72 :raises: subprocess.CalledProcessError
73 """
74 caller = kwargs.pop('caller', run)
75 debian_frontend = kwargs.pop('DEBIAN_FRONTEND', 'noninteractive')
76 with environ(DEBIAN_FRONTEND=debian_frontend, **kwargs):
77 cmd = ('apt-get', '-y', 'install') + args
78 return caller(*cmd)
79
80
81def bzr_whois(user):
82 """Return full name and email of bzr `user`.
83
84 Return None if the given `user` does not have a bzr user id.
85 """
86 with su(user):
87 try:
88 whoami = run('bzr', 'whoami')
89 except (subprocess.CalledProcessError, OSError):
90 return None
91 return parseaddr(whoami)
92
93
94@contextmanager
95def cd(directory):
96 """A context manager to temporarily change current working dir, e.g.::
97
98 >>> import os
99 >>> os.chdir('/tmp')
100 >>> with cd('/bin'): print os.getcwd()
101 /bin
102 >>> print os.getcwd()
103 /tmp
104 """
105 cwd = os.getcwd()
106 os.chdir(directory)
107 try:
108 yield
109 finally:
110 os.chdir(cwd)
111
112
113def command(*base_args):
114 """Return a callable that will run the given command with any arguments.
115
116 The first argument is the path to the command to run, subsequent arguments
117 are command-line arguments to "bake into" the returned callable.
118
119 The callable runs the given executable and also takes arguments that will
120 be appeneded to the "baked in" arguments.
121
122 For example, this code will list a file named "foo" (if it exists):
123
124 ls_foo = command('/bin/ls', 'foo')
125 ls_foo()
126
127 While this invocation will list "foo" and "bar" (assuming they exist):
128
129 ls_foo('bar')
130 """
131 def callable_command(*args):
132 all_args = base_args + args
133 return run(*all_args)
134
135 return callable_command
136
137
138@contextmanager
139def environ(**kwargs):
140 """A context manager to temporarily change environment variables.
141
142 If an existing environment variable is changed, it is restored during
143 context cleanup::
144
145 >>> import os
146 >>> os.environ['MY_VARIABLE'] = 'foo'
147 >>> with environ(MY_VARIABLE='bar'): print os.getenv('MY_VARIABLE')
148 bar
149 >>> print os.getenv('MY_VARIABLE')
150 foo
151 >>> del os.environ['MY_VARIABLE']
152
153 If we are adding environment variables, they are removed during context
154 cleanup::
155
156 >>> import os
157 >>> with environ(MY_VAR1='foo', MY_VAR2='bar'):
158 ... print os.getenv('MY_VAR1'), os.getenv('MY_VAR2')
159 foo bar
160 >>> os.getenv('MY_VAR1') == os.getenv('MY_VAR2') == None
161 True
162 """
163 backup = {}
164 for key, value in kwargs.items():
165 backup[key] = os.getenv(key)
166 os.environ[key] = value
167 try:
168 yield
169 finally:
170 for key, value in backup.items():
171 if value is None:
172 del os.environ[key]
173 else:
174 os.environ[key] = value
175
176
177def file_append(filename, line):
178 r"""Append given `line`, if not present, at the end of `filename`.
179
180 Usage example::
181
182 >>> import tempfile
183 >>> f = tempfile.NamedTemporaryFile('w', delete=False)
184 >>> f.write('line1\n')
185 >>> f.close()
186 >>> file_append(f.name, 'new line\n')
187 >>> open(f.name).read()
188 'line1\nnew line\n'
189
190 Nothing happens if the file already contains the given `line`::
191
192 >>> file_append(f.name, 'new line\n')
193 >>> open(f.name).read()
194 'line1\nnew line\n'
195
196 A new line is automatically added before the given `line` if it is not
197 present at the end of current file content::
198
199 >>> import tempfile
200 >>> f = tempfile.NamedTemporaryFile('w', delete=False)
201 >>> f.write('line1')
202 >>> f.close()
203 >>> file_append(f.name, 'new line\n')
204 >>> open(f.name).read()
205 'line1\nnew line\n'
206
207 The file is created if it does not exist::
208
209 >>> import tempfile
210 >>> filename = tempfile.mktemp()
211 >>> file_append(filename, 'line1\n')
212 >>> open(filename).read()
213 'line1\n'
214 """
215 if not line.endswith('\n'):
216 line += '\n'
217 with open(filename, 'a+') as f:
218 lines = f.readlines()
219 if line not in lines:
220 if not lines or lines[-1].endswith('\n'):
221 f.write(line)
222 else:
223 f.write('\n' + line)
224
225
226def file_prepend(filename, line):
227 r"""Insert given `line`, if not present, at the beginning of `filename`.
228
229 Usage example::
230
231 >>> import tempfile
232 >>> f = tempfile.NamedTemporaryFile('w', delete=False)
233 >>> f.write('line1\n')
234 >>> f.close()
235 >>> file_prepend(f.name, 'line0\n')
236 >>> open(f.name).read()
237 'line0\nline1\n'
238
239 If the file starts with the given `line`, nothing happens::
240
241 >>> file_prepend(f.name, 'line0\n')
242 >>> open(f.name).read()
243 'line0\nline1\n'
244
245 If the file contains the given `line`, but not at the beginning,
246 the line is moved on top::
247
248 >>> file_prepend(f.name, 'line1\n')
249 >>> open(f.name).read()
250 'line1\nline0\n'
251 """
252 if not line.endswith('\n'):
253 line += '\n'
254 with open(filename, 'r+') as f:
255 lines = f.readlines()
256 if lines[0] != line:
257 try:
258 lines.remove(line)
259 except ValueError:
260 pass
261 lines.insert(0, line)
262 f.seek(0)
263 f.writelines(lines)
264
265
266def generate_ssh_keys(path, passphrase=''):
267 """Generate ssh key pair, saving them inside the given `directory`.
268
269 >>> generate_ssh_keys('/tmp/id_rsa')
270 0
271 >>> open('/tmp/id_rsa').readlines()[0].strip()
272 '-----BEGIN RSA PRIVATE KEY-----'
273 >>> open('/tmp/id_rsa.pub').read().startswith('ssh-rsa')
274 True
275 >>> os.remove('/tmp/id_rsa')
276 >>> os.remove('/tmp/id_rsa.pub')
277
278 If either of the key files already exist, generate_ssh_keys() will
279 raise an Exception.
280
281 Note that ssh-keygen will prompt if the keyfiles already exist, but
282 when we're using it non-interactively it's better to pre-empt that
283 behaviour.
284
285 >>> with open('/tmp/id_rsa', 'w') as key_file:
286 ... key_file.write("Don't overwrite me, bro!")
287 >>> generate_ssh_keys('/tmp/id_rsa') # doctest: +ELLIPSIS
288 Traceback (most recent call last):
289 Exception: File /tmp/id_rsa already exists...
290 >>> os.remove('/tmp/id_rsa')
291
292 >>> with open('/tmp/id_rsa.pub', 'w') as key_file:
293 ... key_file.write("Don't overwrite me, bro!")
294 >>> generate_ssh_keys('/tmp/id_rsa') # doctest: +ELLIPSIS
295 Traceback (most recent call last):
296 Exception: File /tmp/id_rsa.pub already exists...
297 >>> os.remove('/tmp/id_rsa.pub')
298 """
299 if os.path.exists(path):
300 raise Exception("File {} already exists.".format(path))
301 if os.path.exists(path + '.pub'):
302 raise Exception("File {}.pub already exists.".format(path))
303 return subprocess.call([
304 'ssh-keygen', '-q', '-t', 'rsa', '-N', passphrase, '-f', path])
305
306
307def get_su_command(user, args):
308 """Return a command line as a sequence, prepending "su" if necessary.
309
310 This can be used together with `run` when the `su` context manager is not
311 enough (e.g. an external program uses uid rather than euid).
312
313 run(*get_su_command(user, ['bzr', 'whoami']))
314
315 If the su is requested as current user, the arguments are returned as
316 given::
317
318 >>> import getpass
319 >>> current_user = getpass.getuser()
320
321 >>> get_su_command(current_user, ('ls', '-l'))
322 ('ls', '-l')
323
324 Otherwise, "su" is prepended::
325
326 >>> get_su_command('nobody', ('ls', '-l', 'my file'))
327 ('su', 'nobody', '-c', "ls -l 'my file'")
328 """
329 if get_user_ids(user)[0] != os.getuid():
330 args = [i for i in args if i is not None]
331 return ('su', user, '-c', join_command(args))
332 return args
333
334
335def get_user_home(user):
336 """Return the home directory of the given `user`.
337
338 >>> get_user_home('root')
339 '/root'
340
341 If the user does not exist, return a default /home/[username] home::
342
343 >>> get_user_home('_this_user_does_not_exist_')
344 '/home/_this_user_does_not_exist_'
345 """
346 try:
347 return pwd.getpwnam(user).pw_dir
348 except KeyError:
349 return os.path.join(os.path.sep, 'home', user)
350
351
352def get_user_ids(user):
353 """Return the uid and gid of given `user`, e.g.::
354
355 >>> get_user_ids('root')
356 (0, 0)
357 """
358 userdata = pwd.getpwnam(user)
359 return userdata.pw_uid, userdata.pw_gid
360
361
362def install_extra_repositories(*repositories):
363 """Install all of the extra repositories and update apt.
364
365 Each given repository can contain a "{distribution}" placeholder, which
366 will be replaced by the current distribution codename.
367
368 :raises: subprocess.CalledProcessError
369 """
370 distribution = run('lsb_release', '-cs').strip()
371 # Starting from Oneiric, `apt-add-repository` is interactive by
372 # default, and requires a "-y" flag to be set.
373 assume_yes = None if distribution == 'lucid' else '-y'
374 for repo in repositories:
375 repository = repo.format(distribution=distribution)
376 run('apt-add-repository', assume_yes, repository)
377 run('apt-get', 'clean')
378 run('apt-get', 'update')
379
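The "{distribution}" placeholder above is plain `str.format` substitution; a minimal illustration (the repository line is a made-up example, not one the charm uses):

```python
# Illustration of the "{distribution}" placeholder handled by
# install_extra_repositories; the repository URL is a made-up example.
template = 'deb http://archive.example.com/ubuntu {distribution} main'
line = template.format(distribution='precise')
print(line)  # deb http://archive.example.com/ubuntu precise main
```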
380
381def join_command(args):
382 """Return a valid Unix command line from `args`.
383
384 >>> join_command(['ls', '-l'])
385 'ls -l'
386
387 Arguments containing spaces and empty args are correctly quoted::
388
389 >>> join_command(['command', 'arg1', 'arg containing spaces', ''])
390 "command arg1 'arg containing spaces' ''"
391 """
392 return ' '.join(pipes.quote(arg) for arg in args)
393
394
395def mkdirs(*args):
396 """Create leaf directories (given as `args`) and all intermediate ones.
397
398 >>> import tempfile
399 >>> base_dir = tempfile.mktemp(suffix='/')
400 >>> dir1 = tempfile.mktemp(prefix=base_dir)
401 >>> dir2 = tempfile.mktemp(prefix=base_dir)
402 >>> mkdirs(dir1, dir2)
403 >>> os.path.isdir(dir1)
404 True
405 >>> os.path.isdir(dir2)
406 True
407
408 If the leaf directory already exists, the function returns without errors::
409
410 >>> mkdirs(dir1)
411
412 An `OSError` is raised if the leaf path exists and it is a file::
413
414 >>> f = tempfile.NamedTemporaryFile(
415 ... 'w', delete=False, prefix=base_dir)
416 >>> f.close()
417 >>> mkdirs(f.name) # doctest: +ELLIPSIS
418 Traceback (most recent call last):
419 OSError: ...
420 """
421 for directory in args:
422 try:
423 os.makedirs(directory)
424 except OSError as err:
425 if err.errno != errno.EEXIST or os.path.isfile(directory):
426 raise
427
428
429def run(*args, **kwargs):
430 """Run the command with the given arguments.
431
432 The first argument is the path to the command to run.
433 Subsequent arguments are command-line arguments to be passed.
434
435 This function accepts all optional keyword arguments accepted by
436 `subprocess.Popen`.
437 """
438 args = [i for i in args if i is not None]
439 pipe = subprocess.PIPE
440 process = subprocess.Popen(
441 args, stdout=kwargs.pop('stdout', pipe),
442 stderr=kwargs.pop('stderr', pipe),
443 close_fds=kwargs.pop('close_fds', True), **kwargs)
444 stdout, stderr = process.communicate()
445 if process.returncode:
446 exception = subprocess.CalledProcessError(
447 process.returncode, repr(args))
448 # The output argument of `CalledProcessError` was introduced in Python
449 # 2.7. Monkey patch the output here to avoid TypeErrors in older
450 # versions of Python, still preserving the output in Python 2.7.
451 exception.output = ''.join(filter(None, [stdout, stderr]))
452 raise exception
453 return stdout
454
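The contract of `run` (None arguments dropped, stdout returned, non-zero exit raising `CalledProcessError` with the combined output attached) can be exercised with a self-contained condensation of the function; this sketch uses text-mode pipes so it also runs on Python 3, whereas the original targets Python 2:

```python
import subprocess

# Condensed sketch of the `run` helper above, with text-mode pipes so the
# example runs on Python 2 and 3: None arguments are skipped, stdout is
# captured, and a non-zero exit raises CalledProcessError carrying the
# combined stdout/stderr as `output`.
def run(*args, **kwargs):
    args = [i for i in args if i is not None]
    process = subprocess.Popen(
        args, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
        universal_newlines=True, close_fds=True, **kwargs)
    stdout, stderr = process.communicate()
    if process.returncode:
        exc = subprocess.CalledProcessError(process.returncode, repr(args))
        exc.output = ''.join(filter(None, [stdout, stderr]))
        raise exc
    return stdout

output = run('echo', None, 'hello')
print(repr(output))  # 'hello\n' -- the None argument was dropped
```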
455
456def script_name():
457 """Return the name of this script."""
458 return os.path.basename(sys.argv[0])
459
460
461def search_file(regexp, filename):
462 """Return the first line in `filename` that matches `regexp`."""
463 with open(filename) as f:
464 for line in f:
465 if re.search(regexp, line):
466 return line
467
468
469def ssh(location, user=None, key=None, caller=subprocess.call):
470 """Return a callable that can be used to run ssh shell commands.
471
472 The ssh `location` must be given; `user` is optional.
473 If `user` is None, the current user is used for the connection.
474
475 The callable internally uses the given `caller`::
476
477 >>> def caller(cmd):
478 ... print tuple(cmd)
479 >>> sshcall = ssh('example.com', 'myuser', caller=caller)
480 >>> root_sshcall = ssh('example.com', caller=caller)
481 >>> sshcall('ls -l') # doctest: +ELLIPSIS
482 ('ssh', '-t', ..., 'myuser@example.com', '--', 'ls -l')
483 >>> root_sshcall('ls -l') # doctest: +ELLIPSIS
484 ('ssh', '-t', ..., 'example.com', '--', 'ls -l')
485
486 The ssh key path can be optionally provided::
487
488 >>> root_sshcall = ssh('example.com', key='/tmp/foo', caller=caller)
489 >>> root_sshcall('ls -l') # doctest: +ELLIPSIS
490 ('ssh', '-t', ..., '-i', '/tmp/foo', 'example.com', '--', 'ls -l')
491
492 If the ssh command exits with an error code,
493 a `subprocess.CalledProcessError` is raised::
494
495 >>> ssh('loc', caller=lambda cmd: 1)('ls -l') # doctest: +ELLIPSIS
496 Traceback (most recent call last):
497 CalledProcessError: ...
498
499 If `ignore_errors` is set to True when executing the command, no error
500 is raised, even if the command itself returns an error code::
501
502 >>> sshcall = ssh('loc', caller=lambda cmd: 1)
503 >>> sshcall('ls -l', ignore_errors=True)
504 """
505 sshcmd = [
506 'ssh',
507 '-t',
508 '-t', # Yes, this second -t is deliberate. See `man ssh`.
509 '-o', 'StrictHostKeyChecking=no',
510 '-o', 'UserKnownHostsFile=/dev/null',
511 ]
512 if key is not None:
513 sshcmd.extend(['-i', key])
514 if user is not None:
515 location = '{}@{}'.format(user, location)
516 sshcmd.extend([location, '--'])
517
518 def _sshcall(cmd, ignore_errors=False):
519 command = sshcmd + [cmd]
520 retcode = caller(command)
521 if retcode and not ignore_errors:
522 raise subprocess.CalledProcessError(retcode, ' '.join(command))
523
524 return _sshcall
525
526
527@contextmanager
528def su(user):
529 """A context manager to temporarily run the script as a different user."""
530 uid, gid = get_user_ids(user)
531 os.setegid(gid)
532 os.seteuid(uid)
533 home = get_user_home(user)
534 with environ(HOME=home):
535 try:
536 yield Env(uid, gid, home)
537 finally:
538 os.setegid(os.getgid())
539 os.seteuid(os.getuid())
540
541
542def user_exists(username):
543 """Return True if given `username` exists, e.g.::
544
545 >>> user_exists('root')
546 True
547 >>> user_exists('_this_user_does_not_exist_')
548 False
549 """
550 try:
551 pwd.getpwnam(username)
552 except KeyError:
553 return False
554 return True
555
556
557def wait_for_page_contents(url, contents, timeout=120, validate=None):
558 if validate is None:
559 validate = operator.contains
560 start_time = time.time()
561 while True:
562 try:
563 stream = urllib2.urlopen(url)
564 except (urllib2.HTTPError, urllib2.URLError):
565 pass
566 else:
567 page = stream.read()
568 if validate(page, contents):
569 return page
570 if time.time() - start_time >= timeout:
571 raise RuntimeError('timeout waiting for contents of ' + url)
572 time.sleep(0.1)
573
574
575class DictDiffer:
576 """
577 Calculate the difference between two dictionaries as:
578 (1) items added
579 (2) items removed
580 (3) keys same in both but changed values
581 (4) keys same in both and unchanged values
582 """
583
584 # Based on answer by hughdbrown at:
585 # http://stackoverflow.com/questions/1165352
586
587 def __init__(self, current_dict, past_dict):
588 self.current_dict = current_dict
589 self.past_dict = past_dict
590 self.set_current = set(current_dict)
591 self.set_past = set(past_dict)
592 self.intersect = self.set_current.intersection(self.set_past)
593
594 @property
595 def added(self):
596 return self.set_current - self.intersect
597
598 @property
599 def removed(self):
600 return self.set_past - self.intersect
601
602 @property
603 def changed(self):
604 return set(key for key in self.intersect
605 if self.past_dict[key] != self.current_dict[key])
606 @property
607 def unchanged(self):
608 return set(key for key in self.intersect
609 if self.past_dict[key] == self.current_dict[key])
610 @property
611 def modified(self):
612 return self.current_dict != self.past_dict
613
614 @property
615 def added_or_changed(self):
616 return self.added.union(self.changed)
617
618 def _changes(self, keys):
619 new = {}
620 old = {}
621 for k in keys:
622 new[k] = self.current_dict.get(k)
623 old[k] = self.past_dict.get(k)
624 return "%s -> %s" % (old, new)
625
626 def __str__(self):
627 if self.modified:
628 s = dedent("""\
629 added: %s
630 removed: %s
631 changed: %s
632 unchanged: %s""") % (
633 self._changes(self.added),
634 self._changes(self.removed),
635 self._changes(self.changed),
636 list(self.unchanged))
637 else:
638 s = "no changes"
639 return s
640
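How `DictDiffer` classifies keys can be seen with a small worked example; the class is condensed here (only the four set properties) so the snippet runs on its own, and the results match the properties above:

```python
# Condensed copy of the DictDiffer class above, just enough to run the
# example: keys are classified against the intersection of both key sets.
class DictDiffer(object):
    def __init__(self, current_dict, past_dict):
        self.current_dict, self.past_dict = current_dict, past_dict
        self.set_current, self.set_past = set(current_dict), set(past_dict)
        self.intersect = self.set_current & self.set_past

    @property
    def added(self):
        return self.set_current - self.intersect

    @property
    def removed(self):
        return self.set_past - self.intersect

    @property
    def changed(self):
        return set(k for k in self.intersect
                   if self.past_dict[k] != self.current_dict[k])

    @property
    def unchanged(self):
        return set(k for k in self.intersect
                   if self.past_dict[k] == self.current_dict[k])

past = {'a': 1, 'b': 2, 'c': 3}
current = {'b': 2, 'c': 4, 'd': 5}
diff = DictDiffer(current, past)
print(sorted(diff.added), sorted(diff.removed),
      sorted(diff.changed), sorted(diff.unchanged))
# ['d'] ['a'] ['c'] ['b']
```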
641
642class Serializer:
643 """Handle JSON (de)serialization."""
644
645 def __init__(self, path, default=None, serialize=None, deserialize=None):
646 self.path = path
647 self.default = default or {}
648 self.serialize = serialize or json.dump
649 self.deserialize = deserialize or json.load
650
651 def exists(self):
652 return os.path.exists(self.path)
653
654 def get(self):
655 if self.exists():
656 with open(self.path) as f:
657 return self.deserialize(f)
658 return self.default
659
660 def set(self, data):
661 with open(self.path, 'w') as f:
662 self.serialize(data, f)
=== added directory 'hooks/tests'
=== added file 'hooks/tests/__init__.py'
=== added file 'hooks/tests/test_cached_website_hooks.py'
--- hooks/tests/test_cached_website_hooks.py 1970-01-01 00:00:00 +0000
+++ hooks/tests/test_cached_website_hooks.py 2013-10-29 21:07:21 +0000
@@ -0,0 +1,111 @@
1from testtools import TestCase
2from mock import patch, call
3
4import hooks
5
6
7class CachedWebsiteRelationTest(TestCase):
8
9 def setUp(self):
10 super(CachedWebsiteRelationTest, self).setUp()
11 self.notify_cached_website = self.patch_hook("notify_cached_website")
12 self.config_changed = self.patch_hook('config_changed')
13 self.config_get = self.patch_hook('config_get')
14 self.config_get.return_value = {
15 'enable_forward_proxy': False
16 }
17
18 def patch_hook(self, hook_name):
19 mock_controller = patch.object(hooks, hook_name)
20 mock = mock_controller.start()
21 self.addCleanup(mock_controller.stop)
22 return mock
23
24 def test_website_interface_none(self):
25 self.assertEqual(None, hooks.cached_website_interface(hook_name=None))
26 self.assertFalse(self.notify_cached_website.called)
27
28 def test_website_interface_joined(self):
29 hooks.cached_website_interface(hook_name="joined")
30 self.notify_cached_website.assert_called_once_with(
31 relation_ids=(None,))
32 self.assertFalse(self.config_changed.called)
33
34 def test_website_interface_changed(self):
35 hooks.cached_website_interface(hook_name="changed")
36 self.notify_cached_website.assert_called_once_with(
37 relation_ids=(None,))
38 self.assertFalse(self.config_changed.called)
39
40 def test_website_interface_forward_enabled(self):
41 self.config_get.return_value = {
42 'enable_forward_proxy': True
43 }
44 hooks.cached_website_interface(hook_name="changed")
45 self.notify_cached_website.assert_called_once_with(
46 relation_ids=(None,))
47 self.assertTrue(self.config_changed.called)
48
49
50class NotifyCachedWebsiteRelationTest(TestCase):
51
52 def setUp(self):
53 super(NotifyCachedWebsiteRelationTest, self).setUp()
54
55 self.get_service_ports = self.patch_hook("get_service_ports")
56 self.get_sitenames = self.patch_hook("get_sitenames")
57 self.get_relation_ids = self.patch_hook("get_relation_ids")
58 self.get_hostname = self.patch_hook("get_hostname")
59 self.relation_set = self.patch_hook("relation_set")
60
61 def patch_hook(self, hook_name):
62 mock_controller = patch.object(hooks, hook_name)
63 mock = mock_controller.start()
64 self.addCleanup(mock_controller.stop)
65 return mock
66
67 def test_notify_cached_website_no_relation_ids(self):
68 self.get_relation_ids.return_value = ()
69
70 hooks.notify_cached_website()
71
72 self.assertFalse(self.relation_set.called)
73 self.get_relation_ids.assert_called_once_with(
74 "cached-website")
75
76 def test_notify_cached_website_with_default_relation(self):
77 self.get_relation_ids.return_value = ()
78 self.get_hostname.return_value = "foo.local"
79 self.get_service_ports.return_value = (3128,)
80 self.get_sitenames.return_value = ("foo.internal", "bar.internal")
81
82 hooks.notify_cached_website(relation_ids=(None,))
83
84 self.get_hostname.assert_called_once_with()
85 self.relation_set.assert_called_once_with(
86 relation_id=None, port=3128,
87 hostname="foo.local",
88 sitenames="foo.internal bar.internal")
89 self.assertFalse(self.get_relation_ids.called)
90
91 def test_notify_cached_website_with_relations(self):
92 self.get_relation_ids.return_value = ("cached-website:1",
93 "cached-website:2")
94 self.get_hostname.return_value = "foo.local"
95 self.get_service_ports.return_value = (3128,)
96 self.get_sitenames.return_value = ("foo.internal", "bar.internal")
97
98 hooks.notify_cached_website()
99
100 self.get_hostname.assert_called_once_with()
101 self.get_relation_ids.assert_called_once_with("cached-website")
102 self.relation_set.assert_has_calls([
103 call.relation_set(
104 relation_id="cached-website:1", port=3128,
105 hostname="foo.local",
106 sitenames="foo.internal bar.internal"),
107 call.relation_set(
108 relation_id="cached-website:2", port=3128,
109 hostname="foo.local",
110 sitenames="foo.internal bar.internal"),
111 ])
=== added file 'hooks/tests/test_config_changed_hooks.py'
--- hooks/tests/test_config_changed_hooks.py 1970-01-01 00:00:00 +0000
+++ hooks/tests/test_config_changed_hooks.py 2013-10-29 21:07:21 +0000
@@ -0,0 +1,60 @@
1from testtools import TestCase
2from mock import patch
3
4import hooks
5
6
7class ConfigChangedTest(TestCase):
8
9 def setUp(self):
10 super(ConfigChangedTest, self).setUp()
11 self.get_service_ports = self.patch_hook("get_service_ports")
12 self.get_sitenames = self.patch_hook("get_sitenames")
13 self.construct_squid3_config = self.patch_hook(
14 "construct_squid3_config")
15 self.update_nrpe_checks = self.patch_hook(
16 "update_nrpe_checks")
17 self.update_service_ports = self.patch_hook(
18 "update_service_ports")
19 self.service_squid3 = self.patch_hook(
20 "service_squid3")
21 self.notify_cached_website = self.patch_hook("notify_cached_website")
22 self.log = self.patch_hook("log")
23 self.config_get = self.patch_hook("config_get")
24
25 def patch_hook(self, hook_name):
26 mock_controller = patch.object(hooks, hook_name)
27 mock = mock_controller.start()
28 self.addCleanup(mock_controller.stop)
29 return mock
30
31 def test_config_changed_notify_cached_website_changed_stanzas(self):
32 self.service_squid3.return_value = True
33 self.get_sitenames.side_effect = (
34 ('foo.internal',),
35 ('foo.internal', 'bar.internal'))
36
37 hooks.config_changed()
38
39 self.notify_cached_website.assert_called_once_with()
40
41 def test_config_changed_no_notify_cached_website_not_changed(self):
42 self.service_squid3.return_value = True
43 self.get_sitenames.side_effect = (
44 ('foo.internal',),
45 ('foo.internal',))
46
47 hooks.config_changed()
48
49 self.assertFalse(self.notify_cached_website.called)
50
51 @patch("sys.exit")
52 def test_config_changed_no_notify_cached_website_failed_check(self, exit):
53 self.service_squid3.return_value = False
54 self.get_sitenames.side_effect = (
55 ('foo.internal',),
56 ('foo.internal', 'bar.internal'))
57
58 hooks.config_changed()
59
60 self.assertFalse(self.notify_cached_website.called)
=== added file 'hooks/tests/test_helpers.py'
--- hooks/tests/test_helpers.py 1970-01-01 00:00:00 +0000
+++ hooks/tests/test_helpers.py 2013-10-29 21:07:21 +0000
@@ -0,0 +1,415 @@
1import os
2import json
3import yaml
4
5from os.path import dirname
6
7from testtools import TestCase
8from testtools.matchers import AfterPreprocessing, Contains
9from mock import patch
10
11import hooks
12from charmhelpers.core.hookenv import Serializable
13
14
15def normalize_whitespace(data):
16 return ' '.join(chunk for chunk in data.split())
17
18
19@patch.dict(os.environ, {"CHARM_DIR": dirname(dirname(dirname(__file__)))})
20class SquidConfigTest(TestCase):
21
22 def _assert_contents(self, *expected):
23 def side_effect(got):
24 for chunk in expected:
25 self.assertThat(got, AfterPreprocessing(
26 normalize_whitespace,
27 Contains(normalize_whitespace(chunk))))
28 return side_effect
29
30 def _apply_patch(self, name):
31 p = patch(name)
32 mocked_name = p.start()
33 self.addCleanup(p.stop)
34 return mocked_name
35
36 def setUp(self):
37 super(SquidConfigTest, self).setUp()
38 self.get_reverse_sites = self._apply_patch('hooks.get_reverse_sites')
39 self.get_forward_sites = self._apply_patch('hooks.get_forward_sites')
40 self.config_get = self._apply_patch('hooks.config_get')
41 self.write_squid3_config = self._apply_patch(
42 'hooks.write_squid3_config')
43
44 self.get_forward_sites.return_value = {}
45
46 def test_squid_config_no_sites(self):
47 self.config_get.return_value = Serializable({
48 "refresh_patterns": "",
49 "cache_size_mb": 1024,
50 "target_objs_per_dir": 1024,
51 "avg_obj_size_kb": 1024,
52 "via": "on",
53 })
54 self.get_reverse_sites.return_value = None
55 self.write_squid3_config.side_effect = self._assert_contents(
56 """
57 always_direct deny all
58 """,
59 """
60 via on
61 """,
62 )
63 hooks.construct_squid3_config()
64
65 def test_squid_config_via_off(self):
66 self.config_get.return_value = Serializable({
67 "refresh_patterns": "",
68 "cache_size_mb": 1024,
69 "target_objs_per_dir": 1024,
70 "avg_obj_size_kb": 1024,
71 "via": "off",
72 })
73 self.get_reverse_sites.return_value = None
74 self.write_squid3_config.side_effect = self._assert_contents(
75 """
76 via off
77 """,
78 )
79 hooks.construct_squid3_config()
80
81 def test_squid_config_refresh_pattern_json(self):
82 self.config_get.return_value = Serializable({
83 "refresh_patterns": json.dumps(
84 {"http://www.ubuntu.com":
85 {"min": 0, "percent": 20, "max": 60}}),
86 "cache_size_mb": 1024,
87 "target_objs_per_dir": 1024,
88 "avg_obj_size_kb": 1024,
89 })
90 self.get_reverse_sites.return_value = None
91 self.write_squid3_config.side_effect = self._assert_contents(
92 """
93 refresh_pattern http://www.ubuntu.com 0 20% 60
94 """,
95 )
96 hooks.construct_squid3_config()
97
98 def test_squid_config_refresh_pattern_yaml(self):
99 self.config_get.return_value = Serializable({
100 "refresh_patterns": yaml.dump(
101 {"http://www.ubuntu.com":
102 {"min": 0, "percent": 20, "max": 60}}),
103 "cache_size_mb": 1024,
104 "target_objs_per_dir": 1024,
105 "avg_obj_size_kb": 1024,
106 })
107 self.get_reverse_sites.return_value = None
108 self.write_squid3_config.side_effect = self._assert_contents(
109 """
110 refresh_pattern http://www.ubuntu.com 0 20% 60
111 """,
112 )
113 hooks.construct_squid3_config()
114
115 def test_squid_config_refresh_pattern_options(self):
116 self.config_get.return_value = Serializable({
117 "refresh_patterns": yaml.dump(
118 {"http://www.ubuntu.com":
119 {"min": 0, "percent": 20, "max": 60,
120 "options": ["override-lastmod",
121 "reload-into-ims"]}}),
122 "cache_size_mb": 1024,
123 "target_objs_per_dir": 1024,
124 "avg_obj_size_kb": 1024,
125 })
126 self.get_reverse_sites.return_value = None
127 self.write_squid3_config.side_effect = self._assert_contents(
128 """
129 refresh_pattern http://www.ubuntu.com 0 20% 60
130 override-lastmod reload-into-ims
131 refresh_pattern . 30 20% 4320
132 """,
133 )
134 hooks.construct_squid3_config()
135
136 def test_squid_config_refresh_pattern_default(self):
137 self.config_get.return_value = Serializable({
138 "refresh_patterns": yaml.dump(
139 {".":
140 {"min": 0, "percent": 20, "max": 60,
141 "options": ["override-lastmod",
142 "reload-into-ims"]}}),
143 "cache_size_mb": 1024,
144 "target_objs_per_dir": 1024,
145 "avg_obj_size_kb": 1024,
146 })
147 self.get_reverse_sites.return_value = None
148 self.write_squid3_config.side_effect = self._assert_contents(
149 """
150 refresh_pattern . 0 20% 60
151 override-lastmod reload-into-ims
152 """,
153 )
154 hooks.construct_squid3_config()
155
156 def test_squid_config_no_sitenames(self):
157 self.config_get.return_value = Serializable({
158 "refresh_patterns": "",
159 "cache_size_mb": 1024,
160 "target_objs_per_dir": 1024,
161 "avg_obj_size_kb": 1024,
162 })
163 self.get_reverse_sites.return_value = {
164 None: [
165 hooks.Server("website_1__foo_1", "1.2.3.4", 4242, ''),
166 hooks.Server("website_1__foo_2", "1.2.3.5", 4242, ''),
167 ],
168 }
169 self.write_squid3_config.side_effect = self._assert_contents(
170 """
171 acl no_sitename_acl myport
172 http_access allow accel_ports no_sitename_acl
173 never_direct allow no_sitename_acl
174 """,
175 """
176 cache_peer 1.2.3.4 parent 4242 0 name=website_1__foo_1 no-query
177 no-digest originserver round-robin login=PASS
178 cache_peer_access website_1__foo_1 allow no_sitename_acl
179 cache_peer_access website_1__foo_1 deny all
180 """,
181 """
182 cache_peer 1.2.3.5 parent 4242 0 name=website_1__foo_2 no-query
183 no-digest originserver round-robin login=PASS
184 cache_peer_access website_1__foo_2 allow no_sitename_acl
185 cache_peer_access website_1__foo_2 deny all
186 """
187 )
188 hooks.construct_squid3_config()
189
190 def test_squid_config_with_domain(self):
191 self.config_get.return_value = Serializable({
192 "refresh_patterns": "",
193 "cache_size_mb": 1024,
194 "target_objs_per_dir": 1024,
195 "avg_obj_size_kb": 1024,
196 })
197 self.get_reverse_sites.return_value = {
198 "foo.com": [
199 hooks.Server("foo_com__website_1__foo_1",
200 "1.2.3.4", 4242, "forceddomain=example.com"),
201 hooks.Server("foo_com__website_1__foo_2",
202 "1.2.3.5", 4242, "forceddomain=example.com"),
203 ],
204 }
205 self.write_squid3_config.side_effect = self._assert_contents(
206 """
207 acl s_1_acl dstdomain foo.com
208 http_access allow accel_ports s_1_acl
209 http_access allow CONNECT SSL_ports s_1_acl
210 always_direct allow CONNECT SSL_ports s_1_acl
211 always_direct deny s_1_acl
212 """,
213 """
214 cache_peer 1.2.3.4 parent 4242 0 name=foo_com__website_1__foo_1
215 no-query no-digest originserver round-robin login=PASS
216 forceddomain=example.com
217 cache_peer_access foo_com__website_1__foo_1 allow s_1_acl
218 cache_peer_access foo_com__website_1__foo_1 deny all
219 """,
220 """
221 cache_peer 1.2.3.5 parent 4242 0 name=foo_com__website_1__foo_2
222 no-query no-digest originserver round-robin login=PASS
223 forceddomain=example.com
224 cache_peer_access foo_com__website_1__foo_2 allow s_1_acl
225 cache_peer_access foo_com__website_1__foo_2 deny all
226 """
227 )
228 hooks.construct_squid3_config()
229
230 def test_with_domain_no_servers_only_direct(self):
231 self.config_get.return_value = Serializable({
232 "refresh_patterns": "",
233 "cache_size_mb": 1024,
234 "target_objs_per_dir": 1024,
235 "avg_obj_size_kb": 1024,
236 })
237 self.get_reverse_sites.return_value = {
238 "foo.com": [
239 ],
240 }
241 self.write_squid3_config.side_effect = self._assert_contents(
242 """
243 acl s_1_acl dstdomain foo.com
244 http_access allow accel_ports s_1_acl
245 http_access allow CONNECT SSL_ports s_1_acl
246 always_direct allow s_1_acl
247 """,
248 )
249 hooks.construct_squid3_config()
250
251 def test_with_balancer_no_servers_only_direct(self):
252 self.config_get.return_value = Serializable({
253 "refresh_patterns": "",
254 "cache_size_mb": 1024,
255 "target_objs_per_dir": 1024,
256 "avg_obj_size_kb": 1024,
257 "x_balancer_name_allowed": True,
258 })
259 self.get_reverse_sites.return_value = {
260 "foo.com": [],
261 }
262 self.write_squid3_config.side_effect = self._assert_contents(
263 """
264 acl s_1_acl dstdomain foo.com
265 http_access allow accel_ports s_1_acl
266 http_access allow CONNECT SSL_ports s_1_acl
267 always_direct allow s_1_acl
268 """,
269 """
270 acl s_1_balancer req_header X-Balancer-Name foo\.com
271 http_access allow accel_ports s_1_balancer
272 http_access allow CONNECT SSL_ports s_1_balancer
273 always_direct allow s_1_balancer
274 """,
275 )
276 hooks.construct_squid3_config()
277
278 def test_with_balancer_name(self):
279 self.config_get.return_value = Serializable({
280 "refresh_patterns": "",
281 "cache_size_mb": 1024,
282 "target_objs_per_dir": 1024,
283 "avg_obj_size_kb": 1024,
284 "x_balancer_name_allowed": True,
285 })
286 self.get_reverse_sites.return_value = {
287 "foo.com": [
288 hooks.Server("foo_com__website_1__foo_1",
289 "1.2.3.4", 4242, ''),
290 hooks.Server("foo_com__website_1__foo_2",
291 "1.2.3.5", 4242, ''),
292 ],
293 }
294 self.write_squid3_config.side_effect = self._assert_contents(
295 """
296 acl s_1_acl dstdomain foo.com
297 http_access allow accel_ports s_1_acl
298 http_access allow CONNECT SSL_ports s_1_acl
299 always_direct allow CONNECT SSL_ports s_1_acl
300 always_direct deny s_1_acl
301 """,
302 """
303 acl s_1_balancer req_header X-Balancer-Name foo\.com
304 http_access allow accel_ports s_1_balancer
305 http_access allow CONNECT SSL_ports s_1_balancer
306 always_direct allow CONNECT SSL_ports s_1_balancer
307 always_direct deny s_1_balancer
308 """,
309 """
310 cache_peer 1.2.3.4 parent 4242 0 name=foo_com__website_1__foo_1
311 no-query no-digest originserver round-robin login=PASS
312 cache_peer_access foo_com__website_1__foo_1 allow s_1_acl
313 cache_peer_access foo_com__website_1__foo_1 allow s_1_balancer
314 cache_peer_access foo_com__website_1__foo_1 deny all
315 """,
316 """
317 cache_peer 1.2.3.5 parent 4242 0 name=foo_com__website_1__foo_2
318 no-query no-digest originserver round-robin login=PASS
319 cache_peer_access foo_com__website_1__foo_2 allow s_1_acl
320 """
321 )
322 hooks.construct_squid3_config()
323
324 def test_forward_enabled(self):
325 self.config_get.return_value = Serializable({
326 "enable_forward_proxy": True,
327 "refresh_patterns": "",
328 "cache_size_mb": 1024,
329 "target_objs_per_dir": 1024,
330 "avg_obj_size_kb": 1024,
331 })
332 self.get_reverse_sites.return_value = {
333 "foo.com": [],
334 }
335 self.get_forward_sites.return_value = [
336 {'private-address': '1.2.3.4', 'name': 'service_unit_0'},
337 {'private-address': '2.3.4.5', 'name': 'service_unit_1'},
338 ]
339 self.write_squid3_config.side_effect = self._assert_contents(
340 """
341 acl fwd_service_unit_0 src 1.2.3.4
342 http_access allow fwd_service_unit_0
343 http_access allow CONNECT SSL_ports fwd_service_unit_0
344 always_direct allow fwd_service_unit_0
345 always_direct allow CONNECT SSL_ports fwd_service_unit_0
346 acl fwd_service_unit_1 src 2.3.4.5
347 http_access allow fwd_service_unit_1
348 http_access allow CONNECT SSL_ports fwd_service_unit_1
349 always_direct allow fwd_service_unit_1
350 always_direct allow CONNECT SSL_ports fwd_service_unit_1
351 """,
352 """
353 acl s_1_acl dstdomain foo.com
354 http_access allow accel_ports s_1_acl
355 http_access allow CONNECT SSL_ports s_1_acl
356 always_direct allow s_1_acl
357 """
358 )
359 hooks.construct_squid3_config()
360
361 def test_squid_config_cache_enabled(self):
362 self.config_get.return_value = Serializable({
363 "refresh_patterns": "",
364 "cache_size_mb": 1024,
365 "cache_mem_mb": 256,
366 "cache_dir": "/var/run/squid3",
367 "target_objs_per_dir": 16,
368 "avg_obj_size_kb": 4,
369 })
370 self.get_reverse_sites.return_value = None
371 self.write_squid3_config.side_effect = self._assert_contents(
372 """
373 cache_dir aufs /var/run/squid3 1024 32 512
374 cache_mem 256 MB
375 """,
376 )
377 hooks.construct_squid3_config()
378
379 def test_squid_config_cache_disabled(self):
380 self.config_get.return_value = Serializable({
381 "refresh_patterns": "",
382 "cache_size_mb": 0,
383 "target_objs_per_dir": 1024,
384 "avg_obj_size_kb": 1024,
385 })
386 self.get_reverse_sites.return_value = None
387 self.write_squid3_config.side_effect = self._assert_contents(
388 """
389 cache deny all
390 """,
391 )
392 hooks.construct_squid3_config()
393
394
395class HelpersTest(TestCase):
396 def test_gets_config(self):
397 json_string = '{"foo": "BAR"}'
398 with patch('subprocess.check_output') as check_output:
399 check_output.return_value = json_string
400
401 result = hooks.config_get()
402
403 self.assertEqual(result['foo'], 'BAR')
404 check_output.assert_called_with(['config-get', '--format=json'])
405
406 def test_gets_config_with_scope(self):
407 json_string = '{"foo": "BAR"}'
408 with patch('subprocess.check_output') as check_output:
409 check_output.return_value = json_string
410
411 result = hooks.config_get(scope='baz')
412
413 self.assertEqual(result['foo'], 'BAR')
414 check_output.assert_called_with(['config-get', 'baz',
415 '--format=json'])
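The `HelpersTest` cases above pin down the `config_get` contract: shell out to Juju's `config-get`, optionally scoped to a single key, always with `--format=json`, and decode the result. A hypothetical standalone sketch matching those tests (the charm's actual implementation lives in hooks/hooks.py):

```python
import json
import subprocess

# Hypothetical sketch of the config_get helper described by HelpersTest
# above: run Juju's config-get (optionally scoped to one key) and decode
# the JSON payload it prints.
def config_get(scope=None):
    cmd = ['config-get']
    if scope is not None:
        cmd.append(scope)
    cmd.append('--format=json')
    return json.loads(subprocess.check_output(cmd))
```

Outside a Juju hook environment the `config-get` binary is absent, which is why the tests replace `subprocess.check_output` with a canned JSON string.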
=== added file 'hooks/tests/test_nrpe_hooks.py'
--- hooks/tests/test_nrpe_hooks.py 1970-01-01 00:00:00 +0000
+++ hooks/tests/test_nrpe_hooks.py 2013-10-29 21:07:21 +0000
@@ -0,0 +1,271 @@
1import os
2import grp
3import pwd
4import subprocess
5
6from testtools import TestCase
7from mock import patch, call
8
9import hooks
10
11from charmhelpers.contrib.charmsupport import nrpe
12from charmhelpers.core.hookenv import Serializable
13
14
15class NRPERelationTest(TestCase):
16 """Tests for the update_nrpe_checks hook.
17
18 Half of this is already tested in the tests for charmsupport.nrpe, but
19 as the hook in the charm pre-dates that, the tests are left here to ensure
20 backwards-compatibility.
21
22 """
23 patches = {
24 'config': {'object': nrpe},
25 'log': {'object': nrpe},
26 'getpwnam': {'object': pwd},
27 'getgrnam': {'object': grp},
28 'mkdir': {'object': os},
29 'chown': {'object': os},
30 'exists': {'object': os.path},
31 'listdir': {'object': os},
32 'remove': {'object': os},
33 'open': {'object': nrpe, 'create': True},
34 'isfile': {'object': os.path},
35 'call': {'object': subprocess},
36 'install_nrpe_scripts': {'object': hooks},
37 'relation_ids': {'object': nrpe},
38 'relation_set': {'object': nrpe},
39 }
40
41 def setUp(self):
42 super(NRPERelationTest, self).setUp()
43 self.patched = {}
44 # Mock the universe.
45 for attr, data in self.patches.items():
46 create = data.get('create', False)
47 patcher = patch.object(data['object'], attr, create=create)
48 self.patched[attr] = patcher.start()
49 self.addCleanup(patcher.stop)
50 if 'JUJU_UNIT_NAME' not in os.environ:
51 os.environ['JUJU_UNIT_NAME'] = 'test'
52
53 def check_call_counts(self, **kwargs):
54 for attr, expected in kwargs.items():
55 patcher = self.patched[attr]
56 self.assertEqual(expected, patcher.call_count, attr)
57
58 def test_update_nrpe_no_nagios_bails(self):
59 config = {'nagios_context': 'test'}
60 self.patched['config'].return_value = Serializable(config)
61 self.patched['getpwnam'].side_effect = KeyError
62
63 self.assertEqual(None, hooks.update_nrpe_checks())
64
65 expected = 'Nagios user not set up, nrpe checks not updated'
66 self.patched['log'].assert_called_once_with(expected)
67 self.check_call_counts(log=1, config=1, getpwnam=1)
68
69 def test_update_nrpe_removes_existing_config(self):
70 config = {
71 'nagios_context': 'test',
72 'nagios_check_http_params': '-u http://example.com/url',
73 }
74 self.patched['config'].return_value = Serializable(config)
75 self.patched['exists'].return_value = True
76 self.patched['listdir'].return_value = [
77 'foo', 'bar.cfg', 'check_squidrp.cfg']
78
79 self.assertEqual(None, hooks.update_nrpe_checks())
80
81 expected = '/var/lib/nagios/export/check_squidrp.cfg'
82 self.patched['remove'].assert_called_once_with(expected)
83 self.check_call_counts(config=1, getpwnam=1, getgrnam=1,
84 exists=5, remove=1, open=4, listdir=2)
85
86 def test_update_nrpe_uses_check_squidpeers(self):
87 config = {
88 'nagios_context': 'test',
89 }
90 self.patched['config'].return_value = Serializable(config)
91 self.patched['exists'].return_value = True
92 self.patched['isfile'].return_value = False
93
94 self.assertEqual(None, hooks.update_nrpe_checks())
95 self.assertEqual(2, self.patched['open'].call_count)
96 filename = 'check_squidpeers.cfg'
97
98 service_file_contents = """
99#---------------------------------------------------
100# This file is Juju managed
101#---------------------------------------------------
102define service {
103 use active-service
104 host_name test-test
105 service_description test-test[squidpeers] Check Squid Peers
106 check_command check_nrpe!check_squidpeers
107 servicegroups test
108}
109"""
110 self.patched['open'].assert_has_calls(
111 [call('/etc/nagios/nrpe.d/%s' % filename, 'w'),
112 call('/var/lib/nagios/export/service__test-test_%s' %
113 filename, 'w'),
114 call().__enter__().write(service_file_contents),
115 call().__enter__().write('# check squidpeers\n'),
116 call().__enter__().write(
117 'command[check_squidpeers]='
118 '/check_squidpeers\n')],
119 any_order=True)
120
121 self.check_call_counts(config=1, getpwnam=1, getgrnam=1,
122 exists=3, open=2, listdir=1)
123
124 def test_update_nrpe_with_check_url(self):
125 config = {
126 'nagios_context': 'test',
127 'nagios_check_http_params': '-u foo -H bar',
128 }
129 self.patched['config'].return_value = Serializable(config)
130 self.patched['exists'].return_value = True
131 self.patched['isfile'].return_value = False
132
133 self.assertEqual(None, hooks.update_nrpe_checks())
134 self.assertEqual(4, self.patched['open'].call_count)
135 filename = 'check_squidrp.cfg'
136
137 service_file_contents = """
138#---------------------------------------------------
139# This file is Juju managed
140#---------------------------------------------------
141define service {
142 use active-service
143 host_name test-test
144 service_description test-test[squidrp] Check Squid
145 check_command check_nrpe!check_squidrp
146 servicegroups test
147}
148"""
149 self.patched['open'].assert_has_calls(
150 [call('/etc/nagios/nrpe.d/%s' % filename, 'w'),
151 call('/var/lib/nagios/export/service__test-test_%s' %
152 filename, 'w'),
153 call().__enter__().write(service_file_contents),
154 call().__enter__().write('# check squidrp\n'),
155 call().__enter__().write(
156 'command[check_squidrp]=/check_http -u foo -H bar\n')],
157 any_order=True)
158
159 self.check_call_counts(config=1, getpwnam=1, getgrnam=1,
160 exists=5, open=4, listdir=2)
161
162 def test_update_nrpe_with_no_check_path(self):
163 config = {
164 'nagios_context': 'test',
165 'services': '- {service_name: i_ytimg_com}',
166 }
167 self.patched['config'].return_value = Serializable(config)
168 self.patched['exists'].return_value = True
169
170 self.assertEqual(None, hooks.update_nrpe_checks())
171
172 self.check_call_counts(config=1, getpwnam=1, getgrnam=1,
173 exists=3, open=2, listdir=1)
174
175 def test_update_nrpe_with_services_and_host_header(self):
176 config = {
177 'nagios_context': 'test',
178 'services': '- {service_name: i_ytimg_com, nrpe_check_path: /}',
179 }
180 self.patched['config'].return_value = Serializable(config)
181 self.patched['exists'].return_value = True
182
183 self.assertEqual(None, hooks.update_nrpe_checks())
184
185 self.check_call_counts(config=1, getpwnam=1, getgrnam=1,
186 exists=5, open=4, listdir=2)
187 expected = ('command[check_squid-i_ytimg_com]=/check_http '
188 '-I 127.0.0.1 -p 3128 --method=HEAD '
189 '-u http://i_ytimg_com/\n')
190 self.patched['open'].assert_has_calls(
191 [call('/etc/nagios/nrpe.d/check_squid-i_ytimg_com.cfg', 'w'),
192 call().__enter__().write(expected)],
193 any_order=True)
194
195 def test_update_nrpe_with_dotted_service_name_and_host_header(self):
196 config = {
197 'nagios_context': 'test',
198 'services': '- {service_name: i.ytimg.com, nrpe_check_path: /}',
199 }
200 self.patched['config'].return_value = Serializable(config)
201 self.patched['exists'].return_value = True
202
203 self.assertEqual(None, hooks.update_nrpe_checks())
204
205 self.check_call_counts(config=1, getpwnam=1, getgrnam=1,
206 exists=5, open=4, listdir=2)
207 expected = ('command[check_squid-i_ytimg_com]=/check_http '
208 '-I 127.0.0.1 -p 3128 --method=HEAD '
209 '-u http://i.ytimg.com/\n')
210 self.patched['open'].assert_has_calls(
211 [call('/etc/nagios/nrpe.d/check_squid-i_ytimg_com.cfg', 'w'),
212 call().__enter__().write(expected)],
213 any_order=True)
214
215 def test_update_nrpe_with_services_and_balancer_name_header(self):
216 config = {
217 'nagios_context': 'test',
218 'x_balancer_name_allowed': True,
219 'services': '- {service_name: i_ytimg_com, nrpe_check_path: /}',
220 }
221 self.patched['config'].return_value = Serializable(config)
222 self.patched['exists'].return_value = True
223
224 self.assertEqual(None, hooks.update_nrpe_checks())
225
226 self.check_call_counts(config=1, getpwnam=1, getgrnam=1,
227 exists=5, open=4, listdir=2)
228
229 expected = ('command[check_squid-i_ytimg_com]=/check_http '
230 '-I 127.0.0.1 -p 3128 --method=HEAD -u http://localhost/ '
231 "-k 'X-Balancer-Name: i_ytimg_com'\n")
232 self.patched['open'].assert_has_calls(
233 [call('/etc/nagios/nrpe.d/check_squid-i_ytimg_com.cfg', 'w'),
234 call().__enter__().write(expected)],
235 any_order=True)
236
237 def test_update_nrpe_with_services_and_optional_path(self):
238 services = '- {nrpe_check_path: /foo.jpg, service_name: foo_com}\n'
239 config = {
240 'nagios_context': 'test',
241 'services': services,
242 }
243 self.patched['config'].return_value = Serializable(config)
244 self.patched['exists'].return_value = True
245
246 self.assertEqual(None, hooks.update_nrpe_checks())
247
248 self.check_call_counts(config=1, getpwnam=1, getgrnam=1,
249 exists=5, open=4, listdir=2)
250 expected = ('command[check_squid-foo_com]=/check_http '
251 '-I 127.0.0.1 -p 3128 --method=HEAD '
252 '-u http://foo_com/foo.jpg\n')
253 self.patched['open'].assert_has_calls(
254 [call('/etc/nagios/nrpe.d/check_squid-foo_com.cfg', 'w'),
255 call().__enter__().write(expected)],
256 any_order=True)
257
258 def test_update_nrpe_restarts_service(self):
259 config = {
260 'nagios_context': 'test',
261 'nagios_check_http_params': '-u foo -p 3128'
262 }
263 self.patched['config'].return_value = Serializable(config)
264 self.patched['exists'].return_value = True
265
266 self.assertEqual(None, hooks.update_nrpe_checks())
267
268 expected = ['service', 'nagios-nrpe-server', 'restart']
269 self.assertEqual(expected, self.patched['call'].call_args[0][0])
270 self.check_call_counts(config=1, getpwnam=1, getgrnam=1,
271 exists=5, open=4, listdir=2, call=1)
=== added file 'hooks/tests/test_website_hooks.py'
--- hooks/tests/test_website_hooks.py 1970-01-01 00:00:00 +0000
+++ hooks/tests/test_website_hooks.py 2013-10-29 21:07:21 +0000
@@ -0,0 +1,267 @@
1import yaml
2
3from testtools import TestCase
4from mock import patch
5
6import hooks
7from charmhelpers.core.hookenv import Serializable
8
9
10class WebsiteRelationTest(TestCase):
11
12 def setUp(self):
13 super(WebsiteRelationTest, self).setUp()
14
15 relations_of_type = patch.object(hooks, "relations_of_type")
16 self.relations_of_type = relations_of_type.start()
17 self.addCleanup(relations_of_type.stop)
18
19 config_get = patch.object(hooks, "config_get")
20 self.config_get = config_get.start()
21 self.addCleanup(config_get.stop)
22
23 log = patch.object(hooks, "log")
24 self.log = log.start()
25 self.addCleanup(log.stop)
26
27 def test_relation_data_returns_no_relations(self):
28 self.config_get.return_value = Serializable({})
29 self.relations_of_type.return_value = []
30 self.assertEqual(None, hooks.get_reverse_sites())
31
32 def test_no_port_in_relation_data(self):
33 self.config_get.return_value = Serializable({})
34 self.relations_of_type.return_value = [
35 {"private-address": "1.2.3.4",
36 "__unit__": "foo/1",
37 "__relid__": "website:1"},
38 ]
39 self.assertIs(None, hooks.get_reverse_sites())
40 self.log.assert_called_once_with(
41 "No port in relation data for 'foo/1', skipping.")
42
43 def test_empty_relation_services(self):
44 self.config_get.return_value = Serializable({})
45 self.relations_of_type.return_value = [
46 {"private-address": "1.2.3.4",
47 "__unit__": "foo/1",
48 "__relid__": "website:1",
49 "all_services": "",
50 },
51 ]
52
53 self.assertEqual(None, hooks.get_reverse_sites())
54 self.assertFalse(self.log.called)
55
56 def test_no_port_in_relation_data_ok_with_all_services(self):
57 self.config_get.return_value = Serializable({})
58 self.relations_of_type.return_value = [
59 {"private-address": "1.2.3.4",
60 "__unit__": "foo/1",
61 "__relid__": "website:1",
62 "all_services": yaml.dump([
63 {"service_name": "foo.internal",
64 "service_port": 4242},
65 ]),
66 },
67 ]
68
69 expected = {
70 "foo.internal": [
71 ("foo_internal__website_1__foo_1", "1.2.3.4", 4242, ''),
72 ],
73 }
74 self.assertEqual(expected, hooks.get_reverse_sites())
75 self.assertFalse(self.log.called)
76
77 def test_no_private_address_in_relation_data(self):
78 self.config_get.return_value = Serializable({})
79 self.relations_of_type.return_value = [
80 {"port": 4242,
81 "__unit__": "foo/0",
82 "__relid__": "website:1",
83 },
84 ]
85 self.assertIs(None, hooks.get_reverse_sites())
86 self.log.assert_called_once_with(
87 "No private-address in relation data for 'foo/0', skipping.")
88
89 def test_sitenames_in_relation_data(self):
90 self.config_get.return_value = Serializable({})
91 self.relations_of_type.return_value = [
92 {"private-address": "1.2.3.4",
93 "port": 4242,
94 "__unit__": "foo/1",
95 "__relid__": "website:1",
96 "sitenames": "foo.internal bar.internal"},
97 {"private-address": "1.2.3.5",
98 "port": 4242,
99 "__unit__": "foo/2",
100 "__relid__": "website:1",
101 "sitenames": "foo.internal bar.internal"},
102 ]
103 expected = {
104 "foo.internal": [
105 ("foo_internal__website_1__foo_1", "1.2.3.4", 4242, ''),
106 ("foo_internal__website_1__foo_2", "1.2.3.5", 4242, ''),
107 ],
108 "bar.internal": [
109 ("bar_internal__website_1__foo_1", "1.2.3.4", 4242, ''),
110 ("bar_internal__website_1__foo_2", "1.2.3.5", 4242, ''),
111 ],
112 }
113 self.assertEqual(expected, hooks.get_reverse_sites())
114
115 def test_all_services_in_relation_data(self):
116 self.config_get.return_value = Serializable({})
117 self.relations_of_type.return_value = [
118 {"private-address": "1.2.3.4",
119 "__unit__": "foo/1",
120 "__relid__": "website:1",
121 "all_services": yaml.dump([
122 {"service_name": "foo.internal",
123 "service_port": 4242},
124 {"service_name": "bar.internal",
125 "service_port": 4243}
126 ]),
127 },
128 {"private-address": "1.2.3.5",
129 "__unit__": "foo/2",
130 "__relid__": "website:1",
131 "all_services": yaml.dump([
132 {"service_name": "foo.internal",
133 "service_port": 4242},
134 {"service_name": "bar.internal",
135 "service_port": 4243}
136 ]),
137 },
138 ]
139 expected = {
140 "foo.internal": [
141 ("foo_internal__website_1__foo_1", "1.2.3.4", 4242, ''),
142 ("foo_internal__website_1__foo_2", "1.2.3.5", 4242, ''),
143 ],
144 "bar.internal": [
145 ("bar_internal__website_1__foo_1", "1.2.3.4", 4243, ''),
146 ("bar_internal__website_1__foo_2", "1.2.3.5", 4243, ''),
147 ],
148 }
149 self.assertEqual(expected, hooks.get_reverse_sites())
150
151 def test_unit_names_in_relation_data(self):
152 self.config_get.return_value = Serializable({})
153 self.relations_of_type.return_value = [
154 {"private-address": "1.2.3.4",
155 "__relid__": "website_1",
156 "__unit__": "foo/1",
157 "port": 4242},
158 {"private-address": "1.2.3.5",
159 "__relid__": "website_1",
160 "__unit__": "foo/2",
161 "port": 4242},
162 ]
163 expected = {
164 None: [
165 ("website_1__foo_1", "1.2.3.4", 4242, ''),
166 ("website_1__foo_2", "1.2.3.5", 4242, ''),
167 ],
168 }
169 self.assertEqual(expected, hooks.get_reverse_sites())
170
171 def test_sites_from_config_no_domain(self):
172 self.relations_of_type.return_value = []
173 self.config_get.return_value = Serializable({
174 "services": yaml.dump([
175 {"service_name": "foo.internal",
176 "servers": [
177 ["1.2.3.4", 4242],
178 ["1.2.3.5", 4242],
179 ]
180 }
181 ])})
182 expected = {
183 "foo.internal": [
184 ("foo_internal__1_2_3_4", "1.2.3.4", 4242, ''),
185 ("foo_internal__1_2_3_5", "1.2.3.5", 4242, ''),
186 ],
187 }
188 self.assertEqual(expected, hooks.get_reverse_sites())
189
190 def test_sites_from_config_with_domain(self):
191 self.relations_of_type.return_value = []
192 self.config_get.return_value = Serializable({
193 "services": yaml.dump([
194 {"service_name": "foo.internal",
195 "server_options": ["forceddomain=example.com"],
196 "servers": [
197 ["1.2.3.4", 4242],
198 ["1.2.3.5", 4242],
199 ]
200 }
201 ])})
202 expected = {
203 "foo.internal": [
204 ("foo_internal__1_2_3_4", "1.2.3.4", 4242,
205 "forceddomain=example.com"),
206 ("foo_internal__1_2_3_5", "1.2.3.5", 4242,
207 "forceddomain=example.com"),
208 ],
209 }
210 self.assertEqual(expected, hooks.get_reverse_sites())
211
212 def test_sites_from_config_and_relation_with_domain(self):
213 self.relations_of_type.return_value = [
214 {"private-address": "1.2.3.4",
215 "__unit__": "foo/1",
216 "__relid__": "website:1",
217 "all_services": yaml.dump([
218 {"service_name": "foo.internal",
219 "service_port": 4242},
220 {"service_name": "bar.internal",
221 "service_port": 4243}
222 ]),
223 },
224 {"private-address": "1.2.3.5",
225 "__unit__": "foo/2",
226 "__relid__": "website:1",
227 "all_services": yaml.dump([
228 {"service_name": "foo.internal",
229 "service_port": 4242},
230 {"service_name": "bar.internal",
231 "service_port": 4243}
232 ]),
233 },
234 ]
235 self.config_get.return_value = Serializable({
236 "services": yaml.dump([
237 {"service_name": "foo.internal",
238 "server_options": ["forceddomain=example.com"],
239 "servers": [
240 ["1.2.4.4", 4242],
241 ["1.2.4.5", 4242],
242 ]
243 }
244 ])})
245 expected = {
246 "foo.internal": [
247 ("foo_internal__1_2_4_4", "1.2.4.4", 4242,
248 "forceddomain=example.com"),
249 ("foo_internal__1_2_4_5", "1.2.4.5", 4242,
250 "forceddomain=example.com"),
251 ("foo_internal__website_1__foo_1", "1.2.3.4", 4242,
252 "forceddomain=example.com"),
253 ("foo_internal__website_1__foo_2", "1.2.3.5", 4242,
254 "forceddomain=example.com"),
255 ],
256 "bar.internal": [
257 ("bar_internal__website_1__foo_1", "1.2.3.4", 4243, ''),
258 ("bar_internal__website_1__foo_2", "1.2.3.5", 4243, ''),
259 ],
260 }
261 self.assertEqual(expected, hooks.get_reverse_sites())
262
263 def test_empty_sites_from_config_no_domain(self):
264 self.relations_of_type.return_value = []
265 self.config_get.return_value = Serializable({
266 "services": ""})
267 self.assertEqual(None, hooks.get_reverse_sites())
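The tests above pin down the peer-naming scheme `get_reverse_sites()` uses when expanding relation data. A minimal standalone sketch of that expansion (hypothetical helper, not code from this branch; it assumes the relation's `all_services` YAML has already been parsed into a list of dicts):

```python
# Hypothetical helper mirroring the peer-naming scheme the tests assert:
# "<site>__<relid>__<unit>", with '.', ':' and '/' replaced by '_'.

def peers_from_relation(relation, services):
    """Expand one unit's services into {site: [(name, addr, port, options)]}."""
    sites = {}
    unit = relation["__unit__"].replace("/", "_")
    relid = relation["__relid__"].replace(":", "_")
    for service in services:
        sitename = service["service_name"]
        peer_name = "%s__%s__%s" % (sitename.replace(".", "_"), relid, unit)
        sites.setdefault(sitename, []).append(
            (peer_name, relation["private-address"],
             service["service_port"], ""))
    return sites

relation = {"private-address": "1.2.3.4",
            "__unit__": "foo/1",
            "__relid__": "website:1"}
services = [{"service_name": "foo.internal", "service_port": 4242},
            {"service_name": "bar.internal", "service_port": 4243}]
sites = peers_from_relation(relation, services)
print(sites["foo.internal"])
# → [('foo_internal__website_1__foo_1', '1.2.3.4', 4242, '')]
```

The fourth tuple element carries per-peer options such as `forceddomain=...`, which is why config-declared services with `server_options` produce non-empty strings there.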
=== added symlink 'hooks/upgrade-charm'
=== target is u'hooks.py'
=== modified file 'metadata.yaml'
--- metadata.yaml 2013-07-11 19:15:57 +0000
+++ metadata.yaml 2013-10-29 21:07:21 +0000
@@ -1,6 +1,6 @@
 name: squid-reverseproxy
 summary: Full featured Web Proxy cache (HTTP proxy)
-maintainer: "Matthew Wedgwood <matthew.wedgwood@canonical.com>"
+maintainer: [Matthew Wedgwood <matthew.wedgwood@canonical.com>, Alexander List <alexander.list@canonical.com>]
 description: >
   Squid is a high-performance proxy caching server for web clients,
   supporting FTP, gopher, and HTTP data objects. Squid version 3 is a
@@ -23,6 +23,9 @@
   nrpe-external-master:
     interface: nrpe-external-master
     scope: container
+  local-monitors:
+    interface: local-monitors
+    scope: container
 requires:
   website:
     interface: http
 
=== removed file 'revision'
--- revision 2013-02-15 07:04:29 +0000
+++ revision 1970-01-01 00:00:00 +0000
@@ -1,1 +0,0 @@
-0
=== added file 'setup.cfg'
--- setup.cfg 1970-01-01 00:00:00 +0000
+++ setup.cfg 2013-10-29 21:07:21 +0000
@@ -0,0 +1,4 @@
+[nosetests]
+with-coverage=1
+cover-erase=1
+cover-package=hooks
\ No newline at end of file
=== added file 'tarmac_tests.sh'
--- tarmac_tests.sh 1970-01-01 00:00:00 +0000
+++ tarmac_tests.sh 2013-10-29 21:07:21 +0000
@@ -0,0 +1,6 @@
+#!/bin/sh
+# How the tests are run in Jenkins by Tarmac
+
+set -e
+
+make build
=== modified file 'templates/main_config.template'
--- templates/main_config.template 2013-01-11 16:50:54 +0000
+++ templates/main_config.template 2013-10-29 21:07:21 +0000
@@ -1,10 +1,11 @@
-http_port {{ config.port }} accel vport=443
+http_port {{ config.port }} {{ config.port_options }}
 
 acl manager proto cache_object
 acl localhost src 127.0.0.1/32
 acl to_localhost dst 127.0.0.0/8
 acl PURGE method PURGE
 acl CONNECT method CONNECT
+acl SSL_ports port 443
 
 {% if config.snmp_community -%}
 acl snmp_access snmp_community {{ config.snmp_community }}
@@ -19,6 +20,7 @@
 snmp_incoming_address {{ config.my_ip_address }}
 {% endif -%}
 
+via {{ config.via }}
 logformat combined {{ config.log_format }}
 access_log /var/log/squid3/access.log combined
 
@@ -26,27 +28,69 @@
 
 coredump_dir {{ config.cache_dir }}
 maximum_object_size {{ config.max_obj_size_kb }} KB
+{% if config.cache_size_mb > 0 -%}
 cache_dir aufs {{ config.cache_dir }} {{ config.cache_size_mb }} {{ config.cache_l1 }} {{ config.cache_l2 }}
-
 cache_mem {{ config.cache_mem_mb }} MB
+{% else -%}
+cache deny all
+{% endif -%}
+
 
 log_mime_hdrs on
 
 acl accel_ports myport {{ config.port }}
 
 {% for rp in refresh_patterns.keys() -%}
-refresh_pattern {{ rp }} {{ refresh_patterns[rp]['min'] }} {{ refresh_patterns[rp]['percent'] }}% {{ refresh_patterns[rp]['max'] }}
+refresh_pattern {{ rp }} {{ refresh_patterns[rp]['min'] }} {{ refresh_patterns[rp]['percent'] }}% {{ refresh_patterns[rp]['max'] }} {{ ' '.join(refresh_patterns[rp]['options']) }}
 {% endfor -%}
-refresh_pattern . 30 20% 4320
+refresh_pattern . {{default_refresh_pattern.min}} {{default_refresh_pattern.percent}}% {{default_refresh_pattern.max}} {{ ' '.join(default_refresh_pattern.options) }}
 
-{% for relid in relations.keys() -%}
-{% for sitename in relations[relid].sitenames -%}
-acl {{ relations[relid].name }}_acl dstdomain {{ sitename }}
-{% else -%}
-acl {{ relations[relid].name }}_acl myport {{ config.port }}
-{% endfor -%}
-http_access allow accel_ports {{ relations[relid].name }}_acl
-{% endfor -%}
+# known services
+{% if sites -%}
+{% for sitename in sites.keys() -%}
+{% if sitename -%}
+{% set site_acl = "s_%s_acl" % loop.index %}
+acl {{ site_acl }} dstdomain {{ sitename }}
+http_access allow accel_ports {{ site_acl }}
+http_access allow CONNECT SSL_ports {{ site_acl }}
+{% if sitename in only_direct -%}
+always_direct allow {{ site_acl }}
+{% else -%}
+always_direct allow CONNECT SSL_ports {{ site_acl }}
+always_direct deny {{ site_acl }}
+{% endif -%}
+{% if config['x_balancer_name_allowed'] -%}
+{% set balancer_acl = "s_%s_balancer" % loop.index %}
+acl {{ balancer_acl }} req_header X-Balancer-Name {{ sitename.replace('.', '\.') }}
+http_access allow accel_ports {{ balancer_acl }}
+http_access allow CONNECT SSL_ports {{ balancer_acl }}
+{% if sitename in only_direct -%}
+always_direct allow {{ balancer_acl }}
+{% else -%}
+always_direct allow CONNECT SSL_ports {{ balancer_acl }}
+always_direct deny {{ balancer_acl }}
+{% endif -%}
+{% endif -%}
+{% else %} # no sitename
+acl no_sitename_acl myport {{ config.port }}
+http_access allow accel_ports no_sitename_acl
+never_direct allow no_sitename_acl
+{% endif %}
+{% endfor -%}
+{% endif -%}
+
+{% if config.enable_forward_proxy -%}
+# no access restrictions
+{% for relation in forward_relations -%}
+{# acl names are limited to 31 chars (!), so using short "fwd_" prefix #}
+{% set forward_acl = "fwd_%s" % relation['name'] -%}
+acl {{ forward_acl }} src {{ relation['private-address'] }}
+http_access allow {{ forward_acl }}
+http_access allow CONNECT SSL_ports {{ forward_acl }}
+always_direct allow {{ forward_acl }}
+always_direct allow CONNECT SSL_ports {{ forward_acl }}
+{% endfor -%}
+{% endif -%}
 
 http_access allow manager localhost
 http_access deny manager
@@ -57,11 +101,24 @@
 http_access deny accel_ports all
 http_access deny all
 icp_access deny all
+always_direct deny all
 
-{% for relid in relations -%}
-{% if relations[relid].port -%}
-cache_peer {{ relations[relid]['private-address'] }} parent {{ relations[relid].port }} 0 name={{ relations[relid].name }} no-query no-digest originserver round-robin login=PASS
-cache_peer_access {{ relations[relid].name }} allow {{ relations[relid].name }}_acl
-cache_peer_access {{ relations[relid].name }} deny !{{ relations[relid].name }}_acl
-{% endif -%}
-{% endfor -%}
+{% if sites -%}
+{% for sitename in sites.keys() -%}
+{% set sites_loop = loop -%}
+{% for peer in sites[sitename] %}
+{% if sitename -%}
+cache_peer {{ peer.address }} parent {{ peer.port }} 0 name={{ peer.name }} no-query no-digest originserver round-robin login=PASS {{ peer.options }}
+cache_peer_access {{ peer.name }} allow s_{{ sites_loop.index }}_acl
+{% if config['x_balancer_name_allowed'] -%}
+cache_peer_access {{ peer.name }} allow s_{{ sites_loop.index }}_balancer
+{% endif -%}
+cache_peer_access {{ peer.name }} deny all
+{% else %}
+cache_peer {{ peer.address }} parent {{ peer.port }} 0 name={{ peer.name }} no-query no-digest originserver round-robin login=PASS
+cache_peer_access {{ peer.name }} allow no_sitename_acl
+cache_peer_access {{ peer.name }} deny all
+{% endif -%}
+{% endfor -%}
+{% endfor -%}
+{% endif -%}
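As an illustration (not part of the diff): for a single site `foo.internal` with `x_balancer_name_allowed` set and the site not in `only_direct`, the ACL section of the template would render to roughly the following squid.conf fragment (whitespace depends on the template engine):

```
# illustrative rendering for sites = {"foo.internal": [...]}, loop.index == 1
acl s_1_acl dstdomain foo.internal
http_access allow accel_ports s_1_acl
http_access allow CONNECT SSL_ports s_1_acl
always_direct allow CONNECT SSL_ports s_1_acl
always_direct deny s_1_acl
acl s_1_balancer req_header X-Balancer-Name foo\.internal
http_access allow accel_ports s_1_balancer
http_access allow CONNECT SSL_ports s_1_balancer
always_direct allow CONNECT SSL_ports s_1_balancer
always_direct deny s_1_balancer
```

The `s_1_balancer` ACL is what lets an Apache balancer select a cache_peer via the X-Balancer-Name header instead of relying on `dstdomain`.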
