Merge lp:~sidnei/charms/precise/squid-reverseproxy/trunk into lp:charms/squid-reverseproxy

Proposed by Sidnei da Silva
Status: Merged
Merged at revision: 43
Proposed branch: lp:~sidnei/charms/precise/squid-reverseproxy/trunk
Merge into: lp:charms/squid-reverseproxy
Diff against target: 4487 lines (+3137/-946)
27 files modified
.bzrignore (+10/-1)
Makefile (+39/-0)
charm-helpers.yaml (+4/-0)
cm.py (+193/-0)
config-manager.txt (+6/-0)
config.yaml (+43/-6)
files/check_squidpeers (+75/-0)
hooks/charmhelpers/contrib/charmsupport/nrpe.py (+218/-0)
hooks/charmhelpers/contrib/charmsupport/volumes.py (+156/-0)
hooks/charmhelpers/core/hookenv.py (+340/-0)
hooks/charmhelpers/core/host.py (+239/-0)
hooks/charmhelpers/fetch/__init__.py (+209/-0)
hooks/charmhelpers/fetch/archiveurl.py (+48/-0)
hooks/charmhelpers/fetch/bzrurl.py (+44/-0)
hooks/hooks.py (+288/-254)
hooks/install (+9/-0)
hooks/shelltoolbox/__init__.py (+0/-662)
hooks/tests/test_cached_website_hooks.py (+111/-0)
hooks/tests/test_config_changed_hooks.py (+60/-0)
hooks/tests/test_helpers.py (+415/-0)
hooks/tests/test_nrpe_hooks.py (+271/-0)
hooks/tests/test_website_hooks.py (+267/-0)
metadata.yaml (+4/-1)
revision (+0/-1)
setup.cfg (+4/-0)
tarmac_tests.sh (+6/-0)
templates/main_config.template (+78/-21)
To merge this branch: bzr merge lp:~sidnei/charms/precise/squid-reverseproxy/trunk
Reviewer Review Type Date Requested Status
Marco Ceppi (community) Approve
charmers Pending
Review via email: mp+190500@code.launchpad.net

This proposal supersedes a proposal from 2013-08-22.

Description of the change

* Greatly improve test coverage

* Allow the use of an X-Balancer-Name header to select which cache_peer backend will be used for a specific request.

  This is useful when multiple applications sit behind a single hostname and are directed to the same Squid from an Apache service via the balancer relation. Since 'dstdomain' can no longer distinguish the requests, the Apache charm sets this special header in the balancer relation and Squid routes on it instead.
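
  As a rough illustration (not the charm's actual template output; peer names and addresses are made up), the generated squid.conf fragment for this feature would look something like:

```
# Hypothetical fragment: route requests carrying X-Balancer-Name: app1
# to the cache_peer registered for that balancer.
acl balancer_app1 req_header X-Balancer-Name ^app1$
cache_peer 10.0.0.10 parent 80 0 no-query originserver name=app1_be
cache_peer_access app1_be allow balancer_app1
cache_peer_access app1_be deny all
```
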

* Support 'all-services' being set on the relation, in the format the haproxy charm sets it, in addition to the previously supported 'sitenames' setting. This makes the charm compatible with the haproxy charm.
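
  For reference, an 'all-services' relation value in the haproxy style is a YAML list of service entries, along these lines (service names, hosts and ports here are illustrative, not taken from either charm):

```
- {service_name: app1, service_host: 0.0.0.0, service_port: 80}
- {service_name: app2, service_host: 0.0.0.0, service_port: 8080}
```
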

* When the list of supported 'sitenames' (computed from the dstdomain acls) changes, notify services related via the 'cached-website' relation. This makes it possible to add new services to the haproxy service (or any other related service), which notifies the squid service, which in turn propagates the change to services related via 'cached-website'.
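
  The notification flow can be sketched as follows. This is not the hook code from the branch; `relation_ids` and `relation_set` stand in for the charmhelpers calls of the same names, and the helper names are made up:

```python
def sitenames_changed(old_sitenames, new_sitenames):
    """Order-insensitive check of whether the computed sitenames differ."""
    return set(old_sitenames) != set(new_sitenames)


def notify_cached_website(new_sitenames, relation_ids, relation_set):
    """Push the updated sitenames to every 'cached-website' relation,
    so related services (e.g. haproxy) pick up the new list."""
    for rid in relation_ids('cached-website'):
        relation_set(relation_id=rid,
                     sitenames=' '.join(sorted(new_sitenames)))
```
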

To post a comment you must log in.
Revision history for this message
Mark Mims (mark-mims) wrote : Posted in a previous version of this proposal

same as apache... please reserve /tests for charm tests

review: Needs Resubmitting
Revision history for this message
James Page (james-page) wrote : Posted in a previous version of this proposal

Same comments as for haproxy and apache2; charm branch needs to include all source dependencies.

review: Needs Fixing
41. By Matthias Arnason

[sidnei r=tiaz] Expose 'via' config setting, default to on.

Revision history for this message
Marco Ceppi (marcoceppi) wrote :

Hi Sidnei! Thanks for submitting all these improvements and clean-ups to the squid-reverseproxy charm! The addition of unit tests for the hooks is fantastic and really speaks to our effort to bring measurable quality into the charm store.

I've reviewed everything in the code and the changes look fine, and you've addressed the concerns from previous proposals. However, I have concerns about changing the names of a charm's configuration options. I fear this will break existing deployed charms in unknown ways. I've reached out to the mailing list for a consensus on what to do in this case: https://lists.ubuntu.com/archives/juju/2013-November/003127.html I'll return with a verdict from that thread.

review: Needs Information
Revision history for this message
Sidnei da Silva (sidnei) wrote :

Thanks! As per my reply on the list, that specific change reverts the config option to the name it had prior to r31.

The currently published charm, as of r42, is in fact broken: config.yaml defines 'nagios_check_url', but the update_nrpe_checks function refers to 'nagios_check_http_params', causing a KeyError.

$ bzr branch lp:charms/squid-reverseproxy
$ cd squid-reverseproxy
$ bzr revno
42
$ bzr grep nagios_check
config.yaml: nagios_check_url:
hooks/hooks.py: (config_data['nagios_check_http_params']))
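
The failure mode can be sketched as follows (`config_data` here is a plain dict standing in for the charm's config; the key names are the real ones from r42):

```python
# config.yaml in r42 only defines 'nagios_check_url' ...
config_data = {'nagios_check_url': ''}

# ... but update_nrpe_checks looks up 'nagios_check_http_params',
# so the hook dies with a KeyError on that name.
try:
    config_data['nagios_check_http_params']
except KeyError as e:
    failed_key = e.args[0]
```
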

Revision history for this message
Marco Ceppi (marcoceppi) wrote :

Hi Sidnei, thanks for the clarification!

review: Approve

Preview Diff

1=== modified file '.bzrignore'
2--- .bzrignore 2013-02-15 02:21:16 +0000
3+++ .bzrignore 2013-10-29 21:07:21 +0000
4@@ -1,1 +1,10 @@
5-basenode/
6+revision
7+_trial_temp
8+.coverage
9+coverage.xml
10+*.crt
11+*.key
12+lib/*
13+*.pyc
14+exec.d
15+build/charm-helpers
16
17=== added file 'Makefile'
18--- Makefile 1970-01-01 00:00:00 +0000
19+++ Makefile 2013-10-29 21:07:21 +0000
20@@ -0,0 +1,39 @@
21+PWD := $(shell pwd)
22+SOURCEDEPS_DIR ?= $(shell dirname $(PWD))/.sourcecode
23+HOOKS_DIR := $(PWD)/hooks
24+TEST_PREFIX := PYTHONPATH=$(HOOKS_DIR)
25+TEST_DIR := $(PWD)/hooks/tests
26+CHARM_DIR := $(PWD)
27+PYTHON := /usr/bin/env python
28+
29+
30+build: test lint proof
31+
32+revision:
33+ @test -f revision || echo 0 > revision
34+
35+proof: revision
36+ @echo Proofing charm...
37+ @(charm proof $(PWD) || [ $$? -eq 100 ]) && echo OK
38+ @test `cat revision` = 0 && rm revision
39+
40+test:
41+ @echo Starting tests...
42+ @CHARM_DIR=$(CHARM_DIR) $(TEST_PREFIX) nosetests -s $(TEST_DIR)
43+
44+lint:
45+ @echo Checking for Python syntax...
46+ @flake8 $(HOOKS_DIR) --ignore=E123 --exclude=$(HOOKS_DIR)/charmhelpers && echo OK
47+
48+sourcedeps: $(PWD)/config-manager.txt
49+ @echo Updating source dependencies...
50+ @$(PYTHON) cm.py -c $(PWD)/config-manager.txt \
51+ -p $(SOURCEDEPS_DIR) \
52+ -t $(PWD)
53+ @$(PYTHON) build/charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
54+ -c charm-helpers.yaml \
55+ -b build/charm-helpers \
56+ -d hooks/charmhelpers
57+ @echo Do not forget to commit the updated files if any.
58+
59+.PHONY: revision proof test lint sourcedeps charm-payload
60
61=== added directory 'build'
62=== added file 'charm-helpers.yaml'
63--- charm-helpers.yaml 1970-01-01 00:00:00 +0000
64+++ charm-helpers.yaml 2013-10-29 21:07:21 +0000
65@@ -0,0 +1,4 @@
66+include:
67+ - core
68+ - fetch
69+ - contrib.charmsupport
70\ No newline at end of file
71
72=== added file 'cm.py'
73--- cm.py 1970-01-01 00:00:00 +0000
74+++ cm.py 2013-10-29 21:07:21 +0000
75@@ -0,0 +1,193 @@
76+# Copyright 2010-2013 Canonical Ltd. All rights reserved.
77+import os
78+import re
79+import sys
80+import errno
81+import hashlib
82+import subprocess
83+import optparse
84+
85+from os import curdir
86+from bzrlib.branch import Branch
87+from bzrlib.plugin import load_plugins
88+load_plugins()
89+from bzrlib.plugins.launchpad import account as lp_account
90+
91+if 'GlobalConfig' in dir(lp_account):
92+ from bzrlib.config import LocationConfig as LocationConfiguration
93+ _ = LocationConfiguration
94+else:
95+ from bzrlib.config import LocationStack as LocationConfiguration
96+ _ = LocationConfiguration
97+
98+
99+def get_branch_config(config_file):
100+ """
101+ Retrieves the sourcedeps configuration for an source dir.
102+ Returns a dict of (branch, revspec) tuples, keyed by branch name.
103+ """
104+ branches = {}
105+ with open(config_file, 'r') as stream:
106+ for line in stream:
107+ line = line.split('#')[0].strip()
108+ bzr_match = re.match(r'(\S+)\s+'
109+ 'lp:([^;]+)'
110+ '(?:;revno=(\d+))?', line)
111+ if bzr_match:
112+ name, branch, revno = bzr_match.group(1, 2, 3)
113+ if revno is None:
114+ revspec = -1
115+ else:
116+ revspec = revno
117+ branches[name] = (branch, revspec)
118+ continue
119+ dir_match = re.match(r'(\S+)\s+'
120+ '\(directory\)', line)
121+ if dir_match:
122+ name = dir_match.group(1)
123+ branches[name] = None
124+ return branches
125+
126+
127+def main(config_file, parent_dir, target_dir, verbose):
128+ """Do the deed."""
129+
130+ try:
131+ os.makedirs(parent_dir)
132+ except OSError, e:
133+ if e.errno != errno.EEXIST:
134+ raise
135+
136+ branches = sorted(get_branch_config(config_file).items())
137+ for branch_name, spec in branches:
138+ if spec is None:
139+ # It's a directory, just create it and move on.
140+ destination_path = os.path.join(target_dir, branch_name)
141+ if not os.path.isdir(destination_path):
142+ os.makedirs(destination_path)
143+ continue
144+
145+ (quoted_branch_spec, revspec) = spec
146+ revno = int(revspec)
147+
148+ # qualify mirror branch name with hash of remote repo path to deal
149+ # with changes to the remote branch URL over time
150+ branch_spec_digest = hashlib.sha1(quoted_branch_spec).hexdigest()
151+ branch_directory = branch_spec_digest
152+
153+ source_path = os.path.join(parent_dir, branch_directory)
154+ destination_path = os.path.join(target_dir, branch_name)
155+
156+ # Remove leftover symlinks/stray files.
157+ try:
158+ os.remove(destination_path)
159+ except OSError, e:
160+ if e.errno != errno.EISDIR and e.errno != errno.ENOENT:
161+ raise
162+
163+ lp_url = "lp:" + quoted_branch_spec
164+
165+ # Create the local mirror branch if it doesn't already exist
166+ if verbose:
167+ sys.stderr.write('%30s: ' % (branch_name,))
168+ sys.stderr.flush()
169+
170+ fresh = False
171+ if not os.path.exists(source_path):
172+ subprocess.check_call(['bzr', 'branch', '-q', '--no-tree',
173+ '--', lp_url, source_path])
174+ fresh = True
175+
176+ if not fresh:
177+ source_branch = Branch.open(source_path)
178+ if revno == -1:
179+ orig_branch = Branch.open(lp_url)
180+ fresh = source_branch.revno() == orig_branch.revno()
181+ else:
182+ fresh = source_branch.revno() == revno
183+
184+ # Freshen the source branch if required.
185+ if not fresh:
186+ subprocess.check_call(['bzr', 'pull', '-q', '--overwrite', '-r',
187+ str(revno), '-d', source_path,
188+ '--', lp_url])
189+
190+ if os.path.exists(destination_path):
191+ # Overwrite the destination with the appropriate revision.
192+ subprocess.check_call(['bzr', 'clean-tree', '--force', '-q',
193+ '--ignored', '-d', destination_path])
194+ subprocess.check_call(['bzr', 'pull', '-q', '--overwrite',
195+ '-r', str(revno),
196+ '-d', destination_path, '--', source_path])
197+ else:
198+ # Create a new branch.
199+ subprocess.check_call(['bzr', 'branch', '-q', '--hardlink',
200+ '-r', str(revno),
201+ '--', source_path, destination_path])
202+
203+ # Check the state of the destination branch.
204+ destination_branch = Branch.open(destination_path)
205+ destination_revno = destination_branch.revno()
206+
207+ if verbose:
208+ sys.stderr.write('checked out %4s of %s\n' %
209+ ("r" + str(destination_revno), lp_url))
210+ sys.stderr.flush()
211+
212+ if revno != -1 and destination_revno != revno:
213+ raise RuntimeError("Expected revno %d but got revno %d" %
214+ (revno, destination_revno))
215+
216+if __name__ == '__main__':
217+ parser = optparse.OptionParser(
218+ usage="%prog [options]",
219+ description=(
220+ "Add a lightweight checkout in <target> for each "
221+ "corresponding file in <parent>."),
222+ add_help_option=False)
223+ parser.add_option(
224+ '-p', '--parent', dest='parent',
225+ default=None,
226+ help=("The directory of the parent tree."),
227+ metavar="DIR")
228+ parser.add_option(
229+ '-t', '--target', dest='target', default=curdir,
230+ help=("The directory of the target tree."),
231+ metavar="DIR")
232+ parser.add_option(
233+ '-c', '--config', dest='config', default=None,
234+ help=("The config file to be used for config-manager."),
235+ metavar="DIR")
236+ parser.add_option(
237+ '-q', '--quiet', dest='verbose', action='store_false',
238+ help="Be less verbose.")
239+ parser.add_option(
240+ '-v', '--verbose', dest='verbose', action='store_true',
241+ help="Be more verbose.")
242+ parser.add_option(
243+ '-h', '--help', action='help',
244+ help="Show this help message and exit.")
245+ parser.set_defaults(verbose=True)
246+
247+ options, args = parser.parse_args()
248+
249+ if options.parent is None:
250+ options.parent = os.environ.get(
251+ "SOURCEDEPS_DIR",
252+ os.path.join(curdir, ".sourcecode"))
253+
254+ if options.target is None:
255+ parser.error(
256+ "Target directory not specified.")
257+
258+ if options.config is None:
259+ config = [arg for arg in args
260+ if arg != "update"]
261+ if not config or len(config) > 1:
262+ parser.error("Config not specified")
263+ options.config = config[0]
264+
265+ sys.exit(main(config_file=options.config,
266+ parent_dir=options.parent,
267+ target_dir=options.target,
268+ verbose=options.verbose))
269
270=== added file 'config-manager.txt'
271--- config-manager.txt 1970-01-01 00:00:00 +0000
272+++ config-manager.txt 2013-10-29 21:07:21 +0000
273@@ -0,0 +1,6 @@
274+# After making changes to this file, to ensure that your sourcedeps are
275+# up-to-date do:
276+#
277+# make sourcedeps
278+
279+./build/charm-helpers lp:charm-helpers;revno=70
280
281=== modified file 'config.yaml'
282--- config.yaml 2013-01-11 16:49:35 +0000
283+++ config.yaml 2013-10-29 21:07:21 +0000
284@@ -3,10 +3,22 @@
285 type: int
286 default: 3128
287 description: Squid listening port.
288+ port_options:
289+ type: string
290+ default: accel vhost
291+ description: Squid listening port options
292 log_format:
293 type: string
294 default: '%>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh'
295 description: Format of the squid log.
296+ via:
297+ type: string
298+ default: 'on'
299+ description: Add 'Via' header to outgoing requests.
300+ x_balancer_name_allowed:
301+ type: boolean
302+ default: false
303+ description: Route based on X-Balancer-Name header set by Apache charm.
304 cache_mem_mb:
305 type: int
306 default: 256
307@@ -14,7 +26,7 @@
308 cache_size_mb:
309 type: int
310 default: 512
311- description: Maximum size of the on-disk object cache (MB).
312+ description: Maximum size of the on-disk object cache (MB). Set to zero to disable caching.
313 cache_dir:
314 type: string
315 default: '/var/spool/squid3'
316@@ -51,20 +63,45 @@
317 juju-squid-0
318 If you're running multiple environments with the same services in them
319 this allows you to differentiate between them.
320- nagios_check_url:
321+ nagios_check_http_params:
322 default: ""
323 type: string
324 description: >
325- The URL to check squid has access to, most likely inside your web server farm
326+ The parameters to pass to the nrpe plugin check_http.
327 nagios_service_type:
328 default: "generic"
329 type: string
330 description: >
331- What service this component forms part of, e.g. supermassive-squid-cluster. Used by nrpe.
332+ What service this component forms part of, e.g. supermassive-squid-cluster. Used by nrpe.
333+ package_status:
334+ default: "install"
335+ type: "string"
336+ description: >
337+ The status of service-affecting packages will be set to this value in the dpkg database.
338+ Useful valid values are "install" and "hold".
339 refresh_patterns:
340 type: string
341 default: ''
342 description: >
343- JSON-formatted list of refresh patterns. For example:
344- '{"http://www.ubuntu.com": {"min": 0, "percent": 20, "max": 60}, "http://www.canonical.com": {"min": 0, "percent": 20, "max": 120}}'
345+ JSON- or YAML-formatted list of refresh patterns. For example:
346+ '{"http://www.ubuntu.com": {"min": 0, "percent": 20, "max": 60},
347+ "http://www.canonical.com": {"min": 0, "percent": 20, "max": 120}}'
348+ services:
349+ default: ''
350+ type: string
351+ description: |
352+ Services definition(s). Although the variable type is a string, this is
353+ interpreted by the charm as yaml. To use multiple services within the
354+ same instance, specify all of the variables (service_name,
355+ service_host, service_port) with a "-" before the first variable,
356+ service_name, as below.
357
358+ - service_name: example_proxy
359+ service_domain: example.com
360+ servers:
361+ - [foo.internal, 80]
362+ - [bar.internal, 80]
363+ enable_forward_proxy:
364+ default: false
365+ type: boolean
366+ description: Enables forward proxying
367
368=== added file 'files/check_squidpeers'
369--- files/check_squidpeers 1970-01-01 00:00:00 +0000
370+++ files/check_squidpeers 2013-10-29 21:07:21 +0000
371@@ -0,0 +1,75 @@
372+#!/usr/bin/python
373+
374+from operator import itemgetter
375+import re
376+import subprocess
377+import sys
378+
379+
380+parent_re = re.compile(r'^Parent\s+:\s*(?P<name>\S+)')
381+status_re = re.compile(r'^Status\s+:\s*(?P<status>\S+)')
382+
383+STATUS_UP = 'Up'
384+
385+
386+def parse(output):
387+ state = 'header'
388+ peers = []
389+ for line in output.splitlines():
390+ if state == 'header':
391+ if not line.strip():
392+ state = 'header gap'
393+ elif state == 'header gap':
394+ if not line.strip():
395+ state = 'peer'
396+ peers.append(dict())
397+ elif line.strip() == "There are no neighbors installed.":
398+ return peers
399+ else:
400+ raise AssertionError('Expecting blank line after header?: {}'.format(line))
401+ elif state == 'peer':
402+ if not line.strip():
403+ peers.append(dict())
404+ else:
405+ match = parent_re.match(line)
406+ if match:
407+ peers[-1]['name'] = match.group('name')
408+ match = status_re.match(line)
409+ if match:
410+ peers[-1]['up'] = match.group('status') == STATUS_UP
411+ peers[-1]['status'] = match.group('status')
412+ return peers
413+
414+
415+def get_status(peers):
416+ ok = all(map(itemgetter('up'), peers))
417+ if not peers:
418+ retcode = 1
419+ message = 'Squid has no configured peers.'
420+ elif ok:
421+ retcode = 0
422+ message = 'All peers are UP according to squid.'
423+ else:
424+ retcode = 2
425+ peer_info = ["{}: {}".format(p['name'], p['status']) for p in peers
426+ if not p['up']]
427+ message = 'The following peers are not UP according to squid: {}'.format(
428+ ", ".join(peer_info))
429+ return retcode, message
430+
431+
432+def main():
433+ proc = subprocess.Popen(['squidclient', 'mgr:server_list'],
434+ stdout=subprocess.PIPE, stderr=subprocess.PIPE)
435+ stdout, stderr = proc.communicate()
436+ if proc.returncode != 0:
437+ print("Error running squidclient: %s" % stderr)
438+ return 2
439+ peers = parse(stdout)
440+ retcode, message = get_status(peers)
441+ print(message)
442+ return retcode
443+
444+
445+if __name__ == '__main__':
446+ sys.exit(main())
447
448=== added directory 'hooks/charmhelpers'
449=== added file 'hooks/charmhelpers/__init__.py'
450=== added directory 'hooks/charmhelpers/contrib'
451=== added file 'hooks/charmhelpers/contrib/__init__.py'
452=== added directory 'hooks/charmhelpers/contrib/charmsupport'
453=== added file 'hooks/charmhelpers/contrib/charmsupport/__init__.py'
454=== added file 'hooks/charmhelpers/contrib/charmsupport/nrpe.py'
455--- hooks/charmhelpers/contrib/charmsupport/nrpe.py 1970-01-01 00:00:00 +0000
456+++ hooks/charmhelpers/contrib/charmsupport/nrpe.py 2013-10-29 21:07:21 +0000
457@@ -0,0 +1,218 @@
458+"""Compatibility with the nrpe-external-master charm"""
459+# Copyright 2012 Canonical Ltd.
460+#
461+# Authors:
462+# Matthew Wedgwood <matthew.wedgwood@canonical.com>
463+
464+import subprocess
465+import pwd
466+import grp
467+import os
468+import re
469+import shlex
470+import yaml
471+
472+from charmhelpers.core.hookenv import (
473+ config,
474+ local_unit,
475+ log,
476+ relation_ids,
477+ relation_set,
478+)
479+
480+from charmhelpers.core.host import service
481+
482+# This module adds compatibility with the nrpe-external-master and plain nrpe
483+# subordinate charms. To use it in your charm:
484+#
485+# 1. Update metadata.yaml
486+#
487+# provides:
488+# (...)
489+# nrpe-external-master:
490+# interface: nrpe-external-master
491+# scope: container
492+#
493+# and/or
494+#
495+# provides:
496+# (...)
497+# local-monitors:
498+# interface: local-monitors
499+# scope: container
500+
501+#
502+# 2. Add the following to config.yaml
503+#
504+# nagios_context:
505+# default: "juju"
506+# type: string
507+# description: |
508+# Used by the nrpe subordinate charms.
509+# A string that will be prepended to instance name to set the host name
510+# in nagios. So for instance the hostname would be something like:
511+# juju-myservice-0
512+# If you're running multiple environments with the same services in them
513+# this allows you to differentiate between them.
514+#
515+# 3. Add custom checks (Nagios plugins) to files/nrpe-external-master
516+#
517+# 4. Update your hooks.py with something like this:
518+#
519+# from charmsupport.nrpe import NRPE
520+# (...)
521+# def update_nrpe_config():
522+# nrpe_compat = NRPE()
523+# nrpe_compat.add_check(
524+# shortname = "myservice",
525+# description = "Check MyService",
526+# check_cmd = "check_http -w 2 -c 10 http://localhost"
527+# )
528+# nrpe_compat.add_check(
529+# "myservice_other",
530+# "Check for widget failures",
531+# check_cmd = "/srv/myapp/scripts/widget_check"
532+# )
533+# nrpe_compat.write()
534+#
535+# def config_changed():
536+# (...)
537+# update_nrpe_config()
538+#
539+# def nrpe_external_master_relation_changed():
540+# update_nrpe_config()
541+#
542+# def local_monitors_relation_changed():
543+# update_nrpe_config()
544+#
545+# 5. ln -s hooks.py nrpe-external-master-relation-changed
546+# ln -s hooks.py local-monitors-relation-changed
547+
548+
549+class CheckException(Exception):
550+ pass
551+
552+
553+class Check(object):
554+ shortname_re = '[A-Za-z0-9-_]+$'
555+ service_template = ("""
556+#---------------------------------------------------
557+# This file is Juju managed
558+#---------------------------------------------------
559+define service {{
560+ use active-service
561+ host_name {nagios_hostname}
562+ service_description {nagios_hostname}[{shortname}] """
563+ """{description}
564+ check_command check_nrpe!{command}
565+ servicegroups {nagios_servicegroup}
566+}}
567+""")
568+
569+ def __init__(self, shortname, description, check_cmd):
570+ super(Check, self).__init__()
571+ # XXX: could be better to calculate this from the service name
572+ if not re.match(self.shortname_re, shortname):
573+ raise CheckException("shortname must match {}".format(
574+ Check.shortname_re))
575+ self.shortname = shortname
576+ self.command = "check_{}".format(shortname)
577+ # Note: a set of invalid characters is defined by the
578+ # Nagios server config
579+ # The default is: illegal_object_name_chars=`~!$%^&*"|'<>?,()=
580+ self.description = description
581+ self.check_cmd = self._locate_cmd(check_cmd)
582+
583+ def _locate_cmd(self, check_cmd):
584+ search_path = (
585+ '/',
586+ os.path.join(os.environ['CHARM_DIR'],
587+ 'files/nrpe-external-master'),
588+ '/usr/lib/nagios/plugins',
589+ )
590+ parts = shlex.split(check_cmd)
591+ for path in search_path:
592+ if os.path.exists(os.path.join(path, parts[0])):
593+ command = os.path.join(path, parts[0])
594+ if len(parts) > 1:
595+ command += " " + " ".join(parts[1:])
596+ return command
597+ log('Check command not found: {}'.format(parts[0]))
598+ return ''
599+
600+ def write(self, nagios_context, hostname):
601+ nrpe_check_file = '/etc/nagios/nrpe.d/{}.cfg'.format(
602+ self.command)
603+ with open(nrpe_check_file, 'w') as nrpe_check_config:
604+ nrpe_check_config.write("# check {}\n".format(self.shortname))
605+ nrpe_check_config.write("command[{}]={}\n".format(
606+ self.command, self.check_cmd))
607+
608+ if not os.path.exists(NRPE.nagios_exportdir):
609+ log('Not writing service config as {} is not accessible'.format(
610+ NRPE.nagios_exportdir))
611+ else:
612+ self.write_service_config(nagios_context, hostname)
613+
614+ def write_service_config(self, nagios_context, hostname):
615+ for f in os.listdir(NRPE.nagios_exportdir):
616+ if re.search('.*{}.cfg'.format(self.command), f):
617+ os.remove(os.path.join(NRPE.nagios_exportdir, f))
618+
619+ templ_vars = {
620+ 'nagios_hostname': hostname,
621+ 'nagios_servicegroup': nagios_context,
622+ 'description': self.description,
623+ 'shortname': self.shortname,
624+ 'command': self.command,
625+ }
626+ nrpe_service_text = Check.service_template.format(**templ_vars)
627+ nrpe_service_file = '{}/service__{}_{}.cfg'.format(
628+ NRPE.nagios_exportdir, hostname, self.command)
629+ with open(nrpe_service_file, 'w') as nrpe_service_config:
630+ nrpe_service_config.write(str(nrpe_service_text))
631+
632+ def run(self):
633+ subprocess.call(self.check_cmd)
634+
635+
636+class NRPE(object):
637+ nagios_logdir = '/var/log/nagios'
638+ nagios_exportdir = '/var/lib/nagios/export'
639+ nrpe_confdir = '/etc/nagios/nrpe.d'
640+
641+ def __init__(self):
642+ super(NRPE, self).__init__()
643+ self.config = config()
644+ self.nagios_context = self.config['nagios_context']
645+ self.unit_name = local_unit().replace('/', '-')
646+ self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
647+ self.checks = []
648+
649+ def add_check(self, *args, **kwargs):
650+ self.checks.append(Check(*args, **kwargs))
651+
652+ def write(self):
653+ try:
654+ nagios_uid = pwd.getpwnam('nagios').pw_uid
655+ nagios_gid = grp.getgrnam('nagios').gr_gid
656+ except:
657+ log("Nagios user not set up, nrpe checks not updated")
658+ return
659+
660+ if not os.path.exists(NRPE.nagios_logdir):
661+ os.mkdir(NRPE.nagios_logdir)
662+ os.chown(NRPE.nagios_logdir, nagios_uid, nagios_gid)
663+
664+ nrpe_monitors = {}
665+ monitors = {"monitors": {"remote": {"nrpe": nrpe_monitors}}}
666+ for nrpecheck in self.checks:
667+ nrpecheck.write(self.nagios_context, self.hostname)
668+ nrpe_monitors[nrpecheck.shortname] = {
669+ "command": nrpecheck.command,
670+ }
671+
672+ service('restart', 'nagios-nrpe-server')
673+
674+ for rid in relation_ids("local-monitors"):
675+ relation_set(relation_id=rid, monitors=yaml.dump(monitors))
676
677=== added file 'hooks/charmhelpers/contrib/charmsupport/volumes.py'
678--- hooks/charmhelpers/contrib/charmsupport/volumes.py 1970-01-01 00:00:00 +0000
679+++ hooks/charmhelpers/contrib/charmsupport/volumes.py 2013-10-29 21:07:21 +0000
680@@ -0,0 +1,156 @@
681+'''
682+Functions for managing volumes in juju units. One volume is supported per unit.
683+Subordinates may have their own storage, provided it is on its own partition.
684+
685+Configuration stanzas:
686+ volume-ephemeral:
687+ type: boolean
688+ default: true
689+ description: >
690+ If false, a volume is mounted as sepecified in "volume-map"
691+ If true, ephemeral storage will be used, meaning that log data
692+ will only exist as long as the machine. YOU HAVE BEEN WARNED.
693+ volume-map:
694+ type: string
695+ default: {}
696+ description: >
697+ YAML map of units to device names, e.g:
698+ "{ rsyslog/0: /dev/vdb, rsyslog/1: /dev/vdb }"
699+ Service units will raise a configure-error if volume-ephemeral
700+ is 'true' and no volume-map value is set. Use 'juju set' to set a
701+ value and 'juju resolved' to complete configuration.
702+
703+Usage:
704+ from charmsupport.volumes import configure_volume, VolumeConfigurationError
705+ from charmsupport.hookenv import log, ERROR
706+ def post_mount_hook():
707+ stop_service('myservice')
708+ def post_mount_hook():
709+ start_service('myservice')
710+
711+ if __name__ == '__main__':
712+ try:
713+ configure_volume(before_change=pre_mount_hook,
714+ after_change=post_mount_hook)
715+ except VolumeConfigurationError:
716+ log('Storage could not be configured', ERROR)
717+'''
718+
719+# XXX: Known limitations
720+# - fstab is neither consulted nor updated
721+
722+import os
723+from charmhelpers.core import hookenv
724+from charmhelpers.core import host
725+import yaml
726+
727+
728+MOUNT_BASE = '/srv/juju/volumes'
729+
730+
731+class VolumeConfigurationError(Exception):
732+ '''Volume configuration data is missing or invalid'''
733+ pass
734+
735+
736+def get_config():
737+ '''Gather and sanity-check volume configuration data'''
738+ volume_config = {}
739+ config = hookenv.config()
740+
741+ errors = False
742+
743+ if config.get('volume-ephemeral') in (True, 'True', 'true', 'Yes', 'yes'):
744+ volume_config['ephemeral'] = True
745+ else:
746+ volume_config['ephemeral'] = False
747+
748+ try:
749+ volume_map = yaml.safe_load(config.get('volume-map', '{}'))
750+ except yaml.YAMLError as e:
751+ hookenv.log("Error parsing YAML volume-map: {}".format(e),
752+ hookenv.ERROR)
753+ errors = True
754+ if volume_map is None:
755+ # probably an empty string
756+ volume_map = {}
757+ elif not isinstance(volume_map, dict):
758+ hookenv.log("Volume-map should be a dictionary, not {}".format(
759+ type(volume_map)))
760+ errors = True
761+
762+ volume_config['device'] = volume_map.get(os.environ['JUJU_UNIT_NAME'])
763+ if volume_config['device'] and volume_config['ephemeral']:
764+ # asked for ephemeral storage but also defined a volume ID
765+ hookenv.log('A volume is defined for this unit, but ephemeral '
766+ 'storage was requested', hookenv.ERROR)
767+ errors = True
768+ elif not volume_config['device'] and not volume_config['ephemeral']:
769+ # asked for permanent storage but did not define volume ID
770+ hookenv.log('Ephemeral storage was requested, but there is no volume '
771+ 'defined for this unit.', hookenv.ERROR)
772+ errors = True
773+
774+ unit_mount_name = hookenv.local_unit().replace('/', '-')
775+ volume_config['mountpoint'] = os.path.join(MOUNT_BASE, unit_mount_name)
776+
777+ if errors:
778+ return None
779+ return volume_config
780+
781+
782+def mount_volume(config):
783+ if os.path.exists(config['mountpoint']):
784+ if not os.path.isdir(config['mountpoint']):
785+ hookenv.log('Not a directory: {}'.format(config['mountpoint']))
786+ raise VolumeConfigurationError()
787+ else:
788+ host.mkdir(config['mountpoint'])
789+ if os.path.ismount(config['mountpoint']):
790+ unmount_volume(config)
791+ if not host.mount(config['device'], config['mountpoint'], persist=True):
792+ raise VolumeConfigurationError()
793+
794+
795+def unmount_volume(config):
796+ if os.path.ismount(config['mountpoint']):
797+ if not host.umount(config['mountpoint'], persist=True):
798+ raise VolumeConfigurationError()
799+
800+
801+def managed_mounts():
802+ '''List of all mounted managed volumes'''
803+ return filter(lambda mount: mount[0].startswith(MOUNT_BASE), host.mounts())
804+
805+
806+def configure_volume(before_change=lambda: None, after_change=lambda: None):
807+ '''Set up storage (or don't) according to the charm's volume configuration.
808+ Returns the mount point or "ephemeral". before_change and after_change
809+ are optional functions to be called if the volume configuration changes.
810+ '''
811+
812+ config = get_config()
813+ if not config:
814+ hookenv.log('Failed to read volume configuration', hookenv.CRITICAL)
815+ raise VolumeConfigurationError()
816+
817+ if config['ephemeral']:
818+ if os.path.ismount(config['mountpoint']):
819+ before_change()
820+ unmount_volume(config)
821+ after_change()
822+ return 'ephemeral'
823+ else:
824+ # persistent storage
825+ if os.path.ismount(config['mountpoint']):
826+ mounts = dict(managed_mounts())
827+ if mounts.get(config['mountpoint']) != config['device']:
828+ before_change()
829+ unmount_volume(config)
830+ mount_volume(config)
831+ after_change()
832+ else:
833+ before_change()
834+ mount_volume(config)
835+ after_change()
836+ return config['mountpoint']
837
838=== added directory 'hooks/charmhelpers/core'
839=== added file 'hooks/charmhelpers/core/__init__.py'
840=== added file 'hooks/charmhelpers/core/hookenv.py'
841--- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000
842+++ hooks/charmhelpers/core/hookenv.py 2013-10-29 21:07:21 +0000
843@@ -0,0 +1,340 @@
844+"Interactions with the Juju environment"
845+# Copyright 2013 Canonical Ltd.
846+#
847+# Authors:
848+# Charm Helpers Developers <juju@lists.ubuntu.com>
849+
850+import os
851+import json
852+import yaml
853+import subprocess
854+import UserDict
855+
856+CRITICAL = "CRITICAL"
857+ERROR = "ERROR"
858+WARNING = "WARNING"
859+INFO = "INFO"
860+DEBUG = "DEBUG"
861+MARKER = object()
862+
863+cache = {}
864+
865+
866+def cached(func):
867+ ''' Cache return values for multiple executions of func + args
868+
869+ For example:
870+
871+ @cached
872+ def unit_get(attribute):
873+ pass
874+
875+ unit_get('test')
876+
877+ will cache the result of unit_get + 'test' for future calls.
878+ '''
879+ def wrapper(*args, **kwargs):
880+ global cache
881+ key = str((func, args, kwargs))
882+ try:
883+ return cache[key]
884+ except KeyError:
885+ res = func(*args, **kwargs)
886+ cache[key] = res
887+ return res
888+ return wrapper
889+
890+
891+def flush(key):
892+ ''' Flushes any entries from function cache where the
893+ key is found in the function+args '''
894+ flush_list = []
895+ for item in cache:
896+ if key in item:
897+ flush_list.append(item)
898+ for item in flush_list:
899+ del cache[item]
900+
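The `cached`/`flush` pair above memoises hook-tool calls keyed on function plus arguments. A minimal, self-contained sketch of the same pattern (the `lookup` function and its call log are hypothetical stand-ins for a real hook tool such as `unit_get`):

```python
cache = {}


def cached(func):
    """Cache return values keyed on the function plus its arguments."""
    def wrapper(*args, **kwargs):
        key = str((func, args, kwargs))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]
    return wrapper


def flush(key):
    """Drop any cache entries whose key string contains `key`."""
    for item in [k for k in cache if key in k]:
        del cache[item]


calls = []


@cached
def lookup(attribute):
    calls.append(attribute)  # records only real (non-cached) invocations
    return attribute.upper()


first = lookup('test')   # computed
second = lookup('test')  # served from the cache, no new call
flush('lookup')          # evict every entry mentioning this function
third = lookup('test')   # computed again after the flush
```

Because the cache key embeds the function's repr, flushing on the function name (or, as `relation_set` does, on the unit name embedded in the arguments) evicts all matching entries.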
901+
902+def log(message, level=None):
903+ "Write a message to the juju log"
904+ command = ['juju-log']
905+ if level:
906+ command += ['-l', level]
907+ command += [message]
908+ subprocess.call(command)
909+
910+
911+class Serializable(UserDict.IterableUserDict):
912+ "A wrapper object that can be serialized to YAML or JSON"
913+
914+ def __init__(self, obj):
915+ # wrap the object
916+ UserDict.IterableUserDict.__init__(self)
917+ self.data = obj
918+
919+ def __getattr__(self, attr):
920+ # See if this object has attribute.
921+ if attr in ("json", "yaml", "data"):
922+ return self.__dict__[attr]
923+ # Check for attribute in wrapped object.
924+ got = getattr(self.data, attr, MARKER)
925+ if got is not MARKER:
926+ return got
927+ # Proxy to the wrapped object via dict interface.
928+ try:
929+ return self.data[attr]
930+ except KeyError:
931+ raise AttributeError(attr)
932+
933+ def __getstate__(self):
934+ # Pickle as a standard dictionary.
935+ return self.data
936+
937+ def __setstate__(self, state):
938+ # Unpickle into our wrapper.
939+ self.data = state
940+
941+ def json(self):
942+ "Serialize the object to json"
943+ return json.dumps(self.data)
944+
945+ def yaml(self):
946+ "Serialize the object to yaml"
947+ return yaml.dump(self.data)
948+
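The `Serializable` wrapper above targets Python 2's `UserDict` module; an equivalent sketch on Python 3 uses `collections.UserDict`. The attribute-proxying is the behaviour relation wrappers rely on (the relation keys below are illustrative):

```python
import json
from collections import UserDict


class Serializable(UserDict):
    """Dict wrapper whose keys are also reachable as attributes."""

    def __init__(self, obj):
        super().__init__()
        self.data = obj  # wrap the given mapping directly

    def __getattr__(self, attr):
        # Only called when normal attribute lookup fails:
        # fall back to the wrapped mapping.
        try:
            return self.data[attr]
        except KeyError:
            raise AttributeError(attr)

    def json(self):
        """Serialize the wrapped object to JSON."""
        return json.dumps(self.data)


rel = Serializable({'private-address': '10.0.3.1', 'port': 3128})
addr = rel['private-address']  # dict-style access
port = rel.port                # attribute-style proxying
payload = rel.json()
```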
949+
950+def execution_environment():
951+ """A convenient bundling of the current execution context"""
952+ context = {}
953+ context['conf'] = config()
954+ if relation_id():
955+ context['reltype'] = relation_type()
956+ context['relid'] = relation_id()
957+ context['rel'] = relation_get()
958+ context['unit'] = local_unit()
959+ context['rels'] = relations()
960+ context['env'] = os.environ
961+ return context
962+
963+
964+def in_relation_hook():
965+ "Determine whether we're running in a relation hook"
966+ return 'JUJU_RELATION' in os.environ
967+
968+
969+def relation_type():
970+ "The scope for the current relation hook"
971+ return os.environ.get('JUJU_RELATION', None)
972+
973+
974+def relation_id():
975+ "The relation ID for the current relation hook"
976+ return os.environ.get('JUJU_RELATION_ID', None)
977+
978+
979+def local_unit():
980+ "Local unit ID"
981+ return os.environ['JUJU_UNIT_NAME']
982+
983+
984+def remote_unit():
985+ "The remote unit for the current relation hook"
986+ return os.environ['JUJU_REMOTE_UNIT']
987+
988+
989+def service_name():
990+ "The name of the service group this unit belongs to"
991+ return local_unit().split('/')[0]
992+
993+
994+@cached
995+def config(scope=None):
996+ "Juju charm configuration"
997+ config_cmd_line = ['config-get']
998+ if scope is not None:
999+ config_cmd_line.append(scope)
1000+ config_cmd_line.append('--format=json')
1001+ try:
1002+ return json.loads(subprocess.check_output(config_cmd_line))
1003+ except ValueError:
1004+ return None
1005+
1006+
1007+@cached
1008+def relation_get(attribute=None, unit=None, rid=None):
1009+ _args = ['relation-get', '--format=json']
1010+ if rid:
1011+ _args.append('-r')
1012+ _args.append(rid)
1013+ _args.append(attribute or '-')
1014+ if unit:
1015+ _args.append(unit)
1016+ try:
1017+ return json.loads(subprocess.check_output(_args))
1018+ except ValueError:
1019+ return None
1020+
1021+
1022+def relation_set(relation_id=None, relation_settings={}, **kwargs):
1023+ relation_cmd_line = ['relation-set']
1024+ if relation_id is not None:
1025+ relation_cmd_line.extend(('-r', relation_id))
1026+ for k, v in (relation_settings.items() + kwargs.items()):
1027+ if v is None:
1028+ relation_cmd_line.append('{}='.format(k))
1029+ else:
1030+ relation_cmd_line.append('{}={}'.format(k, v))
1031+ subprocess.check_call(relation_cmd_line)
1032+ # Flush cache of any relation-gets for local unit
1033+ flush(local_unit())
1034+
1035+
1036+@cached
1037+def relation_ids(reltype=None):
1038+ "A list of relation_ids"
1039+ reltype = reltype or relation_type()
1040+ relid_cmd_line = ['relation-ids', '--format=json']
1041+ if reltype is not None:
1042+ relid_cmd_line.append(reltype)
1043+ return json.loads(subprocess.check_output(relid_cmd_line)) or []
1045+
1046+
1047+@cached
1048+def related_units(relid=None):
1049+ "A list of related units"
1050+ relid = relid or relation_id()
1051+ units_cmd_line = ['relation-list', '--format=json']
1052+ if relid is not None:
1053+ units_cmd_line.extend(('-r', relid))
1054+ return json.loads(subprocess.check_output(units_cmd_line)) or []
1055+
1056+
1057+@cached
1058+def relation_for_unit(unit=None, rid=None):
1059+ "Get the JSON representation of a unit's relation"
1060+ unit = unit or remote_unit()
1061+ relation = relation_get(unit=unit, rid=rid)
1062+ for key in relation:
1063+ if key.endswith('-list'):
1064+ relation[key] = relation[key].split()
1065+ relation['__unit__'] = unit
1066+ return relation
1067+
1068+
1069+@cached
1070+def relations_for_id(relid=None):
1071+ "Get relations of a specific relation ID"
1072+ relation_data = []
1073+ relid = relid or relation_id()
1074+ for unit in related_units(relid):
1075+ unit_data = relation_for_unit(unit, relid)
1076+ unit_data['__relid__'] = relid
1077+ relation_data.append(unit_data)
1078+ return relation_data
1079+
1080+
1081+@cached
1082+def relations_of_type(reltype=None):
1083+ "Get relations of a specific type"
1084+ relation_data = []
1085+ reltype = reltype or relation_type()
1086+ for relid in relation_ids(reltype):
1087+ for relation in relations_for_id(relid):
1088+ relation['__relid__'] = relid
1089+ relation_data.append(relation)
1090+ return relation_data
1091+
1092+
1093+@cached
1094+def relation_types():
1095+ "Get a list of relation types supported by this charm"
1096+ charmdir = os.environ.get('CHARM_DIR', '')
1097+ mdf = open(os.path.join(charmdir, 'metadata.yaml'))
1098+ md = yaml.safe_load(mdf)
1099+ rel_types = []
1100+ for key in ('provides', 'requires', 'peers'):
1101+ section = md.get(key)
1102+ if section:
1103+ rel_types.extend(section.keys())
1104+ mdf.close()
1105+ return rel_types
1106+
1107+
1108+@cached
1109+def relations():
1110+ rels = {}
1111+ for reltype in relation_types():
1112+ relids = {}
1113+ for relid in relation_ids(reltype):
1114+ units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
1115+ for unit in related_units(relid):
1116+ reldata = relation_get(unit=unit, rid=relid)
1117+ units[unit] = reldata
1118+ relids[relid] = units
1119+ rels[reltype] = relids
1120+ return rels
1121+
1122+
1123+def open_port(port, protocol="TCP"):
1124+ "Open a service network port"
1125+ _args = ['open-port']
1126+ _args.append('{}/{}'.format(port, protocol))
1127+ subprocess.check_call(_args)
1128+
1129+
1130+def close_port(port, protocol="TCP"):
1131+ "Close a service network port"
1132+ _args = ['close-port']
1133+ _args.append('{}/{}'.format(port, protocol))
1134+ subprocess.check_call(_args)
1135+
1136+
1137+@cached
1138+def unit_get(attribute):
1139+ _args = ['unit-get', '--format=json', attribute]
1140+ try:
1141+ return json.loads(subprocess.check_output(_args))
1142+ except ValueError:
1143+ return None
1144+
1145+
1146+def unit_private_ip():
1147+ return unit_get('private-address')
1148+
1149+
1150+class UnregisteredHookError(Exception):
1151+ pass
1152+
1153+
1154+class Hooks(object):
1155+ def __init__(self):
1156+ super(Hooks, self).__init__()
1157+ self._hooks = {}
1158+
1159+ def register(self, name, function):
1160+ self._hooks[name] = function
1161+
1162+ def execute(self, args):
1163+ hook_name = os.path.basename(args[0])
1164+ if hook_name in self._hooks:
1165+ self._hooks[hook_name]()
1166+ else:
1167+ raise UnregisteredHookError(hook_name)
1168+
1169+ def hook(self, *hook_names):
1170+ def wrapper(decorated):
1171+ for hook_name in hook_names:
1172+ self.register(hook_name, decorated)
1173+ else:
1174+ self.register(decorated.__name__, decorated)
1175+ if '_' in decorated.__name__:
1176+ self.register(
1177+ decorated.__name__.replace('_', '-'), decorated)
1178+ return decorated
1179+ return wrapper
1180+
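The `Hooks` registry above dispatches on `argv[0]`, since Juju invokes the charm through symlinked hook files. A self-contained sketch of the dispatch (the hook path is hypothetical, and `execute` here returns the handler's result purely for illustration):

```python
import os


class UnregisteredHookError(Exception):
    pass


class Hooks(object):
    """Map hook script names to Python functions and dispatch on argv[0]."""

    def __init__(self):
        self._hooks = {}

    def register(self, name, function):
        self._hooks[name] = function

    def execute(self, args):
        hook_name = os.path.basename(args[0])
        if hook_name not in self._hooks:
            raise UnregisteredHookError(hook_name)
        return self._hooks[hook_name]()

    def hook(self, *hook_names):
        def wrapper(decorated):
            for name in hook_names:
                self.register(name, decorated)
            # Also register under the function's own name, plus the
            # conventional dash-separated alias used for hook files.
            self.register(decorated.__name__, decorated)
            if '_' in decorated.__name__:
                self.register(decorated.__name__.replace('_', '-'), decorated)
            return decorated
        return wrapper


hooks = Hooks()


@hooks.hook()
def config_changed():
    return 'reconfigured'


result = hooks.execute(['hooks/config-changed'])  # dash alias resolves
caught = False
try:
    hooks.execute(['hooks/upgrade-charm'])        # nothing registered
except UnregisteredHookError:
    caught = True
```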
1181+
1182+def charm_dir():
1183+ return os.environ.get('CHARM_DIR')
1184
1185=== added file 'hooks/charmhelpers/core/host.py'
1186--- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000
1187+++ hooks/charmhelpers/core/host.py 2013-10-29 21:07:21 +0000
1188@@ -0,0 +1,239 @@
1189+"""Tools for working with the host system"""
1190+# Copyright 2012 Canonical Ltd.
1191+#
1192+# Authors:
1193+# Nick Moffitt <nick.moffitt@canonical.com>
1194+# Matthew Wedgwood <matthew.wedgwood@canonical.com>
1195+
1196+import os
1197+import pwd
1198+import grp
1199+import random
1200+import string
1201+import subprocess
1202+import hashlib
1203+
1204+from collections import OrderedDict
1205+
1206+from hookenv import log
1207+
1208+
1209+def service_start(service_name):
1210+ service('start', service_name)
1211+
1212+
1213+def service_stop(service_name):
1214+ service('stop', service_name)
1215+
1216+
1217+def service_restart(service_name):
1218+ service('restart', service_name)
1219+
1220+
1221+def service_reload(service_name, restart_on_failure=False):
1222+ if not service('reload', service_name) and restart_on_failure:
1223+ service('restart', service_name)
1224+
1225+
1226+def service(action, service_name):
1227+ cmd = ['service', service_name, action]
1228+ return subprocess.call(cmd) == 0
1229+
1230+
1231+def service_running(service):
1232+ try:
1233+ output = subprocess.check_output(['service', service, 'status'])
1234+ except subprocess.CalledProcessError:
1235+ return False
1236+ else:
1237+ if ("start/running" in output or "is running" in output):
1238+ return True
1239+ else:
1240+ return False
1241+
1242+
1243+def adduser(username, password=None, shell='/bin/bash', system_user=False):
1244+ """Add a user"""
1245+ try:
1246+ user_info = pwd.getpwnam(username)
1247+ log('user {0} already exists!'.format(username))
1248+ except KeyError:
1249+ log('creating user {0}'.format(username))
1250+ cmd = ['useradd']
1251+ if system_user or password is None:
1252+ cmd.append('--system')
1253+ else:
1254+ cmd.extend([
1255+ '--create-home',
1256+ '--shell', shell,
1257+ '--password', password,
1258+ ])
1259+ cmd.append(username)
1260+ subprocess.check_call(cmd)
1261+ user_info = pwd.getpwnam(username)
1262+ return user_info
1263+
1264+
1265+def add_user_to_group(username, group):
1266+ """Add a user to a group"""
1267+ cmd = [
1268+ 'gpasswd', '-a',
1269+ username,
1270+ group
1271+ ]
1272+ log("Adding user {} to group {}".format(username, group))
1273+ subprocess.check_call(cmd)
1274+
1275+
1276+def rsync(from_path, to_path, flags='-r', options=None):
1277+ """Replicate the contents of a path"""
1278+ options = options or ['--delete', '--executability']
1279+ cmd = ['/usr/bin/rsync', flags]
1280+ cmd.extend(options)
1281+ cmd.append(from_path)
1282+ cmd.append(to_path)
1283+ log(" ".join(cmd))
1284+ return subprocess.check_output(cmd).strip()
1285+
1286+
1287+def symlink(source, destination):
1288+ """Create a symbolic link"""
1289+ log("Symlinking {} as {}".format(source, destination))
1290+ cmd = [
1291+ 'ln',
1292+ '-sf',
1293+ source,
1294+ destination,
1295+ ]
1296+ subprocess.check_call(cmd)
1297+
1298+
1299+def mkdir(path, owner='root', group='root', perms=0555, force=False):
1300+ """Create a directory"""
1301+ log("Making dir {} {}:{} {:o}".format(path, owner, group,
1302+ perms))
1303+ uid = pwd.getpwnam(owner).pw_uid
1304+ gid = grp.getgrnam(group).gr_gid
1305+ realpath = os.path.abspath(path)
1306+ if os.path.exists(realpath):
1307+ if force and not os.path.isdir(realpath):
1308+ log("Removing non-directory file {} prior to mkdir()".format(path))
1309+ os.unlink(realpath)
1310+ os.makedirs(realpath, perms)
1311+ else:
1312+ os.makedirs(realpath, perms)
1312+ os.chown(realpath, uid, gid)
1313+
1314+
1315+def write_file(path, content, owner='root', group='root', perms=0444):
1316+ """Create or overwrite a file with the contents of a string"""
1317+ log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
1318+ uid = pwd.getpwnam(owner).pw_uid
1319+ gid = grp.getgrnam(group).gr_gid
1320+ with open(path, 'w') as target:
1321+ os.fchown(target.fileno(), uid, gid)
1322+ os.fchmod(target.fileno(), perms)
1323+ target.write(content)
1324+
1325+
1326+def mount(device, mountpoint, options=None, persist=False):
1327+ '''Mount a filesystem'''
1328+ cmd_args = ['mount']
1329+ if options is not None:
1330+ cmd_args.extend(['-o', options])
1331+ cmd_args.extend([device, mountpoint])
1332+ try:
1333+ subprocess.check_output(cmd_args)
1334+ except subprocess.CalledProcessError, e:
1335+ log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
1336+ return False
1337+ if persist:
1338+ # TODO: update fstab
1339+ pass
1340+ return True
1341+
1342+
1343+def umount(mountpoint, persist=False):
1344+ '''Unmount a filesystem'''
1345+ cmd_args = ['umount', mountpoint]
1346+ try:
1347+ subprocess.check_output(cmd_args)
1348+ except subprocess.CalledProcessError, e:
1349+ log('Error unmounting {}\n{}'.format(mountpoint, e.output))
1350+ return False
1351+ if persist:
1352+ # TODO: update fstab
1353+ pass
1354+ return True
1355+
1356+
1357+def mounts():
1358+ '''List of all mounted volumes as [[mountpoint,device],[...]]'''
1359+ with open('/proc/mounts') as f:
1360+ # [['/mount/point','/dev/path'],[...]]
1361+ system_mounts = [m[1::-1] for m in [l.strip().split()
1362+ for l in f.readlines()]]
1363+ return system_mounts
1364+
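The `m[1::-1]` slice in `mounts()` takes elements 1 then 0 of each whitespace-split `/proc/mounts` line, i.e. it reverses the first two fields into `[mountpoint, device]` pairs. A sketch against canned input (the sample mount lines are illustrative):

```python
PROC_MOUNTS = """\
/dev/sda1 / ext4 rw,relatime 0 0
/dev/xvdf /srv/squid ext4 rw,noatime 0 0
"""


def parse_mounts(text):
    """Return [[mountpoint, device], ...], as host.mounts() does."""
    return [line.split()[1::-1] for line in text.strip().splitlines()]


mounts = parse_mounts(PROC_MOUNTS)
```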
1365+
1366+def file_hash(path):
1367+ ''' Generate a md5 hash of the contents of 'path' or None if not found '''
1368+ if os.path.exists(path):
1369+ h = hashlib.md5()
1370+ with open(path, 'r') as source:
1371+ h.update(source.read()) # IGNORE:E1101 - it does have update
1372+ return h.hexdigest()
1373+ else:
1374+ return None
1375+
1376+
1377+def restart_on_change(restart_map):
1378+ ''' Restart services based on configuration files changing
1379+
1380+ This function is used as a decorator, for example
1381+
1382+ @restart_on_change({
1383+ '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
1384+ })
1385+ def ceph_client_changed():
1386+ ...
1387+
1388+ In this example, the cinder-api and cinder-volume services
1389+ would be restarted if /etc/ceph/ceph.conf is changed by the
1390+ ceph_client_changed function.
1391+ '''
1392+ def wrap(f):
1393+ def wrapped_f(*args):
1394+ checksums = {}
1395+ for path in restart_map:
1396+ checksums[path] = file_hash(path)
1397+ f(*args)
1398+ restarts = []
1399+ for path in restart_map:
1400+ if checksums[path] != file_hash(path):
1401+ restarts += restart_map[path]
1402+ for service_name in list(OrderedDict.fromkeys(restarts)):
1403+ service('restart', service_name)
1404+ return wrapped_f
1405+ return wrap
1406+
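The checksum-before/checksum-after pattern of `restart_on_change` can be exercised without touching real services. A sketch that records would-be restarts in a list instead of calling `service()`, against a temporary file standing in for a squid config (the service name is illustrative):

```python
import hashlib
import os
import tempfile


def file_hash(path):
    """md5 of a file's contents, or None if it does not exist."""
    if not os.path.exists(path):
        return None
    with open(path, 'rb') as source:
        return hashlib.md5(source.read()).hexdigest()


restarted = []


def restart_on_change(restart_map):
    """Re-run mapped services when a watched file's checksum changes."""
    def wrap(f):
        def wrapped_f(*args):
            before = {path: file_hash(path) for path in restart_map}
            f(*args)
            for path, services in restart_map.items():
                if before[path] != file_hash(path):
                    restarted.extend(services)  # stand-in for service('restart', ...)
        return wrapped_f
    return wrap


conf = tempfile.NamedTemporaryFile(delete=False, suffix='.conf')
conf.write(b'http_port 3128\n')
conf.close()


@restart_on_change({conf.name: ['squid3']})
def config_changed():
    with open(conf.name, 'w') as fh:
        fh.write('http_port 3129\n')  # checksum changes -> restart queued


config_changed()
os.unlink(conf.name)
```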
1407+
1408+def lsb_release():
1409+ '''Return /etc/lsb-release in a dict'''
1410+ d = {}
1411+ with open('/etc/lsb-release', 'r') as lsb:
1412+ for l in lsb:
1413+ k, v = l.split('=')
1414+ d[k.strip()] = v.strip()
1415+ return d
1416+
1417+
1418+def pwgen(length=None):
1419+ '''Generate a random password.'''
1420+ if length is None:
1421+ length = random.choice(range(35, 45))
1422+ alphanumeric_chars = [
1423+ l for l in (string.letters + string.digits)
1424+ if l not in 'l0QD1vAEIOUaeiou']
1425+ random_chars = [
1426+ random.choice(alphanumeric_chars) for _ in range(length)]
1427+ return(''.join(random_chars))
1428
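`pwgen` above is Python 2 (`string.letters`); a Python 3 sketch of the same idea, excluding the same look-alike characters and vowels:

```python
import random
import string

# Characters pwgen excludes: look-alikes (l, 0/O, 1, ...) and vowels.
AMBIGUOUS = set('l0QD1vAEIOUaeiou')


def pwgen(length=None):
    """Generate a random alphanumeric password without ambiguous glyphs."""
    if length is None:
        length = random.choice(range(35, 45))
    chars = [c for c in string.ascii_letters + string.digits
             if c not in AMBIGUOUS]
    return ''.join(random.choice(chars) for _ in range(length))


pw = pwgen(20)
```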
1429=== added directory 'hooks/charmhelpers/fetch'
1430=== added file 'hooks/charmhelpers/fetch/__init__.py'
1431--- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000
1432+++ hooks/charmhelpers/fetch/__init__.py 2013-10-29 21:07:21 +0000
1433@@ -0,0 +1,209 @@
1434+import importlib
1435+from yaml import safe_load
1436+from charmhelpers.core.host import (
1437+ lsb_release
1438+)
1439+from urlparse import (
1440+ urlparse,
1441+ urlunparse,
1442+)
1443+import subprocess
1444+from charmhelpers.core.hookenv import (
1445+ config,
1446+ log,
1447+)
1448+import apt_pkg
1449+
1450+CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
1451+deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
1452+"""
1453+PROPOSED_POCKET = """# Proposed
1454+deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
1455+"""
1456+
1457+
1458+def filter_installed_packages(packages):
1459+ """Returns a list of packages that require installation"""
1460+ apt_pkg.init()
1461+ cache = apt_pkg.Cache()
1462+ _pkgs = []
1463+ for package in packages:
1464+ try:
1465+ p = cache[package]
1466+ p.current_ver or _pkgs.append(package)
1467+ except KeyError:
1468+ log('Package {} has no installation candidate.'.format(package),
1469+ level='WARNING')
1470+ _pkgs.append(package)
1471+ return _pkgs
1472+
1473+
1474+def apt_install(packages, options=None, fatal=False):
1475+ """Install one or more packages"""
1476+ options = options or []
1477+ cmd = ['apt-get', '-y']
1478+ cmd.extend(options)
1479+ cmd.append('install')
1480+ if isinstance(packages, basestring):
1481+ cmd.append(packages)
1482+ else:
1483+ cmd.extend(packages)
1484+ log("Installing {} with options: {}".format(packages,
1485+ options))
1486+ if fatal:
1487+ subprocess.check_call(cmd)
1488+ else:
1489+ subprocess.call(cmd)
1490+
1491+
1492+def apt_update(fatal=False):
1493+ """Update local apt cache"""
1494+ cmd = ['apt-get', 'update']
1495+ if fatal:
1496+ subprocess.check_call(cmd)
1497+ else:
1498+ subprocess.call(cmd)
1499+
1500+
1501+def apt_purge(packages, fatal=False):
1502+ """Purge one or more packages"""
1503+ cmd = ['apt-get', '-y', 'purge']
1504+ if isinstance(packages, basestring):
1505+ cmd.append(packages)
1506+ else:
1507+ cmd.extend(packages)
1508+ log("Purging {}".format(packages))
1509+ if fatal:
1510+ subprocess.check_call(cmd)
1511+ else:
1512+ subprocess.call(cmd)
1513+
1514+
1515+def add_source(source, key=None):
1516+ if ((source.startswith('ppa:') or
1517+ source.startswith('http:'))):
1518+ subprocess.check_call(['add-apt-repository', '--yes', source])
1519+ elif source.startswith('cloud:'):
1520+ apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
1521+ fatal=True)
1522+ pocket = source.split(':')[-1]
1523+ with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
1524+ apt.write(CLOUD_ARCHIVE.format(pocket))
1525+ elif source == 'proposed':
1526+ release = lsb_release()['DISTRIB_CODENAME']
1527+ with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
1528+ apt.write(PROPOSED_POCKET.format(release))
1529+ if key:
1530+ subprocess.check_call(['apt-key', 'import', key])
1531+
1532+
1533+class SourceConfigError(Exception):
1534+ pass
1535+
1536+
1537+def configure_sources(update=False,
1538+ sources_var='install_sources',
1539+ keys_var='install_keys'):
1540+ """
1541+ Configure multiple sources from charm configuration
1542+
1543+ Example config:
1544+ install_sources:
1545+ - "ppa:foo"
1546+ - "http://example.com/repo precise main"
1547+ install_keys:
1548+ - null
1549+ - "a1b2c3d4"
1550+
1551+ Note that 'null' (a.k.a. None) should not be quoted.
1552+ """
1553+ sources = safe_load(config(sources_var))
1554+ keys = safe_load(config(keys_var))
1555+ if isinstance(sources, basestring) and isinstance(keys, basestring):
1556+ add_source(sources, keys)
1557+ else:
1558+ if not len(sources) == len(keys):
1559+ msg = 'Install sources and keys lists are different lengths'
1560+ raise SourceConfigError(msg)
1561+ for src_num in range(len(sources)):
1562+ add_source(sources[src_num], keys[src_num])
1563+ if update:
1564+ apt_update(fatal=True)
1565+
1566+# The order of this list is very important. Handlers should be listed in from
1567+# least- to most-specific URL matching.
1568+FETCH_HANDLERS = (
1569+ 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
1570+ 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
1571+)
1572+
1573+
1574+class UnhandledSource(Exception):
1575+ pass
1576+
1577+
1578+def install_remote(source):
1579+ """
1580+ Install a file tree from a remote source
1581+
1582+ The specified source should be a url of the form:
1583+ scheme://[host]/path[#[option=value][&...]]
1584+
1585+ Schemes supported are based on this modules submodules
1586+ Options supported are submodule-specific"""
1587+ # We ONLY check for True here because can_handle may return a string
1588+ # explaining why it can't handle a given source.
1589+ handlers = [h for h in plugins() if h.can_handle(source) is True]
1590+ installed_to = None
1591+ for handler in handlers:
1592+ try:
1593+ installed_to = handler.install(source)
1594+ except UnhandledSource:
1595+ pass
1596+ if not installed_to:
1597+ raise UnhandledSource("No handler found for source {}".format(source))
1598+ return installed_to
1599+
1600+
1601+def install_from_config(config_var_name):
1602+ charm_config = config()
1603+ source = charm_config[config_var_name]
1604+ return install_remote(source)
1605+
1606+
1607+class BaseFetchHandler(object):
1608+ """Base class for FetchHandler implementations in fetch plugins"""
1609+ def can_handle(self, source):
1610+ """Returns True if the source can be handled. Otherwise returns
1611+ a string explaining why it cannot"""
1612+ return "Wrong source type"
1613+
1614+ def install(self, source):
1615+ """Try to download and unpack the source. Return the path to the
1616+ unpacked files or raise UnhandledSource."""
1617+ raise UnhandledSource("Wrong source type {}".format(source))
1618+
1619+ def parse_url(self, url):
1620+ return urlparse(url)
1621+
1622+ def base_url(self, url):
1623+ """Return url without querystring or fragment"""
1624+ parts = list(self.parse_url(url))
1625+ parts[4:] = ['' for i in parts[4:]]
1626+ return urlunparse(parts)
1627+
1628+
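`base_url` blanks elements 4 and 5 of the parse result (query and fragment) while keeping scheme, host, path, and params. A Python 3 sketch (the diff itself targets Python 2's `urlparse` module; the example URL is hypothetical):

```python
from urllib.parse import urlparse, urlunparse


def base_url(url):
    """Return the URL without its query string or fragment."""
    parts = list(urlparse(url))
    parts[4:] = ['' for _ in parts[4:]]  # blank out query and fragment
    return urlunparse(parts)


clean = base_url('http://example.com/charm.tar.gz?rev=3#sha1=abc')
```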
1629+def plugins(fetch_handlers=None):
1630+ if not fetch_handlers:
1631+ fetch_handlers = FETCH_HANDLERS
1632+ plugin_list = []
1633+ for handler_name in fetch_handlers:
1634+ package, classname = handler_name.rsplit('.', 1)
1635+ try:
1636+ handler_class = getattr(importlib.import_module(package), classname)
1637+ plugin_list.append(handler_class())
1638+ except (ImportError, AttributeError):
1639+ # Skip missing plugins so that they can be omitted from
1640+ # installation if desired
1641+ log("FetchHandler {} not found, skipping plugin".format(handler_name))
1642+ return plugin_list
1643
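The `plugins()` loader resolves each dotted handler path with `rsplit('.', 1)` plus `importlib`, and silently skips handlers whose module is absent. A self-contained sketch (the second handler path is a deliberately nonexistent placeholder):

```python
import importlib


def load_handlers(handler_names):
    """Instantiate each dotted-path class, skipping ones that won't import."""
    loaded = []
    for handler_name in handler_names:
        package, classname = handler_name.rsplit('.', 1)
        try:
            handler_class = getattr(importlib.import_module(package), classname)
            loaded.append(handler_class())
        except (ImportError, AttributeError):
            pass  # missing plugin: skip rather than fail the hook
    return loaded


handlers = load_handlers([
    'json.JSONDecoder',             # resolvable stdlib class
    'nonexistent_module.Handler',   # hypothetical, silently skipped
])
```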
1644=== added file 'hooks/charmhelpers/fetch/archiveurl.py'
1645--- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000
1646+++ hooks/charmhelpers/fetch/archiveurl.py 2013-10-29 21:07:21 +0000
1647@@ -0,0 +1,48 @@
1648+import os
1649+import urllib2
1650+from charmhelpers.fetch import (
1651+ BaseFetchHandler,
1652+ UnhandledSource
1653+)
1654+from charmhelpers.payload.archive import (
1655+ get_archive_handler,
1656+ extract,
1657+)
1658+from charmhelpers.core.host import mkdir
1659+
1660+
1661+class ArchiveUrlFetchHandler(BaseFetchHandler):
1662+ """Handler for archives via generic URLs"""
1663+ def can_handle(self, source):
1664+ url_parts = self.parse_url(source)
1665+ if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
1666+ return "Wrong source type"
1667+ if get_archive_handler(self.base_url(source)):
1668+ return True
1669+ return False
1670+
1671+ def download(self, source, dest):
1672+ # propagate all exceptions
1673+ # URLError, OSError, etc
1674+ response = urllib2.urlopen(source)
1675+ try:
1676+ with open(dest, 'w') as dest_file:
1677+ dest_file.write(response.read())
1678+ except Exception as e:
1679+ if os.path.isfile(dest):
1680+ os.unlink(dest)
1681+ raise e
1682+
1683+ def install(self, source):
1684+ url_parts = self.parse_url(source)
1685+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
1686+ if not os.path.exists(dest_dir):
1687+ mkdir(dest_dir, perms=0755)
1688+ dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
1689+ try:
1690+ self.download(source, dld_file)
1691+ except urllib2.URLError as e:
1692+ raise UnhandledSource(e.reason)
1693+ except OSError as e:
1694+ raise UnhandledSource(e.strerror)
1695+ return extract(dld_file)
1696
1697=== added file 'hooks/charmhelpers/fetch/bzrurl.py'
1698--- hooks/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000
1699+++ hooks/charmhelpers/fetch/bzrurl.py 2013-10-29 21:07:21 +0000
1700@@ -0,0 +1,44 @@
1701+import os
1702+from bzrlib.branch import Branch
1703+from charmhelpers.fetch import (
1704+ BaseFetchHandler,
1705+ UnhandledSource
1706+)
1707+from charmhelpers.core.host import mkdir
1708+
1709+
1710+class BzrUrlFetchHandler(BaseFetchHandler):
1711+ """Handler for bazaar branches via generic and lp URLs"""
1712+ def can_handle(self, source):
1713+ url_parts = self.parse_url(source)
1714+ if url_parts.scheme not in ('bzr+ssh', 'lp'):
1715+ return False
1716+ else:
1717+ return True
1718+
1719+ def branch(self, source, dest):
1720+ url_parts = self.parse_url(source)
1721+ # If we use lp:branchname scheme we need to load plugins
1722+ if not self.can_handle(source):
1723+ raise UnhandledSource("Cannot handle {}".format(source))
1724+ if url_parts.scheme == "lp":
1725+ from bzrlib.plugin import load_plugins
1726+ load_plugins()
1727+ try:
1728+ remote_branch = Branch.open(source)
1729+ remote_branch.bzrdir.sprout(dest).open_branch()
1730+ except Exception as e:
1731+ raise e
1732+
1733+ def install(self, source):
1734+ url_parts = self.parse_url(source)
1735+ branch_name = url_parts.path.strip("/").split("/")[-1]
1736+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name)
1737+ if not os.path.exists(dest_dir):
1738+ mkdir(dest_dir, perms=0755)
1739+ try:
1740+ self.branch(source, dest_dir)
1741+ except OSError as e:
1742+ raise UnhandledSource(e.strerror)
1743+ return dest_dir
1744+
1745
1746=== modified file 'hooks/hooks.py'
1747--- hooks/hooks.py 2013-01-11 17:00:40 +0000
1748+++ hooks/hooks.py 2013-10-29 21:07:21 +0000
1749@@ -1,17 +1,30 @@
1750 #!/usr/bin/env python
1751
1752-import glob
1753-import grp
1754-import json
1755 import math
1756 import os
1757-import pwd
1758 import random
1759 import re
1760 import shutil
1761 import string
1762 import subprocess
1763 import sys
1764+import yaml
1765+import collections
1766+import socket
1767+
1768+from charmhelpers.core.hookenv import (
1769+ config as config_get,
1770+ log,
1771+ relation_set,
1772+ relation_get,
1773+ relation_ids as get_relation_ids,
1774+ relations_of_type,
1775+ related_units,
1776+ open_port,
1777+ close_port,
1778+)
1779+from charmhelpers.fetch import apt_install
1780+from charmhelpers.contrib.charmsupport.nrpe import NRPE
1781
1782 ###############################################################################
1783 # Global variables
1784@@ -19,146 +32,112 @@
1785 default_squid3_config_dir = "/etc/squid3"
1786 default_squid3_config = "%s/squid.conf" % default_squid3_config_dir
1787 default_squid3_config_cache_dir = "/var/run/squid3"
1788-hook_name = os.path.basename(sys.argv[0])
1789+default_nagios_plugin_dir = "/usr/lib/nagios/plugins"
1790+Server = collections.namedtuple("Server", "name address port options")
1791+service_affecting_packages = ['squid3']
1792
1793 ###############################################################################
1794 # Supporting functions
1795 ###############################################################################
1796
1797
1798-#------------------------------------------------------------------------------
1799-# config_get: Returns a dictionary containing all of the config information
1800-# Optional parameter: scope
1801-# scope: limits the scope of the returned configuration to the
1802-# desired config item.
1803-#------------------------------------------------------------------------------
1804-def config_get(scope=None):
1805- try:
1806- config_cmd_line = ['config-get']
1807- if scope is not None:
1808- config_cmd_line.append(scope)
1809- config_cmd_line.append('--format=json')
1810- config_data = json.loads(subprocess.check_output(config_cmd_line))
1811- except Exception, e:
1812- subprocess.call(['juju-log', str(e)])
1813- config_data = None
1814- finally:
1815- return(config_data)
1816-
1817-
1818-#------------------------------------------------------------------------------
1819-# relation_json: Returns json-formatted relation data
1820-# Optional parameters: scope, relation_id
1821-# scope: limits the scope of the returned data to the
1822-# desired item.
1823-# unit_name: limits the data ( and optionally the scope )
1824-# to the specified unit
1825-# relation_id: specify relation id for out of context usage.
1826-#------------------------------------------------------------------------------
1827-def relation_json(scope=None, unit_name=None, relation_id=None):
1828- try:
1829- relation_cmd_line = ['relation-get', '--format=json']
1830- if relation_id is not None:
1831- relation_cmd_line.extend(('-r', relation_id))
1832- if scope is not None:
1833- relation_cmd_line.append(scope)
1834- else:
1835- relation_cmd_line.append('-')
1836- relation_cmd_line.append(unit_name)
1837- relation_data = subprocess.check_output(relation_cmd_line)
1838- except Exception, e:
1839- subprocess.call(['juju-log', str(e)])
1840- relation_data = None
1841- finally:
1842- return(relation_data)
1843-
1844-
1845-#------------------------------------------------------------------------------
1846-# relation_get: Returns a dictionary containing the relation information
1847-# Optional parameters: scope, relation_id
1848-# scope: limits the scope of the returned data to the
1849-# desired item.
1850-# unit_name: limits the data ( and optionally the scope )
1851-# to the specified unit
1852-# relation_id: specify relation id for out of context usage.
1853-#------------------------------------------------------------------------------
1854-def relation_get(scope=None, unit_name=None, relation_id=None):
1855- try:
1856- relation_data = json.loads(relation_json())
1857- except Exception, e:
1858- subprocess.call(['juju-log', str(e)])
1859- relation_data = None
1860- finally:
1861- return(relation_data)
1862-
1863-
1864-#------------------------------------------------------------------------------
1865-# relation_ids: Returns a list of relation ids
1866-# optional parameters: relation_type
1867-# relation_type: return relations only of this type
1868-#------------------------------------------------------------------------------
1869-def relation_ids(relation_types=['website']):
1870- # accept strings or iterators
1871- if isinstance(relation_types, basestring):
1872- reltypes = [relation_types]
1873- else:
1874- reltypes = relation_types
1875- relids = []
1876- for reltype in reltypes:
1877- relid_cmd_line = ['relation-ids', '--format=json', reltype]
1878- relids.extend(json.loads(subprocess.check_output(relid_cmd_line)))
1879- return relids
1880-
1881-
1882-#------------------------------------------------------------------------------
1883-# relation_get_all: Returns a dictionary containing the relation information
1884-# optional parameters: relation_type
1885-# relation_type: limits the scope of the returned data to the
1886-# desired item.
1887-#------------------------------------------------------------------------------
1888-def relation_get_all():
1889- reldata = {}
1890- try:
1891- relids = relation_ids()
1892- for relid in relids:
1893- units_cmd_line = ['relation-list', '--format=json', '-r', relid]
1894- units = json.loads(subprocess.check_output(units_cmd_line))
1895- for unit in units:
1896- reldata[unit] = \
1897- json.loads(relation_json(
1898- relation_id=relid, unit_name=unit))
1899- if 'sitenames' in reldata[unit]:
1900- reldata[unit]['sitenames'] = \
1901- reldata[unit]['sitenames'].split()
1902- reldata[unit]['relation-id'] = relid
1903- reldata[unit]['name'] = unit.replace("/", "_")
1904- except Exception, e:
1905- subprocess.call(['juju-log', str(e)])
1906- reldata = []
1907- finally:
1908- return(reldata)
1909-
1910-
1911-#------------------------------------------------------------------------------
1912-# apt_get_install( packages ): Installs a package
1913-#------------------------------------------------------------------------------
1914-def apt_get_install(packages=None):
1915- if packages is None:
1916- return(False)
1917- cmd_line = ['apt-get', '-y', 'install', '-qq']
1918- try:
1919- cmd_line.extend(packages.split())
1920- except AttributeError:
1921- cmd_line.extend(packages)
1922- return(subprocess.call(cmd_line))
1923-
1924-
1925-#------------------------------------------------------------------------------
1926-# enable_squid3: Enable squid3 at boot time
1927-#------------------------------------------------------------------------------
1928-def enable_squid3():
1929- # squid is enabled at boot time
1930- return True
1931+def get_id(sitename, relation_id, unit_name):
1932+ unit_name = unit_name.replace('/', '_').replace('.', '_')
1933+ relation_id = relation_id.replace(':', '_').replace('.', '_')
1934+ if sitename is None:
1935+ return relation_id + '__' + unit_name
1936+ return sitename.replace('.', '_') + '__' + relation_id + '__' + unit_name
1937+
1938+
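The new `get_id` helper flattens a site/relation/unit triple into a single identifier that is safe to use as a squid `cache_peer` name. A standalone restatement showing the expected sanitization (the values are hypothetical, not from the charm's tests):

```python
def get_id(sitename, relation_id, unit_name):
    # Squid peer names cannot contain '/', ':' or '.', so replace them all.
    unit_name = unit_name.replace('/', '_').replace('.', '_')
    relation_id = relation_id.replace(':', '_').replace('.', '_')
    if sitename is None:
        return relation_id + '__' + unit_name
    return sitename.replace('.', '_') + '__' + relation_id + '__' + unit_name

print(get_id('example.com', 'website:0', 'apache/1'))
# example_com__website_0__apache_1
```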
1939+def get_forward_sites():
1940+ reldata = []
1941+ for relid in get_relation_ids('cached-website'):
1942+ units = related_units(relid)
1943+ for unit in units:
1944+ data = relation_get(rid=relid, unit=unit)
1945+ if 'sitenames' in data:
1946+ data['sitenames'] = data['sitenames'].split()
1947+ data['relation-id'] = relid
1948+ data['name'] = unit.replace("/", "_")
1949+ reldata.append(data)
1950+ return reldata
1951+
1952+
1953+def get_reverse_sites():
1954+ all_sites = {}
1955+
1956+ config_data = config_get()
1957+ config_services = yaml.safe_load(config_data.get("services", "")) or ()
1958+ server_options_by_site = {}
1959+
1960+ for service_item in config_services:
1961+ site = service_item["service_name"]
1962+ servers = all_sites.setdefault(site, [])
1963+ options = " ".join(service_item.get("server_options", []))
1964+ server_options_by_site[site] = options
1965+ for host, port in service_item["servers"]:
1966+ servers.append(
1967+ Server(name=get_id(None, site, host),
1968+ address=host, port=port,
1969+ options=options))
1970+
1971+ relations = relations_of_type("website")
1972+
1973+ for relation_data in relations:
1974+ unit = relation_data["__unit__"]
1975+ relation_id = relation_data["__relid__"]
1976+ if not ("port" in relation_data or "all_services" in relation_data):
1977+ log("No port in relation data for '%s', skipping." % unit)
1978+ continue
1979+
1980+ if "private-address" not in relation_data:
1981+ log("No private-address in relation data "
1982+ "for '%s', skipping." % unit)
1983+ continue
1984+
1985+ for site in relation_data.get("sitenames", "").split():
1986+ servers = all_sites.setdefault(site, [])
1987+ servers.append(
1988+ Server(name=get_id(site, relation_id, unit),
1989+ address=relation_data["private-address"],
1990+ port=relation_data["port"],
1991+ options=server_options_by_site.get(site, '')))
1992+
1993+ services = yaml.safe_load(relation_data.get("all_services", "")) or ()
1994+ for service_item in services:
1995+ site = service_item["service_name"]
1996+ servers = all_sites.setdefault(site, [])
1997+ servers.append(
1998+ Server(name=get_id(site, relation_id, unit),
1999+ address=relation_data["private-address"],
2000+ port=service_item["service_port"],
2001+ options=server_options_by_site.get(site, '')))
2002+
2003+ if not ("sitenames" in relation_data or
2004+ "all_services" in relation_data):
2005+ servers = all_sites.setdefault(None, [])
2006+ servers.append(
2007+ Server(name=get_id(None, relation_id, unit),
2008+ address=relation_data["private-address"],
2009+ port=relation_data["port"],
2010+ options=server_options_by_site.get(None, '')))
2011+
2012+ if not all_sites:
2013+ return
2014+
2015+ for site, servers in all_sites.iteritems():
2016+ servers.sort()
2017+
2018+ return all_sites
2019+
2020+
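`get_reverse_sites` merges two sources: the `services` charm config option (parsed with `yaml.safe_load`) and `website` relation data. A minimal sketch of the config-side aggregation, using plain tuples instead of the charm's `Server` namedtuple and hypothetical host/port values:

```python
# Parsed form of a hypothetical `services` config value (yaml.safe_load output).
config_services = [
    {"service_name": "app.example.com",
     "server_options": ["round-robin"],
     "servers": [["10.0.0.1", 8080], ["10.0.0.2", 8080]]},
]

all_sites = {}
for item in config_services:
    site = item["service_name"]
    # server_options entries are joined into one cache_peer options string.
    options = " ".join(item.get("server_options", []))
    for host, port in item["servers"]:
        all_sites.setdefault(site, []).append((host, port, options))
```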
2021+def ensure_package_status(packages, status):
2022+ if status in ['install', 'hold']:
2023+ selections = ''.join(['{} {}\n'.format(package, status)
2024+ for package in packages])
2025+ dpkg = subprocess.Popen(['dpkg', '--set-selections'],
2026+ stdin=subprocess.PIPE)
2027+ dpkg.communicate(input=selections)
2028
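`ensure_package_status` pipes its selections into `dpkg --set-selections`, which reads one `package status` pair per line on stdin. A sketch of just the string construction, without invoking dpkg:

```python
def build_selections(packages, status):
    # One "package status" pair per line, the format dpkg --set-selections reads.
    return ''.join('{} {}\n'.format(package, status) for package in packages)

print(build_selections(['squid3', 'squidclient'], 'hold'))
# squid3 hold
# squidclient hold
```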
2029
2030 #------------------------------------------------------------------------------
2031@@ -169,9 +148,8 @@
2032 #------------------------------------------------------------------------------
2033 def load_squid3_config(squid3_config_file="/etc/squid3/squid.conf"):
2034 if os.path.isfile(squid3_config_file):
2035- return(open(squid3_config_file).read())
2036- else:
2037- return(None)
2038+ return open(squid3_config_file).read()
2039+ return
2040
2041
2042 #------------------------------------------------------------------------------
2043@@ -183,8 +161,8 @@
2044 def get_service_ports(squid3_config_file="/etc/squid3/squid.conf"):
2045 squid3_config = load_squid3_config(squid3_config_file)
2046 if squid3_config is None:
2047- return(None)
2048- return(re.findall("http_port ([0-9]+)", squid3_config))
2049+ return
2050+ return re.findall("http_port ([0-9]+)", squid3_config)
2051
2052
2053 #------------------------------------------------------------------------------
2054@@ -195,31 +173,9 @@
2055 def get_sitenames(squid3_config_file="/etc/squid3/squid.conf"):
2056 squid3_config = load_squid3_config(squid3_config_file)
2057 if squid3_config is None:
2058- return(None)
2059- sitenames = re.findall("cache_peer_domain \w+ ([^!].*)", squid3_config)
2060- return(list(set(sitenames)))
2061-
2062-
2063-#------------------------------------------------------------------------------
2064-# open_port: Convenience function to open a port in juju to
2065-# expose a service
2066-#------------------------------------------------------------------------------
2067-def open_port(port=None, protocol="TCP"):
2068- if port is None:
2069- return(None)
2070- return(subprocess.call(['/usr/bin/open-port', "%d/%s" %
2071- (int(port), protocol)]))
2072-
2073-
2074-#------------------------------------------------------------------------------
2075-# close_port: Convenience function to close a port in juju to
2076-# unexpose a service
2077-#------------------------------------------------------------------------------
2078-def close_port(port=None, protocol="TCP"):
2079- if port is None:
2080- return(None)
2081- return(subprocess.call(['/usr/bin/close-port', "%d/%s" %
2082- (int(port), protocol)]))
2083+ return
2084+ sitenames = re.findall("acl [\w_-]+ dstdomain ([^!].*)", squid3_config)
2085+ return list(set(sitenames))
2086
2087
2088 #------------------------------------------------------------------------------
2089@@ -229,7 +185,7 @@
2090 #------------------------------------------------------------------------------
2091 def update_service_ports(old_service_ports=None, new_service_ports=None):
2092 if old_service_ports is None or new_service_ports is None:
2093- return(None)
2094+ return
2095 for port in old_service_ports:
2096 if port not in new_service_ports:
2097 close_port(port)
2098@@ -248,7 +204,7 @@
2099 if l not in 'Iil0oO1']
2100 random_chars = [random.choice(alphanumeric_chars)
2101 for i in range(pwd_length)]
2102- return(''.join(random_chars))
2103+ return ''.join(random_chars)
2104
2105
2106 #------------------------------------------------------------------------------
2107@@ -257,27 +213,53 @@
2108 def construct_squid3_config():
2109 from jinja2 import Environment, FileSystemLoader
2110 config_data = config_get()
2111- relations = relation_get_all()
2112+ reverse_sites = get_reverse_sites()
2113+ only_direct = set()
2114+ if reverse_sites is not None:
2115+ for site, servers in reverse_sites.iteritems():
2116+ if not servers:
2117+ only_direct.add(site)
2118+
2119 if config_data['refresh_patterns']:
2120- refresh_patterns = json.loads(config_data['refresh_patterns'])
2121+ refresh_patterns = yaml.safe_load(config_data['refresh_patterns'])
2122 else:
2123 refresh_patterns = {}
2124- template_env = Environment(loader=FileSystemLoader(
2125- os.path.join(os.environ['CHARM_DIR'], 'templates')))
2126+ # Use default refresh pattern if specified.
2127+ if '.' in refresh_patterns:
2128+ default_refresh_pattern = refresh_patterns.pop('.')
2129+ else:
2130+ default_refresh_pattern = {
2131+ 'min': 30,
2132+ 'percent': 20,
2133+ 'max': 4320,
2134+ 'options': [],
2135+ }
2136+
2137+ templates_dir = os.path.join(os.environ['CHARM_DIR'], 'templates')
2138+ template_env = Environment(loader=FileSystemLoader(templates_dir))
2139+
2140 config_data['cache_l1'] = int(math.ceil(math.sqrt(
2141- int(config_data['cache_size_mb']) * 1024 / (16 *
2142- int(config_data['target_objs_per_dir']) * int(
2143- config_data['avg_obj_size_kb'])))))
2144+ int(config_data['cache_size_mb']) * 1024 / (
2145+ 16 * int(config_data['target_objs_per_dir']) * int(
2146+ config_data['avg_obj_size_kb'])))))
2147 config_data['cache_l2'] = config_data['cache_l1'] * 16
2148 templ_vars = {
2149 'config': config_data,
2150- 'relations': relations,
2151+ 'sites': reverse_sites,
2152+ 'forward_relations': get_forward_sites(),
2153+ 'only_direct': only_direct,
2154 'refresh_patterns': refresh_patterns,
2155+ 'default_refresh_pattern': default_refresh_pattern,
2156 }
2157 template = template_env.get_template('main_config.template').\
2158 render(templ_vars)
2159+ write_squid3_config('\n'.join(
2160+ (l.strip() for l in str(template).splitlines())))
2161+
2162+
2163+def write_squid3_config(contents):
2164 with open(default_squid3_config, 'w') as squid3_config:
2165- squid3_config.write(str(template))
2166+ squid3_config.write(contents)
2167
2168
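The `refresh_patterns` option switches from JSON to YAML in this branch, with a `.` key acting as the catch-all default pattern. A sketch of how the default is split out, using hypothetical pattern values:

```python
# Hypothetical yaml.safe_load result for a refresh_patterns config value.
refresh_patterns = {
    '.': {'min': 0, 'percent': 20, 'max': 60, 'options': []},
    'example.com': {'min': 30, 'percent': 20, 'max': 4320, 'options': []},
}

# The '.' entry, if present, is popped off and used as the default;
# otherwise the charm falls back to built-in defaults.
if '.' in refresh_patterns:
    default_refresh_pattern = refresh_patterns.pop('.')
else:
    default_refresh_pattern = {
        'min': 30, 'percent': 20, 'max': 4320, 'options': [],
    }
```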
2169 #------------------------------------------------------------------------------
2170@@ -286,147 +268,192 @@
2171 #------------------------------------------------------------------------------
2172 def service_squid3(action=None, squid3_config=default_squid3_config):
2173 if action is None or squid3_config is None:
2174- return(None)
2175+ return
2176 elif action == "check":
2177 check_cmd = ['/usr/sbin/squid3', '-f', squid3_config, '-k', 'parse']
2178 retVal = subprocess.call(check_cmd)
2179 if retVal == 1:
2180- return(False)
2181+ return False
2182 elif retVal == 0:
2183- return(True)
2184+ return True
2185 else:
2186- return(False)
2187+ return False
2188 elif action == 'status':
2189 status = subprocess.check_output(['status', 'squid3'])
2190 if re.search('running', status) is not None:
2191- return(True)
2192+ return True
2193 else:
2194- return(False)
2195+ return False
2196 elif action in ('start', 'stop', 'reload', 'restart'):
2197 retVal = subprocess.call([action, 'squid3'])
2198 if retVal == 0:
2199- return(True)
2200+ return True
2201 else:
2202- return(False)
2203+ return False
2204+
2205+
2206+def install_nrpe_scripts():
2207+ if not os.path.exists(default_nagios_plugin_dir):
2208+ os.makedirs(default_nagios_plugin_dir)
2209+ shutil.copy2('%s/files/check_squidpeers' % (
2210+ os.environ['CHARM_DIR']),
2211+ '{}/check_squidpeers'.format(default_nagios_plugin_dir))
2212
2213
2214 def update_nrpe_checks():
2215- config_data = config_get()
2216- try:
2217- nagios_uid = pwd.getpwnam('nagios').pw_uid
2218- nagios_gid = grp.getgrnam('nagios').gr_gid
2219- except:
2220- subprocess.call(['juju-log', "Nagios user not setup, exiting"])
2221- return
2222-
2223- unit_name = os.environ['JUJU_UNIT_NAME'].replace('/', '-')
2224- nrpe_check_file = '/etc/nagios/nrpe.d/check_squidrp.cfg'
2225- nagios_hostname = "%s-%s" % (config_data['nagios_context'], unit_name)
2226- nagios_logdir = '/var/log/nagios'
2227- nagios_exportdir = '/var/lib/nagios/export'
2228- nrpe_service_file = \
2229- '/var/lib/nagios/export/service__%s_check_squidrp.cfg' % \
2230- (nagios_hostname)
2231- if not os.path.exists(nagios_logdir):
2232- os.mkdir(nagios_logdir)
2233- os.chown(nagios_logdir, nagios_uid, nagios_gid)
2234- if not os.path.exists(nagios_exportdir):
2235- subprocess.call(['juju-log', 'Exiting as %s is not accessible' %
2236- (nagios_exportdir)])
2237- return
2238- for f in os.listdir(nagios_exportdir):
2239- if re.search('.*check_squidrp.cfg', f):
2240- os.remove(os.path.join(nagios_exportdir, f))
2241- from jinja2 import Environment, FileSystemLoader
2242- template_env = Environment(
2243- loader=FileSystemLoader(os.path.join(os.environ['CHARM_DIR'], 'templates')))
2244- templ_vars = {
2245- 'nagios_hostname': nagios_hostname,
2246- 'nagios_servicegroup': config_data['nagios_context'],
2247- }
2248- template = template_env.get_template('nrpe_service.template').\
2249- render(templ_vars)
2250- with open(nrpe_service_file, 'w') as nrpe_service_config:
2251- nrpe_service_config.write(str(template))
2252- with open(nrpe_check_file, 'w') as nrpe_check_config:
2253- nrpe_check_config.write("# check squid\n")
2254- nrpe_check_config.write(
2255- "command[check_squidrp]=/usr/lib/nagios/plugins/check_http %s\n" %
2256- (config_data['nagios_check_http_params']))
2257- if os.path.isfile('/etc/init.d/nagios-nrpe-server'):
2258- subprocess.call(['service', 'nagios-nrpe-server', 'reload'])
2259+ install_nrpe_scripts()
2260+ nrpe_compat = NRPE()
2261+ conf = nrpe_compat.config
2262+ nrpe_compat.add_check(
2263+ shortname='squidpeers',
2264+ description='Check Squid Peers',
2265+ check_cmd='check_squidpeers'
2266+ )
2267+ check_http_params = conf.get('nagios_check_http_params')
2268+ if check_http_params:
2269+ nrpe_compat.add_check(
2270+ shortname='squidrp',
2271+ description='Check Squid',
2272+ check_cmd='check_http %s' % check_http_params
2273+ )
2274+ config_services_str = conf.get('services', '')
2275+ config_services = yaml.safe_load(config_services_str) or ()
2276+ for service in config_services:
2277+ path = service.get('nrpe_check_path')
2278+ if path is not None:
2279+ command = 'check_http -I 127.0.0.1 -p 3128 --method=HEAD '
2280+ service_name = service['service_name']
2281+ if conf.get('x_balancer_name_allowed'):
2282+ command += ("-u http://localhost%s "
2283+ "-k \\'X-Balancer-Name: %s\\'" % (
2284+ path, service_name))
2285+ else:
2286+ command += "-u http://%s%s" % (service_name, path)
2287+ nrpe_compat.add_check(
2288+ shortname='squid-%s' % service_name.replace(".", "_"),
2289+ description='Check Squid for site %s' % service_name,
2290+ check_cmd=command,
2291+ )
2292+ nrpe_compat.write()
2293+
2294+
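The per-service NRPE check above assembles a `check_http` command whose shape depends on `x_balancer_name_allowed`: either probe through localhost with the `X-Balancer-Name` header, or use the site name as the URL host. A standalone sketch of that string construction (hypothetical `service_name`/`path` values):

```python
def build_check_command(service_name, path, x_balancer_name_allowed):
    # Mirrors update_nrpe_checks: probe squid on localhost:3128, selecting
    # the backend either via the X-Balancer-Name header or via the URL host.
    command = 'check_http -I 127.0.0.1 -p 3128 --method=HEAD '
    if x_balancer_name_allowed:
        command += ("-u http://localhost%s "
                    "-k \\'X-Balancer-Name: %s\\'" % (path, service_name))
    else:
        command += "-u http://%s%s" % (service_name, path)
    return command

print(build_check_command('app.example.com', '/check', True))
```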
2295+def install_packages():
2296+ apt_install("squid3 squidclient python-jinja2".split(), fatal=True)
2297+ ensure_package_status(service_affecting_packages,
2298+ config_get('package_status'))
2299
2300
2301 ###############################################################################
2302 # Hook functions
2303 ###############################################################################
2304 def install_hook():
2305- for f in glob.glob('exec.d/*/charm-pre-install'):
2306- if os.path.isfile(f) and os.access(f, os.X_OK):
2307- subprocess.check_call(['sh', '-c', f])
2308 if not os.path.exists(default_squid3_config_dir):
2309 os.mkdir(default_squid3_config_dir, 0600)
2310 if not os.path.exists(default_squid3_config_cache_dir):
2311 os.mkdir(default_squid3_config_cache_dir, 0600)
2312- shutil.copy2('%s/files/default.squid3' % (os.environ['CHARM_DIR']), '/etc/default/squid3')
2313- return (apt_get_install(
2314- "squid3 python-jinja2") == enable_squid3() is not True)
2315+ shutil.copy2('%s/files/default.squid3' % (
2316+ os.environ['CHARM_DIR']), '/etc/default/squid3')
2317+ install_packages()
2318+ return True
2319
2320
2321 def config_changed():
2322- current_service_ports = get_service_ports()
2323+ old_service_ports = get_service_ports()
2324+ old_sitenames = get_sitenames()
2325 construct_squid3_config()
2326 update_nrpe_checks()
2327+ ensure_package_status(service_affecting_packages,
2328+ config_get('package_status'))
2329
2330 if service_squid3("check"):
2331 updated_service_ports = get_service_ports()
2332- update_service_ports(current_service_ports, updated_service_ports)
2333+ update_service_ports(old_service_ports, updated_service_ports)
2334 service_squid3("reload")
2335+ if old_sitenames != get_sitenames():
2336+ notify_cached_website()
2337 else:
2338+ # XXX Ideally the config should be restored to a working state if the
2339+ # check fails, otherwise an inadvertent reload will cause the service
2340+ # to be broken.
2341+ log("squid configuration check failed, exiting")
2342 sys.exit(1)
2343
2344
2345 def start_hook():
2346 if service_squid3("status"):
2347- return(service_squid3("restart"))
2348+ return service_squid3("restart")
2349 else:
2350- return(service_squid3("start"))
2351+ return service_squid3("start")
2352
2353
2354 def stop_hook():
2355 if service_squid3("status"):
2356- return(service_squid3("stop"))
2357+ return service_squid3("stop")
2358
2359
2360 def website_interface(hook_name=None):
2361 if hook_name is None:
2362- return(None)
2363- if hook_name in ["joined", "changed", "broken", "departed"]:
2364+ return
2365+ if hook_name in ("joined", "changed", "broken", "departed"):
2366 config_changed()
2367
2368
2369 def cached_website_interface(hook_name=None):
2370 if hook_name is None:
2371- return(None)
2372- if hook_name in ["joined", "changed"]:
2373- sitenames = ' '.join(get_sitenames())
2374- # passing only one port - the first one defined
2375- subprocess.call(['relation-set', 'port=%s' % get_service_ports()[0],
2376- 'sitenames=%s' % sitenames])
2377+ return
2378+ if hook_name in ("joined", "changed"):
2379+ notify_cached_website(relation_ids=(None,))
2380+ config_data = config_get()
2381+ if config_data['enable_forward_proxy']:
2382+ config_changed()
2383+
2384+
2385+def get_hostname(host=None):
2386+ my_host = socket.gethostname()
2387+ if host is None or host == "0.0.0.0":
2388+ # If the listen ip has been set to 0.0.0.0 then pass back the hostname
2389+ return socket.getfqdn(my_host)
2390+ elif host == "localhost":
2391+ # If the fqdn lookup has returned localhost (lxc setups) then return
2392+ # hostname
2393+ return my_host
2394+ return host
2395+
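`get_hostname` normalizes the address advertised over the `cached-website` relation. Its three branches, restated so they can be exercised directly (the fqdn branch depends on the local resolver, so only the deterministic branches are worth asserting):

```python
import socket

def get_hostname(host=None):
    my_host = socket.gethostname()
    if host is None or host == "0.0.0.0":
        # Listening on all interfaces: advertise the fqdn instead.
        return socket.getfqdn(my_host)
    elif host == "localhost":
        # lxc setups where the fqdn lookup returns localhost: use the
        # bare hostname instead.
        return my_host
    return host

print(get_hostname("squid.internal"))
# squid.internal
```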
2396+
2397+def notify_cached_website(relation_ids=None):
2398+ hostname = get_hostname()
2399+ # passing only one port - the first one defined
2400+ port = get_service_ports()[0]
2401+ sitenames = ' '.join(get_sitenames())
2402+
2403+ for rid in relation_ids or get_relation_ids("cached-website"):
2404+ relation_set(relation_id=rid, port=port,
2405+ hostname=hostname,
2406+ sitenames=sitenames)
2407+
2408+
2409+def upgrade_charm():
2410+ # Ensure that all current dependencies are installed.
2411+ install_packages()
2412
2413
2414 ###############################################################################
2415 # Main section
2416 ###############################################################################
2417-def main():
2418+def main(hook_name):
2419 if hook_name == "install":
2420 install_hook()
2421 elif hook_name == "config-changed":
2422 config_changed()
2423+ update_nrpe_checks()
2424 elif hook_name == "start":
2425 start_hook()
2426 elif hook_name == "stop":
2427 stop_hook()
2428+ elif hook_name == "upgrade-charm":
2429+ upgrade_charm()
2430+ config_changed()
2431+ update_nrpe_checks()
2432
2433 elif hook_name == "website-relation-joined":
2434 website_interface("joined")
2435@@ -446,14 +473,21 @@
2436 elif hook_name == "cached-website-relation-departed":
2437 cached_website_interface("departed")
2438
2439- elif hook_name == "nrpe-external-master-relation-changed":
2440+ elif hook_name in ("nrpe-external-master-relation-joined",
2441+ "local-monitors-relation-joined"):
2442 update_nrpe_checks()
2443
2444 elif hook_name == "env-dump":
2445- print relation_get_all()
2446+ print relations_of_type()
2447 else:
2448 print "Unknown hook"
2449 sys.exit(1)
2450
2451 if __name__ == '__main__':
2452- main()
2453+ hook_name = os.path.basename(sys.argv[0])
2454+ # Also support being invoked directly with hook as argument name.
2455+ if hook_name == "hooks.py":
2456+ if len(sys.argv) < 2:
2457+ sys.exit("Missing required hook name argument.")
2458+ hook_name = sys.argv[1]
2459+ main(hook_name)
2460
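Juju invokes each hook through a symlink named after the hook, so the new `__main__` section derives the hook name from `argv[0]` and falls back to an explicit argument when run directly as `hooks.py`. The dispatch reduces to (example paths are hypothetical):

```python
import os

def resolve_hook_name(argv):
    # Symlink invocation: argv[0] is e.g. .../hooks/config-changed.
    hook_name = os.path.basename(argv[0])
    if hook_name == "hooks.py":
        # Direct invocation: the hook name is passed as an argument.
        if len(argv) < 2:
            raise SystemExit("Missing required hook name argument.")
        hook_name = argv[1]
    return hook_name

print(resolve_hook_name(["/var/lib/juju/units/squid-0/charm/hooks/config-changed"]))
# config-changed
```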
2461=== modified symlink 'hooks/install' (properties changed: -x to +x)
2462=== target was u'hooks.py'
2463--- hooks/install 1970-01-01 00:00:00 +0000
2464+++ hooks/install 2013-10-29 21:07:21 +0000
2465@@ -0,0 +1,9 @@
2466+#!/bin/sh
2467+
2468+set -eu
2469+
2470+juju-log 'Invoking charm-pre-install hooks'
2471+[ -d exec.d ] && ( for f in exec.d/*/charm-pre-install; do [ -x $f ] && /bin/sh -c "$f"; done )
2472+
2473+juju-log 'Invoking python-based install hook'
2474+python hooks/hooks.py install
2475
2476=== added symlink 'hooks/local-monitors-relation-joined'
2477=== target is u'hooks.py'
2478=== renamed symlink 'hooks/nrpe-external-master-relation-changed' => 'hooks/nrpe-external-master-relation-joined'
2479=== removed directory 'hooks/shelltoolbox'
2480=== removed file 'hooks/shelltoolbox/__init__.py'
2481--- hooks/shelltoolbox/__init__.py 2012-10-02 09:42:19 +0000
2482+++ hooks/shelltoolbox/__init__.py 1970-01-01 00:00:00 +0000
2483@@ -1,662 +0,0 @@
2484-# Copyright 2012 Canonical Ltd.
2485-
2486-# This file is part of python-shell-toolbox.
2487-#
2488-# python-shell-toolbox is free software: you can redistribute it and/or modify
2489-# it under the terms of the GNU General Public License as published by the
2490-# Free Software Foundation, version 3 of the License.
2491-#
2492-# python-shell-toolbox is distributed in the hope that it will be useful, but
2493-# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
2494-# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
2495-# more details.
2496-#
2497-# You should have received a copy of the GNU General Public License
2498-# along with python-shell-toolbox. If not, see <http://www.gnu.org/licenses/>.
2499-
2500-"""Helper functions for accessing shell commands in Python."""
2501-
2502-__metaclass__ = type
2503-__all__ = [
2504- 'apt_get_install',
2505- 'bzr_whois',
2506- 'cd',
2507- 'command',
2508- 'DictDiffer',
2509- 'environ',
2510- 'file_append',
2511- 'file_prepend',
2512- 'generate_ssh_keys',
2513- 'get_su_command',
2514- 'get_user_home',
2515- 'get_user_ids',
2516- 'install_extra_repositories',
2517- 'join_command',
2518- 'mkdirs',
2519- 'run',
2520- 'Serializer',
2521- 'script_name',
2522- 'search_file',
2523- 'ssh',
2524- 'su',
2525- 'user_exists',
2526- 'wait_for_page_contents',
2527- ]
2528-
2529-from collections import namedtuple
2530-from contextlib import contextmanager
2531-from email.Utils import parseaddr
2532-import errno
2533-import json
2534-import operator
2535-import os
2536-import pipes
2537-import pwd
2538-import re
2539-import subprocess
2540-import sys
2541-from textwrap import dedent
2542-import time
2543-import urllib2
2544-
2545-
2546-Env = namedtuple('Env', 'uid gid home')
2547-
2548-
2549-def apt_get_install(*args, **kwargs):
2550- """Install given packages using apt.
2551-
2552- It is possible to pass environment variables to be set during install
2553- using keyword arguments.
2554-
2555- :raises: subprocess.CalledProcessError
2556- """
2557- caller = kwargs.pop('caller', run)
2558- debian_frontend = kwargs.pop('DEBIAN_FRONTEND', 'noninteractive')
2559- with environ(DEBIAN_FRONTEND=debian_frontend, **kwargs):
2560- cmd = ('apt-get', '-y', 'install') + args
2561- return caller(*cmd)
2562-
2563-
2564-def bzr_whois(user):
2565- """Return full name and email of bzr `user`.
2566-
2567- Return None if the given `user` does not have a bzr user id.
2568- """
2569- with su(user):
2570- try:
2571- whoami = run('bzr', 'whoami')
2572- except (subprocess.CalledProcessError, OSError):
2573- return None
2574- return parseaddr(whoami)
2575-
2576-
2577-@contextmanager
2578-def cd(directory):
2579- """A context manager to temporarily change current working dir, e.g.::
2580-
2581- >>> import os
2582- >>> os.chdir('/tmp')
2583- >>> with cd('/bin'): print os.getcwd()
2584- /bin
2585- >>> print os.getcwd()
2586- /tmp
2587- """
2588- cwd = os.getcwd()
2589- os.chdir(directory)
2590- try:
2591- yield
2592- finally:
2593- os.chdir(cwd)
2594-
2595-
2596-def command(*base_args):
2597- """Return a callable that will run the given command with any arguments.
2598-
2599- The first argument is the path to the command to run, subsequent arguments
2600- are command-line arguments to "bake into" the returned callable.
2601-
2602- The callable runs the given executable and also takes arguments that will
2603- be appeneded to the "baked in" arguments.
2604-
2605- For example, this code will list a file named "foo" (if it exists):
2606-
2607- ls_foo = command('/bin/ls', 'foo')
2608- ls_foo()
2609-
2610- While this invocation will list "foo" and "bar" (assuming they exist):
2611-
2612- ls_foo('bar')
2613- """
2614- def callable_command(*args):
2615- all_args = base_args + args
2616- return run(*all_args)
2617-
2618- return callable_command
2619-
2620-
2621-@contextmanager
2622-def environ(**kwargs):
2623- """A context manager to temporarily change environment variables.
2624-
2625- If an existing environment variable is changed, it is restored during
2626- context cleanup::
2627-
2628- >>> import os
2629- >>> os.environ['MY_VARIABLE'] = 'foo'
2630- >>> with environ(MY_VARIABLE='bar'): print os.getenv('MY_VARIABLE')
2631- bar
2632- >>> print os.getenv('MY_VARIABLE')
2633- foo
2634- >>> del os.environ['MY_VARIABLE']
2635-
2636- If we are adding environment variables, they are removed during context
2637- cleanup::
2638-
2639- >>> import os
2640- >>> with environ(MY_VAR1='foo', MY_VAR2='bar'):
2641- ... print os.getenv('MY_VAR1'), os.getenv('MY_VAR2')
2642- foo bar
2643- >>> os.getenv('MY_VAR1') == os.getenv('MY_VAR2') == None
2644- True
2645- """
2646- backup = {}
2647- for key, value in kwargs.items():
2648- backup[key] = os.getenv(key)
2649- os.environ[key] = value
2650- try:
2651- yield
2652- finally:
2653- for key, value in backup.items():
2654- if value is None:
2655- del os.environ[key]
2656- else:
2657- os.environ[key] = value
2658-
2659-
2660-def file_append(filename, line):
2661- r"""Append given `line`, if not present, at the end of `filename`.
2662-
2663- Usage example::
2664-
2665- >>> import tempfile
2666- >>> f = tempfile.NamedTemporaryFile('w', delete=False)
2667- >>> f.write('line1\n')
2668- >>> f.close()
2669- >>> file_append(f.name, 'new line\n')
2670- >>> open(f.name).read()
2671- 'line1\nnew line\n'
2672-
2673- Nothing happens if the file already contains the given `line`::
2674-
2675- >>> file_append(f.name, 'new line\n')
2676- >>> open(f.name).read()
2677- 'line1\nnew line\n'
2678-
2679- A new line is automatically added before the given `line` if it is not
2680- present at the end of current file content::
2681-
2682- >>> import tempfile
2683- >>> f = tempfile.NamedTemporaryFile('w', delete=False)
2684- >>> f.write('line1')
2685- >>> f.close()
2686- >>> file_append(f.name, 'new line\n')
2687- >>> open(f.name).read()
2688- 'line1\nnew line\n'
2689-
2690- The file is created if it does not exist::
2691-
2692- >>> import tempfile
2693- >>> filename = tempfile.mktemp()
2694- >>> file_append(filename, 'line1\n')
2695- >>> open(filename).read()
2696- 'line1\n'
2697- """
2698- if not line.endswith('\n'):
2699- line += '\n'
2700- with open(filename, 'a+') as f:
2701- lines = f.readlines()
2702- if line not in lines:
2703- if not lines or lines[-1].endswith('\n'):
2704- f.write(line)
2705- else:
2706- f.write('\n' + line)
2707-
2708-
2709-def file_prepend(filename, line):
2710- r"""Insert given `line`, if not present, at the beginning of `filename`.
2711-
2712- Usage example::
2713-
2714- >>> import tempfile
2715- >>> f = tempfile.NamedTemporaryFile('w', delete=False)
2716- >>> f.write('line1\n')
2717- >>> f.close()
2718- >>> file_prepend(f.name, 'line0\n')
2719- >>> open(f.name).read()
2720- 'line0\nline1\n'
2721-
2722- If the file starts with the given `line`, nothing happens::
2723-
2724- >>> file_prepend(f.name, 'line0\n')
2725- >>> open(f.name).read()
2726- 'line0\nline1\n'
2727-
2728- If the file contains the given `line`, but not at the beginning,
2729- the line is moved on top::
2730-
2731- >>> file_prepend(f.name, 'line1\n')
2732- >>> open(f.name).read()
2733- 'line1\nline0\n'
2734- """
2735- if not line.endswith('\n'):
2736- line += '\n'
2737- with open(filename, 'r+') as f:
2738- lines = f.readlines()
2739- if lines[0] != line:
2740- try:
2741- lines.remove(line)
2742- except ValueError:
2743- pass
2744- lines.insert(0, line)
2745- f.seek(0)
2746- f.writelines(lines)
2747-
2748-
2749-def generate_ssh_keys(path, passphrase=''):
2750- """Generate ssh key pair, saving them inside the given `directory`.
2751-
2752- >>> generate_ssh_keys('/tmp/id_rsa')
2753- 0
2754- >>> open('/tmp/id_rsa').readlines()[0].strip()
2755- '-----BEGIN RSA PRIVATE KEY-----'
2756- >>> open('/tmp/id_rsa.pub').read().startswith('ssh-rsa')
2757- True
2758- >>> os.remove('/tmp/id_rsa')
2759- >>> os.remove('/tmp/id_rsa.pub')
2760-
2761- If either of the key files already exist, generate_ssh_keys() will
2762- raise an Exception.
2763-
2764- Note that ssh-keygen will prompt if the keyfiles already exist, but
2765- when we're using it non-interactively it's better to pre-empt that
2766- behaviour.
2767-
2768- >>> with open('/tmp/id_rsa', 'w') as key_file:
2769- ... key_file.write("Don't overwrite me, bro!")
2770- >>> generate_ssh_keys('/tmp/id_rsa') # doctest: +ELLIPSIS
2771- Traceback (most recent call last):
2772- Exception: File /tmp/id_rsa already exists...
2773- >>> os.remove('/tmp/id_rsa')
2774-
2775- >>> with open('/tmp/id_rsa.pub', 'w') as key_file:
2776- ... key_file.write("Don't overwrite me, bro!")
2777- >>> generate_ssh_keys('/tmp/id_rsa') # doctest: +ELLIPSIS
2778- Traceback (most recent call last):
2779- Exception: File /tmp/id_rsa.pub already exists...
2780- >>> os.remove('/tmp/id_rsa.pub')
2781- """
2782- if os.path.exists(path):
2783- raise Exception("File {} already exists.".format(path))
2784- if os.path.exists(path + '.pub'):
2785- raise Exception("File {}.pub already exists.".format(path))
2786- return subprocess.call([
2787- 'ssh-keygen', '-q', '-t', 'rsa', '-N', passphrase, '-f', path])
2788-
2789-
2790-def get_su_command(user, args):
2791- """Return a command line as a sequence, prepending "su" if necessary.
2792-
2793- This can be used together with `run` when the `su` context manager is not
2794- enough (e.g. an external program uses uid rather than euid).
2795-
2796- run(*get_su_command(user, ['bzr', 'whoami']))
2797-
2798- If the su is requested as current user, the arguments are returned as
2799- given::
2800-
2801- >>> import getpass
2802- >>> current_user = getpass.getuser()
2803-
2804- >>> get_su_command(current_user, ('ls', '-l'))
2805- ('ls', '-l')
2806-
2807- Otherwise, "su" is prepended::
2808-
2809- >>> get_su_command('nobody', ('ls', '-l', 'my file'))
2810- ('su', 'nobody', '-c', "ls -l 'my file'")
2811- """
2812- if get_user_ids(user)[0] != os.getuid():
2813- args = [i for i in args if i is not None]
2814- return ('su', user, '-c', join_command(args))
2815- return args
2816-
2817-
2818-def get_user_home(user):
2819- """Return the home directory of the given `user`.
2820-
2821- >>> get_user_home('root')
2822- '/root'
2823-
2824- If the user does not exist, return a default /home/[username] home::
2825-
2826- >>> get_user_home('_this_user_does_not_exist_')
2827- '/home/_this_user_does_not_exist_'
2828- """
2829- try:
2830- return pwd.getpwnam(user).pw_dir
2831- except KeyError:
2832- return os.path.join(os.path.sep, 'home', user)
2833-
2834-
2835-def get_user_ids(user):
2836- """Return the uid and gid of given `user`, e.g.::
2837-
2838- >>> get_user_ids('root')
2839- (0, 0)
2840- """
2841- userdata = pwd.getpwnam(user)
2842- return userdata.pw_uid, userdata.pw_gid
2843-
2844-
2845-def install_extra_repositories(*repositories):
2846- """Install all of the extra repositories and update apt.
2847-
2848- Given repositories can contain a "{distribution}" placeholder, which
2849- will be replaced by the current distribution codename.
2850-
2851- :raises: subprocess.CalledProcessError
2852- """
2853- distribution = run('lsb_release', '-cs').strip()
2854- # Starting from Oneiric, `apt-add-repository` is interactive by
2855- # default, and requires a "-y" flag to be set.
2856- assume_yes = None if distribution == 'lucid' else '-y'
2857- for repo in repositories:
2858- repository = repo.format(distribution=distribution)
2859- run('apt-add-repository', assume_yes, repository)
2860- run('apt-get', 'clean')
2861- run('apt-get', 'update')
2862-
2863-
2864-def join_command(args):
2865- """Return a valid Unix command line from `args`.
2866-
2867- >>> join_command(['ls', '-l'])
2868- 'ls -l'
2869-
2870- Arguments containing spaces and empty args are correctly quoted::
2871-
2872- >>> join_command(['command', 'arg1', 'arg containing spaces', ''])
2873- "command arg1 'arg containing spaces' ''"
2874- """
2875- return ' '.join(pipes.quote(arg) for arg in args)
2876-
2877-
2878-def mkdirs(*args):
2879- """Create leaf directories (given as `args`) and all intermediate ones.
2880-
2881- >>> import tempfile
2882- >>> base_dir = tempfile.mktemp(suffix='/')
2883- >>> dir1 = tempfile.mktemp(prefix=base_dir)
2884- >>> dir2 = tempfile.mktemp(prefix=base_dir)
2885- >>> mkdirs(dir1, dir2)
2886- >>> os.path.isdir(dir1)
2887- True
2888- >>> os.path.isdir(dir2)
2889- True
2890-
2891- If the leaf directory already exists the function returns without errors::
2892-
2893- >>> mkdirs(dir1)
2894-
2895- An `OSError` is raised if the leaf path exists and it is a file::
2896-
2897- >>> f = tempfile.NamedTemporaryFile(
2898- ... 'w', delete=False, prefix=base_dir)
2899- >>> f.close()
2900- >>> mkdirs(f.name) # doctest: +ELLIPSIS
2901- Traceback (most recent call last):
2902- OSError: ...
2903- """
2904- for directory in args:
2905- try:
2906- os.makedirs(directory)
2907- except OSError as err:
2908- if err.errno != errno.EEXIST or os.path.isfile(directory):
2909- raise
2910-
2911-
2912-def run(*args, **kwargs):
2913- """Run the command with the given arguments.
2914-
2915- The first argument is the path to the command to run.
2916- Subsequent arguments are command-line arguments to be passed.
2917-
2918- This function accepts all optional keyword arguments accepted by
2919- `subprocess.Popen`.
2920- """
2921- args = [i for i in args if i is not None]
2922- pipe = subprocess.PIPE
2923- process = subprocess.Popen(
2924- args, stdout=kwargs.pop('stdout', pipe),
2925- stderr=kwargs.pop('stderr', pipe),
2926- close_fds=kwargs.pop('close_fds', True), **kwargs)
2927- stdout, stderr = process.communicate()
2928- if process.returncode:
2929- exception = subprocess.CalledProcessError(
2930- process.returncode, repr(args))
2931- # The output argument of `CalledProcessError` was introduced in Python
2932- # 2.7. Monkey patch the output here to avoid TypeErrors in older
2933- # versions of Python, still preserving the output in Python 2.7.
2934- exception.output = ''.join(filter(None, [stdout, stderr]))
2935- raise exception
2936- return stdout
2937-
2938-
2939-def script_name():
2940- """Return the name of this script."""
2941- return os.path.basename(sys.argv[0])
2942-
2943-
2944-def search_file(regexp, filename):
2945- """Return the first line in `filename` that matches `regexp`."""
2946- with open(filename) as f:
2947- for line in f:
2948- if re.search(regexp, line):
2949- return line
2950-
2951-
2952-def ssh(location, user=None, key=None, caller=subprocess.call):
2953- """Return a callable that can be used to run ssh shell commands.
2954-
2955- The ssh `location` must be given; `user` is optional.
2956- If `user` is None, the current user is used for the connection.
2957-
2958- The callable internally uses the given `caller`::
2959-
2960- >>> def caller(cmd):
2961- ... print tuple(cmd)
2962- >>> sshcall = ssh('example.com', 'myuser', caller=caller)
2963- >>> root_sshcall = ssh('example.com', caller=caller)
2964- >>> sshcall('ls -l') # doctest: +ELLIPSIS
2965- ('ssh', '-t', ..., 'myuser@example.com', '--', 'ls -l')
2966- >>> root_sshcall('ls -l') # doctest: +ELLIPSIS
2967- ('ssh', '-t', ..., 'example.com', '--', 'ls -l')
2968-
2969- The ssh key path can be optionally provided::
2970-
2971- >>> root_sshcall = ssh('example.com', key='/tmp/foo', caller=caller)
2972- >>> root_sshcall('ls -l') # doctest: +ELLIPSIS
2973- ('ssh', '-t', ..., '-i', '/tmp/foo', 'example.com', '--', 'ls -l')
2974-
2975- If the ssh command exits with an error code,
2976- a `subprocess.CalledProcessError` is raised::
2977-
2978- >>> ssh('loc', caller=lambda cmd: 1)('ls -l') # doctest: +ELLIPSIS
2979- Traceback (most recent call last):
2980- CalledProcessError: ...
2981-
2982- If ignore_errors is set to True when executing the command, no error
2983- will be raised, even if the command itself returns an error code::
2984-
2985- >>> sshcall = ssh('loc', caller=lambda cmd: 1)
2986- >>> sshcall('ls -l', ignore_errors=True)
2987- """
2988- sshcmd = [
2989- 'ssh',
2990- '-t',
2991- '-t', # Yes, this second -t is deliberate. See `man ssh`.
2992- '-o', 'StrictHostKeyChecking=no',
2993- '-o', 'UserKnownHostsFile=/dev/null',
2994- ]
2995- if key is not None:
2996- sshcmd.extend(['-i', key])
2997- if user is not None:
2998- location = '{}@{}'.format(user, location)
2999- sshcmd.extend([location, '--'])
3000-
3001- def _sshcall(cmd, ignore_errors=False):
3002- command = sshcmd + [cmd]
3003- retcode = caller(command)
3004- if retcode and not ignore_errors:
3005- raise subprocess.CalledProcessError(retcode, ' '.join(command))
3006-
3007- return _sshcall
3008-
3009-
3010-@contextmanager
3011-def su(user):
3012- """A context manager to temporarily run the script as a different user."""
3013- uid, gid = get_user_ids(user)
3014- os.setegid(gid)
3015- os.seteuid(uid)
3016- home = get_user_home(user)
3017- with environ(HOME=home):
3018- try:
3019- yield Env(uid, gid, home)
3020- finally:
3021- os.setegid(os.getgid())
3022- os.seteuid(os.getuid())
3023-
3024-
3025-def user_exists(username):
3026- """Return True if given `username` exists, e.g.::
3027-
3028- >>> user_exists('root')
3029- True
3030- >>> user_exists('_this_user_does_not_exist_')
3031- False
3032- """
3033- try:
3034- pwd.getpwnam(username)
3035- except KeyError:
3036- return False
3037- return True
3038-
3039-
3040-def wait_for_page_contents(url, contents, timeout=120, validate=None):
3041- if validate is None:
3042- validate = operator.contains
3043- start_time = time.time()
3044- while True:
3045- try:
3046- stream = urllib2.urlopen(url)
3047- except (urllib2.HTTPError, urllib2.URLError):
3048- pass
3049- else:
3050- page = stream.read()
3051- if validate(page, contents):
3052- return page
3053- if time.time() - start_time >= timeout:
3054- raise RuntimeError('timeout waiting for contents of ' + url)
3055- time.sleep(0.1)
3056-
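The wait_for_page_contents helper above is a standard poll-until-deadline loop: retry, check elapsed time against the timeout, sleep, repeat. A generic sketch of the same pattern (wait_for and ready are illustrative names, not part of shelltoolbox):

```python
import time

def wait_for(predicate, timeout=5, interval=0.1):
    # Retry until predicate() is truthy or the deadline passes, mirroring
    # the structure of wait_for_page_contents without the urllib2 fetch.
    start = time.time()
    while True:
        value = predicate()
        if value:
            return value
        if time.time() - start >= timeout:
            raise RuntimeError('timeout waiting for condition')
        time.sleep(interval)

counter = {'n': 0}

def ready():
    # Succeeds on the third poll, standing in for a page whose
    # contents eventually validate.
    counter['n'] += 1
    return counter['n'] >= 3

assert wait_for(ready, timeout=5) is True
```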
3057-
3058-class DictDiffer:
3059- """
3060- Calculate the difference between two dictionaries as:
3061- (1) items added
3062- (2) items removed
3063- (3) keys same in both but changed values
3064- (4) keys same in both and unchanged values
3065- """
3066-
3067- # Based on answer by hughdbrown at:
3068- # http://stackoverflow.com/questions/1165352
3069-
3070- def __init__(self, current_dict, past_dict):
3071- self.current_dict = current_dict
3072- self.past_dict = past_dict
3073- self.set_current = set(current_dict)
3074- self.set_past = set(past_dict)
3075- self.intersect = self.set_current.intersection(self.set_past)
3076-
3077- @property
3078- def added(self):
3079- return self.set_current - self.intersect
3080-
3081- @property
3082- def removed(self):
3083- return self.set_past - self.intersect
3084-
3085- @property
3086- def changed(self):
3087- return set(key for key in self.intersect
3088- if self.past_dict[key] != self.current_dict[key])
3089- @property
3090- def unchanged(self):
3091- return set(key for key in self.intersect
3092- if self.past_dict[key] == self.current_dict[key])
3093- @property
3094- def modified(self):
3095- return self.current_dict != self.past_dict
3096-
3097- @property
3098- def added_or_changed(self):
3099- return self.added.union(self.changed)
3100-
3101- def _changes(self, keys):
3102- new = {}
3103- old = {}
3104- for k in keys:
3105- new[k] = self.current_dict.get(k)
3106- old[k] = self.past_dict.get(k)
3107- return "%s -> %s" % (old, new)
3108-
3109- def __str__(self):
3110- if self.modified:
3111- s = dedent("""\
3112- added: %s
3113- removed: %s
3114- changed: %s
3115- unchanged: %s""") % (
3116- self._changes(self.added),
3117- self._changes(self.removed),
3118- self._changes(self.changed),
3119- list(self.unchanged))
3120- else:
3121- s = "no changes"
3122- return s
3123-
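The four DictDiffer properties above reduce to plain set arithmetic on the two dictionaries' key sets. A minimal sketch of the same semantics, using hypothetical before/after dicts:

```python
# Hypothetical past/current dictionaries illustrating DictDiffer's
# added/removed/changed/unchanged properties as set operations.
past = {'a': 1, 'b': 2, 'c': 3}
current = {'b': 2, 'c': 30, 'd': 4}

intersect = set(current) & set(past)
added = set(current) - intersect
removed = set(past) - intersect
changed = {key for key in intersect if past[key] != current[key]}
unchanged = {key for key in intersect if past[key] == current[key]}

print(added, removed, changed, unchanged)
# -> {'d'} {'a'} {'c'} {'b'}
```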
3124-
3125-class Serializer:
3126- """Handle JSON (de)serialization."""
3127-
3128- def __init__(self, path, default=None, serialize=None, deserialize=None):
3129- self.path = path
3130- self.default = default or {}
3131- self.serialize = serialize or json.dump
3132- self.deserialize = deserialize or json.load
3133-
3134- def exists(self):
3135- return os.path.exists(self.path)
3136-
3137- def get(self):
3138- if self.exists():
3139- with open(self.path) as f:
3140- return self.deserialize(f)
3141- return self.default
3142-
3143- def set(self, data):
3144- with open(self.path, 'w') as f:
3145- self.serialize(data, f)
3146
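The removed join_command helper quotes arguments with pipes.quote, which is Python 2 only. An equivalent sketch of the same quoting behaviour using shlex.quote (the Python 3 spelling of the same function):

```python
import shlex

def join_command(args):
    # Quote every argument so spaces and empty strings survive the
    # shell, matching the doctest output of the removed helper.
    return ' '.join(shlex.quote(arg) for arg in args)

print(join_command(['command', 'arg1', 'arg containing spaces', '']))
# command arg1 'arg containing spaces' ''
```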
3147=== added directory 'hooks/tests'
3148=== added file 'hooks/tests/__init__.py'
3149=== added file 'hooks/tests/test_cached_website_hooks.py'
3150--- hooks/tests/test_cached_website_hooks.py 1970-01-01 00:00:00 +0000
3151+++ hooks/tests/test_cached_website_hooks.py 2013-10-29 21:07:21 +0000
3152@@ -0,0 +1,111 @@
3153+from testtools import TestCase
3154+from mock import patch, call
3155+
3156+import hooks
3157+
3158+
3159+class CachedWebsiteRelationTest(TestCase):
3160+
3161+ def setUp(self):
3162+ super(CachedWebsiteRelationTest, self).setUp()
3163+ self.notify_cached_website = self.patch_hook("notify_cached_website")
3164+ self.config_changed = self.patch_hook('config_changed')
3165+ self.config_get = self.patch_hook('config_get')
3166+ self.config_get.return_value = {
3167+ 'enable_forward_proxy': False
3168+ }
3169+
3170+ def patch_hook(self, hook_name):
3171+ mock_controller = patch.object(hooks, hook_name)
3172+ mock = mock_controller.start()
3173+ self.addCleanup(mock_controller.stop)
3174+ return mock
3175+
3176+ def test_website_interface_none(self):
3177+ self.assertEqual(None, hooks.cached_website_interface(hook_name=None))
3178+ self.assertFalse(self.notify_cached_website.called)
3179+
3180+ def test_website_interface_joined(self):
3181+ hooks.cached_website_interface(hook_name="joined")
3182+ self.notify_cached_website.assert_called_once_with(
3183+ relation_ids=(None,))
3184+ self.assertFalse(self.config_changed.called)
3185+
3186+ def test_website_interface_changed(self):
3187+ hooks.cached_website_interface(hook_name="changed")
3188+ self.notify_cached_website.assert_called_once_with(
3189+ relation_ids=(None,))
3190+ self.assertFalse(self.config_changed.called)
3191+
3192+ def test_website_interface_forward_enabled(self):
3193+ self.config_get.return_value = {
3194+ 'enable_forward_proxy': True
3195+ }
3196+ hooks.cached_website_interface(hook_name="changed")
3197+ self.notify_cached_website.assert_called_once_with(
3198+ relation_ids=(None,))
3199+ self.assertTrue(self.config_changed.called)
3200+
3201+
3202+class NotifyCachedWebsiteRelationTest(TestCase):
3203+
3204+ def setUp(self):
3205+ super(NotifyCachedWebsiteRelationTest, self).setUp()
3206+
3207+ self.get_service_ports = self.patch_hook("get_service_ports")
3208+ self.get_sitenames = self.patch_hook("get_sitenames")
3209+ self.get_relation_ids = self.patch_hook("get_relation_ids")
3210+ self.get_hostname = self.patch_hook("get_hostname")
3211+ self.relation_set = self.patch_hook("relation_set")
3212+
3213+ def patch_hook(self, hook_name):
3214+ mock_controller = patch.object(hooks, hook_name)
3215+ mock = mock_controller.start()
3216+ self.addCleanup(mock_controller.stop)
3217+ return mock
3218+
3219+ def test_notify_cached_website_no_relation_ids(self):
3220+ self.get_relation_ids.return_value = ()
3221+
3222+ hooks.notify_cached_website()
3223+
3224+ self.assertFalse(self.relation_set.called)
3225+ self.get_relation_ids.assert_called_once_with(
3226+ "cached-website")
3227+
3228+ def test_notify_cached_website_with_default_relation(self):
3229+ self.get_relation_ids.return_value = ()
3230+ self.get_hostname.return_value = "foo.local"
3231+ self.get_service_ports.return_value = (3128,)
3232+ self.get_sitenames.return_value = ("foo.internal", "bar.internal")
3233+
3234+ hooks.notify_cached_website(relation_ids=(None,))
3235+
3236+ self.get_hostname.assert_called_once_with()
3237+ self.relation_set.assert_called_once_with(
3238+ relation_id=None, port=3128,
3239+ hostname="foo.local",
3240+ sitenames="foo.internal bar.internal")
3241+ self.assertFalse(self.get_relation_ids.called)
3242+
3243+ def test_notify_cached_website_with_relations(self):
3244+ self.get_relation_ids.return_value = ("cached-website:1",
3245+ "cached-website:2")
3246+ self.get_hostname.return_value = "foo.local"
3247+ self.get_service_ports.return_value = (3128,)
3248+ self.get_sitenames.return_value = ("foo.internal", "bar.internal")
3249+
3250+ hooks.notify_cached_website()
3251+
3252+ self.get_hostname.assert_called_once_with()
3253+ self.get_relation_ids.assert_called_once_with("cached-website")
3254+ self.relation_set.assert_has_calls([
3255+ call.relation_set(
3256+ relation_id="cached-website:1", port=3128,
3257+ hostname="foo.local",
3258+ sitenames="foo.internal bar.internal"),
3259+ call.relation_set(
3260+ relation_id="cached-website:2", port=3128,
3261+ hostname="foo.local",
3262+ sitenames="foo.internal bar.internal"),
3263+ ])
3264
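The patch_hook helper defined in both test classes above follows the standard start/addCleanup mock idiom: start the patch immediately and register its stop so the real attribute is restored even when an assertion fails. A self-contained sketch of that pattern (hooks here is a hypothetical stand-in namespace, not the charm's real module, and unittest.mock stands in for the external mock package):

```python
import types
import unittest
from unittest import mock

# Hypothetical stand-in for the charm's ``hooks`` module.
hooks = types.SimpleNamespace(relation_set=lambda **kwargs: None)

class PatchHookExample(unittest.TestCase):

    def patch_hook(self, hook_name):
        # Start the patch now, and register stop() with addCleanup()
        # so it is undone even if the test body raises.
        controller = mock.patch.object(hooks, hook_name)
        mocked = controller.start()
        self.addCleanup(controller.stop)
        return mocked

    def test_relation_set_is_mocked(self):
        relation_set = self.patch_hook('relation_set')
        hooks.relation_set(port=3128)
        relation_set.assert_called_once_with(port=3128)
```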
3265=== added file 'hooks/tests/test_config_changed_hooks.py'
3266--- hooks/tests/test_config_changed_hooks.py 1970-01-01 00:00:00 +0000
3267+++ hooks/tests/test_config_changed_hooks.py 2013-10-29 21:07:21 +0000
3268@@ -0,0 +1,60 @@
3269+from testtools import TestCase
3270+from mock import patch
3271+
3272+import hooks
3273+
3274+
3275+class ConfigChangedTest(TestCase):
3276+
3277+ def setUp(self):
3278+ super(ConfigChangedTest, self).setUp()
3279+ self.get_service_ports = self.patch_hook("get_service_ports")
3280+ self.get_sitenames = self.patch_hook("get_sitenames")
3281+ self.construct_squid3_config = self.patch_hook(
3282+ "construct_squid3_config")
3283+ self.update_nrpe_checks = self.patch_hook(
3284+ "update_nrpe_checks")
3285+ self.update_service_ports = self.patch_hook(
3286+ "update_service_ports")
3287+ self.service_squid3 = self.patch_hook(
3288+ "service_squid3")
3289+ self.notify_cached_website = self.patch_hook("notify_cached_website")
3290+ self.log = self.patch_hook("log")
3291+ self.config_get = self.patch_hook("config_get")
3292+
3293+ def patch_hook(self, hook_name):
3294+ mock_controller = patch.object(hooks, hook_name)
3295+ mock = mock_controller.start()
3296+ self.addCleanup(mock_controller.stop)
3297+ return mock
3298+
3299+ def test_config_changed_notify_cached_website_changed_stanzas(self):
3300+ self.service_squid3.return_value = True
3301+ self.get_sitenames.side_effect = (
3302+ ('foo.internal',),
3303+ ('foo.internal', 'bar.internal'))
3304+
3305+ hooks.config_changed()
3306+
3307+ self.notify_cached_website.assert_called_once_with()
3308+
3309+ def test_config_changed_no_notify_cached_website_not_changed(self):
3310+ self.service_squid3.return_value = True
3311+ self.get_sitenames.side_effect = (
3312+ ('foo.internal',),
3313+ ('foo.internal',))
3314+
3315+ hooks.config_changed()
3316+
3317+ self.assertFalse(self.notify_cached_website.called)
3318+
3319+ @patch("sys.exit")
3320+ def test_config_changed_no_notify_cached_website_failed_check(self, exit):
3321+ self.service_squid3.return_value = False
3322+ self.get_sitenames.side_effect = (
3323+ ('foo.internal',),
3324+ ('foo.internal', 'bar.internal'))
3325+
3326+ hooks.config_changed()
3327+
3328+ self.assertFalse(self.notify_cached_website.called)
3329
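The tests above feed get_sitenames an iterable side_effect, so the first call returns the sitenames before the config rewrite and the second call the sitenames after it; config_changed compares the two to decide whether to notify. A minimal sketch of that mock technique (values are hypothetical, and unittest.mock stands in for the mock package used by the charm):

```python
from unittest import mock

get_sitenames = mock.Mock()
# An iterable side_effect is consumed one item per call: first the
# "before" sitenames, then the "after" set.
get_sitenames.side_effect = (
    ('foo.internal',),
    ('foo.internal', 'bar.internal'),
)

before = get_sitenames()
after = get_sitenames()
# Only a difference between the two should trigger a notification.
assert before != after
```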
3330=== added file 'hooks/tests/test_helpers.py'
3331--- hooks/tests/test_helpers.py 1970-01-01 00:00:00 +0000
3332+++ hooks/tests/test_helpers.py 2013-10-29 21:07:21 +0000
3333@@ -0,0 +1,415 @@
3334+import os
3335+import json
3336+import yaml
3337+
3338+from os.path import dirname
3339+
3340+from testtools import TestCase
3341+from testtools.matchers import AfterPreprocessing, Contains
3342+from mock import patch
3343+
3344+import hooks
3345+from charmhelpers.core.hookenv import Serializable
3346+
3347+
3348+def normalize_whitespace(data):
3349+ return ' '.join(chunk for chunk in data.split())
3350+
3351+
3352+@patch.dict(os.environ, {"CHARM_DIR": dirname(dirname(dirname(__file__)))})
3353+class SquidConfigTest(TestCase):
3354+
3355+ def _assert_contents(self, *expected):
3356+ def side_effect(got):
3357+ for chunk in expected:
3358+ self.assertThat(got, AfterPreprocessing(
3359+ normalize_whitespace,
3360+ Contains(normalize_whitespace(chunk))))
3361+ return side_effect
3362+
3363+ def _apply_patch(self, name):
3364+ p = patch(name)
3365+ mocked_name = p.start()
3366+ self.addCleanup(p.stop)
3367+ return mocked_name
3368+
3369+ def setUp(self):
3370+ super(SquidConfigTest, self).setUp()
3371+ self.get_reverse_sites = self._apply_patch('hooks.get_reverse_sites')
3372+ self.get_forward_sites = self._apply_patch('hooks.get_forward_sites')
3373+ self.config_get = self._apply_patch('hooks.config_get')
3374+ self.write_squid3_config = self._apply_patch(
3375+ 'hooks.write_squid3_config')
3376+
3377+ self.get_forward_sites.return_value = {}
3378+
3379+ def test_squid_config_no_sites(self):
3380+ self.config_get.return_value = Serializable({
3381+ "refresh_patterns": "",
3382+ "cache_size_mb": 1024,
3383+ "target_objs_per_dir": 1024,
3384+ "avg_obj_size_kb": 1024,
3385+ "via": "on",
3386+ })
3387+ self.get_reverse_sites.return_value = None
3388+ self.write_squid3_config.side_effect = self._assert_contents(
3389+ """
3390+ always_direct deny all
3391+ """,
3392+ """
3393+ via on
3394+ """,
3395+ )
3396+ hooks.construct_squid3_config()
3397+
3398+ def test_squid_config_via_off(self):
3399+ self.config_get.return_value = Serializable({
3400+ "refresh_patterns": "",
3401+ "cache_size_mb": 1024,
3402+ "target_objs_per_dir": 1024,
3403+ "avg_obj_size_kb": 1024,
3404+ "via": "off",
3405+ })
3406+ self.get_reverse_sites.return_value = None
3407+ self.write_squid3_config.side_effect = self._assert_contents(
3408+ """
3409+ via off
3410+ """,
3411+ )
3412+ hooks.construct_squid3_config()
3413+
3414+ def test_squid_config_refresh_pattern_json(self):
3415+ self.config_get.return_value = Serializable({
3416+ "refresh_patterns": json.dumps(
3417+ {"http://www.ubuntu.com":
3418+ {"min": 0, "percent": 20, "max": 60}}),
3419+ "cache_size_mb": 1024,
3420+ "target_objs_per_dir": 1024,
3421+ "avg_obj_size_kb": 1024,
3422+ })
3423+ self.get_reverse_sites.return_value = None
3424+ self.write_squid3_config.side_effect = self._assert_contents(
3425+ """
3426+ refresh_pattern http://www.ubuntu.com 0 20% 60
3427+ """,
3428+ )
3429+ hooks.construct_squid3_config()
3430+
3431+ def test_squid_config_refresh_pattern_yaml(self):
3432+ self.config_get.return_value = Serializable({
3433+ "refresh_patterns": yaml.dump(
3434+ {"http://www.ubuntu.com":
3435+ {"min": 0, "percent": 20, "max": 60}}),
3436+ "cache_size_mb": 1024,
3437+ "target_objs_per_dir": 1024,
3438+ "avg_obj_size_kb": 1024,
3439+ })
3440+ self.get_reverse_sites.return_value = None
3441+ self.write_squid3_config.side_effect = self._assert_contents(
3442+ """
3443+ refresh_pattern http://www.ubuntu.com 0 20% 60
3444+ """,
3445+ )
3446+ hooks.construct_squid3_config()
3447+
3448+ def test_squid_config_refresh_pattern_options(self):
3449+ self.config_get.return_value = Serializable({
3450+ "refresh_patterns": yaml.dump(
3451+ {"http://www.ubuntu.com":
3452+ {"min": 0, "percent": 20, "max": 60,
3453+ "options": ["override-lastmod",
3454+ "reload-into-ims"]}}),
3455+ "cache_size_mb": 1024,
3456+ "target_objs_per_dir": 1024,
3457+ "avg_obj_size_kb": 1024,
3458+ })
3459+ self.get_reverse_sites.return_value = None
3460+ self.write_squid3_config.side_effect = self._assert_contents(
3461+ """
3462+ refresh_pattern http://www.ubuntu.com 0 20% 60
3463+ override-lastmod reload-into-ims
3464+ refresh_pattern . 30 20% 4320
3465+ """,
3466+ )
3467+ hooks.construct_squid3_config()
3468+
3469+ def test_squid_config_refresh_pattern_default(self):
3470+ self.config_get.return_value = Serializable({
3471+ "refresh_patterns": yaml.dump(
3472+ {".":
3473+ {"min": 0, "percent": 20, "max": 60,
3474+ "options": ["override-lastmod",
3475+ "reload-into-ims"]}}),
3476+ "cache_size_mb": 1024,
3477+ "target_objs_per_dir": 1024,
3478+ "avg_obj_size_kb": 1024,
3479+ })
3480+ self.get_reverse_sites.return_value = None
3481+ self.write_squid3_config.side_effect = self._assert_contents(
3482+ """
3483+ refresh_pattern . 0 20% 60
3484+ override-lastmod reload-into-ims
3485+ """,
3486+ )
3487+ hooks.construct_squid3_config()
3488+
3489+ def test_squid_config_no_sitenames(self):
3490+ self.config_get.return_value = Serializable({
3491+ "refresh_patterns": "",
3492+ "cache_size_mb": 1024,
3493+ "target_objs_per_dir": 1024,
3494+ "avg_obj_size_kb": 1024,
3495+ })
3496+ self.get_reverse_sites.return_value = {
3497+ None: [
3498+ hooks.Server("website_1__foo_1", "1.2.3.4", 4242, ''),
3499+ hooks.Server("website_1__foo_2", "1.2.3.5", 4242, ''),
3500+ ],
3501+ }
3502+ self.write_squid3_config.side_effect = self._assert_contents(
3503+ """
3504+ acl no_sitename_acl myport
3505+ http_access allow accel_ports no_sitename_acl
3506+ never_direct allow no_sitename_acl
3507+ """,
3508+ """
3509+ cache_peer 1.2.3.4 parent 4242 0 name=website_1__foo_1 no-query
3510+ no-digest originserver round-robin login=PASS
3511+ cache_peer_access website_1__foo_1 allow no_sitename_acl
3512+ cache_peer_access website_1__foo_1 deny all
3513+ """,
3514+ """
3515+ cache_peer 1.2.3.5 parent 4242 0 name=website_1__foo_2 no-query
3516+ no-digest originserver round-robin login=PASS
3517+ cache_peer_access website_1__foo_2 allow no_sitename_acl
3518+ cache_peer_access website_1__foo_2 deny all
3519+ """
3520+ )
3521+ hooks.construct_squid3_config()
3522+
3523+ def test_squid_config_with_domain(self):
3524+ self.config_get.return_value = Serializable({
3525+ "refresh_patterns": "",
3526+ "cache_size_mb": 1024,
3527+ "target_objs_per_dir": 1024,
3528+ "avg_obj_size_kb": 1024,
3529+ })
3530+ self.get_reverse_sites.return_value = {
3531+ "foo.com": [
3532+ hooks.Server("foo_com__website_1__foo_1",
3533+ "1.2.3.4", 4242, "forceddomain=example.com"),
3534+ hooks.Server("foo_com__website_1__foo_2",
3535+ "1.2.3.5", 4242, "forceddomain=example.com"),
3536+ ],
3537+ }
3538+ self.write_squid3_config.side_effect = self._assert_contents(
3539+ """
3540+ acl s_1_acl dstdomain foo.com
3541+ http_access allow accel_ports s_1_acl
3542+ http_access allow CONNECT SSL_ports s_1_acl
3543+ always_direct allow CONNECT SSL_ports s_1_acl
3544+ always_direct deny s_1_acl
3545+ """,
3546+ """
3547+ cache_peer 1.2.3.4 parent 4242 0 name=foo_com__website_1__foo_1
3548+ no-query no-digest originserver round-robin login=PASS
3549+ forceddomain=example.com
3550+ cache_peer_access foo_com__website_1__foo_1 allow s_1_acl
3551+ cache_peer_access foo_com__website_1__foo_1 deny all
3552+ """,
3553+ """
3554+ cache_peer 1.2.3.5 parent 4242 0 name=foo_com__website_1__foo_2
3555+ no-query no-digest originserver round-robin login=PASS
3556+ forceddomain=example.com
3557+ cache_peer_access foo_com__website_1__foo_2 allow s_1_acl
3558+ cache_peer_access foo_com__website_1__foo_2 deny all
3559+ """
3560+ )
3561+ hooks.construct_squid3_config()
3562+
3563+ def test_with_domain_no_servers_only_direct(self):
3564+ self.config_get.return_value = Serializable({
3565+ "refresh_patterns": "",
3566+ "cache_size_mb": 1024,
3567+ "target_objs_per_dir": 1024,
3568+ "avg_obj_size_kb": 1024,
3569+ })
3570+ self.get_reverse_sites.return_value = {
3571+ "foo.com": [
3572+ ],
3573+ }
3574+ self.write_squid3_config.side_effect = self._assert_contents(
3575+ """
3576+ acl s_1_acl dstdomain foo.com
3577+ http_access allow accel_ports s_1_acl
3578+ http_access allow CONNECT SSL_ports s_1_acl
3579+ always_direct allow s_1_acl
3580+ """,
3581+ )
3582+ hooks.construct_squid3_config()
3583+
3584+ def test_with_balancer_no_servers_only_direct(self):
3585+ self.config_get.return_value = Serializable({
3586+ "refresh_patterns": "",
3587+ "cache_size_mb": 1024,
3588+ "target_objs_per_dir": 1024,
3589+ "avg_obj_size_kb": 1024,
3590+ "x_balancer_name_allowed": True,
3591+ })
3592+ self.get_reverse_sites.return_value = {
3593+ "foo.com": [],
3594+ }
3595+ self.write_squid3_config.side_effect = self._assert_contents(
3596+ """
3597+ acl s_1_acl dstdomain foo.com
3598+ http_access allow accel_ports s_1_acl
3599+ http_access allow CONNECT SSL_ports s_1_acl
3600+ always_direct allow s_1_acl
3601+ """,
3602+ """
3603+ acl s_1_balancer req_header X-Balancer-Name foo\.com
3604+ http_access allow accel_ports s_1_balancer
3605+ http_access allow CONNECT SSL_ports s_1_balancer
3606+ always_direct allow s_1_balancer
3607+ """,
3608+ )
3609+ hooks.construct_squid3_config()
3610+
3611+ def test_with_balancer_name(self):
3612+ self.config_get.return_value = Serializable({
3613+ "refresh_patterns": "",
3614+ "cache_size_mb": 1024,
3615+ "target_objs_per_dir": 1024,
3616+ "avg_obj_size_kb": 1024,
3617+ "x_balancer_name_allowed": True,
3618+ })
3619+ self.get_reverse_sites.return_value = {
3620+ "foo.com": [
3621+ hooks.Server("foo_com__website_1__foo_1",
3622+ "1.2.3.4", 4242, ''),
3623+ hooks.Server("foo_com__website_1__foo_2",
3624+ "1.2.3.5", 4242, ''),
3625+ ],
3626+ }
3627+ self.write_squid3_config.side_effect = self._assert_contents(
3628+ """
3629+ acl s_1_acl dstdomain foo.com
3630+ http_access allow accel_ports s_1_acl
3631+ http_access allow CONNECT SSL_ports s_1_acl
3632+ always_direct allow CONNECT SSL_ports s_1_acl
3633+ always_direct deny s_1_acl
3634+ """,
3635+ """
3636+ acl s_1_balancer req_header X-Balancer-Name foo\.com
3637+ http_access allow accel_ports s_1_balancer
3638+ http_access allow CONNECT SSL_ports s_1_balancer
3639+ always_direct allow CONNECT SSL_ports s_1_balancer
3640+ always_direct deny s_1_balancer
3641+ """,
3642+ """
3643+ cache_peer 1.2.3.4 parent 4242 0 name=foo_com__website_1__foo_1
3644+ no-query no-digest originserver round-robin login=PASS
3645+ cache_peer_access foo_com__website_1__foo_1 allow s_1_acl
3646+ cache_peer_access foo_com__website_1__foo_1 allow s_1_balancer
3647+ cache_peer_access foo_com__website_1__foo_1 deny all
3648+ """,
3649+ """
3650+ cache_peer 1.2.3.5 parent 4242 0 name=foo_com__website_1__foo_2
3651+ no-query no-digest originserver round-robin login=PASS
3652+ cache_peer_access foo_com__website_1__foo_2 allow s_1_acl
3653+ """
3654+ )
3655+ hooks.construct_squid3_config()
3656+
3657+ def test_forward_enabled(self):
3658+ self.config_get.return_value = Serializable({
3659+ "enable_forward_proxy": True,
3660+ "refresh_patterns": "",
3661+ "cache_size_mb": 1024,
3662+ "target_objs_per_dir": 1024,
3663+ "avg_obj_size_kb": 1024,
3664+ })
3665+ self.get_reverse_sites.return_value = {
3666+ "foo.com": [],
3667+ }
3668+ self.get_forward_sites.return_value = [
3669+ {'private-address': '1.2.3.4', 'name': 'service_unit_0'},
3670+ {'private-address': '2.3.4.5', 'name': 'service_unit_1'},
3671+ ]
3672+ self.write_squid3_config.side_effect = self._assert_contents(
3673+ """
3674+ acl fwd_service_unit_0 src 1.2.3.4
3675+ http_access allow fwd_service_unit_0
3676+ http_access allow CONNECT SSL_ports fwd_service_unit_0
3677+ always_direct allow fwd_service_unit_0
3678+ always_direct allow CONNECT SSL_ports fwd_service_unit_0
3679+ acl fwd_service_unit_1 src 2.3.4.5
3680+ http_access allow fwd_service_unit_1
3681+ http_access allow CONNECT SSL_ports fwd_service_unit_1
3682+ always_direct allow fwd_service_unit_1
3683+ always_direct allow CONNECT SSL_ports fwd_service_unit_1
3684+ """,
3685+ """
3686+ acl s_1_acl dstdomain foo.com
3687+ http_access allow accel_ports s_1_acl
3688+ http_access allow CONNECT SSL_ports s_1_acl
3689+ always_direct allow s_1_acl
3690+ """
3691+ )
3692+ hooks.construct_squid3_config()
3693+
3694+ def test_squid_config_cache_enabled(self):
3695+ self.config_get.return_value = Serializable({
3696+ "refresh_patterns": "",
3697+ "cache_size_mb": 1024,
3698+ "cache_mem_mb": 256,
3699+ "cache_dir": "/var/run/squid3",
3700+ "target_objs_per_dir": 16,
3701+ "avg_obj_size_kb": 4,
3702+ })
3703+ self.get_reverse_sites.return_value = None
3704+ self.write_squid3_config.side_effect = self._assert_contents(
3705+ """
3706+ cache_dir aufs /var/run/squid3 1024 32 512
3707+ cache_mem 256 MB
3708+ """,
3709+ )
3710+ hooks.construct_squid3_config()
3711+
3712+ def test_squid_config_cache_disabled(self):
3713+ self.config_get.return_value = Serializable({
3714+ "refresh_patterns": "",
3715+ "cache_size_mb": 0,
3716+ "target_objs_per_dir": 1024,
3717+ "avg_obj_size_kb": 1024,
3718+ })
3719+ self.get_reverse_sites.return_value = None
3720+ self.write_squid3_config.side_effect = self._assert_contents(
3721+ """
3722+ cache deny all
3723+ """,
3724+ )
3725+ hooks.construct_squid3_config()
3726+
3727+
3728+class HelpersTest(TestCase):
3729+ def test_gets_config(self):
3730+ json_string = '{"foo": "BAR"}'
3731+ with patch('subprocess.check_output') as check_output:
3732+ check_output.return_value = json_string
3733+
3734+ result = hooks.config_get()
3735+
3736+ self.assertEqual(result['foo'], 'BAR')
3737+ check_output.assert_called_with(['config-get', '--format=json'])
3738+
3739+ def test_gets_config_with_scope(self):
3740+ json_string = '{"foo": "BAR"}'
3741+ with patch('subprocess.check_output') as check_output:
3742+ check_output.return_value = json_string
3743+
3744+ result = hooks.config_get(scope='baz')
3745+
3746+ self.assertEqual(result['foo'], 'BAR')
3747+ check_output.assert_called_with(['config-get', 'baz',
3748+ '--format=json'])
3749
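The _assert_contents checks in SquidConfigTest compare template output only after collapsing whitespace, so indentation and line wrapping in the rendered squid.conf cannot break the assertions. The core of that comparison can be sketched without the testtools matchers (the rendered/expected strings below are illustrative fragments):

```python
def normalize_whitespace(data):
    # Collapse every run of whitespace to a single space so template
    # indentation and wrapping do not affect the comparison.
    return ' '.join(data.split())

rendered = """
    cache_peer 1.2.3.4 parent 4242 0 name=website_1__foo_1 no-query
        no-digest originserver round-robin login=PASS
"""
expected = """
    cache_peer 1.2.3.4 parent 4242 0 name=website_1__foo_1
    no-query no-digest originserver round-robin login=PASS
"""
assert normalize_whitespace(expected) in normalize_whitespace(rendered)
```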
3750=== added file 'hooks/tests/test_nrpe_hooks.py'
3751--- hooks/tests/test_nrpe_hooks.py 1970-01-01 00:00:00 +0000
3752+++ hooks/tests/test_nrpe_hooks.py 2013-10-29 21:07:21 +0000
3753@@ -0,0 +1,271 @@
3754+import os
3755+import grp
3756+import pwd
3757+import subprocess
3758+
3759+from testtools import TestCase
3760+from mock import patch, call
3761+
3762+import hooks
3763+
3764+from charmhelpers.contrib.charmsupport import nrpe
3765+from charmhelpers.core.hookenv import Serializable
3766+
3767+
3768+class NRPERelationTest(TestCase):
3769+ """Tests for the update_nrpe_checks hook.
3770+
3771+ Half of this is already tested in the tests for charmsupport.nrpe, but
3772+ as the hook in the charm pre-dates that, the tests are left here to ensure
3773+ backwards-compatibility.
3774+
3775+ """
3776+ patches = {
3777+ 'config': {'object': nrpe},
3778+ 'log': {'object': nrpe},
3779+ 'getpwnam': {'object': pwd},
3780+ 'getgrnam': {'object': grp},
3781+ 'mkdir': {'object': os},
3782+ 'chown': {'object': os},
3783+ 'exists': {'object': os.path},
3784+ 'listdir': {'object': os},
3785+ 'remove': {'object': os},
3786+ 'open': {'object': nrpe, 'create': True},
3787+ 'isfile': {'object': os.path},
3788+ 'call': {'object': subprocess},
3789+ 'install_nrpe_scripts': {'object': hooks},
3790+ 'relation_ids': {'object': nrpe},
3791+ 'relation_set': {'object': nrpe},
3792+ }
3793+
3794+ def setUp(self):
3795+ super(NRPERelationTest, self).setUp()
3796+ self.patched = {}
3797+ # Mock the universe.
3798+ for attr, data in self.patches.items():
3799+ create = data.get('create', False)
3800+ patcher = patch.object(data['object'], attr, create=create)
3801+ self.patched[attr] = patcher.start()
3802+ self.addCleanup(patcher.stop)
3803+        if 'JUJU_UNIT_NAME' not in os.environ:
3804+ os.environ['JUJU_UNIT_NAME'] = 'test'
3805+
3806+ def check_call_counts(self, **kwargs):
3807+ for attr, expected in kwargs.items():
3808+ patcher = self.patched[attr]
3809+ self.assertEqual(expected, patcher.call_count, attr)
3810+
3811+ def test_update_nrpe_no_nagios_bails(self):
3812+ config = {'nagios_context': 'test'}
3813+ self.patched['config'].return_value = Serializable(config)
3814+ self.patched['getpwnam'].side_effect = KeyError
3815+
3816+ self.assertEqual(None, hooks.update_nrpe_checks())
3817+
3818+ expected = 'Nagios user not set up, nrpe checks not updated'
3819+ self.patched['log'].assert_called_once_with(expected)
3820+ self.check_call_counts(log=1, config=1, getpwnam=1)
3821+
3822+ def test_update_nrpe_removes_existing_config(self):
3823+ config = {
3824+ 'nagios_context': 'test',
3825+ 'nagios_check_http_params': '-u http://example.com/url',
3826+ }
3827+ self.patched['config'].return_value = Serializable(config)
3828+ self.patched['exists'].return_value = True
3829+ self.patched['listdir'].return_value = [
3830+ 'foo', 'bar.cfg', 'check_squidrp.cfg']
3831+
3832+ self.assertEqual(None, hooks.update_nrpe_checks())
3833+
3834+ expected = '/var/lib/nagios/export/check_squidrp.cfg'
3835+ self.patched['remove'].assert_called_once_with(expected)
3836+ self.check_call_counts(config=1, getpwnam=1, getgrnam=1,
3837+ exists=5, remove=1, open=4, listdir=2)
3838+
3839+ def test_update_nrpe_uses_check_squidpeers(self):
3840+ config = {
3841+ 'nagios_context': 'test',
3842+ }
3843+ self.patched['config'].return_value = Serializable(config)
3844+ self.patched['exists'].return_value = True
3845+ self.patched['isfile'].return_value = False
3846+
3847+ self.assertEqual(None, hooks.update_nrpe_checks())
3848+ self.assertEqual(2, self.patched['open'].call_count)
3849+ filename = 'check_squidpeers.cfg'
3850+
3851+ service_file_contents = """
3852+#---------------------------------------------------
3853+# This file is Juju managed
3854+#---------------------------------------------------
3855+define service {
3856+ use active-service
3857+ host_name test-test
3858+ service_description test-test[squidpeers] Check Squid Peers
3859+ check_command check_nrpe!check_squidpeers
3860+ servicegroups test
3861+}
3862+"""
3863+ self.patched['open'].assert_has_calls(
3864+ [call('/etc/nagios/nrpe.d/%s' % filename, 'w'),
3865+ call('/var/lib/nagios/export/service__test-test_%s' %
3866+ filename, 'w'),
3867+ call().__enter__().write(service_file_contents),
3868+ call().__enter__().write('# check squidpeers\n'),
3869+ call().__enter__().write(
3870+ 'command[check_squidpeers]='
3871+ '/check_squidpeers\n')],
3872+ any_order=True)
3873+
3874+ self.check_call_counts(config=1, getpwnam=1, getgrnam=1,
3875+ exists=3, open=2, listdir=1)
3876+
3877+ def test_update_nrpe_with_check_url(self):
3878+ config = {
3879+ 'nagios_context': 'test',
3880+ 'nagios_check_http_params': '-u foo -H bar',
3881+ }
3882+ self.patched['config'].return_value = Serializable(config)
3883+ self.patched['exists'].return_value = True
3884+ self.patched['isfile'].return_value = False
3885+
3886+ self.assertEqual(None, hooks.update_nrpe_checks())
3887+ self.assertEqual(4, self.patched['open'].call_count)
3888+ filename = 'check_squidrp.cfg'
3889+
3890+ service_file_contents = """
3891+#---------------------------------------------------
3892+# This file is Juju managed
3893+#---------------------------------------------------
3894+define service {
3895+ use active-service
3896+ host_name test-test
3897+ service_description test-test[squidrp] Check Squid
3898+ check_command check_nrpe!check_squidrp
3899+ servicegroups test
3900+}
3901+"""
3902+ self.patched['open'].assert_has_calls(
3903+ [call('/etc/nagios/nrpe.d/%s' % filename, 'w'),
3904+ call('/var/lib/nagios/export/service__test-test_%s' %
3905+ filename, 'w'),
3906+ call().__enter__().write(service_file_contents),
3907+ call().__enter__().write('# check squidrp\n'),
3908+ call().__enter__().write(
3909+ 'command[check_squidrp]=/check_http -u foo -H bar\n')],
3910+ any_order=True)
3911+
3912+ self.check_call_counts(config=1, getpwnam=1, getgrnam=1,
3913+ exists=5, open=4, listdir=2)
3914+
3915+ def test_update_nrpe_with_no_check_path(self):
3916+ config = {
3917+ 'nagios_context': 'test',
3918+ 'services': '- {service_name: i_ytimg_com}',
3919+ }
3920+ self.patched['config'].return_value = Serializable(config)
3921+ self.patched['exists'].return_value = True
3922+
3923+ self.assertEqual(None, hooks.update_nrpe_checks())
3924+
3925+ self.check_call_counts(config=1, getpwnam=1, getgrnam=1,
3926+ exists=3, open=2, listdir=1)
3927+
3928+ def test_update_nrpe_with_services_and_host_header(self):
3929+ config = {
3930+ 'nagios_context': 'test',
3931+ 'services': '- {service_name: i_ytimg_com, nrpe_check_path: /}',
3932+ }
3933+ self.patched['config'].return_value = Serializable(config)
3934+ self.patched['exists'].return_value = True
3935+
3936+ self.assertEqual(None, hooks.update_nrpe_checks())
3937+
3938+ self.check_call_counts(config=1, getpwnam=1, getgrnam=1,
3939+ exists=5, open=4, listdir=2)
3940+ expected = ('command[check_squid-i_ytimg_com]=/check_http '
3941+ '-I 127.0.0.1 -p 3128 --method=HEAD '
3942+ '-u http://i_ytimg_com/\n')
3943+ self.patched['open'].assert_has_calls(
3944+ [call('/etc/nagios/nrpe.d/check_squid-i_ytimg_com.cfg', 'w'),
3945+ call().__enter__().write(expected)],
3946+ any_order=True)
3947+
3948+ def test_update_nrpe_with_dotted_service_name_and_host_header(self):
3949+ config = {
3950+ 'nagios_context': 'test',
3951+ 'services': '- {service_name: i.ytimg.com, nrpe_check_path: /}',
3952+ }
3953+ self.patched['config'].return_value = Serializable(config)
3954+ self.patched['exists'].return_value = True
3955+
3956+ self.assertEqual(None, hooks.update_nrpe_checks())
3957+
3958+ self.check_call_counts(config=1, getpwnam=1, getgrnam=1,
3959+ exists=5, open=4, listdir=2)
3960+ expected = ('command[check_squid-i_ytimg_com]=/check_http '
3961+ '-I 127.0.0.1 -p 3128 --method=HEAD '
3962+ '-u http://i.ytimg.com/\n')
3963+ self.patched['open'].assert_has_calls(
3964+ [call('/etc/nagios/nrpe.d/check_squid-i_ytimg_com.cfg', 'w'),
3965+ call().__enter__().write(expected)],
3966+ any_order=True)
3967+
3968+ def test_update_nrpe_with_services_and_balancer_name_header(self):
3969+ config = {
3970+ 'nagios_context': 'test',
3971+ 'x_balancer_name_allowed': True,
3972+ 'services': '- {service_name: i_ytimg_com, nrpe_check_path: /}',
3973+ }
3974+ self.patched['config'].return_value = Serializable(config)
3975+ self.patched['exists'].return_value = True
3976+
3977+ self.assertEqual(None, hooks.update_nrpe_checks())
3978+
3979+ self.check_call_counts(config=1, getpwnam=1, getgrnam=1,
3980+ exists=5, open=4, listdir=2)
3981+
3982+ expected = ('command[check_squid-i_ytimg_com]=/check_http '
3983+ '-I 127.0.0.1 -p 3128 --method=HEAD -u http://localhost/ '
3984+ "-k 'X-Balancer-Name: i_ytimg_com'\n")
3985+ self.patched['open'].assert_has_calls(
3986+ [call('/etc/nagios/nrpe.d/check_squid-i_ytimg_com.cfg', 'w'),
3987+ call().__enter__().write(expected)],
3988+ any_order=True)
3989+
3990+ def test_update_nrpe_with_services_and_optional_path(self):
3991+ services = '- {nrpe_check_path: /foo.jpg, service_name: foo_com}\n'
3992+ config = {
3993+ 'nagios_context': 'test',
3994+ 'services': services,
3995+ }
3996+ self.patched['config'].return_value = Serializable(config)
3997+ self.patched['exists'].return_value = True
3998+
3999+ self.assertEqual(None, hooks.update_nrpe_checks())
4000+
4001+ self.check_call_counts(config=1, getpwnam=1, getgrnam=1,
4002+ exists=5, open=4, listdir=2)
4003+ expected = ('command[check_squid-foo_com]=/check_http '
4004+ '-I 127.0.0.1 -p 3128 --method=HEAD '
4005+ '-u http://foo_com/foo.jpg\n')
4006+ self.patched['open'].assert_has_calls(
4007+ [call('/etc/nagios/nrpe.d/check_squid-foo_com.cfg', 'w'),
4008+ call().__enter__().write(expected)],
4009+ any_order=True)
4010+
4011+ def test_update_nrpe_restarts_service(self):
4012+ config = {
4013+ 'nagios_context': 'test',
4014+ 'nagios_check_http_params': '-u foo -p 3128'
4015+ }
4016+ self.patched['config'].return_value = Serializable(config)
4017+ self.patched['exists'].return_value = True
4018+
4019+ self.assertEqual(None, hooks.update_nrpe_checks())
4020+
4021+ expected = ['service', 'nagios-nrpe-server', 'restart']
4022+ self.assertEqual(expected, self.patched['call'].call_args[0][0])
4023+ self.check_call_counts(config=1, getpwnam=1, getgrnam=1,
4024+ exists=5, open=4, listdir=2, call=1)
4025
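
The open-call assertions in the tests above rely on mock recording the whole context-manager call chain on a single patched `open`. A minimal standalone sketch of the same pattern, using the stdlib `unittest.mock` instead of the external `mock` package (the path and body here are illustrative, not taken from the charm):

```python
from unittest.mock import patch, call

def write_config(path, body):
    # Same shape as the hook code under test: open a file as a
    # context manager and write the check definition into it.
    with open(path, 'w') as f:
        f.write(body)

with patch('builtins.open', create=True) as mock_open:
    write_config('/etc/nagios/nrpe.d/check_demo.cfg', '# check demo\n')

# mock records open(), __enter__() and write() as one chain, which is
# exactly what assert_has_calls with call().__enter__().write() matches.
mock_open.assert_has_calls(
    [call('/etc/nagios/nrpe.d/check_demo.cfg', 'w'),
     call().__enter__().write('# check demo\n')],
    any_order=True)
```

If the assertion fails, `assert_has_calls` raises `AssertionError` listing the recorded `mock_calls`, which is usually the quickest way to see which write went missing.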
4026=== added file 'hooks/tests/test_website_hooks.py'
4027--- hooks/tests/test_website_hooks.py 1970-01-01 00:00:00 +0000
4028+++ hooks/tests/test_website_hooks.py 2013-10-29 21:07:21 +0000
4029@@ -0,0 +1,267 @@
4030+import yaml
4031+
4032+from testtools import TestCase
4033+from mock import patch
4034+
4035+import hooks
4036+from charmhelpers.core.hookenv import Serializable
4037+
4038+
4039+class WebsiteRelationTest(TestCase):
4040+
4041+ def setUp(self):
4042+ super(WebsiteRelationTest, self).setUp()
4043+
4044+ relations_of_type = patch.object(hooks, "relations_of_type")
4045+ self.relations_of_type = relations_of_type.start()
4046+ self.addCleanup(relations_of_type.stop)
4047+
4048+ config_get = patch.object(hooks, "config_get")
4049+ self.config_get = config_get.start()
4050+ self.addCleanup(config_get.stop)
4051+
4052+ log = patch.object(hooks, "log")
4053+ self.log = log.start()
4054+ self.addCleanup(log.stop)
4055+
4056+ def test_relation_data_returns_no_relations(self):
4057+ self.config_get.return_value = Serializable({})
4058+ self.relations_of_type.return_value = []
4059+ self.assertEqual(None, hooks.get_reverse_sites())
4060+
4061+ def test_no_port_in_relation_data(self):
4062+ self.config_get.return_value = Serializable({})
4063+ self.relations_of_type.return_value = [
4064+ {"private-address": "1.2.3.4",
4065+ "__unit__": "foo/1",
4066+ "__relid__": "website:1"},
4067+ ]
4068+ self.assertIs(None, hooks.get_reverse_sites())
4069+ self.log.assert_called_once_with(
4070+ "No port in relation data for 'foo/1', skipping.")
4071+
4072+ def test_empty_relation_services(self):
4073+ self.config_get.return_value = Serializable({})
4074+ self.relations_of_type.return_value = [
4075+ {"private-address": "1.2.3.4",
4076+ "__unit__": "foo/1",
4077+ "__relid__": "website:1",
4078+ "all_services": "",
4079+ },
4080+ ]
4081+
4082+ self.assertEqual(None, hooks.get_reverse_sites())
4083+ self.assertFalse(self.log.called)
4084+
4085+ def test_no_port_in_relation_data_ok_with_all_services(self):
4086+ self.config_get.return_value = Serializable({})
4087+ self.relations_of_type.return_value = [
4088+ {"private-address": "1.2.3.4",
4089+ "__unit__": "foo/1",
4090+ "__relid__": "website:1",
4091+ "all_services": yaml.dump([
4092+ {"service_name": "foo.internal",
4093+ "service_port": 4242},
4094+ ]),
4095+ },
4096+ ]
4097+
4098+ expected = {
4099+ "foo.internal": [
4100+ ("foo_internal__website_1__foo_1", "1.2.3.4", 4242, ''),
4101+ ],
4102+ }
4103+ self.assertEqual(expected, hooks.get_reverse_sites())
4104+ self.assertFalse(self.log.called)
4105+
4106+ def test_no_private_address_in_relation_data(self):
4107+ self.config_get.return_value = Serializable({})
4108+ self.relations_of_type.return_value = [
4109+ {"port": 4242,
4110+ "__unit__": "foo/0",
4111+ "__relid__": "website:1",
4112+ },
4113+ ]
4114+ self.assertIs(None, hooks.get_reverse_sites())
4115+ self.log.assert_called_once_with(
4116+ "No private-address in relation data for 'foo/0', skipping.")
4117+
4118+ def test_sitenames_in_relation_data(self):
4119+ self.config_get.return_value = Serializable({})
4120+ self.relations_of_type.return_value = [
4121+ {"private-address": "1.2.3.4",
4122+ "port": 4242,
4123+ "__unit__": "foo/1",
4124+ "__relid__": "website:1",
4125+ "sitenames": "foo.internal bar.internal"},
4126+ {"private-address": "1.2.3.5",
4127+ "port": 4242,
4128+ "__unit__": "foo/2",
4129+ "__relid__": "website:1",
4130+ "sitenames": "foo.internal bar.internal"},
4131+ ]
4132+ expected = {
4133+ "foo.internal": [
4134+ ("foo_internal__website_1__foo_1", "1.2.3.4", 4242, ''),
4135+ ("foo_internal__website_1__foo_2", "1.2.3.5", 4242, ''),
4136+ ],
4137+ "bar.internal": [
4138+ ("bar_internal__website_1__foo_1", "1.2.3.4", 4242, ''),
4139+ ("bar_internal__website_1__foo_2", "1.2.3.5", 4242, ''),
4140+ ],
4141+ }
4142+ self.assertEqual(expected, hooks.get_reverse_sites())
4143+
4144+ def test_all_services_in_relation_data(self):
4145+ self.config_get.return_value = Serializable({})
4146+ self.relations_of_type.return_value = [
4147+ {"private-address": "1.2.3.4",
4148+ "__unit__": "foo/1",
4149+ "__relid__": "website:1",
4150+ "all_services": yaml.dump([
4151+ {"service_name": "foo.internal",
4152+ "service_port": 4242},
4153+ {"service_name": "bar.internal",
4154+ "service_port": 4243}
4155+ ]),
4156+ },
4157+ {"private-address": "1.2.3.5",
4158+ "__unit__": "foo/2",
4159+ "__relid__": "website:1",
4160+ "all_services": yaml.dump([
4161+ {"service_name": "foo.internal",
4162+ "service_port": 4242},
4163+ {"service_name": "bar.internal",
4164+ "service_port": 4243}
4165+ ]),
4166+ },
4167+ ]
4168+ expected = {
4169+ "foo.internal": [
4170+ ("foo_internal__website_1__foo_1", "1.2.3.4", 4242, ''),
4171+ ("foo_internal__website_1__foo_2", "1.2.3.5", 4242, ''),
4172+ ],
4173+ "bar.internal": [
4174+ ("bar_internal__website_1__foo_1", "1.2.3.4", 4243, ''),
4175+ ("bar_internal__website_1__foo_2", "1.2.3.5", 4243, ''),
4176+ ],
4177+ }
4178+ self.assertEqual(expected, hooks.get_reverse_sites())
4179+
4180+ def test_unit_names_in_relation_data(self):
4181+ self.config_get.return_value = Serializable({})
4182+ self.relations_of_type.return_value = [
4183+ {"private-address": "1.2.3.4",
4184+ "__relid__": "website_1",
4185+ "__unit__": "foo/1",
4186+ "port": 4242},
4187+ {"private-address": "1.2.3.5",
4188+ "__relid__": "website_1",
4189+ "__unit__": "foo/2",
4190+ "port": 4242},
4191+ ]
4192+ expected = {
4193+ None: [
4194+ ("website_1__foo_1", "1.2.3.4", 4242, ''),
4195+ ("website_1__foo_2", "1.2.3.5", 4242, ''),
4196+ ],
4197+ }
4198+ self.assertEqual(expected, hooks.get_reverse_sites())
4199+
4200+ def test_sites_from_config_no_domain(self):
4201+ self.relations_of_type.return_value = []
4202+ self.config_get.return_value = Serializable({
4203+ "services": yaml.dump([
4204+ {"service_name": "foo.internal",
4205+ "servers": [
4206+ ["1.2.3.4", 4242],
4207+ ["1.2.3.5", 4242],
4208+ ]
4209+ }
4210+ ])})
4211+ expected = {
4212+ "foo.internal": [
4213+ ("foo_internal__1_2_3_4", "1.2.3.4", 4242, ''),
4214+ ("foo_internal__1_2_3_5", "1.2.3.5", 4242, ''),
4215+ ],
4216+ }
4217+ self.assertEqual(expected, hooks.get_reverse_sites())
4218+
4219+ def test_sites_from_config_with_domain(self):
4220+ self.relations_of_type.return_value = []
4221+ self.config_get.return_value = Serializable({
4222+ "services": yaml.dump([
4223+ {"service_name": "foo.internal",
4224+ "server_options": ["forceddomain=example.com"],
4225+ "servers": [
4226+ ["1.2.3.4", 4242],
4227+ ["1.2.3.5", 4242],
4228+ ]
4229+ }
4230+ ])})
4231+ expected = {
4232+ "foo.internal": [
4233+ ("foo_internal__1_2_3_4", "1.2.3.4", 4242,
4234+ "forceddomain=example.com"),
4235+ ("foo_internal__1_2_3_5", "1.2.3.5", 4242,
4236+ "forceddomain=example.com"),
4237+ ],
4238+ }
4239+ self.assertEqual(expected, hooks.get_reverse_sites())
4240+
4241+ def test_sites_from_config_and_relation_with_domain(self):
4242+ self.relations_of_type.return_value = [
4243+ {"private-address": "1.2.3.4",
4244+ "__unit__": "foo/1",
4245+ "__relid__": "website:1",
4246+ "all_services": yaml.dump([
4247+ {"service_name": "foo.internal",
4248+ "service_port": 4242},
4249+ {"service_name": "bar.internal",
4250+ "service_port": 4243}
4251+ ]),
4252+ },
4253+ {"private-address": "1.2.3.5",
4254+ "__unit__": "foo/2",
4255+ "__relid__": "website:1",
4256+ "all_services": yaml.dump([
4257+ {"service_name": "foo.internal",
4258+ "service_port": 4242},
4259+ {"service_name": "bar.internal",
4260+ "service_port": 4243}
4261+ ]),
4262+ },
4263+ ]
4264+ self.config_get.return_value = Serializable({
4265+ "services": yaml.dump([
4266+ {"service_name": "foo.internal",
4267+ "server_options": ["forceddomain=example.com"],
4268+ "servers": [
4269+ ["1.2.4.4", 4242],
4270+ ["1.2.4.5", 4242],
4271+ ]
4272+ }
4273+ ])})
4274+ expected = {
4275+ "foo.internal": [
4276+ ("foo_internal__1_2_4_4", "1.2.4.4", 4242,
4277+ "forceddomain=example.com"),
4278+ ("foo_internal__1_2_4_5", "1.2.4.5", 4242,
4279+ "forceddomain=example.com"),
4280+ ("foo_internal__website_1__foo_1", "1.2.3.4", 4242,
4281+ "forceddomain=example.com"),
4282+ ("foo_internal__website_1__foo_2", "1.2.3.5", 4242,
4283+ "forceddomain=example.com"),
4284+ ],
4285+ "bar.internal": [
4286+ ("bar_internal__website_1__foo_1", "1.2.3.4", 4243, ''),
4287+ ("bar_internal__website_1__foo_2", "1.2.3.5", 4243, ''),
4288+ ],
4289+ }
4290+ self.assertEqual(expected, hooks.get_reverse_sites())
4291+
4292+ def test_empty_sites_from_config_no_domain(self):
4293+ self.relations_of_type.return_value = []
4294+ self.config_get.return_value = Serializable({
4295+ "services": ""})
4296+ self.assertEqual(None, hooks.get_reverse_sites())
4297
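
The expected tuples in these tests encode a naming convention for cache peers: sitename, relation id, and unit name joined with `__`, with any character outside `[A-Za-z0-9]` flattened to `_` (so `foo.internal` + `website:1` + `foo/1` becomes `foo_internal__website_1__foo_1`). A hedged sketch of that convention; the helper name `peer_name` is illustrative, not the charm's actual function:

```python
import re

def peer_name(*parts):
    # Join the identifying parts with '__' and replace characters that
    # are not valid in a squid cache_peer name ('.', ':', '/') with '_',
    # matching the expectations in the tests above.
    return '__'.join(re.sub(r'[^A-Za-z0-9]', '_', p) for p in parts)

name = peer_name('foo.internal', 'website:1', 'foo/1')
# -> 'foo_internal__website_1__foo_1'
```

The flattening matters because the same name is reused as a squid ACL name, and squid ACL names cannot contain dots or slashes.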
4298=== added symlink 'hooks/upgrade-charm'
4299=== target is u'hooks.py'
4300=== modified file 'metadata.yaml'
4301--- metadata.yaml 2013-07-11 19:15:57 +0000
4302+++ metadata.yaml 2013-10-29 21:07:21 +0000
4303@@ -1,6 +1,6 @@
4304 name: squid-reverseproxy
4305 summary: Full featured Web Proxy cache (HTTP proxy)
4306-maintainer: "Matthew Wedgwood <matthew.wedgwood@canonical.com>"
4307+maintainer: [Matthew Wedgwood <matthew.wedgwood@canonical.com>, Alexander List <alexander.list@canonical.com>]
4308 description: >
4309 Squid is a high-performance proxy caching server for web clients,
4310 supporting FTP, gopher, and HTTP data objects. Squid version 3 is a
4311@@ -23,6 +23,9 @@
4312 nrpe-external-master:
4313 interface: nrpe-external-master
4314 scope: container
4315+ local-monitors:
4316+ interface: local-monitors
4317+ scope: container
4318 requires:
4319 website:
4320 interface: http
4321
4322=== removed file 'revision'
4323--- revision 2013-02-15 07:04:29 +0000
4324+++ revision 1970-01-01 00:00:00 +0000
4325@@ -1,1 +0,0 @@
4326-0
4327
4328=== added file 'setup.cfg'
4329--- setup.cfg 1970-01-01 00:00:00 +0000
4330+++ setup.cfg 2013-10-29 21:07:21 +0000
4331@@ -0,0 +1,4 @@
4332+[nosetests]
4333+with-coverage=1
4334+cover-erase=1
4335+cover-package=hooks
4336\ No newline at end of file
4337
4338=== added file 'tarmac_tests.sh'
4339--- tarmac_tests.sh 1970-01-01 00:00:00 +0000
4340+++ tarmac_tests.sh 2013-10-29 21:07:21 +0000
4341@@ -0,0 +1,6 @@
4342+#!/bin/sh
4343+# How the tests are run in Jenkins by Tarmac
4344+
4345+set -e
4346+
4347+make build
4348
4349=== modified file 'templates/main_config.template'
4350--- templates/main_config.template 2013-01-11 16:50:54 +0000
4351+++ templates/main_config.template 2013-10-29 21:07:21 +0000
4352@@ -1,10 +1,11 @@
4353-http_port {{ config.port }} accel vport=443
4354+http_port {{ config.port }} {{ config.port_options }}
4355
4356 acl manager proto cache_object
4357 acl localhost src 127.0.0.1/32
4358 acl to_localhost dst 127.0.0.0/8
4359 acl PURGE method PURGE
4360 acl CONNECT method CONNECT
4361+acl SSL_ports port 443
4362
4363 {% if config.snmp_community -%}
4364 acl snmp_access snmp_community {{ config.snmp_community }}
4365@@ -19,6 +20,7 @@
4366 snmp_incoming_address {{ config.my_ip_address }}
4367 {% endif -%}
4368
4369+via {{ config.via }}
4370 logformat combined {{ config.log_format }}
4371 access_log /var/log/squid3/access.log combined
4372
4373@@ -26,27 +28,69 @@
4374
4375 coredump_dir {{ config.cache_dir }}
4376 maximum_object_size {{ config.max_obj_size_kb }} KB
4377+{% if config.cache_size_mb > 0 -%}
4378 cache_dir aufs {{ config.cache_dir }} {{ config.cache_size_mb }} {{ config.cache_l1 }} {{ config.cache_l2 }}
4379-
4380 cache_mem {{ config.cache_mem_mb }} MB
4381+{% else -%}
4382+cache deny all
4383+{% endif -%}
4384+
4385
4386 log_mime_hdrs on
4387
4388 acl accel_ports myport {{ config.port }}
4389
4390 {% for rp in refresh_patterns.keys() -%}
4391-refresh_pattern {{ rp }} {{ refresh_patterns[rp]['min'] }} {{ refresh_patterns[rp]['percent'] }}% {{ refresh_patterns[rp]['max'] }}
4392-{% endfor -%}
4393-refresh_pattern . 30 20% 4320
4394-
4395-{% for relid in relations.keys() -%}
4396-{% for sitename in relations[relid].sitenames -%}
4397-acl {{ relations[relid].name }}_acl dstdomain {{ sitename }}
4398-{% else -%}
4399-acl {{ relations[relid].name }}_acl myport {{ config.port }}
4400-{% endfor -%}
4401-http_access allow accel_ports {{ relations[relid].name }}_acl
4402-{% endfor -%}
4403+refresh_pattern {{ rp }} {{ refresh_patterns[rp]['min'] }} {{ refresh_patterns[rp]['percent'] }}% {{ refresh_patterns[rp]['max'] }} {{ ' '.join(refresh_patterns[rp]['options']) }}
4404+{% endfor -%}
4405+refresh_pattern . {{default_refresh_pattern.min}} {{default_refresh_pattern.percent}}% {{default_refresh_pattern.max}} {{ ' '.join(default_refresh_pattern.options) }}
4406+
4407+# known services
4408+{% if sites -%}
4409+{% for sitename in sites.keys() -%}
4410+{% if sitename -%}
4411+{% set site_acl = "s_%s_acl" % loop.index %}
4412+acl {{ site_acl }} dstdomain {{ sitename }}
4413+http_access allow accel_ports {{ site_acl }}
4414+http_access allow CONNECT SSL_ports {{ site_acl }}
4415+{% if sitename in only_direct -%}
4416+always_direct allow {{ site_acl }}
4417+{% else -%}
4418+always_direct allow CONNECT SSL_ports {{ site_acl }}
4419+always_direct deny {{ site_acl }}
4420+{% endif -%}
4421+{% if config['x_balancer_name_allowed'] -%}
4422+{% set balancer_acl = "s_%s_balancer" % loop.index %}
4423+acl {{ balancer_acl }} req_header X-Balancer-Name {{ sitename.replace('.', '\.') }}
4424+http_access allow accel_ports {{ balancer_acl }}
4425+http_access allow CONNECT SSL_ports {{ balancer_acl }}
4426+{% if sitename in only_direct -%}
4427+always_direct allow {{ balancer_acl }}
4428+{% else -%}
4429+always_direct allow CONNECT SSL_ports {{ balancer_acl }}
4430+always_direct deny {{ balancer_acl }}
4431+{% endif -%}
4432+{% endif -%}
4433+{% else %} # no sitename
4434+acl no_sitename_acl myport {{ config.port }}
4435+http_access allow accel_ports no_sitename_acl
4436+never_direct allow no_sitename_acl
4437+{% endif %}
4438+{% endfor -%}
4439+{% endif -%}
4440+
4441+{% if config.enable_forward_proxy -%}
4442+# no access restrictions
4443+{% for relation in forward_relations -%}
4444+{# acl names are limited to 31 chars (!), so using short "fwd_" prefix #}
4445+{% set forward_acl = "fwd_%s" % relation['name'] -%}
4446+acl {{ forward_acl }} src {{ relation['private-address'] }}
4447+http_access allow {{ forward_acl }}
4448+http_access allow CONNECT SSL_ports {{ forward_acl }}
4449+always_direct allow {{ forward_acl }}
4450+always_direct allow CONNECT SSL_ports {{ forward_acl }}
4451+{% endfor -%}
4452+{% endif -%}
4453
4454 http_access allow manager localhost
4455 http_access deny manager
4456@@ -57,11 +101,24 @@
4457 http_access deny accel_ports all
4458 http_access deny all
4459 icp_access deny all
4460+always_direct deny all
4461
4462-{% for relid in relations -%}
4463-{% if relations[relid].port -%}
4464-cache_peer {{ relations[relid]['private-address'] }} parent {{ relations[relid].port }} 0 name={{ relations[relid].name }} no-query no-digest originserver round-robin login=PASS
4465-cache_peer_access {{ relations[relid].name }} allow {{ relations[relid].name }}_acl
4466-cache_peer_access {{ relations[relid].name }} deny !{{ relations[relid].name }}_acl
4467-{% endif -%}
4468-{% endfor -%}
4469+{% if sites -%}
4470+{% for sitename in sites.keys() -%}
4471+{% set sites_loop = loop -%}
4472+{% for peer in sites[sitename] %}
4473+{% if sitename -%}
4474+cache_peer {{ peer.address }} parent {{ peer.port }} 0 name={{ peer.name }} no-query no-digest originserver round-robin login=PASS {{ peer.options }}
4475+cache_peer_access {{ peer.name }} allow s_{{ sites_loop.index }}_acl
4476+{% if config['x_balancer_name_allowed'] -%}
4477+cache_peer_access {{ peer.name }} allow s_{{ sites_loop.index }}_balancer
4478+{% endif -%}
4479+cache_peer_access {{ peer.name }} deny all
4480+{% else %}
4481+cache_peer {{ peer.address }} parent {{ peer.port }} 0 name={{ peer.name }} no-query no-digest originserver round-robin login=PASS
4482+cache_peer_access {{ peer.name }} allow no_sitename_acl
4483+cache_peer_access {{ peer.name }} deny all
4484+{% endif -%}
4485+{% endfor -%}
4486+{% endfor -%}
4487+{% endif -%}
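
For one site with `x_balancer_name_allowed` set, the template emits both a `dstdomain` ACL and a `req_header` ACL matching the X-Balancer-Name header; since `req_header` takes a regex, literal dots in the site name are escaped, as the `sitename.replace('.', '\.')` in the template shows. A rough Python sketch of the lines produced for the first site (the helper name `balancer_acls` is illustrative):

```python
def balancer_acls(sitename, index=1):
    # Mirrors the template logic: one dstdomain ACL per site plus,
    # when x_balancer_name_allowed is on, a req_header ACL whose
    # regex has the site's dots escaped.
    site_acl = 's_%d_acl' % index
    balancer_acl = 's_%d_balancer' % index
    escaped = sitename.replace('.', r'\.')
    return [
        'acl %s dstdomain %s' % (site_acl, sitename),
        'http_access allow accel_ports %s' % site_acl,
        'acl %s req_header X-Balancer-Name %s' % (balancer_acl, escaped),
        'http_access allow accel_ports %s' % balancer_acl,
    ]

lines = balancer_acls('i.ytimg.com')
```

Without the escaping, a site named `i.ytimg.com` would also match requests for `izytimg.com`, since `.` in a regex matches any character.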
