Merge lp:~james-w/charms/precise/haproxy/metrics into lp:charms/haproxy

Proposed by James Westby on 2014-05-15
Status: Merged
Merged at revision: 79
Proposed branch: lp:~james-w/charms/precise/haproxy/metrics
Merge into: lp:charms/haproxy
Diff against target: 298 lines (+206/-1)
7 files modified
README.md (+12/-0)
config.yaml (+17/-0)
files/metrics/haproxy_to_statsd.sh (+27/-0)
hooks/hooks.py (+71/-1)
hooks/tests/test_config_changed_hooks.py (+1/-0)
hooks/tests/test_metrics.py (+75/-0)
templates/metrics_cronjob.template (+3/-0)
To merge this branch: bzr merge lp:~james-w/charms/precise/haproxy/metrics
Reviewer Review Type Date Requested Status
David Ames 2014-05-15 Approve on 2014-05-27
Simon Davy (community) Approve on 2014-05-27
Review via email: mp+219724@code.launchpad.net

Description of the change

Hi,

This is based on https://code.launchpad.net/~bloodearnest/charms/precise/squid-reverseproxy/metrics/+merge/217138, and adds support for converting the haproxy statistics to
carbon format and sending them to graphite or a compatible server.

It relies on monitoring being enabled, rather than forcing a
localhost monitoring declaration.

Thanks,

James
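
As a rough sketch of that conversion (hypothetical sample data and function name; the merged script does this with curl and awk), the haproxy CSV status page is turned into plain "name value" lines, which are then tagged for carbon/statsd:

```python
import csv

# Hypothetical, abridged sample of haproxy's ";csv;norefresh" status output.
SAMPLE = "# pxname,svname,qcur,scur,\nhttp-in,FRONTEND,0,12,\n"

def haproxy_csv_to_metrics(text):
    """Mimic the script's awk pass: take column names from the '#' header
    row and emit 'pxname.svname.field value' for each non-empty numeric
    cell from the third column onwards."""
    lines = text.splitlines()
    header = lines[0].lstrip("# ").split(",")
    metrics = []
    for row in csv.reader(lines[1:]):
        if not row or row[0].startswith("#"):
            continue
        for i, cell in enumerate(row[2:], start=2):
            if i >= len(header) - 1 or not cell:
                continue  # skip the trailing empty field (awk loops i < NF)
            try:
                value = int(float(cell))
            except ValueError:
                continue  # non-numeric cells are dropped
            metrics.append("%s.%s.%s %d" % (row[0], row[1], header[i], value))
    return metrics
```

Running this over the sample yields lines like `http-in.FRONTEND.scur 12`, ready for the carbon/statsd tagging step.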

Simon Davy (bloodearnest) wrote :

Shouldn't enabled_monitoring be added to config.yaml?

A thought: in the squid version, squid actually provides average aggregate measures over 1min/5min/60min windows, which is nice, as your 5 min cron can use those for better accuracy.

I am assuming that we're sampling current, unaggregated metrics here? So, is it worth thinking about a more frequent cron timing? Is 5min enough?

Another thing - I think it would be worth trying to use statsd rather than carbon?

We'd need to do 2 things:

1. change the "| nc ..." part to:

... | while read line; do echo $line | nc -u {{ statsd_host }} {{ statsd_port }}; done

which would send 1 metric per packet? Note: we could also update txstatsd to support multi-metric packets, which would then simplify this back to just "| nc -u ..."

2. add a function to output statsd format rather than carbonify (untested)

statsdify() {
   sed -r "s/([^ ]+) (.*)/${PREFIX}.\1.${PERIOD}:\2|g/"
}
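
For what that proposed statsdify would emit, here is a python rendition of the sed (the prefix/period defaults are just example values):

```python
import re

def statsdify(line, prefix="dev.unit-0.haproxy", period="5min"):
    # Same rewrite as the sed: "name value" -> "prefix.name.period:value|g"
    # ("|g" marks the value as a statsd gauge).
    return re.sub(r"([^ ]+) (.*)",
                  r"%s.\1.%s:\2|g" % (prefix, period), line)
```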

(PS: I like explicitly putting the period in there, rather than assuming it's in the metric_scheme. I may change the squid version to match; the only reason I didn't was that I think it's specific to our statsd setup and I wasn't sure it was valid to push upstream, but maybe it's useful info even if your statsd setup doesn't do anything special with it.)
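
The per-line nc loop in point 1 amounts to one UDP datagram per metric; a rough python equivalent (host and port are placeholders):

```python
import socket

def send_metrics(lines, host, port):
    # One UDP datagram per metric line, mirroring the
    # "while read line; do echo $line | nc -u ...; done" loop.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for line in lines:
            sock.sendto(line.encode("utf-8"), (host, port))
    finally:
        sock.close()
```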

Simon Davy (bloodearnest):
review: Approve
David Ames (thedac) wrote :

Confirmed with James the statsd format is correct and working

+1 looks good

Merging

review: Approve
79. By David Ames on 2014-05-27

[james_w,r=dames] Allow sending haproxy metrics to external statsd service.

Preview Diff

=== modified file 'README.md'
--- README.md 2014-01-07 19:03:41 +0000
+++ README.md 2014-05-27 21:55:55 +0000
@@ -118,6 +118,18 @@
 Many of the haproxy settings can be altered via the standard juju configuration
 settings. Please see the config.yaml file as each is fairly clearly documented.
 
+## statsd
+
+This charm supports sending metrics to statsd.
+
+This is done by setting config values (metrics_target being the primary one)
+to a host/port of a (UDP) statsd server.
+
+This could instead be done using a relation, but it is common to have
+one statsd server that serves multiple environments. Once juju supports
+cross-environment relations then that will be the best way to handle
+this configuration, as it will work in either scenario.
+
 ## HAProxy Project Information
 
 - [HAProxy Homepage](http://haproxy.1wt.eu/)
 
=== modified file 'config.yaml'
--- config.yaml 2014-04-30 21:48:17 +0000
+++ config.yaml 2014-05-27 21:55:55 +0000
@@ -137,3 +137,20 @@
       juju-postgresql-0
       If you're running multiple environments with the same services in them
       this allows you to differentiate between them.
+  metrics_target:
+    default: ""
+    type: string
+    description: |
+      Destination for statsd-format metrics, format "host:port". If
+      not present and valid, metrics disabled. Requires "enable_monitoring"
+      to be set to true to work.
+  metrics_prefix:
+    default: "dev.$UNIT.haproxy"
+    type: string
+    description: |
+      Prefix for metrics. Special value $UNIT can be used to include the
+      name of the unit in the prefix.
+  metrics_sample_interval:
+    default: 5
+    type: int
+    description: Period for metrics cron job to run in minutes
=== added directory 'files/metrics'
=== added file 'files/metrics/haproxy_to_statsd.sh'
--- files/metrics/haproxy_to_statsd.sh 1970-01-01 00:00:00 +0000
+++ files/metrics/haproxy_to_statsd.sh 2014-05-27 21:55:55 +0000
@@ -0,0 +1,27 @@
+#!/bin/bash
+# haproxy-to-statsd: query haproxy CSV status page, transform it
+# to stdout suitable for statsd
+#
+# Author: JuanJo Ciarlante <jjo@canonical.com>
+# Copyright 2012, Canonical Ltd.
+# License: GPLv3
+set -u
+PREFIX=${1:?missing statsd node prefix, e.g.: production.host.${HOSTNAME}.haproxy.stats}
+PERIOD=${2:?missing period, e.g.: 10min}
+HOSTPORT=${3:?missing haproxy hostport, e.g.: localhost:10000}
+HTTPAUTH=${4:?missing httpauth, e.g.: user:pass}
+
+TSTAMP="$(date +%s)"
+
+# Filter only numeric metrics, cleanup to be only <var> <value>
+get_metrics() {
+    curl -s "http://${HTTPAUTH}@${HOSTPORT}/;csv;norefresh" | \
+    awk -v FS=, '/^#/{ for(i=1;i<NF;i++) fieldname[i]=$i;}
+/^[^#]/{ for(i=3;i<NF;i++) if(length($i) > 0) printf("%s.%s.%s %d\n", $1, $2, fieldname[i], $i);}'
+}
+
+# Add statsd-isms to <var> <value> lines
+statsdify() {
+    sed -r "s/([^ ]+) (.*)/${PREFIX}.\1.${PERIOD}:\2|g/"
+}
+get_metrics | statsdify
=== modified file 'hooks/hooks.py'
--- hooks/hooks.py 2014-04-30 21:48:17 +0000
+++ hooks/hooks.py 2014-05-27 21:55:55 +0000
@@ -16,6 +16,7 @@
 from charmhelpers.core.hookenv import (
     log,
     config as config_get,
+    local_unit,
     relation_set,
     relation_ids as get_relation_ids,
     relations_of_type,
@@ -36,6 +37,8 @@
 default_haproxy_config = "%s/haproxy.cfg" % default_haproxy_config_dir
 default_haproxy_service_config_dir = "/var/run/haproxy"
 default_haproxy_lib_dir = "/var/lib/haproxy"
+metrics_cronjob_path = "/etc/cron.d/haproxy_metrics"
+metrics_script_path = "/usr/local/bin/haproxy_to_statsd.sh"
 service_affecting_packages = ['haproxy']
 
 dupe_options = [
@@ -95,6 +98,15 @@
     dpkg.communicate(input=selections)
 
 
+def render_template(template_name, vars):
+    # deferred import so install hook can install jinja2
+    from jinja2 import Environment, FileSystemLoader
+    templates_dir = os.path.join(os.environ['CHARM_DIR'], 'templates')
+    template_env = Environment(loader=FileSystemLoader(templates_dir))
+    template = template_env.get_template(template_name)
+    return template.render(vars)
+
+
 #------------------------------------------------------------------------------
 # enable_haproxy: Enabled haproxy at boot time
 #------------------------------------------------------------------------------
@@ -737,7 +749,7 @@
     if not os.path.exists(default_haproxy_service_config_dir):
         os.mkdir(default_haproxy_service_config_dir, 0600)
 
-    apt_install('haproxy', fatal=True)
+    apt_install(['haproxy', 'python-jinja2'], fatal=True)
     ensure_package_status(service_affecting_packages,
                           config_get('package_status'))
     enable_haproxy()
@@ -767,6 +779,9 @@
                              haproxy_monitoring,
                              haproxy_services)
 
+    write_metrics_cronjob(metrics_script_path,
+                          metrics_cronjob_path)
+
     if service_haproxy("check"):
         update_service_ports(old_service_ports, get_service_ports())
         service_haproxy("reload")
@@ -895,6 +910,61 @@
     nrpe_compat.write()
 
 
+def delete_metrics_cronjob(cron_path):
+    try:
+        os.unlink(cron_path)
+    except OSError:
+        pass
+
+
+def write_metrics_cronjob(script_path, cron_path):
+    config_data = config_get()
+
+    if config_data['enable_monitoring'] is False:
+        log("enable_monitoring must be set to true for metrics")
+        delete_metrics_cronjob(cron_path)
+        return
+
+    # need the following two configs to be valid
+    metrics_target = config_data['metrics_target'].strip()
+    metrics_sample_interval = config_data['metrics_sample_interval']
+    if (not metrics_target
+            or ':' not in metrics_target
+            or not metrics_sample_interval):
+        log("Required config not found or invalid "
+            "(metrics_target, metrics_sample_interval), "
+            "disabling metrics")
+        delete_metrics_cronjob(cron_path)
+        return
+
+    charm_dir = os.environ['CHARM_DIR']
+    statsd_host, statsd_port = metrics_target.split(':', 1)
+    metrics_prefix = config_data['metrics_prefix'].strip()
+    metrics_prefix = metrics_prefix.replace(
+        "$UNIT", local_unit().replace('.', '-').replace('/', '-'))
+    haproxy_hostport = ":".join(['localhost',
+                                 str(config_data['monitoring_port'])])
+    haproxy_httpauth = ":".join([config_data['monitoring_username'].strip(),
+                                 get_monitoring_password()])
+
+    # ensure script installed
+    shutil.copy2('%s/files/metrics/haproxy_to_statsd.sh' % charm_dir,
+                 metrics_script_path)
+
+    # write the crontab
+    with open(cron_path, 'w') as cronjob:
+        cronjob.write(render_template("metrics_cronjob.template", {
+            'interval': config_data['metrics_sample_interval'],
+            'script': script_path,
+            'metrics_prefix': metrics_prefix,
+            'metrics_sample_interval': metrics_sample_interval,
+            'haproxy_hostport': haproxy_hostport,
+            'haproxy_httpauth': haproxy_httpauth,
+            'statsd_host': statsd_host,
+            'statsd_port': statsd_port,
+        }))
+
+
 ###############################################################################
 # Main section
 ###############################################################################
 
=== modified file 'hooks/tests/test_config_changed_hooks.py'
--- hooks/tests/test_config_changed_hooks.py 2013-05-10 15:51:40 +0000
+++ hooks/tests/test_config_changed_hooks.py 2014-05-27 21:55:55 +0000
@@ -29,6 +29,7 @@
             "update_sysctl")
         self.notify_website = self.patch_hook("notify_website")
         self.notify_peer = self.patch_hook("notify_peer")
+        self.write_metrics_cronjob = self.patch_hook("write_metrics_cronjob")
         self.log = self.patch_hook("log")
         sys_exit = patch.object(sys, "exit")
         self.sys_exit = sys_exit.start()
=== added file 'hooks/tests/test_metrics.py'
--- hooks/tests/test_metrics.py 1970-01-01 00:00:00 +0000
+++ hooks/tests/test_metrics.py 2014-05-27 21:55:55 +0000
@@ -0,0 +1,75 @@
+import textwrap
+
+from testtools import TestCase
+from mock import patch, mock_open
+
+import hooks
+
+
+class MetricsTestCase(TestCase):
+
+    def add_patch(self, *args, **kwargs):
+        p = patch(*args, **kwargs)
+        self.addCleanup(p.stop)
+        return p.start()
+
+    def setUp(self):
+        super(MetricsTestCase, self).setUp()
+
+        self.open = mock_open()
+        self.add_patch("hooks.open", self.open, create=True)
+        self.copy2 = self.add_patch("shutil.copy2")
+        self.config_get = self.add_patch("hooks.config_get")
+        self.local_unit = self.add_patch("hooks.local_unit")
+        self.log = self.add_patch("hooks.log")
+        self.apt_install = self.add_patch("hooks.apt_install")
+        self.get_monitoring_password = self.add_patch(
+            "hooks.get_monitoring_password")
+
+        self.config_get.return_value = {
+            'metrics_sample_interval': 5,
+            'metrics_prefix': 'prefix.$UNIT',
+            'metrics_target': 'localhost:4321',
+            'enable_monitoring': True,
+            'monitoring_port': '1234',
+            'monitoring_username': 'monitor',
+            'monitoring_password': 'monitorpass',
+        }
+        self.local_unit.return_value = "unit/0"
+
+    @patch('hooks.os.unlink')
+    def test_write_metrics_cronjob_disabled_no_monitoring(self, mock_unlink):
+        self.config_get.return_value['enable_monitoring'] = False
+        hooks.write_metrics_cronjob('/script', '/cron')
+        self.assertEqual(mock_unlink.mock_calls[0][1][0], '/cron')
+
+    @patch('hooks.os.unlink')
+    def test_write_metrics_cronjob_disabled_no_target(self, mock_unlink):
+        self.config_get.return_value['metrics_target'] = ''
+        hooks.write_metrics_cronjob('/script', '/cron')
+        self.assertEqual(mock_unlink.mock_calls[0][1][0], '/cron')
+
+    @patch('hooks.os.unlink')
+    def test_write_metrics_cronjob_disabled_bad_target(self, mock_unlink):
+        self.config_get.return_value['metrics_target'] = 'sadfsadf'
+        hooks.write_metrics_cronjob('/script', '/cron')
+        self.assertEqual(mock_unlink.mock_calls[0][1][0], '/cron')
+
+    def test_write_metrics_cronjob_enabled(self):
+        self.get_monitoring_password.return_value = 'monitorpass'
+        self.config_get.return_value['metrics_target'] = 'localhost:4321'
+
+        hooks.write_metrics_cronjob('/script', '/cron')
+
+        cron_open_args = self.open.mock_calls[0][1]
+        self.assertEqual(cron_open_args, ('/cron', 'w'))
+
+        cron_write = self.open.mock_calls[2][1][0]
+        expected_cron = textwrap.dedent("""
+            # crontab for pushing haproxy metrics to statsd
+            */5 * * * * root bash /script prefix.unit-0 5min localhost:1234\
+ monitor:monitorpass | python -c "import socket, sys; sock =\
+ socket.socket(socket.AF_INET, socket.SOCK_DGRAM); map(lambda line:\
+ sock.sendto(line, ('localhost', 4321)), sys.stdin)"
+            """).lstrip()
+        self.assertEqual(expected_cron, cron_write)
=== added file 'templates/metrics_cronjob.template'
--- templates/metrics_cronjob.template 1970-01-01 00:00:00 +0000
+++ templates/metrics_cronjob.template 2014-05-27 21:55:55 +0000
@@ -0,0 +1,3 @@
+# crontab for pushing haproxy metrics to statsd
+*/{{ metrics_sample_interval }} * * * * root bash {{ script }} {{ metrics_prefix }} {{ metrics_sample_interval }}min {{ haproxy_hostport }} {{ haproxy_httpauth }} | python -c "import socket, sys; sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM); map(lambda line: sock.sendto(line, ('{{ statsd_host }}', {{ statsd_port }})), sys.stdin)"
+
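
For illustration, here is how the template's placeholders get filled. This sketch uses a simple regex stand-in for the charm's jinja2-based render_template (so it has no third-party dependency) and example config values; the real charm renders with jinja2:

```python
import re

def render(template, context):
    # Minimal stand-in for the charm's jinja2 render_template:
    # replaces {{ name }} placeholders from a dict.
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: str(context[m.group(1)]), template)

# First portion of the cron template (the UDP-send tail is omitted here).
TEMPLATE = ("*/{{ metrics_sample_interval }} * * * * root bash {{ script }} "
            "{{ metrics_prefix }} {{ metrics_sample_interval }}min "
            "{{ haproxy_hostport }} {{ haproxy_httpauth }}")

context = {
    "metrics_sample_interval": 5,
    "script": "/usr/local/bin/haproxy_to_statsd.sh",
    # $UNIT in metrics_prefix is expanded as write_metrics_cronjob does:
    "metrics_prefix": "dev.$UNIT.haproxy".replace(
        "$UNIT", "haproxy/0".replace("/", "-")),
    "haproxy_hostport": "localhost:10000",
    "haproxy_httpauth": "haproxy:password",
}
```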
