Merge lp:~chad.smith/charms/precise/swift-proxy/swift-proxy-ha-with-health into lp:~james-page/charms/precise/swift-proxy/ha-support

Proposed by Chad Smith
Status: Superseded
Proposed branch: lp:~chad.smith/charms/precise/swift-proxy/swift-proxy-ha-with-health
Merge into: lp:~james-page/charms/precise/swift-proxy/ha-support
Diff against target: 121 lines (+75/-1)
7 files modified
hooks/lib/openstack_common.py (+37/-0)
hooks/swift_utils.py (+5/-0)
revision (+1/-1)
scripts/add_to_cluster (+2/-0)
scripts/health_checks.d/service_ports_live (+13/-0)
scripts/health_checks.d/service_swift_running (+15/-0)
scripts/remove_from_cluster (+2/-0)
To merge this branch: bzr merge lp:~chad.smith/charms/precise/swift-proxy/swift-proxy-ha-with-health
Reviewers: Adam Gandelman (Pending), James Page (Pending)
Review via email: mp+151335@code.launchpad.net

This proposal has been superseded by a proposal from 2013-03-06.

Description of the change

This merge proposal establishes a potential template for delivering health_checks.d, add_to_cluster and remove_from_cluster scripts within the HA-enabled OpenStack charms. The simple add_to_cluster/remove_from_cluster scripts will be common to every charm that uses corosync configuration. The health_checks.d directory will contain some common scripts plus additional checks appropriate to the specific service. A sketch of the intended usage follows below.
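
As a rough sketch (not part of this branch), a management agent could drive the cluster scripts around a service upgrade roughly as follows; the unit name swift-proxy-0 is only illustrative:

    #!/bin/bash
    # Hypothetical driver: drain the node, upgrade, then rejoin the cluster.
    # The unit path below is an assumption, not something this branch defines.
    SCRIPTS=/var/lib/juju/units/swift-proxy-0/charm/scripts

    "$SCRIPTS/remove_from_cluster"   # runs 'crm node standby' on this node
    # ... perform the package/service upgrade here ...
    "$SCRIPTS/add_to_cluster"        # runs 'crm node online' to rejoin corosync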

The basic architecture we are hoping for is an extensible set of run-parts scripts in health_checks.d that can be seeded with juju config environment variables written into a /var/lib/juju/units/<charm_name>/charm/scripts/scriptrc file. Landscape client will do the work of calling these scripts and validating charm service health during OpenStack service upgrades. Both Jerry and I are tackling the initial push to add health/cluster scripts to the charms. Hopefully the model works, and we can iterate on extending the health or cluster scripts as service HA configuration gets more complex.
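
To make the scriptrc idea concrete, here is a hypothetical sketch of what the generated file and a health-check run could look like on a swift-proxy unit. The unit name and the API port value (8080) are assumptions; the variable names match the swift_utils.py change in this branch:

    #!/bin/bash
    # scriptrc, as written by save_script_rc(), would look roughly like
    # (port 8080 is only an assumed bind_port value):
    #   #!/bin/bash
    #   export OPENSTACK_SERVICE_SWIFT=proxy-server
    #   export OPENSTACK_PORT_API=8080
    #   export OPENSTACK_PORT_MEMCACHED=11211
    #
    # Each check in health_checks.d sources scriptrc itself, so run-parts can
    # drive them directly; any non-zero exit marks the unit as unhealthy.
    run-parts --exit-on-error /var/lib/juju/units/swift-proxy-0/charm/scripts/health_checks.d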

Many thanks for the review.

42. By Chad Smith

more strict netstat port matching

43. By Chad Smith

merge lp:~openstack-charmers/charms/precise/swift-proxy/ha-support resolve conflict

Unmerged revisions

Preview Diff

=== modified file 'hooks/lib/openstack_common.py'
--- hooks/lib/openstack_common.py 2013-03-01 23:20:05 +0000
+++ hooks/lib/openstack_common.py 2013-03-06 22:26:24 +0000
@@ -224,3 +224,40 @@
             f.write(src)
     else:
         error_out("Invalid openstack-release specified: %s" % rel)
+
+HAPROXY_CONF = '/etc/haproxy/haproxy.cfg'
+HAPROXY_DEFAULT = '/etc/default/haproxy'
+
+def configure_haproxy(units, service_ports, template_dir=None):
+    template_dir = template_dir or 'templates'
+    import jinja2
+    context = {
+        'units': units,
+        'service_ports': service_ports
+    }
+    templates = jinja2.Environment(
+        loader=jinja2.FileSystemLoader(template_dir)
+    )
+    template = templates.get_template(
+        os.path.basename(HAPROXY_CONF)
+    )
+    with open(HAPROXY_CONF, 'w') as f:
+        f.write(template.render(context))
+    with open(HAPROXY_DEFAULT, 'w') as f:
+        f.write('ENABLED=1')
+
+def save_script_rc(script_path="scripts/scriptrc", **env_vars):
+    """
+    Write an rc file in the charm-delivered directory containing
+    exported environment variables provided by env_vars. Any charm scripts run
+    outside the juju hook environment can source this scriptrc to obtain
+    updated config information necessary to perform health checks or
+    service changes.
+    """
+    unit_name = os.getenv('JUJU_UNIT_NAME').replace('/', '-')
+    juju_rc_path="/var/lib/juju/units/%s/charm/%s" % (unit_name, script_path)
+    with open(juju_rc_path, 'wb') as rc_script:
+        rc_script.write(
+            "#!/bin/bash\n")
+        [rc_script.write('export %s=%s\n' % (u, p))
+            for u, p in env_vars.iteritems() if u != "script_path"]

=== modified file 'hooks/swift_utils.py'
--- hooks/swift_utils.py 2013-03-04 14:43:54 +0000
+++ hooks/swift_utils.py 2013-03-06 22:26:24 +0000
@@ -171,6 +171,11 @@
         import multiprocessing
         workers = multiprocessing.cpu_count()
 
+    env_vars = {'OPENSTACK_SERVICE_SWIFT': 'proxy-server',
+                'OPENSTACK_PORT_API': bind_port,
+                'OPENSTACK_PORT_MEMCACHED': 11211}
+    openstack.save_script_rc(**env_vars)
+
     ctxt = {
         'proxy_ip': utils.get_host_ip(),
         'bind_port': utils.determine_api_port(bind_port),

=== modified file 'revision'
--- revision 2013-01-29 00:41:52 +0000
+++ revision 2013-03-06 22:26:24 +0000
@@ -1,1 +1,1 @@
-109
+110

=== added directory 'scripts'
=== added file 'scripts/add_to_cluster'
--- scripts/add_to_cluster 1970-01-01 00:00:00 +0000
+++ scripts/add_to_cluster 2013-03-06 22:26:24 +0000
@@ -0,0 +1,2 @@
+#!/bin/bash
+crm node online

=== added directory 'scripts/health_checks.d'
=== added file 'scripts/health_checks.d/service_ports_live'
--- scripts/health_checks.d/service_ports_live 1970-01-01 00:00:00 +0000
+++ scripts/health_checks.d/service_ports_live 2013-03-06 22:26:24 +0000
@@ -0,0 +1,13 @@
+#!/bin/bash
+# Validate that service ports are active
+HEALTH_DIR=`dirname $0`
+SCRIPTS_DIR=`dirname $HEALTH_DIR`
+. $SCRIPTS_DIR/scriptrc
+set -e
+
+# Grab any OPENSTACK_PORT* environment variables
+openstack_ports=`env| awk -F '=' '(/OPENSTACK_PORT/){print $2}'`
+for port in $openstack_ports
+do
+    netstat -ln | grep -q ":$port "
+done

=== added file 'scripts/health_checks.d/service_swift_running'
--- scripts/health_checks.d/service_swift_running 1970-01-01 00:00:00 +0000
+++ scripts/health_checks.d/service_swift_running 2013-03-06 22:26:24 +0000
@@ -0,0 +1,15 @@
+#!/bin/bash
+# Validate that service is running
+HEALTH_DIR=`dirname $0`
+SCRIPTS_DIR=`dirname $HEALTH_DIR`
+. $SCRIPTS_DIR/scriptrc
+set -e
+
+# Grab any OPENSTACK_SWIFT_SERVICE* environment variables
+openstack_service_names=`env| awk -F '=' '(/OPENSTACK_SWIFT_SERVICE/){print $2}'`
+for service_name in $openstack_service_names
+do
+    # Double-negative: we want to ensure swift-init does not return
+    # 'No <service_name> running'
+    swift-init $service_name status 2>/dev/null | grep -vq "No $service_name running"
+done

=== added file 'scripts/remove_from_cluster'
--- scripts/remove_from_cluster 1970-01-01 00:00:00 +0000
+++ scripts/remove_from_cluster 2013-03-06 22:26:24 +0000
@@ -0,0 +1,2 @@
+#!/bin/bash
+crm node standby
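
For reference, one way to exercise the new checks by hand on a deployed unit might look like the following sketch; the unit path is illustrative, and each script exits non-zero on failure:

    # Hypothetical unit path; each check sources scriptrc relative to its own
    # location, so they can be invoked individually as well as via run-parts.
    cd /var/lib/juju/units/swift-proxy-0/charm/scripts
    ./health_checks.d/service_ports_live && echo "ports OK"
    ./health_checks.d/service_swift_running && echo "swift proxy OK"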
