Merge lp:~openstack-charmers/charms/precise/openstack-dashboard/ha-support into lp:~charmers/charms/precise/openstack-dashboard/trunk
- Precise Pangolin (12.04)
- ha-support
- Merge into trunk
Proposed by
Adam Gandelman
Status: Merged
Merged at revision: 20
Proposed branch: lp:~openstack-charmers/charms/precise/openstack-dashboard/ha-support
Merge into: lp:~charmers/charms/precise/openstack-dashboard/trunk
Diff against target: 880 lines (+692/-29), 8 files modified:
  config.yaml (+59/-7)
  hooks/horizon-common (+53/-11)
  hooks/horizon-relations (+100/-7)
  hooks/lib/openstack-common (+456/-3)
  metadata.yaml (+6/-0)
  revision (+1/-1)
  scripts/add_to_cluster (+13/-0)
  scripts/remove_from_cluster (+4/-0)
To merge this branch: bzr merge lp:~openstack-charmers/charms/precise/openstack-dashboard/ha-support
Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
charmers | | | Pending
Review via email: mp+166346@code.launchpad.net
Commit message
Description of the change
* Updated for Grizzly.
* HA support via the hacluster subordinate.
Revision history for this message
Nobuto Murata (nobuto) wrote :
Revision history for this message
Nobuto Murata (nobuto) wrote :
Ah, I misunderstood True/False of that option. Please disregard my last comment.
Preview Diff
1 | === modified file 'config.yaml' |
2 | --- config.yaml 2013-01-11 21:59:22 +0000 |
3 | +++ config.yaml 2013-05-29 18:14:34 +0000 |
4 | @@ -1,5 +1,5 @@ |
5 | options: |
6 | - openstack-origin: |
7 | + openstack-origin: |
8 | default: distro |
9 | type: string |
10 | description: | |
11 | @@ -14,15 +14,67 @@ |
12 | Note that updating this setting to a source that is known to |
13 | provide a later version of OpenStack will trigger a software |
14 | upgrade. |
15 | - webroot: |
16 | + webroot: |
17 | default: "/horizon" |
18 | type: string |
19 | description: | |
20 | - Directory where application will be accessible, relative to |
21 | - http://$hostname/. |
22 | - default-role: |
23 | + Directory where application will be accessible, relative to |
24 | + http://$hostname/. |
25 | + default-role: |
26 | default: "Member" |
27 | type: string |
28 | description: | |
29 | - Default role for Horizon operations that will be created in |
30 | - Keystone upon introduction of an identity-service relation. |
31 | + Default role for Horizon operations that will be created in |
32 | + Keystone upon introduction of an identity-service relation. |
33 | + vip: |
34 | + type: string |
35 | + description: "Virtual IP used to front the OpenStack Dashboard in an HA configuration" |
36 | + vip_iface: |
37 | + type: string |
38 | + default: eth0 |
39 | + description: "Network interface on which to place the Virtual IP" |
40 | + vip_cidr: |
41 | + type: int |
42 | + default: 24 |
43 | + description: "Netmask that will be used for the Virtual IP" |
44 | + ha-bindiface: |
45 | + type: string |
46 | + default: eth0 |
47 | + description: | |
48 | + Default network interface to which the HA cluster will bind for |
49 | + communication with the other members of the HA cluster. |
50 | + ha-mcastport: |
51 | + type: int |
52 | + default: 5410 |
53 | + description: | |
54 | + Default multicast port number that will be used to communicate between |
55 | + HA Cluster nodes. |
56 | + # User provided SSL cert and key |
57 | + ssl_cert: |
58 | + type: string |
59 | + description: | |
60 | + Base64 encoded SSL certificate to install and use for API ports. |
61 | + . |
62 | + juju set openstack-dashboard ssl_cert="$(cat cert | base64)" \ |
63 | + ssl_key="$(cat key | base64)" |
64 | + . |
65 | + Setting this value (and ssl_key) will enable reverse proxying, point |
66 | + the dashboard's entry in the Keystone catalog to use https, and |
67 | + override any certificate and key issued by Keystone (if it is configured to |
68 | + do so). |
69 | + ssl_key: |
70 | + type: string |
71 | + description: | |
72 | + Base64 encoded SSL key to use with certificate specified as ssl_cert. |
73 | + offline-compression: |
74 | + type: string |
75 | + default: "yes" |
76 | + description: Use pre-generated Less compiled JS and CSS. |
77 | + debug: |
78 | + type: string |
79 | + default: "no" |
80 | + description: Show Django debug messages. |
81 | + ubuntu-theme: |
82 | + type: string |
83 | + default: "yes" |
84 | + description: Use Ubuntu theme for the dashboard. |
85 | |
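The new ssl_cert/ssl_key options expect base64-encoded values, which the charm decodes back to PEM before installing them under /etc/ssl. A minimal sketch of that round trip (the dummy certificate body and temp paths are illustrative; the hook itself uses `base64 -di` to decode):

```shell
# Encode a PEM file the way the ssl_cert/ssl_key options expect,
# then decode it as configure_apache_cert would.
tmpdir=$(mktemp -d)
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'MIIBdummy' '-----END CERTIFICATE-----' \
  > "$tmpdir/cert"

# This is the value you would pass to the option, e.g.:
#   juju set openstack-dashboard ssl_cert="$encoded"
encoded=$(base64 < "$tmpdir/cert")

# The charm decodes it back before writing the cert file.
echo "$encoded" | base64 -d > "$tmpdir/dashboard.cert"
cmp -s "$tmpdir/cert" "$tmpdir/dashboard.cert" && echo "round-trip ok"
```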
86 | === added symlink 'hooks/cluster-relation-changed' |
87 | === target is u'horizon-relations' |
88 | === added symlink 'hooks/cluster-relation-departed' |
89 | === target is u'horizon-relations' |
90 | === added symlink 'hooks/ha-relation-changed' |
91 | === target is u'horizon-relations' |
92 | === added symlink 'hooks/ha-relation-joined' |
93 | === target is u'horizon-relations' |
94 | === modified file 'hooks/horizon-common' |
95 | --- hooks/horizon-common 2012-10-10 23:32:24 +0000 |
96 | +++ hooks/horizon-common 2013-05-29 18:14:34 +0000 |
97 | @@ -1,14 +1,16 @@ |
98 | #!/bin/bash |
99 | +# vim: set ts=2:et |
100 | |
101 | CHARM="openstack-dashboard" |
102 | |
103 | -PACKAGES="openstack-dashboard openstack-dashboard-ubuntu-theme python-keystoneclient python-memcache memcached" |
104 | +PACKAGES="openstack-dashboard python-keystoneclient python-memcache memcached haproxy python-novaclient" |
105 | LOCAL_SETTINGS="/etc/openstack-dashboard/local_settings.py" |
106 | +HOOKS_DIR="$CHARM_DIR/hooks" |
107 | |
108 | -if [[ -e "$CHARM_DIR/lib/openstack-common" ]] ; then |
109 | - . $CHARM_DIR/lib/openstack-common |
110 | +if [[ -e "$HOOKS_DIR/lib/openstack-common" ]] ; then |
111 | + . $HOOKS_DIR/lib/openstack-common |
112 | else |
113 | - juju-log "ERROR: Couldn't load $CHARM_DIR/lib/openstack-common." && exit 1 |
114 | + juju-log "ERROR: Couldn't load $HOOKS_DIR/lib/openstack-common." && exit 1 |
115 | fi |
116 | |
117 | set_or_update() { |
118 | @@ -16,15 +18,28 @@ |
119 | local key=$1 value=$2 |
120 | [[ -z "$key" ]] || [[ -z "$value" ]] && |
121 | juju-log "$CHARM set_or_update: ERROR - missing parameters" && return 1 |
122 | - grep -q "^$key = \"$value\"" "$LOCAL_SETTINGS" && |
123 | - juju-log "$CHARM set_or_update: $key = $value already set" && return 0 |
124 | + if [ "$value" == "True" ] || [ "$value" == "False" ]; then |
125 | + grep -q "^$key = $value" "$LOCAL_SETTINGS" && |
126 | + juju-log "$CHARM set_or_update: $key = $value already set" && return 0 |
127 | + else |
128 | + grep -q "^$key = \"$value\"" "$LOCAL_SETTINGS" && |
129 | + juju-log "$CHARM set_or_update: $key = $value already set" && return 0 |
130 | + fi |
131 | if grep -q "^$key = " "$LOCAL_SETTINGS" ; then |
132 | juju-log "$CHARM set_or_update: Setting $key = $value" |
133 | cp "$LOCAL_SETTINGS" /etc/openstack-dashboard/local_settings.last |
134 | - sed -i "s|\(^$key = \).*|\1\"$value\"|g" "$LOCAL_SETTINGS" || return 1 |
135 | + if [ "$value" == "True" ] || [ "$value" == "False" ]; then |
136 | + sed -i "s|\(^$key = \).*|\1$value|g" "$LOCAL_SETTINGS" || return 1 |
137 | + else |
138 | + sed -i "s|\(^$key = \).*|\1\"$value\"|g" "$LOCAL_SETTINGS" || return 1 |
139 | + fi |
140 | else |
141 | juju-log "$CHARM set_or_update: Adding $key = $value" |
142 | - echo "$key = \"$value\"" >>$LOCAL_SETTINGS || return 1 |
143 | + if [ "$value" == "True" ] || [ "$value" == "False" ]; then |
144 | + echo "$key = $value" >>$LOCAL_SETTINGS || return 1 |
145 | + else |
146 | + echo "$key = \"$value\"" >>$LOCAL_SETTINGS || return 1 |
147 | + fi |
148 | fi |
149 | return 0 |
150 | } |
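The reworked set_or_update treats the literal strings True and False as Python booleans and writes them into local_settings.py unquoted, while every other value stays double-quoted. A standalone sketch of just that quoting rule (render_setting is a hypothetical helper, not part of the charm):

```shell
# Hypothetical helper mirroring set_or_update's quoting rule:
# Python booleans are written bare, everything else quoted.
render_setting() {
  key=$1
  value=$2
  if [ "$value" = "True" ] || [ "$value" = "False" ]; then
    echo "$key = $value"
  else
    echo "$key = \"$value\""
  fi
}

render_setting DEBUG False       # -> DEBUG = False
render_setting WEBROOT /horizon  # -> WEBROOT = "/horizon"
```

Without the boolean branch, Django would receive the string "False", which is truthy in Python; that is the bug the hunk above fixes.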
151 | @@ -46,10 +61,37 @@ |
152 | export JUJU_REMOTE_UNIT=$(relation-list -r $r_id | head -n1) |
153 | export JUJU_RELATION="identity-service" |
154 | export JUJU_RELATION_ID="$r_id" |
155 | - local ks_host=$(relation-get -r $r_id private-address) |
156 | - if [[ -n "$ks_host" ]] ; then |
157 | - service_url="http://$ks_host:5000/v2.0" |
158 | + local service_host=$(relation-get -r $r_id service_host) |
159 | + local service_port=$(relation-get -r $r_id service_port) |
160 | + if [[ -n "$service_host" ]] && [[ -n "$service_port" ]] ; then |
161 | + service_url="http://$service_host:$service_port/v2.0" |
162 | set_or_update OPENSTACK_KEYSTONE_URL "$service_url" |
163 | fi |
164 | fi |
165 | } |
166 | + |
167 | +configure_apache() { |
168 | + # Reconfigure to listen on provided port |
169 | + a2ensite default-ssl || : |
170 | + a2enmod ssl || : |
171 | + for ports in $@; do |
172 | + from_port=$(echo $ports | cut -d : -f 1) |
173 | + to_port=$(echo $ports | cut -d : -f 2) |
174 | + sed -i -e "s/$from_port/$to_port/g" /etc/apache2/ports.conf |
175 | + for site in $(ls -1 /etc/apache2/sites-available); do |
176 | + sed -i -e "s/$from_port/$to_port/g" \ |
177 | + /etc/apache2/sites-available/$site |
178 | + done |
179 | + done |
180 | +} |
181 | + |
182 | +configure_apache_cert() { |
183 | + cert=$1 |
184 | + key=$2 |
185 | + echo $cert | base64 -di > /etc/ssl/certs/dashboard.cert |
186 | + echo $key | base64 -di > /etc/ssl/private/dashboard.key |
187 | + chmod 0600 /etc/ssl/private/dashboard.key |
188 | + sed -i -e "s|\(.*SSLCertificateFile\).*|\1 /etc/ssl/certs/dashboard.cert|g" \ |
189 | + -e "s|\(.*SSLCertificateKeyFile\).*|\1 /etc/ssl/private/dashboard.key|g" \ |
190 | + /etc/apache2/sites-available/default-ssl |
191 | +} |
192 | |
193 | === modified file 'hooks/horizon-relations' |
194 | --- hooks/horizon-relations 2013-01-11 21:59:22 +0000 |
195 | +++ hooks/horizon-relations 2013-05-29 18:14:34 +0000 |
196 | @@ -1,13 +1,13 @@ |
197 | #!/bin/bash |
198 | set -e |
199 | |
200 | -CHARM_DIR=$(dirname $0) |
201 | +HOOKS_DIR="$CHARM_DIR/hooks" |
202 | ARG0=${0##*/} |
203 | |
204 | -if [[ -e $CHARM_DIR/horizon-common ]] ; then |
205 | - . $CHARM_DIR/horizon-common |
206 | +if [[ -e $HOOKS_DIR/horizon-common ]] ; then |
207 | + . $HOOKS_DIR/horizon-common |
208 | else |
209 | - echo "ERROR: Could not load horizon-common from $CHARM_DIR" |
210 | + echo "ERROR: Could not load horizon-common from $HOOKS_DIR" |
211 | fi |
212 | |
213 | function install_hook { |
214 | @@ -44,7 +44,19 @@ |
215 | } |
216 | |
217 | function keystone_changed { |
218 | - service_url="http://$(relation-get private-address):5000/v2.0" |
219 | + local service_host=$(relation-get service_host) |
220 | + local service_port=$(relation-get service_port) |
221 | + if [ -z "${service_host}" ] || [ -z "${service_port}" ]; then |
222 | + juju-log "Insufficient information to configure keystone url" |
223 | + exit 0 |
224 | + fi |
225 | + local ca_cert=$(relation-get ca_cert) |
226 | + if [ -n "$ca_cert" ]; then |
227 | + juju-log "Installing Keystone supplied CA cert." |
228 | + echo $ca_cert | base64 -di > /usr/local/share/ca-certificates/keystone_juju_ca_cert.crt |
229 | + update-ca-certificates --fresh |
230 | + fi |
231 | + service_url="http://${service_host}:${service_port}/v2.0" |
232 | juju-log "$CHARM: Configuring Horizon to access keystone @ $service_url." |
233 | set_or_update OPENSTACK_KEYSTONE_URL "$service_url" |
234 | service apache2 restart |
235 | @@ -73,6 +85,15 @@ |
236 | set_or_update LOGIN_URL "$web_root/auth/login" |
237 | set_or_update LOGIN_REDIRECT_URL "$web_root" |
238 | |
239 | + # Save our scriptrc env variables for health checks |
240 | + declare -a env_vars=( |
241 | + 'OPENSTACK_URL_HORIZON="http://localhost:70'$web_root'|Login+-+OpenStack"' |
242 | + 'OPENSTACK_SERVICE_HORIZON=apache2' |
243 | + 'OPENSTACK_PORT_HORIZON_SSL=433' |
244 | + 'OPENSTACK_PORT_HORIZON=70') |
245 | + save_script_rc ${env_vars[@]} |
246 | + |
247 | + |
248 | # Set default role and trigger an identity-service relation event to |
249 | # ensure role is created in keystone. |
250 | set_or_update OPENSTACK_KEYSTONE_DEFAULT_ROLE "$(config-get default-role)" |
251 | @@ -81,8 +102,77 @@ |
252 | keystone_joined "$relid" |
253 | done |
254 | |
255 | - service apache2 reload |
256 | - |
257 | + if [ "$(config-get offline-compression)" != "yes" ]; then |
258 | + set_or_update COMPRESS_OFFLINE False |
259 | + apt-get install -y nodejs node-less |
260 | + else |
261 | + set_or_update COMPRESS_OFFLINE True |
262 | + fi |
263 | + |
264 | + # Configure default HAProxy + Apache config |
265 | + if [ -n "$(config-get ssl_cert)" ] && \ |
266 | + [ -n "$(config-get ssl_key)" ]; then |
267 | + configure_apache_cert "$(config-get ssl_cert)" "$(config-get ssl_key)" |
268 | + fi |
269 | + |
270 | + if [ "$(config-get debug)" != "yes" ]; then |
271 | + set_or_update DEBUG False |
272 | + else |
273 | + set_or_update DEBUG True |
274 | + fi |
275 | + |
276 | + if [ "$(config-get ubuntu-theme)" != "yes" ]; then |
277 | + apt-get -y purge openstack-dashboard-ubuntu-theme || : |
278 | + else |
279 | + apt-get -y install openstack-dashboard-ubuntu-theme |
280 | + fi |
281 | + |
282 | + # Reconfigure Apache Ports |
283 | + configure_apache "80:70" "443:433" |
284 | + service apache2 restart |
285 | + configure_haproxy "dash_insecure:80:70:http dash_secure:443:433:tcp" |
286 | + service haproxy restart |
287 | +} |
288 | + |
289 | +function cluster_changed() { |
290 | + configure_haproxy "dash_insecure:80:70:http dash_secure:443:433:tcp" |
291 | + service haproxy reload |
292 | +} |
293 | + |
294 | +function ha_relation_joined() { |
295 | + # Configure HA Cluster |
296 | + local corosync_bindiface=`config-get ha-bindiface` |
297 | + local corosync_mcastport=`config-get ha-mcastport` |
298 | + local vip=`config-get vip` |
299 | + local vip_iface=`config-get vip_iface` |
300 | + local vip_cidr=`config-get vip_cidr` |
301 | + if [ -n "$vip" ] && [ -n "$vip_iface" ] && \ |
302 | + [ -n "$vip_cidr" ] && [ -n "$corosync_bindiface" ] && \ |
303 | + [ -n "$corosync_mcastport" ]; then |
304 | + # TODO: This feels horrible but the data required by the hacluster |
305 | + # charm is quite complex and is python ast parsed. |
306 | + resources="{ |
307 | +'res_horizon_vip':'ocf:heartbeat:IPaddr2', |
308 | +'res_horizon_haproxy':'lsb:haproxy' |
309 | +}" |
310 | + resource_params="{ |
311 | +'res_horizon_vip': 'params ip=\"$vip\" cidr_netmask=\"$vip_cidr\" nic=\"$vip_iface\"', |
312 | +'res_horizon_haproxy': 'op monitor interval=\"5s\"' |
313 | +}" |
314 | + init_services="{ |
315 | +'res_horizon_haproxy':'haproxy' |
316 | +}" |
317 | + clones="{ |
318 | +'cl_horizon_haproxy':'res_horizon_haproxy' |
319 | +}" |
320 | + relation-set corosync_bindiface=$corosync_bindiface \ |
321 | + corosync_mcastport=$corosync_mcastport \ |
322 | + resources="$resources" resource_params="$resource_params" \ |
323 | + init_services="$init_services" clones="$clones" |
324 | + else |
325 | + juju-log "Insufficient configuration data to configure hacluster" |
326 | + exit 1 |
327 | + fi |
328 | } |
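As the TODO in ha_relation_joined notes, the hacluster charm ast-parses the relation values, so each string must be a valid Python literal. A quick stand-in check of that format (assumes python3 is on PATH; literal_eval here substitutes for whatever parsing hacluster actually does):

```shell
# The hook hands hacluster Python-literal dict strings; verify one parses.
resources="{
'res_horizon_vip':'ocf:heartbeat:IPaddr2',
'res_horizon_haproxy':'lsb:haproxy'
}"

python3 -c "
import ast, sys
d = ast.literal_eval(sys.argv[1])
print(len(d))
" "$resources"   # prints 2
```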
329 | |
330 | juju-log "$CHARM: Running hook $ARG0." |
331 | @@ -95,4 +185,7 @@ |
332 | "identity-service-relation-joined") keystone_joined;; |
333 | "identity-service-relation-changed") keystone_changed;; |
334 | "config-changed") config_changed;; |
335 | + "cluster-relation-changed") cluster_changed ;; |
336 | + "cluster-relation-departed") cluster_changed ;; |
337 | + "ha-relation-joined") ha_relation_joined ;; |
338 | esac |
339 | |
340 | === modified file 'hooks/lib/openstack-common' |
341 | --- hooks/lib/openstack-common 2013-01-11 18:30:45 +0000 |
342 | +++ hooks/lib/openstack-common 2013-05-29 18:14:34 +0000 |
343 | @@ -165,8 +165,9 @@ |
344 | fi |
345 | |
346 | # have a guess based on the deb string provided |
347 | - if [[ "${rel:0:3}" == "deb" ]]; then |
348 | - CODENAMES="diablo essex folsom grizzly" |
349 | + if [[ "${rel:0:3}" == "deb" ]] || \ |
350 | + [[ "${rel:0:3}" == "ppa" ]] ; then |
351 | + CODENAMES="diablo essex folsom grizzly havana" |
352 | for cname in $CODENAMES; do |
353 | if echo $rel | grep -q $cname; then |
354 | codename=$cname |
355 | @@ -178,11 +179,13 @@ |
356 | |
357 | get_os_codename_package() { |
358 | local pkg_vers=$(dpkg -l | grep "$1" | awk '{ print $3 }') || echo "none" |
359 | + pkg_vers=$(echo $pkg_vers | cut -d: -f2) # epochs |
360 | case "${pkg_vers:0:6}" in |
361 | "2011.2") echo "diablo" ;; |
362 | "2012.1") echo "essex" ;; |
363 | "2012.2") echo "folsom" ;; |
364 | "2013.1") echo "grizzly" ;; |
365 | + "2013.2") echo "havana" ;; |
366 | esac |
367 | } |
368 | |
369 | @@ -191,7 +194,8 @@ |
370 | "diablo") echo "2011.2" ;; |
371 | "essex") echo "2012.1" ;; |
372 | "folsom") echo "2012.2" ;; |
373 | - "grizzly") echo "2012.3" ;; |
374 | + "grizzly") echo "2013.1" ;; |
375 | + "havana") echo "2013.2" ;; |
376 | esac |
377 | } |
378 | |
379 | @@ -314,3 +318,452 @@ |
380 | echo "$found" |
381 | return 0 |
382 | } |
383 | + |
384 | +HAPROXY_CFG=/etc/haproxy/haproxy.cfg |
385 | +HAPROXY_DEFAULT=/etc/default/haproxy |
386 | +########################################################################## |
387 | +# Description: Configures HAProxy services for OpenStack APIs |
388 | +# Parameters: |
389 | +# Space-delimited list of service:haproxy_port:api_port:mode |
390 | +# combinations for which haproxy configuration should be generated. |
391 | +# The function assumes the name of the peer relation is 'cluster' and |
392 | +# that every service unit in the peer relation runs the same services. |
393 | +# |
394 | +# Services that do not specify :mode default to http. |
395 | +# |
396 | +# Example |
397 | +# configure_haproxy cinder_api:8776:8756:tcp nova_api:8774:8764:http |
398 | +########################################################################## |
399 | +configure_haproxy() { |
400 | + local address=`unit-get private-address` |
401 | + local name=${JUJU_UNIT_NAME////-} |
402 | + cat > $HAPROXY_CFG << EOF |
403 | +global |
404 | + log 127.0.0.1 local0 |
405 | + log 127.0.0.1 local1 notice |
406 | + maxconn 20000 |
407 | + user haproxy |
408 | + group haproxy |
409 | + spread-checks 0 |
410 | + |
411 | +defaults |
412 | + log global |
413 | + mode http |
414 | + option httplog |
415 | + option dontlognull |
416 | + retries 3 |
417 | + timeout queue 1000 |
418 | + timeout connect 1000 |
419 | + timeout client 30000 |
420 | + timeout server 30000 |
421 | + |
422 | +listen stats :8888 |
423 | + mode http |
424 | + stats enable |
425 | + stats hide-version |
426 | + stats realm Haproxy\ Statistics |
427 | + stats uri / |
428 | + stats auth admin:password |
429 | + |
430 | +EOF |
431 | + for service in $@; do |
432 | + local service_name=$(echo $service | cut -d : -f 1) |
433 | + local haproxy_listen_port=$(echo $service | cut -d : -f 2) |
434 | + local api_listen_port=$(echo $service | cut -d : -f 3) |
435 | + local mode=$(echo $service | cut -d : -f 4) |
436 | + [[ -z "$mode" ]] && mode="http" |
437 | + juju-log "Adding haproxy configuration entry for $service "\ |
438 | + "($haproxy_listen_port -> $api_listen_port)" |
439 | + cat >> $HAPROXY_CFG << EOF |
440 | +listen $service_name 0.0.0.0:$haproxy_listen_port |
441 | + balance roundrobin |
442 | + mode $mode |
443 | + option ${mode}log |
444 | + server $name $address:$api_listen_port check |
445 | +EOF |
446 | + local r_id="" |
447 | + local unit="" |
448 | + for r_id in `relation-ids cluster`; do |
449 | + for unit in `relation-list -r $r_id`; do |
450 | + local unit_name=${unit////-} |
451 | + local unit_address=`relation-get -r $r_id private-address $unit` |
452 | + if [ -n "$unit_address" ]; then |
453 | + echo " server $unit_name $unit_address:$api_listen_port check" \ |
454 | + >> $HAPROXY_CFG |
455 | + fi |
456 | + done |
457 | + done |
458 | + done |
459 | + echo "ENABLED=1" > $HAPROXY_DEFAULT |
460 | + service haproxy restart |
461 | +} |
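Each argument to configure_haproxy is a colon-separated spec of service name, haproxy frontend port, backend API port, and optional mode. A sketch of the field parsing exactly as the function does it with `cut` (the spec value is taken from the horizon hooks above):

```shell
# Parse one service spec the way configure_haproxy does.
spec="dash_secure:443:433:tcp"
service_name=$(echo $spec | cut -d : -f 1)
haproxy_listen_port=$(echo $spec | cut -d : -f 2)
api_listen_port=$(echo $spec | cut -d : -f 3)
mode=$(echo $spec | cut -d : -f 4)
[ -z "$mode" ] && mode="http"   # mode defaults to http when omitted

echo "listen $service_name 0.0.0.0:$haproxy_listen_port (mode $mode -> :$api_listen_port)"
# prints: listen dash_secure 0.0.0.0:443 (mode tcp -> :433)
```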
462 | + |
463 | +########################################################################## |
464 | +# Description: Query HA interface to determine if the cluster is configured |
465 | +# Returns: 0 if configured, 1 if not configured |
466 | +########################################################################## |
467 | +is_clustered() { |
468 | + local r_id="" |
469 | + local unit="" |
470 | + for r_id in $(relation-ids ha); do |
471 | + if [ -n "$r_id" ]; then |
472 | + for unit in $(relation-list -r $r_id); do |
473 | + clustered=$(relation-get -r $r_id clustered $unit) |
474 | + if [ -n "$clustered" ]; then |
475 | + juju-log "Unit is haclustered" |
476 | + return 0 |
477 | + fi |
478 | + done |
479 | + fi |
480 | + done |
481 | + juju-log "Unit is not haclustered" |
482 | + return 1 |
483 | +} |
484 | + |
485 | +########################################################################## |
486 | +# Description: Return a list of all peers in cluster relations |
487 | +########################################################################## |
488 | +peer_units() { |
489 | + local peers="" |
490 | + local r_id="" |
491 | + for r_id in $(relation-ids cluster); do |
492 | + peers="$peers $(relation-list -r $r_id)" |
493 | + done |
494 | + echo $peers |
495 | +} |
496 | + |
497 | +########################################################################## |
498 | +# Description: Determines whether the current unit is the oldest of all |
499 | +# its peers - supports partial leader election |
500 | +# Returns: 0 if oldest, 1 if not |
501 | +########################################################################## |
502 | +oldest_peer() { |
503 | + peers=$1 |
504 | + local l_unit_no=$(echo $JUJU_UNIT_NAME | cut -d / -f 2) |
505 | + for peer in $peers; do |
506 | + echo "Comparing $JUJU_UNIT_NAME with peers: $peers" |
507 | + local r_unit_no=$(echo $peer | cut -d / -f 2) |
508 | + if (($r_unit_no<$l_unit_no)); then |
509 | + juju-log "Not oldest peer; deferring" |
510 | + return 1 |
511 | + fi |
512 | + done |
513 | + juju-log "Oldest peer; might take charge?" |
514 | + return 0 |
515 | +} |
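oldest_peer elects a provisional leader by comparing the numeric suffix of Juju unit names: the lowest-numbered unit wins. A sketch of that comparison (unit names are illustrative):

```shell
# Extract unit numbers the way oldest_peer does (split unit names
# like openstack-dashboard/0 on '/').
local_unit="openstack-dashboard/0"
peer="openstack-dashboard/2"
l_no=$(echo $local_unit | cut -d / -f 2)
r_no=$(echo $peer | cut -d / -f 2)

if [ "$r_no" -lt "$l_no" ]; then
  echo "peer $peer is older; defer"
else
  echo "$local_unit is oldest so far"
fi
# prints: openstack-dashboard/0 is oldest so far
```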
516 | + |
517 | +########################################################################## |
518 | +# Description: Determines whether the current service unit is the |
519 | +# leader within a) a cluster of its peers or b) across a |
520 | +# set of unclustered peers. |
521 | +# Parameters: CRM resource to check ownership of if clustered |
522 | +# Returns: 0 if leader, 1 if not |
523 | +########################################################################## |
524 | +eligible_leader() { |
525 | + if is_clustered; then |
526 | + if ! is_leader $1; then |
527 | + juju-log 'Deferring action to CRM leader' |
528 | + return 1 |
529 | + fi |
530 | + else |
531 | + peers=$(peer_units) |
532 | + if [ -n "$peers" ] && ! oldest_peer "$peers"; then |
533 | + juju-log 'Deferring action to oldest service unit.' |
534 | + return 1 |
535 | + fi |
536 | + fi |
537 | + return 0 |
538 | +} |
539 | + |
540 | +########################################################################## |
541 | +# Description: Query Cluster peer interface to see if peered |
542 | +# Returns: 0 if peered, 1 if not peered |
543 | +########################################################################## |
544 | +is_peered() { |
545 | + local r_id=$(relation-ids cluster) |
546 | + if [ -n "$r_id" ]; then |
547 | + if [ -n "$(relation-list -r $r_id)" ]; then |
548 | + juju-log "Unit peered" |
549 | + return 0 |
550 | + fi |
551 | + fi |
552 | + juju-log "Unit not peered" |
553 | + return 1 |
554 | +} |
555 | + |
556 | +########################################################################## |
557 | +# Description: Determines whether host is owner of clustered services |
558 | +# Parameters: Name of CRM resource to check ownership of |
559 | +# Returns: 0 if leader, 1 if not leader |
560 | +########################################################################## |
561 | +is_leader() { |
562 | + hostname=`hostname` |
563 | + if [ -x /usr/sbin/crm ]; then |
564 | + if crm resource show $1 | grep -q $hostname; then |
565 | + juju-log "$hostname is cluster leader." |
566 | + return 0 |
567 | + fi |
568 | + fi |
569 | + juju-log "$hostname is not cluster leader." |
570 | + return 1 |
571 | +} |
572 | + |
573 | +########################################################################## |
574 | +# Description: Determines whether enough data has been provided in |
575 | +# configuration or relation data to configure HTTPS. |
576 | +# Parameters: None |
577 | +# Returns: 0 if HTTPS can be configured, 1 if not. |
578 | +########################################################################## |
579 | +https() { |
580 | + local r_id="" |
581 | + if [[ -n "$(config-get ssl_cert)" ]] && |
582 | + [[ -n "$(config-get ssl_key)" ]] ; then |
583 | + return 0 |
584 | + fi |
585 | + for r_id in $(relation-ids identity-service) ; do |
586 | + for unit in $(relation-list -r $r_id) ; do |
587 | + if [[ "$(relation-get -r $r_id https_keystone $unit)" == "True" ]] && |
588 | + [[ -n "$(relation-get -r $r_id ssl_cert $unit)" ]] && |
589 | + [[ -n "$(relation-get -r $r_id ssl_key $unit)" ]] && |
590 | + [[ -n "$(relation-get -r $r_id ca_cert $unit)" ]] ; then |
591 | + return 0 |
592 | + fi |
593 | + done |
594 | + done |
595 | + return 1 |
596 | +} |
597 | + |
598 | +########################################################################## |
599 | +# Description: For a given number of port mappings, configures apache2 |
600 | +# HTTPS local reverse proxying using certificates and keys provided in |
601 | +# either configuration data (preferred) or relation data. Assumes ports |
602 | +# are not in use (calling charm should ensure that). |
603 | +# Parameters: Variable number of proxy port mappings as |
604 | +# $external:$internal. |
605 | +# Returns: 0 if reverse proxy(s) have been configured, 1 if not. |
606 | +########################################################################## |
607 | +enable_https() { |
608 | + local port_maps="$@" |
609 | + local http_restart="" |
610 | + juju-log "Enabling HTTPS for port mappings: $port_maps." |
611 | + |
612 | + # allow overriding of keystone provided certs with those set manually |
613 | + # in config. |
614 | + local cert=$(config-get ssl_cert) |
615 | + local key=$(config-get ssl_key) |
616 | + local ca_cert="" |
617 | + if [[ -z "$cert" ]] || [[ -z "$key" ]] ; then |
618 | + juju-log "Inspecting identity-service relations for SSL certificate." |
619 | + local r_id="" |
620 | + cert="" |
621 | + key="" |
622 | + ca_cert="" |
623 | + for r_id in $(relation-ids identity-service) ; do |
624 | + for unit in $(relation-list -r $r_id) ; do |
625 | + [[ -z "$cert" ]] && cert="$(relation-get -r $r_id ssl_cert $unit)" |
626 | + [[ -z "$key" ]] && key="$(relation-get -r $r_id ssl_key $unit)" |
627 | + [[ -z "$ca_cert" ]] && ca_cert="$(relation-get -r $r_id ca_cert $unit)" |
628 | + done |
629 | + done |
630 | + [[ -n "$cert" ]] && cert=$(echo $cert | base64 -di) |
631 | + [[ -n "$key" ]] && key=$(echo $key | base64 -di) |
632 | + [[ -n "$ca_cert" ]] && ca_cert=$(echo $ca_cert | base64 -di) |
633 | + else |
634 | + juju-log "Using SSL certificate provided in service config." |
635 | + fi |
636 | + |
637 | + [[ -z "$cert" ]] || [[ -z "$key" ]] && |
638 | + juju-log "Expected but could not find SSL certificate data, not "\ |
639 | + "configuring HTTPS!" && return 1 |
640 | + |
641 | + apt-get -y install apache2 |
642 | + a2enmod ssl proxy proxy_http | grep -v "To activate the new configuration" && |
643 | + http_restart=1 |
644 | + |
645 | + mkdir -p /etc/apache2/ssl/$CHARM |
646 | + echo "$cert" >/etc/apache2/ssl/$CHARM/cert |
647 | + echo "$key" >/etc/apache2/ssl/$CHARM/key |
648 | + if [[ -n "$ca_cert" ]] ; then |
649 | + juju-log "Installing Keystone supplied CA cert." |
650 | + echo "$ca_cert" >/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt |
651 | + update-ca-certificates --fresh |
652 | + |
653 | + # XXX TODO: Find a better way of exporting this? |
654 | + if [[ "$CHARM" == "nova-cloud-controller" ]] ; then |
655 | + [[ -e /var/www/keystone_juju_ca_cert.crt ]] && |
656 | + rm -rf /var/www/keystone_juju_ca_cert.crt |
657 | + ln -s /usr/local/share/ca-certificates/keystone_juju_ca_cert.crt \ |
658 | + /var/www/keystone_juju_ca_cert.crt |
659 | + fi |
660 | + |
661 | + fi |
662 | + for port_map in $port_maps ; do |
663 | + local ext_port=$(echo $port_map | cut -d: -f1) |
664 | + local int_port=$(echo $port_map | cut -d: -f2) |
665 | + juju-log "Creating apache2 reverse proxy vhost for $port_map." |
666 | + cat >/etc/apache2/sites-available/${CHARM}_${ext_port} <<END |
667 | +Listen $ext_port |
668 | +NameVirtualHost *:$ext_port |
669 | +<VirtualHost *:$ext_port> |
670 | + ServerName $(unit-get private-address) |
671 | + SSLEngine on |
672 | + SSLCertificateFile /etc/apache2/ssl/$CHARM/cert |
673 | + SSLCertificateKeyFile /etc/apache2/ssl/$CHARM/key |
674 | + ProxyPass / http://localhost:$int_port/ |
675 | + ProxyPassReverse / http://localhost:$int_port/ |
676 | + ProxyPreserveHost on |
677 | +</VirtualHost> |
678 | +<Proxy *> |
679 | + Order deny,allow |
680 | + Allow from all |
681 | +</Proxy> |
682 | +<Location /> |
683 | + Order allow,deny |
684 | + Allow from all |
685 | +</Location> |
686 | +END |
687 | + a2ensite ${CHARM}_${ext_port} | grep -v "To activate the new configuration" && |
688 | + http_restart=1 |
689 | + done |
690 | + if [[ -n "$http_restart" ]] ; then |
691 | + service apache2 restart |
692 | + fi |
693 | +} |
694 | + |
695 | +########################################################################## |
696 | +# Description: Ensure HTTPS reverse proxying is disabled for given port |
697 | +# mappings. |
698 | +# Parameters: Variable number of proxy port mappings as |
699 | +# $external:$internal. |
700 | +# Returns: 0 if reverse proxy is not active for all portmaps, 1 on error. |
701 | +########################################################################## |
702 | +disable_https() { |
703 | + local port_maps="$@" |
704 | + local http_restart="" |
705 | + juju-log "Ensuring HTTPS disabled for $port_maps." |
706 | + ( [[ ! -d /etc/apache2 ]] || [[ ! -d /etc/apache2/ssl/$CHARM ]] ) && return 0 |
707 | + for port_map in $port_maps ; do |
708 | + local ext_port=$(echo $port_map | cut -d: -f1) |
709 | + local int_port=$(echo $port_map | cut -d: -f2) |
710 | + if [[ -e /etc/apache2/sites-available/${CHARM}_${ext_port} ]] ; then |
711 | + juju-log "Disabling HTTPS reverse proxy for $CHARM $port_map." |
712 | + a2dissite ${CHARM}_${ext_port} | grep -v "To activate the new configuration" && |
713 | + http_restart=1 |
714 | + fi |
715 | + done |
716 | + if [[ -n "$http_restart" ]] ; then |
717 | + service apache2 restart |
718 | + fi |
719 | +} |
720 | + |
721 | + |
722 | +########################################################################## |
723 | +# Description: Ensures HTTPS is either enabled or disabled for given port |
724 | +# mapping. |
725 | +# Parameters: Variable number of proxy port mappings as |
726 | +# $external:$internal. |
727 | +# Returns: 0 if HTTPS reverse proxy is in place, 1 if it is not. |
728 | +########################################################################## |
729 | +setup_https() { |
730 | + # configure https via apache reverse proxying either |
731 | + # using certs provided by config or keystone. |
732 | + [[ -z "$CHARM" ]] && |
733 | + error_out "setup_https(): CHARM not set." |
734 | + if ! https ; then |
735 | + disable_https $@ |
736 | + else |
737 | + enable_https $@ |
738 | + fi |
739 | +} |
740 | + |
741 | +########################################################################## |
742 | +# Description: Determine correct API server listening port based on |
743 | +# existence of HTTPS reverse proxy and/or haproxy. |
744 | +# Parameters: The standard public port for the given service. |
745 | +# Returns: The correct listening port for API service. |
746 | +########################################################################## |
747 | +determine_api_port() { |
748 | + local public_port="$1" |
749 | + local i=0 |
750 | + ( [[ -n "$(peer_units)" ]] || is_clustered >/dev/null 2>&1 ) && i=$[$i + 1] |
751 | + https >/dev/null 2>&1 && i=$[$i + 1] |
752 | + echo $[$public_port - $[$i * 10]] |
753 | +} |
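determine_api_port shifts the backend down by 10 for each layer sitting in front of it (haproxy when clustered or peered, the apache reverse proxy when HTTPS is on), which is roughly why these hooks map 80 to 70 and 443 to 433. A hypothetical restatement of just the arithmetic (backend_port is not a charm function):

```shell
# Hypothetical restatement of the determine_api_port arithmetic:
# each fronting layer moves the listener down 10 from the public port.
backend_port() {
  echo $(($1 - $2 * 10))
}

backend_port 80 1    # one layer in front -> 70
backend_port 443 1   # -> 433
backend_port 8776 2  # clustered + https -> 8756
```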
754 | + |
755 | +########################################################################## |
756 | +# Description: Determine correct proxy listening port based on public IP + |
757 | +# existence of HTTPS reverse proxy. |
758 | +# Parameters: The standard public port for given service. |
759 | +# Returns: The correct listening port for haproxy service public address. |
760 | +########################################################################## |
761 | +determine_haproxy_port() { |
762 | + local public_port="$1" |
763 | + local i=0 |
764 | + https >/dev/null 2>&1 && i=$[$i + 1] |
765 | + echo $[$public_port - $[$i * 10]] |
766 | +} |
767 | + |
768 | +########################################################################## |
769 | +# Description: Print the value for a given config option in an OpenStack |
770 | +# .ini style configuration file. |
771 | +# Parameters: File path, option to retrieve, optional |
772 | +# section name (default=DEFAULT) |
773 | +# Returns: Prints value if set, prints nothing otherwise. |
774 | +########################################################################## |
775 | +local_config_get() { |
776 | + # return config values set in openstack .ini config files. |
777 | + # default placeholders starting with '%' (eg, %AUTH_HOST%) are |
778 | + # treated as unset values. |
779 | + local file="$1" |
780 | + local option="$2" |
781 | + local section="$3" |
782 | + [[ -z "$section" ]] && section="DEFAULT" |
783 | + python -c " |
784 | +import ConfigParser |
785 | +config = ConfigParser.RawConfigParser() |
786 | +config.read('$file') |
787 | +try: |
788 | + value = config.get('$section', '$option') |
789 | +except: |
790 | + print '' |
791 | + exit(0) |
792 | +if value.startswith('%'): exit(0) |
793 | +print value |
794 | +" |
795 | +} |
796 | + |
797 | +########################################################################## |
798 | +# Description: Creates an rc file exporting environment variables to a |
799 | +# script_path local to the charm's installed directory. |
800 | +# Any charm scripts run outside the juju hook environment can source this |
801 | +# scriptrc to obtain updated config information necessary to perform health |
802 | +# checks or service changes. |
803 | +# |
804 | +# Parameters: |
805 | +# An array of '=' delimited ENV_VAR:value combinations to export. |
806 | +# If optional script_path key is not provided in the array, script_path |
807 | +# defaults to scripts/scriptrc |
808 | +########################################################################## |
809 | +function save_script_rc { |
810 | + if [ ! -n "$JUJU_UNIT_NAME" ]; then |
811 | + echo "Error: Missing JUJU_UNIT_NAME environment variable" |
812 | + exit 1 |
813 | + fi |
814 | + # our default unit_path |
815 | + unit_path="$CHARM_DIR/scripts/scriptrc" |
816 | + echo $unit_path |
817 | + tmp_rc="/tmp/${JUJU_UNIT_NAME/\//-}rc" |
818 | + |
819 | + echo "#!/bin/bash" > $tmp_rc |
820 | + for env_var in "${@}" |
821 | + do |
822 | + if `echo $env_var | grep -q script_path`; then |
823 | + # well then we need to reset the new unit-local script path |
824 | + unit_path="$CHARM_DIR/${env_var/script_path=/}" |
825 | + else |
826 | + echo "export $env_var" >> $tmp_rc |
827 | + fi |
828 | + done |
829 | + chmod 755 $tmp_rc |
830 | + mv $tmp_rc $unit_path |
831 | +} |
832 | |
833 | === modified file 'metadata.yaml' |
834 | --- metadata.yaml 2013-04-22 19:43:53 +0000 |
835 | +++ metadata.yaml 2013-05-29 18:14:34 +0000 |
836 | @@ -9,3 +9,9 @@ |
837 | interface: mysql |
838 | identity-service: |
839 | interface: keystone |
840 | + ha: |
841 | + interface: hacluster |
842 | + scope: container |
843 | +peers: |
844 | + cluster: |
845 | + interface: openstack-dashboard-ha |
846 | |
847 | === modified file 'revision' |
848 | --- revision 2013-01-11 21:59:22 +0000 |
849 | +++ revision 2013-05-29 18:14:34 +0000 |
850 | @@ -1,1 +1,1 @@ |
851 | -22 |
852 | +30 |
853 | |
854 | === added directory 'scripts' |
855 | === added file 'scripts/add_to_cluster' |
856 | --- scripts/add_to_cluster 1970-01-01 00:00:00 +0000 |
857 | +++ scripts/add_to_cluster 2013-05-29 18:14:34 +0000 |
858 | @@ -0,0 +1,13 @@ |
859 | +#!/bin/bash |
860 | +service corosync start || /bin/true |
861 | +sleep 2 |
862 | +while ! service pacemaker start; do |
863 | + echo "Attempting to start pacemaker" |
864 | + sleep 1; |
865 | +done; |
866 | +crm node online |
867 | +sleep 2 |
868 | +while crm status | egrep -q 'Stopped$'; do |
869 | + echo "Waiting for nodes to come online" |
870 | + sleep 1 |
871 | +done |
872 | |
873 | === added file 'scripts/remove_from_cluster' |
874 | --- scripts/remove_from_cluster 1970-01-01 00:00:00 +0000 |
875 | +++ scripts/remove_from_cluster 2013-05-29 18:14:34 +0000 |
876 | @@ -0,0 +1,4 @@ |
877 | +#!/bin/bash |
878 | +crm node standby |
879 | +service pacemaker stop |
880 | +service corosync stop |
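The port arithmetic in `determine_api_port` and `determine_haproxy_port` above can be sketched in isolation. This is a minimal sketch, not charm code: `api_port` and its yes/no flags are hypothetical stand-ins for the `peer_units`/`is_clustered`/`https` checks in the library — each proxy layer sitting in front of the service (haproxy when clustered, the HTTPS reverse proxy) shifts the backend listening port down by 10 from the public port.

```shell
#!/bin/bash
# Hypothetical standalone sketch of the port-offset scheme used by
# determine_api_port/determine_haproxy_port: count the proxy layers in
# front of the service and subtract 10 per layer from the public port.
api_port() {
  local public_port="$1" clustered="$2" https="$3" i=0
  if [ "$clustered" = "yes" ]; then i=$((i + 1)); fi
  if [ "$https" = "yes" ]; then i=$((i + 1)); fi
  echo $((public_port - i * 10))
}

api_port 80 no no    # bare service: listens on the public port (80)
api_port 80 yes no   # behind haproxy only (70)
api_port 80 yes yes  # behind haproxy plus HTTPS reverse proxy (60)
```

Note the library itself uses the deprecated `$[...]` arithmetic form; the `$((...))` form above is the POSIX equivalent.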
Hi, I'm not in the charmers group, but I found:
257 + if [ "$(config-get offline-compression)" != "yes" ]; then
258 + set_or_update COMPRESS_OFFLINE False
259 + apt-get install -y nodejs node-less
260 + else
261 + set_or_update COMPRESS_OFFLINE True
262 + fi
Installing nodejs and node-less should be in the "COMPRESS_OFFLINE True" branch, not "COMPRESS_OFFLINE False", shouldn't it?
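For context, the quoted hunk is in fact correct as written (the follow-up above notes the True/False reading was a misunderstanding): `COMPRESS_OFFLINE = False` means Horizon compresses static assets at request time, so the node toolchain must be available at runtime in that branch, while `COMPRESS_OFFLINE = True` pre-compresses assets and needs no runtime node dependency. A minimal annotated sketch of that branch logic, with the juju/charm helpers (`config-get`, `set_or_update`) reduced to a plain function so it runs standalone:

```shell
#!/bin/bash
# Sketch of the offline-compression hunk quoted above. compress_mode is a
# hypothetical stand-in that maps the charm's offline-compression config
# value to the COMPRESS_OFFLINE setting the hunk would write.
compress_mode() {
  # $1: the offline-compression config option ("yes" or "no")
  if [ "$1" != "yes" ]; then
    # request-time compression: this is the branch that would run
    # "apt-get install -y nodejs node-less", since lessc is needed live
    echo "False"
  else
    # assets pre-compressed at deploy time; no runtime node toolchain
    echo "True"
  fi
}

compress_mode no    # prints False
compress_mode yes   # prints True
```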