Merge lp:~johnsca/charms/trusty/cf-hm9000/services into lp:~cf-charmers/charms/trusty/cf-hm9000/trunk

Proposed by Cory Johns
Status: Merged
Merged at revision: 2
Proposed branch: lp:~johnsca/charms/trusty/cf-hm9000/services
Merge into: lp:~cf-charmers/charms/trusty/cf-hm9000/trunk
Diff against target: 2448 lines (+1093/-882)
53 files modified
files/README.md (+0/-3)
files/default-config.json (+0/-27)
files/hm9000.json.erb (+0/-30)
files/hm9000_analyzer_ctl (+0/-41)
files/hm9000_api_server_ctl (+0/-40)
files/hm9000_evacuator_ctl (+0/-40)
files/hm9000_fetcher_ctl (+0/-41)
files/hm9000_listener_ctl (+0/-44)
files/hm9000_metrics_server_ctl (+0/-40)
files/hm9000_sender_ctl (+0/-41)
files/hm9000_shredder_ctl (+0/-41)
files/syslog_forwarder.conf.erb (+0/-65)
hooks/cc-relation-changed (+5/-0)
hooks/charmhelpers/contrib/cloudfoundry/common.py (+0/-57)
hooks/charmhelpers/contrib/cloudfoundry/config_helper.py (+0/-11)
hooks/charmhelpers/contrib/cloudfoundry/contexts.py (+59/-55)
hooks/charmhelpers/contrib/cloudfoundry/install.py (+0/-35)
hooks/charmhelpers/contrib/cloudfoundry/services.py (+0/-118)
hooks/charmhelpers/contrib/cloudfoundry/upstart_helper.py (+0/-14)
hooks/charmhelpers/contrib/openstack/context.py (+1/-1)
hooks/charmhelpers/contrib/openstack/neutron.py (+17/-1)
hooks/charmhelpers/contrib/openstack/utils.py (+6/-1)
hooks/charmhelpers/contrib/storage/linux/lvm.py (+1/-1)
hooks/charmhelpers/contrib/storage/linux/utils.py (+19/-5)
hooks/charmhelpers/core/hookenv.py (+98/-1)
hooks/charmhelpers/core/host.py (+52/-0)
hooks/charmhelpers/core/services.py (+357/-0)
hooks/charmhelpers/core/templating.py (+51/-0)
hooks/charmhelpers/fetch/__init__.py (+96/-64)
hooks/config-changed (+5/-2)
hooks/config.py (+80/-0)
hooks/etcd-relation-changed (+5/-0)
hooks/install (+43/-8)
hooks/metrics-relation-changed (+5/-0)
hooks/nats-relation-changed (+5/-0)
hooks/relation-name-relation-broken (+0/-2)
hooks/relation-name-relation-changed (+0/-9)
hooks/relation-name-relation-departed (+0/-5)
hooks/relation-name-relation-joined (+0/-5)
hooks/start (+5/-4)
hooks/stop (+5/-7)
hooks/upgrade-charm (+5/-6)
metadata.yaml (+6/-9)
notes.md (+0/-8)
templates/cf-hm9k-analyzer.conf (+17/-0)
templates/cf-hm9k-api-server.conf (+17/-0)
templates/cf-hm9k-evacuator.conf (+17/-0)
templates/cf-hm9k-fetcher.conf (+17/-0)
templates/cf-hm9k-listener.conf (+17/-0)
templates/cf-hm9k-metrics-server.conf (+17/-0)
templates/cf-hm9k-sender.conf (+17/-0)
templates/cf-hm9k-shredder.conf (+17/-0)
templates/hm9000.json (+31/-0)
To merge this branch: bzr merge lp:~johnsca/charms/trusty/cf-hm9000/services
Reviewer: Cloud Foundry Charmers (review requested; status: Pending)
Review via email: mp+221770@code.launchpad.net

Description of the change

Finished charm using services framework

https://codereview.appspot.com/104820043/

Cory Johns (johnsca) wrote:

Reviewers: mp+221770_code.launchpad.net,

Message:
Please take a look.

Description:
Finished charm using services framework

https://code.launchpad.net/~johnsca/charms/trusty/cf-hm9000/services/+merge/221770

(do not edit description out of merge proposal)

Please review this at https://codereview.appspot.com/104820043/

Affected files (+1064, -879 lines):
   A [revision details]
   D files/README.md
   D files/default-config.json
   A files/hm9000
   D files/hm9000.json.erb
   D files/hm9000_analyzer_ctl
   D files/hm9000_api_server_ctl
   D files/hm9000_evacuator_ctl
   D files/hm9000_fetcher_ctl
   D files/hm9000_listener_ctl
   D files/hm9000_metrics_server_ctl
   D files/hm9000_sender_ctl
   D files/hm9000_shredder_ctl
   D files/syslog_forwarder.conf.erb
   A hooks/cc-relation-changed
   M hooks/charmhelpers/contrib/cloudfoundry/common.py
   D hooks/charmhelpers/contrib/cloudfoundry/config_helper.py
   M hooks/charmhelpers/contrib/cloudfoundry/contexts.py
   D hooks/charmhelpers/contrib/cloudfoundry/install.py
   D hooks/charmhelpers/contrib/cloudfoundry/services.py
   D hooks/charmhelpers/contrib/cloudfoundry/upstart_helper.py
   M hooks/charmhelpers/contrib/openstack/context.py
   M hooks/charmhelpers/contrib/openstack/neutron.py
   M hooks/charmhelpers/contrib/openstack/utils.py
   M hooks/charmhelpers/contrib/storage/linux/lvm.py
   M hooks/charmhelpers/contrib/storage/linux/utils.py
   M hooks/charmhelpers/core/hookenv.py
   M hooks/charmhelpers/core/host.py
   A hooks/charmhelpers/core/services.py
   A hooks/charmhelpers/core/templating.py
   M hooks/charmhelpers/fetch/__init__.py
   A hooks/config.py
   M hooks/config-changed
   A hooks/etcd-relation-changed
   M hooks/install
   A hooks/metrics-relation-changed
   A hooks/nats-relation-changed
   D hooks/relation-name-relation-broken
   D hooks/relation-name-relation-changed
   D hooks/relation-name-relation-departed
   D hooks/relation-name-relation-joined
   M hooks/start
   M hooks/stop
   M hooks/upgrade-charm
   M metadata.yaml
   D notes.md
   A templates/cf-hm9k-analyzer.conf
   A templates/cf-hm9k-api-server.conf
   A templates/cf-hm9k-evacuator.conf
   A templates/cf-hm9k-fetcher.conf
   A templates/cf-hm9k-listener.conf
   A templates/cf-hm9k-metrics-server.conf
   A templates/cf-hm9k-sender.conf
   A templates/cf-hm9k-shredder.conf
   A templates/hm9000.json

Benjamin Saller (bcsaller) wrote:

Thanks for this; it's a little hard to verify, as you indicated. It looks like
we can add the internal MCAT suite to the CI as well.

I think landing this now and completing the topology is fine; even if we
have to evolve its configuration, it's an 'optional' component.

LGTM

https://codereview.appspot.com/104820043/diff/1/hooks/config.py
File hooks/config.py (right):

https://codereview.appspot.com/104820043/diff/1/hooks/config.py#newcode24
hooks/config.py:24: 'required_data': [contexts.NatsRelation(),
What do you think about a top-level definition of this list that we
reuse, something like:

required_data: hm_contexts

where hm_contexts is defined above the service block.
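
For illustration only, a minimal sketch of that refactor (hypothetical hook
code; the exact context list in hooks/config.py is not shown in this diff):

    from charmhelpers.contrib.cloudfoundry import contexts

    hm_contexts = [contexts.NatsRelation(),
                   contexts.EtcdRelation(),
                   contexts.CloudControllerRelation()]

    SERVICES = [
        {'service': 'cf-hm9k-analyzer', 'required_data': hm_contexts},
        {'service': 'cf-hm9k-fetcher', 'required_data': hm_contexts},
        # ...one entry per hm9000 process, all sharing hm_contexts
    ]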

https://codereview.appspot.com/104820043/diff/1/templates/cf-hm9k-analyzer.conf
File templates/cf-hm9k-analyzer.conf (right):

https://codereview.appspot.com/104820043/diff/1/templates/cf-hm9k-analyzer.conf#newcode2
templates/cf-hm9k-analyzer.conf:2: author "Cory Johns
<email address hidden>"
This is an existing issue, but I'd like all the Authors in all the
projects to point to the project list rather than a bunch of
individuals, and the charms' maintainer fields as well. We should make a
card for this and switch them all at once.

https://codereview.appspot.com/104820043/diff/1/templates/hm9000.json
File templates/hm9000.json (right):

https://codereview.appspot.com/104820043/diff/1/templates/hm9000.json#newcode13
templates/hm9000.json:13: "store_urls":
["http://{{etcd['hostname']}}:{{etcd['port']}}"],
This could easily be a list, no? I think we'll want to handle it that
way (via iteration), but for now this is fine.
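
A sketch of the iterated form (assuming the context exposes one dict per etcd
unit with 'hostname' and 'port' keys, as the later revisions here do; the loop
details are illustrative):

    "store_urls": [
      {% for unit in etcd -%}
      "http://{{ unit['hostname'] }}:{{ unit['port'] }}"{% if not loop.last %},{% endif %}
      {% endfor %}
    ],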

https://codereview.appspot.com/104820043/diff/1/templates/hm9000.json#newcode15
templates/hm9000.json:15: "metrics_server_port": 7879,
Making two cards, been meaning to audit all the default ports and
username/password defaults. This is something we'll need to be able to
manage better in the future. Nothing to do in this branch

https://codereview.appspot.com/104820043/

Cory Johns (johnsca) wrote:

On 2014/06/02 18:29:44, benjamin.saller wrote:

https://codereview.appspot.com/104820043/diff/1/hooks/config.py#newcode24
> hooks/config.py:24: 'required_data': [contexts.NatsRelation(),
> What do you think about a top-level definition of this list that we
> reuse, something like:
>
> required_data: hm_contexts
>
> where hm_contexts is defined above the service block.

+1 I definitely should have done this to begin with.

https://codereview.appspot.com/104820043/diff/1/templates/cf-hm9k-analyzer.conf#newcode2
> templates/cf-hm9k-analyzer.conf:2: author "Cory Johns
> <mailto:<email address hidden>>"
> This is an existing issue, but I'd like all the Authors in all the
> projects to point to the project list rather than a bunch of
> individuals, and the charms' maintainer fields as well. We should make a
> card for this and switch them all at once.

Easy enough to do in this charm during this review, then we can fix the
existing ones in a batch.

https://codereview.appspot.com/104820043/diff/1/templates/hm9000.json#newcode13
> templates/hm9000.json:13: "store_urls":
> ["http://{{etcd['hostname']}}:{{etcd['port']}}"],
> This could easily be a list, no? I think we'll want to handle it that
> way (via iteration), but for now this is fine.

This raises an issue with how I implemented multiple units in the
framework, which I will address now.
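
For reference, a sketch of how the reworked framework ends up exposing
multiple units (it mirrors the RelationContext.get_data list behavior in the
charmhelpers/core/services.py shown in the diff below; usage is illustrative):

    # Each complete etcd unit contributes one dict under the 'etcd' key.
    etcd = contexts.EtcdRelation()
    if etcd.is_ready():
        store_urls = ['http://{hostname}:{port}'.format(**unit)
                      for unit in etcd['etcd']]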

https://codereview.appspot.com/104820043/

4. By Cory Johns

Updated all author & maintainer fields to cf-charmers

5. By Cory Johns

Refactor away repetition in required_data items

6. By Cory Johns

Handle multiple etcd units

7. By Cory Johns

Resynced charm-helpers for missed RelationContext classes

8. By Cory Johns

Remove extra whitespace from hm9000.json config

9. By Cory Johns

Resynced charm-helpers for bug fix

Benjamin Saller (bcsaller) wrote:

Thanks for this. Since it depends on the helpers change, let's resolve
that before I approve this. Good stuff, though.

https://codereview.appspot.com/104820043/diff/20001/metadata.yaml
File metadata.yaml (right):

https://codereview.appspot.com/104820043/diff/20001/metadata.yaml#newcode3
metadata.yaml:3: maintainer: cf-charmers
Ha, nice catch. We might want to use the full email address for
cf-charmers, though, or say lp:~cf-charmers, to make it a little
clearer.
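
For example (the exact team address here is illustrative, not confirmed):

    maintainer: Cloud Foundry Charmers <cf-charmers@lists.launchpad.net>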

https://codereview.appspot.com/104820043/diff/20001/templates/hm9000.json
File templates/hm9000.json (right):

https://codereview.appspot.com/104820043/diff/20001/templates/hm9000.json#newcode14
templates/hm9000.json:14: {% for unit in etcd.values() -%}
This might change a little if you like the helpers changes suggested in
the other branch. Thanks for this important fix.

https://codereview.appspot.com/104820043/


10. By Cory Johns

Updated charm-helpers and cleaned up iteration of multiple etcd units


Preview Diff

=== removed file 'files/README.md'
--- files/README.md 2014-05-14 16:40:09 +0000
+++ files/README.md 1970-01-01 00:00:00 +0000
@@ -1,3 +0,0 @@
-# Contents
-
-ctl files and config from cf-release
=== removed file 'files/default-config.json'
--- files/default-config.json 2014-05-14 16:40:09 +0000
+++ files/default-config.json 1970-01-01 00:00:00 +0000
@@ -1,27 +0,0 @@
-{
-  "heartbeat_period_in_seconds": 10,
-
-  "cc_auth_user": "mcat",
-  "cc_auth_password": "testing",
-  "cc_base_url": "http://127.0.0.1:6001",
-  "skip_cert_verify": true,
-  "desired_state_batch_size": 500,
-  "fetcher_network_timeout_in_seconds": 10,
-
-  "store_schema_version": 1,
-  "store_type": "etcd",
-  "store_urls": ["http://127.0.0.1:4001"],
-
-  "metrics_server_port": 7879,
-  "metrics_server_user": "metrics_server_user",
-  "metrics_server_password": "canHazMetrics?",
-
-  "log_level": "INFO",
-
-  "nats": [{
-    "host": "127.0.0.1",
-    "port": 4222,
-    "user": "",
-    "password": ""
-  }]
-}
=== added file 'files/hm9000'
Binary files files/hm9000 1970-01-01 00:00:00 +0000 and files/hm9000 2014-06-05 18:17:30 +0000 differ
=== removed file 'files/hm9000.json.erb'
--- files/hm9000.json.erb 2014-05-14 16:40:09 +0000
+++ files/hm9000.json.erb 1970-01-01 00:00:00 +0000
@@ -1,30 +0,0 @@
-{
-  "heartbeat_period_in_seconds": 10,
-
-  "cc_auth_user": "<%= p("ccng.bulk_api_user") %>",
-  "cc_auth_password": "<%= p("ccng.bulk_api_password") %>",
-  "cc_base_url": "<%= p("cc.srv_api_uri") %>",
-  "skip_cert_verify": <%= p("ssl.skip_cert_verify") %>,
-  "desired_state_batch_size": 500,
-  "fetcher_network_timeout_in_seconds": 10,
-
-  "store_schema_version": 4,
-  "store_urls": [<%= p("etcd.machines").map{|addr| "\"http://#{addr}:4001\""}.join(",")%>],
-
-  "metrics_server_port": 0,
-  "metrics_server_user": "",
-  "metrics_server_password": "",
-
-  "log_level": "INFO",
-
-  "nats": <%=
-    p("nats.machines").collect do |addr|
-      {
-        "host" => addr,
-        "port" => p("nats.port"),
-        "user" => p("nats.user"),
-        "password" => p("nats.password")
-      }
-    end.to_json
-  %>
-}
=== removed file 'files/hm9000_analyzer_ctl'
--- files/hm9000_analyzer_ctl 2014-05-14 16:40:09 +0000
+++ files/hm9000_analyzer_ctl 1970-01-01 00:00:00 +0000
@@ -1,41 +0,0 @@
-#!/bin/bash
-
-RUN_DIR=/var/vcap/sys/run/hm9000
-LOG_DIR=/var/vcap/sys/log/hm9000
-PIDFILE=$RUN_DIR/hm9000_analyzer.pid
-
-source /var/vcap/packages/common/utils.sh
-
-case $1 in
-
-  start)
-    pid_guard $PIDFILE "hm9000_analyzer"
-
-    mkdir -p $RUN_DIR
-    mkdir -p $LOG_DIR
-
-    chown -R vcap:vcap $RUN_DIR
-    chown -R vcap:vcap $LOG_DIR
-
-    echo $$ > $PIDFILE
-
-    exec chpst -u vcap:vcap /var/vcap/packages/hm9000/hm9000 \
-      analyze \
-      --config=/var/vcap/jobs/hm9000/config/hm9000.json \
-      --poll \
-      1>>$LOG_DIR/hm9000_analyzer.stdout.log \
-      2>>$LOG_DIR/hm9000_analyzer.stderr.log
-
-    ;;
-
-  stop)
-    kill_and_wait $PIDFILE
-
-    ;;
-
-  *)
-    echo "Usage: hm9000_analyzer_ctl {start|stop}"
-
-    ;;
-
-esac
=== removed file 'files/hm9000_api_server_ctl'
--- files/hm9000_api_server_ctl 2014-05-14 16:40:09 +0000
+++ files/hm9000_api_server_ctl 1970-01-01 00:00:00 +0000
@@ -1,40 +0,0 @@
-#!/bin/bash
-
-RUN_DIR=/var/vcap/sys/run/hm9000
-LOG_DIR=/var/vcap/sys/log/hm9000
-PIDFILE=$RUN_DIR/hm9000_api_server.pid
-
-source /var/vcap/packages/common/utils.sh
-
-case $1 in
-
-  start)
-    pid_guard $PIDFILE "hm9000_api_server"
-
-    mkdir -p $RUN_DIR
-    mkdir -p $LOG_DIR
-
-    chown -R vcap:vcap $RUN_DIR
-    chown -R vcap:vcap $LOG_DIR
-
-    echo $$ > $PIDFILE
-
-    exec chpst -u vcap:vcap /var/vcap/packages/hm9000/hm9000 \
-      serve_api \
-      --config=/var/vcap/jobs/hm9000/config/hm9000.json \
-      1>>$LOG_DIR/hm9000_api_server.stdout.log \
-      2>>$LOG_DIR/hm9000_api_server.stderr.log
-
-    ;;
-
-  stop)
-    kill_and_wait $PIDFILE
-
-    ;;
-
-  *)
-    echo "Usage: hm9000_api_server_ctl {start|stop}"
-
-    ;;
-
-esac
=== removed file 'files/hm9000_evacuator_ctl'
--- files/hm9000_evacuator_ctl 2014-05-14 16:40:09 +0000
+++ files/hm9000_evacuator_ctl 1970-01-01 00:00:00 +0000
@@ -1,40 +0,0 @@
-#!/bin/bash
-
-RUN_DIR=/var/vcap/sys/run/hm9000
-LOG_DIR=/var/vcap/sys/log/hm9000
-PIDFILE=$RUN_DIR/hm9000_evacuator.pid
-
-source /var/vcap/packages/common/utils.sh
-
-case $1 in
-
-  start)
-    pid_guard $PIDFILE "hm9000_evacuator"
-
-    mkdir -p $RUN_DIR
-    mkdir -p $LOG_DIR
-
-    chown -R vcap:vcap $RUN_DIR
-    chown -R vcap:vcap $LOG_DIR
-
-    echo $$ > $PIDFILE
-
-    exec chpst -u vcap:vcap /var/vcap/packages/hm9000/hm9000 \
-      evacuator \
-      --config=/var/vcap/jobs/hm9000/config/hm9000.json \
-      1>>$LOG_DIR/hm9000_evacuator.stdout.log \
-      2>>$LOG_DIR/hm9000_evacuator.stderr.log
-
-    ;;
-
-  stop)
-    kill_and_wait $PIDFILE
-
-    ;;
-
-  *)
-    echo "Usage: hm9000_evacuator_ctl {start|stop}"
-
-    ;;
-
-esac
=== removed file 'files/hm9000_fetcher_ctl'
--- files/hm9000_fetcher_ctl 2014-05-14 16:40:09 +0000
+++ files/hm9000_fetcher_ctl 1970-01-01 00:00:00 +0000
@@ -1,41 +0,0 @@
-#!/bin/bash
-
-RUN_DIR=/var/vcap/sys/run/hm9000
-LOG_DIR=/var/vcap/sys/log/hm9000
-PIDFILE=$RUN_DIR/hm9000_fetcher.pid
-
-source /var/vcap/packages/common/utils.sh
-
-case $1 in
-
-  start)
-    pid_guard $PIDFILE "hm9000_fetcher"
-
-    mkdir -p $RUN_DIR
-    mkdir -p $LOG_DIR
-
-    chown -R vcap:vcap $RUN_DIR
-    chown -R vcap:vcap $LOG_DIR
-
-    echo $$ > $PIDFILE
-
-    exec chpst -u vcap:vcap /var/vcap/packages/hm9000/hm9000 \
-      fetch_desired \
-      --config=/var/vcap/jobs/hm9000/config/hm9000.json \
-      --poll \
-      1>>$LOG_DIR/hm9000_fetcher.stdout.log \
-      2>>$LOG_DIR/hm9000_fetcher.stderr.log
-
-    ;;
-
-  stop)
-    kill_and_wait $PIDFILE
-
-    ;;
-
-  *)
-    echo "Usage: hm9000_fetcher_ctl {start|stop}"
-
-    ;;
-
-esac
=== removed file 'files/hm9000_listener_ctl'
--- files/hm9000_listener_ctl 2014-05-14 16:40:09 +0000
+++ files/hm9000_listener_ctl 1970-01-01 00:00:00 +0000
@@ -1,44 +0,0 @@
-#!/bin/bash
-
-RUN_DIR=/var/vcap/sys/run/hm9000
-LOG_DIR=/var/vcap/sys/log/hm9000
-PIDFILE=$RUN_DIR/hm9000_listener.pid
-
-source /var/vcap/packages/common/utils.sh
-
-case $1 in
-
-  start)
-    pid_guard $PIDFILE "hm9000_listener"
-
-    mkdir -p $RUN_DIR
-    mkdir -p $LOG_DIR
-
-    chown -R vcap:vcap $RUN_DIR
-    chown -R vcap:vcap $LOG_DIR
-
-    <% if_p("syslog_aggregator") do %>
-    /var/vcap/packages/syslog_aggregator/setup_syslog_forwarder.sh /var/vcap/jobs/hm9000/config
-    <% end %>
-
-    echo $$ > $PIDFILE
-
-    exec chpst -u vcap:vcap /var/vcap/packages/hm9000/hm9000 \
-      listen \
-      --config=/var/vcap/jobs/hm9000/config/hm9000.json \
-      1>>$LOG_DIR/hm9000_listener.stdout.log \
-      2>>$LOG_DIR/hm9000_listener.stderr.log
-
-    ;;
-
-  stop)
-    kill_and_wait $PIDFILE
-
-    ;;
-
-  *)
-    echo "Usage: hm9000_listener_ctl {start|stop}"
-
-    ;;
-
-esac
=== removed file 'files/hm9000_metrics_server_ctl'
--- files/hm9000_metrics_server_ctl 2014-05-14 16:40:09 +0000
+++ files/hm9000_metrics_server_ctl 1970-01-01 00:00:00 +0000
@@ -1,40 +0,0 @@
-#!/bin/bash
-
-RUN_DIR=/var/vcap/sys/run/hm9000
-LOG_DIR=/var/vcap/sys/log/hm9000
-PIDFILE=$RUN_DIR/hm9000_metrics_server.pid
-
-source /var/vcap/packages/common/utils.sh
-
-case $1 in
-
-  start)
-    pid_guard $PIDFILE "hm9000_metrics_server"
-
-    mkdir -p $RUN_DIR
-    mkdir -p $LOG_DIR
-
-    chown -R vcap:vcap $RUN_DIR
-    chown -R vcap:vcap $LOG_DIR
-
-    echo $$ > $PIDFILE
-
-    exec chpst -u vcap:vcap /var/vcap/packages/hm9000/hm9000 \
-      serve_metrics \
-      --config=/var/vcap/jobs/hm9000/config/hm9000.json \
-      1>>$LOG_DIR/hm9000_metrics_server.stdout.log \
-      2>>$LOG_DIR/hm9000_metrics_server.stderr.log
-
-    ;;
-
-  stop)
-    kill_and_wait $PIDFILE
-
-    ;;
-
-  *)
-    echo "Usage: hm9000_metrics_server_ctl {start|stop}"
-
-    ;;
-
-esac
=== removed file 'files/hm9000_sender_ctl'
--- files/hm9000_sender_ctl 2014-05-14 16:40:09 +0000
+++ files/hm9000_sender_ctl 1970-01-01 00:00:00 +0000
@@ -1,41 +0,0 @@
-#!/bin/bash
-
-RUN_DIR=/var/vcap/sys/run/hm9000
-LOG_DIR=/var/vcap/sys/log/hm9000
-PIDFILE=$RUN_DIR/hm9000_sender.pid
-
-source /var/vcap/packages/common/utils.sh
-
-case $1 in
-
-  start)
-    pid_guard $PIDFILE "hm9000_sender"
-
-    mkdir -p $RUN_DIR
-    mkdir -p $LOG_DIR
-
-    chown -R vcap:vcap $RUN_DIR
-    chown -R vcap:vcap $LOG_DIR
-
-    echo $$ > $PIDFILE
-
-    exec chpst -u vcap:vcap /var/vcap/packages/hm9000/hm9000 \
-      send \
-      --config=/var/vcap/jobs/hm9000/config/hm9000.json \
-      --poll \
-      1>>$LOG_DIR/hm9000_sender.stdout.log \
-      2>>$LOG_DIR/hm9000_sender.stderr.log
-
-    ;;
-
-  stop)
-    kill_and_wait $PIDFILE
-
-    ;;
-
-  *)
-    echo "Usage: hm9000_sender_ctl {start|stop}"
-
-    ;;
-
-esac
=== removed file 'files/hm9000_shredder_ctl'
--- files/hm9000_shredder_ctl 2014-05-14 16:40:09 +0000
+++ files/hm9000_shredder_ctl 1970-01-01 00:00:00 +0000
@@ -1,41 +0,0 @@
-#!/bin/bash
-
-RUN_DIR=/var/vcap/sys/run/hm9000
-LOG_DIR=/var/vcap/sys/log/hm9000
-PIDFILE=$RUN_DIR/hm9000_shredder.pid
-
-source /var/vcap/packages/common/utils.sh
-
-case $1 in
-
-  start)
-    pid_guard $PIDFILE "hm9000_shredder"
-
-    mkdir -p $RUN_DIR
-    mkdir -p $LOG_DIR
-
-    chown -R vcap:vcap $RUN_DIR
-    chown -R vcap:vcap $LOG_DIR
-
-    echo $$ > $PIDFILE
-
-    exec chpst -u vcap:vcap /var/vcap/packages/hm9000/hm9000 \
-      shred \
-      --config=/var/vcap/jobs/hm9000/config/hm9000.json \
-      --poll \
-      1>>$LOG_DIR/hm9000_shredder.stdout.log \
-      2>>$LOG_DIR/hm9000_shredder.stderr.log
-
-    ;;
-
-  stop)
-    kill_and_wait $PIDFILE
-
-    ;;
-
-  *)
-    echo "Usage: hm9000_shredder_ctl {start|stop}"
-
-    ;;
-
-esac
=== removed file 'files/syslog_forwarder.conf.erb'
--- files/syslog_forwarder.conf.erb 2014-05-14 16:40:09 +0000
+++ files/syslog_forwarder.conf.erb 1970-01-01 00:00:00 +0000
@@ -1,65 +0,0 @@
-<% if_p("syslog_aggregator.address", "syslog_aggregator.port", "syslog_aggregator.transport") do |address, port, transport| %>
-$ModLoad imuxsock # local message reception (rsyslog uses a datagram socket)
-$MaxMessageSize 4k # default is 2k
-$WorkDirectory /var/vcap/sys/rsyslog/buffered # where messages should be buffered on disk
-
-# Forward vcap messages to the aggregator
-#
-$ActionResumeRetryCount -1 # Try until the server becomes available
-$ActionQueueType LinkedList # Allocate on-demand
-$ActionQueueFileName agg_backlog # Spill to disk if queue is full
-$ActionQueueMaxDiskSpace 32m # Max size for disk queue
-$ActionQueueLowWaterMark 2000 # Num messages. Assuming avg size of 512B, this is 1MiB.
-$ActionQueueHighWaterMark 8000 # Num messages. Assuming avg size of 512B, this is 4MiB. (If this is reached, messages will spill to disk until the low watermark is reached).
-$ActionQueueTimeoutEnqueue 0 # Discard messages if the queue + disk is full
-$ActionQueueSaveOnShutdown on # Save in-memory data to disk if rsyslog shuts down
-
-<% ip = spec.networks.send(properties.networks.apps).ip %>
-template(name="CfLogTemplate" type="list") {
-  constant(value="<")
-  property(name="pri")
-  constant(value=">")
-  property(name="timestamp" dateFormat="rfc3339")
-  constant(value=" <%= ip.strip %> ")
-  property(name="programname")
-  constant(value=" [job=")
-  property(name="programname")
-  constant(value=" index=<%= spec.index.to_i %>] ")
-  property(name="msg")
-}
-
-<% if transport == "relp" %>
-$ModLoad omrelp
-:programname, startswith, "vcap." :omrelp:<%= address %>:<%= port %>;CfLogTemplate
-<% elsif transport == "udp" %>
-:programname, startswith, "vcap." @<%= address %>:<%= port %>;CfLogTemplate
-<% elsif transport == "tcp" %>
-:programname, startswith, "vcap." @@<%= address %>:<%= port %>;CfLogTemplate
-<% else %>
-#only RELP, UDP, and TCP are supported
-<% end %>
-
-# Log vcap messages locally, too
-#$template VcapComponentLogFile, "/var/log/%programname:6:$%/%programname:6:$%.log"
-#$template VcapComponentLogFormat, "%timegenerated% %syslogseverity-text% -- %msg%\n"
-#:programname, startswith, "vcap." -?VcapComponentLogFile;VcapComponentLogFormat
-
-# Prevent them from reaching anywhere else
-:programname, startswith, "vcap." ~
-
-<% if properties.syslog_aggregator.all %>
-  <% if transport == "relp" %>
-*.* :omrelp:<%= address %>:<%= port %>
-  <% elsif transport == "udp" %>
-*.* @<%= address %>:<%= port %>
-  <% elsif transport == "tcp" %>
-*.* @@<%= address %>:<%= port %>
-  <% else %>
-#only RELP, UDP, and TCP are supported
-  <% end %>
-<% end %>
-
-<% end.else do %>
-# Prevent them from reaching anywhere else
-:programname, startswith, "vcap." ~
-<% end %>
=== added file 'hooks/cc-relation-changed'
--- hooks/cc-relation-changed 1970-01-01 00:00:00 +0000
+++ hooks/cc-relation-changed 2014-06-05 18:17:30 +0000
@@ -0,0 +1,5 @@
+#!/usr/bin/env python
+from charmhelpers.core import services
+import config
+manager = services.ServiceManager(config.SERVICES)
+manager.manage()
=== modified file 'hooks/charmhelpers/contrib/cloudfoundry/common.py'
--- hooks/charmhelpers/contrib/cloudfoundry/common.py 2014-05-14 16:40:09 +0000
+++ hooks/charmhelpers/contrib/cloudfoundry/common.py 2014-06-05 18:17:30 +0000
@@ -1,11 +1,3 @@
-import sys
-import os
-import pwd
-import grp
-import subprocess
-
-from contextlib import contextmanager
-from charmhelpers.core.hookenv import log, ERROR, DEBUG
 from charmhelpers.core import host
 
 from charmhelpers.fetch import (
@@ -13,55 +5,6 @@
 )
 
 
-def run(command, exit_on_error=True, quiet=False):
-    '''Run a command and return the output.'''
-    if not quiet:
-        log("Running {!r}".format(command), DEBUG)
-    p = subprocess.Popen(
-        command, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
-        shell=isinstance(command, basestring))
-    p.stdin.close()
-    lines = []
-    for line in p.stdout:
-        if line:
-            if not quiet:
-                print line
-            lines.append(line)
-        elif p.poll() is not None:
-            break
-
-    p.wait()
-
-    if p.returncode == 0:
-        return '\n'.join(lines)
-
-    if p.returncode != 0 and exit_on_error:
-        log("ERROR: {}".format(p.returncode), ERROR)
-        sys.exit(p.returncode)
-
-    raise subprocess.CalledProcessError(
-        p.returncode, command, '\n'.join(lines))
-
-
-def chownr(path, owner, group):
-    uid = pwd.getpwnam(owner).pw_uid
-    gid = grp.getgrnam(group).gr_gid
-    for root, dirs, files in os.walk(path):
-        for momo in dirs:
-            os.chown(os.path.join(root, momo), uid, gid)
-        for momo in files:
-            os.chown(os.path.join(root, momo), uid, gid)
-
-
-@contextmanager
-def chdir(d):
-    cur = os.getcwd()
-    try:
-        yield os.chdir(d)
-    finally:
-        os.chdir(cur)
-
-
 def prepare_cloudfoundry_environment(config_data, packages):
     add_source(config_data['source'], config_data.get('key'))
     apt_update(fatal=True)
 
=== removed file 'hooks/charmhelpers/contrib/cloudfoundry/config_helper.py'
--- hooks/charmhelpers/contrib/cloudfoundry/config_helper.py 2014-05-14 16:40:09 +0000
+++ hooks/charmhelpers/contrib/cloudfoundry/config_helper.py 1970-01-01 00:00:00 +0000
@@ -1,11 +0,0 @@
-import jinja2
-
-TEMPLATES_DIR = 'templates'
-
-def render_template(template_name, context, template_dir=TEMPLATES_DIR):
-    templates = jinja2.Environment(
-        loader=jinja2.FileSystemLoader(template_dir))
-    template = templates.get_template(template_name)
-    return template.render(context)
-
-
=== modified file 'hooks/charmhelpers/contrib/cloudfoundry/contexts.py'
--- hooks/charmhelpers/contrib/cloudfoundry/contexts.py 2014-05-14 16:40:09 +0000
+++ hooks/charmhelpers/contrib/cloudfoundry/contexts.py 2014-06-05 18:17:30 +0000
@@ -1,70 +1,74 @@
 import os
 import yaml
 
-from charmhelpers.core import hookenv
-from charmhelpers.contrib.openstack.context import OSContextGenerator
-
-
-class RelationContext(OSContextGenerator):
-    def __call__(self):
-        if not hookenv.relation_ids(self.interface):
-            return {}
-
-        ctx = {}
-        for rid in hookenv.relation_ids(self.interface):
-            for unit in hookenv.related_units(rid):
-                reldata = hookenv.relation_get(rid=rid, unit=unit)
-                required = set(self.required_keys)
-                if set(reldata.keys()).issuperset(required):
-                    ns = ctx.setdefault(self.interface, {})
-                    for k, v in reldata.items():
-                        ns[k] = v
-                    return ctx
-
-        return {}
-
-
-class ConfigContext(OSContextGenerator):
-    def __call__(self):
-        return hookenv.config()
-
-
-# Stores `config_data` hash into yaml file with `file_name` as a name
-# if `file_name` already exists, then it loads data from `file_name`.
-class StoredContext(OSContextGenerator):
+from charmhelpers.core.services import RelationContext
+
+
+class StoredContext(dict):
+    """
+    A data context that always returns the data that it was first created with.
+    """
     def __init__(self, file_name, config_data):
-        self.data = config_data
+        """
+        If the file exists, populate `self` with the data from the file.
+        Otherwise, populate with the given data and persist it to the file.
+        """
         if os.path.exists(file_name):
-            with open(file_name, 'r') as file_stream:
-                self.data = yaml.load(file_stream)
-                if not self.data:
-                    raise OSError("%s is empty" % file_name)
+            self.update(self.read_context(file_name))
         else:
-            with open(file_name, 'w') as file_stream:
-                yaml.dump(config_data, file_stream)
-                self.data = config_data
-
-    def __call__(self):
-        return self.data
-
-
-class StaticContext(OSContextGenerator):
-    def __init__(self, data):
-        self.data = data
-
-    def __call__(self):
-        return self.data
-
-
-class NatsContext(RelationContext):
+            self.store_context(file_name, config_data)
+            self.update(config_data)
+
+    def store_context(self, file_name, config_data):
+        with open(file_name, 'w') as file_stream:
+            yaml.dump(config_data, file_stream)
+
+    def read_context(self, file_name):
+        with open(file_name, 'r') as file_stream:
+            data = yaml.load(file_stream)
+            if not data:
+                raise OSError("%s is empty" % file_name)
+            return data
+
+
+class NatsRelation(RelationContext):
     interface = 'nats'
     required_keys = ['nats_port', 'nats_address', 'nats_user', 'nats_password']
 
 
-class RouterContext(RelationContext):
+class MysqlRelation(RelationContext):
+    interface = 'db'
+    required_keys = ['user', 'password', 'host', 'database']
+    dsn_template = "mysql2://{user}:{password}@{host}:{port}/{database}"
+
+    def get_data(self):
+        RelationContext.get_data(self)
+        if self.is_ready():
+            if 'port' not in self['db']:
+                self['db']['port'] = '3306'
+            self['db']['dsn'] = self.dsn_template.format(**self['db'])
+
+
+class RouterRelation(RelationContext):
     interface = 'router'
     required_keys = ['domain']
 
-class LogRouterContext(RelationContext):
+
+class LogRouterRelation(RelationContext):
     interface = 'logrouter'
     required_keys = ['shared-secret', 'logrouter-address']
+
+
+class LoggregatorRelation(RelationContext):
+    interface = 'loggregator'
+    required_keys = ['shared_secret', 'loggregator_address']
+
+
+class EtcdRelation(RelationContext):
+    interface = 'etcd'
+    required_keys = ['hostname', 'port']
+
+
+class CloudControllerRelation(RelationContext):
+    interface = 'cc'
+    required_keys = ['hostname', 'port', 'user', 'password']
 
=== removed file 'hooks/charmhelpers/contrib/cloudfoundry/install.py'
--- hooks/charmhelpers/contrib/cloudfoundry/install.py 2014-05-14 16:40:09 +0000
+++ hooks/charmhelpers/contrib/cloudfoundry/install.py 1970-01-01 00:00:00 +0000
@@ -1,35 +0,0 @@
-import os
-import subprocess
-
-
-def install(src, dest, fileprops=None, sudo=False):
-    """Install a file from src to dest. Dest can be a complete filename
-    or a target directory. fileprops is a dict with 'owner' (username of owner)
-    and mode (octal string) as keys, the defaults are 'ubuntu' and '400'
-
-    When owner is passed or when access requires it sudo can be set to True and
-    sudo will be used to install the file.
-    """
-    if not fileprops:
-        fileprops = {}
-    mode = fileprops.get('mode', '400')
-    owner = fileprops.get('owner')
-    cmd = ['install']
-
-    if not os.path.exists(src):
-        raise OSError(src)
-
-    if not os.path.exists(dest) and not os.path.exists(os.path.dirname(dest)):
-        # create all but the last component as path
-        cmd.append('-D')
-
-    if mode:
-        cmd.extend(['-m', mode])
-
-    if owner:
-        cmd.extend(['-o', owner])
-
-    if sudo:
-        cmd.insert(0, 'sudo')
-    cmd.extend([src, dest])
-    subprocess.check_call(cmd)
=== removed file 'hooks/charmhelpers/contrib/cloudfoundry/services.py'
--- hooks/charmhelpers/contrib/cloudfoundry/services.py 2014-05-14 16:40:09 +0000
+++ hooks/charmhelpers/contrib/cloudfoundry/services.py 1970-01-01 00:00:00 +0000
@@ -1,118 +0,0 @@
-import os
-import tempfile
-from charmhelpers.core import host
-
-from charmhelpers.contrib.cloudfoundry.install import install
-from charmhelpers.core.hookenv import log
-from jinja2 import Environment, FileSystemLoader
-
-SERVICE_CONFIG = []
-TEMPLATE_LOADER = None
-
-
-def render_template(template_name, context):
-    """Render template to a tempfile returning the name"""
-    _, fn = tempfile.mkstemp()
-    template = load_template(template_name)
-    output = template.render(context)
-    with open(fn, "w") as fp:
-        fp.write(output)
-    return fn
-
-
-def collect_contexts(context_providers):
-    ctx = {}
-    for provider in context_providers:
-        c = provider()
-        if not c:
-            return {}
-        ctx.update(c)
-    return ctx
-
-
-def load_template(name):
-    return TEMPLATE_LOADER.get_template(name)
-
-
-def configure_templates(template_dir):
-    global TEMPLATE_LOADER
-    TEMPLATE_LOADER = Environment(loader=FileSystemLoader(template_dir))
-
-
-def register(service_configs, template_dir):
-    """Register a list of service configs.
-
-    Service Configs are dicts in the following formats:
-
-        {
-            "service": <service name>,
-            "templates": [ {
-                'target': <render target of template>,
-                'source': <optional name of template in passed in template_dir>
-                'file_properties': <optional dict taking owner and octal mode>
-                'contexts': [ context generators, see contexts.py ]
-            }
-        ] }
-
-    If 'source' is not provided for a template the template_dir will
-    be consulted for ``basename(target).j2``.
-    """
-    global SERVICE_CONFIG
-    if template_dir:
-        configure_templates(template_dir)
-    SERVICE_CONFIG.extend(service_configs)
-
-
-def reset():
-    global SERVICE_CONFIG
-    SERVICE_CONFIG = []
-
-
-# def service_context(name):
-#     contexts = collect_contexts(template['contexts'])
-
-def reconfigure_service(service_name, restart=True):
-    global SERVICE_CONFIG
-    service = None
-    for service in SERVICE_CONFIG:
-        if service['service'] == service_name:
-            break
-    if not service or service['service'] != service_name:
-        raise KeyError('Service not registered: %s' % service_name)
-
-    templates = service['templates']
-    for template in templates:
-        contexts = collect_contexts(template['contexts'])
-        if contexts:
-            template_target = template['target']
-            default_template = "%s.j2" % os.path.basename(template_target)
-            template_name = template.get('source', default_template)
-            output_file = render_template(template_name, contexts)
-            file_properties = template.get('file_properties')
-            install(output_file, template_target, file_properties)
-            os.unlink(output_file)
-        else:
-            restart = False
-
-    if restart:
-        host.service_restart(service_name)
-
-
-def stop_services():
-    global SERVICE_CONFIG
-    for service in SERVICE_CONFIG:
-        if host.service_running(service['service']):
-            host.service_stop(service['service'])
-
-
-def get_service(service_name):
-    global SERVICE_CONFIG
-    for service in SERVICE_CONFIG:
-        if service_name == service['service']:
-            return service
-    return None
-
-
-def reconfigure_services(restart=True):
-    for service in SERVICE_CONFIG:
-        reconfigure_service(service['service'], restart=restart)
=== removed file 'hooks/charmhelpers/contrib/cloudfoundry/upstart_helper.py'
--- hooks/charmhelpers/contrib/cloudfoundry/upstart_helper.py 2014-05-14 16:40:09 +0000
+++ hooks/charmhelpers/contrib/cloudfoundry/upstart_helper.py 1970-01-01 00:00:00 +0000
@@ -1,14 +0,0 @@
-import os
-import glob
-from charmhelpers.core import hookenv
-from charmhelpers.core.hookenv import charm_dir
-from charmhelpers.contrib.cloudfoundry.install import install
-
-
-def install_upstart_scripts(dirname=os.path.join(hookenv.charm_dir(),
-                                                 'files/upstart'),
-                            pattern='*.conf'):
-    for script in glob.glob("%s/%s" % (dirname, pattern)):
-        filename = os.path.join(dirname, script)
-        hookenv.log('Installing upstart job:' + filename, hookenv.DEBUG)
-        install(filename, '/etc/init')
=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
--- hooks/charmhelpers/contrib/openstack/context.py 2014-05-14 16:40:09 +0000
+++ hooks/charmhelpers/contrib/openstack/context.py 2014-06-05 18:17:30 +0000
@@ -570,7 +570,7 @@
 
         if self.plugin == 'ovs':
             ctxt.update(self.ovs_ctxt())
-        elif self.plugin == 'nvp':
+        elif self.plugin in ['nvp', 'nsx']:
             ctxt.update(self.nvp_ctxt())
 
         alchemy_flags = config('neutron-alchemy-flags')
 
=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
--- hooks/charmhelpers/contrib/openstack/neutron.py 2014-05-14 16:40:09 +0000
+++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-06-05 18:17:30 +0000
@@ -114,14 +114,30 @@
             'server_packages': ['neutron-server',
                                 'neutron-plugin-nicira'],
             'server_services': ['neutron-server']
+        },
+        'nsx': {
+            'config': '/etc/neutron/plugins/vmware/nsx.ini',
+            'driver': 'vmware',
+            'contexts': [
+                context.SharedDBContext(user=config('neutron-database-user'),
+                                        database=config('neutron-database'),
+                                        relation_prefix='neutron',
+                                        ssl_dir=NEUTRON_CONF_DIR)],
+            'services': [],
+            'packages': [],
+            'server_packages': ['neutron-server',
+                                'neutron-plugin-vmware'],
+            'server_services': ['neutron-server']
         }
     }
-    # NOTE: patch in ml2 plugin for icehouse onwards
     if release >= 'icehouse':
+        # NOTE: patch in ml2 plugin for icehouse onwards
        plugins['ovs']['config'] = '/etc/neutron/plugins/ml2/ml2_conf.ini'
        plugins['ovs']['driver'] = 'neutron.plugins.ml2.plugin.Ml2Plugin'
        plugins['ovs']['server_packages'] = ['neutron-server',
                                             'neutron-plugin-ml2']
+        # NOTE: patch in vmware renames nvp->nsx for icehouse onwards
+        plugins['nvp'] = plugins['nsx']
     return plugins
 
 
 
=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
--- hooks/charmhelpers/contrib/openstack/utils.py 2014-05-14 16:40:09 +0000
+++ hooks/charmhelpers/contrib/openstack/utils.py 2014-06-05 18:17:30 +0000
@@ -131,6 +131,11 @@
 def get_os_codename_package(package, fatal=True):
     '''Derive OpenStack release codename from an installed package.'''
     apt.init()
+
+    # Tell apt to build an in-memory cache to prevent race conditions (if
+    # another process is already building the cache).
+    apt.config.set("Dir::Cache::pkgcache", "")
+
     cache = apt.Cache()
 
     try:
@@ -183,7 +188,7 @@
         if cname == codename:
             return version
     #e = "Could not determine OpenStack version for package: %s" % pkg
-    #error_out(e)
+    # error_out(e)
 
 
 os_rel = None
 
=== modified file 'hooks/charmhelpers/contrib/storage/linux/lvm.py'
--- hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-05-14 16:40:09 +0000
+++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-06-05 18:17:30 +0000
@@ -62,7 +62,7 @@
     pvd = check_output(['pvdisplay', block_device]).splitlines()
     for l in pvd:
         if l.strip().startswith('VG Name'):
-            vg = ' '.join(l.split()).split(' ').pop()
+            vg = ' '.join(l.strip().split()[2:])
     return vg
 
 
 
=== modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
--- hooks/charmhelpers/contrib/storage/linux/utils.py 2014-05-14 16:40:09 +0000
+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-06-05 18:17:30 +0000
@@ -1,4 +1,5 @@
-from os import stat
+import os
+import re
 from stat import S_ISBLK
 
 from subprocess import (
@@ -14,7 +15,9 @@
 
     :returns: boolean: True if path is a block device, False if not.
     '''
-    return S_ISBLK(stat(path).st_mode)
+    if not os.path.exists(path):
+        return False
+    return S_ISBLK(os.stat(path).st_mode)
 
 
 def zap_disk(block_device):
@@ -29,7 +32,18 @@
                 '--clear', block_device])
     dev_end = check_output(['blockdev', '--getsz', block_device])
     gpt_end = int(dev_end.split()[0]) - 100
-    check_call(['dd', 'if=/dev/zero', 'of=%s'%(block_device),
+    check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device),
                 'bs=1M', 'count=1'])
-    check_call(['dd', 'if=/dev/zero', 'of=%s'%(block_device),
-                'bs=512', 'count=100', 'seek=%s'%(gpt_end)])
+    check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device),
+                'bs=512', 'count=100', 'seek=%s' % (gpt_end)])
+
+def is_device_mounted(device):
+    '''Given a device path, return True if that device is mounted, and False
+    if it isn't.
+
+    :param device: str: Full path of the device to check.
+    :returns: boolean: True if the path represents a mounted device, False if
+    it doesn't.
+    '''
+    out = check_output(['mount'])
+    return bool(re.search(device + r"[0-9]+\b", out))
 
=== modified file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py 2014-05-14 16:40:09 +0000
+++ hooks/charmhelpers/core/hookenv.py 2014-06-05 18:17:30 +0000
@@ -155,6 +155,100 @@
     return os.path.basename(sys.argv[0])
 
 
+class Config(dict):
+    """A Juju charm config dictionary that can write itself to
+    disk (as json) and track which values have changed since
+    the previous hook invocation.
+
+    Do not instantiate this object directly - instead call
+    ``hookenv.config()``
+
+    Example usage::
+
+        >>> # inside a hook
+        >>> from charmhelpers.core import hookenv
+        >>> config = hookenv.config()
+        >>> config['foo']
+        'bar'
+        >>> config['mykey'] = 'myval'
+        >>> config.save()
+
+
+        >>> # user runs `juju set mycharm foo=baz`
+        >>> # now we're inside subsequent config-changed hook
+        >>> config = hookenv.config()
+        >>> config['foo']
+        'baz'
+        >>> # test to see if this val has changed since last hook
+        >>> config.changed('foo')
+        True
+        >>> # what was the previous value?
+        >>> config.previous('foo')
+        'bar'
+        >>> # keys/values that we add are preserved across hooks
+        >>> config['mykey']
+        'myval'
+        >>> # don't forget to save at the end of hook!
+        >>> config.save()
+
+    """
+    CONFIG_FILE_NAME = '.juju-persistent-config'
+
+    def __init__(self, *args, **kw):
+        super(Config, self).__init__(*args, **kw)
+        self._prev_dict = None
+        self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
+        if os.path.exists(self.path):
+            self.load_previous()
+
+    def load_previous(self, path=None):
+        """Load previous copy of config from disk so that current values
+        can be compared to previous values.
+
+        :param path:
+
+            File path from which to load the previous config. If `None`,
+            config is loaded from the default location. If `path` is
+            specified, subsequent `save()` calls will write to the same
+            path.
+
+        """
+        self.path = path or self.path
+        with open(self.path) as f:
+            self._prev_dict = json.load(f)
+
+    def changed(self, key):
+        """Return true if the value for this key has changed since
+        the last save.
+
+        """
+        if self._prev_dict is None:
+            return True
+        return self.previous(key) != self.get(key)
+
+    def previous(self, key):
+        """Return previous value for this key, or None if there
+        is no "previous" value.
+
+        """
+        if self._prev_dict:
+            return self._prev_dict.get(key)
+        return None
+
+    def save(self):
+        """Save this config to disk.
+
+        Preserves items in _prev_dict that do not exist in self.
+
+        """
+        if self._prev_dict:
+            for k, v in self._prev_dict.iteritems():
+                if k not in self:
+                    self[k] = v
+        with open(self.path, 'w') as f:
+            json.dump(self, f)
+
+
 @cached
 def config(scope=None):
     """Juju charm configuration"""
@@ -163,7 +257,10 @@
         config_cmd_line.append(scope)
     config_cmd_line.append('--format=json')
     try:
-        return json.loads(subprocess.check_output(config_cmd_line))
+        config_data = json.loads(subprocess.check_output(config_cmd_line))
+        if scope is not None:
+            return config_data
+        return Config(config_data)
     except ValueError:
         return None
 
 
=== modified file 'hooks/charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py 2014-05-14 16:40:09 +0000
+++ hooks/charmhelpers/core/host.py 2014-06-05 18:17:30 +0000
@@ -12,6 +12,9 @@
 import string
 import subprocess
 import hashlib
+import shutil
+import apt_pkg
+from contextlib import contextmanager
 
 from collections import OrderedDict
 
@@ -60,6 +63,11 @@
         return False
 
 
+def service_available(service_name):
+    """Determine whether a system service is available"""
+    return service('status', service_name)
+
+
 def adduser(username, password=None, shell='/bin/bash', system_user=False):
     """Add a user to the system"""
     try:
@@ -143,6 +151,16 @@
         target.write(content)
 
 
+def copy_file(src, dst, owner='root', group='root', perms=0444):
+    """Create or overwrite a file with the contents of another file"""
+    log("Writing file {} {}:{} {:o} from {}".format(dst, owner, group, perms, src))
+    uid = pwd.getpwnam(owner).pw_uid
+    gid = grp.getgrnam(group).gr_gid
+    shutil.copyfile(src, dst)
+    os.chown(dst, uid, gid)
+    os.chmod(dst, perms)
+
+
 def mount(device, mountpoint, options=None, persist=False):
     """Mount a filesystem at a particular mountpoint"""
     cmd_args = ['mount']
@@ -295,3 +313,37 @@
         if 'link/ether' in words:
             hwaddr = words[words.index('link/ether') + 1]
     return hwaddr
+
+
+def cmp_pkgrevno(package, revno, pkgcache=None):
+    '''Compare supplied revno with the revno of the installed package
+     1 => Installed revno is greater than supplied arg
+     0 => Installed revno is the same as supplied arg
+    -1 => Installed revno is less than supplied arg
+    '''
+    if not pkgcache:
+        apt_pkg.init()
+        pkgcache = apt_pkg.Cache()
+    pkg = pkgcache[package]
+    return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
+
+
+@contextmanager
+def chdir(d):
+    cur = os.getcwd()
+    try:
+        yield os.chdir(d)
+    finally:
+        os.chdir(cur)
+
+
+def chownr(path, owner, group):
+    uid = pwd.getpwnam(owner).pw_uid
+    gid = grp.getgrnam(group).gr_gid
+
+    for root, dirs, files in os.walk(path):
+        for name in dirs + files:
+            full = os.path.join(root, name)
+            broken_symlink = os.path.lexists(full) and not os.path.exists(full)
+            if not broken_symlink:
+                os.chown(full, uid, gid)
=== added file 'hooks/charmhelpers/core/services.py'
--- hooks/charmhelpers/core/services.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/services.py 2014-06-05 18:17:30 +0000
@@ -0,0 +1,357 @@
1import os
2import sys
3from collections import Iterable
4from charmhelpers.core import templating
5from charmhelpers.core import host
6from charmhelpers.core import hookenv
7
8
9class ServiceManager(object):
10 def __init__(self, services=None):
11 """
12 Register a list of services, given their definitions.
13
14 Traditional charm authoring is focused on implementing hooks. That is,
15 the charm author is thinking in terms of "What hook am I handling; what
16 does this hook need to do?" However, in most cases, the real question
17 should be "Do I have the information I need to configure and start this
18 piece of software and, if so, what are the steps for doing so." The
19 ServiceManager framework tries to bring the focus to the data and the
20 setup tasks, in the most declarative way possible.
21
22 Service definitions are dicts in the following formats (all keys except
23 'service' are optional):
24
25 {
26 "service": <service name>,
27 "required_data": <list of required data contexts>,
28 "data_ready": <one or more callbacks>,
29 "data_lost": <one or more callbacks>,
30 "start": <one or more callbacks>,
31 "stop": <one or more callbacks>,
32 "ports": <list of ports to manage>,
33 }
34
35 The 'required_data' list should contain dicts of required data (or
36 dependency managers that act like dicts and know how to collect the data).
37 Only when all items in the 'required_data' list are populated are the list
38 of 'data_ready' and 'start' callbacks executed. See `is_ready()` for more
39 information.
40
41 The 'data_ready' value should be either a single callback, or a list of
42 callbacks, to be called when all items in 'required_data' pass `is_ready()`.
43 Each callback will be called with the service name as the only parameter.
44 After these all of the 'data_ready' callbacks are called, the 'start'
45 callbacks are fired.
46
47 The 'data_lost' value should be either a single callback, or a list of
48 callbacks, to be called when a 'required_data' item no longer passes
49 `is_ready()`. Each callback will be called with the service name as the
50 only parameter. After these all of the 'data_ready' callbacks are called,
51 the 'stop' callbacks are fired.
52
53 The 'start' value should be either a single callback, or a list of
54 callbacks, to be called when starting the service, after the 'data_ready'
55 callbacks are complete. Each callback will be called with the service
56 name as the only parameter. This defaults to
57 `[host.service_start, services.open_ports]`.
58
59 The 'stop' value should be either a single callback, or a list of
60 callbacks, to be called when stopping the service. If the service is
61 being stopped because it no longer has all of its 'required_data', this
62 will be called after all of the 'data_lost' callbacks are complete.
63 Each callback will be called with the service name as the only parameter.
64 This defaults to `[services.close_ports, host.service_stop]`.
65
66 The 'ports' value should be a list of ports to manage. The default
67 'start' handler will open the ports after the service is started,
68 and the default 'stop' handler will close the ports prior to stopping
69 the service.
70
71
72 Examples:
73
74 The following registers an Upstart service called bingod that depends on
75 a mongodb relation and which runs a custom `db_migrate` function prior to
76 restarting the service, and a Runit serivce called spadesd.
77
78 manager = services.ServiceManager([
79 {
80 'service': 'bingod',
81 'ports': [80, 443],
82 'required_data': [MongoRelation(), config(), {'my': 'data'}],
83 'data_ready': [
84 services.template(source='bingod.conf'),
85 services.template(source='bingod.ini',
86 target='/etc/bingod.ini',
87 owner='bingo', perms=0400),
88 ],
89 },
90 {
91 'service': 'spadesd',
92 'data_ready': services.template(source='spadesd_run.j2',
93 target='/etc/sv/spadesd/run',
94 perms=0555),
95 'start': runit_start,
96 'stop': runit_stop,
97 },
98 ])
99 manager.manage()
100 """
101 self.services = {}
102 for service in services or []:
103 service_name = service['service']
104 self.services[service_name] = service
105
106 def manage(self):
107 """
108 Handle the current hook by doing The Right Thing with the registered services.
109 """
110 hook_name = os.path.basename(sys.argv[0])
111 if hook_name == 'stop':
112 self.stop_services()
113 else:
114 self.reconfigure_services()
115
116 def reconfigure_services(self, *service_names):
117 """
118 Update all files for one or more registered services, and,
119 if ready, optionally restart them.
120
121 If no service names are given, reconfigures all registered services.
122 """
123 for service_name in service_names or self.services.keys():
124 if self.is_ready(service_name):
125 self.fire_event('data_ready', service_name)
126 self.fire_event('start', service_name, default=[
127 host.service_restart,
128 open_ports])
129 self.save_ready(service_name)
130 else:
131 if self.was_ready(service_name):
132 self.fire_event('data_lost', service_name)
133 self.fire_event('stop', service_name, default=[
134 close_ports,
135 host.service_stop])
136 self.save_lost(service_name)
137
138 def stop_services(self, *service_names):
139 """
140 Stop one or more registered services, by name.
141
142 If no service names are given, stops all registered services.
143 """
144 for service_name in service_names or self.services.keys():
145 self.fire_event('stop', service_name, default=[
146 close_ports,
147 host.service_stop])
148
149 def get_service(self, service_name):
150 """
151 Given the name of a registered service, return its service definition.
152 """
153 service = self.services.get(service_name)
154 if not service:
155 raise KeyError('Service not registered: %s' % service_name)
156 return service
157
158 def fire_event(self, event_name, service_name, default=None):
159 """
160 Fire a data_ready, data_lost, start, or stop event on a given service.
161 """
162 service = self.get_service(service_name)
163 callbacks = service.get(event_name, default)
164 if not callbacks:
165 return
166 if not isinstance(callbacks, Iterable):
167 callbacks = [callbacks]
168 for callback in callbacks:
169 if isinstance(callback, ManagerCallback):
170 callback(self, service_name, event_name)
171 else:
172 callback(service_name)
173
174 def is_ready(self, service_name):
175 """
176 Determine if a registered service is ready, by checking its 'required_data'.
177
178 A 'required_data' item can be any mapping type, and is considered ready
179 if `bool(item)` evaluates as True.
180 """
181 service = self.get_service(service_name)
182 reqs = service.get('required_data', [])
183 return all(bool(req) for req in reqs)
184
185 def save_ready(self, service_name):
186 """
187 Save an indicator that the given service is now data_ready.
188 """
189 ready_file = '{}/.ready.{}'.format(hookenv.charm_dir(), service_name)
190 with open(ready_file, 'a'):
191 pass
192
193 def save_lost(self, service_name):
194 """
195 Save an indicator that the given service is no longer data_ready.
196 """
197 ready_file = '{}/.ready.{}'.format(hookenv.charm_dir(), service_name)
198 if os.path.exists(ready_file):
199 os.remove(ready_file)
200
201 def was_ready(self, service_name):
202 """
203 Determine if the given service was previously data_ready.
204 """
205 ready_file = '{}/.ready.{}'.format(hookenv.charm_dir(), service_name)
206 return os.path.exists(ready_file)
207
208
209class DefaultMappingList(list):
210 """
211 A list of mappings that proxies calls to `__getitem__` with non-int keys,
212 as well as calls to `get`, to the first mapping in the list.
213
214 >>> dml = DefaultMappingList([
215 ... {'foo': 'bar'},
216 ... {'foo': 'qux'},
217 ... ])
218 >>> dml['foo'] == 'bar'
219 True
220 >>> dml[1]['foo'] == 'qux'
221 True
222 >>> dml.get('foo') == 'bar'
223 True
224 """
225 def __getitem__(self, key):
226 if isinstance(key, int):
227 return super(DefaultMappingList, self).__getitem__(key)
228 else:
229 return super(DefaultMappingList, self).__getitem__(0)[key]
230
231 def get(self, key, default=None):
232 return self[0].get(key, default)
233
234
235class RelationContext(dict):
236 """
237 Base class for a context generator that gets relation data from juju.
238
239 Subclasses must provide `interface`, which is the interface type of interest,
240 and `required_keys`, which is the set of keys required for the relation to
241 be considered complete. The first relation for the interface that is complete
242    will be used to populate the data for the template.
243
244 The generated context will be namespaced under the interface type, to prevent
245 potential naming conflicts.
246 """
247 interface = None
248 required_keys = []
249
250 def __bool__(self):
251 """
252 Updates the data and returns True if all of the required_keys are available.
253 """
254 self.get_data()
255 return self.is_ready()
256
257 __nonzero__ = __bool__
258
259 def __repr__(self):
260 return super(RelationContext, self).__repr__()
261
262 def is_ready(self):
263 """
264        Returns True if at least one unit has provided all of the `required_keys`.
265 """
266 return len(self.get(self.interface, [])) > 0
267
268 def _is_ready(self, unit_data):
269 """
270 Helper method that tests a set of relation data and returns True if
271 all of the `required_keys` are present.
272 """
273 return set(unit_data.keys()).issuperset(set(self.required_keys))
274
275 def get_data(self):
276 """
277 Retrieve the relation data and store it under `self[self.interface]`.
278
279 Only complete sets of data are stored.
280
281 The data can be treated as either a list or a mapping. Treating it as
282 a list will give the data from all the complete units. Treating it as
283        a mapping will give the data for the first complete unit, lexicographically
284        ordered by relation ID then unit ID.
285
286        For example, suppose there are relation IDs 'db:1' and 'db:2', the
287        service on relation 'db:1' has units 'wordpress/0' and 'wordpress/1',
288        and the service on relation 'db:2' has unit 'mediawiki/0'; then
289        accessing `self[self.interface]['foo']` will return the 'foo' value
290        from unit 'wordpress/0'.
291 """
292 if not hookenv.relation_ids(self.interface):
293 return
294
295 ns = self.setdefault(self.interface, DefaultMappingList())
296 for rid in sorted(hookenv.relation_ids(self.interface)):
297 for unit in sorted(hookenv.related_units(rid)):
298 reldata = hookenv.relation_get(rid=rid, unit=unit)
299 if self._is_ready(reldata):
300 ns.append(reldata)
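As a point of reference, a subclass only needs to declare the interface and required keys; the NatsRelation used in hooks/config.py below is presumably along these lines (a sketch, not the actual contrib code):

    class NatsRelation(RelationContext):
        interface = 'nats'
        required_keys = ['nats_address', 'nats_port', 'nats_user', 'nats_password']

    # The instance then doubles as a readiness check and a template context:
    # bool(ctx) fetches and validates the relation data, and
    # ctx['nats']['nats_address'] returns the first complete unit's value.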
301
302
303class ManagerCallback(object):
304 """
305 Special case of a callback that takes the `ServiceManager` instance
306 in addition to the service name.
307
308    Subclasses should implement `__call__` which should accept three parameters:
309
310 * `manager` The `ServiceManager` instance
311 * `service_name` The name of the service it's being triggered for
312 * `event_name` The name of the event that this callback is handling
313 """
314 def __call__(self, manager, service_name, event_name):
315 raise NotImplementedError()
316
317
318class TemplateCallback(ManagerCallback):
319 """
320 Callback class that will render a template, for use as a ready action.
321
322    The template is rendered to `target` with the service's merged `required_data` contexts; both `source` and `target` must be given.
323 """
324 def __init__(self, source, target, owner='root', group='root', perms=0444):
325 self.source = source
326 self.target = target
327 self.owner = owner
328 self.group = group
329 self.perms = perms
330
331 def __call__(self, manager, service_name, event_name):
332 service = manager.get_service(service_name)
333 context = {}
334 for ctx in service.get('required_data', []):
335 context.update(ctx)
336 templating.render(self.source, self.target, context,
337 self.owner, self.group, self.perms)
338
339
340class PortManagerCallback(ManagerCallback):
341 """
342 Callback class that will open or close ports, for use as either
343 a start or stop action.
344 """
345 def __call__(self, manager, service_name, event_name):
346 service = manager.get_service(service_name)
347 for port in service.get('ports', []):
348 if event_name == 'start':
349 hookenv.open_port(port)
350 elif event_name == 'stop':
351 hookenv.close_port(port)
352
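For behaviour beyond templates and ports, a callback that needs the manager can subclass ManagerCallback directly; a minimal hypothetical example:

    class LogEvent(ManagerCallback):
        def __call__(self, manager, service_name, event_name):
            hookenv.log('{} fired for {}'.format(event_name, service_name))

Plain callables remain supported and receive only the service name.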
353
354# Convenience aliases
355render_template = template = TemplateCallback
356open_ports = PortManagerCallback()
357close_ports = PortManagerCallback()
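Putting it together, a hook then only needs a service definition list and a ServiceManager. A sketch with illustrative names (SomeRelation and my-daemon are placeholders; see hooks/config.py below for the real definitions):

    from charmhelpers.core import services

    SERVICES = [{
        'service': 'my-daemon',
        'required_data': [SomeRelation()],   # ready when every item is truthy
        'data_ready': [
            services.template(source='my-daemon.conf',
                              target='/etc/init/my-daemon.conf'),
        ],
        'ports': [8080],                     # opened on start, closed on stop
    }]

    services.ServiceManager(SERVICES).manage()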
=== added file 'hooks/charmhelpers/core/templating.py'
--- hooks/charmhelpers/core/templating.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/core/templating.py 2014-06-05 18:17:30 +0000
@@ -0,0 +1,51 @@
1import os
2
3from charmhelpers.core import host
4from charmhelpers.core import hookenv
5
6
7def render(source, target, context, owner='root', group='root', perms=0444, templates_dir=None):
8 """
9 Render a template.
10
11 The `source` path, if not absolute, is relative to the `templates_dir`.
12
13 The `target` path should be absolute.
14
15 The context should be a dict containing the values to be replaced in the
16 template.
17
18 The `owner`, `group`, and `perms` options will be passed to `write_file`.
19
20 If omitted, `templates_dir` defaults to the `templates` folder in the charm.
21
22 Note: Using this requires python-jinja2; if it is not installed, calling
23 this will attempt to use charmhelpers.fetch.apt_install to install it.
24 """
25 try:
26 from jinja2 import FileSystemLoader, Environment, exceptions
27 except ImportError:
28 try:
29 from charmhelpers.fetch import apt_install
30 except ImportError:
31 hookenv.log('Could not import jinja2, and could not import '
32 'charmhelpers.fetch to install it',
33 level=hookenv.ERROR)
34 raise
35 apt_install('python-jinja2', fatal=True)
36 from jinja2 import FileSystemLoader, Environment, exceptions
37
38 if templates_dir is None:
39 templates_dir = os.path.join(hookenv.charm_dir(), 'templates')
40 loader = Environment(loader=FileSystemLoader(templates_dir))
41 try:
43 template = loader.get_template(source)
44 except exceptions.TemplateNotFound as e:
45 hookenv.log('Could not load template %s from %s.' %
46 (source, templates_dir),
47 level=hookenv.ERROR)
48 raise e
49 content = template.render(context)
50 host.mkdir(os.path.dirname(target))
51 host.write_file(target, content, owner, group, perms)
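For example, a hook could render a charm template with a call like the following (an illustrative sketch; 'motd.tmpl' is not part of this charm):

    from charmhelpers.core import templating

    # Renders <charm_dir>/templates/motd.tmpl to /etc/motd as root:root 0444.
    templating.render(source='motd.tmpl',
                      target='/etc/motd',
                      context={'unit_name': 'cf-hm9000/0'})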
=== modified file 'hooks/charmhelpers/fetch/__init__.py'
--- hooks/charmhelpers/fetch/__init__.py 2014-05-14 16:40:09 +0000
+++ hooks/charmhelpers/fetch/__init__.py 2014-06-05 18:17:30 +0000
@@ -1,4 +1,5 @@
 import importlib
+import time
 from yaml import safe_load
 from charmhelpers.core.host import (
     lsb_release
@@ -15,6 +16,7 @@
 import apt_pkg
 import os
 
+
 CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
 deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
 """
@@ -56,10 +58,62 @@
     'precise-proposed/icehouse': 'precise-proposed/icehouse',
 }
 
+# The order of this list is very important. Handlers should be listed
+# from least- to most-specific URL matching.
+FETCH_HANDLERS = (
+    'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
+    'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
+)
+
+APT_NO_LOCK = 100  # The return code for "couldn't acquire lock" in APT.
+APT_NO_LOCK_RETRY_DELAY = 10  # Wait 10 seconds between apt lock checks.
+APT_NO_LOCK_RETRY_COUNT = 30  # Retry to acquire the lock X times.
+
+
+class SourceConfigError(Exception):
+    pass
+
+
+class UnhandledSource(Exception):
+    pass
+
+
+class AptLockError(Exception):
+    pass
+
+
+class BaseFetchHandler(object):
+
+    """Base class for FetchHandler implementations in fetch plugins"""
+
+    def can_handle(self, source):
+        """Returns True if the source can be handled. Otherwise returns
+        a string explaining why it cannot"""
+        return "Wrong source type"
+
+    def install(self, source):
+        """Try to download and unpack the source. Return the path to the
+        unpacked files or raise UnhandledSource."""
+        raise UnhandledSource("Wrong source type {}".format(source))
+
+    def parse_url(self, url):
+        return urlparse(url)
+
+    def base_url(self, url):
+        """Return url without querystring or fragment"""
+        parts = list(self.parse_url(url))
+        parts[4:] = ['' for i in parts[4:]]
+        return urlunparse(parts)
+
 
 def filter_installed_packages(packages):
     """Returns a list of packages that require installation"""
     apt_pkg.init()
+
+    # Tell apt to build an in-memory cache to prevent race conditions (if
+    # another process is already building the cache).
+    apt_pkg.config.set("Dir::Cache::pkgcache", "")
+
     cache = apt_pkg.Cache()
     _pkgs = []
     for package in packages:
@@ -87,14 +141,7 @@
     cmd.extend(packages)
     log("Installing {} with options: {}".format(packages,
                                                 options))
-    env = os.environ.copy()
-    if 'DEBIAN_FRONTEND' not in env:
-        env['DEBIAN_FRONTEND'] = 'noninteractive'
-
-    if fatal:
-        subprocess.check_call(cmd, env=env)
-    else:
-        subprocess.call(cmd, env=env)
+    _run_apt_command(cmd, fatal)
 
99146
100def apt_upgrade(options=None, fatal=False, dist=False):147def apt_upgrade(options=None, fatal=False, dist=False):
@@ -109,24 +156,13 @@
     else:
         cmd.append('upgrade')
     log("Upgrading with options: {}".format(options))
-
-    env = os.environ.copy()
-    if 'DEBIAN_FRONTEND' not in env:
-        env['DEBIAN_FRONTEND'] = 'noninteractive'
-
-    if fatal:
-        subprocess.check_call(cmd, env=env)
-    else:
-        subprocess.call(cmd, env=env)
+    _run_apt_command(cmd, fatal)
 
 
 def apt_update(fatal=False):
     """Update local apt cache"""
     cmd = ['apt-get', 'update']
-    if fatal:
-        subprocess.check_call(cmd)
-    else:
-        subprocess.call(cmd)
+    _run_apt_command(cmd, fatal)
 
 
 def apt_purge(packages, fatal=False):
@@ -137,10 +173,7 @@
     else:
         cmd.extend(packages)
     log("Purging {}".format(packages))
-    if fatal:
-        subprocess.check_call(cmd)
-    else:
-        subprocess.call(cmd)
+    _run_apt_command(cmd, fatal)
 
 
 def apt_hold(packages, fatal=False):
@@ -151,6 +184,7 @@
     else:
         cmd.extend(packages)
     log("Holding {}".format(packages))
+
     if fatal:
         subprocess.check_call(cmd)
     else:
@@ -188,10 +222,6 @@
                               key])
 
 
-class SourceConfigError(Exception):
-    pass
-
-
 def configure_sources(update=False,
                       sources_var='install_sources',
                       keys_var='install_keys'):
@@ -224,17 +254,6 @@
     if update:
         apt_update(fatal=True)
 
-# The order of this list is very important. Handlers should be listed in from
-# least- to most-specific URL matching.
-FETCH_HANDLERS = (
-    'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
-    'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
-)
-
-
-class UnhandledSource(Exception):
-    pass
-
 
 def install_remote(source):
     """
@@ -265,30 +284,6 @@
     return install_remote(source)
 
 
-class BaseFetchHandler(object):
-
-    """Base class for FetchHandler implementations in fetch plugins"""
-
-    def can_handle(self, source):
-        """Returns True if the source can be handled. Otherwise returns
-        a string explaining why it cannot"""
-        return "Wrong source type"
-
-    def install(self, source):
-        """Try to download and unpack the source. Return the path to the
-        unpacked files or raise UnhandledSource."""
-        raise UnhandledSource("Wrong source type {}".format(source))
-
-    def parse_url(self, url):
-        return urlparse(url)
-
-    def base_url(self, url):
-        """Return url without querystring or fragment"""
-        parts = list(self.parse_url(url))
-        parts[4:] = ['' for i in parts[4:]]
-        return urlunparse(parts)
-
-
 def plugins(fetch_handlers=None):
     if not fetch_handlers:
         fetch_handlers = FETCH_HANDLERS
@@ -306,3 +301,40 @@
         log("FetchHandler {} not found, skipping plugin".format(
             handler_name))
     return plugin_list
+
+
+def _run_apt_command(cmd, fatal=False):
+    """
+    Run an apt command, retrying if `fatal` is set and the apt/dpkg lock
+    is held by another process.
+
+    :param cmd: list: The apt command to run.
+    :param fatal: bool: Whether the command's return code should be checked
+        (and the command retried while the lock cannot be acquired).
+    """
+    env = os.environ.copy()
+
+    if 'DEBIAN_FRONTEND' not in env:
+        env['DEBIAN_FRONTEND'] = 'noninteractive'
+
+    if fatal:
+        retry_count = 0
+        result = None
+
+        # If the command is considered "fatal", we need to retry if the apt
+        # lock was not acquired.
+
+        while result is None or result == APT_NO_LOCK:
+            try:
+                result = subprocess.check_call(cmd, env=env)
+            except subprocess.CalledProcessError, e:
+                retry_count = retry_count + 1
+                if retry_count > APT_NO_LOCK_RETRY_COUNT:
+                    raise
+                result = e.returncode
+                log("Couldn't acquire DPKG lock. Will retry in {} seconds."
+                    "".format(APT_NO_LOCK_RETRY_DELAY))
+                time.sleep(APT_NO_LOCK_RETRY_DELAY)
+
+    else:
+        subprocess.call(cmd, env=env)
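The practical effect is that the apt helpers become safe to call while another process briefly holds the dpkg lock, for example (using only helpers defined above):

    from charmhelpers import fetch

    fetch.apt_update(fatal=True)
    # With fatal=True, a lock failure (exit code 100) is retried up to
    # 30 times at 10-second intervals instead of raising immediately.
    fetch.apt_install(fetch.filter_installed_packages(['python-jinja2']),
                      fatal=True)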
=== modified file 'hooks/config-changed'
--- hooks/config-changed 2014-05-14 16:40:09 +0000
+++ hooks/config-changed 2014-06-05 18:17:30 +0000
@@ -1,2 +1,5 @@
-#!/bin/bash
-# config-changed occurs everytime a new configuration value is updated (juju set)
+#!/usr/bin/env python
+from charmhelpers.core import services
+import config
+manager = services.ServiceManager(config.SERVICES)
+manager.manage()
=== added file 'hooks/config.py'
--- hooks/config.py 1970-01-01 00:00:00 +0000
+++ hooks/config.py 2014-06-05 18:17:30 +0000
@@ -0,0 +1,80 @@
1from charmhelpers.core import services
2from charmhelpers.contrib.cloudfoundry import contexts
3
4HM9K_PACKAGES = ['python-jinja2', 'cfhm9000']
5
6HM_DIR = '/var/lib/cloudfoundry/cfhm9000'
7WORKSPACE_DIR = '/var/lib/cloudfoundry/hm-workspace'
8
9hm_relations = [contexts.NatsRelation(),
10 contexts.EtcdRelation(),
11 contexts.CloudControllerRelation()]
12
13SERVICES = [
14 {
15 'service': 'cf-hm9k-fetcher',
16 'required_data': hm_relations,
17 'data_ready': [
18 services.template(source='hm9000.json',
19 target=HM_DIR + '/config/hm9000.json'),
20 services.template(source='cf-hm9k-fetcher.conf',
21 target='/etc/init/cf-hm9k-fetcher.conf'),
22 ],
23 },
24 {
25 'service': 'cf-hm9k-listener',
26 'required_data': hm_relations,
27 'data_ready': [
28 services.template(source='cf-hm9k-listener.conf',
29 target='/etc/init/cf-hm9k-listener.conf'),
30 ],
31 },
32 {
33 'service': 'cf-hm9k-analyzer',
34 'required_data': hm_relations,
35 'data_ready': [
36 services.template(source='cf-hm9k-analyzer.conf',
37 target='/etc/init/cf-hm9k-analyzer.conf'),
38 ],
39 },
40 {
41 'service': 'cf-hm9k-sender',
42 'required_data': hm_relations,
43 'data_ready': [
44 services.template(source='cf-hm9k-sender.conf',
45 target='/etc/init/cf-hm9k-sender.conf'),
46 ],
47 },
48 {
49 'service': 'cf-hm9k-metrics-server',
50 'required_data': hm_relations,
51 'data_ready': [
52 services.template(source='cf-hm9k-metrics-server.conf',
53 target='/etc/init/cf-hm9k-metrics-server.conf'),
54 ],
55 },
56 {
57 'service': 'cf-hm9k-api-server',
58 'required_data': hm_relations,
59 'data_ready': [
60 services.template(source='cf-hm9k-api-server.conf',
61 target='/etc/init/cf-hm9k-api-server.conf'),
62 ],
63 },
64 {
65 'service': 'cf-hm9k-evacuator',
66 'required_data': hm_relations,
67 'data_ready': [
68 services.template(source='cf-hm9k-evacuator.conf',
69 target='/etc/init/cf-hm9k-evacuator.conf'),
70 ],
71 },
72 {
73 'service': 'cf-hm9k-shredder',
74 'required_data': hm_relations,
75 'data_ready': [
76 services.template(source='cf-hm9k-shredder.conf',
77 target='/etc/init/cf-hm9k-shredder.conf'),
78 ],
79 },
80]
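None of these definitions declare 'ports', so the default open_ports/close_ports actions are no-ops here. If, say, the metrics server's port were to be exposed, an entry could hypothetically grow a 'ports' key (7879 matching metrics_server_port in templates/hm9000.json):

    {
        'service': 'cf-hm9k-metrics-server',
        'required_data': hm_relations,
        'data_ready': [
            services.template(source='cf-hm9k-metrics-server.conf',
                              target='/etc/init/cf-hm9k-metrics-server.conf'),
        ],
        'ports': [7879],
    },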
=== added file 'hooks/etcd-relation-changed'
--- hooks/etcd-relation-changed 1970-01-01 00:00:00 +0000
+++ hooks/etcd-relation-changed 2014-06-05 18:17:30 +0000
@@ -0,0 +1,5 @@
1#!/usr/bin/env python
2from charmhelpers.core import services
3import config
4manager = services.ServiceManager(config.SERVICES)
5manager.manage()
=== modified file 'hooks/install'
--- hooks/install 2014-05-14 16:40:09 +0000
+++ hooks/install 2014-06-05 18:17:30 +0000
@@ -1,8 +1,43 @@
-#!/bin/bash
-# Here do anything needed to install the service
-# i.e. apt-get install -y foo or bzr branch http://myserver/mycode /srv/webroot
-# Make sure this hook exits cleanly and is idempotent, common problems here are
-# failing to account for a debconf question on a dependency, or trying to pull
-# from github without installing git first.
-
-apt-get install -y cf-hm9000
+#!/usr/bin/env python
+# vim: et ai ts=4 sw=4:
+
+import os
+import subprocess
+
+from charmhelpers.core import host
+from charmhelpers.core import hookenv
+from charmhelpers.contrib.cloudfoundry.common import (
+    prepare_cloudfoundry_environment
+)
+
+import config
+
+CHARM_DIR = hookenv.charm_dir()
+
+
+def install():
+    prepare_cloudfoundry_environment(hookenv.config(), config.HM9K_PACKAGES)
+    install_from_source()
+
+
+def install_from_source():
+    subprocess.check_call([
+        'git', 'clone',
+        'https://github.com/cloudfoundry/hm-workspace', config.WORKSPACE_DIR])
+    host.mkdir(config.WORKSPACE_DIR + '/bin')
+    with host.chdir(config.WORKSPACE_DIR):
+        subprocess.check_call(['git', 'submodule', 'update', '--init'])
+    with host.chdir(config.WORKSPACE_DIR + '/src/github.com/cloudfoundry/hm9000'):
+        subprocess.check_call(['go', 'install', '.'],
+                              env={'GOPATH': config.WORKSPACE_DIR})
+
+
+def install_from_charm():
+    host.copy_file(
+        os.path.join(hookenv.charm_dir(), 'files/hm9000'),
+        config.WORKSPACE_DIR + '/bin/hm9000',
+        owner='vcap', group='vcap', perms=0555)
+
+
+if __name__ == '__main__':
+    install()
=== added file 'hooks/metrics-relation-changed'
--- hooks/metrics-relation-changed 1970-01-01 00:00:00 +0000
+++ hooks/metrics-relation-changed 2014-06-05 18:17:30 +0000
@@ -0,0 +1,5 @@
1#!/usr/bin/env python
2from charmhelpers.core import services
3import config
4manager = services.ServiceManager(config.SERVICES)
5manager.manage()
=== added file 'hooks/nats-relation-changed'
--- hooks/nats-relation-changed 1970-01-01 00:00:00 +0000
+++ hooks/nats-relation-changed 2014-06-05 18:17:30 +0000
@@ -0,0 +1,5 @@
1#!/usr/bin/env python
2from charmhelpers.core import services
3import config
4manager = services.ServiceManager(config.SERVICES)
5manager.manage()
=== removed file 'hooks/relation-name-relation-broken'
--- hooks/relation-name-relation-broken 2014-05-14 16:40:09 +0000
+++ hooks/relation-name-relation-broken 1970-01-01 00:00:00 +0000
@@ -1,2 +0,0 @@
1#!/bin/sh
2# This hook runs when the full relation is removed (not just a single member)
=== removed file 'hooks/relation-name-relation-changed'
--- hooks/relation-name-relation-changed 2014-05-14 16:40:09 +0000
+++ hooks/relation-name-relation-changed 1970-01-01 00:00:00 +0000
@@ -1,9 +0,0 @@
1#!/bin/bash
2# This must be renamed to the name of the relation. The goal here is to
3# affect any change needed by relationships being formed, modified, or broken
4# This script should be idempotent.
5juju-log $JUJU_REMOTE_UNIT modified its settings
6juju-log Relation settings:
7relation-get
8juju-log Relation members:
9relation-list
=== removed file 'hooks/relation-name-relation-departed'
--- hooks/relation-name-relation-departed 2014-05-14 16:40:09 +0000
+++ hooks/relation-name-relation-departed 1970-01-01 00:00:00 +0000
@@ -1,5 +0,0 @@
1#!/bin/sh
2# This must be renamed to the name of the relation. The goal here is to
3# affect any change needed by the remote unit leaving the relationship.
4# This script should be idempotent.
5juju-log $JUJU_REMOTE_UNIT departed
=== removed file 'hooks/relation-name-relation-joined'
--- hooks/relation-name-relation-joined 2014-05-14 16:40:09 +0000
+++ hooks/relation-name-relation-joined 1970-01-01 00:00:00 +0000
@@ -1,5 +0,0 @@
1#!/bin/sh
2# This must be renamed to the name of the relation. The goal here is to
3# affect any change needed by relationships being formed
4# This script should be idempotent.
5juju-log $JUJU_REMOTE_UNIT joined
=== modified file 'hooks/start'
--- hooks/start 2014-05-14 16:40:09 +0000
+++ hooks/start 2014-06-05 18:17:30 +0000
@@ -1,4 +1,5 @@
-#!/bin/bash
-# Here put anything that is needed to start the service.
-# Note that currently this is run directly after install
-# i.e. 'service apache2 start'
+#!/usr/bin/env python
+from charmhelpers.core import services
+import config
+manager = services.ServiceManager(config.SERVICES)
+manager.manage()
=== modified file 'hooks/stop'
--- hooks/stop 2014-05-14 16:40:09 +0000
+++ hooks/stop 2014-06-05 18:17:30 +0000
@@ -1,7 +1,5 @@
-#!/bin/bash
-# This will be run when the service is being torn down, allowing you to disable
-# it in various ways..
-# For example, if your web app uses a text file to signal to the load balancer
-# that it is live... you could remove it and sleep for a bit to allow the load
-# balancer to stop sending traffic.
-# rm /srv/webroot/server-live.txt && sleep 30
+#!/usr/bin/env python
+from charmhelpers.core import services
+import config
+manager = services.ServiceManager(config.SERVICES)
+manager.manage()
=== modified file 'hooks/upgrade-charm'
--- hooks/upgrade-charm 2014-05-14 16:40:09 +0000
+++ hooks/upgrade-charm 2014-06-05 18:17:30 +0000
@@ -1,6 +1,5 @@
-#!/bin/bash
-# This hook is executed each time a charm is upgraded after the new charm
-# contents have been unpacked
-# Best practice suggests you execute the hooks/install and
-# hooks/config-changed to ensure all updates are processed
-
+#!/usr/bin/env python
+from charmhelpers.core import services
+import config
+manager = services.ServiceManager(config.SERVICES)
+manager.manage()
=== modified file 'metadata.yaml'
--- metadata.yaml 2014-05-14 16:40:09 +0000
+++ metadata.yaml 2014-06-05 18:17:30 +0000
@@ -1,6 +1,6 @@
 name: cf-hm9000
-summary: Whit Morriss <whit.morriss@canonical.com>
-maintainer: cloudfoundry-charmers
+summary: Health Monitor for Cloud Foundry
+maintainer: cf-charmers
 description: |
   Deploys the hm9000 health monitoring system for cloud foundry
 categories:
@@ -9,13 +9,10 @@
 provides:
   metrics:
     interface: http
-  api:
-    interface: nats
 requires:
   nats:
     interface: nats
-  storage:
-    interface: etcd
-# peers:
-#   peer-relation:
-#     interface: interface-name
+  etcd:
+    interface: http
+  cc:
+    interface: cf-cloud-controller
=== removed file 'notes.md'
--- notes.md 2014-05-14 16:40:09 +0000
+++ notes.md 1970-01-01 00:00:00 +0000
@@ -1,8 +0,0 @@
1# Notes
2
3## Relations
4
5 - etcd
6 - nats
7
8
=== added directory 'templates'
=== added file 'templates/cf-hm9k-analyzer.conf'
--- templates/cf-hm9k-analyzer.conf 1970-01-01 00:00:00 +0000
+++ templates/cf-hm9k-analyzer.conf 2014-06-05 18:17:30 +0000
@@ -0,0 +1,17 @@
1description "Cloud Foundry HM9000"
2author "cf-charmers <cf-charmers@lists.launchpad.net>"
3start on runlevel [2345]
4stop on runlevel [!2345]
5#expect daemon
6#apparmor load <profile-path>
7setuid vcap
8setgid vcap
9respawn
10respawn limit 10 5
11normal exit 0
12
13env GOPATH=/var/lib/cloudfoundry/hm-workspace/
14export GOPATH
15
16chdir /var/lib/cloudfoundry/hm-workspace
17exec ./bin/hm9000 analyze --poll --config=/var/lib/cloudfoundry/cfhm9000/config/hm9000.json
=== added file 'templates/cf-hm9k-api-server.conf'
--- templates/cf-hm9k-api-server.conf 1970-01-01 00:00:00 +0000
+++ templates/cf-hm9k-api-server.conf 2014-06-05 18:17:30 +0000
@@ -0,0 +1,17 @@
1description "Cloud Foundry HM9000"
2author "cf-charmers <cf-charmers@lists.launchpad.net>"
3start on runlevel [2345]
4stop on runlevel [!2345]
5#expect daemon
6#apparmor load <profile-path>
7setuid vcap
8setgid vcap
9respawn
10respawn limit 10 5
11normal exit 0
12
13env GOPATH=/var/lib/cloudfoundry/hm-workspace/
14export GOPATH
15
16chdir /var/lib/cloudfoundry/hm-workspace
17exec ./bin/hm9000 serve_api --config=/var/lib/cloudfoundry/cfhm9000/config/hm9000.json
=== added file 'templates/cf-hm9k-evacuator.conf'
--- templates/cf-hm9k-evacuator.conf 1970-01-01 00:00:00 +0000
+++ templates/cf-hm9k-evacuator.conf 2014-06-05 18:17:30 +0000
@@ -0,0 +1,17 @@
1description "Cloud Foundry HM9000"
2author "cf-charmers <cf-charmers@lists.launchpad.net>"
3start on runlevel [2345]
4stop on runlevel [!2345]
5#expect daemon
6#apparmor load <profile-path>
7setuid vcap
8setgid vcap
9respawn
10respawn limit 10 5
11normal exit 0
12
13env GOPATH=/var/lib/cloudfoundry/hm-workspace/
14export GOPATH
15
16chdir /var/lib/cloudfoundry/hm-workspace
17exec ./bin/hm9000 evacuator --config=/var/lib/cloudfoundry/cfhm9000/config/hm9000.json
=== added file 'templates/cf-hm9k-fetcher.conf'
--- templates/cf-hm9k-fetcher.conf 1970-01-01 00:00:00 +0000
+++ templates/cf-hm9k-fetcher.conf 2014-06-05 18:17:30 +0000
@@ -0,0 +1,17 @@
1description "Cloud Foundry HM9000"
2author "cf-charmers <cf-charmers@lists.launchpad.net>"
3start on runlevel [2345]
4stop on runlevel [!2345]
5#expect daemon
6#apparmor load <profile-path>
7setuid vcap
8setgid vcap
9respawn
10respawn limit 10 5
11normal exit 0
12
13env GOPATH=/var/lib/cloudfoundry/hm-workspace/
14export GOPATH
15
16chdir /var/lib/cloudfoundry/hm-workspace
17exec ./bin/hm9000 fetch_desired --poll --config=/var/lib/cloudfoundry/cfhm9000/config/hm9000.json
=== added file 'templates/cf-hm9k-listener.conf'
--- templates/cf-hm9k-listener.conf 1970-01-01 00:00:00 +0000
+++ templates/cf-hm9k-listener.conf 2014-06-05 18:17:30 +0000
@@ -0,0 +1,17 @@
1description "Cloud Foundry HM9000"
2author "cf-charmers <cf-charmers@lists.launchpad.net>"
3start on runlevel [2345]
4stop on runlevel [!2345]
5#expect daemon
6#apparmor load <profile-path>
7setuid vcap
8setgid vcap
9respawn
10respawn limit 10 5
11normal exit 0
12
13env GOPATH=/var/lib/cloudfoundry/hm-workspace/
14export GOPATH
15
16chdir /var/lib/cloudfoundry/hm-workspace
17exec ./bin/hm9000 listen --config=/var/lib/cloudfoundry/cfhm9000/config/hm9000.json
=== added file 'templates/cf-hm9k-metrics-server.conf'
--- templates/cf-hm9k-metrics-server.conf 1970-01-01 00:00:00 +0000
+++ templates/cf-hm9k-metrics-server.conf 2014-06-05 18:17:30 +0000
@@ -0,0 +1,17 @@
1description "Cloud Foundry HM9000"
2author "cf-charmers <cf-charmers@lists.launchpad.net>"
3start on runlevel [2345]
4stop on runlevel [!2345]
5#expect daemon
6#apparmor load <profile-path>
7setuid vcap
8setgid vcap
9respawn
10respawn limit 10 5
11normal exit 0
12
13env GOPATH=/var/lib/cloudfoundry/hm-workspace/
14export GOPATH
15
16chdir /var/lib/cloudfoundry/hm-workspace
17exec ./bin/hm9000 serve_metrics --config=/var/lib/cloudfoundry/cfhm9000/config/hm9000.json
=== added file 'templates/cf-hm9k-sender.conf'
--- templates/cf-hm9k-sender.conf 1970-01-01 00:00:00 +0000
+++ templates/cf-hm9k-sender.conf 2014-06-05 18:17:30 +0000
@@ -0,0 +1,17 @@
1description "Cloud Foundry HM9000"
2author "cf-charmers <cf-charmers@lists.launchpad.net>"
3start on runlevel [2345]
4stop on runlevel [!2345]
5#expect daemon
6#apparmor load <profile-path>
7setuid vcap
8setgid vcap
9respawn
10respawn limit 10 5
11normal exit 0
12
13env GOPATH=/var/lib/cloudfoundry/hm-workspace/
14export GOPATH
15
16chdir /var/lib/cloudfoundry/hm-workspace
17exec ./bin/hm9000 send --poll --config=/var/lib/cloudfoundry/cfhm9000/config/hm9000.json
=== added file 'templates/cf-hm9k-shredder.conf'
--- templates/cf-hm9k-shredder.conf 1970-01-01 00:00:00 +0000
+++ templates/cf-hm9k-shredder.conf 2014-06-05 18:17:30 +0000
@@ -0,0 +1,17 @@
1description "Cloud Foundry HM9000"
2author "cf-charmers <cf-charmers@lists.launchpad.net>"
3start on runlevel [2345]
4stop on runlevel [!2345]
5#expect daemon
6#apparmor load <profile-path>
7setuid vcap
8setgid vcap
9respawn
10respawn limit 10 5
11normal exit 0
12
13env GOPATH=/var/lib/cloudfoundry/hm-workspace/
14export GOPATH
15
16chdir /var/lib/cloudfoundry/hm-workspace
17exec ./bin/hm9000 shred --poll --config=/var/lib/cloudfoundry/cfhm9000/config/hm9000.json
=== added file 'templates/hm9000.json'
--- templates/hm9000.json 1970-01-01 00:00:00 +0000
+++ templates/hm9000.json 2014-06-05 18:17:30 +0000
@@ -0,0 +1,31 @@
1{
2 "heartbeat_period_in_seconds": 10,
3
4 "cc_auth_user": "{{cc['user']}}",
5 "cc_auth_password": "{{cc['password']}}",
6 "cc_base_url": "http://{{cc['hostname']}}:{{cc['port']}}",
7 "skip_cert_verify": true,
8 "desired_state_batch_size": 500,
9 "fetcher_network_timeout_in_seconds": 10,
10
11 "store_schema_version": 1,
12 "store_type": "etcd",
13 "store_urls": [
14 {% for unit in etcd -%}
15 "http://{{unit['hostname']}}:{{unit['port']}}"{% if not loop.last %},{% endif -%}
16 {%- endfor %}
17 ],
18
19 "metrics_server_port": 7879,
20 "metrics_server_user": "metrics_server_user",
21 "metrics_server_password": "canHazMetrics?",
22
23 "log_level": "INFO",
24
25 "nats": [{
26 "host": "{{nats['nats_address']}}",
27 "port": {{nats['nats_port']}},
28 "user": "{{nats['nats_user']}}",
29 "password": "{{nats['nats_password']}}"
30 }]
31}
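The store_urls loop emits a comma after every URL except the last, so any number of etcd units renders to valid JSON. A quick standalone sketch of the same construct (sample hostnames are illustrative):

    from jinja2 import Template

    tmpl = Template(
        '[{% for unit in etcd -%}'
        '"http://{{unit[\'hostname\']}}:{{unit[\'port\']}}"'
        '{% if not loop.last %},{% endif %}{%- endfor %}]')
    print(tmpl.render(etcd=[{'hostname': '10.0.0.1', 'port': 4001},
                            {'hostname': '10.0.0.2', 'port': 4001}]))
    # ["http://10.0.0.1:4001","http://10.0.0.2:4001"]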

Subscribers

People subscribed via source and target branches