Merge lp:~johnsca/charms/trusty/cf-hm9000/services into lp:~cf-charmers/charms/trusty/cf-hm9000/trunk
Status: Merged
Merged at revision: 2
Proposed branch: lp:~johnsca/charms/trusty/cf-hm9000/services
Merge into: lp:~cf-charmers/charms/trusty/cf-hm9000/trunk
Diff against target: 2448 lines (+1093/-882), 53 files modified:
files/README.md (+0/-3), files/default-config.json (+0/-27), files/hm9000.json.erb (+0/-30), files/hm9000_analyzer_ctl (+0/-41), files/hm9000_api_server_ctl (+0/-40), files/hm9000_evacuator_ctl (+0/-40), files/hm9000_fetcher_ctl (+0/-41), files/hm9000_listener_ctl (+0/-44), files/hm9000_metrics_server_ctl (+0/-40), files/hm9000_sender_ctl (+0/-41), files/hm9000_shredder_ctl (+0/-41), files/syslog_forwarder.conf.erb (+0/-65), hooks/cc-relation-changed (+5/-0), hooks/charmhelpers/contrib/cloudfoundry/common.py (+0/-57), hooks/charmhelpers/contrib/cloudfoundry/config_helper.py (+0/-11), hooks/charmhelpers/contrib/cloudfoundry/contexts.py (+59/-55), hooks/charmhelpers/contrib/cloudfoundry/install.py (+0/-35), hooks/charmhelpers/contrib/cloudfoundry/services.py (+0/-118), hooks/charmhelpers/contrib/cloudfoundry/upstart_helper.py (+0/-14), hooks/charmhelpers/contrib/openstack/context.py (+1/-1), hooks/charmhelpers/contrib/openstack/neutron.py (+17/-1), hooks/charmhelpers/contrib/openstack/utils.py (+6/-1), hooks/charmhelpers/contrib/storage/linux/lvm.py (+1/-1), hooks/charmhelpers/contrib/storage/linux/utils.py (+19/-5), hooks/charmhelpers/core/hookenv.py (+98/-1), hooks/charmhelpers/core/host.py (+52/-0), hooks/charmhelpers/core/services.py (+357/-0), hooks/charmhelpers/core/templating.py (+51/-0), hooks/charmhelpers/fetch/__init__.py (+96/-64), hooks/config-changed (+5/-2), hooks/config.py (+80/-0), hooks/etcd-relation-changed (+5/-0), hooks/install (+43/-8), hooks/metrics-relation-changed (+5/-0), hooks/nats-relation-changed (+5/-0), hooks/relation-name-relation-broken (+0/-2), hooks/relation-name-relation-changed (+0/-9), hooks/relation-name-relation-departed (+0/-5), hooks/relation-name-relation-joined (+0/-5), hooks/start (+5/-4), hooks/stop (+5/-7), hooks/upgrade-charm (+5/-6), metadata.yaml (+6/-9), notes.md (+0/-8), templates/cf-hm9k-analyzer.conf (+17/-0), templates/cf-hm9k-api-server.conf (+17/-0), templates/cf-hm9k-evacuator.conf (+17/-0), templates/cf-hm9k-fetcher.conf (+17/-0), templates/cf-hm9k-listener.conf (+17/-0), templates/cf-hm9k-metrics-server.conf (+17/-0), templates/cf-hm9k-sender.conf (+17/-0), templates/cf-hm9k-shredder.conf (+17/-0), templates/hm9000.json (+31/-0)
To merge this branch: bzr merge lp:~johnsca/charms/trusty/cf-hm9000/services
Related bugs: none

| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Cloud Foundry Charmers | Pending | | |

Review via email:
Commit message
Description of the change
Finished charm using services framework
Cory Johns (johnsca) wrote:
Benjamin Saller (bcsaller) wrote:
Thanks for this; it's a little hard to verify, as you indicated. It looks like
we can add the internal MCAT suite to the CI as well.
I think landing this now and completing the topology is fine; even if we
have to evolve its configuration, it's an 'optional' component.
LGTM
File hooks/config.py (right):
hooks/config.py:24: 'required_data': [contexts.
What do you think about a top-level definition of this list that we reuse,
something like:
required_data: hm_contexts
where hm_contexts is defined above the service block.
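The suggestion above amounts to hoisting the shared context list into a single module-level name that every service definition references. A minimal sketch of that refactor (the class and service names here are illustrative stand-ins, not the charm's actual `contexts.*` classes or `config.py` contents):

```python
class NatsRelation(object):
    """Illustrative stand-in for the charm's real NATS relation context."""
    name = 'nats'


class EtcdRelation(object):
    """Illustrative stand-in for the charm's real etcd relation context."""
    name = 'etcd'


# Defined once, above the service definitions, and reused by each service,
# so the list of required contexts lives in exactly one place.
HM_CONTEXTS = [NatsRelation, EtcdRelation]

SERVICES = [
    {'service': 'cf-hm9k-analyzer', 'required_data': HM_CONTEXTS},
    {'service': 'cf-hm9k-listener', 'required_data': HM_CONTEXTS},
]
```

Because every service entry points at the same list object, adding a new context requires touching only `HM_CONTEXTS`.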
File templates/
templates/
<email address hidden>"
This is an existing issue, but I'd like for all the Authors in all the
projects to point to the projects list rather than a bunch of
individuals. The charms maintainer fields as well. We should make a card
for this and switch them at once.
File templates/
templates/
["http://{{etcd[
This could easily be a list, no? I think we'll want to handle it that
way (via iteration), but for now this is fine.
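The iteration Benjamin describes (one `store_urls` entry per related etcd unit, rather than a single hard-coded URL) can be sketched as follows. The unit-data shape and the `hostname`/`port` field names are assumptions for illustration, not necessarily the real etcd relation keys:

```python
import json

# Example relation data: one settings dict per related etcd unit.
etcd_units = [
    {'hostname': '10.0.0.5', 'port': '4001'},
    {'hostname': '10.0.0.6', 'port': '4001'},
]

# Build one URL per related unit instead of assuming a single etcd node.
store_urls = ['http://{hostname}:{port}'.format(**unit) for unit in etcd_units]

# The fragment that would land in hm9000.json.
config_fragment = json.dumps({'store_urls': store_urls})
```

The same loop maps directly onto a template `for` block once the context exposes the units as a list.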
templates/
Making two cards; I've been meaning to audit all the default ports and
username/password defaults. This is something we'll need to manage
better in the future. Nothing to do in this branch.
Cory Johns (johnsca) wrote:
On 2014/06/02 18:29:44, benjamin.saller wrote:
> hooks/config.py:24: 'required_data': [contexts.
> What do you think about a top level define of this list that we reuse,
> something like:
> required_data: hm_contexts
> where hm_contexts is defined above the service block.
+1 I definitely should have done this to begin with.
> templates/
> <mailto:<email address hidden>>"
> This is an existing issue, but I'd like for all the Authors in all the
> projects to point to the projects list rather than a bunch of
> individuals. The charms maintainer fields as well. We should make a card
> for this and switch them at once.
Easy enough to do in this charm during this review, then we can fix the
existing ones in a batch.
> templates/
> ["http://{{etcd[
> This could easily be a list, no? I think we'll want to handle it that
> way (via iteration), but for now this is fine.
This raises an issue with how I implemented multiple units in the
framework, which I will address now.
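One way to handle multiple units is for the relation context to expose a list of per-unit settings, so templates can iterate, rather than flattening everything into a single dict. This is a hedged sketch of that idea only; the real `charmhelpers` `RelationContext` differs in its details:

```python
class MultiUnitRelationContext(dict):
    """Sketch: collect settings from every related unit on one interface."""
    interface = 'etcd'
    required_keys = ['hostname', 'port']

    def __init__(self, unit_data):
        # unit_data: a list of per-unit relation-settings dicts, as would
        # be gathered from relation-get across all related units.
        complete = [u for u in unit_data
                    if set(u) >= set(self.required_keys)]
        if complete:
            # Expose a *list* of units so templates can iterate over them.
            self[self.interface] = complete


ctx = MultiUnitRelationContext([
    {'hostname': '10.0.0.5', 'port': '4001'},
    {'hostname': '10.0.0.6'},  # incomplete unit: still settling, skipped
])
```

Units that have not yet set all required keys are simply omitted, which keeps partially settled relations from producing broken config.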
- Revision 4, by Cory Johns: Updated all author & maintainer fields to cf-charmers
- Revision 5, by Cory Johns: Refactor away repetition in required_data items
- Revision 6, by Cory Johns: Handle multiple etcd units
- Revision 7, by Cory Johns: Resynced charm-helpers for missed RelationContext classes
- Revision 8, by Cory Johns: Remove extra whitespace from hm9000.json config
Cory Johns (johnsca) wrote:
Please take a look.
- Revision 9, by Cory Johns: Resynced charm-helpers for bug fix
Benjamin Saller (bcsaller) wrote:
Thanks for this; since it depends on the helpers change, let's resolve
that before I approve this. Good stuff, though.
File metadata.yaml (right):
metadata.yaml:3: maintainer: cf-charmers
Ha, nice catch. We might want to use the full email address for
cf-charmers, though, or say lp:~cf-charmers, to make it a little
clearer.
File templates/
templates/
This might change a little if you like the helpers changes suggested in
the other branch. Thanks for this important fix.
- Revision 10, by Cory Johns: Updated charm-helpers and cleaned up iteration of multiple etcd units
Cory Johns (johnsca) wrote:
Please take a look.
Benjamin Saller (bcsaller) wrote:
This LGTM, +1
File templates/
templates/
Nice, thanks.
Preview Diff
```diff
=== removed file 'files/README.md'
--- files/README.md	2014-05-14 16:40:09 +0000
+++ files/README.md	1970-01-01 00:00:00 +0000
@@ -1,3 +0,0 @@
-# Contents
-
-ctl files and config from cf-release

=== removed file 'files/default-config.json'
--- files/default-config.json	2014-05-14 16:40:09 +0000
+++ files/default-config.json	1970-01-01 00:00:00 +0000
@@ -1,27 +0,0 @@
-{
-  "heartbeat_period_in_seconds": 10,
-
-  "cc_auth_user": "mcat",
-  "cc_auth_password": "testing",
-  "cc_base_url": "http://127.0.0.1:6001",
-  "skip_cert_verify": true,
-  "desired_state_batch_size": 500,
-  "fetcher_network_timeout_in_seconds": 10,
-
-  "store_schema_version": 1,
-  "store_type": "etcd",
-  "store_urls": ["http://127.0.0.1:4001"],
-
-  "metrics_server_port": 7879,
-  "metrics_server_user": "metrics_server_user",
-  "metrics_server_password": "canHazMetrics?",
-
-  "log_level": "INFO",
-
-  "nats": [{
-    "host": "127.0.0.1",
-    "port": 4222,
-    "user": "",
-    "password": ""
-  }]
-}

=== added file 'files/hm9000'
Binary files files/hm9000 1970-01-01 00:00:00 +0000 and files/hm9000 2014-06-05 18:17:30 +0000 differ

=== removed file 'files/hm9000.json.erb'
--- files/hm9000.json.erb	2014-05-14 16:40:09 +0000
+++ files/hm9000.json.erb	1970-01-01 00:00:00 +0000
@@ -1,30 +0,0 @@
-{
-  "heartbeat_period_in_seconds": 10,
-
-  "cc_auth_user": "<%= p("ccng.bulk_api_user") %>",
-  "cc_auth_password": "<%= p("ccng.bulk_api_password") %>",
-  "cc_base_url": "<%= p("cc.srv_api_uri") %>",
-  "skip_cert_verify": <%= p("ssl.skip_cert_verify") %>,
-  "desired_state_batch_size": 500,
-  "fetcher_network_timeout_in_seconds": 10,
-
-  "store_schema_version": 4,
-  "store_urls": [<%= p("etcd.machines").map{|addr| "\"http://#{addr}:4001\""}.join(",")%>],
-
-  "metrics_server_port": 0,
-  "metrics_server_user": "",
-  "metrics_server_password": "",
-
-  "log_level": "INFO",
-
-  "nats": <%=
-    p("nats.machines").collect do |addr|
-      {
-        "host" => addr,
-        "port" => p("nats.port"),
-        "user" => p("nats.user"),
-        "password" => p("nats.password")
-      }
-    end.to_json
-  %>
-}

=== removed file 'files/hm9000_analyzer_ctl'
--- files/hm9000_analyzer_ctl	2014-05-14 16:40:09 +0000
+++ files/hm9000_analyzer_ctl	1970-01-01 00:00:00 +0000
@@ -1,41 +0,0 @@
-#!/bin/bash
-
-RUN_DIR=/var/vcap/sys/run/hm9000
-LOG_DIR=/var/vcap/sys/log/hm9000
-PIDFILE=$RUN_DIR/hm9000_analyzer.pid
-
-source /var/vcap/packages/common/utils.sh
-
-case $1 in
-
-  start)
-    pid_guard $PIDFILE "hm9000_analyzer"
-
-    mkdir -p $RUN_DIR
-    mkdir -p $LOG_DIR
-
-    chown -R vcap:vcap $RUN_DIR
-    chown -R vcap:vcap $LOG_DIR
-
-    echo $$ > $PIDFILE
-
-    exec chpst -u vcap:vcap /var/vcap/packages/hm9000/hm9000 \
-      analyze \
-      --config=/var/vcap/jobs/hm9000/config/hm9000.json \
-      --poll \
-      1>>$LOG_DIR/hm9000_analyzer.stdout.log \
-      2>>$LOG_DIR/hm9000_analyzer.stderr.log
-
-    ;;
-
-  stop)
-    kill_and_wait $PIDFILE
-
-    ;;
-
-  *)
-    echo "Usage: hm9000_analyzer_ctl {start|stop}"
-
-    ;;
-
-esac

=== removed file 'files/hm9000_api_server_ctl'
--- files/hm9000_api_server_ctl	2014-05-14 16:40:09 +0000
+++ files/hm9000_api_server_ctl	1970-01-01 00:00:00 +0000
@@ -1,40 +0,0 @@
-#!/bin/bash
-
-RUN_DIR=/var/vcap/sys/run/hm9000
-LOG_DIR=/var/vcap/sys/log/hm9000
-PIDFILE=$RUN_DIR/hm9000_api_server.pid
-
-source /var/vcap/packages/common/utils.sh
-
-case $1 in
-
-  start)
-    pid_guard $PIDFILE "hm9000_api_server"
-
-    mkdir -p $RUN_DIR
-    mkdir -p $LOG_DIR
-
-    chown -R vcap:vcap $RUN_DIR
-    chown -R vcap:vcap $LOG_DIR
-
-    echo $$ > $PIDFILE
-
-    exec chpst -u vcap:vcap /var/vcap/packages/hm9000/hm9000 \
-      serve_api \
-      --config=/var/vcap/jobs/hm9000/config/hm9000.json \
-      1>>$LOG_DIR/hm9000_api_server.stdout.log \
-      2>>$LOG_DIR/hm9000_api_server.stderr.log
-
-    ;;
-
-  stop)
-    kill_and_wait $PIDFILE
-
-    ;;
-
-  *)
-    echo "Usage: hm9000_api_server_ctl {start|stop}"
-
-    ;;
-
-esac

=== removed file 'files/hm9000_evacuator_ctl'
--- files/hm9000_evacuator_ctl	2014-05-14 16:40:09 +0000
+++ files/hm9000_evacuator_ctl	1970-01-01 00:00:00 +0000
@@ -1,40 +0,0 @@
-#!/bin/bash
-
-RUN_DIR=/var/vcap/sys/run/hm9000
-LOG_DIR=/var/vcap/sys/log/hm9000
-PIDFILE=$RUN_DIR/hm9000_evacuator.pid
-
-source /var/vcap/packages/common/utils.sh
-
-case $1 in
-
-  start)
-    pid_guard $PIDFILE "hm9000_evacuator"
-
-    mkdir -p $RUN_DIR
-    mkdir -p $LOG_DIR
-
-    chown -R vcap:vcap $RUN_DIR
-    chown -R vcap:vcap $LOG_DIR
-
-    echo $$ > $PIDFILE
-
-    exec chpst -u vcap:vcap /var/vcap/packages/hm9000/hm9000 \
-      evacuator \
-      --config=/var/vcap/jobs/hm9000/config/hm9000.json \
-      1>>$LOG_DIR/hm9000_evacuator.stdout.log \
-      2>>$LOG_DIR/hm9000_evacuator.stderr.log
-
-    ;;
-
-  stop)
-    kill_and_wait $PIDFILE
-
-    ;;
-
-  *)
-    echo "Usage: hm9000_evacuator_ctl {start|stop}"
-
-    ;;
-
-esac

=== removed file 'files/hm9000_fetcher_ctl'
--- files/hm9000_fetcher_ctl	2014-05-14 16:40:09 +0000
+++ files/hm9000_fetcher_ctl	1970-01-01 00:00:00 +0000
@@ -1,41 +0,0 @@
-#!/bin/bash
-
-RUN_DIR=/var/vcap/sys/run/hm9000
-LOG_DIR=/var/vcap/sys/log/hm9000
-PIDFILE=$RUN_DIR/hm9000_fetcher.pid
-
-source /var/vcap/packages/common/utils.sh
-
-case $1 in
-
-  start)
-    pid_guard $PIDFILE "hm9000_fetcher"
-
-    mkdir -p $RUN_DIR
-    mkdir -p $LOG_DIR
-
-    chown -R vcap:vcap $RUN_DIR
-    chown -R vcap:vcap $LOG_DIR
-
-    echo $$ > $PIDFILE
-
-    exec chpst -u vcap:vcap /var/vcap/packages/hm9000/hm9000 \
-      fetch_desired \
-      --config=/var/vcap/jobs/hm9000/config/hm9000.json \
-      --poll \
-      1>>$LOG_DIR/hm9000_fetcher.stdout.log \
-      2>>$LOG_DIR/hm9000_fetcher.stderr.log
-
-    ;;
-
-  stop)
-    kill_and_wait $PIDFILE
-
-    ;;
-
-  *)
-    echo "Usage: hm9000_fetcher_ctl {start|stop}"
-
-    ;;
-
-esac

=== removed file 'files/hm9000_listener_ctl'
--- files/hm9000_listener_ctl	2014-05-14 16:40:09 +0000
+++ files/hm9000_listener_ctl	1970-01-01 00:00:00 +0000
@@ -1,44 +0,0 @@
-#!/bin/bash
-
-RUN_DIR=/var/vcap/sys/run/hm9000
-LOG_DIR=/var/vcap/sys/log/hm9000
-PIDFILE=$RUN_DIR/hm9000_listener.pid
-
-source /var/vcap/packages/common/utils.sh
-
-case $1 in
-
-  start)
-    pid_guard $PIDFILE "hm9000_listener"
-
-    mkdir -p $RUN_DIR
-    mkdir -p $LOG_DIR
-
-    chown -R vcap:vcap $RUN_DIR
-    chown -R vcap:vcap $LOG_DIR
-
-    <% if_p("syslog_aggregator") do %>
-    /var/vcap/packages/syslog_aggregator/setup_syslog_forwarder.sh /var/vcap/jobs/hm9000/config
-    <% end %>
-
-    echo $$ > $PIDFILE
-
-    exec chpst -u vcap:vcap /var/vcap/packages/hm9000/hm9000 \
-      listen \
-      --config=/var/vcap/jobs/hm9000/config/hm9000.json \
-      1>>$LOG_DIR/hm9000_listener.stdout.log \
-      2>>$LOG_DIR/hm9000_listener.stderr.log
-
-    ;;
-
-  stop)
-    kill_and_wait $PIDFILE
-
-    ;;
-
-  *)
-    echo "Usage: hm9000_listener_ctl {start|stop}"
-
-    ;;
-
-esac

=== removed file 'files/hm9000_metrics_server_ctl'
--- files/hm9000_metrics_server_ctl	2014-05-14 16:40:09 +0000
+++ files/hm9000_metrics_server_ctl	1970-01-01 00:00:00 +0000
@@ -1,40 +0,0 @@
-#!/bin/bash
-
-RUN_DIR=/var/vcap/sys/run/hm9000
-LOG_DIR=/var/vcap/sys/log/hm9000
-PIDFILE=$RUN_DIR/hm9000_metrics_server.pid
-
-source /var/vcap/packages/common/utils.sh
-
-case $1 in
-
-  start)
-    pid_guard $PIDFILE "hm9000_metrics_server"
-
-    mkdir -p $RUN_DIR
-    mkdir -p $LOG_DIR
-
-    chown -R vcap:vcap $RUN_DIR
-    chown -R vcap:vcap $LOG_DIR
-
-    echo $$ > $PIDFILE
-
-    exec chpst -u vcap:vcap /var/vcap/packages/hm9000/hm9000 \
-      serve_metrics \
-      --config=/var/vcap/jobs/hm9000/config/hm9000.json \
-      1>>$LOG_DIR/hm9000_metrics_server.stdout.log \
-      2>>$LOG_DIR/hm9000_metrics_server.stderr.log
-
-    ;;
-
-  stop)
-    kill_and_wait $PIDFILE
-
-    ;;
-
-  *)
-    echo "Usage: hm9000_metrics_server_ctl {start|stop}"
-
-    ;;
-
-esac

=== removed file 'files/hm9000_sender_ctl'
--- files/hm9000_sender_ctl	2014-05-14 16:40:09 +0000
+++ files/hm9000_sender_ctl	1970-01-01 00:00:00 +0000
@@ -1,41 +0,0 @@
-#!/bin/bash
-
-RUN_DIR=/var/vcap/sys/run/hm9000
-LOG_DIR=/var/vcap/sys/log/hm9000
-PIDFILE=$RUN_DIR/hm9000_sender.pid
-
-source /var/vcap/packages/common/utils.sh
-
-case $1 in
-
-  start)
-    pid_guard $PIDFILE "hm9000_sender"
-
-    mkdir -p $RUN_DIR
-    mkdir -p $LOG_DIR
-
-    chown -R vcap:vcap $RUN_DIR
-    chown -R vcap:vcap $LOG_DIR
-
-    echo $$ > $PIDFILE
-
-    exec chpst -u vcap:vcap /var/vcap/packages/hm9000/hm9000 \
-      send \
-      --config=/var/vcap/jobs/hm9000/config/hm9000.json \
-      --poll \
-      1>>$LOG_DIR/hm9000_sender.stdout.log \
-      2>>$LOG_DIR/hm9000_sender.stderr.log
-
-    ;;
-
-  stop)
-    kill_and_wait $PIDFILE
-
-    ;;
-
-  *)
-    echo "Usage: hm9000_sender_ctl {start|stop}"
-
-    ;;
-
-esac

=== removed file 'files/hm9000_shredder_ctl'
--- files/hm9000_shredder_ctl	2014-05-14 16:40:09 +0000
+++ files/hm9000_shredder_ctl	1970-01-01 00:00:00 +0000
@@ -1,41 +0,0 @@
-#!/bin/bash
-
-RUN_DIR=/var/vcap/sys/run/hm9000
-LOG_DIR=/var/vcap/sys/log/hm9000
-PIDFILE=$RUN_DIR/hm9000_shredder.pid
-
-source /var/vcap/packages/common/utils.sh
-
-case $1 in
-
-  start)
-    pid_guard $PIDFILE "hm9000_shredder"
-
-    mkdir -p $RUN_DIR
-    mkdir -p $LOG_DIR
-
-    chown -R vcap:vcap $RUN_DIR
-    chown -R vcap:vcap $LOG_DIR
-
-    echo $$ > $PIDFILE
-
-    exec chpst -u vcap:vcap /var/vcap/packages/hm9000/hm9000 \
-      shred \
-      --config=/var/vcap/jobs/hm9000/config/hm9000.json \
-      --poll \
-      1>>$LOG_DIR/hm9000_shredder.stdout.log \
-      2>>$LOG_DIR/hm9000_shredder.stderr.log
-
-    ;;
-
-  stop)
-    kill_and_wait $PIDFILE
-
-    ;;
-
-  *)
-    echo "Usage: hm9000_shredder_ctl {start|stop}"
-
-    ;;
-
-esac

=== removed file 'files/syslog_forwarder.conf.erb'
--- files/syslog_forwarder.conf.erb	2014-05-14 16:40:09 +0000
+++ files/syslog_forwarder.conf.erb	1970-01-01 00:00:00 +0000
@@ -1,65 +0,0 @@
-<% if_p("syslog_aggregator.address", "syslog_aggregator.port", "syslog_aggregator.transport") do |address, port, transport| %>
-$ModLoad imuxsock # local message reception (rsyslog uses a datagram socket)
-$MaxMessageSize 4k # default is 2k
-$WorkDirectory /var/vcap/sys/rsyslog/buffered # where messages should be buffered on disk
-
-# Forward vcap messages to the aggregator
-#
-$ActionResumeRetryCount -1 # Try until the server becomes available
-$ActionQueueType LinkedList # Allocate on-demand
-$ActionQueueFileName agg_backlog # Spill to disk if queue is full
-$ActionQueueMaxDiskSpace 32m # Max size for disk queue
-$ActionQueueLowWaterMark 2000 # Num messages. Assuming avg size of 512B, this is 1MiB.
-$ActionQueueHighWaterMark 8000 # Num messages. Assuming avg size of 512B, this is 4MiB. (If this is reached, messages will spill to disk until the low watermark is reached).
-$ActionQueueTimeoutEnqueue 0 # Discard messages if the queue + disk is full
-$ActionQueueSaveOnShutdown on # Save in-memory data to disk if rsyslog shuts down
-
-<% ip = spec.networks.send(properties.networks.apps).ip %>
-template(name="CfLogTemplate" type="list") {
-  constant(value="<")
-  property(name="pri")
-  constant(value=">")
-  property(name="timestamp" dateFormat="rfc3339")
-  constant(value=" <%= ip.strip %> ")
-  property(name="programname")
-  constant(value=" [job=")
-  property(name="programname")
-  constant(value=" index=<%= spec.index.to_i %>] ")
-  property(name="msg")
-}
-
-<% if transport == "relp" %>
-$ModLoad omrelp
-:programname, startswith, "vcap." :omrelp:<%= address %>:<%= port %>;CfLogTemplate
-<% elsif transport == "udp" %>
-:programname, startswith, "vcap." @<%= address %>:<%= port %>;CfLogTemplate
-<% elsif transport == "tcp" %>
-:programname, startswith, "vcap." @@<%= address %>:<%= port %>;CfLogTemplate
-<% else %>
-#only RELP, UDP, and TCP are supported
-<% end %>
-
-# Log vcap messages locally, too
-#$template VcapComponentLogFile, "/var/log/%programname:6:$%/%programname:6:$%.log"
-#$template VcapComponentLogFormat, "%timegenerated% %syslogseverity-text% -- %msg%\n"
-#:programname, startswith, "vcap." -?VcapComponentLogFile;VcapComponentLogFormat
-
-# Prevent them from reaching anywhere else
-:programname, startswith, "vcap." ~
-
-<% if properties.syslog_aggregator.all %>
-<% if transport == "relp" %>
-*.* :omrelp:<%= address %>:<%= port %>
-<% elsif transport == "udp" %>
-*.* @<%= address %>:<%= port %>
-<% elsif transport == "tcp" %>
-*.* @@<%= address %>:<%= port %>
-<% else %>
-#only RELP, UDP, and TCP are supported
-<% end %>
-<% end %>
-
-<% end.else do %>
-# Prevent them from reaching anywhere else
-:programname, startswith, "vcap." ~
-<% end %>

=== added file 'hooks/cc-relation-changed'
--- hooks/cc-relation-changed	1970-01-01 00:00:00 +0000
+++ hooks/cc-relation-changed	2014-06-05 18:17:30 +0000
@@ -0,0 +1,5 @@
+#!/usr/bin/env python
+from charmhelpers.core import services
+import config
+manager = services.ServiceManager(config.SERVICES)
+manager.manage()

=== modified file 'hooks/charmhelpers/contrib/cloudfoundry/common.py'
--- hooks/charmhelpers/contrib/cloudfoundry/common.py	2014-05-14 16:40:09 +0000
+++ hooks/charmhelpers/contrib/cloudfoundry/common.py	2014-06-05 18:17:30 +0000
@@ -1,11 +1,3 @@
-import sys
-import os
-import pwd
-import grp
-import subprocess
-
-from contextlib import contextmanager
-from charmhelpers.core.hookenv import log, ERROR, DEBUG
 from charmhelpers.core import host
 
 from charmhelpers.fetch import (
@@ -13,55 +5,6 @@
 )
 
 
-def run(command, exit_on_error=True, quiet=False):
-    '''Run a command and return the output.'''
-    if not quiet:
-        log("Running {!r}".format(command), DEBUG)
-    p = subprocess.Popen(
-        command, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
-        shell=isinstance(command, basestring))
-    p.stdin.close()
-    lines = []
-    for line in p.stdout:
-        if line:
-            if not quiet:
-                print line
-            lines.append(line)
-        elif p.poll() is not None:
-            break
-
-    p.wait()
-
-    if p.returncode == 0:
-        return '\n'.join(lines)
-
-    if p.returncode != 0 and exit_on_error:
-        log("ERROR: {}".format(p.returncode), ERROR)
-        sys.exit(p.returncode)
-
-    raise subprocess.CalledProcessError(
-        p.returncode, command, '\n'.join(lines))
-
-
-def chownr(path, owner, group):
-    uid = pwd.getpwnam(owner).pw_uid
-    gid = grp.getgrnam(group).gr_gid
-    for root, dirs, files in os.walk(path):
-        for momo in dirs:
-            os.chown(os.path.join(root, momo), uid, gid)
-        for momo in files:
-            os.chown(os.path.join(root, momo), uid, gid)
-
-
-@contextmanager
-def chdir(d):
-    cur = os.getcwd()
-    try:
-        yield os.chdir(d)
-    finally:
-        os.chdir(cur)
-
-
 def prepare_cloudfoundry_environment(config_data, packages):
     add_source(config_data['source'], config_data.get('key'))
     apt_update(fatal=True)
 

=== removed file 'hooks/charmhelpers/contrib/cloudfoundry/config_helper.py'
--- hooks/charmhelpers/contrib/cloudfoundry/config_helper.py	2014-05-14 16:40:09 +0000
+++ hooks/charmhelpers/contrib/cloudfoundry/config_helper.py	1970-01-01 00:00:00 +0000
@@ -1,11 +0,0 @@
-import jinja2
-
-TEMPLATES_DIR = 'templates'
-
-def render_template(template_name, context, template_dir=TEMPLATES_DIR):
-    templates = jinja2.Environment(
-        loader=jinja2.FileSystemLoader(template_dir))
-    template = templates.get_template(template_name)
-    return template.render(context)
-
-

=== modified file 'hooks/charmhelpers/contrib/cloudfoundry/contexts.py'
--- hooks/charmhelpers/contrib/cloudfoundry/contexts.py	2014-05-14 16:40:09 +0000
+++ hooks/charmhelpers/contrib/cloudfoundry/contexts.py	2014-06-05 18:17:30 +0000
@@ -1,70 +1,74 @@
 import os
 import yaml
 
-from charmhelpers.core import hookenv
-from charmhelpers.contrib.openstack.context import OSContextGenerator
+from charmhelpers.core.services import RelationContext
 
-
-class RelationContext(OSContextGenerator):
-    def __call__(self):
-        if not hookenv.relation_ids(self.interface):
-            return {}
-
-        ctx = {}
-        for rid in hookenv.relation_ids(self.interface):
-            for unit in hookenv.related_units(rid):
-                reldata = hookenv.relation_get(rid=rid, unit=unit)
-                required = set(self.required_keys)
-                if set(reldata.keys()).issuperset(required):
-                    ns = ctx.setdefault(self.interface, {})
-                    for k, v in reldata.items():
-                        ns[k] = v
-        return ctx
-
-        return {}
-
-
-class ConfigContext(OSContextGenerator):
-    def __call__(self):
-        return hookenv.config()
-
-
-# Stores `config_data` hash into yaml file with `file_name` as a name
-# if `file_name` already exists, then it loads data from `file_name`.
-class StoredContext(OSContextGenerator):
+
+class StoredContext(dict):
+    """
+    A data context that always returns the data that it was first created with.
+    """
     def __init__(self, file_name, config_data):
-        self.data = config_data
+        """
+        If the file exists, populate `self` with the data from the file.
+        Otherwise, populate with the given data and persist it to the file.
+        """
         if os.path.exists(file_name):
-            with open(file_name, 'r') as file_stream:
-                self.data = yaml.load(file_stream)
-            if not self.data:
-                raise OSError("%s is empty" % file_name)
+            self.update(self.read_context(file_name))
         else:
-            with open(file_name, 'w') as file_stream:
-                yaml.dump(config_data, file_stream)
-            self.data = config_data
-
-    def __call__(self):
-        return self.data
-
-
-class StaticContext(OSContextGenerator):
+            self.store_context(file_name, config_data)
+            self.update(config_data)
+
+    def store_context(self, file_name, config_data):
+        with open(file_name, 'w') as file_stream:
+            yaml.dump(config_data, file_stream)
+
+    def read_context(self, file_name):
+        with open(file_name, 'r') as file_stream:
```
698 | 52 | def __init__(self, data): | 28 | data = yaml.load(file_stream) |
699 | 53 | self.data = data | 29 | if not data: |
700 | 54 | 30 | raise OSError("%s is empty" % file_name) | |
701 | 55 | def __call__(self): | 31 | return data |
702 | 56 | return self.data | 32 | |
703 | 57 | 33 | ||
704 | 58 | 34 | class NatsRelation(RelationContext): | |
688 | 59 | class NatsContext(RelationContext): | ||
705 | 60 | interface = 'nats' | 35 | interface = 'nats' |
706 | 61 | required_keys = ['nats_port', 'nats_address', 'nats_user', 'nats_password'] | 36 | required_keys = ['nats_port', 'nats_address', 'nats_user', 'nats_password'] |
707 | 62 | 37 | ||
708 | 63 | 38 | ||
710 | 64 | class RouterContext(RelationContext): | 39 | class MysqlRelation(RelationContext): |
711 | 40 | interface = 'db' | ||
712 | 41 | required_keys = ['user', 'password', 'host', 'database'] | ||
713 | 42 | dsn_template = "mysql2://{user}:{password}@{host}:{port}/{database}" | ||
714 | 43 | |||
715 | 44 | def get_data(self): | ||
716 | 45 | RelationContext.get_data(self) | ||
717 | 46 | if self.is_ready(): | ||
718 | 47 | if 'port' not in self['db']: | ||
719 | 48 | self['db']['port'] = '3306' | ||
720 | 49 | self['db']['dsn'] = self.dsn_template.format(**self['db']) | ||
721 | 50 | |||
722 | 51 | |||
723 | 52 | class RouterRelation(RelationContext): | ||
724 | 65 | interface = 'router' | 53 | interface = 'router' |
725 | 66 | required_keys = ['domain'] | 54 | required_keys = ['domain'] |
726 | 67 | 55 | ||
728 | 68 | class LogRouterContext(RelationContext): | 56 | |
729 | 57 | class LogRouterRelation(RelationContext): | ||
730 | 69 | interface = 'logrouter' | 58 | interface = 'logrouter' |
731 | 70 | required_keys = ['shared-secret', 'logrouter-address'] | 59 | required_keys = ['shared-secret', 'logrouter-address'] |
732 | 60 | |||
733 | 61 | |||
734 | 62 | class LoggregatorRelation(RelationContext): | ||
735 | 63 | interface = 'loggregator' | ||
736 | 64 | required_keys = ['shared_secret', 'loggregator_address'] | ||
737 | 65 | |||
738 | 66 | |||
739 | 67 | class EtcdRelation(RelationContext): | ||
740 | 68 | interface = 'etcd' | ||
741 | 69 | required_keys = ['hostname', 'port'] | ||
742 | 70 | |||
743 | 71 | |||
744 | 72 | class CloudControllerRelation(RelationContext): | ||
745 | 73 | interface = 'cc' | ||
746 | 74 | required_keys = ['hostname', 'port', 'user', 'password'] | ||
747 | 71 | 75 | ||
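The new `MysqlRelation` above overrides `get_data()` to default the MySQL port to 3306 and render a DSN from the collected relation settings. A minimal self-contained sketch of that logic (a plain dict stands in for the relation data the real `RelationContext` gathers; the function name is illustrative, not part of the charm's API):

```python
# Sketch of the DSN-building step added to MysqlRelation in this diff.
DSN_TEMPLATE = "mysql2://{user}:{password}@{host}:{port}/{database}"

def build_dsn(db):
    """Default the port and render the DSN, as get_data() does."""
    db.setdefault('port', '3306')       # relation may omit the port
    db['dsn'] = DSN_TEMPLATE.format(**db)
    return db

settings = build_dsn({'user': 'cc', 'password': 's3cret',
                      'host': '10.0.0.5', 'database': 'ccdb'})
print(settings['dsn'])  # mysql2://cc:s3cret@10.0.0.5:3306/ccdb
```

Keeping the template as a class attribute (as the diff does with `dsn_template`) lets subclasses swap the scheme without touching the port-defaulting logic.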
748 | === removed file 'hooks/charmhelpers/contrib/cloudfoundry/install.py' | |||
749 | --- hooks/charmhelpers/contrib/cloudfoundry/install.py 2014-05-14 16:40:09 +0000 | |||
750 | +++ hooks/charmhelpers/contrib/cloudfoundry/install.py 1970-01-01 00:00:00 +0000 | |||
751 | @@ -1,35 +0,0 @@ | |||
752 | 1 | import os | ||
753 | 2 | import subprocess | ||
754 | 3 | |||
755 | 4 | |||
756 | 5 | def install(src, dest, fileprops=None, sudo=False): | ||
757 | 6 | """Install a file from src to dest. Dest can be a complete filename | ||
758 | 7 | or a target directory. fileprops is a dict with 'owner' (username of owner) | ||
759 | 8 | and mode (octal string) as keys, the defaults are 'ubuntu' and '400' | ||
760 | 9 | |||
761 | 10 | When owner is passed or when access requires it sudo can be set to True and | ||
762 | 11 | sudo will be used to install the file. | ||
763 | 12 | """ | ||
764 | 13 | if not fileprops: | ||
765 | 14 | fileprops = {} | ||
766 | 15 | mode = fileprops.get('mode', '400') | ||
767 | 16 | owner = fileprops.get('owner') | ||
768 | 17 | cmd = ['install'] | ||
769 | 18 | |||
770 | 19 | if not os.path.exists(src): | ||
771 | 20 | raise OSError(src) | ||
772 | 21 | |||
773 | 22 | if not os.path.exists(dest) and not os.path.exists(os.path.dirname(dest)): | ||
774 | 23 | # create all but the last component as path | ||
775 | 24 | cmd.append('-D') | ||
776 | 25 | |||
777 | 26 | if mode: | ||
778 | 27 | cmd.extend(['-m', mode]) | ||
779 | 28 | |||
780 | 29 | if owner: | ||
781 | 30 | cmd.extend(['-o', owner]) | ||
782 | 31 | |||
783 | 32 | if sudo: | ||
784 | 33 | cmd.insert(0, 'sudo') | ||
785 | 34 | cmd.extend([src, dest]) | ||
786 | 35 | subprocess.check_call(cmd) | ||
787 | 36 | 0 | ||
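The removed helper above shells out to install(1), and most of its body is argv assembly. A sketch of just that assembly, made pure so it can be exercised without touching the filesystem (the `dest_dir_exists` flag is an assumption standing in for the helper's `os.path.exists` checks):

```python
def build_install_cmd(src, dest, fileprops=None, sudo=False,
                      dest_dir_exists=True):
    """Assemble the install(1) argv as the removed helper did.

    Defaults mirror the helper: mode '400', no explicit owner.
    """
    fileprops = fileprops or {}
    mode = fileprops.get('mode', '400')
    owner = fileprops.get('owner')
    cmd = ['install']
    if not dest_dir_exists:
        cmd.append('-D')  # create all leading path components of dest
    if mode:
        cmd.extend(['-m', mode])
    if owner:
        cmd.extend(['-o', owner])
    if sudo:
        cmd.insert(0, 'sudo')
    cmd.extend([src, dest])
    return cmd

build_install_cmd('hm9000.json', '/etc/hm9000/', sudo=True)
# ['sudo', 'install', '-m', '400', 'hm9000.json', '/etc/hm9000/']
```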
788 | === removed file 'hooks/charmhelpers/contrib/cloudfoundry/services.py' | |||
789 | --- hooks/charmhelpers/contrib/cloudfoundry/services.py 2014-05-14 16:40:09 +0000 | |||
790 | +++ hooks/charmhelpers/contrib/cloudfoundry/services.py 1970-01-01 00:00:00 +0000 | |||
791 | @@ -1,118 +0,0 @@ | |||
792 | 1 | import os | ||
793 | 2 | import tempfile | ||
794 | 3 | from charmhelpers.core import host | ||
795 | 4 | |||
796 | 5 | from charmhelpers.contrib.cloudfoundry.install import install | ||
797 | 6 | from charmhelpers.core.hookenv import log | ||
798 | 7 | from jinja2 import Environment, FileSystemLoader | ||
799 | 8 | |||
800 | 9 | SERVICE_CONFIG = [] | ||
801 | 10 | TEMPLATE_LOADER = None | ||
802 | 11 | |||
803 | 12 | |||
804 | 13 | def render_template(template_name, context): | ||
805 | 14 | """Render template to a tempfile returning the name""" | ||
806 | 15 | _, fn = tempfile.mkstemp() | ||
807 | 16 | template = load_template(template_name) | ||
808 | 17 | output = template.render(context) | ||
809 | 18 | with open(fn, "w") as fp: | ||
810 | 19 | fp.write(output) | ||
811 | 20 | return fn | ||
812 | 21 | |||
813 | 22 | |||
814 | 23 | def collect_contexts(context_providers): | ||
815 | 24 | ctx = {} | ||
816 | 25 | for provider in context_providers: | ||
817 | 26 | c = provider() | ||
818 | 27 | if not c: | ||
819 | 28 | return {} | ||
820 | 29 | ctx.update(c) | ||
821 | 30 | return ctx | ||
822 | 31 | |||
823 | 32 | |||
824 | 33 | def load_template(name): | ||
825 | 34 | return TEMPLATE_LOADER.get_template(name) | ||
826 | 35 | |||
827 | 36 | |||
828 | 37 | def configure_templates(template_dir): | ||
829 | 38 | global TEMPLATE_LOADER | ||
830 | 39 | TEMPLATE_LOADER = Environment(loader=FileSystemLoader(template_dir)) | ||
831 | 40 | |||
832 | 41 | |||
833 | 42 | def register(service_configs, template_dir): | ||
834 | 43 | """Register a list of service configs. | ||
835 | 44 | |||
836 | 45 | Service Configs are dicts in the following formats: | ||
837 | 46 | |||
838 | 47 | { | ||
839 | 48 | "service": <service name>, | ||
840 | 49 | "templates": [ { | ||
841 | 50 | 'target': <render target of template>, | ||
842 | 51 | 'source': <optional name of template in passed in template_dir> | ||
843 | 52 | 'file_properties': <optional dict taking owner and octal mode> | ||
844 | 53 | 'contexts': [ context generators, see contexts.py ] | ||
845 | 54 | } | ||
846 | 55 | ] } | ||
847 | 56 | |||
848 | 57 | If 'source' is not provided for a template the template_dir will | ||
849 | 58 | be consulted for ``basename(target).j2``. | ||
850 | 59 | """ | ||
851 | 60 | global SERVICE_CONFIG | ||
852 | 61 | if template_dir: | ||
853 | 62 | configure_templates(template_dir) | ||
854 | 63 | SERVICE_CONFIG.extend(service_configs) | ||
855 | 64 | |||
856 | 65 | |||
857 | 66 | def reset(): | ||
858 | 67 | global SERVICE_CONFIG | ||
859 | 68 | SERVICE_CONFIG = [] | ||
860 | 69 | |||
861 | 70 | |||
862 | 71 | # def service_context(name): | ||
863 | 72 | # contexts = collect_contexts(template['contexts']) | ||
864 | 73 | |||
865 | 74 | def reconfigure_service(service_name, restart=True): | ||
866 | 75 | global SERVICE_CONFIG | ||
867 | 76 | service = None | ||
868 | 77 | for service in SERVICE_CONFIG: | ||
869 | 78 | if service['service'] == service_name: | ||
870 | 79 | break | ||
871 | 80 | if not service or service['service'] != service_name: | ||
872 | 81 | raise KeyError('Service not registered: %s' % service_name) | ||
873 | 82 | |||
874 | 83 | templates = service['templates'] | ||
875 | 84 | for template in templates: | ||
876 | 85 | contexts = collect_contexts(template['contexts']) | ||
877 | 86 | if contexts: | ||
878 | 87 | template_target = template['target'] | ||
879 | 88 | default_template = "%s.j2" % os.path.basename(template_target) | ||
880 | 89 | template_name = template.get('source', default_template) | ||
881 | 90 | output_file = render_template(template_name, contexts) | ||
882 | 91 | file_properties = template.get('file_properties') | ||
883 | 92 | install(output_file, template_target, file_properties) | ||
884 | 93 | os.unlink(output_file) | ||
885 | 94 | else: | ||
886 | 95 | restart = False | ||
887 | 96 | |||
888 | 97 | if restart: | ||
889 | 98 | host.service_restart(service_name) | ||
890 | 99 | |||
891 | 100 | |||
892 | 101 | def stop_services(): | ||
893 | 102 | global SERVICE_CONFIG | ||
894 | 103 | for service in SERVICE_CONFIG: | ||
895 | 104 | if host.service_running(service['service']): | ||
896 | 105 | host.service_stop(service['service']) | ||
897 | 106 | |||
898 | 107 | |||
899 | 108 | def get_service(service_name): | ||
900 | 109 | global SERVICE_CONFIG | ||
901 | 110 | for service in SERVICE_CONFIG: | ||
902 | 111 | if service_name == service['service']: | ||
903 | 112 | return service | ||
904 | 113 | return None | ||
905 | 114 | |||
906 | 115 | |||
907 | 116 | def reconfigure_services(restart=True): | ||
908 | 117 | for service in SERVICE_CONFIG: | ||
909 | 118 | reconfigure_service(service['service'], restart=restart) | ||
910 | 119 | 0 | ||
911 | === removed file 'hooks/charmhelpers/contrib/cloudfoundry/upstart_helper.py' | |||
912 | --- hooks/charmhelpers/contrib/cloudfoundry/upstart_helper.py 2014-05-14 16:40:09 +0000 | |||
913 | +++ hooks/charmhelpers/contrib/cloudfoundry/upstart_helper.py 1970-01-01 00:00:00 +0000 | |||
914 | @@ -1,14 +0,0 @@ | |||
915 | 1 | import os | ||
916 | 2 | import glob | ||
917 | 3 | from charmhelpers.core import hookenv | ||
918 | 4 | from charmhelpers.core.hookenv import charm_dir | ||
919 | 5 | from charmhelpers.contrib.cloudfoundry.install import install | ||
920 | 6 | |||
921 | 7 | |||
922 | 8 | def install_upstart_scripts(dirname=os.path.join(hookenv.charm_dir(), | ||
923 | 9 | 'files/upstart'), | ||
924 | 10 | pattern='*.conf'): | ||
925 | 11 | for script in glob.glob("%s/%s" % (dirname, pattern)): | ||
926 | 12 | filename = os.path.join(dirname, script) | ||
927 | 13 | hookenv.log('Installing upstart job:' + filename, hookenv.DEBUG) | ||
928 | 14 | install(filename, '/etc/init') | ||
929 | 15 | 0 | ||
930 | === modified file 'hooks/charmhelpers/contrib/openstack/context.py' | |||
931 | --- hooks/charmhelpers/contrib/openstack/context.py 2014-05-14 16:40:09 +0000 | |||
932 | +++ hooks/charmhelpers/contrib/openstack/context.py 2014-06-05 18:17:30 +0000 | |||
933 | @@ -570,7 +570,7 @@ | |||
934 | 570 | 570 | ||
935 | 571 | if self.plugin == 'ovs': | 571 | if self.plugin == 'ovs': |
936 | 572 | ctxt.update(self.ovs_ctxt()) | 572 | ctxt.update(self.ovs_ctxt()) |
938 | 573 | elif self.plugin == 'nvp': | 573 | elif self.plugin in ['nvp', 'nsx']: |
939 | 574 | ctxt.update(self.nvp_ctxt()) | 574 | ctxt.update(self.nvp_ctxt()) |
940 | 575 | 575 | ||
941 | 576 | alchemy_flags = config('neutron-alchemy-flags') | 576 | alchemy_flags = config('neutron-alchemy-flags') |
942 | 577 | 577 | ||
943 | === modified file 'hooks/charmhelpers/contrib/openstack/neutron.py' | |||
944 | --- hooks/charmhelpers/contrib/openstack/neutron.py 2014-05-14 16:40:09 +0000 | |||
945 | +++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-06-05 18:17:30 +0000 | |||
946 | @@ -114,14 +114,30 @@ | |||
947 | 114 | 'server_packages': ['neutron-server', | 114 | 'server_packages': ['neutron-server', |
948 | 115 | 'neutron-plugin-nicira'], | 115 | 'neutron-plugin-nicira'], |
949 | 116 | 'server_services': ['neutron-server'] | 116 | 'server_services': ['neutron-server'] |
950 | 117 | }, | ||
951 | 118 | 'nsx': { | ||
952 | 119 | 'config': '/etc/neutron/plugins/vmware/nsx.ini', | ||
953 | 120 | 'driver': 'vmware', | ||
954 | 121 | 'contexts': [ | ||
955 | 122 | context.SharedDBContext(user=config('neutron-database-user'), | ||
956 | 123 | database=config('neutron-database'), | ||
957 | 124 | relation_prefix='neutron', | ||
958 | 125 | ssl_dir=NEUTRON_CONF_DIR)], | ||
959 | 126 | 'services': [], | ||
960 | 127 | 'packages': [], | ||
961 | 128 | 'server_packages': ['neutron-server', | ||
962 | 129 | 'neutron-plugin-vmware'], | ||
963 | 130 | 'server_services': ['neutron-server'] | ||
964 | 117 | } | 131 | } |
965 | 118 | } | 132 | } |
966 | 119 | # NOTE: patch in ml2 plugin for icehouse onwards | ||
967 | 120 | if release >= 'icehouse': | 133 | if release >= 'icehouse': |
968 | 134 | # NOTE: patch in ml2 plugin for icehouse onwards | ||
969 | 121 | plugins['ovs']['config'] = '/etc/neutron/plugins/ml2/ml2_conf.ini' | 135 | plugins['ovs']['config'] = '/etc/neutron/plugins/ml2/ml2_conf.ini' |
970 | 122 | plugins['ovs']['driver'] = 'neutron.plugins.ml2.plugin.Ml2Plugin' | 136 | plugins['ovs']['driver'] = 'neutron.plugins.ml2.plugin.Ml2Plugin' |
971 | 123 | plugins['ovs']['server_packages'] = ['neutron-server', | 137 | plugins['ovs']['server_packages'] = ['neutron-server', |
972 | 124 | 'neutron-plugin-ml2'] | 138 | 'neutron-plugin-ml2'] |
973 | 139 | # NOTE: patch in vmware renames nvp->nsx for icehouse onwards | ||
974 | 140 | plugins['nvp'] = plugins['nsx'] | ||
975 | 125 | return plugins | 141 | return plugins |
976 | 126 | 142 | ||
977 | 127 | 143 | ||
978 | 128 | 144 | ||
979 | === modified file 'hooks/charmhelpers/contrib/openstack/utils.py' | |||
980 | --- hooks/charmhelpers/contrib/openstack/utils.py 2014-05-14 16:40:09 +0000 | |||
981 | +++ hooks/charmhelpers/contrib/openstack/utils.py 2014-06-05 18:17:30 +0000 | |||
982 | @@ -131,6 +131,11 @@ | |||
983 | 131 | def get_os_codename_package(package, fatal=True): | 131 | def get_os_codename_package(package, fatal=True): |
984 | 132 | '''Derive OpenStack release codename from an installed package.''' | 132 | '''Derive OpenStack release codename from an installed package.''' |
985 | 133 | apt.init() | 133 | apt.init() |
986 | 134 | |||
987 | 135 | # Tell apt to build an in-memory cache to prevent race conditions (if | ||
988 | 136 | # another process is already building the cache). | ||
989 | 137 | apt.config.set("Dir::Cache::pkgcache", "") | ||
990 | 138 | |||
991 | 134 | cache = apt.Cache() | 139 | cache = apt.Cache() |
992 | 135 | 140 | ||
993 | 136 | try: | 141 | try: |
994 | @@ -183,7 +188,7 @@ | |||
995 | 183 | if cname == codename: | 188 | if cname == codename: |
996 | 184 | return version | 189 | return version |
997 | 185 | #e = "Could not determine OpenStack version for package: %s" % pkg | 190 | #e = "Could not determine OpenStack version for package: %s" % pkg |
999 | 186 | #error_out(e) | 191 | # error_out(e) |
1000 | 187 | 192 | ||
1001 | 188 | 193 | ||
1002 | 189 | os_rel = None | 194 | os_rel = None |
1003 | 190 | 195 | ||
1004 | === modified file 'hooks/charmhelpers/contrib/storage/linux/lvm.py' | |||
1005 | --- hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-05-14 16:40:09 +0000 | |||
1006 | +++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-06-05 18:17:30 +0000 | |||
1007 | @@ -62,7 +62,7 @@ | |||
1008 | 62 | pvd = check_output(['pvdisplay', block_device]).splitlines() | 62 | pvd = check_output(['pvdisplay', block_device]).splitlines() |
1009 | 63 | for l in pvd: | 63 | for l in pvd: |
1010 | 64 | if l.strip().startswith('VG Name'): | 64 | if l.strip().startswith('VG Name'): |
1012 | 65 | vg = ' '.join(l.split()).split(' ').pop() | 65 | vg = ' '.join(l.strip().split()[2:]) |
1013 | 66 | return vg | 66 | return vg |
1014 | 67 | 67 | ||
1015 | 68 | 68 | ||
1016 | 69 | 69 | ||
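The one-line lvm.py change replaces `' '.join(l.split()).split(' ').pop()` with `' '.join(l.strip().split()[2:])`. The apparent motivation: for a physical volume that belongs to no volume group, pvdisplay prints a `VG Name` line with an empty value column, and the old `.pop()` returned the literal word `'Name'` instead of an empty string. A standalone sketch of the fixed parsing loop (the surrounding helper and its pvdisplay call are not reproduced here):

```python
def parse_vg_name(pvd_lines):
    """Extract the 'VG Name' value from pvdisplay output lines,
    using the corrected tokenising from this diff."""
    vg = None
    for l in pvd_lines:
        if l.strip().startswith('VG Name'):
            vg = ' '.join(l.strip().split()[2:])  # tokens after 'VG Name'
    return vg

# PV inside a volume group:
parse_vg_name(['  --- Physical volume ---',
               '  VG Name               vg0'])   # 'vg0'
# PV not yet in any VG (empty value column); the old .pop() code
# returned 'Name' here:
parse_vg_name(['  VG Name'])                     # ''
```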
1017 | === modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py' | |||
1018 | --- hooks/charmhelpers/contrib/storage/linux/utils.py 2014-05-14 16:40:09 +0000 | |||
1019 | +++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-06-05 18:17:30 +0000 | |||
1020 | @@ -1,4 +1,5 @@ | |||
1022 | 1 | from os import stat | 1 | import os |
1023 | 2 | import re | ||
1024 | 2 | from stat import S_ISBLK | 3 | from stat import S_ISBLK |
1025 | 3 | 4 | ||
1026 | 4 | from subprocess import ( | 5 | from subprocess import ( |
1027 | @@ -14,7 +15,9 @@ | |||
1028 | 14 | 15 | ||
1029 | 15 | :returns: boolean: True if path is a block device, False if not. | 16 | :returns: boolean: True if path is a block device, False if not. |
1030 | 16 | ''' | 17 | ''' |
1032 | 17 | return S_ISBLK(stat(path).st_mode) | 18 | if not os.path.exists(path): |
1033 | 19 | return False | ||
1034 | 20 | return S_ISBLK(os.stat(path).st_mode) | ||
1035 | 18 | 21 | ||
1036 | 19 | 22 | ||
1037 | 20 | def zap_disk(block_device): | 23 | def zap_disk(block_device): |
1038 | @@ -29,7 +32,18 @@ | |||
1039 | 29 | '--clear', block_device]) | 32 | '--clear', block_device]) |
1040 | 30 | dev_end = check_output(['blockdev', '--getsz', block_device]) | 33 | dev_end = check_output(['blockdev', '--getsz', block_device]) |
1041 | 31 | gpt_end = int(dev_end.split()[0]) - 100 | 34 | gpt_end = int(dev_end.split()[0]) - 100 |
1043 | 32 | check_call(['dd', 'if=/dev/zero', 'of=%s'%(block_device), | 35 | check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device), |
1044 | 33 | 'bs=1M', 'count=1']) | 36 | 'bs=1M', 'count=1']) |
1047 | 34 | check_call(['dd', 'if=/dev/zero', 'of=%s'%(block_device), | 37 | check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device), |
1048 | 35 | 'bs=512', 'count=100', 'seek=%s'%(gpt_end)]) | 38 | 'bs=512', 'count=100', 'seek=%s' % (gpt_end)]) |
1049 | 39 | |||
1050 | 40 | def is_device_mounted(device): | ||
1051 | 41 | '''Given a device path, return True if that device is mounted, and False | ||
1052 | 42 | if it isn't. | ||
1053 | 43 | |||
1054 | 44 | :param device: str: Full path of the device to check. | ||
1055 | 45 | :returns: boolean: True if the path represents a mounted device, False if | ||
1056 | 46 | it doesn't. | ||
1057 | 47 | ''' | ||
1058 | 48 | out = check_output(['mount']) | ||
1059 | 49 | return bool(re.search(device + r"[0-9]+\b", out)) | ||
1060 | 36 | 50 | ||
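The new `is_device_mounted` helper greps `mount` output for the device path followed by a partition number. A sketch of the same check with the `mount` output passed in as a parameter instead of captured via `check_output`, so the matching logic can be tested without a real device:

```python
import re

def is_device_mounted(device, mount_output):
    """Return True if a numbered partition of `device` appears in
    the given `mount` output, mirroring the added helper's regex."""
    return bool(re.search(device + r"[0-9]+\b", mount_output))

mounts = ("/dev/sda1 on / type ext4 (rw)\n"
          "/dev/sdb2 on /srv type xfs (rw)")
is_device_mounted('/dev/sda', mounts)  # True: /dev/sda1 is mounted
is_device_mounted('/dev/sdc', mounts)  # False: no /dev/sdcN entry
```

Note that the pattern requires a trailing digit, so a disk mounted whole (e.g. `/dev/sdb` on a device with no partition table) would not match; the helper as added only detects mounted partitions.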
1061 | === modified file 'hooks/charmhelpers/core/hookenv.py' | |||
1062 | --- hooks/charmhelpers/core/hookenv.py 2014-05-14 16:40:09 +0000 | |||
1063 | +++ hooks/charmhelpers/core/hookenv.py 2014-06-05 18:17:30 +0000 | |||
1064 | @@ -155,6 +155,100 @@ | |||
1065 | 155 | return os.path.basename(sys.argv[0]) | 155 | return os.path.basename(sys.argv[0]) |
1066 | 156 | 156 | ||
1067 | 157 | 157 | ||
1068 | 158 | class Config(dict): | ||
1069 | 159 | """A Juju charm config dictionary that can write itself to | ||
1070 | 160 | disk (as json) and track which values have changed since | ||
1071 | 161 | the previous hook invocation. | ||
1072 | 162 | |||
1073 | 163 | Do not instantiate this object directly - instead call | ||
1074 | 164 | ``hookenv.config()`` | ||
1075 | 165 | |||
1076 | 166 | Example usage:: | ||
1077 | 167 | |||
1078 | 168 | >>> # inside a hook | ||
1079 | 169 | >>> from charmhelpers.core import hookenv | ||
1080 | 170 | >>> config = hookenv.config() | ||
1081 | 171 | >>> config['foo'] | ||
1082 | 172 | 'bar' | ||
1083 | 173 | >>> config['mykey'] = 'myval' | ||
1084 | 174 | >>> config.save() | ||
1085 | 175 | |||
1086 | 176 | |||
1087 | 177 | >>> # user runs `juju set mycharm foo=baz` | ||
1088 | 178 | >>> # now we're inside subsequent config-changed hook | ||
1089 | 179 | >>> config = hookenv.config() | ||
1090 | 180 | >>> config['foo'] | ||
1091 | 181 | 'baz' | ||
1092 | 182 | >>> # test to see if this val has changed since last hook | ||
1093 | 183 | >>> config.changed('foo') | ||
1094 | 184 | True | ||
1095 | 185 | >>> # what was the previous value? | ||
1096 | 186 | >>> config.previous('foo') | ||
1097 | 187 | 'bar' | ||
1098 | 188 | >>> # keys/values that we add are preserved across hooks | ||
1099 | 189 | >>> config['mykey'] | ||
1100 | 190 | 'myval' | ||
1101 | 191 | >>> # don't forget to save at the end of hook! | ||
1102 | 192 | >>> config.save() | ||
1103 | 193 | |||
1104 | 194 | """ | ||
1105 | 195 | CONFIG_FILE_NAME = '.juju-persistent-config' | ||
1106 | 196 | |||
1107 | 197 | def __init__(self, *args, **kw): | ||
1108 | 198 | super(Config, self).__init__(*args, **kw) | ||
1109 | 199 | self._prev_dict = None | ||
1110 | 200 | self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME) | ||
1111 | 201 | if os.path.exists(self.path): | ||
1112 | 202 | self.load_previous() | ||
1113 | 203 | |||
1114 | 204 | def load_previous(self, path=None): | ||
1115 | 205 | """Load previous copy of config from disk so that current values | ||
1116 | 206 | can be compared to previous values. | ||
1117 | 207 | |||
1118 | 208 | :param path: | ||
1119 | 209 | |||
1120 | 210 | File path from which to load the previous config. If `None`, | ||
1121 | 211 | config is loaded from the default location. If `path` is | ||
1122 | 212 | specified, subsequent `save()` calls will write to the same | ||
1123 | 213 | path. | ||
1124 | 214 | |||
1125 | 215 | """ | ||
1126 | 216 | self.path = path or self.path | ||
1127 | 217 | with open(self.path) as f: | ||
1128 | 218 | self._prev_dict = json.load(f) | ||
1129 | 219 | |||
1130 | 220 | def changed(self, key): | ||
1131 | 221 | """Return true if the value for this key has changed since | ||
1132 | 222 | the last save. | ||
1133 | 223 | |||
1134 | 224 | """ | ||
1135 | 225 | if self._prev_dict is None: | ||
1136 | 226 | return True | ||
1137 | 227 | return self.previous(key) != self.get(key) | ||
1138 | 228 | |||
1139 | 229 | def previous(self, key): | ||
1140 | 230 | """Return previous value for this key, or None if there | ||
1141 | 231 | is no "previous" value. | ||
1142 | 232 | |||
1143 | 233 | """ | ||
1144 | 234 | if self._prev_dict: | ||
1145 | 235 | return self._prev_dict.get(key) | ||
1146 | 236 | return None | ||
1147 | 237 | |||
1148 | 238 | def save(self): | ||
1149 | 239 | """Save this config to disk. | ||
1150 | 240 | |||
1151 | 241 | Preserves items in _prev_dict that do not exist in self. | ||
1152 | 242 | |||
1153 | 243 | """ | ||
1154 | 244 | if self._prev_dict: | ||
1155 | 245 | for k, v in self._prev_dict.iteritems(): | ||
1156 | 246 | if k not in self: | ||
1157 | 247 | self[k] = v | ||
1158 | 248 | with open(self.path, 'w') as f: | ||
1159 | 249 | json.dump(self, f) | ||
1160 | 250 | |||
1161 | 251 | |||
1162 | 158 | @cached | 252 | @cached |
1163 | 159 | def config(scope=None): | 253 | def config(scope=None): |
1164 | 160 | """Juju charm configuration""" | 254 | """Juju charm configuration""" |
1165 | @@ -163,7 +257,10 @@ | |||
1166 | 163 | config_cmd_line.append(scope) | 257 | config_cmd_line.append(scope) |
1167 | 164 | config_cmd_line.append('--format=json') | 258 | config_cmd_line.append('--format=json') |
1168 | 165 | try: | 259 | try: |
1170 | 166 | return json.loads(subprocess.check_output(config_cmd_line)) | 260 | config_data = json.loads(subprocess.check_output(config_cmd_line)) |
1171 | 261 | if scope is not None: | ||
1172 | 262 | return config_data | ||
1173 | 263 | return Config(config_data) | ||
1174 | 167 | except ValueError: | 264 | except ValueError: |
1175 | 168 | return None | 265 | return None |
1176 | 169 | 266 | ||
1177 | 170 | 267 | ||
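The new `hookenv.Config` class above persists itself as JSON and compares current values against the snapshot written by the previous hook run. A minimal sketch of that change-tracking mechanism, with the file path made explicit (the real class derives it from `charm_dir()` and is obtained via `hookenv.config()`, never constructed directly):

```python
import json
import os

class SnapshotConfig(dict):
    """Minimal sketch of hookenv.Config's change tracking: compare
    live values against a JSON snapshot from the previous hook."""

    def __init__(self, data, path):
        super(SnapshotConfig, self).__init__(data)
        self.path = path
        self._prev_dict = None
        if os.path.exists(path):
            with open(path) as f:
                self._prev_dict = json.load(f)

    def previous(self, key):
        return self._prev_dict.get(key) if self._prev_dict else None

    def changed(self, key):
        if self._prev_dict is None:
            return True  # first run: everything counts as changed
        return self.previous(key) != self.get(key)

    def save(self):
        if self._prev_dict:
            for k, v in self._prev_dict.items():
                self.setdefault(k, v)  # keep keys added in earlier hooks
        with open(self.path, 'w') as f:
            json.dump(self, f)
```

The merge step in `save()` is what makes user-added keys (the `config['mykey'] = 'myval'` in the docstring example) survive across hook invocations even though each hook starts from Juju's own config values.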
1178 | === modified file 'hooks/charmhelpers/core/host.py' | |||
1179 | --- hooks/charmhelpers/core/host.py 2014-05-14 16:40:09 +0000 | |||
1180 | +++ hooks/charmhelpers/core/host.py 2014-06-05 18:17:30 +0000 | |||
1181 | @@ -12,6 +12,9 @@ | |||
1182 | 12 | import string | 12 | import string |
1183 | 13 | import subprocess | 13 | import subprocess |
1184 | 14 | import hashlib | 14 | import hashlib |
1185 | 15 | import shutil | ||
1186 | 16 | import apt_pkg | ||
1187 | 17 | from contextlib import contextmanager | ||
1188 | 15 | 18 | ||
1189 | 16 | from collections import OrderedDict | 19 | from collections import OrderedDict |
1190 | 17 | 20 | ||
1191 | @@ -60,6 +63,11 @@ | |||
1192 | 60 | return False | 63 | return False |
1193 | 61 | 64 | ||
1194 | 62 | 65 | ||
1195 | 66 | def service_available(service_name): | ||
1196 | 67 | """Determine whether a system service is available""" | ||
1197 | 68 | return service('status', service_name) | ||
1198 | 69 | |||
1199 | 70 | |||
1200 | 63 | def adduser(username, password=None, shell='/bin/bash', system_user=False): | 71 | def adduser(username, password=None, shell='/bin/bash', system_user=False): |
1201 | 64 | """Add a user to the system""" | 72 | """Add a user to the system""" |
1202 | 65 | try: | 73 | try: |
1203 | @@ -143,6 +151,16 @@ | |||
1204 | 143 | target.write(content) | 151 | target.write(content) |
1205 | 144 | 152 | ||
1206 | 145 | 153 | ||
1207 | 154 | def copy_file(src, dst, owner='root', group='root', perms=0444): | ||
1208 | 155 | """Create or overwrite a file with the contents of another file""" | ||
1209 | 156 | log("Writing file {} {}:{} {:o} from {}".format(dst, owner, group, perms, src)) | ||
1210 | 157 | uid = pwd.getpwnam(owner).pw_uid | ||
1211 | 158 | gid = grp.getgrnam(group).gr_gid | ||
1212 | 159 | shutil.copyfile(src, dst) | ||
1213 | 160 | os.chown(dst, uid, gid) | ||
1214 | 161 | os.chmod(dst, perms) | ||
1215 | 162 | |||
1216 | 163 | |||
1217 | 146 | def mount(device, mountpoint, options=None, persist=False): | 164 | def mount(device, mountpoint, options=None, persist=False): |
1218 | 147 | """Mount a filesystem at a particular mountpoint""" | 165 | """Mount a filesystem at a particular mountpoint""" |
1219 | 148 | cmd_args = ['mount'] | 166 | cmd_args = ['mount'] |
1220 | @@ -295,3 +313,37 @@ | |||
1221 | 295 | if 'link/ether' in words: | 313 | if 'link/ether' in words: |
1222 | 296 | hwaddr = words[words.index('link/ether') + 1] | 314 | hwaddr = words[words.index('link/ether') + 1] |
1223 | 297 | return hwaddr | 315 | return hwaddr |
1224 | 316 | |||
1225 | 317 | |||
1226 | 318 | def cmp_pkgrevno(package, revno, pkgcache=None): | ||
1227 | 319 | '''Compare supplied revno with the revno of the installed package | ||
1228 | 320 | 1 => Installed revno is greater than supplied arg | ||
1229 | 321 | 0 => Installed revno is the same as supplied arg | ||
1230 | 322 | -1 => Installed revno is less than supplied arg | ||
1231 | 323 | ''' | ||
1232 | 324 | if not pkgcache: | ||
1233 | 325 | apt_pkg.init() | ||
1234 | 326 | pkgcache = apt_pkg.Cache() | ||
1235 | 327 | pkg = pkgcache[package] | ||
1236 | 328 | return apt_pkg.version_compare(pkg.current_ver.ver_str, revno) | ||
1237 | 329 | |||
1238 | 330 | |||
1239 | 331 | @contextmanager | ||
1240 | 332 | def chdir(d): | ||
1241 | 333 | cur = os.getcwd() | ||
1242 | 334 | try: | ||
1243 | 335 | yield os.chdir(d) | ||
1244 | 336 | finally: | ||
1245 | 337 | os.chdir(cur) | ||
1246 | 338 | |||
1247 | 339 | |||
1248 | 340 | def chownr(path, owner, group): | ||
1249 | 341 | uid = pwd.getpwnam(owner).pw_uid | ||
1250 | 342 | gid = grp.getgrnam(group).gr_gid | ||
1251 | 343 | |||
1252 | 344 | for root, dirs, files in os.walk(path): | ||
1253 | 345 | for name in dirs + files: | ||
1254 | 346 | full = os.path.join(root, name) | ||
1255 | 347 | broken_symlink = os.path.lexists(full) and not os.path.exists(full) | ||
1256 | 348 | if not broken_symlink: | ||
1257 | 349 | os.chown(full, uid, gid) | ||
1258 | 298 | 350 | ||
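Among the host.py additions, the `chdir` context manager is small but easy to get wrong: the `finally` clause is what guarantees the original working directory is restored even when the with-block raises. A standalone copy of the same shape, with that behaviour demonstrated:

```python
import os
from contextlib import contextmanager

@contextmanager
def chdir(d):
    """Run the with-block in directory `d`, restoring the previous
    working directory afterwards, even if the block raises."""
    cur = os.getcwd()
    try:
        yield os.chdir(d)  # yields None; the side effect is the point
    finally:
        os.chdir(cur)

before = os.getcwd()
try:
    with chdir('/'):
        assert os.getcwd() == '/'
        raise RuntimeError('boom')
except RuntimeError:
    pass
assert os.getcwd() == before  # restored despite the exception
```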
1259 | === added file 'hooks/charmhelpers/core/services.py' | |||
1260 | --- hooks/charmhelpers/core/services.py 1970-01-01 00:00:00 +0000 | |||
1261 | +++ hooks/charmhelpers/core/services.py 2014-06-05 18:17:30 +0000 | |||
1262 | @@ -0,0 +1,357 @@ | |||
1263 | 1 | import os | ||
1264 | 2 | import sys | ||
1265 | 3 | from collections import Iterable | ||
1266 | 4 | from charmhelpers.core import templating | ||
1267 | 5 | from charmhelpers.core import host | ||
1268 | 6 | from charmhelpers.core import hookenv | ||
1269 | 7 | |||
1270 | 8 | |||
1271 | 9 | class ServiceManager(object): | ||
1272 | 10 | def __init__(self, services=None): | ||
1273 | 11 | """ | ||
1274 | 12 | Register a list of services, given their definitions. | ||
1275 | 13 | |||
1276 | 14 | Traditional charm authoring is focused on implementing hooks. That is, | ||
1277 | 15 | the charm author is thinking in terms of "What hook am I handling; what | ||
1278 | 16 | does this hook need to do?" However, in most cases, the real question | ||
1279 | 17 | should be "Do I have the information I need to configure and start this | ||
1280 | 18 | piece of software and, if so, what are the steps for doing so." The | ||
1281 | 19 | ServiceManager framework tries to bring the focus to the data and the | ||
1282 | 20 | setup tasks, in the most declarative way possible. | ||
1283 | 21 | |||
1284 | 22 | Service definitions are dicts in the following formats (all keys except | ||
1285 | 23 | 'service' are optional): | ||
1286 | 24 | |||
1287 | 25 | { | ||
1288 | 26 | "service": <service name>, | ||
1289 | 27 | "required_data": <list of required data contexts>, | ||
1290 | 28 | "data_ready": <one or more callbacks>, | ||
1291 | 29 | "data_lost": <one or more callbacks>, | ||
1292 | 30 | "start": <one or more callbacks>, | ||
1293 | 31 | "stop": <one or more callbacks>, | ||
1294 | 32 | "ports": <list of ports to manage>, | ||
1295 | 33 | } | ||
1296 | 34 | |||
1297 | 35 | The 'required_data' list should contain dicts of required data (or | ||
1298 | 36 | dependency managers that act like dicts and know how to collect the data). | ||
1299 | 37 | Only when all items in the 'required_data' list are populated are the | ||
1300 | 38 | 'data_ready' and 'start' callbacks executed. See `is_ready()` for more | ||
1301 | 39 | information. | ||
1302 | 40 | |||
1303 | 41 | The 'data_ready' value should be either a single callback, or a list of | ||
1304 | 42 | callbacks, to be called when all items in 'required_data' pass `is_ready()`. | ||
1305 | 43 | Each callback will be called with the service name as the only parameter. | ||
1306 | 44 | After all of the 'data_ready' callbacks are called, the 'start' | ||
1307 | 45 | callbacks are fired. | ||
1308 | 46 | |||
1309 | 47 | The 'data_lost' value should be either a single callback, or a list of | ||
1310 | 48 | callbacks, to be called when a 'required_data' item no longer passes | ||
1311 | 49 | `is_ready()`. Each callback will be called with the service name as the | ||
1312 | 50 | only parameter. After all of the 'data_lost' callbacks are called, | ||
1313 | 51 | the 'stop' callbacks are fired. | ||
1314 | 52 | |||
1315 | 53 | The 'start' value should be either a single callback, or a list of | ||
1316 | 54 | callbacks, to be called when starting the service, after the 'data_ready' | ||
1317 | 55 | callbacks are complete. Each callback will be called with the service | ||
1318 | 56 | name as the only parameter. This defaults to | ||
1319 | 57 | `[host.service_start, services.open_ports]`. | ||
1320 | 58 | |||
1321 | 59 | The 'stop' value should be either a single callback, or a list of | ||
1322 | 60 | callbacks, to be called when stopping the service. If the service is | ||
1323 | 61 | being stopped because it no longer has all of its 'required_data', this | ||
1324 | 62 | will be called after all of the 'data_lost' callbacks are complete. | ||
1325 | 63 | Each callback will be called with the service name as the only parameter. | ||
1326 | 64 | This defaults to `[services.close_ports, host.service_stop]`. | ||
1327 | 65 | |||
1328 | 66 | The 'ports' value should be a list of ports to manage. The default | ||
1329 | 67 | 'start' handler will open the ports after the service is started, | ||
1330 | 68 | and the default 'stop' handler will close the ports prior to stopping | ||
1331 | 69 | the service. | ||
1332 | 70 | |||
1333 | 71 | |||
1334 | 72 | Examples: | ||
1335 | 73 | |||
1336 | 74 | The following registers an Upstart service called bingod that depends on | ||
1337 | 75 | a mongodb relation and renders two config files when the relation data is | ||
1338 | 76 | ready, and a Runit service called spadesd. | ||
1339 | 77 | |||
1340 | 78 | manager = services.ServiceManager([ | ||
1341 | 79 | { | ||
1342 | 80 | 'service': 'bingod', | ||
1343 | 81 | 'ports': [80, 443], | ||
1344 | 82 | 'required_data': [MongoRelation(), config(), {'my': 'data'}], | ||
1345 | 83 | 'data_ready': [ | ||
1346 | 84 | services.template(source='bingod.conf'), | ||
1347 | 85 | services.template(source='bingod.ini', | ||
1348 | 86 | target='/etc/bingod.ini', | ||
1349 | 87 | owner='bingo', perms=0400), | ||
1350 | 88 | ], | ||
1351 | 89 | }, | ||
1352 | 90 | { | ||
1353 | 91 | 'service': 'spadesd', | ||
1354 | 92 | 'data_ready': services.template(source='spadesd_run.j2', | ||
1355 | 93 | target='/etc/sv/spadesd/run', | ||
1356 | 94 | perms=0555), | ||
1357 | 95 | 'start': runit_start, | ||
1358 | 96 | 'stop': runit_stop, | ||
1359 | 97 | }, | ||
1360 | 98 | ]) | ||
1361 | 99 | manager.manage() | ||
1362 | 100 | """ | ||
1363 | 101 | self.services = {} | ||
1364 | 102 | for service in services or []: | ||
1365 | 103 | service_name = service['service'] | ||
1366 | 104 | self.services[service_name] = service | ||
1367 | 105 | |||
1368 | 106 | def manage(self): | ||
1369 | 107 | """ | ||
1370 | 108 | Handle the current hook by doing The Right Thing with the registered services. | ||
1371 | 109 | """ | ||
1372 | 110 | hook_name = os.path.basename(sys.argv[0]) | ||
1373 | 111 | if hook_name == 'stop': | ||
1374 | 112 | self.stop_services() | ||
1375 | 113 | else: | ||
1376 | 114 | self.reconfigure_services() | ||
1377 | 115 | |||
1378 | 116 | def reconfigure_services(self, *service_names): | ||
1379 | 117 | """ | ||
1380 | 118 | Update all files for one or more registered services and, if they | ||
1381 | 119 | are ready, (re)start them; if they are no longer ready, stop them. | ||
1382 | 120 | |||
1383 | 121 | If no service names are given, reconfigures all registered services. | ||
1384 | 122 | """ | ||
1385 | 123 | for service_name in service_names or self.services.keys(): | ||
1386 | 124 | if self.is_ready(service_name): | ||
1387 | 125 | self.fire_event('data_ready', service_name) | ||
1388 | 126 | self.fire_event('start', service_name, default=[ | ||
1389 | 127 | host.service_restart, | ||
1390 | 128 | open_ports]) | ||
1391 | 129 | self.save_ready(service_name) | ||
1392 | 130 | else: | ||
1393 | 131 | if self.was_ready(service_name): | ||
1394 | 132 | self.fire_event('data_lost', service_name) | ||
1395 | 133 | self.fire_event('stop', service_name, default=[ | ||
1396 | 134 | close_ports, | ||
1397 | 135 | host.service_stop]) | ||
1398 | 136 | self.save_lost(service_name) | ||
1399 | 137 | |||
1400 | 138 | def stop_services(self, *service_names): | ||
1401 | 139 | """ | ||
1402 | 140 | Stop one or more registered services, by name. | ||
1403 | 141 | |||
1404 | 142 | If no service names are given, stops all registered services. | ||
1405 | 143 | """ | ||
1406 | 144 | for service_name in service_names or self.services.keys(): | ||
1407 | 145 | self.fire_event('stop', service_name, default=[ | ||
1408 | 146 | close_ports, | ||
1409 | 147 | host.service_stop]) | ||
1410 | 148 | |||
1411 | 149 | def get_service(self, service_name): | ||
1412 | 150 | """ | ||
1413 | 151 | Given the name of a registered service, return its service definition. | ||
1414 | 152 | """ | ||
1415 | 153 | service = self.services.get(service_name) | ||
1416 | 154 | if not service: | ||
1417 | 155 | raise KeyError('Service not registered: %s' % service_name) | ||
1418 | 156 | return service | ||
1419 | 157 | |||
1420 | 158 | def fire_event(self, event_name, service_name, default=None): | ||
1421 | 159 | """ | ||
1422 | 160 | Fire a data_ready, data_lost, start, or stop event on a given service. | ||
1423 | 161 | """ | ||
1424 | 162 | service = self.get_service(service_name) | ||
1425 | 163 | callbacks = service.get(event_name, default) | ||
1426 | 164 | if not callbacks: | ||
1427 | 165 | return | ||
1428 | 166 | if not isinstance(callbacks, Iterable): | ||
1429 | 167 | callbacks = [callbacks] | ||
1430 | 168 | for callback in callbacks: | ||
1431 | 169 | if isinstance(callback, ManagerCallback): | ||
1432 | 170 | callback(self, service_name, event_name) | ||
1433 | 171 | else: | ||
1434 | 172 | callback(service_name) | ||
1435 | 173 | |||
1436 | 174 | def is_ready(self, service_name): | ||
1437 | 175 | """ | ||
1438 | 176 | Determine if a registered service is ready, by checking its 'required_data'. | ||
1439 | 177 | |||
1440 | 178 | A 'required_data' item can be any mapping type, and is considered ready | ||
1441 | 179 | if `bool(item)` evaluates as True. | ||
1442 | 180 | """ | ||
1443 | 181 | service = self.get_service(service_name) | ||
1444 | 182 | reqs = service.get('required_data', []) | ||
1445 | 183 | return all(bool(req) for req in reqs) | ||
1446 | 184 | |||
1447 | 185 | def save_ready(self, service_name): | ||
1448 | 186 | """ | ||
1449 | 187 | Save an indicator that the given service is now data_ready. | ||
1450 | 188 | """ | ||
1451 | 189 | ready_file = '{}/.ready.{}'.format(hookenv.charm_dir(), service_name) | ||
1452 | 190 | with open(ready_file, 'a'): | ||
1453 | 191 | pass | ||
1454 | 192 | |||
1455 | 193 | def save_lost(self, service_name): | ||
1456 | 194 | """ | ||
1457 | 195 | Save an indicator that the given service is no longer data_ready. | ||
1458 | 196 | """ | ||
1459 | 197 | ready_file = '{}/.ready.{}'.format(hookenv.charm_dir(), service_name) | ||
1460 | 198 | if os.path.exists(ready_file): | ||
1461 | 199 | os.remove(ready_file) | ||
1462 | 200 | |||
1463 | 201 | def was_ready(self, service_name): | ||
1464 | 202 | """ | ||
1465 | 203 | Determine if the given service was previously data_ready. | ||
1466 | 204 | """ | ||
1467 | 205 | ready_file = '{}/.ready.{}'.format(hookenv.charm_dir(), service_name) | ||
1468 | 206 | return os.path.exists(ready_file) | ||
1469 | 207 | |||
1470 | 208 | |||
1471 | 209 | class DefaultMappingList(list): | ||
1472 | 210 | """ | ||
1473 | 211 | A list of mappings that proxies calls to `__getitem__` with non-int keys, | ||
1474 | 212 | as well as calls to `get`, to the first mapping in the list. | ||
1475 | 213 | |||
1476 | 214 | >>> dml = DefaultMappingList([ | ||
1477 | 215 | ... {'foo': 'bar'}, | ||
1478 | 216 | ... {'foo': 'qux'}, | ||
1479 | 217 | ... ]) | ||
1480 | 218 | >>> dml['foo'] == 'bar' | ||
1481 | 219 | True | ||
1482 | 220 | >>> dml[1]['foo'] == 'qux' | ||
1483 | 221 | True | ||
1484 | 222 | >>> dml.get('foo') == 'bar' | ||
1485 | 223 | True | ||
1486 | 224 | """ | ||
1487 | 225 | def __getitem__(self, key): | ||
1488 | 226 | if isinstance(key, int): | ||
1489 | 227 | return super(DefaultMappingList, self).__getitem__(key) | ||
1490 | 228 | else: | ||
1491 | 229 | return super(DefaultMappingList, self).__getitem__(0)[key] | ||
1492 | 230 | |||
1493 | 231 | def get(self, key, default=None): | ||
1494 | 232 | return self[0].get(key, default) | ||
1495 | 233 | |||
1496 | 234 | |||
1497 | 235 | class RelationContext(dict): | ||
1498 | 236 | """ | ||
1499 | 237 | Base class for a context generator that gets relation data from juju. | ||
1500 | 238 | |||
1501 | 239 | Subclasses must provide `interface`, which is the interface type of interest, | ||
1502 | 240 | and `required_keys`, which is the set of keys required for the relation to | ||
1503 | 241 | be considered complete. The first relation for the interface that is complete | ||
1504 | 242 | will be used to populate the data for the template. | ||
1505 | 243 | |||
1506 | 244 | The generated context will be namespaced under the interface type, to prevent | ||
1507 | 245 | potential naming conflicts. | ||
1508 | 246 | """ | ||
1509 | 247 | interface = None | ||
1510 | 248 | required_keys = [] | ||
1511 | 249 | |||
1512 | 250 | def __bool__(self): | ||
1513 | 251 | """ | ||
1514 | 252 | Updates the data and returns True if all of the required_keys are available. | ||
1515 | 253 | """ | ||
1516 | 254 | self.get_data() | ||
1517 | 255 | return self.is_ready() | ||
1518 | 256 | |||
1519 | 257 | __nonzero__ = __bool__ | ||
1520 | 258 | |||
1521 | 259 | def __repr__(self): | ||
1522 | 260 | return super(RelationContext, self).__repr__() | ||
1523 | 261 | |||
1524 | 262 | def is_ready(self): | ||
1525 | 263 | """ | ||
1526 | 264 | Returns True if all of the `required_keys` are available from at least one unit. | ||
1527 | 265 | """ | ||
1528 | 266 | return len(self.get(self.interface, [])) > 0 | ||
1529 | 267 | |||
1530 | 268 | def _is_ready(self, unit_data): | ||
1531 | 269 | """ | ||
1532 | 270 | Helper method that tests a set of relation data and returns True if | ||
1533 | 271 | all of the `required_keys` are present. | ||
1534 | 272 | """ | ||
1535 | 273 | return set(unit_data.keys()).issuperset(set(self.required_keys)) | ||
1536 | 274 | |||
1537 | 275 | def get_data(self): | ||
1538 | 276 | """ | ||
1539 | 277 | Retrieve the relation data and store it under `self[self.interface]`. | ||
1540 | 278 | |||
1541 | 279 | Only complete sets of data are stored. | ||
1542 | 280 | |||
1543 | 281 | The data can be treated as either a list or a mapping. Treating it as | ||
1544 | 282 | a list will give the data from all the complete units. Treating it as | ||
1545 | 283 | a mapping will give the data for the first complete unit, lexicographically | ||
1546 | 284 | ordered by relation ID then unit ID. | ||
1547 | 285 | |||
1548 | 286 | For example, if there are relation IDs 'db:1' and 'db:2', where the | ||
1549 | 287 | service on relation 'db:1' has units 'wordpress/0' and 'wordpress/1' | ||
1550 | 288 | and the service on relation 'db:2' has unit 'mediawiki/0', then | ||
1551 | 289 | accessing `self[self.interface]['foo']` will return the 'foo' value | ||
1552 | 290 | from unit 'wordpress/0'. | ||
1553 | 291 | """ | ||
1554 | 292 | if not hookenv.relation_ids(self.interface): | ||
1555 | 293 | return | ||
1556 | 294 | |||
1557 | 295 | ns = self.setdefault(self.interface, DefaultMappingList()) | ||
1558 | 296 | for rid in sorted(hookenv.relation_ids(self.interface)): | ||
1559 | 297 | for unit in sorted(hookenv.related_units(rid)): | ||
1560 | 298 | reldata = hookenv.relation_get(rid=rid, unit=unit) | ||
1561 | 299 | if self._is_ready(reldata): | ||
1562 | 300 | ns.append(reldata) | ||
1563 | 301 | |||
1564 | 302 | |||
1565 | 303 | class ManagerCallback(object): | ||
1566 | 304 | """ | ||
1567 | 305 | Special case of a callback that takes the `ServiceManager` instance | ||
1568 | 306 | in addition to the service name. | ||
1569 | 307 | |||
1570 | 308 | Subclasses should implement `__call__` which should accept three parameters: | ||
1571 | 309 | |||
1572 | 310 | * `manager` The `ServiceManager` instance | ||
1573 | 311 | * `service_name` The name of the service it's being triggered for | ||
1574 | 312 | * `event_name` The name of the event that this callback is handling | ||
1575 | 313 | """ | ||
1576 | 314 | def __call__(self, manager, service_name, event_name): | ||
1577 | 315 | raise NotImplementedError() | ||
1578 | 316 | |||
1579 | 317 | |||
1580 | 318 | class TemplateCallback(ManagerCallback): | ||
1581 | 319 | """ | ||
1582 | 320 | Callback class that will render a template, for use as a ready action. | ||
1583 | 321 | |||
1584 | 322 | The rendered output will be written to the given absolute `target` path. | ||
1585 | 323 | """ | ||
1586 | 324 | def __init__(self, source, target, owner='root', group='root', perms=0444): | ||
1587 | 325 | self.source = source | ||
1588 | 326 | self.target = target | ||
1589 | 327 | self.owner = owner | ||
1590 | 328 | self.group = group | ||
1591 | 329 | self.perms = perms | ||
1592 | 330 | |||
1593 | 331 | def __call__(self, manager, service_name, event_name): | ||
1594 | 332 | service = manager.get_service(service_name) | ||
1595 | 333 | context = {} | ||
1596 | 334 | for ctx in service.get('required_data', []): | ||
1597 | 335 | context.update(ctx) | ||
1598 | 336 | templating.render(self.source, self.target, context, | ||
1599 | 337 | self.owner, self.group, self.perms) | ||
1600 | 338 | |||
1601 | 339 | |||
1602 | 340 | class PortManagerCallback(ManagerCallback): | ||
1603 | 341 | """ | ||
1604 | 342 | Callback class that will open or close ports, for use as either | ||
1605 | 343 | a start or stop action. | ||
1606 | 344 | """ | ||
1607 | 345 | def __call__(self, manager, service_name, event_name): | ||
1608 | 346 | service = manager.get_service(service_name) | ||
1609 | 347 | for port in service.get('ports', []): | ||
1610 | 348 | if event_name == 'start': | ||
1611 | 349 | hookenv.open_port(port) | ||
1612 | 350 | elif event_name == 'stop': | ||
1613 | 351 | hookenv.close_port(port) | ||
1614 | 352 | |||
1615 | 353 | |||
1616 | 354 | # Convenience aliases | ||
1617 | 355 | render_template = template = TemplateCallback | ||
1618 | 356 | open_ports = PortManagerCallback() | ||
1619 | 357 | close_ports = PortManagerCallback() | ||
1620 | 0 | 358 | ||
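To make the readiness gating in `reconfigure_services` concrete, here is a self-contained sketch of the pattern it implements: every `required_data` item must be truthy before the 'data_ready' and 'start' callbacks fire. The service name and data values below are illustrative, not taken from this charm.

```python
# Minimal sketch of the required_data readiness gate used by
# ServiceManager.reconfigure_services (illustrative names, not the real API).

def is_ready(service):
    # A required_data item counts as ready when bool(item) is True,
    # e.g. a non-empty dict of relation data.
    return all(bool(req) for req in service.get('required_data', []))

events = []
service = {
    'service': 'bingod',
    'required_data': [{}],    # empty mapping: relation not yet complete
    'data_ready': lambda name: events.append(('data_ready', name)),
    'start': lambda name: events.append(('start', name)),
}

if is_ready(service):         # False: the empty dict blocks startup
    service['data_ready'](service['service'])
    service['start'](service['service'])

service['required_data'] = [{'host': '10.0.0.1', 'port': 4222}]
if is_ready(service):         # True: all items are now populated
    service['data_ready'](service['service'])
    service['start'](service['service'])

print(events)                 # [('data_ready', 'bingod'), ('start', 'bingod')]
```

In the real framework, `RelationContext` instances play the role of these dicts: their `__bool__` fetches relation data and reports completeness.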
1621 | === added file 'hooks/charmhelpers/core/templating.py' | |||
1622 | --- hooks/charmhelpers/core/templating.py 1970-01-01 00:00:00 +0000 | |||
1623 | +++ hooks/charmhelpers/core/templating.py 2014-06-05 18:17:30 +0000 | |||
1624 | @@ -0,0 +1,51 @@ | |||
1625 | 1 | import os | ||
1626 | 2 | |||
1627 | 3 | from charmhelpers.core import host | ||
1628 | 4 | from charmhelpers.core import hookenv | ||
1629 | 5 | |||
1630 | 6 | |||
1631 | 7 | def render(source, target, context, owner='root', group='root', perms=0444, templates_dir=None): | ||
1632 | 8 | """ | ||
1633 | 9 | Render a template. | ||
1634 | 10 | |||
1635 | 11 | The `source` path, if not absolute, is relative to the `templates_dir`. | ||
1636 | 12 | |||
1637 | 13 | The `target` path should be absolute. | ||
1638 | 14 | |||
1639 | 15 | The context should be a dict containing the values to be replaced in the | ||
1640 | 16 | template. | ||
1641 | 17 | |||
1642 | 18 | The `owner`, `group`, and `perms` options will be passed to `write_file`. | ||
1643 | 19 | |||
1644 | 20 | If omitted, `templates_dir` defaults to the `templates` folder in the charm. | ||
1645 | 21 | |||
1646 | 22 | Note: Using this requires python-jinja2; if it is not installed, calling | ||
1647 | 23 | this will attempt to use charmhelpers.fetch.apt_install to install it. | ||
1648 | 24 | """ | ||
1649 | 25 | try: | ||
1650 | 26 | from jinja2 import FileSystemLoader, Environment, exceptions | ||
1651 | 27 | except ImportError: | ||
1652 | 28 | try: | ||
1653 | 29 | from charmhelpers.fetch import apt_install | ||
1654 | 30 | except ImportError: | ||
1655 | 31 | hookenv.log('Could not import jinja2, and could not import ' | ||
1656 | 32 | 'charmhelpers.fetch to install it', | ||
1657 | 33 | level=hookenv.ERROR) | ||
1658 | 34 | raise | ||
1659 | 35 | apt_install('python-jinja2', fatal=True) | ||
1660 | 36 | from jinja2 import FileSystemLoader, Environment, exceptions | ||
1661 | 37 | |||
1662 | 38 | if templates_dir is None: | ||
1663 | 39 | templates_dir = os.path.join(hookenv.charm_dir(), 'templates') | ||
1664 | 40 | env = Environment(loader=FileSystemLoader(templates_dir)) | ||
1665 | 41 | try: | ||
1666 | 42 | template = env.get_template(source) | ||
1668 | 44 | except exceptions.TemplateNotFound as e: | ||
1669 | 45 | hookenv.log('Could not load template %s from %s.' % | ||
1670 | 46 | (source, templates_dir), | ||
1671 | 47 | level=hookenv.ERROR) | ||
1672 | 48 | raise e | ||
1673 | 49 | content = template.render(context) | ||
1674 | 50 | host.mkdir(os.path.dirname(target)) | ||
1675 | 51 | host.write_file(target, content, owner, group, perms) | ||
1676 | 0 | 52 | ||
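`TemplateCallback.__call__` above folds every `required_data` mapping into a single context dict before calling `templating.render`. A quick stdlib sketch (using `string.Template` purely as a stand-in for Jinja2) shows the consequence of that fold: later items overwrite earlier keys. The data values are invented for illustration.

```python
from string import Template

# Stand-in for the context merge inside TemplateCallback: all required_data
# mappings are folded into one dict, later items winning on key collisions.
required_data = [
    {'address': '10.0.0.1', 'log_level': 'info'},
    {'log_level': 'debug'},   # overrides the earlier 'log_level'
]
context = {}
for ctx in required_data:
    context.update(ctx)

source = Template('address=$address level=$level')
content = source.substitute(address=context['address'],
                            level=context['log_level'])
print(content)                # address=10.0.0.1 level=debug
```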
1677 | === modified file 'hooks/charmhelpers/fetch/__init__.py' | |||
1678 | --- hooks/charmhelpers/fetch/__init__.py 2014-05-14 16:40:09 +0000 | |||
1679 | +++ hooks/charmhelpers/fetch/__init__.py 2014-06-05 18:17:30 +0000 | |||
1680 | @@ -1,4 +1,5 @@ | |||
1681 | 1 | import importlib | 1 | import importlib |
1682 | 2 | import time | ||
1683 | 2 | from yaml import safe_load | 3 | from yaml import safe_load |
1684 | 3 | from charmhelpers.core.host import ( | 4 | from charmhelpers.core.host import ( |
1685 | 4 | lsb_release | 5 | lsb_release |
1686 | @@ -15,6 +16,7 @@ | |||
1687 | 15 | import apt_pkg | 16 | import apt_pkg |
1688 | 16 | import os | 17 | import os |
1689 | 17 | 18 | ||
1690 | 19 | |||
1691 | 18 | CLOUD_ARCHIVE = """# Ubuntu Cloud Archive | 20 | CLOUD_ARCHIVE = """# Ubuntu Cloud Archive |
1692 | 19 | deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main | 21 | deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main |
1693 | 20 | """ | 22 | """ |
1694 | @@ -56,10 +58,62 @@ | |||
1695 | 56 | 'precise-proposed/icehouse': 'precise-proposed/icehouse', | 58 | 'precise-proposed/icehouse': 'precise-proposed/icehouse', |
1696 | 57 | } | 59 | } |
1697 | 58 | 60 | ||
1698 | 61 | # The order of this list is very important. Handlers should be listed in | ||
1699 | 62 | # order from least- to most-specific URL matching. | ||
1700 | 63 | FETCH_HANDLERS = ( | ||
1701 | 64 | 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', | ||
1702 | 65 | 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler', | ||
1703 | 66 | ) | ||
1704 | 67 | |||
1705 | 68 | APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT. | ||
1706 | 69 | APT_NO_LOCK_RETRY_DELAY = 10 # Wait 10 seconds between apt lock checks. | ||
1707 | 70 | APT_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times. | ||
1708 | 71 | |||
1709 | 72 | |||
1710 | 73 | class SourceConfigError(Exception): | ||
1711 | 74 | pass | ||
1712 | 75 | |||
1713 | 76 | |||
1714 | 77 | class UnhandledSource(Exception): | ||
1715 | 78 | pass | ||
1716 | 79 | |||
1717 | 80 | |||
1718 | 81 | class AptLockError(Exception): | ||
1719 | 82 | pass | ||
1720 | 83 | |||
1721 | 84 | |||
1722 | 85 | class BaseFetchHandler(object): | ||
1723 | 86 | |||
1724 | 87 | """Base class for FetchHandler implementations in fetch plugins""" | ||
1725 | 88 | |||
1726 | 89 | def can_handle(self, source): | ||
1727 | 90 | """Returns True if the source can be handled. Otherwise returns | ||
1728 | 91 | a string explaining why it cannot""" | ||
1729 | 92 | return "Wrong source type" | ||
1730 | 93 | |||
1731 | 94 | def install(self, source): | ||
1732 | 95 | """Try to download and unpack the source. Return the path to the | ||
1733 | 96 | unpacked files or raise UnhandledSource.""" | ||
1734 | 97 | raise UnhandledSource("Wrong source type {}".format(source)) | ||
1735 | 98 | |||
1736 | 99 | def parse_url(self, url): | ||
1737 | 100 | return urlparse(url) | ||
1738 | 101 | |||
1739 | 102 | def base_url(self, url): | ||
1740 | 103 | """Return url without querystring or fragment""" | ||
1741 | 104 | parts = list(self.parse_url(url)) | ||
1742 | 105 | parts[4:] = ['' for i in parts[4:]] | ||
1743 | 106 | return urlunparse(parts) | ||
1744 | 107 | |||
1745 | 59 | 108 | ||
1746 | 60 | def filter_installed_packages(packages): | 109 | def filter_installed_packages(packages): |
1747 | 61 | """Returns a list of packages that require installation""" | 110 | """Returns a list of packages that require installation""" |
1748 | 62 | apt_pkg.init() | 111 | apt_pkg.init() |
1749 | 112 | |||
1750 | 113 | # Tell apt to build an in-memory cache to prevent race conditions (if | ||
1751 | 114 | # another process is already building the cache). | ||
1752 | 115 | apt_pkg.config.set("Dir::Cache::pkgcache", "") | ||
1753 | 116 | |||
1754 | 63 | cache = apt_pkg.Cache() | 117 | cache = apt_pkg.Cache() |
1755 | 64 | _pkgs = [] | 118 | _pkgs = [] |
1756 | 65 | for package in packages: | 119 | for package in packages: |
1757 | @@ -87,14 +141,7 @@ | |||
1758 | 87 | cmd.extend(packages) | 141 | cmd.extend(packages) |
1759 | 88 | log("Installing {} with options: {}".format(packages, | 142 | log("Installing {} with options: {}".format(packages, |
1760 | 89 | options)) | 143 | options)) |
1769 | 90 | env = os.environ.copy() | 144 | _run_apt_command(cmd, fatal) |
1762 | 91 | if 'DEBIAN_FRONTEND' not in env: | ||
1763 | 92 | env['DEBIAN_FRONTEND'] = 'noninteractive' | ||
1764 | 93 | |||
1765 | 94 | if fatal: | ||
1766 | 95 | subprocess.check_call(cmd, env=env) | ||
1767 | 96 | else: | ||
1768 | 97 | subprocess.call(cmd, env=env) | ||
1770 | 98 | 145 | ||
1771 | 99 | 146 | ||
1772 | 100 | def apt_upgrade(options=None, fatal=False, dist=False): | 147 | def apt_upgrade(options=None, fatal=False, dist=False): |
1773 | @@ -109,24 +156,13 @@ | |||
1774 | 109 | else: | 156 | else: |
1775 | 110 | cmd.append('upgrade') | 157 | cmd.append('upgrade') |
1776 | 111 | log("Upgrading with options: {}".format(options)) | 158 | log("Upgrading with options: {}".format(options)) |
1786 | 112 | 159 | _run_apt_command(cmd, fatal) | |
1778 | 113 | env = os.environ.copy() | ||
1779 | 114 | if 'DEBIAN_FRONTEND' not in env: | ||
1780 | 115 | env['DEBIAN_FRONTEND'] = 'noninteractive' | ||
1781 | 116 | |||
1782 | 117 | if fatal: | ||
1783 | 118 | subprocess.check_call(cmd, env=env) | ||
1784 | 119 | else: | ||
1785 | 120 | subprocess.call(cmd, env=env) | ||
1787 | 121 | 160 | ||
1788 | 122 | 161 | ||
1789 | 123 | def apt_update(fatal=False): | 162 | def apt_update(fatal=False): |
1790 | 124 | """Update local apt cache""" | 163 | """Update local apt cache""" |
1791 | 125 | cmd = ['apt-get', 'update'] | 164 | cmd = ['apt-get', 'update'] |
1796 | 126 | if fatal: | 165 | _run_apt_command(cmd, fatal) |
1793 | 127 | subprocess.check_call(cmd) | ||
1794 | 128 | else: | ||
1795 | 129 | subprocess.call(cmd) | ||
1797 | 130 | 166 | ||
1798 | 131 | 167 | ||
1799 | 132 | def apt_purge(packages, fatal=False): | 168 | def apt_purge(packages, fatal=False): |
1800 | @@ -137,10 +173,7 @@ | |||
1801 | 137 | else: | 173 | else: |
1802 | 138 | cmd.extend(packages) | 174 | cmd.extend(packages) |
1803 | 139 | log("Purging {}".format(packages)) | 175 | log("Purging {}".format(packages)) |
1808 | 140 | if fatal: | 176 | _run_apt_command(cmd, fatal) |
1805 | 141 | subprocess.check_call(cmd) | ||
1806 | 142 | else: | ||
1807 | 143 | subprocess.call(cmd) | ||
1809 | 144 | 177 | ||
1810 | 145 | 178 | ||
1811 | 146 | def apt_hold(packages, fatal=False): | 179 | def apt_hold(packages, fatal=False): |
1812 | @@ -151,6 +184,7 @@ | |||
1813 | 151 | else: | 184 | else: |
1814 | 152 | cmd.extend(packages) | 185 | cmd.extend(packages) |
1815 | 153 | log("Holding {}".format(packages)) | 186 | log("Holding {}".format(packages)) |
1816 | 187 | |||
1817 | 154 | if fatal: | 188 | if fatal: |
1818 | 155 | subprocess.check_call(cmd) | 189 | subprocess.check_call(cmd) |
1819 | 156 | else: | 190 | else: |
1820 | @@ -188,10 +222,6 @@ | |||
1821 | 188 | key]) | 222 | key]) |
1822 | 189 | 223 | ||
1823 | 190 | 224 | ||
1824 | 191 | class SourceConfigError(Exception): | ||
1825 | 192 | pass | ||
1826 | 193 | |||
1827 | 194 | |||
1828 | 195 | def configure_sources(update=False, | 225 | def configure_sources(update=False, |
1829 | 196 | sources_var='install_sources', | 226 | sources_var='install_sources', |
1830 | 197 | keys_var='install_keys'): | 227 | keys_var='install_keys'): |
1831 | @@ -224,17 +254,6 @@ | |||
1832 | 224 | if update: | 254 | if update: |
1833 | 225 | apt_update(fatal=True) | 255 | apt_update(fatal=True) |
1834 | 226 | 256 | ||
1835 | 227 | # The order of this list is very important. Handlers should be listed in from | ||
1836 | 228 | # least- to most-specific URL matching. | ||
1837 | 229 | FETCH_HANDLERS = ( | ||
1838 | 230 | 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', | ||
1839 | 231 | 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler', | ||
1840 | 232 | ) | ||
1841 | 233 | |||
1842 | 234 | |||
1843 | 235 | class UnhandledSource(Exception): | ||
1844 | 236 | pass | ||
1845 | 237 | |||
1846 | 238 | 257 | ||
1847 | 239 | def install_remote(source): | 258 | def install_remote(source): |
1848 | 240 | """ | 259 | """ |
1849 | @@ -265,30 +284,6 @@ | |||
1850 | 265 | return install_remote(source) | 284 | return install_remote(source) |
1851 | 266 | 285 | ||
1852 | 267 | 286 | ||
1853 | 268 | class BaseFetchHandler(object): | ||
1854 | 269 | |||
1855 | 270 | """Base class for FetchHandler implementations in fetch plugins""" | ||
1856 | 271 | |||
1857 | 272 | def can_handle(self, source): | ||
1858 | 273 | """Returns True if the source can be handled. Otherwise returns | ||
1859 | 274 | a string explaining why it cannot""" | ||
1860 | 275 | return "Wrong source type" | ||
1861 | 276 | |||
1862 | 277 | def install(self, source): | ||
1863 | 278 | """Try to download and unpack the source. Return the path to the | ||
1864 | 279 | unpacked files or raise UnhandledSource.""" | ||
1865 | 280 | raise UnhandledSource("Wrong source type {}".format(source)) | ||
1866 | 281 | |||
1867 | 282 | def parse_url(self, url): | ||
1868 | 283 | return urlparse(url) | ||
1869 | 284 | |||
1870 | 285 | def base_url(self, url): | ||
1871 | 286 | """Return url without querystring or fragment""" | ||
1872 | 287 | parts = list(self.parse_url(url)) | ||
1873 | 288 | parts[4:] = ['' for i in parts[4:]] | ||
1874 | 289 | return urlunparse(parts) | ||
1875 | 290 | |||
1876 | 291 | |||
1877 | 292 | def plugins(fetch_handlers=None): | 287 | def plugins(fetch_handlers=None): |
1878 | 293 | if not fetch_handlers: | 288 | if not fetch_handlers: |
1879 | 294 | fetch_handlers = FETCH_HANDLERS | 289 | fetch_handlers = FETCH_HANDLERS |
1880 | @@ -306,3 +301,40 @@ | |||
1881 | 306 | log("FetchHandler {} not found, skipping plugin".format( | 301 | log("FetchHandler {} not found, skipping plugin".format( |
1882 | 307 | handler_name)) | 302 | handler_name)) |
1883 | 308 | return plugin_list | 303 | return plugin_list |
1884 | 304 | |||
1885 | 305 | |||
1886 | 306 | def _run_apt_command(cmd, fatal=False): | ||
1887 | 307 | """ | ||
1888 | 308 | Run an apt command, checking the exit status and retrying on a held | ||
1889 | 309 | dpkg lock if the fatal flag is set to True. | ||
1890 | 310 | |||
1891 | 311 | :param cmd: str: The apt command to run. | ||
1892 | 312 | :param fatal: bool: Whether a failure should raise, retrying first if | ||
1893 | 313 | the apt lock could not be acquired. | ||
1894 | 314 | """ | ||
1895 | 315 | env = os.environ.copy() | ||
1896 | 316 | |||
1897 | 317 | if 'DEBIAN_FRONTEND' not in env: | ||
1898 | 318 | env['DEBIAN_FRONTEND'] = 'noninteractive' | ||
1899 | 319 | |||
1900 | 320 | if fatal: | ||
1901 | 321 | retry_count = 0 | ||
1902 | 322 | result = None | ||
1903 | 323 | |||
1904 | 324 | # If the command is considered "fatal", we need to retry if the apt | ||
1905 | 325 | # lock was not acquired. | ||
1906 | 326 | |||
1907 | 327 | while result is None or result == APT_NO_LOCK: | ||
1908 | 328 | try: | ||
1909 | 329 | result = subprocess.check_call(cmd, env=env) | ||
1910 | 330 | except subprocess.CalledProcessError as e: | ||
1911 | 331 | retry_count = retry_count + 1 | ||
1912 | 332 | if retry_count > APT_NO_LOCK_RETRY_COUNT: | ||
1913 | 333 | raise | ||
1914 | 334 | result = e.returncode | ||
1915 | 335 | log("Couldn't acquire DPKG lock. Will retry in {} seconds." | ||
1916 | 336 | "".format(APT_NO_LOCK_RETRY_DELAY)) | ||
1917 | 337 | time.sleep(APT_NO_LOCK_RETRY_DELAY) | ||
1918 | 338 | |||
1919 | 339 | else: | ||
1920 | 340 | subprocess.call(cmd, env=env) | ||
1921 | 309 | 341 | ||
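The retry loop `_run_apt_command` introduces can be sketched in a self-contained form, with a fake command standing in for `apt-get` so the behavior is easy to see (`run_with_lock_retry` and the fake `attempts` sequence are illustrative, not part of charmhelpers):

```python
APT_NO_LOCK = 100              # apt's "couldn't acquire lock" status code
APT_NO_LOCK_RETRY_COUNT = 30   # give up after this many lock retries

def run_with_lock_retry(run, fatal=False, sleep=lambda seconds: None):
    """Generic form of the _run_apt_command retry loop (illustrative)."""
    if not fatal:
        return run()
    retry_count = 0
    result = None
    # Keep retrying while the command reports that the lock was held.
    while result is None or result == APT_NO_LOCK:
        result = run()
        if result == APT_NO_LOCK:
            retry_count += 1
            if retry_count > APT_NO_LOCK_RETRY_COUNT:
                raise RuntimeError('could not acquire the apt lock')
            sleep(10)
    return result

# Fake apt-get: the lock is held for the first two attempts, then released.
attempts = iter([APT_NO_LOCK, APT_NO_LOCK, 0])
calls = []
result = run_with_lock_retry(lambda: calls.append(1) or next(attempts),
                             fatal=True)
print(result, len(calls))      # 0 3
```

The real implementation additionally lets `subprocess.CalledProcessError` propagate for any non-lock failure, so only lock contention is retried.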
1922 | === modified file 'hooks/config-changed' | |||
1923 | --- hooks/config-changed 2014-05-14 16:40:09 +0000 | |||
1924 | +++ hooks/config-changed 2014-06-05 18:17:30 +0000 | |||
1925 | @@ -1,2 +1,5 @@ | |||
1928 | 1 | #!/bin/bash | 1 | #!/usr/bin/env python |
1929 | 2 | # config-changed occurs everytime a new configuration value is updated (juju set) | 2 | from charmhelpers.core import services |
1930 | 3 | import config | ||
1931 | 4 | manager = services.ServiceManager(config.SERVICES) | ||
1932 | 5 | manager.manage() | ||
1933 | 3 | 6 | ||
1934 | === added file 'hooks/config.py' | |||
1935 | --- hooks/config.py 1970-01-01 00:00:00 +0000 | |||
1936 | +++ hooks/config.py 2014-06-05 18:17:30 +0000 | |||
1937 | @@ -0,0 +1,80 @@ | |||
1938 | 1 | from charmhelpers.core import services | ||
1939 | 2 | from charmhelpers.contrib.cloudfoundry import contexts | ||
1940 | 3 | |||
1941 | 4 | HM9K_PACKAGES = ['python-jinja2', 'cfhm9000'] | ||
1942 | 5 | |||
1943 | 6 | HM_DIR = '/var/lib/cloudfoundry/cfhm9000' | ||
1944 | 7 | WORKSPACE_DIR = '/var/lib/cloudfoundry/hm-workspace' | ||
1945 | 8 | |||
1946 | 9 | hm_relations = [contexts.NatsRelation(), | ||
1947 | 10 | contexts.EtcdRelation(), | ||
+                contexts.CloudControllerRelation()]
+
+SERVICES = [
+    {
+        'service': 'cf-hm9k-fetcher',
+        'required_data': hm_relations,
+        'data_ready': [
+            services.template(source='hm9000.json',
+                              target=HM_DIR + '/config/hm9000.json'),
+            services.template(source='cf-hm9k-fetcher.conf',
+                              target='/etc/init/cf-hm9k-fetcher.conf'),
+        ],
+    },
+    {
+        'service': 'cf-hm9k-listener',
+        'required_data': hm_relations,
+        'data_ready': [
+            services.template(source='cf-hm9k-listener.conf',
+                              target='/etc/init/cf-hm9k-listener.conf'),
+        ],
+    },
+    {
+        'service': 'cf-hm9k-analyzer',
+        'required_data': hm_relations,
+        'data_ready': [
+            services.template(source='cf-hm9k-analyzer.conf',
+                              target='/etc/init/cf-hm9k-analyzer.conf'),
+        ],
+    },
+    {
+        'service': 'cf-hm9k-sender',
+        'required_data': hm_relations,
+        'data_ready': [
+            services.template(source='cf-hm9k-sender.conf',
+                              target='/etc/init/cf-hm9k-sender.conf'),
+        ],
+    },
+    {
+        'service': 'cf-hm9k-metrics-server',
+        'required_data': hm_relations,
+        'data_ready': [
+            services.template(source='cf-hm9k-metrics-server.conf',
+                              target='/etc/init/cf-hm9k-metrics-server.conf'),
+        ],
+    },
+    {
+        'service': 'cf-hm9k-api-server',
+        'required_data': hm_relations,
+        'data_ready': [
+            services.template(source='cf-hm9k-api-server.conf',
+                              target='/etc/init/cf-hm9k-api-server.conf'),
+        ],
+    },
+    {
+        'service': 'cf-hm9k-evacuator',
+        'required_data': hm_relations,
+        'data_ready': [
+            services.template(source='cf-hm9k-evacuator.conf',
+                              target='/etc/init/cf-hm9k-evacuator.conf'),
+        ],
+    },
+    {
+        'service': 'cf-hm9k-shredder',
+        'required_data': hm_relations,
+        'data_ready': [
+            services.template(source='cf-hm9k-shredder.conf',
+                              target='/etc/init/cf-hm9k-shredder.conf'),
+        ],
+    },
+]
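The `required_data`/`data_ready` structure above is consumed by the charmhelpers services framework: a service's callbacks run, and the service is started, only once every context in `required_data` is complete. A minimal sketch of that gating behaviour, assuming dict-like contexts that are truthy when ready (the `manage` function here is illustrative, not the real `ServiceManager` internals):

```python
# Sketch of the services-framework gating loop; illustrative only,
# not the actual charmhelpers.core.services.ServiceManager code.

def manage(service_defs):
    """Run data_ready callbacks only for services whose required_data
    contexts are all complete (truthy); report which services ran."""
    started = []
    for service in service_defs:
        if all(bool(ctx) for ctx in service.get('required_data', [])):
            for callback in service.get('data_ready', []):
                callback(service['service'])
            started.append(service['service'])
    return started

# Hypothetical contexts: a populated dict stands in for a relation whose
# data has arrived, an empty dict for one still waiting on remote units.
rendered = []
demo_services = [
    {'service': 'cf-hm9k-fetcher',
     'required_data': [{'nats': 'ok'}, {'etcd': 'ok'}],
     'data_ready': [rendered.append]},
    {'service': 'cf-hm9k-listener',
     'required_data': [{}],  # incomplete relation blocks the callbacks
     'data_ready': [rendered.append]},
]
print(manage(demo_services))  # only cf-hm9k-fetcher is ready
```

This is why every hook in the charm can be the same five-line script: each hook invocation simply re-evaluates which services have all their data and reconfigures accordingly.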
=== added file 'hooks/etcd-relation-changed'
--- hooks/etcd-relation-changed 1970-01-01 00:00:00 +0000
+++ hooks/etcd-relation-changed 2014-06-05 18:17:30 +0000
@@ -0,0 +1,5 @@
+#!/usr/bin/env python
+from charmhelpers.core import services
+import config
+manager = services.ServiceManager(config.SERVICES)
+manager.manage()
=== modified file 'hooks/install'
--- hooks/install 2014-05-14 16:40:09 +0000
+++ hooks/install 2014-06-05 18:17:30 +0000
@@ -1,8 +1,43 @@
-#!/bin/bash
-# Here do anything needed to install the service
-# i.e. apt-get install -y foo or bzr branch http://myserver/mycode /srv/webroot
-# Make sure this hook exits cleanly and is idempotent, common problems here are
-# failing to account for a debconf question on a dependency, or trying to pull
-# from github without installing git first.
-
-apt-get install -y cf-hm9000
+#!/usr/bin/env python
+# vim: et ai ts=4 sw=4:
+
+import os
+import subprocess
+
+from charmhelpers.core import host
+from charmhelpers.core import hookenv
+from charmhelpers.contrib.cloudfoundry.common import (
+    prepare_cloudfoundry_environment
+)
+
+import config
+
+CHARM_DIR = hookenv.charm_dir()
+
+
+def install():
+    prepare_cloudfoundry_environment(hookenv.config(), config.HM9K_PACKAGES)
+    install_from_source()
+
+
+def install_from_source():
+    subprocess.check_call([
+        'git', 'clone',
+        'https://github.com/cloudfoundry/hm-workspace', config.WORKSPACE_DIR])
+    host.mkdir(config.WORKSPACE_DIR + '/bin')
+    with host.chdir(config.WORKSPACE_DIR):
+        subprocess.check_call(['git', 'submodule', 'update', '--init'])
+    with host.chdir(config.WORKSPACE_DIR + '/src/github.com/cloudfoundry/hm9000'):
+        subprocess.check_call(['go', 'install', '.'],
+                              env={'GOPATH': config.WORKSPACE_DIR})
+
+
+def install_from_charm():
+    host.copy_file(
+        os.path.join(hookenv.charm_dir(), 'files/hm9000'),
+        config.WORKSPACE_DIR + '/bin/hm9000',
+        owner='vcap', group='vcap', perms=0555)
+
+
+if __name__ == '__main__':
+    install()
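The install hook relies on `host.chdir` as a context manager to run `git` and `go` from inside the workspace. If that helper is missing from your charmhelpers revision, a minimal equivalent (a sketch, assuming the usual save/chdir/restore semantics) looks like this:

```python
import os
from contextlib import contextmanager

@contextmanager
def chdir(path):
    """Temporarily switch the working directory, restoring the previous
    one on exit even if the body raises. Sketch of the host.chdir helper
    used by the install hook above."""
    prev = os.getcwd()
    os.chdir(path)
    try:
        yield path
    finally:
        os.chdir(prev)

# Usage mirroring the install hook:
# with chdir(workspace_dir):
#     subprocess.check_call(['git', 'submodule', 'update', '--init'])
```

The `try/finally` matters here: if `go install` fails and raises `CalledProcessError`, the hook still returns to its original directory before Juju sees the error.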
=== added file 'hooks/metrics-relation-changed'
--- hooks/metrics-relation-changed 1970-01-01 00:00:00 +0000
+++ hooks/metrics-relation-changed 2014-06-05 18:17:30 +0000
@@ -0,0 +1,5 @@
+#!/usr/bin/env python
+from charmhelpers.core import services
+import config
+manager = services.ServiceManager(config.SERVICES)
+manager.manage()

=== added file 'hooks/nats-relation-changed'
--- hooks/nats-relation-changed 1970-01-01 00:00:00 +0000
+++ hooks/nats-relation-changed 2014-06-05 18:17:30 +0000
@@ -0,0 +1,5 @@
+#!/usr/bin/env python
+from charmhelpers.core import services
+import config
+manager = services.ServiceManager(config.SERVICES)
+manager.manage()
=== removed file 'hooks/relation-name-relation-broken'
--- hooks/relation-name-relation-broken 2014-05-14 16:40:09 +0000
+++ hooks/relation-name-relation-broken 1970-01-01 00:00:00 +0000
@@ -1,2 +0,0 @@
-#!/bin/sh
-# This hook runs when the full relation is removed (not just a single member)

=== removed file 'hooks/relation-name-relation-changed'
--- hooks/relation-name-relation-changed 2014-05-14 16:40:09 +0000
+++ hooks/relation-name-relation-changed 1970-01-01 00:00:00 +0000
@@ -1,9 +0,0 @@
-#!/bin/bash
-# This must be renamed to the name of the relation. The goal here is to
-# affect any change needed by relationships being formed, modified, or broken
-# This script should be idempotent.
-juju-log $JUJU_REMOTE_UNIT modified its settings
-juju-log Relation settings:
-relation-get
-juju-log Relation members:
-relation-list

=== removed file 'hooks/relation-name-relation-departed'
--- hooks/relation-name-relation-departed 2014-05-14 16:40:09 +0000
+++ hooks/relation-name-relation-departed 1970-01-01 00:00:00 +0000
@@ -1,5 +0,0 @@
-#!/bin/sh
-# This must be renamed to the name of the relation. The goal here is to
-# affect any change needed by the remote unit leaving the relationship.
-# This script should be idempotent.
-juju-log $JUJU_REMOTE_UNIT departed

=== removed file 'hooks/relation-name-relation-joined'
--- hooks/relation-name-relation-joined 2014-05-14 16:40:09 +0000
+++ hooks/relation-name-relation-joined 1970-01-01 00:00:00 +0000
@@ -1,5 +0,0 @@
-#!/bin/sh
-# This must be renamed to the name of the relation. The goal here is to
-# affect any change needed by relationships being formed
-# This script should be idempotent.
-juju-log $JUJU_REMOTE_UNIT joined
=== modified file 'hooks/start'
--- hooks/start 2014-05-14 16:40:09 +0000
+++ hooks/start 2014-06-05 18:17:30 +0000
@@ -1,4 +1,5 @@
-#!/bin/bash
-# Here put anything that is needed to start the service.
-# Note that currently this is run directly after install
-# i.e. 'service apache2 start'
+#!/usr/bin/env python
+from charmhelpers.core import services
+import config
+manager = services.ServiceManager(config.SERVICES)
+manager.manage()

=== modified file 'hooks/stop'
--- hooks/stop 2014-05-14 16:40:09 +0000
+++ hooks/stop 2014-06-05 18:17:30 +0000
@@ -1,7 +1,5 @@
-#!/bin/bash
-# This will be run when the service is being torn down, allowing you to disable
-# it in various ways..
-# For example, if your web app uses a text file to signal to the load balancer
-# that it is live... you could remove it and sleep for a bit to allow the load
-# balancer to stop sending traffic.
-# rm /srv/webroot/server-live.txt && sleep 30
+#!/usr/bin/env python
+from charmhelpers.core import services
+import config
+manager = services.ServiceManager(config.SERVICES)
+manager.manage()

=== modified file 'hooks/upgrade-charm'
--- hooks/upgrade-charm 2014-05-14 16:40:09 +0000
+++ hooks/upgrade-charm 2014-06-05 18:17:30 +0000
@@ -1,6 +1,5 @@
-#!/bin/bash
-# This hook is executed each time a charm is upgraded after the new charm
-# contents have been unpacked
-# Best practice suggests you execute the hooks/install and
-# hooks/config-changed to ensure all updates are processed
-
+#!/usr/bin/env python
+from charmhelpers.core import services
+import config
+manager = services.ServiceManager(config.SERVICES)
+manager.manage()
=== modified file 'metadata.yaml'
--- metadata.yaml 2014-05-14 16:40:09 +0000
+++ metadata.yaml 2014-06-05 18:17:30 +0000
@@ -1,6 +1,6 @@
 name: cf-hm9000
-summary: Whit Morriss <whit.morriss@canonical.com>
-maintainer: cloudfoundry-charmers
+summary: Health Monitor for Cloud Foundry
+maintainer: cf-charmers
 description: |
   Deploys the hm9000 health monitoring system for cloud foundry
 categories:
@@ -9,13 +9,10 @@
 provides:
   metrics:
     interface: http
-  api:
-    interface: nats
 requires:
   nats:
     interface: nats
-  storage:
-    interface: etcd
-# peers:
-#   peer-relation:
-#     interface: interface-name
+  etcd:
+    interface: http
+  cc:
+    interface: cf-cloud-controller
=== removed file 'notes.md'
--- notes.md 2014-05-14 16:40:09 +0000
+++ notes.md 1970-01-01 00:00:00 +0000
@@ -1,8 +0,0 @@
-# Notes
-
-## Relations
-
-- etcd
-- nats
=== added directory 'templates'
=== added file 'templates/cf-hm9k-analyzer.conf'
--- templates/cf-hm9k-analyzer.conf 1970-01-01 00:00:00 +0000
+++ templates/cf-hm9k-analyzer.conf 2014-06-05 18:17:30 +0000
@@ -0,0 +1,17 @@
+description "Cloud Foundry HM9000"
+author "cf-charmers <cf-charmers@lists.launchpad.net>"
+start on runlevel [2345]
+stop on runlevel [!2345]
+#expect daemon
+#apparmor load <profile-path>
+setuid vcap
+setgid vcap
+respawn
+respawn limit 10 5
+normal exit 0
+
+env GOPATH=/var/lib/cloudfoundry/hm-workspace/
+export GOPATH
+
+chdir /var/lib/cloudfoundry/hm-workspace
+exec ./bin/hm9000 analyze --poll --config=/var/lib/cloudfoundry/cfhm9000/config/hm9000.json
=== added file 'templates/cf-hm9k-api-server.conf'
--- templates/cf-hm9k-api-server.conf 1970-01-01 00:00:00 +0000
+++ templates/cf-hm9k-api-server.conf 2014-06-05 18:17:30 +0000
@@ -0,0 +1,17 @@
+description "Cloud Foundry HM9000"
+author "cf-charmers <cf-charmers@lists.launchpad.net>"
+start on runlevel [2345]
+stop on runlevel [!2345]
+#expect daemon
+#apparmor load <profile-path>
+setuid vcap
+setgid vcap
+respawn
+respawn limit 10 5
+normal exit 0
+
+env GOPATH=/var/lib/cloudfoundry/hm-workspace/
+export GOPATH
+
+chdir /var/lib/cloudfoundry/hm-workspace
+exec ./bin/hm9000 serve_api --config=/var/lib/cloudfoundry/cfhm9000/config/hm9000.json
=== added file 'templates/cf-hm9k-evacuator.conf'
--- templates/cf-hm9k-evacuator.conf 1970-01-01 00:00:00 +0000
+++ templates/cf-hm9k-evacuator.conf 2014-06-05 18:17:30 +0000
@@ -0,0 +1,17 @@
+description "Cloud Foundry HM9000"
+author "cf-charmers <cf-charmers@lists.launchpad.net>"
+start on runlevel [2345]
+stop on runlevel [!2345]
+#expect daemon
+#apparmor load <profile-path>
+setuid vcap
+setgid vcap
+respawn
+respawn limit 10 5
+normal exit 0
+
+env GOPATH=/var/lib/cloudfoundry/hm-workspace/
+export GOPATH
+
+chdir /var/lib/cloudfoundry/hm-workspace
+exec ./bin/hm9000 evacuator --config=/var/lib/cloudfoundry/cfhm9000/config/hm9000.json
=== added file 'templates/cf-hm9k-fetcher.conf'
--- templates/cf-hm9k-fetcher.conf 1970-01-01 00:00:00 +0000
+++ templates/cf-hm9k-fetcher.conf 2014-06-05 18:17:30 +0000
@@ -0,0 +1,17 @@
+description "Cloud Foundry HM9000"
+author "cf-charmers <cf-charmers@lists.launchpad.net>"
+start on runlevel [2345]
+stop on runlevel [!2345]
+#expect daemon
+#apparmor load <profile-path>
+setuid vcap
+setgid vcap
+respawn
+respawn limit 10 5
+normal exit 0
+
+env GOPATH=/var/lib/cloudfoundry/hm-workspace/
+export GOPATH
+
+chdir /var/lib/cloudfoundry/hm-workspace
+exec ./bin/hm9000 fetch_desired --poll --config=/var/lib/cloudfoundry/cfhm9000/config/hm9000.json
=== added file 'templates/cf-hm9k-listener.conf'
--- templates/cf-hm9k-listener.conf 1970-01-01 00:00:00 +0000
+++ templates/cf-hm9k-listener.conf 2014-06-05 18:17:30 +0000
@@ -0,0 +1,17 @@
+description "Cloud Foundry HM9000"
+author "cf-charmers <cf-charmers@lists.launchpad.net>"
+start on runlevel [2345]
+stop on runlevel [!2345]
+#expect daemon
+#apparmor load <profile-path>
+setuid vcap
+setgid vcap
+respawn
+respawn limit 10 5
+normal exit 0
+
+env GOPATH=/var/lib/cloudfoundry/hm-workspace/
+export GOPATH
+
+chdir /var/lib/cloudfoundry/hm-workspace
+exec ./bin/hm9000 listen --config=/var/lib/cloudfoundry/cfhm9000/config/hm9000.json
=== added file 'templates/cf-hm9k-metrics-server.conf'
--- templates/cf-hm9k-metrics-server.conf 1970-01-01 00:00:00 +0000
+++ templates/cf-hm9k-metrics-server.conf 2014-06-05 18:17:30 +0000
@@ -0,0 +1,17 @@
+description "Cloud Foundry HM9000"
+author "cf-charmers <cf-charmers@lists.launchpad.net>"
+start on runlevel [2345]
+stop on runlevel [!2345]
+#expect daemon
+#apparmor load <profile-path>
+setuid vcap
+setgid vcap
+respawn
+respawn limit 10 5
+normal exit 0
+
+env GOPATH=/var/lib/cloudfoundry/hm-workspace/
+export GOPATH
+
+chdir /var/lib/cloudfoundry/hm-workspace
+exec ./bin/hm9000 serve_metrics --config=/var/lib/cloudfoundry/cfhm9000/config/hm9000.json
=== added file 'templates/cf-hm9k-sender.conf'
--- templates/cf-hm9k-sender.conf 1970-01-01 00:00:00 +0000
+++ templates/cf-hm9k-sender.conf 2014-06-05 18:17:30 +0000
@@ -0,0 +1,17 @@
+description "Cloud Foundry HM9000"
+author "cf-charmers <cf-charmers@lists.launchpad.net>"
+start on runlevel [2345]
+stop on runlevel [!2345]
+#expect daemon
+#apparmor load <profile-path>
+setuid vcap
+setgid vcap
+respawn
+respawn limit 10 5
+normal exit 0
+
+env GOPATH=/var/lib/cloudfoundry/hm-workspace/
+export GOPATH
+
+chdir /var/lib/cloudfoundry/hm-workspace
+exec ./bin/hm9000 send --poll --config=/var/lib/cloudfoundry/cfhm9000/config/hm9000.json
=== added file 'templates/cf-hm9k-shredder.conf'
--- templates/cf-hm9k-shredder.conf 1970-01-01 00:00:00 +0000
+++ templates/cf-hm9k-shredder.conf 2014-06-05 18:17:30 +0000
@@ -0,0 +1,17 @@
+description "Cloud Foundry HM9000"
+author "cf-charmers <cf-charmers@lists.launchpad.net>"
+start on runlevel [2345]
+stop on runlevel [!2345]
+#expect daemon
+#apparmor load <profile-path>
+setuid vcap
+setgid vcap
+respawn
+respawn limit 10 5
+normal exit 0
+
+env GOPATH=/var/lib/cloudfoundry/hm-workspace/
+export GOPATH
+
+chdir /var/lib/cloudfoundry/hm-workspace
+exec ./bin/hm9000 shred --poll --config=/var/lib/cloudfoundry/cfhm9000/config/hm9000.json
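The eight upstart jobs above differ only in the `hm9000` subcommand they exec; everything else (the vcap user, the respawn policy, the GOPATH) is identical. A hypothetical generator capturing that shared structure — the charm itself ships each job as a separate static template, and the job/subcommand pairs below are taken from those templates:

```python
# Hypothetical generator for the eight near-identical upstart jobs; the
# charm actually ships each job as its own static template file.

TEMPLATE = """\
description "Cloud Foundry HM9000"
setuid vcap
setgid vcap
respawn
respawn limit 10 5

env GOPATH=/var/lib/cloudfoundry/hm-workspace/
export GOPATH

chdir /var/lib/cloudfoundry/hm-workspace
exec ./bin/hm9000 {command} --config={config}
"""

# (job name -> hm9000 subcommand) pairs, taken from the templates above.
JOBS = {
    'cf-hm9k-fetcher': 'fetch_desired --poll',
    'cf-hm9k-listener': 'listen',
    'cf-hm9k-analyzer': 'analyze --poll',
    'cf-hm9k-sender': 'send --poll',
    'cf-hm9k-metrics-server': 'serve_metrics',
    'cf-hm9k-api-server': 'serve_api',
    'cf-hm9k-evacuator': 'evacuator',
    'cf-hm9k-shredder': 'shred --poll',
}

def render(name, config='/var/lib/cloudfoundry/cfhm9000/config/hm9000.json'):
    """Fill the shared template with one job's subcommand."""
    return TEMPLATE.format(command=JOBS[name], config=config)

print(render('cf-hm9k-analyzer'))
```

Note that the fetcher, analyzer, sender, and shredder run with `--poll` (periodic components), while the listener and the two servers are long-running daemons.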
=== added file 'templates/hm9000.json'
--- templates/hm9000.json 1970-01-01 00:00:00 +0000
+++ templates/hm9000.json 2014-06-05 18:17:30 +0000
@@ -0,0 +1,31 @@
+{
+  "heartbeat_period_in_seconds": 10,
+
+  "cc_auth_user": "{{cc['user']}}",
+  "cc_auth_password": "{{cc['password']}}",
+  "cc_base_url": "http://{{cc['hostname']}}:{{cc['port']}}",
+  "skip_cert_verify": true,
+  "desired_state_batch_size": 500,
+  "fetcher_network_timeout_in_seconds": 10,
+
+  "store_schema_version": 1,
+  "store_type": "etcd",
+  "store_urls": [
+    {% for unit in etcd -%}
+    "http://{{unit['hostname']}}:{{unit['port']}}"{% if not loop.last %},{% endif -%}
+    {%- endfor %}
+  ],
+
+  "metrics_server_port": 7879,
+  "metrics_server_user": "metrics_server_user",
+  "metrics_server_password": "canHazMetrics?",
+
+  "log_level": "INFO",
+
+  "nats": [{
+    "host": "{{nats['nats_address']}}",
+    "port": {{nats['nats_port']}},
+    "user": "{{nats['nats_user']}}",
+    "password": "{{nats['nats_password']}}"
+  }]
+}
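The `store_urls` loop in this template emits a comma after every etcd URL except the last (`{% if not loop.last %}`), which is what keeps the rendered file valid JSON for any number of etcd units. Hand-rendering that logic in plain Python, with made-up unit addresses, shows the result parses cleanly:

```python
import json

# Hand-rendering of the store_urls loop from templates/hm9000.json:
# a comma follows every URL except the last, mirroring Jinja's
# {% if not loop.last %}. The unit addresses here are made up.
etcd_units = [
    {'hostname': '10.0.0.11', 'port': 4001},
    {'hostname': '10.0.0.12', 'port': 4001},
]

# str.join naturally places separators only *between* entries.
store_urls = ','.join(
    '"http://{hostname}:{port}"'.format(**u) for u in etcd_units)

doc = '{"store_type": "etcd", "store_urls": [%s]}' % store_urls
parsed = json.loads(doc)
print(parsed['store_urls'])
```

A trailing comma after the last URL would make the file unparseable by strict JSON readers, so the `loop.last` guard is load-bearing, not cosmetic.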
Reviewers: mp+221770@code.launchpad.net
Message:
Please take a look.
Description:
Finished charm using services framework
https://code.launchpad.net/~johnsca/charms/trusty/cf-hm9000/services/+merge/221770
(do not edit description out of merge proposal)
Please review this at https://codereview.appspot.com/104820043/
Affected files (+1064, -879 lines):
A [revision details]
D files/README.md
D files/default-config.json
A files/hm9000
D files/hm9000.json.erb
D files/hm9000_analyzer_ctl
D files/hm9000_api_server_ctl
D files/hm9000_evacuator_ctl
D files/hm9000_fetcher_ctl
D files/hm9000_listener_ctl
D files/hm9000_metrics_server_ctl
D files/hm9000_sender_ctl
D files/hm9000_shredder_ctl
D files/syslog_forwarder.conf.erb
A hooks/cc-relation-changed
M hooks/charmhelpers/contrib/cloudfoundry/common.py
D hooks/charmhelpers/contrib/cloudfoundry/config_helper.py
M hooks/charmhelpers/contrib/cloudfoundry/contexts.py
D hooks/charmhelpers/contrib/cloudfoundry/install.py
D hooks/charmhelpers/contrib/cloudfoundry/services.py
D hooks/charmhelpers/contrib/cloudfoundry/upstart_helper.py
M hooks/charmhelpers/contrib/openstack/context.py
M hooks/charmhelpers/contrib/openstack/neutron.py
M hooks/charmhelpers/contrib/openstack/utils.py
M hooks/charmhelpers/contrib/storage/linux/lvm.py
M hooks/charmhelpers/contrib/storage/linux/utils.py
M hooks/charmhelpers/core/hookenv.py
M hooks/charmhelpers/core/host.py
A hooks/charmhelpers/core/services.py
A hooks/charmhelpers/core/templating.py
M hooks/charmhelpers/fetch/__init__.py
A hooks/config.py
M hooks/config-changed
A hooks/etcd-relation-changed
M hooks/install
A hooks/metrics-relation-changed
A hooks/nats-relation-changed
D hooks/relation-name-relation-broken
D hooks/relation-name-relation-changed
D hooks/relation-name-relation-departed
D hooks/relation-name-relation-joined
M hooks/start
M hooks/stop
M hooks/upgrade-charm
M metadata.yaml
D notes.md
A templates/cf-hm9k-analyzer.conf
A templates/cf-hm9k-api-server.conf
A templates/cf-hm9k-evacuator.conf
A templates/cf-hm9k-fetcher.conf
A templates/cf-hm9k-listener.conf
A templates/cf-hm9k-metrics-server.conf
A templates/cf-hm9k-sender.conf
A templates/cf-hm9k-shredder.conf
A templates/hm9000.json