Merge lp:~johnsca/charms/trusty/cf-hm9000/services into lp:~cf-charmers/charms/trusty/cf-hm9000/trunk
Status: Merged
Merged at revision: 2
Proposed branch: lp:~johnsca/charms/trusty/cf-hm9000/services
Merge into: lp:~cf-charmers/charms/trusty/cf-hm9000/trunk
Diff against target: 2448 lines (+1093/-882), 53 files modified
files/README.md (+0/-3) files/default-config.json (+0/-27) files/hm9000.json.erb (+0/-30) files/hm9000_analyzer_ctl (+0/-41) files/hm9000_api_server_ctl (+0/-40) files/hm9000_evacuator_ctl (+0/-40) files/hm9000_fetcher_ctl (+0/-41) files/hm9000_listener_ctl (+0/-44) files/hm9000_metrics_server_ctl (+0/-40) files/hm9000_sender_ctl (+0/-41) files/hm9000_shredder_ctl (+0/-41) files/syslog_forwarder.conf.erb (+0/-65) hooks/cc-relation-changed (+5/-0) hooks/charmhelpers/contrib/cloudfoundry/common.py (+0/-57) hooks/charmhelpers/contrib/cloudfoundry/config_helper.py (+0/-11) hooks/charmhelpers/contrib/cloudfoundry/contexts.py (+59/-55) hooks/charmhelpers/contrib/cloudfoundry/install.py (+0/-35) hooks/charmhelpers/contrib/cloudfoundry/services.py (+0/-118) hooks/charmhelpers/contrib/cloudfoundry/upstart_helper.py (+0/-14) hooks/charmhelpers/contrib/openstack/context.py (+1/-1) hooks/charmhelpers/contrib/openstack/neutron.py (+17/-1) hooks/charmhelpers/contrib/openstack/utils.py (+6/-1) hooks/charmhelpers/contrib/storage/linux/lvm.py (+1/-1) hooks/charmhelpers/contrib/storage/linux/utils.py (+19/-5) hooks/charmhelpers/core/hookenv.py (+98/-1) hooks/charmhelpers/core/host.py (+52/-0) hooks/charmhelpers/core/services.py (+357/-0) hooks/charmhelpers/core/templating.py (+51/-0) hooks/charmhelpers/fetch/__init__.py (+96/-64) hooks/config-changed (+5/-2) hooks/config.py (+80/-0) hooks/etcd-relation-changed (+5/-0) hooks/install (+43/-8) hooks/metrics-relation-changed (+5/-0) hooks/nats-relation-changed (+5/-0) hooks/relation-name-relation-broken (+0/-2) hooks/relation-name-relation-changed (+0/-9) hooks/relation-name-relation-departed (+0/-5) hooks/relation-name-relation-joined (+0/-5) hooks/start (+5/-4) hooks/stop (+5/-7) hooks/upgrade-charm (+5/-6) metadata.yaml (+6/-9) notes.md (+0/-8) templates/cf-hm9k-analyzer.conf (+17/-0) templates/cf-hm9k-api-server.conf (+17/-0) templates/cf-hm9k-evacuator.conf (+17/-0) templates/cf-hm9k-fetcher.conf (+17/-0) 
templates/cf-hm9k-listener.conf (+17/-0) templates/cf-hm9k-metrics-server.conf (+17/-0) templates/cf-hm9k-sender.conf (+17/-0) templates/cf-hm9k-shredder.conf (+17/-0) templates/hm9000.json (+31/-0)
To merge this branch: bzr merge lp:~johnsca/charms/trusty/cf-hm9000/services
Related bugs:
Reviewer: Cloud Foundry Charmers (status: Pending)
Review via email: mp+221770@code.launchpad.net
Commit message
Description of the change
Finished charm using services framework
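With the services framework, every hook in this branch collapses to the same entry point (see hooks/cc-relation-changed in the diff below). A rough, self-contained sketch of the idea follows; the `ServiceManager` here is a stand-in for the real charmhelpers class, not its actual implementation:

```python
# Hedged sketch of the services-framework pattern used by the hooks in
# this branch: every hook builds a manager from a shared SERVICES
# definition and calls manage(), which configures a service only once
# all of its required relation data contexts are complete.

class ServiceManager:
    """Stand-in for charmhelpers.core.services.ServiceManager."""
    def __init__(self, services):
        self.services = services

    def manage(self):
        """Report which services are ready to be (re)configured."""
        results = {}
        for svc in self.services:
            # Empty context dicts are falsy, i.e. relation data missing.
            ready = all(svc['required_data'])
            results[svc['service']] = 'configured' if ready else 'waiting'
        return results

SERVICES = [
    {'service': 'cf-hm9k-listener',
     'required_data': [{'nats_address': '10.0.0.2'}]},
    {'service': 'cf-hm9k-analyzer',
     'required_data': [{}]},  # relation not yet joined
]

print(ServiceManager(SERVICES).manage())
```

In the real charm, each hook file is identical: it imports the shared config module and calls `manage()`, so adding a relation only means adding a symlink and a context class.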
Cory Johns (johnsca) wrote:
Benjamin Saller (bcsaller) wrote:
Thanks for this; it's a little hard to verify, as you indicated. It looks like
we can add the internal MCAT suite to the CI as well.
I think landing this now and completing the topology is fine; even if we
have to evolve its configuration, it's an 'optional' component.
LGTM
File hooks/config.py (right):
hooks/config.py:24: 'required_data': [contexts.
What do you think about a top level define of this list that we reuse,
something like:
required_data: hm_contexts
where hm_contexts is defined above the service block.
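The suggested refactor could look something like the sketch below. The context class names follow the `contexts.py` changes in this diff, but the service definitions themselves are hypothetical stand-ins:

```python
# Hedged sketch of the review suggestion: define the shared
# required_data list once, above the service block, and reuse it in
# every hm9000 service definition instead of repeating the list.

class NatsRelation:
    """Stand-in for the RelationContext subclasses in contexts.py."""
    interface = 'nats'

class EtcdRelation:
    interface = 'etcd'

class CloudControllerRelation:
    interface = 'cc'

# Defined once, at the top level...
hm_contexts = [NatsRelation(), EtcdRelation(), CloudControllerRelation()]

# ...and reused by each service entry.
SERVICES = [
    {'service': 'cf-hm9k-analyzer', 'required_data': hm_contexts},
    {'service': 'cf-hm9k-listener', 'required_data': hm_contexts},
]
```

Since every entry shares the same list object, adding or removing a context later only touches one place.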
File templates/
templates/
<email address hidden>"
This is an existing issue, but I'd like for all the Authors in all the
projects to point to the projects list rather than a bunch of
individuals. The charms maintainer fields as well. We should make a card
for this and switch them at once.
File templates/
templates/
["http://{{etcd[
This could easily be a list, no? I think we'll want to handle it that
way (via iteration), but for now this is fine.
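Handling it via iteration might look like the sketch below. The `hostname`/`port` field names match the `EtcdRelation.required_keys` in this diff; the helper function itself is illustrative, not the charm's actual code:

```python
# Hedged sketch: build hm9000's store_urls from however many etcd
# units are related, instead of hard-coding a single entry.
import json

def store_urls(etcd_units):
    """Render one store URL per related etcd unit."""
    return ["http://{hostname}:{port}".format(**u) for u in etcd_units]

units = [{'hostname': '10.0.0.4', 'port': 4001},
         {'hostname': '10.0.0.5', 'port': 4001}]
print(json.dumps(store_urls(units)))
```

The same iteration works equally well as a `for` loop in the Jinja2 template rendering hm9000.json.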
templates/
Making two cards; I've been meaning to audit all the default ports and
username/password defaults. This is something we'll need to be able to
manage better in the future. Nothing to do in this branch.
Cory Johns (johnsca) wrote:
On 2014/06/02 18:29:44, benjamin.saller wrote:
> hooks/config.py:24: 'required_data': [contexts.
> What do you think about a top level define of this list that we reuse,
something
> like:
> required_data: hm_contexts
> where hm_contexts is defined above the service block.
+1 I definitely should have done this to begin with.
> templates/
> <mailto:<email address hidden>>"
> This is an existing issue, but I'd like for all the Authors in all the
projects
> to point to the projects list rather than a bunch of individuals. The
charms
> maintainer fields as well. We should make a card for this and switch
them at
> once.
Easy enough to do in this charm during this review, then we can fix the
existing ones in a batch.
> templates/
> ["http://{{etcd[
> This could easily be a list, no? I think we'll want to handle it that
way (via
> iteration), but for now this is fine.
This raises an issue with how I implemented multiple units in the
framework, which I will address now.
- 4. By Cory Johns: Updated all author & maintainer fields to cf-charmers
- 5. By Cory Johns: Refactor away repetition in required_data items
- 6. By Cory Johns: Handle multiple etcd units
- 7. By Cory Johns: Resynced charm-helpers for missed RelationContext classes
- 8. By Cory Johns: Remove extra whitespace from hm9000.json config
Cory Johns (johnsca) wrote:
Please take a look.
- 9. By Cory Johns: Resynced charm-helpers for bug fix
Benjamin Saller (bcsaller) wrote:
Thanks for this; since it depends on the helpers change, let's resolve
that before I can approve this. Good stuff, though.
File metadata.yaml (right):
metadata.yaml:3: maintainer: cf-charmers
Ha, nice catch. We might want to use the full email address for
cf-charmers though, or say lp:~cf-charmers, something to make it a
little more clear.
File templates/
templates/
This might change a little if you like the helpers changes suggested in
the other branch. Thanks for this important fix.
- 10. By Cory Johns: Updated charm-helpers and cleaned up iteration of multiple etcd units
Cory Johns (johnsca) wrote:
Please take a look.
Benjamin Saller (bcsaller) wrote:
This LGTM, +1
File templates/
templates/
nice, thanks
Preview Diff
1 | === removed file 'files/README.md' |
2 | --- files/README.md 2014-05-14 16:40:09 +0000 |
3 | +++ files/README.md 1970-01-01 00:00:00 +0000 |
4 | @@ -1,3 +0,0 @@ |
5 | -# Contents |
6 | - |
7 | -ctl files and config from cf-release |
8 | |
9 | === removed file 'files/default-config.json' |
10 | --- files/default-config.json 2014-05-14 16:40:09 +0000 |
11 | +++ files/default-config.json 1970-01-01 00:00:00 +0000 |
12 | @@ -1,27 +0,0 @@ |
13 | -{ |
14 | - "heartbeat_period_in_seconds": 10, |
15 | - |
16 | - "cc_auth_user": "mcat", |
17 | - "cc_auth_password": "testing", |
18 | - "cc_base_url": "http://127.0.0.1:6001", |
19 | - "skip_cert_verify": true, |
20 | - "desired_state_batch_size": 500, |
21 | - "fetcher_network_timeout_in_seconds": 10, |
22 | - |
23 | - "store_schema_version": 1, |
24 | - "store_type": "etcd", |
25 | - "store_urls": ["http://127.0.0.1:4001"], |
26 | - |
27 | - "metrics_server_port": 7879, |
28 | - "metrics_server_user": "metrics_server_user", |
29 | - "metrics_server_password": "canHazMetrics?", |
30 | - |
31 | - "log_level": "INFO", |
32 | - |
33 | - "nats": [{ |
34 | - "host": "127.0.0.1", |
35 | - "port": 4222, |
36 | - "user": "", |
37 | - "password": "" |
38 | - }] |
39 | -} |
40 | |
41 | === added file 'files/hm9000' |
42 | Binary files files/hm9000 1970-01-01 00:00:00 +0000 and files/hm9000 2014-06-05 18:17:30 +0000 differ |
43 | === removed file 'files/hm9000.json.erb' |
44 | --- files/hm9000.json.erb 2014-05-14 16:40:09 +0000 |
45 | +++ files/hm9000.json.erb 1970-01-01 00:00:00 +0000 |
46 | @@ -1,30 +0,0 @@ |
47 | -{ |
48 | - "heartbeat_period_in_seconds": 10, |
49 | - |
50 | - "cc_auth_user": "<%= p("ccng.bulk_api_user") %>", |
51 | - "cc_auth_password": "<%= p("ccng.bulk_api_password") %>", |
52 | - "cc_base_url": "<%= p("cc.srv_api_uri") %>", |
53 | - "skip_cert_verify": <%= p("ssl.skip_cert_verify") %>, |
54 | - "desired_state_batch_size": 500, |
55 | - "fetcher_network_timeout_in_seconds": 10, |
56 | - |
57 | - "store_schema_version": 4, |
58 | - "store_urls": [<%= p("etcd.machines").map{|addr| "\"http://#{addr}:4001\""}.join(",")%>], |
59 | - |
60 | - "metrics_server_port": 0, |
61 | - "metrics_server_user": "", |
62 | - "metrics_server_password": "", |
63 | - |
64 | - "log_level": "INFO", |
65 | - |
66 | - "nats": <%= |
67 | - p("nats.machines").collect do |addr| |
68 | - { |
69 | - "host" => addr, |
70 | - "port" => p("nats.port"), |
71 | - "user" => p("nats.user"), |
72 | - "password" => p("nats.password") |
73 | - } |
74 | - end.to_json |
75 | -%> |
76 | -} |
77 | |
78 | === removed file 'files/hm9000_analyzer_ctl' |
79 | --- files/hm9000_analyzer_ctl 2014-05-14 16:40:09 +0000 |
80 | +++ files/hm9000_analyzer_ctl 1970-01-01 00:00:00 +0000 |
81 | @@ -1,41 +0,0 @@ |
82 | -#!/bin/bash |
83 | - |
84 | -RUN_DIR=/var/vcap/sys/run/hm9000 |
85 | -LOG_DIR=/var/vcap/sys/log/hm9000 |
86 | -PIDFILE=$RUN_DIR/hm9000_analyzer.pid |
87 | - |
88 | -source /var/vcap/packages/common/utils.sh |
89 | - |
90 | -case $1 in |
91 | - |
92 | - start) |
93 | - pid_guard $PIDFILE "hm9000_analyzer" |
94 | - |
95 | - mkdir -p $RUN_DIR |
96 | - mkdir -p $LOG_DIR |
97 | - |
98 | - chown -R vcap:vcap $RUN_DIR |
99 | - chown -R vcap:vcap $LOG_DIR |
100 | - |
101 | - echo $$ > $PIDFILE |
102 | - |
103 | - exec chpst -u vcap:vcap /var/vcap/packages/hm9000/hm9000 \ |
104 | - analyze \ |
105 | - --config=/var/vcap/jobs/hm9000/config/hm9000.json \ |
106 | - --poll \ |
107 | - 1>>$LOG_DIR/hm9000_analyzer.stdout.log \ |
108 | - 2>>$LOG_DIR/hm9000_analyzer.stderr.log |
109 | - |
110 | - ;; |
111 | - |
112 | - stop) |
113 | - kill_and_wait $PIDFILE |
114 | - |
115 | - ;; |
116 | - |
117 | - *) |
118 | - echo "Usage: hm9000_analyzer_ctl {start|stop}" |
119 | - |
120 | - ;; |
121 | - |
122 | -esac |
123 | |
124 | === removed file 'files/hm9000_api_server_ctl' |
125 | --- files/hm9000_api_server_ctl 2014-05-14 16:40:09 +0000 |
126 | +++ files/hm9000_api_server_ctl 1970-01-01 00:00:00 +0000 |
127 | @@ -1,40 +0,0 @@ |
128 | -#!/bin/bash |
129 | - |
130 | -RUN_DIR=/var/vcap/sys/run/hm9000 |
131 | -LOG_DIR=/var/vcap/sys/log/hm9000 |
132 | -PIDFILE=$RUN_DIR/hm9000_api_server.pid |
133 | - |
134 | -source /var/vcap/packages/common/utils.sh |
135 | - |
136 | -case $1 in |
137 | - |
138 | - start) |
139 | - pid_guard $PIDFILE "hm9000_api_server" |
140 | - |
141 | - mkdir -p $RUN_DIR |
142 | - mkdir -p $LOG_DIR |
143 | - |
144 | - chown -R vcap:vcap $RUN_DIR |
145 | - chown -R vcap:vcap $LOG_DIR |
146 | - |
147 | - echo $$ > $PIDFILE |
148 | - |
149 | - exec chpst -u vcap:vcap /var/vcap/packages/hm9000/hm9000 \ |
150 | - serve_api \ |
151 | - --config=/var/vcap/jobs/hm9000/config/hm9000.json \ |
152 | - 1>>$LOG_DIR/hm9000_api_server.stdout.log \ |
153 | - 2>>$LOG_DIR/hm9000_api_server.stderr.log |
154 | - |
155 | - ;; |
156 | - |
157 | - stop) |
158 | - kill_and_wait $PIDFILE |
159 | - |
160 | - ;; |
161 | - |
162 | - *) |
163 | - echo "Usage: hm9000_api_server_ctl {start|stop}" |
164 | - |
165 | - ;; |
166 | - |
167 | -esac |
168 | |
169 | === removed file 'files/hm9000_evacuator_ctl' |
170 | --- files/hm9000_evacuator_ctl 2014-05-14 16:40:09 +0000 |
171 | +++ files/hm9000_evacuator_ctl 1970-01-01 00:00:00 +0000 |
172 | @@ -1,40 +0,0 @@ |
173 | -#!/bin/bash |
174 | - |
175 | -RUN_DIR=/var/vcap/sys/run/hm9000 |
176 | -LOG_DIR=/var/vcap/sys/log/hm9000 |
177 | -PIDFILE=$RUN_DIR/hm9000_evacuator.pid |
178 | - |
179 | -source /var/vcap/packages/common/utils.sh |
180 | - |
181 | -case $1 in |
182 | - |
183 | - start) |
184 | - pid_guard $PIDFILE "hm9000_evacuator" |
185 | - |
186 | - mkdir -p $RUN_DIR |
187 | - mkdir -p $LOG_DIR |
188 | - |
189 | - chown -R vcap:vcap $RUN_DIR |
190 | - chown -R vcap:vcap $LOG_DIR |
191 | - |
192 | - echo $$ > $PIDFILE |
193 | - |
194 | - exec chpst -u vcap:vcap /var/vcap/packages/hm9000/hm9000 \ |
195 | - evacuator \ |
196 | - --config=/var/vcap/jobs/hm9000/config/hm9000.json \ |
197 | - 1>>$LOG_DIR/hm9000_evacuator.stdout.log \ |
198 | - 2>>$LOG_DIR/hm9000_evacuator.stderr.log |
199 | - |
200 | - ;; |
201 | - |
202 | - stop) |
203 | - kill_and_wait $PIDFILE |
204 | - |
205 | - ;; |
206 | - |
207 | - *) |
208 | - echo "Usage: hm9000_evacuator_ctl {start|stop}" |
209 | - |
210 | - ;; |
211 | - |
212 | -esac |
213 | |
214 | === removed file 'files/hm9000_fetcher_ctl' |
215 | --- files/hm9000_fetcher_ctl 2014-05-14 16:40:09 +0000 |
216 | +++ files/hm9000_fetcher_ctl 1970-01-01 00:00:00 +0000 |
217 | @@ -1,41 +0,0 @@ |
218 | -#!/bin/bash |
219 | - |
220 | -RUN_DIR=/var/vcap/sys/run/hm9000 |
221 | -LOG_DIR=/var/vcap/sys/log/hm9000 |
222 | -PIDFILE=$RUN_DIR/hm9000_fetcher.pid |
223 | - |
224 | -source /var/vcap/packages/common/utils.sh |
225 | - |
226 | -case $1 in |
227 | - |
228 | - start) |
229 | - pid_guard $PIDFILE "hm9000_fetcher" |
230 | - |
231 | - mkdir -p $RUN_DIR |
232 | - mkdir -p $LOG_DIR |
233 | - |
234 | - chown -R vcap:vcap $RUN_DIR |
235 | - chown -R vcap:vcap $LOG_DIR |
236 | - |
237 | - echo $$ > $PIDFILE |
238 | - |
239 | - exec chpst -u vcap:vcap /var/vcap/packages/hm9000/hm9000 \ |
240 | - fetch_desired \ |
241 | - --config=/var/vcap/jobs/hm9000/config/hm9000.json \ |
242 | - --poll \ |
243 | - 1>>$LOG_DIR/hm9000_fetcher.stdout.log \ |
244 | - 2>>$LOG_DIR/hm9000_fetcher.stderr.log |
245 | - |
246 | - ;; |
247 | - |
248 | - stop) |
249 | - kill_and_wait $PIDFILE |
250 | - |
251 | - ;; |
252 | - |
253 | - *) |
254 | - echo "Usage: hm9000_fetcher_ctl {start|stop}" |
255 | - |
256 | - ;; |
257 | - |
258 | -esac |
259 | |
260 | === removed file 'files/hm9000_listener_ctl' |
261 | --- files/hm9000_listener_ctl 2014-05-14 16:40:09 +0000 |
262 | +++ files/hm9000_listener_ctl 1970-01-01 00:00:00 +0000 |
263 | @@ -1,44 +0,0 @@ |
264 | -#!/bin/bash |
265 | - |
266 | -RUN_DIR=/var/vcap/sys/run/hm9000 |
267 | -LOG_DIR=/var/vcap/sys/log/hm9000 |
268 | -PIDFILE=$RUN_DIR/hm9000_listener.pid |
269 | - |
270 | -source /var/vcap/packages/common/utils.sh |
271 | - |
272 | -case $1 in |
273 | - |
274 | - start) |
275 | - pid_guard $PIDFILE "hm9000_listener" |
276 | - |
277 | - mkdir -p $RUN_DIR |
278 | - mkdir -p $LOG_DIR |
279 | - |
280 | - chown -R vcap:vcap $RUN_DIR |
281 | - chown -R vcap:vcap $LOG_DIR |
282 | - |
283 | - <% if_p("syslog_aggregator") do %> |
284 | - /var/vcap/packages/syslog_aggregator/setup_syslog_forwarder.sh /var/vcap/jobs/hm9000/config |
285 | - <% end %> |
286 | - |
287 | - echo $$ > $PIDFILE |
288 | - |
289 | - exec chpst -u vcap:vcap /var/vcap/packages/hm9000/hm9000 \ |
290 | - listen \ |
291 | - --config=/var/vcap/jobs/hm9000/config/hm9000.json \ |
292 | - 1>>$LOG_DIR/hm9000_listener.stdout.log \ |
293 | - 2>>$LOG_DIR/hm9000_listener.stderr.log |
294 | - |
295 | - ;; |
296 | - |
297 | - stop) |
298 | - kill_and_wait $PIDFILE |
299 | - |
300 | - ;; |
301 | - |
302 | - *) |
303 | - echo "Usage: hm9000_listener_ctl {start|stop}" |
304 | - |
305 | - ;; |
306 | - |
307 | -esac |
308 | |
309 | === removed file 'files/hm9000_metrics_server_ctl' |
310 | --- files/hm9000_metrics_server_ctl 2014-05-14 16:40:09 +0000 |
311 | +++ files/hm9000_metrics_server_ctl 1970-01-01 00:00:00 +0000 |
312 | @@ -1,40 +0,0 @@ |
313 | -#!/bin/bash |
314 | - |
315 | -RUN_DIR=/var/vcap/sys/run/hm9000 |
316 | -LOG_DIR=/var/vcap/sys/log/hm9000 |
317 | -PIDFILE=$RUN_DIR/hm9000_metrics_server.pid |
318 | - |
319 | -source /var/vcap/packages/common/utils.sh |
320 | - |
321 | -case $1 in |
322 | - |
323 | - start) |
324 | - pid_guard $PIDFILE "hm9000_metrics_server" |
325 | - |
326 | - mkdir -p $RUN_DIR |
327 | - mkdir -p $LOG_DIR |
328 | - |
329 | - chown -R vcap:vcap $RUN_DIR |
330 | - chown -R vcap:vcap $LOG_DIR |
331 | - |
332 | - echo $$ > $PIDFILE |
333 | - |
334 | - exec chpst -u vcap:vcap /var/vcap/packages/hm9000/hm9000 \ |
335 | - serve_metrics \ |
336 | - --config=/var/vcap/jobs/hm9000/config/hm9000.json \ |
337 | - 1>>$LOG_DIR/hm9000_metrics_server.stdout.log \ |
338 | - 2>>$LOG_DIR/hm9000_metrics_server.stderr.log |
339 | - |
340 | - ;; |
341 | - |
342 | - stop) |
343 | - kill_and_wait $PIDFILE |
344 | - |
345 | - ;; |
346 | - |
347 | - *) |
348 | - echo "Usage: hm9000_metrics_server_ctl {start|stop}" |
349 | - |
350 | - ;; |
351 | - |
352 | -esac |
353 | |
354 | === removed file 'files/hm9000_sender_ctl' |
355 | --- files/hm9000_sender_ctl 2014-05-14 16:40:09 +0000 |
356 | +++ files/hm9000_sender_ctl 1970-01-01 00:00:00 +0000 |
357 | @@ -1,41 +0,0 @@ |
358 | -#!/bin/bash |
359 | - |
360 | -RUN_DIR=/var/vcap/sys/run/hm9000 |
361 | -LOG_DIR=/var/vcap/sys/log/hm9000 |
362 | -PIDFILE=$RUN_DIR/hm9000_sender.pid |
363 | - |
364 | -source /var/vcap/packages/common/utils.sh |
365 | - |
366 | -case $1 in |
367 | - |
368 | - start) |
369 | - pid_guard $PIDFILE "hm9000_sender" |
370 | - |
371 | - mkdir -p $RUN_DIR |
372 | - mkdir -p $LOG_DIR |
373 | - |
374 | - chown -R vcap:vcap $RUN_DIR |
375 | - chown -R vcap:vcap $LOG_DIR |
376 | - |
377 | - echo $$ > $PIDFILE |
378 | - |
379 | - exec chpst -u vcap:vcap /var/vcap/packages/hm9000/hm9000 \ |
380 | - send \ |
381 | - --config=/var/vcap/jobs/hm9000/config/hm9000.json \ |
382 | - --poll \ |
383 | - 1>>$LOG_DIR/hm9000_sender.stdout.log \ |
384 | - 2>>$LOG_DIR/hm9000_sender.stderr.log |
385 | - |
386 | - ;; |
387 | - |
388 | - stop) |
389 | - kill_and_wait $PIDFILE |
390 | - |
391 | - ;; |
392 | - |
393 | - *) |
394 | - echo "Usage: hm9000_sender_ctl {start|stop}" |
395 | - |
396 | - ;; |
397 | - |
398 | -esac |
399 | |
400 | === removed file 'files/hm9000_shredder_ctl' |
401 | --- files/hm9000_shredder_ctl 2014-05-14 16:40:09 +0000 |
402 | +++ files/hm9000_shredder_ctl 1970-01-01 00:00:00 +0000 |
403 | @@ -1,41 +0,0 @@ |
404 | -#!/bin/bash |
405 | - |
406 | -RUN_DIR=/var/vcap/sys/run/hm9000 |
407 | -LOG_DIR=/var/vcap/sys/log/hm9000 |
408 | -PIDFILE=$RUN_DIR/hm9000_shredder.pid |
409 | - |
410 | -source /var/vcap/packages/common/utils.sh |
411 | - |
412 | -case $1 in |
413 | - |
414 | - start) |
415 | - pid_guard $PIDFILE "hm9000_shredder" |
416 | - |
417 | - mkdir -p $RUN_DIR |
418 | - mkdir -p $LOG_DIR |
419 | - |
420 | - chown -R vcap:vcap $RUN_DIR |
421 | - chown -R vcap:vcap $LOG_DIR |
422 | - |
423 | - echo $$ > $PIDFILE |
424 | - |
425 | - exec chpst -u vcap:vcap /var/vcap/packages/hm9000/hm9000 \ |
426 | - shred \ |
427 | - --config=/var/vcap/jobs/hm9000/config/hm9000.json \ |
428 | - --poll \ |
429 | - 1>>$LOG_DIR/hm9000_shredder.stdout.log \ |
430 | - 2>>$LOG_DIR/hm9000_shredder.stderr.log |
431 | - |
432 | - ;; |
433 | - |
434 | - stop) |
435 | - kill_and_wait $PIDFILE |
436 | - |
437 | - ;; |
438 | - |
439 | - *) |
440 | - echo "Usage: hm9000_shredder_ctl {start|stop}" |
441 | - |
442 | - ;; |
443 | - |
444 | -esac |
445 | |
446 | === removed file 'files/syslog_forwarder.conf.erb' |
447 | --- files/syslog_forwarder.conf.erb 2014-05-14 16:40:09 +0000 |
448 | +++ files/syslog_forwarder.conf.erb 1970-01-01 00:00:00 +0000 |
449 | @@ -1,65 +0,0 @@ |
450 | -<% if_p("syslog_aggregator.address", "syslog_aggregator.port", "syslog_aggregator.transport") do |address, port, transport| %> |
451 | -$ModLoad imuxsock # local message reception (rsyslog uses a datagram socket) |
452 | -$MaxMessageSize 4k # default is 2k |
453 | -$WorkDirectory /var/vcap/sys/rsyslog/buffered # where messages should be buffered on disk |
454 | - |
455 | -# Forward vcap messages to the aggregator |
456 | -# |
457 | -$ActionResumeRetryCount -1 # Try until the server becomes available |
458 | -$ActionQueueType LinkedList # Allocate on-demand |
459 | -$ActionQueueFileName agg_backlog # Spill to disk if queue is full |
460 | -$ActionQueueMaxDiskSpace 32m # Max size for disk queue |
461 | -$ActionQueueLowWaterMark 2000 # Num messages. Assuming avg size of 512B, this is 1MiB. |
462 | -$ActionQueueHighWaterMark 8000 # Num messages. Assuming avg size of 512B, this is 4MiB. (If this is reached, messages will spill to disk until the low watermark is reached). |
463 | -$ActionQueueTimeoutEnqueue 0 # Discard messages if the queue + disk is full |
464 | -$ActionQueueSaveOnShutdown on # Save in-memory data to disk if rsyslog shuts down |
465 | - |
466 | -<% ip = spec.networks.send(properties.networks.apps).ip %> |
467 | -template(name="CfLogTemplate" type="list") { |
468 | - constant(value="<") |
469 | - property(name="pri") |
470 | - constant(value=">") |
471 | - property(name="timestamp" dateFormat="rfc3339") |
472 | - constant(value=" <%= ip.strip %> ") |
473 | - property(name="programname") |
474 | - constant(value=" [job=") |
475 | - property(name="programname") |
476 | - constant(value=" index=<%= spec.index.to_i %>] ") |
477 | - property(name="msg") |
478 | -} |
479 | - |
480 | -<% if transport == "relp" %> |
481 | -$ModLoad omrelp |
482 | -:programname, startswith, "vcap." :omrelp:<%= address %>:<%= port %>;CfLogTemplate |
483 | -<% elsif transport == "udp" %> |
484 | -:programname, startswith, "vcap." @<%= address %>:<%= port %>;CfLogTemplate |
485 | -<% elsif transport == "tcp" %> |
486 | -:programname, startswith, "vcap." @@<%= address %>:<%= port %>;CfLogTemplate |
487 | -<% else %> |
488 | -#only RELP, UDP, and TCP are supported |
489 | -<% end %> |
490 | - |
491 | -# Log vcap messages locally, too |
492 | -#$template VcapComponentLogFile, "/var/log/%programname:6:$%/%programname:6:$%.log" |
493 | -#$template VcapComponentLogFormat, "%timegenerated% %syslogseverity-text% -- %msg%\n" |
494 | -#:programname, startswith, "vcap." -?VcapComponentLogFile;VcapComponentLogFormat |
495 | - |
496 | -# Prevent them from reaching anywhere else |
497 | -:programname, startswith, "vcap." ~ |
498 | - |
499 | -<% if properties.syslog_aggregator.all %> |
500 | - <% if transport == "relp" %> |
501 | -*.* :omrelp:<%= address %>:<%= port %> |
502 | - <% elsif transport == "udp" %> |
503 | -*.* @<%= address %>:<%= port %> |
504 | - <% elsif transport == "tcp" %> |
505 | -*.* @@<%= address %>:<%= port %> |
506 | - <% else %> |
507 | -#only RELP, UDP, and TCP are supported |
508 | - <% end %> |
509 | -<% end %> |
510 | - |
511 | -<% end.else do %> |
512 | -# Prevent them from reaching anywhere else |
513 | -:programname, startswith, "vcap." ~ |
514 | -<% end %> |
515 | |
516 | === added file 'hooks/cc-relation-changed' |
517 | --- hooks/cc-relation-changed 1970-01-01 00:00:00 +0000 |
518 | +++ hooks/cc-relation-changed 2014-06-05 18:17:30 +0000 |
519 | @@ -0,0 +1,5 @@ |
520 | +#!/usr/bin/env python |
521 | +from charmhelpers.core import services |
522 | +import config |
523 | +manager = services.ServiceManager(config.SERVICES) |
524 | +manager.manage() |
525 | |
526 | === modified file 'hooks/charmhelpers/contrib/cloudfoundry/common.py' |
527 | --- hooks/charmhelpers/contrib/cloudfoundry/common.py 2014-05-14 16:40:09 +0000 |
528 | +++ hooks/charmhelpers/contrib/cloudfoundry/common.py 2014-06-05 18:17:30 +0000 |
529 | @@ -1,11 +1,3 @@ |
530 | -import sys |
531 | -import os |
532 | -import pwd |
533 | -import grp |
534 | -import subprocess |
535 | - |
536 | -from contextlib import contextmanager |
537 | -from charmhelpers.core.hookenv import log, ERROR, DEBUG |
538 | from charmhelpers.core import host |
539 | |
540 | from charmhelpers.fetch import ( |
541 | @@ -13,55 +5,6 @@ |
542 | ) |
543 | |
544 | |
545 | -def run(command, exit_on_error=True, quiet=False): |
546 | - '''Run a command and return the output.''' |
547 | - if not quiet: |
548 | - log("Running {!r}".format(command), DEBUG) |
549 | - p = subprocess.Popen( |
550 | - command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, |
551 | - shell=isinstance(command, basestring)) |
552 | - p.stdin.close() |
553 | - lines = [] |
554 | - for line in p.stdout: |
555 | - if line: |
556 | - if not quiet: |
557 | - print line |
558 | - lines.append(line) |
559 | - elif p.poll() is not None: |
560 | - break |
561 | - |
562 | - p.wait() |
563 | - |
564 | - if p.returncode == 0: |
565 | - return '\n'.join(lines) |
566 | - |
567 | - if p.returncode != 0 and exit_on_error: |
568 | - log("ERROR: {}".format(p.returncode), ERROR) |
569 | - sys.exit(p.returncode) |
570 | - |
571 | - raise subprocess.CalledProcessError( |
572 | - p.returncode, command, '\n'.join(lines)) |
573 | - |
574 | - |
575 | -def chownr(path, owner, group): |
576 | - uid = pwd.getpwnam(owner).pw_uid |
577 | - gid = grp.getgrnam(group).gr_gid |
578 | - for root, dirs, files in os.walk(path): |
579 | - for momo in dirs: |
580 | - os.chown(os.path.join(root, momo), uid, gid) |
581 | - for momo in files: |
582 | - os.chown(os.path.join(root, momo), uid, gid) |
583 | - |
584 | - |
585 | -@contextmanager |
586 | -def chdir(d): |
587 | - cur = os.getcwd() |
588 | - try: |
589 | - yield os.chdir(d) |
590 | - finally: |
591 | - os.chdir(cur) |
592 | - |
593 | - |
594 | def prepare_cloudfoundry_environment(config_data, packages): |
595 | add_source(config_data['source'], config_data.get('key')) |
596 | apt_update(fatal=True) |
597 | |
598 | === removed file 'hooks/charmhelpers/contrib/cloudfoundry/config_helper.py' |
599 | --- hooks/charmhelpers/contrib/cloudfoundry/config_helper.py 2014-05-14 16:40:09 +0000 |
600 | +++ hooks/charmhelpers/contrib/cloudfoundry/config_helper.py 1970-01-01 00:00:00 +0000 |
601 | @@ -1,11 +0,0 @@ |
602 | -import jinja2 |
603 | - |
604 | -TEMPLATES_DIR = 'templates' |
605 | - |
606 | -def render_template(template_name, context, template_dir=TEMPLATES_DIR): |
607 | - templates = jinja2.Environment( |
608 | - loader=jinja2.FileSystemLoader(template_dir)) |
609 | - template = templates.get_template(template_name) |
610 | - return template.render(context) |
611 | - |
612 | - |
613 | |
614 | === modified file 'hooks/charmhelpers/contrib/cloudfoundry/contexts.py' |
615 | --- hooks/charmhelpers/contrib/cloudfoundry/contexts.py 2014-05-14 16:40:09 +0000 |
616 | +++ hooks/charmhelpers/contrib/cloudfoundry/contexts.py 2014-06-05 18:17:30 +0000 |
617 | @@ -1,70 +1,74 @@ |
618 | import os |
619 | import yaml |
620 | |
621 | -from charmhelpers.core import hookenv |
622 | -from charmhelpers.contrib.openstack.context import OSContextGenerator |
623 | - |
624 | - |
625 | -class RelationContext(OSContextGenerator): |
626 | - def __call__(self): |
627 | - if not hookenv.relation_ids(self.interface): |
628 | - return {} |
629 | - |
630 | - ctx = {} |
631 | - for rid in hookenv.relation_ids(self.interface): |
632 | - for unit in hookenv.related_units(rid): |
633 | - reldata = hookenv.relation_get(rid=rid, unit=unit) |
634 | - required = set(self.required_keys) |
635 | - if set(reldata.keys()).issuperset(required): |
636 | - ns = ctx.setdefault(self.interface, {}) |
637 | - for k, v in reldata.items(): |
638 | - ns[k] = v |
639 | - return ctx |
640 | - |
641 | - return {} |
642 | - |
643 | - |
644 | -class ConfigContext(OSContextGenerator): |
645 | - def __call__(self): |
646 | - return hookenv.config() |
647 | - |
648 | - |
649 | -# Stores `config_data` hash into yaml file with `file_name` as a name |
650 | -# if `file_name` already exists, then it loads data from `file_name`. |
651 | -class StoredContext(OSContextGenerator): |
652 | +from charmhelpers.core.services import RelationContext |
653 | + |
654 | + |
655 | +class StoredContext(dict): |
656 | + """ |
657 | + A data context that always returns the data that it was first created with. |
658 | + """ |
659 | def __init__(self, file_name, config_data): |
660 | - self.data = config_data |
661 | + """ |
662 | + If the file exists, populate `self` with the data from the file. |
663 | + Otherwise, populate with the given data and persist it to the file. |
664 | + """ |
665 | if os.path.exists(file_name): |
666 | - with open(file_name, 'r') as file_stream: |
667 | - self.data = yaml.load(file_stream) |
668 | - if not self.data: |
669 | - raise OSError("%s is empty" % file_name) |
670 | + self.update(self.read_context(file_name)) |
671 | else: |
672 | - with open(file_name, 'w') as file_stream: |
673 | - yaml.dump(config_data, file_stream) |
674 | - self.data = config_data |
675 | - |
676 | - def __call__(self): |
677 | - return self.data |
678 | - |
679 | - |
680 | -class StaticContext(OSContextGenerator): |
681 | - def __init__(self, data): |
682 | - self.data = data |
683 | - |
684 | - def __call__(self): |
685 | - return self.data |
686 | - |
687 | - |
688 | -class NatsContext(RelationContext): |
689 | + self.store_context(file_name, config_data) |
690 | + self.update(config_data) |
691 | + |
692 | + def store_context(self, file_name, config_data): |
693 | + with open(file_name, 'w') as file_stream: |
694 | + yaml.dump(config_data, file_stream) |
695 | + |
696 | + def read_context(self, file_name): |
697 | + with open(file_name, 'r') as file_stream: |
698 | + data = yaml.load(file_stream) |
699 | + if not data: |
700 | + raise OSError("%s is empty" % file_name) |
701 | + return data |
702 | + |
703 | + |
704 | +class NatsRelation(RelationContext): |
705 | interface = 'nats' |
706 | required_keys = ['nats_port', 'nats_address', 'nats_user', 'nats_password'] |
707 | |
708 | |
709 | -class RouterContext(RelationContext): |
710 | +class MysqlRelation(RelationContext): |
711 | + interface = 'db' |
712 | + required_keys = ['user', 'password', 'host', 'database'] |
713 | + dsn_template = "mysql2://{user}:{password}@{host}:{port}/{database}" |
714 | + |
715 | + def get_data(self): |
716 | + RelationContext.get_data(self) |
717 | + if self.is_ready(): |
718 | + if 'port' not in self['db']: |
719 | + self['db']['port'] = '3306' |
720 | + self['db']['dsn'] = self.dsn_template.format(**self['db']) |
721 | + |
722 | + |
723 | +class RouterRelation(RelationContext): |
724 | interface = 'router' |
725 | required_keys = ['domain'] |
726 | |
727 | -class LogRouterContext(RelationContext): |
728 | + |
729 | +class LogRouterRelation(RelationContext): |
730 | interface = 'logrouter' |
731 | required_keys = ['shared-secret', 'logrouter-address'] |
732 | + |
733 | + |
734 | +class LoggregatorRelation(RelationContext): |
735 | + interface = 'loggregator' |
736 | + required_keys = ['shared_secret', 'loggregator_address'] |
737 | + |
738 | + |
739 | +class EtcdRelation(RelationContext): |
740 | + interface = 'etcd' |
741 | + required_keys = ['hostname', 'port'] |
742 | + |
743 | + |
744 | +class CloudControllerRelation(RelationContext): |
745 | + interface = 'cc' |
746 | + required_keys = ['hostname', 'port', 'user', 'password'] |
747 | |
748 | === removed file 'hooks/charmhelpers/contrib/cloudfoundry/install.py' |
749 | --- hooks/charmhelpers/contrib/cloudfoundry/install.py 2014-05-14 16:40:09 +0000 |
750 | +++ hooks/charmhelpers/contrib/cloudfoundry/install.py 1970-01-01 00:00:00 +0000 |
751 | @@ -1,35 +0,0 @@ |
752 | -import os |
753 | -import subprocess |
754 | - |
755 | - |
756 | -def install(src, dest, fileprops=None, sudo=False): |
757 | - """Install a file from src to dest. Dest can be a complete filename |
758 | - or a target directory. fileprops is a dict with 'owner' (username of owner) |
759 | - and mode (octal string) as keys, the defaults are 'ubuntu' and '400' |
760 | - |
761 | - When owner is passed or when access requires it sudo can be set to True and |
762 | - sudo will be used to install the file. |
763 | - """ |
764 | - if not fileprops: |
765 | - fileprops = {} |
766 | - mode = fileprops.get('mode', '400') |
767 | - owner = fileprops.get('owner') |
768 | - cmd = ['install'] |
769 | - |
770 | - if not os.path.exists(src): |
771 | - raise OSError(src) |
772 | - |
773 | - if not os.path.exists(dest) and not os.path.exists(os.path.dirname(dest)): |
774 | - # create all but the last component as path |
775 | - cmd.append('-D') |
776 | - |
777 | - if mode: |
778 | - cmd.extend(['-m', mode]) |
779 | - |
780 | - if owner: |
781 | - cmd.extend(['-o', owner]) |
782 | - |
783 | - if sudo: |
784 | - cmd.insert(0, 'sudo') |
785 | - cmd.extend([src, dest]) |
786 | - subprocess.check_call(cmd) |
787 | |
788 | === removed file 'hooks/charmhelpers/contrib/cloudfoundry/services.py' |
789 | --- hooks/charmhelpers/contrib/cloudfoundry/services.py 2014-05-14 16:40:09 +0000 |
790 | +++ hooks/charmhelpers/contrib/cloudfoundry/services.py 1970-01-01 00:00:00 +0000 |
791 | @@ -1,118 +0,0 @@ |
792 | -import os |
793 | -import tempfile |
794 | -from charmhelpers.core import host |
795 | - |
796 | -from charmhelpers.contrib.cloudfoundry.install import install |
797 | -from charmhelpers.core.hookenv import log |
798 | -from jinja2 import Environment, FileSystemLoader |
799 | - |
800 | -SERVICE_CONFIG = [] |
801 | -TEMPLATE_LOADER = None |
802 | - |
803 | - |
804 | -def render_template(template_name, context): |
805 | - """Render template to a tempfile returning the name""" |
806 | - _, fn = tempfile.mkstemp() |
807 | - template = load_template(template_name) |
808 | - output = template.render(context) |
809 | - with open(fn, "w") as fp: |
810 | - fp.write(output) |
811 | - return fn |
812 | - |
813 | - |
814 | -def collect_contexts(context_providers): |
815 | - ctx = {} |
816 | - for provider in context_providers: |
817 | - c = provider() |
818 | - if not c: |
819 | - return {} |
820 | - ctx.update(c) |
821 | - return ctx |
822 | - |
823 | - |
824 | -def load_template(name): |
825 | - return TEMPLATE_LOADER.get_template(name) |
826 | - |
827 | - |
828 | -def configure_templates(template_dir): |
829 | - global TEMPLATE_LOADER |
830 | - TEMPLATE_LOADER = Environment(loader=FileSystemLoader(template_dir)) |
831 | - |
832 | - |
833 | -def register(service_configs, template_dir): |
834 | - """Register a list of service configs. |
835 | - |
836 | - Service Configs are dicts in the following formats: |
837 | - |
838 | - { |
839 | - "service": <service name>, |
840 | - "templates": [ { |
841 | - 'target': <render target of template>, |
842 | - 'source': <optional name of template in passed in template_dir> |
843 | - 'file_properties': <optional dict taking owner and octal mode> |
844 | - 'contexts': [ context generators, see contexts.py ] |
845 | - } |
846 | - ] } |
847 | - |
848 | - If 'source' is not provided for a template the template_dir will |
849 | - be consulted for ``basename(target).j2``. |
850 | - """ |
851 | - global SERVICE_CONFIG |
852 | - if template_dir: |
853 | - configure_templates(template_dir) |
854 | - SERVICE_CONFIG.extend(service_configs) |
855 | - |
856 | - |
857 | -def reset(): |
858 | - global SERVICE_CONFIG |
859 | - SERVICE_CONFIG = [] |
860 | - |
861 | - |
862 | -# def service_context(name): |
863 | -# contexts = collect_contexts(template['contexts']) |
864 | - |
865 | -def reconfigure_service(service_name, restart=True): |
866 | - global SERVICE_CONFIG |
867 | - service = None |
868 | - for service in SERVICE_CONFIG: |
869 | - if service['service'] == service_name: |
870 | - break |
871 | - if not service or service['service'] != service_name: |
872 | - raise KeyError('Service not registered: %s' % service_name) |
873 | - |
874 | - templates = service['templates'] |
875 | - for template in templates: |
876 | - contexts = collect_contexts(template['contexts']) |
877 | - if contexts: |
878 | - template_target = template['target'] |
879 | - default_template = "%s.j2" % os.path.basename(template_target) |
880 | - template_name = template.get('source', default_template) |
881 | - output_file = render_template(template_name, contexts) |
882 | - file_properties = template.get('file_properties') |
883 | - install(output_file, template_target, file_properties) |
884 | - os.unlink(output_file) |
885 | - else: |
886 | - restart = False |
887 | - |
888 | - if restart: |
889 | - host.service_restart(service_name) |
890 | - |
891 | - |
892 | -def stop_services(): |
893 | - global SERVICE_CONFIG |
894 | - for service in SERVICE_CONFIG: |
895 | - if host.service_running(service['service']): |
896 | - host.service_stop(service['service']) |
897 | - |
898 | - |
899 | -def get_service(service_name): |
900 | - global SERVICE_CONFIG |
901 | - for service in SERVICE_CONFIG: |
902 | - if service_name == service['service']: |
903 | - return service |
904 | - return None |
905 | - |
906 | - |
907 | -def reconfigure_services(restart=True): |
908 | - for service in SERVICE_CONFIG: |
909 | - reconfigure_service(service['service'], restart=restart) |
910 | |
911 | === removed file 'hooks/charmhelpers/contrib/cloudfoundry/upstart_helper.py' |
912 | --- hooks/charmhelpers/contrib/cloudfoundry/upstart_helper.py 2014-05-14 16:40:09 +0000 |
913 | +++ hooks/charmhelpers/contrib/cloudfoundry/upstart_helper.py 1970-01-01 00:00:00 +0000 |
914 | @@ -1,14 +0,0 @@ |
915 | -import os |
916 | -import glob |
917 | -from charmhelpers.core import hookenv |
918 | -from charmhelpers.core.hookenv import charm_dir |
919 | -from charmhelpers.contrib.cloudfoundry.install import install |
920 | - |
921 | - |
922 | -def install_upstart_scripts(dirname=os.path.join(hookenv.charm_dir(), |
923 | - 'files/upstart'), |
924 | - pattern='*.conf'): |
925 | - for script in glob.glob("%s/%s" % (dirname, pattern)): |
926 | - filename = os.path.join(dirname, script) |
927 | - hookenv.log('Installing upstart job:' + filename, hookenv.DEBUG) |
928 | - install(filename, '/etc/init') |
929 | |
930 | === modified file 'hooks/charmhelpers/contrib/openstack/context.py' |
931 | --- hooks/charmhelpers/contrib/openstack/context.py 2014-05-14 16:40:09 +0000 |
932 | +++ hooks/charmhelpers/contrib/openstack/context.py 2014-06-05 18:17:30 +0000 |
933 | @@ -570,7 +570,7 @@ |
934 | |
935 | if self.plugin == 'ovs': |
936 | ctxt.update(self.ovs_ctxt()) |
937 | - elif self.plugin == 'nvp': |
938 | + elif self.plugin in ['nvp', 'nsx']: |
939 | ctxt.update(self.nvp_ctxt()) |
940 | |
941 | alchemy_flags = config('neutron-alchemy-flags') |
942 | |
943 | === modified file 'hooks/charmhelpers/contrib/openstack/neutron.py' |
944 | --- hooks/charmhelpers/contrib/openstack/neutron.py 2014-05-14 16:40:09 +0000 |
945 | +++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-06-05 18:17:30 +0000 |
946 | @@ -114,14 +114,30 @@ |
947 | 'server_packages': ['neutron-server', |
948 | 'neutron-plugin-nicira'], |
949 | 'server_services': ['neutron-server'] |
950 | + }, |
951 | + 'nsx': { |
952 | + 'config': '/etc/neutron/plugins/vmware/nsx.ini', |
953 | + 'driver': 'vmware', |
954 | + 'contexts': [ |
955 | + context.SharedDBContext(user=config('neutron-database-user'), |
956 | + database=config('neutron-database'), |
957 | + relation_prefix='neutron', |
958 | + ssl_dir=NEUTRON_CONF_DIR)], |
959 | + 'services': [], |
960 | + 'packages': [], |
961 | + 'server_packages': ['neutron-server', |
962 | + 'neutron-plugin-vmware'], |
963 | + 'server_services': ['neutron-server'] |
964 | } |
965 | } |
966 | - # NOTE: patch in ml2 plugin for icehouse onwards |
967 | if release >= 'icehouse': |
968 | + # NOTE: patch in ml2 plugin for icehouse onwards |
969 | plugins['ovs']['config'] = '/etc/neutron/plugins/ml2/ml2_conf.ini' |
970 | plugins['ovs']['driver'] = 'neutron.plugins.ml2.plugin.Ml2Plugin' |
971 | plugins['ovs']['server_packages'] = ['neutron-server', |
972 | 'neutron-plugin-ml2'] |
973 | + # NOTE: patch in vmware renames nvp->nsx for icehouse onwards |
974 | + plugins['nvp'] = plugins['nsx'] |
975 | return plugins |
976 | |
977 | |
978 | |
979 | === modified file 'hooks/charmhelpers/contrib/openstack/utils.py' |
980 | --- hooks/charmhelpers/contrib/openstack/utils.py 2014-05-14 16:40:09 +0000 |
981 | +++ hooks/charmhelpers/contrib/openstack/utils.py 2014-06-05 18:17:30 +0000 |
982 | @@ -131,6 +131,11 @@ |
983 | def get_os_codename_package(package, fatal=True): |
984 | '''Derive OpenStack release codename from an installed package.''' |
985 | apt.init() |
986 | + |
987 | + # Tell apt to build an in-memory cache to prevent race conditions (if |
988 | + # another process is already building the cache). |
989 | + apt.config.set("Dir::Cache::pkgcache", "") |
990 | + |
991 | cache = apt.Cache() |
992 | |
993 | try: |
994 | @@ -183,7 +188,7 @@ |
995 | if cname == codename: |
996 | return version |
997 | #e = "Could not determine OpenStack version for package: %s" % pkg |
998 | - #error_out(e) |
999 | + # error_out(e) |
1000 | |
1001 | |
1002 | os_rel = None |
1003 | |
1004 | === modified file 'hooks/charmhelpers/contrib/storage/linux/lvm.py' |
1005 | --- hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-05-14 16:40:09 +0000 |
1006 | +++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-06-05 18:17:30 +0000 |
1007 | @@ -62,7 +62,7 @@ |
1008 | pvd = check_output(['pvdisplay', block_device]).splitlines() |
1009 | for l in pvd: |
1010 | if l.strip().startswith('VG Name'): |
1011 | - vg = ' '.join(l.split()).split(' ').pop() |
1012 | + vg = ' '.join(l.strip().split()[2:]) |
1013 | return vg |
1014 | |
1015 | |
1016 | |
1017 | === modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py' |
1018 | --- hooks/charmhelpers/contrib/storage/linux/utils.py 2014-05-14 16:40:09 +0000 |
1019 | +++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-06-05 18:17:30 +0000 |
1020 | @@ -1,4 +1,5 @@ |
1021 | -from os import stat |
1022 | +import os |
1023 | +import re |
1024 | from stat import S_ISBLK |
1025 | |
1026 | from subprocess import ( |
1027 | @@ -14,7 +15,9 @@ |
1028 | |
1029 | :returns: boolean: True if path is a block device, False if not. |
1030 | ''' |
1031 | - return S_ISBLK(stat(path).st_mode) |
1032 | + if not os.path.exists(path): |
1033 | + return False |
1034 | + return S_ISBLK(os.stat(path).st_mode) |
1035 | |
1036 | |
1037 | def zap_disk(block_device): |
1038 | @@ -29,7 +32,18 @@ |
1039 | '--clear', block_device]) |
1040 | dev_end = check_output(['blockdev', '--getsz', block_device]) |
1041 | gpt_end = int(dev_end.split()[0]) - 100 |
1042 | - check_call(['dd', 'if=/dev/zero', 'of=%s'%(block_device), |
1043 | + check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device), |
1044 | 'bs=1M', 'count=1']) |
1045 | - check_call(['dd', 'if=/dev/zero', 'of=%s'%(block_device), |
1046 | - 'bs=512', 'count=100', 'seek=%s'%(gpt_end)]) |
1047 | + check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device), |
1048 | + 'bs=512', 'count=100', 'seek=%s' % (gpt_end)]) |
1049 | + |
1050 | +def is_device_mounted(device): |
1051 | + '''Given a device path, return True if that device is mounted, and False |
1052 | + if it isn't. |
1053 | + |
1054 | + :param device: str: Full path of the device to check. |
1055 | + :returns: boolean: True if the path represents a mounted device, False if |
1056 | + it doesn't. |
1057 | + ''' |
1058 | + out = check_output(['mount']) |
1059 | + return bool(re.search(device + r"[0-9]+\b", out)) |
1060 | |
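The new `is_device_mounted()` helper greps `mount` output for the device path followed by a partition number. This sketch runs the same regex against canned output instead of shelling out to mount(8); the sample lines are illustrative:

```python
import re

# Canned `mount` output standing in for check_output(['mount']).
sample_mount_output = """\
/dev/vda1 on / type ext4 (rw)
proc on /proc type proc (rw)
"""

def is_device_mounted(device, out=sample_mount_output):
    # Same test as the diff: the device path followed by a digit
    # (i.e. a mounted partition of that device).
    return bool(re.search(device + r"[0-9]+\b", out))

print(is_device_mounted('/dev/vda'))  # True: /dev/vda1 is mounted
print(is_device_mounted('/dev/vdb'))  # False
```

Note the pattern matches mounted *partitions* of the device, so a whole-disk mount like `/dev/vdb on /mnt` would not be detected.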
1061 | === modified file 'hooks/charmhelpers/core/hookenv.py' |
1062 | --- hooks/charmhelpers/core/hookenv.py 2014-05-14 16:40:09 +0000 |
1063 | +++ hooks/charmhelpers/core/hookenv.py 2014-06-05 18:17:30 +0000 |
1064 | @@ -155,6 +155,100 @@ |
1065 | return os.path.basename(sys.argv[0]) |
1066 | |
1067 | |
1068 | +class Config(dict): |
1069 | + """A Juju charm config dictionary that can write itself to |
1070 | + disk (as json) and track which values have changed since |
1071 | + the previous hook invocation. |
1072 | + |
1073 | + Do not instantiate this object directly - instead call |
1074 | + ``hookenv.config()`` |
1075 | + |
1076 | + Example usage:: |
1077 | + |
1078 | + >>> # inside a hook |
1079 | + >>> from charmhelpers.core import hookenv |
1080 | + >>> config = hookenv.config() |
1081 | + >>> config['foo'] |
1082 | + 'bar' |
1083 | + >>> config['mykey'] = 'myval' |
1084 | + >>> config.save() |
1085 | + |
1086 | + |
1087 | + >>> # user runs `juju set mycharm foo=baz` |
1088 | + >>> # now we're inside subsequent config-changed hook |
1089 | + >>> config = hookenv.config() |
1090 | + >>> config['foo'] |
1091 | + 'baz' |
1092 | + >>> # test to see if this val has changed since last hook |
1093 | + >>> config.changed('foo') |
1094 | + True |
1095 | + >>> # what was the previous value? |
1096 | + >>> config.previous('foo') |
1097 | + 'bar' |
1098 | + >>> # keys/values that we add are preserved across hooks |
1099 | + >>> config['mykey'] |
1100 | + 'myval' |
1101 | + >>> # don't forget to save at the end of hook! |
1102 | + >>> config.save() |
1103 | + |
1104 | + """ |
1105 | + CONFIG_FILE_NAME = '.juju-persistent-config' |
1106 | + |
1107 | + def __init__(self, *args, **kw): |
1108 | + super(Config, self).__init__(*args, **kw) |
1109 | + self._prev_dict = None |
1110 | + self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME) |
1111 | + if os.path.exists(self.path): |
1112 | + self.load_previous() |
1113 | + |
1114 | + def load_previous(self, path=None): |
1115 | + """Load previous copy of config from disk so that current values |
1116 | + can be compared to previous values. |
1117 | + |
1118 | + :param path: |
1119 | + |
1120 | + File path from which to load the previous config. If `None`, |
1121 | + config is loaded from the default location. If `path` is |
1122 | + specified, subsequent `save()` calls will write to the same |
1123 | + path. |
1124 | + |
1125 | + """ |
1126 | + self.path = path or self.path |
1127 | + with open(self.path) as f: |
1128 | + self._prev_dict = json.load(f) |
1129 | + |
1130 | + def changed(self, key): |
1131 | + """Return true if the value for this key has changed since |
1132 | + the last save. |
1133 | + |
1134 | + """ |
1135 | + if self._prev_dict is None: |
1136 | + return True |
1137 | + return self.previous(key) != self.get(key) |
1138 | + |
1139 | + def previous(self, key): |
1140 | + """Return previous value for this key, or None if there |
1141 | + is no "previous" value. |
1142 | + |
1143 | + """ |
1144 | + if self._prev_dict: |
1145 | + return self._prev_dict.get(key) |
1146 | + return None |
1147 | + |
1148 | + def save(self): |
1149 | + """Save this config to disk. |
1150 | + |
1151 | + Preserves items in _prev_dict that do not exist in self. |
1152 | + |
1153 | + """ |
1154 | + if self._prev_dict: |
1155 | + for k, v in self._prev_dict.iteritems(): |
1156 | + if k not in self: |
1157 | + self[k] = v |
1158 | + with open(self.path, 'w') as f: |
1159 | + json.dump(self, f) |
1160 | + |
1161 | + |
1162 | @cached |
1163 | def config(scope=None): |
1164 | """Juju charm configuration""" |
1165 | @@ -163,7 +257,10 @@ |
1166 | config_cmd_line.append(scope) |
1167 | config_cmd_line.append('--format=json') |
1168 | try: |
1169 | - return json.loads(subprocess.check_output(config_cmd_line)) |
1170 | + config_data = json.loads(subprocess.check_output(config_cmd_line)) |
1171 | + if scope is not None: |
1172 | + return config_data |
1173 | + return Config(config_data) |
1174 | except ValueError: |
1175 | return None |
1176 | |
1177 | |
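The `Config` class added to hookenv persists the previous hook invocation's values as JSON so `changed()` and `previous()` can compare across hooks. Outside a Juju environment the class can't be exercised directly, so this sketch models just that persistence-and-compare cycle:

```python
import json
import os
import tempfile

# Stand-in for charm_dir()/.juju-persistent-config.
path = os.path.join(tempfile.mkdtemp(), '.juju-persistent-config')

# First hook invocation: config.save() writes the current values.
config = {'foo': 'bar'}
with open(path, 'w') as f:
    json.dump(config, f)

# Later invocation, after `juju set mycharm foo=baz`: load_previous()
# reads the saved copy, and the new values come from `config-get`.
with open(path) as f:
    prev = json.load(f)
config = {'foo': 'baz'}

changed = prev.get('foo') != config.get('foo')   # Config.changed('foo')
previous = prev.get('foo')                       # Config.previous('foo')
```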
1178 | === modified file 'hooks/charmhelpers/core/host.py' |
1179 | --- hooks/charmhelpers/core/host.py 2014-05-14 16:40:09 +0000 |
1180 | +++ hooks/charmhelpers/core/host.py 2014-06-05 18:17:30 +0000 |
1181 | @@ -12,6 +12,9 @@ |
1182 | import string |
1183 | import subprocess |
1184 | import hashlib |
1185 | +import shutil |
1186 | +import apt_pkg |
1187 | +from contextlib import contextmanager |
1188 | |
1189 | from collections import OrderedDict |
1190 | |
1191 | @@ -60,6 +63,11 @@ |
1192 | return False |
1193 | |
1194 | |
1195 | +def service_available(service_name): |
1196 | + """Determine whether a system service is available""" |
1197 | + return service('status', service_name) |
1198 | + |
1199 | + |
1200 | def adduser(username, password=None, shell='/bin/bash', system_user=False): |
1201 | """Add a user to the system""" |
1202 | try: |
1203 | @@ -143,6 +151,16 @@ |
1204 | target.write(content) |
1205 | |
1206 | |
1207 | +def copy_file(src, dst, owner='root', group='root', perms=0444): |
1208 | + """Create or overwrite a file with the contents of another file""" |
1209 | + log("Writing file {} {}:{} {:o} from {}".format(dst, owner, group, perms, src)) |
1210 | + uid = pwd.getpwnam(owner).pw_uid |
1211 | + gid = grp.getgrnam(group).gr_gid |
1212 | + shutil.copyfile(src, dst) |
1213 | + os.chown(dst, uid, gid) |
1214 | + os.chmod(dst, perms) |
1215 | + |
1216 | + |
1217 | def mount(device, mountpoint, options=None, persist=False): |
1218 | """Mount a filesystem at a particular mountpoint""" |
1219 | cmd_args = ['mount'] |
1220 | @@ -295,3 +313,37 @@ |
1221 | if 'link/ether' in words: |
1222 | hwaddr = words[words.index('link/ether') + 1] |
1223 | return hwaddr |
1224 | + |
1225 | + |
1226 | +def cmp_pkgrevno(package, revno, pkgcache=None): |
1227 | + '''Compare supplied revno with the revno of the installed package |
1228 | + 1 => Installed revno is greater than supplied arg |
1229 | + 0 => Installed revno is the same as supplied arg |
1230 | + -1 => Installed revno is less than supplied arg |
1231 | + ''' |
1232 | + if not pkgcache: |
1233 | + apt_pkg.init() |
1234 | + pkgcache = apt_pkg.Cache() |
1235 | + pkg = pkgcache[package] |
1236 | + return apt_pkg.version_compare(pkg.current_ver.ver_str, revno) |
1237 | + |
1238 | + |
1239 | +@contextmanager |
1240 | +def chdir(d): |
1241 | + cur = os.getcwd() |
1242 | + try: |
1243 | + yield os.chdir(d) |
1244 | + finally: |
1245 | + os.chdir(cur) |
1246 | + |
1247 | + |
1248 | +def chownr(path, owner, group): |
1249 | + uid = pwd.getpwnam(owner).pw_uid |
1250 | + gid = grp.getgrnam(group).gr_gid |
1251 | + |
1252 | + for root, dirs, files in os.walk(path): |
1253 | + for name in dirs + files: |
1254 | + full = os.path.join(root, name) |
1255 | + broken_symlink = os.path.lexists(full) and not os.path.exists(full) |
1256 | + if not broken_symlink: |
1257 | + os.chown(full, uid, gid) |
1258 | |
1259 | === added file 'hooks/charmhelpers/core/services.py' |
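Among the host.py additions, the `chdir()` context manager is small enough to reproduce verbatim; the point of the `try/finally` is that the original working directory is restored even if the body raises:

```python
import os
from contextlib import contextmanager

# The chdir() helper added to host.py, standalone.
@contextmanager
def chdir(d):
    cur = os.getcwd()
    try:
        yield os.chdir(d)
    finally:
        os.chdir(cur)

start = os.getcwd()
with chdir('/tmp'):
    inside = os.getcwd()   # now inside /tmp
after = os.getcwd()        # restored to `start`
```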
1260 | --- hooks/charmhelpers/core/services.py 1970-01-01 00:00:00 +0000 |
1261 | +++ hooks/charmhelpers/core/services.py 2014-06-05 18:17:30 +0000 |
1262 | @@ -0,0 +1,357 @@ |
1263 | +import os |
1264 | +import sys |
1265 | +from collections import Iterable |
1266 | +from charmhelpers.core import templating |
1267 | +from charmhelpers.core import host |
1268 | +from charmhelpers.core import hookenv |
1269 | + |
1270 | + |
1271 | +class ServiceManager(object): |
1272 | + def __init__(self, services=None): |
1273 | + """ |
1274 | + Register a list of services, given their definitions. |
1275 | + |
1276 | + Traditional charm authoring is focused on implementing hooks. That is, |
1277 | + the charm author is thinking in terms of "What hook am I handling; what |
1278 | + does this hook need to do?" However, in most cases, the real question |
1279 | + should be "Do I have the information I need to configure and start this |
1280 | + piece of software and, if so, what are the steps for doing so." The |
1281 | + ServiceManager framework tries to bring the focus to the data and the |
1282 | + setup tasks, in the most declarative way possible. |
1283 | + |
1284 | + Service definitions are dicts in the following formats (all keys except |
1285 | + 'service' are optional): |
1286 | + |
1287 | + { |
1288 | + "service": <service name>, |
1289 | + "required_data": <list of required data contexts>, |
1290 | + "data_ready": <one or more callbacks>, |
1291 | + "data_lost": <one or more callbacks>, |
1292 | + "start": <one or more callbacks>, |
1293 | + "stop": <one or more callbacks>, |
1294 | + "ports": <list of ports to manage>, |
1295 | + } |
1296 | + |
1297 | + The 'required_data' list should contain dicts of required data (or |
1298 | + dependency managers that act like dicts and know how to collect the data). |
1299 | + Only when all items in the 'required_data' list are populated are the |
1300 | + 'data_ready' and 'start' callbacks executed. See `is_ready()` for more |
1301 | + information. |
1302 | + |
1303 | + The 'data_ready' value should be either a single callback, or a list of |
1304 | + callbacks, to be called when all items in 'required_data' pass `is_ready()`. |
1305 | + Each callback will be called with the service name as the only parameter. |
1306 | + After these all of the 'data_ready' callbacks are called, the 'start' |
1307 | + callbacks are fired. |
1308 | + |
1309 | + The 'data_lost' value should be either a single callback, or a list of |
1310 | + callbacks, to be called when a 'required_data' item no longer passes |
1311 | + `is_ready()`. Each callback will be called with the service name as the |
1312 | + only parameter. After all of the 'data_lost' callbacks are called, |
1313 | + the 'stop' callbacks are fired. |
1314 | + |
1315 | + The 'start' value should be either a single callback, or a list of |
1316 | + callbacks, to be called when starting the service, after the 'data_ready' |
1317 | + callbacks are complete. Each callback will be called with the service |
1318 | + name as the only parameter. This defaults to |
1319 | + `[host.service_start, services.open_ports]`. |
1320 | + |
1321 | + The 'stop' value should be either a single callback, or a list of |
1322 | + callbacks, to be called when stopping the service. If the service is |
1323 | + being stopped because it no longer has all of its 'required_data', this |
1324 | + will be called after all of the 'data_lost' callbacks are complete. |
1325 | + Each callback will be called with the service name as the only parameter. |
1326 | + This defaults to `[services.close_ports, host.service_stop]`. |
1327 | + |
1328 | + The 'ports' value should be a list of ports to manage. The default |
1329 | + 'start' handler will open the ports after the service is started, |
1330 | + and the default 'stop' handler will close the ports prior to stopping |
1331 | + the service. |
1332 | + |
1333 | + |
1334 | + Examples: |
1335 | + |
1336 | + The following registers an Upstart service called bingod that depends on |
1337 | + a mongodb relation and which runs a custom `db_migrate` function prior to |
1338 | + restarting the service, and a Runit service called spadesd. |
1339 | + |
1340 | + manager = services.ServiceManager([ |
1341 | + { |
1342 | + 'service': 'bingod', |
1343 | + 'ports': [80, 443], |
1344 | + 'required_data': [MongoRelation(), config(), {'my': 'data'}], |
1345 | + 'data_ready': [ |
1346 | + services.template(source='bingod.conf'), |
1347 | + services.template(source='bingod.ini', |
1348 | + target='/etc/bingod.ini', |
1349 | + owner='bingo', perms=0400), |
1350 | + ], |
1351 | + }, |
1352 | + { |
1353 | + 'service': 'spadesd', |
1354 | + 'data_ready': services.template(source='spadesd_run.j2', |
1355 | + target='/etc/sv/spadesd/run', |
1356 | + perms=0555), |
1357 | + 'start': runit_start, |
1358 | + 'stop': runit_stop, |
1359 | + }, |
1360 | + ]) |
1361 | + manager.manage() |
1362 | + """ |
1363 | + self.services = {} |
1364 | + for service in services or []: |
1365 | + service_name = service['service'] |
1366 | + self.services[service_name] = service |
1367 | + |
1368 | + def manage(self): |
1369 | + """ |
1370 | + Handle the current hook by doing The Right Thing with the registered services. |
1371 | + """ |
1372 | + hook_name = os.path.basename(sys.argv[0]) |
1373 | + if hook_name == 'stop': |
1374 | + self.stop_services() |
1375 | + else: |
1376 | + self.reconfigure_services() |
1377 | + |
1378 | + def reconfigure_services(self, *service_names): |
1379 | + """ |
1380 | + Update all files for one or more registered services, and, |
1381 | + if ready, optionally restart them. |
1382 | + |
1383 | + If no service names are given, reconfigures all registered services. |
1384 | + """ |
1385 | + for service_name in service_names or self.services.keys(): |
1386 | + if self.is_ready(service_name): |
1387 | + self.fire_event('data_ready', service_name) |
1388 | + self.fire_event('start', service_name, default=[ |
1389 | + host.service_restart, |
1390 | + open_ports]) |
1391 | + self.save_ready(service_name) |
1392 | + else: |
1393 | + if self.was_ready(service_name): |
1394 | + self.fire_event('data_lost', service_name) |
1395 | + self.fire_event('stop', service_name, default=[ |
1396 | + close_ports, |
1397 | + host.service_stop]) |
1398 | + self.save_lost(service_name) |
1399 | + |
1400 | + def stop_services(self, *service_names): |
1401 | + """ |
1402 | + Stop one or more registered services, by name. |
1403 | + |
1404 | + If no service names are given, stops all registered services. |
1405 | + """ |
1406 | + for service_name in service_names or self.services.keys(): |
1407 | + self.fire_event('stop', service_name, default=[ |
1408 | + close_ports, |
1409 | + host.service_stop]) |
1410 | + |
1411 | + def get_service(self, service_name): |
1412 | + """ |
1413 | + Given the name of a registered service, return its service definition. |
1414 | + """ |
1415 | + service = self.services.get(service_name) |
1416 | + if not service: |
1417 | + raise KeyError('Service not registered: %s' % service_name) |
1418 | + return service |
1419 | + |
1420 | + def fire_event(self, event_name, service_name, default=None): |
1421 | + """ |
1422 | + Fire a data_ready, data_lost, start, or stop event on a given service. |
1423 | + """ |
1424 | + service = self.get_service(service_name) |
1425 | + callbacks = service.get(event_name, default) |
1426 | + if not callbacks: |
1427 | + return |
1428 | + if not isinstance(callbacks, Iterable): |
1429 | + callbacks = [callbacks] |
1430 | + for callback in callbacks: |
1431 | + if isinstance(callback, ManagerCallback): |
1432 | + callback(self, service_name, event_name) |
1433 | + else: |
1434 | + callback(service_name) |
1435 | + |
1436 | + def is_ready(self, service_name): |
1437 | + """ |
1438 | + Determine if a registered service is ready, by checking its 'required_data'. |
1439 | + |
1440 | + A 'required_data' item can be any mapping type, and is considered ready |
1441 | + if `bool(item)` evaluates as True. |
1442 | + """ |
1443 | + service = self.get_service(service_name) |
1444 | + reqs = service.get('required_data', []) |
1445 | + return all(bool(req) for req in reqs) |
1446 | + |
1447 | + def save_ready(self, service_name): |
1448 | + """ |
1449 | + Save an indicator that the given service is now data_ready. |
1450 | + """ |
1451 | + ready_file = '{}/.ready.{}'.format(hookenv.charm_dir(), service_name) |
1452 | + with open(ready_file, 'a'): |
1453 | + pass |
1454 | + |
1455 | + def save_lost(self, service_name): |
1456 | + """ |
1457 | + Save an indicator that the given service is no longer data_ready. |
1458 | + """ |
1459 | + ready_file = '{}/.ready.{}'.format(hookenv.charm_dir(), service_name) |
1460 | + if os.path.exists(ready_file): |
1461 | + os.remove(ready_file) |
1462 | + |
1463 | + def was_ready(self, service_name): |
1464 | + """ |
1465 | + Determine if the given service was previously data_ready. |
1466 | + """ |
1467 | + ready_file = '{}/.ready.{}'.format(hookenv.charm_dir(), service_name) |
1468 | + return os.path.exists(ready_file) |
1469 | + |
1470 | + |
1471 | +class DefaultMappingList(list): |
1472 | + """ |
1473 | + A list of mappings that proxies calls to `__getitem__` with non-int keys, |
1474 | + as well as calls to `get`, to the first mapping in the list. |
1475 | + |
1476 | + >>> dml = DefaultMappingList([ |
1477 | + ... {'foo': 'bar'}, |
1478 | + ... {'foo': 'qux'}, |
1479 | + ... ]) |
1480 | + >>> dml['foo'] == 'bar' |
1481 | + True |
1482 | + >>> dml[1]['foo'] == 'qux' |
1483 | + True |
1484 | + >>> dml.get('foo') == 'bar' |
1485 | + True |
1486 | + """ |
1487 | + def __getitem__(self, key): |
1488 | + if isinstance(key, int): |
1489 | + return super(DefaultMappingList, self).__getitem__(key) |
1490 | + else: |
1491 | + return super(DefaultMappingList, self).__getitem__(0)[key] |
1492 | + |
1493 | + def get(self, key, default=None): |
1494 | + return self[0].get(key, default) |
1495 | + |
1496 | + |
1497 | +class RelationContext(dict): |
1498 | + """ |
1499 | + Base class for a context generator that gets relation data from juju. |
1500 | + |
1501 | + Subclasses must provide `interface`, which is the interface type of interest, |
1502 | + and `required_keys`, which is the set of keys required for the relation to |
1503 | + be considered complete. The first relation for the interface that is complete |
1504 | + will be used to populate the data for the template. |
1505 | + |
1506 | + The generated context will be namespaced under the interface type, to prevent |
1507 | + potential naming conflicts. |
1508 | + """ |
1509 | + interface = None |
1510 | + required_keys = [] |
1511 | + |
1512 | + def __bool__(self): |
1513 | + """ |
1514 | + Updates the data and returns True if all of the required_keys are available. |
1515 | + """ |
1516 | + self.get_data() |
1517 | + return self.is_ready() |
1518 | + |
1519 | + __nonzero__ = __bool__ |
1520 | + |
1521 | + def __repr__(self): |
1522 | + return super(RelationContext, self).__repr__() |
1523 | + |
1524 | + def is_ready(self): |
1525 | + """ |
1526 | + Returns True if all of the `required_keys` are available from any units. |
1527 | + """ |
1528 | + return len(self.get(self.interface, [])) > 0 |
1529 | + |
1530 | + def _is_ready(self, unit_data): |
1531 | + """ |
1532 | + Helper method that tests a set of relation data and returns True if |
1533 | + all of the `required_keys` are present. |
1534 | + """ |
1535 | + return set(unit_data.keys()).issuperset(set(self.required_keys)) |
1536 | + |
1537 | + def get_data(self): |
1538 | + """ |
1539 | + Retrieve the relation data and store it under `self[self.interface]`. |
1540 | + |
1541 | + Only complete sets of data are stored. |
1542 | + |
1543 | + The data can be treated as either a list or a mapping. Treating it as |
1544 | + a list will give the data from all the complete units. Treating it as |
1545 | + a mapping will give the data for the first complete unit, lexicographically |
1546 | + ordered by relation ID then unit ID. |
1547 | + |
1548 | + For example, if there are relation IDs 'db:1' and 'db:2', where the |
1549 | + service on relation 'db:1' has units 'wordpress/0' and 'wordpress/1', |
1550 | + and the service on relation 'db:2' has unit 'mediawiki/0', then |
1551 | + accessing `self[self.interface]['foo']` will return the 'foo' value |
1552 | + from unit 'wordpress/0'. |
1553 | + """ |
1554 | + if not hookenv.relation_ids(self.interface): |
1555 | + return |
1556 | + |
1557 | + ns = self.setdefault(self.interface, DefaultMappingList()) |
1558 | + for rid in sorted(hookenv.relation_ids(self.interface)): |
1559 | + for unit in sorted(hookenv.related_units(rid)): |
1560 | + reldata = hookenv.relation_get(rid=rid, unit=unit) |
1561 | + if self._is_ready(reldata): |
1562 | + ns.append(reldata) |
1563 | + |
1564 | + |
1565 | +class ManagerCallback(object): |
1566 | + """ |
1567 | + Special case of a callback that takes the `ServiceManager` instance |
1568 | + in addition to the service name. |
1569 | + |
1570 | + Subclasses should implement `__call__` which should accept three parameters: |
1571 | + |
1572 | + * `manager` The `ServiceManager` instance |
1573 | + * `service_name` The name of the service it's being triggered for |
1574 | + * `event_name` The name of the event that this callback is handling |
1575 | + """ |
1576 | + def __call__(self, manager, service_name, event_name): |
1577 | + raise NotImplementedError() |
1578 | + |
1579 | + |
1580 | +class TemplateCallback(ManagerCallback): |
1581 | + """ |
1582 | + Callback class that will render a template, for use as a ready action. |
1583 | + |
1584 | + Both the `source` and `target` paths must be given explicitly. |
1585 | + """ |
1586 | + def __init__(self, source, target, owner='root', group='root', perms=0444): |
1587 | + self.source = source |
1588 | + self.target = target |
1589 | + self.owner = owner |
1590 | + self.group = group |
1591 | + self.perms = perms |
1592 | + |
1593 | + def __call__(self, manager, service_name, event_name): |
1594 | + service = manager.get_service(service_name) |
1595 | + context = {} |
1596 | + for ctx in service.get('required_data', []): |
1597 | + context.update(ctx) |
1598 | + templating.render(self.source, self.target, context, |
1599 | + self.owner, self.group, self.perms) |
1600 | + |
1601 | + |
1602 | +class PortManagerCallback(ManagerCallback): |
1603 | + """ |
1604 | + Callback class that will open or close ports, for use as either |
1605 | + a start or stop action. |
1606 | + """ |
1607 | + def __call__(self, manager, service_name, event_name): |
1608 | + service = manager.get_service(service_name) |
1609 | + for port in service.get('ports', []): |
1610 | + if event_name == 'start': |
1611 | + hookenv.open_port(port) |
1612 | + elif event_name == 'stop': |
1613 | + hookenv.close_port(port) |
1614 | + |
1615 | + |
1616 | +# Convenience aliases |
1617 | +render_template = template = TemplateCallback |
1618 | +open_ports = PortManagerCallback() |
1619 | +close_ports = PortManagerCallback() |
1620 | |
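The services framework above drives a charm's hooks through a single `manage()` call: each service's `data_ready` callbacks fire only once every entry in `required_data` is complete. The sketch below is a simplified, self-contained stand-in for that control flow (the `FakeManager`-style names, the recording `template` helper, and the literal relation dicts are illustrative, not the real charmhelpers implementation):

```python
# Sketch of the ServiceManager control flow: a service's 'data_ready'
# callbacks run only when every entry in 'required_data' is truthy
# (i.e. the relation data is complete). Simplified stand-in only.

rendered = []

def template(source, target):
    # Stand-in for services.template / TemplateCallback: records what
    # would have been rendered instead of writing under /etc/init.
    def callback(service_name):
        rendered.append((service_name, source, target))
    return callback

SERVICES = [
    {
        'service': 'cf-hm9k-fetcher',
        'required_data': [{'nats': {'host': '10.0.0.1'}}],  # truthy => ready
        'data_ready': [
            template('cf-hm9k-fetcher.conf', '/etc/init/cf-hm9k-fetcher.conf'),
        ],
    },
    {
        'service': 'cf-hm9k-listener',
        'required_data': [{}],  # empty dict => relation data not yet complete
        'data_ready': [
            template('cf-hm9k-listener.conf', '/etc/init/cf-hm9k-listener.conf'),
        ],
    },
]

def manage(services):
    # Simplified ServiceManager.manage(): fire callbacks for ready services.
    for service in services:
        if all(service.get('required_data', [])):
            for callback in service.get('data_ready', []):
                callback(service['service'])

manage(SERVICES)
print(rendered)  # only the fetcher was ready, so only its template "rendered"
```

Because every hook symlinks to the same `manage()` entry point, a unit converges to the same state no matter which relation event fired, which is the central idea of the framework.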
1621 | === added file 'hooks/charmhelpers/core/templating.py' |
1622 | --- hooks/charmhelpers/core/templating.py 1970-01-01 00:00:00 +0000 |
1623 | +++ hooks/charmhelpers/core/templating.py 2014-06-05 18:17:30 +0000 |
1624 | @@ -0,0 +1,51 @@ |
1625 | +import os |
1626 | + |
1627 | +from charmhelpers.core import host |
1628 | +from charmhelpers.core import hookenv |
1629 | + |
1630 | + |
1631 | +def render(source, target, context, owner='root', group='root', perms=0444, templates_dir=None): |
1632 | + """ |
1633 | + Render a template. |
1634 | + |
1635 | + The `source` path, if not absolute, is relative to the `templates_dir`. |
1636 | + |
1637 | + The `target` path should be absolute. |
1638 | + |
1639 | + The context should be a dict containing the values to be replaced in the |
1640 | + template. |
1641 | + |
1642 | + The `owner`, `group`, and `perms` options will be passed to `write_file`. |
1643 | + |
1644 | + If omitted, `templates_dir` defaults to the `templates` folder in the charm. |
1645 | + |
1646 | + Note: Using this requires python-jinja2; if it is not installed, calling |
1647 | + this will attempt to use charmhelpers.fetch.apt_install to install it. |
1648 | + """ |
1649 | + try: |
1650 | + from jinja2 import FileSystemLoader, Environment, exceptions |
1651 | + except ImportError: |
1652 | + try: |
1653 | + from charmhelpers.fetch import apt_install |
1654 | + except ImportError: |
1655 | + hookenv.log('Could not import jinja2, and could not import ' |
1656 | + 'charmhelpers.fetch to install it', |
1657 | + level=hookenv.ERROR) |
1658 | + raise |
1659 | + apt_install('python-jinja2', fatal=True) |
1660 | + from jinja2 import FileSystemLoader, Environment, exceptions |
1661 | + |
1662 | + if templates_dir is None: |
1663 | + templates_dir = os.path.join(hookenv.charm_dir(), 'templates') |
1664 | + loader = Environment(loader=FileSystemLoader(templates_dir)) |
1665 | + try: |
1667 | + template = loader.get_template(source) |
1668 | + except exceptions.TemplateNotFound as e: |
1669 | + hookenv.log('Could not load template %s from %s.' % |
1670 | + (source, templates_dir), |
1671 | + level=hookenv.ERROR) |
1672 | + raise e |
1673 | + content = template.render(context) |
1674 | + host.mkdir(os.path.dirname(target)) |
1675 | + host.write_file(target, content, owner, group, perms) |
1676 | |
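The flow of `render()` above is: resolve `templates_dir`, load the template by name, substitute the context, create the target directory, and write the file. A minimal Python 3 sketch of that same flow, using the stdlib's `string.Template` in place of Jinja2 so it runs without extra packages (names like `render_simple` are illustrative):

```python
import os
import string
import tempfile

def render_simple(source, target, context, templates_dir):
    """Simplified version of the render() flow: load the template
    relative to templates_dir, substitute the context, create the
    target directory, and write the result."""
    with open(os.path.join(templates_dir, source)) as f:
        template = string.Template(f.read())
    content = template.substitute(context)
    # Equivalent of host.mkdir() followed by host.write_file().
    os.makedirs(os.path.dirname(target), exist_ok=True)
    with open(target, 'w') as f:
        f.write(content)
    return content

# Usage: render a toy upstart-style stanza into a temp dir.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, 'demo.conf'), 'w') as f:
    f.write('exec ./bin/hm9000 $command --config=$config\n')

out = render_simple('demo.conf',
                    os.path.join(tmp, 'etc', 'init', 'demo.conf'),
                    {'command': 'listen', 'config': '/tmp/hm9000.json'},
                    templates_dir=tmp)
print(out)
```

The real helper additionally installs python-jinja2 on demand via `apt_install`, which is why the import is deferred into the function body.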
1677 | === modified file 'hooks/charmhelpers/fetch/__init__.py' |
1678 | --- hooks/charmhelpers/fetch/__init__.py 2014-05-14 16:40:09 +0000 |
1679 | +++ hooks/charmhelpers/fetch/__init__.py 2014-06-05 18:17:30 +0000 |
1680 | @@ -1,4 +1,5 @@ |
1681 | import importlib |
1682 | +import time |
1683 | from yaml import safe_load |
1684 | from charmhelpers.core.host import ( |
1685 | lsb_release |
1686 | @@ -15,6 +16,7 @@ |
1687 | import apt_pkg |
1688 | import os |
1689 | |
1690 | + |
1691 | CLOUD_ARCHIVE = """# Ubuntu Cloud Archive |
1692 | deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main |
1693 | """ |
1694 | @@ -56,10 +58,62 @@ |
1695 | 'precise-proposed/icehouse': 'precise-proposed/icehouse', |
1696 | } |
1697 | |
1698 | +# The order of this list is very important. Handlers should be listed in |
1699 | +# order from least- to most-specific URL matching. |
1700 | +FETCH_HANDLERS = ( |
1701 | + 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', |
1702 | + 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler', |
1703 | +) |
1704 | + |
1705 | +APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT. |
1706 | +APT_NO_LOCK_RETRY_DELAY = 10 # Wait 10 seconds between apt lock checks. |
1707 | +APT_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times. |
1708 | + |
1709 | + |
1710 | +class SourceConfigError(Exception): |
1711 | + pass |
1712 | + |
1713 | + |
1714 | +class UnhandledSource(Exception): |
1715 | + pass |
1716 | + |
1717 | + |
1718 | +class AptLockError(Exception): |
1719 | + pass |
1720 | + |
1721 | + |
1722 | +class BaseFetchHandler(object): |
1723 | + |
1724 | + """Base class for FetchHandler implementations in fetch plugins""" |
1725 | + |
1726 | + def can_handle(self, source): |
1727 | + """Returns True if the source can be handled. Otherwise returns |
1728 | + a string explaining why it cannot""" |
1729 | + return "Wrong source type" |
1730 | + |
1731 | + def install(self, source): |
1732 | + """Try to download and unpack the source. Return the path to the |
1733 | + unpacked files or raise UnhandledSource.""" |
1734 | + raise UnhandledSource("Wrong source type {}".format(source)) |
1735 | + |
1736 | + def parse_url(self, url): |
1737 | + return urlparse(url) |
1738 | + |
1739 | + def base_url(self, url): |
1740 | + """Return url without querystring or fragment""" |
1741 | + parts = list(self.parse_url(url)) |
1742 | + parts[4:] = ['' for i in parts[4:]] |
1743 | + return urlunparse(parts) |
1744 | + |
1745 | |
1746 | def filter_installed_packages(packages): |
1747 | """Returns a list of packages that require installation""" |
1748 | apt_pkg.init() |
1749 | + |
1750 | + # Tell apt to build an in-memory cache to prevent race conditions (if |
1751 | + # another process is already building the cache). |
1752 | + apt_pkg.config.set("Dir::Cache::pkgcache", "") |
1753 | + |
1754 | cache = apt_pkg.Cache() |
1755 | _pkgs = [] |
1756 | for package in packages: |
1757 | @@ -87,14 +141,7 @@ |
1758 | cmd.extend(packages) |
1759 | log("Installing {} with options: {}".format(packages, |
1760 | options)) |
1761 | - env = os.environ.copy() |
1762 | - if 'DEBIAN_FRONTEND' not in env: |
1763 | - env['DEBIAN_FRONTEND'] = 'noninteractive' |
1764 | - |
1765 | - if fatal: |
1766 | - subprocess.check_call(cmd, env=env) |
1767 | - else: |
1768 | - subprocess.call(cmd, env=env) |
1769 | + _run_apt_command(cmd, fatal) |
1770 | |
1771 | |
1772 | def apt_upgrade(options=None, fatal=False, dist=False): |
1773 | @@ -109,24 +156,13 @@ |
1774 | else: |
1775 | cmd.append('upgrade') |
1776 | log("Upgrading with options: {}".format(options)) |
1777 | - |
1778 | - env = os.environ.copy() |
1779 | - if 'DEBIAN_FRONTEND' not in env: |
1780 | - env['DEBIAN_FRONTEND'] = 'noninteractive' |
1781 | - |
1782 | - if fatal: |
1783 | - subprocess.check_call(cmd, env=env) |
1784 | - else: |
1785 | - subprocess.call(cmd, env=env) |
1786 | + _run_apt_command(cmd, fatal) |
1787 | |
1788 | |
1789 | def apt_update(fatal=False): |
1790 | """Update local apt cache""" |
1791 | cmd = ['apt-get', 'update'] |
1792 | - if fatal: |
1793 | - subprocess.check_call(cmd) |
1794 | - else: |
1795 | - subprocess.call(cmd) |
1796 | + _run_apt_command(cmd, fatal) |
1797 | |
1798 | |
1799 | def apt_purge(packages, fatal=False): |
1800 | @@ -137,10 +173,7 @@ |
1801 | else: |
1802 | cmd.extend(packages) |
1803 | log("Purging {}".format(packages)) |
1804 | - if fatal: |
1805 | - subprocess.check_call(cmd) |
1806 | - else: |
1807 | - subprocess.call(cmd) |
1808 | + _run_apt_command(cmd, fatal) |
1809 | |
1810 | |
1811 | def apt_hold(packages, fatal=False): |
1812 | @@ -151,6 +184,7 @@ |
1813 | else: |
1814 | cmd.extend(packages) |
1815 | log("Holding {}".format(packages)) |
1816 | + |
1817 | if fatal: |
1818 | subprocess.check_call(cmd) |
1819 | else: |
1820 | @@ -188,10 +222,6 @@ |
1821 | key]) |
1822 | |
1823 | |
1824 | -class SourceConfigError(Exception): |
1825 | - pass |
1826 | - |
1827 | - |
1828 | def configure_sources(update=False, |
1829 | sources_var='install_sources', |
1830 | keys_var='install_keys'): |
1831 | @@ -224,17 +254,6 @@ |
1832 | if update: |
1833 | apt_update(fatal=True) |
1834 | |
1835 | -# The order of this list is very important. Handlers should be listed in from |
1836 | -# least- to most-specific URL matching. |
1837 | -FETCH_HANDLERS = ( |
1838 | - 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', |
1839 | - 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler', |
1840 | -) |
1841 | - |
1842 | - |
1843 | -class UnhandledSource(Exception): |
1844 | - pass |
1845 | - |
1846 | |
1847 | def install_remote(source): |
1848 | """ |
1849 | @@ -265,30 +284,6 @@ |
1850 | return install_remote(source) |
1851 | |
1852 | |
1853 | -class BaseFetchHandler(object): |
1854 | - |
1855 | - """Base class for FetchHandler implementations in fetch plugins""" |
1856 | - |
1857 | - def can_handle(self, source): |
1858 | - """Returns True if the source can be handled. Otherwise returns |
1859 | - a string explaining why it cannot""" |
1860 | - return "Wrong source type" |
1861 | - |
1862 | - def install(self, source): |
1863 | - """Try to download and unpack the source. Return the path to the |
1864 | - unpacked files or raise UnhandledSource.""" |
1865 | - raise UnhandledSource("Wrong source type {}".format(source)) |
1866 | - |
1867 | - def parse_url(self, url): |
1868 | - return urlparse(url) |
1869 | - |
1870 | - def base_url(self, url): |
1871 | - """Return url without querystring or fragment""" |
1872 | - parts = list(self.parse_url(url)) |
1873 | - parts[4:] = ['' for i in parts[4:]] |
1874 | - return urlunparse(parts) |
1875 | - |
1876 | - |
1877 | def plugins(fetch_handlers=None): |
1878 | if not fetch_handlers: |
1879 | fetch_handlers = FETCH_HANDLERS |
1880 | @@ -306,3 +301,40 @@ |
1881 | log("FetchHandler {} not found, skipping plugin".format( |
1882 | handler_name)) |
1883 | return plugin_list |
1884 | + |
1885 | + |
1886 | +def _run_apt_command(cmd, fatal=False): |
1887 | + """ |
1888 | + Run an APT command, checking output and retrying if the fatal flag is set |
1889 | + to True. |
1890 | + |
1891 | + :param cmd: list: The apt command to run. |
1892 | + :param fatal: bool: Whether a failure should raise; when True, the |
1893 | + command is retried while the apt lock is held. |
1894 | + """ |
1895 | + env = os.environ.copy() |
1896 | + |
1897 | + if 'DEBIAN_FRONTEND' not in env: |
1898 | + env['DEBIAN_FRONTEND'] = 'noninteractive' |
1899 | + |
1900 | + if fatal: |
1901 | + retry_count = 0 |
1902 | + result = None |
1903 | + |
1904 | + # If the command is considered "fatal", we need to retry if the apt |
1905 | + # lock was not acquired. |
1906 | + |
1907 | + while result is None or result == APT_NO_LOCK: |
1908 | + try: |
1909 | + result = subprocess.check_call(cmd, env=env) |
1910 | + except subprocess.CalledProcessError as e: |
1911 | + retry_count = retry_count + 1 |
1912 | + if retry_count > APT_NO_LOCK_RETRY_COUNT: |
1913 | + raise |
1914 | + result = e.returncode |
1915 | + log("Couldn't acquire DPKG lock. Will retry in {} seconds." |
1916 | + "".format(APT_NO_LOCK_RETRY_DELAY)) |
1917 | + time.sleep(APT_NO_LOCK_RETRY_DELAY) |
1918 | + |
1919 | + else: |
1920 | + subprocess.call(cmd, env=env) |
1921 | |
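The fatal path of `_run_apt_command` loops while the command exits with the lock-held status, giving up after `APT_NO_LOCK_RETRY_COUNT` attempts. The sketch below isolates that retry loop with an injected runner so it can be exercised without shelling out to apt-get (the `run_with_lock_retry` name is illustrative, and the `time.sleep` delay is omitted):

```python
import subprocess

APT_NO_LOCK = 100            # apt's exit status when the dpkg lock is held
APT_NO_LOCK_RETRY_COUNT = 30

def run_with_lock_retry(run):
    """Sketch of the fatal-path retry loop: keep retrying while the
    command fails with the lock-held exit status, re-raising once the
    retry budget is exhausted. Returns the number of retries used."""
    result = None
    attempts = 0
    while result is None or result == APT_NO_LOCK:
        try:
            result = run()
        except subprocess.CalledProcessError as e:
            attempts += 1
            if attempts > APT_NO_LOCK_RETRY_COUNT:
                raise
            result = e.returncode
    return attempts

# Simulate a run that hits the dpkg lock twice before succeeding.
calls = {'n': 0}
def fake_apt():
    calls['n'] += 1
    if calls['n'] <= 2:
        raise subprocess.CalledProcessError(APT_NO_LOCK, 'apt-get')
    return 0

print(run_with_lock_retry(fake_apt))  # 2 retries before success
```

Any other non-zero exit status is stored in `result` without matching `APT_NO_LOCK`, so the loop exits immediately rather than retrying unrelated failures.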
1922 | === modified file 'hooks/config-changed' |
1923 | --- hooks/config-changed 2014-05-14 16:40:09 +0000 |
1924 | +++ hooks/config-changed 2014-06-05 18:17:30 +0000 |
1925 | @@ -1,2 +1,5 @@ |
1926 | -#!/bin/bash |
1927 | -# config-changed occurs everytime a new configuration value is updated (juju set) |
1928 | +#!/usr/bin/env python |
1929 | +from charmhelpers.core import services |
1930 | +import config |
1931 | +manager = services.ServiceManager(config.SERVICES) |
1932 | +manager.manage() |
1933 | |
1934 | === added file 'hooks/config.py' |
1935 | --- hooks/config.py 1970-01-01 00:00:00 +0000 |
1936 | +++ hooks/config.py 2014-06-05 18:17:30 +0000 |
1937 | @@ -0,0 +1,80 @@ |
1938 | +from charmhelpers.core import services |
1939 | +from charmhelpers.contrib.cloudfoundry import contexts |
1940 | + |
1941 | +HM9K_PACKAGES = ['python-jinja2', 'cfhm9000'] |
1942 | + |
1943 | +HM_DIR = '/var/lib/cloudfoundry/cfhm9000' |
1944 | +WORKSPACE_DIR = '/var/lib/cloudfoundry/hm-workspace' |
1945 | + |
1946 | +hm_relations = [contexts.NatsRelation(), |
1947 | + contexts.EtcdRelation(), |
1948 | + contexts.CloudControllerRelation()] |
1949 | + |
1950 | +SERVICES = [ |
1951 | + { |
1952 | + 'service': 'cf-hm9k-fetcher', |
1953 | + 'required_data': hm_relations, |
1954 | + 'data_ready': [ |
1955 | + services.template(source='hm9000.json', |
1956 | + target=HM_DIR + '/config/hm9000.json'), |
1957 | + services.template(source='cf-hm9k-fetcher.conf', |
1958 | + target='/etc/init/cf-hm9k-fetcher.conf'), |
1959 | + ], |
1960 | + }, |
1961 | + { |
1962 | + 'service': 'cf-hm9k-listener', |
1963 | + 'required_data': hm_relations, |
1964 | + 'data_ready': [ |
1965 | + services.template(source='cf-hm9k-listener.conf', |
1966 | + target='/etc/init/cf-hm9k-listener.conf'), |
1967 | + ], |
1968 | + }, |
1969 | + { |
1970 | + 'service': 'cf-hm9k-analyzer', |
1971 | + 'required_data': hm_relations, |
1972 | + 'data_ready': [ |
1973 | + services.template(source='cf-hm9k-analyzer.conf', |
1974 | + target='/etc/init/cf-hm9k-analyzer.conf'), |
1975 | + ], |
1976 | + }, |
1977 | + { |
1978 | + 'service': 'cf-hm9k-sender', |
1979 | + 'required_data': hm_relations, |
1980 | + 'data_ready': [ |
1981 | + services.template(source='cf-hm9k-sender.conf', |
1982 | + target='/etc/init/cf-hm9k-sender.conf'), |
1983 | + ], |
1984 | + }, |
1985 | + { |
1986 | + 'service': 'cf-hm9k-metrics-server', |
1987 | + 'required_data': hm_relations, |
1988 | + 'data_ready': [ |
1989 | + services.template(source='cf-hm9k-metrics-server.conf', |
1990 | + target='/etc/init/cf-hm9k-metrics-server.conf'), |
1991 | + ], |
1992 | + }, |
1993 | + { |
1994 | + 'service': 'cf-hm9k-api-server', |
1995 | + 'required_data': hm_relations, |
1996 | + 'data_ready': [ |
1997 | + services.template(source='cf-hm9k-api-server.conf', |
1998 | + target='/etc/init/cf-hm9k-api-server.conf'), |
1999 | + ], |
2000 | + }, |
2001 | + { |
2002 | + 'service': 'cf-hm9k-evacuator', |
2003 | + 'required_data': hm_relations, |
2004 | + 'data_ready': [ |
2005 | + services.template(source='cf-hm9k-evacuator.conf', |
2006 | + target='/etc/init/cf-hm9k-evacuator.conf'), |
2007 | + ], |
2008 | + }, |
2009 | + { |
2010 | + 'service': 'cf-hm9k-shredder', |
2011 | + 'required_data': hm_relations, |
2012 | + 'data_ready': [ |
2013 | + services.template(source='cf-hm9k-shredder.conf', |
2014 | + target='/etc/init/cf-hm9k-shredder.conf'), |
2015 | + ], |
2016 | + }, |
2017 | +] |
2018 | |
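The eight `SERVICES` entries above are nearly identical, differing only in the job name (plus the extra `hm9000.json` render on the fetcher). A hypothetical refactor, not part of this merge, could generate them from a list of job names; the recording `template` helper here is a stand-in for `services.template` so the sketch is self-contained:

```python
# Hypothetical generation of the repetitive SERVICES list above.
HM_DIR = '/var/lib/cloudfoundry/cfhm9000'
JOBS = ['fetcher', 'listener', 'analyzer', 'sender',
        'metrics-server', 'api-server', 'evacuator', 'shredder']

def template(source, target):
    # Stand-in for services.template; returns a marker tuple instead of
    # a real TemplateCallback so the sketch runs anywhere.
    return ('render', source, target)

def make_service(job):
    name = 'cf-hm9k-%s' % job
    entry = {
        'service': name,
        'required_data': [],   # hm_relations in the real config.py
        'data_ready': [template(source=name + '.conf',
                                target='/etc/init/%s.conf' % name)],
    }
    if job == 'fetcher':
        # Only the fetcher also renders the shared hm9000.json config.
        entry['data_ready'].insert(
            0, template(source='hm9000.json',
                        target=HM_DIR + '/config/hm9000.json'))
    return entry

SERVICES = [make_service(job) for job in JOBS]
print(len(SERVICES))  # 8 entries, matching the hand-written list
```

The hand-written form in the diff is more explicit, which arguably suits a charm under review; the generated form trades that explicitness for less duplication.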
2019 | === added file 'hooks/etcd-relation-changed' |
2020 | --- hooks/etcd-relation-changed 1970-01-01 00:00:00 +0000 |
2021 | +++ hooks/etcd-relation-changed 2014-06-05 18:17:30 +0000 |
2022 | @@ -0,0 +1,5 @@ |
2023 | +#!/usr/bin/env python |
2024 | +from charmhelpers.core import services |
2025 | +import config |
2026 | +manager = services.ServiceManager(config.SERVICES) |
2027 | +manager.manage() |
2028 | |
2029 | === modified file 'hooks/install' |
2030 | --- hooks/install 2014-05-14 16:40:09 +0000 |
2031 | +++ hooks/install 2014-06-05 18:17:30 +0000 |
2032 | @@ -1,8 +1,43 @@ |
2033 | -#!/bin/bash |
2034 | -# Here do anything needed to install the service |
2035 | -# i.e. apt-get install -y foo or bzr branch http://myserver/mycode /srv/webroot |
2036 | -# Make sure this hook exits cleanly and is idempotent, common problems here are |
2037 | -# failing to account for a debconf question on a dependency, or trying to pull |
2038 | -# from github without installing git first. |
2039 | - |
2040 | -apt-get install -y cf-hm9000 |
2041 | +#!/usr/bin/env python |
2042 | +# vim: et ai ts=4 sw=4: |
2043 | + |
2044 | +import os |
2045 | +import subprocess |
2046 | + |
2047 | +from charmhelpers.core import host |
2048 | +from charmhelpers.core import hookenv |
2049 | +from charmhelpers.contrib.cloudfoundry.common import ( |
2050 | + prepare_cloudfoundry_environment |
2051 | +) |
2052 | + |
2053 | +import config |
2054 | + |
2055 | +CHARM_DIR = hookenv.charm_dir() |
2056 | + |
2057 | + |
2058 | +def install(): |
2059 | + prepare_cloudfoundry_environment(hookenv.config(), config.HM9K_PACKAGES) |
2060 | + install_from_source() |
2061 | + |
2062 | + |
2063 | +def install_from_source(): |
2064 | + subprocess.check_call([ |
2065 | + 'git', 'clone', |
2066 | + 'https://github.com/cloudfoundry/hm-workspace', config.WORKSPACE_DIR]) |
2067 | + host.mkdir(config.WORKSPACE_DIR + '/bin') |
2068 | + with host.chdir(config.WORKSPACE_DIR): |
2069 | + subprocess.check_call(['git', 'submodule', 'update', '--init']) |
2070 | + with host.chdir(config.WORKSPACE_DIR + '/src/github.com/cloudfoundry/hm9000'): |
2071 | + subprocess.check_call(['go', 'install', '.'], |
2072 | + env={'GOPATH': config.WORKSPACE_DIR}) |
2073 | + |
2074 | + |
2075 | +def install_from_charm(): |
2076 | + host.copy_file( |
2077 | + os.path.join(hookenv.charm_dir(), 'files/hm9000'), |
2078 | + config.WORKSPACE_DIR + '/bin/hm9000', |
2079 | + owner='vcap', group='vcap', perms=0555) |
2080 | + |
2081 | + |
2082 | +if __name__ == '__main__': |
2083 | + install() |
2084 | |
2085 | === added file 'hooks/metrics-relation-changed' |
2086 | --- hooks/metrics-relation-changed 1970-01-01 00:00:00 +0000 |
2087 | +++ hooks/metrics-relation-changed 2014-06-05 18:17:30 +0000 |
2088 | @@ -0,0 +1,5 @@ |
2089 | +#!/usr/bin/env python |
2090 | +from charmhelpers.core import services |
2091 | +import config |
2092 | +manager = services.ServiceManager(config.SERVICES) |
2093 | +manager.manage() |
2094 | |
2095 | === added file 'hooks/nats-relation-changed' |
2096 | --- hooks/nats-relation-changed 1970-01-01 00:00:00 +0000 |
2097 | +++ hooks/nats-relation-changed 2014-06-05 18:17:30 +0000 |
2098 | @@ -0,0 +1,5 @@ |
2099 | +#!/usr/bin/env python |
2100 | +from charmhelpers.core import services |
2101 | +import config |
2102 | +manager = services.ServiceManager(config.SERVICES) |
2103 | +manager.manage() |
2104 | |
2105 | === removed file 'hooks/relation-name-relation-broken' |
2106 | --- hooks/relation-name-relation-broken 2014-05-14 16:40:09 +0000 |
2107 | +++ hooks/relation-name-relation-broken 1970-01-01 00:00:00 +0000 |
2108 | @@ -1,2 +0,0 @@ |
2109 | -#!/bin/sh |
2110 | -# This hook runs when the full relation is removed (not just a single member) |
2111 | |
2112 | === removed file 'hooks/relation-name-relation-changed' |
2113 | --- hooks/relation-name-relation-changed 2014-05-14 16:40:09 +0000 |
2114 | +++ hooks/relation-name-relation-changed 1970-01-01 00:00:00 +0000 |
2115 | @@ -1,9 +0,0 @@ |
2116 | -#!/bin/bash |
2117 | -# This must be renamed to the name of the relation. The goal here is to |
2118 | -# affect any change needed by relationships being formed, modified, or broken |
2119 | -# This script should be idempotent. |
2120 | -juju-log $JUJU_REMOTE_UNIT modified its settings |
2121 | -juju-log Relation settings: |
2122 | -relation-get |
2123 | -juju-log Relation members: |
2124 | -relation-list |
2125 | |
2126 | === removed file 'hooks/relation-name-relation-departed' |
2127 | --- hooks/relation-name-relation-departed 2014-05-14 16:40:09 +0000 |
2128 | +++ hooks/relation-name-relation-departed 1970-01-01 00:00:00 +0000 |
2129 | @@ -1,5 +0,0 @@ |
2130 | -#!/bin/sh |
2131 | -# This must be renamed to the name of the relation. The goal here is to |
2132 | -# affect any change needed by the remote unit leaving the relationship. |
2133 | -# This script should be idempotent. |
2134 | -juju-log $JUJU_REMOTE_UNIT departed |
2135 | |
2136 | === removed file 'hooks/relation-name-relation-joined' |
2137 | --- hooks/relation-name-relation-joined 2014-05-14 16:40:09 +0000 |
2138 | +++ hooks/relation-name-relation-joined 1970-01-01 00:00:00 +0000 |
2139 | @@ -1,5 +0,0 @@ |
2140 | -#!/bin/sh |
2141 | -# This must be renamed to the name of the relation. The goal here is to |
2142 | -# affect any change needed by relationships being formed |
2143 | -# This script should be idempotent. |
2144 | -juju-log $JUJU_REMOTE_UNIT joined |
2145 | |
2146 | === modified file 'hooks/start' |
2147 | --- hooks/start 2014-05-14 16:40:09 +0000 |
2148 | +++ hooks/start 2014-06-05 18:17:30 +0000 |
2149 | @@ -1,4 +1,5 @@ |
2150 | -#!/bin/bash |
2151 | -# Here put anything that is needed to start the service. |
2152 | -# Note that currently this is run directly after install |
2153 | -# i.e. 'service apache2 start' |
2154 | +#!/usr/bin/env python |
2155 | +from charmhelpers.core import services |
2156 | +import config |
2157 | +manager = services.ServiceManager(config.SERVICES) |
2158 | +manager.manage() |
2159 | |
2160 | === modified file 'hooks/stop' |
2161 | --- hooks/stop 2014-05-14 16:40:09 +0000 |
2162 | +++ hooks/stop 2014-06-05 18:17:30 +0000 |
2163 | @@ -1,7 +1,5 @@ |
2164 | -#!/bin/bash |
2165 | -# This will be run when the service is being torn down, allowing you to disable |
2166 | -# it in various ways.. |
2167 | -# For example, if your web app uses a text file to signal to the load balancer |
2168 | -# that it is live... you could remove it and sleep for a bit to allow the load |
2169 | -# balancer to stop sending traffic. |
2170 | -# rm /srv/webroot/server-live.txt && sleep 30 |
2171 | +#!/usr/bin/env python |
2172 | +from charmhelpers.core import services |
2173 | +import config |
2174 | +manager = services.ServiceManager(config.SERVICES) |
2175 | +manager.manage() |
2176 | |
2177 | === modified file 'hooks/upgrade-charm' |
2178 | --- hooks/upgrade-charm 2014-05-14 16:40:09 +0000 |
2179 | +++ hooks/upgrade-charm 2014-06-05 18:17:30 +0000 |
2180 | @@ -1,6 +1,5 @@ |
2181 | -#!/bin/bash |
2182 | -# This hook is executed each time a charm is upgraded after the new charm |
2183 | -# contents have been unpacked |
2184 | -# Best practice suggests you execute the hooks/install and |
2185 | -# hooks/config-changed to ensure all updates are processed |
2186 | - |
2187 | +#!/usr/bin/env python |
2188 | +from charmhelpers.core import services |
2189 | +import config |
2190 | +manager = services.ServiceManager(config.SERVICES) |
2191 | +manager.manage() |
2192 | |
2193 | === modified file 'metadata.yaml' |
2194 | --- metadata.yaml 2014-05-14 16:40:09 +0000 |
2195 | +++ metadata.yaml 2014-06-05 18:17:30 +0000 |
2196 | @@ -1,6 +1,6 @@ |
2197 | name: cf-hm9000 |
2198 | -summary: Whit Morriss <whit.morriss@canonical.com> |
2199 | -maintainer: cloudfoundry-charmers |
2200 | +summary: Health Monitor for Cloud Foundry |
2201 | +maintainer: cf-charmers |
2202 | description: | |
2203 | Deploys the hm9000 health monitoring system for cloud foundry |
2204 | categories: |
2205 | @@ -9,13 +9,10 @@ |
2206 | provides: |
2207 | metrics: |
2208 | interface: http |
2209 | - api: |
2210 | - interface: nats |
2211 | requires: |
2212 | nats: |
2213 | interface: nats |
2214 | - storage: |
2215 | - interface: etcd |
2216 | -# peers: |
2217 | -# peer-relation: |
2218 | -# interface: interface-name |
2219 | + etcd: |
2220 | + interface: http |
2221 | + cc: |
2222 | + interface: cf-cloud-controller |
2223 | |
2224 | === removed file 'notes.md' |
2225 | --- notes.md 2014-05-14 16:40:09 +0000 |
2226 | +++ notes.md 1970-01-01 00:00:00 +0000 |
2227 | @@ -1,8 +0,0 @@ |
2228 | -# Notes |
2229 | - |
2230 | -## Relations |
2231 | - |
2232 | - - etcd |
2233 | - - nats |
2234 | - |
2235 | - |
2236 | |
2237 | === added directory 'templates' |
2238 | === added file 'templates/cf-hm9k-analyzer.conf' |
2239 | --- templates/cf-hm9k-analyzer.conf 1970-01-01 00:00:00 +0000 |
2240 | +++ templates/cf-hm9k-analyzer.conf 2014-06-05 18:17:30 +0000 |
2241 | @@ -0,0 +1,17 @@ |
2242 | +description "Cloud Foundry HM9000" |
2243 | +author "cf-charmers <cf-charmers@lists.launchpad.net>" |
2244 | +start on runlevel [2345] |
2245 | +stop on runlevel [!2345] |
2246 | +#expect daemon |
2247 | +#apparmor load <profile-path> |
2248 | +setuid vcap |
2249 | +setgid vcap |
2250 | +respawn |
2251 | +respawn limit 10 5 |
2252 | +normal exit 0 |
2253 | + |
2254 | +env GOPATH=/var/lib/cloudfoundry/hm-workspace/ |
2255 | +export GOPATH |
2256 | + |
2257 | +chdir /var/lib/cloudfoundry/hm-workspace |
2258 | +exec ./bin/hm9000 analyze --poll --config=/var/lib/cloudfoundry/cfhm9000/config/hm9000.json |
2259 | |
2260 | === added file 'templates/cf-hm9k-api-server.conf' |
2261 | --- templates/cf-hm9k-api-server.conf 1970-01-01 00:00:00 +0000 |
2262 | +++ templates/cf-hm9k-api-server.conf 2014-06-05 18:17:30 +0000 |
2263 | @@ -0,0 +1,17 @@ |
2264 | +description "Cloud Foundry HM9000" |
2265 | +author "cf-charmers <cf-charmers@lists.launchpad.net>" |
2266 | +start on runlevel [2345] |
2267 | +stop on runlevel [!2345] |
2268 | +#expect daemon |
2269 | +#apparmor load <profile-path> |
2270 | +setuid vcap |
2271 | +setgid vcap |
2272 | +respawn |
2273 | +respawn limit 10 5 |
2274 | +normal exit 0 |
2275 | + |
2276 | +env GOPATH=/var/lib/cloudfoundry/hm-workspace/ |
2277 | +export GOPATH |
2278 | + |
2279 | +chdir /var/lib/cloudfoundry/hm-workspace |
2280 | +exec ./bin/hm9000 serve_api --config=/var/lib/cloudfoundry/cfhm9000/config/hm9000.json |
2281 | |
2282 | === added file 'templates/cf-hm9k-evacuator.conf' |
2283 | --- templates/cf-hm9k-evacuator.conf 1970-01-01 00:00:00 +0000 |
2284 | +++ templates/cf-hm9k-evacuator.conf 2014-06-05 18:17:30 +0000 |
2285 | @@ -0,0 +1,17 @@ |
2286 | +description "Cloud Foundry HM9000" |
2287 | +author "cf-charmers <cf-charmers@lists.launchpad.net>" |
2288 | +start on runlevel [2345] |
2289 | +stop on runlevel [!2345] |
2290 | +#expect daemon |
2291 | +#apparmor load <profile-path> |
2292 | +setuid vcap |
2293 | +setgid vcap |
2294 | +respawn |
2295 | +respawn limit 10 5 |
2296 | +normal exit 0 |
2297 | + |
2298 | +env GOPATH=/var/lib/cloudfoundry/hm-workspace/ |
2299 | +export GOPATH |
2300 | + |
2301 | +chdir /var/lib/cloudfoundry/hm-workspace |
2302 | +exec ./bin/hm9000 evacuator --config=/var/lib/cloudfoundry/cfhm9000/config/hm9000.json |
2303 | |
2304 | === added file 'templates/cf-hm9k-fetcher.conf' |
2305 | --- templates/cf-hm9k-fetcher.conf 1970-01-01 00:00:00 +0000 |
2306 | +++ templates/cf-hm9k-fetcher.conf 2014-06-05 18:17:30 +0000 |
2307 | @@ -0,0 +1,17 @@ |
2308 | +description "Cloud Foundry HM9000" |
2309 | +author "cf-charmers <cf-charmers@lists.launchpad.net>" |
2310 | +start on runlevel [2345] |
2311 | +stop on runlevel [!2345] |
2312 | +#expect daemon |
2313 | +#apparmor load <profile-path> |
2314 | +setuid vcap |
2315 | +setgid vcap |
2316 | +respawn |
2317 | +respawn limit 10 5 |
2318 | +normal exit 0 |
2319 | + |
2320 | +env GOPATH=/var/lib/cloudfoundry/hm-workspace/ |
2321 | +export GOPATH |
2322 | + |
2323 | +chdir /var/lib/cloudfoundry/hm-workspace |
2324 | +exec ./bin/hm9000 fetch_desired --poll --config=/var/lib/cloudfoundry/cfhm9000/config/hm9000.json |
2325 | |
2326 | === added file 'templates/cf-hm9k-listener.conf' |
2327 | --- templates/cf-hm9k-listener.conf 1970-01-01 00:00:00 +0000 |
2328 | +++ templates/cf-hm9k-listener.conf 2014-06-05 18:17:30 +0000 |
2329 | @@ -0,0 +1,17 @@ |
2330 | +description "Cloud Foundry HM9000" |
2331 | +author "cf-charmers <cf-charmers@lists.launchpad.net>" |
2332 | +start on runlevel [2345] |
2333 | +stop on runlevel [!2345] |
2334 | +#expect daemon |
2335 | +#apparmor load <profile-path> |
2336 | +setuid vcap |
2337 | +setgid vcap |
2338 | +respawn |
2339 | +respawn limit 10 5 |
2340 | +normal exit 0 |
2341 | + |
2342 | +env GOPATH=/var/lib/cloudfoundry/hm-workspace/ |
2343 | +export GOPATH |
2344 | + |
2345 | +chdir /var/lib/cloudfoundry/hm-workspace |
2346 | +exec ./bin/hm9000 listen --config=/var/lib/cloudfoundry/cfhm9000/config/hm9000.json |
2347 | |
2348 | === added file 'templates/cf-hm9k-metrics-server.conf' |
2349 | --- templates/cf-hm9k-metrics-server.conf 1970-01-01 00:00:00 +0000 |
2350 | +++ templates/cf-hm9k-metrics-server.conf 2014-06-05 18:17:30 +0000 |
2351 | @@ -0,0 +1,17 @@ |
2352 | +description "Cloud Foundry HM9000" |
2353 | +author "cf-charmers <cf-charmers@lists.launchpad.net>" |
2354 | +start on runlevel [2345] |
2355 | +stop on runlevel [!2345] |
2356 | +#expect daemon |
2357 | +#apparmor load <profile-path> |
2358 | +setuid vcap |
2359 | +setgid vcap |
2360 | +respawn |
2361 | +respawn limit 10 5 |
2362 | +normal exit 0 |
2363 | + |
2364 | +env GOPATH=/var/lib/cloudfoundry/hm-workspace/ |
2365 | +export GOPATH |
2366 | + |
2367 | +chdir /var/lib/cloudfoundry/hm-workspace |
2368 | +exec ./bin/hm9000 serve_metrics --config=/var/lib/cloudfoundry/cfhm9000/config/hm9000.json |
2369 | |
2370 | === added file 'templates/cf-hm9k-sender.conf' |
2371 | --- templates/cf-hm9k-sender.conf 1970-01-01 00:00:00 +0000 |
2372 | +++ templates/cf-hm9k-sender.conf 2014-06-05 18:17:30 +0000 |
2373 | @@ -0,0 +1,17 @@ |
2374 | +description "Cloud Foundry HM9000" |
2375 | +author "cf-charmers <cf-charmers@lists.launchpad.net>" |
2376 | +start on runlevel [2345] |
2377 | +stop on runlevel [!2345] |
2378 | +#expect daemon |
2379 | +#apparmor load <profile-path> |
2380 | +setuid vcap |
+setgid vcap
+respawn
+respawn limit 10 5
+normal exit 0
+
+env GOPATH=/var/lib/cloudfoundry/hm-workspace/
+export GOPATH
+
+chdir /var/lib/cloudfoundry/hm-workspace
+exec ./bin/hm9000 send --poll --config=/var/lib/cloudfoundry/cfhm9000/config/hm9000.json

=== added file 'templates/cf-hm9k-shredder.conf'
--- templates/cf-hm9k-shredder.conf 1970-01-01 00:00:00 +0000
+++ templates/cf-hm9k-shredder.conf 2014-06-05 18:17:30 +0000
@@ -0,0 +1,17 @@
+description "Cloud Foundry HM9000"
+author "cf-charmers <cf-charmers@lists.launchpad.net>"
+start on runlevel [2345]
+stop on runlevel [!2345]
+#expect daemon
+#apparmor load <profile-path>
+setuid vcap
+setgid vcap
+respawn
+respawn limit 10 5
+normal exit 0
+
+env GOPATH=/var/lib/cloudfoundry/hm-workspace/
+export GOPATH
+
+chdir /var/lib/cloudfoundry/hm-workspace
+exec ./bin/hm9000 shred --poll --config=/var/lib/cloudfoundry/cfhm9000/config/hm9000.json

=== added file 'templates/hm9000.json'
--- templates/hm9000.json 1970-01-01 00:00:00 +0000
+++ templates/hm9000.json 2014-06-05 18:17:30 +0000
@@ -0,0 +1,31 @@
+{
+  "heartbeat_period_in_seconds": 10,
+
+  "cc_auth_user": "{{cc['user']}}",
+  "cc_auth_password": "{{cc['password']}}",
+  "cc_base_url": "http://{{cc['hostname']}}:{{cc['port']}}",
+  "skip_cert_verify": true,
+  "desired_state_batch_size": 500,
+  "fetcher_network_timeout_in_seconds": 10,
+
+  "store_schema_version": 1,
+  "store_type": "etcd",
+  "store_urls": [
+    {% for unit in etcd -%}
+    "http://{{unit['hostname']}}:{{unit['port']}}"{% if not loop.last %},{% endif -%}
+    {%- endfor %}
+  ],
+
+  "metrics_server_port": 7879,
+  "metrics_server_user": "metrics_server_user",
+  "metrics_server_password": "canHazMetrics?",
+
+  "log_level": "INFO",
+
+  "nats": [{
+    "host": "{{nats['nats_address']}}",
+    "port": {{nats['nats_port']}},
+    "user": "{{nats['nats_user']}}",
+    "password": "{{nats['nats_password']}}"
+  }]
+}
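The `templates/hm9000.json` Jinja2 loop above emits one etcd URL per related unit, with a comma after every entry except the last (`loop.last`). A minimal pure-Python sketch of the rendered result, using hypothetical unit data (real hostnames and ports come from the etcd relation context):

```python
import json

# Hypothetical etcd relation data; in the charm, these values come from
# the relation context built for etcd-relation-changed.
etcd_units = [
    {"hostname": "10.0.0.11", "port": 4001},
    {"hostname": "10.0.0.12", "port": 4001},
]

# Equivalent of the template's {% for unit in etcd %} loop: one
# "http://host:port" string per unit, no trailing comma in the JSON.
store_urls = ["http://%(hostname)s:%(port)s" % u for u in etcd_units]

print(json.dumps({"store_type": "etcd", "store_urls": store_urls}, indent=2))
```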
Reviewers: mp+221770@code.launchpad.net

Message:
Please take a look.

Description:
Finished charm using services framework
https://code.launchpad.net/~johnsca/charms/trusty/cf-hm9000/services/+merge/221770
(do not edit description out of merge proposal)

Please review this at https://codereview.appspot.com/104820043/
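The "services framework" named in the description is `charmhelpers.core.services`, which this branch vendors in as `hooks/charmhelpers/core/services.py`. A hedged sketch of the usage pattern it enables; all names below are illustrative, not the branch's exact `hooks/config.py`:

```python
# Illustrative shape of a services-framework definition. Under this
# pattern, every hook (install, start, *-relation-changed, ...) is a
# symlink to one script, and the framework decides what to (re)start.
service_definition = {
    "service": "cf-hm9k-analyzer",            # matches an upstart job name
    "required_data": ["nats", "etcd", "cc"],  # relations that must be ready
    "data_ready": ["render templates/hm9000.json"],  # actions when ready
}

# With charmhelpers importable, the single hook script would run:
#   from charmhelpers.core.services import ServiceManager
#   ServiceManager([service_definition]).manage()
# manage() starts or stops the job as required_data becomes (un)available.
print(service_definition["service"])
```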
Affected files (+1064, -879 lines):
A [revision details]
D files/README.md
D files/default-
A files/hm9000
D files/hm9000.
D files/hm9000_
D files/hm9000_
D files/hm9000_
D files/hm9000_
D files/hm9000_
D files/hm9000_
D files/hm9000_
D files/hm9000_
D files/syslog_
A hooks/cc-
M hooks/charmhelp
D hooks/charmhelp
M hooks/charmhelp
D hooks/charmhelp
D hooks/charmhelp
D hooks/charmhelp
M hooks/charmhelp
M hooks/charmhelp
M hooks/charmhelp
M hooks/charmhelp
M hooks/charmhelp
M hooks/charmhelp
M hooks/charmhelp
A hooks/charmhelp
A hooks/charmhelp
M hooks/charmhelp
A hooks/config.py
M hooks/config-
A hooks/etcd-
M hooks/install
A hooks/metrics-
A hooks/nats-
D hooks/relation-
D hooks/relation-
D hooks/relation-
D hooks/relation-
M hooks/start
M hooks/stop
M hooks/upgrade-charm
M metadata.yaml
D notes.md
A templates/
A templates/
A templates/
A templates/
A templates/
A templates/
A templates/
A templates/
A templates/