Merge lp:~jacekn/charms/precise/haproxy/haproxy-updates into lp:charms/haproxy
Status: Merged
Merged at revision: 85
Proposed branch: lp:~jacekn/charms/precise/haproxy/haproxy-updates
Merge into: lp:charms/haproxy
Diff against target: 5649 lines (+4614/-89), 51 files modified:
- Makefile (+2/-2)
- README.md (+51/-2)
- config.yaml (+59/-0)
- files/nrpe/check_haproxy.sh (+14/-4)
- files/nrpe/check_haproxy_queue_depth.sh (+2/-2)
- hooks/charmhelpers/cli/README.rst (+57/-0)
- hooks/charmhelpers/cli/__init__.py (+147/-0)
- hooks/charmhelpers/cli/commands.py (+2/-0)
- hooks/charmhelpers/cli/host.py (+15/-0)
- hooks/charmhelpers/contrib/ansible/__init__.py (+165/-0)
- hooks/charmhelpers/contrib/charmhelpers/IMPORT (+4/-0)
- hooks/charmhelpers/contrib/charmhelpers/__init__.py (+184/-0)
- hooks/charmhelpers/contrib/charmsupport/IMPORT (+14/-0)
- hooks/charmhelpers/contrib/charmsupport/nrpe.py (+1/-3)
- hooks/charmhelpers/contrib/hahelpers/apache.py (+58/-0)
- hooks/charmhelpers/contrib/hahelpers/cluster.py (+183/-0)
- hooks/charmhelpers/contrib/jujugui/IMPORT (+4/-0)
- hooks/charmhelpers/contrib/jujugui/utils.py (+602/-0)
- hooks/charmhelpers/contrib/network/ip.py (+69/-0)
- hooks/charmhelpers/contrib/network/ovs/__init__.py (+75/-0)
- hooks/charmhelpers/contrib/openstack/alternatives.py (+17/-0)
- hooks/charmhelpers/contrib/openstack/context.py (+577/-0)
- hooks/charmhelpers/contrib/openstack/neutron.py (+137/-0)
- hooks/charmhelpers/contrib/openstack/templates/__init__.py (+2/-0)
- hooks/charmhelpers/contrib/openstack/templates/ceph.conf (+11/-0)
- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg (+37/-0)
- hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend (+23/-0)
- hooks/charmhelpers/contrib/openstack/templating.py (+280/-0)
- hooks/charmhelpers/contrib/openstack/utils.py (+440/-0)
- hooks/charmhelpers/contrib/saltstack/__init__.py (+102/-0)
- hooks/charmhelpers/contrib/ssl/__init__.py (+78/-0)
- hooks/charmhelpers/contrib/storage/linux/ceph.py (+383/-0)
- hooks/charmhelpers/contrib/storage/linux/loopback.py (+62/-0)
- hooks/charmhelpers/contrib/storage/linux/lvm.py (+88/-0)
- hooks/charmhelpers/contrib/storage/linux/utils.py (+25/-0)
- hooks/charmhelpers/contrib/templating/contexts.py (+73/-0)
- hooks/charmhelpers/contrib/templating/pyformat.py (+13/-0)
- hooks/charmhelpers/core/hookenv.py (+78/-23)
- hooks/charmhelpers/core/host.py (+66/-14)
- hooks/charmhelpers/fetch/__init__.py (+84/-12)
- hooks/charmhelpers/fetch/bzrurl.py (+7/-2)
- hooks/charmhelpers/payload/__init__.py (+1/-0)
- hooks/charmhelpers/payload/archive.py (+57/-0)
- hooks/charmhelpers/payload/execd.py (+50/-0)
- hooks/hooks.py (+86/-11)
- hooks/tests/test_config_changed_hooks.py (+1/-0)
- hooks/tests/test_helpers.py (+67/-9)
- hooks/tests/test_install.py (+5/-0)
- hooks/tests/test_reverseproxy_hooks.py (+55/-3)
- hooks/tests/test_website_hooks.py (+0/-1)
- metadata.yaml (+1/-1)
To merge this branch: bzr merge lp:~jacekn/charms/precise/haproxy/haproxy-updates
Related bugs:
Reviewer | Review Type | Status
---|---|---
Andrew McLeod | community | Approve
Tim Van Steenburgh | community | Needs Fixing

Review via email: mp+272559@code.launchpad.net
Commit message
Description of the change
This branch makes a few improvements to the haproxy charm:
- add SSL support
- add nagios_
- multiple monitoring fixes
- add open_monitoring
Please note that all tests related to the changes are passing. There is, however, one test that is currently failing in the upstream branch, unrelated to my changes. See https:/
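As a quick reference, the new options introduced by this branch could all be set together at deploy time. The following is an illustrative sketch only (the values are placeholders, not charm defaults); option names are taken from the config.yaml changes in this branch:

```yaml
# haproxy-config.yaml -- illustrative values only
haproxy:
  enable_monitoring: true
  # Keep the stats port closed when haproxy is exposed on a shared network.
  open_monitoring_port: false
  peering_mode: active-active
  nagios_servicegroups: "load-balancers"
  # ssl_cert (HAProxy >= 1.5) is a PEM concatenation of the public
  # certificate, any intermediate CA certificates, and the private key.
  ssl_cert: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
```

A file like this could then be passed to juju at deploy time (e.g. `juju deploy --config haproxy-config.yaml cs:precise/haproxy`).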
Tim Van Steenburgh (tvansteenburgh) wrote:
Barry Price (barryprice) wrote:
Hi Tim,
Sorry for the delay on this - as suggested, I've merged your fixes back into this branch if you could re-review.
Andrew McLeod (admcleod) wrote:
Hi Barry,
I've tested this now and can confirm all tests pass, so here's my +1
Andrew
Cory Johns (johnsca) wrote:
Barry and Jacek,
This has been merged and will be available in cs:precise/haproxy shortly.
However, I see that there is also a separate cs:trusty/haproxy branch. Should these changes also be proposed against that branch?
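For reviewers skimming this proposal: the "active-passive" indirection that the README changes in the diff describe can be modelled in a few lines of Python. This is an illustrative sketch only, not charm code; the sorted-order peer selection is an assumption, and it simply shows the hop chain and the "configured port + 1" convention:

```python
def route(entry_unit, working_peers, port=80):
    """Model the active-passive hop chain: the unit that receives the
    request forwards it to the first working peer, and the real
    load-balancing service on that peer listens on port + 1."""
    first_up = sorted(working_peers)[0]  # assumed selection order
    return [(entry_unit, port), (first_up, port + 1)]


# haproxy/2 receives a request while all three units are up:
print(route("haproxy/2", ["haproxy/0", "haproxy/1", "haproxy/2"]))

# If haproxy/0 goes down, traffic shifts to the next working peer:
print(route("haproxy/2", ["haproxy/1", "haproxy/2"]))
```

This matches the README examples: haproxy/2:80 forwards to haproxy/0:81 while haproxy/0 is healthy, and to haproxy/1:81 once it is not.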
Preview Diff
1 | === modified file 'Makefile' |
2 | --- Makefile 2014-09-11 18:50:46 +0000 |
3 | +++ Makefile 2016-02-12 04:16:45 +0000 |
4 | @@ -18,7 +18,7 @@ |
5 | @test `cat revision` = 0 && rm revision |
6 | |
7 | .venv: |
8 | - sudo apt-get install -y python-apt python-virtualenv |
9 | + sudo apt-get install -y python-apt python-virtualenv python-jinja2 |
10 | virtualenv .venv --system-site-packages |
11 | .venv/bin/pip install -I nose testtools mock pyyaml |
12 | |
13 | @@ -28,7 +28,7 @@ |
14 | |
15 | lint: |
16 | @echo Checking for Python syntax... |
17 | - @flake8 $(HOOKS_DIR) --ignore=E123 --exclude=$(HOOKS_DIR)/charmhelpers && echo OK |
18 | + @flake8 $(HOOKS_DIR) --ignore=E123,E265 --exclude=$(HOOKS_DIR)/charmhelpers && echo OK |
19 | |
20 | sourcedeps: $(PWD)/config-manager.txt |
21 | @echo Updating source dependencies... |
22 | |
23 | === modified file 'README.md' |
24 | --- README.md 2014-09-08 18:21:01 +0000 |
25 | +++ README.md 2016-02-12 04:16:45 +0000 |
26 | @@ -78,7 +78,6 @@ |
27 | |
28 | ## Website Relation |
29 | |
30 | - |
31 | The website relation is the other side of haproxy. It can communicate with |
32 | charms written like apache2 that can act as a front-end for haproxy to take care of |
33 | things like ssl encryption. When joining a service like apache2 on its |
34 | @@ -130,7 +129,57 @@ |
35 | cross-environment relations then that will be the best way to handle |
36 | this configuration, as it will work in either scenario. |
37 | |
38 | -## HAProxy Project Information |
39 | +## peering\_mode and the indirection layer |
40 | + |
41 | +If you are going to spawn multiple haproxy units, you should pay special |
42 | +attention to the peering\_mode configuration option. |
43 | + |
44 | +### active-passive mode |
45 | + |
46 | +The peering\_mode option defaults to "active-passive" and in this mode, all |
47 | +haproxy units ("peers") will proxy traffic to the first working peer (i.e. that |
48 | +passes a basic layer4 check). What this means is that extra peers are working |
49 | +as "hot spares", and so adding units doesn't add global bandwidth to the |
50 | +haproxy layer. |
51 | + |
52 | +In order to achieve this, the charm configures a new service in haproxy that |
53 | +will simply forward the traffic to the first working peer. The haproxy service |
54 | +that actually load-balances between the backends is renamed, and its port |
55 | +number is increased by one. |
56 | + |
57 | +For example, if you have 3 working haproxy units haproxy/0, haproxy/1 and |
58 | +haproxy/2 configured to listen on port 80, in active-passive mode, and |
59 | +haproxy/2 gets a request, the request is routed through the following path: |
60 | + |
61 | +haproxy/2:80 ==> haproxy/0:81 ==> \[backends\] |
62 | + |
63 | +In the same fashion, if haproxy/1 receives a request, it's routed in the following way: |
64 | + |
65 | +haproxy/1:80 ==> haproxy/0:81 ==> \[backends\] |
66 | + |
67 | +If haproxy/0 was to go down, then all the requests would be forwarded to the |
68 | +next working peer, i.e. haproxy/1. In this case, a request received by |
69 | +haproxy/2 would be routed as follows: |
70 | + |
71 | +haproxy/2:80 ==> haproxy/1:81 ==> \[backends\] |
72 | + |
73 | +This mode allows strict control of the maximum number of connections the |
74 | +backends will receive, and guarantees you'll have enough bandwidth to the |
75 | +backends should an haproxy unit die, at the cost of having less overall |
76 | +bandwidth to the backends. |
77 | + |
78 | +### active-active mode |
79 | + |
80 | +If the peering\_mode option is set to "active-active", then the haproxy units |
81 | +will be independent of each other and will simply load-balance the traffic to |
82 | +the backends. In this case, the indirection layer described above is not |
83 | +created. |
84 | + |
85 | +This mode allows increasing the bandwidth to the backends by adding additional |
86 | +units, at the cost of having less control over the number of connections that |
87 | +they will receive. |
88 | + |
89 | +# HAProxy Project Information |
90 | |
91 | - [HAProxy Homepage](http://haproxy.1wt.eu/) |
92 | - [HAProxy mailing list](http://haproxy.1wt.eu/#tact) |
93 | |
94 | === modified file 'config.yaml' |
95 | --- config.yaml 2014-09-08 18:21:01 +0000 |
96 | +++ config.yaml 2016-02-12 04:16:45 +0000 |
97 | @@ -71,6 +71,14 @@ |
98 | default: False |
99 | type: boolean |
100 | description: Enable monitoring |
101 | + open_monitoring_port: |
102 | + default: True |
103 | + type: boolean |
104 | + description: | |
105 | + Open the monitoring port when enable_monitoring is true. |
106 | + |
107 | + Consider setting this to false if exposing haproxy on a shared |
108 | + or untrusted network, e.g., when deploying a frontend. |
109 | monitoring_port: |
110 | default: 10000 |
111 | type: int |
112 | @@ -126,6 +134,20 @@ |
113 | with a cookie. Session are sticky by default. To turn off sticky sessions, |
114 | remove the 'cookie SRVNAME insert' and 'cookie S{i}' stanzas from |
115 | `service_options` and `server_options`. |
116 | + ssl_cert: |
117 | + default: "" |
118 | + type: string |
119 | + description: | |
120 | + This option is only supported in Haproxy >= 1.5. |
121 | + |
122 | + Use this SSL certificate for frontend SSL termination, if specified. |
123 | + This should be a concatenation of: |
124 | + * The public certificate (PEM) |
125 | + * Zero or more intermediate CA certificates (PEM) |
126 | + * The private key (PEM) |
127 | + The certificate(s) + private key will be installed with read-access to the |
128 | + haproxy service user. If this option is set, all bind stanzas will use this |
129 | + certificate. |
130 | sysctl: |
131 | default: "" |
132 | type: string |
133 | @@ -142,6 +164,33 @@ |
134 | juju-postgresql-0 |
135 | If you're running multiple environments with the same services in them |
136 | this allows you to differentiate between them. |
137 | + nagios_servicegroups: |
138 | + default: "" |
139 | + type: string |
140 | + description: | |
141 | + A comma-separated list of nagios servicegroups. |
142 | + If left empty, the nagios_context will be used as the servicegroup. |
143 | + install_sources: |
144 | + default: "" |
145 | + type: string |
146 | + description: | |
147 | + YAML list of additional installation sources, as a string. The number of |
148 | + install_sources must match the number of install_keys. For example: |
149 | + install_sources: | |
150 | + - ppa:project1/ppa |
151 | + - ppa:project2/ppa |
152 | + install_keys: |
153 | + default: "" |
154 | + type: string |
155 | + description: | |
156 | + YAML list of GPG keys for installation sources, as a string. For apt repository |
157 | + URLs, use the public key ID used to verify package signatures. For |
158 | + other sources such as PPA, use empty string. This list must have the |
159 | + same number of elements as install_sources, even if the key items are |
160 | + all empty string. An example to go with the above for install_sources: |
161 | + install_keys: | |
162 | + - "" |
163 | + - "" |
164 | metrics_target: |
165 | default: "" |
166 | type: string |
167 | @@ -159,3 +208,13 @@ |
168 | default: 5 |
169 | type: int |
170 | description: Period for metrics cron job to run in minutes |
171 | + peering_mode: |
172 | + default: "active-passive" |
173 | + type: string |
174 | + description: | |
175 | + Possible values: "active-passive", "active-active". This is only used |
176 | + if several units are spawned. In "active-passive" mode, all the units will |
177 | + forward traffic to the first working haproxy unit, which will then forward it |
178 | + to configured backends. In "active-active" mode, each unit will proxy the |
179 | + traffic directly to the backends. The "active-passive" mode gives better |
180 | + control of the maximum number of connections that will be opened to a backend server. |
181 | |
182 | === modified file 'files/nrpe/check_haproxy.sh' |
183 | --- files/nrpe/check_haproxy.sh 2013-03-27 15:41:26 +0000 |
184 | +++ files/nrpe/check_haproxy.sh 2016-02-12 04:16:45 +0000 |
185 | @@ -10,14 +10,24 @@ |
186 | NOTACTIVE='' |
187 | LOGFILE=/var/log/nagios/check_haproxy.log |
188 | AUTH=$(grep -r "stats auth" /etc/haproxy | head -1 | awk '{print $4}') |
189 | - |
190 | -for appserver in $(grep ' server' /etc/haproxy/haproxy.cfg | awk '{print $2'}); |
191 | +SSL=$(grep 10000 /etc/haproxy/haproxy.cfg | grep -q ssl && echo "-S") |
192 | + |
193 | +/usr/lib/nagios/plugins/check_http ${SSL} -a ${AUTH} -I 127.0.0.1 -p 10000 --regex="HAProxy version 1.5.*" -e ' 200 OK' > /dev/null 2>&1 |
194 | +if [ $? == 0 ] |
195 | +then |
196 | + # this is 1.5, which changed the class values |
197 | + REGEX="class=\"(active|backup)(3|4).*" |
198 | +else |
199 | + REGEX="class=\"(active|backup)(2|3).*" |
200 | +fi |
201 | + |
202 | +for appserver in $(grep ' server' /etc/haproxy/haproxy.cfg | awk '{print $2}'); |
203 | do |
204 | - output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 10000 --regex="class=\"(active|backup)(2|3).*${appserver}" -e ' 200 OK') |
205 | + output=$(/usr/lib/nagios/plugins/check_http ${SSL} -a ${AUTH} -I 127.0.0.1 -p 10000 --regex="${REGEX}${appserver}" -e ' 200 OK') |
206 | if [ $? != 0 ]; then |
207 | date >> $LOGFILE |
208 | echo $output >> $LOGFILE |
209 | - /usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 10000 -v | grep $appserver >> $LOGFILE 2>&1 |
210 | + /usr/lib/nagios/plugins/check_http ${SSL} -a ${AUTH} -I 127.0.0.1 -p 10000 -v | grep $appserver >> $LOGFILE 2>&1 |
211 | CRITICAL=1 |
212 | NOTACTIVE="${NOTACTIVE} $appserver" |
213 | fi |
214 | |
215 | === modified file 'files/nrpe/check_haproxy_queue_depth.sh' |
216 | --- files/nrpe/check_haproxy_queue_depth.sh 2014-07-22 21:13:48 +0000 |
217 | +++ files/nrpe/check_haproxy_queue_depth.sh 2016-02-12 04:16:45 +0000 |
218 | @@ -16,8 +16,8 @@ |
219 | |
220 | for BACKEND in $(echo $HAPROXYSTATS| xargs -n1 | grep BACKEND | awk -F , '{print $1}') |
221 | do |
222 | - CURRQ=$(echo "$HAPROXYSTATS" | grep $BACKEND | grep BACKEND | cut -d , -f 3) |
223 | - MAXQ=$(echo "$HAPROXYSTATS" | grep $BACKEND | grep BACKEND | cut -d , -f 4) |
224 | + CURRQ=$(echo "$HAPROXYSTATS" | grep ^$BACKEND, | grep BACKEND | cut -d , -f 3) |
225 | + MAXQ=$(echo "$HAPROXYSTATS" | grep ^$BACKEND, | grep BACKEND | cut -d , -f 4) |
226 | |
227 | if [[ $CURRQ -gt $CURRQthrsh || $MAXQ -gt $MAXQthrsh ]] ; then |
228 | echo "CRITICAL: queue depth for $BACKEND - CURRENT:$CURRQ MAX:$MAXQ" |
229 | |
230 | === added directory 'hooks/charmhelpers/cli' |
231 | === added file 'hooks/charmhelpers/cli/README.rst' |
232 | --- hooks/charmhelpers/cli/README.rst 1970-01-01 00:00:00 +0000 |
233 | +++ hooks/charmhelpers/cli/README.rst 2016-02-12 04:16:45 +0000 |
234 | @@ -0,0 +1,57 @@ |
235 | +========== |
236 | +Commandant |
237 | +========== |
238 | + |
239 | +----------------------------------------------------- |
240 | +Automatic command-line interfaces to Python functions |
241 | +----------------------------------------------------- |
242 | + |
243 | +One of the benefits of ``libvirt`` is the uniformity of the interface: the C API (as well as the bindings in other languages) is a set of functions that accept parameters that are nearly identical to the command-line arguments. If you run ``virsh``, you get an interactive command prompt that supports all of the same commands that your shell scripts use as ``virsh`` subcommands. |
244 | + |
245 | +Command execution and stdio manipulation is the greatest common factor across all development systems in the POSIX environment. By exposing your functions as commands that manipulate streams of text, you can make life easier for all the Ruby and Erlang and Go programmers in your life. |
246 | + |
247 | +Goals |
248 | +===== |
249 | + |
250 | +* Single decorator to expose a function as a command. |
251 | + * now two decorators - one "automatic" and one that allows authors to manipulate the arguments for fine-grained control.(MW) |
252 | +* Automatic analysis of function signature through ``inspect.getargspec()`` |
253 | +* Command argument parser built automatically with ``argparse`` |
254 | +* Interactive interpreter loop object made with ``Cmd`` |
255 | +* Options to output structured return value data via ``pprint``, ``yaml`` or ``json`` dumps. |
256 | + |
257 | +Other Important Features that need writing |
258 | +------------------------------------------ |
259 | + |
260 | +* Help and Usage documentation can be automatically generated, but it will be important to let users override this behaviour |
261 | +* The decorator should allow specifying further parameters to the parser's add_argument() calls, to specify types or to make arguments behave as boolean flags, etc. |
262 | + - Filename arguments are important, as good practice is for functions to accept file objects as parameters. |
263 | + - choices arguments help to limit bad input before the function is called |
264 | +* Some automatic behaviour could make for better defaults, once the user can override them. |
265 | + - We could automatically detect arguments that default to False or True, and automatically support --no-foo for foo=True. |
266 | + - We could automatically support hyphens as alternates for underscores |
267 | + - Arguments defaulting to sequence types could support the ``append`` action. |
268 | + |
269 | + |
270 | +----------------------------------------------------- |
271 | +Implementing subcommands |
272 | +----------------------------------------------------- |
273 | + |
274 | +(WIP) |
275 | + |
276 | +So as to avoid dependencies on the cli module, subcommands should be defined separately from their implementations. The recommendation would be to place definitions into separate modules near the implementations which they expose. |
277 | + |
278 | +Some examples:: |
279 | + |
280 | + from charmhelpers.cli import CommandLine |
281 | + from charmhelpers.payload import execd |
282 | + from charmhelpers.foo import bar |
283 | + |
284 | + cli = CommandLine() |
285 | + |
286 | + cli.subcommand(execd.execd_run) |
287 | + |
288 | + @cli.subcommand_builder("bar", help="Bar baz qux") |
289 | + def barcmd_builder(subparser): |
290 | + subparser.add_argument('argument1', help="yackety") |
291 | + return bar |
292 | |
293 | === added file 'hooks/charmhelpers/cli/__init__.py' |
294 | --- hooks/charmhelpers/cli/__init__.py 1970-01-01 00:00:00 +0000 |
295 | +++ hooks/charmhelpers/cli/__init__.py 2016-02-12 04:16:45 +0000 |
296 | @@ -0,0 +1,147 @@ |
297 | +import inspect |
298 | +import itertools |
299 | +import argparse |
300 | +import sys |
301 | + |
302 | + |
303 | +class OutputFormatter(object): |
304 | + def __init__(self, outfile=sys.stdout): |
305 | + self.formats = ( |
306 | + "raw", |
307 | + "json", |
308 | + "py", |
309 | + "yaml", |
310 | + "csv", |
311 | + "tab", |
312 | + ) |
313 | + self.outfile = outfile |
314 | + |
315 | + def add_arguments(self, argument_parser): |
316 | + formatgroup = argument_parser.add_mutually_exclusive_group() |
317 | + choices = self.supported_formats |
318 | + formatgroup.add_argument("--format", metavar='FMT', |
319 | + help="Select output format for returned data, " |
320 | + "where FMT is one of: {}".format(choices), |
321 | + choices=choices, default='raw') |
322 | + for fmt in self.formats: |
323 | + fmtfunc = getattr(self, fmt) |
324 | + formatgroup.add_argument("-{}".format(fmt[0]), |
325 | + "--{}".format(fmt), action='store_const', |
326 | + const=fmt, dest='format', |
327 | + help=fmtfunc.__doc__) |
328 | + |
329 | + @property |
330 | + def supported_formats(self): |
331 | + return self.formats |
332 | + |
333 | + def raw(self, output): |
334 | + """Output data as raw string (default)""" |
335 | + self.outfile.write(str(output)) |
336 | + |
337 | + def py(self, output): |
338 | + """Output data as a nicely-formatted python data structure""" |
339 | + import pprint |
340 | + pprint.pprint(output, stream=self.outfile) |
341 | + |
342 | + def json(self, output): |
343 | + """Output data in JSON format""" |
344 | + import json |
345 | + json.dump(output, self.outfile) |
346 | + |
347 | + def yaml(self, output): |
348 | + """Output data in YAML format""" |
349 | + import yaml |
350 | + yaml.safe_dump(output, self.outfile) |
351 | + |
352 | + def csv(self, output): |
353 | + """Output data as excel-compatible CSV""" |
354 | + import csv |
355 | + csvwriter = csv.writer(self.outfile) |
356 | + csvwriter.writerows(output) |
357 | + |
358 | + def tab(self, output): |
359 | + """Output data in excel-compatible tab-delimited format""" |
360 | + import csv |
361 | + csvwriter = csv.writer(self.outfile, dialect=csv.excel_tab) |
362 | + csvwriter.writerows(output) |
363 | + |
364 | + def format_output(self, output, fmt='raw'): |
365 | + fmtfunc = getattr(self, fmt) |
366 | + fmtfunc(output) |
367 | + |
368 | + |
369 | +class CommandLine(object): |
370 | + argument_parser = None |
371 | + subparsers = None |
372 | + formatter = None |
373 | + |
374 | + def __init__(self): |
375 | + if not self.argument_parser: |
376 | + self.argument_parser = argparse.ArgumentParser(description='Perform common charm tasks') |
377 | + if not self.formatter: |
378 | + self.formatter = OutputFormatter() |
379 | + self.formatter.add_arguments(self.argument_parser) |
380 | + if not self.subparsers: |
381 | + self.subparsers = self.argument_parser.add_subparsers(help='Commands') |
382 | + |
383 | + def subcommand(self, command_name=None): |
384 | + """ |
385 | + Decorate a function as a subcommand. Use its arguments as the |
386 | + command-line arguments""" |
387 | + def wrapper(decorated): |
388 | + cmd_name = command_name or decorated.__name__ |
389 | + subparser = self.subparsers.add_parser(cmd_name, |
390 | + description=decorated.__doc__) |
391 | + for args, kwargs in describe_arguments(decorated): |
392 | + subparser.add_argument(*args, **kwargs) |
393 | + subparser.set_defaults(func=decorated) |
394 | + return decorated |
395 | + return wrapper |
396 | + |
397 | + def subcommand_builder(self, command_name, description=None): |
398 | + """ |
399 | + Decorate a function that builds a subcommand. Builders should accept a |
400 | + single argument (the subparser instance) and return the function to be |
401 | + run as the command.""" |
402 | + def wrapper(decorated): |
403 | + subparser = self.subparsers.add_parser(command_name) |
404 | + func = decorated(subparser) |
405 | + subparser.set_defaults(func=func) |
406 | + subparser.description = description or func.__doc__ |
407 | + return wrapper |
408 | + |
409 | + def run(self): |
410 | + "Run cli, processing arguments and executing subcommands." |
411 | + arguments = self.argument_parser.parse_args() |
412 | + argspec = inspect.getargspec(arguments.func) |
413 | + vargs = [] |
414 | + kwargs = {} |
415 | + if argspec.varargs: |
416 | + vargs = getattr(arguments, argspec.varargs) |
417 | + for arg in argspec.args: |
418 | + kwargs[arg] = getattr(arguments, arg) |
419 | + self.formatter.format_output(arguments.func(*vargs, **kwargs), arguments.format) |
420 | + |
421 | + |
422 | +cmdline = CommandLine() |
423 | + |
424 | + |
425 | +def describe_arguments(func): |
426 | + """ |
427 | + Analyze a function's signature and return a data structure suitable for |
428 | + passing in as arguments to an argparse parser's add_argument() method.""" |
429 | + |
430 | + argspec = inspect.getargspec(func) |
431 | + # we should probably raise an exception somewhere if func includes **kwargs |
432 | + if argspec.defaults: |
433 | + positional_args = argspec.args[:-len(argspec.defaults)] |
434 | + keyword_names = argspec.args[-len(argspec.defaults):] |
435 | + for arg, default in itertools.izip(keyword_names, argspec.defaults): |
436 | + yield ('--{}'.format(arg),), {'default': default} |
437 | + else: |
438 | + positional_args = argspec.args |
439 | + |
440 | + for arg in positional_args: |
441 | + yield (arg,), {} |
442 | + if argspec.varargs: |
443 | + yield (argspec.varargs,), {'nargs': '*'} |
444 | |
445 | === added file 'hooks/charmhelpers/cli/commands.py' |
446 | --- hooks/charmhelpers/cli/commands.py 1970-01-01 00:00:00 +0000 |
447 | +++ hooks/charmhelpers/cli/commands.py 2016-02-12 04:16:45 +0000 |
448 | @@ -0,0 +1,2 @@ |
449 | +from . import CommandLine |
450 | +import host |
451 | |
452 | === added file 'hooks/charmhelpers/cli/host.py' |
453 | --- hooks/charmhelpers/cli/host.py 1970-01-01 00:00:00 +0000 |
454 | +++ hooks/charmhelpers/cli/host.py 2016-02-12 04:16:45 +0000 |
455 | @@ -0,0 +1,15 @@ |
456 | +from . import cmdline |
457 | +from charmhelpers.core import host |
458 | + |
459 | + |
460 | +@cmdline.subcommand() |
461 | +def mounts(): |
462 | + "List mounts" |
463 | + return host.mounts() |
464 | + |
465 | + |
466 | +@cmdline.subcommand_builder('service', description="Control system services") |
467 | +def service(subparser): |
468 | + subparser.add_argument("action", help="The action to perform (start, stop, etc...)") |
469 | + subparser.add_argument("service_name", help="Name of the service to control") |
470 | + return host.service |
471 | |
472 | === added directory 'hooks/charmhelpers/contrib/ansible' |
473 | === added file 'hooks/charmhelpers/contrib/ansible/__init__.py' |
474 | --- hooks/charmhelpers/contrib/ansible/__init__.py 1970-01-01 00:00:00 +0000 |
475 | +++ hooks/charmhelpers/contrib/ansible/__init__.py 2016-02-12 04:16:45 +0000 |
476 | @@ -0,0 +1,165 @@ |
477 | +# Copyright 2013 Canonical Ltd. |
478 | +# |
479 | +# Authors: |
480 | +# Charm Helpers Developers <juju@lists.ubuntu.com> |
481 | +"""Charm Helpers ansible - declare the state of your machines. |
482 | + |
483 | +This helper enables you to declare your machine state, rather than |
484 | +program it procedurally (and have to test each change to your procedures). |
485 | +Your install hook can be as simple as: |
486 | + |
487 | +{{{ |
488 | +import charmhelpers.contrib.ansible |
489 | + |
490 | + |
491 | +def install(): |
492 | + charmhelpers.contrib.ansible.install_ansible_support() |
493 | + charmhelpers.contrib.ansible.apply_playbook('playbooks/install.yaml') |
494 | +}}} |
495 | + |
496 | +and won't need to change (nor will its tests) when you change the machine |
497 | +state. |
498 | + |
499 | +All of your juju config and relation-data are available as template |
500 | +variables within your playbooks and templates. An install playbook looks |
501 | +something like: |
502 | + |
503 | +{{{ |
504 | +--- |
505 | +- hosts: localhost |
506 | + user: root |
507 | + |
508 | + tasks: |
509 | + - name: Add private repositories. |
510 | + template: |
511 | + src: ../templates/private-repositories.list.jinja2 |
512 | + dest: /etc/apt/sources.list.d/private.list |
513 | + |
514 | + - name: Update the cache. |
515 | + apt: update_cache=yes |
516 | + |
517 | + - name: Install dependencies. |
518 | + apt: pkg={{ item }} |
519 | + with_items: |
520 | + - python-mimeparse |
521 | + - python-webob |
522 | + - sunburnt |
523 | + |
524 | + - name: Setup groups. |
525 | + group: name={{ item.name }} gid={{ item.gid }} |
526 | + with_items: |
527 | + - { name: 'deploy_user', gid: 1800 } |
528 | + - { name: 'service_user', gid: 1500 } |
529 | + |
530 | + ... |
531 | +}}} |
532 | + |
533 | +Read more online about playbooks[1] and standard ansible modules[2]. |
534 | + |
535 | +[1] http://www.ansibleworks.com/docs/playbooks.html |
536 | +[2] http://www.ansibleworks.com/docs/modules.html |
537 | +""" |
538 | +import os |
539 | +import subprocess |
540 | + |
541 | +import charmhelpers.contrib.templating.contexts |
542 | +import charmhelpers.core.host |
543 | +import charmhelpers.core.hookenv |
544 | +import charmhelpers.fetch |
545 | + |
546 | + |
547 | +charm_dir = os.environ.get('CHARM_DIR', '') |
548 | +ansible_hosts_path = '/etc/ansible/hosts' |
549 | +# Ansible will automatically include any vars in the following |
550 | +# file in its inventory when run locally. |
551 | +ansible_vars_path = '/etc/ansible/host_vars/localhost' |
552 | + |
553 | + |
554 | +def install_ansible_support(from_ppa=True): |
555 | + """Installs the ansible package. |
556 | + |
557 | + By default it is installed from the PPA [1] linked from |
558 | + the ansible website [2]. |
559 | + |
560 | + [1] https://launchpad.net/~rquillo/+archive/ansible |
561 | + [2] http://www.ansibleworks.com/docs/gettingstarted.html#ubuntu-and-debian |
562 | + |
563 | + If from_ppa is false, you must ensure that the package is available |
564 | + from a configured repository. |
565 | + """ |
566 | + if from_ppa: |
567 | + charmhelpers.fetch.add_source('ppa:rquillo/ansible') |
568 | + charmhelpers.fetch.apt_update(fatal=True) |
569 | + charmhelpers.fetch.apt_install('ansible') |
570 | + with open(ansible_hosts_path, 'w+') as hosts_file: |
571 | + hosts_file.write('localhost ansible_connection=local') |
572 | + |
573 | + |
574 | +def apply_playbook(playbook, tags=None): |
575 | + tags = tags or [] |
576 | + tags = ",".join(tags) |
577 | + charmhelpers.contrib.templating.contexts.juju_state_to_yaml( |
578 | + ansible_vars_path, namespace_separator='__', |
579 | + allow_hyphens_in_keys=False) |
580 | + call = [ |
581 | + 'ansible-playbook', |
582 | + '-c', |
583 | + 'local', |
584 | + playbook, |
585 | + ] |
586 | + if tags: |
587 | + call.extend(['--tags', '{}'.format(tags)]) |
588 | + subprocess.check_call(call) |
589 | + |
590 | + |
591 | +class AnsibleHooks(charmhelpers.core.hookenv.Hooks): |
592 | + """Run a playbook with the hook-name as the tag. |
593 | + |
594 | + This helper builds on the standard hookenv.Hooks helper, |
595 | + but additionally runs the playbook with the hook-name specified |
596 | + using --tags (ie. running all the tasks tagged with the hook-name). |
597 | + |
598 | + Example: |
599 | + hooks = AnsibleHooks(playbook_path='playbooks/my_machine_state.yaml') |
600 | + |
601 | + # All the tasks within my_machine_state.yaml tagged with 'install' |
602 | + # will be run automatically after do_custom_work() |
603 | + @hooks.hook() |
604 | + def install(): |
605 | + do_custom_work() |
606 | + |
607 | + # For most of your hooks, you won't need to do anything other |
608 | + # than run the tagged tasks for the hook: |
609 | + @hooks.hook('config-changed', 'start', 'stop') |
610 | + def just_use_playbook(): |
611 | + pass |
612 | + |
613 | + # As a convenience, you can avoid the above noop function by specifying |
614 | + # the hooks which are handled by ansible-only and they'll be registered |
615 | + # for you: |
616 | + # hooks = AnsibleHooks( |
617 | + # 'playbooks/my_machine_state.yaml', |
618 | + # default_hooks=['config-changed', 'start', 'stop']) |
619 | + |
620 | + if __name__ == "__main__": |
621 | + # execute a hook based on the name the program is called by |
622 | + hooks.execute(sys.argv) |
623 | + """ |
624 | + |
625 | + def __init__(self, playbook_path, default_hooks=None): |
626 | + """Register any hooks handled by ansible.""" |
627 | + super(AnsibleHooks, self).__init__() |
628 | + |
629 | + self.playbook_path = playbook_path |
630 | + |
631 | + default_hooks = default_hooks or [] |
632 | + noop = lambda *args, **kwargs: None |
633 | + for hook in default_hooks: |
634 | + self.register(hook, noop) |
635 | + |
636 | + def execute(self, args): |
637 | + """Execute the hook followed by the playbook using the hook as tag.""" |
638 | + super(AnsibleHooks, self).execute(args) |
639 | + hook_name = os.path.basename(args[0]) |
640 | + charmhelpers.contrib.ansible.apply_playbook( |
641 | + self.playbook_path, tags=[hook_name]) |
642 | |
643 | === added directory 'hooks/charmhelpers/contrib/charmhelpers' |
644 | === added file 'hooks/charmhelpers/contrib/charmhelpers/IMPORT' |
645 | --- hooks/charmhelpers/contrib/charmhelpers/IMPORT 1970-01-01 00:00:00 +0000 |
646 | +++ hooks/charmhelpers/contrib/charmhelpers/IMPORT 2016-02-12 04:16:45 +0000 |
647 | @@ -0,0 +1,4 @@ |
648 | +Source: lp:charm-tools/trunk |
649 | + |
650 | +charm-tools/helpers/python/charmhelpers/__init__.py -> charmhelpers/charmhelpers/contrib/charmhelpers/__init__.py |
651 | +charm-tools/helpers/python/charmhelpers/tests/test_charmhelpers.py -> charmhelpers/tests/contrib/charmhelpers/test_charmhelpers.py |
652 | |
653 | === added file 'hooks/charmhelpers/contrib/charmhelpers/__init__.py' |
654 | --- hooks/charmhelpers/contrib/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000 |
655 | +++ hooks/charmhelpers/contrib/charmhelpers/__init__.py 2016-02-12 04:16:45 +0000 |
656 | @@ -0,0 +1,184 @@ |
657 | +# Copyright 2012 Canonical Ltd. This software is licensed under the |
658 | +# GNU Affero General Public License version 3 (see the file LICENSE). |
659 | + |
660 | +import warnings |
661 | +warnings.warn("contrib.charmhelpers is deprecated", DeprecationWarning) |
662 | + |
663 | +"""Helper functions for writing Juju charms in Python.""" |
664 | + |
665 | +__metaclass__ = type |
666 | +__all__ = [ |
667 | + #'get_config', # core.hookenv.config() |
668 | + #'log', # core.hookenv.log() |
669 | + #'log_entry', # core.hookenv.log() |
670 | + #'log_exit', # core.hookenv.log() |
671 | + #'relation_get', # core.hookenv.relation_get() |
672 | + #'relation_set', # core.hookenv.relation_set() |
673 | + #'relation_ids', # core.hookenv.relation_ids() |
674 | + #'relation_list', # core.hookenv.relation_units() |
675 | + #'config_get', # core.hookenv.config() |
676 | + #'unit_get', # core.hookenv.unit_get() |
677 | + #'open_port', # core.hookenv.open_port() |
678 | + #'close_port', # core.hookenv.close_port() |
679 | + #'service_control', # core.host.service() |
680 | + 'unit_info', # client-side, NOT IMPLEMENTED |
681 | + 'wait_for_machine', # client-side, NOT IMPLEMENTED |
682 | + 'wait_for_page_contents', # client-side, NOT IMPLEMENTED |
683 | + 'wait_for_relation', # client-side, NOT IMPLEMENTED |
684 | + 'wait_for_unit', # client-side, NOT IMPLEMENTED |
685 | +] |
686 | + |
687 | +import operator |
688 | +from shelltoolbox import ( |
689 | + command, |
690 | +) |
691 | +import tempfile |
692 | +import time |
693 | +import urllib2 |
694 | +import yaml |
695 | + |
696 | +SLEEP_AMOUNT = 0.1 |
697 | +# We create a juju_status Command here because it makes testing much, |
698 | +# much easier. |
699 | +juju_status = lambda: command('juju')('status') |
700 | + |
701 | +# re-implemented as charmhelpers.fetch.configure_sources() |
702 | +#def configure_source(update=False): |
703 | +# source = config_get('source') |
704 | +# if ((source.startswith('ppa:') or |
705 | +# source.startswith('cloud:') or |
706 | +# source.startswith('http:'))): |
707 | +# run('add-apt-repository', source) |
708 | +# if source.startswith("http:"): |
709 | +# run('apt-key', 'import', config_get('key')) |
710 | +# if update: |
711 | +# run('apt-get', 'update') |
712 | + |
713 | + |
714 | +# DEPRECATED: client-side only |
715 | +def make_charm_config_file(charm_config): |
716 | + charm_config_file = tempfile.NamedTemporaryFile() |
717 | + charm_config_file.write(yaml.dump(charm_config)) |
718 | + charm_config_file.flush() |
719 | + # The NamedTemporaryFile instance is returned instead of just the name |
720 | + # because we want to take advantage of garbage collection-triggered |
721 | + # deletion of the temp file when it goes out of scope in the caller. |
722 | + return charm_config_file |
723 | + |
724 | + |
725 | +# DEPRECATED: client-side only |
726 | +def unit_info(service_name, item_name, data=None, unit=None): |
727 | + if data is None: |
728 | + data = yaml.safe_load(juju_status()) |
729 | + service = data['services'].get(service_name) |
730 | + if service is None: |
731 | + # XXX 2012-02-08 gmb: |
732 | + # This allows us to cope with the race condition that we |
733 | + # have between deploying a service and having it come up in |
734 | + # `juju status`. We could probably do with cleaning it up so |
735 | + # that it fails a bit more noisily after a while. |
736 | + return '' |
737 | + units = service['units'] |
738 | + if unit is not None: |
739 | + item = units[unit][item_name] |
740 | + else: |
741 | + # It might seem odd to sort the units here, but we do it to |
742 | + # ensure that when no unit is specified, the first unit for the |
743 | + # service (or at least the one with the lowest number) is the |
744 | + # one whose data gets returned. |
745 | + sorted_unit_names = sorted(units.keys()) |
746 | + item = units[sorted_unit_names[0]][item_name] |
747 | + return item |
748 | + |
749 | + |
750 | +# DEPRECATED: client-side only |
751 | +def get_machine_data(): |
752 | + return yaml.safe_load(juju_status())['machines'] |
753 | + |
754 | + |
755 | +# DEPRECATED: client-side only |
756 | +def wait_for_machine(num_machines=1, timeout=300): |
757 | + """Wait `timeout` seconds for `num_machines` machines to come up. |
758 | + |
759 | + This wait_for... function can be called by other wait_for functions |
760 | + whose timeouts might be too short in situations where only a bare |
761 | + Juju setup has been bootstrapped. |
762 | + |
763 | + :return: A tuple of (num_machines, time_taken). This is used for |
764 | + testing. |
765 | + """ |
766 | + # You may think this is a hack, and you'd be right. The easiest way |
767 | + # to tell what environment we're working in (LXC vs EC2) is to check |
768 | + # the dns-name of the first machine. If it's localhost we're in LXC |
769 | + # and we can just return here. |
770 | + if get_machine_data()[0]['dns-name'] == 'localhost': |
771 | + return 1, 0 |
772 | + start_time = time.time() |
773 | + while True: |
774 | + # Drop the first machine, since it's the Zookeeper and that's |
775 | + # not a machine that we need to wait for. This will only work |
776 | + # for EC2 environments, which is why we return early above if |
777 | + # we're in LXC. |
778 | + machine_data = get_machine_data() |
779 | + non_zookeeper_machines = [ |
780 | + machine_data[key] for key in machine_data.keys()[1:]] |
781 | + if len(non_zookeeper_machines) >= num_machines: |
782 | + all_machines_running = True |
783 | + for machine in non_zookeeper_machines: |
784 | + if machine.get('instance-state') != 'running': |
785 | + all_machines_running = False |
786 | + break |
787 | + if all_machines_running: |
788 | + break |
789 | + if time.time() - start_time >= timeout: |
790 | + raise RuntimeError('timeout waiting for service to start') |
791 | + time.sleep(SLEEP_AMOUNT) |
792 | + return num_machines, time.time() - start_time |
793 | + |
794 | + |
795 | +# DEPRECATED: client-side only |
796 | +def wait_for_unit(service_name, timeout=480): |
797 | + """Wait `timeout` seconds for a given service name to come up.""" |
798 | + wait_for_machine(num_machines=1) |
799 | + start_time = time.time() |
800 | + while True: |
801 | + state = unit_info(service_name, 'agent-state') |
802 | + if 'error' in state or state == 'started': |
803 | + break |
804 | + if time.time() - start_time >= timeout: |
805 | + raise RuntimeError('timeout waiting for service to start') |
806 | + time.sleep(SLEEP_AMOUNT) |
807 | + if state != 'started': |
808 | + raise RuntimeError('unit did not start, agent-state: ' + state) |
809 | + |
810 | + |
811 | +# DEPRECATED: client-side only |
812 | +def wait_for_relation(service_name, relation_name, timeout=120): |
813 | + """Wait `timeout` seconds for a given relation to come up.""" |
814 | + start_time = time.time() |
815 | + while True: |
816 | + relation = unit_info(service_name, 'relations').get(relation_name) |
817 | + if relation is not None and relation['state'] == 'up': |
818 | + break |
819 | + if time.time() - start_time >= timeout: |
820 | + raise RuntimeError('timeout waiting for relation to be up') |
821 | + time.sleep(SLEEP_AMOUNT) |
822 | + |
823 | + |
824 | +# DEPRECATED: client-side only |
825 | +def wait_for_page_contents(url, contents, timeout=120, validate=None): |
826 | + if validate is None: |
827 | + validate = operator.contains |
828 | + start_time = time.time() |
829 | + while True: |
830 | + try: |
831 | + stream = urllib2.urlopen(url) |
832 | + except (urllib2.HTTPError, urllib2.URLError): |
833 | + pass |
834 | + else: |
835 | + page = stream.read() |
836 | + if validate(page, contents): |
837 | + return page |
838 | + if time.time() - start_time >= timeout: |
839 | + raise RuntimeError('timeout waiting for contents of ' + url) |
840 | + time.sleep(SLEEP_AMOUNT) |
841 | |
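All of the deprecated `wait_for_*` helpers above share the same poll-until-timeout loop: check a condition, raise `RuntimeError` after `timeout` seconds, and sleep `SLEEP_AMOUNT` between polls. A generic sketch of that pattern, with the clock and sleep injectable so it can be exercised without real waiting:

```python
import time

SLEEP_AMOUNT = 0.1

def wait_for(predicate, timeout=120, sleep=SLEEP_AMOUNT,
             clock=time.time, pause=time.sleep):
    """Poll predicate() until it returns a truthy value, raising
    RuntimeError once timeout seconds have elapsed."""
    start = clock()
    while True:
        result = predicate()
        if result:
            return result
        if clock() - start >= timeout:
            raise RuntimeError('timeout waiting for condition')
        pause(sleep)

# Example: a predicate that only succeeds on its third poll.
calls = {'n': 0}

def ready():
    calls['n'] += 1
    return calls['n'] >= 3

result = wait_for(ready, timeout=5, sleep=0)
```

`wait_for_unit`, `wait_for_relation` and `wait_for_page_contents` are each this loop with a different predicate and timeout.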
842 | === added file 'hooks/charmhelpers/contrib/charmsupport/IMPORT' |
843 | --- hooks/charmhelpers/contrib/charmsupport/IMPORT 1970-01-01 00:00:00 +0000 |
844 | +++ hooks/charmhelpers/contrib/charmsupport/IMPORT 2016-02-12 04:16:45 +0000 |
845 | @@ -0,0 +1,14 @@ |
846 | +Source: lp:charmsupport/trunk |
847 | + |
848 | +charmsupport/charmsupport/execd.py -> charm-helpers/charmhelpers/contrib/charmsupport/execd.py |
849 | +charmsupport/charmsupport/hookenv.py -> charm-helpers/charmhelpers/contrib/charmsupport/hookenv.py |
850 | +charmsupport/charmsupport/host.py -> charm-helpers/charmhelpers/contrib/charmsupport/host.py |
851 | +charmsupport/charmsupport/nrpe.py -> charm-helpers/charmhelpers/contrib/charmsupport/nrpe.py |
852 | +charmsupport/charmsupport/volumes.py -> charm-helpers/charmhelpers/contrib/charmsupport/volumes.py |
853 | + |
854 | +charmsupport/tests/test_execd.py -> charm-helpers/tests/contrib/charmsupport/test_execd.py |
855 | +charmsupport/tests/test_hookenv.py -> charm-helpers/tests/contrib/charmsupport/test_hookenv.py |
856 | +charmsupport/tests/test_host.py -> charm-helpers/tests/contrib/charmsupport/test_host.py |
857 | +charmsupport/tests/test_nrpe.py -> charm-helpers/tests/contrib/charmsupport/test_nrpe.py |
858 | + |
859 | +charmsupport/bin/charmsupport -> charm-helpers/bin/contrib/charmsupport/charmsupport |
860 | |
861 | === modified file 'hooks/charmhelpers/contrib/charmsupport/nrpe.py' |
862 | --- hooks/charmhelpers/contrib/charmsupport/nrpe.py 2013-08-21 19:14:32 +0000 |
863 | +++ hooks/charmhelpers/contrib/charmsupport/nrpe.py 2016-02-12 04:16:45 +0000 |
864 | @@ -125,10 +125,8 @@ |
865 | |
866 | def _locate_cmd(self, check_cmd): |
867 | search_path = ( |
868 | - '/', |
869 | - os.path.join(os.environ['CHARM_DIR'], |
870 | - 'files/nrpe-external-master'), |
871 | '/usr/lib/nagios/plugins', |
872 | + '/usr/local/lib/nagios/plugins', |
873 | ) |
874 | parts = shlex.split(check_cmd) |
875 | for path in search_path: |
876 | |
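The `_locate_cmd` change above narrows the plugin search path to the two standard Nagios plugin directories. A sketch of the lookup it performs, with a hypothetical `exists` check injected in place of the filesystem so the resolution can be demonstrated:

```python
import os
import shlex

SEARCH_PATH = (
    '/usr/lib/nagios/plugins',
    '/usr/local/lib/nagios/plugins',
)

def locate_cmd(check_cmd, exists=os.path.exists, search_path=SEARCH_PATH):
    """Resolve the executable in check_cmd against search_path, keeping
    its arguments; return '' if it is found in none of the directories."""
    parts = shlex.split(check_cmd)
    for path in search_path:
        candidate = os.path.join(path, parts[0])
        if exists(candidate):
            return ' '.join([candidate] + parts[1:])
    return ''

# Pretend only the second directory contains the plugin.
fake_fs = {'/usr/local/lib/nagios/plugins/check_http'}
resolved = locate_cmd('check_http -H localhost', exists=fake_fs.__contains__)
```

Dropping `/` and the charm's `files/nrpe-external-master` directory from the search path means check definitions must now reference plugins installed into one of the two system locations.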
877 | === added directory 'hooks/charmhelpers/contrib/hahelpers' |
878 | === added file 'hooks/charmhelpers/contrib/hahelpers/__init__.py' |
879 | === added file 'hooks/charmhelpers/contrib/hahelpers/apache.py' |
880 | --- hooks/charmhelpers/contrib/hahelpers/apache.py 1970-01-01 00:00:00 +0000 |
881 | +++ hooks/charmhelpers/contrib/hahelpers/apache.py 2016-02-12 04:16:45 +0000 |
882 | @@ -0,0 +1,58 @@ |
883 | +# |
884 | +# Copyright 2012 Canonical Ltd. |
885 | +# |
886 | +# This file is sourced from lp:openstack-charm-helpers |
887 | +# |
888 | +# Authors: |
889 | +# James Page <james.page@ubuntu.com> |
890 | +# Adam Gandelman <adamg@ubuntu.com> |
891 | +# |
892 | + |
893 | +import subprocess |
894 | + |
895 | +from charmhelpers.core.hookenv import ( |
896 | + config as config_get, |
897 | + relation_get, |
898 | + relation_ids, |
899 | + related_units as relation_list, |
900 | + log, |
901 | + INFO, |
902 | +) |
903 | + |
904 | + |
905 | +def get_cert(): |
906 | + cert = config_get('ssl_cert') |
907 | + key = config_get('ssl_key') |
908 | + if not (cert and key): |
909 | + log("Inspecting identity-service relations for SSL certificate.", |
910 | + level=INFO) |
911 | + cert = key = None |
912 | + for r_id in relation_ids('identity-service'): |
913 | + for unit in relation_list(r_id): |
914 | + if not cert: |
915 | + cert = relation_get('ssl_cert', |
916 | + rid=r_id, unit=unit) |
917 | + if not key: |
918 | + key = relation_get('ssl_key', |
919 | + rid=r_id, unit=unit) |
920 | + return (cert, key) |
921 | + |
922 | + |
923 | +def get_ca_cert(): |
924 | + ca_cert = None |
925 | + log("Inspecting identity-service relations for CA SSL certificate.", |
926 | + level=INFO) |
927 | + for r_id in relation_ids('identity-service'): |
928 | + for unit in relation_list(r_id): |
929 | + if not ca_cert: |
930 | + ca_cert = relation_get('ca_cert', |
931 | + rid=r_id, unit=unit) |
932 | + return ca_cert |
933 | + |
934 | + |
935 | +def install_ca_cert(ca_cert): |
936 | + if ca_cert: |
937 | + with open('/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt', |
938 | + 'w') as crt: |
939 | + crt.write(ca_cert) |
940 | + subprocess.check_call(['update-ca-certificates', '--fresh']) |
941 | |
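`get_cert` above prefers the `ssl_cert`/`ssl_key` charm config values and falls back to scanning `identity-service` relation units only when either is missing. A minimal sketch of that fallback, with plain dicts standing in for the hook-environment calls:

```python
def get_cert_sketch(config, identity_units):
    """config: dict of charm config values.
    identity_units: iterable of per-unit relation-data dicts.
    Mirrors get_cert above: config wins; otherwise take the first
    ssl_cert and ssl_key seen on any identity-service unit."""
    cert = config.get('ssl_cert')
    key = config.get('ssl_key')
    if not (cert and key):
        cert = key = None  # partial config is discarded, as above
        for unit_data in identity_units:
            cert = cert or unit_data.get('ssl_cert')
            key = key or unit_data.get('ssl_key')
    return cert, key

from_config = get_cert_sketch({'ssl_cert': 'CERT', 'ssl_key': 'KEY'}, [])
from_relation = get_cert_sketch({}, [{'ssl_key': 'k1'}, {'ssl_cert': 'c2'}])
```

Note that a cert and key may come from different units; the helper takes the first non-empty value of each independently.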
942 | === added file 'hooks/charmhelpers/contrib/hahelpers/cluster.py' |
943 | --- hooks/charmhelpers/contrib/hahelpers/cluster.py 1970-01-01 00:00:00 +0000 |
944 | +++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2016-02-12 04:16:45 +0000 |
945 | @@ -0,0 +1,183 @@ |
946 | +# |
947 | +# Copyright 2012 Canonical Ltd. |
948 | +# |
949 | +# Authors: |
950 | +# James Page <james.page@ubuntu.com> |
951 | +# Adam Gandelman <adamg@ubuntu.com> |
952 | +# |
953 | + |
954 | +import subprocess |
955 | +import os |
956 | + |
957 | +from socket import gethostname as get_unit_hostname |
958 | + |
959 | +from charmhelpers.core.hookenv import ( |
960 | + log, |
961 | + relation_ids, |
962 | + related_units as relation_list, |
963 | + relation_get, |
964 | + config as config_get, |
965 | + INFO, |
966 | + ERROR, |
967 | + unit_get, |
968 | +) |
969 | + |
970 | + |
971 | +class HAIncompleteConfig(Exception): |
972 | + pass |
973 | + |
974 | + |
975 | +def is_clustered(): |
976 | + for r_id in (relation_ids('ha') or []): |
977 | + for unit in (relation_list(r_id) or []): |
978 | + clustered = relation_get('clustered', |
979 | + rid=r_id, |
980 | + unit=unit) |
981 | + if clustered: |
982 | + return True |
983 | + return False |
984 | + |
985 | + |
986 | +def is_leader(resource): |
987 | + cmd = [ |
988 | + "crm", "resource", |
989 | + "show", resource |
990 | + ] |
991 | + try: |
992 | + status = subprocess.check_output(cmd) |
993 | + except subprocess.CalledProcessError: |
994 | + return False |
995 | + else: |
996 | + if get_unit_hostname() in status: |
997 | + return True |
998 | + else: |
999 | + return False |
1000 | + |
1001 | + |
1002 | +def peer_units(): |
1003 | + peers = [] |
1004 | + for r_id in (relation_ids('cluster') or []): |
1005 | + for unit in (relation_list(r_id) or []): |
1006 | + peers.append(unit) |
1007 | + return peers |
1008 | + |
1009 | + |
1010 | +def oldest_peer(peers): |
1011 | + local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1]) |
1012 | + for peer in peers: |
1013 | + remote_unit_no = int(peer.split('/')[1]) |
1014 | + if remote_unit_no < local_unit_no: |
1015 | + return False |
1016 | + return True |
1017 | + |
1018 | + |
1019 | +def eligible_leader(resource): |
1020 | + if is_clustered(): |
1021 | + if not is_leader(resource): |
1022 | + log('Deferring action to CRM leader.', level=INFO) |
1023 | + return False |
1024 | + else: |
1025 | + peers = peer_units() |
1026 | + if peers and not oldest_peer(peers): |
1027 | + log('Deferring action to oldest service unit.', level=INFO) |
1028 | + return False |
1029 | + return True |
1030 | + |
1031 | + |
1032 | +def https(): |
1033 | + ''' |
1034 | + Determines whether enough data has been provided in configuration |
1035 | + or relation data to configure HTTPS. |
1036 | + |
1037 | + returns: boolean |
1038 | + ''' |
1039 | + if config_get('use-https') == "yes": |
1040 | + return True |
1041 | + if config_get('ssl_cert') and config_get('ssl_key'): |
1042 | + return True |
1043 | + for r_id in relation_ids('identity-service'): |
1044 | + for unit in relation_list(r_id): |
1045 | + rel_state = [ |
1046 | + relation_get('https_keystone', rid=r_id, unit=unit), |
1047 | + relation_get('ssl_cert', rid=r_id, unit=unit), |
1048 | + relation_get('ssl_key', rid=r_id, unit=unit), |
1049 | + relation_get('ca_cert', rid=r_id, unit=unit), |
1050 | + ] |
1051 | + # NOTE: works around (LP: #1203241) |
1052 | + if (None not in rel_state) and ('' not in rel_state): |
1053 | + return True |
1054 | + return False |
1055 | + |
1056 | + |
1057 | +def determine_api_port(public_port): |
1058 | + ''' |
1059 | + Determine correct API server listening port based on |
1060 | + existence of HTTPS reverse proxy and/or haproxy. |
1061 | + |
1062 | + public_port: int: standard public port for given service |
1063 | + |
1064 | + returns: int: the correct listening port for the API service |
1065 | + ''' |
1066 | + i = 0 |
1067 | + if len(peer_units()) > 0 or is_clustered(): |
1068 | + i += 1 |
1069 | + if https(): |
1070 | + i += 1 |
1071 | + return public_port - (i * 10) |
1072 | + |
1073 | + |
1074 | +def determine_haproxy_port(public_port): |
1075 | + ''' |
1076 | + Determine the correct proxy listening port based on the public port |
1077 | + and the existence of an HTTPS reverse proxy. |
1078 | + |
1079 | + public_port: int: standard public port for given service |
1080 | + |
1081 | + returns: int: the correct listening port for the HAProxy service |
1082 | + ''' |
1083 | + i = 0 |
1084 | + if https(): |
1085 | + i += 1 |
1086 | + return public_port - (i * 10) |
1087 | + |
1088 | + |
1089 | +def get_hacluster_config(): |
1090 | + ''' |
1091 | + Obtains all relevant configuration from charm configuration required |
1092 | + for initiating a relation to hacluster: |
1093 | + |
1094 | + ha-bindiface, ha-mcastport, vip, vip_iface, vip_cidr |
1095 | + |
1096 | + returns: dict: A dict containing settings keyed by setting name. |
1097 | + raises: HAIncompleteConfig if settings are missing. |
1098 | + ''' |
1099 | + settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'vip_iface', 'vip_cidr'] |
1100 | + conf = {} |
1101 | + for setting in settings: |
1102 | + conf[setting] = config_get(setting) |
1103 | + missing = [] |
1104 | + [missing.append(s) for s, v in conf.iteritems() if v is None] |
1105 | + if missing: |
1106 | + log('Insufficient config data to configure hacluster.', level=ERROR) |
1107 | + raise HAIncompleteConfig |
1108 | + return conf |
1109 | + |
1110 | + |
1111 | +def canonical_url(configs, vip_setting='vip'): |
1112 | + ''' |
1113 | + Returns the correct HTTP URL to this host given the state of HTTPS |
1114 | + configuration and hacluster. |
1115 | + |
1116 | + :configs : OSTemplateRenderer: A config templating object to inspect for |
1117 | + a complete https context. |
1118 | + :vip_setting: str: Setting in charm config that specifies |
1119 | + VIP address. |
1120 | + ''' |
1121 | + scheme = 'http' |
1122 | + if 'https' in configs.complete_contexts(): |
1123 | + scheme = 'https' |
1124 | + if is_clustered(): |
1125 | + addr = config_get(vip_setting) |
1126 | + else: |
1127 | + addr = unit_get('private-address') |
1128 | + return '%s://%s' % (scheme, addr) |
1129 | |
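The port arithmetic in `determine_api_port` and `determine_haproxy_port` steps the listening port down by 10 for each layer sitting in front of the service (haproxy when peered or clustered, plus an HTTPS reverse proxy). A self-contained sketch of the same calculation, with the environment checks replaced by boolean parameters:

```python
def determine_api_port(public_port, behind_haproxy, https_enabled):
    """Step the API listener down 10 for haproxy and another 10 for an
    HTTPS reverse proxy, mirroring the helper above."""
    offset = 0
    if behind_haproxy:   # peer_units() non-empty or is_clustered()
        offset += 1
    if https_enabled:    # https() returned True
        offset += 1
    return public_port - offset * 10

# For example, with a public port of 5000:
ports = [determine_api_port(5000, clustered, https)
         for clustered, https in
         [(False, False), (True, False), (True, True)]]
```

So with both haproxy and HTTPS in front, the API service itself listens 20 below the advertised public port, leaving the intermediate port for haproxy and the public port for the TLS frontend.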
1130 | === added directory 'hooks/charmhelpers/contrib/jujugui' |
1131 | === added file 'hooks/charmhelpers/contrib/jujugui/IMPORT' |
1132 | --- hooks/charmhelpers/contrib/jujugui/IMPORT 1970-01-01 00:00:00 +0000 |
1133 | +++ hooks/charmhelpers/contrib/jujugui/IMPORT 2016-02-12 04:16:45 +0000 |
1134 | @@ -0,0 +1,4 @@ |
1135 | +Source: lp:charms/juju-gui |
1136 | + |
1137 | +juju-gui/hooks/utils.py -> charm-helpers/charmhelpers/contrib/jujugui/utils.py |
1138 | +juju-gui/tests/test_utils.py -> charm-helpers/tests/contrib/jujugui/test_utils.py |
1139 | |
1140 | === added file 'hooks/charmhelpers/contrib/jujugui/__init__.py' |
1141 | === added file 'hooks/charmhelpers/contrib/jujugui/utils.py' |
1142 | --- hooks/charmhelpers/contrib/jujugui/utils.py 1970-01-01 00:00:00 +0000 |
1143 | +++ hooks/charmhelpers/contrib/jujugui/utils.py 2016-02-12 04:16:45 +0000 |
1144 | @@ -0,0 +1,602 @@ |
1145 | +"""Juju GUI charm utilities.""" |
1146 | + |
1147 | +__all__ = [ |
1148 | + 'AGENT', |
1149 | + 'APACHE', |
1150 | + 'API_PORT', |
1151 | + 'CURRENT_DIR', |
1152 | + 'HAPROXY', |
1153 | + 'IMPROV', |
1154 | + 'JUJU_DIR', |
1155 | + 'JUJU_GUI_DIR', |
1156 | + 'JUJU_GUI_SITE', |
1157 | + 'JUJU_PEM', |
1158 | + 'WEB_PORT', |
1159 | + 'bzr_checkout', |
1160 | + 'chain', |
1161 | + 'cmd_log', |
1162 | + 'fetch_api', |
1163 | + 'fetch_gui', |
1164 | + 'find_missing_packages', |
1165 | + 'first_path_in_dir', |
1166 | + 'get_api_address', |
1167 | + 'get_npm_cache_archive_url', |
1168 | + 'get_release_file_url', |
1169 | + 'get_staging_dependencies', |
1170 | + 'get_zookeeper_address', |
1171 | + 'legacy_juju', |
1172 | + 'log_hook', |
1173 | + 'merge', |
1174 | + 'parse_source', |
1175 | + 'prime_npm_cache', |
1176 | + 'render_to_file', |
1177 | + 'save_or_create_certificates', |
1178 | + 'setup_apache', |
1179 | + 'setup_gui', |
1180 | + 'start_agent', |
1181 | + 'start_gui', |
1182 | + 'start_improv', |
1183 | + 'write_apache_config', |
1184 | +] |
1185 | + |
1186 | +from contextlib import contextmanager |
1187 | +import errno |
1188 | +import json |
1189 | +import os |
1190 | +import logging |
1191 | +import shutil |
1192 | +from subprocess import CalledProcessError |
1193 | +import tempfile |
1194 | +from urlparse import urlparse |
1195 | + |
1196 | +import apt |
1197 | +import tempita |
1198 | + |
1199 | +from launchpadlib.launchpad import Launchpad |
1200 | +from shelltoolbox import ( |
1201 | + Serializer, |
1202 | + apt_get_install, |
1203 | + command, |
1204 | + environ, |
1205 | + install_extra_repositories, |
1206 | + run, |
1207 | + script_name, |
1208 | + search_file, |
1209 | + su, |
1210 | +) |
1211 | +from charmhelpers.core.host import ( |
1212 | + service_start, |
1213 | +) |
1214 | +from charmhelpers.core.hookenv import ( |
1215 | + log, |
1216 | + config, |
1217 | + unit_get, |
1218 | +) |
1219 | + |
1220 | + |
1221 | +AGENT = 'juju-api-agent' |
1222 | +APACHE = 'apache2' |
1223 | +IMPROV = 'juju-api-improv' |
1224 | +HAPROXY = 'haproxy' |
1225 | + |
1226 | +API_PORT = 8080 |
1227 | +WEB_PORT = 8000 |
1228 | + |
1229 | +CURRENT_DIR = os.getcwd() |
1230 | +JUJU_DIR = os.path.join(CURRENT_DIR, 'juju') |
1231 | +JUJU_GUI_DIR = os.path.join(CURRENT_DIR, 'juju-gui') |
1232 | +JUJU_GUI_SITE = '/etc/apache2/sites-available/juju-gui' |
1233 | +JUJU_GUI_PORTS = '/etc/apache2/ports.conf' |
1234 | +JUJU_PEM = 'juju.includes-private-key.pem' |
1235 | +BUILD_REPOSITORIES = ('ppa:chris-lea/node.js-legacy',) |
1236 | +DEB_BUILD_DEPENDENCIES = ( |
1237 | + 'bzr', 'imagemagick', 'make', 'nodejs', 'npm', |
1238 | +) |
1239 | +DEB_STAGE_DEPENDENCIES = ( |
1240 | + 'zookeeper', |
1241 | +) |
1242 | + |
1243 | + |
1244 | +# Store the configuration from one invocation to the next. |
1245 | +config_json = Serializer('/tmp/config.json') |
1246 | +# Bazaar checkout command. |
1247 | +bzr_checkout = command('bzr', 'co', '--lightweight') |
1248 | +# Whether or not the charm is deployed using juju-core. |
1249 | +# If juju-core has been used to deploy the charm, an agent.conf file must |
1250 | +# be present in the charm parent directory. |
1251 | +legacy_juju = lambda: not os.path.exists( |
1252 | + os.path.join(CURRENT_DIR, '..', 'agent.conf')) |
1253 | + |
1254 | + |
1255 | +def _get_build_dependencies(): |
1256 | + """Install deb dependencies for building.""" |
1257 | + log('Installing build dependencies.') |
1258 | + cmd_log(install_extra_repositories(*BUILD_REPOSITORIES)) |
1259 | + cmd_log(apt_get_install(*DEB_BUILD_DEPENDENCIES)) |
1260 | + |
1261 | + |
1262 | +def get_api_address(unit_dir): |
1263 | + """Return the Juju API address stored in the uniter agent.conf file.""" |
1264 | + import yaml # python-yaml is only installed if juju-core is used. |
1265 | + # XXX 2013-03-27 frankban bug=1161443: |
1266 | + # currently the uniter agent.conf file does not include the API |
1267 | + # address. For now retrieve it from the machine agent file. |
1268 | + base_dir = os.path.abspath(os.path.join(unit_dir, '..')) |
1269 | + for dirname in os.listdir(base_dir): |
1270 | + if dirname.startswith('machine-'): |
1271 | + agent_conf = os.path.join(base_dir, dirname, 'agent.conf') |
1272 | + break |
1273 | + else: |
1274 | + raise IOError('Juju agent configuration file not found.') |
1275 | + contents = yaml.load(open(agent_conf)) |
1276 | + return contents['apiinfo']['addrs'][0] |
1277 | + |
1278 | + |
1279 | +def get_staging_dependencies(): |
1280 | + """Install deb dependencies for the stage (improv) environment.""" |
1281 | + log('Installing stage dependencies.') |
1282 | + cmd_log(apt_get_install(*DEB_STAGE_DEPENDENCIES)) |
1283 | + |
1284 | + |
1285 | +def first_path_in_dir(directory): |
1286 | + """Return the full path of the first file/dir in *directory*.""" |
1287 | + return os.path.join(directory, os.listdir(directory)[0]) |
1288 | + |
1289 | + |
1290 | +def _get_by_attr(collection, attr, value): |
1291 | + """Return the first item in collection having attr == value. |
1292 | + |
1293 | + Return None if the item is not found. |
1294 | + """ |
1295 | + for item in collection: |
1296 | + if getattr(item, attr) == value: |
1297 | + return item |
1298 | + |
1299 | + |
1300 | +def get_release_file_url(project, series_name, release_version): |
1301 | + """Return the URL of the release file hosted in Launchpad. |
1302 | + |
1303 | + The returned URL points to a release file for the given project, series |
1304 | + name and release version. |
1305 | + The argument *project* is a project object as returned by launchpadlib. |
1306 | + The arguments *series_name* and *release_version* are strings. If |
1307 | + *release_version* is None, the URL of the latest release will be returned. |
1308 | + """ |
1309 | + series = _get_by_attr(project.series, 'name', series_name) |
1310 | + if series is None: |
1311 | + raise ValueError('%r: series not found' % series_name) |
1312 | + # Releases are returned by Launchpad in reverse date order. |
1313 | + releases = list(series.releases) |
1314 | + if not releases: |
1315 | + raise ValueError('%r: series does not contain releases' % series_name) |
1316 | + if release_version is not None: |
1317 | + release = _get_by_attr(releases, 'version', release_version) |
1318 | + if release is None: |
1319 | + raise ValueError('%r: release not found' % release_version) |
1320 | + releases = [release] |
1321 | + for release in releases: |
1322 | + for file_ in release.files: |
1323 | + if str(file_).endswith('.tgz'): |
1324 | + return file_.file_link |
1325 | + raise ValueError('%r: file not found' % release_version) |
1326 | + |
1327 | + |
1328 | +def get_zookeeper_address(agent_file_path): |
1329 | + """Retrieve the Zookeeper address contained in the given *agent_file_path*. |
1330 | + |
1331 | + The *agent_file_path* is a path to a file containing a line similar to the |
1332 | + following:: |
1333 | + |
1334 | + env JUJU_ZOOKEEPER="address" |
1335 | + """ |
1336 | + line = search_file('JUJU_ZOOKEEPER', agent_file_path).strip() |
1337 | + return line.split('=')[1].strip('"') |
1338 | + |
1339 | + |
1340 | +@contextmanager |
1341 | +def log_hook(): |
1342 | + """Log when a hook starts and stops its execution. |
1343 | + |
1344 | + Also log to stdout possible CalledProcessError exceptions raised executing |
1345 | + the hook. |
1346 | + """ |
1347 | + script = script_name() |
1348 | + log(">>> Entering {}".format(script)) |
1349 | + try: |
1350 | + yield |
1351 | + except CalledProcessError as err: |
1352 | + log('Exception caught:') |
1353 | + log(err.output) |
1354 | + raise |
1355 | + finally: |
1356 | + log("<<< Exiting {}".format(script)) |
1357 | + |
1358 | + |
1359 | +def parse_source(source): |
1360 | + """Parse the ``juju-gui-source`` option. |
1361 | + |
1362 | + Return a tuple of two elements representing info on how to deploy Juju GUI. |
1363 | + Examples: |
1364 | + - ('stable', None): latest stable release; |
1365 | + - ('stable', '0.1.0'): stable release v0.1.0; |
1366 | + - ('trunk', None): latest trunk release; |
1367 | + - ('trunk', '0.1.0+build.1'): trunk release v0.1.0 bzr revision 1; |
1368 | + - ('branch', 'lp:juju-gui'): release is made from a branch; |
1369 | + - ('url', 'http://example.com/gui'): release from a downloaded file. |
1370 | + """ |
1371 | + if source.startswith('url:'): |
1372 | + source = source[4:] |
1373 | + # Support file paths, including relative paths. |
1374 | + if urlparse(source).scheme == '': |
1375 | + if not source.startswith('/'): |
1376 | + source = os.path.join(os.path.abspath(CURRENT_DIR), source) |
1377 | + source = "file://%s" % source |
1378 | + return 'url', source |
1379 | + if source in ('stable', 'trunk'): |
1380 | + return source, None |
1381 | + if source.startswith('lp:') or source.startswith('http://'): |
1382 | + return 'branch', source |
1383 | + if 'build' in source: |
1384 | + return 'trunk', source |
1385 | + return 'stable', source |
1386 | + |
1387 | + |
1388 | +def render_to_file(template_name, context, destination): |
1389 | + """Render the given *template_name* into *destination* using *context*. |
1390 | + |
1391 | + The tempita template language is used to render contents |
1392 | + (see http://pythonpaste.org/tempita/). |
1393 | + The argument *template_name* is the name or path of the template file: |
1394 | + it may be either a path relative to ``../config`` or an absolute path. |
1395 | + The argument *destination* is a file path. |
1396 | + The argument *context* is a dict-like object. |
1397 | + """ |
1398 | + template_path = os.path.abspath(template_name) |
1399 | + template = tempita.Template.from_filename(template_path) |
1400 | + with open(destination, 'w') as stream: |
1401 | + stream.write(template.substitute(context)) |
1402 | + |
1403 | + |
1404 | +results_log = None |
1405 | + |
1406 | + |
1407 | +def _setupLogging(): |
1408 | + global results_log |
1409 | + if results_log is not None: |
1410 | + return |
1411 | + cfg = config() |
1412 | + logging.basicConfig( |
1413 | + filename=cfg['command-log-file'], |
1414 | + level=logging.INFO, |
1415 | + format="%(asctime)s: %(name)s@%(levelname)s %(message)s") |
1416 | + results_log = logging.getLogger('juju-gui') |
1417 | + |
1418 | + |
1419 | +def cmd_log(results): |
1420 | + global results_log |
1421 | + if not results: |
1422 | + return |
1423 | + if results_log is None: |
1424 | + _setupLogging() |
1425 | + # Since 'results' may be multi-line output, start it on a separate line |
1426 | + # from the logger timestamp, etc. |
1427 | + results_log.info('\n' + results) |
1428 | + |
1429 | + |
1430 | +def start_improv(staging_env, ssl_cert_path, |
1431 | + config_path='/etc/init/juju-api-improv.conf'): |
1432 | + """Start a simulated juju environment using ``improv.py``.""" |
1433 | + log('Setting up staging start up script.') |
1434 | + context = { |
1435 | + 'juju_dir': JUJU_DIR, |
1436 | + 'keys': ssl_cert_path, |
1437 | + 'port': API_PORT, |
1438 | + 'staging_env': staging_env, |
1439 | + } |
1440 | + render_to_file('config/juju-api-improv.conf.template', context, config_path) |
1441 | + log('Starting the staging backend.') |
1442 | + with su('root'): |
1443 | + service_start(IMPROV) |
1444 | + |
1445 | + |
1446 | +def start_agent( |
1447 | + ssl_cert_path, config_path='/etc/init/juju-api-agent.conf', |
1448 | + read_only=False): |
1449 | + """Start the Juju agent and connect to the current environment.""" |
1450 | + # Retrieve the Zookeeper address from the start up script. |
1451 | + unit_dir = os.path.realpath(os.path.join(CURRENT_DIR, '..')) |
1452 | + agent_file = '/etc/init/juju-{0}.conf'.format(os.path.basename(unit_dir)) |
1453 | + zookeeper = get_zookeeper_address(agent_file) |
1454 | + log('Setting up API agent start up script.') |
1455 | + context = { |
1456 | + 'juju_dir': JUJU_DIR, |
1457 | + 'keys': ssl_cert_path, |
1458 | + 'port': API_PORT, |
1459 | + 'zookeeper': zookeeper, |
1460 | + 'read_only': read_only |
1461 | + } |
1462 | + render_to_file('config/juju-api-agent.conf.template', context, config_path) |
1463 | + log('Starting API agent.') |
1464 | + with su('root'): |
1465 | + service_start(AGENT) |
1466 | + |
1467 | + |
1468 | +def start_gui( |
1469 | + console_enabled, login_help, readonly, in_staging, ssl_cert_path, |
1470 | + charmworld_url, serve_tests, haproxy_path='/etc/haproxy/haproxy.cfg', |
1471 | + config_js_path=None, secure=True, sandbox=False): |
1472 | + """Set up and start the Juju GUI server.""" |
1473 | + with su('root'): |
1474 | + run('chown', '-R', 'ubuntu:', JUJU_GUI_DIR) |
1475 | + # XXX 2013-02-05 frankban bug=1116320: |
1476 | + # External insecure resources are still loaded when testing in the |
1477 | + # debug environment. For now, switch to the production environment if |
1478 | + # the charm is configured to serve tests. |
1479 | + if in_staging and not serve_tests: |
1480 | + build_dirname = 'build-debug' |
1481 | + else: |
1482 | + build_dirname = 'build-prod' |
1483 | + build_dir = os.path.join(JUJU_GUI_DIR, build_dirname) |
1484 | + log('Generating the Juju GUI configuration file.') |
1485 | + is_legacy_juju = legacy_juju() |
1486 | + user, password = None, None |
1487 | + if (is_legacy_juju and in_staging) or sandbox: |
1488 | + user, password = 'admin', 'admin' |
1489 | + else: |
1490 | + user, password = None, None |
1491 | + |
1492 | + api_backend = 'python' if is_legacy_juju else 'go' |
1493 | + if secure: |
1494 | + protocol = 'wss' |
1495 | + else: |
1496 | + log('Running in insecure mode! Port 80 will serve unencrypted.') |
1497 | + protocol = 'ws' |
1498 | + |
1499 | + context = { |
1500 | + 'raw_protocol': protocol, |
1501 | + 'address': unit_get('public-address'), |
1502 | + 'console_enabled': json.dumps(console_enabled), |
1503 | + 'login_help': json.dumps(login_help), |
1504 | + 'password': json.dumps(password), |
1505 | + 'api_backend': json.dumps(api_backend), |
1506 | + 'readonly': json.dumps(readonly), |
1507 | + 'user': json.dumps(user), |
1508 | + 'protocol': json.dumps(protocol), |
1509 | + 'sandbox': json.dumps(sandbox), |
1510 | + 'charmworld_url': json.dumps(charmworld_url), |
1511 | + } |
1512 | + if config_js_path is None: |
1513 | + config_js_path = os.path.join( |
1514 | + build_dir, 'juju-ui', 'assets', 'config.js') |
1515 | + render_to_file('config/config.js.template', context, config_js_path) |
1516 | + |
1517 | + write_apache_config(build_dir, serve_tests) |
1518 | + |
1519 | + log('Generating haproxy configuration file.') |
1520 | + if is_legacy_juju: |
1521 | + # The PyJuju API agent is listening on localhost. |
1522 | + api_address = '127.0.0.1:{0}'.format(API_PORT) |
1523 | + else: |
1524 | + # Retrieve the juju-core API server address. |
1525 | + api_address = get_api_address(os.path.join(CURRENT_DIR, '..')) |
1526 | + context = { |
1527 | + 'api_address': api_address, |
1528 | + 'api_pem': JUJU_PEM, |
1529 | + 'legacy_juju': is_legacy_juju, |
1530 | + 'ssl_cert_path': ssl_cert_path, |
1531 | + # In PyJuju environments, use the same certificate for both HTTPS and |
1532 | + # WebSocket connections. In juju-core the system already has the proper |
1533 | + # certificate installed. |
1534 | + 'web_pem': JUJU_PEM, |
1535 | + 'web_port': WEB_PORT, |
1536 | + 'secure': secure |
1537 | + } |
1538 | + render_to_file('config/haproxy.cfg.template', context, haproxy_path) |
1539 | + log('Starting Juju GUI.') |
1540 | + |
1541 | + |
1542 | +def write_apache_config(build_dir, serve_tests=False): |
1543 | + log('Generating the apache site configuration file.') |
1544 | + context = { |
1545 | + 'port': WEB_PORT, |
1546 | + 'serve_tests': serve_tests, |
1547 | + 'server_root': build_dir, |
1548 | + 'tests_root': os.path.join(JUJU_GUI_DIR, 'test', ''), |
1549 | + } |
1550 | + render_to_file('config/apache-ports.template', context, JUJU_GUI_PORTS) |
1551 | + render_to_file('config/apache-site.template', context, JUJU_GUI_SITE) |
1552 | + |
1553 | + |
1554 | +def get_npm_cache_archive_url(Launchpad=Launchpad): |
1555 | + """Figure out the URL of the most recent NPM cache archive on Launchpad.""" |
1556 | + launchpad = Launchpad.login_anonymously('Juju GUI charm', 'production') |
1557 | + project = launchpad.projects['juju-gui'] |
1558 | + # Find the URL of the most recently created NPM cache archive. |
1559 | + npm_cache_url = get_release_file_url(project, 'npm-cache', None) |
1560 | + return npm_cache_url |
1561 | + |
1562 | + |
1563 | +def prime_npm_cache(npm_cache_url): |
1564 | + """Download NPM cache archive and prime the NPM cache with it.""" |
1565 | + # Download the cache archive and then uncompress it into the NPM cache. |
1566 | + npm_cache_archive = os.path.join(CURRENT_DIR, 'npm-cache.tgz') |
1567 | + cmd_log(run('curl', '-L', '-o', npm_cache_archive, npm_cache_url)) |
1568 | + npm_cache_dir = os.path.expanduser('~/.npm') |
1569 | + # The NPM cache directory probably does not exist, so make it if not. |
1570 | + try: |
1571 | + os.mkdir(npm_cache_dir) |
1572 | + except OSError, e: |
1573 | + # If the directory already exists then ignore the error. |
1574 | + if e.errno != errno.EEXIST: # File exists. |
1575 | + raise |
1576 | + uncompress = command('tar', '-x', '-z', '-C', npm_cache_dir, '-f') |
1577 | + cmd_log(uncompress(npm_cache_archive)) |
1578 | + |
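The EEXIST-tolerant `mkdir` in `prime_npm_cache` is a common idiom; isolated below (written with the Python 3 `except ... as` spelling, and a temporary directory standing in for `~/.npm`):

```python
import errno
import os
import tempfile


def ensure_dir(path):
    """Create path; ignore the error only if the directory already exists."""
    try:
        os.mkdir(path)
    except OSError as e:
        # Only swallow "File exists"; re-raise anything else.
        if e.errno != errno.EEXIST:
            raise


target = os.path.join(tempfile.mkdtemp(), 'npm-cache')
ensure_dir(target)
ensure_dir(target)  # second call is a harmless no-op
print(os.path.isdir(target))  # True
```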
1579 | + |
1580 | +def fetch_gui(juju_gui_source, logpath): |
1581 | + """Retrieve the Juju GUI release/branch.""" |
1582 | + # Retrieve a Juju GUI release. |
1583 | + origin, version_or_branch = parse_source(juju_gui_source) |
1584 | + if origin == 'branch': |
1585 | + # Make sure we have the dependencies necessary for us to actually make |
1586 | + # a build. |
1587 | + _get_build_dependencies() |
1588 | + # Create a release starting from a branch. |
1589 | + juju_gui_source_dir = os.path.join(CURRENT_DIR, 'juju-gui-source') |
1590 | + log('Retrieving Juju GUI source checkout from %s.' % version_or_branch) |
1591 | + cmd_log(run('rm', '-rf', juju_gui_source_dir)) |
1592 | + cmd_log(bzr_checkout(version_or_branch, juju_gui_source_dir)) |
1593 | + log('Preparing a Juju GUI release.') |
1594 | + logdir = os.path.dirname(logpath) |
1595 | + fd, name = tempfile.mkstemp(prefix='make-distfile-', dir=logdir) |
1596 | + log('Output from "make distfile" sent to %s' % name) |
1597 | + with environ(NO_BZR='1'): |
1598 | + run('make', '-C', juju_gui_source_dir, 'distfile', |
1599 | + stdout=fd, stderr=fd) |
1600 | + release_tarball = first_path_in_dir( |
1601 | + os.path.join(juju_gui_source_dir, 'releases')) |
1602 | + else: |
1603 | + log('Retrieving Juju GUI release.') |
1604 | + if origin == 'url': |
1605 | + file_url = version_or_branch |
1606 | + else: |
1607 | + # Retrieve a release from Launchpad. |
1608 | + launchpad = Launchpad.login_anonymously( |
1609 | + 'Juju GUI charm', 'production') |
1610 | + project = launchpad.projects['juju-gui'] |
1611 | + file_url = get_release_file_url(project, origin, version_or_branch) |
1612 | + log('Downloading release file from %s.' % file_url) |
1613 | + release_tarball = os.path.join(CURRENT_DIR, 'release.tgz') |
1614 | + cmd_log(run('curl', '-L', '-o', release_tarball, file_url)) |
1615 | + return release_tarball |
1616 | + |
1617 | + |
1618 | +def fetch_api(juju_api_branch): |
1619 | + """Retrieve the Juju branch.""" |
1620 | + # Retrieve Juju API source checkout. |
1621 | + log('Retrieving Juju API source checkout.') |
1622 | + cmd_log(run('rm', '-rf', JUJU_DIR)) |
1623 | + cmd_log(bzr_checkout(juju_api_branch, JUJU_DIR)) |
1624 | + |
1625 | + |
1626 | +def setup_gui(release_tarball): |
1627 | + """Set up Juju GUI.""" |
1628 | + # Uncompress the release tarball. |
1629 | + log('Installing Juju GUI.') |
1630 | + release_dir = os.path.join(CURRENT_DIR, 'release') |
1631 | + cmd_log(run('rm', '-rf', release_dir)) |
1632 | + os.mkdir(release_dir) |
1633 | + uncompress = command('tar', '-x', '-z', '-C', release_dir, '-f') |
1634 | + cmd_log(uncompress(release_tarball)) |
1635 | + # Link the Juju GUI dir to the contents of the release tarball. |
1636 | + cmd_log(run('ln', '-sf', first_path_in_dir(release_dir), JUJU_GUI_DIR)) |
1637 | + |
1638 | + |
1639 | +def setup_apache(): |
1640 | + """Set up apache.""" |
1641 | + log('Setting up apache.') |
1642 | + if not os.path.exists(JUJU_GUI_SITE): |
1643 | + cmd_log(run('touch', JUJU_GUI_SITE)) |
1644 | + cmd_log(run('chown', 'ubuntu:', JUJU_GUI_SITE)) |
1645 | + cmd_log( |
1646 | + run('ln', '-s', JUJU_GUI_SITE, |
1647 | + '/etc/apache2/sites-enabled/juju-gui')) |
1648 | + |
1649 | + if not os.path.exists(JUJU_GUI_PORTS): |
1650 | + cmd_log(run('touch', JUJU_GUI_PORTS)) |
1651 | + cmd_log(run('chown', 'ubuntu:', JUJU_GUI_PORTS)) |
1652 | + |
1653 | + with su('root'): |
1654 | + run('a2dissite', 'default') |
1655 | + run('a2ensite', 'juju-gui') |
1656 | + |
1657 | + |
1658 | +def save_or_create_certificates( |
1659 | + ssl_cert_path, ssl_cert_contents, ssl_key_contents): |
1660 | + """Generate the SSL certificates. |
1661 | + |
1662 | + If both *ssl_cert_contents* and *ssl_key_contents* are provided, use them |
1663 | + as certificates; otherwise, generate them. |
1664 | + |
1665 | + Also create a pem file, suitable for use in the haproxy configuration, |
1666 | + concatenating the key and the certificate files. |
1667 | + """ |
1668 | + crt_path = os.path.join(ssl_cert_path, 'juju.crt') |
1669 | + key_path = os.path.join(ssl_cert_path, 'juju.key') |
1670 | + if not os.path.exists(ssl_cert_path): |
1671 | + os.makedirs(ssl_cert_path) |
1672 | + if ssl_cert_contents and ssl_key_contents: |
1673 | + # Save the provided certificates. |
1674 | + with open(crt_path, 'w') as cert_file: |
1675 | + cert_file.write(ssl_cert_contents) |
1676 | + with open(key_path, 'w') as key_file: |
1677 | + key_file.write(ssl_key_contents) |
1678 | + else: |
1679 | + # Generate certificates. |
1680 | + # See http://superuser.com/questions/226192/openssl-without-prompt |
1681 | + cmd_log(run( |
1682 | + 'openssl', 'req', '-new', '-newkey', 'rsa:4096', |
1683 | + '-days', '365', '-nodes', '-x509', '-subj', |
1684 | + # These are arbitrary test values for the certificate. |
1685 | + '/C=GB/ST=Juju/L=GUI/O=Ubuntu/CN=juju.ubuntu.com', |
1686 | + '-keyout', key_path, '-out', crt_path)) |
1687 | + # Generate the pem file. |
1688 | + pem_path = os.path.join(ssl_cert_path, JUJU_PEM) |
1689 | + if os.path.exists(pem_path): |
1690 | + os.remove(pem_path) |
1691 | + with open(pem_path, 'w') as pem_file: |
1692 | + shutil.copyfileobj(open(key_path), pem_file) |
1693 | + shutil.copyfileobj(open(crt_path), pem_file) |
1694 | + |
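The pem step of `save_or_create_certificates` simply concatenates the key and the certificate, key first, which is the layout haproxy expects. A standalone sketch of that step (the temporary paths here are stand-ins, not the charm's real `ssl_cert_path`):

```python
import os
import shutil
import tempfile


def write_pem(key_path, crt_path, pem_path):
    # haproxy expects the private key and the certificate concatenated
    # into a single PEM file: key first, then certificate.
    with open(pem_path, 'w') as pem_file:
        for part_path in (key_path, crt_path):
            with open(part_path) as part:
                shutil.copyfileobj(part, pem_file)


workdir = tempfile.mkdtemp()
key_path = os.path.join(workdir, 'juju.key')
crt_path = os.path.join(workdir, 'juju.crt')
pem_path = os.path.join(workdir, 'juju.pem')
with open(key_path, 'w') as f:
    f.write('KEY\n')
with open(crt_path, 'w') as f:
    f.write('CERT\n')
write_pem(key_path, crt_path, pem_path)
with open(pem_path) as f:
    pem = f.read()
print(pem == 'KEY\nCERT\n')  # True
```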
1695 | + |
1696 | +def find_missing_packages(*packages): |
1697 | + """Given a list of packages, return the packages which are not installed. |
1698 | + """ |
1699 | + cache = apt.Cache() |
1700 | + missing = set() |
1701 | + for pkg_name in packages: |
1702 | + try: |
1703 | + pkg = cache[pkg_name] |
1704 | + except KeyError: |
1705 | + missing.add(pkg_name) |
1706 | + continue |
1707 | + if pkg.is_installed: |
1708 | + continue |
1709 | + missing.add(pkg_name) |
1710 | + return missing |
1711 | + |
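`find_missing_packages` depends on python-apt, but the lookup logic can be exercised against a stand-in cache (`FakePackage` and the dict cache below are test doubles, not part of the charm):

```python
class FakePackage(object):
    """Stand-in for an apt.Cache package record (test double)."""
    def __init__(self, installed):
        self.is_installed = installed


def find_missing_packages(cache, *packages):
    """Return the subset of packages that are unknown or not installed."""
    missing = set()
    for pkg_name in packages:
        try:
            pkg = cache[pkg_name]
        except KeyError:
            # Unknown to the cache entirely.
            missing.add(pkg_name)
            continue
        if not pkg.is_installed:
            missing.add(pkg_name)
    return missing


cache = {'curl': FakePackage(True), 'zip': FakePackage(False)}
print(sorted(find_missing_packages(cache, 'curl', 'zip', 'no-such-pkg')))
# ['no-such-pkg', 'zip']
```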
1712 | + |
1713 | +## Backend support decorators |
1714 | + |
1715 | +def chain(name): |
1716 | + """Helper method to compose a set of mixin objects into a callable. |
1717 | + |
1718 | + Each method is called in the context of its mixin instance, and its |
1719 | + argument is the Backend instance. |
1720 | + """ |
1721 | + # Chain method calls through all implementing mixins. |
1722 | + def method(self): |
1723 | + for mixin in self.mixins: |
1724 | + a_callable = getattr(type(mixin), name, None) |
1725 | + if a_callable: |
1726 | + a_callable(mixin, self) |
1727 | + |
1728 | + method.__name__ = name |
1729 | + return method |
1730 | + |
1731 | + |
1732 | +def merge(name): |
1733 | + """Helper to merge a property from a set of strategy objects |
1734 | + into a unified set. |
1735 | + """ |
1736 | + # Return merged property from every providing mixin as a set. |
1737 | + @property |
1738 | + def method(self): |
1739 | + result = set() |
1740 | + for mixin in self.mixins: |
1741 | + segment = getattr(type(mixin), name, None) |
1742 | + if segment and isinstance(segment, (list, tuple, set)): |
1743 | + result |= set(segment) |
1744 | + |
1745 | + return result |
1746 | + return method |
1747 | |
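The `chain` and `merge` decorators above compose a backend out of mixins: `chain` calls the named method on every mixin in order, and `merge` unions the named collection from every mixin. A minimal self-contained sketch of the same pattern (the mixin classes and package names are illustrative, not from the charm):

```python
def chain(name):
    # Call `name` on every mixin, in order, passing the composed backend.
    def method(self):
        for mixin in self.mixins:
            a_callable = getattr(type(mixin), name, None)
            if a_callable:
                a_callable(mixin, self)
    method.__name__ = name
    return method


def merge(name):
    # Union the `name` collections contributed by each mixin.
    @property
    def method(self):
        result = set()
        for mixin in self.mixins:
            segment = getattr(type(mixin), name, None)
            if segment and isinstance(segment, (list, tuple, set)):
                result |= set(segment)
        return result
    return method


class AptMixin(object):
    debian_packages = ['curl']

    def install(self, backend):
        backend.calls.append('apt')


class GuiMixin(object):
    debian_packages = ['apache2', 'haproxy']

    def install(self, backend):
        backend.calls.append('gui')


class Backend(object):
    install = chain('install')
    debian_packages = merge('debian_packages')

    def __init__(self, mixins):
        self.mixins = mixins
        self.calls = []


backend = Backend([AptMixin(), GuiMixin()])
backend.install()
print(backend.calls)                    # ['apt', 'gui']
print(sorted(backend.debian_packages))  # ['apache2', 'curl', 'haproxy']
```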
1748 | === added directory 'hooks/charmhelpers/contrib/network' |
1749 | === added file 'hooks/charmhelpers/contrib/network/__init__.py' |
1750 | === added file 'hooks/charmhelpers/contrib/network/ip.py' |
1751 | --- hooks/charmhelpers/contrib/network/ip.py 1970-01-01 00:00:00 +0000 |
1752 | +++ hooks/charmhelpers/contrib/network/ip.py 2016-02-12 04:16:45 +0000 |
1753 | @@ -0,0 +1,69 @@ |
1754 | +import sys |
1755 | + |
1756 | +from charmhelpers.fetch import apt_install |
1757 | +from charmhelpers.core.hookenv import ( |
1758 | + ERROR, log, |
1759 | +) |
1760 | + |
1761 | +try: |
1762 | + import netifaces |
1763 | +except ImportError: |
1764 | + apt_install('python-netifaces') |
1765 | + import netifaces |
1766 | + |
1767 | +try: |
1768 | + import netaddr |
1769 | +except ImportError: |
1770 | + apt_install('python-netaddr') |
1771 | + import netaddr |
1772 | + |
1773 | + |
1774 | +def _validate_cidr(network): |
1775 | + try: |
1776 | + netaddr.IPNetwork(network) |
1777 | + except (netaddr.core.AddrFormatError, ValueError): |
1778 | + raise ValueError("Network (%s) is not in CIDR presentation format" % |
1779 | + network) |
1780 | + |
1781 | + |
1782 | +def get_address_in_network(network, fallback=None, fatal=False): |
1783 | + """ |
1784 | + Get an IPv4 address within the network from the host. |
1785 | + |
1786 | + Args: |
1787 | + network (str): CIDR presentation format. For example, |
1788 | + '192.168.1.0/24'. |
1789 | + fallback (str): If no address is found, return fallback. |
1790 | + fatal (boolean): If no address is found, fallback is not |
1791 | + set, and fatal is True, then exit(1). |
1792 | + """ |
1793 | + |
1794 | + def not_found_error_out(): |
1795 | + log("No IP address found in network: %s" % network, |
1796 | + level=ERROR) |
1797 | + sys.exit(1) |
1798 | + |
1799 | + if network is None: |
1800 | + if fallback is not None: |
1801 | + return fallback |
1802 | + else: |
1803 | + if fatal: |
1804 | + not_found_error_out() |
1805 | + |
1806 | + _validate_cidr(network) |
1807 | + for iface in netifaces.interfaces(): |
1808 | + addresses = netifaces.ifaddresses(iface) |
1809 | + if netifaces.AF_INET in addresses: |
1810 | + addr = addresses[netifaces.AF_INET][0]['addr'] |
1811 | + netmask = addresses[netifaces.AF_INET][0]['netmask'] |
1812 | + cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask)) |
1813 | + if cidr in netaddr.IPNetwork(network): |
1814 | + return str(cidr.ip) |
1815 | + |
1816 | + if fallback is not None: |
1817 | + return fallback |
1818 | + |
1819 | + if fatal: |
1820 | + not_found_error_out() |
1821 | + |
1822 | + return None |
1823 | |
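The containment test at the heart of `get_address_in_network` — is this interface's address inside the configured CIDR — can be reproduced with the standard-library `ipaddress` module (the helper itself uses `netaddr`, apt-installed on demand, and checks the whole `addr/netmask` network rather than just the address):

```python
import ipaddress


def address_in_network(network, addr, netmask):
    """True if the interface addr/netmask lies within the CIDR network."""
    iface = ipaddress.ip_interface('%s/%s' % (addr, netmask))
    return iface.ip in ipaddress.ip_network(network)


print(address_in_network('192.168.1.0/24', '192.168.1.42', '255.255.255.0'))  # True
print(address_in_network('192.168.1.0/24', '10.0.0.7', '255.0.0.0'))          # False
```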
1824 | === added directory 'hooks/charmhelpers/contrib/network/ovs' |
1825 | === added file 'hooks/charmhelpers/contrib/network/ovs/__init__.py' |
1826 | --- hooks/charmhelpers/contrib/network/ovs/__init__.py 1970-01-01 00:00:00 +0000 |
1827 | +++ hooks/charmhelpers/contrib/network/ovs/__init__.py 2016-02-12 04:16:45 +0000 |
1828 | @@ -0,0 +1,75 @@ |
1829 | +''' Helpers for interacting with OpenvSwitch ''' |
1830 | +import subprocess |
1831 | +import os |
1832 | +from charmhelpers.core.hookenv import ( |
1833 | + log, WARNING |
1834 | +) |
1835 | +from charmhelpers.core.host import ( |
1836 | + service |
1837 | +) |
1838 | + |
1839 | + |
1840 | +def add_bridge(name): |
1841 | + ''' Add the named bridge to openvswitch ''' |
1842 | + log('Creating bridge {}'.format(name)) |
1843 | + subprocess.check_call(["ovs-vsctl", "--", "--may-exist", "add-br", name]) |
1844 | + |
1845 | + |
1846 | +def del_bridge(name): |
1847 | + ''' Delete the named bridge from openvswitch ''' |
1848 | + log('Deleting bridge {}'.format(name)) |
1849 | + subprocess.check_call(["ovs-vsctl", "--", "--if-exists", "del-br", name]) |
1850 | + |
1851 | + |
1852 | +def add_bridge_port(name, port): |
1853 | + ''' Add a port to the named openvswitch bridge ''' |
1854 | + log('Adding port {} to bridge {}'.format(port, name)) |
1855 | + subprocess.check_call(["ovs-vsctl", "--", "--may-exist", "add-port", |
1856 | + name, port]) |
1857 | + subprocess.check_call(["ip", "link", "set", port, "up"]) |
1858 | + |
1859 | + |
1860 | +def del_bridge_port(name, port): |
1861 | + ''' Delete a port from the named openvswitch bridge ''' |
1862 | + log('Deleting port {} from bridge {}'.format(port, name)) |
1863 | + subprocess.check_call(["ovs-vsctl", "--", "--if-exists", "del-port", |
1864 | + name, port]) |
1865 | + subprocess.check_call(["ip", "link", "set", port, "down"]) |
1866 | + |
1867 | + |
1868 | +def set_manager(manager): |
1869 | + ''' Set the controller for the local openvswitch ''' |
1870 | + log('Setting manager for local ovs to {}'.format(manager)) |
1871 | + subprocess.check_call(['ovs-vsctl', 'set-manager', |
1872 | + 'ssl:{}'.format(manager)]) |
1873 | + |
1874 | + |
1875 | +CERT_PATH = '/etc/openvswitch/ovsclient-cert.pem' |
1876 | + |
1877 | + |
1878 | +def get_certificate(): |
1879 | + ''' Read openvswitch certificate from disk ''' |
1880 | + if os.path.exists(CERT_PATH): |
1881 | + log('Reading ovs certificate from {}'.format(CERT_PATH)) |
1882 | + with open(CERT_PATH, 'r') as cert: |
1883 | + full_cert = cert.read() |
1884 | + begin_marker = "-----BEGIN CERTIFICATE-----" |
1885 | + end_marker = "-----END CERTIFICATE-----" |
1886 | + begin_index = full_cert.find(begin_marker) |
1887 | + end_index = full_cert.rfind(end_marker) |
1888 | + if end_index == -1 or begin_index == -1: |
1889 | + raise RuntimeError("Certificate does not contain valid begin" |
1890 | + " and end markers.") |
1891 | + full_cert = full_cert[begin_index:(end_index + len(end_marker))] |
1892 | + return full_cert |
1893 | + else: |
1894 | + log('Certificate not found', level=WARNING) |
1895 | + return None |
1896 | + |
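`get_certificate` trims the file contents down to the PEM markers with `find`/`rfind`; the same slicing in isolation (the `raw` input below is a made-up fixture):

```python
def extract_certificate(full_cert):
    """Slice out just the PEM certificate block, as get_certificate does."""
    begin_marker = "-----BEGIN CERTIFICATE-----"
    end_marker = "-----END CERTIFICATE-----"
    begin_index = full_cert.find(begin_marker)
    end_index = full_cert.rfind(end_marker)
    if end_index == -1 or begin_index == -1:
        raise RuntimeError("Certificate does not contain valid begin"
                           " and end markers.")
    return full_cert[begin_index:end_index + len(end_marker)]


raw = ("header junk\n-----BEGIN CERTIFICATE-----\nMIIB...\n"
       "-----END CERTIFICATE-----\ntrailing junk\n")
cert = extract_certificate(raw)
print(cert.startswith("-----BEGIN CERTIFICATE-----"))  # True
print(cert.endswith("-----END CERTIFICATE-----"))      # True
```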
1897 | + |
1898 | +def full_restart(): |
1899 | + ''' Full restart and reload of openvswitch ''' |
1900 | + if os.path.exists('/etc/init/openvswitch-force-reload-kmod.conf'): |
1901 | + service('start', 'openvswitch-force-reload-kmod') |
1902 | + else: |
1903 | + service('force-reload-kmod', 'openvswitch-switch') |
1904 | |
1905 | === added directory 'hooks/charmhelpers/contrib/openstack' |
1906 | === added file 'hooks/charmhelpers/contrib/openstack/__init__.py' |
1907 | === added file 'hooks/charmhelpers/contrib/openstack/alternatives.py' |
1908 | --- hooks/charmhelpers/contrib/openstack/alternatives.py 1970-01-01 00:00:00 +0000 |
1909 | +++ hooks/charmhelpers/contrib/openstack/alternatives.py 2016-02-12 04:16:45 +0000 |
1910 | @@ -0,0 +1,17 @@ |
1911 | +''' Helper for managing alternatives for file conflict resolution ''' |
1912 | + |
1913 | +import subprocess |
1914 | +import shutil |
1915 | +import os |
1916 | + |
1917 | + |
1918 | +def install_alternative(name, target, source, priority=50): |
1919 | + ''' Install alternative configuration ''' |
1920 | + if (os.path.exists(target) and not os.path.islink(target)): |
1921 | + # Move existing file/directory away before installing |
1922 | + shutil.move(target, '{}.bak'.format(target)) |
1923 | + cmd = [ |
1924 | + 'update-alternatives', '--force', '--install', |
1925 | + target, name, source, str(priority) |
1926 | + ] |
1927 | + subprocess.check_call(cmd) |
1928 | |
1929 | === added file 'hooks/charmhelpers/contrib/openstack/context.py' |
1930 | --- hooks/charmhelpers/contrib/openstack/context.py 1970-01-01 00:00:00 +0000 |
1931 | +++ hooks/charmhelpers/contrib/openstack/context.py 2016-02-12 04:16:45 +0000 |
1932 | @@ -0,0 +1,577 @@ |
1933 | +import json |
1934 | +import os |
1935 | + |
1936 | +from base64 import b64decode |
1937 | + |
1938 | +from subprocess import ( |
1939 | + check_call |
1940 | +) |
1941 | + |
1942 | + |
1943 | +from charmhelpers.fetch import ( |
1944 | + apt_install, |
1945 | + filter_installed_packages, |
1946 | +) |
1947 | + |
1948 | +from charmhelpers.core.hookenv import ( |
1949 | + config, |
1950 | + local_unit, |
1951 | + log, |
1952 | + relation_get, |
1953 | + relation_ids, |
1954 | + related_units, |
1955 | + unit_get, |
1956 | + unit_private_ip, |
1957 | + ERROR, |
1958 | +) |
1959 | + |
1960 | +from charmhelpers.contrib.hahelpers.cluster import ( |
1961 | + determine_api_port, |
1962 | + determine_haproxy_port, |
1963 | + https, |
1964 | + is_clustered, |
1965 | + peer_units, |
1966 | +) |
1967 | + |
1968 | +from charmhelpers.contrib.hahelpers.apache import ( |
1969 | + get_cert, |
1970 | + get_ca_cert, |
1971 | +) |
1972 | + |
1973 | +from charmhelpers.contrib.openstack.neutron import ( |
1974 | + neutron_plugin_attribute, |
1975 | +) |
1976 | + |
1977 | +CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt' |
1978 | + |
1979 | + |
1980 | +class OSContextError(Exception): |
1981 | + pass |
1982 | + |
1983 | + |
1984 | +def ensure_packages(packages): |
1985 | + '''Install but do not upgrade required plugin packages''' |
1986 | + required = filter_installed_packages(packages) |
1987 | + if required: |
1988 | + apt_install(required, fatal=True) |
1989 | + |
1990 | + |
1991 | +def context_complete(ctxt): |
1992 | + _missing = [] |
1993 | + for k, v in ctxt.iteritems(): |
1994 | + if v is None or v == '': |
1995 | + _missing.append(k) |
1996 | + if _missing: |
1997 | + log('Missing required data: %s' % ' '.join(_missing), level='INFO') |
1998 | + return False |
1999 | + return True |
2000 | + |
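`context_complete` treats both `None` and the empty string as missing data; its behaviour in isolation (written with `items()` so it also runs on Python 3 — the original uses the Python 2-only `iteritems()`, and logs the missing keys rather than returning them):

```python
def context_complete(ctxt):
    """True only when every value in the context is populated."""
    missing = [k for k, v in ctxt.items() if v is None or v == '']
    return not missing


print(context_complete({'db_host': '10.0.0.1', 'password': 's3cret'}))  # True
print(context_complete({'db_host': '10.0.0.1', 'password': None}))      # False
print(context_complete({'db_host': '', 'password': 's3cret'}))          # False
```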
2001 | + |
2002 | +class OSContextGenerator(object): |
2003 | + interfaces = [] |
2004 | + |
2005 | + def __call__(self): |
2006 | + raise NotImplementedError |
2007 | + |
2008 | + |
2009 | +class SharedDBContext(OSContextGenerator): |
2010 | + interfaces = ['shared-db'] |
2011 | + |
2012 | + def __init__(self, database=None, user=None, relation_prefix=None): |
2013 | + ''' |
2014 | + Allows inspecting relation for settings prefixed with relation_prefix. |
2015 | + This is useful for parsing access for multiple databases returned via |
2016 | + the shared-db interface (e.g. nova_password, quantum_password). |
2017 | + ''' |
2018 | + self.relation_prefix = relation_prefix |
2019 | + self.database = database |
2020 | + self.user = user |
2021 | + |
2022 | + def __call__(self): |
2023 | + self.database = self.database or config('database') |
2024 | + self.user = self.user or config('database-user') |
2025 | + if None in [self.database, self.user]: |
2026 | + log('Could not generate shared_db context. ' |
2027 | + 'Missing required charm config options. ' |
2028 | + '(database name and user)') |
2029 | + raise OSContextError |
2030 | + ctxt = {} |
2031 | + |
2032 | + password_setting = 'password' |
2033 | + if self.relation_prefix: |
2034 | + password_setting = self.relation_prefix + '_password' |
2035 | + |
2036 | + for rid in relation_ids('shared-db'): |
2037 | + for unit in related_units(rid): |
2038 | + passwd = relation_get(password_setting, rid=rid, unit=unit) |
2039 | + ctxt = { |
2040 | + 'database_host': relation_get('db_host', rid=rid, |
2041 | + unit=unit), |
2042 | + 'database': self.database, |
2043 | + 'database_user': self.user, |
2044 | + 'database_password': passwd, |
2045 | + } |
2046 | + if context_complete(ctxt): |
2047 | + return ctxt |
2048 | + return {} |
2049 | + |
2050 | + |
2051 | +class IdentityServiceContext(OSContextGenerator): |
2052 | + interfaces = ['identity-service'] |
2053 | + |
2054 | + def __call__(self): |
2055 | + log('Generating template context for identity-service') |
2056 | + ctxt = {} |
2057 | + |
2058 | + for rid in relation_ids('identity-service'): |
2059 | + for unit in related_units(rid): |
2060 | + ctxt = { |
2061 | + 'service_port': relation_get('service_port', rid=rid, |
2062 | + unit=unit), |
2063 | + 'service_host': relation_get('service_host', rid=rid, |
2064 | + unit=unit), |
2065 | + 'auth_host': relation_get('auth_host', rid=rid, unit=unit), |
2066 | + 'auth_port': relation_get('auth_port', rid=rid, unit=unit), |
2067 | + 'admin_tenant_name': relation_get('service_tenant', |
2068 | + rid=rid, unit=unit), |
2069 | + 'admin_user': relation_get('service_username', rid=rid, |
2070 | + unit=unit), |
2071 | + 'admin_password': relation_get('service_password', rid=rid, |
2072 | + unit=unit), |
2073 | + # XXX: Hard-coded http. |
2074 | + 'service_protocol': 'http', |
2075 | + 'auth_protocol': 'http', |
2076 | + } |
2077 | + if context_complete(ctxt): |
2078 | + return ctxt |
2079 | + return {} |
2080 | + |
2081 | + |
2082 | +class AMQPContext(OSContextGenerator): |
2083 | + interfaces = ['amqp'] |
2084 | + |
2085 | + def __call__(self): |
2086 | + log('Generating template context for amqp') |
2087 | + conf = config() |
2088 | + try: |
2089 | + username = conf['rabbit-user'] |
2090 | + vhost = conf['rabbit-vhost'] |
2091 | + except KeyError as e: |
2092 | + log('Could not generate amqp context. ' |
2093 | + 'Missing required charm config options: %s.' % e) |
2094 | + raise OSContextError |
2095 | + |
2096 | + ctxt = {} |
2097 | + for rid in relation_ids('amqp'): |
2098 | + for unit in related_units(rid): |
2099 | + if relation_get('clustered', rid=rid, unit=unit): |
2100 | + ctxt['clustered'] = True |
2101 | + ctxt['rabbitmq_host'] = relation_get('vip', rid=rid, |
2102 | + unit=unit) |
2103 | + else: |
2104 | + ctxt['rabbitmq_host'] = relation_get('private-address', |
2105 | + rid=rid, unit=unit) |
2106 | + ctxt.update({ |
2107 | + 'rabbitmq_user': username, |
2108 | + 'rabbitmq_password': relation_get('password', rid=rid, |
2109 | + unit=unit), |
2110 | + 'rabbitmq_virtual_host': vhost, |
2111 | + }) |
2112 | + if context_complete(ctxt): |
2113 | + # Sufficient information found = break out! |
2114 | + break |
2115 | + # Used for active/active rabbitmq >= grizzly |
2116 | + if 'clustered' not in ctxt and len(related_units(rid)) > 1: |
2117 | + rabbitmq_hosts = [] |
2118 | + for unit in related_units(rid): |
2119 | + rabbitmq_hosts.append(relation_get('private-address', |
2120 | + rid=rid, unit=unit)) |
2121 | + ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts) |
2122 | + if not context_complete(ctxt): |
2123 | + return {} |
2124 | + else: |
2125 | + return ctxt |
2126 | + |
2127 | + |
2128 | +class CephContext(OSContextGenerator): |
2129 | + interfaces = ['ceph'] |
2130 | + |
2131 | + def __call__(self): |
2132 | + '''This generates context for /etc/ceph/ceph.conf templates''' |
2133 | + if not relation_ids('ceph'): |
2134 | + return {} |
2135 | + log('Generating template context for ceph') |
2136 | + mon_hosts = [] |
2137 | + auth = None |
2138 | + key = None |
2139 | + for rid in relation_ids('ceph'): |
2140 | + for unit in related_units(rid): |
2141 | + mon_hosts.append(relation_get('private-address', rid=rid, |
2142 | + unit=unit)) |
2143 | + auth = relation_get('auth', rid=rid, unit=unit) |
2144 | + key = relation_get('key', rid=rid, unit=unit) |
2145 | + |
2146 | + ctxt = { |
2147 | + 'mon_hosts': ' '.join(mon_hosts), |
2148 | + 'auth': auth, |
2149 | + 'key': key, |
2150 | + } |
2151 | + |
2152 | + if not os.path.isdir('/etc/ceph'): |
2153 | + os.mkdir('/etc/ceph') |
2154 | + |
2155 | + if not context_complete(ctxt): |
2156 | + return {} |
2157 | + |
2158 | + ensure_packages(['ceph-common']) |
2159 | + |
2160 | + return ctxt |
2161 | + |
2162 | + |
2163 | +class HAProxyContext(OSContextGenerator): |
2164 | + interfaces = ['cluster'] |
2165 | + |
2166 | + def __call__(self): |
2167 | + ''' |
2168 | + Builds half a context for the haproxy template, which describes |
2169 | + all peers to be included in the cluster. Each charm needs to include |
2170 | + its own context generator that describes the port mapping. |
2171 | + ''' |
2172 | + if not relation_ids('cluster'): |
2173 | + return {} |
2174 | + |
2175 | + cluster_hosts = {} |
2176 | + l_unit = local_unit().replace('/', '-') |
2177 | + cluster_hosts[l_unit] = unit_get('private-address') |
2178 | + |
2179 | + for rid in relation_ids('cluster'): |
2180 | + for unit in related_units(rid): |
2181 | + _unit = unit.replace('/', '-') |
2182 | + addr = relation_get('private-address', rid=rid, unit=unit) |
2183 | + cluster_hosts[_unit] = addr |
2184 | + |
2185 | + ctxt = { |
2186 | + 'units': cluster_hosts, |
2187 | + } |
2188 | + if len(cluster_hosts.keys()) > 1: |
2189 | + # Enable haproxy when we have enough peers. |
2190 | + log('Ensuring haproxy enabled in /etc/default/haproxy.') |
2191 | + with open('/etc/default/haproxy', 'w') as out: |
2192 | + out.write('ENABLED=1\n') |
2193 | + return ctxt |
2194 | + log('HAProxy context is incomplete, this unit has no peers.') |
2195 | + return {} |
2196 | + |
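The peer map that `HAProxyContext` builds keys each host by its unit name with `/` replaced by `-` (so it can be used in haproxy server names). Sketched standalone, with made-up unit names and addresses in place of the relation data:

```python
def build_cluster_hosts(local_unit_name, local_addr, peers):
    """peers: iterable of (unit_name, private_address) pairs."""
    # Unit names like 'keystone/0' become 'keystone-0' map keys.
    hosts = {local_unit_name.replace('/', '-'): local_addr}
    for unit, addr in peers:
        hosts[unit.replace('/', '-')] = addr
    return hosts


hosts = build_cluster_hosts('keystone/0', '10.0.0.1',
                            [('keystone/1', '10.0.0.2'),
                             ('keystone/2', '10.0.0.3')])
print(hosts['keystone-1'])  # 10.0.0.2
```

Only when this map contains more than one entry does the context generator enable haproxy via `/etc/default/haproxy`.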
2197 | + |
2198 | +class ImageServiceContext(OSContextGenerator): |
2199 | + interfaces = ['image-service'] |
2200 | + |
2201 | + def __call__(self): |
2202 | + ''' |
2203 | + Obtains the glance API server from the image-service relation. Useful |
2204 | + in nova and cinder (currently). |
2205 | + ''' |
2206 | + log('Generating template context for image-service.') |
2207 | + rids = relation_ids('image-service') |
2208 | + if not rids: |
2209 | + return {} |
2210 | + for rid in rids: |
2211 | + for unit in related_units(rid): |
2212 | + api_server = relation_get('glance-api-server', |
2213 | + rid=rid, unit=unit) |
2214 | + if api_server: |
2215 | + return {'glance_api_servers': api_server} |
2216 | + log('ImageService context is incomplete. ' |
2217 | + 'Missing required relation data.') |
2218 | + return {} |
2219 | + |
2220 | + |
2221 | +class ApacheSSLContext(OSContextGenerator): |
2222 | + |
2223 | + """ |
2224 | + Generates a context for an apache vhost configuration that configures |
2225 | + HTTPS reverse proxying for one or many endpoints. Generated context |
2226 | + looks something like: |
2227 | + { |
2228 | + 'namespace': 'cinder', |
2229 | + 'private_address': 'iscsi.mycinderhost.com', |
2230 | + 'endpoints': [(8776, 8766), (8777, 8767)] |
2231 | + } |
2232 | + |
2233 | + The endpoints list consists of tuples mapping external ports |
2234 | + to internal ports. |
2235 | + """ |
2236 | + interfaces = ['https'] |
2237 | + |
2238 | + # charms should inherit this context and set external ports |
2239 | + # and service namespace accordingly. |
2240 | + external_ports = [] |
2241 | + service_namespace = None |
2242 | + |
2243 | + def enable_modules(self): |
2244 | + cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http'] |
2245 | + check_call(cmd) |
2246 | + |
2247 | + def configure_cert(self): |
2248 | + if not os.path.isdir('/etc/apache2/ssl'): |
2249 | + os.mkdir('/etc/apache2/ssl') |
2250 | + ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace) |
2251 | + if not os.path.isdir(ssl_dir): |
2252 | + os.mkdir(ssl_dir) |
2253 | + cert, key = get_cert() |
2254 | + with open(os.path.join(ssl_dir, 'cert'), 'w') as cert_out: |
2255 | + cert_out.write(b64decode(cert)) |
2256 | + with open(os.path.join(ssl_dir, 'key'), 'w') as key_out: |
2257 | + key_out.write(b64decode(key)) |
2258 | + ca_cert = get_ca_cert() |
2259 | + if ca_cert: |
2260 | + with open(CA_CERT_PATH, 'w') as ca_out: |
2261 | + ca_out.write(b64decode(ca_cert)) |
2262 | + check_call(['update-ca-certificates']) |
2263 | + |
2264 | + def __call__(self): |
2265 | + if isinstance(self.external_ports, basestring): |
2266 | + self.external_ports = [self.external_ports] |
2267 | + if (not self.external_ports or not https()): |
2268 | + return {} |
2269 | + |
2270 | + self.configure_cert() |
2271 | + self.enable_modules() |
2272 | + |
2273 | + ctxt = { |
2274 | + 'namespace': self.service_namespace, |
2275 | + 'private_address': unit_get('private-address'), |
2276 | + 'endpoints': [] |
2277 | + } |
2278 | + for ext_port in self.external_ports: |
2279 | + if peer_units() or is_clustered(): |
2280 | + int_port = determine_haproxy_port(ext_port) |
2281 | + else: |
2282 | + int_port = determine_api_port(ext_port) |
2283 | + portmap = (int(ext_port), int(int_port)) |
2284 | + ctxt['endpoints'].append(portmap) |
2285 | + return ctxt |
2286 | + |
2287 | + |
2288 | +class NeutronContext(object): |
2289 | + interfaces = [] |
2290 | + |
2291 | + @property |
2292 | + def plugin(self): |
2293 | + return None |
2294 | + |
2295 | + @property |
2296 | + def network_manager(self): |
2297 | + return None |
2298 | + |
2299 | + @property |
2300 | + def packages(self): |
2301 | + return neutron_plugin_attribute( |
2302 | + self.plugin, 'packages', self.network_manager) |
2303 | + |
2304 | + @property |
2305 | + def neutron_security_groups(self): |
2306 | + return None |
2307 | + |
2308 | + def _ensure_packages(self): |
2309 | + [ensure_packages(pkgs) for pkgs in self.packages] |
2310 | + |
2311 | + def _save_flag_file(self): |
2312 | + if self.network_manager == 'quantum': |
2313 | + _file = '/etc/nova/quantum_plugin.conf' |
2314 | + else: |
2315 | + _file = '/etc/nova/neutron_plugin.conf' |
2316 | + with open(_file, 'wb') as out: |
2317 | + out.write(self.plugin + '\n') |
2318 | + |
2319 | + def ovs_ctxt(self): |
2320 | + driver = neutron_plugin_attribute(self.plugin, 'driver', |
2321 | + self.network_manager) |
2322 | + config = neutron_plugin_attribute(self.plugin, 'config', |
2323 | + self.network_manager) |
2324 | + ovs_ctxt = { |
2325 | + 'core_plugin': driver, |
2326 | + 'neutron_plugin': 'ovs', |
2327 | + 'neutron_security_groups': self.neutron_security_groups, |
2328 | + 'local_ip': unit_private_ip(), |
2329 | + 'config': config |
2330 | + } |
2331 | + |
2332 | + return ovs_ctxt |
2333 | + |
2334 | + def nvp_ctxt(self): |
2335 | + driver = neutron_plugin_attribute(self.plugin, 'driver', |
2336 | + self.network_manager) |
2337 | + config = neutron_plugin_attribute(self.plugin, 'config', |
2338 | + self.network_manager) |
2339 | + nvp_ctxt = { |
2340 | + 'core_plugin': driver, |
2341 | + 'neutron_plugin': 'nvp', |
2342 | + 'neutron_security_groups': self.neutron_security_groups, |
2343 | + 'local_ip': unit_private_ip(), |
2344 | + 'config': config |
2345 | + } |
2346 | + |
2347 | + return nvp_ctxt |
2348 | + |
2349 | + def __call__(self): |
2350 | + self._ensure_packages() |
2351 | + |
2352 | + if self.network_manager not in ['quantum', 'neutron']: |
2353 | + return {} |
2354 | + |
2355 | + if not self.plugin: |
2356 | + return {} |
2357 | + |
2358 | + ctxt = {'network_manager': self.network_manager} |
2359 | + |
2360 | + if self.plugin == 'ovs': |
2361 | + ctxt.update(self.ovs_ctxt()) |
2362 | + elif self.plugin == 'nvp': |
2363 | + ctxt.update(self.nvp_ctxt()) |
2364 | + |
2365 | + self._save_flag_file() |
2366 | + return ctxt |
2367 | + |
2368 | + |
2369 | +class OSConfigFlagContext(OSContextGenerator): |
2370 | + |
2371 | + """ |
2372 | + Responsible for adding user-defined config-flags in charm config to a |
2373 | + template context. |
2374 | + |
2375 | + NOTE: the value of config-flags may be a comma-separated list of |
2376 | + key=value pairs and some Openstack config files support |
2377 | + comma-separated lists as values. |
2378 | + """ |
2379 | + |
2380 | + def __call__(self): |
2381 | + config_flags = config('config-flags') |
2382 | + if not config_flags: |
2383 | + return {} |
2384 | + |
2385 | + if config_flags.find('==') >= 0: |
2386 | + log("config_flags is not in expected format (key=value)", |
2387 | + level=ERROR) |
2388 | + raise OSContextError |
2389 | + |
2390 | + # strip the following from each value. |
2391 | + post_strippers = ' ,' |
2392 | + # we strip any leading/trailing '=' or ' ' from the string then |
2393 | + # split on '='. |
2394 | + split = config_flags.strip(' =').split('=') |
2395 | + limit = len(split) |
2396 | + flags = {} |
2397 | + for i in xrange(0, limit - 1): |
2398 | + current = split[i] |
2399 | + next = split[i + 1] |
2400 | + vindex = next.rfind(',') |
2401 | + if (i == limit - 2) or (vindex < 0): |
2402 | + value = next |
2403 | + else: |
2404 | + value = next[:vindex] |
2405 | + |
2406 | + if i == 0: |
2407 | + key = current |
2408 | + else: |
2409 | + # if this is not the first entry, expect an embedded key. |
2410 | + index = current.rfind(',') |
2411 | + if index < 0: |
2412 | + log("invalid config value(s) at index %s" % (i), |
2413 | + level=ERROR) |
2414 | + raise OSContextError |
2415 | + key = current[index + 1:] |
2416 | + |
2417 | + # Add to collection. |
2418 | + flags[key.strip(post_strippers)] = value.rstrip(post_strippers) |
2419 | + |
2420 | + return {'user_config_flags': flags} |
2421 | + |
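The config-flags parsing loop above is dense; as an illustrative aside (not part of the diff itself), the same algorithm can be written as a standalone Python 3 sketch, with `xrange` replaced by `range` and `OSContextError` by a plain `ValueError`:

```python
def parse_config_flags(config_flags):
    """Python 3 sketch of the parsing loop in OSConfigFlagContext:
    splits a 'k1=v1,k2=v2' string into a dict while allowing values
    that themselves contain commas."""
    post_strippers = ' ,'
    split = config_flags.strip(' =').split('=')
    limit = len(split)
    flags = {}
    for i in range(limit - 1):
        current = split[i]
        nxt = split[i + 1]
        vindex = nxt.rfind(',')
        if (i == limit - 2) or (vindex < 0):
            # last pair, or a value with no trailing key: keep it whole
            value = nxt
        else:
            value = nxt[:vindex]
        if i == 0:
            key = current
        else:
            # later entries carry the next key after their last comma
            index = current.rfind(',')
            if index < 0:
                raise ValueError('invalid config value(s) at index %s' % i)
            key = current[index + 1:]
        flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
    return flags
```

Note how a value such as `a,b,c` survives intact because only the segment after the last comma is treated as the next key.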
2422 | + |
2423 | +class SubordinateConfigContext(OSContextGenerator): |
2424 | + |
2425 | + """ |
2426 | + Responsible for inspecting relations to subordinates that |
2427 | + may be exporting required config via a json blob. |
2428 | + |
2429 | + The subordinate interface allows subordinates to export their |
2430 | + configuration requirements to the principal for multiple config |
2431 | + files and multiple services. For example, a subordinate that has |
2432 | + interfaces to both glance and nova may export the following YAML blob as JSON: |
2433 | + |
2434 | + glance: |
2435 | + /etc/glance/glance-api.conf: |
2436 | + sections: |
2437 | + DEFAULT: |
2438 | + - [key1, value1] |
2439 | + /etc/glance/glance-registry.conf: |
2440 | + MYSECTION: |
2441 | + - [key2, value2] |
2442 | + nova: |
2443 | + /etc/nova/nova.conf: |
2444 | + sections: |
2445 | + DEFAULT: |
2446 | + - [key3, value3] |
2447 | + |
2448 | + |
2449 | + It is then up to the principal charms to subscribe this context to |
2450 | + the service+config file they are interested in. Configuration data will |
2451 | + be available in the template context, in glance's case, as: |
2452 | + ctxt = { |
2453 | + ... other context ... |
2454 | + 'subordinate_config': { |
2455 | + 'DEFAULT': { |
2456 | + 'key1': 'value1', |
2457 | + }, |
2458 | + 'MYSECTION': { |
2459 | + 'key2': 'value2', |
2460 | + }, |
2461 | + } |
2462 | + } |
2463 | + |
2464 | + """ |
2465 | + |
2466 | + def __init__(self, service, config_file, interface): |
2467 | + """ |
2468 | + :param service : Service name key to query in any subordinate |
2469 | + data found |
2470 | + :param config_file : Service's config file to query sections |
2471 | + :param interface : Subordinate interface to inspect |
2472 | + """ |
2473 | + self.service = service |
2474 | + self.config_file = config_file |
2475 | + self.interface = interface |
2476 | + |
2477 | + def __call__(self): |
2478 | + ctxt = {} |
2479 | + for rid in relation_ids(self.interface): |
2480 | + for unit in related_units(rid): |
2481 | + sub_config = relation_get('subordinate_configuration', |
2482 | + rid=rid, unit=unit) |
2483 | + if sub_config and sub_config != '': |
2484 | + try: |
2485 | + sub_config = json.loads(sub_config) |
2486 | + except: |
2487 | + log('Could not parse JSON from subordinate_config ' |
2488 | + 'setting from %s' % rid, level=ERROR) |
2489 | + continue |
2490 | + |
2491 | + if self.service not in sub_config: |
2492 | + log('Found subordinate_config on %s but it contained ' |
2493 | + 'nothing for %s service' % (rid, self.service)) |
2494 | + continue |
2495 | + |
2496 | + sub_config = sub_config[self.service] |
2497 | + if self.config_file not in sub_config: |
2498 | + log('Found subordinate_config on %s but it contained ' |
2499 | + 'nothing for %s' % (rid, self.config_file)) |
2500 | + continue |
2501 | + |
2502 | + sub_config = sub_config[self.config_file] |
2503 | + for k, v in sub_config.iteritems(): |
2504 | + ctxt[k] = v |
2505 | + |
2506 | + if not ctxt: |
2507 | + ctxt['sections'] = {} |
2508 | + |
2509 | + return ctxt |
2510 | |
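The flattening that `SubordinateConfigContext.__call__` performs can be seen in isolation with the hook environment factored out. In this illustrative sketch, `blobs` stands in for the `subordinate_configuration` JSON that each related subordinate unit would publish via `relation_get`:

```python
import json

def merge_subordinate_config(blobs, service, config_file):
    """Illustrative sketch of SubordinateConfigContext.__call__:
    pick out the requested service + config file from each
    subordinate's JSON blob and merge the results into one context."""
    ctxt = {}
    for blob in blobs:
        try:
            sub_config = json.loads(blob)
        except ValueError:
            # unparsable JSON is logged and skipped in the real helper
            continue
        sub_config = sub_config.get(service, {}).get(config_file)
        if not sub_config:
            continue
        ctxt.update(sub_config)
    if not ctxt:
        # templates expect a 'sections' key even when nothing was found
        ctxt['sections'] = {}
    return ctxt
```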
2511 | === added file 'hooks/charmhelpers/contrib/openstack/neutron.py' |
2512 | --- hooks/charmhelpers/contrib/openstack/neutron.py 1970-01-01 00:00:00 +0000 |
2513 | +++ hooks/charmhelpers/contrib/openstack/neutron.py 2016-02-12 04:16:45 +0000 |
2514 | @@ -0,0 +1,137 @@ |
2515 | +# Various utilities for dealing with Neutron and the renaming from Quantum. |
2516 | + |
2517 | +from subprocess import check_output |
2518 | + |
2519 | +from charmhelpers.core.hookenv import ( |
2520 | + config, |
2521 | + log, |
2522 | + ERROR, |
2523 | +) |
2524 | + |
2525 | +from charmhelpers.contrib.openstack.utils import os_release |
2526 | + |
2527 | + |
2528 | +def headers_package(): |
2529 | + """Return the linux-headers package for the running kernel, |
2530 | + needed for building DKMS packages""" |
2531 | + kver = check_output(['uname', '-r']).strip() |
2532 | + return 'linux-headers-%s' % kver |
2533 | + |
2534 | + |
2535 | +# legacy |
2536 | +def quantum_plugins(): |
2537 | + from charmhelpers.contrib.openstack import context |
2538 | + return { |
2539 | + 'ovs': { |
2540 | + 'config': '/etc/quantum/plugins/openvswitch/' |
2541 | + 'ovs_quantum_plugin.ini', |
2542 | + 'driver': 'quantum.plugins.openvswitch.ovs_quantum_plugin.' |
2543 | + 'OVSQuantumPluginV2', |
2544 | + 'contexts': [ |
2545 | + context.SharedDBContext(user=config('neutron-database-user'), |
2546 | + database=config('neutron-database'), |
2547 | + relation_prefix='neutron')], |
2548 | + 'services': ['quantum-plugin-openvswitch-agent'], |
2549 | + 'packages': [[headers_package(), 'openvswitch-datapath-dkms'], |
2550 | + ['quantum-plugin-openvswitch-agent']], |
2551 | + 'server_packages': ['quantum-server', |
2552 | + 'quantum-plugin-openvswitch'], |
2553 | + 'server_services': ['quantum-server'] |
2554 | + }, |
2555 | + 'nvp': { |
2556 | + 'config': '/etc/quantum/plugins/nicira/nvp.ini', |
2557 | + 'driver': 'quantum.plugins.nicira.nicira_nvp_plugin.' |
2558 | + 'QuantumPlugin.NvpPluginV2', |
2559 | + 'contexts': [ |
2560 | + context.SharedDBContext(user=config('neutron-database-user'), |
2561 | + database=config('neutron-database'), |
2562 | + relation_prefix='neutron')], |
2563 | + 'services': [], |
2564 | + 'packages': [], |
2565 | + 'server_packages': ['quantum-server', |
2566 | + 'quantum-plugin-nicira'], |
2567 | + 'server_services': ['quantum-server'] |
2568 | + } |
2569 | + } |
2570 | + |
2571 | + |
2572 | +def neutron_plugins(): |
2573 | + from charmhelpers.contrib.openstack import context |
2574 | + return { |
2575 | + 'ovs': { |
2576 | + 'config': '/etc/neutron/plugins/openvswitch/' |
2577 | + 'ovs_neutron_plugin.ini', |
2578 | + 'driver': 'neutron.plugins.openvswitch.ovs_neutron_plugin.' |
2579 | + 'OVSNeutronPluginV2', |
2580 | + 'contexts': [ |
2581 | + context.SharedDBContext(user=config('neutron-database-user'), |
2582 | + database=config('neutron-database'), |
2583 | + relation_prefix='neutron')], |
2584 | + 'services': ['neutron-plugin-openvswitch-agent'], |
2585 | + 'packages': [[headers_package(), 'openvswitch-datapath-dkms'], |
2586 | + ['neutron-plugin-openvswitch-agent']], |
2587 | + 'server_packages': ['neutron-server', |
2588 | + 'neutron-plugin-openvswitch'], |
2589 | + 'server_services': ['neutron-server'] |
2590 | + }, |
2591 | + 'nvp': { |
2592 | + 'config': '/etc/neutron/plugins/nicira/nvp.ini', |
2593 | + 'driver': 'neutron.plugins.nicira.nicira_nvp_plugin.' |
2594 | + 'NeutronPlugin.NvpPluginV2', |
2595 | + 'contexts': [ |
2596 | + context.SharedDBContext(user=config('neutron-database-user'), |
2597 | + database=config('neutron-database'), |
2598 | + relation_prefix='neutron')], |
2599 | + 'services': [], |
2600 | + 'packages': [], |
2601 | + 'server_packages': ['neutron-server', |
2602 | + 'neutron-plugin-nicira'], |
2603 | + 'server_services': ['neutron-server'] |
2604 | + } |
2605 | + } |
2606 | + |
2607 | + |
2608 | +def neutron_plugin_attribute(plugin, attr, net_manager=None): |
2609 | + manager = net_manager or network_manager() |
2610 | + if manager == 'quantum': |
2611 | + plugins = quantum_plugins() |
2612 | + elif manager == 'neutron': |
2613 | + plugins = neutron_plugins() |
2614 | + else: |
2615 | + log('Error: Network manager does not support plugins.') |
2616 | + raise Exception |
2617 | + |
2618 | + try: |
2619 | + _plugin = plugins[plugin] |
2620 | + except KeyError: |
2621 | + log('Unrecognised plugin for %s: %s' % (manager, plugin), level=ERROR) |
2622 | + raise Exception |
2623 | + |
2624 | + try: |
2625 | + return _plugin[attr] |
2626 | + except KeyError: |
2627 | + return None |
2628 | + |
2629 | + |
2630 | +def network_manager(): |
2631 | + ''' |
2632 | + Deals with the renaming of Quantum to Neutron in H and any situations |
2633 | + that require compatibility (e.g. deploying H with network-manager=quantum, |
2634 | + upgrading from G). |
2635 | + ''' |
2636 | + release = os_release('nova-common') |
2637 | + manager = config('network-manager').lower() |
2638 | + |
2639 | + if manager not in ['quantum', 'neutron']: |
2640 | + return manager |
2641 | + |
2642 | + if release in ['essex']: |
2643 | + # E does not support neutron |
2644 | + log('Neutron networking not supported in Essex.', level=ERROR) |
2645 | + raise Exception |
2646 | + elif release in ['folsom', 'grizzly']: |
2647 | + # neutron is named quantum in F and G |
2648 | + return 'quantum' |
2649 | + else: |
2650 | + # ensure accurate naming for all releases post-H |
2651 | + return 'neutron' |
2652 | |
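The branching in `network_manager()` above is easy to get wrong when backporting. As an illustrative aside, the decision table can be exercised with a hypothetical standalone version that takes the release and configured manager as plain arguments (`os_release()` and `config()` are hook-environment calls in the real helper):

```python
def resolve_network_manager(manager, release):
    """Hypothetical standalone version of network_manager(): maps the
    configured manager name through the Quantum -> Neutron rename.
    In the real helper, 'manager' comes from config('network-manager')
    and 'release' from os_release('nova-common')."""
    manager = manager.lower()
    if manager not in ('quantum', 'neutron'):
        # non-SDN managers (e.g. FlatDHCPManager) pass straight through
        return manager
    if release == 'essex':
        raise ValueError('Neutron networking not supported in Essex.')
    if release in ('folsom', 'grizzly'):
        return 'quantum'   # the project was still named quantum in F/G
    return 'neutron'       # havana and later
```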
2653 | === added directory 'hooks/charmhelpers/contrib/openstack/templates' |
2654 | === added file 'hooks/charmhelpers/contrib/openstack/templates/__init__.py' |
2655 | --- hooks/charmhelpers/contrib/openstack/templates/__init__.py 1970-01-01 00:00:00 +0000 |
2656 | +++ hooks/charmhelpers/contrib/openstack/templates/__init__.py 2016-02-12 04:16:45 +0000 |
2657 | @@ -0,0 +1,2 @@ |
2658 | +# dummy __init__.py to fool syncer into thinking this is a syncable python |
2659 | +# module |
2660 | |
2661 | === added file 'hooks/charmhelpers/contrib/openstack/templates/ceph.conf' |
2662 | --- hooks/charmhelpers/contrib/openstack/templates/ceph.conf 1970-01-01 00:00:00 +0000 |
2663 | +++ hooks/charmhelpers/contrib/openstack/templates/ceph.conf 2016-02-12 04:16:45 +0000 |
2664 | @@ -0,0 +1,11 @@ |
2665 | +############################################################################### |
2666 | +# [ WARNING ] |
2667 | +# ceph configuration file maintained by Juju |
2668 | +# local changes may be overwritten. |
2669 | +############################################################################### |
2670 | +{% if auth -%} |
2671 | +[global] |
2672 | + auth_supported = {{ auth }} |
2673 | + keyring = /etc/ceph/$cluster.$name.keyring |
2674 | + mon host = {{ mon_hosts }} |
2675 | +{% endif -%} |
2676 | |
2677 | === added file 'hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg' |
2678 | --- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 1970-01-01 00:00:00 +0000 |
2679 | +++ hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2016-02-12 04:16:45 +0000 |
2680 | @@ -0,0 +1,37 @@ |
2681 | +global |
2682 | + log 127.0.0.1 local0 |
2683 | + log 127.0.0.1 local1 notice |
2684 | + maxconn 20000 |
2685 | + user haproxy |
2686 | + group haproxy |
2687 | + spread-checks 0 |
2688 | + |
2689 | +defaults |
2690 | + log global |
2691 | + mode http |
2692 | + option httplog |
2693 | + option dontlognull |
2694 | + retries 3 |
2695 | + timeout queue 1000 |
2696 | + timeout connect 1000 |
2697 | + timeout client 30000 |
2698 | + timeout server 30000 |
2699 | + |
2700 | +listen stats :8888 |
2701 | + mode http |
2702 | + stats enable |
2703 | + stats hide-version |
2704 | + stats realm Haproxy\ Statistics |
2705 | + stats uri / |
2706 | + stats auth admin:password |
2707 | + |
2708 | +{% if units -%} |
2709 | +{% for service, ports in service_ports.iteritems() -%} |
2710 | +listen {{ service }} 0.0.0.0:{{ ports[0] }} |
2711 | + balance roundrobin |
2712 | + option tcplog |
2713 | + {% for unit, address in units.iteritems() -%} |
2714 | + server {{ unit }} {{ address }}:{{ ports[1] }} check |
2715 | + {% endfor %} |
2716 | +{% endfor -%} |
2717 | +{% endif -%} |
2718 | |
2719 | === added file 'hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend' |
2720 | --- hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend 1970-01-01 00:00:00 +0000 |
2721 | +++ hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend 2016-02-12 04:16:45 +0000 |
2722 | @@ -0,0 +1,23 @@ |
2723 | +{% if endpoints -%} |
2724 | +{% for ext, int in endpoints -%} |
2725 | +Listen {{ ext }} |
2726 | +NameVirtualHost *:{{ ext }} |
2727 | +<VirtualHost *:{{ ext }}> |
2728 | + ServerName {{ private_address }} |
2729 | + SSLEngine on |
2730 | + SSLCertificateFile /etc/apache2/ssl/{{ namespace }}/cert |
2731 | + SSLCertificateKeyFile /etc/apache2/ssl/{{ namespace }}/key |
2732 | + ProxyPass / http://localhost:{{ int }}/ |
2733 | + ProxyPassReverse / http://localhost:{{ int }}/ |
2734 | + ProxyPreserveHost on |
2735 | +</VirtualHost> |
2736 | +<Proxy *> |
2737 | + Order deny,allow |
2738 | + Allow from all |
2739 | +</Proxy> |
2740 | +<Location /> |
2741 | + Order allow,deny |
2742 | + Allow from all |
2743 | +</Location> |
2744 | +{% endfor -%} |
2745 | +{% endif -%} |
2746 | |
2747 | === added symlink 'hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf' |
2748 | === target is u'openstack_https_frontend' |
2749 | === added file 'hooks/charmhelpers/contrib/openstack/templating.py' |
2750 | --- hooks/charmhelpers/contrib/openstack/templating.py 1970-01-01 00:00:00 +0000 |
2751 | +++ hooks/charmhelpers/contrib/openstack/templating.py 2016-02-12 04:16:45 +0000 |
2752 | @@ -0,0 +1,280 @@ |
2753 | +import os |
2754 | + |
2755 | +from charmhelpers.fetch import apt_install |
2756 | + |
2757 | +from charmhelpers.core.hookenv import ( |
2758 | + log, |
2759 | + ERROR, |
2760 | + INFO |
2761 | +) |
2762 | + |
2763 | +from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES |
2764 | + |
2765 | +try: |
2766 | + from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions |
2767 | +except ImportError: |
2768 | + # python-jinja2 may not be installed yet, or we're running unittests. |
2769 | + FileSystemLoader = ChoiceLoader = Environment = exceptions = None |
2770 | + |
2771 | + |
2772 | +class OSConfigException(Exception): |
2773 | + pass |
2774 | + |
2775 | + |
2776 | +def get_loader(templates_dir, os_release): |
2777 | + """ |
2778 | + Create a jinja2.ChoiceLoader containing template dirs up to |
2779 | + and including os_release. If a release's template directory |
2780 | + is missing under templates_dir, it is omitted from the loader. |
2781 | + templates_dir is added to the bottom of the search list as a base |
2782 | + loading dir. |
2783 | + |
2784 | + A charm may also ship a templates dir with this module |
2785 | + and it will be appended to the bottom of the search list, eg: |
2786 | + hooks/charmhelpers/contrib/openstack/templates. |
2787 | + |
2788 | + :param templates_dir: str: Base template directory containing release |
2789 | + sub-directories. |
2790 | + :param os_release : str: OpenStack release codename to construct template |
2791 | + loader. |
2792 | + |
2793 | + :returns : jinja2.ChoiceLoader constructed with a list of |
2794 | + jinja2.FilesystemLoaders, ordered in descending |
2795 | + order by OpenStack release. |
2796 | + """ |
2797 | + tmpl_dirs = [(rel, os.path.join(templates_dir, rel)) |
2798 | + for rel in OPENSTACK_CODENAMES.itervalues()] |
2799 | + |
2800 | + if not os.path.isdir(templates_dir): |
2801 | + log('Templates directory not found @ %s.' % templates_dir, |
2802 | + level=ERROR) |
2803 | + raise OSConfigException |
2804 | + |
2805 | + # the bottom contains templates_dir and possibly a common templates dir |
2806 | + # shipped with the helper. |
2807 | + loaders = [FileSystemLoader(templates_dir)] |
2808 | + helper_templates = os.path.join(os.path.dirname(__file__), 'templates') |
2809 | + if os.path.isdir(helper_templates): |
2810 | + loaders.append(FileSystemLoader(helper_templates)) |
2811 | + |
2812 | + for rel, tmpl_dir in tmpl_dirs: |
2813 | + if os.path.isdir(tmpl_dir): |
2814 | + loaders.insert(0, FileSystemLoader(tmpl_dir)) |
2815 | + if rel == os_release: |
2816 | + break |
2817 | + log('Creating choice loader with dirs: %s' % |
2818 | + [l.searchpath for l in loaders], level=INFO) |
2819 | + return ChoiceLoader(loaders) |
2820 | + |
2821 | + |
2822 | +class OSConfigTemplate(object): |
2823 | + """ |
2824 | + Associates a config file template with a list of context generators. |
2825 | + Responsible for constructing a template context based on those generators. |
2826 | + """ |
2827 | + def __init__(self, config_file, contexts): |
2828 | + self.config_file = config_file |
2829 | + |
2830 | + if hasattr(contexts, '__call__'): |
2831 | + self.contexts = [contexts] |
2832 | + else: |
2833 | + self.contexts = contexts |
2834 | + |
2835 | + self._complete_contexts = [] |
2836 | + |
2837 | + def context(self): |
2838 | + ctxt = {} |
2839 | + for context in self.contexts: |
2840 | + _ctxt = context() |
2841 | + if _ctxt: |
2842 | + ctxt.update(_ctxt) |
2843 | + # track interfaces for every complete context. |
2844 | + [self._complete_contexts.append(interface) |
2845 | + for interface in context.interfaces |
2846 | + if interface not in self._complete_contexts] |
2847 | + return ctxt |
2848 | + |
2849 | + def complete_contexts(self): |
2850 | + ''' |
2851 | + Return a list of interfaces that have satisfied contexts. |
2852 | + ''' |
2853 | + if self._complete_contexts: |
2854 | + return self._complete_contexts |
2855 | + self.context() |
2856 | + return self._complete_contexts |
2857 | + |
2858 | + |
2859 | +class OSConfigRenderer(object): |
2860 | + """ |
2861 | + This class provides a common templating system to be used by OpenStack |
2862 | + charms. It is intended to help charms share common code and templates, |
2863 | + and ease the burden of managing config templates across multiple OpenStack |
2864 | + releases. |
2865 | + |
2866 | + Basic usage: |
2867 | + # import some common context generators from charmhelpers |
2868 | + from charmhelpers.contrib.openstack import context |
2869 | + |
2870 | + # Create a renderer object for a specific OS release. |
2871 | + configs = OSConfigRenderer(templates_dir='/tmp/templates', |
2872 | + openstack_release='folsom') |
2873 | + # register some config files with context generators. |
2874 | + configs.register(config_file='/etc/nova/nova.conf', |
2875 | + contexts=[context.SharedDBContext(), |
2876 | + context.AMQPContext()]) |
2877 | + configs.register(config_file='/etc/nova/api-paste.ini', |
2878 | + contexts=[context.IdentityServiceContext()]) |
2879 | + configs.register(config_file='/etc/haproxy/haproxy.conf', |
2880 | + contexts=[context.HAProxyContext()]) |
2881 | + # write out a single config |
2882 | + configs.write('/etc/nova/nova.conf') |
2883 | + # write out all registered configs |
2884 | + configs.write_all() |
2885 | + |
2886 | + Details: |
2887 | + |
2888 | + OpenStack Releases and template loading |
2889 | + --------------------------------------- |
2890 | + When the object is instantiated, it is associated with a specific OS |
2891 | + release. This dictates how the template loader will be constructed. |
2892 | + |
2893 | + The constructed loader attempts to load the template from several places |
2894 | + in the following order: |
2895 | + - from the most recent OS release-specific template dir (if one exists) |
2896 | + - the base templates_dir |
2897 | + - a template directory shipped in the charm with this helper file. |
2898 | + |
2899 | + |
2900 | + For the example above, '/tmp/templates' contains the following structure: |
2901 | + /tmp/templates/nova.conf |
2902 | + /tmp/templates/api-paste.ini |
2903 | + /tmp/templates/grizzly/api-paste.ini |
2904 | + /tmp/templates/havana/api-paste.ini |
2905 | + |
2906 | + Since it was registered with the grizzly release, it first searches |
2907 | + the grizzly directory for nova.conf, then the templates dir. |
2908 | + |
2909 | + When writing api-paste.ini, it will find the template in the grizzly |
2910 | + directory. |
2911 | + |
2912 | + If the object were created with folsom, it would fall back to the |
2913 | + base templates dir for its api-paste.ini template. |
2914 | + |
2915 | + This system should help manage changes in config files through |
2916 | + openstack releases, allowing charms to fall back to the most recently |
2917 | + updated config template for a given release. |
2918 | + |
2919 | + The haproxy.conf, since it is not shipped in the templates dir, will |
2920 | + be loaded from the module directory's template directory, e.g. |
2921 | + $CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows |
2922 | + us to ship common templates (haproxy, apache) with the helpers. |
2923 | + |
2924 | + Context generators |
2925 | + --------------------------------------- |
2926 | + Context generators are used to generate template contexts during hook |
2927 | + execution. Doing so may require inspecting service relations, charm |
2928 | + config, etc. When registered, a config file is associated with a list |
2929 | + of generators. When a template is rendered and written, all context |
2930 | + generators are called in a chain to generate the context dictionary |
2931 | + passed to the jinja2 template. See context.py for more info. |
2932 | + """ |
2933 | + def __init__(self, templates_dir, openstack_release): |
2934 | + if not os.path.isdir(templates_dir): |
2935 | + log('Could not locate templates dir %s' % templates_dir, |
2936 | + level=ERROR) |
2937 | + raise OSConfigException |
2938 | + |
2939 | + self.templates_dir = templates_dir |
2940 | + self.openstack_release = openstack_release |
2941 | + self.templates = {} |
2942 | + self._tmpl_env = None |
2943 | + |
2944 | + if None in [Environment, ChoiceLoader, FileSystemLoader]: |
2945 | + # if this code is running, the object is created pre-install hook. |
2946 | + # jinja2 shouldn't get touched until the module is reloaded on next |
2947 | + # hook execution, with proper jinja2 bits successfully imported. |
2948 | + apt_install('python-jinja2') |
2949 | + |
2950 | + def register(self, config_file, contexts): |
2951 | + """ |
2952 | + Register a config file with a list of context generators to be called |
2953 | + during rendering. |
2954 | + """ |
2955 | + self.templates[config_file] = OSConfigTemplate(config_file=config_file, |
2956 | + contexts=contexts) |
2957 | + log('Registered config file: %s' % config_file, level=INFO) |
2958 | + |
2959 | + def _get_tmpl_env(self): |
2960 | + if not self._tmpl_env: |
2961 | + loader = get_loader(self.templates_dir, self.openstack_release) |
2962 | + self._tmpl_env = Environment(loader=loader) |
2963 | + |
2964 | + def _get_template(self, template): |
2965 | + self._get_tmpl_env() |
2966 | + template = self._tmpl_env.get_template(template) |
2967 | + log('Loaded template from %s' % template.filename, level=INFO) |
2968 | + return template |
2969 | + |
2970 | + def render(self, config_file): |
2971 | + if config_file not in self.templates: |
2972 | + log('Config not registered: %s' % config_file, level=ERROR) |
2973 | + raise OSConfigException |
2974 | + ctxt = self.templates[config_file].context() |
2975 | + |
2976 | + _tmpl = os.path.basename(config_file) |
2977 | + try: |
2978 | + template = self._get_template(_tmpl) |
2979 | + except exceptions.TemplateNotFound: |
2980 | + # if no template is found with basename, try looking for it |
2981 | + # using a munged full path, eg: |
2982 | + # /etc/apache2/apache2.conf -> etc_apache2_apache2.conf |
2983 | + _tmpl = '_'.join(config_file.split('/')[1:]) |
2984 | + try: |
2985 | + template = self._get_template(_tmpl) |
2986 | + except exceptions.TemplateNotFound as e: |
2987 | + log('Could not load template from %s by %s or %s.' % |
2988 | + (self.templates_dir, os.path.basename(config_file), _tmpl), |
2989 | + level=ERROR) |
2990 | + raise e |
2991 | + |
2992 | + log('Rendering from template: %s' % _tmpl, level=INFO) |
2993 | + return template.render(ctxt) |
2994 | + |
2995 | + def write(self, config_file): |
2996 | + """ |
2997 | + Write a single config file, raises if config file is not registered. |
2998 | + """ |
2999 | + if config_file not in self.templates: |
3000 | + log('Config not registered: %s' % config_file, level=ERROR) |
3001 | + raise OSConfigException |
3002 | + |
3003 | + _out = self.render(config_file) |
3004 | + |
3005 | + with open(config_file, 'wb') as out: |
3006 | + out.write(_out) |
3007 | + |
3008 | + log('Wrote template %s.' % config_file, level=INFO) |
3009 | + |
3010 | + def write_all(self): |
3011 | + """ |
3012 | + Write out all registered config files. |
3013 | + """ |
3014 | + [self.write(k) for k in self.templates.iterkeys()] |
3015 | + |
3016 | + def set_release(self, openstack_release): |
3017 | + """ |
3018 | + Resets the template environment and generates a new template loader |
3019 | + based on the new openstack release. |
3020 | + """ |
3021 | + self._tmpl_env = None |
3022 | + self.openstack_release = openstack_release |
3023 | + self._get_tmpl_env() |
3024 | + |
3025 | + def complete_contexts(self): |
3026 | + ''' |
3027 | + Returns a list of context interfaces that yield a complete context. |
3028 | + ''' |
3029 | + interfaces = [] |
3030 | + [interfaces.extend(i.complete_contexts()) |
3031 | + for i in self.templates.itervalues()] |
3032 | + return interfaces |
3033 | |
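The fallback search order that `get_loader()` constructs is the heart of this templating system. As an illustrative aside, the ordering can be sketched without jinja2 or a real charm filesystem; here `existing_dirs` stands in for the `os.path.isdir()` checks:

```python
import os

# Release order matches OPENSTACK_CODENAMES in utils.py (oldest first).
RELEASES = ['diablo', 'essex', 'folsom', 'grizzly', 'havana', 'icehouse']

def loader_search_order(templates_dir, os_release, existing_dirs):
    """Illustrative sketch of get_loader()'s directory ordering:
    release-specific dirs up to and including os_release, newest
    first, followed by the base templates_dir."""
    order = [templates_dir]
    for rel in RELEASES:
        release_dir = os.path.join(templates_dir, rel)
        if release_dir in existing_dirs:
            # newer releases shadow older ones and the base dir
            order.insert(0, release_dir)
        if rel == os_release:
            break
    return order
```

The real helper additionally appends the `templates` directory shipped alongside this module at the very bottom of the search list, which is how common templates like haproxy.cfg are found.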
3034 | === added file 'hooks/charmhelpers/contrib/openstack/utils.py' |
3035 | --- hooks/charmhelpers/contrib/openstack/utils.py 1970-01-01 00:00:00 +0000 |
3036 | +++ hooks/charmhelpers/contrib/openstack/utils.py 2016-02-12 04:16:45 +0000 |
3037 | @@ -0,0 +1,440 @@ |
3038 | +#!/usr/bin/python |
3039 | + |
3040 | +# Common python helper functions used for OpenStack charms. |
3041 | +from collections import OrderedDict |
3042 | + |
3043 | +import apt_pkg as apt |
3044 | +import subprocess |
3045 | +import os |
3046 | +import socket |
3047 | +import sys |
3048 | + |
3049 | +from charmhelpers.core.hookenv import ( |
3050 | + config, |
3051 | + log as juju_log, |
3052 | + charm_dir, |
3053 | + ERROR, |
3054 | + INFO |
3055 | +) |
3056 | + |
3057 | +from charmhelpers.contrib.storage.linux.lvm import ( |
3058 | + deactivate_lvm_volume_group, |
3059 | + is_lvm_physical_volume, |
3060 | + remove_lvm_physical_volume, |
3061 | +) |
3062 | + |
3063 | +from charmhelpers.core.host import lsb_release, mounts, umount |
3064 | +from charmhelpers.fetch import apt_install |
3065 | +from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk |
3066 | +from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device |
3067 | + |
3068 | +CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu" |
3069 | +CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA' |
3070 | + |
3071 | +DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed ' |
3072 | + 'restricted main multiverse universe') |
3073 | + |
3074 | + |
3075 | +UBUNTU_OPENSTACK_RELEASE = OrderedDict([ |
3076 | + ('oneiric', 'diablo'), |
3077 | + ('precise', 'essex'), |
3078 | + ('quantal', 'folsom'), |
3079 | + ('raring', 'grizzly'), |
3080 | + ('saucy', 'havana'), |
3081 | + ('trusty', 'icehouse') |
3082 | +]) |
3083 | + |
3084 | + |
3085 | +OPENSTACK_CODENAMES = OrderedDict([ |
3086 | + ('2011.2', 'diablo'), |
3087 | + ('2012.1', 'essex'), |
3088 | + ('2012.2', 'folsom'), |
3089 | + ('2013.1', 'grizzly'), |
3090 | + ('2013.2', 'havana'), |
3091 | + ('2014.1', 'icehouse'), |
3092 | +]) |
3093 | + |
3094 | +# The ugly duckling |
3095 | +SWIFT_CODENAMES = OrderedDict([ |
3096 | + ('1.4.3', 'diablo'), |
3097 | + ('1.4.8', 'essex'), |
3098 | + ('1.7.4', 'folsom'), |
3099 | + ('1.8.0', 'grizzly'), |
3100 | + ('1.7.7', 'grizzly'), |
3101 | + ('1.7.6', 'grizzly'), |
3102 | + ('1.10.0', 'havana'), |
3103 | + ('1.9.1', 'havana'), |
3104 | + ('1.9.0', 'havana'), |
3105 | +]) |
3106 | + |
3107 | +DEFAULT_LOOPBACK_SIZE = '5G' |
3108 | + |
3109 | + |
3110 | +def error_out(msg): |
3111 | + juju_log("FATAL ERROR: %s" % msg, level='ERROR') |
3112 | + sys.exit(1) |
3113 | + |
3114 | + |
3115 | +def get_os_codename_install_source(src): |
3116 | + '''Derive OpenStack release codename from a given installation source.''' |
3117 | + ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] |
3118 | + rel = '' |
3119 | + if src in ['distro', 'distro-proposed']: |
3120 | + try: |
3121 | + rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel] |
3122 | + except KeyError: |
3123 | + e = 'Could not derive openstack release for '\ |
3124 | + 'this Ubuntu release: %s' % ubuntu_rel |
3125 | + error_out(e) |
3126 | + return rel |
3127 | + |
3128 | + if src.startswith('cloud:'): |
3129 | + ca_rel = src.split(':')[1] |
3130 | + ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0] |
3131 | + return ca_rel |
3132 | + |
3133 | + # Best guess match based on deb string provided |
3134 | + if src.startswith('deb') or src.startswith('ppa'): |
3135 | + for k, v in OPENSTACK_CODENAMES.iteritems(): |
3136 | + if v in src: |
3137 | + return v |
3138 | + |
3139 | + |
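The `cloud:` branch above recovers the OpenStack codename by stripping the Ubuntu series prefix from the Cloud Archive pocket name. A minimal standalone sketch of just that parsing (the example source strings are hypothetical):

```python
def codename_from_cloud_source(src, ubuntu_rel):
    # 'cloud:precise-havana/updates' -> 'havana'
    ca_rel = src.split(':')[1]                        # 'precise-havana/updates'
    return ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]

print(codename_from_cloud_source('cloud:precise-havana/updates', 'precise'))
# -> havana
```

The same split handles pockets with or without a `/updates` or `/proposed` suffix, since everything after the first `/` is discarded.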
3140 | +def get_os_version_install_source(src): |
3141 | + codename = get_os_codename_install_source(src) |
3142 | + return get_os_version_codename(codename) |
3143 | + |
3144 | + |
3145 | +def get_os_codename_version(vers): |
3146 | + '''Determine OpenStack codename from version number.''' |
3147 | + try: |
3148 | + return OPENSTACK_CODENAMES[vers] |
3149 | + except KeyError: |
3150 | + e = 'Could not determine OpenStack codename for version %s' % vers |
3151 | + error_out(e) |
3152 | + |
3153 | + |
3154 | +def get_os_version_codename(codename): |
3155 | + '''Determine OpenStack version number from codename.''' |
3156 | + for k, v in OPENSTACK_CODENAMES.iteritems(): |
3157 | + if v == codename: |
3158 | + return k |
3159 | + e = 'Could not derive OpenStack version for '\ |
3160 | + 'codename: %s' % codename |
3161 | + error_out(e) |
3162 | + |
3163 | + |
3164 | +def get_os_codename_package(package, fatal=True): |
3165 | + '''Derive OpenStack release codename from an installed package.''' |
3166 | + apt.init() |
3167 | + cache = apt.Cache() |
3168 | + |
3169 | + try: |
3170 | + pkg = cache[package] |
3171 | + except: |
3172 | + if not fatal: |
3173 | + return None |
3174 | + # the package is unknown to the current apt cache. |
3175 | + e = 'Could not determine version of package with no installation '\ |
3176 | + 'candidate: %s' % package |
3177 | + error_out(e) |
3178 | + |
3179 | + if not pkg.current_ver: |
3180 | + if not fatal: |
3181 | + return None |
3182 | + # package is known, but no version is currently installed. |
3183 | + e = 'Could not determine version of uninstalled package: %s' % package |
3184 | + error_out(e) |
3185 | + |
3186 | + vers = apt.upstream_version(pkg.current_ver.ver_str) |
3187 | + |
3188 | + try: |
3189 | + if 'swift' in pkg.name: |
3190 | + swift_vers = vers[:5] |
3191 | + if swift_vers not in SWIFT_CODENAMES: |
3192 | + # Deal with 1.10.0 upward |
3193 | + swift_vers = vers[:6] |
3194 | + return SWIFT_CODENAMES[swift_vers] |
3195 | + else: |
3196 | + vers = vers[:6] |
3197 | + return OPENSTACK_CODENAMES[vers] |
3198 | + except KeyError: |
3199 | + e = 'Could not determine OpenStack codename for version %s' % vers |
3200 | + error_out(e) |
3201 | + |
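Swift versions do not follow the coordinated `YYYY.N` OpenStack numbering (hence the "ugly duckling" comment), so the lookup above tries a five-character prefix and falls back to six characters for 1.10.0 and later. A self-contained sketch of that truncation logic, using a subset of the table:

```python
SWIFT_CODENAMES = {
    '1.4.8': 'essex', '1.7.4': 'folsom', '1.8.0': 'grizzly',
    '1.9.0': 'havana', '1.10.0': 'havana',
}

def swift_codename(vers):
    swift_vers = vers[:5]         # covers 'X.Y.Z' style versions
    if swift_vers not in SWIFT_CODENAMES:
        swift_vers = vers[:6]     # '1.10.0' and later need six characters
    return SWIFT_CODENAMES[swift_vers]
```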
3202 | + |
3203 | +def get_os_version_package(pkg, fatal=True): |
3204 | + '''Derive OpenStack version number from an installed package.''' |
3205 | + codename = get_os_codename_package(pkg, fatal=fatal) |
3206 | + |
3207 | + if not codename: |
3208 | + return None |
3209 | + |
3210 | + if 'swift' in pkg: |
3211 | + vers_map = SWIFT_CODENAMES |
3212 | + else: |
3213 | + vers_map = OPENSTACK_CODENAMES |
3214 | + |
3215 | + for version, cname in vers_map.iteritems(): |
3216 | + if cname == codename: |
3217 | + return version |
3218 | + #e = "Could not determine OpenStack version for package: %s" % pkg |
3219 | + #error_out(e) |
3220 | + |
3221 | + |
3222 | +os_rel = None |
3223 | + |
3224 | + |
3225 | +def os_release(package, base='essex'): |
3226 | + ''' |
3227 | + Returns OpenStack release codename from a cached global. |
3228 | + If the codename can not be determined from either an installed package or |
3229 | + the installation source, the earliest release supported by the charm should |
3230 | + be returned. |
3231 | + ''' |
3232 | + global os_rel |
3233 | + if os_rel: |
3234 | + return os_rel |
3235 | + os_rel = (get_os_codename_package(package, fatal=False) or |
3236 | + get_os_codename_install_source(config('openstack-origin')) or |
3237 | + base) |
3238 | + return os_rel |
3239 | + |
3240 | + |
3241 | +def import_key(keyid): |
3242 | + cmd = "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 " \ |
3243 | + "--recv-keys %s" % keyid |
3244 | + try: |
3245 | + subprocess.check_call(cmd.split(' ')) |
3246 | + except subprocess.CalledProcessError: |
3247 | + error_out("Error importing repo key %s" % keyid) |
3248 | + |
3249 | + |
3250 | +def configure_installation_source(rel): |
3251 | + '''Configure apt installation source.''' |
3252 | + if rel == 'distro': |
3253 | + return |
3254 | + elif rel == 'distro-proposed': |
3255 | + ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] |
3256 | + with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f: |
3257 | + f.write(DISTRO_PROPOSED % ubuntu_rel) |
3258 | + elif rel[:4] == "ppa:": |
3259 | + src = rel |
3260 | + subprocess.check_call(["add-apt-repository", "-y", src]) |
3261 | + elif rel[:3] == "deb": |
3262 | + l = len(rel.split('|')) |
3263 | + if l == 2: |
3264 | + src, key = rel.split('|') |
3265 | + juju_log("Importing PPA key from keyserver for %s" % src) |
3266 | + import_key(key) |
3267 | + elif l == 1: |
3268 | + src = rel |
3269 | + with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f: |
3270 | + f.write(src) |
3271 | + elif rel[:6] == 'cloud:': |
3272 | + ubuntu_rel = lsb_release()['DISTRIB_CODENAME'] |
3273 | + rel = rel.split(':')[1] |
3274 | + u_rel = rel.split('-')[0] |
3275 | + ca_rel = rel.split('-')[1] |
3276 | + |
3277 | + if u_rel != ubuntu_rel: |
3278 | + e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\ |
3279 | + 'version (%s)' % (ca_rel, ubuntu_rel) |
3280 | + error_out(e) |
3281 | + |
3282 | + if 'staging' in ca_rel: |
3283 | + # staging is just a regular PPA. |
3284 | + os_rel = ca_rel.split('/')[0] |
3285 | + ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel |
3286 | + cmd = 'add-apt-repository -y %s' % ppa |
3287 | + subprocess.check_call(cmd.split(' ')) |
3288 | + return |
3289 | + |
3290 | + # map charm config options to actual archive pockets. |
3291 | + pockets = { |
3292 | + 'folsom': 'precise-updates/folsom', |
3293 | + 'folsom/updates': 'precise-updates/folsom', |
3294 | + 'folsom/proposed': 'precise-proposed/folsom', |
3295 | + 'grizzly': 'precise-updates/grizzly', |
3296 | + 'grizzly/updates': 'precise-updates/grizzly', |
3297 | + 'grizzly/proposed': 'precise-proposed/grizzly', |
3298 | + 'havana': 'precise-updates/havana', |
3299 | + 'havana/updates': 'precise-updates/havana', |
3300 | + 'havana/proposed': 'precise-proposed/havana', |
3301 | + 'icehouse': 'precise-updates/icehouse', |
3302 | + 'icehouse/updates': 'precise-updates/icehouse', |
3303 | + 'icehouse/proposed': 'precise-proposed/icehouse', |
3304 | + } |
3305 | + |
3306 | + try: |
3307 | + pocket = pockets[ca_rel] |
3308 | + except KeyError: |
3309 | + e = 'Invalid Cloud Archive release specified: %s' % rel |
3310 | + error_out(e) |
3311 | + |
3312 | + src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket) |
3313 | + apt_install('ubuntu-cloud-keyring', fatal=True) |
3314 | + |
3315 | + with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f: |
3316 | + f.write(src) |
3317 | + else: |
3318 | + error_out("Invalid openstack-release specified: %s" % rel) |
3319 | + |
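The `deb` branch of `configure_installation_source` supports an optional `|KEYID` suffix so a single charm config value can carry both a sources.list line and its signing key. A sketch of that split, with a hypothetical example line:

```python
def parse_deb_source(rel):
    # 'deb http://ppa.example/ubuntu precise main|DEADBEEF' (hypothetical)
    parts = rel.split('|')
    if len(parts) == 2:
        return parts[0], parts[1]   # sources.list line, GPG key id to import
    return rel, None                # no key supplied
```

When a key is present the charm imports it from the Ubuntu keyserver before writing the source file; without one, the line is written as-is.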
3320 | + |
3321 | +def save_script_rc(script_path="scripts/scriptrc", **env_vars): |
3322 | + """ |
3323 | + Write an rc file in the charm-delivered directory containing |
3324 | + exported environment variables provided by env_vars. Any charm scripts run |
3325 | + outside the juju hook environment can source this scriptrc to obtain |
3326 | + updated config information necessary to perform health checks or |
3327 | + service changes. |
3328 | + """ |
3329 | + juju_rc_path = "%s/%s" % (charm_dir(), script_path) |
3330 | + if not os.path.exists(os.path.dirname(juju_rc_path)): |
3331 | + os.mkdir(os.path.dirname(juju_rc_path)) |
3332 | + with open(juju_rc_path, 'wb') as rc_script: |
3333 | + rc_script.write( |
3334 | + "#!/bin/bash\n") |
3335 | + [rc_script.write('export %s=%s\n' % (u, p)) |
3336 | + for u, p in env_vars.iteritems() if u != "script_path"] |
3337 | + |
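`save_script_rc` emits a small bash file of `export` lines that out-of-hook scripts can source. A sketch of the rendering, writing to an in-memory buffer instead of the charm directory so it can run anywhere:

```python
import io

def render_script_rc(**env_vars):
    # mirrors save_script_rc's output format, minus the filesystem side effects
    buf = io.StringIO()
    buf.write("#!/bin/bash\n")
    for name, value in env_vars.items():
        if name != "script_path":   # reserved for the output location
            buf.write("export %s=%s\n" % (name, value))
    return buf.getvalue()
```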
3338 | + |
3339 | +def openstack_upgrade_available(package): |
3340 | + """ |
3341 | + Determines if an OpenStack upgrade is available from installation |
3342 | + source, based on version of installed package. |
3343 | + |
3344 | + :param package: str: Name of installed package. |
3345 | + |
3346 | + :returns: bool: True if the configured installation source offers |
3347 | + a newer version of the package. |
3348 | + |
3349 | + """ |
3350 | + |
3351 | + src = config('openstack-origin') |
3352 | + cur_vers = get_os_version_package(package) |
3353 | + available_vers = get_os_version_install_source(src) |
3354 | + apt.init() |
3355 | + return apt.version_compare(available_vers, cur_vers) == 1 |
3356 | + |
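`openstack_upgrade_available` reduces to "is the version offered by the configured source newer than the installed one". A rough stand-in for `apt.version_compare`, valid only for plain dotted-numeric OpenStack versions like `2013.1` (real Debian version comparison handles epochs, tildes and more):

```python
def upgrade_available(cur_vers, available_vers):
    # crude comparison for 'YYYY.N'-style versions only;
    # apt's version_compare implements the full Debian version ordering
    as_tuple = lambda v: tuple(int(part) for part in v.split('.'))
    return as_tuple(available_vers) > as_tuple(cur_vers)
```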
3357 | + |
3358 | +def ensure_block_device(block_device): |
3359 | + ''' |
3360 | + Confirm block_device, create as loopback if necessary. |
3361 | + |
3362 | + :param block_device: str: Full path of block device to ensure. |
3363 | + |
3364 | + :returns: str: Full path of ensured block device. |
3365 | + ''' |
3366 | + _none = ['None', 'none', None] |
3367 | + if (block_device in _none): |
3368 | + error_out('prepare_storage(): Missing required input: ' |
3369 | + 'block_device=%s.' % block_device) |
3370 | + |
3371 | + if block_device.startswith('/dev/'): |
3372 | + bdev = block_device |
3373 | + elif block_device.startswith('/'): |
3374 | + _bd = block_device.split('|') |
3375 | + if len(_bd) == 2: |
3376 | + bdev, size = _bd |
3377 | + else: |
3378 | + bdev = block_device |
3379 | + size = DEFAULT_LOOPBACK_SIZE |
3380 | + bdev = ensure_loopback_device(bdev, size) |
3381 | + else: |
3382 | + bdev = '/dev/%s' % block_device |
3383 | + |
3384 | + if not is_block_device(bdev): |
3385 | + error_out('Failed to locate valid block device at %s' % bdev) |
3387 | + |
3388 | + return bdev |
3389 | + |
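`ensure_block_device` accepts three spec shapes: a device node, a loopback file path with an optional `|size` suffix, and a bare device name. A sketch of just the classification step, without the loopback creation or validation side effects:

```python
DEFAULT_LOOPBACK_SIZE = '5G'

def parse_block_device(spec):
    """Classify a block-device spec the way ensure_block_device does."""
    if spec.startswith('/dev/'):
        return spec, None                     # real device node
    if spec.startswith('/'):
        parts = spec.split('|')
        if len(parts) == 2:
            return parts[0], parts[1]         # loopback file, explicit size
        return spec, DEFAULT_LOOPBACK_SIZE    # loopback file, default size
    return '/dev/%s' % spec, None             # bare device name
```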
3390 | + |
3391 | +def clean_storage(block_device): |
3392 | + ''' |
3393 | + Ensures a block device is clean. That is: |
3394 | + - unmounted |
3395 | + - any lvm volume groups are deactivated |
3396 | + - any lvm physical device signatures removed |
3397 | + - partition table wiped |
3398 | + |
3399 | + :param block_device: str: Full path to block device to clean. |
3400 | + ''' |
3401 | + for mp, d in mounts(): |
3402 | + if d == block_device: |
3403 | + juju_log('clean_storage(): %s is mounted @ %s, unmounting.' % |
3404 | + (d, mp), level=INFO) |
3405 | + umount(mp, persist=True) |
3406 | + |
3407 | + if is_lvm_physical_volume(block_device): |
3408 | + deactivate_lvm_volume_group(block_device) |
3409 | + remove_lvm_physical_volume(block_device) |
3410 | + else: |
3411 | + zap_disk(block_device) |
3412 | + |
3413 | + |
3414 | +def is_ip(address): |
3415 | + """ |
3416 | + Returns True if address is a valid IP address. |
3417 | + """ |
3418 | + try: |
3419 | + # Test to see if already an IPv4 address |
3420 | + socket.inet_aton(address) |
3421 | + return True |
3422 | + except socket.error: |
3423 | + return False |
3424 | + |
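Worth noting that `socket.inet_aton` is IPv4-only and also accepts abbreviated dotted forms such as `127.1`, so `is_ip` is deliberately permissive. A self-contained copy to illustrate the behaviour:

```python
import socket

def is_ip(address):
    try:
        socket.inet_aton(address)   # IPv4 only; short forms like '127.1' pass
        return True
    except socket.error:
        return False
```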
3425 | + |
3426 | +def ns_query(address): |
3427 | + try: |
3428 | + import dns.resolver |
3429 | + except ImportError: |
3430 | + apt_install('python-dnspython') |
3431 | + import dns.resolver |
3432 | + |
3433 | + if isinstance(address, dns.name.Name): |
3434 | + rtype = 'PTR' |
3435 | + elif isinstance(address, basestring): |
3436 | + rtype = 'A' |
3437 | + |
3438 | + answers = dns.resolver.query(address, rtype) |
3439 | + if answers: |
3440 | + return str(answers[0]) |
3441 | + return None |
3442 | + |
3443 | + |
3444 | +def get_host_ip(hostname): |
3445 | + """ |
3446 | + Resolves the IP for a given hostname, or returns |
3447 | + the input if it is already an IP. |
3448 | + """ |
3449 | + if is_ip(hostname): |
3450 | + return hostname |
3451 | + |
3452 | + return ns_query(hostname) |
3453 | + |
3454 | + |
3455 | +def get_hostname(address): |
3456 | + """ |
3457 | + Resolves hostname for given IP, or returns the input |
3458 | + if it is already a hostname. |
3459 | + """ |
3460 | + if not is_ip(address): |
3461 | + return address |
3462 | + |
3463 | + try: |
3464 | + import dns.reversename |
3465 | + except ImportError: |
3466 | + apt_install('python-dnspython') |
3467 | + import dns.reversename |
3468 | + |
3469 | + rev = dns.reversename.from_address(address) |
3470 | + result = ns_query(rev) |
3471 | + if not result: |
3472 | + return None |
3473 | + |
3474 | + # strip trailing . |
3475 | + if result.endswith('.'): |
3476 | + return result[:-1] |
3477 | + return result |
3478 | |
3479 | === added directory 'hooks/charmhelpers/contrib/saltstack' |
3480 | === added file 'hooks/charmhelpers/contrib/saltstack/__init__.py' |
3481 | --- hooks/charmhelpers/contrib/saltstack/__init__.py 1970-01-01 00:00:00 +0000 |
3482 | +++ hooks/charmhelpers/contrib/saltstack/__init__.py 2016-02-12 04:16:45 +0000 |
3483 | @@ -0,0 +1,102 @@ |
3484 | +"""Charm Helpers saltstack - declare the state of your machines. |
3485 | + |
3486 | +This helper enables you to declare your machine state, rather than |
3487 | +program it procedurally (and have to test each change to your procedures). |
3488 | +Your install hook can be as simple as: |
3489 | + |
3490 | +{{{ |
3491 | +from charmhelpers.contrib.saltstack import ( |
3492 | + install_salt_support, |
3493 | + update_machine_state, |
3494 | +) |
3495 | + |
3496 | + |
3497 | +def install(): |
3498 | + install_salt_support() |
3499 | + update_machine_state('machine_states/dependencies.yaml') |
3500 | + update_machine_state('machine_states/installed.yaml') |
3501 | +}}} |
3502 | + |
3503 | +and won't need to change (nor will its tests) when you change the machine |
3504 | +state. |
3505 | + |
3506 | +It's using a python package called salt-minion which allows various formats for |
3507 | +specifying resources, such as: |
3508 | + |
3509 | +{{{ |
3510 | +/srv/{{ basedir }}: |
3511 | + file.directory: |
3512 | + - group: ubunet |
3513 | + - user: ubunet |
3514 | + - require: |
3515 | + - user: ubunet |
3516 | + - recurse: |
3517 | + - user |
3518 | + - group |
3519 | + |
3520 | +ubunet: |
3521 | + group.present: |
3522 | + - gid: 1500 |
3523 | + user.present: |
3524 | + - uid: 1500 |
3525 | + - gid: 1500 |
3526 | + - createhome: False |
3527 | + - require: |
3528 | + - group: ubunet |
3529 | +}}} |
3530 | + |
3531 | +The docs for all the different state definitions are at: |
3532 | + http://docs.saltstack.com/ref/states/all/ |
3533 | + |
3534 | + |
3535 | +TODO: |
3536 | + * Add test helpers which will ensure that machine state definitions |
3537 | + are functionally (but not necessarily logically) correct (i.e. getting |
3538 | + salt to parse all state defs). |
3539 | + * Add a link to a public bootstrap charm example / blogpost. |
3540 | + * Find a way to obviate the need to use the grains['charm_dir'] syntax |
3541 | + in templates. |
3542 | +""" |
3543 | +# Copyright 2013 Canonical Ltd. |
3544 | +# |
3545 | +# Authors: |
3546 | +# Charm Helpers Developers <juju@lists.ubuntu.com> |
3547 | +import subprocess |
3548 | + |
3549 | +import charmhelpers.contrib.templating.contexts |
3550 | +import charmhelpers.core.host |
3551 | +import charmhelpers.core.hookenv |
3551 | +import charmhelpers.fetch |
3552 | + |
3553 | + |
3554 | +salt_grains_path = '/etc/salt/grains' |
3555 | + |
3556 | + |
3557 | +def install_salt_support(from_ppa=True): |
3558 | + """Installs the salt-minion helper for machine state. |
3559 | + |
3560 | + By default the salt-minion package is installed from |
3561 | + the saltstack PPA. If from_ppa is False you must ensure |
3562 | + that the salt-minion package is available in the apt cache. |
3563 | + """ |
3564 | + if from_ppa: |
3565 | + subprocess.check_call([ |
3566 | + '/usr/bin/add-apt-repository', |
3567 | + '--yes', |
3568 | + 'ppa:saltstack/salt', |
3569 | + ]) |
3570 | + subprocess.check_call(['/usr/bin/apt-get', 'update']) |
3571 | + # We install salt-common as salt-minion would run the salt-minion |
3572 | + # daemon. |
3573 | + charmhelpers.fetch.apt_install('salt-common') |
3574 | + |
3575 | + |
3576 | +def update_machine_state(state_path): |
3577 | + """Update the machine state using the provided state declaration.""" |
3578 | + charmhelpers.contrib.templating.contexts.juju_state_to_yaml( |
3579 | + salt_grains_path) |
3580 | + subprocess.check_call([ |
3581 | + 'salt-call', |
3582 | + '--local', |
3583 | + 'state.template', |
3584 | + state_path, |
3585 | + ]) |
3586 | |
3587 | === added directory 'hooks/charmhelpers/contrib/ssl' |
3588 | === added file 'hooks/charmhelpers/contrib/ssl/__init__.py' |
3589 | --- hooks/charmhelpers/contrib/ssl/__init__.py 1970-01-01 00:00:00 +0000 |
3590 | +++ hooks/charmhelpers/contrib/ssl/__init__.py 2016-02-12 04:16:45 +0000 |
3591 | @@ -0,0 +1,78 @@ |
3592 | +import subprocess |
3593 | +from charmhelpers.core import hookenv |
3594 | + |
3595 | + |
3596 | +def generate_selfsigned(keyfile, certfile, keysize="1024", config=None, subject=None, cn=None): |
3597 | + """Generate a self-signed SSL keypair |
3598 | + |
3599 | + You must provide one of the 3 optional arguments: |
3600 | + config, subject or cn |
3601 | + If more than one is provided, the leftmost takes precedence |
3602 | + |
3603 | + Arguments: |
3604 | + keyfile -- (required) full path to the keyfile to be created |
3605 | + certfile -- (required) full path to the certfile to be created |
3606 | + keysize -- (optional) SSL key length |
3607 | + config -- (optional) openssl configuration file |
3608 | + subject -- (optional) dictionary with SSL subject variables |
3609 | + cn -- (optional) certificate common name |
3610 | + |
3611 | + Required keys in subject dict: |
3612 | + cn -- Common name (e.g. FQDN) |
3613 | + |
3614 | + Optional keys in subject dict |
3615 | + country -- Country Name (2 letter code) |
3616 | + state -- State or Province Name (full name) |
3617 | + locality -- Locality Name (eg, city) |
3618 | + organization -- Organization Name (eg, company) |
3619 | + organizational_unit -- Organizational Unit Name (eg, section) |
3620 | + email -- Email Address |
3621 | + """ |
3622 | + |
3623 | + cmd = [] |
3624 | + if config: |
3625 | + cmd = ["/usr/bin/openssl", "req", "-new", "-newkey", |
3626 | + "rsa:{}".format(keysize), "-days", "365", "-nodes", "-x509", |
3627 | + "-keyout", keyfile, |
3628 | + "-out", certfile, "-config", config] |
3629 | + elif subject: |
3630 | + ssl_subject = "" |
3631 | + if "country" in subject: |
3632 | + ssl_subject = ssl_subject + "/C={}".format(subject["country"]) |
3633 | + if "state" in subject: |
3634 | + ssl_subject = ssl_subject + "/ST={}".format(subject["state"]) |
3635 | + if "locality" in subject: |
3636 | + ssl_subject = ssl_subject + "/L={}".format(subject["locality"]) |
3637 | + if "organization" in subject: |
3638 | + ssl_subject = ssl_subject + "/O={}".format(subject["organization"]) |
3639 | + if "organizational_unit" in subject: |
3640 | + ssl_subject = ssl_subject + "/OU={}".format(subject["organizational_unit"]) |
3641 | + if "cn" in subject: |
3642 | + ssl_subject = ssl_subject + "/CN={}".format(subject["cn"]) |
3643 | + else: |
3644 | + hookenv.log("When using the \"subject\" argument you must " |
3645 | + "provide the \"cn\" field at the very least") |
3646 | + return False |
3647 | + if "email" in subject: |
3648 | + ssl_subject = ssl_subject + "/emailAddress={}".format(subject["email"]) |
3649 | + |
3650 | + cmd = ["/usr/bin/openssl", "req", "-new", "-newkey", |
3651 | + "rsa:{}".format(keysize), "-days", "365", "-nodes", "-x509", |
3652 | + "-keyout", keyfile, |
3653 | + "-out", certfile, "-subj", ssl_subject] |
3654 | + elif cn: |
3655 | + cmd = ["/usr/bin/openssl", "req", "-new", "-newkey", |
3656 | + "rsa:{}".format(keysize), "-days", "365", "-nodes", "-x509", |
3657 | + "-keyout", keyfile, |
3658 | + "-out", certfile, "-subj", "/CN={}".format(cn)] |
3659 | + |
3660 | + if not cmd: |
3661 | + hookenv.log("No config, subject or cn provided, " |
3662 | + "unable to generate self signed SSL certificates") |
3663 | + return False |
3664 | + try: |
3665 | + subprocess.check_call(cmd) |
3666 | + return True |
3667 | + except Exception as e: |
3668 | + print "Execution of openssl command failed:\n{}".format(e) |
3669 | + return False |
3670 | |
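The `subject` branch of `generate_selfsigned` assembles an `openssl -subj` string field by field. A sketch of just the string building, with the same field order as the code above:

```python
def build_ssl_subject(subject):
    # field order matches the branch above: C, ST, L, O, OU, CN, emailAddress
    fields = [('country', 'C'), ('state', 'ST'), ('locality', 'L'),
              ('organization', 'O'), ('organizational_unit', 'OU'),
              ('cn', 'CN'), ('email', 'emailAddress')]
    out = ''
    for key, abbrev in fields:
        if key in subject:
            out += '/{}={}'.format(abbrev, subject[key])
    return out
```

Note the code requires `cn` to be present when `subject` is used; every other field is optional.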
3671 | === added directory 'hooks/charmhelpers/contrib/storage' |
3672 | === added file 'hooks/charmhelpers/contrib/storage/__init__.py' |
3673 | === added directory 'hooks/charmhelpers/contrib/storage/linux' |
3674 | === added file 'hooks/charmhelpers/contrib/storage/linux/__init__.py' |
3675 | === added file 'hooks/charmhelpers/contrib/storage/linux/ceph.py' |
3676 | --- hooks/charmhelpers/contrib/storage/linux/ceph.py 1970-01-01 00:00:00 +0000 |
3677 | +++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2016-02-12 04:16:45 +0000 |
3678 | @@ -0,0 +1,383 @@ |
3679 | +# |
3680 | +# Copyright 2012 Canonical Ltd. |
3681 | +# |
3682 | +# This file is sourced from lp:openstack-charm-helpers |
3683 | +# |
3684 | +# Authors: |
3685 | +# James Page <james.page@ubuntu.com> |
3686 | +# Adam Gandelman <adamg@ubuntu.com> |
3687 | +# |
3688 | + |
3689 | +import os |
3690 | +import shutil |
3691 | +import json |
3692 | +import time |
3693 | + |
3694 | +from subprocess import ( |
3695 | + check_call, |
3696 | + check_output, |
3697 | + CalledProcessError |
3698 | +) |
3699 | + |
3700 | +from charmhelpers.core.hookenv import ( |
3701 | + relation_get, |
3702 | + relation_ids, |
3703 | + related_units, |
3704 | + log, |
3705 | + INFO, |
3706 | + WARNING, |
3707 | + ERROR |
3708 | +) |
3709 | + |
3710 | +from charmhelpers.core.host import ( |
3711 | + mount, |
3712 | + mounts, |
3713 | + service_start, |
3714 | + service_stop, |
3715 | + service_running, |
3716 | + umount, |
3717 | +) |
3718 | + |
3719 | +from charmhelpers.fetch import ( |
3720 | + apt_install, |
3721 | +) |
3722 | + |
3723 | +KEYRING = '/etc/ceph/ceph.client.{}.keyring' |
3724 | +KEYFILE = '/etc/ceph/ceph.client.{}.key' |
3725 | + |
3726 | +CEPH_CONF = """[global] |
3727 | + auth supported = {auth} |
3728 | + keyring = {keyring} |
3729 | + mon host = {mon_hosts} |
3730 | +""" |
3731 | + |
3732 | + |
3733 | +def install(): |
3734 | + ''' Basic Ceph client installation ''' |
3735 | + ceph_dir = "/etc/ceph" |
3736 | + if not os.path.exists(ceph_dir): |
3737 | + os.mkdir(ceph_dir) |
3738 | + apt_install('ceph-common', fatal=True) |
3739 | + |
3740 | + |
3741 | +def rbd_exists(service, pool, rbd_img): |
3742 | + ''' Check to see if a RADOS block device exists ''' |
3743 | + try: |
3744 | + out = check_output(['rbd', 'list', '--id', service, |
3745 | + '--pool', pool]) |
3746 | + except CalledProcessError: |
3747 | + return False |
3748 | + else: |
3749 | + return rbd_img in out |
3750 | + |
3751 | + |
3752 | +def create_rbd_image(service, pool, image, sizemb): |
3753 | + ''' Create a new RADOS block device ''' |
3754 | + cmd = [ |
3755 | + 'rbd', |
3756 | + 'create', |
3757 | + image, |
3758 | + '--size', |
3759 | + str(sizemb), |
3760 | + '--id', |
3761 | + service, |
3762 | + '--pool', |
3763 | + pool |
3764 | + ] |
3765 | + check_call(cmd) |
3766 | + |
3767 | + |
3768 | +def pool_exists(service, name): |
3769 | + ''' Check to see if a RADOS pool already exists ''' |
3770 | + try: |
3771 | + out = check_output(['rados', '--id', service, 'lspools']) |
3772 | + except CalledProcessError: |
3773 | + return False |
3774 | + else: |
3775 | + return name in out |
3776 | + |
3777 | + |
3778 | +def get_osds(service): |
3779 | + ''' |
3780 | + Return a list of all Ceph Object Storage Daemons |
3781 | + currently in the cluster |
3782 | + ''' |
3783 | + version = ceph_version() |
3784 | + if version and version >= '0.56': |
3785 | + return json.loads(check_output(['ceph', '--id', service, |
3786 | + 'osd', 'ls', '--format=json'])) |
3787 | + else: |
3788 | + return None |
3789 | + |
3790 | + |
3791 | +def create_pool(service, name, replicas=2): |
3792 | + ''' Create a new RADOS pool ''' |
3793 | + if pool_exists(service, name): |
3794 | + log("Ceph pool {} already exists, skipping creation".format(name), |
3795 | + level=WARNING) |
3796 | + return |
3797 | + # Calculate the number of placement groups based |
3798 | + # on upstream recommended best practices. |
3799 | + osds = get_osds(service) |
3800 | + if osds: |
3801 | + pgnum = (len(osds) * 100 / replicas) |
3802 | + else: |
3803 | + # NOTE(james-page): Default to 200 for older ceph versions |
3804 | + # which don't support OSD query from cli |
3805 | + pgnum = 200 |
3806 | + cmd = [ |
3807 | + 'ceph', '--id', service, |
3808 | + 'osd', 'pool', 'create', |
3809 | + name, str(pgnum) |
3810 | + ] |
3811 | + check_call(cmd) |
3812 | + cmd = [ |
3813 | + 'ceph', '--id', service, |
3814 | + 'osd', 'pool', 'set', name, |
3815 | + 'size', str(replicas) |
3816 | + ] |
3817 | + check_call(cmd) |
3818 | + |
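The placement-group count in `create_pool` follows the upstream rule of thumb of roughly 100 PGs per OSD, divided across replicas, with a fixed fallback for ceph releases too old to list OSDs from the CLI. A sketch of the calculation:

```python
def placement_groups(num_osds, replicas=2):
    # ~100 PGs per OSD, shared across the replica count
    if num_osds:
        return num_osds * 100 // replicas
    return 200  # pre-0.56 ceph can't answer 'osd ls' from the CLI
```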
3819 | + |
3820 | +def delete_pool(service, name): |
3821 | + ''' Delete a RADOS pool from ceph ''' |
3822 | + cmd = [ |
3823 | + 'ceph', '--id', service, |
3824 | + 'osd', 'pool', 'delete', |
3825 | + name, '--yes-i-really-really-mean-it' |
3826 | + ] |
3827 | + check_call(cmd) |
3828 | + |
3829 | + |
3830 | +def _keyfile_path(service): |
3831 | + return KEYFILE.format(service) |
3832 | + |
3833 | + |
3834 | +def _keyring_path(service): |
3835 | + return KEYRING.format(service) |
3836 | + |
3837 | + |
3838 | +def create_keyring(service, key): |
3839 | + ''' Create a new Ceph keyring containing key''' |
3840 | + keyring = _keyring_path(service) |
3841 | + if os.path.exists(keyring): |
3842 | + log('ceph: Keyring exists at %s.' % keyring, level=WARNING) |
3843 | + return |
3844 | + cmd = [ |
3845 | + 'ceph-authtool', |
3846 | + keyring, |
3847 | + '--create-keyring', |
3848 | + '--name=client.{}'.format(service), |
3849 | + '--add-key={}'.format(key) |
3850 | + ] |
3851 | + check_call(cmd) |
3852 | + log('ceph: Created new ring at %s.' % keyring, level=INFO) |
3853 | + |
3854 | + |
3855 | +def create_key_file(service, key): |
3856 | + ''' Create a file containing key ''' |
3857 | + keyfile = _keyfile_path(service) |
3858 | + if os.path.exists(keyfile): |
3859 | + log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING) |
3860 | + return |
3861 | + with open(keyfile, 'w') as fd: |
3862 | + fd.write(key) |
3863 | + log('ceph: Created new keyfile at %s.' % keyfile, level=INFO) |
3864 | + |
3865 | + |
3866 | +def get_ceph_nodes(): |
3867 | + ''' Query named relation 'ceph' to determine current nodes ''' |
3868 | + hosts = [] |
3869 | + for r_id in relation_ids('ceph'): |
3870 | + for unit in related_units(r_id): |
3871 | + hosts.append(relation_get('private-address', unit=unit, rid=r_id)) |
3872 | + return hosts |
3873 | + |
3874 | + |
3875 | +def configure(service, key, auth): |
3876 | + ''' Perform basic configuration of Ceph ''' |
3877 | + create_keyring(service, key) |
3878 | + create_key_file(service, key) |
3879 | + hosts = get_ceph_nodes() |
3880 | + with open('/etc/ceph/ceph.conf', 'w') as ceph_conf: |
3881 | + ceph_conf.write(CEPH_CONF.format(auth=auth, |
3882 | + keyring=_keyring_path(service), |
3883 | + mon_hosts=",".join(map(str, hosts)))) |
3884 | + modprobe('rbd') |
3885 | + |
3886 | + |
3887 | +def image_mapped(name): |
3888 | + ''' Determine whether a RADOS block device is mapped locally ''' |
3889 | + try: |
3890 | + out = check_output(['rbd', 'showmapped']) |
3891 | + except CalledProcessError: |
3892 | + return False |
3893 | + else: |
3894 | + return name in out |
3895 | + |
3896 | + |
3897 | +def map_block_storage(service, pool, image): |
3898 | + ''' Map a RADOS block device for local use ''' |
3899 | + cmd = [ |
3900 | + 'rbd', |
3901 | + 'map', |
3902 | + '{}/{}'.format(pool, image), |
3903 | + '--user', |
3904 | + service, |
3905 | + '--secret', |
3906 | + _keyfile_path(service), |
3907 | + ] |
3908 | + check_call(cmd) |
3909 | + |
3910 | + |
3911 | +def filesystem_mounted(fs): |
3912 | + ''' Determine whether a filesystem is already mounted ''' |
3913 | + return fs in [f for f, m in mounts()] |
3914 | + |
3915 | + |
3916 | +def make_filesystem(blk_device, fstype='ext4', timeout=10): |
3917 | + ''' Make a new filesystem on the specified block device ''' |
3918 | + count = 0 |
3919 | + e_noent = os.errno.ENOENT |
3920 | + while not os.path.exists(blk_device): |
3921 | + if count >= timeout: |
3922 | + log('ceph: gave up waiting on block device %s' % blk_device, |
3923 | + level=ERROR) |
3924 | + raise IOError(e_noent, os.strerror(e_noent), blk_device) |
3925 | + log('ceph: waiting for block device %s to appear' % blk_device, |
3926 | + level=INFO) |
3927 | + count += 1 |
3928 | + time.sleep(1) |
3929 | + else: |
3930 | + log('ceph: Formatting block device %s as filesystem %s.' % |
3931 | + (blk_device, fstype), level=INFO) |
3932 | + check_call(['mkfs', '-t', fstype, blk_device]) |
3933 | + |
3934 | + |
3935 | +def place_data_on_block_device(blk_device, data_src_dst): |
3936 | + ''' Migrate data in data_src_dst to blk_device and then remount ''' |
3937 | + # mount block device into /mnt |
3938 | + mount(blk_device, '/mnt') |
3939 | + # copy data to /mnt |
3940 | + copy_files(data_src_dst, '/mnt') |
3941 | + # umount block device |
3942 | + umount('/mnt') |
3943 | + # Grab user/group ID's from original source |
3944 | + _dir = os.stat(data_src_dst) |
3945 | + uid = _dir.st_uid |
3946 | + gid = _dir.st_gid |
3947 | + # re-mount where the data should originally be |
3948 | + # TODO: persist is currently a NO-OP in core.host |
3949 | + mount(blk_device, data_src_dst, persist=True) |
3950 | + # ensure original ownership of new mount. |
3951 | + os.chown(data_src_dst, uid, gid) |
3952 | + |
3953 | + |
3954 | +# TODO: re-use |
3955 | +def modprobe(module): |
3956 | + ''' Load a kernel module and configure for auto-load on reboot ''' |
3957 | + log('ceph: Loading kernel module', level=INFO) |
3958 | + cmd = ['modprobe', module] |
3959 | + check_call(cmd) |
3960 | + with open('/etc/modules', 'r+') as modules: |
3961 | + if module not in modules.read(): |
3962 | + modules.write(module) |
3963 | + |
3964 | + |
3965 | +def copy_files(src, dst, symlinks=False, ignore=None): |
3966 | + ''' Copy files from src to dst ''' |
3967 | + for item in os.listdir(src): |
3968 | + s = os.path.join(src, item) |
3969 | + d = os.path.join(dst, item) |
3970 | + if os.path.isdir(s): |
3971 | + shutil.copytree(s, d, symlinks, ignore) |
3972 | + else: |
3973 | + shutil.copy2(s, d) |
3974 | + |
3975 | + |
3976 | +def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point, |
3977 | + blk_device, fstype, system_services=[]): |
3978 | + """ |
3979 | + NOTE: This function must only be called from a single service unit for |
3980 | + the same rbd_img otherwise data loss will occur. |
3981 | + |
3982 | + Ensures given pool and RBD image exists, is mapped to a block device, |
3983 | + and the device is formatted and mounted at the given mount_point. |
3984 | + |
3985 | + If formatting a device for the first time, data existing at mount_point |
3986 | + will be migrated to the RBD device before being re-mounted. |
3987 | + |
3988 | + All services listed in system_services will be stopped prior to data |
3989 | + migration and restarted when complete. |
3990 | + """ |
3991 | + # Ensure pool, RBD image, RBD mappings are in place. |
3992 | + if not pool_exists(service, pool): |
3993 | + log('ceph: Creating new pool {}.'.format(pool)) |
3994 | + create_pool(service, pool) |
3995 | + |
3996 | + if not rbd_exists(service, pool, rbd_img): |
3997 | + log('ceph: Creating RBD image ({}).'.format(rbd_img)) |
3998 | + create_rbd_image(service, pool, rbd_img, sizemb) |
3999 | + |
4000 | + if not image_mapped(rbd_img): |
4001 | + log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img)) |
4002 | + map_block_storage(service, pool, rbd_img) |
4003 | + |
4004 | + # make file system |
4005 | + # TODO: What happens if for whatever reason this is run again and |
4006 | + # the data is already in the rbd device and/or is mounted?? |
4007 | + # When it is mounted already, it will fail to make the fs |
4008 | + # XXX: This is really sketchy! Need to at least add an fstab entry |
4009 | + # otherwise this hook will blow away existing data if it's executed |
4010 | + # after a reboot. |
4011 | + if not filesystem_mounted(mount_point): |
4012 | + make_filesystem(blk_device, fstype) |
4013 | + |
4014 | + for svc in system_services: |
4015 | + if service_running(svc): |
4016 | + log('ceph: Stopping service {} prior to migrating data.' |
4017 | + .format(svc)) |
4018 | + service_stop(svc) |
4019 | + |
4020 | + place_data_on_block_device(blk_device, mount_point) |
4021 | + |
4022 | + for svc in system_services: |
4023 | + log('ceph: Starting service {} after migrating data.' |
4024 | + .format(svc)) |
4025 | + service_start(svc) |
4026 | + |
4027 | + |
4028 | +def ensure_ceph_keyring(service, user=None, group=None): |
4029 | + ''' |
4030 | + Ensures a ceph keyring is created for a named service |
4031 | + and optionally ensures user and group ownership. |
4032 | + |
4033 | + Returns False if no ceph key is available in relation state. |
4034 | + ''' |
4035 | + key = None |
4036 | + for rid in relation_ids('ceph'): |
4037 | + for unit in related_units(rid): |
4038 | + key = relation_get('key', rid=rid, unit=unit) |
4039 | + if key: |
4040 | + break |
4041 | + if not key: |
4042 | + return False |
4043 | + create_keyring(service=service, key=key) |
4044 | + keyring = _keyring_path(service) |
4045 | + if user and group: |
4046 | + check_call(['chown', '%s.%s' % (user, group), keyring]) |
4047 | + return True |
4048 | + |
4049 | + |
4050 | +def ceph_version(): |
4051 | + ''' Retrieve the local version of ceph ''' |
4052 | + if os.path.exists('/usr/bin/ceph'): |
4053 | + cmd = ['ceph', '-v'] |
4054 | + output = check_output(cmd) |
4055 | + output = output.split() |
4056 | + if len(output) > 3: |
4057 | + return output[2] |
4058 | + else: |
4059 | + return None |
4060 | + else: |
4061 | + return None |
4062 | |
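The version extraction in `ceph_version` above is simple token slicing over `ceph -v` output. A minimal sketch of just that parsing step, testable without a Ceph install (`parse_ceph_version` is an illustrative name, not part of the charm helpers):

```python
def parse_ceph_version(output):
    """Mirror ceph_version()'s parsing: split `ceph -v` output such as
    'ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7)'
    and return the third token, or None if the output is too short."""
    words = output.split()
    if len(words) > 3:
        return words[2]
    return None
```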
4063 | === added file 'hooks/charmhelpers/contrib/storage/linux/loopback.py' |
4064 | --- hooks/charmhelpers/contrib/storage/linux/loopback.py 1970-01-01 00:00:00 +0000 |
4065 | +++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2016-02-12 04:16:45 +0000 |
4066 | @@ -0,0 +1,62 @@ |
4067 | + |
4068 | +import os |
4069 | +import re |
4070 | + |
4071 | +from subprocess import ( |
4072 | + check_call, |
4073 | + check_output, |
4074 | +) |
4075 | + |
4076 | + |
4077 | +################################################## |
4078 | +# loopback device helpers. |
4079 | +################################################## |
4080 | +def loopback_devices(): |
4081 | + ''' |
4082 | + Parse through 'losetup -a' output to determine currently mapped |
4083 | + loopback devices. Output is expected to look like: |
4084 | + |
4085 | + /dev/loop0: [0807]:961814 (/tmp/my.img) |
4086 | + |
4087 | + :returns: dict: a dict mapping {loopback_dev: backing_file} |
4088 | + ''' |
4089 | + loopbacks = {} |
4090 | + cmd = ['losetup', '-a'] |
4091 | + devs = [d.strip().split(' ') for d in |
4092 | + check_output(cmd).splitlines() if d != ''] |
4093 | + for dev, _, f in devs: |
4094 | + loopbacks[dev.replace(':', '')] = re.search('\((\S+)\)', f).groups()[0] |
4095 | + return loopbacks |
4096 | + |
4097 | + |
4098 | +def create_loopback(file_path): |
4099 | + ''' |
4100 | + Create a loopback device for a given backing file. |
4101 | + |
4102 | + :returns: str: Full path to new loopback device (eg, /dev/loop0) |
4103 | + ''' |
4104 | + file_path = os.path.abspath(file_path) |
4105 | + check_call(['losetup', '--find', file_path]) |
4106 | + for d, f in loopback_devices().iteritems(): |
4107 | + if f == file_path: |
4108 | + return d |
4109 | + |
4110 | + |
4111 | +def ensure_loopback_device(path, size): |
4112 | + ''' |
4113 | + Ensure a loopback device exists for a given backing file path and size. |
4114 | + If a loopback device is not already mapped to the file, a new one will be created. |
4115 | + |
4116 | + TODO: Confirm size of found loopback device. |
4117 | + |
4118 | + :returns: str: Full path to the ensured loopback device (eg, /dev/loop0) |
4119 | + ''' |
4120 | + for d, f in loopback_devices().iteritems(): |
4121 | + if f == path: |
4122 | + return d |
4123 | + |
4124 | + if not os.path.exists(path): |
4125 | + cmd = ['truncate', '--size', size, path] |
4126 | + check_call(cmd) |
4127 | + |
4128 | + return create_loopback(path) |
4129 | |
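The `losetup -a` parsing in `loopback_devices` above can be exercised on canned output. A sketch of the same split-and-regex logic (`parse_losetup` is an illustrative name; the helper itself shells out to `losetup`):

```python
import re

def parse_losetup(output):
    """Parse `losetup -a` output lines like
    '/dev/loop0: [0807]:961814 (/tmp/my.img)'
    into a {loopback_dev: backing_file} dict, as loopback_devices() does."""
    loopbacks = {}
    devs = [d.strip().split(' ') for d in output.splitlines() if d != '']
    for dev, _, f in devs:
        # Strip the trailing ':' from the device and pull the path out of '(...)'
        loopbacks[dev.replace(':', '')] = re.search(r'\((\S+)\)', f).groups()[0]
    return loopbacks
```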
4130 | === added file 'hooks/charmhelpers/contrib/storage/linux/lvm.py' |
4131 | --- hooks/charmhelpers/contrib/storage/linux/lvm.py 1970-01-01 00:00:00 +0000 |
4132 | +++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2016-02-12 04:16:45 +0000 |
4133 | @@ -0,0 +1,88 @@ |
4134 | +from subprocess import ( |
4135 | + CalledProcessError, |
4136 | + check_call, |
4137 | + check_output, |
4138 | + Popen, |
4139 | + PIPE, |
4140 | +) |
4141 | + |
4142 | + |
4143 | +################################################## |
4144 | +# LVM helpers. |
4145 | +################################################## |
4146 | +def deactivate_lvm_volume_group(block_device): |
4147 | + ''' |
4148 | + Deactivate any volume group associated with an LVM physical volume. |
4149 | + |
4150 | + :param block_device: str: Full path to LVM physical volume |
4151 | + ''' |
4152 | + vg = list_lvm_volume_group(block_device) |
4153 | + if vg: |
4154 | + cmd = ['vgchange', '-an', vg] |
4155 | + check_call(cmd) |
4156 | + |
4157 | + |
4158 | +def is_lvm_physical_volume(block_device): |
4159 | + ''' |
4160 | + Determine whether a block device is initialized as an LVM PV. |
4161 | + |
4162 | + :param block_device: str: Full path of block device to inspect. |
4163 | + |
4164 | + :returns: boolean: True if block device is a PV, False if not. |
4165 | + ''' |
4166 | + try: |
4167 | + check_output(['pvdisplay', block_device]) |
4168 | + return True |
4169 | + except CalledProcessError: |
4170 | + return False |
4171 | + |
4172 | + |
4173 | +def remove_lvm_physical_volume(block_device): |
4174 | + ''' |
4175 | + Remove LVM PV signatures from a given block device. |
4176 | + |
4177 | + :param block_device: str: Full path of block device to scrub. |
4178 | + ''' |
4179 | + p = Popen(['pvremove', '-ff', block_device], |
4180 | + stdin=PIPE) |
4181 | + p.communicate(input='y\n') |
4182 | + |
4183 | + |
4184 | +def list_lvm_volume_group(block_device): |
4185 | + ''' |
4186 | + List LVM volume group associated with a given block device. |
4187 | + |
4188 | + Assumes block device is a valid LVM PV. |
4189 | + |
4190 | + :param block_device: str: Full path of block device to inspect. |
4191 | + |
4192 | + :returns: str: Name of volume group associated with block device or None |
4193 | + ''' |
4194 | + vg = None |
4195 | + pvd = check_output(['pvdisplay', block_device]).splitlines() |
4196 | + for l in pvd: |
4197 | + if l.strip().startswith('VG Name'): |
4198 | + vg = ' '.join(l.split()).split(' ').pop() |
4199 | + return vg |
4200 | + |
4201 | + |
4202 | +def create_lvm_physical_volume(block_device): |
4203 | + ''' |
4204 | + Initialize a block device as an LVM physical volume. |
4205 | + |
4206 | + :param block_device: str: Full path of block device to initialize. |
4207 | + |
4208 | + ''' |
4209 | + check_call(['pvcreate', block_device]) |
4210 | + |
4211 | + |
4212 | +def create_lvm_volume_group(volume_group, block_device): |
4213 | + ''' |
4214 | + Create an LVM volume group backed by a given block device. |
4215 | + |
4216 | + Assumes block device has already been initialized as an LVM PV. |
4217 | + |
4218 | + :param volume_group: str: Name of volume group to create. |
4219 | + :block_device: str: Full path of PV-initialized block device. |
4220 | + ''' |
4221 | + check_call(['vgcreate', volume_group, block_device]) |
4222 | |
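The VG-name extraction in `list_lvm_volume_group` above scans `pvdisplay` output for the `VG Name` line and takes the last whitespace-separated field. A sketch of that step on canned output (`vg_from_pvdisplay` is an illustrative name; the real helper runs `pvdisplay` itself):

```python
def vg_from_pvdisplay(output):
    """Find the 'VG Name' line in `pvdisplay` output and return its last
    field, mirroring list_lvm_volume_group(); None if no such line."""
    for line in output.splitlines():
        if line.strip().startswith('VG Name'):
            return line.split()[-1]
    return None
```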
4223 | === added file 'hooks/charmhelpers/contrib/storage/linux/utils.py' |
4224 | --- hooks/charmhelpers/contrib/storage/linux/utils.py 1970-01-01 00:00:00 +0000 |
4225 | +++ hooks/charmhelpers/contrib/storage/linux/utils.py 2016-02-12 04:16:45 +0000 |
4226 | @@ -0,0 +1,25 @@ |
4227 | +from os import stat |
4228 | +from stat import S_ISBLK |
4229 | + |
4230 | +from subprocess import ( |
4231 | + check_call |
4232 | +) |
4233 | + |
4234 | + |
4235 | +def is_block_device(path): |
4236 | + ''' |
4237 | + Confirm device at path is a valid block device node. |
4238 | + |
4239 | + :returns: boolean: True if path is a block device, False if not. |
4240 | + ''' |
4241 | + return S_ISBLK(stat(path).st_mode) |
4242 | + |
4243 | + |
4244 | +def zap_disk(block_device): |
4245 | + ''' |
4246 | + Clear a block device of its partition table. Relies on sgdisk, which is |
4247 | + installed as part of the 'gdisk' package in Ubuntu. |
4248 | + |
4249 | + :param block_device: str: Full path of block device to clean. |
4250 | + ''' |
4251 | + check_call(['sgdisk', '--zap-all', '--mbrtogpt', block_device]) |
4252 | |
4253 | === added directory 'hooks/charmhelpers/contrib/templating' |
4254 | === added file 'hooks/charmhelpers/contrib/templating/__init__.py' |
4255 | === added file 'hooks/charmhelpers/contrib/templating/contexts.py' |
4256 | --- hooks/charmhelpers/contrib/templating/contexts.py 1970-01-01 00:00:00 +0000 |
4257 | +++ hooks/charmhelpers/contrib/templating/contexts.py 2016-02-12 04:16:45 +0000 |
4258 | @@ -0,0 +1,73 @@ |
4259 | +# Copyright 2013 Canonical Ltd. |
4260 | +# |
4261 | +# Authors: |
4262 | +# Charm Helpers Developers <juju@lists.ubuntu.com> |
4263 | +"""A helper to create a yaml cache of config with namespaced relation data.""" |
4264 | +import os |
4265 | +import yaml |
4266 | + |
4267 | +import charmhelpers.core.hookenv |
4268 | + |
4269 | + |
4270 | +charm_dir = os.environ.get('CHARM_DIR', '') |
4271 | + |
4272 | + |
4273 | +def juju_state_to_yaml(yaml_path, namespace_separator=':', |
4274 | + allow_hyphens_in_keys=True): |
4275 | + """Update the juju config and state in a yaml file. |
4276 | + |
4277 | + This includes any current relation-get data, and the charm |
4278 | + directory. |
4279 | + |
4280 | + This function was created for the ansible and saltstack |
4281 | + support, as those libraries can use a yaml file to supply |
4282 | + context to templates, but it may be useful generally to |
4283 | + create and update an on-disk cache of all the config, including |
4284 | + previous relation data. |
4285 | + |
4286 | + By default, hyphens are allowed in keys as this is supported |
4287 | + by yaml, but for tools like ansible, hyphens are not valid [1]. |
4288 | + |
4289 | + [1] http://www.ansibleworks.com/docs/playbooks_variables.html#what-makes-a-valid-variable-name |
4290 | + """ |
4291 | + config = charmhelpers.core.hookenv.config() |
4292 | + |
4293 | + # Add the charm_dir which we will need to refer to charm |
4294 | + # file resources etc. |
4295 | + config['charm_dir'] = charm_dir |
4296 | + config['local_unit'] = charmhelpers.core.hookenv.local_unit() |
4297 | + |
4298 | + # Add any relation data prefixed with the relation type. |
4299 | + relation_type = charmhelpers.core.hookenv.relation_type() |
4300 | + if relation_type is not None: |
4301 | + relation_data = charmhelpers.core.hookenv.relation_get() |
4302 | + relation_data = dict( |
4303 | + ("{relation_type}{namespace_separator}{key}".format( |
4304 | + relation_type=relation_type.replace('-', '_'), |
4305 | + key=key, |
4306 | + namespace_separator=namespace_separator), val) |
4307 | + for key, val in relation_data.items()) |
4308 | + config.update(relation_data) |
4309 | + |
4310 | + # Don't use non-standard tags for unicode which will not |
4311 | + # work when salt uses yaml.load_safe. |
4312 | + yaml.add_representer(unicode, lambda dumper, |
4313 | + value: dumper.represent_scalar( |
4314 | + u'tag:yaml.org,2002:str', value)) |
4315 | + |
4316 | + yaml_dir = os.path.dirname(yaml_path) |
4317 | + if not os.path.exists(yaml_dir): |
4318 | + os.makedirs(yaml_dir) |
4319 | + |
4320 | + if os.path.exists(yaml_path): |
4321 | + with open(yaml_path, "r") as existing_vars_file: |
4322 | + existing_vars = yaml.load(existing_vars_file.read()) |
4323 | + else: |
4324 | + existing_vars = {} |
4325 | + |
4326 | + if not allow_hyphens_in_keys: |
4327 | + config = dict( |
4328 | + (key.replace('-', '_'), val) for key, val in config.items()) |
4329 | + existing_vars.update(config) |
4330 | + with open(yaml_path, "w+") as fp: |
4331 | + fp.write(yaml.dump(existing_vars)) |
4332 | |
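The key-namespacing step in `juju_state_to_yaml` above prefixes each relation key with the relation type (hyphens converted to underscores) before merging it into the config dict. A sketch of just that transformation (`namespace_relation_data` is an illustrative name):

```python
def namespace_relation_data(relation_type, relation_data, separator=':'):
    """Prefix relation keys as juju_state_to_yaml() does, e.g. the 'host'
    key from a 'db-admin' relation becomes 'db_admin:host'."""
    return {
        "{}{}{}".format(relation_type.replace('-', '_'), separator, key): val
        for key, val in relation_data.items()
    }
```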
4333 | === added file 'hooks/charmhelpers/contrib/templating/pyformat.py' |
4334 | --- hooks/charmhelpers/contrib/templating/pyformat.py 1970-01-01 00:00:00 +0000 |
4335 | +++ hooks/charmhelpers/contrib/templating/pyformat.py 2016-02-12 04:16:45 +0000 |
4336 | @@ -0,0 +1,13 @@ |
4337 | +''' |
4338 | +Templating using standard Python str.format() method. |
4339 | +''' |
4340 | + |
4341 | +from charmhelpers.core import hookenv |
4342 | + |
4343 | + |
4344 | +def render(template, extra={}, **kwargs): |
4345 | + """Return the template rendered using Python's str.format().""" |
4346 | + context = hookenv.execution_environment() |
4347 | + context.update(extra) |
4348 | + context.update(kwargs) |
4349 | + return template.format(**context) |
4350 | |
4351 | === modified file 'hooks/charmhelpers/core/hookenv.py' |
4352 | --- hooks/charmhelpers/core/hookenv.py 2013-08-21 19:14:32 +0000 |
4353 | +++ hooks/charmhelpers/core/hookenv.py 2016-02-12 04:16:45 +0000 |
4354 | @@ -9,6 +9,7 @@ |
4355 | import yaml |
4356 | import subprocess |
4357 | import UserDict |
4358 | +from subprocess import CalledProcessError |
4359 | |
4360 | CRITICAL = "CRITICAL" |
4361 | ERROR = "ERROR" |
4362 | @@ -21,7 +22,7 @@ |
4363 | |
4364 | |
4365 | def cached(func): |
4366 | - ''' Cache return values for multiple executions of func + args |
4367 | + """Cache return values for multiple executions of func + args |
4368 | |
4369 | For example: |
4370 | |
4371 | @@ -32,7 +33,7 @@ |
4372 | unit_get('test') |
4373 | |
4374 | will cache the result of unit_get + 'test' for future calls. |
4375 | - ''' |
4376 | + """ |
4377 | def wrapper(*args, **kwargs): |
4378 | global cache |
4379 | key = str((func, args, kwargs)) |
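The `cached` decorator whose docstring is converted above memoizes hook-tool calls on a stringified `(func, args, kwargs)` key. A simplified, self-contained sketch of that pattern (the real helper keeps the dict at module level so `flush()` can partially clear it):

```python
cache = {}

def cached(func):
    """Memoize func on str((func, args, kwargs)), as hookenv.cached does."""
    def wrapper(*args, **kwargs):
        key = str((func, args, kwargs))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]
    return wrapper

calls = []

@cached
def expensive(x):
    # Record each real invocation so caching is observable.
    calls.append(x)
    return x * 2
```

Repeated calls with the same arguments hit the cache rather than re-running the function.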
4380 | @@ -46,8 +47,8 @@ |
4381 | |
4382 | |
4383 | def flush(key): |
4384 | - ''' Flushes any entries from function cache where the |
4385 | - key is found in the function+args ''' |
4386 | + """Flushes any entries from function cache where the |
4387 | + key is found in the function+args """ |
4388 | flush_list = [] |
4389 | for item in cache: |
4390 | if key in item: |
4391 | @@ -57,7 +58,7 @@ |
4392 | |
4393 | |
4394 | def log(message, level=None): |
4395 | - "Write a message to the juju log" |
4396 | + """Write a message to the juju log""" |
4397 | command = ['juju-log'] |
4398 | if level: |
4399 | command += ['-l', level] |
4400 | @@ -66,7 +67,7 @@ |
4401 | |
4402 | |
4403 | class Serializable(UserDict.IterableUserDict): |
4404 | - "Wrapper, an object that can be serialized to yaml or json" |
4405 | + """Wrapper, an object that can be serialized to yaml or json""" |
4406 | |
4407 | def __init__(self, obj): |
4408 | # wrap the object |
4409 | @@ -96,11 +97,11 @@ |
4410 | self.data = state |
4411 | |
4412 | def json(self): |
4413 | - "Serialize the object to json" |
4414 | + """Serialize the object to json""" |
4415 | return json.dumps(self.data) |
4416 | |
4417 | def yaml(self): |
4418 | - "Serialize the object to yaml" |
4419 | + """Serialize the object to yaml""" |
4420 | return yaml.dump(self.data) |
4421 | |
4422 | |
4423 | @@ -119,38 +120,38 @@ |
4424 | |
4425 | |
4426 | def in_relation_hook(): |
4427 | - "Determine whether we're running in a relation hook" |
4428 | + """Determine whether we're running in a relation hook""" |
4429 | return 'JUJU_RELATION' in os.environ |
4430 | |
4431 | |
4432 | def relation_type(): |
4433 | - "The scope for the current relation hook" |
4434 | + """The scope for the current relation hook""" |
4435 | return os.environ.get('JUJU_RELATION', None) |
4436 | |
4437 | |
4438 | def relation_id(): |
4439 | - "The relation ID for the current relation hook" |
4440 | + """The relation ID for the current relation hook""" |
4441 | return os.environ.get('JUJU_RELATION_ID', None) |
4442 | |
4443 | |
4444 | def local_unit(): |
4445 | - "Local unit ID" |
4446 | + """Local unit ID""" |
4447 | return os.environ['JUJU_UNIT_NAME'] |
4448 | |
4449 | |
4450 | def remote_unit(): |
4451 | - "The remote unit for the current relation hook" |
4452 | + """The remote unit for the current relation hook""" |
4453 | return os.environ['JUJU_REMOTE_UNIT'] |
4454 | |
4455 | |
4456 | def service_name(): |
4457 | - "The name service group this unit belongs to" |
4458 | + """The name service group this unit belongs to""" |
4459 | return local_unit().split('/')[0] |
4460 | |
4461 | |
4462 | @cached |
4463 | def config(scope=None): |
4464 | - "Juju charm configuration" |
4465 | + """Juju charm configuration""" |
4466 | config_cmd_line = ['config-get'] |
4467 | if scope is not None: |
4468 | config_cmd_line.append(scope) |
4469 | @@ -163,6 +164,7 @@ |
4470 | |
4471 | @cached |
4472 | def relation_get(attribute=None, unit=None, rid=None): |
4473 | + """Get relation information""" |
4474 | _args = ['relation-get', '--format=json'] |
4475 | if rid: |
4476 | _args.append('-r') |
4477 | @@ -174,9 +176,14 @@ |
4478 | return json.loads(subprocess.check_output(_args)) |
4479 | except ValueError: |
4480 | return None |
4481 | + except CalledProcessError, e: |
4482 | + if e.returncode == 2: |
4483 | + return None |
4484 | + raise |
4485 | |
4486 | |
4487 | def relation_set(relation_id=None, relation_settings={}, **kwargs): |
4488 | + """Set relation information for the current unit""" |
4489 | relation_cmd_line = ['relation-set'] |
4490 | if relation_id is not None: |
4491 | relation_cmd_line.extend(('-r', relation_id)) |
4492 | @@ -192,7 +199,7 @@ |
4493 | |
4494 | @cached |
4495 | def relation_ids(reltype=None): |
4496 | - "A list of relation_ids" |
4497 | + """A list of relation_ids""" |
4498 | reltype = reltype or relation_type() |
4499 | relid_cmd_line = ['relation-ids', '--format=json'] |
4500 | if reltype is not None: |
4501 | @@ -203,7 +210,7 @@ |
4502 | |
4503 | @cached |
4504 | def related_units(relid=None): |
4505 | - "A list of related units" |
4506 | + """A list of related units""" |
4507 | relid = relid or relation_id() |
4508 | units_cmd_line = ['relation-list', '--format=json'] |
4509 | if relid is not None: |
4510 | @@ -213,7 +220,7 @@ |
4511 | |
4512 | @cached |
4513 | def relation_for_unit(unit=None, rid=None): |
4514 | - "Get the json represenation of a unit's relation" |
4515 | + """Get the json representation of a unit's relation""" |
4516 | unit = unit or remote_unit() |
4517 | relation = relation_get(unit=unit, rid=rid) |
4518 | for key in relation: |
4519 | @@ -225,7 +232,7 @@ |
4520 | |
4521 | @cached |
4522 | def relations_for_id(relid=None): |
4523 | - "Get relations of a specific relation ID" |
4524 | + """Get relations of a specific relation ID""" |
4525 | relation_data = [] |
4526 | relid = relid or relation_ids() |
4527 | for unit in related_units(relid): |
4528 | @@ -237,7 +244,7 @@ |
4529 | |
4530 | @cached |
4531 | def relations_of_type(reltype=None): |
4532 | - "Get relations of a specific type" |
4533 | + """Get relations of a specific type""" |
4534 | relation_data = [] |
4535 | reltype = reltype or relation_type() |
4536 | for relid in relation_ids(reltype): |
4537 | @@ -249,7 +256,7 @@ |
4538 | |
4539 | @cached |
4540 | def relation_types(): |
4541 | - "Get a list of relation types supported by this charm" |
4542 | + """Get a list of relation types supported by this charm""" |
4543 | charmdir = os.environ.get('CHARM_DIR', '') |
4544 | mdf = open(os.path.join(charmdir, 'metadata.yaml')) |
4545 | md = yaml.safe_load(mdf) |
4546 | @@ -264,6 +271,7 @@ |
4547 | |
4548 | @cached |
4549 | def relations(): |
4550 | + """Get a nested dictionary of relation data for all related units""" |
4551 | rels = {} |
4552 | for reltype in relation_types(): |
4553 | relids = {} |
4554 | @@ -277,15 +285,35 @@ |
4555 | return rels |
4556 | |
4557 | |
4558 | +@cached |
4559 | +def is_relation_made(relation, keys='private-address'): |
4560 | + ''' |
4561 | + Determine whether a relation is established by checking for |
4562 | + presence of key(s). If a list of keys is provided, they |
4563 | + must all be present for the relation to be identified as made |
4564 | + ''' |
4565 | + if isinstance(keys, str): |
4566 | + keys = [keys] |
4567 | + for r_id in relation_ids(relation): |
4568 | + for unit in related_units(r_id): |
4569 | + context = {} |
4570 | + for k in keys: |
4571 | + context[k] = relation_get(k, rid=r_id, |
4572 | + unit=unit) |
4573 | + if None not in context.values(): |
4574 | + return True |
4575 | + return False |
4576 | + |
4577 | + |
4578 | def open_port(port, protocol="TCP"): |
4579 | - "Open a service network port" |
4580 | + """Open a service network port""" |
4581 | _args = ['open-port'] |
4582 | _args.append('{}/{}'.format(port, protocol)) |
4583 | subprocess.check_call(_args) |
4584 | |
4585 | |
4586 | def close_port(port, protocol="TCP"): |
4587 | - "Close a service network port" |
4588 | + """Close a service network port""" |
4589 | _args = ['close-port'] |
4590 | _args.append('{}/{}'.format(port, protocol)) |
4591 | subprocess.check_call(_args) |
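The new `is_relation_made` helper added above iterates related units and reports the relation as made only when every requested key is set. A sketch of the same check with the relation data passed in directly, since `relation_get` needs a live hook environment (`relation_made` and the `units_data` list are illustrative stand-ins):

```python
def relation_made(units_data, keys='private-address'):
    """Return True if any related unit has set all of the given key(s),
    mirroring is_relation_made(). units_data is a list of per-unit
    relation-settings dicts (a stand-in for relation-get calls)."""
    if isinstance(keys, str):
        keys = [keys]
    for settings in units_data:
        if all(settings.get(k) is not None for k in keys):
            return True
    return False
```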
4592 | @@ -293,6 +321,7 @@ |
4593 | |
4594 | @cached |
4595 | def unit_get(attribute): |
4596 | + """Get an attribute of the local unit via unit-get""" |
4597 | _args = ['unit-get', '--format=json', attribute] |
4598 | try: |
4599 | return json.loads(subprocess.check_output(_args)) |
4600 | @@ -301,22 +330,46 @@ |
4601 | |
4602 | |
4603 | def unit_private_ip(): |
4604 | + """Get this unit's private IP address""" |
4605 | return unit_get('private-address') |
4606 | |
4607 | |
4608 | class UnregisteredHookError(Exception): |
4609 | + """Raised when an undefined hook is called""" |
4610 | pass |
4611 | |
4612 | |
4613 | class Hooks(object): |
4614 | + """A convenient handler for hook functions. |
4615 | + |
4616 | + Example: |
4617 | + hooks = Hooks() |
4618 | + |
4619 | + # register a hook, taking its name from the function name |
4620 | + @hooks.hook() |
4621 | + def install(): |
4622 | + ... |
4623 | + |
4624 | + # register a hook, providing a custom hook name |
4625 | + @hooks.hook("config-changed") |
4626 | + def config_changed(): |
4627 | + ... |
4628 | + |
4629 | + if __name__ == "__main__": |
4630 | + # execute a hook based on the name the program is called by |
4631 | + hooks.execute(sys.argv) |
4632 | + """ |
4633 | + |
4634 | def __init__(self): |
4635 | super(Hooks, self).__init__() |
4636 | self._hooks = {} |
4637 | |
4638 | def register(self, name, function): |
4639 | + """Register a hook""" |
4640 | self._hooks[name] = function |
4641 | |
4642 | def execute(self, args): |
4643 | + """Execute a registered hook based on args[0]""" |
4644 | hook_name = os.path.basename(args[0]) |
4645 | if hook_name in self._hooks: |
4646 | self._hooks[hook_name]() |
4647 | @@ -324,6 +377,7 @@ |
4648 | raise UnregisteredHookError(hook_name) |
4649 | |
4650 | def hook(self, *hook_names): |
4651 | + """Decorator registering the function as a hook for the given hook names""" |
4652 | def wrapper(decorated): |
4653 | for hook_name in hook_names: |
4654 | self.register(hook_name, decorated) |
4655 | @@ -337,4 +391,5 @@ |
4656 | |
4657 | |
4658 | def charm_dir(): |
4659 | + """Return the root directory of the current charm""" |
4660 | return os.environ.get('CHARM_DIR') |
4661 | |
4662 | === modified file 'hooks/charmhelpers/core/host.py' |
4663 | --- hooks/charmhelpers/core/host.py 2013-08-21 19:14:32 +0000 |
4664 | +++ hooks/charmhelpers/core/host.py 2016-02-12 04:16:45 +0000 |
4665 | @@ -19,28 +19,36 @@ |
4666 | |
4667 | |
4668 | def service_start(service_name): |
4669 | - service('start', service_name) |
4670 | + """Start a system service""" |
4671 | + return service('start', service_name) |
4672 | |
4673 | |
4674 | def service_stop(service_name): |
4675 | - service('stop', service_name) |
4676 | + """Stop a system service""" |
4677 | + return service('stop', service_name) |
4678 | |
4679 | |
4680 | def service_restart(service_name): |
4681 | - service('restart', service_name) |
4682 | + """Restart a system service""" |
4683 | + return service('restart', service_name) |
4684 | |
4685 | |
4686 | def service_reload(service_name, restart_on_failure=False): |
4687 | - if not service('reload', service_name) and restart_on_failure: |
4688 | - service('restart', service_name) |
4689 | + """Reload a system service, optionally falling back to restart if reload fails""" |
4690 | + service_result = service('reload', service_name) |
4691 | + if not service_result and restart_on_failure: |
4692 | + service_result = service('restart', service_name) |
4693 | + return service_result |
4694 | |
4695 | |
4696 | def service(action, service_name): |
4697 | + """Control a system service""" |
4698 | cmd = ['service', service_name, action] |
4699 | return subprocess.call(cmd) == 0 |
4700 | |
4701 | |
4702 | def service_running(service): |
4703 | + """Determine whether a system service is running""" |
4704 | try: |
4705 | output = subprocess.check_output(['service', service, 'status']) |
4706 | except subprocess.CalledProcessError: |
4707 | @@ -53,7 +61,7 @@ |
4708 | |
4709 | |
4710 | def adduser(username, password=None, shell='/bin/bash', system_user=False): |
4711 | - """Add a user""" |
4712 | + """Add a user to the system""" |
4713 | try: |
4714 | user_info = pwd.getpwnam(username) |
4715 | log('user {0} already exists!'.format(username)) |
4716 | @@ -136,7 +144,7 @@ |
4717 | |
4718 | |
4719 | def mount(device, mountpoint, options=None, persist=False): |
4720 | - '''Mount a filesystem''' |
4721 | + """Mount a filesystem at a particular mountpoint""" |
4722 | cmd_args = ['mount'] |
4723 | if options is not None: |
4724 | cmd_args.extend(['-o', options]) |
4725 | @@ -153,7 +161,7 @@ |
4726 | |
4727 | |
4728 | def umount(mountpoint, persist=False): |
4729 | - '''Unmount a filesystem''' |
4730 | + """Unmount a filesystem""" |
4731 | cmd_args = ['umount', mountpoint] |
4732 | try: |
4733 | subprocess.check_output(cmd_args) |
4734 | @@ -167,7 +175,7 @@ |
4735 | |
4736 | |
4737 | def mounts(): |
4738 | - '''List of all mounted volumes as [[mountpoint,device],[...]]''' |
4739 | + """Get a list of all mounted volumes as [[mountpoint,device],[...]]""" |
4740 | with open('/proc/mounts') as f: |
4741 | # [['/mount/point','/dev/path'],[...]] |
4742 | system_mounts = [m[1::-1] for m in [l.strip().split() |
4743 | @@ -176,7 +184,7 @@ |
4744 | |
4745 | |
4746 | def file_hash(path): |
4747 | - ''' Generate a md5 hash of the contents of 'path' or None if not found ''' |
4748 | + """Generate an md5 hash of the contents of 'path' or None if not found""" |
4749 | if os.path.exists(path): |
4750 | h = hashlib.md5() |
4751 | with open(path, 'r') as source: |
4752 | @@ -187,7 +195,7 @@ |
4753 | |
4754 | |
4755 | def restart_on_change(restart_map): |
4756 | - ''' Restart services based on configuration files changing |
4757 | + """Restart services based on configuration files changing |
4758 | |
4759 | This function is used a decorator, for example |
4760 | |
4761 | @@ -200,7 +208,7 @@ |
4762 | In this example, the cinder-api and cinder-volume services |
4763 | would be restarted if /etc/ceph/ceph.conf is changed by the |
4764 | ceph_client_changed function. |
4765 | - ''' |
4766 | + """ |
4767 | def wrap(f): |
4768 | def wrapped_f(*args): |
4769 | checksums = {} |
4770 | @@ -218,7 +226,7 @@ |
4771 | |
4772 | |
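The `restart_on_change` decorator documented above snapshots file hashes before running the wrapped hook, then restarts the mapped services for any file that changed. A simplified sketch with a pluggable hash function instead of on-disk md5 (`hash_fn` and the returned restart list are illustrative; the real helper hashes files and calls `service_restart`):

```python
def restart_on_change(restart_map, hash_fn):
    """Decorator: hash each watched path before and after the wrapped
    function runs, and collect the services mapped to changed paths."""
    def wrap(f):
        def wrapped_f(*args):
            before = {path: hash_fn(path) for path in restart_map}
            f(*args)
            restarts = []
            for path, services in restart_map.items():
                if hash_fn(path) != before[path]:
                    restarts.extend(services)
            return restarts
        return wrapped_f
    return wrap

# Simulated "filesystem" keyed by path, standing in for real file contents.
state = {'/etc/ceph/ceph.conf': 'v1'}

@restart_on_change({'/etc/ceph/ceph.conf': ['cinder-api']}, state.get)
def ceph_client_changed():
    state['/etc/ceph/ceph.conf'] = 'v2'

restarted = ceph_client_changed()
```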
4773 | def lsb_release(): |
4774 | - '''Return /etc/lsb-release in a dict''' |
4775 | + """Return /etc/lsb-release in a dict""" |
4776 | d = {} |
4777 | with open('/etc/lsb-release', 'r') as lsb: |
4778 | for l in lsb: |
4779 | @@ -228,7 +236,7 @@ |
4780 | |
4781 | |
4782 | def pwgen(length=None): |
4783 | + """Generate a random password.""" |
4784 | + """Generate a random pasword.""" |
4785 | if length is None: |
4786 | length = random.choice(range(35, 45)) |
4787 | alphanumeric_chars = [ |
4788 | @@ -237,3 +245,47 @@ |
4789 | random_chars = [ |
4790 | random.choice(alphanumeric_chars) for _ in range(length)] |
4791 | return(''.join(random_chars)) |
4792 | + |
4793 | + |
4794 | +def list_nics(nic_type): |
4795 | + '''Return a list of nics of given type(s)''' |
4796 | + if isinstance(nic_type, basestring): |
4797 | + int_types = [nic_type] |
4798 | + else: |
4799 | + int_types = nic_type |
4800 | + interfaces = [] |
4801 | + for int_type in int_types: |
4802 | + cmd = ['ip', 'addr', 'show', 'label', int_type + '*'] |
4803 | + ip_output = subprocess.check_output(cmd).split('\n') |
4804 | + ip_output = (line for line in ip_output if line) |
4805 | + for line in ip_output: |
4806 | + if line.split()[1].startswith(int_type): |
4807 | + interfaces.append(line.split()[1].replace(":", "")) |
4808 | + return interfaces |
4809 | + |
4810 | + |
4811 | +def set_nic_mtu(nic, mtu): |
4812 | + '''Set MTU on a network interface''' |
4813 | + cmd = ['ip', 'link', 'set', nic, 'mtu', mtu] |
4814 | + subprocess.check_call(cmd) |
4815 | + |
4816 | + |
4817 | +def get_nic_mtu(nic): |
4818 | + cmd = ['ip', 'addr', 'show', nic] |
4819 | + ip_output = subprocess.check_output(cmd).split('\n') |
4820 | + mtu = "" |
4821 | + for line in ip_output: |
4822 | + words = line.split() |
4823 | + if 'mtu' in words: |
4824 | + mtu = words[words.index("mtu") + 1] |
4825 | + return mtu |
4826 | + |
4827 | + |
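The MTU lookup in `get_nic_mtu` above scans `ip addr show <nic>` output for the token after `mtu`. A sketch of that parsing on canned output (`mtu_from_ip_output` is an illustrative name; the helper itself runs `ip`):

```python
def mtu_from_ip_output(output):
    """Return the token following 'mtu' in `ip addr show` output,
    mirroring get_nic_mtu(); empty string if no mtu field is present."""
    mtu = ""
    for line in output.splitlines():
        words = line.split()
        if 'mtu' in words:
            mtu = words[words.index('mtu') + 1]
    return mtu
```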
4828 | +def get_nic_hwaddr(nic): |
4829 | + cmd = ['ip', '-o', '-0', 'addr', 'show', nic] |
4830 | + ip_output = subprocess.check_output(cmd) |
4831 | + hwaddr = "" |
4832 | + words = ip_output.split() |
4833 | + if 'link/ether' in words: |
4834 | + hwaddr = words[words.index('link/ether') + 1] |
4835 | + return hwaddr |
4836 | |
4837 | === modified file 'hooks/charmhelpers/fetch/__init__.py' |
4838 | --- hooks/charmhelpers/fetch/__init__.py 2013-08-21 19:19:29 +0000 |
4839 | +++ hooks/charmhelpers/fetch/__init__.py 2016-02-12 04:16:45 +0000 |
4840 | @@ -13,6 +13,7 @@ |
4841 | log, |
4842 | ) |
4843 | import apt_pkg |
4844 | +import os |
4845 | |
4846 | CLOUD_ARCHIVE = """# Ubuntu Cloud Archive |
4847 | deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main |
4848 | @@ -20,6 +21,40 @@ |
4849 | PROPOSED_POCKET = """# Proposed |
4850 | deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted |
4851 | """ |
4852 | +CLOUD_ARCHIVE_POCKETS = { |
4853 | + # Folsom |
4854 | + 'folsom': 'precise-updates/folsom', |
4855 | + 'precise-folsom': 'precise-updates/folsom', |
4856 | + 'precise-folsom/updates': 'precise-updates/folsom', |
4857 | + 'precise-updates/folsom': 'precise-updates/folsom', |
4858 | + 'folsom/proposed': 'precise-proposed/folsom', |
4859 | + 'precise-folsom/proposed': 'precise-proposed/folsom', |
4860 | + 'precise-proposed/folsom': 'precise-proposed/folsom', |
4861 | + # Grizzly |
4862 | + 'grizzly': 'precise-updates/grizzly', |
4863 | + 'precise-grizzly': 'precise-updates/grizzly', |
4864 | + 'precise-grizzly/updates': 'precise-updates/grizzly', |
4865 | + 'precise-updates/grizzly': 'precise-updates/grizzly', |
4866 | + 'grizzly/proposed': 'precise-proposed/grizzly', |
4867 | + 'precise-grizzly/proposed': 'precise-proposed/grizzly', |
4868 | + 'precise-proposed/grizzly': 'precise-proposed/grizzly', |
4869 | + # Havana |
4870 | + 'havana': 'precise-updates/havana', |
4871 | + 'precise-havana': 'precise-updates/havana', |
4872 | + 'precise-havana/updates': 'precise-updates/havana', |
4873 | + 'precise-updates/havana': 'precise-updates/havana', |
4874 | + 'havana/proposed': 'precise-proposed/havana', |
4875 | + 'precise-havana/proposed': 'precise-proposed/havana', |
4876 | + 'precise-proposed/havana': 'precise-proposed/havana', |
4877 | + # Icehouse |
4878 | + 'icehouse': 'precise-updates/icehouse', |
4879 | + 'precise-icehouse': 'precise-updates/icehouse', |
4880 | + 'precise-icehouse/updates': 'precise-updates/icehouse', |
4881 | + 'precise-updates/icehouse': 'precise-updates/icehouse', |
4882 | + 'icehouse/proposed': 'precise-proposed/icehouse', |
4883 | + 'precise-icehouse/proposed': 'precise-proposed/icehouse', |
4884 | + 'precise-proposed/icehouse': 'precise-proposed/icehouse', |
4885 | +} |
4886 | |
4887 | |
4888 | def filter_installed_packages(packages): |
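The new `CLOUD_ARCHIVE_POCKETS` table maps the many user-facing spellings of a Cloud Archive pocket onto one canonical pocket name. A minimal sketch of that lookup, using a hypothetical `resolve_cloud_pocket` helper and an abbreviated copy of the table (the real charmhelpers code does this inline in `add_source`, shown further down in the diff):

```python
# Abbreviated copy of the alias table above; the real table covers
# folsom, grizzly, havana and icehouse with the same alias patterns.
CLOUD_ARCHIVE_POCKETS = {
    'folsom': 'precise-updates/folsom',
    'precise-folsom/proposed': 'precise-proposed/folsom',
    'havana': 'precise-updates/havana',
}


def resolve_cloud_pocket(source):
    # 'cloud:folsom' -> 'precise-updates/folsom'
    # Unknown aliases are rejected rather than written to sources.list.
    pocket = source.split(':')[-1]
    if pocket not in CLOUD_ARCHIVE_POCKETS:
        raise ValueError('Unsupported cloud: source option %s' % pocket)
    return CLOUD_ARCHIVE_POCKETS[pocket]
```

The point of the table is that `cloud:folsom`, `cloud:precise-folsom/updates` and `cloud:precise-updates/folsom` all resolve to the same apt line, so charm config stays forgiving about spelling.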
4889 | @@ -40,8 +75,10 @@ |
4890 | |
4891 | def apt_install(packages, options=None, fatal=False): |
4892 | """Install one or more packages""" |
4893 | - options = options or [] |
4894 | - cmd = ['apt-get', '-y'] |
4895 | + if options is None: |
4896 | + options = ['--option=Dpkg::Options::=--force-confold'] |
4897 | + |
4898 | + cmd = ['apt-get', '--assume-yes'] |
4899 | cmd.extend(options) |
4900 | cmd.append('install') |
4901 | if isinstance(packages, basestring): |
4902 | @@ -50,10 +87,14 @@ |
4903 | cmd.extend(packages) |
4904 | log("Installing {} with options: {}".format(packages, |
4905 | options)) |
4906 | + env = os.environ.copy() |
4907 | + if 'DEBIAN_FRONTEND' not in env: |
4908 | + env['DEBIAN_FRONTEND'] = 'noninteractive' |
4909 | + |
4910 | if fatal: |
4911 | - subprocess.check_call(cmd) |
4912 | + subprocess.check_call(cmd, env=env) |
4913 | else: |
4914 | - subprocess.call(cmd) |
4915 | + subprocess.call(cmd, env=env) |
4916 | |
4917 | |
4918 | def apt_update(fatal=False): |
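The `apt_install` change above does two things: it defaults dpkg to keeping existing config files (`--force-confold`), and it forces `DEBIAN_FRONTEND=noninteractive` unless the operator already set a frontend, so package installs in hooks can never hang on a prompt. A sketch of the command and environment construction, using a hypothetical `build_apt_install` helper (the charm itself targets Python 2 and checks `basestring`; `str` is used here for illustration):

```python
import os


def build_apt_install(packages, options=None, environ=None):
    # Default options preserve existing config files on upgrade,
    # matching the patched apt_install().
    if options is None:
        options = ['--option=Dpkg::Options::=--force-confold']
    cmd = ['apt-get', '--assume-yes'] + list(options) + ['install']
    if isinstance(packages, str):  # charmhelpers uses basestring on Python 2
        cmd.append(packages)
    else:
        cmd.extend(packages)
    # Copy the environment and force a noninteractive frontend, but
    # respect a DEBIAN_FRONTEND the caller already exported.
    env = dict(os.environ if environ is None else environ)
    if 'DEBIAN_FRONTEND' not in env:
        env['DEBIAN_FRONTEND'] = 'noninteractive'
    return cmd, env
```

Copying `os.environ` rather than mutating it keeps the override local to the one `subprocess` call, which is why the diff passes `env=env` explicitly.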
4919 | @@ -67,7 +108,7 @@ |
4920 | |
4921 | def apt_purge(packages, fatal=False): |
4922 | """Purge one or more packages""" |
4923 | - cmd = ['apt-get', '-y', 'purge'] |
4924 | + cmd = ['apt-get', '--assume-yes', 'purge'] |
4925 | if isinstance(packages, basestring): |
4926 | cmd.append(packages) |
4927 | else: |
4928 | @@ -79,16 +120,37 @@ |
4929 | subprocess.call(cmd) |
4930 | |
4931 | |
4932 | +def apt_hold(packages, fatal=False): |
4933 | + """Hold one or more packages""" |
4934 | + cmd = ['apt-mark', 'hold'] |
4935 | + if isinstance(packages, basestring): |
4936 | + cmd.append(packages) |
4937 | + else: |
4938 | + cmd.extend(packages) |
4939 | + log("Holding {}".format(packages)) |
4940 | + if fatal: |
4941 | + subprocess.check_call(cmd) |
4942 | + else: |
4943 | + subprocess.call(cmd) |
4944 | + |
4945 | + |
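The new `apt_hold` wraps `apt-mark hold`, which pins packages so later upgrades skip them. A sketch of just the command construction, via a hypothetical `apt_hold_cmd` helper (again using `str` where the Python 2 charm checks `basestring`):

```python
def apt_hold_cmd(packages):
    # 'apt-mark hold <pkg>...' pins packages against upgrades; accepts
    # either a single package name or an iterable of names, mirroring
    # the new apt_hold() above.
    cmd = ['apt-mark', 'hold']
    if isinstance(packages, str):
        cmd.append(packages)
    else:
        cmd.extend(packages)
    return cmd
```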
4946 | def add_source(source, key=None): |
4947 | - if ((source.startswith('ppa:') or |
4948 | - source.startswith('http:'))): |
4949 | + if (source.startswith('ppa:') or |
4950 | + source.startswith('http:') or |
4951 | + source.startswith('deb ') or |
4952 | + source.startswith('cloud-archive:')): |
4953 | subprocess.check_call(['add-apt-repository', '--yes', source]) |
4954 | elif source.startswith('cloud:'): |
4955 | apt_install(filter_installed_packages(['ubuntu-cloud-keyring']), |
4956 | fatal=True) |
4957 | pocket = source.split(':')[-1] |
4958 | + if pocket not in CLOUD_ARCHIVE_POCKETS: |
4959 | + raise SourceConfigError( |
4960 | + 'Unsupported cloud: source option %s' % |
4961 | + pocket) |
4962 | + actual_pocket = CLOUD_ARCHIVE_POCKETS[pocket] |
4963 | with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt: |
4964 | - apt.write(CLOUD_ARCHIVE.format(pocket)) |
4965 | + apt.write(CLOUD_ARCHIVE.format(actual_pocket)) |
4966 | elif source == 'proposed': |
4967 | release = lsb_release()['DISTRIB_CODENAME'] |
4968 | with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt: |
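The `add_source` hunk widens the prefix dispatch: raw `deb ` lines and the `cloud-archive:` alias are now handed to `add-apt-repository` alongside `ppa:` and `http:` sources, while `cloud:` sources get validated against `CLOUD_ARCHIVE_POCKETS` before anything is written to disk. A hypothetical classifier sketching that dispatch order:

```python
def classify_source(source):
    # Mirrors the widened add_source() dispatch; the return values are
    # illustrative labels, not charmhelpers API.
    if (source.startswith('ppa:') or
            source.startswith('http:') or
            source.startswith('deb ') or
            source.startswith('cloud-archive:')):
        return 'add-apt-repository'
    if source.startswith('cloud:'):
        return 'cloud-archive-list'   # validated, then cloud-archive.list
    if source == 'proposed':
        return 'proposed-list'        # written to proposed.list
    return 'unhandled'
```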
4969 | @@ -118,8 +180,13 @@ |
4970 | Note that 'null' (a.k.a. None) should not be quoted. |
4971 | """ |
4972 | sources = safe_load(config(sources_var)) |
4973 | - keys = safe_load(config(keys_var)) |
4974 | - if isinstance(sources, basestring) and isinstance(keys, basestring): |
4975 | + keys = config(keys_var) |
4976 | + if not sources: |
4977 | + return |
4978 | + if keys is not None: |
4979 | + keys = safe_load(keys) |
4980 | + if isinstance(sources, basestring) and ( |
4981 | + keys is None or isinstance(keys, basestring)): |
4982 | add_source(sources, keys) |
4983 | else: |
4984 | if not len(sources) == len(keys): |
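The `configure_sources` hunk makes GPG keys optional: a lone source string may now arrive with no key at all (the config value parses to `None`), where the old code unconditionally `safe_load`-ed the keys and required both to be strings. A sketch of the pairing logic after YAML parsing, via a hypothetical `pair_sources_with_keys` helper:

```python
def pair_sources_with_keys(sources, keys=None):
    # Mirrors the patched configure_sources() after safe_load():
    # no sources is a no-op, a single source may have no key, and
    # list-shaped sources/keys must line up one-to-one.
    if not sources:
        return []
    if isinstance(sources, str) and (keys is None or isinstance(keys, str)):
        return [(sources, keys)]
    if keys is None or len(sources) != len(keys):
        raise ValueError(
            'Install sources and keys lists are different lengths')
    return list(zip(sources, keys))
```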
4985 | @@ -172,7 +239,9 @@ |
4986 | |
4987 | |
4988 | class BaseFetchHandler(object): |
4989 | + |
4990 | """Base class for FetchHandler implementations in fetch plugins""" |
4991 | + |
4992 | def can_handle(self, source): |
4993 | """Returns True if the source can be handled. Otherwise returns |
4994 | a string explaining why it cannot""" |
4995 | @@ -200,10 +269,13 @@ |
4996 | for handler_name in fetch_handlers: |
4997 | package, classname = handler_name.rsplit('.', 1) |
4998 | try: |
4999 | - handler_class = getattr(importlib.import_module(package), classname) |
5000 | + handler_class = getattr( |
Hey Jacek,
This looks good, thanks! I went ahead and fixed up the test failures on another branch: lp:~tvansteenburgh/charms/precise/haproxy/haproxy-updates-test-fixes. If you merge those fixes into your branch, I'll gladly approve and merge it.