Merge lp:~tealeg/charms/trusty/ceilometer/pause-and-resume into lp:~openstack-charmers-archive/charms/trusty/ceilometer/trunk
Status: Superseded
Proposed branch: lp:~tealeg/charms/trusty/ceilometer/pause-and-resume
Merge into: lp:~openstack-charmers-archive/charms/trusty/ceilometer/trunk
Diff against target: 4038 lines (+2936/-456) (has conflicts), 32 files modified:

- actions.yaml (+4/-0)
- actions/actions.py (+60/-0)
- ceilometer_utils.py (+4/-4)
- charmhelpers/cli/__init__.py (+191/-0)
- charmhelpers/cli/benchmark.py (+36/-0)
- charmhelpers/cli/commands.py (+32/-0)
- charmhelpers/cli/hookenv.py (+23/-0)
- charmhelpers/cli/host.py (+31/-0)
- charmhelpers/cli/unitdata.py (+39/-0)
- charmhelpers/contrib/network/ip.py (+5/-1)
- charmhelpers/contrib/openstack/amulet/deployment.py (+24/-3)
- charmhelpers/contrib/openstack/amulet/utils.py (+891/-263)
- charmhelpers/contrib/openstack/context.py (+36/-5)
- charmhelpers/contrib/openstack/neutron.py (+54/-3)
- charmhelpers/contrib/openstack/utils.py (+69/-20)
- charmhelpers/contrib/peerstorage/__init__.py (+269/-0)
- charmhelpers/contrib/storage/linux/ceph.py (+2/-11)
- charmhelpers/contrib/storage/linux/utils.py (+3/-2)
- charmhelpers/core/files.py (+45/-0)
- charmhelpers/core/hookenv.py (+226/-4)
- charmhelpers/core/host.py (+136/-11)
- charmhelpers/core/hugepage.py (+62/-0)
- charmhelpers/core/kernel.py (+68/-0)
- charmhelpers/core/services/helpers.py (+26/-2)
- charmhelpers/fetch/__init__.py (+8/-0)
- hooks/ceilometer_hooks.py (+1/-2)
- tests/020-basic-trusty-liberty (+11/-0)
- tests/021-basic-wily-liberty (+9/-0)
- tests/basic_deployment.py (+299/-123)
- tests/charmhelpers/contrib/amulet/utils.py (+246/-1)
- tests/charmhelpers/contrib/openstack/amulet/deployment.py (+7/-1)
- tests/tests.yaml (+19/-0)

Conflicts:

- Conflict adding file charmhelpers/cli; moved existing file to charmhelpers/cli.moved.
- Text conflict in charmhelpers/contrib/openstack/amulet/deployment.py
- Text conflict in charmhelpers/contrib/openstack/amulet/utils.py
- Text conflict in charmhelpers/contrib/openstack/neutron.py
- Text conflict in charmhelpers/contrib/openstack/utils.py
- Conflict adding file charmhelpers/contrib/peerstorage; moved existing file to charmhelpers/contrib/peerstorage.moved.
- Conflict adding file charmhelpers/core/files.py; moved existing file to charmhelpers/core/files.py.moved.
- Text conflict in charmhelpers/core/hookenv.py
- Text conflict in charmhelpers/core/host.py
- Text conflict in charmhelpers/core/services/helpers.py
- Conflict adding file tests/020-basic-trusty-liberty; moved existing file to tests/020-basic-trusty-liberty.moved.
- Conflict adding file tests/021-basic-wily-liberty; moved existing file to tests/021-basic-wily-liberty.moved.
- Text conflict in tests/basic_deployment.py
- Text conflict in tests/charmhelpers/contrib/amulet/utils.py
- Text conflict in tests/charmhelpers/contrib/openstack/amulet/deployment.py
- Conflict adding file tests/tests.yaml; moved existing file to tests/tests.yaml.moved.
To merge this branch: bzr merge lp:~tealeg/charms/trusty/ceilometer/pause-and-resume
Related bugs: none
Reviewer | Review Type | Date Requested | Status
---|---|---|---
OpenStack Charmers | | | Pending
Landscape | | | Pending
Review via email: mp+270748@code.launchpad.net
This proposal has been superseded by a proposal from 2015-09-11.
Commit message
Description of the change
This branch adds pause and resume actions for the ceilometer services.
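As the preview diff below shows, `actions/pause` and `actions/resume` are both symlinks to `actions/actions.py`, so a single script dispatches on the name it was invoked through. A minimal standalone sketch of that dispatch pattern (the placeholder handlers here are illustrative; in the charm they call `service_pause`/`service_resume` from charm-helpers):

```python
import os

# Map of action names to handlers; stand-ins for the real
# pause/resume functions in actions.py.
ACTIONS = {
    "pause": lambda args: "pausing services",
    "resume": lambda args: "resuming services",
}


def main(args):
    # argv[0] is the symlink the action was invoked through,
    # e.g. actions/pause -> actions.py, so its basename selects
    # the handler.
    action_name = os.path.basename(args[0])
    try:
        action = ACTIONS[action_name]
    except KeyError:
        return "Action %s undefined" % action_name
    return action(args)


print(main(["actions/pause"]))   # -> pausing services
```

This keeps one file per charm rather than one script per action; adding a new action is a new dict entry plus a new symlink.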
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #8950 ceilometer for tealeg mp270748
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #6349 ceilometer for tealeg mp270748
AMULET OK: passed
Build: http://
- 108. By Geoff Teale: Merge forwards
- 109. By Geoff Teale: Update the descriptions of the pause and resume actions.
- 110. By Geoff Teale: Import CEILOMETER_SERVICES.
- 111. By Geoff Teale: Use local linked ceilometer_utils.py in actions.
- 112. By Geoff Teale: Move ceilometer_contexts to root directory of charm and symlink to both hooks and actions.
Unmerged revisions
- 112. By Geoff Teale: Move ceilometer_contexts to root directory of charm and symlink to both hooks and actions.
- 111. By Geoff Teale: Use local linked ceilometer_utils.py in actions.
- 110. By Geoff Teale: Import CEILOMETER_SERVICES.
- 109. By Geoff Teale: Update the descriptions of the pause and resume actions.
- 108. By Geoff Teale: Merge forwards
- 107. By Geoff Teale: lint
- 106. By Geoff Teale: Test status, initially, post pause and post resume.
- 105. By Geoff Teale: Sync charm-helpers from branch with status-get.
- 104. By Geoff Teale: Tidy linting issues
- 103. By Geoff Teale: self._run_action not self.run_action.
Preview Diff
1 | === added directory 'actions' |
2 | === added file 'actions.yaml' |
3 | --- actions.yaml 1970-01-01 00:00:00 +0000 |
4 | +++ actions.yaml 2015-09-11 11:01:28 +0000 |
5 | @@ -0,0 +1,4 @@ |
6 | +pause: |
7 | + description: Pause the apache service providing ceilometer functions. |
8 | +resume: |
9 | + description: Resume the apache service providing ceilometer functions. |
10 | \ No newline at end of file |
11 | |
12 | === added file 'actions/actions.py' |
13 | --- actions/actions.py 1970-01-01 00:00:00 +0000 |
14 | +++ actions/actions.py 2015-09-11 11:01:28 +0000 |
15 | @@ -0,0 +1,60 @@ |
16 | +#!/usr/bin/python |
17 | + |
18 | +import os |
19 | +import sys |
20 | + |
21 | +from charmhelpers.core.host import service_pause, service_resume |
22 | +from charmhelpers.core.hookenv import action_fail, status_set |
23 | + |
24 | +CEILOMETER_SERVICES = [ |
25 | + 'ceilometer-agent-central', |
26 | + 'ceilometer-collector', |
27 | + 'ceilometer-api', |
28 | + 'ceilometer-alarm-evaluator', |
29 | + 'ceilometer-alarm-notifier', |
30 | + 'ceilometer-agent-notification', |
31 | +] |
32 | + |
33 | + |
34 | +def pause(args): |
35 | + """Pause the Ceilometer services. |
36 | + |
37 | + @raises Exception should the service fail to stop. |
38 | + """ |
39 | + for service in CEILOMETER_SERVICES: |
40 | + if not service_pause(service): |
41 | + raise Exception("Failed to pause %s." % service) |
42 | + status_set( |
43 | + "maintenance", "Paused. Use 'resume' action to resume normal service.") |
44 | + |
45 | +def resume(args): |
46 | + """Resume the Ceilometer services. |
47 | + |
48 | + @raises Exception should the service fail to start.""" |
49 | + for service in CEILOMETER_SERVICES: |
50 | + if not service_resume(service): |
51 | + raise Exception("Failed to resume %s." % service) |
52 | + status_set("active", "") |
53 | + |
54 | + |
55 | +# A dictionary of all the defined actions to callables (which take |
56 | +# parsed arguments). |
57 | +ACTIONS = {"pause": pause, "resume": resume} |
58 | + |
59 | + |
60 | +def main(args): |
61 | + action_name = os.path.basename(args[0]) |
62 | + try: |
63 | + action = ACTIONS[action_name] |
64 | + except KeyError: |
65 | + return "Action %s undefined" % action_name |
66 | + else: |
67 | + try: |
68 | + action(args) |
69 | + except Exception as e: |
70 | + action_fail(str(e)) |
71 | + |
72 | + |
73 | +if __name__ == "__main__": |
74 | + sys.exit(main(sys.argv)) |
75 | + |
76 | |
77 | === added symlink 'actions/ceilometer_utils.py' |
78 | === target is u'../ceilometer_utils.py' |
79 | === added symlink 'actions/charmhelpers' |
80 | === target is u'../charmhelpers' |
81 | === added symlink 'actions/pause' |
82 | === target is u'actions.py' |
83 | === added symlink 'actions/resume' |
84 | === target is u'actions.py' |
85 | === renamed file 'hooks/ceilometer_utils.py' => 'ceilometer_utils.py' |
86 | --- hooks/ceilometer_utils.py 2015-02-20 11:35:22 +0000 |
87 | +++ ceilometer_utils.py 2015-09-11 11:01:28 +0000 |
88 | @@ -113,8 +113,8 @@ |
89 | configs = templating.OSConfigRenderer(templates_dir=TEMPLATES, |
90 | openstack_release=release) |
91 | |
92 | - if (get_os_codename_install_source(config('openstack-origin')) |
93 | - >= 'icehouse'): |
94 | + if (get_os_codename_install_source( |
95 | + config('openstack-origin')) >= 'icehouse'): |
96 | CONFIG_FILES[CEILOMETER_CONF]['services'] = \ |
97 | CONFIG_FILES[CEILOMETER_CONF]['services'] + ICEHOUSE_SERVICES |
98 | |
99 | @@ -194,8 +194,8 @@ |
100 | |
101 | def get_packages(): |
102 | packages = deepcopy(CEILOMETER_PACKAGES) |
103 | - if (get_os_codename_install_source(config('openstack-origin')) |
104 | - >= 'icehouse'): |
105 | + if (get_os_codename_install_source( |
106 | + config('openstack-origin')) >= 'icehouse'): |
107 | packages = packages + ICEHOUSE_PACKAGES |
108 | return packages |
109 | |
110 | |
111 | === modified file 'charm-helpers-hooks.yaml' |
112 | === renamed directory 'hooks/charmhelpers' => 'charmhelpers' |
113 | === added directory 'charmhelpers/cli' |
114 | === renamed directory 'hooks/charmhelpers/cli' => 'charmhelpers/cli.moved' |
115 | === added file 'charmhelpers/cli/__init__.py' |
116 | --- charmhelpers/cli/__init__.py 1970-01-01 00:00:00 +0000 |
117 | +++ charmhelpers/cli/__init__.py 2015-09-11 11:01:28 +0000 |
118 | @@ -0,0 +1,191 @@ |
119 | +# Copyright 2014-2015 Canonical Limited. |
120 | +# |
121 | +# This file is part of charm-helpers. |
122 | +# |
123 | +# charm-helpers is free software: you can redistribute it and/or modify |
124 | +# it under the terms of the GNU Lesser General Public License version 3 as |
125 | +# published by the Free Software Foundation. |
126 | +# |
127 | +# charm-helpers is distributed in the hope that it will be useful, |
128 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
129 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
130 | +# GNU Lesser General Public License for more details. |
131 | +# |
132 | +# You should have received a copy of the GNU Lesser General Public License |
133 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
134 | + |
135 | +import inspect |
136 | +import argparse |
137 | +import sys |
138 | + |
139 | +from six.moves import zip |
140 | + |
141 | +from charmhelpers.core import unitdata |
142 | + |
143 | + |
144 | +class OutputFormatter(object): |
145 | + def __init__(self, outfile=sys.stdout): |
146 | + self.formats = ( |
147 | + "raw", |
148 | + "json", |
149 | + "py", |
150 | + "yaml", |
151 | + "csv", |
152 | + "tab", |
153 | + ) |
154 | + self.outfile = outfile |
155 | + |
156 | + def add_arguments(self, argument_parser): |
157 | + formatgroup = argument_parser.add_mutually_exclusive_group() |
158 | + choices = self.supported_formats |
159 | + formatgroup.add_argument("--format", metavar='FMT', |
160 | + help="Select output format for returned data, " |
161 | + "where FMT is one of: {}".format(choices), |
162 | + choices=choices, default='raw') |
163 | + for fmt in self.formats: |
164 | + fmtfunc = getattr(self, fmt) |
165 | + formatgroup.add_argument("-{}".format(fmt[0]), |
166 | + "--{}".format(fmt), action='store_const', |
167 | + const=fmt, dest='format', |
168 | + help=fmtfunc.__doc__) |
169 | + |
170 | + @property |
171 | + def supported_formats(self): |
172 | + return self.formats |
173 | + |
174 | + def raw(self, output): |
175 | + """Output data as raw string (default)""" |
176 | + if isinstance(output, (list, tuple)): |
177 | + output = '\n'.join(map(str, output)) |
178 | + self.outfile.write(str(output)) |
179 | + |
180 | + def py(self, output): |
181 | + """Output data as a nicely-formatted python data structure""" |
182 | + import pprint |
183 | + pprint.pprint(output, stream=self.outfile) |
184 | + |
185 | + def json(self, output): |
186 | + """Output data in JSON format""" |
187 | + import json |
188 | + json.dump(output, self.outfile) |
189 | + |
190 | + def yaml(self, output): |
191 | + """Output data in YAML format""" |
192 | + import yaml |
193 | + yaml.safe_dump(output, self.outfile) |
194 | + |
195 | + def csv(self, output): |
196 | + """Output data as excel-compatible CSV""" |
197 | + import csv |
198 | + csvwriter = csv.writer(self.outfile) |
199 | + csvwriter.writerows(output) |
200 | + |
201 | + def tab(self, output): |
202 | + """Output data in excel-compatible tab-delimited format""" |
203 | + import csv |
204 | + csvwriter = csv.writer(self.outfile, dialect=csv.excel_tab) |
205 | + csvwriter.writerows(output) |
206 | + |
207 | + def format_output(self, output, fmt='raw'): |
208 | + fmtfunc = getattr(self, fmt) |
209 | + fmtfunc(output) |
210 | + |
211 | + |
212 | +class CommandLine(object): |
213 | + argument_parser = None |
214 | + subparsers = None |
215 | + formatter = None |
216 | + exit_code = 0 |
217 | + |
218 | + def __init__(self): |
219 | + if not self.argument_parser: |
220 | + self.argument_parser = argparse.ArgumentParser(description='Perform common charm tasks') |
221 | + if not self.formatter: |
222 | + self.formatter = OutputFormatter() |
223 | + self.formatter.add_arguments(self.argument_parser) |
224 | + if not self.subparsers: |
225 | + self.subparsers = self.argument_parser.add_subparsers(help='Commands') |
226 | + |
227 | + def subcommand(self, command_name=None): |
228 | + """ |
229 | + Decorate a function as a subcommand. Use its arguments as the |
230 | + command-line arguments""" |
231 | + def wrapper(decorated): |
232 | + cmd_name = command_name or decorated.__name__ |
233 | + subparser = self.subparsers.add_parser(cmd_name, |
234 | + description=decorated.__doc__) |
235 | + for args, kwargs in describe_arguments(decorated): |
236 | + subparser.add_argument(*args, **kwargs) |
237 | + subparser.set_defaults(func=decorated) |
238 | + return decorated |
239 | + return wrapper |
240 | + |
241 | + def test_command(self, decorated): |
242 | + """ |
243 | + Subcommand is a boolean test function, so bool return values should be |
244 | + converted to a 0/1 exit code. |
245 | + """ |
246 | + decorated._cli_test_command = True |
247 | + return decorated |
248 | + |
249 | + def no_output(self, decorated): |
250 | + """ |
251 | + Subcommand is not expected to return a value, so don't print a spurious None. |
252 | + """ |
253 | + decorated._cli_no_output = True |
254 | + return decorated |
255 | + |
256 | + def subcommand_builder(self, command_name, description=None): |
257 | + """ |
258 | + Decorate a function that builds a subcommand. Builders should accept a |
259 | + single argument (the subparser instance) and return the function to be |
260 | + run as the command.""" |
261 | + def wrapper(decorated): |
262 | + subparser = self.subparsers.add_parser(command_name) |
263 | + func = decorated(subparser) |
264 | + subparser.set_defaults(func=func) |
265 | + subparser.description = description or func.__doc__ |
266 | + return wrapper |
267 | + |
268 | + def run(self): |
269 | + "Run cli, processing arguments and executing subcommands." |
270 | + arguments = self.argument_parser.parse_args() |
271 | + argspec = inspect.getargspec(arguments.func) |
272 | + vargs = [] |
273 | + for arg in argspec.args: |
274 | + vargs.append(getattr(arguments, arg)) |
275 | + if argspec.varargs: |
276 | + vargs.extend(getattr(arguments, argspec.varargs)) |
277 | + output = arguments.func(*vargs) |
278 | + if getattr(arguments.func, '_cli_test_command', False): |
279 | + self.exit_code = 0 if output else 1 |
280 | + output = '' |
281 | + if getattr(arguments.func, '_cli_no_output', False): |
282 | + output = '' |
283 | + self.formatter.format_output(output, arguments.format) |
284 | + if unitdata._KV: |
285 | + unitdata._KV.flush() |
286 | + |
287 | + |
288 | +cmdline = CommandLine() |
289 | + |
290 | + |
291 | +def describe_arguments(func): |
292 | + """ |
293 | + Analyze a function's signature and return a data structure suitable for |
294 | + passing in as arguments to an argparse parser's add_argument() method.""" |
295 | + |
296 | + argspec = inspect.getargspec(func) |
297 | + # we should probably raise an exception somewhere if func includes **kwargs |
298 | + if argspec.defaults: |
299 | + positional_args = argspec.args[:-len(argspec.defaults)] |
300 | + keyword_names = argspec.args[-len(argspec.defaults):] |
301 | + for arg, default in zip(keyword_names, argspec.defaults): |
302 | + yield ('--{}'.format(arg),), {'default': default} |
303 | + else: |
304 | + positional_args = argspec.args |
305 | + |
306 | + for arg in positional_args: |
307 | + yield (arg,), {} |
308 | + if argspec.varargs: |
309 | + yield (argspec.varargs,), {'nargs': '*'} |
310 | |
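The `describe_arguments` helper at the end of the file above converts a function signature into argparse specs: defaulted parameters become `--options` and `*varargs` becomes an `nargs='*'` positional. A rough standalone sketch of the same idea (this uses `inspect.getfullargspec`, since the `getargspec` used in the diff is removed in newer Pythons; the `demo` function is illustrative only):

```python
import inspect


def describe_arguments(func):
    # Mirror of the helper in the diff: defaulted parameters become
    # --options, remaining positional parameters stay positional,
    # and *varargs soaks up the rest.
    spec = inspect.getfullargspec(func)
    if spec.defaults:
        positional = spec.args[:-len(spec.defaults)]
        for name, default in zip(spec.args[-len(spec.defaults):],
                                 spec.defaults):
            yield ('--{}'.format(name),), {'default': default}
    else:
        positional = spec.args
    for name in positional:
        yield (name,), {}
    if spec.varargs:
        yield (spec.varargs,), {'nargs': '*'}


def demo(host, port=8080, *extra):
    pass


print(list(describe_arguments(demo)))
# [(('--port',), {'default': 8080}), (('host',), {}),
#  (('extra',), {'nargs': '*'})]
```

Each yielded `(args, kwargs)` pair can be splatted directly into `subparser.add_argument(*args, **kwargs)`, which is exactly how the `subcommand` decorator above consumes it.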
311 | === added file 'charmhelpers/cli/benchmark.py' |
312 | --- charmhelpers/cli/benchmark.py 1970-01-01 00:00:00 +0000 |
313 | +++ charmhelpers/cli/benchmark.py 2015-09-11 11:01:28 +0000 |
314 | @@ -0,0 +1,36 @@ |
315 | +# Copyright 2014-2015 Canonical Limited. |
316 | +# |
317 | +# This file is part of charm-helpers. |
318 | +# |
319 | +# charm-helpers is free software: you can redistribute it and/or modify |
320 | +# it under the terms of the GNU Lesser General Public License version 3 as |
321 | +# published by the Free Software Foundation. |
322 | +# |
323 | +# charm-helpers is distributed in the hope that it will be useful, |
324 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
325 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
326 | +# GNU Lesser General Public License for more details. |
327 | +# |
328 | +# You should have received a copy of the GNU Lesser General Public License |
329 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
330 | + |
331 | +from . import cmdline |
332 | +from charmhelpers.contrib.benchmark import Benchmark |
333 | + |
334 | + |
335 | +@cmdline.subcommand(command_name='benchmark-start') |
336 | +def start(): |
337 | + Benchmark.start() |
338 | + |
339 | + |
340 | +@cmdline.subcommand(command_name='benchmark-finish') |
341 | +def finish(): |
342 | + Benchmark.finish() |
343 | + |
344 | + |
345 | +@cmdline.subcommand_builder('benchmark-composite', description="Set the benchmark composite score") |
346 | +def service(subparser): |
347 | + subparser.add_argument("value", help="The composite score.") |
348 | + subparser.add_argument("units", help="The units the composite score represents, i.e., 'reads/sec'.") |
349 | + subparser.add_argument("direction", help="'asc' if a lower score is better, 'desc' if a higher score is better.") |
350 | + return Benchmark.set_composite_score |
351 | |
352 | === added file 'charmhelpers/cli/commands.py' |
353 | --- charmhelpers/cli/commands.py 1970-01-01 00:00:00 +0000 |
354 | +++ charmhelpers/cli/commands.py 2015-09-11 11:01:28 +0000 |
355 | @@ -0,0 +1,32 @@ |
356 | +# Copyright 2014-2015 Canonical Limited. |
357 | +# |
358 | +# This file is part of charm-helpers. |
359 | +# |
360 | +# charm-helpers is free software: you can redistribute it and/or modify |
361 | +# it under the terms of the GNU Lesser General Public License version 3 as |
362 | +# published by the Free Software Foundation. |
363 | +# |
364 | +# charm-helpers is distributed in the hope that it will be useful, |
365 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
366 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
367 | +# GNU Lesser General Public License for more details. |
368 | +# |
369 | +# You should have received a copy of the GNU Lesser General Public License |
370 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
371 | + |
372 | +""" |
373 | +This module loads sub-modules into the python runtime so they can be |
374 | +discovered via the inspect module. In order to prevent flake8 from (rightfully) |
375 | +telling us these are unused modules, throw a ' # noqa' at the end of each import |
376 | +so that the warning is suppressed. |
377 | +""" |
378 | + |
379 | +from . import CommandLine # noqa |
380 | + |
381 | +""" |
382 | +Import the sub-modules which have decorated subcommands to register with chlp. |
383 | +""" |
384 | +from . import host # noqa |
385 | +from . import benchmark # noqa |
386 | +from . import unitdata # noqa |
387 | +from . import hookenv # noqa |
388 | |
389 | === added file 'charmhelpers/cli/hookenv.py' |
390 | --- charmhelpers/cli/hookenv.py 1970-01-01 00:00:00 +0000 |
391 | +++ charmhelpers/cli/hookenv.py 2015-09-11 11:01:28 +0000 |
392 | @@ -0,0 +1,23 @@ |
393 | +# Copyright 2014-2015 Canonical Limited. |
394 | +# |
395 | +# This file is part of charm-helpers. |
396 | +# |
397 | +# charm-helpers is free software: you can redistribute it and/or modify |
398 | +# it under the terms of the GNU Lesser General Public License version 3 as |
399 | +# published by the Free Software Foundation. |
400 | +# |
401 | +# charm-helpers is distributed in the hope that it will be useful, |
402 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
403 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
404 | +# GNU Lesser General Public License for more details. |
405 | +# |
406 | +# You should have received a copy of the GNU Lesser General Public License |
407 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
408 | + |
409 | +from . import cmdline |
410 | +from charmhelpers.core import hookenv |
411 | + |
412 | + |
413 | +cmdline.subcommand('relation-id')(hookenv.relation_id._wrapped) |
414 | +cmdline.subcommand('service-name')(hookenv.service_name) |
415 | +cmdline.subcommand('remote-service-name')(hookenv.remote_service_name._wrapped) |
416 | |
417 | === added file 'charmhelpers/cli/host.py' |
418 | --- charmhelpers/cli/host.py 1970-01-01 00:00:00 +0000 |
419 | +++ charmhelpers/cli/host.py 2015-09-11 11:01:28 +0000 |
420 | @@ -0,0 +1,31 @@ |
421 | +# Copyright 2014-2015 Canonical Limited. |
422 | +# |
423 | +# This file is part of charm-helpers. |
424 | +# |
425 | +# charm-helpers is free software: you can redistribute it and/or modify |
426 | +# it under the terms of the GNU Lesser General Public License version 3 as |
427 | +# published by the Free Software Foundation. |
428 | +# |
429 | +# charm-helpers is distributed in the hope that it will be useful, |
430 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
431 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
432 | +# GNU Lesser General Public License for more details. |
433 | +# |
434 | +# You should have received a copy of the GNU Lesser General Public License |
435 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
436 | + |
437 | +from . import cmdline |
438 | +from charmhelpers.core import host |
439 | + |
440 | + |
441 | +@cmdline.subcommand() |
442 | +def mounts(): |
443 | + "List mounts" |
444 | + return host.mounts() |
445 | + |
446 | + |
447 | +@cmdline.subcommand_builder('service', description="Control system services") |
448 | +def service(subparser): |
449 | + subparser.add_argument("action", help="The action to perform (start, stop, etc...)") |
450 | + subparser.add_argument("service_name", help="Name of the service to control") |
451 | + return host.service |
452 | |
453 | === added file 'charmhelpers/cli/unitdata.py' |
454 | --- charmhelpers/cli/unitdata.py 1970-01-01 00:00:00 +0000 |
455 | +++ charmhelpers/cli/unitdata.py 2015-09-11 11:01:28 +0000 |
456 | @@ -0,0 +1,39 @@ |
457 | +# Copyright 2014-2015 Canonical Limited. |
458 | +# |
459 | +# This file is part of charm-helpers. |
460 | +# |
461 | +# charm-helpers is free software: you can redistribute it and/or modify |
462 | +# it under the terms of the GNU Lesser General Public License version 3 as |
463 | +# published by the Free Software Foundation. |
464 | +# |
465 | +# charm-helpers is distributed in the hope that it will be useful, |
466 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
467 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
468 | +# GNU Lesser General Public License for more details. |
469 | +# |
470 | +# You should have received a copy of the GNU Lesser General Public License |
471 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
472 | + |
473 | +from . import cmdline |
474 | +from charmhelpers.core import unitdata |
475 | + |
476 | + |
477 | +@cmdline.subcommand_builder('unitdata', description="Store and retrieve data") |
478 | +def unitdata_cmd(subparser): |
479 | + nested = subparser.add_subparsers() |
480 | + get_cmd = nested.add_parser('get', help='Retrieve data') |
481 | + get_cmd.add_argument('key', help='Key to retrieve the value of') |
482 | + get_cmd.set_defaults(action='get', value=None) |
483 | + set_cmd = nested.add_parser('set', help='Store data') |
484 | + set_cmd.add_argument('key', help='Key to set') |
485 | + set_cmd.add_argument('value', help='Value to store') |
486 | + set_cmd.set_defaults(action='set') |
487 | + |
488 | + def _unitdata_cmd(action, key, value): |
489 | + if action == 'get': |
490 | + return unitdata.kv().get(key) |
491 | + elif action == 'set': |
492 | + unitdata.kv().set(key, value) |
493 | + unitdata.kv().flush() |
494 | + return '' |
495 | + return _unitdata_cmd |
496 | |
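The `unitdata` command above nests a second level of subparsers (`get`/`set`) under one subcommand and uses `set_defaults` to record which branch was taken. A self-contained sketch of that pattern, with an in-memory dict standing in for the charm's key-value store (the `origin` key is just an example):

```python
import argparse

# Stand-in for unitdata.kv(): a plain dict instead of the
# SQLite-backed store.
store = {}

parser = argparse.ArgumentParser(prog='unitdata')
nested = parser.add_subparsers(dest='action')
get_cmd = nested.add_parser('get', help='Retrieve data')
get_cmd.add_argument('key', help='Key to retrieve the value of')
set_cmd = nested.add_parser('set', help='Store data')
set_cmd.add_argument('key', help='Key to set')
set_cmd.add_argument('value', help='Value to store')


def run(argv):
    # Dispatch on which nested subparser matched.
    args = parser.parse_args(argv)
    if args.action == 'set':
        store[args.key] = args.value
        return ''
    return store.get(args.key)


run(['set', 'origin', 'cloud:trusty-liberty'])
print(run(['get', 'origin']))   # -> cloud:trusty-liberty
```

Returning `''` from the `set` branch matches the helper above: it keeps the CLI's output formatter from printing a spurious `None`.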
497 | === modified file 'charmhelpers/contrib/network/ip.py' |
498 | --- hooks/charmhelpers/contrib/network/ip.py 2015-03-04 09:52:56 +0000 |
499 | +++ charmhelpers/contrib/network/ip.py 2015-09-11 11:01:28 +0000 |
500 | @@ -435,8 +435,12 @@ |
501 | |
502 | rev = dns.reversename.from_address(address) |
503 | result = ns_query(rev) |
504 | + |
505 | if not result: |
506 | - return None |
507 | + try: |
508 | + result = socket.gethostbyaddr(address)[0] |
509 | + except: |
510 | + return None |
511 | else: |
512 | result = address |
513 | |
514 | |
515 | === modified file 'charmhelpers/contrib/openstack/amulet/deployment.py' |
516 | --- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-08-10 16:32:05 +0000 |
517 | +++ charmhelpers/contrib/openstack/amulet/deployment.py 2015-09-11 11:01:28 +0000 |
518 | @@ -44,7 +44,14 @@ |
519 | Determine if the local branch being tested is derived from its |
520 | stable or next (dev) branch, and based on this, use the corresonding |
521 | stable or next branches for the other_services.""" |
522 | - base_charms = ['mysql', 'mongodb'] |
523 | + |
524 | + # Charms outside the lp:~openstack-charmers namespace |
525 | + base_charms = ['mysql', 'mongodb', 'nrpe'] |
526 | + |
527 | + # Force these charms to current series even when using an older series. |
528 | + # ie. Use trusty/nrpe even when series is precise, as the P charm |
529 | + # does not possess the necessary external master config and hooks. |
530 | + force_series_current = ['nrpe'] |
531 | |
532 | if self.series in ['precise', 'trusty']: |
533 | base_series = self.series |
534 | @@ -53,11 +60,17 @@ |
535 | |
536 | if self.stable: |
537 | for svc in other_services: |
538 | + if svc['name'] in force_series_current: |
539 | + base_series = self.current_next |
540 | + |
541 | temp = 'lp:charms/{}/{}' |
542 | svc['location'] = temp.format(base_series, |
543 | svc['name']) |
544 | else: |
545 | for svc in other_services: |
546 | + if svc['name'] in force_series_current: |
547 | + base_series = self.current_next |
548 | + |
549 | if svc['name'] in base_charms: |
550 | temp = 'lp:charms/{}/{}' |
551 | svc['location'] = temp.format(base_series, |
552 | @@ -77,21 +90,29 @@ |
553 | |
554 | services = other_services |
555 | services.append(this_service) |
556 | + |
557 | + # Charms which should use the source config option |
558 | use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph', |
559 | 'ceph-osd', 'ceph-radosgw'] |
560 | +<<<<<<< TREE |
561 | # Most OpenStack subordinate charms do not expose an origin option |
562 | # as that is controlled by the principle. |
563 | ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch'] |
564 | +======= |
565 | + |
566 | + # Charms which can not use openstack-origin, ie. many subordinates |
567 | + no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe'] |
568 | +>>>>>>> MERGE-SOURCE |
569 | |
570 | if self.openstack: |
571 | for svc in services: |
572 | - if svc['name'] not in use_source + ignore: |
573 | + if svc['name'] not in use_source + no_origin: |
574 | config = {'openstack-origin': self.openstack} |
575 | self.d.configure(svc['name'], config) |
576 | |
577 | if self.source: |
578 | for svc in services: |
579 | - if svc['name'] in use_source and svc['name'] not in ignore: |
580 | + if svc['name'] in use_source and svc['name'] not in no_origin: |
581 | config = {'source': self.source} |
582 | self.d.configure(svc['name'], config) |
583 | |
584 | |
585 | === modified file 'charmhelpers/contrib/openstack/amulet/utils.py' |
586 | --- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-08-10 16:32:05 +0000 |
587 | +++ charmhelpers/contrib/openstack/amulet/utils.py 2015-09-11 11:01:28 +0000 |
588 | @@ -27,7 +27,12 @@ |
589 | import heatclient.v1.client as heat_client |
590 | import keystoneclient.v2_0 as keystone_client |
591 | import novaclient.v1_1.client as nova_client |
592 | -import swiftclient |
593 | +<<<<<<< TREE |
594 | +import swiftclient |
595 | +======= |
596 | +import pika |
597 | +import swiftclient |
598 | +>>>>>>> MERGE-SOURCE |
599 | |
600 | from charmhelpers.contrib.amulet.utils import ( |
601 | AmuletUtils |
602 | @@ -340,265 +345,888 @@ |
603 | |
604 | def delete_instance(self, nova, instance): |
605 | """Delete the specified instance.""" |
606 | - |
607 | - # /!\ DEPRECATION WARNING |
608 | - self.log.warn('/!\\ DEPRECATION WARNING: use ' |
609 | - 'delete_resource instead of delete_instance.') |
610 | - self.log.debug('Deleting instance ({})...'.format(instance)) |
611 | - return self.delete_resource(nova.servers, instance, |
612 | - msg='nova instance') |
613 | - |
614 | - def create_or_get_keypair(self, nova, keypair_name="testkey"): |
615 | - """Create a new keypair, or return pointer if it already exists.""" |
616 | - try: |
617 | - _keypair = nova.keypairs.get(keypair_name) |
618 | - self.log.debug('Keypair ({}) already exists, ' |
619 | - 'using it.'.format(keypair_name)) |
620 | - return _keypair |
621 | - except: |
622 | - self.log.debug('Keypair ({}) does not exist, ' |
623 | - 'creating it.'.format(keypair_name)) |
624 | - |
625 | - _keypair = nova.keypairs.create(name=keypair_name) |
626 | - return _keypair |
627 | - |
628 | - def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1, |
629 | - img_id=None, src_vol_id=None, snap_id=None): |
630 | - """Create cinder volume, optionally from a glance image, OR |
631 | - optionally as a clone of an existing volume, OR optionally |
632 | - from a snapshot. Wait for the new volume status to reach |
633 | - the expected status, validate and return a resource pointer. |
634 | - |
635 | - :param vol_name: cinder volume display name |
636 | - :param vol_size: size in gigabytes |
637 | - :param img_id: optional glance image id |
638 | - :param src_vol_id: optional source volume id to clone |
639 | - :param snap_id: optional snapshot id to use |
640 | - :returns: cinder volume pointer |
641 | - """ |
642 | - # Handle parameter input and avoid impossible combinations |
643 | - if img_id and not src_vol_id and not snap_id: |
644 | - # Create volume from image |
645 | - self.log.debug('Creating cinder volume from glance image...') |
646 | - bootable = 'true' |
647 | - elif src_vol_id and not img_id and not snap_id: |
648 | - # Clone an existing volume |
649 | - self.log.debug('Cloning cinder volume...') |
650 | - bootable = cinder.volumes.get(src_vol_id).bootable |
651 | - elif snap_id and not src_vol_id and not img_id: |
652 | - # Create volume from snapshot |
653 | - self.log.debug('Creating cinder volume from snapshot...') |
654 | - snap = cinder.volume_snapshots.find(id=snap_id) |
655 | - vol_size = snap.size |
656 | - snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id |
657 | - bootable = cinder.volumes.get(snap_vol_id).bootable |
658 | - elif not img_id and not src_vol_id and not snap_id: |
659 | - # Create volume |
660 | - self.log.debug('Creating cinder volume...') |
661 | - bootable = 'false' |
662 | - else: |
663 | - # Impossible combination of parameters |
664 | - msg = ('Invalid method use - name:{} size:{} img_id:{} ' |
665 | - 'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size, |
666 | - img_id, src_vol_id, |
667 | - snap_id)) |
668 | - amulet.raise_status(amulet.FAIL, msg=msg) |
669 | - |
670 | - # Create new volume |
671 | - try: |
672 | - vol_new = cinder.volumes.create(display_name=vol_name, |
673 | - imageRef=img_id, |
674 | - size=vol_size, |
675 | - source_volid=src_vol_id, |
676 | - snapshot_id=snap_id) |
677 | - vol_id = vol_new.id |
678 | - except Exception as e: |
679 | - msg = 'Failed to create volume: {}'.format(e) |
680 | - amulet.raise_status(amulet.FAIL, msg=msg) |
681 | - |
682 | - # Wait for volume to reach available status |
683 | - ret = self.resource_reaches_status(cinder.volumes, vol_id, |
684 | - expected_stat="available", |
685 | - msg="Volume status wait") |
686 | - if not ret: |
687 | - msg = 'Cinder volume failed to reach expected state.' |
688 | - amulet.raise_status(amulet.FAIL, msg=msg) |
689 | - |
690 | - # Re-validate new volume |
691 | - self.log.debug('Validating volume attributes...') |
692 | - val_vol_name = cinder.volumes.get(vol_id).display_name |
693 | - val_vol_boot = cinder.volumes.get(vol_id).bootable |
694 | - val_vol_stat = cinder.volumes.get(vol_id).status |
695 | - val_vol_size = cinder.volumes.get(vol_id).size |
696 | - msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:' |
697 | - '{} size:{}'.format(val_vol_name, vol_id, |
698 | - val_vol_stat, val_vol_boot, |
699 | - val_vol_size)) |
700 | - |
701 | - if val_vol_boot == bootable and val_vol_stat == 'available' \ |
702 | - and val_vol_name == vol_name and val_vol_size == vol_size: |
703 | - self.log.debug(msg_attr) |
704 | - else: |
705 | - msg = ('Volume validation failed, {}'.format(msg_attr)) |
706 | - amulet.raise_status(amulet.FAIL, msg=msg) |
707 | - |
708 | - return vol_new |
709 | - |
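The four-branch `if`/`elif` ladder in `create_cinder_volume` above enforces that at most one of `img_id`, `src_vol_id`, and `snap_id` is given. That mutual-exclusion check can be expressed compactly on its own (function name and return values are hypothetical, not part of the charmhelpers API):

```python
def volume_source(img_id=None, src_vol_id=None, snap_id=None):
    """Return which volume-creation path applies ('image', 'clone',
    'snapshot', or 'blank'), or raise on an impossible combination."""
    given = [name for name, val in (('image', img_id),
                                    ('clone', src_vol_id),
                                    ('snapshot', snap_id)) if val]
    if len(given) > 1:
        raise ValueError('at most one source allowed, got: '
                         '{}'.format(given))
    return given[0] if given else 'blank'
```

This mirrors the helper's validation: exactly the combinations the ladder rejects raise, and each accepted combination maps to one creation path.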
710 | - def delete_resource(self, resource, resource_id, |
711 | - msg="resource", max_wait=120): |
712 | - """Delete one openstack resource, such as one instance, keypair, |
713 | - image, volume, stack, etc., and confirm deletion within max wait time. |
714 | - |
715 | - :param resource: pointer to os resource type, ex:glance_client.images |
716 | - :param resource_id: unique name or id for the openstack resource |
717 | - :param msg: text to identify purpose in logging |
718 | - :param max_wait: maximum wait time in seconds |
719 | - :returns: True if successful, otherwise False |
720 | - """ |
721 | - self.log.debug('Deleting OpenStack resource ' |
722 | - '{} ({})'.format(resource_id, msg)) |
723 | - num_before = len(list(resource.list())) |
724 | - resource.delete(resource_id) |
725 | - |
726 | - tries = 0 |
727 | - num_after = len(list(resource.list())) |
728 | - while num_after != (num_before - 1) and tries < (max_wait / 4): |
729 | - self.log.debug('{} delete check: ' |
730 | - '{} [{}:{}] {}'.format(msg, tries, |
731 | - num_before, |
732 | - num_after, |
733 | - resource_id)) |
734 | - time.sleep(4) |
735 | - num_after = len(list(resource.list())) |
736 | - tries += 1 |
737 | - |
738 | - self.log.debug('{}: expected, actual count = {}, ' |
739 | - '{}'.format(msg, num_before - 1, num_after)) |
740 | - |
741 | - if num_after == (num_before - 1): |
742 | - return True |
743 | - else: |
744 | - self.log.error('{} delete timed out'.format(msg)) |
745 | - return False |
746 | - |
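Both `delete_resource` above and `resource_reaches_status` below follow the same delete-and-poll pattern: act, then re-check every 4 seconds until the condition holds or `max_wait` elapses. A minimal sketch of that loop, with an injectable `sleep` so it can be exercised without real waits (the `FakeResource` stand-in and parameter names are illustrative):

```python
import time


class FakeResource:
    """Hypothetical stand-in for an OpenStack resource manager that
    supports list() and delete(), for demonstration only."""

    def __init__(self, ids):
        self._ids = set(ids)

    def list(self):
        return list(self._ids)

    def delete(self, resource_id):
        self._ids.discard(resource_id)


def delete_and_confirm(resource, resource_id, max_wait=120, interval=4,
                       sleep=time.sleep):
    """Delete a resource and poll until the listing shrinks by one,
    or max_wait seconds have elapsed. Returns True on confirmed delete."""
    num_before = len(list(resource.list()))
    resource.delete(resource_id)
    tries = 0
    while tries < max_wait // interval:
        if len(list(resource.list())) == num_before - 1:
            return True
        sleep(interval)
        tries += 1
    return len(list(resource.list())) == num_before - 1
```

Counting list entries (rather than fetching the resource by id) matches the original helper's approach, and works for APIs where a deleted resource 404s on `get`.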
747 | - def resource_reaches_status(self, resource, resource_id, |
748 | - expected_stat='available', |
749 | - msg='resource', max_wait=120): |
750 | - """Wait for an openstack resources status to reach an |
751 | - expected status within a specified time. Useful to confirm that |
752 | - nova instances, cinder vols, snapshots, glance images, heat stacks |
753 | - and other resources eventually reach the expected status. |
754 | - |
755 | - :param resource: pointer to os resource type, ex: heat_client.stacks |
756 | - :param resource_id: unique id for the openstack resource |
757 | - :param expected_stat: status to expect resource to reach |
758 | - :param msg: text to identify purpose in logging |
759 | - :param max_wait: maximum wait time in seconds |
760 | - :returns: True if successful, False if status is not reached |
761 | - """ |
762 | - |
763 | - tries = 0 |
764 | - resource_stat = resource.get(resource_id).status |
765 | - while resource_stat != expected_stat and tries < (max_wait / 4): |
766 | - self.log.debug('{} status check: ' |
767 | - '{} [{}:{}] {}'.format(msg, tries, |
768 | - resource_stat, |
769 | - expected_stat, |
770 | - resource_id)) |
771 | - time.sleep(4) |
772 | - resource_stat = resource.get(resource_id).status |
773 | - tries += 1 |
774 | - |
775 | - self.log.debug('{}: expected, actual status = {}, ' |
776 | - '{}'.format(msg, resource_stat, expected_stat)) |
777 | - |
778 | - if resource_stat == expected_stat: |
779 | - return True |
780 | - else: |
781 | - self.log.debug('{} never reached expected status: ' |
782 | - '{}'.format(resource_id, expected_stat)) |
783 | - return False |
784 | - |
785 | - def get_ceph_osd_id_cmd(self, index): |
786 | - """Produce a shell command that will return a ceph-osd id.""" |
787 | - return ("`initctl list | grep 'ceph-osd ' | " |
788 | - "awk 'NR=={} {{ print $2 }}' | " |
789 | - "grep -o '[0-9]*'`".format(index + 1)) |
790 | - |
791 | - def get_ceph_pools(self, sentry_unit): |
792 | - """Return a dict of ceph pools from a single ceph unit, with |
793 | - pool name as keys, pool id as vals.""" |
794 | - pools = {} |
795 | - cmd = 'sudo ceph osd lspools' |
796 | - output, code = sentry_unit.run(cmd) |
797 | - if code != 0: |
798 | - msg = ('{} `{}` returned {} ' |
799 | - '{}'.format(sentry_unit.info['unit_name'], |
800 | - cmd, code, output)) |
801 | - amulet.raise_status(amulet.FAIL, msg=msg) |
802 | - |
803 | - # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance, |
804 | - for pool in str(output).split(','): |
805 | - pool_id_name = pool.split(' ') |
806 | - if len(pool_id_name) == 2: |
807 | - pool_id = pool_id_name[0] |
808 | - pool_name = pool_id_name[1] |
809 | - pools[pool_name] = int(pool_id) |
810 | - |
811 | - self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'], |
812 | - pools)) |
813 | - return pools |
814 | - |
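The parsing loop inside `get_ceph_pools` above, extracted as a standalone function so the `id name,id name,...` format handling is easy to see. The sample input matches the example in the diff's comment; the trailing comma yields an empty final field, which the length check skips:

```python
def parse_lspools(output):
    """Parse `ceph osd lspools` output ('0 data,1 metadata,2 rbd,...')
    into a dict of pool name -> pool id."""
    pools = {}
    for pool in str(output).split(','):
        pool_id_name = pool.split(' ')
        if len(pool_id_name) == 2:
            pool_id, pool_name = pool_id_name
            pools[pool_name] = int(pool_id)
    return pools
```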
815 | - def get_ceph_df(self, sentry_unit): |
816 | - """Return dict of ceph df json output, including ceph pool state. |
817 | - |
818 | - :param sentry_unit: Pointer to amulet sentry instance (juju unit) |
819 | - :returns: Dict of ceph df output |
820 | - """ |
821 | - cmd = 'sudo ceph df --format=json' |
822 | - output, code = sentry_unit.run(cmd) |
823 | - if code != 0: |
824 | - msg = ('{} `{}` returned {} ' |
825 | - '{}'.format(sentry_unit.info['unit_name'], |
826 | - cmd, code, output)) |
827 | - amulet.raise_status(amulet.FAIL, msg=msg) |
828 | - return json.loads(output) |
829 | - |
830 | - def get_ceph_pool_sample(self, sentry_unit, pool_id=0): |
831 | - """Take a sample of attributes of a ceph pool, returning ceph |
832 | - pool name, object count and disk space used for the specified |
833 | - pool ID number. |
834 | - |
835 | - :param sentry_unit: Pointer to amulet sentry instance (juju unit) |
836 | - :param pool_id: Ceph pool ID |
837 | - :returns: List of pool name, object count, kb disk space used |
838 | - """ |
839 | - df = self.get_ceph_df(sentry_unit) |
840 | - pool_name = df['pools'][pool_id]['name'] |
841 | - obj_count = df['pools'][pool_id]['stats']['objects'] |
842 | - kb_used = df['pools'][pool_id]['stats']['kb_used'] |
843 | - self.log.debug('Ceph {} pool (ID {}): {} objects, ' |
844 | - '{} kb used'.format(pool_name, pool_id, |
845 | - obj_count, kb_used)) |
846 | - return pool_name, obj_count, kb_used |
847 | - |
848 | - def validate_ceph_pool_samples(self, samples, sample_type="resource pool"): |
849 | - """Validate ceph pool samples taken over time, such as pool |
850 | - object counts or pool kb used, before adding, after adding, and |
851 | - after deleting items which affect those pool attributes. The |
852 | - 2nd element is expected to be greater than the 1st; 3rd is expected |
853 | - to be less than the 2nd. |
854 | - |
855 | - :param samples: List containing 3 data samples |
856 | - :param sample_type: String for logging and usage context |
857 | - :returns: None if successful, Failure message otherwise |
858 | - """ |
859 | - original, created, deleted = range(3) |
860 | - if samples[created] <= samples[original] or \ |
861 | - samples[deleted] >= samples[created]: |
862 | - return ('Ceph {} samples ({}) ' |
863 | - 'unexpected.'.format(sample_type, samples)) |
864 | - else: |
865 | - self.log.debug('Ceph {} samples (OK): ' |
866 | - '{}'.format(sample_type, samples)) |
867 | - return None |
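The three-sample check in `validate_ceph_pool_samples` above encodes a rise-then-fall expectation: the sample taken after creating objects must exceed the baseline, and the sample taken after deleting them must drop back below that peak. The same logic, stripped of the logging, as a standalone function:

```python
def validate_samples(samples, sample_type="resource pool"):
    """Check that samples rise then fall: created > original and
    deleted < created. Returns None on success, a message on failure."""
    original, created, deleted = samples
    if created <= original or deleted >= created:
        return ('Ceph {} samples ({}) '
                'unexpected.'.format(sample_type, samples))
    return None
```

Note the deliberately loose third condition: the post-delete sample only has to fall below the peak, not return exactly to the baseline, which tolerates residual pool metadata after object deletion.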
868 | +<<<<<<< TREE |
869 | + |
870 | + # /!\ DEPRECATION WARNING |
871 | + self.log.warn('/!\\ DEPRECATION WARNING: use ' |
872 | + 'delete_resource instead of delete_instance.') |
873 | + self.log.debug('Deleting instance ({})...'.format(instance)) |
874 | + return self.delete_resource(nova.servers, instance, |
875 | + msg='nova instance') |
876 | + |
877 | + def create_or_get_keypair(self, nova, keypair_name="testkey"): |
878 | + """Create a new keypair, or return pointer if it already exists.""" |
879 | + try: |
880 | + _keypair = nova.keypairs.get(keypair_name) |
881 | + self.log.debug('Keypair ({}) already exists, ' |
882 | + 'using it.'.format(keypair_name)) |
883 | + return _keypair |
884 | + except: |
885 | + self.log.debug('Keypair ({}) does not exist, ' |
886 | + 'creating it.'.format(keypair_name)) |
887 | + |
888 | + _keypair = nova.keypairs.create(name=keypair_name) |
889 | + return _keypair |
890 | + |
891 | + def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1, |
892 | + img_id=None, src_vol_id=None, snap_id=None): |
893 | + """Create cinder volume, optionally from a glance image, OR |
894 | + optionally as a clone of an existing volume, OR optionally |
895 | + from a snapshot. Wait for the new volume status to reach |
896 | + the expected status, validate and return a resource pointer. |
897 | + |
898 | + :param vol_name: cinder volume display name |
899 | + :param vol_size: size in gigabytes |
900 | + :param img_id: optional glance image id |
901 | + :param src_vol_id: optional source volume id to clone |
902 | + :param snap_id: optional snapshot id to use |
903 | + :returns: cinder volume pointer |
904 | + """ |
905 | + # Handle parameter input and avoid impossible combinations |
906 | + if img_id and not src_vol_id and not snap_id: |
907 | + # Create volume from image |
908 | + self.log.debug('Creating cinder volume from glance image...') |
909 | + bootable = 'true' |
910 | + elif src_vol_id and not img_id and not snap_id: |
911 | + # Clone an existing volume |
912 | + self.log.debug('Cloning cinder volume...') |
913 | + bootable = cinder.volumes.get(src_vol_id).bootable |
914 | + elif snap_id and not src_vol_id and not img_id: |
915 | + # Create volume from snapshot |
916 | + self.log.debug('Creating cinder volume from snapshot...') |
917 | + snap = cinder.volume_snapshots.find(id=snap_id) |
918 | + vol_size = snap.size |
919 | + snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id |
920 | + bootable = cinder.volumes.get(snap_vol_id).bootable |
921 | + elif not img_id and not src_vol_id and not snap_id: |
922 | + # Create volume |
923 | + self.log.debug('Creating cinder volume...') |
924 | + bootable = 'false' |
925 | + else: |
926 | + # Impossible combination of parameters |
927 | + msg = ('Invalid method use - name:{} size:{} img_id:{} ' |
928 | + 'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size, |
929 | + img_id, src_vol_id, |
930 | + snap_id)) |
931 | + amulet.raise_status(amulet.FAIL, msg=msg) |
932 | + |
933 | + # Create new volume |
934 | + try: |
935 | + vol_new = cinder.volumes.create(display_name=vol_name, |
936 | + imageRef=img_id, |
937 | + size=vol_size, |
938 | + source_volid=src_vol_id, |
939 | + snapshot_id=snap_id) |
940 | + vol_id = vol_new.id |
941 | + except Exception as e: |
942 | + msg = 'Failed to create volume: {}'.format(e) |
943 | + amulet.raise_status(amulet.FAIL, msg=msg) |
944 | + |
945 | + # Wait for volume to reach available status |
946 | + ret = self.resource_reaches_status(cinder.volumes, vol_id, |
947 | + expected_stat="available", |
948 | + msg="Volume status wait") |
949 | + if not ret: |
950 | + msg = 'Cinder volume failed to reach expected state.' |
951 | + amulet.raise_status(amulet.FAIL, msg=msg) |
952 | + |
953 | + # Re-validate new volume |
954 | + self.log.debug('Validating volume attributes...') |
955 | + val_vol_name = cinder.volumes.get(vol_id).display_name |
956 | + val_vol_boot = cinder.volumes.get(vol_id).bootable |
957 | + val_vol_stat = cinder.volumes.get(vol_id).status |
958 | + val_vol_size = cinder.volumes.get(vol_id).size |
959 | + msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:' |
960 | + '{} size:{}'.format(val_vol_name, vol_id, |
961 | + val_vol_stat, val_vol_boot, |
962 | + val_vol_size)) |
963 | + |
964 | + if val_vol_boot == bootable and val_vol_stat == 'available' \ |
965 | + and val_vol_name == vol_name and val_vol_size == vol_size: |
966 | + self.log.debug(msg_attr) |
967 | + else: |
968 | + msg = ('Volume validation failed, {}'.format(msg_attr)) |
969 | + amulet.raise_status(amulet.FAIL, msg=msg) |
970 | + |
971 | + return vol_new |
972 | + |
973 | + def delete_resource(self, resource, resource_id, |
974 | + msg="resource", max_wait=120): |
975 | + """Delete one openstack resource, such as one instance, keypair, |
976 | + image, volume, stack, etc., and confirm deletion within max wait time. |
977 | + |
978 | + :param resource: pointer to os resource type, ex:glance_client.images |
979 | + :param resource_id: unique name or id for the openstack resource |
980 | + :param msg: text to identify purpose in logging |
981 | + :param max_wait: maximum wait time in seconds |
982 | + :returns: True if successful, otherwise False |
983 | + """ |
984 | + self.log.debug('Deleting OpenStack resource ' |
985 | + '{} ({})'.format(resource_id, msg)) |
986 | + num_before = len(list(resource.list())) |
987 | + resource.delete(resource_id) |
988 | + |
989 | + tries = 0 |
990 | + num_after = len(list(resource.list())) |
991 | + while num_after != (num_before - 1) and tries < (max_wait / 4): |
992 | + self.log.debug('{} delete check: ' |
993 | + '{} [{}:{}] {}'.format(msg, tries, |
994 | + num_before, |
995 | + num_after, |
996 | + resource_id)) |
997 | + time.sleep(4) |
998 | + num_after = len(list(resource.list())) |
999 | + tries += 1 |
1000 | + |
1001 | + self.log.debug('{}: expected, actual count = {}, ' |
1002 | + '{}'.format(msg, num_before - 1, num_after)) |
1003 | + |
1004 | + if num_after == (num_before - 1): |
1005 | + return True |
1006 | + else: |
1007 | + self.log.error('{} delete timed out'.format(msg)) |
1008 | + return False |
1009 | + |
1010 | + def resource_reaches_status(self, resource, resource_id, |
1011 | + expected_stat='available', |
1012 | + msg='resource', max_wait=120): |
1013 | + """Wait for an openstack resources status to reach an |
1014 | + expected status within a specified time. Useful to confirm that |
1015 | + nova instances, cinder vols, snapshots, glance images, heat stacks |
1016 | + and other resources eventually reach the expected status. |
1017 | + |
1018 | + :param resource: pointer to os resource type, ex: heat_client.stacks |
1019 | + :param resource_id: unique id for the openstack resource |
1020 | + :param expected_stat: status to expect resource to reach |
1021 | + :param msg: text to identify purpose in logging |
1022 | + :param max_wait: maximum wait time in seconds |
1023 | + :returns: True if successful, False if status is not reached |
1024 | + """ |
1025 | + |
1026 | + tries = 0 |
1027 | + resource_stat = resource.get(resource_id).status |
1028 | + while resource_stat != expected_stat and tries < (max_wait / 4): |
1029 | + self.log.debug('{} status check: ' |
1030 | + '{} [{}:{}] {}'.format(msg, tries, |
1031 | + resource_stat, |
1032 | + expected_stat, |
1033 | + resource_id)) |
1034 | + time.sleep(4) |
1035 | + resource_stat = resource.get(resource_id).status |
1036 | + tries += 1 |
1037 | + |
1038 | + self.log.debug('{}: expected, actual status = {}, ' |
1039 | + '{}'.format(msg, resource_stat, expected_stat)) |
1040 | + |
1041 | + if resource_stat == expected_stat: |
1042 | + return True |
1043 | + else: |
1044 | + self.log.debug('{} never reached expected status: ' |
1045 | + '{}'.format(resource_id, expected_stat)) |
1046 | + return False |
1047 | + |
1048 | + def get_ceph_osd_id_cmd(self, index): |
1049 | + """Produce a shell command that will return a ceph-osd id.""" |
1050 | + return ("`initctl list | grep 'ceph-osd ' | " |
1051 | + "awk 'NR=={} {{ print $2 }}' | " |
1052 | + "grep -o '[0-9]*'`".format(index + 1)) |
1053 | + |
1054 | + def get_ceph_pools(self, sentry_unit): |
1055 | + """Return a dict of ceph pools from a single ceph unit, with |
1056 | + pool name as keys, pool id as vals.""" |
1057 | + pools = {} |
1058 | + cmd = 'sudo ceph osd lspools' |
1059 | + output, code = sentry_unit.run(cmd) |
1060 | + if code != 0: |
1061 | + msg = ('{} `{}` returned {} ' |
1062 | + '{}'.format(sentry_unit.info['unit_name'], |
1063 | + cmd, code, output)) |
1064 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1065 | + |
1066 | + # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance, |
1067 | + for pool in str(output).split(','): |
1068 | + pool_id_name = pool.split(' ') |
1069 | + if len(pool_id_name) == 2: |
1070 | + pool_id = pool_id_name[0] |
1071 | + pool_name = pool_id_name[1] |
1072 | + pools[pool_name] = int(pool_id) |
1073 | + |
1074 | + self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'], |
1075 | + pools)) |
1076 | + return pools |
1077 | + |
1078 | + def get_ceph_df(self, sentry_unit): |
1079 | + """Return dict of ceph df json output, including ceph pool state. |
1080 | + |
1081 | + :param sentry_unit: Pointer to amulet sentry instance (juju unit) |
1082 | + :returns: Dict of ceph df output |
1083 | + """ |
1084 | + cmd = 'sudo ceph df --format=json' |
1085 | + output, code = sentry_unit.run(cmd) |
1086 | + if code != 0: |
1087 | + msg = ('{} `{}` returned {} ' |
1088 | + '{}'.format(sentry_unit.info['unit_name'], |
1089 | + cmd, code, output)) |
1090 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1091 | + return json.loads(output) |
1092 | + |
1093 | + def get_ceph_pool_sample(self, sentry_unit, pool_id=0): |
1094 | + """Take a sample of attributes of a ceph pool, returning ceph |
1095 | + pool name, object count and disk space used for the specified |
1096 | + pool ID number. |
1097 | + |
1098 | + :param sentry_unit: Pointer to amulet sentry instance (juju unit) |
1099 | + :param pool_id: Ceph pool ID |
1100 | + :returns: List of pool name, object count, kb disk space used |
1101 | + """ |
1102 | + df = self.get_ceph_df(sentry_unit) |
1103 | + pool_name = df['pools'][pool_id]['name'] |
1104 | + obj_count = df['pools'][pool_id]['stats']['objects'] |
1105 | + kb_used = df['pools'][pool_id]['stats']['kb_used'] |
1106 | + self.log.debug('Ceph {} pool (ID {}): {} objects, ' |
1107 | + '{} kb used'.format(pool_name, pool_id, |
1108 | + obj_count, kb_used)) |
1109 | + return pool_name, obj_count, kb_used |
1110 | + |
1111 | + def validate_ceph_pool_samples(self, samples, sample_type="resource pool"): |
1112 | + """Validate ceph pool samples taken over time, such as pool |
1113 | + object counts or pool kb used, before adding, after adding, and |
1114 | + after deleting items which affect those pool attributes. The |
1115 | + 2nd element is expected to be greater than the 1st; 3rd is expected |
1116 | + to be less than the 2nd. |
1117 | + |
1118 | + :param samples: List containing 3 data samples |
1119 | + :param sample_type: String for logging and usage context |
1120 | + :returns: None if successful, Failure message otherwise |
1121 | + """ |
1122 | + original, created, deleted = range(3) |
1123 | + if samples[created] <= samples[original] or \ |
1124 | + samples[deleted] >= samples[created]: |
1125 | + return ('Ceph {} samples ({}) ' |
1126 | + 'unexpected.'.format(sample_type, samples)) |
1127 | + else: |
1128 | + self.log.debug('Ceph {} samples (OK): ' |
1129 | + '{}'.format(sample_type, samples)) |
1130 | + return None |
1131 | +======= |
1132 | + |
1133 | + # /!\ DEPRECATION WARNING |
1134 | + self.log.warn('/!\\ DEPRECATION WARNING: use ' |
1135 | + 'delete_resource instead of delete_instance.') |
1136 | + self.log.debug('Deleting instance ({})...'.format(instance)) |
1137 | + return self.delete_resource(nova.servers, instance, |
1138 | + msg='nova instance') |
1139 | + |
1140 | + def create_or_get_keypair(self, nova, keypair_name="testkey"): |
1141 | + """Create a new keypair, or return pointer if it already exists.""" |
1142 | + try: |
1143 | + _keypair = nova.keypairs.get(keypair_name) |
1144 | + self.log.debug('Keypair ({}) already exists, ' |
1145 | + 'using it.'.format(keypair_name)) |
1146 | + return _keypair |
1147 | + except: |
1148 | + self.log.debug('Keypair ({}) does not exist, ' |
1149 | + 'creating it.'.format(keypair_name)) |
1150 | + |
1151 | + _keypair = nova.keypairs.create(name=keypair_name) |
1152 | + return _keypair |
1153 | + |
1154 | + def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1, |
1155 | + img_id=None, src_vol_id=None, snap_id=None): |
1156 | + """Create cinder volume, optionally from a glance image, OR |
1157 | + optionally as a clone of an existing volume, OR optionally |
1158 | + from a snapshot. Wait for the new volume status to reach |
1159 | + the expected status, validate and return a resource pointer. |
1160 | + |
1161 | + :param vol_name: cinder volume display name |
1162 | + :param vol_size: size in gigabytes |
1163 | + :param img_id: optional glance image id |
1164 | + :param src_vol_id: optional source volume id to clone |
1165 | + :param snap_id: optional snapshot id to use |
1166 | + :returns: cinder volume pointer |
1167 | + """ |
1168 | + # Handle parameter input and avoid impossible combinations |
1169 | + if img_id and not src_vol_id and not snap_id: |
1170 | + # Create volume from image |
1171 | + self.log.debug('Creating cinder volume from glance image...') |
1172 | + bootable = 'true' |
1173 | + elif src_vol_id and not img_id and not snap_id: |
1174 | + # Clone an existing volume |
1175 | + self.log.debug('Cloning cinder volume...') |
1176 | + bootable = cinder.volumes.get(src_vol_id).bootable |
1177 | + elif snap_id and not src_vol_id and not img_id: |
1178 | + # Create volume from snapshot |
1179 | + self.log.debug('Creating cinder volume from snapshot...') |
1180 | + snap = cinder.volume_snapshots.find(id=snap_id) |
1181 | + vol_size = snap.size |
1182 | + snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id |
1183 | + bootable = cinder.volumes.get(snap_vol_id).bootable |
1184 | + elif not img_id and not src_vol_id and not snap_id: |
1185 | + # Create volume |
1186 | + self.log.debug('Creating cinder volume...') |
1187 | + bootable = 'false' |
1188 | + else: |
1189 | + # Impossible combination of parameters |
1190 | + msg = ('Invalid method use - name:{} size:{} img_id:{} ' |
1191 | + 'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size, |
1192 | + img_id, src_vol_id, |
1193 | + snap_id)) |
1194 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1195 | + |
1196 | + # Create new volume |
1197 | + try: |
1198 | + vol_new = cinder.volumes.create(display_name=vol_name, |
1199 | + imageRef=img_id, |
1200 | + size=vol_size, |
1201 | + source_volid=src_vol_id, |
1202 | + snapshot_id=snap_id) |
1203 | + vol_id = vol_new.id |
1204 | + except Exception as e: |
1205 | + msg = 'Failed to create volume: {}'.format(e) |
1206 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1207 | + |
1208 | + # Wait for volume to reach available status |
1209 | + ret = self.resource_reaches_status(cinder.volumes, vol_id, |
1210 | + expected_stat="available", |
1211 | + msg="Volume status wait") |
1212 | + if not ret: |
1213 | + msg = 'Cinder volume failed to reach expected state.' |
1214 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1215 | + |
1216 | + # Re-validate new volume |
1217 | + self.log.debug('Validating volume attributes...') |
1218 | + val_vol_name = cinder.volumes.get(vol_id).display_name |
1219 | + val_vol_boot = cinder.volumes.get(vol_id).bootable |
1220 | + val_vol_stat = cinder.volumes.get(vol_id).status |
1221 | + val_vol_size = cinder.volumes.get(vol_id).size |
1222 | + msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:' |
1223 | + '{} size:{}'.format(val_vol_name, vol_id, |
1224 | + val_vol_stat, val_vol_boot, |
1225 | + val_vol_size)) |
1226 | + |
1227 | + if val_vol_boot == bootable and val_vol_stat == 'available' \ |
1228 | + and val_vol_name == vol_name and val_vol_size == vol_size: |
1229 | + self.log.debug(msg_attr) |
1230 | + else: |
1231 | + msg = ('Volume validation failed, {}'.format(msg_attr)) |
1232 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1233 | + |
1234 | + return vol_new |
1235 | + |
1236 | + def delete_resource(self, resource, resource_id, |
1237 | + msg="resource", max_wait=120): |
1238 | + """Delete one openstack resource, such as one instance, keypair, |
1239 | + image, volume, stack, etc., and confirm deletion within max wait time. |
1240 | + |
1241 | + :param resource: pointer to os resource type, ex:glance_client.images |
1242 | + :param resource_id: unique name or id for the openstack resource |
1243 | + :param msg: text to identify purpose in logging |
1244 | + :param max_wait: maximum wait time in seconds |
1245 | + :returns: True if successful, otherwise False |
1246 | + """ |
1247 | + self.log.debug('Deleting OpenStack resource ' |
1248 | + '{} ({})'.format(resource_id, msg)) |
1249 | + num_before = len(list(resource.list())) |
1250 | + resource.delete(resource_id) |
1251 | + |
1252 | + tries = 0 |
1253 | + num_after = len(list(resource.list())) |
1254 | + while num_after != (num_before - 1) and tries < (max_wait / 4): |
1255 | + self.log.debug('{} delete check: ' |
1256 | + '{} [{}:{}] {}'.format(msg, tries, |
1257 | + num_before, |
1258 | + num_after, |
1259 | + resource_id)) |
1260 | + time.sleep(4) |
1261 | + num_after = len(list(resource.list())) |
1262 | + tries += 1 |
1263 | + |
1264 | + self.log.debug('{}: expected, actual count = {}, ' |
1265 | + '{}'.format(msg, num_before - 1, num_after)) |
1266 | + |
1267 | + if num_after == (num_before - 1): |
1268 | + return True |
1269 | + else: |
1270 | + self.log.error('{} delete timed out'.format(msg)) |
1271 | + return False |
1272 | + |
1273 | + def resource_reaches_status(self, resource, resource_id, |
1274 | + expected_stat='available', |
1275 | + msg='resource', max_wait=120): |
1276 | + """Wait for an openstack resources status to reach an |
1277 | + expected status within a specified time. Useful to confirm that |
1278 | + nova instances, cinder vols, snapshots, glance images, heat stacks |
1279 | + and other resources eventually reach the expected status. |
1280 | + |
1281 | + :param resource: pointer to os resource type, ex: heat_client.stacks |
1282 | + :param resource_id: unique id for the openstack resource |
1283 | + :param expected_stat: status to expect resource to reach |
1284 | + :param msg: text to identify purpose in logging |
1285 | + :param max_wait: maximum wait time in seconds |
1286 | + :returns: True if successful, False if status is not reached |
1287 | + """ |
1288 | + |
1289 | + tries = 0 |
1290 | + resource_stat = resource.get(resource_id).status |
1291 | + while resource_stat != expected_stat and tries < (max_wait / 4): |
1292 | + self.log.debug('{} status check: ' |
1293 | + '{} [{}:{}] {}'.format(msg, tries, |
1294 | + resource_stat, |
1295 | + expected_stat, |
1296 | + resource_id)) |
1297 | + time.sleep(4) |
1298 | + resource_stat = resource.get(resource_id).status |
1299 | + tries += 1 |
1300 | + |
1301 | + self.log.debug('{}: expected, actual status = {}, ' |
1302 | + '{}'.format(msg, resource_stat, expected_stat)) |
1303 | + |
1304 | + if resource_stat == expected_stat: |
1305 | + return True |
1306 | + else: |
1307 | + self.log.debug('{} never reached expected status: ' |
1308 | + '{}'.format(resource_id, expected_stat)) |
1309 | + return False |
1310 | + |
1311 | + def get_ceph_osd_id_cmd(self, index): |
1312 | + """Produce a shell command that will return a ceph-osd id.""" |
1313 | + return ("`initctl list | grep 'ceph-osd ' | " |
1314 | + "awk 'NR=={} {{ print $2 }}' | " |
1315 | + "grep -o '[0-9]*'`".format(index + 1)) |
1316 | + |
1317 | + def get_ceph_pools(self, sentry_unit): |
1318 | + """Return a dict of ceph pools from a single ceph unit, with |
1319 | + pool name as keys, pool id as vals.""" |
1320 | + pools = {} |
1321 | + cmd = 'sudo ceph osd lspools' |
1322 | + output, code = sentry_unit.run(cmd) |
1323 | + if code != 0: |
1324 | + msg = ('{} `{}` returned {} ' |
1325 | + '{}'.format(sentry_unit.info['unit_name'], |
1326 | + cmd, code, output)) |
1327 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1328 | + |
1329 | + # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance, |
1330 | + for pool in str(output).split(','): |
1331 | + pool_id_name = pool.split(' ') |
1332 | + if len(pool_id_name) == 2: |
1333 | + pool_id = pool_id_name[0] |
1334 | + pool_name = pool_id_name[1] |
1335 | + pools[pool_name] = int(pool_id) |
1336 | + |
1337 | + self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'], |
1338 | + pools)) |
1339 | + return pools |
1340 | + |
1341 | + def get_ceph_df(self, sentry_unit): |
1342 | + """Return dict of ceph df json output, including ceph pool state. |
1343 | + |
1344 | + :param sentry_unit: Pointer to amulet sentry instance (juju unit) |
1345 | + :returns: Dict of ceph df output |
1346 | + """ |
1347 | + cmd = 'sudo ceph df --format=json' |
1348 | + output, code = sentry_unit.run(cmd) |
1349 | + if code != 0: |
1350 | + msg = ('{} `{}` returned {} ' |
1351 | + '{}'.format(sentry_unit.info['unit_name'], |
1352 | + cmd, code, output)) |
1353 | + amulet.raise_status(amulet.FAIL, msg=msg) |
1354 | + return json.loads(output) |
1355 | + |
1356 | + def get_ceph_pool_sample(self, sentry_unit, pool_id=0): |
1357 | + """Take a sample of attributes of a ceph pool, returning ceph |
1358 | + pool name, object count and disk space used for the specified |
1359 | + pool ID number. |
1360 | + |
1361 | + :param sentry_unit: Pointer to amulet sentry instance (juju unit) |
1362 | + :param pool_id: Ceph pool ID |
1363 | + :returns: List of pool name, object count, kb disk space used |
1364 | + """ |
1365 | + df = self.get_ceph_df(sentry_unit) |
1366 | + pool_name = df['pools'][pool_id]['name'] |
1367 | + obj_count = df['pools'][pool_id]['stats']['objects'] |
1368 | + kb_used = df['pools'][pool_id]['stats']['kb_used'] |
1369 | + self.log.debug('Ceph {} pool (ID {}): {} objects, ' |
1370 | + '{} kb used'.format(pool_name, pool_id, |
1371 | + obj_count, kb_used)) |
1372 | + return pool_name, obj_count, kb_used |
1373 | + |
1374 | + def validate_ceph_pool_samples(self, samples, sample_type="resource pool"): |
1375 | + """Validate ceph pool samples taken over time, such as pool |
1376 | + object counts or pool kb used, before adding, after adding, and |
1377 | + after deleting items which affect those pool attributes. The |
1378 | + 2nd element is expected to be greater than the 1st; 3rd is expected |
1379 | + to be less than the 2nd. |
1380 | + |
1381 | + :param samples: List containing 3 data samples |
1382 | + :param sample_type: String for logging and usage context |
1383 | + :returns: None if successful, Failure message otherwise |
1384 | + """ |
1385 | + original, created, deleted = range(3) |
1386 | + if samples[created] <= samples[original] or \ |
1387 | + samples[deleted] >= samples[created]: |
1388 | + return ('Ceph {} samples ({}) ' |
1389 | + 'unexpected.'.format(sample_type, samples)) |
1390 | + else: |
1391 | + self.log.debug('Ceph {} samples (OK): ' |
1392 | + '{}'.format(sample_type, samples)) |
1393 | + return None |
1394 | + |
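The rise-then-fall check in `validate_ceph_pool_samples` can be tried standalone; this is a hypothetical cut-down sketch of the comparison, not the charm-helpers API itself:

```python
def validate_pool_samples(samples):
    """Return an error string unless the samples rise then fall.

    samples: [original, after-create, after-delete] values of a pool
    attribute such as object count or kb used.
    """
    original, created, deleted = samples
    if created <= original or deleted >= created:
        return 'Ceph samples ({}) unexpected.'.format(samples)
    return None

# Object counts sampled before create, after create, after delete:
print(validate_pool_samples([10, 15, 12]))  # None: expected trend
print(validate_pool_samples([10, 10, 12]))  # error string
```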
1395 | +# rabbitmq/amqp specific helpers: |
1396 | + def add_rmq_test_user(self, sentry_units, |
1397 | + username="testuser1", password="changeme"): |
1398 | + """Add a test user via the first rmq juju unit, check connection as |
1399 | + the new user against all sentry units. |
1400 | + |
1401 | + :param sentry_units: list of sentry unit pointers |
1402 | + :param username: amqp user name, default to testuser1 |
1403 | + :param password: amqp user password |
1404 | + :returns: None if successful. Raise on error. |
1405 | + """ |
1406 | + self.log.debug('Adding rmq user ({})...'.format(username)) |
1407 | + |
1408 | + # Check that user does not already exist |
1409 | + cmd_user_list = 'rabbitmqctl list_users' |
1410 | + output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list) |
1411 | + if username in output: |
1412 | + self.log.warning('User ({}) already exists, returning ' |
1413 | + 'gracefully.'.format(username)) |
1414 | + return |
1415 | + |
1416 | + perms = '".*" ".*" ".*"' |
1417 | + cmds = ['rabbitmqctl add_user {} {}'.format(username, password), |
1418 | + 'rabbitmqctl set_permissions {} {}'.format(username, perms)] |
1419 | + |
1420 | + # Add user via first unit |
1421 | + for cmd in cmds: |
1422 | + output, _ = self.run_cmd_unit(sentry_units[0], cmd) |
1423 | + |
1424 | + # Check connection against the other sentry_units |
1425 | + self.log.debug('Checking user connect against units...') |
1426 | + for sentry_unit in sentry_units: |
1427 | + connection = self.connect_amqp_by_unit(sentry_unit, ssl=False, |
1428 | + username=username, |
1429 | + password=password) |
1430 | + connection.close() |
1431 | + |
1432 | + def delete_rmq_test_user(self, sentry_units, username="testuser1"): |
1433 | + """Delete a rabbitmq user via the first rmq juju unit. |
1434 | + |
1435 | + :param sentry_units: list of sentry unit pointers |
1436 | + :param username: amqp user name, default to testuser1 |
1437 | + |
1438 | + :returns: None if successful or no such user. |
1439 | + """ |
1440 | + self.log.debug('Deleting rmq user ({})...'.format(username)) |
1441 | + |
1442 | + # Check that the user exists |
1443 | + cmd_user_list = 'rabbitmqctl list_users' |
1444 | + output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list) |
1445 | + |
1446 | + if username not in output: |
1447 | + self.log.warning('User ({}) does not exist, returning ' |
1448 | + 'gracefully.'.format(username)) |
1449 | + return |
1450 | + |
1451 | + # Delete the user |
1452 | + cmd_user_del = 'rabbitmqctl delete_user {}'.format(username) |
1453 | + output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_del) |
1454 | + |
1455 | + def get_rmq_cluster_status(self, sentry_unit): |
1456 | + """Execute rabbitmq cluster status command on a unit and return |
1457 | + the full output. |
1458 | + |
1459 | + :param sentry_unit: sentry unit |
1460 | + :returns: String containing console output of cluster status command |
1461 | + """ |
1462 | + cmd = 'rabbitmqctl cluster_status' |
1463 | + output, _ = self.run_cmd_unit(sentry_unit, cmd) |
1464 | + self.log.debug('{} cluster_status:\n{}'.format( |
1465 | + sentry_unit.info['unit_name'], output)) |
1466 | + return str(output) |
1467 | + |
1468 | + def get_rmq_cluster_running_nodes(self, sentry_unit): |
1469 | + """Parse rabbitmqctl cluster_status output string, return list of |
1470 | + running rabbitmq cluster nodes. |
1471 | + |
1472 | + :param sentry_unit: sentry unit |
1473 | + :returns: List containing node names of running nodes |
1474 | + """ |
1475 | + # NOTE(beisner): rabbitmqctl cluster_status output is not |
1476 | + # json-parsable, do string chop foo, then json.loads that. |
1477 | + str_stat = self.get_rmq_cluster_status(sentry_unit) |
1478 | + if 'running_nodes' in str_stat: |
1479 | + pos_start = str_stat.find("{running_nodes,") + 15 |
1480 | + pos_end = str_stat.find("]},", pos_start) + 1 |
1481 | + str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"') |
1482 | + run_nodes = json.loads(str_run_nodes) |
1483 | + return run_nodes |
1484 | + else: |
1485 | + return [] |
1486 | + |
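The string-chop in `get_rmq_cluster_running_nodes` can be exercised against a canned transcript; the sample output below is illustrative of the Erlang-term shape only, not captured from a real broker:

```python
import json

def running_nodes_from_status(str_stat):
    """Extract running node names from `rabbitmqctl cluster_status` text.

    The output is Erlang terms, not JSON, so chop out the
    {running_nodes,[...]} tuple and re-quote it for json.loads.
    """
    if 'running_nodes' not in str_stat:
        return []
    pos_start = str_stat.find("{running_nodes,") + len("{running_nodes,")
    pos_end = str_stat.find("]},", pos_start) + 1
    return json.loads(str_stat[pos_start:pos_end].replace("'", '"'))

sample = ("[{nodes,[{disc,['rabbit@host1','rabbit@host2']}]},\n"
          " {running_nodes,['rabbit@host1','rabbit@host2']},\n"
          " {partitions,[]}]")
print(running_nodes_from_status(sample))
```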
1487 | + def validate_rmq_cluster_running_nodes(self, sentry_units): |
1488 | + """Check that all rmq unit hostnames are represented in the |
1489 | + cluster_status output of all units. |
1490 | + |
1491 | + Host names are looked up from the units themselves. |
1492 | + :param sentry_units: list of sentry unit pointers (all rmq units) |
1493 | + :returns: None if successful, otherwise return error message |
1494 | + """ |
1495 | + host_names = self.get_unit_hostnames(sentry_units) |
1496 | + errors = [] |
1497 | + |
1498 | + # Query every unit for cluster_status running nodes |
1499 | + for query_unit in sentry_units: |
1500 | + query_unit_name = query_unit.info['unit_name'] |
1501 | + running_nodes = self.get_rmq_cluster_running_nodes(query_unit) |
1502 | + |
1503 | + # Confirm that every unit is represented in the queried unit's |
1504 | + # cluster_status running nodes output. |
1505 | + for validate_unit in sentry_units: |
1506 | + val_host_name = host_names[validate_unit.info['unit_name']] |
1507 | + val_node_name = 'rabbit@{}'.format(val_host_name) |
1508 | + |
1509 | + if val_node_name not in running_nodes: |
1510 | + errors.append('Cluster member check failed on {}: {} not ' |
1511 | + 'in {}\n'.format(query_unit_name, |
1512 | + val_node_name, |
1513 | + running_nodes)) |
1514 | + if errors: |
1515 | + return ''.join(errors) |
1516 | + |
1517 | + def rmq_ssl_is_enabled_on_unit(self, sentry_unit, port=None): |
1518 | + """Check a single juju rmq unit for ssl and port in the config file.""" |
1519 | + host = sentry_unit.info['public-address'] |
1520 | + unit_name = sentry_unit.info['unit_name'] |
1521 | + |
1522 | + conf_file = '/etc/rabbitmq/rabbitmq.config' |
1523 | + conf_contents = str(self.file_contents_safe(sentry_unit, |
1524 | + conf_file, max_wait=16)) |
1525 | + # Checks |
1526 | + conf_ssl = 'ssl' in conf_contents |
1527 | + conf_port = str(port) in conf_contents |
1528 | + |
1529 | + # Port explicitly checked in config |
1530 | + if port and conf_port and conf_ssl: |
1531 | + self.log.debug('SSL is enabled @{}:{} ' |
1532 | + '({})'.format(host, port, unit_name)) |
1533 | + return True |
1534 | + elif port and not conf_port and conf_ssl: |
1535 | + self.log.debug('SSL is enabled @{} but not on port {} ' |
1536 | + '({})'.format(host, port, unit_name)) |
1537 | + return False |
1538 | + # Port not checked (useful when checking that ssl is disabled) |
1539 | + elif not port and conf_ssl: |
1540 | + self.log.debug('SSL is enabled @{}:{} ' |
1541 | + '({})'.format(host, port, unit_name)) |
1542 | + return True |
1543 | + elif not port and not conf_ssl: |
1544 | + self.log.debug('SSL not enabled @{}:{} ' |
1545 | + '({})'.format(host, port, unit_name)) |
1546 | + return False |
1547 | + else: |
1548 | + msg = ('Unknown condition when checking SSL status @{}:{} ' |
1549 | + '({})'.format(host, port, unit_name)) |
1550 | + amulet.raise_status(amulet.FAIL, msg) |
1551 | + |
1552 | + def validate_rmq_ssl_enabled_units(self, sentry_units, port=None): |
1553 | + """Check that ssl is enabled on rmq juju sentry units. |
1554 | + |
1555 | + :param sentry_units: list of all rmq sentry units |
1556 | + :param port: optional ssl port override to validate |
1557 | + :returns: None if successful, otherwise return error message |
1558 | + """ |
1559 | + for sentry_unit in sentry_units: |
1560 | + if not self.rmq_ssl_is_enabled_on_unit(sentry_unit, port=port): |
1561 | + return ('Unexpected condition: ssl is disabled on unit ' |
1562 | + '({})'.format(sentry_unit.info['unit_name'])) |
1563 | + return None |
1564 | + |
1565 | + def validate_rmq_ssl_disabled_units(self, sentry_units): |
1566 | + """Check that ssl is disabled on listed rmq juju sentry units. |
1567 | + |
1568 | + :param sentry_units: list of all rmq sentry units |
1569 | + :returns: None if successful, otherwise return error message |
1570 | + """ |
1571 | + for sentry_unit in sentry_units: |
1572 | + if self.rmq_ssl_is_enabled_on_unit(sentry_unit): |
1573 | + return ('Unexpected condition: ssl is enabled on unit ' |
1574 | + '({})'.format(sentry_unit.info['unit_name'])) |
1575 | + return None |
1576 | + |
1577 | + def configure_rmq_ssl_on(self, sentry_units, deployment, |
1578 | + port=None, max_wait=60): |
1579 | + """Turn ssl charm config option on, with optional non-default |
1580 | + ssl port specification. Confirm that it is enabled on every |
1581 | + unit. |
1582 | + |
1583 | + :param sentry_units: list of sentry units |
1584 | + :param deployment: amulet deployment object pointer |
1585 | + :param port: amqp port, use defaults if None |
1586 | + :param max_wait: maximum time to wait in seconds to confirm |
1587 | + :returns: None if successful. Raise on error. |
1588 | + """ |
1589 | + self.log.debug('Setting ssl charm config option: on') |
1590 | + |
1591 | + # Enable RMQ SSL |
1592 | + config = {'ssl': 'on'} |
1593 | + if port: |
1594 | + config['ssl_port'] = port |
1595 | + |
1596 | + deployment.configure('rabbitmq-server', config) |
1597 | + |
1598 | + # Confirm |
1599 | + tries = 0 |
1600 | + ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port) |
1601 | + while ret and tries < (max_wait / 4): |
1602 | + time.sleep(4) |
1603 | + self.log.debug('Attempt {}: {}'.format(tries, ret)) |
1604 | + ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port) |
1605 | + tries += 1 |
1606 | + |
1607 | + if ret: |
1608 | + amulet.raise_status(amulet.FAIL, ret) |
1609 | + |
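Both `configure_rmq_ssl_on` and `configure_rmq_ssl_off` share the same poll-until-clean loop (retry every 4s up to max_wait). A generic version of that pattern, written as a hypothetical standalone helper:

```python
import time

def wait_until_ok(check, max_wait=60, interval=4):
    """Poll check() until it returns None (success) or time runs out.

    Returns the last non-None error message, or None on success.
    """
    tries = 0
    err = check()
    while err and tries < max_wait / interval:
        time.sleep(interval)
        err = check()
        tries += 1
    return err

# A check that only succeeds on its third call:
calls = {'n': 0}
def flaky_check():
    calls['n'] += 1
    return None if calls['n'] >= 3 else 'not ready'

print(wait_until_ok(flaky_check, max_wait=1, interval=0.01))  # None
```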
1610 | + def configure_rmq_ssl_off(self, sentry_units, deployment, max_wait=60): |
1611 | + """Turn ssl charm config option off, confirm that it is disabled |
1612 | + on every unit. |
1613 | + |
1614 | + :param sentry_units: list of sentry units |
1615 | + :param deployment: amulet deployment object pointer |
1616 | + :param max_wait: maximum time to wait in seconds to confirm |
1617 | + :returns: None if successful. Raise on error. |
1618 | + """ |
1619 | + self.log.debug('Setting ssl charm config option: off') |
1620 | + |
1621 | + # Disable RMQ SSL |
1622 | + config = {'ssl': 'off'} |
1623 | + deployment.configure('rabbitmq-server', config) |
1624 | + |
1625 | + # Confirm |
1626 | + tries = 0 |
1627 | + ret = self.validate_rmq_ssl_disabled_units(sentry_units) |
1628 | + while ret and tries < (max_wait / 4): |
1629 | + time.sleep(4) |
1630 | + self.log.debug('Attempt {}: {}'.format(tries, ret)) |
1631 | + ret = self.validate_rmq_ssl_disabled_units(sentry_units) |
1632 | + tries += 1 |
1633 | + |
1634 | + if ret: |
1635 | + amulet.raise_status(amulet.FAIL, ret) |
1636 | + |
1637 | + def connect_amqp_by_unit(self, sentry_unit, ssl=False, |
1638 | + port=None, fatal=True, |
1639 | + username="testuser1", password="changeme"): |
1640 | + """Establish and return a pika amqp connection to the rabbitmq service |
1641 | + running on a rmq juju unit. |
1642 | + |
1643 | + :param sentry_unit: sentry unit pointer |
1644 | + :param ssl: boolean, default to False |
1645 | + :param port: amqp port, use defaults if None |
1646 | + :param fatal: boolean, default to True (raises on connect error) |
1647 | + :param username: amqp user name, default to testuser1 |
1648 | + :param password: amqp user password |
1649 | + :returns: pika amqp connection pointer or None if failed and non-fatal |
1650 | + """ |
1651 | + host = sentry_unit.info['public-address'] |
1652 | + unit_name = sentry_unit.info['unit_name'] |
1653 | + |
1654 | + # Default port logic if port is not specified |
1655 | + if ssl and not port: |
1656 | + port = 5671 |
1657 | + elif not ssl and not port: |
1658 | + port = 5672 |
1659 | + |
1660 | + self.log.debug('Connecting to amqp on {}:{} ({}) as ' |
1661 | + '{}...'.format(host, port, unit_name, username)) |
1662 | + |
1663 | + try: |
1664 | + credentials = pika.PlainCredentials(username, password) |
1665 | + parameters = pika.ConnectionParameters(host=host, port=port, |
1666 | + credentials=credentials, |
1667 | + ssl=ssl, |
1668 | + connection_attempts=3, |
1669 | + retry_delay=5, |
1670 | + socket_timeout=1) |
1671 | + connection = pika.BlockingConnection(parameters) |
1672 | + assert connection.server_properties['product'] == 'RabbitMQ' |
1673 | + self.log.debug('Connect OK') |
1674 | + return connection |
1675 | + except Exception as e: |
1676 | + msg = ('amqp connection failed to {}:{} as ' |
1677 | + '{} ({})'.format(host, port, username, str(e))) |
1678 | + if fatal: |
1679 | + amulet.raise_status(amulet.FAIL, msg) |
1680 | + else: |
1681 | + self.log.warn(msg) |
1682 | + return None |
1683 | + |
1684 | + def publish_amqp_message_by_unit(self, sentry_unit, message, |
1685 | + queue="test", ssl=False, |
1686 | + username="testuser1", |
1687 | + password="changeme", |
1688 | + port=None): |
1689 | + """Publish an amqp message to a rmq juju unit. |
1690 | + |
1691 | + :param sentry_unit: sentry unit pointer |
1692 | + :param message: amqp message string |
1693 | + :param queue: message queue, default to test |
1694 | + :param username: amqp user name, default to testuser1 |
1695 | + :param password: amqp user password |
1696 | + :param ssl: boolean, default to False |
1697 | + :param port: amqp port, use defaults if None |
1698 | + :returns: None. Raises exception if publish failed. |
1699 | + """ |
1700 | + self.log.debug('Publishing message to {} queue:\n{}'.format(queue, |
1701 | + message)) |
1702 | + connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl, |
1703 | + port=port, |
1704 | + username=username, |
1705 | + password=password) |
1706 | + |
1707 | + # NOTE(beisner): extra debug here re: pika hang potential: |
1708 | + # https://github.com/pika/pika/issues/297 |
1709 | + # https://groups.google.com/forum/#!topic/rabbitmq-users/Ja0iyfF0Szw |
1710 | + self.log.debug('Defining channel...') |
1711 | + channel = connection.channel() |
1712 | + self.log.debug('Declaring queue...') |
1713 | + channel.queue_declare(queue=queue, auto_delete=False, durable=True) |
1714 | + self.log.debug('Publishing message...') |
1715 | + channel.basic_publish(exchange='', routing_key=queue, body=message) |
1716 | + self.log.debug('Closing channel...') |
1717 | + channel.close() |
1718 | + self.log.debug('Closing connection...') |
1719 | + connection.close() |
1720 | + |
1721 | + def get_amqp_message_by_unit(self, sentry_unit, queue="test", |
1722 | + username="testuser1", |
1723 | + password="changeme", |
1724 | + ssl=False, port=None): |
1725 | + """Get an amqp message from a rmq juju unit. |
1726 | + |
1727 | + :param sentry_unit: sentry unit pointer |
1728 | + :param queue: message queue, default to test |
1729 | + :param username: amqp user name, default to testuser1 |
1730 | + :param password: amqp user password |
1731 | + :param ssl: boolean, default to False |
1732 | + :param port: amqp port, use defaults if None |
1733 | + :returns: amqp message body as string. Raise if get fails. |
1734 | + """ |
1735 | + connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl, |
1736 | + port=port, |
1737 | + username=username, |
1738 | + password=password) |
1739 | + channel = connection.channel() |
1740 | + method_frame, _, body = channel.basic_get(queue) |
1741 | + |
1742 | + if method_frame: |
1743 | + self.log.debug('Retrieved message from {} queue:\n{}'.format(queue, |
1744 | + body)) |
1745 | + channel.basic_ack(method_frame.delivery_tag) |
1746 | + channel.close() |
1747 | + connection.close() |
1748 | + return body |
1749 | + else: |
1750 | + msg = 'No message retrieved.' |
1751 | + amulet.raise_status(amulet.FAIL, msg) |
1752 | +>>>>>>> MERGE-SOURCE |
1753 | |
1754 | === modified file 'charmhelpers/contrib/openstack/context.py' |
1755 | --- hooks/charmhelpers/contrib/openstack/context.py 2015-08-10 16:32:05 +0000 |
1756 | +++ charmhelpers/contrib/openstack/context.py 2015-09-11 11:01:28 +0000 |
1757 | @@ -50,6 +50,8 @@ |
1758 | from charmhelpers.core.strutils import bool_from_string |
1759 | |
1760 | from charmhelpers.core.host import ( |
1761 | + get_bond_master, |
1762 | + is_phy_iface, |
1763 | list_nics, |
1764 | get_nic_hwaddr, |
1765 | mkdir, |
1766 | @@ -893,6 +895,18 @@ |
1767 | 'neutron_url': '%s://%s:%s' % (proto, host, '9696')} |
1768 | return ctxt |
1769 | |
1770 | + def pg_ctxt(self): |
1771 | + driver = neutron_plugin_attribute(self.plugin, 'driver', |
1772 | + self.network_manager) |
1773 | + config = neutron_plugin_attribute(self.plugin, 'config', |
1774 | + self.network_manager) |
1775 | + ovs_ctxt = {'core_plugin': driver, |
1776 | + 'neutron_plugin': 'plumgrid', |
1777 | + 'neutron_security_groups': self.neutron_security_groups, |
1778 | + 'local_ip': unit_private_ip(), |
1779 | + 'config': config} |
1780 | + return ovs_ctxt |
1781 | + |
1782 | def __call__(self): |
1783 | if self.network_manager not in ['quantum', 'neutron']: |
1784 | return {} |
1785 | @@ -912,6 +926,8 @@ |
1786 | ctxt.update(self.calico_ctxt()) |
1787 | elif self.plugin == 'vsp': |
1788 | ctxt.update(self.nuage_ctxt()) |
1789 | + elif self.plugin == 'plumgrid': |
1790 | + ctxt.update(self.pg_ctxt()) |
1791 | |
1792 | alchemy_flags = config('neutron-alchemy-flags') |
1793 | if alchemy_flags: |
1794 | @@ -923,7 +939,6 @@ |
1795 | |
1796 | |
1797 | class NeutronPortContext(OSContextGenerator): |
1798 | - NIC_PREFIXES = ['eth', 'bond'] |
1799 | |
1800 | def resolve_ports(self, ports): |
1801 | """Resolve NICs not yet bound to bridge(s) |
1802 | @@ -935,7 +950,18 @@ |
1803 | |
1804 | hwaddr_to_nic = {} |
1805 | hwaddr_to_ip = {} |
1806 | - for nic in list_nics(self.NIC_PREFIXES): |
1807 | + for nic in list_nics(): |
1808 | + # Ignore virtual interfaces (bond masters will be identified from |
1809 | + # their slaves) |
1810 | + if not is_phy_iface(nic): |
1811 | + continue |
1812 | + |
1813 | + _nic = get_bond_master(nic) |
1814 | + if _nic: |
1815 | + log("Replacing iface '%s' with bond master '%s'" % (nic, _nic), |
1816 | + level=DEBUG) |
1817 | + nic = _nic |
1818 | + |
1819 | hwaddr = get_nic_hwaddr(nic) |
1820 | hwaddr_to_nic[hwaddr] = nic |
1821 | addresses = get_ipv4_addr(nic, fatal=False) |
1822 | @@ -961,7 +987,8 @@ |
1823 | # trust it to be the real external network). |
1824 | resolved.append(entry) |
1825 | |
1826 | - return resolved |
1827 | + # Ensure no duplicates |
1828 | + return list(set(resolved)) |
1829 | |
1830 | |
1831 | class OSConfigFlagContext(OSContextGenerator): |
1832 | @@ -1280,15 +1307,19 @@ |
1833 | def __call__(self): |
1834 | ports = config('data-port') |
1835 | if ports: |
1836 | + # Map of {port/mac:bridge} |
1837 | portmap = parse_data_port_mappings(ports) |
1838 | - ports = portmap.values() |
1839 | + ports = portmap.keys() |
1840 | + # Resolve provided ports or mac addresses and filter out those |
1841 | + # already attached to a bridge. |
1842 | resolved = self.resolve_ports(ports) |
1843 | + # FIXME: is this necessary? |
1844 | normalized = {get_nic_hwaddr(port): port for port in resolved |
1845 | if port not in ports} |
1846 | normalized.update({port: port for port in resolved |
1847 | if port in ports}) |
1848 | if resolved: |
1849 | - return {bridge: normalized[port] for bridge, port in |
1850 | + return {bridge: normalized[port] for port, bridge in |
1851 | six.iteritems(portmap) if port in normalized.keys()} |
1852 | |
1853 | return None |
1854 | |
1855 | === modified file 'charmhelpers/contrib/openstack/neutron.py' |
1856 | --- hooks/charmhelpers/contrib/openstack/neutron.py 2015-08-10 16:32:05 +0000 |
1857 | +++ charmhelpers/contrib/openstack/neutron.py 2015-09-11 11:01:28 +0000 |
1858 | @@ -195,6 +195,20 @@ |
1859 | 'packages': [], |
1860 | 'server_packages': ['neutron-server', 'neutron-plugin-nuage'], |
1861 | 'server_services': ['neutron-server'] |
1862 | + }, |
1863 | + 'plumgrid': { |
1864 | + 'config': '/etc/neutron/plugins/plumgrid/plumgrid.ini', |
1865 | + 'driver': 'neutron.plugins.plumgrid.plumgrid_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2', |
1866 | + 'contexts': [ |
1867 | + context.SharedDBContext(user=config('database-user'), |
1868 | + database=config('database'), |
1869 | + ssl_dir=NEUTRON_CONF_DIR)], |
1870 | + 'services': [], |
1871 | + 'packages': [['plumgrid-lxc'], |
1872 | + ['iovisor-dkms']], |
1873 | + 'server_packages': ['neutron-server', |
1874 | + 'neutron-plugin-plumgrid'], |
1875 | + 'server_services': ['neutron-server'] |
1876 | } |
1877 | } |
1878 | if release >= 'icehouse': |
1879 | @@ -255,17 +269,38 @@ |
1880 | return 'neutron' |
1881 | |
1882 | |
1883 | -def parse_mappings(mappings): |
1884 | +def parse_mappings(mappings, key_rvalue=False): |
1885 | + """By default mappings are lvalue keyed. |
1886 | + |
1887 | + If key_rvalue is True, the mapping will be reversed to allow multiple |
1888 | + configs for the same lvalue. |
1889 | + """ |
1890 | parsed = {} |
1891 | if mappings: |
1892 | mappings = mappings.split() |
1893 | for m in mappings: |
1894 | p = m.partition(':') |
1895 | +<<<<<<< TREE |
1896 | key = p[0].strip() |
1897 | if p[1]: |
1898 | parsed[key] = p[2].strip() |
1899 | else: |
1900 | parsed[key] = '' |
1901 | +======= |
1902 | + |
1903 | + if key_rvalue: |
1904 | + key_index = 2 |
1905 | + val_index = 0 |
1906 | + # if there is no rvalue skip to next |
1907 | + if not p[1]: |
1908 | + continue |
1909 | + else: |
1910 | + key_index = 0 |
1911 | + val_index = 2 |
1912 | + |
1913 | + key = p[key_index].strip() |
1914 | + parsed[key] = p[val_index].strip() |
1915 | +>>>>>>> MERGE-SOURCE |
1916 | |
1917 | return parsed |
1918 | |
1919 | @@ -283,17 +318,28 @@ |
1920 | def parse_data_port_mappings(mappings, default_bridge='br-data'): |
1921 | """Parse data port mappings. |
1922 | |
1923 | - Mappings must be a space-delimited list of bridge:port mappings. |
1924 | + Mappings must be a space-delimited list of port:bridge mappings. |
1925 | |
1926 | - Returns dict of the form {bridge:port}. |
1927 | + Returns dict of the form {port:bridge} where port may be a mac address or |
1928 | + interface name. |
1929 | """ |
1930 | +<<<<<<< TREE |
1931 | _mappings = parse_mappings(mappings) |
1932 | if not _mappings or list(_mappings.values()) == ['']: |
1933 | +======= |
1934 | + |
1935 | + # NOTE(dosaboy): we use rvalue for key to allow multiple values to be |
1936 | + # proposed for <port> since it may be a mac address which will differ |
1937 | + # across units, thus allowing first-known-good to be chosen. |
1938 | + _mappings = parse_mappings(mappings, key_rvalue=True) |
1939 | + if not _mappings or list(_mappings.values()) == ['']: |
1940 | +>>>>>>> MERGE-SOURCE |
1941 | if not mappings: |
1942 | return {} |
1943 | |
1944 | # For backwards-compatibility we need to support port-only provided in |
1945 | # config. |
1946 | +<<<<<<< TREE |
1947 | _mappings = {default_bridge: mappings.split()[0]} |
1948 | |
1949 | bridges = _mappings.keys() |
1950 | @@ -302,6 +348,11 @@ |
1951 | raise Exception("It is not allowed to have more than one port " |
1952 | "configured on the same bridge") |
1953 | |
1954 | +======= |
1955 | + _mappings = {mappings.split()[0]: default_bridge} |
1956 | + |
1957 | + ports = _mappings.keys() |
1958 | +>>>>>>> MERGE-SOURCE |
1959 | if len(set(ports)) != len(ports): |
1960 | raise Exception("It is not allowed to have the same port configured " |
1961 | "on more than one bridge") |
1962 | |
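Stripped of the conflict markers, the merge-source behaviour of `parse_mappings` can be sketched as follows — a simplified reimplementation for illustration, not the charm-helpers code itself:

```python
def parse_mappings(mappings, key_rvalue=False):
    """Parse a space-delimited 'lvalue:rvalue ...' string into a dict.

    With key_rvalue=True the dict is keyed on the rvalue instead,
    letting several lvalues propose values for the same key; entries
    with no rvalue are skipped in that mode.
    """
    parsed = {}
    if mappings:
        for m in mappings.split():
            p = m.partition(':')
            if key_rvalue:
                if not p[1]:  # no rvalue: skip to next
                    continue
                parsed[p[2].strip()] = p[0].strip()
            else:
                parsed[p[0].strip()] = p[2].strip() if p[1] else ''
    return parsed

print(parse_mappings('physnet1:eth0'))                   # {'physnet1': 'eth0'}
print(parse_mappings('physnet1:eth0', key_rvalue=True))  # {'eth0': 'physnet1'}
```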
1963 | === modified file 'charmhelpers/contrib/openstack/utils.py' |
1964 | --- hooks/charmhelpers/contrib/openstack/utils.py 2015-08-10 16:32:05 +0000 |
1965 | +++ charmhelpers/contrib/openstack/utils.py 2015-09-11 11:01:28 +0000 |
1966 | @@ -1,5 +1,3 @@ |
1967 | -#!/usr/bin/python |
1968 | - |
1969 | # Copyright 2014-2015 Canonical Limited. |
1970 | # |
1971 | # This file is part of charm-helpers. |
1972 | @@ -24,6 +22,7 @@ |
1973 | import json |
1974 | import os |
1975 | import sys |
1976 | +import re |
1977 | |
1978 | import six |
1979 | import yaml |
1980 | @@ -69,7 +68,6 @@ |
1981 | DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed ' |
1982 | 'restricted main multiverse universe') |
1983 | |
1984 | - |
1985 | UBUNTU_OPENSTACK_RELEASE = OrderedDict([ |
1986 | ('oneiric', 'diablo'), |
1987 | ('precise', 'essex'), |
1988 | @@ -115,9 +113,45 @@ |
1989 | ('2.2.0', 'juno'), |
1990 | ('2.2.1', 'kilo'), |
1991 | ('2.2.2', 'kilo'), |
1992 | - ('2.3.0', 'liberty'), |
1993 | +<<<<<<< TREE |
1994 | + ('2.3.0', 'liberty'), |
1995 | +======= |
1996 | + ('2.3.0', 'liberty'), |
1997 | + ('2.4.0', 'liberty'), |
1998 | +>>>>>>> MERGE-SOURCE |
1999 | ]) |
2000 | |
2001 | +# >= Liberty version->codename mapping |
2002 | +PACKAGE_CODENAMES = { |
2003 | + 'nova-common': OrderedDict([ |
2004 | + ('12.0.0', 'liberty'), |
2005 | + ]), |
2006 | + 'neutron-common': OrderedDict([ |
2007 | + ('7.0.0', 'liberty'), |
2008 | + ]), |
2009 | + 'cinder-common': OrderedDict([ |
2010 | + ('7.0.0', 'liberty'), |
2011 | + ]), |
2012 | + 'keystone': OrderedDict([ |
2013 | + ('8.0.0', 'liberty'), |
2014 | + ]), |
2015 | + 'horizon-common': OrderedDict([ |
2016 | + ('8.0.0', 'liberty'), |
2017 | + ]), |
2018 | + 'ceilometer-common': OrderedDict([ |
2019 | + ('5.0.0', 'liberty'), |
2020 | + ]), |
2021 | + 'heat-common': OrderedDict([ |
2022 | + ('5.0.0', 'liberty'), |
2023 | + ]), |
2024 | + 'glance-common': OrderedDict([ |
2025 | + ('11.0.0', 'liberty'), |
2026 | + ]), |
2027 | + 'openstack-dashboard': OrderedDict([ |
2028 | + ('8.0.0', 'liberty'), |
2029 | + ]), |
2030 | +} |
2031 | + |
2032 | DEFAULT_LOOPBACK_SIZE = '5G' |
2033 | |
2034 | |
2035 | @@ -167,9 +201,9 @@ |
2036 | error_out(e) |
2037 | |
2038 | |
2039 | -def get_os_version_codename(codename): |
2040 | +def get_os_version_codename(codename, version_map=OPENSTACK_CODENAMES): |
2041 | '''Determine OpenStack version number from codename.''' |
2042 | - for k, v in six.iteritems(OPENSTACK_CODENAMES): |
2043 | + for k, v in six.iteritems(version_map): |
2044 | if v == codename: |
2045 | return k |
2046 | e = 'Could not derive OpenStack version for '\ |
2047 | @@ -201,20 +235,31 @@ |
2048 | error_out(e) |
2049 | |
2050 | vers = apt.upstream_version(pkg.current_ver.ver_str) |
2051 | + match = re.match('^(\d+)\.(\d+)\.(\d+)', vers) |
2052 | + if match: |
2053 | + vers = match.group(0) |
2054 | |
2055 | - try: |
2056 | - if 'swift' in pkg.name: |
2057 | - swift_vers = vers[:5] |
2058 | - if swift_vers not in SWIFT_CODENAMES: |
2059 | - # Deal with 1.10.0 upward |
2060 | - swift_vers = vers[:6] |
2061 | - return SWIFT_CODENAMES[swift_vers] |
2062 | - else: |
2063 | - vers = vers[:6] |
2064 | - return OPENSTACK_CODENAMES[vers] |
2065 | - except KeyError: |
2066 | - e = 'Could not determine OpenStack codename for version %s' % vers |
2067 | - error_out(e) |
2068 | + # >= Liberty independent project versions |
2069 | + if (package in PACKAGE_CODENAMES and |
2070 | + vers in PACKAGE_CODENAMES[package]): |
2071 | + return PACKAGE_CODENAMES[package][vers] |
2072 | + else: |
2073 | + # < Liberty co-ordinated project versions |
2074 | + try: |
2075 | + if 'swift' in pkg.name: |
2076 | + swift_vers = vers[:5] |
2077 | + if swift_vers not in SWIFT_CODENAMES: |
2078 | + # Deal with 1.10.0 upward |
2079 | + swift_vers = vers[:6] |
2080 | + return SWIFT_CODENAMES[swift_vers] |
2081 | + else: |
2082 | + vers = vers[:6] |
2083 | + return OPENSTACK_CODENAMES[vers] |
2084 | + except KeyError: |
2085 | + if not fatal: |
2086 | + return None |
2087 | + e = 'Could not determine OpenStack codename for version %s' % vers |
2088 | + error_out(e) |
2089 | |
2090 | |
2091 | def get_os_version_package(pkg, fatal=True): |
2092 | @@ -392,7 +437,11 @@ |
2093 | import apt_pkg as apt |
2094 | src = config('openstack-origin') |
2095 | cur_vers = get_os_version_package(package) |
2096 | - available_vers = get_os_version_install_source(src) |
2097 | + if "swift" in package: |
2098 | + codename = get_os_codename_install_source(src) |
2099 | + available_vers = get_os_version_codename(codename, SWIFT_CODENAMES) |
2100 | + else: |
2101 | + available_vers = get_os_version_install_source(src) |
2102 | apt.init() |
2103 | return apt.version_compare(available_vers, cur_vers) == 1 |
2104 | |
2105 | |
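The new per-package lookup first truncates the dpkg upstream version to x.y.z with a regex, then consults the >= Liberty map before falling back to the co-ordinated release tables. A minimal sketch of that lookup, using a hypothetical one-entry cut of the map:

```python
import re
from collections import OrderedDict

# Hypothetical one-entry cut of the >= Liberty per-package map:
PACKAGE_CODENAMES = {
    'ceilometer-common': OrderedDict([('5.0.0', 'liberty')]),
}

def codename_from_version(package, raw_version):
    """Truncate a dpkg upstream version to x.y.z, then look it up."""
    match = re.match(r'^(\d+)\.(\d+)\.(\d+)', raw_version)
    vers = match.group(0) if match else raw_version
    return PACKAGE_CODENAMES.get(package, {}).get(vers)

print(codename_from_version('ceilometer-common', '5.0.0~b1-0ubuntu1'))
```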
2106 | === added directory 'charmhelpers/contrib/peerstorage' |
2107 | === renamed directory 'hooks/charmhelpers/contrib/peerstorage' => 'charmhelpers/contrib/peerstorage.moved' |
2108 | === added file 'charmhelpers/contrib/peerstorage/__init__.py' |
2109 | --- charmhelpers/contrib/peerstorage/__init__.py 1970-01-01 00:00:00 +0000 |
2110 | +++ charmhelpers/contrib/peerstorage/__init__.py 2015-09-11 11:01:28 +0000 |
2111 | @@ -0,0 +1,269 @@ |
2112 | +# Copyright 2014-2015 Canonical Limited. |
2113 | +# |
2114 | +# This file is part of charm-helpers. |
2115 | +# |
2116 | +# charm-helpers is free software: you can redistribute it and/or modify |
2117 | +# it under the terms of the GNU Lesser General Public License version 3 as |
2118 | +# published by the Free Software Foundation. |
2119 | +# |
2120 | +# charm-helpers is distributed in the hope that it will be useful, |
2121 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
2122 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
2123 | +# GNU Lesser General Public License for more details. |
2124 | +# |
2125 | +# You should have received a copy of the GNU Lesser General Public License |
2126 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
2127 | + |
2128 | +import json |
2129 | +import six |
2130 | + |
2131 | +from charmhelpers.core.hookenv import relation_id as current_relation_id |
2132 | +from charmhelpers.core.hookenv import ( |
2133 | + is_relation_made, |
2134 | + relation_ids, |
2135 | + relation_get as _relation_get, |
2136 | + local_unit, |
2137 | + relation_set as _relation_set, |
2138 | + leader_get as _leader_get, |
2139 | + leader_set, |
2140 | + is_leader, |
2141 | +) |
2142 | + |
2143 | + |
2144 | +""" |
2145 | +This helper provides functions to support use of a peer relation |
2146 | +for basic key/value storage, with the added benefit that all storage |
2147 | +can be replicated across peer units. |
2148 | + |
2149 | +Requirement to use: |
2150 | + |
2151 | +To use this, the "peer_echo()" method has to be called from the peer |
2152 | +relation's relation-changed hook: |
2153 | + |
2154 | +@hooks.hook("cluster-relation-changed") # Adapt this to your peer relation name |
2155 | +def cluster_relation_changed(): |
2156 | + peer_echo() |
2157 | + |
2158 | +Once this is done, you can use peer storage from anywhere: |
2159 | + |
2160 | +@hooks.hook("some-hook") |
2161 | +def some_hook(): |
2162 | + # You can store and retrieve key/values this way: |
2163 | + if is_relation_made("cluster"): # from charmhelpers.core.hookenv |
2164 | + # There are peers available so we can work with peer storage |
2165 | + peer_store("mykey", "myvalue") |
2166 | + value = peer_retrieve("mykey") |
2167 | + print value |
2168 | + else: |
2169 | + print "No peers joined the relation, cannot share key/values :(" |
2170 | +""" |
2171 | + |
2172 | + |
2173 | +def leader_get(attribute=None, rid=None): |
2174 | + """Wrapper to ensure that settings are migrated from the peer relation. |
2175 | + |
2176 | + This is to support upgrading an environment that does not support |
2177 | + Juju leadership election to one that does. |
2178 | + |
2179 | + If a setting is not extant in the leader-get but is on the relation-get |
2180 | + peer rel, it is migrated and marked as such so that it is not re-migrated. |
2181 | + """ |
2182 | + migration_key = '__leader_get_migrated_settings__' |
2183 | + if not is_leader(): |
2184 | + return _leader_get(attribute=attribute) |
2185 | + |
2186 | + settings_migrated = False |
2187 | + leader_settings = _leader_get(attribute=attribute) |
2188 | + previously_migrated = _leader_get(attribute=migration_key) |
2189 | + |
2190 | + if previously_migrated: |
2191 | + migrated = set(json.loads(previously_migrated)) |
2192 | + else: |
2193 | + migrated = set([]) |
2194 | + |
2195 | + try: |
2196 | + if migration_key in leader_settings: |
2197 | + del leader_settings[migration_key] |
2198 | + except TypeError: |
2199 | + pass |
2200 | + |
2201 | + if attribute: |
2202 | + if attribute in migrated: |
2203 | + return leader_settings |
2204 | + |
2205 | + # If attribute not present in leader db, check if this unit has set |
2206 | + # the attribute in the peer relation |
2207 | + if not leader_settings: |
2208 | + peer_setting = _relation_get(attribute=attribute, unit=local_unit(), |
2209 | + rid=rid) |
2210 | + if peer_setting: |
2211 | + leader_set(settings={attribute: peer_setting}) |
2212 | + leader_settings = peer_setting |
2213 | + |
2214 | + if leader_settings: |
2215 | + settings_migrated = True |
2216 | + migrated.add(attribute) |
2217 | + else: |
2218 | + r_settings = _relation_get(unit=local_unit(), rid=rid) |
2219 | + if r_settings: |
2220 | + for key in set(r_settings.keys()).difference(migrated): |
2221 | + # Leader setting wins |
2222 | + if not leader_settings.get(key): |
2223 | + leader_settings[key] = r_settings[key] |
2224 | + |
2225 | + settings_migrated = True |
2226 | + migrated.add(key) |
2227 | + |
2228 | + if settings_migrated: |
2229 | + leader_set(**leader_settings) |
2230 | + |
2231 | + if migrated and settings_migrated: |
2232 | + migrated = json.dumps(list(migrated)) |
2233 | + leader_set(settings={migration_key: migrated}) |
2234 | + |
2235 | + return leader_settings |
2236 | + |
2237 | + |
2238 | +def relation_set(relation_id=None, relation_settings=None, **kwargs): |
2239 | + """Attempt to use leader-set if supported in the current version of Juju, |
2240 | + otherwise falls back on relation-set. |
2241 | + |
2242 | + Note that we only attempt to use leader-set if the provided relation_id is |
2243 | + a peer relation id or no relation id is provided (in which case we assume |
2244 | + we are within the peer relation context). |
2245 | + """ |
2246 | + try: |
2247 | + if relation_id in relation_ids('cluster'): |
2248 | + return leader_set(settings=relation_settings, **kwargs) |
2249 | + else: |
2250 | + raise NotImplementedError |
2251 | + except NotImplementedError: |
2252 | + return _relation_set(relation_id=relation_id, |
2253 | + relation_settings=relation_settings, **kwargs) |
2254 | + |
2255 | + |
2256 | +def relation_get(attribute=None, unit=None, rid=None): |
2257 | + """Attempt to use leader-get if supported in the current version of Juju, |
2258 | + otherwise falls back on relation-get. |
2259 | + |
2260 | + Note that we only attempt to use leader-get if the provided rid is a peer |
2261 | + relation id or no relation id is provided (in which case we assume we are |
2262 | + within the peer relation context). |
2263 | + """ |
2264 | + try: |
2265 | + if rid in relation_ids('cluster'): |
2266 | + return leader_get(attribute, rid) |
2267 | + else: |
2268 | + raise NotImplementedError |
2269 | + except NotImplementedError: |
2270 | + return _relation_get(attribute=attribute, rid=rid, unit=unit) |
2271 | + |
2272 | + |
2273 | +def peer_retrieve(key, relation_name='cluster'): |
2274 | + """Retrieve a named key from peer relation `relation_name`.""" |
2275 | + cluster_rels = relation_ids(relation_name) |
2276 | + if len(cluster_rels) > 0: |
2277 | + cluster_rid = cluster_rels[0] |
2278 | + return relation_get(attribute=key, rid=cluster_rid, |
2279 | + unit=local_unit()) |
2280 | + else: |
2281 | + raise ValueError('Unable to detect ' |
2282 | + 'peer relation {}'.format(relation_name)) |
2283 | + |
2284 | + |
2285 | +def peer_retrieve_by_prefix(prefix, relation_name='cluster', delimiter='_', |
2286 | + inc_list=None, exc_list=None): |
2287 | + """ Retrieve k/v pairs given a prefix and filter using {inc,exc}_list """ |
2288 | + inc_list = inc_list if inc_list else [] |
2289 | + exc_list = exc_list if exc_list else [] |
2290 | + peerdb_settings = peer_retrieve('-', relation_name=relation_name) |
2291 | + matched = {} |
2292 | + if peerdb_settings is None: |
2293 | + return matched |
2294 | + for k, v in peerdb_settings.items(): |
2295 | + full_prefix = prefix + delimiter |
2296 | + if k.startswith(full_prefix): |
2297 | + new_key = k.replace(full_prefix, '') |
2298 | + if new_key in exc_list: |
2299 | + continue |
2300 | + if new_key in inc_list or len(inc_list) == 0: |
2301 | + matched[new_key] = v |
2302 | + return matched |
2303 | + |
2304 | + |
2305 | +def peer_store(key, value, relation_name='cluster'): |
2306 | + """Store the key/value pair on the named peer relation `relation_name`.""" |
2307 | + cluster_rels = relation_ids(relation_name) |
2308 | + if len(cluster_rels) > 0: |
2309 | + cluster_rid = cluster_rels[0] |
2310 | + relation_set(relation_id=cluster_rid, |
2311 | + relation_settings={key: value}) |
2312 | + else: |
2313 | + raise ValueError('Unable to detect ' |
2314 | + 'peer relation {}'.format(relation_name)) |
2315 | + |
2316 | + |
2317 | +def peer_echo(includes=None, force=False): |
2318 | + """Echo filtered attributes back onto the same relation for storage. |
2319 | + |
2320 | + This is a requirement to use the peerstorage module - it needs to be called |
2321 | + from the peer relation's changed hook. |
2322 | + |
2323 | + If Juju leader support exists this will be a noop unless force is True. |
2324 | + """ |
2325 | + try: |
2326 | + is_leader() |
2327 | + except NotImplementedError: |
2328 | + pass |
2329 | + else: |
2330 | + if not force: |
2331 | + return # NOOP if leader-election is supported |
2332 | + |
2333 | + # Use original non-leader calls |
2334 | + relation_get = _relation_get |
2335 | + relation_set = _relation_set |
2336 | + |
2337 | + rdata = relation_get() |
2338 | + echo_data = {} |
2339 | + if includes is None: |
2340 | + echo_data = rdata.copy() |
2341 | + for ex in ['private-address', 'public-address']: |
2342 | + if ex in echo_data: |
2343 | + echo_data.pop(ex) |
2344 | + else: |
2345 | + for attribute, value in six.iteritems(rdata): |
2346 | + for include in includes: |
2347 | + if include in attribute: |
2348 | + echo_data[attribute] = value |
2349 | + if len(echo_data) > 0: |
2350 | + relation_set(relation_settings=echo_data) |
2351 | + |
2352 | + |
2353 | +def peer_store_and_set(relation_id=None, peer_relation_name='cluster', |
2354 | + peer_store_fatal=False, relation_settings=None, |
2355 | + delimiter='_', **kwargs): |
2356 | + """Store passed-in arguments both on the given relation and in peer storage. |
2357 | + |
2358 | + It functions like doing relation_set() and peer_store() at the same time, |
2359 | + with the same data. |
2360 | + |
2361 | + @param relation_id: the id of the relation to store the data on. Defaults |
2362 | + to the current relation. |
2363 | + @param peer_store_fatal: If set to True, the function will raise an |
2364 | + exception should the peer storage not be available.""" |
2365 | + |
2366 | + relation_settings = relation_settings if relation_settings else {} |
2367 | + relation_set(relation_id=relation_id, |
2368 | + relation_settings=relation_settings, |
2369 | + **kwargs) |
2370 | + if is_relation_made(peer_relation_name): |
2371 | + for key, value in six.iteritems(dict(list(kwargs.items()) + |
2372 | + list(relation_settings.items()))): |
2373 | + key_prefix = relation_id or current_relation_id() |
2374 | + peer_store(key_prefix + delimiter + key, |
2375 | + value, |
2376 | + relation_name=peer_relation_name) |
2377 | + else: |
2378 | + if peer_store_fatal: |
2379 | + raise ValueError('Unable to detect ' |
2380 | + 'peer relation {}'.format(peer_relation_name)) |
2381 | |
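The prefix filtering done by `peer_retrieve_by_prefix` above can be sketched stand-alone. This is an illustrative re-implementation over a plain dict (the sample peer-db contents are hypothetical, not taken from the charm):

```python
def retrieve_by_prefix(settings, prefix, delimiter='_',
                       inc_list=None, exc_list=None):
    """Filter a flat settings dict the way peer_retrieve_by_prefix does:
    keep keys starting with '<prefix><delimiter>', strip that prefix,
    then apply the include/exclude lists."""
    inc_list = inc_list or []
    exc_list = exc_list or []
    full_prefix = prefix + delimiter
    matched = {}
    for k, v in settings.items():
        if not k.startswith(full_prefix):
            continue
        new_key = k[len(full_prefix):]
        if new_key in exc_list:
            continue
        # An empty inc_list means "include everything not excluded".
        if not inc_list or new_key in inc_list:
            matched[new_key] = v
    return matched

# Hypothetical peer-db contents, keyed by relation-id prefix.
peerdb = {'db_host': '10.0.0.2', 'db_port': '5432', 'other': 'x'}
print(retrieve_by_prefix(peerdb, 'db'))  # {'host': '10.0.0.2', 'port': '5432'}
```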
2382 | === modified file 'charmhelpers/contrib/storage/linux/ceph.py' |
2383 | --- hooks/charmhelpers/contrib/storage/linux/ceph.py 2015-08-10 16:32:05 +0000 |
2384 | +++ charmhelpers/contrib/storage/linux/ceph.py 2015-09-11 11:01:28 +0000 |
2385 | @@ -56,6 +56,8 @@ |
2386 | apt_install, |
2387 | ) |
2388 | |
2389 | +from charmhelpers.core.kernel import modprobe |
2390 | + |
2391 | KEYRING = '/etc/ceph/ceph.client.{}.keyring' |
2392 | KEYFILE = '/etc/ceph/ceph.client.{}.key' |
2393 | |
2394 | @@ -288,17 +290,6 @@ |
2395 | os.chown(data_src_dst, uid, gid) |
2396 | |
2397 | |
2398 | -# TODO: re-use |
2399 | -def modprobe(module): |
2400 | - """Load a kernel module and configure for auto-load on reboot.""" |
2401 | - log('Loading kernel module', level=INFO) |
2402 | - cmd = ['modprobe', module] |
2403 | - check_call(cmd) |
2404 | - with open('/etc/modules', 'r+') as modules: |
2405 | - if module not in modules.read(): |
2406 | - modules.write(module) |
2407 | - |
2408 | - |
2409 | def copy_files(src, dst, symlinks=False, ignore=None): |
2410 | """Copy files from src to dst.""" |
2411 | for item in os.listdir(src): |
2412 | |
2413 | === modified file 'charmhelpers/contrib/storage/linux/utils.py' |
2414 | --- hooks/charmhelpers/contrib/storage/linux/utils.py 2015-08-10 16:32:05 +0000 |
2415 | +++ charmhelpers/contrib/storage/linux/utils.py 2015-09-11 11:01:28 +0000 |
2416 | @@ -43,9 +43,10 @@ |
2417 | |
2418 | :param block_device: str: Full path of block device to clean. |
2419 | ''' |
2420 | + # https://github.com/ceph/ceph/commit/fdd7f8d83afa25c4e09aaedd90ab93f3b64a677b |
2421 | # sometimes sgdisk exits non-zero; this is OK, dd will clean up |
2422 | - call(['sgdisk', '--zap-all', '--mbrtogpt', |
2423 | - '--clear', block_device]) |
2424 | + call(['sgdisk', '--zap-all', '--', block_device]) |
2425 | + call(['sgdisk', '--clear', '--mbrtogpt', '--', block_device]) |
2426 | dev_end = check_output(['blockdev', '--getsz', |
2427 | block_device]).decode('UTF-8') |
2428 | gpt_end = int(dev_end.split()[0]) - 100 |
2429 | |
2430 | === added file 'charmhelpers/core/files.py' |
2431 | --- charmhelpers/core/files.py 1970-01-01 00:00:00 +0000 |
2432 | +++ charmhelpers/core/files.py 2015-09-11 11:01:28 +0000 |
2433 | @@ -0,0 +1,45 @@ |
2434 | +#!/usr/bin/env python |
2435 | +# -*- coding: utf-8 -*- |
2436 | + |
2437 | +# Copyright 2014-2015 Canonical Limited. |
2438 | +# |
2439 | +# This file is part of charm-helpers. |
2440 | +# |
2441 | +# charm-helpers is free software: you can redistribute it and/or modify |
2442 | +# it under the terms of the GNU Lesser General Public License version 3 as |
2443 | +# published by the Free Software Foundation. |
2444 | +# |
2445 | +# charm-helpers is distributed in the hope that it will be useful, |
2446 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
2447 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
2448 | +# GNU Lesser General Public License for more details. |
2449 | +# |
2450 | +# You should have received a copy of the GNU Lesser General Public License |
2451 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
2452 | + |
2453 | +__author__ = 'Jorge Niedbalski <niedbalski@ubuntu.com>' |
2454 | + |
2455 | +import os |
2456 | +import subprocess |
2457 | + |
2458 | + |
2459 | +def sed(filename, before, after, flags='g'): |
2460 | + """ |
2461 | + Search and replaces the given pattern on filename. |
2462 | + |
2463 | + :param filename: relative or absolute file path. |
2464 | + :param before: expression to be replaced (see 'man sed') |
2465 | + :param after: expression to replace with (see 'man sed') |
2466 | + :param flags: sed-compatible regex flags; for example, to make |
2467 | + the search and replace case insensitive, specify ``flags="i"``. |
2468 | + The ``g`` flag is always specified regardless, so you do not |
2469 | + need to remember to include it when overriding this parameter. |
2470 | + :returns: If the sed command exit code was zero then return, |
2471 | + otherwise raise CalledProcessError. |
2472 | + """ |
2473 | + expression = r's/{0}/{1}/{2}'.format(before, |
2474 | + after, flags) |
2475 | + |
2476 | + return subprocess.check_call(["sed", "-i", "-r", "-e", |
2477 | + expression, |
2478 | + os.path.expanduser(filename)]) |
2479 | |
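The new `files.sed` helper shells out to GNU `sed -i -r`; an equivalent in pure Python, shown here only as an illustrative stand-in and not part of the diff, behaves like this:

```python
import re

def sed_py(filename, before, after, flags='g'):
    """In-place regex search and replace, mirroring files.sed: the 'g'
    flag replaces every occurrence, 'i' makes matching case-insensitive."""
    count = 0 if 'g' in flags else 1          # re.sub count=0 means "all"
    re_flags = re.IGNORECASE if 'i' in flags else 0
    with open(filename) as fh:
        text = fh.read()
    text = re.sub(before, after, text, count=count, flags=re_flags)
    with open(filename, 'w') as fh:
        fh.write(text)
```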
2480 | === renamed file 'hooks/charmhelpers/core/files.py' => 'charmhelpers/core/files.py.moved' |
2481 | === modified file 'charmhelpers/core/hookenv.py' |
2482 | --- hooks/charmhelpers/core/hookenv.py 2015-08-10 16:32:05 +0000 |
2483 | +++ charmhelpers/core/hookenv.py 2015-09-11 11:01:28 +0000 |
2484 | @@ -87,10 +87,18 @@ |
2485 | try: |
2486 | return cache[key] |
2487 | except KeyError: |
2488 | - pass # Drop out of the exception handler scope. |
2489 | - res = func(*args, **kwargs) |
2490 | - cache[key] = res |
2491 | - return res |
2492 | +<<<<<<< TREE |
2493 | + pass # Drop out of the exception handler scope. |
2494 | + res = func(*args, **kwargs) |
2495 | + cache[key] = res |
2496 | + return res |
2497 | +======= |
2498 | + pass # Drop out of the exception handler scope. |
2499 | + res = func(*args, **kwargs) |
2500 | + cache[key] = res |
2501 | + return res |
2502 | + wrapper._wrapped = func |
2503 | +>>>>>>> MERGE-SOURCE |
2504 | return wrapper |
2505 | |
2506 | |
2507 | @@ -190,6 +198,7 @@ |
2508 | return os.environ.get('JUJU_RELATION', None) |
2509 | |
2510 | |
2511 | +<<<<<<< TREE |
2512 | @cmdline.subcommand() |
2513 | @cached |
2514 | def relation_id(relation_name=None, service_or_unit=None): |
2515 | @@ -204,6 +213,21 @@ |
2516 | return relid |
2517 | else: |
2518 | raise ValueError('Must specify neither or both of relation_name and service_or_unit') |
2519 | +======= |
2520 | +@cached |
2521 | +def relation_id(relation_name=None, service_or_unit=None): |
2522 | + """The relation ID for the current or a specified relation""" |
2523 | + if not relation_name and not service_or_unit: |
2524 | + return os.environ.get('JUJU_RELATION_ID', None) |
2525 | + elif relation_name and service_or_unit: |
2526 | + service_name = service_or_unit.split('/')[0] |
2527 | + for relid in relation_ids(relation_name): |
2528 | + remote_service = remote_service_name(relid) |
2529 | + if remote_service == service_name: |
2530 | + return relid |
2531 | + else: |
2532 | + raise ValueError('Must specify neither or both of relation_name and service_or_unit') |
2533 | +>>>>>>> MERGE-SOURCE |
2534 | |
2535 | |
2536 | def local_unit(): |
2537 | @@ -213,15 +237,22 @@ |
2538 | |
2539 | def remote_unit(): |
2540 | """The remote unit for the current relation hook""" |
2541 | +<<<<<<< TREE |
2542 | return os.environ.get('JUJU_REMOTE_UNIT', None) |
2543 | |
2544 | |
2545 | @cmdline.subcommand() |
2546 | +======= |
2547 | + return os.environ.get('JUJU_REMOTE_UNIT', None) |
2548 | + |
2549 | + |
2550 | +>>>>>>> MERGE-SOURCE |
2551 | def service_name(): |
2552 | """The name service group this unit belongs to""" |
2553 | return local_unit().split('/')[0] |
2554 | |
2555 | |
2556 | +<<<<<<< TREE |
2557 | @cmdline.subcommand() |
2558 | @cached |
2559 | def remote_service_name(relid=None): |
2560 | @@ -234,6 +265,19 @@ |
2561 | return unit.split('/')[0] if unit else None |
2562 | |
2563 | |
2564 | +======= |
2565 | +@cached |
2566 | +def remote_service_name(relid=None): |
2567 | + """The remote service name for a given relation-id (or the current relation)""" |
2568 | + if relid is None: |
2569 | + unit = remote_unit() |
2570 | + else: |
2571 | + units = related_units(relid) |
2572 | + unit = units[0] if units else None |
2573 | + return unit.split('/')[0] if unit else None |
2574 | + |
2575 | + |
2576 | +>>>>>>> MERGE-SOURCE |
2577 | def hook_name(): |
2578 | """The name of the currently executing hook""" |
2579 | return os.environ.get('JUJU_HOOK_NAME', os.path.basename(sys.argv[0])) |
2580 | @@ -740,6 +784,7 @@ |
2581 | |
2582 | The results set by action_set are preserved.""" |
2583 | subprocess.check_call(['action-fail', message]) |
2584 | +<<<<<<< TREE |
2585 | |
2586 | |
2587 | def action_name(): |
2588 | @@ -913,3 +958,180 @@ |
2589 | for callback, args, kwargs in reversed(_atexit): |
2590 | callback(*args, **kwargs) |
2591 | del _atexit[:] |
2592 | +======= |
2593 | + |
2594 | + |
2595 | +def action_name(): |
2596 | + """Get the name of the currently executing action.""" |
2597 | + return os.environ.get('JUJU_ACTION_NAME') |
2598 | + |
2599 | + |
2600 | +def action_uuid(): |
2601 | + """Get the UUID of the currently executing action.""" |
2602 | + return os.environ.get('JUJU_ACTION_UUID') |
2603 | + |
2604 | + |
2605 | +def action_tag(): |
2606 | + """Get the tag for the currently executing action.""" |
2607 | + return os.environ.get('JUJU_ACTION_TAG') |
2608 | + |
2609 | + |
2610 | +def status_set(workload_state, message): |
2611 | + """Set the workload state with a message |
2612 | + |
2613 | + Use status-set to set the workload state with a message which is visible |
2614 | + to the user via juju status. If the status-set command is not found then |
2615 | + assume this is juju < 1.23 and juju-log the message instead. |
2616 | + |
2617 | + workload_state -- valid juju workload state. |
2618 | + message -- status update message |
2619 | + """ |
2620 | + valid_states = ['maintenance', 'blocked', 'waiting', 'active'] |
2621 | + if workload_state not in valid_states: |
2622 | + raise ValueError( |
2623 | + '{!r} is not a valid workload state'.format(workload_state) |
2624 | + ) |
2625 | + cmd = ['status-set', workload_state, message] |
2626 | + try: |
2627 | + ret = subprocess.call(cmd) |
2628 | + if ret == 0: |
2629 | + return |
2630 | + except OSError as e: |
2631 | + if e.errno != errno.ENOENT: |
2632 | + raise |
2633 | + log_message = 'status-set failed: {} {}'.format(workload_state, |
2634 | + message) |
2635 | + log(log_message, level='INFO') |
2636 | + |
2637 | + |
2638 | +def status_get(): |
2639 | + """Retrieve the previously set juju workload state and message |
2640 | + |
2641 | + If the status-get command is not found then assume this is juju < 1.23 and |
2642 | + return 'unknown', "" |
2643 | + |
2644 | + """ |
2645 | + cmd = ['status-get', "--format=json", "--include-data"] |
2646 | + try: |
2647 | + raw_status = subprocess.check_output(cmd) |
2648 | + except OSError as e: |
2649 | + if e.errno == errno.ENOENT: |
2650 | + return ('unknown', "") |
2651 | + else: |
2652 | + raise |
2653 | + else: |
2654 | + status = json.loads(raw_status.decode("UTF-8")) |
2655 | + return (status["status"], status["message"]) |
2656 | + |
2657 | + |
2658 | +def translate_exc(from_exc, to_exc): |
2659 | + def inner_translate_exc1(f): |
2660 | + def inner_translate_exc2(*args, **kwargs): |
2661 | + try: |
2662 | + return f(*args, **kwargs) |
2663 | + except from_exc: |
2664 | + raise to_exc |
2665 | + |
2666 | + return inner_translate_exc2 |
2667 | + |
2668 | + return inner_translate_exc1 |
2669 | + |
2670 | + |
2671 | +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) |
2672 | +def is_leader(): |
2673 | + """Does the current unit hold the juju leadership |
2674 | + |
2675 | + Uses juju to determine whether the current unit is the leader of its peers |
2676 | + """ |
2677 | + cmd = ['is-leader', '--format=json'] |
2678 | + return json.loads(subprocess.check_output(cmd).decode('UTF-8')) |
2679 | + |
2680 | + |
2681 | +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) |
2682 | +def leader_get(attribute=None): |
2683 | + """Juju leader get value(s)""" |
2684 | + cmd = ['leader-get', '--format=json'] + [attribute or '-'] |
2685 | + return json.loads(subprocess.check_output(cmd).decode('UTF-8')) |
2686 | + |
2687 | + |
2688 | +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) |
2689 | +def leader_set(settings=None, **kwargs): |
2690 | + """Juju leader set value(s)""" |
2691 | + # Don't log secrets. |
2692 | + # log("Juju leader-set '%s'" % (settings), level=DEBUG) |
2693 | + cmd = ['leader-set'] |
2694 | + settings = settings or {} |
2695 | + settings.update(kwargs) |
2696 | + for k, v in settings.items(): |
2697 | + if v is None: |
2698 | + cmd.append('{}='.format(k)) |
2699 | + else: |
2700 | + cmd.append('{}={}'.format(k, v)) |
2701 | + subprocess.check_call(cmd) |
2702 | + |
2703 | + |
2704 | +@cached |
2705 | +def juju_version(): |
2706 | + """Full version string (eg. '1.23.3.1-trusty-amd64')""" |
2707 | + # Per https://bugs.launchpad.net/juju-core/+bug/1455368/comments/1 |
2708 | + jujud = glob.glob('/var/lib/juju/tools/machine-*/jujud')[0] |
2709 | + return subprocess.check_output([jujud, 'version'], |
2710 | + universal_newlines=True).strip() |
2711 | + |
2712 | + |
2713 | +@cached |
2714 | +def has_juju_version(minimum_version): |
2715 | + """Return True if the Juju version is at least the provided version""" |
2716 | + return LooseVersion(juju_version()) >= LooseVersion(minimum_version) |
2717 | + |
2718 | + |
2719 | +_atexit = [] |
2720 | +_atstart = [] |
2721 | + |
2722 | + |
2723 | +def atstart(callback, *args, **kwargs): |
2724 | + '''Schedule a callback to run before the main hook. |
2725 | + |
2726 | + Callbacks are run in the order they were added. |
2727 | + |
2728 | + This is useful for modules and classes to perform initialization |
2729 | + and inject behavior. In particular: |
2730 | + |
2731 | + - Run common code before all of your hooks, such as logging |
2732 | + the hook name or interesting relation data. |
2733 | + - Defer object or module initialization that requires a hook |
2734 | + context until we know there actually is a hook context, |
2735 | + making testing easier. |
2736 | + - Rather than requiring charm authors to include boilerplate to |
2737 | + invoke your helper's behavior, have it run automatically if |
2738 | + your object is instantiated or module imported. |
2739 | + |
2740 | + This is not at all useful after your hook framework has been launched. |
2741 | + ''' |
2742 | + global _atstart |
2743 | + _atstart.append((callback, args, kwargs)) |
2744 | + |
2745 | + |
2746 | +def atexit(callback, *args, **kwargs): |
2747 | + '''Schedule a callback to run on successful hook completion. |
2748 | + |
2749 | + Callbacks are run in the reverse order that they were added.''' |
2750 | + _atexit.append((callback, args, kwargs)) |
2751 | + |
2752 | + |
2753 | +def _run_atstart(): |
2754 | + '''Hook frameworks must invoke this before running the main hook body.''' |
2755 | + global _atstart |
2756 | + for callback, args, kwargs in _atstart: |
2757 | + callback(*args, **kwargs) |
2758 | + del _atstart[:] |
2759 | + |
2760 | + |
2761 | +def _run_atexit(): |
2762 | + '''Hook frameworks must invoke this after the main hook body has |
2763 | + successfully completed. Do not invoke it if the hook fails.''' |
2764 | + global _atexit |
2765 | + for callback, args, kwargs in reversed(_atexit): |
2766 | + callback(*args, **kwargs) |
2767 | + del _atexit[:] |
2768 | +>>>>>>> MERGE-SOURCE |
2769 | |
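The `translate_exc` decorator added in this hunk is what lets callers probe for leader-election support: a missing `is-leader` binary raises `OSError`, which the wrapper converts to `NotImplementedError` so code such as `peer_echo` can fall back to peer storage. A minimal illustration, using a stand-in function rather than the real Juju tool:

```python
def translate_exc(from_exc, to_exc):
    """Decorator factory: re-raise from_exc as to_exc."""
    def outer(f):
        def inner(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            except from_exc:
                raise to_exc
        return inner
    return outer

@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
def fake_is_leader():
    # Stand-in for invoking the 'is-leader' tool on a Juju < 1.23 agent,
    # where the binary simply does not exist.
    raise OSError("is-leader: command not found")

try:
    fake_is_leader()
except NotImplementedError:
    print("no leader election support; falling back to peer storage")
```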
2770 | === modified file 'charmhelpers/core/host.py' |
2771 | --- hooks/charmhelpers/core/host.py 2015-08-10 16:32:05 +0000 |
2772 | +++ charmhelpers/core/host.py 2015-09-11 11:01:28 +0000 |
2773 | @@ -63,6 +63,7 @@ |
2774 | return service_result |
2775 | |
2776 | |
2777 | +<<<<<<< TREE |
2778 | def service_pause(service_name, init_dir=None): |
2779 | """Pause a system service. |
2780 | |
2781 | @@ -93,6 +94,54 @@ |
2782 | return started |
2783 | |
2784 | |
2785 | +======= |
2786 | +def service_pause(service_name, init_dir="/etc/init", initd_dir="/etc/init.d"): |
2787 | + """Pause a system service. |
2788 | + |
2789 | + Stop it, and prevent it from starting again at boot.""" |
2790 | + stopped = service_stop(service_name) |
2791 | + upstart_file = os.path.join(init_dir, "{}.conf".format(service_name)) |
2792 | + sysv_file = os.path.join(initd_dir, service_name) |
2793 | + if os.path.exists(upstart_file): |
2794 | + override_path = os.path.join( |
2795 | + init_dir, '{}.override'.format(service_name)) |
2796 | + with open(override_path, 'w') as fh: |
2797 | + fh.write("manual\n") |
2798 | + elif os.path.exists(sysv_file): |
2799 | + subprocess.check_call(["update-rc.d", service_name, "disable"]) |
2800 | + else: |
2801 | + # XXX: Support SystemD too |
2802 | + raise ValueError( |
2803 | + "Unable to detect {0} as either Upstart {1} or SysV {2}".format( |
2804 | + service_name, upstart_file, sysv_file)) |
2805 | + return stopped |
2806 | + |
2807 | + |
2808 | +def service_resume(service_name, init_dir="/etc/init", |
2809 | + initd_dir="/etc/init.d"): |
2810 | + """Resume a system service. |
2811 | + |
2812 | + Re-enable starting at boot and start the service.""" |
2813 | + upstart_file = os.path.join(init_dir, "{}.conf".format(service_name)) |
2814 | + sysv_file = os.path.join(initd_dir, service_name) |
2815 | + if os.path.exists(upstart_file): |
2816 | + override_path = os.path.join( |
2817 | + init_dir, '{}.override'.format(service_name)) |
2818 | + if os.path.exists(override_path): |
2819 | + os.unlink(override_path) |
2820 | + elif os.path.exists(sysv_file): |
2821 | + subprocess.check_call(["update-rc.d", service_name, "enable"]) |
2822 | + else: |
2823 | + # XXX: Support SystemD too |
2824 | + raise ValueError( |
2825 | + "Unable to detect {0} as either Upstart {1} or SysV {2}".format( |
2826 | + service_name, upstart_file, sysv_file)) |
2827 | + |
2828 | + started = service_start(service_name) |
2829 | + return started |
2830 | + |
2831 | + |
2832 | +>>>>>>> MERGE-SOURCE |
2833 | def service(action, service_name): |
2834 | """Control a system service""" |
2835 | cmd = ['service', service_name, action] |
2836 | @@ -148,6 +197,16 @@ |
2837 | return user_info |
2838 | |
2839 | |
2840 | +def user_exists(username): |
2841 | + """Check if a user exists""" |
2842 | + try: |
2843 | + pwd.getpwnam(username) |
2844 | + user_exists = True |
2845 | + except KeyError: |
2846 | + user_exists = False |
2847 | + return user_exists |
2848 | + |
2849 | + |
2850 | def add_group(group_name, system_group=False): |
2851 | """Add a group to the system""" |
2852 | try: |
2853 | @@ -280,6 +339,17 @@ |
2854 | return system_mounts |
2855 | |
2856 | |
2857 | +def fstab_mount(mountpoint): |
2858 | + """Mount filesystem using fstab""" |
2859 | + cmd_args = ['mount', mountpoint] |
2860 | + try: |
2861 | + subprocess.check_output(cmd_args) |
2862 | + except subprocess.CalledProcessError as e: |
2863 | + log('Error mounting {}\n{}'.format(mountpoint, e.output)) |
2864 | + return False |
2865 | + return True |
2866 | + |
2867 | + |
2868 | def file_hash(path, hash_type='md5'): |
2869 | """ |
2870 | Generate a hash checksum of the contents of 'path' or None if not found. |
2871 | @@ -396,25 +466,80 @@ |
2872 | return(''.join(random_chars)) |
2873 | |
2874 | |
2875 | -def list_nics(nic_type): |
2876 | +def is_phy_iface(interface): |
2877 | + """Returns True if interface is not virtual, otherwise False.""" |
2878 | + if interface: |
2879 | + sys_net = '/sys/class/net' |
2880 | + if os.path.isdir(sys_net): |
2881 | + for iface in glob.glob(os.path.join(sys_net, '*')): |
2882 | + if '/virtual/' in os.path.realpath(iface): |
2883 | + continue |
2884 | + |
2885 | + if interface == os.path.basename(iface): |
2886 | + return True |
2887 | + |
2888 | + return False |
2889 | + |
2890 | + |
2891 | +def get_bond_master(interface): |
2892 | + """Returns bond master if interface is bond slave otherwise None. |
2893 | + |
2894 | + NOTE: the provided interface is expected to be physical |
2895 | + """ |
2896 | + if interface: |
2897 | + iface_path = '/sys/class/net/%s' % (interface) |
2898 | + if os.path.exists(iface_path): |
2899 | + if '/virtual/' in os.path.realpath(iface_path): |
2900 | + return None |
2901 | + |
2902 | + master = os.path.join(iface_path, 'master') |
2903 | + if os.path.exists(master): |
2904 | + master = os.path.realpath(master) |
2905 | + # make sure it is a bond master |
2906 | + if os.path.exists(os.path.join(master, 'bonding')): |
2907 | + return os.path.basename(master) |
2908 | + |
2909 | + return None |
2910 | + |
2911 | + |
2912 | +def list_nics(nic_type=None): |
2913 | '''Return a list of nics of given type(s)''' |
2914 | if isinstance(nic_type, six.string_types): |
2915 | int_types = [nic_type] |
2916 | else: |
2917 | int_types = nic_type |
2918 | + |
2919 | interfaces = [] |
2920 | - for int_type in int_types: |
2921 | - cmd = ['ip', 'addr', 'show', 'label', int_type + '*'] |
2922 | + if nic_type: |
2923 | + for int_type in int_types: |
2924 | + cmd = ['ip', 'addr', 'show', 'label', int_type + '*'] |
2925 | + ip_output = subprocess.check_output(cmd).decode('UTF-8') |
2926 | + ip_output = ip_output.split('\n') |
2927 | + ip_output = (line for line in ip_output if line) |
2928 | + for line in ip_output: |
2929 | + if line.split()[1].startswith(int_type): |
2930 | + matched = re.search('.*: (' + int_type + |
2931 | + r'[0-9]+\.[0-9]+)@.*', line) |
2932 | + if matched: |
2933 | + iface = matched.groups()[0] |
2934 | + else: |
2935 | + iface = line.split()[1].replace(":", "") |
2936 | + |
2937 | + if iface not in interfaces: |
2938 | + interfaces.append(iface) |
2939 | + else: |
2940 | + cmd = ['ip', 'a'] |
2941 | ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n') |
2942 | - ip_output = (line for line in ip_output if line) |
2943 | + ip_output = (line.strip() for line in ip_output if line) |
2944 | + |
2945 | + key = re.compile('^[0-9]+:\s+(.+):') |
2946 | for line in ip_output: |
2947 | - if line.split()[1].startswith(int_type): |
2948 | - matched = re.search('.*: (' + int_type + r'[0-9]+\.[0-9]+)@.*', line) |
2949 | - if matched: |
2950 | - interface = matched.groups()[0] |
2951 | - else: |
2952 | - interface = line.split()[1].replace(":", "") |
2953 | - interfaces.append(interface) |
2954 | + matched = re.search(key, line) |
2955 | + if matched: |
2956 | + iface = matched.group(1) |
2957 | + iface = iface.partition("@")[0] |
2958 | + if iface not in interfaces: |
2959 | + interfaces.append(iface) |
2960 | |
2961 | return interfaces |
2962 | |
2963 | |
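The rewritten no-argument branch of `list_nics` parses full `ip a` output with the regex `^[0-9]+:\s+(.+):` and strips any `@parent` suffix from VLAN interfaces. A stand-alone sketch of that parsing step, run against a fabricated sample of `ip a` output rather than live data:

```python
import re

def parse_ip_a(output):
    """Extract interface names from 'ip a' output the way the new
    list_nics() branch does, dropping any '@parent' suffix (VLANs)."""
    key = re.compile(r'^[0-9]+:\s+(.+):')
    interfaces = []
    for line in (l.strip() for l in output.split('\n') if l):
        matched = key.search(line)
        if matched:
            iface = matched.group(1).partition('@')[0]
            if iface not in interfaces:
                interfaces.append(iface)
    return interfaces

# Fabricated sample output for illustration.
sample = """1: lo: <LOOPBACK,UP> mtu 65536
    inet 127.0.0.1/8 scope host lo
2: eth0: <BROADCAST,UP> mtu 1500
3: eth0.100@eth0: <BROADCAST,UP> mtu 1500
"""
print(parse_ip_a(sample))  # ['lo', 'eth0', 'eth0.100']
```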
2964 | === added file 'charmhelpers/core/hugepage.py' |
2965 | --- charmhelpers/core/hugepage.py 1970-01-01 00:00:00 +0000 |
2966 | +++ charmhelpers/core/hugepage.py 2015-09-11 11:01:28 +0000 |
2967 | @@ -0,0 +1,62 @@ |
2968 | +# -*- coding: utf-8 -*- |
2969 | + |
2970 | +# Copyright 2014-2015 Canonical Limited. |
2971 | +# |
2972 | +# This file is part of charm-helpers. |
2973 | +# |
2974 | +# charm-helpers is free software: you can redistribute it and/or modify |
2975 | +# it under the terms of the GNU Lesser General Public License version 3 as |
2976 | +# published by the Free Software Foundation. |
2977 | +# |
2978 | +# charm-helpers is distributed in the hope that it will be useful, |
2979 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
2980 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
2981 | +# GNU Lesser General Public License for more details. |
2982 | +# |
2983 | +# You should have received a copy of the GNU Lesser General Public License |
2984 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
2985 | + |
2986 | +import yaml |
2987 | +from charmhelpers.core import fstab |
2988 | +from charmhelpers.core import sysctl |
2989 | +from charmhelpers.core.host import ( |
2990 | + add_group, |
2991 | + add_user_to_group, |
2992 | + fstab_mount, |
2993 | + mkdir, |
2994 | +) |
2995 | + |
2996 | + |
2997 | +def hugepage_support(user, group='hugetlb', nr_hugepages=256, |
2998 | + max_map_count=65536, mnt_point='/run/hugepages/kvm', |
2999 | + pagesize='2MB', mount=True): |
3000 | + """Enable hugepages on system. |
3001 | + |
3002 | + Args: |
3003 | + user (str) -- Username to allow access to hugepages to |
3004 | + group (str) -- Group name to own hugepages |
3005 | + nr_hugepages (int) -- Number of pages to reserve |
3006 | + max_map_count (int) -- Number of Virtual Memory Areas a process can own |
3007 | + mnt_point (str) -- Directory to mount hugepages on |
3008 | + pagesize (str) -- Size of hugepages |
3009 | + mount (bool) -- Whether to mount hugepages |
3010 | + """ |
3011 | + group_info = add_group(group) |
3012 | + gid = group_info.gr_gid |
3013 | + add_user_to_group(user, group) |
3014 | + sysctl_settings = { |
3015 | + 'vm.nr_hugepages': nr_hugepages, |
3016 | + 'vm.max_map_count': max_map_count, |
3017 | + 'vm.hugetlb_shm_group': gid, |
3018 | + } |
3019 | + sysctl.create(yaml.dump(sysctl_settings), '/etc/sysctl.d/10-hugepage.conf') |
3020 | + mkdir(mnt_point, owner='root', group='root', perms=0o755, force=False) |
3021 | + lfstab = fstab.Fstab() |
3022 | + fstab_entry = lfstab.get_entry_by_attr('mountpoint', mnt_point) |
3023 | + if fstab_entry: |
3024 | + lfstab.remove_entry(fstab_entry) |
3025 | + entry = lfstab.Entry('nodev', mnt_point, 'hugetlbfs', |
3026 | + 'mode=1770,gid={},pagesize={}'.format(gid, pagesize), 0, 0) |
3027 | + lfstab.add_entry(entry) |
3028 | + if mount: |
3029 | + fstab_mount(mnt_point) |
3030 | |
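The hugepage_support() helper above combines group setup, sysctl persistence, and an fstab-backed mount in one call. As a minimal standalone sketch of the persisted settings (gid 1001 is a hypothetical hugetlb group id standing in for what add_group() returns, and the plain `key = value` rendering stands in for sysctl.create()):

```python
# Sketch of the payload hugepage_support() persists under
# /etc/sysctl.d/10-hugepage.conf, using the function's default arguments.
# gid 1001 is a hypothetical hugetlb group id.
sysctl_settings = {
    'vm.nr_hugepages': 256,
    'vm.max_map_count': 65536,
    'vm.hugetlb_shm_group': 1001,
}

# One "key = value" line per setting, as sysctl.d files expect.
conf = '\n'.join('{} = {}'.format(k, v)
                 for k, v in sorted(sysctl_settings.items()))
print(conf)
```

The matching fstab entry then mounts hugetlbfs at /run/hugepages/kvm with `mode=1770,gid=<gid>,pagesize=2MB`.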
3031 | === added file 'charmhelpers/core/kernel.py' |
3032 | --- charmhelpers/core/kernel.py 1970-01-01 00:00:00 +0000 |
3033 | +++ charmhelpers/core/kernel.py 2015-09-11 11:01:28 +0000 |
3034 | @@ -0,0 +1,68 @@ |
3035 | +#!/usr/bin/env python |
3036 | +# -*- coding: utf-8 -*- |
3037 | + |
3038 | +# Copyright 2014-2015 Canonical Limited. |
3039 | +# |
3040 | +# This file is part of charm-helpers. |
3041 | +# |
3042 | +# charm-helpers is free software: you can redistribute it and/or modify |
3043 | +# it under the terms of the GNU Lesser General Public License version 3 as |
3044 | +# published by the Free Software Foundation. |
3045 | +# |
3046 | +# charm-helpers is distributed in the hope that it will be useful, |
3047 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
3048 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
3049 | +# GNU Lesser General Public License for more details. |
3050 | +# |
3051 | +# You should have received a copy of the GNU Lesser General Public License |
3052 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
3053 | + |
3054 | +__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>" |
3055 | + |
3056 | +from charmhelpers.core.hookenv import ( |
3057 | + log, |
3058 | + INFO |
3059 | +) |
3060 | + |
3061 | +from subprocess import check_call, check_output |
3062 | +import re |
3063 | + |
3064 | + |
3065 | +def modprobe(module, persist=True): |
3066 | + """Load a kernel module and configure for auto-load on reboot.""" |
3067 | + cmd = ['modprobe', module] |
3068 | + |
3069 | + log('Loading kernel module %s' % module, level=INFO) |
3070 | + |
3071 | + check_call(cmd) |
3072 | + if persist: |
3073 | + with open('/etc/modules', 'r+') as modules: |
3074 | + if module not in modules.read(): |
3075 | + modules.write(module + '\n') |
3076 | + |
3077 | + |
3078 | +def rmmod(module, force=False): |
3079 | + """Remove a module from the linux kernel""" |
3080 | + cmd = ['rmmod'] |
3081 | + if force: |
3082 | + cmd.append('-f') |
3083 | + cmd.append(module) |
3084 | + log('Removing kernel module %s' % module, level=INFO) |
3085 | + return check_call(cmd) |
3086 | + |
3087 | + |
3088 | +def lsmod(): |
3089 | + """Shows what kernel modules are currently loaded""" |
3090 | + return check_output(['lsmod'], |
3091 | + universal_newlines=True) |
3092 | + |
3093 | + |
3094 | +def is_module_loaded(module): |
3095 | + """Checks if a kernel module is already loaded""" |
3096 | + matches = re.findall('^%s[ ]+' % module, lsmod(), re.M) |
3097 | + return len(matches) > 0 |
3098 | + |
3099 | + |
3100 | +def update_initramfs(version='all'): |
3101 | + """Updates an initramfs image""" |
3102 | + return check_call(["update-initramfs", "-k", version, "-u"]) |
3103 | |
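is_module_loaded() above anchors a regex at the start of each lsmod line, so a module name only matches when followed by whitespace. A self-contained sketch with faked lsmod output (the real helper shells out via check_output) shows the matching behaviour:

```python
import re

# Faked `lsmod` output; the real is_module_loaded() calls lsmod() instead.
sample_lsmod = (
    "Module                  Size  Used by\n"
    "kvm_intel             143109  0\n"
    "kvm                   452096  1 kvm_intel\n"
)

def is_module_loaded(module, lsmod_output):
    # Anchored at line start, followed by whitespace, as in kernel.py.
    return len(re.findall('^%s[ ]+' % module, lsmod_output, re.M)) > 0

print(is_module_loaded('kvm', sample_lsmod))      # True: exact-name line
print(is_module_loaded('kvm_amd', sample_lsmod))  # False: not loaded
```

Note that `kvm` does not falsely match the `kvm_intel` line, because the pattern requires a space immediately after the module name.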
3104 | === modified file 'charmhelpers/core/services/helpers.py' |
3105 | --- hooks/charmhelpers/core/services/helpers.py 2015-08-10 16:32:05 +0000 |
3106 | +++ charmhelpers/core/services/helpers.py 2015-09-11 11:01:28 +0000 |
3107 | @@ -16,7 +16,9 @@ |
3108 | |
3109 | import os |
3110 | import yaml |
3111 | + |
3112 | from charmhelpers.core import hookenv |
3113 | +from charmhelpers.core import host |
3114 | from charmhelpers.core import templating |
3115 | |
3116 | from charmhelpers.core.services.base import ManagerCallback |
3117 | @@ -239,28 +241,50 @@ |
3118 | action. |
3119 | |
3120 | :param str source: The template source file, relative to |
3121 | - `$CHARM_DIR/templates` |
3122 | +<<<<<<< TREE |
3123 | + `$CHARM_DIR/templates` |
3124 | +======= |
3125 | + `$CHARM_DIR/templates` |
3126 | + |
3127 | +>>>>>>> MERGE-SOURCE |
3128 | :param str target: The target to write the rendered template to |
3129 | :param str owner: The owner of the rendered file |
3130 | :param str group: The group of the rendered file |
3131 | :param int perms: The permissions of the rendered file |
3132 | +<<<<<<< TREE |
3133 | |
3134 | +======= |
3135 | + :param partial on_change_action: functools partial to be executed when |
3136 | + rendered file changes |
3137 | +>>>>>>> MERGE-SOURCE |
3138 | """ |
3139 | def __init__(self, source, target, |
3140 | - owner='root', group='root', perms=0o444): |
3141 | + owner='root', group='root', perms=0o444, |
3142 | + on_change_action=None): |
3143 | self.source = source |
3144 | self.target = target |
3145 | self.owner = owner |
3146 | self.group = group |
3147 | self.perms = perms |
3148 | + self.on_change_action = on_change_action |
3149 | |
3150 | def __call__(self, manager, service_name, event_name): |
3151 | + pre_checksum = '' |
3152 | + if self.on_change_action and os.path.isfile(self.target): |
3153 | + pre_checksum = host.file_hash(self.target) |
3154 | service = manager.get_service(service_name) |
3155 | context = {} |
3156 | for ctx in service.get('required_data', []): |
3157 | context.update(ctx) |
3158 | templating.render(self.source, self.target, context, |
3159 | self.owner, self.group, self.perms) |
3160 | + if self.on_change_action: |
3161 | + if pre_checksum == host.file_hash(self.target): |
3162 | + hookenv.log( |
3163 | + 'No change detected: {}'.format(self.target), |
3164 | + hookenv.DEBUG) |
3165 | + else: |
3166 | + self.on_change_action() |
3167 | |
3168 | |
3169 | # Convenience aliases for templates |
3170 | |
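The new on_change_action parameter takes a functools.partial that TemplateCallback invokes with no arguments, and only when the rendered file's checksum actually changes. A hypothetical sketch of wiring one up (the service_restart function and the 'ceilometer-api' name are illustrative, not part of this diff; a charm would typically pass something like charmhelpers.core.host.service_restart):

```python
from functools import partial

# Hypothetical restart callback standing in for a real service helper.
def service_restart(name):
    print('restarting %s' % name)
    return name

# Stored on the TemplateCallback as on_change_action and called after a
# change in the rendered target file is detected.
on_change_action = partial(service_restart, 'ceilometer-api')
restarted = on_change_action()
```

Comparing file_hash() before and after rendering means an unchanged template no longer triggers a needless restart.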
3171 | === modified file 'charmhelpers/fetch/__init__.py' |
3172 | --- hooks/charmhelpers/fetch/__init__.py 2015-08-10 16:32:05 +0000 |
3173 | +++ charmhelpers/fetch/__init__.py 2015-09-11 11:01:28 +0000 |
3174 | @@ -90,6 +90,14 @@ |
3175 | 'kilo/proposed': 'trusty-proposed/kilo', |
3176 | 'trusty-kilo/proposed': 'trusty-proposed/kilo', |
3177 | 'trusty-proposed/kilo': 'trusty-proposed/kilo', |
3178 | + # Liberty |
3179 | + 'liberty': 'trusty-updates/liberty', |
3180 | + 'trusty-liberty': 'trusty-updates/liberty', |
3181 | + 'trusty-liberty/updates': 'trusty-updates/liberty', |
3182 | + 'trusty-updates/liberty': 'trusty-updates/liberty', |
3183 | + 'liberty/proposed': 'trusty-proposed/liberty', |
3184 | + 'trusty-liberty/proposed': 'trusty-proposed/liberty', |
3185 | + 'trusty-proposed/liberty': 'trusty-proposed/liberty', |
3186 | } |
3187 | |
3188 | # The order of this list is very important. Handlers should be listed in from |
3189 | |
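The new Liberty rows extend CLOUD_ARCHIVE_POCKETS, which maps the pocket part of an `openstack-origin` value such as `cloud:trusty-liberty` onto an Ubuntu Cloud Archive pocket. A sketch of the lookup, with the dict trimmed to the entries added here:

```python
# Subset of CLOUD_ARCHIVE_POCKETS from charmhelpers.fetch, showing only
# the Liberty aliases added in this diff.
CLOUD_ARCHIVE_POCKETS = {
    'liberty': 'trusty-updates/liberty',
    'trusty-liberty': 'trusty-updates/liberty',
    'liberty/proposed': 'trusty-proposed/liberty',
}

source = 'cloud:trusty-liberty'
pocket = source.split(':', 1)[1]      # strip the "cloud:" prefix
print(CLOUD_ARCHIVE_POCKETS[pocket])  # trusty-updates/liberty
```

Several aliases deliberately resolve to the same pocket, so operators can spell the origin in whichever form they are used to.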
3190 | === modified file 'hooks/ceilometer_hooks.py' |
3191 | --- hooks/ceilometer_hooks.py 2015-08-10 16:32:05 +0000 |
3192 | +++ hooks/ceilometer_hooks.py 2015-09-11 11:01:28 +0000 |
3193 | @@ -66,8 +66,7 @@ |
3194 | def install(): |
3195 | execd_preinstall() |
3196 | origin = config('openstack-origin') |
3197 | - if (lsb_release()['DISTRIB_CODENAME'] == 'precise' |
3198 | - and origin == 'distro'): |
3199 | + if (lsb_release()['DISTRIB_CODENAME'] == 'precise' and origin == 'distro'): |
3200 | origin = 'cloud:precise-grizzly' |
3201 | configure_installation_source(origin) |
3202 | apt_update(fatal=True) |
3203 | |
3204 | === added symlink 'hooks/ceilometer_utils.py' |
3205 | === target is u'../ceilometer_utils.py' |
3206 | === added symlink 'hooks/charmhelpers' |
3207 | === target is u'../charmhelpers' |
3208 | === added file 'tests/020-basic-trusty-liberty' |
3209 | --- tests/020-basic-trusty-liberty 1970-01-01 00:00:00 +0000 |
3210 | +++ tests/020-basic-trusty-liberty 2015-09-11 11:01:28 +0000 |
3211 | @@ -0,0 +1,11 @@ |
3212 | +#!/usr/bin/python |
3213 | + |
3214 | +"""Amulet tests on a basic ceilometer deployment on trusty-liberty.""" |
3215 | + |
3216 | +from basic_deployment import CeilometerBasicDeployment |
3217 | + |
3218 | +if __name__ == '__main__': |
3219 | + deployment = CeilometerBasicDeployment(series='trusty', |
3220 | + openstack='cloud:trusty-liberty', |
3221 | + source='cloud:trusty-updates/liberty') |
3222 | + deployment.run_tests() |
3223 | |
3224 | === renamed file 'tests/020-basic-trusty-liberty' => 'tests/020-basic-trusty-liberty.moved' |
3225 | === added file 'tests/021-basic-wily-liberty' |
3226 | --- tests/021-basic-wily-liberty 1970-01-01 00:00:00 +0000 |
3227 | +++ tests/021-basic-wily-liberty 2015-09-11 11:01:28 +0000 |
3228 | @@ -0,0 +1,9 @@ |
3229 | +#!/usr/bin/python |
3230 | + |
3231 | +"""Amulet tests on a basic ceilometer deployment on wily-liberty.""" |
3232 | + |
3233 | +from basic_deployment import CeilometerBasicDeployment |
3234 | + |
3235 | +if __name__ == '__main__': |
3236 | + deployment = CeilometerBasicDeployment(series='wily') |
3237 | + deployment.run_tests() |
3238 | |
3239 | === renamed file 'tests/021-basic-wily-liberty' => 'tests/021-basic-wily-liberty.moved' |
3240 | === modified file 'tests/basic_deployment.py' |
3241 | --- tests/basic_deployment.py 2015-08-10 17:28:32 +0000 |
3242 | +++ tests/basic_deployment.py 2015-09-11 11:01:28 +0000 |
3243 | @@ -1,9 +1,18 @@ |
3244 | #!/usr/bin/python |
3245 | |
3246 | -""" |
3247 | -Basic ceilometer functional tests. |
3248 | -""" |
3249 | +<<<<<<< TREE |
3250 | +""" |
3251 | +Basic ceilometer functional tests. |
3252 | +""" |
3253 | +======= |
3254 | +import subprocess |
3255 | + |
3256 | +""" |
3257 | +Basic ceilometer functional tests. |
3258 | +""" |
3259 | +>>>>>>> MERGE-SOURCE |
3260 | import amulet |
3261 | +import json |
3262 | import time |
3263 | from ceilometerclient.v2 import client as ceilclient |
3264 | |
3265 | @@ -107,7 +116,37 @@ |
3266 | endpoint_type='publicURL') |
3267 | self.ceil = ceilclient.Client(endpoint=ep, token=self._get_token) |
3268 | |
3269 | - def test_100_services(self): |
3270 | +<<<<<<< TREE |
3271 | + def test_100_services(self): |
3272 | +======= |
3273 | + def _run_action(self, unit_id, action, *args): |
3274 | + command = ["juju", "action", "do", "--format=json", unit_id, action] |
3275 | + command.extend(args) |
3276 | + print("Running command: %s\n" % " ".join(command)) |
3277 | + output = subprocess.check_output(command) |
3278 | + output_json = output.decode(encoding="UTF-8") |
3279 | + data = json.loads(output_json) |
3280 | + action_id = data[u'Action queued with id'] |
3281 | + return action_id |
3282 | + |
3283 | + def _wait_on_action(self, action_id): |
3284 | + command = ["juju", "action", "fetch", "--format=json", action_id] |
3285 | + while True: |
3286 | + try: |
3287 | + output = subprocess.check_output(command) |
3288 | + except Exception as e: |
3289 | + print(e) |
3290 | + return False |
3291 | + output_json = output.decode(encoding="UTF-8") |
3292 | + data = json.loads(output_json) |
3293 | + if data[u"status"] == "completed": |
3294 | + return True |
3295 | + elif data[u"status"] == "failed": |
3296 | + return False |
3297 | + time.sleep(2) |
3298 | + |
3299 | + def test_100_services(self): |
3300 | +>>>>>>> MERGE-SOURCE |
3301 | """Verify the expected services are running on the corresponding |
3302 | service units.""" |
3303 | ceilometer_svcs = [ |
3304 | @@ -450,122 +489,259 @@ |
3305 | if ret: |
3306 | message = "ceilometer config error: {}".format(ret) |
3307 | amulet.raise_status(amulet.FAIL, msg=message) |
3308 | - |
3309 | - def test_301_nova_config(self): |
3310 | - """Verify data in the nova compute nova config file""" |
3311 | - u.log.debug('Checking nova compute config file...') |
3312 | - unit = self.nova_sentry |
3313 | - conf = '/etc/nova/nova.conf' |
3314 | - expected = { |
3315 | - 'DEFAULT': { |
3316 | - 'verbose': 'False', |
3317 | - 'debug': 'False', |
3318 | - 'use_syslog': 'False', |
3319 | - 'my_ip': u.valid_ip, |
3320 | - 'dhcpbridge_flagfile': '/etc/nova/nova.conf', |
3321 | - 'dhcpbridge': '/usr/bin/nova-dhcpbridge', |
3322 | - 'logdir': '/var/log/nova', |
3323 | - 'state_path': '/var/lib/nova', |
3324 | - 'api_paste_config': '/etc/nova/api-paste.ini', |
3325 | - 'enabled_apis': 'ec2,osapi_compute,metadata', |
3326 | - 'auth_strategy': 'keystone', |
3327 | - 'instance_usage_audit': 'True', |
3328 | - 'instance_usage_audit_period': 'hour', |
3329 | - 'notify_on_state_change': 'vm_and_task_state', |
3330 | - } |
3331 | - } |
3332 | - |
3333 | - # NOTE(beisner): notification_driver is not checked like the |
3334 | - # others, as configparser does not support duplicate config |
3335 | - # options, and dicts cant have duplicate keys. |
3336 | - for section, pairs in expected.iteritems(): |
3337 | - ret = u.validate_config_data(unit, conf, section, pairs) |
3338 | - if ret: |
3339 | - message = "ceilometer config error: {}".format(ret) |
3340 | - amulet.raise_status(amulet.FAIL, msg=message) |
3341 | - |
3342 | - # Check notification_driver existence via simple grep cmd |
3343 | - lines = [('notification_driver = ' |
3344 | - 'ceilometer.compute.nova_notifier'), |
3345 | - ('notification_driver = ' |
3346 | - 'nova.openstack.common.notifier.rpc_notifier')] |
3347 | - |
3348 | - sentry_units = [unit] |
3349 | - cmds = [] |
3350 | - for line in lines: |
3351 | - cmds.append('grep "{}" {}'.format(line, conf)) |
3352 | - |
3353 | - ret = u.check_commands_on_units(cmds, sentry_units) |
3354 | - if ret: |
3355 | - amulet.raise_status(amulet.FAIL, msg=ret) |
3356 | - |
3357 | - def test_302_nova_ceilometer_config(self): |
3358 | - """Verify data in the ceilometer config file on the |
3359 | - nova-compute (ceilometer-agent) unit.""" |
3360 | - u.log.debug('Checking nova ceilometer config file...') |
3361 | - unit = self.nova_sentry |
3362 | - conf = '/etc/ceilometer/ceilometer.conf' |
3363 | - expected = { |
3364 | - 'DEFAULT': { |
3365 | - 'logdir': '/var/log/ceilometer' |
3366 | - }, |
3367 | - 'database': { |
3368 | - 'backend': 'sqlalchemy', |
3369 | - 'connection': 'sqlite:////var/lib/ceilometer/$sqlite_db' |
3370 | - } |
3371 | - } |
3372 | - |
3373 | - for section, pairs in expected.iteritems(): |
3374 | - ret = u.validate_config_data(unit, conf, section, pairs) |
3375 | - if ret: |
3376 | - message = "ceilometer config error: {}".format(ret) |
3377 | - amulet.raise_status(amulet.FAIL, msg=message) |
3378 | - |
3379 | - def test_400_api_connection(self): |
3380 | - """Simple api calls to check service is up and responding""" |
3381 | - u.log.debug('Checking api functionality...') |
3382 | - assert(self.ceil.samples.list() == []) |
3383 | - assert(self.ceil.meters.list() == []) |
3384 | - |
3385 | - # NOTE(beisner): need to add more functional tests |
3386 | - |
3387 | - def test_900_restart_on_config_change(self): |
3388 | - """Verify that the specified services are restarted when the config |
3389 | - is changed. |
3390 | - """ |
3391 | - sentry = self.ceil_sentry |
3392 | - juju_service = 'ceilometer' |
3393 | - |
3394 | - # Expected default and alternate values |
3395 | - set_default = {'debug': 'False'} |
3396 | - set_alternate = {'debug': 'True'} |
3397 | - |
3398 | - # Config file affected by juju set config change |
3399 | - conf_file = '/etc/ceilometer/ceilometer.conf' |
3400 | - |
3401 | - # Services which are expected to restart upon config change |
3402 | - services = [ |
3403 | - 'ceilometer-agent-central', |
3404 | - 'ceilometer-collector', |
3405 | - 'ceilometer-api', |
3406 | - 'ceilometer-alarm-evaluator', |
3407 | - 'ceilometer-alarm-notifier', |
3408 | - 'ceilometer-agent-notification', |
3409 | - ] |
3410 | - |
3411 | - # Make config change, check for service restarts |
3412 | - u.log.debug('Making config change on {}...'.format(juju_service)) |
3413 | - self.d.configure(juju_service, set_alternate) |
3414 | - |
3415 | - sleep_time = 40 |
3416 | - for s in services: |
3417 | - u.log.debug("Checking that service restarted: {}".format(s)) |
3418 | - if not u.service_restarted(sentry, s, |
3419 | - conf_file, sleep_time=sleep_time, |
3420 | - pgrep_full=True): |
3421 | - self.d.configure(juju_service, set_default) |
3422 | - msg = "service {} didn't restart after config change".format(s) |
3423 | - amulet.raise_status(amulet.FAIL, msg=msg) |
3424 | - sleep_time = 0 |
3425 | - |
3426 | - self.d.configure(juju_service, set_default) |
3427 | +<<<<<<< TREE |
3428 | + |
3429 | + def test_301_nova_config(self): |
3430 | + """Verify data in the nova compute nova config file""" |
3431 | + u.log.debug('Checking nova compute config file...') |
3432 | + unit = self.nova_sentry |
3433 | + conf = '/etc/nova/nova.conf' |
3434 | + expected = { |
3435 | + 'DEFAULT': { |
3436 | + 'verbose': 'False', |
3437 | + 'debug': 'False', |
3438 | + 'use_syslog': 'False', |
3439 | + 'my_ip': u.valid_ip, |
3440 | + 'dhcpbridge_flagfile': '/etc/nova/nova.conf', |
3441 | + 'dhcpbridge': '/usr/bin/nova-dhcpbridge', |
3442 | + 'logdir': '/var/log/nova', |
3443 | + 'state_path': '/var/lib/nova', |
3444 | + 'api_paste_config': '/etc/nova/api-paste.ini', |
3445 | + 'enabled_apis': 'ec2,osapi_compute,metadata', |
3446 | + 'auth_strategy': 'keystone', |
3447 | + 'instance_usage_audit': 'True', |
3448 | + 'instance_usage_audit_period': 'hour', |
3449 | + 'notify_on_state_change': 'vm_and_task_state', |
3450 | + } |
3451 | + } |
3452 | + |
3453 | + # NOTE(beisner): notification_driver is not checked like the |
3454 | + # others, as configparser does not support duplicate config |
3455 | + # options, and dicts can't have duplicate keys. |
3456 | + for section, pairs in expected.iteritems(): |
3457 | + ret = u.validate_config_data(unit, conf, section, pairs) |
3458 | + if ret: |
3459 | + message = "ceilometer config error: {}".format(ret) |
3460 | + amulet.raise_status(amulet.FAIL, msg=message) |
3461 | + |
3462 | + # Check notification_driver existence via simple grep cmd |
3463 | + lines = [('notification_driver = ' |
3464 | + 'ceilometer.compute.nova_notifier'), |
3465 | + ('notification_driver = ' |
3466 | + 'nova.openstack.common.notifier.rpc_notifier')] |
3467 | + |
3468 | + sentry_units = [unit] |
3469 | + cmds = [] |
3470 | + for line in lines: |
3471 | + cmds.append('grep "{}" {}'.format(line, conf)) |
3472 | + |
3473 | + ret = u.check_commands_on_units(cmds, sentry_units) |
3474 | + if ret: |
3475 | + amulet.raise_status(amulet.FAIL, msg=ret) |
3476 | + |
3477 | + def test_302_nova_ceilometer_config(self): |
3478 | + """Verify data in the ceilometer config file on the |
3479 | + nova-compute (ceilometer-agent) unit.""" |
3480 | + u.log.debug('Checking nova ceilometer config file...') |
3481 | + unit = self.nova_sentry |
3482 | + conf = '/etc/ceilometer/ceilometer.conf' |
3483 | + expected = { |
3484 | + 'DEFAULT': { |
3485 | + 'logdir': '/var/log/ceilometer' |
3486 | + }, |
3487 | + 'database': { |
3488 | + 'backend': 'sqlalchemy', |
3489 | + 'connection': 'sqlite:////var/lib/ceilometer/$sqlite_db' |
3490 | + } |
3491 | + } |
3492 | + |
3493 | + for section, pairs in expected.iteritems(): |
3494 | + ret = u.validate_config_data(unit, conf, section, pairs) |
3495 | + if ret: |
3496 | + message = "ceilometer config error: {}".format(ret) |
3497 | + amulet.raise_status(amulet.FAIL, msg=message) |
3498 | + |
3499 | + def test_400_api_connection(self): |
3500 | + """Simple api calls to check service is up and responding""" |
3501 | + u.log.debug('Checking api functionality...') |
3502 | + assert(self.ceil.samples.list() == []) |
3503 | + assert(self.ceil.meters.list() == []) |
3504 | + |
3505 | + # NOTE(beisner): need to add more functional tests |
3506 | + |
3507 | + def test_900_restart_on_config_change(self): |
3508 | + """Verify that the specified services are restarted when the config |
3509 | + is changed. |
3510 | + """ |
3511 | + sentry = self.ceil_sentry |
3512 | + juju_service = 'ceilometer' |
3513 | + |
3514 | + # Expected default and alternate values |
3515 | + set_default = {'debug': 'False'} |
3516 | + set_alternate = {'debug': 'True'} |
3517 | + |
3518 | + # Config file affected by juju set config change |
3519 | + conf_file = '/etc/ceilometer/ceilometer.conf' |
3520 | + |
3521 | + # Services which are expected to restart upon config change |
3522 | + services = [ |
3523 | + 'ceilometer-agent-central', |
3524 | + 'ceilometer-collector', |
3525 | + 'ceilometer-api', |
3526 | + 'ceilometer-alarm-evaluator', |
3527 | + 'ceilometer-alarm-notifier', |
3528 | + 'ceilometer-agent-notification', |
3529 | + ] |
3530 | + |
3531 | + # Make config change, check for service restarts |
3532 | + u.log.debug('Making config change on {}...'.format(juju_service)) |
3533 | + self.d.configure(juju_service, set_alternate) |
3534 | + |
3535 | + sleep_time = 40 |
3536 | + for s in services: |
3537 | + u.log.debug("Checking that service restarted: {}".format(s)) |
3538 | + if not u.service_restarted(sentry, s, |
3539 | + conf_file, sleep_time=sleep_time, |
3540 | + pgrep_full=True): |
3541 | + self.d.configure(juju_service, set_default) |
3542 | + msg = "service {} didn't restart after config change".format(s) |
3543 | + amulet.raise_status(amulet.FAIL, msg=msg) |
3544 | + sleep_time = 0 |
3545 | + |
3546 | + self.d.configure(juju_service, set_default) |
3547 | +======= |
3548 | + |
3549 | + def test_301_nova_config(self): |
3550 | + """Verify data in the nova compute nova config file""" |
3551 | + u.log.debug('Checking nova compute config file...') |
3552 | + unit = self.nova_sentry |
3553 | + conf = '/etc/nova/nova.conf' |
3554 | + expected = { |
3555 | + 'DEFAULT': { |
3556 | + 'verbose': 'False', |
3557 | + 'debug': 'False', |
3558 | + 'use_syslog': 'False', |
3559 | + 'my_ip': u.valid_ip, |
3560 | + 'dhcpbridge_flagfile': '/etc/nova/nova.conf', |
3561 | + 'dhcpbridge': '/usr/bin/nova-dhcpbridge', |
3562 | + 'logdir': '/var/log/nova', |
3563 | + 'state_path': '/var/lib/nova', |
3564 | + 'api_paste_config': '/etc/nova/api-paste.ini', |
3565 | + 'enabled_apis': 'ec2,osapi_compute,metadata', |
3566 | + 'auth_strategy': 'keystone', |
3567 | + 'instance_usage_audit': 'True', |
3568 | + 'instance_usage_audit_period': 'hour', |
3569 | + 'notify_on_state_change': 'vm_and_task_state', |
3570 | + } |
3571 | + } |
3572 | + |
3573 | + # NOTE(beisner): notification_driver is not checked like the |
3574 | + # others, as configparser does not support duplicate config |
3575 | + # options, and dicts can't have duplicate keys. |
3576 | + for section, pairs in expected.iteritems(): |
3577 | + ret = u.validate_config_data(unit, conf, section, pairs) |
3578 | + if ret: |
3579 | + message = "ceilometer config error: {}".format(ret) |
3580 | + amulet.raise_status(amulet.FAIL, msg=message) |
3581 | + |
3582 | + # Check notification_driver existence via simple grep cmd |
3583 | + lines = [('notification_driver = ' |
3584 | + 'ceilometer.compute.nova_notifier'), |
3585 | + ('notification_driver = ' |
3586 | + 'nova.openstack.common.notifier.rpc_notifier')] |
3587 | + |
3588 | + sentry_units = [unit] |
3589 | + cmds = [] |
3590 | + for line in lines: |
3591 | + cmds.append('grep "{}" {}'.format(line, conf)) |
3592 | + |
3593 | + ret = u.check_commands_on_units(cmds, sentry_units) |
3594 | + if ret: |
3595 | + amulet.raise_status(amulet.FAIL, msg=ret) |
3596 | + |
3597 | + def test_302_nova_ceilometer_config(self): |
3598 | + """Verify data in the ceilometer config file on the |
3599 | + nova-compute (ceilometer-agent) unit.""" |
3600 | + u.log.debug('Checking nova ceilometer config file...') |
3601 | + unit = self.nova_sentry |
3602 | + conf = '/etc/ceilometer/ceilometer.conf' |
3603 | + expected = { |
3604 | + 'DEFAULT': { |
3605 | + 'logdir': '/var/log/ceilometer' |
3606 | + }, |
3607 | + 'database': { |
3608 | + 'backend': 'sqlalchemy', |
3609 | + 'connection': 'sqlite:////var/lib/ceilometer/$sqlite_db' |
3610 | + } |
3611 | + } |
3612 | + |
3613 | + for section, pairs in expected.iteritems(): |
3614 | + ret = u.validate_config_data(unit, conf, section, pairs) |
3615 | + if ret: |
3616 | + message = "ceilometer config error: {}".format(ret) |
3617 | + amulet.raise_status(amulet.FAIL, msg=message) |
3618 | + |
3619 | + def test_400_api_connection(self): |
3620 | + """Simple api calls to check service is up and responding""" |
3621 | + u.log.debug('Checking api functionality...') |
3622 | + assert(self.ceil.samples.list() == []) |
3623 | + assert(self.ceil.meters.list() == []) |
3624 | + |
3625 | + # NOTE(beisner): need to add more functional tests |
3626 | + |
3627 | + def test_900_restart_on_config_change(self): |
3628 | + """Verify that the specified services are restarted when the config |
3629 | + is changed. |
3630 | + """ |
3631 | + sentry = self.ceil_sentry |
3632 | + juju_service = 'ceilometer' |
3633 | + |
3634 | + # Expected default and alternate values |
3635 | + set_default = {'debug': 'False'} |
3636 | + set_alternate = {'debug': 'True'} |
3637 | + |
3638 | + # Config file affected by juju set config change |
3639 | + conf_file = '/etc/ceilometer/ceilometer.conf' |
3640 | + |
3641 | + # Services which are expected to restart upon config change |
3642 | + services = [ |
3643 | + 'ceilometer-agent-central', |
3644 | + 'ceilometer-collector', |
3645 | + 'ceilometer-api', |
3646 | + 'ceilometer-alarm-evaluator', |
3647 | + 'ceilometer-alarm-notifier', |
3648 | + 'ceilometer-agent-notification', |
3649 | + ] |
3650 | + |
3651 | + # Make config change, check for service restarts |
3652 | + u.log.debug('Making config change on {}...'.format(juju_service)) |
3653 | + self.d.configure(juju_service, set_alternate) |
3654 | + |
3655 | + sleep_time = 40 |
3656 | + for s in services: |
3657 | + u.log.debug("Checking that service restarted: {}".format(s)) |
3658 | + if not u.service_restarted(sentry, s, |
3659 | + conf_file, sleep_time=sleep_time, |
3660 | + pgrep_full=True): |
3661 | + self.d.configure(juju_service, set_default) |
3662 | + msg = "service {} didn't restart after config change".format(s) |
3663 | + amulet.raise_status(amulet.FAIL, msg=msg) |
3664 | + sleep_time = 0 |
3665 | + |
3666 | + self.d.configure(juju_service, set_default) |
3667 | + |
3668 | + def test_1000_pause_and_resume(self): |
3669 | + """The services can be paused and resumed.""" |
3670 | + unit_name = "ceilometer/0" |
3671 | + unit = self.d.sentry.unit[unit_name] |
3672 | + |
3673 | + assert u.status_get(unit)[0] == "unknown" |
3674 | + |
3675 | + action_id = self._run_action(unit_name, "pause") |
3676 | + assert self._wait_on_action(action_id), "Pause action failed." |
3677 | + assert u.status_get(unit)[0] == "maintenance" |
3678 | + |
3679 | + action_id = self._run_action(unit_name, "resume") |
3680 | + assert self._wait_on_action(action_id), "Resume action failed." |
3681 | + assert u.status_get(unit)[0] == "active" |
3682 | +>>>>>>> MERGE-SOURCE |
3683 | |
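The _wait_on_action() helper added to basic_deployment.py polls `juju action fetch --format=json` every two seconds until the action reports "completed" or "failed". Its decision logic can be exercised on its own by feeding in canned JSON, with the subprocess call omitted:

```python
import json

# Standalone sketch of _wait_on_action()'s status handling: given the
# JSON that `juju action fetch --format=json` would return, decide
# whether to succeed, fail, or keep polling.
def action_outcome(raw_json):
    data = json.loads(raw_json)
    if data['status'] == 'completed':
        return True
    if data['status'] == 'failed':
        return False
    return None  # still running: the caller sleeps and polls again

print(action_outcome('{"status": "completed"}'))  # True
print(action_outcome('{"status": "running"}'))    # None
```

test_1000_pause_and_resume builds on this loop: it runs the pause action, asserts the unit's workload status reaches "maintenance", then resumes and asserts "active".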
3684 | === modified file 'tests/charmhelpers/contrib/amulet/utils.py' |
3685 | --- tests/charmhelpers/contrib/amulet/utils.py 2015-08-10 16:32:05 +0000 |
3686 | +++ tests/charmhelpers/contrib/amulet/utils.py 2015-09-11 11:01:28 +0000 |
3687 | @@ -14,17 +14,37 @@ |
3688 | # You should have received a copy of the GNU Lesser General Public License |
3689 | # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
3690 | |
3691 | +<<<<<<< TREE |
3692 | import amulet |
3693 | import ConfigParser |
3694 | import distro_info |
3695 | +======= |
3696 | +>>>>>>> MERGE-SOURCE |
3697 | import io |
3698 | +import json |
3699 | import logging |
3700 | import os |
3701 | import re |
3702 | +<<<<<<< TREE |
3703 | import six |
3704 | +======= |
3705 | +import subprocess |
3706 | +>>>>>>> MERGE-SOURCE |
3707 | import sys |
3708 | import time |
3709 | +<<<<<<< TREE |
3710 | import urlparse |
3711 | +======= |
3712 | + |
3713 | +import amulet |
3714 | +import distro_info |
3715 | +import six |
3716 | +from six.moves import configparser |
3717 | +if six.PY3: |
3718 | + from urllib import parse as urlparse |
3719 | +else: |
3720 | + import urlparse |
3721 | +>>>>>>> MERGE-SOURCE |
3722 | |
3723 | |
3724 | class AmuletUtils(object): |
3725 | @@ -122,6 +142,7 @@ |
3726 | return "command `{}` returned {}".format(cmd, str(code)) |
3727 | return None |
3728 | |
3729 | +<<<<<<< TREE |
3730 | def validate_services_by_name(self, sentry_services): |
3731 | """Validate system service status by service name, automatically |
3732 | detecting init system based on Ubuntu release codename. |
3733 | @@ -157,6 +178,47 @@ |
3734 | return "command `{}` returned {}".format(cmd, str(code)) |
3735 | return None |
3736 | |
3737 | +======= |
3738 | + def validate_services_by_name(self, sentry_services): |
3739 | + """Validate system service status by service name, automatically |
3740 | + detecting init system based on Ubuntu release codename. |
3741 | + |
3742 | + :param sentry_services: dict with sentry keys and svc list values |
3743 | + :returns: None if successful, Failure string message otherwise |
3744 | + """ |
3745 | + self.log.debug('Checking status of system services...') |
3746 | + |
3747 | + # Point at which systemd became a thing |
3748 | + systemd_switch = self.ubuntu_releases.index('vivid') |
3749 | + |
3750 | + for sentry_unit, services_list in six.iteritems(sentry_services): |
3751 | + # Get lsb_release codename from unit |
3752 | + release, ret = self.get_ubuntu_release_from_sentry(sentry_unit) |
3753 | + if ret: |
3754 | + return ret |
3755 | + |
3756 | + for service_name in services_list: |
3757 | + if (self.ubuntu_releases.index(release) >= systemd_switch or |
3758 | + service_name in ['rabbitmq-server', 'apache2']): |
3759 | + # init is systemd (or regular sysv) |
3760 | + cmd = 'sudo service {} status'.format(service_name) |
3761 | + output, code = sentry_unit.run(cmd) |
3762 | + service_running = code == 0 |
3763 | + elif self.ubuntu_releases.index(release) < systemd_switch: |
3764 | + # init is upstart |
3765 | + cmd = 'sudo status {}'.format(service_name) |
3766 | + output, code = sentry_unit.run(cmd) |
+        service_running = code == 0 and "start/running" in output
+
+        self.log.debug('{} `{}` returned '
+                       '{}'.format(sentry_unit.info['unit_name'],
+                                   cmd, code))
+        if not service_running:
+            return u"command `{}` returned {} {}".format(
+                cmd, output, str(code))
+        return None
+
+>>>>>>> MERGE-SOURCE
     def _get_config(self, unit, filename):
         """Get a ConfigParser object for parsing a unit's config file."""
         file_contents = unit.file_contents(filename)
@@ -164,7 +226,7 @@
         # NOTE(beisner): by default, ConfigParser does not handle options
         # with no value, such as the flags used in the mysql my.cnf file.
         # https://bugs.python.org/issue7005
-        config = ConfigParser.ConfigParser(allow_no_value=True)
+        config = configparser.ConfigParser(allow_no_value=True)
         config.readfp(io.StringIO(file_contents))
         return config
 
@@ -413,6 +475,7 @@
 
     def endpoint_error(self, name, data):
         return 'unexpected endpoint data in {} - {}'.format(name, data)
+<<<<<<< TREE
 
     def get_ubuntu_releases(self):
         """Return a list of all Ubuntu releases in order of release."""
@@ -531,3 +594,185 @@
             return 'Dicts within list are not identical'
 
         return None
+=======
+
+    def get_ubuntu_releases(self):
+        """Return a list of all Ubuntu releases in order of release."""
+        _d = distro_info.UbuntuDistroInfo()
+        _release_list = _d.all
+        self.log.debug('Ubuntu release list: {}'.format(_release_list))
+        return _release_list
+
+    def file_to_url(self, file_rel_path):
+        """Convert a relative file path to a file URL."""
+        _abs_path = os.path.abspath(file_rel_path)
+        return urlparse.urlparse(_abs_path, scheme='file').geturl()
+
+    def check_commands_on_units(self, commands, sentry_units):
+        """Check that all commands in a list exit zero on all
+        sentry units in a list.
+
+        :param commands: list of bash commands
+        :param sentry_units: list of sentry unit pointers
+        :returns: None if successful; Failure message otherwise
+        """
+        self.log.debug('Checking exit codes for {} commands on {} '
+                       'sentry units...'.format(len(commands),
+                                                len(sentry_units)))
+        for sentry_unit in sentry_units:
+            for cmd in commands:
+                output, code = sentry_unit.run(cmd)
+                if code == 0:
+                    self.log.debug('{} `{}` returned {} '
+                                   '(OK)'.format(sentry_unit.info['unit_name'],
+                                                 cmd, code))
+                else:
+                    return ('{} `{}` returned {} '
+                            '{}'.format(sentry_unit.info['unit_name'],
+                                        cmd, code, output))
+        return None
+
+    def get_process_id_list(self, sentry_unit, process_name,
+                            expect_success=True):
+        """Get a list of process ID(s) from a single sentry juju unit
+        for a single process name.
+
+        :param sentry_unit: Amulet sentry instance (juju unit)
+        :param process_name: Process name
+        :param expect_success: If False, expect the PID to be missing,
+            raise if it is present.
+        :returns: List of process IDs
+        """
+        cmd = 'pidof -x {}'.format(process_name)
+        if not expect_success:
+            cmd += " || exit 0 && exit 1"
+        output, code = sentry_unit.run(cmd)
+        if code != 0:
+            msg = ('{} `{}` returned {} '
+                   '{}'.format(sentry_unit.info['unit_name'],
+                               cmd, code, output))
+            amulet.raise_status(amulet.FAIL, msg=msg)
+        return str(output).split()
+
+    def get_unit_process_ids(self, unit_processes, expect_success=True):
+        """Construct a dict containing unit sentries, process names, and
+        process IDs.
+
+        :param unit_processes: A dictionary of Amulet sentry instance
+            to list of process names.
+        :param expect_success: if False expect the processes to not be
+            running, raise if they are.
+        :returns: Dictionary of Amulet sentry instance to dictionary
+            of process names to PIDs.
+        """
+        pid_dict = {}
+        for sentry_unit, process_list in six.iteritems(unit_processes):
+            pid_dict[sentry_unit] = {}
+            for process in process_list:
+                pids = self.get_process_id_list(
+                    sentry_unit, process, expect_success=expect_success)
+                pid_dict[sentry_unit].update({process: pids})
+        return pid_dict
+
+    def validate_unit_process_ids(self, expected, actual):
+        """Validate process id quantities for services on units."""
+        self.log.debug('Checking units for running processes...')
+        self.log.debug('Expected PIDs: {}'.format(expected))
+        self.log.debug('Actual PIDs: {}'.format(actual))
+
+        if len(actual) != len(expected):
+            return ('Unit count mismatch. expected, actual: {}, '
+                    '{} '.format(len(expected), len(actual)))
+
+        for (e_sentry, e_proc_names) in six.iteritems(expected):
+            e_sentry_name = e_sentry.info['unit_name']
+            if e_sentry in actual.keys():
+                a_proc_names = actual[e_sentry]
+            else:
+                return ('Expected sentry ({}) not found in actual dict data.'
+                        '{}'.format(e_sentry_name, e_sentry))
+
+            if len(e_proc_names.keys()) != len(a_proc_names.keys()):
+                return ('Process name count mismatch. expected, actual: {}, '
+                        '{}'.format(len(expected), len(actual)))
+
+            for (e_proc_name, e_pids_length), (a_proc_name, a_pids) in \
+                    zip(e_proc_names.items(), a_proc_names.items()):
+                if e_proc_name != a_proc_name:
+                    return ('Process name mismatch. expected, actual: {}, '
+                            '{}'.format(e_proc_name, a_proc_name))
+
+                a_pids_length = len(a_pids)
+                fail_msg = ('PID count mismatch. {} ({}) expected, actual: '
+                            '{}, {} ({})'.format(e_sentry_name, e_proc_name,
+                                                 e_pids_length, a_pids_length,
+                                                 a_pids))
+
+                # If expected is not bool, ensure PID quantities match
+                if not isinstance(e_pids_length, bool) and \
+                        a_pids_length != e_pids_length:
+                    return fail_msg
+                # If expected is bool True, ensure 1 or more PIDs exist
+                elif isinstance(e_pids_length, bool) and \
+                        e_pids_length is True and a_pids_length < 1:
+                    return fail_msg
+                # If expected is bool False, ensure 0 PIDs exist
+                elif isinstance(e_pids_length, bool) and \
+                        e_pids_length is False and a_pids_length != 0:
+                    return fail_msg
+                else:
+                    self.log.debug('PID check OK: {} {} {}: '
+                                   '{}'.format(e_sentry_name, e_proc_name,
+                                               e_pids_length, a_pids))
+        return None
+
+    def validate_list_of_identical_dicts(self, list_of_dicts):
+        """Check that all dicts within a list are identical."""
+        hashes = []
+        for _dict in list_of_dicts:
+            hashes.append(hash(frozenset(_dict.items())))
+
+        self.log.debug('Hashes: {}'.format(hashes))
+        if len(set(hashes)) == 1:
+            self.log.debug('Dicts within list are identical')
+        else:
+            return 'Dicts within list are not identical'
+
+        return None
+
+    def run_action(self, unit_sentry, action,
+                   _check_output=subprocess.check_output):
+        """Run the named action on a given unit sentry.
+
+        _check_output parameter is used for dependency injection.
+
+        @return action_id.
+        """
+        unit_id = unit_sentry.info["unit_name"]
+        command = ["juju", "action", "do", "--format=json", unit_id, action]
+        self.log.info("Running command: %s\n" % " ".join(command))
+        output = _check_output(command, universal_newlines=True)
+        data = json.loads(output)
+        action_id = data[u'Action queued with id']
+        return action_id
+
+    def wait_on_action(self, action_id, _check_output=subprocess.check_output):
+        """Wait for a given action, returning if it completed or not.

+        _check_output parameter is used for dependency injection.
+        """
+        command = ["juju", "action", "fetch", "--format=json", "--wait=0",
+                   action_id]
+        output = _check_output(command, universal_newlines=True)
+        data = json.loads(output)
+        return data.get(u"status") == "completed"
+
+    def status_get(self, unit):
+        """Return the current service status of this unit."""
+        raw_status, return_code = unit.run(
+            "status-get --format=json --include-data")
+        if return_code != 0:
+            return ("unknown", "")
+        status = json.loads(raw_status)
+        return (status["status"], status["message"])
+>>>>>>> MERGE-SOURCE
 
=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
--- tests/charmhelpers/contrib/openstack/amulet/deployment.py	2015-08-10 16:32:05 +0000
+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py	2015-09-11 11:01:28 +0000
@@ -44,7 +44,7 @@
         Determine if the local branch being tested is derived from its
         stable or next (dev) branch, and based on this, use the corresonding
         stable or next branches for the other_services."""
-        base_charms = ['mysql', 'mongodb']
+        base_charms = ['mysql', 'mongodb', 'nrpe']
 
         if self.series in ['precise', 'trusty']:
             base_series = self.series
@@ -79,9 +79,15 @@
             services.append(this_service)
         use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
                       'ceph-osd', 'ceph-radosgw']
+<<<<<<< TREE
         # Most OpenStack subordinate charms do not expose an origin option
         # as that is controlled by the principle.
         ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch']
+=======
+        # Most OpenStack subordinate charms do not expose an origin option
+        # as that is controlled by the principle.
+        ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
+>>>>>>> MERGE-SOURCE
 
         if self.openstack:
             for svc in services:

=== added file 'tests/tests.yaml'
--- tests/tests.yaml	1970-01-01 00:00:00 +0000
+++ tests/tests.yaml	2015-09-11 11:01:28 +0000
@@ -0,0 +1,19 @@
+bootstrap: true
+reset: true
+virtualenv: true
+makefile:
+  - lint
+  - test
+sources:
+  - ppa:juju/stable
+packages:
+  - amulet
+  - python-amulet
+  - python-ceilometerclient
+  - python-cinderclient
+  - python-distro-info
+  - python-glanceclient
+  - python-heatclient
+  - python-keystoneclient
+  - python-novaclient
+  - python-swiftclient

=== renamed file 'tests/tests.yaml' => 'tests/tests.yaml.moved'
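The `run_action`/`wait_on_action` helpers added in `charmhelpers/contrib/openstack/amulet/utils.py` above shell out to the `juju action do`/`juju action fetch` CLI (the syntax current for this charm series) and parse the JSON output. A minimal standalone sketch of the same pattern, using the same `_check_output` injection point the diff uses so it can run without a live Juju environment (the `fake_check_output` stub, the `ceilometer/0` unit name, and the action id are illustrative, not taken from the branch):

```python
import json
import subprocess


def run_action(unit_name, action, _check_output=subprocess.check_output):
    """Queue the named action on a unit and return its action id.

    _check_output is injectable for testing, mirroring the helpers
    in this merge proposal.
    """
    command = ["juju", "action", "do", "--format=json", unit_name, action]
    output = _check_output(command, universal_newlines=True)
    data = json.loads(output)
    return data["Action queued with id"]


def wait_on_action(action_id, _check_output=subprocess.check_output):
    """Fetch an action's result and report whether it completed."""
    command = ["juju", "action", "fetch", "--format=json", "--wait=0",
               action_id]
    output = _check_output(command, universal_newlines=True)
    return json.loads(output).get("status") == "completed"


def fake_check_output(cmd, universal_newlines=True):
    """Canned responses standing in for the juju CLI (illustrative)."""
    if "do" in cmd:
        return json.dumps({"Action queued with id": "1234"})
    return json.dumps({"status": "completed"})


if __name__ == "__main__":
    action_id = run_action("ceilometer/0", "pause",
                           _check_output=fake_check_output)
    print(action_id)                 # 1234
    print(wait_on_action(action_id,
                         _check_output=fake_check_output))  # True
```

The dependency-injection parameter is what lets the branch's `actions/actions.py` pause/resume tests exercise these helpers without bootstrapping an environment.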
charm_lint_check #9719 ceilometer for tealeg mp270748
LINT OK: passed
Build: http://10.245.162.77:8080/job/charm_lint_check/9719/
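The `validate_unit_process_ids` helper in the diff compares an `expected` mapping of sentry to `{process_name: count}`, where the count may also be a bool: `True` means "one or more PIDs", `False` means "no PIDs", and an int means an exact PID count. A cut-down illustration of just that comparison rule (the `check_pid_count` function name and the sample PID lists are invented for the example):

```python
def check_pid_count(expected, actual_pids):
    """Apply the expected-count rule from validate_unit_process_ids
    to a single process.

    expected: int (exact PID count), True (at least one PID), or
        False (no PIDs).
    actual_pids: list of PID strings found on the unit.
    """
    n = len(actual_pids)
    # bool is checked first because isinstance(True, int) is also True
    if isinstance(expected, bool):
        return n >= 1 if expected else n == 0
    return n == expected


if __name__ == "__main__":
    print(check_pid_count(2, ["100", "101"]))   # True: exact match
    print(check_pid_count(True, ["100"]))       # True: at least one
    print(check_pid_count(False, []))           # True: none expected
    print(check_pid_count(True, []))            # False: missing process
```

The bool-before-int ordering matters: in Python `True` is an instance of `int`, so testing the int case first would silently treat `True` as the count 1.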