Merge lp:~tealeg/charms/trusty/percona-cluster/pause-and-resume into lp:~openstack-charmers-archive/charms/trusty/percona-cluster/next

Proposed by Geoff Teale
Status: Merged
Merged at revision: 74
Proposed branch: lp:~tealeg/charms/trusty/percona-cluster/pause-and-resume
Merge into: lp:~openstack-charmers-archive/charms/trusty/percona-cluster/next
Diff against target: 2140 lines (+1754/-89)
15 files modified
actions.yaml (+4/-0)
actions/actions.py (+51/-0)
charm-helpers-tests.yaml (+1/-0)
charmhelpers/contrib/network/ip.py (+5/-1)
charmhelpers/core/hookenv.py (+11/-9)
charmhelpers/core/host.py (+32/-16)
charmhelpers/core/kernel.py (+68/-0)
tests/00-setup (+2/-0)
tests/31-test-pause-and-resume.py (+38/-0)
tests/charmhelpers/contrib/amulet/deployment.py (+4/-2)
tests/charmhelpers/contrib/amulet/utils.py (+243/-52)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+23/-9)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+359/-0)
tests/charmhelpers/core/__init__.py (+15/-0)
tests/charmhelpers/core/hookenv.py (+898/-0)
To merge this branch: bzr merge lp:~tealeg/charms/trusty/percona-cluster/pause-and-resume
Reviewer Review Type Date Requested Status
Chris Glass (community) Approve
Adam Collard (community) Approve
Review via email: mp+268238@code.launchpad.net

Description of the change

This branch adds pause and resume actions.

In addition to that central goal it also:
 - includes an updated version of charmhelpers
 - defines tests for the actions.
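For context, each action is wired up as a symlink to a single actions.py (actions/pause and actions/resume both point at it), which dispatches on the name it was invoked as. A minimal sketch of that dispatch pattern, with placeholder handlers standing in for the real pause/resume functions:

```python
import os

# Placeholder handlers; the real ones call service_pause/service_resume
# and set the workload status.
ACTIONS = {
    "pause": lambda args: "paused",
    "resume": lambda args: "resumed",
}


def dispatch(argv):
    # Juju runs the symlink (e.g. actions/pause), so the basename of
    # argv[0] names the action to run.
    name = os.path.basename(argv[0])
    try:
        action = ACTIONS[name]
    except KeyError:
        return "Action %s undefined" % name
    return action(argv)


print(dispatch(["actions/pause"]))
```

Returning the error string for an unknown action, rather than raising, mirrors how the branch reports that case back to the caller.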

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #7599 percona-cluster-next for tealeg mp268238
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/7599/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #8197 percona-cluster-next for tealeg mp268238
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/8197/

Revision history for this message
Ryan Beisner (1chb1n) wrote :

re: amulet tests...

Thank you for your work on this. These will be great test additions.

FYI - The percona-cluster charm's amulet tests aren't consistent with the other os-charms, as you may have noticed. I've got its amulet test refactor on my list for this cycle, to make it consistent with other os-charms in how they exercise each of the currently-supported ubuntu:openstack release combos. Be aware that, as written, the existing and proposed amulet tests will only exercise Trusty-Icehouse in automation. When I refactor the others, I'll be sure to preserve your amulet tests and pull those into the run-on-every-combo pivot.

Questions, suggestions re: this proposal:

Can you re-use existing amulet helpers instead of adding new local helpers? I know a few things just landed there with regard to actions and service checking in amulet tests.

For local helpers which are not yet represented in amulet helpers, yet potentially useful in other charm tests...

If there are OpenStack-specific, amulet-specific helpers which are useful in other charm tests, please land those in charmhelpers/contrib/openstack/amulet/utils.py.

If there are non-OpenStack-specific, amulet-specific helpers which are useful in other charm tests, please land those in charmhelpers/contrib/amulet/utils.py.

Feel free to holler with any questions. Thanks again!

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #5843 percona-cluster-next for tealeg mp268238
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/5843/

Revision history for this message
Geoff Teale (tealeg) wrote :

Hi Ryan,

Thanks for your feedback. Sorry we had so much variation and duplication in our submissions; that wasn't the plan.

You should find the latest revision of this MP more in line with Adam and Alberto's work.

--
Geoff

> re: amulet tests...
>
> Thank you for your work on this. These will be great test additions.
>
> FYI - The percona-cluster charm's amulet tests aren't consistent with the
> other os-charms, as you may have noticed. I've got its amulet test refactor
> on my list for this cycle, to make it consistent with other os-charms in how
> they exercise each of the currently-supported ubuntu:openstack release combos.
> Be aware that, as written, the existing and proposed amulet tests will only
> exercise Trusty-Icehouse in automation. When I refactor the others, I'll be
> sure to preserve your amulet tests and pull those into the run-on-every-combo
> pivot.
>
> Questions, suggestions re: this proposal:
>
> Can you re-use existing amulet helpers instead of adding new local helpers? I
> know a few things just landed there with regard to actions and service
> checking in amulet tests.
>
> For local helpers which are not yet represented in amulet helpers, yet
> potentially useful in other charm tests...
>
> If there are OpenStack-specific, amulet-specific helpers which are useful in
> other charm tests, please land those in
> charmhelpers/contrib/openstack/amulet/utils.py.
>
> If there are non-OpenStack-specific, amulet-specific helpers which are useful
> in other charm tests, please land those in
> charmhelpers/contrib/amulet/utils.py.
>
> Feel free to holler with any questions. Thanks again!

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #7817 percona-cluster-next for tealeg mp268238
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/7817/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #8423 percona-cluster-next for tealeg mp268238
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/8423/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #5930 percona-cluster-next for tealeg mp268238
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/12136124/
Build: http://10.245.162.77:8080/job/charm_amulet_test/5930/

Revision history for this message
Ryan Beisner (1chb1n) wrote :

Looks like the 00-setup needs to be updated. We ran into that dep issue elsewhere.

This should be the ticket:
http://paste.ubuntu.com/12136867/

Revision history for this message
Ryan Beisner (1chb1n) wrote :

Otherwise, pending passing tests, looks good to me. Thank you for your work on this!

Revision history for this message
Geoff Teale (tealeg) wrote :

Cheers Ryan, will do!

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #8477 percona-cluster-next for tealeg mp268238
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/8477/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #7868 percona-cluster-next for tealeg mp268238
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/7868/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #7869 percona-cluster-next for tealeg mp268238
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/7869/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #8478 percona-cluster-next for tealeg mp268238
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/8478/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #5936 percona-cluster-next for tealeg mp268238
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
ERROR subprocess encountered error code 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/12139634/
Build: http://10.245.162.77:8080/job/charm_amulet_test/5936/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #5938 percona-cluster-next for tealeg mp268238
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/12139740/
Build: http://10.245.162.77:8080/job/charm_amulet_test/5938/

Revision history for this message
Ryan Beisner (1chb1n) wrote :

I believe if you rebase and resolve any resulting conflicts, the amulet tests in your branch will probably pass. We landed some necessary updates for other efforts, also fixed some amulet test dependency issues along the way.

Revision history for this message
Geoff Teale (tealeg) wrote :

> I believe if you rebase and resolve any resulting conflicts, the amulet tests
> in your branch will probably pass. We landed some necessary updates for other
> efforts, also fixed some amulet test dependency issues along the way.

Hi Ryan, it should be good to go now.

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #9091 percona-cluster-next for tealeg mp268238
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/12237361/
Build: http://10.245.162.77:8080/job/charm_lint_check/9091/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #8400 percona-cluster-next for tealeg mp268238
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/8400/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #9092 percona-cluster-next for tealeg mp268238
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/9092/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #8401 percona-cluster-next for tealeg mp268238
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/8401/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6147 percona-cluster-next for tealeg mp268238
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/12238554/
Build: http://10.245.162.77:8080/job/charm_amulet_test/6147/

Revision history for this message
Ryan Beisner (1chb1n) wrote :

It looks like the tests/00-setup here needs to be updated. We're working on updating them all, but if you can do that here, it should resolve the import error on the amulet test.

Example in-flight:
http://bazaar.launchpad.net/~1chb1n/charms/trusty/openstack-dashboard/next-amulet-fixup-1507/view/head:/tests/00-setup

Revision history for this message
Geoff Teale (tealeg) wrote :

> It looks like the tests/00-setup here needs to be updated. We're working on
> updating them all, but if you can do that here, that should resolve the import
> error on the amulet test.
>
> Example in-flight:
> http://bazaar.launchpad.net/~1chb1n/charms/trusty/openstack-dashboard/next-
> amulet-fixup-1507/view/head:/tests/00-setup

Hi Ryan,

It should already have exactly that content as I've merged in the most recent percona-cluster/next which has that. I'll poke around a little and see if I can find out the issue.

Revision history for this message
Geoff Teale (tealeg) wrote :

OK, I managed to reproduce the error by removing the python3-distro-info package from my VM. It seems that we need to explicitly pull in both python-distro-info and python3-distro-info. Let's see if that gets the Amulet tests to pass here too.
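A sketch of the dependency fix, assuming the stock Ubuntu package names; the actual change lands in tests/00-setup:

```shell
# Both bindings are needed, since the amulet harness imports
# distro_info under both Python 2 and Python 3.
sudo apt-get install --yes python-distro-info python3-distro-info
```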

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #8463 percona-cluster-next for tealeg mp268238
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/8463/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #9157 percona-cluster-next for tealeg mp268238
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/9157/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6169 percona-cluster-next for tealeg mp268238
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/6169/

Revision history for this message
Adam Collard (adam-collard) wrote :

N/F for the missing asserts about status, otherwise looks good.

review: Needs Fixing
Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #8529 percona-cluster-next for tealeg mp268238
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/8529/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #9228 percona-cluster-next for tealeg mp268238
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/9228/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6190 percona-cluster-next for tealeg mp268238
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/12252191/
Build: http://10.245.162.77:8080/job/charm_amulet_test/6190/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #9239 percona-cluster-next for tealeg mp268238
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/12254011/
Build: http://10.245.162.77:8080/job/charm_lint_check/9239/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #8540 percona-cluster-next for tealeg mp268238
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/8540/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #9242 percona-cluster-next for tealeg mp268238
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/12254270/
Build: http://10.245.162.77:8080/job/charm_lint_check/9242/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #8543 percona-cluster-next for tealeg mp268238
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/8543/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6201 percona-cluster-next for tealeg mp268238
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/12255036/
Build: http://10.245.162.77:8080/job/charm_amulet_test/6201/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #9321 percona-cluster-next for tealeg mp268238
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/12262221/
Build: http://10.245.162.77:8080/job/charm_lint_check/9321/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #8620 percona-cluster-next for tealeg mp268238
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/8620/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #9324 percona-cluster-next for tealeg mp268238
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/12262815/
Build: http://10.245.162.77:8080/job/charm_lint_check/9324/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #8622 percona-cluster-next for tealeg mp268238
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/8622/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6234 percona-cluster-next for tealeg mp268238
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/6234/

Revision history for this message
Geoff Teale (tealeg) wrote :

> N/F for the missing asserts about status, otherwise looks good.

It should now be good, using the pending status_get support from charmhelpers.
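The updated status_get helper shells out to `status-get --format=json --include-data` and returns a (state, message) tuple; a minimal sketch of that parsing step, using a hypothetical captured payload in place of the real subprocess call:

```python
import json

# Hypothetical raw output from `status-get --format=json --include-data`.
raw_status = b'{"status": "maintenance", "message": "Paused", "status-data": {}}'

# Decode and pick out the (state, message) pair, as the new helper does.
status = json.loads(raw_status.decode("UTF-8"))
workload_state = (status["status"], status["message"])
print(workload_state)
```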

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #8932 percona-cluster-next for tealeg mp268238
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/8932/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #9714 percona-cluster-next for tealeg mp268238
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/12328261/
Build: http://10.245.162.77:8080/job/charm_lint_check/9714/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6339 percona-cluster-next for tealeg mp268238
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/12329079/
Build: http://10.245.162.77:8080/job/charm_amulet_test/6339/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #9001 percona-cluster-next for tealeg mp268238
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/9001/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #9773 percona-cluster-next for tealeg mp268238
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/12337429/
Build: http://10.245.162.77:8080/job/charm_lint_check/9773/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6355 percona-cluster-next for tealeg mp268238
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/12338262/
Build: http://10.245.162.77:8080/job/charm_amulet_test/6355/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #9213 percona-cluster-next for tealeg mp268238
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/9213/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #10047 percona-cluster-next for tealeg mp268238
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/12417154/
Build: http://10.245.162.77:8080/job/charm_lint_check/10047/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #9214 percona-cluster-next for tealeg mp268238
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/9214/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #10048 percona-cluster-next for tealeg mp268238
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/10048/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6433 percona-cluster-next for tealeg mp268238
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/6433/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6434 percona-cluster-next for tealeg mp268238
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/6434/

Revision history for this message
Adam Collard (adam-collard) wrote :

Looks good, thanks! +1

review: Approve
Revision history for this message
Chris Glass (tribaal) wrote :

Looks good! Thanks for your contribution.

review: Approve

Preview Diff

=== added directory 'actions'
=== added file 'actions.yaml'
--- actions.yaml 1970-01-01 00:00:00 +0000
+++ actions.yaml 2015-09-15 12:31:21 +0000
@@ -0,0 +1,4 @@
+pause:
+  description: Pause the MySQL service.
+resume:
+  description: Resume the MySQL service.
\ No newline at end of file
=== added file 'actions/actions.py'
--- actions/actions.py 1970-01-01 00:00:00 +0000
+++ actions/actions.py 2015-09-15 12:31:21 +0000
@@ -0,0 +1,51 @@
+#!/usr/bin/python
+
+import os
+import sys
+
+from charmhelpers.core.host import service_pause, service_resume
+from charmhelpers.core.hookenv import action_fail, status_set
+
+
+MYSQL_SERVICE = "mysql"
+
+def pause(args):
+    """Pause the MySQL service.
+
+    @raises Exception should the service fail to stop.
+    """
+    if not service_pause(MYSQL_SERVICE):
+        raise Exception("Failed to pause MySQL service.")
+    status_set(
+        "maintenance", "Paused. Use 'resume' action to resume normal service.")
+
+def resume(args):
+    """Resume the MySQL service.
+
+    @raises Exception should the service fail to start."""
+    if not service_resume(MYSQL_SERVICE):
+        raise Exception("Failed to resume MySQL service.")
+    status_set("active", "")
+
+
+# A dictionary of all the defined actions to callables (which take
+# parsed arguments).
+ACTIONS = {"pause": pause, "resume": resume}
+
+
+def main(args):
+    action_name = os.path.basename(args[0])
+    try:
+        action = ACTIONS[action_name]
+    except KeyError:
+        return "Action %s undefined" % action_name
+    else:
+        try:
+            action(args)
+        except Exception as e:
+            action_fail(str(e))
+
+
+if __name__ == "__main__":
+    sys.exit(main(sys.argv))
+
=== added symlink 'actions/charmhelpers'
=== target is u'../charmhelpers'
=== added symlink 'actions/pause'
=== target is u'actions.py'
=== added symlink 'actions/resume'
=== target is u'actions.py'
=== modified file 'charm-helpers-tests.yaml'
--- charm-helpers-tests.yaml 2015-04-15 14:23:37 +0000
+++ charm-helpers-tests.yaml 2015-09-15 12:31:21 +0000
@@ -3,3 +3,4 @@
 include:
   - contrib.amulet
   - contrib.openstack.amulet
+  - core.hookenv
=== renamed directory 'hooks/charmhelpers' => 'charmhelpers'
=== modified file 'charmhelpers/contrib/network/ip.py'
--- hooks/charmhelpers/contrib/network/ip.py 2015-03-03 02:26:12 +0000
+++ charmhelpers/contrib/network/ip.py 2015-09-15 12:31:21 +0000
@@ -435,8 +435,12 @@
 
     rev = dns.reversename.from_address(address)
     result = ns_query(rev)
+
     if not result:
-        return None
+        try:
+            result = socket.gethostbyaddr(address)[0]
+        except:
+            return None
     else:
         result = address
 
 
=== modified file 'charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py 2015-08-26 13:11:30 +0000
+++ charmhelpers/core/hookenv.py 2015-09-15 12:31:21 +0000
@@ -767,21 +767,23 @@
 
 
 def status_get():
-    """Retrieve the previously set juju workload state
+    """Retrieve the previously set juju workload state and message
 
-    If the status-set command is not found then assume this is juju < 1.23 and
-    return 'unknown'
+    If the status-get command is not found then assume this is juju < 1.23 and
+    return 'unknown', ""
+
     """
-    cmd = ['status-get']
+    cmd = ['status-get', "--format=json", "--include-data"]
     try:
-        raw_status = subprocess.check_output(cmd, universal_newlines=True)
-        status = raw_status.rstrip()
-        return status
+        raw_status = subprocess.check_output(cmd)
     except OSError as e:
         if e.errno == errno.ENOENT:
-            return 'unknown'
+            return ('unknown', "")
         else:
             raise
+    else:
+        status = json.loads(raw_status.decode("UTF-8"))
+        return (status["status"], status["message"])
 
 
 def translate_exc(from_exc, to_exc):
 
=== modified file 'charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py 2015-08-26 13:11:30 +0000
+++ charmhelpers/core/host.py 2015-09-15 12:31:21 +0000
@@ -63,32 +63,48 @@
     return service_result
 
 
-def service_pause(service_name, init_dir=None):
+def service_pause(service_name, init_dir="/etc/init", initd_dir="/etc/init.d"):
     """Pause a system service.
 
     Stop it, and prevent it from starting again at boot."""
-    if init_dir is None:
-        init_dir = "/etc/init"
     stopped = service_stop(service_name)
-    # XXX: Support systemd too
-    override_path = os.path.join(
-        init_dir, '{}.override'.format(service_name))
-    with open(override_path, 'w') as fh:
-        fh.write("manual\n")
+    upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
+    sysv_file = os.path.join(initd_dir, service_name)
+    if os.path.exists(upstart_file):
+        override_path = os.path.join(
+            init_dir, '{}.override'.format(service_name))
+        with open(override_path, 'w') as fh:
+            fh.write("manual\n")
+    elif os.path.exists(sysv_file):
+        subprocess.check_call(["update-rc.d", service_name, "disable"])
+    else:
+        # XXX: Support SystemD too
+        raise ValueError(
+            "Unable to detect {0} as either Upstart {1} or SysV {2}".format(
+                service_name, upstart_file, sysv_file))
     return stopped
 
 
-def service_resume(service_name, init_dir=None):
+def service_resume(service_name, init_dir="/etc/init",
+                   initd_dir="/etc/init.d"):
     """Resume a system service.
 
     Reenable starting again at boot. Start the service"""
-    # XXX: Support systemd too
-    if init_dir is None:
-        init_dir = "/etc/init"
-    override_path = os.path.join(
-        init_dir, '{}.override'.format(service_name))
-    if os.path.exists(override_path):
-        os.unlink(override_path)
+    upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
+    sysv_file = os.path.join(initd_dir, service_name)
+    if os.path.exists(upstart_file):
+        override_path = os.path.join(
+            init_dir, '{}.override'.format(service_name))
+        if os.path.exists(override_path):
+            os.unlink(override_path)
+    elif os.path.exists(sysv_file):
+        subprocess.check_call(["update-rc.d", service_name, "enable"])
+    else:
+        # XXX: Support SystemD too
+        raise ValueError(
+            "Unable to detect {0} as either Upstart {1} or SysV {2}".format(
+                service_name, upstart_file, sysv_file))
+
     started = service_start(service_name)
     return started
 
 
=== added file 'charmhelpers/core/kernel.py'
--- charmhelpers/core/kernel.py 1970-01-01 00:00:00 +0000
+++ charmhelpers/core/kernel.py 2015-09-15 12:31:21 +0000
@@ -0,0 +1,68 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+# Copyright 2014-2015 Canonical Limited.
+#
+# This file is part of charm-helpers.
+#
+# charm-helpers is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License version 3 as
+# published by the Free Software Foundation.
+#
+# charm-helpers is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
+
+__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
+
+from charmhelpers.core.hookenv import (
+    log,
+    INFO
+)
+
+from subprocess import check_call, check_output
+import re
+
+
+def modprobe(module, persist=True):
+    """Load a kernel module and configure for auto-load on reboot."""
+    cmd = ['modprobe', module]
+
+    log('Loading kernel module %s' % module, level=INFO)
+
+    check_call(cmd)
+    if persist:
+        with open('/etc/modules', 'r+') as modules:
+            if module not in modules.read():
+                modules.write(module)
+
+
+def rmmod(module, force=False):
+    """Remove a module from the linux kernel"""
+    cmd = ['rmmod']
+    if force:
+        cmd.append('-f')
+    cmd.append(module)
+    log('Removing kernel module %s' % module, level=INFO)
+    return check_call(cmd)
+
+
+def lsmod():
+    """Shows what kernel modules are currently loaded"""
+    return check_output(['lsmod'],
+                        universal_newlines=True)
+
+
+def is_module_loaded(module):
+    """Checks if a kernel module is already loaded"""
+    matches = re.findall('^%s[ ]+' % module, lsmod(), re.M)
+    return len(matches) > 0
+
+
+def update_initramfs(version='all'):
+    """Updates an initramfs image"""
+    return check_call(["update-initramfs", "-k", version, "-u"])
=== added symlink 'hooks/charmhelpers'
=== target is u'../charmhelpers'
=== modified file 'tests/00-setup'
--- tests/00-setup 2015-08-26 13:22:41 +0000
+++ tests/00-setup 2015-09-15 12:31:21 +0000
@@ -7,6 +7,7 @@
 sudo apt-get install --yes amulet \
     python-cinderclient \
     python-distro-info \
+    python3-distro-info \
     python-glanceclient \
     python-heatclient \
     python-keystoneclient \
@@ -14,3 +15,4 @@
     python-novaclient \
     python-pika \
     python-swiftclient
+
=== added file 'tests/31-test-pause-and-resume.py'
--- tests/31-test-pause-and-resume.py 1970-01-01 00:00:00 +0000
+++ tests/31-test-pause-and-resume.py 2015-09-15 12:31:21 +0000
@@ -0,0 +1,38 @@
+#!/usr/bin/python3
+# test percona-cluster pause and resum
+
+import basic_deployment
+from charmhelpers.contrib.amulet.utils import AmuletUtils
+
+utils = AmuletUtils()
+
+
+class PauseResume(basic_deployment.BasicDeployment):
+
+    def run(self):
+        super(PauseResume, self).run()
+        uid = 'percona-cluster/0'
+        unit = self.d.sentry.unit[uid]
+        assert self.is_mysqld_running(unit), 'mysql not running: %s' % uid
+        assert utils.status_get(unit)[0] == "unknown"
+
+        action_id = utils.run_action(unit, "pause")
+        assert utils.wait_on_action(action_id), "Pause action failed."
+
+        # Note that is_mysqld_running will print an error message when
+        # mysqld is not running. This is by design but it looks odd
+        # in the output.
+        assert not self.is_mysqld_running(unit=unit), \
+            "mysqld is still running!"
+
+        assert utils.status_get(unit)[0] == "maintenance"
+        action_id = utils.run_action(unit, "resume")
+        assert utils.wait_on_action(action_id), "Resume action failed"
+        assert utils.status_get(unit)[0] == "active"
+        assert self.is_mysqld_running(unit=unit), \
+            "mysqld not running after resume."
+
+
+if __name__ == "__main__":
+    p = PauseResume()
+    p.run()
=== modified file 'tests/charmhelpers/contrib/amulet/deployment.py'
--- tests/charmhelpers/contrib/amulet/deployment.py 2015-04-15 14:23:37 +0000
+++ tests/charmhelpers/contrib/amulet/deployment.py 2015-09-15 12:31:21 +0000
@@ -51,7 +51,8 @@
         if 'units' not in this_service:
             this_service['units'] = 1
 
-        self.d.add(this_service['name'], units=this_service['units'])
+        self.d.add(this_service['name'], units=this_service['units'],
+                   constraints=this_service.get('constraints'))
 
         for svc in other_services:
             if 'location' in svc:
@@ -64,7 +65,8 @@
             if 'units' not in svc:
                 svc['units'] = 1
 
-            self.d.add(svc['name'], charm=branch_location, units=svc['units'])
+            self.d.add(svc['name'], charm=branch_location, units=svc['units'],
+                       constraints=svc.get('constraints'))
 
     def _add_relations(self, relations):
         """Add all of the relations for the services."""
 
=== modified file 'tests/charmhelpers/contrib/amulet/utils.py'
--- tests/charmhelpers/contrib/amulet/utils.py 2015-08-26 13:11:30 +0000
+++ tests/charmhelpers/contrib/amulet/utils.py 2015-09-15 12:31:21 +0000
@@ -19,9 +19,11 @@
 import logging
 import os
 import re
+import socket
 import subprocess
 import sys
 import time
+import uuid
 
 import amulet
 import distro_info
@@ -114,7 +116,7 @@
         # /!\ DEPRECATION WARNING (beisner):
         # New and existing tests should be rewritten to use
         # validate_services_by_name() as it is aware of init systems.
-        self.log.warn('/!\\ DEPRECATION WARNING: use '
+        self.log.warn('DEPRECATION WARNING: use '
                       'validate_services_by_name instead of validate_services '
                       'due to init system differences.')
 
@@ -269,33 +271,52 @@
         """Get last modification time of directory."""
         return sentry_unit.directory_stat(directory)['mtime']
 
-    def _get_proc_start_time(self, sentry_unit, service, pgrep_full=False):
-        """Get process' start time.
-
-        Determine start time of the process based on the last modification
-        time of the /proc/pid directory. If pgrep_full is True, the process
-        name is matched against the full command line.
-        """
-        if pgrep_full:
-            cmd = 'pgrep -o -f {}'.format(service)
-        else:
-            cmd = 'pgrep -o {}'.format(service)
-        cmd = cmd + ' | grep -v pgrep || exit 0'
-        cmd_out = sentry_unit.run(cmd)
-        self.log.debug('CMDout: ' + str(cmd_out))
-        if cmd_out[0]:
-            self.log.debug('Pid for %s %s' % (service, str(cmd_out[0])))
-            proc_dir = '/proc/{}'.format(cmd_out[0].strip())
-            return self._get_dir_mtime(sentry_unit, proc_dir)
+    def _get_proc_start_time(self, sentry_unit, service, pgrep_full=None):
+        """Get start time of a process based on the last modification time
+        of the /proc/pid directory.
+
+        :sentry_unit: The sentry unit to check for the service on
+        :service: service name to look for in process table
+        :pgrep_full: [Deprecated] Use full command line search mode with pgrep
+        :returns: epoch time of service process start
+        """
+        if pgrep_full is not None:
+            # /!\ DEPRECATION WARNING (beisner):
+            # No longer implemented, as pidof is now used instead of pgrep.
+            # https://bugs.launchpad.net/charm-helpers/+bug/1474030
+            self.log.warn('DEPRECATION WARNING: pgrep_full bool is no '
+                          'longer implemented re: lp 1474030.')
+
+        pid_list = self.get_process_id_list(sentry_unit, service)
+        pid = pid_list[0]
+        proc_dir = '/proc/{}'.format(pid)
+        self.log.debug('Pid for {} on {}: {}'.format(
+            service, sentry_unit.info['unit_name'], pid))
+
+        return self._get_dir_mtime(sentry_unit, proc_dir)
 
     def service_restarted(self, sentry_unit, service, filename,
-                          pgrep_full=False, sleep_time=20):
+                          pgrep_full=None, sleep_time=20):
         """Check if service was restarted.
 
         Compare a service's start time vs a file's last modification time
         (such as a config file for that service) to determine if the service
         has been restarted.
         """
+        # /!\ DEPRECATION WARNING (beisner):
+        # This method is prone to races in that no before-time is known.
+        # Use validate_service_config_changed instead.
+
+        # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
+        # used instead of pgrep. pgrep_full is still passed through to ensure
+        # deprecation WARNS. lp1474030
+        self.log.warn('DEPRECATION WARNING: use '
+                      'validate_service_config_changed instead of '
+                      'service_restarted due to known races.')
+
         time.sleep(sleep_time)
         if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >=
                 self._get_file_mtime(sentry_unit, filename)):
@@ -304,15 +325,15 @@
             return True
         return False
 
     def service_restarted_since(self, sentry_unit, mtime, service,
-                                pgrep_full=False, sleep_time=20,
-                                retry_count=2):
+                                pgrep_full=None, sleep_time=20,
+                                retry_count=2, retry_sleep_time=30):
         """Check if service has been started after a given time.
 
         Args:
           sentry_unit (sentry): The sentry unit to check for the service on
          mtime (float): The epoch time to check against
          service (string): service name to look for in process table
-          pgrep_full (boolean): Use full command line search mode with pgrep
+          pgrep_full: [Deprecated] Use full command line search mode with pgrep
          sleep_time (int): Seconds to sleep before looking for process
          retry_count (int): If service is not found, how many times to retry
 
@@ -321,30 +342,44 @@
          False if service is older than mtime or if service was
          not found.
         """
-        self.log.debug('Checking %s restarted since %s' % (service, mtime))
+        # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
+        # used instead of pgrep. pgrep_full is still passed through to ensure
+        # deprecation WARNS. lp1474030
+
+        unit_name = sentry_unit.info['unit_name']
+        self.log.debug('Checking that %s service restarted since %s on '
+                       '%s' % (service, mtime, unit_name))
         time.sleep(sleep_time)
-        proc_start_time = self._get_proc_start_time(sentry_unit, service,
-                                                    pgrep_full)
-        while retry_count > 0 and not proc_start_time:
-            self.log.debug('No pid file found for service %s, will retry %i '
-                           'more times' % (service, retry_count))
-            time.sleep(30)
-            proc_start_time = self._get_proc_start_time(sentry_unit, service,
-                                                        pgrep_full)
-            retry_count = retry_count - 1
+        proc_start_time = None
+        tries = 0
+        while tries <= retry_count and not proc_start_time:
+            try:
+                proc_start_time = self._get_proc_start_time(sentry_unit,
+                                                            service,
+                                                            pgrep_full)
+                self.log.debug('Attempt {} to get {} proc start time on {} '
+                               'OK'.format(tries, service, unit_name))
+            except IOError:
+                # NOTE(beisner) - race avoidance, proc may not exist yet.
+                # https://bugs.launchpad.net/charm-helpers/+bug/1474030
+                self.log.debug('Attempt {} to get {} proc start time on {} '
+                               'failed'.format(tries, service, unit_name))
+                time.sleep(retry_sleep_time)
+            tries += 1
 
         if not proc_start_time:
             self.log.warn('No proc start time found, assuming service did '
                           'not start')
             return False
         if proc_start_time >= mtime:
-            self.log.debug('proc start time is newer than provided mtime'
-                           '(%s >= %s)' % (proc_start_time, mtime))
+            self.log.debug('Proc start time is newer than provided mtime'
+                           '(%s >= %s) on %s (OK)' % (proc_start_time,
+                                                      mtime, unit_name))
            return True
         else:
-            self.log.warn('proc start time (%s) is older than provided mtime '
-                          '(%s), service did not restart' % (proc_start_time,
-                                                             mtime))
+            self.log.warn('Proc start time (%s) is older than provided mtime '
+                          '(%s) on %s, service did not '
+                          'restart' % (proc_start_time, mtime, unit_name))
            return False
 
     def config_updated_since(self, sentry_unit, filename, mtime,
@@ -374,8 +409,9 @@
             return False
 
     def validate_service_config_changed(self, sentry_unit, mtime, service,
-                                        filename, pgrep_full=False,
-                                        sleep_time=20, retry_count=2):
+                                        filename, pgrep_full=None,
+                                        sleep_time=20, retry_count=2,
+                                        retry_sleep_time=30):
         """Check service and file were updated after mtime
 
         Args:
@@ -383,9 +419,10 @@
          mtime (float): The epoch time to check against
          service (string): service name to look for in process table
          filename (string): The file to check mtime of
-          pgrep_full (boolean): Use full command line search mode with pgrep
-          sleep_time (int): Seconds to sleep before looking for process
+          pgrep_full: [Deprecated] Use full command line search mode with pgrep
+          sleep_time (int): Initial sleep in seconds to pass to test helpers
          retry_count (int): If service is not found, how many times to retry
+          retry_sleep_time (int): Time in seconds to wait between retries
 
         Typical Usage:
          u = OpenStackAmuletUtils(ERROR)
@@ -402,15 +439,25 @@
          mtime, False if service is older than mtime or if service was
          not found or if filename was modified before mtime.
         """
-        self.log.debug('Checking %s restarted since %s' % (service, mtime))
-        time.sleep(sleep_time)
-        service_restart = self.service_restarted_since(sentry_unit, mtime,
-                                                       service,
-                                                       pgrep_full=pgrep_full,
-                                                       sleep_time=0,
-                                                       retry_count=retry_count)
-        config_update = self.config_updated_since(sentry_unit, filename, mtime,
-                                                  sleep_time=0)
+
+        # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
+        # used instead of pgrep. pgrep_full is still passed through to ensure
+        # deprecation WARNS. lp1474030
+
+        service_restart = self.service_restarted_since(
+            sentry_unit, mtime,
+            service,
+            pgrep_full=pgrep_full,
+            sleep_time=sleep_time,
+            retry_count=retry_count,
+            retry_sleep_time=retry_sleep_time)
+
+        config_update = self.config_updated_since(
+            sentry_unit,
+            filename,
+            mtime,
+            sleep_time=0)
+
         return service_restart and config_update
 
     def get_sentry_time(self, sentry_unit):
@@ -428,7 +475,6 @@
         """Return a list of all Ubuntu releases in order of release."""
         _d = distro_info.UbuntuDistroInfo()
         _release_list = _d.all
-        self.log.debug('Ubuntu release list: {}'.format(_release_list))
         return _release_list
 
     def file_to_url(self, file_rel_path):
@@ -568,6 +614,142 @@
 
         return None
 
+    def validate_sectionless_conf(self, file_contents, expected):
+        """A crude conf parser. Useful to inspect configuration files which
+        do not have section headers (as would be necessary in order to use
+        the configparser). Such as openstack-dashboard or rabbitmq confs."""
+        for line in file_contents.split('\n'):
+            if '=' in line:
+                args = line.split('=')
+                if len(args) <= 1:
+                    continue
+                key = args[0].strip()
+                value = args[1].strip()
+                if key in expected.keys():
+                    if expected[key] != value:
+                        msg = ('Config mismatch. Expected, actual: {}, '
+                               '{}'.format(expected[key], value))
+                        amulet.raise_status(amulet.FAIL, msg=msg)
+
+    def get_unit_hostnames(self, units):
+        """Return a dict of juju unit names to hostnames."""
+        host_names = {}
+        for unit in units:
+            host_names[unit.info['unit_name']] = \
+                str(unit.file_contents('/etc/hostname').strip())
+        self.log.debug('Unit host names: {}'.format(host_names))
+        return host_names
+
+    def run_cmd_unit(self, sentry_unit, cmd):
+        """Run a command on a unit, return the output and exit code."""
+        output, code = sentry_unit.run(cmd)
+        if code == 0:
+            self.log.debug('{} `{}` command returned {} '
+                           '(OK)'.format(sentry_unit.info['unit_name'],
+                                         cmd, code))
+        else:
+            msg = ('{} `{}` command returned {} '
+                   '{}'.format(sentry_unit.info['unit_name'],
+                               cmd, code, output))
+            amulet.raise_status(amulet.FAIL, msg=msg)
+        return str(output), code
+
+    def file_exists_on_unit(self, sentry_unit, file_name):
+        """Check if a file exists on a unit."""
+        try:
+            sentry_unit.file_stat(file_name)
+            return True
+        except IOError:
+            return False
+        except Exception as e:
+            msg = 'Error checking file {}: {}'.format(file_name, e)
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+    def file_contents_safe(self, sentry_unit, file_name,
+                           max_wait=60, fatal=False):
+        """Get file contents from a sentry unit. Wrap amulet file_contents
+        with retry logic to address races where a file checks as existing,
+        but no longer exists by the time file_contents is called.
+        Return None if file not found. Optionally raise if fatal is True."""
+        unit_name = sentry_unit.info['unit_name']
+        file_contents = False
+        tries = 0
+        while not file_contents and tries < (max_wait / 4):
+            try:
+                file_contents = sentry_unit.file_contents(file_name)
+            except IOError:
+                self.log.debug('Attempt {} to open file {} from {} '
+                               'failed'.format(tries, file_name,
+                                               unit_name))
+                time.sleep(4)
+                tries += 1
+
+        if file_contents:
+            return file_contents
+        elif not fatal:
+            return None
+        elif fatal:
+            msg = 'Failed to get file contents from unit.'
+            amulet.raise_status(amulet.FAIL, msg)
+
+    def port_knock_tcp(self, host="localhost", port=22, timeout=15):
+        """Open a TCP socket to check for a listening service on a host.
+
+        :param host: host name or IP address, default to localhost
+        :param port: TCP port number, default to 22
+        :param timeout: Connect timeout, default to 15 seconds
+        :returns: True if successful, False if connect failed
+        """
+
+        # Resolve host name if possible
+        try:
+            connect_host = socket.gethostbyname(host)
+            host_human = "{} ({})".format(connect_host, host)
+        except socket.error as e:
+            self.log.warn('Unable to resolve address: '
+                          '{} ({}) Trying anyway!'.format(host, e))
+            connect_host = host
+            host_human = connect_host
+
+        # Attempt socket connection
+        try:
+            knock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+            knock.settimeout(timeout)
+            knock.connect((connect_host, port))
+            knock.close()
+            self.log.debug('Socket connect OK for host '
+                           '{} on port {}.'.format(host_human, port))
+            return True
+        except socket.error as e:
+            self.log.debug('Socket connect FAIL for'
+                           ' {} port {} ({})'.format(host_human, port, e))
+            return False
+
+    def port_knock_units(self, sentry_units, port=22,
+                         timeout=15, expect_success=True):
+        """Open a TCP socket to check for a listening service on each
+        listed juju unit.
+
+        :param sentry_units: list of sentry unit pointers
+        :param port: TCP port number, default to 22
+        :param timeout: Connect timeout, default to 15 seconds
+        :expect_success: True by default, set False to invert logic
+        :returns: None if successful, Failure message otherwise
+        """
+        for unit in sentry_units:
+            host = unit.info['public-address']
+            connected = self.port_knock_tcp(host, port, timeout)
+            if not connected and expect_success:
+                return 'Socket connect failed.'
+            elif connected and not expect_success:
+                return 'Socket connected unexpectedly.'
+
+    def get_uuid_epoch_stamp(self):
+        """Returns a stamp string based on uuid4 and epoch time. Useful in
+        generating test messages which need to be unique-ish."""
+        return '[{}-{}]'.format(uuid.uuid4(), time.time())
+
+# amulet juju action helpers:
     def run_action(self, unit_sentry, action,
                    _check_output=subprocess.check_output):
         """Run the named action on a given unit sentry.
@@ -594,3 +776,12 @@
         output = _check_output(command, universal_newlines=True)
         data = json.loads(output)
         return data.get(u"status") == "completed"
+
+    def status_get(self, unit):
+        """Return the current service status of this unit."""
+        raw_status, return_code = unit.run(
+            "status-get --format=json --include-data")
+        if return_code != 0:
+            return ("unknown", "")
+        status = json.loads(raw_status)
+        return (status["status"], status["message"])
=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-08-26 13:11:30 +0000
+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-09-15 12:31:21 +0000
@@ -44,20 +44,31 @@
         Determine if the local branch being tested is derived from its
         stable or next (dev) branch, and based on this, use the corresponding
         stable or next branches for the other_services."""
+
+        # Charms outside the lp:~openstack-charmers namespace
         base_charms = ['mysql', 'mongodb', 'nrpe']
 
+        # Force these charms to current series even when using an older series.
+        # ie. Use trusty/nrpe even when series is precise, as the P charm
+        # does not possess the necessary external master config and hooks.
+        force_series_current = ['nrpe']
+
         if self.series in ['precise', 'trusty']:
             base_series = self.series
         else:
             base_series = self.current_next
 
-        if self.stable:
-            for svc in other_services:
+        for svc in other_services:
+            if svc['name'] in force_series_current:
+                base_series = self.current_next
+            # If a location has been explicitly set, use it
+            if svc.get('location'):
+                continue
+            if self.stable:
                 temp = 'lp:charms/{}/{}'
                 svc['location'] = temp.format(base_series,
                                               svc['name'])
             else:
-            for svc in other_services:
                 if svc['name'] in base_charms:
                     temp = 'lp:charms/{}/{}'
                     svc['location'] = temp.format(base_series,
@@ -66,6 +77,7 @@
                     temp = 'lp:~openstack-charmers/charms/{}/{}/next'
                     svc['location'] = temp.format(self.current_next,
                                                   svc['name'])
+
         return other_services
 
     def _add_services(self, this_service, other_services):
@@ -77,21 +89,23 @@
 
         services = other_services
         services.append(this_service)
+
+        # Charms which should use the source config option
         use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
                       'ceph-osd', 'ceph-radosgw']
-        # Most OpenStack subordinate charms do not expose an origin option
-        # as that is controlled by the principle.
-        ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
+
+        # Charms which can not use openstack-origin, ie. many subordinates
+        no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
 
         if self.openstack:
             for svc in services:
-                if svc['name'] not in use_source + ignore:
+                if svc['name'] not in use_source + no_origin:
                     config = {'openstack-origin': self.openstack}
                     self.d.configure(svc['name'], config)
 
         if self.source:
             for svc in services:
-                if svc['name'] in use_source and svc['name'] not in ignore:
+                if svc['name'] in use_source and svc['name'] not in no_origin:
                     config = {'source': self.source}
                     self.d.configure(svc['name'], config)
 
=== modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py'
--- tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-08-03 14:52:57 +0000
+++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2015-09-15 12:31:21 +0000
@@ -27,6 +27,7 @@
 import heatclient.v1.client as heat_client
 import keystoneclient.v2_0 as keystone_client
 import novaclient.v1_1.client as nova_client
+import pika
 import swiftclient
 
 from charmhelpers.contrib.amulet.utils import (
@@ -602,3 +603,361 @@
         self.log.debug('Ceph {} samples (OK): '
                        '{}'.format(sample_type, samples))
         return None
+
+# rabbitmq/amqp specific helpers:
+    def add_rmq_test_user(self, sentry_units,
+                          username="testuser1", password="changeme"):
+        """Add a test user via the first rmq juju unit, check connection as
+        the new user against all sentry units.
+
+        :param sentry_units: list of sentry unit pointers
+        :param username: amqp user name, default to testuser1
+        :param password: amqp user password
+        :returns: None if successful. Raise on error.
+        """
+        self.log.debug('Adding rmq user ({})...'.format(username))
+
+        # Check that user does not already exist
+        cmd_user_list = 'rabbitmqctl list_users'
+        output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
+        if username in output:
+            self.log.warning('User ({}) already exists, returning '
+                             'gracefully.'.format(username))
+            return
+
+        perms = '".*" ".*" ".*"'
+        cmds = ['rabbitmqctl add_user {} {}'.format(username, password),
+                'rabbitmqctl set_permissions {} {}'.format(username, perms)]
+
+        # Add user via first unit
+        for cmd in cmds:
+            output, _ = self.run_cmd_unit(sentry_units[0], cmd)
+
+        # Check connection against the other sentry_units
+        self.log.debug('Checking user connect against units...')
+        for sentry_unit in sentry_units:
+            connection = self.connect_amqp_by_unit(sentry_unit, ssl=False,
+                                                   username=username,
+                                                   password=password)
+            connection.close()
+
+    def delete_rmq_test_user(self, sentry_units, username="testuser1"):
+        """Delete a rabbitmq user via the first rmq juju unit.
+
+        :param sentry_units: list of sentry unit pointers
+        :param username: amqp user name, default to testuser1
+        :returns: None if successful or no such user.
+        """
+        self.log.debug('Deleting rmq user ({})...'.format(username))
+
+        # Check that the user exists
+        cmd_user_list = 'rabbitmqctl list_users'
+        output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
+
+        if username not in output:
+            self.log.warning('User ({}) does not exist, returning '
+                             'gracefully.'.format(username))
+            return
+
+        # Delete the user
+        cmd_user_del = 'rabbitmqctl delete_user {}'.format(username)
+        output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_del)
+
+    def get_rmq_cluster_status(self, sentry_unit):
+        """Execute rabbitmq cluster status command on a unit and return
+        the full output.
+
+        :param sentry_unit: sentry unit
+        :returns: String containing console output of cluster status command
+        """
+        cmd = 'rabbitmqctl cluster_status'
+        output, _ = self.run_cmd_unit(sentry_unit, cmd)
+        self.log.debug('{} cluster_status:\n{}'.format(
+            sentry_unit.info['unit_name'], output))
+        return str(output)
+
+    def get_rmq_cluster_running_nodes(self, sentry_unit):
+        """Parse rabbitmqctl cluster_status output string, return list of
+        running rabbitmq cluster nodes.
+
+        :param sentry_unit: sentry unit
+        :returns: List containing node names of running nodes
+        """
+        # NOTE(beisner): rabbitmqctl cluster_status output is not
+        # json-parsable, do string chop foo, then json.loads that.
+        str_stat = self.get_rmq_cluster_status(sentry_unit)
+        if 'running_nodes' in str_stat:
+            pos_start = str_stat.find("{running_nodes,") + 15
+            pos_end = str_stat.find("]},", pos_start) + 1
+            str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"')
+            run_nodes = json.loads(str_run_nodes)
+            return run_nodes
+        else:
+            return []
+
+    def validate_rmq_cluster_running_nodes(self, sentry_units):
+        """Check that all rmq unit hostnames are represented in the
+        cluster_status output of all units.
+
+        :param sentry_units: list of sentry unit pointers (all rmq units)
+        :returns: None if successful, otherwise return error message
+        """
+        host_names = self.get_unit_hostnames(sentry_units)
+        errors = []
+
+        # Query every unit for cluster_status running nodes
+        for query_unit in sentry_units:
+            query_unit_name = query_unit.info['unit_name']
+            running_nodes = self.get_rmq_cluster_running_nodes(query_unit)
+
+            # Confirm that every unit is represented in the queried unit's
+            # cluster_status running nodes output.
+            for validate_unit in sentry_units:
+                val_host_name = host_names[validate_unit.info['unit_name']]
+                val_node_name = 'rabbit@{}'.format(val_host_name)
+
+                if val_node_name not in running_nodes:
+                    errors.append('Cluster member check failed on {}: {} not '
+                                  'in {}\n'.format(query_unit_name,
+                                                   val_node_name,
+                                                   running_nodes))
+        if errors:
+            return ''.join(errors)
+
+    def rmq_ssl_is_enabled_on_unit(self, sentry_unit, port=None):
+        """Check a single juju rmq unit for ssl and port in the config file."""
+        host = sentry_unit.info['public-address']
+        unit_name = sentry_unit.info['unit_name']
+
+        conf_file = '/etc/rabbitmq/rabbitmq.config'
+        conf_contents = str(self.file_contents_safe(sentry_unit,
+                                                    conf_file, max_wait=16))
+        # Checks
+        conf_ssl = 'ssl' in conf_contents
+        conf_port = str(port) in conf_contents
+
+        # Port explicitly checked in config
+        if port and conf_port and conf_ssl:
+            self.log.debug('SSL is enabled @{}:{} '
+                           '({})'.format(host, port, unit_name))
+            return True
+        elif port and not conf_port and conf_ssl:
+            self.log.debug('SSL is enabled @{} but not on port {} '
+                           '({})'.format(host, port, unit_name))
+            return False
+        # Port not checked (useful when checking that ssl is disabled)
+        elif not port and conf_ssl:
+            self.log.debug('SSL is enabled @{}:{} '
+                           '({})'.format(host, port, unit_name))
+            return True
+        elif not port and not conf_ssl:
+            self.log.debug('SSL not enabled @{}:{} '
+                           '({})'.format(host, port, unit_name))
+            return False
+        else:
+            msg = ('Unknown condition when checking SSL status @{}:{} '
+                   '({})'.format(host, port, unit_name))
+            amulet.raise_status(amulet.FAIL, msg)
+
+    def validate_rmq_ssl_enabled_units(self, sentry_units, port=None):
+        """Check that ssl is enabled on rmq juju sentry units.
+
+        :param sentry_units: list of all rmq sentry units
+        :param port: optional ssl port override to validate
+        :returns: None if successful, otherwise return error message
+        """
+        for sentry_unit in sentry_units:
+            if not self.rmq_ssl_is_enabled_on_unit(sentry_unit, port=port):
+                return ('Unexpected condition: ssl is disabled on unit '
+                        '({})'.format(sentry_unit.info['unit_name']))
+        return None
+
+    def validate_rmq_ssl_disabled_units(self, sentry_units):
+        """Check that ssl is disabled on listed rmq juju sentry units.
+
+        :param sentry_units: list of all rmq sentry units
+        :returns: None if successful, otherwise return error message
+        """
+        for sentry_unit in sentry_units:
+            if self.rmq_ssl_is_enabled_on_unit(sentry_unit):
+                return ('Unexpected condition: ssl is enabled on unit '
+                        '({})'.format(sentry_unit.info['unit_name']))
+        return None
+
+    def configure_rmq_ssl_on(self, sentry_units, deployment,
+                             port=None, max_wait=60):
+        """Turn ssl charm config option on, with optional non-default
+        ssl port specification. Confirm that it is enabled on every
+        unit.
+
+        :param sentry_units: list of sentry units
+        :param deployment: amulet deployment object pointer
+        :param port: amqp port, use defaults if None
+        :param max_wait: maximum time to wait in seconds to confirm
+        :returns: None if successful. Raise on error.
+        """
+        self.log.debug('Setting ssl charm config option: on')
+
+        # Enable RMQ SSL
+        config = {'ssl': 'on'}
+        if port:
+            config['ssl_port'] = port
+
+        deployment.configure('rabbitmq-server', config)
+
+        # Confirm
+        tries = 0
+        ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
+        while ret and tries < (max_wait / 4):
+            time.sleep(4)
+            self.log.debug('Attempt {}: {}'.format(tries, ret))
+            ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
+            tries += 1
+
+        if ret:
+            amulet.raise_status(amulet.FAIL, ret)
+
+    def configure_rmq_ssl_off(self, sentry_units, deployment, max_wait=60):
+        """Turn ssl charm config option off, confirm that it is disabled
+        on every unit.
+
+        :param sentry_units: list of sentry units
+        :param deployment: amulet deployment object pointer
+        :param max_wait: maximum time to wait in seconds to confirm
+        :returns: None if successful. Raise on error.
+        """
+        self.log.debug('Setting ssl charm config option: off')
+
+        # Disable RMQ SSL
+        config = {'ssl': 'off'}
+        deployment.configure('rabbitmq-server', config)
+
+        # Confirm
+        tries = 0
+        ret = self.validate_rmq_ssl_disabled_units(sentry_units)
+        while ret and tries < (max_wait / 4):
+            time.sleep(4)
+            self.log.debug('Attempt {}: {}'.format(tries, ret))
+            ret = self.validate_rmq_ssl_disabled_units(sentry_units)
+            tries += 1
+
+        if ret:
+            amulet.raise_status(amulet.FAIL, ret)
+
+    def connect_amqp_by_unit(self, sentry_unit, ssl=False,
+                             port=None, fatal=True,
+                             username="testuser1", password="changeme"):
+        """Establish and return a pika amqp connection to the rabbitmq service
+        running on a rmq juju unit.
+
+        :param sentry_unit: sentry unit pointer
+        :param ssl: boolean, default to False
+        :param port: amqp port, use defaults if None
+        :param fatal: boolean, default to True (raises on connect error)
+        :param username: amqp user name, default to testuser1
+        :param password: amqp user password
+        :returns: pika amqp connection pointer or None if failed and non-fatal
+        """
+        host = sentry_unit.info['public-address']
+        unit_name = sentry_unit.info['unit_name']
+
+        # Default port logic if port is not specified
+        if ssl and not port:
+            port = 5671
+        elif not ssl and not port:
+            port = 5672
+
+        self.log.debug('Connecting to amqp on {}:{} ({}) as '
+                       '{}...'.format(host, port, unit_name, username))
+
+        try:
+            credentials = pika.PlainCredentials(username, password)
+            parameters = pika.ConnectionParameters(host=host, port=port,
+                                                   credentials=credentials,
+                                                   ssl=ssl,
+                                                   connection_attempts=3,
+                                                   retry_delay=5,
+                                                   socket_timeout=1)
+            connection = pika.BlockingConnection(parameters)
+            assert connection.server_properties['product'] == 'RabbitMQ'
+            self.log.debug('Connect OK')
886 return connection
887 except Exception as e:
888 msg = ('amqp connection failed to {}:{} as '
889 '{} ({})'.format(host, port, username, str(e)))
890 if fatal:
891 amulet.raise_status(amulet.FAIL, msg)
892 else:
893 self.log.warn(msg)
894 return None
895
896 def publish_amqp_message_by_unit(self, sentry_unit, message,
897 queue="test", ssl=False,
898 username="testuser1",
899 password="changeme",
900 port=None):
901 """Publish an amqp message to a rmq juju unit.
902
903 :param sentry_unit: sentry unit pointer
904 :param message: amqp message string
905 :param queue: message queue, default to test
906 :param username: amqp user name, default to testuser1
907 :param password: amqp user password
908 :param ssl: boolean, default to False
909 :param port: amqp port, use defaults if None
910 :returns: None. Raises exception if publish failed.
911 """
912 self.log.debug('Publishing message to {} queue:\n{}'.format(queue,
913 message))
914 connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
915 port=port,
916 username=username,
917 password=password)
918
919 # NOTE(beisner): extra debug here re: pika hang potential:
920 # https://github.com/pika/pika/issues/297
921 # https://groups.google.com/forum/#!topic/rabbitmq-users/Ja0iyfF0Szw
922 self.log.debug('Defining channel...')
923 channel = connection.channel()
924 self.log.debug('Declaring queue...')
925 channel.queue_declare(queue=queue, auto_delete=False, durable=True)
926 self.log.debug('Publishing message...')
927 channel.basic_publish(exchange='', routing_key=queue, body=message)
928 self.log.debug('Closing channel...')
929 channel.close()
930 self.log.debug('Closing connection...')
931 connection.close()
932
933 def get_amqp_message_by_unit(self, sentry_unit, queue="test",
934 username="testuser1",
935 password="changeme",
936 ssl=False, port=None):
937 """Get an amqp message from a rmq juju unit.
938
939 :param sentry_unit: sentry unit pointer
940 :param queue: message queue, default to test
941 :param username: amqp user name, default to testuser1
942 :param password: amqp user password
943 :param ssl: boolean, default to False
944 :param port: amqp port, use defaults if None
945 :returns: amqp message body as string. Raise if get fails.
946 """
947 connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
948 port=port,
949 username=username,
950 password=password)
951 channel = connection.channel()
952 method_frame, _, body = channel.basic_get(queue)
953
954 if method_frame:
955 self.log.debug('Retrieved message from {} queue:\n{}'.format(queue,
956 body))
957 channel.basic_ack(method_frame.delivery_tag)
958 channel.close()
959 connection.close()
960 return body
961 else:
962 msg = 'No message retrieved.'
963 amulet.raise_status(amulet.FAIL, msg)
605964
=== added directory 'tests/charmhelpers/core'
=== added file 'tests/charmhelpers/core/__init__.py'
--- tests/charmhelpers/core/__init__.py 1970-01-01 00:00:00 +0000
+++ tests/charmhelpers/core/__init__.py 2015-09-15 12:31:21 +0000
@@ -0,0 +1,15 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
016
=== added file 'tests/charmhelpers/core/hookenv.py'
--- tests/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000
+++ tests/charmhelpers/core/hookenv.py 2015-09-15 12:31:21 +0000
@@ -0,0 +1,898 @@
1# Copyright 2014-2015 Canonical Limited.
2#
3# This file is part of charm-helpers.
4#
5# charm-helpers is free software: you can redistribute it and/or modify
6# it under the terms of the GNU Lesser General Public License version 3 as
7# published by the Free Software Foundation.
8#
9# charm-helpers is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU Lesser General Public License for more details.
13#
14# You should have received a copy of the GNU Lesser General Public License
15# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
16
17"Interactions with the Juju environment"
18# Copyright 2013 Canonical Ltd.
19#
20# Authors:
21# Charm Helpers Developers <juju@lists.ubuntu.com>
22
23from __future__ import print_function
24import copy
25from distutils.version import LooseVersion
26from functools import wraps
27import glob
28import os
29import json
30import yaml
31import subprocess
32import sys
33import errno
34import tempfile
35from subprocess import CalledProcessError
36
37import six
38if not six.PY3:
39 from UserDict import UserDict
40else:
41 from collections import UserDict
42
43CRITICAL = "CRITICAL"
44ERROR = "ERROR"
45WARNING = "WARNING"
46INFO = "INFO"
47DEBUG = "DEBUG"
48MARKER = object()
49
50cache = {}
51
52
53def cached(func):
54 """Cache return values for multiple executions of func + args
55
56 For example::
57
58 @cached
59 def unit_get(attribute):
60 pass
61
62 unit_get('test')
63
64 will cache the result of unit_get + 'test' for future calls.
65 """
66 @wraps(func)
67 def wrapper(*args, **kwargs):
68 global cache
69 key = str((func, args, kwargs))
70 try:
71 return cache[key]
72 except KeyError:
73 pass # Drop out of the exception handler scope.
74 res = func(*args, **kwargs)
75 cache[key] = res
76 return res
77 wrapper._wrapped = func
78 return wrapper
79
80
81def flush(key):
82 """Flushes any entries from function cache where the
83 key is found in the function+args """
84 flush_list = []
85 for item in cache:
86 if key in item:
87 flush_list.append(item)
88 for item in flush_list:
89 del cache[item]
90
91
92def log(message, level=None):
93 """Write a message to the juju log"""
94 command = ['juju-log']
95 if level:
96 command += ['-l', level]
97 if not isinstance(message, six.string_types):
98 message = repr(message)
99 command += [message]
100 # Missing juju-log should not cause failures in unit tests
101 # Send log output to stderr
102 try:
103 subprocess.call(command)
104 except OSError as e:
105 if e.errno == errno.ENOENT:
106 if level:
107 message = "{}: {}".format(level, message)
108 message = "juju-log: {}".format(message)
109 print(message, file=sys.stderr)
110 else:
111 raise
112
113
114class Serializable(UserDict):
115 """Wrapper, an object that can be serialized to yaml or json"""
116
117 def __init__(self, obj):
118 # wrap the object
119 UserDict.__init__(self)
120 self.data = obj
121
122 def __getattr__(self, attr):
123 # See if this object has attribute.
124 if attr in ("json", "yaml", "data"):
125 return self.__dict__[attr]
126 # Check for attribute in wrapped object.
127 got = getattr(self.data, attr, MARKER)
128 if got is not MARKER:
129 return got
130 # Proxy to the wrapped object via dict interface.
131 try:
132 return self.data[attr]
133 except KeyError:
134 raise AttributeError(attr)
135
136 def __getstate__(self):
137 # Pickle as a standard dictionary.
138 return self.data
139
140 def __setstate__(self, state):
141 # Unpickle into our wrapper.
142 self.data = state
143
144 def json(self):
145 """Serialize the object to json"""
146 return json.dumps(self.data)
147
148 def yaml(self):
149 """Serialize the object to yaml"""
150 return yaml.dump(self.data)
151
152
153def execution_environment():
154 """A convenient bundling of the current execution context"""
155 context = {}
156 context['conf'] = config()
157 if relation_id():
158 context['reltype'] = relation_type()
159 context['relid'] = relation_id()
160 context['rel'] = relation_get()
161 context['unit'] = local_unit()
162 context['rels'] = relations()
163 context['env'] = os.environ
164 return context
165
166
167def in_relation_hook():
168 """Determine whether we're running in a relation hook"""
169 return 'JUJU_RELATION' in os.environ
170
171
172def relation_type():
173 """The scope for the current relation hook"""
174 return os.environ.get('JUJU_RELATION', None)
175
176
177@cached
178def relation_id(relation_name=None, service_or_unit=None):
179 """The relation ID for the current or a specified relation"""
180 if not relation_name and not service_or_unit:
181 return os.environ.get('JUJU_RELATION_ID', None)
182 elif relation_name and service_or_unit:
183 service_name = service_or_unit.split('/')[0]
184 for relid in relation_ids(relation_name):
185 remote_service = remote_service_name(relid)
186 if remote_service == service_name:
187 return relid
188 else:
189 raise ValueError('Must specify neither or both of relation_name and service_or_unit')
190
191
192def local_unit():
193 """Local unit ID"""
194 return os.environ['JUJU_UNIT_NAME']
195
196
197def remote_unit():
198 """The remote unit for the current relation hook"""
199 return os.environ.get('JUJU_REMOTE_UNIT', None)
200
201
202def service_name():
203 """The name service group this unit belongs to"""
204 return local_unit().split('/')[0]
205
206
207@cached
208def remote_service_name(relid=None):
209 """The remote service name for a given relation-id (or the current relation)"""
210 if relid is None:
211 unit = remote_unit()
212 else:
213 units = related_units(relid)
214 unit = units[0] if units else None
215 return unit.split('/')[0] if unit else None
216
217
218def hook_name():
219 """The name of the currently executing hook"""
220 return os.environ.get('JUJU_HOOK_NAME', os.path.basename(sys.argv[0]))
221
222
223class Config(dict):
224 """A dictionary representation of the charm's config.yaml, with some
225 extra features:
226
227 - See which values in the dictionary have changed since the previous hook.
228 - For values that have changed, see what the previous value was.
229 - Store arbitrary data for use in a later hook.
230
231 NOTE: Do not instantiate this object directly - instead call
232 ``hookenv.config()``, which will return an instance of :class:`Config`.
233
234 Example usage::
235
236 >>> # inside a hook
237 >>> from charmhelpers.core import hookenv
238 >>> config = hookenv.config()
239 >>> config['foo']
240 'bar'
241 >>> # store a new key/value for later use
242 >>> config['mykey'] = 'myval'
243
244
245 >>> # user runs `juju set mycharm foo=baz`
246 >>> # now we're inside subsequent config-changed hook
247 >>> config = hookenv.config()
248 >>> config['foo']
249 'baz'
250 >>> # test to see if this val has changed since last hook
251 >>> config.changed('foo')
252 True
253 >>> # what was the previous value?
254 >>> config.previous('foo')
255 'bar'
256 >>> # keys/values that we add are preserved across hooks
257 >>> config['mykey']
258 'myval'
259
260 """
261 CONFIG_FILE_NAME = '.juju-persistent-config'
262
263 def __init__(self, *args, **kw):
264 super(Config, self).__init__(*args, **kw)
265 self.implicit_save = True
266 self._prev_dict = None
267 self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
268 if os.path.exists(self.path):
269 self.load_previous()
270 atexit(self._implicit_save)
271
272 def load_previous(self, path=None):
273 """Load previous copy of config from disk.
274
275 In normal usage you don't need to call this method directly - it
276 is called automatically at object initialization.
277
278 :param path:
279
280 File path from which to load the previous config. If `None`,
281 config is loaded from the default location. If `path` is
282 specified, subsequent `save()` calls will write to the same
283 path.
284
285 """
286 self.path = path or self.path
287 with open(self.path) as f:
288 self._prev_dict = json.load(f)
289 for k, v in copy.deepcopy(self._prev_dict).items():
290 if k not in self:
291 self[k] = v
292
293 def changed(self, key):
294 """Return True if the current value for this key is different from
295 the previous value.
296
297 """
298 if self._prev_dict is None:
299 return True
300 return self.previous(key) != self.get(key)
301
302 def previous(self, key):
303 """Return previous value for this key, or None if there
304 is no previous value.
305
306 """
307 if self._prev_dict:
308 return self._prev_dict.get(key)
309 return None
310
311 def save(self):
312 """Save this config to disk.
313
314 If the charm is using the :mod:`Services Framework <services.base>`
315 or :meth:`@hook <Hooks.hook>` decorator, this
316 is called automatically at the end of successful hook execution.
317 Otherwise, it should be called directly by user code.
318
319 To disable automatic saves, set ``implicit_save=False`` on this
320 instance.
321
322 """
323 with open(self.path, 'w') as f:
324 json.dump(self, f)
325
326 def _implicit_save(self):
327 if self.implicit_save:
328 self.save()
329
330
331@cached
332def config(scope=None):
333 """Juju charm configuration"""
334 config_cmd_line = ['config-get']
335 if scope is not None:
336 config_cmd_line.append(scope)
337 config_cmd_line.append('--format=json')
338 try:
339 config_data = json.loads(
340 subprocess.check_output(config_cmd_line).decode('UTF-8'))
341 if scope is not None:
342 return config_data
343 return Config(config_data)
344 except ValueError:
345 return None
346
347
348@cached
349def relation_get(attribute=None, unit=None, rid=None):
350 """Get relation information"""
351 _args = ['relation-get', '--format=json']
352 if rid:
353 _args.append('-r')
354 _args.append(rid)
355 _args.append(attribute or '-')
356 if unit:
357 _args.append(unit)
358 try:
359 return json.loads(subprocess.check_output(_args).decode('UTF-8'))
360 except ValueError:
361 return None
362 except CalledProcessError as e:
363 if e.returncode == 2:
364 return None
365 raise
366
367
368def relation_set(relation_id=None, relation_settings=None, **kwargs):
369 """Set relation information for the current unit"""
370 relation_settings = relation_settings if relation_settings else {}
371 relation_cmd_line = ['relation-set']
372 accepts_file = "--file" in subprocess.check_output(
373 relation_cmd_line + ["--help"], universal_newlines=True)
374 if relation_id is not None:
375 relation_cmd_line.extend(('-r', relation_id))
376 settings = relation_settings.copy()
377 settings.update(kwargs)
378 for key, value in settings.items():
379 # Force value to be a string: it always should, but some call
380 # sites pass in things like dicts or numbers.
381 if value is not None:
382 settings[key] = "{}".format(value)
383 if accepts_file:
384 # --file was introduced in Juju 1.23.2. Use it by default if
385 # available, since otherwise we'll break if the relation data is
386 # too big. Ideally we should tell relation-set to read the data from
387 # stdin, but that feature is broken in 1.23.2: Bug #1454678.
388 with tempfile.NamedTemporaryFile(delete=False) as settings_file:
389 settings_file.write(yaml.safe_dump(settings).encode("utf-8"))
390 subprocess.check_call(
391 relation_cmd_line + ["--file", settings_file.name])
392 os.remove(settings_file.name)
393 else:
394 for key, value in settings.items():
395 if value is None:
396 relation_cmd_line.append('{}='.format(key))
397 else:
398 relation_cmd_line.append('{}={}'.format(key, value))
399 subprocess.check_call(relation_cmd_line)
400 # Flush cache of any relation-gets for local unit
401 flush(local_unit())
402
403
404def relation_clear(r_id=None):
405 ''' Clears any relation data already set on relation r_id '''
406 settings = relation_get(rid=r_id,
407 unit=local_unit())
408 for setting in settings:
409 if setting not in ['public-address', 'private-address']:
410 settings[setting] = None
411 relation_set(relation_id=r_id,
412 **settings)
413
414
415@cached
416def relation_ids(reltype=None):
417 """A list of relation_ids"""
418 reltype = reltype or relation_type()
419 relid_cmd_line = ['relation-ids', '--format=json']
420 if reltype is not None:
421 relid_cmd_line.append(reltype)
422 return json.loads(
423 subprocess.check_output(relid_cmd_line).decode('UTF-8')) or []
424 return []
425
426
427@cached
428def related_units(relid=None):
429 """A list of related units"""
430 relid = relid or relation_id()
431 units_cmd_line = ['relation-list', '--format=json']
432 if relid is not None:
433 units_cmd_line.extend(('-r', relid))
434 return json.loads(
435 subprocess.check_output(units_cmd_line).decode('UTF-8')) or []
436
437
438@cached
439def relation_for_unit(unit=None, rid=None):
440 """Get the json represenation of a unit's relation"""
441 unit = unit or remote_unit()
442 relation = relation_get(unit=unit, rid=rid)
443 for key in relation:
444 if key.endswith('-list'):
445 relation[key] = relation[key].split()
446 relation['__unit__'] = unit
447 return relation
448
449
450@cached
451def relations_for_id(relid=None):
452 """Get relations of a specific relation ID"""
453 relation_data = []
454 relid = relid or relation_ids()
455 for unit in related_units(relid):
456 unit_data = relation_for_unit(unit, relid)
457 unit_data['__relid__'] = relid
458 relation_data.append(unit_data)
459 return relation_data
460
461
462@cached
463def relations_of_type(reltype=None):
464 """Get relations of a specific type"""
465 relation_data = []
466 reltype = reltype or relation_type()
467 for relid in relation_ids(reltype):
468 for relation in relations_for_id(relid):
469 relation['__relid__'] = relid
470 relation_data.append(relation)
471 return relation_data
472
473
474@cached
475def metadata():
476 """Get the current charm metadata.yaml contents as a python object"""
477 with open(os.path.join(charm_dir(), 'metadata.yaml')) as md:
478 return yaml.safe_load(md)
479
480
481@cached
482def relation_types():
483 """Get a list of relation types supported by this charm"""
484 rel_types = []
485 md = metadata()
486 for key in ('provides', 'requires', 'peers'):
487 section = md.get(key)
488 if section:
489 rel_types.extend(section.keys())
490 return rel_types
491
492
493@cached
494def relation_to_interface(relation_name):
495 """
496 Given the name of a relation, return the interface that relation uses.
497
498 :returns: The interface name, or ``None``.
499 """
500 return relation_to_role_and_interface(relation_name)[1]
501
502
503@cached
504def relation_to_role_and_interface(relation_name):
505 """
506 Given the name of a relation, return the role and the name of the interface
507 that relation uses (where role is one of ``provides``, ``requires``, or ``peer``).
508
509 :returns: A tuple containing ``(role, interface)``, or ``(None, None)``.
510 """
511 _metadata = metadata()
512 for role in ('provides', 'requires', 'peer'):
513 interface = _metadata.get(role, {}).get(relation_name, {}).get('interface')
514 if interface:
515 return role, interface
516 return None, None
517
518
519@cached
520def role_and_interface_to_relations(role, interface_name):
521 """
522 Given a role and interface name, return a list of relation names for the
523 current charm that use that interface under that role (where role is one
524 of ``provides``, ``requires``, or ``peer``).
525
526 :returns: A list of relation names.
527 """
528 _metadata = metadata()
529 results = []
530 for relation_name, relation in _metadata.get(role, {}).items():
531 if relation['interface'] == interface_name:
532 results.append(relation_name)
533 return results
534
535
536@cached
537def interface_to_relations(interface_name):
538 """
539 Given an interface, return a list of relation names for the current
540 charm that use that interface.
541
542 :returns: A list of relation names.
543 """
544 results = []
545 for role in ('provides', 'requires', 'peer'):
546 results.extend(role_and_interface_to_relations(role, interface_name))
547 return results
548
549
550@cached
551def charm_name():
552 """Get the name of the current charm as is specified on metadata.yaml"""
553 return metadata().get('name')
554
555
556@cached
557def relations():
558 """Get a nested dictionary of relation data for all related units"""
559 rels = {}
560 for reltype in relation_types():
561 relids = {}
562 for relid in relation_ids(reltype):
563 units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
564 for unit in related_units(relid):
565 reldata = relation_get(unit=unit, rid=relid)
566 units[unit] = reldata
567 relids[relid] = units
568 rels[reltype] = relids
569 return rels
570
571
572@cached
573def is_relation_made(relation, keys='private-address'):
574 '''
575 Determine whether a relation is established by checking for
576 presence of key(s). If a list of keys is provided, they
577 must all be present for the relation to be identified as made
578 '''
579 if isinstance(keys, str):
580 keys = [keys]
581 for r_id in relation_ids(relation):
582 for unit in related_units(r_id):
583 context = {}
584 for k in keys:
585 context[k] = relation_get(k, rid=r_id,
586 unit=unit)
587 if None not in context.values():
588 return True
589 return False
590
591
592def open_port(port, protocol="TCP"):
593 """Open a service network port"""
594 _args = ['open-port']
595 _args.append('{}/{}'.format(port, protocol))
596 subprocess.check_call(_args)
597
598
599def close_port(port, protocol="TCP"):
600 """Close a service network port"""
601 _args = ['close-port']
602 _args.append('{}/{}'.format(port, protocol))
603 subprocess.check_call(_args)
604
605
606@cached
607def unit_get(attribute):
608 """Get the unit ID for the remote unit"""
609 _args = ['unit-get', '--format=json', attribute]
610 try:
611 return json.loads(subprocess.check_output(_args).decode('UTF-8'))
612 except ValueError:
613 return None
614
615
616def unit_public_ip():
617 """Get this unit's public IP address"""
618 return unit_get('public-address')
619
620
621def unit_private_ip():
622 """Get this unit's private IP address"""
623 return unit_get('private-address')
624
625
626class UnregisteredHookError(Exception):
627 """Raised when an undefined hook is called"""
628 pass
629
630
631class Hooks(object):
632 """A convenient handler for hook functions.
633
634 Example::
635
636 hooks = Hooks()
637
638 # register a hook, taking its name from the function name
639 @hooks.hook()
640 def install():
641 pass # your code here
642
643 # register a hook, providing a custom hook name
644 @hooks.hook("config-changed")
645 def config_changed():
646 pass # your code here
647
648 if __name__ == "__main__":
649 # execute a hook based on the name the program is called by
650 hooks.execute(sys.argv)
651 """
652
653 def __init__(self, config_save=None):
654 super(Hooks, self).__init__()
655 self._hooks = {}
656
657 # For unknown reasons, we allow the Hooks constructor to override
658 # config().implicit_save.
659 if config_save is not None:
660 config().implicit_save = config_save
661
662 def register(self, name, function):
663 """Register a hook"""
664 self._hooks[name] = function
665
666 def execute(self, args):
667 """Execute a registered hook based on args[0]"""
668 _run_atstart()
669 hook_name = os.path.basename(args[0])
670 if hook_name in self._hooks:
671 try:
672 self._hooks[hook_name]()
673 except SystemExit as x:
674 if x.code is None or x.code == 0:
675 _run_atexit()
676 raise
677 _run_atexit()
678 else:
679 raise UnregisteredHookError(hook_name)
680
681 def hook(self, *hook_names):
682 """Decorator, registering them as hooks"""
683 def wrapper(decorated):
684 for hook_name in hook_names:
685 self.register(hook_name, decorated)
686 else:
687 self.register(decorated.__name__, decorated)
688 if '_' in decorated.__name__:
689 self.register(
690 decorated.__name__.replace('_', '-'), decorated)
691 return decorated
692 return wrapper
693
694
695def charm_dir():
696 """Return the root directory of the current charm"""
697 return os.environ.get('CHARM_DIR')
698
699
700@cached
701def action_get(key=None):
702 """Gets the value of an action parameter, or all key/value param pairs"""
703 cmd = ['action-get']
704 if key is not None:
705 cmd.append(key)
706 cmd.append('--format=json')
707 action_data = json.loads(subprocess.check_output(cmd).decode('UTF-8'))
708 return action_data
709
710
711def action_set(values):
712 """Sets the values to be returned after the action finishes"""
713 cmd = ['action-set']
714 for k, v in list(values.items()):
715 cmd.append('{}={}'.format(k, v))
716 subprocess.check_call(cmd)
717
718
719def action_fail(message):
720 """Sets the action status to failed and sets the error message.
721
722 The results set by action_set are preserved."""
723 subprocess.check_call(['action-fail', message])
724
725
726def action_name():
727 """Get the name of the currently executing action."""
728 return os.environ.get('JUJU_ACTION_NAME')
729
730
731def action_uuid():
732 """Get the UUID of the currently executing action."""
733 return os.environ.get('JUJU_ACTION_UUID')
734
735
736def action_tag():
737 """Get the tag for the currently executing action."""
738 return os.environ.get('JUJU_ACTION_TAG')
739
740
741def status_set(workload_state, message):
742 """Set the workload state with a message
743
744 Use status-set to set the workload state with a message which is visible
745 to the user via juju status. If the status-set command is not found then
746 assume this is juju < 1.23 and juju-log the message instead.
747
748 workload_state -- valid juju workload state.
749 message -- status update message
750 """
751 valid_states = ['maintenance', 'blocked', 'waiting', 'active']
752 if workload_state not in valid_states:
753 raise ValueError(
754 '{!r} is not a valid workload state'.format(workload_state)
755 )
756 cmd = ['status-set', workload_state, message]
757 try:
758 ret = subprocess.call(cmd)
759 if ret == 0:
760 return
761 except OSError as e:
762 if e.errno != errno.ENOENT:
763 raise
764 log_message = 'status-set failed: {} {}'.format(workload_state,
765 message)
766 log(log_message, level='INFO')
767
768
769def status_get():
770 """Retrieve the previously set juju workload state and message
771
772 If the status-get command is not found then assume this is juju < 1.23 and
773 return 'unknown', ""
774
775 """
776 cmd = ['status-get', "--format=json", "--include-data"]
777 try:
778 raw_status = subprocess.check_output(cmd)
779 except OSError as e:
780 if e.errno == errno.ENOENT:
781 return ('unknown', "")
782 else:
783 raise
784 else:
785 status = json.loads(raw_status.decode("UTF-8"))
786 return (status["status"], status["message"])
787
788
789def translate_exc(from_exc, to_exc):
790 def inner_translate_exc1(f):
791 def inner_translate_exc2(*args, **kwargs):
792 try:
793 return f(*args, **kwargs)
794 except from_exc:
795 raise to_exc
796
797 return inner_translate_exc2
798
799 return inner_translate_exc1
800
801
802@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
803def is_leader():
804 """Does the current unit hold the juju leadership
805
806 Uses juju to determine whether the current unit is the leader of its peers
807 """
808 cmd = ['is-leader', '--format=json']
809 return json.loads(subprocess.check_output(cmd).decode('UTF-8'))
810
811
812@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
813def leader_get(attribute=None):
814 """Juju leader get value(s)"""
815 cmd = ['leader-get', '--format=json'] + [attribute or '-']
816 return json.loads(subprocess.check_output(cmd).decode('UTF-8'))
817
818
819@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
820def leader_set(settings=None, **kwargs):
821 """Juju leader set value(s)"""
822 # Don't log secrets.
823 # log("Juju leader-set '%s'" % (settings), level=DEBUG)
824 cmd = ['leader-set']
825 settings = settings or {}
826 settings.update(kwargs)
827 for k, v in settings.items():
828 if v is None:
829 cmd.append('{}='.format(k))
830 else:
831 cmd.append('{}={}'.format(k, v))
832 subprocess.check_call(cmd)
833
834
835@cached
836def juju_version():
837 """Full version string (eg. '1.23.3.1-trusty-amd64')"""
838 # Per https://bugs.launchpad.net/juju-core/+bug/1455368/comments/1
839 jujud = glob.glob('/var/lib/juju/tools/machine-*/jujud')[0]
840 return subprocess.check_output([jujud, 'version'],
841 universal_newlines=True).strip()
842
843
844@cached
845def has_juju_version(minimum_version):
846 """Return True if the Juju version is at least the provided version"""
847 return LooseVersion(juju_version()) >= LooseVersion(minimum_version)
848
849
850_atexit = []
851_atstart = []
852
853
854def atstart(callback, *args, **kwargs):
855 '''Schedule a callback to run before the main hook.
856
857 Callbacks are run in the order they were added.
858
859 This is useful for modules and classes to perform initialization
860 and inject behavior. In particular:
861
862 - Run common code before all of your hooks, such as logging
863 the hook name or interesting relation data.
864 - Defer object or module initialization that requires a hook
865 context until we know there actually is a hook context,
866 making testing easier.
867 - Rather than requiring charm authors to include boilerplate to
868 invoke your helper's behavior, have it run automatically if
869 your object is instantiated or module imported.
870
871 This is not at all useful after your hook framework has been launched.
872 '''
873 global _atstart
874 _atstart.append((callback, args, kwargs))
875
876
877def atexit(callback, *args, **kwargs):
878 '''Schedule a callback to run on successful hook completion.
879
880 Callbacks are run in the reverse order that they were added.'''
881 _atexit.append((callback, args, kwargs))
882
883
884def _run_atstart():
885 '''Hook frameworks must invoke this before running the main hook body.'''
886 global _atstart
887 for callback, args, kwargs in _atstart:
888 callback(*args, **kwargs)
889 del _atstart[:]
890
891
892def _run_atexit():
893 '''Hook frameworks must invoke this after the main hook body has
894 successfully completed. Do not invoke it if the hook fails.'''
895 global _atexit
896 for callback, args, kwargs in reversed(_atexit):
897 callback(*args, **kwargs)
898 del _atexit[:]
