Merge lp:~hopem/charms/trusty/keystone/fix-ssl-cert-sync into lp:~openstack-charmers-archive/charms/trusty/keystone/next
Status: Merged
Merged at revision: 109
Proposed branch: lp:~hopem/charms/trusty/keystone/fix-ssl-cert-sync
Merge into: lp:~openstack-charmers-archive/charms/trusty/keystone/next
Diff against target: 1821 lines (+937/-202), 7 files modified:
- README.md (+46/-10)
- hooks/keystone_context.py (+45/-8)
- hooks/keystone_hooks.py (+180/-66)
- hooks/keystone_ssl.py (+43/-14)
- hooks/keystone_utils.py (+443/-50)
- unit_tests/test_keystone_hooks.py (+178/-50)
- unit_tests/test_keystone_utils.py (+2/-4)
To merge this branch: bzr merge lp:~hopem/charms/trusty/keystone/fix-ssl-cert-sync
Related bugs:
Reviewers:
- Liam Young (community): Approve
- Ryan Beisner (community): Approve
uosci-testing-bot (uosci-testing-bot) wrote:
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #557 keystone-next for hopem mp245492
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #684 keystone-next for hopem mp245492
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
ERROR subprocess encountered error code 1
make: *** [test] Error 1
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #531 keystone-next for hopem mp245492
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #560 keystone-next for hopem mp245492
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #687 keystone-next for hopem mp245492
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
ERROR subprocess encountered error code 1
make: *** [test] Error 1
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #533 keystone-next for hopem mp245492
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #562 keystone-next for hopem mp245492
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #538 keystone-next for hopem mp245492
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #567 keystone-next for hopem mp245492
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #694 keystone-next for hopem mp245492
AMULET OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #598 keystone-next for hopem mp245492
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #627 keystone-next for hopem mp245492
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #786 keystone-next for hopem mp245492
AMULET OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #609 keystone-next for hopem mp245492
LINT FAIL: lint-test failed
LINT Results (max last 2 lines):
unit_
make: *** [lint] Error 1
Full lint test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #638 keystone-next for hopem mp245492
UNIT FAIL: unit-test failed
UNIT Results (max last 2 lines):
FAILED (errors=3, failures=2)
make: *** [unit_test] Error 1
Full unit test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #796 keystone-next for hopem mp245492
AMULET OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #610 keystone-next for hopem mp245492
LINT FAIL: lint-test failed
LINT Results (max last 2 lines):
unit_
make: *** [lint] Error 1
Full lint test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #639 keystone-next for hopem mp245492
UNIT FAIL: unit-test failed
UNIT Results (max last 2 lines):
FAILED (errors=3, failures=2)
make: *** [unit_test] Error 1
Full unit test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #797 keystone-next for hopem mp245492
AMULET OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #611 keystone-next for hopem mp245492
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #640 keystone-next for hopem mp245492
UNIT FAIL: unit-test failed
UNIT Results (max last 2 lines):
FAILED (errors=3, failures=2)
make: *** [unit_test] Error 1
Full unit test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #798 keystone-next for hopem mp245492
AMULET OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #613 keystone-next for hopem mp245492
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #642 keystone-next for hopem mp245492
UNIT FAIL: unit-test failed
UNIT Results (max last 2 lines):
FAILED (errors=9, failures=2)
make: *** [unit_test] Error 1
Full unit test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #799 keystone-next for hopem mp245492
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
ERROR subprocess encountered error code 1
make: *** [test] Error 1
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #614 keystone-next for hopem mp245492
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #643 keystone-next for hopem mp245492
UNIT FAIL: unit-test failed
UNIT Results (max last 2 lines):
FAILED (errors=25)
make: *** [unit_test] Error 1
Full unit test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #800 keystone-next for hopem mp245492
AMULET OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #615 keystone-next for hopem mp245492
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #644 keystone-next for hopem mp245492
UNIT FAIL: unit-test failed
UNIT Results (max last 2 lines):
FAILED (errors=25)
make: *** [unit_test] Error 1
Full unit test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #801 keystone-next for hopem mp245492
AMULET OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #779 keystone-next for hopem mp245492
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #808 keystone-next for hopem mp245492
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #963 keystone-next for hopem mp245492
AMULET OK: passed
Ryan Beisner (1chb1n) wrote:
P, T & U deploy tests are all happy!
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #800 keystone-next for hopem mp245492
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #829 keystone-next for hopem mp245492
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #989 keystone-next for hopem mp245492
AMULET OK: passed
Revision 112, by Edward Hope-Morley: updated README
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #884 keystone-next for hopem mp245492
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #913 keystone-next for hopem mp245492
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #1108 keystone-next for hopem mp245492
AMULET OK: passed
Build: http://
Ryan Beisner (1chb1n) wrote:
All deploy tests are happy. P-I, T-I, T-J, U-J
Liam Young (gnuoy) wrote:
Approve
Tested using mojo spec dev/keystone_ssl from lp:~ost-maintainers/openstack-mojo-specs/mojo-openstack-specs.
(There are issues with removing units, which are now highlighted in the README.)
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #935 keystone-next for hopem mp245492
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #964 keystone-next for hopem mp245492
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #1157 keystone-next for hopem mp245492
AMULET OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #938 keystone-next for hopem mp245492
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #967 keystone-next for hopem mp245492
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #1160 keystone-next for hopem mp245492
AMULET OK: passed
Build: http://
Revision 113, by Edward Hope-Morley: ignore ssl actions if not enabled and improve support for non-ssl -> ssl
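The behaviour described in this revision shows up in the diff as early-return guards (e.g. `if not is_ssl_cert_master(): return` in `configure_cert`). The sketch below illustrates the pattern only; the state dict, unit names, and return values are stand-ins, not the charm's real helpers.

```python
# Sketch of the guard pattern: SSL actions are ignored unless SSL is
# enabled, and only the elected ssl-cert-master creates certificates.
# All names and values here are illustrative stand-ins.

state = {'ssl_enabled': True, 'cert_master': 'keystone/0', 'local': 'keystone/1'}
actions = []


def is_ssl_cert_master():
    # In the charm this is decided via peer relation data; simplified here.
    return state['local'] == state['cert_master']


def configure_cert():
    if not state['ssl_enabled']:
        return 'ssl-disabled'   # ignore ssl actions entirely
    if not is_ssl_cert_master():
        return 'skipped'        # non-master units wait for a cert sync
    actions.append('created-certs')
    return 'created'


print(configure_cert())          # keystone/1 is not the master -> skipped
state['local'] = 'keystone/0'
print(configure_cert())          # the master unit creates the certs
```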
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #946 keystone-next for hopem mp245492
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #975 keystone-next for hopem mp245492
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #1168 keystone-next for hopem mp245492
AMULET OK: passed
Build: http://
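The central mechanism this branch introduces, visible in the preview diff, is a `synchronize_ca_if_changed()` decorator applied to hooks so that certificates are pushed to peers only when the CA material actually changes (with a `force=True` variant). The following is a minimal sketch of that change-detection idea under stated assumptions: the real decorator checksums on-disk CA material and syncs to peers (the diff mentions unison), whereas here both are replaced by in-memory stand-ins.

```python
import hashlib

# Illustrative sketch of a "sync CA only if it changed" decorator:
# hash the CA material before and after the wrapped hook runs, and
# trigger a sync only when the hash differs (or when force=True).
# _ca_state and sync_log are stand-ins for on-disk certs and the
# actual unison-based peer sync; names are illustrative.

_ca_state = {'pem': 'initial-ca'}
sync_log = []


def _ca_checksum():
    return hashlib.sha256(_ca_state['pem'].encode()).hexdigest()


def synchronize_ca_if_changed(force=False):
    def decorator(fn):
        def wrapper(*args, **kwargs):
            before = _ca_checksum()
            result = fn(*args, **kwargs)
            if force or _ca_checksum() != before:
                sync_log.append(fn.__name__)  # would sync certs to peer units
            return result
        return wrapper
    return decorator


@synchronize_ca_if_changed()
def config_changed(new_pem=None):
    # Stand-in for a hook that may or may not rotate the CA.
    if new_pem:
        _ca_state['pem'] = new_pem


config_changed()               # CA unchanged: no sync triggered
config_changed('rotated-ca')   # CA changed: sync triggered once
print(sync_log)                # ['config_changed']
```

This keeps idempotent hook runs cheap: repeated `config-changed` invocations with no certificate rotation never re-trigger the (expensive) peer sync.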
Preview Diff
1 | === modified file 'README.md' | |||
2 | --- README.md 2014-06-29 21:56:26 +0000 | |||
3 | +++ README.md 2015-01-21 16:51:08 +0000 | |||
4 | @@ -1,22 +1,30 @@ | |||
5 | 1 | Overview | ||
6 | 2 | ======== | ||
7 | 3 | |||
8 | 1 | This charm provides Keystone, the Openstack identity service. It's target | 4 | This charm provides Keystone, the Openstack identity service. It's target |
14 | 2 | platform is Ubuntu Precise + Openstack Essex. This has not been tested | 5 | platform is (ideally) Ubuntu LTS + Openstack. |
15 | 3 | using Oneiric + Diablo. | 6 | |
16 | 4 | 7 | Usage | |
17 | 5 | It provides three interfaces. | 8 | ===== |
18 | 6 | 9 | ||
19 | 10 | The following interfaces are provided: | ||
20 | 11 | |||
21 | 12 | - nrpe-external-master: Used generate Nagios checks. | ||
22 | 13 | |||
23 | 7 | - identity-service: Openstack API endpoints request an entry in the | 14 | - identity-service: Openstack API endpoints request an entry in the |
24 | 8 | Keystone service catalog + endpoint template catalog. When a relation | 15 | Keystone service catalog + endpoint template catalog. When a relation |
25 | 9 | is established, Keystone receives: service name, region, public_url, | 16 | is established, Keystone receives: service name, region, public_url, |
26 | 10 | admin_url and internal_url. It first checks that the requested service | 17 | admin_url and internal_url. It first checks that the requested service |
27 | 11 | is listed as a supported service. This list should stay updated to | 18 | is listed as a supported service. This list should stay updated to |
30 | 12 | support current Openstack core services. If the services is supported, | 19 | support current Openstack core services. If the service is supported, |
31 | 13 | a entry in the service catalog is created, an endpoint template is | 20 | an entry in the service catalog is created, an endpoint template is |
32 | 14 | created and a admin token is generated. The other end of the relation | 21 | created and a admin token is generated. The other end of the relation |
34 | 15 | recieves the token as well as info on which ports Keystone is listening. | 22 | receives the token as well as info on which ports Keystone is listening |
35 | 23 | on. | ||
36 | 16 | 24 | ||
37 | 17 | - keystone-service: This is currently only used by Horizon/dashboard | 25 | - keystone-service: This is currently only used by Horizon/dashboard |
38 | 18 | as its interaction with Keystone is different from other Openstack API | 26 | as its interaction with Keystone is different from other Openstack API |
40 | 19 | servicies. That is, Horizon requests a Keystone role and token exists. | 27 | services. That is, Horizon requests a Keystone role and token exists. |
41 | 20 | During a relation, Horizon requests its configured default role and | 28 | During a relation, Horizon requests its configured default role and |
42 | 21 | Keystone responds with a token and the auth + admin ports on which | 29 | Keystone responds with a token and the auth + admin ports on which |
43 | 22 | Keystone is listening. | 30 | Keystone is listening. |
44 | @@ -26,9 +34,37 @@ | |||
45 | 26 | provision users, tenants, etc. or that otherwise automate using the | 34 | provision users, tenants, etc. or that otherwise automate using the |
46 | 27 | Openstack cluster deployment. | 35 | Openstack cluster deployment. |
47 | 28 | 36 | ||
48 | 37 | - identity-notifications: Used to broadcast messages to any services | ||
49 | 38 | listening on the interface. | ||
50 | 39 | |||
51 | 40 | Database | ||
52 | 41 | -------- | ||
53 | 42 | |||
54 | 29 | Keystone requires a database. By default, a local sqlite database is used. | 43 | Keystone requires a database. By default, a local sqlite database is used. |
55 | 30 | The charm supports relations to a shared-db via mysql-shared interface. When | 44 | The charm supports relations to a shared-db via mysql-shared interface. When |
56 | 31 | a new data store is configured, the charm ensures the minimum administrator | 45 | a new data store is configured, the charm ensures the minimum administrator |
57 | 32 | credentials exist (as configured via charm configuration) | 46 | credentials exist (as configured via charm configuration) |
58 | 33 | 47 | ||
60 | 34 | VIP is only required if you plan on multi-unit clusterming. The VIP becomes a highly-available API endpoint. | 48 | HA/Clustering |
61 | 49 | ------------- | ||
62 | 50 | |||
63 | 51 | VIP is only required if you plan on multi-unit clustering (requires relating | ||
64 | 52 | with hacluster charm). The VIP becomes a highly-available API endpoint. | ||
65 | 53 | |||
66 | 54 | SSL/HTTPS | ||
67 | 55 | --------- | ||
68 | 56 | |||
69 | 57 | This charm also supports SSL and HTTPS endpoints. In order to ensure SSL | ||
70 | 58 | certificates are only created once and distributed to all units, one unit gets | ||
71 | 59 | elected as an ssl-cert-master. One side-effect of this is that as units are | ||
72 | 60 | scaled-out the currently elected leader needs to be running in order for nodes | ||
73 | 61 | to sync certificates. This 'feature' is to work around the lack of native | ||
74 | 62 | leadership election via Juju itself, a feature that is due for release some | ||
75 | 63 | time soon but until then we have to rely on this. Also, if a keystone unit does | ||
76 | 64 | go down, it must be removed from Juju i.e. | ||
77 | 65 | |||
78 | 66 | juju destroy-unit keystone/<unit-num> | ||
79 | 67 | |||
80 | 68 | Otherwise it will be assumed that this unit may come back at some point and | ||
81 | 69 | therefore must be know to be in-sync with the rest before continuing. | ||
82 | 70 | |||
83 | 35 | 71 | ||
84 | === modified file 'hooks/keystone_context.py' | |||
85 | --- hooks/keystone_context.py 2015-01-19 10:45:41 +0000 | |||
86 | +++ hooks/keystone_context.py 2015-01-21 16:51:08 +0000 | |||
87 | @@ -1,3 +1,5 @@ | |||
88 | 1 | import os | ||
89 | 2 | |||
90 | 1 | from charmhelpers.core.hookenv import config | 3 | from charmhelpers.core.hookenv import config |
91 | 2 | 4 | ||
92 | 3 | from charmhelpers.core.host import mkdir, write_file | 5 | from charmhelpers.core.host import mkdir, write_file |
93 | @@ -6,13 +8,16 @@ | |||
94 | 6 | 8 | ||
95 | 7 | from charmhelpers.contrib.hahelpers.cluster import ( | 9 | from charmhelpers.contrib.hahelpers.cluster import ( |
96 | 8 | determine_apache_port, | 10 | determine_apache_port, |
98 | 9 | determine_api_port | 11 | determine_api_port, |
99 | 12 | ) | ||
100 | 13 | |||
101 | 14 | from charmhelpers.core.hookenv import ( | ||
102 | 15 | log, | ||
103 | 16 | INFO, | ||
104 | 10 | ) | 17 | ) |
105 | 11 | 18 | ||
106 | 12 | from charmhelpers.contrib.hahelpers.apache import install_ca_cert | 19 | from charmhelpers.contrib.hahelpers.apache import install_ca_cert |
107 | 13 | 20 | ||
108 | 14 | import os | ||
109 | 15 | |||
110 | 16 | CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt' | 21 | CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt' |
111 | 17 | 22 | ||
112 | 18 | 23 | ||
113 | @@ -29,20 +34,52 @@ | |||
114 | 29 | return super(ApacheSSLContext, self).__call__() | 34 | return super(ApacheSSLContext, self).__call__() |
115 | 30 | 35 | ||
116 | 31 | def configure_cert(self, cn): | 36 | def configure_cert(self, cn): |
118 | 32 | from keystone_utils import SSH_USER, get_ca | 37 | from keystone_utils import ( |
119 | 38 | SSH_USER, | ||
120 | 39 | get_ca, | ||
121 | 40 | ensure_permissions, | ||
122 | 41 | is_ssl_cert_master, | ||
123 | 42 | ) | ||
124 | 43 | |||
125 | 33 | ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace) | 44 | ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace) |
127 | 34 | mkdir(path=ssl_dir) | 45 | perms = 0o755 |
128 | 46 | mkdir(path=ssl_dir, owner=SSH_USER, group='keystone', perms=perms) | ||
129 | 47 | # Ensure accessible by keystone ssh user and group (for sync) | ||
130 | 48 | ensure_permissions(ssl_dir, user=SSH_USER, group='keystone', | ||
131 | 49 | perms=perms) | ||
132 | 50 | |||
133 | 51 | if not is_ssl_cert_master(): | ||
134 | 52 | log("Not ssl-cert-master - skipping apache cert config", | ||
135 | 53 | level=INFO) | ||
136 | 54 | return | ||
137 | 55 | |||
138 | 56 | log("Creating apache ssl certs in %s" % (ssl_dir), level=INFO) | ||
139 | 57 | |||
140 | 35 | ca = get_ca(user=SSH_USER) | 58 | ca = get_ca(user=SSH_USER) |
141 | 36 | cert, key = ca.get_cert_and_key(common_name=cn) | 59 | cert, key = ca.get_cert_and_key(common_name=cn) |
142 | 37 | write_file(path=os.path.join(ssl_dir, 'cert_{}'.format(cn)), | 60 | write_file(path=os.path.join(ssl_dir, 'cert_{}'.format(cn)), |
144 | 38 | content=cert) | 61 | content=cert, owner=SSH_USER, group='keystone', perms=0o644) |
145 | 39 | write_file(path=os.path.join(ssl_dir, 'key_{}'.format(cn)), | 62 | write_file(path=os.path.join(ssl_dir, 'key_{}'.format(cn)), |
147 | 40 | content=key) | 63 | content=key, owner=SSH_USER, group='keystone', perms=0o644) |
148 | 41 | 64 | ||
149 | 42 | def configure_ca(self): | 65 | def configure_ca(self): |
151 | 43 | from keystone_utils import SSH_USER, get_ca | 66 | from keystone_utils import ( |
152 | 67 | SSH_USER, | ||
153 | 68 | get_ca, | ||
154 | 69 | ensure_permissions, | ||
155 | 70 | is_ssl_cert_master, | ||
156 | 71 | ) | ||
157 | 72 | |||
158 | 73 | if not is_ssl_cert_master(): | ||
159 | 74 | log("Not ssl-cert-master - skipping apache cert config", | ||
160 | 75 | level=INFO) | ||
161 | 76 | return | ||
162 | 77 | |||
163 | 44 | ca = get_ca(user=SSH_USER) | 78 | ca = get_ca(user=SSH_USER) |
164 | 45 | install_ca_cert(ca.get_ca_bundle()) | 79 | install_ca_cert(ca.get_ca_bundle()) |
165 | 80 | # Ensure accessible by keystone ssh user and group (unison) | ||
166 | 81 | ensure_permissions(CA_CERT_PATH, user=SSH_USER, group='keystone', | ||
167 | 82 | perms=0o0644) | ||
168 | 46 | 83 | ||
169 | 47 | def canonical_names(self): | 84 | def canonical_names(self): |
170 | 48 | addresses = self.get_network_addresses() | 85 | addresses = self.get_network_addresses() |
171 | 49 | 86 | ||
172 | === modified file 'hooks/keystone_hooks.py' | |||
173 | --- hooks/keystone_hooks.py 2015-01-21 16:26:50 +0000 | |||
174 | +++ hooks/keystone_hooks.py 2015-01-21 16:51:08 +0000 | |||
175 | @@ -1,7 +1,9 @@ | |||
176 | 1 | #!/usr/bin/python | 1 | #!/usr/bin/python |
177 | 2 | |||
178 | 3 | import hashlib | 2 | import hashlib |
179 | 3 | import json | ||
180 | 4 | import os | 4 | import os |
181 | 5 | import re | ||
182 | 6 | import stat | ||
183 | 5 | import sys | 7 | import sys |
184 | 6 | import time | 8 | import time |
185 | 7 | 9 | ||
186 | @@ -16,6 +18,8 @@ | |||
187 | 16 | is_relation_made, | 18 | is_relation_made, |
188 | 17 | log, | 19 | log, |
189 | 18 | local_unit, | 20 | local_unit, |
190 | 21 | DEBUG, | ||
191 | 22 | WARNING, | ||
192 | 19 | ERROR, | 23 | ERROR, |
193 | 20 | relation_get, | 24 | relation_get, |
194 | 21 | relation_ids, | 25 | relation_ids, |
195 | @@ -48,9 +52,8 @@ | |||
196 | 48 | get_admin_passwd, | 52 | get_admin_passwd, |
197 | 49 | migrate_database, | 53 | migrate_database, |
198 | 50 | save_script_rc, | 54 | save_script_rc, |
200 | 51 | synchronize_ca, | 55 | synchronize_ca_if_changed, |
201 | 52 | register_configs, | 56 | register_configs, |
202 | 53 | relation_list, | ||
203 | 54 | restart_map, | 57 | restart_map, |
204 | 55 | services, | 58 | services, |
205 | 56 | CLUSTER_RES, | 59 | CLUSTER_RES, |
206 | @@ -58,12 +61,18 @@ | |||
207 | 58 | SSH_USER, | 61 | SSH_USER, |
208 | 59 | setup_ipv6, | 62 | setup_ipv6, |
209 | 60 | send_notifications, | 63 | send_notifications, |
210 | 64 | check_peer_actions, | ||
211 | 65 | CA_CERT_PATH, | ||
212 | 66 | ensure_permissions, | ||
213 | 67 | get_ssl_sync_request_units, | ||
214 | 68 | is_str_true, | ||
215 | 69 | is_ssl_cert_master, | ||
216 | 61 | ) | 70 | ) |
217 | 62 | 71 | ||
218 | 63 | from charmhelpers.contrib.hahelpers.cluster import ( | 72 | from charmhelpers.contrib.hahelpers.cluster import ( |
221 | 64 | eligible_leader, | 73 | is_elected_leader, |
220 | 65 | is_leader, | ||
222 | 66 | get_hacluster_config, | 74 | get_hacluster_config, |
223 | 75 | peer_units, | ||
224 | 67 | ) | 76 | ) |
225 | 68 | 77 | ||
226 | 69 | from charmhelpers.payload.execd import execd_preinstall | 78 | from charmhelpers.payload.execd import execd_preinstall |
227 | @@ -73,6 +82,7 @@ | |||
228 | 73 | ) | 82 | ) |
229 | 74 | from charmhelpers.contrib.openstack.ip import ( | 83 | from charmhelpers.contrib.openstack.ip import ( |
230 | 75 | ADMIN, | 84 | ADMIN, |
231 | 85 | PUBLIC, | ||
232 | 76 | resolve_address, | 86 | resolve_address, |
233 | 77 | ) | 87 | ) |
234 | 78 | from charmhelpers.contrib.network.ip import ( | 88 | from charmhelpers.contrib.network.ip import ( |
235 | @@ -100,12 +110,14 @@ | |||
236 | 100 | 110 | ||
237 | 101 | @hooks.hook('config-changed') | 111 | @hooks.hook('config-changed') |
238 | 102 | @restart_on_change(restart_map()) | 112 | @restart_on_change(restart_map()) |
239 | 113 | @synchronize_ca_if_changed() | ||
240 | 103 | def config_changed(): | 114 | def config_changed(): |
241 | 104 | if config('prefer-ipv6'): | 115 | if config('prefer-ipv6'): |
242 | 105 | setup_ipv6() | 116 | setup_ipv6() |
243 | 106 | sync_db_with_multi_ipv6_addresses(config('database'), | 117 | sync_db_with_multi_ipv6_addresses(config('database'), |
244 | 107 | config('database-user')) | 118 | config('database-user')) |
245 | 108 | 119 | ||
246 | 120 | unison.ensure_user(user=SSH_USER, group='juju_keystone') | ||
247 | 109 | unison.ensure_user(user=SSH_USER, group='keystone') | 121 | unison.ensure_user(user=SSH_USER, group='keystone') |
248 | 110 | homedir = unison.get_homedir(SSH_USER) | 122 | homedir = unison.get_homedir(SSH_USER) |
249 | 111 | if not os.path.isdir(homedir): | 123 | if not os.path.isdir(homedir): |
250 | @@ -116,25 +128,33 @@ | |||
251 | 116 | 128 | ||
252 | 117 | check_call(['chmod', '-R', 'g+wrx', '/var/lib/keystone/']) | 129 | check_call(['chmod', '-R', 'g+wrx', '/var/lib/keystone/']) |
253 | 118 | 130 | ||
254 | 131 | # Ensure unison can write to certs dir. | ||
255 | 132 | # FIXME: need to a better way around this e.g. move cert to it's own dir | ||
256 | 133 | # and give that unison permissions. | ||
257 | 134 | path = os.path.dirname(CA_CERT_PATH) | ||
258 | 135 | perms = int(oct(stat.S_IMODE(os.stat(path).st_mode) | | ||
259 | 136 | (stat.S_IWGRP | stat.S_IXGRP)), base=8) | ||
260 | 137 | ensure_permissions(path, group='keystone', perms=perms) | ||
261 | 138 | |||
262 | 119 | save_script_rc() | 139 | save_script_rc() |
263 | 120 | configure_https() | 140 | configure_https() |
264 | 121 | update_nrpe_config() | 141 | update_nrpe_config() |
265 | 122 | CONFIGS.write_all() | 142 | CONFIGS.write_all() |
276 | 123 | if eligible_leader(CLUSTER_RES): | 143 | |
277 | 124 | migrate_database() | 144 | # Update relations since SSL may have been configured. If we have peer |
278 | 125 | ensure_initial_admin(config) | 145 | # units we can rely on the sync to do this in cluster relation. |
279 | 126 | log('Firing identity_changed hook for all related services.') | 146 | if is_elected_leader(CLUSTER_RES) and not peer_units(): |
280 | 127 | # HTTPS may have been set - so fire all identity relations | 147 | update_all_identity_relation_units() |
271 | 128 | # again | ||
272 | 129 | for r_id in relation_ids('identity-service'): | ||
273 | 130 | for unit in relation_list(r_id): | ||
274 | 131 | identity_changed(relation_id=r_id, | ||
275 | 132 | remote_unit=unit) | ||
281 | 133 | 148 | ||
282 | 134 | for rid in relation_ids('identity-admin'): | 149 | for rid in relation_ids('identity-admin'): |
283 | 135 | admin_relation_changed(rid) | 150 | admin_relation_changed(rid) |
286 | 136 | for rid in relation_ids('cluster'): | 151 | |
287 | 137 | cluster_joined(rid) | 152 | # Ensure sync request is sent out (needed for upgrade to ssl from non-ssl) |
288 | 153 | settings = {} | ||
289 | 154 | append_ssl_sync_request(settings) | ||
290 | 155 | if settings: | ||
291 | 156 | for rid in relation_ids('cluster'): | ||
292 | 157 | relation_set(relation_id=rid, relation_settings=settings) | ||
293 | 138 | 158 | ||
294 | 139 | 159 | ||
295 | 140 | @hooks.hook('shared-db-relation-joined') | 160 | @hooks.hook('shared-db-relation-joined') |
296 | @@ -167,14 +187,35 @@ | |||
297 | 167 | relation_set(database=config('database')) | 187 | relation_set(database=config('database')) |
298 | 168 | 188 | ||
299 | 169 | 189 | ||
300 | 190 | def update_all_identity_relation_units(): | ||
301 | 191 | CONFIGS.write_all() | ||
302 | 192 | try: | ||
303 | 193 | migrate_database() | ||
304 | 194 | except Exception as exc: | ||
305 | 195 | log("Database initialisation failed (%s) - db not ready?" % (exc), | ||
306 | 196 | level=WARNING) | ||
307 | 197 | else: | ||
308 | 198 | ensure_initial_admin(config) | ||
309 | 199 | log('Firing identity_changed hook for all related services.') | ||
310 | 200 | for rid in relation_ids('identity-service'): | ||
311 | 201 | for unit in related_units(rid): | ||
312 | 202 | identity_changed(relation_id=rid, remote_unit=unit) | ||
313 | 203 | |||
314 | 204 | |||
315 | 205 | @synchronize_ca_if_changed(force=True) | ||
316 | 206 | def update_all_identity_relation_units_force_sync(): | ||
317 | 207 | update_all_identity_relation_units() | ||
318 | 208 | |||
319 | 209 | |||
320 | 170 | @hooks.hook('shared-db-relation-changed') | 210 | @hooks.hook('shared-db-relation-changed') |
321 | 171 | @restart_on_change(restart_map()) | 211 | @restart_on_change(restart_map()) |
322 | 212 | @synchronize_ca_if_changed() | ||
323 | 172 | def db_changed(): | 213 | def db_changed(): |
324 | 173 | if 'shared-db' not in CONFIGS.complete_contexts(): | 214 | if 'shared-db' not in CONFIGS.complete_contexts(): |
325 | 174 | log('shared-db relation incomplete. Peer not ready?') | 215 | log('shared-db relation incomplete. Peer not ready?') |
326 | 175 | else: | 216 | else: |
327 | 176 | CONFIGS.write(KEYSTONE_CONF) | 217 | CONFIGS.write(KEYSTONE_CONF) |
329 | 177 | if eligible_leader(CLUSTER_RES): | 218 | if is_elected_leader(CLUSTER_RES): |
330 | 178 | # Bugs 1353135 & 1187508. Dbs can appear to be ready before the | 219 | # Bugs 1353135 & 1187508. Dbs can appear to be ready before the |
331 | 179 | # units acl entry has been added. So, if the db supports passing | 220 | # units acl entry has been added. So, if the db supports passing |
332 | 180 | # a list of permitted units then check if we're in the list. | 221 | # a list of permitted units then check if we're in the list. |
333 | @@ -182,38 +223,46 @@ | |||
334 | 182 | if allowed_units and local_unit() not in allowed_units.split(): | 223 | if allowed_units and local_unit() not in allowed_units.split(): |
335 | 183 | log('Allowed_units list provided and this unit not present') | 224 | log('Allowed_units list provided and this unit not present') |
336 | 184 | return | 225 | return |
337 | 185 | migrate_database() | ||
338 | 186 | ensure_initial_admin(config) | ||
339 | 187 | # Ensure any existing service entries are updated in the | 226 | # Ensure any existing service entries are updated in the |
340 | 188 | # new database backend | 227 | # new database backend |
344 | 189 | for rid in relation_ids('identity-service'): | 228 | update_all_identity_relation_units() |
342 | 190 | for unit in related_units(rid): | ||
343 | 191 | identity_changed(relation_id=rid, remote_unit=unit) | ||
345 | 192 | 229 | ||
346 | 193 | 230 | ||
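Several hooks in this diff replace the old nested `relation_ids`/`related_units` loops with a single call to `update_all_identity_relation_units()`. A minimal sketch of what that helper does — the relation accessors are passed in as parameters here for illustration; the charm itself uses the `hookenv` globals:

```python
def update_all_identity_relation_units(relation_ids, related_units,
                                       identity_changed):
    # Re-fire identity_changed for every remote unit on every
    # identity-service relation, so existing service entries pick up
    # any new endpoint or CA state.
    for rid in relation_ids('identity-service'):
        for unit in related_units(rid):
            identity_changed(relation_id=rid, remote_unit=unit)
```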
347 | 194 | @hooks.hook('pgsql-db-relation-changed') | 231 | @hooks.hook('pgsql-db-relation-changed') |
348 | 195 | @restart_on_change(restart_map()) | 232 | @restart_on_change(restart_map()) |
349 | 233 | @synchronize_ca_if_changed() | ||
350 | 196 | def pgsql_db_changed(): | 234 | def pgsql_db_changed(): |
351 | 197 | if 'pgsql-db' not in CONFIGS.complete_contexts(): | 235 | if 'pgsql-db' not in CONFIGS.complete_contexts(): |
352 | 198 | log('pgsql-db relation incomplete. Peer not ready?') | 236 | log('pgsql-db relation incomplete. Peer not ready?') |
353 | 199 | else: | 237 | else: |
354 | 200 | CONFIGS.write(KEYSTONE_CONF) | 238 | CONFIGS.write(KEYSTONE_CONF) |
358 | 201 | if eligible_leader(CLUSTER_RES): | 239 | if is_elected_leader(CLUSTER_RES): |
356 | 202 | migrate_database() | ||
357 | 203 | ensure_initial_admin(config) | ||
359 | 204 | # Ensure any existing service entries are updated in the | 240 | # Ensure any existing service entries are updated in the |
360 | 205 | # new database backend | 241 | # new database backend |
364 | 206 | for rid in relation_ids('identity-service'): | 242 | update_all_identity_relation_units() |
362 | 207 | for unit in related_units(rid): | ||
363 | 208 | identity_changed(relation_id=rid, remote_unit=unit) | ||
365 | 209 | 243 | ||
366 | 210 | 244 | ||
367 | 211 | @hooks.hook('identity-service-relation-changed') | 245 | @hooks.hook('identity-service-relation-changed') |
368 | 246 | @synchronize_ca_if_changed() | ||
369 | 212 | def identity_changed(relation_id=None, remote_unit=None): | 247 | def identity_changed(relation_id=None, remote_unit=None): |
370 | 248 | CONFIGS.write_all() | ||
371 | 249 | |||
372 | 213 | notifications = {} | 250 | notifications = {} |
376 | 214 | if eligible_leader(CLUSTER_RES): | 251 | if is_elected_leader(CLUSTER_RES): |
377 | 215 | add_service_to_keystone(relation_id, remote_unit) | 252 | # Catch database not configured error and defer until db ready |
378 | 216 | synchronize_ca() | 253 | from keystoneclient.apiclient.exceptions import InternalServerError |
379 | 254 | try: | ||
380 | 255 | add_service_to_keystone(relation_id, remote_unit) | ||
381 | 256 | except InternalServerError as exc: | ||
382 | 257 | key = re.compile("'keystone\..+' doesn't exist") | ||
383 | 258 | if re.search(key, exc.message): | ||
384 | 259 | log("Keystone database not yet ready (InternalServerError " | ||
385 | 260 | "raised) - deferring until *-db relation completes.", | ||
386 | 261 | level=WARNING) | ||
387 | 262 | return | ||
388 | 263 | |||
389 | 264 | log("Unexpected exception occurred", level=ERROR) | ||
390 | 265 | raise | ||
391 | 217 | 266 | ||
392 | 218 | settings = relation_get(rid=relation_id, unit=remote_unit) | 267 | settings = relation_get(rid=relation_id, unit=remote_unit) |
393 | 219 | service = settings.get('service', None) | 268 | service = settings.get('service', None) |
394 | @@ -241,46 +290,113 @@ | |||
395 | 241 | send_notifications(notifications) | 290 | send_notifications(notifications) |
396 | 242 | 291 | ||
397 | 243 | 292 | ||
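The `InternalServerError` handling added to `identity_changed()` defers setup when the keystone tables are missing and re-raises anything else. A standalone sketch of that decision — the function name and return value are illustrative, not part of the charm:

```python
import re

# Pattern matching the MySQL "table doesn't exist" message surfaced by
# keystoneclient when the database has not been migrated yet.
DB_NOT_READY = re.compile(r"'keystone\..+' doesn't exist")

def classify_backend_error(message):
    """Return 'defer' when the error just means the DB is not ready;
    raise otherwise so the unexpected failure is surfaced."""
    if DB_NOT_READY.search(message):
        return 'defer'
    raise RuntimeError('Unexpected exception occurred')
```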
398 | 293 | def append_ssl_sync_request(settings): | ||
399 | 294 | """Add request to be synced to relation settings. | ||
400 | 295 | |||
401 | 296 | This will be consumed by cluster-relation-changed ssl master. | ||
402 | 297 | """ | ||
403 | 298 | if (is_str_true(config('use-https')) or | ||
404 | 299 | is_str_true(config('https-service-endpoints'))): | ||
405 | 300 | unit = local_unit().replace('/', '-') | ||
406 | 301 | settings['ssl-sync-required-%s' % (unit)] = '1' | ||
407 | 302 | |||
408 | 303 | |||
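`append_ssl_sync_request()` and `get_ssl_sync_request_units()` (further down the diff) form a round trip over `ssl-sync-required-<unit>` relation keys. A self-contained sketch, with the peer settings passed in as a plain dict rather than read via `relation_get`:

```python
import re

def append_ssl_sync_request(settings, unit_name, ssl_enabled):
    # '/' is not usable in a relation key, so the unit name is
    # flattened (e.g. keystone/1 -> keystone-1), as in the diff.
    if ssl_enabled:
        settings['ssl-sync-required-%s' % unit_name.replace('/', '-')] = '1'

def get_ssl_sync_request_units(peer_settings):
    # Recover the unit names from the marker keys set by peers.
    units = []
    for key in peer_settings:
        res = re.search(r"^ssl-sync-required-(.+)", key)
        if res:
            units.append(res.group(1))
    return units
```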
409 | 244 | @hooks.hook('cluster-relation-joined') | 304 | @hooks.hook('cluster-relation-joined') |
411 | 245 | def cluster_joined(relation_id=None): | 305 | def cluster_joined(): |
412 | 246 | unison.ssh_authorized_peers(user=SSH_USER, | 306 | unison.ssh_authorized_peers(user=SSH_USER, |
413 | 247 | group='juju_keystone', | 307 | group='juju_keystone', |
414 | 248 | peer_interface='cluster', | 308 | peer_interface='cluster', |
415 | 249 | ensure_local_user=True) | 309 | ensure_local_user=True) |
416 | 310 | |||
417 | 311 | settings = {} | ||
418 | 312 | |||
419 | 250 | for addr_type in ADDRESS_TYPES: | 313 | for addr_type in ADDRESS_TYPES: |
420 | 251 | address = get_address_in_network( | 314 | address = get_address_in_network( |
421 | 252 | config('os-{}-network'.format(addr_type)) | 315 | config('os-{}-network'.format(addr_type)) |
422 | 253 | ) | 316 | ) |
423 | 254 | if address: | 317 | if address: |
428 | 255 | relation_set( | 318 | settings['{}-address'.format(addr_type)] = address |
425 | 256 | relation_id=relation_id, | ||
426 | 257 | relation_settings={'{}-address'.format(addr_type): address} | ||
427 | 258 | ) | ||
429 | 259 | 319 | ||
430 | 260 | if config('prefer-ipv6'): | 320 | if config('prefer-ipv6'): |
431 | 261 | private_addr = get_ipv6_addr(exc_list=[config('vip')])[0] | 321 | private_addr = get_ipv6_addr(exc_list=[config('vip')])[0] |
434 | 262 | relation_set(relation_id=relation_id, | 322 | settings['private-address'] = private_addr |
435 | 263 | relation_settings={'private-address': private_addr}) | 323 | |
436 | 324 | append_ssl_sync_request(settings) | ||
437 | 325 | |||
438 | 326 | relation_set(relation_settings=settings) | ||
439 | 327 | |||
440 | 328 | |||
441 | 329 | def apply_echo_filters(settings, echo_whitelist): | ||
442 | 330 | """Filter settings to be peer_echo'ed. | ||
443 | 331 | |||
444 | 332 | We may have received some data that we don't want to re-echo so filter | ||
445 | 333 | out unwanted keys and provide overrides. | ||
446 | 334 | |||
447 | 335 | Returns: | ||
448 | 336 | tuple(filtered list of keys to be echoed, overrides for keys omitted) | ||
449 | 337 | """ | ||
450 | 338 | filtered = [] | ||
451 | 339 | overrides = {} | ||
452 | 340 | for key in settings.iterkeys(): | ||
453 | 341 | for ekey in echo_whitelist: | ||
454 | 342 | if ekey in key: | ||
455 | 343 | if ekey == 'identity-service:': | ||
456 | 344 | auth_host = resolve_address(ADMIN) | ||
457 | 345 | service_host = resolve_address(PUBLIC) | ||
458 | 346 | if (key.endswith('auth_host') and | ||
459 | 347 | settings[key] != auth_host): | ||
460 | 348 | overrides[key] = auth_host | ||
461 | 349 | continue | ||
462 | 350 | elif (key.endswith('service_host') and | ||
463 | 351 | settings[key] != service_host): | ||
464 | 352 | overrides[key] = service_host | ||
465 | 353 | continue | ||
466 | 354 | |||
467 | 355 | filtered.append(key) | ||
468 | 356 | |||
469 | 357 | return filtered, overrides | ||
470 | 264 | 358 | ||
471 | 265 | 359 | ||
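The echo-filter logic above can be exercised on its own. In this sketch the resolved `auth_host`/`service_host` are passed as parameters instead of calling `resolve_address()`, but the filtering and override behaviour is the same:

```python
def apply_echo_filters(settings, echo_whitelist, auth_host, service_host):
    filtered = []
    overrides = {}
    for key in settings:
        for ekey in echo_whitelist:
            if ekey in key:
                if ekey == 'identity-service:':
                    # Never re-echo stale host values; override them instead.
                    if (key.endswith('auth_host') and
                            settings[key] != auth_host):
                        overrides[key] = auth_host
                        continue
                    elif (key.endswith('service_host') and
                            settings[key] != service_host):
                        overrides[key] = service_host
                        continue
                filtered.append(key)
    return filtered, overrides
```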
472 | 266 | @hooks.hook('cluster-relation-changed', | 360 | @hooks.hook('cluster-relation-changed', |
473 | 267 | 'cluster-relation-departed') | 361 | 'cluster-relation-departed') |
474 | 268 | @restart_on_change(restart_map(), stopstart=True) | 362 | @restart_on_change(restart_map(), stopstart=True) |
475 | 269 | def cluster_changed(): | 363 | def cluster_changed(): |
476 | 364 | settings = relation_get() | ||
477 | 270 | # NOTE(jamespage) re-echo passwords for peer storage | 365 | # NOTE(jamespage) re-echo passwords for peer storage |
479 | 271 | peer_echo(includes=['_passwd', 'identity-service:']) | 366 | echo_whitelist, overrides = \ |
480 | 367 | apply_echo_filters(settings, ['_passwd', 'identity-service:', | ||
481 | 368 | 'ssl-cert-master']) | ||
482 | 369 | log("Peer echo overrides: %s" % (overrides), level=DEBUG) | ||
483 | 370 | relation_set(**overrides) | ||
484 | 371 | if echo_whitelist: | ||
485 | 372 | log("Peer echo whitelist: %s" % (echo_whitelist), level=DEBUG) | ||
486 | 373 | peer_echo(includes=echo_whitelist) | ||
487 | 374 | |||
488 | 375 | check_peer_actions() | ||
489 | 272 | unison.ssh_authorized_peers(user=SSH_USER, | 376 | unison.ssh_authorized_peers(user=SSH_USER, |
490 | 273 | group='keystone', | 377 | group='keystone', |
491 | 274 | peer_interface='cluster', | 378 | peer_interface='cluster', |
492 | 275 | ensure_local_user=True) | 379 | ensure_local_user=True) |
501 | 276 | synchronize_ca() | 380 | |
502 | 277 | CONFIGS.write_all() | 381 | if is_elected_leader(CLUSTER_RES) or is_ssl_cert_master(): |
503 | 278 | for r_id in relation_ids('identity-service'): | 382 | units = get_ssl_sync_request_units() |
504 | 279 | for unit in relation_list(r_id): | 383 | synced_units = relation_get(attribute='ssl-synced-units', |
505 | 280 | identity_changed(relation_id=r_id, | 384 | unit=local_unit()) |
506 | 281 | remote_unit=unit) | 385 | if synced_units: |
507 | 282 | for rid in relation_ids('identity-admin'): | 386 | synced_units = json.loads(synced_units) |
508 | 283 | admin_relation_changed(rid) | 387 | diff = set(units).symmetric_difference(set(synced_units)) |
509 | 388 | |||
510 | 389 | if units and (not synced_units or diff): | ||
511 | 390 | log("New peers joined and need syncing - %s" % | ||
512 | 391 | (', '.join(units)), level=DEBUG) | ||
513 | 392 | update_all_identity_relation_units_force_sync() | ||
514 | 393 | else: | ||
515 | 394 | update_all_identity_relation_units() | ||
516 | 395 | |||
517 | 396 | for rid in relation_ids('identity-admin'): | ||
518 | 397 | admin_relation_changed(rid) | ||
519 | 398 | else: | ||
520 | 399 | CONFIGS.write_all() | ||
521 | 284 | 400 | ||
522 | 285 | 401 | ||
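The master's decision in `cluster_changed()` — force a full sync when the set of requesting units differs from the recorded `ssl-synced-units` — can be distilled as follows, with the JSON-encoded synced list passed in directly as it comes off the relation:

```python
import json

def needs_forced_sync(request_units, synced_units_json):
    # A symmetric difference means a peer joined (or left) since the
    # last sync, so the master must push certs again.
    if not request_units:
        return False
    if not synced_units_json:
        return True
    synced = json.loads(synced_units_json)
    diff = set(request_units).symmetric_difference(set(synced))
    return bool(diff)
```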
523 | 286 | @hooks.hook('ha-relation-joined') | 402 | @hooks.hook('ha-relation-joined') |
524 | @@ -320,7 +436,7 @@ | |||
525 | 320 | vip_group.append(vip_key) | 436 | vip_group.append(vip_key) |
526 | 321 | 437 | ||
527 | 322 | if len(vip_group) >= 1: | 438 | if len(vip_group) >= 1: |
529 | 323 | relation_set(groups={'grp_ks_vips': ' '.join(vip_group)}) | 439 | relation_set(groups={CLUSTER_RES: ' '.join(vip_group)}) |
530 | 324 | 440 | ||
531 | 325 | init_services = { | 441 | init_services = { |
532 | 326 | 'res_ks_haproxy': 'haproxy' | 442 | 'res_ks_haproxy': 'haproxy' |
533 | @@ -338,17 +454,17 @@ | |||
534 | 338 | 454 | ||
535 | 339 | @hooks.hook('ha-relation-changed') | 455 | @hooks.hook('ha-relation-changed') |
536 | 340 | @restart_on_change(restart_map()) | 456 | @restart_on_change(restart_map()) |
537 | 457 | @synchronize_ca_if_changed() | ||
538 | 341 | def ha_changed(): | 458 | def ha_changed(): |
539 | 459 | CONFIGS.write_all() | ||
540 | 460 | |||
541 | 342 | clustered = relation_get('clustered') | 461 | clustered = relation_get('clustered') |
545 | 343 | CONFIGS.write_all() | 462 | if clustered and is_elected_leader(CLUSTER_RES): |
543 | 344 | if (clustered is not None and | ||
544 | 345 | is_leader(CLUSTER_RES)): | ||
546 | 346 | ensure_initial_admin(config) | 463 | ensure_initial_admin(config) |
547 | 347 | log('Cluster configured, notifying other services and updating ' | 464 | log('Cluster configured, notifying other services and updating ' |
548 | 348 | 'keystone endpoint configuration') | 465 | 'keystone endpoint configuration') |
552 | 349 | for rid in relation_ids('identity-service'): | 466 | |
553 | 350 | for unit in related_units(rid): | 467 | update_all_identity_relation_units() |
551 | 351 | identity_changed(relation_id=rid, remote_unit=unit) | ||
554 | 352 | 468 | ||
555 | 353 | 469 | ||
556 | 354 | @hooks.hook('identity-admin-relation-changed') | 470 | @hooks.hook('identity-admin-relation-changed') |
557 | @@ -365,6 +481,7 @@ | |||
558 | 365 | relation_set(relation_id=relation_id, **relation_data) | 481 | relation_set(relation_id=relation_id, **relation_data) |
559 | 366 | 482 | ||
560 | 367 | 483 | ||
561 | 484 | @synchronize_ca_if_changed(fatal=True) | ||
562 | 368 | def configure_https(): | 485 | def configure_https(): |
563 | 369 | ''' | 486 | ''' |
564 | 370 | Enables SSL API Apache config if appropriate and kicks identity-service | 487 | Enables SSL API Apache config if appropriate and kicks identity-service |
565 | @@ -383,25 +500,22 @@ | |||
566 | 383 | 500 | ||
567 | 384 | @hooks.hook('upgrade-charm') | 501 | @hooks.hook('upgrade-charm') |
568 | 385 | @restart_on_change(restart_map(), stopstart=True) | 502 | @restart_on_change(restart_map(), stopstart=True) |
569 | 503 | @synchronize_ca_if_changed() | ||
570 | 386 | def upgrade_charm(): | 504 | def upgrade_charm(): |
571 | 387 | apt_install(filter_installed_packages(determine_packages())) | 505 | apt_install(filter_installed_packages(determine_packages())) |
572 | 388 | unison.ssh_authorized_peers(user=SSH_USER, | 506 | unison.ssh_authorized_peers(user=SSH_USER, |
573 | 389 | group='keystone', | 507 | group='keystone', |
574 | 390 | peer_interface='cluster', | 508 | peer_interface='cluster', |
575 | 391 | ensure_local_user=True) | 509 | ensure_local_user=True) |
576 | 510 | |||
577 | 511 | CONFIGS.write_all() | ||
578 | 392 | update_nrpe_config() | 512 | update_nrpe_config() |
583 | 393 | synchronize_ca() | 513 | |
584 | 394 | if eligible_leader(CLUSTER_RES): | 514 | if is_elected_leader(CLUSTER_RES): |
585 | 395 | log('Cluster leader - ensuring endpoint configuration' | 515 | log('Cluster leader - ensuring endpoint configuration is up to ' |
586 | 396 | ' is up to date') | 516 | 'date', level=DEBUG) |
587 | 397 | time.sleep(10) | 517 | time.sleep(10) |
595 | 398 | ensure_initial_admin(config) | 518 | update_all_identity_relation_units() |
589 | 399 | # Deal with interface changes for icehouse | ||
590 | 400 | for r_id in relation_ids('identity-service'): | ||
591 | 401 | for unit in relation_list(r_id): | ||
592 | 402 | identity_changed(relation_id=r_id, | ||
593 | 403 | remote_unit=unit) | ||
594 | 404 | CONFIGS.write_all() | ||
596 | 405 | 519 | ||
597 | 406 | 520 | ||
598 | 407 | @hooks.hook('nrpe-external-master-relation-joined', | 521 | @hooks.hook('nrpe-external-master-relation-joined', |
599 | 408 | 522 | ||
600 | === modified file 'hooks/keystone_ssl.py' | |||
601 | --- hooks/keystone_ssl.py 2014-07-02 07:55:44 +0000 | |||
602 | +++ hooks/keystone_ssl.py 2015-01-21 16:51:08 +0000 | |||
603 | @@ -5,6 +5,13 @@ | |||
604 | 5 | import subprocess | 5 | import subprocess |
605 | 6 | import tarfile | 6 | import tarfile |
606 | 7 | import tempfile | 7 | import tempfile |
607 | 8 | import time | ||
608 | 9 | |||
609 | 10 | from charmhelpers.core.hookenv import ( | ||
610 | 11 | log, | ||
611 | 12 | DEBUG, | ||
612 | 13 | WARNING, | ||
613 | 14 | ) | ||
614 | 8 | 15 | ||
615 | 9 | CA_EXPIRY = '365' | 16 | CA_EXPIRY = '365' |
616 | 10 | ORG_NAME = 'Ubuntu' | 17 | ORG_NAME = 'Ubuntu' |
617 | @@ -101,6 +108,9 @@ | |||
618 | 101 | extendedKeyUsage = serverAuth, clientAuth | 108 | extendedKeyUsage = serverAuth, clientAuth |
619 | 102 | """ | 109 | """ |
620 | 103 | 110 | ||
621 | 111 | # Instance can be appended to this list to represent a singleton | ||
622 | 112 | CA_SINGLETON = [] | ||
623 | 113 | |||
624 | 104 | 114 | ||
625 | 105 | def init_ca(ca_dir, common_name, org_name=ORG_NAME, org_unit_name=ORG_UNIT): | 115 | def init_ca(ca_dir, common_name, org_name=ORG_NAME, org_unit_name=ORG_UNIT): |
626 | 106 | print 'Ensuring certificate authority exists at %s.' % ca_dir | 116 | print 'Ensuring certificate authority exists at %s.' % ca_dir |
627 | @@ -275,23 +285,42 @@ | |||
628 | 275 | crt = self._sign_csr(csr, service, common_name) | 285 | crt = self._sign_csr(csr, service, common_name) |
629 | 276 | cmd = ['chown', '-R', '%s.%s' % (self.user, self.group), self.ca_dir] | 286 | cmd = ['chown', '-R', '%s.%s' % (self.user, self.group), self.ca_dir] |
630 | 277 | subprocess.check_call(cmd) | 287 | subprocess.check_call(cmd) |
632 | 278 | print 'Signed new CSR, crt @ %s' % crt | 288 | log('Signed new CSR, crt @ %s' % crt, level=DEBUG) |
633 | 279 | return crt, key | 289 | return crt, key |
634 | 280 | 290 | ||
635 | 281 | def get_cert_and_key(self, common_name): | 291 | def get_cert_and_key(self, common_name): |
649 | 282 | print 'Getting certificate and key for %s.' % common_name | 292 | log('Getting certificate and key for %s.' % common_name, level=DEBUG) |
650 | 283 | key = os.path.join(self.ca_dir, 'certs', '%s.key' % common_name) | 293 | keypath = os.path.join(self.ca_dir, 'certs', '%s.key' % common_name) |
651 | 284 | crt = os.path.join(self.ca_dir, 'certs', '%s.crt' % common_name) | 294 | crtpath = os.path.join(self.ca_dir, 'certs', '%s.crt' % common_name) |
652 | 285 | if os.path.isfile(crt): | 295 | if os.path.isfile(crtpath): |
653 | 286 | print 'Found existing certificate for %s.' % common_name | 296 | log('Found existing certificate for %s.' % common_name, |
654 | 287 | crt = open(crt, 'r').read() | 297 | level=DEBUG) |
655 | 288 | try: | 298 | max_retries = 3 |
656 | 289 | key = open(key, 'r').read() | 299 | while True: |
657 | 290 | except: | 300 | mtime = os.path.getmtime(crtpath) |
658 | 291 | print 'Could not load ssl private key for %s from %s' %\ | 301 | |
659 | 292 | (common_name, key) | 302 | crt = open(crtpath, 'r').read() |
660 | 293 | exit(1) | 303 | try: |
661 | 294 | return crt, key | 304 | key = open(keypath, 'r').read() |
662 | 305 | except: | ||
663 | 306 | msg = ('Could not load ssl private key for %s from %s' % | ||
664 | 307 | (common_name, keypath)) | ||
665 | 308 | raise Exception(msg) | ||
666 | 309 | |||
667 | 310 | # Ensure we are not reading a file that is being written to | ||
668 | 311 | if mtime != os.path.getmtime(crtpath): | ||
669 | 312 | max_retries -= 1 | ||
670 | 313 | if max_retries == 0: | ||
671 | 314 | msg = ("crt contents changed during read - retry " | ||
672 | 315 | "failed") | ||
673 | 316 | raise Exception(msg) | ||
674 | 317 | |||
675 | 318 | log("crt contents changed during read - re-reading", | ||
676 | 319 | level=WARNING) | ||
677 | 320 | time.sleep(1) | ||
678 | 321 | else: | ||
679 | 322 | return crt, key | ||
680 | 323 | |||
681 | 295 | crt, key = self._create_certificate(common_name, common_name) | 324 | crt, key = self._create_certificate(common_name, common_name) |
682 | 296 | return open(crt, 'r').read(), open(key, 'r').read() | 325 | return open(crt, 'r').read(), open(key, 'r').read() |
683 | 297 | 326 | ||
684 | 298 | 327 | ||
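The mtime-guarded read added to `get_cert_and_key()` generalises to any file that a peer process may be rewriting. A standalone version of that retry loop — `read_stable` is an illustrative name; the retry count mirrors the diff:

```python
import os
import time

def read_stable(path, max_retries=3, delay=1):
    """Read a file, re-reading if its mtime changed mid-read, i.e. if
    another process was writing it at the same time."""
    while True:
        mtime = os.path.getmtime(path)
        with open(path) as fd:
            data = fd.read()
        # Ensure we did not read a file that was being written to.
        if os.path.getmtime(path) == mtime:
            return data
        max_retries -= 1
        if max_retries == 0:
            raise Exception('contents changed during read - retry failed')
        time.sleep(delay)
```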
685 | === modified file 'hooks/keystone_utils.py' | |||
686 | --- hooks/keystone_utils.py 2015-01-19 10:45:41 +0000 | |||
687 | +++ hooks/keystone_utils.py 2015-01-21 16:51:08 +0000 | |||
688 | @@ -1,20 +1,27 @@ | |||
689 | 1 | #!/usr/bin/python | 1 | #!/usr/bin/python |
690 | 2 | import glob | ||
691 | 3 | import grp | ||
692 | 4 | import hashlib | ||
693 | 5 | import json | ||
694 | 6 | import os | ||
695 | 7 | import pwd | ||
696 | 8 | import re | ||
697 | 2 | import subprocess | 9 | import subprocess |
699 | 3 | import os | 10 | import threading |
700 | 11 | import time | ||
701 | 12 | import urlparse | ||
702 | 4 | import uuid | 13 | import uuid |
703 | 5 | import urlparse | ||
704 | 6 | import time | ||
705 | 7 | 14 | ||
706 | 8 | from base64 import b64encode | 15 | from base64 import b64encode |
707 | 9 | from collections import OrderedDict | 16 | from collections import OrderedDict |
708 | 10 | from copy import deepcopy | 17 | from copy import deepcopy |
709 | 11 | 18 | ||
710 | 12 | from charmhelpers.contrib.hahelpers.cluster import( | 19 | from charmhelpers.contrib.hahelpers.cluster import( |
712 | 13 | eligible_leader, | 20 | is_elected_leader, |
713 | 14 | determine_api_port, | 21 | determine_api_port, |
714 | 15 | https, | 22 | https, |
717 | 16 | is_clustered, | 23 | peer_units, |
718 | 17 | is_elected_leader, | 24 | oldest_peer, |
719 | 18 | ) | 25 | ) |
720 | 19 | 26 | ||
721 | 20 | from charmhelpers.contrib.openstack import context, templating | 27 | from charmhelpers.contrib.openstack import context, templating |
722 | @@ -37,8 +44,17 @@ | |||
723 | 37 | os_release, | 44 | os_release, |
724 | 38 | save_script_rc as _save_script_rc) | 45 | save_script_rc as _save_script_rc) |
725 | 39 | 46 | ||
726 | 47 | from charmhelpers.core.host import ( | ||
727 | 48 | mkdir, | ||
728 | 49 | write_file, | ||
729 | 50 | ) | ||
730 | 51 | |||
731 | 40 | import charmhelpers.contrib.unison as unison | 52 | import charmhelpers.contrib.unison as unison |
732 | 41 | 53 | ||
733 | 54 | from charmhelpers.core.decorators import ( | ||
734 | 55 | retry_on_exception, | ||
735 | 56 | ) | ||
736 | 57 | |||
737 | 42 | from charmhelpers.core.hookenv import ( | 58 | from charmhelpers.core.hookenv import ( |
738 | 43 | config, | 59 | config, |
739 | 44 | is_relation_made, | 60 | is_relation_made, |
740 | @@ -47,8 +63,11 @@ | |||
741 | 47 | relation_get, | 63 | relation_get, |
742 | 48 | relation_set, | 64 | relation_set, |
743 | 49 | relation_ids, | 65 | relation_ids, |
744 | 66 | related_units, | ||
745 | 50 | DEBUG, | 67 | DEBUG, |
746 | 51 | INFO, | 68 | INFO, |
747 | 69 | WARNING, | ||
748 | 70 | ERROR, | ||
749 | 52 | ) | 71 | ) |
750 | 53 | 72 | ||
751 | 54 | from charmhelpers.fetch import ( | 73 | from charmhelpers.fetch import ( |
752 | @@ -61,6 +80,7 @@ | |||
753 | 61 | from charmhelpers.core.host import ( | 80 | from charmhelpers.core.host import ( |
754 | 62 | service_stop, | 81 | service_stop, |
755 | 63 | service_start, | 82 | service_start, |
756 | 83 | service_restart, | ||
757 | 64 | pwgen, | 84 | pwgen, |
758 | 65 | lsb_release | 85 | lsb_release |
759 | 66 | ) | 86 | ) |
760 | @@ -110,10 +130,13 @@ | |||
761 | 110 | APACHE_CONF = '/etc/apache2/sites-available/openstack_https_frontend' | 130 | APACHE_CONF = '/etc/apache2/sites-available/openstack_https_frontend' |
762 | 111 | APACHE_24_CONF = '/etc/apache2/sites-available/openstack_https_frontend.conf' | 131 | APACHE_24_CONF = '/etc/apache2/sites-available/openstack_https_frontend.conf' |
763 | 112 | 132 | ||
764 | 133 | APACHE_SSL_DIR = '/etc/apache2/ssl/keystone' | ||
765 | 134 | SYNC_FLAGS_DIR = '/var/lib/keystone/juju_sync_flags/' | ||
766 | 113 | SSL_DIR = '/var/lib/keystone/juju_ssl/' | 135 | SSL_DIR = '/var/lib/keystone/juju_ssl/' |
767 | 114 | SSL_CA_NAME = 'Ubuntu Cloud' | 136 | SSL_CA_NAME = 'Ubuntu Cloud' |
768 | 115 | CLUSTER_RES = 'grp_ks_vips' | 137 | CLUSTER_RES = 'grp_ks_vips' |
769 | 116 | SSH_USER = 'juju_keystone' | 138 | SSH_USER = 'juju_keystone' |
770 | 139 | SSL_SYNC_SEMAPHORE = threading.Semaphore() | ||
771 | 117 | 140 | ||
772 | 118 | BASE_RESOURCE_MAP = OrderedDict([ | 141 | BASE_RESOURCE_MAP = OrderedDict([ |
773 | 119 | (KEYSTONE_CONF, { | 142 | (KEYSTONE_CONF, { |
774 | @@ -203,6 +226,13 @@ | |||
775 | 203 | } | 226 | } |
776 | 204 | 227 | ||
777 | 205 | 228 | ||
778 | 229 | def is_str_true(value): | ||
779 | 230 | if value and value.lower() in ['true', 'yes']: | ||
780 | 231 | return True | ||
781 | 232 | |||
782 | 233 | return False | ||
783 | 234 | |||
784 | 235 | |||
785 | 206 | def resource_map(): | 236 | def resource_map(): |
786 | 207 | ''' | 237 | ''' |
787 | 208 | Dynamically generate a map of resources that will be managed for a single | 238 | Dynamically generate a map of resources that will be managed for a single |
788 | @@ -287,7 +317,7 @@ | |||
789 | 287 | configs.set_release(openstack_release=new_os_rel) | 317 | configs.set_release(openstack_release=new_os_rel) |
790 | 288 | configs.write_all() | 318 | configs.write_all() |
791 | 289 | 319 | ||
793 | 290 | if eligible_leader(CLUSTER_RES): | 320 | if is_elected_leader(CLUSTER_RES): |
794 | 291 | migrate_database() | 321 | migrate_database() |
795 | 292 | 322 | ||
796 | 293 | 323 | ||
797 | @@ -389,7 +419,7 @@ | |||
798 | 389 | 419 | ||
799 | 390 | up_to_date = True | 420 | up_to_date = True |
800 | 391 | for k in ['publicurl', 'adminurl', 'internalurl']: | 421 | for k in ['publicurl', 'adminurl', 'internalurl']: |
802 | 392 | if ep[k] != locals()[k]: | 422 | if ep.get(k) != locals()[k]: |
803 | 393 | up_to_date = False | 423 | up_to_date = False |
804 | 394 | 424 | ||
805 | 395 | if up_to_date: | 425 | if up_to_date: |
806 | @@ -500,7 +530,7 @@ | |||
807 | 500 | if passwd and passwd.lower() != "none": | 530 | if passwd and passwd.lower() != "none": |
808 | 501 | return passwd | 531 | return passwd |
809 | 502 | 532 | ||
811 | 503 | if eligible_leader(CLUSTER_RES): | 533 | if is_elected_leader(CLUSTER_RES): |
812 | 504 | if os.path.isfile(STORED_PASSWD): | 534 | if os.path.isfile(STORED_PASSWD): |
813 | 505 | log("Loading stored passwd from %s" % STORED_PASSWD, level=INFO) | 535 | log("Loading stored passwd from %s" % STORED_PASSWD, level=INFO) |
814 | 506 | with open(STORED_PASSWD, 'r') as fd: | 536 | with open(STORED_PASSWD, 'r') as fd: |
815 | @@ -527,33 +557,47 @@ | |||
816 | 527 | 557 | ||
817 | 528 | 558 | ||
818 | 529 | def ensure_initial_admin(config): | 559 | def ensure_initial_admin(config): |
820 | 530 | """ Ensures the minimum admin stuff exists in whatever database we're | 560 | # Allow retry on fail since leader may not be ready yet. |
821 | 561 | # NOTE(hopem): ks client may not be installed at module import time so we | ||
822 | 562 | # use this wrapped approach instead. | ||
823 | 563 | from keystoneclient.apiclient.exceptions import InternalServerError | ||
824 | 564 | |||
825 | 565 | @retry_on_exception(3, base_delay=3, exc_type=InternalServerError) | ||
826 | 566 | def _ensure_initial_admin(config): | ||
827 | 567 | """Ensures the minimum admin stuff exists in whatever database we're | ||
828 | 531 | using. | 568 | using. |
829 | 569 | |||
830 | 532 | This and the helper functions it calls are meant to be idempotent and | 570 | This and the helper functions it calls are meant to be idempotent and |
831 | 533 | run during install as well as during db-changed. This will maintain | 571 | run during install as well as during db-changed. This will maintain |
832 | 534 | the admin tenant, user, role, service entry and endpoint across every | 572 | the admin tenant, user, role, service entry and endpoint across every |
833 | 535 | datastore we might use. | 573 | datastore we might use. |
834 | 574 | |||
835 | 536 | TODO: Possibly migrate data from one backend to another after it | 575 | TODO: Possibly migrate data from one backend to another after it |
836 | 537 | changes? | 576 | changes? |
856 | 538 | """ | 577 | """ |
857 | 539 | create_tenant("admin") | 578 | create_tenant("admin") |
858 | 540 | create_tenant(config("service-tenant")) | 579 | create_tenant(config("service-tenant")) |
859 | 541 | # User is managed by ldap backend when using ldap identity | 580 | # User is managed by ldap backend when using ldap identity |
860 | 542 | if not (config('identity-backend') == 'ldap' and config('ldap-readonly')): | 581 | if not (config('identity-backend') == |
861 | 543 | passwd = get_admin_passwd() | 582 | 'ldap' and config('ldap-readonly')): |
862 | 544 | if passwd: | 583 | passwd = get_admin_passwd() |
863 | 545 | create_user(config('admin-user'), passwd, tenant='admin') | 584 | if passwd: |
864 | 546 | update_user_password(config('admin-user'), passwd) | 585 | create_user(config('admin-user'), passwd, tenant='admin') |
865 | 547 | create_role(config('admin-role'), config('admin-user'), 'admin') | 586 | update_user_password(config('admin-user'), passwd) |
866 | 548 | create_service_entry("keystone", "identity", "Keystone Identity Service") | 587 | create_role(config('admin-role'), config('admin-user'), |
867 | 549 | 588 | 'admin') | |
868 | 550 | for region in config('region').split(): | 589 | create_service_entry("keystone", "identity", |
869 | 551 | create_keystone_endpoint(public_ip=resolve_address(PUBLIC), | 590 | "Keystone Identity Service") |
870 | 552 | service_port=config("service-port"), | 591 | |
871 | 553 | internal_ip=resolve_address(INTERNAL), | 592 | for region in config('region').split(): |
872 | 554 | admin_ip=resolve_address(ADMIN), | 593 | create_keystone_endpoint(public_ip=resolve_address(PUBLIC), |
873 | 555 | auth_port=config("admin-port"), | 594 | service_port=config("service-port"), |
874 | 556 | region=region) | 595 | internal_ip=resolve_address(INTERNAL), |
875 | 596 | admin_ip=resolve_address(ADMIN), | ||
876 | 597 | auth_port=config("admin-port"), | ||
877 | 598 | region=region) | ||
878 | 599 | |||
879 | 600 | return _ensure_initial_admin(config) | ||
880 | 557 | 601 | ||
881 | 558 | 602 | ||
882 | 559 | def endpoint_url(ip, port): | 603 | def endpoint_url(ip, port): |
883 | @@ -621,20 +665,357 @@ | |||
884 | 621 | return passwd | 665 | return passwd |
885 | 622 | 666 | ||
886 | 623 | 667 | ||
901 | 624 | def synchronize_ca(): | 668 | def ensure_permissions(path, user=None, group=None, perms=None): |
902 | 625 | ''' | 669 | """Set chownand chmod for path |
903 | 626 | Broadcast service credentials to peers or consume those that have been | 670 | |
904 | 627 | broadcasted by peer, depending on hook context. | 671 | Note that -1 for uid or gid result in no change. |
905 | 628 | ''' | 672 | """ |
906 | 629 | if not eligible_leader(CLUSTER_RES): | 673 | if user: |
907 | 630 | return | 674 | uid = pwd.getpwnam(user).pw_uid |
908 | 631 | log('Synchronizing CA to all peers.') | 675 | else: |
909 | 632 | if is_clustered(): | 676 | uid = -1 |
910 | 633 | if config('https-service-endpoints') in ['True', 'true']: | 677 | |
911 | 634 | unison.sync_to_peers(peer_interface='cluster', | 678 | if group: |
912 | 635 | paths=[SSL_DIR], user=SSH_USER, verbose=True) | 679 | gid = grp.getgrnam(group).gr_gid |
913 | 636 | 680 | else: | |
914 | 637 | CA = [] | 681 | gid = -1 |
915 | 682 | |||
916 | 683 | os.chown(path, uid, gid) | ||
917 | 684 | |||
918 | 685 | if perms: | ||
919 | 686 | os.chmod(path, perms) | ||
920 | 687 | |||
921 | 688 | |||
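`ensure_permissions()` is a thin wrapper over `os.chown`/`os.chmod`; a standalone copy with the same logic, which can be tried on any file the current user owns:

```python
import grp
import os
import pwd

def ensure_permissions(path, user=None, group=None, perms=None):
    # -1 for uid or gid tells os.chown to leave that field unchanged.
    uid = pwd.getpwnam(user).pw_uid if user else -1
    gid = grp.getgrnam(group).gr_gid if group else -1
    os.chown(path, uid, gid)
    if perms:
        os.chmod(path, perms)
```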
922 | 689 | def check_peer_actions(): | ||
923 | 690 | """Honour service action requests from sync master. | ||
924 | 691 | |||
925 | 692 | Check for service action request flags, perform the action then delete the | ||
926 | 693 | flag. | ||
927 | 694 | """ | ||
928 | 695 | restart = relation_get(attribute='restart-services-trigger') | ||
929 | 696 | if restart and os.path.isdir(SYNC_FLAGS_DIR): | ||
930 | 697 | for flagfile in glob.glob(os.path.join(SYNC_FLAGS_DIR, '*')): | ||
931 | 698 | flag = os.path.basename(flagfile) | ||
932 | 699 | key = re.compile("^(.+)?\.(.+)?\.(.+)") | ||
933 | 700 | res = re.search(key, flag) | ||
934 | 701 | if res: | ||
935 | 702 | source = res.group(1) | ||
936 | 703 | service = res.group(2) | ||
937 | 704 | action = res.group(3) | ||
938 | 705 | else: | ||
939 | 706 | key = re.compile("^(.+)?\.(.+)?") | ||
940 | 707 | res = re.search(key, flag) | ||
941 | 708 | source = res.group(1) | ||
942 | 709 | action = res.group(2) | ||
943 | 710 | |||
944 | 711 | # Don't execute actions requested by this unit. | ||
945 | 712 | if local_unit().replace('/', '-') != source: | ||
946 | 713 | if action == 'restart': | ||
947 | 714 | log("Running action='%s' on service '%s'" % | ||
948 | 715 | (action, service), level=DEBUG) | ||
949 | 716 | service_restart(service) | ||
950 | 717 | elif action == 'start': | ||
951 | 718 | log("Running action='%s' on service '%s'" % | ||
952 | 719 | (action, service), level=DEBUG) | ||
953 | 720 | service_start(service) | ||
954 | 721 | elif action == 'stop': | ||
955 | 722 | log("Running action='%s' on service '%s'" % | ||
956 | 723 | (action, service), level=DEBUG) | ||
957 | 724 | service_stop(service) | ||
958 | 725 | elif action == 'update-ca-certificates': | ||
959 | 726 | log("Running %s" % (action), level=DEBUG) | ||
960 | 727 | subprocess.check_call(['update-ca-certificates']) | ||
961 | 728 | else: | ||
962 | 729 | log("Unknown action flag=%s" % (flag), level=WARNING) | ||
963 | 730 | |||
964 | 731 | try: | ||
965 | 732 | os.remove(flagfile) | ||
966 | 733 | except: | ||
967 | 734 | pass | ||
968 | 735 | |||
969 | 736 | |||
970 | 737 | def create_peer_service_actions(action, services): | ||
971 | 738 | """Mark remote services for action. | ||
972 | 739 | |||
973 | 740 | Default action is restart. These action will be picked up by peer units | ||
974 | 741 | e.g. we may need to restart services on peer units after certs have been | ||
975 | 742 | synced. | ||
976 | 743 | """ | ||
977 | 744 | for service in services: | ||
978 | 745 | flagfile = os.path.join(SYNC_FLAGS_DIR, '%s.%s.%s' % | ||
979 | 746 | (local_unit().replace('/', '-'), | ||
980 | 747 | service.strip(), action)) | ||
981 | 748 | log("Creating action %s" % (flagfile), level=DEBUG) | ||
982 | 749 | write_file(flagfile, content='', owner=SSH_USER, group='keystone', | ||
983 | 750 | perms=0o644) | ||
984 | 751 | |||
985 | 752 | |||
986 | 753 | def create_peer_actions(actions): | ||
987 | 754 | for action in actions: | ||
988 | 755 | action = "%s.%s" % (local_unit().replace('/', '-'), action) | ||
989 | 756 | flagfile = os.path.join(SYNC_FLAGS_DIR, action) | ||
990 | 757 | log("Creating action %s" % (flagfile), level=DEBUG) | ||
991 | 758 | write_file(flagfile, content='', owner=SSH_USER, group='keystone', | ||
992 | 759 | perms=0o644) | ||
993 | 760 | |||
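The flag-file naming scheme produced by create_peer_service_actions/create_peer_actions and consumed by the action dispatcher above can be exercised in isolation. This is a minimal standalone sketch, not charm code; `make_action_flag` and `parse_action_flag` are hypothetical names, and non-greedy groups are used, which is equivalent here since the unit id, service, and action contain no dots:

```python
import re

def make_action_flag(unit, service, action):
    # Compose a flag name the way create_peer_service_actions does,
    # e.g. unit 'keystone/0' -> 'keystone-0.apache2.restart'.
    return '%s.%s.%s' % (unit.replace('/', '-'), service.strip(), action)

def parse_action_flag(flag):
    # Mirror the consumer-side regexes: three dot-separated fields mean
    # (source, service, action); two mean (source, action) with no service.
    res = re.search(r"^(.+?)\.(.+?)\.(.+)$", flag)
    if res:
        return res.group(1), res.group(2), res.group(3)
    res = re.search(r"^(.+?)\.(.+)$", flag)
    return res.group(1), None, res.group(2)

three = parse_action_flag(make_action_flag('keystone/0', 'apache2', 'restart'))
two = parse_action_flag('keystone-0.update-ca-certificates')
```

Note that the dispatcher compares `local_unit().replace('.', '-')` against the parsed source so a unit never runs actions it requested itself.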
994 | 761 | |||
995 | 762 | @retry_on_exception(3, base_delay=2, exc_type=subprocess.CalledProcessError) | ||
996 | 763 | def unison_sync(paths_to_sync): | ||
997 | 764 | """Do unison sync and retry a few times if it fails since peers may not be | ||
998 | 765 | ready for sync. | ||
999 | 766 | """ | ||
1000 | 767 | log('Synchronizing CA (%s) to all peers.' % (', '.join(paths_to_sync)), | ||
1001 | 768 | level=INFO) | ||
1002 | 769 | keystone_gid = grp.getgrnam('keystone').gr_gid | ||
1003 | 770 | unison.sync_to_peers(peer_interface='cluster', paths=paths_to_sync, | ||
1004 | 771 | user=SSH_USER, verbose=True, gid=keystone_gid, | ||
1005 | 772 | fatal=True) | ||
1006 | 773 | |||
1007 | 774 | |||
1008 | 775 | def get_ssl_sync_request_units(): | ||
1009 | 776 | """Get list of units that have requested to be synced. | ||
1010 | 777 | |||
1011 | 778 | NOTE: this must be called from cluster relation context. | ||
1012 | 779 | """ | ||
1013 | 780 | units = [] | ||
1014 | 781 | for unit in related_units(): | ||
1015 | 782 | settings = relation_get(unit=unit) or {} | ||
1016 | 783 | rkeys = settings.keys() | ||
1017 | 784 | key = re.compile("^ssl-sync-required-(.+)") | ||
1018 | 785 | for rkey in rkeys: | ||
1019 | 786 | res = re.search(key, rkey) | ||
1020 | 787 | if res: | ||
1021 | 788 | units.append(res.group(1)) | ||
1022 | 789 | |||
1023 | 790 | return units | ||
1024 | 791 | |||
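The key scan performed by get_ssl_sync_request_units can be sketched standalone (hypothetical helper name; plain dicts stand in for per-unit relation settings):

```python
import re

def ssl_sync_request_units(settings_by_unit):
    # Collect the unit ids embedded in 'ssl-sync-required-<unit>' keys.
    units = []
    key = re.compile(r"^ssl-sync-required-(.+)")
    for settings in settings_by_unit.values():
        for rkey in settings.keys():
            res = key.search(rkey)
            if res:
                units.append(res.group(1))
    return units

peers = {
    'keystone/1': {'ssl-sync-required-keystone-1': '1',
                   'private-address': '10.0.0.2'},
    'keystone/2': {'ssl-sync-required-keystone-2': '1'},
}
requested = sorted(ssl_sync_request_units(peers))
```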
1025 | 792 | |||
1026 | 793 | def is_ssl_cert_master(): | ||
1027 | 794 | """Return True if this unit is ssl cert master.""" | ||
1028 | 795 | master = None | ||
1029 | 796 | for rid in relation_ids('cluster'): | ||
1030 | 797 | master = relation_get(attribute='ssl-cert-master', rid=rid, | ||
1031 | 798 | unit=local_unit()) | ||
1032 | 799 | |||
1033 | 800 | return master == local_unit() | ||
1034 | 801 | |||
1035 | 802 | |||
1036 | 803 | def ensure_ssl_cert_master(use_oldest_peer=False): | ||
1037 | 804 | """Ensure that an ssl cert master has been elected. | ||
1038 | 805 | |||
1039 | 806 | Normally the cluster leader will take control but we allow for this to be | ||
1040 | 807 | ignored since this could be called before the cluster is ready. | ||
1041 | 808 | """ | ||
1042 | 809 | # Don't do anything if we are not in ssl/https mode | ||
1043 | 810 | if not (is_str_true(config('use-https')) or | ||
1044 | 811 | is_str_true(config('https-service-endpoints'))): | ||
1045 | 812 | log("SSL/HTTPS is NOT enabled", level=DEBUG) | ||
1046 | 813 | return False | ||
1047 | 814 | |||
1048 | 815 | if not peer_units(): | ||
1049 | 816 | log("Not syncing certs since there are no peer units.", level=INFO) | ||
1050 | 817 | return False | ||
1051 | 818 | |||
1052 | 819 | if use_oldest_peer: | ||
1053 | 820 | elect = oldest_peer(peer_units()) | ||
1054 | 821 | else: | ||
1055 | 822 | elect = is_elected_leader(CLUSTER_RES) | ||
1056 | 823 | |||
1057 | 824 | if elect: | ||
1058 | 825 | masters = [] | ||
1059 | 826 | for rid in relation_ids('cluster'): | ||
1060 | 827 | for unit in related_units(rid): | ||
1061 | 828 | m = relation_get(rid=rid, unit=unit, | ||
1062 | 829 | attribute='ssl-cert-master') | ||
1063 | 830 | if m is not None: | ||
1064 | 831 | masters.append(m) | ||
1065 | 832 | |||
1066 | 833 | # We expect all peers to echo this setting | ||
1067 | 834 | if not masters or 'unknown' in masters: | ||
1068 | 835 | log("Notifying peers this unit is ssl-cert-master", level=INFO) | ||
1069 | 836 | for rid in relation_ids('cluster'): | ||
1070 | 837 | settings = {'ssl-cert-master': local_unit()} | ||
1071 | 838 | relation_set(relation_id=rid, relation_settings=settings) | ||
1072 | 839 | |||
1073 | 840 | # Return now and wait for cluster-relation-changed (peer_echo) for | ||
1074 | 841 | # sync. | ||
1075 | 842 | return False | ||
1076 | 843 | elif len(set(masters)) != 1 and local_unit() not in masters: | ||
1077 | 844 | log("Did not get consensus from peers on who is master (%s) - " | ||
1078 | 845 | "waiting for current master to release before self-electing" % | ||
1079 | 846 | (masters), level=INFO) | ||
1080 | 847 | return False | ||
1081 | 848 | |||
1082 | 849 | if not is_ssl_cert_master(): | ||
1083 | 850 | log("Not ssl cert master - skipping sync", level=INFO) | ||
1084 | 851 | return False | ||
1085 | 852 | |||
1086 | 853 | return True | ||
1087 | 854 | |||
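The election branches in ensure_ssl_cert_master reduce to a small decision over the 'ssl-cert-master' values echoed by peers. A sketch under assumed semantics; the 'claim'/'wait'/'keep' labels are illustrative, not charm return values:

```python
def elect_master(local_unit, masters):
    # masters: the 'ssl-cert-master' values echoed by all cluster peers.
    if not masters or 'unknown' in masters:
        return 'claim'   # no master advertised yet - claim it and wait for echo
    if len(set(masters)) != 1 and local_unit not in masters:
        return 'wait'    # peers disagree and none names us - wait for release
    return 'keep'        # consensus reached (or we are already named master)

claimed = elect_master('keystone/0', ['unknown', 'keystone/1'])
waiting = elect_master('keystone/0', ['keystone/1', 'keystone/2'])
kept = elect_master('keystone/0', ['keystone/0', 'keystone/0'])
```

This gives the "leader stickiness" described in synchronize_ca's docstring: a new leader does not self-elect while peers still name a different master.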
1088 | 855 | |||
1089 | 856 | def synchronize_ca(fatal=False): | ||
1090 | 857 | """Broadcast service credentials to peers. | ||
1091 | 858 | |||
1092 | 859 | By default a failure to sync is fatal and will result in a raised | ||
1093 | 860 | exception. | ||
1094 | 861 | |||
1095 | 862 | This function uses a relation setting 'ssl-cert-master' to get some | ||
1096 | 863 | leader stickiness while synchronisation is being carried out. This ensures | ||
1097 | 864 | that the last host to create and broadcast certificates has the option to | ||
1098 | 865 | complete actions before electing the new leader as sync master. | ||
1099 | 866 | """ | ||
1100 | 867 | paths_to_sync = [SYNC_FLAGS_DIR] | ||
1101 | 868 | |||
1102 | 869 | if is_str_true(config('https-service-endpoints')): | ||
1103 | 870 | log("Syncing all endpoint certs since https-service-endpoints=True", | ||
1104 | 871 | level=DEBUG) | ||
1105 | 872 | paths_to_sync.append(SSL_DIR) | ||
1106 | 873 | paths_to_sync.append(APACHE_SSL_DIR) | ||
1107 | 874 | paths_to_sync.append(CA_CERT_PATH) | ||
1108 | 875 | elif is_str_true(config('use-https')): | ||
1109 | 876 | log("Syncing keystone-endpoint certs since use-https=True", | ||
1110 | 877 | level=DEBUG) | ||
1111 | 878 | paths_to_sync.append(APACHE_SSL_DIR) | ||
1112 | 879 | paths_to_sync.append(CA_CERT_PATH) | ||
1113 | 880 | |||
1114 | 881 | if not paths_to_sync: | ||
1115 | 882 | log("Nothing to sync - skipping", level=DEBUG) | ||
1116 | 883 | return | ||
1117 | 884 | |||
1118 | 885 | if not os.path.isdir(SYNC_FLAGS_DIR): | ||
1119 | 886 | mkdir(SYNC_FLAGS_DIR, SSH_USER, 'keystone', 0o775) | ||
1120 | 887 | |||
1121 | 888 | # We need to restart peer apache services to ensure they have picked up | ||
1122 | 889 | # new ssl keys. | ||
1123 | 890 | create_peer_service_actions('restart', ['apache2']) | ||
1124 | 891 | create_peer_actions(['update-ca-certificates']) | ||
1125 | 892 | |||
1126 | 893 | # Format here needs to match that used when peers request sync | ||
1127 | 894 | synced_units = [unit.replace('/', '-') for unit in peer_units()] | ||
1128 | 895 | |||
1129 | 896 | retries = 3 | ||
1130 | 897 | while True: | ||
1131 | 898 | hash1 = hashlib.sha256() | ||
1132 | 899 | for path in paths_to_sync: | ||
1133 | 900 | update_hash_from_path(hash1, path) | ||
1134 | 901 | |||
1135 | 902 | try: | ||
1136 | 903 | unison_sync(paths_to_sync) | ||
1137 | 904 | except: | ||
1138 | 905 | if fatal: | ||
1139 | 906 | raise | ||
1140 | 907 | else: | ||
1141 | 908 | log("Sync failed but fatal=False", level=INFO) | ||
1142 | 909 | return | ||
1143 | 910 | |||
1144 | 911 | hash2 = hashlib.sha256() | ||
1145 | 912 | for path in paths_to_sync: | ||
1146 | 913 | update_hash_from_path(hash2, path) | ||
1147 | 914 | |||
1148 | 915 | # Detect whether someone else has synced to this unit while we did our | ||
1149 | 916 | # transfer. | ||
1150 | 917 | if hash1.hexdigest() != hash2.hexdigest(): | ||
1151 | 918 | retries -= 1 | ||
1152 | 919 | if retries > 0: | ||
1153 | 920 | log("SSL dir contents changed during sync - retrying unison " | ||
1154 | 921 | "sync %s more times" % (retries), level=WARNING) | ||
1155 | 922 | else: | ||
1156 | 923 | log("SSL dir contents changed during sync - retries failed", | ||
1157 | 924 | level=ERROR) | ||
1158 | 925 | return {} | ||
1159 | 926 | else: | ||
1160 | 927 | break | ||
1161 | 928 | |||
1162 | 929 | hash = hash1.hexdigest() | ||
1163 | 930 | log("Sending restart-services-trigger=%s to all peers" % (hash), | ||
1164 | 931 | level=DEBUG) | ||
1165 | 932 | |||
1166 | 933 | log("Sync complete", level=DEBUG) | ||
1167 | 934 | return {'restart-services-trigger': hash, | ||
1168 | 935 | 'ssl-synced-units': json.dumps(synced_units)} | ||
1169 | 936 | |||
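The before/after hashing in the sync loop above guards against a peer writing into the SSL dirs mid-transfer. The pattern in isolation (a sketch; `snapshot` and `do_sync` are stand-ins for hashing the SSL dirs and running unison):

```python
import hashlib

def sync_with_change_detection(snapshot, do_sync, retries=3):
    # Retry the transfer while the watched content changes underneath it.
    while True:
        before = hashlib.sha256(snapshot()).hexdigest()
        do_sync()
        after = hashlib.sha256(snapshot()).hexdigest()
        if before == after:
            return before  # stable digest, usable as a restart trigger
        retries -= 1
        if retries <= 0:
            return None    # content kept changing during sync - give up

state = {'v': b'certs'}
stable = sync_with_change_detection(lambda: state['v'], lambda: None)

def racy_sync():
    state['v'] += b'x'   # simulate a peer writing during the transfer

gave_up = sync_with_change_detection(lambda: state['v'], racy_sync, retries=2)
```

The stable digest is what the charm broadcasts as restart-services-trigger so peers restart exactly once per changed cert set.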
1170 | 937 | |||
1171 | 938 | def update_hash_from_path(hash, path, recurse_depth=10): | ||
1172 | 939 | """Recurse through path and update the provided hash for every file found. | ||
1173 | 940 | """ | ||
1174 | 941 | if not recurse_depth: | ||
1175 | 942 | log("Max recursion depth (%s) reached for update_hash_from_path() at " | ||
1176 | 943 | "path='%s' - not going any deeper" % (recurse_depth, path), | ||
1177 | 944 | level=WARNING) | ||
1178 | 945 | return | ||
1179 | 946 | |||
1180 | 947 | for p in glob.glob("%s/*" % path): | ||
1181 | 948 | if os.path.isdir(p): | ||
1182 | 949 | update_hash_from_path(hash, p, recurse_depth=recurse_depth - 1) | ||
1183 | 950 | else: | ||
1184 | 951 | with open(p, 'r') as fd: | ||
1185 | 952 | hash.update(fd.read()) | ||
1186 | 953 | |||
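update_hash_from_path folds every file under a directory tree into a running hash, bounded by a recursion depth. A self-contained equivalent (a sketch; unlike the original it sorts glob results for deterministic ordering, and it builds its own temp tree for illustration):

```python
import glob
import hashlib
import os
import tempfile

def hash_tree(hasher, path, depth=10):
    # Fold every file under path into hasher, up to `depth` levels deep.
    if not depth:
        return
    for p in sorted(glob.glob("%s/*" % path)):
        if os.path.isdir(p):
            hash_tree(hasher, p, depth - 1)
        else:
            with open(p, 'rb') as fd:
                hasher.update(fd.read())

tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, 'a.pem'), 'wb') as f:
    f.write(b'one')
os.mkdir(os.path.join(tmp, 'sub'))
with open(os.path.join(tmp, 'sub', 'b.pem'), 'wb') as f:
    f.write(b'two')

h = hashlib.sha256()
hash_tree(h, tmp)
expected = hashlib.sha256(b'onetwo').hexdigest()
```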
1187 | 954 | |||
1188 | 955 | def synchronize_ca_if_changed(force=False, fatal=False): | ||
1189 | 956 | """Decorator to perform ssl cert sync if decorated function modifies them | ||
1190 | 957 | in any way. | ||
1191 | 958 | |||
1192 | 959 | If force is True a sync is done regardless. | ||
1193 | 960 | """ | ||
1194 | 961 | def inner_synchronize_ca_if_changed1(f): | ||
1195 | 962 | def inner_synchronize_ca_if_changed2(*args, **kwargs): | ||
1196 | 963 | # Only sync master can do sync. Ensure (a) we are not nested and | ||
1197 | 964 | # (b) a master is elected and we are it. | ||
1198 | 965 | acquired = SSL_SYNC_SEMAPHORE.acquire(blocking=0) | ||
1199 | 966 | try: | ||
1200 | 967 | if not acquired: | ||
1201 | 968 | log("Nested sync - ignoring", level=DEBUG) | ||
1202 | 969 | return f(*args, **kwargs) | ||
1203 | 970 | |||
1204 | 971 | if not ensure_ssl_cert_master(): | ||
1205 | 972 | log("Not leader - ignoring sync", level=DEBUG) | ||
1206 | 973 | return f(*args, **kwargs) | ||
1207 | 974 | |||
1208 | 975 | peer_settings = {} | ||
1209 | 976 | if not force: | ||
1210 | 977 | ssl_dirs = [SSL_DIR, APACHE_SSL_DIR, CA_CERT_PATH] | ||
1211 | 978 | |||
1212 | 979 | hash1 = hashlib.sha256() | ||
1213 | 980 | for path in ssl_dirs: | ||
1214 | 981 | update_hash_from_path(hash1, path) | ||
1215 | 982 | |||
1216 | 983 | ret = f(*args, **kwargs) | ||
1217 | 984 | |||
1218 | 985 | hash2 = hashlib.sha256() | ||
1219 | 986 | for path in ssl_dirs: | ||
1220 | 987 | update_hash_from_path(hash2, path) | ||
1221 | 988 | |||
1222 | 989 | if hash1.hexdigest() != hash2.hexdigest(): | ||
1223 | 990 | log("SSL certs have changed - syncing peers", | ||
1224 | 991 | level=DEBUG) | ||
1225 | 992 | peer_settings = synchronize_ca(fatal=fatal) | ||
1226 | 993 | else: | ||
1227 | 994 | log("SSL certs have not changed - skipping sync", | ||
1228 | 995 | level=DEBUG) | ||
1229 | 996 | else: | ||
1230 | 997 | ret = f(*args, **kwargs) | ||
1231 | 998 | log("Doing forced ssl cert sync", level=DEBUG) | ||
1232 | 999 | peer_settings = synchronize_ca(fatal=fatal) | ||
1233 | 1000 | |||
1234 | 1001 | # If we are the sync master but not leader, ensure we have | ||
1235 | 1002 | # relinquished master status. | ||
1236 | 1003 | if not is_elected_leader(CLUSTER_RES): | ||
1237 | 1004 | log("Re-electing ssl cert master.", level=INFO) | ||
1238 | 1005 | peer_settings['ssl-cert-master'] = 'unknown' | ||
1239 | 1006 | |||
1240 | 1007 | if peer_settings: | ||
1241 | 1008 | for rid in relation_ids('cluster'): | ||
1242 | 1009 | relation_set(relation_id=rid, | ||
1243 | 1010 | relation_settings=peer_settings) | ||
1244 | 1011 | |||
1245 | 1012 | return ret | ||
1246 | 1013 | finally: | ||
1247 | 1014 | SSL_SYNC_SEMAPHORE.release() | ||
1248 | 1015 | |||
1249 | 1016 | return inner_synchronize_ca_if_changed2 | ||
1250 | 1017 | |||
1251 | 1018 | return inner_synchronize_ca_if_changed1 | ||
1252 | 638 | 1019 | ||
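Stripped of charm specifics, synchronize_ca_if_changed is a hash-guarded decorator: sync only when the decorated function actually touched the watched state. A standalone sketch (`snapshot` and `sync` stand in for hashing the SSL dirs and calling synchronize_ca):

```python
import hashlib

def sync_if_changed(snapshot, sync, force=False):
    # Run sync() after the wrapped function iff the watched state changed.
    def decorator(f):
        def wrapper(*args, **kwargs):
            if force:
                ret = f(*args, **kwargs)
                sync()
                return ret
            before = hashlib.sha256(snapshot()).hexdigest()
            ret = f(*args, **kwargs)
            if hashlib.sha256(snapshot()).hexdigest() != before:
                sync()
            return ret
        return wrapper
    return decorator

state = {'certs': b'v1', 'syncs': 0}

def snap():
    return state['certs']

def sync():
    state['syncs'] += 1

@sync_if_changed(snap, sync)
def rotate_certs():
    state['certs'] = b'v2'   # mutates watched state -> triggers sync

@sync_if_changed(snap, sync)
def read_only():
    pass                     # no mutation -> no sync

rotate_certs()
read_only()
```

The real decorator additionally uses a non-blocking semaphore to ignore nested syncs and defers entirely to the elected ssl-cert-master before hashing anything.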
1253 | 639 | 1020 | ||
1254 | 640 | def get_ca(user='keystone', group='keystone'): | 1021 | def get_ca(user='keystone', group='keystone'): |
1255 | @@ -642,22 +1023,32 @@ | |||
1256 | 642 | Initialize a new CA object if one hasn't already been loaded. | 1023 | Initialize a new CA object if one hasn't already been loaded. |
1257 | 643 | This will create a new CA or load an existing one. | 1024 | This will create a new CA or load an existing one. |
1258 | 644 | """ | 1025 | """ |
1260 | 645 | if not CA: | 1026 | if not ssl.CA_SINGLETON: |
1261 | 646 | if not os.path.isdir(SSL_DIR): | 1027 | if not os.path.isdir(SSL_DIR): |
1262 | 647 | os.mkdir(SSL_DIR) | 1028 | os.mkdir(SSL_DIR) |
1263 | 1029 | |||
1264 | 648 | d_name = '_'.join(SSL_CA_NAME.lower().split(' ')) | 1030 | d_name = '_'.join(SSL_CA_NAME.lower().split(' ')) |
1265 | 649 | ca = ssl.JujuCA(name=SSL_CA_NAME, user=user, group=group, | 1031 | ca = ssl.JujuCA(name=SSL_CA_NAME, user=user, group=group, |
1266 | 650 | ca_dir=os.path.join(SSL_DIR, | 1032 | ca_dir=os.path.join(SSL_DIR, |
1267 | 651 | '%s_intermediate_ca' % d_name), | 1033 | '%s_intermediate_ca' % d_name), |
1268 | 652 | root_ca_dir=os.path.join(SSL_DIR, | 1034 | root_ca_dir=os.path.join(SSL_DIR, |
1269 | 653 | '%s_root_ca' % d_name)) | 1035 | '%s_root_ca' % d_name)) |
1270 | 1036 | |||
1271 | 654 | # SSL_DIR is synchronized via all peers over unison+ssh, need | 1037 | # SSL_DIR is synchronized via all peers over unison+ssh, need |
1272 | 655 | # to ensure permissions. | 1038 | # to ensure permissions. |
1273 | 656 | subprocess.check_output(['chown', '-R', '%s.%s' % (user, group), | 1039 | subprocess.check_output(['chown', '-R', '%s.%s' % (user, group), |
1274 | 657 | '%s' % SSL_DIR]) | 1040 | '%s' % SSL_DIR]) |
1275 | 658 | subprocess.check_output(['chmod', '-R', 'g+rwx', '%s' % SSL_DIR]) | 1041 | subprocess.check_output(['chmod', '-R', 'g+rwx', '%s' % SSL_DIR]) |
1278 | 659 | CA.append(ca) | 1042 | |
1279 | 660 | return CA[0] | 1043 | # Ensure a master has been elected and prefer this unit. Note that we |
1280 | 1044 | # prefer oldest peer as predicate since this action is normally only | ||
1281 | 1045 | # performed once at deploy time when the oldest peer should be the | ||
1282 | 1046 | # first to be ready. | ||
1283 | 1047 | ensure_ssl_cert_master(use_oldest_peer=True) | ||
1284 | 1048 | |||
1285 | 1049 | ssl.CA_SINGLETON.append(ca) | ||
1286 | 1050 | |||
1287 | 1051 | return ssl.CA_SINGLETON[0] | ||
1288 | 661 | 1052 | ||
1289 | 662 | 1053 | ||
1290 | 663 | def relation_list(rid): | 1054 | def relation_list(rid): |
1291 | @@ -683,7 +1074,7 @@ | |||
1292 | 683 | https_cns = [] | 1074 | https_cns = [] |
1293 | 684 | if single.issubset(settings): | 1075 | if single.issubset(settings): |
1294 | 685 | # other end of relation advertised only one endpoint | 1076 | # other end of relation advertised only one endpoint |
1296 | 686 | if 'None' in [v for k, v in settings.iteritems()]: | 1077 | if 'None' in settings.itervalues(): |
1297 | 687 | # Some backend services advertise no endpoint but require a | 1078 | # Some backend services advertise no endpoint but require a |
1298 | 688 | # hook execution to update auth strategy. | 1079 | # hook execution to update auth strategy. |
1299 | 689 | relation_data = {} | 1080 | relation_data = {} |
1300 | @@ -699,7 +1090,7 @@ | |||
1301 | 699 | relation_data["auth_port"] = config('admin-port') | 1090 | relation_data["auth_port"] = config('admin-port') |
1302 | 700 | relation_data["service_port"] = config('service-port') | 1091 | relation_data["service_port"] = config('service-port') |
1303 | 701 | relation_data["region"] = config('region') | 1092 | relation_data["region"] = config('region') |
1305 | 702 | if config('https-service-endpoints') in ['True', 'true']: | 1093 | if is_str_true(config('https-service-endpoints')): |
1306 | 703 | # Pass CA cert as client will need it to | 1094 | # Pass CA cert as client will need it to |
1307 | 704 | # verify https connections | 1095 | # verify https connections |
1308 | 705 | ca = get_ca(user=SSH_USER) | 1096 | ca = get_ca(user=SSH_USER) |
1309 | @@ -711,6 +1102,7 @@ | |||
1310 | 711 | for role in get_requested_roles(settings): | 1102 | for role in get_requested_roles(settings): |
1311 | 712 | log("Creating requested role: %s" % role) | 1103 | log("Creating requested role: %s" % role) |
1312 | 713 | create_role(role) | 1104 | create_role(role) |
1313 | 1105 | |||
1314 | 714 | peer_store_and_set(relation_id=relation_id, | 1106 | peer_store_and_set(relation_id=relation_id, |
1315 | 715 | **relation_data) | 1107 | **relation_data) |
1316 | 716 | return | 1108 | return |
1317 | @@ -786,7 +1178,7 @@ | |||
1318 | 786 | if prefix: | 1178 | if prefix: |
1319 | 787 | service_username = "%s%s" % (prefix, service_username) | 1179 | service_username = "%s%s" % (prefix, service_username) |
1320 | 788 | 1180 | ||
1322 | 789 | if 'None' in [v for k, v in settings.iteritems()]: | 1181 | if 'None' in settings.itervalues(): |
1323 | 790 | return | 1182 | return |
1324 | 791 | 1183 | ||
1325 | 792 | if not service_username: | 1184 | if not service_username: |
1326 | @@ -838,7 +1230,7 @@ | |||
1327 | 838 | relation_data["auth_protocol"] = "http" | 1230 | relation_data["auth_protocol"] = "http" |
1328 | 839 | relation_data["service_protocol"] = "http" | 1231 | relation_data["service_protocol"] = "http" |
1329 | 840 | # generate or get a new cert/key for service if set to manage certs. | 1232 | # generate or get a new cert/key for service if set to manage certs. |
1331 | 841 | if config('https-service-endpoints') in ['True', 'true']: | 1233 | if is_str_true(config('https-service-endpoints')): |
1332 | 842 | ca = get_ca(user=SSH_USER) | 1234 | ca = get_ca(user=SSH_USER) |
1333 | 843 | # NOTE(jamespage) may have multiple cns to deal with to iterate | 1235 | # NOTE(jamespage) may have multiple cns to deal with to iterate |
1334 | 844 | https_cns = set(https_cns) | 1236 | https_cns = set(https_cns) |
1335 | @@ -853,6 +1245,7 @@ | |||
1336 | 853 | ca_bundle = ca.get_ca_bundle() | 1245 | ca_bundle = ca.get_ca_bundle() |
1337 | 854 | relation_data['ca_cert'] = b64encode(ca_bundle) | 1246 | relation_data['ca_cert'] = b64encode(ca_bundle) |
1338 | 855 | relation_data['https_keystone'] = 'True' | 1247 | relation_data['https_keystone'] = 'True' |
1339 | 1248 | |||
1340 | 856 | peer_store_and_set(relation_id=relation_id, | 1249 | peer_store_and_set(relation_id=relation_id, |
1341 | 857 | **relation_data) | 1250 | **relation_data) |
1342 | 858 | 1251 | ||
1343 | 859 | 1252 | ||
1344 | === modified file 'unit_tests/test_keystone_hooks.py' | |||
1345 | --- unit_tests/test_keystone_hooks.py 2015-01-21 16:26:50 +0000 | |||
1346 | +++ unit_tests/test_keystone_hooks.py 2015-01-21 16:51:08 +0000 | |||
1347 | @@ -1,6 +1,7 @@ | |||
1348 | 1 | from mock import call, patch, MagicMock | 1 | from mock import call, patch, MagicMock |
1349 | 2 | import os | 2 | import os |
1350 | 3 | import json | 3 | import json |
1351 | 4 | import uuid | ||
1352 | 4 | 5 | ||
1353 | 5 | from test_utils import CharmTestCase | 6 | from test_utils import CharmTestCase |
1354 | 6 | 7 | ||
1355 | @@ -30,7 +31,6 @@ | |||
1356 | 30 | 'local_unit', | 31 | 'local_unit', |
1357 | 31 | 'filter_installed_packages', | 32 | 'filter_installed_packages', |
1358 | 32 | 'relation_ids', | 33 | 'relation_ids', |
1359 | 33 | 'relation_list', | ||
1360 | 34 | 'relation_set', | 34 | 'relation_set', |
1361 | 35 | 'relation_get', | 35 | 'relation_get', |
1362 | 36 | 'related_units', | 36 | 'related_units', |
1363 | @@ -42,9 +42,10 @@ | |||
1364 | 42 | 'restart_on_change', | 42 | 'restart_on_change', |
1365 | 43 | # charmhelpers.contrib.openstack.utils | 43 | # charmhelpers.contrib.openstack.utils |
1366 | 44 | 'configure_installation_source', | 44 | 'configure_installation_source', |
1367 | 45 | # charmhelpers.contrib.openstack.ip | ||
1368 | 46 | 'resolve_address', | ||
1369 | 45 | # charmhelpers.contrib.hahelpers.cluster_utils | 47 | # charmhelpers.contrib.hahelpers.cluster_utils |
1372 | 46 | 'is_leader', | 48 | 'is_elected_leader', |
1371 | 47 | 'eligible_leader', | ||
1373 | 48 | 'get_hacluster_config', | 49 | 'get_hacluster_config', |
1374 | 49 | # keystone_utils | 50 | # keystone_utils |
1375 | 50 | 'restart_map', | 51 | 'restart_map', |
1376 | @@ -55,7 +56,7 @@ | |||
1377 | 55 | 'migrate_database', | 56 | 'migrate_database', |
1378 | 56 | 'ensure_initial_admin', | 57 | 'ensure_initial_admin', |
1379 | 57 | 'add_service_to_keystone', | 58 | 'add_service_to_keystone', |
1381 | 58 | 'synchronize_ca', | 59 | 'synchronize_ca_if_changed', |
1382 | 59 | 'update_nrpe_config', | 60 | 'update_nrpe_config', |
1383 | 60 | # other | 61 | # other |
1384 | 61 | 'check_call', | 62 | 'check_call', |
1385 | @@ -160,8 +161,13 @@ | |||
1386 | 160 | 'Attempting to associate a postgresql database when there ' | 161 | 'Attempting to associate a postgresql database when there ' |
1387 | 161 | 'is already associated a mysql one') | 162 | 'is already associated a mysql one') |
1388 | 162 | 163 | ||
1389 | 164 | @patch('keystone_utils.log') | ||
1390 | 165 | @patch('keystone_utils.ensure_ssl_cert_master') | ||
1391 | 163 | @patch.object(hooks, 'CONFIGS') | 166 | @patch.object(hooks, 'CONFIGS') |
1393 | 164 | def test_db_changed_missing_relation_data(self, configs): | 167 | def test_db_changed_missing_relation_data(self, configs, |
1394 | 168 | mock_ensure_ssl_cert_master, | ||
1395 | 169 | mock_log): | ||
1396 | 170 | mock_ensure_ssl_cert_master.return_value = False | ||
1397 | 165 | configs.complete_contexts = MagicMock() | 171 | configs.complete_contexts = MagicMock() |
1398 | 166 | configs.complete_contexts.return_value = [] | 172 | configs.complete_contexts.return_value = [] |
1399 | 167 | hooks.db_changed() | 173 | hooks.db_changed() |
1400 | @@ -169,8 +175,13 @@ | |||
1401 | 169 | 'shared-db relation incomplete. Peer not ready?' | 175 | 'shared-db relation incomplete. Peer not ready?' |
1402 | 170 | ) | 176 | ) |
1403 | 171 | 177 | ||
1404 | 178 | @patch('keystone_utils.log') | ||
1405 | 179 | @patch('keystone_utils.ensure_ssl_cert_master') | ||
1406 | 172 | @patch.object(hooks, 'CONFIGS') | 180 | @patch.object(hooks, 'CONFIGS') |
1408 | 173 | def test_postgresql_db_changed_missing_relation_data(self, configs): | 181 | def test_postgresql_db_changed_missing_relation_data(self, configs, |
1409 | 182 | mock_ensure_leader, | ||
1410 | 183 | mock_log): | ||
1411 | 184 | mock_ensure_leader.return_value = False | ||
1412 | 174 | configs.complete_contexts = MagicMock() | 185 | configs.complete_contexts = MagicMock() |
1413 | 175 | configs.complete_contexts.return_value = [] | 186 | configs.complete_contexts.return_value = [] |
1414 | 176 | hooks.pgsql_db_changed() | 187 | hooks.pgsql_db_changed() |
1415 | @@ -192,9 +203,14 @@ | |||
1416 | 192 | configs.write = MagicMock() | 203 | configs.write = MagicMock() |
1417 | 193 | hooks.pgsql_db_changed() | 204 | hooks.pgsql_db_changed() |
1418 | 194 | 205 | ||
1419 | 206 | @patch('keystone_utils.log') | ||
1420 | 207 | @patch('keystone_utils.ensure_ssl_cert_master') | ||
1421 | 195 | @patch.object(hooks, 'CONFIGS') | 208 | @patch.object(hooks, 'CONFIGS') |
1422 | 196 | @patch.object(hooks, 'identity_changed') | 209 | @patch.object(hooks, 'identity_changed') |
1424 | 197 | def test_db_changed_allowed(self, identity_changed, configs): | 210 | def test_db_changed_allowed(self, identity_changed, configs, |
1425 | 211 | mock_ensure_ssl_cert_master, | ||
1426 | 212 | mock_log): | ||
1427 | 213 | mock_ensure_ssl_cert_master.return_value = False | ||
1428 | 198 | self.relation_ids.return_value = ['identity-service:0'] | 214 | self.relation_ids.return_value = ['identity-service:0'] |
1429 | 199 | self.related_units.return_value = ['unit/0'] | 215 | self.related_units.return_value = ['unit/0'] |
1430 | 200 | 216 | ||
1431 | @@ -207,9 +223,13 @@ | |||
1432 | 207 | relation_id='identity-service:0', | 223 | relation_id='identity-service:0', |
1433 | 208 | remote_unit='unit/0') | 224 | remote_unit='unit/0') |
1434 | 209 | 225 | ||
1435 | 226 | @patch('keystone_utils.log') | ||
1436 | 227 | @patch('keystone_utils.ensure_ssl_cert_master') | ||
1437 | 210 | @patch.object(hooks, 'CONFIGS') | 228 | @patch.object(hooks, 'CONFIGS') |
1438 | 211 | @patch.object(hooks, 'identity_changed') | 229 | @patch.object(hooks, 'identity_changed') |
1440 | 212 | def test_db_changed_not_allowed(self, identity_changed, configs): | 230 | def test_db_changed_not_allowed(self, identity_changed, configs, |
1441 | 231 | mock_ensure_ssl_cert_master, mock_log): | ||
1442 | 232 | mock_ensure_ssl_cert_master.return_value = False | ||
1443 | 213 | self.relation_ids.return_value = ['identity-service:0'] | 233 | self.relation_ids.return_value = ['identity-service:0'] |
1444 | 214 | self.related_units.return_value = ['unit/0'] | 234 | self.related_units.return_value = ['unit/0'] |
1445 | 215 | 235 | ||
1446 | @@ -220,9 +240,13 @@ | |||
1447 | 220 | self.assertFalse(self.ensure_initial_admin.called) | 240 | self.assertFalse(self.ensure_initial_admin.called) |
1448 | 221 | self.assertFalse(identity_changed.called) | 241 | self.assertFalse(identity_changed.called) |
1449 | 222 | 242 | ||
1450 | 243 | @patch('keystone_utils.log') | ||
1451 | 244 | @patch('keystone_utils.ensure_ssl_cert_master') | ||
1452 | 223 | @patch.object(hooks, 'CONFIGS') | 245 | @patch.object(hooks, 'CONFIGS') |
1453 | 224 | @patch.object(hooks, 'identity_changed') | 246 | @patch.object(hooks, 'identity_changed') |
1455 | 225 | def test_postgresql_db_changed(self, identity_changed, configs): | 247 | def test_postgresql_db_changed(self, identity_changed, configs, |
1456 | 248 | mock_ensure_ssl_cert_master, mock_log): | ||
1457 | 249 | mock_ensure_ssl_cert_master.return_value = False | ||
1458 | 226 | self.relation_ids.return_value = ['identity-service:0'] | 250 | self.relation_ids.return_value = ['identity-service:0'] |
1459 | 227 | self.related_units.return_value = ['unit/0'] | 251 | self.related_units.return_value = ['unit/0'] |
1460 | 228 | 252 | ||
1461 | @@ -235,6 +259,10 @@ | |||
1462 | 235 | relation_id='identity-service:0', | 259 | relation_id='identity-service:0', |
1463 | 236 | remote_unit='unit/0') | 260 | remote_unit='unit/0') |
1464 | 237 | 261 | ||
1465 | 262 | @patch('keystone_utils.log') | ||
1466 | 263 | @patch('keystone_utils.ensure_ssl_cert_master') | ||
1467 | 264 | @patch.object(hooks, 'peer_units') | ||
1468 | 265 | @patch.object(hooks, 'ensure_permissions') | ||
1469 | 238 | @patch.object(hooks, 'admin_relation_changed') | 266 | @patch.object(hooks, 'admin_relation_changed') |
1470 | 239 | @patch.object(hooks, 'cluster_joined') | 267 | @patch.object(hooks, 'cluster_joined') |
1471 | 240 | @patch.object(unison, 'ensure_user') | 268 | @patch.object(unison, 'ensure_user') |
1472 | @@ -245,11 +273,15 @@ | |||
1473 | 245 | def test_config_changed_no_openstack_upgrade_leader( | 273 | def test_config_changed_no_openstack_upgrade_leader( |
1474 | 246 | self, configure_https, identity_changed, | 274 | self, configure_https, identity_changed, |
1475 | 247 | configs, get_homedir, ensure_user, cluster_joined, | 275 | configs, get_homedir, ensure_user, cluster_joined, |
1477 | 248 | admin_relation_changed): | 276 | admin_relation_changed, ensure_permissions, mock_peer_units, |
1478 | 277 | mock_ensure_ssl_cert_master, mock_log): | ||
1479 | 249 | self.openstack_upgrade_available.return_value = False | 278 | self.openstack_upgrade_available.return_value = False |
1483 | 250 | self.eligible_leader.return_value = True | 279 | self.is_elected_leader.return_value = True |
1484 | 251 | self.relation_ids.return_value = ['dummyid:0'] | 280 | # avoid having to mock syncer |
1485 | 252 | self.relation_list.return_value = ['unit/0'] | 281 | mock_ensure_ssl_cert_master.return_value = False |
1486 | 282 | mock_peer_units.return_value = [] | ||
1487 | 283 | self.relation_ids.return_value = ['identity-service:0'] | ||
1488 | 284 | self.related_units.return_value = ['unit/0'] | ||
1489 | 253 | 285 | ||
1490 | 254 | hooks.config_changed() | 286 | hooks.config_changed() |
1491 | 255 | ensure_user.assert_called_with(user=self.ssh_user, group='keystone') | 287 | ensure_user.assert_called_with(user=self.ssh_user, group='keystone') |
1492 | @@ -264,10 +296,13 @@ | |||
1493 | 264 | self.log.assert_called_with( | 296 | self.log.assert_called_with( |
1494 | 265 | 'Firing identity_changed hook for all related services.') | 297 | 'Firing identity_changed hook for all related services.') |
1495 | 266 | identity_changed.assert_called_with( | 298 | identity_changed.assert_called_with( |
1497 | 267 | relation_id='dummyid:0', | 299 | relation_id='identity-service:0', |
1498 | 268 | remote_unit='unit/0') | 300 | remote_unit='unit/0') |
1500 | 269 | admin_relation_changed.assert_called_with('dummyid:0') | 301 | admin_relation_changed.assert_called_with('identity-service:0') |
1501 | 270 | 302 | ||
1502 | 303 | @patch('keystone_utils.log') | ||
1503 | 304 | @patch('keystone_utils.ensure_ssl_cert_master') | ||
1504 | 305 | @patch.object(hooks, 'ensure_permissions') | ||
1505 | 271 | @patch.object(hooks, 'cluster_joined') | 306 | @patch.object(hooks, 'cluster_joined') |
1506 | 272 | @patch.object(unison, 'ensure_user') | 307 | @patch.object(unison, 'ensure_user') |
1507 | 273 | @patch.object(unison, 'get_homedir') | 308 | @patch.object(unison, 'get_homedir') |
1508 | @@ -276,9 +311,12 @@ | |||
1509 | 276 | @patch.object(hooks, 'configure_https') | 311 | @patch.object(hooks, 'configure_https') |
     def test_config_changed_no_openstack_upgrade_not_leader(
             self, configure_https, identity_changed,
-            configs, get_homedir, ensure_user, cluster_joined):
+            configs, get_homedir, ensure_user, cluster_joined,
+            ensure_permissions, mock_ensure_ssl_cert_master,
+            mock_log):
         self.openstack_upgrade_available.return_value = False
-        self.eligible_leader.return_value = False
+        self.is_elected_leader.return_value = False
+        mock_ensure_ssl_cert_master.return_value = False

         hooks.config_changed()
         ensure_user.assert_called_with(user=self.ssh_user, group='keystone')
@@ -292,6 +330,10 @@
         self.assertFalse(self.ensure_initial_admin.called)
         self.assertFalse(identity_changed.called)

+    @patch('keystone_utils.log')
+    @patch('keystone_utils.ensure_ssl_cert_master')
+    @patch.object(hooks, 'peer_units')
+    @patch.object(hooks, 'ensure_permissions')
     @patch.object(hooks, 'admin_relation_changed')
     @patch.object(hooks, 'cluster_joined')
     @patch.object(unison, 'ensure_user')
@@ -302,11 +344,16 @@
     def test_config_changed_with_openstack_upgrade(
             self, configure_https, identity_changed,
             configs, get_homedir, ensure_user, cluster_joined,
-            admin_relation_changed):
+            admin_relation_changed,
+            ensure_permissions, mock_peer_units, mock_ensure_ssl_cert_master,
+            mock_log):
         self.openstack_upgrade_available.return_value = True
-        self.eligible_leader.return_value = True
-        self.relation_ids.return_value = ['dummyid:0']
-        self.relation_list.return_value = ['unit/0']
+        self.is_elected_leader.return_value = True
+        # avoid having to mock syncer
+        mock_ensure_ssl_cert_master.return_value = False
+        mock_peer_units.return_value = []
+        self.relation_ids.return_value = ['identity-service:0']
+        self.related_units.return_value = ['unit/0']

         hooks.config_changed()
         ensure_user.assert_called_with(user=self.ssh_user, group='keystone')
@@ -323,25 +370,33 @@
         self.log.assert_called_with(
             'Firing identity_changed hook for all related services.')
         identity_changed.assert_called_with(
-            relation_id='dummyid:0',
+            relation_id='identity-service:0',
             remote_unit='unit/0')
-        admin_relation_changed.assert_called_with('dummyid:0')
+        admin_relation_changed.assert_called_with('identity-service:0')

+    @patch('keystone_utils.log')
+    @patch('keystone_utils.ensure_ssl_cert_master')
     @patch.object(hooks, 'hashlib')
     @patch.object(hooks, 'send_notifications')
     def test_identity_changed_leader(self, mock_send_notifications,
-                                     mock_hashlib):
-        self.eligible_leader.return_value = True
+                                     mock_hashlib, mock_ensure_ssl_cert_master,
+                                     mock_log):
+        mock_ensure_ssl_cert_master.return_value = False
         hooks.identity_changed(
             relation_id='identity-service:0',
             remote_unit='unit/0')
         self.add_service_to_keystone.assert_called_with(
             'identity-service:0',
             'unit/0')
-        self.assertTrue(self.synchronize_ca.called)

-    def test_identity_changed_no_leader(self):
-        self.eligible_leader.return_value = False
+    @patch.object(hooks, 'local_unit')
+    @patch('keystone_utils.log')
+    @patch('keystone_utils.ensure_ssl_cert_master')
+    def test_identity_changed_no_leader(self, mock_ensure_ssl_cert_master,
+                                        mock_log, mock_local_unit):
+        mock_ensure_ssl_cert_master.return_value = False
+        mock_local_unit.return_value = 'unit/0'
+        self.is_elected_leader.return_value = False
         hooks.identity_changed(
             relation_id='identity-service:0',
             remote_unit='unit/0')
@@ -349,23 +404,44 @@
         self.log.assert_called_with(
             'Deferring identity_changed() to service leader.')

+    @patch.object(hooks, 'local_unit')
+    @patch.object(hooks, 'peer_units')
     @patch.object(unison, 'ssh_authorized_peers')
-    def test_cluster_joined(self, ssh_authorized_peers):
+    def test_cluster_joined(self, ssh_authorized_peers, mock_peer_units,
+                            mock_local_unit):
+        mock_local_unit.return_value = 'unit/0'
+        mock_peer_units.return_value = ['unit/0']
         hooks.cluster_joined()
         ssh_authorized_peers.assert_called_with(
             user=self.ssh_user, group='juju_keystone',
             peer_interface='cluster', ensure_local_user=True)

+    @patch.object(hooks, 'is_ssl_cert_master')
+    @patch.object(hooks, 'peer_units')
+    @patch('keystone_utils.log')
+    @patch('keystone_utils.ensure_ssl_cert_master')
+    @patch('keystone_utils.synchronize_ca')
+    @patch.object(hooks, 'check_peer_actions')
     @patch.object(unison, 'ssh_authorized_peers')
     @patch.object(hooks, 'CONFIGS')
-    def test_cluster_changed(self, configs, ssh_authorized_peers):
+    def test_cluster_changed(self, configs, ssh_authorized_peers,
+                             check_peer_actions, mock_synchronize_ca,
+                             mock_ensure_ssl_cert_master,
+                             mock_log, mock_peer_units,
+                             mock_is_ssl_cert_master):
+        mock_is_ssl_cert_master.return_value = False
+        mock_peer_units.return_value = ['unit/0']
+        mock_ensure_ssl_cert_master.return_value = False
+        self.is_elected_leader.return_value = False
+        self.relation_get.return_value = {'foo_passwd': '123',
+                                          'identity-service:16_foo': 'bar'}
         hooks.cluster_changed()
-        self.peer_echo.assert_called_with(includes=['_passwd',
-                                                    'identity-service:'])
+        self.peer_echo.assert_called_with(includes=['foo_passwd',
+                                                    'identity-service:16_foo'])
         ssh_authorized_peers.assert_called_with(
             user=self.ssh_user, group='keystone',
             peer_interface='cluster', ensure_local_user=True)
-        self.assertTrue(self.synchronize_ca.called)
+        self.assertFalse(mock_synchronize_ca.called)
         self.assertTrue(configs.write_all.called)

     def test_ha_joined(self):
@@ -440,34 +516,50 @@
         }
         self.relation_set.assert_called_with(**args)

+    @patch('keystone_utils.log')
+    @patch('keystone_utils.ensure_ssl_cert_master')
+    @patch('keystone_utils.synchronize_ca')
     @patch.object(hooks, 'CONFIGS')
-    def test_ha_relation_changed_not_clustered_not_leader(self, configs):
+    def test_ha_relation_changed_not_clustered_not_leader(self, configs,
+                                                          mock_synchronize_ca,
+                                                          mock_is_master,
+                                                          mock_log):
+        mock_is_master.return_value = False
         self.relation_get.return_value = False
-        self.is_leader.return_value = False
+        self.is_elected_leader.return_value = False

         hooks.ha_changed()
         self.assertTrue(configs.write_all.called)
+        self.assertFalse(mock_synchronize_ca.called)

+    @patch('keystone_utils.log')
+    @patch('keystone_utils.ensure_ssl_cert_master')
     @patch.object(hooks, 'identity_changed')
     @patch.object(hooks, 'CONFIGS')
-    def test_ha_relation_changed_clustered_leader(
-            self, configs, identity_changed):
+    def test_ha_relation_changed_clustered_leader(self, configs,
+                                                  identity_changed,
+                                                  mock_ensure_ssl_cert_master,
+                                                  mock_log):
+        mock_ensure_ssl_cert_master.return_value = False
         self.relation_get.return_value = True
-        self.is_leader.return_value = True
+        self.is_elected_leader.return_value = True
         self.relation_ids.return_value = ['identity-service:0']
         self.related_units.return_value = ['unit/0']

         hooks.ha_changed()
         self.assertTrue(configs.write_all.called)
         self.log.assert_called_with(
-            'Cluster configured, notifying other services and updating '
-            'keystone endpoint configuration')
+            'Firing identity_changed hook for all related services.')
         identity_changed.assert_called_with(
             relation_id='identity-service:0',
             remote_unit='unit/0')

+    @patch('keystone_utils.log')
+    @patch('keystone_utils.ensure_ssl_cert_master')
     @patch.object(hooks, 'CONFIGS')
-    def test_configure_https_enable(self, configs):
+    def test_configure_https_enable(self, configs, mock_ensure_ssl_cert_master,
+                                    mock_log):
+        mock_ensure_ssl_cert_master.return_value = False
         configs.complete_contexts = MagicMock()
         configs.complete_contexts.return_value = ['https']
         configs.write = MagicMock()
@@ -477,8 +569,13 @@
         cmd = ['a2ensite', 'openstack_https_frontend']
         self.check_call.assert_called_with(cmd)

+    @patch('keystone_utils.log')
+    @patch('keystone_utils.ensure_ssl_cert_master')
     @patch.object(hooks, 'CONFIGS')
-    def test_configure_https_disable(self, configs):
+    def test_configure_https_disable(self, configs,
+                                     mock_ensure_ssl_cert_master,
+                                     mock_log):
+        mock_ensure_ssl_cert_master.return_value = False
         configs.complete_contexts = MagicMock()
         configs.complete_contexts.return_value = ['']
         configs.write = MagicMock()
@@ -488,30 +585,61 @@
         cmd = ['a2dissite', 'openstack_https_frontend']
         self.check_call.assert_called_with(cmd)

+    @patch('keystone_utils.log')
+    @patch('keystone_utils.relation_ids')
+    @patch('keystone_utils.is_elected_leader')
+    @patch('keystone_utils.ensure_ssl_cert_master')
+    @patch('keystone_utils.update_hash_from_path')
+    @patch('keystone_utils.synchronize_ca')
     @patch.object(unison, 'ssh_authorized_peers')
-    def test_upgrade_charm_leader(self, ssh_authorized_peers):
-        self.eligible_leader.return_value = True
+    def test_upgrade_charm_leader(self, ssh_authorized_peers,
+                                  mock_synchronize_ca,
+                                  mock_update_hash_from_path,
+                                  mock_ensure_ssl_cert_master,
+                                  mock_is_elected_leader,
+                                  mock_relation_ids,
+                                  mock_log):
+        mock_is_elected_leader.return_value = False
+        mock_relation_ids.return_value = []
+        mock_ensure_ssl_cert_master.return_value = True
+        # Ensure always returns diff
+        mock_update_hash_from_path.side_effect = \
+            lambda hash, *args, **kwargs: hash.update(str(uuid.uuid4()))
+
+        self.is_elected_leader.return_value = True
         self.filter_installed_packages.return_value = []
         hooks.upgrade_charm()
         self.assertTrue(self.apt_install.called)
         ssh_authorized_peers.assert_called_with(
             user=self.ssh_user, group='keystone',
             peer_interface='cluster', ensure_local_user=True)
-        self.assertTrue(self.synchronize_ca.called)
+        self.assertTrue(mock_synchronize_ca.called)
         self.log.assert_called_with(
-            'Cluster leader - ensuring endpoint configuration'
-            ' is up to date')
+            'Firing identity_changed hook for all related services.')
         self.assertTrue(self.ensure_initial_admin.called)

+    @patch('keystone_utils.log')
+    @patch('keystone_utils.relation_ids')
+    @patch('keystone_utils.ensure_ssl_cert_master')
+    @patch('keystone_utils.update_hash_from_path')
     @patch.object(unison, 'ssh_authorized_peers')
-    def test_upgrade_charm_not_leader(self, ssh_authorized_peers):
-        self.eligible_leader.return_value = False
+    def test_upgrade_charm_not_leader(self, ssh_authorized_peers,
+                                      mock_update_hash_from_path,
+                                      mock_ensure_ssl_cert_master,
+                                      mock_relation_ids,
+                                      mock_log):
+        mock_relation_ids.return_value = []
+        mock_ensure_ssl_cert_master.return_value = False
+        # Ensure always returns diff
+        mock_update_hash_from_path.side_effect = \
+            lambda hash, *args, **kwargs: hash.update(str(uuid.uuid4()))
+
+        self.is_elected_leader.return_value = False
         self.filter_installed_packages.return_value = []
         hooks.upgrade_charm()
         self.assertTrue(self.apt_install.called)
         ssh_authorized_peers.assert_called_with(
             user=self.ssh_user, group='keystone',
             peer_interface='cluster', ensure_local_user=True)
-        self.assertTrue(self.synchronize_ca.called)
         self.assertFalse(self.log.called)
         self.assertFalse(self.ensure_initial_admin.called)

=== modified file 'unit_tests/test_keystone_utils.py'
--- unit_tests/test_keystone_utils.py	2015-01-14 13:17:50 +0000
+++ unit_tests/test_keystone_utils.py	2015-01-21 16:51:08 +0000
@@ -26,9 +26,8 @@
         'get_os_codename_install_source',
         'grant_role',
         'configure_installation_source',
-        'eligible_leader',
+        'is_elected_leader',
         'https',
-        'is_clustered',
         'peer_store_and_set',
         'service_stop',
         'service_start',
@@ -115,7 +114,7 @@
             self, migrate_database, determine_packages, configs):
         self.test_config.set('openstack-origin', 'precise')
         determine_packages.return_value = []
-        self.eligible_leader.return_value = True
+        self.is_elected_leader.return_value = True

         utils.do_openstack_upgrade(configs)

@@ -202,7 +201,6 @@
         self.resolve_address.return_value = '10.0.0.3'
         self.test_config.set('admin-port', 80)
         self.test_config.set('service-port', 81)
-        self.is_clustered.return_value = False
         self.https.return_value = False
         self.test_config.set('https-service-endpoints', 'False')
         self.get_local_endpoint.return_value = 'http://localhost:80/v2.0/'
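Most of the churn in the diff above is adding `@patch` decorators and appending the matching mock arguments to each test signature. A point worth noting for reviewers: stacked `@patch` decorators are applied bottom-up, so the bottom-most decorator supplies the first mock argument. A minimal sketch (patching stdlib `os` as a stand-in for the charm's `keystone_utils`):

```python
import os
from unittest.mock import patch


@patch('os.getcwd')   # outermost decorator -> last mock argument
@patch('os.getpid')   # innermost decorator -> first mock argument
def run_check(mock_getpid, mock_getcwd):
    # Each mock replaces the real function for the duration of the call.
    mock_getpid.return_value = 42
    mock_getcwd.return_value = '/tmp'
    return os.getpid(), os.getcwd()
```

Calling `run_check()` returns `(42, '/tmp')`, confirming that the argument order mirrors the decorator stack from the bottom up, which is why the new `mock_ensure_ssl_cert_master` and `mock_log` parameters land after the pre-existing mocks in each signature.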
charm_lint_check #528 keystone-next for hopem mp245492
LINT OK: passed
Build: http://10.245.162.77:8080/job/charm_lint_check/528/
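Aside on the `test_upgrade_charm_*` changes: they force `update_hash_from_path` to always report a change by giving the mock a `side_effect` lambda that mutates the passed-in hash object with a fresh UUID. A standalone sketch of that trick (the path argument and names here are illustrative, not the charm's API; `.encode()` is added for Python 3):

```python
import hashlib
import uuid
from unittest.mock import MagicMock

# Stand-in for the patched keystone_utils.update_hash_from_path.
update_hash_from_path = MagicMock()
# Mutate the caller's hash object so successive digests never match,
# guaranteeing the "contents changed" code path is exercised.
update_hash_from_path.side_effect = (
    lambda hash_, *args, **kwargs: hash_.update(str(uuid.uuid4()).encode()))

h1 = hashlib.sha256()
update_hash_from_path(h1, '/etc/keystone')  # illustrative path
h2 = hashlib.sha256()
update_hash_from_path(h2, '/etc/keystone')
```

After this, `h1.hexdigest() != h2.hexdigest()`, so any code comparing the two digests treats the path as modified on every hook run.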