Merge lp:~hopem/charms/trusty/keystone/fix-ssl-cert-sync into lp:~openstack-charmers-archive/charms/trusty/keystone/next

Proposed by Edward Hope-Morley
Status: Merged
Merged at revision: 109
Proposed branch: lp:~hopem/charms/trusty/keystone/fix-ssl-cert-sync
Merge into: lp:~openstack-charmers-archive/charms/trusty/keystone/next
Diff against target: 1821 lines (+937/-202)
7 files modified
README.md (+46/-10)
hooks/keystone_context.py (+45/-8)
hooks/keystone_hooks.py (+180/-66)
hooks/keystone_ssl.py (+43/-14)
hooks/keystone_utils.py (+443/-50)
unit_tests/test_keystone_hooks.py (+178/-50)
unit_tests/test_keystone_utils.py (+2/-4)
To merge this branch: bzr merge lp:~hopem/charms/trusty/keystone/fix-ssl-cert-sync
Reviewer Review Type Date Requested Status
Liam Young (community) Approve
Ryan Beisner (community) Approve
Review via email: mp+245492@code.launchpad.net
Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #528 keystone-next for hopem mp245492
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/528/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #557 keystone-next for hopem mp245492
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/557/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #684 keystone-next for hopem mp245492
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
  ERROR subprocess encountered error code 1
  make: *** [test] Error 1

Full amulet test output: http://paste.ubuntu.com/9676583/
Build: http://10.245.162.77:8080/job/charm_amulet_test/684/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #531 keystone-next for hopem mp245492
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/531/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #560 keystone-next for hopem mp245492
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/560/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #687 keystone-next for hopem mp245492
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
  ERROR subprocess encountered error code 1
  make: *** [test] Error 1

Full amulet test output: http://paste.ubuntu.com/9676857/
Build: http://10.245.162.77:8080/job/charm_amulet_test/687/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #533 keystone-next for hopem mp245492
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/533/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #562 keystone-next for hopem mp245492
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/562/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #538 keystone-next for hopem mp245492
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/538/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #567 keystone-next for hopem mp245492
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/567/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #694 keystone-next for hopem mp245492
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/694/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #598 keystone-next for hopem mp245492
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/598/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #627 keystone-next for hopem mp245492
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/627/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #786 keystone-next for hopem mp245492
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/786/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #609 keystone-next for hopem mp245492
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
  unit_tests/test_keystone_hooks.py:537:44: W291 trailing whitespace
  make: *** [lint] Error 1

Full lint test output: http://paste.ubuntu.com/9699419/
Build: http://10.245.162.77:8080/job/charm_lint_check/609/
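The lint failure above flags flake8's W291 (trailing whitespace) at test_keystone_hooks.py:537:44. The charm's gate runs `make lint`; as a quick illustration of what W291 reports, here is a hedged standalone sketch (not the charm's actual tooling) that finds trailing blanks and reports flake8-style 1-based line:column positions:

```python
import re

# Hypothetical checker mimicking flake8's W291 (trailing whitespace);
# the real gate for this branch is `make lint`.
TRAILING_WS = re.compile(r'[ \t]+$')

def find_trailing_whitespace(text):
    """Return (line_number, column) pairs for lines ending in blanks."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        m = TRAILING_WS.search(line)
        if m:
            # flake8 reports 1-based columns, e.g. 537:44: W291
            hits.append((lineno, m.start() + 1))
    return hits

if __name__ == '__main__':
    print(find_trailing_whitespace("clean line\ndirty line   \n"))
```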

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #638 keystone-next for hopem mp245492
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
  FAILED (errors=3, failures=2)
  make: *** [unit_test] Error 1

Full unit test output: http://paste.ubuntu.com/9699422/
Build: http://10.245.162.77:8080/job/charm_unit_test/638/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #796 keystone-next for hopem mp245492
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/796/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #610 keystone-next for hopem mp245492
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
  unit_tests/test_keystone_hooks.py:537:44: W291 trailing whitespace
  make: *** [lint] Error 1

Full lint test output: http://paste.ubuntu.com/9699605/
Build: http://10.245.162.77:8080/job/charm_lint_check/610/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #639 keystone-next for hopem mp245492
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
  FAILED (errors=3, failures=2)
  make: *** [unit_test] Error 1

Full unit test output: http://paste.ubuntu.com/9699606/
Build: http://10.245.162.77:8080/job/charm_unit_test/639/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #797 keystone-next for hopem mp245492
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/797/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #611 keystone-next for hopem mp245492
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/611/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #640 keystone-next for hopem mp245492
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
  FAILED (errors=3, failures=2)
  make: *** [unit_test] Error 1

Full unit test output: http://paste.ubuntu.com/9699828/
Build: http://10.245.162.77:8080/job/charm_unit_test/640/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #798 keystone-next for hopem mp245492
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/798/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #613 keystone-next for hopem mp245492
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/613/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #642 keystone-next for hopem mp245492
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
  FAILED (errors=9, failures=2)
  make: *** [unit_test] Error 1

Full unit test output: http://paste.ubuntu.com/9704972/
Build: http://10.245.162.77:8080/job/charm_unit_test/642/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #799 keystone-next for hopem mp245492
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
  ERROR subprocess encountered error code 1
  make: *** [test] Error 1

Full amulet test output: http://paste.ubuntu.com/9705021/
Build: http://10.245.162.77:8080/job/charm_amulet_test/799/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #614 keystone-next for hopem mp245492
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/614/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #643 keystone-next for hopem mp245492
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
  FAILED (errors=25)
  make: *** [unit_test] Error 1

Full unit test output: http://paste.ubuntu.com/9705140/
Build: http://10.245.162.77:8080/job/charm_unit_test/643/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #800 keystone-next for hopem mp245492
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/800/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #615 keystone-next for hopem mp245492
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/615/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #644 keystone-next for hopem mp245492
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
  FAILED (errors=25)
  make: *** [unit_test] Error 1

Full unit test output: http://paste.ubuntu.com/9705469/
Build: http://10.245.162.77:8080/job/charm_unit_test/644/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #801 keystone-next for hopem mp245492
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/801/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #779 keystone-next for hopem mp245492
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/779/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #808 keystone-next for hopem mp245492
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/808/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #963 keystone-next for hopem mp245492
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/963/

Revision history for this message
Ryan Beisner (1chb1n) wrote :

P, T & U deploy tests are all happy!

review: Approve
Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #800 keystone-next for hopem mp245492
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/800/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #829 keystone-next for hopem mp245492
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/829/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #989 keystone-next for hopem mp245492
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/989/

112. By Edward Hope-Morley

updated README

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #884 keystone-next for hopem mp245492
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/884/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #913 keystone-next for hopem mp245492
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/913/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #1108 keystone-next for hopem mp245492
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/1108/

Revision history for this message
Ryan Beisner (1chb1n) wrote :

All deploy tests are happy. P-I, T-I, T-J, U-J

Revision history for this message
Liam Young (gnuoy) wrote :

Approve

Tested using mojo spec dev/keystone_ssl from lp:~ost-maintainers/openstack-mojo-specs/mojo-openstack-specs.

(There are issues with removing units which are now highlighted in the README)

review: Approve
Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #935 keystone-next for hopem mp245492
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/935/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #964 keystone-next for hopem mp245492
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/964/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #1157 keystone-next for hopem mp245492
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/1157/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #938 keystone-next for hopem mp245492
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/938/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #967 keystone-next for hopem mp245492
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/967/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #1160 keystone-next for hopem mp245492
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/1160/

113. By Edward Hope-Morley

ignore ssl actions if not enabled and improve support for non-ssl -> ssl

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #946 keystone-next for hopem mp245492
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/946/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #975 keystone-next for hopem mp245492
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/975/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #1168 keystone-next for hopem mp245492
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/1168/

Revision history for this message
Liam Young (gnuoy) wrote :

Approve.

review: Approve
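Revision 113's cluster_changed handler (visible in the diff below) decides whether a forced certificate sync is needed by comparing the units that requested a sync against the set the master has already synced, using a symmetric difference. A minimal standalone sketch of that set arithmetic (the function name and argument shapes here are illustrative, not the charm's API):

```python
import json

def needs_forced_sync(request_units, synced_units_json):
    """Return True if any peer still needs an SSL cert sync.

    request_units: units that set an 'ssl-sync-required-*' flag.
    synced_units_json: JSON-encoded list previously recorded by the
    ssl-cert-master, or None if no sync has happened yet.
    """
    if not request_units:
        return False
    if not synced_units_json:
        # Units are waiting but nothing has ever been synced.
        return True
    synced = set(json.loads(synced_units_json))
    # Divergence in either direction means a re-sync is required.
    return bool(set(request_units).symmetric_difference(synced))
```

With this shape, a newly joined peer (present in the requests but absent from the synced list) triggers the forced path, while a fully synced cluster falls through to the normal update.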

Preview Diff

1=== modified file 'README.md'
2--- README.md 2014-06-29 21:56:26 +0000
3+++ README.md 2015-01-21 16:51:08 +0000
4@@ -1,22 +1,30 @@
5+Overview
6+========
7+
8 This charm provides Keystone, the Openstack identity service. It's target
9-platform is Ubuntu Precise + Openstack Essex. This has not been tested
10-using Oneiric + Diablo.
11-
12-It provides three interfaces.
13-
14+platform is (ideally) Ubuntu LTS + Openstack.
15+
16+Usage
17+=====
18+
19+The following interfaces are provided:
20+
21+ - nrpe-external-master: Used generate Nagios checks.
22+
23 - identity-service: Openstack API endpoints request an entry in the
24 Keystone service catalog + endpoint template catalog. When a relation
25 is established, Keystone receives: service name, region, public_url,
26 admin_url and internal_url. It first checks that the requested service
27 is listed as a supported service. This list should stay updated to
28- support current Openstack core services. If the services is supported,
29- a entry in the service catalog is created, an endpoint template is
30+ support current Openstack core services. If the service is supported,
31+ an entry in the service catalog is created, an endpoint template is
32 created and a admin token is generated. The other end of the relation
33- recieves the token as well as info on which ports Keystone is listening.
34+ receives the token as well as info on which ports Keystone is listening
35+ on.
36
37 - keystone-service: This is currently only used by Horizon/dashboard
38 as its interaction with Keystone is different from other Openstack API
39- servicies. That is, Horizon requests a Keystone role and token exists.
40+ services. That is, Horizon requests a Keystone role and token exists.
41 During a relation, Horizon requests its configured default role and
42 Keystone responds with a token and the auth + admin ports on which
43 Keystone is listening.
44@@ -26,9 +34,37 @@
45 provision users, tenants, etc. or that otherwise automate using the
46 Openstack cluster deployment.
47
48+ - identity-notifications: Used to broadcast messages to any services
49+ listening on the interface.
50+
51+Database
52+--------
53+
54 Keystone requires a database. By default, a local sqlite database is used.
55 The charm supports relations to a shared-db via mysql-shared interface. When
56 a new data store is configured, the charm ensures the minimum administrator
57 credentials exist (as configured via charm configuration)
58
59-VIP is only required if you plan on multi-unit clusterming. The VIP becomes a highly-available API endpoint.
60+HA/Clustering
61+-------------
62+
63+VIP is only required if you plan on multi-unit clustering (requires relating
64+with hacluster charm). The VIP becomes a highly-available API endpoint.
65+
66+SSL/HTTPS
67+---------
68+
69+This charm also supports SSL and HTTPS endpoints. In order to ensure SSL
70+certificates are only created once and distributed to all units, one unit gets
71+elected as an ssl-cert-master. One side-effect of this is that as units are
72+scaled-out the currently elected leader needs to be running in order for nodes
73+to sync certificates. This 'feature' is to work around the lack of native
74+leadership election via Juju itself, a feature that is due for release some
75+time soon but until then we have to rely on this. Also, if a keystone unit does
76+go down, it must be removed from Juju i.e.
77+
78+ juju destroy-unit keystone/<unit-num>
79+
80+Otherwise it will be assumed that this unit may come back at some point and
81+therefore must be known to be in-sync with the rest before continuing.
82+
83
84=== modified file 'hooks/keystone_context.py'
85--- hooks/keystone_context.py 2015-01-19 10:45:41 +0000
86+++ hooks/keystone_context.py 2015-01-21 16:51:08 +0000
87@@ -1,3 +1,5 @@
88+import os
89+
90 from charmhelpers.core.hookenv import config
91
92 from charmhelpers.core.host import mkdir, write_file
93@@ -6,13 +8,16 @@
94
95 from charmhelpers.contrib.hahelpers.cluster import (
96 determine_apache_port,
97- determine_api_port
98+ determine_api_port,
99+)
100+
101+from charmhelpers.core.hookenv import (
102+ log,
103+ INFO,
104 )
105
106 from charmhelpers.contrib.hahelpers.apache import install_ca_cert
107
108-import os
109-
110 CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
111
112
113@@ -29,20 +34,52 @@
114 return super(ApacheSSLContext, self).__call__()
115
116 def configure_cert(self, cn):
117- from keystone_utils import SSH_USER, get_ca
118+ from keystone_utils import (
119+ SSH_USER,
120+ get_ca,
121+ ensure_permissions,
122+ is_ssl_cert_master,
123+ )
124+
125 ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace)
126- mkdir(path=ssl_dir)
127+ perms = 0o755
128+ mkdir(path=ssl_dir, owner=SSH_USER, group='keystone', perms=perms)
129+ # Ensure accessible by keystone ssh user and group (for sync)
130+ ensure_permissions(ssl_dir, user=SSH_USER, group='keystone',
131+ perms=perms)
132+
133+ if not is_ssl_cert_master():
134+ log("Not ssl-cert-master - skipping apache cert config",
135+ level=INFO)
136+ return
137+
138+ log("Creating apache ssl certs in %s" % (ssl_dir), level=INFO)
139+
140 ca = get_ca(user=SSH_USER)
141 cert, key = ca.get_cert_and_key(common_name=cn)
142 write_file(path=os.path.join(ssl_dir, 'cert_{}'.format(cn)),
143- content=cert)
144+ content=cert, owner=SSH_USER, group='keystone', perms=0o644)
145 write_file(path=os.path.join(ssl_dir, 'key_{}'.format(cn)),
146- content=key)
147+ content=key, owner=SSH_USER, group='keystone', perms=0o644)
148
149 def configure_ca(self):
150- from keystone_utils import SSH_USER, get_ca
151+ from keystone_utils import (
152+ SSH_USER,
153+ get_ca,
154+ ensure_permissions,
155+ is_ssl_cert_master,
156+ )
157+
158+ if not is_ssl_cert_master():
159+ log("Not ssl-cert-master - skipping apache cert config",
160+ level=INFO)
161+ return
162+
163 ca = get_ca(user=SSH_USER)
164 install_ca_cert(ca.get_ca_bundle())
165+ # Ensure accessible by keystone ssh user and group (unison)
166+ ensure_permissions(CA_CERT_PATH, user=SSH_USER, group='keystone',
167+ perms=0o0644)
168
169 def canonical_names(self):
170 addresses = self.get_network_addresses()
171
172=== modified file 'hooks/keystone_hooks.py'
173--- hooks/keystone_hooks.py 2015-01-21 16:26:50 +0000
174+++ hooks/keystone_hooks.py 2015-01-21 16:51:08 +0000
175@@ -1,7 +1,9 @@
176 #!/usr/bin/python
177-
178 import hashlib
179+import json
180 import os
181+import re
182+import stat
183 import sys
184 import time
185
186@@ -16,6 +18,8 @@
187 is_relation_made,
188 log,
189 local_unit,
190+ DEBUG,
191+ WARNING,
192 ERROR,
193 relation_get,
194 relation_ids,
195@@ -48,9 +52,8 @@
196 get_admin_passwd,
197 migrate_database,
198 save_script_rc,
199- synchronize_ca,
200+ synchronize_ca_if_changed,
201 register_configs,
202- relation_list,
203 restart_map,
204 services,
205 CLUSTER_RES,
206@@ -58,12 +61,18 @@
207 SSH_USER,
208 setup_ipv6,
209 send_notifications,
210+ check_peer_actions,
211+ CA_CERT_PATH,
212+ ensure_permissions,
213+ get_ssl_sync_request_units,
214+ is_str_true,
215+ is_ssl_cert_master,
216 )
217
218 from charmhelpers.contrib.hahelpers.cluster import (
219- eligible_leader,
220- is_leader,
221+ is_elected_leader,
222 get_hacluster_config,
223+ peer_units,
224 )
225
226 from charmhelpers.payload.execd import execd_preinstall
227@@ -73,6 +82,7 @@
228 )
229 from charmhelpers.contrib.openstack.ip import (
230 ADMIN,
231+ PUBLIC,
232 resolve_address,
233 )
234 from charmhelpers.contrib.network.ip import (
235@@ -100,12 +110,14 @@
236
237 @hooks.hook('config-changed')
238 @restart_on_change(restart_map())
239+@synchronize_ca_if_changed()
240 def config_changed():
241 if config('prefer-ipv6'):
242 setup_ipv6()
243 sync_db_with_multi_ipv6_addresses(config('database'),
244 config('database-user'))
245
246+ unison.ensure_user(user=SSH_USER, group='juju_keystone')
247 unison.ensure_user(user=SSH_USER, group='keystone')
248 homedir = unison.get_homedir(SSH_USER)
249 if not os.path.isdir(homedir):
250@@ -116,25 +128,33 @@
251
252 check_call(['chmod', '-R', 'g+wrx', '/var/lib/keystone/'])
253
254+ # Ensure unison can write to certs dir.
255+ # FIXME: need to a better way around this e.g. move cert to it's own dir
256+ # and give that unison permissions.
257+ path = os.path.dirname(CA_CERT_PATH)
258+ perms = int(oct(stat.S_IMODE(os.stat(path).st_mode) |
259+ (stat.S_IWGRP | stat.S_IXGRP)), base=8)
260+ ensure_permissions(path, group='keystone', perms=perms)
261+
262 save_script_rc()
263 configure_https()
264 update_nrpe_config()
265 CONFIGS.write_all()
266- if eligible_leader(CLUSTER_RES):
267- migrate_database()
268- ensure_initial_admin(config)
269- log('Firing identity_changed hook for all related services.')
270- # HTTPS may have been set - so fire all identity relations
271- # again
272- for r_id in relation_ids('identity-service'):
273- for unit in relation_list(r_id):
274- identity_changed(relation_id=r_id,
275- remote_unit=unit)
276+
277+ # Update relations since SSL may have been configured. If we have peer
278+ # units we can rely on the sync to do this in cluster relation.
279+ if is_elected_leader(CLUSTER_RES) and not peer_units():
280+ update_all_identity_relation_units()
281
282 for rid in relation_ids('identity-admin'):
283 admin_relation_changed(rid)
284- for rid in relation_ids('cluster'):
285- cluster_joined(rid)
286+
287+ # Ensure sync request is sent out (needed for upgrade to ssl from non-ssl)
288+ settings = {}
289+ append_ssl_sync_request(settings)
290+ if settings:
291+ for rid in relation_ids('cluster'):
292+ relation_set(relation_id=rid, relation_settings=settings)
293
294
295 @hooks.hook('shared-db-relation-joined')
296@@ -167,14 +187,35 @@
297 relation_set(database=config('database'))
298
299
300+def update_all_identity_relation_units():
301+ CONFIGS.write_all()
302+ try:
303+ migrate_database()
304+ except Exception as exc:
305+ log("Database initialisation failed (%s) - db not ready?" % (exc),
306+ level=WARNING)
307+ else:
308+ ensure_initial_admin(config)
309+ log('Firing identity_changed hook for all related services.')
310+ for rid in relation_ids('identity-service'):
311+ for unit in related_units(rid):
312+ identity_changed(relation_id=rid, remote_unit=unit)
313+
314+
315+@synchronize_ca_if_changed(force=True)
316+def update_all_identity_relation_units_force_sync():
317+ update_all_identity_relation_units()
318+
319+
320 @hooks.hook('shared-db-relation-changed')
321 @restart_on_change(restart_map())
322+@synchronize_ca_if_changed()
323 def db_changed():
324 if 'shared-db' not in CONFIGS.complete_contexts():
325 log('shared-db relation incomplete. Peer not ready?')
326 else:
327 CONFIGS.write(KEYSTONE_CONF)
328- if eligible_leader(CLUSTER_RES):
329+ if is_elected_leader(CLUSTER_RES):
330 # Bugs 1353135 & 1187508. Dbs can appear to be ready before the
331 # units acl entry has been added. So, if the db supports passing
332 # a list of permitted units then check if we're in the list.
333@@ -182,38 +223,46 @@
334 if allowed_units and local_unit() not in allowed_units.split():
335 log('Allowed_units list provided and this unit not present')
336 return
337- migrate_database()
338- ensure_initial_admin(config)
339 # Ensure any existing service entries are updated in the
340 # new database backend
341- for rid in relation_ids('identity-service'):
342- for unit in related_units(rid):
343- identity_changed(relation_id=rid, remote_unit=unit)
344+ update_all_identity_relation_units()
345
346
347 @hooks.hook('pgsql-db-relation-changed')
348 @restart_on_change(restart_map())
349+@synchronize_ca_if_changed()
350 def pgsql_db_changed():
351 if 'pgsql-db' not in CONFIGS.complete_contexts():
352 log('pgsql-db relation incomplete. Peer not ready?')
353 else:
354 CONFIGS.write(KEYSTONE_CONF)
355- if eligible_leader(CLUSTER_RES):
356- migrate_database()
357- ensure_initial_admin(config)
358+ if is_elected_leader(CLUSTER_RES):
359 # Ensure any existing service entries are updated in the
360 # new database backend
361- for rid in relation_ids('identity-service'):
362- for unit in related_units(rid):
363- identity_changed(relation_id=rid, remote_unit=unit)
364+ update_all_identity_relation_units()
365
366
367 @hooks.hook('identity-service-relation-changed')
368+@synchronize_ca_if_changed()
369 def identity_changed(relation_id=None, remote_unit=None):
370+ CONFIGS.write_all()
371+
372 notifications = {}
373- if eligible_leader(CLUSTER_RES):
374- add_service_to_keystone(relation_id, remote_unit)
375- synchronize_ca()
376+ if is_elected_leader(CLUSTER_RES):
377+ # Catch database not configured error and defer until db ready
378+ from keystoneclient.apiclient.exceptions import InternalServerError
379+ try:
380+ add_service_to_keystone(relation_id, remote_unit)
381+ except InternalServerError as exc:
382+ key = re.compile("'keystone\..+' doesn't exist")
383+ if re.search(key, exc.message):
384+ log("Keystone database not yet ready (InternalServerError "
385+ "raised) - deferring until *-db relation completes.",
386+ level=WARNING)
387+ return
388+
389+ log("Unexpected exception occurred", level=ERROR)
390+ raise
391
392 settings = relation_get(rid=relation_id, unit=remote_unit)
393 service = settings.get('service', None)
394@@ -241,46 +290,113 @@
395 send_notifications(notifications)
396
397
398+def append_ssl_sync_request(settings):
399+ """Add request to be synced to relation settings.
400+
401+ This will be consumed by cluster-relation-changed ssl master.
402+ """
403+ if (is_str_true(config('use-https')) or
404+ is_str_true(config('https-service-endpoints'))):
405+ unit = local_unit().replace('/', '-')
406+ settings['ssl-sync-required-%s' % (unit)] = '1'
407+
408+
409 @hooks.hook('cluster-relation-joined')
410-def cluster_joined(relation_id=None):
411+def cluster_joined():
412 unison.ssh_authorized_peers(user=SSH_USER,
413 group='juju_keystone',
414 peer_interface='cluster',
415 ensure_local_user=True)
416+
417+ settings = {}
418+
419 for addr_type in ADDRESS_TYPES:
420 address = get_address_in_network(
421 config('os-{}-network'.format(addr_type))
422 )
423 if address:
424- relation_set(
425- relation_id=relation_id,
426- relation_settings={'{}-address'.format(addr_type): address}
427- )
428+ settings['{}-address'.format(addr_type)] = address
429
430 if config('prefer-ipv6'):
431 private_addr = get_ipv6_addr(exc_list=[config('vip')])[0]
432- relation_set(relation_id=relation_id,
433- relation_settings={'private-address': private_addr})
434+ settings['private-address'] = private_addr
435+
436+ append_ssl_sync_request(settings)
437+
438+ relation_set(relation_settings=settings)
439+
440+
441+def apply_echo_filters(settings, echo_whitelist):
442+ """Filter settings to be peer_echo'ed.
443+
444+ We may have received some data that we don't want to re-echo so filter
445+ out unwanted keys and provide overrides.
446+
447+ Returns:
448+ tuple(filtered list of keys to be echoed, overrides for keys omitted)
449+ """
450+ filtered = []
451+ overrides = {}
452+ for key in settings.iterkeys():
453+ for ekey in echo_whitelist:
454+ if ekey in key:
455+ if ekey == 'identity-service:':
456+ auth_host = resolve_address(ADMIN)
457+ service_host = resolve_address(PUBLIC)
458+ if (key.endswith('auth_host') and
459+ settings[key] != auth_host):
460+ overrides[key] = auth_host
461+ continue
462+ elif (key.endswith('service_host') and
463+ settings[key] != service_host):
464+ overrides[key] = service_host
465+ continue
466+
467+ filtered.append(key)
468+
469+ return filtered, overrides
470
471
472 @hooks.hook('cluster-relation-changed',
473 'cluster-relation-departed')
474 @restart_on_change(restart_map(), stopstart=True)
475 def cluster_changed():
476+ settings = relation_get()
477 # NOTE(jamespage) re-echo passwords for peer storage
478- peer_echo(includes=['_passwd', 'identity-service:'])
479+ echo_whitelist, overrides = \
480+ apply_echo_filters(settings, ['_passwd', 'identity-service:',
481+ 'ssl-cert-master'])
482+ log("Peer echo overrides: %s" % (overrides), level=DEBUG)
483+ relation_set(**overrides)
484+ if echo_whitelist:
485+ log("Peer echo whitelist: %s" % (echo_whitelist), level=DEBUG)
486+ peer_echo(includes=echo_whitelist)
487+
488+ check_peer_actions()
489 unison.ssh_authorized_peers(user=SSH_USER,
490 group='keystone',
491 peer_interface='cluster',
492 ensure_local_user=True)
493- synchronize_ca()
494- CONFIGS.write_all()
495- for r_id in relation_ids('identity-service'):
496- for unit in relation_list(r_id):
497- identity_changed(relation_id=r_id,
498- remote_unit=unit)
499- for rid in relation_ids('identity-admin'):
500- admin_relation_changed(rid)
501+
502+ if is_elected_leader(CLUSTER_RES) or is_ssl_cert_master():
503+ units = get_ssl_sync_request_units()
504+ synced_units = relation_get(attribute='ssl-synced-units',
505+ unit=local_unit())
506+ if synced_units:
507+ synced_units = json.loads(synced_units)
508+ diff = set(units).symmetric_difference(set(synced_units))
509+
510+ if units and (not synced_units or diff):
511+ log("New peers joined and need syncing - %s" %
512+ (', '.join(units)), level=DEBUG)
513+ update_all_identity_relation_units_force_sync()
514+ else:
515+ update_all_identity_relation_units()
516+
517+ for rid in relation_ids('identity-admin'):
518+ admin_relation_changed(rid)
519+ else:
520+ CONFIGS.write_all()
521
522
523 @hooks.hook('ha-relation-joined')
524@@ -320,7 +436,7 @@
525 vip_group.append(vip_key)
526
527 if len(vip_group) >= 1:
528- relation_set(groups={'grp_ks_vips': ' '.join(vip_group)})
529+ relation_set(groups={CLUSTER_RES: ' '.join(vip_group)})
530
531 init_services = {
532 'res_ks_haproxy': 'haproxy'
533@@ -338,17 +454,17 @@
534
535 @hooks.hook('ha-relation-changed')
536 @restart_on_change(restart_map())
537+@synchronize_ca_if_changed()
538 def ha_changed():
539+ CONFIGS.write_all()
540+
541 clustered = relation_get('clustered')
542- CONFIGS.write_all()
543- if (clustered is not None and
544- is_leader(CLUSTER_RES)):
545+ if clustered and is_elected_leader(CLUSTER_RES):
546 ensure_initial_admin(config)
547 log('Cluster configured, notifying other services and updating '
548 'keystone endpoint configuration')
549- for rid in relation_ids('identity-service'):
550- for unit in related_units(rid):
551- identity_changed(relation_id=rid, remote_unit=unit)
552+
553+ update_all_identity_relation_units()
554
555
556 @hooks.hook('identity-admin-relation-changed')
557@@ -365,6 +481,7 @@
558 relation_set(relation_id=relation_id, **relation_data)
559
560
561+@synchronize_ca_if_changed(fatal=True)
562 def configure_https():
563 '''
564 Enables SSL API Apache config if appropriate and kicks identity-service
565@@ -383,25 +500,22 @@
566
567 @hooks.hook('upgrade-charm')
568 @restart_on_change(restart_map(), stopstart=True)
569+@synchronize_ca_if_changed()
570 def upgrade_charm():
571 apt_install(filter_installed_packages(determine_packages()))
572 unison.ssh_authorized_peers(user=SSH_USER,
573 group='keystone',
574 peer_interface='cluster',
575 ensure_local_user=True)
576+
577+ CONFIGS.write_all()
578 update_nrpe_config()
579- synchronize_ca()
580- if eligible_leader(CLUSTER_RES):
581- log('Cluster leader - ensuring endpoint configuration'
582- ' is up to date')
583+
584+ if is_elected_leader(CLUSTER_RES):
585+ log('Cluster leader - ensuring endpoint configuration is up to '
586+ 'date', level=DEBUG)
587 time.sleep(10)
588- ensure_initial_admin(config)
589- # Deal with interface changes for icehouse
590- for r_id in relation_ids('identity-service'):
591- for unit in relation_list(r_id):
592- identity_changed(relation_id=r_id,
593- remote_unit=unit)
594- CONFIGS.write_all()
595+ update_all_identity_relation_units()
596
597
598 @hooks.hook('nrpe-external-master-relation-joined',
599
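The cluster_changed logic above decides whether to force a full resync by comparing the set of units that requested an ssl sync against the set recorded in 'ssl-synced-units'. A minimal standalone sketch of that set comparison (function and unit names here are illustrative, not the charm's API):

```python
import json

def needs_force_sync(requesting_units, synced_units_json):
    """Return True if any unit that requested an ssl sync has not yet
    been synced (mirrors the symmetric_difference check in
    cluster_changed)."""
    synced = set(json.loads(synced_units_json)) if synced_units_json else set()
    requesting = set(requesting_units)
    # A non-empty symmetric difference means at least one unit is out of date.
    return bool(requesting) and (not synced or bool(requesting ^ synced))

print(needs_force_sync(['keystone-1', 'keystone-2'], '["keystone-1"]'))  # True
print(needs_force_sync(['keystone-1'], '["keystone-1"]'))  # False
```

Note that the recorded value is JSON-encoded on the relation, hence the json.loads before comparing.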
600=== modified file 'hooks/keystone_ssl.py'
601--- hooks/keystone_ssl.py 2014-07-02 07:55:44 +0000
602+++ hooks/keystone_ssl.py 2015-01-21 16:51:08 +0000
603@@ -5,6 +5,13 @@
604 import subprocess
605 import tarfile
606 import tempfile
607+import time
608+
609+from charmhelpers.core.hookenv import (
610+ log,
611+ DEBUG,
612+ WARNING,
613+)
614
615 CA_EXPIRY = '365'
616 ORG_NAME = 'Ubuntu'
617@@ -101,6 +108,9 @@
618 extendedKeyUsage = serverAuth, clientAuth
619 """
620
621+# An instance can be appended to this list to represent a singleton
622+CA_SINGLETON = []
623+
624
625 def init_ca(ca_dir, common_name, org_name=ORG_NAME, org_unit_name=ORG_UNIT):
626 print 'Ensuring certificate authority exists at %s.' % ca_dir
627@@ -275,23 +285,42 @@
628 crt = self._sign_csr(csr, service, common_name)
629 cmd = ['chown', '-R', '%s.%s' % (self.user, self.group), self.ca_dir]
630 subprocess.check_call(cmd)
631- print 'Signed new CSR, crt @ %s' % crt
632+ log('Signed new CSR, crt @ %s' % crt, level=DEBUG)
633 return crt, key
634
635 def get_cert_and_key(self, common_name):
636- print 'Getting certificate and key for %s.' % common_name
637- key = os.path.join(self.ca_dir, 'certs', '%s.key' % common_name)
638- crt = os.path.join(self.ca_dir, 'certs', '%s.crt' % common_name)
639- if os.path.isfile(crt):
640- print 'Found existing certificate for %s.' % common_name
641- crt = open(crt, 'r').read()
642- try:
643- key = open(key, 'r').read()
644- except:
645- print 'Could not load ssl private key for %s from %s' %\
646- (common_name, key)
647- exit(1)
648- return crt, key
649+ log('Getting certificate and key for %s.' % common_name, level=DEBUG)
650+ keypath = os.path.join(self.ca_dir, 'certs', '%s.key' % common_name)
651+ crtpath = os.path.join(self.ca_dir, 'certs', '%s.crt' % common_name)
652+ if os.path.isfile(crtpath):
653+ log('Found existing certificate for %s.' % common_name,
654+ level=DEBUG)
655+ max_retries = 3
656+ while True:
657+ mtime = os.path.getmtime(crtpath)
658+
659+ crt = open(crtpath, 'r').read()
660+ try:
661+ key = open(keypath, 'r').read()
662+ except:
663+ msg = ('Could not load ssl private key for %s from %s' %
664+ (common_name, keypath))
665+ raise Exception(msg)
666+
667+ # Ensure we are not reading a file that is being written to
668+ if mtime != os.path.getmtime(crtpath):
669+ max_retries -= 1
670+ if max_retries == 0:
671+ msg = ("crt contents changed during read - retry "
672+ "failed")
673+ raise Exception(msg)
674+
675+ log("crt contents changed during read - re-reading",
676+ level=WARNING)
677+ time.sleep(1)
678+ else:
679+ return crt, key
680+
681 crt, key = self._create_certificate(common_name, common_name)
682 return open(crt, 'r').read(), open(key, 'r').read()
683
684
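The new get_cert_and_key above guards against reading the certificate while a peer sync is rewriting it, by comparing the file's mtime before and after the read and retrying on mismatch. A runnable sketch of that stable-read pattern (file contents and helper name are hypothetical):

```python
import os
import tempfile
import time

def read_stable(path, max_retries=3, delay=0):
    """Read a file, re-reading if its mtime changes mid-read (i.e. a
    writer raced us) - the same guard get_cert_and_key applies to the
    crt file."""
    while True:
        mtime = os.path.getmtime(path)
        with open(path, 'r') as fd:
            data = fd.read()
        # Ensure we did not read a file that was being written to.
        if os.path.getmtime(path) == mtime:
            return data
        max_retries -= 1
        if max_retries == 0:
            raise Exception("contents changed during read - retry failed")
        time.sleep(delay)

# Demo against a throwaway file
tf = tempfile.NamedTemporaryFile('w', delete=False)
tf.write('-----BEGIN CERTIFICATE-----')
tf.close()
assert read_stable(tf.name).startswith('-----BEGIN')
os.unlink(tf.name)
```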
685=== modified file 'hooks/keystone_utils.py'
686--- hooks/keystone_utils.py 2015-01-19 10:45:41 +0000
687+++ hooks/keystone_utils.py 2015-01-21 16:51:08 +0000
688@@ -1,20 +1,27 @@
689 #!/usr/bin/python
690+import glob
691+import grp
692+import hashlib
693+import json
694+import os
695+import pwd
696+import re
697 import subprocess
698-import os
699+import threading
700+import time
701+import urlparse
702 import uuid
703-import urlparse
704-import time
705
706 from base64 import b64encode
707 from collections import OrderedDict
708 from copy import deepcopy
709
710 from charmhelpers.contrib.hahelpers.cluster import(
711- eligible_leader,
712+ is_elected_leader,
713 determine_api_port,
714 https,
715- is_clustered,
716- is_elected_leader,
717+ peer_units,
718+ oldest_peer,
719 )
720
721 from charmhelpers.contrib.openstack import context, templating
722@@ -37,8 +44,17 @@
723 os_release,
724 save_script_rc as _save_script_rc)
725
726+from charmhelpers.core.host import (
727+ mkdir,
728+ write_file,
729+)
730+
731 import charmhelpers.contrib.unison as unison
732
733+from charmhelpers.core.decorators import (
734+ retry_on_exception,
735+)
736+
737 from charmhelpers.core.hookenv import (
738 config,
739 is_relation_made,
740@@ -47,8 +63,11 @@
741 relation_get,
742 relation_set,
743 relation_ids,
744+ related_units,
745 DEBUG,
746 INFO,
747+ WARNING,
748+ ERROR,
749 )
750
751 from charmhelpers.fetch import (
752@@ -61,6 +80,7 @@
753 from charmhelpers.core.host import (
754 service_stop,
755 service_start,
756+ service_restart,
757 pwgen,
758 lsb_release
759 )
760@@ -110,10 +130,13 @@
761 APACHE_CONF = '/etc/apache2/sites-available/openstack_https_frontend'
762 APACHE_24_CONF = '/etc/apache2/sites-available/openstack_https_frontend.conf'
763
764+APACHE_SSL_DIR = '/etc/apache2/ssl/keystone'
765+SYNC_FLAGS_DIR = '/var/lib/keystone/juju_sync_flags/'
766 SSL_DIR = '/var/lib/keystone/juju_ssl/'
767 SSL_CA_NAME = 'Ubuntu Cloud'
768 CLUSTER_RES = 'grp_ks_vips'
769 SSH_USER = 'juju_keystone'
770+SSL_SYNC_SEMAPHORE = threading.Semaphore()
771
772 BASE_RESOURCE_MAP = OrderedDict([
773 (KEYSTONE_CONF, {
774@@ -203,6 +226,13 @@
775 }
776
777
778+def is_str_true(value):
779+ if value and value.lower() in ['true', 'yes']:
780+ return True
781+
782+ return False
783+
784+
785 def resource_map():
786 '''
787 Dynamically generate a map of resources that will be managed for a single
788@@ -287,7 +317,7 @@
789 configs.set_release(openstack_release=new_os_rel)
790 configs.write_all()
791
792- if eligible_leader(CLUSTER_RES):
793+ if is_elected_leader(CLUSTER_RES):
794 migrate_database()
795
796
797@@ -389,7 +419,7 @@
798
799 up_to_date = True
800 for k in ['publicurl', 'adminurl', 'internalurl']:
801- if ep[k] != locals()[k]:
802+ if ep.get(k) != locals()[k]:
803 up_to_date = False
804
805 if up_to_date:
806@@ -500,7 +530,7 @@
807 if passwd and passwd.lower() != "none":
808 return passwd
809
810- if eligible_leader(CLUSTER_RES):
811+ if is_elected_leader(CLUSTER_RES):
812 if os.path.isfile(STORED_PASSWD):
813 log("Loading stored passwd from %s" % STORED_PASSWD, level=INFO)
814 with open(STORED_PASSWD, 'r') as fd:
815@@ -527,33 +557,47 @@
816
817
818 def ensure_initial_admin(config):
819- """ Ensures the minimum admin stuff exists in whatever database we're
820+ # Allow retry on fail since leader may not be ready yet.
821+ # NOTE(hopem): ks client may not be installed at module import time so we
822+ # use this wrapped approach instead.
823+ from keystoneclient.apiclient.exceptions import InternalServerError
824+
825+ @retry_on_exception(3, base_delay=3, exc_type=InternalServerError)
826+ def _ensure_initial_admin(config):
827+ """Ensures the minimum admin stuff exists in whatever database we're
828 using.
829+
830 This and the helper functions it calls are meant to be idempotent and
831 run during install as well as during db-changed. This will maintain
832 the admin tenant, user, role, service entry and endpoint across every
833 datastore we might use.
834+
835 TODO: Possibly migrate data from one backend to another after it
836 changes?
837- """
838- create_tenant("admin")
839- create_tenant(config("service-tenant"))
840- # User is managed by ldap backend when using ldap identity
841- if not (config('identity-backend') == 'ldap' and config('ldap-readonly')):
842- passwd = get_admin_passwd()
843- if passwd:
844- create_user(config('admin-user'), passwd, tenant='admin')
845- update_user_password(config('admin-user'), passwd)
846- create_role(config('admin-role'), config('admin-user'), 'admin')
847- create_service_entry("keystone", "identity", "Keystone Identity Service")
848-
849- for region in config('region').split():
850- create_keystone_endpoint(public_ip=resolve_address(PUBLIC),
851- service_port=config("service-port"),
852- internal_ip=resolve_address(INTERNAL),
853- admin_ip=resolve_address(ADMIN),
854- auth_port=config("admin-port"),
855- region=region)
856+ """
857+ create_tenant("admin")
858+ create_tenant(config("service-tenant"))
859+ # User is managed by ldap backend when using ldap identity
860+ if not (config('identity-backend') ==
861+ 'ldap' and config('ldap-readonly')):
862+ passwd = get_admin_passwd()
863+ if passwd:
864+ create_user(config('admin-user'), passwd, tenant='admin')
865+ update_user_password(config('admin-user'), passwd)
866+ create_role(config('admin-role'), config('admin-user'),
867+ 'admin')
868+ create_service_entry("keystone", "identity",
869+ "Keystone Identity Service")
870+
871+ for region in config('region').split():
872+ create_keystone_endpoint(public_ip=resolve_address(PUBLIC),
873+ service_port=config("service-port"),
874+ internal_ip=resolve_address(INTERNAL),
875+ admin_ip=resolve_address(ADMIN),
876+ auth_port=config("admin-port"),
877+ region=region)
878+
879+ return _ensure_initial_admin(config)
880
881
882 def endpoint_url(ip, port):
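The hunk above wraps _ensure_initial_admin with charmhelpers' retry_on_exception so a not-yet-ready leader causes a retried call rather than a hook failure. A simplified, self-contained stand-in for that decorator (the real charmhelpers implementation may differ in detail):

```python
import time

def retry_on_exception(num_retries, base_delay=0, exc_type=Exception):
    """Retry the wrapped call on exc_type, doubling the delay between
    attempts; re-raise if the final attempt still fails."""
    def wrap(f):
        def wrapped(*args, **kwargs):
            delay = base_delay
            for attempt in range(num_retries):
                try:
                    return f(*args, **kwargs)
                except exc_type:
                    if attempt == num_retries - 1:
                        raise
                    time.sleep(delay)
                    delay *= 2
        return wrapped
    return wrap

calls = []

@retry_on_exception(3, base_delay=0, exc_type=RuntimeError)
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("leader not ready yet")
    return "ok"

print(flaky())  # prints "ok" after two failed attempts
```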
883@@ -621,20 +665,357 @@
884 return passwd
885
886
887-def synchronize_ca():
888- '''
889- Broadcast service credentials to peers or consume those that have been
890- broadcasted by peer, depending on hook context.
891- '''
892- if not eligible_leader(CLUSTER_RES):
893- return
894- log('Synchronizing CA to all peers.')
895- if is_clustered():
896- if config('https-service-endpoints') in ['True', 'true']:
897- unison.sync_to_peers(peer_interface='cluster',
898- paths=[SSL_DIR], user=SSH_USER, verbose=True)
899-
900-CA = []
901+def ensure_permissions(path, user=None, group=None, perms=None):
902+ """Set chown and chmod for path
903+
904+ Note that -1 for uid or gid results in no change.
905+ """
906+ if user:
907+ uid = pwd.getpwnam(user).pw_uid
908+ else:
909+ uid = -1
910+
911+ if group:
912+ gid = grp.getgrnam(group).gr_gid
913+ else:
914+ gid = -1
915+
916+ os.chown(path, uid, gid)
917+
918+ if perms:
919+ os.chmod(path, perms)
920+
921+
922+def check_peer_actions():
923+ """Honour service action requests from sync master.
924+
925+ Check for service action request flags, perform the action then delete the
926+ flag.
927+ """
928+ restart = relation_get(attribute='restart-services-trigger')
929+ if restart and os.path.isdir(SYNC_FLAGS_DIR):
930+ for flagfile in glob.glob(os.path.join(SYNC_FLAGS_DIR, '*')):
931+ flag = os.path.basename(flagfile)
932+ key = re.compile("^(.+)?\.(.+)?\.(.+)")
933+ res = re.search(key, flag)
934+ if res:
935+ source = res.group(1)
936+ service = res.group(2)
937+ action = res.group(3)
938+ else:
939+ key = re.compile("^(.+)?\.(.+)?")
940+ res = re.search(key, flag)
941+ source = res.group(1)
942+ action = res.group(2)
943+
944+ # Don't execute actions requested by this unit.
945+ if local_unit().replace('.', '-') != source:
946+ if action == 'restart':
947+ log("Running action='%s' on service '%s'" %
948+ (action, service), level=DEBUG)
949+ service_restart(service)
950+ elif action == 'start':
951+ log("Running action='%s' on service '%s'" %
952+ (action, service), level=DEBUG)
953+ service_start(service)
954+ elif action == 'stop':
955+ log("Running action='%s' on service '%s'" %
956+ (action, service), level=DEBUG)
957+ service_stop(service)
958+ elif action == 'update-ca-certificates':
959+ log("Running %s" % (action), level=DEBUG)
960+ subprocess.check_call(['update-ca-certificates'])
961+ else:
962+ log("Unknown action flag=%s" % (flag), level=WARNING)
963+
964+ try:
965+ os.remove(flagfile)
966+ except:
967+ pass
968+
969+
970+def create_peer_service_actions(action, services):
971+ """Mark remote services for action.
972+
973+ Default action is restart. These actions will be picked up by peer units
974+ e.g. we may need to restart services on peer units after certs have been
975+ synced.
976+ """
977+ for service in services:
978+ flagfile = os.path.join(SYNC_FLAGS_DIR, '%s.%s.%s' %
979+ (local_unit().replace('/', '-'),
980+ service.strip(), action))
981+ log("Creating action %s" % (flagfile), level=DEBUG)
982+ write_file(flagfile, content='', owner=SSH_USER, group='keystone',
983+ perms=0o644)
984+
985+
986+def create_peer_actions(actions):
987+ for action in actions:
988+ action = "%s.%s" % (local_unit().replace('/', '-'), action)
989+ flagfile = os.path.join(SYNC_FLAGS_DIR, action)
990+ log("Creating action %s" % (flagfile), level=DEBUG)
991+ write_file(flagfile, content='', owner=SSH_USER, group='keystone',
992+ perms=0o644)
993+
994+
995+@retry_on_exception(3, base_delay=2, exc_type=subprocess.CalledProcessError)
996+def unison_sync(paths_to_sync):
997+ """Do unison sync and retry a few times if it fails since peers may not be
998+ ready for sync.
999+ """
1000+ log('Synchronizing CA (%s) to all peers.' % (', '.join(paths_to_sync)),
1001+ level=INFO)
1002+ keystone_gid = grp.getgrnam('keystone').gr_gid
1003+ unison.sync_to_peers(peer_interface='cluster', paths=paths_to_sync,
1004+ user=SSH_USER, verbose=True, gid=keystone_gid,
1005+ fatal=True)
1006+
1007+
1008+def get_ssl_sync_request_units():
1009+ """Get list of units that have requested to be synced.
1010+
1011+ NOTE: this must be called from cluster relation context.
1012+ """
1013+ units = []
1014+ for unit in related_units():
1015+ settings = relation_get(unit=unit) or {}
1016+ rkeys = settings.keys()
1017+ key = re.compile("^ssl-sync-required-(.+)")
1018+ for rkey in rkeys:
1019+ res = re.search(key, rkey)
1020+ if res:
1021+ units.append(res.group(1))
1022+
1023+ return units
1024+
1025+
1026+def is_ssl_cert_master():
1027+ """Return True if this unit is ssl cert master."""
1028+ master = None
1029+ for rid in relation_ids('cluster'):
1030+ master = relation_get(attribute='ssl-cert-master', rid=rid,
1031+ unit=local_unit())
1032+
1033+ return master == local_unit()
1034+
1035+
1036+def ensure_ssl_cert_master(use_oldest_peer=False):
1037+ """Ensure that an ssl cert master has been elected.
1038+
1039+ Normally the cluster leader will take control but we allow for this to be
1040+ ignored since this could be called before the cluster is ready.
1041+ """
1042+ # Don't do anything if we are not in ssl/https mode
1043+ if not (is_str_true(config('use-https')) or
1044+ is_str_true(config('https-service-endpoints'))):
1045+ log("SSL/HTTPS is NOT enabled", level=DEBUG)
1046+ return False
1047+
1048+ if not peer_units():
1049+ log("Not syncing certs since there are no peer units.", level=INFO)
1050+ return False
1051+
1052+ if use_oldest_peer:
1053+ elect = oldest_peer(peer_units())
1054+ else:
1055+ elect = is_elected_leader(CLUSTER_RES)
1056+
1057+ if elect:
1058+ masters = []
1059+ for rid in relation_ids('cluster'):
1060+ for unit in related_units(rid):
1061+ m = relation_get(rid=rid, unit=unit,
1062+ attribute='ssl-cert-master')
1063+ if m is not None:
1064+ masters.append(m)
1065+
1066+ # We expect all peers to echo this setting
1067+ if not masters or 'unknown' in masters:
1068+ log("Notifying peers this unit is ssl-cert-master", level=INFO)
1069+ for rid in relation_ids('cluster'):
1070+ settings = {'ssl-cert-master': local_unit()}
1071+ relation_set(relation_id=rid, relation_settings=settings)
1072+
1073+ # Return now and wait for cluster-relation-changed (peer_echo) for
1074+ # sync.
1075+ return False
1076+ elif len(set(masters)) != 1 and local_unit() not in masters:
1077+ log("Did not get consensus from peers on who is master (%s) - "
1078+ "waiting for current master to release before self-electing" %
1079+ (masters), level=INFO)
1080+ return False
1081+
1082+ if not is_ssl_cert_master():
1083+ log("Not ssl cert master - skipping sync", level=INFO)
1084+ return False
1085+
1086+ return True
1087+
1088+
1089+def synchronize_ca(fatal=False):
1090+ """Broadcast service credentials to peers.
1091+
1092+ If fatal=True, a failure to sync will result in a raised
1093+ exception.
1094+
1095+ This function uses a relation setting 'ssl-cert-master' to get some
1096+ leader stickiness while synchronisation is being carried out. This ensures
1097+ that the last host to create and broadcast certificates has the option to
1098+ complete actions before electing the new leader as sync master.
1099+ """
1100+ paths_to_sync = [SYNC_FLAGS_DIR]
1101+
1102+ if is_str_true(config('https-service-endpoints')):
1103+ log("Syncing all endpoint certs since https-service-endpoints=True",
1104+ level=DEBUG)
1105+ paths_to_sync.append(SSL_DIR)
1106+ paths_to_sync.append(APACHE_SSL_DIR)
1107+ paths_to_sync.append(CA_CERT_PATH)
1108+ elif is_str_true(config('use-https')):
1109+ log("Syncing keystone-endpoint certs since use-https=True",
1110+ level=DEBUG)
1111+ paths_to_sync.append(APACHE_SSL_DIR)
1112+ paths_to_sync.append(CA_CERT_PATH)
1113+
1114+ if not paths_to_sync:
1115+ log("Nothing to sync - skipping", level=DEBUG)
1116+ return
1117+
1118+ if not os.path.isdir(SYNC_FLAGS_DIR):
1119+ mkdir(SYNC_FLAGS_DIR, SSH_USER, 'keystone', 0o775)
1120+
1121+ # We need to restart peer apache services to ensure they have picked up
1122+ # new ssl keys.
1123+ create_peer_service_actions('restart', ['apache2'])
1124+ create_peer_actions(['update-ca-certificates'])
1125+
1126+ # Format here needs to match that used when peers request sync
1127+ synced_units = [unit.replace('/', '-') for unit in peer_units()]
1128+
1129+ retries = 3
1130+ while True:
1131+ hash1 = hashlib.sha256()
1132+ for path in paths_to_sync:
1133+ update_hash_from_path(hash1, path)
1134+
1135+ try:
1136+ unison_sync(paths_to_sync)
1137+ except:
1138+ if fatal:
1139+ raise
1140+ else:
1141+ log("Sync failed but fatal=False", level=INFO)
1142+ return
1143+
1144+ hash2 = hashlib.sha256()
1145+ for path in paths_to_sync:
1146+ update_hash_from_path(hash2, path)
1147+
1148+ # Detect whether someone else has synced to this unit while we did our
1149+ # transfer.
1150+ if hash1.hexdigest() != hash2.hexdigest():
1151+ retries -= 1
1152+ if retries > 0:
1153+ log("SSL dir contents changed during sync - retrying unison "
1154+ "sync %s more times" % (retries), level=WARNING)
1155+ else:
1156+ log("SSL dir contents changed during sync - retries failed",
1157+ level=ERROR)
1158+ return {}
1159+ else:
1160+ break
1161+
1162+ hash = hash1.hexdigest()
1163+ log("Sending restart-services-trigger=%s to all peers" % (hash),
1164+ level=DEBUG)
1165+
1166+ log("Sync complete", level=DEBUG)
1167+ return {'restart-services-trigger': hash,
1168+ 'ssl-synced-units': json.dumps(synced_units)}
1169+
1170+
1171+def update_hash_from_path(hash, path, recurse_depth=10):
1172+ """Recurse through path and update the provided hash for every file found.
1173+ """
1174+ if not recurse_depth:
1175+ log("Max recursion depth (%s) reached for update_hash_from_path() at "
1176+ "path='%s' - not going any deeper" % (recurse_depth, path),
1177+ level=WARNING)
1178+ return
1179+
1180+ for p in glob.glob("%s/*" % path):
1181+ if os.path.isdir(p):
1182+ update_hash_from_path(hash, p, recurse_depth=recurse_depth - 1)
1183+ else:
1184+ with open(p, 'r') as fd:
1185+ hash.update(fd.read())
1186+
1187+
1188+def synchronize_ca_if_changed(force=False, fatal=False):
1189+ """Decorator to perform ssl cert sync if decorated function modifies them
1190+ in any way.
1191+
1192+ If force is True a sync is done regardless.
1193+ """
1194+ def inner_synchronize_ca_if_changed1(f):
1195+ def inner_synchronize_ca_if_changed2(*args, **kwargs):
1196+ # Only sync master can do sync. Ensure (a) we are not nested and
1197+ # (b) a master is elected and we are it.
1198+ acquired = SSL_SYNC_SEMAPHORE.acquire(blocking=0)
1199+ try:
1200+ if not acquired:
1201+ log("Nested sync - ignoring", level=DEBUG)
1202+ return f(*args, **kwargs)
1203+
1204+ if not ensure_ssl_cert_master():
1205+ log("Not leader - ignoring sync", level=DEBUG)
1206+ return f(*args, **kwargs)
1207+
1208+ peer_settings = {}
1209+ if not force:
1210+ ssl_dirs = [SSL_DIR, APACHE_SSL_DIR, CA_CERT_PATH]
1211+
1212+ hash1 = hashlib.sha256()
1213+ for path in ssl_dirs:
1214+ update_hash_from_path(hash1, path)
1215+
1216+ ret = f(*args, **kwargs)
1217+
1218+ hash2 = hashlib.sha256()
1219+ for path in ssl_dirs:
1220+ update_hash_from_path(hash2, path)
1221+
1222+ if hash1.hexdigest() != hash2.hexdigest():
1223+ log("SSL certs have changed - syncing peers",
1224+ level=DEBUG)
1225+ peer_settings = synchronize_ca(fatal=fatal)
1226+ else:
1227+ log("SSL certs have not changed - skipping sync",
1228+ level=DEBUG)
1229+ else:
1230+ ret = f(*args, **kwargs)
1231+ log("Doing forced ssl cert sync", level=DEBUG)
1232+ peer_settings = synchronize_ca(fatal=fatal)
1233+
1234+ # If we are the sync master but not leader, ensure we have
1235+ # relinquished master status.
1236+ if not is_elected_leader(CLUSTER_RES):
1237+ log("Re-electing ssl cert master.", level=INFO)
1238+ peer_settings['ssl-cert-master'] = 'unknown'
1239+
1240+ if peer_settings:
1241+ for rid in relation_ids('cluster'):
1242+ relation_set(relation_id=rid,
1243+ relation_settings=peer_settings)
1244+
1245+ return ret
1246+ finally:
1247+ SSL_SYNC_SEMAPHORE.release()
1248+
1249+ return inner_synchronize_ca_if_changed2
1250+
1251+ return inner_synchronize_ca_if_changed1
1252
1253
1254 def get_ca(user='keystone', group='keystone'):
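The synchronize_ca_if_changed decorator above hashes the ssl directories before and after the wrapped hook runs and only triggers a peer sync when the digests differ. A condensed sketch of that hash-compare pattern, with the charm's paths and relation plumbing replaced by placeholders:

```python
import hashlib
import os
import tempfile

def hash_tree(paths):
    """Digest every file under the given paths (flat analogue of
    update_hash_from_path)."""
    h = hashlib.sha256()
    for path in paths:
        for root, _, files in os.walk(path):
            for name in sorted(files):
                with open(os.path.join(root, name), 'rb') as fd:
                    h.update(fd.read())
    return h.hexdigest()

def sync_if_changed(paths, on_change):
    """Decorator: call on_change() only when the wrapped function
    modified files under paths."""
    def wrap(f):
        def wrapped(*args, **kwargs):
            before = hash_tree(paths)
            ret = f(*args, **kwargs)
            if hash_tree(paths) != before:
                on_change()
            return ret
        return wrapped
    return wrap

synced = []
ssl_dir = tempfile.mkdtemp()

@sync_if_changed([ssl_dir], on_change=lambda: synced.append(True))
def write_cert():
    with open(os.path.join(ssl_dir, 'ca.crt'), 'w') as fd:
        fd.write('cert material')

@sync_if_changed([ssl_dir], on_change=lambda: synced.append(True))
def noop():
    pass

write_cert()
noop()
# Only the call that touched the directory triggered a "sync"
assert synced == [True]
```

Like the charm's version, this digests file contents only, so a second run that writes identical bytes does not retrigger the sync.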
1255@@ -642,22 +1023,32 @@
1256 Initialize a new CA object if one hasn't already been loaded.
1257 This will create a new CA or load an existing one.
1258 """
1259- if not CA:
1260+ if not ssl.CA_SINGLETON:
1261 if not os.path.isdir(SSL_DIR):
1262 os.mkdir(SSL_DIR)
1263+
1264 d_name = '_'.join(SSL_CA_NAME.lower().split(' '))
1265 ca = ssl.JujuCA(name=SSL_CA_NAME, user=user, group=group,
1266 ca_dir=os.path.join(SSL_DIR,
1267 '%s_intermediate_ca' % d_name),
1268 root_ca_dir=os.path.join(SSL_DIR,
1269 '%s_root_ca' % d_name))
1270+
1271 # SSL_DIR is synchronized via all peers over unison+ssh, need
1272 # to ensure permissions.
1273 subprocess.check_output(['chown', '-R', '%s.%s' % (user, group),
1274 '%s' % SSL_DIR])
1275 subprocess.check_output(['chmod', '-R', 'g+rwx', '%s' % SSL_DIR])
1276- CA.append(ca)
1277- return CA[0]
1278+
1279+ # Ensure a master has been elected and prefer this unit. Note that we
1280+ # prefer oldest peer as predicate since this action is normally only
1281+ # performed once at deploy time when the oldest peer should be the
1282+ # first to be ready.
1283+ ensure_ssl_cert_master(use_oldest_peer=True)
1284+
1285+ ssl.CA_SINGLETON.append(ca)
1286+
1287+ return ssl.CA_SINGLETON[0]
1288
1289
1290 def relation_list(rid):
1291@@ -683,7 +1074,7 @@
1292 https_cns = []
1293 if single.issubset(settings):
1294 # other end of relation advertised only one endpoint
1295- if 'None' in [v for k, v in settings.iteritems()]:
1296+ if 'None' in settings.itervalues():
1297 # Some backend services advertise no endpoint but require a
1298 # hook execution to update auth strategy.
1299 relation_data = {}
1300@@ -699,7 +1090,7 @@
1301 relation_data["auth_port"] = config('admin-port')
1302 relation_data["service_port"] = config('service-port')
1303 relation_data["region"] = config('region')
1304- if config('https-service-endpoints') in ['True', 'true']:
1305+ if is_str_true(config('https-service-endpoints')):
1306 # Pass CA cert as client will need it to
1307 # verify https connections
1308 ca = get_ca(user=SSH_USER)
1309@@ -711,6 +1102,7 @@
1310 for role in get_requested_roles(settings):
1311 log("Creating requested role: %s" % role)
1312 create_role(role)
1313+
1314 peer_store_and_set(relation_id=relation_id,
1315 **relation_data)
1316 return
1317@@ -786,7 +1178,7 @@
1318 if prefix:
1319 service_username = "%s%s" % (prefix, service_username)
1320
1321- if 'None' in [v for k, v in settings.iteritems()]:
1322+ if 'None' in settings.itervalues():
1323 return
1324
1325 if not service_username:
1326@@ -838,7 +1230,7 @@
1327 relation_data["auth_protocol"] = "http"
1328 relation_data["service_protocol"] = "http"
1329 # generate or get a new cert/key for service if set to manage certs.
1330- if config('https-service-endpoints') in ['True', 'true']:
1331+ if is_str_true(config('https-service-endpoints')):
1332 ca = get_ca(user=SSH_USER)
1333 # NOTE(jamespage) may have multiple cns to deal with to iterate
1334 https_cns = set(https_cns)
1335@@ -853,6 +1245,7 @@
1336 ca_bundle = ca.get_ca_bundle()
1337 relation_data['ca_cert'] = b64encode(ca_bundle)
1338 relation_data['https_keystone'] = 'True'
1339+
1340 peer_store_and_set(relation_id=relation_id,
1341 **relation_data)
1342
1343
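check_peer_actions above parses flag file names of the form `<unit>.<service>.<action>` (falling back to `<unit>.<action>` for service-less actions). A small sketch of that parsing using the same greedy regexes, with a hypothetical helper name:

```python
import re

def parse_flag(flag):
    """Split a sync flag file name into (source_unit, service, action);
    service is None for service-less actions such as
    update-ca-certificates."""
    res = re.search(r"^(.+)\.(.+)\.(.+)", flag)
    if res:
        return res.group(1), res.group(2), res.group(3)
    res = re.search(r"^(.+)\.(.+)", flag)
    return res.group(1), None, res.group(2)

print(parse_flag('keystone-0.apache2.restart'))
# ('keystone-0', 'apache2', 'restart')
print(parse_flag('keystone-0.update-ca-certificates'))
# ('keystone-0', None, 'update-ca-certificates')
```

This works because local_unit().replace('/', '-') never contains a dot, so the dots are unambiguous separators.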
1344=== modified file 'unit_tests/test_keystone_hooks.py'
1345--- unit_tests/test_keystone_hooks.py 2015-01-21 16:26:50 +0000
1346+++ unit_tests/test_keystone_hooks.py 2015-01-21 16:51:08 +0000
1347@@ -1,6 +1,7 @@
1348 from mock import call, patch, MagicMock
1349 import os
1350 import json
1351+import uuid
1352
1353 from test_utils import CharmTestCase
1354
1355@@ -30,7 +31,6 @@
1356 'local_unit',
1357 'filter_installed_packages',
1358 'relation_ids',
1359- 'relation_list',
1360 'relation_set',
1361 'relation_get',
1362 'related_units',
1363@@ -42,9 +42,10 @@
1364 'restart_on_change',
1365 # charmhelpers.contrib.openstack.utils
1366 'configure_installation_source',
1367+ # charmhelpers.contrib.openstack.ip
1368+ 'resolve_address',
1369 # charmhelpers.contrib.hahelpers.cluster_utils
1370- 'is_leader',
1371- 'eligible_leader',
1372+ 'is_elected_leader',
1373 'get_hacluster_config',
1374 # keystone_utils
1375 'restart_map',
1376@@ -55,7 +56,7 @@
1377 'migrate_database',
1378 'ensure_initial_admin',
1379 'add_service_to_keystone',
1380- 'synchronize_ca',
1381+ 'synchronize_ca_if_changed',
1382 'update_nrpe_config',
1383 # other
1384 'check_call',
1385@@ -160,8 +161,13 @@
1386 'Attempting to associate a postgresql database when there '
1387 'is already associated a mysql one')
1388
1389+ @patch('keystone_utils.log')
1390+ @patch('keystone_utils.ensure_ssl_cert_master')
1391 @patch.object(hooks, 'CONFIGS')
1392- def test_db_changed_missing_relation_data(self, configs):
1393+ def test_db_changed_missing_relation_data(self, configs,
1394+ mock_ensure_ssl_cert_master,
1395+ mock_log):
1396+ mock_ensure_ssl_cert_master.return_value = False
1397 configs.complete_contexts = MagicMock()
1398 configs.complete_contexts.return_value = []
1399 hooks.db_changed()
1400@@ -169,8 +175,13 @@
1401 'shared-db relation incomplete. Peer not ready?'
1402 )
1403
1404+ @patch('keystone_utils.log')
1405+ @patch('keystone_utils.ensure_ssl_cert_master')
1406 @patch.object(hooks, 'CONFIGS')
1407- def test_postgresql_db_changed_missing_relation_data(self, configs):
1408+ def test_postgresql_db_changed_missing_relation_data(self, configs,
1409+ mock_ensure_leader,
1410+ mock_log):
1411+ mock_ensure_leader.return_value = False
1412 configs.complete_contexts = MagicMock()
1413 configs.complete_contexts.return_value = []
1414 hooks.pgsql_db_changed()
1415@@ -192,9 +203,14 @@
1416 configs.write = MagicMock()
1417 hooks.pgsql_db_changed()
1418
1419+ @patch('keystone_utils.log')
1420+ @patch('keystone_utils.ensure_ssl_cert_master')
1421 @patch.object(hooks, 'CONFIGS')
1422 @patch.object(hooks, 'identity_changed')
1423- def test_db_changed_allowed(self, identity_changed, configs):
1424+ def test_db_changed_allowed(self, identity_changed, configs,
1425+ mock_ensure_ssl_cert_master,
1426+ mock_log):
1427+ mock_ensure_ssl_cert_master.return_value = False
1428 self.relation_ids.return_value = ['identity-service:0']
1429 self.related_units.return_value = ['unit/0']
1430
1431@@ -207,9 +223,13 @@
1432 relation_id='identity-service:0',
1433 remote_unit='unit/0')
1434
1435+ @patch('keystone_utils.log')
1436+ @patch('keystone_utils.ensure_ssl_cert_master')
1437 @patch.object(hooks, 'CONFIGS')
1438 @patch.object(hooks, 'identity_changed')
1439- def test_db_changed_not_allowed(self, identity_changed, configs):
1440+ def test_db_changed_not_allowed(self, identity_changed, configs,
1441+ mock_ensure_ssl_cert_master, mock_log):
1442+ mock_ensure_ssl_cert_master.return_value = False
1443 self.relation_ids.return_value = ['identity-service:0']
1444 self.related_units.return_value = ['unit/0']
1445
1446@@ -220,9 +240,13 @@
1447 self.assertFalse(self.ensure_initial_admin.called)
1448 self.assertFalse(identity_changed.called)
1449
1450+ @patch('keystone_utils.log')
1451+ @patch('keystone_utils.ensure_ssl_cert_master')
1452 @patch.object(hooks, 'CONFIGS')
1453 @patch.object(hooks, 'identity_changed')
1454- def test_postgresql_db_changed(self, identity_changed, configs):
1455+ def test_postgresql_db_changed(self, identity_changed, configs,
1456+ mock_ensure_ssl_cert_master, mock_log):
1457+ mock_ensure_ssl_cert_master.return_value = False
1458 self.relation_ids.return_value = ['identity-service:0']
1459 self.related_units.return_value = ['unit/0']
1460
1461@@ -235,6 +259,10 @@
1462 relation_id='identity-service:0',
1463 remote_unit='unit/0')
1464
1465+ @patch('keystone_utils.log')
1466+ @patch('keystone_utils.ensure_ssl_cert_master')
1467+ @patch.object(hooks, 'peer_units')
1468+ @patch.object(hooks, 'ensure_permissions')
1469 @patch.object(hooks, 'admin_relation_changed')
1470 @patch.object(hooks, 'cluster_joined')
1471 @patch.object(unison, 'ensure_user')
1472@@ -245,11 +273,15 @@
1473 def test_config_changed_no_openstack_upgrade_leader(
1474 self, configure_https, identity_changed,
1475 configs, get_homedir, ensure_user, cluster_joined,
1476- admin_relation_changed):
1477+ admin_relation_changed, ensure_permissions, mock_peer_units,
1478+ mock_ensure_ssl_cert_master, mock_log):
1479 self.openstack_upgrade_available.return_value = False
1480- self.eligible_leader.return_value = True
1481- self.relation_ids.return_value = ['dummyid:0']
1482- self.relation_list.return_value = ['unit/0']
1483+ self.is_elected_leader.return_value = True
1484+ # avoid having to mock syncer
1485+ mock_ensure_ssl_cert_master.return_value = False
1486+ mock_peer_units.return_value = []
1487+ self.relation_ids.return_value = ['identity-service:0']
1488+ self.related_units.return_value = ['unit/0']
1489
1490 hooks.config_changed()
1491 ensure_user.assert_called_with(user=self.ssh_user, group='keystone')
1492@@ -264,10 +296,13 @@
1493 self.log.assert_called_with(
1494 'Firing identity_changed hook for all related services.')
1495 identity_changed.assert_called_with(
1496- relation_id='dummyid:0',
1497+ relation_id='identity-service:0',
1498 remote_unit='unit/0')
1499- admin_relation_changed.assert_called_with('dummyid:0')
1500+ admin_relation_changed.assert_called_with('identity-service:0')
1501
1502+ @patch('keystone_utils.log')
1503+ @patch('keystone_utils.ensure_ssl_cert_master')
1504+ @patch.object(hooks, 'ensure_permissions')
1505 @patch.object(hooks, 'cluster_joined')
1506 @patch.object(unison, 'ensure_user')
1507 @patch.object(unison, 'get_homedir')
1508@@ -276,9 +311,12 @@
1509 @patch.object(hooks, 'configure_https')
1510 def test_config_changed_no_openstack_upgrade_not_leader(
1511 self, configure_https, identity_changed,
1512- configs, get_homedir, ensure_user, cluster_joined):
1513+ configs, get_homedir, ensure_user, cluster_joined,
1514+ ensure_permissions, mock_ensure_ssl_cert_master,
1515+ mock_log):
1516 self.openstack_upgrade_available.return_value = False
1517- self.eligible_leader.return_value = False
1518+ self.is_elected_leader.return_value = False
1519+ mock_ensure_ssl_cert_master.return_value = False
1520
1521 hooks.config_changed()
1522 ensure_user.assert_called_with(user=self.ssh_user, group='keystone')
1523@@ -292,6 +330,10 @@
1524 self.assertFalse(self.ensure_initial_admin.called)
1525 self.assertFalse(identity_changed.called)
1526
1527+ @patch('keystone_utils.log')
1528+ @patch('keystone_utils.ensure_ssl_cert_master')
1529+ @patch.object(hooks, 'peer_units')
1530+ @patch.object(hooks, 'ensure_permissions')
1531 @patch.object(hooks, 'admin_relation_changed')
1532 @patch.object(hooks, 'cluster_joined')
1533 @patch.object(unison, 'ensure_user')
1534@@ -302,11 +344,16 @@
1535 def test_config_changed_with_openstack_upgrade(
1536 self, configure_https, identity_changed,
1537 configs, get_homedir, ensure_user, cluster_joined,
1538- admin_relation_changed):
1539+ admin_relation_changed,
1540+ ensure_permissions, mock_peer_units, mock_ensure_ssl_cert_master,
1541+ mock_log):
1542 self.openstack_upgrade_available.return_value = True
1543- self.eligible_leader.return_value = True
1544- self.relation_ids.return_value = ['dummyid:0']
1545- self.relation_list.return_value = ['unit/0']
1546+ self.is_elected_leader.return_value = True
1547+ # avoid having to mock syncer
1548+ mock_ensure_ssl_cert_master.return_value = False
1549+ mock_peer_units.return_value = []
1550+ self.relation_ids.return_value = ['identity-service:0']
1551+ self.related_units.return_value = ['unit/0']
1552
1553 hooks.config_changed()
1554 ensure_user.assert_called_with(user=self.ssh_user, group='keystone')
1555@@ -323,25 +370,33 @@
1556 self.log.assert_called_with(
1557 'Firing identity_changed hook for all related services.')
1558 identity_changed.assert_called_with(
1559- relation_id='dummyid:0',
1560+ relation_id='identity-service:0',
1561 remote_unit='unit/0')
1562- admin_relation_changed.assert_called_with('dummyid:0')
1563+ admin_relation_changed.assert_called_with('identity-service:0')
1564
1565+ @patch('keystone_utils.log')
1566+ @patch('keystone_utils.ensure_ssl_cert_master')
1567 @patch.object(hooks, 'hashlib')
1568 @patch.object(hooks, 'send_notifications')
1569 def test_identity_changed_leader(self, mock_send_notifications,
1570- mock_hashlib):
1571- self.eligible_leader.return_value = True
1572+ mock_hashlib, mock_ensure_ssl_cert_master,
1573+ mock_log):
1574+ mock_ensure_ssl_cert_master.return_value = False
1575 hooks.identity_changed(
1576 relation_id='identity-service:0',
1577 remote_unit='unit/0')
1578 self.add_service_to_keystone.assert_called_with(
1579 'identity-service:0',
1580 'unit/0')
1581- self.assertTrue(self.synchronize_ca.called)
1582
1583- def test_identity_changed_no_leader(self):
1584- self.eligible_leader.return_value = False
1585+ @patch.object(hooks, 'local_unit')
1586+ @patch('keystone_utils.log')
1587+ @patch('keystone_utils.ensure_ssl_cert_master')
1588+ def test_identity_changed_no_leader(self, mock_ensure_ssl_cert_master,
1589+ mock_log, mock_local_unit):
1590+ mock_ensure_ssl_cert_master.return_value = False
1591+ mock_local_unit.return_value = 'unit/0'
1592+ self.is_elected_leader.return_value = False
1593 hooks.identity_changed(
1594 relation_id='identity-service:0',
1595 remote_unit='unit/0')
1596@@ -349,23 +404,44 @@
1597 self.log.assert_called_with(
1598 'Deferring identity_changed() to service leader.')
1599
1600+ @patch.object(hooks, 'local_unit')
1601+ @patch.object(hooks, 'peer_units')
1602 @patch.object(unison, 'ssh_authorized_peers')
1603- def test_cluster_joined(self, ssh_authorized_peers):
1604+ def test_cluster_joined(self, ssh_authorized_peers, mock_peer_units,
1605+ mock_local_unit):
1606+ mock_local_unit.return_value = 'unit/0'
1607+ mock_peer_units.return_value = ['unit/0']
1608 hooks.cluster_joined()
1609 ssh_authorized_peers.assert_called_with(
1610 user=self.ssh_user, group='juju_keystone',
1611 peer_interface='cluster', ensure_local_user=True)
1612
1613+ @patch.object(hooks, 'is_ssl_cert_master')
1614+ @patch.object(hooks, 'peer_units')
1615+ @patch('keystone_utils.log')
1616+ @patch('keystone_utils.ensure_ssl_cert_master')
1617+ @patch('keystone_utils.synchronize_ca')
1618+ @patch.object(hooks, 'check_peer_actions')
1619 @patch.object(unison, 'ssh_authorized_peers')
1620 @patch.object(hooks, 'CONFIGS')
1621- def test_cluster_changed(self, configs, ssh_authorized_peers):
1622+ def test_cluster_changed(self, configs, ssh_authorized_peers,
1623+ check_peer_actions, mock_synchronize_ca,
1624+ mock_ensure_ssl_cert_master,
1625+ mock_log, mock_peer_units,
1626+ mock_is_ssl_cert_master):
1627+ mock_is_ssl_cert_master.return_value = False
1628+ mock_peer_units.return_value = ['unit/0']
1629+ mock_ensure_ssl_cert_master.return_value = False
1630+ self.is_elected_leader.return_value = False
1631+ self.relation_get.return_value = {'foo_passwd': '123',
1632+ 'identity-service:16_foo': 'bar'}
1633 hooks.cluster_changed()
1634- self.peer_echo.assert_called_with(includes=['_passwd',
1635- 'identity-service:'])
1636+ self.peer_echo.assert_called_with(includes=['foo_passwd',
1637+ 'identity-service:16_foo'])
1638 ssh_authorized_peers.assert_called_with(
1639 user=self.ssh_user, group='keystone',
1640 peer_interface='cluster', ensure_local_user=True)
1641- self.assertTrue(self.synchronize_ca.called)
1642+ self.assertFalse(mock_synchronize_ca.called)
1643 self.assertTrue(configs.write_all.called)
1644
1645 def test_ha_joined(self):
1646@@ -440,34 +516,50 @@
1647 }
1648 self.relation_set.assert_called_with(**args)
1649
1650+ @patch('keystone_utils.log')
1651+ @patch('keystone_utils.ensure_ssl_cert_master')
1652+ @patch('keystone_utils.synchronize_ca')
1653 @patch.object(hooks, 'CONFIGS')
1654- def test_ha_relation_changed_not_clustered_not_leader(self, configs):
1655+ def test_ha_relation_changed_not_clustered_not_leader(self, configs,
1656+ mock_synchronize_ca,
1657+ mock_is_master,
1658+ mock_log):
1659+ mock_is_master.return_value = False
1660 self.relation_get.return_value = False
1661- self.is_leader.return_value = False
1662+ self.is_elected_leader.return_value = False
1663
1664 hooks.ha_changed()
1665 self.assertTrue(configs.write_all.called)
1666+ self.assertFalse(mock_synchronize_ca.called)
1667
1668+ @patch('keystone_utils.log')
1669+ @patch('keystone_utils.ensure_ssl_cert_master')
1670 @patch.object(hooks, 'identity_changed')
1671 @patch.object(hooks, 'CONFIGS')
1672- def test_ha_relation_changed_clustered_leader(
1673- self, configs, identity_changed):
1674+ def test_ha_relation_changed_clustered_leader(self, configs,
1675+ identity_changed,
1676+ mock_ensure_ssl_cert_master,
1677+ mock_log):
1678+ mock_ensure_ssl_cert_master.return_value = False
1679 self.relation_get.return_value = True
1680- self.is_leader.return_value = True
1681+ self.is_elected_leader.return_value = True
1682 self.relation_ids.return_value = ['identity-service:0']
1683 self.related_units.return_value = ['unit/0']
1684
1685 hooks.ha_changed()
1686 self.assertTrue(configs.write_all.called)
1687 self.log.assert_called_with(
1688- 'Cluster configured, notifying other services and updating '
1689- 'keystone endpoint configuration')
1690+ 'Firing identity_changed hook for all related services.')
1691 identity_changed.assert_called_with(
1692 relation_id='identity-service:0',
1693 remote_unit='unit/0')
1694
1695+ @patch('keystone_utils.log')
1696+ @patch('keystone_utils.ensure_ssl_cert_master')
1697 @patch.object(hooks, 'CONFIGS')
1698- def test_configure_https_enable(self, configs):
1699+ def test_configure_https_enable(self, configs, mock_ensure_ssl_cert_master,
1700+ mock_log):
1701+ mock_ensure_ssl_cert_master.return_value = False
1702 configs.complete_contexts = MagicMock()
1703 configs.complete_contexts.return_value = ['https']
1704 configs.write = MagicMock()
1705@@ -477,8 +569,13 @@
1706 cmd = ['a2ensite', 'openstack_https_frontend']
1707 self.check_call.assert_called_with(cmd)
1708
1709+ @patch('keystone_utils.log')
1710+ @patch('keystone_utils.ensure_ssl_cert_master')
1711 @patch.object(hooks, 'CONFIGS')
1712- def test_configure_https_disable(self, configs):
1713+ def test_configure_https_disable(self, configs,
1714+ mock_ensure_ssl_cert_master,
1715+ mock_log):
1716+ mock_ensure_ssl_cert_master.return_value = False
1717 configs.complete_contexts = MagicMock()
1718 configs.complete_contexts.return_value = ['']
1719 configs.write = MagicMock()
1720@@ -488,30 +585,61 @@
1721 cmd = ['a2dissite', 'openstack_https_frontend']
1722 self.check_call.assert_called_with(cmd)
1723
1724+ @patch('keystone_utils.log')
1725+ @patch('keystone_utils.relation_ids')
1726+ @patch('keystone_utils.is_elected_leader')
1727+ @patch('keystone_utils.ensure_ssl_cert_master')
1728+ @patch('keystone_utils.update_hash_from_path')
1729+ @patch('keystone_utils.synchronize_ca')
1730 @patch.object(unison, 'ssh_authorized_peers')
1731- def test_upgrade_charm_leader(self, ssh_authorized_peers):
1732- self.eligible_leader.return_value = True
1733+ def test_upgrade_charm_leader(self, ssh_authorized_peers,
1734+ mock_synchronize_ca,
1735+ mock_update_hash_from_path,
1736+ mock_ensure_ssl_cert_master,
1737+ mock_is_elected_leader,
1738+ mock_relation_ids,
1739+ mock_log):
1740+ mock_is_elected_leader.return_value = False
1741+ mock_relation_ids.return_value = []
1742+ mock_ensure_ssl_cert_master.return_value = True
1743+ # Ensure always returns diff
1744+ mock_update_hash_from_path.side_effect = \
1745+ lambda hash, *args, **kwargs: hash.update(str(uuid.uuid4()))
1746+
1747+ self.is_elected_leader.return_value = True
1748 self.filter_installed_packages.return_value = []
1749 hooks.upgrade_charm()
1750 self.assertTrue(self.apt_install.called)
1751 ssh_authorized_peers.assert_called_with(
1752 user=self.ssh_user, group='keystone',
1753 peer_interface='cluster', ensure_local_user=True)
1754- self.assertTrue(self.synchronize_ca.called)
1755+ self.assertTrue(mock_synchronize_ca.called)
1756 self.log.assert_called_with(
1757- 'Cluster leader - ensuring endpoint configuration'
1758- ' is up to date')
1759+ 'Firing identity_changed hook for all related services.')
1760 self.assertTrue(self.ensure_initial_admin.called)
1761
1762+ @patch('keystone_utils.log')
1763+ @patch('keystone_utils.relation_ids')
1764+ @patch('keystone_utils.ensure_ssl_cert_master')
1765+ @patch('keystone_utils.update_hash_from_path')
1766 @patch.object(unison, 'ssh_authorized_peers')
1767- def test_upgrade_charm_not_leader(self, ssh_authorized_peers):
1768- self.eligible_leader.return_value = False
1769+ def test_upgrade_charm_not_leader(self, ssh_authorized_peers,
1770+ mock_update_hash_from_path,
1771+ mock_ensure_ssl_cert_master,
1772+ mock_relation_ids,
1773+ mock_log):
1774+ mock_relation_ids.return_value = []
1775+ mock_ensure_ssl_cert_master.return_value = False
1776+ # Ensure always returns diff
1777+ mock_update_hash_from_path.side_effect = \
1778+ lambda hash, *args, **kwargs: hash.update(str(uuid.uuid4()))
1779+
1780+ self.is_elected_leader.return_value = False
1781 self.filter_installed_packages.return_value = []
1782 hooks.upgrade_charm()
1783 self.assertTrue(self.apt_install.called)
1784 ssh_authorized_peers.assert_called_with(
1785 user=self.ssh_user, group='keystone',
1786 peer_interface='cluster', ensure_local_user=True)
1787- self.assertTrue(self.synchronize_ca.called)
1788 self.assertFalse(self.log.called)
1789 self.assertFalse(self.ensure_initial_admin.called)
1790
1791=== modified file 'unit_tests/test_keystone_utils.py'
1792--- unit_tests/test_keystone_utils.py 2015-01-14 13:17:50 +0000
1793+++ unit_tests/test_keystone_utils.py 2015-01-21 16:51:08 +0000
1794@@ -26,9 +26,8 @@
1795 'get_os_codename_install_source',
1796 'grant_role',
1797 'configure_installation_source',
1798- 'eligible_leader',
1799+ 'is_elected_leader',
1800 'https',
1801- 'is_clustered',
1802 'peer_store_and_set',
1803 'service_stop',
1804 'service_start',
1805@@ -115,7 +114,7 @@
1806 self, migrate_database, determine_packages, configs):
1807 self.test_config.set('openstack-origin', 'precise')
1808 determine_packages.return_value = []
1809- self.eligible_leader.return_value = True
1810+ self.is_elected_leader.return_value = True
1811
1812 utils.do_openstack_upgrade(configs)
1813
1814@@ -202,7 +201,6 @@
1815 self.resolve_address.return_value = '10.0.0.3'
1816 self.test_config.set('admin-port', 80)
1817 self.test_config.set('service-port', 81)
1818- self.is_clustered.return_value = False
1819 self.https.return_value = False
1820 self.test_config.set('https-service-endpoints', 'False')
1821 self.get_local_endpoint.return_value = 'http://localhost:80/v2.0/'
