Merge lp:~yolanda.robla/charms/precise/keystone/ha-support into lp:~charmers/charms/precise/keystone/trunk

Proposed by Yolanda Robla
Status: Superseded
Proposed branch: lp:~yolanda.robla/charms/precise/keystone/ha-support
Merge into: lp:~charmers/charms/precise/keystone/trunk
Diff against target: 785 lines (+398/-138)
7 files modified
config.yaml (+37/-0)
hooks/keystone-hooks (+150/-22)
hooks/lib/openstack_common.py (+56/-4)
hooks/utils.py (+113/-111)
metadata.yaml (+6/-0)
revision (+1/-1)
templates/haproxy.cfg (+35/-0)
To merge this branch: bzr merge lp:~yolanda.robla/charms/precise/keystone/ha-support
Reviewer: Adam Gandelman (Pending)
Review via email: mp+145823@code.launchpad.net

Description of the change

Added Ceilometer service

49. By Adam Gandelman

Merge 'Added Ceilometer service'

50. By James Page

Adds better support for service leaders.

* The service leader is determined by how keystone is currently clustered. If there are multiple units but no hacluster subordinate, the oldest service unit (the one with the lowest unit number) is elected leader. If a hacluster subordinate exists and the service is clustered, the CRM is consulted and the node hosting the resources is designated the leader.

* Only the leader may initialize or touch the database (create users, endpoints, etc.).

* The leader is responsible for synchronizing a list of service credentials to all peers. The list is stored on disk and resolves the issue of the passwd dump files in /var/lib/keystone/ being out of sync among peers.

We can use the same approach in the rabbitmq-server charm if it works out here.
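
(A reading aid, not part of the proposal: a minimal sketch of the election rule described above. is_clustered() and the res_ks_vip resource name come from this branch's utils.py; the lowest-unit-number fallback follows the description, since that code is not shown in this diff.)

    import subprocess

    def elect_leader(unit_name, peer_units):
        """Sketch: return True if this unit should act as service leader."""
        if is_clustered():
            # Clustered: ask the CRM which node currently hosts the VIP
            # resource; that node is the leader.
            status = subprocess.check_output(
                ['crm', 'resource', 'show', 'res_ks_vip'])
            hostname = subprocess.check_output(['hostname']).strip()
            return hostname in status
        # No hacluster subordinate: the oldest unit (lowest number) leads.
        unit_no = lambda u: int(u.split('/')[1])
        return unit_no(unit_name) == min(map(unit_no, peer_units + [unit_name]))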

51. By James Page

Pass keystone information to services which just need to access keystone

52. By James Page

openstack-common: increase server/client timeout to 10 seconds

53. By James Page

Use db_host, not private address to ensure VIP is used in HA scenarios
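
(Roughly, the change amounts to the sketch below; db_host is the shared-db relation key this revision switches to, while the database-user option name and the db_password variable are illustrative, not taken from the diff.)

    # Prefer the db_host advertised on the shared-db relation (a clustered
    # MySQL publishes its VIP there) over the unit's private-address.
    relation_data = relation_get_dict()
    db_host = relation_data.get('db_host') or relation_data['private-address']
    update_config_block('sql', connection='mysql://%s:%s@%s/%s' %
                        (config['database-user'], db_password,
                         db_host, config['database']))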

54. By Adam Gandelman

Sync openstack charm helpers.

55. By Adam Gandelman

Merge health_checks.d framework.

56. By James Page

Fixup keystone epoch version handling

57. By James Page

SSH based hook syncing

58. By Adam Gandelman

Big refactor and cleanup from James.

59. By Adam Gandelman

Sync openstack_common.py

60. By Adam Gandelman

Sync utils.py

61. By James Page

keystone health updates to use new cluster.determine_api_port method
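
(The helper itself is not in this diff; as an assumption based on SERVICE_PORTS in this branch, where haproxy fronts each service on the configured port plus one, it plausibly behaves like:)

    def determine_api_port(public_port):
        # Behind haproxy the advertised (clustered) port sits one above the
        # port keystone actually listens on, so health checks must probe
        # public_port - 1; standalone units serve public_port directly.
        if is_clustered():
            return public_port - 1
        return public_port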

62. By Adam Gandelman

Merge SSH_USER fix for non-clustered nodes.

63. By Yolanda Robla

grant requested role to user

64. By Yolanda Robla

upgrade revision

65. By Yolanda Robla

fixes in requested_roles

66. By Yolanda Robla

changes in requested roles

67. By Yolanda Robla

typo fix

68. By Yolanda Robla

fixes in requested roles

69. By Yolanda Robla

fix in create_role

70. By Yolanda Robla

fixes in grant requested roles

Preview Diff

=== modified file 'config.yaml'
--- config.yaml 2012-10-12 17:26:48 +0000
+++ config.yaml 2013-01-31 11:59:26 +0000
@@ -26,6 +26,10 @@
     default: "/etc/keystone/keystone.conf"
     type: string
     description: "Location of keystone configuration file"
+  log-level:
+    default: WARNING
+    type: string
+    description: Log level (WARNING, INFO, DEBUG, ERROR)
   service-port:
     default: 5000
     type: int
@@ -75,3 +79,36 @@
     default: "keystone"
     type: string
     description: "Database username"
+  region:
+    default: RegionOne
+    type: string
+    description: "OpenStack Region(s) - separate multiple regions with single space"
+  # HA configuration settings
+  vip:
+    type: string
+    description: "Virtual IP to use to front keystone in ha configuration"
+  vip_iface:
+    type: string
+    default: eth0
+    description: "Network Interface where to place the Virtual IP"
+  vip_cidr:
+    type: int
+    default: 24
+    description: "Netmask that will be used for the Virtual IP"
+  ha-bindiface:
+    type: string
+    default: eth0
+    description: |
+      Default network interface on which HA cluster will bind to communication
+      with the other members of the HA Cluster.
+  ha-mcastport:
+    type: int
+    default: 5403
+    description: |
+      Default multicast port number that will be used to communicate between
+      HA Cluster nodes.
+  # PKI enablement and configuration (Grizzly and beyond)
+  enable-pki:
+    default: "false"
+    type: string
+    description: "Enable PKI token signing (Grizzly and beyond)"

=== added symlink 'hooks/cluster-relation-changed'
=== target is u'keystone-hooks'
=== added symlink 'hooks/cluster-relation-departed'
=== target is u'keystone-hooks'
=== added symlink 'hooks/ha-relation-changed'
=== target is u'keystone-hooks'
=== added symlink 'hooks/ha-relation-joined'
=== target is u'keystone-hooks'
=== modified file 'hooks/keystone-hooks'
--- hooks/keystone-hooks 2012-12-12 03:52:01 +0000
+++ hooks/keystone-hooks 2013-01-31 11:59:26 +0000
@@ -8,7 +8,7 @@

 config = config_get()

-packages = "keystone python-mysqldb pwgen"
+packages = "keystone python-mysqldb pwgen haproxy python-jinja2"
 service = "keystone"

 # used to verify joined services are valid openstack components.
@@ -46,6 +46,14 @@
     "quantum": {
         "type": "network",
         "desc": "Quantum Networking Service"
+    },
+    "oxygen": {
+        "type": "oxygen",
+        "desc": "Oxygen Cloud Image Service"
+    },
+    "ceilometer": {
+        "type": "metering",
+        "desc": "Ceilometer Metering Service"
     }
 }

@@ -124,18 +132,30 @@
         realtion_set({ "admin_token": -1 })
         return

-    def add_endpoint(region, service, public_url, admin_url, internal_url):
+    def add_endpoint(region, service, publicurl, adminurl, internalurl):
         desc = valid_services[service]["desc"]
         service_type = valid_services[service]["type"]
         create_service_entry(service, service_type, desc)
         create_endpoint_template(region=region, service=service,
-                                 public_url=public_url,
-                                 admin_url=admin_url,
-                                 internal_url=internal_url)
+                                 publicurl=publicurl,
+                                 adminurl=adminurl,
+                                 internalurl=internalurl)
+
+    if is_clustered() and not is_leader():
+        # Only respond if service unit is the leader
+        return

     settings = relation_get_dict(relation_id=relation_id,
                                  remote_unit=remote_unit)

+    # Allow the remote service to request creation of any additional roles.
+    # Currently used by Swift.
+    if 'requested_roles' in settings and settings['requested_roles'] != 'None':
+        roles = settings['requested_roles'].split(',')
+        juju_log("Creating requested roles: %s" % roles)
+        for role in roles:
+            create_role(role, user=config['admin-user'], tenant='admin')
+
     # the minimum settings needed per endpoint
     single = set(['service', 'region', 'public_url', 'admin_url',
                   'internal_url'])
@@ -149,9 +169,9 @@

         ensure_valid_service(settings['service'])
         add_endpoint(region=settings['region'], service=settings['service'],
-                     public_url=settings['public_url'],
-                     admin_url=settings['admin_url'],
-                     internal_url=settings['internal_url'])
+                     publicurl=settings['public_url'],
+                     adminurl=settings['admin_url'],
+                     internalurl=settings['internal_url'])
         service_username = settings['service']
     else:
         # assemble multiple endpoints from relation data. service name
@@ -186,9 +206,9 @@
             ep = endpoints[ep]
             ensure_valid_service(ep['service'])
             add_endpoint(region=ep['region'], service=ep['service'],
-                         public_url=ep['public_url'],
-                         admin_url=ep['admin_url'],
-                         internal_url=ep['internal_url'])
+                         publicurl=ep['public_url'],
+                         adminurl=ep['admin_url'],
+                         internalurl=ep['internal_url'])
             services.append(ep['service'])
         service_username = '_'.join(services)

@@ -201,6 +221,7 @@
     token = get_admin_token()
     juju_log("Creating service credentials for '%s'" % service_username)

+    # TODO: This needs to be changed as it won't work with ha keystone
     stored_passwd = '/var/lib/keystone/%s.passwd' % service_username
     if os.path.isfile(stored_passwd):
         juju_log("Loading stored service passwd from %s" % stored_passwd)
@@ -213,14 +234,6 @@
     create_user(service_username, service_password, config['service-tenant'])
     grant_role(service_username, config['admin-role'], config['service-tenant'])

-    # Allow the remote service to request creation of any additional roles.
-    # Currently used by Swift.
-    if 'requested_roles' in settings:
-        roles = settings['requested_roles'].split(',')
-        juju_log("Creating requested roles: %s" % roles)
-        for role in roles:
-            create_role(role, user=config['admin-user'], tenant='admin')
-
     # As of https://review.openstack.org/#change,4675, all nodes hosting
     # an endpoint(s) needs a service username and password assigned to
     # the service tenant and granted admin role.
@@ -237,6 +250,13 @@
         "service_password": service_password,
         "service_tenant": config['service-tenant']
     }
+    # Check if clustered and use vip + haproxy ports if so
+    if is_clustered():
+        relation_data["auth_host"] = config['vip']
+        relation_data["auth_port"] = SERVICE_PORTS['keystone_admin']
+        relation_data["service_host"] = config['vip']
+        relation_data["service_port"] = SERVICE_PORTS['keystone_service']
+
     relation_set(relation_data)

 def config_changed():
@@ -246,11 +266,114 @@
     available = get_os_codename_install_source(config['openstack-origin'])
     installed = get_os_codename_package('keystone')

-    if get_os_version_codename(available) > get_os_version_codename(installed):
+    if (available and
+        get_os_version_codename(available) > get_os_version_codename(installed)):
         do_openstack_upgrade(config['openstack-origin'], packages)

     set_admin_token(config['admin-token'])
-    ensure_initial_admin(config)
+
+    if is_clustered() and is_leader():
+        juju_log('Cluster leader - ensuring endpoint configuration is up to date')
+        ensure_initial_admin(config)
+    elif not is_clustered():
+        ensure_initial_admin(config)
+
+    update_config_block('logger_root', level=config['log-level'],
+                        file='/etc/keystone/logging.conf')
+    if get_os_version_package('keystone') >= '2013.1':
+        # PKI introduced in Grizzly
+        configure_pki_tokens(config)
+
+    execute("service keystone restart", echo=True)
+    cluster_changed()
+
+
+def upgrade_charm():
+    cluster_changed()
+    if is_clustered() and is_leader():
+        juju_log('Cluster leader - ensuring endpoint configuration is up to date')
+        ensure_initial_admin(config)
+    elif not is_clustered():
+        ensure_initial_admin(config)
+
+
+SERVICE_PORTS = {
+    "keystone_admin": int(config['admin-port']) + 1,
+    "keystone_service": int(config['service-port']) + 1
+    }
+
+
+def cluster_changed():
+    cluster_hosts = {}
+    cluster_hosts['self'] = config['hostname']
+    for r_id in relation_ids('cluster'):
+        for unit in relation_list(r_id):
+            cluster_hosts[unit.replace('/','-')] = \
+                relation_get_dict(relation_id=r_id,
+                                  remote_unit=unit)['private-address']
+    configure_haproxy(cluster_hosts,
+                      SERVICE_PORTS)
+
+
+def ha_relation_changed():
+    relation_data = relation_get_dict()
+    if ('clustered' in relation_data and
+        is_leader()):
+        juju_log('Cluster configured, notifying other services and updating'
+                 'keystone endpoint configuration')
+        # Update keystone endpoint to point at VIP
+        ensure_initial_admin(config)
+        # Tell all related services to start using
+        # the VIP and haproxy ports instead
+        for r_id in relation_ids('identity-service'):
+            relation_set_2(rid=r_id,
+                           auth_host=config['vip'],
+                           service_host=config['vip'],
+                           service_port=SERVICE_PORTS['keystone_service'],
+                           auth_port=SERVICE_PORTS['keystone_admin'])
+
+
+def ha_relation_joined():
+    # Obtain the config values necessary for the cluster config. These
+    # include multicast port and interface to bind to.
+    corosync_bindiface = config['ha-bindiface']
+    corosync_mcastport = config['ha-mcastport']
+
+    # Obtain resources
+    resources = {
+        'res_ks_vip':'ocf:heartbeat:IPaddr2',
+        'res_ks_haproxy':'lsb:haproxy'
+        }
+    # TODO: Obtain netmask and nic where to place VIP.
+    resource_params = {
+        'res_ks_vip':'params ip="%s" cidr_netmask="%s" nic="%s"' % (config['vip'],
+                      config['vip_cidr'], config['vip_iface']),
+        'res_ks_haproxy':'op monitor interval="5s"'
+        }
+    init_services = {
+        'res_ks_haproxy':'haproxy'
+        }
+    groups = {
+        'grp_ks_haproxy':'res_ks_vip res_ks_haproxy'
+        }
+    #clones = {
+    #    'cln_ks_haproxy':'res_ks_haproxy meta globally-unique="false" interleave="true"'
+    #    }
+
+    #orders = {
+    #    'ord_vip_before_haproxy':'inf: res_ks_vip res_ks_haproxy'
+    #    }
+    #colocations = {
+    #    'col_vip_on_haproxy':'inf: res_ks_haproxy res_ks_vip'
+    #    }
+
+    relation_set_2(init_services=init_services,
+                   corosync_bindiface=corosync_bindiface,
+                   corosync_mcastport=corosync_mcastport,
+                   resources=resources,
+                   resource_params=resource_params,
+                   groups=groups)
+

 hooks = {
     "install": install_hook,
@@ -258,7 +381,12 @@
     "shared-db-relation-changed": db_changed,
     "identity-service-relation-joined": identity_joined,
     "identity-service-relation-changed": identity_changed,
-    "config-changed": config_changed
+    "config-changed": config_changed,
+    "cluster-relation-changed": cluster_changed,
+    "cluster-relation-departed": cluster_changed,
+    "ha-relation-joined": ha_relation_joined,
+    "ha-relation-changed": ha_relation_changed,
+    "upgrade-charm": upgrade_charm
 }

 # keystone-hooks gets called by symlink corresponding to the requested relation

=== modified file 'hooks/lib/openstack_common.py'
--- hooks/lib/openstack_common.py 2012-12-05 20:35:05 +0000
+++ hooks/lib/openstack_common.py 2013-01-31 11:59:26 +0000
@@ -3,6 +3,7 @@
 # Common python helper functions used for OpenStack charms.

 import subprocess
+import os

 CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
 CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
@@ -22,6 +23,12 @@
     '2013.1': 'grizzly'
 }

+# The ugly duckling
+swift_codenames = {
+    '1.4.3': 'diablo',
+    '1.4.8': 'essex',
+    '1.7.4': 'folsom'
+}

 def juju_log(msg):
     subprocess.check_call(['juju-log', msg])
@@ -118,12 +125,32 @@

     vers = vers[:6]
     try:
-        return openstack_codenames[vers]
+        if 'swift' in pkg:
+            vers = vers[:5]
+            return swift_codenames[vers]
+        else:
+            vers = vers[:6]
+            return openstack_codenames[vers]
     except KeyError:
         e = 'Could not determine OpenStack codename for version %s' % vers
         error_out(e)


+def get_os_version_package(pkg):
+    '''Derive OpenStack version number from an installed package.'''
+    codename = get_os_codename_package(pkg)
+
+    if 'swift' in pkg:
+        vers_map = swift_codenames
+    else:
+        vers_map = openstack_codenames
+
+    for version, cname in vers_map.iteritems():
+        if cname == codename:
+            return version
+    e = "Could not determine OpenStack version for package: %s" % pkg
+    error_out(e)
+
 def configure_installation_source(rel):
     '''Configure apt installation source.'''

@@ -164,9 +191,11 @@
                 'version (%s)' % (ca_rel, ubuntu_rel)
             error_out(e)

-        if ca_rel == 'folsom/staging':
+        if 'staging' in ca_rel:
             # staging is just a regular PPA.
-            cmd = 'add-apt-repository -y ppa:ubuntu-cloud-archive/folsom-staging'
+            os_rel = ca_rel.split('/')[0]
+            ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel
+            cmd = 'add-apt-repository -y %s' % ppa
             subprocess.check_call(cmd.split(' '))
             return

@@ -174,7 +203,10 @@
         pockets = {
             'folsom': 'precise-updates/folsom',
             'folsom/updates': 'precise-updates/folsom',
-            'folsom/proposed': 'precise-proposed/folsom'
+            'folsom/proposed': 'precise-proposed/folsom',
+            'grizzly': 'precise-updates/grizzly',
+            'grizzly/updates': 'precise-updates/grizzly',
+            'grizzly/proposed': 'precise-proposed/grizzly'
         }

         try:
@@ -191,3 +223,23 @@
     else:
         error_out("Invalid openstack-release specified: %s" % rel)

+HAPROXY_CONF = '/etc/haproxy/haproxy.cfg'
+HAPROXY_DEFAULT = '/etc/default/haproxy'
+
+def configure_haproxy(units, service_ports, template_dir=None):
+    template_dir = template_dir or 'templates'
+    import jinja2
+    context = {
+        'units': units,
+        'service_ports': service_ports
+    }
+    templates = jinja2.Environment(
+        loader=jinja2.FileSystemLoader(template_dir)
+    )
+    template = templates.get_template(
+        os.path.basename(HAPROXY_CONF)
+    )
+    with open(HAPROXY_CONF, 'w') as f:
+        f.write(template.render(context))
+    with open(HAPROXY_DEFAULT, 'w') as f:
+        f.write('ENABLED=1')

=== added symlink 'hooks/upgrade-charm'
=== target is u'keystone-hooks'
=== modified file 'hooks/utils.py'
--- hooks/utils.py 2012-12-12 03:52:41 +0000
+++ hooks/utils.py 2013-01-31 11:59:26 +0000
@@ -1,4 +1,5 @@
 #!/usr/bin/python
+import ConfigParser
 import subprocess
 import sys
 import json
@@ -11,6 +12,7 @@
 stored_passwd = "/var/lib/keystone/keystone.passwd"
 stored_token = "/var/lib/keystone/keystone.token"

+
 def execute(cmd, die=False, echo=False):
     """ Executes a command

@@ -79,6 +81,20 @@
     for k in relation_data:
         execute("relation-set %s=%s" % (k, relation_data[k]), die=True)

+def relation_set_2(**kwargs):
+    cmd = [
+        'relation-set'
+        ]
+    args = []
+    for k, v in kwargs.items():
+        if k == 'rid':
+            cmd.append('-r')
+            cmd.append(v)
+        else:
+            args.append('{}={}'.format(k, v))
+    cmd += args
+    subprocess.check_call(cmd)
+
 def relation_get(relation_data):
     """ Obtain all current relation data
         relation_data is a list of options to query from the relation
@@ -149,106 +165,26 @@
                  keystone_conf)
         error_out('Could not find admin_token line in %s' % keystone_conf)

-def update_config_block(block, **kwargs):
+def update_config_block(section, **kwargs):
     """ Updates keystone.conf blocks given kwargs.
-    Can be used to update driver settings for a particular backend,
-    setting the sql connection, etc.
-
-    Parses block heading as '[block]'
-
-    If block does not exist, a new block will be created at end of file with
-    given kwargs
-    """
-    f = open(keystone_conf, "r+")
-    orig = f.readlines()
-    new = []
-    found_block = ""
-    heading = "[%s]\n" % block
-
-    lines = len(orig)
-    ln = 0
-
-    def update_block(block):
-        for k, v in kwargs.iteritems():
-            for l in block:
-                if l.strip().split(" ")[0] == k:
-                    block[block.index(l)] = "%s = %s\n" % (k, v)
-                    return
-            block.append('%s = %s\n' % (k, v))
-            block.append('\n')
-
-    try:
-        found = False
-        while ln < lines:
-            if orig[ln] != heading:
-                new.append(orig[ln])
-                ln += 1
-            else:
-                new.append(orig[ln])
-                ln += 1
-                block = []
-                while orig[ln].strip() != '':
-                    block.append(orig[ln])
-                    ln += 1
-                update_block(block)
-                new += block
-                found = True
-
-        if not found:
-            if new[(len(new) - 1)].strip() != '':
-                new.append('\n')
-            new.append('%s' % heading)
-            for k, v in kwargs.iteritems():
-                new.append('%s = %s\n' % (k, v))
-            new.append('\n')
-    except:
-        error_out('Error while attempting to update config block. '\
-                  'Refusing to overwite existing config.')
-
-        return
-
-    # backup original config
-    backup = open(keystone_conf + '.juju-back', 'w+')
-    for l in orig:
-        backup.write(l)
-    backup.close()
-
-    # update config
-    f.seek(0)
-    f.truncate()
-    for l in new:
-        f.write(l)
-
-
-def keystone_conf_update(opt, val):
-    """ Updates keystone.conf values
-    If option exists, it is reset to new value
-    If it does not, it added to the top of the config file after the [DEFAULT]
-    heading to keep it out of the paste deploy config
-    """
-    f = open(keystone_conf, "r+")
-    orig = f.readlines()
-    new = ""
-    found = False
-    for l in orig:
-        if l.split(' ')[0] == opt:
-            juju_log("Updating %s, setting %s = %s" % (keystone_conf, opt, val))
-            new += "%s = %s\n" % (opt, val)
-            found = True
-        else:
-            new += l
-    new = new.split('\n')
-    # insert a new value at the top of the file, after the 'DEFAULT' header so
-    # as not to muck up paste deploy configuration later in the file
-    if not found:
-        juju_log("Adding new config option %s = %s" % (opt, val))
-        header = new.index("[DEFAULT]")
-        new.insert((header+1), "%s = %s" % (opt, val))
-    f.seek(0)
-    f.truncate()
-    for l in new:
-        f.write("%s\n" % l)
-    f.close
+    Update a config setting in a specific setting of a config
+    file (/etc/keystone/keystone.conf, by default)
+    """
+    if 'file' in kwargs:
+        conf_file = kwargs['file']
+        del kwargs['file']
+    else:
+        conf_file = keystone_conf
+    config = ConfigParser.RawConfigParser()
+    config.read(conf_file)
+
+    if section != 'DEFAULT' and not config.has_section(section):
+        config.add_section(section)
+
+    for k, v in kwargs.iteritems():
+        config.set(section, k, v)
+    with open(conf_file, 'wb') as out:
+        config.write(out)

 def create_service_entry(service_name, service_type, service_desc, owner=None):
     """ Add a new service entry to keystone if one does not already exist """
@@ -264,8 +200,8 @@
                                 description=service_desc)
     juju_log("Created new service entry '%s'" % service_name)

-def create_endpoint_template(region, service, public_url, admin_url,
-                             internal_url):
+def create_endpoint_template(region, service, publicurl, adminurl,
+                             internalurl):
     """ Create a new endpoint template for service if one does not already
         exist matching name *and* region """
     import manager
@@ -276,13 +212,24 @@
         if ep['service_id'] == service_id and ep['region'] == region:
             juju_log("Endpoint template already exists for '%s' in '%s'"
                      % (service, region))
-            return
+
+            up_to_date = True
+            for k in ['publicurl', 'adminurl', 'internalurl']:
+                if ep[k] != locals()[k]:
+                    up_to_date = False
+
+            if up_to_date:
+                return
+            else:
+                # delete endpoint and recreate if endpoint urls need updating.
+                juju_log("Updating endpoint template with new endpoint urls.")
+                manager.api.endpoints.delete(ep['id'])

     manager.api.endpoints.create(region=region,
                                  service_id=service_id,
-                                 publicurl=public_url,
-                                 adminurl=admin_url,
-                                 internalurl=internal_url)
+                                 publicurl=publicurl,
+                                 adminurl=adminurl,
+                                 internalurl=internalurl)
     juju_log("Created new endpoint template for '%s' in '%s'" %
              (region, service))

@@ -411,12 +358,33 @@
     create_service_entry("keystone", "identity", "Keystone Identity Service")
     # following documentation here, perhaps we should be using juju
     # public/private addresses for public/internal urls.
-    public_url = "http://%s:%s/v2.0" % (config["hostname"], config["service-port"])
-    admin_url = "http://%s:%s/v2.0" % (config["hostname"], config["admin-port"])
-    internal_url = "http://%s:%s/v2.0" % (config["hostname"], config["service-port"])
-    create_endpoint_template("RegionOne", "keystone", public_url,
+    if is_clustered():
+        juju_log("Creating endpoint for clustered configuration")
+        for region in config['region'].split():
+            create_keystone_endpoint(service_host=config["vip"],
+                                     service_port=int(config["service-port"]) + 1,
+                                     auth_host=config["vip"],
+                                     auth_port=int(config["admin-port"]) + 1,
+                                     region=region)
+    else:
+        juju_log("Creating standard endpoint")
+        for region in config['region'].split():
+            create_keystone_endpoint(service_host=config["hostname"],
+                                     service_port=config["service-port"],
+                                     auth_host=config["hostname"],
+                                     auth_port=config["admin-port"],
+                                     region=region)
+
+
+def create_keystone_endpoint(service_host, service_port,
+                             auth_host, auth_port, region):
+    public_url = "http://%s:%s/v2.0" % (service_host, service_port)
+    admin_url = "http://%s:%s/v2.0" % (auth_host, auth_port)
+    internal_url = "http://%s:%s/v2.0" % (service_host, service_port)
+    create_endpoint_template(region, "keystone", public_url,
                              admin_url, internal_url)

+
 def update_user_password(username, password):
     import manager
     manager = manager.KeystoneManager(endpoint='http://localhost:35357/v2.0/',
@@ -431,6 +399,15 @@
     juju_log("Successfully updated password for user '%s'" % username)


+def configure_pki_tokens(config):
+    '''Configure PKI token signing, if enabled.'''
+    if config['enable-pki'] not in ['True', 'true']:
+        update_config_block('signing', token_format='UUID')
+    else:
+        juju_log('TODO: PKI Support, setting to UUID for now.')
+        update_config_block('signing', token_format='UUID')
+
+
 def do_openstack_upgrade(install_src, packages):
     '''Upgrade packages from a given install src.'''

@@ -474,9 +451,34 @@
                                        relation_data["private-address"],
                                        config["database"]))

-    juju_log('Running database migrations for %s' % new_vers)
     execute('service keystone stop', echo=True)
-    execute('keystone-manage db_sync', echo=True, die=True)
+    if ((is_clustered() and is_leader()) or
+        not is_clustered()):
+        juju_log('Running database migrations for %s' % new_vers)
+        execute('keystone-manage db_sync', echo=True, die=True)
+    else:
+        juju_log('Not cluster leader; snoozing whilst leader upgrades DB')
+        time.sleep(10)
     execute('service keystone start', echo=True)
     time.sleep(5)
     juju_log('Completed Keystone upgrade: %s -> %s' % (old_vers, new_vers))
+
+
+def is_clustered():
+    for r_id in (relation_ids('ha') or []):
+        for unit in (relation_list(r_id) or []):
+            relation_data = \
+                relation_get_dict(relation_id=r_id,
+                                  remote_unit=unit)
+            if 'clustered' in relation_data:
+                return True
+    return False
+
+
+def is_leader():
+    status = execute('crm resource show res_ks_vip', echo=True)[0].strip()
+    hostname = execute('hostname', echo=True)[0].strip()
+    if hostname in status:
+        return True
+    else:
+        return False

=== modified file 'metadata.yaml'
--- metadata.yaml 2012-06-07 17:43:42 +0000
+++ metadata.yaml 2013-01-31 11:59:26 +0000
@@ -11,3 +11,9 @@
 requires:
   shared-db:
     interface: mysql-shared
+  ha:
+    interface: hacluster
+    scope: container
+peers:
+  cluster:
+    interface: keystone-ha

=== modified file 'revision'
--- revision 2012-12-12 03:52:01 +0000
+++ revision 2013-01-31 11:59:26 +0000
@@ -1,1 +1,1 @@
-165
+195

=== added directory 'templates'
=== added file 'templates/haproxy.cfg'
--- templates/haproxy.cfg 1970-01-01 00:00:00 +0000
+++ templates/haproxy.cfg 2013-01-31 11:59:26 +0000
@@ -0,0 +1,35 @@
+global
+    log 127.0.0.1 local0
+    log 127.0.0.1 local1 notice
+    maxconn 4096
+    user haproxy
+    group haproxy
+    spread-checks 0
+
+defaults
+    log global
+    mode http
+    option httplog
+    option dontlognull
+    retries 3
+    timeout queue 1000
+    timeout connect 1000
+    timeout client 1000
+    timeout server 1000
+
+listen stats :8888
+    mode http
+    stats enable
+    stats hide-version
+    stats realm Haproxy\ Statistics
+    stats uri /
+    stats auth admin:password
+
+{% for service, port in service_ports.iteritems() -%}
+listen {{ service }} 0.0.0.0:{{ port }}
+    balance roundrobin
+    option tcplog
+    {% for unit, address in units.iteritems() -%}
+    server {{ unit }} {{ address }}:{{ port - 1 }} check
+    {% endfor %}
+{% endfor %}
