Merge lp:~openstack-charmers/charms/precise/mysql/ha-support into lp:charms/mysql

Proposed by James Page
Status: Merged
Merged at revision: 97
Proposed branch: lp:~openstack-charmers/charms/precise/mysql/ha-support
Merge into: lp:charms/mysql
Diff against target: 1376 lines (+1044/-149)
19 files modified
config.yaml (+36/-0)
hooks/common.py (+2/-2)
hooks/config-changed (+1/-1)
hooks/ha_relations.py (+146/-0)
hooks/install (+2/-1)
hooks/lib/ceph_utils.py (+256/-0)
hooks/lib/cluster_utils.py (+130/-0)
hooks/lib/utils.py (+283/-0)
hooks/master-relation-changed (+1/-1)
hooks/monitors.common.bash (+1/-1)
hooks/shared-db-relations (+0/-137)
hooks/shared_db_relations.py (+139/-0)
hooks/slave-relation-broken (+2/-2)
hooks/slave-relation-changed (+2/-2)
hooks/upgrade-charm (+17/-1)
metadata.yaml (+8/-0)
revision (+1/-1)
scripts/add_to_cluster (+13/-0)
scripts/remove_from_cluster (+4/-0)
To merge this branch: bzr merge lp:~openstack-charmers/charms/precise/mysql/ha-support
Reviewer Review Type Date Requested Status
Marco Ceppi (community) Approve
charmers Pending
Review via email: mp+165059@code.launchpad.net

Description of the change

This branch constitutes the work done over the last six months to enable HA support in the MySQL charm for OpenStack HA deployments.
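For context, a minimal deployment exercising the new HA support could look like the following. This is a sketch only: the service names, unit counts, and VIP are illustrative, and it assumes the hacluster and ceph charms are deployed alongside.

  juju deploy -n 2 mysql
  juju deploy -n 3 ceph
  juju deploy hacluster mysql-hacluster
  juju set mysql vip=192.168.77.8 vip_iface=eth0
  juju add-relation mysql ceph
  juju add-relation mysql mysql-hacluster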

Marco Ceppi (marcoceppi) wrote :

Reviewing now

Marco Ceppi (marcoceppi) wrote :

Hi James, thanks for the work on this. I've run it through its paces and everything seems to work with regard to upgrading from the current store version, so fantastic job on that.

My only question is about rbd-name: shouldn't this be unique per MySQL deployment? I can foresee users stomping on each other by deploying two separate MySQL services against Ceph without changing the rbd-name during deployment. Would it not be better to just use the SERVICE_NAME when creating the volume? Is there a use case where you'd need to set it manually that I'm not seeing?

Outside of that question, this work looks fantastic! I look forward to your response so I can get this merged in quickly :)

review: Needs Information
Andres Rodriguez (andreserl) wrote :

Hi Marco,

To answer your question: no, it is not required to be unique per MySQL deployment. The way this works is that SERVICE_NAME creates a pool in which the image named by rbd-name is created. So, for example, if I deploy 3 mysql units against 1 ceph for HA (juju deploy -n 3 mysql), they all need access to the same image (hence the same pool).

However, if I deploy multiple mysql services (with various units) as mysqlX, mysqlY, etc. (juju deploy mysql mysqlX && juju deploy mysql mysqlY) against 1 ceph, each service gets access to its own corresponding pool in which the rbd-name image is created. So there would be two pools, 'mysqlx' and 'mysqly', each with an image called 'mysql1', as sketched below.
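To illustrate (hypothetical service names; per the branch, the pool is taken from the juju service name and the image from the rbd-name option, which defaults to mysql1):

  juju deploy mysql mysqlx    # units share Ceph pool 'mysqlx', image 'mysql1'
  juju deploy mysql mysqly    # units share Ceph pool 'mysqly', image 'mysql1'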

Hope this helps.

Regards.

Marco Ceppi (marcoceppi) wrote :

Hi Andres, thanks for the clarification. I was just worried about this line:

> If the image name exists in Ceph, it will be re-used and the data will be overwritten.

In the config.yaml. Based on your feedback, though, units of a single MySQL service sharing one RBD image inside that service's own pool is the intended use case (which makes sense as a config option). Consider this merged! Thanks!

review: Approve

Preview Diff

=== modified file 'config.yaml'
--- config.yaml 2013-03-26 20:50:20 +0000
+++ config.yaml 2013-05-22 10:37:31 +0000
@@ -31,3 +31,39 @@
     default: 'MIXED'
     type: string
     description: If binlogging is enabled, this is the format that will be used. Ignored when tuning-level == fast.
+  vip:
+    type: string
+    description: "Virtual IP to use to front mysql in ha configuration"
+  vip_iface:
+    type: string
+    default: eth0
+    description: "Network Interface where to place the Virtual IP"
+  vip_cidr:
+    type: int
+    default: 24
+    description: "Netmask that will be used for the Virtual IP"
+  ha-bindiface:
+    type: string
+    default: eth0
+    description: |
+      Default network interface on which HA cluster will bind to communication
+      with the other members of the HA Cluster.
+  ha-mcastport:
+    type: int
+    default: 5411
+    description: |
+      Default multicast port number that will be used to communicate between
+      HA Cluster nodes.
+  block-size:
+    type: int
+    default: 5
+    description: |
+      Default block storage size to create when setting up MySQL block storage.
+      This value should be specified in GB (e.g. 100 not 100GB).
+  rbd-name:
+    type: string
+    default: mysql1
+    description: |
+      The name that will be used to create the Ceph's RBD image with. If the
+      image name exists in Ceph, it will be re-used and the data will be
+      overwritten.
=== added symlink 'hooks/ceph-relation-changed'
=== target is u'ha_relations.py'
=== added symlink 'hooks/ceph-relation-joined'
=== target is u'ha_relations.py'
=== added symlink 'hooks/cluster-relation-changed'
=== target is u'ha_relations.py'
=== modified file 'hooks/common.py'
--- hooks/common.py 2012-11-29 16:31:14 +0000
+++ hooks/common.py 2013-05-22 10:37:31 +0000
@@ -7,7 +7,7 @@
 import uuid
 
 
 def get_service_user_file(service):
-    return '/var/lib/juju/%s.service_user2' % service
+    return '/var/lib/mysql/%s.service_user2' % service
 
 
 def get_service_user(service):
@@ -58,7 +58,7 @@
 
 def get_db_cursor():
     # Connect to mysql
-    passwd = open("/var/lib/juju/mysql.passwd").read().strip()
+    passwd = open("/var/lib/mysql/mysql.passwd").read().strip()
     connection = MySQLdb.connect(user="root", host="localhost", passwd=passwd)
     return connection.cursor()
 
=== modified file 'hooks/config-changed'
--- hooks/config-changed 2012-11-21 17:14:24 +0000
+++ hooks/config-changed 2013-05-22 10:37:31 +0000
@@ -89,7 +89,7 @@
     check_call(['add-apt-repository','-y','deb http://%s %s main' % (source, series)])
     check_call(['apt-get','update'])
 
-with open('/var/lib/juju/mysql.passwd','r') as rpw:
+with open('/var/lib/mysql/mysql.passwd','r') as rpw:
     root_pass = rpw.read()
 
 dconf = Popen(['debconf-set-selections'], stdin=PIPE)
=== added symlink 'hooks/ha-relation-changed'
=== target is u'ha_relations.py'
=== added symlink 'hooks/ha-relation-joined'
=== target is u'ha_relations.py'
=== added file 'hooks/ha_relations.py'
--- hooks/ha_relations.py 1970-01-01 00:00:00 +0000
+++ hooks/ha_relations.py 2013-05-22 10:37:31 +0000
@@ -0,0 +1,146 @@
+#!/usr/bin/env python
+
+import sys
+import os
+
+import lib.utils as utils
+import lib.ceph_utils as ceph
+import lib.cluster_utils as cluster
+
+# CEPH
+DATA_SRC_DST = '/var/lib/mysql'
+SERVICE_NAME = os.getenv('JUJU_UNIT_NAME').split('/')[0]
+POOL_NAME = SERVICE_NAME
+LEADER_RES = 'res_mysql_vip'
+
+
+def ha_relation_joined():
+    vip = utils.config_get('vip')
+    vip_iface = utils.config_get('vip_iface')
+    vip_cidr = utils.config_get('vip_cidr')
+    corosync_bindiface = utils.config_get('ha-bindiface')
+    corosync_mcastport = utils.config_get('ha-mcastport')
+
+    if None in [vip, vip_cidr, vip_iface]:
+        utils.juju_log('WARNING',
+                       'Insufficient VIP information to configure cluster')
+        sys.exit(1)
+
+    # Starting configuring resources.
+    init_services = {
+        'res_mysqld': 'mysql',
+    }
+
+    # If the 'ha' relation has been made *before* the 'ceph' relation,
+    # it doesn't make sense to make it until after the 'ceph' relation is made
+    if not utils.is_relation_made('ceph', 'auth'):
+        utils.juju_log('INFO',
+                       '*ceph* relation does not exist. '
+                       'Not sending *ha* relation data yet')
+        return
+    else:
+        utils.juju_log('INFO',
+                       '*ceph* relation exists. Sending *ha* relation data')
+
+    block_storage = 'ceph'
+
+    resources = {
+        'res_mysql_rbd': 'ocf:ceph:rbd',
+        'res_mysql_fs': 'ocf:heartbeat:Filesystem',
+        'res_mysql_vip': 'ocf:heartbeat:IPaddr2',
+        'res_mysqld': 'upstart:mysql',
+    }
+
+    rbd_name = utils.config_get('rbd-name')
+    resource_params = {
+        'res_mysql_rbd': 'params name="%s" pool="%s" user="%s" '
+                         'secret="%s"' % \
+                         (rbd_name, POOL_NAME,
+                          SERVICE_NAME, ceph.keyfile_path(SERVICE_NAME)),
+        'res_mysql_fs': 'params device="/dev/rbd/%s/%s" directory="%s" '
+                        'fstype="ext4" op start start-delay="10s"' % \
+                        (POOL_NAME, rbd_name, DATA_SRC_DST),
+        'res_mysql_vip': 'params ip="%s" cidr_netmask="%s" nic="%s"' % \
+                         (vip, vip_cidr, vip_iface),
+        'res_mysqld': 'op start start-delay="5s" op monitor interval="5s"',
+    }
+
+    groups = {
+        'grp_mysql': 'res_mysql_rbd res_mysql_fs res_mysql_vip res_mysqld',
+    }
+
+    for rel_id in utils.relation_ids('ha'):
+        utils.relation_set(rid=rel_id,
+                           block_storage=block_storage,
+                           corosync_bindiface=corosync_bindiface,
+                           corosync_mcastport=corosync_mcastport,
+                           resources=resources,
+                           resource_params=resource_params,
+                           init_services=init_services,
+                           groups=groups)
+
+
+def ha_relation_changed():
+    clustered = utils.relation_get('clustered')
+    if (clustered and cluster.is_leader(LEADER_RES)):
+        utils.juju_log('INFO', 'Cluster configured, notifying other services')
+        # Tell all related services to start using the VIP
+        for r_id in utils.relation_ids('shared-db'):
+            utils.relation_set(rid=r_id,
+                               db_host=utils.config_get('vip'))
+
+
+def ceph_joined():
+    utils.juju_log('INFO', 'Start Ceph Relation Joined')
+    ceph.install()
+    utils.juju_log('INFO', 'Finish Ceph Relation Joined')
+
+
+def ceph_changed():
+    utils.juju_log('INFO', 'Start Ceph Relation Changed')
+    auth = utils.relation_get('auth')
+    key = utils.relation_get('key')
+    if None in [auth, key]:
+        utils.juju_log('INFO', 'Missing key or auth in relation')
+        return
+
+    ceph.configure(service=SERVICE_NAME, key=key, auth=auth)
+
+    if cluster.eligible_leader(LEADER_RES):
+        sizemb = int(utils.config_get('block-size')) * 1024
+        rbd_img = utils.config_get('rbd-name')
+        blk_device = '/dev/rbd/%s/%s' % (POOL_NAME, rbd_img)
+        ceph.ensure_ceph_storage(service=SERVICE_NAME, pool=POOL_NAME,
+                                 rbd_img=rbd_img, sizemb=sizemb,
+                                 fstype='ext4', mount_point=DATA_SRC_DST,
+                                 blk_device=blk_device,
+                                 system_services=['mysql'])
+    else:
+        utils.juju_log('INFO',
+                       'This is not the peer leader. Not configuring RBD.')
+        # Stopping MySQL
+        if utils.running('mysql'):
+            utils.juju_log('INFO', 'Stopping MySQL...')
+            utils.stop('mysql')
+
+    # If 'ha' relation has been made before the 'ceph' relation
+    # it is important to make sure the ha-relation data is being
+    # sent.
+    if utils.is_relation_made('ha'):
+        utils.juju_log('INFO',
+                       '*ha* relation exists. Making sure the ha'
+                       ' relation data is sent.')
+        ha_relation_joined()
+        return
+
+    utils.juju_log('INFO', 'Finish Ceph Relation Changed')
+
+
+hooks = {
+    "ha-relation-joined": ha_relation_joined,
+    "ha-relation-changed": ha_relation_changed,
+    "ceph-relation-joined": ceph_joined,
+    "ceph-relation-changed": ceph_changed,
+}
+
+utils.do_hooks(hooks)
=== modified file 'hooks/install'
--- hooks/install 2012-11-01 21:49:21 +0000
+++ hooks/install 2013-05-22 10:37:31 +0000
@@ -3,8 +3,9 @@
 apt-get update
 apt-get install -y debconf-utils python-mysqldb uuid pwgen dnsutils charm-helper-sh || exit 1
 
-PASSFILE=/var/lib/juju/mysql.passwd
+PASSFILE=/var/lib/mysql/mysql.passwd
 if ! [ -f $PASSFILE ] ; then
+  mkdir -p /var/lib/mysql
   touch $PASSFILE
 fi
 chmod 0600 $PASSFILE
=== added directory 'hooks/lib'
=== added file 'hooks/lib/__init__.py'
=== added file 'hooks/lib/ceph_utils.py'
--- hooks/lib/ceph_utils.py 1970-01-01 00:00:00 +0000
+++ hooks/lib/ceph_utils.py 2013-05-22 10:37:31 +0000
@@ -0,0 +1,256 @@
+#
+# Copyright 2012 Canonical Ltd.
+#
+# This file is sourced from lp:openstack-charm-helpers
+#
+# Authors:
+#  James Page <james.page@ubuntu.com>
+#  Adam Gandelman <adamg@ubuntu.com>
+#
+
+import commands
+import subprocess
+import os
+import shutil
+import lib.utils as utils
+
+KEYRING = '/etc/ceph/ceph.client.%s.keyring'
+KEYFILE = '/etc/ceph/ceph.client.%s.key'
+
+CEPH_CONF = """[global]
+ auth supported = %(auth)s
+ keyring = %(keyring)s
+ mon host = %(mon_hosts)s
+"""
+
+
+def execute(cmd):
+    subprocess.check_call(cmd)
+
+
+def execute_shell(cmd):
+    subprocess.check_call(cmd, shell=True)
+
+
+def install():
+    ceph_dir = "/etc/ceph"
+    if not os.path.isdir(ceph_dir):
+        os.mkdir(ceph_dir)
+    utils.install('ceph-common')
+
+
+def rbd_exists(service, pool, rbd_img):
+    (rc, out) = commands.getstatusoutput('rbd list --id %s --pool %s' %\
+                                         (service, pool))
+    return rbd_img in out
+
+
+def create_rbd_image(service, pool, image, sizemb):
+    cmd = [
+        'rbd',
+        'create',
+        image,
+        '--size',
+        str(sizemb),
+        '--id',
+        service,
+        '--pool',
+        pool
+        ]
+    execute(cmd)
+
+
+def pool_exists(service, name):
+    (rc, out) = commands.getstatusoutput("rados --id %s lspools" % service)
+    return name in out
+
+
+def create_pool(service, name):
+    cmd = [
+        'rados',
+        '--id',
+        service,
+        'mkpool',
+        name
+        ]
+    execute(cmd)
+
+
+def keyfile_path(service):
+    return KEYFILE % service
+
+
+def keyring_path(service):
+    return KEYRING % service
+
+
+def create_keyring(service, key):
+    keyring = keyring_path(service)
+    if os.path.exists(keyring):
+        utils.juju_log('INFO', 'ceph: Keyring exists at %s.' % keyring)
+    cmd = [
+        'ceph-authtool',
+        keyring,
+        '--create-keyring',
+        '--name=client.%s' % service,
+        '--add-key=%s' % key
+        ]
+    execute(cmd)
+    utils.juju_log('INFO', 'ceph: Created new ring at %s.' % keyring)
+
+
+def create_key_file(service, key):
+    # create a file containing the key
+    keyfile = keyfile_path(service)
+    if os.path.exists(keyfile):
+        utils.juju_log('INFO', 'ceph: Keyfile exists at %s.' % keyfile)
+    fd = open(keyfile, 'w')
+    fd.write(key)
+    fd.close()
+    utils.juju_log('INFO', 'ceph: Created new keyfile at %s.' % keyfile)
+
+
+def get_ceph_nodes():
+    hosts = []
+    for r_id in utils.relation_ids('ceph'):
+        for unit in utils.relation_list(r_id):
+            hosts.append(utils.relation_get('private-address',
+                                            unit=unit, rid=r_id))
+    return hosts
+
+
+def configure(service, key, auth):
+    create_keyring(service, key)
+    create_key_file(service, key)
+    hosts = get_ceph_nodes()
+    mon_hosts = ",".join(map(str, hosts))
+    keyring = keyring_path(service)
+    with open('/etc/ceph/ceph.conf', 'w') as ceph_conf:
+        ceph_conf.write(CEPH_CONF % locals())
+    modprobe_kernel_module('rbd')
+
+
+def image_mapped(image_name):
+    (rc, out) = commands.getstatusoutput('rbd showmapped')
+    return image_name in out
+
+
+def map_block_storage(service, pool, image):
+    cmd = [
+        'rbd',
+        'map',
+        '%s/%s' % (pool, image),
+        '--user',
+        service,
+        '--secret',
+        keyfile_path(service),
+        ]
+    execute(cmd)
+
+
+def filesystem_mounted(fs):
+    return subprocess.call(['grep', '-wqs', fs, '/proc/mounts']) == 0
+
+
+def make_filesystem(blk_device, fstype='ext4'):
+    utils.juju_log('INFO',
+                   'ceph: Formatting block device %s as filesystem %s.' %\
+                   (blk_device, fstype))
+    cmd = ['mkfs', '-t', fstype, blk_device]
+    execute(cmd)
+
+
+def place_data_on_ceph(service, blk_device, data_src_dst, fstype='ext4'):
+    # mount block device into /mnt
+    cmd = ['mount', '-t', fstype, blk_device, '/mnt']
+    execute(cmd)
+
+    # copy data to /mnt
+    try:
+        copy_files(data_src_dst, '/mnt')
+    except:
+        pass
+
+    # umount block device
+    cmd = ['umount', '/mnt']
+    execute(cmd)
+
+    _dir = os.stat(data_src_dst)
+    uid = _dir.st_uid
+    gid = _dir.st_gid
+
+    # re-mount where the data should originally be
+    cmd = ['mount', '-t', fstype, blk_device, data_src_dst]
+    execute(cmd)
+
+    # ensure original ownership of new mount.
+    cmd = ['chown', '-R', '%s:%s' % (uid, gid), data_src_dst]
+    execute(cmd)
+
+
+# TODO: re-use
+def modprobe_kernel_module(module):
+    utils.juju_log('INFO', 'Loading kernel module')
+    cmd = ['modprobe', module]
+    execute(cmd)
+    cmd = 'echo %s >> /etc/modules' % module
+    execute_shell(cmd)
+
+
+def copy_files(src, dst, symlinks=False, ignore=None):
+    for item in os.listdir(src):
+        s = os.path.join(src, item)
+        d = os.path.join(dst, item)
+        if os.path.isdir(s):
+            shutil.copytree(s, d, symlinks, ignore)
+        else:
+            shutil.copy2(s, d)
+
+
+def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
+                        blk_device, fstype, system_services=[]):
+    """
+    To be called from the current cluster leader.
+    Ensures given pool and RBD image exists, is mapped to a block device,
+    and the device is formatted and mounted at the given mount_point.
+
+    If formatting a device for the first time, data existing at mount_point
+    will be migrated to the RBD device before being remounted.
+
+    All services listed in system_services will be stopped prior to data
+    migration and restarted when complete.
+    """
+    # Ensure pool, RBD image, RBD mappings are in place.
+    if not pool_exists(service, pool):
+        utils.juju_log('INFO', 'ceph: Creating new pool %s.' % pool)
+        create_pool(service, pool)
+
+    if not rbd_exists(service, pool, rbd_img):
+        utils.juju_log('INFO', 'ceph: Creating RBD image (%s).' % rbd_img)
+        create_rbd_image(service, pool, rbd_img, sizemb)
+
+    if not image_mapped(rbd_img):
+        utils.juju_log('INFO', 'ceph: Mapping RBD Image as a Block Device.')
+        map_block_storage(service, pool, rbd_img)
+
+    # make file system
+    # TODO: What happens if for whatever reason this is run again and
+    # the data is already in the rbd device and/or is mounted??
+    # When it is mounted already, it will fail to make the fs
+    # XXX: This is really sketchy!  Need to at least add an fstab entry
+    # otherwise this hook will blow away existing data if its executed
+    # after a reboot.
+    if not filesystem_mounted(mount_point):
+        make_filesystem(blk_device, fstype)
+
+        for svc in system_services:
+            if utils.running(svc):
+                utils.juju_log('INFO',
+                               'Stopping services %s prior to migrating '\
+                               'data' % svc)
+                utils.stop(svc)
+
+        place_data_on_ceph(service, blk_device, mount_point, fstype)
+
+        for svc in system_services:
+            utils.start(svc)
=== added file 'hooks/lib/cluster_utils.py'
--- hooks/lib/cluster_utils.py 1970-01-01 00:00:00 +0000
+++ hooks/lib/cluster_utils.py 2013-05-22 10:37:31 +0000
@@ -0,0 +1,130 @@
+#
+# Copyright 2012 Canonical Ltd.
+#
+# This file is sourced from lp:openstack-charm-helpers
+#
+# Authors:
+#  James Page <james.page@ubuntu.com>
+#  Adam Gandelman <adamg@ubuntu.com>
+#
+
+from lib.utils import (
+    juju_log,
+    relation_ids,
+    relation_list,
+    relation_get,
+    get_unit_hostname,
+    config_get
+    )
+import subprocess
+import os
+
+
+def is_clustered():
+    for r_id in (relation_ids('ha') or []):
+        for unit in (relation_list(r_id) or []):
+            clustered = relation_get('clustered',
+                                     rid=r_id,
+                                     unit=unit)
+            if clustered:
+                return True
+    return False
+
+
+def is_leader(resource):
+    cmd = [
+        "crm", "resource",
+        "show", resource
+        ]
+    try:
+        status = subprocess.check_output(cmd)
+    except subprocess.CalledProcessError:
+        return False
+    else:
+        if get_unit_hostname() in status:
+            return True
+        else:
+            return False
+
+
+def peer_units():
+    peers = []
+    for r_id in (relation_ids('cluster') or []):
+        for unit in (relation_list(r_id) or []):
+            peers.append(unit)
+    return peers
+
+
+def oldest_peer(peers):
+    local_unit_no = os.getenv('JUJU_UNIT_NAME').split('/')[1]
+    for peer in peers:
+        remote_unit_no = peer.split('/')[1]
+        if remote_unit_no < local_unit_no:
+            return False
+    return True
+
+
+def eligible_leader(resource):
+    if is_clustered():
+        if not is_leader(resource):
+            juju_log('INFO', 'Deferring action to CRM leader.')
+            return False
+    else:
+        peers = peer_units()
+        if peers and not oldest_peer(peers):
+            juju_log('INFO', 'Deferring action to oldest service unit.')
+            return False
+    return True
+
+
+def https():
+    '''
+    Determines whether enough data has been provided in configuration
+    or relation data to configure HTTPS
+    .
+    returns: boolean
+    '''
+    if config_get('use-https') == "yes":
+        return True
+    if config_get('ssl_cert') and config_get('ssl_key'):
+        return True
+    for r_id in relation_ids('identity-service'):
+        for unit in relation_list(r_id):
+            if (relation_get('https_keystone', rid=r_id, unit=unit) and
+                relation_get('ssl_cert', rid=r_id, unit=unit) and
+                relation_get('ssl_key', rid=r_id, unit=unit) and
+                relation_get('ca_cert', rid=r_id, unit=unit)):
+                return True
+    return False
+
+
+def determine_api_port(public_port):
+    '''
+    Determine correct API server listening port based on
+    existence of HTTPS reverse proxy and/or haproxy.
+
+    public_port: int: standard public port for given service
+
+    returns: int: the correct listening port for the API service
+    '''
+    i = 0
+    if len(peer_units()) > 0 or is_clustered():
+        i += 1
+    if https():
+        i += 1
+    return public_port - (i * 10)
+
+
+def determine_haproxy_port(public_port):
+    '''
+    Description: Determine correct proxy listening port based on public IP +
+    existence of HTTPS reverse proxy.
+
+    public_port: int: standard public port for given service
+
+    returns: int: the correct listening port for the HAProxy service
+    '''
+    i = 0
+    if https():
+        i += 1
+    return public_port - (i * 10)
=== added file 'hooks/lib/utils.py'
--- hooks/lib/utils.py 1970-01-01 00:00:00 +0000
+++ hooks/lib/utils.py 2013-05-22 10:37:31 +0000
@@ -0,0 +1,283 @@
+#
+# Copyright 2012 Canonical Ltd.
+#
+# This file is sourced from lp:openstack-charm-helpers
+#
+# Authors:
+#  James Page <james.page@ubuntu.com>
+#  Paul Collins <paul.collins@canonical.com>
+#  Adam Gandelman <adamg@ubuntu.com>
+#
+
+import json
+import os
+import subprocess
+import socket
+import sys
+
+
+def do_hooks(hooks):
+    hook = os.path.basename(sys.argv[0])
+
+    try:
+        hook_func = hooks[hook]
+    except KeyError:
+        juju_log('INFO',
+                 "This charm doesn't know how to handle '{}'.".format(hook))
+    else:
+        hook_func()
+
+
+def install(*pkgs):
+    cmd = [
+        'apt-get',
+        '-y',
+        'install'
+        ]
+    for pkg in pkgs:
+        cmd.append(pkg)
+    subprocess.check_call(cmd)
+
+TEMPLATES_DIR = 'templates'
+
+try:
+    import jinja2
+except ImportError:
+    install('python-jinja2')
+    import jinja2
+
+try:
+    import dns.resolver
+except ImportError:
+    install('python-dnspython')
+    import dns.resolver
+
+
+def render_template(template_name, context, template_dir=TEMPLATES_DIR):
+    templates = jinja2.Environment(
+        loader=jinja2.FileSystemLoader(template_dir)
+        )
+    template = templates.get_template(template_name)
+    return template.render(context)
+
+CLOUD_ARCHIVE = \
+""" # Ubuntu Cloud Archive
+deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
+"""
+
+CLOUD_ARCHIVE_POCKETS = {
+    'folsom': 'precise-updates/folsom',
+    'folsom/updates': 'precise-updates/folsom',
+    'folsom/proposed': 'precise-proposed/folsom',
+    'grizzly': 'precise-updates/grizzly',
+    'grizzly/updates': 'precise-updates/grizzly',
+    'grizzly/proposed': 'precise-proposed/grizzly'
+    }
+
+
+def configure_source():
+    source = str(config_get('openstack-origin'))
+    if not source:
+        return
+    if source.startswith('ppa:'):
+        cmd = [
+            'add-apt-repository',
+            source
+            ]
+        subprocess.check_call(cmd)
+    if source.startswith('cloud:'):
+        install('ubuntu-cloud-keyring')
+        pocket = source.split(':')[1]
+        with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
+            apt.write(CLOUD_ARCHIVE.format(CLOUD_ARCHIVE_POCKETS[pocket]))
+    if source.startswith('deb'):
+        l = len(source.split('|'))
+        if l == 2:
+            (apt_line, key) = source.split('|')
+            cmd = [
+                'apt-key',
+                'adv', '--keyserver keyserver.ubuntu.com',
+                '--recv-keys', key
+                ]
+            subprocess.check_call(cmd)
+        elif l == 1:
+            apt_line = source
+
+        with open('/etc/apt/sources.list.d/quantum.list', 'w') as apt:
+            apt.write(apt_line + "\n")
+        cmd = [
+            'apt-get',
+            'update'
+            ]
+        subprocess.check_call(cmd)
+
+# Protocols
+TCP = 'TCP'
+UDP = 'UDP'
+
+
+def expose(port, protocol='TCP'):
+    cmd = [
+        'open-port',
+        '{}/{}'.format(port, protocol)
+        ]
+    subprocess.check_call(cmd)
+
+
+def juju_log(severity, message):
+    cmd = [
+        'juju-log',
+        '--log-level', severity,
+        message
+        ]
+    subprocess.check_call(cmd)
+
+
+def relation_ids(relation):
+    cmd = [
+        'relation-ids',
+        relation
+        ]
+    result = str(subprocess.check_output(cmd)).split()
+    if result == "":
+        return None
+    else:
+        return result
+
+
+def relation_list(rid):
+    cmd = [
+        'relation-list',
+        '-r', rid,
+        ]
+    result = str(subprocess.check_output(cmd)).split()
+    if result == "":
+        return None
+    else:
+        return result
+
+
+def relation_get(attribute, unit=None, rid=None):
+    cmd = [
+        'relation-get',
+        ]
+    if rid:
+        cmd.append('-r')
+        cmd.append(rid)
+    cmd.append(attribute)
+    if unit:
+        cmd.append(unit)
+    value = subprocess.check_output(cmd).strip()  # IGNORE:E1103
+    if value == "":
+        return None
+    else:
+        return value
+
+
+def relation_set(**kwargs):
+    cmd = [
+        'relation-set'
+        ]
+    args = []
+    for k, v in kwargs.items():
+        if k == 'rid':
+            if v:
+                cmd.append('-r')
+                cmd.append(v)
+        else:
+            args.append('{}={}'.format(k, v))
+    cmd += args
+    subprocess.check_call(cmd)
+
+
+def unit_get(attribute):
+    cmd = [
+        'unit-get',
+        attribute
+        ]
+    value = subprocess.check_output(cmd).strip()  # IGNORE:E1103
+    if value == "":
+        return None
+    else:
+        return value
+
+
+def config_get(attribute):
+    cmd = [
+        'config-get',
+        '--format',
+        'json',
+        ]
+    out = subprocess.check_output(cmd).strip()  # IGNORE:E1103
+    cfg = json.loads(out)
+
+    try:
+        return cfg[attribute]
+    except KeyError:
+        return None
+
+
+def get_unit_hostname():
+    return socket.gethostname()
+
+
+def get_host_ip(hostname=unit_get('private-address')):
+    try:
+        # Test to see if already an IPv4 address
+        socket.inet_aton(hostname)
+        return hostname
+    except socket.error:
+        answers = dns.resolver.query(hostname, 'A')
+        if answers:
+            return answers[0].address
+    return None
+
+
+def _svc_control(service, action):
+    subprocess.check_call(['service', service, action])
+
+
+def restart(*services):
+    for service in services:
+        _svc_control(service, 'restart')
+
+
+def stop(*services):
+    for service in services:
+        _svc_control(service, 'stop')
+
+
+def start(*services):
+    for service in services:
+        _svc_control(service, 'start')
+
+
+def reload(*services):
+    for service in services:
+        try:
+            _svc_control(service, 'reload')
+        except subprocess.CalledProcessError:
+            # Reload failed - either service does not support reload
+            # or it was not running - restart will fixup most things
+            _svc_control(service, 'restart')
+
+
+def running(service):
+    try:
+        output = subprocess.check_output(['service', service, 'status'])
+    except subprocess.CalledProcessError:
+        return False
+    else:
+        if ("start/running" in output or
+            "is running" in output):
+            return True
+        else:
+            return False
+
+
+def is_relation_made(relation, key='private-address'):
+    for r_id in (relation_ids(relation) or []):
+        for unit in (relation_list(r_id) or []):
+            if relation_get(key, rid=r_id, unit=unit):
+                return True
+    return False
=== modified file 'hooks/master-relation-changed'
--- hooks/master-relation-changed 2012-11-02 06:41:12 +0000
+++ hooks/master-relation-changed 2013-05-22 10:37:31 +0000
@@ -6,7 +6,7 @@
 
 . /usr/share/charm-helper/sh/net.sh
 
-ROOTARGS="-uroot -p`cat /var/lib/juju/mysql.passwd`"
+ROOTARGS="-uroot -p`cat /var/lib/mysql/mysql.passwd`"
 snapdir=/var/www/snaps
 mkdir -p $snapdir
 apt-get -y install apache2
=== modified file 'hooks/monitors.common.bash'
--- hooks/monitors.common.bash 2012-07-12 21:58:36 +0000
+++ hooks/monitors.common.bash 2013-05-22 10:37:31 +0000
@@ -1,4 +1,4 @@
-MYSQL="mysql -uroot -p`cat /var/lib/juju/mysql.passwd`"
+MYSQL="mysql -uroot -p`cat /var/lib/mysql/mysql.passwd`"
 monitor_user=monitors
 . /usr/share/charm-helper/sh/net.sh
 if [ -n "$JUJU_REMOTE_UNIT" ] ; then
=== modified symlink 'hooks/shared-db-relation-changed'
=== target changed u'shared-db-relations' => u'shared_db_relations.py'
=== modified symlink 'hooks/shared-db-relation-joined'
=== target changed u'shared-db-relations' => u'shared_db_relations.py'
=== removed file 'hooks/shared-db-relations'
--- hooks/shared-db-relations 2012-12-03 20:21:07 +0000
+++ hooks/shared-db-relations 1970-01-01 00:00:00 +0000
@@ -1,137 +0,0 @@
-#!/usr/bin/python
-#
-# Create relations between a shared database to many peers.
-# Join does nothing. Peer requests access to $DATABASE from $REMOTE_HOST.
-# It's up to the hooks to ensure database exists, peer has access and
-# clean up grants after a broken/departed peer (TODO)
-#
-# Author: Adam Gandelman <adam.gandelman@canonical.com>
-
-
-from common import *
-import sys
-import subprocess
-import json
-import socket
-import os
-
-
-def pwgen():
-    return subprocess.check_output(['pwgen', '-s', '16']).strip()
-
-
-def relation_get():
-    return json.loads(subprocess.check_output(
-                        ['relation-get',
-                         '--format',
-                         'json']
-                        )
-                      )
-
-
-def relation_set(**kwargs):
-    cmd = [ 'relation-set' ]
-    args = []
-    for k, v in kwargs.items():
-        if k == 'rid':
-            cmd.append('-r')
-            cmd.append(v)
-        else:
-            args.append('{}={}'.format(k, v))
-    cmd += args
-    subprocess.check_call(cmd)
-
-
-def shared_db_changed():
-
-    def configure_db(hostname,
-                     database,
-                     username):
-        passwd_file = "/var/lib/juju/mysql-{}.passwd"\
-                        .format(username)
-        if hostname != local_hostname:
-            remote_ip = socket.gethostbyname(hostname)
-        else:
-            remote_ip = '127.0.0.1'
-
-        if not os.path.exists(passwd_file):
-            password = pwgen()
-            with open(passwd_file, 'w') as pfile:
-                pfile.write(password)
-        else:
-            with open(passwd_file) as pfile:
-                password = pfile.read().strip()
-
-        if not database_exists(database):
-            create_database(database)
-        if not grant_exists(database,
-                            username,
-                            remote_ip):
-            create_grant(database,
-                         username,
-                         remote_ip, password)
-        return password
-
-    settings = relation_get()
-    local_hostname = socket.getfqdn()
-    singleset = set([
-        'database',
-        'username',
-        'hostname'
-        ])
-
-    if singleset.issubset(settings):
-        # Process a single database configuration
-        password = configure_db(settings['hostname'],
-                                settings['database'],
-                                settings['username'])
-        relation_set(db_host=local_hostname,
-                     password=password)
-    else:
-        # Process multiple database setup requests.
-        # from incoming relation data:
-        #  nova_database=xxx nova_username=xxx nova_hostname=xxx
-        #  quantum_database=xxx quantum_username=xxx quantum_hostname=xxx
-        # create
-        #{
-        #   "nova": {
-        #        "username": xxx,
-        #        "database": xxx,
-        #        "hostname": xxx
-        #    },
-        #    "quantum": {
-        #        "username": xxx,
-        #        "database": xxx,
-        #        "hostname": xxx
-        #    }
-        #}
-        #
-        databases = {}
-        for k, v in settings.iteritems():
-            db = k.split('_')[0]
-            x = '_'.join(k.split('_')[1:])
-            if db not in databases:
-                databases[db] = {}
-            databases[db][x] = v
-        return_data = {}
-        for db in databases:
-            if singleset.issubset(databases[db]):
-                return_data['_'.join([ db, 'password' ])] = \
-                    configure_db(databases[db]['hostname'],
-                                 databases[db]['database'],
-                                 databases[db]['username'])
-        relation_set(**return_data)
-        relation_set(db_host=local_hostname)
-
-hook = os.path.basename(sys.argv[0])
-hooks = {
-    "shared-db-relation-changed": shared_db_changed
-    }
-try:
-    hook_func = hooks[hook]
-except KeyError:
-    pass
-else:
-    hook_func()
-
-sys.exit(0)
=== added file 'hooks/shared_db_relations.py'
--- hooks/shared_db_relations.py 1970-01-01 00:00:00 +0000
+++ hooks/shared_db_relations.py 2013-05-22 10:37:31 +0000
@@ -0,0 +1,139 @@
+#!/usr/bin/python
+#
+# Create relations between a shared database to many peers.
+# Join does nothing. Peer requests access to $DATABASE from $REMOTE_HOST.
+# It's up to the hooks to ensure database exists, peer has access and
+# clean up grants after a broken/departed peer (TODO)
+#
+# Author: Adam Gandelman <adam.gandelman@canonical.com>
+
+
+from common import (
+    database_exists,
+    create_database,
+    grant_exists,
+    create_grant
+    )
+import subprocess
+import json
+import socket
+import os
+import lib.utils as utils
+import lib.cluster_utils as cluster
+
+LEADER_RES = 'res_mysql_vip'
+
+
+def pwgen():
+    return str(subprocess.check_output(['pwgen', '-s', '16'])).strip()
+
+
+def relation_get():
+    return json.loads(subprocess.check_output(
+                        ['relation-get',
+                         '--format',
+                         'json']
+                        )
+                      )
+
+
+def shared_db_changed():
+
+    def configure_db(hostname,
+                     database,
+                     username):
+        passwd_file = "/var/lib/mysql/mysql-{}.passwd"\
+                        .format(username)
+        if hostname != local_hostname:
+            remote_ip = socket.gethostbyname(hostname)
+        else:
+            remote_ip = '127.0.0.1'
+
+        if not os.path.exists(passwd_file):
+            password = pwgen()
+            with open(passwd_file, 'w') as pfile:
+                pfile.write(password)
+        else:
+            with open(passwd_file) as pfile:
+                password = pfile.read().strip()
+
+        if not database_exists(database):
+            create_database(database)
+        if not grant_exists(database,
+                            username,
+                            remote_ip):
+            create_grant(database,
+                         username,
+                         remote_ip, password)
+        return password
+
+    if not cluster.eligible_leader(LEADER_RES):
+        utils.juju_log('INFO',
+                       'MySQL service is peered, bailing shared-db relation'
+                       ' as this service unit is not the leader')
+        return
+
+    settings = relation_get()
+    local_hostname = socket.getfqdn()
+    singleset = set([
+        'database',
+        'username',
+        'hostname'
+        ])
+
+    if singleset.issubset(settings):
+        # Process a single database configuration
+        password = configure_db(settings['hostname'],
+                                settings['database'],
+                                settings['username'])
+        if not cluster.is_clustered():
+            utils.relation_set(db_host=local_hostname,
+                               password=password)
+        else:
+            utils.relation_set(db_host=utils.config_get("vip"),
+                               password=password)
+
+    else:
+        # Process multiple database setup requests.
+        # from incoming relation data:
+        #  nova_database=xxx nova_username=xxx nova_hostname=xxx
+        #  quantum_database=xxx quantum_username=xxx quantum_hostname=xxx
+        # create
+        #{
+        #   "nova": {
+        #        "username": xxx,
+        #        "database": xxx,
+        #        "hostname": xxx
+        #    },
+        #    "quantum": {
+        #        "username": xxx,
+        #        "database": xxx,
+        #        "hostname": xxx
+        #    }
+        #}
+        #
+        databases = {}
+        for k, v in settings.iteritems():
+            db = k.split('_')[0]
+            x = '_'.join(k.split('_')[1:])
+            if db not in databases:
+                databases[db] = {}
+            databases[db][x] = v
+        return_data = {}
+        for db in databases:
+            if singleset.issubset(databases[db]):
+                return_data['_'.join([db, 'password'])] = \
+                    configure_db(databases[db]['hostname'],
+                                 databases[db]['database'],
+                                 databases[db]['username'])
+        utils.relation_set(**return_data)
+        if not cluster.is_clustered():
+            utils.relation_set(db_host=local_hostname)
+        else:
+            utils.relation_set(db_host=utils.config_get("vip"))
+
+hooks = {
+    "shared-db-relation-changed": shared_db_changed
+    }
+
+utils.do_hooks(hooks)
=== modified file 'hooks/slave-relation-broken'
--- hooks/slave-relation-broken 2011-12-06 21:23:39 +0000
+++ hooks/slave-relation-broken 2013-05-22 10:37:31 +0000
@@ -1,8 +1,8 @@
 #!/bin/sh
 
 # Kill the replication
-mysql -uroot -p`cat /var/lib/juju/mysql.passwd` -e 'STOP SLAVE;'
-mysql -uroot -p`cat /var/lib/juju/mysql.passwd` -e 'RESET SLAVE;'
+mysql -uroot -p`cat /var/lib/mysql/mysql.passwd` -e 'STOP SLAVE;'
+mysql -uroot -p`cat /var/lib/mysql/mysql.passwd` -e 'RESET SLAVE;'
 # No longer a slave
 # XXX this may change the server-id .. not sure if thats what we
 # want!
=== modified file 'hooks/slave-relation-changed'
--- hooks/slave-relation-changed 2011-12-06 21:17:17 +0000
+++ hooks/slave-relation-changed 2013-05-22 10:37:31 +0000
@@ -2,7 +2,7 @@
 
 set -e
 
-ROOTARGS="-uroot -p`cat /var/lib/juju/mysql.passwd`"
+ROOTARGS="-uroot -p`cat /var/lib/mysql/mysql.passwd`"
 
 # Others can join that service but only the lowest will be the master
 # Note that we could be more automatic but for now we will wait for
@@ -39,7 +39,7 @@
 curl --silent --show-error $dumpurl |zcat| mysql $ROOTARGS
 # Root pw gets overwritten by import
 echo Re-setting Root Pasword -- can use ours because it hasnt been flushed
-myrootpw=`cat /var/lib/juju/mysql.passwd`
+myrootpw=`cat /var/lib/mysql/mysql.passwd`
 mysqladmin -uroot -p$myrootpw password $myrootpw
 # Debian packages expect debian-sys-maint@localhost to be root privileged and
 # configured in /etc/mysql/debian.cnf. we just broke that.. fix it
=== modified file 'hooks/upgrade-charm'
--- hooks/upgrade-charm 2012-11-02 06:41:12 +0000
+++ hooks/upgrade-charm 2013-05-22 10:37:31 +0000
@@ -2,10 +2,26 @@
 home=`dirname $0`
 # Remove any existing .service_user files, which will cause
 # new users/pw's to be generated, which is a good thing
-old_service_user_files=$(ls /var/lib/juju/$.service_user)
+old_service_user_files=$(ls /var/lib/juju/*.service_user)
 if [ -n "$old_service_user_files" ] ; then
     juju-log -l WARNING "Stale users left around, should be revoked: $(cat $old_service_user_files)"
     rm -f $old_service_user_files
 fi
+
+# Move service_user2 files to /var/lib/mysql as they are
+# now stored there to support HA clustering with ceph.
+new_service_user_files=$(ls /var/lib/juju/*.service_user2)
+if [ -n "$new_service_user_files" ]; then
+    juju-log -l INFO "Moving service_user files [$new_service_user_files] to [/var/lib/mysql]"
+    mv $new_service_user_files /var/lib/mysql/
+fi
+# Move passwd files to /var/lib/mysql as they are
+# now stored there to support HA clustering with ceph.
+password_files=$(ls /var/lib/juju/*.passwd)
+if [ -n "$password_files" ]; then
+    juju-log -l INFO "Moving passwd files [$password_files] to [/var/lib/mysql]"
+    mv $password_files /var/lib/mysql/
+fi
+
 $home/install
 exec $home/config-changed
=== modified file 'metadata.yaml'
--- metadata.yaml 2013-04-22 15:04:34 +0000
+++ metadata.yaml 2013-05-22 10:37:31 +0000
@@ -23,6 +23,14 @@
   local-monitors:
     interface: local-monitors
     scope: container
+peers:
+  cluster:
+    interface: mysql-ha
 requires:
   slave:
     interface: mysql-oneway-replication
+  ceph:
+    interface: ceph-client
+  ha:
+    interface: hacluster
+    scope: container
=== modified file 'revision'
--- revision 2012-11-29 16:31:14 +0000
+++ revision 2013-05-22 10:37:31 +0000
@@ -1,1 +1,1 @@
-165
+306
=== added directory 'scripts'
=== added file 'scripts/add_to_cluster'
--- scripts/add_to_cluster 1970-01-01 00:00:00 +0000
+++ scripts/add_to_cluster 2013-05-22 10:37:31 +0000
@@ -0,0 +1,13 @@
+#!/bin/bash
+service corosync start || /bin/true
+sleep 2
+while ! service pacemaker start; do
+    echo "Attempting to start pacemaker"
+    sleep 1;
+done;
+crm node online
+sleep 2
+while crm status | egrep -q 'Stopped$'; do
+    echo "Waiting for nodes to come online"
+    sleep 1
+done
=== added file 'scripts/remove_from_cluster'
--- scripts/remove_from_cluster 1970-01-01 00:00:00 +0000
+++ scripts/remove_from_cluster 2013-05-22 10:37:31 +0000
@@ -0,0 +1,4 @@
1#!/bin/bash
2crm node standby
3service pacemaker stop
4service corosync stop
