Merge lp:~andreserl/charms/quantal/mysql/ceph-support into lp:~openstack-charmers/charms/quantal/mysql/hacluster-support

Proposed by Andres Rodriguez
Status: Merged
Merged at revision: 101
Proposed branch: lp:~andreserl/charms/quantal/mysql/ceph-support
Merge into: lp:~openstack-charmers/charms/quantal/mysql/hacluster-support
Diff against target: 1100 lines (+474/-362)
18 files modified
config.yaml (+14/-12)
hooks/ceph.py (+108/-0)
hooks/common.py (+2/-2)
hooks/config-changed (+1/-4)
hooks/drbd.py (+0/-130)
hooks/ha-relations (+236/-187)
hooks/install (+2/-1)
hooks/master-relation-changed (+1/-1)
hooks/monitors.common.bash (+1/-1)
hooks/shared-db-relations (+19/-4)
hooks/slave-relation-broken (+2/-2)
hooks/slave-relation-changed (+2/-2)
hooks/upgrade-charm (+17/-1)
hooks/utils.py (+61/-0)
metadata.yaml (+2/-0)
revision (+1/-1)
templates/ceph.conf (+5/-0)
templates/mysql.res (+0/-14)
To merge this branch: bzr merge lp:~andreserl/charms/quantal/mysql/ceph-support
Reviewer: James Page
Review status: Approve
Review via email: mp+148820@code.launchpad.net

Commit message

This branch removes support for DRBD and adds support for Ceph.

Revision history for this message
James Page (james-page) wrote :

1) ceph-relation-changed

The hook should figure out whether mysql is already running on the ceph rbd and, if so, avoid stopping and starting mysql.

Just checking for the presence of the device should be enough.

The rbd kernel module will pick up changes in /etc/ceph/ceph.conf automagically.
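
Something along these lines would do (a minimal sketch; the helper name and device argument are illustrative, not in the branch):

    import os
    import subprocess

    def mysql_already_on_rbd(blk_device, mountpoint='/var/lib/mysql'):
        # Device present and already mounted on the mysql data directory
        # means the hook can skip the stop/copy/start dance entirely.
        if not os.path.exists(blk_device):
            return False
        return subprocess.call(['grep', '-wqs', mountpoint, '/proc/mounts']) == 0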

2) Upgrade

The charm needs to move /var/lib/juju/*.passwd to /var/lib/mysql on upgrade.
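
Roughly this (a Python sketch only - the actual upgrade-charm hook is shell, and the function name is made up):

    import glob
    import shutil

    def move_passwd_files():
        # Relocate the password files to the ceph-backed data directory.
        for path in glob.glob('/var/lib/juju/*.passwd'):
            shutil.move(path, '/var/lib/mysql/')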

3) drbd.py

Drop it, as it is no longer used.

4) /etc/mysql/debian.cnf

This also needs to be synced from the master server, as it contains the Debian system maintenance password used for upgrades.
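
One possible approach (purely illustrative - not something this branch implements) would be to ship the file contents over the relation with the utils helpers the hooks already use:

    import utils

    def send_debian_cnf():
        # Master publishes its debian.cnf contents on the relation.
        with open('/etc/mysql/debian.cnf') as f:
            utils.relation_set(debian_cnf=f.read())

    def write_debian_cnf():
        # Peers write an identical copy if the master has published one.
        contents = utils.relation_get('debian_cnf')
        if contents:
            with open('/etc/mysql/debian.cnf', 'w') as f:
                f.write(contents)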

5) shared-db-relations

The is_clustered/eligible_leader check is too complicated - the eligible_leader check should be dealing with the is_clustered bit anyway:

    if not utils.eligible_leader():
        utils.juju_log('INFO',
                       'MySQL service is peered, bailing shared-db relation')
        return

This should be enough - am I the leader, whatever the context? No? OK, bail!

6) config.yaml

    vip:
        type: string
        default: None
        description: "Virtual IP to use to front keystone in ha configuration"

This is inconsistent with the rest of the charms that define this; I'd prefer it if you dropped the 'default: None' and just checked for the vip like this:

   if 'vip' in config:
      ...

Revision history for this message
James Page (james-page) wrote :

7) block-storage config option

I still don't like this - it can be inferred by looking at the relations a service has.

For instance, I kept forgetting to set it and ended up with a broken cluster!
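
The utils.is_relation_made() helper this branch adds already makes that possible - roughly:

    import utils

    # Sketch: infer the storage backend from the presence of a ceph relation
    # rather than a config option.
    if utils.is_relation_made('ceph'):
        block_storage = 'ceph'
    else:
        utils.juju_log('INFO', 'No ceph relation yet, deferring HA setup')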

Revision history for this message
Adam Gandelman (gandelman-a) wrote :

One suggestion I have is to add shared-db relation settings to signify the service's cluster status, similar to the hacluster charm's use of 'clustered=yes' in its relation to its principal. If the clustered status and VIP are passed to client services via the relation, the other side of the relation can do something like:

[[ "$(relation-get clustered)" == "True" ]] && db_host="$(relation-get vip)" || db_host="$(relation-get private-address)"
db_conn=mysql://$user:$passwd@$db_host/$database

Services should make use of private-address from the environment instead of hostnames/addresses provided via relation settings wherever possible. We're totally bending the rules at the moment to support the floating-IP/hacluster scenario, and I imagine it is going to break in environments with split-horizon DNS or complex routing setups. I'd like to avoid passing something like 'db_host' via the relation unless it's absolutely required (in a clustered deployment), and use the private-address from the environment in all other cases.

Revision history for this message
James Page (james-page) wrote :

Adam

Not sure I agree; surely it's the difference between doing the check on the mysql side (once) versus doing the same check on the other end of the relation (many times).

118. By Andres Rodriguez

Address some of James' concerns

119. By Andres Rodriguez

Remove no longer supported drbd installation

120. By Andres Rodriguez

Do some cleanup and ensure rados image creation is done in the leader

121. By Andres Rodriguez

Move service_user2 and passwd files from /var/lib/juju to /var/lib/mysql on upgrade

Revision history for this message
James Page (james-page) wrote :

We still need to sort out the mysql restarts in ceph-relation-changed and the syncing of files.

Let's do that as a separate MP.

review: Approve

Preview Diff

1=== modified file 'config.yaml'
2--- config.yaml 2013-01-16 21:46:58 +0000
3+++ config.yaml 2013-02-20 16:05:25 +0000
4@@ -50,18 +50,20 @@
5 with the other members of the HA Cluster.
6 ha-mcastport:
7 type: int
8- default: 5410
9+ default: 5411
10 description: |
11 Default multicast port number that will be used to communicate between
12 HA Cluster nodes.
13- block-storage:
14- type: string
15- default: None
16- description: |
17- Default block storage type to use when setting up MySQL in HA. Options
18- are 'drbd' or 'ceph'.
19- block-device:
20- type: string
21- default: "/dev/sdb1"
22- description: |
23- "The *available* block device to use for DRBD."
24+ block-size:
25+ type: string
26+ default: 5G
27+ description: |
28+ Default block storage size to create when setting up MySQL block storage.
29+ This value should be specified in GB (e.g. 100G).
30+ rbd-name:
31+ type: string
32+ default: mysql1
33+ description: |
34+ The name that will be used to create the Ceph's RBD image with. If the
35+ image name exists in Ceph, it will be re-used and the data will be
36+ overwritten.
37
38=== added symlink 'hooks/ceph-relation-changed'
39=== target is u'ha-relations'
40=== added symlink 'hooks/ceph-relation-joined'
41=== target is u'ha-relations'
42=== added file 'hooks/ceph.py'
43--- hooks/ceph.py 1970-01-01 00:00:00 +0000
44+++ hooks/ceph.py 2013-02-20 16:05:25 +0000
45@@ -0,0 +1,108 @@
46+import utils
47+import commands
48+import re
49+import subprocess
50+import sys
51+import os
52+import shutil
53+
54+
55+def execute(cmd):
56+ subprocess.check_call(cmd)
57+
58+
59+def execute_shell(cmd):
60+ subprocess.check_call(cmd, shell=True)
61+
62+
63+def create_image(service, image, sizemb):
64+ cmd = [
65+ 'rbd',
66+ 'create',
67+ image,
68+ '--size',
69+ sizemb,
70+ '--id',
71+ service,
72+ '--pool',
73+ 'images'
74+ ]
75+ execute(cmd)
76+
77+
78+def create_image_pool(service):
79+ cmd = [
80+ 'rados',
81+ '--id',
82+ service,
83+ 'mkpool',
84+ 'images'
85+ ]
86+ execute(cmd)
87+
88+
89+def create_keyring(service, keyring, key):
90+ cmd = [
91+ 'ceph-authtool',
92+ keyring,
93+ '--create-keyring',
94+ '--name=client.%s' % service,
95+ '--add-key=%s' % key
96+ ]
97+ execute(cmd)
98+
99+
100+def map_block_storage(service, image, keyfile):
101+ cmd = [
102+ 'rbd',
103+ 'map',
104+ 'images/%s' % image,
105+ '--user',
106+ service,
107+ '--secret',
108+ keyfile,
109+ ]
110+ execute(cmd)
111+
112+
113+def make_filesystem(service, blk_device, fstype='ext4'):
114+ cmd = ['mkfs', '-t', fstype, blk_device]
115+ execute(cmd)
116+
117+
118+def place_data_on_ceph(service, blk_device, data_src_dst, fstype='ext4'):
119+ # mount block device into /mnt
120+ cmd = ['mount', '-t', fstype, blk_device, '/mnt']
121+ execute(cmd)
122+
123+ # copy data to /mnt
124+ try:
125+ copy_files(data_src_dst, '/mnt')
126+ except:
127+ pass
128+
129+ # umount block device
130+ cmd = ['umount', '/mnt']
131+ execute(cmd)
132+
133+ # re-mount where the data should originally be
134+ cmd = ['mount', '-t', fstype, blk_device, data_src_dst]
135+ execute(cmd)
136+
137+
138+# TODO: re-use
139+def modprobe_kernel_module(module):
140+ cmd = ['modprobe', module]
141+ execute(cmd)
142+ cmd = 'echo %s >> /etc/modules' % module
143+ execute_shell(cmd)
144+
145+
146+def copy_files(src, dst, symlinks=False, ignore=None):
147+ for item in os.listdir(src):
148+ s = os.path.join(src, item)
149+ d = os.path.join(dst, item)
150+ if os.path.isdir(s):
151+ shutil.copytree(s, d, symlinks, ignore)
152+ else:
153+ shutil.copy2(s, d)
154
155=== modified symlink 'hooks/cluster-relation-changed'
156=== target changed u'ha-relation' => u'ha-relations'
157=== modified file 'hooks/common.py'
158--- hooks/common.py 2012-11-29 16:31:14 +0000
159+++ hooks/common.py 2013-02-20 16:05:25 +0000
160@@ -7,7 +7,7 @@
161 import uuid
162
163 def get_service_user_file(service):
164- return '/var/lib/juju/%s.service_user2' % service
165+ return '/var/lib/mysql/%s.service_user2' % service
166
167
168 def get_service_user(service):
169@@ -58,7 +58,7 @@
170
171 def get_db_cursor():
172 # Connect to mysql
173- passwd = open("/var/lib/juju/mysql.passwd").read().strip()
174+ passwd = open("/var/lib/mysql/mysql.passwd").read().strip()
175 connection = MySQLdb.connect(user="root", host="localhost", passwd=passwd)
176 return connection.cursor()
177
178
179=== modified file 'hooks/config-changed'
180--- hooks/config-changed 2013-01-17 04:37:31 +0000
181+++ hooks/config-changed 2013-02-20 16:05:25 +0000
182@@ -90,7 +90,7 @@
183 check_call(['add-apt-repository','-y','deb http://%s %s main' % (source, series)])
184 check_call(['apt-get','update'])
185
186-with open('/var/lib/juju/mysql.passwd','r') as rpw:
187+with open('/var/lib/mysql/mysql.passwd','r') as rpw:
188 root_pass = rpw.read()
189
190 dconf = Popen(['debconf-set-selections'], stdin=PIPE)
191@@ -350,6 +350,3 @@
192 except CalledProcessError:
193 pass
194 check_call(['service','mysql','start'])
195-
196-if configs['block-storage'] == "drbd":
197- utils.install('drbd8-utils')
198
199=== removed file 'hooks/drbd.py'
200--- hooks/drbd.py 2013-01-11 20:39:20 +0000
201+++ hooks/drbd.py 1970-01-01 00:00:00 +0000
202@@ -1,130 +0,0 @@
203-import utils
204-import commands
205-import re
206-import subprocess
207-import sys
208-import os
209-import shutil
210-
211-
212-def execute(cmd):
213- subprocess.check_call(cmd)
214-
215-
216-def execute_shell(cmd):
217- subprocess.check_call(cmd, shell=True)
218-
219-
220-def prepare_drbd_disk(block_device=None):
221- if block_device is None:
222- sys.exit(1)
223-
224- cmd = 'dd if=/dev/zero of=%s bs=512 count=1 oflag=direct >/dev/null' % block_device
225- execute_shell(cmd)
226- cmd = '(echo n; echo p; echo 1; echo ; echo; echo w) | fdisk %s' % block_device
227- execute_shell(cmd)
228-
229-
230-def modprobe_module():
231- cmd = ['modprobe', 'drbd']
232- execute(cmd)
233- cmd = 'echo drbd >> /etc/modules'
234- execute_shell(cmd)
235-
236-
237-def create_md(resource):
238- cmd = ['drbdadm', '--', '--force', 'create-md', resource]
239- execute(cmd)
240-
241-
242-def bring_resource_up(resource):
243- cmd = ['drbdadm', 'up', resource]
244- execute(cmd)
245-
246-
247-def clear_bitmap(resource):
248- cmd = ['drbdadm', '--', '--clear-bitmap', 'new-current-uuid', resource]
249- execute(cmd)
250-
251-
252-def make_primary(resource):
253- cmd = ['drbdadm', 'primary', resource]
254- execute(cmd)
255-
256-
257-def make_secondary(resource):
258- cmd = ['drbdadm', 'secondary', resource]
259- execute(cmd)
260-
261-
262-def format_drbd_device():
263- cmd = ['mkfs', '-t', 'ext3', '/dev/drbd0']
264- execute(cmd)
265-
266-
267-def is_connected():
268- (status, output) = commands.getstatusoutput("drbd-overview")
269- show_re = re.compile("0:export Connected")
270- quorum = show_re.search(output)
271- if quorum:
272- return True
273- return False
274-
275-
276-def is_quorum_secondary():
277- (status, output) = commands.getstatusoutput("drbd-overview")
278- show_re = re.compile("Secondary/Secondary")
279- quorum = show_re.search(output)
280- if quorum:
281- return True
282- return False
283-
284-
285-def is_quorum_primary():
286- (status, output) = commands.getstatusoutput("drbd-overview")
287- show_re = re.compile("Primary/Secondary")
288- quorum = show_re.search(output)
289- if quorum:
290- return True
291- return False
292-
293-
294-def is_state_inconsistent():
295- (status, output) = commands.getstatusoutput("drbd-overview")
296- show_re = re.compile("Inconsistent/Inconsistent")
297- quorum = show_re.search(output)
298- if quorum:
299- return True
300- return False
301-
302-
303-def is_state_uptodate():
304- (status, output) = commands.getstatusoutput("drbd-overview")
305- show_re = re.compile("UpToDate/UpToDate")
306- quorum = show_re.search(output)
307- if quorum:
308- return True
309- return False
310-
311-
312-def copy_files(src, dst, symlinks=False, ignore=None):
313- for item in os.listdir(src):
314- s = os.path.join(src, item)
315- d = os.path.join(dst, item)
316- if os.path.isdir(s):
317- shutil.copytree(s, d, symlinks, ignore)
318- else:
319- shutil.copy2(s, d)
320-
321-
322-def put_on_drbd():
323- cmd = ['mount', '-t', 'ext3', '/dev/drbd0', '/mnt']
324- execute(cmd)
325- # TODO: Before copying make sure it is mounted.
326- copy_files('/var/lib/mysql','/mnt')
327- cmd = ['chown', '-R', 'mysql:mysql', '/mnt']
328- execute(cmd)
329- cmd = ['umount', '/mnt']
330- execute(cmd)
331- cmd = ['mount', '-t', 'ext3', '/dev/drbd0', '/var/lib/mysql']
332- execute(cmd)
333
334=== modified symlink 'hooks/ha-relation-changed'
335=== target changed u'ha-relation' => u'ha-relations'
336=== modified symlink 'hooks/ha-relation-joined'
337=== target changed u'ha-relation' => u'ha-relations'
338=== renamed file 'hooks/ha-relation' => 'hooks/ha-relations'
339--- hooks/ha-relation 2013-01-17 04:37:31 +0000
340+++ hooks/ha-relations 2013-02-20 16:05:25 +0000
341@@ -4,235 +4,284 @@
342 import sys
343 import subprocess
344 import os
345+import time
346+import commands
347+
348 import utils
349-import drbd
350-import time
351+import ceph
352
353 STORAGEMARKER = '/var/lib/juju/storageconfigured'
354-DRBD_RESOURCE = 'mysql'
355-DRBD_DEVICE = '/dev/drbd0'
356-DRBD_MOUNTPOINT = '/var/lib/mysql'
357+
358+# CEPH
359+DATA_SRC_DST = '/var/lib/mysql'
360+SERVICE_NAME = utils.get_unit_name().replace('-','/').split('/')[0]
361+KEYRING = "/etc/ceph/ceph.client.%s.keyring" % SERVICE_NAME
362+KEYFILE = "/etc/ceph/ceph.client.%s.key" % SERVICE_NAME
363+
364
365 config=json.loads(subprocess.check_output(['config-get','--format=json']))
366
367
368 def ha_relation_joined():
369- # obtain the block device
370- block_storage = config['block-storage']
371- block_device = config['block-device']
372+
373+ # Checking vip values
374+ if not 'vip' in config:
375+ utils.juju_log('WARNING', 'NO Virtual IP was defined, bailing')
376+ sys.exit(1)
377+
378+ if config['vip_iface'] == "None" or not config['vip_iface']:
379+ utils.juju_log('WARNING', 'NO Virtual IP interface was defined, bailing')
380+ sys.exit(1)
381+
382+ if config['vip_cidr'] == "None" or not config['vip_cidr']:
383+ utils.juju_log('WARNING', 'NO CIDR was defined for the Virtual IP, bailing')
384+ sys.exit(1)
385
386 # Obtain the config values necessary for the cluster config. These
387 # include multicast port and interface to bind to.
388 corosync_bindiface = config['ha-bindiface']
389 corosync_mcastport = config['ha-mcastport']
390
391- if block_storage == "None":
392- utils.juju_log('WARNING',
393- 'NO block storage configured, not passing HA relation data')
394+ # Starting configuring resources.
395+ init_services = {
396+ 'res_mysqld':'mysql',
397+ }
398+
399+
400+ # If the 'ha' relation has been made *before* the 'ceph' relation,
401+ # it doesn't make sense to make it until after the 'ceph' relation
402+ # is made
403+ if not utils.is_relation_made('ceph'):
404+ utils.juju_log('INFO',
405+ '*ceph* relation does not exist. Not sending *ha* relation data')
406 return
407- elif block_storage == "drbd":
408- # Obtain resources
409+ else:
410+ utils.juju_log('INFO',
411+ '*ceph* relation exists. Sending *ha* relation data')
412+
413+ block_storage = 'ceph'
414+
415 resources = {
416+ 'res_mysql_rbd':'ocf:ceph:rbd',
417+ 'res_mysql_fs':'ocf:heartbeat:Filesystem',
418 'res_mysql_vip':'ocf:heartbeat:IPaddr2',
419- 'res_mysql_fs':'ocf:heartbeat:Filesystem',
420- 'res_mysql_drbd':'ocf:linbit:drbd',
421 'res_mysqld':'upstart:mysql',
422 }
423+
424 resource_params = {
425+ 'res_mysql_rbd':'params name="%s" pool="images" user="%s" secret="%s"' % (
426+ config['rbd-name'], SERVICE_NAME, KEYFILE),
427+ 'res_mysql_fs':'params device="/dev/rbd/images/%s" directory="%s" fstype="ext4" op start start-delay="10s"' % (
428+ config['rbd-name'], DATA_SRC_DST),
429 'res_mysql_vip':'params ip="%s" cidr_netmask="%s" nic="%s"' % (config['vip'],
430 config['vip_cidr'], config['vip_iface']),
431- 'res_mysql_fs':'params device="%s" directory="%s" fstype="ext3"' % (DRBD_DEVICE, DRBD_MOUNTPOINT),
432- 'res_mysql_drbd':'params drbd_resource="%s"' % DRBD_RESOURCE,
433- 'res_mysqld':'op monitor interval=5s',
434- }
435-
436- init_services = {
437- 'res_mysqld':'mysql',
438+ 'res_mysqld':'op start start-delay="5s" op monitor interval="5s"',
439 }
440
441 groups = {
442- 'grp_mysql':'res_mysql_fs res_mysql_vip res_mysqld',
443- }
444-
445- ms = {
446- 'ms_drbd_mysql':'res_mysql_drbd meta notify="true" master-max="1" master-node-max="1" clone-max="2" clone-node-max="1"'
447- }
448-
449- orders = {
450- 'ord_drbd_before_mysql':'inf: ms_drbd_mysql:promote grp_mysql:start'
451- }
452-
453- colocations = {
454- 'col_mysql_on_drbd':'inf: grp_mysql ms_drbd_mysql:Master'
455- }
456-
457- utils.relation_set(block_storage=block_storage,
458- block_device=block_device,
459- corosync_bindiface=corosync_bindiface,
460- corosync_mcastport=corosync_mcastport,
461- resources=resources,
462- resource_params=resource_params,
463- init_services=init_services,
464- colocations=colocations,
465- orders=orders,
466- groups=groups,
467- ms=ms)
468+ 'grp_mysql':'res_mysql_rbd res_mysql_fs res_mysql_vip res_mysqld',
469+ }
470+
471+ for rel_id in utils.relation_ids('ha'):
472+ utils.relation_set(rid=rel_id,
473+ block_storage=block_storage,
474+ corosync_bindiface=corosync_bindiface,
475+ corosync_mcastport=corosync_mcastport,
476+ resources=resources,
477+ resource_params=resource_params,
478+ init_services=init_services,
479+ groups=groups)
480
481
482 def ha_relation_changed():
483- pass
484-
485-
486-def get_cluster_nodes():
487+ relation_data = utils.relation_get_dict()
488+ if ('clustered' in relation_data and
489+ utils.is_leader()):
490+ utils.juju_log('INFO', 'Cluster configured, notifying other services')
491+ # Tell all related services to start using
492+ # the VIP
493+ for r_id in utils.relation_ids('shared-db'):
494+ utils.relation_set(rid=r_id,
495+ db_host=config['vip'])
496+
497+
498+def ceph_joined():
499+ utils.juju_log('INFO', 'Start Ceph Relation Joined')
500+
501+ ceph_dir = "/etc/ceph"
502+ if not os.path.isdir(ceph_dir):
503+ os.mkdir(ceph_dir)
504+ utils.install('ceph-common')
505+
506+ utils.juju_log('INFO', 'Finish Ceph Relation Joined')
507+
508+
509+def ceph_changed():
510+ utils.juju_log('INFO', 'Start Ceph Relation Changed')
511+
512+ # TODO: ask james: What happens if the relation data has changed?
513+ # do we reconfigure ceph? What do we do with the data?
514+ key = utils.relation_get('key')
515+
516+ if key:
517+ # create KEYRING file
518+ if not os.path.exists(KEYRING):
519+ ceph.create_keyring(SERVICE_NAME, KEYRING, key)
520+ # create a file containing the key
521+ if not os.path.exists(KEYFILE):
522+ fd = open(KEYFILE, 'w')
523+ fd.write(key)
524+ fd.close()
525+ else:
526+ sys.exit(0)
527+
528+ # emit ceph config
529+ hosts = get_ceph_nodes()
530+ mon_hosts = ",".join(map(str, hosts))
531+ conf_context = {
532+ 'auth': utils.relation_get('auth'),
533+ 'keyring': KEYRING,
534+ 'mon_hosts': mon_hosts,
535+ }
536+ with open('/etc/ceph/ceph.conf', 'w') as ceph_conf:
537+ ceph_conf.write(utils.render_template('ceph.conf',
538+ conf_context))
539+
540+ # Create the images pool if it does not already exist
541+ if utils.eligible_leader():
542+ (status, output) = commands.getstatusoutput("rados --id %s lspools" % SERVICE_NAME)
543+ pools = "images" in output
544+ if not pools:
545+ utils.juju_log('INFO','Creating image pool')
546+ ceph.create_image_pool(SERVICE_NAME)
547+
548+ # Configure ceph()
549+ configure_ceph()
550+
551+ # If 'ha' relation has been made before the 'ceph' relation
552+ # it is important to make sure the ha-relation data is being
553+ # sent.
554+ if utils.is_relation_made('ha'):
555+ utils.juju_log('INFO',
556+ '*ha* relation exists. Making sure the ha relation data is sent.')
557+ ha_relation_joined()
558+ return
559+ else:
560+ utils.juju_log('INFO',
561+ '*ha* relation does not exist.')
562+
563+ utils.juju_log('INFO', 'Finish Ceph Relation Changed')
564+
565+
566+def configure_ceph():
567+ utils.juju_log('INFO', 'Start Ceph Configuration')
568+
569+ block_sizemb = int(config['block-size'].split('G')[0]) * 1024
570+ image_name = config['rbd-name']
571+ fstype = 'ext4'
572+ data_src = DATA_SRC_DST
573+ blk_device = '/dev/rbd/images/%s' % image_name
574+
575+ # modprobe the kernel module
576+ utils.juju_log('INFO','Loading kernel module')
577+ ceph.modprobe_kernel_module('rbd')
578+
579+
580+ # configure mysql for ceph storage options
581+ if not utils.eligible_leader():
582+ utils.juju_log('INFO','This is not the peer leader. Not configuring RBD.')
583+ # Stopping MySQL
584+ if utils.running('mysql'):
585+ utils.juju_log('INFO','Stopping MySQL...')
586+ utils.stop('mysql')
587+ return
588+
589+ elif utils.eligible_leader():
590+ # create an image/block device
591+ (status, output) = commands.getstatusoutput('rbd list --id %s --pool images' % SERVICE_NAME)
592+ rbd = image_name in output
593+ if not rbd:
594+ utils.juju_log('INFO', 'Creating RBD Image...')
595+ ceph.create_image(SERVICE_NAME, image_name, str(block_sizemb))
596+ else:
597+ utils.juju_log('INFO',
598+ 'Looks like RBD already exists. Not creating a new one.')
599+
600+ # map the image to a block device if not already mapped.
601+ (status, output) = commands.getstatusoutput('rbd showmapped')
602+ mapped = image_name in output
603+ if not mapped:
604+ # map block storage
605+ utils.juju_log('INFO', 'Mapping RBD Image as a Block Device')
606+ ceph.map_block_storage(SERVICE_NAME, image_name, KEYFILE)
607+ else:
608+ utils.juju_log('INFO',
609+ 'Looks like RBD is already mapped. Not re-mapping.')
610+
611+ # make file system
612+ # TODO: What happens if for whatever reason this is run again and
613+ # the data is already in the rbd device and/or is mounted??
614+ # When it is mounted already, it will fail to make the fs
615+ utils.juju_log('INFO', 'Trying to move data over to RBD.')
616+ if not filesystem_mounted(data_src):
617+ utils.juju_log('INFO', 'Formating RBD.')
618+ ceph.make_filesystem(SERVICE_NAME, blk_device, fstype)
619+
620+ # Stopping MySQL
621+ if utils.running('mysql'):
622+ utils.juju_log('INFO','Stopping MySQL before moving data to RBD.')
623+ utils.stop('mysql')
624+
625+ # mount block device to temporary location and copy the data
626+ utils.juju_log('INFO', 'Copying MySQL data to RBD.')
627+ ceph.place_data_on_ceph(SERVICE_NAME, blk_device, data_src, fstype)
628+
629+ # Make files be owned by mysql user/pass
630+ cmd = ['chown', '-R', 'mysql:mysql', data_src]
631+ subprocess.check_call(cmd)
632+ else:
633+ utils.juju_log('INFO',
634+ 'Looks like data is already on the RBD, skipping...')
635+
636+ if not utils.running('mysql'):
637+ utils.start('mysql')
638+
639+ else:
640+ return
641+
642+ utils.juju_log('INFO', 'Finish Ceph Configuration')
643+
644+
645+def filesystem_mounted(fs):
646+ return subprocess.call(['grep', '-wqs', fs, '/proc/mounts']) == 0
647+
648+
649+def get_ceph_nodes():
650 hosts = []
651- hosts.append('{}:6789'.format(utils.get_host_ip()))
652-
653- for relid in utils.relation_ids('cluster'):
654- for unit in utils.relation_list(relid):
655- hosts.append(
656- '{}:6789'.format(utils.get_host_ip(
657- utils.relation_get('private-address',
658- unit, relid)))
659- )
660-
661- hosts.sort()
662+ for r_id in utils.relation_ids('ceph'):
663+ for unit in utils.relation_list(r_id):
664+ #hosts.append(utils.relation_get_dict(relation_id=r_id,
665+ # remote_unit=unit)['private-address'])
666+ hosts.append(utils.relation_get('private-address', unit=unit, rid=r_id))
667 return hosts
668
669
670-def get_cluster_leader():
671- # Obtains the unit name of the first service unit deploy.
672- # e.g. mysql-0
673- units = []
674- local = units.append(utils.get_unit_name())
675- for r_id in utils.relation_ids('cluster'):
676- for unit in utils.relation_list(r_id):
677- units.append(unit.replace('/','-'))
678-
679- return min(unit for unit in units)
680-
681-
682-def get_drbd_conf():
683- cluster_hosts = {}
684- # TODO: In MAAS private-address is the *hostname*. We need to set the
685- # private address in a relation.
686- cluster_hosts[utils.get_unit_hostname()] = utils.unit_get('private-address')
687- for r_id in utils.relation_ids('cluster'):
688- for unit in utils.relation_list(r_id):
689- cluster_hosts[unit.replace('/','-')] = \
690- utils.relation_get_dict(relation_id=r_id,
691- remote_unit=unit)['private-address']
692-
693- conf = {
694- 'block_device': config['block-device'],
695- 'drbd_device': DRBD_DEVICE,
696- 'units': cluster_hosts,
697- }
698- return conf
699-
700-
701-def emit_drbd_conf():
702- # read config variables
703- drbd_conf_context = get_drbd_conf()
704- # write config file
705- with open('/etc/drbd.d/mysql.res', 'w') as drbd_conf:
706- drbd_conf.write(utils.render_template('mysql.res',
707- drbd_conf_context))
708-
709-
710-def configure_drbd():
711- drbd.prepare_drbd_disk(config['block-device'])
712- drbd.modprobe_module()
713-
714- emit_drbd_conf()
715- drbd.create_md(DRBD_RESOURCE)
716- drbd.bring_resource_up(DRBD_RESOURCE)
717-
718- # Wait for quorum.
719- while not drbd.is_quorum_secondary():
720- time.sleep(1)
721- while not drbd.is_state_inconsistent():
722- time.sleep(1)
723-
724- if utils.get_unit_name() == get_cluster_leader():
725- # clear bitmap
726- if drbd.is_quorum_secondary() and drbd.is_state_inconsistent():
727- drbd.clear_bitmap(DRBD_RESOURCE)
728- # wait for resources to be UpToDate
729- while not drbd.is_state_uptodate():
730- time.sleep(1)
731- # Make leader primary
732- if drbd.is_state_uptodate():
733- drbd.make_primary(DRBD_RESOURCE)
734- # Wait for node to become primary
735- while not drbd.is_quorum_primary():
736- time.sleep(1)
737- # Format DRBD resource
738- if drbd.is_quorum_primary():
739- drbd.format_drbd_device()
740-
741- utils.stop("mysql")
742- if utils.get_unit_name() == get_cluster_leader() and drbd.is_quorum_primary():
743- drbd.put_on_drbd()
744- utils.start("mysql")
745-
746- return True
747-
748-
749 def cluster_changed():
750- # Check that we are not already configured
751- if os.path.exists(STORAGEMARKER):
752- utils.juju_log('INFO',
753- 'Block storage already configured, not reconfiguring')
754- return
755-
756 utils.juju_log('INFO', 'Begin cluster changed hook.')
757- if len(get_cluster_nodes()) != 2:
758- utils.juju_log('WARNING', 'Not enough nodes in cluster, bailing')
759- return
760-
761- #TODO:
762- # 1. check if block-storage has been set
763- # 2. prepare block storage
764- # 3. emit DRBD conf??? -- maybe not because we need 2 nodes. Unless we call cluster_changed from cluster relationship.
765- if config['block-storage'] == "None":
766- utils.juju_log('WARNING', 'NO block storage configured, bailing')
767- return
768- elif config['block-storage'] == "drbd":
769- if config['block-device'] == "None":
770- utils.juju_log('WARNING',
771- 'NO block-device defined, cannot configure DRBD')
772- return
773- else:
774- storage_configured = configure_drbd()
775- elif config['block-storage'] == "ceph":
776- # TODO: Add support for ceph
777- pass
778-
779- if not storage_configured:
780- utils.juju_log('WARNING', 'Unable to configure block storage, bailing')
781- return
782-
783- # TODO: if leader fails, then marker should not be placed on secondary node, nor
784- # DRBD should be mounted on both.
785- # TODO: probably would be good idea to check that DRBD has been mounted in var/lib/mysql
786- # in primary and secondary, that should say it has been successful.
787- with open(STORAGEMARKER, 'w') as marker:
788- marker.write('done')
789+
790+ if config['block-size'] == "None":
791+ utils.juju_log('WARNING', 'NO block storage size configured, bailing')
792+ return
793
794 utils.juju_log('INFO', 'End install hook.')
795
796-def show_drbd():
797- import commands
798- (status, output) = commands.getstatusoutput("drbd-overview")
799- utils.juju_log('INFO', '############################################################3')
800- utils.juju_log('INFO', output)
801- utils.juju_log('INFO', '############################################################3')
802
803 hooks = {
804- # TODO: cluster-relation-departed (remove drbdconfigured file)
805 "cluster-relation-changed": cluster_changed,
806 "ha-relation-joined": ha_relation_joined,
807 "ha-relation-changed": ha_relation_changed,
808+ "ceph-relation-joined": ceph_joined,
809+ "ceph-relation-changed": ceph_changed,
810 }
811
812 # keystone-hooks gets called by symlink corresponding to the requested relation
813
814=== modified file 'hooks/install'
815--- hooks/install 2012-11-01 21:49:21 +0000
816+++ hooks/install 2013-02-20 16:05:25 +0000
817@@ -3,8 +3,9 @@
818 apt-get update
819 apt-get install -y debconf-utils python-mysqldb uuid pwgen dnsutils charm-helper-sh || exit 1
820
821-PASSFILE=/var/lib/juju/mysql.passwd
822+PASSFILE=/var/lib/mysql/mysql.passwd
823 if ! [ -f $PASSFILE ] ; then
824+ mkdir -p /var/lib/mysql
825 touch $PASSFILE
826 fi
827 chmod 0600 $PASSFILE
828
829=== modified file 'hooks/master-relation-changed'
830--- hooks/master-relation-changed 2012-11-02 06:41:12 +0000
831+++ hooks/master-relation-changed 2013-02-20 16:05:25 +0000
832@@ -6,7 +6,7 @@
833
834 . /usr/share/charm-helper/sh/net.sh
835
836-ROOTARGS="-uroot -p`cat /var/lib/juju/mysql.passwd`"
837+ROOTARGS="-uroot -p`cat /var/lib/mysql/mysql.passwd`"
838 snapdir=/var/www/snaps
839 mkdir -p $snapdir
840 apt-get -y install apache2
841
842=== modified file 'hooks/monitors.common.bash'
843--- hooks/monitors.common.bash 2012-07-12 21:58:36 +0000
844+++ hooks/monitors.common.bash 2013-02-20 16:05:25 +0000
845@@ -1,4 +1,4 @@
846-MYSQL="mysql -uroot -p`cat /var/lib/juju/mysql.passwd`"
847+MYSQL="mysql -uroot -p`cat /var/lib/mysql/mysql.passwd`"
848 monitor_user=monitors
849 . /usr/share/charm-helper/sh/net.sh
850 if [ -n "$JUJU_REMOTE_UNIT" ] ; then
851
852=== modified file 'hooks/shared-db-relations'
853--- hooks/shared-db-relations 2012-12-03 20:21:07 +0000
854+++ hooks/shared-db-relations 2013-02-20 16:05:25 +0000
855@@ -14,7 +14,9 @@
856 import json
857 import socket
858 import os
859+import utils
860
861+config=json.loads(subprocess.check_output(['config-get','--format=json']))
862
863 def pwgen():
864 return subprocess.check_output(['pwgen', '-s', '16']).strip()
865@@ -47,7 +49,7 @@
866 def configure_db(hostname,
867 database,
868 username):
869- passwd_file = "/var/lib/juju/mysql-{}.passwd"\
870+ passwd_file = "/var/lib/mysql/mysql-{}.passwd"\
871 .format(username)
872 if hostname != local_hostname:
873 remote_ip = socket.gethostbyname(hostname)
874@@ -72,6 +74,11 @@
875 remote_ip, password)
876 return password
877
878+ if not utils.eligible_leader():
879+ utils.juju_log('INFO',
880+ 'MySQL service is peered, bailing shared-db relation')
881+ return
882+
883 settings = relation_get()
884 local_hostname = socket.getfqdn()
885 singleset = set([
886@@ -85,8 +92,13 @@
887 password = configure_db(settings['hostname'],
888 settings['database'],
889 settings['username'])
890- relation_set(db_host=local_hostname,
891- password=password)
892+ if not utils.is_clustered():
893+ relation_set(db_host=local_hostname,
894+ password=password)
895+ else:
896+ relation_set(db_host=config["vip"],
897+ password=password)
898+
899 else:
900 # Process multiple database setup requests.
901 # from incoming relation data:
902@@ -121,7 +133,10 @@
903 databases[db]['database'],
904 databases[db]['username'])
905 relation_set(**return_data)
906- relation_set(db_host=local_hostname)
907+ if not utils.is_clustered():
908+ relation_set(db_host=local_hostname)
909+ else:
910+ relation_set(db_host=config["vip"])
911
912 hook = os.path.basename(sys.argv[0])
913 hooks = {
914
915=== modified file 'hooks/slave-relation-broken'
916--- hooks/slave-relation-broken 2011-12-06 21:23:39 +0000
917+++ hooks/slave-relation-broken 2013-02-20 16:05:25 +0000
918@@ -1,8 +1,8 @@
919 #!/bin/sh
920
921 # Kill the replication
922-mysql -uroot -p`cat /var/lib/juju/mysql.passwd` -e 'STOP SLAVE;'
923-mysql -uroot -p`cat /var/lib/juju/mysql.passwd` -e 'RESET SLAVE;'
924+mysql -uroot -p`cat /var/lib/mysql/mysql.passwd` -e 'STOP SLAVE;'
925+mysql -uroot -p`cat /var/lib/mysql/mysql.passwd` -e 'RESET SLAVE;'
926 # No longer a slave
927 # XXX this may change the server-id .. not sure if thats what we
928 # want!
929
930=== modified file 'hooks/slave-relation-changed'
931--- hooks/slave-relation-changed 2011-12-06 21:17:17 +0000
932+++ hooks/slave-relation-changed 2013-02-20 16:05:25 +0000
933@@ -2,7 +2,7 @@
934
935 set -e
936
937-ROOTARGS="-uroot -p`cat /var/lib/juju/mysql.passwd`"
938+ROOTARGS="-uroot -p`cat /var/lib/mysql/mysql.passwd`"
939
940 # Others can join that service but only the lowest will be the master
941 # Note that we could be more automatic but for now we will wait for
942@@ -39,7 +39,7 @@
943 curl --silent --show-error $dumpurl |zcat| mysql $ROOTARGS
944 # Root pw gets overwritten by import
945 echo Re-setting Root Pasword -- can use ours because it hasnt been flushed
946-myrootpw=`cat /var/lib/juju/mysql.passwd`
947+myrootpw=`cat /var/lib/mysql/mysql.passwd`
948 mysqladmin -uroot -p$myrootpw password $myrootpw
949 # Debian packages expect debian-sys-maint@localhost to be root privileged and
950 # configured in /etc/mysql/debian.cnf. we just broke that.. fix it
951
952=== modified file 'hooks/upgrade-charm'
953--- hooks/upgrade-charm 2012-11-02 06:41:12 +0000
954+++ hooks/upgrade-charm 2013-02-20 16:05:25 +0000
955@@ -2,10 +2,26 @@
956 home=`dirname $0`
957 # Remove any existing .service_user files, which will cause
958 # new users/pw's to be generated, which is a good thing
959-old_service_user_files=$(ls /var/lib/juju/$.service_user)
960+old_service_user_files=$(ls /var/lib/juju/*.service_user)
961 if [ -n "$old_service_user_files" ] ; then
962 juju-log -l WARNING "Stale users left around, should be revoked: $(cat $old_service_user_files)"
963 rm -f $old_service_user_files
964 fi
965+
966+# Move service_user2 files to /var/lib/mysql as they are
967+# now stored there to support HA clustering with ceph.
968+new_service_user_files=$(ls /var/lib/juju/*.service_user2)
969+if [ -n "$new_service_user_files" ]; then
970+ juju-log -l INFO "Moving service_user files [$new_service_user_files] to [/var/lib/mysql]"
971+ mv $new_service_user_files /var/lib/mysql/
972+fi
973+# Move passwd files to /var/lib/mysql as they are
974+# now stored there to support HA clustering with ceph.
975+password_files=$(ls /var/lib/juju/*.passwd)
976+if [ -n "$password_files" ]; then
977+ juju-log -l INFO "Moving passwd files [$password_files] to [/var/lib/mysql]"
978+ mv $password_files /var/lib/mysql/
979+fi
980+
981 $home/install
982 exec $home/config-changed
983
984=== modified file 'hooks/utils.py'
985--- hooks/utils.py 2013-01-17 04:37:31 +0000
986+++ hooks/utils.py 2013-02-20 16:05:25 +0000
987@@ -383,3 +383,64 @@
988 return str(ip.network)
989 else:
990 return None
991+
992+
993+def is_clustered():
994+ for r_id in (relation_ids('ha') or []):
995+ for unit in (relation_list(r_id) or []):
996+ relation_data = \
997+ relation_get_dict(relation_id=r_id,
998+ remote_unit=unit)
999+ if 'clustered' in relation_data:
1000+ return True
1001+ return False
1002+
1003+
1004+def is_leader():
1005+ status = execute('crm resource show res_mysql_vip', echo=True)[0].strip()
1006+ hostname = execute('hostname', echo=True)[0].strip()
1007+ if hostname in status:
1008+ return True
1009+ else:
1010+ return False
1011+
1012+
1013+def peer_units():
1014+ peers = []
1015+ for r_id in (relation_ids('cluster') or []):
1016+ for unit in (relation_list(r_id) or []):
1017+ peers.append(unit)
1018+ return peers
1019+
1020+
1021+def oldest_peer(peers):
1022+ local_unit_no = os.getenv('JUJU_UNIT_NAME').split('/')[1]
1023+ for peer in peers:
1024+ remote_unit_no = peer.split('/')[1]
1025+ if remote_unit_no < local_unit_no:
1026+ return False
1027+ return True
1028+
1029+
1030+def eligible_leader():
1031+ if is_clustered():
1032+ if not is_leader():
1033+ juju_log('INFO', 'Deferring action to CRM leader.')
1034+ return False
1035+ else:
1036+ peers = peer_units()
1037+ if peers and not oldest_peer(peers):
1038+ juju_log('INFO', 'Deferring action to oldest service unit.')
1039+ return False
1040+ return True
1041+
1042+
1043+def is_relation_made(relation=None):
1044+ relation_data = []
1045+ for r_id in (relation_ids(relation) or []):
1046+ for unit in (relation_list(r_id) or []):
1047+ relation_data.append(relation_get_dict(relation_id=r_id,
1048+ remote_unit=unit))
1049+ if not relation_data:
1050+ return False
1051+ return True
1052
1053=== modified file 'metadata.yaml'
1054--- metadata.yaml 2013-01-11 20:39:20 +0000
1055+++ metadata.yaml 2013-02-20 16:05:25 +0000
1056@@ -28,6 +28,8 @@
1057 requires:
1058 slave:
1059 interface: mysql-oneway-replication
1060+ ceph:
1061+ interface: ceph-client
1062 ha:
1063 interface: hacluster
1064 scope: container
1065
1066=== modified file 'revision'
1067--- revision 2013-01-17 04:37:31 +0000
1068+++ revision 2013-02-20 16:05:25 +0000
1069@@ -1,1 +1,1 @@
1070-181
1071+293
1072
1073=== added file 'templates/ceph.conf'
1074--- templates/ceph.conf 1970-01-01 00:00:00 +0000
1075+++ templates/ceph.conf 2013-02-20 16:05:25 +0000
1076@@ -0,0 +1,5 @@
1077+[global]
1078+ auth supported = {{ auth }}
1079+ keyring = {{ keyring }}
1080+ mon host = {{ mon_hosts }}
1081+
1082
1083=== removed file 'templates/mysql.res'
1084--- templates/mysql.res 2013-01-17 00:56:56 +0000
1085+++ templates/mysql.res 1970-01-01 00:00:00 +0000
1086@@ -1,14 +0,0 @@
1087-resource mysql {
1088- device {{ drbd_device }};
1089- disk {{ block_device }}1;
1090- meta-disk internal;
1091- {% for unit, address in units.iteritems() -%}
1092- on {{ unit }} {
1093- address {{ address }}:7788;
1094- }
1095- {% endfor %}
1096- syncer {
1097- rate 10M;
1098- }
1099-}
1100-
