Merge lp:~james-page/charms/precise/mysql/refactoring-review into lp:~openstack-charmers/charms/precise/mysql/ha-support
- Precise Pangolin (12.04)
- refactoring-review
- Merge into ha-support
| Status: | Merged |
|---|---|
| Approved by: | Adam Gandelman |
| Approved revision: | 112 |
| Merged at revision: | 103 |
| Proposed branch: | lp:~james-page/charms/precise/mysql/refactoring-review |
| Merge into: | lp:~openstack-charmers/charms/precise/mysql/ha-support |

Diff against target: 2078 lines (+962/-933), 12 files modified:

- README.md (+98/-0)
- config.yaml (+4/-4)
- hooks/ceph.py (+0/-240)
- hooks/config-changed (+0/-1)
- hooks/ha_relations.py (+60/-85)
- hooks/lib/ceph_utils.py (+256/-0)
- hooks/lib/cluster_utils.py (+130/-0)
- hooks/lib/utils.py (+275/-0)
- hooks/shared-db-relations (+0/-152)
- hooks/shared_db_relations.py (+139/-0)
- hooks/utils.py (+0/-446)
- templates/ceph.conf (+0/-5)

To merge this branch: `bzr merge lp:~james-page/charms/precise/mysql/refactoring-review`

Related bugs: (none)
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Adam Gandelman (community) | Approve | | |
| Andres Rodriguez (community) | Approve | | |
Commit message
Description of the change
Review and refactoring so we can actually propose this against lp:charms/mysql.

1) I've reworked the ceph and utils modules into a top-level lib package and synced with the openstack charm helpers as written for swift-proxy (and, shortly, keystone). This makes it clear which parts are sourced elsewhere and which parts are core to the charm.
2) Renamed hook targets to .py:

    ha-relations -> ha_relations.py
    shared-db-relations -> shared_db_relations.py
3) Made shared_db_relations use as much of lib/* as possible
4) Reworked ha_relations to use the lib/utils.py versions of the _get methods and dropped the config dictionary.
5) PEP-8 tidied what I touched.
6) is_relation_made()

I left this in ha_relations.py for the time being, but I did add an optional key; this is used specifically in the ceph check to ensure that the relation is not only made but actually usable. Otherwise HA would start before ceph has provided keys, etc.
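The check described in 6) can be illustrated with a standalone sketch. Here the Juju relation calls are replaced by a plain nested dict (the `relations` argument is a hypothetical stand-in for the `utils.relation_ids`/`relation_list`/`relation_get` helpers), so this shows the intent rather than the exact charm code:

```python
def is_relation_made(relations, relation, key='private-address'):
    """Return True only if some unit on `relation` has set `key`.

    relations maps: relation name -> relation id -> unit -> settings
    (a hypothetical stand-in for the Juju relation helpers).
    Passing e.g. key='auth' means "ceph has provided usable data",
    not merely "the relation exists".
    """
    for units in relations.get(relation, {}).values():
        for settings in units.values():
            if settings.get(key) is not None:
                return True
    return False
```

With the default key this behaves like a plain existence check; with key='auth' the ha relation data is held back until ceph has actually published its auth details.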
- 111. By James Page: Remove redundant storage marker
- 112. By James Page: Rebased on trunk charm
Adam Gandelman (gandelman-a) wrote:
+1 This looks great, James!
- 113. By James Page: Re-sync with ha-helper
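As context for the hook renames in the description: each relation hook (ha-relation-joined, ceph-relation-changed, ...) is a symlink to the renamed .py file, and the script picks a handler from a table keyed by the symlink's basename. A minimal sketch of that dispatch pattern (this `do_hooks` is an illustrative reconstruction, not the exact lib/utils.py code):

```python
import sys


def do_hooks(hooks):
    # Juju runs the hook through a symlink such as
    # hooks/ha-relation-joined -> ha_relations.py, so the basename of
    # argv[0] names the hook being executed.
    hook = sys.argv[0].split('/').pop()
    try:
        hook_func = hooks[hook]
    except KeyError:
        print('Hook not implemented: %s' % hook)
    else:
        hook_func()
```

This replaces the per-file argv parsing the old keystone-style hooks carried, so each hook script only declares its handler table and calls the shared dispatcher.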
Preview Diff
1 | === added file 'README.md' |
2 | --- README.md 1970-01-01 00:00:00 +0000 |
3 | +++ README.md 2013-03-16 07:49:20 +0000 |
4 | @@ -0,0 +1,98 @@ |
5 | +# Overview |
6 | + |
7 | +MySQL is a fast, stable and true multi-user, multi-threaded SQL database server. |
8 | +SQL (Structured Query Language) is the most popular database query language in |
9 | +the world. The main goals of MySQL are speed, robustness and ease of use. |
10 | + |
11 | +Percona is fork of MySQL by Percona Inc. which focuses on maximizing |
12 | +performance, particularly for heavy workloads. It is a drop-in replacement for |
13 | +MySQL and features XtraDB, a drop-in replacement for the InnoDB storage engine. |
14 | + |
15 | +[http://www.mysql.com](http://www.mysql.com) |
16 | + |
17 | +[http://www.percona.com/software/percona-server](http://www.percona.com/software/percona-server) |
18 | + |
19 | +# Usage |
20 | + |
21 | +## General |
22 | + |
23 | +To deploy a MySQL service: |
24 | + |
25 | + juju deploy mysql |
26 | + |
27 | +Once deployed, you can ssh into the deployed service and access the |
28 | +MySQL console as the MySQL root user: |
29 | + |
30 | + juju ssh <unit> |
31 | + mysql -u root -p |
32 | + # enter root password - /var/lib/juju/mysql.passwd |
33 | + |
34 | +To change deployment to a Percona server: |
35 | + |
36 | + juju set mysql flavor=percona |
37 | + |
38 | +## Optimization |
39 | + |
40 | +You can tweak various options to optimize your MySQL deployment: |
41 | + |
42 | +* max-connections - Maximum connections allowed to server or '-1' for default. |
43 | + |
44 | +* preferred-storage-engine - A comma separated list of storage engines to |
45 | + optimize for. First in the list is marked as default storage engine. 'InnoDB' |
46 | + or 'MyISAM' are acceptable values. |
47 | + |
48 | +* tuning-level - Specify 'safest', 'fast' or 'unsafe' to choose required |
49 | + transaction safety. This option determines the flush value for innodb commit |
50 | + and binary logs. Specify 'safest' for full ACID compliance. 'fast' relaxes the |
51 | + compliance for performance and 'unsafe' will remove most restrictions. |
52 | + |
53 | +* dataset-size - Memory allocation for all caches (InnoDB buffer pool, MyISAM |
54 | + key, query). Suffix value with 'K', 'M', 'G' or 'T' to indicate unit of |
55 | + kilobyte, megabyte, gigabyte or terabyte respectively. Suffix value with '%' |
56 | + to use percentage of machine's total memory. |
57 | + |
58 | +* query-cache-type - Specify 'ON', 'DEMAND' or 'OFF' to turn query cache on, |
59 | + selectively (dependent on queries) or off. |
60 | + |
61 | +* query-cache-size - Size of query cache (no. of bytes) or '-1' to use 20% |
62 | + of memory allocation. |
63 | + |
64 | +Each of these can be applied by running: |
65 | + |
66 | + juju set <service> <option>=<value> |
67 | + |
68 | +e.g. |
69 | + |
70 | + juju set mysql preferred-storage-engine=InnoDB |
71 | + juju set mysql dataset-size=50% |
72 | + juju set mysql query-cache-type=ON |
73 | + juju set mysql query-cache-size=-1 |
74 | + |
75 | +## Replication |
76 | + |
77 | +MySQL supports the ability to replicate databases to slave instances. This |
78 | +allows you, for example, to load balance read queries across multiple slaves or |
79 | +use a slave to perform backups, all whilst not impeding the master's |
80 | +performance. |
81 | + |
82 | +To deploy a slave: |
83 | + |
84 | + # deploy second service |
85 | + juju deploy mysql mysql-slave |
86 | + |
87 | + # add master to slave relation |
88 | + juju add-relation mysql:master mysql-slave:slave |
89 | + |
90 | +Any changes to the master are reflected on the slave. |
91 | + |
92 | +Any queries that modify the database(s) should be applied to the master only. |
93 | +The slave should be treated strictly as read only. |
94 | + |
95 | +You can add further slaves with: |
96 | + |
97 | + juju add-unit mysql-slave |
98 | + |
99 | +## Monitoring |
100 | + |
101 | +This charm provides relations that support monitoring via either Nagios or |
102 | +Munin. Refer to the appropriate charm for usage. |
103 | |
104 | === modified file 'config.yaml' |
105 | --- config.yaml 2013-02-19 21:11:35 +0000 |
106 | +++ config.yaml 2013-03-16 07:49:20 +0000 |
107 | @@ -33,7 +33,7 @@ |
108 | description: If binlogging is enabled, this is the format that will be used. Ignored when tuning-level == fast. |
109 | vip: |
110 | type: string |
111 | - description: "Virtual IP to use to front keystone in ha configuration" |
112 | + description: "Virtual IP to use to front mysql in ha configuration" |
113 | vip_iface: |
114 | type: string |
115 | default: eth0 |
116 | @@ -55,11 +55,11 @@ |
117 | Default multicast port number that will be used to communicate between |
118 | HA Cluster nodes. |
119 | block-size: |
120 | - type: string |
121 | - default: 5G |
122 | + type: int |
123 | + default: 5 |
124 | description: | |
125 | Default block storage size to create when setting up MySQL block storage. |
126 | - This value should be specified in GB (e.g. 100G). |
127 | + This value should be specified in GB (e.g. 100 not 100GB). |
128 | rbd-name: |
129 | type: string |
130 | default: mysql1 |
131 | |
132 | === modified symlink 'hooks/ceph-relation-changed' |
133 | === target changed u'ha-relations' => u'ha_relations.py' |
134 | === modified symlink 'hooks/ceph-relation-joined' |
135 | === target changed u'ha-relations' => u'ha_relations.py' |
136 | === removed file 'hooks/ceph.py' |
137 | --- hooks/ceph.py 2013-02-28 00:01:30 +0000 |
138 | +++ hooks/ceph.py 1970-01-01 00:00:00 +0000 |
139 | @@ -1,240 +0,0 @@ |
140 | -import utils |
141 | -import commands |
142 | -import re |
143 | -import subprocess |
144 | -import sys |
145 | -import os |
146 | -import shutil |
147 | - |
148 | -KEYRING = '/etc/ceph/ceph.client.%s.keyring' |
149 | -KEYFILE = '/etc/ceph/ceph.client.%s.key' |
150 | - |
151 | -CEPH_CONF = """[global] |
152 | - auth supported = %(auth)s |
153 | - keyring = %(keyring)s |
154 | - mon host = %(mon_hosts)s |
155 | -""" |
156 | - |
157 | -def execute(cmd): |
158 | - subprocess.check_call(cmd) |
159 | - |
160 | - |
161 | -def execute_shell(cmd): |
162 | - subprocess.check_call(cmd, shell=True) |
163 | - |
164 | - |
165 | -def install(): |
166 | - ceph_dir = "/etc/ceph" |
167 | - if not os.path.isdir(ceph_dir): |
168 | - os.mkdir(ceph_dir) |
169 | - utils.install('ceph-common') |
170 | - |
171 | - |
172 | -def rbd_exists(service, pool, rbd_img): |
173 | - (rc, out) = commands.getstatusoutput('rbd list --id %s --pool %s' %\ |
174 | - (service, pool)) |
175 | - return rbd_img in out |
176 | - |
177 | - |
178 | -def create_rbd_image(service, pool, image, sizemb): |
179 | - cmd = [ |
180 | - 'rbd', |
181 | - 'create', |
182 | - image, |
183 | - '--size', |
184 | - str(sizemb), |
185 | - '--id', |
186 | - service, |
187 | - '--pool', |
188 | - pool |
189 | - ] |
190 | - execute(cmd) |
191 | - |
192 | - |
193 | -def pool_exists(service, name): |
194 | - (rc, out) = commands.getstatusoutput("rados --id %s lspools" % service) |
195 | - return name in out |
196 | - |
197 | -def create_pool(service, name): |
198 | - cmd = [ |
199 | - 'rados', |
200 | - '--id', |
201 | - service, |
202 | - 'mkpool', |
203 | - name |
204 | - ] |
205 | - execute(cmd) |
206 | - |
207 | - |
208 | -def keyfile_path(service): |
209 | - return KEYFILE % service |
210 | - |
211 | -def keyring_path(service): |
212 | - return KEYRING % service |
213 | - |
214 | -def create_keyring(service, key): |
215 | - keyring = keyring_path(service) |
216 | - if os.path.exists(keyring): |
217 | - utils.juju_log('INFO', 'ceph: Keyring exists at %s.' % keyring) |
218 | - cmd = [ |
219 | - 'ceph-authtool', |
220 | - keyring, |
221 | - '--create-keyring', |
222 | - '--name=client.%s' % service, |
223 | - '--add-key=%s' % key |
224 | - ] |
225 | - execute(cmd) |
226 | - utils.juju_log('INFO', 'ceph: Created new ring at %s.' % keyring) |
227 | - |
228 | - |
229 | -def create_key_file(service, key): |
230 | - # create a file containing the key |
231 | - keyfile = keyfile_path(service) |
232 | - if os.path.exists(keyfile): |
233 | - utils.juju_log('INFO', 'ceph: Keyfile exists at %s.' % keyfile) |
234 | - fd = open(keyfile, 'w') |
235 | - fd.write(key) |
236 | - fd.close() |
237 | - utils.juju_log('INFO', 'ceph: Created new keyfile at %s.' % keyfile) |
238 | - |
239 | - |
240 | -def get_ceph_nodes(): |
241 | - hosts = [] |
242 | - for r_id in utils.relation_ids('ceph'): |
243 | - for unit in utils.relation_list(r_id): |
244 | - hosts.append(utils.relation_get('private-address', |
245 | - unit=unit, rid=r_id)) |
246 | - return hosts |
247 | - |
248 | - |
249 | -def configure(service, key, auth): |
250 | - create_keyring(service, key) |
251 | - create_key_file(service, key) |
252 | - hosts = get_ceph_nodes() |
253 | - mon_hosts = ",".join(map(str, hosts)) |
254 | - keyring = keyring_path(service) |
255 | - with open('/etc/ceph/ceph.conf', 'w') as ceph_conf: |
256 | - ceph_conf.write(CEPH_CONF % locals()) |
257 | - modprobe_kernel_module('rbd') |
258 | - |
259 | - |
260 | -def image_mapped(image_name): |
261 | - (rc, out) = commands.getstatusoutput('rbd showmapped') |
262 | - return image_name in out |
263 | - |
264 | -def map_block_storage(service, pool, image): |
265 | - cmd = [ |
266 | - 'rbd', |
267 | - 'map', |
268 | - '%s/%s' % (pool, image), |
269 | - '--user', |
270 | - service, |
271 | - '--secret', |
272 | - keyfile_path(service), |
273 | - ] |
274 | - execute(cmd) |
275 | - |
276 | - |
277 | -def filesystem_mounted(fs): |
278 | - return subprocess.call(['grep', '-wqs', fs, '/proc/mounts']) == 0 |
279 | - |
280 | -def make_filesystem(blk_device, fstype='ext4'): |
281 | - utils.juju_log('INFO', |
282 | - 'ceph: Formatting block device %s as filesystem %s.' %\ |
283 | - (blk_device, fstype)) |
284 | - cmd = ['mkfs', '-t', fstype, blk_device] |
285 | - execute(cmd) |
286 | - |
287 | - |
288 | -def place_data_on_ceph(service, blk_device, data_src_dst, fstype='ext4'): |
289 | - # mount block device into /mnt |
290 | - cmd = ['mount', '-t', fstype, blk_device, '/mnt'] |
291 | - execute(cmd) |
292 | - |
293 | - # copy data to /mnt |
294 | - try: |
295 | - copy_files(data_src_dst, '/mnt') |
296 | - except: |
297 | - pass |
298 | - |
299 | - # umount block device |
300 | - cmd = ['umount', '/mnt'] |
301 | - execute(cmd) |
302 | - |
303 | - _dir = os.stat(data_src_dst) |
304 | - uid = _dir.st_uid |
305 | - gid = _dir.st_gid |
306 | - |
307 | - # re-mount where the data should originally be |
308 | - cmd = ['mount', '-t', fstype, blk_device, data_src_dst] |
309 | - execute(cmd) |
310 | - |
311 | - # ensure original ownership of new mount. |
312 | - cmd = ['chown', '-R', '%s:%s' % (uid, gid), data_src_dst] |
313 | - execute(cmd) |
314 | - |
315 | -# TODO: re-use |
316 | -def modprobe_kernel_module(module): |
317 | - utils.juju_log('INFO','Loading kernel module') |
318 | - cmd = ['modprobe', module] |
319 | - execute(cmd) |
320 | - cmd = 'echo %s >> /etc/modules' % module |
321 | - execute_shell(cmd) |
322 | - |
323 | - |
324 | -def copy_files(src, dst, symlinks=False, ignore=None): |
325 | - for item in os.listdir(src): |
326 | - s = os.path.join(src, item) |
327 | - d = os.path.join(dst, item) |
328 | - if os.path.isdir(s): |
329 | - shutil.copytree(s, d, symlinks, ignore) |
330 | - else: |
331 | - shutil.copy2(s, d) |
332 | - |
333 | -def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point, |
334 | - blk_device, fstype, system_services=[]): |
335 | - """ |
336 | - To be called from the current cluster leader. |
337 | - Ensures given pool and RBD image exists, is mapped to a block device, |
338 | - and the device is formatted and mounted at the given mount_point. |
339 | - |
340 | - If formatting a device for the first time, data existing at mount_point |
341 | - will be migrated to the RBD device before being remounted. |
342 | - |
343 | - All services listed in system_services will be stopped prior to data |
344 | - migration and restarted when complete. |
345 | - """ |
346 | - # Ensure pool, RBD image, RBD mappings are in place. |
347 | - if not pool_exists(service, pool): |
348 | - utils.juju_log('INFO', 'ceph: Creating new pool %s.' % pool) |
349 | - create_pool(service, pool) |
350 | - |
351 | - if not rbd_exists(service, pool, rbd_img): |
352 | - utils.juju_log('INFO', 'ceph: Creating RBD image (%s).' % rbd_img) |
353 | - create_rbd_image(service, pool, rbd_img, sizemb) |
354 | - |
355 | - if not image_mapped(rbd_img): |
356 | - utils.juju_log('INFO', 'ceph: Mapping RBD Image as a Block Device.') |
357 | - map_block_storage(service, pool, rbd_img) |
358 | - |
359 | - # make file system |
360 | - # TODO: What happens if for whatever reason this is run again and |
361 | - # the data is already in the rbd device and/or is mounted?? |
362 | - # When it is mounted already, it will fail to make the fs |
363 | - # XXX: This is really sketchy! Need to at least add an fstab entry |
364 | - # otherwise this hook will blow away existing data if its executed |
365 | - # after a reboot. |
366 | - if not filesystem_mounted(mount_point): |
367 | - make_filesystem(blk_device, fstype) |
368 | - |
369 | - for svc in system_services: |
370 | - if utils.running(svc): |
371 | - utils.juju_log('INFO', |
372 | - 'Stopping services %s prior to migrating '\ |
373 | - 'data' % svc) |
374 | - utils.stop(svc) |
375 | - |
376 | - place_data_on_ceph(service, blk_device, mount_point, fstype) |
377 | - |
378 | - for svc in system_services: |
379 | - utils.start(svc) |
380 | |
381 | === added symlink 'hooks/cluster-relation-changed' |
382 | === target is u'ha_relations.py' |
383 | === removed symlink 'hooks/cluster-relation-changed' |
384 | === target was u'ha-relations' |
385 | === modified file 'hooks/config-changed' |
386 | --- hooks/config-changed 2013-02-19 21:22:51 +0000 |
387 | +++ hooks/config-changed 2013-03-16 07:49:20 +0000 |
388 | @@ -7,7 +7,6 @@ |
389 | import hashlib |
390 | import os |
391 | import sys |
392 | -import utils |
393 | from string import upper |
394 | |
395 | num_re = re.compile('^[0-9]+$') |
396 | |
397 | === modified symlink 'hooks/ha-relation-changed' |
398 | === target changed u'ha-relations' => u'ha_relations.py' |
399 | === modified symlink 'hooks/ha-relation-joined' |
400 | === target changed u'ha-relations' => u'ha_relations.py' |
401 | === renamed file 'hooks/ha-relations' => 'hooks/ha_relations.py' |
402 | --- hooks/ha-relations 2013-03-01 20:56:10 +0000 |
403 | +++ hooks/ha_relations.py 2013-03-16 07:49:20 +0000 |
404 | @@ -1,84 +1,72 @@ |
405 | #!/usr/bin/env python |
406 | |
407 | -import json |
408 | import sys |
409 | -import subprocess |
410 | import os |
411 | -import time |
412 | -import commands |
413 | - |
414 | -import utils |
415 | -import ceph |
416 | - |
417 | -STORAGEMARKER = '/var/lib/juju/storageconfigured' |
418 | + |
419 | +import lib.utils as utils |
420 | +import lib.ceph_utils as ceph |
421 | +import lib.cluster_utils as cluster |
422 | |
423 | # CEPH |
424 | DATA_SRC_DST = '/var/lib/mysql' |
425 | - |
426 | SERVICE_NAME = os.getenv('JUJU_UNIT_NAME').split('/')[0] |
427 | POOL_NAME = SERVICE_NAME |
428 | - |
429 | -config=json.loads(subprocess.check_output(['config-get','--format=json'])) |
430 | +LEADER_RES = 'res_mysql_vip' |
431 | |
432 | |
433 | def ha_relation_joined(): |
434 | - |
435 | - # Checking vip values |
436 | - if not 'vip' in config: |
437 | - utils.juju_log('WARNING', 'NO Virtual IP was defined, bailing') |
438 | - sys.exit(1) |
439 | - |
440 | - if config['vip_iface'] == "None" or not config['vip_iface']: |
441 | - utils.juju_log('WARNING', 'NO Virtual IP interface was defined, bailing') |
442 | - sys.exit(1) |
443 | - |
444 | - if config['vip_cidr'] == "None" or not config['vip_cidr']: |
445 | - utils.juju_log('WARNING', 'NO CIDR was defined for the Virtual IP, bailing') |
446 | - sys.exit(1) |
447 | - |
448 | - # Obtain the config values necessary for the cluster config. These |
449 | - # include multicast port and interface to bind to. |
450 | - corosync_bindiface = config['ha-bindiface'] |
451 | - corosync_mcastport = config['ha-mcastport'] |
452 | + vip = utils.config_get('vip') |
453 | + vip_iface = utils.config_get('vip_iface') |
454 | + vip_cidr = utils.config_get('vip_cidr') |
455 | + corosync_bindiface = utils.config_get('ha-bindiface') |
456 | + corosync_mcastport = utils.config_get('ha-mcastport') |
457 | + |
458 | + if None in [vip, vip_cidr, vip_iface]: |
459 | + utils.juju_log('WARNING', |
460 | + 'Insufficient VIP information to configure cluster') |
461 | + sys.exit(1) |
462 | |
463 | # Starting configuring resources. |
464 | init_services = { |
465 | - 'res_mysqld':'mysql', |
466 | + 'res_mysqld': 'mysql', |
467 | } |
468 | |
469 | - |
470 | # If the 'ha' relation has been made *before* the 'ceph' relation, |
471 | - # it doesn't make sense to make it until after the 'ceph' relation |
472 | - # is made |
473 | - if not utils.is_relation_made('ceph'): |
474 | + # it doesn't make sense to make it until after the 'ceph' relation is made |
475 | + if not is_relation_made('ceph', 'auth'): |
476 | utils.juju_log('INFO', |
477 | - '*ceph* relation does not exist. Not sending *ha* relation data') |
478 | + '*ceph* relation does not exist. ' |
479 | + 'Not sending *ha* relation data yet') |
480 | return |
481 | else: |
482 | utils.juju_log('INFO', |
483 | - '*ceph* relation exists. Sending *ha* relation data') |
484 | + '*ceph* relation exists. Sending *ha* relation data') |
485 | |
486 | block_storage = 'ceph' |
487 | |
488 | resources = { |
489 | - 'res_mysql_rbd':'ocf:ceph:rbd', |
490 | - 'res_mysql_fs':'ocf:heartbeat:Filesystem', |
491 | - 'res_mysql_vip':'ocf:heartbeat:IPaddr2', |
492 | - 'res_mysqld':'upstart:mysql', |
493 | + 'res_mysql_rbd': 'ocf:ceph:rbd', |
494 | + 'res_mysql_fs': 'ocf:heartbeat:Filesystem', |
495 | + 'res_mysql_vip': 'ocf:heartbeat:IPaddr2', |
496 | + 'res_mysqld': 'upstart:mysql', |
497 | } |
498 | |
499 | + rbd_name = utils.config_get('rbd-name') |
500 | resource_params = { |
501 | - 'res_mysql_rbd':'params name="%s" pool="%s" user="%s" secret="%s"' % ( |
502 | - config['rbd-name'], POOL_NAME, SERVICE_NAME, ceph.keyfile_path(SERVICE_NAME)), |
503 | - 'res_mysql_fs':'params device="/dev/rbd/%s/%s" directory="%s" fstype="ext4" op start start-delay="10s"' % ( |
504 | - POOL_NAME, config['rbd-name'], DATA_SRC_DST), |
505 | - 'res_mysql_vip':'params ip="%s" cidr_netmask="%s" nic="%s"' % (config['vip'], |
506 | - config['vip_cidr'], config['vip_iface']), |
507 | - 'res_mysqld':'op start start-delay="5s" op monitor interval="5s"', |
508 | + 'res_mysql_rbd': 'params name="%s" pool="%s" user="%s" ' |
509 | + 'secret="%s"' % \ |
510 | + (rbd_name, POOL_NAME, |
511 | + SERVICE_NAME, ceph.keyfile_path(SERVICE_NAME)), |
512 | + 'res_mysql_fs': 'params device="/dev/rbd/%s/%s" directory="%s" ' |
513 | + 'fstype="ext4" op start start-delay="10s"' % \ |
514 | + (POOL_NAME, rbd_name, DATA_SRC_DST), |
515 | + 'res_mysql_vip': 'params ip="%s" cidr_netmask="%s" nic="%s"' % \ |
516 | + (vip, vip_cidr, vip_iface), |
517 | + 'res_mysqld': 'op start start-delay="5s" op monitor interval="5s"', |
518 | } |
519 | |
520 | groups = { |
521 | - 'grp_mysql':'res_mysql_rbd res_mysql_fs res_mysql_vip res_mysqld', |
522 | + 'grp_mysql': 'res_mysql_rbd res_mysql_fs res_mysql_vip res_mysqld', |
523 | } |
524 | |
525 | for rel_id in utils.relation_ids('ha'): |
526 | @@ -93,15 +81,13 @@ |
527 | |
528 | |
529 | def ha_relation_changed(): |
530 | - relation_data = utils.relation_get_dict() |
531 | - if ('clustered' in relation_data and |
532 | - utils.is_leader()): |
533 | + clustered = utils.relation_get('clustered') |
534 | + if (clustered and cluster.is_leader(LEADER_RES)): |
535 | utils.juju_log('INFO', 'Cluster configured, notifying other services') |
536 | - # Tell all related services to start using |
537 | - # the VIP |
538 | + # Tell all related services to start using the VIP |
539 | for r_id in utils.relation_ids('shared-db'): |
540 | utils.relation_set(rid=r_id, |
541 | - db_host=config['vip']) |
542 | + db_host=utils.config_get('vip')) |
543 | |
544 | |
545 | def ceph_joined(): |
546 | @@ -112,21 +98,17 @@ |
547 | |
548 | def ceph_changed(): |
549 | utils.juju_log('INFO', 'Start Ceph Relation Changed') |
550 | - |
551 | - # TODO: ask james: What happens if the relation data has changed? |
552 | - # do we reconfigure ceph? What do we do with the data? |
553 | auth = utils.relation_get('auth') |
554 | key = utils.relation_get('key') |
555 | if None in [auth, key]: |
556 | utils.juju_log('INFO', 'Missing key or auth in relation') |
557 | - sys.exit(0) |
558 | - |
559 | + return |
560 | |
561 | ceph.configure(service=SERVICE_NAME, key=key, auth=auth) |
562 | |
563 | - if utils.eligible_leader(): |
564 | - sizemb = int(config['block-size'].split('G')[0]) * 1024 |
565 | - rbd_img = config['rbd-name'] |
566 | + if cluster.eligible_leader(LEADER_RES): |
567 | + sizemb = int(utils.config_get('block-size')) * 1024 |
568 | + rbd_img = utils.config_get('rbd-name') |
569 | blk_device = '/dev/rbd/%s/%s' % (POOL_NAME, rbd_img) |
570 | ceph.ensure_ceph_storage(service=SERVICE_NAME, pool=POOL_NAME, |
571 | rbd_img=rbd_img, sizemb=sizemb, |
572 | @@ -138,46 +120,39 @@ |
573 | 'This is not the peer leader. Not configuring RBD.') |
574 | # Stopping MySQL |
575 | if utils.running('mysql'): |
576 | - utils.juju_log('INFO','Stopping MySQL...') |
577 | + utils.juju_log('INFO', 'Stopping MySQL...') |
578 | utils.stop('mysql') |
579 | |
580 | - |
581 | # If 'ha' relation has been made before the 'ceph' relation |
582 | # it is important to make sure the ha-relation data is being |
583 | # sent. |
584 | - if utils.is_relation_made('ha'): |
585 | + if is_relation_made('ha'): |
586 | utils.juju_log('INFO', |
587 | - '*ha* relation exists. Making sure the ha relation data is sent.') |
588 | + '*ha* relation exists. Making sure the ha' |
589 | + ' relation data is sent.') |
590 | ha_relation_joined() |
591 | return |
592 | - else: |
593 | - utils.juju_log('INFO', |
594 | - '*ha* relation does not exist.') |
595 | |
596 | utils.juju_log('INFO', 'Finish Ceph Relation Changed') |
597 | |
598 | |
599 | -def cluster_changed(): |
600 | - utils.juju_log('INFO', 'Begin cluster changed hook.') |
601 | - |
602 | - if config['block-size'] == "None": |
603 | - utils.juju_log('WARNING', 'NO block storage size configured, bailing') |
604 | - return |
605 | - |
606 | - utils.juju_log('INFO', 'End install hook.') |
607 | +def is_relation_made(relation, key='private-address'): |
608 | + relation_data = [] |
609 | + for r_id in (utils.relation_ids(relation) or []): |
610 | + for unit in (utils.relation_list(r_id) or []): |
611 | + relation_data.append(utils.relation_get(key, |
612 | + rid=r_id, |
613 | + unit=unit)) |
614 | + if not relation_data: |
615 | + return False |
616 | + return True |
617 | |
618 | |
619 | hooks = { |
620 | - "cluster-relation-changed": cluster_changed, |
621 | "ha-relation-joined": ha_relation_joined, |
622 | "ha-relation-changed": ha_relation_changed, |
623 | "ceph-relation-joined": ceph_joined, |
624 | "ceph-relation-changed": ceph_changed, |
625 | } |
626 | |
627 | -# keystone-hooks gets called by symlink corresponding to the requested relation |
628 | -# hook. |
629 | -arg0 = sys.argv[0].split("/").pop() |
630 | -if arg0 not in hooks.keys(): |
631 | - error_out("Unsupported hook: %s" % arg0) |
632 | -hooks[arg0]() |
633 | +utils.do_hooks(hooks) |
634 | |
635 | === added directory 'hooks/lib' |
636 | === added file 'hooks/lib/__init__.py' |
637 | === added file 'hooks/lib/ceph_utils.py' |
638 | --- hooks/lib/ceph_utils.py 1970-01-01 00:00:00 +0000 |
639 | +++ hooks/lib/ceph_utils.py 2013-03-16 07:49:20 +0000 |
640 | @@ -0,0 +1,256 @@ |
641 | +# |
642 | +# Copyright 2012 Canonical Ltd. |
643 | +# |
644 | +# This file is sourced from lp:openstack-charm-helpers |
645 | +# |
646 | +# Authors: |
647 | +# James Page <james.page@ubuntu.com> |
648 | +# Adam Gandelman <adamg@ubuntu.com> |
649 | +# |
650 | + |
651 | +import commands |
652 | +import subprocess |
653 | +import os |
654 | +import shutil |
655 | +import lib.utils as utils |
656 | + |
657 | +KEYRING = '/etc/ceph/ceph.client.%s.keyring' |
658 | +KEYFILE = '/etc/ceph/ceph.client.%s.key' |
659 | + |
660 | +CEPH_CONF = """[global] |
661 | + auth supported = %(auth)s |
662 | + keyring = %(keyring)s |
663 | + mon host = %(mon_hosts)s |
664 | +""" |
665 | + |
666 | + |
667 | +def execute(cmd): |
668 | + subprocess.check_call(cmd) |
669 | + |
670 | + |
671 | +def execute_shell(cmd): |
672 | + subprocess.check_call(cmd, shell=True) |
673 | + |
674 | + |
675 | +def install(): |
676 | + ceph_dir = "/etc/ceph" |
677 | + if not os.path.isdir(ceph_dir): |
678 | + os.mkdir(ceph_dir) |
679 | + utils.install('ceph-common') |
680 | + |
681 | + |
682 | +def rbd_exists(service, pool, rbd_img): |
683 | + (rc, out) = commands.getstatusoutput('rbd list --id %s --pool %s' %\ |
684 | + (service, pool)) |
685 | + return rbd_img in out |
686 | + |
687 | + |
688 | +def create_rbd_image(service, pool, image, sizemb): |
689 | + cmd = [ |
690 | + 'rbd', |
691 | + 'create', |
692 | + image, |
693 | + '--size', |
694 | + str(sizemb), |
695 | + '--id', |
696 | + service, |
697 | + '--pool', |
698 | + pool |
699 | + ] |
700 | + execute(cmd) |
701 | + |
702 | + |
703 | +def pool_exists(service, name): |
704 | + (rc, out) = commands.getstatusoutput("rados --id %s lspools" % service) |
705 | + return name in out |
706 | + |
707 | + |
708 | +def create_pool(service, name): |
709 | + cmd = [ |
710 | + 'rados', |
711 | + '--id', |
712 | + service, |
713 | + 'mkpool', |
714 | + name |
715 | + ] |
716 | + execute(cmd) |
717 | + |
718 | + |
719 | +def keyfile_path(service): |
720 | + return KEYFILE % service |
721 | + |
722 | + |
723 | +def keyring_path(service): |
724 | + return KEYRING % service |
725 | + |
726 | + |
727 | +def create_keyring(service, key): |
728 | + keyring = keyring_path(service) |
729 | + if os.path.exists(keyring): |
730 | + utils.juju_log('INFO', 'ceph: Keyring exists at %s.' % keyring) |
731 | + cmd = [ |
732 | + 'ceph-authtool', |
733 | + keyring, |
734 | + '--create-keyring', |
735 | + '--name=client.%s' % service, |
736 | + '--add-key=%s' % key |
737 | + ] |
738 | + execute(cmd) |
739 | + utils.juju_log('INFO', 'ceph: Created new ring at %s.' % keyring) |
740 | + |
741 | + |
742 | +def create_key_file(service, key): |
743 | + # create a file containing the key |
744 | + keyfile = keyfile_path(service) |
745 | + if os.path.exists(keyfile): |
746 | + utils.juju_log('INFO', 'ceph: Keyfile exists at %s.' % keyfile) |
747 | + fd = open(keyfile, 'w') |
748 | + fd.write(key) |
749 | + fd.close() |
750 | + utils.juju_log('INFO', 'ceph: Created new keyfile at %s.' % keyfile) |
751 | + |
752 | + |
753 | +def get_ceph_nodes(): |
754 | + hosts = [] |
755 | + for r_id in utils.relation_ids('ceph'): |
756 | + for unit in utils.relation_list(r_id): |
757 | + hosts.append(utils.relation_get('private-address', |
758 | + unit=unit, rid=r_id)) |
759 | + return hosts |
760 | + |
761 | + |
762 | +def configure(service, key, auth): |
763 | + create_keyring(service, key) |
764 | + create_key_file(service, key) |
765 | + hosts = get_ceph_nodes() |
766 | + mon_hosts = ",".join(map(str, hosts)) |
767 | + keyring = keyring_path(service) |
768 | + with open('/etc/ceph/ceph.conf', 'w') as ceph_conf: |
769 | + ceph_conf.write(CEPH_CONF % locals()) |
770 | + modprobe_kernel_module('rbd') |
771 | + |
772 | + |
773 | +def image_mapped(image_name): |
774 | + (rc, out) = commands.getstatusoutput('rbd showmapped') |
775 | + return image_name in out |
776 | + |
777 | + |
778 | +def map_block_storage(service, pool, image): |
779 | + cmd = [ |
780 | + 'rbd', |
781 | + 'map', |
782 | + '%s/%s' % (pool, image), |
783 | + '--user', |
784 | + service, |
785 | + '--secret', |
786 | + keyfile_path(service), |
787 | + ] |
788 | + execute(cmd) |
789 | + |
790 | + |
791 | +def filesystem_mounted(fs): |
792 | + return subprocess.call(['grep', '-wqs', fs, '/proc/mounts']) == 0 |
793 | + |
794 | + |
795 | +def make_filesystem(blk_device, fstype='ext4'): |
796 | + utils.juju_log('INFO', |
797 | + 'ceph: Formatting block device %s as filesystem %s.' %\ |
798 | + (blk_device, fstype)) |
799 | + cmd = ['mkfs', '-t', fstype, blk_device] |
800 | + execute(cmd) |
801 | + |
802 | + |
803 | +def place_data_on_ceph(service, blk_device, data_src_dst, fstype='ext4'): |
804 | + # mount block device into /mnt |
805 | + cmd = ['mount', '-t', fstype, blk_device, '/mnt'] |
806 | + execute(cmd) |
807 | + |
808 | + # copy data to /mnt |
809 | + try: |
810 | + copy_files(data_src_dst, '/mnt') |
811 | + except: |
812 | + pass |
813 | + |
814 | + # umount block device |
815 | + cmd = ['umount', '/mnt'] |
816 | + execute(cmd) |
817 | + |
818 | + _dir = os.stat(data_src_dst) |
819 | + uid = _dir.st_uid |
820 | + gid = _dir.st_gid |
821 | + |
822 | + # re-mount where the data should originally be |
823 | + cmd = ['mount', '-t', fstype, blk_device, data_src_dst] |
824 | + execute(cmd) |
825 | + |
826 | + # ensure original ownership of new mount. |
827 | + cmd = ['chown', '-R', '%s:%s' % (uid, gid), data_src_dst] |
828 | + execute(cmd) |
829 | + |
830 | + |
831 | +# TODO: re-use |
832 | +def modprobe_kernel_module(module): |
833 | + utils.juju_log('INFO', 'Loading kernel module') |
834 | + cmd = ['modprobe', module] |
835 | + execute(cmd) |
836 | + cmd = 'echo %s >> /etc/modules' % module |
837 | + execute_shell(cmd) |
838 | + |
839 | + |
840 | +def copy_files(src, dst, symlinks=False, ignore=None): |
841 | + for item in os.listdir(src): |
842 | + s = os.path.join(src, item) |
843 | + d = os.path.join(dst, item) |
844 | + if os.path.isdir(s): |
845 | + shutil.copytree(s, d, symlinks, ignore) |
846 | + else: |
847 | + shutil.copy2(s, d) |
848 | + |
849 | + |
850 | +def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point, |
851 | + blk_device, fstype, system_services=[]): |
852 | + """ |
853 | + To be called from the current cluster leader. |
854 | + Ensures given pool and RBD image exists, is mapped to a block device, |
855 | + and the device is formatted and mounted at the given mount_point. |
856 | + |
857 | + If formatting a device for the first time, data existing at mount_point |
858 | + will be migrated to the RBD device before being remounted. |
859 | + |
860 | + All services listed in system_services will be stopped prior to data |
861 | + migration and restarted when complete. |
862 | + """ |
863 | + # Ensure pool, RBD image, RBD mappings are in place. |
864 | + if not pool_exists(service, pool): |
865 | + utils.juju_log('INFO', 'ceph: Creating new pool %s.' % pool) |
866 | + create_pool(service, pool) |
867 | + |
868 | + if not rbd_exists(service, pool, rbd_img): |
869 | + utils.juju_log('INFO', 'ceph: Creating RBD image (%s).' % rbd_img) |
870 | + create_rbd_image(service, pool, rbd_img, sizemb) |
871 | + |
872 | + if not image_mapped(rbd_img): |
873 | + utils.juju_log('INFO', 'ceph: Mapping RBD Image as a Block Device.') |
874 | + map_block_storage(service, pool, rbd_img) |
875 | + |
876 | + # make file system |
877 | + # TODO: What happens if for whatever reason this is run again and |
878 | + # the data is already in the rbd device and/or is mounted?? |
879 | + # When it is mounted already, it will fail to make the fs |
880 | + # XXX: This is really sketchy! Need to at least add an fstab entry |
881 | + # otherwise this hook will blow away existing data if its executed |
882 | + # after a reboot. |
883 | + if not filesystem_mounted(mount_point): |
884 | + make_filesystem(blk_device, fstype) |
885 | + |
886 | + for svc in system_services: |
887 | + if utils.running(svc): |
888 | + utils.juju_log('INFO', |
889 | + 'Stopping service %s prior to migrating '\ |
890 | + 'data' % svc) |
891 | + utils.stop(svc) |
892 | + |
893 | + place_data_on_ceph(service, blk_device, mount_point, fstype) |
894 | + |
895 | + for svc in system_services: |
896 | + utils.start(svc) |
897 | |
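The guarded create/map/format sequence in `ensure_ceph_storage()` above is what makes the hook safe to re-run: every step is skipped when its existence check already passes. A minimal sketch of that idempotent flow (the `plan_ceph_actions` helper and its state flags are illustrative, not part of the charm):

```python
# Illustrative model of ensure_ceph_storage()'s idempotent flow:
# each action only fires when its existence check fails, so a re-run
# performs just the missing steps.
def plan_ceph_actions(pool_exists, image_exists, image_mapped, fs_mounted):
    actions = []
    if not pool_exists:
        actions.append('create_pool')
    if not image_exists:
        actions.append('create_rbd_image')
    if not image_mapped:
        actions.append('map_block_storage')
    if not fs_mounted:
        actions.append('make_filesystem_and_migrate')
    return actions

# Fresh cluster: everything needs doing.
print(plan_ceph_actions(False, False, False, False))
# Re-run after success: nothing left to do.
print(plan_ceph_actions(True, True, True, True))
```

On a fresh cluster every action fires; once storage is in place a re-run is a no-op, which is exactly the property the XXX comment above worries about for the filesystem step after a reboot.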
898 | === added file 'hooks/lib/cluster_utils.py' |
899 | --- hooks/lib/cluster_utils.py 1970-01-01 00:00:00 +0000 |
900 | +++ hooks/lib/cluster_utils.py 2013-03-16 07:49:20 +0000 |
901 | @@ -0,0 +1,130 @@ |
902 | +# |
903 | +# Copyright 2012 Canonical Ltd. |
904 | +# |
905 | +# This file is sourced from lp:openstack-charm-helpers |
906 | +# |
907 | +# Authors: |
908 | +# James Page <james.page@ubuntu.com> |
909 | +# Adam Gandelman <adamg@ubuntu.com> |
910 | +# |
911 | + |
912 | +from lib.utils import ( |
913 | + juju_log, |
914 | + relation_ids, |
915 | + relation_list, |
916 | + relation_get, |
917 | + get_unit_hostname, |
918 | + config_get |
919 | + ) |
920 | +import subprocess |
921 | +import os |
922 | + |
923 | + |
924 | +def is_clustered(): |
925 | + for r_id in (relation_ids('ha') or []): |
926 | + for unit in (relation_list(r_id) or []): |
927 | + clustered = relation_get('clustered', |
928 | + rid=r_id, |
929 | + unit=unit) |
930 | + if clustered: |
931 | + return True |
932 | + return False |
933 | + |
934 | + |
935 | +def is_leader(resource): |
936 | + cmd = [ |
937 | + "crm", "resource", |
938 | + "show", resource |
939 | + ] |
940 | + try: |
941 | + status = subprocess.check_output(cmd) |
942 | + except subprocess.CalledProcessError: |
943 | + return False |
944 | + else: |
945 | + if get_unit_hostname() in status: |
946 | + return True |
947 | + else: |
948 | + return False |
949 | + |
950 | + |
951 | +def peer_units(): |
952 | + peers = [] |
953 | + for r_id in (relation_ids('cluster') or []): |
954 | + for unit in (relation_list(r_id) or []): |
955 | + peers.append(unit) |
956 | + return peers |
957 | + |
958 | + |
959 | +def oldest_peer(peers): |
960 | + local_unit_no = os.getenv('JUJU_UNIT_NAME').split('/')[1] |
961 | + for peer in peers: |
962 | + remote_unit_no = peer.split('/')[1] |
963 | + if int(remote_unit_no) < int(local_unit_no): |
964 | + return False |
965 | + return True |
966 | + |
967 | + |
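`oldest_peer()` above elects the lowest-numbered unit as leader when no CRM cluster exists yet. One subtlety worth flagging in review: the unit numbers come out of the unit name as strings, and a lexicographic compare gets `'10' < '9'` wrong, so the comparison needs to be numeric. A hedged standalone sketch:

```python
# Illustrative re-implementation of oldest_peer(): the local unit is
# the leader only if no peer has a lower (numeric) unit number.
def oldest_peer(local_unit, peers):
    local_no = int(local_unit.split('/')[1])
    for peer in peers:
        if int(peer.split('/')[1]) < local_no:
            return False
    return True

print(oldest_peer('mysql/9', ['mysql/10', 'mysql/11']))  # True: 9 is oldest
print(oldest_peer('mysql/2', ['mysql/0', 'mysql/1']))    # False
```

With string comparison, `'10' < '9'` is True, so `mysql/9` would wrongly defer to `mysql/10` once a service scales past ten units.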
968 | +def eligible_leader(resource): |
969 | + if is_clustered(): |
970 | + if not is_leader(resource): |
971 | + juju_log('INFO', 'Deferring action to CRM leader.') |
972 | + return False |
973 | + else: |
974 | + peers = peer_units() |
975 | + if peers and not oldest_peer(peers): |
976 | + juju_log('INFO', 'Deferring action to oldest service unit.') |
977 | + return False |
978 | + return True |
979 | + |
980 | + |
981 | +def https(): |
982 | + ''' |
983 | + Determines whether enough data has been provided in configuration |
984 | + or relation data to configure HTTPS. |
985 | + |
986 | + returns: boolean |
987 | + ''' |
988 | + if config_get('use-https') == "yes": |
989 | + return True |
990 | + if config_get('ssl_cert') and config_get('ssl_key'): |
991 | + return True |
992 | + for r_id in relation_ids('identity-service'): |
993 | + for unit in relation_list(r_id): |
994 | + if (relation_get('https_keystone', rid=r_id, unit=unit) and |
995 | + relation_get('ssl_cert', rid=r_id, unit=unit) and |
996 | + relation_get('ssl_key', rid=r_id, unit=unit) and |
997 | + relation_get('ca_cert', rid=r_id, unit=unit)): |
998 | + return True |
999 | + return False |
1000 | + |
1001 | + |
1002 | +def determine_api_port(public_port): |
1003 | + ''' |
1004 | + Determine correct API server listening port based on |
1005 | + existence of HTTPS reverse proxy and/or haproxy. |
1006 | + |
1007 | + public_port: int: standard public port for given service |
1008 | + |
1009 | + returns: int: the correct listening port for the API service |
1010 | + ''' |
1011 | + i = 0 |
1012 | + if len(peer_units()) > 0 or is_clustered(): |
1013 | + i += 1 |
1014 | + if https(): |
1015 | + i += 1 |
1016 | + return public_port - (i * 10) |
1017 | + |
1018 | + |
1019 | +def determine_haproxy_port(public_port): |
1020 | + ''' |
1021 | + Determine correct proxy listening port based on public port and |
1022 | + existence of HTTPS reverse proxy. |
1023 | + |
1024 | + public_port: int: standard public port for given service |
1025 | + |
1026 | + returns: int: the correct listening port for the HAProxy service |
1027 | + ''' |
1028 | + i = 0 |
1029 | + if https(): |
1030 | + i += 1 |
1031 | + return public_port - (i * 10) |
1032 | |
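The two port helpers above encode a stacking convention: each front-end layer (haproxy for clustering, the HTTPS reverse proxy in front of that) listens 10 ports below the layer facing it, so the layers stack without colliding. A small sketch of the arithmetic (the port value is illustrative):

```python
# Illustrative port-offset arithmetic matching determine_api_port():
# subtract 10 for each front-end layer that sits in front of the API.
def api_port(public_port, clustered=False, https=False):
    offset = 0
    if clustered:
        offset += 1  # haproxy takes the port one layer up
    if https:
        offset += 1  # HTTPS reverse proxy takes the layer above that
    return public_port - offset * 10

print(api_port(5000))                              # 5000
print(api_port(5000, https=True))                  # 4990
print(api_port(5000, clustered=True, https=True))  # 4980
```

`determine_haproxy_port()` is the same calculation with only the HTTPS layer considered, since haproxy itself is the layer being placed.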
1033 | === added file 'hooks/lib/utils.py' |
1034 | --- hooks/lib/utils.py 1970-01-01 00:00:00 +0000 |
1035 | +++ hooks/lib/utils.py 2013-03-16 07:49:20 +0000 |
1036 | @@ -0,0 +1,275 @@ |
1037 | +# |
1038 | +# Copyright 2012 Canonical Ltd. |
1039 | +# |
1040 | +# This file is sourced from lp:openstack-charm-helpers |
1041 | +# |
1042 | +# Authors: |
1043 | +# James Page <james.page@ubuntu.com> |
1044 | +# Paul Collins <paul.collins@canonical.com> |
1045 | +# Adam Gandelman <adamg@ubuntu.com> |
1046 | +# |
1047 | + |
1048 | +import json |
1049 | +import os |
1050 | +import subprocess |
1051 | +import socket |
1052 | +import sys |
1053 | + |
1054 | + |
1055 | +def do_hooks(hooks): |
1056 | + hook = os.path.basename(sys.argv[0]) |
1057 | + |
1058 | + try: |
1059 | + hook_func = hooks[hook] |
1060 | + except KeyError: |
1061 | + juju_log('INFO', |
1062 | + "This charm doesn't know how to handle '{}'.".format(hook)) |
1063 | + else: |
1064 | + hook_func() |
1065 | + |
1066 | + |
1067 | +def install(*pkgs): |
1068 | + cmd = [ |
1069 | + 'apt-get', |
1070 | + '-y', |
1071 | + 'install' |
1072 | + ] |
1073 | + for pkg in pkgs: |
1074 | + cmd.append(pkg) |
1075 | + subprocess.check_call(cmd) |
1076 | + |
1077 | +TEMPLATES_DIR = 'templates' |
1078 | + |
1079 | +try: |
1080 | + import jinja2 |
1081 | +except ImportError: |
1082 | + install('python-jinja2') |
1083 | + import jinja2 |
1084 | + |
1085 | +try: |
1086 | + import dns.resolver |
1087 | +except ImportError: |
1088 | + install('python-dnspython') |
1089 | + import dns.resolver |
1090 | + |
1091 | + |
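The `jinja2`/`dns.resolver` imports above use an install-on-ImportError bootstrap so the charm can fetch its own Python dependencies on first run. A generic sketch of the pattern (`import_or_install` and `install_fn` are illustrative stand-ins for `utils.install`):

```python
import importlib

def import_or_install(module_name, package, install_fn):
    # Try the import; on failure install the distro package and retry.
    try:
        return importlib.import_module(module_name)
    except ImportError:
        install_fn(package)
        return importlib.import_module(module_name)

installed = []
# 'json' ships with Python, so no install call is recorded here.
mod = import_or_install('json', 'python-json', installed.append)
print(mod.__name__, installed)
```

The retry after installing is the important part: the second `import` must succeed, or the charm fails fast at module load rather than deep inside a hook.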
1092 | +def render_template(template_name, context, template_dir=TEMPLATES_DIR): |
1093 | + templates = jinja2.Environment( |
1094 | + loader=jinja2.FileSystemLoader(template_dir) |
1095 | + ) |
1096 | + template = templates.get_template(template_name) |
1097 | + return template.render(context) |
1098 | + |
1099 | +CLOUD_ARCHIVE = \ |
1100 | +""" # Ubuntu Cloud Archive |
1101 | +deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main |
1102 | +""" |
1103 | + |
1104 | +CLOUD_ARCHIVE_POCKETS = { |
1105 | + 'folsom': 'precise-updates/folsom', |
1106 | + 'folsom/updates': 'precise-updates/folsom', |
1107 | + 'folsom/proposed': 'precise-proposed/folsom', |
1108 | + 'grizzly': 'precise-updates/grizzly', |
1109 | + 'grizzly/updates': 'precise-updates/grizzly', |
1110 | + 'grizzly/proposed': 'precise-proposed/grizzly' |
1111 | + } |
1112 | + |
1113 | + |
1114 | +def configure_source(): |
1115 | + source = config_get('openstack-origin') |
1116 | + if not source: |
1117 | + return |
1118 | + if source.startswith('ppa:'): |
1119 | + cmd = [ |
1120 | + 'add-apt-repository', |
1121 | + source |
1122 | + ] |
1123 | + subprocess.check_call(cmd) |
1124 | + if source.startswith('cloud:'): |
1125 | + install('ubuntu-cloud-keyring') |
1126 | + pocket = source.split(':')[1] |
1127 | + with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt: |
1128 | + apt.write(CLOUD_ARCHIVE.format(CLOUD_ARCHIVE_POCKETS[pocket])) |
1129 | + if source.startswith('deb'): |
1130 | + l = len(source.split('|')) |
1131 | + if l == 2: |
1132 | + (apt_line, key) = source.split('|') |
1133 | + cmd = [ |
1134 | + 'apt-key', |
1135 | + 'adv', '--keyserver', 'keyserver.ubuntu.com', |
1136 | + '--recv-keys', key |
1137 | + ] |
1138 | + subprocess.check_call(cmd) |
1139 | + elif l == 1: |
1140 | + apt_line = source |
1141 | + |
1142 | + with open('/etc/apt/sources.list.d/quantum.list', 'w') as apt: |
1143 | + apt.write(apt_line + "\n") |
1144 | + cmd = [ |
1145 | + 'apt-get', |
1146 | + 'update' |
1147 | + ] |
1148 | + subprocess.check_call(cmd) |
1149 | + |
1150 | +# Protocols |
1151 | +TCP = 'TCP' |
1152 | +UDP = 'UDP' |
1153 | + |
1154 | + |
1155 | +def expose(port, protocol='TCP'): |
1156 | + cmd = [ |
1157 | + 'open-port', |
1158 | + '{}/{}'.format(port, protocol) |
1159 | + ] |
1160 | + subprocess.check_call(cmd) |
1161 | + |
1162 | + |
1163 | +def juju_log(severity, message): |
1164 | + cmd = [ |
1165 | + 'juju-log', |
1166 | + '--log-level', severity, |
1167 | + message |
1168 | + ] |
1169 | + subprocess.check_call(cmd) |
1170 | + |
1171 | + |
1172 | +def relation_ids(relation): |
1173 | + cmd = [ |
1174 | + 'relation-ids', |
1175 | + relation |
1176 | + ] |
1177 | + result = str(subprocess.check_output(cmd)).split() |
1178 | + if result == "": |
1179 | + return None |
1180 | + else: |
1181 | + return result |
1182 | + |
1183 | + |
1184 | +def relation_list(rid): |
1185 | + cmd = [ |
1186 | + 'relation-list', |
1187 | + '-r', rid, |
1188 | + ] |
1189 | + result = str(subprocess.check_output(cmd)).split() |
1190 | + if result == "": |
1191 | + return None |
1192 | + else: |
1193 | + return result |
1194 | + |
1195 | + |
1196 | +def relation_get(attribute, unit=None, rid=None): |
1197 | + cmd = [ |
1198 | + 'relation-get', |
1199 | + ] |
1200 | + if rid: |
1201 | + cmd.append('-r') |
1202 | + cmd.append(rid) |
1203 | + cmd.append(attribute) |
1204 | + if unit: |
1205 | + cmd.append(unit) |
1206 | + value = subprocess.check_output(cmd).strip() # IGNORE:E1103 |
1207 | + if value == "": |
1208 | + return None |
1209 | + else: |
1210 | + return value |
1211 | + |
1212 | + |
1213 | +def relation_set(**kwargs): |
1214 | + cmd = [ |
1215 | + 'relation-set' |
1216 | + ] |
1217 | + args = [] |
1218 | + for k, v in kwargs.items(): |
1219 | + if k == 'rid': |
1220 | + if v: |
1221 | + cmd.append('-r') |
1222 | + cmd.append(v) |
1223 | + else: |
1224 | + args.append('{}={}'.format(k, v)) |
1225 | + cmd += args |
1226 | + subprocess.check_call(cmd) |
1227 | + |
1228 | + |
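`relation_set()` above maps keyword arguments onto the `relation-set` command line, treating the `rid` key specially as the `-r` flag. A sketch of just that argv construction (keys are sorted here only to make the output deterministic; the real helper preserves dict order):

```python
# Illustrative argv builder mirroring relation_set(): 'rid' becomes
# the -r flag, every other kwarg becomes a key=value argument.
def build_relation_set_cmd(**kwargs):
    cmd = ['relation-set']
    args = []
    for k, v in sorted(kwargs.items()):
        if k == 'rid':
            if v:
                cmd.extend(['-r', v])
        else:
            args.append('{}={}'.format(k, v))
    return cmd + args

print(build_relation_set_cmd(rid='shared-db:1', db_host='10.0.0.2'))
```

Note the `if v:` guard on `rid`, matching the new lib/utils version: a falsy relation id simply omits `-r` rather than emitting an empty flag.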
1229 | +def unit_get(attribute): |
1230 | + cmd = [ |
1231 | + 'unit-get', |
1232 | + attribute |
1233 | + ] |
1234 | + value = subprocess.check_output(cmd).strip() # IGNORE:E1103 |
1235 | + if value == "": |
1236 | + return None |
1237 | + else: |
1238 | + return value |
1239 | + |
1240 | + |
1241 | +def config_get(attribute): |
1242 | + cmd = [ |
1243 | + 'config-get', |
1244 | + '--format', |
1245 | + 'json', |
1246 | + ] |
1247 | + out = subprocess.check_output(cmd).strip() # IGNORE:E1103 |
1248 | + cfg = json.loads(out) |
1249 | + |
1250 | + try: |
1251 | + return cfg[attribute] |
1252 | + except KeyError: |
1253 | + return None |
1254 | + |
1255 | + |
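`config_get()` above fetches the whole charm config as JSON in one call and maps a missing key to `None` instead of raising. The parsing step in isolation (the sample payload is illustrative):

```python
import json

# Illustrative version of config_get()'s parsing: load the full JSON
# config and turn a missing attribute into None.
def config_lookup(raw_json, attribute):
    cfg = json.loads(raw_json)
    try:
        return cfg[attribute]
    except KeyError:
        return None

raw = '{"vip": "10.0.0.100", "openstack-origin": "cloud:precise-folsom"}'
print(config_lookup(raw, 'vip'))            # 10.0.0.100
print(config_lookup(raw, 'ha-bindiface'))   # None
```

Fetching the config once as JSON also preserves types (booleans, ints) that the per-attribute `config-get <key>` form in the removed hooks/utils.py returned as bare strings.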
1256 | +def get_unit_hostname(): |
1257 | + return socket.gethostname() |
1258 | + |
1259 | + |
1260 | +def get_host_ip(hostname=unit_get('private-address')): |
1261 | + try: |
1262 | + # Test to see if already an IPv4 address |
1263 | + socket.inet_aton(hostname) |
1264 | + return hostname |
1265 | + except socket.error: |
1266 | + answers = dns.resolver.query(hostname, 'A') |
1267 | + if answers: |
1268 | + return answers[0].address |
1269 | + return None |
1270 | + |
1271 | + |
1272 | +def _svc_control(service, action): |
1273 | + subprocess.check_call(['service', service, action]) |
1274 | + |
1275 | + |
1276 | +def restart(*services): |
1277 | + for service in services: |
1278 | + _svc_control(service, 'restart') |
1279 | + |
1280 | + |
1281 | +def stop(*services): |
1282 | + for service in services: |
1283 | + _svc_control(service, 'stop') |
1284 | + |
1285 | + |
1286 | +def start(*services): |
1287 | + for service in services: |
1288 | + _svc_control(service, 'start') |
1289 | + |
1290 | + |
1291 | +def reload(*services): |
1292 | + for service in services: |
1293 | + try: |
1294 | + _svc_control(service, 'reload') |
1295 | + except subprocess.CalledProcessError: |
1296 | + # Reload failed - either service does not support reload |
1297 | + # or it was not running - restart will fixup most things |
1298 | + _svc_control(service, 'restart') |
1299 | + |
1300 | + |
1301 | +def running(service): |
1302 | + try: |
1303 | + output = subprocess.check_output(['service', service, 'status']) |
1304 | + except subprocess.CalledProcessError: |
1305 | + return False |
1306 | + else: |
1307 | + if ("start/running" in output or |
1308 | + "is running" in output): |
1309 | + return True |
1310 | + else: |
1311 | + return False |
1312 | |
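With the hook targets renamed to `.py` files, each hook symlink's basename still selects the handler via `do_hooks()`. A standalone sketch of that dispatch (the hook table and log sink are illustrative):

```python
import os

# Illustrative version of do_hooks(): the symlink name (e.g.
# shared-db-relation-changed) picks the handler out of the hook table;
# unknown hooks are logged rather than treated as errors.
def dispatch(argv0, hooks, log):
    hook = os.path.basename(argv0)
    try:
        hook_func = hooks[hook]
    except KeyError:
        log("This charm doesn't know how to handle '{}'.".format(hook))
    else:
        hook_func()

fired, logged = [], []
hooks = {'shared-db-relation-changed': lambda: fired.append('changed')}
dispatch('hooks/shared-db-relation-changed', hooks, logged.append)
dispatch('hooks/install', hooks, logged.append)
print(fired, len(logged))
```

The new lib/utils version looks up the handler before calling it, so a KeyError raised from inside a hook function is no longer silently mistaken for an unknown hook name, which was a latent problem with the removed `hooks[hook]()` form.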
1313 | === modified symlink 'hooks/shared-db-relation-changed' |
1314 | === target changed u'shared-db-relations' => u'shared_db_relations.py' |
1315 | === modified symlink 'hooks/shared-db-relation-joined' |
1316 | === target changed u'shared-db-relations' => u'shared_db_relations.py' |
1317 | === removed file 'hooks/shared-db-relations' |
1318 | --- hooks/shared-db-relations 2013-02-19 23:11:24 +0000 |
1319 | +++ hooks/shared-db-relations 1970-01-01 00:00:00 +0000 |
1320 | @@ -1,152 +0,0 @@ |
1321 | -#!/usr/bin/python |
1322 | -# |
1323 | -# Create relations between a shared database to many peers. |
1324 | -# Join does nothing. Peer requests access to $DATABASE from $REMOTE_HOST. |
1325 | -# It's up to the hooks to ensure database exists, peer has access and |
1326 | -# clean up grants after a broken/departed peer (TODO) |
1327 | -# |
1328 | -# Author: Adam Gandelman <adam.gandelman@canonical.com> |
1329 | - |
1330 | - |
1331 | -from common import * |
1332 | -import sys |
1333 | -import subprocess |
1334 | -import json |
1335 | -import socket |
1336 | -import os |
1337 | -import utils |
1338 | - |
1339 | -config=json.loads(subprocess.check_output(['config-get','--format=json'])) |
1340 | - |
1341 | -def pwgen(): |
1342 | - return subprocess.check_output(['pwgen', '-s', '16']).strip() |
1343 | - |
1344 | - |
1345 | -def relation_get(): |
1346 | - return json.loads(subprocess.check_output( |
1347 | - ['relation-get', |
1348 | - '--format', |
1349 | - 'json'] |
1350 | - ) |
1351 | - ) |
1352 | - |
1353 | - |
1354 | -def relation_set(**kwargs): |
1355 | - cmd = [ 'relation-set' ] |
1356 | - args = [] |
1357 | - for k, v in kwargs.items(): |
1358 | - if k == 'rid': |
1359 | - cmd.append('-r') |
1360 | - cmd.append(v) |
1361 | - else: |
1362 | - args.append('{}={}'.format(k, v)) |
1363 | - cmd += args |
1364 | - subprocess.check_call(cmd) |
1365 | - |
1366 | - |
1367 | -def shared_db_changed(): |
1368 | - |
1369 | - def configure_db(hostname, |
1370 | - database, |
1371 | - username): |
1372 | - passwd_file = "/var/lib/mysql/mysql-{}.passwd"\ |
1373 | - .format(username) |
1374 | - if hostname != local_hostname: |
1375 | - remote_ip = socket.gethostbyname(hostname) |
1376 | - else: |
1377 | - remote_ip = '127.0.0.1' |
1378 | - |
1379 | - if not os.path.exists(passwd_file): |
1380 | - password = pwgen() |
1381 | - with open(passwd_file, 'w') as pfile: |
1382 | - pfile.write(password) |
1383 | - else: |
1384 | - with open(passwd_file) as pfile: |
1385 | - password = pfile.read().strip() |
1386 | - |
1387 | - if not database_exists(database): |
1388 | - create_database(database) |
1389 | - if not grant_exists(database, |
1390 | - username, |
1391 | - remote_ip): |
1392 | - create_grant(database, |
1393 | - username, |
1394 | - remote_ip, password) |
1395 | - return password |
1396 | - |
1397 | - if not utils.eligible_leader(): |
1398 | - utils.juju_log('INFO', |
1399 | - 'MySQL service is peered, bailing shared-db relation') |
1400 | - return |
1401 | - |
1402 | - settings = relation_get() |
1403 | - local_hostname = socket.getfqdn() |
1404 | - singleset = set([ |
1405 | - 'database', |
1406 | - 'username', |
1407 | - 'hostname' |
1408 | - ]) |
1409 | - |
1410 | - if singleset.issubset(settings): |
1411 | - # Process a single database configuration |
1412 | - password = configure_db(settings['hostname'], |
1413 | - settings['database'], |
1414 | - settings['username']) |
1415 | - if not utils.is_clustered(): |
1416 | - relation_set(db_host=local_hostname, |
1417 | - password=password) |
1418 | - else: |
1419 | - relation_set(db_host=config["vip"], |
1420 | - password=password) |
1421 | - |
1422 | - else: |
1423 | - # Process multiple database setup requests. |
1424 | - # from incoming relation data: |
1425 | - # nova_database=xxx nova_username=xxx nova_hostname=xxx |
1426 | - # quantum_database=xxx quantum_username=xxx quantum_hostname=xxx |
1427 | - # create |
1428 | - #{ |
1429 | - # "nova": { |
1430 | - # "username": xxx, |
1431 | - # "database": xxx, |
1432 | - # "hostname": xxx |
1433 | - # }, |
1434 | - # "quantum": { |
1435 | - # "username": xxx, |
1436 | - # "database": xxx, |
1437 | - # "hostname": xxx |
1438 | - # } |
1439 | - #} |
1440 | - # |
1441 | - databases = {} |
1442 | - for k, v in settings.iteritems(): |
1443 | - db = k.split('_')[0] |
1444 | - x = '_'.join(k.split('_')[1:]) |
1445 | - if db not in databases: |
1446 | - databases[db] = {} |
1447 | - databases[db][x] = v |
1448 | - return_data = {} |
1449 | - for db in databases: |
1450 | - if singleset.issubset(databases[db]): |
1451 | - return_data['_'.join([ db, 'password' ])] = \ |
1452 | - configure_db(databases[db]['hostname'], |
1453 | - databases[db]['database'], |
1454 | - databases[db]['username']) |
1455 | - relation_set(**return_data) |
1456 | - if not utils.is_clustered(): |
1457 | - relation_set(db_host=local_hostname) |
1458 | - else: |
1459 | - relation_set(db_host=config["vip"]) |
1460 | - |
1461 | -hook = os.path.basename(sys.argv[0]) |
1462 | -hooks = { |
1463 | - "shared-db-relation-changed": shared_db_changed |
1464 | - } |
1465 | -try: |
1466 | - hook_func = hooks[hook] |
1467 | -except KeyError: |
1468 | - pass |
1469 | -else: |
1470 | - hook_func() |
1471 | - |
1472 | -sys.exit(0) |
1473 | |
1474 | === added file 'hooks/shared_db_relations.py' |
1475 | --- hooks/shared_db_relations.py 1970-01-01 00:00:00 +0000 |
1476 | +++ hooks/shared_db_relations.py 2013-03-16 07:49:20 +0000 |
1477 | @@ -0,0 +1,139 @@ |
1478 | +#!/usr/bin/python |
1479 | +# |
1480 | +# Create relations between a shared database to many peers. |
1481 | +# Join does nothing. Peer requests access to $DATABASE from $REMOTE_HOST. |
1482 | +# It's up to the hooks to ensure database exists, peer has access and |
1483 | +# clean up grants after a broken/departed peer (TODO) |
1484 | +# |
1485 | +# Author: Adam Gandelman <adam.gandelman@canonical.com> |
1486 | + |
1487 | + |
1488 | +from common import ( |
1489 | + database_exists, |
1490 | + create_database, |
1491 | + grant_exists, |
1492 | + create_grant |
1493 | + ) |
1494 | +import subprocess |
1495 | +import json |
1496 | +import socket |
1497 | +import os |
1498 | +import lib.utils as utils |
1499 | +import lib.cluster_utils as cluster |
1500 | + |
1501 | +LEADER_RES = 'res_mysql_vip' |
1502 | + |
1503 | + |
1504 | +def pwgen(): |
1505 | + return str(subprocess.check_output(['pwgen', '-s', '16'])).strip() |
1506 | + |
1507 | + |
1508 | +def relation_get(): |
1509 | + return json.loads(subprocess.check_output( |
1510 | + ['relation-get', |
1511 | + '--format', |
1512 | + 'json'] |
1513 | + ) |
1514 | + ) |
1515 | + |
1516 | + |
1517 | +def shared_db_changed(): |
1518 | + |
1519 | + def configure_db(hostname, |
1520 | + database, |
1521 | + username): |
1522 | + passwd_file = "/var/lib/mysql/mysql-{}.passwd"\ |
1523 | + .format(username) |
1524 | + if hostname != local_hostname: |
1525 | + remote_ip = socket.gethostbyname(hostname) |
1526 | + else: |
1527 | + remote_ip = '127.0.0.1' |
1528 | + |
1529 | + if not os.path.exists(passwd_file): |
1530 | + password = pwgen() |
1531 | + with open(passwd_file, 'w') as pfile: |
1532 | + pfile.write(password) |
1533 | + else: |
1534 | + with open(passwd_file) as pfile: |
1535 | + password = pfile.read().strip() |
1536 | + |
1537 | + if not database_exists(database): |
1538 | + create_database(database) |
1539 | + if not grant_exists(database, |
1540 | + username, |
1541 | + remote_ip): |
1542 | + create_grant(database, |
1543 | + username, |
1544 | + remote_ip, password) |
1545 | + return password |
1546 | + |
1547 | + if not cluster.eligible_leader(LEADER_RES): |
1548 | + utils.juju_log('INFO', |
1549 | + 'MySQL service is peered, bailing shared-db relation' |
1550 | + ' as this service unit is not the leader') |
1551 | + return |
1552 | + |
1553 | + settings = relation_get() |
1554 | + local_hostname = socket.getfqdn() |
1555 | + singleset = set([ |
1556 | + 'database', |
1557 | + 'username', |
1558 | + 'hostname' |
1559 | + ]) |
1560 | + |
1561 | + if singleset.issubset(settings): |
1562 | + # Process a single database configuration |
1563 | + password = configure_db(settings['hostname'], |
1564 | + settings['database'], |
1565 | + settings['username']) |
1566 | + if not cluster.is_clustered(): |
1567 | + utils.relation_set(db_host=local_hostname, |
1568 | + password=password) |
1569 | + else: |
1570 | + utils.relation_set(db_host=utils.config_get("vip"), |
1571 | + password=password) |
1572 | + |
1573 | + else: |
1574 | + # Process multiple database setup requests. |
1575 | + # from incoming relation data: |
1576 | + # nova_database=xxx nova_username=xxx nova_hostname=xxx |
1577 | + # quantum_database=xxx quantum_username=xxx quantum_hostname=xxx |
1578 | + # create |
1579 | + #{ |
1580 | + # "nova": { |
1581 | + # "username": xxx, |
1582 | + # "database": xxx, |
1583 | + # "hostname": xxx |
1584 | + # }, |
1585 | + # "quantum": { |
1586 | + # "username": xxx, |
1587 | + # "database": xxx, |
1588 | + # "hostname": xxx |
1589 | + # } |
1590 | + #} |
1591 | + # |
1592 | + databases = {} |
1593 | + for k, v in settings.iteritems(): |
1594 | + db = k.split('_')[0] |
1595 | + x = '_'.join(k.split('_')[1:]) |
1596 | + if db not in databases: |
1597 | + databases[db] = {} |
1598 | + databases[db][x] = v |
1599 | + return_data = {} |
1600 | + for db in databases: |
1601 | + if singleset.issubset(databases[db]): |
1602 | + return_data['_'.join([db, 'password'])] = \ |
1603 | + configure_db(databases[db]['hostname'], |
1604 | + databases[db]['database'], |
1605 | + databases[db]['username']) |
1606 | + utils.relation_set(**return_data) |
1607 | + if not cluster.is_clustered(): |
1608 | + utils.relation_set(db_host=local_hostname) |
1609 | + else: |
1610 | + utils.relation_set(db_host=utils.config_get("vip")) |
1611 | + |
1612 | +hooks = { |
1613 | + "shared-db-relation-changed": shared_db_changed |
1614 | + } |
1615 | + |
1616 | +utils.do_hooks(hooks) |
1617 | |
1618 | === removed file 'hooks/utils.py' |
1619 | --- hooks/utils.py 2013-02-15 21:12:40 +0000 |
1620 | +++ hooks/utils.py 1970-01-01 00:00:00 +0000 |
1621 | @@ -1,446 +0,0 @@ |
1622 | - |
1623 | -# |
1624 | -# Copyright 2012 Canonical Ltd. |
1625 | -# |
1626 | -# Authors: |
1627 | -# James Page <james.page@ubuntu.com> |
1628 | -# Paul Collins <paul.collins@canonical.com> |
1629 | -# |
1630 | - |
1631 | -import os |
1632 | -import subprocess |
1633 | -import socket |
1634 | -import sys |
1635 | -import fcntl |
1636 | -import struct |
1637 | -import json |
1638 | -import time |
1639 | - |
1640 | - |
1641 | -def do_hooks(hooks): |
1642 | - hook = os.path.basename(sys.argv[0]) |
1643 | - |
1644 | - try: |
1645 | - hooks[hook]() |
1646 | - except KeyError: |
1647 | - juju_log('INFO', |
1648 | - "This charm doesn't know how to handle '{}'.".format(hook)) |
1649 | - |
1650 | - |
1651 | -def can_install(): |
1652 | - try: |
1653 | - fd = os.open("/var/lib/dpkg/lock", os.O_RDWR|os.O_CREAT|os.O_NOFOLLOW, 0640) |
1654 | - fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB) |
1655 | - except IOError, message: |
1656 | - os.close(fd) |
1657 | - return False |
1658 | - else: |
1659 | - fcntl.lockf(fd, fcntl.LOCK_UN) |
1660 | - os.close(fd) |
1661 | - return True |
1662 | - |
1663 | - |
1664 | -def install(*pkgs): |
1665 | - cmd = [ |
1666 | - 'apt-get', |
1667 | - '-y', |
1668 | - 'install' |
1669 | - ] |
1670 | - for pkg in pkgs: |
1671 | - cmd.append(pkg) |
1672 | - while not can_install(): |
1673 | - juju_log('INFO', |
1674 | - "dpkg is busy, can't install %s yet, waiting..." % pkgs) |
1675 | - time.sleep(1) |
1676 | - subprocess.check_call(cmd) |
1677 | - |
1678 | -TEMPLATES_DIR = 'templates' |
1679 | - |
1680 | -try: |
1681 | - import jinja2 |
1682 | -except ImportError: |
1683 | - install('python-jinja2') |
1684 | - import jinja2 |
1685 | - |
1686 | -try: |
1687 | - from netaddr import IPNetwork |
1688 | -except ImportError: |
1689 | - install('python-netaddr') |
1690 | - from netaddr import IPNetwork |
1691 | - |
1692 | -try: |
1693 | - import dns.resolver |
1694 | -except ImportError: |
1695 | - install('python-dnspython') |
1696 | - import dns.resolver |
1697 | - |
1698 | - |
1699 | -def render_template(template_name, context, template_dir=TEMPLATES_DIR): |
1700 | - templates = jinja2.Environment( |
1701 | - loader=jinja2.FileSystemLoader(template_dir) |
1702 | - ) |
1703 | - template = templates.get_template(template_name) |
1704 | - return template.render(context) |
1705 | - |
1706 | - |
1707 | -CLOUD_ARCHIVE = \ |
1708 | -""" # Ubuntu Cloud Archive |
1709 | -deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main |
1710 | -""" |
1711 | - |
1712 | -CLOUD_ARCHIVE_POCKETS = { |
1713 | - 'precise-folsom': 'precise-updates/folsom', |
1714 | - 'precise-folsom/updates': 'precise-updates/folsom', |
1715 | - 'precise-folsom/proposed': 'precise-proposed/folsom', |
1716 | - 'precise-grizzly': 'precise-updates/grizzly', |
1717 | - 'precise-grizzly/updates': 'precise-updates/grizzly', |
1718 | - 'precise-grizzly/proposed': 'precise-proposed/grizzly' |
1719 | - } |
1720 | - |
1721 | - |
1722 | -def execute(cmd, die=False, echo=False): |
1723 | - """ Executes a command |
1724 | - |
1725 | - if die=True, script will exit(1) if command does not return 0 |
1726 | - if echo=True, output of command will be printed to stdout |
1727 | - |
1728 | - returns a tuple: (stdout, stderr, return code) |
1729 | - """ |
1730 | - p = subprocess.Popen(cmd.split(" "), |
1731 | - stdout=subprocess.PIPE, |
1732 | - stdin=subprocess.PIPE, |
1733 | - stderr=subprocess.PIPE) |
1734 | - stdout="" |
1735 | - stderr="" |
1736 | - |
1737 | - def print_line(l): |
1738 | - if echo: |
1739 | - print l.strip('\n') |
1740 | - sys.stdout.flush() |
1741 | - |
1742 | - for l in iter(p.stdout.readline, ''): |
1743 | - print_line(l) |
1744 | - stdout += l |
1745 | - for l in iter(p.stderr.readline, ''): |
1746 | - print_line(l) |
1747 | - stderr += l |
1748 | - |
1749 | - p.communicate() |
1750 | - rc = p.returncode |
1751 | - |
1752 | - if die and rc != 0: |
1753 | - error_out("ERROR: command %s return non-zero.\n" % cmd) |
1754 | - return (stdout, stderr, rc) |
1755 | - |
1756 | - |
1757 | -def configure_source(): |
1758 | - source = str(config_get('openstack-origin')) |
1759 | - if not source: |
1760 | - return |
1761 | - if source.startswith('ppa:'): |
1762 | - cmd = [ |
1763 | - 'add-apt-repository', |
1764 | - source |
1765 | - ] |
1766 | - subprocess.check_call(cmd) |
1767 | - if source.startswith('cloud:'): |
1768 | - install('ubuntu-cloud-keyring') |
1769 | - pocket = source.split(':')[1] |
1770 | - with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt: |
1771 | - apt.write(CLOUD_ARCHIVE.format(CLOUD_ARCHIVE_POCKETS[pocket])) |
1772 | - if source.startswith('deb'): |
1773 | - l = len(source.split('|')) |
1774 | - if l == 2: |
1775 | - (apt_line, key) = source.split('|') |
1776 | - cmd = [ |
1777 | - 'apt-key', |
1778 | - 'adv', '--keyserver keyserver.ubuntu.com', |
1779 | - '--recv-keys', key |
1780 | - ] |
1781 | - subprocess.check_call(cmd) |
1782 | - elif l == 1: |
1783 | - apt_line = source |
1784 | - |
1785 | - with open('/etc/apt/sources.list.d/quantum.list', 'w') as apt: |
1786 | - apt.write(apt_line + "\n") |
1787 | - cmd = [ |
1788 | - 'apt-get', |
1789 | - 'update' |
1790 | - ] |
1791 | - subprocess.check_call(cmd) |
1792 | - |
1793 | -# Protocols |
1794 | -TCP = 'TCP' |
1795 | -UDP = 'UDP' |
1796 | - |
1797 | - |
1798 | -def expose(port, protocol='TCP'): |
1799 | - cmd = [ |
1800 | - 'open-port', |
1801 | - '{}/{}'.format(port, protocol) |
1802 | - ] |
1803 | - subprocess.check_call(cmd) |
1804 | - |
1805 | - |
1806 | -def juju_log(severity, message): |
1807 | - cmd = [ |
1808 | - 'juju-log', |
1809 | - '--log-level', severity, |
1810 | - message |
1811 | - ] |
1812 | - subprocess.check_call(cmd) |
1813 | - |
1814 | - |
1815 | -def relation_ids(relation): |
1816 | - cmd = [ |
1817 | - 'relation-ids', |
1818 | - relation |
1819 | - ] |
1820 | - return subprocess.check_output(cmd).split() # IGNORE:E1103 |
1821 | - |
1822 | - |
1823 | -def relation_list(rid): |
1824 | - cmd = [ |
1825 | - 'relation-list', |
1826 | - '-r', rid, |
1827 | - ] |
1828 | - return subprocess.check_output(cmd).split() # IGNORE:E1103 |
1829 | - |
1830 | - |
1831 | -def relation_get(attribute, unit=None, rid=None): |
1832 | - cmd = [ |
1833 | - 'relation-get', |
1834 | - ] |
1835 | - if rid: |
1836 | - cmd.append('-r') |
1837 | - cmd.append(rid) |
1838 | - cmd.append(attribute) |
1839 | - if unit: |
1840 | - cmd.append(unit) |
1841 | - value = subprocess.check_output(cmd).strip() # IGNORE:E1103 |
1842 | - if value == "": |
1843 | - return None |
1844 | - else: |
1845 | - return value |
1846 | - |
1847 | - |
1848 | -def relation_get_dict(relation_id=None, remote_unit=None): |
1849 | - """Obtain all relation data as dict by way of JSON""" |
1850 | - cmd = 'relation-get --format=json' |
1851 | - if relation_id: |
1852 | - cmd += ' -r %s' % relation_id |
1853 | - if remote_unit: |
1854 | - remote_unit_orig = os.getenv('JUJU_REMOTE_UNIT', None) |
1855 | - os.environ['JUJU_REMOTE_UNIT'] = remote_unit |
1856 | - j = execute(cmd, die=True)[0] |
1857 | - if remote_unit and remote_unit_orig: |
1858 | - os.environ['JUJU_REMOTE_UNIT'] = remote_unit_orig |
1859 | - d = json.loads(j) |
1860 | - settings = {} |
1861 | - # convert unicode to strings |
1862 | - for k, v in d.iteritems(): |
1863 | - settings[str(k)] = str(v) |
1864 | - return settings |
1865 | - |
1866 | - |
1867 | -def relation_set(**kwargs): |
1868 | - cmd = [ |
1869 | - 'relation-set' |
1870 | - ] |
1871 | - args = [] |
1872 | - for k, v in kwargs.items(): |
1873 | - if k == 'rid': |
1874 | - cmd.append('-r') |
1875 | - cmd.append(v) |
1876 | - else: |
1877 | - args.append('{}={}'.format(k, v)) |
1878 | - cmd += args |
1879 | - subprocess.check_call(cmd) |
1880 | - |
1881 | - |
1882 | -def unit_get(attribute): |
1883 | - cmd = [ |
1884 | - 'unit-get', |
1885 | - attribute |
1886 | - ] |
1887 | - return subprocess.check_output(cmd).strip() # IGNORE:E1103 |
1888 | - |
1889 | - |
1890 | -def config_get(attribute): |
1891 | - cmd = [ |
1892 | - 'config-get', |
1893 | - attribute |
1894 | - ] |
1895 | - return subprocess.check_output(cmd).strip() # IGNORE:E1103 |
1896 | - |
1897 | - |
1898 | -def get_unit_hostname(): |
1899 | - return socket.gethostname() |
1900 | - |
1901 | - |
1902 | -def get_unit_name(): |
1903 | - return os.environ.get('JUJU_UNIT_NAME').replace('/','-') |
1904 | - |
1905 | - |
1906 | -def get_host_ip(hostname=unit_get('private-address')): |
1907 | - try: |
1908 | - # Test to see if already an IPv4 address |
1909 | - socket.inet_aton(hostname) |
1910 | - return hostname |
1911 | - except socket.error: |
1912 | - pass |
1913 | - try: |
1914 | - answers = dns.resolver.query(hostname, 'A') |
1915 | - if answers: |
1916 | - return answers[0].address |
1917 | - except dns.resolver.NXDOMAIN: |
1918 | - pass |
1919 | - return None |
1920 | - |
1921 | - |
1922 | -def restart(*services): |
1923 | - for service in services: |
1924 | - subprocess.check_call(['service', service, 'restart']) |
1925 | - |
1926 | - |
1927 | -def stop(*services): |
1928 | - for service in services: |
1929 | - subprocess.check_call(['service', service, 'stop']) |
1930 | - |
1931 | - |
1932 | -def start(*services): |
1933 | - for service in services: |
1934 | - subprocess.check_call(['service', service, 'start']) |
1935 | - |
1936 | - |
1937 | -def running(service): |
1938 | - try: |
1939 | - output = subprocess.check_output(['service', service, 'status']) |
1940 | - except subprocess.CalledProcessError: |
1941 | - return False |
1942 | - else: |
1943 | - if ("start/running" in output or |
1944 | - "is running" in output): |
1945 | - return True |
1946 | - else: |
1947 | - return False |
1948 | - |
1949 | - |
1950 | -def disable_upstart_services(*services): |
1951 | - for service in services: |
1952 | - with open("/etc/init/{}.override".format(service), "w") as override: |
1953 | - override.write("manual") |
1954 | - |
1955 | - |
1956 | -def enable_upstart_services(*services): |
1957 | - for service in services: |
1958 | - path = '/etc/init/{}.override'.format(service) |
1959 | - if os.path.exists(path): |
1960 | - os.remove(path) |
1961 | - |
1962 | - |
1963 | -def disable_lsb_services(*services): |
1964 | - for service in services: |
1965 | - subprocess.check_call(['update-rc.d', '-f', service, 'remove']) |
1966 | - |
1967 | - |
1968 | -def enable_lsb_services(*services): |
1969 | - for service in services: |
1970 | - subprocess.check_call(['update-rc.d', '-f', service, 'defaults']) |
1971 | - |
1972 | - |
1973 | -def get_iface_ipaddr(iface): |
1974 | - s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) |
1975 | - return socket.inet_ntoa(fcntl.ioctl( |
1976 | - s.fileno(), |
1977 | - 0x8919, # SIOCGIFADDR |
1978 | - struct.pack('256s', iface[:15]) |
1979 | - )[20:24]) |
1980 | - |
1981 | - |
1982 | -def get_iface_netmask(iface): |
1983 | - s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) |
1984 | - return socket.inet_ntoa(fcntl.ioctl( |
1985 | - s.fileno(), |
1986 | - 0x891b, # SIOCGIFNETMASK |
1987 | - struct.pack('256s', iface[:15]) |
1988 | - )[20:24]) |
1989 | - |
1990 | - |
1991 | -def get_netmask_cidr(netmask): |
1992 | - netmask = netmask.split('.') |
1993 | - binary_str = '' |
1994 | - for octet in netmask: |
1995 | - binary_str += bin(int(octet))[2:].zfill(8) |
1996 | - return str(len(binary_str.rstrip('0'))) |
1997 | - |
1998 | - |
1999 | -def get_network_address(iface): |
2000 | - if iface: |
2001 | - network = "{}/{}".format(get_iface_ipaddr(iface), |
2002 | - get_netmask_cidr(get_iface_netmask(iface))) |
2003 | - ip = IPNetwork(network) |
2004 | - return str(ip.network) |
2005 | - else: |
2006 | - return None |
2007 | - |
2008 | - |
2009 | -def is_clustered(): |
2010 | - for r_id in (relation_ids('ha') or []): |
2011 | - for unit in (relation_list(r_id) or []): |
2012 | - relation_data = \ |
2013 | - relation_get_dict(relation_id=r_id, |
2014 | - remote_unit=unit) |
2015 | - if 'clustered' in relation_data: |
2016 | - return True |
2017 | - return False |
2018 | - |
2019 | - |
2020 | -def is_leader(): |
2021 | - status = execute('crm resource show res_mysql_vip', echo=True)[0].strip() |
2022 | - hostname = execute('hostname', echo=True)[0].strip() |
2023 | - if hostname in status: |
2024 | - return True |
2025 | - else: |
2026 | - return False |
2027 | - |
2028 | - |
2029 | -def peer_units(): |
2030 | - peers = [] |
2031 | - for r_id in (relation_ids('cluster') or []): |
2032 | - for unit in (relation_list(r_id) or []): |
2033 | - peers.append(unit) |
2034 | - return peers |
2035 | - |
2036 | - |
2037 | -def oldest_peer(peers): |
2038 | - local_unit_no = os.getenv('JUJU_UNIT_NAME').split('/')[1] |
2039 | - for peer in peers: |
2040 | - remote_unit_no = peer.split('/')[1] |
2041 | - if remote_unit_no < local_unit_no: |
2042 | - return False |
2043 | - return True |
2044 | - |
2045 | - |
2046 | -def eligible_leader(): |
2047 | - if is_clustered(): |
2048 | - if not is_leader(): |
2049 | - juju_log('INFO', 'Deferring action to CRM leader.') |
2050 | - return False |
2051 | - else: |
2052 | - peers = peer_units() |
2053 | - if peers and not oldest_peer(peers): |
2054 | - juju_log('INFO', 'Deferring action to oldest service unit.') |
2055 | - return False |
2056 | - return True |
2057 | - |
2058 | - |
2059 | -def is_relation_made(relation=None): |
2060 | - relation_data = [] |
2061 | - for r_id in (relation_ids(relation) or []): |
2062 | - for unit in (relation_list(r_id) or []): |
2063 | - relation_data.append(relation_get_dict(relation_id=r_id, |
2064 | - remote_unit=unit)) |
2065 | - if not relation_data: |
2066 | - return False |
2067 | - return True |
2068 | |
2069 | === removed directory 'templates' |
2070 | === removed file 'templates/ceph.conf' |
2071 | --- templates/ceph.conf 2013-01-28 17:59:02 +0000 |
2072 | +++ templates/ceph.conf 1970-01-01 00:00:00 +0000 |
2073 | @@ -1,5 +0,0 @@ |
2074 | -[global] |
2075 | - auth supported = {{ auth }} |
2076 | - keyring = {{ keyring }} |
2077 | - mon host = {{ mon_hosts }} |
2078 | - |
Looks sane to me. Awaiting Adam's review.
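One thing worth noting in the removed `utils.py` (in case the logic was carried over to `lib/cluster_utils.py` unchanged): `oldest_peer` compares unit numbers as strings, and lexically `'10' < '2'`, so leader election can pick the wrong unit once a service grows past ten units. A minimal sketch of a numeric comparison — the `local_unit` parameter is mine, added for testability in place of reading `JUJU_UNIT_NAME` from the environment:

```python
def oldest_peer(peers, local_unit):
    """Return True if the local unit has the lowest unit number among peers.

    Comparing as integers avoids the lexical trap where '10' sorts
    before '2' as a string.
    """
    local_no = int(local_unit.split('/')[1])
    for peer in peers:
        if int(peer.split('/')[1]) < local_no:
            return False
    return True
```

With the original string comparison, a local `mysql/2` with peer `mysql/10` would wrongly defer; the numeric version correctly treats `mysql/2` as the oldest.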
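For reference, the `get_netmask_cidr` helper being removed works by writing the dotted-quad mask out as a bit string and counting the leading 1-bits. A standalone sketch of the same idea (function name mine; the charm's version returns the length as a string so it can be formatted straight into an `ip/cidr` pair):

```python
def netmask_to_cidr(netmask):
    """Convert a dotted-quad netmask to its CIDR prefix length.

    e.g. 255.255.255.0 -> 24, by counting set bits from the left.
    """
    bits = ''.join(bin(int(octet))[2:].zfill(8)
                   for octet in netmask.split('.'))
    return len(bits.rstrip('0'))
```

This assumes a contiguous mask, which is the only kind `SIOCGIFNETMASK` will return for a normally configured interface.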