Merge lp:~james-page/charms/precise/ceph/charm-helpers into lp:~charmers/charms/precise/ceph/trunk
- Precise Pangolin (12.04)
- charm-helpers
- Merge into trunk
Proposed by
James Page
Status: | Merged |
---|---|
Merged at revision: | 62 |
Proposed branch: | lp:~james-page/charms/precise/ceph/charm-helpers |
Merge into: | lp:~charmers/charms/precise/ceph/trunk |
Diff against target: |
1983 lines (+1146/-437) 13 files modified
- .pydevproject (+1/-3)
- Makefile (+8/-0)
- README.md (+9/-9)
- charm-helpers-sync.yaml (+7/-0)
- hooks/ceph.py (+126/-26)
- hooks/charmhelpers/contrib/storage/linux/utils.py (+25/-0)
- hooks/charmhelpers/core/hookenv.py (+334/-0)
- hooks/charmhelpers/core/host.py (+273/-0)
- hooks/charmhelpers/fetch/__init__.py (+152/-0)
- hooks/charmhelpers/fetch/archiveurl.py (+43/-0)
- hooks/hooks.py (+149/-233)
- hooks/utils.py (+18/-164)
- metadata.yaml (+1/-2) |
To merge this branch: | bzr merge lp:~james-page/charms/precise/ceph/charm-helpers |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Mark Mims (community) | Approve | ||
Review via email: mp+173245@code.launchpad.net |
Commit message
Description of the change
Refactoring to support use with charm-helpers
Significant rework to use charm-helpers rather than the charm's own utils.py.
Also fixes a couple of issues with newer versions of ceph, which no longer automatically zap disks.
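Since newer ceph releases stopped zapping disks automatically, the branch moves that step into `charmhelpers.contrib.storage.linux.utils.zap_disk`, which shells out to `sgdisk --zap-all` (from Ubuntu's `gdisk` package, added to the charm's package list). A minimal sketch of that helper; the `dry_run` flag is illustrative only and not part of the real helper:

```python
from subprocess import check_call

def zap_disk(block_device, dry_run=False):
    """Clear the partition table on block_device using sgdisk
    (shipped in Ubuntu's 'gdisk' package)."""
    cmd = ['sgdisk', '--zap-all', block_device]
    if dry_run:  # illustrative escape hatch, not in the charm helper
        return cmd
    check_call(cmd)
    return cmd
```

The charm only invokes this when `reformat_osd` is requested, so existing OSD data is never destroyed by default.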
- 80. By James Page: Fixup dodgy disk detection
Revision history for this message
Mark Mims (mark-mims) wrote:
Added bug https:/
Preview Diff
1 | === modified file '.pydevproject' |
2 | --- .pydevproject 2012-10-18 08:24:36 +0000 |
3 | +++ .pydevproject 2013-07-08 08:34:31 +0000 |
4 | @@ -1,7 +1,5 @@ |
5 | <?xml version="1.0" encoding="UTF-8" standalone="no"?> |
6 | -<?eclipse-pydev version="1.0"?> |
7 | - |
8 | -<pydev_project> |
9 | +<?eclipse-pydev version="1.0"?><pydev_project> |
10 | <pydev_property name="org.python.pydev.PYTHON_PROJECT_VERSION">python 2.7</pydev_property> |
11 | <pydev_property name="org.python.pydev.PYTHON_PROJECT_INTERPRETER">Default</pydev_property> |
12 | <pydev_pathproperty name="org.python.pydev.PROJECT_SOURCE_PATH"> |
13 | |
14 | === added file 'Makefile' |
15 | --- Makefile 1970-01-01 00:00:00 +0000 |
16 | +++ Makefile 2013-07-08 08:34:31 +0000 |
17 | @@ -0,0 +1,8 @@ |
18 | +#!/usr/bin/make |
19 | + |
20 | +lint: |
21 | + @flake8 --exclude hooks/charmhelpers hooks |
22 | + @charm proof |
23 | + |
24 | +sync: |
25 | + @charm-helper-sync -c charm-helpers-sync.yaml |
26 | |
27 | === modified file 'README.md' |
28 | --- README.md 2012-12-17 10:22:51 +0000 |
29 | +++ README.md 2013-07-08 08:34:31 +0000 |
30 | @@ -15,28 +15,28 @@ |
31 | fsid: |
32 | uuid specific to a ceph cluster used to ensure that different |
33 | clusters don't get mixed up - use `uuid` to generate one. |
34 | - |
35 | + |
36 | monitor-secret: |
37 | a ceph generated key used by the daemons that manage to cluster |
38 | to control security. You can use the ceph-authtool command to |
39 | generate one: |
40 | - |
41 | + |
42 | ceph-authtool /dev/stdout --name=mon. --gen-key |
43 | - |
44 | + |
45 | These two pieces of configuration must NOT be changed post bootstrap; attempting |
46 | todo this will cause a reconfiguration error and new service units will not join |
47 | the existing ceph cluster. |
48 | - |
49 | + |
50 | The charm also supports specification of the storage devices to use in the ceph |
51 | cluster. |
52 | |
53 | osd-devices: |
54 | A list of devices that the charm will attempt to detect, initialise and |
55 | activate as ceph storage. |
56 | - |
57 | + |
58 | This this can be a superset of the actual storage devices presented to |
59 | each service unit and can be changed post ceph bootstrap using `juju set`. |
60 | - |
61 | + |
62 | At a minimum you must provide a juju config file during initial deployment |
63 | with the fsid and monitor-secret options (contents of cepy.yaml below): |
64 | |
65 | @@ -44,7 +44,7 @@ |
66 | fsid: ecbb8960-0e21-11e2-b495-83a88f44db01 |
67 | monitor-secret: AQD1P2xQiKglDhAA4NGUF5j38Mhq56qwz+45wg== |
68 | osd-devices: /dev/vdb /dev/vdc /dev/vdd /dev/vde |
69 | - |
70 | + |
71 | Specifying the osd-devices to use is also a good idea. |
72 | |
73 | Boot things up by using: |
74 | @@ -62,7 +62,7 @@ |
75 | James Page <james.page@ubuntu.com> |
76 | Report bugs at: http://bugs.launchpad.net/charms/+source/ceph/+filebug |
77 | Location: http://jujucharms.com/charms/ceph |
78 | - |
79 | + |
80 | Technical Bootnotes |
81 | =================== |
82 | |
83 | @@ -89,4 +89,4 @@ |
84 | implement it. |
85 | |
86 | See http://ceph.com/docs/master/dev/mon-bootstrap/ for more information on Ceph |
87 | -monitor cluster deployment strategies and pitfalls. |
88 | +monitor cluster deployment strategies and pitfalls. |
89 | |
90 | === added file 'charm-helpers-sync.yaml' |
91 | --- charm-helpers-sync.yaml 1970-01-01 00:00:00 +0000 |
92 | +++ charm-helpers-sync.yaml 2013-07-08 08:34:31 +0000 |
93 | @@ -0,0 +1,7 @@ |
94 | +branch: lp:charm-helpers |
95 | +destination: hooks/charmhelpers |
96 | +include: |
97 | + - core |
98 | + - fetch |
99 | + - contrib.storage.linux: |
100 | + - utils |
101 | |
102 | === modified file 'hooks/ceph.py' |
103 | --- hooks/ceph.py 2012-12-18 10:25:38 +0000 |
104 | +++ hooks/ceph.py 2013-07-08 08:34:31 +0000 |
105 | @@ -10,23 +10,36 @@ |
106 | import json |
107 | import subprocess |
108 | import time |
109 | -import utils |
110 | import os |
111 | import apt_pkg as apt |
112 | +from charmhelpers.core.host import ( |
113 | + mkdir, |
114 | + service_restart, |
115 | + log |
116 | +) |
117 | +from charmhelpers.contrib.storage.linux.utils import ( |
118 | + zap_disk, |
119 | + is_block_device |
120 | +) |
121 | +from utils import ( |
122 | + get_unit_hostname |
123 | +) |
124 | |
125 | LEADER = 'leader' |
126 | PEON = 'peon' |
127 | QUORUM = [LEADER, PEON] |
128 | |
129 | +PACKAGES = ['ceph', 'gdisk', 'ntp', 'btrfs-tools', 'python-ceph', 'xfsprogs'] |
130 | + |
131 | |
132 | def is_quorum(): |
133 | - asok = "/var/run/ceph/ceph-mon.{}.asok".format(utils.get_unit_hostname()) |
134 | + asok = "/var/run/ceph/ceph-mon.{}.asok".format(get_unit_hostname()) |
135 | cmd = [ |
136 | "ceph", |
137 | "--admin-daemon", |
138 | asok, |
139 | "mon_status" |
140 | - ] |
141 | + ] |
142 | if os.path.exists(asok): |
143 | try: |
144 | result = json.loads(subprocess.check_output(cmd)) |
145 | @@ -44,13 +57,13 @@ |
146 | |
147 | |
148 | def is_leader(): |
149 | - asok = "/var/run/ceph/ceph-mon.{}.asok".format(utils.get_unit_hostname()) |
150 | + asok = "/var/run/ceph/ceph-mon.{}.asok".format(get_unit_hostname()) |
151 | cmd = [ |
152 | "ceph", |
153 | "--admin-daemon", |
154 | asok, |
155 | "mon_status" |
156 | - ] |
157 | + ] |
158 | if os.path.exists(asok): |
159 | try: |
160 | result = json.loads(subprocess.check_output(cmd)) |
161 | @@ -73,14 +86,14 @@ |
162 | |
163 | |
164 | def add_bootstrap_hint(peer): |
165 | - asok = "/var/run/ceph/ceph-mon.{}.asok".format(utils.get_unit_hostname()) |
166 | + asok = "/var/run/ceph/ceph-mon.{}.asok".format(get_unit_hostname()) |
167 | cmd = [ |
168 | "ceph", |
169 | "--admin-daemon", |
170 | asok, |
171 | "add_bootstrap_peer_hint", |
172 | peer |
173 | - ] |
174 | + ] |
175 | if os.path.exists(asok): |
176 | # Ignore any errors for this call |
177 | subprocess.call(cmd) |
178 | @@ -89,7 +102,7 @@ |
179 | 'xfs', |
180 | 'ext4', |
181 | 'btrfs' |
182 | - ] |
183 | +] |
184 | |
185 | |
186 | def is_osd_disk(dev): |
187 | @@ -99,7 +112,7 @@ |
188 | for line in info: |
189 | if line.startswith( |
190 | 'Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D' |
191 | - ): |
192 | + ): |
193 | return True |
194 | except subprocess.CalledProcessError: |
195 | pass |
196 | @@ -110,16 +123,11 @@ |
197 | cmd = [ |
198 | 'udevadm', 'trigger', |
199 | '--subsystem-match=block', '--action=add' |
200 | - ] |
201 | + ] |
202 | |
203 | subprocess.call(cmd) |
204 | |
205 | |
206 | -def zap_disk(dev): |
207 | - cmd = ['sgdisk', '--zap-all', dev] |
208 | - subprocess.check_call(cmd) |
209 | - |
210 | - |
211 | _bootstrap_keyring = "/var/lib/ceph/bootstrap-osd/ceph.keyring" |
212 | |
213 | |
214 | @@ -140,7 +148,7 @@ |
215 | '--create-keyring', |
216 | '--name=client.bootstrap-osd', |
217 | '--add-key={}'.format(key) |
218 | - ] |
219 | + ] |
220 | subprocess.check_call(cmd) |
221 | |
222 | # OSD caps taken from ceph-create-keys |
223 | @@ -148,10 +156,10 @@ |
224 | 'mon': [ |
225 | 'allow command osd create ...', |
226 | 'allow command osd crush set ...', |
227 | - r'allow command auth add * osd allow\ * mon allow\ rwx', |
228 | + r'allow command auth add * osd allow\ * mon allow\ rwx', |
229 | 'allow command mon getmap' |
230 | - ] |
231 | - } |
232 | + ] |
233 | +} |
234 | |
235 | |
236 | def get_osd_bootstrap_key(): |
237 | @@ -169,14 +177,14 @@ |
238 | '--create-keyring', |
239 | '--name=client.radosgw.gateway', |
240 | '--add-key={}'.format(key) |
241 | - ] |
242 | + ] |
243 | subprocess.check_call(cmd) |
244 | |
245 | # OSD caps taken from ceph-create-keys |
246 | _radosgw_caps = { |
247 | 'mon': ['allow r'], |
248 | 'osd': ['allow rwx'] |
249 | - } |
250 | +} |
251 | |
252 | |
253 | def get_radosgw_key(): |
254 | @@ -186,7 +194,7 @@ |
255 | _default_caps = { |
256 | 'mon': ['allow r'], |
257 | 'osd': ['allow rwx'] |
258 | - } |
259 | +} |
260 | |
261 | |
262 | def get_named_key(name, caps=None): |
263 | @@ -196,16 +204,16 @@ |
264 | '--name', 'mon.', |
265 | '--keyring', |
266 | '/var/lib/ceph/mon/ceph-{}/keyring'.format( |
267 | - utils.get_unit_hostname() |
268 | - ), |
269 | + get_unit_hostname() |
270 | + ), |
271 | 'auth', 'get-or-create', 'client.{}'.format(name), |
272 | - ] |
273 | + ] |
274 | # Add capabilities |
275 | for subsystem, subcaps in caps.iteritems(): |
276 | cmd.extend([ |
277 | subsystem, |
278 | '; '.join(subcaps), |
279 | - ]) |
280 | + ]) |
281 | output = subprocess.check_output(cmd).strip() # IGNORE:E1103 |
282 | # get-or-create appears to have different output depending |
283 | # on whether its 'get' or 'create' |
284 | @@ -221,6 +229,42 @@ |
285 | return key |
286 | |
287 | |
288 | +def bootstrap_monitor_cluster(secret): |
289 | + hostname = get_unit_hostname() |
290 | + path = '/var/lib/ceph/mon/ceph-{}'.format(hostname) |
291 | + done = '{}/done'.format(path) |
292 | + upstart = '{}/upstart'.format(path) |
293 | + keyring = '/var/lib/ceph/tmp/{}.mon.keyring'.format(hostname) |
294 | + |
295 | + if os.path.exists(done): |
296 | + log('bootstrap_monitor_cluster: mon already initialized.') |
297 | + else: |
298 | + # Ceph >= 0.61.3 needs this for ceph-mon fs creation |
299 | + mkdir('/var/run/ceph', perms=0755) |
300 | + mkdir(path) |
301 | + # end changes for Ceph >= 0.61.3 |
302 | + try: |
303 | + subprocess.check_call(['ceph-authtool', keyring, |
304 | + '--create-keyring', '--name=mon.', |
305 | + '--add-key={}'.format(secret), |
306 | + '--cap', 'mon', 'allow *']) |
307 | + |
308 | + subprocess.check_call(['ceph-mon', '--mkfs', |
309 | + '-i', hostname, |
310 | + '--keyring', keyring]) |
311 | + |
312 | + with open(done, 'w'): |
313 | + pass |
314 | + with open(upstart, 'w'): |
315 | + pass |
316 | + |
317 | + service_restart('ceph-mon-all') |
318 | + except: |
319 | + raise |
320 | + finally: |
321 | + os.unlink(keyring) |
322 | + |
323 | + |
324 | def get_ceph_version(): |
325 | apt.init() |
326 | cache = apt.Cache() |
327 | @@ -233,3 +277,59 @@ |
328 | |
329 | def version_compare(a, b): |
330 | return apt.version_compare(a, b) |
331 | + |
332 | + |
333 | +def update_monfs(): |
334 | + hostname = get_unit_hostname() |
335 | + monfs = '/var/lib/ceph/mon/ceph-{}'.format(hostname) |
336 | + upstart = '{}/upstart'.format(monfs) |
337 | + if os.path.exists(monfs) and not os.path.exists(upstart): |
338 | + # Mark mon as managed by upstart so that |
339 | + # it gets start correctly on reboots |
340 | + with open(upstart, 'w'): |
341 | + pass |
342 | + |
343 | + |
344 | +def osdize(dev, osd_format, osd_journal, reformat_osd=False): |
345 | + if not os.path.exists(dev): |
346 | + log('Path {} does not exist - bailing'.format(dev)) |
347 | + return |
348 | + |
349 | + if not is_block_device(dev): |
350 | + log('Path {} is not a block device - bailing'.format(dev)) |
351 | + return |
352 | + |
353 | + if (is_osd_disk(dev) and not reformat_osd): |
354 | + log('Looks like {} is already an OSD, skipping.'.format(dev)) |
355 | + return |
356 | + |
357 | + if device_mounted(dev): |
358 | + log('Looks like {} is in use, skipping.'.format(dev)) |
359 | + return |
360 | + |
361 | + cmd = ['ceph-disk-prepare'] |
362 | + # Later versions of ceph support more options |
363 | + if get_ceph_version() >= "0.48.3": |
364 | + if osd_format: |
365 | + cmd.append('--fs-type') |
366 | + cmd.append(osd_format) |
367 | + cmd.append(dev) |
368 | + if osd_journal and os.path.exists(osd_journal): |
369 | + cmd.append(osd_journal) |
370 | + else: |
371 | + # Just provide the device - no other options |
372 | + # for older versions of ceph |
373 | + cmd.append(dev) |
374 | + |
375 | + if reformat_osd: |
376 | + zap_disk(dev) |
377 | + |
378 | + subprocess.check_call(cmd) |
379 | + |
380 | + |
381 | +def device_mounted(dev): |
382 | + return subprocess.call(['grep', '-wqs', dev + '1', '/proc/mounts']) == 0 |
383 | + |
384 | + |
385 | +def filesystem_mounted(fs): |
386 | + return subprocess.call(['grep', '-wqs', fs, '/proc/mounts']) == 0 |
387 | |
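The new `osdize()` above applies a chain of guards before running `ceph-disk-prepare`: the path must exist, be a block device, not already look like an OSD (unless `reformat_osd` is set), and not be mounted. The ordering can be sketched as a pure function; the name `osdize_checks` and the boolean parameters are illustrative, standing in for the real `os.path.exists`, `is_block_device`, `is_osd_disk`, and `device_mounted` calls:

```python
def osdize_checks(exists, is_block, is_osd, mounted, reformat=False):
    """Mirror osdize()'s guard ordering: return the reason a device
    would be skipped, or None if it would be prepared as an OSD."""
    if not exists:
        return 'missing'          # path does not exist - bailing
    if not is_block:
        return 'not-a-block-device'
    if is_osd and not reformat:
        return 'already-an-osd'   # skip unless a reformat was requested
    if mounted:
        return 'in-use'
    return None                   # safe to run ceph-disk-prepare
```

Because the osd-devices option may be a superset of the disks actually present on a unit, the "missing" and "in-use" cases are expected and are logged rather than treated as errors.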
388 | === added directory 'hooks/charmhelpers' |
389 | === added file 'hooks/charmhelpers/__init__.py' |
390 | === added directory 'hooks/charmhelpers/contrib' |
391 | === added file 'hooks/charmhelpers/contrib/__init__.py' |
392 | === added directory 'hooks/charmhelpers/contrib/storage' |
393 | === added file 'hooks/charmhelpers/contrib/storage/__init__.py' |
394 | === added directory 'hooks/charmhelpers/contrib/storage/linux' |
395 | === added file 'hooks/charmhelpers/contrib/storage/linux/__init__.py' |
396 | === added file 'hooks/charmhelpers/contrib/storage/linux/utils.py' |
397 | --- hooks/charmhelpers/contrib/storage/linux/utils.py 1970-01-01 00:00:00 +0000 |
398 | +++ hooks/charmhelpers/contrib/storage/linux/utils.py 2013-07-08 08:34:31 +0000 |
399 | @@ -0,0 +1,25 @@ |
400 | +from os import stat |
401 | +from stat import S_ISBLK |
402 | + |
403 | +from subprocess import ( |
404 | + check_call |
405 | +) |
406 | + |
407 | + |
408 | +def is_block_device(path): |
409 | + ''' |
410 | + Confirm device at path is a valid block device node. |
411 | + |
412 | + :returns: boolean: True if path is a block device, False if not. |
413 | + ''' |
414 | + return S_ISBLK(stat(path).st_mode) |
415 | + |
416 | + |
417 | +def zap_disk(block_device): |
418 | + ''' |
419 | + Clear a block device of partition table. Relies on sgdisk, which is |
420 | + installed as pat of the 'gdisk' package in Ubuntu. |
421 | + |
422 | + :param block_device: str: Full path of block device to clean. |
423 | + ''' |
424 | + check_call(['sgdisk', '--zap-all', block_device]) |
425 | |
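The `is_block_device` check above is a thin wrapper over `os.stat`: it tests the S_ISBLK bit of the node's mode, so character devices, directories, and regular files all return False. A self-contained restatement:

```python
import os
import stat

def is_block_device(path):
    """True when the node at path is a block device (S_ISBLK bit set
    in its stat mode); False for files, directories, char devices."""
    return stat.S_ISBLK(os.stat(path).st_mode)
```

Note that `os.stat` raises OSError if the path does not exist, which is why `osdize()` checks `os.path.exists` first.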
426 | === added directory 'hooks/charmhelpers/core' |
427 | === added file 'hooks/charmhelpers/core/__init__.py' |
428 | === added file 'hooks/charmhelpers/core/hookenv.py' |
429 | --- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000 |
430 | +++ hooks/charmhelpers/core/hookenv.py 2013-07-08 08:34:31 +0000 |
431 | @@ -0,0 +1,334 @@ |
432 | +"Interactions with the Juju environment" |
433 | +# Copyright 2013 Canonical Ltd. |
434 | +# |
435 | +# Authors: |
436 | +# Charm Helpers Developers <juju@lists.ubuntu.com> |
437 | + |
438 | +import os |
439 | +import json |
440 | +import yaml |
441 | +import subprocess |
442 | +import UserDict |
443 | + |
444 | +CRITICAL = "CRITICAL" |
445 | +ERROR = "ERROR" |
446 | +WARNING = "WARNING" |
447 | +INFO = "INFO" |
448 | +DEBUG = "DEBUG" |
449 | +MARKER = object() |
450 | + |
451 | +cache = {} |
452 | + |
453 | + |
454 | +def cached(func): |
455 | + ''' Cache return values for multiple executions of func + args |
456 | + |
457 | + For example: |
458 | + |
459 | + @cached |
460 | + def unit_get(attribute): |
461 | + pass |
462 | + |
463 | + unit_get('test') |
464 | + |
465 | + will cache the result of unit_get + 'test' for future calls. |
466 | + ''' |
467 | + def wrapper(*args, **kwargs): |
468 | + global cache |
469 | + key = str((func, args, kwargs)) |
470 | + try: |
471 | + return cache[key] |
472 | + except KeyError: |
473 | + res = func(*args, **kwargs) |
474 | + cache[key] = res |
475 | + return res |
476 | + return wrapper |
477 | + |
478 | + |
479 | +def flush(key): |
480 | + ''' Flushes any entries from function cache where the |
481 | + key is found in the function+args ''' |
482 | + flush_list = [] |
483 | + for item in cache: |
484 | + if key in item: |
485 | + flush_list.append(item) |
486 | + for item in flush_list: |
487 | + del cache[item] |
488 | + |
489 | + |
490 | +def log(message, level=None): |
491 | + "Write a message to the juju log" |
492 | + command = ['juju-log'] |
493 | + if level: |
494 | + command += ['-l', level] |
495 | + command += [message] |
496 | + subprocess.call(command) |
497 | + |
498 | + |
499 | +class Serializable(UserDict.IterableUserDict): |
500 | + "Wrapper, an object that can be serialized to yaml or json" |
501 | + |
502 | + def __init__(self, obj): |
503 | + # wrap the object |
504 | + UserDict.IterableUserDict.__init__(self) |
505 | + self.data = obj |
506 | + |
507 | + def __getattr__(self, attr): |
508 | + # See if this object has attribute. |
509 | + if attr in ("json", "yaml", "data"): |
510 | + return self.__dict__[attr] |
511 | + # Check for attribute in wrapped object. |
512 | + got = getattr(self.data, attr, MARKER) |
513 | + if got is not MARKER: |
514 | + return got |
515 | + # Proxy to the wrapped object via dict interface. |
516 | + try: |
517 | + return self.data[attr] |
518 | + except KeyError: |
519 | + raise AttributeError(attr) |
520 | + |
521 | + def __getstate__(self): |
522 | + # Pickle as a standard dictionary. |
523 | + return self.data |
524 | + |
525 | + def __setstate__(self, state): |
526 | + # Unpickle into our wrapper. |
527 | + self.data = state |
528 | + |
529 | + def json(self): |
530 | + "Serialize the object to json" |
531 | + return json.dumps(self.data) |
532 | + |
533 | + def yaml(self): |
534 | + "Serialize the object to yaml" |
535 | + return yaml.dump(self.data) |
536 | + |
537 | + |
538 | +def execution_environment(): |
539 | + """A convenient bundling of the current execution context""" |
540 | + context = {} |
541 | + context['conf'] = config() |
542 | + if relation_id(): |
543 | + context['reltype'] = relation_type() |
544 | + context['relid'] = relation_id() |
545 | + context['rel'] = relation_get() |
546 | + context['unit'] = local_unit() |
547 | + context['rels'] = relations() |
548 | + context['env'] = os.environ |
549 | + return context |
550 | + |
551 | + |
552 | +def in_relation_hook(): |
553 | + "Determine whether we're running in a relation hook" |
554 | + return 'JUJU_RELATION' in os.environ |
555 | + |
556 | + |
557 | +def relation_type(): |
558 | + "The scope for the current relation hook" |
559 | + return os.environ.get('JUJU_RELATION', None) |
560 | + |
561 | + |
562 | +def relation_id(): |
563 | + "The relation ID for the current relation hook" |
564 | + return os.environ.get('JUJU_RELATION_ID', None) |
565 | + |
566 | + |
567 | +def local_unit(): |
568 | + "Local unit ID" |
569 | + return os.environ['JUJU_UNIT_NAME'] |
570 | + |
571 | + |
572 | +def remote_unit(): |
573 | + "The remote unit for the current relation hook" |
574 | + return os.environ['JUJU_REMOTE_UNIT'] |
575 | + |
576 | + |
577 | +@cached |
578 | +def config(scope=None): |
579 | + "Juju charm configuration" |
580 | + config_cmd_line = ['config-get'] |
581 | + if scope is not None: |
582 | + config_cmd_line.append(scope) |
583 | + config_cmd_line.append('--format=json') |
584 | + try: |
585 | + return json.loads(subprocess.check_output(config_cmd_line)) |
586 | + except ValueError: |
587 | + return None |
588 | + |
589 | + |
590 | +@cached |
591 | +def relation_get(attribute=None, unit=None, rid=None): |
592 | + _args = ['relation-get', '--format=json'] |
593 | + if rid: |
594 | + _args.append('-r') |
595 | + _args.append(rid) |
596 | + _args.append(attribute or '-') |
597 | + if unit: |
598 | + _args.append(unit) |
599 | + try: |
600 | + return json.loads(subprocess.check_output(_args)) |
601 | + except ValueError: |
602 | + return None |
603 | + |
604 | + |
605 | +def relation_set(relation_id=None, relation_settings={}, **kwargs): |
606 | + relation_cmd_line = ['relation-set'] |
607 | + if relation_id is not None: |
608 | + relation_cmd_line.extend(('-r', relation_id)) |
609 | + for k, v in (relation_settings.items() + kwargs.items()): |
610 | + if v is None: |
611 | + relation_cmd_line.append('{}='.format(k)) |
612 | + else: |
613 | + relation_cmd_line.append('{}={}'.format(k, v)) |
614 | + subprocess.check_call(relation_cmd_line) |
615 | + # Flush cache of any relation-gets for local unit |
616 | + flush(local_unit()) |
617 | + |
618 | + |
619 | +@cached |
620 | +def relation_ids(reltype=None): |
621 | + "A list of relation_ids" |
622 | + reltype = reltype or relation_type() |
623 | + relid_cmd_line = ['relation-ids', '--format=json'] |
624 | + if reltype is not None: |
625 | + relid_cmd_line.append(reltype) |
626 | + return json.loads(subprocess.check_output(relid_cmd_line)) |
627 | + return [] |
628 | + |
629 | + |
630 | +@cached |
631 | +def related_units(relid=None): |
632 | + "A list of related units" |
633 | + relid = relid or relation_id() |
634 | + units_cmd_line = ['relation-list', '--format=json'] |
635 | + if relid is not None: |
636 | + units_cmd_line.extend(('-r', relid)) |
637 | + return json.loads(subprocess.check_output(units_cmd_line)) |
638 | + |
639 | + |
640 | +@cached |
641 | +def relation_for_unit(unit=None, rid=None): |
642 | + "Get the json represenation of a unit's relation" |
643 | + unit = unit or remote_unit() |
644 | + relation = relation_get(unit=unit, rid=rid) |
645 | + for key in relation: |
646 | + if key.endswith('-list'): |
647 | + relation[key] = relation[key].split() |
648 | + relation['__unit__'] = unit |
649 | + return relation |
650 | + |
651 | + |
652 | +@cached |
653 | +def relations_for_id(relid=None): |
654 | + "Get relations of a specific relation ID" |
655 | + relation_data = [] |
656 | + relid = relid or relation_ids() |
657 | + for unit in related_units(relid): |
658 | + unit_data = relation_for_unit(unit, relid) |
659 | + unit_data['__relid__'] = relid |
660 | + relation_data.append(unit_data) |
661 | + return relation_data |
662 | + |
663 | + |
664 | +@cached |
665 | +def relations_of_type(reltype=None): |
666 | + "Get relations of a specific type" |
667 | + relation_data = [] |
668 | + reltype = reltype or relation_type() |
669 | + for relid in relation_ids(reltype): |
670 | + for relation in relations_for_id(relid): |
671 | + relation['__relid__'] = relid |
672 | + relation_data.append(relation) |
673 | + return relation_data |
674 | + |
675 | + |
676 | +@cached |
677 | +def relation_types(): |
678 | + "Get a list of relation types supported by this charm" |
679 | + charmdir = os.environ.get('CHARM_DIR', '') |
680 | + mdf = open(os.path.join(charmdir, 'metadata.yaml')) |
681 | + md = yaml.safe_load(mdf) |
682 | + rel_types = [] |
683 | + for key in ('provides', 'requires', 'peers'): |
684 | + section = md.get(key) |
685 | + if section: |
686 | + rel_types.extend(section.keys()) |
687 | + mdf.close() |
688 | + return rel_types |
689 | + |
690 | + |
691 | +@cached |
692 | +def relations(): |
693 | + rels = {} |
694 | + for reltype in relation_types(): |
695 | + relids = {} |
696 | + for relid in relation_ids(reltype): |
697 | + units = {local_unit(): relation_get(unit=local_unit(), rid=relid)} |
698 | + for unit in related_units(relid): |
699 | + reldata = relation_get(unit=unit, rid=relid) |
700 | + units[unit] = reldata |
701 | + relids[relid] = units |
702 | + rels[reltype] = relids |
703 | + return rels |
704 | + |
705 | + |
706 | +def open_port(port, protocol="TCP"): |
707 | + "Open a service network port" |
708 | + _args = ['open-port'] |
709 | + _args.append('{}/{}'.format(port, protocol)) |
710 | + subprocess.check_call(_args) |
711 | + |
712 | + |
713 | +def close_port(port, protocol="TCP"): |
714 | + "Close a service network port" |
715 | + _args = ['close-port'] |
716 | + _args.append('{}/{}'.format(port, protocol)) |
717 | + subprocess.check_call(_args) |
718 | + |
719 | + |
720 | +@cached |
721 | +def unit_get(attribute): |
722 | + _args = ['unit-get', '--format=json', attribute] |
723 | + try: |
724 | + return json.loads(subprocess.check_output(_args)) |
725 | + except ValueError: |
726 | + return None |
727 | + |
728 | + |
729 | +def unit_private_ip(): |
730 | + return unit_get('private-address') |
731 | + |
732 | + |
733 | +class UnregisteredHookError(Exception): |
734 | + pass |
735 | + |
736 | + |
737 | +class Hooks(object): |
738 | + def __init__(self): |
739 | + super(Hooks, self).__init__() |
740 | + self._hooks = {} |
741 | + |
742 | + def register(self, name, function): |
743 | + self._hooks[name] = function |
744 | + |
745 | + def execute(self, args): |
746 | + hook_name = os.path.basename(args[0]) |
747 | + if hook_name in self._hooks: |
748 | + self._hooks[hook_name]() |
749 | + else: |
750 | + raise UnregisteredHookError(hook_name) |
751 | + |
752 | + def hook(self, *hook_names): |
753 | + def wrapper(decorated): |
754 | + for hook_name in hook_names: |
755 | + self.register(hook_name, decorated) |
756 | + else: |
757 | + self.register(decorated.__name__, decorated) |
758 | + if '_' in decorated.__name__: |
759 | + self.register( |
760 | + decorated.__name__.replace('_', '-'), decorated) |
761 | + return decorated |
762 | + return wrapper |
763 | + |
764 | +def charm_dir(): |
765 | + return os.environ.get('CHARM_DIR') |
766 | |
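The `@cached` decorator at the top of hookenv.py memoises hook-tool calls (`config-get`, `relation-get`, and friends) per `(func, args, kwargs)` string key, so repeated lookups within one hook execution only fork a subprocess once; `relation_set` then calls `flush(local_unit())` to invalidate stale entries. A minimal sketch of the mechanism, with a counter (`calls`, illustrative) in place of the real subprocess call:

```python
cache = {}

def cached(func):
    """Memoise results per (func, args, kwargs) string key,
    mirroring hookenv's @cached decorator."""
    def wrapper(*args, **kwargs):
        key = str((func, args, kwargs))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]
    return wrapper

calls = []  # records how many times the wrapped body actually runs

@cached
def unit_get(attribute):  # stand-in for shelling out to 'unit-get'
    calls.append(attribute)
    return 'value-for-' + attribute

unit_get('test')
unit_get('test')  # second call is served from the cache
```

Using `str()` of the argument tuple as the key keeps unhashable arguments (dicts, lists) cacheable, at the cost of requiring stable `repr` output.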
767 | === added file 'hooks/charmhelpers/core/host.py' |
768 | --- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000 |
769 | +++ hooks/charmhelpers/core/host.py 2013-07-08 08:34:31 +0000 |
770 | @@ -0,0 +1,273 @@ |
771 | +"""Tools for working with the host system""" |
772 | +# Copyright 2012 Canonical Ltd. |
773 | +# |
774 | +# Authors: |
775 | +# Nick Moffitt <nick.moffitt@canonical.com> |
776 | +# Matthew Wedgwood <matthew.wedgwood@canonical.com> |
777 | + |
778 | +import apt_pkg |
779 | +import os |
780 | +import pwd |
781 | +import grp |
782 | +import subprocess |
783 | +import hashlib |
784 | + |
785 | +from collections import OrderedDict |
786 | + |
787 | +from hookenv import log, execution_environment |
788 | + |
789 | + |
790 | +def service_start(service_name): |
791 | + service('start', service_name) |
792 | + |
793 | + |
794 | +def service_stop(service_name): |
795 | + service('stop', service_name) |
796 | + |
797 | + |
798 | +def service_restart(service_name): |
799 | + service('restart', service_name) |
800 | + |
801 | + |
802 | +def service_reload(service_name, restart_on_failure=False): |
803 | + if not service('reload', service_name) and restart_on_failure: |
804 | + service('restart', service_name) |
805 | + |
806 | + |
807 | +def service(action, service_name): |
808 | + cmd = ['service', service_name, action] |
809 | + return subprocess.call(cmd) == 0 |
810 | + |
811 | + |
812 | +def adduser(username, password=None, shell='/bin/bash', system_user=False): |
813 | + """Add a user""" |
814 | + try: |
815 | + user_info = pwd.getpwnam(username) |
816 | + log('user {0} already exists!'.format(username)) |
817 | + except KeyError: |
818 | + log('creating user {0}'.format(username)) |
819 | + cmd = ['useradd'] |
820 | + if system_user or password is None: |
821 | + cmd.append('--system') |
822 | + else: |
823 | + cmd.extend([ |
824 | + '--create-home', |
825 | + '--shell', shell, |
826 | + '--password', password, |
827 | + ]) |
828 | + cmd.append(username) |
829 | + subprocess.check_call(cmd) |
830 | + user_info = pwd.getpwnam(username) |
831 | + return user_info |
832 | + |
833 | + |
834 | +def add_user_to_group(username, group): |
835 | + """Add a user to a group""" |
836 | + cmd = [ |
837 | + 'gpasswd', '-a', |
838 | + username, |
839 | + group |
840 | + ] |
841 | + log("Adding user {} to group {}".format(username, group)) |
842 | + subprocess.check_call(cmd) |
843 | + |
844 | + |
845 | +def rsync(from_path, to_path, flags='-r', options=None): |
846 | + """Replicate the contents of a path""" |
847 | + context = execution_environment() |
848 | + options = options or ['--delete', '--executability'] |
849 | + cmd = ['/usr/bin/rsync', flags] |
850 | + cmd.extend(options) |
851 | + cmd.append(from_path.format(**context)) |
852 | + cmd.append(to_path.format(**context)) |
853 | + log(" ".join(cmd)) |
854 | + return subprocess.check_output(cmd).strip() |
855 | + |
856 | + |
857 | +def symlink(source, destination): |
858 | + """Create a symbolic link""" |
859 | + context = execution_environment() |
860 | + log("Symlinking {} as {}".format(source, destination)) |
861 | + cmd = [ |
862 | + 'ln', |
863 | + '-sf', |
864 | + source.format(**context), |
865 | + destination.format(**context) |
866 | + ] |
867 | + subprocess.check_call(cmd) |
868 | + |
869 | + |
870 | +def mkdir(path, owner='root', group='root', perms=0555, force=False): |
871 | + """Create a directory""" |
872 | + context = execution_environment() |
873 | + log("Making dir {} {}:{} {:o}".format(path, owner, group, |
874 | + perms)) |
875 | + uid = pwd.getpwnam(owner.format(**context)).pw_uid |
876 | + gid = grp.getgrnam(group.format(**context)).gr_gid |
877 | + realpath = os.path.abspath(path) |
878 | + if os.path.exists(realpath): |
879 | + if force and not os.path.isdir(realpath): |
880 | + log("Removing non-directory file {} prior to mkdir()".format(path)) |
881 | + os.unlink(realpath) |
882 | + else: |
883 | + os.makedirs(realpath, perms) |
884 | + os.chown(realpath, uid, gid) |
885 | + |
886 | + |
887 | +def write_file(path, fmtstr, owner='root', group='root', perms=0444, **kwargs): |
888 | + """Create or overwrite a file with the contents of a string""" |
889 | + context = execution_environment() |
890 | + context.update(kwargs) |
891 | + log("Writing file {} {}:{} {:o}".format(path, owner, group, |
892 | + perms)) |
893 | + uid = pwd.getpwnam(owner.format(**context)).pw_uid |
894 | + gid = grp.getgrnam(group.format(**context)).gr_gid |
895 | + with open(path.format(**context), 'w') as target: |
896 | + os.fchown(target.fileno(), uid, gid) |
897 | + os.fchmod(target.fileno(), perms) |
898 | + target.write(fmtstr.format(**context)) |
899 | + |
900 | + |
901 | +def render_template_file(source, destination, **kwargs): |
902 | + """Create or overwrite a file using a template""" |
903 | + log("Rendering template {} for {}".format(source, |
904 | + destination)) |
905 | + context = execution_environment() |
906 | + with open(source.format(**context), 'r') as template: |
907 | + write_file(destination.format(**context), template.read(), |
908 | + **kwargs) |
909 | + |
910 | + |
911 | +def filter_installed_packages(packages): |
912 | + """Returns a list of packages that require installation""" |
913 | + apt_pkg.init() |
914 | + cache = apt_pkg.Cache() |
915 | + _pkgs = [] |
916 | + for package in packages: |
917 | + try: |
918 | + p = cache[package] |
919 | + p.current_ver or _pkgs.append(package) |
920 | + except KeyError: |
921 | + log('Package {} has no installation candidate.'.format(package), |
922 | + level='WARNING') |
923 | + _pkgs.append(package) |
924 | + return _pkgs |
925 | + |
926 | + |
927 | +def apt_install(packages, options=None, fatal=False): |
928 | + """Install one or more packages""" |
929 | + options = options or [] |
930 | + cmd = ['apt-get', '-y'] |
931 | + cmd.extend(options) |
932 | + cmd.append('install') |
933 | + if isinstance(packages, basestring): |
934 | + cmd.append(packages) |
935 | + else: |
936 | + cmd.extend(packages) |
937 | + log("Installing {} with options: {}".format(packages, |
938 | + options)) |
939 | + if fatal: |
940 | + subprocess.check_call(cmd) |
941 | + else: |
942 | + subprocess.call(cmd) |
943 | + |
944 | + |
945 | +def apt_update(fatal=False): |
946 | + """Update local apt cache""" |
947 | + cmd = ['apt-get', 'update'] |
948 | + if fatal: |
949 | + subprocess.check_call(cmd) |
950 | + else: |
951 | + subprocess.call(cmd) |
952 | + |
953 | + |
954 | +def mount(device, mountpoint, options=None, persist=False): |
955 | + '''Mount a filesystem''' |
956 | + cmd_args = ['mount'] |
957 | + if options is not None: |
958 | + cmd_args.extend(['-o', options]) |
959 | + cmd_args.extend([device, mountpoint]) |
960 | + try: |
961 | + subprocess.check_output(cmd_args) |
962 | + except subprocess.CalledProcessError, e: |
963 | + log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output)) |
964 | + return False |
965 | + if persist: |
966 | + # TODO: update fstab |
967 | + pass |
968 | + return True |
969 | + |
970 | + |
971 | +def umount(mountpoint, persist=False): |
972 | + '''Unmount a filesystem''' |
973 | + cmd_args = ['umount', mountpoint] |
974 | + try: |
975 | + subprocess.check_output(cmd_args) |
976 | + except subprocess.CalledProcessError, e: |
977 | + log('Error unmounting {}\n{}'.format(mountpoint, e.output)) |
978 | + return False |
979 | + if persist: |
980 | + # TODO: update fstab |
981 | + pass |
982 | + return True |
983 | + |
984 | + |
985 | +def mounts(): |
986 | + '''List of all mounted volumes as [[mountpoint,device],[...]]''' |
987 | + with open('/proc/mounts') as f: |
988 | + # [['/mount/point','/dev/path'],[...]] |
989 | + system_mounts = [m[1::-1] for m in [l.strip().split() |
990 | + for l in f.readlines()]] |
991 | + return system_mounts |
992 | + |
993 | + |
994 | +def file_hash(path): |
995 | +    ''' Generate an md5 hash of the contents of 'path' or None if not found '''
996 | + if os.path.exists(path): |
997 | + h = hashlib.md5() |
998 | + with open(path, 'r') as source: |
999 | + h.update(source.read()) # IGNORE:E1101 - it does have update |
1000 | + return h.hexdigest() |
1001 | + else: |
1002 | + return None |
1003 | + |
1004 | + |
1005 | +def restart_on_change(restart_map): |
1006 | + ''' Restart services based on configuration files changing |
1007 | + |
1008 | +    This function is used as a decorator, for example
1009 | + |
1010 | + @restart_on_change({ |
1011 | + '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ] |
1012 | + }) |
1013 | + def ceph_client_changed(): |
1014 | + ... |
1015 | + |
1016 | + In this example, the cinder-api and cinder-volume services |
1017 | + would be restarted if /etc/ceph/ceph.conf is changed by the |
1018 | + ceph_client_changed function. |
1019 | + ''' |
1020 | + def wrap(f): |
1021 | + def wrapped_f(*args): |
1022 | + checksums = {} |
1023 | + for path in restart_map: |
1024 | + checksums[path] = file_hash(path) |
1025 | + f(*args) |
1026 | + restarts = [] |
1027 | + for path in restart_map: |
1028 | + if checksums[path] != file_hash(path): |
1029 | + restarts += restart_map[path] |
1030 | + for service_name in list(OrderedDict.fromkeys(restarts)): |
1031 | + service('restart', service_name) |
1032 | + return wrapped_f |
1033 | + return wrap |
1034 | + |
1035 | + |
1036 | +def lsb_release(): |
1037 | + '''Return /etc/lsb-release in a dict''' |
1038 | + d = {} |
1039 | + with open('/etc/lsb-release', 'r') as lsb: |
1040 | + for l in lsb: |
1041 | + k, v = l.split('=') |
1042 | + d[k.strip()] = v.strip() |
1043 | + return d |
1044 | |
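The `restart_on_change` decorator added to `hooks/charmhelpers/core/host.py` above is easiest to follow in isolation. Below is a minimal, self-contained sketch of the same pattern; the `file_hash` and `service` calls are replaced with in-memory stand-ins (the `FILES`/`RESTARTED` names are illustrative, not part of charm-helpers):

```python
# Sketch of the restart_on_change pattern: snapshot watched-file hashes,
# run the wrapped hook, then restart services whose files changed.
from collections import OrderedDict

FILES = {'/etc/ceph/ceph.conf': 'v1'}   # stand-in for on-disk files
RESTARTED = []                           # records simulated restarts


def file_hash(path):
    return FILES.get(path)


def service(action, name):
    RESTARTED.append((action, name))


def restart_on_change(restart_map):
    def wrap(f):
        def wrapped_f(*args):
            checksums = {p: file_hash(p) for p in restart_map}
            f(*args)
            restarts = []
            for p in restart_map:
                if checksums[p] != file_hash(p):
                    restarts += restart_map[p]
            # Deduplicate while preserving order, as the real helper does
            for name in OrderedDict.fromkeys(restarts):
                service('restart', name)
        return wrapped_f
    return wrap


@restart_on_change({'/etc/ceph/ceph.conf': ['ceph-mon-all']})
def hook():
    FILES['/etc/ceph/ceph.conf'] = 'v2'  # simulate the hook rewriting config


hook()
```

Note that services are only bounced when a watched file's hash actually changes, so hooks that rewrite identical content cause no restarts.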
1045 | === added directory 'hooks/charmhelpers/fetch' |
1046 | === added file 'hooks/charmhelpers/fetch/__init__.py' |
1047 | --- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000 |
1048 | +++ hooks/charmhelpers/fetch/__init__.py 2013-07-08 08:34:31 +0000 |
1049 | @@ -0,0 +1,152 @@ |
1050 | +import importlib |
1051 | +from yaml import safe_load |
1052 | +from charmhelpers.core.host import ( |
1053 | + apt_install, |
1054 | + apt_update, |
1055 | + filter_installed_packages, |
1056 | + lsb_release |
1057 | +) |
1058 | +from urlparse import ( |
1059 | + urlparse, |
1060 | + urlunparse, |
1061 | +) |
1062 | +import subprocess |
1063 | +from charmhelpers.core.hookenv import ( |
1064 | + config, |
1065 | + log, |
1066 | +) |
1067 | + |
1068 | +CLOUD_ARCHIVE = """# Ubuntu Cloud Archive |
1069 | +deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main |
1070 | +""" |
1071 | +PROPOSED_POCKET = """# Proposed |
1072 | +deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted |
1073 | +""" |
1074 | + |
1075 | + |
1076 | +def add_source(source, key=None): |
1077 | + if ((source.startswith('ppa:') or |
1078 | + source.startswith('http:'))): |
1079 | + subprocess.check_call(['add-apt-repository', source]) |
1080 | + elif source.startswith('cloud:'): |
1081 | + apt_install(filter_installed_packages(['ubuntu-cloud-keyring']), |
1082 | + fatal=True) |
1083 | + pocket = source.split(':')[-1] |
1084 | + with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt: |
1085 | + apt.write(CLOUD_ARCHIVE.format(pocket)) |
1086 | + elif source == 'proposed': |
1087 | + release = lsb_release()['DISTRIB_CODENAME'] |
1088 | + with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt: |
1089 | + apt.write(PROPOSED_POCKET.format(release)) |
1090 | + if key: |
1091 | + subprocess.check_call(['apt-key', 'import', key]) |
1092 | + |
1093 | + |
1094 | +class SourceConfigError(Exception): |
1095 | + pass |
1096 | + |
1097 | + |
1098 | +def configure_sources(update=False, |
1099 | + sources_var='install_sources', |
1100 | + keys_var='install_keys'): |
1101 | + """ |
1102 | + Configure multiple sources from charm configuration |
1103 | + |
1104 | + Example config: |
1105 | + install_sources: |
1106 | + - "ppa:foo" |
1107 | + - "http://example.com/repo precise main" |
1108 | + install_keys: |
1109 | + - null |
1110 | + - "a1b2c3d4" |
1111 | + |
1112 | + Note that 'null' (a.k.a. None) should not be quoted. |
1113 | + """ |
1114 | + sources = safe_load(config(sources_var)) |
1115 | + keys = safe_load(config(keys_var)) |
1116 | + if isinstance(sources, basestring) and isinstance(keys, basestring): |
1117 | + add_source(sources, keys) |
1118 | + else: |
1119 | + if not len(sources) == len(keys): |
1120 | + msg = 'Install sources and keys lists are different lengths' |
1121 | + raise SourceConfigError(msg) |
1122 | + for src_num in range(len(sources)): |
1123 | + add_source(sources[src_num], keys[src_num]) |
1124 | + if update: |
1125 | + apt_update(fatal=True) |
1126 | + |
1127 | +# The order of this list is very important. Handlers should be listed from
1128 | +# least- to most-specific URL matching.
1129 | +FETCH_HANDLERS = ( |
1130 | + 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', |
1131 | +) |
1132 | + |
1133 | + |
1134 | +class UnhandledSource(Exception): |
1135 | + pass |
1136 | + |
1137 | + |
1138 | +def install_remote(source): |
1139 | + """ |
1140 | + Install a file tree from a remote source |
1141 | + |
1142 | + The specified source should be a url of the form: |
1143 | + scheme://[host]/path[#[option=value][&...]] |
1144 | + |
1145 | +    Schemes supported are based on this module's submodules
1146 | + Options supported are submodule-specific""" |
1147 | + # We ONLY check for True here because can_handle may return a string |
1148 | + # explaining why it can't handle a given source. |
1149 | + handlers = [h for h in plugins() if h.can_handle(source) is True] |
1150 | + for handler in handlers: |
1151 | + try: |
1152 | + installed_to = handler.install(source) |
1153 | + except UnhandledSource: |
1154 | + pass |
1155 | + if not installed_to: |
1156 | + raise UnhandledSource("No handler found for source {}".format(source)) |
1157 | + return installed_to |
1158 | + |
1159 | + |
1160 | +def install_from_config(config_var_name): |
1161 | + charm_config = config() |
1162 | + source = charm_config[config_var_name] |
1163 | + return install_remote(source) |
1164 | + |
1165 | + |
1166 | +class BaseFetchHandler(object): |
1167 | + """Base class for FetchHandler implementations in fetch plugins""" |
1168 | + def can_handle(self, source): |
1169 | + """Returns True if the source can be handled. Otherwise returns |
1170 | + a string explaining why it cannot""" |
1171 | + return "Wrong source type" |
1172 | + |
1173 | + def install(self, source): |
1174 | + """Try to download and unpack the source. Return the path to the |
1175 | + unpacked files or raise UnhandledSource.""" |
1176 | + raise UnhandledSource("Wrong source type {}".format(source)) |
1177 | + |
1178 | + def parse_url(self, url): |
1179 | + return urlparse(url) |
1180 | + |
1181 | + def base_url(self, url): |
1182 | + """Return url without querystring or fragment""" |
1183 | + parts = list(self.parse_url(url)) |
1184 | + parts[4:] = ['' for i in parts[4:]] |
1185 | + return urlunparse(parts) |
1186 | + |
1187 | + |
1188 | +def plugins(fetch_handlers=None): |
1189 | + if not fetch_handlers: |
1190 | + fetch_handlers = FETCH_HANDLERS |
1191 | + plugin_list = [] |
1192 | + for handler_name in fetch_handlers: |
1193 | + package, classname = handler_name.rsplit('.', 1) |
1194 | + try: |
1195 | + handler_class = getattr(importlib.import_module(package), classname) |
1196 | + plugin_list.append(handler_class()) |
1197 | + except (ImportError, AttributeError): |
1198 | +            # Skip missing plugins so that they can be omitted from
1199 | + # installation if desired |
1200 | + log("FetchHandler {} not found, skipping plugin".format(handler_name)) |
1201 | + return plugin_list |
1202 | |
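The source/key pairing that `configure_sources` (above) performs on charm config can be sketched without the Juju runtime. This stand-alone version substitutes `json.loads` for `yaml.safe_load` (JSON is valid YAML for these inputs) and stubs out `add_source` to record calls; names here are illustrative:

```python
# Sketch of configure_sources: parse the install_sources/install_keys
# config values and apply them pairwise, erroring on length mismatch.
import json

CALLS = []  # records (source, key) pairs that would be configured


def add_source(source, key=None):
    CALLS.append((source, key))


def configure_sources(sources_raw, keys_raw):
    sources = json.loads(sources_raw)
    keys = json.loads(keys_raw)
    if isinstance(sources, str) and isinstance(keys, str):
        add_source(sources, keys)
        return
    if len(sources) != len(keys):
        raise ValueError('Install sources and keys lists are different lengths')
    for src, key in zip(sources, keys):
        add_source(src, key)


# Mirrors the docstring's example config; null maps to None (no key)
configure_sources('["ppa:foo", "http://example.com/repo precise main"]',
                  '[null, "a1b2c3d4"]')
```

As the docstring notes, `null` must be unquoted so it deserializes to `None` rather than the literal string `"null"`.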
1203 | === added file 'hooks/charmhelpers/fetch/archiveurl.py' |
1204 | --- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000 |
1205 | +++ hooks/charmhelpers/fetch/archiveurl.py 2013-07-08 08:34:31 +0000 |
1206 | @@ -0,0 +1,43 @@ |
1207 | +import os |
1208 | +import urllib2 |
1209 | +from charmhelpers.fetch import ( |
1210 | + BaseFetchHandler, |
1211 | + UnhandledSource |
1212 | +) |
1213 | +from charmhelpers.payload.archive import ( |
1214 | + get_archive_handler, |
1215 | + extract, |
1216 | +) |
1217 | + |
1218 | + |
1219 | +class ArchiveUrlFetchHandler(BaseFetchHandler): |
1220 | + """Handler for archives via generic URLs""" |
1221 | + def can_handle(self, source): |
1222 | + url_parts = self.parse_url(source) |
1223 | + if url_parts.scheme not in ('http', 'https', 'ftp', 'file'): |
1224 | + return "Wrong source type" |
1225 | + if get_archive_handler(self.base_url(source)): |
1226 | + return True |
1227 | + return False |
1228 | + |
1229 | + def download(self, source, dest): |
1230 | +        # propagate all exceptions
1231 | + # URLError, OSError, etc |
1232 | + response = urllib2.urlopen(source) |
1233 | + with open(dest, 'w') as dest_file: |
1234 | + dest_file.write(response.read()) |
1235 | + |
1236 | + def install(self, source): |
1237 | + url_parts = self.parse_url(source) |
1238 | + dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched') |
1239 | + dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path)) |
1240 | + try: |
1241 | + self.download(source, dld_file) |
1242 | + except urllib2.URLError as e: |
1243 | + return UnhandledSource(e.reason) |
1244 | + except OSError as e: |
1245 | + return UnhandledSource(e.strerror) |
1246 | + finally: |
1247 | + if os.path.isfile(dld_file): |
1248 | + os.unlink(dld_file) |
1249 | + return extract(dld_file) |
1250 | |
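`BaseFetchHandler.base_url` above strips the query string and fragment so only `scheme://host/path` is considered when picking an archive handler (options are carried in the fragment, e.g. `#sha1=...`). A Python 3 equivalent of the same trick (the diff itself targets Python 2's `urlparse` module):

```python
# Blank the query and fragment components of a URL, keeping
# scheme, netloc, path and params intact.
from urllib.parse import urlparse, urlunparse


def base_url(url):
    parts = list(urlparse(url))          # [scheme, netloc, path, params, query, fragment]
    parts[4:] = ['' for _ in parts[4:]]  # blank query and fragment
    return urlunparse(parts)


stripped = base_url('http://example.com/pkg.tar.gz?token=abc#sha1=deadbeef')
```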
1251 | === modified file 'hooks/hooks.py' |
1252 | --- hooks/hooks.py 2013-06-20 21:15:17 +0000 |
1253 | +++ hooks/hooks.py 2013-07-08 08:34:31 +0000 |
1254 | @@ -10,12 +10,35 @@ |
1255 | |
1256 | import glob |
1257 | import os |
1258 | -import subprocess |
1259 | import shutil |
1260 | import sys |
1261 | |
1262 | import ceph |
1263 | -import utils |
1264 | +from charmhelpers.core.hookenv import ( |
1265 | + log, ERROR, |
1266 | + config, |
1267 | + relation_ids, |
1268 | + related_units, |
1269 | + relation_get, |
1270 | + relation_set, |
1271 | + remote_unit, |
1272 | + Hooks, UnregisteredHookError |
1273 | +) |
1274 | +from charmhelpers.core.host import ( |
1275 | + apt_install, |
1276 | + apt_update, |
1277 | + filter_installed_packages, |
1278 | + service_restart, |
1279 | + umount |
1280 | +) |
1281 | +from charmhelpers.fetch import add_source |
1282 | + |
1283 | +from utils import ( |
1284 | + render_template, |
1285 | + get_host_ip, |
1286 | +) |
1287 | + |
1288 | +hooks = Hooks() |
1289 | |
1290 | |
1291 | def install_upstart_scripts(): |
1292 | @@ -25,328 +48,221 @@ |
1293 | shutil.copy(x, '/etc/init/') |
1294 | |
1295 | |
1296 | +@hooks.hook('install') |
1297 | def install(): |
1298 | - utils.juju_log('INFO', 'Begin install hook.') |
1299 | - utils.configure_source() |
1300 | - utils.install('ceph', 'gdisk', 'ntp', 'btrfs-tools', 'python-ceph', 'xfsprogs') |
1301 | + log('Begin install hook.') |
1302 | + add_source(config('source'), config('key')) |
1303 | + apt_update(fatal=True) |
1304 | + apt_install(packages=ceph.PACKAGES, fatal=True) |
1305 | install_upstart_scripts() |
1306 | - utils.juju_log('INFO', 'End install hook.') |
1307 | + log('End install hook.') |
1308 | |
1309 | |
1310 | def emit_cephconf(): |
1311 | cephcontext = { |
1312 | - 'auth_supported': utils.config_get('auth-supported'), |
1313 | + 'auth_supported': config('auth-supported'), |
1314 | 'mon_hosts': ' '.join(get_mon_hosts()), |
1315 | - 'fsid': utils.config_get('fsid'), |
1316 | + 'fsid': config('fsid'), |
1317 | 'version': ceph.get_ceph_version() |
1318 | - } |
1319 | + } |
1320 | |
1321 | with open('/etc/ceph/ceph.conf', 'w') as cephconf: |
1322 | - cephconf.write(utils.render_template('ceph.conf', cephcontext)) |
1323 | + cephconf.write(render_template('ceph.conf', cephcontext)) |
1324 | |
1325 | JOURNAL_ZAPPED = '/var/lib/ceph/journal_zapped' |
1326 | |
1327 | |
1328 | +@hooks.hook('config-changed') |
1329 | def config_changed(): |
1330 | - utils.juju_log('INFO', 'Begin config-changed hook.') |
1331 | + log('Begin config-changed hook.') |
1332 | |
1333 | - utils.juju_log('INFO', 'Monitor hosts are ' + repr(get_mon_hosts())) |
1334 | + log('Monitor hosts are ' + repr(get_mon_hosts())) |
1335 | |
1336 | # Pre-flight checks |
1337 | - if not utils.config_get('fsid'): |
1338 | - utils.juju_log('CRITICAL', 'No fsid supplied, cannot proceed.') |
1339 | - sys.exit(1) |
1340 | - if not utils.config_get('monitor-secret'): |
1341 | - utils.juju_log('CRITICAL', |
1342 | - 'No monitor-secret supplied, cannot proceed.') |
1343 | - sys.exit(1) |
1344 | - if utils.config_get('osd-format') not in ceph.DISK_FORMATS: |
1345 | - utils.juju_log('CRITICAL', |
1346 | - 'Invalid OSD disk format configuration specified') |
1347 | + if not config('fsid'): |
1348 | + log('No fsid supplied, cannot proceed.', level=ERROR) |
1349 | + sys.exit(1) |
1350 | + if not config('monitor-secret'): |
1351 | + log('No monitor-secret supplied, cannot proceed.', level=ERROR) |
1352 | + sys.exit(1) |
1353 | + if config('osd-format') not in ceph.DISK_FORMATS: |
1354 | + log('Invalid OSD disk format configuration specified', level=ERROR) |
1355 | sys.exit(1) |
1356 | |
1357 | emit_cephconf() |
1358 | |
1359 | - e_mountpoint = utils.config_get('ephemeral-unmount') |
1360 | - if (e_mountpoint and |
1361 | - filesystem_mounted(e_mountpoint)): |
1362 | - subprocess.call(['umount', e_mountpoint]) |
1363 | + e_mountpoint = config('ephemeral-unmount') |
1364 | + if e_mountpoint and ceph.filesystem_mounted(e_mountpoint): |
1365 | + umount(e_mountpoint) |
1366 | |
1367 | - osd_journal = utils.config_get('osd-journal') |
1368 | - if (osd_journal and |
1369 | - not os.path.exists(JOURNAL_ZAPPED) and |
1370 | - os.path.exists(osd_journal)): |
1371 | + osd_journal = config('osd-journal') |
1372 | + if (osd_journal and not os.path.exists(JOURNAL_ZAPPED) |
1373 | + and os.path.exists(osd_journal)): |
1374 | ceph.zap_disk(osd_journal) |
1375 | with open(JOURNAL_ZAPPED, 'w') as zapped: |
1376 | zapped.write('DONE') |
1377 | |
1378 | - for dev in utils.config_get('osd-devices').split(' '): |
1379 | - osdize(dev) |
1380 | + for dev in config('osd-devices').split(' '): |
1381 | + ceph.osdize(dev, config('osd-format'), config('osd-journal'), |
1382 | + reformat_osd()) |
1383 | |
1384 | # Support use of single node ceph |
1385 | - if (not ceph.is_bootstrapped() and |
1386 | - int(utils.config_get('monitor-count')) == 1): |
1387 | - bootstrap_monitor_cluster() |
1388 | + if (not ceph.is_bootstrapped() and int(config('monitor-count')) == 1): |
1389 | + ceph.bootstrap_monitor_cluster(config('monitor-secret')) |
1390 | ceph.wait_for_bootstrap() |
1391 | |
1392 | if ceph.is_bootstrapped(): |
1393 | ceph.rescan_osd_devices() |
1394 | |
1395 | - utils.juju_log('INFO', 'End config-changed hook.') |
1396 | + log('End config-changed hook.') |
1397 | |
1398 | |
1399 | def get_mon_hosts(): |
1400 | hosts = [] |
1401 | - hosts.append('{}:6789'.format(utils.get_host_ip())) |
1402 | + hosts.append('{}:6789'.format(get_host_ip())) |
1403 | |
1404 | - for relid in utils.relation_ids('mon'): |
1405 | - for unit in utils.relation_list(relid): |
1406 | + for relid in relation_ids('mon'): |
1407 | + for unit in related_units(relid): |
1408 | hosts.append( |
1409 | - '{}:6789'.format(utils.get_host_ip( |
1410 | - utils.relation_get('private-address', |
1411 | - unit, relid))) |
1412 | - ) |
1413 | + '{}:6789'.format(get_host_ip(relation_get('private-address', |
1414 | + unit, relid))) |
1415 | + ) |
1416 | |
1417 | hosts.sort() |
1418 | return hosts |
1419 | |
1420 | |
1421 | -def update_monfs(): |
1422 | - hostname = utils.get_unit_hostname() |
1423 | - monfs = '/var/lib/ceph/mon/ceph-{}'.format(hostname) |
1424 | - upstart = '{}/upstart'.format(monfs) |
1425 | - if (os.path.exists(monfs) and |
1426 | - not os.path.exists(upstart)): |
1427 | - # Mark mon as managed by upstart so that |
1428 | - # it gets start correctly on reboots |
1429 | - with open(upstart, 'w'): |
1430 | - pass |
1431 | - |
1432 | - |
1433 | -def bootstrap_monitor_cluster(): |
1434 | - hostname = utils.get_unit_hostname() |
1435 | - path = '/var/lib/ceph/mon/ceph-{}'.format(hostname) |
1436 | - done = '{}/done'.format(path) |
1437 | - upstart = '{}/upstart'.format(path) |
1438 | - secret = utils.config_get('monitor-secret') |
1439 | - keyring = '/var/lib/ceph/tmp/{}.mon.keyring'.format(hostname) |
1440 | - |
1441 | - if os.path.exists(done): |
1442 | - utils.juju_log('INFO', |
1443 | - 'bootstrap_monitor_cluster: mon already initialized.') |
1444 | - else: |
1445 | - # Ceph >= 0.61.3 needs this for ceph-mon fs creation |
1446 | - os.makedirs('/var/run/ceph', mode=0755) |
1447 | - os.makedirs(path) |
1448 | - # end changes for Ceph >= 0.61.3 |
1449 | - try: |
1450 | - subprocess.check_call(['ceph-authtool', keyring, |
1451 | - '--create-keyring', '--name=mon.', |
1452 | - '--add-key={}'.format(secret), |
1453 | - '--cap', 'mon', 'allow *']) |
1454 | - |
1455 | - subprocess.check_call(['ceph-mon', '--mkfs', |
1456 | - '-i', hostname, |
1457 | - '--keyring', keyring]) |
1458 | - |
1459 | - with open(done, 'w'): |
1460 | - pass |
1461 | - with open(upstart, 'w'): |
1462 | - pass |
1463 | - |
1464 | - subprocess.check_call(['start', 'ceph-mon-all-starter']) |
1465 | - except: |
1466 | - raise |
1467 | - finally: |
1468 | - os.unlink(keyring) |
1469 | - |
1470 | - |
1471 | def reformat_osd(): |
1472 | - if utils.config_get('osd-reformat'): |
1473 | + if config('osd-reformat'): |
1474 | return True |
1475 | else: |
1476 | return False |
1477 | |
1478 | |
1479 | -def osdize(dev): |
1480 | - if not os.path.exists(dev): |
1481 | - utils.juju_log('INFO', |
1482 | - 'Path {} does not exist - bailing'.format(dev)) |
1483 | - return |
1484 | - |
1485 | - if (ceph.is_osd_disk(dev) and not |
1486 | - reformat_osd()): |
1487 | - utils.juju_log('INFO', |
1488 | - 'Looks like {} is already an OSD, skipping.' |
1489 | - .format(dev)) |
1490 | - return |
1491 | - |
1492 | - if device_mounted(dev): |
1493 | - utils.juju_log('INFO', |
1494 | - 'Looks like {} is in use, skipping.'.format(dev)) |
1495 | - return |
1496 | - |
1497 | - cmd = ['ceph-disk-prepare'] |
1498 | - # Later versions of ceph support more options |
1499 | - if ceph.get_ceph_version() >= "0.48.3": |
1500 | - osd_format = utils.config_get('osd-format') |
1501 | - if osd_format: |
1502 | - cmd.append('--fs-type') |
1503 | - cmd.append(osd_format) |
1504 | - cmd.append(dev) |
1505 | - osd_journal = utils.config_get('osd-journal') |
1506 | - if (osd_journal and |
1507 | - os.path.exists(osd_journal)): |
1508 | - cmd.append(osd_journal) |
1509 | - else: |
1510 | - # Just provide the device - no other options |
1511 | - # for older versions of ceph |
1512 | - cmd.append(dev) |
1513 | - subprocess.call(cmd) |
1514 | - |
1515 | - |
1516 | -def device_mounted(dev): |
1517 | - return subprocess.call(['grep', '-wqs', dev + '1', '/proc/mounts']) == 0 |
1518 | - |
1519 | - |
1520 | -def filesystem_mounted(fs): |
1521 | - return subprocess.call(['grep', '-wqs', fs, '/proc/mounts']) == 0 |
1522 | - |
1523 | - |
1524 | +@hooks.hook('mon-relation-departed', |
1525 | + 'mon-relation-joined') |
1526 | def mon_relation(): |
1527 | - utils.juju_log('INFO', 'Begin mon-relation hook.') |
1528 | + log('Begin mon-relation hook.') |
1529 | emit_cephconf() |
1530 | |
1531 | - moncount = int(utils.config_get('monitor-count')) |
1532 | + moncount = int(config('monitor-count')) |
1533 | if len(get_mon_hosts()) >= moncount: |
1534 | - bootstrap_monitor_cluster() |
1535 | + ceph.bootstrap_monitor_cluster(config('monitor-secret')) |
1536 | ceph.wait_for_bootstrap() |
1537 | ceph.rescan_osd_devices() |
1538 | notify_osds() |
1539 | notify_radosgws() |
1540 | notify_client() |
1541 | else: |
1542 | - utils.juju_log('INFO', |
1543 | - 'Not enough mons ({}), punting.'.format( |
1544 | - len(get_mon_hosts()))) |
1545 | + log('Not enough mons ({}), punting.' |
1546 | + .format(len(get_mon_hosts()))) |
1547 | |
1548 | - utils.juju_log('INFO', 'End mon-relation hook.') |
1549 | + log('End mon-relation hook.') |
1550 | |
1551 | |
1552 | def notify_osds(): |
1553 | - utils.juju_log('INFO', 'Begin notify_osds.') |
1554 | - |
1555 | - for relid in utils.relation_ids('osd'): |
1556 | - utils.relation_set(fsid=utils.config_get('fsid'), |
1557 | - osd_bootstrap_key=ceph.get_osd_bootstrap_key(), |
1558 | - auth=utils.config_get('auth-supported'), |
1559 | - rid=relid) |
1560 | - |
1561 | - utils.juju_log('INFO', 'End notify_osds.') |
1562 | + log('Begin notify_osds.') |
1563 | + |
1564 | + for relid in relation_ids('osd'): |
1565 | + relation_set(relation_id=relid, |
1566 | + fsid=config('fsid'), |
1567 | + osd_bootstrap_key=ceph.get_osd_bootstrap_key(), |
1568 | + auth=config('auth-supported')) |
1569 | + |
1570 | + log('End notify_osds.') |
1571 | |
1572 | |
1573 | def notify_radosgws(): |
1574 | - utils.juju_log('INFO', 'Begin notify_radosgws.') |
1575 | - |
1576 | - for relid in utils.relation_ids('radosgw'): |
1577 | - utils.relation_set(radosgw_key=ceph.get_radosgw_key(), |
1578 | - auth=utils.config_get('auth-supported'), |
1579 | - rid=relid) |
1580 | - |
1581 | - utils.juju_log('INFO', 'End notify_radosgws.') |
1582 | + log('Begin notify_radosgws.') |
1583 | + |
1584 | + for relid in relation_ids('radosgw'): |
1585 | + relation_set(relation_id=relid, |
1586 | + radosgw_key=ceph.get_radosgw_key(), |
1587 | + auth=config('auth-supported')) |
1588 | + |
1589 | + log('End notify_radosgws.') |
1590 | |
1591 | |
1592 | def notify_client(): |
1593 | - utils.juju_log('INFO', 'Begin notify_client.') |
1594 | + log('Begin notify_client.') |
1595 | |
1596 | - for relid in utils.relation_ids('client'): |
1597 | - units = utils.relation_list(relid) |
1598 | + for relid in relation_ids('client'): |
1599 | + units = related_units(relid) |
1600 | if len(units) > 0: |
1601 | service_name = units[0].split('/')[0] |
1602 | - utils.relation_set(key=ceph.get_named_key(service_name), |
1603 | - auth=utils.config_get('auth-supported'), |
1604 | - rid=relid) |
1605 | - |
1606 | - utils.juju_log('INFO', 'End notify_client.') |
1607 | - |
1608 | - |
1609 | + relation_set(relation_id=relid, |
1610 | + key=ceph.get_named_key(service_name), |
1611 | + auth=config('auth-supported')) |
1612 | + |
1613 | + log('End notify_client.') |
1614 | + |
1615 | + |
1616 | +@hooks.hook('osd-relation-joined') |
1617 | def osd_relation(): |
1618 | - utils.juju_log('INFO', 'Begin osd-relation hook.') |
1619 | + log('Begin osd-relation hook.') |
1620 | |
1621 | if ceph.is_quorum(): |
1622 | - utils.juju_log('INFO', |
1623 | - 'mon cluster in quorum - providing fsid & keys') |
1624 | - utils.relation_set(fsid=utils.config_get('fsid'), |
1625 | - osd_bootstrap_key=ceph.get_osd_bootstrap_key(), |
1626 | - auth=utils.config_get('auth-supported')) |
1627 | + log('mon cluster in quorum - providing fsid & keys') |
1628 | + relation_set(fsid=config('fsid'), |
1629 | + osd_bootstrap_key=ceph.get_osd_bootstrap_key(), |
1630 | + auth=config('auth-supported')) |
1631 | else: |
1632 | - utils.juju_log('INFO', |
1633 | - 'mon cluster not in quorum - deferring fsid provision') |
1634 | - |
1635 | - utils.juju_log('INFO', 'End osd-relation hook.') |
1636 | - |
1637 | - |
1638 | + log('mon cluster not in quorum - deferring fsid provision') |
1639 | + |
1640 | + log('End osd-relation hook.') |
1641 | + |
1642 | + |
1643 | +@hooks.hook('radosgw-relation-joined') |
1644 | def radosgw_relation(): |
1645 | - utils.juju_log('INFO', 'Begin radosgw-relation hook.') |
1646 | - |
1647 | - utils.install('radosgw') # Install radosgw for admin tools |
1648 | - |
1649 | + log('Begin radosgw-relation hook.') |
1650 | + |
1651 | + # Install radosgw for admin tools |
1652 | + apt_install(packages=filter_installed_packages(['radosgw'])) |
1653 | if ceph.is_quorum(): |
1654 | - utils.juju_log('INFO', |
1655 | - 'mon cluster in quorum - \ |
1656 | - providing radosgw with keys') |
1657 | - utils.relation_set(radosgw_key=ceph.get_radosgw_key(), |
1658 | - auth=utils.config_get('auth-supported')) |
1659 | + log('mon cluster in quorum - providing radosgw with keys') |
1660 | + relation_set(radosgw_key=ceph.get_radosgw_key(), |
1661 | + auth=config('auth-supported')) |
1662 | else: |
1663 | - utils.juju_log('INFO', |
1664 | - 'mon cluster not in quorum - deferring key provision') |
1665 | - |
1666 | - utils.juju_log('INFO', 'End radosgw-relation hook.') |
1667 | - |
1668 | - |
1669 | + log('mon cluster not in quorum - deferring key provision') |
1670 | + |
1671 | + log('End radosgw-relation hook.') |
1672 | + |
1673 | + |
1674 | +@hooks.hook('client-relation-joined') |
1675 | def client_relation(): |
1676 | - utils.juju_log('INFO', 'Begin client-relation hook.') |
1677 | + log('Begin client-relation hook.') |
1678 | |
1679 | if ceph.is_quorum(): |
1680 | - utils.juju_log('INFO', |
1681 | - 'mon cluster in quorum - \ |
1682 | - providing client with keys') |
1683 | - service_name = os.environ['JUJU_REMOTE_UNIT'].split('/')[0] |
1684 | - utils.relation_set(key=ceph.get_named_key(service_name), |
1685 | - auth=utils.config_get('auth-supported')) |
1686 | + log('mon cluster in quorum - providing client with keys') |
1687 | + service_name = remote_unit().split('/')[0] |
1688 | + relation_set(key=ceph.get_named_key(service_name), |
1689 | + auth=config('auth-supported')) |
1690 | else: |
1691 | - utils.juju_log('INFO', |
1692 | - 'mon cluster not in quorum - deferring key provision') |
1693 | - |
1694 | - utils.juju_log('INFO', 'End client-relation hook.') |
1695 | - |
1696 | - |
1697 | + log('mon cluster not in quorum - deferring key provision') |
1698 | + |
1699 | + log('End client-relation hook.') |
1700 | + |
1701 | + |
1702 | +@hooks.hook('upgrade-charm') |
1703 | def upgrade_charm(): |
1704 | - utils.juju_log('INFO', 'Begin upgrade-charm hook.') |
1705 | + log('Begin upgrade-charm hook.') |
1706 | emit_cephconf() |
1707 | - utils.install('xfsprogs') |
1708 | + apt_install(packages=filter_installed_packages(ceph.PACKAGES), fatal=True) |
1709 | install_upstart_scripts() |
1710 | - update_monfs() |
1711 | - utils.juju_log('INFO', 'End upgrade-charm hook.') |
1712 | - |
1713 | - |
1714 | + ceph.update_monfs() |
1715 | + log('End upgrade-charm hook.') |
1716 | + |
1717 | + |
1718 | +@hooks.hook('start') |
1719 | def start(): |
1720 | # In case we're being redeployed to the same machines, try |
1721 | # to make sure everything is running as soon as possible. |
1722 | - subprocess.call(['start', 'ceph-mon-all-starter']) |
1723 | + service_restart('ceph-mon-all') |
1724 | ceph.rescan_osd_devices() |
1725 | |
1726 | |
1727 | -utils.do_hooks({ |
1728 | - 'config-changed': config_changed, |
1729 | - 'install': install, |
1730 | - 'mon-relation-departed': mon_relation, |
1731 | - 'mon-relation-joined': mon_relation, |
1732 | - 'osd-relation-joined': osd_relation, |
1733 | - 'radosgw-relation-joined': radosgw_relation, |
1734 | - 'client-relation-joined': client_relation, |
1735 | - 'start': start, |
1736 | - 'upgrade-charm': upgrade_charm, |
1737 | - }) |
1738 | - |
1739 | -sys.exit(0) |
1740 | +if __name__ == '__main__': |
1741 | + try: |
1742 | + hooks.execute(sys.argv) |
1743 | + except UnregisteredHookError as e: |
1744 | + log('Unknown hook {} - skipping.'.format(e)) |
1745 | |
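The `hooks.py` rewrite above replaces the old `utils.do_hooks` dictionary dispatch with charm-helpers' `Hooks` decorator registry, dispatching on the basename of the invoked script. A minimal sketch of that pattern (illustrative, not the real `charmhelpers.core.hookenv` implementation):

```python
# Decorator-based hook registry: functions register themselves for one or
# more hook names, and execute() dispatches on the script's basename.
import os


class UnregisteredHookError(Exception):
    pass


class Hooks(object):
    def __init__(self):
        self._hooks = {}

    def hook(self, *hook_names):
        def wrapper(f):
            for name in hook_names:
                self._hooks[name] = f
            return f
        return wrapper

    def execute(self, args):
        hook_name = os.path.basename(args[0])
        if hook_name not in self._hooks:
            raise UnregisteredHookError(hook_name)
        return self._hooks[hook_name]()


hooks = Hooks()


@hooks.hook('mon-relation-departed', 'mon-relation-joined')
def mon_relation():
    return 'mon'


# Juju invokes the charm via a symlink named after the hook, so argv[0]
# carries the hook name.
result = hooks.execute(['hooks/mon-relation-joined'])
```

This is why the charm's `__main__` block wraps `hooks.execute(sys.argv)` in a `try`/`except UnregisteredHookError`: unknown hooks are logged and skipped rather than failing the unit.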
1746 | === modified file 'hooks/utils.py' |
1747 | --- hooks/utils.py 2013-02-08 11:09:00 +0000 |
1748 | +++ hooks/utils.py 2013-07-08 08:34:31 +0000 |
1749 | @@ -7,97 +7,41 @@ |
1750 | # Paul Collins <paul.collins@canonical.com> |
1751 | # |
1752 | |
1753 | -import os |
1754 | -import subprocess |
1755 | import socket |
1756 | -import sys |
1757 | import re |
1758 | - |
1759 | - |
1760 | -def do_hooks(hooks): |
1761 | - hook = os.path.basename(sys.argv[0]) |
1762 | - |
1763 | - try: |
1764 | - hook_func = hooks[hook] |
1765 | - except KeyError: |
1766 | - juju_log('INFO', |
1767 | - "This charm doesn't know how to handle '{}'.".format(hook)) |
1768 | - else: |
1769 | - hook_func() |
1770 | - |
1771 | - |
1772 | -def install(*pkgs): |
1773 | - cmd = [ |
1774 | - 'apt-get', |
1775 | - '-y', |
1776 | - 'install' |
1777 | - ] |
1778 | - for pkg in pkgs: |
1779 | - cmd.append(pkg) |
1780 | - subprocess.check_call(cmd) |
1781 | +from charmhelpers.core.hookenv import ( |
1782 | + unit_get, |
1783 | + cached |
1784 | +) |
1785 | +from charmhelpers.core.host import ( |
1786 | + apt_install, |
1787 | + filter_installed_packages |
1788 | +) |
1789 | |
1790 | TEMPLATES_DIR = 'templates' |
1791 | |
1792 | try: |
1793 | import jinja2 |
1794 | except ImportError: |
1795 | - install('python-jinja2') |
1796 | + apt_install(filter_installed_packages(['python-jinja2']), |
1797 | + fatal=True) |
1798 | import jinja2 |
1799 | |
1800 | try: |
1801 | import dns.resolver |
1802 | except ImportError: |
1803 | - install('python-dnspython') |
1804 | + apt_install(filter_installed_packages(['python-dnspython']), |
1805 | + fatal=True) |
1806 | import dns.resolver |
1807 | |
1808 | |
1809 | def render_template(template_name, context, template_dir=TEMPLATES_DIR): |
1810 | templates = jinja2.Environment( |
1811 | - loader=jinja2.FileSystemLoader(template_dir) |
1812 | - ) |
1813 | + loader=jinja2.FileSystemLoader(template_dir)) |
1814 | template = templates.get_template(template_name) |
1815 | return template.render(context) |
1816 | |
1817 | |
1818 | -CLOUD_ARCHIVE = \ |
1819 | -""" # Ubuntu Cloud Archive |
1820 | -deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main |
1821 | -""" |
1822 | - |
1823 | - |
1824 | -def configure_source(): |
1825 | - source = str(config_get('source')) |
1826 | - if not source: |
1827 | - return |
1828 | - if source.startswith('ppa:'): |
1829 | - cmd = [ |
1830 | - 'add-apt-repository', |
1831 | - source |
1832 | - ] |
1833 | - subprocess.check_call(cmd) |
1834 | - if source.startswith('cloud:'): |
1835 | - install('ubuntu-cloud-keyring') |
1836 | - pocket = source.split(':')[1] |
1837 | - with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt: |
1838 | - apt.write(CLOUD_ARCHIVE.format(pocket)) |
1839 | - if source.startswith('http:'): |
1840 | - with open('/etc/apt/sources.list.d/ceph.list', 'w') as apt: |
1841 | - apt.write("deb " + source + "\n") |
1842 | - key = config_get('key') |
1843 | - if key: |
1844 | - cmd = [ |
1845 | - 'apt-key', |
1846 | - 'adv', '--keyserver keyserver.ubuntu.com', |
1847 | - '--recv-keys', key |
1848 | - ] |
1849 | - subprocess.check_call(cmd) |
1850 | - cmd = [ |
1851 | - 'apt-get', |
1852 | - 'update' |
1853 | - ] |
1854 | - subprocess.check_call(cmd) |
1855 | - |
1856 | - |
1857 | def enable_pocket(pocket): |
1858 | apt_sources = "/etc/apt/sources.list" |
1859 | with open(apt_sources, "r") as sources: |
1860 | @@ -109,105 +53,15 @@ |
1861 | else: |
1862 | sources.write(line) |
1863 | |
1864 | -# Protocols |
1865 | -TCP = 'TCP' |
1866 | -UDP = 'UDP' |
1867 | - |
1868 | - |
1869 | -def expose(port, protocol='TCP'): |
1870 | - cmd = [ |
1871 | - 'open-port', |
1872 | - '{}/{}'.format(port, protocol) |
1873 | - ] |
1874 | - subprocess.check_call(cmd) |
1875 | - |
1876 | - |
1877 | -def juju_log(severity, message): |
1878 | - cmd = [ |
1879 | - 'juju-log', |
1880 | - '--log-level', severity, |
1881 | - message |
1882 | - ] |
1883 | - subprocess.check_call(cmd) |
1884 | - |
1885 | - |
1886 | -def relation_ids(relation): |
1887 | - cmd = [ |
1888 | - 'relation-ids', |
1889 | - relation |
1890 | - ] |
1891 | - return subprocess.check_output(cmd).split() # IGNORE:E1103 |
1892 | - |
1893 | - |
1894 | -def relation_list(rid): |
1895 | - cmd = [ |
1896 | - 'relation-list', |
1897 | - '-r', rid, |
1898 | - ] |
1899 | - return subprocess.check_output(cmd).split() # IGNORE:E1103 |
1900 | - |
1901 | - |
1902 | -def relation_get(attribute, unit=None, rid=None): |
1903 | - cmd = [ |
1904 | - 'relation-get', |
1905 | - ] |
1906 | - if rid: |
1907 | - cmd.append('-r') |
1908 | - cmd.append(rid) |
1909 | - cmd.append(attribute) |
1910 | - if unit: |
1911 | - cmd.append(unit) |
1912 | - value = str(subprocess.check_output(cmd)).strip() |
1913 | - if value == "": |
1914 | - return None |
1915 | - else: |
1916 | - return value |
1917 | - |
1918 | - |
1919 | -def relation_set(**kwargs): |
1920 | - cmd = [ |
1921 | - 'relation-set' |
1922 | - ] |
1923 | - args = [] |
1924 | - for k, v in kwargs.items(): |
1925 | - if k == 'rid': |
1926 | - cmd.append('-r') |
1927 | - cmd.append(v) |
1928 | - else: |
1929 | - args.append('{}={}'.format(k, v)) |
1930 | - cmd += args |
1931 | - subprocess.check_call(cmd) |
1932 | - |
1933 | - |
1934 | -def unit_get(attribute): |
1935 | - cmd = [ |
1936 | - 'unit-get', |
1937 | - attribute |
1938 | - ] |
1939 | - value = str(subprocess.check_output(cmd)).strip() |
1940 | - if value == "": |
1941 | - return None |
1942 | - else: |
1943 | - return value |
1944 | - |
1945 | - |
1946 | -def config_get(attribute): |
1947 | - cmd = [ |
1948 | - 'config-get', |
1949 | - attribute |
1950 | - ] |
1951 | - value = str(subprocess.check_output(cmd)).strip() |
1952 | - if value == "": |
1953 | - return None |
1954 | - else: |
1955 | - return value |
1956 | - |
1957 | - |
1958 | + |
1959 | +@cached |
1960 | def get_unit_hostname(): |
1961 | return socket.gethostname() |
1962 | |
1963 | |
1964 | -def get_host_ip(hostname=unit_get('private-address')): |
1965 | +@cached |
1966 | +def get_host_ip(hostname=None): |
1967 | + hostname = hostname or unit_get('private-address') |
1968 | try: |
1969 | # Test to see if already an IPv4 address |
1970 | socket.inet_aton(hostname) |
1971 | |
1972 | === modified file 'metadata.yaml' |
1973 | --- metadata.yaml 2013-04-22 19:49:09 +0000 |
1974 | +++ metadata.yaml 2013-07-08 08:34:31 +0000 |
1975 | @@ -1,7 +1,6 @@ |
1976 | name: ceph |
1977 | summary: Highly scalable distributed storage |
1978 | -maintainer: James Page <james.page@ubuntu.com>, |
1979 | - Paul Collins <paul.collins@canonical.com> |
1980 | +maintainer: James Page <james.page@ubuntu.com> |
1981 | description: | |
1982 | Ceph is a distributed storage and network file system designed to provide |
1983 | excellent performance, reliability, and scalability. |
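One subtle fix in the utils.py hunk above is worth calling out: the old `get_host_ip(hostname=unit_get('private-address'))` evaluated `unit_get` once at module import time, while the replacement defers the lookup to call time and memoizes the result with `@cached`. A minimal standalone sketch of that pattern — where `cached` is a simplified stand-in for the charm-helpers decorator and `lookup_default` is a hypothetical placeholder for `unit_get('private-address')`:

```python
import functools
import socket

def cached(func):
    # Simplified stand-in for charmhelpers.core.hookenv.cached:
    # memoize results keyed by the positional arguments.
    cache = {}
    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrapper

def lookup_default():
    # Hypothetical stand-in for unit_get('private-address'); in the
    # charm this shells out to a hook tool, so it must not run at
    # module import time.
    return '10.0.0.1'

@cached
def get_host_ip(hostname=None):
    # Defaulting to None and resolving inside the body means the
    # lookup happens per call, not once when the module is imported.
    hostname = hostname or lookup_default()
    try:
        # Already an IPv4 literal? Return it unchanged.
        socket.inet_aton(hostname)
        return hostname
    except socket.error:
        return socket.gethostbyname(hostname)
```

The default-argument change matters because hook tools such as `unit-get` are only available while a hook is executing; evaluating one as a function default runs it whenever the module is imported, including outside hook context.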
Similarly to ceph-osd, two requests for the future:
- consider refactoring $CHARM_DIR/hooks/hooks.py into $CHARM_DIR/lib/ceph_tools with accompanying unit-type $CHARM_DIR/lib/ceph_tools/tests where possible
- please think up some decent integration tests and add them into $CHARM_DIR/tests
Thanks!
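As a hypothetical starting point for the unit-type tests requested above, pure helpers such as `get_host_ip` can be exercised without a Juju environment by stubbing the resolver. The helper below is a local copy standing in for the version in hooks/utils.py, and the test layout is only a sketch:

```python
import socket
import unittest
from unittest import mock

def get_host_ip(hostname):
    # Local copy of the helper under test, standing in for the
    # charm's hooks/utils.py implementation.
    try:
        socket.inet_aton(hostname)
        return hostname
    except socket.error:
        return socket.gethostbyname(hostname)

class GetHostIPTest(unittest.TestCase):
    def test_ipv4_literal_is_returned_unchanged(self):
        # An address that already parses as IPv4 needs no DNS lookup.
        self.assertEqual(get_host_ip('192.168.0.10'), '192.168.0.10')

    def test_hostname_is_resolved(self):
        # Patch the resolver so the test needs no network access.
        with mock.patch.object(socket, 'gethostbyname',
                               return_value='10.1.2.3'):
            self.assertEqual(get_host_ip('some-host'), '10.1.2.3')
```

Tests in this style could live under $CHARM_DIR/lib/ceph_tools/tests once the refactoring suggested above lands.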