Merge lp:~openstack-charmers/charms/precise/cinder/python-redux into lp:charms/raring/cinder

Proposed by Adam Gandelman
Status: Merged
Merge reported by: Adam Gandelman
Merged at revision: not available
Proposed branch: lp:~openstack-charmers/charms/precise/cinder/python-redux
Merge into: lp:charms/raring/cinder
Diff against target: 6238 lines (+5925/-0) (has conflicts)
47 files modified
.coveragerc (+7/-0)
.ls (+25/-0)
.project (+17/-0)
.pydevproject (+9/-0)
Makefile (+14/-0)
README.md (+126/-0)
charm-helpers.yaml (+12/-0)
config.yaml (+117/-0)
copyright (+17/-0)
hooks/charmhelpers/contrib/hahelpers/apache.py (+58/-0)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+183/-0)
hooks/charmhelpers/contrib/openstack/context.py (+522/-0)
hooks/charmhelpers/contrib/openstack/neutron.py (+117/-0)
hooks/charmhelpers/contrib/openstack/templates/__init__.py (+2/-0)
hooks/charmhelpers/contrib/openstack/templates/ceph.conf (+11/-0)
hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg (+37/-0)
hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend (+23/-0)
hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf (+23/-0)
hooks/charmhelpers/contrib/openstack/templating.py (+280/-0)
hooks/charmhelpers/contrib/openstack/utils.py (+365/-0)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+359/-0)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+62/-0)
hooks/charmhelpers/contrib/storage/linux/lvm.py (+88/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+25/-0)
hooks/charmhelpers/core/hookenv.py (+340/-0)
hooks/charmhelpers/core/host.py (+241/-0)
hooks/charmhelpers/fetch/__init__.py (+209/-0)
hooks/charmhelpers/fetch/archiveurl.py (+48/-0)
hooks/charmhelpers/fetch/bzrurl.py (+49/-0)
hooks/charmhelpers/payload/__init__.py (+1/-0)
hooks/charmhelpers/payload/execd.py (+50/-0)
hooks/cinder_contexts.py (+76/-0)
hooks/cinder_hooks.py (+276/-0)
hooks/cinder_utils.py (+361/-0)
icon.svg (+769/-0)
metadata.yaml (+27/-0)
revision (+1/-0)
setup.cfg (+6/-0)
templates/cinder.conf (+42/-0)
templates/folsom/api-paste.ini (+60/-0)
templates/grizzly/api-paste.ini (+71/-0)
templates/havana/api-paste.ini (+71/-0)
unit_tests/__init__.py (+2/-0)
unit_tests/test_cinder_hooks.py (+286/-0)
unit_tests/test_cinder_utils.py (+223/-0)
unit_tests/test_cluster_hooks.py (+107/-0)
unit_tests/test_utils.py (+110/-0)
Conflict adding file config.yaml.  Moved existing file to config.yaml.moved.
Conflict adding file copyright.  Moved existing file to copyright.moved.
Conflict adding file hooks.  Moved existing file to hooks.moved.
Conflict adding file icon.svg.  Moved existing file to icon.svg.moved.
Conflict adding file metadata.yaml.  Moved existing file to metadata.yaml.moved.
Conflict adding file revision.  Moved existing file to revision.moved.
To merge this branch: bzr merge lp:~openstack-charmers/charms/precise/cinder/python-redux
Reviewers:
  James Page: Approve
  Adam Gandelman (community): Needs Resubmitting
Review via email: mp+191081@code.launchpad.net

Description of the change

Update of all Havana / Saucy / python-redux work:

* Full Python rewrite using the new OpenStack charm-helpers.

* Test coverage

* Havana support
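
For anyone wanting to verify the branch locally, the included Makefile (see the
preview diff below) provides lint, test and sync targets. Assuming flake8,
nose/coverage and charm-tools are installed, a typical check might look like:

    bzr branch lp:~openstack-charmers/charms/precise/cinder/python-redux cinder
    cd cinder
    make lint   # flake8 over hooks/ and unit_tests/, plus charm proof
    make test   # nosetests --with-coverage over unit_tests/
    make sync   # re-sync charm-helpers from lp:charm-helpers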

Adam Gandelman (gandelman-a) wrote:

Tests currently failing

review: Needs Fixing
Adam Gandelman (gandelman-a) wrote:

Tests fixed

review: Needs Resubmitting
James Page (james-page) wrote:

+1'ed but we need to bzr push --overwrite as for some reason we lost common ancestry for this work.

review: Approve
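
The overwrite push mentioned above would take roughly this form, assuming push
access to the target branch:

    bzr push --overwrite lp:charms/raring/cinder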

Preview Diff

1=== added file '.coveragerc'
2--- .coveragerc 1970-01-01 00:00:00 +0000
3+++ .coveragerc 2013-10-16 18:49:46 +0000
4@@ -0,0 +1,7 @@
5+[report]
6+# Regexes for lines to exclude from consideration
7+exclude_lines =
8+ if __name__ == .__main__.:
9+include=
10+ hooks/cinder_*
11+
12
13=== added file '.ls'
14--- .ls 1970-01-01 00:00:00 +0000
15+++ .ls 2013-10-16 18:49:46 +0000
16@@ -0,0 +1,25 @@
17+name: cinder
18+summary: Cinder OpenStack storage service
19+maintainer: Adam Gandelman <adamg@canonical.com>
20+description: |
21+ Cinder is a storage service for the OpenStack project
22+provides:
23+ cinder-volume-service:
24+ interface: cinder
25+requires:
26+ shared-db:
27+ interface: mysql-shared
28+ amqp:
29+ interface: rabbitmq
30+ identity-service:
31+ interface: keystone
32+ ceph:
33+ interface: ceph-client
34+ image-service:
35+ interface: glance
36+ ha:
37+ interface: hacluster
38+ scope: container
39+peers:
40+ cluster:
41+ interface: cinder-ha
42
43=== added file '.project'
44--- .project 1970-01-01 00:00:00 +0000
45+++ .project 2013-10-16 18:49:46 +0000
46@@ -0,0 +1,17 @@
47+<?xml version="1.0" encoding="UTF-8"?>
48+<projectDescription>
49+ <name>cinder</name>
50+ <comment></comment>
51+ <projects>
52+ </projects>
53+ <buildSpec>
54+ <buildCommand>
55+ <name>org.python.pydev.PyDevBuilder</name>
56+ <arguments>
57+ </arguments>
58+ </buildCommand>
59+ </buildSpec>
60+ <natures>
61+ <nature>org.python.pydev.pythonNature</nature>
62+ </natures>
63+</projectDescription>
64
65=== added file '.pydevproject'
66--- .pydevproject 1970-01-01 00:00:00 +0000
67+++ .pydevproject 2013-10-16 18:49:46 +0000
68@@ -0,0 +1,9 @@
69+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
70+<?eclipse-pydev version="1.0"?><pydev_project>
71+<pydev_property name="org.python.pydev.PYTHON_PROJECT_VERSION">python 2.7</pydev_property>
72+<pydev_property name="org.python.pydev.PYTHON_PROJECT_INTERPRETER">Default</pydev_property>
73+<pydev_pathproperty name="org.python.pydev.PROJECT_SOURCE_PATH">
74+<path>/cinder/hooks</path>
75+<path>/cinder/unit_tests</path>
76+</pydev_pathproperty>
77+</pydev_project>
78
79=== added file 'Makefile'
80--- Makefile 1970-01-01 00:00:00 +0000
81+++ Makefile 2013-10-16 18:49:46 +0000
82@@ -0,0 +1,14 @@
83+#!/usr/bin/make
84+PYTHON := /usr/bin/env python
85+
86+lint:
87+ @flake8 --exclude hooks/charmhelpers hooks
88+ @flake8 --exclude hooks/charmhelpers unit_tests
89+ @charm proof
90+
91+test:
92+ @echo Starting tests...
93+ @$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests
94+
95+sync:
96+ @charm-helper-sync -c charm-helpers.yaml
97
98=== added file 'README.md'
99--- README.md 1970-01-01 00:00:00 +0000
100+++ README.md 2013-10-16 18:49:46 +0000
101@@ -0,0 +1,126 @@
102+Overview
103+--------
104+
105+This charm provides the Cinder volume service for OpenStack. It is intended to
106+be used alongside the other OpenStack components, starting with the Folsom
107+release.
108+
109+Cinder is made up of 3 separate services: an API service, a scheduler and a
110+volume service. This charm allows them to be deployed in different
111+combinations, depending on user preference and requirements.
112+
113+This charm was developed to support deploying Folsom on both
114+Ubuntu Quantal and Ubuntu Precise. Since Cinder is only available for
115+Ubuntu 12.04 via the Ubuntu Cloud Archive, deploying this charm to a
116+Precise machine will by default install Cinder and its dependencies from
117+the Cloud Archive.
118+
119+Usage
120+-----
121+
122+Cinder may be deployed in a number of ways. This charm focuses on 3 main
123+configurations. All require the existence of the other core OpenStack
124+services deployed via Juju charms, specifically: mysql, rabbitmq-server,
125+keystone and nova-cloud-controller. The following assumes these services
126+have already been deployed.
127+
128+Basic, all-in-one using local storage and iSCSI
129+===============================================
130+
131+The API server, scheduler and volume service are all deployed into the same
132+unit. Local storage will be initialized as an LVM physical device, and a volume
133+group initialized. Instance volumes will be created locally as logical volumes
134+and exported to instances via iSCSI. This is ideal for small-scale deployments
135+or testing:
136+
137+ cat >cinder.cfg <<END
138+ cinder:
139+ block-device: sdc
140+ overwrite: true
141+ END
142+ juju deploy --config=cinder.cfg cinder
143+ juju add-relation cinder keystone
144+ juju add-relation cinder mysql
145+ juju add-relation cinder rabbitmq-server
146+ juju add-relation cinder nova-cloud-controller
147+
148+Separate volume units for scale out, using local storage and iSCSI
149+==================================================================
150+
151+Separating the volume service from the API service allows the storage pool
152+to easily scale without the added complexity that accompanies load-balancing
153+the API server. When we've exhausted local storage on a volume server, we can
154+simply add-unit to expand our capacity. Future requests to allocate volumes
155+will be distributed across the pool of volume servers according to the
156+availability of storage space.
157+
158+ cat >cinder.cfg <<END
159+ cinder-api:
160+ enabled-services: api, scheduler
161+ cinder-volume:
162+ enabled-services: volume
163+ block-device: sdc
164+ overwrite: true
165+ END
166+ juju deploy --config=cinder.cfg cinder cinder-api
167+ juju deploy --config=cinder.cfg cinder cinder-volume
168+ juju add-relation cinder-api mysql
169+ juju add-relation cinder-api rabbitmq-server
170+ juju add-relation cinder-api keystone
171+ juju add-relation cinder-api nova-cloud-controller
172+ juju add-relation cinder-volume mysql
173+ juju add-relation cinder-volume rabbitmq-server
174+
175+ # When more storage is needed, simply add more volume servers.
176+ juju add-unit cinder-volume
177+
178+All-in-one using Ceph-backed RBD volumes
179+========================================
180+
181+All 3 services can be deployed to the same unit, but instead of relying
182+on local storage to back volumes, an external Ceph cluster is used. This
183+allows scalability and redundancy needs to be satisfied and Cinder's RBD
184+driver used to create, export and connect volumes to instances. This assumes
185+a functioning Ceph cluster has already been deployed using the official Ceph
186+charm and a relation exists between the Ceph service and the nova-compute
187+service.
188+
189+ cat >cinder.cfg <<END
190+ cinder:
191+ block-device: None
192+ END
193+ juju deploy --config=cinder.cfg cinder
194+ juju add-relation cinder ceph
195+ juju add-relation cinder keystone
196+ juju add-relation cinder mysql
197+ juju add-relation cinder rabbitmq-server
198+ juju add-relation cinder nova-cloud-controller
199+
200+
201+Configuration
202+-------------
203+
204+The default value for most config options should work for most deployments.
205+
206+Users should be aware of the following options in particular:
207+
208+openstack-origin: Allows Cinder to be installed from a specific apt repository.
209+ See config.yaml for a list of supported sources.
210+
211+block-device: When using local storage, a block device should be specified to
212+ back an LVM volume group. It's important this device exists on
213+ all nodes that the service may be deployed to.
214+
215+overwrite: Whether or not to wipe local storage of data that may prevent
216+ it from being initialized as an LVM physical device. This includes
217+ filesystems and partition tables. *CAUTION*
218+
219+enabled-services: Can be used to split cinder services between separate
220+ service units (see previous section).
221+
222+Contact Information
223+-------------------
224+
225+Author: Adam Gandelman <adamg@canonical.com>
226+Report bugs at: http://bugs.launchpad.net/charms
227+Location: http://jujucharms.com
228
229=== added file 'charm-helpers.yaml'
230--- charm-helpers.yaml 1970-01-01 00:00:00 +0000
231+++ charm-helpers.yaml 2013-10-16 18:49:46 +0000
232@@ -0,0 +1,12 @@
233+branch: lp:charm-helpers
234+destination: hooks/charmhelpers
235+include:
236+ - core
237+ - fetch
238+ - contrib.openstack|inc=*
239+ - contrib.storage
240+ - contrib.hahelpers:
241+ - apache
242+ - cluster
243+ - fetch
244+ - payload.execd
245
246=== added file 'config.yaml'
247--- config.yaml 1970-01-01 00:00:00 +0000
248+++ config.yaml 2013-10-16 18:49:46 +0000
249@@ -0,0 +1,117 @@
250+options:
251+ openstack-origin:
252+ default: distro
253+ type: string
254+ description: |
255+ Repository from which to install. May be one of the following:
256+ distro (default), ppa:somecustom/ppa, a deb url sources entry,
257+ or a supported Cloud Archive release pocket.
258+
259+ Supported Cloud Archive sources include: cloud:precise-folsom,
260+ cloud:precise-folsom/updates, cloud:precise-folsom/staging,
261+ cloud:precise-folsom/proposed.
262+
263+ When deploying to Precise, the default distro option will use
264+ the cloud:precise-folsom/updates repository instead, since Cinder
265+ was not available in the Ubuntu archive for Precise and is only
266+ available via the Ubuntu Cloud Archive.
267+ enabled-services:
268+ default: all
269+ type: string
270+ description: |
271+ If splitting cinder services between units, define which services
272+ to install and configure.
273+ block-device:
274+ default: sdb
275+ type: string
276+ description: |
277+ The *available* block device on which to create LVM volume group.
278+ May also be set to None for deployments that will not need local
279+ storage (eg, Ceph/RBD-backed volumes).
280+ ceph-osd-replication-count:
281+ default: 2
282+ type: int
283+ description: |
284+ This value dictates the number of replicas ceph must make of any
285+ object it stores within the cinder rbd pool. Of course, this only
286+ applies if using Ceph as a backend store. Note that once the cinder
287+ rbd pool has been created, changing this value will not have any
288+ effect (although it can be changed in ceph by manually configuring
289+ your ceph cluster).
290+ volume-group:
291+ default: cinder-volumes
292+ type: string
293+ description: Name of volume group to create and store Cinder volumes.
294+ overwrite:
295+ default: "false"
296+ type: string
297+ description: |
298+ If true, the charm will attempt to overwrite block devices containing
299+ previous filesystems or LVM, assuming it is not in use.
300+ database-user:
301+ default: cinder
302+ type: string
303+ description: Username to request database access.
304+ database:
305+ default: cinder
306+ type: string
307+ description: Database to request access.
308+ rabbit-user:
309+ default: cinder
310+ type: string
311+ description: Username to request access on rabbitmq-server.
312+ rabbit-vhost:
313+ default: cinder
314+ type: string
315+ description: RabbitMQ virtual host to request access on rabbitmq-server.
316+ api-listening-port:
317+ default: 8776
318+ type: int
319+ description: OpenStack Volume API listening port.
320+ region:
321+ default: RegionOne
322+ type: string
323+ description: OpenStack Region
324+ glance-api-version:
325+ default: 1
326+ type: int
327+ description: |
328+ Newer storage drivers may require the v2 Glance API to perform certain
329+ actions, e.g. the RBD driver requires this to support COW
330+ cloning of images. This option will default to v1 for backwards
331+ compatibility with older glance services.
332+ # HA configuration settings
333+ vip:
334+ type: string
335+ description: "Virtual IP to use to front cinder API in ha configuration"
336+ vip_iface:
337+ type: string
338+ default: eth0
339+ description: "Network Interface where to place the Virtual IP"
340+ vip_cidr:
341+ type: int
342+ default: 24
343+ description: "Netmask that will be used for the Virtual IP"
344+ ha-bindiface:
345+ type: string
346+ default: eth0
347+ description: |
348+ Default network interface on which the HA cluster will bind for communication
349+ with the other members of the HA Cluster.
350+ ha-mcastport:
351+ type: int
352+ default: 5401
353+ description: |
354+ Default multicast port number that will be used to communicate between
355+ HA Cluster nodes.
356+ # Per-service HTTPS configuration.
357+ ssl_cert:
358+ type: string
359+ description: |
360+ SSL certificate to install and use for API ports. Setting this value
361+ and ssl_key will enable reverse proxying, point Cinder's entry in the
362+ Keystone catalog to use https, and override any certificate and key
363+ issued by Keystone (if it is configured to do so).
364+ ssl_key:
365+ type: string
366+ description: SSL key to use with certificate specified as ssl_cert.
367
368=== renamed file 'config.yaml' => 'config.yaml.moved'
369=== added file 'copyright'
370--- copyright 1970-01-01 00:00:00 +0000
371+++ copyright 2013-10-16 18:49:46 +0000
372@@ -0,0 +1,17 @@
373+Format: http://dep.debian.net/deps/dep5/
374+
375+Files: *
376+Copyright: Copyright 2012, Canonical Ltd., All Rights Reserved.
377+License: GPL-3
378+ This program is free software: you can redistribute it and/or modify
379+ it under the terms of the GNU General Public License as published by
380+ the Free Software Foundation, either version 3 of the License, or
381+ (at your option) any later version.
382+ .
383+ This program is distributed in the hope that it will be useful,
384+ but WITHOUT ANY WARRANTY; without even the implied warranty of
385+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
386+ GNU General Public License for more details.
387+ .
388+ You should have received a copy of the GNU General Public License
389+ along with this program. If not, see <http://www.gnu.org/licenses/>.
390
391=== renamed file 'copyright' => 'copyright.moved'
392=== added directory 'hooks'
393=== renamed directory 'hooks' => 'hooks.moved'
394=== added file 'hooks/__init__.py'
395=== added symlink 'hooks/amqp-relation-broken'
396=== target is u'cinder_hooks.py'
397=== added symlink 'hooks/amqp-relation-changed'
398=== target is u'cinder_hooks.py'
399=== added symlink 'hooks/amqp-relation-joined'
400=== target is u'cinder_hooks.py'
401=== added symlink 'hooks/ceph-relation-broken'
402=== target is u'cinder_hooks.py'
403=== added symlink 'hooks/ceph-relation-changed'
404=== target is u'cinder_hooks.py'
405=== added symlink 'hooks/ceph-relation-joined'
406=== target is u'cinder_hooks.py'
407=== added directory 'hooks/charmhelpers'
408=== added file 'hooks/charmhelpers/__init__.py'
409=== added directory 'hooks/charmhelpers/contrib'
410=== added file 'hooks/charmhelpers/contrib/__init__.py'
411=== added directory 'hooks/charmhelpers/contrib/hahelpers'
412=== added file 'hooks/charmhelpers/contrib/hahelpers/__init__.py'
413=== added file 'hooks/charmhelpers/contrib/hahelpers/apache.py'
414--- hooks/charmhelpers/contrib/hahelpers/apache.py 1970-01-01 00:00:00 +0000
415+++ hooks/charmhelpers/contrib/hahelpers/apache.py 2013-10-16 18:49:46 +0000
416@@ -0,0 +1,58 @@
417+#
418+# Copyright 2012 Canonical Ltd.
419+#
420+# This file is sourced from lp:openstack-charm-helpers
421+#
422+# Authors:
423+# James Page <james.page@ubuntu.com>
424+# Adam Gandelman <adamg@ubuntu.com>
425+#
426+
427+import subprocess
428+
429+from charmhelpers.core.hookenv import (
430+ config as config_get,
431+ relation_get,
432+ relation_ids,
433+ related_units as relation_list,
434+ log,
435+ INFO,
436+)
437+
438+
439+def get_cert():
440+ cert = config_get('ssl_cert')
441+ key = config_get('ssl_key')
442+ if not (cert and key):
443+ log("Inspecting identity-service relations for SSL certificate.",
444+ level=INFO)
445+ cert = key = None
446+ for r_id in relation_ids('identity-service'):
447+ for unit in relation_list(r_id):
448+ if not cert:
449+ cert = relation_get('ssl_cert',
450+ rid=r_id, unit=unit)
451+ if not key:
452+ key = relation_get('ssl_key',
453+ rid=r_id, unit=unit)
454+ return (cert, key)
455+
456+
457+def get_ca_cert():
458+ ca_cert = None
459+ log("Inspecting identity-service relations for CA SSL certificate.",
460+ level=INFO)
461+ for r_id in relation_ids('identity-service'):
462+ for unit in relation_list(r_id):
463+ if not ca_cert:
464+ ca_cert = relation_get('ca_cert',
465+ rid=r_id, unit=unit)
466+ return ca_cert
467+
468+
469+def install_ca_cert(ca_cert):
470+ if ca_cert:
471+ with open('/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt',
472+ 'w') as crt:
473+ crt.write(ca_cert)
474+ subprocess.check_call(['update-ca-certificates', '--fresh'])
475
476=== added file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
477--- hooks/charmhelpers/contrib/hahelpers/cluster.py 1970-01-01 00:00:00 +0000
478+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2013-10-16 18:49:46 +0000
479@@ -0,0 +1,183 @@
480+#
481+# Copyright 2012 Canonical Ltd.
482+#
483+# Authors:
484+# James Page <james.page@ubuntu.com>
485+# Adam Gandelman <adamg@ubuntu.com>
486+#
487+
488+import subprocess
489+import os
490+
491+from socket import gethostname as get_unit_hostname
492+
493+from charmhelpers.core.hookenv import (
494+ log,
495+ relation_ids,
496+ related_units as relation_list,
497+ relation_get,
498+ config as config_get,
499+ INFO,
500+ ERROR,
501+ unit_get,
502+)
503+
504+
505+class HAIncompleteConfig(Exception):
506+ pass
507+
508+
509+def is_clustered():
510+ for r_id in (relation_ids('ha') or []):
511+ for unit in (relation_list(r_id) or []):
512+ clustered = relation_get('clustered',
513+ rid=r_id,
514+ unit=unit)
515+ if clustered:
516+ return True
517+ return False
518+
519+
520+def is_leader(resource):
521+ cmd = [
522+ "crm", "resource",
523+ "show", resource
524+ ]
525+ try:
526+ status = subprocess.check_output(cmd)
527+ except subprocess.CalledProcessError:
528+ return False
529+ else:
530+ if get_unit_hostname() in status:
531+ return True
532+ else:
533+ return False
534+
535+
536+def peer_units():
537+ peers = []
538+ for r_id in (relation_ids('cluster') or []):
539+ for unit in (relation_list(r_id) or []):
540+ peers.append(unit)
541+ return peers
542+
543+
544+def oldest_peer(peers):
545+ local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1])
546+ for peer in peers:
547+ remote_unit_no = int(peer.split('/')[1])
548+ if remote_unit_no < local_unit_no:
549+ return False
550+ return True
551+
552+
553+def eligible_leader(resource):
554+ if is_clustered():
555+ if not is_leader(resource):
556+ log('Deferring action to CRM leader.', level=INFO)
557+ return False
558+ else:
559+ peers = peer_units()
560+ if peers and not oldest_peer(peers):
561+ log('Deferring action to oldest service unit.', level=INFO)
562+ return False
563+ return True
564+
565+
566+def https():
567+ '''
568+ Determines whether enough data has been provided in configuration
569+ or relation data to configure HTTPS
570+ .
571+ returns: boolean
572+ '''
573+ if config_get('use-https') == "yes":
574+ return True
575+ if config_get('ssl_cert') and config_get('ssl_key'):
576+ return True
577+ for r_id in relation_ids('identity-service'):
578+ for unit in relation_list(r_id):
579+ rel_state = [
580+ relation_get('https_keystone', rid=r_id, unit=unit),
581+ relation_get('ssl_cert', rid=r_id, unit=unit),
582+ relation_get('ssl_key', rid=r_id, unit=unit),
583+ relation_get('ca_cert', rid=r_id, unit=unit),
584+ ]
585+ # NOTE: works around (LP: #1203241)
586+ if (None not in rel_state) and ('' not in rel_state):
587+ return True
588+ return False
589+
590+
591+def determine_api_port(public_port):
592+ '''
593+ Determine correct API server listening port based on
594+ existence of HTTPS reverse proxy and/or haproxy.
595+
596+ public_port: int: standard public port for given service
597+
598+ returns: int: the correct listening port for the API service
599+ '''
600+ i = 0
601+ if len(peer_units()) > 0 or is_clustered():
602+ i += 1
603+ if https():
604+ i += 1
605+ return public_port - (i * 10)
606+
607+
608+def determine_haproxy_port(public_port):
609+ '''
610+ Description: Determine correct proxy listening port based on public port and
611+ existence of HTTPS reverse proxy.
612+
613+ public_port: int: standard public port for given service
614+
615+ returns: int: the correct listening port for the HAProxy service
616+ '''
617+ i = 0
618+ if https():
619+ i += 1
620+ return public_port - (i * 10)
621+
622+
623+def get_hacluster_config():
624+ '''
625+ Obtains all relevant configuration from charm configuration required
626+ for initiating a relation to hacluster:
627+
628+ ha-bindiface, ha-mcastport, vip, vip_iface, vip_cidr
629+
630+ returns: dict: A dict containing settings keyed by setting name.
631+ raises: HAIncompleteConfig if settings are missing.
632+ '''
633+ settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'vip_iface', 'vip_cidr']
634+ conf = {}
635+ for setting in settings:
636+ conf[setting] = config_get(setting)
637+ missing = []
638+ [missing.append(s) for s, v in conf.iteritems() if v is None]
639+ if missing:
640+ log('Insufficient config data to configure hacluster.', level=ERROR)
641+ raise HAIncompleteConfig
642+ return conf
643+
644+
645+def canonical_url(configs, vip_setting='vip'):
646+ '''
647+ Returns the correct HTTP URL to this host given the state of HTTPS
648+ configuration and hacluster.
649+
650+ :configs : OSConfigRenderer: A config templating object to inspect for
651+ a complete https context.
652+ :vip_setting: str: Setting in charm config that specifies
653+ VIP address.
654+ '''
655+ scheme = 'http'
656+ if 'https' in configs.complete_contexts():
657+ scheme = 'https'
658+ if is_clustered():
659+ addr = config_get(vip_setting)
660+ else:
661+ addr = unit_get('private-address')
662+ return '%s://%s' % (scheme, addr)
663
664=== added directory 'hooks/charmhelpers/contrib/openstack'
665=== added file 'hooks/charmhelpers/contrib/openstack/__init__.py'
666=== added file 'hooks/charmhelpers/contrib/openstack/context.py'
667--- hooks/charmhelpers/contrib/openstack/context.py 1970-01-01 00:00:00 +0000
668+++ hooks/charmhelpers/contrib/openstack/context.py 2013-10-16 18:49:46 +0000
669@@ -0,0 +1,522 @@
670+import json
671+import os
672+
673+from base64 import b64decode
674+
675+from subprocess import (
676+ check_call
677+)
678+
679+
680+from charmhelpers.fetch import (
681+ apt_install,
682+ filter_installed_packages,
683+)
684+
685+from charmhelpers.core.hookenv import (
686+ config,
687+ local_unit,
688+ log,
689+ relation_get,
690+ relation_ids,
691+ related_units,
692+ unit_get,
693+ unit_private_ip,
694+ ERROR,
695+ WARNING,
696+)
697+
698+from charmhelpers.contrib.hahelpers.cluster import (
699+ determine_api_port,
700+ determine_haproxy_port,
701+ https,
702+ is_clustered,
703+ peer_units,
704+)
705+
706+from charmhelpers.contrib.hahelpers.apache import (
707+ get_cert,
708+ get_ca_cert,
709+)
710+
711+from charmhelpers.contrib.openstack.neutron import (
712+ neutron_plugin_attribute,
713+)
714+
715+CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
716+
717+
718+class OSContextError(Exception):
719+ pass
720+
721+
722+def ensure_packages(packages):
723+ '''Install but do not upgrade required plugin packages'''
724+ required = filter_installed_packages(packages)
725+ if required:
726+ apt_install(required, fatal=True)
727+
728+
729+def context_complete(ctxt):
730+ _missing = []
731+ for k, v in ctxt.iteritems():
732+ if v is None or v == '':
733+ _missing.append(k)
734+ if _missing:
735+ log('Missing required data: %s' % ' '.join(_missing), level='INFO')
736+ return False
737+ return True
738+
739+
740+class OSContextGenerator(object):
741+ interfaces = []
742+
743+ def __call__(self):
744+ raise NotImplementedError
745+
746+
747+class SharedDBContext(OSContextGenerator):
748+ interfaces = ['shared-db']
749+
750+ def __init__(self, database=None, user=None, relation_prefix=None):
751+ '''
752+ Allows inspecting relation for settings prefixed with relation_prefix.
753+ This is useful for parsing access for multiple databases returned via
754+ the shared-db interface (eg, nova_password, quantum_password)
755+ '''
756+ self.relation_prefix = relation_prefix
757+ self.database = database
758+ self.user = user
759+
760+ def __call__(self):
761+ self.database = self.database or config('database')
762+ self.user = self.user or config('database-user')
763+ if None in [self.database, self.user]:
764+ log('Could not generate shared_db context. '
765+ 'Missing required charm config options. '
766+ '(database name and user)')
767+ raise OSContextError
768+ ctxt = {}
769+
770+ password_setting = 'password'
771+ if self.relation_prefix:
772+ password_setting = self.relation_prefix + '_password'
773+
774+ for rid in relation_ids('shared-db'):
775+ for unit in related_units(rid):
776+ passwd = relation_get(password_setting, rid=rid, unit=unit)
777+ ctxt = {
778+ 'database_host': relation_get('db_host', rid=rid,
779+ unit=unit),
780+ 'database': self.database,
781+ 'database_user': self.user,
782+ 'database_password': passwd,
783+ }
784+ if context_complete(ctxt):
785+ return ctxt
786+ return {}
787+
788+
789+class IdentityServiceContext(OSContextGenerator):
790+ interfaces = ['identity-service']
791+
792+ def __call__(self):
793+ log('Generating template context for identity-service')
794+ ctxt = {}
795+
796+ for rid in relation_ids('identity-service'):
797+ for unit in related_units(rid):
798+ ctxt = {
799+ 'service_port': relation_get('service_port', rid=rid,
800+ unit=unit),
801+ 'service_host': relation_get('service_host', rid=rid,
802+ unit=unit),
803+ 'auth_host': relation_get('auth_host', rid=rid, unit=unit),
804+ 'auth_port': relation_get('auth_port', rid=rid, unit=unit),
805+ 'admin_tenant_name': relation_get('service_tenant',
806+ rid=rid, unit=unit),
807+ 'admin_user': relation_get('service_username', rid=rid,
808+ unit=unit),
809+ 'admin_password': relation_get('service_password', rid=rid,
810+ unit=unit),
811+ # XXX: Hard-coded http.
812+ 'service_protocol': 'http',
813+ 'auth_protocol': 'http',
814+ }
815+ if context_complete(ctxt):
816+ return ctxt
817+ return {}
818+
819+
820+class AMQPContext(OSContextGenerator):
821+ interfaces = ['amqp']
822+
823+ def __call__(self):
824+ log('Generating template context for amqp')
825+ conf = config()
826+ try:
827+ username = conf['rabbit-user']
828+ vhost = conf['rabbit-vhost']
829+ except KeyError as e:
830+ log('Could not generate amqp context. '
831+ 'Missing required charm config options: %s.' % e)
832+ raise OSContextError
833+
834+ ctxt = {}
835+ for rid in relation_ids('amqp'):
836+ for unit in related_units(rid):
837+ if relation_get('clustered', rid=rid, unit=unit):
838+ ctxt['clustered'] = True
839+ ctxt['rabbitmq_host'] = relation_get('vip', rid=rid,
840+ unit=unit)
841+ else:
842+ ctxt['rabbitmq_host'] = relation_get('private-address',
843+ rid=rid, unit=unit)
844+ ctxt.update({
845+ 'rabbitmq_user': username,
846+ 'rabbitmq_password': relation_get('password', rid=rid,
847+ unit=unit),
848+ 'rabbitmq_virtual_host': vhost,
849+ })
850+ if context_complete(ctxt):
851+ # Sufficient information found = break out!
852+ break
853+ # Used for active/active rabbitmq >= grizzly
854+ ctxt['rabbitmq_hosts'] = []
855+ for unit in related_units(rid):
856+ ctxt['rabbitmq_hosts'].append(relation_get('private-address',
857+ rid=rid, unit=unit))
858+ if not context_complete(ctxt):
859+ return {}
860+ else:
861+ return ctxt
862+
863+
864+class CephContext(OSContextGenerator):
865+ interfaces = ['ceph']
866+
867+ def __call__(self):
868+ '''This generates context for /etc/ceph/ceph.conf templates'''
869+ if not relation_ids('ceph'):
870+ return {}
871+ log('Generating template context for ceph')
872+ mon_hosts = []
873+ auth = None
874+ key = None
875+ for rid in relation_ids('ceph'):
876+ for unit in related_units(rid):
877+ mon_hosts.append(relation_get('private-address', rid=rid,
878+ unit=unit))
879+ auth = relation_get('auth', rid=rid, unit=unit)
880+ key = relation_get('key', rid=rid, unit=unit)
881+
882+ ctxt = {
883+ 'mon_hosts': ' '.join(mon_hosts),
884+ 'auth': auth,
885+ 'key': key,
886+ }
887+
888+ if not os.path.isdir('/etc/ceph'):
889+ os.mkdir('/etc/ceph')
890+
891+ if not context_complete(ctxt):
892+ return {}
893+
894+ ensure_packages(['ceph-common'])
895+
896+ return ctxt
897+
898+
899+class HAProxyContext(OSContextGenerator):
900+ interfaces = ['cluster']
901+
902+ def __call__(self):
903+ '''
904+ Builds half a context for the haproxy template, which describes
905+ all peers to be included in the cluster. Each charm needs to include
906+ its own context generator that describes the port mapping.
907+ '''
908+ if not relation_ids('cluster'):
909+ return {}
910+
911+ cluster_hosts = {}
912+ l_unit = local_unit().replace('/', '-')
913+ cluster_hosts[l_unit] = unit_get('private-address')
914+
915+ for rid in relation_ids('cluster'):
916+ for unit in related_units(rid):
917+ _unit = unit.replace('/', '-')
918+ addr = relation_get('private-address', rid=rid, unit=unit)
919+ cluster_hosts[_unit] = addr
920+
921+ ctxt = {
922+ 'units': cluster_hosts,
923+ }
924+ if len(cluster_hosts.keys()) > 1:
925+ # Enable haproxy when we have enough peers.
926+ log('Ensuring haproxy enabled in /etc/default/haproxy.')
927+ with open('/etc/default/haproxy', 'w') as out:
928+ out.write('ENABLED=1\n')
929+ return ctxt
930+ log('HAProxy context is incomplete, this unit has no peers.')
931+ return {}
932+
933+
934+class ImageServiceContext(OSContextGenerator):
935+ interfaces = ['image-service']
936+
937+ def __call__(self):
938+ '''
939+ Obtains the glance API server from the image-service relation. Useful
940+ in nova and cinder (currently).
941+ '''
942+ log('Generating template context for image-service.')
943+ rids = relation_ids('image-service')
944+ if not rids:
945+ return {}
946+ for rid in rids:
947+ for unit in related_units(rid):
948+ api_server = relation_get('glance-api-server',
949+ rid=rid, unit=unit)
950+ if api_server:
951+ return {'glance_api_servers': api_server}
952+ log('ImageService context is incomplete. '
953+ 'Missing required relation data.')
954+ return {}
955+
956+
957+class ApacheSSLContext(OSContextGenerator):
958+ """
959+ Generates a context for an apache vhost configuration that configures
960+ HTTPS reverse proxying for one or many endpoints. Generated context
961+ looks something like:
962+ {
963+ 'namespace': 'cinder',
964+ 'private_address': 'iscsi.mycinderhost.com',
965+ 'endpoints': [(8776, 8766), (8777, 8767)]
966+ }
967+
968+ The endpoints list consists of tuples mapping external ports
969+ to internal ports.
970+ """
971+ interfaces = ['https']
972+
973+ # charms should inherit this context and set external ports
974+ # and service namespace accordingly.
975+ external_ports = []
976+ service_namespace = None
977+
978+ def enable_modules(self):
979+ cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http']
980+ check_call(cmd)
981+
982+ def configure_cert(self):
983+ if not os.path.isdir('/etc/apache2/ssl'):
984+ os.mkdir('/etc/apache2/ssl')
985+ ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace)
986+ if not os.path.isdir(ssl_dir):
987+ os.mkdir(ssl_dir)
988+ cert, key = get_cert()
989+ with open(os.path.join(ssl_dir, 'cert'), 'w') as cert_out:
990+ cert_out.write(b64decode(cert))
991+ with open(os.path.join(ssl_dir, 'key'), 'w') as key_out:
992+ key_out.write(b64decode(key))
993+ ca_cert = get_ca_cert()
994+ if ca_cert:
995+ with open(CA_CERT_PATH, 'w') as ca_out:
996+ ca_out.write(b64decode(ca_cert))
997+ check_call(['update-ca-certificates'])
998+
999+ def __call__(self):
1000+ if isinstance(self.external_ports, basestring):
1001+ self.external_ports = [self.external_ports]
1002+ if (not self.external_ports or not https()):
1003+ return {}
1004+
1005+ self.configure_cert()
1006+ self.enable_modules()
1007+
1008+ ctxt = {
1009+ 'namespace': self.service_namespace,
1010+ 'private_address': unit_get('private-address'),
1011+ 'endpoints': []
1012+ }
1013+ for ext_port in self.external_ports:
1014+ if peer_units() or is_clustered():
1015+ int_port = determine_haproxy_port(ext_port)
1016+ else:
1017+ int_port = determine_api_port(ext_port)
1018+ portmap = (int(ext_port), int(int_port))
1019+ ctxt['endpoints'].append(portmap)
1020+ return ctxt
1021+
1022+
1023+class NeutronContext(object):
1024+ interfaces = []
1025+
1026+ @property
1027+ def plugin(self):
1028+ return None
1029+
1030+ @property
1031+ def network_manager(self):
1032+ return None
1033+
1034+ @property
1035+ def packages(self):
1036+ return neutron_plugin_attribute(
1037+ self.plugin, 'packages', self.network_manager)
1038+
1039+ @property
1040+ def neutron_security_groups(self):
1041+ return None
1042+
1043+ def _ensure_packages(self):
1044+ [ensure_packages(pkgs) for pkgs in self.packages]
1045+
1046+ def _save_flag_file(self):
1047+ if self.network_manager == 'quantum':
1048+ _file = '/etc/nova/quantum_plugin.conf'
1049+ else:
1050+ _file = '/etc/nova/neutron_plugin.conf'
1051+ with open(_file, 'wb') as out:
1052+ out.write(self.plugin + '\n')
1053+
1054+ def ovs_ctxt(self):
1055+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1056+ self.network_manager)
1057+
1058+ ovs_ctxt = {
1059+ 'core_plugin': driver,
1060+ 'neutron_plugin': 'ovs',
1061+ 'neutron_security_groups': self.neutron_security_groups,
1062+ 'local_ip': unit_private_ip(),
1063+ }
1064+
1065+ return ovs_ctxt
1066+
1067+ def __call__(self):
1068+ self._ensure_packages()
1069+
1070+ if self.network_manager not in ['quantum', 'neutron']:
1071+ return {}
1072+
1073+ if not self.plugin:
1074+ return {}
1075+
1076+ ctxt = {'network_manager': self.network_manager}
1077+
1078+ if self.plugin == 'ovs':
1079+ ctxt.update(self.ovs_ctxt())
1080+
1081+ self._save_flag_file()
1082+ return ctxt
1083+
1084+
1085+class OSConfigFlagContext(OSContextGenerator):
1086+ '''
1087+ Responsible for adding user-defined config-flags in charm config
1088+ to a template context.
1089+ '''
1090+ def __call__(self):
1091+ config_flags = config('config-flags')
1092+ if not config_flags or config_flags in ['None', '']:
1093+ return {}
1094+ config_flags = config_flags.split(',')
1095+ flags = {}
1096+ for flag in config_flags:
1097+ if '=' not in flag:
1098+ log('Improperly formatted config-flag, expected k=v '
1099+ 'got %s' % flag, level=WARNING)
1100+ continue
1101+ k, v = flag.split('=')
1102+ flags[k.strip()] = v
1103+ ctxt = {'user_config_flags': flags}
1104+ return ctxt
1105+
1106+
1107+class SubordinateConfigContext(OSContextGenerator):
1108+ """
1109+ Responsible for inspecting relations to subordinates that
1110+ may be exporting required config via a json blob.
1111+
1112+ The subordinate interface allows subordinates to export their
1113+ configuration requirements to the principal for multiple config
1114+ files and multiple services. I.e., a subordinate that has interfaces
1115+ to both glance and nova may export the following yaml blob as json:
1116+
1117+ glance:
1118+ /etc/glance/glance-api.conf:
1119+ sections:
1120+ DEFAULT:
1121+ - [key1, value1]
1122+ /etc/glance/glance-registry.conf:
1123+ MYSECTION:
1124+ - [key2, value2]
1125+ nova:
1126+ /etc/nova/nova.conf:
1127+ sections:
1128+ DEFAULT:
1129+ - [key3, value3]
1130+
1131+
1132+ It is then up to the principal charms to subscribe this context to
1133+ the service+config file it is interested in. Configuration data will
1134+ be available in the template context, in glance's case, as:
1135+ ctxt = {
1136+ ... other context ...
1137+ 'subordinate_config': {
1138+ 'DEFAULT': {
1139+ 'key1': 'value1',
1140+ },
1141+ 'MYSECTION': {
1142+ 'key2': 'value2',
1143+ },
1144+ }
1145+ }
1146+
1147+ """
1148+ def __init__(self, service, config_file, interface):
1149+ """
1150+ :param service : Service name key to query in any subordinate
1151+ data found
1152+ :param config_file : Service's config file to query sections
1153+ :param interface : Subordinate interface to inspect
1154+ """
1155+ self.service = service
1156+ self.config_file = config_file
1157+ self.interface = interface
1158+
1159+ def __call__(self):
1160+ ctxt = {}
1161+ for rid in relation_ids(self.interface):
1162+ for unit in related_units(rid):
1163+ sub_config = relation_get('subordinate_configuration',
1164+ rid=rid, unit=unit)
1165+ if sub_config and sub_config != '':
1166+ try:
1167+ sub_config = json.loads(sub_config)
1168+ except:
1169+ log('Could not parse JSON from subordinate_config '
1170+ 'setting from %s' % rid, level=ERROR)
1171+ continue
1172+
1173+ if self.service not in sub_config:
1174+ log('Found subordinate_config on %s but it contained '
1175+ 'nothing for %s service' % (rid, self.service))
1176+ continue
1177+
1178+ sub_config = sub_config[self.service]
1179+ if self.config_file not in sub_config:
1180+ log('Found subordinate_config on %s but it contained '
1181+ 'nothing for %s' % (rid, self.config_file))
1182+ continue
1183+
1184+ sub_config = sub_config[self.config_file]
1185+ for k, v in sub_config.iteritems():
1186+ ctxt[k] = v
1187+
1188+ if not ctxt:
1189+ ctxt['sections'] = {}
1190+
1191+ return ctxt
1192
1193=== added file 'hooks/charmhelpers/contrib/openstack/neutron.py'
1194--- hooks/charmhelpers/contrib/openstack/neutron.py 1970-01-01 00:00:00 +0000
1195+++ hooks/charmhelpers/contrib/openstack/neutron.py 2013-10-16 18:49:46 +0000
1196@@ -0,0 +1,117 @@
1197+# Various utilities for dealing with Neutron and the renaming from Quantum.
1198+
1199+from subprocess import check_output
1200+
1201+from charmhelpers.core.hookenv import (
1202+ config,
1203+ log,
1204+ ERROR,
1205+)
1206+
1207+from charmhelpers.contrib.openstack.utils import os_release
1208+
1209+
1210+def headers_package():
1211+ """Ensures correct linux-headers for running kernel are installed,
1212+ for building DKMS package"""
1213+ kver = check_output(['uname', '-r']).strip()
1214+ return 'linux-headers-%s' % kver
1215+
1216+
1217+# legacy
1218+def quantum_plugins():
1219+ from charmhelpers.contrib.openstack import context
1220+ return {
1221+ 'ovs': {
1222+ 'config': '/etc/quantum/plugins/openvswitch/'
1223+ 'ovs_quantum_plugin.ini',
1224+ 'driver': 'quantum.plugins.openvswitch.ovs_quantum_plugin.'
1225+ 'OVSQuantumPluginV2',
1226+ 'contexts': [
1227+ context.SharedDBContext(user=config('neutron-database-user'),
1228+ database=config('neutron-database'),
1229+ relation_prefix='neutron')],
1230+ 'services': ['quantum-plugin-openvswitch-agent'],
1231+ 'packages': [[headers_package(), 'openvswitch-datapath-dkms'],
1232+ ['quantum-plugin-openvswitch-agent']],
1233+ },
1234+ 'nvp': {
1235+ 'config': '/etc/quantum/plugins/nicira/nvp.ini',
1236+ 'driver': 'quantum.plugins.nicira.nicira_nvp_plugin.'
1237+ 'QuantumPlugin.NvpPluginV2',
1238+ 'services': [],
1239+ 'packages': [],
1240+ }
1241+ }
1242+
1243+
1244+def neutron_plugins():
1245+ from charmhelpers.contrib.openstack import context
1246+ return {
1247+ 'ovs': {
1248+ 'config': '/etc/neutron/plugins/openvswitch/'
1249+ 'ovs_neutron_plugin.ini',
1250+ 'driver': 'neutron.plugins.openvswitch.ovs_neutron_plugin.'
1251+ 'OVSNeutronPluginV2',
1252+ 'contexts': [
1253+ context.SharedDBContext(user=config('neutron-database-user'),
1254+ database=config('neutron-database'),
1255+ relation_prefix='neutron')],
1256+ 'services': ['neutron-plugin-openvswitch-agent'],
1257+ 'packages': [[headers_package(), 'openvswitch-datapath-dkms'],
1258+ ['quantum-plugin-openvswitch-agent']],
1259+ },
1260+ 'nvp': {
1261+ 'config': '/etc/neutron/plugins/nicira/nvp.ini',
1262+ 'driver': 'neutron.plugins.nicira.nicira_nvp_plugin.'
1263+ 'NeutronPlugin.NvpPluginV2',
1264+ 'services': [],
1265+ 'packages': [],
1266+ }
1267+ }
1268+
1269+
1270+def neutron_plugin_attribute(plugin, attr, net_manager=None):
1271+ manager = net_manager or network_manager()
1272+ if manager == 'quantum':
1273+ plugins = quantum_plugins()
1274+ elif manager == 'neutron':
1275+ plugins = neutron_plugins()
1276+ else:
1277+ log('Error: Network manager does not support plugins.')
1278+ raise Exception
1279+
1280+ try:
1281+ _plugin = plugins[plugin]
1282+ except KeyError:
1283+ log('Unrecognised plugin for %s: %s' % (manager, plugin), level=ERROR)
1284+ raise Exception
1285+
1286+ try:
1287+ return _plugin[attr]
1288+ except KeyError:
1289+ return None
1290+
1291+
1292+def network_manager():
1293+ '''
1294+ Deals with the renaming of Quantum to Neutron in H and any situations
1295+ that require compatibility (eg, deploying H with network-manager=quantum,
1296+ upgrading from G).
1297+ '''
1298+ release = os_release('nova-common')
1299+ manager = config('network-manager').lower()
1300+
1301+ if manager not in ['quantum', 'neutron']:
1302+ return manager
1303+
1304+ if release in ['essex']:
1305+ # E does not support neutron
1306+ log('Neutron networking not supported in Essex.', level=ERROR)
1307+ raise Exception
1308+ elif release in ['folsom', 'grizzly']:
1309+ # neutron is named quantum in F and G
1310+ return 'quantum'
1311+ else:
1312+ # ensure accurate naming for all releases post-H
1313+ return 'neutron'
1314
1315=== added directory 'hooks/charmhelpers/contrib/openstack/templates'
1316=== added file 'hooks/charmhelpers/contrib/openstack/templates/__init__.py'
1317--- hooks/charmhelpers/contrib/openstack/templates/__init__.py 1970-01-01 00:00:00 +0000
1318+++ hooks/charmhelpers/contrib/openstack/templates/__init__.py 2013-10-16 18:49:46 +0000
1319@@ -0,0 +1,2 @@
1320+# dummy __init__.py to fool syncer into thinking this is a syncable python
1321+# module
1322
1323=== added file 'hooks/charmhelpers/contrib/openstack/templates/ceph.conf'
1324--- hooks/charmhelpers/contrib/openstack/templates/ceph.conf 1970-01-01 00:00:00 +0000
1325+++ hooks/charmhelpers/contrib/openstack/templates/ceph.conf 2013-10-16 18:49:46 +0000
1326@@ -0,0 +1,11 @@
1327+###############################################################################
1328+# [ WARNING ]
1329+# cinder configuration file maintained by Juju
1330+# local changes may be overwritten.
1331+###############################################################################
1332+{% if auth -%}
1333+[global]
1334+ auth_supported = {{ auth }}
1335+ keyring = /etc/ceph/$cluster.$name.keyring
1336+ mon host = {{ mon_hosts }}
1337+{% endif -%}
1338
1339=== added file 'hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg'
1340--- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 1970-01-01 00:00:00 +0000
1341+++ hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2013-10-16 18:49:46 +0000
1342@@ -0,0 +1,37 @@
1343+global
1344+ log 127.0.0.1 local0
1345+ log 127.0.0.1 local1 notice
1346+ maxconn 20000
1347+ user haproxy
1348+ group haproxy
1349+ spread-checks 0
1350+
1351+defaults
1352+ log global
1353+ mode http
1354+ option httplog
1355+ option dontlognull
1356+ retries 3
1357+ timeout queue 1000
1358+ timeout connect 1000
1359+ timeout client 30000
1360+ timeout server 30000
1361+
1362+listen stats :8888
1363+ mode http
1364+ stats enable
1365+ stats hide-version
1366+ stats realm Haproxy\ Statistics
1367+ stats uri /
1368+ stats auth admin:password
1369+
1370+{% if units -%}
1371+{% for service, ports in service_ports.iteritems() -%}
1372+listen {{ service }} 0.0.0.0:{{ ports[0] }}
1373+ balance roundrobin
1374+ option tcplog
1375+ {% for unit, address in units.iteritems() -%}
1376+ server {{ unit }} {{ address }}:{{ ports[1] }} check
1377+ {% endfor %}
1378+{% endfor -%}
1379+{% endif -%}
1380
1381=== added file 'hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend'
1382--- hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend 1970-01-01 00:00:00 +0000
1383+++ hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend 2013-10-16 18:49:46 +0000
1384@@ -0,0 +1,23 @@
1385+{% if endpoints -%}
1386+{% for ext, int in endpoints -%}
1387+Listen {{ ext }}
1388+NameVirtualHost *:{{ ext }}
1389+<VirtualHost *:{{ ext }}>
1390+ ServerName {{ private_address }}
1391+ SSLEngine on
1392+ SSLCertificateFile /etc/apache2/ssl/{{ namespace }}/cert
1393+ SSLCertificateKeyFile /etc/apache2/ssl/{{ namespace }}/key
1394+ ProxyPass / http://localhost:{{ int }}/
1395+ ProxyPassReverse / http://localhost:{{ int }}/
1396+ ProxyPreserveHost on
1397+</VirtualHost>
1398+<Proxy *>
1399+ Order deny,allow
1400+ Allow from all
1401+</Proxy>
1402+<Location />
1403+ Order allow,deny
1404+ Allow from all
1405+</Location>
1406+{% endfor -%}
1407+{% endif -%}
1408
1409=== added file 'hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf'
1410--- hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf 1970-01-01 00:00:00 +0000
1411+++ hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf 2013-10-16 18:49:46 +0000
1412@@ -0,0 +1,23 @@
1413+{% if endpoints -%}
1414+{% for ext, int in endpoints -%}
1415+Listen {{ ext }}
1416+NameVirtualHost *:{{ ext }}
1417+<VirtualHost *:{{ ext }}>
1418+ ServerName {{ private_address }}
1419+ SSLEngine on
1420+ SSLCertificateFile /etc/apache2/ssl/{{ namespace }}/cert
1421+ SSLCertificateKeyFile /etc/apache2/ssl/{{ namespace }}/key
1422+ ProxyPass / http://localhost:{{ int }}/
1423+ ProxyPassReverse / http://localhost:{{ int }}/
1424+ ProxyPreserveHost on
1425+</VirtualHost>
1426+<Proxy *>
1427+ Order deny,allow
1428+ Allow from all
1429+</Proxy>
1430+<Location />
1431+ Order allow,deny
1432+ Allow from all
1433+</Location>
1434+{% endfor -%}
1435+{% endif -%}
1436
1437=== added file 'hooks/charmhelpers/contrib/openstack/templating.py'
1438--- hooks/charmhelpers/contrib/openstack/templating.py 1970-01-01 00:00:00 +0000
1439+++ hooks/charmhelpers/contrib/openstack/templating.py 2013-10-16 18:49:46 +0000
1440@@ -0,0 +1,280 @@
1441+import os
1442+
1443+from charmhelpers.fetch import apt_install
1444+
1445+from charmhelpers.core.hookenv import (
1446+ log,
1447+ ERROR,
1448+ INFO
1449+)
1450+
1451+from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
1452+
1453+try:
1454+ from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
1455+except ImportError:
1456+ # python-jinja2 may not be installed yet, or we're running unittests.
1457+ FileSystemLoader = ChoiceLoader = Environment = exceptions = None
1458+
1459+
1460+class OSConfigException(Exception):
1461+ pass
1462+
1463+
1464+def get_loader(templates_dir, os_release):
1465+ """
1466+ Create a jinja2.ChoiceLoader containing template dirs up to
1467+ and including os_release. If directory template directory
1468+ is missing at templates_dir, it will be omitted from the loader.
1469+ templates_dir is added to the bottom of the search list as a base
1470+ loading dir.
1471+
1472+ A charm may also ship a templates dir with this module
1473+ and it will be appended to the bottom of the search list, eg:
1474+ hooks/charmhelpers/contrib/openstack/templates.
1475+
1476+ :param templates_dir: str: Base template directory containing release
1477+ sub-directories.
1478+ :param os_release : str: OpenStack release codename to construct template
1479+ loader.
1480+
1481+ :returns : jinja2.ChoiceLoader constructed with a list of
1482+ jinja2.FilesystemLoaders, ordered in descending
1483+ order by OpenStack release.
1484+ """
1485+ tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
1486+ for rel in OPENSTACK_CODENAMES.itervalues()]
1487+
1488+ if not os.path.isdir(templates_dir):
1489+ log('Templates directory not found @ %s.' % templates_dir,
1490+ level=ERROR)
1491+ raise OSConfigException
1492+
1493+ # the bottom contains templates_dir and possibly a common templates dir
1494+ # shipped with the helper.
1495+ loaders = [FileSystemLoader(templates_dir)]
1496+ helper_templates = os.path.join(os.path.dirname(__file__), 'templates')
1497+ if os.path.isdir(helper_templates):
1498+ loaders.append(FileSystemLoader(helper_templates))
1499+
1500+ for rel, tmpl_dir in tmpl_dirs:
1501+ if os.path.isdir(tmpl_dir):
1502+ loaders.insert(0, FileSystemLoader(tmpl_dir))
1503+ if rel == os_release:
1504+ break
1505+ log('Creating choice loader with dirs: %s' %
1506+ [l.searchpath for l in loaders], level=INFO)
1507+ return ChoiceLoader(loaders)
1508+
1509+
1510+class OSConfigTemplate(object):
1511+ """
1512+ Associates a config file template with a list of context generators.
1513+ Responsible for constructing a template context based on those generators.
1514+ """
1515+ def __init__(self, config_file, contexts):
1516+ self.config_file = config_file
1517+
1518+ if hasattr(contexts, '__call__'):
1519+ self.contexts = [contexts]
1520+ else:
1521+ self.contexts = contexts
1522+
1523+ self._complete_contexts = []
1524+
1525+ def context(self):
1526+ ctxt = {}
1527+ for context in self.contexts:
1528+ _ctxt = context()
1529+ if _ctxt:
1530+ ctxt.update(_ctxt)
1531+ # track interfaces for every complete context.
1532+ [self._complete_contexts.append(interface)
1533+ for interface in context.interfaces
1534+ if interface not in self._complete_contexts]
1535+ return ctxt
1536+
1537+ def complete_contexts(self):
1538+ '''
1539+ Return a list of interfaces that have satisfied contexts.
1540+ '''
1541+ if self._complete_contexts:
1542+ return self._complete_contexts
1543+ self.context()
1544+ return self._complete_contexts
1545+
1546+
1547+class OSConfigRenderer(object):
1548+ """
1549+ This class provides a common templating system to be used by OpenStack
1550+ charms. It is intended to help charms share common code and templates,
1551+ and ease the burden of managing config templates across multiple OpenStack
1552+ releases.
1553+
1554+ Basic usage:
1555+ # import some common context generators from charmhelpers
1556+ from charmhelpers.contrib.openstack import context
1557+
1558+ # Create a renderer object for a specific OS release.
1559+ configs = OSConfigRenderer(templates_dir='/tmp/templates',
1560+ openstack_release='folsom')
1561+ # register some config files with context generators.
1562+ configs.register(config_file='/etc/nova/nova.conf',
1563+ contexts=[context.SharedDBContext(),
1564+ context.AMQPContext()])
1565+ configs.register(config_file='/etc/nova/api-paste.ini',
1566+ contexts=[context.IdentityServiceContext()])
1567+ configs.register(config_file='/etc/haproxy/haproxy.conf',
1568+ contexts=[context.HAProxyContext()])
1569+ # write out a single config
1570+ configs.write('/etc/nova/nova.conf')
1571+ # write out all registered configs
1572+ configs.write_all()
1573+
1574+ Details:
1575+
1576+ OpenStack Releases and template loading
1577+ ---------------------------------------
1578+ When the object is instantiated, it is associated with a specific OS
1579+ release. This dictates how the template loader will be constructed.
1580+
1581+ The constructed loader attempts to load the template from several places
1582+ in the following order:
1583+ - from the most recent OS release-specific template dir (if one exists)
1584+ - the base templates_dir
1585+ - a template directory shipped in the charm with this helper file.
1586+
1587+
1588+ For the example above, '/tmp/templates' contains the following structure:
1589+ /tmp/templates/nova.conf
1590+ /tmp/templates/api-paste.ini
1591+ /tmp/templates/grizzly/api-paste.ini
1592+ /tmp/templates/havana/api-paste.ini
1593+
1594+ Since it was registered with the grizzly release, it first searches
1595+ the grizzly directory for nova.conf, then the templates dir.
1596+
1597+ When writing api-paste.ini, it will find the template in the grizzly
1598+ directory.
1599+
1600+ If the object were created with folsom, it would fall back to the
1601+ base templates dir for its api-paste.ini template.
1602+
1603+ This system should help manage changes in config files through
1604+ openstack releases, allowing charms to fall back to the most recently
1605+ updated config template for a given release
1606+
1607+ The haproxy.conf, since it is not shipped in the templates dir, will
1608+ be loaded from the module directory's template directory, eg
1609+ $CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows
1610+ us to ship common templates (haproxy, apache) with the helpers.
1611+
1612+ Context generators
1613+ ---------------------------------------
1614+ Context generators are used to generate template contexts during hook
1615+ execution. Doing so may require inspecting service relations, charm
1616+ config, etc. When registered, a config file is associated with a list
1617+ of generators. When a template is rendered and written, all context
1618+ generators are called in a chain to generate the context dictionary
1619+ passed to the jinja2 template. See context.py for more info.
1620+ """
1621+ def __init__(self, templates_dir, openstack_release):
1622+ if not os.path.isdir(templates_dir):
1623+ log('Could not locate templates dir %s' % templates_dir,
1624+ level=ERROR)
1625+ raise OSConfigException
1626+
1627+ self.templates_dir = templates_dir
1628+ self.openstack_release = openstack_release
1629+ self.templates = {}
1630+ self._tmpl_env = None
1631+
1632+ if None in [Environment, ChoiceLoader, FileSystemLoader]:
1633+ # if this code is running, the object is created pre-install hook.
1634+ # jinja2 shouldn't get touched until the module is reloaded on next
1635+ # hook execution, with proper jinja2 bits successfully imported.
1636+ apt_install('python-jinja2')
1637+
1638+ def register(self, config_file, contexts):
1639+ """
1640+ Register a config file with a list of context generators to be called
1641+ during rendering.
1642+ """
1643+ self.templates[config_file] = OSConfigTemplate(config_file=config_file,
1644+ contexts=contexts)
1645+ log('Registered config file: %s' % config_file, level=INFO)
1646+
1647+ def _get_tmpl_env(self):
1648+ if not self._tmpl_env:
1649+ loader = get_loader(self.templates_dir, self.openstack_release)
1650+ self._tmpl_env = Environment(loader=loader)
1651+
1652+ def _get_template(self, template):
1653+ self._get_tmpl_env()
1654+ template = self._tmpl_env.get_template(template)
1655+ log('Loaded template from %s' % template.filename, level=INFO)
1656+ return template
1657+
1658+ def render(self, config_file):
1659+ if config_file not in self.templates:
1660+ log('Config not registered: %s' % config_file, level=ERROR)
1661+ raise OSConfigException
1662+ ctxt = self.templates[config_file].context()
1663+
1664+ _tmpl = os.path.basename(config_file)
1665+ try:
1666+ template = self._get_template(_tmpl)
1667+ except exceptions.TemplateNotFound:
1668+ # if no template is found with basename, try looking for it
1669+ # using a munged full path, eg:
1670+ # /etc/apache2/apache2.conf -> etc_apache2_apache2.conf
1671+ _tmpl = '_'.join(config_file.split('/')[1:])
1672+ try:
1673+ template = self._get_template(_tmpl)
1674+ except exceptions.TemplateNotFound as e:
1675+ log('Could not load template from %s by %s or %s.' %
1676+ (self.templates_dir, os.path.basename(config_file), _tmpl),
1677+ level=ERROR)
1678+ raise e
1679+
1680+ log('Rendering from template: %s' % _tmpl, level=INFO)
1681+ return template.render(ctxt)
1682+
1683+ def write(self, config_file):
1684+ """
1685+ Write a single config file, raises if config file is not registered.
1686+ """
1687+ if config_file not in self.templates:
1688+ log('Config not registered: %s' % config_file, level=ERROR)
1689+ raise OSConfigException
1690+
1691+ _out = self.render(config_file)
1692+
1693+ with open(config_file, 'wb') as out:
1694+ out.write(_out)
1695+
1696+ log('Wrote template %s.' % config_file, level=INFO)
1697+
1698+ def write_all(self):
1699+ """
1700+ Write out all registered config files.
1701+ """
1702+ [self.write(k) for k in self.templates.iterkeys()]
1703+
1704+ def set_release(self, openstack_release):
1705+ """
1706+ Resets the template environment and generates a new template loader
1707+ based on the new OpenStack release.
1708+ """
1709+ self._tmpl_env = None
1710+ self.openstack_release = openstack_release
1711+ self._get_tmpl_env()
1712+
1713+ def complete_contexts(self):
1714+ '''
1715+ Returns a list of context interfaces that yield a complete context.
1716+ '''
1717+ interfaces = []
1718+ [interfaces.extend(i.complete_contexts())
1719+ for i in self.templates.itervalues()]
1720+ return interfaces
1721
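As a quick illustration of how the renderer above is typically driven from charm code during an OpenStack upgrade, a minimal sketch follows; register_configs(), the cinder.conf path and the context list are assumptions for illustration, not part of this branch.

    from charmhelpers.contrib.openstack import context, templating

    def register_configs(release='folsom'):
        # Assumes the charm ships a 'templates/' dir alongside its hooks.
        configs = templating.OSConfigRenderer(templates_dir='templates/',
                                              openstack_release=release)
        configs.register('/etc/cinder/cinder.conf',
                         [context.SharedDBContext(),
                          context.AMQPContext()])
        return configs

    CONFIGS = register_configs()

    def do_openstack_upgrade(new_release):
        # Point the loader at the new release's template dirs, re-render
        # everything, and report which relation interfaces are complete.
        CONFIGS.set_release(openstack_release=new_release)
        CONFIGS.write_all()
        return CONFIGS.complete_contexts()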
1722=== added file 'hooks/charmhelpers/contrib/openstack/utils.py'
1723--- hooks/charmhelpers/contrib/openstack/utils.py 1970-01-01 00:00:00 +0000
1724+++ hooks/charmhelpers/contrib/openstack/utils.py 2013-10-16 18:49:46 +0000
1725@@ -0,0 +1,365 @@
1726+#!/usr/bin/python
1727+
1728+# Common python helper functions used for OpenStack charms.
1729+from collections import OrderedDict
1730+
1731+import apt_pkg as apt
1732+import subprocess
1733+import os
1734+import socket
1735+import sys
1736+
1737+from charmhelpers.core.hookenv import (
1738+ config,
1739+ log as juju_log,
1740+ charm_dir,
1741+)
1742+
1743+from charmhelpers.core.host import (
1744+ lsb_release,
1745+)
1746+
1747+from charmhelpers.fetch import (
1748+ apt_install,
1749+)
1750+
1751+CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
1752+CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
1753+
1754+UBUNTU_OPENSTACK_RELEASE = OrderedDict([
1755+ ('oneiric', 'diablo'),
1756+ ('precise', 'essex'),
1757+ ('quantal', 'folsom'),
1758+ ('raring', 'grizzly'),
1759+ ('saucy', 'havana'),
1760+])
1761+
1762+
1763+OPENSTACK_CODENAMES = OrderedDict([
1764+ ('2011.2', 'diablo'),
1765+ ('2012.1', 'essex'),
1766+ ('2012.2', 'folsom'),
1767+ ('2013.1', 'grizzly'),
1768+ ('2013.2', 'havana'),
1769+ ('2014.1', 'icehouse'),
1770+])
1771+
1772+# The ugly duckling
1773+SWIFT_CODENAMES = OrderedDict([
1774+ ('1.4.3', 'diablo'),
1775+ ('1.4.8', 'essex'),
1776+ ('1.7.4', 'folsom'),
1777+ ('1.8.0', 'grizzly'),
1778+ ('1.7.7', 'grizzly'),
1779+ ('1.7.6', 'grizzly'),
1780+ ('1.10.0', 'havana'),
1781+ ('1.9.1', 'havana'),
1782+ ('1.9.0', 'havana'),
1783+])
1784+
1785+
1786+def error_out(msg):
1787+ juju_log("FATAL ERROR: %s" % msg, level='ERROR')
1788+ sys.exit(1)
1789+
1790+
1791+def get_os_codename_install_source(src):
1792+ '''Derive OpenStack release codename from a given installation source.'''
1793+ ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
1794+ rel = ''
1795+ if src == 'distro':
1796+ try:
1797+ rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel]
1798+ except KeyError:
1799+ e = 'Could not derive openstack release for '\
1800+ 'this Ubuntu release: %s' % ubuntu_rel
1801+ error_out(e)
1802+ return rel
1803+
1804+ if src.startswith('cloud:'):
1805+ ca_rel = src.split(':')[1]
1806+ ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]
1807+ return ca_rel
1808+
1809+ # Best guess match based on deb string provided
1810+ if src.startswith('deb') or src.startswith('ppa'):
1811+ for k, v in OPENSTACK_CODENAMES.iteritems():
1812+ if v in src:
1813+ return v
1814+
1815+
1816+def get_os_version_install_source(src):
1817+ codename = get_os_codename_install_source(src)
1818+ return get_os_version_codename(codename)
1819+
1820+
1821+def get_os_codename_version(vers):
1822+ '''Determine OpenStack codename from version number.'''
1823+ try:
1824+ return OPENSTACK_CODENAMES[vers]
1825+ except KeyError:
1826+ e = 'Could not determine OpenStack codename for version %s' % vers
1827+ error_out(e)
1828+
1829+
1830+def get_os_version_codename(codename):
1831+ '''Determine OpenStack version number from codename.'''
1832+ for k, v in OPENSTACK_CODENAMES.iteritems():
1833+ if v == codename:
1834+ return k
1835+ e = 'Could not derive OpenStack version for '\
1836+ 'codename: %s' % codename
1837+ error_out(e)
1838+
1839+
1840+def get_os_codename_package(package, fatal=True):
1841+ '''Derive OpenStack release codename from an installed package.'''
1842+ apt.init()
1843+ cache = apt.Cache()
1844+
1845+ try:
1846+ pkg = cache[package]
1847+ except:
1848+ if not fatal:
1849+ return None
1850+ # the package is unknown to the current apt cache.
1851+ e = 'Could not determine version of package with no installation '\
1852+ 'candidate: %s' % package
1853+ error_out(e)
1854+
1855+ if not pkg.current_ver:
1856+ if not fatal:
1857+ return None
1858+ # package is known, but no version is currently installed.
1859+ e = 'Could not determine version of uninstalled package: %s' % package
1860+ error_out(e)
1861+
1862+ vers = apt.upstream_version(pkg.current_ver.ver_str)
1863+
1864+ try:
1865+ if 'swift' in pkg.name:
1866+ swift_vers = vers[:5]
1867+ if swift_vers not in SWIFT_CODENAMES:
1868+ # Deal with 1.10.0 upward
1869+ swift_vers = vers[:6]
1870+ return SWIFT_CODENAMES[swift_vers]
1871+ else:
1872+ vers = vers[:6]
1873+ return OPENSTACK_CODENAMES[vers]
1874+ except KeyError:
1875+ e = 'Could not determine OpenStack codename for version %s' % vers
1876+ error_out(e)
1877+
1878+
1879+def get_os_version_package(pkg, fatal=True):
1880+ '''Derive OpenStack version number from an installed package.'''
1881+ codename = get_os_codename_package(pkg, fatal=fatal)
1882+
1883+ if not codename:
1884+ return None
1885+
1886+ if 'swift' in pkg:
1887+ vers_map = SWIFT_CODENAMES
1888+ else:
1889+ vers_map = OPENSTACK_CODENAMES
1890+
1891+ for version, cname in vers_map.iteritems():
1892+ if cname == codename:
1893+ return version
1894+ #e = "Could not determine OpenStack version for package: %s" % pkg
1895+ #error_out(e)
1896+
1897+
1898+os_rel = None
1899+
1900+
1901+def os_release(package, base='essex'):
1902+ '''
1903+ Returns OpenStack release codename from a cached global.
1904+ If the codename can not be determined from either an installed package or
1905+ the installation source, the earliest release supported by the charm should
1906+ be returned.
1907+ '''
1908+ global os_rel
1909+ if os_rel:
1910+ return os_rel
1911+ os_rel = (get_os_codename_package(package, fatal=False) or
1912+ get_os_codename_install_source(config('openstack-origin')) or
1913+ base)
1914+ return os_rel
1915+
1916+
1917+def import_key(keyid):
1918+ cmd = "apt-key adv --keyserver keyserver.ubuntu.com " \
1919+ "--recv-keys %s" % keyid
1920+ try:
1921+ subprocess.check_call(cmd.split(' '))
1922+ except subprocess.CalledProcessError:
1923+ error_out("Error importing repo key %s" % keyid)
1924+
1925+
1926+def configure_installation_source(rel):
1927+ '''Configure apt installation source.'''
1928+ if rel == 'distro':
1929+ return
1930+ elif rel[:4] == "ppa:":
1931+ src = rel
1932+ subprocess.check_call(["add-apt-repository", "-y", src])
1933+ elif rel[:3] == "deb":
1934+ l = len(rel.split('|'))
1935+ if l == 2:
1936+ src, key = rel.split('|')
1937+ juju_log("Importing PPA key from keyserver for %s" % src)
1938+ import_key(key)
1939+ elif l == 1:
1940+ src = rel
1941+ with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
1942+ f.write(src)
1943+ elif rel[:6] == 'cloud:':
1944+ ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
1945+ rel = rel.split(':')[1]
1946+ u_rel = rel.split('-')[0]
1947+ ca_rel = rel.split('-')[1]
1948+
1949+ if u_rel != ubuntu_rel:
1950+ e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\
1951+ 'version (%s)' % (ca_rel, ubuntu_rel)
1952+ error_out(e)
1953+
1954+ if 'staging' in ca_rel:
1955+ # staging is just a regular PPA.
1956+ os_rel = ca_rel.split('/')[0]
1957+ ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel
1958+ cmd = 'add-apt-repository -y %s' % ppa
1959+ subprocess.check_call(cmd.split(' '))
1960+ return
1961+
1962+ # map charm config options to actual archive pockets.
1963+ pockets = {
1964+ 'folsom': 'precise-updates/folsom',
1965+ 'folsom/updates': 'precise-updates/folsom',
1966+ 'folsom/proposed': 'precise-proposed/folsom',
1967+ 'grizzly': 'precise-updates/grizzly',
1968+ 'grizzly/updates': 'precise-updates/grizzly',
1969+ 'grizzly/proposed': 'precise-proposed/grizzly',
1970+ 'havana': 'precise-updates/havana',
1971+ 'havana/updates': 'precise-updates/havana',
1972+ 'havana/proposed': 'precise-proposed/havana',
1973+ }
1974+
1975+ try:
1976+ pocket = pockets[ca_rel]
1977+ except KeyError:
1978+ e = 'Invalid Cloud Archive release specified: %s' % rel
1979+ error_out(e)
1980+
1981+ src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket)
1982+ apt_install('ubuntu-cloud-keyring', fatal=True)
1983+
1984+ with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f:
1985+ f.write(src)
1986+ else:
1987+ error_out("Invalid openstack-release specified: %s" % rel)
1988+
1989+
1990+def save_script_rc(script_path="scripts/scriptrc", **env_vars):
1991+ """
1992+ Write an rc file in the charm-delivered directory containing
1993+ exported environment variables provided by env_vars. Any charm scripts run
1994+ outside the juju hook environment can source this scriptrc to obtain
1995+ updated config information necessary to perform health checks or
1996+ service changes.
1997+ """
1998+ juju_rc_path = "%s/%s" % (charm_dir(), script_path)
1999+ if not os.path.exists(os.path.dirname(juju_rc_path)):
2000+ os.mkdir(os.path.dirname(juju_rc_path))
2001+ with open(juju_rc_path, 'wb') as rc_script:
2002+ rc_script.write(
2003+ "#!/bin/bash\n")
2004+ [rc_script.write('export %s=%s\n' % (u, p))
2005+ for u, p in env_vars.iteritems() if u != "script_path"]
2006+
2007+
2008+def openstack_upgrade_available(package):
2009+ """
2010+ Determines if an OpenStack upgrade is available from installation
2011+ source, based on version of installed package.
2012+
2013+ :param package: str: Name of installed package.
2014+
2015+ :returns: bool: True if the configured installation source offers
2016+ a newer version of the package.
2017+
2018+ """
2019+
2020+ src = config('openstack-origin')
2021+ cur_vers = get_os_version_package(package)
2022+ available_vers = get_os_version_install_source(src)
2023+ apt.init()
2024+ return apt.version_compare(available_vers, cur_vers) == 1
2025+
2026+
2027+def is_ip(address):
2028+ """
2029+ Returns True if address is a valid IPv4 address.
2030+ """
2031+ try:
2032+ # Test to see if already an IPv4 address
2033+ socket.inet_aton(address)
2034+ return True
2035+ except socket.error:
2036+ return False
2037+
2038+
2039+def ns_query(address):
2040+ try:
2041+ import dns.resolver
2042+ except ImportError:
2043+ apt_install('python-dnspython')
2044+ import dns.resolver
2045+
2046+ if isinstance(address, dns.name.Name):
2047+ rtype = 'PTR'
2048+ elif isinstance(address, basestring):
2049+ rtype = 'A'
2050+
2051+ answers = dns.resolver.query(address, rtype)
2052+ if answers:
2053+ return str(answers[0])
2054+ return None
2055+
2056+
2057+def get_host_ip(hostname):
2058+ """
2059+ Resolves the IP for a given hostname, or returns
2060+ the input if it is already an IP.
2061+ """
2062+ if is_ip(hostname):
2063+ return hostname
2064+
2065+ return ns_query(hostname)
2066+
2067+
2068+def get_hostname(address):
2069+ """
2070+ Resolves hostname for given IP, or returns the input
2071+ if it is already a hostname.
2072+ """
2073+ if not is_ip(address):
2074+ return address
2075+
2076+ try:
2077+ import dns.reversename
2078+ except ImportError:
2079+ apt_install('python-dnspython')
2080+ import dns.reversename
2081+
2082+ rev = dns.reversename.from_address(address)
2083+ result = ns_query(rev)
2084+ if not result:
2085+ return None
2086+
2087+ # strip trailing .
2088+ if result.endswith('.'):
2089+ return result[:-1]
2090+ return result
2091
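Pulling the helpers in this file together, a config-changed hook might gate an upgrade roughly as below; the package name 'cinder-common' and do_upgrade() are illustrative assumptions only.

    from charmhelpers.core.hookenv import config, log
    from charmhelpers.contrib.openstack.utils import (
        configure_installation_source,
        get_os_codename_install_source,
        openstack_upgrade_available,
    )

    def maybe_upgrade():
        origin = config('openstack-origin')    # e.g. 'cloud:precise-havana'
        configure_installation_source(origin)  # writes the apt source / PPA
        if openstack_upgrade_available('cinder-common'):
            target = get_os_codename_install_source(origin)
            log('Upgrading OpenStack release to %s' % target)
            do_upgrade(target)  # hypothetical charm-specific upgrade step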
2092=== added directory 'hooks/charmhelpers/contrib/storage'
2093=== added file 'hooks/charmhelpers/contrib/storage/__init__.py'
2094=== added directory 'hooks/charmhelpers/contrib/storage/linux'
2095=== added file 'hooks/charmhelpers/contrib/storage/linux/__init__.py'
2096=== added file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
2097--- hooks/charmhelpers/contrib/storage/linux/ceph.py 1970-01-01 00:00:00 +0000
2098+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2013-10-16 18:49:46 +0000
2099@@ -0,0 +1,359 @@
2100+#
2101+# Copyright 2012 Canonical Ltd.
2102+#
2103+# This file is sourced from lp:openstack-charm-helpers
2104+#
2105+# Authors:
2106+# James Page <james.page@ubuntu.com>
2107+# Adam Gandelman <adamg@ubuntu.com>
2108+#
2109+
2110+import os
2111+import shutil
2112+import json
2113+import time
2114+
2115+from subprocess import (
2116+ check_call,
2117+ check_output,
2118+ CalledProcessError
2119+)
2120+
2121+from charmhelpers.core.hookenv import (
2122+ relation_get,
2123+ relation_ids,
2124+ related_units,
2125+ log,
2126+ INFO,
2127+ WARNING,
2128+ ERROR
2129+)
2130+
2131+from charmhelpers.core.host import (
2132+ mount,
2133+ mounts,
2134+ service_start,
2135+ service_stop,
2136+ service_running,
2137+ umount,
2138+)
2139+
2140+from charmhelpers.fetch import (
2141+ apt_install,
2142+)
2143+
2144+KEYRING = '/etc/ceph/ceph.client.{}.keyring'
2145+KEYFILE = '/etc/ceph/ceph.client.{}.key'
2146+
2147+CEPH_CONF = """[global]
2148+ auth supported = {auth}
2149+ keyring = {keyring}
2150+ mon host = {mon_hosts}
2151+"""
2152+
2153+
2154+def install():
2155+ ''' Basic Ceph client installation '''
2156+ ceph_dir = "/etc/ceph"
2157+ if not os.path.exists(ceph_dir):
2158+ os.mkdir(ceph_dir)
2159+ apt_install('ceph-common', fatal=True)
2160+
2161+
2162+def rbd_exists(service, pool, rbd_img):
2163+ ''' Check to see if a RADOS block device exists '''
2164+ try:
2165+ out = check_output(['rbd', 'list', '--id', service,
2166+ '--pool', pool])
2167+ except CalledProcessError:
2168+ return False
2169+ else:
2170+ return rbd_img in out
2171+
2172+
2173+def create_rbd_image(service, pool, image, sizemb):
2174+ ''' Create a new RADOS block device '''
2175+ cmd = [
2176+ 'rbd',
2177+ 'create',
2178+ image,
2179+ '--size',
2180+ str(sizemb),
2181+ '--id',
2182+ service,
2183+ '--pool',
2184+ pool
2185+ ]
2186+ check_call(cmd)
2187+
2188+
2189+def pool_exists(service, name):
2190+ ''' Check to see if a RADOS pool already exists '''
2191+ try:
2192+ out = check_output(['rados', '--id', service, 'lspools'])
2193+ except CalledProcessError:
2194+ return False
2195+ else:
2196+ return name in out
2197+
2198+
2199+def get_osds(service):
2200+ '''
2201+ Return a list of all Ceph Object Storage Daemons
2202+ currently in the cluster
2203+ '''
2204+ return json.loads(check_output(['ceph', '--id', service,
2205+ 'osd', 'ls', '--format=json']))
2206+
2207+
2208+def create_pool(service, name, replicas=2):
2209+ ''' Create a new RADOS pool '''
2210+ if pool_exists(service, name):
2211+ log("Ceph pool {} already exists, skipping creation".format(name),
2212+ level=WARNING)
2213+ return
2214+ # Calculate the number of placement groups based
2215+ # on upstream recommended best practices.
2216+ pgnum = (len(get_osds(service)) * 100 / replicas)
2217+ cmd = [
2218+ 'ceph', '--id', service,
2219+ 'osd', 'pool', 'create',
2220+ name, str(pgnum)
2221+ ]
2222+ check_call(cmd)
2223+ cmd = [
2224+ 'ceph', '--id', service,
2225+ 'osd', 'pool', 'set', name,
2226+ 'size', str(replicas)
2227+ ]
2228+ check_call(cmd)
2229+
2230+
2231+def delete_pool(service, name):
2232+ ''' Delete a RADOS pool from ceph '''
2233+ cmd = [
2234+ 'ceph', '--id', service,
2235+ 'osd', 'pool', 'delete',
2236+ name, '--yes-i-really-really-mean-it'
2237+ ]
2238+ check_call(cmd)
2239+
2240+
2241+def _keyfile_path(service):
2242+ return KEYFILE.format(service)
2243+
2244+
2245+def _keyring_path(service):
2246+ return KEYRING.format(service)
2247+
2248+
2249+def create_keyring(service, key):
2250+ ''' Create a new Ceph keyring containing key'''
2251+ keyring = _keyring_path(service)
2252+ if os.path.exists(keyring):
2253+ log('ceph: Keyring exists at %s.' % keyring, level=WARNING)
2254+ return
2255+ cmd = [
2256+ 'ceph-authtool',
2257+ keyring,
2258+ '--create-keyring',
2259+ '--name=client.{}'.format(service),
2260+ '--add-key={}'.format(key)
2261+ ]
2262+ check_call(cmd)
2263+ log('ceph: Created new ring at %s.' % keyring, level=INFO)
2264+
2265+
2266+def create_key_file(service, key):
2267+ ''' Create a file containing key '''
2268+ keyfile = _keyfile_path(service)
2269+ if os.path.exists(keyfile):
2270+ log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)
2271+ return
2272+ with open(keyfile, 'w') as fd:
2273+ fd.write(key)
2274+ log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
2275+
2276+
2277+def get_ceph_nodes():
2278+ ''' Query named relation 'ceph' to determine current nodes '''
2279+ hosts = []
2280+ for r_id in relation_ids('ceph'):
2281+ for unit in related_units(r_id):
2282+ hosts.append(relation_get('private-address', unit=unit, rid=r_id))
2283+ return hosts
2284+
2285+
2286+def configure(service, key, auth):
2287+ ''' Perform basic configuration of Ceph '''
2288+ create_keyring(service, key)
2289+ create_key_file(service, key)
2290+ hosts = get_ceph_nodes()
2291+ with open('/etc/ceph/ceph.conf', 'w') as ceph_conf:
2292+ ceph_conf.write(CEPH_CONF.format(auth=auth,
2293+ keyring=_keyring_path(service),
2294+ mon_hosts=",".join(map(str, hosts))))
2295+ modprobe('rbd')
2296+
2297+
2298+def image_mapped(name):
2299+ ''' Determine whether a RADOS block device is mapped locally '''
2300+ try:
2301+ out = check_output(['rbd', 'showmapped'])
2302+ except CalledProcessError:
2303+ return False
2304+ else:
2305+ return name in out
2306+
2307+
2308+def map_block_storage(service, pool, image):
2309+ ''' Map a RADOS block device for local use '''
2310+ cmd = [
2311+ 'rbd',
2312+ 'map',
2313+ '{}/{}'.format(pool, image),
2314+ '--user',
2315+ service,
2316+ '--secret',
2317+ _keyfile_path(service),
2318+ ]
2319+ check_call(cmd)
2320+
2321+
2322+def filesystem_mounted(fs):
2323+ ''' Determine whether a filesystem is already mounted '''
2324+ return fs in [f for f, m in mounts()]
2325+
2326+
2327+def make_filesystem(blk_device, fstype='ext4', timeout=10):
2328+ ''' Make a new filesystem on the specified block device '''
2329+ count = 0
2330+ e_noent = os.errno.ENOENT
2331+ while not os.path.exists(blk_device):
2332+ if count >= timeout:
2333+ log('ceph: gave up waiting on block device %s' % blk_device,
2334+ level=ERROR)
2335+ raise IOError(e_noent, os.strerror(e_noent), blk_device)
2336+ log('ceph: waiting for block device %s to appear' % blk_device,
2337+ level=INFO)
2338+ count += 1
2339+ time.sleep(1)
2340+ else:
2341+ log('ceph: Formatting block device %s as filesystem %s.' %
2342+ (blk_device, fstype), level=INFO)
2343+ check_call(['mkfs', '-t', fstype, blk_device])
2344+
2345+
2346+def place_data_on_block_device(blk_device, data_src_dst):
2347+ ''' Migrate data in data_src_dst to blk_device and then remount '''
2348+ # mount block device into /mnt
2349+ mount(blk_device, '/mnt')
2350+ # copy data to /mnt
2351+ copy_files(data_src_dst, '/mnt')
2352+ # umount block device
2353+ umount('/mnt')
2354+ # Grab user/group ID's from original source
2355+ _dir = os.stat(data_src_dst)
2356+ uid = _dir.st_uid
2357+ gid = _dir.st_gid
2358+ # re-mount where the data should originally be
2359+ # TODO: persist is currently a NO-OP in core.host
2360+ mount(blk_device, data_src_dst, persist=True)
2361+ # ensure original ownership of new mount.
2362+ os.chown(data_src_dst, uid, gid)
2363+
2364+
2365+# TODO: re-use
2366+def modprobe(module):
2367+ ''' Load a kernel module and configure for auto-load on reboot '''
2368+ log('ceph: Loading kernel module', level=INFO)
2369+ cmd = ['modprobe', module]
2370+ check_call(cmd)
2371+ with open('/etc/modules', 'r+') as modules:
2372+ if module not in modules.read():
2373+ modules.write(module)
2374+
2375+
2376+def copy_files(src, dst, symlinks=False, ignore=None):
2377+ ''' Copy files from src to dst '''
2378+ for item in os.listdir(src):
2379+ s = os.path.join(src, item)
2380+ d = os.path.join(dst, item)
2381+ if os.path.isdir(s):
2382+ shutil.copytree(s, d, symlinks, ignore)
2383+ else:
2384+ shutil.copy2(s, d)
2385+
2386+
2387+def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
2388+ blk_device, fstype, system_services=[]):
2389+ """
2390+ NOTE: This function must only be called from a single service unit for
2391+ the same rbd_img otherwise data loss will occur.
2392+
2393+ Ensures given pool and RBD image exists, is mapped to a block device,
2394+ and the device is formatted and mounted at the given mount_point.
2395+
2396+ If formatting a device for the first time, data existing at mount_point
2397+ will be migrated to the RBD device before being re-mounted.
2398+
2399+ All services listed in system_services will be stopped prior to data
2400+ migration and restarted when complete.
2401+ """
2402+ # Ensure pool, RBD image, RBD mappings are in place.
2403+ if not pool_exists(service, pool):
2404+ log('ceph: Creating new pool {}.'.format(pool))
2405+ create_pool(service, pool)
2406+
2407+ if not rbd_exists(service, pool, rbd_img):
2408+ log('ceph: Creating RBD image ({}).'.format(rbd_img))
2409+ create_rbd_image(service, pool, rbd_img, sizemb)
2410+
2411+ if not image_mapped(rbd_img):
2412+ log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))
2413+ map_block_storage(service, pool, rbd_img)
2414+
2415+ # make file system
2416+ # TODO: What happens if for whatever reason this is run again and
2417+ # the data is already in the rbd device and/or is mounted??
2418+ # When it is mounted already, it will fail to make the fs
2419+ # XXX: This is really sketchy! Need to at least add an fstab entry
2420+ # otherwise this hook will blow away existing data if its executed
2421+ # after a reboot.
2422+ if not filesystem_mounted(mount_point):
2423+ make_filesystem(blk_device, fstype)
2424+
2425+ for svc in system_services:
2426+ if service_running(svc):
2427+ log('ceph: Stopping service {} prior to migrating data.'
2428+ .format(svc))
2429+ service_stop(svc)
2430+
2431+ place_data_on_block_device(blk_device, mount_point)
2432+
2433+ for svc in system_services:
2434+ log('ceph: Starting service {} after migrating data.'
2435+ .format(svc))
2436+ service_start(svc)
2437+
2438+
2439+def ensure_ceph_keyring(service, user=None, group=None):
2440+ '''
2441+ Ensures a ceph keyring is created for a named service
2442+ and optionally ensures user and group ownership.
2443+
2444+ Returns False if no ceph key is available in relation state.
2445+ '''
2446+ key = None
2447+ for rid in relation_ids('ceph'):
2448+ for unit in related_units(rid):
2449+ key = relation_get('key', rid=rid, unit=unit)
2450+ if key:
2451+ break
2452+ if not key:
2453+ return False
2454+ create_keyring(service=service, key=key)
2455+ keyring = _keyring_path(service)
2456+ if user and group:
2457+ check_call(['chown', '%s.%s' % (user, group), keyring])
2458+ return True
2459
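To show the intended call order of the Ceph helpers above, a ceph-relation-changed hook might look roughly like this; the service/pool names, mount point and block device are assumptions, and it presumes the ceph charm publishes 'key' and 'auth' on the relation.

    from charmhelpers.contrib.storage.linux.ceph import (
        configure,
        ensure_ceph_keyring,
        ensure_ceph_storage,
    )
    from charmhelpers.core.hookenv import log, relation_get

    def ceph_changed():
        if not ensure_ceph_keyring(service='cinder', user='cinder',
                                   group='cinder'):
            log('Could not create ceph keyring: peer not ready?')
            return
        configure(service='cinder', key=relation_get('key'),
                  auth=relation_get('auth'))
        ensure_ceph_storage(service='cinder', pool='cinder',
                            rbd_img='cinder', sizemb=1024,
                            mount_point='/var/lib/cinder',
                            blk_device='/dev/rbd1', fstype='ext4',
                            system_services=['cinder-volume'])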
2460=== added file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
2461--- hooks/charmhelpers/contrib/storage/linux/loopback.py 1970-01-01 00:00:00 +0000
2462+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2013-10-16 18:49:46 +0000
2463@@ -0,0 +1,62 @@
2464+
2465+import os
2466+import re
2467+
2468+from subprocess import (
2469+ check_call,
2470+ check_output,
2471+)
2472+
2473+
2474+##################################################
2475+# loopback device helpers.
2476+##################################################
2477+def loopback_devices():
2478+ '''
2479+ Parse through 'losetup -a' output to determine currently mapped
2480+ loopback devices. Output is expected to look like:
2481+
2482+ /dev/loop0: [0807]:961814 (/tmp/my.img)
2483+
2484+ :returns: dict: a dict mapping {loopback_dev: backing_file}
2485+ '''
2486+ loopbacks = {}
2487+ cmd = ['losetup', '-a']
2488+ devs = [d.strip().split(' ') for d in
2489+ check_output(cmd).splitlines() if d != '']
2490+ for dev, _, f in devs:
2491+ loopbacks[dev.replace(':', '')] = re.search('\((\S+)\)', f).groups()[0]
2492+ return loopbacks
2493+
2494+
2495+def create_loopback(file_path):
2496+ '''
2497+ Create a loopback device for a given backing file.
2498+
2499+ :returns: str: Full path to new loopback device (eg, /dev/loop0)
2500+ '''
2501+ file_path = os.path.abspath(file_path)
2502+ check_call(['losetup', '--find', file_path])
2503+ for d, f in loopback_devices().iteritems():
2504+ if f == file_path:
2505+ return d
2506+
2507+
2508+def ensure_loopback_device(path, size):
2509+ '''
2510+ Ensure a loopback device exists for a given backing file path and size.
2511+ If a loopback device is not already mapped to the file, a new one will be created.
2512+
2513+ TODO: Confirm size of found loopback device.
2514+
2515+ :returns: str: Full path to the ensured loopback device (eg, /dev/loop0)
2516+ '''
2517+ for d, f in loopback_devices().iteritems():
2518+ if f == path:
2519+ return d
2520+
2521+ if not os.path.exists(path):
2522+ cmd = ['truncate', '--size', size, path]
2523+ check_call(cmd)
2524+
2525+ return create_loopback(path)
2526
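For charms deployed without a spare block device, the loopback helpers above can back storage with a sparse file; a small sketch, assuming the path and size.

    from charmhelpers.contrib.storage.linux.loopback import (
        ensure_loopback_device,
    )

    # Creates /srv/cinder.img (truncated to 5G) if needed, maps it with
    # losetup and returns the device path, e.g. '/dev/loop0'.
    loop_dev = ensure_loopback_device('/srv/cinder.img', '5G')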
2527=== added file 'hooks/charmhelpers/contrib/storage/linux/lvm.py'
2528--- hooks/charmhelpers/contrib/storage/linux/lvm.py 1970-01-01 00:00:00 +0000
2529+++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2013-10-16 18:49:46 +0000
2530@@ -0,0 +1,88 @@
2531+from subprocess import (
2532+ CalledProcessError,
2533+ check_call,
2534+ check_output,
2535+ Popen,
2536+ PIPE,
2537+)
2538+
2539+
2540+##################################################
2541+# LVM helpers.
2542+##################################################
2543+def deactivate_lvm_volume_group(block_device):
2544+ '''
2545+ Deactivate any volume group associated with an LVM physical volume.
2546+
2547+ :param block_device: str: Full path to LVM physical volume
2548+ '''
2549+ vg = list_lvm_volume_group(block_device)
2550+ if vg:
2551+ cmd = ['vgchange', '-an', vg]
2552+ check_call(cmd)
2553+
2554+
2555+def is_lvm_physical_volume(block_device):
2556+ '''
2557+ Determine whether a block device is initialized as an LVM PV.
2558+
2559+ :param block_device: str: Full path of block device to inspect.
2560+
2561+ :returns: boolean: True if block device is a PV, False if not.
2562+ '''
2563+ try:
2564+ check_output(['pvdisplay', block_device])
2565+ return True
2566+ except CalledProcessError:
2567+ return False
2568+
2569+
2570+def remove_lvm_physical_volume(block_device):
2571+ '''
2572+ Remove LVM PV signatures from a given block device.
2573+
2574+ :param block_device: str: Full path of block device to scrub.
2575+ '''
2576+ p = Popen(['pvremove', '-ff', block_device],
2577+ stdin=PIPE)
2578+ p.communicate(input='y\n')
2579+
2580+
2581+def list_lvm_volume_group(block_device):
2582+ '''
2583+ List LVM volume group associated with a given block device.
2584+
2585+ Assumes block device is a valid LVM PV.
2586+
2587+ :param block_device: str: Full path of block device to inspect.
2588+
2589+ :returns: str: Name of volume group associated with block device or None
2590+ '''
2591+ vg = None
2592+ pvd = check_output(['pvdisplay', block_device]).splitlines()
2593+ for l in pvd:
2594+ if l.strip().startswith('VG Name'):
2595+ vg = ' '.join(l.split()).split(' ').pop()
2596+ return vg
2597+
2598+
2599+def create_lvm_physical_volume(block_device):
2600+ '''
2601+ Initialize a block device as an LVM physical volume.
2602+
2603+ :param block_device: str: Full path of block device to initialize.
2604+
2605+ '''
2606+ check_call(['pvcreate', block_device])
2607+
2608+
2609+def create_lvm_volume_group(volume_group, block_device):
2610+ '''
2611+ Create an LVM volume group backed by a given block device.
2612+
2613+ Assumes block device has already been initialized as an LVM PV.
2614+
2615+ :param volume_group: str: Name of volume group to create.
2616+ :param block_device: str: Full path of PV-initialized block device.
2617+ '''
2618+ check_call(['vgcreate', volume_group, block_device])
2619
2620=== added file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
2621--- hooks/charmhelpers/contrib/storage/linux/utils.py 1970-01-01 00:00:00 +0000
2622+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2013-10-16 18:49:46 +0000
2623@@ -0,0 +1,25 @@
2624+from os import stat
2625+from stat import S_ISBLK
2626+
2627+from subprocess import (
2628+ check_call
2629+)
2630+
2631+
2632+def is_block_device(path):
2633+ '''
2634+ Confirm device at path is a valid block device node.
2635+
2636+ :returns: boolean: True if path is a block device, False if not.
2637+ '''
2638+ return S_ISBLK(stat(path).st_mode)
2639+
2640+
2641+def zap_disk(block_device):
2642+ '''
2643+ Clear a block device of its partition table. Relies on sgdisk, which is
2644+ installed as part of the 'gdisk' package in Ubuntu.
2645+
2646+ :param block_device: str: Full path of block device to clean.
2647+ '''
2648+ check_call(['sgdisk', '--zap-all', block_device])
2649
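Taken together, the LVM helpers and zap_disk support the usual clean-and-initialise flow for a volume group; the device path and group name below are assumptions for illustration.

    from charmhelpers.contrib.storage.linux.lvm import (
        create_lvm_physical_volume,
        create_lvm_volume_group,
        deactivate_lvm_volume_group,
        is_lvm_physical_volume,
        remove_lvm_physical_volume,
    )
    from charmhelpers.contrib.storage.linux.utils import zap_disk

    def prepare_volume_group(block_device='/dev/vdb', vg='cinder-volumes'):
        if is_lvm_physical_volume(block_device):
            # Re-initialising: drop any existing VG and PV signatures first.
            deactivate_lvm_volume_group(block_device)
            remove_lvm_physical_volume(block_device)
        else:
            # Wipe any stale partition table before first use.
            zap_disk(block_device)
        create_lvm_physical_volume(block_device)
        create_lvm_volume_group(vg, block_device)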
2650=== added directory 'hooks/charmhelpers/core'
2651=== added file 'hooks/charmhelpers/core/__init__.py'
2652=== added file 'hooks/charmhelpers/core/hookenv.py'
2653--- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000
2654+++ hooks/charmhelpers/core/hookenv.py 2013-10-16 18:49:46 +0000
2655@@ -0,0 +1,340 @@
2656+"Interactions with the Juju environment"
2657+# Copyright 2013 Canonical Ltd.
2658+#
2659+# Authors:
2660+# Charm Helpers Developers <juju@lists.ubuntu.com>
2661+
2662+import os
2663+import json
2664+import yaml
2665+import subprocess
2666+import UserDict
2667+
2668+CRITICAL = "CRITICAL"
2669+ERROR = "ERROR"
2670+WARNING = "WARNING"
2671+INFO = "INFO"
2672+DEBUG = "DEBUG"
2673+MARKER = object()
2674+
2675+cache = {}
2676+
2677+
2678+def cached(func):
2679+ ''' Cache return values for multiple executions of func + args
2680+
2681+ For example:
2682+
2683+ @cached
2684+ def unit_get(attribute):
2685+ pass
2686+
2687+ unit_get('test')
2688+
2689+ will cache the result of unit_get + 'test' for future calls.
2690+ '''
2691+ def wrapper(*args, **kwargs):
2692+ global cache
2693+ key = str((func, args, kwargs))
2694+ try:
2695+ return cache[key]
2696+ except KeyError:
2697+ res = func(*args, **kwargs)
2698+ cache[key] = res
2699+ return res
2700+ return wrapper
2701+
2702+
2703+def flush(key):
2704+ ''' Flushes any entries from function cache where the
2705+ key is found in the function+args '''
2706+ flush_list = []
2707+ for item in cache:
2708+ if key in item:
2709+ flush_list.append(item)
2710+ for item in flush_list:
2711+ del cache[item]
2712+
2713+
2714+def log(message, level=None):
2715+ "Write a message to the juju log"
2716+ command = ['juju-log']
2717+ if level:
2718+ command += ['-l', level]
2719+ command += [message]
2720+ subprocess.call(command)
2721+
2722+
2723+class Serializable(UserDict.IterableUserDict):
2724+ "Wrapper, an object that can be serialized to yaml or json"
2725+
2726+ def __init__(self, obj):
2727+ # wrap the object
2728+ UserDict.IterableUserDict.__init__(self)
2729+ self.data = obj
2730+
2731+ def __getattr__(self, attr):
2732+ # See if this object has attribute.
2733+ if attr in ("json", "yaml", "data"):
2734+ return self.__dict__[attr]
2735+ # Check for attribute in wrapped object.
2736+ got = getattr(self.data, attr, MARKER)
2737+ if got is not MARKER:
2738+ return got
2739+ # Proxy to the wrapped object via dict interface.
2740+ try:
2741+ return self.data[attr]
2742+ except KeyError:
2743+ raise AttributeError(attr)
2744+
2745+ def __getstate__(self):
2746+ # Pickle as a standard dictionary.
2747+ return self.data
2748+
2749+ def __setstate__(self, state):
2750+ # Unpickle into our wrapper.
2751+ self.data = state
2752+
2753+ def json(self):
2754+ "Serialize the object to json"
2755+ return json.dumps(self.data)
2756+
2757+ def yaml(self):
2758+ "Serialize the object to yaml"
2759+ return yaml.dump(self.data)
2760+
2761+
2762+def execution_environment():
2763+ """A convenient bundling of the current execution context"""
2764+ context = {}
2765+ context['conf'] = config()
2766+ if relation_id():
2767+ context['reltype'] = relation_type()
2768+ context['relid'] = relation_id()
2769+ context['rel'] = relation_get()
2770+ context['unit'] = local_unit()
2771+ context['rels'] = relations()
2772+ context['env'] = os.environ
2773+ return context
2774+
2775+
2776+def in_relation_hook():
2777+ "Determine whether we're running in a relation hook"
2778+ return 'JUJU_RELATION' in os.environ
2779+
2780+
2781+def relation_type():
2782+ "The scope for the current relation hook"
2783+ return os.environ.get('JUJU_RELATION', None)
2784+
2785+
2786+def relation_id():
2787+ "The relation ID for the current relation hook"
2788+ return os.environ.get('JUJU_RELATION_ID', None)
2789+
2790+
2791+def local_unit():
2792+ "Local unit ID"
2793+ return os.environ['JUJU_UNIT_NAME']
2794+
2795+
2796+def remote_unit():
2797+ "The remote unit for the current relation hook"
2798+ return os.environ['JUJU_REMOTE_UNIT']
2799+
2800+
2801+def service_name():
2802+ "The name service group this unit belongs to"
2803+ return local_unit().split('/')[0]
2804+
2805+
2806+@cached
2807+def config(scope=None):
2808+ "Juju charm configuration"
2809+ config_cmd_line = ['config-get']
2810+ if scope is not None:
2811+ config_cmd_line.append(scope)
2812+ config_cmd_line.append('--format=json')
2813+ try:
2814+ return json.loads(subprocess.check_output(config_cmd_line))
2815+ except ValueError:
2816+ return None
2817+
2818+
2819+@cached
2820+def relation_get(attribute=None, unit=None, rid=None):
2821+ _args = ['relation-get', '--format=json']
2822+ if rid:
2823+ _args.append('-r')
2824+ _args.append(rid)
2825+ _args.append(attribute or '-')
2826+ if unit:
2827+ _args.append(unit)
2828+ try:
2829+ return json.loads(subprocess.check_output(_args))
2830+ except ValueError:
2831+ return None
2832+
2833+
2834+def relation_set(relation_id=None, relation_settings={}, **kwargs):
2835+ relation_cmd_line = ['relation-set']
2836+ if relation_id is not None:
2837+ relation_cmd_line.extend(('-r', relation_id))
2838+ for k, v in (relation_settings.items() + kwargs.items()):
2839+ if v is None:
2840+ relation_cmd_line.append('{}='.format(k))
2841+ else:
2842+ relation_cmd_line.append('{}={}'.format(k, v))
2843+ subprocess.check_call(relation_cmd_line)
2844+ # Flush cache of any relation-gets for local unit
2845+ flush(local_unit())
2846+
2847+
2848+@cached
2849+def relation_ids(reltype=None):
2850+ "A list of relation_ids"
2851+ reltype = reltype or relation_type()
2852+ relid_cmd_line = ['relation-ids', '--format=json']
2853+ if reltype is not None:
2854+ relid_cmd_line.append(reltype)
2855+ return json.loads(subprocess.check_output(relid_cmd_line)) or []
2857+
2858+
2859+@cached
2860+def related_units(relid=None):
2861+ "A list of related units"
2862+ relid = relid or relation_id()
2863+ units_cmd_line = ['relation-list', '--format=json']
2864+ if relid is not None:
2865+ units_cmd_line.extend(('-r', relid))
2866+ return json.loads(subprocess.check_output(units_cmd_line)) or []
2867+
2868+
2869+@cached
2870+def relation_for_unit(unit=None, rid=None):
2871+ "Get the json represenation of a unit's relation"
2872+ unit = unit or remote_unit()
2873+ relation = relation_get(unit=unit, rid=rid)
2874+ for key in relation:
2875+ if key.endswith('-list'):
2876+ relation[key] = relation[key].split()
2877+ relation['__unit__'] = unit
2878+ return relation
2879+
2880+
2881+@cached
2882+def relations_for_id(relid=None):
2883+ "Get relations of a specific relation ID"
2884+ relation_data = []
2885+ relid = relid or relation_id()
2886+ for unit in related_units(relid):
2887+ unit_data = relation_for_unit(unit, relid)
2888+ unit_data['__relid__'] = relid
2889+ relation_data.append(unit_data)
2890+ return relation_data
2891+
2892+
2893+@cached
2894+def relations_of_type(reltype=None):
2895+ "Get relations of a specific type"
2896+ relation_data = []
2897+ reltype = reltype or relation_type()
2898+ for relid in relation_ids(reltype):
2899+ for relation in relations_for_id(relid):
2900+ relation['__relid__'] = relid
2901+ relation_data.append(relation)
2902+ return relation_data
2903+
2904+
2905+@cached
2906+def relation_types():
2907+ "Get a list of relation types supported by this charm"
2908+ charmdir = os.environ.get('CHARM_DIR', '')
2909+ mdf = open(os.path.join(charmdir, 'metadata.yaml'))
2910+ md = yaml.safe_load(mdf)
2911+ rel_types = []
2912+ for key in ('provides', 'requires', 'peers'):
2913+ section = md.get(key)
2914+ if section:
2915+ rel_types.extend(section.keys())
2916+ mdf.close()
2917+ return rel_types
2918+
2919+
2920+@cached
2921+def relations():
2922+ rels = {}
2923+ for reltype in relation_types():
2924+ relids = {}
2925+ for relid in relation_ids(reltype):
2926+ units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
2927+ for unit in related_units(relid):
2928+ reldata = relation_get(unit=unit, rid=relid)
2929+ units[unit] = reldata
2930+ relids[relid] = units
2931+ rels[reltype] = relids
2932+ return rels
2933+
2934+
2935+def open_port(port, protocol="TCP"):
2936+ "Open a service network port"
2937+ _args = ['open-port']
2938+ _args.append('{}/{}'.format(port, protocol))
2939+ subprocess.check_call(_args)
2940+
2941+
2942+def close_port(port, protocol="TCP"):
2943+ "Close a service network port"
2944+ _args = ['close-port']
2945+ _args.append('{}/{}'.format(port, protocol))
2946+ subprocess.check_call(_args)
2947+
2948+
2949+@cached
2950+def unit_get(attribute):
2951+ _args = ['unit-get', '--format=json', attribute]
2952+ try:
2953+ return json.loads(subprocess.check_output(_args))
2954+ except ValueError:
2955+ return None
2956+
2957+
2958+def unit_private_ip():
2959+ return unit_get('private-address')
2960+
2961+
2962+class UnregisteredHookError(Exception):
2963+ pass
2964+
2965+
2966+class Hooks(object):
2967+ def __init__(self):
2968+ super(Hooks, self).__init__()
2969+ self._hooks = {}
2970+
2971+ def register(self, name, function):
2972+ self._hooks[name] = function
2973+
2974+ def execute(self, args):
2975+ hook_name = os.path.basename(args[0])
2976+ if hook_name in self._hooks:
2977+ self._hooks[hook_name]()
2978+ else:
2979+ raise UnregisteredHookError(hook_name)
2980+
2981+ def hook(self, *hook_names):
2982+ def wrapper(decorated):
2983+ for hook_name in hook_names:
2984+ self.register(hook_name, decorated)
2985+ else:
2986+ self.register(decorated.__name__, decorated)
2987+ if '_' in decorated.__name__:
2988+ self.register(
2989+ decorated.__name__.replace('_', '-'), decorated)
2990+ return decorated
2991+ return wrapper
2992+
2993+
2994+def charm_dir():
2995+ return os.environ.get('CHARM_DIR')
2996
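For reference, the Hooks dispatcher at the end of this file is normally wired up from a charm's hook scripts roughly as below; the hook names and log messages are only examples.

    import sys
    from charmhelpers.core.hookenv import Hooks, UnregisteredHookError, log

    hooks = Hooks()

    @hooks.hook('install')
    def install():
        log('installing packages')

    @hooks.hook('config-changed', 'upgrade-charm')
    def config_changed():
        log('rendering configuration')

    if __name__ == '__main__':
        try:
            hooks.execute(sys.argv)
        except UnregisteredHookError as e:
            log('Unknown hook {} - skipping.'.format(e))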
2997=== added file 'hooks/charmhelpers/core/host.py'
2998--- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000
2999+++ hooks/charmhelpers/core/host.py 2013-10-16 18:49:46 +0000
3000@@ -0,0 +1,241 @@
3001+"""Tools for working with the host system"""
3002+# Copyright 2012 Canonical Ltd.
3003+#
3004+# Authors:
3005+# Nick Moffitt <nick.moffitt@canonical.com>
3006+# Matthew Wedgwood <matthew.wedgwood@canonical.com>
3007+
3008+import os
3009+import pwd
3010+import grp
3011+import random
3012+import string
3013+import subprocess
3014+import hashlib
3015+
3016+from collections import OrderedDict
3017+
3018+from hookenv import log
3019+
3020+
3021+def service_start(service_name):
3022+ return service('start', service_name)
3023+
3024+
3025+def service_stop(service_name):
3026+ return service('stop', service_name)
3027+
3028+
3029+def service_restart(service_name):
3030+ return service('restart', service_name)
3031+
3032+
3033+def service_reload(service_name, restart_on_failure=False):
3034+ service_result = service('reload', service_name)
3035+ if not service_result and restart_on_failure:
3036+ service_result = service('restart', service_name)
3037+ return service_result
3038+
3039+
3040+def service(action, service_name):
3041+ cmd = ['service', service_name, action]
3042+ return subprocess.call(cmd) == 0
3043+
3044+
3045+def service_running(service):
3046+ try:
3047+ output = subprocess.check_output(['service', service, 'status'])
3048+ except subprocess.CalledProcessError:
3049+ return False
3050+ else:
3051+ if ("start/running" in output or "is running" in output):
3052+ return True
3053+ else:
3054+ return False
3055+
3056+
3057+def adduser(username, password=None, shell='/bin/bash', system_user=False):
3058+ """Add a user"""
3059+ try:
3060+ user_info = pwd.getpwnam(username)
3061+ log('user {0} already exists!'.format(username))
3062+ except KeyError:
3063+ log('creating user {0}'.format(username))
3064+ cmd = ['useradd']
3065+ if system_user or password is None:
3066+ cmd.append('--system')
3067+ else:
3068+ cmd.extend([
3069+ '--create-home',
3070+ '--shell', shell,
3071+ '--password', password,
3072+ ])
3073+ cmd.append(username)
3074+ subprocess.check_call(cmd)
3075+ user_info = pwd.getpwnam(username)
3076+ return user_info
3077+
3078+
3079+def add_user_to_group(username, group):
3080+ """Add a user to a group"""
3081+ cmd = [
3082+ 'gpasswd', '-a',
3083+ username,
3084+ group
3085+ ]
3086+ log("Adding user {} to group {}".format(username, group))
3087+ subprocess.check_call(cmd)
3088+
3089+
3090+def rsync(from_path, to_path, flags='-r', options=None):
3091+ """Replicate the contents of a path"""
3092+ options = options or ['--delete', '--executability']
3093+ cmd = ['/usr/bin/rsync', flags]
3094+ cmd.extend(options)
3095+ cmd.append(from_path)
3096+ cmd.append(to_path)
3097+ log(" ".join(cmd))
3098+ return subprocess.check_output(cmd).strip()
3099+
3100+
3101+def symlink(source, destination):
3102+ """Create a symbolic link"""
3103+ log("Symlinking {} as {}".format(source, destination))
3104+ cmd = [
3105+ 'ln',
3106+ '-sf',
3107+ source,
3108+ destination,
3109+ ]
3110+ subprocess.check_call(cmd)
3111+
3112+
3113+def mkdir(path, owner='root', group='root', perms=0555, force=False):
3114+ """Create a directory"""
3115+ log("Making dir {} {}:{} {:o}".format(path, owner, group,
3116+ perms))
3117+ uid = pwd.getpwnam(owner).pw_uid
3118+ gid = grp.getgrnam(group).gr_gid
3119+ realpath = os.path.abspath(path)
3120+ if os.path.exists(realpath):
3121+ if force and not os.path.isdir(realpath):
3122+ log("Removing non-directory file {} prior to mkdir()".format(path))
3123+ os.unlink(realpath)
3124+ else:
3125+ os.makedirs(realpath, perms)
3126+ os.chown(realpath, uid, gid)
3127+
3128+
3129+def write_file(path, content, owner='root', group='root', perms=0444):
3130+ """Create or overwrite a file with the contents of a string"""
3131+ log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
3132+ uid = pwd.getpwnam(owner).pw_uid
3133+ gid = grp.getgrnam(group).gr_gid
3134+ with open(path, 'w') as target:
3135+ os.fchown(target.fileno(), uid, gid)
3136+ os.fchmod(target.fileno(), perms)
3137+ target.write(content)
3138+
3139+
3140+def mount(device, mountpoint, options=None, persist=False):
3141+ '''Mount a filesystem'''
3142+ cmd_args = ['mount']
3143+ if options is not None:
3144+ cmd_args.extend(['-o', options])
3145+ cmd_args.extend([device, mountpoint])
3146+ try:
3147+ subprocess.check_output(cmd_args)
3148+ except subprocess.CalledProcessError, e:
3149+ log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
3150+ return False
3151+ if persist:
3152+ # TODO: update fstab
3153+ pass
3154+ return True
3155+
3156+
3157+def umount(mountpoint, persist=False):
3158+ '''Unmount a filesystem'''
3159+ cmd_args = ['umount', mountpoint]
3160+ try:
3161+ subprocess.check_output(cmd_args)
3162+ except subprocess.CalledProcessError, e:
3163+ log('Error unmounting {}\n{}'.format(mountpoint, e.output))
3164+ return False
3165+ if persist:
3166+ # TODO: update fstab
3167+ pass
3168+ return True
3169+
3170+
3171+def mounts():
3172+ '''List of all mounted volumes as [[mountpoint,device],[...]]'''
3173+ with open('/proc/mounts') as f:
3174+ # [['/mount/point','/dev/path'],[...]]
3175+ system_mounts = [m[1::-1] for m in [l.strip().split()
3176+ for l in f.readlines()]]
3177+ return system_mounts
3178+
3179+
3180+def file_hash(path):
3181+ ''' Generate an md5 hash of the contents of 'path' or None if not found '''
3182+ if os.path.exists(path):
3183+ h = hashlib.md5()
3184+ with open(path, 'r') as source:
3185+ h.update(source.read()) # IGNORE:E1101 - it does have update
3186+ return h.hexdigest()
3187+ else:
3188+ return None
3189+
3190+
3191+def restart_on_change(restart_map):
3192+ ''' Restart services based on configuration files changing
3193+
3194+ This function is used as a decorator, for example
3195+
3196+ @restart_on_change({
3197+ '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
3198+ })
3199+ def ceph_client_changed():
3200+ ...
3201+
3202+ In this example, the cinder-api and cinder-volume services
3203+ would be restarted if /etc/ceph/ceph.conf is changed by the
3204+ ceph_client_changed function.
3205+ '''
3206+ def wrap(f):
3207+ def wrapped_f(*args):
3208+ checksums = {}
3209+ for path in restart_map:
3210+ checksums[path] = file_hash(path)
3211+ f(*args)
3212+ restarts = []
3213+ for path in restart_map:
3214+ if checksums[path] != file_hash(path):
3215+ restarts += restart_map[path]
3216+ for service_name in list(OrderedDict.fromkeys(restarts)):
3217+ service('restart', service_name)
3218+ return wrapped_f
3219+ return wrap
3220+
3221+
3222+def lsb_release():
3223+ '''Return /etc/lsb-release in a dict'''
3224+ d = {}
3225+ with open('/etc/lsb-release', 'r') as lsb:
3226+ for l in lsb:
3227+ k, v = l.split('=')
3228+ d[k.strip()] = v.strip()
3229+ return d
3230+
3231+
3232+def pwgen(length=None):
3233+ '''Generate a random password.'''
3234+ if length is None:
3235+ length = random.choice(range(35, 45))
3236+ alphanumeric_chars = [
3237+ l for l in (string.letters + string.digits)
3238+ if l not in 'l0QD1vAEIOUaeiou']
3239+ random_chars = [
3240+ random.choice(alphanumeric_chars) for _ in range(length)]
3241+ return(''.join(random_chars))
3242
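The restart_on_change docstring above shows the decorator form; one way it might be combined with write_file is sketched below, with the config path, ownership and service name assumed purely for illustration.

    from charmhelpers.core.host import restart_on_change, write_file

    @restart_on_change({'/etc/cinder/cinder.conf': ['cinder-volume']})
    def write_config(content):
        # cinder-volume is only restarted if the file hash actually changes.
        write_file('/etc/cinder/cinder.conf', content,
                   owner='cinder', group='cinder', perms=0640)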
3243=== added directory 'hooks/charmhelpers/fetch'
3244=== added file 'hooks/charmhelpers/fetch/__init__.py'
3245--- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000
3246+++ hooks/charmhelpers/fetch/__init__.py 2013-10-16 18:49:46 +0000
3247@@ -0,0 +1,209 @@
3248+import importlib
3249+from yaml import safe_load
3250+from charmhelpers.core.host import (
3251+ lsb_release
3252+)
3253+from urlparse import (
3254+ urlparse,
3255+ urlunparse,
3256+)
3257+import subprocess
3258+from charmhelpers.core.hookenv import (
3259+ config,
3260+ log,
3261+)
3262+import apt_pkg
3263+
3264+CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
3265+deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
3266+"""
3267+PROPOSED_POCKET = """# Proposed
3268+deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
3269+"""
3270+
3271+
3272+def filter_installed_packages(packages):
3273+ """Returns a list of packages that require installation"""
3274+ apt_pkg.init()
3275+ cache = apt_pkg.Cache()
3276+ _pkgs = []
3277+ for package in packages:
3278+ try:
3279+ p = cache[package]
3280+ p.current_ver or _pkgs.append(package)
3281+ except KeyError:
3282+ log('Package {} has no installation candidate.'.format(package),
3283+ level='WARNING')
3284+ _pkgs.append(package)
3285+ return _pkgs
3286+
3287+
3288+def apt_install(packages, options=None, fatal=False):
3289+ """Install one or more packages"""
3290+ options = options or []
3291+ cmd = ['apt-get', '-y']
3292+ cmd.extend(options)
3293+ cmd.append('install')
3294+ if isinstance(packages, basestring):
3295+ cmd.append(packages)
3296+ else:
3297+ cmd.extend(packages)
3298+ log("Installing {} with options: {}".format(packages,
3299+ options))
3300+ if fatal:
3301+ subprocess.check_call(cmd)
3302+ else:
3303+ subprocess.call(cmd)
3304+
3305+
3306+def apt_update(fatal=False):
3307+ """Update local apt cache"""
3308+ cmd = ['apt-get', 'update']
3309+ if fatal:
3310+ subprocess.check_call(cmd)
3311+ else:
3312+ subprocess.call(cmd)
3313+
3314+
3315+def apt_purge(packages, fatal=False):
3316+ """Purge one or more packages"""
3317+ cmd = ['apt-get', '-y', 'purge']
3318+ if isinstance(packages, basestring):
3319+ cmd.append(packages)
3320+ else:
3321+ cmd.extend(packages)
3322+ log("Purging {}".format(packages))
3323+ if fatal:
3324+ subprocess.check_call(cmd)
3325+ else:
3326+ subprocess.call(cmd)
3327+
3328+
3329+def add_source(source, key=None):
3330+ if ((source.startswith('ppa:') or
3331+ source.startswith('http:'))):
3332+ subprocess.check_call(['add-apt-repository', '--yes', source])
3333+ elif source.startswith('cloud:'):
3334+ apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
3335+ fatal=True)
3336+ pocket = source.split(':')[-1]
3337+ with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
3338+ apt.write(CLOUD_ARCHIVE.format(pocket))
3339+ elif source == 'proposed':
3340+ release = lsb_release()['DISTRIB_CODENAME']
3341+ with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
3342+ apt.write(PROPOSED_POCKET.format(release))
3343+ if key:
3344+ subprocess.check_call(['apt-key', 'import', key])
3345+
3346+
3347+class SourceConfigError(Exception):
3348+ pass
3349+
3350+
3351+def configure_sources(update=False,
3352+ sources_var='install_sources',
3353+ keys_var='install_keys'):
3354+ """
3355+ Configure multiple sources from charm configuration
3356+
3357+ Example config:
3358+ install_sources:
3359+ - "ppa:foo"
3360+ - "http://example.com/repo precise main"
3361+ install_keys:
3362+ - null
3363+ - "a1b2c3d4"
3364+
3365+ Note that 'null' (a.k.a. None) should not be quoted.
3366+ """
3367+ sources = safe_load(config(sources_var))
3368+ keys = safe_load(config(keys_var))
3369+ if isinstance(sources, basestring) and isinstance(keys, basestring):
3370+ add_source(sources, keys)
3371+ else:
3372+ if not len(sources) == len(keys):
3373+ msg = 'Install sources and keys lists are different lengths'
3374+ raise SourceConfigError(msg)
3375+ for src_num in range(len(sources)):
3376+ add_source(sources[src_num], keys[src_num])
3377+ if update:
3378+ apt_update(fatal=True)
3379+
3380+# The order of this list is very important. Handlers should be listed in
3381+# order from least- to most-specific URL matching.
3382+FETCH_HANDLERS = (
3383+ 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
3384+ 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
3385+)
3386+
3387+
3388+class UnhandledSource(Exception):
3389+ pass
3390+
3391+
3392+def install_remote(source):
3393+ """
3394+ Install a file tree from a remote source
3395+
3396+ The specified source should be a url of the form:
3397+ scheme://[host]/path[#[option=value][&...]]
3398+
3400+ Schemes supported are based on this module's submodules.
3401+ Options supported are submodule-specific."""
3401+ # We ONLY check for True here because can_handle may return a string
3402+ # explaining why it can't handle a given source.
3403+ handlers = [h for h in plugins() if h.can_handle(source) is True]
3404+ installed_to = None
3405+ for handler in handlers:
3406+ try:
3407+ installed_to = handler.install(source)
3408+ except UnhandledSource:
3409+ pass
3410+ if not installed_to:
3411+ raise UnhandledSource("No handler found for source {}".format(source))
3412+ return installed_to
3413+
3414+
3415+def install_from_config(config_var_name):
3416+ charm_config = config()
3417+ source = charm_config[config_var_name]
3418+ return install_remote(source)
3419+
3420+
3421+class BaseFetchHandler(object):
3422+ """Base class for FetchHandler implementations in fetch plugins"""
3423+ def can_handle(self, source):
3424+ """Returns True if the source can be handled. Otherwise returns
3425+ a string explaining why it cannot"""
3426+ return "Wrong source type"
3427+
3428+ def install(self, source):
3429+ """Try to download and unpack the source. Return the path to the
3430+ unpacked files or raise UnhandledSource."""
3431+ raise UnhandledSource("Wrong source type {}".format(source))
3432+
3433+ def parse_url(self, url):
3434+ return urlparse(url)
3435+
3436+ def base_url(self, url):
3437+ """Return url without querystring or fragment"""
3438+ parts = list(self.parse_url(url))
3439+ parts[4:] = ['' for i in parts[4:]]
3440+ return urlunparse(parts)
3441+
3442+
3443+def plugins(fetch_handlers=None):
3444+ if not fetch_handlers:
3445+ fetch_handlers = FETCH_HANDLERS
3446+ plugin_list = []
3447+ for handler_name in fetch_handlers:
3448+ package, classname = handler_name.rsplit('.', 1)
3449+ try:
3450+ handler_class = getattr(importlib.import_module(package), classname)
3451+ plugin_list.append(handler_class())
3452+ except (ImportError, AttributeError):
3453+ # Skip missing plugins so that they can be omitted from
3454+ # installation if desired
3455+ log("FetchHandler {} not found, skipping plugin".format(handler_name))
3456+ return plugin_list
3457
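A common idiom built from these helpers is to install only what is missing so repeated hook runs stay cheap; a short sketch with illustrative package names.

    from charmhelpers.fetch import (
        apt_install,
        apt_update,
        filter_installed_packages,
    )

    PACKAGES = ['ceph-common', 'python-jinja2', 'lvm2']

    apt_update(fatal=True)
    # Anything already installed is filtered out, so apt-get stays a
    # no-op on re-runs.
    apt_install(filter_installed_packages(PACKAGES), fatal=True)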
3458=== added file 'hooks/charmhelpers/fetch/archiveurl.py'
3459--- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000
3460+++ hooks/charmhelpers/fetch/archiveurl.py 2013-10-16 18:49:46 +0000
3461@@ -0,0 +1,48 @@
3462+import os
3463+import urllib2
3464+from charmhelpers.fetch import (
3465+ BaseFetchHandler,
3466+ UnhandledSource
3467+)
3468+from charmhelpers.payload.archive import (
3469+ get_archive_handler,
3470+ extract,
3471+)
3472+from charmhelpers.core.host import mkdir
3473+
3474+
3475+class ArchiveUrlFetchHandler(BaseFetchHandler):
3476+ """Handler for archives via generic URLs"""
3477+ def can_handle(self, source):
3478+ url_parts = self.parse_url(source)
3479+ if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
3480+ return "Wrong source type"
3481+ if get_archive_handler(self.base_url(source)):
3482+ return True
3483+ return False
3484+
3485+ def download(self, source, dest):
3486+ # propagate all exceptions
3487+ # URLError, OSError, etc
3488+ response = urllib2.urlopen(source)
3489+ try:
3490+ with open(dest, 'w') as dest_file:
3491+ dest_file.write(response.read())
3492+ except Exception as e:
3493+ if os.path.isfile(dest):
3494+ os.unlink(dest)
3495+ raise e
3496+
3497+ def install(self, source):
3498+ url_parts = self.parse_url(source)
3499+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
3500+ if not os.path.exists(dest_dir):
3501+ mkdir(dest_dir, perms=0755)
3502+ dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
3503+ try:
3504+ self.download(source, dld_file)
3505+ except urllib2.URLError as e:
3506+ raise UnhandledSource(e.reason)
3507+ except OSError as e:
3508+ raise UnhandledSource(e.strerror)
3509+ return extract(dld_file)
3510
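This handler plugs into install_remote() from fetch/__init__.py; a hedged sketch follows, with a placeholder URL. Note that archiveurl.py imports charmhelpers.payload.archive, which does not appear to be synced in this branch, so plugins() would skip the handler with an ImportError warning until that module is added.

    from charmhelpers.fetch import install_remote

    # Downloads the archive into $CHARM_DIR/fetched and unpacks it,
    # returning the path to the extracted tree.
    path = install_remote('http://example.com/payload.tar.gz')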
3511=== added file 'hooks/charmhelpers/fetch/bzrurl.py'
3512--- hooks/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000
3513+++ hooks/charmhelpers/fetch/bzrurl.py 2013-10-16 18:49:46 +0000
3514@@ -0,0 +1,49 @@
3515+import os
3516+from charmhelpers.fetch import (
3517+ BaseFetchHandler,
3518+ UnhandledSource
3519+)
3520+from charmhelpers.core.host import mkdir
3521+
3522+try:
3523+ from bzrlib.branch import Branch
3524+except ImportError:
3525+ from charmhelpers.fetch import apt_install
3526+ apt_install("python-bzrlib")
3527+ from bzrlib.branch import Branch
3528+
3529+class BzrUrlFetchHandler(BaseFetchHandler):
3530+ """Handler for bazaar branches via generic and lp URLs"""
3531+ def can_handle(self, source):
3532+ url_parts = self.parse_url(source)
3533+ if url_parts.scheme not in ('bzr+ssh', 'lp'):
3534+ return False
3535+ else:
3536+ return True
3537+
3538+ def branch(self, source, dest):
3539+ url_parts = self.parse_url(source)
3540+ # If we use lp:branchname scheme we need to load plugins
3541+ if not self.can_handle(source):
3542+ raise UnhandledSource("Cannot handle {}".format(source))
3543+ if url_parts.scheme == "lp":
3544+ from bzrlib.plugin import load_plugins
3545+ load_plugins()
3546+ try:
3547+ remote_branch = Branch.open(source)
3548+ remote_branch.bzrdir.sprout(dest).open_branch()
3549+ except Exception as e:
3550+ raise e
3551+
3552+ def install(self, source):
3553+ url_parts = self.parse_url(source)
3554+ branch_name = url_parts.path.strip("/").split("/")[-1]
3555+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name)
3556+ if not os.path.exists(dest_dir):
3557+ mkdir(dest_dir, perms=0755)
3558+ try:
3559+ self.branch(source, dest_dir)
3560+ except OSError as e:
3561+ raise UnhandledSource(e.strerror)
3562+ return dest_dir
3563+
3564
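And the bzr counterpart: only lp: and bzr+ssh: URLs pass can_handle(), and install() sprouts the branch into $CHARM_DIR/fetched/<branch name>. A sketch with an illustrative branch URL:

    import os
    from charmhelpers.fetch.bzrurl import BzrUrlFetchHandler

    os.environ.setdefault('CHARM_DIR', '/tmp/example-charm')  # placeholder dir
    handler = BzrUrlFetchHandler()
    source = 'lp:charm-helpers'  # any lp: or bzr+ssh: branch URL
    if handler.can_handle(source):
        print(handler.install(source))  # e.g. /tmp/example-charm/fetched/charm-helpers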
3565=== added directory 'hooks/charmhelpers/payload'
3566=== added file 'hooks/charmhelpers/payload/__init__.py'
3567--- hooks/charmhelpers/payload/__init__.py 1970-01-01 00:00:00 +0000
3568+++ hooks/charmhelpers/payload/__init__.py 2013-10-16 18:49:46 +0000
3569@@ -0,0 +1,1 @@
3570+"Tools for working with files injected into a charm just before deployment."
3571
3572=== added file 'hooks/charmhelpers/payload/execd.py'
3573--- hooks/charmhelpers/payload/execd.py 1970-01-01 00:00:00 +0000
3574+++ hooks/charmhelpers/payload/execd.py 2013-10-16 18:49:46 +0000
3575@@ -0,0 +1,50 @@
3576+#!/usr/bin/env python
3577+
3578+import os
3579+import sys
3580+import subprocess
3581+from charmhelpers.core import hookenv
3582+
3583+
3584+def default_execd_dir():
3585+ return os.path.join(os.environ['CHARM_DIR'], 'exec.d')
3586+
3587+
3588+def execd_module_paths(execd_dir=None):
3589+ """Generate a list of full paths to modules within execd_dir."""
3590+ if not execd_dir:
3591+ execd_dir = default_execd_dir()
3592+
3593+ if not os.path.exists(execd_dir):
3594+ return
3595+
3596+ for subpath in os.listdir(execd_dir):
3597+ module = os.path.join(execd_dir, subpath)
3598+ if os.path.isdir(module):
3599+ yield module
3600+
3601+
3602+def execd_submodule_paths(command, execd_dir=None):
3603+    """Generate a list of full paths to the specified command within execd_dir.
3604+ """
3605+ for module_path in execd_module_paths(execd_dir):
3606+ path = os.path.join(module_path, command)
3607+ if os.access(path, os.X_OK) and os.path.isfile(path):
3608+ yield path
3609+
3610+
3611+def execd_run(command, execd_dir=None, die_on_error=False, stderr=None):
3612+ """Run command for each module within execd_dir which defines it."""
3613+ for submodule_path in execd_submodule_paths(command, execd_dir):
3614+ try:
3615+ subprocess.check_call(submodule_path, shell=True, stderr=stderr)
3616+ except subprocess.CalledProcessError as e:
3617+ hookenv.log("Error ({}) running {}. Output: {}".format(
3618+ e.returncode, e.cmd, e.output))
3619+ if die_on_error:
3620+ sys.exit(e.returncode)
3621+
3622+
3623+def execd_preinstall(execd_dir=None):
3624+ """Run charm-pre-install for each module within execd_dir."""
3625+ execd_run('charm-pre-install', execd_dir=execd_dir)
3626
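
execd_preinstall() above implements the exec.d convention: every executable named charm-pre-install one level below $CHARM_DIR/exec.d is run before the charm's own install steps. A sketch of a hypothetical layout and the calls that consume it:

    # $CHARM_DIR/exec.d/00-proxy-setup/charm-pre-install    <- hypothetical script
    # $CHARM_DIR/exec.d/10-vendor-tweaks/charm-pre-install
    from charmhelpers.payload.execd import execd_module_paths, execd_preinstall

    print(list(execd_module_paths()))  # [".../exec.d/00-proxy-setup", ...]
    execd_preinstall()                 # runs each module's charm-pre-install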
3627=== added symlink 'hooks/cinder-volume-service'
3628=== target is u'cinder_hooks.py'
3629=== added symlink 'hooks/cinder-volume-service-relation-joined'
3630=== target is u'cinder_hooks.py'
3631=== added file 'hooks/cinder_contexts.py'
3632--- hooks/cinder_contexts.py 1970-01-01 00:00:00 +0000
3633+++ hooks/cinder_contexts.py 2013-10-16 18:49:46 +0000
3634@@ -0,0 +1,76 @@
3635+from charmhelpers.core.hookenv import (
3636+ config,
3637+ relation_ids,
3638+ service_name,
3639+)
3640+
3641+from charmhelpers.contrib.openstack.context import (
3642+ OSContextGenerator,
3643+ ApacheSSLContext as SSLContext,
3644+)
3645+
3646+from charmhelpers.contrib.hahelpers.cluster import (
3647+ determine_api_port,
3648+ determine_haproxy_port,
3649+)
3650+
3651+
3652+class ImageServiceContext(OSContextGenerator):
3653+ interfaces = ['image-service']
3654+
3655+ def __call__(self):
3656+ if not relation_ids('image-service'):
3657+ return {}
3658+ return { 'glance_api_version': config('glance-api-version') }
3659+
3660+
3661+class CephContext(OSContextGenerator):
3662+ interfaces = ['ceph']
3663+
3664+ def __call__(self):
3665+ """
3666+ Used to generate template context to be added to cinder.conf in the
3667+ presence of a ceph relation.
3668+ """
3669+ if not relation_ids('ceph'):
3670+ return {}
3671+ service = service_name()
3672+ return {
3673+ 'volume_driver': 'cinder.volume.driver.RBDDriver',
3674+ # ensure_ceph_pool() creates pool based on service name.
3675+ 'rbd_pool': service,
3676+ 'rbd_user': service,
3677+ 'host': service,
3678+ }
3679+
3680+
3681+class HAProxyContext(OSContextGenerator):
3682+ interfaces = ['ceph']
3683+
3684+ def __call__(self):
3685+ '''
3686+ Extends the main charmhelpers HAProxyContext with a port mapping
3687+ specific to this charm.
3688+ Also used to extend cinder.conf context with correct api_listening_port
3689+ '''
3690+ haproxy_port = determine_haproxy_port(config('api-listening-port'))
3691+ api_port = determine_api_port(config('api-listening-port'))
3692+
3693+ ctxt = {
3694+ 'service_ports': {'cinder_api': [haproxy_port, api_port]},
3695+ 'osapi_volume_listen_port': api_port,
3696+ }
3697+ return ctxt
3698+
3699+
3700+class ApacheSSLContext(SSLContext):
3701+ interfaces = ['https']
3702+ external_ports = [8776]
3703+ service_namespace = 'cinder'
3704+
3705+ def __call__(self):
3706+ # late import to work around circular dependency
3707+ from cinder_utils import service_enabled
3708+ if not service_enabled('cinder-api'):
3709+ return {}
3710+ return super(ApacheSSLContext, self).__call__()
3711
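
For reference, a sketch of the context these generators hand to the templates, assuming a ceph relation is present and the deployed service is named 'cinder' (the HAProxyContext ports depend on clustering and SSL state, so they are not spelled out here):

    from cinder_contexts import CephContext

    # Inside a hook with relation_ids('ceph') non-empty and
    # service_name() == 'cinder', the generator returns:
    CephContext()()
    # -> {'volume_driver': 'cinder.volume.driver.RBDDriver',
    #     'rbd_pool': 'cinder', 'rbd_user': 'cinder', 'host': 'cinder'}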
3712=== added file 'hooks/cinder_hooks.py'
3713--- hooks/cinder_hooks.py 1970-01-01 00:00:00 +0000
3714+++ hooks/cinder_hooks.py 2013-10-16 18:49:46 +0000
3715@@ -0,0 +1,276 @@
3716+#!/usr/bin/python
3717+
3718+import os
3719+import sys
3720+
3721+from subprocess import check_call
3722+
3723+from cinder_utils import (
3724+ clean_storage,
3725+ determine_packages,
3726+ do_openstack_upgrade,
3727+ ensure_block_device,
3728+ ensure_ceph_pool,
3729+ juju_log,
3730+ migrate_database,
3731+ prepare_lvm_storage,
3732+ register_configs,
3733+ restart_map,
3734+ service_enabled,
3735+ set_ceph_env_variables,
3736+ CLUSTER_RES,
3737+ CINDER_CONF,
3738+ CINDER_API_CONF,
3739+ CEPH_CONF,
3740+)
3741+
3742+from charmhelpers.core.hookenv import (
3743+ Hooks,
3744+ UnregisteredHookError,
3745+ config,
3746+ relation_get,
3747+ relation_ids,
3748+ relation_set,
3749+ service_name,
3750+ unit_get,
3751+)
3752+
3753+from charmhelpers.fetch import apt_install, apt_update
3754+from charmhelpers.core.host import lsb_release, restart_on_change
3755+
3756+from charmhelpers.contrib.openstack.utils import (
3757+ configure_installation_source, openstack_upgrade_available)
3758+
3759+from charmhelpers.contrib.storage.linux.ceph import ensure_ceph_keyring
3760+
3761+from charmhelpers.contrib.hahelpers.cluster import (
3762+ canonical_url,
3763+ eligible_leader,
3764+ is_leader,
3765+ get_hacluster_config,
3766+)
3767+
3768+from charmhelpers.payload.execd import execd_preinstall
3769+
3770+hooks = Hooks()
3771+
3772+CONFIGS = register_configs()
3773+
3774+
3775+@hooks.hook('install')
3776+def install():
3777+ execd_preinstall()
3778+ conf = config()
3779+ src = conf['openstack-origin']
3780+ if (lsb_release()['DISTRIB_CODENAME'] == 'precise' and
3781+ src == 'distro'):
3782+ src = 'cloud:precise-folsom'
3783+ configure_installation_source(src)
3784+ apt_update()
3785+ apt_install(determine_packages(), fatal=True)
3786+
3787+ if (service_enabled('volume') and
3788+ conf['block-device'] not in [None, 'None', 'none']):
3789+ bdev = ensure_block_device(conf['block-device'])
3790+ juju_log('Located valid block device: %s' % bdev)
3791+ if conf['overwrite'] in ['true', 'True', True]:
3792+ juju_log('Ensuring block device is clean: %s' % bdev)
3793+ clean_storage(bdev)
3794+ prepare_lvm_storage(bdev, conf['volume-group'])
3795+
3796+
3797+@hooks.hook('config-changed')
3798+@restart_on_change(restart_map())
3799+def config_changed():
3800+ if openstack_upgrade_available('cinder-common'):
3801+ do_openstack_upgrade(configs=CONFIGS)
3802+ CONFIGS.write_all()
3803+ configure_https()
3804+
3805+
3806+@hooks.hook('shared-db-relation-joined')
3807+def db_joined():
3808+ conf = config()
3809+ relation_set(database=conf['database'], username=conf['database-user'],
3810+ hostname=unit_get('private-address'))
3811+
3812+
3813+@hooks.hook('shared-db-relation-changed')
3814+@restart_on_change(restart_map())
3815+def db_changed():
3816+ if 'shared-db' not in CONFIGS.complete_contexts():
3817+ juju_log('shared-db relation incomplete. Peer not ready?')
3818+ return
3819+ CONFIGS.write(CINDER_CONF)
3820+ if eligible_leader(CLUSTER_RES):
3821+ juju_log('Cluster leader, performing db sync')
3822+ migrate_database()
3823+
3824+
3825+@hooks.hook('amqp-relation-joined')
3826+def amqp_joined():
3827+ conf = config()
3828+ relation_set(username=conf['rabbit-user'], vhost=conf['rabbit-vhost'])
3829+
3830+
3831+@hooks.hook('amqp-relation-changed')
3832+@restart_on_change(restart_map())
3833+def amqp_changed():
3834+ if 'amqp' not in CONFIGS.complete_contexts():
3835+ juju_log('amqp relation incomplete. Peer not ready?')
3836+ return
3837+ CONFIGS.write(CINDER_CONF)
3838+
3839+
3840+@hooks.hook('identity-service-relation-joined')
3841+def identity_joined(rid=None):
3842+ if not eligible_leader(CLUSTER_RES):
3843+ return
3844+
3845+ conf = config()
3846+
3847+ port = conf['api-listening-port']
3848+ url = canonical_url(CONFIGS) + ':%s/v1/$(tenant_id)s' % port
3849+
3850+ settings = {
3851+ 'region': conf['region'],
3852+ 'service': 'cinder',
3853+ 'public_url': url,
3854+ 'internal_url': url,
3855+ 'admin_url': url,
3856+ }
3857+ relation_set(relation_id=rid, **settings)
3858+
3859+
3860+@hooks.hook('identity-service-relation-changed')
3861+@restart_on_change(restart_map())
3862+def identity_changed():
3863+ if 'identity-service' not in CONFIGS.complete_contexts():
3864+ juju_log('identity-service relation incomplete. Peer not ready?')
3865+ return
3866+ CONFIGS.write(CINDER_API_CONF)
3867+ configure_https()
3868+
3869+
3870+@hooks.hook('ceph-relation-joined')
3871+def ceph_joined():
3872+ if not os.path.isdir('/etc/ceph'):
3873+ os.mkdir('/etc/ceph')
3874+ apt_install('ceph-common', fatal=True)
3875+
3876+
3877+@hooks.hook('ceph-relation-changed')
3878+@restart_on_change(restart_map())
3879+def ceph_changed():
3880+ if 'ceph' not in CONFIGS.complete_contexts():
3881+ juju_log('ceph relation incomplete. Peer not ready?')
3882+ return
3883+ svc = service_name()
3884+ if not ensure_ceph_keyring(service=svc,
3885+ user='cinder', group='cinder'):
3886+ juju_log('Could not create ceph keyring: peer not ready?')
3887+ return
3888+ CONFIGS.write(CEPH_CONF)
3889+ CONFIGS.write(CINDER_CONF)
3890+ set_ceph_env_variables(service=svc)
3891+
3892+ if eligible_leader(CLUSTER_RES):
3893+ _config = config()
3894+ ensure_ceph_pool(service=svc,
3895+ replicas=_config['ceph-osd-replication-count'])
3896+
3897+
3898+@hooks.hook('cluster-relation-changed',
3899+ 'cluster-relation-departed')
3900+@restart_on_change(restart_map())
3901+def cluster_changed():
3902+ CONFIGS.write_all()
3903+
3904+
3905+@hooks.hook('ha-relation-joined')
3906+def ha_joined():
3907+ config = get_hacluster_config()
3908+ resources = {
3909+ 'res_cinder_vip': 'ocf:heartbeat:IPaddr2',
3910+ 'res_cinder_haproxy': 'lsb:haproxy'
3911+ }
3912+
3913+ vip_params = 'params ip="%s" cidr_netmask="%s" nic="%s"' % \
3914+ (config['vip'], config['vip_cidr'], config['vip_iface'])
3915+ resource_params = {
3916+ 'res_cinder_vip': vip_params,
3917+ 'res_cinder_haproxy': 'op monitor interval="5s"'
3918+ }
3919+ init_services = {
3920+ 'res_cinder_haproxy': 'haproxy'
3921+ }
3922+ clones = {
3923+ 'cl_cinder_haproxy': 'res_cinder_haproxy'
3924+ }
3925+ relation_set(init_services=init_services,
3926+ corosync_bindiface=config['ha-bindiface'],
3927+ corosync_mcastport=config['ha-mcastport'],
3928+ resources=resources,
3929+ resource_params=resource_params,
3930+ clones=clones)
3931+
3932+
3933+@hooks.hook('ha-relation-changed')
3934+def ha_changed():
3935+ clustered = relation_get('clustered')
3936+ if not clustered or clustered in [None, 'None', '']:
3937+ juju_log('ha_changed: hacluster subordinate not fully clustered.')
3938+ return
3939+ if not is_leader(CLUSTER_RES):
3940+ juju_log('ha_changed: hacluster complete but we are not leader.')
3941+ return
3942+ juju_log('Cluster configured, notifying other services and updating '
3943+ 'keystone endpoint configuration')
3944+ for rid in relation_ids('identity-service'):
3945+ identity_joined(rid=rid)
3946+
3947+
3948+@hooks.hook('image-service-relation-changed')
3949+@restart_on_change(restart_map())
3950+def image_service_changed():
3951+ CONFIGS.write(CINDER_CONF)
3952+
3953+
3954+@hooks.hook('amqp-relation-broken',
3955+ 'ceph-relation-broken',
3956+ 'identity-service-relation-broken',
3957+ 'image-service-relation-broken',
3958+ 'shared-db-relation-broken')
3959+@restart_on_change(restart_map())
3960+def relation_broken():
3961+ CONFIGS.write_all()
3962+
3963+
3964+def configure_https():
3965+ '''
3966+ Enables SSL API Apache config if appropriate and kicks identity-service
3967+ with any required api updates.
3968+ '''
3969+ # need to write all to ensure changes to the entire request pipeline
3970+    # propagate (c-api, haproxy, apache)
3971+ CONFIGS.write_all()
3972+ if 'https' in CONFIGS.complete_contexts():
3973+ cmd = ['a2ensite', 'openstack_https_frontend']
3974+ check_call(cmd)
3975+ else:
3976+ cmd = ['a2dissite', 'openstack_https_frontend']
3977+ check_call(cmd)
3978+
3979+ for rid in relation_ids('identity-service'):
3980+ identity_joined(rid=rid)
3981+
3982+
3983+def main():
3984+ try:
3985+ hooks.execute(sys.argv)
3986+ except UnregisteredHookError as e:
3987+ juju_log('Unknown hook {} - skipping.'.format(e))
3988+
3989+
3990+if __name__ == '__main__':
3991+ main()
3992
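
Every hook symlink added in this branch points at cinder_hooks.py; the Hooks() registry above dispatches on the basename of the invoked path. A minimal sketch of that pattern, assuming the bundled charmhelpers.core.hookenv behaviour:

    from charmhelpers.core.hookenv import Hooks, UnregisteredHookError

    hooks = Hooks()

    @hooks.hook('config-changed')
    def config_changed():
        print('would regenerate configs here')

    try:
        # Juju invokes the symlink, e.g. hooks/config-changed -> cinder_hooks.py
        hooks.execute(['hooks/config-changed'])
    except UnregisteredHookError as e:
        print('Unknown hook {} - skipping.'.format(e))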
3993=== added file 'hooks/cinder_utils.py'
3994--- hooks/cinder_utils.py 1970-01-01 00:00:00 +0000
3995+++ hooks/cinder_utils.py 2013-10-16 18:49:46 +0000
3996@@ -0,0 +1,361 @@
3997+import os
3998+import subprocess
3999+
4000+from collections import OrderedDict
4001+from copy import copy
4002+
4003+import cinder_contexts
4004+
4005+from charmhelpers.core.hookenv import (
4006+ config,
4007+ relation_ids,
4008+ log,
4009+)
4010+
4011+from charmhelpers.fetch import (
4012+ apt_install,
4013+ apt_update,
4014+)
4015+
4016+from charmhelpers.core.host import (
4017+ mounts,
4018+ umount,
4019+)
4020+
4021+from charmhelpers.contrib.storage.linux.ceph import (
4022+ create_pool as ceph_create_pool,
4023+ pool_exists as ceph_pool_exists,
4024+)
4025+
4026+from charmhelpers.contrib.hahelpers.cluster import (
4027+ eligible_leader,
4028+)
4029+
4030+from charmhelpers.contrib.storage.linux.utils import (
4031+ is_block_device,
4032+ zap_disk,
4033+)
4034+
4035+from charmhelpers.contrib.storage.linux.lvm import (
4036+ create_lvm_physical_volume,
4037+ create_lvm_volume_group,
4038+ deactivate_lvm_volume_group,
4039+ is_lvm_physical_volume,
4040+ remove_lvm_physical_volume,
4041+)
4042+
4043+from charmhelpers.contrib.storage.linux.loopback import (
4044+ ensure_loopback_device,
4045+)
4046+
4047+from charmhelpers.contrib.openstack import (
4048+ templating,
4049+ context,
4050+)
4051+
4052+from charmhelpers.contrib.openstack.utils import (
4053+ configure_installation_source,
4054+ get_os_codename_package,
4055+ get_os_codename_install_source,
4056+)
4057+
4058+
4059+COMMON_PACKAGES = [
4060+ 'apache2',
4061+ 'cinder-common',
4062+ 'gdisk',
4063+ 'haproxy',
4064+ 'python-jinja2',
4065+ 'python-keystoneclient',
4066+ 'python-mysqldb',
4067+ 'qemu-utils',
4068+]
4069+
4070+API_PACKAGES = ['cinder-api']
4071+VOLUME_PACKAGES = ['cinder-volume']
4072+SCHEDULER_PACKAGES = ['cinder-scheduler']
4073+
4074+DEFAULT_LOOPBACK_SIZE = '5G'
4075+
4076+# Cluster resource used to determine leadership when hacluster'd
4077+CLUSTER_RES = 'res_cinder_vip'
4078+
4079+
4080+class CinderCharmError(Exception):
4081+ pass
4082+
4083+CINDER_CONF = '/etc/cinder/cinder.conf'
4084+CINDER_API_CONF = '/etc/cinder/api-paste.ini'
4085+CEPH_CONF = '/etc/ceph/ceph.conf'
4086+HAPROXY_CONF = '/etc/haproxy/haproxy.cfg'
4087+APACHE_SITE_CONF = '/etc/apache2/sites-available/openstack_https_frontend'
4088+APACHE_SITE_24_CONF = '/etc/apache2/sites-available/' \
4089+ 'openstack_https_frontend.conf'
4090+
4091+TEMPLATES = 'templates/'
4092+# Map config files to hook contexts and services that will be associated
4093+# with each file in restart_on_change()'s service map.
4094+CONFIG_FILES = OrderedDict([
4095+ (CINDER_CONF, {
4096+ 'hook_contexts': [context.SharedDBContext(),
4097+ context.AMQPContext(),
4098+ context.ImageServiceContext(),
4099+ cinder_contexts.CephContext(),
4100+ cinder_contexts.HAProxyContext(),
4101+ cinder_contexts.ImageServiceContext()],
4102+ 'services': ['cinder-api', 'cinder-volume',
4103+ 'cinder-scheduler', 'haproxy']
4104+ }),
4105+ (CINDER_API_CONF, {
4106+ 'hook_contexts': [context.IdentityServiceContext()],
4107+ 'services': ['cinder-api'],
4108+ }),
4109+ (CEPH_CONF, {
4110+ 'hook_contexts': [context.CephContext()],
4111+ 'services': ['cinder-volume']
4112+ }),
4113+ (HAPROXY_CONF, {
4114+ 'hook_contexts': [context.HAProxyContext(),
4115+ cinder_contexts.HAProxyContext()],
4116+ 'services': ['haproxy'],
4117+ }),
4118+ (APACHE_SITE_CONF, {
4119+ 'hook_contexts': [cinder_contexts.ApacheSSLContext()],
4120+ 'services': ['apache2'],
4121+ }),
4122+ (APACHE_SITE_24_CONF, {
4123+ 'hook_contexts': [cinder_contexts.ApacheSSLContext()],
4124+ 'services': ['apache2'],
4125+ }),
4126+])
4127+
4128+
4129+def register_configs():
4130+ """
4131+ Register config files with their respective contexts.
4132+    Registration of some configs may not be required depending on
4133+    the existence of certain relations.
4134+ """
4135+    # if called without anything installed (e.g. during the install hook)
4136+    # just default to the earliest supported release. configs don't get touched
4137+ # till post-install, anyway.
4138+ release = get_os_codename_package('cinder-common', fatal=False) or 'folsom'
4139+ configs = templating.OSConfigRenderer(templates_dir=TEMPLATES,
4140+ openstack_release=release)
4141+
4142+ confs = [CINDER_API_CONF,
4143+ CINDER_CONF,
4144+ HAPROXY_CONF]
4145+
4146+ if relation_ids('ceph'):
4147+        # need to create this early; new peers will have a relation during
4148+        # registration before they've run the ceph hooks to create the
4149+ # directory.
4150+ if not os.path.isdir(os.path.dirname(CEPH_CONF)):
4151+ os.mkdir(os.path.dirname(CEPH_CONF))
4152+ confs.append(CEPH_CONF)
4153+
4154+ for conf in confs:
4155+ configs.register(conf, CONFIG_FILES[conf]['hook_contexts'])
4156+
4157+ if os.path.exists('/etc/apache2/conf-available'):
4158+ configs.register(APACHE_SITE_24_CONF,
4159+ CONFIG_FILES[APACHE_SITE_24_CONF]['hook_contexts'])
4160+ else:
4161+ configs.register(APACHE_SITE_CONF,
4162+ CONFIG_FILES[APACHE_SITE_CONF]['hook_contexts'])
4163+ return configs
4164+
4165+
4166+def juju_log(msg):
4167+ log('[cinder] %s' % msg)
4168+
4169+
4170+def determine_packages():
4171+ '''
4172+ Determine list of packages required for the currently enabled services.
4173+
4174+ :returns: list of package names
4175+ '''
4176+ pkgs = copy(COMMON_PACKAGES)
4177+ for s, p in [('api', API_PACKAGES),
4178+ ('volume', VOLUME_PACKAGES),
4179+ ('scheduler', SCHEDULER_PACKAGES)]:
4180+ if service_enabled(s):
4181+ pkgs += p
4182+ return pkgs
4183+
4184+
4185+def service_enabled(service):
4186+ '''
4187+ Determine if a specific cinder service is enabled in charm configuration.
4188+
4189+ :param service: str: cinder service name to query (volume, scheduler, api,
4190+ all)
4191+
4192+ :returns: boolean: True if service is enabled in config, False if not.
4193+ '''
4194+ enabled = config()['enabled-services']
4195+ if enabled == 'all':
4196+ return True
4197+ return service in enabled
4198+
4199+
4200+def restart_map():
4201+ '''
4202+ Determine the correct resource map to be passed to
4203+ charmhelpers.core.restart_on_change() based on the services configured.
4204+
4205+ :returns: dict: A dictionary mapping config file to lists of services
4206+ that should be restarted when file changes.
4207+ '''
4208+ _map = []
4209+ for f, ctxt in CONFIG_FILES.iteritems():
4210+ svcs = []
4211+ for svc in ctxt['services']:
4212+ if svc.startswith('cinder-'):
4213+ if service_enabled(svc.split('-')[1]):
4214+ svcs.append(svc)
4215+ else:
4216+ svcs.append(svc)
4217+ if svcs:
4218+ _map.append((f, svcs))
4219+ return OrderedDict(_map)
4220+
4221+
4222+def prepare_lvm_storage(block_device, volume_group):
4223+ '''
4224+ Ensures block_device is initialized as a LVM PV and creates volume_group.
4225+ Assumes block device is clean and will raise if storage is already
4226+ initialized as a PV.
4227+
4228+ :param block_device: str: Full path to block device to be prepared.
4229+ :param volume_group: str: Name of volume group to be created with
4230+ block_device as backing PV.
4231+
4232+ :returns: None or raises CinderCharmError if storage is unclean.
4233+ '''
4234+ e = None
4235+ if is_lvm_physical_volume(block_device):
4236+ juju_log('ERROR: Could not prepare LVM storage: %s is already '
4237+ 'initialized as LVM physical device.' % block_device)
4238+ raise CinderCharmError
4239+
4240+ try:
4241+ create_lvm_physical_volume(block_device)
4242+ create_lvm_volume_group(volume_group, block_device)
4243+ except Exception as e:
4244+ juju_log('Could not prepare LVM storage on %s.' % block_device)
4245+ juju_log(e)
4246+ raise CinderCharmError
4247+
4248+
4249+def clean_storage(block_device):
4250+ '''
4251+ Ensures a block device is clean. That is:
4252+ - unmounted
4253+ - any lvm volume groups are deactivated
4254+ - any lvm physical device signatures removed
4255+ - partition table wiped
4256+
4257+ :param block_device: str: Full path to block device to clean.
4258+ '''
4259+ for mp, d in mounts():
4260+ if d == block_device:
4261+ juju_log('clean_storage(): Found %s mounted @ %s, unmounting.' %
4262+ (d, mp))
4263+ umount(mp, persist=True)
4264+
4265+ if is_lvm_physical_volume(block_device):
4266+ deactivate_lvm_volume_group(block_device)
4267+ remove_lvm_physical_volume(block_device)
4268+ else:
4269+ zap_disk(block_device)
4270+
4271+
4272+def ensure_block_device(block_device):
4273+ '''
4274+ Confirm block_device, create as loopback if necessary.
4275+
4276+ :param block_device: str: Full path of block device to ensure.
4277+
4278+ :returns: str: Full path of ensured block device.
4279+ '''
4280+ _none = ['None', 'none', None]
4281+ if (block_device in _none):
4282+        juju_log('ensure_block_device(): Missing required input: '
4283+ 'block_device=%s.' % block_device)
4284+ raise CinderCharmError
4285+
4286+ if block_device.startswith('/dev/'):
4287+ bdev = block_device
4288+ elif block_device.startswith('/'):
4289+ _bd = block_device.split('|')
4290+ if len(_bd) == 2:
4291+ bdev, size = _bd
4292+ else:
4293+ bdev = block_device
4294+ size = DEFAULT_LOOPBACK_SIZE
4295+ bdev = ensure_loopback_device(bdev, size)
4296+ else:
4297+ bdev = '/dev/%s' % block_device
4298+
4299+ if not is_block_device(bdev):
4300+ juju_log('Failed to locate valid block device at %s' % bdev)
4301+ raise CinderCharmError
4302+
4303+ return bdev
4304+
4305+
4306+def migrate_database():
4307+ '''Runs cinder-manage to initialize a new database or migrate existing'''
4308+ cmd = ['cinder-manage', 'db', 'sync']
4309+ subprocess.check_call(cmd)
4310+
4311+
4312+def ensure_ceph_pool(service, replicas):
4313+ '''Creates a ceph pool for service if one does not exist'''
4314+ # TODO: Ditto about moving somewhere sharable.
4315+ if not ceph_pool_exists(service=service, name=service):
4316+ ceph_create_pool(service=service, name=service, replicas=replicas)
4317+
4318+
4319+def set_ceph_env_variables(service):
4320+ # XXX: Horrid kludge to make cinder-volume use
4321+ # a different ceph username than admin
4322+ env = open('/etc/environment', 'r').read()
4323+ if 'CEPH_ARGS' not in env:
4324+ with open('/etc/environment', 'a') as out:
4325+ out.write('CEPH_ARGS="--id %s"\n' % service)
4326+ with open('/etc/init/cinder-volume.override', 'w') as out:
4327+ out.write('env CEPH_ARGS="--id %s"\n' % service)
4328+
4329+
4330+def do_openstack_upgrade(configs):
4331+ """
4332+    Perform an upgrade of cinder. Takes care of upgrading packages, rewriting
4333+    configs, running the database migration and any other post-upgrade
4334+    actions.
4335+
4336+    :param configs: The charm's main OSConfigRenderer object.
4337+
4338+ """
4339+ new_src = config('openstack-origin')
4340+ new_os_rel = get_os_codename_install_source(new_src)
4341+
4342+ juju_log('Performing OpenStack upgrade to %s.' % (new_os_rel))
4343+
4344+ configure_installation_source(new_src)
4345+ dpkg_opts = [
4346+ '--option', 'Dpkg::Options::=--force-confnew',
4347+ '--option', 'Dpkg::Options::=--force-confdef',
4348+ ]
4349+ apt_update()
4350+ apt_install(packages=determine_packages(), options=dpkg_opts, fatal=True)
4351+
4352+ # set CONFIGS to load templates from new release and regenerate config
4353+ configs.set_release(openstack_release=new_os_rel)
4354+ configs.write_all()
4355+
4356+ if eligible_leader(CLUSTER_RES):
4357+ migrate_database()
4358
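
A sketch of how restart_map() above plugs into restart_on_change(), assuming enabled-services is 'all' so every cinder service survives the filtering:

    from charmhelpers.core.host import restart_on_change
    from cinder_utils import restart_map

    # restart_map() returns an OrderedDict along the lines of:
    #   {'/etc/cinder/cinder.conf': ['cinder-api', 'cinder-volume',
    #                                'cinder-scheduler', 'haproxy'],
    #    '/etc/cinder/api-paste.ini': ['cinder-api'], ...}
    @restart_on_change(restart_map())
    def some_hook():
        pass  # services are restarted only if their mapped files changed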
4359=== added symlink 'hooks/cluster-relation-changed'
4360=== target is u'cinder_hooks.py'
4361=== added symlink 'hooks/cluster-relation-departed'
4362=== target is u'cinder_hooks.py'
4363=== added symlink 'hooks/config-changed'
4364=== target is u'cinder_hooks.py'
4365=== added symlink 'hooks/ha-relation-changed'
4366=== target is u'cinder_hooks.py'
4367=== added symlink 'hooks/ha-relation-joined'
4368=== target is u'cinder_hooks.py'
4369=== added symlink 'hooks/identity-service-relation-broken'
4370=== target is u'cinder_hooks.py'
4371=== added symlink 'hooks/identity-service-relation-changed'
4372=== target is u'cinder_hooks.py'
4373=== added symlink 'hooks/identity-service-relation-joined'
4374=== target is u'cinder_hooks.py'
4375=== added symlink 'hooks/image-service-relation-broken'
4376=== target is u'cinder_hooks.py'
4377=== added symlink 'hooks/image-service-relation-changed'
4378=== target is u'cinder_hooks.py'
4379=== added symlink 'hooks/install'
4380=== target is u'cinder_hooks.py'
4381=== added symlink 'hooks/shared-db-relation-broken'
4382=== target is u'cinder_hooks.py'
4383=== added symlink 'hooks/shared-db-relation-changed'
4384=== target is u'cinder_hooks.py'
4385=== added symlink 'hooks/shared-db-relation-joined'
4386=== target is u'cinder_hooks.py'
4387=== added symlink 'hooks/start'
4388=== target is u'cinder_hooks.py'
4389=== added symlink 'hooks/stop'
4390=== target is u'cinder_hooks.py'
4391=== added file 'icon.svg'
4392--- icon.svg 1970-01-01 00:00:00 +0000
4393+++ icon.svg 2013-10-16 18:49:46 +0000
4394@@ -0,0 +1,769 @@
4395+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
4396+<!-- Created with Inkscape (http://www.inkscape.org/) -->
4397+
4398+<svg
4399+ xmlns:dc="http://purl.org/dc/elements/1.1/"
4400+ xmlns:cc="http://creativecommons.org/ns#"
4401+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
4402+ xmlns:svg="http://www.w3.org/2000/svg"
4403+ xmlns="http://www.w3.org/2000/svg"
4404+ xmlns:xlink="http://www.w3.org/1999/xlink"
4405+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
4406+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
4407+ width="96"
4408+ height="96"
4409+ id="svg6517"
4410+ version="1.1"
4411+ inkscape:version="0.48+devel r12304"
4412+ sodipodi:docname="Openstack.svg">
4413+ <defs
4414+ id="defs6519">
4415+ <linearGradient
4416+ inkscape:collect="always"
4417+ id="linearGradient902">
4418+ <stop
4419+ style="stop-color:#cccccc;stop-opacity:1"
4420+ offset="0"
4421+ id="stop904" />
4422+ <stop
4423+ style="stop-color:#e6e6e6;stop-opacity:1"
4424+ offset="1"
4425+ id="stop906" />
4426+ </linearGradient>
4427+ <linearGradient
4428+ id="Background">
4429+ <stop
4430+ id="stop4178"
4431+ offset="0"
4432+ style="stop-color:#22779e;stop-opacity:1" />
4433+ <stop
4434+ id="stop4180"
4435+ offset="1"
4436+ style="stop-color:#2991c0;stop-opacity:1" />
4437+ </linearGradient>
4438+ <filter
4439+ style="color-interpolation-filters:sRGB;"
4440+ inkscape:label="Inner Shadow"
4441+ id="filter1121">
4442+ <feFlood
4443+ flood-opacity="0.59999999999999998"
4444+ flood-color="rgb(0,0,0)"
4445+ result="flood"
4446+ id="feFlood1123" />
4447+ <feComposite
4448+ in="flood"
4449+ in2="SourceGraphic"
4450+ operator="out"
4451+ result="composite1"
4452+ id="feComposite1125" />
4453+ <feGaussianBlur
4454+ in="composite1"
4455+ stdDeviation="1"
4456+ result="blur"
4457+ id="feGaussianBlur1127" />
4458+ <feOffset
4459+ dx="0"
4460+ dy="2"
4461+ result="offset"
4462+ id="feOffset1129" />
4463+ <feComposite
4464+ in="offset"
4465+ in2="SourceGraphic"
4466+ operator="atop"
4467+ result="composite2"
4468+ id="feComposite1131" />
4469+ </filter>
4470+ <filter
4471+ style="color-interpolation-filters:sRGB;"
4472+ inkscape:label="Drop Shadow"
4473+ id="filter950">
4474+ <feFlood
4475+ flood-opacity="0.25"
4476+ flood-color="rgb(0,0,0)"
4477+ result="flood"
4478+ id="feFlood952" />
4479+ <feComposite
4480+ in="flood"
4481+ in2="SourceGraphic"
4482+ operator="in"
4483+ result="composite1"
4484+ id="feComposite954" />
4485+ <feGaussianBlur
4486+ in="composite1"
4487+ stdDeviation="1"
4488+ result="blur"
4489+ id="feGaussianBlur956" />
4490+ <feOffset
4491+ dx="0"
4492+ dy="1"
4493+ result="offset"
4494+ id="feOffset958" />
4495+ <feComposite
4496+ in="SourceGraphic"
4497+ in2="offset"
4498+ operator="over"
4499+ result="composite2"
4500+ id="feComposite960" />
4501+ </filter>
4502+ <clipPath
4503+ clipPathUnits="userSpaceOnUse"
4504+ id="clipPath873">
4505+ <g
4506+ transform="matrix(0,-0.66666667,0.66604479,0,-258.25992,677.00001)"
4507+ id="g875"
4508+ inkscape:label="Layer 1"
4509+ style="fill:#ff00ff;fill-opacity:1;stroke:none;display:inline">
4510+ <path
4511+ style="fill:#ff00ff;fill-opacity:1;stroke:none;display:inline"
4512+ d="m 46.702703,898.22775 50.594594,0 C 138.16216,898.22775 144,904.06497 144,944.92583 l 0,50.73846 c 0,40.86071 -5.83784,46.69791 -46.702703,46.69791 l -50.594594,0 C 5.8378378,1042.3622 0,1036.525 0,995.66429 L 0,944.92583 C 0,904.06497 5.8378378,898.22775 46.702703,898.22775 Z"
4513+ id="path877"
4514+ inkscape:connector-curvature="0"
4515+ sodipodi:nodetypes="sssssssss" />
4516+ </g>
4517+ </clipPath>
4518+ <filter
4519+ inkscape:collect="always"
4520+ id="filter891"
4521+ inkscape:label="Badge Shadow">
4522+ <feGaussianBlur
4523+ inkscape:collect="always"
4524+ stdDeviation="0.71999962"
4525+ id="feGaussianBlur893" />
4526+ </filter>
4527+ <style
4528+ id="style867"
4529+ type="text/css"><![CDATA[
4530+ .fil0 {fill:#1F1A17}
4531+ ]]></style>
4532+ <linearGradient
4533+ inkscape:collect="always"
4534+ xlink:href="#linearGradient902"
4535+ id="linearGradient908"
4536+ x1="-220"
4537+ y1="731.29077"
4538+ x2="-220"
4539+ y2="635.29077"
4540+ gradientUnits="userSpaceOnUse" />
4541+ <clipPath
4542+ id="clipPath16">
4543+ <path
4544+ id="path18"
4545+ d="m -9,-9 614,0 0,231 -614,0 0,-231 z" />
4546+ </clipPath>
4547+ <clipPath
4548+ id="clipPath116">
4549+ <path
4550+ id="path118"
4551+ d="m 91.7368,146.3253 -9.7039,-1.577 -8.8548,-3.8814 -7.5206,-4.7308 -7.1566,-8.7335 -4.0431,-4.282 -3.9093,-1.4409 -1.034,2.5271 1.8079,2.6096 0.4062,3.6802 1.211,-0.0488 1.3232,-1.2069 -0.3569,3.7488 -1.4667,0.9839 0.0445,1.4286 -3.4744,-1.9655 -3.1462,-3.712 -0.6559,-3.3176 1.3453,-2.6567 1.2549,-4.5133 2.5521,-1.2084 2.6847,0.1318 2.5455,1.4791 -1.698,-8.6122 1.698,-9.5825 -1.8692,-4.4246 -6.1223,-6.5965 1.0885,-3.941 2.9002,-4.5669 5.4688,-3.8486 2.9007,-0.3969 3.225,-0.1094 -2.012,-8.2601 7.3993,-3.0326 9.2188,-1.2129 3.1535,2.0619 0.2427,5.5797 3.5178,5.8224 0.2426,4.6094 8.4909,-0.6066 7.8843,0.7279 -7.8843,-4.7307 1.3343,-5.701 4.9731,-7.763 4.8521,-2.0622 3.8814,1.5769 1.577,3.1538 8.1269,6.1861 1.5769,-1.3343 12.7363,-0.485 2.5473,2.0619 0.2426,3.6391 -0.849,1.5767 -0.6066,9.8251 -4.2454,8.4909 0.7276,3.7605 2.5475,-1.3343 7.1566,-6.6716 3.5175,-0.2424 3.8815,1.5769 3.8818,2.9109 1.9406,6.3077 11.4021,-0.7277 6.914,2.6686 5.5797,5.2157 4.0028,7.5206 0.9706,8.8546 -0.8493,10.3105 -2.1832,9.2185 -2.1836,2.9112 -3.0322,0.9706 -5.3373,-5.8224 -4.8518,-1.6982 -4.2455,7.0353 -4.2454,3.8815 -2.3049,1.4556 -9.2185,7.6419 -7.3993,4.0028 -7.3993,0.6066 -8.6119,-1.4556 -7.5206,-2.7899 -5.2158,-4.2454 -4.1241,-4.9734 -4.2454,-1.2129" />
4552+ </clipPath>
4553+ <clipPath
4554+ id="clipPath128">
4555+ <path
4556+ id="path130"
4557+ d="m 91.7368,146.3253 -9.7039,-1.577 -8.8548,-3.8814 -7.5206,-4.7308 -7.1566,-8.7335 -4.0431,-4.282 -3.9093,-1.4409 -1.034,2.5271 1.8079,2.6096 0.4062,3.6802 1.211,-0.0488 1.3232,-1.2069 -0.3569,3.7488 -1.4667,0.9839 0.0445,1.4286 -3.4744,-1.9655 -3.1462,-3.712 -0.6559,-3.3176 1.3453,-2.6567 1.2549,-4.5133 2.5521,-1.2084 2.6847,0.1318 2.5455,1.4791 -1.698,-8.6122 1.698,-9.5825 -1.8692,-4.4246 -6.1223,-6.5965 1.0885,-3.941 2.9002,-4.5669 5.4688,-3.8486 2.9007,-0.3969 3.225,-0.1094 -2.012,-8.2601 7.3993,-3.0326 9.2188,-1.2129 3.1535,2.0619 0.2427,5.5797 3.5178,5.8224 0.2426,4.6094 8.4909,-0.6066 7.8843,0.7279 -7.8843,-4.7307 1.3343,-5.701 4.9731,-7.763 4.8521,-2.0622 3.8814,1.5769 1.577,3.1538 8.1269,6.1861 1.5769,-1.3343 12.7363,-0.485 2.5473,2.0619 0.2426,3.6391 -0.849,1.5767 -0.6066,9.8251 -4.2454,8.4909 0.7276,3.7605 2.5475,-1.3343 7.1566,-6.6716 3.5175,-0.2424 3.8815,1.5769 3.8818,2.9109 1.9406,6.3077 11.4021,-0.7277 6.914,2.6686 5.5797,5.2157 4.0028,7.5206 0.9706,8.8546 -0.8493,10.3105 -2.1832,9.2185 -2.1836,2.9112 -3.0322,0.9706 -5.3373,-5.8224 -4.8518,-1.6982 -4.2455,7.0353 -4.2454,3.8815 -2.3049,1.4556 -9.2185,7.6419 -7.3993,4.0028 -7.3993,0.6066 -8.6119,-1.4556 -7.5206,-2.7899 -5.2158,-4.2454 -4.1241,-4.9734 -4.2454,-1.2129" />
4558+ </clipPath>
4559+ <linearGradient
4560+ id="linearGradient3850"
4561+ inkscape:collect="always">
4562+ <stop
4563+ id="stop3852"
4564+ offset="0"
4565+ style="stop-color:#000000;stop-opacity:1;" />
4566+ <stop
4567+ id="stop3854"
4568+ offset="1"
4569+ style="stop-color:#000000;stop-opacity:0;" />
4570+ </linearGradient>
4571+ <clipPath
4572+ clipPathUnits="userSpaceOnUse"
4573+ id="clipPath3095">
4574+ <path
4575+ d="m 976.648,389.551 -842.402,0 0,839.999 842.402,0 0,-839.999"
4576+ id="path3097"
4577+ inkscape:connector-curvature="0" />
4578+ </clipPath>
4579+ <linearGradient
4580+ y2="1005.2372"
4581+ x2="1128.7266"
4582+ y1="1031.9298"
4583+ x1="1155.4192"
4584+ gradientUnits="userSpaceOnUse"
4585+ id="linearGradient4705"
4586+ xlink:href="#linearGradient4439"
4587+ inkscape:collect="always" />
4588+ <linearGradient
4589+ id="linearGradient4439"
4590+ inkscape:collect="always">
4591+ <stop
4592+ id="stop4441"
4593+ offset="0"
4594+ style="stop-color:#841f1c;stop-opacity:1;" />
4595+ <stop
4596+ id="stop4443"
4597+ offset="1"
4598+ style="stop-color:#c42e24;stop-opacity:1" />
4599+ </linearGradient>
4600+ <linearGradient
4601+ y2="946.60907"
4602+ x2="753.62018"
4603+ y1="934.54321"
4604+ x1="741.55432"
4605+ gradientUnits="userSpaceOnUse"
4606+ id="linearGradient4707"
4607+ xlink:href="#linearGradient4473"
4608+ inkscape:collect="always" />
4609+ <linearGradient
4610+ id="linearGradient4473"
4611+ inkscape:collect="always">
4612+ <stop
4613+ id="stop4475"
4614+ offset="0"
4615+ style="stop-color:#8d211a;stop-opacity:1" />
4616+ <stop
4617+ id="stop4477"
4618+ offset="1"
4619+ style="stop-color:#c42e24;stop-opacity:1" />
4620+ </linearGradient>
4621+ <clipPath
4622+ clipPathUnits="userSpaceOnUse"
4623+ id="clipPath3195">
4624+ <path
4625+ d="m 611.836,756.738 -106.34,105.207 c -8.473,8.289 -13.617,20.102 -13.598,33.379 L 598.301,790.207 c -0.031,-13.418 5.094,-25.031 13.535,-33.469"
4626+ id="path3197"
4627+ inkscape:connector-curvature="0" />
4628+ </clipPath>
4629+ <clipPath
4630+ clipPathUnits="userSpaceOnUse"
4631+ id="clipPath3235">
4632+ <path
4633+ d="m 1095.64,1501.81 c 35.46,-35.07 70.89,-70.11 106.35,-105.17 4.4,-4.38 7.11,-10.53 7.11,-17.55 l -106.37,105.21 c 0,7 -2.71,13.11 -7.09,17.51"
4634+ id="path3237"
4635+ inkscape:connector-curvature="0" />
4636+ </clipPath>
4637+ <linearGradient
4638+ y2="1595.1495"
4639+ x2="1299.6487"
4640+ y1="1607.1796"
4641+ x1="1311.6787"
4642+ gradientUnits="userSpaceOnUse"
4643+ id="linearGradient4709"
4644+ xlink:href="#linearGradient4389"
4645+ inkscape:collect="always" />
4646+ <linearGradient
4647+ id="linearGradient4389"
4648+ inkscape:collect="always">
4649+ <stop
4650+ id="stop4391"
4651+ offset="0"
4652+ style="stop-color:#912120;stop-opacity:1;" />
4653+ <stop
4654+ id="stop4393"
4655+ offset="1"
4656+ style="stop-color:#c42e24;stop-opacity:1" />
4657+ </linearGradient>
4658+ <linearGradient
4659+ y2="666.30505"
4660+ x2="903.20746"
4661+ y1="672.85754"
4662+ x1="896.65497"
4663+ gradientTransform="matrix(8,0,0,-8,-6602.8469,6874.4287)"
4664+ gradientUnits="userSpaceOnUse"
4665+ id="linearGradient4711"
4666+ xlink:href="#linearGradient4355"
4667+ inkscape:collect="always" />
4668+ <linearGradient
4669+ id="linearGradient4355"
4670+ inkscape:collect="always">
4671+ <stop
4672+ id="stop4357"
4673+ offset="0"
4674+ style="stop-color:#841f1c;stop-opacity:1;" />
4675+ <stop
4676+ id="stop4359"
4677+ offset="1"
4678+ style="stop-color:#c42e24;stop-opacity:1" />
4679+ </linearGradient>
4680+ <clipPath
4681+ id="clipPath4591"
4682+ clipPathUnits="userSpaceOnUse">
4683+ <path
4684+ inkscape:connector-curvature="0"
4685+ d="m 1106.6009,730.43734 -0.036,21.648 c -0.01,3.50825 -2.8675,6.61375 -6.4037,6.92525 l -83.6503,7.33162 c -3.5205,0.30763 -6.3812,-2.29987 -6.3671,-5.8145 l 0.036,-21.6475 20.1171,-1.76662 -0.011,4.63775 c 0,1.83937 1.4844,3.19925 3.3262,3.0395 l 49.5274,-4.33975 c 1.8425,-0.166 3.3425,-1.78125 3.3538,-3.626 l 0.01,-4.63025 20.1,-1.7575"
4686+ style="fill:#ff00ff;fill-opacity:1;fill-rule:nonzero;stroke:none"
4687+ id="path4593" />
4688+ </clipPath>
4689+ <filter
4690+ height="1.3208274"
4691+ y="-0.16041371"
4692+ width="1.0564449"
4693+ x="-0.028222449"
4694+ id="filter4595"
4695+ inkscape:collect="always"
4696+ color-interpolation-filters="sRGB">
4697+ <feGaussianBlur
4698+ id="feGaussianBlur4597"
4699+ stdDeviation="0.66106737"
4700+ inkscape:collect="always" />
4701+ </filter>
4702+ <filter
4703+ id="filter3831"
4704+ inkscape:collect="always">
4705+ <feGaussianBlur
4706+ id="feGaussianBlur3833"
4707+ stdDeviation="0.86309522"
4708+ inkscape:collect="always" />
4709+ </filter>
4710+ <radialGradient
4711+ gradientUnits="userSpaceOnUse"
4712+ gradientTransform="matrix(-1.4333926,-2.2742838,1.1731823,-0.73941125,-174.08025,98.374394)"
4713+ r="20.40658"
4714+ fy="93.399292"
4715+ fx="-26.508606"
4716+ cy="93.399292"
4717+ cx="-26.508606"
4718+ id="radialGradient3856"
4719+ xlink:href="#linearGradient3850"
4720+ inkscape:collect="always" />
4721+ <filter
4722+ height="1.3286154"
4723+ y="-0.1643077"
4724+ width="1.3437241"
4725+ x="-0.17186206"
4726+ id="filter3868"
4727+ inkscape:collect="always">
4728+ <feGaussianBlur
4729+ id="feGaussianBlur3870"
4730+ stdDeviation="0.62628186"
4731+ inkscape:collect="always" />
4732+ </filter>
4733+ <filter
4734+ id="filter3885"
4735+ inkscape:collect="always">
4736+ <feGaussianBlur
4737+ id="feGaussianBlur3887"
4738+ stdDeviation="5.7442192"
4739+ inkscape:collect="always" />
4740+ </filter>
4741+ <linearGradient
4742+ gradientTransform="translate(-318.48033,212.32022)"
4743+ gradientUnits="userSpaceOnUse"
4744+ y2="993.19702"
4745+ x2="-51.879555"
4746+ y1="593.11615"
4747+ x1="348.20132"
4748+ id="linearGradient3895"
4749+ xlink:href="#linearGradient3850"
4750+ inkscape:collect="always" />
4751+ <radialGradient
4752+ r="20.40658"
4753+ fy="93.399292"
4754+ fx="-26.508606"
4755+ cy="93.399292"
4756+ cx="-26.508606"
4757+ gradientTransform="matrix(-1.4333926,-2.2742838,1.1731823,-0.73941125,-174.08025,98.374394)"
4758+ gradientUnits="userSpaceOnUse"
4759+ id="radialGradient3902"
4760+ xlink:href="#linearGradient3850"
4761+ inkscape:collect="always" />
4762+ <linearGradient
4763+ y2="993.19702"
4764+ x2="-51.879555"
4765+ y1="593.11615"
4766+ x1="348.20132"
4767+ gradientTransform="translate(-318.48033,212.32022)"
4768+ gradientUnits="userSpaceOnUse"
4769+ id="linearGradient3904"
4770+ xlink:href="#linearGradient3850"
4771+ inkscape:collect="always" />
4772+ <clipPath
4773+ id="clipPath3906"
4774+ clipPathUnits="userSpaceOnUse">
4775+ <rect
4776+ transform="scale(1,-1)"
4777+ style="opacity:0.8;color:#000000;fill:#ff00ff;stroke:none;stroke-width:4;marker:none;visibility:visible;display:inline;overflow:visible;enable-background:accumulate"
4778+ id="rect3908"
4779+ width="1019.1371"
4780+ height="1019.1371"
4781+ x="357.9816"
4782+ y="-1725.8152" />
4783+ </clipPath>
4784+ </defs>
4785+ <sodipodi:namedview
4786+ id="base"
4787+ pagecolor="#ffffff"
4788+ bordercolor="#666666"
4789+ borderopacity="1.0"
4790+ inkscape:pageopacity="0.0"
4791+ inkscape:pageshadow="2"
4792+ inkscape:zoom="4.0745361"
4793+ inkscape:cx="36.786689"
4794+ inkscape:cy="54.859325"
4795+ inkscape:document-units="px"
4796+ inkscape:current-layer="layer1"
4797+ showgrid="true"
4798+ fit-margin-top="0"
4799+ fit-margin-left="0"
4800+ fit-margin-right="0"
4801+ fit-margin-bottom="0"
4802+ inkscape:window-width="1920"
4803+ inkscape:window-height="1029"
4804+ inkscape:window-x="0"
4805+ inkscape:window-y="24"
4806+ inkscape:window-maximized="1"
4807+ showborder="true"
4808+ showguides="true"
4809+ inkscape:guide-bbox="true"
4810+ inkscape:showpageshadow="false"
4811+ inkscape:snap-global="true"
4812+ inkscape:snap-bbox="true"
4813+ inkscape:bbox-paths="true"
4814+ inkscape:bbox-nodes="true"
4815+ inkscape:snap-bbox-edge-midpoints="true"
4816+ inkscape:snap-bbox-midpoints="true"
4817+ inkscape:object-paths="true"
4818+ inkscape:snap-intersection-paths="true"
4819+ inkscape:object-nodes="true"
4820+ inkscape:snap-smooth-nodes="true"
4821+ inkscape:snap-midpoints="true"
4822+ inkscape:snap-object-midpoints="true"
4823+ inkscape:snap-center="true">
4824+ <inkscape:grid
4825+ type="xygrid"
4826+ id="grid821" />
4827+ <sodipodi:guide
4828+ orientation="1,0"
4829+ position="16,48"
4830+ id="guide823" />
4831+ <sodipodi:guide
4832+ orientation="0,1"
4833+ position="64,80"
4834+ id="guide825" />
4835+ <sodipodi:guide
4836+ orientation="1,0"
4837+ position="80,40"
4838+ id="guide827" />
4839+ <sodipodi:guide
4840+ orientation="0,1"
4841+ position="64,16"
4842+ id="guide829" />
4843+ </sodipodi:namedview>
4844+ <metadata
4845+ id="metadata6522">
4846+ <rdf:RDF>
4847+ <cc:Work
4848+ rdf:about="">
4849+ <dc:format>image/svg+xml</dc:format>
4850+ <dc:type
4851+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
4852+ <dc:title></dc:title>
4853+ </cc:Work>
4854+ </rdf:RDF>
4855+ </metadata>
4856+ <g
4857+ inkscape:label="BACKGROUND"
4858+ inkscape:groupmode="layer"
4859+ id="layer1"
4860+ transform="translate(268,-635.29076)"
4861+ style="display:inline">
4862+ <path
4863+ style="fill:url(#linearGradient908);fill-opacity:1;stroke:none;display:inline;filter:url(#filter1121)"
4864+ d="m -268,700.15563 0,-33.72973 c 0,-27.24324 3.88785,-31.13513 31.10302,-31.13513 l 33.79408,0 c 27.21507,0 31.1029,3.89189 31.1029,31.13513 l 0,33.72973 c 0,27.24325 -3.88783,31.13514 -31.1029,31.13514 l -33.79408,0 C -264.11215,731.29077 -268,727.39888 -268,700.15563 Z"
4865+ id="path6455"
4866+ inkscape:connector-curvature="0"
4867+ sodipodi:nodetypes="sssssssss" />
4868+ <g
4869+ transform="matrix(0.72090266,0,0,0.72090267,152.00189,279.53977)"
4870+ id="layer1-5"
4871+ inkscape:label="Layer 1"
4872+ inkscape:transform-center-x="34.09445"
4873+ inkscape:transform-center-y="31.651869">
4874+ <g
4875+ clip-path="url(#clipPath3906)"
4876+ transform="matrix(0.09419734,0,0,-0.09419734,-603.43521,674.35797)"
4877+ id="g4601">
4878+ <g
4879+ transform="translate(647.57667,2.5364216e-8)"
4880+ id="g3897">
4881+ <path
4882+ inkscape:connector-curvature="0"
4883+ style="opacity:0.7;color:#000000;fill:url(#radialGradient3902);fill-opacity:1;stroke:none;stroke-width:2;marker:none;visibility:visible;display:inline;overflow:visible;filter:url(#filter3831);enable-background:accumulate"
4884+ d="m -48.09375,67.8125 c -0.873996,-0.0028 -2.089735,0.01993 -3.40625,0.09375 -2.633031,0.147647 -5.700107,0.471759 -7.78125,1.53125 a 1.0001,1.0001 0 0 0 -0.25,1.59375 L -38.8125,92.375 a 1.0001,1.0001 0 0 0 0.84375,0.3125 L -24,90.5625 a 1.0001,1.0001 0 0 0 0.53125,-1.71875 L -46.0625,68.125 a 1.0001,1.0001 0 0 0 -0.625,-0.28125 C -46.6875,67.84375 -47.219754,67.81533 -48.09375,67.8125 Z"
4885+ transform="matrix(10.616011,0,0,-10.616011,357.98166,1725.8152)"
4886+ id="path3821" />
4887+ <path
4888+ style="opacity:0.6;color:#000000;fill:none;stroke:#000000;stroke-width:2.77429962;stroke-linecap:round;marker:none;visibility:visible;display:inline;overflow:visible;filter:url(#filter3868);enable-background:accumulate"
4889+ d="m -15.782705,81.725197 8.7458304,9.147937"
4890+ id="path3858"
4891+ inkscape:connector-curvature="0"
4892+ transform="matrix(10.616011,0,0,-10.616011,39.50133,1725.8152)" />
4893+ <path
4894+ style="font-size:xx-small;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-indent:0;text-align:start;text-decoration:none;line-height:normal;letter-spacing:normal;word-spacing:normal;text-transform:none;direction:ltr;block-progression:tb;writing-mode:lr-tb;text-anchor:start;baseline-shift:baseline;opacity:0.3;color:#000000;fill:url(#linearGradient3904);fill-opacity:1;stroke:none;stroke-width:2;marker:none;visibility:visible;display:inline;overflow:visible;filter:url(#filter3885);enable-background:accumulate;font-family:Sans;-inkscape-font-specification:Sans"
4895+ d="m -95.18931,981.03569 a 10.617073,10.617073 0 0 1 -0.995251,-0.3318 l -42.795789,-5.308 a 10.617073,10.617073 0 0 1 -6.30326,-17.9145 L -4.2897203,812.5065 a 10.617073,10.617073 0 0 1 8.95726,-3.3175 l 49.0990503,7.63026 a 10.617073,10.617073 0 0 1 5.97151,17.91452 L -87.55905,978.04989 A 10.617073,10.617073 0 0 1 -95.18931,981.03569 Z"
4896+ id="path3874"
4897+ inkscape:connector-curvature="0" />
4898+ </g>
4899+ <path
4900+ style="fill:#c42e24;fill-opacity:1;fill-rule:nonzero;stroke:none"
4901+ d="m 1213.1531,1680.9287 -669.2496,-58.25 -2.7504,-0.25 c -2.6328,-0.38 -5.0232,-1.07 -7.5,-1.75 -3.812,-1.16 -7.504,-2.58 -11,-4.5 -2.2928,-1.23 -4.6408,-2.68 -6.7496,-4.25 -1.3248,-0.97 -2.7664,-1.93 -4,-3 -1.0368,-0.94 -1.9816,-2.04 -3,-3 -0.8208,-0.91 -1.7088,-1.55 -2.5008,-2.5 -0.7536,-0.85 -1.332,-1.81 -2,-2.75 -0.676,-0.85 -1.3824,-1.59 -2,-2.5 -0.5536,-0.84 -0.976,-1.58 -1.4992,-2.5 l -1.5008,-2.5 c -0.5264,-0.83 -0.8504,-1.83 -1.2496,-2.75 -0.4488,-0.8 -0.8992,-1.59 -1.2496,-2.5 -0.3992,-0.91 -0.6792,-1.89 -1,-2.75 -0.3792,-1.04 -0.7304,-1.78 -1,-2.75 -0.2896,-0.98 -0.5432,-1.98 -0.7504,-3 -0.2848,-1.02 -0.5744,-2.15 -0.7504,-3.25 -0.2024,-1.16 -0.4056,-2.34 -0.4992,-3.5 -0.1848,-1.41 -0.2072,-2.94 -0.2504,-4.25 l 0,-1.25 -0.5,-168 35.5,-35.25 -36.2504,-3.25 1,-254.75 34.7504,-34.5 -34.7504,-3 -0.4992,-173 c 0.032,-1.328 0.1408,-2.774 0.2496,-4 0.8952,-11.69904 5.6408,-22.03496 13.2496,-29.5 l 106.5008,-105.25 c 9.3072,-9.17664 22.5168,-14.28736 37.2496,-13 l 669.2496,58.75 c 28.2904,2.492 51.1904,27.184 51.2504,55.25 l 0.2504,173.25 -34.5008,34.25 34.7504,3 -1,255 -35.2496,34.75 36,3.25 0.4992,168 c 0.021,14.4992 -6.0584,27.0636 -15.7496,35.5 -0.4256,0.3279 -0.828,0.6684 -1.2504,1 l -103,102.25 -2,1.75 c -1.0592,0.94 -2.1,2.03 -3.2496,2.75 -0.9896,0.77 -2.16,1.29 -3.2496,2 -0.9808,0.54 -1.9504,1.18 -3,1.75 -0.96,0.46 -2.0208,0.87 -3,1.25 -1.06,0.38 -1.9504,0.95 -3,1.25 -1.0104,0.33 -2.1904,0.5 -3.2504,0.75 -1.12,0.26 -2.1304,0.49 -3.2504,0.75 -1.1096,0.17 -2.1,0.39 -3.2496,0.5 -1.24,0.14 -2.5,0.25 -3.7504,0.25 -1.34,0.13 -2.84,0.13 -4.2496,0 L 1213.1531,1680.9287 z m 121.5,-106.25 c 0.4992,-0.1066 1.0176,-0.1362 1.5,-0.25 1.0496,-0.32 1.94,-0.6 3,-1 C 1337.6267,1573.9445 1336.2515,1574.3161 1334.6531,1574.6787 z m -700.5,-61 c -1.3416,-0.5561 -2.4592,-1.3436 -3.7504,-2 -1.528,-0.7774 -3.0504,-1.5904 -4.4992,-2.5 1.0256,0.6311 1.9552,1.4171 3,2 C 630.6275,1512.1137 632.3131,1512.9249 634.1531,1513.6787 z m -32.7504,-36.25 c -0.5016,-1.6366 -0.9056,-3.3138 -1.2496,-5 -0.344,-1.6862 -0.8216,-3.2736 -1,-5 0.1136,1.27 0.32,2.27 0.5,3.5 0.112,0.4814 0.3856,0.9701 0.5,1.5 0.128,0.593 0.098,1.2272 0.2496,1.75 0.2584,1.03 0.4416,1.98 0.7504,3 L 601.4027,1477.4287 z m 500.5008,-81 1,-237 34.7496,-34.5 -35,-3 0,-37 0,-0.5 0,-2 -0.2504,-2 -0.4992,-1.75 -0.2504,-1.5 -0.5,-1.5 -0.5,-1.5 -0.7496,-1.25 -0.7504,-1.5 -0.7504,-1.25 -0.7496,-1.25 -1,-1.5 -1.2504,-1.25 -1.2496,-1.5 -1.5,-1.25 -2,-1.75 c -1.12,-0.8 -2.52,-1.52 -3.7496,-2.25 -1.78,-0.98 -3.4904,-1.86 -5.5008,-2.5 -1.2792,-0.33 -2.6496,-0.55 -4,-0.75 l -1.4992,-0.25 -316.2504,-27.5 -1,241.25 -35.7496,35.25 36.4992,3.25 0,31.75 c 0,14.67 12.0552,27.69 26.7504,29 L 1101.9035,1396.4287 z m -327.7504,-478.5 1.5,-0.5 1.7496,-0.5 C 776.2851,917.21686 775.1979,917.49502 774.1531,917.9287 z m -175.7504,-127.75 c -0.01,-1.75728 0.082,-3.55328 0.2504,-5.25 -0.043,0.4272 -0.2176,0.81864 -0.2504,1.25 C 598.2979,787.5387 598.3877,788.7527 598.4027,790.1787 Z"
4902+ id="path4611"
4903+ inkscape:connector-curvature="0" />
4904+ <path
4905+ id="path4613"
4906+ style="fill:url(#linearGradient4705);fill-opacity:1;fill-rule:nonzero;stroke:none"
4907+ d="m 1209.1,979.273 -106.4,105.137 0.03,0.63 106.37,-105.212 0,-0.555 m -0.13,-2.152 -106.36,105.209 0.09,2.08 106.4,-105.137 -0.13,-2.152 m -0.31,-1.914 -106.34,105.193 0.29,1.93 106.36,-105.209 -0.31,-1.914 m -0.32,-1.641 -106.37,105.104 0.35,1.73 106.34,-105.193 -0.32,-1.641 m -0.41,-1.617 -106.32,105.191 0.36,1.53 106.37,-105.104 -0.41,-1.617 m -0.48,-1.48 -106.38,105.191 0.54,1.48 106.32,-105.191 -0.48,-1.48 m -0.61,-1.407 -106.3,105.208 0.53,1.39 106.38,-105.191 -0.61,-1.407 m -0.58,-1.355 -106.4,105.163 0.68,1.4 106.3,-105.208 -0.58,-1.355 m -0.69,-1.352 -106.38,105.155 0.67,1.36 106.4,-105.163 -0.69,-1.352 m -0.8,-1.343 -106.34,105.198 0.76,1.3 106.38,-105.155 -0.8,-1.343 m -0.83,-1.344 -106.38,105.212 0.87,1.33 106.34,-105.198 -0.83,-1.344 m -0.99,-1.309 -106.38,105.161 0.99,1.36 106.38,-105.212 -0.99,-1.309 m -1.08,-1.355 -106.41,105.126 1.11,1.39 106.38,-105.161 -1.08,-1.355 m -1.34,-1.418 -106.38,105.144 1.31,1.4 106.41,-105.126 -1.34,-1.418 m -1.6,-1.555 -106.34,105.279 1.56,1.42 106.38,-105.144 -1.6,-1.555 m -2.05,-1.609 -106.34,105.168 2.05,1.72 106.34,-105.279 -2.05,-1.609 m -3.5,-2.184 -106.36,105.102 c 1.23,0.73 2.4,1.45 3.52,2.25 l 106.34,-105.168 c -1.08,-0.813 -2.29,-1.563 -3.5,-2.184 m -5.69,-2.347 -106.4,105.109 c 2.01,0.64 3.95,1.36 5.73,2.34 l 106.36,-105.102 c -1.79,-0.992 -3.74,-1.836 -5.69,-2.347 m -4.03,-0.895 -106.34,105.174 c 1.35,0.2 2.69,0.5 3.97,0.83 l 106.4,-105.109 c -1.31,-0.43 -2.65,-0.7 -4.03,-0.895 m -1.39,-0.176 -106.32,105.21 1.37,0.14 106.34,-105.174 -1.39,-0.176"
4908+ inkscape:connector-curvature="0" />
4909+ <path
4910+ id="path4615"
4911+ style="fill:#871f1c;fill-opacity:1;fill-rule:nonzero;stroke:none"
4912+ d="m 1209.16,1016.87 -106.4,105.16 160.86,14.14 106.34,-105.24 -160.8,-14.06"
4913+ inkscape:connector-curvature="0" />
4914+ <path
4915+ id="path4617"
4916+ style="fill:#871f1c;fill-opacity:1;fill-rule:nonzero;stroke:none"
4917+ d="m 1209.16,1016.87 -106.4,105.16 160.86,14.14 106.34,-105.24 -160.8,-14.06"
4918+ inkscape:connector-curvature="0" />
4919+ <path
4920+ id="path4619"
4921+ style="fill:#871f1c;fill-opacity:1;fill-rule:nonzero;stroke:none"
4922+ d="m 1209.1,979.828 -106.37,105.212 0.03,36.99 106.4,-105.16 -0.06,-37.042"
4923+ inkscape:connector-curvature="0" />
4924+ <path
4925+ id="path4621"
4926+ style="fill:#c42e24;fill-opacity:1;fill-rule:nonzero;stroke:none"
4927+ d="M 786.051,916.102 679.719,1021.34 1075.95,1056.03 1182.27,950.82 786.051,916.102"
4928+ inkscape:connector-curvature="0" />
4929+ <path
4930+ id="path4623"
4931+ style="fill:url(#linearGradient4707);fill-opacity:1;fill-rule:nonzero;stroke:none"
4932+ d="m 784.848,916.051 -106.391,105.129 1.262,0.16 106.332,-105.238 -1.203,-0.051 m -2.153,0 -106.343,105.199 2.105,-0.07 106.391,-105.129 -2.153,0 m -1.914,0.176 -106.363,105.153 1.934,-0.13 106.343,-105.199 -1.914,0.176 m -1.777,0.253 -106.336,105.16 1.75,-0.26 106.363,-105.153 -1.777,0.253 m -1.68,0.336 -106.347,105.164 1.691,-0.34 106.336,-105.16 -1.68,0.336 m -1.656,0.496 -106.305,105.178 1.614,-0.51 106.347,-105.164 -1.656,0.496 m -1.609,0.626 -106.325,105.122 1.629,-0.57 106.305,-105.178 -1.609,0.626 m -1.532,0.73 -106.355,105.172 1.562,-0.78 106.325,-105.122 -1.532,0.73 m -1.597,0.863 -106.356,105.209 1.598,-0.9 106.355,-105.172 -1.597,0.863 m -1.598,1.008 -106.398,105.191 1.64,-0.99 106.356,-105.209 -1.598,1.008 m -1.703,1.32 -106.34,105.321 1.645,-1.45 106.398,-105.191 -1.703,1.32 m -1.07,1.043 -106.368,105.238 1.098,-0.96 106.34,-105.321 -1.07,1.043"
4933+ inkscape:connector-curvature="0" />
4934+ <g
4935+ id="g4625">
4936+ <g
4937+ clip-path="url(#clipPath3195)"
4938+ id="g4627">
4939+ <path
4940+ id="path4629"
4941+ style="fill:#7f1d1c;fill-opacity:1;fill-rule:nonzero;stroke:none"
4942+ d="m 611.836,756.738 -106.34,105.207 c -7.609,7.465 -12.527,17.739 -13.422,29.438 L 598.445,786.152 c 0.879,-11.687 5.825,-21.863 13.391,-29.414"
4943+ inkscape:connector-curvature="0" />
4944+ <path
4945+ id="path4631"
4946+ style="fill:#841f1c;fill-opacity:1;fill-rule:nonzero;stroke:none"
4947+ d="M 598.445,786.152 492.074,891.383 c -0.109,1.226 -0.144,2.613 -0.176,3.941 L 598.301,790.207 c -0.016,-1.426 0.039,-2.695 0.144,-4.055"
4948+ inkscape:connector-curvature="0" />
4949+ </g>
4950+ </g>
4951+ <path
4952+ id="path4633"
4953+ style="fill:#d93023;fill-opacity:1;fill-rule:nonzero;stroke:none"
4954+ d="m 1369.96,1030.93 -0.29,-173.184 c -0.06,-28.066 -22.94,-52.91 -51.23,-55.402 L 649.238,743.691 c -28.164,-2.461 -51.05,18.399 -50.937,46.516 l 0.289,173.18 160.937,14.133 -0.086,-37.102 c 0,-14.715 11.875,-25.594 26.61,-24.316 l 396.219,34.718 c 14.74,1.328 26.74,14.25 26.83,29.008 l 0.06,37.042 160.8,14.06"
4955+ inkscape:connector-curvature="0" />
4956+ <path
4957+ id="path4635"
4958+ style="fill:#871f1c;fill-opacity:1;fill-rule:nonzero;stroke:none"
4959+ d="M 598.59,963.387 492.281,1068.54 653.164,1082.71 759.527,977.52 598.59,963.387"
4960+ inkscape:connector-curvature="0" />
4961+ <path
4962+ id="path4637"
4963+ style="fill:#871f1c;fill-opacity:1;fill-rule:nonzero;stroke:none"
4964+ d="M 598.59,963.387 492.281,1068.54 653.164,1082.71 759.527,977.52 598.59,963.387"
4965+ inkscape:connector-curvature="0" />
4966+ <path
4967+ id="path4639"
4968+ style="fill:#871f1c;fill-opacity:1;fill-rule:nonzero;stroke:none"
4969+ d="M 598.301,790.207 491.898,895.324 492.281,1068.54 598.59,963.387 598.301,790.207"
4970+ inkscape:connector-curvature="0" />
4971+ <path
4972+ id="path4641"
4973+ style="fill:#e63f46;fill-opacity:1;fill-rule:nonzero;stroke:none"
4974+ d="m 1369.03,1323.22 1.02,-254.93 -160.81,-14.08 -1.03,254.89 160.82,14.12"
4975+ inkscape:connector-curvature="0" />
4976+ <path
4977+ id="path4643"
4978+ style="fill:#e23535;fill-opacity:1;fill-rule:nonzero;stroke:none"
4979+ d="m 1369.03,1323.22 1.02,-254.93 -160.81,-14.08 -1.03,254.89 160.82,14.12"
4980+ inkscape:connector-curvature="0" />
4981+ <path
4982+ id="path4645"
4983+ style="fill:#871f1c;fill-opacity:1;fill-rule:nonzero;stroke:none"
4984+ d="m 1209.24,1054.21 -106.32,105.18 -1.03,254.93 106.32,-105.22 1.03,-254.89"
4985+ inkscape:connector-curvature="0" />
4986+ <path
4987+ id="path4647"
4988+ style="fill:#871f1c;fill-opacity:1;fill-rule:nonzero;stroke:none"
4989+ d="m 1208.21,1309.1 -106.32,105.22 160.81,14.04 106.33,-105.14 -160.82,-14.12"
4990+ inkscape:connector-curvature="0" />
4991+ <path
4992+ id="path4649"
4993+ style="fill:#871f1c;fill-opacity:1;fill-rule:nonzero;stroke:none"
4994+ d="m 1208.21,1309.1 -106.32,105.22 160.81,14.04 106.33,-105.14 -160.82,-14.12"
4995+ inkscape:connector-curvature="0" />
4996+ <path
4997+ id="path4651"
4998+ style="fill:#e63f46;fill-opacity:1;fill-rule:nonzero;stroke:none"
4999+ d="m 758.598,1269.75 0.972,-254.95 -160.871,-14.08 -1.019,254.94 160.918,14.09"
5000+ inkscape:connector-curvature="0" />
The diff has been truncated for viewing.
