Merge lp:~jacekn/charms/precise/haproxy/haproxy-updates into lp:charms/haproxy

Proposed by Jacek Nykis
Status: Merged
Merged at revision: 85
Proposed branch: lp:~jacekn/charms/precise/haproxy/haproxy-updates
Merge into: lp:charms/haproxy
Diff against target: 5649 lines (+4614/-89)
51 files modified
Makefile (+2/-2)
README.md (+51/-2)
config.yaml (+59/-0)
files/nrpe/check_haproxy.sh (+14/-4)
files/nrpe/check_haproxy_queue_depth.sh (+2/-2)
hooks/charmhelpers/cli/README.rst (+57/-0)
hooks/charmhelpers/cli/__init__.py (+147/-0)
hooks/charmhelpers/cli/commands.py (+2/-0)
hooks/charmhelpers/cli/host.py (+15/-0)
hooks/charmhelpers/contrib/ansible/__init__.py (+165/-0)
hooks/charmhelpers/contrib/charmhelpers/IMPORT (+4/-0)
hooks/charmhelpers/contrib/charmhelpers/__init__.py (+184/-0)
hooks/charmhelpers/contrib/charmsupport/IMPORT (+14/-0)
hooks/charmhelpers/contrib/charmsupport/nrpe.py (+1/-3)
hooks/charmhelpers/contrib/hahelpers/apache.py (+58/-0)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+183/-0)
hooks/charmhelpers/contrib/jujugui/IMPORT (+4/-0)
hooks/charmhelpers/contrib/jujugui/utils.py (+602/-0)
hooks/charmhelpers/contrib/network/ip.py (+69/-0)
hooks/charmhelpers/contrib/network/ovs/__init__.py (+75/-0)
hooks/charmhelpers/contrib/openstack/alternatives.py (+17/-0)
hooks/charmhelpers/contrib/openstack/context.py (+577/-0)
hooks/charmhelpers/contrib/openstack/neutron.py (+137/-0)
hooks/charmhelpers/contrib/openstack/templates/__init__.py (+2/-0)
hooks/charmhelpers/contrib/openstack/templates/ceph.conf (+11/-0)
hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg (+37/-0)
hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend (+23/-0)
hooks/charmhelpers/contrib/openstack/templating.py (+280/-0)
hooks/charmhelpers/contrib/openstack/utils.py (+440/-0)
hooks/charmhelpers/contrib/saltstack/__init__.py (+102/-0)
hooks/charmhelpers/contrib/ssl/__init__.py (+78/-0)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+383/-0)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+62/-0)
hooks/charmhelpers/contrib/storage/linux/lvm.py (+88/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+25/-0)
hooks/charmhelpers/contrib/templating/contexts.py (+73/-0)
hooks/charmhelpers/contrib/templating/pyformat.py (+13/-0)
hooks/charmhelpers/core/hookenv.py (+78/-23)
hooks/charmhelpers/core/host.py (+66/-14)
hooks/charmhelpers/fetch/__init__.py (+84/-12)
hooks/charmhelpers/fetch/bzrurl.py (+7/-2)
hooks/charmhelpers/payload/__init__.py (+1/-0)
hooks/charmhelpers/payload/archive.py (+57/-0)
hooks/charmhelpers/payload/execd.py (+50/-0)
hooks/hooks.py (+86/-11)
hooks/tests/test_config_changed_hooks.py (+1/-0)
hooks/tests/test_helpers.py (+67/-9)
hooks/tests/test_install.py (+5/-0)
hooks/tests/test_reverseproxy_hooks.py (+55/-3)
hooks/tests/test_website_hooks.py (+0/-1)
metadata.yaml (+1/-1)
To merge this branch: bzr merge lp:~jacekn/charms/precise/haproxy/haproxy-updates
Reviewer Review Type Date Requested Status
Andrew McLeod (community) Approve
Tim Van Steenburgh (community) Needs Fixing
Review via email: mp+272559@code.launchpad.net

Description of the change

This branch makes a few improvements to the haproxy charm:
- add SSL support
- add nagios_servicegroups config option
- multiple monitoring fixes
- add open_monitoring_port option

Please note that all tests related to the changes are passing. There is, however, one test that is currently failing in the upstream branch, unrelated to my changes. See https://bugs.launchpad.net/charms/+source/haproxy/+bug/1499798 for details.

Revision history for this message
Tim Van Steenburgh (tvansteenburgh) wrote :

Hey Jacek,

This looks good, thanks! I went ahead and fixed up the test failures on another branch here: lp:~tvansteenburgh/charms/precise/haproxy/haproxy-updates-test-fixes. If you will merge my fixes into your branch, I'll gladly approve and merge your branch.

review: Needs Fixing
Revision history for this message
Barry Price (barryprice) wrote :

Hi Tim,

Sorry for the delay on this. As suggested, I've merged your fixes back into this branch; could you re-review?

Revision history for this message
Andrew McLeod (admcleod) wrote :

Hi Barry,

I've tested this now and can confirm all tests pass, so here's my +1.

Andrew

review: Approve
Revision history for this message
Cory Johns (johnsca) wrote :

Barry and Jacek,

This has been merged and will be available in cs:precise/haproxy shortly.

However, I see that there is also a separate cs:trusty/haproxy branch. Should these changes also be proposed against that branch?

Preview Diff

=== modified file 'Makefile'
--- Makefile 2014-09-11 18:50:46 +0000
+++ Makefile 2016-02-12 04:16:45 +0000
@@ -18,7 +18,7 @@
 	@test `cat revision` = 0 && rm revision
 
 .venv:
-	sudo apt-get install -y python-apt python-virtualenv
+	sudo apt-get install -y python-apt python-virtualenv python-jinja2
 	virtualenv .venv --system-site-packages
 	.venv/bin/pip install -I nose testtools mock pyyaml
 
@@ -28,7 +28,7 @@
 
 lint:
 	@echo Checking for Python syntax...
-	@flake8 $(HOOKS_DIR) --ignore=E123 --exclude=$(HOOKS_DIR)/charmhelpers && echo OK
+	@flake8 $(HOOKS_DIR) --ignore=E123,E265 --exclude=$(HOOKS_DIR)/charmhelpers && echo OK
 
 sourcedeps: $(PWD)/config-manager.txt
 	@echo Updating source dependencies...
 
=== modified file 'README.md'
--- README.md 2014-09-08 18:21:01 +0000
+++ README.md 2016-02-12 04:16:45 +0000
@@ -78,7 +78,6 @@
 
 ## Website Relation
 
-
 The website relation is the other side of haproxy. It can communicate with
 charms written like apache2 that can act as a front-end for haproxy to take of
 things like ssl encryption. When joining a service like apache2 on its
@@ -130,7 +129,57 @@
 cross-environment relations then that will be the best way to handle
 this configuration, as it will work in either scenario.
 
-## HAProxy Project Information
+## peering\_mode and the indirection layer
+
+If you are going to spawn multiple haproxy units, you should pay special
+attention to the peering\_mode configuration option.
+
+### active-passive mode
+
+The peering\_mode option defaults to "active-passive" and in this mode, all
+haproxy units ("peers") will proxy traffic to the first working peer (i.e. that
+passes a basic layer4 check). What this means is that extra peers are working
+as "hot spares", and so adding units doesn't add global bandwidth to the
+haproxy layer.
+
+In order to achieve this, the charm configures a new service in haproxy that
+will simply forward the traffic to the first working peer. The haproxy service
+that actually load-balances between the backends is renamed, and its port
+number is increased by one.
+
+For example, if you have 3 working haproxy units haproxy/0, haproxy/1 and
+haproxy/2 configured to listen on port 80, in active-passive mode, and
+haproxy/2 gets a request, the request is routed through the following path :
+
+haproxy/2:80 ==> haproxy/0:81 ==> \[backends\]
+
+In the same fashion, if haproxy/1 receives a request, it's routed in the following way :
+
+haproxy/1:80 ==> haproxy/0:81 ==> \[backends\]
+
+If haproxy/0 was to go down, then all the requests would be forwarded to the
+next working peer, i.e. haproxy/1. In this case, a request received by
+haproxy/2 would be routed as follows :
+
+haproxy/2:80 ==> haproxy/1:81 ==> \[backends\]
+
+This mode allows a strict control of the maximum number of connections the
+backends will receive, and guarantees you'll have enough bandwidth to the
+backends should an haproxy unit die, at the cost of having less overall
+bandwidth to the backends.
+
+### active-active mode
+
+If the peering\_mode option is set to "active-active", then any haproxy unit
+will be independant from each other and will simply load-balance the traffic to
+the backends. In this case, the indirection layer described above is not
+created in this case.
+
+This mode allows increasing the bandwidth to the backends by adding additional
+units, at the cost of having less control over the number of connections that
+they will receive.
+
+# HAProxy Project Information
 
 - [HAProxy Homepage](http://haproxy.1wt.eu/)
 - [HAProxy mailing list](http://haproxy.1wt.eu/#tact)
 
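To make the indirection layer described in the README hunk above concrete, here is an illustrative haproxy configuration fragment. This is a hand-written sketch, not the charm's actual generated output: the stanza names, addresses, and check options are all invented.

```
# Sketch of active-passive peering for 3 haproxy units on port 80.
# All names/addresses below are hypothetical, not charm output.
listen indirection_layer            # public port; forwards to first live peer
    bind 0.0.0.0:80
    # layer4 checks; traffic goes to haproxy-0 until it fails, then haproxy-1
    server haproxy-0 10.0.0.10:81 check
    server haproxy-1 10.0.0.11:81 check backup
    server haproxy-2 10.0.0.12:81 check backup

listen real_service                 # the original service, port moved up by one
    bind 0.0.0.0:81
    server web-0 10.0.1.10:8080 check
    server web-1 10.0.1.11:8080 check
```

With this layout, a request hitting any unit's port 80 is relayed to haproxy-0's port 81 (the real load-balancer), matching the haproxy/2:80 ==> haproxy/0:81 ==> \[backends\] path described above.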
=== modified file 'config.yaml'
--- config.yaml 2014-09-08 18:21:01 +0000
+++ config.yaml 2016-02-12 04:16:45 +0000
@@ -71,6 +71,14 @@
     default: False
     type: boolean
     description: Enable monitoring
+  open_monitoring_port:
+    default: True
+    type: boolean
+    description: |
+      Open the monitoring port when enable_monitoring is true.
+
+      Consider setting this to false if exposing haproxy on a shared
+      or untrusted network, e.g., when deploying a frontend.
   monitoring_port:
     default: 10000
     type: int
@@ -126,6 +134,20 @@
       with a cookie. Session are sticky by default. To turn off sticky sessions,
       remove the 'cookie SRVNAME insert' and 'cookie S{i}' stanzas from
       `service_options` and `server_options`.
+  ssl_cert:
+    default: ""
+    type: string
+    description: |
+      This option is only supported in Haproxy >= 1.5.
+
+      Use this SSL certificate for frontend SSL termination, if specified.
+      This should be a concatenation of:
+       * The public certificate (PEM)
+       * Zero or more intermediate CA certificates (PEM)
+       * The private key (PEM)
+      The certificate(s) + private key will be installed with read-access to the
+      haproxy service user. If this option is set, all bind stanzas will use this
+      certificate.
   sysctl:
     default: ""
     type: string
@@ -142,6 +164,33 @@
       juju-postgresql-0
       If you're running multiple environments with the same services in them
       this allows you to differentiate between them.
+  nagios_servicegroups:
+    default: ""
+    type: string
+    description: |
+      A comma-separated list of nagios servicegroups.
+      If left empty, the nagios_context will be used as the servicegroup
+  install_sources:
+    default: ""
+    type: string
+    description: |
+      YAML list of additional installation sources, as a string. The number of
+      install_sources must match the number of install_keys. For example:
+        install_sources: |
+          - ppa:project1/ppa
+          - ppa:project2/ppa
+  install_keys:
+    default: ""
+    type: string
+    description: |
+      YAML list of GPG keys for installation sources, as a string. For apt repository
+      URLs, use the public key ID used to verify package signatures. For
+      other sources such as PPA, use empty string. This list must have the
+      same number of elements as install_sources, even if the key items are
+      all empty string. An example to go with the above for install_sources:
+        install_keys: |
+          - ""
+          - ""
   metrics_target:
     default: ""
     type: string
@@ -159,3 +208,13 @@
     default: 5
     type: int
     description: Period for metrics cron job to run in minutes
+  peering_mode:
+    default: "active-passive"
+    type: string
+    description: |
+      Possible values : "active-passive", "active-active". This is only used
+      if several units are spawned. In "active-passive" mode, all the units will
+      forward traffic to the first working haproxy unit, which will then forward it
+      to configured backends. In "active-active" mode, each unit will proxy the
+      traffic directly to the backends. The "active-passive" mode gives a better
+      control of the maximum connection that will be opened to a backend server.
 
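The new ssl_cert option expects the PEM sections concatenated in a specific order: public certificate, then intermediates, then the private key. A minimal shell sketch of assembling such a bundle (the file names are hypothetical placeholders, created here only so the sketch is self-contained):

```shell
# Placeholder PEM files standing in for real cert material.
printf -- '-----BEGIN CERTIFICATE-----\npublic cert\n-----END CERTIFICATE-----\n' > server.crt
printf -- '-----BEGIN CERTIFICATE-----\nintermediate CA\n-----END CERTIFICATE-----\n' > intermediate-ca.crt
printf -- '-----BEGIN PRIVATE KEY-----\nprivate key\n-----END PRIVATE KEY-----\n' > server.key

# Order matters: public cert, then intermediates, then the private key.
cat server.crt intermediate-ca.crt server.key > haproxy-bundle.pem
head -n1 haproxy-bundle.pem
```

The bundle would then be handed to the charm with something like `juju set haproxy ssl_cert="$(cat haproxy-bundle.pem)"` (a hypothetical invocation; whether the option wants the raw PEM or an encoded form should be checked against the charm's documentation).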
=== modified file 'files/nrpe/check_haproxy.sh'
--- files/nrpe/check_haproxy.sh 2013-03-27 15:41:26 +0000
+++ files/nrpe/check_haproxy.sh 2016-02-12 04:16:45 +0000
@@ -10,14 +10,24 @@
 NOTACTIVE=''
 LOGFILE=/var/log/nagios/check_haproxy.log
 AUTH=$(grep -r "stats auth" /etc/haproxy | head -1 | awk '{print $4}')
-
-for appserver in $(grep ' server' /etc/haproxy/haproxy.cfg | awk '{print $2'});
+SSL=$(grep 10000 /etc/haproxy/haproxy.cfg | grep -q ssl && echo "-S")
+
+/usr/lib/nagios/plugins/check_http ${SSL} -a ${AUTH} -I 127.0.0.1 -p 10000 --regex="HAProxy version 1.5.*" -e ' 200 OK' > /dev/null 2>&1
+if [ $? == 0 ]
+then
+    # this is 1.5, which changed the class values
+    REGEX="class=\"(active|backup)(3|4).*"
+else
+    REGEX="class=\"(active|backup)(2|3).*"
+fi
+
+for appserver in $(grep ' server' /etc/haproxy/haproxy.cfg | awk '{print $2}');
 do
-    output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 10000 --regex="class=\"(active|backup)(2|3).*${appserver}" -e ' 200 OK')
+    output=$(/usr/lib/nagios/plugins/check_http ${SSL} -a ${AUTH} -I 127.0.0.1 -p 10000 --regex="${REGEX}${appserver}" -e ' 200 OK')
     if [ $? != 0 ]; then
         date >> $LOGFILE
         echo $output >> $LOGFILE
-        /usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 10000 -v | grep $appserver >> $LOGFILE 2>&1
+        /usr/lib/nagios/plugins/check_http ${SSL} -a ${AUTH} -I 127.0.0.1 -p 10000 -v | grep $appserver >> $LOGFILE 2>&1
         CRITICAL=1
         NOTACTIVE="${NOTACTIVE} $appserver"
     fi
 
=== modified file 'files/nrpe/check_haproxy_queue_depth.sh'
--- files/nrpe/check_haproxy_queue_depth.sh 2014-07-22 21:13:48 +0000
+++ files/nrpe/check_haproxy_queue_depth.sh 2016-02-12 04:16:45 +0000
@@ -16,8 +16,8 @@
 
 for BACKEND in $(echo $HAPROXYSTATS| xargs -n1 | grep BACKEND | awk -F , '{print $1}')
 do
-    CURRQ=$(echo "$HAPROXYSTATS" | grep $BACKEND | grep BACKEND | cut -d , -f 3)
-    MAXQ=$(echo "$HAPROXYSTATS" | grep $BACKEND | grep BACKEND | cut -d , -f 4)
+    CURRQ=$(echo "$HAPROXYSTATS" | grep ^$BACKEND, | grep BACKEND | cut -d , -f 3)
+    MAXQ=$(echo "$HAPROXYSTATS" | grep ^$BACKEND, | grep BACKEND | cut -d , -f 4)
 
     if [[ $CURRQ -gt $CURRQthrsh || $MAXQ -gt $MAXQthrsh ]] ; then
         echo "CRITICAL: queue depth for $BACKEND - CURRENT:$CURRQ MAX:$MAXQ"
 
2424
=== added directory 'hooks/charmhelpers/cli'
=== added file 'hooks/charmhelpers/cli/README.rst'
--- hooks/charmhelpers/cli/README.rst 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/README.rst 2016-02-12 04:16:45 +0000
@@ -0,0 +1,57 @@
+==========
+Commandant
+==========
+
+-----------------------------------------------------
+Automatic command-line interfaces to Python functions
+-----------------------------------------------------
+
+One of the benefits of ``libvirt`` is the uniformity of the interface: the C API (as well as the bindings in other languages) is a set of functions that accept parameters that are nearly identical to the command-line arguments. If you run ``virsh``, you get an interactive command prompt that supports all of the same commands that your shell scripts use as ``virsh`` subcommands.
+
+Command execution and stdio manipulation is the greatest common factor across all development systems in the POSIX environment. By exposing your functions as commands that manipulate streams of text, you can make life easier for all the Ruby and Erlang and Go programmers in your life.
+
+Goals
+=====
+
+* Single decorator to expose a function as a command.
+  * now two decorators - one "automatic" and one that allows authors to manipulate the arguments for fine-grained control.(MW)
+* Automatic analysis of function signature through ``inspect.getargspec()``
+* Command argument parser built automatically with ``argparse``
+* Interactive interpreter loop object made with ``Cmd``
+* Options to output structured return value data via ``pprint``, ``yaml`` or ``json`` dumps.
+
+Other Important Features that need writing
+------------------------------------------
+
+* Help and Usage documentation can be automatically generated, but it will be important to let users override this behaviour
+* The decorator should allow specifying further parameters to the parser's add_argument() calls, to specify types or to make arguments behave as boolean flags, etc.
+  - Filename arguments are important, as good practice is for functions to accept file objects as parameters.
+  - choices arguments help to limit bad input before the function is called
+* Some automatic behaviour could make for better defaults, once the user can override them.
+  - We could automatically detect arguments that default to False or True, and automatically support --no-foo for foo=True.
+  - We could automatically support hyphens as alternates for underscores
+  - Arguments defaulting to sequence types could support the ``append`` action.
+
+
+-----------------------------------------------------
+Implementing subcommands
+-----------------------------------------------------
+
+(WIP)
+
+So as to avoid dependencies on the cli module, subcommands should be defined separately from their implementations. The recommmendation would be to place definitions into separate modules near the implementations which they expose.
+
+Some examples::
+
+    from charmhelpers.cli import CommandLine
+    from charmhelpers.payload import execd
+    from charmhelpers.foo import bar
+
+    cli = CommandLine()
+
+    cli.subcommand(execd.execd_run)
+
+    @cli.subcommand_builder("bar", help="Bar baz qux")
+    def barcmd_builder(subparser):
+        subparser.add_argument('argument1', help="yackety")
+        return bar
=== added file 'hooks/charmhelpers/cli/__init__.py'
--- hooks/charmhelpers/cli/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/__init__.py 2016-02-12 04:16:45 +0000
@@ -0,0 +1,147 @@
+import inspect
+import itertools
+import argparse
+import sys
+
+
+class OutputFormatter(object):
+    def __init__(self, outfile=sys.stdout):
+        self.formats = (
+            "raw",
+            "json",
+            "py",
+            "yaml",
+            "csv",
+            "tab",
+        )
+        self.outfile = outfile
+
+    def add_arguments(self, argument_parser):
+        formatgroup = argument_parser.add_mutually_exclusive_group()
+        choices = self.supported_formats
+        formatgroup.add_argument("--format", metavar='FMT',
+                                 help="Select output format for returned data, "
+                                      "where FMT is one of: {}".format(choices),
+                                 choices=choices, default='raw')
+        for fmt in self.formats:
+            fmtfunc = getattr(self, fmt)
+            formatgroup.add_argument("-{}".format(fmt[0]),
+                                     "--{}".format(fmt), action='store_const',
+                                     const=fmt, dest='format',
+                                     help=fmtfunc.__doc__)
+
+    @property
+    def supported_formats(self):
+        return self.formats
+
+    def raw(self, output):
+        """Output data as raw string (default)"""
+        self.outfile.write(str(output))
+
+    def py(self, output):
+        """Output data as a nicely-formatted python data structure"""
+        import pprint
+        pprint.pprint(output, stream=self.outfile)
+
+    def json(self, output):
+        """Output data in JSON format"""
+        import json
+        json.dump(output, self.outfile)
+
+    def yaml(self, output):
+        """Output data in YAML format"""
+        import yaml
+        yaml.safe_dump(output, self.outfile)
+
+    def csv(self, output):
+        """Output data as excel-compatible CSV"""
+        import csv
+        csvwriter = csv.writer(self.outfile)
+        csvwriter.writerows(output)
+
+    def tab(self, output):
+        """Output data in excel-compatible tab-delimited format"""
+        import csv
+        csvwriter = csv.writer(self.outfile, dialect=csv.excel_tab)
+        csvwriter.writerows(output)
+
+    def format_output(self, output, fmt='raw'):
+        fmtfunc = getattr(self, fmt)
+        fmtfunc(output)
+
+
+class CommandLine(object):
+    argument_parser = None
+    subparsers = None
+    formatter = None
+
+    def __init__(self):
+        if not self.argument_parser:
+            self.argument_parser = argparse.ArgumentParser(description='Perform common charm tasks')
+        if not self.formatter:
+            self.formatter = OutputFormatter()
+            self.formatter.add_arguments(self.argument_parser)
+        if not self.subparsers:
+            self.subparsers = self.argument_parser.add_subparsers(help='Commands')
+
+    def subcommand(self, command_name=None):
+        """
+        Decorate a function as a subcommand. Use its arguments as the
+        command-line arguments"""
+        def wrapper(decorated):
+            cmd_name = command_name or decorated.__name__
+            subparser = self.subparsers.add_parser(cmd_name,
+                                                   description=decorated.__doc__)
+            for args, kwargs in describe_arguments(decorated):
+                subparser.add_argument(*args, **kwargs)
+            subparser.set_defaults(func=decorated)
+            return decorated
+        return wrapper
+
+    def subcommand_builder(self, command_name, description=None):
+        """
+        Decorate a function that builds a subcommand. Builders should accept a
+        single argument (the subparser instance) and return the function to be
+        run as the command."""
+        def wrapper(decorated):
+            subparser = self.subparsers.add_parser(command_name)
+            func = decorated(subparser)
+            subparser.set_defaults(func=func)
+            subparser.description = description or func.__doc__
+        return wrapper
+
+    def run(self):
+        "Run cli, processing arguments and executing subcommands."
+        arguments = self.argument_parser.parse_args()
+        argspec = inspect.getargspec(arguments.func)
+        vargs = []
+        kwargs = {}
+        if argspec.varargs:
+            vargs = getattr(arguments, argspec.varargs)
+        for arg in argspec.args:
+            kwargs[arg] = getattr(arguments, arg)
+        self.formatter.format_output(arguments.func(*vargs, **kwargs), arguments.format)
+
+
+cmdline = CommandLine()
+
+
+def describe_arguments(func):
+    """
+    Analyze a function's signature and return a data structure suitable for
+    passing in as arguments to an argparse parser's add_argument() method."""
+
+    argspec = inspect.getargspec(func)
+    # we should probably raise an exception somewhere if func includes **kwargs
+    if argspec.defaults:
+        positional_args = argspec.args[:-len(argspec.defaults)]
+        keyword_names = argspec.args[-len(argspec.defaults):]
+        for arg, default in itertools.izip(keyword_names, argspec.defaults):
+            yield ('--{}'.format(arg),), {'default': default}
+    else:
+        positional_args = argspec.args
+
+    for arg in positional_args:
+        yield (arg,), {}
+    if argspec.varargs:
+        yield (argspec.varargs,), {'nargs': '*'}
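The describe_arguments() generator in the file above is what lets the subcommand decorator turn a plain function into a CLI: keyword arguments become --options carrying their defaults, and remaining parameters become positionals. A self-contained Python 3 re-sketch of that idea (the original targets Python 2, hence inspect.getargspec and itertools.izip; this sketch uses getfullargspec and zip instead, and the greet function is an invented example):

```python
import argparse
import inspect


def describe_arguments(func):
    """Yield (args, kwargs) tuples suitable for ArgumentParser.add_argument(),
    derived from func's signature."""
    spec = inspect.getfullargspec(func)
    if spec.defaults:
        # Parameters with defaults become optional --flags.
        positional = spec.args[:-len(spec.defaults)]
        for name, default in zip(spec.args[-len(spec.defaults):], spec.defaults):
            yield ('--{}'.format(name),), {'default': default}
    else:
        positional = spec.args
    # Everything else becomes a positional argument.
    for name in positional:
        yield (name,), {}


def greet(name, greeting='hello'):
    """Example command: greet someone."""
    return '{} {}'.format(greeting, name)


parser = argparse.ArgumentParser(description=greet.__doc__)
for args, kwargs in describe_arguments(greet):
    parser.add_argument(*args, **kwargs)

ns = parser.parse_args(['world', '--greeting', 'hi'])
print(greet(ns.name, ns.greeting))  # prints "hi world"
```

This mirrors how CommandLine.run() later reassembles the parsed namespace back into keyword arguments for the decorated function.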
=== added file 'hooks/charmhelpers/cli/commands.py'
--- hooks/charmhelpers/cli/commands.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/commands.py 2016-02-12 04:16:45 +0000
@@ -0,0 +1,2 @@
+from . import CommandLine
+import host
=== added file 'hooks/charmhelpers/cli/host.py'
--- hooks/charmhelpers/cli/host.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/cli/host.py 2016-02-12 04:16:45 +0000
@@ -0,0 +1,15 @@
+from . import cmdline
+from charmhelpers.core import host
+
+
+@cmdline.subcommand()
+def mounts():
+    "List mounts"
+    return host.mounts()
+
+
+@cmdline.subcommand_builder('service', description="Control system services")
+def service(subparser):
+    subparser.add_argument("action", help="The action to perform (start, stop, etc...)")
+    subparser.add_argument("service_name", help="Name of the service to control")
+    return host.service
=== added directory 'hooks/charmhelpers/contrib/ansible'
=== added file 'hooks/charmhelpers/contrib/ansible/__init__.py'
--- hooks/charmhelpers/contrib/ansible/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/ansible/__init__.py 2016-02-12 04:16:45 +0000
@@ -0,0 +1,165 @@
+# Copyright 2013 Canonical Ltd.
+#
+# Authors:
+#  Charm Helpers Developers <juju@lists.ubuntu.com>
+"""Charm Helpers ansible - declare the state of your machines.
+
+This helper enables you to declare your machine state, rather than
+program it procedurally (and have to test each change to your procedures).
+Your install hook can be as simple as:
+
+{{{
+import charmhelpers.contrib.ansible
+
+
+def install():
+    charmhelpers.contrib.ansible.install_ansible_support()
+    charmhelpers.contrib.ansible.apply_playbook('playbooks/install.yaml')
+}}}
+
+and won't need to change (nor will its tests) when you change the machine
+state.
+
+All of your juju config and relation-data are available as template
+variables within your playbooks and templates. An install playbook looks
+something like:
+
+{{{
+---
+- hosts: localhost
+  user: root
+
+  tasks:
+    - name: Add private repositories.
+      template:
+        src: ../templates/private-repositories.list.jinja2
+        dest: /etc/apt/sources.list.d/private.list
+
+    - name: Update the cache.
+      apt: update_cache=yes
+
+    - name: Install dependencies.
+      apt: pkg={{ item }}
+      with_items:
+        - python-mimeparse
+        - python-webob
+        - sunburnt
+
+    - name: Setup groups.
+      group: name={{ item.name }} gid={{ item.gid }}
+      with_items:
+        - { name: 'deploy_user', gid: 1800 }
+        - { name: 'service_user', gid: 1500 }
+
+  ...
+}}}
+
+Read more online about playbooks[1] and standard ansible modules[2].
+
+[1] http://www.ansibleworks.com/docs/playbooks.html
+[2] http://www.ansibleworks.com/docs/modules.html
+"""
+import os
+import subprocess
+
+import charmhelpers.contrib.templating.contexts
+import charmhelpers.core.host
+import charmhelpers.core.hookenv
+import charmhelpers.fetch
+
+
+charm_dir = os.environ.get('CHARM_DIR', '')
+ansible_hosts_path = '/etc/ansible/hosts'
+# Ansible will automatically include any vars in the following
+# file in its inventory when run locally.
+ansible_vars_path = '/etc/ansible/host_vars/localhost'
+
+
+def install_ansible_support(from_ppa=True):
+    """Installs the ansible package.
+
+    By default it is installed from the PPA [1] linked from
+    the ansible website [2].
+
+    [1] https://launchpad.net/~rquillo/+archive/ansible
+    [2] http://www.ansibleworks.com/docs/gettingstarted.html#ubuntu-and-debian
+
+    If from_ppa is false, you must ensure that the package is available
+    from a configured repository.
+    """
+    if from_ppa:
+        charmhelpers.fetch.add_source('ppa:rquillo/ansible')
+        charmhelpers.fetch.apt_update(fatal=True)
+    charmhelpers.fetch.apt_install('ansible')
+    with open(ansible_hosts_path, 'w+') as hosts_file:
+        hosts_file.write('localhost ansible_connection=local')
+
+
+def apply_playbook(playbook, tags=None):
+    tags = tags or []
+    tags = ",".join(tags)
+    charmhelpers.contrib.templating.contexts.juju_state_to_yaml(
+        ansible_vars_path, namespace_separator='__',
+        allow_hyphens_in_keys=False)
+    call = [
+        'ansible-playbook',
+        '-c',
+        'local',
+        playbook,
+    ]
+    if tags:
+        call.extend(['--tags', '{}'.format(tags)])
+    subprocess.check_call(call)
+
+
+class AnsibleHooks(charmhelpers.core.hookenv.Hooks):
+    """Run a playbook with the hook-name as the tag.
+
+    This helper builds on the standard hookenv.Hooks helper,
+    but additionally runs the playbook with the hook-name specified
+    using --tags (ie. running all the tasks tagged with the hook-name).
+
+    Example:
+        hooks = AnsibleHooks(playbook_path='playbooks/my_machine_state.yaml')
+
+        # All the tasks within my_machine_state.yaml tagged with 'install'
+        # will be run automatically after do_custom_work()
+        @hooks.hook()
+        def install():
+            do_custom_work()
+
+        # For most of your hooks, you won't need to do anything other
+        # than run the tagged tasks for the hook:
+        @hooks.hook('config-changed', 'start', 'stop')
+        def just_use_playbook():
+            pass
+
+        # As a convenience, you can avoid the above noop function by specifying
+        # the hooks which are handled by ansible-only and they'll be registered
+        # for you:
+        # hooks = AnsibleHooks(
+        #     'playbooks/my_machine_state.yaml',
+        #     default_hooks=['config-changed', 'start', 'stop'])
+
+        if __name__ == "__main__":
+            # execute a hook based on the name the program is called by
+            hooks.execute(sys.argv)
+    """
+
+    def __init__(self, playbook_path, default_hooks=None):
+        """Register any hooks handled by ansible."""
+        super(AnsibleHooks, self).__init__()
+
+        self.playbook_path = playbook_path
+
+        default_hooks = default_hooks or []
+        noop = lambda *args, **kwargs: None
+        for hook in default_hooks:
+            self.register(hook, noop)
+
+    def execute(self, args):
+        """Execute the hook followed by the playbook using the hook as tag."""
+        super(AnsibleHooks, self).execute(args)
+        hook_name = os.path.basename(args[0])
+        charmhelpers.contrib.ansible.apply_playbook(
+            self.playbook_path, tags=[hook_name])
=== added directory 'hooks/charmhelpers/contrib/charmhelpers'
=== added file 'hooks/charmhelpers/contrib/charmhelpers/IMPORT'
--- hooks/charmhelpers/contrib/charmhelpers/IMPORT 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/charmhelpers/IMPORT 2016-02-12 04:16:45 +0000
@@ -0,0 +1,4 @@
+Source lp:charm-tools/trunk
+
+charm-tools/helpers/python/charmhelpers/__init__.py -> charmhelpers/charmhelpers/contrib/charmhelpers/__init__.py
+charm-tools/helpers/python/charmhelpers/tests/test_charmhelpers.py -> charmhelpers/tests/contrib/charmhelpers/test_charmhelpers.py
=== added file 'hooks/charmhelpers/contrib/charmhelpers/__init__.py'
--- hooks/charmhelpers/contrib/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/charmhelpers/__init__.py 2016-02-12 04:16:45 +0000
@@ -0,0 +1,184 @@
+# Copyright 2012 Canonical Ltd. This software is licensed under the
+# GNU Affero General Public License version 3 (see the file LICENSE).
+
+import warnings
+warnings.warn("contrib.charmhelpers is deprecated", DeprecationWarning)
+
+"""Helper functions for writing Juju charms in Python."""
+
+__metaclass__ = type
+__all__ = [
+    #'get_config',             # core.hookenv.config()
+    #'log',                    # core.hookenv.log()
+    #'log_entry',              # core.hookenv.log()
+    #'log_exit',               # core.hookenv.log()
+    #'relation_get',           # core.hookenv.relation_get()
+    #'relation_set',           # core.hookenv.relation_set()
+    #'relation_ids',           # core.hookenv.relation_ids()
+    #'relation_list',          # core.hookenv.relation_units()
+    #'config_get',             # core.hookenv.config()
+    #'unit_get',               # core.hookenv.unit_get()
+    #'open_port',              # core.hookenv.open_port()
+    #'close_port',             # core.hookenv.close_port()
+    #'service_control',        # core.host.service()
+    'unit_info',               # client-side, NOT IMPLEMENTED
+    'wait_for_machine',        # client-side, NOT IMPLEMENTED
+    'wait_for_page_contents',  # client-side, NOT IMPLEMENTED
+    'wait_for_relation',       # client-side, NOT IMPLEMENTED
+    'wait_for_unit',           # client-side, NOT IMPLEMENTED
+]
+
+import operator
+from shelltoolbox import (
+    command,
+)
+import tempfile
+import time
+import urllib2
+import yaml
+
+SLEEP_AMOUNT = 0.1
+# We create a juju_status Command here because it makes testing much,
+# much easier.
+juju_status = lambda: command('juju')('status')
+
+# re-implemented as charmhelpers.fetch.configure_sources()
+#def configure_source(update=False):
+#    source = config_get('source')
+#    if ((source.startswith('ppa:') or
+#         source.startswith('cloud:') or
+#         source.startswith('http:'))):
+#        run('add-apt-repository', source)
+#    if source.startswith("http:"):
+#        run('apt-key', 'import', config_get('key'))
+#    if update:
+#        run('apt-get', 'update')
+
+
+# DEPRECATED: client-side only
+def make_charm_config_file(charm_config):
+    charm_config_file = tempfile.NamedTemporaryFile()
+    charm_config_file.write(yaml.dump(charm_config))
+    charm_config_file.flush()
+    # The NamedTemporaryFile instance is returned instead of just the name
+    # because we want to take advantage of garbage collection-triggered
+    # deletion of the temp file when it goes out of scope in the caller.
+    return charm_config_file
+
+
+# DEPRECATED: client-side only
+def unit_info(service_name, item_name, data=None, unit=None):
+    if data is None:
+        data = yaml.safe_load(juju_status())
+    service = data['services'].get(service_name)
+    if service is None:
+        # XXX 2012-02-08 gmb:
+        #     This allows us to cope with the race condition that we
+        #     have between deploying a service and having it come up in
+        #     `juju status`. We could probably do with cleaning it up so
+        #     that it fails a bit more noisily after a while.
+        return ''
+    units = service['units']
+    if unit is not None:
+        item = units[unit][item_name]
+    else:
+        # It might seem odd to sort the units here, but we do it to
+        # ensure that when no unit is specified, the first unit for the
+        # service (or at least the one with the lowest number) is the
+        # one whose data gets returned.
89 sorted_unit_names = sorted(units.keys())
90 item = units[sorted_unit_names[0]][item_name]
91 return item
92
93
94# DEPRECATED: client-side only
95def get_machine_data():
96 return yaml.safe_load(juju_status())['machines']
97
98
99# DEPRECATED: client-side only
100def wait_for_machine(num_machines=1, timeout=300):
101 """Wait `timeout` seconds for `num_machines` machines to come up.
102
103 This wait_for... function can be called by other wait_for functions
104 whose timeouts might be too short in situations where only a bare
105 Juju setup has been bootstrapped.
106
107 :return: A tuple of (num_machines, time_taken). This is used for
108 testing.
109 """
110 # You may think this is a hack, and you'd be right. The easiest way
111 # to tell what environment we're working in (LXC vs EC2) is to check
112 # the dns-name of the first machine. If it's localhost we're in LXC
113 # and we can just return here.
114 if get_machine_data()[0]['dns-name'] == 'localhost':
115 return 1, 0
116 start_time = time.time()
117 while True:
118 # Drop the first machine, since it's the Zookeeper and that's
119 # not a machine that we need to wait for. This will only work
120 # for EC2 environments, which is why we return early above if
121 # we're in LXC.
122 machine_data = get_machine_data()
123 non_zookeeper_machines = [
124 machine_data[key] for key in machine_data.keys()[1:]]
125 if len(non_zookeeper_machines) >= num_machines:
126 all_machines_running = True
127 for machine in non_zookeeper_machines:
128 if machine.get('instance-state') != 'running':
129 all_machines_running = False
130 break
131 if all_machines_running:
132 break
133 if time.time() - start_time >= timeout:
134 raise RuntimeError('timeout waiting for service to start')
135 time.sleep(SLEEP_AMOUNT)
136 return num_machines, time.time() - start_time
137
138
139# DEPRECATED: client-side only
140def wait_for_unit(service_name, timeout=480):
141 """Wait `timeout` seconds for a given service name to come up."""
142 wait_for_machine(num_machines=1)
143 start_time = time.time()
144 while True:
145 state = unit_info(service_name, 'agent-state')
146 if 'error' in state or state == 'started':
147 break
148 if time.time() - start_time >= timeout:
149 raise RuntimeError('timeout waiting for service to start')
150 time.sleep(SLEEP_AMOUNT)
151 if state != 'started':
152 raise RuntimeError('unit did not start, agent-state: ' + state)
153
154
155# DEPRECATED: client-side only
156def wait_for_relation(service_name, relation_name, timeout=120):
157 """Wait `timeout` seconds for a given relation to come up."""
158 start_time = time.time()
159 while True:
160 relation = unit_info(service_name, 'relations').get(relation_name)
161 if relation is not None and relation['state'] == 'up':
162 break
163 if time.time() - start_time >= timeout:
164 raise RuntimeError('timeout waiting for relation to be up')
165 time.sleep(SLEEP_AMOUNT)
166
167
168# DEPRECATED: client-side only
169def wait_for_page_contents(url, contents, timeout=120, validate=None):
170 if validate is None:
171 validate = operator.contains
172 start_time = time.time()
173 while True:
174 try:
175 stream = urllib2.urlopen(url)
176 except (urllib2.HTTPError, urllib2.URLError):
177 pass
178 else:
179 page = stream.read()
180 if validate(page, contents):
181 return page
182 if time.time() - start_time >= timeout:
183 raise RuntimeError('timeout waiting for contents of ' + url)
184 time.sleep(SLEEP_AMOUNT)
0185
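The deprecated wait_for_machine, wait_for_unit and wait_for_relation helpers above all share one poll/sleep/timeout loop. A minimal Python 3 sketch of that shared pattern (the `wait_until` name is ours, not part of the charm):

```python
import time

SLEEP_AMOUNT = 0.1  # same polling interval as the helpers above


def wait_until(predicate, timeout, description='condition'):
    """Poll `predicate` until it returns a truthy value or `timeout`
    seconds elapse. The loop shape mirrors wait_for_machine(),
    wait_for_unit() and wait_for_relation(): check, compare elapsed
    time against the deadline, sleep, repeat.
    """
    start_time = time.time()
    while True:
        result = predicate()
        if result:
            return result
        if time.time() - start_time >= timeout:
            raise RuntimeError('timeout waiting for ' + description)
        time.sleep(SLEEP_AMOUNT)
```

wait_for_unit, for instance, is this loop with a predicate that inspects unit_info(service_name, 'agent-state').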
=== added file 'hooks/charmhelpers/contrib/charmsupport/IMPORT'
--- hooks/charmhelpers/contrib/charmsupport/IMPORT 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/charmsupport/IMPORT 2016-02-12 04:16:45 +0000
@@ -0,0 +1,14 @@
1Source: lp:charmsupport/trunk
2
3charmsupport/charmsupport/execd.py -> charm-helpers/charmhelpers/contrib/charmsupport/execd.py
4charmsupport/charmsupport/hookenv.py -> charm-helpers/charmhelpers/contrib/charmsupport/hookenv.py
5charmsupport/charmsupport/host.py -> charm-helpers/charmhelpers/contrib/charmsupport/host.py
6charmsupport/charmsupport/nrpe.py -> charm-helpers/charmhelpers/contrib/charmsupport/nrpe.py
7charmsupport/charmsupport/volumes.py -> charm-helpers/charmhelpers/contrib/charmsupport/volumes.py
8
9charmsupport/tests/test_execd.py -> charm-helpers/tests/contrib/charmsupport/test_execd.py
10charmsupport/tests/test_hookenv.py -> charm-helpers/tests/contrib/charmsupport/test_hookenv.py
11charmsupport/tests/test_host.py -> charm-helpers/tests/contrib/charmsupport/test_host.py
12charmsupport/tests/test_nrpe.py -> charm-helpers/tests/contrib/charmsupport/test_nrpe.py
13
14charmsupport/bin/charmsupport -> charm-helpers/bin/contrib/charmsupport/charmsupport
015
=== modified file 'hooks/charmhelpers/contrib/charmsupport/nrpe.py'
--- hooks/charmhelpers/contrib/charmsupport/nrpe.py 2013-08-21 19:14:32 +0000
+++ hooks/charmhelpers/contrib/charmsupport/nrpe.py 2016-02-12 04:16:45 +0000
@@ -125,10 +125,8 @@
 
     def _locate_cmd(self, check_cmd):
         search_path = (
-            '/',
-            os.path.join(os.environ['CHARM_DIR'],
-                         'files/nrpe-external-master'),
             '/usr/lib/nagios/plugins',
+            '/usr/local/lib/nagios/plugins',
         )
         parts = shlex.split(check_cmd)
         for path in search_path:
 
=== added directory 'hooks/charmhelpers/contrib/hahelpers'
=== added file 'hooks/charmhelpers/contrib/hahelpers/__init__.py'
=== added file 'hooks/charmhelpers/contrib/hahelpers/apache.py'
--- hooks/charmhelpers/contrib/hahelpers/apache.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/hahelpers/apache.py 2016-02-12 04:16:45 +0000
@@ -0,0 +1,58 @@
1#
2# Copyright 2012 Canonical Ltd.
3#
4# This file is sourced from lp:openstack-charm-helpers
5#
6# Authors:
7# James Page <james.page@ubuntu.com>
8# Adam Gandelman <adamg@ubuntu.com>
9#
10
11import subprocess
12
13from charmhelpers.core.hookenv import (
14 config as config_get,
15 relation_get,
16 relation_ids,
17 related_units as relation_list,
18 log,
19 INFO,
20)
21
22
23def get_cert():
24 cert = config_get('ssl_cert')
25 key = config_get('ssl_key')
26 if not (cert and key):
27 log("Inspecting identity-service relations for SSL certificate.",
28 level=INFO)
29 cert = key = None
30 for r_id in relation_ids('identity-service'):
31 for unit in relation_list(r_id):
32 if not cert:
33 cert = relation_get('ssl_cert',
34 rid=r_id, unit=unit)
35 if not key:
36 key = relation_get('ssl_key',
37 rid=r_id, unit=unit)
38 return (cert, key)
39
40
41def get_ca_cert():
42 ca_cert = None
43 log("Inspecting identity-service relations for CA SSL certificate.",
44 level=INFO)
45 for r_id in relation_ids('identity-service'):
46 for unit in relation_list(r_id):
47 if not ca_cert:
48 ca_cert = relation_get('ca_cert',
49 rid=r_id, unit=unit)
50 return ca_cert
51
52
53def install_ca_cert(ca_cert):
54 if ca_cert:
55 with open('/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt',
56 'w') as crt:
57 crt.write(ca_cert)
58 subprocess.check_call(['update-ca-certificates', '--fresh'])
059
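get_cert() applies an all-or-nothing precedence: charm config wins only if it supplies both ssl_cert and ssl_key; otherwise both values are taken from identity-service relation data, keeping the first non-empty value seen for each. A standalone sketch of that precedence using plain dicts in place of the hookenv calls (the function name and dict-based inputs are illustrative):

```python
def get_cert_like(config, relation_units):
    """Mirror get_cert()'s lookup order: charm config must supply both
    settings, otherwise fall back to relation data, keeping the first
    non-empty value seen for each setting."""
    cert = config.get('ssl_cert')
    key = config.get('ssl_key')
    if not (cert and key):
        cert = key = None
        for unit_data in relation_units:  # one dict per related unit
            if not cert:
                cert = unit_data.get('ssl_cert')
            if not key:
                key = unit_data.get('ssl_key')
    return cert, key
```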
=== added file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
--- hooks/charmhelpers/contrib/hahelpers/cluster.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2016-02-12 04:16:45 +0000
@@ -0,0 +1,183 @@
1#
2# Copyright 2012 Canonical Ltd.
3#
4# Authors:
5# James Page <james.page@ubuntu.com>
6# Adam Gandelman <adamg@ubuntu.com>
7#
8
9import subprocess
10import os
11
12from socket import gethostname as get_unit_hostname
13
14from charmhelpers.core.hookenv import (
15 log,
16 relation_ids,
17 related_units as relation_list,
18 relation_get,
19 config as config_get,
20 INFO,
21 ERROR,
22 unit_get,
23)
24
25
26class HAIncompleteConfig(Exception):
27 pass
28
29
30def is_clustered():
31 for r_id in (relation_ids('ha') or []):
32 for unit in (relation_list(r_id) or []):
33 clustered = relation_get('clustered',
34 rid=r_id,
35 unit=unit)
36 if clustered:
37 return True
38 return False
39
40
41def is_leader(resource):
42 cmd = [
43 "crm", "resource",
44 "show", resource
45 ]
46 try:
47 status = subprocess.check_output(cmd)
48 except subprocess.CalledProcessError:
49 return False
50 else:
51 if get_unit_hostname() in status:
52 return True
53 else:
54 return False
55
56
57def peer_units():
58 peers = []
59 for r_id in (relation_ids('cluster') or []):
60 for unit in (relation_list(r_id) or []):
61 peers.append(unit)
62 return peers
63
64
65def oldest_peer(peers):
66 local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1])
67 for peer in peers:
68 remote_unit_no = int(peer.split('/')[1])
69 if remote_unit_no < local_unit_no:
70 return False
71 return True
72
73
74def eligible_leader(resource):
75 if is_clustered():
76 if not is_leader(resource):
77 log('Deferring action to CRM leader.', level=INFO)
78 return False
79 else:
80 peers = peer_units()
81 if peers and not oldest_peer(peers):
82 log('Deferring action to oldest service unit.', level=INFO)
83 return False
84 return True
85
86
87def https():
88 '''
89 Determines whether enough data has been provided in configuration
90 or relation data to configure HTTPS.
91
92 returns: boolean
93 '''
94 if config_get('use-https') == "yes":
95 return True
96 if config_get('ssl_cert') and config_get('ssl_key'):
97 return True
98 for r_id in relation_ids('identity-service'):
99 for unit in relation_list(r_id):
100 rel_state = [
101 relation_get('https_keystone', rid=r_id, unit=unit),
102 relation_get('ssl_cert', rid=r_id, unit=unit),
103 relation_get('ssl_key', rid=r_id, unit=unit),
104 relation_get('ca_cert', rid=r_id, unit=unit),
105 ]
106 # NOTE: works around (LP: #1203241)
107 if (None not in rel_state) and ('' not in rel_state):
108 return True
109 return False
110
111
112def determine_api_port(public_port):
113 '''
114 Determine correct API server listening port based on
115 existence of HTTPS reverse proxy and/or haproxy.
116
117 public_port: int: standard public port for given service
118
119 returns: int: the correct listening port for the API service
120 '''
121 i = 0
122 if len(peer_units()) > 0 or is_clustered():
123 i += 1
124 if https():
125 i += 1
126 return public_port - (i * 10)
127
128
129def determine_haproxy_port(public_port):
130 '''
131 Determine correct proxy listening port based on the public port and
132 existence of HTTPS reverse proxy.
133
134 public_port: int: standard public port for given service
135
136 returns: int: the correct listening port for the HAProxy service
137 '''
138 i = 0
139 if https():
140 i += 1
141 return public_port - (i * 10)
142
143
144def get_hacluster_config():
145 '''
146 Obtains all relevant charm configuration required for initiating a
147 relation to hacluster:
148
149 ha-bindiface, ha-mcastport, vip, vip_iface, vip_cidr
150
151 returns: dict: A dict containing settings keyed by setting name.
152 raises: HAIncompleteConfig if settings are missing.
153 '''
154 settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'vip_iface', 'vip_cidr']
155 conf = {}
156 for setting in settings:
157 conf[setting] = config_get(setting)
158 missing = [s for s, v in conf.iteritems() if v is None]
159
160 if missing:
161 log('Insufficient config data to configure hacluster.', level=ERROR)
162 raise HAIncompleteConfig
163 return conf
164
165
166def canonical_url(configs, vip_setting='vip'):
167 '''
168 Returns the correct HTTP URL to this host given the state of HTTPS
169 configuration and hacluster.
170
171 :configs : OSTemplateRenderer: A config templating object to inspect for
172 a complete https context.
173 :vip_setting: str: Setting in charm config that specifies
174 VIP address.
175 '''
176 scheme = 'http'
177 if 'https' in configs.complete_contexts():
178 scheme = 'https'
179 if is_clustered():
180 addr = config_get(vip_setting)
181 else:
182 addr = unit_get('private-address')
183 return '%s://%s' % (scheme, addr)
0184
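determine_api_port() and determine_haproxy_port() encode a simple convention: each layer sitting in front of the service (haproxy when the unit is peered or clustered, an HTTPS reverse proxy when TLS is on) pushes the backend's listening port down by 10 from the public port. A worked sketch of that arithmetic (the function name and the example port 5000 are illustrative, not from the charm):

```python
def backend_port(public_port, peered_or_clustered, https_enabled):
    """The arithmetic shared by determine_api_port() and
    determine_haproxy_port(): back off by 10 per fronting layer."""
    offset = 0
    if peered_or_clustered:  # haproxy sits in front of the API service
        offset += 1
    if https_enabled:        # an HTTPS reverse proxy sits in front again
        offset += 1
    return public_port - offset * 10
```

For a service public on port 5000 this yields 5000 standalone, 4990 when clustered, and 4980 when clustered behind HTTPS; determine_haproxy_port() applies only the HTTPS step.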
=== added directory 'hooks/charmhelpers/contrib/jujugui'
=== added file 'hooks/charmhelpers/contrib/jujugui/IMPORT'
--- hooks/charmhelpers/contrib/jujugui/IMPORT 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/jujugui/IMPORT 2016-02-12 04:16:45 +0000
@@ -0,0 +1,4 @@
1Source: lp:charms/juju-gui
2
3juju-gui/hooks/utils.py -> charm-helpers/charmhelpers/contrib/jujugui/utils.py
4juju-gui/tests/test_utils.py -> charm-helpers/tests/contrib/jujugui/test_utils.py
05
=== added file 'hooks/charmhelpers/contrib/jujugui/__init__.py'
=== added file 'hooks/charmhelpers/contrib/jujugui/utils.py'
--- hooks/charmhelpers/contrib/jujugui/utils.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/jujugui/utils.py 2016-02-12 04:16:45 +0000
@@ -0,0 +1,602 @@
1"""Juju GUI charm utilities."""
2
3__all__ = [
4 'AGENT',
5 'APACHE',
6 'API_PORT',
7 'CURRENT_DIR',
8 'HAPROXY',
9 'IMPROV',
10 'JUJU_DIR',
11 'JUJU_GUI_DIR',
12 'JUJU_GUI_SITE',
13 'JUJU_PEM',
14 'WEB_PORT',
15 'bzr_checkout',
16 'chain',
17 'cmd_log',
18 'fetch_api',
19 'fetch_gui',
20 'find_missing_packages',
21 'first_path_in_dir',
22 'get_api_address',
23 'get_npm_cache_archive_url',
24 'get_release_file_url',
25 'get_staging_dependencies',
26 'get_zookeeper_address',
27 'legacy_juju',
28 'log_hook',
29 'merge',
30 'parse_source',
31 'prime_npm_cache',
32 'render_to_file',
33 'save_or_create_certificates',
34 'setup_apache',
35 'setup_gui',
36 'start_agent',
37 'start_gui',
38 'start_improv',
39 'write_apache_config',
40]
41
42from contextlib import contextmanager
43import errno
44import json
45import os
46import logging
47import shutil
48from subprocess import CalledProcessError
49import tempfile
50from urlparse import urlparse
51
52import apt
53import tempita
54
55from launchpadlib.launchpad import Launchpad
56from shelltoolbox import (
57 Serializer,
58 apt_get_install,
59 command,
60 environ,
61 install_extra_repositories,
62 run,
63 script_name,
64 search_file,
65 su,
66)
67from charmhelpers.core.host import (
68 service_start,
69)
70from charmhelpers.core.hookenv import (
71 log,
72 config,
73 unit_get,
74)
75
76
77AGENT = 'juju-api-agent'
78APACHE = 'apache2'
79IMPROV = 'juju-api-improv'
80HAPROXY = 'haproxy'
81
82API_PORT = 8080
83WEB_PORT = 8000
84
85CURRENT_DIR = os.getcwd()
86JUJU_DIR = os.path.join(CURRENT_DIR, 'juju')
87JUJU_GUI_DIR = os.path.join(CURRENT_DIR, 'juju-gui')
88JUJU_GUI_SITE = '/etc/apache2/sites-available/juju-gui'
89JUJU_GUI_PORTS = '/etc/apache2/ports.conf'
90JUJU_PEM = 'juju.includes-private-key.pem'
91BUILD_REPOSITORIES = ('ppa:chris-lea/node.js-legacy',)
92DEB_BUILD_DEPENDENCIES = (
93 'bzr', 'imagemagick', 'make', 'nodejs', 'npm',
94)
95DEB_STAGE_DEPENDENCIES = (
96 'zookeeper',
97)
98
99
100# Store the configuration from one invocation to the next.
101config_json = Serializer('/tmp/config.json')
102# Bazaar checkout command.
103bzr_checkout = command('bzr', 'co', '--lightweight')
104# Whether or not the charm is deployed using juju-core.
105# If juju-core has been used to deploy the charm, an agent.conf file must
106# be present in the charm parent directory.
107legacy_juju = lambda: not os.path.exists(
108 os.path.join(CURRENT_DIR, '..', 'agent.conf'))
109
110
111def _get_build_dependencies():
112 """Install deb dependencies for building."""
113 log('Installing build dependencies.')
114 cmd_log(install_extra_repositories(*BUILD_REPOSITORIES))
115 cmd_log(apt_get_install(*DEB_BUILD_DEPENDENCIES))
116
117
118def get_api_address(unit_dir):
119 """Return the Juju API address stored in the uniter agent.conf file."""
120 import yaml # python-yaml is only installed if juju-core is used.
121 # XXX 2013-03-27 frankban bug=1161443:
122 # currently the uniter agent.conf file does not include the API
123 # address. For now retrieve it from the machine agent file.
124 base_dir = os.path.abspath(os.path.join(unit_dir, '..'))
125 for dirname in os.listdir(base_dir):
126 if dirname.startswith('machine-'):
127 agent_conf = os.path.join(base_dir, dirname, 'agent.conf')
128 break
129 else:
130 raise IOError('Juju agent configuration file not found.')
131 contents = yaml.load(open(agent_conf))
132 return contents['apiinfo']['addrs'][0]
133
134
135def get_staging_dependencies():
136 """Install deb dependencies for the stage (improv) environment."""
137 log('Installing stage dependencies.')
138 cmd_log(apt_get_install(*DEB_STAGE_DEPENDENCIES))
139
140
141def first_path_in_dir(directory):
142 """Return the full path of the first file/dir in *directory*."""
143 return os.path.join(directory, os.listdir(directory)[0])
144
145
146def _get_by_attr(collection, attr, value):
147 """Return the first item in collection having attr == value.
148
149 Return None if the item is not found.
150 """
151 for item in collection:
152 if getattr(item, attr) == value:
153 return item
154
155
156def get_release_file_url(project, series_name, release_version):
157 """Return the URL of the release file hosted in Launchpad.
158
159 The returned URL points to a release file for the given project, series
160 name and release version.
161 The argument *project* is a project object as returned by launchpadlib.
162 The arguments *series_name* and *release_version* are strings. If
163 *release_version* is None, the URL of the latest release will be returned.
164 """
165 series = _get_by_attr(project.series, 'name', series_name)
166 if series is None:
167 raise ValueError('%r: series not found' % series_name)
168 # Releases are returned by Launchpad in reverse date order.
169 releases = list(series.releases)
170 if not releases:
171 raise ValueError('%r: series does not contain releases' % series_name)
172 if release_version is not None:
173 release = _get_by_attr(releases, 'version', release_version)
174 if release is None:
175 raise ValueError('%r: release not found' % release_version)
176 releases = [release]
177 for release in releases:
178 for file_ in release.files:
179 if str(file_).endswith('.tgz'):
180 return file_.file_link
181 raise ValueError('%r: file not found' % release_version)
182
183
184def get_zookeeper_address(agent_file_path):
185 """Retrieve the Zookeeper address contained in the given *agent_file_path*.
186
187 The *agent_file_path* is a path to a file containing a line similar to the
188 following::
189
190 env JUJU_ZOOKEEPER="address"
191 """
192 line = search_file('JUJU_ZOOKEEPER', agent_file_path).strip()
193 return line.split('=')[1].strip('"')
194
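get_zookeeper_address() relies on the agent file containing a line of the exact form `env JUJU_ZOOKEEPER="address"` and simply splits on '='. The parsing step in isolation, as a Python 3 sketch (helper name is ours):

```python
def zookeeper_address_from_line(line):
    """The same split-on-'=' parsing get_zookeeper_address() applies to
    the matched line, minus the file search step."""
    return line.strip().split('=')[1].strip('"')
```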
195
196@contextmanager
197def log_hook():
198 """Log when a hook starts and stops its execution.
199
200 Also log to stdout possible CalledProcessError exceptions raised executing
201 the hook.
202 """
203 script = script_name()
204 log(">>> Entering {}".format(script))
205 try:
206 yield
207 except CalledProcessError as err:
208 log('Exception caught:')
209 log(err.output)
210 raise
211 finally:
212 log("<<< Exiting {}".format(script))
213
214
215def parse_source(source):
216 """Parse the ``juju-gui-source`` option.
217
218 Return a tuple of two elements representing info on how to deploy Juju GUI.
219 Examples:
220 - ('stable', None): latest stable release;
221 - ('stable', '0.1.0'): stable release v0.1.0;
222 - ('trunk', None): latest trunk release;
223 - ('trunk', '0.1.0+build.1'): trunk release v0.1.0 bzr revision 1;
224 - ('branch', 'lp:juju-gui'): release is made from a branch;
225 - ('url', 'http://example.com/gui'): release from a downloaded file.
226 """
227 if source.startswith('url:'):
228 source = source[4:]
229 # Support file paths, including relative paths.
230 if urlparse(source).scheme == '':
231 if not source.startswith('/'):
232 source = os.path.join(os.path.abspath(CURRENT_DIR), source)
233 source = "file://%s" % source
234 return 'url', source
235 if source in ('stable', 'trunk'):
236 return source, None
237 if source.startswith('lp:') or source.startswith('http://'):
238 return 'branch', source
239 if 'build' in source:
240 return 'trunk', source
241 return 'stable', source
242
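The mapping documented in parse_source()'s docstring can be exercised directly. Below is a branch-for-branch Python 3 re-implementation for illustration only (urlparse moved to urllib.parse in Python 3, and the `current_dir` parameter stands in for the module-level CURRENT_DIR):

```python
import os
from urllib.parse import urlparse  # the urlparse module in the Python 2 original


def parse_source_demo(source, current_dir='/var/lib/juju/charm'):
    """Re-implementation of parse_source()'s branch logic."""
    if source.startswith('url:'):
        source = source[4:]
        # Support file paths, including relative paths.
        if urlparse(source).scheme == '':
            if not source.startswith('/'):
                source = os.path.join(current_dir, source)
            source = 'file://%s' % source
        return 'url', source
    if source in ('stable', 'trunk'):
        return source, None
    if source.startswith('lp:') or source.startswith('http://'):
        return 'branch', source
    if 'build' in source:
        return 'trunk', source
    return 'stable', source
```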
243
244def render_to_file(template_name, context, destination):
245 """Render the given *template_name* into *destination* using *context*.
246
247 The tempita template language is used to render contents
248 (see http://pythonpaste.org/tempita/).
249 The argument *template_name* is the name or path of the template file:
250 it may be either a path relative to ``../config`` or an absolute path.
251 The argument *destination* is a file path.
252 The argument *context* is a dict-like object.
253 """
254 template_path = os.path.abspath(template_name)
255 template = tempita.Template.from_filename(template_path)
256 with open(destination, 'w') as stream:
257 stream.write(template.substitute(context))
258
259
260results_log = None
261
262
263def _setupLogging():
264 global results_log
265 if results_log is not None:
266 return
267 cfg = config()
268 logging.basicConfig(
269 filename=cfg['command-log-file'],
270 level=logging.INFO,
271 format="%(asctime)s: %(name)s@%(levelname)s %(message)s")
272 results_log = logging.getLogger('juju-gui')
273
274
275def cmd_log(results):
276 global results_log
277 if not results:
278 return
279 if results_log is None:
280 _setupLogging()
281 # Since 'results' may be multi-line output, start it on a separate line
282 # from the logger timestamp, etc.
283 results_log.info('\n' + results)
284
285
286def start_improv(staging_env, ssl_cert_path,
287 config_path='/etc/init/juju-api-improv.conf'):
288 """Start a simulated juju environment using ``improv.py``."""
289 log('Setting up staging start up script.')
290 context = {
291 'juju_dir': JUJU_DIR,
292 'keys': ssl_cert_path,
293 'port': API_PORT,
294 'staging_env': staging_env,
295 }
296 render_to_file('config/juju-api-improv.conf.template', context, config_path)
297 log('Starting the staging backend.')
298 with su('root'):
299 service_start(IMPROV)
300
301
302def start_agent(
303 ssl_cert_path, config_path='/etc/init/juju-api-agent.conf',
304 read_only=False):
305 """Start the Juju agent and connect to the current environment."""
306 # Retrieve the Zookeeper address from the start up script.
307 unit_dir = os.path.realpath(os.path.join(CURRENT_DIR, '..'))
308 agent_file = '/etc/init/juju-{0}.conf'.format(os.path.basename(unit_dir))
309 zookeeper = get_zookeeper_address(agent_file)
310 log('Setting up API agent start up script.')
311 context = {
312 'juju_dir': JUJU_DIR,
313 'keys': ssl_cert_path,
314 'port': API_PORT,
315 'zookeeper': zookeeper,
316 'read_only': read_only
317 }
318 render_to_file('config/juju-api-agent.conf.template', context, config_path)
319 log('Starting API agent.')
320 with su('root'):
321 service_start(AGENT)
322
323
324def start_gui(
325 console_enabled, login_help, readonly, in_staging, ssl_cert_path,
326 charmworld_url, serve_tests, haproxy_path='/etc/haproxy/haproxy.cfg',
327 config_js_path=None, secure=True, sandbox=False):
328 """Set up and start the Juju GUI server."""
329 with su('root'):
330 run('chown', '-R', 'ubuntu:', JUJU_GUI_DIR)
331 # XXX 2013-02-05 frankban bug=1116320:
332 # External insecure resources are still loaded when testing in the
333 # debug environment. For now, switch to the production environment if
334 # the charm is configured to serve tests.
335 if in_staging and not serve_tests:
336 build_dirname = 'build-debug'
337 else:
338 build_dirname = 'build-prod'
339 build_dir = os.path.join(JUJU_GUI_DIR, build_dirname)
340 log('Generating the Juju GUI configuration file.')
341 is_legacy_juju = legacy_juju()
342 user, password = None, None
343 if (is_legacy_juju and in_staging) or sandbox:
344 user, password = 'admin', 'admin'
345 else:
346 user, password = None, None
347
348 api_backend = 'python' if is_legacy_juju else 'go'
349 if secure:
350 protocol = 'wss'
351 else:
352 log('Running in insecure mode! Port 80 will serve unencrypted.')
353 protocol = 'ws'
354
355 context = {
356 'raw_protocol': protocol,
357 'address': unit_get('public-address'),
358 'console_enabled': json.dumps(console_enabled),
359 'login_help': json.dumps(login_help),
360 'password': json.dumps(password),
361 'api_backend': json.dumps(api_backend),
362 'readonly': json.dumps(readonly),
363 'user': json.dumps(user),
364 'protocol': json.dumps(protocol),
365 'sandbox': json.dumps(sandbox),
366 'charmworld_url': json.dumps(charmworld_url),
367 }
368 if config_js_path is None:
369 config_js_path = os.path.join(
370 build_dir, 'juju-ui', 'assets', 'config.js')
371 render_to_file('config/config.js.template', context, config_js_path)
372
373 write_apache_config(build_dir, serve_tests)
374
375 log('Generating haproxy configuration file.')
376 if is_legacy_juju:
377 # The PyJuju API agent is listening on localhost.
378 api_address = '127.0.0.1:{0}'.format(API_PORT)
379 else:
380 # Retrieve the juju-core API server address.
381 api_address = get_api_address(os.path.join(CURRENT_DIR, '..'))
382 context = {
383 'api_address': api_address,
384 'api_pem': JUJU_PEM,
385 'legacy_juju': is_legacy_juju,
386 'ssl_cert_path': ssl_cert_path,
387 # In PyJuju environments, use the same certificate for both HTTPS and
388 # WebSocket connections. In juju-core the system already has the proper
389 # certificate installed.
390 'web_pem': JUJU_PEM,
391 'web_port': WEB_PORT,
392 'secure': secure
393 }
394 render_to_file('config/haproxy.cfg.template', context, haproxy_path)
395 log('Starting Juju GUI.')
396
397
398def write_apache_config(build_dir, serve_tests=False):
399 log('Generating the apache site configuration file.')
400 context = {
401 'port': WEB_PORT,
402 'serve_tests': serve_tests,
403 'server_root': build_dir,
404 'tests_root': os.path.join(JUJU_GUI_DIR, 'test', ''),
405 }
406 render_to_file('config/apache-ports.template', context, JUJU_GUI_PORTS)
407 render_to_file('config/apache-site.template', context, JUJU_GUI_SITE)
408
409
410def get_npm_cache_archive_url(Launchpad=Launchpad):
411 """Figure out the URL of the most recent NPM cache archive on Launchpad."""
412 launchpad = Launchpad.login_anonymously('Juju GUI charm', 'production')
413 project = launchpad.projects['juju-gui']
414 # Find the URL of the most recently created NPM cache archive.
415 npm_cache_url = get_release_file_url(project, 'npm-cache', None)
416 return npm_cache_url
417
418
419def prime_npm_cache(npm_cache_url):
420 """Download NPM cache archive and prime the NPM cache with it."""
421 # Download the cache archive and then uncompress it into the NPM cache.
422 npm_cache_archive = os.path.join(CURRENT_DIR, 'npm-cache.tgz')
423 cmd_log(run('curl', '-L', '-o', npm_cache_archive, npm_cache_url))
424 npm_cache_dir = os.path.expanduser('~/.npm')
425 # The NPM cache directory probably does not exist, so make it if not.
426 try:
427 os.mkdir(npm_cache_dir)
428 except OSError, e:
429 # If the directory already exists then ignore the error.
430 if e.errno != errno.EEXIST: # File exists.
431 raise
432 uncompress = command('tar', '-x', '-z', '-C', npm_cache_dir, '-f')
433 cmd_log(uncompress(npm_cache_archive))
434
435
436def fetch_gui(juju_gui_source, logpath):
437 """Retrieve the Juju GUI release/branch."""
438 # Retrieve a Juju GUI release.
439 origin, version_or_branch = parse_source(juju_gui_source)
440 if origin == 'branch':
441 # Make sure we have the dependencies necessary for us to actually make
442 # a build.
443 _get_build_dependencies()
444 # Create a release starting from a branch.
445 juju_gui_source_dir = os.path.join(CURRENT_DIR, 'juju-gui-source')
446 log('Retrieving Juju GUI source checkout from %s.' % version_or_branch)
447 cmd_log(run('rm', '-rf', juju_gui_source_dir))
448 cmd_log(bzr_checkout(version_or_branch, juju_gui_source_dir))
449 log('Preparing a Juju GUI release.')
450 logdir = os.path.dirname(logpath)
451 fd, name = tempfile.mkstemp(prefix='make-distfile-', dir=logdir)
452 log('Output from "make distfile" sent to %s' % name)
453 with environ(NO_BZR='1'):
454 run('make', '-C', juju_gui_source_dir, 'distfile',
455 stdout=fd, stderr=fd)
456 release_tarball = first_path_in_dir(
457 os.path.join(juju_gui_source_dir, 'releases'))
458 else:
459 log('Retrieving Juju GUI release.')
460 if origin == 'url':
461 file_url = version_or_branch
462 else:
463 # Retrieve a release from Launchpad.
464 launchpad = Launchpad.login_anonymously(
465 'Juju GUI charm', 'production')
466 project = launchpad.projects['juju-gui']
467 file_url = get_release_file_url(project, origin, version_or_branch)
468 log('Downloading release file from %s.' % file_url)
469 release_tarball = os.path.join(CURRENT_DIR, 'release.tgz')
470 cmd_log(run('curl', '-L', '-o', release_tarball, file_url))
471 return release_tarball
472
473
474def fetch_api(juju_api_branch):
475 """Retrieve the Juju branch."""
476 # Retrieve Juju API source checkout.
477 log('Retrieving Juju API source checkout.')
478 cmd_log(run('rm', '-rf', JUJU_DIR))
479 cmd_log(bzr_checkout(juju_api_branch, JUJU_DIR))
480
481
482def setup_gui(release_tarball):
483 """Set up Juju GUI."""
484 # Uncompress the release tarball.
485 log('Installing Juju GUI.')
486 release_dir = os.path.join(CURRENT_DIR, 'release')
487 cmd_log(run('rm', '-rf', release_dir))
488 os.mkdir(release_dir)
489 uncompress = command('tar', '-x', '-z', '-C', release_dir, '-f')
490 cmd_log(uncompress(release_tarball))
491 # Link the Juju GUI dir to the contents of the release tarball.
492 cmd_log(run('ln', '-sf', first_path_in_dir(release_dir), JUJU_GUI_DIR))
493
494
495def setup_apache():
496 """Set up apache."""
497 log('Setting up apache.')
498 if not os.path.exists(JUJU_GUI_SITE):
499 cmd_log(run('touch', JUJU_GUI_SITE))
500 cmd_log(run('chown', 'ubuntu:', JUJU_GUI_SITE))
501 cmd_log(
502 run('ln', '-s', JUJU_GUI_SITE,
503 '/etc/apache2/sites-enabled/juju-gui'))
504
505 if not os.path.exists(JUJU_GUI_PORTS):
506 cmd_log(run('touch', JUJU_GUI_PORTS))
507 cmd_log(run('chown', 'ubuntu:', JUJU_GUI_PORTS))
508
509 with su('root'):
510 run('a2dissite', 'default')
511 run('a2ensite', 'juju-gui')
512
513
514def save_or_create_certificates(
515 ssl_cert_path, ssl_cert_contents, ssl_key_contents):
516 """Generate the SSL certificates.
517
518 If both *ssl_cert_contents* and *ssl_key_contents* are provided, use them
519 as certificates; otherwise, generate them.
520
521 Also create a pem file, suitable for use in the haproxy configuration,
522 concatenating the key and the certificate files.
523 """
524 crt_path = os.path.join(ssl_cert_path, 'juju.crt')
525 key_path = os.path.join(ssl_cert_path, 'juju.key')
526 if not os.path.exists(ssl_cert_path):
527 os.makedirs(ssl_cert_path)
528 if ssl_cert_contents and ssl_key_contents:
529 # Save the provided certificates.
530 with open(crt_path, 'w') as cert_file:
531 cert_file.write(ssl_cert_contents)
532 with open(key_path, 'w') as key_file:
533 key_file.write(ssl_key_contents)
534 else:
535 # Generate certificates.
536 # See http://superuser.com/questions/226192/openssl-without-prompt
537 cmd_log(run(
538 'openssl', 'req', '-new', '-newkey', 'rsa:4096',
539 '-days', '365', '-nodes', '-x509', '-subj',
540 # These are arbitrary test values for the certificate.
541 '/C=GB/ST=Juju/L=GUI/O=Ubuntu/CN=juju.ubuntu.com',
542 '-keyout', key_path, '-out', crt_path))
543 # Generate the pem file.
544 pem_path = os.path.join(ssl_cert_path, JUJU_PEM)
545 if os.path.exists(pem_path):
546 os.remove(pem_path)
547 with open(pem_path, 'w') as pem_file:
548 shutil.copyfileobj(open(key_path), pem_file)
549 shutil.copyfileobj(open(crt_path), pem_file)
550
551
552def find_missing_packages(*packages):
553 """Given a list of packages, return the packages which are not installed.
554 """
555 cache = apt.Cache()
556 missing = set()
557 for pkg_name in packages:
558 try:
559 pkg = cache[pkg_name]
560 except KeyError:
561 missing.add(pkg_name)
562 continue
563 if pkg.is_installed:
564 continue
565 missing.add(pkg_name)
566 return missing
567
568
569## Backend support decorators
570
571def chain(name):
572 """Helper method to compose a set of mixin objects into a callable.
573
574 Each method is called in the context of its mixin instance, and its
575 argument is the Backend instance.
576 """
577 # Chain method calls through all implementing mixins.
578 def method(self):
579 for mixin in self.mixins:
580 a_callable = getattr(type(mixin), name, None)
581 if a_callable:
582 a_callable(mixin, self)
583
584 method.__name__ = name
585 return method
586
587
588def merge(name):
589 """Helper to merge a property from a set of strategy objects
590 into a unified set.
591 """
592 # Return merged property from every providing mixin as a set.
593 @property
594 def method(self):
595 result = set()
596 for mixin in self.mixins:
597 segment = getattr(type(mixin), name, None)
598 if segment and isinstance(segment, (list, tuple, set)):
599 result |= set(segment)
600
601 return result
602 return method
0603
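The `chain` and `merge` helpers above are self-contained enough to exercise outside the charm. A minimal sketch with two invented mixins (`AptMixin`, `PipMixin`, and `Backend` are illustrative names, not part of this charm):

```python
def chain(name):
    # Call `name` on every mixin, in order, passing the backend instance.
    def method(self):
        for mixin in self.mixins:
            a_callable = getattr(type(mixin), name, None)
            if a_callable:
                a_callable(mixin, self)
    method.__name__ = name
    return method


def merge(name):
    # Union the `name` collection from every mixin into one set.
    @property
    def method(self):
        result = set()
        for mixin in self.mixins:
            segment = getattr(type(mixin), name, None)
            if segment and isinstance(segment, (list, tuple, set)):
                result |= set(segment)
        return result
    return method


class AptMixin(object):        # hypothetical mixin for illustration
    debs = ['curl']

    def install(self, backend):
        backend.log.append('apt')


class PipMixin(object):        # hypothetical mixin for illustration
    debs = ['python-pip']

    def install(self, backend):
        backend.log.append('pip')


class Backend(object):
    def __init__(self):
        self.mixins = [AptMixin(), PipMixin()]
        self.log = []

    install = chain('install')   # chained: runs on every mixin
    debs = merge('debs')         # merged: union of every mixin's list
```

Calling `Backend().install()` invokes each mixin's `install` in registration order, while `Backend().debs` yields the merged set `{'curl', 'python-pip'}`.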
=== added directory 'hooks/charmhelpers/contrib/network'
=== added file 'hooks/charmhelpers/contrib/network/__init__.py'
=== added file 'hooks/charmhelpers/contrib/network/ip.py'
--- hooks/charmhelpers/contrib/network/ip.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/network/ip.py 2016-02-12 04:16:45 +0000
@@ -0,0 +1,69 @@
1import sys
2
3from charmhelpers.fetch import apt_install
4from charmhelpers.core.hookenv import (
5 ERROR, log,
6)
7
8try:
9 import netifaces
10except ImportError:
11 apt_install('python-netifaces')
12 import netifaces
13
14try:
15 import netaddr
16except ImportError:
17 apt_install('python-netaddr')
18 import netaddr
19
20
21def _validate_cidr(network):
22 try:
23 netaddr.IPNetwork(network)
24 except (netaddr.core.AddrFormatError, ValueError):
25 raise ValueError("Network (%s) is not in CIDR presentation format" %
26 network)
27
28
29def get_address_in_network(network, fallback=None, fatal=False):
30 """
31 Get an IPv4 address within the network from the host.
32
33 Args:
34 network (str): CIDR presentation format. For example,
35 '192.168.1.0/24'.
36 fallback (str): If no address is found, return fallback.
37 fatal (boolean): If no address is found, fallback is not
38 set and fatal is True then exit(1).
39 """
40
41 def not_found_error_out():
42 log("No IP address found in network: %s" % network,
43 level=ERROR)
44 sys.exit(1)
45
46 if network is None:
47 if fallback is not None:
48 return fallback
49 else:
50 if fatal:
 51 not_found_error_out()
 52 return None
53 _validate_cidr(network)
54 for iface in netifaces.interfaces():
55 addresses = netifaces.ifaddresses(iface)
56 if netifaces.AF_INET in addresses:
57 addr = addresses[netifaces.AF_INET][0]['addr']
58 netmask = addresses[netifaces.AF_INET][0]['netmask']
59 cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
60 if cidr in netaddr.IPNetwork(network):
61 return str(cidr.ip)
62
63 if fallback is not None:
64 return fallback
65
66 if fatal:
67 not_found_error_out()
68
69 return None
070
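The interface loop in `get_address_in_network` ultimately performs a CIDR membership test per address. A rough stdlib-only equivalent of that check, using `ipaddress` in place of `netaddr` (the `candidates` list and function name here are stand-ins for what `netifaces` would report, not part of charmhelpers):

```python
import ipaddress


def address_in_network(network, candidates, fallback=None):
    """Return the first (addr, netmask) candidate inside `network`,
    mimicking the netaddr-based matching in get_address_in_network."""
    net = ipaddress.ip_network(network)  # raises ValueError on a bad CIDR
    for addr, netmask in candidates:
        iface = ipaddress.ip_interface('%s/%s' % (addr, netmask))
        if iface.ip in net:
            return str(iface.ip)
    return fallback
```

The real helper additionally supports a `fatal` mode that logs and exits instead of returning the fallback.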
=== added directory 'hooks/charmhelpers/contrib/network/ovs'
=== added file 'hooks/charmhelpers/contrib/network/ovs/__init__.py'
--- hooks/charmhelpers/contrib/network/ovs/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/network/ovs/__init__.py 2016-02-12 04:16:45 +0000
@@ -0,0 +1,75 @@
1''' Helpers for interacting with OpenvSwitch '''
2import subprocess
3import os
4from charmhelpers.core.hookenv import (
5 log, WARNING
6)
7from charmhelpers.core.host import (
8 service
9)
10
11
12def add_bridge(name):
13 ''' Add the named bridge to openvswitch '''
14 log('Creating bridge {}'.format(name))
15 subprocess.check_call(["ovs-vsctl", "--", "--may-exist", "add-br", name])
16
17
18def del_bridge(name):
19 ''' Delete the named bridge from openvswitch '''
20 log('Deleting bridge {}'.format(name))
21 subprocess.check_call(["ovs-vsctl", "--", "--if-exists", "del-br", name])
22
23
24def add_bridge_port(name, port):
25 ''' Add a port to the named openvswitch bridge '''
26 log('Adding port {} to bridge {}'.format(port, name))
27 subprocess.check_call(["ovs-vsctl", "--", "--may-exist", "add-port",
28 name, port])
29 subprocess.check_call(["ip", "link", "set", port, "up"])
30
31
32def del_bridge_port(name, port):
33 ''' Delete a port from the named openvswitch bridge '''
34 log('Deleting port {} from bridge {}'.format(port, name))
35 subprocess.check_call(["ovs-vsctl", "--", "--if-exists", "del-port",
36 name, port])
37 subprocess.check_call(["ip", "link", "set", port, "down"])
38
39
40def set_manager(manager):
41 ''' Set the controller for the local openvswitch '''
42 log('Setting manager for local ovs to {}'.format(manager))
43 subprocess.check_call(['ovs-vsctl', 'set-manager',
44 'ssl:{}'.format(manager)])
45
46
47CERT_PATH = '/etc/openvswitch/ovsclient-cert.pem'
48
49
50def get_certificate():
51 ''' Read openvswitch certificate from disk '''
52 if os.path.exists(CERT_PATH):
53 log('Reading ovs certificate from {}'.format(CERT_PATH))
54 with open(CERT_PATH, 'r') as cert:
55 full_cert = cert.read()
56 begin_marker = "-----BEGIN CERTIFICATE-----"
57 end_marker = "-----END CERTIFICATE-----"
58 begin_index = full_cert.find(begin_marker)
59 end_index = full_cert.rfind(end_marker)
60 if end_index == -1 or begin_index == -1:
61 raise RuntimeError("Certificate does not contain valid begin"
62 " and end markers.")
63 full_cert = full_cert[begin_index:(end_index + len(end_marker))]
64 return full_cert
65 else:
66 log('Certificate not found', level=WARNING)
67 return None
68
69
70def full_restart():
71 ''' Full restart and reload of openvswitch '''
72 if os.path.exists('/etc/init/openvswitch-force-reload-kmod.conf'):
73 service('start', 'openvswitch-force-reload-kmod')
74 else:
75 service('force-reload-kmod', 'openvswitch-switch')
076
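The marker handling in `get_certificate` is plain string slicing and can be checked in isolation. A sketch of just that extraction step (the wrapping function name is mine; the slicing mirrors the code above):

```python
def extract_certificate(full_cert):
    # Slice from the first BEGIN marker through the last END marker,
    # as get_certificate does after reading the file from disk.
    begin_marker = "-----BEGIN CERTIFICATE-----"
    end_marker = "-----END CERTIFICATE-----"
    begin_index = full_cert.find(begin_marker)
    end_index = full_cert.rfind(end_marker)
    if end_index == -1 or begin_index == -1:
        raise RuntimeError("Certificate does not contain valid begin"
                           " and end markers.")
    return full_cert[begin_index:end_index + len(end_marker)]
```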
=== added directory 'hooks/charmhelpers/contrib/openstack'
=== added file 'hooks/charmhelpers/contrib/openstack/__init__.py'
=== added file 'hooks/charmhelpers/contrib/openstack/alternatives.py'
--- hooks/charmhelpers/contrib/openstack/alternatives.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/alternatives.py 2016-02-12 04:16:45 +0000
@@ -0,0 +1,17 @@
1''' Helper for managing alternatives for file conflict resolution '''
2
3import subprocess
4import shutil
5import os
6
7
8def install_alternative(name, target, source, priority=50):
9 ''' Install alternative configuration '''
10 if (os.path.exists(target) and not os.path.islink(target)):
11 # Move existing file/directory away before installing
12 shutil.move(target, '{}.bak'.format(target))
13 cmd = [
14 'update-alternatives', '--force', '--install',
15 target, name, source, str(priority)
16 ]
17 subprocess.check_call(cmd)
018
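`install_alternative` shells out to `update-alternatives`; pulling the command construction into its own function (my naming, not charmhelpers') lets the argument order — link, name, path, priority — be verified without root:

```python
def alternative_install_cmd(name, target, source, priority=50):
    # update-alternatives --install expects: <link> <name> <path> <priority>,
    # which is the order install_alternative builds above.
    return ['update-alternatives', '--force', '--install',
            target, name, source, str(priority)]
```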
=== added file 'hooks/charmhelpers/contrib/openstack/context.py'
--- hooks/charmhelpers/contrib/openstack/context.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/context.py 2016-02-12 04:16:45 +0000
@@ -0,0 +1,577 @@
1import json
2import os
3
4from base64 import b64decode
5
6from subprocess import (
7 check_call
8)
9
10
11from charmhelpers.fetch import (
12 apt_install,
13 filter_installed_packages,
14)
15
16from charmhelpers.core.hookenv import (
17 config,
18 local_unit,
19 log,
20 relation_get,
21 relation_ids,
22 related_units,
23 unit_get,
24 unit_private_ip,
25 ERROR,
26)
27
28from charmhelpers.contrib.hahelpers.cluster import (
29 determine_api_port,
30 determine_haproxy_port,
31 https,
32 is_clustered,
33 peer_units,
34)
35
36from charmhelpers.contrib.hahelpers.apache import (
37 get_cert,
38 get_ca_cert,
39)
40
41from charmhelpers.contrib.openstack.neutron import (
42 neutron_plugin_attribute,
43)
44
45CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
46
47
48class OSContextError(Exception):
49 pass
50
51
52def ensure_packages(packages):
53 '''Install but do not upgrade required plugin packages'''
54 required = filter_installed_packages(packages)
55 if required:
56 apt_install(required, fatal=True)
57
58
59def context_complete(ctxt):
60 _missing = []
61 for k, v in ctxt.iteritems():
62 if v is None or v == '':
63 _missing.append(k)
64 if _missing:
65 log('Missing required data: %s' % ' '.join(_missing), level='INFO')
66 return False
67 return True
68
69
70class OSContextGenerator(object):
71 interfaces = []
72
73 def __call__(self):
74 raise NotImplementedError
75
76
77class SharedDBContext(OSContextGenerator):
78 interfaces = ['shared-db']
79
80 def __init__(self, database=None, user=None, relation_prefix=None):
81 '''
82 Allows inspecting relation for settings prefixed with relation_prefix.
83 This is useful for parsing access for multiple databases returned via
 84 the shared-db interface (e.g., nova_password, quantum_password)
85 '''
86 self.relation_prefix = relation_prefix
87 self.database = database
88 self.user = user
89
90 def __call__(self):
91 self.database = self.database or config('database')
92 self.user = self.user or config('database-user')
93 if None in [self.database, self.user]:
94 log('Could not generate shared_db context. '
95 'Missing required charm config options. '
96 '(database name and user)')
97 raise OSContextError
98 ctxt = {}
99
100 password_setting = 'password'
101 if self.relation_prefix:
102 password_setting = self.relation_prefix + '_password'
103
104 for rid in relation_ids('shared-db'):
105 for unit in related_units(rid):
106 passwd = relation_get(password_setting, rid=rid, unit=unit)
107 ctxt = {
108 'database_host': relation_get('db_host', rid=rid,
109 unit=unit),
110 'database': self.database,
111 'database_user': self.user,
112 'database_password': passwd,
113 }
114 if context_complete(ctxt):
115 return ctxt
116 return {}
117
118
119class IdentityServiceContext(OSContextGenerator):
120 interfaces = ['identity-service']
121
122 def __call__(self):
123 log('Generating template context for identity-service')
124 ctxt = {}
125
126 for rid in relation_ids('identity-service'):
127 for unit in related_units(rid):
128 ctxt = {
129 'service_port': relation_get('service_port', rid=rid,
130 unit=unit),
131 'service_host': relation_get('service_host', rid=rid,
132 unit=unit),
133 'auth_host': relation_get('auth_host', rid=rid, unit=unit),
134 'auth_port': relation_get('auth_port', rid=rid, unit=unit),
135 'admin_tenant_name': relation_get('service_tenant',
136 rid=rid, unit=unit),
137 'admin_user': relation_get('service_username', rid=rid,
138 unit=unit),
139 'admin_password': relation_get('service_password', rid=rid,
140 unit=unit),
141 # XXX: Hard-coded http.
142 'service_protocol': 'http',
143 'auth_protocol': 'http',
144 }
145 if context_complete(ctxt):
146 return ctxt
147 return {}
148
149
150class AMQPContext(OSContextGenerator):
151 interfaces = ['amqp']
152
153 def __call__(self):
154 log('Generating template context for amqp')
155 conf = config()
156 try:
157 username = conf['rabbit-user']
158 vhost = conf['rabbit-vhost']
159 except KeyError as e:
 160 log('Could not generate amqp context. '
161 'Missing required charm config options: %s.' % e)
162 raise OSContextError
163
164 ctxt = {}
165 for rid in relation_ids('amqp'):
166 for unit in related_units(rid):
167 if relation_get('clustered', rid=rid, unit=unit):
168 ctxt['clustered'] = True
169 ctxt['rabbitmq_host'] = relation_get('vip', rid=rid,
170 unit=unit)
171 else:
172 ctxt['rabbitmq_host'] = relation_get('private-address',
173 rid=rid, unit=unit)
174 ctxt.update({
175 'rabbitmq_user': username,
176 'rabbitmq_password': relation_get('password', rid=rid,
177 unit=unit),
178 'rabbitmq_virtual_host': vhost,
179 })
180 if context_complete(ctxt):
181 # Sufficient information found = break out!
182 break
183 # Used for active/active rabbitmq >= grizzly
184 if 'clustered' not in ctxt and len(related_units(rid)) > 1:
185 rabbitmq_hosts = []
186 for unit in related_units(rid):
187 rabbitmq_hosts.append(relation_get('private-address',
188 rid=rid, unit=unit))
189 ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts)
190 if not context_complete(ctxt):
191 return {}
192 else:
193 return ctxt
194
195
196class CephContext(OSContextGenerator):
197 interfaces = ['ceph']
198
199 def __call__(self):
200 '''This generates context for /etc/ceph/ceph.conf templates'''
201 if not relation_ids('ceph'):
202 return {}
203 log('Generating template context for ceph')
204 mon_hosts = []
205 auth = None
206 key = None
207 for rid in relation_ids('ceph'):
208 for unit in related_units(rid):
209 mon_hosts.append(relation_get('private-address', rid=rid,
210 unit=unit))
211 auth = relation_get('auth', rid=rid, unit=unit)
212 key = relation_get('key', rid=rid, unit=unit)
213
214 ctxt = {
215 'mon_hosts': ' '.join(mon_hosts),
216 'auth': auth,
217 'key': key,
218 }
219
220 if not os.path.isdir('/etc/ceph'):
221 os.mkdir('/etc/ceph')
222
223 if not context_complete(ctxt):
224 return {}
225
226 ensure_packages(['ceph-common'])
227
228 return ctxt
229
230
231class HAProxyContext(OSContextGenerator):
232 interfaces = ['cluster']
233
234 def __call__(self):
235 '''
236 Builds half a context for the haproxy template, which describes
237 all peers to be included in the cluster. Each charm needs to include
238 its own context generator that describes the port mapping.
239 '''
240 if not relation_ids('cluster'):
241 return {}
242
243 cluster_hosts = {}
244 l_unit = local_unit().replace('/', '-')
245 cluster_hosts[l_unit] = unit_get('private-address')
246
247 for rid in relation_ids('cluster'):
248 for unit in related_units(rid):
249 _unit = unit.replace('/', '-')
250 addr = relation_get('private-address', rid=rid, unit=unit)
251 cluster_hosts[_unit] = addr
252
253 ctxt = {
254 'units': cluster_hosts,
255 }
256 if len(cluster_hosts.keys()) > 1:
257 # Enable haproxy when we have enough peers.
258 log('Ensuring haproxy enabled in /etc/default/haproxy.')
259 with open('/etc/default/haproxy', 'w') as out:
260 out.write('ENABLED=1\n')
261 return ctxt
262 log('HAProxy context is incomplete, this unit has no peers.')
263 return {}
264
265
266class ImageServiceContext(OSContextGenerator):
267 interfaces = ['image-service']
268
269 def __call__(self):
270 '''
271 Obtains the glance API server from the image-service relation. Useful
272 in nova and cinder (currently).
273 '''
274 log('Generating template context for image-service.')
275 rids = relation_ids('image-service')
276 if not rids:
277 return {}
278 for rid in rids:
279 for unit in related_units(rid):
280 api_server = relation_get('glance-api-server',
281 rid=rid, unit=unit)
282 if api_server:
283 return {'glance_api_servers': api_server}
284 log('ImageService context is incomplete. '
285 'Missing required relation data.')
286 return {}
287
288
289class ApacheSSLContext(OSContextGenerator):
290
291 """
292 Generates a context for an apache vhost configuration that configures
293 HTTPS reverse proxying for one or many endpoints. Generated context
294 looks something like:
295 {
296 'namespace': 'cinder',
297 'private_address': 'iscsi.mycinderhost.com',
298 'endpoints': [(8776, 8766), (8777, 8767)]
299 }
300
 301 The endpoints list consists of tuples mapping external ports
302 to internal ports.
303 """
304 interfaces = ['https']
305
306 # charms should inherit this context and set external ports
307 # and service namespace accordingly.
308 external_ports = []
309 service_namespace = None
310
311 def enable_modules(self):
312 cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http']
313 check_call(cmd)
314
315 def configure_cert(self):
316 if not os.path.isdir('/etc/apache2/ssl'):
317 os.mkdir('/etc/apache2/ssl')
318 ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace)
319 if not os.path.isdir(ssl_dir):
320 os.mkdir(ssl_dir)
321 cert, key = get_cert()
322 with open(os.path.join(ssl_dir, 'cert'), 'w') as cert_out:
323 cert_out.write(b64decode(cert))
324 with open(os.path.join(ssl_dir, 'key'), 'w') as key_out:
325 key_out.write(b64decode(key))
326 ca_cert = get_ca_cert()
327 if ca_cert:
328 with open(CA_CERT_PATH, 'w') as ca_out:
329 ca_out.write(b64decode(ca_cert))
330 check_call(['update-ca-certificates'])
331
332 def __call__(self):
333 if isinstance(self.external_ports, basestring):
334 self.external_ports = [self.external_ports]
335 if (not self.external_ports or not https()):
336 return {}
337
338 self.configure_cert()
339 self.enable_modules()
340
341 ctxt = {
342 'namespace': self.service_namespace,
343 'private_address': unit_get('private-address'),
344 'endpoints': []
345 }
346 for ext_port in self.external_ports:
347 if peer_units() or is_clustered():
348 int_port = determine_haproxy_port(ext_port)
349 else:
350 int_port = determine_api_port(ext_port)
351 portmap = (int(ext_port), int(int_port))
352 ctxt['endpoints'].append(portmap)
353 return ctxt
354
355
356class NeutronContext(object):
357 interfaces = []
358
359 @property
360 def plugin(self):
361 return None
362
363 @property
364 def network_manager(self):
365 return None
366
367 @property
368 def packages(self):
369 return neutron_plugin_attribute(
370 self.plugin, 'packages', self.network_manager)
371
372 @property
373 def neutron_security_groups(self):
374 return None
375
376 def _ensure_packages(self):
377 [ensure_packages(pkgs) for pkgs in self.packages]
378
379 def _save_flag_file(self):
380 if self.network_manager == 'quantum':
381 _file = '/etc/nova/quantum_plugin.conf'
382 else:
383 _file = '/etc/nova/neutron_plugin.conf'
384 with open(_file, 'wb') as out:
385 out.write(self.plugin + '\n')
386
387 def ovs_ctxt(self):
388 driver = neutron_plugin_attribute(self.plugin, 'driver',
389 self.network_manager)
390 config = neutron_plugin_attribute(self.plugin, 'config',
391 self.network_manager)
392 ovs_ctxt = {
393 'core_plugin': driver,
394 'neutron_plugin': 'ovs',
395 'neutron_security_groups': self.neutron_security_groups,
396 'local_ip': unit_private_ip(),
397 'config': config
398 }
399
400 return ovs_ctxt
401
402 def nvp_ctxt(self):
403 driver = neutron_plugin_attribute(self.plugin, 'driver',
404 self.network_manager)
405 config = neutron_plugin_attribute(self.plugin, 'config',
406 self.network_manager)
407 nvp_ctxt = {
408 'core_plugin': driver,
409 'neutron_plugin': 'nvp',
410 'neutron_security_groups': self.neutron_security_groups,
411 'local_ip': unit_private_ip(),
412 'config': config
413 }
414
415 return nvp_ctxt
416
417 def __call__(self):
418 self._ensure_packages()
419
420 if self.network_manager not in ['quantum', 'neutron']:
421 return {}
422
423 if not self.plugin:
424 return {}
425
426 ctxt = {'network_manager': self.network_manager}
427
428 if self.plugin == 'ovs':
429 ctxt.update(self.ovs_ctxt())
430 elif self.plugin == 'nvp':
431 ctxt.update(self.nvp_ctxt())
432
433 self._save_flag_file()
434 return ctxt
435
436
437class OSConfigFlagContext(OSContextGenerator):
438
439 """
440 Responsible for adding user-defined config-flags in charm config to a
441 template context.
442
443 NOTE: the value of config-flags may be a comma-separated list of
 444 key=value pairs and some OpenStack config files support
445 comma-separated lists as values.
446 """
447
448 def __call__(self):
449 config_flags = config('config-flags')
450 if not config_flags:
451 return {}
452
453 if config_flags.find('==') >= 0:
454 log("config_flags is not in expected format (key=value)",
455 level=ERROR)
456 raise OSContextError
457
458 # strip the following from each value.
459 post_strippers = ' ,'
460 # we strip any leading/trailing '=' or ' ' from the string then
461 # split on '='.
462 split = config_flags.strip(' =').split('=')
463 limit = len(split)
464 flags = {}
465 for i in xrange(0, limit - 1):
466 current = split[i]
467 next = split[i + 1]
468 vindex = next.rfind(',')
469 if (i == limit - 2) or (vindex < 0):
470 value = next
471 else:
472 value = next[:vindex]
473
474 if i == 0:
475 key = current
476 else:
 477 # if this is not the first entry, expect an embedded key.
478 index = current.rfind(',')
479 if index < 0:
480 log("invalid config value(s) at index %s" % (i),
481 level=ERROR)
482 raise OSContextError
483 key = current[index + 1:]
484
485 # Add to collection.
486 flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
487
488 return {'user_config_flags': flags}
489
490
491class SubordinateConfigContext(OSContextGenerator):
492
493 """
494 Responsible for inspecting relations to subordinates that
495 may be exporting required config via a json blob.
496
497 The subordinate interface allows subordinates to export their
 498 configuration requirements to the principal for multiple config
 499 files and multiple services. I.e., a subordinate that has interfaces
 500 to both glance and nova may export the following yaml blob as json:
501
502 glance:
503 /etc/glance/glance-api.conf:
504 sections:
505 DEFAULT:
506 - [key1, value1]
507 /etc/glance/glance-registry.conf:
508 MYSECTION:
509 - [key2, value2]
510 nova:
511 /etc/nova/nova.conf:
512 sections:
513 DEFAULT:
514 - [key3, value3]
515
516
 517 It is then up to the principal charms to subscribe this context to
 518 the service+config file it is interested in. Configuration data will
519 be available in the template context, in glance's case, as:
520 ctxt = {
521 ... other context ...
522 'subordinate_config': {
523 'DEFAULT': {
524 'key1': 'value1',
525 },
526 'MYSECTION': {
527 'key2': 'value2',
528 },
529 }
530 }
531
532 """
533
534 def __init__(self, service, config_file, interface):
535 """
536 :param service : Service name key to query in any subordinate
537 data found
538 :param config_file : Service's config file to query sections
539 :param interface : Subordinate interface to inspect
540 """
541 self.service = service
542 self.config_file = config_file
543 self.interface = interface
544
545 def __call__(self):
546 ctxt = {}
547 for rid in relation_ids(self.interface):
548 for unit in related_units(rid):
549 sub_config = relation_get('subordinate_configuration',
550 rid=rid, unit=unit)
551 if sub_config and sub_config != '':
552 try:
553 sub_config = json.loads(sub_config)
 554 except ValueError:
555 log('Could not parse JSON from subordinate_config '
556 'setting from %s' % rid, level=ERROR)
557 continue
558
559 if self.service not in sub_config:
 560 log('Found subordinate_config on %s but it contained '
 561 'nothing for %s service' % (rid, self.service))
562 continue
563
564 sub_config = sub_config[self.service]
565 if self.config_file not in sub_config:
 566 log('Found subordinate_config on %s but it contained '
 567 'nothing for %s' % (rid, self.config_file))
568 continue
569
570 sub_config = sub_config[self.config_file]
571 for k, v in sub_config.iteritems():
572 ctxt[k] = v
573
574 if not ctxt:
575 ctxt['sections'] = {}
576
577 return ctxt
0578
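The `config-flags` parser in `OSConfigFlagContext` splits the whole string on `=` and back-tracks over commas to recover the keys. The same algorithm, extracted as a standalone function (with `range` in place of `xrange` and a `ValueError` in place of the logged `OSContextError`, so it also runs on Python 3):

```python
def parse_config_flags(config_flags):
    """Parse 'k1=v1,k2=v2,...' into a dict, as OSConfigFlagContext does."""
    post_strippers = ' ,'
    # Strip leading/trailing '=' or ' ', then split on '='.
    split = config_flags.strip(' =').split('=')
    limit = len(split)
    flags = {}
    for i in range(0, limit - 1):
        current = split[i]
        nxt = split[i + 1]
        vindex = nxt.rfind(',')
        if (i == limit - 2) or (vindex < 0):
            value = nxt
        else:
            value = nxt[:vindex]
        if i == 0:
            key = current
        else:
            # Not the first entry: expect an embedded key after the comma.
            index = current.rfind(',')
            if index < 0:
                raise ValueError("invalid config value(s) at index %s" % i)
            key = current[index + 1:]
        flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
    return flags
```

Note the documented caveat applies here too: values themselves may contain commas, which is why the parser only trims a trailing comma from each value rather than splitting on it.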
=== added file 'hooks/charmhelpers/contrib/openstack/neutron.py'
--- hooks/charmhelpers/contrib/openstack/neutron.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/neutron.py 2016-02-12 04:16:45 +0000
@@ -0,0 +1,137 @@
 1# Various utilities for dealing with Neutron and the renaming from Quantum.
2
3from subprocess import check_output
4
5from charmhelpers.core.hookenv import (
6 config,
7 log,
8 ERROR,
9)
10
11from charmhelpers.contrib.openstack.utils import os_release
12
13
14def headers_package():
15 """Ensures correct linux-headers for running kernel are installed,
16 for building DKMS package"""
17 kver = check_output(['uname', '-r']).strip()
18 return 'linux-headers-%s' % kver
19
20
21# legacy
22def quantum_plugins():
23 from charmhelpers.contrib.openstack import context
24 return {
25 'ovs': {
26 'config': '/etc/quantum/plugins/openvswitch/'
27 'ovs_quantum_plugin.ini',
28 'driver': 'quantum.plugins.openvswitch.ovs_quantum_plugin.'
29 'OVSQuantumPluginV2',
30 'contexts': [
31 context.SharedDBContext(user=config('neutron-database-user'),
32 database=config('neutron-database'),
33 relation_prefix='neutron')],
34 'services': ['quantum-plugin-openvswitch-agent'],
35 'packages': [[headers_package(), 'openvswitch-datapath-dkms'],
36 ['quantum-plugin-openvswitch-agent']],
37 'server_packages': ['quantum-server',
38 'quantum-plugin-openvswitch'],
39 'server_services': ['quantum-server']
40 },
41 'nvp': {
42 'config': '/etc/quantum/plugins/nicira/nvp.ini',
43 'driver': 'quantum.plugins.nicira.nicira_nvp_plugin.'
44 'QuantumPlugin.NvpPluginV2',
45 'contexts': [
46 context.SharedDBContext(user=config('neutron-database-user'),
47 database=config('neutron-database'),
48 relation_prefix='neutron')],
49 'services': [],
50 'packages': [],
51 'server_packages': ['quantum-server',
52 'quantum-plugin-nicira'],
53 'server_services': ['quantum-server']
54 }
55 }
56
57
58def neutron_plugins():
59 from charmhelpers.contrib.openstack import context
60 return {
61 'ovs': {
62 'config': '/etc/neutron/plugins/openvswitch/'
63 'ovs_neutron_plugin.ini',
64 'driver': 'neutron.plugins.openvswitch.ovs_neutron_plugin.'
65 'OVSNeutronPluginV2',
66 'contexts': [
67 context.SharedDBContext(user=config('neutron-database-user'),
68 database=config('neutron-database'),
69 relation_prefix='neutron')],
70 'services': ['neutron-plugin-openvswitch-agent'],
71 'packages': [[headers_package(), 'openvswitch-datapath-dkms'],
 72 ['neutron-plugin-openvswitch-agent']],
73 'server_packages': ['neutron-server',
74 'neutron-plugin-openvswitch'],
75 'server_services': ['neutron-server']
76 },
77 'nvp': {
78 'config': '/etc/neutron/plugins/nicira/nvp.ini',
79 'driver': 'neutron.plugins.nicira.nicira_nvp_plugin.'
80 'NeutronPlugin.NvpPluginV2',
81 'contexts': [
82 context.SharedDBContext(user=config('neutron-database-user'),
83 database=config('neutron-database'),
84 relation_prefix='neutron')],
85 'services': [],
86 'packages': [],
87 'server_packages': ['neutron-server',
88 'neutron-plugin-nicira'],
89 'server_services': ['neutron-server']
90 }
91 }
92
93
94def neutron_plugin_attribute(plugin, attr, net_manager=None):
95 manager = net_manager or network_manager()
96 if manager == 'quantum':
97 plugins = quantum_plugins()
98 elif manager == 'neutron':
99 plugins = neutron_plugins()
100 else:
101 log('Error: Network manager does not support plugins.')
102 raise Exception
103
104 try:
105 _plugin = plugins[plugin]
106 except KeyError:
107 log('Unrecognised plugin for %s: %s' % (manager, plugin), level=ERROR)
108 raise Exception
109
110 try:
111 return _plugin[attr]
112 except KeyError:
113 return None
114
115
116def network_manager():
117 '''
118 Deals with the renaming of Quantum to Neutron in H and any situations
 119 that require compatibility (e.g., deploying H with network-manager=quantum,
120 upgrading from G).
121 '''
122 release = os_release('nova-common')
123 manager = config('network-manager').lower()
124
125 if manager not in ['quantum', 'neutron']:
126 return manager
127
128 if release in ['essex']:
129 # E does not support neutron
130 log('Neutron networking not supported in Essex.', level=ERROR)
131 raise Exception
132 elif release in ['folsom', 'grizzly']:
133 # neutron is named quantum in F and G
134 return 'quantum'
135 else:
136 # ensure accurate naming for all releases post-H
137 return 'neutron'
0138
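The Quantum/Neutron renaming logic in `network_manager` reduces to a release lookup. A standalone sketch of that mapping (function name is mine; the real helper reads the release via `os_release('nova-common')` and the manager from charm config, and logs before raising):

```python
def resolve_network_manager(release, manager):
    # Mirrors network_manager(): quantum in Folsom/Grizzly,
    # neutron from Havana onwards, and an error on Essex.
    manager = manager.lower()
    if manager not in ['quantum', 'neutron']:
        return manager
    if release == 'essex':
        raise Exception('Neutron networking not supported in Essex.')
    elif release in ['folsom', 'grizzly']:
        return 'quantum'
    return 'neutron'
```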
=== added directory 'hooks/charmhelpers/contrib/openstack/templates'
=== added file 'hooks/charmhelpers/contrib/openstack/templates/__init__.py'
--- hooks/charmhelpers/contrib/openstack/templates/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/templates/__init__.py 2016-02-12 04:16:45 +0000
@@ -0,0 +1,2 @@
1# dummy __init__.py to fool syncer into thinking this is a syncable python
2# module
03
=== added file 'hooks/charmhelpers/contrib/openstack/templates/ceph.conf'
--- hooks/charmhelpers/contrib/openstack/templates/ceph.conf 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/templates/ceph.conf 2016-02-12 04:16:45 +0000
@@ -0,0 +1,11 @@
1###############################################################################
2# [ WARNING ]
3# cinder configuration file maintained by Juju
4# local changes may be overwritten.
5###############################################################################
6{% if auth -%}
7[global]
8 auth_supported = {{ auth }}
9 keyring = /etc/ceph/$cluster.$name.keyring
10 mon host = {{ mon_hosts }}
11{% endif -%}
012
=== added file 'hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg'
--- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2016-02-12 04:16:45 +0000
@@ -0,0 +1,37 @@
1global
2 log 127.0.0.1 local0
3 log 127.0.0.1 local1 notice
4 maxconn 20000
5 user haproxy
6 group haproxy
7 spread-checks 0
8
9defaults
10 log global
11 mode http
12 option httplog
13 option dontlognull
14 retries 3
15 timeout queue 1000
16 timeout connect 1000
17 timeout client 30000
18 timeout server 30000
19
20listen stats :8888
21 mode http
22 stats enable
23 stats hide-version
24 stats realm Haproxy\ Statistics
25 stats uri /
26 stats auth admin:password
27
28{% if units -%}
29{% for service, ports in service_ports.iteritems() -%}
30listen {{ service }} 0.0.0.0:{{ ports[0] }}
31 balance roundrobin
32 option tcplog
33 {% for unit, address in units.iteritems() -%}
34 server {{ unit }} {{ address }}:{{ ports[1] }} check
35 {% endfor %}
36{% endfor -%}
37{% endif -%}
038
=== added file 'hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend'
--- hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend 2016-02-12 04:16:45 +0000
@@ -0,0 +1,23 @@
1{% if endpoints -%}
2{% for ext, int in endpoints -%}
3Listen {{ ext }}
4NameVirtualHost *:{{ ext }}
5<VirtualHost *:{{ ext }}>
6 ServerName {{ private_address }}
7 SSLEngine on
8 SSLCertificateFile /etc/apache2/ssl/{{ namespace }}/cert
9 SSLCertificateKeyFile /etc/apache2/ssl/{{ namespace }}/key
10 ProxyPass / http://localhost:{{ int }}/
11 ProxyPassReverse / http://localhost:{{ int }}/
12 ProxyPreserveHost on
13</VirtualHost>
14<Proxy *>
15 Order deny,allow
16 Allow from all
17</Proxy>
18<Location />
19 Order allow,deny
20 Allow from all
21</Location>
22{% endfor -%}
23{% endif -%}
024
=== added symlink 'hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf'
=== target is u'openstack_https_frontend'
=== added file 'hooks/charmhelpers/contrib/openstack/templating.py'
--- hooks/charmhelpers/contrib/openstack/templating.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/templating.py 2016-02-12 04:16:45 +0000
@@ -0,0 +1,280 @@
1import os
2
3from charmhelpers.fetch import apt_install
4
5from charmhelpers.core.hookenv import (
6 log,
7 ERROR,
8 INFO
9)
10
11from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
12
13try:
14 from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
15except ImportError:
16 # python-jinja2 may not be installed yet, or we're running unittests.
17 FileSystemLoader = ChoiceLoader = Environment = exceptions = None
18
19
20class OSConfigException(Exception):
21 pass
22
23
24def get_loader(templates_dir, os_release):
25 """
26 Create a jinja2.ChoiceLoader containing template dirs up to
27 and including os_release. If a release's template directory
28 is missing under templates_dir, it is omitted from the loader.
29 templates_dir is added to the bottom of the search list as a base
30 loading dir.
31
32 A charm may also ship a templates dir with this module
33 and it will be appended to the bottom of the search list, eg:
34 hooks/charmhelpers/contrib/openstack/templates.
35
36 :param templates_dir: str: Base template directory containing release
37 sub-directories.
38 :param os_release : str: OpenStack release codename to construct template
39 loader.
40
41 :returns : jinja2.ChoiceLoader constructed with a list of
42 jinja2.FilesystemLoaders, ordered in descending
43 order by OpenStack release.
44 """
45 tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
46 for rel in OPENSTACK_CODENAMES.itervalues()]
47
48 if not os.path.isdir(templates_dir):
49 log('Templates directory not found @ %s.' % templates_dir,
50 level=ERROR)
51 raise OSConfigException
52
53 # the bottom contains templates_dir and possibly a common templates dir
54 # shipped with the helper.
55 loaders = [FileSystemLoader(templates_dir)]
56 helper_templates = os.path.join(os.path.dirname(__file__), 'templates')
57 if os.path.isdir(helper_templates):
58 loaders.append(FileSystemLoader(helper_templates))
59
60 for rel, tmpl_dir in tmpl_dirs:
61 if os.path.isdir(tmpl_dir):
62 loaders.insert(0, FileSystemLoader(tmpl_dir))
63 if rel == os_release:
64 break
65 log('Creating choice loader with dirs: %s' %
66 [l.searchpath for l in loaders], level=INFO)
67 return ChoiceLoader(loaders)
68
69
70class OSConfigTemplate(object):
71 """
72 Associates a config file template with a list of context generators.
73 Responsible for constructing a template context based on those generators.
74 """
75 def __init__(self, config_file, contexts):
76 self.config_file = config_file
77
78 if hasattr(contexts, '__call__'):
79 self.contexts = [contexts]
80 else:
81 self.contexts = contexts
82
83 self._complete_contexts = []
84
85 def context(self):
86 ctxt = {}
87 for context in self.contexts:
88 _ctxt = context()
89 if _ctxt:
90 ctxt.update(_ctxt)
91 # track interfaces for every complete context.
92 [self._complete_contexts.append(interface)
93 for interface in context.interfaces
94 if interface not in self._complete_contexts]
95 return ctxt
96
97 def complete_contexts(self):
98 '''
99 Return a list of interfaces that have satisfied contexts.
100 '''
101 if self._complete_contexts:
102 return self._complete_contexts
103 self.context()
104 return self._complete_contexts
105
106
107class OSConfigRenderer(object):
108 """
109 This class provides a common templating system to be used by OpenStack
110 charms. It is intended to help charms share common code and templates,
111 and ease the burden of managing config templates across multiple OpenStack
112 releases.
113
114 Basic usage:
115 # import some common context generators from charmhelpers
116 from charmhelpers.contrib.openstack import context
117
118 # Create a renderer object for a specific OS release.
119 configs = OSConfigRenderer(templates_dir='/tmp/templates',
120 openstack_release='folsom')
121 # register some config files with context generators.
122 configs.register(config_file='/etc/nova/nova.conf',
123 contexts=[context.SharedDBContext(),
124 context.AMQPContext()])
125 configs.register(config_file='/etc/nova/api-paste.ini',
126 contexts=[context.IdentityServiceContext()])
127 configs.register(config_file='/etc/haproxy/haproxy.conf',
128 contexts=[context.HAProxyContext()])
129 # write out a single config
130 configs.write('/etc/nova/nova.conf')
131 # write out all registered configs
132 configs.write_all()
133
134 Details:
135
136 OpenStack Releases and template loading
137 ---------------------------------------
138 When the object is instantiated, it is associated with a specific OS
139 release. This dictates how the template loader will be constructed.
140
141 The constructed loader attempts to load the template from several places
142 in the following order:
143 - from the most recent OS release-specific template dir (if one exists)
144 - the base templates_dir
145 - a template directory shipped in the charm with this helper file.
146
147
148 For the example above, '/tmp/templates' contains the following structure:
149 /tmp/templates/nova.conf
150 /tmp/templates/api-paste.ini
151 /tmp/templates/grizzly/api-paste.ini
152 /tmp/templates/havana/api-paste.ini
153
154 Since it was registered with the grizzly release, it first searches
155 the grizzly directory for nova.conf, then the templates dir.
156
157 When writing api-paste.ini, it will find the template in the grizzly
158 directory.
159
160 If the object were created with folsom, it would fall back to the
161 base templates dir for its api-paste.ini template.
162
163 This system should help manage changes in config files through
164 openstack releases, allowing charms to fall back to the most recently
165 updated config template for a given release.
166
167 The haproxy.conf, since it is not shipped in the templates dir, will
168 be loaded from the module directory's template directory, eg
169 $CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows
170 us to ship common templates (haproxy, apache) with the helpers.
171
172 Context generators
173 ---------------------------------------
174 Context generators are used to generate template contexts during hook
175 execution. Doing so may require inspecting service relations, charm
176 config, etc. When registered, a config file is associated with a list
177 of generators. When a template is rendered and written, all context
178 generators are called in a chain to generate the context dictionary
179 passed to the jinja2 template. See context.py for more info.
180 """
181 def __init__(self, templates_dir, openstack_release):
182 if not os.path.isdir(templates_dir):
183 log('Could not locate templates dir %s' % templates_dir,
184 level=ERROR)
185 raise OSConfigException
186
187 self.templates_dir = templates_dir
188 self.openstack_release = openstack_release
189 self.templates = {}
190 self._tmpl_env = None
191
192 if None in [Environment, ChoiceLoader, FileSystemLoader]:
193 # if this code is running, the object is created pre-install hook.
194 # jinja2 shouldn't get touched until the module is reloaded on next
195 # hook execution, with proper jinja2 bits successfully imported.
196 apt_install('python-jinja2')
197
198 def register(self, config_file, contexts):
199 """
200 Register a config file with a list of context generators to be called
201 during rendering.
202 """
203 self.templates[config_file] = OSConfigTemplate(config_file=config_file,
204 contexts=contexts)
205 log('Registered config file: %s' % config_file, level=INFO)
206
207 def _get_tmpl_env(self):
208 if not self._tmpl_env:
209 loader = get_loader(self.templates_dir, self.openstack_release)
210 self._tmpl_env = Environment(loader=loader)
211
212 def _get_template(self, template):
213 self._get_tmpl_env()
214 template = self._tmpl_env.get_template(template)
215 log('Loaded template from %s' % template.filename, level=INFO)
216 return template
217
218 def render(self, config_file):
219 if config_file not in self.templates:
220 log('Config not registered: %s' % config_file, level=ERROR)
221 raise OSConfigException
222 ctxt = self.templates[config_file].context()
223
224 _tmpl = os.path.basename(config_file)
225 try:
226 template = self._get_template(_tmpl)
227 except exceptions.TemplateNotFound:
228 # if no template is found with basename, try looking for it
229 # using a munged full path, eg:
230 # /etc/apache2/apache2.conf -> etc_apache2_apache2.conf
231 _tmpl = '_'.join(config_file.split('/')[1:])
232 try:
233 template = self._get_template(_tmpl)
234 except exceptions.TemplateNotFound as e:
235 log('Could not load template from %s by %s or %s.' %
236 (self.templates_dir, os.path.basename(config_file), _tmpl),
237 level=ERROR)
238 raise e
239
240 log('Rendering from template: %s' % _tmpl, level=INFO)
241 return template.render(ctxt)
242
243 def write(self, config_file):
244 """
245 Write a single config file; raises if the config file is not registered.
246 """
247 if config_file not in self.templates:
248 log('Config not registered: %s' % config_file, level=ERROR)
249 raise OSConfigException
250
251 _out = self.render(config_file)
252
253 with open(config_file, 'wb') as out:
254 out.write(_out)
255
256 log('Wrote template %s.' % config_file, level=INFO)
257
258 def write_all(self):
259 """
260 Write out all registered config files.
261 """
262 [self.write(k) for k in self.templates.iterkeys()]
263
264 def set_release(self, openstack_release):
265 """
266 Resets the template environment and generates a new template loader
267 based on the new openstack release.
268 """
269 self._tmpl_env = None
270 self.openstack_release = openstack_release
271 self._get_tmpl_env()
272
273 def complete_contexts(self):
274 '''
275 Returns a list of context interfaces that yield a complete context.
276 '''
277 interfaces = []
278 [interfaces.extend(i.complete_contexts())
279 for i in self.templates.itervalues()]
280 return interfaces
0281
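`get_loader()` builds its search path by walking OPENSTACK_CODENAMES oldest to newest, prepending each release directory that exists, and stopping at `os_release`, so the most release-specific directory is consulted first and the base templates_dir last. A small sketch of that ordering logic, using hypothetical directory names rather than real filesystem checks:

```python
from collections import OrderedDict

# Hypothetical release map mirroring OPENSTACK_CODENAMES values, oldest first.
CODENAMES = OrderedDict([("2012.1", "essex"), ("2012.2", "folsom"),
                         ("2013.1", "grizzly")])

def loader_search_order(os_release, existing_release_dirs):
    """Mirror get_loader(): walk releases oldest to newest, prepending each
    release dir that exists, and stop once os_release is reached; the base
    templates dir stays at the bottom of the search list."""
    order = ["templates"]  # base templates_dir, searched last
    for rel in CODENAMES.values():
        if rel in existing_release_dirs:
            order.insert(0, "templates/%s" % rel)
        if rel == os_release:
            break
    return order

print(loader_search_order("grizzly", {"folsom", "grizzly"}))
```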
=== added file 'hooks/charmhelpers/contrib/openstack/utils.py'
--- hooks/charmhelpers/contrib/openstack/utils.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/openstack/utils.py 2016-02-12 04:16:45 +0000
@@ -0,0 +1,440 @@
1#!/usr/bin/python
2
3# Common python helper functions used for OpenStack charms.
4from collections import OrderedDict
5
6import apt_pkg as apt
7import subprocess
8import os
9import socket
10import sys
11
12from charmhelpers.core.hookenv import (
13 config,
14 log as juju_log,
15 charm_dir,
16 ERROR,
17 INFO
18)
19
20from charmhelpers.contrib.storage.linux.lvm import (
21 deactivate_lvm_volume_group,
22 is_lvm_physical_volume,
23 remove_lvm_physical_volume,
24)
25
26from charmhelpers.core.host import lsb_release, mounts, umount
27from charmhelpers.fetch import apt_install
28from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
29from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
30
31CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
32CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
33
34DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed '
35 'restricted main multiverse universe')
36
37
38UBUNTU_OPENSTACK_RELEASE = OrderedDict([
39 ('oneiric', 'diablo'),
40 ('precise', 'essex'),
41 ('quantal', 'folsom'),
42 ('raring', 'grizzly'),
43 ('saucy', 'havana'),
44 ('trusty', 'icehouse')
45])
46
47
48OPENSTACK_CODENAMES = OrderedDict([
49 ('2011.2', 'diablo'),
50 ('2012.1', 'essex'),
51 ('2012.2', 'folsom'),
52 ('2013.1', 'grizzly'),
53 ('2013.2', 'havana'),
54 ('2014.1', 'icehouse'),
55])
56
57# The ugly duckling
58SWIFT_CODENAMES = OrderedDict([
59 ('1.4.3', 'diablo'),
60 ('1.4.8', 'essex'),
61 ('1.7.4', 'folsom'),
62 ('1.8.0', 'grizzly'),
63 ('1.7.7', 'grizzly'),
64 ('1.7.6', 'grizzly'),
65 ('1.10.0', 'havana'),
66 ('1.9.1', 'havana'),
67 ('1.9.0', 'havana'),
68])
69
70DEFAULT_LOOPBACK_SIZE = '5G'
71
72
73def error_out(msg):
74 juju_log("FATAL ERROR: %s" % msg, level='ERROR')
75 sys.exit(1)
76
77
78def get_os_codename_install_source(src):
79 '''Derive OpenStack release codename from a given installation source.'''
80 ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
81 rel = ''
82 if src in ['distro', 'distro-proposed']:
83 try:
84 rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel]
85 except KeyError:
86 e = 'Could not derive openstack release for '\
87 'this Ubuntu release: %s' % ubuntu_rel
88 error_out(e)
89 return rel
90
91 if src.startswith('cloud:'):
92 ca_rel = src.split(':')[1]
93 ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]
94 return ca_rel
95
96 # Best guess match based on deb string provided
97 if src.startswith('deb') or src.startswith('ppa'):
98 for k, v in OPENSTACK_CODENAMES.iteritems():
99 if v in src:
100 return v
101
102
103def get_os_version_install_source(src):
104 codename = get_os_codename_install_source(src)
105 return get_os_version_codename(codename)
106
107
108def get_os_codename_version(vers):
109 '''Determine OpenStack codename from version number.'''
110 try:
111 return OPENSTACK_CODENAMES[vers]
112 except KeyError:
113 e = 'Could not determine OpenStack codename for version %s' % vers
114 error_out(e)
115
116
117def get_os_version_codename(codename):
118 '''Determine OpenStack version number from codename.'''
119 for k, v in OPENSTACK_CODENAMES.iteritems():
120 if v == codename:
121 return k
122 e = 'Could not derive OpenStack version for '\
123 'codename: %s' % codename
124 error_out(e)
125
126
127def get_os_codename_package(package, fatal=True):
128 '''Derive OpenStack release codename from an installed package.'''
129 apt.init()
130 cache = apt.Cache()
131
132 try:
133 pkg = cache[package]
134 except KeyError:
135 if not fatal:
136 return None
137 # the package is unknown to the current apt cache.
138 e = 'Could not determine version of package with no installation '\
139 'candidate: %s' % package
140 error_out(e)
141
142 if not pkg.current_ver:
143 if not fatal:
144 return None
145 # package is known, but no version is currently installed.
146 e = 'Could not determine version of uninstalled package: %s' % package
147 error_out(e)
148
149 vers = apt.upstream_version(pkg.current_ver.ver_str)
150
151 try:
152 if 'swift' in pkg.name:
153 swift_vers = vers[:5]
154 if swift_vers not in SWIFT_CODENAMES:
155 # Deal with 1.10.0 upward
156 swift_vers = vers[:6]
157 return SWIFT_CODENAMES[swift_vers]
158 else:
159 vers = vers[:6]
160 return OPENSTACK_CODENAMES[vers]
161 except KeyError:
162 e = 'Could not determine OpenStack codename for version %s' % vers
163 error_out(e)
164
165
166def get_os_version_package(pkg, fatal=True):
167 '''Derive OpenStack version number from an installed package.'''
168 codename = get_os_codename_package(pkg, fatal=fatal)
169
170 if not codename:
171 return None
172
173 if 'swift' in pkg:
174 vers_map = SWIFT_CODENAMES
175 else:
176 vers_map = OPENSTACK_CODENAMES
177
178 for version, cname in vers_map.iteritems():
179 if cname == codename:
180 return version
181 #e = "Could not determine OpenStack version for package: %s" % pkg
182 #error_out(e)
183
184
185os_rel = None
186
187
188def os_release(package, base='essex'):
189 '''
190 Returns OpenStack release codename from a cached global.
191 If the codename can not be determined from either an installed package or
192 the installation source, the earliest release supported by the charm should
193 be returned.
194 '''
195 global os_rel
196 if os_rel:
197 return os_rel
198 os_rel = (get_os_codename_package(package, fatal=False) or
199 get_os_codename_install_source(config('openstack-origin')) or
200 base)
201 return os_rel
202
203
204def import_key(keyid):
205 cmd = "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 " \
206 "--recv-keys %s" % keyid
207 try:
208 subprocess.check_call(cmd.split(' '))
209 except subprocess.CalledProcessError:
210 error_out("Error importing repo key %s" % keyid)
211
212
213def configure_installation_source(rel):
214 '''Configure apt installation source.'''
215 if rel == 'distro':
216 return
217 elif rel == 'distro-proposed':
218 ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
219 with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
220 f.write(DISTRO_PROPOSED % ubuntu_rel)
221 elif rel[:4] == "ppa:":
222 src = rel
223 subprocess.check_call(["add-apt-repository", "-y", src])
224 elif rel[:3] == "deb":
225 l = len(rel.split('|'))
226 if l == 2:
227 src, key = rel.split('|')
228 juju_log("Importing PPA key from keyserver for %s" % src)
229 import_key(key)
230 elif l == 1:
231 src = rel
232 with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
233 f.write(src)
234 elif rel[:6] == 'cloud:':
235 ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
236 rel = rel.split(':')[1]
237 u_rel = rel.split('-')[0]
238 ca_rel = rel.split('-')[1]
239
240 if u_rel != ubuntu_rel:
241 e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\
242 'version (%s)' % (ca_rel, ubuntu_rel)
243 error_out(e)
244
245 if 'staging' in ca_rel:
246 # staging is just a regular PPA.
247 os_rel = ca_rel.split('/')[0]
248 ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel
249 cmd = 'add-apt-repository -y %s' % ppa
250 subprocess.check_call(cmd.split(' '))
251 return
252
253 # map charm config options to actual archive pockets.
254 pockets = {
255 'folsom': 'precise-updates/folsom',
256 'folsom/updates': 'precise-updates/folsom',
257 'folsom/proposed': 'precise-proposed/folsom',
258 'grizzly': 'precise-updates/grizzly',
259 'grizzly/updates': 'precise-updates/grizzly',
260 'grizzly/proposed': 'precise-proposed/grizzly',
261 'havana': 'precise-updates/havana',
262 'havana/updates': 'precise-updates/havana',
263 'havana/proposed': 'precise-proposed/havana',
264 'icehouse': 'precise-updates/icehouse',
265 'icehouse/updates': 'precise-updates/icehouse',
266 'icehouse/proposed': 'precise-proposed/icehouse',
267 }
268
269 try:
270 pocket = pockets[ca_rel]
271 except KeyError:
272 e = 'Invalid Cloud Archive release specified: %s' % rel
273 error_out(e)
274
275 src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket)
276 apt_install('ubuntu-cloud-keyring', fatal=True)
277
278 with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f:
279 f.write(src)
280 else:
281 error_out("Invalid openstack-release specified: %s" % rel)
282
283
284def save_script_rc(script_path="scripts/scriptrc", **env_vars):
285 """
286 Write an rc file in the charm-delivered directory containing
287 exported environment variables provided by env_vars. Any charm scripts run
288 outside the juju hook environment can source this scriptrc to obtain
289 updated config information necessary to perform health checks or
290 service changes.
291 """
292 juju_rc_path = "%s/%s" % (charm_dir(), script_path)
293 if not os.path.exists(os.path.dirname(juju_rc_path)):
294 os.mkdir(os.path.dirname(juju_rc_path))
295 with open(juju_rc_path, 'wb') as rc_script:
296 rc_script.write(
297 "#!/bin/bash\n")
298 [rc_script.write('export %s=%s\n' % (u, p))
299 for u, p in env_vars.iteritems() if u != "script_path"]
300
301
302def openstack_upgrade_available(package):
303 """
304 Determines if an OpenStack upgrade is available from installation
305 source, based on version of installed package.
306
307 :param package: str: Name of installed package.
308
309 :returns: bool: Returns True if the configured installation source offers
310 a newer version of package.
311
312 """
313
314 src = config('openstack-origin')
315 cur_vers = get_os_version_package(package)
316 available_vers = get_os_version_install_source(src)
317 apt.init()
318 return apt.version_compare(available_vers, cur_vers) == 1
319
320
321def ensure_block_device(block_device):
322 '''
323 Confirm block_device, create as loopback if necessary.
324
325 :param block_device: str: Full path of block device to ensure.
326
327 :returns: str: Full path of ensured block device.
328 '''
329 _none = ['None', 'none', None]
330 if (block_device in _none):
331 error_out('prepare_storage(): Missing required input: '
332 'block_device=%s.' % block_device, level=ERROR)
333
334 if block_device.startswith('/dev/'):
335 bdev = block_device
336 elif block_device.startswith('/'):
337 _bd = block_device.split('|')
338 if len(_bd) == 2:
339 bdev, size = _bd
340 else:
341 bdev = block_device
342 size = DEFAULT_LOOPBACK_SIZE
343 bdev = ensure_loopback_device(bdev, size)
344 else:
345 bdev = '/dev/%s' % block_device
346
347 if not is_block_device(bdev):
348 error_out('Failed to locate valid block device at %s' % bdev,
349 level=ERROR)
350
351 return bdev
352
353
354def clean_storage(block_device):
355 '''
356 Ensures a block device is clean. That is:
357 - unmounted
358 - any lvm volume groups are deactivated
359 - any lvm physical device signatures removed
360 - partition table wiped
361
362 :param block_device: str: Full path to block device to clean.
363 '''
364 for mp, d in mounts():
365 if d == block_device:
366 juju_log('clean_storage(): %s is mounted @ %s, unmounting.' %
367 (d, mp), level=INFO)
368 umount(mp, persist=True)
369
370 if is_lvm_physical_volume(block_device):
371 deactivate_lvm_volume_group(block_device)
372 remove_lvm_physical_volume(block_device)
373 else:
374 zap_disk(block_device)
375
376
377def is_ip(address):
378 """
379 Returns True if address is a valid IPv4 address.
380 """
381 try:
382 # Test to see if already an IPv4 address
383 socket.inet_aton(address)
384 return True
385 except socket.error:
386 return False
387
388
389def ns_query(address):
390 try:
391 import dns.resolver
392 except ImportError:
393 apt_install('python-dnspython')
394 import dns.resolver
395
396 if isinstance(address, dns.name.Name):
397 rtype = 'PTR'
398 elif isinstance(address, basestring):
399 rtype = 'A'
400
401 answers = dns.resolver.query(address, rtype)
402 if answers:
403 return str(answers[0])
404 return None
405
406
407def get_host_ip(hostname):
408 """
409 Resolves the IP for a given hostname, or returns
410 the input if it is already an IP.
411 """
412 if is_ip(hostname):
413 return hostname
414
415 return ns_query(hostname)
416
417
418def get_hostname(address):
419 """
420 Resolves hostname for given IP, or returns the input
421 if it is already a hostname.
422 """
423 if not is_ip(address):
424 return address
425
426 try:
427 import dns.reversename
428 except ImportError:
429 apt_install('python-dnspython')
430 import dns.reversename
431
432 rev = dns.reversename.from_address(address)
433 result = ns_query(rev)
434 if not result:
435 return None
436
437 # strip trailing .
438 if result.endswith('.'):
439 return result[:-1]
440 return result
0441
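The `cloud:` branch of `get_os_codename_install_source()` extracts the codename purely by string splitting: strip the `cloud:` prefix, drop the Ubuntu series prefix, then drop any pocket suffix. A standalone sketch of that parsing, with hypothetical inputs:

```python
def codename_from_cloud_source(src, ubuntu_rel):
    """Mirror the cloud: branch above: strip the 'cloud:' prefix, drop the
    Ubuntu series prefix, then drop any '/updates'-style pocket suffix."""
    ca_rel = src.split(':')[1]
    return ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]

print(codename_from_cloud_source('cloud:precise-havana/updates', 'precise'))
```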
=== added directory 'hooks/charmhelpers/contrib/saltstack'
=== added file 'hooks/charmhelpers/contrib/saltstack/__init__.py'
--- hooks/charmhelpers/contrib/saltstack/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/saltstack/__init__.py 2016-02-12 04:16:45 +0000
@@ -0,0 +1,102 @@
1"""Charm Helpers saltstack - declare the state of your machines.
2
3This helper enables you to declare your machine state, rather than
4program it procedurally (and have to test each change to your procedures).
5Your install hook can be as simple as:
6
7{{{
8from charmhelpers.contrib.saltstack import (
9 install_salt_support,
10 update_machine_state,
11)
12
13
14def install():
15 install_salt_support()
16 update_machine_state('machine_states/dependencies.yaml')
17 update_machine_state('machine_states/installed.yaml')
18}}}
19
20and won't need to change (nor will its tests) when you change the machine
21state.
22
23It uses a python package called salt-minion which allows various formats for
24specifying resources, such as:
25
26{{{
27/srv/{{ basedir }}:
28 file.directory:
29 - group: ubunet
30 - user: ubunet
31 - require:
32 - user: ubunet
33 - recurse:
34 - user
35 - group
36
37ubunet:
38 group.present:
39 - gid: 1500
40 user.present:
41 - uid: 1500
42 - gid: 1500
43 - createhome: False
44 - require:
45 - group: ubunet
46}}}
47
48The docs for all the different state definitions are at:
49 http://docs.saltstack.com/ref/states/all/
50
51
52TODO:
53 * Add test helpers which will ensure that machine state definitions
54 are functionally (but not necessarily logically) correct (i.e. getting
55 salt to parse all state defs).
56 * Add a link to a public bootstrap charm example / blogpost.
57 * Find a way to obviate the need to use the grains['charm_dir'] syntax
58 in templates.
59"""
60# Copyright 2013 Canonical Ltd.
61#
62# Authors:
63# Charm Helpers Developers <juju@lists.ubuntu.com>
64import subprocess
65
66import charmhelpers.contrib.templating.contexts
67import charmhelpers.core.host
68import charmhelpers.core.hookenv
69
70
71salt_grains_path = '/etc/salt/grains'
72
73
74def install_salt_support(from_ppa=True):
75 """Installs the salt-minion helper for machine state.
76
77 By default the salt-minion package is installed from
78 the saltstack PPA. If from_ppa is False you must ensure
79 that the salt-minion package is available in the apt cache.
80 """
81 if from_ppa:
82 subprocess.check_call([
83 '/usr/bin/add-apt-repository',
84 '--yes',
85 'ppa:saltstack/salt',
86 ])
87 subprocess.check_call(['/usr/bin/apt-get', 'update'])
88 # We install salt-common as salt-minion would run the salt-minion
89 # daemon.
90 charmhelpers.fetch.apt_install('salt-common')
91
92
93def update_machine_state(state_path):
94 """Update the machine state using the provided state declaration."""
95 charmhelpers.contrib.templating.contexts.juju_state_to_yaml(
96 salt_grains_path)
97 subprocess.check_call([
98 'salt-call',
99 '--local',
100 'state.template',
101 state_path,
102 ])
0103
=== added directory 'hooks/charmhelpers/contrib/ssl'
=== added file 'hooks/charmhelpers/contrib/ssl/__init__.py'
--- hooks/charmhelpers/contrib/ssl/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/ssl/__init__.py 2016-02-12 04:16:45 +0000
@@ -0,0 +1,78 @@
1import subprocess
2from charmhelpers.core import hookenv
3
4
5def generate_selfsigned(keyfile, certfile, keysize="1024", config=None, subject=None, cn=None):
6 """Generate selfsigned SSL keypair
7
8 You must provide one of the 3 optional arguments:
9 config, subject or cn
10 If more than one is provided, the leftmost will be used
11
12 Arguments:
13 keyfile -- (required) full path to the keyfile to be created
14 certfile -- (required) full path to the certfile to be created
15 keysize -- (optional) SSL key length
16 config -- (optional) openssl configuration file
17 subject -- (optional) dictionary with SSL subject variables
18 cn -- (optional) certificate common name
19
20 Required keys in subject dict:
21 cn -- Common name (e.g. FQDN)
22
23 Optional keys in subject dict
24 country -- Country Name (2 letter code)
25 state -- State or Province Name (full name)
26 locality -- Locality Name (eg, city)
27 organization -- Organization Name (eg, company)
28 organizational_unit -- Organizational Unit Name (eg, section)
29 email -- Email Address
30 """
31
32 cmd = []
33 if config:
34 cmd = ["/usr/bin/openssl", "req", "-new", "-newkey",
35 "rsa:{}".format(keysize), "-days", "365", "-nodes", "-x509",
36 "-keyout", keyfile,
37 "-out", certfile, "-config", config]
38 elif subject:
39 ssl_subject = ""
40 if "country" in subject:
41 ssl_subject = ssl_subject + "/C={}".format(subject["country"])
42 if "state" in subject:
43 ssl_subject = ssl_subject + "/ST={}".format(subject["state"])
44 if "locality" in subject:
45 ssl_subject = ssl_subject + "/L={}".format(subject["locality"])
46 if "organization" in subject:
47 ssl_subject = ssl_subject + "/O={}".format(subject["organization"])
48 if "organizational_unit" in subject:
49 ssl_subject = ssl_subject + "/OU={}".format(subject["organizational_unit"])
50 if "cn" in subject:
51 ssl_subject = ssl_subject + "/CN={}".format(subject["cn"])
52 else:
53 hookenv.log("When using \"subject\" argument you must "
54 "provide \"cn\" field at very least")
55 return False
56 if "email" in subject:
57 ssl_subject = ssl_subject + "/emailAddress={}".format(subject["email"])
58
59 cmd = ["/usr/bin/openssl", "req", "-new", "-newkey",
60 "rsa:{}".format(keysize), "-days", "365", "-nodes", "-x509",
61 "-keyout", keyfile,
62 "-out", certfile, "-subj", ssl_subject]
63 elif cn:
64 cmd = ["/usr/bin/openssl", "req", "-new", "-newkey",
65 "rsa:{}".format(keysize), "-days", "365", "-nodes", "-x509",
66 "-keyout", keyfile,
67 "-out", certfile, "-subj", "/CN={}".format(cn)]
68
69 if not cmd:
70 hookenv.log("No config, subject or cn provided, "
71 "unable to generate self-signed SSL certificates")
72 return False
73 try:
74 subprocess.check_call(cmd)
75 return True
76 except Exception as e:
77 print "Execution of openssl command failed:\n{}".format(e)
78 return False
079
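`generate_selfsigned()` assembles the openssl `-subj` string field by field from the subject dict and refuses to proceed when the required `cn` key is absent. The same assembly logic, sketched standalone with hypothetical subject values:

```python
def build_subject(subject):
    """Mirror generate_selfsigned()'s subject handling: assemble the openssl
    -subj string field by field; return None when the required 'cn' key is
    missing, as the helper does."""
    fields = [("country", "C"), ("state", "ST"), ("locality", "L"),
              ("organization", "O"), ("organizational_unit", "OU")]
    ssl_subject = ""
    for key, field in fields:
        if key in subject:
            ssl_subject += "/{}={}".format(field, subject[key])
    if "cn" not in subject:
        return None  # required field absent
    ssl_subject += "/CN={}".format(subject["cn"])
    if "email" in subject:
        ssl_subject += "/emailAddress={}".format(subject["email"])
    return ssl_subject

print(build_subject({"country": "GB", "cn": "example.com"}))
```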
=== added directory 'hooks/charmhelpers/contrib/storage'
=== added file 'hooks/charmhelpers/contrib/storage/__init__.py'
=== added directory 'hooks/charmhelpers/contrib/storage/linux'
=== added file 'hooks/charmhelpers/contrib/storage/linux/__init__.py'
=== added file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
--- hooks/charmhelpers/contrib/storage/linux/ceph.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2016-02-12 04:16:45 +0000
@@ -0,0 +1,383 @@
1#
2# Copyright 2012 Canonical Ltd.
3#
4# This file is sourced from lp:openstack-charm-helpers
5#
6# Authors:
7# James Page <james.page@ubuntu.com>
8# Adam Gandelman <adamg@ubuntu.com>
9#
10
11import os
12import shutil
13import json
14import time
15
16from subprocess import (
17 check_call,
18 check_output,
19 CalledProcessError
20)
21
22from charmhelpers.core.hookenv import (
23 relation_get,
24 relation_ids,
25 related_units,
26 log,
27 INFO,
28 WARNING,
29 ERROR
30)
31
32from charmhelpers.core.host import (
33 mount,
34 mounts,
35 service_start,
36 service_stop,
37 service_running,
38 umount,
39)
40
41from charmhelpers.fetch import (
42 apt_install,
43)
44
45KEYRING = '/etc/ceph/ceph.client.{}.keyring'
46KEYFILE = '/etc/ceph/ceph.client.{}.key'
47
48CEPH_CONF = """[global]
49 auth supported = {auth}
50 keyring = {keyring}
51 mon host = {mon_hosts}
52"""
53
54
55def install():
56 ''' Basic Ceph client installation '''
57 ceph_dir = "/etc/ceph"
58 if not os.path.exists(ceph_dir):
59 os.mkdir(ceph_dir)
60 apt_install('ceph-common', fatal=True)
61
62
63def rbd_exists(service, pool, rbd_img):
64 ''' Check to see if a RADOS block device exists '''
65 try:
66 out = check_output(['rbd', 'list', '--id', service,
67 '--pool', pool])
68 except CalledProcessError:
69 return False
70 else:
71 return rbd_img in out
72
73
74def create_rbd_image(service, pool, image, sizemb):
75 ''' Create a new RADOS block device '''
76 cmd = [
77 'rbd',
78 'create',
79 image,
80 '--size',
81 str(sizemb),
82 '--id',
83 service,
84 '--pool',
85 pool
86 ]
87 check_call(cmd)
88
89
90def pool_exists(service, name):
91 ''' Check to see if a RADOS pool already exists '''
92 try:
93 out = check_output(['rados', '--id', service, 'lspools'])
94 except CalledProcessError:
95 return False
96 else:
97 return name in out
98
99
100def get_osds(service):
101 '''
102 Return a list of all Ceph Object Storage Daemons
103 currently in the cluster
104 '''
105 version = ceph_version()
106 if version and version >= '0.56':
107 return json.loads(check_output(['ceph', '--id', service,
108 'osd', 'ls', '--format=json']))
109 else:
110 return None
111
112
113def create_pool(service, name, replicas=2):
114 ''' Create a new RADOS pool '''
115 if pool_exists(service, name):
116 log("Ceph pool {} already exists, skipping creation".format(name),
117 level=WARNING)
118 return
119 # Calculate the number of placement groups based
120 # on upstream recommended best practices.
121 osds = get_osds(service)
122 if osds:
123 pgnum = (len(osds) * 100 / replicas)
124 else:
125 # NOTE(james-page): Default to 200 for older ceph versions
126 # which don't support OSD query from cli
127 pgnum = 200
128 cmd = [
129 'ceph', '--id', service,
130 'osd', 'pool', 'create',
131 name, str(pgnum)
132 ]
133 check_call(cmd)
134 cmd = [
135 'ceph', '--id', service,
136 'osd', 'pool', 'set', name,
137 'size', str(replicas)
138 ]
139 check_call(cmd)
140
141
142def delete_pool(service, name):
143 ''' Delete a RADOS pool from ceph '''
144 cmd = [
145 'ceph', '--id', service,
146 'osd', 'pool', 'delete',
147 name, '--yes-i-really-really-mean-it'
148 ]
149 check_call(cmd)
150
151
152def _keyfile_path(service):
153 return KEYFILE.format(service)
154
155
156def _keyring_path(service):
157 return KEYRING.format(service)
158
159
160def create_keyring(service, key):
161 ''' Create a new Ceph keyring containing key'''
162 keyring = _keyring_path(service)
163 if os.path.exists(keyring):
164 log('ceph: Keyring exists at %s.' % keyring, level=WARNING)
165 return
166 cmd = [
167 'ceph-authtool',
168 keyring,
169 '--create-keyring',
170 '--name=client.{}'.format(service),
171 '--add-key={}'.format(key)
172 ]
173 check_call(cmd)
174 log('ceph: Created new ring at %s.' % keyring, level=INFO)
175
176
177def create_key_file(service, key):
178 ''' Create a file containing key '''
179 keyfile = _keyfile_path(service)
180 if os.path.exists(keyfile):
181 log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)
182 return
183 with open(keyfile, 'w') as fd:
184 fd.write(key)
185 log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
186
187
188def get_ceph_nodes():
189 ''' Query named relation 'ceph' to determine current nodes '''
190 hosts = []
191 for r_id in relation_ids('ceph'):
192 for unit in related_units(r_id):
193 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
194 return hosts
195
196
197def configure(service, key, auth):
198 ''' Perform basic configuration of Ceph '''
199 create_keyring(service, key)
200 create_key_file(service, key)
201 hosts = get_ceph_nodes()
202 with open('/etc/ceph/ceph.conf', 'w') as ceph_conf:
203 ceph_conf.write(CEPH_CONF.format(auth=auth,
204 keyring=_keyring_path(service),
205 mon_hosts=",".join(map(str, hosts))))
206 modprobe('rbd')
207
208
209def image_mapped(name):
210 ''' Determine whether a RADOS block device is mapped locally '''
211 try:
212 out = check_output(['rbd', 'showmapped'])
213 except CalledProcessError:
214 return False
215 else:
216 return name in out
217
218
219def map_block_storage(service, pool, image):
220 ''' Map a RADOS block device for local use '''
221 cmd = [
222 'rbd',
223 'map',
224 '{}/{}'.format(pool, image),
225 '--user',
226 service,
227 '--secret',
228 _keyfile_path(service),
229 ]
230 check_call(cmd)
231
232
233def filesystem_mounted(fs):
234 ''' Determine whether a filesystem is already mounted '''
235 return fs in [f for f, m in mounts()]
236
237
238def make_filesystem(blk_device, fstype='ext4', timeout=10):
239 ''' Make a new filesystem on the specified block device '''
240 count = 0
241 e_noent = os.errno.ENOENT
242 while not os.path.exists(blk_device):
243 if count >= timeout:
244 log('ceph: gave up waiting on block device %s' % blk_device,
245 level=ERROR)
246 raise IOError(e_noent, os.strerror(e_noent), blk_device)
247 log('ceph: waiting for block device %s to appear' % blk_device,
248 level=INFO)
249 count += 1
250 time.sleep(1)
251 else:
252 log('ceph: Formatting block device %s as filesystem %s.' %
253 (blk_device, fstype), level=INFO)
254 check_call(['mkfs', '-t', fstype, blk_device])
255
256
257def place_data_on_block_device(blk_device, data_src_dst):
258 ''' Migrate data in data_src_dst to blk_device and then remount '''
259 # mount block device into /mnt
260 mount(blk_device, '/mnt')
261 # copy data to /mnt
262 copy_files(data_src_dst, '/mnt')
263 # umount block device
264 umount('/mnt')
265 # Grab user/group ID's from original source
266 _dir = os.stat(data_src_dst)
267 uid = _dir.st_uid
268 gid = _dir.st_gid
269 # re-mount where the data should originally be
270 # TODO: persist is currently a NO-OP in core.host
271 mount(blk_device, data_src_dst, persist=True)
272 # ensure original ownership of new mount.
273 os.chown(data_src_dst, uid, gid)
274
275
276# TODO: re-use
277def modprobe(module):
278 ''' Load a kernel module and configure for auto-load on reboot '''
279 log('ceph: Loading kernel module', level=INFO)
280 cmd = ['modprobe', module]
281 check_call(cmd)
282 with open('/etc/modules', 'r+') as modules:
283 if module not in modules.read():
284 modules.write(module)
285
286
287def copy_files(src, dst, symlinks=False, ignore=None):
288 ''' Copy files from src to dst '''
289 for item in os.listdir(src):
290 s = os.path.join(src, item)
291 d = os.path.join(dst, item)
292 if os.path.isdir(s):
293 shutil.copytree(s, d, symlinks, ignore)
294 else:
295 shutil.copy2(s, d)
296
297
298def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
299 blk_device, fstype, system_services=[]):
300 """
301 NOTE: This function must only be called from a single service unit for
302 the same rbd_img otherwise data loss will occur.
303
304 Ensures given pool and RBD image exists, is mapped to a block device,
305 and the device is formatted and mounted at the given mount_point.
306
307 If formatting a device for the first time, data existing at mount_point
308 will be migrated to the RBD device before being re-mounted.
309
310 All services listed in system_services will be stopped prior to data
311 migration and restarted when complete.
312 """
313 # Ensure pool, RBD image, RBD mappings are in place.
314 if not pool_exists(service, pool):
315 log('ceph: Creating new pool {}.'.format(pool))
316 create_pool(service, pool)
317
318 if not rbd_exists(service, pool, rbd_img):
319 log('ceph: Creating RBD image ({}).'.format(rbd_img))
320 create_rbd_image(service, pool, rbd_img, sizemb)
321
322 if not image_mapped(rbd_img):
323 log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))
324 map_block_storage(service, pool, rbd_img)
325
326 # make file system
327 # TODO: What happens if for whatever reason this is run again and
328 # the data is already in the rbd device and/or is mounted??
329 # When it is mounted already, it will fail to make the fs
330 # XXX: This is really sketchy! Need to at least add an fstab entry
331 # otherwise this hook will blow away existing data if its executed
332 # after a reboot.
333 if not filesystem_mounted(mount_point):
334 make_filesystem(blk_device, fstype)
335
336 for svc in system_services:
337 if service_running(svc):
338 log('ceph: Stopping services {} prior to migrating data.'
339 .format(svc))
340 service_stop(svc)
341
342 place_data_on_block_device(blk_device, mount_point)
343
344 for svc in system_services:
345 log('ceph: Starting service {} after migrating data.'
346 .format(svc))
347 service_start(svc)
348
349
350def ensure_ceph_keyring(service, user=None, group=None):
351 '''
352 Ensures a ceph keyring is created for a named service
353 and optionally ensures user and group ownership.
354
355 Returns False if no ceph key is available in relation state.
356 '''
357 key = None
358 for rid in relation_ids('ceph'):
359 for unit in related_units(rid):
360 key = relation_get('key', rid=rid, unit=unit)
361 if key:
362 break
363 if not key:
364 return False
365 create_keyring(service=service, key=key)
366 keyring = _keyring_path(service)
367 if user and group:
368 check_call(['chown', '%s.%s' % (user, group), keyring])
369 return True
370
371
372def ceph_version():
373 ''' Retrieve the local version of ceph '''
374 if os.path.exists('/usr/bin/ceph'):
375 cmd = ['ceph', '-v']
376 output = check_output(cmd)
377 output = output.split()
378 if len(output) > 3:
379 return output[2]
380 else:
381 return None
382 else:
383 return None
0384
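The `create_pool` helper above sizes placement groups using the upstream rule of thumb: 100 PGs per OSD, divided by the replica count, falling back to a fixed 200 when the OSD list cannot be queried (older Ceph without `osd ls --format=json`). A minimal sketch of that calculation, detached from any Ceph tooling (`calculate_pgnum` is an illustrative name, not part of charmhelpers):

```python
def calculate_pgnum(osd_count, replicas=2, fallback=200):
    """Placement-group count per the heuristic in create_pool():
    100 PGs per OSD spread across replicas, or a fixed fallback
    when the OSD count is unknown."""
    if not osd_count:
        return fallback
    return (osd_count * 100) // replicas

# e.g. a 6-OSD cluster with the default 2 replicas
print(calculate_pgnum(6))     # 300
print(calculate_pgnum(None))  # 200
```

Note the charm code does the division with `/`, which is integer division under the Python 2 this charm targets; the sketch uses `//` so the result is the same on Python 3.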
=== added file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
--- hooks/charmhelpers/contrib/storage/linux/loopback.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2016-02-12 04:16:45 +0000
@@ -0,0 +1,62 @@
1
2import os
3import re
4
5from subprocess import (
6 check_call,
7 check_output,
8)
9
10
11##################################################
12# loopback device helpers.
13##################################################
14def loopback_devices():
15 '''
16 Parse through 'losetup -a' output to determine currently mapped
17 loopback devices. Output is expected to look like:
18
19 /dev/loop0: [0807]:961814 (/tmp/my.img)
20
21 :returns: dict: a dict mapping {loopback_dev: backing_file}
22 '''
23 loopbacks = {}
24 cmd = ['losetup', '-a']
25 devs = [d.strip().split(' ') for d in
26 check_output(cmd).splitlines() if d != '']
27 for dev, _, f in devs:
28 loopbacks[dev.replace(':', '')] = re.search('\((\S+)\)', f).groups()[0]
29 return loopbacks
30
31
32def create_loopback(file_path):
33 '''
34 Create a loopback device for a given backing file.
35
36 :returns: str: Full path to new loopback device (eg, /dev/loop0)
37 '''
38 file_path = os.path.abspath(file_path)
39 check_call(['losetup', '--find', file_path])
40 for d, f in loopback_devices().iteritems():
41 if f == file_path:
42 return d
43
44
45def ensure_loopback_device(path, size):
46 '''
47 Ensure a loopback device exists for a given backing file path and size.
48 If a loopback device is not mapped to the file, a new one will be created.
49
50 TODO: Confirm size of found loopback device.
51
52 :returns: str: Full path to the ensured loopback device (eg, /dev/loop0)
53 '''
54 for d, f in loopback_devices().iteritems():
55 if f == path:
56 return d
57
58 if not os.path.exists(path):
59 cmd = ['truncate', '--size', size, path]
60 check_call(cmd)
61
62 return create_loopback(path)
063
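`loopback_devices()` above shells out to `losetup -a` and scrapes the device-to-backing-file mapping from its output. The parsing step can be exercised on its own against canned output (`parse_losetup` is an illustrative name; the charm helper combines this with the `check_output` call):

```python
import re

def parse_losetup(output):
    """Parse 'losetup -a' lines of the form
    '/dev/loop0: [0807]:961814 (/tmp/my.img)'
    into {device: backing_file}, mirroring loopback_devices()."""
    loopbacks = {}
    for line in output.splitlines():
        if not line.strip():
            continue
        dev, _, rest = line.strip().split(' ', 2)
        # strip the trailing ':' from the device, pull the path out of '(...)'
        loopbacks[dev.rstrip(':')] = re.search(r'\((\S+)\)', rest).group(1)
    return loopbacks

sample = ("/dev/loop0: [0807]:961814 (/tmp/my.img)\n"
          "/dev/loop1: [0807]:961815 (/srv/data.img)")
print(parse_losetup(sample))
```

One caveat worth keeping in mind: this format (and `iteritems()` in the helper) is Python-2/old-util-linux specific; newer `losetup` output adds extra columns, which is why the split takes at most three fields.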
=== added file 'hooks/charmhelpers/contrib/storage/linux/lvm.py'
--- hooks/charmhelpers/contrib/storage/linux/lvm.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2016-02-12 04:16:45 +0000
@@ -0,0 +1,88 @@
1from subprocess import (
2 CalledProcessError,
3 check_call,
4 check_output,
5 Popen,
6 PIPE,
7)
8
9
10##################################################
11# LVM helpers.
12##################################################
13def deactivate_lvm_volume_group(block_device):
14 '''
15 Deactivate any volume group associated with an LVM physical volume.
16
17 :param block_device: str: Full path to LVM physical volume
18 '''
19 vg = list_lvm_volume_group(block_device)
20 if vg:
21 cmd = ['vgchange', '-an', vg]
22 check_call(cmd)
23
24
25def is_lvm_physical_volume(block_device):
26 '''
27 Determine whether a block device is initialized as an LVM PV.
28
29 :param block_device: str: Full path of block device to inspect.
30
31 :returns: boolean: True if block device is a PV, False if not.
32 '''
33 try:
34 check_output(['pvdisplay', block_device])
35 return True
36 except CalledProcessError:
37 return False
38
39
40def remove_lvm_physical_volume(block_device):
41 '''
42 Remove LVM PV signatures from a given block device.
43
44 :param block_device: str: Full path of block device to scrub.
45 '''
46 p = Popen(['pvremove', '-ff', block_device],
47 stdin=PIPE)
48 p.communicate(input='y\n')
49
50
51def list_lvm_volume_group(block_device):
52 '''
53 List LVM volume group associated with a given block device.
54
55 Assumes block device is a valid LVM PV.
56
57 :param block_device: str: Full path of block device to inspect.
58
59 :returns: str: Name of volume group associated with block device or None
60 '''
61 vg = None
62 pvd = check_output(['pvdisplay', block_device]).splitlines()
63 for l in pvd:
64 if l.strip().startswith('VG Name'):
65 vg = ' '.join(l.split()).split(' ').pop()
66 return vg
67
68
69def create_lvm_physical_volume(block_device):
70 '''
71 Initialize a block device as an LVM physical volume.
72
73 :param block_device: str: Full path of block device to initialize.
74
75 '''
76 check_call(['pvcreate', block_device])
77
78
79def create_lvm_volume_group(volume_group, block_device):
80 '''
81 Create an LVM volume group backed by a given block device.
82
83 Assumes block device has already been initialized as an LVM PV.
84
85 :param volume_group: str: Name of volume group to create.
86 :block_device: str: Full path of PV-initialized block device.
87 '''
88 check_call(['vgcreate', volume_group, block_device])
089
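`list_lvm_volume_group` above finds the volume group by scanning `pvdisplay` output for the `VG Name` line; the chained `' '.join(l.split()).split(' ').pop()` is just a roundabout way of taking the last whitespace-separated token. A standalone sketch of that parsing against canned output (`vg_from_pvdisplay` is an illustrative name):

```python
def vg_from_pvdisplay(pvdisplay_output):
    """Extract the 'VG Name' value from pvdisplay output,
    as list_lvm_volume_group() does."""
    for line in pvdisplay_output.splitlines():
        if line.strip().startswith('VG Name'):
            # '  VG Name   myvg' -> 'myvg' (last token on the line)
            return line.split()[-1]
    return None

sample = """  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               cinder-volumes
  PV Size               10.00 GiB"""
print(vg_from_pvdisplay(sample))   # cinder-volumes
```

Like the helper, this assumes the PV is already assigned to a VG; on an unassigned PV `pvdisplay` prints a bare `VG Name` line and the last-token trick would return the wrong string.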
=== added file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
--- hooks/charmhelpers/contrib/storage/linux/utils.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2016-02-12 04:16:45 +0000
@@ -0,0 +1,25 @@
1from os import stat
2from stat import S_ISBLK
3
4from subprocess import (
5 check_call
6)
7
8
9def is_block_device(path):
10 '''
11 Confirm device at path is a valid block device node.
12
13 :returns: boolean: True if path is a block device, False if not.
14 '''
15 return S_ISBLK(stat(path).st_mode)
16
17
18def zap_disk(block_device):
19 '''
20 Clear a block device of its partition table. Relies on sgdisk, which is
21 installed as part of the 'gdisk' package in Ubuntu.
22
23 :param block_device: str: Full path of block device to clean.
24 '''
25 check_call(['sgdisk', '--zap-all', '--mbrtogpt', block_device])
026
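The `is_block_device` check above is a thin wrapper over the standard mode-bit test: `stat()` the path and ask `S_ISBLK` whether the file-type bits mark a block device node. The same check, self-contained:

```python
import os
import stat

def is_block_device(path):
    """True if path is a block device node, via the S_ISBLK
    mode-bit test (same logic as the helper above)."""
    return stat.S_ISBLK(os.stat(path).st_mode)

# Character devices and regular files are not block devices:
print(is_block_device('/dev/null'))   # False on Linux (char device)
```

`S_ISBLK` distinguishes block devices (e.g. `/dev/sda`) from character devices like `/dev/null`, which is why `zap_disk` callers should gate on this check before handing a path to `sgdisk`.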
=== added directory 'hooks/charmhelpers/contrib/templating'
=== added file 'hooks/charmhelpers/contrib/templating/__init__.py'
=== added file 'hooks/charmhelpers/contrib/templating/contexts.py'
--- hooks/charmhelpers/contrib/templating/contexts.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/templating/contexts.py 2016-02-12 04:16:45 +0000
@@ -0,0 +1,73 @@
1# Copyright 2013 Canonical Ltd.
2#
3# Authors:
4# Charm Helpers Developers <juju@lists.ubuntu.com>
5"""A helper to create a yaml cache of config with namespaced relation data."""
6import os
7import yaml
8
9import charmhelpers.core.hookenv
10
11
12charm_dir = os.environ.get('CHARM_DIR', '')
13
14
15def juju_state_to_yaml(yaml_path, namespace_separator=':',
16 allow_hyphens_in_keys=True):
17 """Update the juju config and state in a yaml file.
18
19 This includes any current relation-get data, and the charm
20 directory.
21
22 This function was created for the ansible and saltstack
23 support, as those libraries can use a yaml file to supply
24 context to templates, but it may be useful generally to
25 create and update an on-disk cache of all the config, including
26 previous relation data.
27
28 By default, hyphens are allowed in keys as this is supported
29 by yaml, but for tools like ansible, hyphens are not valid [1].
30
31 [1] http://www.ansibleworks.com/docs/playbooks_variables.html#what-makes-a-valid-variable-name
32 """
33 config = charmhelpers.core.hookenv.config()
34
35 # Add the charm_dir which we will need to refer to charm
36 # file resources etc.
37 config['charm_dir'] = charm_dir
38 config['local_unit'] = charmhelpers.core.hookenv.local_unit()
39
40 # Add any relation data prefixed with the relation type.
41 relation_type = charmhelpers.core.hookenv.relation_type()
42 if relation_type is not None:
43 relation_data = charmhelpers.core.hookenv.relation_get()
44 relation_data = dict(
45 ("{relation_type}{namespace_separator}{key}".format(
46 relation_type=relation_type.replace('-', '_'),
47 key=key,
48 namespace_separator=namespace_separator), val)
49 for key, val in relation_data.items())
50 config.update(relation_data)
51
52 # Don't use non-standard tags for unicode which will not
53 # work when salt uses yaml.load_safe.
54 yaml.add_representer(unicode, lambda dumper,
55 value: dumper.represent_scalar(
56 u'tag:yaml.org,2002:str', value))
57
58 yaml_dir = os.path.dirname(yaml_path)
59 if not os.path.exists(yaml_dir):
60 os.makedirs(yaml_dir)
61
62 if os.path.exists(yaml_path):
63 with open(yaml_path, "r") as existing_vars_file:
64 existing_vars = yaml.load(existing_vars_file.read())
65 else:
66 existing_vars = {}
67
68 if not allow_hyphens_in_keys:
69 config = dict(
70 (key.replace('-', '_'), val) for key, val in config.items())
71 existing_vars.update(config)
72 with open(yaml_path, "w+") as fp:
73 fp.write(yaml.dump(existing_vars))
074
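The key transformation in `juju_state_to_yaml` above is the namespacing of relation data: each relation key is prefixed with the relation type (hyphens converted to underscores) before being merged into the config cache, so e.g. a `reverse-proxy` relation's `host` becomes `reverse_proxy:host`. That step in isolation (`namespace_relation_data` is an illustrative name):

```python
def namespace_relation_data(relation_type, relation_data, separator=':'):
    """Prefix relation keys with the relation type, as
    juju_state_to_yaml() does before merging into the config cache."""
    return dict(
        ('{}{}{}'.format(relation_type.replace('-', '_'), separator, key), val)
        for key, val in relation_data.items()
    )

print(namespace_relation_data('website', {'hostname': '10.0.0.2', 'port': '80'}))
```

The separator is configurable for the same reason as in the helper: ansible variable names cannot contain `:` or `-`, so callers targeting ansible pass a different separator and set `allow_hyphens_in_keys=False`.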
=== added file 'hooks/charmhelpers/contrib/templating/pyformat.py'
--- hooks/charmhelpers/contrib/templating/pyformat.py 1970-01-01 00:00:00 +0000
+++ hooks/charmhelpers/contrib/templating/pyformat.py 2016-02-12 04:16:45 +0000
@@ -0,0 +1,13 @@
1'''
2Templating using standard Python str.format() method.
3'''
4
5from charmhelpers.core import hookenv
6
7
8def render(template, extra={}, **kwargs):
9 """Return the template rendered using Python's str.format()."""
10 context = hookenv.execution_environment()
11 context.update(extra)
12 context.update(kwargs)
13 return template.format(**context)
014
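`pyformat.render` above is a small merge-then-format helper: start from the hook execution environment, layer `extra` and keyword overrides on top, and pass the result to `str.format`. A sketch of the same pattern with the context passed explicitly (so it runs outside a hook; the signature otherwise mirrors the helper):

```python
def render(template, context, extra=None, **kwargs):
    """Render template with str.format(), merging a base context
    with 'extra' and keyword overrides -- the pyformat.render()
    pattern, minus the hookenv.execution_environment() lookup."""
    merged = dict(context)
    merged.update(extra or {})
    merged.update(kwargs)
    return template.format(**merged)

print(render('{unit} listens on {addr}:{port}',
             {'unit': 'haproxy/0', 'addr': '10.0.0.5'}, port=80))
# haproxy/0 listens on 10.0.0.5:80
```

Later updates win: a key in `kwargs` overrides the same key in `extra`, which overrides the base context.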
=== modified file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py 2013-08-21 19:14:32 +0000
+++ hooks/charmhelpers/core/hookenv.py 2016-02-12 04:16:45 +0000
@@ -9,6 +9,7 @@
9import yaml9import yaml
10import subprocess10import subprocess
11import UserDict11import UserDict
12from subprocess import CalledProcessError
1213
13CRITICAL = "CRITICAL"14CRITICAL = "CRITICAL"
14ERROR = "ERROR"15ERROR = "ERROR"
@@ -21,7 +22,7 @@
2122
2223
23def cached(func):24def cached(func):
24 ''' Cache return values for multiple executions of func + args25 """Cache return values for multiple executions of func + args
2526
26 For example:27 For example:
2728
@@ -32,7 +33,7 @@
32 unit_get('test')33 unit_get('test')
3334
34 will cache the result of unit_get + 'test' for future calls.35 will cache the result of unit_get + 'test' for future calls.
35 '''36 """
36 def wrapper(*args, **kwargs):37 def wrapper(*args, **kwargs):
37 global cache38 global cache
38 key = str((func, args, kwargs))39 key = str((func, args, kwargs))
@@ -46,8 +47,8 @@
4647
4748
48def flush(key):49def flush(key):
49 ''' Flushes any entries from function cache where the50 """Flushes any entries from function cache where the
50 key is found in the function+args '''51 key is found in the function+args """
51 flush_list = []52 flush_list = []
52 for item in cache:53 for item in cache:
53 if key in item:54 if key in item:
@@ -57,7 +58,7 @@
5758
5859
59def log(message, level=None):60def log(message, level=None):
60 "Write a message to the juju log"61 """Write a message to the juju log"""
61 command = ['juju-log']62 command = ['juju-log']
62 if level:63 if level:
63 command += ['-l', level]64 command += ['-l', level]
@@ -66,7 +67,7 @@
6667
6768
68class Serializable(UserDict.IterableUserDict):69class Serializable(UserDict.IterableUserDict):
69 "Wrapper, an object that can be serialized to yaml or json"70 """Wrapper, an object that can be serialized to yaml or json"""
7071
71 def __init__(self, obj):72 def __init__(self, obj):
72 # wrap the object73 # wrap the object
@@ -96,11 +97,11 @@
96 self.data = state97 self.data = state
9798
98 def json(self):99 def json(self):
99 "Serialize the object to json"100 """Serialize the object to json"""
100 return json.dumps(self.data)101 return json.dumps(self.data)
101102
102 def yaml(self):103 def yaml(self):
103 "Serialize the object to yaml"104 """Serialize the object to yaml"""
104 return yaml.dump(self.data)105 return yaml.dump(self.data)
105106
106107
@@ -119,38 +120,38 @@
119120
120121
121def in_relation_hook():122def in_relation_hook():
122 "Determine whether we're running in a relation hook"123 """Determine whether we're running in a relation hook"""
123 return 'JUJU_RELATION' in os.environ124 return 'JUJU_RELATION' in os.environ
124125
125126
126def relation_type():127def relation_type():
127 "The scope for the current relation hook"128 """The scope for the current relation hook"""
128 return os.environ.get('JUJU_RELATION', None)129 return os.environ.get('JUJU_RELATION', None)
129130
130131
131def relation_id():132def relation_id():
132 "The relation ID for the current relation hook"133 """The relation ID for the current relation hook"""
133 return os.environ.get('JUJU_RELATION_ID', None)134 return os.environ.get('JUJU_RELATION_ID', None)
134135
135136
136def local_unit():137def local_unit():
137 "Local unit ID"138 """Local unit ID"""
138 return os.environ['JUJU_UNIT_NAME']139 return os.environ['JUJU_UNIT_NAME']
139140
140141
141def remote_unit():142def remote_unit():
142 "The remote unit for the current relation hook"143 """The remote unit for the current relation hook"""
143 return os.environ['JUJU_REMOTE_UNIT']144 return os.environ['JUJU_REMOTE_UNIT']
144145
145146
146def service_name():147def service_name():
147 "The name service group this unit belongs to"148 """The name service group this unit belongs to"""
148 return local_unit().split('/')[0]149 return local_unit().split('/')[0]
149150
150151
151@cached152@cached
152def config(scope=None):153def config(scope=None):
153 "Juju charm configuration"154 """Juju charm configuration"""
154 config_cmd_line = ['config-get']155 config_cmd_line = ['config-get']
155 if scope is not None:156 if scope is not None:
156 config_cmd_line.append(scope)157 config_cmd_line.append(scope)
@@ -163,6 +164,7 @@
163164
164@cached165@cached
165def relation_get(attribute=None, unit=None, rid=None):166def relation_get(attribute=None, unit=None, rid=None):
167 """Get relation information"""
166 _args = ['relation-get', '--format=json']168 _args = ['relation-get', '--format=json']
167 if rid:169 if rid:
168 _args.append('-r')170 _args.append('-r')
@@ -174,9 +176,14 @@
174 return json.loads(subprocess.check_output(_args))176 return json.loads(subprocess.check_output(_args))
175 except ValueError:177 except ValueError:
176 return None178 return None
179 except CalledProcessError, e:
180 if e.returncode == 2:
181 return None
182 raise
177183
178184
179def relation_set(relation_id=None, relation_settings={}, **kwargs):185def relation_set(relation_id=None, relation_settings={}, **kwargs):
186 """Set relation information for the current unit"""
180 relation_cmd_line = ['relation-set']187 relation_cmd_line = ['relation-set']
181 if relation_id is not None:188 if relation_id is not None:
182 relation_cmd_line.extend(('-r', relation_id))189 relation_cmd_line.extend(('-r', relation_id))
@@ -192,7 +199,7 @@
192199
193@cached200@cached
194def relation_ids(reltype=None):201def relation_ids(reltype=None):
195 "A list of relation_ids"202 """A list of relation_ids"""
196 reltype = reltype or relation_type()203 reltype = reltype or relation_type()
197 relid_cmd_line = ['relation-ids', '--format=json']204 relid_cmd_line = ['relation-ids', '--format=json']
198 if reltype is not None:205 if reltype is not None:
@@ -203,7 +210,7 @@
203210
204@cached211@cached
205def related_units(relid=None):212def related_units(relid=None):
206 "A list of related units"213 """A list of related units"""
207 relid = relid or relation_id()214 relid = relid or relation_id()
208 units_cmd_line = ['relation-list', '--format=json']215 units_cmd_line = ['relation-list', '--format=json']
209 if relid is not None:216 if relid is not None:
@@ -213,7 +220,7 @@
213220
214@cached221@cached
215def relation_for_unit(unit=None, rid=None):222def relation_for_unit(unit=None, rid=None):
216 "Get the json representation of a unit's relation"223 """Get the json representation of a unit's relation"""
217 unit = unit or remote_unit()224 unit = unit or remote_unit()
218 relation = relation_get(unit=unit, rid=rid)225 relation = relation_get(unit=unit, rid=rid)
219 for key in relation:226 for key in relation:
@@ -225,7 +232,7 @@
225232
226@cached233@cached
227def relations_for_id(relid=None):234def relations_for_id(relid=None):
228 "Get relations of a specific relation ID"235 """Get relations of a specific relation ID"""
229 relation_data = []236 relation_data = []
230 relid = relid or relation_ids()237 relid = relid or relation_ids()
231 for unit in related_units(relid):238 for unit in related_units(relid):
@@ -237,7 +244,7 @@
237244
238@cached245@cached
239def relations_of_type(reltype=None):246def relations_of_type(reltype=None):
240 "Get relations of a specific type"247 """Get relations of a specific type"""
241 relation_data = []248 relation_data = []
242 reltype = reltype or relation_type()249 reltype = reltype or relation_type()
243 for relid in relation_ids(reltype):250 for relid in relation_ids(reltype):
@@ -249,7 +256,7 @@
249256
250@cached257@cached
251def relation_types():258def relation_types():
252 "Get a list of relation types supported by this charm"259 """Get a list of relation types supported by this charm"""
253 charmdir = os.environ.get('CHARM_DIR', '')260 charmdir = os.environ.get('CHARM_DIR', '')
254 mdf = open(os.path.join(charmdir, 'metadata.yaml'))261 mdf = open(os.path.join(charmdir, 'metadata.yaml'))
255 md = yaml.safe_load(mdf)262 md = yaml.safe_load(mdf)
@@ -264,6 +271,7 @@
264271
265@cached272@cached
266def relations():273def relations():
274 """Get a nested dictionary of relation data for all related units"""
267 rels = {}275 rels = {}
268 for reltype in relation_types():276 for reltype in relation_types():
269 relids = {}277 relids = {}
@@ -277,15 +285,35 @@
277 return rels285 return rels
278286
279287
288@cached
289def is_relation_made(relation, keys='private-address'):
290 '''
291 Determine whether a relation is established by checking for
292 presence of key(s). If a list of keys is provided, they
293 must all be present for the relation to be identified as made
294 '''
295 if isinstance(keys, str):
296 keys = [keys]
297 for r_id in relation_ids(relation):
298 for unit in related_units(r_id):
299 context = {}
300 for k in keys:
301 context[k] = relation_get(k, rid=r_id,
302 unit=unit)
303 if None not in context.values():
304 return True
305 return False
306
307
280def open_port(port, protocol="TCP"):308def open_port(port, protocol="TCP"):
281 "Open a service network port"309 """Open a service network port"""
282 _args = ['open-port']310 _args = ['open-port']
283 _args.append('{}/{}'.format(port, protocol))311 _args.append('{}/{}'.format(port, protocol))
284 subprocess.check_call(_args)312 subprocess.check_call(_args)
285313
286314
287def close_port(port, protocol="TCP"):315def close_port(port, protocol="TCP"):
288 "Close a service network port"316 """Close a service network port"""
289 _args = ['close-port']317 _args = ['close-port']
290 _args.append('{}/{}'.format(port, protocol))318 _args.append('{}/{}'.format(port, protocol))
291 subprocess.check_call(_args)319 subprocess.check_call(_args)
@@ -293,6 +321,7 @@
293321
294@cached322@cached
295def unit_get(attribute):323def unit_get(attribute):
324 """Get an attribute of the local unit via unit-get"""
296 _args = ['unit-get', '--format=json', attribute]325 _args = ['unit-get', '--format=json', attribute]
297 try:326 try:
298 return json.loads(subprocess.check_output(_args))327 return json.loads(subprocess.check_output(_args))
@@ -301,22 +330,46 @@
301330
302331
303def unit_private_ip():332def unit_private_ip():
333 """Get this unit's private IP address"""
304 return unit_get('private-address')334 return unit_get('private-address')
305335
306336
307class UnregisteredHookError(Exception):337class UnregisteredHookError(Exception):
338 """Raised when an undefined hook is called"""
308 pass339 pass
309340
310341
311class Hooks(object):342class Hooks(object):
343 """A convenient handler for hook functions.
344
345 Example:
346 hooks = Hooks()
347
348 # register a hook, taking its name from the function name
349 @hooks.hook()
350 def install():
351 ...
352
353 # register a hook, providing a custom hook name
354 @hooks.hook("config-changed")
355 def config_changed():
356 ...
357
358 if __name__ == "__main__":
359 # execute a hook based on the name the program is called by
360 hooks.execute(sys.argv)
361 """
362
312 def __init__(self):363 def __init__(self):
313 super(Hooks, self).__init__()364 super(Hooks, self).__init__()
314 self._hooks = {}365 self._hooks = {}
315366
316 def register(self, name, function):367 def register(self, name, function):
368 """Register a hook"""
317 self._hooks[name] = function369 self._hooks[name] = function
318370
319 def execute(self, args):371 def execute(self, args):
372 """Execute a registered hook based on args[0]"""
320 hook_name = os.path.basename(args[0])373 hook_name = os.path.basename(args[0])
321 if hook_name in self._hooks:374 if hook_name in self._hooks:
322 self._hooks[hook_name]()375 self._hooks[hook_name]()
@@ -324,6 +377,7 @@
324 raise UnregisteredHookError(hook_name)377 raise UnregisteredHookError(hook_name)
325378
326 def hook(self, *hook_names):379 def hook(self, *hook_names):
380 """Decorator, registering the decorated function under the given hook names"""
327 def wrapper(decorated):381 def wrapper(decorated):
328 for hook_name in hook_names:382 for hook_name in hook_names:
329 self.register(hook_name, decorated)383 self.register(hook_name, decorated)
@@ -337,4 +391,5 @@
337391
338392
339def charm_dir():393def charm_dir():
394 """Return the root directory of the current charm"""
340 return os.environ.get('CHARM_DIR')395 return os.environ.get('CHARM_DIR')
341396
=== modified file 'hooks/charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py 2013-08-21 19:14:32 +0000
+++ hooks/charmhelpers/core/host.py 2016-02-12 04:16:45 +0000
@@ -19,28 +19,36 @@
1919
2020
21def service_start(service_name):21def service_start(service_name):
22 service('start', service_name)22 """Start a system service"""
23 return service('start', service_name)
2324
2425
25def service_stop(service_name):26def service_stop(service_name):
26 service('stop', service_name)27 """Stop a system service"""
28 return service('stop', service_name)
2729
2830
29def service_restart(service_name):31def service_restart(service_name):
30 service('restart', service_name)32 """Restart a system service"""
33 return service('restart', service_name)
3134
3235
33def service_reload(service_name, restart_on_failure=False):36def service_reload(service_name, restart_on_failure=False):
34 if not service('reload', service_name) and restart_on_failure:37 """Reload a system service, optionally falling back to restart if reload fails"""
35 service('restart', service_name)38 service_result = service('reload', service_name)
39 if not service_result and restart_on_failure:
40 service_result = service('restart', service_name)
41 return service_result
3642
3743
38def service(action, service_name):44def service(action, service_name):
45 """Control a system service"""
39 cmd = ['service', service_name, action]46 cmd = ['service', service_name, action]
40 return subprocess.call(cmd) == 047 return subprocess.call(cmd) == 0
4148
4249
 def service_running(service):
+    """Determine whether a system service is running"""
     try:
         output = subprocess.check_output(['service', service, 'status'])
     except subprocess.CalledProcessError:
@@ -53,7 +61,7 @@
 
 
 def adduser(username, password=None, shell='/bin/bash', system_user=False):
-    """Add a user"""
+    """Add a user to the system"""
     try:
         user_info = pwd.getpwnam(username)
         log('user {0} already exists!'.format(username))
@@ -136,7 +144,7 @@
 
 
 def mount(device, mountpoint, options=None, persist=False):
-    '''Mount a filesystem'''
+    """Mount a filesystem at a particular mountpoint"""
     cmd_args = ['mount']
     if options is not None:
         cmd_args.extend(['-o', options])
@@ -153,7 +161,7 @@
 
 
 def umount(mountpoint, persist=False):
-    '''Unmount a filesystem'''
+    """Unmount a filesystem"""
     cmd_args = ['umount', mountpoint]
     try:
         subprocess.check_output(cmd_args)
@@ -167,7 +175,7 @@
 
 
 def mounts():
-    '''List of all mounted volumes as [[mountpoint,device],[...]]'''
+    """Get a list of all mounted volumes as [[mountpoint,device],[...]]"""
     with open('/proc/mounts') as f:
         # [['/mount/point','/dev/path'],[...]]
         system_mounts = [m[1::-1] for m in [l.strip().split()
@@ -176,7 +184,7 @@
 
 
 def file_hash(path):
-    ''' Generate a md5 hash of the contents of 'path' or None if not found '''
+    """Generate a md5 hash of the contents of 'path' or None if not found """
    if os.path.exists(path):
         h = hashlib.md5()
         with open(path, 'r') as source:
@@ -187,7 +195,7 @@
 
 
 def restart_on_change(restart_map):
-    ''' Restart services based on configuration files changing
+    """Restart services based on configuration files changing
 
     This function is used a decorator, for example
 
@@ -200,7 +208,7 @@
     In this example, the cinder-api and cinder-volume services
     would be restarted if /etc/ceph/ceph.conf is changed by the
     ceph_client_changed function.
-    '''
+    """
     def wrap(f):
         def wrapped_f(*args):
             checksums = {}
@@ -218,7 +226,7 @@
 
 
 def lsb_release():
-    '''Return /etc/lsb-release in a dict'''
+    """Return /etc/lsb-release in a dict"""
     d = {}
     with open('/etc/lsb-release', 'r') as lsb:
         for l in lsb:
@@ -228,7 +236,7 @@
 
 
 def pwgen(length=None):
-    '''Generate a random pasword.'''
+    """Generate a random pasword."""
     if length is None:
         length = random.choice(range(35, 45))
     alphanumeric_chars = [
@@ -237,3 +245,47 @@
     random_chars = [
         random.choice(alphanumeric_chars) for _ in range(length)]
     return(''.join(random_chars))
+
+
+def list_nics(nic_type):
+    '''Return a list of nics of given type(s)'''
+    if isinstance(nic_type, basestring):
+        int_types = [nic_type]
+    else:
+        int_types = nic_type
+    interfaces = []
+    for int_type in int_types:
+        cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
+        ip_output = subprocess.check_output(cmd).split('\n')
+        ip_output = (line for line in ip_output if line)
+        for line in ip_output:
+            if line.split()[1].startswith(int_type):
+                interfaces.append(line.split()[1].replace(":", ""))
+    return interfaces
+
+
+def set_nic_mtu(nic, mtu):
+    '''Set MTU on a network interface'''
+    cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
+    subprocess.check_call(cmd)
+
+
+def get_nic_mtu(nic):
+    cmd = ['ip', 'addr', 'show', nic]
+    ip_output = subprocess.check_output(cmd).split('\n')
+    mtu = ""
+    for line in ip_output:
+        words = line.split()
+        if 'mtu' in words:
+            mtu = words[words.index("mtu") + 1]
+    return mtu
+
+
+def get_nic_hwaddr(nic):
+    cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
+    ip_output = subprocess.check_output(cmd)
+    hwaddr = ""
+    words = ip_output.split()
+    if 'link/ether' in words:
+        hwaddr = words[words.index('link/ether') + 1]
+    return hwaddr
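As review context: the `restart_on_change` docstring above describes the decorator pattern this helper implements. A rough standalone sketch of that behavior (Python 3 illustration only, not the charmhelpers code verbatim: it records restarts in a list instead of invoking `service`, and watches a temp file rather than a real config):

```python
import hashlib
import os
import tempfile

restarted = []  # stands in for actual `service <name> restart` calls


def file_hash(path):
    """md5 of a file's contents, or None if it does not exist."""
    if os.path.exists(path):
        with open(path, 'rb') as source:
            return hashlib.md5(source.read()).hexdigest()
    return None


def restart_on_change(restart_map):
    """Restart the mapped services when a watched file's hash changes."""
    def wrap(f):
        def wrapped_f(*args):
            # hash every watched file before the hook body runs
            checksums = {path: file_hash(path) for path in restart_map}
            f(*args)
            # "restart" services whose config file changed
            for path, services in restart_map.items():
                if file_hash(path) != checksums[path]:
                    restarted.extend(services)
        return wrapped_f
    return wrap


# Usage: the decorated hook rewrites its "config file", triggering restarts.
cfg = tempfile.NamedTemporaryFile(delete=False)
cfg.close()


@restart_on_change({cfg.name: ['cinder-api', 'cinder-volume']})
def ceph_client_changed():
    with open(cfg.name, 'w') as f:
        f.write('new contents')


ceph_client_changed()
print(restarted)  # -> ['cinder-api', 'cinder-volume']
```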
=== modified file 'hooks/charmhelpers/fetch/__init__.py'
--- hooks/charmhelpers/fetch/__init__.py 2013-08-21 19:19:29 +0000
+++ hooks/charmhelpers/fetch/__init__.py 2016-02-12 04:16:45 +0000
@@ -13,6 +13,7 @@
     log,
 )
 import apt_pkg
+import os
 
 CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
 deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
@@ -20,6 +21,40 @@
 PROPOSED_POCKET = """# Proposed
 deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
 """
+CLOUD_ARCHIVE_POCKETS = {
+    # Folsom
+    'folsom': 'precise-updates/folsom',
+    'precise-folsom': 'precise-updates/folsom',
+    'precise-folsom/updates': 'precise-updates/folsom',
+    'precise-updates/folsom': 'precise-updates/folsom',
+    'folsom/proposed': 'precise-proposed/folsom',
+    'precise-folsom/proposed': 'precise-proposed/folsom',
+    'precise-proposed/folsom': 'precise-proposed/folsom',
+    # Grizzly
+    'grizzly': 'precise-updates/grizzly',
+    'precise-grizzly': 'precise-updates/grizzly',
+    'precise-grizzly/updates': 'precise-updates/grizzly',
+    'precise-updates/grizzly': 'precise-updates/grizzly',
+    'grizzly/proposed': 'precise-proposed/grizzly',
+    'precise-grizzly/proposed': 'precise-proposed/grizzly',
+    'precise-proposed/grizzly': 'precise-proposed/grizzly',
+    # Havana
+    'havana': 'precise-updates/havana',
+    'precise-havana': 'precise-updates/havana',
+    'precise-havana/updates': 'precise-updates/havana',
+    'precise-updates/havana': 'precise-updates/havana',
+    'havana/proposed': 'precise-proposed/havana',
+    'precise-havana/proposed': 'precise-proposed/havana',
+    'precise-proposed/havana': 'precise-proposed/havana',
+    # Icehouse
+    'icehouse': 'precise-updates/icehouse',
+    'precise-icehouse': 'precise-updates/icehouse',
+    'precise-icehouse/updates': 'precise-updates/icehouse',
+    'precise-updates/icehouse': 'precise-updates/icehouse',
+    'icehouse/proposed': 'precise-proposed/icehouse',
+    'precise-icehouse/proposed': 'precise-proposed/icehouse',
+    'precise-proposed/icehouse': 'precise-proposed/icehouse',
+}
 
 
 def filter_installed_packages(packages):
@@ -40,8 +75,10 @@
 
 def apt_install(packages, options=None, fatal=False):
     """Install one or more packages"""
-    options = options or []
-    cmd = ['apt-get', '-y']
+    if options is None:
+        options = ['--option=Dpkg::Options::=--force-confold']
+
+    cmd = ['apt-get', '--assume-yes']
     cmd.extend(options)
     cmd.append('install')
     if isinstance(packages, basestring):
@@ -50,10 +87,14 @@
         cmd.extend(packages)
     log("Installing {} with options: {}".format(packages,
                                                 options))
+    env = os.environ.copy()
+    if 'DEBIAN_FRONTEND' not in env:
+        env['DEBIAN_FRONTEND'] = 'noninteractive'
+
     if fatal:
-        subprocess.check_call(cmd)
+        subprocess.check_call(cmd, env=env)
     else:
-        subprocess.call(cmd)
+        subprocess.call(cmd, env=env)
 
 
 def apt_update(fatal=False):
@@ -67,7 +108,7 @@
 
 def apt_purge(packages, fatal=False):
     """Purge one or more packages"""
-    cmd = ['apt-get', '-y', 'purge']
+    cmd = ['apt-get', '--assume-yes', 'purge']
     if isinstance(packages, basestring):
         cmd.append(packages)
     else:
@@ -79,16 +120,37 @@
         subprocess.call(cmd)
 
 
+def apt_hold(packages, fatal=False):
+    """Hold one or more packages"""
+    cmd = ['apt-mark', 'hold']
+    if isinstance(packages, basestring):
+        cmd.append(packages)
+    else:
+        cmd.extend(packages)
+    log("Holding {}".format(packages))
+    if fatal:
+        subprocess.check_call(cmd)
+    else:
+        subprocess.call(cmd)
+
+
 def add_source(source, key=None):
-    if ((source.startswith('ppa:') or
-            source.startswith('http:'))):
+    if (source.startswith('ppa:') or
+            source.startswith('http:') or
+            source.startswith('deb ') or
+            source.startswith('cloud-archive:')):
         subprocess.check_call(['add-apt-repository', '--yes', source])
     elif source.startswith('cloud:'):
         apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
                     fatal=True)
         pocket = source.split(':')[-1]
+        if pocket not in CLOUD_ARCHIVE_POCKETS:
+            raise SourceConfigError(
+                'Unsupported cloud: source option %s' %
+                pocket)
+        actual_pocket = CLOUD_ARCHIVE_POCKETS[pocket]
         with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
-            apt.write(CLOUD_ARCHIVE.format(pocket))
+            apt.write(CLOUD_ARCHIVE.format(actual_pocket))
     elif source == 'proposed':
         release = lsb_release()['DISTRIB_CODENAME']
         with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
@@ -118,8 +180,13 @@
     Note that 'null' (a.k.a. None) should not be quoted.
     """
     sources = safe_load(config(sources_var))
-    keys = safe_load(config(keys_var))
-    if isinstance(sources, basestring) and isinstance(keys, basestring):
+    keys = config(keys_var)
+    if not sources:
+        return
+    if keys is not None:
+        keys = safe_load(keys)
+    if isinstance(sources, basestring) and (
+            keys is None or isinstance(keys, basestring)):
         add_source(sources, keys)
     else:
         if not len(sources) == len(keys):
@@ -172,7 +239,9 @@
 
 
 class BaseFetchHandler(object):
+
     """Base class for FetchHandler implementations in fetch plugins"""
+
     def can_handle(self, source):
         """Returns True if the source can be handled. Otherwise returns
         a string explaining why it cannot"""
@@ -200,10 +269,13 @@
     for handler_name in fetch_handlers:
         package, classname = handler_name.rsplit('.', 1)
         try:
-            handler_class = getattr(importlib.import_module(package), classname)
+            handler_class = getattr(
The diff has been truncated for viewing.
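As review context: the `cloud:` branch added to `add_source` now validates the requested pocket against `CLOUD_ARCHIVE_POCKETS` before writing the sources file. A minimal sketch of that resolution step (trimmed mapping, stand-in `SourceConfigError`, nothing written to disk):

```python
# Trimmed copies of the module-level constants from the patch.
CLOUD_ARCHIVE = "deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main\n"

CLOUD_ARCHIVE_POCKETS = {
    'havana': 'precise-updates/havana',
    'precise-updates/havana': 'precise-updates/havana',
    'havana/proposed': 'precise-proposed/havana',
}


class SourceConfigError(Exception):
    """Stand-in for the fetch module's SourceConfigError."""


def resolve_cloud_source(source):
    """Mirror the validation added before cloud-archive.list is written."""
    pocket = source.split(':')[-1]
    if pocket not in CLOUD_ARCHIVE_POCKETS:
        raise SourceConfigError('Unsupported cloud: source option %s' % pocket)
    return CLOUD_ARCHIVE.format(CLOUD_ARCHIVE_POCKETS[pocket])


print(resolve_cloud_source('cloud:havana'))
# -> deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/havana main

try:
    resolve_cloud_source('cloud:bogus')
except SourceConfigError as e:
    print(e)  # -> Unsupported cloud: source option bogus
```

The point of the table is that several spellings (`havana`, `precise-havana`, `precise-updates/havana`, ...) normalize to one archive pocket, and anything unrecognized fails early instead of writing a broken sources entry.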
