Merge lp:~ev/ubuntu-ci-services-itself/restish-swift-deployment into lp:ubuntu-ci-services-itself

Proposed by Evan
Status: Rejected
Rejected by: Andy Doan
Proposed branch: lp:~ev/ubuntu-ci-services-itself/restish-swift-deployment
Merge into: lp:ubuntu-ci-services-itself
Diff against target: 6569 lines (+6127/-4) (has conflicts)
52 files modified
charms/precise/restish/README.ex (+41/-0)
charms/precise/restish/config.yaml (+90/-0)
charms/precise/restish/hooks/charmhelpers/cli/README.rst (+57/-0)
charms/precise/restish/hooks/charmhelpers/cli/__init__.py (+147/-0)
charms/precise/restish/hooks/charmhelpers/cli/commands.py (+2/-0)
charms/precise/restish/hooks/charmhelpers/cli/host.py (+15/-0)
charms/precise/restish/hooks/charmhelpers/contrib/ansible/__init__.py (+165/-0)
charms/precise/restish/hooks/charmhelpers/contrib/charmhelpers/IMPORT (+4/-0)
charms/precise/restish/hooks/charmhelpers/contrib/charmhelpers/__init__.py (+184/-0)
charms/precise/restish/hooks/charmhelpers/contrib/charmsupport/IMPORT (+14/-0)
charms/precise/restish/hooks/charmhelpers/contrib/charmsupport/nrpe.py (+216/-0)
charms/precise/restish/hooks/charmhelpers/contrib/charmsupport/volumes.py (+156/-0)
charms/precise/restish/hooks/charmhelpers/contrib/hahelpers/apache.py (+58/-0)
charms/precise/restish/hooks/charmhelpers/contrib/hahelpers/cluster.py (+183/-0)
charms/precise/restish/hooks/charmhelpers/contrib/jujugui/IMPORT (+4/-0)
charms/precise/restish/hooks/charmhelpers/contrib/jujugui/utils.py (+602/-0)
charms/precise/restish/hooks/charmhelpers/contrib/network/ip.py (+69/-0)
charms/precise/restish/hooks/charmhelpers/contrib/network/ovs/__init__.py (+75/-0)
charms/precise/restish/hooks/charmhelpers/contrib/openstack/alternatives.py (+17/-0)
charms/precise/restish/hooks/charmhelpers/contrib/openstack/context.py (+577/-0)
charms/precise/restish/hooks/charmhelpers/contrib/openstack/neutron.py (+137/-0)
charms/precise/restish/hooks/charmhelpers/contrib/openstack/templates/__init__.py (+2/-0)
charms/precise/restish/hooks/charmhelpers/contrib/openstack/templates/ceph.conf (+11/-0)
charms/precise/restish/hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg (+37/-0)
charms/precise/restish/hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend (+23/-0)
charms/precise/restish/hooks/charmhelpers/contrib/openstack/templating.py (+280/-0)
charms/precise/restish/hooks/charmhelpers/contrib/openstack/utils.py (+440/-0)
charms/precise/restish/hooks/charmhelpers/contrib/saltstack/__init__.py (+102/-0)
charms/precise/restish/hooks/charmhelpers/contrib/ssl/__init__.py (+78/-0)
charms/precise/restish/hooks/charmhelpers/contrib/storage/linux/ceph.py (+383/-0)
charms/precise/restish/hooks/charmhelpers/contrib/storage/linux/loopback.py (+62/-0)
charms/precise/restish/hooks/charmhelpers/contrib/storage/linux/lvm.py (+88/-0)
charms/precise/restish/hooks/charmhelpers/contrib/storage/linux/utils.py (+25/-0)
charms/precise/restish/hooks/charmhelpers/contrib/templating/contexts.py (+73/-0)
charms/precise/restish/hooks/charmhelpers/contrib/templating/pyformat.py (+13/-0)
charms/precise/restish/hooks/charmhelpers/core/hookenv.py (+395/-0)
charms/precise/restish/hooks/charmhelpers/core/host.py (+291/-0)
charms/precise/restish/hooks/charmhelpers/fetch/__init__.py (+279/-0)
charms/precise/restish/hooks/charmhelpers/fetch/archiveurl.py (+48/-0)
charms/precise/restish/hooks/charmhelpers/fetch/bzrurl.py (+49/-0)
charms/precise/restish/hooks/charmhelpers/payload/__init__.py (+1/-0)
charms/precise/restish/hooks/charmhelpers/payload/archive.py (+57/-0)
charms/precise/restish/hooks/charmhelpers/payload/execd.py (+50/-0)
charms/precise/restish/hooks/hook_helpers.py (+149/-0)
charms/precise/restish/hooks/hooks.py (+237/-0)
charms/precise/restish/metadata.yaml (+20/-0)
juju-deployer/branch-source-builder.yaml.tmpl (+9/-1)
juju-deployer/deploy.py (+77/-0)
juju-deployer/image-builder.yaml.tmpl (+9/-1)
juju-deployer/lander.yaml.tmpl (+9/-1)
juju-deployer/test-runner.yaml.tmpl (+12/-1)
juju-deployer/test_deploy.py (+5/-0)
Text conflict in juju-deployer/test-runner.yaml.tmpl
To merge this branch: bzr merge lp:~ev/ubuntu-ci-services-itself/restish-swift-deployment
Reviewer Review Type Date Requested Status
PS Jenkins bot (community) continuous-integration Approve
Chris Johnston (community) Needs Fixing
Review via email: mp+203385@code.launchpad.net

Commit message

Provide payloads for charm deployments via Swift.

Description of the change

Provide payloads for charm deployments via Swift. This breaks some assumptions we've made around the restish charm, so it won't cleanly deploy in juju-deployer yet.

This builds on Andy's work to bring the restish charm in the tree.

Revision history for this message
Andy Doan (doanac) wrote :

You might want to take a look at my branch. I made an update based on your comments. Specifically, I think you'll want to merge in:

 http://bazaar.launchpad.net/~doanac/ubuntu-ci-services-itself/restish-charm-local/revision/115

I also merged with trunk for revno 114. My notes:

1) The bulk of this is just adding charm-helpers. I think Joe's approach in the webui MP handles this better. He's got a makefile that pulls in a specific revno of charm-helpers. I think this will keep our source tree a lot cleaner, especially when we start to merge in all the other charms.

2) Are you sure code like:
  code_runner = config['user_code_runner']

is safe when no value is provided and the default is intended? When I've done this in the past, there was no dictionary item for elements that weren't specified explicitly, so I've always had to do: config.get('user_code_runner', 'restish'). It seems odd/annoying that juju wouldn't do that for me, so maybe you know a trick.

3) do we really need all the swift_* config options? Now that we have started to say we want "auth config" type stuff to go into a juju-deployer/config file that we pass to the charm, I was thinking this type of data could go there, so we don't have to specify it in every charm?
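A minimal sketch of the shared-config idea in point 3, assuming a single deployer-level auth file; the file name, format, and the "swift" key are hypothetical, since no format had been agreed on:

```python
import json


def load_swift_auth(path):
    """Read the shared swift credentials once from a deployer-level
    config file, instead of repeating swift_* options in every charm.
    The JSON format and the "swift" key are illustrative assumptions."""
    with open(path) as f:
        return json.load(f)["swift"]


# A charm hook could then fall back to its per-charm swift_* options
# only when the shared file is absent.
```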

Revision history for this message
Evan (ev) wrote :

On 27 January 2014 19:28, Andy Doan <email address hidden> wrote:
> 2) Are you sure code like:
> code_runner = config['user_code_runner']
>
> is safe when no value is provided and default is intended? When I've done this in the past, there is no dictionary item for elements that aren't specified explicitly. so i've always had to do: config.get('user_code_runner', 'restish'). Seems odd/annoying juju wouldn't do that for me, so maybe you know a trick

  user_code_runner:
    default: "restish"
    type: string
    description: The user that runs the code

So it's guaranteed to at least be set to 'restish'.
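The behaviour Evan describes can be simulated in plain Python: options that declare a default in config.yaml always appear in the config dictionary, so plain indexing is safe for them. The names below (effective_config, the schema dict) are illustrative, not juju's API:

```python
def effective_config(schema, provided):
    """Simulate how juju merges config.yaml defaults with
    operator-provided values: every option with a declared default is
    always present, so cfg['user_code_runner'] cannot raise KeyError."""
    cfg = {name: spec["default"]
           for name, spec in schema.items() if "default" in spec}
    cfg.update(provided)
    return cfg


schema = {
    "user_code_runner": {"type": "string", "default": "restish"},
    "branch": {"type": "string"},  # no default: may be absent entirely
}

cfg = effective_config(schema, {})
print(cfg["user_code_runner"])  # restish
```

Options without a default (like branch above) are the case Andy hit, where config.get() with a fallback is still needed.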

Revision history for this message
Evan (ev) wrote :

On 27 January 2014 19:28, Andy Doan <email address hidden> wrote:
> You might want to take a look at my branch. I made an update based on your comments. Specifically i think you'll want to merge in:
>
> http://bazaar.launchpad.net/~doanac/ubuntu-ci-services-itself/restish-charm-local/revision/115
 194 # juju needs to know about our local charms.
 195 # we could use the charms directory, but juju will wind up copying
 196 # external charms we use into this directory and making our source
 197 # tree dirty. using the deployer directory just makes it a bit cleaner

I don't understand what you mean by making the tree dirty. Can you elaborate?

We want to have the external charms in tree for the deployment, which
juju-deployer does. It doesn't let you pin to a specific revno, though,
so we'll want to have a commit or branch that builds these out with
config-manager, as we discussed this morning.

Revision history for this message
Andy Doan (doanac) wrote :

On 01/28/2014 06:24 AM, Evan Dandrea wrote:
>> http://bazaar.launchpad.net/~doanac/ubuntu-ci-services-itself/restish-charm-local/revision/115
> 194 # juju needs to know about our local charms.
> 195 # we could use the charms directory, but juju will wind up copying
> 196 # external charms we use into this directory and making our source
> 197 # tree dirty. using the deployer directory just makes it a bit cleaner
>
> I don't understand what you mean by making the tree dirty. Can you elaborate?

Without that snippet you wind up with things like the apache2 and
postgres charms copied into your source's charms/precise directory. It
gets a little annoying because "bzr stat" shows them. With my snippet
they get thrown into the venv directory instead, and you just see that
one entry when you run "bzr stat".
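The mechanism revno 115 relies on can be sketched as follows: point JUJU_REPOSITORY at a scratch directory so juju-deployer resolves local charms there instead of copying external charms into the source tree. The "venv" path and the deployer arguments are illustrative:

```python
import os


def deployer_env(repo_dir):
    """Build the environment for invoking juju-deployer so that charm
    lookups happen under repo_dir, keeping "bzr stat" clean in the
    source tree."""
    env = dict(os.environ)
    env["JUJU_REPOSITORY"] = repo_dir
    return env


# For example:
# subprocess.check_call(["juju-deployer", "-c", "deploy.yaml"],
#                       env=deployer_env("venv"))
```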

Revision history for this message
Evan (ev) wrote :

On 28 January 2014 12:36, Andy Doan <email address hidden> wrote:
> On 01/28/2014 06:24 AM, Evan Dandrea wrote:
>>> http://bazaar.launchpad.net/~doanac/ubuntu-ci-services-itself/restish-charm-local/revision/115
>> 194 # juju needs to know about our local charms.
>> 195 # we could use the charms directory, but juju will wind up copying
>> 196 # external charms we use into this directory and making our source
>> 197 # tree dirty. using the deployer directory just makes it a bit cleaner
>>
>> I don't understand what you mean by making the tree dirty. Can you elaborate?
>
> w/o that snippet you wind up having things like the apache2 and postgres
> charm get copied to your source's charms/precise directory. It just gets
> a little annoying because "bzr stat" shows these things. With my snippet
> they get thrown into the venv directory and you just see that one entry
> when you run "bzr stat"

.bzrignore? :)

Revision history for this message
Andy Doan (doanac) wrote :

On 01/28/2014 10:48 AM, Evan Dandrea wrote:
> .bzrignore? :)

Yeah, I thought about that. It's not a terrible idea. I suppose that's
no worse than my hack of copying the charms to a temp directory.

Revision history for this message
Joe Talbott (joetalbott) wrote :

On Tue, Jan 28, 2014 at 04:54:17PM -0000, Andy Doan wrote:
> On 01/28/2014 10:48 AM, Evan Dandrea wrote:
> > .bzrignore? :)
>
> yeah - i thought about that. its not a terrible idea. i suppose that's
> not any worse than my hack to copy the charms to a temp directory

I'm not a huge fan of the .bzrignore approach. I agree with Andy that
having external charms in a working copy of a bzr branch is bad. I like
to be able to easily blow away the juju-deployer set up charms/precise
directory at will and not have to pick and choose what to delete.

I don't have an easy solution either. I think having our charms in a
separate branch might be a better approach especially if we are also
going to need to be able to grab a specific revno of a charm. For that
matter I even think one branch per charm might be the best way to go.

I think the reason we initially went with including the charms in the
ubuntu-ci-services-itself (u-c-s-i) branch was to avoid having to update two
branches in lock-step. If I'm mistaken, the rest of this paragraph is
probably moot. What solutions are available for inter-charm
dependencies? E.g. charm-A revno X requires charm-B revno Y. If we're
pinning revnos for each charm in u-c-s-i's juju-deployer config, won't
this mostly mitigate the issue?

Revision history for this message
Chris Johnston (cjohnston) wrote :

Text conflict in juju-deployer/image-builder.yaml.tmpl
Text conflict in juju-deployer/lander.yaml.tmpl
Text conflict in juju-deployer/test-runner.yaml.tmpl
3 conflicts encountered.

review: Needs Fixing
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :

FAILED: Continuous integration, rev:118
No commit message was specified in the merge proposal. Click on the following link and set the commit message (if you want a jenkins rebuild you need to trigger it yourself):
https://code.launchpad.net/~ev/ubuntu-ci-services-itself/restish-swift-deployment/+merge/203385/+edit-commit-message

http://s-jenkins.ubuntu-ci:8080/job/uci-engine-ci/11/
Executed test runs:

Click here to trigger a rebuild:
http://s-jenkins.ubuntu-ci:8080/job/uci-engine-ci/11/rebuild

review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :

PASSED: Continuous integration, rev:118
http://s-jenkins.ubuntu-ci:8080/job/uci-engine-ci/19/
Executed test runs:

Click here to trigger a rebuild:
http://s-jenkins.ubuntu-ci:8080/job/uci-engine-ci/19/rebuild

review: Approve (continuous-integration)
Revision history for this message
Andy Doan (doanac) wrote :

I've proposed:

 https://code.launchpad.net/~doanac/ubuntu-ci-services-itself/restish-swift/+merge/207068

which might be an easier branch for people to review and approve.

Unmerged revisions

118. By Evan

Use swift in deploy.py to deploy the payload to the juju units.

117. By Evan

Somehow this got lost in the merge (__init__ files for charmhelpers).

116. By Evan

Merge with trunk.

115. By Evan

Drop the code to copy files around. juju-deployer supports JUJU_REPOSITORY in tip.

114. By Evan

First cut at swift support.

Preview Diff

1=== added directory 'charms'
2=== added directory 'charms/precise'
3=== added directory 'charms/precise/restish'
4=== added file 'charms/precise/restish/README.ex'
5--- charms/precise/restish/README.ex 1970-01-01 00:00:00 +0000
6+++ charms/precise/restish/README.ex 2014-01-27 18:01:12 +0000
7@@ -0,0 +1,41 @@
8+Describe the intended usage of this charm and anything unique about how
9+this charm relates to others here.
10+
11+This README will be displayed in the Charm Store, it should be either Markdown or RST. Ideal READMEs include instructions on how to use the charm, expected usage, and charm features that your audience might be interested in. For an example of a well written README check out Hadoop: http://jujucharms.com/charms/precise/hadoop
12+
13+Here's an example you might wish to template off of:
14+
15+Overview
16+--------
17+
18+This charm provides (service) from (service homepage). Add a description here of what the service itself actually does.
19+
20+
21+Usage
22+-----
23+
24+Step by step instructions on using the charm:
25+
26+ juju deploy servicename
27+
28+and so on. If you're providing a web service or something that the end user needs to go to, tell them here, especially if you're deploying a service that might listen to a non-default port.
29+
30+You can then browse to http://ip-address to configure the service.
31+
32+Configuration
33+-------------
34+
35+The configuration options will be listed on the charm store, however If you're making assumptions or opinionated decisions in the charm (like setting a default administrator password), you should detail that here so the user knows how to change it immediately, etc.
36+
37+
38+Contact Information
39+-------------------
40+
41+Though this will be listed in the charm store itself don't assume a user will know that, so include that information here:
42+
43+Author:
44+Report bugs at: http://bugs.launchpad.net/charms/+source/charmname
45+Location: http://jujucharms.com/charms/distro/charmname
46+
47+* Be sure to remove the templated parts before submitting to https://launchpad.net/charms for inclusion in the charm store.
48+
49
50=== added file 'charms/precise/restish/config.yaml'
51--- charms/precise/restish/config.yaml 1970-01-01 00:00:00 +0000
52+++ charms/precise/restish/config.yaml 2014-01-27 18:01:12 +0000
53@@ -0,0 +1,90 @@
54+options:
55+ branch:
56+ type: string
57+ description: "BZR branch the service lives in"
58+ revno:
59+ type: string
60+ description: "Revision or tag to branch from"
61+ packages:
62+ type: string
63+ description: "Packages required for this service"
64+ default: "python-webtest python-mock python-jinja2" #jinja2 needed by gunicorn
65+ restish_version:
66+ type: string
67+ description: "The version of restish to deploy"
68+ default: "0.12.1"
69+ install_root:
70+ type: string
71+ description: "The root directory the service will be installed in"
72+ default: "/srv/"
73+ python_path:
74+ type: string
75+ description: "PYTHONPATH specification for the service. Can include paths relative to local bzr directory"
76+
77+ # required for rabbitmq-server charm:
78+ amqp-user:
79+ type: string
80+ default: workerbee
81+ description: The user to log into the rabbitMQ server.
82+ amqp-vhost:
83+ type: string
84+ default: '/'
85+ description: The vhost in the rabbitMQ server.
86+
87+ # required for the gunicorn charm:
88+ port:
89+ type: int
90+ default: 8080
91+ description: "Port the application will be listening."
92+
93+ # swift configuration for holding the charm payload:
94+ swift_username:
95+ default: ""
96+ type: string
97+ description: Username to use when accessing swift
98+ swift_password:
99+ default: ""
100+ type: string
101+ description: Password to use when accessing swift
102+ swift_auth_url:
103+ default: ""
104+ type: string
105+ description: URL for authenticating against Keystone
106+ swift_region_name:
107+ default: ""
108+ type: string
109+ description: Swift region
110+ swift_tenant_name:
111+ default: ""
112+ type: string
113+ description: Entity that owns resources
114+ swift_container_name:
115+ default: ""
116+ type: string
117+ description: Container to put objects in
118+ swift_payload_name:
119+ default: "payload.tar.gz"
120+ type: string
121+ description: The name of the tarball to deploy
122+ # Deployment location
123+ code_src:
124+ default: "branch"
125+ type: string
126+ description: local, branch, or swift
127+ # Deployment permissions
128+ user_code_runner:
129+ default: "restish"
130+ type: string
131+ description: The user that runs the code
132+ group_code_runner:
133+ default: "restish"
134+ type: string
135+ description: The group that runs the code
136+ user_code_owner:
137+ default: "webops_deploy"
138+ type: string
139+ description: The user that owns the code
140+ group_code_owner:
141+ default: "webops_deploy"
142+ type: string
143+ description: The group that owns the code
144
145=== added directory 'charms/precise/restish/hooks'
146=== added symlink 'charms/precise/restish/hooks/amqp-relation-broken'
147=== target is u'hooks.py'
148=== added symlink 'charms/precise/restish/hooks/amqp-relation-changed'
149=== target is u'hooks.py'
150=== added symlink 'charms/precise/restish/hooks/amqp-relation-joined'
151=== target is u'hooks.py'
152=== added directory 'charms/precise/restish/hooks/charmhelpers'
153=== added file 'charms/precise/restish/hooks/charmhelpers/__init__.py'
154=== added directory 'charms/precise/restish/hooks/charmhelpers/cli'
155=== added file 'charms/precise/restish/hooks/charmhelpers/cli/README.rst'
156--- charms/precise/restish/hooks/charmhelpers/cli/README.rst 1970-01-01 00:00:00 +0000
157+++ charms/precise/restish/hooks/charmhelpers/cli/README.rst 2014-01-27 18:01:12 +0000
158@@ -0,0 +1,57 @@
159+==========
160+Commandant
161+==========
162+
163+-----------------------------------------------------
164+Automatic command-line interfaces to Python functions
165+-----------------------------------------------------
166+
167+One of the benefits of ``libvirt`` is the uniformity of the interface: the C API (as well as the bindings in other languages) is a set of functions that accept parameters that are nearly identical to the command-line arguments. If you run ``virsh``, you get an interactive command prompt that supports all of the same commands that your shell scripts use as ``virsh`` subcommands.
168+
169+Command execution and stdio manipulation is the greatest common factor across all development systems in the POSIX environment. By exposing your functions as commands that manipulate streams of text, you can make life easier for all the Ruby and Erlang and Go programmers in your life.
170+
171+Goals
172+=====
173+
174+* Single decorator to expose a function as a command.
175+ * now two decorators - one "automatic" and one that allows authors to manipulate the arguments for fine-grained control.(MW)
176+* Automatic analysis of function signature through ``inspect.getargspec()``
177+* Command argument parser built automatically with ``argparse``
178+* Interactive interpreter loop object made with ``Cmd``
179+* Options to output structured return value data via ``pprint``, ``yaml`` or ``json`` dumps.
180+
181+Other Important Features that need writing
182+------------------------------------------
183+
184+* Help and Usage documentation can be automatically generated, but it will be important to let users override this behaviour
185+* The decorator should allow specifying further parameters to the parser's add_argument() calls, to specify types or to make arguments behave as boolean flags, etc.
186+ - Filename arguments are important, as good practice is for functions to accept file objects as parameters.
187+ - choices arguments help to limit bad input before the function is called
188+* Some automatic behaviour could make for better defaults, once the user can override them.
189+ - We could automatically detect arguments that default to False or True, and automatically support --no-foo for foo=True.
190+ - We could automatically support hyphens as alternates for underscores
191+ - Arguments defaulting to sequence types could support the ``append`` action.
192+
193+
194+-----------------------------------------------------
195+Implementing subcommands
196+-----------------------------------------------------
197+
198+(WIP)
199+
200+So as to avoid dependencies on the cli module, subcommands should be defined separately from their implementations. The recommmendation would be to place definitions into separate modules near the implementations which they expose.
201+
202+Some examples::
203+
204+ from charmhelpers.cli import CommandLine
205+ from charmhelpers.payload import execd
206+ from charmhelpers.foo import bar
207+
208+ cli = CommandLine()
209+
210+ cli.subcommand(execd.execd_run)
211+
212+ @cli.subcommand_builder("bar", help="Bar baz qux")
213+ def barcmd_builder(subparser):
214+ subparser.add_argument('argument1', help="yackety")
215+ return bar
216
217=== added file 'charms/precise/restish/hooks/charmhelpers/cli/__init__.py'
218--- charms/precise/restish/hooks/charmhelpers/cli/__init__.py 1970-01-01 00:00:00 +0000
219+++ charms/precise/restish/hooks/charmhelpers/cli/__init__.py 2014-01-27 18:01:12 +0000
220@@ -0,0 +1,147 @@
221+import inspect
222+import itertools
223+import argparse
224+import sys
225+
226+
227+class OutputFormatter(object):
228+ def __init__(self, outfile=sys.stdout):
229+ self.formats = (
230+ "raw",
231+ "json",
232+ "py",
233+ "yaml",
234+ "csv",
235+ "tab",
236+ )
237+ self.outfile = outfile
238+
239+ def add_arguments(self, argument_parser):
240+ formatgroup = argument_parser.add_mutually_exclusive_group()
241+ choices = self.supported_formats
242+ formatgroup.add_argument("--format", metavar='FMT',
243+ help="Select output format for returned data, "
244+ "where FMT is one of: {}".format(choices),
245+ choices=choices, default='raw')
246+ for fmt in self.formats:
247+ fmtfunc = getattr(self, fmt)
248+ formatgroup.add_argument("-{}".format(fmt[0]),
249+ "--{}".format(fmt), action='store_const',
250+ const=fmt, dest='format',
251+ help=fmtfunc.__doc__)
252+
253+ @property
254+ def supported_formats(self):
255+ return self.formats
256+
257+ def raw(self, output):
258+ """Output data as raw string (default)"""
259+ self.outfile.write(str(output))
260+
261+ def py(self, output):
262+ """Output data as a nicely-formatted python data structure"""
263+ import pprint
264+ pprint.pprint(output, stream=self.outfile)
265+
266+ def json(self, output):
267+ """Output data in JSON format"""
268+ import json
269+ json.dump(output, self.outfile)
270+
271+ def yaml(self, output):
272+ """Output data in YAML format"""
273+ import yaml
274+ yaml.safe_dump(output, self.outfile)
275+
276+ def csv(self, output):
277+ """Output data as excel-compatible CSV"""
278+ import csv
279+ csvwriter = csv.writer(self.outfile)
280+ csvwriter.writerows(output)
281+
282+ def tab(self, output):
283+ """Output data in excel-compatible tab-delimited format"""
284+ import csv
285+ csvwriter = csv.writer(self.outfile, dialect=csv.excel_tab)
286+ csvwriter.writerows(output)
287+
288+ def format_output(self, output, fmt='raw'):
289+ fmtfunc = getattr(self, fmt)
290+ fmtfunc(output)
291+
292+
293+class CommandLine(object):
294+ argument_parser = None
295+ subparsers = None
296+ formatter = None
297+
298+ def __init__(self):
299+ if not self.argument_parser:
300+ self.argument_parser = argparse.ArgumentParser(description='Perform common charm tasks')
301+ if not self.formatter:
302+ self.formatter = OutputFormatter()
303+ self.formatter.add_arguments(self.argument_parser)
304+ if not self.subparsers:
305+ self.subparsers = self.argument_parser.add_subparsers(help='Commands')
306+
307+ def subcommand(self, command_name=None):
308+ """
309+ Decorate a function as a subcommand. Use its arguments as the
310+ command-line arguments"""
311+ def wrapper(decorated):
312+ cmd_name = command_name or decorated.__name__
313+ subparser = self.subparsers.add_parser(cmd_name,
314+ description=decorated.__doc__)
315+ for args, kwargs in describe_arguments(decorated):
316+ subparser.add_argument(*args, **kwargs)
317+ subparser.set_defaults(func=decorated)
318+ return decorated
319+ return wrapper
320+
321+ def subcommand_builder(self, command_name, description=None):
322+ """
323+ Decorate a function that builds a subcommand. Builders should accept a
324+ single argument (the subparser instance) and return the function to be
325+ run as the command."""
326+ def wrapper(decorated):
327+ subparser = self.subparsers.add_parser(command_name)
328+ func = decorated(subparser)
329+ subparser.set_defaults(func=func)
330+ subparser.description = description or func.__doc__
331+ return wrapper
332+
333+ def run(self):
334+ "Run cli, processing arguments and executing subcommands."
335+ arguments = self.argument_parser.parse_args()
336+ argspec = inspect.getargspec(arguments.func)
337+ vargs = []
338+ kwargs = {}
339+ if argspec.varargs:
340+ vargs = getattr(arguments, argspec.varargs)
341+ for arg in argspec.args:
342+ kwargs[arg] = getattr(arguments, arg)
343+ self.formatter.format_output(arguments.func(*vargs, **kwargs), arguments.format)
344+
345+
346+cmdline = CommandLine()
347+
348+
349+def describe_arguments(func):
350+ """
351+ Analyze a function's signature and return a data structure suitable for
352+ passing in as arguments to an argparse parser's add_argument() method."""
353+
354+ argspec = inspect.getargspec(func)
355+ # we should probably raise an exception somewhere if func includes **kwargs
356+ if argspec.defaults:
357+ positional_args = argspec.args[:-len(argspec.defaults)]
358+ keyword_names = argspec.args[-len(argspec.defaults):]
359+ for arg, default in itertools.izip(keyword_names, argspec.defaults):
360+ yield ('--{}'.format(arg),), {'default': default}
361+ else:
362+ positional_args = argspec.args
363+
364+ for arg in positional_args:
365+ yield (arg,), {}
366+ if argspec.varargs:
367+ yield (argspec.varargs,), {'nargs': '*'}
368
369=== added file 'charms/precise/restish/hooks/charmhelpers/cli/commands.py'
370--- charms/precise/restish/hooks/charmhelpers/cli/commands.py 1970-01-01 00:00:00 +0000
371+++ charms/precise/restish/hooks/charmhelpers/cli/commands.py 2014-01-27 18:01:12 +0000
372@@ -0,0 +1,2 @@
373+from . import CommandLine
374+import host
375
376=== added file 'charms/precise/restish/hooks/charmhelpers/cli/host.py'
377--- charms/precise/restish/hooks/charmhelpers/cli/host.py 1970-01-01 00:00:00 +0000
378+++ charms/precise/restish/hooks/charmhelpers/cli/host.py 2014-01-27 18:01:12 +0000
379@@ -0,0 +1,15 @@
380+from . import cmdline
381+from charmhelpers.core import host
382+
383+
384+@cmdline.subcommand()
385+def mounts():
386+ "List mounts"
387+ return host.mounts()
388+
389+
390+@cmdline.subcommand_builder('service', description="Control system services")
391+def service(subparser):
392+ subparser.add_argument("action", help="The action to perform (start, stop, etc...)")
393+ subparser.add_argument("service_name", help="Name of the service to control")
394+ return host.service
395
396=== added directory 'charms/precise/restish/hooks/charmhelpers/contrib'
397=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/__init__.py'
398=== added directory 'charms/precise/restish/hooks/charmhelpers/contrib/ansible'
399=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/ansible/__init__.py'
400--- charms/precise/restish/hooks/charmhelpers/contrib/ansible/__init__.py 1970-01-01 00:00:00 +0000
401+++ charms/precise/restish/hooks/charmhelpers/contrib/ansible/__init__.py 2014-01-27 18:01:12 +0000
402@@ -0,0 +1,165 @@
403+# Copyright 2013 Canonical Ltd.
404+#
405+# Authors:
406+# Charm Helpers Developers <juju@lists.ubuntu.com>
407+"""Charm Helpers ansible - declare the state of your machines.
408+
409+This helper enables you to declare your machine state, rather than
410+program it procedurally (and have to test each change to your procedures).
411+Your install hook can be as simple as:
412+
413+{{{
414+import charmhelpers.contrib.ansible
415+
416+
417+def install():
418+ charmhelpers.contrib.ansible.install_ansible_support()
419+ charmhelpers.contrib.ansible.apply_playbook('playbooks/install.yaml')
420+}}}
421+
422+and won't need to change (nor will its tests) when you change the machine
423+state.
424+
425+All of your juju config and relation-data are available as template
426+variables within your playbooks and templates. An install playbook looks
427+something like:
428+
429+{{{
430+---
431+- hosts: localhost
432+ user: root
433+
434+ tasks:
435+ - name: Add private repositories.
436+ template:
437+ src: ../templates/private-repositories.list.jinja2
438+ dest: /etc/apt/sources.list.d/private.list
439+
440+ - name: Update the cache.
441+ apt: update_cache=yes
442+
443+ - name: Install dependencies.
444+ apt: pkg={{ item }}
445+ with_items:
446+ - python-mimeparse
447+ - python-webob
448+ - sunburnt
449+
450+ - name: Setup groups.
451+ group: name={{ item.name }} gid={{ item.gid }}
452+ with_items:
453+ - { name: 'deploy_user', gid: 1800 }
454+ - { name: 'service_user', gid: 1500 }
455+
456+ ...
457+}}}
458+
459+Read more online about playbooks[1] and standard ansible modules[2].
460+
461+[1] http://www.ansibleworks.com/docs/playbooks.html
462+[2] http://www.ansibleworks.com/docs/modules.html
463+"""
464+import os
465+import subprocess
466+
467+import charmhelpers.contrib.templating.contexts
468+import charmhelpers.core.host
469+import charmhelpers.core.hookenv
470+import charmhelpers.fetch
471+
472+
473+charm_dir = os.environ.get('CHARM_DIR', '')
474+ansible_hosts_path = '/etc/ansible/hosts'
475+# Ansible will automatically include any vars in the following
476+# file in its inventory when run locally.
477+ansible_vars_path = '/etc/ansible/host_vars/localhost'
478+
479+
480+def install_ansible_support(from_ppa=True):
481+ """Installs the ansible package.
482+
483+ By default it is installed from the PPA [1] linked from
484+ the ansible website [2].
485+
486+ [1] https://launchpad.net/~rquillo/+archive/ansible
487+ [2] http://www.ansibleworks.com/docs/gettingstarted.html#ubuntu-and-debian
488+
489+ If from_ppa is false, you must ensure that the package is available
490+ from a configured repository.
491+ """
492+ if from_ppa:
493+ charmhelpers.fetch.add_source('ppa:rquillo/ansible')
494+ charmhelpers.fetch.apt_update(fatal=True)
495+ charmhelpers.fetch.apt_install('ansible')
496+ with open(ansible_hosts_path, 'w+') as hosts_file:
497+ hosts_file.write('localhost ansible_connection=local')
498+
499+
500+def apply_playbook(playbook, tags=None):
501+ tags = tags or []
502+ tags = ",".join(tags)
503+ charmhelpers.contrib.templating.contexts.juju_state_to_yaml(
504+ ansible_vars_path, namespace_separator='__',
505+ allow_hyphens_in_keys=False)
506+ call = [
507+ 'ansible-playbook',
508+ '-c',
509+ 'local',
510+ playbook,
511+ ]
512+ if tags:
513+ call.extend(['--tags', '{}'.format(tags)])
514+ subprocess.check_call(call)
515+
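The `apply_playbook` helper above shells out to `ansible-playbook` with a local connection and an optional comma-joined tag list. A minimal sketch of just the argument-list assembly (no subprocess call; the playbook path and tag names are hypothetical examples):

```python
# Sketch of the argument list apply_playbook() builds before handing it
# to subprocess.check_call(). Only the command assembly is shown here.
def build_call(playbook, tags=None):
    tags = ",".join(tags or [])
    call = ['ansible-playbook', '-c', 'local', playbook]
    if tags:
        call.extend(['--tags', tags])
    return call

print(build_call('playbooks/site.yaml', tags=['install', 'config-changed']))
```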
516+
517+class AnsibleHooks(charmhelpers.core.hookenv.Hooks):
518+ """Run a playbook with the hook-name as the tag.
519+
520+ This helper builds on the standard hookenv.Hooks helper,
521+ but additionally runs the playbook with the hook-name specified
522+    using --tags (i.e. running all the tasks tagged with the hook-name).
523+
524+ Example:
525+ hooks = AnsibleHooks(playbook_path='playbooks/my_machine_state.yaml')
526+
527+ # All the tasks within my_machine_state.yaml tagged with 'install'
528+ # will be run automatically after do_custom_work()
529+ @hooks.hook()
530+ def install():
531+ do_custom_work()
532+
533+ # For most of your hooks, you won't need to do anything other
534+ # than run the tagged tasks for the hook:
535+ @hooks.hook('config-changed', 'start', 'stop')
536+ def just_use_playbook():
537+ pass
538+
539+        # As a convenience, you can avoid the noop function above by listing
540+        # the hooks which are handled solely by ansible, and they will be
541+        # registered for you:
542+ # hooks = AnsibleHooks(
543+ # 'playbooks/my_machine_state.yaml',
544+ # default_hooks=['config-changed', 'start', 'stop'])
545+
546+ if __name__ == "__main__":
547+ # execute a hook based on the name the program is called by
548+ hooks.execute(sys.argv)
549+ """
550+
551+ def __init__(self, playbook_path, default_hooks=None):
552+ """Register any hooks handled by ansible."""
553+ super(AnsibleHooks, self).__init__()
554+
555+ self.playbook_path = playbook_path
556+
557+ default_hooks = default_hooks or []
558+ noop = lambda *args, **kwargs: None
559+ for hook in default_hooks:
560+ self.register(hook, noop)
561+
562+ def execute(self, args):
563+ """Execute the hook followed by the playbook using the hook as tag."""
564+ super(AnsibleHooks, self).execute(args)
565+ hook_name = os.path.basename(args[0])
566+ charmhelpers.contrib.ansible.apply_playbook(
567+ self.playbook_path, tags=[hook_name])
568
569=== added directory 'charms/precise/restish/hooks/charmhelpers/contrib/charmhelpers'
570=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/charmhelpers/IMPORT'
571--- charms/precise/restish/hooks/charmhelpers/contrib/charmhelpers/IMPORT 1970-01-01 00:00:00 +0000
572+++ charms/precise/restish/hooks/charmhelpers/contrib/charmhelpers/IMPORT 2014-01-27 18:01:12 +0000
573@@ -0,0 +1,4 @@
574+Source lp:charm-tools/trunk
575+
576+charm-tools/helpers/python/charmhelpers/__init__.py -> charmhelpers/charmhelpers/contrib/charmhelpers/__init__.py
577+charm-tools/helpers/python/charmhelpers/tests/test_charmhelpers.py -> charmhelpers/tests/contrib/charmhelpers/test_charmhelpers.py
578
579=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/charmhelpers/__init__.py'
580--- charms/precise/restish/hooks/charmhelpers/contrib/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000
581+++ charms/precise/restish/hooks/charmhelpers/contrib/charmhelpers/__init__.py 2014-01-27 18:01:12 +0000
582@@ -0,0 +1,184 @@
583+# Copyright 2012 Canonical Ltd. This software is licensed under the
584+# GNU Affero General Public License version 3 (see the file LICENSE).
585+
586+"""Helper functions for writing Juju charms in Python."""
587+
588+import warnings
589+warnings.warn("contrib.charmhelpers is deprecated", DeprecationWarning)
590+
591+__metaclass__ = type
592+__all__ = [
593+ #'get_config', # core.hookenv.config()
594+ #'log', # core.hookenv.log()
595+ #'log_entry', # core.hookenv.log()
596+ #'log_exit', # core.hookenv.log()
597+ #'relation_get', # core.hookenv.relation_get()
598+ #'relation_set', # core.hookenv.relation_set()
599+ #'relation_ids', # core.hookenv.relation_ids()
600+ #'relation_list', # core.hookenv.relation_units()
601+ #'config_get', # core.hookenv.config()
602+ #'unit_get', # core.hookenv.unit_get()
603+ #'open_port', # core.hookenv.open_port()
604+ #'close_port', # core.hookenv.close_port()
605+ #'service_control', # core.host.service()
606+ 'unit_info', # client-side, NOT IMPLEMENTED
607+ 'wait_for_machine', # client-side, NOT IMPLEMENTED
608+ 'wait_for_page_contents', # client-side, NOT IMPLEMENTED
609+ 'wait_for_relation', # client-side, NOT IMPLEMENTED
610+ 'wait_for_unit', # client-side, NOT IMPLEMENTED
611+]
612+
613+import operator
614+from shelltoolbox import (
615+ command,
616+)
617+import tempfile
618+import time
619+import urllib2
620+import yaml
621+
622+SLEEP_AMOUNT = 0.1
623+# We create a juju_status Command here because it makes testing much,
624+# much easier.
625+juju_status = lambda: command('juju')('status')
626+
627+# re-implemented as charmhelpers.fetch.configure_sources()
628+#def configure_source(update=False):
629+# source = config_get('source')
630+# if ((source.startswith('ppa:') or
631+# source.startswith('cloud:') or
632+# source.startswith('http:'))):
633+# run('add-apt-repository', source)
634+# if source.startswith("http:"):
635+# run('apt-key', 'import', config_get('key'))
636+# if update:
637+# run('apt-get', 'update')
638+
639+
640+# DEPRECATED: client-side only
641+def make_charm_config_file(charm_config):
642+ charm_config_file = tempfile.NamedTemporaryFile()
643+ charm_config_file.write(yaml.dump(charm_config))
644+ charm_config_file.flush()
645+ # The NamedTemporaryFile instance is returned instead of just the name
646+ # because we want to take advantage of garbage collection-triggered
647+ # deletion of the temp file when it goes out of scope in the caller.
648+ return charm_config_file
649+
650+
651+# DEPRECATED: client-side only
652+def unit_info(service_name, item_name, data=None, unit=None):
653+ if data is None:
654+ data = yaml.safe_load(juju_status())
655+ service = data['services'].get(service_name)
656+ if service is None:
657+ # XXX 2012-02-08 gmb:
658+ # This allows us to cope with the race condition that we
659+ # have between deploying a service and having it come up in
660+ # `juju status`. We could probably do with cleaning it up so
661+ # that it fails a bit more noisily after a while.
662+ return ''
663+ units = service['units']
664+ if unit is not None:
665+ item = units[unit][item_name]
666+ else:
667+ # It might seem odd to sort the units here, but we do it to
668+ # ensure that when no unit is specified, the first unit for the
669+ # service (or at least the one with the lowest number) is the
670+ # one whose data gets returned.
671+ sorted_unit_names = sorted(units.keys())
672+ item = units[sorted_unit_names[0]][item_name]
673+ return item
674+
675+
676+# DEPRECATED: client-side only
677+def get_machine_data():
678+ return yaml.safe_load(juju_status())['machines']
679+
680+
681+# DEPRECATED: client-side only
682+def wait_for_machine(num_machines=1, timeout=300):
683+ """Wait `timeout` seconds for `num_machines` machines to come up.
684+
685+ This wait_for... function can be called by other wait_for functions
686+ whose timeouts might be too short in situations where only a bare
687+ Juju setup has been bootstrapped.
688+
689+ :return: A tuple of (num_machines, time_taken). This is used for
690+ testing.
691+ """
692+ # You may think this is a hack, and you'd be right. The easiest way
693+ # to tell what environment we're working in (LXC vs EC2) is to check
694+ # the dns-name of the first machine. If it's localhost we're in LXC
695+ # and we can just return here.
696+ if get_machine_data()[0]['dns-name'] == 'localhost':
697+ return 1, 0
698+ start_time = time.time()
699+ while True:
700+ # Drop the first machine, since it's the Zookeeper and that's
701+ # not a machine that we need to wait for. This will only work
702+ # for EC2 environments, which is why we return early above if
703+ # we're in LXC.
704+ machine_data = get_machine_data()
705+ non_zookeeper_machines = [
706+ machine_data[key] for key in machine_data.keys()[1:]]
707+ if len(non_zookeeper_machines) >= num_machines:
708+ all_machines_running = True
709+ for machine in non_zookeeper_machines:
710+ if machine.get('instance-state') != 'running':
711+ all_machines_running = False
712+ break
713+ if all_machines_running:
714+ break
715+ if time.time() - start_time >= timeout:
716+ raise RuntimeError('timeout waiting for service to start')
717+ time.sleep(SLEEP_AMOUNT)
718+ return num_machines, time.time() - start_time
719+
720+
721+# DEPRECATED: client-side only
722+def wait_for_unit(service_name, timeout=480):
723+ """Wait `timeout` seconds for a given service name to come up."""
724+ wait_for_machine(num_machines=1)
725+ start_time = time.time()
726+ while True:
727+ state = unit_info(service_name, 'agent-state')
728+ if 'error' in state or state == 'started':
729+ break
730+ if time.time() - start_time >= timeout:
731+ raise RuntimeError('timeout waiting for service to start')
732+ time.sleep(SLEEP_AMOUNT)
733+ if state != 'started':
734+ raise RuntimeError('unit did not start, agent-state: ' + state)
735+
736+
737+# DEPRECATED: client-side only
738+def wait_for_relation(service_name, relation_name, timeout=120):
739+ """Wait `timeout` seconds for a given relation to come up."""
740+ start_time = time.time()
741+ while True:
742+ relation = unit_info(service_name, 'relations').get(relation_name)
743+ if relation is not None and relation['state'] == 'up':
744+ break
745+ if time.time() - start_time >= timeout:
746+ raise RuntimeError('timeout waiting for relation to be up')
747+ time.sleep(SLEEP_AMOUNT)
748+
749+
750+# DEPRECATED: client-side only
751+def wait_for_page_contents(url, contents, timeout=120, validate=None):
752+ if validate is None:
753+ validate = operator.contains
754+ start_time = time.time()
755+ while True:
756+ try:
757+ stream = urllib2.urlopen(url)
758+ except (urllib2.HTTPError, urllib2.URLError):
759+ pass
760+ else:
761+ page = stream.read()
762+ if validate(page, contents):
763+ return page
764+ if time.time() - start_time >= timeout:
765+ raise RuntimeError('timeout waiting for contents of ' + url)
766+ time.sleep(SLEEP_AMOUNT)
767
768=== added directory 'charms/precise/restish/hooks/charmhelpers/contrib/charmsupport'
769=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/charmsupport/IMPORT'
770--- charms/precise/restish/hooks/charmhelpers/contrib/charmsupport/IMPORT 1970-01-01 00:00:00 +0000
771+++ charms/precise/restish/hooks/charmhelpers/contrib/charmsupport/IMPORT 2014-01-27 18:01:12 +0000
772@@ -0,0 +1,14 @@
773+Source: lp:charmsupport/trunk
774+
775+charmsupport/charmsupport/execd.py -> charm-helpers/charmhelpers/contrib/charmsupport/execd.py
776+charmsupport/charmsupport/hookenv.py -> charm-helpers/charmhelpers/contrib/charmsupport/hookenv.py
777+charmsupport/charmsupport/host.py -> charm-helpers/charmhelpers/contrib/charmsupport/host.py
778+charmsupport/charmsupport/nrpe.py -> charm-helpers/charmhelpers/contrib/charmsupport/nrpe.py
779+charmsupport/charmsupport/volumes.py -> charm-helpers/charmhelpers/contrib/charmsupport/volumes.py
780+
781+charmsupport/tests/test_execd.py -> charm-helpers/tests/contrib/charmsupport/test_execd.py
782+charmsupport/tests/test_hookenv.py -> charm-helpers/tests/contrib/charmsupport/test_hookenv.py
783+charmsupport/tests/test_host.py -> charm-helpers/tests/contrib/charmsupport/test_host.py
784+charmsupport/tests/test_nrpe.py -> charm-helpers/tests/contrib/charmsupport/test_nrpe.py
785+
786+charmsupport/bin/charmsupport -> charm-helpers/bin/contrib/charmsupport/charmsupport
787
788=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/charmsupport/__init__.py'
789=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/charmsupport/nrpe.py'
790--- charms/precise/restish/hooks/charmhelpers/contrib/charmsupport/nrpe.py 1970-01-01 00:00:00 +0000
791+++ charms/precise/restish/hooks/charmhelpers/contrib/charmsupport/nrpe.py 2014-01-27 18:01:12 +0000
792@@ -0,0 +1,216 @@
793+"""Compatibility with the nrpe-external-master charm"""
794+# Copyright 2012 Canonical Ltd.
795+#
796+# Authors:
797+# Matthew Wedgwood <matthew.wedgwood@canonical.com>
798+
799+import subprocess
800+import pwd
801+import grp
802+import os
803+import re
804+import shlex
805+import yaml
806+
807+from charmhelpers.core.hookenv import (
808+ config,
809+ local_unit,
810+ log,
811+ relation_ids,
812+ relation_set,
813+)
814+
815+from charmhelpers.core.host import service
816+
817+# This module adds compatibility with the nrpe-external-master and plain nrpe
818+# subordinate charms. To use it in your charm:
819+#
820+# 1. Update metadata.yaml
821+#
822+# provides:
823+# (...)
824+# nrpe-external-master:
825+# interface: nrpe-external-master
826+# scope: container
827+#
828+# and/or
829+#
830+# provides:
831+# (...)
832+# local-monitors:
833+# interface: local-monitors
834+# scope: container
835+
836+#
837+# 2. Add the following to config.yaml
838+#
839+# nagios_context:
840+# default: "juju"
841+# type: string
842+# description: |
843+# Used by the nrpe subordinate charms.
844+#       A string that will be prepended to the instance name to set the
845+#       host name in nagios. For instance, the hostname would be something like:
846+# juju-myservice-0
847+# If you're running multiple environments with the same services in them
848+# this allows you to differentiate between them.
849+#
850+# 3. Add custom checks (Nagios plugins) to files/nrpe-external-master
851+#
852+# 4. Update your hooks.py with something like this:
853+#
854+# from charmsupport.nrpe import NRPE
855+# (...)
856+# def update_nrpe_config():
857+# nrpe_compat = NRPE()
858+# nrpe_compat.add_check(
859+# shortname = "myservice",
860+# description = "Check MyService",
861+# check_cmd = "check_http -w 2 -c 10 http://localhost"
862+# )
863+# nrpe_compat.add_check(
864+# "myservice_other",
865+# "Check for widget failures",
866+# check_cmd = "/srv/myapp/scripts/widget_check"
867+# )
868+# nrpe_compat.write()
869+#
870+# def config_changed():
871+# (...)
872+# update_nrpe_config()
873+#
874+# def nrpe_external_master_relation_changed():
875+# update_nrpe_config()
876+#
877+# def local_monitors_relation_changed():
878+# update_nrpe_config()
879+#
880+# 5. ln -s hooks.py nrpe-external-master-relation-changed
881+# ln -s hooks.py local-monitors-relation-changed
882+
883+
884+class CheckException(Exception):
885+ pass
886+
887+
888+class Check(object):
889+ shortname_re = '[A-Za-z0-9-_]+$'
890+ service_template = ("""
891+#---------------------------------------------------
892+# This file is Juju managed
893+#---------------------------------------------------
894+define service {{
895+ use active-service
896+ host_name {nagios_hostname}
897+ service_description {nagios_hostname}[{shortname}] """
898+ """{description}
899+ check_command check_nrpe!{command}
900+ servicegroups {nagios_servicegroup}
901+}}
902+""")
903+
904+ def __init__(self, shortname, description, check_cmd):
905+ super(Check, self).__init__()
906+ # XXX: could be better to calculate this from the service name
907+ if not re.match(self.shortname_re, shortname):
908+ raise CheckException("shortname must match {}".format(
909+ Check.shortname_re))
910+ self.shortname = shortname
911+ self.command = "check_{}".format(shortname)
912+ # Note: a set of invalid characters is defined by the
913+ # Nagios server config
914+ # The default is: illegal_object_name_chars=`~!$%^&*"|'<>?,()=
915+ self.description = description
916+ self.check_cmd = self._locate_cmd(check_cmd)
917+
918+ def _locate_cmd(self, check_cmd):
919+ search_path = (
920+ '/usr/lib/nagios/plugins',
921+ '/usr/local/lib/nagios/plugins',
922+ )
923+ parts = shlex.split(check_cmd)
924+ for path in search_path:
925+ if os.path.exists(os.path.join(path, parts[0])):
926+ command = os.path.join(path, parts[0])
927+ if len(parts) > 1:
928+ command += " " + " ".join(parts[1:])
929+ return command
930+ log('Check command not found: {}'.format(parts[0]))
931+ return ''
932+
933+ def write(self, nagios_context, hostname):
934+ nrpe_check_file = '/etc/nagios/nrpe.d/{}.cfg'.format(
935+ self.command)
936+ with open(nrpe_check_file, 'w') as nrpe_check_config:
937+ nrpe_check_config.write("# check {}\n".format(self.shortname))
938+ nrpe_check_config.write("command[{}]={}\n".format(
939+ self.command, self.check_cmd))
940+
941+ if not os.path.exists(NRPE.nagios_exportdir):
942+ log('Not writing service config as {} is not accessible'.format(
943+ NRPE.nagios_exportdir))
944+ else:
945+ self.write_service_config(nagios_context, hostname)
946+
947+ def write_service_config(self, nagios_context, hostname):
948+ for f in os.listdir(NRPE.nagios_exportdir):
949+ if re.search('.*{}.cfg'.format(self.command), f):
950+ os.remove(os.path.join(NRPE.nagios_exportdir, f))
951+
952+ templ_vars = {
953+ 'nagios_hostname': hostname,
954+ 'nagios_servicegroup': nagios_context,
955+ 'description': self.description,
956+ 'shortname': self.shortname,
957+ 'command': self.command,
958+ }
959+ nrpe_service_text = Check.service_template.format(**templ_vars)
960+ nrpe_service_file = '{}/service__{}_{}.cfg'.format(
961+ NRPE.nagios_exportdir, hostname, self.command)
962+ with open(nrpe_service_file, 'w') as nrpe_service_config:
963+ nrpe_service_config.write(str(nrpe_service_text))
964+
965+ def run(self):
966+ subprocess.call(self.check_cmd)
967+
968+
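The `Check.shortname_re` pattern above is applied with `re.match`, which implicitly anchors at the start of the string, so together with the trailing `$` it accepts only names made entirely of letters, digits, hyphens and underscores. A quick illustration (the check names are hypothetical):

```python
import re

# Same pattern as Check.shortname_re above.
shortname_re = '[A-Za-z0-9-_]+$'

print(bool(re.match(shortname_re, 'my-service_0')))  # accepted
print(bool(re.match(shortname_re, 'bad name')))      # rejected: contains a space
```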
969+class NRPE(object):
970+ nagios_logdir = '/var/log/nagios'
971+ nagios_exportdir = '/var/lib/nagios/export'
972+ nrpe_confdir = '/etc/nagios/nrpe.d'
973+
974+ def __init__(self):
975+ super(NRPE, self).__init__()
976+ self.config = config()
977+ self.nagios_context = self.config['nagios_context']
978+ self.unit_name = local_unit().replace('/', '-')
979+ self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
980+ self.checks = []
981+
982+ def add_check(self, *args, **kwargs):
983+ self.checks.append(Check(*args, **kwargs))
984+
985+ def write(self):
986+ try:
987+ nagios_uid = pwd.getpwnam('nagios').pw_uid
988+ nagios_gid = grp.getgrnam('nagios').gr_gid
989+        except KeyError:
990+ log("Nagios user not set up, nrpe checks not updated")
991+ return
992+
993+ if not os.path.exists(NRPE.nagios_logdir):
994+ os.mkdir(NRPE.nagios_logdir)
995+ os.chown(NRPE.nagios_logdir, nagios_uid, nagios_gid)
996+
997+ nrpe_monitors = {}
998+ monitors = {"monitors": {"remote": {"nrpe": nrpe_monitors}}}
999+ for nrpecheck in self.checks:
1000+ nrpecheck.write(self.nagios_context, self.hostname)
1001+ nrpe_monitors[nrpecheck.shortname] = {
1002+ "command": nrpecheck.command,
1003+ }
1004+
1005+ service('restart', 'nagios-nrpe-server')
1006+
1007+ for rid in relation_ids("local-monitors"):
1008+ relation_set(relation_id=rid, monitors=yaml.dump(monitors))
1009
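`NRPE.write()` above publishes the registered checks to the `local-monitors` relation as a YAML dump of a nested dictionary. The structure it serialises has this shape (the `myservice` check name is a hypothetical example):

```python
# Shape of the structure NRPE.write() passes to yaml.dump().
nrpe_monitors = {'myservice': {'command': 'check_myservice'}}
monitors = {'monitors': {'remote': {'nrpe': nrpe_monitors}}}

print(monitors['monitors']['remote']['nrpe']['myservice']['command'])
```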
1010=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/charmsupport/volumes.py'
1011--- charms/precise/restish/hooks/charmhelpers/contrib/charmsupport/volumes.py 1970-01-01 00:00:00 +0000
1012+++ charms/precise/restish/hooks/charmhelpers/contrib/charmsupport/volumes.py 2014-01-27 18:01:12 +0000
1013@@ -0,0 +1,156 @@
1014+'''
1015+Functions for managing volumes in juju units. One volume is supported per unit.
1016+Subordinates may have their own storage, provided it is on its own partition.
1017+
1018+Configuration stanzas:
1019+ volume-ephemeral:
1020+ type: boolean
1021+ default: true
1022+ description: >
1023+        If false, a volume is mounted as specified in "volume-map".
1024+ If true, ephemeral storage will be used, meaning that log data
1025+ will only exist as long as the machine. YOU HAVE BEEN WARNED.
1026+ volume-map:
1027+ type: string
1028+ default: {}
1029+ description: >
1030+ YAML map of units to device names, e.g:
1031+ "{ rsyslog/0: /dev/vdb, rsyslog/1: /dev/vdb }"
1032+ Service units will raise a configure-error if volume-ephemeral
1033+        is 'false' and no volume-map value is set. Use 'juju set' to set a
1034+ value and 'juju resolved' to complete configuration.
1035+
1036+Usage:
1037+ from charmsupport.volumes import configure_volume, VolumeConfigurationError
1038+ from charmsupport.hookenv import log, ERROR
1039+    def pre_mount_hook():
1040+ stop_service('myservice')
1041+ def post_mount_hook():
1042+ start_service('myservice')
1043+
1044+ if __name__ == '__main__':
1045+ try:
1046+ configure_volume(before_change=pre_mount_hook,
1047+ after_change=post_mount_hook)
1048+ except VolumeConfigurationError:
1049+ log('Storage could not be configured', ERROR)
1050+'''
1051+
1052+# XXX: Known limitations
1053+# - fstab is neither consulted nor updated
1054+
1055+import os
1056+from charmhelpers.core import hookenv
1057+from charmhelpers.core import host
1058+import yaml
1059+
1060+
1061+MOUNT_BASE = '/srv/juju/volumes'
1062+
1063+
1064+class VolumeConfigurationError(Exception):
1065+ '''Volume configuration data is missing or invalid'''
1066+ pass
1067+
1068+
1069+def get_config():
1070+ '''Gather and sanity-check volume configuration data'''
1071+ volume_config = {}
1072+ config = hookenv.config()
1073+
1074+ errors = False
1075+
1076+ if config.get('volume-ephemeral') in (True, 'True', 'true', 'Yes', 'yes'):
1077+ volume_config['ephemeral'] = True
1078+ else:
1079+ volume_config['ephemeral'] = False
1080+
1081+    volume_map = {}
1082+    try:
1083+        volume_map = yaml.safe_load(config.get('volume-map', '{}'))
1084+    except yaml.YAMLError as e:
1085+        hookenv.log("Error parsing YAML volume-map: {}".format(e), hookenv.ERROR)
1086+        errors = True
1087+ if volume_map is None:
1088+ # probably an empty string
1089+ volume_map = {}
1090+ elif not isinstance(volume_map, dict):
1091+ hookenv.log("Volume-map should be a dictionary, not {}".format(
1092+ type(volume_map)))
1093+ errors = True
1094+
1095+ volume_config['device'] = volume_map.get(os.environ['JUJU_UNIT_NAME'])
1096+ if volume_config['device'] and volume_config['ephemeral']:
1097+ # asked for ephemeral storage but also defined a volume ID
1098+ hookenv.log('A volume is defined for this unit, but ephemeral '
1099+ 'storage was requested', hookenv.ERROR)
1100+ errors = True
1101+ elif not volume_config['device'] and not volume_config['ephemeral']:
1102+ # asked for permanent storage but did not define volume ID
1103+        hookenv.log('Persistent storage was requested, but there is no '
1104+                    'volume defined for this unit.', hookenv.ERROR)
1105+ errors = True
1106+
1107+ unit_mount_name = hookenv.local_unit().replace('/', '-')
1108+ volume_config['mountpoint'] = os.path.join(MOUNT_BASE, unit_mount_name)
1109+
1110+ if errors:
1111+ return None
1112+ return volume_config
1113+
1114+
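`get_config()` above treats exactly two device/ephemeral combinations as errors: a volume mapped while ephemeral storage is requested, and neither a mapped volume nor ephemeral storage. A sketch of that validity check in isolation (the device path is a hypothetical example):

```python
def volume_config_valid(device, ephemeral):
    """Mirror of the error logic in get_config(): exactly one of a
    mapped device or ephemeral storage must be in effect."""
    if device and ephemeral:
        return False  # volume mapped but ephemeral storage requested
    if not device and not ephemeral:
        return False  # persistent storage requested with no volume defined
    return True

print(volume_config_valid('/dev/vdb', False))  # persistent with a device: valid
print(volume_config_valid(None, True))         # ephemeral with no device: valid
```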
1115+def mount_volume(config):
1116+ if os.path.exists(config['mountpoint']):
1117+ if not os.path.isdir(config['mountpoint']):
1118+ hookenv.log('Not a directory: {}'.format(config['mountpoint']))
1119+ raise VolumeConfigurationError()
1120+ else:
1121+ host.mkdir(config['mountpoint'])
1122+ if os.path.ismount(config['mountpoint']):
1123+ unmount_volume(config)
1124+ if not host.mount(config['device'], config['mountpoint'], persist=True):
1125+ raise VolumeConfigurationError()
1126+
1127+
1128+def unmount_volume(config):
1129+ if os.path.ismount(config['mountpoint']):
1130+ if not host.umount(config['mountpoint'], persist=True):
1131+ raise VolumeConfigurationError()
1132+
1133+
1134+def managed_mounts():
1135+ '''List of all mounted managed volumes'''
1136+ return filter(lambda mount: mount[0].startswith(MOUNT_BASE), host.mounts())
1137+
1138+
1139+def configure_volume(before_change=lambda: None, after_change=lambda: None):
1140+ '''Set up storage (or don't) according to the charm's volume configuration.
1141+ Returns the mount point or "ephemeral". before_change and after_change
1142+ are optional functions to be called if the volume configuration changes.
1143+ '''
1144+
1145+ config = get_config()
1146+ if not config:
1147+ hookenv.log('Failed to read volume configuration', hookenv.CRITICAL)
1148+ raise VolumeConfigurationError()
1149+
1150+ if config['ephemeral']:
1151+ if os.path.ismount(config['mountpoint']):
1152+ before_change()
1153+ unmount_volume(config)
1154+ after_change()
1155+ return 'ephemeral'
1156+ else:
1157+ # persistent storage
1158+ if os.path.ismount(config['mountpoint']):
1159+ mounts = dict(managed_mounts())
1160+ if mounts.get(config['mountpoint']) != config['device']:
1161+ before_change()
1162+ unmount_volume(config)
1163+ mount_volume(config)
1164+ after_change()
1165+ else:
1166+ before_change()
1167+ mount_volume(config)
1168+ after_change()
1169+ return config['mountpoint']
1170
1171=== added directory 'charms/precise/restish/hooks/charmhelpers/contrib/hahelpers'
1172=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/hahelpers/__init__.py'
1173=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/hahelpers/apache.py'
1174--- charms/precise/restish/hooks/charmhelpers/contrib/hahelpers/apache.py 1970-01-01 00:00:00 +0000
1175+++ charms/precise/restish/hooks/charmhelpers/contrib/hahelpers/apache.py 2014-01-27 18:01:12 +0000
1176@@ -0,0 +1,58 @@
1177+#
1178+# Copyright 2012 Canonical Ltd.
1179+#
1180+# This file is sourced from lp:openstack-charm-helpers
1181+#
1182+# Authors:
1183+# James Page <james.page@ubuntu.com>
1184+# Adam Gandelman <adamg@ubuntu.com>
1185+#
1186+
1187+import subprocess
1188+
1189+from charmhelpers.core.hookenv import (
1190+ config as config_get,
1191+ relation_get,
1192+ relation_ids,
1193+ related_units as relation_list,
1194+ log,
1195+ INFO,
1196+)
1197+
1198+
1199+def get_cert():
1200+ cert = config_get('ssl_cert')
1201+ key = config_get('ssl_key')
1202+ if not (cert and key):
1203+ log("Inspecting identity-service relations for SSL certificate.",
1204+ level=INFO)
1205+ cert = key = None
1206+ for r_id in relation_ids('identity-service'):
1207+ for unit in relation_list(r_id):
1208+ if not cert:
1209+ cert = relation_get('ssl_cert',
1210+ rid=r_id, unit=unit)
1211+ if not key:
1212+ key = relation_get('ssl_key',
1213+ rid=r_id, unit=unit)
1214+ return (cert, key)
1215+
1216+
1217+def get_ca_cert():
1218+ ca_cert = None
1219+ log("Inspecting identity-service relations for CA SSL certificate.",
1220+ level=INFO)
1221+ for r_id in relation_ids('identity-service'):
1222+ for unit in relation_list(r_id):
1223+ if not ca_cert:
1224+ ca_cert = relation_get('ca_cert',
1225+ rid=r_id, unit=unit)
1226+ return ca_cert
1227+
1228+
1229+def install_ca_cert(ca_cert):
1230+ if ca_cert:
1231+ with open('/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt',
1232+ 'w') as crt:
1233+ crt.write(ca_cert)
1234+ subprocess.check_call(['update-ca-certificates', '--fresh'])
1235
1236=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/hahelpers/cluster.py'
1237--- charms/precise/restish/hooks/charmhelpers/contrib/hahelpers/cluster.py 1970-01-01 00:00:00 +0000
1238+++ charms/precise/restish/hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-01-27 18:01:12 +0000
1239@@ -0,0 +1,183 @@
1240+#
1241+# Copyright 2012 Canonical Ltd.
1242+#
1243+# Authors:
1244+# James Page <james.page@ubuntu.com>
1245+# Adam Gandelman <adamg@ubuntu.com>
1246+#
1247+
1248+import subprocess
1249+import os
1250+
1251+from socket import gethostname as get_unit_hostname
1252+
1253+from charmhelpers.core.hookenv import (
1254+ log,
1255+ relation_ids,
1256+ related_units as relation_list,
1257+ relation_get,
1258+ config as config_get,
1259+ INFO,
1260+ ERROR,
1261+ unit_get,
1262+)
1263+
1264+
1265+class HAIncompleteConfig(Exception):
1266+ pass
1267+
1268+
1269+def is_clustered():
1270+ for r_id in (relation_ids('ha') or []):
1271+ for unit in (relation_list(r_id) or []):
1272+ clustered = relation_get('clustered',
1273+ rid=r_id,
1274+ unit=unit)
1275+ if clustered:
1276+ return True
1277+ return False
1278+
1279+
1280+def is_leader(resource):
1281+ cmd = [
1282+ "crm", "resource",
1283+ "show", resource
1284+ ]
1285+ try:
1286+ status = subprocess.check_output(cmd)
1287+ except subprocess.CalledProcessError:
1288+ return False
1289+ else:
1290+ if get_unit_hostname() in status:
1291+ return True
1292+ else:
1293+ return False
1294+
1295+
1296+def peer_units():
1297+ peers = []
1298+ for r_id in (relation_ids('cluster') or []):
1299+ for unit in (relation_list(r_id) or []):
1300+ peers.append(unit)
1301+ return peers
1302+
1303+
1304+def oldest_peer(peers):
1305+ local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1])
1306+ for peer in peers:
1307+ remote_unit_no = int(peer.split('/')[1])
1308+ if remote_unit_no < local_unit_no:
1309+ return False
1310+ return True
1311+
1312+
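`oldest_peer()` above elects the unit with the lowest unit number: the local unit leads only if no peer has a smaller number. The comparison can be sketched standalone (the `restish/N` unit names are hypothetical):

```python
def oldest_peer(peers, local_unit='restish/0'):
    # Same comparison as the helper above: return False as soon as any
    # peer carries a lower unit number than the local unit.
    local_no = int(local_unit.split('/')[1])
    return all(int(peer.split('/')[1]) >= local_no for peer in peers)

print(oldest_peer(['restish/1', 'restish/2']))              # local unit 0 leads
print(oldest_peer(['restish/1'], local_unit='restish/2'))   # peer 1 is older
```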
1313+def eligible_leader(resource):
1314+ if is_clustered():
1315+ if not is_leader(resource):
1316+ log('Deferring action to CRM leader.', level=INFO)
1317+ return False
1318+ else:
1319+ peers = peer_units()
1320+ if peers and not oldest_peer(peers):
1321+ log('Deferring action to oldest service unit.', level=INFO)
1322+ return False
1323+ return True
1324+
1325+
1326+def https():
1327+ '''
1328+    Determines whether enough data has been provided in configuration
1329+    or relation data to configure HTTPS.
1330+
1331+    returns: boolean
1332+ '''
1333+ if config_get('use-https') == "yes":
1334+ return True
1335+ if config_get('ssl_cert') and config_get('ssl_key'):
1336+ return True
1337+ for r_id in relation_ids('identity-service'):
1338+ for unit in relation_list(r_id):
1339+ rel_state = [
1340+ relation_get('https_keystone', rid=r_id, unit=unit),
1341+ relation_get('ssl_cert', rid=r_id, unit=unit),
1342+ relation_get('ssl_key', rid=r_id, unit=unit),
1343+ relation_get('ca_cert', rid=r_id, unit=unit),
1344+ ]
1345+ # NOTE: works around (LP: #1203241)
1346+ if (None not in rel_state) and ('' not in rel_state):
1347+ return True
1348+ return False
1349+
1350+
1351+def determine_api_port(public_port):
1352+ '''
1353+ Determine correct API server listening port based on
1354+ existence of HTTPS reverse proxy and/or haproxy.
1355+
1356+ public_port: int: standard public port for given service
1357+
1358+ returns: int: the correct listening port for the API service
1359+ '''
1360+ i = 0
1361+ if len(peer_units()) > 0 or is_clustered():
1362+ i += 1
1363+ if https():
1364+ i += 1
1365+ return public_port - (i * 10)
1366+
1367+
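`determine_api_port()` above steps the listening port down by 10 for each proxy layer sitting in front of the API service: one step when the unit is clustered or has peers (haproxy), and another when HTTPS termination is in place. A standalone sketch of that offset calculation (the port value is a hypothetical example):

```python
def determine_api_port(public_port, clustered, https):
    # Each proxy layer in front of the API claims the next-lower band
    # of 10 ports below the public port.
    i = 0
    if clustered:
        i += 1
    if https:
        i += 1
    return public_port - (i * 10)

print(determine_api_port(8776, clustered=True, https=True))   # two layers
print(determine_api_port(8776, clustered=False, https=False)) # no proxying
```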
1368+def determine_haproxy_port(public_port):
1369+ '''
1370+    Determine the correct proxy listening port based on the public port and
1371+    the existence of an HTTPS reverse proxy.
1372+
1373+ public_port: int: standard public port for given service
1374+
1375+ returns: int: the correct listening port for the HAProxy service
1376+ '''
1377+ i = 0
1378+ if https():
1379+ i += 1
1380+ return public_port - (i * 10)
1381+
1382+
1383+def get_hacluster_config():
1384+ '''
1385+ Obtains all relevant configuration from charm configuration required
1386+ for initiating a relation to hacluster:
1387+
1388+ ha-bindiface, ha-mcastport, vip, vip_iface, vip_cidr
1389+
1390+ returns: dict: A dict containing settings keyed by setting name.
1391+ raises: HAIncompleteConfig if settings are missing.
1392+ '''
1393+ settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'vip_iface', 'vip_cidr']
1394+ conf = {}
1395+ for setting in settings:
1396+ conf[setting] = config_get(setting)
1397+    # Collect any settings that were left unset.
1398+    missing = [s for s, v in conf.iteritems() if v is None]
1399+ if missing:
1400+ log('Insufficient config data to configure hacluster.', level=ERROR)
1401+ raise HAIncompleteConfig
1402+ return conf
1403+
1404+
1405+def canonical_url(configs, vip_setting='vip'):
1406+ '''
1407+ Returns the correct HTTP URL to this host given the state of HTTPS
1408+ configuration and hacluster.
1409+
1410+    :configs : OSTemplateRenderer: A config templating object to inspect for
1411+ a complete https context.
1412+ :vip_setting: str: Setting in charm config that specifies
1413+ VIP address.
1414+ '''
1415+ scheme = 'http'
1416+ if 'https' in configs.complete_contexts():
1417+ scheme = 'https'
1418+ if is_clustered():
1419+ addr = config_get(vip_setting)
1420+ else:
1421+ addr = unit_get('private-address')
1422+ return '%s://%s' % (scheme, addr)
1423
1424=== added directory 'charms/precise/restish/hooks/charmhelpers/contrib/jujugui'
1425=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/jujugui/IMPORT'
1426--- charms/precise/restish/hooks/charmhelpers/contrib/jujugui/IMPORT 1970-01-01 00:00:00 +0000
1427+++ charms/precise/restish/hooks/charmhelpers/contrib/jujugui/IMPORT 2014-01-27 18:01:12 +0000
1428@@ -0,0 +1,4 @@
1429+Source: lp:charms/juju-gui
1430+
1431+juju-gui/hooks/utils.py -> charm-helpers/charmhelpers/contrib/jujugui/utils.py
1432+juju-gui/tests/test_utils.py -> charm-helpers/tests/contrib/jujugui/test_utils.py
1433
1434=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/jujugui/__init__.py'
1435=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/jujugui/utils.py'
1436--- charms/precise/restish/hooks/charmhelpers/contrib/jujugui/utils.py 1970-01-01 00:00:00 +0000
1437+++ charms/precise/restish/hooks/charmhelpers/contrib/jujugui/utils.py 2014-01-27 18:01:12 +0000
1438@@ -0,0 +1,602 @@
1439+"""Juju GUI charm utilities."""
1440+
1441+__all__ = [
1442+ 'AGENT',
1443+ 'APACHE',
1444+ 'API_PORT',
1445+ 'CURRENT_DIR',
1446+ 'HAPROXY',
1447+ 'IMPROV',
1448+ 'JUJU_DIR',
1449+ 'JUJU_GUI_DIR',
1450+ 'JUJU_GUI_SITE',
1451+ 'JUJU_PEM',
1452+ 'WEB_PORT',
1453+ 'bzr_checkout',
1454+ 'chain',
1455+ 'cmd_log',
1456+ 'fetch_api',
1457+ 'fetch_gui',
1458+ 'find_missing_packages',
1459+ 'first_path_in_dir',
1460+ 'get_api_address',
1461+ 'get_npm_cache_archive_url',
1462+ 'get_release_file_url',
1463+ 'get_staging_dependencies',
1464+ 'get_zookeeper_address',
1465+ 'legacy_juju',
1466+ 'log_hook',
1467+ 'merge',
1468+ 'parse_source',
1469+ 'prime_npm_cache',
1470+ 'render_to_file',
1471+ 'save_or_create_certificates',
1472+ 'setup_apache',
1473+ 'setup_gui',
1474+ 'start_agent',
1475+ 'start_gui',
1476+ 'start_improv',
1477+ 'write_apache_config',
1478+]
1479+
1480+from contextlib import contextmanager
1481+import errno
1482+import json
1483+import os
1484+import logging
1485+import shutil
1486+from subprocess import CalledProcessError
1487+import tempfile
1488+from urlparse import urlparse
1489+
1490+import apt
1491+import tempita
1492+
1493+from launchpadlib.launchpad import Launchpad
1494+from shelltoolbox import (
1495+ Serializer,
1496+ apt_get_install,
1497+ command,
1498+ environ,
1499+ install_extra_repositories,
1500+ run,
1501+ script_name,
1502+ search_file,
1503+ su,
1504+)
1505+from charmhelpers.core.host import (
1506+ service_start,
1507+)
1508+from charmhelpers.core.hookenv import (
1509+ log,
1510+ config,
1511+ unit_get,
1512+)
1513+
1514+
1515+AGENT = 'juju-api-agent'
1516+APACHE = 'apache2'
1517+IMPROV = 'juju-api-improv'
1518+HAPROXY = 'haproxy'
1519+
1520+API_PORT = 8080
1521+WEB_PORT = 8000
1522+
1523+CURRENT_DIR = os.getcwd()
1524+JUJU_DIR = os.path.join(CURRENT_DIR, 'juju')
1525+JUJU_GUI_DIR = os.path.join(CURRENT_DIR, 'juju-gui')
1526+JUJU_GUI_SITE = '/etc/apache2/sites-available/juju-gui'
1527+JUJU_GUI_PORTS = '/etc/apache2/ports.conf'
1528+JUJU_PEM = 'juju.includes-private-key.pem'
1529+BUILD_REPOSITORIES = ('ppa:chris-lea/node.js-legacy',)
1530+DEB_BUILD_DEPENDENCIES = (
1531+ 'bzr', 'imagemagick', 'make', 'nodejs', 'npm',
1532+)
1533+DEB_STAGE_DEPENDENCIES = (
1534+ 'zookeeper',
1535+)
1536+
1537+
1538+# Store the configuration from one invocation to the next.
1539+config_json = Serializer('/tmp/config.json')
1540+# Bazaar checkout command.
1541+bzr_checkout = command('bzr', 'co', '--lightweight')
1542+# Whether or not the charm is deployed using juju-core.
1543+# If juju-core has been used to deploy the charm, an agent.conf file must
1544+# be present in the charm parent directory.
1545+legacy_juju = lambda: not os.path.exists(
1546+ os.path.join(CURRENT_DIR, '..', 'agent.conf'))
1547+
1548+
1549+def _get_build_dependencies():
1550+ """Install deb dependencies for building."""
1551+ log('Installing build dependencies.')
1552+ cmd_log(install_extra_repositories(*BUILD_REPOSITORIES))
1553+ cmd_log(apt_get_install(*DEB_BUILD_DEPENDENCIES))
1554+
1555+
1556+def get_api_address(unit_dir):
1557+ """Return the Juju API address stored in the uniter agent.conf file."""
1558+ import yaml # python-yaml is only installed if juju-core is used.
1559+ # XXX 2013-03-27 frankban bug=1161443:
1560+ # currently the uniter agent.conf file does not include the API
1561+ # address. For now retrieve it from the machine agent file.
1562+ base_dir = os.path.abspath(os.path.join(unit_dir, '..'))
1563+ for dirname in os.listdir(base_dir):
1564+ if dirname.startswith('machine-'):
1565+ agent_conf = os.path.join(base_dir, dirname, 'agent.conf')
1566+ break
1567+ else:
1568+ raise IOError('Juju agent configuration file not found.')
1569+ contents = yaml.safe_load(open(agent_conf))
1570+ return contents['apiinfo']['addrs'][0]
1571+
1572+
1573+def get_staging_dependencies():
1574+ """Install deb dependencies for the stage (improv) environment."""
1575+ log('Installing stage dependencies.')
1576+ cmd_log(apt_get_install(*DEB_STAGE_DEPENDENCIES))
1577+
1578+
1579+def first_path_in_dir(directory):
1580+ """Return the full path of the first file/dir in *directory*."""
1581+ return os.path.join(directory, os.listdir(directory)[0])
1582+
1583+
1584+def _get_by_attr(collection, attr, value):
1585+ """Return the first item in collection having attr == value.
1586+
1587+ Return None if the item is not found.
1588+ """
1589+ for item in collection:
1590+ if getattr(item, attr) == value:
1591+ return item
1592+
1593+
1594+def get_release_file_url(project, series_name, release_version):
1595+ """Return the URL of the release file hosted in Launchpad.
1596+
1597+ The returned URL points to a release file for the given project, series
1598+ name and release version.
1599+ The argument *project* is a project object as returned by launchpadlib.
1600+ The arguments *series_name* and *release_version* are strings. If
1601+ *release_version* is None, the URL of the latest release will be returned.
1602+ """
1603+ series = _get_by_attr(project.series, 'name', series_name)
1604+ if series is None:
1605+ raise ValueError('%r: series not found' % series_name)
1606+ # Releases are returned by Launchpad in reverse date order.
1607+ releases = list(series.releases)
1608+ if not releases:
1609+ raise ValueError('%r: series does not contain releases' % series_name)
1610+ if release_version is not None:
1611+ release = _get_by_attr(releases, 'version', release_version)
1612+ if release is None:
1613+ raise ValueError('%r: release not found' % release_version)
1614+ releases = [release]
1615+ for release in releases:
1616+ for file_ in release.files:
1617+ if str(file_).endswith('.tgz'):
1618+ return file_.file_link
1619+ raise ValueError('%r: file not found' % release_version)
1620+
1621+
1622+def get_zookeeper_address(agent_file_path):
1623+ """Retrieve the Zookeeper address contained in the given *agent_file_path*.
1624+
1625+ The *agent_file_path* is a path to a file containing a line similar to the
1626+ following::
1627+
1628+ env JUJU_ZOOKEEPER="address"
1629+ """
1630+ line = search_file('JUJU_ZOOKEEPER', agent_file_path).strip()
1631+ return line.split('=')[1].strip('"')
1632+
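The extraction boils down to splitting the matched `env` line on `=` and stripping the quotes; a sketch that feeds the line in directly rather than through shelltoolbox's `search_file`:

```python
def parse_zookeeper_line(line):
    # 'env JUJU_ZOOKEEPER="host:port"' -> 'host:port'
    return line.strip().split('=')[1].strip('"')
```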
1633+
1634+@contextmanager
1635+def log_hook():
1636+ """Log when a hook starts and stops its execution.
1637+
1638+ Also log to stdout possible CalledProcessError exceptions raised executing
1639+ the hook.
1640+ """
1641+ script = script_name()
1642+ log(">>> Entering {}".format(script))
1643+ try:
1644+ yield
1645+ except CalledProcessError as err:
1646+ log('Exception caught:')
1647+ log(err.output)
1648+ raise
1649+ finally:
1650+ log("<<< Exiting {}".format(script))
1651+
1652+
1653+def parse_source(source):
1654+ """Parse the ``juju-gui-source`` option.
1655+
1656+ Return a tuple of two elements representing info on how to deploy Juju GUI.
1657+ Examples:
1658+ - ('stable', None): latest stable release;
1659+ - ('stable', '0.1.0'): stable release v0.1.0;
1660+ - ('trunk', None): latest trunk release;
1661+ - ('trunk', '0.1.0+build.1'): trunk release v0.1.0 bzr revision 1;
1662+ - ('branch', 'lp:juju-gui'): release is made from a branch;
1663+ - ('url', 'http://example.com/gui'): release from a downloaded file.
1664+ """
1665+ if source.startswith('url:'):
1666+ source = source[4:]
1667+ # Support file paths, including relative paths.
1668+ if urlparse(source).scheme == '':
1669+ if not source.startswith('/'):
1670+ source = os.path.join(os.path.abspath(CURRENT_DIR), source)
1671+ source = "file://%s" % source
1672+ return 'url', source
1673+ if source in ('stable', 'trunk'):
1674+ return source, None
1675+ if source.startswith('lp:') or source.startswith('http://'):
1676+ return 'branch', source
1677+ if 'build' in source:
1678+ return 'trunk', source
1679+ return 'stable', source
1680+
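The branching above can be exercised standalone; a Python 3 sketch (the charm itself is Python 2, so `urllib.parse` replaces `urlparse`, and a `base_dir` argument stands in for `CURRENT_DIR`):

```python
import os
from urllib.parse import urlparse

def parse_source(source, base_dir='/tmp'):
    # Mirrors the charm's parse_source decision order.
    if source.startswith('url:'):
        source = source[4:]
        # Support file paths, including relative paths.
        if urlparse(source).scheme == '':
            if not source.startswith('/'):
                source = os.path.join(os.path.abspath(base_dir), source)
            source = 'file://%s' % source
        return 'url', source
    if source in ('stable', 'trunk'):
        return source, None
    if source.startswith('lp:') or source.startswith('http://'):
        return 'branch', source
    if 'build' in source:
        return 'trunk', source
    return 'stable', source
```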
1681+
1682+def render_to_file(template_name, context, destination):
1683+ """Render the given *template_name* into *destination* using *context*.
1684+
1685+ The tempita template language is used to render contents
1686+ (see http://pythonpaste.org/tempita/).
1687+ The argument *template_name* is the name or path of the template file:
1688+ it may be either a path relative to ``../config`` or an absolute path.
1689+ The argument *destination* is a file path.
1690+ The argument *context* is a dict-like object.
1691+ """
1692+ template_path = os.path.abspath(template_name)
1693+ template = tempita.Template.from_filename(template_path)
1694+ with open(destination, 'w') as stream:
1695+ stream.write(template.substitute(context))
1696+
1697+
1698+results_log = None
1699+
1700+
1701+def _setupLogging():
1702+ global results_log
1703+ if results_log is not None:
1704+ return
1705+ cfg = config()
1706+ logging.basicConfig(
1707+ filename=cfg['command-log-file'],
1708+ level=logging.INFO,
1709+ format="%(asctime)s: %(name)s@%(levelname)s %(message)s")
1710+ results_log = logging.getLogger('juju-gui')
1711+
1712+
1713+def cmd_log(results):
1714+ global results_log
1715+ if not results:
1716+ return
1717+ if results_log is None:
1718+ _setupLogging()
1719+ # Since 'results' may be multi-line output, start it on a separate line
1720+ # from the logger timestamp, etc.
1721+ results_log.info('\n' + results)
1722+
1723+
1724+def start_improv(staging_env, ssl_cert_path,
1725+ config_path='/etc/init/juju-api-improv.conf'):
1726+ """Start a simulated juju environment using ``improv.py``."""
1727+ log('Setting up staging start up script.')
1728+ context = {
1729+ 'juju_dir': JUJU_DIR,
1730+ 'keys': ssl_cert_path,
1731+ 'port': API_PORT,
1732+ 'staging_env': staging_env,
1733+ }
1734+ render_to_file('config/juju-api-improv.conf.template', context, config_path)
1735+ log('Starting the staging backend.')
1736+ with su('root'):
1737+ service_start(IMPROV)
1738+
1739+
1740+def start_agent(
1741+ ssl_cert_path, config_path='/etc/init/juju-api-agent.conf',
1742+ read_only=False):
1743+ """Start the Juju agent and connect to the current environment."""
1744+ # Retrieve the Zookeeper address from the start up script.
1745+ unit_dir = os.path.realpath(os.path.join(CURRENT_DIR, '..'))
1746+ agent_file = '/etc/init/juju-{0}.conf'.format(os.path.basename(unit_dir))
1747+ zookeeper = get_zookeeper_address(agent_file)
1748+ log('Setting up API agent start up script.')
1749+ context = {
1750+ 'juju_dir': JUJU_DIR,
1751+ 'keys': ssl_cert_path,
1752+ 'port': API_PORT,
1753+ 'zookeeper': zookeeper,
1754+ 'read_only': read_only
1755+ }
1756+ render_to_file('config/juju-api-agent.conf.template', context, config_path)
1757+ log('Starting API agent.')
1758+ with su('root'):
1759+ service_start(AGENT)
1760+
1761+
1762+def start_gui(
1763+ console_enabled, login_help, readonly, in_staging, ssl_cert_path,
1764+ charmworld_url, serve_tests, haproxy_path='/etc/haproxy/haproxy.cfg',
1765+ config_js_path=None, secure=True, sandbox=False):
1766+ """Set up and start the Juju GUI server."""
1767+ with su('root'):
1768+ run('chown', '-R', 'ubuntu:', JUJU_GUI_DIR)
1769+ # XXX 2013-02-05 frankban bug=1116320:
1770+ # External insecure resources are still loaded when testing in the
1771+ # debug environment. For now, switch to the production environment if
1772+ # the charm is configured to serve tests.
1773+ if in_staging and not serve_tests:
1774+ build_dirname = 'build-debug'
1775+ else:
1776+ build_dirname = 'build-prod'
1777+ build_dir = os.path.join(JUJU_GUI_DIR, build_dirname)
1778+ log('Generating the Juju GUI configuration file.')
1779+ is_legacy_juju = legacy_juju()
1780+ # Credentials default to anonymous access; staging and sandbox
1781+ # modes use the hard-coded admin/admin account.
1782+ user, password = None, None
1783+ if (is_legacy_juju and in_staging) or sandbox:
1784+ user, password = 'admin', 'admin'
1785+
1786+ api_backend = 'python' if is_legacy_juju else 'go'
1787+ if secure:
1788+ protocol = 'wss'
1789+ else:
1790+ log('Running in insecure mode! Port 80 will serve unencrypted.')
1791+ protocol = 'ws'
1792+
1793+ context = {
1794+ 'raw_protocol': protocol,
1795+ 'address': unit_get('public-address'),
1796+ 'console_enabled': json.dumps(console_enabled),
1797+ 'login_help': json.dumps(login_help),
1798+ 'password': json.dumps(password),
1799+ 'api_backend': json.dumps(api_backend),
1800+ 'readonly': json.dumps(readonly),
1801+ 'user': json.dumps(user),
1802+ 'protocol': json.dumps(protocol),
1803+ 'sandbox': json.dumps(sandbox),
1804+ 'charmworld_url': json.dumps(charmworld_url),
1805+ }
1806+ if config_js_path is None:
1807+ config_js_path = os.path.join(
1808+ build_dir, 'juju-ui', 'assets', 'config.js')
1809+ render_to_file('config/config.js.template', context, config_js_path)
1810+
1811+ write_apache_config(build_dir, serve_tests)
1812+
1813+ log('Generating haproxy configuration file.')
1814+ if is_legacy_juju:
1815+ # The PyJuju API agent is listening on localhost.
1816+ api_address = '127.0.0.1:{0}'.format(API_PORT)
1817+ else:
1818+ # Retrieve the juju-core API server address.
1819+ api_address = get_api_address(os.path.join(CURRENT_DIR, '..'))
1820+ context = {
1821+ 'api_address': api_address,
1822+ 'api_pem': JUJU_PEM,
1823+ 'legacy_juju': is_legacy_juju,
1824+ 'ssl_cert_path': ssl_cert_path,
1825+ # In PyJuju environments, use the same certificate for both HTTPS and
1826+ # WebSocket connections. In juju-core the system already has the proper
1827+ # certificate installed.
1828+ 'web_pem': JUJU_PEM,
1829+ 'web_port': WEB_PORT,
1830+ 'secure': secure
1831+ }
1832+ render_to_file('config/haproxy.cfg.template', context, haproxy_path)
1833+ log('Starting Juju GUI.')
1834+
1835+
1836+def write_apache_config(build_dir, serve_tests=False):
1837+ log('Generating the apache site configuration file.')
1838+ context = {
1839+ 'port': WEB_PORT,
1840+ 'serve_tests': serve_tests,
1841+ 'server_root': build_dir,
1842+ 'tests_root': os.path.join(JUJU_GUI_DIR, 'test', ''),
1843+ }
1844+ render_to_file('config/apache-ports.template', context, JUJU_GUI_PORTS)
1845+ render_to_file('config/apache-site.template', context, JUJU_GUI_SITE)
1846+
1847+
1848+def get_npm_cache_archive_url(Launchpad=Launchpad):
1849+ """Figure out the URL of the most recent NPM cache archive on Launchpad."""
1850+ launchpad = Launchpad.login_anonymously('Juju GUI charm', 'production')
1851+ project = launchpad.projects['juju-gui']
1852+ # Find the URL of the most recently created NPM cache archive.
1853+ npm_cache_url = get_release_file_url(project, 'npm-cache', None)
1854+ return npm_cache_url
1855+
1856+
1857+def prime_npm_cache(npm_cache_url):
1858+ """Download NPM cache archive and prime the NPM cache with it."""
1859+ # Download the cache archive and then uncompress it into the NPM cache.
1860+ npm_cache_archive = os.path.join(CURRENT_DIR, 'npm-cache.tgz')
1861+ cmd_log(run('curl', '-L', '-o', npm_cache_archive, npm_cache_url))
1862+ npm_cache_dir = os.path.expanduser('~/.npm')
1863+ # The NPM cache directory probably does not exist, so make it if not.
1864+ try:
1865+ os.mkdir(npm_cache_dir)
1866+ except OSError as e:
1867+ # If the directory already exists then ignore the error.
1868+ if e.errno != errno.EEXIST:
1869+ raise
1870+ uncompress = command('tar', '-x', '-z', '-C', npm_cache_dir, '-f')
1871+ cmd_log(uncompress(npm_cache_archive))
1872+
1873+
1874+def fetch_gui(juju_gui_source, logpath):
1875+ """Retrieve the Juju GUI release/branch."""
1876+ # Retrieve a Juju GUI release.
1877+ origin, version_or_branch = parse_source(juju_gui_source)
1878+ if origin == 'branch':
1879+ # Make sure we have the dependencies necessary for us to actually make
1880+ # a build.
1881+ _get_build_dependencies()
1882+ # Create a release starting from a branch.
1883+ juju_gui_source_dir = os.path.join(CURRENT_DIR, 'juju-gui-source')
1884+ log('Retrieving Juju GUI source checkout from %s.' % version_or_branch)
1885+ cmd_log(run('rm', '-rf', juju_gui_source_dir))
1886+ cmd_log(bzr_checkout(version_or_branch, juju_gui_source_dir))
1887+ log('Preparing a Juju GUI release.')
1888+ logdir = os.path.dirname(logpath)
1889+ fd, name = tempfile.mkstemp(prefix='make-distfile-', dir=logdir)
1890+ log('Output from "make distfile" sent to %s' % name)
1891+ with environ(NO_BZR='1'):
1892+ run('make', '-C', juju_gui_source_dir, 'distfile',
1893+ stdout=fd, stderr=fd)
1894+ release_tarball = first_path_in_dir(
1895+ os.path.join(juju_gui_source_dir, 'releases'))
1896+ else:
1897+ log('Retrieving Juju GUI release.')
1898+ if origin == 'url':
1899+ file_url = version_or_branch
1900+ else:
1901+ # Retrieve a release from Launchpad.
1902+ launchpad = Launchpad.login_anonymously(
1903+ 'Juju GUI charm', 'production')
1904+ project = launchpad.projects['juju-gui']
1905+ file_url = get_release_file_url(project, origin, version_or_branch)
1906+ log('Downloading release file from %s.' % file_url)
1907+ release_tarball = os.path.join(CURRENT_DIR, 'release.tgz')
1908+ cmd_log(run('curl', '-L', '-o', release_tarball, file_url))
1909+ return release_tarball
1910+
1911+
1912+def fetch_api(juju_api_branch):
1913+ """Retrieve the Juju branch."""
1914+ # Retrieve Juju API source checkout.
1915+ log('Retrieving Juju API source checkout.')
1916+ cmd_log(run('rm', '-rf', JUJU_DIR))
1917+ cmd_log(bzr_checkout(juju_api_branch, JUJU_DIR))
1918+
1919+
1920+def setup_gui(release_tarball):
1921+ """Set up Juju GUI."""
1922+ # Uncompress the release tarball.
1923+ log('Installing Juju GUI.')
1924+ release_dir = os.path.join(CURRENT_DIR, 'release')
1925+ cmd_log(run('rm', '-rf', release_dir))
1926+ os.mkdir(release_dir)
1927+ uncompress = command('tar', '-x', '-z', '-C', release_dir, '-f')
1928+ cmd_log(uncompress(release_tarball))
1929+ # Link the Juju GUI dir to the contents of the release tarball.
1930+ cmd_log(run('ln', '-sf', first_path_in_dir(release_dir), JUJU_GUI_DIR))
1931+
1932+
1933+def setup_apache():
1934+ """Set up apache."""
1935+ log('Setting up apache.')
1936+ if not os.path.exists(JUJU_GUI_SITE):
1937+ cmd_log(run('touch', JUJU_GUI_SITE))
1938+ cmd_log(run('chown', 'ubuntu:', JUJU_GUI_SITE))
1939+ cmd_log(
1940+ run('ln', '-s', JUJU_GUI_SITE,
1941+ '/etc/apache2/sites-enabled/juju-gui'))
1942+
1943+ if not os.path.exists(JUJU_GUI_PORTS):
1944+ cmd_log(run('touch', JUJU_GUI_PORTS))
1945+ cmd_log(run('chown', 'ubuntu:', JUJU_GUI_PORTS))
1946+
1947+ with su('root'):
1948+ run('a2dissite', 'default')
1949+ run('a2ensite', 'juju-gui')
1950+
1951+
1952+def save_or_create_certificates(
1953+ ssl_cert_path, ssl_cert_contents, ssl_key_contents):
1954+ """Generate the SSL certificates.
1955+
1956+ If both *ssl_cert_contents* and *ssl_key_contents* are provided, use them
1957+ as certificates; otherwise, generate them.
1958+
1959+ Also create a pem file, suitable for use in the haproxy configuration,
1960+ concatenating the key and the certificate files.
1961+ """
1962+ crt_path = os.path.join(ssl_cert_path, 'juju.crt')
1963+ key_path = os.path.join(ssl_cert_path, 'juju.key')
1964+ if not os.path.exists(ssl_cert_path):
1965+ os.makedirs(ssl_cert_path)
1966+ if ssl_cert_contents and ssl_key_contents:
1967+ # Save the provided certificates.
1968+ with open(crt_path, 'w') as cert_file:
1969+ cert_file.write(ssl_cert_contents)
1970+ with open(key_path, 'w') as key_file:
1971+ key_file.write(ssl_key_contents)
1972+ else:
1973+ # Generate certificates.
1974+ # See http://superuser.com/questions/226192/openssl-without-prompt
1975+ cmd_log(run(
1976+ 'openssl', 'req', '-new', '-newkey', 'rsa:4096',
1977+ '-days', '365', '-nodes', '-x509', '-subj',
1978+ # These are arbitrary test values for the certificate.
1979+ '/C=GB/ST=Juju/L=GUI/O=Ubuntu/CN=juju.ubuntu.com',
1980+ '-keyout', key_path, '-out', crt_path))
1981+ # Generate the pem file.
1982+ pem_path = os.path.join(ssl_cert_path, JUJU_PEM)
1983+ if os.path.exists(pem_path):
1984+ os.remove(pem_path)
1985+ with open(pem_path, 'w') as pem_file:
1986+ shutil.copyfileobj(open(key_path), pem_file)
1987+ shutil.copyfileobj(open(crt_path), pem_file)
1988+
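The pem step is a plain concatenation of key then certificate; a sketch using temporary files (the paths here are illustrative, not the charm's):

```python
import os
import tempfile

def write_pem(key_path, crt_path, pem_path):
    # haproxy expects the private key and the certificate
    # concatenated into a single pem file, key first.
    with open(pem_path, 'w') as pem_file:
        for path in (key_path, crt_path):
            with open(path) as part:
                pem_file.write(part.read())

tmp = tempfile.mkdtemp()
key_path = os.path.join(tmp, 'juju.key')
crt_path = os.path.join(tmp, 'juju.crt')
pem_path = os.path.join(tmp, 'juju.pem')
with open(key_path, 'w') as f:
    f.write('KEY\n')
with open(crt_path, 'w') as f:
    f.write('CRT\n')
write_pem(key_path, crt_path, pem_path)
```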
1989+
1990+def find_missing_packages(*packages):
1991+ """Given a list of packages, return the packages which are not installed.
1992+ """
1993+ cache = apt.Cache()
1994+ missing = set()
1995+ for pkg_name in packages:
1996+ try:
1997+ pkg = cache[pkg_name]
1998+ except KeyError:
1999+ missing.add(pkg_name)
2000+ continue
2001+ if pkg.is_installed:
2002+ continue
2003+ missing.add(pkg_name)
2004+ return missing
2005+
2006+
2007+## Backend support decorators
2008+
2009+def chain(name):
2010+ """Helper method to compose a set of mixin objects into a callable.
2011+
2012+ Each method is called in the context of its mixin instance, and its
2013+ argument is the Backend instance.
2014+ """
2015+ # Chain method calls through all implementing mixins.
2016+ def method(self):
2017+ for mixin in self.mixins:
2018+ a_callable = getattr(type(mixin), name, None)
2019+ if a_callable:
2020+ a_callable(mixin, self)
2021+
2022+ method.__name__ = name
2023+ return method
2024+
2025+
2026+def merge(name):
2027+ """Helper to merge a property from a set of strategy objects
2028+ into a unified set.
2029+ """
2030+ # Return merged property from every providing mixin as a set.
2031+ @property
2032+ def method(self):
2033+ result = set()
2034+ for mixin in self.mixins:
2035+ segment = getattr(type(mixin), name, None)
2036+ if segment and isinstance(segment, (list, tuple, set)):
2037+ result |= set(segment)
2038+
2039+ return result
2040+ return method
2041
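The two decorators compose a set of mixins into a single backend object: `chain` fans one call out to every mixin, `merge` unions a collection attribute. A self-contained demonstration with two hypothetical mixins (`AptMixin` and `NpmMixin` are illustrative names, not part of the charm):

```python
def chain(name):
    # Call `name` on each mixin, passing the Backend instance.
    def method(self):
        for mixin in self.mixins:
            a_callable = getattr(type(mixin), name, None)
            if a_callable:
                a_callable(mixin, self)
    method.__name__ = name
    return method

def merge(name):
    # Union the `name` collection from every mixin into one set.
    @property
    def method(self):
        result = set()
        for mixin in self.mixins:
            segment = getattr(type(mixin), name, None)
            if segment and isinstance(segment, (list, tuple, set)):
                result |= set(segment)
        return result
    return method

class AptMixin(object):
    dependencies = ('curl',)
    def install(self, backend):
        backend.log.append('apt')

class NpmMixin(object):
    dependencies = ('nodejs', 'npm')
    def install(self, backend):
        backend.log.append('npm')

class Backend(object):
    def __init__(self, mixins):
        self.mixins = mixins
        self.log = []
    install = chain('install')
    dependencies = merge('dependencies')
```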
2042=== added directory 'charms/precise/restish/hooks/charmhelpers/contrib/network'
2043=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/network/__init__.py'
2044=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/network/ip.py'
2045--- charms/precise/restish/hooks/charmhelpers/contrib/network/ip.py 1970-01-01 00:00:00 +0000
2046+++ charms/precise/restish/hooks/charmhelpers/contrib/network/ip.py 2014-01-27 18:01:12 +0000
2047@@ -0,0 +1,69 @@
2048+import sys
2049+
2050+from charmhelpers.fetch import apt_install
2051+from charmhelpers.core.hookenv import (
2052+ ERROR, log,
2053+)
2054+
2055+try:
2056+ import netifaces
2057+except ImportError:
2058+ apt_install('python-netifaces')
2059+ import netifaces
2060+
2061+try:
2062+ import netaddr
2063+except ImportError:
2064+ apt_install('python-netaddr')
2065+ import netaddr
2066+
2067+
2068+def _validate_cidr(network):
2069+ try:
2070+ netaddr.IPNetwork(network)
2071+ except (netaddr.core.AddrFormatError, ValueError):
2072+ raise ValueError("Network (%s) is not in CIDR presentation format" %
2073+ network)
2074+
2075+
2076+def get_address_in_network(network, fallback=None, fatal=False):
2077+ """
2078+ Get an IPv4 address within the network from the host.
2079+
2080+ Args:
2081+ network (str): CIDR presentation format. For example,
2082+ '192.168.1.0/24'.
2083+ fallback (str): If no address is found, return fallback.
2084+ fatal (boolean): If no address is found, fallback is not
2085+ set, and fatal is True, then exit(1).
2086+ """
2087+
2088+ def not_found_error_out():
2089+ log("No IP address found in network: %s" % network,
2090+ level=ERROR)
2091+ sys.exit(1)
2092+
2093+ if network is None:
2094+ if fallback is not None:
2095+ return fallback
2096+ if fatal:
2097+ not_found_error_out()
2098+ return None
2099+
2100+ _validate_cidr(network)
2101+ for iface in netifaces.interfaces():
2102+ addresses = netifaces.ifaddresses(iface)
2103+ if netifaces.AF_INET in addresses:
2104+ addr = addresses[netifaces.AF_INET][0]['addr']
2105+ netmask = addresses[netifaces.AF_INET][0]['netmask']
2106+ cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
2107+ if cidr in netaddr.IPNetwork(network):
2108+ return str(cidr.ip)
2109+
2110+ if fallback is not None:
2111+ return fallback
2112+
2113+ if fatal:
2114+ not_found_error_out()
2115+
2116+ return None
2117
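The interface scan above reduces to a CIDR membership test. The charm depends on netifaces/netaddr; the same check can be sketched with the stdlib `ipaddress` module:

```python
import ipaddress

def address_in_network(network, addr, netmask):
    # Interpret the interface's address/netmask pair and test whether
    # the address falls inside the target CIDR block.
    iface = ipaddress.ip_interface('%s/%s' % (addr, netmask))
    return iface.ip in ipaddress.ip_network(network)
```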
2118=== added directory 'charms/precise/restish/hooks/charmhelpers/contrib/network/ovs'
2119=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/network/ovs/__init__.py'
2120--- charms/precise/restish/hooks/charmhelpers/contrib/network/ovs/__init__.py 1970-01-01 00:00:00 +0000
2121+++ charms/precise/restish/hooks/charmhelpers/contrib/network/ovs/__init__.py 2014-01-27 18:01:12 +0000
2122@@ -0,0 +1,75 @@
2123+''' Helpers for interacting with OpenvSwitch '''
2124+import subprocess
2125+import os
2126+from charmhelpers.core.hookenv import (
2127+ log, WARNING
2128+)
2129+from charmhelpers.core.host import (
2130+ service
2131+)
2132+
2133+
2134+def add_bridge(name):
2135+ ''' Add the named bridge to openvswitch '''
2136+ log('Creating bridge {}'.format(name))
2137+ subprocess.check_call(["ovs-vsctl", "--", "--may-exist", "add-br", name])
2138+
2139+
2140+def del_bridge(name):
2141+ ''' Delete the named bridge from openvswitch '''
2142+ log('Deleting bridge {}'.format(name))
2143+ subprocess.check_call(["ovs-vsctl", "--", "--if-exists", "del-br", name])
2144+
2145+
2146+def add_bridge_port(name, port):
2147+ ''' Add a port to the named openvswitch bridge '''
2148+ log('Adding port {} to bridge {}'.format(port, name))
2149+ subprocess.check_call(["ovs-vsctl", "--", "--may-exist", "add-port",
2150+ name, port])
2151+ subprocess.check_call(["ip", "link", "set", port, "up"])
2152+
2153+
2154+def del_bridge_port(name, port):
2155+ ''' Delete a port from the named openvswitch bridge '''
2156+ log('Deleting port {} from bridge {}'.format(port, name))
2157+ subprocess.check_call(["ovs-vsctl", "--", "--if-exists", "del-port",
2158+ name, port])
2159+ subprocess.check_call(["ip", "link", "set", port, "down"])
2160+
2161+
2162+def set_manager(manager):
2163+ ''' Set the controller for the local openvswitch '''
2164+ log('Setting manager for local ovs to {}'.format(manager))
2165+ subprocess.check_call(['ovs-vsctl', 'set-manager',
2166+ 'ssl:{}'.format(manager)])
2167+
2168+
2169+CERT_PATH = '/etc/openvswitch/ovsclient-cert.pem'
2170+
2171+
2172+def get_certificate():
2173+ ''' Read openvswitch certificate from disk '''
2174+ if os.path.exists(CERT_PATH):
2175+ log('Reading ovs certificate from {}'.format(CERT_PATH))
2176+ with open(CERT_PATH, 'r') as cert:
2177+ full_cert = cert.read()
2178+ begin_marker = "-----BEGIN CERTIFICATE-----"
2179+ end_marker = "-----END CERTIFICATE-----"
2180+ begin_index = full_cert.find(begin_marker)
2181+ end_index = full_cert.rfind(end_marker)
2182+ if end_index == -1 or begin_index == -1:
2183+ raise RuntimeError("Certificate does not contain valid begin"
2184+ " and end markers.")
2185+ full_cert = full_cert[begin_index:(end_index + len(end_marker))]
2186+ return full_cert
2187+ else:
2188+ log('Certificate not found', level=WARNING)
2189+ return None
2190+
2191+
2192+def full_restart():
2193+ ''' Full restart and reload of openvswitch '''
2194+ if os.path.exists('/etc/init/openvswitch-force-reload-kmod.conf'):
2195+ service('start', 'openvswitch-force-reload-kmod')
2196+ else:
2197+ service('force-reload-kmod', 'openvswitch-switch')
2198
2199=== added directory 'charms/precise/restish/hooks/charmhelpers/contrib/openstack'
2200=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/openstack/__init__.py'
2201=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/openstack/alternatives.py'
2202--- charms/precise/restish/hooks/charmhelpers/contrib/openstack/alternatives.py 1970-01-01 00:00:00 +0000
2203+++ charms/precise/restish/hooks/charmhelpers/contrib/openstack/alternatives.py 2014-01-27 18:01:12 +0000
2204@@ -0,0 +1,17 @@
2205+''' Helper for managing alternatives for file conflict resolution '''
2206+
2207+import subprocess
2208+import shutil
2209+import os
2210+
2211+
2212+def install_alternative(name, target, source, priority=50):
2213+ ''' Install alternative configuration '''
2214+ if (os.path.exists(target) and not os.path.islink(target)):
2215+ # Move existing file/directory away before installing
2216+ shutil.move(target, '{}.bak'.format(target))
2217+ cmd = [
2218+ 'update-alternatives', '--force', '--install',
2219+ target, name, source, str(priority)
2220+ ]
2221+ subprocess.check_call(cmd)
2222
2223=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/openstack/context.py'
2224--- charms/precise/restish/hooks/charmhelpers/contrib/openstack/context.py 1970-01-01 00:00:00 +0000
2225+++ charms/precise/restish/hooks/charmhelpers/contrib/openstack/context.py 2014-01-27 18:01:12 +0000
2226@@ -0,0 +1,577 @@
2227+import json
2228+import os
2229+
2230+from base64 import b64decode
2231+
2232+from subprocess import (
2233+ check_call
2234+)
2235+
2236+
2237+from charmhelpers.fetch import (
2238+ apt_install,
2239+ filter_installed_packages,
2240+)
2241+
2242+from charmhelpers.core.hookenv import (
2243+ config,
2244+ local_unit,
2245+ log,
2246+ relation_get,
2247+ relation_ids,
2248+ related_units,
2249+ unit_get,
2250+ unit_private_ip,
2251+ ERROR,
2252+)
2253+
2254+from charmhelpers.contrib.hahelpers.cluster import (
2255+ determine_api_port,
2256+ determine_haproxy_port,
2257+ https,
2258+ is_clustered,
2259+ peer_units,
2260+)
2261+
2262+from charmhelpers.contrib.hahelpers.apache import (
2263+ get_cert,
2264+ get_ca_cert,
2265+)
2266+
2267+from charmhelpers.contrib.openstack.neutron import (
2268+ neutron_plugin_attribute,
2269+)
2270+
2271+CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
2272+
2273+
2274+class OSContextError(Exception):
2275+ pass
2276+
2277+
2278+def ensure_packages(packages):
2279+ '''Install but do not upgrade required plugin packages'''
2280+ required = filter_installed_packages(packages)
2281+ if required:
2282+ apt_install(required, fatal=True)
2283+
2284+
2285+def context_complete(ctxt):
2286+ _missing = []
2287+ for k, v in ctxt.iteritems():
2288+ if v is None or v == '':
2289+ _missing.append(k)
2290+ if _missing:
2291+ log('Missing required data: %s' % ' '.join(_missing), level='INFO')
2292+ return False
2293+ return True
2294+
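`context_complete` simply rejects any context with an unset value; a Python 3 restatement (`items` replacing the diff's Python 2 `iteritems`):

```python
def context_complete(ctxt):
    # A context is only usable once every setting has a value.
    missing = [k for k, v in ctxt.items() if v is None or v == '']
    return not missing
```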
2295+
2296+class OSContextGenerator(object):
2297+ interfaces = []
2298+
2299+ def __call__(self):
2300+ raise NotImplementedError
2301+
2302+
2303+class SharedDBContext(OSContextGenerator):
2304+ interfaces = ['shared-db']
2305+
2306+ def __init__(self, database=None, user=None, relation_prefix=None):
2307+ '''
2308+ Allows inspecting relation for settings prefixed with relation_prefix.
2309+ This is useful for parsing access for multiple databases returned via
2310+ the shared-db interface (eg, nova_password, quantum_password)
2311+ '''
2312+ self.relation_prefix = relation_prefix
2313+ self.database = database
2314+ self.user = user
2315+
2316+ def __call__(self):
2317+ self.database = self.database or config('database')
2318+ self.user = self.user or config('database-user')
2319+ if None in [self.database, self.user]:
2320+ log('Could not generate shared_db context. '
2321+ 'Missing required charm config options. '
2322+ '(database name and user)')
2323+ raise OSContextError
2324+ ctxt = {}
2325+
2326+ password_setting = 'password'
2327+ if self.relation_prefix:
2328+ password_setting = self.relation_prefix + '_password'
2329+
2330+ for rid in relation_ids('shared-db'):
2331+ for unit in related_units(rid):
2332+ passwd = relation_get(password_setting, rid=rid, unit=unit)
2333+ ctxt = {
2334+ 'database_host': relation_get('db_host', rid=rid,
2335+ unit=unit),
2336+ 'database': self.database,
2337+ 'database_user': self.user,
2338+ 'database_password': passwd,
2339+ }
2340+ if context_complete(ctxt):
2341+ return ctxt
2342+ return {}
2343+
2344+
2345+class IdentityServiceContext(OSContextGenerator):
2346+ interfaces = ['identity-service']
2347+
2348+ def __call__(self):
2349+ log('Generating template context for identity-service')
2350+ ctxt = {}
2351+
2352+ for rid in relation_ids('identity-service'):
2353+ for unit in related_units(rid):
2354+ ctxt = {
2355+ 'service_port': relation_get('service_port', rid=rid,
2356+ unit=unit),
2357+ 'service_host': relation_get('service_host', rid=rid,
2358+ unit=unit),
2359+ 'auth_host': relation_get('auth_host', rid=rid, unit=unit),
2360+ 'auth_port': relation_get('auth_port', rid=rid, unit=unit),
2361+ 'admin_tenant_name': relation_get('service_tenant',
2362+ rid=rid, unit=unit),
2363+ 'admin_user': relation_get('service_username', rid=rid,
2364+ unit=unit),
2365+ 'admin_password': relation_get('service_password', rid=rid,
2366+ unit=unit),
2367+ # XXX: Hard-coded http.
2368+ 'service_protocol': 'http',
2369+ 'auth_protocol': 'http',
2370+ }
2371+ if context_complete(ctxt):
2372+ return ctxt
2373+ return {}
2374+
2375+
2376+class AMQPContext(OSContextGenerator):
2377+ interfaces = ['amqp']
2378+
2379+ def __call__(self):
2380+ log('Generating template context for amqp')
2381+ conf = config()
2382+ try:
2383+ username = conf['rabbit-user']
2384+ vhost = conf['rabbit-vhost']
2385+ except KeyError as e:
2386+ log('Could not generate amqp context. '
2387+ 'Missing required charm config options: %s.' % e)
2388+ raise OSContextError
2389+
2390+ ctxt = {}
2391+ for rid in relation_ids('amqp'):
2392+ for unit in related_units(rid):
2393+ if relation_get('clustered', rid=rid, unit=unit):
2394+ ctxt['clustered'] = True
2395+ ctxt['rabbitmq_host'] = relation_get('vip', rid=rid,
2396+ unit=unit)
2397+ else:
2398+ ctxt['rabbitmq_host'] = relation_get('private-address',
2399+ rid=rid, unit=unit)
2400+ ctxt.update({
2401+ 'rabbitmq_user': username,
2402+ 'rabbitmq_password': relation_get('password', rid=rid,
2403+ unit=unit),
2404+ 'rabbitmq_virtual_host': vhost,
2405+ })
2406+ if context_complete(ctxt):
2407+ # Sufficient information found = break out!
2408+ break
2409+ # Used for active/active rabbitmq >= grizzly
2410+ if 'clustered' not in ctxt and len(related_units(rid)) > 1:
2411+ rabbitmq_hosts = []
2412+ for unit in related_units(rid):
2413+ rabbitmq_hosts.append(relation_get('private-address',
2414+ rid=rid, unit=unit))
2415+ ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts)
2416+ if not context_complete(ctxt):
2417+ return {}
2418+ else:
2419+ return ctxt
2420+
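The host-selection rule in `AMQPContext` above can be sketched on its own: a clustered RabbitMQ deployment advertises a VIP on the relation, while a standalone unit advertises its private address. Relation data is modelled here as a plain dict; the key names match the diff, but the sample addresses are illustrative.

```python
# Hypothetical Python 3 sketch of AMQPContext's host selection:
# prefer the cluster VIP when the 'clustered' flag is set, otherwise
# fall back to the unit's private address.
def rabbitmq_host(relation_data):
    if relation_data.get('clustered'):
        return relation_data['vip']
    return relation_data['private-address']

assert rabbitmq_host({'clustered': 'true', 'vip': '10.0.0.100',
                      'private-address': '10.0.0.5'}) == '10.0.0.100'
assert rabbitmq_host({'private-address': '10.0.0.5'}) == '10.0.0.5'
```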
2421+
2422+class CephContext(OSContextGenerator):
2423+ interfaces = ['ceph']
2424+
2425+ def __call__(self):
2426+ '''This generates context for /etc/ceph/ceph.conf templates'''
2427+ if not relation_ids('ceph'):
2428+ return {}
2429+ log('Generating template context for ceph')
2430+ mon_hosts = []
2431+ auth = None
2432+ key = None
2433+ for rid in relation_ids('ceph'):
2434+ for unit in related_units(rid):
2435+ mon_hosts.append(relation_get('private-address', rid=rid,
2436+ unit=unit))
2437+ auth = relation_get('auth', rid=rid, unit=unit)
2438+ key = relation_get('key', rid=rid, unit=unit)
2439+
2440+ ctxt = {
2441+ 'mon_hosts': ' '.join(mon_hosts),
2442+ 'auth': auth,
2443+ 'key': key,
2444+ }
2445+
2446+ if not os.path.isdir('/etc/ceph'):
2447+ os.mkdir('/etc/ceph')
2448+
2449+ if not context_complete(ctxt):
2450+ return {}
2451+
2452+ ensure_packages(['ceph-common'])
2453+
2454+ return ctxt
2455+
2456+
2457+class HAProxyContext(OSContextGenerator):
2458+ interfaces = ['cluster']
2459+
2460+ def __call__(self):
2461+ '''
2462+ Builds half a context for the haproxy template, which describes
2463+ all peers to be included in the cluster. Each charm needs to include
2464+ its own context generator that describes the port mapping.
2465+ '''
2466+ if not relation_ids('cluster'):
2467+ return {}
2468+
2469+ cluster_hosts = {}
2470+ l_unit = local_unit().replace('/', '-')
2471+ cluster_hosts[l_unit] = unit_get('private-address')
2472+
2473+ for rid in relation_ids('cluster'):
2474+ for unit in related_units(rid):
2475+ _unit = unit.replace('/', '-')
2476+ addr = relation_get('private-address', rid=rid, unit=unit)
2477+ cluster_hosts[_unit] = addr
2478+
2479+ ctxt = {
2480+ 'units': cluster_hosts,
2481+ }
2482+ if len(cluster_hosts.keys()) > 1:
2483+ # Enable haproxy when we have enough peers.
2484+ log('Ensuring haproxy enabled in /etc/default/haproxy.')
2485+ with open('/etc/default/haproxy', 'w') as out:
2486+ out.write('ENABLED=1\n')
2487+ return ctxt
2488+ log('HAProxy context is incomplete, this unit has no peers.')
2489+ return {}
2490+
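The unit-name handling in `HAProxyContext` is worth a small sketch: Juju unit names such as `restish/0` contain a `/`, which is not valid in an haproxy server name, so the context replaces it with `-` for both the local unit and every peer. The unit names and addresses below are illustrative.

```python
# Sketch (Python 3) of how HAProxyContext builds its cluster_hosts map:
# '/' in Juju unit names is replaced with '-' to form haproxy-safe names.
def cluster_hosts(local_unit, local_addr, peers):
    hosts = {local_unit.replace('/', '-'): local_addr}
    for unit, addr in peers.items():
        hosts[unit.replace('/', '-')] = addr
    return hosts

hosts = cluster_hosts('restish/0', '10.0.0.1', {'restish/1': '10.0.0.2'})
assert hosts == {'restish-0': '10.0.0.1', 'restish-1': '10.0.0.2'}
# haproxy is only enabled once there is more than one host:
assert len(hosts) > 1
```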
2491+
2492+class ImageServiceContext(OSContextGenerator):
2493+ interfaces = ['image-service']
2494+
2495+ def __call__(self):
2496+ '''
2497+ Obtains the glance API server from the image-service relation. Useful
2498+ in nova and cinder (currently).
2499+ '''
2500+ log('Generating template context for image-service.')
2501+ rids = relation_ids('image-service')
2502+ if not rids:
2503+ return {}
2504+ for rid in rids:
2505+ for unit in related_units(rid):
2506+ api_server = relation_get('glance-api-server',
2507+ rid=rid, unit=unit)
2508+ if api_server:
2509+ return {'glance_api_servers': api_server}
2510+ log('ImageService context is incomplete. '
2511+ 'Missing required relation data.')
2512+ return {}
2513+
2514+
2515+class ApacheSSLContext(OSContextGenerator):
2516+
2517+ """
2518+ Generates a context for an apache vhost configuration that configures
2519+ HTTPS reverse proxying for one or many endpoints. Generated context
2520+ looks something like:
2521+ {
2522+ 'namespace': 'cinder',
2523+ 'private_address': 'iscsi.mycinderhost.com',
2524+ 'endpoints': [(8776, 8766), (8777, 8767)]
2525+ }
2526+
2527+ The endpoints list consists of tuples mapping external ports
2528+ to internal ports.
2529+ """
2530+ interfaces = ['https']
2531+
2532+ # charms should inherit this context and set external ports
2533+ # and service namespace accordingly.
2534+ external_ports = []
2535+ service_namespace = None
2536+
2537+ def enable_modules(self):
2538+ cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http']
2539+ check_call(cmd)
2540+
2541+ def configure_cert(self):
2542+ if not os.path.isdir('/etc/apache2/ssl'):
2543+ os.mkdir('/etc/apache2/ssl')
2544+ ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace)
2545+ if not os.path.isdir(ssl_dir):
2546+ os.mkdir(ssl_dir)
2547+ cert, key = get_cert()
2548+ with open(os.path.join(ssl_dir, 'cert'), 'w') as cert_out:
2549+ cert_out.write(b64decode(cert))
2550+ with open(os.path.join(ssl_dir, 'key'), 'w') as key_out:
2551+ key_out.write(b64decode(key))
2552+ ca_cert = get_ca_cert()
2553+ if ca_cert:
2554+ with open(CA_CERT_PATH, 'w') as ca_out:
2555+ ca_out.write(b64decode(ca_cert))
2556+ check_call(['update-ca-certificates'])
2557+
2558+ def __call__(self):
2559+ if isinstance(self.external_ports, basestring):
2560+ self.external_ports = [self.external_ports]
2561+ if (not self.external_ports or not https()):
2562+ return {}
2563+
2564+ self.configure_cert()
2565+ self.enable_modules()
2566+
2567+ ctxt = {
2568+ 'namespace': self.service_namespace,
2569+ 'private_address': unit_get('private-address'),
2570+ 'endpoints': []
2571+ }
2572+ for ext_port in self.external_ports:
2573+ if peer_units() or is_clustered():
2574+ int_port = determine_haproxy_port(ext_port)
2575+ else:
2576+ int_port = determine_api_port(ext_port)
2577+ portmap = (int(ext_port), int(int_port))
2578+ ctxt['endpoints'].append(portmap)
2579+ return ctxt
2580+
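The endpoint mapping built in `ApacheSSLContext.__call__()` can be shown in isolation. The real code delegates the internal port to `determine_haproxy_port()` or `determine_api_port()`; here a fixed offset of 10 stands in for that logic purely to reproduce the docstring's example, and is an assumption, not the actual port-calculation rule.

```python
# Sketch (Python 3) of the (external, internal) endpoint pairs that
# ApacheSSLContext produces. internal_port_for is a stand-in for
# determine_haproxy_port()/determine_api_port().
def build_endpoints(external_ports, internal_port_for):
    return [(int(ext), int(internal_port_for(ext))) for ext in external_ports]

# With an assumed offset of 10, this matches the docstring example above:
endpoints = build_endpoints([8776, 8777], lambda p: p - 10)
assert endpoints == [(8776, 8766), (8777, 8767)]
```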
2581+
2582+class NeutronContext(object):
2583+ interfaces = []
2584+
2585+ @property
2586+ def plugin(self):
2587+ return None
2588+
2589+ @property
2590+ def network_manager(self):
2591+ return None
2592+
2593+ @property
2594+ def packages(self):
2595+ return neutron_plugin_attribute(
2596+ self.plugin, 'packages', self.network_manager)
2597+
2598+ @property
2599+ def neutron_security_groups(self):
2600+ return None
2601+
2602+ def _ensure_packages(self):
2603+ [ensure_packages(pkgs) for pkgs in self.packages]
2604+
2605+ def _save_flag_file(self):
2606+ if self.network_manager == 'quantum':
2607+ _file = '/etc/nova/quantum_plugin.conf'
2608+ else:
2609+ _file = '/etc/nova/neutron_plugin.conf'
2610+ with open(_file, 'wb') as out:
2611+ out.write(self.plugin + '\n')
2612+
2613+ def ovs_ctxt(self):
2614+ driver = neutron_plugin_attribute(self.plugin, 'driver',
2615+ self.network_manager)
2616+ config = neutron_plugin_attribute(self.plugin, 'config',
2617+ self.network_manager)
2618+ ovs_ctxt = {
2619+ 'core_plugin': driver,
2620+ 'neutron_plugin': 'ovs',
2621+ 'neutron_security_groups': self.neutron_security_groups,
2622+ 'local_ip': unit_private_ip(),
2623+ 'config': config
2624+ }
2625+
2626+ return ovs_ctxt
2627+
2628+ def nvp_ctxt(self):
2629+ driver = neutron_plugin_attribute(self.plugin, 'driver',
2630+ self.network_manager)
2631+ config = neutron_plugin_attribute(self.plugin, 'config',
2632+ self.network_manager)
2633+ nvp_ctxt = {
2634+ 'core_plugin': driver,
2635+ 'neutron_plugin': 'nvp',
2636+ 'neutron_security_groups': self.neutron_security_groups,
2637+ 'local_ip': unit_private_ip(),
2638+ 'config': config
2639+ }
2640+
2641+ return nvp_ctxt
2642+
2643+ def __call__(self):
2644+ self._ensure_packages()
2645+
2646+ if self.network_manager not in ['quantum', 'neutron']:
2647+ return {}
2648+
2649+ if not self.plugin:
2650+ return {}
2651+
2652+ ctxt = {'network_manager': self.network_manager}
2653+
2654+ if self.plugin == 'ovs':
2655+ ctxt.update(self.ovs_ctxt())
2656+ elif self.plugin == 'nvp':
2657+ ctxt.update(self.nvp_ctxt())
2658+
2659+ self._save_flag_file()
2660+ return ctxt
2661+
2662+
2663+class OSConfigFlagContext(OSContextGenerator):
2664+
2665+ """
2666+ Responsible for adding user-defined config-flags in charm config to a
2667+ template context.
2668+
2669+ NOTE: the value of config-flags may be a comma-separated list of
2670+ key=value pairs, and some OpenStack config files support
2671+ comma-separated lists as values.
2672+ """
2673+
2674+ def __call__(self):
2675+ config_flags = config('config-flags')
2676+ if not config_flags:
2677+ return {}
2678+
2679+ if config_flags.find('==') >= 0:
2680+ log("config_flags is not in expected format (key=value)",
2681+ level=ERROR)
2682+ raise OSContextError
2683+
2684+ # strip the following from each value.
2685+ post_strippers = ' ,'
2686+ # we strip any leading/trailing '=' or ' ' from the string then
2687+ # split on '='.
2688+ split = config_flags.strip(' =').split('=')
2689+ limit = len(split)
2690+ flags = {}
2691+ for i in xrange(0, limit - 1):
2692+ current = split[i]
2693+ next = split[i + 1]
2694+ vindex = next.rfind(',')
2695+ if (i == limit - 2) or (vindex < 0):
2696+ value = next
2697+ else:
2698+ value = next[:vindex]
2699+
2700+ if i == 0:
2701+ key = current
2702+ else:
2703+ # if this is not the first entry, expect an embedded key.
2704+ index = current.rfind(',')
2705+ if index < 0:
2706+ log("invalid config value(s) at index %s" % (i),
2707+ level=ERROR)
2708+ raise OSContextError
2709+ key = current[index + 1:]
2710+
2711+ # Add to collection.
2712+ flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
2713+
2714+ return {'user_config_flags': flags}
2715+
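The config-flags parser above is the trickiest code in this file, because commas may appear both as pair separators and inside values. A standalone Python 3 port of the same algorithm, with a worked example, makes the behaviour concrete: the last comma before the next `=` is treated as the delimiter of the next key, so everything before it belongs to the previous value.

```python
# Python 3 port of OSConfigFlagContext's parser: turns a comma-separated
# 'key=value' string into a dict while tolerating commas inside values.
def parse_config_flags(config_flags):
    if '==' in config_flags:
        raise ValueError('config_flags is not in expected format (key=value)')
    post_strippers = ' ,'
    # strip leading/trailing '=' or ' ' then split on '='.
    split = config_flags.strip(' =').split('=')
    limit = len(split)
    flags = {}
    for i in range(limit - 1):
        current, nxt = split[i], split[i + 1]
        vindex = nxt.rfind(',')
        # the last segment, or one with no comma, is taken whole.
        value = nxt if (i == limit - 2 or vindex < 0) else nxt[:vindex]
        if i == 0:
            key = current
        else:
            # later segments embed the next key after their last comma.
            index = current.rfind(',')
            if index < 0:
                raise ValueError('invalid config value(s) at index %s' % i)
            key = current[index + 1:]
        flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
    return flags

# 'b' keeps its embedded comma; 'c' is recognised as a new key.
assert parse_config_flags('a=1,b=2,3,c=4') == {'a': '1', 'b': '2,3', 'c': '4'}
```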
2716+
2717+class SubordinateConfigContext(OSContextGenerator):
2718+
2719+ """
2720+ Responsible for inspecting relations to subordinates that
2721+ may be exporting required config via a json blob.
2722+
2723+ The subordinate interface allows subordinates to export their
2724+ configuration requirements to the principal for multiple config
2725+ files and multiple services. For example, a subordinate that has
2726+ interfaces to both glance and nova may export the following YAML blob as JSON:
2727+
2728+ glance:
2729+ /etc/glance/glance-api.conf:
2730+ sections:
2731+ DEFAULT:
2732+ - [key1, value1]
2733+ /etc/glance/glance-registry.conf:
2734+ MYSECTION:
2735+ - [key2, value2]
2736+ nova:
2737+ /etc/nova/nova.conf:
2738+ sections:
2739+ DEFAULT:
2740+ - [key3, value3]
2741+
2742+
2743+ It is then up to the principal charms to subscribe this context to
2744+ the service+config file it is interested in. Configuration data will
2745+ be available in the template context, in glance's case, as:
2746+ ctxt = {
2747+ ... other context ...
2748+ 'subordinate_config': {
2749+ 'DEFAULT': {
2750+ 'key1': 'value1',
2751+ },
2752+ 'MYSECTION': {
2753+ 'key2': 'value2',
2754+ },
2755+ }
2756+ }
2757+
2758+ """
2759+
2760+ def __init__(self, service, config_file, interface):
2761+ """
2762+ :param service : Service name key to query in any subordinate
2763+ data found
2764+ :param config_file : Service's config file to query sections
2765+ :param interface : Subordinate interface to inspect
2766+ """
2767+ self.service = service
2768+ self.config_file = config_file
2769+ self.interface = interface
2770+
2771+ def __call__(self):
2772+ ctxt = {}
2773+ for rid in relation_ids(self.interface):
2774+ for unit in related_units(rid):
2775+ sub_config = relation_get('subordinate_configuration',
2776+ rid=rid, unit=unit)
2777+ if sub_config and sub_config != '':
2778+ try:
2779+ sub_config = json.loads(sub_config)
2780+ except:
2781+ log('Could not parse JSON from subordinate_config '
2782+ 'setting from %s' % rid, level=ERROR)
2783+ continue
2784+
2785+ if self.service not in sub_config:
2786+ log('Found subordinate_config on %s but it contained '
2787+ 'nothing for %s service' % (rid, self.service))
2788+ continue
2789+
2790+ sub_config = sub_config[self.service]
2791+ if self.config_file not in sub_config:
2792+ log('Found subordinate_config on %s but it contained '
2793+ 'nothing for %s' % (rid, self.config_file))
2794+ continue
2795+
2796+ sub_config = sub_config[self.config_file]
2797+ for k, v in sub_config.iteritems():
2798+ ctxt[k] = v
2799+
2800+ if not ctxt:
2801+ ctxt['sections'] = {}
2802+
2803+ return ctxt
2804
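The two-level lookup that `SubordinateConfigContext.__call__()` performs on the subordinate's JSON blob can be sketched in a simplified, standalone form: select the service key, then the config-file key, and merge that section into the context, falling back to an empty `sections` dict when nothing matches. The glance data mirrors the docstring example above.

```python
import json

# Simplified Python 3 sketch of SubordinateConfigContext's lookup:
# service -> config file -> per-file config dict, with an empty
# {'sections': {}} fallback as in the original.
def extract_sub_config(blob, service, config_file):
    sub = json.loads(blob)
    try:
        return dict(sub[service][config_file])
    except KeyError:
        return {'sections': {}}

blob = json.dumps({'glance': {'/etc/glance/glance-api.conf': {
    'sections': {'DEFAULT': [['key1', 'value1']]}}}})
ctxt = extract_sub_config(blob, 'glance', '/etc/glance/glance-api.conf')
assert ctxt == {'sections': {'DEFAULT': [['key1', 'value1']]}}
assert extract_sub_config(blob, 'nova', '/etc/nova/nova.conf') == {'sections': {}}
```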
2805=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/openstack/neutron.py'
2806--- charms/precise/restish/hooks/charmhelpers/contrib/openstack/neutron.py 1970-01-01 00:00:00 +0000
2807+++ charms/precise/restish/hooks/charmhelpers/contrib/openstack/neutron.py 2014-01-27 18:01:12 +0000
2808@@ -0,0 +1,137 @@
2809+# Various utilities for dealing with Neutron and the renaming from Quantum.
2810+
2811+from subprocess import check_output
2812+
2813+from charmhelpers.core.hookenv import (
2814+ config,
2815+ log,
2816+ ERROR,
2817+)
2818+
2819+from charmhelpers.contrib.openstack.utils import os_release
2820+
2821+
2822+def headers_package():
2823+ """Ensures correct linux-headers for running kernel are installed,
2824+ for building DKMS package"""
2825+ kver = check_output(['uname', '-r']).strip()
2826+ return 'linux-headers-%s' % kver
2827+
2828+
2829+# legacy
2830+def quantum_plugins():
2831+ from charmhelpers.contrib.openstack import context
2832+ return {
2833+ 'ovs': {
2834+ 'config': '/etc/quantum/plugins/openvswitch/'
2835+ 'ovs_quantum_plugin.ini',
2836+ 'driver': 'quantum.plugins.openvswitch.ovs_quantum_plugin.'
2837+ 'OVSQuantumPluginV2',
2838+ 'contexts': [
2839+ context.SharedDBContext(user=config('neutron-database-user'),
2840+ database=config('neutron-database'),
2841+ relation_prefix='neutron')],
2842+ 'services': ['quantum-plugin-openvswitch-agent'],
2843+ 'packages': [[headers_package(), 'openvswitch-datapath-dkms'],
2844+ ['quantum-plugin-openvswitch-agent']],
2845+ 'server_packages': ['quantum-server',
2846+ 'quantum-plugin-openvswitch'],
2847+ 'server_services': ['quantum-server']
2848+ },
2849+ 'nvp': {
2850+ 'config': '/etc/quantum/plugins/nicira/nvp.ini',
2851+ 'driver': 'quantum.plugins.nicira.nicira_nvp_plugin.'
2852+ 'QuantumPlugin.NvpPluginV2',
2853+ 'contexts': [
2854+ context.SharedDBContext(user=config('neutron-database-user'),
2855+ database=config('neutron-database'),
2856+ relation_prefix='neutron')],
2857+ 'services': [],
2858+ 'packages': [],
2859+ 'server_packages': ['quantum-server',
2860+ 'quantum-plugin-nicira'],
2861+ 'server_services': ['quantum-server']
2862+ }
2863+ }
2864+
2865+
2866+def neutron_plugins():
2867+ from charmhelpers.contrib.openstack import context
2868+ return {
2869+ 'ovs': {
2870+ 'config': '/etc/neutron/plugins/openvswitch/'
2871+ 'ovs_neutron_plugin.ini',
2872+ 'driver': 'neutron.plugins.openvswitch.ovs_neutron_plugin.'
2873+ 'OVSNeutronPluginV2',
2874+ 'contexts': [
2875+ context.SharedDBContext(user=config('neutron-database-user'),
2876+ database=config('neutron-database'),
2877+ relation_prefix='neutron')],
2878+ 'services': ['neutron-plugin-openvswitch-agent'],
2879+ 'packages': [[headers_package(), 'openvswitch-datapath-dkms'],
2880+ ['neutron-plugin-openvswitch-agent']],
2881+ 'server_packages': ['neutron-server',
2882+ 'neutron-plugin-openvswitch'],
2883+ 'server_services': ['neutron-server']
2884+ },
2885+ 'nvp': {
2886+ 'config': '/etc/neutron/plugins/nicira/nvp.ini',
2887+ 'driver': 'neutron.plugins.nicira.nicira_nvp_plugin.'
2888+ 'NeutronPlugin.NvpPluginV2',
2889+ 'contexts': [
2890+ context.SharedDBContext(user=config('neutron-database-user'),
2891+ database=config('neutron-database'),
2892+ relation_prefix='neutron')],
2893+ 'services': [],
2894+ 'packages': [],
2895+ 'server_packages': ['neutron-server',
2896+ 'neutron-plugin-nicira'],
2897+ 'server_services': ['neutron-server']
2898+ }
2899+ }
2900+
2901+
2902+def neutron_plugin_attribute(plugin, attr, net_manager=None):
2903+ manager = net_manager or network_manager()
2904+ if manager == 'quantum':
2905+ plugins = quantum_plugins()
2906+ elif manager == 'neutron':
2907+ plugins = neutron_plugins()
2908+ else:
2909+ log('Error: Network manager does not support plugins.')
2910+ raise Exception
2911+
2912+ try:
2913+ _plugin = plugins[plugin]
2914+ except KeyError:
2915+ log('Unrecognised plugin for %s: %s' % (manager, plugin), level=ERROR)
2916+ raise Exception
2917+
2918+ try:
2919+ return _plugin[attr]
2920+ except KeyError:
2921+ return None
2922+
2923+
2924+def network_manager():
2925+ '''
2926+ Deals with the renaming of Quantum to Neutron in H and any situations
2927+ that require compatibility (e.g., deploying H with network-manager=quantum,
2928+ upgrading from G).
2929+ '''
2930+ release = os_release('nova-common')
2931+ manager = config('network-manager').lower()
2932+
2933+ if manager not in ['quantum', 'neutron']:
2934+ return manager
2935+
2936+ if release in ['essex']:
2937+ # E does not support neutron
2938+ log('Neutron networking not supported in Essex.', level=ERROR)
2939+ raise Exception
2940+ elif release in ['folsom', 'grizzly']:
2941+ # neutron is named quantum in F and G
2942+ return 'quantum'
2943+ else:
2944+ # ensure accurate naming for all releases post-H
2945+ return 'neutron'
2946
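The release-to-name mapping in `network_manager()` above is easy to get wrong, so a standalone sketch helps: `quantum`/`neutron` resolve by OpenStack release, Essex is rejected outright, and any other manager (e.g. a nova-network manager) passes through untouched.

```python
# Standalone Python 3 sketch of network_manager()'s resolution rules.
def resolve_network_manager(manager, release):
    manager = manager.lower()
    if manager not in ('quantum', 'neutron'):
        # nova-network style managers pass through unchanged.
        return manager
    if release == 'essex':
        raise RuntimeError('Neutron networking not supported in Essex.')
    if release in ('folsom', 'grizzly'):
        # neutron was still named quantum in F and G.
        return 'quantum'
    return 'neutron'

assert resolve_network_manager('quantum', 'grizzly') == 'quantum'
assert resolve_network_manager('quantum', 'havana') == 'neutron'
assert resolve_network_manager('FlatDHCPManager', 'havana') == 'flatdhcpmanager'
```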
2947=== added directory 'charms/precise/restish/hooks/charmhelpers/contrib/openstack/templates'
2948=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/openstack/templates/__init__.py'
2949--- charms/precise/restish/hooks/charmhelpers/contrib/openstack/templates/__init__.py 1970-01-01 00:00:00 +0000
2950+++ charms/precise/restish/hooks/charmhelpers/contrib/openstack/templates/__init__.py 2014-01-27 18:01:12 +0000
2951@@ -0,0 +1,2 @@
2952+# dummy __init__.py to fool syncer into thinking this is a syncable python
2953+# module
2954
2955=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/openstack/templates/ceph.conf'
2956--- charms/precise/restish/hooks/charmhelpers/contrib/openstack/templates/ceph.conf 1970-01-01 00:00:00 +0000
2957+++ charms/precise/restish/hooks/charmhelpers/contrib/openstack/templates/ceph.conf 2014-01-27 18:01:12 +0000
2958@@ -0,0 +1,11 @@
2959+###############################################################################
2960+# [ WARNING ]
2961+# cinder configuration file maintained by Juju
2962+# local changes may be overwritten.
2963+###############################################################################
2964+{% if auth -%}
2965+[global]
2966+ auth_supported = {{ auth }}
2967+ keyring = /etc/ceph/$cluster.$name.keyring
2968+ mon host = {{ mon_hosts }}
2969+{% endif -%}
2970
2971=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg'
2972--- charms/precise/restish/hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 1970-01-01 00:00:00 +0000
2973+++ charms/precise/restish/hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2014-01-27 18:01:12 +0000
2974@@ -0,0 +1,37 @@
2975+global
2976+ log 127.0.0.1 local0
2977+ log 127.0.0.1 local1 notice
2978+ maxconn 20000
2979+ user haproxy
2980+ group haproxy
2981+ spread-checks 0
2982+
2983+defaults
2984+ log global
2985+ mode http
2986+ option httplog
2987+ option dontlognull
2988+ retries 3
2989+ timeout queue 1000
2990+ timeout connect 1000
2991+ timeout client 30000
2992+ timeout server 30000
2993+
2994+listen stats :8888
2995+ mode http
2996+ stats enable
2997+ stats hide-version
2998+ stats realm Haproxy\ Statistics
2999+ stats uri /
3000+ stats auth admin:password
3001+
3002+{% if units -%}
3003+{% for service, ports in service_ports.iteritems() -%}
3004+listen {{ service }} 0.0.0.0:{{ ports[0] }}
3005+ balance roundrobin
3006+ option tcplog
3007+ {% for unit, address in units.iteritems() -%}
3008+ server {{ unit }} {{ address }}:{{ ports[1] }} check
3009+ {% endfor %}
3010+{% endfor -%}
3011+{% endif -%}
3012
3013=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend'
3014--- charms/precise/restish/hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend 1970-01-01 00:00:00 +0000
3015+++ charms/precise/restish/hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend 2014-01-27 18:01:12 +0000
3016@@ -0,0 +1,23 @@
3017+{% if endpoints -%}
3018+{% for ext, int in endpoints -%}
3019+Listen {{ ext }}
3020+NameVirtualHost *:{{ ext }}
3021+<VirtualHost *:{{ ext }}>
3022+ ServerName {{ private_address }}
3023+ SSLEngine on
3024+ SSLCertificateFile /etc/apache2/ssl/{{ namespace }}/cert
3025+ SSLCertificateKeyFile /etc/apache2/ssl/{{ namespace }}/key
3026+ ProxyPass / http://localhost:{{ int }}/
3027+ ProxyPassReverse / http://localhost:{{ int }}/
3028+ ProxyPreserveHost on
3029+</VirtualHost>
3030+<Proxy *>
3031+ Order deny,allow
3032+ Allow from all
3033+</Proxy>
3034+<Location />
3035+ Order allow,deny
3036+ Allow from all
3037+</Location>
3038+{% endfor -%}
3039+{% endif -%}
3040
3041=== added symlink 'charms/precise/restish/hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf'
3042=== target is u'openstack_https_frontend'
3043=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/openstack/templating.py'
3044--- charms/precise/restish/hooks/charmhelpers/contrib/openstack/templating.py 1970-01-01 00:00:00 +0000
3045+++ charms/precise/restish/hooks/charmhelpers/contrib/openstack/templating.py 2014-01-27 18:01:12 +0000
3046@@ -0,0 +1,280 @@
3047+import os
3048+
3049+from charmhelpers.fetch import apt_install
3050+
3051+from charmhelpers.core.hookenv import (
3052+ log,
3053+ ERROR,
3054+ INFO
3055+)
3056+
3057+from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
3058+
3059+try:
3060+ from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
3061+except ImportError:
3062+ # python-jinja2 may not be installed yet, or we're running unittests.
3063+ FileSystemLoader = ChoiceLoader = Environment = exceptions = None
3064+
3065+
3066+class OSConfigException(Exception):
3067+ pass
3068+
3069+
3070+def get_loader(templates_dir, os_release):
3071+ """
3072+ Create a jinja2.ChoiceLoader containing template dirs up to
3073+ and including os_release. If a release's template directory
3074+ is missing under templates_dir, it is omitted from the loader.
3075+ templates_dir is added to the bottom of the search list as a base
3076+ loading dir.
3077+
3078+ A charm may also ship a templates dir with this module
3079+ and it will be appended to the bottom of the search list, eg:
3080+ hooks/charmhelpers/contrib/openstack/templates.
3081+
3082+ :param templates_dir: str: Base template directory containing release
3083+ sub-directories.
3084+ :param os_release : str: OpenStack release codename to construct template
3085+ loader.
3086+
3087+ :returns : jinja2.ChoiceLoader constructed with a list of
3088+ jinja2.FileSystemLoaders, ordered in descending
3089+ order by OpenStack release.
3090+ """
3091+ tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
3092+ for rel in OPENSTACK_CODENAMES.itervalues()]
3093+
3094+ if not os.path.isdir(templates_dir):
3095+ log('Templates directory not found @ %s.' % templates_dir,
3096+ level=ERROR)
3097+ raise OSConfigException
3098+
3099+ # the bottom contains templates_dir and possibly a common templates dir
3100+ # shipped with the helper.
3101+ loaders = [FileSystemLoader(templates_dir)]
3102+ helper_templates = os.path.join(os.path.dirname(__file__), 'templates')
3103+ if os.path.isdir(helper_templates):
3104+ loaders.append(FileSystemLoader(helper_templates))
3105+
3106+ for rel, tmpl_dir in tmpl_dirs:
3107+ if os.path.isdir(tmpl_dir):
3108+ loaders.insert(0, FileSystemLoader(tmpl_dir))
3109+ if rel == os_release:
3110+ break
3111+ log('Creating choice loader with dirs: %s' %
3112+ [l.searchpath for l in loaders], level=INFO)
3113+ return ChoiceLoader(loaders)
3114+
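The search order `get_loader()` constructs can be shown without jinja2: release directories up to and including `os_release` are searched newest-first, with the base `templates_dir` last. The codename list below is abbreviated for illustration.

```python
import os

# Sketch (Python 3) of the directory search order built by get_loader():
# insert each release dir at the front until os_release is reached,
# keeping the base templates_dir at the bottom of the list.
OPENSTACK_CODENAMES = ['essex', 'folsom', 'grizzly', 'havana']  # abbreviated

def search_order(templates_dir, os_release):
    order = [templates_dir]
    for rel in OPENSTACK_CODENAMES:
        order.insert(0, os.path.join(templates_dir, rel))
        if rel == os_release:
            break
    return order

assert search_order('/tmp/templates', 'grizzly') == [
    '/tmp/templates/grizzly', '/tmp/templates/folsom',
    '/tmp/templates/essex', '/tmp/templates']
```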
3115+
3116+class OSConfigTemplate(object):
3117+ """
3118+ Associates a config file template with a list of context generators.
3119+ Responsible for constructing a template context based on those generators.
3120+ """
3121+ def __init__(self, config_file, contexts):
3122+ self.config_file = config_file
3123+
3124+ if hasattr(contexts, '__call__'):
3125+ self.contexts = [contexts]
3126+ else:
3127+ self.contexts = contexts
3128+
3129+ self._complete_contexts = []
3130+
3131+ def context(self):
3132+ ctxt = {}
3133+ for context in self.contexts:
3134+ _ctxt = context()
3135+ if _ctxt:
3136+ ctxt.update(_ctxt)
3137+ # track interfaces for every complete context.
3138+ [self._complete_contexts.append(interface)
3139+ for interface in context.interfaces
3140+ if interface not in self._complete_contexts]
3141+ return ctxt
3142+
3143+ def complete_contexts(self):
3144+ '''
3145+ Return a list of interfaces that have satisfied contexts.
3146+ '''
3147+ if self._complete_contexts:
3148+ return self._complete_contexts
3149+ self.context()
3150+ return self._complete_contexts
3151+
3152+
3153+class OSConfigRenderer(object):
3154+ """
3155+ This class provides a common templating system to be used by OpenStack
3156+ charms. It is intended to help charms share common code and templates,
3157+ and ease the burden of managing config templates across multiple OpenStack
3158+ releases.
3159+
3160+ Basic usage:
3161+ # import some common context generators from charmhelpers
3162+ from charmhelpers.contrib.openstack import context
3163+
3164+ # Create a renderer object for a specific OS release.
3165+ configs = OSConfigRenderer(templates_dir='/tmp/templates',
3166+ openstack_release='folsom')
3167+ # register some config files with context generators.
3168+ configs.register(config_file='/etc/nova/nova.conf',
3169+ contexts=[context.SharedDBContext(),
3170+ context.AMQPContext()])
3171+ configs.register(config_file='/etc/nova/api-paste.ini',
3172+ contexts=[context.IdentityServiceContext()])
3173+ configs.register(config_file='/etc/haproxy/haproxy.conf',
3174+ contexts=[context.HAProxyContext()])
3175+ # write out a single config
3176+ configs.write('/etc/nova/nova.conf')
3177+ # write out all registered configs
3178+ configs.write_all()
3179+
3180+ Details:
3181+
3182+ OpenStack Releases and template loading
3183+ ---------------------------------------
3184+ When the object is instantiated, it is associated with a specific OS
3185+ release. This dictates how the template loader will be constructed.
3186+
3187+ The constructed loader attempts to load the template from several places
3188+ in the following order:
3189+ - from the most recent OS release-specific template dir (if one exists)
3190+ - the base templates_dir
3191+ - a template directory shipped in the charm with this helper file.
3192+
3193+
3194+ For the example above, '/tmp/templates' contains the following structure:
3195+ /tmp/templates/nova.conf
3196+ /tmp/templates/api-paste.ini
3197+ /tmp/templates/grizzly/api-paste.ini
3198+ /tmp/templates/havana/api-paste.ini
3199+
3200+ Since it was registered with the grizzly release, it first searches
3201+ the grizzly directory for nova.conf, then the templates dir.
3202+
3203+ When writing api-paste.ini, it will find the template in the grizzly
3204+ directory.
3205+
3206+ If the object were created with folsom, it would fall back to the
3207+ base templates dir for its api-paste.ini template.
3208+
3209+ This system should help manage changes in config files through
3210+ OpenStack releases, allowing charms to fall back to the most recently
3211+ updated config template for a given release.
3212+
3213+ The haproxy.conf, since it is not shipped in the templates dir, will
3214+ be loaded from the module directory's template directory, e.g.
3215+ $CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows
3216+ us to ship common templates (haproxy, apache) with the helpers.
3217+
3218+ Context generators
3219+ ---------------------------------------
3220+ Context generators are used to generate template contexts during hook
3221+ execution. Doing so may require inspecting service relations, charm
3222+ config, etc. When registered, a config file is associated with a list
3223+ of generators. When a template is rendered and written, all context
3224+ generators are called in a chain to generate the context dictionary
3225+ passed to the jinja2 template. See context.py for more info.
3226+ """
3227+ def __init__(self, templates_dir, openstack_release):
3228+ if not os.path.isdir(templates_dir):
3229+ log('Could not locate templates dir %s' % templates_dir,
3230+ level=ERROR)
3231+ raise OSConfigException
3232+
3233+ self.templates_dir = templates_dir
3234+ self.openstack_release = openstack_release
3235+ self.templates = {}
3236+ self._tmpl_env = None
3237+
3238+ if None in [Environment, ChoiceLoader, FileSystemLoader]:
3239+ # if this code is running, the object is created pre-install hook.
3240+ # jinja2 shouldn't get touched until the module is reloaded on next
3241+ # hook execution, with proper jinja2 bits successfully imported.
3242+ apt_install('python-jinja2')
3243+
3244+ def register(self, config_file, contexts):
3245+ """
3246+ Register a config file with a list of context generators to be called
3247+ during rendering.
3248+ """
3249+ self.templates[config_file] = OSConfigTemplate(config_file=config_file,
3250+ contexts=contexts)
3251+ log('Registered config file: %s' % config_file, level=INFO)
3252+
3253+ def _get_tmpl_env(self):
3254+ if not self._tmpl_env:
3255+ loader = get_loader(self.templates_dir, self.openstack_release)
3256+ self._tmpl_env = Environment(loader=loader)
3257+
3258+ def _get_template(self, template):
3259+ self._get_tmpl_env()
3260+ template = self._tmpl_env.get_template(template)
3261+ log('Loaded template from %s' % template.filename, level=INFO)
3262+ return template
3263+
3264+ def render(self, config_file):
3265+ if config_file not in self.templates:
3266+ log('Config not registered: %s' % config_file, level=ERROR)
3267+ raise OSConfigException
3268+ ctxt = self.templates[config_file].context()
3269+
3270+ _tmpl = os.path.basename(config_file)
3271+ try:
3272+ template = self._get_template(_tmpl)
3273+ except exceptions.TemplateNotFound:
3274+ # if no template is found with basename, try looking for it
3275+ # using a munged full path, eg:
3276+ # /etc/apache2/apache2.conf -> etc_apache2_apache2.conf
3277+ _tmpl = '_'.join(config_file.split('/')[1:])
3278+ try:
3279+ template = self._get_template(_tmpl)
3280+ except exceptions.TemplateNotFound as e:
3281+ log('Could not load template from %s by %s or %s.' %
3282+ (self.templates_dir, os.path.basename(config_file), _tmpl),
3283+ level=ERROR)
3284+ raise e
3285+
3286+ log('Rendering from template: %s' % _tmpl, level=INFO)
3287+ return template.render(ctxt)
3288+
3289+ def write(self, config_file):
3290+ """
3291+ Write a single config file, raises if config file is not registered.
3292+ """
3293+ if config_file not in self.templates:
3294+ log('Config not registered: %s' % config_file, level=ERROR)
3295+ raise OSConfigException
3296+
3297+ _out = self.render(config_file)
3298+
3299+ with open(config_file, 'wb') as out:
3300+ out.write(_out)
3301+
3302+ log('Wrote template %s.' % config_file, level=INFO)
3303+
3304+ def write_all(self):
3305+ """
3306+ Write out all registered config files.
3307+ """
3308+ [self.write(k) for k in self.templates.iterkeys()]
3309+
3310+ def set_release(self, openstack_release):
3311+ """
3312+        Resets the template environment and generates a new template loader
3313+        based on the new OpenStack release.
3314+ """
3315+ self._tmpl_env = None
3316+ self.openstack_release = openstack_release
3317+ self._get_tmpl_env()
3318+
3319+ def complete_contexts(self):
3320+ '''
3321+ Returns a list of context interfaces that yield a complete context.
3322+ '''
3323+ interfaces = []
3324+ [interfaces.extend(i.complete_contexts())
3325+ for i in self.templates.itervalues()]
3326+ return interfaces
3327
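The `render()` method above falls back from the template's basename to a name munged from the full path. A minimal standalone sketch of that lookup order (the helper name is illustrative, not part of charmhelpers):

```python
import os


def candidate_template_names(config_file):
    """Return the template names render() tries, in order:
    first the basename, then the path with '/' joined by '_'
    and the leading slash dropped, e.g.
    /etc/apache2/apache2.conf -> etc_apache2_apache2.conf."""
    return [
        os.path.basename(config_file),
        '_'.join(config_file.split('/')[1:]),
    ]


print(candidate_template_names('/etc/apache2/apache2.conf'))
```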
3328=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/openstack/utils.py'
3329--- charms/precise/restish/hooks/charmhelpers/contrib/openstack/utils.py 1970-01-01 00:00:00 +0000
3330+++ charms/precise/restish/hooks/charmhelpers/contrib/openstack/utils.py 2014-01-27 18:01:12 +0000
3331@@ -0,0 +1,440 @@
3332+#!/usr/bin/python
3333+
3334+# Common python helper functions used for OpenStack charms.
3335+from collections import OrderedDict
3336+
3337+import apt_pkg as apt
3338+import subprocess
3339+import os
3340+import socket
3341+import sys
3342+
3343+from charmhelpers.core.hookenv import (
3344+ config,
3345+ log as juju_log,
3346+ charm_dir,
3347+ ERROR,
3348+ INFO
3349+)
3350+
3351+from charmhelpers.contrib.storage.linux.lvm import (
3352+ deactivate_lvm_volume_group,
3353+ is_lvm_physical_volume,
3354+ remove_lvm_physical_volume,
3355+)
3356+
3357+from charmhelpers.core.host import lsb_release, mounts, umount
3358+from charmhelpers.fetch import apt_install
3359+from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
3360+from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
3361+
3362+CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
3363+CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
3364+
3365+DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed '
3366+ 'restricted main multiverse universe')
3367+
3368+
3369+UBUNTU_OPENSTACK_RELEASE = OrderedDict([
3370+ ('oneiric', 'diablo'),
3371+ ('precise', 'essex'),
3372+ ('quantal', 'folsom'),
3373+ ('raring', 'grizzly'),
3374+ ('saucy', 'havana'),
3375+ ('trusty', 'icehouse')
3376+])
3377+
3378+
3379+OPENSTACK_CODENAMES = OrderedDict([
3380+ ('2011.2', 'diablo'),
3381+ ('2012.1', 'essex'),
3382+ ('2012.2', 'folsom'),
3383+ ('2013.1', 'grizzly'),
3384+ ('2013.2', 'havana'),
3385+ ('2014.1', 'icehouse'),
3386+])
3387+
3388+# The ugly duckling
3389+SWIFT_CODENAMES = OrderedDict([
3390+ ('1.4.3', 'diablo'),
3391+ ('1.4.8', 'essex'),
3392+ ('1.7.4', 'folsom'),
3393+ ('1.8.0', 'grizzly'),
3394+ ('1.7.7', 'grizzly'),
3395+ ('1.7.6', 'grizzly'),
3396+ ('1.10.0', 'havana'),
3397+ ('1.9.1', 'havana'),
3398+ ('1.9.0', 'havana'),
3399+])
3400+
3401+DEFAULT_LOOPBACK_SIZE = '5G'
3402+
3403+
3404+def error_out(msg):
3405+ juju_log("FATAL ERROR: %s" % msg, level='ERROR')
3406+ sys.exit(1)
3407+
3408+
3409+def get_os_codename_install_source(src):
3410+ '''Derive OpenStack release codename from a given installation source.'''
3411+ ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
3412+ rel = ''
3413+ if src in ['distro', 'distro-proposed']:
3414+ try:
3415+ rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel]
3416+ except KeyError:
3417+ e = 'Could not derive openstack release for '\
3418+ 'this Ubuntu release: %s' % ubuntu_rel
3419+ error_out(e)
3420+ return rel
3421+
3422+ if src.startswith('cloud:'):
3423+ ca_rel = src.split(':')[1]
3424+ ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]
3425+ return ca_rel
3426+
3427+ # Best guess match based on deb string provided
3428+ if src.startswith('deb') or src.startswith('ppa'):
3429+ for k, v in OPENSTACK_CODENAMES.iteritems():
3430+ if v in src:
3431+ return v
3432+
3433+
3434+def get_os_version_install_source(src):
3435+ codename = get_os_codename_install_source(src)
3436+ return get_os_version_codename(codename)
3437+
3438+
3439+def get_os_codename_version(vers):
3440+ '''Determine OpenStack codename from version number.'''
3441+ try:
3442+ return OPENSTACK_CODENAMES[vers]
3443+ except KeyError:
3444+ e = 'Could not determine OpenStack codename for version %s' % vers
3445+ error_out(e)
3446+
3447+
3448+def get_os_version_codename(codename):
3449+ '''Determine OpenStack version number from codename.'''
3450+ for k, v in OPENSTACK_CODENAMES.iteritems():
3451+ if v == codename:
3452+ return k
3453+ e = 'Could not derive OpenStack version for '\
3454+ 'codename: %s' % codename
3455+ error_out(e)
3456+
3457+
3458+def get_os_codename_package(package, fatal=True):
3459+ '''Derive OpenStack release codename from an installed package.'''
3460+ apt.init()
3461+ cache = apt.Cache()
3462+
3463+ try:
3464+ pkg = cache[package]
3465+    except KeyError:
3466+ if not fatal:
3467+ return None
3468+ # the package is unknown to the current apt cache.
3469+ e = 'Could not determine version of package with no installation '\
3470+ 'candidate: %s' % package
3471+ error_out(e)
3472+
3473+ if not pkg.current_ver:
3474+ if not fatal:
3475+ return None
3476+ # package is known, but no version is currently installed.
3477+ e = 'Could not determine version of uninstalled package: %s' % package
3478+ error_out(e)
3479+
3480+ vers = apt.upstream_version(pkg.current_ver.ver_str)
3481+
3482+ try:
3483+ if 'swift' in pkg.name:
3484+ swift_vers = vers[:5]
3485+ if swift_vers not in SWIFT_CODENAMES:
3486+ # Deal with 1.10.0 upward
3487+ swift_vers = vers[:6]
3488+ return SWIFT_CODENAMES[swift_vers]
3489+ else:
3490+ vers = vers[:6]
3491+ return OPENSTACK_CODENAMES[vers]
3492+ except KeyError:
3493+ e = 'Could not determine OpenStack codename for version %s' % vers
3494+ error_out(e)
3495+
3496+
3497+def get_os_version_package(pkg, fatal=True):
3498+ '''Derive OpenStack version number from an installed package.'''
3499+ codename = get_os_codename_package(pkg, fatal=fatal)
3500+
3501+ if not codename:
3502+ return None
3503+
3504+ if 'swift' in pkg:
3505+ vers_map = SWIFT_CODENAMES
3506+ else:
3507+ vers_map = OPENSTACK_CODENAMES
3508+
3509+ for version, cname in vers_map.iteritems():
3510+ if cname == codename:
3511+ return version
3512+ #e = "Could not determine OpenStack version for package: %s" % pkg
3513+ #error_out(e)
3514+
3515+
3516+os_rel = None
3517+
3518+
3519+def os_release(package, base='essex'):
3520+ '''
3521+ Returns OpenStack release codename from a cached global.
3522+ If the codename can not be determined from either an installed package or
3523+ the installation source, the earliest release supported by the charm should
3524+ be returned.
3525+ '''
3526+ global os_rel
3527+ if os_rel:
3528+ return os_rel
3529+ os_rel = (get_os_codename_package(package, fatal=False) or
3530+ get_os_codename_install_source(config('openstack-origin')) or
3531+ base)
3532+ return os_rel
3533+
3534+
3535+def import_key(keyid):
3536+ cmd = "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 " \
3537+ "--recv-keys %s" % keyid
3538+ try:
3539+ subprocess.check_call(cmd.split(' '))
3540+ except subprocess.CalledProcessError:
3541+ error_out("Error importing repo key %s" % keyid)
3542+
3543+
3544+def configure_installation_source(rel):
3545+ '''Configure apt installation source.'''
3546+ if rel == 'distro':
3547+ return
3548+ elif rel == 'distro-proposed':
3549+ ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
3550+ with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
3551+ f.write(DISTRO_PROPOSED % ubuntu_rel)
3552+ elif rel[:4] == "ppa:":
3553+ src = rel
3554+ subprocess.check_call(["add-apt-repository", "-y", src])
3555+ elif rel[:3] == "deb":
3556+ l = len(rel.split('|'))
3557+ if l == 2:
3558+ src, key = rel.split('|')
3559+ juju_log("Importing PPA key from keyserver for %s" % src)
3560+ import_key(key)
3561+ elif l == 1:
3562+ src = rel
3563+ with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
3564+ f.write(src)
3565+ elif rel[:6] == 'cloud:':
3566+ ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
3567+ rel = rel.split(':')[1]
3568+ u_rel = rel.split('-')[0]
3569+ ca_rel = rel.split('-')[1]
3570+
3571+ if u_rel != ubuntu_rel:
3572+ e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\
3573+ 'version (%s)' % (ca_rel, ubuntu_rel)
3574+ error_out(e)
3575+
3576+ if 'staging' in ca_rel:
3577+ # staging is just a regular PPA.
3578+ os_rel = ca_rel.split('/')[0]
3579+ ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel
3580+ cmd = 'add-apt-repository -y %s' % ppa
3581+ subprocess.check_call(cmd.split(' '))
3582+ return
3583+
3584+ # map charm config options to actual archive pockets.
3585+ pockets = {
3586+ 'folsom': 'precise-updates/folsom',
3587+ 'folsom/updates': 'precise-updates/folsom',
3588+ 'folsom/proposed': 'precise-proposed/folsom',
3589+ 'grizzly': 'precise-updates/grizzly',
3590+ 'grizzly/updates': 'precise-updates/grizzly',
3591+ 'grizzly/proposed': 'precise-proposed/grizzly',
3592+ 'havana': 'precise-updates/havana',
3593+ 'havana/updates': 'precise-updates/havana',
3594+ 'havana/proposed': 'precise-proposed/havana',
3595+ 'icehouse': 'precise-updates/icehouse',
3596+ 'icehouse/updates': 'precise-updates/icehouse',
3597+ 'icehouse/proposed': 'precise-proposed/icehouse',
3598+ }
3599+
3600+ try:
3601+ pocket = pockets[ca_rel]
3602+ except KeyError:
3603+ e = 'Invalid Cloud Archive release specified: %s' % rel
3604+ error_out(e)
3605+
3606+ src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket)
3607+ apt_install('ubuntu-cloud-keyring', fatal=True)
3608+
3609+ with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f:
3610+ f.write(src)
3611+ else:
3612+ error_out("Invalid openstack-release specified: %s" % rel)
3613+
3614+
3615+def save_script_rc(script_path="scripts/scriptrc", **env_vars):
3616+ """
3617+ Write an rc file in the charm-delivered directory containing
3618+ exported environment variables provided by env_vars. Any charm scripts run
3619+ outside the juju hook environment can source this scriptrc to obtain
3620+ updated config information necessary to perform health checks or
3621+ service changes.
3622+ """
3623+ juju_rc_path = "%s/%s" % (charm_dir(), script_path)
3624+ if not os.path.exists(os.path.dirname(juju_rc_path)):
3625+ os.mkdir(os.path.dirname(juju_rc_path))
3626+ with open(juju_rc_path, 'wb') as rc_script:
3627+ rc_script.write(
3628+ "#!/bin/bash\n")
3629+ [rc_script.write('export %s=%s\n' % (u, p))
3630+ for u, p in env_vars.iteritems() if u != "script_path"]
3631+
3632+
3633+def openstack_upgrade_available(package):
3634+ """
3635+ Determines if an OpenStack upgrade is available from installation
3636+ source, based on version of installed package.
3637+
3638+ :param package: str: Name of installed package.
3639+
3640+    :returns: bool: True if the configured installation source offers
3641+                    a newer version of the package.
3642+
3643+ """
3644+
3645+ src = config('openstack-origin')
3646+ cur_vers = get_os_version_package(package)
3647+ available_vers = get_os_version_install_source(src)
3648+ apt.init()
3649+    return apt.version_compare(available_vers, cur_vers) > 0
3650+
3651+
3652+def ensure_block_device(block_device):
3653+ '''
3654+ Confirm block_device, create as loopback if necessary.
3655+
3656+ :param block_device: str: Full path of block device to ensure.
3657+
3658+ :returns: str: Full path of ensured block device.
3659+ '''
3660+ _none = ['None', 'none', None]
3661+ if (block_device in _none):
3662+        error_out('prepare_storage(): Missing required input: '
3663+                  'block_device=%s.' % block_device)
3664+
3665+ if block_device.startswith('/dev/'):
3666+ bdev = block_device
3667+ elif block_device.startswith('/'):
3668+ _bd = block_device.split('|')
3669+ if len(_bd) == 2:
3670+ bdev, size = _bd
3671+ else:
3672+ bdev = block_device
3673+ size = DEFAULT_LOOPBACK_SIZE
3674+ bdev = ensure_loopback_device(bdev, size)
3675+ else:
3676+ bdev = '/dev/%s' % block_device
3677+
3678+ if not is_block_device(bdev):
3679+        error_out('Failed to locate valid block device at %s' % bdev)
3681+
3682+ return bdev
3683+
3684+
3685+def clean_storage(block_device):
3686+ '''
3687+ Ensures a block device is clean. That is:
3688+ - unmounted
3689+ - any lvm volume groups are deactivated
3690+ - any lvm physical device signatures removed
3691+ - partition table wiped
3692+
3693+ :param block_device: str: Full path to block device to clean.
3694+ '''
3695+ for mp, d in mounts():
3696+ if d == block_device:
3697+ juju_log('clean_storage(): %s is mounted @ %s, unmounting.' %
3698+ (d, mp), level=INFO)
3699+ umount(mp, persist=True)
3700+
3701+ if is_lvm_physical_volume(block_device):
3702+ deactivate_lvm_volume_group(block_device)
3703+ remove_lvm_physical_volume(block_device)
3704+ else:
3705+ zap_disk(block_device)
3706+
3707+
3708+def is_ip(address):
3709+ """
3710+ Returns True if address is a valid IP address.
3711+ """
3712+ try:
3713+ # Test to see if already an IPv4 address
3714+ socket.inet_aton(address)
3715+ return True
3716+ except socket.error:
3717+ return False
3718+
3719+
3720+def ns_query(address):
3721+ try:
3722+ import dns.resolver
3723+ except ImportError:
3724+ apt_install('python-dnspython')
3725+ import dns.resolver
3726+
3727+    if isinstance(address, dns.name.Name):
3728+        rtype = 'PTR'
3729+    elif isinstance(address, basestring):
3730+        rtype = 'A'
3731+    else:
3731+        return None
3731+
3732+ answers = dns.resolver.query(address, rtype)
3733+ if answers:
3734+ return str(answers[0])
3735+ return None
3736+
3737+
3738+def get_host_ip(hostname):
3739+ """
3740+ Resolves the IP for a given hostname, or returns
3741+ the input if it is already an IP.
3742+ """
3743+ if is_ip(hostname):
3744+ return hostname
3745+
3746+ return ns_query(hostname)
3747+
3748+
3749+def get_hostname(address):
3750+ """
3751+ Resolves hostname for given IP, or returns the input
3752+ if it is already a hostname.
3753+ """
3754+ if not is_ip(address):
3755+ return address
3756+
3757+ try:
3758+ import dns.reversename
3759+ except ImportError:
3760+ apt_install('python-dnspython')
3761+ import dns.reversename
3762+
3763+ rev = dns.reversename.from_address(address)
3764+ result = ns_query(rev)
3765+ if not result:
3766+ return None
3767+
3768+ # strip trailing .
3769+ if result.endswith('.'):
3770+ return result[:-1]
3771+ return result
3772
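The version-to-codename logic in `get_os_codename_package()` above truncates the upstream version differently for swift (5 characters, then 6 for 1.10.0 and later) than for the other projects (6 characters). A self-contained Python 3 sketch of that truncation rule, using a subset of the mappings from the diff (the function name is illustrative):

```python
from collections import OrderedDict

# Subset of the mappings defined above, values copied from the diff.
OPENSTACK_CODENAMES = OrderedDict([
    ('2013.1', 'grizzly'),
    ('2013.2', 'havana'),
])
SWIFT_CODENAMES = OrderedDict([
    ('1.8.0', 'grizzly'),
    ('1.10.0', 'havana'),
])


def codename_from_version(package, vers):
    """Mirror get_os_codename_package()'s truncation logic:
    swift tries a 5-char prefix first, then 6 chars (for 1.10.0+);
    every other project uses a 6-char prefix."""
    if 'swift' in package:
        swift_vers = vers[:5]
        if swift_vers not in SWIFT_CODENAMES:
            swift_vers = vers[:6]
        return SWIFT_CODENAMES[swift_vers]
    return OPENSTACK_CODENAMES[vers[:6]]


print(codename_from_version('swift-proxy', '1.10.0-0ubuntu1'))
```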
3773=== added directory 'charms/precise/restish/hooks/charmhelpers/contrib/saltstack'
3774=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/saltstack/__init__.py'
3775--- charms/precise/restish/hooks/charmhelpers/contrib/saltstack/__init__.py 1970-01-01 00:00:00 +0000
3776+++ charms/precise/restish/hooks/charmhelpers/contrib/saltstack/__init__.py 2014-01-27 18:01:12 +0000
3777@@ -0,0 +1,102 @@
3778+"""Charm Helpers saltstack - declare the state of your machines.
3779+
3780+This helper enables you to declare your machine state, rather than
3781+program it procedurally (and have to test each change to your procedures).
3782+Your install hook can be as simple as:
3783+
3784+{{{
3785+from charmhelpers.contrib.saltstack import (
3786+ install_salt_support,
3787+ update_machine_state,
3788+)
3789+
3790+
3791+def install():
3792+ install_salt_support()
3793+ update_machine_state('machine_states/dependencies.yaml')
3794+ update_machine_state('machine_states/installed.yaml')
3795+}}}
3796+
3797+and won't need to change (nor will its tests) when you change the machine
3798+state.
3799+
3800+It's using a python package called salt-minion which allows various formats for
3801+specifying resources, such as:
3802+
3803+{{{
3804+/srv/{{ basedir }}:
3805+ file.directory:
3806+ - group: ubunet
3807+ - user: ubunet
3808+ - require:
3809+ - user: ubunet
3810+ - recurse:
3811+ - user
3812+ - group
3813+
3814+ubunet:
3815+ group.present:
3816+ - gid: 1500
3817+ user.present:
3818+ - uid: 1500
3819+ - gid: 1500
3820+ - createhome: False
3821+ - require:
3822+ - group: ubunet
3823+}}}
3824+
3825+The docs for all the different state definitions are at:
3826+ http://docs.saltstack.com/ref/states/all/
3827+
3828+
3829+TODO:
3830+ * Add test helpers which will ensure that machine state definitions
3831+    are functionally (but not necessarily logically) correct (i.e. getting
3832+    salt to parse all state defs).
3833+ * Add a link to a public bootstrap charm example / blogpost.
3834+ * Find a way to obviate the need to use the grains['charm_dir'] syntax
3835+ in templates.
3836+"""
3837+# Copyright 2013 Canonical Ltd.
3838+#
3839+# Authors:
3840+# Charm Helpers Developers <juju@lists.ubuntu.com>
3841+import subprocess
3842+
3843+import charmhelpers.contrib.templating.contexts
3844+import charmhelpers.core.host
3845+import charmhelpers.core.hookenv
3845+import charmhelpers.fetch
3846+
3847+
3848+salt_grains_path = '/etc/salt/grains'
3849+
3850+
3851+def install_salt_support(from_ppa=True):
3852+ """Installs the salt-minion helper for machine state.
3853+
3854+ By default the salt-minion package is installed from
3855+ the saltstack PPA. If from_ppa is False you must ensure
3856+ that the salt-minion package is available in the apt cache.
3857+ """
3858+ if from_ppa:
3859+ subprocess.check_call([
3860+ '/usr/bin/add-apt-repository',
3861+ '--yes',
3862+ 'ppa:saltstack/salt',
3863+ ])
3864+ subprocess.check_call(['/usr/bin/apt-get', 'update'])
3865+ # We install salt-common as salt-minion would run the salt-minion
3866+ # daemon.
3867+ charmhelpers.fetch.apt_install('salt-common')
3868+
3869+
3870+def update_machine_state(state_path):
3871+ """Update the machine state using the provided state declaration."""
3872+ charmhelpers.contrib.templating.contexts.juju_state_to_yaml(
3873+ salt_grains_path)
3874+ subprocess.check_call([
3875+ 'salt-call',
3876+ '--local',
3877+ 'state.template',
3878+ state_path,
3879+ ])
3880
3881=== added directory 'charms/precise/restish/hooks/charmhelpers/contrib/ssl'
3882=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/ssl/__init__.py'
3883--- charms/precise/restish/hooks/charmhelpers/contrib/ssl/__init__.py 1970-01-01 00:00:00 +0000
3884+++ charms/precise/restish/hooks/charmhelpers/contrib/ssl/__init__.py 2014-01-27 18:01:12 +0000
3885@@ -0,0 +1,78 @@
3886+import subprocess
3887+from charmhelpers.core import hookenv
3888+
3889+
3890+def generate_selfsigned(keyfile, certfile, keysize="1024", config=None, subject=None, cn=None):
3891+ """Generate selfsigned SSL keypair
3892+
3893+ You must provide one of the 3 optional arguments:
3894+ config, subject or cn
3895+ If more than one is provided the leftmost will be used
3896+
3897+ Arguments:
3898+ keyfile -- (required) full path to the keyfile to be created
3899+ certfile -- (required) full path to the certfile to be created
3900+ keysize -- (optional) SSL key length
3901+ config -- (optional) openssl configuration file
3902+ subject -- (optional) dictionary with SSL subject variables
3903+    cn -- (optional) certificate common name
3904+
3905+ Required keys in subject dict:
3906+    cn -- Common name (e.g. FQDN)
3907+
3908+ Optional keys in subject dict
3909+ country -- Country Name (2 letter code)
3910+ state -- State or Province Name (full name)
3911+ locality -- Locality Name (eg, city)
3912+ organization -- Organization Name (eg, company)
3913+ organizational_unit -- Organizational Unit Name (eg, section)
3914+ email -- Email Address
3915+ """
3916+
3917+ cmd = []
3918+ if config:
3919+ cmd = ["/usr/bin/openssl", "req", "-new", "-newkey",
3920+ "rsa:{}".format(keysize), "-days", "365", "-nodes", "-x509",
3921+ "-keyout", keyfile,
3922+ "-out", certfile, "-config", config]
3923+ elif subject:
3924+ ssl_subject = ""
3925+ if "country" in subject:
3926+ ssl_subject = ssl_subject + "/C={}".format(subject["country"])
3927+ if "state" in subject:
3928+ ssl_subject = ssl_subject + "/ST={}".format(subject["state"])
3929+ if "locality" in subject:
3930+ ssl_subject = ssl_subject + "/L={}".format(subject["locality"])
3931+ if "organization" in subject:
3932+ ssl_subject = ssl_subject + "/O={}".format(subject["organization"])
3933+ if "organizational_unit" in subject:
3934+ ssl_subject = ssl_subject + "/OU={}".format(subject["organizational_unit"])
3935+ if "cn" in subject:
3936+ ssl_subject = ssl_subject + "/CN={}".format(subject["cn"])
3937+ else:
3938+            hookenv.log("When using the \"subject\" argument you must "
3939+                        "provide the \"cn\" field at the very least")
3940+ return False
3941+ if "email" in subject:
3942+ ssl_subject = ssl_subject + "/emailAddress={}".format(subject["email"])
3943+
3944+ cmd = ["/usr/bin/openssl", "req", "-new", "-newkey",
3945+ "rsa:{}".format(keysize), "-days", "365", "-nodes", "-x509",
3946+ "-keyout", keyfile,
3947+ "-out", certfile, "-subj", ssl_subject]
3948+ elif cn:
3949+ cmd = ["/usr/bin/openssl", "req", "-new", "-newkey",
3950+ "rsa:{}".format(keysize), "-days", "365", "-nodes", "-x509",
3951+ "-keyout", keyfile,
3952+ "-out", certfile, "-subj", "/CN={}".format(cn)]
3953+
3954+ if not cmd:
3955+        hookenv.log("No config, subject or cn provided, "
3956+                    "unable to generate self-signed SSL certificates")
3957+ return False
3958+ try:
3959+ subprocess.check_call(cmd)
3960+ return True
3961+ except Exception as e:
3962+ print "Execution of openssl command failed:\n{}".format(e)
3963+ return False
3964
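`generate_selfsigned()` above builds the openssl `-subj` string by appending `/KEY=value` pairs in a fixed order, requiring at least `cn`. A small Python 3 sketch of that string construction (the helper name is illustrative, not part of the charm):

```python
def build_subject(subject):
    """Rebuild the -subj string the way generate_selfsigned() does.
    Returns None when the required 'cn' key is missing."""
    if 'cn' not in subject:
        return None
    # Field order matches the function above: C, ST, L, O, OU, CN, email.
    order = [('country', 'C'), ('state', 'ST'), ('locality', 'L'),
             ('organization', 'O'), ('organizational_unit', 'OU'),
             ('cn', 'CN'), ('email', 'emailAddress')]
    return ''.join('/{}={}'.format(field, subject[key])
                   for key, field in order if key in subject)


print(build_subject({'cn': 'example.com', 'country': 'GB'}))
```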
3965=== added directory 'charms/precise/restish/hooks/charmhelpers/contrib/storage'
3966=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/storage/__init__.py'
3967=== added directory 'charms/precise/restish/hooks/charmhelpers/contrib/storage/linux'
3968=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/storage/linux/__init__.py'
3969=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/storage/linux/ceph.py'
3970--- charms/precise/restish/hooks/charmhelpers/contrib/storage/linux/ceph.py 1970-01-01 00:00:00 +0000
3971+++ charms/precise/restish/hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-01-27 18:01:12 +0000
3972@@ -0,0 +1,383 @@
3973+#
3974+# Copyright 2012 Canonical Ltd.
3975+#
3976+# This file is sourced from lp:openstack-charm-helpers
3977+#
3978+# Authors:
3979+# James Page <james.page@ubuntu.com>
3980+# Adam Gandelman <adamg@ubuntu.com>
3981+#
3982+
3983+import os
3984+import shutil
3985+import json
3986+import time
3987+
3988+from subprocess import (
3989+ check_call,
3990+ check_output,
3991+ CalledProcessError
3992+)
3993+
3994+from charmhelpers.core.hookenv import (
3995+ relation_get,
3996+ relation_ids,
3997+ related_units,
3998+ log,
3999+ INFO,
4000+ WARNING,
4001+ ERROR
4002+)
4003+
4004+from charmhelpers.core.host import (
4005+ mount,
4006+ mounts,
4007+ service_start,
4008+ service_stop,
4009+ service_running,
4010+ umount,
4011+)
4012+
4013+from charmhelpers.fetch import (
4014+ apt_install,
4015+)
4016+
4017+KEYRING = '/etc/ceph/ceph.client.{}.keyring'
4018+KEYFILE = '/etc/ceph/ceph.client.{}.key'
4019+
4020+CEPH_CONF = """[global]
4021+ auth supported = {auth}
4022+ keyring = {keyring}
4023+ mon host = {mon_hosts}
4024+"""
4025+
4026+
4027+def install():
4028+ ''' Basic Ceph client installation '''
4029+ ceph_dir = "/etc/ceph"
4030+ if not os.path.exists(ceph_dir):
4031+ os.mkdir(ceph_dir)
4032+ apt_install('ceph-common', fatal=True)
4033+
4034+
4035+def rbd_exists(service, pool, rbd_img):
4036+ ''' Check to see if a RADOS block device exists '''
4037+ try:
4038+ out = check_output(['rbd', 'list', '--id', service,
4039+ '--pool', pool])
4040+ except CalledProcessError:
4041+ return False
4042+ else:
4043+ return rbd_img in out
4044+
4045+
4046+def create_rbd_image(service, pool, image, sizemb):
4047+ ''' Create a new RADOS block device '''
4048+ cmd = [
4049+ 'rbd',
4050+ 'create',
4051+ image,
4052+ '--size',
4053+ str(sizemb),
4054+ '--id',
4055+ service,
4056+ '--pool',
4057+ pool
4058+ ]
4059+ check_call(cmd)
4060+
4061+
4062+def pool_exists(service, name):
4063+ ''' Check to see if a RADOS pool already exists '''
4064+ try:
4065+ out = check_output(['rados', '--id', service, 'lspools'])
4066+ except CalledProcessError:
4067+ return False
4068+ else:
4069+ return name in out
4070+
4071+
4072+def get_osds(service):
4073+ '''
4074+ Return a list of all Ceph Object Storage Daemons
4075+ currently in the cluster
4076+ '''
4077+ version = ceph_version()
4078+ if version and version >= '0.56':
4079+ return json.loads(check_output(['ceph', '--id', service,
4080+ 'osd', 'ls', '--format=json']))
4081+ else:
4082+ return None
4083+
4084+
4085+def create_pool(service, name, replicas=2):
4086+ ''' Create a new RADOS pool '''
4087+ if pool_exists(service, name):
4088+ log("Ceph pool {} already exists, skipping creation".format(name),
4089+ level=WARNING)
4090+ return
4091+ # Calculate the number of placement groups based
4092+ # on upstream recommended best practices.
4093+ osds = get_osds(service)
4094+ if osds:
4095+ pgnum = (len(osds) * 100 / replicas)
4096+ else:
4097+ # NOTE(james-page): Default to 200 for older ceph versions
4098+ # which don't support OSD query from cli
4099+ pgnum = 200
4100+ cmd = [
4101+ 'ceph', '--id', service,
4102+ 'osd', 'pool', 'create',
4103+ name, str(pgnum)
4104+ ]
4105+ check_call(cmd)
4106+ cmd = [
4107+ 'ceph', '--id', service,
4108+ 'osd', 'pool', 'set', name,
4109+ 'size', str(replicas)
4110+ ]
4111+ check_call(cmd)
4112+
4113+
4114+def delete_pool(service, name):
4115+ ''' Delete a RADOS pool from ceph '''
4116+ cmd = [
4117+ 'ceph', '--id', service,
4118+ 'osd', 'pool', 'delete',
4119+ name, '--yes-i-really-really-mean-it'
4120+ ]
4121+ check_call(cmd)
4122+
4123+
4124+def _keyfile_path(service):
4125+ return KEYFILE.format(service)
4126+
4127+
4128+def _keyring_path(service):
4129+ return KEYRING.format(service)
4130+
4131+
4132+def create_keyring(service, key):
4133+ ''' Create a new Ceph keyring containing key'''
4134+ keyring = _keyring_path(service)
4135+ if os.path.exists(keyring):
4136+ log('ceph: Keyring exists at %s.' % keyring, level=WARNING)
4137+ return
4138+ cmd = [
4139+ 'ceph-authtool',
4140+ keyring,
4141+ '--create-keyring',
4142+ '--name=client.{}'.format(service),
4143+ '--add-key={}'.format(key)
4144+ ]
4145+ check_call(cmd)
4146+ log('ceph: Created new ring at %s.' % keyring, level=INFO)
4147+
4148+
4149+def create_key_file(service, key):
4150+ ''' Create a file containing key '''
4151+ keyfile = _keyfile_path(service)
4152+ if os.path.exists(keyfile):
4153+ log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)
4154+ return
4155+ with open(keyfile, 'w') as fd:
4156+ fd.write(key)
4157+ log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
4158+
4159+
4160+def get_ceph_nodes():
4161+    ''' Query named relation 'ceph' to determine current nodes '''
4162+ hosts = []
4163+ for r_id in relation_ids('ceph'):
4164+ for unit in related_units(r_id):
4165+ hosts.append(relation_get('private-address', unit=unit, rid=r_id))
4166+ return hosts
4167+
4168+
4169+def configure(service, key, auth):
4170+ ''' Perform basic configuration of Ceph '''
4171+ create_keyring(service, key)
4172+ create_key_file(service, key)
4173+ hosts = get_ceph_nodes()
4174+ with open('/etc/ceph/ceph.conf', 'w') as ceph_conf:
4175+ ceph_conf.write(CEPH_CONF.format(auth=auth,
4176+ keyring=_keyring_path(service),
4177+ mon_hosts=",".join(map(str, hosts))))
4178+ modprobe('rbd')
4179+
4180+
4181+def image_mapped(name):
4182+ ''' Determine whether a RADOS block device is mapped locally '''
4183+ try:
4184+ out = check_output(['rbd', 'showmapped'])
4185+ except CalledProcessError:
4186+ return False
4187+ else:
4188+ return name in out
4189+
4190+
4191+def map_block_storage(service, pool, image):
4192+ ''' Map a RADOS block device for local use '''
4193+ cmd = [
4194+ 'rbd',
4195+ 'map',
4196+ '{}/{}'.format(pool, image),
4197+ '--user',
4198+ service,
4199+ '--secret',
4200+ _keyfile_path(service),
4201+ ]
4202+ check_call(cmd)
4203+
4204+
4205+def filesystem_mounted(fs):
4206+    ''' Determine whether a filesystem is already mounted '''
4207+ return fs in [f for f, m in mounts()]
4208+
4209+
4210+def make_filesystem(blk_device, fstype='ext4', timeout=10):
4211+ ''' Make a new filesystem on the specified block device '''
4212+ count = 0
4213+ e_noent = os.errno.ENOENT
4214+ while not os.path.exists(blk_device):
4215+ if count >= timeout:
4216+ log('ceph: gave up waiting on block device %s' % blk_device,
4217+ level=ERROR)
4218+ raise IOError(e_noent, os.strerror(e_noent), blk_device)
4219+ log('ceph: waiting for block device %s to appear' % blk_device,
4220+ level=INFO)
4221+ count += 1
4222+ time.sleep(1)
4223+ else:
4224+ log('ceph: Formatting block device %s as filesystem %s.' %
4225+ (blk_device, fstype), level=INFO)
4226+ check_call(['mkfs', '-t', fstype, blk_device])
4227+
4228+
4229+def place_data_on_block_device(blk_device, data_src_dst):
4230+ ''' Migrate data in data_src_dst to blk_device and then remount '''
4231+ # mount block device into /mnt
4232+ mount(blk_device, '/mnt')
4233+ # copy data to /mnt
4234+ copy_files(data_src_dst, '/mnt')
4235+ # umount block device
4236+ umount('/mnt')
4237+ # Grab user/group ID's from original source
4238+ _dir = os.stat(data_src_dst)
4239+ uid = _dir.st_uid
4240+ gid = _dir.st_gid
4241+ # re-mount where the data should originally be
4242+ # TODO: persist is currently a NO-OP in core.host
4243+ mount(blk_device, data_src_dst, persist=True)
4244+ # ensure original ownership of new mount.
4245+ os.chown(data_src_dst, uid, gid)
4246+
4247+
4248+# TODO: re-use
4249+def modprobe(module):
4250+ ''' Load a kernel module and configure for auto-load on reboot '''
4251+ log('ceph: Loading kernel module', level=INFO)
4252+ cmd = ['modprobe', module]
4253+ check_call(cmd)
4254+ with open('/etc/modules', 'r+') as modules:
4255+ if module not in modules.read():
4256+ modules.write(module)
4257+
4258+
4259+def copy_files(src, dst, symlinks=False, ignore=None):
4260+ ''' Copy files from src to dst '''
4261+ for item in os.listdir(src):
4262+ s = os.path.join(src, item)
4263+ d = os.path.join(dst, item)
4264+ if os.path.isdir(s):
4265+ shutil.copytree(s, d, symlinks, ignore)
4266+ else:
4267+ shutil.copy2(s, d)
4268+
4269+
4270+def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
4271+ blk_device, fstype, system_services=[]):
4272+ """
4273+ NOTE: This function must only be called from a single service unit for
4274+ the same rbd_img otherwise data loss will occur.
4275+
4276+ Ensures given pool and RBD image exists, is mapped to a block device,
4277+ and the device is formatted and mounted at the given mount_point.
4278+
4279+ If formatting a device for the first time, data existing at mount_point
4280+ will be migrated to the RBD device before being re-mounted.
4281+
4282+ All services listed in system_services will be stopped prior to data
4283+ migration and restarted when complete.
4284+ """
4285+ # Ensure pool, RBD image, RBD mappings are in place.
4286+ if not pool_exists(service, pool):
4287+ log('ceph: Creating new pool {}.'.format(pool))
4288+ create_pool(service, pool)
4289+
4290+ if not rbd_exists(service, pool, rbd_img):
4291+ log('ceph: Creating RBD image ({}).'.format(rbd_img))
4292+ create_rbd_image(service, pool, rbd_img, sizemb)
4293+
4294+ if not image_mapped(rbd_img):
4295+ log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))
4296+ map_block_storage(service, pool, rbd_img)
4297+
4298+ # make file system
4299+ # TODO: What happens if for whatever reason this is run again and
4300+ # the data is already in the rbd device and/or is mounted??
4301+ # When it is mounted already, it will fail to make the fs
4302+ # XXX: This is really sketchy! Need to at least add an fstab entry
4303+ # otherwise this hook will blow away existing data if it's executed
4304+ # after a reboot.
4305+ if not filesystem_mounted(mount_point):
4306+ make_filesystem(blk_device, fstype)
4307+
4308+ for svc in system_services:
4309+ if service_running(svc):
4310+ log('ceph: Stopping service {} prior to migrating data.'
4311+ .format(svc))
4312+ service_stop(svc)
4313+
4314+ place_data_on_block_device(blk_device, mount_point)
4315+
4316+ for svc in system_services:
4317+ log('ceph: Starting service {} after migrating data.'
4318+ .format(svc))
4319+ service_start(svc)
4320+
4321+
4322+def ensure_ceph_keyring(service, user=None, group=None):
4323+ '''
4324+ Ensures a ceph keyring is created for a named service
4325+ and optionally ensures user and group ownership.
4326+
4327+ Returns False if no ceph key is available in relation state.
4328+ '''
4329+ key = None
4330+ for rid in relation_ids('ceph'):
4331+ for unit in related_units(rid):
4332+ key = relation_get('key', rid=rid, unit=unit)
4333+ if key:
4334+ break
4335+ if not key:
4336+ return False
4337+ create_keyring(service=service, key=key)
4338+ keyring = _keyring_path(service)
4339+ if user and group:
4340+ check_call(['chown', '%s.%s' % (user, group), keyring])
4341+ return True
4342+
4343+
4344+def ceph_version():
4345+ ''' Retrieve the local version of ceph '''
4346+ if os.path.exists('/usr/bin/ceph'):
4347+ cmd = ['ceph', '-v']
4348+ output = check_output(cmd)
4349+ output = output.split()
4350+ if len(output) > 3:
4351+ return output[2]
4352+ else:
4353+ return None
4354+ else:
4355+ return None
4356
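The version parsing in `ceph_version()` above can be exercised without a ceph install by running the same token logic against a canned `ceph -v` string (the sample output below is a hypothetical example, not taken from this branch):

```python
# Sketch of the version-string parsing used by ceph_version() above,
# applied to canned `ceph -v` output instead of the real binary.
def parse_ceph_version(output):
    # `ceph -v` prints e.g. "ceph version 0.67.4 (<sha1>)";
    # the version is the third whitespace-separated token.
    parts = output.split()
    if len(parts) > 3:
        return parts[2]
    return None

print(parse_ceph_version('ceph version 0.67.4 (ad85b8b)'))  # 0.67.4
print(parse_ceph_version(''))                               # None
```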
4357=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/storage/linux/loopback.py'
4358--- charms/precise/restish/hooks/charmhelpers/contrib/storage/linux/loopback.py 1970-01-01 00:00:00 +0000
4359+++ charms/precise/restish/hooks/charmhelpers/contrib/storage/linux/loopback.py 2014-01-27 18:01:12 +0000
4360@@ -0,0 +1,62 @@
4361+
4362+import os
4363+import re
4364+
4365+from subprocess import (
4366+ check_call,
4367+ check_output,
4368+)
4369+
4370+
4371+##################################################
4372+# loopback device helpers.
4373+##################################################
4374+def loopback_devices():
4375+ '''
4376+ Parse through 'losetup -a' output to determine currently mapped
4377+ loopback devices. Output is expected to look like:
4378+
4379+ /dev/loop0: [0807]:961814 (/tmp/my.img)
4380+
4381+ :returns: dict: a dict mapping {loopback_dev: backing_file}
4382+ '''
4383+ loopbacks = {}
4384+ cmd = ['losetup', '-a']
4385+ devs = [d.strip().split(' ') for d in
4386+ check_output(cmd).splitlines() if d != '']
4387+ for dev, _, f in devs:
4388+ loopbacks[dev.replace(':', '')] = re.search(r'\((\S+)\)', f).groups()[0]
4389+ return loopbacks
4390+
4391+
4392+def create_loopback(file_path):
4393+ '''
4394+ Create a loopback device for a given backing file.
4395+
4396+ :returns: str: Full path to new loopback device (eg, /dev/loop0)
4397+ '''
4398+ file_path = os.path.abspath(file_path)
4399+ check_call(['losetup', '--find', file_path])
4400+ for d, f in loopback_devices().iteritems():
4401+ if f == file_path:
4402+ return d
4403+
4404+
4405+def ensure_loopback_device(path, size):
4406+ '''
4407+ Ensure a loopback device exists for a given backing file path and size.
4408+ If a loopback device is not mapped to the file, a new one will be created.
4409+
4410+ TODO: Confirm size of found loopback device.
4411+
4412+ :returns: str: Full path to the ensured loopback device (eg, /dev/loop0)
4413+ '''
4414+ for d, f in loopback_devices().iteritems():
4415+ if f == path:
4416+ return d
4417+
4418+ if not os.path.exists(path):
4419+ cmd = ['truncate', '--size', size, path]
4420+ check_call(cmd)
4421+
4422+ return create_loopback(path)
4423
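The `losetup -a` parsing in `loopback_devices()` above can be checked against a canned output line; this standalone sketch mirrors its split-and-regex logic (the sample device and backing file are hypothetical):

```python
import re

def parse_losetup(output):
    """Parse `losetup -a` output into {device: backing_file},
    mirroring the logic of loopback_devices() above."""
    loopbacks = {}
    devs = [d.strip().split(' ') for d in output.splitlines() if d != '']
    for dev, _, f in devs:
        # dev looks like "/dev/loop0:"; f looks like "(/tmp/my.img)"
        loopbacks[dev.replace(':', '')] = re.search(r'\((\S+)\)', f).groups()[0]
    return loopbacks

sample = '/dev/loop0: [0807]:961814 (/tmp/my.img)'
print(parse_losetup(sample))  # {'/dev/loop0': '/tmp/my.img'}
```

Note that splitting on single spaces assumes the backing file path itself contains no spaces, which holds for the charm's own image files.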
4424=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/storage/linux/lvm.py'
4425--- charms/precise/restish/hooks/charmhelpers/contrib/storage/linux/lvm.py 1970-01-01 00:00:00 +0000
4426+++ charms/precise/restish/hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-01-27 18:01:12 +0000
4427@@ -0,0 +1,88 @@
4428+from subprocess import (
4429+ CalledProcessError,
4430+ check_call,
4431+ check_output,
4432+ Popen,
4433+ PIPE,
4434+)
4435+
4436+
4437+##################################################
4438+# LVM helpers.
4439+##################################################
4440+def deactivate_lvm_volume_group(block_device):
4441+ '''
4442+ Deactivate any volume group associated with an LVM physical volume.
4443+
4444+ :param block_device: str: Full path to LVM physical volume
4445+ '''
4446+ vg = list_lvm_volume_group(block_device)
4447+ if vg:
4448+ cmd = ['vgchange', '-an', vg]
4449+ check_call(cmd)
4450+
4451+
4452+def is_lvm_physical_volume(block_device):
4453+ '''
4454+ Determine whether a block device is initialized as an LVM PV.
4455+
4456+ :param block_device: str: Full path of block device to inspect.
4457+
4458+ :returns: boolean: True if block device is a PV, False if not.
4459+ '''
4460+ try:
4461+ check_output(['pvdisplay', block_device])
4462+ return True
4463+ except CalledProcessError:
4464+ return False
4465+
4466+
4467+def remove_lvm_physical_volume(block_device):
4468+ '''
4469+ Remove LVM PV signatures from a given block device.
4470+
4471+ :param block_device: str: Full path of block device to scrub.
4472+ '''
4473+ p = Popen(['pvremove', '-ff', block_device],
4474+ stdin=PIPE)
4475+ p.communicate(input='y\n')
4476+
4477+
4478+def list_lvm_volume_group(block_device):
4479+ '''
4480+ List LVM volume group associated with a given block device.
4481+
4482+ Assumes block device is a valid LVM PV.
4483+
4484+ :param block_device: str: Full path of block device to inspect.
4485+
4486+ :returns: str: Name of volume group associated with block device or None
4487+ '''
4488+ vg = None
4489+ pvd = check_output(['pvdisplay', block_device]).splitlines()
4490+ for l in pvd:
4491+ if l.strip().startswith('VG Name'):
4492+ vg = ' '.join(l.split()).split(' ').pop()
4493+ return vg
4494+
4495+
4496+def create_lvm_physical_volume(block_device):
4497+ '''
4498+ Initialize a block device as an LVM physical volume.
4499+
4500+ :param block_device: str: Full path of block device to initialize.
4501+
4502+ '''
4503+ check_call(['pvcreate', block_device])
4504+
4505+
4506+def create_lvm_volume_group(volume_group, block_device):
4507+ '''
4508+ Create an LVM volume group backed by a given block device.
4509+
4510+ Assumes block device has already been initialized as an LVM PV.
4511+
4512+ :param volume_group: str: Name of volume group to create.
4513+ :block_device: str: Full path of PV-initialized block device.
4514+ '''
4515+ check_call(['vgcreate', volume_group, block_device])
4516
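The `VG Name` extraction in `list_lvm_volume_group()` above can be tested without LVM by feeding it canned `pvdisplay` output (the volume names below are hypothetical):

```python
def parse_vg_name(pvdisplay_output):
    """Extract the 'VG Name' field from pvdisplay output,
    as list_lvm_volume_group() above does."""
    vg = None
    for line in pvdisplay_output.splitlines():
        if line.strip().startswith('VG Name'):
            # Collapse runs of spaces, then take the last token.
            vg = ' '.join(line.split()).split(' ').pop()
    return vg

sample = '''  --- Physical volume ---
  PV Name               /dev/vdb
  VG Name               vg_data'''
print(parse_vg_name(sample))  # vg_data
```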
4517=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/storage/linux/utils.py'
4518--- charms/precise/restish/hooks/charmhelpers/contrib/storage/linux/utils.py 1970-01-01 00:00:00 +0000
4519+++ charms/precise/restish/hooks/charmhelpers/contrib/storage/linux/utils.py 2014-01-27 18:01:12 +0000
4520@@ -0,0 +1,25 @@
4521+from os import stat
4522+from stat import S_ISBLK
4523+
4524+from subprocess import (
4525+ check_call
4526+)
4527+
4528+
4529+def is_block_device(path):
4530+ '''
4531+ Confirm device at path is a valid block device node.
4532+
4533+ :returns: boolean: True if path is a block device, False if not.
4534+ '''
4535+ return S_ISBLK(stat(path).st_mode)
4536+
4537+
4538+def zap_disk(block_device):
4539+ '''
4540+ Clear a block device of partition table. Relies on sgdisk, which is
4541+ installed as part of the 'gdisk' package in Ubuntu.
4542+
4543+ :param block_device: str: Full path of block device to clean.
4544+ '''
4545+ check_call(['sgdisk', '--zap-all', '--mbrtogpt', block_device])
4546
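`is_block_device()` above reduces to `S_ISBLK` on the stat mode; that check can be demonstrated on synthetic mode bits rather than a real device node:

```python
import stat

# is_block_device() above boils down to S_ISBLK on st_mode.
# Synthetic modes stand in for stat() of real paths.
block_mode = stat.S_IFBLK | 0o660   # mode bits of a typical /dev block node
file_mode = stat.S_IFREG | 0o644    # mode bits of a regular file

print(stat.S_ISBLK(block_mode))  # True
print(stat.S_ISBLK(file_mode))   # False
```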
4547=== added directory 'charms/precise/restish/hooks/charmhelpers/contrib/templating'
4548=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/templating/__init__.py'
4549=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/templating/contexts.py'
4550--- charms/precise/restish/hooks/charmhelpers/contrib/templating/contexts.py 1970-01-01 00:00:00 +0000
4551+++ charms/precise/restish/hooks/charmhelpers/contrib/templating/contexts.py 2014-01-27 18:01:12 +0000
4552@@ -0,0 +1,73 @@
4553+# Copyright 2013 Canonical Ltd.
4554+#
4555+# Authors:
4556+# Charm Helpers Developers <juju@lists.ubuntu.com>
4557+"""A helper to create a yaml cache of config with namespaced relation data."""
4558+import os
4559+import yaml
4560+
4561+import charmhelpers.core.hookenv
4562+
4563+
4564+charm_dir = os.environ.get('CHARM_DIR', '')
4565+
4566+
4567+def juju_state_to_yaml(yaml_path, namespace_separator=':',
4568+ allow_hyphens_in_keys=True):
4569+ """Update the juju config and state in a yaml file.
4570+
4571+ This includes any current relation-get data, and the charm
4572+ directory.
4573+
4574+ This function was created for the ansible and saltstack
4575+ support, as those libraries can use a yaml file to supply
4576+ context to templates, but it may be useful generally to
4577+ create and update an on-disk cache of all the config, including
4578+ previous relation data.
4579+
4580+ By default, hyphens are allowed in keys as this is supported
4581+ by yaml, but for tools like ansible, hyphens are not valid [1].
4582+
4583+ [1] http://www.ansibleworks.com/docs/playbooks_variables.html#what-makes-a-valid-variable-name
4584+ """
4585+ config = charmhelpers.core.hookenv.config()
4586+
4587+ # Add the charm_dir which we will need to refer to charm
4588+ # file resources etc.
4589+ config['charm_dir'] = charm_dir
4590+ config['local_unit'] = charmhelpers.core.hookenv.local_unit()
4591+
4592+ # Add any relation data prefixed with the relation type.
4593+ relation_type = charmhelpers.core.hookenv.relation_type()
4594+ if relation_type is not None:
4595+ relation_data = charmhelpers.core.hookenv.relation_get()
4596+ relation_data = dict(
4597+ ("{relation_type}{namespace_separator}{key}".format(
4598+ relation_type=relation_type.replace('-', '_'),
4599+ key=key,
4600+ namespace_separator=namespace_separator), val)
4601+ for key, val in relation_data.items())
4602+ config.update(relation_data)
4603+
4604+ # Don't use non-standard tags for unicode which will not
4605+ work when salt uses yaml.safe_load.
4606+ yaml.add_representer(unicode, lambda dumper,
4607+ value: dumper.represent_scalar(
4608+ u'tag:yaml.org,2002:str', value))
4609+
4610+ yaml_dir = os.path.dirname(yaml_path)
4611+ if not os.path.exists(yaml_dir):
4612+ os.makedirs(yaml_dir)
4613+
4614+ if os.path.exists(yaml_path):
4615+ with open(yaml_path, "r") as existing_vars_file:
4616+ existing_vars = yaml.load(existing_vars_file.read())
4617+ else:
4618+ existing_vars = {}
4619+
4620+ if not allow_hyphens_in_keys:
4621+ config = dict(
4622+ (key.replace('-', '_'), val) for key, val in config.items())
4623+ existing_vars.update(config)
4624+ with open(yaml_path, "w+") as fp:
4625+ fp.write(yaml.dump(existing_vars))
4626
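The key-namespacing step inside `juju_state_to_yaml()` above (relation keys prefixed with the relation type, hyphens folded to underscores) can be sketched in isolation; the relation name and data here are hypothetical:

```python
# Standalone sketch of the namespacing transform used by
# juju_state_to_yaml() above. 'db-admin' and its values are
# made-up stand-ins for real relation data.
relation_type = 'db-admin'
namespace_separator = ':'
relation_data = {'host': '10.0.0.2', 'password': 's3cret'}

namespaced = dict(
    ('{}{}{}'.format(relation_type.replace('-', '_'),
                     namespace_separator, key), val)
    for key, val in relation_data.items())

print(namespaced['db_admin:host'])  # 10.0.0.2
```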
4627=== added file 'charms/precise/restish/hooks/charmhelpers/contrib/templating/pyformat.py'
4628--- charms/precise/restish/hooks/charmhelpers/contrib/templating/pyformat.py 1970-01-01 00:00:00 +0000
4629+++ charms/precise/restish/hooks/charmhelpers/contrib/templating/pyformat.py 2014-01-27 18:01:12 +0000
4630@@ -0,0 +1,13 @@
4631+'''
4632+Templating using standard Python str.format() method.
4633+'''
4634+
4635+from charmhelpers.core import hookenv
4636+
4637+
4638+def render(template, extra={}, **kwargs):
4639+ """Return the template rendered using Python's str.format()."""
4640+ context = hookenv.execution_environment()
4641+ context.update(extra)
4642+ context.update(kwargs)
4643+ return template.format(**context)
4644
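`render()` above is just `str.format()` over a merged context; this self-contained sketch swaps the `hookenv.execution_environment()` call for an explicit dict (the unit name and port are hypothetical):

```python
# Sketch of pyformat's render(), with the hook execution environment
# replaced by an explicit context dict for demonstration.
def render(template, context, extra=None, **kwargs):
    merged = dict(context)       # base context (stands in for hookenv data)
    merged.update(extra or {})   # explicit overrides
    merged.update(kwargs)        # keyword overrides win last
    return template.format(**merged)

out = render('{unit} listens on {port}', {'unit': 'restish/0'}, port=8080)
print(out)  # restish/0 listens on 8080
```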
4645=== added directory 'charms/precise/restish/hooks/charmhelpers/core'
4646=== added file 'charms/precise/restish/hooks/charmhelpers/core/__init__.py'
4647=== added file 'charms/precise/restish/hooks/charmhelpers/core/hookenv.py'
4648--- charms/precise/restish/hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000
4649+++ charms/precise/restish/hooks/charmhelpers/core/hookenv.py 2014-01-27 18:01:12 +0000
4650@@ -0,0 +1,395 @@
4651+"Interactions with the Juju environment"
4652+# Copyright 2013 Canonical Ltd.
4653+#
4654+# Authors:
4655+# Charm Helpers Developers <juju@lists.ubuntu.com>
4656+
4657+import os
4658+import json
4659+import yaml
4660+import subprocess
4661+import UserDict
4662+from subprocess import CalledProcessError
4663+
4664+CRITICAL = "CRITICAL"
4665+ERROR = "ERROR"
4666+WARNING = "WARNING"
4667+INFO = "INFO"
4668+DEBUG = "DEBUG"
4669+MARKER = object()
4670+
4671+cache = {}
4672+
4673+
4674+def cached(func):
4675+ """Cache return values for multiple executions of func + args
4676+
4677+ For example:
4678+
4679+ @cached
4680+ def unit_get(attribute):
4681+ pass
4682+
4683+ unit_get('test')
4684+
4685+ will cache the result of unit_get + 'test' for future calls.
4686+ """
4687+ def wrapper(*args, **kwargs):
4688+ global cache
4689+ key = str((func, args, kwargs))
4690+ try:
4691+ return cache[key]
4692+ except KeyError:
4693+ res = func(*args, **kwargs)
4694+ cache[key] = res
4695+ return res
4696+ return wrapper
4697+
4698+
4699+def flush(key):
4700+ """Flushes any entries from function cache where the
4701+ key is found in the function+args """
4702+ flush_list = []
4703+ for item in cache:
4704+ if key in item:
4705+ flush_list.append(item)
4706+ for item in flush_list:
4707+ del cache[item]
4708+
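The `@cached` / `flush()` pair above can be exercised standalone: results are memoised by `str((func, args, kwargs))`, and `flush(key)` drops every entry whose cache key mentions `key`. This sketch re-states both and counts actual function executions:

```python
# Standalone sketch of hookenv's cached/flush pair defined above.
cache = {}

def cached(func):
    def wrapper(*args, **kwargs):
        key = str((func, args, kwargs))
        try:
            return cache[key]
        except KeyError:
            res = func(*args, **kwargs)
            cache[key] = res
            return res
    return wrapper

def flush(key):
    for item in [k for k in cache if key in k]:
        del cache[item]

calls = []

@cached
def unit_get(attribute):
    calls.append(attribute)      # record real executions
    return 'value-for-' + attribute

unit_get('test')
unit_get('test')                 # second call served from the cache
print(len(calls))                # 1
flush('test')                    # evicts entries whose key mentions 'test'
unit_get('test')
print(len(calls))                # 2
```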
4709+
4710+def log(message, level=None):
4711+ """Write a message to the juju log"""
4712+ command = ['juju-log']
4713+ if level:
4714+ command += ['-l', level]
4715+ command += [message]
4716+ subprocess.call(command)
4717+
4718+
4719+class Serializable(UserDict.IterableUserDict):
4720+ """Wrapper, an object that can be serialized to yaml or json"""
4721+
4722+ def __init__(self, obj):
4723+ # wrap the object
4724+ UserDict.IterableUserDict.__init__(self)
4725+ self.data = obj
4726+
4727+ def __getattr__(self, attr):
4728+ # See if this object has attribute.
4729+ if attr in ("json", "yaml", "data"):
4730+ return self.__dict__[attr]
4731+ # Check for attribute in wrapped object.
4732+ got = getattr(self.data, attr, MARKER)
4733+ if got is not MARKER:
4734+ return got
4735+ # Proxy to the wrapped object via dict interface.
4736+ try:
4737+ return self.data[attr]
4738+ except KeyError:
4739+ raise AttributeError(attr)
4740+
4741+ def __getstate__(self):
4742+ # Pickle as a standard dictionary.
4743+ return self.data
4744+
4745+ def __setstate__(self, state):
4746+ # Unpickle into our wrapper.
4747+ self.data = state
4748+
4749+ def json(self):
4750+ """Serialize the object to json"""
4751+ return json.dumps(self.data)
4752+
4753+ def yaml(self):
4754+ """Serialize the object to yaml"""
4755+ return yaml.dump(self.data)
4756+
4757+
4758+def execution_environment():
4759+ """A convenient bundling of the current execution context"""
4760+ context = {}
4761+ context['conf'] = config()
4762+ if relation_id():
4763+ context['reltype'] = relation_type()
4764+ context['relid'] = relation_id()
4765+ context['rel'] = relation_get()
4766+ context['unit'] = local_unit()
4767+ context['rels'] = relations()
4768+ context['env'] = os.environ
4769+ return context
4770+
4771+
4772+def in_relation_hook():
4773+ """Determine whether we're running in a relation hook"""
4774+ return 'JUJU_RELATION' in os.environ
4775+
4776+
4777+def relation_type():
4778+ """The scope for the current relation hook"""
4779+ return os.environ.get('JUJU_RELATION', None)
4780+
4781+
4782+def relation_id():
4783+ """The relation ID for the current relation hook"""
4784+ return os.environ.get('JUJU_RELATION_ID', None)
4785+
4786+
4787+def local_unit():
4788+ """Local unit ID"""
4789+ return os.environ['JUJU_UNIT_NAME']
4790+
4791+
4792+def remote_unit():
4793+ """The remote unit for the current relation hook"""
4794+ return os.environ['JUJU_REMOTE_UNIT']
4795+
4796+
4797+def service_name():
4798+ """The name service group this unit belongs to"""
4799+ return local_unit().split('/')[0]
4800+
4801+
4802+@cached
4803+def config(scope=None):
4804+ """Juju charm configuration"""
4805+ config_cmd_line = ['config-get']
4806+ if scope is not None:
4807+ config_cmd_line.append(scope)
4808+ config_cmd_line.append('--format=json')
4809+ try:
4810+ return json.loads(subprocess.check_output(config_cmd_line))
4811+ except ValueError:
4812+ return None
4813+
4814+
4815+@cached
4816+def relation_get(attribute=None, unit=None, rid=None):
4817+ """Get relation information"""
4818+ _args = ['relation-get', '--format=json']
4819+ if rid:
4820+ _args.append('-r')
4821+ _args.append(rid)
4822+ _args.append(attribute or '-')
4823+ if unit:
4824+ _args.append(unit)
4825+ try:
4826+ return json.loads(subprocess.check_output(_args))
4827+ except ValueError:
4828+ return None
4829+ except CalledProcessError, e:
4830+ if e.returncode == 2:
4831+ return None
4832+ raise
4833+
4834+
4835+def relation_set(relation_id=None, relation_settings={}, **kwargs):
4836+ """Set relation information for the current unit"""
4837+ relation_cmd_line = ['relation-set']
4838+ if relation_id is not None:
4839+ relation_cmd_line.extend(('-r', relation_id))
4840+ for k, v in (relation_settings.items() + kwargs.items()):
4841+ if v is None:
4842+ relation_cmd_line.append('{}='.format(k))
4843+ else:
4844+ relation_cmd_line.append('{}={}'.format(k, v))
4845+ subprocess.check_call(relation_cmd_line)
4846+ # Flush cache of any relation-gets for local unit
4847+ flush(local_unit())
4848+
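The command-line assembly inside `relation_set()` above has one subtlety worth isolating: a `None` value becomes a bare `key=`, which unsets the key. This sketch builds (but does not run) the argument list; the helper name and sample values are hypothetical:

```python
# Sketch of how relation_set() above assembles the relation-set
# command line; build_relation_set_args is a made-up helper name.
def build_relation_set_args(relation_id=None, relation_settings=None, **kwargs):
    settings = dict(relation_settings or {})
    settings.update(kwargs)
    cmd = ['relation-set']
    if relation_id is not None:
        cmd.extend(('-r', relation_id))
    # Sorted for a deterministic demo; the original preserves dict order.
    for k, v in sorted(settings.items()):
        cmd.append('{}='.format(k) if v is None else '{}={}'.format(k, v))
    return cmd

print(build_relation_set_args('db:1', {'host': '10.0.0.2'}, password=None))
# ['relation-set', '-r', 'db:1', 'host=10.0.0.2', 'password=']
```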
4849+
4850+@cached
4851+def relation_ids(reltype=None):
4852+ """A list of relation_ids"""
4853+ reltype = reltype or relation_type()
4854+ relid_cmd_line = ['relation-ids', '--format=json']
4855+ if reltype is not None:
4856+ relid_cmd_line.append(reltype)
4857+ return json.loads(subprocess.check_output(relid_cmd_line)) or []
4859+
4860+
4861+@cached
4862+def related_units(relid=None):
4863+ """A list of related units"""
4864+ relid = relid or relation_id()
4865+ units_cmd_line = ['relation-list', '--format=json']
4866+ if relid is not None:
4867+ units_cmd_line.extend(('-r', relid))
4868+ return json.loads(subprocess.check_output(units_cmd_line)) or []
4869+
4870+
4871+@cached
4872+def relation_for_unit(unit=None, rid=None):
4873+ """Get the json represenation of a unit's relation"""
4874+ unit = unit or remote_unit()
4875+ relation = relation_get(unit=unit, rid=rid)
4876+ for key in relation:
4877+ if key.endswith('-list'):
4878+ relation[key] = relation[key].split()
4879+ relation['__unit__'] = unit
4880+ return relation
4881+
4882+
4883+@cached
4884+def relations_for_id(relid=None):
4885+ """Get relations of a specific relation ID"""
4886+ relation_data = []
4887+ relid = relid or relation_ids()
4888+ for unit in related_units(relid):
4889+ unit_data = relation_for_unit(unit, relid)
4890+ unit_data['__relid__'] = relid
4891+ relation_data.append(unit_data)
4892+ return relation_data
4893+
4894+
4895+@cached
4896+def relations_of_type(reltype=None):
4897+ """Get relations of a specific type"""
4898+ relation_data = []
4899+ reltype = reltype or relation_type()
4900+ for relid in relation_ids(reltype):
4901+ for relation in relations_for_id(relid):
4902+ relation['__relid__'] = relid
4903+ relation_data.append(relation)
4904+ return relation_data
4905+
4906+
4907+@cached
4908+def relation_types():
4909+ """Get a list of relation types supported by this charm"""
4910+ charmdir = os.environ.get('CHARM_DIR', '')
4911+ mdf = open(os.path.join(charmdir, 'metadata.yaml'))
4912+ md = yaml.safe_load(mdf)
4913+ rel_types = []
4914+ for key in ('provides', 'requires', 'peers'):
4915+ section = md.get(key)
4916+ if section:
4917+ rel_types.extend(section.keys())
4918+ mdf.close()
4919+ return rel_types
4920+
4921+
4922+@cached
4923+def relations():
4924+ """Get a nested dictionary of relation data for all related units"""
4925+ rels = {}
4926+ for reltype in relation_types():
4927+ relids = {}
4928+ for relid in relation_ids(reltype):
4929+ units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
4930+ for unit in related_units(relid):
4931+ reldata = relation_get(unit=unit, rid=relid)
4932+ units[unit] = reldata
4933+ relids[relid] = units
4934+ rels[reltype] = relids
4935+ return rels
4936+
4937+
4938+@cached
4939+def is_relation_made(relation, keys='private-address'):
4940+ '''
4941+ Determine whether a relation is established by checking for
4942+ presence of key(s). If a list of keys is provided, they
4943+ must all be present for the relation to be identified as made
4944+ '''
4945+ if isinstance(keys, str):
4946+ keys = [keys]
4947+ for r_id in relation_ids(relation):
4948+ for unit in related_units(r_id):
4949+ context = {}
4950+ for k in keys:
4951+ context[k] = relation_get(k, rid=r_id,
4952+ unit=unit)
4953+ if None not in context.values():
4954+ return True
4955+ return False
4956+
4957+
4958+def open_port(port, protocol="TCP"):
4959+ """Open a service network port"""
4960+ _args = ['open-port']
4961+ _args.append('{}/{}'.format(port, protocol))
4962+ subprocess.check_call(_args)
4963+
4964+
4965+def close_port(port, protocol="TCP"):
4966+ """Close a service network port"""
4967+ _args = ['close-port']
4968+ _args.append('{}/{}'.format(port, protocol))
4969+ subprocess.check_call(_args)
4970+
4971+
4972+@cached
4973+def unit_get(attribute):
4974+ """Get the unit ID for the remote unit"""
4975+ _args = ['unit-get', '--format=json', attribute]
4976+ try:
4977+ return json.loads(subprocess.check_output(_args))
4978+ except ValueError:
4979+ return None
4980+
4981+
4982+def unit_private_ip():
4983+ """Get this unit's private IP address"""
4984+ return unit_get('private-address')
4985+
4986+
4987+class UnregisteredHookError(Exception):
4988+ """Raised when an undefined hook is called"""
4989+ pass
4990+
4991+
4992+class Hooks(object):
4993+ """A convenient handler for hook functions.
4994+
4995+ Example:
4996+ hooks = Hooks()
4997+
4998+ # register a hook, taking its name from the function name
4999+ @hooks.hook()
5000+ def install():
The diff has been truncated for viewing.
