Merge lp:~koolhead17/pyjuju/jujudoc into lp:pyjuju

Proposed by koolhead17
Status: Rejected
Rejected by: Kapil Thangavelu
Proposed branch: lp:~koolhead17/pyjuju/jujudoc
Merge into: lp:pyjuju
Diff against target: 3917 lines (+3762/-0) (has conflicts)
30 files modified
Makefile (+132/-0)
source/_templates/project-links.html (+9/-0)
source/about.rst (+38/-0)
source/charm-upgrades.rst (+117/-0)
source/charm.rst (+379/-0)
source/conf.py (+225/-0)
source/drafts/charm-namespaces.rst (+72/-0)
source/drafts/developer-install.rst (+49/-0)
source/drafts/expose-services.rst (+20/-0)
source/drafts/resolved.rst (+60/-0)
source/drafts/service-config.rst (+162/-0)
source/expose-services.rst (+43/-0)
source/faq.rst (+91/-0)
source/generate_modules.py (+107/-0)
source/getting-started.rst (+80/-0)
source/glossary.rst (+121/-0)
source/hook-debugging.rst (+108/-0)
source/index.rst (+35/-0)
source/internals/agent-presence.rst (+154/-0)
source/internals/expose-services.rst (+143/-0)
source/internals/unit-agent-hooks.rst (+307/-0)
source/internals/unit-agent-startup.rst (+156/-0)
source/internals/zookeeper.rst (+215/-0)
source/juju-drafts.rst (+10/-0)
source/juju-internals.rst (+11/-0)
source/provider-configuration-ec2.rst (+64/-0)
source/provider-configuration-local.rst (+53/-0)
source/upgrades.rst (+57/-0)
source/user-tutorial.rst (+335/-0)
source/write-charm.rst (+409/-0)
Conflict adding file Makefile.  Moved existing file to Makefile.moved.
To merge this branch: bzr merge lp:~koolhead17/pyjuju/jujudoc
Reviewers:
  Kapil Thangavelu (community): Needs Fixing
  Jorge Castro: Pending
Review via email: mp+89140@code.launchpad.net

Description of the change

I have currently modified two files:

1. write-charm.rst

Charms need a separate revision file; I have created one and explained the revision file.

2. provider-configuration-local.rst

Added:

       control-bucket
       juju-origin

Kapil Thangavelu (hazmat) wrote:

This is pretty indecipherable due to how the merge was set up. The underlying changes should already be reflected (revision as a separate file); the local provider could still use origin info, but control-bucket doesn't apply to it.

review: Needs Fixing
Kapil Thangavelu (hazmat) wrote:

Also, please note that the docs now live in a separate docs branch with a much wider reviewer and committer audience, and are part of the charmers review queue (http://jujucharms.com/review-queue).

Unmerged revisions

2. By Atul Jha <email address hidden>

added separate revision file for write-charm.rst and added missing config options to provider-configuration-local.rst

1. By Kapil Thangavelu

move docs over

Preview Diff

=== added file 'Makefile'
--- Makefile 1970-01-01 00:00:00 +0000
+++ Makefile 2012-01-18 20:50:30 +0000
@@ -0,0 +1,132 @@
1# Makefile for Sphinx documentation
2#
3
4# You can set these variables from the command line.
5SPHINXOPTS =
6SPHINXBUILD = python source/generate_modules.py ../juju source/generated && sphinx-build
7PAPER =
8BUILDDIR = build
9
10# Internal variables.
11PAPEROPT_a4 = -D latex_paper_size=a4
12PAPEROPT_letter = -D latex_paper_size=letter
13ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
14
15.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest
16
17help:
18 @echo "Please use \`make <target>' where <target> is one of"
19 @echo " html to make standalone HTML files"
20 @echo " dirhtml to make HTML files named index.html in directories"
21 @echo " singlehtml to make a single large HTML file"
22 @echo " pickle to make pickle files"
23 @echo " json to make JSON files"
24 @echo " htmlhelp to make HTML files and a HTML help project"
25 @echo " qthelp to make HTML files and a qthelp project"
26 @echo " devhelp to make HTML files and a Devhelp project"
27 @echo " epub to make an epub"
28 @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
29 @echo " latexpdf to make LaTeX files and run them through pdflatex"
30 @echo " text to make text files"
31 @echo " man to make manual pages"
32 @echo " changes to make an overview of all changed/added/deprecated items"
33 @echo " linkcheck to check all external links for integrity"
34 @echo " doctest to run all doctests embedded in the documentation (if enabled)"
35 @echo " clean to clean (remove) everything under the build directory"
36
37clean:
38 -rm -rf $(BUILDDIR)/*
39 -rm -rf source/generated
40
41html:
42 $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
43 @echo
44 @echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
45
46dirhtml:
47 $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
48 @echo
49 @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
50
51singlehtml:
52 $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
53 @echo
54 @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
55
56pickle:
57 $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
58 @echo
59 @echo "Build finished; now you can process the pickle files."
60
61json:
62 $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
63 @echo
64 @echo "Build finished; now you can process the JSON files."
65
66htmlhelp:
67 $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
68 @echo
69 @echo "Build finished; now you can run HTML Help Workshop with the" \
70 ".hhp project file in $(BUILDDIR)/htmlhelp."
71
72qthelp:
73 $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
74 @echo
75 @echo "Build finished; now you can run "qcollectiongenerator" with the" \
76 ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
77 @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/juju.qhcp"
78 @echo "To view the help file:"
79 @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/juju.qhc"
80
81devhelp:
82 $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
83 @echo
84 @echo "Build finished."
85 @echo "To view the help file:"
86 @echo "# mkdir -p $$HOME/.local/share/devhelp/juju"
87 @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/juju"
88 @echo "# devhelp"
89
90epub:
91 $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
92 @echo
93 @echo "Build finished. The epub file is in $(BUILDDIR)/epub."
94
95latex:
96 $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
97 @echo
98 @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
99 @echo "Run \`make' in that directory to run these through (pdf)latex" \
100 "(use \`make latexpdf' here to do that automatically)."
101
102latexpdf:
103 $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
104 @echo "Running LaTeX files through pdflatex..."
105 make -C $(BUILDDIR)/latex all-pdf
106 @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
107
108text:
109 $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
110 @echo
111 @echo "Build finished. The text files are in $(BUILDDIR)/text."
112
113man:
114 $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
115 @echo
116 @echo "Build finished. The manual pages are in $(BUILDDIR)/man."
117
118changes:
119 $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
120 @echo
121 @echo "The overview file is in $(BUILDDIR)/changes."
122
123linkcheck:
124 $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
125 @echo
126 @echo "Link check complete; look for any errors in the above output " \
127 "or in $(BUILDDIR)/linkcheck/output.txt."
128
129doctest:
130 $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
131 @echo "Testing of doctests in the sources finished, look at the " \
132 "results in $(BUILDDIR)/doctest/output.txt."
0133
=== renamed file 'Makefile' => 'Makefile.moved'
=== added directory 'source'
=== added directory 'source/_static'
=== added directory 'source/_templates'
=== added file 'source/_templates/project-links.html'
--- source/_templates/project-links.html 1970-01-01 00:00:00 +0000
+++ source/_templates/project-links.html 2012-01-18 20:50:30 +0000
@@ -0,0 +1,9 @@
1<h3>Launchpad</h3>
2<ul>
3 <li>
4 <a href="https://launchpad.net/~juju">Overview</a>
5 </li>
6 <li>
7 <a href="https://code.launchpad.net/~juju">Code</a>
8 </li>
9</ul>
010
=== added file 'source/about.rst'
--- source/about.rst 1970-01-01 00:00:00 +0000
+++ source/about.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,38 @@
1About juju
2==========
3
4For a long time now, Linux server deployments have been moving towards the
5collaboration of multiple physical machines. In some cases, different servers
6each run a different set of applications, bringing organization, isolation,
7reserved resources, and other desirable characteristics to the composed
8assembly. In other situations, servers are set up with very similar
9configurations, so that the system becomes more scalable by having load
10distributed among the several instances, and so that the overall system becomes
11more reliable when the failure of any individual machine does not affect the
12assembly as a whole. In this reality, server administrators become invaluable
13maestros who orchestrate the placement and connectivity of services within
14the assembly of servers.
15
16Given that scenario, it's surprising that most of the efforts towards advancing
17the management of software configuration are still bound to individual machines.
18Package managers, and software like dbus and gconf are examples of this. Other
19efforts do look at the problem of managing multiple machines as a unit, but
20interestingly, they are still a mechanism for scaling up the management of
21services individually. In other words, they empower the administrator with the
22ability to tweak the individual configuration of multiple services at once,
23but they do not collaborate towards offering services themselves and other tools
24an understanding of the composed assembly. This distinction looks subtle in
25principle, but it may be a key factor in enabling all the parties (system
26administrators, software developers, vendors, and integrators) to collaborate
27in deploying, maintaining, and enriching distributed software configurations.
28
29This is the challenge which motivates the research happening through the
30juju project at Canonical. juju aims to be a service deployment and
31orchestration tool which enables the same kind of collaboration and ease of
32use which today is seen around package management to happen on a higher
33level, around services. With juju, different authors are able to create
34services independently, and make those services communicate through a simple
35configuration protocol. Then, users can take the product of both authors
36and very comfortably deploy those services in an environment, in a way
37resembling how people are able to install a network of packages with a single
38command via APT.
039
=== added file 'source/charm-upgrades.rst'
--- source/charm-upgrades.rst 1970-01-01 00:00:00 +0000
+++ source/charm-upgrades.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,117 @@
1Charm Upgrades
2================
3
4
5Upgrading a charm
6-------------------
7
8A charm_ can be upgraded via the command line using the following
9syntax::
10
11 $ juju upgrade-charm <service-name>
12
13In the case of a local charm the syntax would be::
14
15 $ juju upgrade-charm --repository=principia <service-name>
16
17This will examine the named service, determine its charm, and check the
18charm's originating repository for a newer version of the charm.
19If a newer charm version is found, it will be uploaded to the juju
20environment, and downloaded to all the running units of the service.
21The unit agent will switch over to executing hooks from the new charm,
22after executing the `upgrade-charm` hook.
23
24.. _charm: ../charm.html
25
26
27Charm upgrade support
28-----------------------
29
30A charm author can add charm specific support for upgrades by
31providing an additional hook that can customize its upgrade behavior.
32
33The hook ``upgrade-charm`` is executed with the new charm version
34in place on the unit. juju guarantees this hook will be the first
35executed hook from the new charm.
36
37The hook is intended to allow the charm to process any upgrade
38concerns it may have with regard to upgrading databases, software, etc
39before its new version of hooks are executed.
40
41After the ``upgrade-charm`` hook is executed, new hooks of the
42charm will be utilized to respond to any system changes.
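
As a sketch, an ``upgrade-charm`` hook might perform a data migration
before the new hooks take over (the migration script below is purely
illustrative and not part of juju)::

    #!/bin/sh
    # upgrade-charm: guaranteed to be the first hook run from the new charm.
    set -e
    # Hypothetical example: run a schema migration shipped with the new
    # charm version so that the upgraded hooks find the state they expect.
    ./scripts/migrate-db.sh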
43
44Futures
45-------
46
47The ``upgrade-charm`` hook will likely need access to a new cli-api
48to access all relations of the unit, in addition to the standard hook
49api commands like ``relation-list``, ``relation-get``,
50``relation-set``, to perform per unit relation upgrades.
51
52The new hook-cli api name is open, but possible suggestions are
53``unit-relations`` or ``query-relations`` and would list
54all the relations a unit is a member of.
55
56Most `server` services have multiple instances of a named relation.
57Otherwise, iterating over the charm's defined relation names would suffice.
58It's an open question on how these effectively anonymous instances
59of a named relation would be addressed.
60
61The existing relation-* cli would also need to be extended to take
62a relation parameter, or documented usage of environment variables
63when doing relation iteration during upgrades.
64
65Internals
66---------
67
68The upgrade cli updates the service with its new charm, and sets
69an upgrade flag on each of its units. The unit agent then processes
70the upgrade using the workflow machinery to execute hooks and
71track upgrades across service units.
72
73A unit whose upgrade-charm hook fails will be left running
74but won't process any additional hooks. The hooks will continue
75to be queued for execution.
76
77The upgrade cli command is responsible for
78
79 - Finding the named service.
80
81 - Determining its charm.
82
83 - Determining if a newer version of the charm exists in the
84 origin repository.
85
86 - Uploading the new version of the charm to the environment's machine
87 provider storage.
88
89 - Updating the service state with a reference to the new charm.
90
91 - Marking the associated unit states as needing an upgrade.
92
93When determining newer versions, the cli assumes that a charm with the
94same name and a version number greater than the installed one is an
95upgrade.
96
97The unit agent is responsible for
98
99 - Watching the unit state for upgrade changes.
100
101 - Clearing the upgrade setting on the unit state.
102
103 - Downloading the new charm version.
104
105 - Stopping hook execution, hooks will continue to queue while
106 the execution is stopped.
107
108 - Extracting the charm into the unit container.
109
110 - Updating the unit charm reference.
111
112 - Running the upgrade workflow transition which will run the
113 upgrade-charm hook, and restart normal hook execution.
114
115Only the charm directory within a unit container/directory is
116replaced on upgrade; any existing persistent data within the unit
117container is maintained.
0118
=== added file 'source/charm.rst'
--- source/charm.rst 1970-01-01 00:00:00 +0000
+++ source/charm.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,379 @@
1Charms
2======
3
4Introduction
5------------
6
7Charms define how services integrate and how their service units
8react to events in the distributed environment, as orchestrated by
9juju.
10
11This specification describes how charms are defined, including their
12metadata and hooks. It also describes the resources available to hooks
13in working with the juju environment.
14
15
16The metadata file
17-----------------
18
19The `metadata.yaml` file, at the root of the charm directory,
20describes the charm. The following fields are supported:
21
22 * **name:** - The charm name itself. Charm names are formed by
23 lowercase letters, digits, and dashes, and must necessarily
24 begin with a letter and have no digits alone in a dashed
25 section.
26
27 * **summary:** - A one-line description of the charm.
28
29 * **description:** - Long explanation of the charm and its
30 features.
31
32 * **provides:** - The deployed service unit must have the given
33 relations established with another service unit whose charm
34 requires them for the service to work properly. See below for how
35 to define a relation.
36
37 * **requires:** - The deployed service unit must have the given
38 relations established with another service unit whose charm
39 provides them for the service to work properly. See below for how
40 to define a relation.
41
42 * **peers:** - Relations that are established with P2P semantics
43 instead of a provides/requires (or client/server) style. When the
44 charm is deployed as a service unit, all the units from the
45 given service will automatically be made part of the relation.
46 See below for how to define a relation.
47
48
49Relations available in `provides`, `requires`, and `peers` are defined
50as follows:
51
52 * **provides|requires|peers:**
53
54 * **<relation name>:** - This name is a user-provided value which
55 identifies the relation uniquely within the given charm.
56 Examples include "database", "cache", "proxy", and "appserver".
57
58 Each relation may have the following fields defined:
59
60 * **interface:** - This field defines the type of the
61 relation. The relation will only be established with service
62 units that define a compatible relation with the same
63 interface. Examples include "http", "mysql", and
64 "backup-schedule".
65
66 * **limit:** - The maximum number of relations of this kind
67 which may be established to other service units. Defaults to
68 1 for `requires` relations, and to "none" (no limit) for
69 `provides` and `peers` relations. While you may define it,
70 this field is not yet enforced by juju.
71
72 * **optional:** - Whether this relation is required for the
73 service unit to function or not. Defaults to `false`, which
74 means the relation is required. While you may define it, this
75 field is not yet enforced by juju.
76
77 As a shortcut, if these properties are not defined, and instead
78 a single string value is provided next to the relation name, the
79 string is taken as the interface value, as seen in this
80 example::
81
82 requires:
83 db: mysql
84
85Some sample charm definitions are provided at the end of this
86specification.
87
88
89Hooks
90-----
91
92juju uses hooks to notify a service unit about changes happening
93in its lifecycle or the larger distributed environment. A hook running
94for a service unit can query this environment, make any desired local
95changes on its underlying machine, and change the relation
96settings.
97
98Each hook for a charm is implemented by placing an executable with
99the desired hook name under the ``hooks/`` directory of the charm
100directory. juju will execute the hook based on its file name when
101the corresponding event occurs.
102
103All hooks are optional. Not including a corresponding executable in
104the charm is treated by juju as if the hook executed and then
105exited with an exit code of 0.
106
107All hooks are executed in the charm directory on the service unit.
108
109The following hooks are with respect to the lifecycle of a service unit:
110
111 * **install** - Runs just once during the life time of a service
112 unit. Currently this hook is the right place to ensure any package
113 dependencies are met. However, in the future juju will use the
114 charm metadata to perform this role instead.
115
116 * **start** - Runs when the service unit is started. This happens
117 before any relation hooks are called. The purpose of this hook is
118 to get the service unit ready for relations to be established.
119
120 * **stop** - Runs when the service unit is stopped. If relations
121 exist, they will be broken and the respective hooks called before
122 this hook is called.
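
For example, a minimal ``install`` hook could be a small shell script
along these lines (the package below is illustrative)::

    #!/bin/sh
    # hooks/install: runs once, before start and before any relation hooks.
    set -e
    # Illustrative only: install the packages this service depends on.
    apt-get update
    apt-get -y install apache2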
123
124The following hooks are called on each service unit as the membership
125of an established relation changes:
126
127 * **<relation name>-relation-joined** - Runs upon each time a remote
128 service unit joins the relation.
129
130 * **<relation name>-relation-changed** - Runs upon each time the
131 following events occur:
132
133 1. A remote service unit joins the relation, right after the
134 **<relation name>-relation-joined** hook was called.
135
136 2. A remote service unit changes its relation settings.
137
138 This hook enables the charm to modify the service unit state
139 (configuration, running processes, or anything else) to adapt to
140 the relation settings of remote units.
141
142 An example usage is that HAProxy needs to be aware of web servers
143 as they become available, including details like its IP
144 address. Web server service units can publish their availability
145 by making the appropriate relation settings in the hook that makes
146 the most sense. Assume the HAProxy uses the relation name of
147 ``server``. When that happens, HAProxy, in its
148 ``server-relation-changed`` hook, can then change its own
149 configuration as to what is available to be proxied.
150
151 * **<relation name>-relation-departed** - Runs upon each time a
152 remote service unit leaves a relation. This could happen because
153 the service unit has been removed, its service has been destroyed,
154 or the relation between this service and the remote service has
155 been removed.
156
157 An example usage is that HAProxy needs to be aware of web servers
158 when they are no longer available. It can remove each web server
159 from its configuration as the corresponding service unit departs the
160 relation.
161
162This relation hook is with respect to the relation itself:
163
164 * **<relation name>-relation-broken** - Runs when a relation which
165 had at least one other relation hook run for it (successfully or
166 not) is now unavailable. The service unit can then clean up any
167 established state.
168
169 An example might be cleaning up the configuration changes which
170 were performed when HAProxy was asked to load-balance for another
171 service unit.
172
173Note that the coupling between charms is defined by which settings
174are required and made available to them through the relation hooks and
175how these settings are used. Those conventions then define what the
176relation interface really is, and the **interface** name in the
177`metadata.yaml` file is simply a way to refer to them and avoid
178attempting incompatible conversations. Keep that in mind when
179designing your charms and relations, since it is a good idea to
180allow the implementation of the charm to change and be replaced with
181alternative versions without changing the relation conventions in a
182backwards incompatible way.
183
184
185Hook environment
186----------------
187
188Hooks can expect to be invoked with a standard environment and
189context. The following environment variables are set:
190
191 * **$JUJU_UNIT_NAME** - The name of the local unit executing,
192 in the form ``<service name>/<unit sequence>``. E.g. ``myblog/3``.
193
194Hooks called for relation changes will have the following additional
195environment variables set:
196
197 * **$JUJU_RELATION** - The relation name this hook is running
198 for. It's redundant with the hook name, but is necessary for
199 the command line tools to know the current context.
200
201 * **$JUJU_REMOTE_UNIT** - The unit name of the remote unit
202 which has triggered the hook execution.
203
204
205Hook commands for working with relations
206----------------------------------------
207
208In implementing their functionality, hooks can leverage a set of
209command tools provided by juju for working with relations. These
210utilities enable the hook to collaborate on their relation settings,
211and to inquire about the peers the service unit has relations with.
212
213The following command line tools are made available:
214
215 * **relation-get** - Queries a setting from an established relation
216 with one or more service units. This command will read some
217 context information from environment variables (e.g.
218 $JUJU_RELATION).
219
220 Examples:
221
222 Get the IP address from the remote unit which triggered the hook
223 execution::
224
225 relation-get ip
226
227 Get all the settings from the remote unit which triggered the hook
228 execution::
229
230 relation-get
231
232 Get the port information from the `wordpress/3` unit::
233
234 relation-get port wordpress/3
235
236 Get all the settings from the `wordpress/3` unit, in JSON format::
237
238 relation-get - wordpress/3
239
240 * **relation-set** - Changes a setting in an established relation.
241
242 Examples:
243
244 Set this unit's port number for other peers to use::
245
246 relation-set port=8080
247
248 Change two settings at once::
249
250 relation-set dbname=wordpress dbpass="super secur3"
251
252 Change several settings at once, with a JSON file::
253
254 cat settings.json | relation-set
255
256 Delete a setting::
257
258 relation-set name=
259
260 * **relation-list** - List all service units participating in the
261 established relation. This list excludes the local service unit
262 which is executing the command. For `provides` and `requires`
263 relations, this command will always return a single service unit.
264
265 Example::
266
267 MEMBERS=$(relation-list)
268
269Changes to relation settings are only committed if the hook exited
270with an exit code of 0. Such changes will then trigger further hook
271execution in the remote unit(s), through the **<relation
272name>-relation-changed** hook. This mechanism enables a general
273communication mechanism for service units to coordinate.
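
Putting these pieces together, a hypothetical ``db-relation-joined`` hook
might combine the environment variables and commands above as follows
(the setting names are illustrative, not mandated by juju)::

    #!/bin/sh
    # hooks/db-relation-joined: runs when a remote unit joins the relation.
    set -e
    # Read a setting published by the remote unit that triggered this hook.
    database_host=$(relation-get ip)
    echo "$JUJU_UNIT_NAME sees $JUJU_REMOTE_UNIT at $database_host ($JUJU_RELATION)"
    # Publish our own settings; since the hook exits 0, the changes are
    # committed and the remote unit's -relation-changed hook will run.
    relation-set dbname=wordpress dbuser=admin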
274
275
276Hook commands for opening and closing ports
277-------------------------------------------
278
279Service exposing determines which ports to expose by using the
280``open-port`` and ``close-port`` commands in hooks. They may be
281executed within any charm hook. The commands take the same
282arguments::
283
284 open-port port[/protocol]
285
286 close-port port[/protocol]
287
288These commands are executed immediately; they do not depend on the
289exit status of the hook.
290
291As an example, consider the WordPress charm, which has been deployed
292as ``my-wordpress``. After completing the setup and restart of Apache,
293the ``wordpress`` charm can then publish the available port in its
294``start`` hook for a given service unit::
295
296 open-port 80
297
298External access to the service unit is only allowed when both
299``open-port`` is executed within any hook and the administrator has
300exposed its service. The order in which these happen is not
301important, however.
302
303.. note::
304
305 Being able to use any hook may be important for your charm.
306 Ideally, the service does not have ports that are vulnerable if
307 exposed prior to the service being fully ready. But if that's the
308 case, you can solve this problem by only opening the port in the
309 appropriate hook and when the desired conditions are met.
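
For instance, a ``start`` hook could hold off on ``open-port`` until the
service actually answers locally (the port and readiness check below are
illustrative)::

    #!/bin/sh
    # hooks/start: only expose the port once the service responds.
    set -e
    service apache2 start
    # Illustrative readiness check before allowing external traffic.
    until wget -q -O /dev/null http://localhost/; do
        sleep 2
    done
    open-port 80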
310
311Alternatively, you may need to expose more than one port, or expose
312ports that don't use the TCP protocol. To expose ports for
313HTTP and HTTPS, your charm could instead make these settings::
314
315 open-port 80
316 open-port 443
317
318Or if you are writing a charm for a DNS server that you would like
319to expose, then specify the protocol to be UDP::
320
321 open-port 53/udp
322
323When the service unit is removed or stopped for any reason, the
324firewall will again be changed to block traffic which was previously
325allowed to reach the exposed service. Your charm can also do this to
326close the port::
327
328 close-port 80
329
330To be precise, the firewall is only open for the exposed ports during
331the time both these conditions hold:
332
333 * A service has been exposed.
334 * A corresponding ``open-port`` command has been run (without a
335 subsequent ``close-port``).
336
337
338Sample metadata.yaml files
339--------------------------
340
341Below are presented some sample metadata files.
342
343
344MySQL::
345
346 name: mysql
347 revision: 1
348 summary: "A pretty popular database"
349
350 provides:
351 db: mysql
352
353
354Wordpress::
355
356 name: wordpress
357 revision: 3
358 summary: "A pretty popular blog engine"
359 provides:
360 url:
361 interface: http
362
363 requires:
364 db:
365 interface: mysql
366
367
368Riak::
369
370 name: riak
371 revision: 7
372 summary: "Scalable K/V Store in Erlang with Clocks :-)"
373 provides:
374 endpoint:
375 interface: http
376
377 peers:
378 ring:
379 interface: riak
0380
=== added file 'source/conf.py'
--- source/conf.py 1970-01-01 00:00:00 +0000
+++ source/conf.py 2012-01-18 20:50:30 +0000
@@ -0,0 +1,225 @@
1# -*- coding: utf-8 -*-
2#
3# juju documentation build configuration file, created by
4# sphinx-quickstart on Wed Jul 14 09:40:34 2010.
5#
6# This file is execfile()d with the current directory set to its containing dir.
7#
8# Note that not all possible configuration values are present in this
9# autogenerated file.
10#
11# All configuration values have a default; values that are commented out
12# serve to show the default.
13
14import sys, os
15
16# If extensions (or modules to document with autodoc) are in another directory,
17# add these directories to sys.path here. If the directory is relative to the
18# documentation root, use os.path.abspath to make it absolute, like shown here.
19sys.path.insert(0, os.path.abspath('../..'))
20
21# -- General configuration -----------------------------------------------------
22
23# If your documentation needs a minimal Sphinx version, state it here.
24#needs_sphinx = '1.0'
25
26# Add any Sphinx extension module names here, as strings. They can be extensions
27# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
28import sphinx
29
30extensions = ['sphinx.ext.autodoc']
31
32if [int(x) for x in sphinx.__version__.split(".")] > [1, 0]:
33 if "singlehtml" not in sys.argv:
34 # singlehtml builder skips the step that would cause the _modules
35 # directory to be created, so source links don't work
36 extensions.append('sphinx.ext.viewcode')
37
38# Add any paths that contain templates here, relative to this directory.
39templates_path = ['_templates']
40
41# The suffix of source filenames.
42source_suffix = '.rst'
43
44# The encoding of source files.
45#source_encoding = 'utf-8-sig'
46
47# The master toctree document.
48master_doc = 'index'
49
50# General information about the project.
51project = u'juju'
52copyright = u'2010, Canonical'
53
54# The version info for the project you're documenting, acts as replacement for
55# |version| and |release|, also used in various other places throughout the
56# built documents.
57#
58# The short X.Y version.
59version = '1.0'
60# The full version, including alpha/beta/rc tags.
61release = '1.0dev'
62
63# The language for content autogenerated by Sphinx. Refer to documentation
64# for a list of supported languages.
65#language = None
66
67# There are two options for replacing |today|: either, you set today to some
68# non-false value, then it is used:
69#today = ''
70# Else, today_fmt is used as the format for a strftime call.
71#today_fmt = '%B %d, %Y'
72
73# List of patterns, relative to source directory, that match files and
74# directories to ignore when looking for source files.
75exclude_patterns = []
76
77# The reST default role (used for this markup: `text`) to use for all documents.
78#default_role = None
79
80# If true, '()' will be appended to :func: etc. cross-reference text.
81#add_function_parentheses = True
82
83# If true, the current module name will be prepended to all description
84# unit titles (such as .. function::).
85#add_module_names = True
86
87# If true, sectionauthor and moduleauthor directives will be shown in the
88# output. They are ignored by default.
89#show_authors = False
90
91# The name of the Pygments (syntax highlighting) style to use.
92pygments_style = 'sphinx'
93
94# A list of ignored prefixes for module index sorting.
95#modindex_common_prefix = []
96
97
98# -- Options for HTML output ---------------------------------------------------
99
100# The theme to use for HTML and HTML Help pages. See the documentation for
101# a list of builtin themes.
102html_theme = 'default'
103
104# Theme options are theme-specific and customize the look and feel of a theme
105# further. For a list of options available for each theme, see the
106# documentation.
107#html_theme_options = {}
108
109# Add any paths that contain custom themes here, relative to this directory.
110#html_theme_path = []
111
112# The name for this set of Sphinx documents. If None, it defaults to
113# "<project> v<release> documentation".
114#html_title = None
115
116# A shorter title for the navigation bar. Default is the same as html_title.
117#html_short_title = None
118
119# The name of an image file (relative to this directory) to place at the top
120# of the sidebar.
121#html_logo = None
122
123# The name of an image file (within the static path) to use as favicon of the
124# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
125# pixels large.
126#html_favicon = None
127
128# Add any paths that contain custom static files (such as style sheets) here,
129# relative to this directory. They are copied after the builtin static files,
130# so a file named "default.css" will overwrite the builtin "default.css".
131#html_static_path = ['_static']
132
133# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
134# using the given strftime format.
135#html_last_updated_fmt = '%b %d, %Y'
136
137# If true, SmartyPants will be used to convert quotes and dashes to
138# typographically correct entities.
139#html_use_smartypants = True
140
141# Custom sidebar templates, maps document names to template names.
142html_sidebars = {
143 'index': 'project-links.html'
144}
145# Additional templates that should be rendered to pages, maps page names to
146# template names.
147#html_additional_pages = {}
148
149# If false, no module index is generated.
150html_domain_indices = False
151
152# If false, no index is generated.
153#html_use_index = True
154
155# If true, the index is split into individual pages for each letter.
156#html_split_index = False
157
158# If true, links to the reST sources are added to the pages.
159#html_show_sourcelink = True
160
161# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
162#html_show_sphinx = True
163
164# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
165#html_show_copyright = True
166
167# If true, an OpenSearch description file will be output, and all pages will
168# contain a <link> tag referring to it. The value of this option must be the
169# base URL from which the finished HTML is served.
170#html_use_opensearch = ''
171
172# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
173#html_file_suffix = ''
174
175# Output file base name for HTML help builder.
176htmlhelp_basename = 'jujudoc'
177
178
179# -- Options for LaTeX output --------------------------------------------------
180
181# The paper size ('letter' or 'a4').
182#latex_paper_size = 'letter'
183
184# The font size ('10pt', '11pt' or '12pt').
185#latex_font_size = '10pt'
186
187# Grouping the document tree into LaTeX files. List of tuples
188# (source start file, target name, title, author, documentclass [howto/manual]).
189latex_documents = [
190 ('index', 'juju.tex', u'juju documentation',
191 u'Canonical', 'manual'),
192]
193
194# The name of an image file (relative to this directory) to place at the top of
195# the title page.
196#latex_logo = None
197
198# For "manual" documents, if this is true, then toplevel headings are parts,
199# not chapters.
200#latex_use_parts = False
201
202# If true, show page references after internal links.
203#latex_show_pagerefs = False
204
205# If true, show URL addresses after external links.
206#latex_show_urls = False
207
208# Additional stuff for the LaTeX preamble.
209#latex_preamble = ''
210
211# Documents to append as an appendix to all manuals.
212#latex_appendices = []
213
214# If false, no module index is generated.
215#latex_domain_indices = True
216
217
218# -- Options for manual page output --------------------------------------------
219
220# One entry per manual page. List of tuples
221# (source start file, name, description, authors, manual section).
222man_pages = [
223 ('index', 'juju', u'juju documentation',
224 [u'Canonical'], 1)
225]
0226
=== added directory 'source/drafts'
=== added file 'source/drafts/charm-namespaces.rst'
--- source/drafts/charm-namespaces.rst 1970-01-01 00:00:00 +0000
+++ source/drafts/charm-namespaces.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,72 @@
1
2Charm Namespaces
3================
4
5Introduction
6------------
7
8juju supports deployment of charms from multiple sources.
9
10By default juju searches only the Ubuntu charm namespace to resolve
11charms. For example the following command line snippet will install wordpress
12from the ubuntu charm namespace.::
13
14 juju deploy wordpress
15
16
17In order to support local charm development and completely offline private
18repositories, charms can also be deployed directly from a local directory.
19For example the following will resolve the wordpress charm to the
20$HOME/local_charms directory.::
21
22 juju deploy --repository=~/local_charms wordpress
23
24With this parameter any charm dependencies from the wordpress charm will be
25looked up first in the local directory and then in the ubuntu charm
26namespace. So the command line flag '--repository' alters the charm lookup
27from the default such that it prepends the local directory to the lookup order.
28
29
30The lookup order can also be altered to utilize a 3rd party published repository
31in preference to the Ubuntu charm repository. For example the following will
32perform a charm lookup for wordpress and its dependencies from the published
33'openstack' 3rd party repository before looking up dependencies in the Ubuntu
34charm repository.::
35
36 juju deploy --repository=es:openstack wordpress
37
38The lookup order can also be specified just for a single charm. For example
39the following command would deploy the wordpress charm from the openstack
40namespace but would resolve dependencies (like apache and mysql) via the ubuntu
41namespace.::
42
43 juju deploy es:openstack/wordpress
44
45The lookup order can also be explicitly specified in the client configuration
46to define a custom lookup order without the use of command line options.::
47
48 environments.yaml
49
50 repositories:
51
52 - http://charms.ubuntu.com/collection/ubuntu
53 - http://charms.ubuntu.com/collection/openstack
54 - http://charms.ubuntu.com/people/miked
55 - /var/lib/charms
56
57The repositories in the configuration file are specified as a yaml list, and the
58list order defines the lookup order for charms.
59
60
61Deployment
62----------
63
64After juju resolves a charm and its dependencies, it bundles them and
65deploys them to a machine provider charm cache/repository. This allows the
66same charm to be deployed to multiple machines repeatably and with minimal
67network transfers.
68
69juju stores the qualified name of the charm when saving it to the machine
70provider cache. This allows a charm to be unambiguously identified, i.e.
71whether it came from the ubuntu namespace or a 3rd party namespace, or even from
72disk.
073
=== added file 'source/drafts/developer-install.rst'
--- source/drafts/developer-install.rst 1970-01-01 00:00:00 +0000
+++ source/drafts/developer-install.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,49 @@
1Developer Install
2------------------
3
4For folks who want to develop on juju itself, a source install
5from trunk or branch is recommended.
6
7To run juju from source, you will need the following dependencies
8installed:
9
10 * zookeeper
11 * txzookeeper
12 * txaws
13
14The juju team recommends installing the zookeeper package from the
15juju PPA, or compiling it from source, as of Ubuntu Natty (11.04), due
16to bugs in the packaged version.
17
18On a modern Ubuntu Linux system execute::
19
20 $ sudo apt-get install python-zookeeper python-virtualenv python-yaml
21
22You will also need Python 2.6 or better.
23
24We recommend and demonstrate the use of virtualenv to install juju
25and its dependencies in a sandbox, in case you latter install a newer
26version via package.
27
28First let's setup a virtualenv::
29
30 $ virtualenv juju
31 $ cd juju
32 $ source bin/activate
33
34Next we'll fetch and install a few juju dependencies from source::
35
36 $ bzr branch lp:txaws
37 $ cd txaws && python setup.py develop && cd ..
38 $ bzr branch lp:txzookeeper
39 $ cd txzookeeper && python setup.py develop && cd ..
40
41Lastly, we fetch juju and install it from trunk::
42
43 $ bzr branch lp:juju
44 $ cd juju && python setup.py develop
45
46You can now configure your juju environment per the getting-started
47documentation.
48
49
050
=== added file 'source/drafts/expose-services.rst'
--- source/drafts/expose-services.rst 1970-01-01 00:00:00 +0000
+++ source/drafts/expose-services.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,20 @@
1Exposing a service
2==================
3
4The following functionality will be implemented at a later date.
5
6
7``exposed`` and ``unexposed`` hooks
8-----------------------------------
9
10Upon a service being exposed, the ``exposed`` hook will be run, if it
11is present in the charm.
12
13This may be an appropriate place to run the ``open-port`` command;
14however, it is up to the charm author where it should be run, since
15it and ``close-port`` are available commands for every hook.
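
If this draft functionality lands as described, a hypothetical ``exposed``
hook could be as small as::

    #!/bin/sh
    # hooks/exposed: would run when the administrator exposes the service.
    open-port 80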
16
17Likewise, when a service is unexposed, the ``unexposed`` hook will be
18run, if present. Many charms likely do not need to implement this
19hook, however, it could be an opportunity to terminate unnecessary
20processes or remove other resources.
021
=== added file 'source/drafts/resolved.rst'
--- source/drafts/resolved.rst 1970-01-01 00:00:00 +0000
+++ source/drafts/resolved.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,60 @@
1Resolving errors
2================
3
4juju internally tracks the state of units and their relations.
5It moves them through a simple state machine to ensure the correct
6sequencing of hooks and associated unit agent behavior. Typically
7this means that all hooks are executed as part of a workflow transition.
8
9If a hook fails, then juju notes this failure, and transitions
10either the unit or the unit relation (depending on the hook) to a
11failure state.
12
13If a hook for a unit relation fails, only that unit relation is
14considered to be in an error state; the unit and other relations of
15the unit continue to operate normally.
16
17If a hook for the unit fails (install, start, stop, etc), the unit
18is considered not running, and its unit relations will stop responding
19to changes.
20
21As a means to recover from hook errors, juju offers the
22``juju resolved`` command.
23
24This command will operate on either a unit or unit relation and
25schedules a transition from the error state back to its original
26destination state. For example given a unit mysql/0 whose start
27hook had failed, and the following command line::
28
29 $ juju resolved mysql/0
30
31After being resolved the unit would be in the started state. It is
32important to note that by default ``juju resolved`` does
33not fire hooks and the ``start`` hook would not be invoked again
34as a result of the above.
35
36If a unit's relation-change/joined/departed hook had failed, then
37juju resolved can also be utilized to resolve the error on
38the unit relation::
39
40 $ juju resolved mysql/0 db
41
42This would re-enable change watching, and hook execution for the
43``db`` relation after a hook failure.
44
45It's expected that an admin will typically have a look at the system to
46determine or correct the issue before using this command line.
47``juju resolved`` is meant primarily as a mechanism to remove the
48error block after correction of the original issue.
49
50However ``juju resolved`` can optionally re-invoke the failed hook.
51This feature is particularly beneficial during charm development, when
52iterating over a hook that is under development. Assuming a mysql unit
53with a start hook error, upon executing the following command::
54
55 $ juju resolved --retry mysql/0
56
57juju will examine the mysql/0 unit, and will re-execute its start
58hook before marking it as running. If the start hook fails again,
59then the unit will remain in the same state.
60
061
=== added file 'source/drafts/service-config.rst'
--- source/drafts/service-config.rst 1970-01-01 00:00:00 +0000
+++ source/drafts/service-config.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,162 @@
1.. _"Service Configuration":
2
3Service configuration
4=====================
5
6Introduction
7------------
8
9A Charm_ often will require access to specific options or
10configuration. Charms allow for the manipulation of the various
11configuration options which the charm author has chosen to
12expose. juju provides tools to help manage these options and
13respond to changes in these options over the lifetime of the `service`
14deployment. These options apply to the entire service, as opposed to
15only a specific unit or relation. Configuration is modified by an
16administrator at deployment time or over the lifetime of the services.
17
18As an example a wordpress service may expose a 'blog-title'
19option. This option would control the title of the blog being
20published. Changes to this option would be applied to all units
21implementing this service through the invocation of a hook on each of
22them.
23
24.. _Charm: ./charm.html
25
26
27Using configuration options
28---------------------------
29
30Configuration options are manipulated using a command line
31interface. juju provides a `set` command to aid the administrator
32in changing values.
33
34::
35 juju set <service name> option=value [option=value]
36
37This command allows changing options at runtime and takes one or more
38name/value pairs which will be set into the service
39options. Configuration options which are set together are delivered to
40the services for handling together. E.g. if you are changing a
41username and a password, changing them individually may yield bad
42results since the username will temporarily be set with an incorrect
43password.
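
For example, assuming a wordpress service deployed as ``myblog`` (the
names and values here are illustrative), the username and password would
be changed together with::

    juju set myblog username=admin password=n0nsense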
44
45While it's possible to set multiple configuration options on the
46command line, it's also convenient to pass multiple configuration
47options via the --file argument which takes the name of a YAML
48file. The contents of this file will be applied as though these
49elements had been passed to `juju set`.
50
51A configuration file may be provided at deployment time using the
52--config option, as follows::
53
54 juju deploy [--config local.yaml] wordpress myblog
55
56The service name is looked up inside the YAML file to allow for
57related service configuration options to be collected into a single
58file for the purposes of deployment and passed repeatedly to each
59`juju deploy` invocation.
60
61Below is an example local.yaml containing options
62which would be used during deployment of a service named myblog.
63
64::
65
66 myblog:
67 blog-roll: ['http://foobar.com', 'http://testing.com']
68 blog-title: Awesome Sauce
69 password: n0nsense
70
71
72Creating charms
73---------------
74
75Charm authors create a `config.yaml` file which resides in the
76charm's top-level directory. The configuration options supported by
77a service are defined within its respective charm. juju will
78only allow the manipulation of options which were explicitly defined
79as supported.
80
81The specification of possible configuration values is intentionally
82minimal, but still evolving. Currently charms define a list of option
83names to which they react. Information includes a human readable
84description and an optional default value. Additionally `type` may be
85specified. All options have a default type of 'str' which means its
86value will only be treated as a text string. Other valid options are
87'int', 'float' and 'regex'. When 'regex' is used an additional element
88must be provided, 'validator'. This must be a valid Python regex as
89specified at http://docs.python.org/lib/re.html
90
91The following `config.yaml` would be included in the top level
92directory of a charm and includes a list of option definitions::
93
94 options:
95 blog-roll:
96 default: null
97 description: List of URLs which will be included as the blog roll
98 blog-title:
99 default: My Blog
100 description: The title of the blog.
101 password:
102 default: changeme
103 description: Password to be used for the account specified by 'username'
104 type: regex
105 validator: '.{6,12}'
106 username:
107 default: admin
108 description: The name of the initial account (given admin permissions).
109
110
111To access these configuration options from a hook we provide the following::
112
113 config-get [option name]
114
115`config-get` returns all the configuration options for a service as
116JSON data when no option name is specified. If an option name is
117specified the value of that option is output according to the normal
118rules and obeying the `--output` and `--format` arguments. Hooks
119implicitly know the service they are executing for and config-get
120always gets values from the service of the hook.
121
122Changes to options (see previous section) trigger the charm's
123`config-changed` hook. The `config-changed` hook is guaranteed to run
124after any changes are made to the configuration, but it is possible
125that multiple changes will be observed at once. Because it's possible
126to set many configuration options in a single command line invocation,
127it is easy to ensure related options are available to the
128service at the same time.
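
As a rough sketch, a ``config-changed`` hook for the example above might
re-read its options with ``config-get`` and regenerate the application's
own configuration (the file path and reload command are illustrative)::

    #!/bin/sh
    # hooks/config-changed: runs after one or more options have changed.
    set -e
    title=$(config-get blog-title)
    password=$(config-get password)
    # Illustrative: exit cleanly until required options have been set.
    [ -z "$password" ] && exit 0
    # Illustrative: rewrite the service's own config file and reload it.
    printf 'title=%s\npassword=%s\n' "$title" "$password" > /etc/myblog.conf
    service apache2 reload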
129
130The `config-changed` hook must be written in such a way as to deal
131with changes to one or more options and deal gracefully with options
132that are required by the charm but not yet set by an
133administrator. Errors in the config-changed hook force juju to
134assume the service is no longer properly configured. If the service is
135not already in a stopped state it will be stopped and taken out of
136service. The status command will be extended in the future to report
137on workflow and unit agent status which will help reveal error
138conditions of this nature.
139
140When options are passed using `juju deploy` their values will be
141read in from a file and made available to the service prior to the
142invocation of its `install` hook. The `install` and `start` hooks
143will have access to config-get and thus complete access to the
144configuration options during their execution. If the `install` or
145`start` hooks don't directly need to deal with options they can simply
146invoke the `config-changed` hook.
147
148
149
150Internals
151---------
152
153.. note::
154 This section explains details useful to the implementation but not of
155 interest to the casual reader.
156
157Hooks normally attempt to provide a consistent view of the shared
158state of the system and the handling of config options within hooks
159(config-changed and the relation hooks) is no different. The first
160access to the configuration data of a service will retain a cached
161copy of the service options. Cached data will be used for the
162duration of the hook invocation.
0163
=== added file 'source/expose-services.rst'
--- source/expose-services.rst 1970-01-01 00:00:00 +0000
+++ source/expose-services.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,43 @@
1Exposing a service
2==================
3
4In juju, making a service public -- its ports available for public
5use -- requires that it be explicitly *exposed*. Note that this
6exposing does not yet involve DNS or other directory information. For
7now, it simply makes the service public.
8
9Service exposing works by opening appropriate ports in the firewall of
10the cloud provider. Because service exposing is necessarily tied to
11the underlying provider, juju manages all aspects of
12exposing. Such management ensures that a charm can work with other
13cloud providers besides EC2, once support for them is implemented.
14
15juju provides the ``juju expose`` command to expose a service.
16For example, you might have deployed a ``my-wordpress`` service, which
17is defined by a ``wordpress`` charm. To expose this service, simply
18execute the following command::
19
20 juju expose my-wordpress
21
22To stop exposing this service, and make any corresponding firewall
23changes immediately, you can run this command::
24
25 juju unexpose my-wordpress
26
27You can see the status of your exposed ports by running the ``juju
28status`` command. If ports have been opened by the service and you
29have exposed the service, then you will see something like the
30following output for the deployed services::
31
32 services:
33 wordpress:
34 exposed: true
35 charm: local:oneiric/wordpress-42
36 relations: {db: mysql}
37 units:
38 wordpress/0:
39 machine: 2
40 open-ports: [80/tcp]
41 relations:
42 db: {state: up}
43 state: started
044
=== added file 'source/faq.rst'
--- source/faq.rst 1970-01-01 00:00:00 +0000
+++ source/faq.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,91 @@
1Frequently Asked Questions
2==========================
3
4Where does the name juju come from?
5
6 It means magic, and comes from the same African roots as the word ubuntu.
7 Please see http://afgen.com/juju.html for a more detailed explanation.
8
9Why is juju useful?
10
11 juju is a next generation service deployment and orchestration
12 framework. It has been likened to APT for the cloud. With juju,
13 different authors are able to create service charms independently, and
14 make those services coordinate their communication through a simple
15 protocol. Users can then take the product of different authors and very
16 comfortably deploy those services in an environment. The result is
17 multiple machines and components transparently collaborating towards
18 providing the requested service. Read more :doc:`about`
19
20When will it be ready for production?
21
22 As of Ubuntu Natty 11.04, juju is a technology preview. It is not yet
23 ready to be used in production. However, adventurous users are encouraged to
24 evaluate it, study it, start writing charms for it or start hacking on
25 juju internals. The rough roadmap is to have juju packaged for
26 Universe by the 11.10 release and perhaps in main by 12.04.
27
28What language is juju developed in?
29
30 juju itself is developed using Python. However, writing charms for
31 juju can be done in any language. All juju cares about is finding a
32 set of executable files, which it will trigger appropriately
33
34Does juju start from a pre-configured AMI Image?
35
36 No, juju uses a plain Ubuntu image. All needed components are installed
37 at run-time. Then the juju charm is sent to the machine and hooks start
38 getting executed in response to events.
39
40Is it possible to deploy multiple services per machine?
41
42 Currently each service unit is deployed to a separate machine (ec2 instance)
43 that can relate to other services running on different nodes. This was done
44 to get juju into a working state faster. juju will definitely support
45 multiple services per machine in the future.
46
47Is it possible to pass parameters to juju charms?
48
49 Tunables are landing very soon in juju. Once ready you will be able to
50 use "juju set service key=value" and respond to that from within the
51 juju charm. This will enable dynamic features to be added to charms.
52
53Does juju only deploy to the Amazon EC2 cloud?
54
55 Currently yes. However work is underway to enable deploying to LXC containers
56 such that you are able to run juju charms on a single local machine.
57 Also, integration work with the `Orchestra <https://launchpad.net/orchestra>`_
58 project is underway to enable deployment to hardware machines.
59
60What directory are hooks executed in?
61
62 Hooks are executed in the charm directory (the parent directory to the hook
63 directory). This is primarily to encourage putting additional resources that
64 a hook may use outside of the hooks directory which is the public interface
65 of the charm.
66
67How are charms licensed?
68
69 Charms are effectively data inputs to juju, and are therefore
70 licensed/copyrighted by the author as an independent work. You are free to
71 claim copyright solely for yourself if it's an independent work, and to
72 license it as you see fit. If you as the charm author are performing the
73 work as a result of a fiduciary agreement, the terms of such agreement come
74 into play and so the licensing choice is up to the hiring entity.
75
76How can I contact the juju team?
77
78 User and charm author oriented resources:
79 * Mailing list: https://lists.ubuntu.com/mailman/listinfo/Ubuntu-cloud
80 * IRC: #ubuntu-cloud
81 juju development:
82 * Mailing list: https://lists.ubuntu.com/mailman/listinfo/juju
83 * IRC: #juju (Freenode)
84
85Where can I find out more about juju?
86
87 * Project Site: https://launchpad.net/juju
88 * Documentation: https://juju.ubuntu.com/docs/
89 * Work Items: https://juju.ubuntu.com/kanban/dublin.html
90 * Principia charms project: https://launchpad.net/principia
91 * Principia-Tools project: https://launchpad.net/principia-tools
092
=== added file 'source/generate_modules.py'
--- source/generate_modules.py 1970-01-01 00:00:00 +0000
+++ source/generate_modules.py 2012-01-18 20:50:30 +0000
@@ -0,0 +1,107 @@
1import os
2import sys
3
4INIT = "__init__.py"
5TESTS = "tests"
6
7
8def get_modules(names):
9 if INIT in names:
10 names.remove(INIT)
11 return [n[:-3] for n in names if n.endswith(".py")]
12
13
14def trim_dirs(root, dirs):
15 for dir_ in dirs[:]:
16 if dir_ == TESTS:
17 dirs.remove(TESTS)
18 elif not os.path.exists(os.path.join(root, dir_, INIT)):
19 dirs.remove(dir_)
20 return dirs
21
22
23def module_name(base, root, name=None):
24 path = root[len(base) + 1:]
25 if name:
26 path = os.path.join(path, name)
27 return path.replace("/", ".")
28
29
30def collect_modules(src):
31 src = os.path.abspath(src)
32 base = os.path.dirname(src)
33
34 names = []
35 for root, dirs, files in os.walk(src):
36 modules = get_modules(files)
37 packages = trim_dirs(root, dirs)
38 if modules or packages:
39 names.append(module_name(base, root))
40 for name in modules:
41 names.append(module_name(base, root, name))
42 return sorted(names)
43
44
45def subpackages(names, parent):
46 return [name for name in names
47 if name.startswith(parent) and name != parent]
48
49
50def dst_file(dst, name):
51 return open(os.path.join(dst, "%s.rst" % name), "w")
52
53
54def write_title(f, name, kind):
55 f.write("%s\n%s\n\n" % (name, kind * len(name)))
56
57
58def write_packages(f, names):
59 for name in names:
60 f.write(" " * name.count("."))
61 f.write("* :mod:`%s`\n" % name)
62 f.write("\n")
63
64
65def abbreviate(name):
66 parts = name.split(".")
67 short_parts = [part[0] for part in parts[:-2]]
68 return ".".join(short_parts + parts[-2:])
69
70
71def write_module(f, name, subs):
72 write_title(f, abbreviate(name), "=")
73 f.write(".. automodule:: %s\n"
74 " :members:\n"
75 " :undoc-members:\n"
76 " :show-inheritance:\n\n"
77 % name)
78 if subs:
79 write_title(f, "Subpackages", "-")
80 write_packages(f, subs)
81
82
83def write_module_list(f, names):
84 write_title(f, "juju modules", "=")
85 write_packages(f, names)
86 f.write(".. toctree::\n :hidden:\n\n")
87 for name in names:
88 f.write(" %s\n" % name)
89
90
91def generate(src, dst):
92 names = collect_modules(src)
93
94 if not os.path.exists(dst):
95 os.makedirs(dst)
96
97 with dst_file(dst, "modules") as f:
98 write_module_list(f, names)
99
100 for name in names:
101 with dst_file(dst, name) as f:
102 write_module(f, name, subpackages(names, name))
103
104
105if __name__ == "__main__":
106 src, dst = sys.argv[1:]
107 generate(src, dst)
0108
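
Given the ``generate(src, dst)`` entry point above, the script can also be
driven directly from Python, for instance when experimenting with the
documentation build (the paths below are illustrative only)::

    # Hypothetical paths: src points at the juju package checkout, dst at
    # the directory where the generated module .rst files should land.
    from generate_modules import generate

    generate("../juju", "source/generated")
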
=== added file 'source/getting-started.rst'
--- source/getting-started.rst 1970-01-01 00:00:00 +0000
+++ source/getting-started.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,80 @@
1.. _getting-started:
2
3Getting started
4===============
5
6Introduction
7------------
8
9This tutorial gets you started with juju. The only prerequisite is
10access credentials for a dedicated computing environment, such as one
11offered by a virtualized cloud hosting provider.
12
13juju has been designed for environments which can provide a
14new machine with an Ubuntu cloud operating system image
15on-demand. This includes services such as `Amazon EC2
16<http://aws.amazon.com/ec2/>`_ or `RackSpace
17<http://www.rackspace.com>`_.
18
19It's also required that the environment provides a permanent storage
20facility such as `Amazon S3 <https://s3.amazonaws.com/>`_.
21
22For the moment, though, the only environment supported is EC2.
23
24Running from PPA
25----------------
26
27The juju team's Personal Package Archive (PPA) is currently the
28preferred installation mechanism for juju. It includes newer
29upstream versions of binary dependencies such as ZooKeeper, which
30are more recent than those in the latest Ubuntu release (Natty 11.04)
31and contain important bugfixes.
32
33To install juju from the PPA, execute the following in a shell::
34
35 sudo add-apt-repository ppa:juju/pkgs
36 sudo apt-get update && sudo apt-get install juju
37
38The juju environment can now be configured as described in the following section.
39
40Configuring your environment
41----------------------------
42
43Run the command-line utility with no arguments to create a sample
44environment::
45
46 $ juju
47
48This will create the file ``~/.juju/environments.yaml``, which will look
49something like this::
50
51 environments:
52 sample:
53 type: ec2
54 control-bucket: juju-faefb490d69a41f0a3616a4808e0766b
55 admin-secret: 81a1e7429e6847c4941fda7591246594
56
57This is a sample environment configured to run with EC2 machines and S3
58permanent storage. To make this environment actually useful, you will need
59to tell juju about an AWS access key and secret key. To do this, you
60can either set the ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY``
61environment variables (as usual for other EC2 tools) or you can add
62``access-key`` and ``secret-key`` options to your ``environments.yaml``.
63For example::
64
65 environments:
66 sample:
67 type: ec2
68 access-key: YOUR-ACCESS-KEY-GOES-HERE
69 secret-key: YOUR-SECRET-KEY-GOES-HERE
70 control-bucket: juju-faefb490d69a41f0a3616a4808e0766b
71 admin-secret: 81a1e7429e6847c4941fda7591246594
72
73The S3 bucket does not need to exist already.
74
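Alternatively, the same credentials can be supplied through the environment
variables mentioned above before running juju, for example::

    export AWS_ACCESS_KEY_ID=YOUR-ACCESS-KEY-GOES-HERE
    export AWS_SECRET_ACCESS_KEY=YOUR-SECRET-KEY-GOES-HERE
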
75.. note::
76 If you already have an AWS account, you can determine your access key by
77 visiting http://aws.amazon.com/account, clicking "Security Credentials" and
78 then clicking "Access Credentials". You'll be taken to a table that lists
79 your access keys and has a "show" link for each access key that will reveal
80 the associated secret key.
081
=== added file 'source/glossary.rst'
--- source/glossary.rst 1970-01-01 00:00:00 +0000
+++ source/glossary.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,121 @@
1.. _glossary:
2
3Glossary
4========
5
6.. glossary::
7
8 Bootstrap
9 To bootstrap an environment means initializing it so that Services may be
10 deployed on it.
11
12 Endpoint
13 The combination of a service name and a relation name.
14
15 juju
16 The whole software here documented.
17
18 Environment
19 An Environment is a configured location where Services
20 can be deployed onto. An Environment typically has a name,
21 which can usually be omitted when there's a single Environment
22 configured, or when a default is explicitly defined.
23 Depending on the type of Environment, it may have to be
24 bootstrapped before interactions with it may take place
25 (e.g. EC2). The local environment configuration is defined in
26 the ~/.juju/environments.yaml file.
27
28 Charm
29 A Charm provides the definition of the service, including its metadata,
30 dependencies to other services, packages necessary, as well as the logic
31 for management of the application. It is the layer that integrates an
32 external application component like Postgres or WordPress into juju.
33 A juju Service may generally be seen as the composition of its juju
34 Charm and the upstream application (traditionally made available through
35 its package).
36
37 Charm URL
38 A Charm URL is a resource locator for a charm, with the following format
39 and restrictions::
40
41 <schema>:[~<user>/]<collection>/<name>[-<revision>]
42
43 `schema` must be either "cs", for a charm from the juju charm store, or
44 "local", for a charm from a local repository.
45
46 `user` is only valid in charm store URLs, and allows you to source
47 charms from individual users (rather than from the main charm store);
48 it must be a valid Launchpad user name.
49
50 `collection` denotes a charm's purpose and status, and is derived from
51 the Ubuntu series targeted by its contained charms: examples include
52 "natty", "oneiric", "oneiric-universe".
53
54 `name` is just the name of the charm; it must start and end with
55 lowercase (ascii) letters, and can otherwise contain any combination of
56 lowercase letters, digits, and "-"s.
57
58 `revision`, if specified, points to a specific revision of the charm
59 pointed to by the rest of the URL. It must be a non-negative integer.
60
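      For instance, the following are all syntactically valid charm URLs
      (the charm and user names are only examples)::

         cs:oneiric/wordpress
         cs:~example-user/oneiric/wordpress-12
         local:oneiric/mysql
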
61 Repository
62 A location where multiple charms are stored. Repositories may be as simple
63 as a directory structure on a local disk, or as complex as a rich smart
64 server supporting remote searching and so on.
65
66 Relation
67 Relations are the way in which juju enables Services to communicate
68 to each other, and the way in which the topology of Services is assembled.
69 The Charm defines which Relations a given Service may establish, and
70 what kind of interface these Relations require. In many cases, the
71 establishment of a Relation will result in an actual TCP connection being
72 created between the Service Units, but that's not necessarily the case.
73 Relations may also be established to inform Services of configuration
74 parameters, to request monitoring information, or any other details which
75 the Charm author has chosen to make available.
76
77 Service
78 juju operates in terms of services.
79
80 A service is any application (or set of applications) that is
81 integrated into the framework as an individual component which should
82 generally be joined with other components to perform a more complex
83 goal.
84
85 As an example, WordPress could be deployed as a service and, to perform
86 its tasks properly, might communicate with a database service
87 and a load balancer service.
88
89 Service Unit
90 A running instance of a given juju Service. Simple Services may
91 be deployed with a single Service Unit, but it is possible for an
92 individual Service to have multiple Service Units running in independent
93 machines. All Service Units for a given Service will share the same
94 Charm, the same relations, and the same user-provided configuration.
95
96 For instance, one may deploy a single MongoDB Service, and specify that
97 it should run 3 Units, so that the replica set is resilient to failures.
98 Internally, even though the replica set shares the same user-provided
99 configuration, each Unit may be performing different roles within the
100 replica set, as defined by the Charm.
101
102 Service Configuration
103 There are many different settings in a juju deployment, but
104 the term Service Configuration refers to the settings which a user can
105 define to customize the behavior of a Service.
106
107 The behavior of a Service when its Service Configuration changes is
108 entirely defined by its Charm.
109
110 Provisioning Agent
111 Software responsible for automatically allocating and terminating
112 machines in an Environment, as necessary for the requested configuration.
113
114 Machine Agent
115 Software which runs inside each machine that is part of an Environment,
116 and is able to handle the needs of deploying and managing Service Units
117 in this machine.
118
119 Service Unit Agent
120 Software which manages the entire lifecycle of a single Service Unit.
121
0122
=== added file 'source/hook-debugging.rst'
--- source/hook-debugging.rst 1970-01-01 00:00:00 +0000
+++ source/hook-debugging.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,108 @@
1Hook debugging
2==============
3
4Introduction
5------------
6
7An important facility in any distributed system is the ability to
8introspect the running system, and to debug it. Within juju the
9actions performed by the system are executing charm defined
10hooks. The ``debug-log`` cli provides for inspecting the total state of
11the system via capturing the logs of all agents and output of all
12hooks run by the system.
13
14To facilitate better debugging of hooks, the ``debug-hooks`` cli
15provides for interactive shell usage as a substitute for running a
16hook. This gives a charm author or system administrator the ability
17to interact with the system in a live environment and either develop
18or debug a hook.
19
20How it works
21------------
22
23When the juju user utilizes the hook debug command like so::
24
25 juju debug-hooks unit_name [hook_name]
26
27juju is instructed to replace the execution of the hook from the
28charm of the respective service unit, and instead to execute it in a
29shell associated with a tmux session. If no hook name is given, then all
30hooks will be debugged in this fashion. Multiple hook names can also be
31specified on the command line. Shell regular expressions can also be
32utilized to specify hook names.
33
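For example, to debug just the install hook of the first unit of a
hypothetical wordpress service, one would run::

    juju debug-hooks wordpress/0 install
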
34The ``debug-hooks`` command line invocation will immediately connect
35to the remote machine of the remote unit and start a named shell
36connected to the same tmux session there.
37
38The native capabilities of tmux can be exploited to construct a full
39debug/development environment on the remote machine.
40
41When a debugged hook is executed a new named window will pop up in the
42tmux session with the hook shell started. The new window's title will
43match the hook name, and the shell environment will have all the
44juju environment variables in place, and all of the hook cli API
45may be utilized (relation-get, relation-set, relation-list, etc.).
46
47It's important to note that juju serializes hook execution, so
48while the shell is active, no other hooks will be executed on the
49unit. Once the experimentation is done, the user must stop the hook
50by exiting the shell session. At this point the system is then
51free to execute additional hooks.
52
53Note also that any state changes performed while in the
54hook window via relation-set are buffered until the hook is done
55executing, just as they are for all the relation hooks when
56running outside of a debug session.
57
58``debug-hooks`` can be used to debug the same hook being invoked
59multiple times, as long as the user has not closed the debug tmux
60session.
61
62The user can exit debug mode by exiting the tmux session (e.g.
63exiting all shells). The unit will then resume its normal
64processing.
65
66
67Limitations
68-----------
69
70Note that right now one can only query relation information when
71debugging a running relation hook. This means commands such as
72relation-get, relation-set, etc, will not work on a hook like
73'install' or 'upgrade'.
74
75This problem will be solved once the following bug is fixed:
76
77 https://bugs.launchpad.net/juju/+bug/767195
78
79
80Internals
81---------
82
83Internally the ``debug-hooks`` cli begins by verifying its arguments,
84namely that the unit exists and that the named hook is valid for the charm.
85After that it modifies the zookeeper state of the unit node, setting
86a flag noting the hook to debug. It then establishes an ssh
87connection to the machine and executes the tmux command.
88
89The unit-agent will establish a watch on its own debug settings, on
90changes introspecting the debug flag, and pass any named hook values
91down to the hook executor, which will construct debug hook scripts on
92the fly for matching hooks. These debug hook scripts are responsible
93for connecting to tmux and monitoring the execution of the hook
94therein.
95
96Special care will be taken to ensure the viability of the tmux
97session and that debug mode is active before creating the interactive
98hook window in tmux.
99
100
101Screen vs. tmux
102---------------
103
104Initially juju used GNU screen for the debugging sessions rather
105than tmux, but tmux turned out to make it easier to avoid race
106conditions when starting the same session concurrently, as done for
107the debugging system. This was the main motivation that prompted
108the change to tmux. They both worked very similarly otherwise.
0109
=== added file 'source/index.rst'
--- source/index.rst 1970-01-01 00:00:00 +0000
+++ source/index.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,35 @@
1Documentation
2=============
3
4.. note:: juju is still in a stage of fast development, and is not yet
5 ready for prime time. The current software is being made available as
6 an early technology preview, and while it can be experimented with, it
7 should not be used in real deployments just yet.
8
9.. toctree::
10 :maxdepth: 2
11
12 about
13 faq
14 getting-started
15 user-tutorial
16 write-charm
17 charm
18 expose-services
19 hook-debugging
20 upgrades
21 charm-upgrades
22 provider-configuration-ec2
23 provider-configuration-local
24 juju-internals
25 juju-drafts
26 glossary
27 generated/modules
28
29
30Index and Glossary
31==================
32
33* :ref:`glossary`
34* :ref:`genindex`
35* :ref:`search`
036
=== added directory 'source/internals'
=== added file 'source/internals/agent-presence.rst'
--- source/internals/agent-presence.rst 1970-01-01 00:00:00 +0000
+++ source/internals/agent-presence.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,154 @@
1Agent presence and settings
2===========================
3
4Agents are a set of distributed processes within the juju
5framework tasked individually with various juju roles. Each agent
6process interacts with zookeeper state and its environment to perform
7its role.
8
9Common to all agents is the need to make their presence known, so
10that it can be monitored for availability, as well as the need for storage
11so that an agent can record its state.
12
13Presence
14--------
15
16The presence/aliveness of an agent process within the zookeeper state
17hierarchy is denoted by an ephemeral node. This ephemeral presence
18node is also used to store transient settings by the agent
19process. These transient values will have a scope of the agent process
20lifetime. These ephemeral presence nodes are stored under the /agents
21container in a hierarchy, according to their agents role. Agents
22fulfill many different roles within the juju system. Within the
23/agents container hierarchy, each agent's ephemeral node is contained
24within an <agent-role> container.
25
26For example, unit agents are stored in the following container::
27
28 /agents/unit/
29
30And provisioning agents in::
31
32 /agents/provisioning/
33
34The agent presence node within these role containers is further
35distinguished by the id it chooses to use within the container. Some
36agents are commonly associated with a persistent domain object, such as
37a unit or machine; in that case they will utilize the persistent domain
38object's id for their node name.
39
40For example, a unit agent for unit 11 (display name: mysql/0), would
41have a presence node at::
42
43 /agents/unit/unit-11
44
45For agents not associated with a persistent domain object, the number of
46agents is determined by configuration, and they'll utilize an ephemeral
47sequence to denote their id. For example the first provisioning agent
48process in the system would have a path::
49
50 /agents/provisioning/provisioning-0000000000
51
52and the second::
53
54 /agents/provisioning/provisioning-0000000001
55
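As a rough sketch of the same idea (using the third-party kazoo client
purely for illustration; it is not what juju's agents actually use, and the
ZooKeeper address is a placeholder)::

    from kazoo.client import KazooClient

    client = KazooClient(hosts="127.0.0.1:2181")
    client.start()

    # A unit agent's presence node is ephemeral: it disappears
    # automatically if the agent process (and thus its session) dies.
    client.create("/agents/unit/unit-11", ephemeral=True, makepath=True)

    # Agents without a persistent domain object use an ephemeral sequence
    # node instead, yielding names like provisioning-0000000000.
    client.create("/agents/provisioning/provisioning-",
                  ephemeral=True, sequence=True, makepath=True)
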
56Persistence
57-----------
58
59All agents are able to store transient settings of the agent process
60within their ephemeral presence nodes within zookeeper. If an agent
61needs persistent settings, they should be stored on an associated
62persistent domain object.
63
64
65Availability
66------------
67
68One of the key features of the juju framework is an absence of
69single points of failure. To enable availability across agents we'll
70run multiple instances of agents as appropriate, monitor the presence
71of agents, and restart them as necessary. Using the role information
72and the agent id as encoded in the presence node path, we can dispatch
73appropriate error handling and recovery logic, i.e. restart a unit
74agent or a provisioning agent.
75
76For agents providing cluster-wide services, it will be typical to have
77multiple agents for each role (e.g. provisioning, recovery).
78
79A recovery agent will need to distinguish causal factors regarding the
80disappearance of a unit presence node. In addition to error scenarios,
81the configuration state may change such that an agent is no longer
82necessary, for example an unused machine being terminated, or a unit no
83longer being assigned to a machine. To facilitate identifying the
84cause, a recovery agent would subscribe to the topology to distinguish
85a configuration change vs. a runtime change. For agents not associated
86with a persistent domain object, this identification will be based on
87examining the configured number of agents for the role, and verifying
88that it matches the runtime state.
89
90
91Startup and recovery
92--------------------
93
94On startup, an agent will attempt to create its presence node. For
95agents associated with persistent domain objects, this process will
96either succeed or result in an error due to an existing agent already
97being in place, as the ids used are unique to a single instance of the
98agent, since the id is based on the domain object id.
99
100Agents not attached to persistent domain objects should verify
101their configuration parameter for the total number of agents
102for the role.
103
104In the case of a conflict or an already satisfied configuration, the agent
105process should terminate with an error message.
106
107
108Agent state API
109---------------
110
111
112``IAgentAssociated``::
113
114 """An api for persistent domain objects associated to an agent."""
115
116 def has_agent():
117 """Return boolean whether the agent (presence node) exists."""
118
119 def get_agent_state():
120 """Retrieve the agent associated to this domain object."""
121
122 def connect_agent():
123 """Create an agent presence node.
124
125 This serves to connect the agent process with its agent state,
126 and will create the agent presence node if doesn't exist, else
127 raise an exception.
128
129 Returns an agent state.
130 """
131
132
133``IAgentState``::
134
135 def get_transient_data():
136 """
137 Retrieve the transient data for the agent as a byte string.
138 """
139
140 def set_transient_state(data):
141 """
142 Set the transient data for the agent as a byte string.
143 """
144
145 def get_domain_object():
146 """
147 TBD if desirable. An agent attached to a persistent domain
148 object has all the knowledge to retrieve the associated
149 persistent domain object. For a machine agent state, this would
150 retrieve the machine state. For a unit agent state this would
151 retrieve the unit. Most agent implementations will already have
152 access to the domain object, and will likely retrieve or create
153 the agent from it.
154 """
0155
=== added file 'source/internals/expose-services.rst'
--- source/internals/expose-services.rst 1970-01-01 00:00:00 +0000
+++ source/internals/expose-services.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,143 @@
1Service exposing implementation details
2=======================================
3
4
5Not in scope
6------------
7
8It is not in the scope of this specification to determine mapping to a
9public DNS or other directory service.
10
11
12Implementation of ``expose`` and ``unexpose`` subcommands
13---------------------------------------------------------
14
15Two new user commands were added::
16
17 juju expose <service name>
18
19 juju unexpose <service name>
20
21These commands set and remove a flag znode, **/services/<internal
22service id>/exposed**, respectively.
23
24
25Hook command additions
26----------------------
27
28Two new hook commands were added for opening and closing ports. They
29may be executed within any charm hook::
30
31 open-port port[/protocol]
32
33 close-port port[/protocol]
34
35These commands store in the ZK tree, under **/units/<internal unit
36id>/ports**, the desired opened port information as serialized to
37JSON. For example, executing ``open-port 80`` would be serialized as
38follows::
39
40 {"open": [{"port": 80, "proto": "tcp"}, ...]}
41
42This format accommodates tracking other ancillary information for
43exposing services.
44
45These commands are executed immediately within the hook.
46
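For illustration, the stored value is plain JSON, so it can be produced and
inspected with nothing more than the standard library (a sketch, not the
unit agent's actual code)::

    import json

    ports = {"open": [{"port": 80, "proto": "tcp"}]}
    serialized = json.dumps(ports)   # the value stored under .../ports
    assert json.loads(serialized)["open"][0]["port"] == 80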
47
48New ``exposed`` and ``unexposed`` service hooks
49-----------------------------------------------
50
51The ``exposed`` service hook runs upon a service being exposed with
52the ``juju expose`` command. As part of the unit workflow, it is
53scheduled to run upon the existence of **/services/<internal service
54id>/exposed** and the service unit being in the ``started`` state.
55
56Likewise, the ``unexposed`` service hook runs upon the removal of a
57**/services/<internal service id>/exposed** flag znode.
58
59These hooks will be implemented at a future time.
60
61
62``juju status`` display of opened ports
63-------------------------------------------
64
65If a service has been exposed, then the juju status output is
66augmented. For the YAML serialization, for each exposed service, the
67``exposed`` key is added, with the value of ``true``. (It is not
68displayed otherwise.) For each service unit of an exposed service with
69opened ports, the ``open-ports`` key is added, with its value a
70sequence of ``port/proto`` strings. If no ports are opened, its value
71is an empty list.
72
73
74Provisioning agent implementation
75---------------------------------
76
77The provisioning agent currently is the only place within juju
78that can take global actions with respect to the provider. Consequently,
79the provisioning agent is currently responsible for the existing, if simple,
80EC2 security group management (with the policy of opening all ports, seen in
81the code `juju.providers.ec2.launch.EC2LaunchMachine`).
82
83The provisioning agent watches for the existence of
84**/services/<internal service id>/exposed**, and if present watches the
85service units' settings **/units/<internal unit id>/ports** and makes
86changes in the firewall settings through the provider.
87
88For the EC2 provider, this is done through security groups (see
89below). Later we will revisit to let a machine agent do this in the
90context of iptables, so as to get out of the 500 security group limit
91for EC2, enable multiple service units per machine, be generic with
92other providers, and to provide future support for internal firewall
93config.
94
95
96EC2 provider implementation
97---------------------------
98
99Prior to the launch of a new machine instance, a unique EC2 security
100group is added. The machine instance is then assigned to this group at
101launch. Likewise, terminating the machine will result in the EC2
102provider deleting the security group for the machine. (This cleanup
103will be implemented in a future branch.)
104
105Given this model of a security group per machine, with one service
106unit per machine, exposing and unexposing ports for a service unit
107corresponds to EC2's support for authorization and revocation of ports
108per security group. In particular, EC2 supports a source address of
109``0.0.0.0/0`` that corresponds to exposing the port to the world.
110
111To make this concrete, consider the example of exposing the
112``my-wordpress`` service. Once the command ``open-port 80`` has been
113run on a given service unit of ``my-wordpress``, then for the
114corresponding machine instance, the equivalent of this EC2 command is
115run::
116
117 ec2-authorize $MACHINE_SECURITY_GROUP -P tcp -p 80 -s 0.0.0.0/0
118
119``$MACHINE_SECURITY_GROUP`` is named ``juju-ENVIRONMENT-MACHINE_ID``,
120eg. something like ``juju-prod-2``.
121
122Any additional service units of ``my-wordpress``, if they run
123``open-port 80``, will likewise invoke the equivalent of the above
124command, for the corresponding machine security groups.
125
126If ``my-wordpress`` is unexposed, a ``my-wordpress`` service unit is
127removed, the ``my-wordpress`` service is destroyed, or the
128``close-port`` command is run for a service unit, then the equivalent
129of the following EC2 command is run, for all applicable machines::
130
131 ec2-revoke $MACHINE_SECURITY_GROUP -P tcp -p 80 -s 0.0.0.0/0
132
133Although this section showed the equivalent EC2 commands for
134simplicity, txaws is used for the actual implementation.
135
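As a rough Python sketch of the same authorization call (shown with the boto
library purely for illustration, using the example group name from above)::

    import boto

    # Connects using AWS credentials from the environment or boto config.
    conn = boto.connect_ec2()

    # Equivalent of: ec2-authorize juju-prod-2 -P tcp -p 80 -s 0.0.0.0/0
    conn.authorize_security_group(group_name="juju-prod-2",
                                  ip_protocol="tcp",
                                  from_port=80, to_port=80,
                                  cidr_ip="0.0.0.0/0")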
136
137Implementation plan
138-------------------
139
140The following functionality needs to be added. This should be divisible
141into separate, small branches:
142
143 * Implement exposed and unexposed hooks.
0144
=== added file 'source/internals/unit-agent-hooks.rst'
--- source/internals/unit-agent-hooks.rst 1970-01-01 00:00:00 +0000
+++ source/internals/unit-agent-hooks.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,307 @@
1Unit Agent hooks
2================
3
4Introduction
5------------
6
7The Unit Agent (**UA**) in juju is responsible for managing and
8maintaining the per-machine service units. By calling life-cycle and
9state change hooks the UA is able to allow the Service Unit to respond
10to changes in the state of the environment. This is done through the
11invocation of hooks provided by the charm author.
12
13This specification outlines the interaction between the UA, the
14running software being managed by the UA and the hooks invoked in
15response to state or process level changes in the runtime.
16
17Hooks_ are defined in another document. This specification only
18captures how they are invoked and managed, not why.
19
20.. _Hooks: ../charm.html#hooks
21
22When the Machine Agent (**MA**) spawns a UA it does so in order to
23manage the smallest managed unit of service deployment and
24management. The process managed by the UA will be called the **UAP**
25later in the documentation.
26
27The UAP does not directly communicate with the UA; that is the
28responsibility of the hooks and is handled by the provided command
29line tools. The means through which that communication occurs and the
30semantics of it are described in this document.
31
32
33Hooks access to settings
34------------------------
35
36Hooks have access to two kinds of settings. The first is the
37*"service settings"*, which cover configuration details for
38all units of the given service. These are usually provided
39manually by the user, are global to the service, and will not
40be written to by service units themselves. This is the
41principal way through which an administrator configures the
42software running inside a juju service unit.
43
44The second kind is known as *"relation settings"*, and are
45made available to service units whenever they are participating in
46a relation with one or more service units. In these cases, each
47participating unit will have its own set of settings specific to
48that relation, and will be able to query both its local settings
49and the remote settings from any of the participating units.
50That's the main mechanism used by juju to allow service units
51to communicate with each other.
52
53Using the example of a blog deployment we might include information
54such as the theme used by the blog engine and the title of the blog in
55the "service settings". The "relation settings" might contain specific
56information about the blog engine's connection to a database deployed
57on its behalf, for example an IP address and port.
58
59There is a single ZK node for the "service settings" and another for
60the "relation settings". Within each node we store a dictionary
61mapping string keys to opaque blobs of information which are managed
62by the service hooks and the juju administrator.
63
64Hooks are presented with a synchronous view of the state of these
65nodes. When a request is made for a particular setting in a particular
66node, the cache will present a view of that node that is consistent
67for the client for the lifetime of the hook invocation. For example,
68assume a settings node with settings 'a' and 'b'. When the hook
69requests the value of 'a' from a relation settings node, we will
70present a consistent view of those settings should it request 'a' or
71'b' from that same relation settings node during the lifetime of the
72hook. If, however, it were to attempt to request value 'a' from a
73different relation settings node, this new node's settings would be
74cached at the time of its first interaction with the hook. Repeated
75reads of data from the same settings node will continue to yield the
76client's view of that data.
77
78When manipulating data, even if the initial interaction with the data
79is a set, the settings are first read into the UA cache and the cache
80is updated with the current value.
81
82
83Service Unit name
84-----------------
85
86A service unit name in juju is formed by including both the name
87of the service and a monotonically increasing number that uniquely
88specifies the service unit for the lifetime of a juju
89deployment::
90
91 <service_name>/<service_unit_number>
92
93This results in names like "wordpress/1" and "mysql/1". The numbers
94themselves are not significant but do obey the rule that they will not
95be reused during the lifetime of a service. This means that if a UA
96goes away the number that represented it is retired from the
97deployment.
98
99For additional details see juju/state/service.py.
100
101
102Client Id
103---------
104
105Because of the way in which settings state is presented through the
106command line utilities within hooks, clients are provided a string
107token through an environment variable,
108*JUJU_CLIENT_ID*. Using this variable, all command line tools will
109connect with a shared common state when used from a single hook
110invocation.
111
112The few command line utilities, such as juju-log, which could be
113called outside the context of a hook need not pass a client id. At the
114time of this writing it is expected that cli tools which don't need hook
115context either don't make an effort to present a stable view of
116settings between calls (and thus run with a completely pass-through
117cache proxy) or don't interact directly with the state.
118
119However, as indicated below, the *--client_id* flag can be passed
120directly to any tool indicating the caching context which should be
121used. This facilitates testing as well as allowing some flexibility in
122the future.
123
124Passing a client_id which the UA is unaware of (or which has expired
125through some other means) will result in an error and an exit code
126being returned to the client.
127
128
129Hook invocation and communication
130---------------------------------
131
132Twisted (which is used to handle networking and asynchronous
133interactions throughout the codebase) defines a key-value oriented
134binary protocol called AMP which is used to communicate between the UA
135and the hooks executed on behalf of the charm. To facilitate this,
136the filename of a Unix Domain socket is provided through the process
137environment. This socket is shared among all hook invocations and can
138even be used by tools outside the context of a particular hook
139invocation. Because of this, providing a `Client Id`_ to calls will
140establish a connection to an internal data-cache offering a consistent
141view of settings on a per-node, per-client basis.
142
143Communication over this socket takes place using an abstraction
144provided by AMP called Commands. Hooks send these commands to the
145provided socket through the invocation of utility commands. These
146commands in turn schedule interactions with the settings available in
147ZK.
148
149Because of the policy used for scheduling changes to settings the
150actions of hooks are not applied directly to ZK (and thus are not
151visible outside the particular UA invoking the hook) until the hook
152terminates with a success code.
153
154Here are the commands the initial revision will support and a bit about
155their characteristics:
156
157 * **get(client_id, unit_name, setting_name)** - This command will return the
158 value for a given key name or return a KeyError. A key error
159 can be mapped through to the cli as null with a failed exit
160 code. **unit_name** is processed using the standard `Service
161 Unit Name`_ policy.
162
163 * **set(client_id, unit_name, json_blob)** - This command will enqueue a
164 state change to ZK pending successful termination of the
165 hook. **unit_name** is processed using the standard `Service
166 Unit Name`_ policy. The json_blob is a JSON string
167 serialization of a dict which will be applied as a set of
168 updates to the keys and values stored in the existing
169 settings. Because the cache object contains the updated state
170 (but is not visible outside the hook until successful
171 completion) subsequent reads of settings would return the
172 values provided by the set call.
173
174 * **list_relations(client_id)** - Returns a list of all relations
175 associated with a hook at the time of invocation. The values
176 of this call will typically also be exposed as an environment
177 variable, **JUJU_MEMBERS**.
178
179 * **flush(client_id)** - reserved
180
181 * **sync(client_id)** - reserved
182
183 * **wait(client_id, keyname)** - reserved
184
185
186Unit Agent internal state
187-------------------------
188
189This is a list of internal state which the UA maintains for the proper
190management of hook invocations.
191
192 * which hooks have fired (and the expected result state).
193 * the UNIX domain socket passed to hooks for AMP communication
194 * the path to the container in which the Service Unit is executing
195 (passed in environment to hooks).
196 * the cached state of relationship nodes and settings relative to
197 particular hook invocations.
198
199
200Command line interface
201----------------------
202
203While the command line utilities provided use the underlying AMP
204commands to enact their work, they provide a standard set of utilities
205for passing data between files and ZK state.
206
207Hooks have access to many commands provided by juju for
208interfacing with settings. These provide a set of standard command
209line options and conventions.
210
211 * Command line tools like *relation-set* will check stdin,
212 processing the provided input as a JSON dict of values that
213 should be handled as though they were command line
214 arguments. Using this convention it's possible to easily set
215 many values at once without any thought to escaping values for
216 the shell.
217
218 * Similar to *curl(1)* if you start the data with the letter @,
219 the rest should be a file name to read the data from, or - if
220 you want to read the data from stdin.
221
222 * Command line tools responsible for returning data to the user,
223 such as **relation-get**, will output JSON by default when
224 returning more than a single value or **--format=json** is
225 present in the command line. Requests for a single value default
226 to returning the value without JSON serialisation unless the
227 --format=json flag is passed.
228
229 * Output from command line tools defaults to stdout. If the **-o**
230 option is provided any tool will write its output to a file
231 named after that flag. ex. **relation-get -o /tmp/output.json**
232 will create or replace a file called /tmp/output.json with the
233 data existent in the relation.
234
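Taken together, these conventions allow hook snippets along the following
lines (the relation keys shown are purely hypothetical)::

    # Set several relation values at once by piping a JSON dict on stdin.
    echo '{"host": "10.0.0.1", "port": "5432"}' | relation-set

    # Dump the relation settings as JSON into a file.
    relation-get --format=json -o /tmp/output.json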
235
236Logging
237-------
238
239Command line hooks communicate with the user/admin by means of three
240primary channels.
241
242 * **Hook exit code** Zero is success, anything else is regarded as hook
243 failure and will cause the hook to be run at a later time.
244
245 * **Stdout/Stderr** Messages printed, echoed or otherwise emitted
246 from the hooks on stdout or stderr are converted to log
247 messages of levels INFO and ERROR respectively. These messages
248 will then be emitted by the UA as they occur and are not
249 buffered like global state changes.
250
251 * **juju-logger** (reserved) An additional command line tool
252 provided to communicate more complex logging messages to the
253 UA and help them be made available to the user.
254
255
256Calling environment
257-------------------
258
259Hooks can expect to be invoked with a standard environment and
260context. The following will be included:
261
262 * `$JUJU_SOCKET` - Path to a UNIX Domain socket which will be
263 made available to the command line tools in order to communicate
264 with the UA.
265
266 * `$JUJU_CLIENT_ID` - A unique identifier passed to a hook
267 invocation used to populate the --client_id flag to cli
268 tools. This is described in the section `Client Id`_.
269
270 * `$JUJU_LOCAL_UNIT` - The unit name of the unit this hook is
271 being invoked in. (ex: myblog/0)
272
273 * `$JUJU_SERVICE` - The name of the service for which this hook
274 is running. (ex: myblog)
275
276 * `$JUJU_CHARM` - The name of the charm which deployed the
277 unit the hook is running in. (ex: wordpress)
278
279
280Hooks called for relationships will have the following additional
281environment variables available to them.
282
283 * `$JUJU_MEMBERS` - A space-delimited list of qualified
284 relationship ids uniquely specifying all the UAs participating in
285 a given relationship. (ex. "wordpress/1 wordpress/2")
286
287 * `$JUJU_RELATION` - The relation name this hook is running
288 for. It's redundant with the hook name, but is necessary for
289 the command line tools to know the current context.
290
291 * `$JUJU_REMOTE_UNIT` - The unit name of the remote unit
292 which has triggered the hook execution.
293
294
295Open issues
296-----------
297
298There are still a number of open issues with this specification. There
299is still open debate about whether the UA runs inside the same process
300space/container and how this will play out with security. This has
301ramifications for this specification as well, as we'd take steps to make sure
302client code cannot violate the ZK state by connecting with its
303own copy of the code on a known port.
304
305The specification doesn't define 100% which command line tools will
306get which environment settings.
307
0308
=== added file 'source/internals/unit-agent-startup.rst'
--- source/internals/unit-agent-startup.rst 1970-01-01 00:00:00 +0000
+++ source/internals/unit-agent-startup.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,156 @@
1Unit Agent startup
2==================
3
4Introduction
5------------
6
7The unit agent manages a state machine workflow for the unit. For each
8transition the agent records the current state of the unit and stores
9that information as defined below. If the agent dies, or is restarted
10for any reason, the agent will resume the workflow from its last known
11state.
12
13The available workflow states and transitions are::
14
15 "new" -> "ready" [label="install"]
16 "new" -> "install-error" [label="error-install"]
17 "ready" -> "running" [label="start"]
18 "ready" -> "start-error" [label="error-start"]
19 "running" -> "ready" [label="stop"]
20 "running" -> "stop-error" [label="error-stop"]
21
22The agent does not have any insight into external processes that the
23unit's charm may be managing; its sole responsibility is executing
24hooks in a deterministic fashion as a consequence of state changes.
25
26Charm hook execution (excepting relation hooks) corresponds to
27invoking a transition on the unit workflow state. Any errors during a
28transition will prevent a state change. All state changes are
29recorded persistently on the unit state. If a state change fails, it
30will be reattempted up to a maximum number of retries, after which the
31unit workflow will be transitioned to a failure state specific to the
32current state and attempted transition, and administrator intervention
33will be required to resolve it.
34
35On startup the agent will establish its presence node (as per the
36agent state spec), and read the state of the unit. If the unit is not
37running it will have its transition hooks executed to place it in the
38running state.
39
40The persistent state of the unit as per this state machine is stored
41locally on disk of the unit. This allows for the continuation of long
42running tasks in the face of transient communication failures with zk.
43For example if a long running install task is kicked off then it may
44complete and record the transition to persistent state even if the zk
45connection is not available when the install hook has completed.
46
47The persistent workflow state of the unit is also replicated to
48zookeeper for introspectability, and communication of local failures
49to the global coordination space. The zk state for this workflow is
50considered non-authoritative by the unit-agent if its operating in a
51disconnected mode.
52
53
54Startup sequence
55----------------
56
57The following outlines the set of steps a unit agent executes when
58starting up on a machine resource.
59
60 - Unit agent process starts, inspects its configuration and
61 environment.
62
63 - A zookeeper client handle is obtained.
64
65 - The agent retrieves its unit state, via the service state manager.
66
67 - The agent retrieves its service relations, via the relation state
68 manager.
69
70At deployment time, a service is deployed with its dependencies. Those
71dependencies are actualized in relations between the services that are
72being deployed. There are several types of relations that can be
73established. The most common is a client/server relationship, like a
74client application and a database server. Each of the services in such
75a relation performs a role within that relation. In this case the
76database performs the 'server' role, and the client application
77performs the 'client' role. When actualizing the service relations,
78the physical layout within the coordination space (zookeeper) takes
79these roles into account.
80
81For example in the client server relation, the service performing the
82'server' role has its units under a service-role container named
83'server' denoting the role of its units in the relation.
84
85For each service relation, the agent will:
86
87 - Create its ``/relations/relation-1/settings/unit-X`` relation
88 local data node, if it doesn't exist.
89
90 - Create its ``/relations/relation-1/<service-role>/unit-X`` if it
91 doesn't exist. The node is not considered 'established' for the
92 purposes of hook execution on other units till this node exists.
93
94 - Establish watches as outlined below.
95
96
97Unit relation observation
98-------------------------
99
100Based on the relation type and the unit's service role, the unit agent
101will retrieve and establish watches on the other units
102in the relation.
103
104The relation type determines which service role container the unit
105agent will get and observe the children of. In a client server
106relation there would be both::
107
108 /relations/relation-1/server
109 /relations/relation-1/client
110
111And a client unit would observe and process the unit children of the
112server node which functions as the service-role representing the
113endpoint of the relation. In a peer relation there would be a
114service-role container with the path ``/relations/relation-1/peer``
115which would be observed and processed.
116
117 - The unit agent will get the children and establish a watch (w-1) on
118 the service role container in the relationship.
119
120 - For each unit found, the relation local data node
121 ``/relations/relation-X/settings/unit-X`` will have a get watch
122 (w-2) established.
123
124 - The agent stores a process-local variable noting which children it
125 has seen (v-1).
126
127Finally, after processing the children:
128
129 - if the unit agent is completing its startup, and another
130 'established' unit was found, the agent should fire its
131 relation-changed hook (type joined).
132
133
134Watch behavior
135--------------
136
137 - (w-1) if the service-role child watch fires with a delete event,
138 reestablish the watch, and execute the relation-changed hook (type
139 departed), update variable (v-1)
140
141 - (w-1) if the service-role child watch fires with a created event,
142 reestablish the watch, and execute the relation-changed hook (type
143 joined), update variable (v-1)
144
145 - (w-1) if the service-role node child watch fires with a deleted
146 event, the agent invokes the ``relation-broken`` hook. (the service
147 role container was removed)
148
149 - (w-2) if a unit relation local data node watch fires with a
150 modified event, reestablish the watch, and execute the
151 relation-changed hook (type changed) if the unit is in variable
152 (v-1).
153
154 - (w-2) if a unit relation local data node watch fires with a delete
155 event, ignore (the agent exists watch must also have fired with a
156 delete event).
0157
=== added file 'source/internals/zookeeper.rst'
--- source/internals/zookeeper.rst 1970-01-01 00:00:00 +0000
+++ source/internals/zookeeper.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,215 @@
1ZooKeeper
2=========
3
4This document describes the reasoning behind juju's use of ZooKeeper,
5and also the structure and semantics used by juju in the ZooKeeper
6filesystem.
7
8juju & ZooKeeper
9--------------------
10
11ZooKeeper offers a virtual filesystem with so called *znodes* (we'll
12refer to them simply as *nodes* in this document). The state stored in
13the filesystem is fully introspectable and observable, and the changes
14performed on it are atomic and globally ordered. These features are
15used by juju to maintain its distributed runtime state in a reliable
16and fault tolerant fashion.
17
18When some part of juju wants to modify the runtime state in some way,
19rather than enqueuing a message to a specific agent, it should instead
20perform the modification in the ZooKeeper representation of the state,
21and the agents responsible for enforcing the requested modification
22should be watching the given nodes, so that they can realize the changes
23performed.
24
25When compared to traditional message queueing, this kind of behavior
26enables easier global analysis, fault tolerance (through redundant
27agents which watch the same states), introspection, and so on.
28
29
30Filesystem Organization
31-----------------------
32
33The semantics and structures of all nodes used by juju in its
34ZooKeeper filesystem usage are described below. Each entry here maps
35to a node, and the semantics of the given node are described right
36below it.
37
38Note that, unlike a traditional filesystem, nodes in ZooKeeper may
39hold data, while still being a parent of other nodes. In some cases,
40information is stored as content for the node itself, in YAML format.
41These are noted in the tree below under a bulleted list and *italics*.
42In other cases, data is stored inside a child node, noted in the tree
43below as indented **/bold**. The decision around whether to use a
44child node or content in the parent node revolves around use cases.
45
46
47.. Not for now:
48
49 .. _/files:
50
51 **/files**
52 Holds information about files stored in the machine provider. Each
53 file stored in the machine provider's storage location must have a
54 entry here with metadata about the file.
55
56 **/<filename>:<sha256>**
57 The name of nodes here is composed by a plain filename, a colon, and
58 the file content's sha256. As of today these nodes are empty, since
59 the node name itself is enough to locate it in the storage, and to
60 assess its validity.
61
62**/topology**
63 Describes the current topology of machines, services, and service units. Nodes
64 under ``/machines``, ``/services``, and ``/units``, should not be considered
65 as valid unless they are described in this file. The precise format of this
66 file is an implementation detail.
67
68**/charms**
69 Each charm used in this environment must have one entry inside this
70 node.
71
72 :Readable by: Everyone
73
74 **/<namespace>:<name>-<revision>**
75 Represents a charm available in this environment. The node name
76 includes the charm namespace (ubuntu, ~user, etc), the charm name,
77 and the charm revision.
78
79 - *sha256*: This option contains the sha256 of a file in the file
80 storage, which contains the charm bundle itself.
81
82 - *metadata*: Contains the metadata for the charm itself.
83
84 - *schema*: The settings accepted by this charm. The precise details
85 of this are still unspecified.
86
87**/services**
88 Each charm to be deployed must be included under an entry in
89 this tree.
90
91 :Readable by: Everyone
92
93 **/service-<0..N>**
94 Node with details about the configuration for one charm, which can
95 be used to deploy one or more charm instances for this specific
96 charm.
97
98 - *charm*: The charm to be deployed. The value of this option should
99 be the name of a child node under the ``/charms`` parent.
100
101 **/settings**
102 Options for the charm provided by the user, stored internally in
103 YAML format.
104
105 :Readable by: Charm Agent
106 :Writable by: Admin tools
107
108**/units**
109 Each node under this parent reflects an actual service agent which should
110 be running to manage a charm.
111
112 **/unit-<0..N>**
113 One running service.
114
115 :Readable by: Charm Agent
116 :Writable by: Charm Agent
117
118 **/machine**
119 Contains the internal machine id this service is assigned to.
120
121 **/charm-agent-connected**
122 Ephemeral node which exists when a charm agent is handling
123 this instance.
124
125
126**/machines**
127
128 **/machine-<0..N>**
129
130 **/provisioning-lock**
131 The Machine Provisioning Agent
132
133 **/machine-agent-connected**
134 Ephemeral node created when the Machine Agent is connected.
135
136 **/info**
137 Basic information about this machine.
138
139 - *public-dns-name*: The public DNS name of this machine.
140 - *machine-provider-id*: ie. EC2 instance id.
141
142
143Provisioning a new machine
144--------------------------
145
146When the need for a new machine is determined, the following sequence of
147events happen inside the ZooKeeper filesystem to deploy the new machine:
148
1491. A new node is created at ``/machines/instances/<N>``.
1502. Machine Provisioning Agent has a watcher on ``/machines/instances/``, and
151 gets notified about the new node.
1523. Agent acquires a provisioning lock at
153 ``/machines/instances/<N>/provisioning-lock``
1544. Agent checks if the machine still has to be provisioned by verifying
155 if ``/machines/instances/<N>/info`` exists.
1565. If the machine has provider launch information, then the agent schedules
157 to come back to the machine after ``<MachineBootstrapMaxTime>``.
1586. If not, the agent launches the machine via the provider, stores the
159 provider launch info (i.e. EC2 machine id, etc.), and schedules
160 to come back to the machine after ``<MachineBootstrapMaxTime>``.
1617. As a result of a schedule call the machine provider verifies the
162 existence of a ``/machines/instances/<N>/machine-agent-connected`` node
163 and, if it exists, sets a watch on it.
1648. If the agent node doesn't exist after the <MachineBootstrapMaxTime> then
165 the agent acquires the ``/machines/instances/<N>/provisioning-lock``,
166 terminates the instance, and goes to step 6.
167
168
169Bootstrap Notes
170~~~~~~~~~~~~~~~
171
172This verification of the connected machine agent helps us guard against any
173transient errors that may exist on a given virtual node due to provider
174vagaries.
175
176When a machine provisioning agent comes up, it must scan the entire instance
177tree to verify all nodes are running. We need to keep some state to distinguish
178a node that has never come up from a node that has had its machine agent connection
179die so that a new provisioning agent can distinguish between a new machine bootstrap
180failure and a running machine failure.
181
182We use a one-time password (OTP) via user data to guard the machine agent's
183permanent principal credentials.
184
185TODO... we should track a counter to keep track of how many times we've
186attempted to launch a single instance.
187
188
189Connecting a Machine
190--------------------
191
192When a machine is launched, we utilize cloud-init to install the requisite
193packages to run a machine agent (libzookeeper, twisted) and launch the
194machine agent.
195
196The machine agent reads its one time password from ec2 user-data and connects
197to zookeeper and reads its permanent principal info and role information which
198it adds to its connection.
199
200The machine agent reads and sets a watch on
201``/machines/instances/<N>/services/``. When a service is placed there, the agent
202resolves its charm, downloads the charm, creates an LXC container, and launches
203a charm agent within the container, passing the charm path.
204
205Starting a Charm
206------------------
207
208The charm agent connects to zookeeper using principal information provided
209by the machine agent. The charm agent reads the charm metadata, installs
210any package dependencies, and then starts invoking charm hooks.
211
212The charm agent creates the ephemeral node
213``/services/<service name>/instances/<N>/charm-agent-connected``.
214
215The charm is running when....
0216
=== added file 'source/juju-drafts.rst'
--- source/juju-drafts.rst 1970-01-01 00:00:00 +0000
+++ source/juju-drafts.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,10 @@
1Drafts
2======
3
4This section contains documents which may be unreviewed, incomplete,
5incorrect, out-of-date, or all of those.
6
7.. toctree::
8 :glob:
9
10 drafts/*
011
=== added file 'source/juju-internals.rst'
--- source/juju-internals.rst 1970-01-01 00:00:00 +0000
+++ source/juju-internals.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,11 @@
1Implementation details
2======================
3
4This section details topics which are generally not very useful
5for running juju, but may be interesting if you want to hack on it.
6
7.. toctree::
8 :glob:
9
10 internals/*
11
012
=== added file 'source/provider-configuration-ec2.rst'
--- source/provider-configuration-ec2.rst 1970-01-01 00:00:00 +0000
+++ source/provider-configuration-ec2.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,64 @@
1EC2 provider configuration
2--------------------------
3
4The EC2 provider accepts a number of configuration options that can be
5specified in the ``environments.yaml`` file under an ec2 provider section.
6
7 access-key:
8 The AWS access key to utilize for calls to the AWS APIs.
9
10 secret-key:
11 The AWS secret key to utilize for calls to the AWS APIs.
12
13 ec2-uri:
14    The EC2 API endpoint URI; by default it points to `ec2.amazonaws.com`.
15
16 region:
17    The EC2 region; by default it points to `us-east-1`. If `ec2-uri` is
18    specified, it will take precedence.
19
20 s3-uri:
21    The S3 API endpoint URI; by default it points to `s3.amazonaws.com`.
22
23 control-bucket:
24 An S3 bucket unique to the environment, where some runtime metadata and
25 charms are stored.
26
27 juju-origin:
28 Defines where juju should be obtained for installing in
29 machines. Can be set to a "lp:..." branch url, to "ppa" for
30 getting packages from the official juju PPA, or to "distro"
31 for using packages from the official Ubuntu repositories.
32
33 If this option is not set, juju will attempt to detect the
34 correct origin based on its run location and the installed
35 juju package.
36
37 default-instance-type:
38 The instance type to be used for machines launched within the juju
39 environment. Acceptable values are based on EC2 instance type API names
40 like t1.micro or m1.xlarge.
41
42 default-image-id:
43    The default Amazon machine image to utilize for machines in the
44 juju environment. If not specified the default image id varies by
45 region.
46
47 default-series:
48 The default Ubuntu series to use (`oneiric`, for instance). EC2 images
49 and charms referenced without an explicit series will both default to
50 the value of this setting.
51
52Additional configuration options, not specific to EC2:
53
54 authorized-keys-path:
55 The path to a public key to place onto launched machines. If no value
56 is provided for either this or ``authorized-keys`` then a search is
57 made for some default public keys "id_dsa.pub", "id_rsa.pub",
58    "identity.pub". If none of those exist, then a LookupError is raised
59    when launching a machine.
60
61 authorized-keys:
62 The full content of a public key to utilize on launched machines.
63
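Putting a few of these options together, an ec2 section of ``environments.yaml`` might look like the following (the environment name and all values shown are placeholders)::

    sample:
      type: ec2
      access-key: YOUR-ACCESS-KEY
      secret-key: YOUR-SECRET-KEY
      control-bucket: juju-faefb490d69a41f0a3616a4808e0766b
      juju-origin: ppa
      default-series: oneiric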
64
065
=== added file 'source/provider-configuration-local.rst'
--- source/provider-configuration-local.rst 1970-01-01 00:00:00 +0000
+++ source/provider-configuration-local.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,53 @@
1Local provider configuration
2----------------------------
3
4The local provider allows for deploying services directly against the local/host machine
5using LXC containers, with the goal of experimenting with juju and developing charms.
6
7The local provider has some additional package dependencies. Attempts to use
8this provider without these packages installed will terminate with a message
9indicating the missing packages.
10
11The following packages are required:
12
13 - libvirt-bin
14 - lxc
15 - apt-cacher-ng
16 - zookeeper
17
18
19The local provider can be configured by specifying ``type: local`` and a ``data-dir``,
20as in the following example::
21
22 local:
23 type: local
24 data-dir: /tmp/local-dev
25 control-bucket: juju-a14dfae3830142d9ac23c499395c2785999
26 admin-secret: b3a5dee4fb8c4fc9a4db04751e5936f4
27 juju-origin: distro
28 default-series: oneiric
29
30Upon running ``juju bootstrap`` a zookeeper instance will be started on the host
31along with a machine agent. The bootstrap command will prompt for sudo access
32as the machine agent needs to run as root in order to create containers on the
33local machine.
34
35The containers created are namespaced in such a way that you can create multiple
36environments on a machine. The containers are also namespaced by user for
37multi-user machines.
38
39Local provider environments do not survive reboots of the host at this time; the
40environment will need to be destroyed and recreated after a reboot.
41
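Recreating the environment after a reboot amounts to the usual pair of commands::

    $ juju destroy-environment
    $ juju bootstrap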
42
43Provider specific options
44=========================
45
46 data-dir:
47 Directory for zookeeper state and log files.
48
49
50
51
52
53
054
=== added file 'source/upgrades.rst'
--- source/upgrades.rst 1970-01-01 00:00:00 +0000
+++ source/upgrades.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,57 @@
1Upgrades
2========
3
4A core functionality of any configuration management system is
5handling the full lifecycle of service and configuration
6upgrades.
7
8Charm upgrades
9--------------
10
11A common task when doing charm development is iterating over charm
12versions by upgrading the charm of a running service while it's
13live.
14
15The use case also extends to a user upgrading a deployed
16service's charm with a newer version from an upstream charm
17repository.
18
19In some cases a new charm version will also reference newer
20software/package versions or new packages.
21
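For example, after incrementing the charm's ``revision`` file, a running service can typically be upgraded from a local charm repository like so (the repository path and service name here are illustrative)::

    $ juju upgrade-charm --repository=examples/ wordpress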
22More details in the `charm upgrades documentation`_
23
24.. _`charm upgrades documentation`: ./charm-upgrades.html
25
26
27*NOTE* At the moment this is the only form of upgrade that juju
28provides.
29
30Service upgrades
31----------------
32
33There's an interesting set of upgrade use cases which embodies lots of
34real-world usage and which has been left for future work.
35
36One case is where an application is deployed across multiple service
37units, and the code needs to be upgraded in lock step across all of
38them (either due to software incompatibility or data changes in
39related services).
40
41Additionally, the practices of a rolling upgrade, and of cloning a service
42as an upgrade mechanism, are also interesting problems which are left
43for future work.
44
45juju upgrades
46-------------
47
48One last upgrade scenario is upgrading the juju software
49itself.
50
51At the moment juju is deployed from revision control, although it is
52being packaged for the future. Currently all of the juju agents maintain
53persistent connections to zookeeper, the failure of which may be grounds
54for the system to take corrective action. As a simple approach to
55performing system-wide juju upgrades, the software would be updated on
56the existing systems and the agents then restarted, but instructed to
57keep their existing zookeeper session ids.
058
=== added file 'source/user-tutorial.rst'
--- source/user-tutorial.rst 1970-01-01 00:00:00 +0000
+++ source/user-tutorial.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,335 @@
1.. _user-tutorial:
2
3User tutorial
4=============
5
6Introduction
7------------
8
9This tutorial demonstrates basic features of juju from a user perspective.
10A juju user would typically be a devops engineer or a sys-admin who is interested
11in automated deployment and management of servers and services.
12
13Bootstrapping
14-------------
15
16The first step for deploying a juju system is to perform bootstrapping.
17Bootstrapping launches a utility instance that is used in all subsequent
18operations to launch and orchestrate other instances::
19
20 $ juju bootstrap
21
22Note that while the command should display a message indicating it has finished
23successfully, that does not mean the bootstrapping instance is immediately
24ready for usage. Bootstrapping an instance can require a couple of minutes. To
25check on the status of the juju deployment, we can use the status command::
26
27 $ juju status
28
29If the bootstrapping node has not yet completed bootstrapping, the status
30command may either mention the environment is not yet ready, or may display a
31connection timeout such as::
32
33 INFO Connecting to environment.
34 ERROR Connection refused
35 ProviderError: Interaction with machine provider failed:
36 ConnectionTimeoutException('could not connect before timeout after 2
37 retries',)
38 ERROR ProviderError: Interaction with machine
39 provider failed: ConnectionTimeoutException('could not connect before timeout
40 after 2 retries',)
41
42This is simply an indication the environment needs more time to complete
43initialization. It is recommended you retry every minute. Once the environment
44has properly initialized, the status command should display::
45
46 machines:
47 0: {dns-name: ec2-50-16-61-111.compute-1.amazonaws.com, instance-id: i-2a702745}
48 services: {}
49
50Note the following: machine "0" has been started. This is the bootstrapping
51node and the first node to be started. The dns-name and the EC2 instance-id
52for the node are printed. Since no services have been deployed to the juju
53system yet, the list of deployed services is empty.
54
55Starting debug-log
56------------------
57
58While not a requirement, it is beneficial for the understanding of juju to
59start a debug-log session. juju's debug-log provides great insight into the
60execution of various hooks as they are triggered by various events. It is
61important to understand that debug-log shows events from a distributed
62environment (multiple-instances). This means that log lines will alternate
63between output from different instances. To start a debug-log session, from a
64secondary terminal issue::
65
66 $ juju debug-log
67 INFO Connecting to environment.
68 INFO Enabling distributed debug log.
69 INFO Tailing logs - Ctrl-C to stop.
70
71This will connect to the environment, and start tailing logs.
72
73Deploying service units
74-----------------------
75
76Now that we have bootstrapped the juju environment, and started the
77debug-log viewer, let's proceed by deploying a mysql service::
78
79 $ juju deploy --repository=/usr/share/doc/juju/examples local:oneiric/mysql
80 INFO Connecting to environment.
81 INFO Charm deployed as service: 'mysql'
82 INFO 'deploy' command finished successfully
83
84Checking the debug-log window, we can see the mysql service unit being
85downloaded and started::
86
87 Machine:1: juju.agents.machine DEBUG: Downloading charm
88 local:oneiric/mysql-11...
89 Machine:1: juju.agents.machine INFO: Started service unit mysql/0
90
91It is important to note the different logging levels. DEBUG is used for very
92detailed logging messages; usually you should not care about reading such
93messages unless you are trying to debug (hence the name) a specific problem.
94The INFO level is used for slightly more important informational
95messages. In this case, these messages are generated as the mysql charm's
96hooks are being executed. Let's check the current status::
97
98 $ juju status
99 machines:
100 0: {dns-name: ec2-50-16-61-111.compute-1.amazonaws.com, instance-id: i-2a702745}
101 1: {dns-name: ec2-50-16-117-185.compute-1.amazonaws.com, instance-id: i-227e294d}
102 services:
103 mysql:
104 charm: local:oneiric/mysql-11
105 relations: {}
106 units:
107 mysql/0:
108 machine: 1
109 relations: {}
110 state: null
111
112We can see a new EC2 instance has now been spun up for mysql. Information for
113this instance is displayed as machine number 1 and mysql is now listed under
114services. It is apparent the mysql service unit has no relations, since it has
115not been connected to wordpress yet. Since this is the first mysql service
116unit, it is referred to as mysql/0; subsequent service units would be
117named mysql/1 and so on.
118
119.. note::
120 An important distinction to make is the difference between a service
121 and a service unit. A service is a high level concept relating to an
122 end-user visible service such as mysql. The mysql service would be
123 composed of several mysql service units referred to as mysql/0, mysql/1
124 and so on.
125
126The mysql service state is listed as null since it's not ready yet.
127Downloading, installing, configuring and starting mysql can take some time.
128However, we don't have to wait for it to finish configuring; let's proceed to
129deploying wordpress::
130
131 $ juju deploy --repository=/usr/share/doc/juju/examples local:oneiric/wordpress
132
133Let's wait a minute for all services to complete their configuration cycle and
134start properly, then issue a status command::
135
136 $ juju status
137 machines:
138 0: {dns-name: ec2-50-16-61-111.compute-1.amazonaws.com, instance-id: i-2a702745}
139 1: {dns-name: ec2-50-16-117-185.compute-1.amazonaws.com, instance-id: i-227e294d}
140 2: {dns-name: ec2-184-72-156-54.compute-1.amazonaws.com, instance-id: i-9c7e29f3}
141 services:
142 mysql:
143 charm: local:oneiric/mysql-11
144 relations: {}
145 units:
146 mysql/0:
147 machine: 1
148 relations: {}
149 state: started
150 wordpress:
151 charm: local:oneiric/wordpress-29
152 relations: {}
153 units:
154 wordpress/0:
155 machine: 2
156 relations: {}
157 state: started
158
159mysql/0 as well as wordpress/0 are both now in the started state. Checking the
160debug-log would reveal wordpress has been started as well.
161
162Adding a relation
163-----------------
164
165While mysql and wordpress service units have been started, they are still
166isolated from each other. An important concept for juju is connecting
167various service units together to create a bigger juju! Adding a relation
168between service units causes hooks to trigger, in effect causing all service
169units to collaborate and work together to reach the desired end state. Adding a
170relation is extremely simple::
171
172 $ juju add-relation wordpress mysql
173 INFO Connecting to environment.
174 INFO Added mysql relation to all service units.
175 INFO 'add_relation' command finished successfully
176
177Checking the juju status we see that the db relation now exists with state
178up::
179
180 $ juju status
181 machines:
182 0: {dns-name: ec2-50-16-61-111.compute-1.amazonaws.com, instance-id: i-2a702745}
183 1: {dns-name: ec2-50-16-117-185.compute-1.amazonaws.com, instance-id: i-227e294d}
184 2: {dns-name: ec2-184-72-156-54.compute-1.amazonaws.com, instance-id: i-9c7e29f3}
185 services:
186 mysql:
187 charm: local:oneiric/mysql-11
188 relations: {db: wordpress}
189 units:
190 mysql/0:
191 machine: 1
192 relations:
193 db: {state: up}
194 state: started
195 wordpress:
196 charm: local:oneiric/wordpress-29
197 relations: {db: mysql}
198 units:
199 wordpress/0:
200 machine: 2
201 relations:
202 db: {state: up}
203 state: started
204
205Exposing the service to the world
206---------------------------------
207
208All that remains is to expose the service to the outside world::
209
210 $ juju expose wordpress
211
212You can now point your browser at the public dns-name for instance 2 (running
213wordpress) to view the wordpress blog.
214
215Tracing hook execution
216----------------------
217
218A juju user should never have to trace the execution order of hooks;
219however, if you are the kind of person who enjoys looking under the hood, this
220section is for you. Understanding hook execution order, the parallel nature of
221hook execution across instances, and how relation-set in a hook can trigger the
222execution of another hook is quite interesting and provides insight into
223juju internals.
224
225Here are a few important messages from the debug-log of this juju run. The
226timestamp field has been deliberately left in this log, in order to show the
227parallel nature of hook execution.
228
229Things to consider while reading the log include:
230 * The time the log message was generated
231 * Which service unit is causing the log message (for example mysql/0)
232 * The message logging level. In this run DEBUG messages are generated by the
233 juju core engine, while WARNING messages are generated by calling
234 juju-log from inside charms (which you can read in the examples
235 folder)
236
237Let's view select debug-log messages which can help understand the execution
238order::
239
240 14:29:43,625 unit:mysql/0: hook.scheduler DEBUG: executing hook for wordpress/0:joined
241 14:29:43,626 unit:mysql/0: unit.relation.lifecycle DEBUG: Executing hook db-relation-joined
242 14:29:43,660 unit:wordpress/0: hook.scheduler DEBUG: executing hook for mysql/0:joined
243 14:29:43,660 unit:wordpress/0: unit.relation.lifecycle DEBUG: Executing hook db-relation-joined
244 14:29:43,661 unit:wordpress/0: unit.relation.lifecycle DEBUG: Executing hook db-relation-changed
245 14:29:43,789 unit:mysql/0: unit.hook.api WARNING: Creating new database and corresponding security settings
246 14:29:43,813 unit:wordpress/0: unit.hook.api WARNING: Retrieved hostname: ec2-184-72-156-54.compute-1.amazonaws.com
247 14:29:43,976 unit:mysql/0: unit.relation.lifecycle DEBUG: Executing hook db-relation-changed
248 14:29:43,997 unit:wordpress/0: hook.scheduler DEBUG: executing hook for mysql/0:modified
249 14:29:43,997 unit:wordpress/0: unit.relation.lifecycle DEBUG: Executing hook db-relation-changed
250 14:29:44,143 unit:wordpress/0: unit.hook.api WARNING: Retrieved hostname: ec2-184-72-156-54.compute-1.amazonaws.com
251 14:29:44,849 unit:wordpress/0: unit.hook.api WARNING: Creating appropriate upload paths and directories
252 14:29:44,992 unit:wordpress/0: unit.hook.api WARNING: Writing wordpress config file /etc/wordpress/config-ec2-184-72-156-54.compute-1.amazonaws.com.php
253 14:29:45,130 unit:wordpress/0: unit.hook.api WARNING: Writing apache config file /etc/apache2/sites-available/ec2-184-72-156-54.compute-1.amazonaws.com
254 14:29:45,301 unit:wordpress/0: unit.hook.api WARNING: Enabling apache modules: rewrite, vhost_alias
255 14:29:45,512 unit:wordpress/0: unit.hook.api WARNING: Enabling apache site: ec2-184-72-156-54.compute-1.amazonaws.com
256 14:29:45,688 unit:wordpress/0: unit.hook.api WARNING: Restarting apache2 service
257
258
259Scaling the juju
260--------------------
261
262Assuming your blog got really popular and is under high load, you decide to
263scale it up (it's a cloud deployment after all). juju makes this magically
264easy. All that is needed is::
265
266 $ juju add-unit wordpress
267 INFO Connecting to environment.
268 INFO Unit 'wordpress/1' added to service 'wordpress'
269 INFO 'add_unit' command finished successfully
270 $ juju status
271 machines:
272 0: {dns-name: ec2-50-16-61-111.compute-1.amazonaws.com, instance-id: i-2a702745}
273 1: {dns-name: ec2-50-16-117-185.compute-1.amazonaws.com, instance-id: i-227e294d}
274 2: {dns-name: ec2-184-72-156-54.compute-1.amazonaws.com, instance-id: i-9c7e29f3}
275 3: {dns-name: ec2-50-16-156-106.compute-1.amazonaws.com, instance-id: i-ba6532d5}
276 services:
277 mysql:
278 charm: local:oneiric/mysql-11
279 relations: {db: wordpress}
280 units:
281 mysql/0:
282 machine: 1
283 relations:
284 db: {state: up}
285 state: started
286 wordpress:
287 charm: local:oneiric/wordpress-29
288 relations: {db: mysql}
289 units:
290 wordpress/0:
291 machine: 2
292 relations:
293 db: {state: up}
294 state: started
295 wordpress/1:
296 machine: 3
297 relations:
298 db: {state: up}
299 state: started
300
301
302The add-unit command starts a new wordpress instance (wordpress/1), which then
303joins the relation with the already existing mysql/0 instance. mysql/0 notices
304the required database has already been created and thus decides all needed
305configuration has already been done. On the other hand, wordpress/1 reads the
306service settings from mysql/0 and starts configuring itself and joining the
307environment. Let's review a short version of the debug-log for adding wordpress/1::
308
309 14:36:19,755 unit:mysql/0: hook.scheduler DEBUG: executing hook for wordpress/1:joined
310 14:36:19,755 unit:mysql/0: unit.relation.lifecycle DEBUG: Executing hook db-relation-joined
311 14:36:19,810 unit:wordpress/1: hook.scheduler DEBUG: executing hook for mysql/0:joined
312 14:36:19,811 unit:wordpress/1: unit.relation.lifecycle DEBUG: Executing hook db-relation-joined
313 14:36:19,811 unit:wordpress/1: unit.relation.lifecycle DEBUG: Executing hook db-relation-changed
314 14:36:19,918 unit:mysql/0: unit.hook.api WARNING: Database already exists, exiting
315 14:36:19,938 unit:mysql/0: unit.relation.lifecycle DEBUG: Executing hook db-relation-changed
316 14:36:19,990 unit:wordpress/1: unit.hook.api WARNING: Retrieved hostname: ec2-50-16-156-106.compute-1.amazonaws.com
317 14:36:20,757 unit:wordpress/1: unit.hook.api WARNING: Creating appropriate upload paths and directories
318 14:36:20,916 unit:wordpress/1: unit.hook.api WARNING: Writing wordpress config file /etc/wordpress/config-ec2-50-16-156-106.compute-1.amazonaws.com.php
319 14:36:21,088 unit:wordpress/1: unit.hook.api WARNING: Writing apache config file /etc/apache2/sites-available/ec2-50-16-156-106.compute-1.amazonaws.com
320 14:36:21,236 unit:wordpress/1: unit.hook.api WARNING: Enabling apache modules: rewrite, vhost_alias
321 14:36:21,476 unit:wordpress/1: unit.hook.api WARNING: Enabling apache site: ec2-50-16-156-106.compute-1.amazonaws.com
322 14:36:21,682 unit:wordpress/1: unit.hook.api WARNING: Restarting apache2 service
323
324Destroying the environment
325--------------------------
326
327Once you are done with a juju deployment, you need to terminate
328all running instances in order to stop paying for them. The
329destroy-environment command will terminate all running instances in an
330environment::
331
332 $ juju destroy-environment
333
334juju will ask for user confirmation before proceeding, as this
335command will destroy service data in the environment as well.
0336
=== added file 'source/write-charm.rst'
--- source/write-charm.rst 1970-01-01 00:00:00 +0000
+++ source/write-charm.rst 2012-01-18 20:50:30 +0000
@@ -0,0 +1,409 @@
1.. _write-charm:
2
3Writing a charm
4===============
5
6This tutorial demonstrates the basic workflow for writing, running and
7debugging a juju charm. Charms are a way to package your service deployment
8and orchestration knowledge and share it with the world.
9
10Creating the charm
11--------------------
12
13In this example we are going to write a charm to deploy the drupal CMS
14system. For the sake of simplicity, we are going to use the mysql charm that
15comes bundled with juju in the examples directory. Assuming the current
16directory is the juju trunk, let's create the directory hierarchy::
17
18 $ cd examples/oneiric
19 mkdir -p drupal/hooks
20 vim drupal/metadata.yaml
21 vim drupal/revision
22
23Note: if you don't have the juju source tree available, the `examples` repository
24is installed into `/usr/share/doc/juju`; you can copy the repository to your
25current directory, and work from there.
26
27Edit the metadata.yaml file to resemble::
28
29 name: drupal
30 summary: "Drupal CMS"
31 description: |
32 Installs the drupal CMS system, relates to the mysql charm provided in
33 examples directory. Can be scaled to multiple web servers
34 requires:
35 db:
36 interface: mysql
37
38The metadata.yaml file provides metadata about the charm. The file declares
39a charm with the name drupal, along with a short and a long description.
40Since this is the first version of the charm, its revision number is one; the
41revision is kept in the separate `revision` file described below. The final
42field is `requires`, which names the interface type required by this charm.
43Since this drupal charm uses the services of a mysql database, we need to
44require it in the metadata. Since this charm does not provide a service to any
45other charm, there is no `provides` field. You might be wondering where the
46interface name "mysql" came from; you can find the interface information in
47the mysql charm's metadata.yaml. Here it is for convenience::
48
49 name: mysql
50 summary: "MySQL relational database provider"
51 description: |
52 Installs and configures the MySQL package (mysqldb), then runs it.
53
54 Upon a consuming service establishing a relation, creates a new
55 database for that service, if the database does not yet
56 exist. Publishes the following relation settings for consuming
57 services:
58
59 database: database name
60 user: user name to access database
61 password: password to access the database
62 host: local hostname
63 provides:
64 db:
65 interface: mysql
66
67That very last line mentions that the interface mysql provides to us is
68"mysql". The description also mentions that four parameters are sent to the
69connecting charm (database, user, password, host) in order to enable it to
70connect to the database. We will make use of those variables once we start
71writing hooks. Such interface information is either provided in a bundled
72README file or in the description. Of course, you can also read the charm
73code to discover such information.
74
75The `revision` file contains an integer representing the version of the charm. The revision must always be incremented (monotonically increasing) whenever the charm changes, to allow for charm upgrades::
76
77    $ vim revision
78    1
79
80Have a plan
81-----------
82
83When attempting to write a charm, it is beneficial to have a mental plan of
84what it takes to deploy the software. In our case, you should deploy drupal
85manually, understand where its configuration information is written, how the
86first node is deployed, and how further nodes are configured. With respect to
87this charm, this is the plan
88
89 * Install hook installs all needed components (apache, php, drush)
90 * Once the database connection information is ready, call drush on first node
91 to perform the initial setup (creates DB tables, completes setup)
92 * For scaling onto other nodes, the DB tables have already been set-up. Thus
93 we only need to append the database connection information into drupal's
94 settings.php file. We will use a template file for that
95
96.. note::
97 The hooks in a charm are executable files that can be written using any
98 scripting or programming language. In our case, we'll use bash
99
100For production charms it is always recommended that you install software
101components from the Ubuntu archive (using apt-get) in order to get security
102updates. However, in this example I am installing drush (the Drupal shell) using
103apt-get, then using that to download and install the latest version of drupal.
104If you were deploying your own code, you could just as easily install a
105revision control tool (bzr, git, hg, etc.) and use that to check out a code
106branch to deploy from. This demonstrates the flexibility offered by juju,
107which doesn't really force you into one way of doing things.
108
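As a sketch of that alternative (the branch URL below is only a placeholder), the install hook could fetch your own code from a bzr branch instead of using drush::

    #!/bin/bash
    set -eux  # -x for verbose logging to juju debug-log
    juju-log "Installing apache2, php and bzr via apt-get"
    apt-get -y install apache2 php5-gd libapache2-mod-php5 bzr
    a2enmod php5
    /etc/init.d/apache2 restart
    juju-log "Checking out application code from a branch"
    # lp:example-project is a placeholder; substitute your own branch here.
    bzr branch lp:example-project /var/www/juju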
109Write hooks
110-----------
111
112Let's change into the hooks directory::
113
114 $ cd drupal/hooks
115 vim install
116
117Since you should have already installed drupal manually, you have an idea of
118what it takes to get it installed. My install script looks like::
119
120 #!/bin/bash
121
122 set -eux # -x for verbose logging to juju debug-log
123 juju-log "Installing drush,apache2,php via apt-get"
124 apt-get -y install drush apache2 php5-gd libapache2-mod-php5 php5-cgi mysql-client-core-5.1
125 a2enmod php5
126 /etc/init.d/apache2 restart
127 juju-log "Using drush to download latest Drupal"
128 # Typo on next line, it should be www not ww
129 cd /var/ww && drush dl drupal --drupal-project-rename=juju
130
131I have introduced an artificial typo on the last line ("ww" instead of "www");
132this is to simulate the kind of error which you are bound to face sooner or
133later. Let's create the other hooks::
134
135 $ vim start
136
137The start hook is empty; however, it needs to be a valid executable, so we'll
138add just the bash shebang line. Here it is::
139
140 #!/bin/bash
141
142Here's the "stop" script::
143
144 #!/bin/bash
145 juju-log "Stopping apache"
146 /etc/init.d/apache2 stop
147
148The final script, which does most of the work, is "db-relation-changed". This
149script gets the database connection information set by the mysql charm, then
150sets up drupal for the first time, and opens port 80 for web access. Let's
151start with a simple version that only installs drupal on the first node. Here
152it is::
153
154 #!/bin/bash
155 set -eux # -x for verbose logging to juju debug-log
156 hooksdir=$PWD
157 user=`relation-get user`
158 password=`relation-get password`
159 host=`relation-get host`
160 database=`relation-get database`
161 # All values are set together, so checking on a single value is enough
162 # If $user is not set, DB is still setting itself up, we exit awaiting next run
163 [ -z "$user" ] && exit 0
164 juju-log "Setting up Drupal for the first time"
165 cd /var/www/juju && drush site-install -y standard \
166 --db-url=mysql://$user:$password@$host/$database \
167 --site-name=juju --clean-url=0
168 cd /var/www/juju && chown www-data sites/default/settings.php
169 open-port 80/tcp
170
171The script is quite simple: it reads the four variables needed to connect to
172mysql, ensures they are not null, then passes them to the drupal installer.
173Make sure all the hook scripts have executable permissions, and change
174directory to above the examples directory::
175
176 $ chmod +x *
177 $ cd ../../../..
178
179Checking on the drupal charm file-structure, this is what we have::
180
181 $ find examples/oneiric/drupal
182 examples/oneiric/drupal
183 examples/oneiric/drupal/metadata.yaml
184 examples/oneiric/drupal/revision
185 examples/oneiric/drupal/hooks
186 examples/oneiric/drupal/hooks/db-relation-changed
187 examples/oneiric/drupal/hooks/stop
188 examples/oneiric/drupal/hooks/install
189 examples/oneiric/drupal/hooks/start
190
191Test run
192--------
193
194Let us deploy the drupal charm. Remember that the install hook has a problem
195and will not exit cleanly. Deploying::
196
197 $ juju bootstrap
198
199Wait a minute for the environment to bootstrap. Keep issuing the status command
200till you know the environment is ready::
201
202 $ juju status
203 2011-06-07 14:04:06,816 INFO Connecting to environment.
204 machines: 0: {dns-name: ec2-50-19-154-237.compute-1.amazonaws.com, instance-id: i-6fb52301}
205 services: {}
206 2011-06-07 14:04:11,125 INFO 'status' command finished successfully
207
208It can be beneficial when debugging a new charm to always have the
209distributed debug-log running in a separate window::
210
211 $ juju debug-log
212
213Let's deploy the mysql and drupal charms::
214
215 $ juju deploy --repository=examples local:oneiric/mysql
216 $ juju deploy --repository=examples local:oneiric/drupal
217
218Once the machines are started (hint: check the debug-log), issue a status
219command::
220
221 $ juju status
222 machines:
223 0: {dns-name: ec2-50-19-154-237.compute-1.amazonaws.com, instance-id: i-6fb52301}
224 1: {dns-name: ec2-50-16-9-102.compute-1.amazonaws.com, instance-id: i-19b12777}
225 2: {dns-name: ec2-50-17-147-79.compute-1.amazonaws.com, instance-id: i-e7ba2c89}
226 services:
227 drupal:
228 charm: local:oneiric/drupal-1
229 relations: {}
230 units:
231 drupal/1:
232 machine: 4
233 open-ports: []
234 relations: {}
235 state: install_error
236 mysql:
237 charm: local:oneiric/mysql-12
238 relations: {}
239 units:
240 mysql/0:
241 machine: 1
242 relations: {}
243 state: started
244
245Note how mysql is listed as started, while drupal's state is install_error. This is
246because the install hook has an error, and did not exit cleanly (exit code 1).
247
248Debugging hooks
249---------------
250
251Let's debug the install hook, from a new window::
252
253 $ juju debug-hooks drupal/0
254
255This will connect you to the drupal machine and present a shell. The way the
256debug-hooks functionality works is by starting a new terminal window, instead
257of executing a hook, when that hook is triggered. This way you get a chance to
258run the hook manually, fix any errors and re-run it. In order to trigger
259re-running the install hook, from another window::
260
261 $ juju resolved --retry drupal/0
262
263Switching to the debug-hooks window, you will notice that a new window named
264"install" has popped up. Note that "install" is the name of the hook that this
265debug-hooks session is replacing. We change into the hooks directory and rerun
266the hook manually::
267
268 $ cd /var/lib/juju/units/drupal-0/charm/hooks/
269 $ ./install
270 # -- snip --
271 + cd /var/ww
272 ./install: line 10: cd: /var/ww: No such file or directory
273
274Problem identified. Let's edit the script, changing "ww" into "www". Rerunning
275it should now work successfully. This is why it is very good practice to write
276hook scripts in an idempotent manner, such that rerunning them over and over
277always results in the same state. Do not forget to exit the install window by
278typing "exit"; this signals that the hook has finished executing successfully.
279If you have finished debugging, you may want to exit the debug-hooks session
280completely by typing "exit" into the very first window (Window0).
281
282.. note::
283 While we have fixed the script, this was done on the remote machine only. You
284 need to update the local copy of the charm with your changes, increment the
285 revision number in the `revision` file and perform a charm upgrade to push the
286 changes, like::
287
288 $ juju upgrade-charm --repository=examples/ drupal
289
290Let's continue after having fixed the install error::
291
292 $ juju add-relation mysql drupal
293
294Watching the debug-log window, you can see debugging information to verify the
295hooks are working as they should. If you spot any error, you can launch
296debug-hooks in another window to start debugging the misbehaving hooks again.
297Note that since "add-relation" relates two charms together, you cannot really
298retrigger it by simply issuing "resolved --retry" like we did for the install
299hook. In order to retrigger the db-relation-changed hook, you need to remove
300the relation, and create it again like so::
301
302 $ juju remove-relation mysql drupal
303 $ juju add-relation mysql drupal
304
305The service should now be ready for use. The remaining step is to expose it to
306public access. While the charm signaled that it needs port 80 to be open for
307public accessibility, the port is not opened until the administrator explicitly
308uses the expose command::
309
310 $ juju expose drupal
311
312Let's see a status with the ports exposed::
313
314 $ juju status
315 machines:
316 0: {dns-name: ec2-50-19-154-237.compute-1.amazonaws.com, instance-id: i-6fb52301}
317 1: {dns-name: ec2-50-16-9-102.compute-1.amazonaws.com, instance-id: i-19b12777}
318 2: {dns-name: ec2-50-17-147-79.compute-1.amazonaws.com, instance-id: i-e7ba2c89}
319 services:
320 drupal:
321 exposed: true
322 charm: local:oneiric/drupal-1
323 relations: {db: mysql}
324 units:
325 drupal/1:
326 machine: 4
327 open-ports: [80/tcp]
328 relations:
329 db: {state: up}
330 state: started
331 mysql:
332 charm: local:oneiric/mysql-12
333 relations: {db: drupal}
334 units:
335 mysql/0:
336 machine: 1
337 relations:
338 db: {state: up}
339 state: started
340
341
342Congratulations, your charm should now be working successfully! The
343db-relation-changed hook previously shown is not suitable for scaling drupal to
344more than one node, since it always drops the database and creates a new one.
345A more complete hook would need to first check whether or not the DB tables
346exist and act accordingly. Here is how such a hook might be written::
347
348 #!/bin/bash
349 set -eux # -x for verbose logging to juju debug-log
350 hooksdir=$PWD
351 user=`relation-get user`
352 password=`relation-get password`
353 host=`relation-get host`
354 database=`relation-get database`
355 # All values are set together, so checking on a single value is enough
356 # If $user is not set, DB is still setting itself up, we exit awaiting next run
357 [ -z "$user" ] && exit 0
358
359 if mysql -u $user --password=$password -h $host -e "use $database; show tables;" | grep -q users; then
360 juju-log "Drupal already set-up. Adding DB info to configuration"
361 cd /var/www/juju/sites/default
362 cp default.settings.php settings.php
363 sed -e "s/USER/$user/" \
364 -e "s/PASSWORD/$password/" \
365 -e "s/HOST/$host/" \
366 -e "s/DATABASE/$database/" \
367 $hooksdir/drupal-settings.template >> settings.php
368 else
369 juju-log "Setting up Drupal for the first time"
370 cd /var/www/juju && drush site-install -y standard \
371 --db-url=mysql://$user:$password@$host/$database \
372 --site-name=juju --clean-url=0
373 fi
374 cd /var/www/juju && chown www-data sites/default/settings.php
375 open-port 80/tcp
376
377.. note::
378 Any files that you store in the hooks directory are transported as-is to the
379 deployment machine. You can drop in configuration files or templates that you
380 can use from your hook scripts. An example of this technique is the
381 drupal-settings.template file used in the previous hook. The template is
382 rendered using sed; however, any other more advanced template engine can be
383 used.
384
385Here is the template file used::
386
387 $databases = array (
388 'default' =>
389 array (
390 'default' =>
391 array (
392 'database' => 'DATABASE',
393 'username' => 'USER',
394 'password' => 'PASSWORD',
395 'host' => 'HOST',
396 'port' => '',
397 'driver' => 'mysql',
398 'prefix' => '',
399 ),
400 ),
401 );
402
403Learn more
404----------
405
406Read more detailed information about :doc:`charm` and hooks. For more hook
407examples, please check the examples directory in the juju source tree, or
408check out the various charms already included in `Principia
409<https://launchpad.net/principia>`_.
