Status: Rejected
Rejected by: Kapil Thangavelu
Proposed branch: lp:~koolhead17/pyjuju/jujudoc
Merge into: lp:pyjuju
Diff against target: 3917 lines (+3762/-0) (has conflicts), 30 files modified

  Makefile (+132/-0)
  source/_templates/project-links.html (+9/-0)
  source/about.rst (+38/-0)
  source/charm-upgrades.rst (+117/-0)
  source/charm.rst (+379/-0)
  source/conf.py (+225/-0)
  source/drafts/charm-namespaces.rst (+72/-0)
  source/drafts/developer-install.rst (+49/-0)
  source/drafts/expose-services.rst (+20/-0)
  source/drafts/resolved.rst (+60/-0)
  source/drafts/service-config.rst (+162/-0)
  source/expose-services.rst (+43/-0)
  source/faq.rst (+91/-0)
  source/generate_modules.py (+107/-0)
  source/getting-started.rst (+80/-0)
  source/glossary.rst (+121/-0)
  source/hook-debugging.rst (+108/-0)
  source/index.rst (+35/-0)
  source/internals/agent-presence.rst (+154/-0)
  source/internals/expose-services.rst (+143/-0)
  source/internals/unit-agent-hooks.rst (+307/-0)
  source/internals/unit-agent-startup.rst (+156/-0)
  source/internals/zookeeper.rst (+215/-0)
  source/juju-drafts.rst (+10/-0)
  source/juju-internals.rst (+11/-0)
  source/provider-configuration-ec2.rst (+64/-0)
  source/provider-configuration-local.rst (+53/-0)
  source/upgrades.rst (+57/-0)
  source/user-tutorial.rst (+335/-0)
  source/write-charm.rst (+409/-0)

  Conflict adding file Makefile. Moved existing file to Makefile.moved.
To merge this branch: bzr merge lp:~koolhead17/pyjuju/jujudoc
Related bugs:
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Kapil Thangavelu (community) | | | Needs Fixing |
Jorge Castro | | | Pending |
Commit message
Description of the change
I have modified two files so far:

1. write-charm.rst
   We need a separate revision file for charms; this change adds one and explains the revision file.

2. provider-configuration-local.rst
   Added the missing `juju-origin` option.
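
A sketch of how this option might look in an environments.yaml entry (the environment name and other values here are hypothetical, not taken from this branch); `juju-origin` controls where the juju software itself is fetched from on provisioned machines:

    environments:
      sample:
        type: ec2
        juju-origin: ppa    # alternatives: distro, or a branch such as lp:juju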
Revision history for this message

Kapil Thangavelu (hazmat) wrote:
Also, please note that the docs are now in a separate docs branch with a much wider reviewer base and committer audience, and are part of the charmers review queue (http://
Unmerged revisions
- 2. By Atul Jha <email address hidden>

  Added a separate revision file for write-charm.rst and added the missing config options for the provider-configuration-local.rst file.

- 1. By Kapil Thangavelu

  Move docs over.
Preview Diff
1 | === added file 'Makefile' |
2 | --- Makefile 1970-01-01 00:00:00 +0000 |
3 | +++ Makefile 2012-01-18 20:50:30 +0000 |
4 | @@ -0,0 +1,132 @@ |
5 | +# Makefile for Sphinx documentation |
6 | +# |
7 | + |
8 | +# You can set these variables from the command line. |
9 | +SPHINXOPTS = |
10 | +SPHINXBUILD = python source/generate_modules.py ../juju source/generated && sphinx-build |
11 | +PAPER = |
12 | +BUILDDIR = build |
13 | + |
14 | +# Internal variables. |
15 | +PAPEROPT_a4 = -D latex_paper_size=a4 |
16 | +PAPEROPT_letter = -D latex_paper_size=letter |
17 | +ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source |
18 | + |
19 | +.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest |
20 | + |
21 | +help: |
22 | + @echo "Please use \`make <target>' where <target> is one of" |
23 | + @echo " html to make standalone HTML files" |
24 | + @echo " dirhtml to make HTML files named index.html in directories" |
25 | + @echo " singlehtml to make a single large HTML file" |
26 | + @echo " pickle to make pickle files" |
27 | + @echo " json to make JSON files" |
28 | + @echo " htmlhelp to make HTML files and a HTML help project" |
29 | + @echo " qthelp to make HTML files and a qthelp project" |
30 | + @echo " devhelp to make HTML files and a Devhelp project" |
31 | + @echo " epub to make an epub" |
32 | + @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" |
33 | + @echo " latexpdf to make LaTeX files and run them through pdflatex" |
34 | + @echo " text to make text files" |
35 | + @echo " man to make manual pages" |
36 | + @echo " changes to make an overview of all changed/added/deprecated items" |
37 | + @echo " linkcheck to check all external links for integrity" |
38 | + @echo " doctest to run all doctests embedded in the documentation (if enabled)" |
39 | + @echo " clean to clean (remove) everything under the build directory" |
40 | + |
41 | +clean: |
42 | + -rm -rf $(BUILDDIR)/* |
43 | + -rm -rf source/generated |
44 | + |
45 | +html: |
46 | + $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html |
47 | + @echo |
48 | + @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." |
49 | + |
50 | +dirhtml: |
51 | + $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml |
52 | + @echo |
53 | + @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." |
54 | + |
55 | +singlehtml: |
56 | + $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml |
57 | + @echo |
58 | + @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." |
59 | + |
60 | +pickle: |
61 | + $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle |
62 | + @echo |
63 | + @echo "Build finished; now you can process the pickle files." |
64 | + |
65 | +json: |
66 | + $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json |
67 | + @echo |
68 | + @echo "Build finished; now you can process the JSON files." |
69 | + |
70 | +htmlhelp: |
71 | + $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp |
72 | + @echo |
73 | + @echo "Build finished; now you can run HTML Help Workshop with the" \ |
74 | + ".hhp project file in $(BUILDDIR)/htmlhelp." |
75 | + |
76 | +qthelp: |
77 | + $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp |
78 | + @echo |
79 | + @echo "Build finished; now you can run "qcollectiongenerator" with the" \ |
80 | + ".qhcp project file in $(BUILDDIR)/qthelp, like this:" |
81 | + @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/juju.qhcp" |
82 | + @echo "To view the help file:" |
83 | + @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/juju.qhc" |
84 | + |
85 | +devhelp: |
86 | + $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp |
87 | + @echo |
88 | + @echo "Build finished." |
89 | + @echo "To view the help file:" |
90 | + @echo "# mkdir -p $$HOME/.local/share/devhelp/juju" |
91 | + @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/juju" |
92 | + @echo "# devhelp" |
93 | + |
94 | +epub: |
95 | + $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub |
96 | + @echo |
97 | + @echo "Build finished. The epub file is in $(BUILDDIR)/epub." |
98 | + |
99 | +latex: |
100 | + $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex |
101 | + @echo |
102 | + @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." |
103 | + @echo "Run \`make' in that directory to run these through (pdf)latex" \ |
104 | + "(use \`make latexpdf' here to do that automatically)." |
105 | + |
106 | +latexpdf: |
107 | + $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex |
108 | + @echo "Running LaTeX files through pdflatex..." |
109 | + make -C $(BUILDDIR)/latex all-pdf |
110 | + @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." |
111 | + |
112 | +text: |
113 | + $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text |
114 | + @echo |
115 | + @echo "Build finished. The text files are in $(BUILDDIR)/text." |
116 | + |
117 | +man: |
118 | + $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man |
119 | + @echo |
120 | + @echo "Build finished. The manual pages are in $(BUILDDIR)/man." |
121 | + |
122 | +changes: |
123 | + $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes |
124 | + @echo |
125 | + @echo "The overview file is in $(BUILDDIR)/changes." |
126 | + |
127 | +linkcheck: |
128 | + $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck |
129 | + @echo |
130 | + @echo "Link check complete; look for any errors in the above output " \ |
131 | + "or in $(BUILDDIR)/linkcheck/output.txt." |
132 | + |
133 | +doctest: |
134 | + $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest |
135 | + @echo "Testing of doctests in the sources finished, look at the " \ |
136 | + "results in $(BUILDDIR)/doctest/output.txt." |
137 | |
138 | === renamed file 'Makefile' => 'Makefile.moved' |
139 | === added directory 'source' |
140 | === added directory 'source/_static' |
141 | === added directory 'source/_templates' |
142 | === added file 'source/_templates/project-links.html' |
143 | --- source/_templates/project-links.html 1970-01-01 00:00:00 +0000 |
144 | +++ source/_templates/project-links.html 2012-01-18 20:50:30 +0000 |
145 | @@ -0,0 +1,9 @@ |
146 | +<h3>Launchpad</h3> |
147 | +<ul> |
148 | + <li> |
149 | + <a href="https://launchpad.net/~juju">Overview</a> |
150 | + </li> |
151 | + <li> |
152 | + <a href="https://code.launchpad.net/~juju">Code</a> |
153 | + </li> |
154 | +</ul> |
155 | |
156 | === added file 'source/about.rst' |
157 | --- source/about.rst 1970-01-01 00:00:00 +0000 |
158 | +++ source/about.rst 2012-01-18 20:50:30 +0000 |
159 | @@ -0,0 +1,38 @@ |
160 | +About juju |
161 | +========== |
162 | + |
163 | +Since long ago, Linux server deployments have been moving towards the |
164 | +collaboration of multiple physical machines. In some cases, different servers |
165 | +each run a different set of applications, bringing organization, isolation, |
166 | +reserved resources, and other desirable characteristics to the composed |
167 | +assembly. In other situations, servers are set up with very similar |
168 | +configurations, so that the system becomes more scalable by having load |
169 | +distributed among the several instances, and so that the overall system becomes |
170 | +more reliable when the failure of any individual machine does not affect the |
171 | +assembly as a whole. In this reality, server administrators become invaluable |
172 | +maestros who orchestrate the placement and connectivity of services within |
173 | +the assembly of servers. |
174 | + |
175 | +Given that scenario, it's surprising that most of the efforts towards advancing |
176 | +the management of software configuration are still bound to individual machines. |
177 | +Package managers, and software like dbus and gconf are examples of this. Other |
178 | +efforts do look at the problem of managing multiple machines as a unit, but |
179 | +interestingly, they are still a mechanism for scaling up the management of |
180 | +services individually. In other words, they empower the administrator with the |
181 | +ability to tweak the individual configuration of multiple services at once, |
182 | +but they do not collaborate towards offering services themselves and other tools |
183 | +an understanding of the composed assembly. This distinction looks subtle in |
184 | +principle, but it may be a key factor in enabling all the parties (system |
185 | +administrators, software developers, vendors, and integrators) to collaborate |
186 | +in deploying, maintaining, and enriching distributed software configurations. |
187 | + |
188 | +This is the challenge which motivates the research happening through the |
189 | +juju project at Canonical. juju aims to be a service deployment and |
190 | +orchestration tool which enables the same kind of collaboration and ease of |
191 | +use which today is seen around package management to happen on a higher |
192 | +level, around services. With juju, different authors are able to create |
193 | +services independently, and make those services communicate through a simple |
194 | +configuration protocol. Then, users can take the product of both authors |
195 | +and very comfortably deploy those services in an environment, in a way |
196 | +resembling how people are able to install a network of packages with a single |
197 | +command via APT. |
198 | |
199 | === added file 'source/charm-upgrades.rst' |
200 | --- source/charm-upgrades.rst 1970-01-01 00:00:00 +0000 |
201 | +++ source/charm-upgrades.rst 2012-01-18 20:50:30 +0000 |
202 | @@ -0,0 +1,117 @@ |
203 | +Charm Upgrades |
204 | +================ |
205 | + |
206 | + |
207 | +Upgrading a charm |
208 | +------------------- |
209 | + |
210 | +A charm_ can be upgraded via the command line using the following |
211 | +syntax:: |
212 | + |
213 | + $ juju upgrade-charm <service-name> |
214 | + |
215 | +In the case of a local charm, the syntax would be:: |
216 | + |
217 | + $ juju upgrade-charm --repository=principia <service-name> |
218 | + |
219 | +This will examine the named service, determine its charm, and check the |
220 | +charm's originating repository for a newer version of the charm. |
221 | +If a newer charm version is found, it will be uploaded to the juju |
222 | +environment, and downloaded to all the running units of the service. |
223 | +The unit agent will switch over to executing hooks from the new charm, |
224 | +after executing the `upgrade-charm` hook. |
225 | + |
226 | +.. _charm: ../charm.html |
227 | + |
228 | + |
229 | +Charm upgrade support |
230 | +----------------------- |
231 | + |
232 | +A charm author can add charm-specific support for upgrades by |
233 | +providing an additional hook that can customize its upgrade behavior. |
234 | + |
235 | +The hook ``upgrade-charm`` is executed with the new charm version |
236 | +in place on the unit. juju guarantees this hook will be the first |
237 | +executed hook from the new charm. |
238 | + |
239 | +The hook is intended to allow the charm to process any upgrade |
240 | +concerns it may have with regard to upgrading databases, software, etc., |
241 | +before the new version of its hooks is executed. |
242 | + |
243 | +After the ``upgrade-charm`` hook is executed, new hooks of the |
244 | +charm will be utilized to respond to any system changes. |
245 | + |
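| +As an illustration only (not taken from any real charm), a minimal |
| +``upgrade-charm`` hook might run a data migration before normal hook |
| +execution resumes; the migration script named here is a hypothetical |
| +placeholder:: |
| + |
| +   #!/bin/bash |
| +   # hooks/upgrade-charm: first hook run after the new charm is in place |
| +   set -e |
| +   # Apply any schema migrations shipped with the new charm version. |
| +   ./scripts/migrate-db.sh |
| + |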
246 | +Futures |
247 | +------- |
248 | + |
249 | +The ``upgrade-charm`` hook will likely need access to a new cli-api |
250 | +to access all relations of the unit, in addition to the standard hook |
251 | +api commands like ``relation-list``, ``relation-get``, |
252 | +``relation-set``, to perform per unit relation upgrades. |
253 | + |
254 | +The new hook-cli api name is open, but possible suggestions are |
255 | +``unit-relations`` or ``query-relations`` and would list |
256 | +all the relations a unit is a member of. |
257 | + |
258 | +Most `server` services have multiple instances of a named relation; |
259 | +otherwise, iterating over the relation names defined by the charm would suffice. |
260 | +It's an open question how these effectively anonymous instances |
261 | +of a named relation would be addressed. |
262 | + |
263 | +The existing relation-* cli would also need to be extended to take |
264 | +a relation parameter, or its usage of environment variables |
265 | +when iterating over relations during upgrades would need to be documented. |
266 | + |
267 | +Internals |
268 | +--------- |
269 | + |
270 | +The upgrade cli updates the service with its new charm, and sets |
271 | +an upgrade flag on each of its units. The unit agent then processes |
272 | +the upgrade using the workflow machinery to execute hooks and |
273 | +track upgrades across service units. |
274 | + |
275 | +A unit whose upgrade-charm hook fails will be left running |
276 | +but won't process any additional hooks. The hooks will continue |
277 | +to be queued for execution. |
278 | + |
279 | +The upgrade cli command is responsible for |
280 | + |
281 | + - Finding the named service. |
282 | + |
283 | + - Determining its charm. |
284 | + |
285 | + - Determining if a newer version of the charm exists in the |
286 | + origin repository. |
287 | + |
288 | + - Uploading the new version of the charm to the environment's machine |
289 | + provider storage. |
290 | + |
291 | + - Updating the service state with a reference to the new charm. |
292 | + |
293 | + - Marking the associated unit states as needing an upgrade. |
294 | + |
295 | +When determining newer versions, the cli assumes that a charm with the |
296 | +same name and a version number greater than the installed one is |
297 | +an upgrade. |
298 | + |
299 | +The unit agent is responsible for |
300 | + |
301 | + - Watching the unit state for upgrade changes. |
302 | + |
303 | + - Clearing the upgrade setting on the unit state. |
304 | + |
305 | + - Downloading the new charm version. |
306 | + |
307 | + - Stopping hook execution; hooks will continue to queue while |
308 | + execution is stopped. |
309 | + |
310 | + - Extracting the charm into the unit container. |
311 | + |
312 | + - Updating the unit charm reference. |
313 | + |
314 | + - Running the upgrade workflow transition which will run the |
315 | + upgrade-charm hook, and restart normal hook execution. |
316 | + |
317 | +Only the charm directory within a unit container/directory is |
318 | +replaced on upgrade; any existing persistent data within the unit |
319 | +container is maintained. |
320 | |
321 | === added file 'source/charm.rst' |
322 | --- source/charm.rst 1970-01-01 00:00:00 +0000 |
323 | +++ source/charm.rst 2012-01-18 20:50:30 +0000 |
324 | @@ -0,0 +1,379 @@ |
325 | +Charms |
326 | +====== |
327 | + |
328 | +Introduction |
329 | +------------ |
330 | + |
331 | +Charms define how services integrate and how their service units |
332 | +react to events in the distributed environment, as orchestrated by |
333 | +juju. |
334 | + |
335 | +This specification describes how charms are defined, including their |
336 | +metadata and hooks. It also describes the resources available to hooks |
337 | +in working with the juju environment. |
338 | + |
339 | + |
340 | +The metadata file |
341 | +----------------- |
342 | + |
343 | +The `metadata.yaml` file, at the root of the charm directory, |
344 | +describes the charm. The following fields are supported: |
345 | + |
346 | + * **name:** - The charm name itself. Charm names are formed by |
347 | + lowercase letters, digits, and dashes; they must |
348 | + begin with a letter, and no dash-separated section may |
349 | + consist of digits alone. |
350 | + |
351 | + * **summary:** - A one-line description of the charm. |
352 | + |
353 | + * **description:** - Long explanation of the charm and its |
354 | + features. |
355 | + |
356 | + * **provides:** - The deployed service unit must have the given |
357 | + relations established with another service unit whose charm |
358 | + requires them for the service to work properly. See below for how |
359 | + to define a relation. |
360 | + |
361 | + * **requires:** - The deployed service unit must have the given |
362 | + relations established with another service unit whose charm |
363 | + provides them for the service to work properly. See below for how |
364 | + to define a relation. |
365 | + |
366 | + * **peers:** - Relations that are established with P2P semantics |
367 | + instead of a provides/requires (or client/server) style. When the |
368 | + charm is deployed as a service unit, all the units from the |
369 | + given service will automatically be made part of the relation. |
370 | + See below for how to define a relation. |
371 | + |
372 | + |
373 | +Relations available in `provides`, `requires`, and `peers` are defined |
374 | +as follows: |
375 | + |
376 | + * **provides|requires|peers:** |
377 | + |
378 | + * **<relation name>:** - This name is a user-provided value which |
379 | + identifies the relation uniquely within the given charm. |
380 | + Examples include "database", "cache", "proxy", and "appserver". |
381 | + |
382 | + Each relation may have the following fields defined: |
383 | + |
384 | + * **interface:** - This field defines the type of the |
385 | + relation. The relation will only be established with service |
386 | + units that define a compatible relation with the same |
387 | + interface. Examples include "http", "mysql", and |
388 | + "backup-schedule". |
389 | + |
390 | + * **limit:** - The maximum number of relations of this kind |
391 | + which may be established to other service units. Defaults to |
392 | + 1 for `requires` relations, and to "none" (no limit) for |
393 | + `provides` and `peers` relations. While you may define it, |
394 | + this field is not yet enforced by juju. |
395 | + |
396 | + * **optional:** - Whether this relation is required for the |
397 | + service unit to function or not. Defaults to `false`, which |
398 | + means the relation is required. While you may define it, this |
399 | + field is not yet enforced by juju. |
400 | + |
401 | + As a shortcut, if these properties are not defined, and instead |
402 | + a single string value is provided next to the relation name, the |
403 | + string is taken as the interface value, as seen in this |
404 | + example:: |
405 | + |
406 | + requires: |
407 | + db: mysql |
408 | + |
409 | +Some sample charm definitions are provided at the end of this |
410 | +specification. |
411 | + |
412 | + |
413 | +Hooks |
414 | +----- |
415 | + |
416 | +juju uses hooks to notify a service unit about changes happening |
417 | +in its lifecycle or the larger distributed environment. A hook running |
418 | +for a service unit can query this environment, make any desired local |
419 | +changes on its underlying machine, and change the relation |
420 | +settings. |
421 | + |
422 | +Each hook for a charm is implemented by placing an executable with |
423 | +the desired hook name under the ``hooks/`` directory of the charm |
424 | +directory. juju will execute the hook based on its file name when |
425 | +the corresponding event occurs. |
426 | + |
427 | +All hooks are optional. Not including a corresponding executable in |
428 | +the charm is treated by juju as if the hook executed and then |
429 | +exited with an exit code of 0. |
430 | + |
431 | +All hooks are executed in the charm directory on the service unit. |
432 | + |
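| +For instance, a charm's ``hooks/`` directory might contain executables |
| +laid out as follows (a hypothetical example, not a requirement):: |
| + |
| +   hooks/ |
| +     install |
| +     start |
| +     stop |
| +     db-relation-changed |
| + |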
433 | +The following hooks are with respect to the lifecycle of a service unit: |
434 | + |
435 | + * **install** - Runs just once during the life time of a service |
436 | + unit. Currently this hook is the right place to ensure any package |
437 | + dependencies are met. However, in the future juju will use the |
438 | + charm metadata to perform this role instead. |
439 | + |
440 | + * **start** - Runs when the service unit is started. This happens |
441 | + before any relation hooks are called. The purpose of this hook is |
442 | + to get the service unit ready for relations to be established. |
443 | + |
444 | + * **stop** - Runs when the service unit is stopped. If relations |
445 | + exist, they will be broken and the respective hooks called before |
446 | + this hook is called. |
447 | + |
448 | +The following hooks are called on each service unit as the membership |
449 | +of an established relation changes: |
450 | + |
451 | + * **<relation name>-relation-joined** - Runs each time a remote |
452 | + service unit joins the relation. |
453 | + |
454 | + * **<relation name>-relation-changed** - Runs each time one of the |
455 | + following events occurs: |
456 | + |
457 | + 1. A remote service unit joins the relation, right after the |
458 | + **<relation name>-relation-joined** hook was called. |
459 | + |
460 | + 2. A remote service unit changes its relation settings. |
461 | + |
462 | + This hook enables the charm to modify the service unit state |
463 | + (configuration, running processes, or anything else) to adapt to |
464 | + the relation settings of remote units. |
465 | + |
466 | + An example usage is that HAProxy needs to be aware of web servers |
467 | + as they become available, including details like their IP |
468 | + addresses. Web server service units can publish their availability |
469 | + by making the appropriate relation settings in whichever hook makes |
470 | + the most sense. Assume HAProxy uses the relation name |
471 | + ``server``. Upon such an event, HAProxy, in its |
472 | + ``server-relation-changed`` hook, can then change its own |
473 | + configuration as to what is available to be proxied. |
474 | + |
475 | + * **<relation name>-relation-departed** - Runs each time a |
476 | + remote service unit leaves a relation. This could happen because |
477 | + the service unit has been removed, its service has been destroyed, |
478 | + or the relation between this service and the remote service has |
479 | + been removed. |
480 | + |
481 | + An example usage is that HAProxy needs to be aware of web servers |
482 | + when they are no longer available. It can remove each web server |
483 | + from its configuration as the corresponding service unit departs the |
484 | + relation. |
485 | + |
486 | +This relation hook is with respect to the relation itself: |
487 | + |
488 | + * **<relation name>-relation-broken** - Runs when a relation which |
489 | + had at least one other relation hook run for it (successfully or |
490 | + not) is now unavailable. The service unit can then clean up any |
491 | + established state. |
492 | + |
493 | + An example might be cleaning up the configuration changes which |
494 | + were performed when HAProxy was asked to load-balance for another |
495 | + service unit. |
496 | + |
497 | +Note that the coupling between charms is defined by which settings |
498 | +are required and made available to them through the relation hooks and |
499 | +how these settings are used. Those conventions then define what the |
500 | +relation interface really is, and the **interface** name in the |
501 | +`metadata.yaml` file is simply a way to refer to them and avoid |
502 | +attempting incompatible conversations. Keep that in mind when |
503 | +designing your charms and relations, since it is a good idea to |
504 | +allow the implementation of the charm to change and be replaced with |
505 | +alternative versions without changing the relation conventions in a |
506 | +backwards incompatible way. |
507 | + |
508 | + |
509 | +Hook environment |
510 | +---------------- |
511 | + |
512 | +Hooks can expect to be invoked with a standard environment and |
513 | +context. The following environment variables are set: |
514 | + |
515 | + * **$JUJU_UNIT_NAME** - The name of the local unit executing, |
516 | + in the form ``<service name>/<unit sequence>``. E.g. ``myblog/3``. |
517 | + |
518 | +Hooks called for relation changes will have the following additional |
519 | +environment variables set: |
520 | + |
521 | + * **$JUJU_RELATION** - The relation name this hook is running |
522 | + for. It's redundant with the hook name, but is necessary for |
523 | + the command line tools to know the current context. |
524 | + |
525 | + * **$JUJU_REMOTE_UNIT** - The unit name of the remote unit |
526 | + which has triggered the hook execution. |
527 | + |
528 | + |
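| +For example, a relation hook could log its execution context (an |
| +illustrative snippet only):: |
| + |
| +   #!/bin/bash |
| +   # Record which unit, relation, and remote unit this invocation serves. |
| +   echo "$JUJU_UNIT_NAME: $JUJU_RELATION changed by $JUJU_REMOTE_UNIT" |
| + |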
529 | +Hook commands for working with relations |
530 | +---------------------------------------- |
531 | + |
532 | +In implementing their functionality, hooks can leverage a set of |
533 | +command line tools provided by juju for working with relations. These |
534 | +utilities enable hooks to collaborate on their relation settings, |
535 | +and to inquire about the peers the service unit has relations with. |
536 | + |
537 | +The following command line tools are made available: |
538 | + |
539 | + * **relation-get** - Queries a setting from an established relation |
540 | + with one or more service units. This command will read some |
541 | + context information from environment variables (e.g. |
542 | + $JUJU_RELATION). |
543 | + |
544 | + Examples: |
545 | + |
546 | + Get the IP address from the remote unit which triggered the hook |
547 | + execution:: |
548 | + |
549 | + relation-get ip |
550 | + |
551 | + Get all the settings from the remote unit which triggered the hook |
552 | + execution:: |
553 | + |
554 | + relation-get |
555 | + |
556 | + Get the port information from the `wordpress/3` unit:: |
557 | + |
558 | + relation-get port wordpress/3 |
559 | + |
560 | + Get all the settings from the `wordpress/3` unit, in JSON format:: |
561 | + |
562 | + relation-get - wordpress/3 |
563 | + |
564 | + * **relation-set** - Changes a setting in an established relation. |
565 | + |
566 | + Examples: |
567 | + |
568 | + Set this unit's port number for other peers to use:: |
569 | + |
570 | + relation-set port=8080 |
571 | + |
572 | + Change two settings at once:: |
573 | + |
574 | + relation-set dbname=wordpress dbpass="super secur3" |
575 | + |
576 | + Change several settings at once, with a JSON file:: |
577 | + |
578 | + cat settings.json | relation-set |
579 | + |
580 | + Delete a setting:: |
581 | + |
582 | + relation-set name= |
583 | + |
584 | + * **relation-list** - List all service units participating in the |
585 | + established relation. This list excludes the local service unit |
586 | + which is executing the command. For `provides` and `requires` |
587 | + relations, this command will always return a single service unit. |
588 | + |
589 | + Example:: |
590 | + |
591 | + MEMBERS=$(relation-list) |
592 | + |
593 | +Changes to relation settings are only committed if the hook exited |
594 | +with an exit code of 0. Such changes will then trigger further hook |
595 | +execution in the remote unit(s), through the **<relation |
596 | +name>-relation-changed** hook. This provides a general |
597 | +communication mechanism for service units to coordinate. |
598 | + |
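| +Putting these commands together, a hypothetical |
| +``database-relation-changed`` hook for a blog charm might look like the |
| +following sketch (the relation name, setting names, and helper script |
| +are illustrative, not part of juju):: |
| + |
| +   #!/bin/bash |
| +   # database-relation-changed: reconfigure the blog when the db changes |
| +   set -e |
| +   host=$(relation-get ip) |
| +   # Do nothing until the remote unit has published its address. |
| +   [ -z "$host" ] && exit 0 |
| +   ./scripts/configure-blog --db-host "$host" |
| +   # Publish a setting of our own for the remote side to read. |
| +   relation-set blog-ready=1 |
| + |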
599 | + |
600 | +Hook commands for opening and closing ports |
601 | +------------------------------------------- |
602 | + |
603 | +A charm determines which ports to expose by using the |
604 | +``open-port`` and ``close-port`` commands in its hooks. They may be |
605 | +executed within any charm hook. The commands take the same |
606 | +arguments:: |
607 | + |
608 | + open-port port[/protocol] |
609 | + |
610 | + close-port port[/protocol] |
611 | + |
612 | +These commands are executed immediately; they do not depend on the |
613 | +exit status of the hook. |
614 | + |
615 | +As an example, consider the WordPress charm, which has been deployed |
616 | +as ``my-wordpress``. After completing the setup and restart of Apache, |
617 | +the ``wordpress`` charm can then publish the available port in its |
618 | +``start`` hook for a given service unit:: |
619 | + |
620 | + open-port 80 |
621 | + |
622 | +External access to the service unit is only allowed when both |
623 | +``open-port`` has been executed within some hook and the administrator has |
624 | +exposed the service. The order in which these happen is not |
625 | +important, however. |
626 | + |
627 | +.. note:: |
628 | + |
629 | + Being able to use any hook may be important for your charm. |
630 | + Ideally, the service does not have ports that are vulnerable if |
631 | + exposed prior to the service being fully ready. But if it does, you |
632 | + can solve this problem by only opening the port in the |
633 | + appropriate hook and when the desired conditions are met. |
634 | + |
635 | +Alternatively, you may need to expose more than one port, or expose |
636 | +ports that don't use the TCP protocol. To expose ports for |
637 | +HTTP and HTTPS, your charm could instead make these settings:: |
638 | + |
639 | + open-port 80 |
640 | + open-port 443 |
641 | + |
642 | +Or if you are writing a charm for a DNS server that you would like |
643 | +to expose, then specify the protocol to be UDP:: |
644 | + |
645 | + open-port 53/udp |
646 | + |
647 | +When the service unit is removed or stopped for any reason, the |
648 | +firewall will again be changed to block traffic which was previously |
649 | +allowed to reach the exposed service. Your charm can also do this to |
650 | +close the port:: |
651 | + |
652 | + close-port 80 |
653 | + |
654 | +To be precise, the firewall is only open for the exposed ports during |
655 | +the time both these conditions hold: |
656 | + |
657 | + * A service has been exposed. |
658 | + * A corresponding ``open-port`` command has been run (without a |
659 | + subsequent ``close-port``). |
660 | + |
661 | + |
662 | +Sample metadata.yaml files |
663 | +-------------------------- |
664 | + |
665 | +Below are presented some sample metadata files. |
666 | + |
667 | + |
668 | +MySQL:: |
669 | + |
670 | + name: mysql |
671 | + revision: 1 |
672 | + summary: "A pretty popular database" |
673 | + |
674 | + provides: |
675 | + db: mysql |
676 | + |
677 | + |
678 | +Wordpress:: |
679 | + |
680 | + name: wordpress |
681 | + revision: 3 |
682 | + summary: "A pretty popular blog engine" |
683 | + provides: |
684 | + url: |
685 | + interface: http |
686 | + |
687 | + requires: |
688 | + db: |
689 | + interface: mysql |
690 | + |
691 | + |
692 | +Riak:: |
693 | + |
694 | + name: riak |
695 | + revision: 7 |
696 | + summary: "Scalable K/V Store in Erlang with Clocks :-)" |
697 | + provides: |
698 | + endpoint: |
699 | + interface: http |
700 | + |
701 | + peers: |
702 | + ring: |
703 | + interface: riak |
704 | |
705 | === added file 'source/conf.py' |
706 | --- source/conf.py 1970-01-01 00:00:00 +0000 |
707 | +++ source/conf.py 2012-01-18 20:50:30 +0000 |
708 | @@ -0,0 +1,225 @@ |
709 | +# -*- coding: utf-8 -*- |
710 | +# |
711 | +# juju documentation build configuration file, created by |
712 | +# sphinx-quickstart on Wed Jul 14 09:40:34 2010. |
713 | +# |
714 | +# This file is execfile()d with the current directory set to its containing dir. |
715 | +# |
716 | +# Note that not all possible configuration values are present in this |
717 | +# autogenerated file. |
718 | +# |
719 | +# All configuration values have a default; values that are commented out |
720 | +# serve to show the default. |
721 | + |
722 | +import sys, os |
723 | + |
724 | +# If extensions (or modules to document with autodoc) are in another directory, |
725 | +# add these directories to sys.path here. If the directory is relative to the |
726 | +# documentation root, use os.path.abspath to make it absolute, like shown here. |
727 | +sys.path.insert(0, os.path.abspath('../..')) |
728 | + |
729 | +# -- General configuration ----------------------------------------------------- |
730 | + |
731 | +# If your documentation needs a minimal Sphinx version, state it here. |
732 | +#needs_sphinx = '1.0' |
733 | + |
734 | +# Add any Sphinx extension module names here, as strings. They can be extensions |
735 | +# coming with Sphinx (named 'sphinx.ext.*') or your custom ones. |
736 | +import sphinx |
737 | + |
738 | +extensions = ['sphinx.ext.autodoc'] |
739 | + |
740 | +if [int(x) for x in sphinx.__version__.split(".")] > [1, 0]: |
741 | + if "singlehtml" not in sys.argv: |
742 | + # singlehtml builder skips the step that would cause the _modules |
743 | + # directory to be created, so source links don't work |
744 | + extensions.append('sphinx.ext.viewcode') |
745 | + |
746 | +# Add any paths that contain templates here, relative to this directory. |
747 | +templates_path = ['_templates'] |
748 | + |
749 | +# The suffix of source filenames. |
750 | +source_suffix = '.rst' |
751 | + |
752 | +# The encoding of source files. |
753 | +#source_encoding = 'utf-8-sig' |
754 | + |
755 | +# The master toctree document. |
756 | +master_doc = 'index' |
757 | + |
758 | +# General information about the project. |
759 | +project = u'juju' |
760 | +copyright = u'2010, Canonical' |
761 | + |
762 | +# The version info for the project you're documenting, acts as replacement for |
763 | +# |version| and |release|, also used in various other places throughout the |
764 | +# built documents. |
765 | +# |
766 | +# The short X.Y version. |
767 | +version = '1.0' |
768 | +# The full version, including alpha/beta/rc tags. |
769 | +release = '1.0dev' |
770 | + |
771 | +# The language for content autogenerated by Sphinx. Refer to documentation |
772 | +# for a list of supported languages. |
773 | +#language = None |
774 | + |
775 | +# There are two options for replacing |today|: either, you set today to some |
776 | +# non-false value, then it is used: |
777 | +#today = '' |
778 | +# Else, today_fmt is used as the format for a strftime call. |
779 | +#today_fmt = '%B %d, %Y' |
780 | + |
781 | +# List of patterns, relative to source directory, that match files and |
782 | +# directories to ignore when looking for source files. |
783 | +exclude_patterns = [] |
784 | + |
785 | +# The reST default role (used for this markup: `text`) to use for all documents. |
786 | +#default_role = None |
787 | + |
788 | +# If true, '()' will be appended to :func: etc. cross-reference text. |
789 | +#add_function_parentheses = True |
790 | + |
791 | +# If true, the current module name will be prepended to all description |
792 | +# unit titles (such as .. function::). |
793 | +#add_module_names = True |
794 | + |
795 | +# If true, sectionauthor and moduleauthor directives will be shown in the |
796 | +# output. They are ignored by default. |
797 | +#show_authors = False |
798 | + |
799 | +# The name of the Pygments (syntax highlighting) style to use. |
800 | +pygments_style = 'sphinx' |
801 | + |
802 | +# A list of ignored prefixes for module index sorting. |
803 | +#modindex_common_prefix = [] |
804 | + |
805 | + |
806 | +# -- Options for HTML output --------------------------------------------------- |
807 | + |
808 | +# The theme to use for HTML and HTML Help pages. See the documentation for |
809 | +# a list of builtin themes. |
810 | +html_theme = 'default' |
811 | + |
812 | +# Theme options are theme-specific and customize the look and feel of a theme |
813 | +# further. For a list of options available for each theme, see the |
814 | +# documentation. |
815 | +#html_theme_options = {} |
816 | + |
817 | +# Add any paths that contain custom themes here, relative to this directory. |
818 | +#html_theme_path = [] |
819 | + |
820 | +# The name for this set of Sphinx documents. If None, it defaults to |
821 | +# "<project> v<release> documentation". |
822 | +#html_title = None |
823 | + |
824 | +# A shorter title for the navigation bar. Default is the same as html_title. |
825 | +#html_short_title = None |
826 | + |
827 | +# The name of an image file (relative to this directory) to place at the top |
828 | +# of the sidebar. |
829 | +#html_logo = None |
830 | + |
831 | +# The name of an image file (within the static path) to use as favicon of the |
832 | +# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 |
833 | +# pixels large. |
834 | +#html_favicon = None |
835 | + |
836 | +# Add any paths that contain custom static files (such as style sheets) here, |
837 | +# relative to this directory. They are copied after the builtin static files, |
838 | +# so a file named "default.css" will overwrite the builtin "default.css". |
839 | +#html_static_path = ['_static'] |
840 | + |
841 | +# If not '', a 'Last updated on:' timestamp is inserted at every page bottom, |
842 | +# using the given strftime format. |
843 | +#html_last_updated_fmt = '%b %d, %Y' |
844 | + |
845 | +# If true, SmartyPants will be used to convert quotes and dashes to |
846 | +# typographically correct entities. |
847 | +#html_use_smartypants = True |
848 | + |
849 | +# Custom sidebar templates, maps document names to template names. |
850 | +html_sidebars = { |
851 | + 'index': 'project-links.html' |
852 | +} |
853 | +# Additional templates that should be rendered to pages, maps page names to |
854 | +# template names. |
855 | +#html_additional_pages = {} |
856 | + |
857 | +# If false, no module index is generated. |
858 | +html_domain_indices = False |
859 | + |
860 | +# If false, no index is generated. |
861 | +#html_use_index = True |
862 | + |
863 | +# If true, the index is split into individual pages for each letter. |
864 | +#html_split_index = False |
865 | + |
866 | +# If true, links to the reST sources are added to the pages. |
867 | +#html_show_sourcelink = True |
868 | + |
869 | +# If true, "Created using Sphinx" is shown in the HTML footer. Default is True. |
870 | +#html_show_sphinx = True |
871 | + |
872 | +# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. |
873 | +#html_show_copyright = True |
874 | + |
875 | +# If true, an OpenSearch description file will be output, and all pages will |
876 | +# contain a <link> tag referring to it. The value of this option must be the |
877 | +# base URL from which the finished HTML is served. |
878 | +#html_use_opensearch = '' |
879 | + |
880 | +# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). |
881 | +#html_file_suffix = '' |
882 | + |
883 | +# Output file base name for HTML help builder. |
884 | +htmlhelp_basename = 'jujudoc' |
885 | + |
886 | + |
887 | +# -- Options for LaTeX output -------------------------------------------------- |
888 | + |
889 | +# The paper size ('letter' or 'a4'). |
890 | +#latex_paper_size = 'letter' |
891 | + |
892 | +# The font size ('10pt', '11pt' or '12pt'). |
893 | +#latex_font_size = '10pt' |
894 | + |
895 | +# Grouping the document tree into LaTeX files. List of tuples |
896 | +# (source start file, target name, title, author, documentclass [howto/manual]). |
897 | +latex_documents = [ |
898 | + ('index', 'juju.tex', u'juju documentation', |
899 | + u'Canonical', 'manual'), |
900 | +] |
901 | + |
902 | +# The name of an image file (relative to this directory) to place at the top of |
903 | +# the title page. |
904 | +#latex_logo = None |
905 | + |
906 | +# For "manual" documents, if this is true, then toplevel headings are parts, |
907 | +# not chapters. |
908 | +#latex_use_parts = False |
909 | + |
910 | +# If true, show page references after internal links. |
911 | +#latex_show_pagerefs = False |
912 | + |
913 | +# If true, show URL addresses after external links. |
914 | +#latex_show_urls = False |
915 | + |
916 | +# Additional stuff for the LaTeX preamble. |
917 | +#latex_preamble = '' |
918 | + |
919 | +# Documents to append as an appendix to all manuals. |
920 | +#latex_appendices = [] |
921 | + |
922 | +# If false, no module index is generated. |
923 | +#latex_domain_indices = True |
924 | + |
925 | + |
926 | +# -- Options for manual page output -------------------------------------------- |
927 | + |
928 | +# One entry per manual page. List of tuples |
929 | +# (source start file, name, description, authors, manual section). |
930 | +man_pages = [ |
931 | + ('index', 'juju', u'juju documentation', |
932 | + [u'Canonical'], 1) |
933 | +] |
934 | |
935 | === added directory 'source/drafts' |
936 | === added file 'source/drafts/charm-namespaces.rst' |
937 | --- source/drafts/charm-namespaces.rst 1970-01-01 00:00:00 +0000 |
938 | +++ source/drafts/charm-namespaces.rst 2012-01-18 20:50:30 +0000 |
939 | @@ -0,0 +1,72 @@ |
940 | + |
941 | +Charm Namespaces |
942 | +================ |
943 | + |
944 | +Introduction |
945 | +------------ |
946 | + |
947 | +juju supports deployment of charms from multiple sources. |
948 | + |
949 | +By default juju searches only the Ubuntu charm namespace to resolve |
950 | +charms. For example the following command line snippet will install wordpress |
951 | +from the Ubuntu charm namespace:: |
952 | + |
953 | + juju deploy wordpress |
954 | + |
955 | + |
956 | +In order to support local charm development and completely offline private |
957 | +repositories, charms can also be deployed directly from a local directory. |
958 | +For example the following will resolve the wordpress charm to the |
959 | +$HOME/local_charms directory:: |
960 | + |
961 | + juju deploy --repository=~/local_charms wordpress |
962 | + |
963 | +With this parameter any charm dependencies from the wordpress charm will be |
964 | +looked up first in the local directory and then in the ubuntu charm |
965 | +namespace. So the command line flag '--repository' alters the charm lookup |
966 | +from the default such that it prepends the local directory to the lookup order. |
967 | + |
968 | + |
969 | +The lookup order can also be altered to utilize a 3rd party published repository |
970 | +in preference to the Ubuntu charm repository. For example the following will |
971 | +perform a charm lookup for wordpress and its dependencies from the published |
972 | +'openstack' 3rd party repository before looking up dependencies in the Ubuntu |
973 | +charm repository:: |
974 | + |
975 | + juju deploy --repository=es:openstack wordpress |
976 | + |
977 | +The lookup order can also be specified just for a single charm. For example |
978 | +the following command would deploy the wordpress charm from the openstack |
979 | +namespace but would resolve dependencies (like apache and mysql) via the ubuntu |
980 | +namespace:: |
981 | + |
982 | + juju deploy es:openstack/wordpress |
983 | + |
984 | +The lookup order can also be explicitly specified in the client configuration |
985 | +to define a custom lookup order without the use of command line options:: |
986 | + |
987 | + environments.yaml |
988 | + |
989 | + repositories: |
990 | + |
991 | + - http://charms.ubuntu.com/collection/ubuntu |
992 | + - http://charms.ubuntu.com/collection/openstack |
993 | + - http://charms.ubuntu.com/people/miked |
994 | + - /var/lib/charms |
995 | + |
996 | +The repositories in the configuration file are specified as a yaml list, and the |
997 | +list order defines the lookup order for charms. |
998 | + |
999 | + |
1000 | +Deployment |
1001 | +---------- |
1002 | + |
1003 | +After juju resolves a charm and its dependencies, it bundles them and |
1004 | +deploys them to a machine provider charm cache/repository. This allows the |
1005 | +same charm to be deployed to multiple machines repeatably and with minimal |
1006 | +network transfers. |
1007 | + |
1008 | +juju stores the qualified name of the charm when saving it to the machine |
1009 | +provider cache. This allows a charm to be unambiguously identified, i.e. |
1010 | +whether it came from the Ubuntu namespace, a third-party namespace, or even from |
1011 | +disk. |
1012 | |
1013 | === added file 'source/drafts/developer-install.rst' |
1014 | --- source/drafts/developer-install.rst 1970-01-01 00:00:00 +0000 |
1015 | +++ source/drafts/developer-install.rst 2012-01-18 20:50:30 +0000 |
1016 | @@ -0,0 +1,49 @@ |
1017 | +Developer Install |
1018 | +------------------ |
1019 | + |
1020 | +For folks who want to develop on juju itself, a source install |
1021 | +from trunk or branch is recommended. |
1022 | + |
1023 | +To run juju from source, you will need the following dependencies |
1024 | +installed: |
1025 | + |
1026 | + * zookeeper |
1027 | + * txzookeeper |
1028 | + * txaws |
1029 | + |
1030 | +The juju team recommends installing the zookeeper package from the |
1031 | +juju PPA, or compiling it from source, as of Ubuntu natty (11.04), due |
1032 | +to bugs in the packaged version. |
1033 | + |
1034 | +On a modern Ubuntu Linux system execute:: |
1035 | + |
1036 | + $ sudo apt-get install python-zookeeper python-virtualenv python-yaml |
1037 | + |
1038 | +You will also need Python 2.6 or better. |
1039 | + |
1040 | +We recommend and demonstrate the use of virtualenv to install juju |
1041 | +and its dependencies in a sandbox, in case you later install a newer |
1042 | +version via package. |
1043 | + |
1044 | +First let's setup a virtualenv:: |
1045 | + |
1046 | + $ virtualenv juju |
1047 | + $ cd juju |
1048 | + $ source bin/activate |
1049 | + |
1050 | +Next we'll fetch and install a few juju dependencies from source:: |
1051 | + |
1052 | + $ bzr branch lp:txaws |
1053 | + $ cd txaws && python setup.py develop && cd .. |
1054 | + $ bzr branch lp:txzookeeper |
1055 | + $ cd txzookeeper && python setup.py develop && cd .. |
1056 | + |
1057 | +Lastly, we fetch juju and install it from trunk:: |
1058 | + |
1059 | + $ bzr branch lp:juju |
1060 | + $ cd juju && python setup.py develop |
1061 | + |
1062 | +You can now configure your juju environment per the getting-started |
1063 | +documentation. |
1064 | + |
1065 | + |
1066 | |
1067 | === added file 'source/drafts/expose-services.rst' |
1068 | --- source/drafts/expose-services.rst 1970-01-01 00:00:00 +0000 |
1069 | +++ source/drafts/expose-services.rst 2012-01-18 20:50:30 +0000 |
1070 | @@ -0,0 +1,20 @@ |
1071 | +Exposing a service |
1072 | +================== |
1073 | + |
1074 | +The following functionality will be implemented at a later date. |
1075 | + |
1076 | + |
1077 | +``exposed`` and ``unexposed`` hooks |
1078 | +----------------------------------- |
1079 | + |
1080 | +Upon a service being exposed, the ``exposed`` hook will be run, if it |
1081 | +is present in the charm. |
1082 | + |
1083 | +This may be an appropriate place to run the ``open-port`` command; |
1084 | +however, it is up to the charm author where it should be run, since |
1085 | +it and ``close-port`` are available commands for every hook. |
1086 | + |
1087 | +Likewise, when a service is unexposed, the ``unexposed`` hook will be |
1088 | +run, if present. Many charms likely do not need to implement this |
1089 | +hook, however, it could be an opportunity to terminate unnecessary |
1090 | +processes or remove other resources. |
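| +Were this hook available, a minimal ``exposed`` hook for a web server |
| +charm might simply open the service's port (an illustrative sketch |
| +only):: |
| + |
| +   #!/bin/bash |
| +   # exposed: run when the administrator exposes the service |
| +   open-port 80 |
| + |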
1091 | |
1092 | === added file 'source/drafts/resolved.rst' |
1093 | --- source/drafts/resolved.rst 1970-01-01 00:00:00 +0000 |
1094 | +++ source/drafts/resolved.rst 2012-01-18 20:50:30 +0000 |
1095 | @@ -0,0 +1,60 @@ |
1096 | +Resolving errors |
1097 | +================ |
1098 | + |
1099 | +juju internally tracks the state of units and their relations. |
1100 | +It moves them through a simple state machine to ensure the correct |
1101 | +sequencing of hooks and associated unit agent behavior. Typically |
1102 | +this means that all hooks are executed as part of a workflow transition. |
1103 | + |
1104 | +If a hook fails, then juju notes this failure, and transitions |
1105 | +either the unit or the unit relation (depending on the hook) to a |
1106 | +failure state. |
1107 | + |
1108 | +If a hook for a unit relation fails, only that unit relation is |
1109 | +considered to be in an error state; the unit and other relations of |
1110 | +the unit continue to operate normally. |
1111 | + |
1112 | +If a hook for the unit fails (install, start, stop, etc), the unit |
1113 | +is considered not running, and its unit relations will stop responding |
1114 | +to changes. |
1115 | + |
1116 | +As a means to recover from hook errors, juju offers the |
1117 | +``juju resolved`` command. |
1118 | + |
1119 | +This command will operate on either a unit or unit relation and |
1120 | +schedules a transition from the error state back to its original |
1121 | +destination state. For example given a unit mysql/0 whose start |
1122 | +hook had failed, and the following command line:: |
1123 | + |
1124 | + $ juju resolved mysql/0 |
1125 | + |
1126 | +After being resolved the unit would be in the started state. It |
1127 | +is important to note that by default ``juju resolved`` does |
1128 | +not fire hooks and the ``start`` hook would not be invoked again |
1129 | +as a result of the above. |
1130 | + |
1131 | +If a unit's relation-changed/joined/departed hook had failed, then |
1132 | +juju resolved can also be utilized to resolve the error on |
1133 | +the unit relation:: |
1134 | + |
1135 | + $ juju resolved mysql/0 db |
1136 | + |
1137 | +This would re-enable change watching, and hook execution for the |
1138 | +``db`` relation after a hook failure. |
1139 | + |
1140 | +It's expected that an admin will typically have a look at the system to |
1141 | +determine or correct the issue before using this command line. |
1142 | +``juju resolved`` is meant primarily as a mechanism to remove the |
1143 | +error block after correction of the original issue. |
1144 | + |
1145 | +However ``juju resolved`` can optionally re-invoke the failed hook. |
1146 | +This feature is particularly beneficial during charm development, when |
1147 | +iterating on a hook that is under development. Assuming a mysql unit |
1148 | +with a start hook error, upon executing the following command:: |
1149 | + |
1150 | + $ juju resolved --retry mysql/0 |
1151 | + |
1152 | +juju will examine the mysql/0 unit, and will re-execute its start |
1153 | +hook before marking it as running. If the start hook fails again, |
1154 | +then the unit will remain in the same state. |
1155 | + |
1156 | |
1157 | === added file 'source/drafts/service-config.rst' |
1158 | --- source/drafts/service-config.rst 1970-01-01 00:00:00 +0000 |
1159 | +++ source/drafts/service-config.rst 2012-01-18 20:50:30 +0000 |
1160 | @@ -0,0 +1,162 @@ |
1161 | +.. _"Service Configuration": |
1162 | + |
1163 | +Service configuration |
1164 | +===================== |
1165 | + |
1166 | +Introduction |
1167 | +------------ |
1168 | + |
1169 | +A Charm_ often will require access to specific options or |
1170 | +configuration. Charms allow for the manipulation of the various |
1171 | +configuration options which the charm author has chosen to |
1172 | +expose. juju provides tools to help manage these options and |
1173 | +respond to changes in these options over the lifetime of the `service` |
1174 | +deployment. These options apply to the entire service, as opposed to |
1175 | +only a specific unit or relation. Configuration is modified by an |
1176 | +administrator at deployment time or over the lifetime of the services. |
1177 | + |
1178 | +As an example a wordpress service may expose a 'blog-title' |
1179 | +option. This option would control the title of the blog being |
1180 | +published. Changes to this option would be applied to all units |
1181 | +implementing this service through the invocation of a hook on each of |
1182 | +them. |
1183 | + |
1184 | +.. _Charm: ./charm.html |
1185 | + |
1186 | + |
1187 | +Using configuration options |
1188 | +--------------------------- |
1189 | + |
1190 | +Configuration options are manipulated using a command line |
1191 | +interface. juju provides a `set` command to aid the administrator |
1192 | +in changing values:: |
1193 | + |
1194 | + |
1195 | + juju set <service name> option=value [option=value] |
1196 | + |
1197 | +This command allows changing options at runtime and takes one or more |
1198 | +name/value pairs which will be set into the service |
1199 | +options. Configuration options which are set together are delivered to |
1200 | +the services for handling together. E.g. if you are changing a |
1201 | +username and a password, changing them individually may yield bad |
1202 | +results since the username will temporarily be set with an incorrect |
1203 | +password. |
1204 | + |
1205 | +While it's possible to set multiple configuration options on the |
1206 | +command line, it's also convenient to pass multiple configuration |
1207 | +options via the --file argument which takes the name of a YAML |
1208 | +file. The contents of this file will be applied as though these |
1209 | +elements had been passed to `juju set`. |
1210 | + |
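| +For example, an invocation might look like this (the file name is |
| +hypothetical):: |
| + |
| +   juju set myblog --file settings.yaml |
| + |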
1211 | +A configuration file may be provided at deployment time using the |
1212 | +--config option, as follows:: |
1213 | + |
1214 | + juju deploy [--config local.yaml] wordpress myblog |
1215 | + |
1216 | +The service name is looked up inside the YAML file to allow for |
1217 | +related service configuration options to be collected into a single |
1218 | +file for the purposes of deployment and passed repeatedly to each |
1219 | +`juju deploy` invocation. |
1220 | + |
1221 | +Below is an example local.yaml containing options |
1222 | +which would be used during deployment of a service named myblog. |
1223 | + |
1224 | +:: |
1225 | + |
1226 | + myblog: |
1227 | + blog-roll: ['http://foobar.com', 'http://testing.com'] |
1228 | + blog-title: Awesome Sauce |
1229 | + password: n0nsense |
1230 | + |
1231 | + |
1232 | +Creating charms |
1233 | +--------------- |
1234 | + |
1235 | +Charm authors create a `config.yaml` file which resides in the |
1236 | +charm's top-level directory. The configuration options supported by |
1237 | +a service are defined within its respective charm. juju will |
1238 | +only allow the manipulation of options which were explicitly defined |
1239 | +as supported. |
1240 | + |
1241 | +The specification of possible configuration values is intentionally |
1242 | +minimal, but still evolving. Currently the charm defines a list of |
1243 | +option names to which it reacts. Information includes a human-readable |
1244 | +description and an optional default value. Additionally a `type` may be |
1245 | +specified. All options have a default type of 'str', which means the |
1246 | +value will only be treated as a text string. Other valid types are |
1247 | +'int', 'float' and 'regex'. When 'regex' is used, an additional element |
1248 | +must be provided, 'validator'. This must be a valid Python regex as |
1249 | +specified at http://docs.python.org/lib/re.html |
1250 | + |
1251 | +The following `config.yaml` would be included in the top level |
1252 | +directory of a charm and includes a list of option definitions:: |
1253 | + |
1254 | +  options:
1255 | +    blog-roll:
1256 | +      default: null
1257 | +      description: List of URLs which will be included as the blog roll
1258 | +    blog-title:
1259 | +      default: My Blog
1260 | +      description: The title of the blog.
1261 | +    password:
1262 | +      default: changeme
1263 | +      description: Password to be used for the account specified by 'username'
1264 | +      type: regex
1265 | +      validator: '.{6,12}'
1266 | +    username:
1267 | +      default: admin
1268 | +      description: The name of the initial account (given admin permissions).
1269 | + |
1270 | + |
1271 | +To access these configuration options from a hook we provide the following:: |
1272 | + |
1273 | + config-get [option name] |
1274 | + |
1275 | +`config-get` returns all the configuration options for a service as |
1276 | +JSON data when no option name is specified. If an option name is |
1277 | +specified the value of that option is output according to the normal |
1278 | +rules and obeying the `--output` and `--format` arguments. Hooks |
1279 | +implicitly know the service they are executing for and config-get |
1280 | +always gets values from the service of the hook. |
1281 | + |
1282 | +Changes to options (see previous section) trigger the charm's
1283 | +`config-changed` hook. The `config-changed` hook is guaranteed to run
1284 | +after any changes are made to the configuration, but it is possible
1285 | +that multiple changes will be observed at once. Because it's possible
1286 | +to set many configuration options in a single command line invocation,
1287 | +it is easy to ensure related options are available to the
1288 | +service at the same time.
1289 | + |
1290 | +The `config-changed` hook must be written in such a way as to deal |
1291 | +with changes to one or more options and deal gracefully with options |
1292 | +that are required by the charm but not yet set by an |
1293 | +administrator. Errors in the config-changed hook force juju to |
1294 | +assume the service is no longer properly configured. If the service is |
1295 | +not already in a stopped state it will be stopped and taken out of |
1296 | +service. The status command will be extended in the future to report |
1297 | +on workflow and unit agent status which will help reveal error |
1298 | +conditions of this nature. |
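| +
| +As an illustrative sketch (not taken from any real charm), a shell
| +`config-changed` hook for the blog example might read its options with
| +`config-get` and tolerate options the administrator has not yet set::
| +
| +    #!/bin/sh
| +    TITLE=`config-get blog-title`
| +    ROLL=`config-get blog-roll`
| +    if [ -z "$ROLL" ]; then
| +        juju-log "no blog-roll configured yet"
| +    fi
| +    # ...write $TITLE and $ROLL into the blog software's own
| +    # configuration, then reload it.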
1299 | + |
1300 | +When options are passed using `juju deploy` their values will be
1301 | +read in from a file and made available to the service prior to the
1302 | +invocation of its `install` hook.
1303 | +will have access to config-get and thus complete access to the |
1304 | +configuration options during their execution. If the `install` or |
1305 | +`start` hooks don't directly need to deal with options they can simply |
1306 | +invoke the `config-changed` hook. |
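| +
| +For instance, a minimal `install` hook (a sketch; the package name is
| +hypothetical, and hooks run from the charm directory) could defer all
| +option handling to `config-changed`::
| +
| +    #!/bin/sh
| +    apt-get install -y wordpress
| +    exec hooks/config-changed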
1307 | + |
1308 | + |
1309 | + |
1310 | +Internals |
1311 | +--------- |
1312 | + |
1313 | +.. note:: |
1314 | + This section explains details useful to the implementation but not of |
1315 | + interest to the casual reader. |
1316 | + |
1317 | +Hooks normally attempt to provide a consistent view of the shared |
1318 | +state of the system and the handling of config options within hooks |
1319 | +(config-changed and the relation hooks) is no different. The first |
1320 | +access to the configuration data of a service will retain a cached |
1321 | +copy of the service options. Cached data will be used for the |
1322 | +duration of the hook invocation. |
1323 | |
1324 | === added file 'source/expose-services.rst' |
1325 | --- source/expose-services.rst 1970-01-01 00:00:00 +0000 |
1326 | +++ source/expose-services.rst 2012-01-18 20:50:30 +0000 |
1327 | @@ -0,0 +1,43 @@ |
1328 | +Exposing a service |
1329 | +================== |
1330 | + |
1331 | +In juju, making a service public -- its ports available for public |
1332 | +use -- requires that it be explicitly *exposed*. Note that this |
1333 | +exposing does not yet involve DNS or other directory information. For |
1334 | +now, it simply makes the service public. |
1335 | + |
1336 | +Service exposing works by opening appropriate ports in the firewall of |
1337 | +the cloud provider. Because service exposing is necessarily tied to |
1338 | +the underlying provider, juju manages all aspects of |
1339 | +exposing. Such management ensures that a charm can work with other |
1340 | +cloud providers besides EC2, once support for them is implemented. |
1341 | + |
1342 | +juju provides the ``juju expose`` command to expose a service. |
1343 | +For example, you might have deployed a ``my-wordpress`` service, which |
1344 | +is defined by a ``wordpress`` charm. To expose this service, simply |
1345 | +execute the following command:: |
1346 | + |
1347 | + juju expose my-wordpress |
1348 | + |
1349 | +To stop exposing this service, and make any corresponding firewall |
1350 | +changes immediately, you can run this command:: |
1351 | + |
1352 | + juju unexpose my-wordpress |
1353 | + |
1354 | +You can see the status of your exposed ports by running the ``juju |
1355 | +status`` command. If ports have been opened by the service and you |
1356 | +have exposed the service, then you will see something like the |
1357 | +following output for the deployed services:: |
1358 | + |
1359 | +  services:
1360 | +    wordpress:
1361 | +      exposed: true
1362 | +      charm: local:oneiric/wordpress-42
1363 | +      relations: {db: mysql}
1364 | +      units:
1365 | +        wordpress/0:
1366 | +          machine: 2
1367 | +          open-ports: [80/tcp]
1368 | +          relations:
1369 | +            db: {state: up}
1370 | +          state: started
1371 | |
1372 | === added file 'source/faq.rst' |
1373 | --- source/faq.rst 1970-01-01 00:00:00 +0000 |
1374 | +++ source/faq.rst 2012-01-18 20:50:30 +0000 |
1375 | @@ -0,0 +1,91 @@ |
1376 | +Frequently Asked Questions |
1377 | +========================== |
1378 | + |
1379 | +Where does the name juju come from? |
1380 | + |
1381 | + It means magic in the same African roots from which the word ubuntu comes.
1382 | + Please see http://afgen.com/juju.html for a more detailed explanation. |
1383 | + |
1384 | +Why is juju useful? |
1385 | + |
1386 | + juju is a next generation service deployment and orchestration |
1387 | + framework. It has been likened to APT for the cloud. With juju, |
1388 | + different authors are able to create service charms independently, and |
1389 | + make those services coordinate their communication through a simple |
1390 | + protocol. Users can then take the product of different authors and very |
1391 | + comfortably deploy those services in an environment. The result is |
1392 | + multiple machines and components transparently collaborating towards |
1393 | + providing the requested service. Read more :doc:`about` |
1394 | + |
1395 | +When will it be ready for production? |
1396 | + |
1397 | + As of Ubuntu Natty 11.04, juju is a technology preview. It is not yet |
1398 | + ready to be used in production. However, adventurous users are encouraged to |
1399 | + evaluate it, study it, start writing charms for it or start hacking on |
1400 | + juju internals. The rough roadmap is to have juju packaged for
1401 | + Universe by the 11.10 release, and perhaps in main by 12.04.
1402 | + |
1403 | +What language is juju developed in? |
1404 | + |
1405 | + juju itself is developed using Python. However, writing charms for |
1406 | + juju can be done in any language. All juju cares about is finding a |
1407 | + set of executable files, which it will trigger appropriately |
1408 | + |
1409 | +Does juju start from a pre-configured AMI Image? |
1410 | + |
1411 | + No, juju uses a plain Ubuntu image. All needed components are installed |
1412 | + at run-time. Then the juju charm is sent to the machine and hooks start |
1413 | + getting executed in response to events.
1414 | + |
1415 | +Is it possible to deploy multiple services per machine? |
1416 | + |
1417 | + Currently each service unit is deployed to a separate machine (EC2 instance)
1418 | + that can relate to other services running on different nodes. This was done
1419 | + to get juju into a working state faster. juju will definitely support
1420 | + multiple services per machine in the future.
1421 | + |
1422 | +Is it possible to pass parameters to juju charms? |
1423 | + |
1424 | + Tunables are landing very soon in juju. Once ready you will be able to |
1425 | + use "juju set service key=value" and respond to that from within the |
1426 | + juju charm. This will enable dynamic features to be added to charms.
1427 | + |
1428 | +Does juju only deploy to the Amazon EC2 cloud? |
1429 | + |
1430 | + Currently yes. However, work is underway to enable deploying to LXC containers
1431 | + such that you are able to run juju charms on a single local machine.
1432 | + Integration work with the `Orchestra <https://launchpad.net/orchestra>`_
1433 | + project is also underway to enable deployment to hardware machines.
1434 | + |
1435 | +What directory are hooks executed in? |
1436 | + |
1437 | + Hooks are executed in the charm directory (the parent directory to the hooks
1438 | + directory). This is primarily to encourage putting additional resources that
1439 | + a hook may use outside of the hooks directory which is the public interface |
1440 | + of the charm. |
1441 | + |
1442 | +How are charms licensed? |
1443 | + |
1444 | + Charms are effectively data inputs to juju, and are therefore |
1445 | + licensed/copyrighted by the author as an independent work. You are free to |
1446 | + claim copyright solely for yourself if it's an independent work, and to |
1447 | + license it as you see fit. If you as the charm author are performing the |
1448 | + work as a result of a fiduciary agreement, the terms of such agreement come |
1449 | + into play and so the licensing choice is up to the hiring entity. |
1450 | + |
1451 | +How can I contact the juju team? |
1452 | + |
1453 | + User and charm author oriented resources |
1454 | + * Mailing list: https://lists.ubuntu.com/mailman/listinfo/Ubuntu-cloud |
1455 | + * IRC #ubuntu-cloud |
1456 | + juju development |
1457 | + * Mailing list: https://lists.ubuntu.com/mailman/listinfo/juju |
1458 | + * IRC #juju (Freenode) |
1459 | + |
1460 | +Where can I find out more about juju? |
1461 | + |
1462 | + * Project Site: https://launchpad.net/juju |
1463 | + * Documentation: https://juju.ubuntu.com/docs/ |
1464 | + * Work Items: https://juju.ubuntu.com/kanban/dublin.html |
1465 | + * Principia charms project: https://launchpad.net/principia |
1466 | + * Principia-Tools project: https://launchpad.net/principia-tools |
1467 | |
1468 | === added file 'source/generate_modules.py' |
1469 | --- source/generate_modules.py 1970-01-01 00:00:00 +0000 |
1470 | +++ source/generate_modules.py 2012-01-18 20:50:30 +0000 |
1471 | @@ -0,0 +1,107 @@ |
1472 | +import os
1473 | +import sys
1474 | +
1475 | +INIT = "__init__.py"
1476 | +TESTS = "tests"
1477 | +
1478 | +
1479 | +def get_modules(names):
| +    # Importable module names, excluding __init__.py itself.
1480 | +    if INIT in names:
1481 | +        names.remove(INIT)
1482 | +    return [n[:-3] for n in names if n.endswith(".py")]
1483 | +
1484 | +
1485 | +def trim_dirs(root, dirs):
| +    # "tests" packages are skipped entirely; other directories without
| +    # an __init__.py are pruned from the walk, as they are not packages.
1486 | +    for dir_ in dirs[:]:
1487 | +        if dir_ == TESTS:
1488 | +            dirs.remove(TESTS)
1489 | +        elif not os.path.exists(os.path.join(root, dir_, INIT)):
1490 | +            dirs.remove(dir_)
1491 | +    return dirs
1492 | +
1493 | +
1494 | +def module_name(base, root, name=None):
| +    # Convert a filesystem path into a dotted module name.
1495 | +    path = root[len(base) + 1:]
1496 | +    if name:
1497 | +        path = os.path.join(path, name)
1498 | +    return path.replace("/", ".")
1499 | +
1500 | +
1501 | +def collect_modules(src):
1502 | +    src = os.path.abspath(src)
1503 | +    base = os.path.dirname(src)
1504 | +
1505 | +    names = []
1506 | +    for root, dirs, files in os.walk(src):
1507 | +        modules = get_modules(files)
1508 | +        packages = trim_dirs(root, dirs)
1509 | +        if modules or packages:
1510 | +            names.append(module_name(base, root))
1511 | +        for name in modules:
1512 | +            names.append(module_name(base, root, name))
1513 | +    return sorted(names)
1514 | +
1515 | +
1516 | +def subpackages(names, parent):
1517 | +    return [name for name in names
1518 | +            if name.startswith(parent) and name != parent]
1519 | +
1520 | +
1521 | +def dst_file(dst, name):
1522 | +    return open(os.path.join(dst, "%s.rst" % name), "w")
1523 | +
1524 | +
1525 | +def write_title(f, name, kind):
1526 | +    f.write("%s\n%s\n\n" % (name, kind * len(name)))
1527 | +
1528 | +
1529 | +def write_packages(f, names):
1530 | +    for name in names:
1531 | +        f.write(" " * name.count("."))
1532 | +        f.write("* :mod:`%s`\n" % name)
1533 | +    f.write("\n")
1534 | +
1535 | +
1536 | +def abbreviate(name):
| +    # Shorten leading packages to initials, e.g. "juju.state.service" -> "j.state.service".
1537 | +    parts = name.split(".")
1538 | +    short_parts = [part[0] for part in parts[:-2]]
1539 | +    return ".".join(short_parts + parts[-2:])
1540 | +
1541 | +
1542 | +def write_module(f, name, subs):
1543 | +    write_title(f, abbreviate(name), "=")
1544 | +    f.write(".. automodule:: %s\n"
1545 | +            "   :members:\n"
1546 | +            "   :undoc-members:\n"
1547 | +            "   :show-inheritance:\n\n"
1548 | +            % name)
1549 | +    if subs:
1550 | +        write_title(f, "Subpackages", "-")
1551 | +        write_packages(f, subs)
1552 | +
1553 | +
1554 | +def write_module_list(f, names):
1555 | +    write_title(f, "juju modules", "=")
1556 | +    write_packages(f, names)
1557 | +    f.write(".. toctree::\n   :hidden:\n\n")
1558 | +    for name in names:
1559 | +        f.write("   %s\n" % name)
1560 | +
1561 | +
1562 | +def generate(src, dst):
1563 | +    names = collect_modules(src)
1564 | +
1565 | +    if not os.path.exists(dst):
1566 | +        os.makedirs(dst)
1567 | +
| +    # One index page listing everything, then a stub page per module.
1568 | +    with dst_file(dst, "modules") as f:
1569 | +        write_module_list(f, names)
1570 | +
1571 | +    for name in names:
1572 | +        with dst_file(dst, name) as f:
1573 | +            write_module(f, name, subpackages(names, name))
1574 | +
1575 | +
1576 | +if __name__ == "__main__":
1577 | +    src, dst = sys.argv[1:]
1578 | +    generate(src, dst)
1579 | |
1580 | === added file 'source/getting-started.rst' |
1581 | --- source/getting-started.rst 1970-01-01 00:00:00 +0000 |
1582 | +++ source/getting-started.rst 2012-01-18 20:50:30 +0000 |
1583 | @@ -0,0 +1,80 @@ |
1584 | +.. _getting-started: |
1585 | + |
1586 | +Getting started |
1587 | +=============== |
1588 | + |
1589 | +Introduction |
1590 | +------------ |
1591 | + |
1592 | +This tutorial gets you started with juju. A prerequisite is
1593 | +access credentials to a dedicated computing environment, such as
1594 | +those offered by a virtualized cloud hosting provider.
1595 | + |
1596 | +juju has been designed for environments which can provide a |
1597 | +new machine with an Ubuntu cloud operating system image |
1598 | +on-demand. This includes services such as `Amazon EC2 |
1599 | +<http://aws.amazon.com/ec2/>`_ or `RackSpace |
1600 | +<http://www.rackspace.com>`_. |
1601 | + |
1602 | +It's also required that the environment provides a permanent storage |
1603 | +facility such as `Amazon S3 <https://s3.amazonaws.com/>`_. |
1604 | + |
1605 | +For the moment, though, the only environment supported is EC2. |
1606 | + |
1607 | +Running from PPA |
1608 | +---------------- |
1609 | + |
1610 | +The juju team's Personal Package Archive (PPA) is
1611 | +currently the preferred installation mechanism for juju. It
1612 | +includes upstream versions of binary dependencies like Zookeeper
1613 | +which are more recent than those in the latest Ubuntu release (Natty
1614 | +11.04) and contain important bugfixes.
1615 | + |
1616 | +To install juju from the PPA, execute the following in a shell:: |
1617 | + |
1618 | + sudo add-apt-repository ppa:juju/pkgs |
1619 | + sudo apt-get update && sudo apt-get install juju |
1620 | + |
1621 | +The juju environment can now be configured as described below.
1622 | + |
1623 | +Configuring your environment |
1624 | +---------------------------- |
1625 | + |
1626 | +Run the command-line utility with no arguments to create a sample |
1627 | +environment:: |
1628 | + |
1629 | + $ juju |
1630 | + |
1631 | +This will create the file ``~/.juju/environments.yaml``, which will look |
1632 | +something like this:: |
1633 | + |
1634 | +  environments:
1635 | +    sample:
1636 | +      type: ec2
1637 | +      control-bucket: juju-faefb490d69a41f0a3616a4808e0766b
1638 | +      admin-secret: 81a1e7429e6847c4941fda7591246594
1639 | + |
1640 | +This is a sample environment configured to run with EC2 machines and S3
1641 | +permanent storage. To make this environment actually useful, you will need |
1642 | +to tell juju about an AWS access key and secret key. To do this, you |
1643 | +can either set the ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY`` |
1644 | +environment variables (as usual for other EC2 tools) or you can add |
1645 | +``access-key`` and ``secret-key`` options to your ``environments.yaml``. |
1646 | +For example:: |
1647 | + |
1648 | +  environments:
1649 | +    sample:
1650 | +      type: ec2
1651 | +      access-key: YOUR-ACCESS-KEY-GOES-HERE
1652 | +      secret-key: YOUR-SECRET-KEY-GOES-HERE
1653 | +      control-bucket: juju-faefb490d69a41f0a3616a4808e0766b
1654 | +      admin-secret: 81a1e7429e6847c4941fda7591246594
1655 | + |
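| +Alternatively, a shell sketch of the environment-variable route (with
| +placeholder values) would be::
| +
| +    export AWS_ACCESS_KEY_ID=YOUR-ACCESS-KEY-GOES-HERE
| +    export AWS_SECRET_ACCESS_KEY=YOUR-SECRET-KEY-GOES-HERE
| +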
1656 | +The S3 bucket does not need to exist already. |
1657 | + |
1658 | +.. note:: |
1659 | + If you already have an AWS account, you can determine your access key by |
1660 | + visiting http://aws.amazon.com/account, clicking "Security Credentials" and |
1661 | + then clicking "Access Credentials". You'll be taken to a table that lists |
1662 | + your access keys and has a "show" link for each access key that will reveal |
1663 | + the associated secret key. |
1664 | |
1665 | === added file 'source/glossary.rst' |
1666 | --- source/glossary.rst 1970-01-01 00:00:00 +0000 |
1667 | +++ source/glossary.rst 2012-01-18 20:50:30 +0000 |
1668 | @@ -0,0 +1,121 @@ |
1669 | +.. _glossary: |
1670 | + |
1671 | +Glossary |
1672 | +======== |
1673 | + |
1674 | +.. glossary:: |
1675 | + |
1676 | + Bootstrap |
1677 | + To bootstrap an environment means initializing it so that Services may be
1678 | + deployed on it. |
1679 | + |
1680 | + Endpoint |
1681 | + The combination of a service name and a relation name. |
1682 | + |
1683 | + juju |
1684 | + The whole of the software documented here.
1685 | + |
1686 | + Environment |
1687 | + An Environment is a configured location onto which Services
1688 | + can be deployed. An Environment typically has a name,
1689 | + which can usually be omitted when there's a single Environment |
1690 | + configured, or when a default is explicitly defined. |
1691 | + Depending on the type of Environment, it may have to be |
1692 | + bootstrapped before interactions with it may take place |
1693 | + (e.g. EC2). The local environment configuration is defined in |
1694 | + the ~/.juju/environments.yaml file. |
1695 | + |
1696 | + Charm |
1697 | + A Charm provides the definition of the service, including its metadata, |
1698 | + dependencies on other services, necessary packages, as well as the logic
1699 | + for management of the application. It is the layer that integrates an
1700 | + external application component like Postgres or WordPress into juju.
1701 | + A juju Service may generally be seen as the composition of its juju
1702 | + Charm and the upstream application (traditionally made available through |
1703 | + its package). |
1704 | + |
1705 | + Charm URL |
1706 | + A Charm URL is a resource locator for a charm, with the following format |
1707 | + and restrictions:: |
1708 | + |
1709 | + <schema>:[~<user>/]<collection>/<name>[-<revision>] |
1710 | + |
1711 | + `schema` must be either "cs", for a charm from the Juju charm store, or |
1712 | + "local", for a charm from a local repository. |
1713 | + |
1714 | + `user` is only valid in charm store URLs, and allows you to source |
1715 | + charms from individual users (rather than from the main charm store); |
1716 | + it must be a valid Launchpad user name. |
1717 | + |
1718 | + `collection` denotes a charm's purpose and status, and is derived from |
1719 | + the Ubuntu series targeted by its contained charms: examples include |
1720 | + "natty", "oneiric", "oneiric-universe". |
1721 | + |
1722 | + `name` is just the name of the charm; it must start and end with |
1723 | + lowercase (ascii) letters, and can otherwise contain any combination of |
1724 | + lowercase letters, digits, and "-"s. |
1725 | + |
1726 | + `revision`, if specified, points to a specific revision of the charm |
1727 | + pointed to by the rest of the URL. It must be a non-negative integer. |
1728 | + |
1729 | + Repository |
1730 | + A location where multiple charms are stored. Repositories may be as simple |
1731 | + as a directory structure on a local disk, or as complex as a rich smart |
1732 | + server supporting remote searching and so on. |
1733 | + |
1734 | + Relation |
1735 | + Relations are the way in which juju enables Services to communicate |
1736 | + to each other, and the way in which the topology of Services is assembled. |
1737 | + The Charm defines which Relations a given Service may establish, and |
1738 | + what kind of interface these Relations require. In many cases, the |
1739 | + establishment of a Relation will result in an actual TCP connection being
1740 | + created between the Service Units, but that's not necessarily the case. |
1741 | + Relations may also be established to inform Services of configuration |
1742 | + parameters, to request monitoring information, or any other details which |
1743 | + the Charm author has chosen to make available. |
1744 | + |
1745 | + Service |
1746 | + juju operates in terms of services. |
1747 | + |
1748 | + A service is any application (or set of applications) that is |
1749 | + integrated into the framework as an individual component which should |
1750 | + generally be joined with other components to perform a more complex |
1751 | + goal. |
1752 | + |
1753 | + As an example, WordPress could be deployed as a service and, to perform |
1754 | + its tasks properly, might communicate with a database service |
1755 | + and a load balancer service. |
1756 | + |
1757 | + Service Unit |
1758 | + A running instance of a given juju Service. Simple Services may |
1759 | + be deployed with a single Service Unit, but it is possible for an |
1760 | + individual Service to have multiple Service Units running in independent |
1761 | + machines. All Service Units for a given Service will share the same |
1762 | + Charm, the same relations, and the same user-provided configuration. |
1763 | + |
1764 | + For instance, one may deploy a single MongoDB Service, and specify that |
1765 | + it should run 3 Units, so that the replica set is resilient to failures. |
1766 | + Internally, even though the replica set shares the same user-provided |
1767 | + configuration, each Unit may be performing different roles within the
1768 | + replica set, as defined by the Charm. |
1769 | + |
1770 | + Service Configuration |
1771 | + There are many different settings in a juju deployment, but
1772 | + the term Service Configuration refers to the settings which a user can |
1773 | + define to customize the behavior of a Service. |
1774 | + |
1775 | + The behavior of a Service when its Service Configuration changes is |
1776 | + entirely defined by its Charm. |
1777 | + |
1778 | + Provisioning Agent |
1779 | + Software responsible for automatically allocating and terminating |
1780 | + machines in an Environment, as necessary for the requested configuration. |
1781 | + |
1782 | + Machine Agent |
1783 | + Software which runs inside each machine that is part of an Environment, |
1784 | + and is able to handle the needs of deploying and managing Service Units |
1785 | + in this machine. |
1786 | + |
1787 | + Service Unit Agent |
1788 | + Software which manages the whole lifecycle of a single Service Unit.
1789 | + |
1790 | |
1791 | === added file 'source/hook-debugging.rst' |
1792 | --- source/hook-debugging.rst 1970-01-01 00:00:00 +0000 |
1793 | +++ source/hook-debugging.rst 2012-01-18 20:50:30 +0000 |
1794 | @@ -0,0 +1,108 @@ |
1795 | +Hook debugging |
1796 | +============== |
1797 | + |
1798 | +Introduction |
1799 | +------------ |
1800 | + |
1801 | +An important facility in any distributed system is the ability to
1802 | +introspect the running system, and to debug it. Within juju the
1803 | +actions performed by the system are the execution of charm-defined
1804 | +hooks. The ``debug-log`` cli provides for inspecting the total state of
1805 | +the system by capturing the logs of all agents and the output of all
1806 | +hooks run by the system.
1807 | + |
1808 | +To facilitate better debugging of hooks, the ``debug-hooks`` cli |
1809 | +provides for interactive shell usage as a substitute for running a |
1810 | +hook. This allows a charm author or system administrator
1811 | +to interact with the system in a live environment and either develop |
1812 | +or debug a hook. |
1813 | + |
1814 | +How it works |
1815 | +------------ |
1816 | + |
1817 | +When the juju user utilizes the hook debug command like so:: |
1818 | + |
1819 | + juju debug-hooks unit_name [hook_name] |
1820 | + |
1821 | +juju is instructed to replace the execution of the hook from the |
1822 | +charm of the respective service unit, and instead to execute it in a |
1823 | +shell associated with a tmux session. If no hook name is given, then all
1824 | +hooks will be debugged in this fashion. Multiple hook names can also be
1825 | +specified on the command line. Shell regular expressions can also be
1826 | +used to specify hook names.
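| +
| +For example, to debug just the database relation hooks on a unit (the
| +unit and hook names below are illustrative)::
| +
| +    juju debug-hooks mysql/0 db-relation-joined db-relation-changed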
1827 | + |
1828 | +The ``debug-hooks`` command line invocation will immediately connect |
1829 | +to the remote machine of the remote unit and start a named shell |
1830 | +connected to the same tmux session there. |
1831 | + |
1832 | +The native capabilities of tmux can be exploited to construct a full |
1833 | +debug/development environment on the remote machine. |
1834 | + |
1835 | +When a debugged hook is executed a new named window will pop up in the |
1836 | +tmux session with the hook shell started. The new window's title will |
1837 | +match the hook name, and the shell environment will have all the |
1838 | +juju environment variables in place, and all of the hook cli API |
1839 | +may be utilized (relation-get, relation-set, relation-list, etc.). |
1840 | + |
1841 | +It's important to note that juju serializes hook execution, so |
1842 | +while the shell is active, no other hooks will be executed on the |
1843 | +unit. Once the experimentation is done, the user must stop the hook |
1844 | +by exiting the shell session. At this point the system is then |
1845 | +free to execute additional hooks. |
1846 | + |
1847 | +It's important to note that any state changes performed while in the |
1848 | +hook window via relation-set are buffered until the hook is done
1849 | +executing, just as they are for all the relation hooks when
1850 | +running outside of a debug session. |
1851 | + |
1852 | +The debug-hooks command can be used to debug the same hook being invoked
1853 | +multiple times, as long as the user has not closed the debug tmux
1854 | +session.
1855 | + |
1856 | +The user can exit debug mode by exiting the tmux session (e.g. |
1857 | +exiting all shells). The unit will then resume its normal |
1858 | +processing. |
1859 | + |
1860 | + |
1861 | +Limitations |
1862 | +----------- |
1863 | + |
1864 | +Note that right now one can only query relation information when |
1865 | +debugging a running relation hook. This means commands such as |
1866 | +relation-get, relation-set, etc., will not work on a hook like
1867 | +'install' or 'upgrade'. |
1868 | + |
1869 | +This problem will be solved once the following bug is fixed: |
1870 | + |
1871 | + https://bugs.launchpad.net/juju/+bug/767195 |
1872 | + |
1873 | + |
1874 | +Internals |
1875 | +--------- |
1876 | + |
1877 | +Internally the ``debug-hooks`` cli begins by verifying its arguments, |
1878 | +namely that the unit exists and that the named hook is valid for the charm.
1879 | +After that it modifies the zookeeper state of the unit node, setting |
1880 | +a flag noting the hook to debug. It then establishes an ssh |
1881 | +connection to the machine and executes the tmux command. |
1882 | + |
1883 | +The unit agent will establish a watch on its own debug settings; on
1884 | +changes it inspects the debug flag and passes any named hook values
1885 | +down to the hook executor, which will construct debug hook scripts on |
1886 | +the fly for matching hooks. These debug hook scripts are responsible |
1887 | +for connecting to tmux and monitoring the execution of the hook |
1888 | +therein. |
1889 | + |
1890 | +Special care will be taken to ensure the viability of the tmux |
1891 | +session and that debug mode is active before creating the interactive |
1892 | +hook window in tmux. |
1893 | + |
1894 | + |
1895 | +Screen vs. tmux |
1896 | +--------------- |
1897 | + |
1898 | +Initially juju used GNU screen for the debugging sessions rather |
1899 | +than tmux, but tmux turned out to make it easier to avoid race |
1900 | +conditions when starting the same session concurrently, as done for |
1901 | +the debugging system. This was the main motivation that prompted |
1902 | +the change to tmux. They both worked very similarly otherwise. |
1903 | |
1904 | === added file 'source/index.rst' |
1905 | --- source/index.rst 1970-01-01 00:00:00 +0000 |
1906 | +++ source/index.rst 2012-01-18 20:50:30 +0000 |
1907 | @@ -0,0 +1,35 @@ |
1908 | +Documentation |
1909 | +============= |
1910 | + |
1911 | +.. note:: juju is still in a stage of fast development, and is not yet |
1912 | + ready for prime time. The current software is being made available as |
1913 | + an early technology preview, and while it can be experimented with, it |
1914 | + should not be used in real deployments just yet. |
1915 | + |
1916 | +.. toctree:: |
1917 | + :maxdepth: 2 |
1918 | + |
1919 | + about |
1920 | + faq |
1921 | + getting-started |
1922 | + user-tutorial |
1923 | + write-charm |
1924 | + charm |
1925 | + expose-services |
1926 | + hook-debugging |
1927 | + upgrades |
1928 | + charm-upgrades |
1929 | + provider-configuration-ec2 |
1930 | + provider-configuration-local |
1931 | + juju-internals |
1932 | + juju-drafts |
1933 | + glossary |
1934 | + generated/modules |
1935 | + |
1936 | + |
1937 | +Index and Glossary |
1938 | +================== |
1939 | + |
1940 | +* :ref:`glossary` |
1941 | +* :ref:`genindex` |
1942 | +* :ref:`search` |
1943 | |
1944 | === added directory 'source/internals' |
1945 | === added file 'source/internals/agent-presence.rst' |
1946 | --- source/internals/agent-presence.rst 1970-01-01 00:00:00 +0000 |
1947 | +++ source/internals/agent-presence.rst 2012-01-18 20:50:30 +0000 |
1948 | @@ -0,0 +1,154 @@ |
1949 | +Agent presence and settings |
1950 | +=========================== |
1951 | + |
1952 | +Agents are a set of distributed processes within the juju |
1953 | +framework tasked individually with various juju roles. Each agent |
1954 | +process interacts with zookeeper state and its environment to perform |
1955 | +its role. |
1956 | + |
1957 | +Common to all agents is the need to make their presence known, such
1958 | +that it can be monitored for availability, as well as the need for storage
1959 | +so an agent can record its state.
1960 | + |
1961 | +Presence |
1962 | +-------- |
1963 | + |
1964 | +The presence/aliveness of an agent process within the zookeeper state |
1965 | +hierarchy is denoted by an ephemeral node. This ephemeral presence |
1966 | +node is also used to store transient settings by the agent |
1967 | +process. These transient values will have a scope of the agent process |
1968 | +lifetime. These ephemeral presence nodes are stored under the /agents |
1969 | +container in a hierarchy, according to the agent's role. Agents
1970 | +fulfill many different roles within the juju system. Within the
1971 | +/agents container hierarchy, each agent's ephemeral node is contained |
1972 | +within an <agent-role> container. |
1973 | + |
1974 | +For example, unit agents are stored in the following container:: |
1975 | + |
1976 | + /agents/unit/ |
1977 | + |
1978 | +And provisioning agents in:: |
1979 | + |
1980 | + /agents/provisioning/ |
1981 | + |
1982 | +The agent presence node within these role containers is further |
1983 | +distinguished by the id it chooses to use within the container. Some |
1984 | +agents are commonly associated with a persistent domain object, such as
1985 | +a unit or machine; in that case they will utilize the persistent domain
1986 | +object's id for their node name. |
1987 | + |
1988 | +For example, a unit agent for unit 11 (display name: mysql/0) would
1989 | +have a presence node at::
1990 | + |
1991 | + /agents/unit/unit-11 |
1992 | + |
1993 | +For agents not associated with a persistent domain object, the number of
1994 | +agents is determined by configuration, and they'll utilize an ephemeral |
1995 | +sequence to denote their id. For example the first provisioning agent |
1996 | +process in the system would have a path:: |
1997 | + |
1998 | + /agents/provisioning/provisioning-0000000000 |
1999 | + |
2000 | +and the second |
2001 | + |
2002 | + /agents/provisioning/provisioning-0000000001 |
2003 | + |
2004 | +Persistence |
2005 | +----------- |
2006 | + |
2007 | +All agents are able to store transient settings of the agent process |
2008 | +within their ephemeral presence nodes within zookeeper. If an agent |
2009 | +needs persistent settings, they should be stored on an associated |
2010 | +persistent domain object.
2011 | + |
2012 | + |
2013 | +Availability |
2014 | +------------ |
2015 | + |
2016 | +One of the key features of the juju framework is an absence of
2017 | +single points of failure. To enable availability across agents we'll
2018 | +run multiple instances of agents as appropriate, monitor the presence
2019 | +of agents, and restart them as necessary. Using the role information
2020 | +and the agent id as encoded in the presence node path, we can dispatch
2021 | +appropriate error handling and recovery logic, i.e. restart a unit
2022 | +agent, or a provisioning agent.
2023 | +
2024 | +For agents providing cluster wide services, it will be typical to have
2025 | +multiple agents for each role (i.e. provisioning, recovery).
2026 | + |
2027 | +A recovery agent will need to distinguish causal factors regarding the
2028 | +disappearance of a unit presence node. In addition to error scenarios,
2029 | +the configuration state may change such that an agent is no longer
2030 | +necessary, for example an unused machine being terminated, or a unit no
2031 | +longer being assigned to a machine. To facilitate identifying the
2032 | +cause, a recovery agent would subscribe to the topology to distinguish
2033 | +a configuration change vs. a runtime change. For agents not associated
2034 | +with a persistent domain object, this identification will be based on
2035 | +examining the configured number of agents for the role, and verifying
2036 | +that it matches the runtime state.
2037 | + |
2038 | + |
2039 | +Startup and recovery |
2040 | +-------------------- |
2041 | + |
2042 | +On startup, an agent will attempt to create its presence node. For |
2043 | +agents associated with persistent domain objects, this process will
2044 | +either succeed, or result in an error due to an existing agent already |
2045 | +in place, as the ids used are unique to a single instance of the agent |
2046 | +since the id is based on the domain object id. |
2047 | + |
2048 | +For agents not attached to persistent domain objects, they should |
2049 | +verify their configuration parameter for the total number of agents |
2050 | +for the role. |
2051 | + |
2052 | +In the case of a conflict or an already satisfied configuration, the agent
2053 | +process should terminate with an error message.
2054 | + |
2055 | + |
2056 | +Agent state API |
2057 | +--------------- |
2058 | + |
2059 | + |
2060 | +``IAgentAssociated``::
2061 | +
2062 | +    """An api for persistent domain objects associated with an agent."""
2063 | +
2064 | +    def has_agent():
2065 | +        """Return boolean whether the agent (presence node) exists."""
2066 | +
2067 | +    def get_agent_state():
2068 | +        """Retrieve the agent associated with this domain object."""
2069 | +
2070 | +    def connect_agent():
2071 | +        """Create an agent presence node.
2072 | +
2073 | +        This serves to connect the agent process with its agent state,
2074 | +        and will create the agent presence node if it doesn't exist, else
2075 | +        raise an exception.
2076 | +
2077 | +        Returns an agent state.
2078 | +        """
2079 | +
2080 | + |
2081 | +``IAgentState``::
2082 | +
2083 | +    def get_transient_data():
2084 | +        """
2085 | +        Retrieve the transient data for the agent as a byte string.
2086 | +        """
2087 | +
2088 | +    def set_transient_state(data):
2089 | +        """
2090 | +        Set the transient data for the agent as a byte string.
2091 | +        """
2092 | +
2093 | +    def get_domain_object():
2094 | +        """
2095 | +        TBD if desirable. An agent attached to a persistent domain
2096 | +        object has all the knowledge to retrieve the associated
2097 | +        persistent domain object. For a machine agent state, this would
2098 | +        retrieve the machine state. For a unit agent state this would
2099 | +        retrieve the unit. Most agent implementations will already have
2100 | +        access to the domain object, and will likely retrieve or create
2101 | +        the agent from it.
2102 | +        """
2103 | |
2104 | === added file 'source/internals/expose-services.rst' |
2105 | --- source/internals/expose-services.rst 1970-01-01 00:00:00 +0000 |
2106 | +++ source/internals/expose-services.rst 2012-01-18 20:50:30 +0000 |
2107 | @@ -0,0 +1,143 @@ |
2108 | +Service exposing implementation details |
2109 | +======================================= |
2110 | + |
2111 | + |
2112 | +Not in scope |
2113 | +------------ |
2114 | + |
2115 | +It is not in the scope of this specification to determine mapping to a |
2116 | +public DNS or other directory service. |
2117 | + |
2118 | + |
2119 | +Implementation of ``expose`` and ``unexpose`` subcommands |
2120 | +--------------------------------------------------------- |
2121 | + |
2122 | +Two new user commands were added:: |
2123 | + |
2124 | + juju expose <service name> |
2125 | + |
2126 | + juju unexpose <service name> |
2127 | + |
2128 | +These commands set and remove a flag znode, **/services/<internal |
2129 | +service id>/exposed**, respectively. |
2130 | + |
2131 | + |
2132 | +Hook command additions |
2133 | +---------------------- |
2134 | + |
2135 | +Two new hook commands were added for opening and closing ports. They |
2136 | +may be executed within any charm hook:: |
2137 | + |
2138 | + open-port port[/protocol] |
2139 | + |
2140 | + close-port port[/protocol] |
2141 | + |
2142 | +These commands store in the ZK tree, under **/units/<internal unit |
2143 | +id>/ports**, the desired opened port information as serialized to |
2144 | +JSON. For example, executing ``open-port 80`` would be serialized as |
2145 | +follows:: |
2146 | + |
2147 | + {"open": [{"port": 80, "proto": "tcp"}, ...]} |
2148 | + |
2149 | +This format accommodates tracking other ancillary information for |
2150 | +exposing services. |
2151 | + |
2152 | +These commands are executed immediately within the hook. |
2153 | + |
2154 | + |
2155 | +New ``exposed`` and ``unexposed`` service hooks |
2156 | +----------------------------------------------- |
2157 | + |
2158 | +The ``exposed`` service hook runs upon a service being exposed with |
2159 | +the ``juju expose`` command. As part of the unit workflow, it is |
2160 | +scheduled to run upon the existence of **/services/<internal service |
2161 | +id>/exposed** and the service unit being in the ``started`` state. |
2162 | + |
2163 | +Likewise, the ``unexposed`` service hook runs upon the removal of a |
2164 | +**/services/<internal service id>/exposed** flag znode. |
2165 | + |
2166 | +These hooks will be implemented at a future time. |
2167 | + |
2168 | + |
2169 | +``juju status`` display of opened ports |
2170 | +------------------------------------------- |
2171 | + |
2172 | +If a service has been exposed, then the juju status output is |
2173 | +augmented. For the YAML serialization, for each exposed service, the |
2174 | +``exposed`` key is added, with the value of ``true``. (It is not |
2175 | +displayed otherwise.) For each service unit of an exposed service with |
2176 | +opened ports, the ``open-ports`` key is added, with its value a |
2177 | +sequence of ``port/proto`` strings. If no ports are opened, its value |
2178 | +is an empty list. |
2179 | + |
2180 | + |
2181 | +Provisioning agent implementation |
2182 | +--------------------------------- |
2183 | + |
2184 | +The provisioning agent currently is the only place within juju |
2185 | +that can take global actions with respect to the provider. Consequently, |
2186 | +provisioning is currently responsible for the admittedly simple EC2
2187 | +security group management (with the policy of opening all ports, seen in
2188 | +the code `juju.providers.ec2.launch.EC2LaunchMachine`). |
2189 | + |
2190 | +The provisioning agent watches for the existence of |
2191 | +**/services/<internal service id>/exposed**, and if so watches the |
2192 | +service units settings **/units/<internal unit id>/ports** and makes |
2193 | +changes in the firewall settings through the provider. |
2194 | + |
2195 | +For the EC2 provider, this is done through security groups (see |
2196 | +below). Later we will revisit to let a machine agent do this in the |
2197 | +context of iptables, so as to get out of the 500 security group limit |
2198 | +for EC2, enable multiple service units per machine, be generic with |
2199 | +other providers, and to provide future support for internal firewall |
2200 | +config. |
2201 | + |
2202 | + |
2203 | +EC2 provider implementation |
2204 | +--------------------------- |
2205 | + |
2206 | +Prior to the launch of a new machine instance, a unique EC2 security |
2207 | +group is added. The machine instance is then assigned to this group at |
2208 | +launch. Likewise, terminating the machine will result in the EC2 |
2209 | +provider deleting the security group for the machine. (This cleanup |
2210 | +will be implemented in a future branch.) |
2211 | + |
2212 | +Given this model of a security group per machine, with one service |
2213 | +unit per machine, exposing and unexposing ports for a service unit |
2214 | +corresponds to EC2's support for authorization and revocation of ports |
2215 | +per security group. In particular, EC2 supports a source address of |
2216 | +``0.0.0.0/0`` that corresponds to exposing the port to the world. |
2217 | + |
2218 | +To make this concrete, consider the example of exposing the |
2219 | +``my-wordpress`` service. Once the command ``open-port 80`` has been |
2220 | +run on a given service unit of ``my-wordpress``, then for the |
2221 | +corresponding machine instance, the equivalent of this EC2 command is |
2222 | +run:: |
2223 | + |
2224 | + ec2-authorize $MACHINE_SECURITY_GROUP -P tcp -p 80 -s 0.0.0.0/0 |
2225 | + |
2226 | +``$MACHINE_SECURITY_GROUP`` is named ``juju-ENVIRONMENT-MACHINE_ID``, |
2227 | +eg. something like ``juju-prod-2``. |
2228 | + |
2229 | +Any additional service units of ``my-wordpress``, if they run |
2230 | +``open-port 80``, will likewise invoke the equivalent of the above |
2231 | +command, for the corresponding machine security groups. |
2232 | + |
2233 | +If ``my-wordpress`` is unexposed, a ``my-wordpress`` service unit is |
2234 | +removed, the ``my-wordpress`` service is destroyed, or the |
2235 | +``close-port`` command is run for a service unit, then the equivalent |
2236 | +of the following EC2 command is run, for all applicable machines:: |
2237 | + |
2238 | + ec2-revoke $MACHINE_SECURITY_GROUP -P tcp -p 80 -s 0.0.0.0/0 |
2239 | + |
2240 | +Although this section showed the equivalent EC2 commands for |
2241 | +simplicity, txaws is used for the actual implementation. |
2242 | + |
2243 | + |
2244 | +Implementation plan |
2245 | +------------------- |
2246 | + |
2247 | +The following functionality needs to be added. This should be divisible
2248 | +into separate, small branches: |
2249 | + |
2250 | + * Implement exposed and unexposed hooks. |
2251 | |
2252 | === added file 'source/internals/unit-agent-hooks.rst' |
2253 | --- source/internals/unit-agent-hooks.rst 1970-01-01 00:00:00 +0000 |
2254 | +++ source/internals/unit-agent-hooks.rst 2012-01-18 20:50:30 +0000 |
2255 | @@ -0,0 +1,307 @@ |
2256 | +Unit Agent hooks |
2257 | +================ |
2258 | + |
2259 | +Introduction |
2260 | +------------ |
2261 | + |
2262 | +The Unit Agent (**UA**) in juju is responsible for managing and |
2263 | +maintaining the per-machine service units. By calling life-cycle and |
2264 | +state change hooks the UA is able to allow the Service Unit to respond |
2265 | +to changes in the state of the environment. This is done through the |
2266 | +invocation of hooks provided by the charm author. |
2267 | + |
2268 | +This specification outlines the interaction between the UA, the |
2269 | +running software being managed by the UA and the hooks invoked in |
2270 | +response to state or process level changes in the runtime. |
2271 | + |
2272 | +Hooks_ are defined in another document. This specification only |
2273 | +captures how they are invoked and managed, not why. |
2274 | + |
2275 | +.. _Hooks: ../charm.html#hooks |
2276 | + |
2277 | +When the Machine Agent (**MA**) spawns a UA it does so in order to |
2278 | +manage the smallest managed unit of service deployment and |
2279 | +management. The process managed by the UA will be called the **UAP** |
2280 | +later in the documentation. |
2281 | + |
2282 | +The UAP does not directly communicate with the UA; that is the
2283 | +responsibility of the hooks and is handled by the provided command
2284 | +line tools. The means through which that communication occurs and the |
2285 | +semantics of it are described in this document. |
2286 | + |
2287 | + |
2288 | +Hooks access to settings |
2289 | +------------------------ |
2290 | + |
2291 | +Hooks have access to two kinds of settings. The first is the |
2292 | +*"service settings"*, which cover configuration details for the |
2293 | +all units of the given service. These are usually provided |
2294 | +manually by the user, are global to the service, and will not |
2295 | +be written to by service units themselves. This is the |
2296 | +principal way through which an administrator configures the |
2297 | +software running inside a juju service unit.
2298 | + |
2299 | +The second kind is known as *"relation settings"*, and are |
2300 | +made available to service units whenever they are participating in |
2301 | +a relation with one or more service units. In these cases, each |
2302 | +participating unit will have its own set of settings specific to |
2303 | +that relation, and will be able to query both its local settings |
2304 | +and the remote settings from any of the participating units. |
2305 | +That's the main mechanism used by juju to allow service units |
2306 | +to communicate with each other. |
2307 | + |
2308 | +Using the example of a blog deployment we might include information |
2309 | +such as the theme used by the blog engine and the title of the blog in |
2310 | +the "service settings". The "relation settings" might contain specific |
2311 | +information about the blog engine's connection to a database deployed
2312 | +on its behalf, for example an IP address and port.
2313 | + |
2314 | +There is a single ZK node for the "service settings" and another for |
2315 | +the "relation settings". Within this node we store an dictionary |
2316 | +mapping string keys to opaque blobs of information which are managed |
2317 | +by the service hooks and the juju administrator. |
2318 | + |
2319 | +Hooks are presented with a synchronous view of the state of these
2320 | +nodes. When a request is made for a particular setting in a particular
2321 | +node the cache will present a view of that node that is consistent for
2322 | +the client for the lifetime of the hook invocation. For example,
2323 | +assume a settings node with settings 'a' and 'b'. When the hook
2324 | +requests the value of 'a' from a relation settings node we would
2325 | +present a consistent view of those settings should it request 'a' or
2326 | +'b' from that same relation settings node during the lifetime of the
2327 | +hook. If however it were to request value 'a' from a
2328 | +different relation settings node, this new node's settings would be
2329 | +cached at the time of its first interaction with the hook. Repeated
2330 | +reads of data from the same settings node will continue to yield the
2331 | +client's view of that data.
2332 | + |
2333 | +When manipulating data, even if the initial interaction with the data |
2334 | +is a set, the settings are first read into the UA cache and the cache |
2335 | +is updated with the current value. |
2336 | + |
2337 | + |
2338 | +Service Unit name |
2339 | +----------------- |
2340 | + |
2341 | +A service unit name in juju is formed by including both the name |
2342 | +of the service and a monotonically increasing number that uniquely |
2343 | +specifies the service unit for the lifetime of a juju
2344 | +deployment:: |
2345 | + |
2346 | + <service_name>/<service_unit_number> |
2347 | + |
2348 | +This results in names like "wordpress/1" and "mysql/1". The numbers
2349 | +themselves are not significant but do obey the rule that they will not |
2350 | +be reused during the lifetime of a service. This means that if a UA |
2351 | +goes away the number that represented it is retired from the |
2352 | +deployment. |
2353 | + |
2354 | +For additional details see juju/state/service.py. |
2355 | + |
2356 | + |
2357 | +Client Id |
2358 | +--------- |
2359 | + |
2360 | +Because of the way in which settings state is presented through the
2361 | +command line utilities within hooks, clients are provided a string
2362 | +token through an environment variable,
2363 | +*JUJU_CLIENT_ID*. Using this variable all command line tools will |
2364 | +connect with a shared common state when used from a single hook |
2365 | +invocation. |
2366 | + |
2367 | +The few command line utilities, such as juju-log, which could be |
2368 | +called outside the context of a hook need not pass a client id. At the |
2369 | +time of this writing it's expected that cli tools which don't need hook
2370 | +context either don't make an effort to present a stable view of |
2371 | +settings between calls (and thus run with a completely pass-through |
2372 | +cache proxy) or don't interact directly with the state. |
2373 | + |
2374 | +However as indicated below the *--client_id* flag can be passed |
2375 | +directly to any tool indicating the caching context which should be |
2376 | +used. This facilitates testing as well as allowing some flexibility in |
2377 | +the future. |
2378 | + |
2379 | +Passing a client_id which the UA is unaware of (or which has expired |
2380 | +through some other means) will result in an error and an exit code |
2381 | +being returned to the client. |
2382 | + |
2383 | + |
2384 | +Hook invocation and communication |
2385 | +--------------------------------- |
2386 | + |
2387 | +Twisted (which is used to handle networking and asynchronous |
2388 | +interactions throughout the codebase) defines a key-value oriented |
2389 | +binary protocol called AMP which is used to communicate between the UA |
2390 | +and the hooks executed on behalf of the charm. To facilitate this |
2391 | +the filename of a Unix Domain socket is provided through the process |
2392 | +environment. This socket is shared among all hook invocations and can |
2393 | +even be used by tools outside the context of a particular hook |
2394 | +invocation. Because of this providing a 'client id'_ to calls will |
2395 | +establish a connection to an internal data-cache offering a consistent |
2396 | +view of settings on a per-node, per-client basis. |
2397 | + |
2398 | +Communication over this socket takes place using an abstraction |
2399 | +provided by AMP called Commands. Hooks trigger, through the invocation |
2400 | +of utility commands, these commands to the provided socket. These |
2401 | +commands in turn schedule interactions with the settings available in |
2402 | +ZK. |
2403 | + |
2404 | +Because of the policy used for scheduling changes to settings the |
2405 | +actions of hooks are not applied directly to ZK (and thus are not
2406 | +visible outside the particular UA invoking the hook) until the hook |
2407 | +terminates with a success code. |
2408 | + |
2409 | +Here are the commands the initial revision will support and a bit about
2410 | +their characteristics (a declaration sketch follows the list):
2411 | + |
2412 | + * **get(client_id, unit_name, setting_name)** - This command will return the |
2413 | + value for a given key name or return a KeyError. A key error |
2414 | + can be mapped through to the cli as null with a failed exit |
2415 | + code. **unit_name** is processed using the standard `Service |
2416 | + Unit Name`_ policy. |
2417 | + |
2418 | + * **set(client_id, unit_name, json_blob)** - This command will enqueue a |
2419 | + state change to ZK pending successful termination of the |
2420 | + hook. **unit_name** is processed using the standard `Service |
2421 | + Unit Name`_ policy. The json_blob is a JSON string |
2422 | + serialization of a dict which will be applied as a set of |
2423 | + updates to the keys and values stored in the existing |
2424 | + settings. Because the cache object contains the updated state |
2425 | + (but is not visiable outside the hook until successful |
2426 | + completion) subsequent reads of settings would return the |
2427 | + values provided by the set call. |
2428 | + |
2429 | + * **list_relations(client_id)** - Returns a list of all relations |
2430 | + associated with a hook at the time of invocation. The values |
2431 | + of this call will typically also be exposed as an environment
2432 | + variable, **JUJU_MEMBERS**. |
2433 | + |
2434 | + * **flush(client_id)** - reserved |
2435 | + |
2436 | + * **sync(client_id)** - reserved |
2437 | + |
2438 | + * **wait(client_id, keyname)** - reserved |
2439 | + |
2440 | + |
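| +As a sketch of how such a command could be declared with Twisted's AMP
| +support (mirroring the **get** signature above; this is not a copy of
| +juju's actual classes)::
| +
| +    from twisted.protocols import amp
| +
| +    class Get(amp.Command):
| +        arguments = [("client_id", amp.String()),
| +                     ("unit_name", amp.String()),
| +                     ("setting_name", amp.String())]
| +        response = [("data", amp.String())]
| +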
2441 | +Unit Agent internal state |
2442 | +------------------------- |
2443 | + |
2444 | +This is a list of internal state which the UA maintains for the proper |
2445 | +management of hook invocations. |
2446 | + |
2447 | + * which hooks have fired (and the expected result state). |
2448 | + * the UNIX domain socket passed to hooks for AMP communication |
2449 | + * the path to the container in which the Service Unit is executing |
2450 | + (passed in environment to hooks). |
2451 | + * the cached state of relationship nodes and settings relative to |
2452 | + particular hook invocations. |
2453 | + |
2454 | + |
2455 | +Command line interface |
2456 | +---------------------- |
2457 | + |
2458 | +While the command line utilities provided use the underlying AMP |
2459 | +commands to enact their work, they provide a standard set of utilities
2460 | +for passing data between files and ZK state. |
2461 | + |
2462 | +Hooks have access to many commands provided by juju for |
2463 | +interfacing with settings. These provide a set of standard command |
2464 | +line options and conventions. |
2465 | + |
2466 | + * Command line tools like *relation-set* will check stdin,
2467 | + processing the provided input as a JSON dict of values that
2468 | + should be handled as though they were command line
2469 | + arguments. Using this convention it's possible to easily set
2470 | + many values at once without any thought to escaping values for
2471 | + the shell (see the sketch after this list).
2472 | + |
2473 | + * Similar to *curl(1)*, if you start the data with the letter @,
2474 | + the rest should be a file name to read the data from, or - if |
2475 | + you want to read the data from stdin. |
2476 | + |
2477 | + * Command line tools responsible for returning data to the user, |
2478 | + such as **relation-get**, will output JSON by default when |
2479 | + returning more than a single value or when **--format=json** is
2480 | + present in the command line. Requests for a single value default |
2481 | + to returning the value without JSON serialisation unless the |
2482 | + --format=json flag is passed. |
2483 | + |
2484 | + * Output from command line tools defaults to stdout. If the **-o**
2485 | + option is provided, any tool will write its output to the file
2486 | + named by that flag, e.g. **relation-get -o /tmp/output.json**
2487 | + will create or replace a file called /tmp/output.json with the
2488 | + data present in the relation.
2489 | + |
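| + As a concrete illustration of these conventions, a hook body might
| + contain the following minimal sketch (the setting names are
| + hypothetical, chosen only for the example)::
| +
| +   # Set several values at once by piping a JSON dict on stdin
| +   echo '{"database": "wordpress", "user": "wp"}' | relation-set
| +
| +   # A single value is returned plain by default
| +   host=`relation-get host`
| +
| +   # All of the relation data, serialized as JSON, written to a file
| +   relation-get --format=json -o /tmp/output.json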
2490 | + |
2491 | +Logging |
2492 | +------- |
2493 | + |
2494 | +Command line hooks communicate with the user/admin by means of three
2495 | +primary channels (a short example follows this list).
2496 | + 
2497 | + * **Hook exit code** Zero is success; anything else is regarded as hook
2498 | + failure and will cause the hook to be run again at a later time.
2499 | + |
2500 | + * **Stdout/Stderr** Messages printed, echoed or otherwise emitted |
2501 | + from the hooks on stdout or stderr are converted to log |
2502 | + messages of levels INFO and ERROR respectively. These messages |
2503 | + will then be emitted by the UA as they occur and are not |
2504 | + buffered like global state changes. |
2505 | + |
2506 | + * **juju-logger** (reserved) An additional command line tool |
2507 | + provided to communicate more complex logging messages to the |
2508 | + UA and make them available to the user.
2509 | + |
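| + A minimal bash hook illustrating the first two channels (the service
| + being restarted is purely illustrative; juju-logger remains reserved
| + and is not shown)::
| +
| +   #!/bin/bash
| +   echo "restarting apache"       # stdout becomes an INFO log message
| +   if ! /etc/init.d/apache2 restart; then
| +       echo "restart failed" >&2  # stderr becomes an ERROR log message
| +       exit 1                     # non-zero exit: hook failed and will be rerun later
| +   fi
| +   exit 0                         # zero exit: success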
2510 | + |
2511 | +Calling environment |
2512 | +------------------- |
2513 | + |
2514 | +Hooks can expect to be invoked with a standard environment and |
2515 | +context. The following will be included:
2516 | + |
2517 | + * `$JUJU_SOCKET` - Path to a UNIX Domain socket which will be |
2518 | + made available to the command line tools in order to communicate |
2519 | + with the UA. |
2520 | + |
2521 | + * `$JUJU_CLIENT_ID` - A unique identifier passed to a hook |
2522 | + invocation used to populate the --client_id flag to cli |
2523 | + tools. This is described in the section `Client Id`_.
2524 | + |
2525 | + * `$JUJU_LOCAL_UNIT` - The unit name of the unit this hook is |
2526 | + being invoked in. (ex: myblog/0) |
2527 | + |
2528 | + * `$JUJU_SERVICE` - The name of the service for which this hook |
2529 | + is running. (ex: myblog) |
2530 | + |
2531 | + * `$JUJU_CHARM` - The name of the charm which deployed the |
2532 | + unit the hook is running in. (ex: wordpress) |
2533 | + |
2534 | + |
2535 | +Hooks called for relationships will have the following additional
2536 | +environment variables available to them (a short sketch follows the list).
2537 | + |
2538 | + * `$JUJU_MEMBERS` - A space-delimited list of qualified |
2539 | + relationship ids uniquely specifying all the UAs participating in |
2540 | + a given relationship. (ex. "wordpress/1 wordpress/2")
2541 | + |
2542 | + * `$JUJU_RELATION` - The relation name this hook is running |
2543 | + for. It's redundant with the hook name, but is necessary for |
2544 | + the command line tools to know the current context. |
2545 | + |
2546 | + * `$JUJU_REMOTE_UNIT` - The unit name of the remote unit |
2547 | + which has triggered the hook execution. |
2548 | + |
2549 | + |
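| + A relation hook can orient itself entirely from this environment; a
| + minimal sketch (the log messages are illustrative)::
| +
| +   #!/bin/bash
| +   juju-log "Running $JUJU_RELATION hook on $JUJU_LOCAL_UNIT"
| +   juju-log "Triggered by remote unit $JUJU_REMOTE_UNIT"
| +   for member in $JUJU_MEMBERS; do
| +       juju-log "Relation member: $member"
| +   done
| +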
2550 | +Open issues |
2551 | +----------- |
2552 | + |
2553 | +There are still a number of open issues with this specification. There
2554 | +is still open debate about whether the UA runs inside the same process
2555 | +space/container as the unit, and how this will play out with security.
2556 | +This has ramifications for this specification, as we would need to take
2557 | +steps to ensure client code cannot tamper with juju's ZK state by
2558 | +connecting with its own copy of the code on a known port.
2559 | + 
2560 | +The specification also doesn't define exactly which command line tools
2561 | +get which environment settings.
2562 | + |
2563 | |
2564 | === added file 'source/internals/unit-agent-startup.rst' |
2565 | --- source/internals/unit-agent-startup.rst 1970-01-01 00:00:00 +0000 |
2566 | +++ source/internals/unit-agent-startup.rst 2012-01-18 20:50:30 +0000 |
2567 | @@ -0,0 +1,156 @@ |
2568 | +Unit Agent startup |
2569 | +================== |
2570 | + |
2571 | +Introduction |
2572 | +------------ |
2573 | + |
2574 | +The unit agent manages a state machine workflow for the unit. For each |
2575 | +transition the agent records the current state of the unit and stores |
2576 | +that information as defined below. If the agent dies, or is restarted |
2577 | +for any reason, the agent will resume the workflow from its last known |
2578 | +state. |
2579 | + |
2580 | +The available workflow states and transitions are:: |
2581 | + |
2582 | + "new" -> "ready" [label="install"] |
2583 | + "new" -> "install-error" [label="error-install"] |
2584 | + "ready" -> "running" [label="start"] |
2585 | + "ready" -> "start-error" [label="error-start"] |
2586 | + "running" -> "ready" [label="stop"] |
2587 | + "running" -> "stop-error" [label="error-stop"] |
2588 | + |
2589 | +The agent does not have any insight into external processes that the
2590 | +unit's charm may be managing; its sole responsibility is executing
2591 | +hooks in a deterministic fashion as a consequence of state changes.
2592 | + |
2593 | +Charm hook execution (excepting relation hooks) corresponds to
2594 | +invoking a transition on the unit workflow state. Any errors during a
2595 | +transition will prevent the state change. All state changes are
2596 | +recorded persistently on the unit state. If a state change fails, it
2597 | +will be reattempted up to a maximum number of retries, after which the
2598 | +unit workflow will be transitioned to a failure state specific to the
2599 | +current state and attempted transition, and administrator intervention
2600 | +will be required to resolve it.
2601 | + |
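| + For example, once the underlying problem has been fixed, the
| + administrator can retry the failed transition from the command line
| + (the unit name is illustrative)::
| +
| +   $ juju resolved --retry mysql/0
| +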
2602 | +On startup the agent will establish its presence node (as per the
2603 | +agent state spec) and read the state of the unit. If the unit is not
2604 | +running, it will have its transition hooks executed to place it in the
2605 | +running state.
2606 | + |
2607 | +The persistent state of the unit as per this state machine is stored
2608 | +locally on the unit's disk. This allows for the continuation of long
2609 | +running tasks in the face of transient communication failures with zk.
2610 | +For example, if a long running install task is kicked off, it may
2611 | +complete and record the transition to persistent state even if the zk
2612 | +connection is not available when the install hook has completed.
2613 | + |
2614 | +The persistent workflow state of the unit is also replicated to |
2615 | +zookeeper for introspectability, and communication of local failures |
2616 | +to the global coordination space. The zk state for this workflow is |
2617 | +considered non-authoritative by the unit agent if it is operating in a
2618 | +disconnected mode.
2619 | + |
2620 | + |
2621 | +Startup sequence |
2622 | +---------------- |
2623 | + |
2624 | +The following outlines the set of steps a unit agent executes when |
2625 | +starting up on a machine resource. |
2626 | + |
2627 | + - Unit agent process starts, inspects its configuration and |
2628 | + environment. |
2629 | + |
2630 | + - A zookeeper client handle is obtained. |
2631 | + |
2632 | + - The agent retrieves its unit state, via the service state manager. |
2633 | + |
2634 | + - The agent retrieves its service relations, via the relation state |
2635 | + manager. |
2636 | + |
2637 | +At deployment time, a service is deployed with its dependencies. Those |
2638 | +dependencies are actualized in relations between the services that are |
2639 | +being deployed. There are several types of relations that can be
2640 | +established. The most common is a client/server relationship, like a
2641 | +client application and a database server. Each of the services in such |
2642 | +a relation performs a role within that relation. In this case the |
2643 | +database performs the 'server' role, and the client application |
2644 | +performs the 'client' role. When actualizing the service relations, |
2645 | +the physical layout within the coordination space (zookeeper) takes |
2646 | +these roles into account. |
2647 | + |
2648 | +For example, in the client server relation, the service performing the
2649 | +'server' role has its units under a service-role container named |
2650 | +'server' denoting the role of its units in the relation. |
2651 | + |
2652 | +For each service relation, the agent will (example after this list):
2653 | + 
2654 | + - Create its ``/relations/relation-1/settings/unit-X`` relation
2655 | + local data node, if it doesn't exist.
2656 | + 
2657 | + - Create its ``/relations/relation-1/<service-role>/unit-X`` node if
2658 | + it doesn't exist. The node is not considered 'established' for the
2659 | + purposes of hook execution on other units until this node exists.
2660 | + 
2661 | + - Establish watches as outlined below.
2662 | + |
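| + For instance, a mysql unit filling the server role would ensure nodes
| + like these exist (relation id and unit name illustrative)::
| +
| +   /relations/relation-1/settings/unit-2   # the unit's relation-local data
| +   /relations/relation-1/server/unit-2     # presence under its service role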
2663 | + |
2664 | +Unit relation observation |
2665 | +------------------------- |
2666 | + |
2667 | +Based on the relation type and the unit's service role, the unit agent
2668 | +will retrieve and establish watches on the other units
2669 | +in the relation.
2670 | + |
2671 | +The relation type determines which service-role container the
2672 | +agent will get and observe the children of. In a client server
2673 | +relation there would be both:: |
2674 | + |
2675 | + /relations/relation-1/server |
2676 | + /relations/relation-1/client |
2677 | + |
2678 | +And a client unit would observe and process the unit children of the |
2679 | +server node, which functions as the service-role representing the
2680 | +endpoint of the relation. In a peer relation there would be a |
2681 | +service-role container with the path ``/relations/relation-1/peer`` |
2682 | +which would be observed and processed. |
2683 | + |
2684 | + - The unit agent will get the children and establish a watch (w-1) on |
2685 | + the service role container in the relationship. |
2686 | + |
2687 | + - For each unit found, the relation local data node |
2688 | + ``/relations/relation-X/settings/unit-X`` will have a get watch |
2689 | + (w-2) established.
2690 | + |
2691 | + - the agent stores a process-local variable noting which children it
2692 | + has seen (v-1).
2693 | + |
2694 | +Finally, after processing the children:
2695 | + |
2696 | + - if the unit agent is completing its startup, and another
2697 | + 'established' unit was found, the agent should fire its
2698 | + relation-changed hook (type joined).
2699 | + |
2700 | + |
2701 | +Watch behavior |
2702 | +-------------- |
2703 | + |
2704 | + - (w-1) if the service-role child watch fires with a delete event,
2705 | + reestablish the watch, execute the relation-changed hook (type
2706 | + departed), and update variable (v-1).
2707 | + 
2708 | + - (w-1) if the service-role child watch fires with a created event,
2709 | + reestablish the watch, execute the relation-changed hook (type
2710 | + joined), and update variable (v-1).
2711 | + 
2712 | + - (w-1) if the watch fires because the service-role node itself was
2713 | + deleted, the agent invokes the ``relation-broken`` hook (the service
2714 | + role container was removed).
2715 | + |
2716 | + - (w-2) if a unit relation local data node watch fires with a
2717 | + modified event, reestablish the watch, and execute the
2718 | + relation-changed hook (type changed) if the unit is in variable
2719 | + (v-1).
2720 | + 
2721 | + - (w-2) if a unit relation local data node watch fires with a delete
2722 | + event, ignore it (the agent's existence watch must also have fired
2723 | + with a delete event).
2724 | |
2725 | === added file 'source/internals/zookeeper.rst' |
2726 | --- source/internals/zookeeper.rst 1970-01-01 00:00:00 +0000 |
2727 | +++ source/internals/zookeeper.rst 2012-01-18 20:50:30 +0000 |
2728 | @@ -0,0 +1,215 @@ |
2729 | +ZooKeeper |
2730 | +========= |
2731 | + |
2732 | +This document describes the reasoning behind juju's use of ZooKeeper, |
2733 | +and also the structure and semantics used by juju in the ZooKeeper |
2734 | +filesystem. |
2735 | + |
2736 | +juju & ZooKeeper |
2737 | +-------------------- |
2738 | + |
2739 | +ZooKeeper offers a virtual filesystem with so-called *znodes* (we'll
2740 | +refer to them simply as *nodes* in this document). The state stored in |
2741 | +the filesystem is fully introspectable and observable, and the changes |
2742 | +performed on it are atomic and globally ordered. These features are |
2743 | +used by juju to maintain its distributed runtime state in a reliable |
2744 | +and fault tolerant fashion. |
2745 | + |
2746 | +When some part of juju wants to modify the runtime state in any way,
2747 | +rather than enqueuing a message to a specific agent, it should instead |
2748 | +perform the modification in the ZooKeeper representation of the state, |
2749 | +and the agents responsible for enforcing the requested modification |
2750 | +should be watching the given nodes, so that they can realize the changes |
2751 | +performed. |
2752 | + |
2753 | +When compared to traditional message queueing, this kind of behavior |
2754 | +enables easier global analysis, fault tolerance (through redundant |
2755 | +agents which watch the same states), introspection, and so on. |
2756 | + |
2757 | + |
2758 | +Filesystem Organization |
2759 | +----------------------- |
2760 | + |
2761 | +The semantics and structures of all nodes used by juju in its |
2762 | +ZooKeeper filesystem usage are described below. Each entry here maps |
2763 | +to a node, and the semantics of the given node are described right |
2764 | +below it. |
2765 | + |
2766 | +Note that, unlike a traditional filesystem, nodes in ZooKeeper may |
2767 | +hold data, while still being a parent of other nodes. In some cases, |
2768 | +information is stored as content for the node itself, in YAML format. |
2769 | +These are noted in the tree below under a bulleted list and *italics*. |
2770 | +In other cases, data is stored inside a child node, noted in the tree |
2771 | +below as indented **/bold**. The decision around whether to use a |
2772 | +child node or content in the parent node revolves around use cases. |
2773 | + |
2774 | + |
2775 | +.. Not for now: |
2776 | + |
2777 | + .. _/files: |
2778 | + |
2779 | + **/files** |
2780 | + Holds information about files stored in the machine provider. Each |
2781 | + file stored in the machine provider's storage location must have a |
2782 | + file stored in the machine provider's storage location must have an
2783 | + entry here with metadata about the file.
2784 | + **/<filename>:<sha256>** |
2785 | + The name of nodes here is composed of a plain filename, a colon, and
2786 | + the file content's sha256. As of today these nodes are empty, since |
2787 | + the node name itself is enough to locate it in the storage, and to |
2788 | + assess its validity. |
2789 | + |
2790 | +**/topology** |
2791 | + Describes the current topology of machines, services, and service units. Nodes |
2792 | + under ``/machines``, ``/services``, and ``/units``, should not be considered |
2793 | + as valid unless they are described in this file. The precise format of this |
2794 | + file is an implementation detail. |
2795 | + |
2796 | +**/charms** |
2797 | + Each charm used in this environment must have one entry inside this |
2798 | + node. |
2799 | + |
2800 | + :Readable by: Everyone |
2801 | + |
2802 | + **/<namespace>:<name>-<revision>** |
2803 | + Represents a charm available in this environment. The node name |
2804 | + includes the charm namespace (ubuntu, ~user, etc), the charm name, |
2805 | + and the charm revision. |
2806 | + |
2807 | + - *sha256*: This option contains the sha256 of a file in the file |
2808 | + storage, which contains the charm bundle itself. |
2809 | + |
2810 | + - *metadata*: Contains the metadata for the charm itself. |
2811 | + |
2812 | + - *schema*: The settings accepted by this charm. The precise details |
2813 | + of this are still unspecified. |
2814 | + |
2815 | +**/services** |
2816 | + Each charm to be deployed must be included under an entry in |
2817 | + this tree. |
2818 | + |
2819 | + :Readable by: Everyone |
2820 | + |
2821 | + **/service-<0..N>** |
2822 | + Node with details about the configuration for one charm, which can |
2823 | + be used to deploy one or more charm instances for this specific |
2824 | + charm. |
2825 | + |
2826 | + - *charm*: The charm to be deployed. The value of this option should |
2827 | + be the name of a child node under the ``/charms`` parent. |
2828 | + |
2829 | + **/settings** |
2830 | + Options for the charm provided by the user, stored internally in |
2831 | + YAML format. |
2832 | + |
2833 | + :Readable by: Charm Agent |
2834 | + :Writable by: Admin tools |
2835 | + |
2836 | +**/units** |
2837 | + Each node under this parent reflects an actual service agent which should |
2838 | + be running to manage a charm. |
2839 | + |
2840 | + **/unit-<0..N>** |
2841 | + One running service. |
2842 | + |
2843 | + :Readable by: Charm Agent |
2844 | + :Writable by: Charm Agent |
2845 | + |
2846 | + **/machine** |
2847 | + Contains the internal machine id this service is assigned to. |
2848 | + |
2849 | + **/charm-agent-connected** |
2850 | + Ephemeral node which exists when a charm agent is handling |
2851 | + this instance. |
2852 | + |
2853 | + |
2854 | +**/machines** |
2855 | + |
2856 | + **/machine-<0..N>** |
2857 | + |
2858 | + **/provisioning-lock** |
2859 | + The Machine Provisioning Agent |
2860 | + |
2861 | + **/machine-agent-connected** |
2862 | + Ephemeral node created when the Machine Agent is connected. |
2863 | + |
2864 | + **/info** |
2865 | + Basic information about this machine. |
2866 | + |
2867 | + - *public-dns-name*: The public DNS name of this machine. |
2868 | + - *machine-provider-id*: e.g. the EC2 instance id.
2869 | + |
2870 | + |
2871 | +Provisioning a new machine |
2872 | +-------------------------- |
2873 | + |
2874 | +When the need for a new machine is determined, the following sequence
2875 | +of events happens inside ZooKeeper (the nodes are sketched after the list):
2876 | + |
2877 | +1. A new node is created at ``/machines/instances/<N>``. |
2878 | +2. Machine Provisioning Agent has a watcher on ``/machines/instances/``, and |
2879 | + gets notified about the new node. |
2880 | +3. Agent acquires a provisioning lock at |
2881 | + ``/machines/instances/<N>/provisioning-lock`` |
2882 | +4. Agent checks if the machine still has to be provisioned by verifying |
2883 | + if ``/machines/instances/<N>/info`` exists. |
2884 | +5. If the machine has provider launch information, then the agent schedules
2885 | + a check on the machine after ``<MachineBootstrapMaxTime>``.
2886 | +6. If not, the agent launches the machine via the provider, stores the
2887 | + provider launch info (e.g. EC2 machine id, etc.), and schedules a
2888 | + check on the machine after ``<MachineBootstrapMaxTime>``.
2889 | +7. As a result of the scheduled check, the machine provider verifies the
2890 | + existence of a ``/machines/instances/<N>/machine-agent-connected`` node
2891 | + and, if it exists, sets a watch on it.
2892 | +8. If the agent node doesn't exist after ``<MachineBootstrapMaxTime>``, then
2893 | + the agent acquires the ``/machines/instances/<N>/provisioning-lock``, |
2894 | + terminates the instance, and goes to step 6. |
2895 | + |
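| + The nodes touched during this sequence, in the order they appear above
| + (a sketch; ``3`` stands in for the new machine's id)::
| +
| +   /machines/instances/3                          # step 1: new machine requested
| +   /machines/instances/3/provisioning-lock        # step 3: provisioning lock
| +   /machines/instances/3/info                     # steps 4-6: provider launch info
| +   /machines/instances/3/machine-agent-connected  # step 7: ephemeral agent node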
2896 | + |
2897 | +Bootstrap Notes |
2898 | +~~~~~~~~~~~~~~~ |
2899 | + |
2900 | +This verification of the connected machine agent helps us guard against any |
2901 | +transient errors that may exist on a given virtual node due to provider |
2902 | +vagaries. |
2903 | + |
2904 | +When a machine provisioning agent comes up, it must scan the entire instance |
2905 | +tree to verify all nodes are running. We need to keep some state to distinguish |
2906 | +a node that has never come up from a node that has had its machine agent connection |
2907 | +die so that a new provisioning agent can distinguish between a new machine bootstrap |
2908 | +failure and a running machine failure.
2909 | + |
2910 | +We use a one time password (OTP), delivered via user data, to guard the
2911 | +machine agent's permanent principal credentials.
2912 | + |
2913 | +TODO... we should track a counter to keep track of how many times we've |
2914 | +attempted to launch a single instance.
2915 | + |
2916 | + |
2917 | +Connecting a Machine |
2918 | +-------------------- |
2919 | + |
2920 | +When a machine is launched, we utilize cloud-init to install the requisite |
2921 | +packages to run a machine agent (libzookeeper, twisted) and launch the |
2922 | +machine agent. |
2923 | + |
2924 | +The machine agent reads its one time password from ec2 user-data, connects
2925 | +to zookeeper, and reads its permanent principal info and role information,
2926 | +which it adds to its connection.
2927 | + |
2928 | +The machine agent reads and sets a watch on
2929 | +``/machines/instances/<N>/services/``. When a service is placed there, the
2930 | +agent resolves its charm, downloads the charm, creates an lxc container, and
2931 | +launches a charm agent within the container, passing the charm path.
2932 | + |
2933 | +Starting a Charm |
2934 | +------------------ |
2935 | + |
2936 | +The charm agent connects to zookeeper using principal information provided |
2937 | +by the machine agent. The charm agent reads the charm metadata, installs
2938 | +any package dependencies, and then starts invoking charm hooks.
2939 | + |
2940 | +The charm agent creates the ephemeral node |
2941 | +``/services/<service name>/instances/<N>/charm-agent-connected``. |
2942 | + |
2943 | +The charm is running when.... |
2944 | |
2945 | === added file 'source/juju-drafts.rst' |
2946 | --- source/juju-drafts.rst 1970-01-01 00:00:00 +0000 |
2947 | +++ source/juju-drafts.rst 2012-01-18 20:50:30 +0000 |
2948 | @@ -0,0 +1,10 @@ |
2949 | +Drafts |
2950 | +====== |
2951 | + |
2952 | +This section contains documents which may be unreviewed, incomplete, |
2953 | +incorrect, out-of-date, or all of those. |
2954 | + |
2955 | +.. toctree:: |
2956 | + :glob: |
2957 | + |
2958 | + drafts/* |
2959 | |
2960 | === added file 'source/juju-internals.rst' |
2961 | --- source/juju-internals.rst 1970-01-01 00:00:00 +0000 |
2962 | +++ source/juju-internals.rst 2012-01-18 20:50:30 +0000 |
2963 | @@ -0,0 +1,11 @@ |
2964 | +Implementation details |
2965 | +====================== |
2966 | + |
2967 | +This section details topics which are generally not very useful |
2968 | +for running juju, but may be interesting if you want to hack it. |
2969 | + |
2970 | +.. toctree:: |
2971 | + :glob: |
2972 | + |
2973 | + internals/* |
2974 | + |
2975 | |
2976 | === added file 'source/provider-configuration-ec2.rst' |
2977 | --- source/provider-configuration-ec2.rst 1970-01-01 00:00:00 +0000 |
2978 | +++ source/provider-configuration-ec2.rst 2012-01-18 20:50:30 +0000 |
2979 | @@ -0,0 +1,64 @@ |
2980 | +EC2 provider configuration |
2981 | +-------------------------- |
2982 | + |
2983 | +The EC2 provider accepts a number of configuration options that can
2984 | +be specified in the ``environments.yaml`` file under an ec2 provider section.
2985 | + |
2986 | + access-key: |
2987 | + The AWS access key to utilize for calls to the AWS APIs. |
2988 | + |
2989 | + secret-key: |
2990 | + The AWS secret key to utilize for calls to the AWS APIs. |
2991 | + |
2992 | + ec2-uri: |
2993 | + The EC2 API endpoint URI; by default this points to `ec2.amazonaws.com`.
2994 | + |
2995 | + region: |
2996 | + The EC2 region; by default this is `us-east-1`. If `ec2-uri` is
2997 | + specified, it will take precedence.
2998 | + |
2999 | + s3-uri: |
3000 | + The S3 API endpoint URI; by default this points to `s3.amazonaws.com`.
3001 | + |
3002 | + control-bucket: |
3003 | + An S3 bucket unique to the environment, where some runtime metadata and |
3004 | + charms are stored. |
3005 | + |
3006 | + juju-origin: |
3007 | + Defines where juju should be obtained for installing in |
3008 | + machines. Can be set to a "lp:..." branch url, to "ppa" for |
3009 | + getting packages from the official juju PPA, or to "distro" |
3010 | + for using packages from the official Ubuntu repositories. |
3011 | + |
3012 | + If this option is not set, juju will attempt to detect the |
3013 | + correct origin based on its run location and the installed |
3014 | + juju package. |
3015 | + |
3016 | + default-instance-type: |
3017 | + The instance type to be used for machines launched within the juju |
3018 | + environment. Acceptable values are based on EC2 instance type API names |
3019 | + like t1.micro or m1.xlarge. |
3020 | + |
3021 | + default-image-id: |
3022 | + The default Amazon machine image (AMI) to utilize for machines in the
3023 | + juju environment. If not specified the default image id varies by |
3024 | + region. |
3025 | + |
3026 | + default-series: |
3027 | + The default Ubuntu series to use (`oneiric`, for instance). EC2 images |
3028 | + and charms referenced without an explicit series will both default to |
3029 | + the value of this setting. |
3030 | + |
3031 | +Additional configuration options, not specific to EC2: |
3032 | + |
3033 | + authorized-keys-path: |
3034 | + The path to a public key to place onto launched machines. If no value |
3035 | + is provided for either this or ``authorized-keys`` then a search is |
3036 | + made for some default public keys "id_dsa.pub", "id_rsa.pub", |
3037 | + "identity.pub". If none of those exist, then a LookupError is
3038 | + raised when launching a machine.
3039 | + |
3040 | + authorized-keys: |
3041 | + The full content of a public key to utilize on launched machines. |
3042 | + |
3043 | + |
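| + As an example, an ec2 section of ``environments.yaml`` might look like
| + the following (the environment name and all values are placeholders)::
| +
| +   sample:
| +     type: ec2
| +     access-key: YOUR-ACCESS-KEY
| +     secret-key: YOUR-SECRET-KEY
| +     control-bucket: juju-faefb490d69a41f0a3616a4808e0766b
| +     juju-origin: ppa
| +     default-series: oneiric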
3044 | |
3045 | === added file 'source/provider-configuration-local.rst' |
3046 | --- source/provider-configuration-local.rst 1970-01-01 00:00:00 +0000 |
3047 | +++ source/provider-configuration-local.rst 2012-01-18 20:50:30 +0000 |
3048 | @@ -0,0 +1,53 @@ |
3049 | +Local provider configuration |
3050 | +---------------------------- |
3051 | + |
3052 | +The local provider allows for deploying services directly against the local/host machine |
3053 | +using LXC containers, with the goal of experimenting with juju and developing charms.
3054 | + |
3055 | +The local provider has some additional package dependencies. Attempts to use |
3056 | +this provider without these packages installed will terminate with a message |
3057 | +indicating the missing packages. |
3058 | + |
3059 | +The following packages are required:
3060 | + |
3061 | + - libvirt-bin |
3062 | + - lxc |
3063 | + - apt-cacher-ng |
3064 | + - zookeeper |
3065 | + |
3066 | + |
3067 | +The local provider can be configured by specifying "type: local" and a
3068 | +"data-dir". As an example::
3069 | + |
3070 | + local: |
3071 | + type: local |
3072 | + data-dir: /tmp/local-dev |
3073 | + control-bucket: juju-a14dfae3830142d9ac23c499395c2785999 |
3074 | + admin-secret: b3a5dee4fb8c4fc9a4db04751e5936f4 |
3075 | + juju-origin: distro |
3076 | + default-series: oneiric |
3077 | + |
3078 | +Upon running ``juju bootstrap`` a zookeeper instance will be started on the host |
3079 | +along with a machine agent. The bootstrap command will prompt for sudo access |
3080 | +as the machine agent needs to run as root in order to create containers on the |
3081 | +local machine. |
3082 | + |
3083 | +The containers created are namespaced in such a way that you can create multiple
3084 | +environments on a machine. The containers are also namespaced by user for
3085 | +multi-user machines.
3086 | + |
3087 | +Local provider environments do not survive reboots of the host at this time;
3088 | +the environment will need to be destroyed and recreated after a reboot.
3089 | + |
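| + Recovering after a host reboot therefore amounts to (both commands are
| + covered elsewhere in this documentation)::
| +
| +   $ juju destroy-environment
| +   $ juju bootstrap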
3090 | + |
3091 | +Provider specific options |
3092 | +========================= |
3093 | + |
3094 | + data-dir: |
3095 | + Directory for zookeeper state and log files. |
3096 | + |
3097 | + |
3098 | + |
3099 | + |
3100 | + |
3101 | + |
3102 | |
3103 | === added file 'source/upgrades.rst' |
3104 | --- source/upgrades.rst 1970-01-01 00:00:00 +0000 |
3105 | +++ source/upgrades.rst 2012-01-18 20:50:30 +0000 |
3106 | @@ -0,0 +1,57 @@ |
3107 | +Upgrades |
3108 | +======== |
3109 | + |
3110 | +A core functionality of any configuration management system is |
3111 | +handling the full lifecycle of service and configuration |
3112 | +upgrades. |
3113 | + |
3114 | +Charm upgrades |
3115 | +-------------- |
3116 | + |
3117 | +A common task when doing charm development is iterating over
3118 | +charm versions by upgrading the charm of a running service
3119 | +while it's live.
3120 | + |
3121 | +The use case also extends to a user upgrading a deployed |
3122 | +service's charm with a newer version from an upstream charm
3123 | +repository. |
3124 | + |
3125 | +In some cases a new charm version will also reference newer |
3126 | +software/package versions or new packages. |
3127 | + |
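| + In the simplest case this is a single command along these lines
| + (assuming a deployed service named wordpress and an updated charm in a
| + local repository)::
| +
| +   $ juju upgrade-charm --repository=examples wordpress
| +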
3128 | +More details can be found in the `charm upgrades documentation`_.
3129 | + |
3130 | +.. _`charm upgrades documentation`: ./charm-upgrades.html |
3131 | + |
3132 | + |
3133 | +*NOTE* At the moment this is the only form of upgrade that juju
3134 | +provides.
3135 | + |
3136 | +Service upgrades |
3137 | +---------------- |
3138 | + |
3139 | +There's an interesting set of upgrade use cases which embody lots of
3140 | +real world usage; these have been left for future work.
3141 | + |
3142 | +One case is where an application is deployed across multiple service |
3143 | +units, and the code needs to be upgraded in lock step across all of |
3144 | +them (either due to software incompatibility or data changes in
3145 | +related services). |
3146 | + |
3147 | +Additionally, the practices of a rolling upgrade and of cloning a service
3148 | +as an upgrade mechanism are also interesting problems, which are left
3149 | +for future work.
3150 | + |
3151 | +juju upgrades |
3152 | +------------- |
3153 | + |
3154 | +One last upgrade scenario is upgrading the juju software
3155 | +itself.
3156 | + |
3157 | +At the moment juju is deployed from revision control, although it's |
3158 | +being packaged for the future. Currently all of the juju agents |
3159 | +maintain persistent connections to zookeeper, the failure of which may |
3160 | +be grounds for the system to take corrective action. As a simple |
3161 | +notion of performing system wide juju upgrades, the software would |
3162 | +be updated on the existing systems, and then the agents restarted but |
3163 | +instructed to keep their existing zookeeper session ids. |
3164 | |
3165 | === added file 'source/user-tutorial.rst' |
3166 | --- source/user-tutorial.rst 1970-01-01 00:00:00 +0000 |
3167 | +++ source/user-tutorial.rst 2012-01-18 20:50:30 +0000 |
3168 | @@ -0,0 +1,335 @@ |
3169 | +.. _user-tutorial: |
3170 | + |
3171 | +User tutorial |
3172 | +============= |
3173 | + |
3174 | +Introduction |
3175 | +------------ |
3176 | + |
3177 | +This tutorial demonstrates basic features of juju from a user perspective. |
3178 | +A juju user would typically be a devops engineer or a sys-admin who is interested
3179 | +in automated deployment and management of servers and services.
3180 | + |
3181 | +Bootstrapping |
3182 | +------------- |
3183 | + |
3184 | +The first step for deploying a juju system is to perform bootstrapping.
3185 | +Bootstrapping launches a utility instance that is used in all subsequent |
3186 | +operations to launch and orchestrate other instances:: |
3187 | + |
3188 | + $ juju bootstrap |
3189 | + |
3190 | +Note that while the command should display a message indicating it has finished |
3191 | +successfully, that does not mean the bootstrapping instance is immediately |
3192 | +ready for usage. Bootstrapping an instance can take a couple of minutes. To
3193 | +check on the status of the juju deployment, we can use the status command:: |
3194 | + |
3195 | + $ juju status |
3196 | + |
3197 | +If the bootstrapping node has not yet completed bootstrapping, the status |
3198 | +command may either mention the environment is not yet ready, or may display a |
3199 | +connection timeout such as:: |
3200 | + |
3201 | + INFO Connecting to environment. |
3202 | + ERROR Connection refused |
3203 | + ProviderError: Interaction with machine provider failed: |
3204 | + ConnectionTimeoutException('could not connect before timeout after 2 |
3205 | + retries',) |
3206 | + ERROR ProviderError: Interaction with machine |
3207 | + provider failed: ConnectionTimeoutException('could not connect before timeout |
3208 | + after 2 retries',) |
3209 | + |
3210 | +This is simply an indication that the environment needs more time to complete
3211 | +initialization. It is recommended you retry every minute. Once the environment |
3212 | +has properly initialized, the status command should display:: |
3213 | + |
3214 | + machines: |
3215 | + 0: {dns-name: ec2-50-16-61-111.compute-1.amazonaws.com, instance-id: i-2a702745} |
3216 | + services: {} |
3217 | + |
3218 | +Note the following: machine "0" has been started. This is the bootstrapping
3219 | +node and the first node to be started. The dns-name for the node is printed,
3220 | +as is the EC2 instance-id. Since no services have been deployed to the
3221 | +juju system yet, the list of deployed services is empty.
3222 | + |
3223 | +Starting debug-log |
3224 | +------------------ |
3225 | + |
3226 | +While not a requirement, it is beneficial for the understanding of juju to |
3227 | +start a debug-log session. juju's debug-log provides great insight into the |
3228 | +execution of various hooks as they are triggered by various events. It is |
3229 | +important to understand that debug-log shows events from a distributed |
3230 | +environment (multiple-instances). This means that log lines will alternate |
3231 | +between output from different instances. To start a debug-log session, from a |
3232 | +secondary terminal issue:: |
3233 | + |
3234 | + $ juju debug-log |
3235 | + INFO Connecting to environment. |
3236 | + INFO Enabling distributed debug log. |
3237 | + INFO Tailing logs - Ctrl-C to stop. |
3238 | + |
3239 | +This will connect to the environment, and start tailing logs. |
3240 | + |
3241 | +Deploying service units |
3242 | +----------------------- |
3243 | + |
3244 | +Now that we have bootstrapped the juju environment, and started the |
3245 | +debug-log viewer, let's proceed by deploying a mysql service:: |
3246 | + |
3247 | + $ juju deploy --repository=/usr/share/doc/juju/examples local:oneiric/mysql |
3248 | + INFO Connecting to environment. |
3249 | + INFO Charm deployed as service: 'mysql' |
3250 | + INFO 'deploy' command finished successfully |
3251 | + |
3252 | +Checking the debug-log window, we can see the mysql service unit being |
3253 | +downloaded and started:: |
3254 | + |
3255 | + Machine:1: juju.agents.machine DEBUG: Downloading charm |
3256 | + local:oneiric/mysql-11... |
3257 | + Machine:1: juju.agents.machine INFO: Started service unit mysql/0 |
3258 | + |
3259 | +It is important to note the different debug levels. DEBUG is used for very
3260 | +detailed logging messages; usually you should not care about reading such
3261 | +messages unless you are trying to debug (hence the name) a specific problem.
3262 | +The INFO level is used for slightly more important informational
3263 | +messages. In this case, these messages are generated as the mysql charm's |
3264 | +hooks are being executed. Let's check the current status:: |
3265 | + |
3266 | + $ juju status |
3267 | + machines: |
3268 | + 0: {dns-name: ec2-50-16-61-111.compute-1.amazonaws.com, instance-id: i-2a702745} |
3269 | + 1: {dns-name: ec2-50-16-117-185.compute-1.amazonaws.com, instance-id: i-227e294d} |
3270 | + services: |
3271 | + mysql: |
3272 | + charm: local:oneiric/mysql-11 |
3273 | + relations: {} |
3274 | + units: |
3275 | + mysql/0: |
3276 | + machine: 1 |
3277 | + relations: {} |
3278 | + state: null |
3279 | + |
3280 | +We can see a new EC2 instance has now been spun up for mysql. Information for |
3281 | +this instance is displayed as machine number 1 and mysql is now listed under |
3282 | +services. It is apparent the mysql service unit has no relations, since it has |
3283 | +not been connected to wordpress yet. Since this is the first mysql service
3284 | +unit, it is referred to as mysql/0; subsequent service units would be
3285 | +named mysql/1 and so on.
3286 | + |
3287 | +.. note:: |
3288 | + An important distinction to make is the difference between a service |
3289 | + and a service unit. A service is a high level concept relating to an |
3290 | + end-user visible service such as mysql. The mysql service would be |
3291 | + composed of several mysql service units referred to as mysql/0, mysql/1 |
3292 | + and so on. |
3293 | + |
3294 | +The mysql service state is listed as null since it's not ready yet.
3295 | +Downloading, installing, configuring and starting mysql can take some time.
3296 | +However, we don't have to wait for it to finish configuring; let's proceed
3297 | +with deploying wordpress::
3298 | + |
3299 | + $ juju deploy --repository=/usr/share/doc/juju/examples local:oneiric/wordpress |
3300 | + |
3301 | +Let's wait for a minute for all services to complete their configuration cycle and |
3302 | +get properly started, then issue a status command:: |
3303 | + |
3304 | + $ juju status |
3305 | + machines: |
3306 | + 0: {dns-name: ec2-50-16-61-111.compute-1.amazonaws.com, instance-id: i-2a702745} |
3307 | + 1: {dns-name: ec2-50-16-117-185.compute-1.amazonaws.com, instance-id: i-227e294d} |
3308 | + 2: {dns-name: ec2-184-72-156-54.compute-1.amazonaws.com, instance-id: i-9c7e29f3} |
3309 | + services: |
3310 | + mysql: |
3311 | + charm: local:oneiric/mysql-11 |
3312 | + relations: {} |
3313 | + units: |
3314 | + mysql/0: |
3315 | + machine: 1 |
3316 | + relations: {} |
3317 | + state: started |
3318 | + wordpress: |
3319 | + charm: local:oneiric/wordpress-29 |
3320 | + relations: {} |
3321 | + units: |
3322 | + wordpress/0: |
3323 | + machine: 2 |
3324 | + relations: {} |
3325 | + state: started |
3326 | + |
3327 | +mysql/0 as well as wordpress/0 are both now in the started state. Checking the
3328 | +debug-log would reveal wordpress has been started as well.
3329 | + |
3330 | +Adding a relation |
3331 | +----------------- |
3332 | + |
3333 | +While mysql and wordpress service units have been started, they are still |
3334 | +isolated from each other. An important concept for juju is connecting |
3335 | +various service units together to create a bigger juju! Adding a relation |
3336 | +between service units causes hooks to trigger, in effect causing all service |
3337 | +units to collaborate and work together to reach the desired end state. Adding a |
3338 | +relation is extremely simple:: |
3339 | + |
3340 | + $ juju add-relation wordpress mysql |
3341 | + INFO Connecting to environment. |
3342 | + INFO Added mysql relation to all service units. |
3343 | + INFO 'add_relation' command finished successfully |
3344 | + |
3345 | +Checking the juju status, we see that the db relation now exists with state
3346 | +up:: |
3347 | + |
3348 | + $ juju status |
3349 | + machines: |
3350 | + 0: {dns-name: ec2-50-16-61-111.compute-1.amazonaws.com, instance-id: i-2a702745} |
3351 | + 1: {dns-name: ec2-50-16-117-185.compute-1.amazonaws.com, instance-id: i-227e294d} |
3352 | + 2: {dns-name: ec2-184-72-156-54.compute-1.amazonaws.com, instance-id: i-9c7e29f3} |
3353 | + services: |
3354 | + mysql: |
3355 | + charm: local:oneiric/mysql-11 |
3356 | + relations: {db: wordpress} |
3357 | + units: |
3358 | + mysql/0: |
3359 | + machine: 1 |
3360 | + relations: |
3361 | + db: {state: up} |
3362 | + state: started |
3363 | + wordpress: |
3364 | + charm: local:oneiric/wordpress-29 |
3365 | + relations: {db: mysql} |
3366 | + units: |
3367 | + wordpress/0: |
3368 | + machine: 2 |
3369 | + relations: |
3370 | + db: {state: up} |
3371 | + state: started |
3372 | + |
3373 | +Exposing the service to the world |
3374 | +--------------------------------- |
3375 | + |
3376 | +All that remains is to expose the service to the outside world:: |
3377 | + |
3378 | + $ juju expose wordpress |
3379 | + |
3380 | +You can now point your browser at the public dns-name for instance 2 (running |
3381 | +wordpress) to view the wordpress blog.
3382 | + |
3383 | +Tracing hook execution |
3384 | +---------------------- |
3385 | + |
3386 | +A juju user should never have to trace the execution order of hooks,
3387 | +however if you are the kind of person who enjoys looking under the hood, this |
3388 | +section is for you. Understanding hook order execution, the parallel nature of |
3389 | +hook execution across instances, and how relation-set in a hook can trigger the |
3390 | +execution of another hook is quite interesting and provides insight into |
3391 | +juju internals.
3392 | + |
3393 | +Here are a few important messages from the debug-log of this juju run. The |
3394 | +date field has been deliberately left in this log, in order to understand the |
3395 | +parallel nature of hook execution. |
3396 | + |
3397 | +Things to consider while reading the log include: |
3398 | + * The time the log message was generated |
3399 | + * Which service unit is causing the log message (for example mysql/0) |
3400 | + * The message logging level. In this run DEBUG messages are generated by the |
3401 | + juju core engine, while WARNING messages are generated by calling |
3402 | + juju-log from inside charms (which you can read in the examples |
3403 | + folder) |
3404 | + |
3405 | +Let's view selected debug-log messages which can help in understanding the
3406 | +execution order::
3407 | + |
3408 | + 14:29:43,625 unit:mysql/0: hook.scheduler DEBUG: executing hook for wordpress/0:joined |
3409 | + 14:29:43,626 unit:mysql/0: unit.relation.lifecycle DEBUG: Executing hook db-relation-joined |
3410 | + 14:29:43,660 unit:wordpress/0: hook.scheduler DEBUG: executing hook for mysql/0:joined |
3411 | + 14:29:43,660 unit:wordpress/0: unit.relation.lifecycle DEBUG: Executing hook db-relation-joined |
3412 | + 14:29:43,661 unit:wordpress/0: unit.relation.lifecycle DEBUG: Executing hook db-relation-changed |
3413 | + 14:29:43,789 unit:mysql/0: unit.hook.api WARNING: Creating new database and corresponding security settings |
3414 | + 14:29:43,813 unit:wordpress/0: unit.hook.api WARNING: Retrieved hostname: ec2-184-72-156-54.compute-1.amazonaws.com |
3415 | + 14:29:43,976 unit:mysql/0: unit.relation.lifecycle DEBUG: Executing hook db-relation-changed |
3416 | + 14:29:43,997 unit:wordpress/0: hook.scheduler DEBUG: executing hook for mysql/0:modified |
3417 | + 14:29:43,997 unit:wordpress/0: unit.relation.lifecycle DEBUG: Executing hook db-relation-changed |
3418 | + 14:29:44,143 unit:wordpress/0: unit.hook.api WARNING: Retrieved hostname: ec2-184-72-156-54.compute-1.amazonaws.com |
3419 | + 14:29:44,849 unit:wordpress/0: unit.hook.api WARNING: Creating appropriate upload paths and directories |
3420 | + 14:29:44,992 unit:wordpress/0: unit.hook.api WARNING: Writing wordpress config file /etc/wordpress/config-ec2-184-72-156-54.compute-1.amazonaws.com.php |
3421 | + 14:29:45,130 unit:wordpress/0: unit.hook.api WARNING: Writing apache config file /etc/apache2/sites-available/ec2-184-72-156-54.compute-1.amazonaws.com |
3422 | + 14:29:45,301 unit:wordpress/0: unit.hook.api WARNING: Enabling apache modules: rewrite, vhost_alias |
3423 | + 14:29:45,512 unit:wordpress/0: unit.hook.api WARNING: Enabling apache site: ec2-184-72-156-54.compute-1.amazonaws.com |
3424 | + 14:29:45,688 unit:wordpress/0: unit.hook.api WARNING: Restarting apache2 service |
3425 | + |
3426 | + |
3427 | +Scaling the juju |
3428 | +-------------------- |
3429 | + |
3430 | +Assuming your blog got really popular and is under high load, you decide to
3431 | +scale it up (it's a cloud deployment after all). juju makes this magically
3432 | +easy. All that is needed is::
3433 | + |
3434 | + $ juju add-unit wordpress |
3435 | + INFO Connecting to environment. |
3436 | + INFO Unit 'wordpress/1' added to service 'wordpress' |
3437 | + INFO 'add_unit' command finished successfully |
3438 | + $ juju status |
3439 | + machines: |
3440 | + 0: {dns-name: ec2-50-16-61-111.compute-1.amazonaws.com, instance-id: i-2a702745} |
3441 | + 1: {dns-name: ec2-50-16-117-185.compute-1.amazonaws.com, instance-id: i-227e294d} |
3442 | + 2: {dns-name: ec2-184-72-156-54.compute-1.amazonaws.com, instance-id: i-9c7e29f3} |
3443 | + 3: {dns-name: ec2-50-16-156-106.compute-1.amazonaws.com, instance-id: i-ba6532d5} |
3444 | + services: |
3445 | + mysql: |
3446 | + charm: local:oneiric/mysql-11 |
3447 | + relations: {db: wordpress} |
3448 | + units: |
3449 | + mysql/0: |
3450 | + machine: 1 |
3451 | + relations: |
3452 | + db: {state: up} |
3453 | + state: started |
3454 | + wordpress: |
3455 | + charm: local:oneiric/wordpress-29 |
3456 | + relations: {db: mysql} |
3457 | + units: |
3458 | + wordpress/0: |
3459 | + machine: 2 |
3460 | + relations: |
3461 | + db: {state: up} |
3462 | + state: started |
3463 | + wordpress/1: |
3464 | + machine: 3 |
3465 | + relations: |
3466 | + db: {state: up} |
3467 | + state: started |
3468 | + |
3469 | + |
3470 | +The add-unit command starts a new wordpress instance (wordpress/1), which then |
3471 | +joins the relation with the already existing mysql/0 instance. mysql/0 notices |
3472 | +the database required has already been created and thus decides all needed |
3473 | +configuration has already been done. On the other hand, wordpress/1 reads
3474 | +service settings from mysql/0 and starts configuring itself and joining the |
3475 | +juju. Let's review a short version of debug-log for adding wordpress/1:: |
3476 | + |
3477 | + 14:36:19,755 unit:mysql/0: hook.scheduler DEBUG: executing hook for wordpress/1:joined |
3478 | + 14:36:19,755 unit:mysql/0: unit.relation.lifecycle DEBUG: Executing hook db-relation-joined |
3479 | + 14:36:19,810 unit:wordpress/1: hook.scheduler DEBUG: executing hook for mysql/0:joined |
3480 | + 14:36:19,811 unit:wordpress/1: unit.relation.lifecycle DEBUG: Executing hook db-relation-joined |
3481 | + 14:36:19,811 unit:wordpress/1: unit.relation.lifecycle DEBUG: Executing hook db-relation-changed |
3482 | + 14:36:19,918 unit:mysql/0: unit.hook.api WARNING: Database already exists, exiting |
3483 | + 14:36:19,938 unit:mysql/0: unit.relation.lifecycle DEBUG: Executing hook db-relation-changed |
3484 | + 14:36:19,990 unit:wordpress/1: unit.hook.api WARNING: Retrieved hostname: ec2-50-16-156-106.compute-1.amazonaws.com |
3485 | + 14:36:20,757 unit:wordpress/1: unit.hook.api WARNING: Creating appropriate upload paths and directories |
3486 | + 14:36:20,916 unit:wordpress/1: unit.hook.api WARNING: Writing wordpress config file /etc/wordpress/config-ec2-50-16-156-106.compute-1.amazonaws.com.php |
3487 | + 14:36:21,088 unit:wordpress/1: unit.hook.api WARNING: Writing apache config file /etc/apache2/sites-available/ec2-50-16-156-106.compute-1.amazonaws.com |
3488 | + 14:36:21,236 unit:wordpress/1: unit.hook.api WARNING: Enabling apache modules: rewrite, vhost_alias |
3489 | + 14:36:21,476 unit:wordpress/1: unit.hook.api WARNING: Enabling apache site: ec2-50-16-156-106.compute-1.amazonaws.com |
3490 | + 14:36:21,682 unit:wordpress/1: unit.hook.api WARNING: Restarting apache2 service |
3491 | + |
3492 | +Destroying the environment |
3493 | +-------------------------- |
3494 | + |
3495 | +Once you are done with a juju deployment, you need to terminate
3496 | +all running instances in order to stop paying for them. The |
3497 | +destroy-environment command will terminate all running instances in an |
3498 | +environment:: |
3499 | + |
3500 | + $ juju destroy-environment |
3501 | + |
3502 | +juju will ask for user confirmation before proceeding, as this
3503 | +command will destroy service data in the environment as well. |
3504 | |
3505 | === added file 'source/write-charm.rst' |
3506 | --- source/write-charm.rst 1970-01-01 00:00:00 +0000 |
3507 | +++ source/write-charm.rst 2012-01-18 20:50:30 +0000 |
3508 | @@ -0,0 +1,409 @@ |
3509 | +.. _write-charm: |
3510 | + |
3511 | +Writing a charm |
3512 | +=============== |
3513 | + |
3514 | +This tutorial demonstrates the basic workflow for writing, running and
3515 | +debugging a juju charm. Charms are a way to package your
3516 | +service deployment and orchestration knowledge and share it with the world.
3517 | + |
3518 | +Creating the charm |
3519 | +-------------------- |
3520 | + |
3521 | +In this example we are going to write a charm to deploy the drupal CMS |
3522 | +system. For the sake of simplicity, we are going to use the mysql charm that |
3523 | +comes bundled with juju in the examples directory. Assuming the current |
3524 | +directory is the juju trunk, let's create the directory hierarchy:: |
3525 | + |
3526 | + $ cd examples/oneiric |
3527 | + mkdir -p drupal/hooks |
3528 | + vim drupal/metadata.yaml |
3529 | + vim drupal/revision |
3530 | + |
3531 | +Note: if you don't have the juju source tree available, the `examples` repository |
3532 | +is installed into `/usr/share/doc/juju`; you can copy the repository to your |
3533 | +current directory, and work from there. |
3534 | + |
3535 | +Edit the metadata.yaml file to resemble:: |
3536 | + |
3537 | + name: drupal |
3538 | + summary: "Drupal CMS" |
3539 | + description: | |
3540 | + Installs the drupal CMS system, relates to the mysql charm provided in |
3541 | + examples directory. Can be scaled to multiple web servers |
3542 | + requires: |
3543 | + db: |
3544 | + interface: mysql |
3545 | + |
3546 | +The metadata.yaml file provides metadata around the charm. The file declares |
3547 | +a charm with the name drupal. Since this is the first time this charm has
3548 | +been edited, its revision number is one. A short and long description of the
3549 | +charm are provided. The final field is `requires`, this mentions the |
3550 | +interface type required by this charm. Since this drupal charm uses the |
3551 | +services of a mysql database, we need to require it in the metadata. Since this |
3552 | +charm does not provide a service to any other charm, there is no `provides` |
3553 | +field. You might be wondering where the interface name "mysql" came from;
3554 | +you can find the interface information in the mysql charm's
3555 | +metadata.yaml. Here it is for convenience::
3556 | + |
3557 | + name: mysql |
3558 | + summary: "MySQL relational database provider" |
3559 | + description: | |
3560 | + Installs and configures the MySQL package (mysqldb), then runs it. |
3561 | + |
3562 | + Upon a consuming service establishing a relation, creates a new |
3563 | + database for that service, if the database does not yet |
3564 | + exist. Publishes the following relation settings for consuming |
3565 | + services: |
3566 | + |
3567 | + database: database name |
3568 | + user: user name to access database |
3569 | + password: password to access the database |
3570 | + host: local hostname |
3571 | + provides: |
3572 | + db: |
3573 | + interface: mysql |
3574 | + |
3575 | +That very last line mentions that the interface that mysql provides to us is |
3576 | +"mysql". Also the description mentions that four parameters are sent to the |
3577 | +connecting charm (database, user, password, host) in order to enable it to |
3578 | +connect to the database. We will make use of those variables once we start |
3579 | +writing hooks. Such interface information is either provided in a bundled |
3580 | +README file, or in the description. Of course, you can also read the charm
3581 | +code to discover such information as well.
3582 | + |
3583 | +The ``revision`` file contains an integer representing the version of the
3584 | +charm. The revision must always be incremented (monotonically increasing)
3585 | +upon changing a charm, to allow for charm upgrades. Ours simply contains::
3586 | + 
3587 | +    1
3587 | + |
3588 | +Have a plan |
3589 | +----------- |
3590 | + |
3591 | +When attempting to write a charm, it is beneficial to have a mental plan of |
3592 | +what it takes to deploy the software. In our case, you should deploy drupal |
3593 | +manually, understand where its configuration information is written, how the |
3594 | +first node is deployed, and how further nodes are configured. With respect to |
3595 | +this charm, this is the plan:
3596 | + |
3597 | + * Install hook installs all needed components (apache, php, drush) |
3598 | + * Once the database connection information is ready, call drush on first node |
3599 | + to perform the initial setup (creates DB tables, completes setup) |
3600 | + * For scaling onto other nodes, the DB tables have already been set-up. Thus |
3601 | + we only need to append the database connection information into drupal's |
3602 | + settings.php file. We will use a template file for that.
3603 | + |
3604 | +.. note:: |
3605 | + The hooks in a charm are executable files that can be written using any |
3606 | + scripting or programming language. In our case, we'll use bash |
3607 | + |
3608 | +For production charms it is always recommended that you install software |
3609 | +components from the Ubuntu archive (using apt-get) in order to get security |
3610 | +updates. However in this example I am installing drush (Drupal shell) using |
3611 | +apt-get, then using that to download and install the latest version of drupal. |
3612 | +If you were deploying your own code, you could just as easily install a |
3613 | +revision control tool (bzr, git, hg...etc) and use that to checkout a code |
3614 | +branch to deploy from. This demonstrates the flexibility offered by juju,
3615 | +which doesn't force you into one way of doing things.
3616 | + |
3617 | +Write hooks |
3618 | +----------- |
3619 | + |
3620 | +Let's change into the hooks directory:: |
3621 | + |
3622 | + $ cd drupal/hooks |
3623 | + vim install |
3624 | + |
3625 | +Since you have already installed drupal manually, you have an idea of what
3626 | +it takes to get it installed. My install script looks like this::
3627 | + |
3628 | + #!/bin/bash |
3629 | + |
3630 | + set -eux # -x for verbose logging to juju debug-log |
3631 | + juju-log "Installing drush,apache2,php via apt-get" |
3632 | + apt-get -y install drush apache2 php5-gd libapache2-mod-php5 php5-cgi mysql-client-core-5.1 |
3633 | + a2enmod php5 |
3634 | + /etc/init.d/apache2 restart |
3635 | + juju-log "Using drush to download latest Drupal" |
3636 | + # Typo on next line, it should be www not ww |
3637 | + cd /var/ww && drush dl drupal --drupal-project-rename=juju |
3638 | + |
3639 | +I have introduced an artificial typo on the last line ("ww" instead of "www")
3640 | +to simulate the kind of error you are bound to face sooner or later. Let's
3641 | +create the other hooks::
3642 | + |
3643 | + $ vim start |
3644 | + |
3645 | +The start hook is empty; however, it needs to be a valid executable, so
3646 | +we'll add just the bash shebang line::
3647 | + |
3648 | + #!/bin/bash |
3649 | + |
3650 | +Here's the "stop" script:: |
3651 | + |
3652 | + #!/bin/bash |
3653 | + juju-log "Stopping apache" |
3654 | + /etc/init.d/apache2 stop |
3655 | + |
3656 | +The final script, which does most of the work, is "db-relation-changed". This
3657 | +script gets the database connection information set by the mysql charm, then
3658 | +sets up drupal for the first time, and opens port 80 for web access. Let's
3659 | +start with a simple version that only installs drupal on the first node. Here |
3660 | +it is:: |
3661 | + |
3662 | + #!/bin/bash |
3663 | + set -eux # -x for verbose logging to juju debug-log |
3664 | + hooksdir=$PWD |
3665 | + user=`relation-get user` |
3666 | + password=`relation-get password` |
3667 | + host=`relation-get host` |
3668 | + database=`relation-get database` |
3669 | + # All values are set together, so checking on a single value is enough |
3670 | + # If $user is not set, DB is still setting itself up, we exit awaiting next run |
3671 | + [ -z "$user" ] && exit 0 |
3672 | + juju-log "Setting up Drupal for the first time" |
3673 | + cd /var/www/juju && drush site-install -y standard \ |
3674 | + --db-url=mysql://$user:$password@$host/$database \ |
3675 | + --site-name=juju --clean-url=0 |
3676 | + cd /var/www/juju && chown www-data sites/default/settings.php |
3677 | + open-port 80/tcp |
3678 | + |
3679 | +The script is quite simple: it reads the four variables needed to connect to
3680 | +mysql, ensures they are not null, then passes them to the drupal installer. |
3681 | +Make sure all the hook scripts have executable permissions, and change |
3682 | +directory above the examples directory:: |
3683 | + |
3684 | + $ chmod +x * |
3685 | + $ cd ../../../.. |
3686 | + |
3687 | +Checking on the drupal charm file-structure, this is what we have:: |
3688 | + |
3689 | + $ find examples/oneiric/drupal |
3690 | + examples/oneiric/drupal |
3691 | + examples/oneiric/drupal/metadata.yaml |
3692 | + examples/oneiric/drupal/revision |
3693 | + examples/oneiric/drupal/hooks |
3694 | + examples/oneiric/drupal/hooks/db-relation-changed |
3695 | + examples/oneiric/drupal/hooks/stop |
3696 | + examples/oneiric/drupal/hooks/install |
3697 | + examples/oneiric/drupal/hooks/start |
3698 | + |
3699 | +Test run |
3700 | +-------- |
3701 | + |
3702 | +Let us deploy the drupal charm. Remember that the install hook has a problem
3703 | +and will not exit cleanly. First, bootstrap the environment::
3704 | + |
3705 | + $ juju bootstrap |
3706 | + |
3707 | +Wait a minute for the environment to bootstrap. Keep issuing the status
3708 | +command until the environment is ready::
3709 | + |
3710 | + $ juju status |
3711 | + 2011-06-07 14:04:06,816 INFO Connecting to environment. |
3712 | + machines: 0: {dns-name: ec2-50-19-154-237.compute-1.amazonaws.com, instance-id: i-6fb52301} |
3713 | + services: {} |
3714 | + 2011-06-07 14:04:11,125 INFO 'status' command finished successfully |
3715 | + |
3716 | +When debugging a new charm, it can be beneficial to keep the distributed
3717 | +debug-log running in a separate window::
3718 | + |
3719 | + $ juju debug-log |
3720 | + |
3721 | +Let's deploy the mysql and drupal charms:: |
3722 | + |
3723 | + $ juju deploy --repository=examples local:oneiric/mysql |
3724 | + $ juju deploy --repository=examples local:oneiric/drupal |
3725 | + |
3726 | +Once the machines are started (hint: check the debug-log), issue a status |
3727 | +command:: |
3728 | + |
3729 | + $ juju status |
3730 | + machines: |
3731 | + 0: {dns-name: ec2-50-19-154-237.compute-1.amazonaws.com, instance-id: i-6fb52301} |
3732 | + 1: {dns-name: ec2-50-16-9-102.compute-1.amazonaws.com, instance-id: i-19b12777} |
3733 | + 2: {dns-name: ec2-50-17-147-79.compute-1.amazonaws.com, instance-id: i-e7ba2c89} |
3734 | + services: |
3735 | + drupal: |
3736 | + charm: local:oneiric/drupal-1 |
3737 | + relations: {} |
3738 | + units: |
3739 | + drupal/1: |
3740 | + machine: 4 |
3741 | + open-ports: [] |
3742 | + relations: {} |
3743 | + state: install_error |
3744 | + mysql: |
3745 | + charm: local:oneiric/mysql-12 |
3746 | + relations: {} |
3747 | + units: |
3748 | + mysql/0: |
3749 | + machine: 1 |
3750 | + relations: {} |
3751 | + state: started |
3752 | + |
3753 | +Note how mysql is listed as started, while drupal's state is install_error. This
3754 | +is because the install hook has an error and did not exit cleanly (exit code 1).
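 | +
 | +As a contrived sketch of this convention, any hook that ends with a non-zero
 | +exit code would leave its unit in the corresponding error state until it is
 | +resolved::
 | +
 | +    #!/bin/bash
 | +    juju-log "Simulating a failure"
 | +    # A non-zero exit marks the unit's state as <hook-name>_error
 | +    exit 1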
3755 | + |
3756 | +Debugging hooks |
3757 | +--------------- |
3758 | + |
3759 | +Let's debug the install hook. From a new window, run::
3760 | + |
3761 | + $ juju debug-hooks drupal/0 |
3762 | + |
3763 | +This will connect you to the drupal machine and present a shell. The
3764 | +debug-hooks functionality works by starting a new terminal window instead of
3765 | +executing a hook when it is triggered. This gives you a chance to run the
3766 | +hook manually, fix any errors, and re-run it. To trigger a re-run of the
3767 | +install hook, from another window issue::
3768 | + |
3769 | + $ juju resolved --retry drupal/0 |
3770 | + |
3771 | +Switching to the debug-hooks window, you will notice that a new window named
3772 | +"install" has popped up. Note that "install" is the name of the hook that this
3773 | +debug-hooks session is replacing. We change into the hooks directory and rerun
3774 | +the hook manually::
3775 | + |
3776 | + $ cd /var/lib/juju/units/drupal-0/charm/hooks/ |
3777 | + $ ./install |
3778 | + # -- snip -- |
3779 | + + cd /var/ww |
3780 | + ./install: line 10: cd: /var/ww: No such file or directory |
3781 | + |
3782 | +Problem identified. Let's edit the script, changing "ww" into "www". Rerunning
3783 | +it should now succeed. This is why it is very good practice to write hook
3784 | +scripts in an idempotent manner, such that rerunning them over and over always
3785 | +results in the same state. Do not forget to exit the install window by typing
3786 | +"exit"; this signals that the hook has finished executing successfully. Once
3787 | +you have finished debugging, you can exit the debug-hooks session completely
3788 | +by typing "exit" into the very first window ("window0").
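 | +
 | +As a minimal sketch of that idempotency idea (assuming the same paths used by
 | +the install hook above), the download step could guard itself so that a second
 | +run converges to the same state instead of failing or repeating work::
 | +
 | +    #!/bin/bash
 | +    set -eux
 | +    # Skip the download if a previous run already fetched the code
 | +    if [ ! -d /var/www/juju ]; then
 | +        cd /var/www && drush dl drupal --drupal-project-rename=juju
 | +    fi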
3789 | + |
3790 | +.. note:: |
3791 | +  While we have fixed the script, this was done on the remote machine only. You
3792 | +  need to update the local copy of the charm with your changes, increment the
3793 | +  revision number in the charm's revision file, and perform a charm upgrade to
3794 | +  push the changes, like so::
3795 | + |
3796 | + $ juju upgrade-charm --repository=examples/ drupal |
3797 | + |
3798 | +Let's continue after having fixed the install error:: |
3799 | + |
3800 | + $ juju add-relation mysql drupal |
3801 | + |
3802 | +Watching the debug-log window, you can see debugging information that verifies
3803 | +the hooks are working as they should. If you spot any error, you can launch
3804 | +debug-hooks in another window and start debugging the misbehaving hooks again.
3805 | +Note that since "add-relation" relates two charms, you cannot simply retrigger
3806 | +it by issuing "resolved --retry" as we did for the install hook. In order to
3807 | +retrigger the db-relation-changed hook, you need to remove the relation and
3808 | +create it again, like so::
3809 | + |
3810 | + $ juju remove-relation mysql drupal |
3811 | + $ juju add-relation mysql drupal |
3812 | + |
3813 | +The service should now be ready for use. The remaining step is to expose it
3814 | +to public access. While the charm signaled that it needs port 80 open, the
3815 | +port is not actually opened to the public until the administrator explicitly
3816 | +uses the expose command::
3817 | + |
3818 | + $ juju expose drupal |
3819 | + |
3821 | +Let's look at the status with the port exposed::
3821 | + |
3822 | + $ juju status |
3823 | + machines: |
3824 | + 0: {dns-name: ec2-50-19-154-237.compute-1.amazonaws.com, instance-id: i-6fb52301} |
3825 | + 1: {dns-name: ec2-50-16-9-102.compute-1.amazonaws.com, instance-id: i-19b12777} |
3826 | + 2: {dns-name: ec2-50-17-147-79.compute-1.amazonaws.com, instance-id: i-e7ba2c89} |
3827 | + services: |
3828 | + drupal: |
3829 | + exposed: true |
3830 | + charm: local:oneiric/drupal-1 |
3831 | + relations: {db: mysql} |
3832 | + units: |
3833 | + drupal/1: |
3834 | + machine: 4 |
3835 | + open-ports: [80/tcp] |
3836 | + relations: |
3837 | + db: {state: up} |
3838 | + state: started |
3839 | + mysql: |
3840 | + charm: local:oneiric/mysql-12 |
3841 | + relations: {db: drupal} |
3842 | + units: |
3843 | + mysql/0: |
3844 | + machine: 1 |
3845 | + relations: |
3846 | + db: {state: up} |
3847 | + state: started |
3848 | + |
3849 | + |
3850 | +Congratulations, your charm should now be working successfully! The
3851 | +db-relation-changed hook previously shown is not suitable for scaling drupal
3852 | +to more than one node, since it always drops the database and recreates it
3853 | +from scratch. A more complete hook needs to first check whether the DB tables
3854 | +already exist and act accordingly. Here is how such a hook might be written::
3855 | + |
3856 | + #!/bin/bash |
3857 | + set -eux # -x for verbose logging to juju debug-log |
3858 | + hooksdir=$PWD |
3859 | + user=`relation-get user` |
3860 | + password=`relation-get password` |
3861 | + host=`relation-get host` |
3862 | + database=`relation-get database` |
3863 | + # All values are set together, so checking on a single value is enough |
3864 | +    # If $user is not set, the DB is still setting itself up; exit and await the next run
3865 | + [ -z "$user" ] && exit 0 |
3866 | + |
3867 | +    if mysql -u $user --password=$password -h $host -e "use $database; show tables;" | grep -q users; then
3868 | + juju-log "Drupal already set-up. Adding DB info to configuration" |
3869 | + cd /var/www/juju/sites/default |
3870 | + cp default.settings.php settings.php |
3871 | + sed -e "s/USER/$user/" \ |
3872 | + -e "s/PASSWORD/$password/" \ |
3873 | + -e "s/HOST/$host/" \ |
3874 | + -e "s/DATABASE/$database/" \ |
3875 | + $hooksdir/drupal-settings.template >> settings.php |
3876 | + else |
3877 | + juju-log "Setting up Drupal for the first time" |
3878 | + cd /var/www/juju && drush site-install -y standard \ |
3879 | + --db-url=mysql://$user:$password@$host/$database \ |
3880 | + --site-name=juju --clean-url=0 |
3881 | + fi |
3882 | + cd /var/www/juju && chown www-data sites/default/settings.php |
3883 | + open-port 80/tcp |
3884 | + |
3885 | +.. note:: |
3886 | +  Any files that you store in the hooks directory are transported as-is to the
3887 | +  deployment machine. You can drop in configuration files or templates that you
3888 | +  can use from your hook scripts. An example of this technique is the
3889 | +  drupal-settings.template file used in the previous hook. The template is
3890 | +  rendered using sed; however, any more advanced template engine could be
3891 | +  used.
3892 | + |
3893 | +Here is the template file used:: |
3894 | + |
3895 | + $databases = array ( |
3896 | + 'default' => |
3897 | + array ( |
3898 | + 'default' => |
3899 | + array ( |
3900 | + 'database' => 'DATABASE', |
3901 | + 'username' => 'USER', |
3902 | + 'password' => 'PASSWORD', |
3903 | + 'host' => 'HOST', |
3904 | + 'port' => '', |
3905 | + 'driver' => 'mysql', |
3906 | + 'prefix' => '', |
3907 | + ), |
3908 | + ), |
3909 | + ); |
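 | +
 | +If you want to sanity-check the substitution locally, here is a quick sketch
 | +(the values below are made up for illustration)::
 | +
 | +    $ sed -e "s/USER/drupal_user/" -e "s/PASSWORD/secret/" \
 | +          -e "s/HOST/10.0.0.5/" -e "s/DATABASE/drupal/" \
 | +          drupal-settings.template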
3910 | + |
3911 | +Learn more |
3912 | +---------- |
3913 | + |
3914 | +Read more detailed information about :doc:`charm` and hooks. For more hook |
3915 | +examples, please check the examples directory in the juju source tree, or |
3916 | +check out the various charms already included in `Principia |
3917 | +<https://launchpad.net/principia>`_. |
This is pretty indecipherable due to how the merge was set up. The underlying changes should already be reflected (revision as a separate file), although the local provider could still use origin info; control-bucket doesn't apply to it.