Merge lp:~benji/juju-deployer/add-fetching-configs into lp:~gandelman-a/juju-deployer/trunk
Proposed by: Benji York
Status: Superseded
Proposed branch: lp:~benji/juju-deployer/add-fetching-configs
Merge into: lp:~gandelman-a/juju-deployer/trunk
Diff against target: 4049 lines (+3903/-1), 25 files modified
  .bzrignore (+7/-0)
  LICENSE (+674/-0)
  MANIFEST.in (+2/-0)
  README (+75/-1)
  configs/blog.yaml (+36/-0)
  configs/gui-export.yml (+42/-0)
  configs/wiki.yaml (+24/-0)
  deployer.py (+1756/-0)
  doc/Makefile (+130/-0)
  doc/announcement.txt (+53/-0)
  doc/conf.py (+216/-0)
  doc/config.rst (+96/-0)
  doc/index.rst (+24/-0)
  doc/notes.txt (+39/-0)
  setup.cfg (+7/-0)
  setup.py (+26/-0)
  test_data/blog.snippet (+1/-0)
  test_data/blog.yaml (+63/-0)
  test_data/precise/appsrv/metadata.yaml (+2/-0)
  test_data/stack-default.cfg (+58/-0)
  test_data/stack-include.template (+18/-0)
  test_data/stack-include.yaml (+19/-0)
  test_data/stack-inherits.cfg (+41/-0)
  test_deployment.py (+447/-0)
  todo.txt (+47/-0)
To merge this branch: bzr merge lp:~benji/juju-deployer/add-fetching-configs
Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Adam Gandelman | | | Pending

Review via email: mp+174869@code.launchpad.net
This proposal has been superseded by a proposal from 2013-07-15.
Commit message
Add the ability to specify a configuration file by URL.
Description of the change
This branch adds the ability to pass a URL to the -c option. When a URL is given, the configuration file is fetched over the network, stored in a local temporary file, and then used by the deployer as usual.
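The fetch-and-spool behavior described above can be sketched as follows. This is a minimal illustration only: the function name `fetch_config` is hypothetical (not necessarily what deployer.py calls it), and the sketch uses Python 3's urllib.request where the 2013 codebase would have used urllib2.

```python
import tempfile
import urllib.request


def fetch_config(path_or_url):
    """Return a local filesystem path for a -c argument.

    If the argument looks like a URL, download it into a named
    temporary file and return that file's path; otherwise return
    the argument unchanged. (Hypothetical helper illustrating the
    behavior this branch adds.)
    """
    if path_or_url.startswith(("http://", "https://")):
        with urllib.request.urlopen(path_or_url) as response:
            data = response.read()
        # delete=False so the file survives after close and the
        # deployer can open it by path later.
        tmp = tempfile.NamedTemporaryFile(suffix=".yaml", delete=False)
        tmp.write(data)
        tmp.close()
        return tmp.name
    return path_or_url
```

Plain filesystem paths pass through untouched, so existing invocations of the -c option keep working.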
Unmerged revisions
Preview Diff
1 | === added file '.bzrignore' |
2 | --- .bzrignore 1970-01-01 00:00:00 +0000 |
3 | +++ .bzrignore 2013-07-15 21:24:33 +0000 |
4 | @@ -0,0 +1,7 @@ |
5 | +deployer.sublime-project |
6 | +deployer.sublime-workspace |
7 | +tmp |
8 | +juju_deployer.egg-info |
9 | +.emacs.desktop |
10 | +.emacs.desktop.lock |
11 | +_build |
12 | |
13 | === added file 'LICENSE' |
14 | --- LICENSE 1970-01-01 00:00:00 +0000 |
15 | +++ LICENSE 2013-07-15 21:24:33 +0000 |
16 | @@ -0,0 +1,674 @@ |
17 | + GNU GENERAL PUBLIC LICENSE |
18 | + Version 3, 29 June 2007 |
19 | + |
20 | + Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/> |
21 | + Everyone is permitted to copy and distribute verbatim copies |
22 | + of this license document, but changing it is not allowed. |
23 | + |
24 | + Preamble |
25 | + |
26 | + The GNU General Public License is a free, copyleft license for |
27 | +software and other kinds of works. |
28 | + |
29 | + The licenses for most software and other practical works are designed |
30 | +to take away your freedom to share and change the works. By contrast, |
31 | +the GNU General Public License is intended to guarantee your freedom to |
32 | +share and change all versions of a program--to make sure it remains free |
33 | +software for all its users. We, the Free Software Foundation, use the |
34 | +GNU General Public License for most of our software; it applies also to |
35 | +any other work released this way by its authors. You can apply it to |
36 | +your programs, too. |
37 | + |
38 | + When we speak of free software, we are referring to freedom, not |
39 | +price. Our General Public Licenses are designed to make sure that you |
40 | +have the freedom to distribute copies of free software (and charge for |
41 | +them if you wish), that you receive source code or can get it if you |
42 | +want it, that you can change the software or use pieces of it in new |
43 | +free programs, and that you know you can do these things. |
44 | + |
45 | + To protect your rights, we need to prevent others from denying you |
46 | +these rights or asking you to surrender the rights. Therefore, you have |
47 | +certain responsibilities if you distribute copies of the software, or if |
48 | +you modify it: responsibilities to respect the freedom of others. |
49 | + |
50 | + For example, if you distribute copies of such a program, whether |
51 | +gratis or for a fee, you must pass on to the recipients the same |
52 | +freedoms that you received. You must make sure that they, too, receive |
53 | +or can get the source code. And you must show them these terms so they |
54 | +know their rights. |
55 | + |
56 | + Developers that use the GNU GPL protect your rights with two steps: |
57 | +(1) assert copyright on the software, and (2) offer you this License |
58 | +giving you legal permission to copy, distribute and/or modify it. |
59 | + |
60 | + For the developers' and authors' protection, the GPL clearly explains |
61 | +that there is no warranty for this free software. For both users' and |
62 | +authors' sake, the GPL requires that modified versions be marked as |
63 | +changed, so that their problems will not be attributed erroneously to |
64 | +authors of previous versions. |
65 | + |
66 | + Some devices are designed to deny users access to install or run |
67 | +modified versions of the software inside them, although the manufacturer |
68 | +can do so. This is fundamentally incompatible with the aim of |
69 | +protecting users' freedom to change the software. The systematic |
70 | +pattern of such abuse occurs in the area of products for individuals to |
71 | +use, which is precisely where it is most unacceptable. Therefore, we |
72 | +have designed this version of the GPL to prohibit the practice for those |
73 | +products. If such problems arise substantially in other domains, we |
74 | +stand ready to extend this provision to those domains in future versions |
75 | +of the GPL, as needed to protect the freedom of users. |
76 | + |
77 | + Finally, every program is threatened constantly by software patents. |
78 | +States should not allow patents to restrict development and use of |
79 | +software on general-purpose computers, but in those that do, we wish to |
80 | +avoid the special danger that patents applied to a free program could |
81 | +make it effectively proprietary. To prevent this, the GPL assures that |
82 | +patents cannot be used to render the program non-free. |
83 | + |
84 | + The precise terms and conditions for copying, distribution and |
85 | +modification follow. |
86 | + |
87 | + TERMS AND CONDITIONS |
88 | + |
89 | + 0. Definitions. |
90 | + |
91 | + "This License" refers to version 3 of the GNU General Public License. |
92 | + |
93 | + "Copyright" also means copyright-like laws that apply to other kinds of |
94 | +works, such as semiconductor masks. |
95 | + |
96 | + "The Program" refers to any copyrightable work licensed under this |
97 | +License. Each licensee is addressed as "you". "Licensees" and |
98 | +"recipients" may be individuals or organizations. |
99 | + |
100 | + To "modify" a work means to copy from or adapt all or part of the work |
101 | +in a fashion requiring copyright permission, other than the making of an |
102 | +exact copy. The resulting work is called a "modified version" of the |
103 | +earlier work or a work "based on" the earlier work. |
104 | + |
105 | + A "covered work" means either the unmodified Program or a work based |
106 | +on the Program. |
107 | + |
108 | + To "propagate" a work means to do anything with it that, without |
109 | +permission, would make you directly or secondarily liable for |
110 | +infringement under applicable copyright law, except executing it on a |
111 | +computer or modifying a private copy. Propagation includes copying, |
112 | +distribution (with or without modification), making available to the |
113 | +public, and in some countries other activities as well. |
114 | + |
115 | + To "convey" a work means any kind of propagation that enables other |
116 | +parties to make or receive copies. Mere interaction with a user through |
117 | +a computer network, with no transfer of a copy, is not conveying. |
118 | + |
119 | + An interactive user interface displays "Appropriate Legal Notices" |
120 | +to the extent that it includes a convenient and prominently visible |
121 | +feature that (1) displays an appropriate copyright notice, and (2) |
122 | +tells the user that there is no warranty for the work (except to the |
123 | +extent that warranties are provided), that licensees may convey the |
124 | +work under this License, and how to view a copy of this License. If |
125 | +the interface presents a list of user commands or options, such as a |
126 | +menu, a prominent item in the list meets this criterion. |
127 | + |
128 | + 1. Source Code. |
129 | + |
130 | + The "source code" for a work means the preferred form of the work |
131 | +for making modifications to it. "Object code" means any non-source |
132 | +form of a work. |
133 | + |
134 | + A "Standard Interface" means an interface that either is an official |
135 | +standard defined by a recognized standards body, or, in the case of |
136 | +interfaces specified for a particular programming language, one that |
137 | +is widely used among developers working in that language. |
138 | + |
139 | + The "System Libraries" of an executable work include anything, other |
140 | +than the work as a whole, that (a) is included in the normal form of |
141 | +packaging a Major Component, but which is not part of that Major |
142 | +Component, and (b) serves only to enable use of the work with that |
143 | +Major Component, or to implement a Standard Interface for which an |
144 | +implementation is available to the public in source code form. A |
145 | +"Major Component", in this context, means a major essential component |
146 | +(kernel, window system, and so on) of the specific operating system |
147 | +(if any) on which the executable work runs, or a compiler used to |
148 | +produce the work, or an object code interpreter used to run it. |
149 | + |
150 | + The "Corresponding Source" for a work in object code form means all |
151 | +the source code needed to generate, install, and (for an executable |
152 | +work) run the object code and to modify the work, including scripts to |
153 | +control those activities. However, it does not include the work's |
154 | +System Libraries, or general-purpose tools or generally available free |
155 | +programs which are used unmodified in performing those activities but |
156 | +which are not part of the work. For example, Corresponding Source |
157 | +includes interface definition files associated with source files for |
158 | +the work, and the source code for shared libraries and dynamically |
159 | +linked subprograms that the work is specifically designed to require, |
160 | +such as by intimate data communication or control flow between those |
161 | +subprograms and other parts of the work. |
162 | + |
163 | + The Corresponding Source need not include anything that users |
164 | +can regenerate automatically from other parts of the Corresponding |
165 | +Source. |
166 | + |
167 | + The Corresponding Source for a work in source code form is that |
168 | +same work. |
169 | + |
170 | + 2. Basic Permissions. |
171 | + |
172 | + All rights granted under this License are granted for the term of |
173 | +copyright on the Program, and are irrevocable provided the stated |
174 | +conditions are met. This License explicitly affirms your unlimited |
175 | +permission to run the unmodified Program. The output from running a |
176 | +covered work is covered by this License only if the output, given its |
177 | +content, constitutes a covered work. This License acknowledges your |
178 | +rights of fair use or other equivalent, as provided by copyright law. |
179 | + |
180 | + You may make, run and propagate covered works that you do not |
181 | +convey, without conditions so long as your license otherwise remains |
182 | +in force. You may convey covered works to others for the sole purpose |
183 | +of having them make modifications exclusively for you, or provide you |
184 | +with facilities for running those works, provided that you comply with |
185 | +the terms of this License in conveying all material for which you do |
186 | +not control copyright. Those thus making or running the covered works |
187 | +for you must do so exclusively on your behalf, under your direction |
188 | +and control, on terms that prohibit them from making any copies of |
189 | +your copyrighted material outside their relationship with you. |
190 | + |
191 | + Conveying under any other circumstances is permitted solely under |
192 | +the conditions stated below. Sublicensing is not allowed; section 10 |
193 | +makes it unnecessary. |
194 | + |
195 | + 3. Protecting Users' Legal Rights From Anti-Circumvention Law. |
196 | + |
197 | + No covered work shall be deemed part of an effective technological |
198 | +measure under any applicable law fulfilling obligations under article |
199 | +11 of the WIPO copyright treaty adopted on 20 December 1996, or |
200 | +similar laws prohibiting or restricting circumvention of such |
201 | +measures. |
202 | + |
203 | + When you convey a covered work, you waive any legal power to forbid |
204 | +circumvention of technological measures to the extent such circumvention |
205 | +is effected by exercising rights under this License with respect to |
206 | +the covered work, and you disclaim any intention to limit operation or |
207 | +modification of the work as a means of enforcing, against the work's |
208 | +users, your or third parties' legal rights to forbid circumvention of |
209 | +technological measures. |
210 | + |
211 | + 4. Conveying Verbatim Copies. |
212 | + |
213 | + You may convey verbatim copies of the Program's source code as you |
214 | +receive it, in any medium, provided that you conspicuously and |
215 | +appropriately publish on each copy an appropriate copyright notice; |
216 | +keep intact all notices stating that this License and any |
217 | +non-permissive terms added in accord with section 7 apply to the code; |
218 | +keep intact all notices of the absence of any warranty; and give all |
219 | +recipients a copy of this License along with the Program. |
220 | + |
221 | + You may charge any price or no price for each copy that you convey, |
222 | +and you may offer support or warranty protection for a fee. |
223 | + |
224 | + 5. Conveying Modified Source Versions. |
225 | + |
226 | + You may convey a work based on the Program, or the modifications to |
227 | +produce it from the Program, in the form of source code under the |
228 | +terms of section 4, provided that you also meet all of these conditions: |
229 | + |
230 | + a) The work must carry prominent notices stating that you modified |
231 | + it, and giving a relevant date. |
232 | + |
233 | + b) The work must carry prominent notices stating that it is |
234 | + released under this License and any conditions added under section |
235 | + 7. This requirement modifies the requirement in section 4 to |
236 | + "keep intact all notices". |
237 | + |
238 | + c) You must license the entire work, as a whole, under this |
239 | + License to anyone who comes into possession of a copy. This |
240 | + License will therefore apply, along with any applicable section 7 |
241 | + additional terms, to the whole of the work, and all its parts, |
242 | + regardless of how they are packaged. This License gives no |
243 | + permission to license the work in any other way, but it does not |
244 | + invalidate such permission if you have separately received it. |
245 | + |
246 | + d) If the work has interactive user interfaces, each must display |
247 | + Appropriate Legal Notices; however, if the Program has interactive |
248 | + interfaces that do not display Appropriate Legal Notices, your |
249 | + work need not make them do so. |
250 | + |
251 | + A compilation of a covered work with other separate and independent |
252 | +works, which are not by their nature extensions of the covered work, |
253 | +and which are not combined with it such as to form a larger program, |
254 | +in or on a volume of a storage or distribution medium, is called an |
255 | +"aggregate" if the compilation and its resulting copyright are not |
256 | +used to limit the access or legal rights of the compilation's users |
257 | +beyond what the individual works permit. Inclusion of a covered work |
258 | +in an aggregate does not cause this License to apply to the other |
259 | +parts of the aggregate. |
260 | + |
261 | + 6. Conveying Non-Source Forms. |
262 | + |
263 | + You may convey a covered work in object code form under the terms |
264 | +of sections 4 and 5, provided that you also convey the |
265 | +machine-readable Corresponding Source under the terms of this License, |
266 | +in one of these ways: |
267 | + |
268 | + a) Convey the object code in, or embodied in, a physical product |
269 | + (including a physical distribution medium), accompanied by the |
270 | + Corresponding Source fixed on a durable physical medium |
271 | + customarily used for software interchange. |
272 | + |
273 | + b) Convey the object code in, or embodied in, a physical product |
274 | + (including a physical distribution medium), accompanied by a |
275 | + written offer, valid for at least three years and valid for as |
276 | + long as you offer spare parts or customer support for that product |
277 | + model, to give anyone who possesses the object code either (1) a |
278 | + copy of the Corresponding Source for all the software in the |
279 | + product that is covered by this License, on a durable physical |
280 | + medium customarily used for software interchange, for a price no |
281 | + more than your reasonable cost of physically performing this |
282 | + conveying of source, or (2) access to copy the |
283 | + Corresponding Source from a network server at no charge. |
284 | + |
285 | + c) Convey individual copies of the object code with a copy of the |
286 | + written offer to provide the Corresponding Source. This |
287 | + alternative is allowed only occasionally and noncommercially, and |
288 | + only if you received the object code with such an offer, in accord |
289 | + with subsection 6b. |
290 | + |
291 | + d) Convey the object code by offering access from a designated |
292 | + place (gratis or for a charge), and offer equivalent access to the |
293 | + Corresponding Source in the same way through the same place at no |
294 | + further charge. You need not require recipients to copy the |
295 | + Corresponding Source along with the object code. If the place to |
296 | + copy the object code is a network server, the Corresponding Source |
297 | + may be on a different server (operated by you or a third party) |
298 | + that supports equivalent copying facilities, provided you maintain |
299 | + clear directions next to the object code saying where to find the |
300 | + Corresponding Source. Regardless of what server hosts the |
301 | + Corresponding Source, you remain obligated to ensure that it is |
302 | + available for as long as needed to satisfy these requirements. |
303 | + |
304 | + e) Convey the object code using peer-to-peer transmission, provided |
305 | + you inform other peers where the object code and Corresponding |
306 | + Source of the work are being offered to the general public at no |
307 | + charge under subsection 6d. |
308 | + |
309 | + A separable portion of the object code, whose source code is excluded |
310 | +from the Corresponding Source as a System Library, need not be |
311 | +included in conveying the object code work. |
312 | + |
313 | + A "User Product" is either (1) a "consumer product", which means any |
314 | +tangible personal property which is normally used for personal, family, |
315 | +or household purposes, or (2) anything designed or sold for incorporation |
316 | +into a dwelling. In determining whether a product is a consumer product, |
317 | +doubtful cases shall be resolved in favor of coverage. For a particular |
318 | +product received by a particular user, "normally used" refers to a |
319 | +typical or common use of that class of product, regardless of the status |
320 | +of the particular user or of the way in which the particular user |
321 | +actually uses, or expects or is expected to use, the product. A product |
322 | +is a consumer product regardless of whether the product has substantial |
323 | +commercial, industrial or non-consumer uses, unless such uses represent |
324 | +the only significant mode of use of the product. |
325 | + |
326 | + "Installation Information" for a User Product means any methods, |
327 | +procedures, authorization keys, or other information required to install |
328 | +and execute modified versions of a covered work in that User Product from |
329 | +a modified version of its Corresponding Source. The information must |
330 | +suffice to ensure that the continued functioning of the modified object |
331 | +code is in no case prevented or interfered with solely because |
332 | +modification has been made. |
333 | + |
334 | + If you convey an object code work under this section in, or with, or |
335 | +specifically for use in, a User Product, and the conveying occurs as |
336 | +part of a transaction in which the right of possession and use of the |
337 | +User Product is transferred to the recipient in perpetuity or for a |
338 | +fixed term (regardless of how the transaction is characterized), the |
339 | +Corresponding Source conveyed under this section must be accompanied |
340 | +by the Installation Information. But this requirement does not apply |
341 | +if neither you nor any third party retains the ability to install |
342 | +modified object code on the User Product (for example, the work has |
343 | +been installed in ROM). |
344 | + |
345 | + The requirement to provide Installation Information does not include a |
346 | +requirement to continue to provide support service, warranty, or updates |
347 | +for a work that has been modified or installed by the recipient, or for |
348 | +the User Product in which it has been modified or installed. Access to a |
349 | +network may be denied when the modification itself materially and |
350 | +adversely affects the operation of the network or violates the rules and |
351 | +protocols for communication across the network. |
352 | + |
353 | + Corresponding Source conveyed, and Installation Information provided, |
354 | +in accord with this section must be in a format that is publicly |
355 | +documented (and with an implementation available to the public in |
356 | +source code form), and must require no special password or key for |
357 | +unpacking, reading or copying. |
358 | + |
359 | + 7. Additional Terms. |
360 | + |
361 | + "Additional permissions" are terms that supplement the terms of this |
362 | +License by making exceptions from one or more of its conditions. |
363 | +Additional permissions that are applicable to the entire Program shall |
364 | +be treated as though they were included in this License, to the extent |
365 | +that they are valid under applicable law. If additional permissions |
366 | +apply only to part of the Program, that part may be used separately |
367 | +under those permissions, but the entire Program remains governed by |
368 | +this License without regard to the additional permissions. |
369 | + |
370 | + When you convey a copy of a covered work, you may at your option |
371 | +remove any additional permissions from that copy, or from any part of |
372 | +it. (Additional permissions may be written to require their own |
373 | +removal in certain cases when you modify the work.) You may place |
374 | +additional permissions on material, added by you to a covered work, |
375 | +for which you have or can give appropriate copyright permission. |
376 | + |
377 | + Notwithstanding any other provision of this License, for material you |
378 | +add to a covered work, you may (if authorized by the copyright holders of |
379 | +that material) supplement the terms of this License with terms: |
380 | + |
381 | + a) Disclaiming warranty or limiting liability differently from the |
382 | + terms of sections 15 and 16 of this License; or |
383 | + |
384 | + b) Requiring preservation of specified reasonable legal notices or |
385 | + author attributions in that material or in the Appropriate Legal |
386 | + Notices displayed by works containing it; or |
387 | + |
388 | + c) Prohibiting misrepresentation of the origin of that material, or |
389 | + requiring that modified versions of such material be marked in |
390 | + reasonable ways as different from the original version; or |
391 | + |
392 | + d) Limiting the use for publicity purposes of names of licensors or |
393 | + authors of the material; or |
394 | + |
395 | + e) Declining to grant rights under trademark law for use of some |
396 | + trade names, trademarks, or service marks; or |
397 | + |
398 | + f) Requiring indemnification of licensors and authors of that |
399 | + material by anyone who conveys the material (or modified versions of |
400 | + it) with contractual assumptions of liability to the recipient, for |
401 | + any liability that these contractual assumptions directly impose on |
402 | + those licensors and authors. |
403 | + |
404 | + All other non-permissive additional terms are considered "further |
405 | +restrictions" within the meaning of section 10. If the Program as you |
406 | +received it, or any part of it, contains a notice stating that it is |
407 | +governed by this License along with a term that is a further |
408 | +restriction, you may remove that term. If a license document contains |
409 | +a further restriction but permits relicensing or conveying under this |
410 | +License, you may add to a covered work material governed by the terms |
411 | +of that license document, provided that the further restriction does |
412 | +not survive such relicensing or conveying. |
413 | + |
414 | + If you add terms to a covered work in accord with this section, you |
415 | +must place, in the relevant source files, a statement of the |
416 | +additional terms that apply to those files, or a notice indicating |
417 | +where to find the applicable terms. |
418 | + |
419 | + Additional terms, permissive or non-permissive, may be stated in the |
420 | +form of a separately written license, or stated as exceptions; |
421 | +the above requirements apply either way. |
422 | + |
423 | + 8. Termination. |
424 | + |
425 | + You may not propagate or modify a covered work except as expressly |
426 | +provided under this License. Any attempt otherwise to propagate or |
427 | +modify it is void, and will automatically terminate your rights under |
428 | +this License (including any patent licenses granted under the third |
429 | +paragraph of section 11). |
430 | + |
431 | + However, if you cease all violation of this License, then your |
432 | +license from a particular copyright holder is reinstated (a) |
433 | +provisionally, unless and until the copyright holder explicitly and |
434 | +finally terminates your license, and (b) permanently, if the copyright |
435 | +holder fails to notify you of the violation by some reasonable means |
436 | +prior to 60 days after the cessation. |
437 | + |
438 | + Moreover, your license from a particular copyright holder is |
439 | +reinstated permanently if the copyright holder notifies you of the |
440 | +violation by some reasonable means, this is the first time you have |
441 | +received notice of violation of this License (for any work) from that |
442 | +copyright holder, and you cure the violation prior to 30 days after |
443 | +your receipt of the notice. |
444 | + |
445 | + Termination of your rights under this section does not terminate the |
446 | +licenses of parties who have received copies or rights from you under |
447 | +this License. If your rights have been terminated and not permanently |
448 | +reinstated, you do not qualify to receive new licenses for the same |
449 | +material under section 10. |
450 | + |
451 | + 9. Acceptance Not Required for Having Copies. |
452 | + |
453 | + You are not required to accept this License in order to receive or |
454 | +run a copy of the Program. Ancillary propagation of a covered work |
455 | +occurring solely as a consequence of using peer-to-peer transmission |
456 | +to receive a copy likewise does not require acceptance. However, |
457 | +nothing other than this License grants you permission to propagate or |
458 | +modify any covered work. These actions infringe copyright if you do |
459 | +not accept this License. Therefore, by modifying or propagating a |
460 | +covered work, you indicate your acceptance of this License to do so. |
461 | + |
462 | + 10. Automatic Licensing of Downstream Recipients. |
463 | + |
464 | + Each time you convey a covered work, the recipient automatically |
465 | +receives a license from the original licensors, to run, modify and |
466 | +propagate that work, subject to this License. You are not responsible |
467 | +for enforcing compliance by third parties with this License. |
468 | + |
469 | + An "entity transaction" is a transaction transferring control of an |
470 | +organization, or substantially all assets of one, or subdividing an |
471 | +organization, or merging organizations. If propagation of a covered |
472 | +work results from an entity transaction, each party to that |
473 | +transaction who receives a copy of the work also receives whatever |
474 | +licenses to the work the party's predecessor in interest had or could |
475 | +give under the previous paragraph, plus a right to possession of the |
476 | +Corresponding Source of the work from the predecessor in interest, if |
477 | +the predecessor has it or can get it with reasonable efforts. |
478 | + |
479 | + You may not impose any further restrictions on the exercise of the |
480 | +rights granted or affirmed under this License. For example, you may |
481 | +not impose a license fee, royalty, or other charge for exercise of |
482 | +rights granted under this License, and you may not initiate litigation |
483 | +(including a cross-claim or counterclaim in a lawsuit) alleging that |
484 | +any patent claim is infringed by making, using, selling, offering for |
485 | +sale, or importing the Program or any portion of it. |
486 | + |
487 | + 11. Patents. |
488 | + |
489 | + A "contributor" is a copyright holder who authorizes use under this |
490 | +License of the Program or a work on which the Program is based. The |
491 | +work thus licensed is called the contributor's "contributor version". |
492 | + |
493 | + A contributor's "essential patent claims" are all patent claims |
494 | +owned or controlled by the contributor, whether already acquired or |
495 | +hereafter acquired, that would be infringed by some manner, permitted |
496 | +by this License, of making, using, or selling its contributor version, |
497 | +but do not include claims that would be infringed only as a |
498 | +consequence of further modification of the contributor version. For |
499 | +purposes of this definition, "control" includes the right to grant |
500 | +patent sublicenses in a manner consistent with the requirements of |
501 | +this License. |
502 | + |
503 | + Each contributor grants you a non-exclusive, worldwide, royalty-free |
504 | +patent license under the contributor's essential patent claims, to |
505 | +make, use, sell, offer for sale, import and otherwise run, modify and |
506 | +propagate the contents of its contributor version. |
507 | + |
508 | + In the following three paragraphs, a "patent license" is any express |
509 | +agreement or commitment, however denominated, not to enforce a patent |
510 | +(such as an express permission to practice a patent or covenant not to |
511 | +sue for patent infringement). To "grant" such a patent license to a |
512 | +party means to make such an agreement or commitment not to enforce a |
513 | +patent against the party. |
514 | + |
515 | + If you convey a covered work, knowingly relying on a patent license, |
516 | +and the Corresponding Source of the work is not available for anyone |
517 | +to copy, free of charge and under the terms of this License, through a |
518 | +publicly available network server or other readily accessible means, |
519 | +then you must either (1) cause the Corresponding Source to be so |
520 | +available, or (2) arrange to deprive yourself of the benefit of the |
521 | +patent license for this particular work, or (3) arrange, in a manner |
522 | +consistent with the requirements of this License, to extend the patent |
523 | +license to downstream recipients. "Knowingly relying" means you have |
524 | +actual knowledge that, but for the patent license, your conveying the |
525 | +covered work in a country, or your recipient's use of the covered work |
526 | +in a country, would infringe one or more identifiable patents in that |
527 | +country that you have reason to believe are valid. |
528 | + |
529 | + If, pursuant to or in connection with a single transaction or |
530 | +arrangement, you convey, or propagate by procuring conveyance of, a |
531 | +covered work, and grant a patent license to some of the parties |
532 | +receiving the covered work authorizing them to use, propagate, modify |
533 | +or convey a specific copy of the covered work, then the patent license |
534 | +you grant is automatically extended to all recipients of the covered |
535 | +work and works based on it. |
536 | + |
537 | + A patent license is "discriminatory" if it does not include within |
538 | +the scope of its coverage, prohibits the exercise of, or is |
539 | +conditioned on the non-exercise of one or more of the rights that are |
540 | +specifically granted under this License. You may not convey a covered |
541 | +work if you are a party to an arrangement with a third party that is |
542 | +in the business of distributing software, under which you make payment |
543 | +to the third party based on the extent of your activity of conveying |
544 | +the work, and under which the third party grants, to any of the |
545 | +parties who would receive the covered work from you, a discriminatory |
546 | +patent license (a) in connection with copies of the covered work |
547 | +conveyed by you (or copies made from those copies), or (b) primarily |
548 | +for and in connection with specific products or compilations that |
549 | +contain the covered work, unless you entered into that arrangement, |
550 | +or that patent license was granted, prior to 28 March 2007. |
551 | + |
552 | + Nothing in this License shall be construed as excluding or limiting |
553 | +any implied license or other defenses to infringement that may |
554 | +otherwise be available to you under applicable patent law. |
555 | + |
556 | + 12. No Surrender of Others' Freedom. |
557 | + |
558 | + If conditions are imposed on you (whether by court order, agreement or |
559 | +otherwise) that contradict the conditions of this License, they do not |
560 | +excuse you from the conditions of this License. If you cannot convey a |
561 | +covered work so as to satisfy simultaneously your obligations under this |
562 | +License and any other pertinent obligations, then as a consequence you may |
563 | +not convey it at all. For example, if you agree to terms that obligate you |
564 | +to collect a royalty for further conveying from those to whom you convey |
565 | +the Program, the only way you could satisfy both those terms and this |
566 | +License would be to refrain entirely from conveying the Program. |
567 | + |
568 | + 13. Use with the GNU Affero General Public License. |
569 | + |
570 | + Notwithstanding any other provision of this License, you have |
571 | +permission to link or combine any covered work with a work licensed |
572 | +under version 3 of the GNU Affero General Public License into a single |
573 | +combined work, and to convey the resulting work. The terms of this |
574 | +License will continue to apply to the part which is the covered work, |
575 | +but the special requirements of the GNU Affero General Public License, |
576 | +section 13, concerning interaction through a network will apply to the |
577 | +combination as such. |
578 | + |
579 | + 14. Revised Versions of this License. |
580 | + |
581 | + The Free Software Foundation may publish revised and/or new versions of |
582 | +the GNU General Public License from time to time. Such new versions will |
583 | +be similar in spirit to the present version, but may differ in detail to |
584 | +address new problems or concerns. |
585 | + |
586 | + Each version is given a distinguishing version number. If the |
587 | +Program specifies that a certain numbered version of the GNU General |
588 | +Public License "or any later version" applies to it, you have the |
589 | +option of following the terms and conditions either of that numbered |
590 | +version or of any later version published by the Free Software |
591 | +Foundation. If the Program does not specify a version number of the |
592 | +GNU General Public License, you may choose any version ever published |
593 | +by the Free Software Foundation. |
594 | + |
595 | + If the Program specifies that a proxy can decide which future |
596 | +versions of the GNU General Public License can be used, that proxy's |
597 | +public statement of acceptance of a version permanently authorizes you |
598 | +to choose that version for the Program. |
599 | + |
600 | + Later license versions may give you additional or different |
601 | +permissions. However, no additional obligations are imposed on any |
602 | +author or copyright holder as a result of your choosing to follow a |
603 | +later version. |
604 | + |
605 | + 15. Disclaimer of Warranty. |
606 | + |
607 | + THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY |
608 | +APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT |
609 | +HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY |
610 | +OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, |
611 | +THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR |
612 | +PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM |
613 | +IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF |
614 | +ALL NECESSARY SERVICING, REPAIR OR CORRECTION. |
615 | + |
616 | + 16. Limitation of Liability. |
617 | + |
618 | + IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING |
619 | +WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS |
620 | +THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY |
621 | +GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE |
622 | +USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF |
623 | +DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD |
624 | +PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), |
625 | +EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF |
626 | +SUCH DAMAGES. |
627 | + |
628 | + 17. Interpretation of Sections 15 and 16. |
629 | + |
630 | + If the disclaimer of warranty and limitation of liability provided |
631 | +above cannot be given local legal effect according to their terms, |
632 | +reviewing courts shall apply local law that most closely approximates |
633 | +an absolute waiver of all civil liability in connection with the |
634 | +Program, unless a warranty or assumption of liability accompanies a |
635 | +copy of the Program in return for a fee. |
636 | + |
637 | + END OF TERMS AND CONDITIONS |
638 | + |
639 | + How to Apply These Terms to Your New Programs |
640 | + |
641 | + If you develop a new program, and you want it to be of the greatest |
642 | +possible use to the public, the best way to achieve this is to make it |
643 | +free software which everyone can redistribute and change under these terms. |
644 | + |
645 | + To do so, attach the following notices to the program. It is safest |
646 | +to attach them to the start of each source file to most effectively |
647 | +state the exclusion of warranty; and each file should have at least |
648 | +the "copyright" line and a pointer to where the full notice is found. |
649 | + |
650 | + <one line to give the program's name and a brief idea of what it does.> |
651 | + Copyright (C) <year> <name of author> |
652 | + |
653 | + This program is free software: you can redistribute it and/or modify |
654 | + it under the terms of the GNU General Public License as published by |
655 | + the Free Software Foundation, either version 3 of the License, or |
656 | + (at your option) any later version. |
657 | + |
658 | + This program is distributed in the hope that it will be useful, |
659 | + but WITHOUT ANY WARRANTY; without even the implied warranty of |
660 | + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
661 | + GNU General Public License for more details. |
662 | + |
663 | + You should have received a copy of the GNU General Public License |
664 | + along with this program. If not, see <http://www.gnu.org/licenses/>. |
665 | + |
666 | +Also add information on how to contact you by electronic and paper mail. |
667 | + |
668 | + If the program does terminal interaction, make it output a short |
669 | +notice like this when it starts in an interactive mode: |
670 | + |
671 | + <program> Copyright (C) <year> <name of author> |
672 | + This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. |
673 | + This is free software, and you are welcome to redistribute it |
674 | + under certain conditions; type `show c' for details. |
675 | + |
676 | +The hypothetical commands `show w' and `show c' should show the appropriate |
677 | +parts of the General Public License. Of course, your program's commands |
678 | +might be different; for a GUI interface, you would use an "about box". |
679 | + |
680 | + You should also get your employer (if you work as a programmer) or school, |
681 | +if any, to sign a "copyright disclaimer" for the program, if necessary. |
682 | +For more information on this, and how to apply and follow the GNU GPL, see |
683 | +<http://www.gnu.org/licenses/>. |
684 | + |
685 | + The GNU General Public License does not permit incorporating your program |
686 | +into proprietary programs. If your program is a subroutine library, you |
687 | +may consider it more useful to permit linking proprietary applications with |
688 | +the library. If this is what you want to do, use the GNU Lesser General |
689 | +Public License instead of this License. But first, please read |
690 | +<http://www.gnu.org/philosophy/why-not-lgpl.html>. |
691 | \ No newline at end of file |
692 | |
693 | === added file 'MANIFEST.in' |
694 | --- MANIFEST.in 1970-01-01 00:00:00 +0000 |
695 | +++ MANIFEST.in 2013-07-15 21:24:33 +0000 |
696 | @@ -0,0 +1,2 @@ |
697 | +include README |
698 | +include LICENSE |
699 | \ No newline at end of file |
700 | |
701 | === modified file 'README' |
702 | --- README 2013-01-10 18:23:17 +0000 |
703 | +++ README 2013-07-15 21:24:33 +0000 |
704 | @@ -1,4 +1,78 @@ |
705 | -1. Juju deployer |
706 | +Juju Deployer |
707 | +------------- |
708 | + |
709 | +A deployment tool for juju that allows stack-like configurations of complex |
710 | +deployments. |
711 | + |
712 | +It supports configuration in YAML or JSON.
713 | + |
714 | +Installation |
715 | +------------ |
716 | + |
717 | + $ virtualenv --system-site-packages deployer |
718 | + $ ./deployer/bin/easy_install juju-deployer |
719 | + $ ./deployer/bin/juju-deployer -h |
720 | + |
721 | + |
722 | +Usage |
723 | +----- |
724 | + |
725 | + |
726 | +Stack Definitions |
727 | +----------------- |
728 | + |
729 | +High level view:: |
730 | + |
731 | + blog: |
732 | + series: precise |
733 | + services: |
734 | + blog: |
735 | + charm: wordpress |
736 | + branch: lp:charms/precise/wordpress |
737 | + db: |
738 | + charm: mysql |
739 | + branch: lp:charms/precise/mysql |
740 | + relations: |
741 | + - [db, blog] |
742 | + |
743 | + blog-prod: |
744 | + inherits: blog |
745 | + services: |
746 | + blog: |
747 | + num_units: 3 |
748 | + constraints: instance-type=m1.medium |
749 | + options: |
750 | + wp-content: include-file://content-branch.txt |
751 | + db: |
752 | + constraints: instance-type=m1.large |
753 | + options: |
754 | + tuning: include-base64://db-tuning.txt |
755 | + cachelb: |
756 | + charm: varnish |
757 | + branch: lp:charms/precise/varnish |
758 | + relations: |
759 | + - [cachelb, blog] |
760 | + |
761 | + |
762 | + We've got two deployment stacks here: blog and blog-prod. The blog stack defines
763 | + a simple wordpress deploy with mysql; blog-prod inherits it and refines it for production.
764 | + |
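The `inherits` mechanism shown above boils down to a recursive dictionary merge in which the child stack's values win. A minimal 2/3-compatible sketch of that idea (not the deployer's exact implementation, which also special-cases relation lists):

```python
from copy import deepcopy

def merge(base, override):
    # Nested dicts are merged key by key; any other value in the
    # child simply replaces the parent's value.
    out = deepcopy(base)
    for key, value in override.items():
        if key in out and isinstance(out[key], dict) and isinstance(value, dict):
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out

# The blog / blog-prod stacks from the example above, trimmed down.
blog = {"services": {"blog": {"charm": "wordpress", "num_units": 1}}}
prod = {"services": {"blog": {"num_units": 3},
                     "cachelb": {"charm": "varnish"}}}
merged = merge(blog, prod)
```

blog-prod keeps the parent's charm, overrides num_units, and gains the cachelb service, matching the YAML above.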
765 | + |
766 | +Development |
767 | +----------- |
768 | + |
769 | + Obtain the source::
770 | + |
771 | + $ bzr branch lp:juju-deployer/darwin deployer |
772 | + $ cd deployer |
773 | + |
774 | + # Test runner |
775 | + $ python test_deployment.py -h |
776 | + |
777 | + |
778 | + |
779 | +Background |
780 | +---------- |
781 | |
782 | This is a wrapper for Juju that allows stack-like configurations of complex |
783 | deployments. It was created to deploy Openstack but should be able to deploy |
784 | |
785 | === added directory 'configs' |
786 | === added file 'configs/blog.yaml' |
787 | --- configs/blog.yaml 1970-01-01 00:00:00 +0000 |
788 | +++ configs/blog.yaml 2013-07-15 21:24:33 +0000 |
789 | @@ -0,0 +1,36 @@ |
790 | +wordpress-stage: |
791 | + series: precise |
792 | + services: |
793 | + blog: |
794 | + charm: wordpress |
795 | + branch: lp:charms/precise/wordpress |
796 | + constraints: mem=2 |
797 | + options: |
798 | + tuning: optimized |
799 | + engine: apache |
800 | + db: |
801 | + charm: mysql |
802 | + branch: lp:charms/precise/mysql |
803 | + options: |
804 | + tuning-level: fast |
805 | + memcached: |
806 | + branch: lp:charms/precise/memcached |
807 | + options: |
808 | + request-limit: 32 |
809 | + relations: |
810 | + - [blog, [db, memcached]] |
811 | + |
812 | +wordpress-prod: |
813 | + series: precise |
814 | + inherits: wordpress-stage |
815 | + services: |
816 | + blog: |
817 | + options: |
818 | + engine: nginx |
819 | + tuning: optimized |
820 | + constraints: cpu-cores=1 |
821 | + |
822 | + db: |
823 | + constraints: cpu-cores=2 |
824 | + options: |
825 | + tuning-level: safest |
826 | |
827 | === added file 'configs/gui-export.yml' |
828 | --- configs/gui-export.yml 1970-01-01 00:00:00 +0000 |
829 | +++ configs/gui-export.yml 2013-07-15 21:24:33 +0000 |
830 | @@ -0,0 +1,42 @@ |
831 | +envExport: |
832 | + series: precise |
833 | + services: |
834 | + |
835 | + fs: |
836 | + charm: cs:precise/ceph |
837 | + options: |
838 | + "auth-supported": cephx |
839 | + "ephemeral-unmount": "" |
840 | + fsid: "" |
841 | + key: "" |
842 | + "monitor-count": "3" |
843 | + "monitor-secret": "" |
844 | + "osd-devices": /dev/vdb |
845 | + "osd-format": xfs |
846 | + "osd-journal": "" |
847 | + "osd-reformat": "" |
848 | + source: "cloud:precise-updates/folsom" |
849 | + num_units: 1 |
850 | + wordpress: |
851 | + charm: cs:precise/wordpress |
852 | + options: |
853 | + debug: no |
854 | + engine: nginx |
855 | + tuning: single |
856 | + "wp-content": "" |
857 | + num_units: 1 |
858 | + mysql: |
859 | + charm: cs:precise/mysql |
860 | + options: |
861 | + "binlog-format": MIXED |
862 | + "dataset-size": "80%" |
863 | + flavor: distro |
864 | + "max-connections": "-1" |
865 | + "preferred-storage-engine": InnoDB |
866 | + "query-cache-size": "-1" |
867 | + "query-cache-type": OFF |
868 | + "tuning-level": safest |
869 | + num_units: 1 |
870 | + relations: |
871 | + - - "wordpress:db" |
872 | + - "mysql:db" |
873 | |
874 | === renamed file 'deployments.cfg' => 'configs/openstack.cfg' |
875 | === renamed file 'deployments.cfg.sample' => 'configs/ostack-testing-sample.cfg' |
876 | === added file 'configs/wiki.yaml' |
877 | --- configs/wiki.yaml 1970-01-01 00:00:00 +0000 |
878 | +++ configs/wiki.yaml 2013-07-15 21:24:33 +0000 |
879 | @@ -0,0 +1,24 @@ |
880 | +wiki: |
881 | + series: precise |
882 | + services: |
883 | + wiki: |
884 | + charm: mediawiki |
885 | + branch: lp:charms/precise/mediawiki |
886 | + constraints: mem=2 |
887 | + num_units: 2 |
888 | + db: |
889 | + charm: mysql |
890 | + branch: lp:charms/precise/mysql |
891 | + options: |
892 | + tuning-level: fast |
893 | + haproxy: |
894 | + branch: lp:charms/precise/haproxy |
895 | + options: |
896 | + request-limit: 32 |
897 | + memcached: |
898 | + branch: lp:charms/precise/memcached |
899 | + options: |
900 | + request-limit: 32 |
901 | + relations: |
902 | + - ["wiki:db", db]
903 | + - [wiki, [haproxy, memcached]] |
904 | |
905 | === added file 'deployer.py' |
906 | --- deployer.py 1970-01-01 00:00:00 +0000 |
907 | +++ deployer.py 2013-07-15 21:24:33 +0000 |
908 | @@ -0,0 +1,1756 @@ |
909 | +#!/usr/bin/env python |
910 | +""" |
911 | +Juju Deployer |
912 | + |
913 | +Deployment automation for juju. |
914 | + |
915 | +""" |
916 | +import argparse |
917 | +from base64 import b64encode |
918 | +from bzrlib.workingtree import WorkingTree |
919 | +from copy import deepcopy |
920 | +from contextlib import contextmanager |
921 | + |
922 | +import errno |
923 | +import json |
924 | +import logging |
925 | +from logging.config import dictConfig as logConfig |
926 | +import os |
927 | + |
928 | +from os.path import abspath, dirname, isabs |
929 | +from os.path import join as path_join |
930 | +from os.path import exists as path_exists |
931 | + |
932 | +import pprint |
933 | +import shutil |
934 | +import stat |
935 | +import socket |
936 | +import subprocess |
937 | +import sys |
938 | +import tempfile |
939 | +import time |
940 | +import urllib |
941 | +import urlparse |
942 | +import zipfile |
943 | + |
944 | +try: |
945 | + from yaml import CSafeLoader, CSafeDumper |
946 | + SafeLoader, SafeDumper = CSafeLoader, CSafeDumper |
947 | +except ImportError: |
948 | + from yaml import SafeLoader, SafeDumper |
949 | + |
950 | +import yaml |
951 | + |
952 | + |
953 | +from jujuclient import ( |
954 | + Environment as EnvironmentClient, UnitErrors, EnvError) |
955 | + |
956 | + |
957 | +class ErrorExit(Exception): |
958 | + |
959 | + def __init__(self, error=None): |
960 | + self.error = error |
961 | + |
962 | +# Utility functions |
963 | + |
964 | + |
965 | +def yaml_dump(value): |
966 | + return yaml.dump(value, default_flow_style=False) |
967 | + |
968 | + |
969 | +def yaml_load(value): |
970 | + return yaml.load(value, Loader=SafeLoader) |
971 | + |
972 | + |
973 | +DEFAULT_LOGGING = """ |
974 | +version: 1 |
975 | +formatters: |
976 | + standard: |
977 | + format: '%(asctime)s %(message)s' |
978 | + datefmt: "%Y-%m-%d %H:%M:%S" |
979 | + detailed: |
980 | + format: '%(asctime)s [%(levelname)s] %(name)s: %(message)s' |
981 | + datefmt: "%Y-%m-%d %H:%M:%S" |
982 | +handlers: |
983 | + console: |
984 | + class: logging.StreamHandler |
985 | + formatter: standard |
986 | + level: DEBUG |
987 | + stream: ext://sys.stderr |
988 | +loggers: |
989 | + deployer: |
990 | + level: INFO |
991 | + propagate: true
992 | + deploy.cli: |
993 | + level: DEBUG |
994 | + propagate: true
995 | + deploy.charm: |
996 | + level: DEBUG |
997 | + propagate: true
998 | + deploy.env: |
999 | + level: DEBUG |
1000 | + propagate: true
1001 | + deploy.deploy: |
1002 | + level: DEBUG |
1003 | + propagate: true
1004 | + deploy.importer: |
1005 | + level: DEBUG |
1006 | + propagate: true
1007 | + "": |
1008 | + level: INFO |
1009 | + handlers: |
1010 | + - console |
1011 | +""" |
1012 | + |
1013 | +STORE_CACHE_DIR = "deployer-cache" |
1014 | + |
1015 | +STORE_URL = "https://store.juju.ubuntu.com" |
1016 | + |
1017 | + |
1018 | +def setup_logging(verbose=False, debug=False, stream=None): |
1019 | + config = yaml_load(DEFAULT_LOGGING) |
1020 | + log_options = {} |
1021 | + if verbose: |
1022 | + log_options.update({"loggers": { |
1023 | + "deployer": {"level": "DEBUG", "propagate": True}}})
1025 | + if debug: |
1026 | + log_options.update( |
1027 | + {"handlers": {"console": {"formatter": "detailed"}}}) |
1028 | + config = dict_merge(config, log_options) |
1029 | + logConfig(config) |
1030 | + |
1031 | + # Allow tests to reuse this func to mass configure log streams. |
1032 | + if stream: |
1033 | + root = logging.getLogger() |
1034 | + previous = root.handlers[0] |
1035 | + root.handlers[0] = current = logging.StreamHandler(stream) |
1036 | + current.setFormatter(previous.formatter) |
1037 | + return stream |
1038 | + |
1039 | + |
1040 | +@contextmanager |
1041 | +def temp_file(): |
1042 | + t = tempfile.NamedTemporaryFile() |
1043 | + try: |
1044 | + yield t |
1045 | + finally: |
1046 | + t.close() |
1047 | + |
1048 | + |
1049 | +def extract_zip(zip_path, dir_path): |
1050 | + zf = zipfile.ZipFile(zip_path, "r") |
1051 | + for info in zf.infolist(): |
1052 | + mode = info.external_attr >> 16 |
1053 | + if stat.S_ISLNK(mode): |
1054 | + source = zf.read(info.filename) |
1055 | + target = os.path.join(dir_path, info.filename) |
1056 | + if os.path.exists(target): |
1057 | + os.remove(target) |
1058 | + os.symlink(source, target) |
1059 | + continue |
1060 | + extract_path = zf.extract(info, dir_path) |
1061 | + os.chmod(extract_path, mode) |
1062 | + |
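extract_zip above preserves symlinks and unix permission bits that ZipFile.extract alone discards. The permission half can be demonstrated standalone; a sketch of the same trick (symlink handling omitted, `extract_with_modes` is a hypothetical name):

```python
import os
import stat
import zipfile

def extract_with_modes(zip_path, dir_path):
    # The unix mode lives in the upper 16 bits of external_attr;
    # re-apply it after extraction, since extract() ignores it.
    zf = zipfile.ZipFile(zip_path, "r")
    for info in zf.infolist():
        mode = info.external_attr >> 16
        out_path = zf.extract(info, dir_path)
        if mode:
            os.chmod(out_path, stat.S_IMODE(mode))
```

This matters for charms, where hook scripts must come out of the archive executable.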
1063 | + |
1064 | +def select_runtime(env_name): |
1065 | + # pyjuju does juju --version |
1066 | + result = _check_call(["juju", "version"], None, ignoreerr=True) |
1067 | + if result is None: |
1068 | + return PyEnvironment(env_name) |
1069 | + return GoEnvironment(env_name) |
1070 | + |
1071 | + |
1072 | +def _parse_constraints(value): |
1073 | + constraints = {} |
1074 | + pairs = value.strip().split() |
1075 | + for p in pairs: |
1076 | + k, v = p.split('=') |
1077 | + try: |
1078 | + v = int(v) |
1079 | + except ValueError: |
1080 | + try: |
1081 | + v = float(v) |
1082 | + except: |
1083 | + pass |
1084 | + constraints[k] = v |
1085 | + return constraints |
1086 | + |
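_parse_constraints above splits space-separated `key=value` pairs and coerces numeric values. A self-contained sketch of the same rule that can be exercised directly (`parse_constraints` here is a stand-in, not an import from deployer.py):

```python
def parse_constraints(value):
    # "k=v k2=v2 ..." -> dict; values become int or float when they
    # parse as numbers, and stay strings otherwise.
    constraints = {}
    for pair in value.strip().split():
        key, val = pair.split("=")
        for cast in (int, float):
            try:
                val = cast(val)
                break
            except ValueError:
                pass
        constraints[key] = val
    return constraints
```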
1087 | + |
1088 | +def _get_juju_home(): |
1089 | + jhome = os.environ.get("JUJU_HOME") |
1090 | + if jhome is None: |
1091 | + jhome = path_join(os.environ.get('HOME'), '.juju') |
1092 | + return jhome |
1093 | + |
1094 | + |
1095 | +def _check_call(params, log, *args, **kw): |
1096 | + try: |
1097 | + cwd = abspath(".") |
1098 | + if 'cwd' in kw: |
1099 | + cwd = kw['cwd'] |
1100 | + stderr = subprocess.STDOUT |
1101 | + if 'stderr' in kw: |
1102 | + stderr = kw['stderr'] |
1103 | + output = subprocess.check_output( |
1104 | + params, cwd=cwd, stderr=stderr, env=os.environ) |
1105 | + except subprocess.CalledProcessError, e: |
1106 | + if 'ignoreerr' in kw: |
1107 | + return |
1113 | + log.error(*args) |
1114 | + log.error("Command (%s) Output:\n\n %s", " ".join(params), e.output) |
1115 | + raise ErrorExit(e) |
1116 | + return output |
1117 | + |
1118 | + |
1119 | +# Utils from deployer 1 |
1120 | +def relations_combine(onto, source): |
1121 | + target = deepcopy(onto) |
1122 | + # Support list of relations targets |
1123 | + if isinstance(onto, list) and isinstance(source, list): |
1124 | + target.extend(source) |
1125 | + return target |
1126 | + for (key, value) in source.items(): |
1127 | + if key in target: |
1128 | + if isinstance(target[key], dict) and isinstance(value, dict): |
1129 | + target[key] = relations_combine(target[key], value) |
1130 | + elif isinstance(target[key], list) and isinstance(value, list): |
1131 | + target[key] = list(set(target[key] + value)) |
1132 | + else: |
1133 | + target[key] = value |
1134 | + return target |
1135 | + |
1136 | + |
1137 | +def dict_merge(onto, source): |
1138 | + target = deepcopy(onto) |
1139 | + for (key, value) in source.items(): |
1140 | + if key == 'relations' and key in target: |
1141 | + target[key] = relations_combine(target[key], value) |
1142 | + elif (key in target and isinstance(target[key], dict) and |
1143 | + isinstance(value, dict)): |
1144 | + target[key] = dict_merge(target[key], value) |
1145 | + else: |
1146 | + target[key] = value |
1147 | + return target |
1148 | + |
1149 | + |
1150 | +def resolve_include(fname, include_dirs): |
1151 | + if isabs(fname): |
1152 | + return fname |
1153 | + for path in include_dirs: |
1154 | + full_path = path_join(path, fname) |
1155 | + if path_exists(full_path): |
1156 | + return full_path |
1157 | + |
1158 | + return None |
1159 | + |
1160 | + |
1161 | +def setup_parser(): |
1162 | + parser = argparse.ArgumentParser() |
1163 | + parser.add_argument( |
1164 | + '-c', '--config', |
1165 | + help=('File containing deployment(s) json config. This ' |
1166 | + 'option can be repeated, with later files overriding ' |
1167 | + 'values in earlier ones.'), |
1168 | + dest='configs', action='append') |
1169 | + parser.add_argument( |
1170 | + '-d', '--debug', help='Enable debugging to stdout', |
1171 | + dest="debug", |
1172 | + action="store_true", default=False) |
1173 | + parser.add_argument( |
1174 | + '-L', '--local-mods', |
1175 | + help='Allow deployment of locally-modified charms',
1176 | + dest="no_local_mods", default=True, action='store_false') |
1177 | + parser.add_argument( |
1178 | + '-u', '--update-charms', |
1179 | + help='Update existing charm branches', |
1180 | + dest="update_charms", default=False, action="store_true") |
1181 | + parser.add_argument( |
1182 | + '-l', '--ls', help='List available deployments', |
1183 | + dest="list_deploys", action="store_true", default=False) |
1184 | + parser.add_argument( |
1185 | + '-D', '--destroy-services', |
1186 | + help='Destroy all services (do not terminate machines)', |
1187 | + dest="destroy_services", action="store_true", |
1188 | + default=False) |
1189 | + parser.add_argument( |
1190 | + '-T', '--terminate-machines', |
1191 | + help=('Terminate all machines but the bootstrap node. ' |
1192 | + 'Destroy any services that exist on each'), |
1193 | + dest="terminate_machines", action="store_true", |
1194 | + default=False) |
1195 | + parser.add_argument( |
1196 | + '-t', '--timeout', |
1197 | + help='Timeout (sec) for entire deployment (45min default)', |
1198 | + dest='timeout', action='store', type=int, default=2700) |
1199 | + parser.add_argument( |
1200 | + "-f", '--find-service', action="store", type=str, |
1201 | + help='Find hostname from first unit of a specific service.', |
1202 | + dest="find_service") |
1203 | + parser.add_argument( |
1204 | + "-b", '--branch-only', action="store_true", |
1205 | + help='Update vcs branches and exit.', |
1206 | + dest="branch_only") |
1207 | + parser.add_argument( |
1208 | + '-s', '--deploy-delay', action='store', type=float, |
1209 | + help=("Time in seconds to sleep between 'deploy' commands, " |
1210 | + "to allow machine provider to process requests. This " |
1211 | + "delay is also enforced between calls to " |
1212 | + "terminate_machine"), |
1213 | + dest="deploy_delay", default=0) |
1214 | + parser.add_argument( |
1215 | + '-e', '--environment', action='store', dest='juju_env', |
1216 | + help='Deploy to a specific Juju environment.', |
1217 | + default=os.getenv('JUJU_ENV')) |
1218 | + parser.add_argument( |
1219 | + '-o', '--override', action='append', type=str, |
1220 | + help=('Override *all* config options of the same name ' |
1221 | + 'across all services. Input as key=value.'), |
1222 | + dest='overrides', default=None) |
1223 | + parser.add_argument( |
1224 | + '-v', '--verbose', action='store_true', default=False, |
1225 | + dest="verbose", help='Verbose output') |
1226 | + parser.add_argument( |
1227 | + '-W', '--watch', help='Watch environment changes on console', |
1228 | + dest="watch", action="store_true", default=False) |
1229 | + parser.add_argument( |
1230 | + '-r', "--retry", default=0, type=int, dest="retry_count", |
1231 | + help=("Resolve unit errors via retry." |
1232 | + " Either standalone or in a deployment")) |
1233 | + parser.add_argument( |
1234 | + "--diff", action="store_true", default=False, |
1235 | + help=("Generate a delta between a configured deployment"
1236 | + " and a running environment."))
1237 | + parser.add_argument( |
1238 | + '-w', '--relation-wait', action='store', dest='rel_wait', |
1239 | + default=60, type=int, |
1240 | + help=('Number of seconds to wait before checking for ' |
1241 | + 'relation errors after all relations have been added ' |
1242 | + 'and subordinates started. (default: 60)')) |
1243 | + parser.add_argument("deployment", nargs="?") |
1244 | + return parser |
1245 | + |
1246 | + |
1247 | +class ConfigStack(object): |
1248 | + |
1249 | + log = logging.getLogger("deployer.config") |
1250 | + |
1251 | + def __init__(self, config_files): |
1252 | + self.config_files = config_files |
1253 | + self.data = {} |
1254 | + self.include_dirs = [] |
1255 | + self.load() |
1256 | + |
1257 | + def keys(self): |
1258 | + return sorted(self.data) |
1259 | + |
1260 | + def get(self, key): |
1261 | + if key not in self.data:
1262 | + self.log.warning("Deployment %r not found. Available %s", |
1263 | + key, ", ".join(self.keys())) |
1264 | + raise ErrorExit() |
1265 | + deploy_data = self.data[key] |
1266 | + deploy_data = self._resolve_inherited(deploy_data) |
1267 | + return Deployment(key, deploy_data, self.include_dirs) |
1268 | + |
1269 | + def load(self, urlopen=urllib.urlopen): |
1270 | + data = {} |
1271 | + include_dirs = [] |
1272 | + for fp in self.config_files: |
1273 | + if not path_exists(fp): |
1274 | + # If the config file path is a URL, fetch it and use it. |
1275 | + if urlparse.urlparse(fp).scheme: |
1276 | + response = urlopen(fp) |
1277 | + if response.getcode() == 200: |
1278 | + temp = tempfile.NamedTemporaryFile(delete=True) |
1279 | + shutil.copyfileobj(response, temp) |
1280 | + temp.flush() |
1281 | + fp = temp.name |
1282 | + else: |
1283 | + self.log.warning("Could not retrieve %s", fp) |
1284 | + raise ErrorExit() |
1285 | + else: |
1286 | + self.log.warning("Config file not found %s", fp) |
1287 | + raise ErrorExit() |
1288 | + else: |
1289 | + include_dirs.append(dirname(abspath(fp))) |
1290 | + with open(fp) as fh: |
1291 | + try: |
1292 | + d = yaml_load(fh.read()) |
1293 | + data = dict_merge(data, d) |
1294 | + except Exception, e: |
1295 | + self.log.warning( |
1296 | + "Couldn't load config file @ %r, error: %s:%s", |
1297 | + fp, type(e), e) |
1298 | + raise |
1299 | + self.data = data |
1300 | + self.include_dirs = include_dirs |
1301 | + |
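The branch's headline feature is the URL path through load() above: a -c argument that is not a local file but parses as a URL is downloaded into a temp file and read from there. A 2/3-compatible sketch of that logic in isolation (`fetch_config` is a hypothetical name; delete=False is used here so the file outlives the function, whereas load() keeps its delete=True handle alive until the file is read):

```python
import shutil
import tempfile

try:  # Python 3
    from urllib.request import urlopen
    from urllib.parse import urlparse
except ImportError:  # Python 2, as in deployer.py
    from urllib import urlopen
    from urlparse import urlparse

def fetch_config(path):
    # Local paths pass through untouched; anything with a URL
    # scheme is fetched and spooled into a named temp file.
    if not urlparse(path).scheme:
        return path
    response = urlopen(path)
    temp = tempfile.NamedTemporaryFile(suffix=".yaml", delete=False)
    shutil.copyfileobj(response, temp)
    temp.flush()
    return temp.name
```

A file:// URL is enough to exercise the download path without standing up a web server.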
1302 | + def _inherits(self, d): |
1303 | + parents = d.get('inherits', ()) |
1304 | + if isinstance(parents, basestring): |
1305 | + parents = [parents] |
1306 | + return parents |
1307 | + |
1308 | + def _resolve_inherited(self, deploy_data): |
1309 | + if 'inherits' not in deploy_data:
1310 | + return deploy_data |
1311 | + inherits = parents = self._inherits(deploy_data) |
1312 | + for parent_name in parents: |
1313 | + parent = self.get(parent_name) |
1314 | + inherits.extend(self._inherits(parent.data)) |
1315 | + deploy_data = dict_merge(deploy_data, parent.data) |
1316 | + deploy_data['inherits'] = inherits |
1317 | + return deploy_data |
1318 | + |
1319 | + |
1320 | +class Endpoint(object): |
1321 | + |
1322 | + def __init__(self, ep): |
1323 | + self.ep = ep |
1324 | + self.name = None |
1325 | + if ":" in self.ep: |
1326 | + self.service, self.name = self.ep.split(":") |
1327 | + else: |
1328 | + self.service = ep |
1329 | + |
1330 | + |
1331 | +class EndpointPair(object): |
1332 | + # Really simple endpoint service matching that does not work for multiple |
1333 | + # relations between two services (used by diff at the moment) |
1334 | + |
1335 | + def __init__(self, ep_x, ep_y=None): |
1336 | + self.ep_x = Endpoint(ep_x) |
1337 | + self.ep_y = ep_y and Endpoint(ep_y) |
1338 | + |
1339 | + def __eq__(self, ep_pair): |
1340 | + if not isinstance(ep_pair, EndpointPair): |
1341 | + return False |
1342 | + return (ep_pair.ep_x.service in self |
1343 | + and |
1344 | + ep_pair.ep_y.service in self) |
1345 | + |
1346 | + def __contains__(self, svc_name): |
1347 | + return (svc_name == self.ep_x.service |
1348 | + or |
1349 | + svc_name == self.ep_y.service) |
1350 | + |
1351 | + def __hash__(self): |
1352 | + return hash(tuple(sorted( |
1353 | + (self.ep_x.service, self.ep_y.service)))) |
1354 | + |
1355 | + def __repr__(self): |
1356 | + return "%s <-> %s" % ( |
1357 | + self.ep_x.ep, |
1358 | + self.ep_y.ep) |
1359 | + |
1360 | + @staticmethod |
1361 | + def to_yaml(dumper, data): |
1362 | + return dumper.represent_list([[data.ep_x.ep, data.ep_y.ep]]) |
1363 | + |
1364 | + |
1365 | +yaml.add_representer(EndpointPair, EndpointPair.to_yaml) |
1366 | + |
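EndpointPair treats a relation as undirected: both equality and hashing reduce to the sorted pair of service names, so the same relation written in either order compares equal and lands in the same set bucket. The core trick in isolation:

```python
# An undirected pair keyed by the sorted tuple of service names, so the
# same relation written in either order hashes and compares identically.
def pair_key(svc_x, svc_y):
    return tuple(sorted((svc_x, svc_y)))

seen = {pair_key("mysql", "wordpress")}
assert pair_key("wordpress", "mysql") in seen
assert pair_key("mysql", "wordpress") == pair_key("wordpress", "mysql")
```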
1367 | + |
1368 | +class Service(object): |
1369 | + |
1370 | + def __init__(self, name, svc_data): |
1371 | + self.svc_data = svc_data |
1372 | + self.name = name |
1373 | + |
1374 | + @property |
1375 | + def config(self): |
1376 | + return self.svc_data.get('options', None) |
1377 | + |
1378 | + @property |
1379 | + def constraints(self): |
1380 | + return self.svc_data.get('constraints', None) |
1381 | + |
1382 | + @property |
1383 | + def num_units(self): |
1384 | + return int(self.svc_data.get('num_units', 1)) |
1385 | + |
1386 | + @property |
1387 | + def force_machine(self): |
1388 | + return self.svc_data.get('force-machine') |
1389 | + |
1390 | + @property |
1391 | + def expose(self): |
1392 | + return self.svc_data.get('expose', False) |
1393 | + |
1394 | + |
1395 | +class Vcs(object): |
1396 | + |
1397 | + err_update = ( |
1398 | + "Could not update branch %(path)s from %(branch_url)s\n\n %(output)s") |
1399 | + err_branch = "Could not branch %(branch_url)s to %(path)s\n\n %(output)s" |
1400 | + err_is_mod = "Couldn't determine if %(path)s was modified\n\n %(output)s" |
1401 | + err_pull = ( |
1402 | + "Could not pull branch @ %(branch_url)s to %(path)s\n\n %(output)s") |
1403 | + err_cur_rev = ( |
1404 | + "Could not determine current revision %(path)s\n\n %(output)s") |
1405 | + |
1406 | + def __init__(self, path, origin, log): |
1407 | + self.path = path |
1408 | + self.log = log |
1409 | + self.origin = origin |
1410 | + |
1411 | + def _call(self, args, error_msg, cwd=None, stderr=()): |
1412 | + try: |
1413 | + # Capture stderr with stdout unless the caller passes stderr=None. |
1414 | + stderr = None if stderr is None else subprocess.STDOUT |
1415 | + output = subprocess.check_output( |
1416 | + args, cwd=cwd or self.path, stderr=stderr) |
1417 | + except subprocess.CalledProcessError, e: |
1418 | + self.log.error(error_msg % self.get_err_msg_ctx(e)) |
1419 | + raise ErrorExit() |
1420 | + return output.strip() |
1421 | + |
1422 | + def get_err_msg_ctx(self, e): |
1423 | + return { |
1424 | + 'path': self.path, |
1425 | + 'branch_url': self.origin, |
1426 | + 'exit_code': e.returncode, |
1427 | + 'output': e.output, |
1428 | + 'vcs': self.__class__.__name__.lower()} |
1429 | + |
1430 | + def get_cur_rev(self): |
1431 | + raise NotImplementedError() |
1432 | + |
1433 | + def update(self, rev=None): |
1434 | + raise NotImplementedError() |
1435 | + |
1436 | + def branch(self): |
1437 | + raise NotImplementedError() |
1438 | + |
1439 | + def pull(self): |
1440 | + raise NotImplementedError() |
1441 | + |
1442 | + def is_modified(self): |
1443 | + raise NotImplementedError() |
1444 | + |
1445 | + # upstream missing revisions? |
1446 | + |
1447 | + |
1448 | +class Bzr(Vcs): |
1449 | + |
1450 | + def get_cur_rev(self): |
1451 | + params = ["bzr", "revno", "--tree"] |
1452 | + return self._call(params, self.err_cur_rev) |
1453 | + |
1454 | + def update(self, rev=None): |
1455 | + params = ["bzr", "up"] |
1456 | + if rev: |
1457 | + params.extend(["-r", str(rev)]) |
1458 | + self._call(params, self.err_update) |
1459 | + |
1460 | + def pull(self): |
1461 | + params = ["bzr", "pull", "--remember", self.origin] |
1462 | + self._call(params, self.err_pull) |
1463 | + |
1464 | + def branch(self): |
1465 | + params = ["bzr", "branch", self.origin, self.path] |
1466 | + cwd = os.path.dirname(os.path.dirname(self.path)) |
1467 | + if not cwd: |
1468 | + cwd = "." |
1469 | + self._call(params, self.err_branch, cwd) |
1470 | + |
1471 | + def is_modified(self): |
1472 | + # To replace with bzr cli, we need to be able to detect |
1473 | + # changes to a wc @ a rev or @ trunk. |
1474 | + tree = WorkingTree.open(self.path) |
1475 | + return tree.has_changes() |
1476 | + |
1477 | + |
1478 | +class Git(Vcs): |
1479 | + |
1480 | + def get_cur_rev(self): |
1481 | + params = ["git", "rev-parse", "HEAD"] |
1482 | + return self._call(params, self.err_cur_rev) |
1483 | + |
1484 | + def update(self, rev=None): |
1485 | + params = ["git", "reset", "--merge"] |
1486 | + if rev: |
1487 | + params.append(rev) |
1488 | + self._call(params, self.err_update) |
1489 | + |
1490 | + def pull(self): |
1491 | + params = ["git", "pull", self.origin] |
1492 | + self._call(params, self.err_pull) |
1493 | + |
1494 | + def branch(self): |
1495 | + params = ["git", "clone", self.origin, self.path] |
1496 | + self._call(params, self.err_branch, os.path.dirname(self.path)) |
1497 | + |
1498 | + def is_modified(self): |
1499 | + params = ["git", "status", "-s"] |
1500 | + return bool(self._call(params, self.err_is_mod).strip()) |
1501 | + |
1502 | + def get_origin(self): |
1503 | + params = ["git", "config", "--get", "remote.origin.url"] |
1504 | + return self._call(params, "") |
1505 | + |
1506 | + |
1507 | +class Charm(object): |
1508 | + |
1509 | + log = logging.getLogger('deployer.charm') |
1510 | + |
1511 | + def __init__(self, name, path, branch, rev, build, charm_url=""): |
1512 | + self.name = name |
1513 | + self._path = path |
1514 | + self.branch = branch |
1515 | + self.rev = rev |
1516 | + self._charm_url = charm_url |
1517 | + self._build = build |
1518 | + self.vcs = self.get_vcs() |
1519 | + |
1520 | + def get_vcs(self): |
1521 | + if not self.branch: |
1522 | + return None |
1523 | + if self.branch.startswith('git') or 'github.com' in self.branch: |
1524 | + return Git(self.path, self.branch, self.log) |
1525 | + elif self.branch.startswith("bzr") or self.branch.startswith('lp:'): |
1526 | + return Bzr(self.path, self.branch, self.log) |
1527 | + |
1528 | + @classmethod |
1529 | + def from_service(cls, name, series_path, d): |
1530 | + branch, rev = None, None |
1531 | + charm_branch = d.get('branch') |
1532 | + if charm_branch is not None: |
1533 | + branch, sep, rev = charm_branch.partition('@') |
1534 | + |
1535 | + charm_path, store_url, build = None, None, None |
1536 | + name = d.get('charm', name) |
1537 | + if name.startswith('cs'): |
1538 | + store_url = name |
1539 | + else: |
1540 | + charm_path = path_join(series_path, name) |
1541 | + build = d.get('build', '') |
1542 | + if not store_url: |
1543 | + store_url = d.get('charm_url', None) |
1544 | + |
1545 | + if store_url and branch: |
1546 | + cls.log.error( |
1547 | + "Service: %s has both charm url: %s and branch: %s specified", |
1548 | + name, store_url, branch) |
1549 | + return cls(name, charm_path, branch, rev, build, store_url) |
1550 | + |
1551 | + def is_local(self): |
1552 | + return not self._charm_url |
1553 | + |
1554 | + def exists(self): |
1555 | + return self.is_local() and path_exists(self.path) |
1556 | + |
1557 | + def is_subordinate(self): |
1558 | + return self.metadata.get('subordinate', False) |
1559 | + |
1560 | + @property |
1561 | + def charm_url(self): |
1562 | + if self._charm_url: |
1563 | + return self._charm_url |
1564 | + series = os.path.basename(os.path.dirname(self.path)) |
1565 | + return "local:%s/%s" % (series, self.name) |
1566 | + |
1567 | + def build(self): |
1568 | + if not self._build: |
1569 | + return |
1570 | + self.log.debug("Building charm %s with %s", self.path, self._build) |
1571 | + _check_call([self._build], self.log, |
1572 | + "Charm build failed %s @ %s", self._build, self.path, |
1573 | + cwd=self.path) |
1574 | + |
1575 | + def fetch(self): |
1576 | + if self._charm_url: |
1577 | + return self._fetch_store_charm() |
1578 | + elif not self.branch: |
1579 | + self.log.warning("Invalid charm specification %s", self.name) |
1580 | + return |
1581 | + self.log.debug(" Branching charm %s @ %s", self.branch, self.path) |
1582 | + self.vcs.branch() |
1583 | + self.build() |
1584 | + |
1585 | + @property |
1586 | + def path(self): |
1587 | + if not self.is_local() and not self._path: |
1588 | + self._path = self._get_charm_store_cache() |
1589 | + return self._path |
1590 | + |
1591 | + def _fetch_store_charm(self, update=False): |
1592 | + cache_dir = self._get_charm_store_cache() |
1593 | + self.log.debug("Cache dir %s", cache_dir) |
1594 | + qualified_url = None |
1595 | + |
1596 | + if os.path.exists(cache_dir) and not update: |
1597 | + return |
1598 | + |
1599 | + # If we have a qualified url, check cache and move on. |
1600 | + parts = self.charm_url.rsplit('-', 1) |
1601 | + if len(parts) > 1 and parts[-1].isdigit(): |
1602 | + qualified_url = self.charm_url |
1603 | + |
1604 | + if not qualified_url: |
1605 | + info_url = "%s/charm-info?charms=%s" % (STORE_URL, self.charm_url) |
1606 | + fh = urllib.urlopen(info_url) |
1607 | + content = json.loads(fh.read()) |
1608 | + rev = content[self.charm_url]['revision'] |
1609 | + qualified_url = "%s-%d" % (self.charm_url, rev) |
1610 | + |
1611 | + self.log.debug("Retrieving store charm %s" % qualified_url) |
1612 | + |
1613 | + if update and os.path.exists(cache_dir): |
1614 | + shutil.rmtree(cache_dir) |
1615 | + |
1616 | + with temp_file() as fh: |
1617 | + ufh = urllib.urlopen("%s/charm/%s" % ( |
1618 | + STORE_URL, qualified_url[3:])) |
1619 | + shutil.copyfileobj(ufh, fh) |
1620 | + fh.flush() |
1621 | + extract_zip(fh.name, self.path) |
1622 | + |
1623 | + self.config |
1624 | + |
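_fetch_store_charm first decides whether the charm URL is already revision-qualified (ends in `-<revision>`); unqualified URLs cost a round trip to the store's charm-info endpoint to look up the current revision. The qualification check on its own, with sample URLs:

```python
# A store charm URL is revision-qualified when its last dash-separated
# segment is an integer revision, e.g. cs:precise/mysql-21.
def is_qualified(charm_url):
    parts = charm_url.rsplit("-", 1)
    return len(parts) > 1 and parts[-1].isdigit()

assert is_qualified("cs:precise/mysql-21")
assert not is_qualified("cs:precise/mysql")
assert not is_qualified("cs:precise/juju-gui")  # "gui" is not a revision
```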
1625 | + def _get_charm_store_cache(self): |
1626 | + assert not self.is_local(), "Attempt to get store charm for local" |
1627 | + # Cache |
1628 | + jhome = _get_juju_home() |
1629 | + cache_dir = os.path.join(jhome, ".deployer-store-cache") |
1630 | + if not os.path.exists(cache_dir): |
1631 | + os.mkdir(cache_dir) |
1632 | + return os.path.join( |
1633 | + cache_dir, |
1634 | + self.charm_url.replace(':', '_').replace("/", "_")) |
1635 | + |
1636 | + def update(self, build=False): |
1637 | + if not self.branch: |
1638 | + return |
1639 | + assert self.exists() |
1640 | + self.log.debug(" Updating charm %s from %s", self.path, self.branch) |
1641 | + self.vcs.update(self.rev) |
1642 | + if build: |
1643 | + self.build() |
1644 | + |
1645 | + def is_modified(self): |
1646 | + if not self.branch: |
1647 | + return False |
1648 | + return self.vcs.is_modified() |
1649 | + |
1650 | + @property |
1651 | + def config(self): |
1652 | + config_path = path_join(self.path, "config.yaml") |
1653 | + if not path_exists(config_path): |
1654 | + return {} |
1655 | + |
1656 | + with open(config_path) as fh: |
1657 | + return yaml_load(fh.read()).get('options', {}) |
1658 | + |
1659 | + @property |
1660 | + def metadata(self): |
1661 | + md_path = path_join(self.path, "metadata.yaml") |
1662 | + if not path_exists(md_path): |
1663 | + if not path_exists(self.path): |
1664 | + raise RuntimeError("No charm metadata @ %s" % md_path) |
1665 | + with open(md_path) as fh: |
1666 | + return yaml_load(fh.read()) |
1667 | + |
1668 | + def get_provides(self): |
1669 | + p = {'juju-info': [{'name': 'juju-info'}]} |
1670 | + for key, value in self.metadata['provides'].items(): |
1671 | + value['name'] = key |
1672 | + p.setdefault(value['interface'], []).append(value) |
1673 | + return p |
1674 | + |
1675 | + def get_requires(self): |
1676 | + r = {} |
1677 | + for key, value in self.metadata['requires'].items(): |
1678 | + value['name'] = key |
1679 | + r.setdefault(value['interface'], []).append(value) |
1680 | + return r |
1681 | + |
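get_provides and get_requires invert charm metadata into interface-indexed maps, folding each relation's name into its entry. The same transformation over hand-written sample metadata (the service names here are made up):

```python
# Build an interface -> [relation entries] map from charm metadata,
# recording each relation's name inside its entry (sample data only).
metadata = {"provides": {"website": {"interface": "http"},
                         "db": {"interface": "mysql"},
                         "db-admin": {"interface": "mysql"}}}
provides = {}
for name, rel in metadata["provides"].items():
    entry = dict(rel, name=name)
    provides.setdefault(entry["interface"], []).append(entry)

assert provides["http"][0]["name"] == "website"
assert sorted(e["name"] for e in provides["mysql"]) == ["db", "db-admin"]
```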
1682 | + |
1683 | +class Deployment(object): |
1684 | + |
1685 | + log = logging.getLogger("deployer.deploy") |
1686 | + |
1687 | + def __init__(self, name, data, include_dirs, repo_path=""): |
1688 | + self.name = name |
1689 | + self.data = data |
1690 | + self.include_dirs = include_dirs |
1691 | + self.repo_path = repo_path |
1692 | + |
1693 | + @property |
1694 | + def series(self): |
1695 | + # Series handling could be improved; a charm's series should be |
1696 | + # inferred directly from its store URL. |
1697 | + return self.data.get('series', 'precise') |
1698 | + |
1699 | + @property |
1700 | + def series_path(self): |
1701 | + return path_join(self.repo_path, self.series) |
1702 | + |
1703 | + def pretty_print(self): |
1704 | + pprint.pprint(self.data) |
1705 | + |
1706 | + def get_service(self, name): |
1707 | + if name not in self.data['services']: |
1708 | + return |
1709 | + return Service(name, self.data['services'][name]) |
1710 | + |
1711 | + def get_services(self): |
1712 | + for name, svc_data in self.data.get('services', {}).items(): |
1713 | + yield Service(name, svc_data) |
1714 | + |
1715 | + def get_relations(self): |
1716 | + if 'relations' not in self.data: |
1717 | + return |
1718 | + |
1719 | + # Strip duplicate rels |
1720 | + seen = set() |
1721 | + |
1722 | + def check(a, b): |
1723 | + k = tuple(sorted([a, b])) |
1724 | + if k in seen: |
1725 | + #self.log.warning(" Skipping duplicate relation %r" % (k,)) |
1726 | + return |
1727 | + seen.add(k) |
1728 | + return True |
1729 | + |
1730 | + # Support an ordered list of [endpoints] |
1731 | + if isinstance(self.data['relations'], list): |
1732 | + for end_a, end_b in self.data['relations']: |
1733 | + # Allow shorthand of [end_a, [end_b, end_c]] |
1734 | + if isinstance(end_b, list): |
1735 | + for eb in end_b: |
1736 | + if check(end_a, eb): |
1737 | + yield (end_a, eb) |
1738 | + else: |
1739 | + if check(end_a, end_b): |
1740 | + yield (end_a, end_b) |
1741 | + return |
1742 | + |
1743 | + # Legacy format (dictionary of dictionaries with weights) |
1744 | + rels = {} |
1745 | + for k, v in self.data['relations'].items(): |
1746 | + expanded = [] |
1747 | + for c in v['consumes']: |
1748 | + expanded.append((k, c)) |
1749 | + rels[v.get('weight', 0)] = expanded |
1750 | + for k in sorted(rels): |
1751 | + for r in rels[k]: |
1752 | + if check(*r): |
1753 | + yield r |
1754 | + #self.log.debug( |
1755 | + # "Found relations %s\n %s" % (" ".join(map(str, seen)))) |
1756 | + |
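In the list form of `relations`, get_relations accepts the shorthand `[end_a, [end_b, end_c]]` and emits each unordered pair only once. A stand-alone sketch of that expansion (sample service names only):

```python
# Expand relation shorthand and de-duplicate order-insensitively,
# mirroring the generator above (sample input only).
def expand(relations):
    seen, out = set(), []
    for end_a, end_b in relations:
        for eb in (end_b if isinstance(end_b, list) else [end_b]):
            key = tuple(sorted([end_a, eb]))
            if key not in seen:
                seen.add(key)
                out.append((end_a, eb))
    return out

rels = [["wiki", ["db", "cache"]], ["db", "wiki"]]
assert expand(rels) == [("wiki", "db"), ("wiki", "cache")]
```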
1757 | + def get_charms(self): |
1758 | + for k, v in self.data.get('services', {}).items(): |
1759 | + yield Charm.from_service(k, self.series_path, v) |
1760 | + |
1761 | + def get_charm_for(self, svc_name): |
1762 | + svc_data = self.data['services'][svc_name] |
1763 | + return Charm.from_service(svc_name, self.series_path, svc_data) |
1764 | + |
1765 | + def fetch_charms(self, update=False, no_local_mods=False): |
1766 | + if not os.path.exists(self.series_path): |
1767 | + os.mkdir(self.series_path) |
1768 | + for charm in self.get_charms(): |
1769 | + if charm.is_local(): |
1770 | + if charm.exists(): |
1771 | + if no_local_mods: |
1772 | + if charm.is_modified(): |
1773 | + self.log.warning( |
1774 | + "Charm %r has local modifications", |
1775 | + charm.path) |
1776 | + raise ErrorExit() |
1777 | + if update: |
1778 | + charm.update(build=True) |
1779 | + continue |
1780 | + charm.fetch() |
1781 | + |
1782 | + def resolve(self, cli_overides=()): |
1783 | + # Once we have charms definitions available, we can do verification |
1784 | + # of config options. |
1785 | + self.load_overrides(cli_overides) |
1786 | + self.resolve_config() |
1787 | + self.validate_relations() |
1788 | + |
1789 | + def load_overrides(self, cli_overrides=()): |
1790 | + """Load overrides.""" |
1791 | + overrides = {} |
1792 | + overrides.update(self.data.get('overrides', {})) |
1793 | + |
1794 | + for o in cli_overrides: |
1795 | + key, value = o.split('=', 1) |
1796 | + overrides[key] = value |
1797 | + |
1798 | + for k, v in overrides.iteritems(): |
1799 | + found = False |
1800 | + for svc_name, svc_data in self.data['services'].items(): |
1801 | + charm = self.get_charm_for(svc_name) |
1802 | + if k in charm.config: |
1803 | + svc_data.setdefault('options', {})[k] = v |
1804 | + found = True |
1805 | + if not found: |
1806 | + self.log.warning( |
1807 | + "Override %s does not match any charms", k) |
1808 | + |
1809 | + def resolve_config(self): |
1810 | + """Load any lazy config values (includes), and verify config options. |
1811 | + """ |
1812 | + self.log.debug("Resolving configuration") |
1813 | + # XXX TODO, rename resolve, validate relations |
1814 | + # against defined services |
1815 | + for svc_name, svc_data in self.data.get('services', {}).items(): |
1816 | + if 'options' not in svc_data: |
1817 | + continue |
1818 | + charm = self.get_charm_for(svc_name) |
1819 | + config = charm.config |
1820 | + options = {} |
1821 | + |
1822 | + for k, v in svc_data['options'].items(): |
1823 | + if k not in config: |
1824 | + self.log.error( |
1825 | + "Invalid config charm %s %s=%s", charm.name, k, v) |
1826 | + raise ErrorExit() |
1827 | + iv = self._resolve_include(svc_name, k, v) |
1828 | + if iv is not None: |
1829 | + v = iv |
1830 | + options[k] = v |
1831 | + svc_data['options'] = options |
1832 | + |
1833 | + def _resolve_include(self, svc_name, k, v): |
1834 | + for include_type in ["file", "base64"]: |
1835 | + if (not isinstance(v, basestring) |
1836 | + or not v.startswith( |
1837 | + "include-%s://" % include_type)): |
1838 | + continue |
1839 | + include, fname = v.split("://", 1) |
1840 | + ip = resolve_include(fname, self.include_dirs) |
1841 | + if ip is None: |
1842 | + self.log.warning( |
1843 | + "Invalid config %s.%s include not found %s", |
1844 | + svc_name, k, v) |
1845 | + continue |
1846 | + with open(ip) as fh: |
1847 | + v = fh.read() |
1848 | + if include_type == "base64": |
1849 | + v = b64encode(v) |
1850 | + return v |
1851 | + |
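_resolve_include implements the `include-file://` and `include-base64://` value convention: the option value names a file (looked up in the include dirs) whose contents replace the value, base64-encoded for the latter form. The convention in miniature, with a dict standing in for the filesystem (paths and contents are made up):

```python
from base64 import b64encode

# Resolve include-file:// / include-base64:// option values; read_file
# is any callable mapping a relative path to file contents (here a dict).
def resolve_option(value, read_file):
    for kind in ("file", "base64"):
        prefix = "include-%s://" % kind
        if isinstance(value, str) and value.startswith(prefix):
            contents = read_file(value[len(prefix):])
            if kind == "base64":
                return b64encode(contents.encode()).decode()
            return contents
    return value

files = {"ssl/cert.pem": "PEMDATA"}
assert resolve_option("include-file://ssl/cert.pem", files.get) == "PEMDATA"
assert resolve_option("plain-value", files.get) == "plain-value"
```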
1852 | + def validate_relations(self): |
1853 | + # Could extend to do interface matching against charms. |
1854 | + services = dict([(s.name, s) for s in self.get_services()]) |
1855 | + for e_a, e_b in self.get_relations(): |
1856 | + for ep in [Endpoint(e_a), Endpoint(e_b)]: |
1857 | + if ep.service not in services: |
1858 | + self.log.error( |
1859 | + ("Invalid relation in config," |
1860 | + " service %s not found, rel %s"), |
1861 | + ep.service, "%s <-> %s" % (e_a, e_b)) |
1862 | + raise ErrorExit() |
1863 | + |
1864 | + def save(self, path): |
1865 | + with open(path, "w") as fh: |
1866 | + fh.write(yaml_dump(self.data, default_flow_style=False)) |
1867 | + |
1868 | + |
1869 | +class BaseEnvironment(object): |
1870 | + |
1871 | + log = logging.getLogger("deployer.env") |
1872 | + |
1873 | + @property |
1874 | + def env_config_path(self): |
1875 | + jhome = _get_juju_home() |
1876 | + env_config_path = path_join(jhome, 'environments.yaml') |
1877 | + return env_config_path |
1878 | + |
1879 | + def _named_env(self, params): |
1880 | + if self.name: |
1881 | + params.extend(["-e", self.name]) |
1882 | + return params |
1883 | + |
1884 | + def _get_env_config(self): |
1885 | + with open(self.env_config_path) as fh: |
1886 | + config = yaml_load(fh.read()) |
1887 | + if self.name: |
1888 | + if self.name not in config['environments']: |
1889 | + self.log.error("Environment %r not found", self.name) |
1890 | + raise ErrorExit() |
1891 | + return config['environments'][self.name] |
1892 | + else: |
1893 | + env_name = config.get('default') |
1894 | + if env_name is None: |
1895 | + if len(config['environments'].keys()) == 1: |
1896 | + env_name = config['environments'].keys().pop() |
1897 | + else: |
1898 | + self.log.error("Ambiguous operation environment") |
1899 | + raise ErrorExit() |
1900 | + return config['environments'][env_name] |
1901 | + |
1902 | + def _write_config(self, svc_name, config, fh): |
1903 | + fh.write(yaml_dump({svc_name: config})) |
1904 | + fh.flush() |
1905 | + |
1909 | + |
1910 | + def _get_units_in_error(self): |
1911 | + units = [] |
1912 | + status = self.status() |
1913 | + for s in status.get('services', {}).keys(): |
1914 | + for uid, u in status['services'][s].get('units', {}).items(): |
1915 | + if u['agent-state'] == 'error': |
1916 | + units.append(uid) |
1917 | + for sid, sub in u.get('subordinates', {}).items(): |
1918 | + if sub['agent-state'] == 'error': |
1919 | + units.append(sid) |
1920 | + return units |
1921 | + |
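_get_units_in_error walks the status tree collecting unit ids whose agent-state is "error", including subordinate units. The same walk over a hand-written, juju-status-shaped dict (service and unit names are sample data; this sketch checks subordinates regardless of the parent unit's state):

```python
# Collect ids of units (and their subordinates) whose agent-state is
# "error" from a juju-status-shaped dict (sample data only).
status = {"services": {"wiki": {"units": {
    "wiki/0": {"agent-state": "error",
               "subordinates": {"nrpe/0": {"agent-state": "error"}}},
    "wiki/1": {"agent-state": "started"}}}}}

errors = []
for svc in status.get("services", {}).values():
    for uid, unit in svc.get("units", {}).items():
        if unit["agent-state"] == "error":
            errors.append(uid)
        for sid, sub in unit.get("subordinates", {}).items():
            if sub["agent-state"] == "error":
                errors.append(sid)

assert sorted(errors) == ["nrpe/0", "wiki/0"]
```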
1922 | + def deploy(self, name, charm_url, |
1923 | + repo=None, config=None, |
1924 | + constraints=None, num_units=1, force_machine=None): |
1925 | + params = self._named_env(["juju", "deploy"]) |
1926 | + with temp_file() as fh: |
1927 | + if config: |
1928 | + fh.write(yaml_dump({name: config})) |
1929 | + fh.flush() |
1930 | + params.extend(["--config", fh.name]) |
1931 | + if constraints: |
1932 | + params.extend(['--constraints', constraints]) |
1933 | + if num_units != 1: |
1934 | + params.extend(["--num-units", str(num_units)]) |
1935 | + if charm_url.startswith('local'): |
1936 | + if repo == "": |
1937 | + repo = "." |
1938 | + params.extend(["--repository=%s" % repo]) |
1939 | + if force_machine is not None: |
1940 | + params.extend(["--force-machine=%d" % force_machine]) |
1941 | + |
1942 | + params.extend([charm_url, name]) |
1943 | + _check_call(params, self.log, "Error deploying service %r", name) |
1944 | + |
1945 | + def terminate_machine(self, mid, wait=False): |
1946 | + """Terminate a machine. |
1947 | + |
1948 | + The machine must not have any running units. After removing units or |
1949 | + destroying a service, use wait_for_units to know when it is safe to |
1950 | + terminate the machine (i.e. the units have finished executing their |
1951 | + stop hooks and have been removed). |
1952 | + """ |
1953 | + if int(mid) == 0: |
1954 | + raise RuntimeError("Can't terminate machine 0") |
1955 | + params = self._named_env(["juju", "terminate-machine"]) |
1956 | + params.append(mid) |
1957 | + _check_call(params, self.log, "Error terminating machine %r" % mid) |
1958 | + |
1959 | + def get_service_address(self, svc_name): |
1960 | + status = self.get_cli_status() |
1961 | + if svc_name not in status['services']: |
1962 | + self.log.warning("Service %s does not exist", svc_name) |
1963 | + return None |
1964 | + units = status['services'][svc_name].get('units', {}) |
1965 | + unit_keys = list(sorted(units.keys())) |
1966 | + if unit_keys: |
1967 | + return units[unit_keys[0]].get('public-address', '') |
1968 | + self.log.warning("Service %s has no units", svc_name) |
1969 | + |
1970 | + def get_cli_status(self): |
1971 | + params = self._named_env(["juju", "status"]) |
1972 | + with open('/dev/null', 'w') as fh: |
1973 | + output = _check_call( |
1974 | + params, self.log, "Error getting status, is it bootstrapped?", |
1975 | + stderr=fh) |
1976 | + status = yaml_load(output) |
1977 | + return status |
1978 | + |
1979 | + |
1980 | +class PyEnvironment(BaseEnvironment): |
1981 | + |
1982 | + def __init__(self, name): |
1983 | + self.name = name |
1984 | + |
1985 | + def add_units(self, service_name, num_units): |
1986 | + params = self._named_env(["juju", "add-unit"]) |
1987 | + if num_units > 1: |
1988 | + params.extend(["-n", str(num_units)]) |
1989 | + params.append(service_name) |
1990 | + _check_call( |
1991 | + params, self.log, "Error adding units to %s", service_name) |
1992 | + |
1993 | + def add_relation(self, endpoint_a, endpoint_b): |
1994 | + params = self._named_env(["juju", "add-relation"]) |
1995 | + params.extend([endpoint_a, endpoint_b]) |
1996 | + _check_call( |
1997 | + params, self.log, "Error adding relation to %s %s", |
1998 | + endpoint_a, endpoint_b) |
1999 | + |
2000 | + def close(self): |
2001 | + """ NoOp """ |
2002 | + |
2003 | + def connect(self): |
2004 | + """ NoOp """ |
2005 | + |
2006 | + def _destroy_service(self, service_name): |
2007 | + params = self._named_env(["juju", "destroy-service"]) |
2008 | + params.append(service_name) |
2009 | + _check_call( |
2010 | + params, self.log, "Error destroying service %s" % service_name) |
2011 | + |
2012 | + def get_config(self, svc_name): |
2013 | + params = self._named_env(["juju", "get"]) |
2014 | + params.append(svc_name) |
2015 | + return _check_call( |
2016 | + params, self.log, "Error retrieving config for %r", svc_name) |
2017 | + |
2018 | + def get_constraints(self, svc_name): |
2019 | + params = self._named_env(["juju", "get-constraints"]) |
2020 | + params.append(svc_name) |
2021 | + return _check_call( |
2022 | + params, self.log, "Error retrieving constraints for %r", svc_name) |
2023 | + |
2024 | + def reset(self, |
2025 | + terminate_machines=False, |
2026 | + terminate_delay=0, |
2027 | + timeout=360, |
2028 | + watch=False): |
2029 | + status = self.status() |
2030 | + for s in status.get('services', {}): |
2031 | + self.log.debug(" Destroying service %s", s) |
2032 | + self._destroy_service(s) |
2033 | + if not terminate_machines: |
2034 | + return True |
2035 | + for m in status.get('machines', {}): |
2036 | + if int(m) == 0: |
2037 | + continue |
2038 | + self.log.debug(" Terminating machine %s", m) |
2039 | + self.terminate_machine(str(m)) |
2040 | + if terminate_delay: |
2041 | + self.log.debug(" Waiting for terminate delay") |
2042 | + time.sleep(terminate_delay) |
2043 | + |
2044 | + def resolve_errors(self, retry_count=0, timeout=600, watch=False, delay=5): |
2045 | + pass |
2046 | + |
2047 | + def status(self): |
2048 | + return self.get_cli_status() |
2049 | + |
2050 | + def wait_for_units( |
2051 | + self, timeout, goal_state="started", watch=False, no_exit=False): |
2052 | + |
2053 | + max_time = time.time() + timeout |
2054 | + while max_time > time.time(): |
2055 | + status = self.status() |
2056 | + states = [u.get('agent-state') |
2057 | + for s in status.get('services', {}).values() |
2058 | + for u in s.get('units', {}).values()] |
2059 | + if all(st == goal_state for st in states): |
2060 | + return |
2061 | + time.sleep(5) |
2062 | + self.log.error("Timeout waiting for units to reach %r", goal_state) |
2063 | + if not no_exit: |
2064 | + raise ErrorExit() |
2059 | + |
2060 | + |
2061 | +class GoEnvironment(BaseEnvironment): |
2062 | + |
2063 | + def __init__(self, name, endpoint=None): |
2064 | + self.name = name |
2065 | + self.api_endpoint = endpoint |
2066 | + self.client = None |
2067 | + |
2068 | + def _get_token(self): |
2069 | + config = self._get_env_config() |
2070 | + return config['admin-secret'] |
2071 | + |
2072 | + def add_units(self, service_name, num_units): |
2073 | + return self.client.add_units(service_name, num_units) |
2074 | + |
2075 | + def add_relation(self, endpoint_a, endpoint_b): |
2076 | + return self.client.add_relation(endpoint_a, endpoint_b) |
2077 | + |
2078 | + def close(self): |
2079 | + if self.client: |
2080 | + self.client.close() |
2081 | + |
2082 | + def connect(self): |
2083 | + if not self.api_endpoint: |
2084 | + # Should really have a cheaper/faster way of getting the endpoint, |
2085 | + # e.g. a get_endpoint method akin to "juju status 0". |
2086 | + self.get_cli_status() |
2087 | + while True: |
2088 | + try: |
2089 | + self.client = EnvironmentClient(self.api_endpoint) |
2090 | + except socket.error, e: |
2091 | + if e.errno != errno.ETIMEDOUT: |
2092 | + raise |
2093 | + continue |
2094 | + else: |
2095 | + break |
2096 | + self.client.login(self._get_token()) |
2097 | + self.log.debug("Connected to environment") |
2098 | + |
2099 | + def get_config(self, svc_name): |
2100 | + return self.client.get_config(svc_name) |
2101 | + |
2102 | + def get_constraints(self, svc_name): |
2103 | + return self.client.get_constraints(svc_name) |
2104 | + |
2105 | + def get_cli_status(self): |
2106 | + status = super(GoEnvironment, self).get_cli_status() |
2107 | + # Opportunistic, see connect method comment. |
2108 | + if not self.api_endpoint: |
2109 | + self.api_endpoint = "wss://%s:17070/" % ( |
2110 | + status["machines"]["0"]["dns-name"]) |
2111 | + return status |
2112 | + |
2113 | + def reset(self, |
2114 | + terminate_machines=False, |
2115 | + terminate_delay=0, |
2116 | + timeout=360, watch=False): |
2117 | + """Destroy/reset the environment.""" |
2118 | + status = self.status() |
2119 | + destroyed = False |
2120 | + for s in status.get('services', {}).keys(): |
2121 | + self.log.debug(" Destroying service %s", s) |
2122 | + self.client.destroy_service(s) |
2123 | + destroyed = True |
2124 | + |
2125 | + if destroyed: |
2126 | + # Mark any errors as resolved so destruction can proceed. |
2127 | + self.resolve_errors() |
2128 | + |
2129 | + # Wait for units |
2130 | + self.wait_for_units(timeout, "removed", watch=watch) |
2131 | + |
2132 | + # The only reason not to terminate machines is to keep the data on |
2133 | + # them around. |
2134 | + if not terminate_machines: |
2135 | + self.log.info(" *juju-core machines are not reusable for units") |
2136 | + return |
2137 | + |
2138 | + # Terminate machines |
2139 | + self.log.debug(" Terminating machines") |
2140 | + for mid in status['machines'].keys(): |
2141 | + if mid == "0": |
2142 | + continue |
2143 | + self.log.debug(" Terminating machine %s", mid) |
2144 | + self.terminate_machine(mid) |
2145 | + if terminate_delay: |
2146 | + self.log.debug(" Waiting for terminate delay") |
2147 | + time.sleep(terminate_delay) |
2148 | + |
2149 | + def _check_timeout(self, etime): |
2150 | + w_timeout = etime - time.time() |
2151 | + if w_timeout < 0: |
2152 | + self.log.error("Timeout reached while resolving errors") |
2153 | + raise ErrorExit() |
2154 | + return w_timeout |
2155 | + |
2156 | + def resolve_errors(self, retry_count=0, timeout=600, watch=False, delay=5): |
2157 | + """Resolve any unit errors in the environment. |
2158 | + |
2159 | + If retry_count is given, hook execution is reattempted; up to |
2160 | + retry_count passes are made through the environment resolving |
2161 | + errors. |
2162 | + |
2163 | + If retry_count is not given, errors are marked resolved permanently. |
2164 | + """ |
2165 | + etime = time.time() + timeout |
2166 | + count = 0 |
2167 | + while True: |
2168 | + error_units = self._get_units_in_error() |
2169 | + for e_uid in error_units: |
2170 | + try: |
2171 | + self.client.resolved(e_uid, retry=bool(retry_count)) |
2172 | + self.log.debug(" Resolving error on %s", e_uid) |
2173 | + except EnvError, e: |
2174 | + if 'already resolved' in str(e): |
2175 | + continue |
2176 | + |
2177 | + if not error_units: |
2178 | + if not count: |
2179 | + self.log.debug(" No unit errors found.") |
2180 | + else: |
2181 | + self.log.debug(" No more unit errors found.") |
2182 | + return |
2183 | + |
2184 | + w_timeout = self._check_timeout(etime) |
2185 | + if retry_count: |
2186 | + time.sleep(delay) |
2187 | + |
2188 | + count += 1 |
2189 | + try: |
2190 | + self.wait_for_units( |
2191 | + timeout=int(w_timeout), watch=True, no_exit=True) |
2192 | + except UnitErrors, e: |
2193 | + if retry_count == count: |
2194 | + self.log.info( |
2195 | + " Retry count %d exhausted, but units in error (%s)", |
2196 | + retry_count, " ".join(u['Name'] for u in e.errors)) |
2197 | + return |
2198 | + else: |
2199 | + return |
2200 | + |
2201 | + def status(self): |
2202 | + return self.client.get_stat() |
2203 | + |
2204 | + def wait_for_units( |
2205 | + self, timeout, goal_state="started", watch=False, no_exit=False): |
2206 | + """Wait for units to reach a given condition. |
2207 | + """ |
2208 | + callback = watch and self._delta_event_log or None |
2209 | + try: |
2210 | + self.client.wait_for_units(timeout, goal_state, callback=callback) |
2211 | + except UnitErrors, e: |
2212 | + error_units = [ |
2213 | + "unit: %s: machine: %s agent-state: %s details: %s" % ( |
2214 | + u['Name'], u['MachineId'], u['Status'], u['StatusInfo'] |
2215 | + ) |
2216 | + for u in e.errors] |
2217 | + if no_exit: |
2218 | + raise |
2219 | + self.log.error("The following units had errors:\n %s" % ( |
2220 | + " \n".join(error_units))) |
2221 | + raise ErrorExit() |
2222 | + |
2223 | + def _delta_event_log(self, et, ct, d): |
2224 | + # event type, change type, data |
2225 | + name = d.get('Name', d.get('Id', 'unknown')) |
2226 | + state = d.get('Status', d.get('Life', 'unknown')) |
2227 | + if et == "relation": |
2228 | + name = self._format_endpoints(d['Endpoints']) |
2229 | + state = "created" |
2230 | + if ct == "remove": |
2231 | + state = "removed" |
2232 | + self.log.debug( |
2233 | + " Delta %s: %s %s:%s", et, name, ct, state) |
2234 | + |
2235 | + def _format_endpoints(self, eps): |
2236 | + if len(eps) == 1: |
2237 | + ep = eps.pop() |
2238 | + return "[%s:%s:%s]" % ( |
2239 | + ep['ServiceName'], |
2240 | + ep['Relation']['Name'], |
2241 | + ep['Relation']['Role']) |
2242 | + |
2243 | + return "[%s:%s <-> %s:%s]" % ( |
2244 | + eps[0]['ServiceName'], |
2245 | + eps[0]['Relation']['Name'], |
2246 | + eps[1]['ServiceName'], |
2247 | + eps[1]['Relation']['Name']) |
2248 | + |
2249 | + |
2250 | +class BaseAction(object): |
2251 | + pass |
2252 | + |
2253 | + |
2254 | +class Export(BaseAction): |
2255 | + |
2256 | + log = logging.getLogger("deployer.export") |
2257 | + |
2258 | + def __init__(self, env, deployment, options): |
2259 | + self.options = options |
2260 | + self.env = env |
2261 | + self.deployment = deployment |
2262 | + self.env_status = None |
2263 | + self.env_state = {'services': {}, 'relations': {}} |
2264 | + |
2265 | + def run(self): |
2266 | + pass |
2267 | + |
2268 | + |
2269 | +class Diff(BaseAction): |
2270 | + |
2271 | + log = logging.getLogger("deployer.diff") |
2272 | + |
2273 | + def __init__(self, env, deployment, options): |
2274 | + self.options = options |
2275 | + self.env = env |
2276 | + self.deployment = deployment |
2277 | + self.env_status = None |
2278 | + self.env_state = {'services': {}, 'relations': []} |
2279 | + |
2280 | + def load_env(self): |
2281 | + """Load the current environment state (config, constraints, |
2282 | + unit counts, relations) for comparison against the deployment. |
2282 | + """ |
2283 | + rels = set() |
2284 | + for svc_name in self.env_status['services']: |
2285 | + if not svc_name in self.env_status['services']: |
2286 | + self.env_state['services'][svc_name] = 'missing' |
2287 | + self.env_state['services'].setdefault(svc_name, {})[ |
2288 | + 'options'] = self.env.get_config(svc_name) |
2289 | + self.env_state['services'][svc_name][ |
2290 | + 'constraints'] = self.env.get_constraints(svc_name) |
2291 | + self.env_state['services'][svc_name][ |
2292 | + 'unit_count'] = len(self.env_status[ |
2293 | + 'services'][svc_name]['units']) |
2294 | + rels.update(self._load_rels(svc_name)) |
2295 | + self.env_state['relations'] = sorted(rels) |
2296 | + |
2297 | + def _load_rels(self, svc_name): |
2298 | + rels = set() |
2299 | + svc_rels = self.env_status['services'][svc_name].get( |
2300 | + 'relations', {}) |
2301 | + # There is ambiguity here for multiple rels between two |
2302 | + # services without the relation id, which we need support |
2303 | + # from core for. |
2304 | + for r_name, r_svcs in svc_rels.items(): |
2305 | + for r_svc in r_svcs: |
2306 | + # Skip peer relations |
2307 | + if r_svc == svc_name: |
2308 | + continue |
2309 | + rr_name = self._get_rel_name(svc_name, r_svc) |
2310 | + rels.add( |
2311 | + tuple(sorted([ |
2312 | + "%s:%s" % (svc_name, r_name), |
2313 | + "%s:%s" % (r_svc, rr_name)]))) |
2314 | + return rels |
2315 | + |
2316 | + def _get_rel_name(self, src, tgt): |
2317 | + svc_rels = self.env_status['services'][tgt]['relations'] |
2318 | + found = None |
2319 | + for r, eps in svc_rels.items(): |
2320 | + if src in eps: |
2321 | + if found: |
2322 | + raise ValueError("Ambiguous relations for service") |
2323 | + found = r |
2324 | + return found |
2325 | + |
2326 | + def get_delta(self): |
2327 | + delta = {} |
2328 | + rels_delta = self._get_relations_delta() |
2329 | + if rels_delta: |
2330 | + delta['relations'] = rels_delta |
2331 | + svc_delta = self._get_services_delta() |
2332 | + if svc_delta: |
2333 | + delta['services'] = svc_delta |
2334 | + return delta |
2335 | + |
2336 | + def _get_relations_delta(self): |
2337 | + # Simple endpoint diff, no qualified endpoint checking. |
2338 | + |
2339 | + # Env relations are always qualified (at least in go). |
2340 | + delta = {} |
2341 | + env_rels = set( |
2342 | + EndpointPair(*x) for x in self.env_state.get('relations', ())) |
2343 | + dep_rels = set( |
2344 | + [EndpointPair(*y) for y in self.deployment.get_relations()]) |
2345 | + |
2346 | + for r in dep_rels.difference(env_rels): |
2347 | + delta.setdefault('missing', []).append(r) |
2348 | + |
2349 | + for r in env_rels.difference(dep_rels): |
2350 | + delta.setdefault('unknown', []).append(r) |
2351 | + |
2352 | + return delta |
2353 | + |
2354 | + def _get_services_delta(self): |
2355 | + delta = {} |
2356 | + env_svcs = set(self.env_status['services'].keys()) |
2357 | + dep_svcs = set([s.name for s in self.deployment.get_services()]) |
2358 | + |
2359 | + missing = dep_svcs - env_svcs |
2360 | + if missing: |
2361 | + delta['missing'] = {} |
2362 | + for a in missing: |
2363 | + delta['missing'][a] = self.deployment.get_service( |
2364 | + a).svc_data |
2365 | + unknown = env_svcs - dep_svcs |
2366 | + if unknown: |
2367 | + delta['unknown'] = {} |
2368 | + for r in unknown: |
2369 | + delta['unknown'][r] = self.env_state.get(r) |
2370 | + |
2371 | + for cs in env_svcs.intersection(dep_svcs): |
2372 | + d_s = self.deployment.get_service(cs).svc_data |
2373 | + e_s = self.env_state['services'][cs] |
2374 | + mod = self._diff_service(e_s, d_s) |
2375 | + if not mod: |
2376 | + continue |
2377 | + if not 'modified' in delta: |
2378 | + delta['modified'] = {} |
2379 | + delta['modified'][cs] = mod |
2380 | + return delta |
2381 | + |
2382 | + def _diff_service(self, e_s, d_s): |
2383 | + mod = {} |
2384 | + if 'constraints' in d_s: |
2385 | + d_sc = _parse_constraints(d_s['constraints']) |
2386 | + if d_sc != e_s['constraints']: |
2387 | + mod['constraints'] = e_s['constraints'] |
2388 | + for k, v in d_s.get('options', {}).items(): |
2389 | + # Deploy options not known to the env may originate |
2390 | + # from charm version delta or be an invalid config. |
2391 | + if not k in e_s['options']: |
2392 | + continue |
2393 | + e_v = e_s['options'].get(k, {}).get('value') |
2394 | + if e_v != v: |
2395 | + mod['config'] = {k: e_v} |
2396 | + if e_s['unit_count'] != d_s.get('num_units', 1): |
2397 | + mod['num_units'] = e_s['unit_count'] |
2398 | + return mod |
2399 | + |
2400 | + def run(self): |
2401 | + self.start_time = time.time() |
2402 | + self.env.connect() |
2403 | + self.env_status = self.env.status() |
2404 | + self.load_env() |
2405 | + delta = self.get_delta() |
2406 | + if delta: |
2407 | + print yaml_dump(delta) |
2408 | + |
2409 | + |
2410 | +class Importer(BaseAction): |
2411 | + |
2412 | + log = logging.getLogger("deployer.import") |
2413 | + |
2414 | + def __init__(self, env, deployment, options): |
2415 | + self.options = options |
2416 | + self.env = env |
2417 | + self.deployment = deployment |
2418 | + |
2419 | + def add_units(self): |
2420 | + self.log.debug("Adding units...") |
2421 | + # Add units to existing services that don't match count. |
2422 | + env_status = self.env.status() |
2423 | + added = set() |
2424 | + for svc in self.deployment.get_services(): |
2425 | + delta = (svc.num_units - |
2426 | + len(env_status['services'][svc.name].get('units', ()))) |
2427 | + if delta > 0: |
2428 | + charm = self.deployment.get_charm_for(svc.name) |
2429 | + if charm.is_subordinate(): |
2430 | + self.log.warning( |
2431 | + "Config specifies num units for subordinate: %s", |
2432 | + svc.name) |
2433 | + continue |
2434 | + self.log.info( |
2435 | + "Adding %d more units to %s" % (abs(delta), svc.name)) |
2436 | + for u in self.env.add_units(svc.name, abs(delta)): |
2437 | + added.add(u) |
2438 | + else: |
2439 | + self.log.debug( |
2440 | + " Service %r does not need any more units added.", |
2441 | + svc.name) |
2442 | + |
2443 | + def get_charms(self): |
2444 | + # Get Charms |
2445 | + self.log.debug("Getting charms...") |
2446 | + self.deployment.fetch_charms( |
2447 | + update=self.options.update_charms, |
2448 | + no_local_mods=self.options.no_local_mods) |
2449 | + |
2450 | + # Load config overrides/includes and verify rels after we can |
2451 | + # validate them. |
2452 | + self.deployment.resolve(self.options.overrides or ()) |
2453 | + |
2454 | + def deploy_services(self): |
2455 | + self.log.info("Deploying services...") |
2456 | + env_status = self.env.status() |
2457 | + for svc in self.deployment.get_services(): |
2458 | + if svc.name in env_status['services']: |
2459 | + self.log.debug( |
2460 | + " Service %r already deployed. Skipping" % svc.name) |
2461 | + continue |
2462 | + |
2463 | + charm = self.deployment.get_charm_for(svc.name) |
2464 | + self.log.info( |
2465 | + " Deploying service %s using %s", svc.name, charm.charm_url) |
2466 | + self.env.deploy( |
2467 | + svc.name, |
2468 | + charm.charm_url, |
2469 | + self.deployment.repo_path, |
2470 | + svc.config, |
2471 | + svc.constraints, |
2472 | + svc.num_units, |
2473 | + svc.force_machine) |
2474 | + |
2475 | + if svc.expose: |
2476 | + self.env.expose(svc.name) |
2477 | + |
2478 | + if self.options.deploy_delay: |
2479 | + self.log.debug(" Waiting for deploy delay") |
2480 | + time.sleep(self.options.deploy_delay) |
2481 | + |
2482 | + def add_relations(self): |
2483 | + self.log.info("Adding relations...") |
2484 | + |
2485 | + # Relations |
2486 | + status = self.env.status() |
2487 | + created = False |
2488 | + |
2489 | + for end_a, end_b in self.deployment.get_relations(): |
2490 | + if self._rel_exists(status, end_a, end_b): |
2491 | + continue |
2492 | + self.log.info(" Adding relation %s <-> %s", end_a, end_b) |
2493 | + self.env.add_relation(end_a, end_b) |
2494 | + created = True |
2495 | + # per the original, not sure the use case. |
2496 | + self.log.debug(" Waiting 5s before next relation") |
2497 | + time.sleep(5) |
2498 | + return created |
2499 | + |
2500 | + def _rel_exists(self, status, end_a, end_b): |
2501 | + # Checks for a named relation on one side that matches the local |
2502 | + # endpoint and remote service. |
2503 | + (name_a, name_b, rem_a, rem_b) = (end_a, end_b, None, None) |
2504 | + |
2505 | + if ":" in end_a: |
2506 | + name_a, rem_a = end_a.split(":", 1) |
2507 | + if ":" in end_b: |
2508 | + name_b, rem_b = end_b.split(":", 1) |
2509 | + |
2510 | + rels_svc_a = status['services'][name_a].get('relations', {}) |
2511 | + |
2512 | + found = False |
2513 | + for r, related in rels_svc_a.items(): |
2514 | + if name_b in related: |
2515 | + if rem_a and not r in rem_a: |
2516 | + continue |
2517 | + found = True |
2518 | + break |
2519 | + if found: |
2520 | + return True |
2521 | + return False |
2522 | + |
2523 | + def wait_for_units(self, ignore_error=False): |
2524 | + timeout = self.options.timeout - (time.time() - self.start_time) |
2525 | + if timeout < 0: |
2526 | + self.log.error("Reached deployment timeout.. exiting") |
2527 | + raise ErrorExit() |
2528 | + try: |
2529 | + self.env.wait_for_units( |
2530 | + int(timeout), watch=self.options.watch, no_exit=ignore_error) |
2531 | + except UnitErrors: |
2532 | + if not ignore_error: |
2533 | + raise |
2534 | + |
2535 | + def run(self): |
2536 | + self.start_time = time.time() |
2537 | + self.env.connect() |
2538 | + |
2539 | + # Get charms |
2540 | + self.get_charms() |
2541 | + if self.options.branch_only: |
2542 | + return |
2543 | + |
2544 | + self.deploy_services() |
2545 | + self.wait_for_units() |
2546 | + self.add_units() |
2547 | + |
2548 | + rels_created = self.add_relations() |
2549 | + |
2550 | + # Wait for the units to be up before waiting for rel stability. |
2551 | + self.log.debug("Waiting for units to be started") |
2552 | + self.wait_for_units(ignore_error=bool(self.options.retry_count)) |
2553 | + if rels_created: |
2554 | + self.log.debug("Waiting for relations %d", self.options.rel_wait) |
2555 | + time.sleep(self.options.rel_wait) |
2556 | + self.wait_for_units(ignore_error=bool(self.options.retry_count)) |
2557 | + |
2558 | + if self.options.retry_count: |
2559 | + self.log.info("Looking for errors to auto-retry") |
2560 | + self.env.resolve_errors( |
2561 | + self.options.retry_count, |
2562 | + self.options.timeout - (time.time() - self.start_time)) |
2563 | + |
2564 | + |
2565 | +def main(): |
2566 | + stime = time.time() |
2567 | + try: |
2568 | + run() |
2569 | + except ErrorExit: |
2570 | + logging.getLogger('deployer.cli').info( |
2571 | + "Deployment stopped. run time: %0.2f", time.time() - stime) |
2572 | + sys.exit(1) |
2573 | + |
2574 | + |
2575 | +def run(): |
2576 | + parser = setup_parser() |
2577 | + options = parser.parse_args() |
2578 | + |
2579 | + # Debug implies watching and verbose |
2580 | + if options.debug: |
2581 | + options.watch = options.verbose = True |
2582 | + setup_logging(options.verbose, options.debug) |
2583 | + |
2584 | + log = logging.getLogger("deployer.cli") |
2585 | + start_time = time.time() |
2586 | + |
2587 | + env = select_runtime(options.juju_env) |
2588 | + log.debug('Using runtime %s', env.__class__.__name__) |
2589 | + |
2590 | + config = ConfigStack(options.configs or []) |
2591 | + |
2592 | + # Destroy services and exit |
2593 | + if options.destroy_services or options.terminate_machines: |
2594 | + log.info("Resetting environment...") |
2595 | + env.connect() |
2596 | + env.reset(terminate_machines=options.terminate_machines, |
2597 | + terminate_delay=options.deploy_delay, |
2598 | + watch=options.watch) |
2599 | + log.info("Environment reset in %0.2f", time.time() - start_time) |
2600 | + sys.exit(0) |
2601 | + |
2602 | + # Display service info and exit |
2603 | + if options.find_service: |
2604 | + address = env.get_service_address(options.find_service) |
2605 | + if address is None: |
2606 | + log.error("Service not found %r", options.find_service) |
2607 | + sys.exit(1) |
2608 | + elif not address: |
2609 | + log.warning("Service: %s has no address for first unit", |
2610 | + options.find_service) |
2611 | + else: |
2612 | + log.info("Service: %s address: %s", options.find_service, address) |
2613 | + sys.exit(0) |
2614 | + |
2615 | + # Just resolve/retry hooks in the environment |
2616 | + if not options.deployment and options.retry_count: |
2617 | + log.info("Retrying hooks for error resolution") |
2618 | + env.connect() |
2619 | + env.resolve_errors( |
2620 | + options.retry_count, watch=options.watch, timeout=options.timeout) |
2621 | + |
2622 | + # Arg check on config files and deployment name. |
2623 | + if not options.configs: |
2624 | + log.error("Config files must be specified") |
2625 | + sys.exit(1) |
2626 | + |
2627 | + config.load() |
2628 | + |
2629 | + # Just list the available deployments |
2630 | + if options.list_deploys: |
2631 | + print "\n".join(sorted(config.keys())) |
2632 | + sys.exit(0) |
2633 | + |
2634 | + # Do something to a deployment |
2635 | + if not options.deployment: |
2636 | + log.error( |
2637 | + "Deployment name must be specified. available: %s", tuple( |
2638 | + sorted(config.keys()))) |
2639 | + sys.exit(1) |
2640 | + |
2641 | + deployment = config.get(options.deployment) |
2642 | + |
2643 | + if options.diff: |
2644 | + Diff(env, deployment, options).run() |
2645 | + return |
2646 | + |
2647 | + # Import it |
2648 | + log.info("Starting deployment of %s", options.deployment) |
2649 | + Importer(env, deployment, options).run() |
2650 | + |
2651 | + # Deploy complete |
2652 | + log.info("Deployment complete in %0.2f seconds" % ( |
2653 | + time.time() - start_time)) |
2654 | + |
2655 | +if __name__ == '__main__': |
2656 | + |
2657 | + try: |
2658 | + main() |
2659 | + except SystemExit: |
2660 | + pass |
2661 | + except: |
2662 | + import pdb, traceback, sys |
2663 | + traceback.print_exc() |
2664 | + pdb.post_mortem(sys.exc_info()[-1]) |
2665 | |
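The Diff action in deployer.py above computes its relations delta by normalizing each relation to an order-independent endpoint pair and taking set differences. A standalone sketch of that idea (simplified; the real code wraps pairs in an `EndpointPair` type and uses qualified endpoints):

```python
def relations_delta(env_rels, dep_rels):
    """Compare environment relations against the deployment spec.

    Each relation is a pair of "service:relation" endpoints; sorting the
    pair makes ("blog:db", "mysql:db") and ("mysql:db", "blog:db") equal.
    """
    norm = lambda pair: tuple(sorted(pair))
    env = set(norm(r) for r in env_rels)
    dep = set(norm(r) for r in dep_rels)
    delta = {}
    missing = dep - env   # declared in the spec, absent from the environment
    unknown = env - dep   # present in the environment, absent from the spec
    if missing:
        delta['missing'] = sorted(missing)
    if unknown:
        delta['unknown'] = sorted(unknown)
    return delta
```

Because both sides are reduced to the same canonical form, a relation added from either direction never shows up as a spurious difference.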
2666 | === added directory 'doc' |
2667 | === added file 'doc/Makefile' |
2668 | --- doc/Makefile 1970-01-01 00:00:00 +0000 |
2669 | +++ doc/Makefile 2013-07-15 21:24:33 +0000 |
2670 | @@ -0,0 +1,130 @@ |
2671 | +# Makefile for Sphinx documentation |
2672 | +# |
2673 | + |
2674 | +# You can set these variables from the command line. |
2675 | +SPHINXOPTS = |
2676 | +SPHINXBUILD = sphinx-build |
2677 | +PAPER = |
2678 | +BUILDDIR = _build |
2679 | + |
2680 | +# Internal variables. |
2681 | +PAPEROPT_a4 = -D latex_paper_size=a4 |
2682 | +PAPEROPT_letter = -D latex_paper_size=letter |
2683 | +ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . |
2684 | + |
2685 | +.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest |
2686 | + |
2687 | +help: |
2688 | + @echo "Please use \`make <target>' where <target> is one of" |
2689 | + @echo " html to make standalone HTML files" |
2690 | + @echo " dirhtml to make HTML files named index.html in directories" |
2691 | + @echo " singlehtml to make a single large HTML file" |
2692 | + @echo " pickle to make pickle files" |
2693 | + @echo " json to make JSON files" |
2694 | + @echo " htmlhelp to make HTML files and a HTML help project" |
2695 | + @echo " qthelp to make HTML files and a qthelp project" |
2696 | + @echo " devhelp to make HTML files and a Devhelp project" |
2697 | + @echo " epub to make an epub" |
2698 | + @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" |
2699 | + @echo " latexpdf to make LaTeX files and run them through pdflatex" |
2700 | + @echo " text to make text files" |
2701 | + @echo " man to make manual pages" |
2702 | + @echo " changes to make an overview of all changed/added/deprecated items" |
2703 | + @echo " linkcheck to check all external links for integrity" |
2704 | + @echo " doctest to run all doctests embedded in the documentation (if enabled)" |
2705 | + |
2706 | +clean: |
2707 | + -rm -rf $(BUILDDIR)/* |
2708 | + |
2709 | +html: |
2710 | + $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html |
2711 | + @echo |
2712 | + @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." |
2713 | + |
2714 | +dirhtml: |
2715 | + $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml |
2716 | + @echo |
2717 | + @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." |
2718 | + |
2719 | +singlehtml: |
2720 | + $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml |
2721 | + @echo |
2722 | + @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." |
2723 | + |
2724 | +pickle: |
2725 | + $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle |
2726 | + @echo |
2727 | + @echo "Build finished; now you can process the pickle files." |
2728 | + |
2729 | +json: |
2730 | + $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json |
2731 | + @echo |
2732 | + @echo "Build finished; now you can process the JSON files." |
2733 | + |
2734 | +htmlhelp: |
2735 | + $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp |
2736 | + @echo |
2737 | + @echo "Build finished; now you can run HTML Help Workshop with the" \ |
2738 | + ".hhp project file in $(BUILDDIR)/htmlhelp." |
2739 | + |
2740 | +qthelp: |
2741 | + $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp |
2742 | + @echo |
2743 | + @echo "Build finished; now you can run "qcollectiongenerator" with the" \ |
2744 | + ".qhcp project file in $(BUILDDIR)/qthelp, like this:" |
2745 | + @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/JujuDeployer.qhcp" |
2746 | + @echo "To view the help file:" |
2747 | + @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/JujuDeployer.qhc" |
2748 | + |
2749 | +devhelp: |
2750 | + $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp |
2751 | + @echo |
2752 | + @echo "Build finished." |
2753 | + @echo "To view the help file:" |
2754 | + @echo "# mkdir -p $$HOME/.local/share/devhelp/JujuDeployer" |
2755 | + @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/JujuDeployer" |
2756 | + @echo "# devhelp" |
2757 | + |
2758 | +epub: |
2759 | + $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub |
2760 | + @echo |
2761 | + @echo "Build finished. The epub file is in $(BUILDDIR)/epub." |
2762 | + |
2763 | +latex: |
2764 | + $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex |
2765 | + @echo |
2766 | + @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." |
2767 | + @echo "Run \`make' in that directory to run these through (pdf)latex" \ |
2768 | + "(use \`make latexpdf' here to do that automatically)." |
2769 | + |
2770 | +latexpdf: |
2771 | + $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex |
2772 | + @echo "Running LaTeX files through pdflatex..." |
2773 | + make -C $(BUILDDIR)/latex all-pdf |
2774 | + @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." |
2775 | + |
2776 | +text: |
2777 | + $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text |
2778 | + @echo |
2779 | + @echo "Build finished. The text files are in $(BUILDDIR)/text." |
2780 | + |
2781 | +man: |
2782 | + $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man |
2783 | + @echo |
2784 | + @echo "Build finished. The manual pages are in $(BUILDDIR)/man." |
2785 | + |
2786 | +changes: |
2787 | + $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes |
2788 | + @echo |
2789 | + @echo "The overview file is in $(BUILDDIR)/changes." |
2790 | + |
2791 | +linkcheck: |
2792 | + $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck |
2793 | + @echo |
2794 | + @echo "Link check complete; look for any errors in the above output " \ |
2795 | + "or in $(BUILDDIR)/linkcheck/output.txt." |
2796 | + |
2797 | +doctest: |
2798 | + $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest |
2799 | + @echo "Testing of doctests in the sources finished, look at the " \ |
2800 | + "results in $(BUILDDIR)/doctest/output.txt." |
2801 | |
2802 | === added directory 'doc/_static' |
2803 | === added directory 'doc/_templates' |
2804 | === added file 'doc/announcement.txt' |
2805 | --- doc/announcement.txt 1970-01-01 00:00:00 +0000 |
2806 | +++ doc/announcement.txt 2013-07-15 21:24:33 +0000 |
2807 | @@ -0,0 +1,53 @@ |
2808 | + |
2809 | +Juju deployer for juju-core. |
2810 | + |
2811 | +Hi Folks |
2812 | + |
2813 | +Juju deployer is an automation tool for deploying complex applications |
2814 | +with juju. It encompasses most of the properties of a juju environment |
2815 | +(relations, config, constraints) along with charm vcs management into |
2816 | +a simple definition format. It allows for inheritance between |
2817 | +configurations, which makes it straightforward to customize and share |
2818 | +a base environment configuration across multiple environments. |
2819 | + |
2820 | +Deployer was written by Adam Gandelman, and originated from some of |
2821 | +Canonical's OpenStack charm development and deployment work; it has seen |
2822 | +fairly widespread adoption within Canonical for application deployment. |
2823 | + |
2824 | +There's a new implementation of juju-deployer that supports juju-core |
2825 | +and pyjuju. I took the opportunity to add some new features, tests, |
2826 | +and docs. When used with juju-core it now uses the environment's |
2827 | +websocket api instead of the cli where possible, which also allows |
2828 | +reporting live changes as the deployment proceeds. A change listing is |
2829 | +at the bottom of this email. |
2830 | + |
2831 | + |
2832 | +Installation |
2833 | +------------ |
2834 | + |
2835 | + $ pip install juju-deployer |
2836 | + |
2837 | +Source |
2838 | +------ |
2839 | + |
2840 | + bzr branch lp:juju-deployer/darwin |
2841 | + |
2842 | +Docs |
2843 | +---- |
2844 | + |
2845 | + http://pythonhosted.org/juju-deployer/ |
2846 | + |
2847 | +Changes |
2848 | +------- |
2849 | + |
2850 | + - Relations only added if they don't exist. |
2851 | + - Support for multiple services using the same charm. |
2852 | + - Support for automatic retrying of failed relations. |
2853 | + - Support for simpler relation definition syntax: either a list of |
2854 | + endpoint pairs, e.g. [blog, db], or an endpoint with a list of targets, e.g. [blog, [db, cache]]. |
2855 | + - Uses watches for live feedback of change progress in an environment. |
2856 | + - Additional sanity/error checking prior to modifying an environment. |
2857 | + - YAML config file support |
2858 | + - multiple inheritance |
2859 | + - No support at the moment for resetting the environment's charm cache. |
2860 | + - Unit tests. |
2861 | |
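The "simpler relation syntax" change above accepts either an endpoint pair or an endpoint with a list of targets. A minimal sketch of how such entries could expand into plain endpoint pairs (illustrative only; this is not the branch's actual parser):

```python
def expand_relations(entries):
    """Expand deployer relation entries into endpoint pairs.

    Each entry is either [source, target] or [source, [t1, t2, ...]];
    the latter expands into one (source, target) pair per listed target.
    """
    pairs = []
    for source, target in entries:
        if isinstance(target, (list, tuple)):
            pairs.extend((source, t) for t in target)
        else:
            pairs.append((source, target))
    return pairs
```

So a config entry like `[blog, [db, cache]]` yields the same pairs as writing `[blog, db]` and `[blog, cache]` separately.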
2862 | === added file 'doc/conf.py' |
2863 | --- doc/conf.py 1970-01-01 00:00:00 +0000 |
2864 | +++ doc/conf.py 2013-07-15 21:24:33 +0000 |
2865 | @@ -0,0 +1,216 @@ |
2866 | +# -*- coding: utf-8 -*- |
2867 | +# |
2868 | +# Juju Deployer documentation build configuration file, created by |
2869 | +# sphinx-quickstart on Sat May 11 14:34:00 2013. |
2870 | +# |
2871 | +# This file is execfile()d with the current directory set to its containing dir. |
2872 | +# |
2873 | +# Note that not all possible configuration values are present in this |
2874 | +# autogenerated file. |
2875 | +# |
2876 | +# All configuration values have a default; values that are commented out |
2877 | +# serve to show the default. |
2878 | + |
2879 | +import sys, os |
2880 | + |
2881 | +# If extensions (or modules to document with autodoc) are in another directory, |
2882 | +# add these directories to sys.path here. If the directory is relative to the |
2883 | +# documentation root, use os.path.abspath to make it absolute, like shown here. |
2884 | +#sys.path.insert(0, os.path.abspath('.')) |
2885 | + |
2886 | +# -- General configuration ----------------------------------------------------- |
2887 | + |
2888 | +# If your documentation needs a minimal Sphinx version, state it here. |
2889 | +#needs_sphinx = '1.0' |
2890 | + |
2891 | +# Add any Sphinx extension module names here, as strings. They can be extensions |
2892 | +# coming with Sphinx (named 'sphinx.ext.*') or your custom ones. |
2893 | +extensions = [] |
2894 | + |
2895 | +# Add any paths that contain templates here, relative to this directory. |
2896 | +templates_path = ['_templates'] |
2897 | + |
2898 | +# The suffix of source filenames. |
2899 | +source_suffix = '.rst' |
2900 | + |
2901 | +# The encoding of source files. |
2902 | +#source_encoding = 'utf-8-sig' |
2903 | + |
2904 | +# The master toctree document. |
2905 | +master_doc = 'index' |
2906 | + |
2907 | +# General information about the project. |
2908 | +project = u'Juju Deployer' |
2909 | +copyright = u'2013, Kapil Thangavelu' |
2910 | + |
2911 | +# The version info for the project you're documenting, acts as replacement for |
2912 | +# |version| and |release|, also used in various other places throughout the |
2913 | +# built documents. |
2914 | +# |
2915 | +# The short X.Y version. |
2916 | +version = '0.0.4' |
2917 | +# The full version, including alpha/beta/rc tags. |
2918 | +release = '0.0.4' |
2919 | + |
2920 | +# The language for content autogenerated by Sphinx. Refer to documentation |
2921 | +# for a list of supported languages. |
2922 | +#language = None |
2923 | + |
2924 | +# There are two options for replacing |today|: either, you set today to some |
2925 | +# non-false value, then it is used: |
2926 | +#today = '' |
2927 | +# Else, today_fmt is used as the format for a strftime call. |
2928 | +#today_fmt = '%B %d, %Y' |
2929 | + |
2930 | +# List of patterns, relative to source directory, that match files and |
2931 | +# directories to ignore when looking for source files. |
2932 | +exclude_patterns = ['_build'] |
2933 | + |
2934 | +# The reST default role (used for this markup: `text`) to use for all documents. |
2935 | +#default_role = None |
2936 | + |
2937 | +# If true, '()' will be appended to :func: etc. cross-reference text. |
2938 | +#add_function_parentheses = True |
2939 | + |
2940 | +# If true, the current module name will be prepended to all description |
2941 | +# unit titles (such as .. function::). |
2942 | +#add_module_names = True |
2943 | + |
2944 | +# If true, sectionauthor and moduleauthor directives will be shown in the |
2945 | +# output. They are ignored by default. |
2946 | +#show_authors = False |
2947 | + |
2948 | +# The name of the Pygments (syntax highlighting) style to use. |
2949 | +pygments_style = 'sphinx' |
2950 | + |
2951 | +# A list of ignored prefixes for module index sorting. |
2952 | +#modindex_common_prefix = [] |
2953 | + |
2954 | + |
2955 | +# -- Options for HTML output --------------------------------------------------- |
2956 | + |
2957 | +# The theme to use for HTML and HTML Help pages. See the documentation for |
2958 | +# a list of builtin themes. |
2959 | +html_theme = 'default' |
2960 | + |
2961 | +# Theme options are theme-specific and customize the look and feel of a theme |
2962 | +# further. For a list of options available for each theme, see the |
2963 | +# documentation. |
2964 | +#html_theme_options = {} |
2965 | + |
2966 | +# Add any paths that contain custom themes here, relative to this directory. |
2967 | +#html_theme_path = [] |
2968 | + |
2969 | +# The name for this set of Sphinx documents. If None, it defaults to |
2970 | +# "<project> v<release> documentation". |
2971 | +#html_title = None |
2972 | + |
2973 | +# A shorter title for the navigation bar. Default is the same as html_title. |
2974 | +#html_short_title = None |
2975 | + |
2976 | +# The name of an image file (relative to this directory) to place at the top |
2977 | +# of the sidebar. |
2978 | +#html_logo = None |
2979 | + |
2980 | +# The name of an image file (within the static path) to use as favicon of the |
2981 | +# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 |
2982 | +# pixels large. |
2983 | +#html_favicon = None |
2984 | + |
2985 | +# Add any paths that contain custom static files (such as style sheets) here, |
2986 | +# relative to this directory. They are copied after the builtin static files, |
2987 | +# so a file named "default.css" will overwrite the builtin "default.css". |
2988 | +html_static_path = ['_static'] |
2989 | + |
2990 | +# If not '', a 'Last updated on:' timestamp is inserted at every page bottom, |
2991 | +# using the given strftime format. |
2992 | +#html_last_updated_fmt = '%b %d, %Y' |
2993 | + |
2994 | +# If true, SmartyPants will be used to convert quotes and dashes to |
2995 | +# typographically correct entities. |
2996 | +#html_use_smartypants = True |
2997 | + |
2998 | +# Custom sidebar templates, maps document names to template names. |
2999 | +#html_sidebars = {} |
3000 | + |
3001 | +# Additional templates that should be rendered to pages, maps page names to |
3002 | +# template names. |
3003 | +#html_additional_pages = {} |
3004 | + |
3005 | +# If false, no module index is generated. |
3006 | +#html_domain_indices = True |
3007 | + |
3008 | +# If false, no index is generated. |
3009 | +#html_use_index = True |
3010 | + |
3011 | +# If true, the index is split into individual pages for each letter. |
3012 | +#html_split_index = False |
3013 | + |
3014 | +# If true, links to the reST sources are added to the pages. |
3015 | +#html_show_sourcelink = True |
3016 | + |
3017 | +# If true, "Created using Sphinx" is shown in the HTML footer. Default is True. |
3018 | +#html_show_sphinx = True |
3019 | + |
3020 | +# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. |
3021 | +#html_show_copyright = True |
3022 | + |
3023 | +# If true, an OpenSearch description file will be output, and all pages will |
3024 | +# contain a <link> tag referring to it. The value of this option must be the |
3025 | +# base URL from which the finished HTML is served. |
3026 | +#html_use_opensearch = '' |
3027 | + |
3028 | +# This is the file name suffix for HTML files (e.g. ".xhtml"). |
3029 | +#html_file_suffix = None |
3030 | + |
3031 | +# Output file base name for HTML help builder. |
3032 | +htmlhelp_basename = 'JujuDeployerdoc' |
3033 | + |
3034 | + |
3035 | +# -- Options for LaTeX output -------------------------------------------------- |
3036 | + |
3037 | +# The paper size ('letter' or 'a4'). |
3038 | +#latex_paper_size = 'letter' |
3039 | + |
3040 | +# The font size ('10pt', '11pt' or '12pt'). |
3041 | +#latex_font_size = '10pt' |
3042 | + |
3043 | +# Grouping the document tree into LaTeX files. List of tuples |
3044 | +# (source start file, target name, title, author, documentclass [howto/manual]). |
3045 | +latex_documents = [ |
3046 | + ('index', 'JujuDeployer.tex', u'Juju Deployer Documentation', |
3047 | + u'Kapil Thangavelu', 'manual'), |
3048 | +] |
3049 | + |
3050 | +# The name of an image file (relative to this directory) to place at the top of |
3051 | +# the title page. |
3052 | +#latex_logo = None |
3053 | + |
3054 | +# For "manual" documents, if this is true, then toplevel headings are parts, |
3055 | +# not chapters. |
3056 | +#latex_use_parts = False |
3057 | + |
3058 | +# If true, show page references after internal links. |
3059 | +#latex_show_pagerefs = False |
3060 | + |
3061 | +# If true, show URL addresses after external links. |
3062 | +#latex_show_urls = False |
3063 | + |
3064 | +# Additional stuff for the LaTeX preamble. |
3065 | +#latex_preamble = '' |
3066 | + |
3067 | +# Documents to append as an appendix to all manuals. |
3068 | +#latex_appendices = [] |
3069 | + |
3070 | +# If false, no module index is generated. |
3071 | +#latex_domain_indices = True |
3072 | + |
3073 | + |
3074 | +# -- Options for manual page output -------------------------------------------- |
3075 | + |
3076 | +# One entry per manual page. List of tuples |
3077 | +# (source start file, name, description, authors, manual section). |
3078 | +man_pages = [ |
3079 | + ('index', 'jujudeployer', u'Juju Deployer Documentation', |
3080 | + [u'Kapil Thangavelu'], 1) |
3081 | +] |
3082 | |
3083 | === added file 'doc/config.rst' |
3084 | --- doc/config.rst 1970-01-01 00:00:00 +0000 |
3085 | +++ doc/config.rst 2013-07-15 21:24:33 +0000 |
3086 | @@ -0,0 +1,96 @@ |
3087 | + |
3088 | +Configuration |
3089 | +============= |
3090 | + |
3091 | + |
3092 | +The deployer configuration file can be in either YAML or JSON
3093 | +format. The top level is a dictionary/hash mapping stack names to
3094 | +their configurations.
3095 | + |
3096 | +Environment/Stack |
3097 | +================== |
3098 | + |
3099 | +The stack can take several options, for example specifying the series::
3100 | + |
3101 | + wordpress-stage: |
3102 | + series: precise |
3103 | + |
3104 | +Services |
3105 | +======== |
3106 | + |
3107 | + |
3108 | +Snippet:: |
3109 | + |
3110 | + wordpress-stage: |
3111 | + series: precise |
3112 | + services: |
3113 | + blog: |
3114 | + charm: wordpress |
3115 | + branch: lp:charms/precise/wordpress |
3116 | + constraints: mem=2 |
3117 | + options: |
3118 | + tuning: optimized |
3119 | + engine: apache |
3120 | + icon: include-base64://file.ico |
3121 | + |
3122 | + |
3123 | + The service's charm name will be inferred from the VCS branch name
3124 | + if possible; otherwise the service name is assumed to be the charm
3125 | + name. It can also be specified manually via the charm key.
3126 | + |
3127 | + The service constraints are specified as a single string value |
3128 | + corresponding to what would be specified on the command line. Valid |
3129 | + generic constraints for juju-core are mem, cpu-cores, cpu-power. |
3130 | + |
3131 | + Service configuration is specified as a dictionary of key/value
3132 | + mappings. Data from external files can be included with either
3133 | + the include-file://path-to-file or include-base64://path-to-file
3134 | + directive. Relative paths are resolved relative to the config
3135 | + file.
3136 | + |
3137 | + |
3138 | +Relations |
3139 | +========= |
3140 | + |
3141 | + |
3142 | +Relations can be specified in a few different formats. |
3143 | + |
3144 | +legacy:: |
3145 | + relations: |
3146 | + - [blog, [db, memcached]] |
3147 | + |
3148 | +endpoint pairs:: |
3149 | + relations: |
3150 | + - [blog, db] |
3151 | + - [blog, memcached] |
3152 | + |
3153 | +nested endpoint pairs:: |
3154 | + relations: |
3155 | + - [blog, [db, memcached]] |
3156 | + |
3157 | + |
3158 | +Inheritance |
3159 | +=========== |
3160 | + |
3161 | +An environment configuration:: |
3162 | + |
3163 | + wordpress-stage: |
3164 | + series: precise |
3165 | + services: |
3166 | + blog: |
3167 | + charm: wordpress |
3168 | + branch: lp:charms/precise/wordpress |
3169 | + constraints: mem=2 |
3170 | + options: |
3171 | + tuning: optimized |
3172 | + engine: apache |
3173 | + icon: include-base64://file.ico |
3174 | + |
3175 | + |
3176 | + wordpress-prod: |
3177 | + inherits: [wordpress-stage, monitoring] |
3178 | + services: |
3179 | + blog: |
3180 | + constraints: mem=16 |
3181 | + options: |
3182 | + tuning: optimized |
3183 | |
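The inheritance examples above depend on deep-merging stack definitions: child keys override parent keys, nested dictionaries merge recursively, and relation lists concatenate (as the `dict_merge` unit tests later in this diff expect). A minimal sketch of that behaviour, assuming names and semantics that the actual `dict_merge` in deployer.py may refine:

```python
# Hypothetical sketch of "inherits" merging; illustrative only, the
# real dict_merge in deployer.py may differ in detail.

def dict_merge(base, overlay):
    """Merge *overlay* onto *base*: dicts recurse, relations concat."""
    result = dict(base)
    for key, value in overlay.items():
        if key == 'relations' and key in result:
            # Relation lists from parent and child are both kept.
            result[key] = result[key] + value
        elif isinstance(value, dict) and isinstance(result.get(key), dict):
            # Nested service/option dicts merge recursively.
            result[key] = dict_merge(result[key], value)
        else:
            # Scalars and new keys: child wins.
            result[key] = value
    return result

stage = {
    'series': 'precise',
    'services': {'blog': {'constraints': 'mem=2',
                          'options': {'engine': 'apache'}}}}
prod = {
    'services': {'blog': {'constraints': 'mem=16'}}}

merged = dict_merge(stage, prod)
```

With this sketch, `wordpress-prod` would keep the staged `engine: apache` option while overriding the `constraints` value, mirroring the override behaviour shown in the inheritance example above.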
3184 | === added file 'doc/index.rst' |
3185 | --- doc/index.rst 1970-01-01 00:00:00 +0000 |
3186 | +++ doc/index.rst 2013-07-15 21:24:33 +0000 |
3187 | @@ -0,0 +1,24 @@ |
3188 | +.. Juju Deployer documentation master file, created by |
3189 | + sphinx-quickstart on Sat May 11 14:34:00 2013. |
3190 | + You can adapt this file completely to your liking, but it should at least |
3191 | + contain the root `toctree` directive. |
3192 | + |
3193 | +Welcome to Juju Deployer's documentation! |
3194 | +========================================= |
3195 | + |
3196 | +Contents: |
3197 | + |
3198 | + |
3199 | +.. toctree:: |
3200 | + :maxdepth: 2 |
3201 | + |
3202 | + config.rst |
3203 | + |
3204 | + |
3205 | +Indices and tables |
3206 | +================== |
3207 | + |
3208 | +* :ref:`genindex` |
3209 | +* :ref:`modindex` |
3210 | +* :ref:`search` |
3211 | + |
3212 | |
3213 | === added file 'doc/notes.txt' |
3214 | --- doc/notes.txt 1970-01-01 00:00:00 +0000 |
3215 | +++ doc/notes.txt 2013-07-15 21:24:33 +0000 |
3216 | @@ -0,0 +1,39 @@ |
3217 | + |
3218 | +Features
3219 | + |
3220 | + - dryrun |
3221 | + - export |
3222 | + - sync |
3223 | + - diff |
3224 | + |
3225 | + |
3226 | + |
3227 | +services: |
3228 | + blog: |
3229 | + charm: local:precise/wordpress-93 |
3230 | + exposed: false |
3231 | + relations: |
3232 | + cache: |
3233 | + - memcached |
3234 | + db: |
3235 | + - db |
3236 | + loadbalancer: |
3237 | + - blog |
3238 | + units: |
3239 | + blog/0: |
3240 | + agent-state: started |
3241 | + agent-version: 1.11.0 |
3242 | + machine: "4" |
3243 | + public-address: ec2-50-112-56-62.us-west-2.compute.amazonaws.com |
3244 | + memcached: |
3245 | + charm: local:precise/memcached-28 |
3246 | + exposed: false |
3247 | + life: dying |
3248 | + relations: |
3249 | + cache: |
3250 | + - blog |
3251 | + |
3252 | + |
3253 | +$ juju remove-relation blog memcached |
3254 | + |
3255 | +No Op on status |
3256 | |
3257 | === renamed file 'deployer.py' => 'old_deployer.py' |
3258 | === renamed file 'utils.py' => 'old_utils.py' |
3259 | === added file 'setup.cfg' |
3260 | --- setup.cfg 1970-01-01 00:00:00 +0000 |
3261 | +++ setup.cfg 2013-07-15 21:24:33 +0000 |
3262 | @@ -0,0 +1,7 @@ |
3263 | +[build_sphinx] |
3264 | +source-dir = doc/ |
3265 | +build-dir = doc/_build |
3266 | +all_files = 1 |
3267 | + |
3268 | +[upload_sphinx] |
3269 | +upload-dir = doc/_build/html |
3270 | \ No newline at end of file |
3271 | |
3272 | === added file 'setup.py' |
3273 | --- setup.py 1970-01-01 00:00:00 +0000 |
3274 | +++ setup.py 2013-07-15 21:24:33 +0000 |
3275 | @@ -0,0 +1,26 @@ |
3276 | +from setuptools import setup |
3277 | + |
3278 | +long_description = """ |
3279 | +A tool for deploying complex stacks with juju.
3280 | +""" |
3281 | + |
3282 | +setup( |
3283 | + name="juju-deployer", |
3284 | + version="0.1.0", |
3285 | + description="A tool for deploying complex stacks with juju.", |
3286 | + long_description=open("README").read(), |
3287 | + author="Kapil Thangavelu", |
3288 | + author_email="kapil.foss@gmail.com", |
3289 | + url="http://juju.ubuntu.com", |
3290 | + install_requires=["jujuclient >= 0.0.5"], |
3291 | + classifiers=[ |
3292 | + "Development Status :: 2 - Pre-Alpha", |
3293 | + "Programming Language :: Python", |
3294 | + "Topic :: Internet", |
3295 | + "Topic :: Software Development :: Libraries :: Python Modules", |
3296 | + "Intended Audience :: Developers"], |
3297 | +# test_suite="test_deployment", |
3298 | + entry_points={ |
3299 | + "console_scripts": [ |
3300 | + 'juju-deployer = deployer:main']}, |
3301 | + py_modules=["deployer"]) |
3302 | |
3303 | === added directory 'test_data' |
3304 | === added file 'test_data/blog.snippet' |
3305 | --- test_data/blog.snippet 1970-01-01 00:00:00 +0000 |
3306 | +++ test_data/blog.snippet 2013-07-15 21:24:33 +0000 |
3307 | @@ -0,0 +1,1 @@ |
3308 | +HelloWorld |
3309 | \ No newline at end of file |
3310 | |
3311 | === added file 'test_data/blog.yaml' |
3312 | --- test_data/blog.yaml 1970-01-01 00:00:00 +0000 |
3313 | +++ test_data/blog.yaml 2013-07-15 21:24:33 +0000 |
3314 | @@ -0,0 +1,63 @@ |
3315 | + |
3316 | +metrics-base: |
3317 | + services: |
3318 | + newrelic: |
3319 | + branch: lp:charms/precise/newrelic |
3320 | + options: |
3321 | + key: measureallthethings |
3322 | + |
3323 | +wordpress-base: |
3324 | + series: precise |
3325 | + services: |
3326 | + blog: |
3327 | + charm: wordpress |
3328 | + branch: lp:charms/precise/wordpress |
3329 | + |
3330 | +wordpress-stage: |
3331 | + series: precise |
3332 | + inherits: |
3333 | + - wordpress-base |
3334 | + - metrics-base |
3335 | + services: |
3336 | + blog: |
3337 | + charm: wordpress |
3338 | + constraints: instance-type=m1.small |
3339 | + num_units: 3 |
3340 | + options: |
3341 | + tuning: optimized |
3342 | + engine: apache |
3343 | + wp-content: include-base64://blog.snippet |
3344 | + db: |
3345 | + charm: mysql |
3346 | + branch: lp:charms/precise/mysql |
3347 | + options: |
3348 | + tuning-level: fast |
3349 | + memcached: |
3350 | + branch: lp:charms/precise/memcached |
3351 | + options: |
3352 | + size: 100 |
3353 | + haproxy: |
3354 | + charm_url: cs:precise/haproxy |
3355 | + options: |
3356 | + services: include-file://blog-include.yaml |
3357 | + relations: |
3358 | + - [blog, db] |
3359 | + - - blog |
3360 | + - cache |
3361 | + - - blog |
3362 | + - haproxy |
3363 | + |
3364 | +wordpress-prod: |
3365 | + series: precise |
3366 | + inherits: wordpress-stage |
3367 | + services: |
3368 | + blog: |
3369 | + options: |
3370 | + engine: nginx |
3371 | + tuning: optimized |
3372 | + constraints: instance-type=m1.large |
3373 | + |
3374 | + db: |
3375 | + constraints: instance-type=m1.large |
3376 | + options: |
3377 | + tuning-level: safest |
3378 | \ No newline at end of file |
3379 | |
3380 | === added directory 'test_data/precise' |
3381 | === added directory 'test_data/precise/appsrv' |
3382 | === added file 'test_data/precise/appsrv/metadata.yaml' |
3383 | --- test_data/precise/appsrv/metadata.yaml 1970-01-01 00:00:00 +0000 |
3384 | +++ test_data/precise/appsrv/metadata.yaml 2013-07-15 21:24:33 +0000 |
3385 | @@ -0,0 +1,2 @@ |
3386 | +charm: true |
3387 | + |
3388 | |
3389 | === added file 'test_data/stack-default.cfg' |
3390 | --- test_data/stack-default.cfg 1970-01-01 00:00:00 +0000 |
3391 | +++ test_data/stack-default.cfg 2013-07-15 21:24:33 +0000 |
3392 | @@ -0,0 +1,58 @@ |
3393 | +{ |
3394 | + "wordpress": { |
3395 | + "series": "precise", |
3396 | + "services": { |
3397 | + "wordpress": { |
3398 | + "constraints": "instance-type=m1.small", |
3399 | + "options": { |
3400 | + "engine": "", |
3401 | + "enable_modules": "proxy rewrite proxy_http proxy_balancer ssl headers", |
3402 | + "vhost_https_template": "include-base64://stack-include.template" |
3403 | + } |
3404 | + }, |
3405 | + "db": { |
3406 | + "constraints": "instance-type=m1.small", |
3407 | + "charm": "mysql", |
3408 | + "options": { |
3409 | + "tuning-level": "safest"} |
3410 | + }, |
3411 | + "memcached": { |
3412 | + "constraints": "instance-type=m1.small", |
3413 | + "charm_url": "" |
3414 | + }, |
3415 | + "haproxy": { |
3416 | + |
3417 | + }, |
3418 | + "my-app-cache": { |
3419 | + "constraints": "instance-type=m1.small", |
3420 | + "options": { |
3421 | + "x_balancer_name_allowed": "true" |
3422 | + } |
3423 | + }, |
3424 | + "my-nrpe-app-cache": { |
3425 | + }, |
3426 | + "my-app-cache-lb": { |
3427 | + "constraints": "instance-type=m1.small", |
3428 | + "options": { |
3429 | + "enable_monitoring": "true" |
3430 | + } |
3431 | + }, |
3432 | + "my-nrpe-app-cache-lb": { |
3433 | + } |
3434 | + }, |
3435 | + "relations": { |
3436 | + "my-app-fe:balancer": { |
3437 | + "weight": 100, |
3438 | + "consumes": ["my-app-lb:website"] |
3439 | + }, |
3440 | + "my-app-lb:reverseproxy": { |
3441 | + "weight": 90, |
3442 | + "consumes": ["my-app-cache:cached-website"] |
3443 | + }, |
3444 | + "my-app-cache:website": { |
3445 | + "weight": 80, |
3446 | + "consumes": ["my-app-cache-lb:website"] |
3447 | + } |
3448 | + } |
3449 | + } |
3450 | +} |
3451 | |
3452 | === added file 'test_data/stack-include.template' |
3453 | --- test_data/stack-include.template 1970-01-01 00:00:00 +0000 |
3454 | +++ test_data/stack-include.template 2013-07-15 21:24:33 +0000 |
3455 | @@ -0,0 +1,18 @@ |
3456 | +<VirtualHost _default_:80> |
3457 | + ServerAdmin webmaster@myapp.com |
3458 | + |
3459 | + ErrorLog ${APACHE_LOG_DIR}/error.log |
3460 | + CustomLog ${APACHE_LOG_DIR}/access.log combined |
3461 | + LogLevel warn |
3462 | + |
3463 | + DocumentRoot /srv/myapp.com/www/root |
3464 | + |
3465 | + ProxyRequests off |
3466 | + <Proxy *> |
3467 | + Order deny,allow |
3468 | + Allow from all |
3469 | + </Proxy> |
3470 | + |
3471 | + ProxyPreserveHost off |
3472 | + |
3473 | +</VirtualHost>
3474 | \ No newline at end of file |
3475 | |
3476 | === added file 'test_data/stack-include.yaml' |
3477 | --- test_data/stack-include.yaml 1970-01-01 00:00:00 +0000 |
3478 | +++ test_data/stack-include.yaml 2013-07-15 21:24:33 +0000 |
3479 | @@ -0,0 +1,19 @@ |
3480 | +- service_name: thumbnail-private |
3481 | + service_host: 0.0.0.0 |
3482 | + service_port: 1001 |
3483 | + service_options: ['timeout client 65000', 'timeout server 65000', 'option httpchk', 'balance leastconn'] |
3484 | + server_options: ['check inter 2000', 'rise 2', 'fall 5', 'maxconn 4'] |
3485 | + |
3486 | + |
3487 | +- service_name: updown-download-private |
3488 | + service_host: 0.0.0.0 |
3489 | + service_port: 1011 |
3490 | + service_options: ['timeout client 65000', 'timeout server 300000', 'option httpchk', 'balance leastconn'] |
3491 | + server_options: ['check inter 2000', 'rise 2', 'fall 5', 'maxconn 16'] |
3492 | + |
3493 | + |
3494 | +- service_name: updown-upload |
3495 | + service_host: 0.0.0.0 |
3496 | + service_port: 1021 |
3497 | + service_options: ['balance leastconn', 'timeout server 600000'] |
3498 | + server_options: ['check inter 2000', 'rise 2', 'fall 5', 'maxconn 8'] |
3499 | |
3500 | === added file 'test_data/stack-inherits.cfg' |
3501 | --- test_data/stack-inherits.cfg 1970-01-01 00:00:00 +0000 |
3502 | +++ test_data/stack-inherits.cfg 2013-07-15 21:24:33 +0000 |
3503 | @@ -0,0 +1,41 @@ |
3504 | +{ |
3505 | + "my-files-frontend-dev": { |
3506 | + "inherits": "default-myapp-frontend", |
3507 | + "services": { |
3508 | + "my-files-fe": { |
3509 | + "constraints": "instance-type=m1.small" |
3510 | + }, |
3511 | + "my-files-lb": { |
3512 | + "constraints": "instance-type=m1.small", |
3513 | + "options": { |
3514 | + "services": "include-file://stack-include.yaml" |
3515 | + } |
3516 | + }, |
3517 | + "my-nagios": { |
3518 | + "constraints": "instance-type=m1.small" |
3519 | + } |
3520 | + }, |
3521 | + "relations": { |
3522 | + "my-nrpe-files-fe:monitors": { |
3523 | + "weight": 99, |
3524 | + "consumes": ["my-nagios:monitors"] |
3525 | + }, |
3526 | + "my-files-fe:juju-info": { |
3527 | + "weight": 98, |
3528 | + "consumes": ["my-nagios:nagios"] |
3529 | + }, |
3530 | + "my-nrpe-files-lb:monitors": { |
3531 | + "weight": 89, |
3532 | + "consumes": ["my-nagios:monitors"] |
3533 | + }, |
3534 | + "my-files-lb:juju-info": { |
3535 | + "weight": 88, |
3536 | + "consumes": ["my-nagios:nagios"] |
3537 | + }, |
3538 | + "my-files-lb:local-monitors": { |
3539 | + "weight": 88, |
3540 | + "consumes": ["my-nrpe-files-lb:local-monitors"] |
3541 | + } |
3542 | + } |
3543 | + } |
3544 | +} |
3545 | \ No newline at end of file |
3546 | |
3547 | === added file 'test_deployment.py' |
3548 | --- test_deployment.py 1970-01-01 00:00:00 +0000 |
3549 | +++ test_deployment.py 2013-07-15 21:24:33 +0000 |
3550 | @@ -0,0 +1,447 @@ |
3551 | +""" |
3552 | + |
3553 | +For live connected tests you must set up a test environment and give
3554 | +the deployer the following. |
3555 | + |
3556 | + - JUJU_API = API Endpoint for an environment (machine 0 port 17070). Example |
3557 | + JUJU_API=wss://instance-address.com:17070 |
3558 | + |
3559 | + - JUJU_HOME = If you're using a non-standard juju home location
3560 | + |
3561 | + - Provider API credentials in their default environment locations. For some
3562 | + commands we need to execute the CLI at the moment (terminate-machine,
3563 | + deploy local charm).
3564 | + |
3565 | + - JUJU_BIN = If you want to use a non-PATH juju binary |
3566 | + |
3567 | + |
3568 | +For devs, there is an isolated environment test example below that
3569 | +will self-generate an environment.
3570 | +""" |
3571 | +import base64 |
3572 | +import logging |
3573 | +import os |
3574 | +import shutil |
3575 | +import StringIO |
3576 | +import subprocess |
3577 | +import sys |
3578 | +import tempfile |
3579 | +import time |
3580 | +import unittest |
3581 | + |
3582 | +from deployer import ( |
3583 | + ConfigStack, |
3584 | + Service, |
3585 | + Charm, |
3586 | + Deployment, |
3587 | + EnvironmentClient, |
3588 | + Importer, |
3589 | + ErrorExit) |
3590 | + |
3591 | +from deployer import ( |
3592 | + path_join, yaml_dump, yaml_load, setup_logging, dict_merge) |
3593 | + |
3594 | + |
3595 | +class Base(unittest.TestCase): |
3596 | + |
3597 | + def capture_logging(self, name="", level=logging.INFO, |
3598 | + log_file=None, formatter=None): |
3599 | + if log_file is None: |
3600 | + log_file = StringIO.StringIO() |
3601 | + log_handler = logging.StreamHandler(log_file) |
3602 | + if formatter: |
3603 | + log_handler.setFormatter(formatter) |
3604 | + logger = logging.getLogger(name) |
3605 | + logger.addHandler(log_handler) |
3606 | + old_logger_level = logger.level |
3607 | + logger.setLevel(level) |
3608 | + |
3609 | + @self.addCleanup |
3610 | + def reset_logging(): |
3611 | + logger.removeHandler(log_handler) |
3612 | + logger.setLevel(old_logger_level) |
3613 | + return log_file |
3614 | + |
3615 | + def mkdir(self): |
3616 | + d = tempfile.mkdtemp() |
3617 | + self.addCleanup(shutil.rmtree, d) |
3618 | + return d |
3619 | + |
3620 | + def change_environment(self, **kw): |
3621 | + """ |
3622 | + """ |
3623 | + original_environ = dict(os.environ) |
3624 | + |
3625 | + @self.addCleanup |
3626 | + def cleanup_env(): |
3627 | + os.environ.clear() |
3628 | + os.environ.update(original_environ) |
3629 | + |
3630 | + os.environ.update(kw) |
3631 | + |
3632 | + |
3633 | +class UtilTests(Base): |
3634 | + |
3635 | + def test_relation_list_merge(self): |
3636 | + self.assertEqual( |
3637 | + dict_merge( |
3638 | + {'relations': [['m1', 'x1']]}, |
3639 | + {'relations': [['m2', 'x2']]}), |
3640 | + {'relations': [['m1', 'x1'], ['m2', 'x2']]}) |
3641 | + |
3642 | + def test_no_rels_in_target(self): |
3643 | + self.assertEqual( |
3644 | + dict_merge( |
3645 | + {'a': 1}, |
3646 | + {'relations': [['m1', 'x1'], ['m2', 'x2']]}), |
3647 | + {'a': 1, 'relations': [['m1', 'x1'], ['m2', 'x2']]}) |
3648 | + |
3649 | + |
3650 | +class CharmTest(Base): |
3651 | + |
3652 | + def setUp(self): |
3653 | + d = self.mkdir() |
3654 | + self.series_path = os.path.join(d, "precise") |
3655 | + os.mkdir(self.series_path) |
3656 | + |
3657 | + self.charm_data = { |
3658 | + "charm": "couchdb", |
3659 | + "build": None, |
3660 | + "branch": "lp:charms/precise/couchdb", |
3661 | + "rev": None, |
3662 | + "charm_url": None, |
3663 | + } |
3664 | + self.output = self.capture_logging( |
3665 | + "deployer.charm", level=logging.DEBUG) |
3666 | + |
3667 | + def test_charm(self): |
3668 | + params = dict(self.charm_data) |
3669 | + charm = Charm.from_service("scratch", self.series_path, params) |
3670 | + |
3671 | + charm.fetch() |
3672 | + self.assertEqual(charm.metadata['name'], 'couchdb') |
3673 | + |
3674 | + charm.rev = 7 |
3675 | + charm.update() |
3676 | + output = subprocess.check_output( |
3677 | + ["bzr", "revno", "--tree"], cwd=charm.path) |
3678 | + self.assertEqual(output.strip(), str(7)) |
3679 | + |
3680 | + self.assertFalse(charm.is_modified()) |
3681 | + with open(os.path.join(charm.path, 'revision'), 'w') as fh: |
3682 | + fh.write('0') |
3683 | + self.assertTrue(charm.is_modified()) |
3684 | + |
3685 | + def test_charm_error(self): |
3686 | + params = dict(self.charm_data) |
3687 | + params['branch'] = "lp:charms/precise/zebramoon" |
3688 | + charm = Charm.from_service("scratch", self.series_path, params) |
3689 | + self.assertRaises(ErrorExit, charm.fetch) |
3690 | + self.assertIn('bzr: ERROR: Not a branch: ', self.output.getvalue()) |
3691 | + |
3692 | + |
3693 | +class ConfigTest(Base): |
3694 | + |
3695 | + def setUp(self): |
3696 | + self.output = self.capture_logging( |
3697 | + "deployer.config", level=logging.DEBUG) |
3698 | + |
3699 | + def test_config_basic(self): |
3700 | + config = ConfigStack(['configs/ostack-testing-sample.cfg']) |
3701 | + config.load() |
3702 | + self.assertEqual( |
3703 | + config.keys(), |
3704 | + [u'openstack-precise-ec2', |
3705 | + u'openstack-precise-ec2-trunk', |
3706 | + u'openstack-ubuntu-testing']) |
3707 | + self.assertRaises(ErrorExit, config.get, 'zeeland') |
3708 | + result = config.get("openstack-precise-ec2") |
3709 | + self.assertTrue(isinstance(result, Deployment)) |
3710 | + |
3711 | + def test_config(self): |
3712 | + config = ConfigStack([ |
3713 | + "test_data/stack-default.cfg", "test_data/stack-inherits.cfg"]) |
3714 | + config.load() |
3715 | + self.assertEqual( |
3716 | + config.keys(), |
3717 | + [u'my-files-frontend-dev', u'wordpress']) |
3718 | + deployment = config.get("wordpress") |
3719 | + self.assertTrue(deployment) |
3720 | + |
3721 | + |
3722 | +class NetworkConfigFetchingTests(Base): |
3723 | + """Configuration files can be specified via URL that is then fetched.""" |
3724 | + |
3725 | + def setUp(self): |
3726 | + self.output = self.capture_logging( |
3727 | + "deployer.config", level=logging.DEBUG) |
3728 | + |
3729 | + def test_urls_are_fetched(self): |
3730 | + # If a config file is specified as a URL, that URL is fetched and |
3731 | + # placed at a temporary location where it is read and treated as a |
3732 | + # regular config file. |
3733 | + CONFIG_URL = 'http://site.invalid/config-1' |
3734 | + config = ConfigStack([]) |
3735 | + config.config_files = [CONFIG_URL] |
3736 | + |
3737 | + class FauxResponse(file): |
3738 | + def getcode(self): |
3739 | + return 200 |
3740 | + |
3741 | + def faux_urlopen(url): |
3742 | + self.assertEqual(url, CONFIG_URL) |
3743 | + return FauxResponse('configs/ostack-testing-sample.cfg') |
3744 | + |
3745 | + config.load(urlopen=faux_urlopen) |
3746 | + self.assertEqual( |
3747 | + config.keys(), |
3748 | + [u'openstack-precise-ec2', |
3749 | + u'openstack-precise-ec2-trunk', |
3750 | + u'openstack-ubuntu-testing']) |
3751 | + self.assertRaises(ErrorExit, config.get, 'zeeland') |
3752 | + result = config.get("openstack-precise-ec2") |
3753 | + self.assertTrue(isinstance(result, Deployment)) |
3754 | + |
3755 | + def test_unfetchable_urls_generate_an_error(self): |
3756 | + # If fetching a config file specified as a URL fails (a non-200
3757 | + # response), an error is raised instead of treating the result
3758 | + # as a regular config file.
3759 | + CONFIG_URL = 'http://site.invalid/config-1' |
3760 | + config = ConfigStack([]) |
3761 | + config.config_files = [CONFIG_URL] |
3762 | + |
3763 | + class FauxResponse(file): |
3764 | + def getcode(self): |
3765 | + return 400 |
3766 | + |
3767 | + def faux_urlopen(url): |
3768 | + self.assertEqual(url, CONFIG_URL) |
3769 | + return FauxResponse('configs/ostack-testing-sample.cfg') |
3770 | + |
3771 | + self.assertRaises(ErrorExit, config.load, urlopen=faux_urlopen) |
3772 | + |
3773 | + |
3774 | +class ServiceTest(Base): |
3775 | + |
3776 | + def test_service(self): |
3777 | + data = { |
3778 | + 'branch': 'lp:precise/mysql'} |
3779 | + |
3780 | + s = Service('db', data) |
3781 | + self.assertEqual(s.name, "db") |
3782 | + self.assertEqual(s.num_units, 1) |
3783 | + self.assertEqual(s.constraints, None) |
3784 | + self.assertEqual(s.config, None) |
3785 | + |
3786 | + data = { |
3787 | + 'branch': 'lp:precise/mysql', |
3788 | + 'constraints': "instance-type=m1.small", |
3789 | + 'options': {"services": "include-file://stack-include.yaml"}, |
3790 | + 'num_units': 10} |
3791 | + s = Service('db', data) |
3792 | + self.assertEquals(s.num_units, 10) |
3793 | + self.assertEquals(s.constraints, "instance-type=m1.small") |
3794 | + self.assertEquals(s.config, {"services": "include-file://stack-include.yaml"}) |
3795 | + |
3796 | + |
3797 | +class DeploymentTest(Base): |
3798 | + def setUp(self): |
3799 | + self.output = setup_logging( |
3800 | + debug=True, verbose=True, stream=StringIO.StringIO()) |
3801 | + |
3802 | + def test_deployer(self): |
3803 | + d = ConfigStack(["test_data/blog.yaml"]).get('wordpress-prod') |
3804 | + services = d.get_services() |
3805 | + self.assertTrue([s for s in services if s.name == "newrelic"]) |
3806 | + |
3807 | + # Ensure inheritance order reflects reality, instead of merge value. |
3808 | + self.assertEqual( |
3809 | + d.data['inherits'], ['wordpress-stage', 'wordpress-base', 'metrics-base']) |
3810 | + |
3811 | + # Fetch charms to verify late binding config values & validation. |
3812 | + t = self.mkdir() |
3813 | + os.mkdir(os.path.join(t, "precise")) |
3814 | + d.repo_path = t |
3815 | + d.fetch_charms() |
3816 | + |
3817 | + # Load up overrides and resolves |
3818 | + d.load_overrides(["key=abc"]) |
3819 | + d.resolve_config() |
3820 | + |
3821 | + # Verify include-base64 |
3822 | + self.assertEqual(d.get_service('newrelic').config, {'key': 'abc'}) |
3823 | + self.assertEqual( |
3824 | + base64.b64decode(d.get_service('blog').config['wp-content']), |
3825 | + "HelloWorld") |
3826 | + |
3827 | + # TODO verify include-file |
3828 | + |
3829 | + # Verify relations |
3830 | + self.assertEqual( |
3831 | + list(d.get_relations()), |
3832 | + [('blog', 'db'), ('blog', 'cache'), ('blog', 'haproxy')]) |
3833 | + |
3834 | + |
3835 | +@unittest.skipIf( |
3836 | + (not bool(os.environ.get("JUJU_ENDPOINT")) |
3837 | + or not bool(os.environ.get("JUJU_AUTH"))), |
3838 | + "Test env must be defined: JUJU_ENDPOINT/JUJU_AUTH") |
3839 | +class LiveEnvironmentTest(Base): |
3840 | + |
3841 | + def setUp(self): |
3842 | + self.endpoint = os.environ.get("JUJU_ENDPOINT") |
3843 | + self.output = self.capture_logging( |
3844 | + "deployer.env", log_file=sys.stderr, level=logging.DEBUG) |
3845 | + self.env = Environment("", self.endpoint) |
3846 | + self.env.connect() |
3847 | + |
3848 | + def tearDown(self): |
3849 | + self.env.reset(timeout=240) |
3850 | + self.env.close() |
3851 | + |
3852 | + def test_env(self): |
3853 | + # Destroy everything.. consistent baseline |
3854 | + self.env.reset(timeout=240) |
3855 | + |
3856 | + status = self.env.status() |
3857 | + self.assertFalse('services' in status) |
3858 | + self.env.deploy("test-blog", "cs:precise/wordpress") |
3859 | + self.env.deploy("test-db", "cs:precise/mysql") |
3860 | + self.env.add_relation("test-db", "test-blog") |
3861 | + self.env.add_units('test-blog', 1) |
3862 | + self.env.wait_for_units(timeout=600) |
3863 | + |
3864 | + status = self.env.status() |
3865 | + services = ["test-blog", "test-db"] |
3866 | + self.assertEqual( |
3867 | + sorted(status['services'].keys()), |
3868 | + services) |
3869 | + for s in services: |
3870 | + for u in status['services'][s]['units']: |
3871 | + self.assertEqual(u['agent-state'], "started") |
3872 | + |
3873 | + |
3874 | +@unittest.skipIf( |
3875 | + not bool(os.environ.get("JUJU_NEW_ENV")), |
3876 | + "Skipping new env tests unless new env specified (JUJU_NEW_ENV)") |
3877 | +class DISABLEDIsolatedEnvironmentTest(Base): |
3878 | + """ |
3879 | + Environment tests that *create* & destroy isolated environments.
3880 | + """ |
3881 | + |
3882 | + @classmethod |
3883 | + def _exec(cls, params, ign_err=False, **env_vars): |
3884 | + env = dict(os.environ) |
3885 | + env.update(env_vars) |
3886 | + env["JUJU_HOME"] = cls.cls_juju_home |
3887 | + |
3888 | + print "cmd:", " ".join(params), "env:", " ".join( |
3889 | + ["%s=%s" % (k, env[k]) for k in env if k.startswith("JUJU")]) |
3890 | + |
3891 | + try: |
3892 | + return subprocess.check_output( |
3893 | + params, stderr=subprocess.STDOUT, env=env) |
3894 | + except subprocess.CalledProcessError, e: |
3895 | + print "Error on exec" |
3896 | + print " ".join(params), env["JUJU_HOME"] |
3897 | + print e.output |
3898 | + |
3899 | + @classmethod |
3900 | + def setUpClass(cls): |
3901 | + # Bootstraps a new juju test environment |
3902 | + cls.cls_juju_home = tempfile.mkdtemp() |
3903 | + cls.juju_bin = os.environ.get("JUJU_BIN", "juju") |
3904 | + |
3905 | + # Create a new environment |
3906 | + conf = yaml_load(cls._exec([cls.juju_bin, 'init'])) |
3907 | + envs = conf['environments'] |
3908 | + env_type = os.environ.get("JUJU_ENV", "amazon") |
3909 | + |
3910 | + assert env_type in envs, "Invalid env type specified %s" % (envs.keys()) |
3911 | + conf["default"] = env_type |
3912 | + |
3913 | + # Find a public key to use for the environment |
3914 | + found = False |
3915 | + key_set = ["~/.ssh/id_dsa.pub", "~/.ssh/id_rsa.pub"] |
3916 | + for key in key_set: |
3917 | + key = os.path.expanduser(key) |
3918 | + if os.path.exists(key): |
3919 | + found = True |
3920 | + break |
3921 | + if not found: |
3922 | + raise RuntimeError("No ssh public key found: tried (%s)" % (
3923 | + " ".join(key_set))) |
3924 | + envs[env_type]['authorized-keys-path'] = key |
3925 | + |
3926 | + # Bootstrap |
3927 | + with open(path_join( |
3928 | + cls.cls_juju_home, "environments.yaml"), "w") as fh: |
3929 | + fh.write(yaml_dump(conf)) |
3930 | + cls._exec([cls.juju_bin, "bootstrap"]) |
3931 | + |
3932 | + @classmethod |
3933 | + def tearDownClass(cls): |
3934 | + cls._exec([cls.juju_bin, "destroy-environment"]) |
3935 | + |
3936 | + def setUp(self): |
3937 | + self.st = time.time() |
3938 | + self.change_environment(JUJU_HOME=self.__class__.cls_juju_home) |
3939 | + self.capture_logging("deployer.env", log_file=sys.stderr, level=logging.DEBUG) |
3940 | + |
3941 | + def test_environment(self): |
3942 | + env = Environment("") |
3943 | + st = time.time() |
3944 | + env.connect() |
3945 | + print "Connect took", time.time() - st
3946 | + status = env.status() |
3947 | + self.assertFalse(status['services'].keys()) |
3948 | + |
3949 | + |
3950 | +_marker = object() |
3951 | + |
3952 | + |
3953 | +class Options(dict): |
3954 | + |
3955 | + def __getattr__(self, key): |
3956 | + v = self.get(key, _marker) |
3957 | + if v is _marker: |
3958 | + raise AttributeError(key) |
3959 | + return v |
3960 | + |
3961 | + |
3962 | +@unittest.skipIf( |
3963 | + not bool(os.environ.get("JUJU_API")), |
3964 | + "Test environment endpoint must be defined JUJU_API") |
3965 | +class ImporterTest(Base): |
3966 | + |
3967 | + def setUp(self): |
3968 | + self.env = Environment("", os.environ["JUJU_API"]) |
3969 | + self.output = StringIO.StringIO()
3970 | + setup_logging(verbose=True, debug=True, stream=self.output) |
3971 | + |
3972 | + self.importer = Importer() |
3973 | + self.options = Options({ |
3974 | + 'verbose': True, 'watch': True}) |
3975 | + |
3976 | + self.importer = Importer(self.env, self.deployment, self.options) |
3977 | + |
3978 | + def get_importer(self, data, dirs=(), **kw): |
3979 | + pass |
3980 | + |
3981 | + def test_importer(self): |
3982 | + pass |
3983 | + |
3984 | + |
3985 | +def collector(): |
3986 | + # import __main__ triggers code re-execution |
3987 | + from unittest.loader import defaultTestLoader |
3988 | + __main__ = sys.modules['__main__'] |
3989 | + setupDir = os.path.abspath(os.path.dirname(__main__.__file__)) |
3990 | + return defaultTestLoader.discover(setupDir) |
3991 | + |
3992 | + |
3993 | +def main(): |
3994 | + unittest.main() |
3995 | + |
3996 | +if __name__ == '__main__': |
3997 | + main() |
3998 | |
3999 | === added file 'todo.txt' |
4000 | --- todo.txt 1970-01-01 00:00:00 +0000 |
4001 | +++ todo.txt 2013-07-15 21:24:33 +0000 |
4002 | @@ -0,0 +1,47 @@ |
4003 | +Todo Items |
4004 | +---------- |
4005 | + |
4006 | +Long Term
4007 | +========= |
4008 | + |
4009 | +- Freeze to generate an updated |
4010 | +- support for revision of charms as of date |
4011 | +- include-env for shell env var inclusion |
4012 | + |
4013 | +- generate an archive with charms |
4014 | +- gpg sign an archive |
4015 | + |
4016 | + # Workaround / say if using cached Charm present |
4017 | + # Write out resolved deployment file with includes & overrides |
4018 | + # Ensure bootstrap |
4019 | + # |
4020 | + |
4021 | +Medium Term |
4022 | +=========== |
4023 | +- support for github charms |
4024 | +- Get core support for resetting charms. |
4025 | + |
4026 | + |
4027 | +Short Term |
4028 | +========== |
4029 | +- Switch to argparse/Use subcommands for new commands |
4030 | +- Better deployment validation |
4031 | +- Diff an environment to config |
4032 | +- Save a charm |
4033 | +- Update/Run against an existing environment |
4034 | +- Write out resolved deployment file |
4035 | +- Version file. |
4036 | + |
4037 | + |
4038 | +---------------------------------------- |
4039 | + |
4040 | +- allow for specifying list of lists for a single rel entry. |
4041 | +- include variable for env uuid and name |
4042 | +- charm branch parse for info |
4043 | + |
4044 | +---------------------------------------- |
4045 | + |
4046 | +Bugs encountered |
4047 | + - http://pad.lv/1174616 [wedged unit] |
4048 | + - http://pad.lv/1174613 [relation-list -r] |
4049 | + - http://pad.lv/1174610 [unit-id recycling] |