Merge lp:~fwereade/pyjuju/restore-unit-relation-nodes into lp:pyjuju
Proposed by: William Reade
Status: Superseded
Proposed branch: lp:~fwereade/pyjuju/restore-unit-relation-nodes
Merge into: lp:pyjuju
Diff against target: 7460 lines (+3429/-1783), 54 files modified
docs/source/internals/unit-agent-persistence.rst (+138/-0) examples/oneiric/mysql/hooks/install (+2/-2) examples/oneiric/mysql/hooks/start (+3/-1) examples/oneiric/mysql/hooks/stop (+1/-1) juju/agents/base.py (+56/-3) juju/agents/tests/common.py (+1/-0) juju/agents/tests/test_base.py (+174/-29) juju/agents/tests/test_machine.py (+3/-4) juju/agents/tests/test_unit.py (+71/-136) juju/agents/unit.py (+42/-114) juju/control/options.py (+4/-5) juju/control/status.py (+2/-0) juju/control/tests/test_resolved.py (+4/-4) juju/control/tests/test_status.py (+3/-1) juju/errors.py (+14/-1) juju/hooks/scheduler.py (+102/-48) juju/hooks/tests/test_scheduler.py (+258/-41) juju/lib/lxc/tests/test_lxc.py (+12/-5) juju/lib/tests/data/test_basic_install (+10/-0) juju/lib/tests/data/test_less_basic_install (+11/-0) juju/lib/tests/data/test_standard_install (+10/-0) juju/lib/tests/test_statemachine.py (+14/-0) juju/lib/tests/test_upstart.py (+339/-0) juju/lib/upstart.py (+166/-0) juju/machine/tests/test_unit_deployment.py (+192/-238) juju/machine/unit.py (+101/-190) juju/providers/common/cloudinit.py (+19/-12) juju/providers/common/tests/data/cloud_init_bootstrap (+54/-8) juju/providers/common/tests/data/cloud_init_bootstrap_zookeepers (+54/-9) juju/providers/common/tests/data/cloud_init_branch (+29/-7) juju/providers/common/tests/data/cloud_init_branch_trunk (+29/-7) juju/providers/common/tests/data/cloud_init_distro (+28/-6) juju/providers/common/tests/data/cloud_init_ppa (+29/-7) juju/providers/common/tests/test_cloudinit.py (+3/-2) juju/providers/ec2/tests/data/bootstrap_cloud_init (+56/-11) juju/providers/ec2/tests/data/launch_cloud_init (+28/-6) juju/providers/ec2/tests/data/launch_cloud_init_branch (+30/-11) juju/providers/ec2/tests/data/launch_cloud_init_ppa (+28/-6) juju/providers/local/__init__.py (+5/-9) juju/providers/local/agent.py (+32/-88) juju/providers/local/files.py (+53/-39) juju/providers/local/tests/test_agent.py (+69/-44) juju/providers/local/tests/test_files.py 
(+136/-21) juju/providers/orchestra/tests/data/bootstrap_user_data (+53/-8) juju/providers/orchestra/tests/data/launch_user_data (+27/-5) juju/state/relation.py (+35/-30) juju/state/service.py (+10/-10) juju/state/tests/test_relation.py (+312/-391) juju/state/tests/test_security.py (+1/-0) juju/tests/test_errors.py (+37/-9) juju/unit/lifecycle.py (+173/-72) juju/unit/tests/test_lifecycle.py (+116/-38) juju/unit/tests/test_workflow.py (+196/-76) juju/unit/workflow.py (+54/-28)
To merge this branch: bzr merge lp:~fwereade/pyjuju/restore-unit-relation-nodes
Reviewer: Juju Engineering (review requested; status: Pending)
Review via email: mp+91307@code.launchpad.net
This proposal has been superseded by a proposal from 2012-02-02.
Description of the change
When reconstructing unit relation state in UnitLifecycle, ensure presence nodes are created for any relations which are not already departed.
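The fix can be pictured as a small restoration pass over the unit's recorded relations. The sketch below is illustrative only: `FakeZooKeeper`, `restore_presence_nodes`, and the relation-record shape are assumptions, not pyjuju's actual API.

```python
# Hypothetical sketch of the behaviour described above; the class and
# function names here are illustrative, not pyjuju's real interfaces.

class FakeZooKeeper:
    """Minimal in-memory stand-in for the ZooKeeper client."""

    def __init__(self):
        self.nodes = set()

    def create(self, path, ephemeral=False):
        self.nodes.add(path)

    def exists(self, path):
        return path in self.nodes


def restore_presence_nodes(zk, unit_name, relation_records):
    """Recreate presence nodes for relations that have not departed."""
    restored = []
    for record in relation_records:
        if record["departed"]:
            # Departed relations only need their -broken hook fired;
            # recreating presence would wrongly advertise membership.
            continue
        path = "/relations/%s/settings/%s" % (record["id"], unit_name)
        if not zk.exists(path):
            zk.create(path, ephemeral=True)
            restored.append(path)
    return restored
```

On agent restart, running this pass before synchronizing the relation workflows would guarantee that every still-live relation sees the unit as present again.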
502. By William Reade: excise amusing print; expand testing a little
503. By William Reade: er, excise the other print
504. By William Reade: merge parent
505. By William Reade: merge parent
506. By William Reade: switch parent to trunk
507. By William Reade: merge parent
Unmerged revisions
Preview Diff
1 | === added file 'docs/source/internals/unit-agent-persistence.rst' | |||
2 | --- docs/source/internals/unit-agent-persistence.rst 1970-01-01 00:00:00 +0000 | |||
3 | +++ docs/source/internals/unit-agent-persistence.rst 2012-02-02 16:42:42 +0000 | |||
4 | @@ -0,0 +1,138 @@ | |||
5 | 1 | Notes on unit agent persistence | ||
6 | 2 | =============================== | ||
7 | 3 | |||
8 | 4 | Introduction | ||
9 | 5 | ------------ | ||
10 | 6 | |||
11 | 7 | This was first written to explain the extensive changes made in the branch | ||
12 | 8 | lp:~fwereade/juju/restart-transitions; that branch has been split out into | ||
13 | 9 | four separate branches, but this discussion should remain a useful guide to | ||
14 | 10 | the changes made to the unit and relation workflows and lifecycles in the | ||
15 | 11 | course of making the unit agent resumable. | ||
16 | 12 | |||
17 | 13 | |||
18 | 14 | Glossary | ||
19 | 15 | -------- | ||
20 | 16 | |||
21 | 17 | UA = UnitAgent | ||
22 | 18 | UL = UnitLifecycle | ||
23 | 19 | UWS = UnitWorkflowState | ||
24 | 20 | URL = UnitRelationLifecycle | ||
25 | 21 | RWS = RelationWorkflowState | ||
26 | 22 | URS = UnitRelationState | ||
27 | 23 | SRS = ServiceRelationState | ||
28 | 24 | |||
29 | 25 | |||
30 | 26 | Technical discussion | ||
31 | 27 | -------------------- | ||
32 | 28 | |||
33 | 29 | Probably the most fundamental change is the addition of a "synchronize" method | ||
34 | 30 | to both UWS and RWS. Calling "synchronize" should generally be *all* you need | ||
35 | 31 | to do to put the workflow and associated components into "the right state"; ie | ||
36 | 32 | ZK state will be restored, the appropriate lifecycle will be started (or not), | ||
37 | 33 | and any initial transitions will automatically be fired ("start" for RWS; | ||
38 | 34 | "install", "start" for UWS). | ||
39 | 35 | |||
40 | 36 | The synchronize method keeps responsibility for the lifecycle's state purely in | ||
41 | 37 | the hands of the workflow; once a workflow is synced, the *only* necessary | ||
42 | 38 | interactions with it should be in response to changes in ZK. | ||
43 | 39 | |||
44 | 40 | The disadvantage is that lifecycle "start" and "stop" methods have become a | ||
45 | 41 | touch overloaded: | ||
46 | 42 | |||
47 | 43 | * UL.stop(): now takes "stop_relations" in addition to "fire_hooks", in which | ||
48 | 44 | "stop_relations" being True causes the original behaviour (transition "up" | ||
49 | 45 | RWSs to "down", as when transitioning the UWS to a "stopped" or "error" | ||
50 | 46 | state), but False simply causes them to stop watching for changes (in | ||
51 | 47 | preparation for an orderly shutdown, for example). | ||
52 | 48 | |||
53 | 49 | * UL.start(): now takes "start_relations" in addition to "fire_hooks", in which | ||
54 | 50 | the "start_relations" flag being True causes the original behaviour | ||
55 | 51 | (automatically transition "down" RWSs to "up", as when restarting/resolving | ||
56 | 52 | the UWS), while False causes the RWSs only to be synced. | ||
57 | 53 | |||
58 | 54 | * URL.start(): now takes "scheduler" in addition to "watches", allowing the | ||
59 | 55 | watching and the contained HookScheduler to be controlled separately | ||
60 | 56 | (allowing us to actually perform the RWS synchronise correctly). | ||
61 | 57 | |||
62 | 58 | * URL.stop(): still just takes "watches", because there wasn't a scenario in | ||
63 | 59 | which I wanted to stop the watches but not the HookScheduler. | ||
64 | 60 | |||
65 | 61 | I still think it's a win, though: and I don't think that turning them into | ||
66 | 62 | separate methods is the right way to go; "start" and "stop" remain perfectly | ||
67 | 63 | decent and appropriate names for what they do. | ||
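The flag semantics in the bullet list above can be sketched with toy objects. Everything here is a stand-in for illustration: `RWS` and `UnitLifecycleSketch` are not the real `RelationWorkflowState` or `UnitLifecycle`, and hook firing is omitted.

```python
# Toy illustration of the "start_relations" / "stop_relations" flags
# described above; not the real pyjuju classes.

class RWS:
    """Stand-in for RelationWorkflowState: a state plus a watch flag."""

    def __init__(self, state="down"):
        self.state = state
        self.watching = False

    def synchronize(self):
        # Resume whatever state was persisted; watch only if "up".
        self.watching = (self.state == "up")

    def transition(self, state):
        self.state = state
        self.watching = (state == "up")


class UnitLifecycleSketch:
    def __init__(self, relations):
        self.relations = relations

    def start(self, start_relations=True):
        for rws in self.relations:
            if start_relations and rws.state == "down":
                rws.transition("up")   # original behaviour: bring back up
            else:
                rws.synchronize()      # restart: resume persisted state

    def stop(self, stop_relations=True):
        for rws in self.relations:
            if stop_relations:
                rws.transition("down")  # original behaviour
            else:
                rws.watching = False    # orderly shutdown: stop watches only
```

The point of the overload is visible here: `stop(stop_relations=False)` leaves each RWS's recorded state untouched, so a later `start(start_relations=False)` resumes exactly where the process left off.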
68 | 64 | |||
69 | 65 | Now this has been done, we can always launch directly into whatever state we | ||
70 | 66 | shut down in, and that's great, because sudden process death doesn't hurt us | ||
71 | 67 | any more [0] [1]. Except... when we're upgrading a charm. It emerges that the | ||
72 | 68 | charm upgrade state transition only covers the process of firing the hook, and | ||
73 | 69 | not the process of actually upgrading the charm. | ||
74 | 70 | |||
75 | 71 | In short, we had a mechanism, completely outside the workflow's purview, for | ||
76 | 72 | potentially *brutal* modifications of state (both in terms of the charm itself, | ||
77 | 73 | on disk, and also in that the hook executor should remain stopped forever while | ||
78 | 74 | in "charm_upgrade_error" state); and this rather scuppered the "restart in the | ||
79 | 75 | same state" goal. The obvious thing to do was to move the charm upgrade | ||
80 | 76 | operation into the "charm_upgrade" transition, so we had a *chance* of being | ||
81 | 77 | able to start in the correct state. | ||
82 | 78 | |||
83 | 79 | UL.upgrade_charm, called by UWS, does itself have subtleties, but it should be | ||
84 | 80 | reasonably clear when examined in context; the most important point is that it | ||
85 | 81 | will call back at the start and end of the risky period, and that the UWS's | ||
86 | 82 | handler for this callback sets a flag in "started"'s state_vars for the | ||
87 | 83 | duration of the upgrade. If that flag is set when we subsequently start up | ||
88 | 84 | again and synchronize the UWS, then we know to immediately force the | ||
89 | 85 | charm_upgrade_error state and work from there. | ||
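The flag-in-state_vars mechanism can be sketched as follows. This is a toy model under stated assumptions: the names `UWSSketch`, `upgrade_charm`, and the `"upgrading"` key are illustrative, not the actual pyjuju identifiers.

```python
# Toy model of detecting an upgrade interrupted by process death;
# names are assumptions, not pyjuju's real API.

class UWSSketch:
    def __init__(self, state_vars=None):
        self.state = None
        self.state_vars = state_vars or {}

    def upgrade_charm(self, do_upgrade):
        # Set a flag for the duration of the risky period; if the process
        # dies mid-upgrade, the flag survives in persisted state_vars.
        self.state_vars["upgrading"] = True
        do_upgrade()
        self.state_vars.pop("upgrading")

    def synchronize(self):
        if self.state_vars.get("upgrading"):
            # Flag still set on startup: the last upgrade never finished.
            self.state = "charm_upgrade_error"
        else:
            self.state = self.state or "started"
```

Because the flag is only cleared after the upgrade completes, a restarted agent that finds it set knows to force `charm_upgrade_error` rather than resume as if nothing happened.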
90 | 86 | |||
91 | 87 | [0] Well, it does, because we need to persist more than just the (already- | ||
92 | 88 | persisted) workflow state. This branch includes RWS persistence in the UL, as | ||
93 | 89 | requested in this branch's first pre-review (back in the day...), but does not | ||
94 | 90 | include HookScheduler persistence in the URLs, so it remains possible for | ||
95 | 91 | relation hooks which have been queued, but not yet executed, to be lost if the | ||
96 | 92 | process exits before the queue empties. That will be coming in another | ||
97 | 93 | branch (resolve-unit-relation-diffs). | ||
98 | 94 | |||
99 | 95 | [1] This seems like a good time to mention the UL's relation-broken handling | ||
100 | 96 | for relations that went away while the process was stopped: every time | ||
101 | 97 | ._relations is changed, it writes out enough state to recreate a Frankenstein's | ||
102 | 98 | URS object, which it can then use on load to reconstruct the necessary URL and | ||
103 | 99 | hence RWS. | ||
104 | 100 | |||
105 | 101 | We don't strictly need to *reconstruct* it in every case -- we can just use | ||
106 | 102 | SRS.get_unit_state if the relation still exists -- but given that sometimes we | ||
107 | 103 | do, it seemed senseless to have two code paths for the same operations. Of the | ||
108 | 104 | RWSs we reconstruct, those with existing SRSs will be synchronized (because we | ||
109 | 105 | know it's safe to do so), and the remainder will be stored untouched (because | ||
110 | 106 | we know that _process_service_changes will fire the "depart" transition for us | ||
111 | 107 | before doing anything else... and the "relation-broken" hook will be executed | ||
112 | 108 | in a DepartedRelationHookContext, which is rather restricted, and so shouldn't | ||
113 | 109 | cause the Frankenstein's URS to hit state we can't be sure exists). | ||
114 | 110 | |||
115 | 111 | |||
116 | 112 | Appendix: a rough history of changes to restart-transitions | ||
117 | 113 | ----------------------------------------------------------- | ||
118 | 114 | |||
119 | 115 | * Add UWS transitions from "stopped" to "started", so that process restarts can | ||
120 | 116 | be made to restart UWSs. | ||
121 | 117 | * Upon review, add RWS persistence to UL, to ensure we can't miss | ||
122 | 118 | relation-broken hooks; as part of this, as discussed, add | ||
123 | 119 | DepartedRelationHookContext in which to execute them. | ||
124 | 120 | * Upon discussion, discover that original UWS "started" -> "stopped" behaviour | ||
125 | 121 | on process shutdown is not actually the desired behaviour (and that the | ||
126 | 122 | associated RWS "up" -> "down" shouldn't happen either). | ||
127 | 123 | * Make changes to UL.start/stop, and add UWS/RWS.synchronize, to allow us to | ||
128 | 124 | shut down workflows cleanly without transitions and bring them up again in | ||
129 | 125 | the same state. | ||
130 | 126 | * Discover that we don't have any other reason to transition UWS to "stopped"; | ||
131 | 127 | to actually fire stop hooks at the right time, we need a more sophisticated | ||
132 | 128 | system (possibly involving the machine agent telling the unit agent to shut | ||
133 | 129 | itself down). Remove the newly-added "restart" transitions, because they're | ||
134 | 130 | meaningless now; ponder what good it does us to have a "stopped" state that | ||
135 | 131 | we never actually enter; chicken out of actually removing it. | ||
136 | 132 | * Realise that charm upgrades do an end-run around the whole UWS mechanism, and | ||
137 | 133 | resolve to integrate them so I can actually detect upgrades left incomplete | ||
138 | 134 | due to process death. | ||
139 | 135 | * Move charm upgrade operation from agent into UL; come to appreciate the | ||
140 | 136 | subtleties of the charm upgrade process; make necessary tweaks to | ||
141 | 137 | UL.upgrade_charm, and UWS, to allow for synchronization of incomplete | ||
142 | 138 | upgrades. | ||
143 | 0 | 139 | ||
144 | === modified file 'examples/oneiric/mysql/hooks/install' | |||
145 | --- examples/oneiric/mysql/hooks/install 2011-09-15 18:56:08 +0000 | |||
146 | +++ examples/oneiric/mysql/hooks/install 2012-02-02 16:42:42 +0000 | |||
147 | @@ -21,5 +21,5 @@ | |||
148 | 21 | juju-log "Editing my.cnf to allow listening on all interfaces" | 21 | juju-log "Editing my.cnf to allow listening on all interfaces" |
149 | 22 | sed --in-place=old 's/127\.0\.0\.1/0.0.0.0/' /etc/mysql/my.cnf | 22 | sed --in-place=old 's/127\.0\.0\.1/0.0.0.0/' /etc/mysql/my.cnf |
150 | 23 | 23 | ||
153 | 24 | juju-log "Restarting mysql service" | 24 | juju-log "Stopping mysql service" |
154 | 25 | service mysql restart | 25 | service mysql stop |
155 | 26 | 26 | ||
156 | === modified file 'examples/oneiric/mysql/hooks/start' | |||
157 | --- examples/oneiric/mysql/hooks/start 2011-02-03 01:23:43 +0000 | |||
158 | +++ examples/oneiric/mysql/hooks/start 2012-02-02 16:42:42 +0000 | |||
159 | @@ -1,1 +1,3 @@ | |||
160 | 1 | #!/bin/bash | ||
161 | 2 | \ No newline at end of file | 1 | \ No newline at end of file |
162 | 2 | #!/bin/bash | ||
163 | 3 | juju-log "Starting mysql service" | ||
164 | 4 | service mysql start || service mysql restart | ||
165 | 3 | 5 | ||
166 | === modified file 'examples/oneiric/mysql/hooks/stop' | |||
167 | --- examples/oneiric/mysql/hooks/stop 2011-09-15 18:56:08 +0000 | |||
168 | +++ examples/oneiric/mysql/hooks/stop 2012-02-02 16:42:42 +0000 | |||
169 | @@ -1,3 +1,3 @@ | |||
170 | 1 | #!/bin/bash | 1 | #!/bin/bash |
171 | 2 | juju-log "Stopping mysql service" | 2 | juju-log "Stopping mysql service" |
173 | 3 | /etc/init.d/mysql stop | 3 | service mysql stop || true |
174 | 4 | 4 | ||
175 | === modified file 'juju/agents/base.py' | |||
176 | --- juju/agents/base.py 2011-09-22 13:23:00 +0000 | |||
177 | +++ juju/agents/base.py 2012-02-02 16:42:42 +0000 | |||
178 | @@ -1,7 +1,9 @@ | |||
179 | 1 | import argparse | 1 | import argparse |
180 | 2 | import os | 2 | import os |
181 | 3 | import logging | ||
182 | 4 | import stat | ||
183 | 3 | import sys | 5 | import sys |
185 | 4 | import logging | 6 | import yaml |
186 | 5 | 7 | ||
187 | 6 | import zookeeper | 8 | import zookeeper |
188 | 7 | 9 | ||
189 | @@ -18,6 +20,23 @@ | |||
190 | 18 | from juju.state.environment import GlobalSettingsStateManager | 20 | from juju.state.environment import GlobalSettingsStateManager |
191 | 19 | 21 | ||
192 | 20 | 22 | ||
193 | 23 | def load_client_id(path): | ||
194 | 24 | try: | ||
195 | 25 | with open(path) as f: | ||
196 | 26 | return yaml.load(f.read()) | ||
197 | 27 | except IOError: | ||
198 | 28 | return None | ||
199 | 29 | |||
200 | 30 | |||
201 | 31 | def save_client_id(path, client_id): | ||
202 | 32 | parent = os.path.dirname(path) | ||
203 | 33 | if not os.path.exists(parent): | ||
204 | 34 | os.makedirs(parent) | ||
205 | 35 | with open(path, "w") as f: | ||
206 | 36 | f.write(yaml.dump(client_id)) | ||
207 | 37 | os.chmod(path, stat.S_IRUSR | stat.S_IWUSR) | ||
208 | 38 | |||
209 | 39 | |||
210 | 21 | class TwistedOptionNamespace(object): | 40 | class TwistedOptionNamespace(object): |
211 | 22 | """ | 41 | """ |
212 | 23 | An argparse namespace implementation that is compatible with twisted | 42 | An argparse namespace implementation that is compatible with twisted |
213 | @@ -153,13 +172,40 @@ | |||
214 | 153 | "Invalid juju-directory %r, does not exist." % ( | 172 | "Invalid juju-directory %r, does not exist." % ( |
215 | 154 | options.get("juju_directory"))) | 173 | options.get("juju_directory"))) |
216 | 155 | 174 | ||
217 | 175 | if options["session_file"] is None: | ||
218 | 176 | raise JujuError("No session file specified") | ||
219 | 177 | |||
220 | 156 | self.config = options | 178 | self.config = options |
221 | 157 | 179 | ||
222 | 158 | @inlineCallbacks | 180 | @inlineCallbacks |
223 | 181 | def _kill_existing_session(self): | ||
224 | 182 | try: | ||
225 | 183 | # We might have died suddenly, in which case the session may | ||
226 | 184 | # still be alive. If this is the case, shoot it in the head, so | ||
227 | 185 | # it doesn't interfere with our attempts to recreate our state. | ||
228 | 186 | # (We need to be able to recreate our state *anyway*, and it's | ||
229 | 187 | # much simpler to force ourselves to recreate it every time than | ||
230 | 188 | # it is to mess around partially recreating partial state.) | ||
231 | 189 | client_id = load_client_id(self.config["session_file"]) | ||
232 | 190 | if client_id is None: | ||
233 | 191 | return | ||
234 | 192 | temp_client = yield ZookeeperClient().connect( | ||
235 | 193 | self.config["zookeeper_servers"], client_id=client_id) | ||
236 | 194 | yield temp_client.close() | ||
237 | 195 | except zookeeper.ZooKeeperException: | ||
238 | 196 | # We don't really care what went wrong; just that we're not able | ||
239 | 197 | # to connect using the old session, and therefore we should be ok | ||
240 | 198 | # to start a fresh one without transient state hanging around. | ||
241 | 199 | pass | ||
242 | 200 | |||
243 | 201 | @inlineCallbacks | ||
244 | 159 | def connect(self): | 202 | def connect(self): |
245 | 160 | """Return an authenticated connection to the juju zookeeper.""" | 203 | """Return an authenticated connection to the juju zookeeper.""" |
248 | 161 | hosts = self.config["zookeeper_servers"] | 204 | yield self._kill_existing_session() |
249 | 162 | self.client = yield ZookeeperClient().connect(hosts) | 205 | self.client = yield ZookeeperClient().connect( |
250 | 206 | self.config["zookeeper_servers"]) | ||
251 | 207 | save_client_id( | ||
252 | 208 | self.config["session_file"], self.client.client_id) | ||
253 | 163 | 209 | ||
254 | 164 | principals = self.config.get("principals", ()) | 210 | principals = self.config.get("principals", ()) |
255 | 165 | for principal in principals: | 211 | for principal in principals: |
256 | @@ -200,6 +246,9 @@ | |||
257 | 200 | finally: | 246 | finally: |
258 | 201 | if self.client and self.client.connected: | 247 | if self.client and self.client.connected: |
259 | 202 | self.client.close() | 248 | self.client.close() |
260 | 249 | session_file = self.config["session_file"] | ||
261 | 250 | if os.path.exists(session_file): | ||
262 | 251 | os.unlink(session_file) | ||
263 | 203 | 252 | ||
264 | 204 | def set_watch_enabled(self, flag): | 253 | def set_watch_enabled(self, flag): |
265 | 205 | """Set boolean flag for whether this agent should watching zookeeper. | 254 | """Set boolean flag for whether this agent should watching zookeeper. |
266 | @@ -285,3 +334,7 @@ | |||
267 | 285 | parser.add_argument( | 334 | parser.add_argument( |
268 | 286 | "--juju-directory", default=juju_home, type=os.path.abspath, | 335 | "--juju-directory", default=juju_home, type=os.path.abspath, |
269 | 287 | help="juju working directory ($JUJU_HOME)") | 336 | help="juju working directory ($JUJU_HOME)") |
270 | 337 | |||
271 | 338 | parser.add_argument( | ||
272 | 339 | "--session-file", default=None, type=os.path.abspath, | ||
273 | 340 | help="like a pidfile, but for the zookeeper session id") | ||
274 | 288 | 341 | ||
275 | === modified file 'juju/agents/tests/common.py' | |||
276 | --- juju/agents/tests/common.py 2011-09-15 19:24:47 +0000 | |||
277 | +++ juju/agents/tests/common.py 2012-02-02 16:42:42 +0000 | |||
278 | @@ -55,6 +55,7 @@ | |||
279 | 55 | options = TwistedOptionNamespace() | 55 | options = TwistedOptionNamespace() |
280 | 56 | options["juju_directory"] = self.juju_directory | 56 | options["juju_directory"] = self.juju_directory |
281 | 57 | options["zookeeper_servers"] = get_test_zookeeper_address() | 57 | options["zookeeper_servers"] = get_test_zookeeper_address() |
282 | 58 | options["session_file"] = self.makeFile() | ||
283 | 58 | return succeed(options) | 59 | return succeed(options) |
284 | 59 | 60 | ||
285 | 60 | @inlineCallbacks | 61 | @inlineCallbacks |
286 | 61 | 62 | ||
287 | === modified file 'juju/agents/tests/test_base.py' | |||
288 | --- juju/agents/tests/test_base.py 2011-09-15 18:50:23 +0000 | |||
289 | +++ juju/agents/tests/test_base.py 2012-02-02 16:42:42 +0000 | |||
290 | @@ -2,13 +2,14 @@ | |||
291 | 2 | import json | 2 | import json |
292 | 3 | import logging | 3 | import logging |
293 | 4 | import os | 4 | import os |
294 | 5 | import stat | ||
295 | 5 | import sys | 6 | import sys |
297 | 6 | 7 | import yaml | |
298 | 7 | 8 | ||
299 | 8 | from twisted.application.app import AppLogger | 9 | from twisted.application.app import AppLogger |
300 | 9 | from twisted.application.service import IService, IServiceCollection | 10 | from twisted.application.service import IService, IServiceCollection |
301 | 10 | from twisted.internet.defer import ( | 11 | from twisted.internet.defer import ( |
303 | 11 | succeed, Deferred, inlineCallbacks, returnValue) | 12 | fail, succeed, Deferred, inlineCallbacks, returnValue) |
304 | 12 | from twisted.python.components import Componentized | 13 | from twisted.python.components import Componentized |
305 | 13 | from twisted.python import log | 14 | from twisted.python import log |
306 | 14 | 15 | ||
307 | @@ -20,6 +21,7 @@ | |||
308 | 20 | 21 | ||
309 | 21 | from juju.agents.base import ( | 22 | from juju.agents.base import ( |
310 | 22 | BaseAgent, TwistedOptionNamespace, AgentRunner, AgentLogger) | 23 | BaseAgent, TwistedOptionNamespace, AgentRunner, AgentLogger) |
311 | 24 | from juju.agents.dummy import DummyAgent | ||
312 | 23 | from juju.errors import NoConnection, JujuError | 25 | from juju.errors import NoConnection, JujuError |
313 | 24 | from juju.lib.zklog import ZookeeperHandler | 26 | from juju.lib.zklog import ZookeeperHandler |
314 | 25 | 27 | ||
315 | @@ -34,8 +36,8 @@ | |||
316 | 34 | @inlineCallbacks | 36 | @inlineCallbacks |
317 | 35 | def setUp(self): | 37 | def setUp(self): |
318 | 36 | yield super(BaseAgentTest, self).setUp() | 38 | yield super(BaseAgentTest, self).setUp() |
321 | 37 | self.change_environment( | 39 | self.juju_home = self.makeDir() |
322 | 38 | JUJU_HOME=self.makeDir()) | 40 | self.change_environment(JUJU_HOME=self.juju_home) |
323 | 39 | 41 | ||
324 | 40 | def test_as_app(self): | 42 | def test_as_app(self): |
325 | 41 | """The agent class can be accessed as an application.""" | 43 | """The agent class can be accessed as an application.""" |
326 | @@ -53,11 +55,10 @@ | |||
327 | 53 | # Daemon group | 55 | # Daemon group |
328 | 54 | self.assertEqual( | 56 | self.assertEqual( |
329 | 55 | parser.get_default("logfile"), "%s.log" % BaseAgent.name) | 57 | parser.get_default("logfile"), "%s.log" % BaseAgent.name) |
332 | 56 | self.assertEqual( | 58 | self.assertEqual(parser.get_default("pidfile"), "") |
331 | 57 | parser.get_default("pidfile"), "%s.pid" % BaseAgent.name) | ||
333 | 58 | 59 | ||
334 | 59 | self.assertEqual(parser.get_default("loglevel"), "DEBUG") | 60 | self.assertEqual(parser.get_default("loglevel"), "DEBUG") |
336 | 60 | self.assertTrue(parser.get_default("nodaemon")) | 61 | self.assertFalse(parser.get_default("nodaemon")) |
337 | 61 | self.assertEqual(parser.get_default("rundir"), ".") | 62 | self.assertEqual(parser.get_default("rundir"), ".") |
338 | 62 | self.assertEqual(parser.get_default("chroot"), None) | 63 | self.assertEqual(parser.get_default("chroot"), None) |
339 | 63 | self.assertEqual(parser.get_default("umask"), None) | 64 | self.assertEqual(parser.get_default("umask"), None) |
340 | @@ -80,6 +81,8 @@ | |||
341 | 80 | # Agent options | 81 | # Agent options |
342 | 81 | self.assertEqual(parser.get_default("principals"), []) | 82 | self.assertEqual(parser.get_default("principals"), []) |
343 | 82 | self.assertEqual(parser.get_default("zookeeper_servers"), "") | 83 | self.assertEqual(parser.get_default("zookeeper_servers"), "") |
344 | 84 | self.assertEqual(parser.get_default("juju_directory"), self.juju_home) | ||
345 | 85 | self.assertEqual(parser.get_default("session_file"), None) | ||
346 | 83 | 86 | ||
347 | 84 | def test_twistd_flags_correspond(self): | 87 | def test_twistd_flags_correspond(self): |
348 | 85 | parser = argparse.ArgumentParser() | 88 | parser = argparse.ArgumentParser() |
349 | @@ -87,11 +90,11 @@ | |||
350 | 87 | args = [ | 90 | args = [ |
351 | 88 | "--profile", | 91 | "--profile", |
352 | 89 | "--savestats", | 92 | "--savestats", |
354 | 90 | "--daemon"] | 93 | "--nodaemon"] |
355 | 91 | 94 | ||
356 | 92 | options = parser.parse_args(args, namespace=TwistedOptionNamespace()) | 95 | options = parser.parse_args(args, namespace=TwistedOptionNamespace()) |
357 | 93 | self.assertEqual(options.get("savestats"), True) | 96 | self.assertEqual(options.get("savestats"), True) |
359 | 94 | self.assertEqual(options.get("nodaemon"), False) | 97 | self.assertEqual(options.get("nodaemon"), True) |
360 | 95 | self.assertEqual(options.get("profile"), True) | 98 | self.assertEqual(options.get("profile"), True) |
361 | 96 | 99 | ||
362 | 97 | def test_agent_logger(self): | 100 | def test_agent_logger(self): |
363 | @@ -100,7 +103,8 @@ | |||
364 | 100 | log_file_path = self.makeFile() | 103 | log_file_path = self.makeFile() |
365 | 101 | 104 | ||
366 | 102 | options = parser.parse_args( | 105 | options = parser.parse_args( |
368 | 103 | ["--logfile", log_file_path], namespace=TwistedOptionNamespace()) | 106 | ["--logfile", log_file_path, "--session-file", self.makeFile()], |
369 | 107 | namespace=TwistedOptionNamespace()) | ||
370 | 104 | 108 | ||
371 | 105 | def match_observer(observer): | 109 | def match_observer(observer): |
372 | 106 | return isinstance(observer.im_self, log.PythonLoggingObserver) | 110 | return isinstance(observer.im_self, log.PythonLoggingObserver) |
373 | @@ -181,8 +185,9 @@ | |||
374 | 181 | This will create an agent instance, parse the cli args, passes them to | 185 | This will create an agent instance, parse the cli args, passes them to |
375 | 182 | the agent, and starts the agent runner. | 186 | the agent, and starts the agent runner. |
376 | 183 | """ | 187 | """ |
379 | 184 | self.change_args("es-agent", "--zookeeper-servers", | 188 | self.change_args( |
380 | 185 | get_test_zookeeper_address()) | 189 | "es-agent", "--zookeeper-servers", get_test_zookeeper_address(), |
381 | 190 | "--session-file", self.makeFile()) | ||
382 | 186 | runner = self.mocker.patch(AgentRunner) | 191 | runner = self.mocker.patch(AgentRunner) |
383 | 187 | runner.run() | 192 | runner.run() |
384 | 188 | mock_agent = self.mocker.patch(BaseAgent) | 193 | mock_agent = self.mocker.patch(BaseAgent) |
385 | @@ -219,11 +224,10 @@ | |||
386 | 219 | 224 | ||
387 | 220 | started.addCallback(validate_started) | 225 | started.addCallback(validate_started) |
388 | 221 | 226 | ||
389 | 222 | pid_file = self.makeFile() | ||
390 | 223 | self.change_args( | 227 | self.change_args( |
392 | 224 | "es-agent", | 228 | "es-agent", "--nodaemon", |
393 | 225 | "--zookeeper-servers", get_test_zookeeper_address(), | 229 | "--zookeeper-servers", get_test_zookeeper_address(), |
395 | 226 | "--pidfile", pid_file) | 230 | "--session-file", self.makeFile()) |
396 | 227 | runner = self.mocker.patch(AgentRunner) | 231 | runner = self.mocker.patch(AgentRunner) |
397 | 228 | logger = self.mocker.patch(AppLogger) | 232 | logger = self.mocker.patch(AppLogger) |
398 | 229 | logger.start(MATCH_APP) | 233 | logger.start(MATCH_APP) |
399 | @@ -233,6 +237,7 @@ | |||
400 | 233 | DummyAgent.run() | 237 | DummyAgent.run() |
401 | 234 | return started | 238 | return started |
402 | 235 | 239 | ||
403 | 240 | @inlineCallbacks | ||
404 | 236 | def test_stop_service_stub_closes_agent(self): | 241 | def test_stop_service_stub_closes_agent(self): |
405 | 237 | """The base class agent, stopService will the stop method. | 242 | """The base class agent, stopService will the stop method. |
406 | 238 | 243 | ||
407 | @@ -241,6 +246,7 @@ | |||
408 | 241 | """ | 246 | """ |
409 | 242 | mock_agent = self.mocker.patch(BaseAgent) | 247 | mock_agent = self.mocker.patch(BaseAgent) |
410 | 243 | mock_client = self.mocker.mock(ZookeeperClient) | 248 | mock_client = self.mocker.mock(ZookeeperClient) |
411 | 249 | session_file = self.makeFile() | ||
412 | 244 | 250 | ||
413 | 245 | # connection is closed after agent.stop invoked. | 251 | # connection is closed after agent.stop invoked. |
414 | 246 | with self.mocker.order(): | 252 | with self.mocker.order(): |
415 | @@ -262,11 +268,17 @@ | |||
416 | 262 | self.mocker.result(mock_client) | 268 | self.mocker.result(mock_client) |
417 | 263 | mock_client.close() | 269 | mock_client.close() |
418 | 264 | 270 | ||
419 | 271 | # delete session file | ||
420 | 272 | mock_agent.config | ||
421 | 273 | self.mocker.result({"session_file": session_file}) | ||
422 | 274 | |||
423 | 265 | self.mocker.replay() | 275 | self.mocker.replay() |
424 | 266 | 276 | ||
425 | 267 | agent = BaseAgent() | 277 | agent = BaseAgent() |
427 | 268 | return agent.stopService() | 278 | yield agent.stopService() |
428 | 279 | self.assertFalse(os.path.exists(session_file)) | ||
429 | 269 | 280 | ||
430 | 281 | @inlineCallbacks | ||
431 | 270 | def test_stop_service_stub_ignores_disconnected_agent(self): | 282 | def test_stop_service_stub_ignores_disconnected_agent(self): |
432 | 271 | """The base class agent, stopService will the stop method. | 283 | """The base class agent, stopService will the stop method. |
433 | 272 | 284 | ||
434 | @@ -274,6 +286,7 @@ | |||
435 | 274 | """ | 286 | """ |
436 | 275 | mock_agent = self.mocker.patch(BaseAgent) | 287 | mock_agent = self.mocker.patch(BaseAgent) |
437 | 276 | mock_client = self.mocker.mock(ZookeeperClient) | 288 | mock_client = self.mocker.mock(ZookeeperClient) |
438 | 289 | session_file = self.makeFile() | ||
439 | 277 | 290 | ||
440 | 278 | # connection is closed after agent.stop invoked. | 291 | # connection is closed after agent.stop invoked. |
441 | 279 | with self.mocker.order(): | 292 | with self.mocker.order(): |
442 | @@ -289,10 +302,14 @@ | |||
443 | 289 | mock_client.connected | 302 | mock_client.connected |
444 | 290 | self.mocker.result(False) | 303 | self.mocker.result(False) |
445 | 291 | 304 | ||
446 | 305 | mock_agent.config | ||
447 | 306 | self.mocker.result({"session_file": session_file}) | ||
448 | 307 | |||
449 | 292 | self.mocker.replay() | 308 | self.mocker.replay() |
450 | 293 | 309 | ||
451 | 294 | agent = BaseAgent() | 310 | agent = BaseAgent() |
453 | 295 | return agent.stopService() | 311 | yield agent.stopService() |
454 | 312 | self.assertFalse(os.path.exists(session_file)) | ||
455 | 296 | 313 | ||
456 | 297 | def test_run_base_raises_error(self): | 314 | def test_run_base_raises_error(self): |
457 | 298 | """The base class agent, raises a notimplemented error when started.""" | 315 | """The base class agent, raises a notimplemented error when started.""" |
458 | @@ -300,12 +317,15 @@ | |||
459 | 300 | client.connect(get_test_zookeeper_address()) | 317 | client.connect(get_test_zookeeper_address()) |
460 | 301 | client_mock = self.mocker.mock() | 318 | client_mock = self.mocker.mock() |
461 | 302 | self.mocker.result(succeed(client_mock)) | 319 | self.mocker.result(succeed(client_mock)) |
462 | 320 | client_mock.client_id | ||
463 | 321 | self.mocker.result((123, "abc")) | ||
464 | 303 | self.mocker.replay() | 322 | self.mocker.replay() |
465 | 304 | 323 | ||
466 | 305 | agent = BaseAgent() | 324 | agent = BaseAgent() |
467 | 306 | agent.configure({ | 325 | agent.configure({ |
468 | 307 | "zookeeper_servers": get_test_zookeeper_address(), | 326 | "zookeeper_servers": get_test_zookeeper_address(), |
470 | 308 | "juju_directory": self.makeDir()}) | 327 | "juju_directory": self.makeDir(), |
471 | 328 | "session_file": self.makeFile()}) | ||
472 | 309 | d = agent.startService() | 329 | d = agent.startService() |
473 | 310 | self.failUnlessFailure(d, NotImplementedError) | 330 | self.failUnlessFailure(d, NotImplementedError) |
474 | 311 | return d | 331 | return d |
475 | @@ -316,35 +336,43 @@ | |||
476 | 316 | client = self.mocker.patch(ZookeeperClient) | 336 | client = self.mocker.patch(ZookeeperClient) |
477 | 317 | client.connect("x2.example.com") | 337 | client.connect("x2.example.com") |
478 | 318 | self.mocker.result(succeed(mock_client)) | 338 | self.mocker.result(succeed(mock_client)) |
479 | 339 | mock_client.client_id | ||
480 | 340 | self.mocker.result((123, "abc")) | ||
481 | 319 | self.mocker.replay() | 341 | self.mocker.replay() |
482 | 320 | 342 | ||
483 | 321 | agent = BaseAgent() | 343 | agent = BaseAgent() |
484 | 322 | agent.configure({"zookeeper_servers": "x2.example.com", | 344 | agent.configure({"zookeeper_servers": "x2.example.com", |
486 | 323 | "juju_directory": self.makeDir()}) | 345 | "juju_directory": self.makeDir(), |
487 | 346 | "session_file": self.makeFile()}) | ||
488 | 324 | result = agent.connect() | 347 | result = agent.connect() |
489 | 325 | self.assertEqual(result.result, mock_client) | 348 | self.assertEqual(result.result, mock_client) |
490 | 326 | self.assertEqual(agent.client, mock_client) | 349 | self.assertEqual(agent.client, mock_client) |
491 | 327 | 350 | ||
493 | 328 | def test_non_existant_directory(self): | 351 | def test_nonexistent_directory(self): |
494 | 329 | """If the juju directory does not exist an error should be raised. | 352 | """If the juju directory does not exist an error should be raised. |
495 | 330 | """ | 353 | """ |
496 | 331 | juju_directory = self.makeDir() | 354 | juju_directory = self.makeDir() |
497 | 332 | os.rmdir(juju_directory) | 355 | os.rmdir(juju_directory) |
498 | 333 | data = {"zookeeper_servers": get_test_zookeeper_address(), | 356 | data = {"zookeeper_servers": get_test_zookeeper_address(), |
500 | 334 | "juju_directory": juju_directory} | 357 | "juju_directory": juju_directory, |
501 | 358 | "session_file": self.makeFile()} | ||
502 | 359 | self.assertRaises(JujuError, BaseAgent().configure, data) | ||
503 | 335 | 360 | ||
509 | 336 | agent = BaseAgent() | 361 | def test_bad_session_file(self): |
510 | 337 | self.assertRaises( | 362 | """If the session file cannot be created an error should be raised. |
511 | 338 | JujuError, | 363 | """ |
512 | 339 | agent.configure, | 364 | data = {"zookeeper_servers": get_test_zookeeper_address(), |
513 | 340 | data) | 365 | "juju_directory": self.makeDir(), |
514 | 366 | "session_file": None} | ||
515 | 367 | self.assertRaises(JujuError, BaseAgent().configure, data) | ||
516 | 341 | 368 | ||
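The two new negative tests above both expect `JujuError` from `configure()`. A minimal sketch of the validation they imply (the `JujuError` stand-in and the function body are assumptions for illustration, not the actual `juju/agents/base.py` code):

```python
import os

class JujuError(Exception):
    """Stand-in for juju.errors.JujuError."""

def configure(data):
    # Validation implied by test_nonexistent_directory and
    # test_bad_session_file: both settings must be present and usable.
    if not data.get("session_file"):
        raise JujuError("No session file specified")
    juju_directory = data.get("juju_directory")
    if not juju_directory or not os.path.isdir(juju_directory):
        raise JujuError("Invalid juju directory %r" % (juju_directory,))
    return data
```

Raising at configure time (rather than on first use) matches the tests, which call `configure` directly and expect the failure before any service is started.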
517 | 342 | def test_directory_cli_option(self): | 369 | def test_directory_cli_option(self): |
518 | 343 | """The juju directory can be configured on the cli.""" | 370 | """The juju directory can be configured on the cli.""" |
519 | 344 | juju_directory = self.makeDir() | 371 | juju_directory = self.makeDir() |
520 | 345 | self.change_args( | 372 | self.change_args( |
521 | 346 | "es-agent", "--zookeeper-servers", get_test_zookeeper_address(), | 373 | "es-agent", "--zookeeper-servers", get_test_zookeeper_address(), |
523 | 347 | "--juju-directory", juju_directory) | 374 | "--juju-directory", juju_directory, |
524 | 375 | "--session-file", self.makeFile()) | ||
525 | 348 | 376 | ||
526 | 349 | agent = BaseAgent() | 377 | agent = BaseAgent() |
527 | 350 | parser = argparse.ArgumentParser() | 378 | parser = argparse.ArgumentParser() |
528 | @@ -366,7 +394,9 @@ | |||
529 | 366 | agent = BaseAgent() | 394 | agent = BaseAgent() |
530 | 367 | parser = argparse.ArgumentParser() | 395 | parser = argparse.ArgumentParser() |
531 | 368 | agent.setup_options(parser) | 396 | agent.setup_options(parser) |
533 | 369 | options = parser.parse_args(namespace=TwistedOptionNamespace()) | 397 | options = parser.parse_args( |
534 | 398 | ["--session-file", self.makeFile()], | ||
535 | 399 | namespace=TwistedOptionNamespace()) | ||
536 | 370 | agent.configure(options) | 400 | agent.configure(options) |
537 | 371 | self.assertEqual( | 401 | self.assertEqual( |
538 | 372 | agent.config["juju_directory"], juju_directory) | 402 | agent.config["juju_directory"], juju_directory) |
539 | @@ -382,6 +412,8 @@ | |||
540 | 382 | client = self.mocker.patch(ZookeeperClient) | 412 | client = self.mocker.patch(ZookeeperClient) |
541 | 383 | client.connect("x1.example.com") | 413 | client.connect("x1.example.com") |
542 | 384 | self.mocker.result(succeed(client)) | 414 | self.mocker.result(succeed(client)) |
543 | 415 | client.client_id | ||
544 | 416 | self.mocker.result((123, "abc")) | ||
545 | 385 | client.add_auth("digest", "admin:abc") | 417 | client.add_auth("digest", "admin:abc") |
546 | 386 | client.add_auth("digest", "agent:xyz") | 418 | client.add_auth("digest", "agent:xyz") |
547 | 387 | client.exists("/") | 419 | client.exists("/") |
548 | @@ -390,7 +422,105 @@ | |||
549 | 390 | agent = BaseAgent() | 422 | agent = BaseAgent() |
550 | 391 | parser = argparse.ArgumentParser() | 423 | parser = argparse.ArgumentParser() |
551 | 392 | agent.setup_options(parser) | 424 | agent.setup_options(parser) |
553 | 393 | options = parser.parse_args(namespace=TwistedOptionNamespace()) | 425 | options = parser.parse_args( |
554 | 426 | ["--session-file", self.makeFile()], | ||
555 | 427 | namespace=TwistedOptionNamespace()) | ||
556 | 428 | agent.configure(options) | ||
557 | 429 | d = agent.startService() | ||
558 | 430 | self.failUnlessFailure(d, NotImplementedError) | ||
559 | 431 | return d | ||
560 | 432 | |||
561 | 433 | def test_connect_closes_running_session(self): | ||
562 | 434 | self.change_args("es-agent") | ||
563 | 435 | self.change_environment( | ||
564 | 436 | JUJU_HOME=self.makeDir(), | ||
565 | 437 | JUJU_ZOOKEEPER="x1.example.com") | ||
566 | 438 | |||
567 | 439 | session_file = self.makeFile() | ||
568 | 440 | with open(session_file, "w") as f: | ||
569 | 441 | f.write(yaml.dump((123, "abc"))) | ||
570 | 442 | mock_client_1 = self.mocker.mock() | ||
571 | 443 | client = self.mocker.patch(ZookeeperClient) | ||
572 | 444 | client.connect("x1.example.com", client_id=(123, "abc")) | ||
573 | 445 | self.mocker.result(succeed(mock_client_1)) | ||
574 | 446 | mock_client_1.close() | ||
575 | 447 | self.mocker.result(None) | ||
576 | 448 | |||
577 | 449 | mock_client_2 = self.mocker.mock() | ||
578 | 450 | client.connect("x1.example.com") | ||
579 | 451 | self.mocker.result(succeed(mock_client_2)) | ||
580 | 452 | mock_client_2.client_id | ||
581 | 453 | self.mocker.result((456, "def")) | ||
582 | 454 | self.mocker.replay() | ||
583 | 455 | |||
584 | 456 | agent = BaseAgent() | ||
585 | 457 | parser = argparse.ArgumentParser() | ||
586 | 458 | agent.setup_options(parser) | ||
587 | 459 | options = parser.parse_args( | ||
588 | 460 | ["--session-file", session_file], | ||
589 | 461 | namespace=TwistedOptionNamespace()) | ||
590 | 462 | agent.configure(options) | ||
591 | 463 | d = agent.startService() | ||
592 | 464 | self.failUnlessFailure(d, NotImplementedError) | ||
593 | 465 | return d | ||
594 | 466 | |||
595 | 467 | def test_connect_handles_expired_session(self): | ||
596 | 468 | self.change_args("es-agent") | ||
597 | 469 | self.change_environment( | ||
598 | 470 | JUJU_HOME=self.makeDir(), | ||
599 | 471 | JUJU_ZOOKEEPER="x1.example.com") | ||
600 | 472 | |||
601 | 473 | session_file = self.makeFile() | ||
602 | 474 | with open(session_file, "w") as f: | ||
603 | 475 | f.write(yaml.dump((123, "abc"))) | ||
604 | 476 | client = self.mocker.patch(ZookeeperClient) | ||
605 | 477 | client.connect("x1.example.com", client_id=(123, "abc")) | ||
606 | 478 | self.mocker.result(fail(zookeeper.SessionExpiredException())) | ||
607 | 479 | |||
608 | 480 | mock_client = self.mocker.mock() | ||
609 | 481 | client.connect("x1.example.com") | ||
610 | 482 | self.mocker.result(succeed(mock_client)) | ||
611 | 483 | mock_client.client_id | ||
612 | 484 | self.mocker.result((456, "def")) | ||
613 | 485 | self.mocker.replay() | ||
614 | 486 | |||
615 | 487 | agent = BaseAgent() | ||
616 | 488 | parser = argparse.ArgumentParser() | ||
617 | 489 | agent.setup_options(parser) | ||
618 | 490 | options = parser.parse_args( | ||
619 | 491 | ["--session-file", session_file], | ||
620 | 492 | namespace=TwistedOptionNamespace()) | ||
621 | 493 | agent.configure(options) | ||
622 | 494 | d = agent.startService() | ||
623 | 495 | self.failUnlessFailure(d, NotImplementedError) | ||
624 | 496 | return d | ||
625 | 497 | |||
626 | 498 | def test_connect_handles_nonsense_session(self): | ||
627 | 499 | self.change_args("es-agent") | ||
628 | 500 | self.change_environment( | ||
629 | 501 | JUJU_HOME=self.makeDir(), | ||
630 | 502 | JUJU_ZOOKEEPER="x1.example.com") | ||
631 | 503 | |||
632 | 504 | session_file = self.makeFile() | ||
633 | 505 | with open(session_file, "w") as f: | ||
634 | 506 | f.write(yaml.dump("cheesy wotsits")) | ||
635 | 507 | client = self.mocker.patch(ZookeeperClient) | ||
636 | 508 | client.connect("x1.example.com", client_id="cheesy wotsits") | ||
637 | 509 | self.mocker.result(fail(zookeeper.ZooKeeperException())) | ||
638 | 510 | |||
639 | 511 | mock_client = self.mocker.mock() | ||
640 | 512 | client.connect("x1.example.com") | ||
641 | 513 | self.mocker.result(succeed(mock_client)) | ||
642 | 514 | mock_client.client_id | ||
643 | 515 | self.mocker.result((456, "def")) | ||
644 | 516 | self.mocker.replay() | ||
645 | 517 | |||
646 | 518 | agent = BaseAgent() | ||
647 | 519 | parser = argparse.ArgumentParser() | ||
648 | 520 | agent.setup_options(parser) | ||
649 | 521 | options = parser.parse_args( | ||
650 | 522 | ["--session-file", session_file], | ||
651 | 523 | namespace=TwistedOptionNamespace()) | ||
652 | 394 | agent.configure(options) | 524 | agent.configure(options) |
653 | 395 | d = agent.startService() | 525 | d = agent.startService() |
654 | 396 | self.failUnlessFailure(d, NotImplementedError) | 526 | self.failUnlessFailure(d, NotImplementedError) |
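Taken together, the three tests above sketch the agent's reconnection policy: resume the session id saved on disk only so any still-live old session can be closed, tolerate expired or nonsense ids, and always finish on a fresh session whose id is written back. A rough stand-alone model of that policy (the fake server/client classes and the JSON serialization are illustrative assumptions; the real agent talks to ZooKeeper and serializes the id with YAML):

```python
import json
import os

class SessionExpiredError(Exception):
    """Stand-in for zookeeper.SessionExpiredException."""

class FakeClient:
    def __init__(self, client_id):
        self.client_id = client_id

    def close(self):
        pass

class FakeServer:
    """Pretends session (123, 'abc') is still live; hands out (456, 'def')."""

    def connect(self, client_id=None):
        if client_id is None:
            return FakeClient((456, "def"))
        if tuple(client_id) == (123, "abc"):
            return FakeClient(tuple(client_id))
        raise SessionExpiredError()

def connect(server, session_file):
    # Close any session left over from a previous agent run, ignoring
    # expired or unreadable session ids, then start a fresh session and
    # record its id for the next run.
    if os.path.exists(session_file):
        try:
            with open(session_file) as f:
                old_id = json.load(f)
            server.connect(client_id=old_id).close()
        except (ValueError, SessionExpiredError):
            pass
    client = server.connect()
    with open(session_file, "w") as f:
        json.dump(list(client.client_id), f)
    return client
```

The key design point the tests pin down is that a stale-but-live session is actively closed rather than resumed, so two agent processes can never share one ZooKeeper session.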
655 | @@ -408,6 +538,21 @@ | |||
656 | 408 | agent.set_watch_enabled(False) | 538 | agent.set_watch_enabled(False) |
657 | 409 | self.assertFalse(agent.get_watch_enabled()) | 539 | self.assertFalse(agent.get_watch_enabled()) |
658 | 410 | 540 | ||
659 | 541 | @inlineCallbacks | ||
660 | 542 | def test_session_file_permissions(self): | ||
661 | 543 | session_file = self.makeFile() | ||
662 | 544 | agent = DummyAgent() | ||
663 | 545 | agent.configure({ | ||
664 | 546 | "session_file": session_file, | ||
665 | 547 | "juju_directory": self.makeDir(), | ||
666 | 548 | "zookeeper_servers": get_test_zookeeper_address()}) | ||
667 | 549 | yield agent.startService() | ||
668 | 550 | mode = os.stat(session_file).st_mode | ||
669 | 551 | mask = stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO | ||
670 | 552 | self.assertEquals(mode & mask, stat.S_IRUSR | stat.S_IWUSR) | ||
671 | 553 | yield agent.stopService() | ||
672 | 554 | self.assertFalse(os.path.exists(session_file)) | ||
673 | 555 | |||
674 | 411 | 556 | ||
675 | 412 | class AgentDebugLogSettingsWatch(AgentTestBase): | 557 | class AgentDebugLogSettingsWatch(AgentTestBase): |
676 | 413 | 558 | ||
677 | 414 | 559 | ||
678 | === modified file 'juju/agents/tests/test_machine.py' | |||
679 | --- juju/agents/tests/test_machine.py 2011-10-05 13:59:44 +0000 | |||
680 | +++ juju/agents/tests/test_machine.py 2012-02-02 16:42:42 +0000 | |||
681 | @@ -120,10 +120,9 @@ | |||
682 | 120 | # initially setup by get_agent_config in setUp | 120 | # initially setup by get_agent_config in setUp |
683 | 121 | self.change_environment(JUJU_MACHINE_ID="") | 121 | self.change_environment(JUJU_MACHINE_ID="") |
684 | 122 | self.change_args("es-agent", | 122 | self.change_args("es-agent", |
689 | 123 | "--zookeeper-servers", | 123 | "--zookeeper-servers", get_test_zookeeper_address(), |
690 | 124 | get_test_zookeeper_address(), | 124 | "--juju-directory", self.makeDir(), |
691 | 125 | "--juju-directory", | 125 | "--session-file", self.makeFile()) |
688 | 126 | self.makeDir()) | ||
692 | 127 | parser = argparse.ArgumentParser() | 126 | parser = argparse.ArgumentParser() |
693 | 128 | self.agent.setup_options(parser) | 127 | self.agent.setup_options(parser) |
694 | 129 | options = parser.parse_args(namespace=TwistedOptionNamespace()) | 128 | options = parser.parse_args(namespace=TwistedOptionNamespace()) |
695 | 130 | 129 | ||
696 | === modified file 'juju/agents/tests/test_unit.py' | |||
697 | --- juju/agents/tests/test_unit.py 2011-12-16 09:23:31 +0000 | |||
698 | +++ juju/agents/tests/test_unit.py 2012-02-02 16:42:42 +0000 | |||
699 | @@ -3,10 +3,9 @@ | |||
700 | 3 | import os | 3 | import os |
701 | 4 | import yaml | 4 | import yaml |
702 | 5 | 5 | ||
705 | 6 | from twisted.internet.defer import ( | 6 | from twisted.internet.defer import inlineCallbacks, returnValue |
704 | 7 | inlineCallbacks, returnValue, fail, Deferred) | ||
706 | 8 | 7 | ||
708 | 9 | from juju.agents.unit import UnitAgent, CharmUpgradeOperation | 8 | from juju.agents.unit import UnitAgent |
709 | 10 | from juju.agents.base import TwistedOptionNamespace | 9 | from juju.agents.base import TwistedOptionNamespace |
710 | 11 | from juju.charm import get_charm_from_path | 10 | from juju.charm import get_charm_from_path |
711 | 12 | from juju.charm.url import CharmURL | 11 | from juju.charm.url import CharmURL |
712 | @@ -74,8 +73,10 @@ | |||
713 | 74 | "stop", "#!/bin/bash\necho stop >> %s" % output_file) | 73 | "stop", "#!/bin/bash\necho stop >> %s" % output_file) |
714 | 75 | 74 | ||
715 | 76 | for k in kw.keys(): | 75 | for k in kw.keys(): |
718 | 77 | self.write_hook(k.replace("_", "-"), | 76 | hook_name = k.replace("_", "-") |
719 | 78 | "#!/bin/bash\necho $0 >> %s" % output_file) | 77 | self.write_hook( |
720 | 78 | hook_name, | ||
721 | 79 | "#!/bin/bash\necho %s >> %s" % (hook_name, output_file)) | ||
722 | 79 | 80 | ||
723 | 80 | return output_file | 81 | return output_file |
724 | 81 | 82 | ||
725 | @@ -183,7 +184,8 @@ | |||
726 | 183 | self.change_args( | 184 | self.change_args( |
727 | 184 | "unit-agent", | 185 | "unit-agent", |
728 | 185 | "--juju-directory", self.makeDir(), | 186 | "--juju-directory", self.makeDir(), |
730 | 186 | "--zookeeper-servers", get_test_zookeeper_address()) | 187 | "--zookeeper-servers", get_test_zookeeper_address(), |
731 | 188 | "--session-file", self.makeFile()) | ||
732 | 187 | 189 | ||
733 | 188 | parser = argparse.ArgumentParser() | 190 | parser = argparse.ArgumentParser() |
734 | 189 | self.agent.setup_options(parser) | 191 | self.agent.setup_options(parser) |
735 | @@ -215,6 +217,7 @@ | |||
736 | 215 | options = {} | 217 | options = {} |
737 | 216 | options["juju_directory"] = self.juju_directory | 218 | options["juju_directory"] = self.juju_directory |
738 | 217 | options["zookeeper_servers"] = get_test_zookeeper_address() | 219 | options["zookeeper_servers"] = get_test_zookeeper_address() |
739 | 220 | options["session_file"] = self.makeFile() | ||
740 | 218 | options["unit_name"] = "rabbit-1" | 221 | options["unit_name"] = "rabbit-1" |
741 | 219 | agent = self.agent_class() | 222 | agent = self.agent_class() |
742 | 220 | agent.configure(options) | 223 | agent.configure(options) |
743 | @@ -568,6 +571,12 @@ | |||
744 | 568 | self.makeDir(path=os.path.join(self.juju_directory, "charms")) | 571 | self.makeDir(path=os.path.join(self.juju_directory, "charms")) |
745 | 569 | 572 | ||
746 | 570 | @inlineCallbacks | 573 | @inlineCallbacks |
747 | 574 | def wait_for_log(self, logger_name, message, level=logging.DEBUG): | ||
748 | 575 | output = self.capture_logging(logger_name, level=level) | ||
749 | 576 | while message not in output.getvalue(): | ||
750 | 577 | yield self.sleep(0.1) | ||
751 | 578 | |||
752 | 579 | @inlineCallbacks | ||
753 | 571 | def mark_charm_upgrade(self): | 580 | def mark_charm_upgrade(self): |
754 | 572 | # Create a new version of the charm | 581 | # Create a new version of the charm |
755 | 573 | repository = self.increment_charm(self.charm) | 582 | repository = self.increment_charm(self.charm) |
756 | @@ -592,158 +601,84 @@ | |||
757 | 592 | yield self.assertState(self.agent.workflow, "started") | 601 | yield self.assertState(self.agent.workflow, "started") |
758 | 593 | 602 | ||
759 | 594 | @inlineCallbacks | 603 | @inlineCallbacks |
760 | 595 | def test_agent_upgrade_watch_continues_on_unexpected_error(self): | ||
761 | 596 | """The agent watches for unit upgrades and continues if there is an | ||
762 | 597 | unexpected error.""" | ||
763 | 598 | yield self.mark_charm_upgrade() | ||
764 | 599 | self.agent.set_watch_enabled(True) | ||
765 | 600 | |||
766 | 601 | output = self.capture_logging( | ||
767 | 602 | "juju.agents.unit", level=logging.DEBUG) | ||
768 | 603 | |||
769 | 604 | upgrade_done = Deferred() | ||
770 | 605 | |||
771 | 606 | def operation_has_run(): | ||
772 | 607 | upgrade_done.callback(True) | ||
773 | 608 | |||
774 | 609 | operation = self.mocker.patch(CharmUpgradeOperation) | ||
775 | 610 | operation.run() | ||
776 | 611 | |||
777 | 612 | self.mocker.call(operation_has_run) | ||
778 | 613 | self.mocker.result(fail(ValueError("magic mouse"))) | ||
779 | 614 | self.mocker.replay() | ||
780 | 615 | |||
781 | 616 | yield self.agent.startService() | ||
782 | 617 | |||
783 | 618 | yield upgrade_done | ||
784 | 619 | self.assertIn("Error while upgrading", output.getvalue()) | ||
785 | 620 | self.assertIn("magic mouse", output.getvalue()) | ||
786 | 621 | |||
787 | 622 | yield self.agent.workflow.fire_transition("stop") | ||
788 | 623 | |||
789 | 624 | @inlineCallbacks | ||
790 | 625 | def test_agent_upgrade(self): | 604 | def test_agent_upgrade(self): |
791 | 626 | """The agent can succesfully upgrade its charm.""" | 605 | """The agent can succesfully upgrade its charm.""" |
797 | 627 | self.agent.set_watch_enabled(False) | 606 | log_written = self.wait_for_log("juju.agents.unit", "Upgrade complete") |
793 | 628 | yield self.agent.startService() | ||
794 | 629 | |||
795 | 630 | yield self.mark_charm_upgrade() | ||
796 | 631 | |||
798 | 632 | hook_done = self.wait_on_hook( | 607 | hook_done = self.wait_on_hook( |
799 | 633 | "upgrade-charm", executor=self.agent.executor) | 608 | "upgrade-charm", executor=self.agent.executor) |
810 | 634 | self.write_hook("upgrade-charm", "#!/bin/bash\nexit 0") | 609 | |
811 | 635 | output = self.capture_logging("unit.upgrade", level=logging.DEBUG) | 610 | self.agent.set_watch_enabled(True) |
812 | 636 | 611 | yield self.agent.startService() | |
813 | 637 | # Do the upgrade | 612 | yield self.mark_charm_upgrade() |
804 | 638 | upgrade = CharmUpgradeOperation(self.agent) | ||
805 | 639 | value = yield upgrade.run() | ||
806 | 640 | |||
807 | 641 | # Verify the upgrade. | ||
808 | 642 | self.assertIdentical(value, True) | ||
809 | 643 | self.assertIn("Unit upgraded", output.getvalue()) | ||
814 | 644 | yield hook_done | 613 | yield hook_done |
815 | 614 | yield log_written | ||
816 | 645 | 615 | ||
817 | 616 | self.assertIdentical( | ||
818 | 617 | (yield self.states["unit"].get_upgrade_flag()), | ||
819 | 618 | False) | ||
820 | 646 | new_charm = get_charm_from_path( | 619 | new_charm = get_charm_from_path( |
821 | 647 | os.path.join(self.agent.unit_directory, "charm")) | 620 | os.path.join(self.agent.unit_directory, "charm")) |
822 | 648 | |||
823 | 649 | self.assertEqual( | 621 | self.assertEqual( |
824 | 650 | self.charm.get_revision() + 1, new_charm.get_revision()) | 622 | self.charm.get_revision() + 1, new_charm.get_revision()) |
825 | 651 | 623 | ||
826 | 652 | @inlineCallbacks | 624 | @inlineCallbacks |
827 | 625 | def test_agent_upgrade_version_current(self): | ||
828 | 626 | """If the unit is running the latest charm, do nothing.""" | ||
829 | 627 | log_written = self.wait_for_log( | ||
830 | 628 | "juju.agents.unit", | ||
831 | 629 | "Upgrade ignored: already running latest charm") | ||
832 | 630 | |||
833 | 631 | old_charm_id = yield self.states["unit"].get_charm_id() | ||
834 | 632 | self.agent.set_watch_enabled(True) | ||
835 | 633 | yield self.agent.startService() | ||
836 | 634 | yield self.states["unit"].set_upgrade_flag() | ||
837 | 635 | yield log_written | ||
838 | 636 | |||
839 | 637 | self.assertIdentical( | ||
840 | 638 | (yield self.states["unit"].get_upgrade_flag()), False) | ||
841 | 639 | self.assertEquals( | ||
842 | 640 | (yield self.states["unit"].get_charm_id()), old_charm_id) | ||
843 | 641 | |||
844 | 642 | |||
845 | 643 | @inlineCallbacks | ||
846 | 653 | def test_agent_upgrade_bad_unit_state(self): | 644 | def test_agent_upgrade_bad_unit_state(self): |
851 | 654 | """The an upgrade fails if the unit is in a bad state.""" | 645 | """The upgrade fails if the unit is in a bad state.""" |
848 | 655 | self.agent.set_watch_enabled(False) | ||
849 | 656 | yield self.agent.startService() | ||
850 | 657 | |||
852 | 658 | # Upload a new version of the unit's charm | 646 | # Upload a new version of the unit's charm |
853 | 659 | repository = self.increment_charm(self.charm) | 647 | repository = self.increment_charm(self.charm) |
854 | 660 | charm = yield repository.find(CharmURL.parse("local:series/mysql")) | 648 | charm = yield repository.find(CharmURL.parse("local:series/mysql")) |
855 | 661 | charm, charm_state = yield self.publish_charm(charm.path) | 649 | charm, charm_state = yield self.publish_charm(charm.path) |
856 | 650 | old_charm_id = yield self.states["unit"].get_charm_id() | ||
857 | 651 | |||
858 | 652 | log_written = self.wait_for_log( | ||
859 | 653 | "juju.agents.unit", | ||
860 | 654 | "Cannot upgrade: unit is in non-started state configure_error. " | ||
861 | 655 | "Reissue upgrade command to try again.") | ||
862 | 656 | self.agent.set_watch_enabled(True) | ||
863 | 657 | yield self.agent.startService() | ||
864 | 662 | 658 | ||
865 | 663 | # Mark the unit for upgrade, with an invalid state. | 659 | # Mark the unit for upgrade, with an invalid state. |
866 | 660 | yield self.agent.workflow.fire_transition("error_configure") | ||
867 | 664 | yield self.states["service"].set_charm_id(charm_state.id) | 661 | yield self.states["service"].set_charm_id(charm_state.id) |
868 | 665 | yield self.states["unit"].set_upgrade_flag() | 662 | yield self.states["unit"].set_upgrade_flag() |
881 | 666 | yield self.agent.workflow.set_state("start_error") | 663 | yield log_written |
882 | 667 | 664 | ||
871 | 668 | output = self.capture_logging("unit.upgrade", level=logging.DEBUG) | ||
872 | 669 | |||
873 | 670 | # Do the upgrade | ||
874 | 671 | upgrade = CharmUpgradeOperation(self.agent) | ||
875 | 672 | value = yield upgrade.run() | ||
876 | 673 | |||
877 | 674 | # Verify the upgrade. | ||
878 | 675 | self.assertIdentical(value, False) | ||
879 | 676 | self.assertIn("Unit not in an upgradeable state: start_error", | ||
880 | 677 | output.getvalue()) | ||
883 | 678 | self.assertIdentical( | 665 | self.assertIdentical( |
886 | 679 | (yield self.states["unit"].get_upgrade_flag()), | 666 | (yield self.states["unit"].get_upgrade_flag()), False) |
887 | 680 | False) | 667 | self.assertEquals( |
888 | 668 | (yield self.states["unit"].get_charm_id()), old_charm_id) | ||
889 | 681 | 669 | ||
890 | 682 | @inlineCallbacks | 670 | @inlineCallbacks |
891 | 683 | def test_agent_upgrade_no_flag(self): | 671 | def test_agent_upgrade_no_flag(self): |
958 | 684 | """An upgrade fails if there is no upgrade flag set.""" | 672 | """An upgrade stops if there is no upgrade flag set.""" |
959 | 685 | self.agent.set_watch_enabled(False) | 673 | log_written = self.wait_for_log( |
960 | 686 | yield self.agent.startService() | 674 | "juju.agents.unit", "No upgrade flag set") |
961 | 687 | output = self.capture_logging("unit.upgrade", level=logging.DEBUG) | 675 | old_charm_id = yield self.states["unit"].get_charm_id() |
962 | 688 | upgrade = CharmUpgradeOperation(self.agent) | 676 | self.agent.set_watch_enabled(True) |
963 | 689 | value = yield upgrade.run() | 677 | yield self.agent.startService() |
964 | 690 | self.assertIdentical(value, False) | 678 | yield log_written |
965 | 691 | self.assertIn("No upgrade flag set", output.getvalue()) | 679 | |
966 | 692 | yield self.agent.workflow.fire_transition("stop") | 680 | self.assertIdentical( |
967 | 693 | 681 | (yield self.states["unit"].get_upgrade_flag()), | |
968 | 694 | @inlineCallbacks | 682 | False) |
969 | 695 | def test_agent_upgrade_version_current(self): | 683 | new_charm_id = yield self.states["unit"].get_charm_id() |
970 | 696 | """An upgrade fails if the unit is running the latest charm.""" | 684 | self.assertEquals(new_charm_id, old_charm_id) |
905 | 697 | self.agent.set_watch_enabled(False) | ||
906 | 698 | yield self.agent.startService() | ||
907 | 699 | yield self.states["unit"].set_upgrade_flag() | ||
908 | 700 | output = self.capture_logging("unit.upgrade", level=logging.DEBUG) | ||
909 | 701 | upgrade = CharmUpgradeOperation(self.agent) | ||
910 | 702 | value = yield upgrade.run() | ||
911 | 703 | self.assertIdentical(value, True) | ||
912 | 704 | self.assertIn("Unit already running latest charm", output.getvalue()) | ||
913 | 705 | self.assertFalse((yield self.states["unit"].get_upgrade_flag())) | ||
914 | 706 | |||
915 | 707 | @inlineCallbacks | ||
916 | 708 | def test_agent_upgrade_hook_failure(self): | ||
917 | 709 | """An upgrade fails if the upgrade hook errors.""" | ||
918 | 710 | self.agent.set_watch_enabled(False) | ||
919 | 711 | yield self.agent.startService() | ||
920 | 712 | |||
921 | 713 | # Upload a new version of the unit's charm | ||
922 | 714 | repository = self.increment_charm(self.charm) | ||
923 | 715 | charm = yield repository.find(CharmURL.parse("local:series/mysql")) | ||
924 | 716 | charm, charm_state = yield self.publish_charm(charm.path) | ||
925 | 717 | |||
926 | 718 | # Mark the unit for upgrade | ||
927 | 719 | yield self.states["service"].set_charm_id(charm_state.id) | ||
928 | 720 | yield self.states["unit"].set_upgrade_flag() | ||
929 | 721 | |||
930 | 722 | hook_done = self.wait_on_hook( | ||
931 | 723 | "upgrade-charm", executor=self.agent.executor) | ||
932 | 724 | self.write_hook("upgrade-charm", "#!/bin/bash\nexit 1") | ||
933 | 725 | output = self.capture_logging("unit.upgrade", level=logging.DEBUG) | ||
934 | 726 | |||
935 | 727 | # Do the upgrade | ||
936 | 728 | upgrade = CharmUpgradeOperation(self.agent) | ||
937 | 729 | value = yield upgrade.run() | ||
938 | 730 | |||
939 | 731 | # Verify the failed upgrade. | ||
940 | 732 | self.assertIdentical(value, False) | ||
941 | 733 | self.assertIn("Invoking upgrade transition", output.getvalue()) | ||
942 | 734 | self.assertIn("Upgrade failed.", output.getvalue()) | ||
943 | 735 | yield hook_done | ||
944 | 736 | |||
945 | 737 | # Verify state | ||
946 | 738 | workflow_state = yield self.agent.workflow.get_state() | ||
947 | 739 | self.assertEqual("charm_upgrade_error", workflow_state) | ||
948 | 740 | |||
949 | 741 | # Verify new charm is in place | ||
950 | 742 | new_charm = get_charm_from_path( | ||
951 | 743 | os.path.join(self.agent.unit_directory, "charm")) | ||
952 | 744 | |||
953 | 745 | self.assertEqual( | ||
954 | 746 | self.charm.get_revision() + 1, new_charm.get_revision()) | ||
955 | 747 | |||
956 | 748 | # Verify upgrade flag is cleared. | ||
957 | 749 | self.assertFalse((yield self.states["unit"].get_upgrade_flag())) | ||
971 | 750 | 685 | ||
972 | === modified file 'juju/agents/unit.py' | |||
973 | --- juju/agents/unit.py 2012-01-10 14:14:28 +0000 | |||
974 | +++ juju/agents/unit.py 2012-02-02 16:42:42 +0000 | |||
975 | @@ -1,7 +1,5 @@ | |||
976 | 1 | import os | 1 | import os |
977 | 2 | import logging | 2 | import logging |
978 | 3 | import shutil | ||
979 | 4 | import tempfile | ||
980 | 5 | 3 | ||
981 | 6 | from twisted.internet.defer import inlineCallbacks, returnValue | 4 | from twisted.internet.defer import inlineCallbacks, returnValue |
982 | 7 | 5 | ||
983 | @@ -14,8 +12,6 @@ | |||
984 | 14 | from juju.unit.lifecycle import UnitLifecycle, HOOK_SOCKET_FILE | 12 | from juju.unit.lifecycle import UnitLifecycle, HOOK_SOCKET_FILE |
985 | 15 | from juju.unit.workflow import UnitWorkflowState | 13 | from juju.unit.workflow import UnitWorkflowState |
986 | 16 | 14 | ||
987 | 17 | from juju.unit.charm import download_charm | ||
988 | 18 | |||
989 | 19 | from juju.agents.base import BaseAgent | 15 | from juju.agents.base import BaseAgent |
990 | 20 | 16 | ||
991 | 21 | 17 | ||
992 | @@ -66,14 +62,14 @@ | |||
993 | 66 | @inlineCallbacks | 62 | @inlineCallbacks |
994 | 67 | def start(self): | 63 | def start(self): |
995 | 68 | """Start the unit agent process.""" | 64 | """Start the unit agent process.""" |
997 | 69 | self.service_state_manager = ServiceStateManager(self.client) | 65 | service_state_manager = ServiceStateManager(self.client) |
998 | 70 | 66 | ||
999 | 71 | # Retrieve our unit and configure working directories. | 67 | # Retrieve our unit and configure working directories. |
1000 | 72 | service_name = self.unit_name.split("/")[0] | 68 | service_name = self.unit_name.split("/")[0] |
1002 | 73 | service_state = yield self.service_state_manager.get_service_state( | 69 | self.service_state = yield service_state_manager.get_service_state( |
1003 | 74 | service_name) | 70 | service_name) |
1004 | 75 | 71 | ||
1006 | 76 | self.unit_state = yield service_state.get_unit_state( | 72 | self.unit_state = yield self.service_state.get_unit_state( |
1007 | 77 | self.unit_name) | 73 | self.unit_name) |
1008 | 78 | self.unit_directory = os.path.join( | 74 | self.unit_directory = os.path.join( |
1009 | 79 | self.config["juju_directory"], "units", | 75 | self.config["juju_directory"], "units", |
1010 | @@ -82,10 +78,11 @@ | |||
1011 | 82 | self.config["juju_directory"], "state") | 78 | self.config["juju_directory"], "state") |
1012 | 83 | 79 | ||
1013 | 84 | # Setup the server portion of the cli api exposed to hooks. | 80 | # Setup the server portion of the cli api exposed to hooks. |
1014 | 81 | socket_path = os.path.join(self.unit_directory, HOOK_SOCKET_FILE) | ||
1015 | 82 | if os.path.exists(socket_path): | ||
1016 | 83 | os.unlink(socket_path) | ||
1017 | 85 | from twisted.internet import reactor | 84 | from twisted.internet import reactor |
1021 | 86 | self.api_socket = reactor.listenUNIX( | 85 | self.api_socket = reactor.listenUNIX(socket_path, self.api_factory) |
1019 | 87 | os.path.join(self.unit_directory, HOOK_SOCKET_FILE), | ||
1020 | 88 | self.api_factory) | ||
1022 | 89 | 86 | ||
1023 | 90 | # Setup the unit state's address | 87 | # Setup the unit state's address |
1024 | 91 | address = yield get_unit_address(self.client) | 88 | address = yield get_unit_address(self.client) |
1025 | @@ -100,9 +97,13 @@ | |||
1026 | 100 | # Inform the system, we're alive. | 97 | # Inform the system, we're alive. |
1027 | 101 | yield self.unit_state.connect_agent() | 98 | yield self.unit_state.connect_agent() |
1028 | 102 | 99 | ||
1029 | 100 | # Start paying attention to the debug-log setting | ||
1030 | 101 | if self.get_watch_enabled(): | ||
1031 | 102 | yield self.unit_state.watch_hook_debug(self.cb_watch_hook_debug) | ||
1032 | 103 | |||
1033 | 103 | self.lifecycle = UnitLifecycle( | 104 | self.lifecycle = UnitLifecycle( |
1036 | 104 | self.client, self.unit_state, service_state, self.unit_directory, | 105 | self.client, self.unit_state, self.service_state, |
1037 | 105 | self.state_directory, self.executor) | 106 | self.unit_directory, self.state_directory, self.executor) |
1038 | 106 | 107 | ||
1039 | 107 | self.workflow = UnitWorkflowState( | 108 | self.workflow = UnitWorkflowState( |
1040 | 108 | self.client, self.unit_state, self.lifecycle, self.state_directory) | 109 | self.client, self.unit_state, self.lifecycle, self.state_directory) |
1041 | @@ -113,7 +114,7 @@ | |||
1042 | 113 | 114 | ||
1043 | 114 | if self.get_watch_enabled(): | 115 | if self.get_watch_enabled(): |
1044 | 115 | yield self.unit_state.watch_resolved(self.cb_watch_resolved) | 116 | yield self.unit_state.watch_resolved(self.cb_watch_resolved) |
1046 | 116 | yield service_state.watch_config_state( | 117 | yield self.service_state.watch_config_state( |
1047 | 117 | self.cb_watch_config_changed) | 118 | self.cb_watch_config_changed) |
1048 | 118 | yield self.unit_state.watch_upgrade_flag( | 119 | yield self.unit_state.watch_upgrade_flag( |
1049 | 119 | self.cb_watch_upgrade_flag) | 120 | self.cb_watch_upgrade_flag) |
1050 | @@ -175,13 +176,34 @@ | |||
1051 | 175 | """Update the unit's charm when requested. | 176 | """Update the unit's charm when requested. |
1052 | 176 | """ | 177 | """ |
1053 | 177 | upgrade_flag = yield self.unit_state.get_upgrade_flag() | 178 | upgrade_flag = yield self.unit_state.get_upgrade_flag() |
1061 | 178 | if upgrade_flag: | 179 | if not upgrade_flag: |
1062 | 179 | log.info("Upgrade detected, starting upgrade") | 180 | log.info("No upgrade flag set.") |
1063 | 180 | upgrade = CharmUpgradeOperation(self) | 181 | return |
1064 | 181 | try: | 182 | |
1065 | 182 | yield upgrade.run() | 183 | log.info("Upgrade detected") |
1066 | 183 | except Exception: | 184 | # Clear the flag immediately; this means that upgrade requests will |
1067 | 184 | log.exception("Error while upgrading") | 185 | # be *ignored* by units which are not "started", and will need to be |
1068 | 186 | # reissued when the units are in acceptable states. | ||
1069 | 187 | yield self.unit_state.clear_upgrade_flag() | ||
1070 | 188 | |||
1071 | 189 | new_id = yield self.service_state.get_charm_id() | ||
1072 | 190 | old_id = yield self.unit_state.get_charm_id() | ||
1073 | 191 | if new_id == old_id: | ||
1074 | 192 | log.info("Upgrade ignored: already running latest charm") | ||
1075 | 193 | return | ||
1076 | 194 | |||
1077 | 195 | state = yield self.workflow.get_state() | ||
1078 | 196 | if state != "started": | ||
1079 | 197 | log.warning( | ||
1080 | 198 | "Cannot upgrade: unit is in non-started state %s. Reissue " | ||
1081 | 199 | "upgrade command to try again.", state) | ||
1082 | 200 | return | ||
1083 | 201 | |||
1084 | 202 | log.info("Starting upgrade") | ||
1085 | 203 | if (yield self.workflow.fire_transition("upgrade_charm")): | ||
1086 | 204 | log.info("Upgrade complete") | ||
1087 | 205 | else: | ||
1088 | 206 | log.info("Upgrade failed") | ||
1089 | 185 | 207 | ||
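The inlined replacement for `CharmUpgradeOperation` reduces to a short decision chain: clear the flag up front, skip if the unit already runs the service's charm, refuse if the unit is not "started", otherwise fire the transition. A synchronous sketch of that chain (hypothetical function name; the real code is Deferred-based and clears the flag in ZooKeeper):

```python
def decide_upgrade(flag_set, unit_charm_id, service_charm_id, workflow_state):
    # Mirrors cb_watch_upgrade_flag: because the flag is cleared
    # immediately, a request made while the unit is not "started" is
    # dropped and must be reissued once the unit is in an acceptable
    # state.
    if not flag_set:
        return "no-op: no upgrade flag set"
    if service_charm_id == unit_charm_id:
        return "no-op: already running latest charm"
    if workflow_state != "started":
        return "no-op: unit in non-started state %r" % workflow_state
    return "fire upgrade_charm transition"
```

The ordering matters: the charm-id comparison happens after the flag is cleared, so a redundant upgrade request is consumed rather than left pending.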
1090 | 186 | @inlineCallbacks | 208 | @inlineCallbacks |
1091 | 187 | def cb_watch_config_changed(self, change): | 209 | def cb_watch_config_changed(self, change): |
1092 | @@ -198,99 +220,5 @@ | |||
1093 | 198 | yield self.workflow.fire_transition("reconfigure") | 220 | yield self.workflow.fire_transition("reconfigure") |
1094 | 199 | 221 | ||
1095 | 200 | 222 | ||
1096 | 201 | class CharmUpgradeOperation(object): | ||
1097 | 202 | """A unit agent charm upgrade operation.""" | ||
1098 | 203 | |||
1099 | 204 | def __init__(self, agent): | ||
1100 | 205 | self._agent = agent | ||
1101 | 206 | self._log = logging.getLogger("unit.upgrade") | ||
1102 | 207 | self._charm_directory = tempfile.mkdtemp( | ||
1103 | 208 | suffix="charm-upgrade", prefix="tmp") | ||
1104 | 209 | |||
1105 | 210 | def retrieve_charm(self, charm_id): | ||
1106 | 211 | return download_charm( | ||
1107 | 212 | self._agent.client, charm_id, self._charm_directory) | ||
1108 | 213 | |||
1109 | 214 | def _remove_tree(self, result): | ||
1110 | 215 | if os.path.exists(self._charm_directory): | ||
1111 | 216 | shutil.rmtree(self._charm_directory) | ||
1112 | 217 | return result | ||
1113 | 218 | |||
1114 | 219 | def run(self): | ||
1115 | 220 | d = self._run() | ||
1116 | 221 | d.addBoth(self._remove_tree) | ||
1117 | 222 | return d | ||
1118 | 223 | |||
1119 | 224 | @inlineCallbacks | ||
1120 | 225 | def _run(self): | ||
1121 | 226 | self._log.info("Starting charm upgrade...") | ||
1122 | 227 | |||
1123 | 228 | # Verify the workflow state | ||
1124 | 229 | workflow_state = yield self._agent.workflow.get_state() | ||
1125 | 230 | if workflow_state != "started": | ||
1126 | 231 | self._log.warning( | ||
1127 | 232 | "Unit not in an upgradeable state: %s", workflow_state) | ||
1128 | 233 | # Upgrades can only be supported while the unit is | ||
1129 | 234 | # running, we clear the flag because we don't support | ||
1130 | 235 | # persistent upgrade requests across unit starts. The | ||
1131 | 236 | # upgrade request will need to be reissued, after | ||
1132 | 237 | # resolving or restarting the unit. | ||
1133 | 238 | yield self._agent.unit_state.clear_upgrade_flag() | ||
1134 | 239 | returnValue(False) | ||
1135 | 240 | |||
1136 | 241 | # Get, check, and clear the flag. Do it first so a second upgrade | ||
1137 | 242 | # will re-establish the upgrade request. |||
1137 | 242 | # will re-establish the upgrade request. |||
1138 | 243 | upgrade_flag = yield self._agent.unit_state.get_upgrade_flag() | ||
1139 | 244 | if not upgrade_flag: | ||
1140 | 245 | self._log.warning("No upgrade flag set.") | ||
1141 | 246 | returnValue(False) | ||
1142 | 247 | |||
1143 | 248 | self._log.debug("Clearing upgrade flag.") | ||
1144 | 249 | yield self._agent.unit_state.clear_upgrade_flag() | ||
1145 | 250 | |||
1146 | 251 | # Retrieve the service state | ||
1147 | 252 | service_state_manager = ServiceStateManager(self._agent.client) | ||
1148 | 253 | service_state = yield service_state_manager.get_service_state( | ||
1149 | 254 | self._agent.unit_name.split("/")[0]) | ||
1150 | 255 | |||
1151 | 256 | # Verify unit state, upgrade flag, and newer version requested. | ||
1152 | 257 | service_charm_id = yield service_state.get_charm_id() | ||
1153 | 258 | unit_charm_id = yield self._agent.unit_state.get_charm_id() | ||
1154 | 259 | |||
1155 | 260 | if service_charm_id == unit_charm_id: | ||
1156 | 261 | self._log.debug("Unit already running latest charm") | ||
1157 | 262 | yield self._agent.unit_state.clear_upgrade_flag() | ||
1158 | 263 | returnValue(True) | ||
1159 | 264 | |||
1160 | 265 | # Retrieve charm | ||
1161 | 266 | self._log.debug("Retrieving charm %s", service_charm_id) | ||
1162 | 267 | charm = yield self.retrieve_charm(service_charm_id) | ||
1163 | 268 | |||
1164 | 269 | # Stop hook executions | ||
1165 | 270 | self._log.debug("Stopping hook execution.") | ||
1166 | 271 | yield self._agent.executor.stop() | ||
1167 | 272 | |||
1168 | 273 | # Note the current charm version | ||
1169 | 274 | self._log.debug("Setting unit charm id to %s", service_charm_id) | ||
1170 | 275 | yield self._agent.unit_state.set_charm_id(service_charm_id) | ||
1171 | 276 | |||
1172 | 277 | # Extract charm | ||
1173 | 278 | self._log.debug("Extracting new charm.") | ||
1174 | 279 | charm.extract_to( | ||
1175 | 280 | os.path.join(self._agent.unit_directory, "charm")) | ||
1176 | 281 | |||
1177 | 282 | # Upgrade | ||
1178 | 283 | self._log.debug("Invoking upgrade transition.") | ||
1179 | 284 | |||
1180 | 285 | success = yield self._agent.workflow.fire_transition( | ||
1181 | 286 | "upgrade_charm") | ||
1182 | 287 | |||
1183 | 288 | if success: | ||
1184 | 289 | self._log.debug("Unit upgraded.") | ||
1185 | 290 | else: | ||
1186 | 291 | self._log.warning("Upgrade failed.") | ||
1187 | 292 | |||
1188 | 293 | returnValue(success) | ||
1189 | 294 | |||
1190 | 295 | if __name__ == '__main__': | 223 | if __name__ == '__main__': |
1191 | 296 | UnitAgent.run() | 224 | UnitAgent.run() |
1192 | 297 | 225 | ||
1193 | === modified file 'juju/control/options.py' | |||
1194 | --- juju/control/options.py 2011-01-20 18:00:23 +0000 | |||
1195 | +++ juju/control/options.py 2012-02-02 16:42:42 +0000 | |||
1196 | @@ -53,9 +53,8 @@ | |||
1197 | 53 | ) | 53 | ) |
1198 | 54 | 54 | ||
1199 | 55 | unix_group.add_argument( | 55 | unix_group.add_argument( |
1201 | 56 | "--pidfile", default="%s.pid" % agent.name, | 56 | "--pidfile", default="", |
1202 | 57 | help="Path to the pid file", | 57 | help="Path to the pid file", |
1203 | 58 | type=ensure_abs_path, | ||
1204 | 59 | ) | 58 | ) |
1205 | 60 | 59 | ||
1206 | 61 | unix_group.add_argument( | 60 | unix_group.add_argument( |
1207 | @@ -91,9 +90,9 @@ | |||
1208 | 91 | ) | 90 | ) |
1209 | 92 | 91 | ||
1210 | 93 | unix_group.add_argument( | 92 | unix_group.add_argument( |
1214 | 94 | "--daemon", "-n", default=True, | 93 | "--nodaemon", "-n", default=False, |
1215 | 95 | dest="nodaemon", action="store_false", | 94 | dest="nodaemon", action="store_true", |
1216 | 96 | help="Daemonize the process", | 95 | help="Don't daemonize (stay in foreground)", |
1217 | 97 | ) | 96 | ) |
1218 | 98 | 97 | ||
1219 | 99 | unix_group.add_argument( | 98 | unix_group.add_argument( |
1220 | 100 | 99 | ||
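The options.py hunk replaces an inverted `--daemon` flag (default `True`, `store_false` into `nodaemon`) with a plain `--nodaemon` `store_true` flag, which is much easier to reason about. A self-contained check of the new behaviour as shown in the diff:

```python
import argparse

parser = argparse.ArgumentParser()
# New form from the diff: when the flag is absent, nodaemon=False
# (daemonize); passing --nodaemon or -n keeps the agent in the
# foreground.
parser.add_argument(
    "--nodaemon", "-n", default=False,
    dest="nodaemon", action="store_true",
    help="Don't daemonize (stay in foreground)")

background = parser.parse_args([])
foreground = parser.parse_args(["--nodaemon"])
```

The old `--daemon`/`store_false` form meant the flag name and its destination had opposite senses, a common source of confusion.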
1221 | === modified file 'juju/control/status.py' | |||
1222 | --- juju/control/status.py 2011-12-07 05:02:08 +0000 | |||
1223 | +++ juju/control/status.py 2012-02-02 16:42:42 +0000 | |||
1224 | @@ -216,8 +216,10 @@ | |||
1225 | 216 | relation_status = {} | 216 | relation_status = {} |
1226 | 217 | for relation in relations: | 217 | for relation in relations: |
1227 | 218 | try: | 218 | try: |
1228 | 219 | print unit.unit_name | ||
1229 | 219 | relation_unit = yield relation.get_unit_state(unit) | 220 | relation_unit = yield relation.get_unit_state(unit) |
1230 | 220 | except UnitRelationStateNotFound: | 221 | except UnitRelationStateNotFound: |
1231 | 222 | print "POW SPLAT" | ||
1232 | 221 | # This exception will occur when relations are | 223 | # This exception will occur when relations are |
1233 | 222 | # established between services without service | 224 | # established between services without service |
1234 | 223 | # units, and therefore never have any | 225 | # units, and therefore never have any |
1235 | 224 | 226 | ||
1236 | === modified file 'juju/control/tests/test_resolved.py' | |||
1237 | --- juju/control/tests/test_resolved.py 2012-01-12 10:18:07 +0000 | |||
1238 | +++ juju/control/tests/test_resolved.py 2012-02-02 16:42:42 +0000 | |||
1239 | @@ -88,10 +88,10 @@ | |||
1240 | 88 | """ | 88 | """ |
1241 | 89 | for unit, state in units: | 89 | for unit, state in units: |
1242 | 90 | unit_relation = yield service_relation.add_unit_state(unit) | 90 | unit_relation = yield service_relation.add_unit_state(unit) |
1247 | 91 | lifecycle = UnitRelationLifecycle(self.client, | 91 | lifecycle = UnitRelationLifecycle( |
1248 | 92 | unit.unit_name, unit_relation, | 92 | self.client, unit.unit_name, unit_relation, |
1249 | 93 | service_relation.relation_name, | 93 | service_relation.relation_name, self.makeDir(), self.makeDir(), |
1250 | 94 | self.makeDir(), self.executor) | 94 | self.executor) |
1251 | 95 | workflow_state = RelationWorkflowState( | 95 | workflow_state = RelationWorkflowState( |
1252 | 96 | self.client, unit_relation, service_relation.relation_name, | 96 | self.client, unit_relation, service_relation.relation_name, |
1253 | 97 | lifecycle, self.makeDir()) | 97 | lifecycle, self.makeDir()) |
1254 | 98 | 98 | ||
1255 | === modified file 'juju/control/tests/test_status.py' | |||
1256 | --- juju/control/tests/test_status.py 2011-12-07 18:29:12 +0000 | |||
1257 | +++ juju/control/tests/test_status.py 2012-02-02 16:42:42 +0000 | |||
1258 | @@ -39,7 +39,7 @@ | |||
1259 | 39 | # Status tests setup a large tree every time, make allowances for it. | 39 | # Status tests setup a large tree every time, make allowances for it. |
1260 | 40 | # TODO: create minimal trees needed per test. | 40 | # TODO: create minimal trees needed per test. |
1261 | 41 | timeout = 10 | 41 | timeout = 10 |
1263 | 42 | 42 | ||
1264 | 43 | @inlineCallbacks | 43 | @inlineCallbacks |
1265 | 44 | def setUp(self): | 44 | def setUp(self): |
1266 | 45 | yield super(StatusTestBase, self).setUp() | 45 | yield super(StatusTestBase, self).setUp() |
1267 | @@ -107,6 +107,7 @@ | |||
1268 | 107 | options = TwistedOptionNamespace() | 107 | options = TwistedOptionNamespace() |
1269 | 108 | options["juju_directory"] = path | 108 | options["juju_directory"] = path |
1270 | 109 | options["zookeeper_servers"] = get_test_zookeeper_address() | 109 | options["zookeeper_servers"] = get_test_zookeeper_address() |
1271 | 110 | options["session_file"] = self.makeFile() | ||
1272 | 110 | for k, v in extra_options.items(): | 111 | for k, v in extra_options.items(): |
1273 | 111 | options[k] = v | 112 | options[k] = v |
1274 | 112 | agent.configure(options) | 113 | agent.configure(options) |
1275 | @@ -302,6 +303,7 @@ | |||
1276 | 302 | options = TwistedOptionNamespace() | 303 | options = TwistedOptionNamespace() |
1277 | 303 | options["juju_directory"] = self.makeDir() | 304 | options["juju_directory"] = self.makeDir() |
1278 | 304 | options["zookeeper_servers"] = get_test_zookeeper_address() | 305 | options["zookeeper_servers"] = get_test_zookeeper_address() |
1279 | 306 | options["session_file"] = self.makeFile() | ||
1280 | 305 | options["machine_id"] = "0" | 307 | options["machine_id"] = "0" |
1281 | 306 | agent.configure(options) | 308 | agent.configure(options) |
1282 | 307 | agent.set_watch_enabled(False) | 309 | agent.set_watch_enabled(False) |
1283 | 308 | 310 | ||
1284 | === modified file 'juju/errors.py' | |||
1285 | --- juju/errors.py 2011-09-24 22:21:23 +0000 | |||
1286 | +++ juju/errors.py 2012-02-02 16:42:42 +0000 | |||
1287 | @@ -62,7 +62,7 @@ | |||
1288 | 62 | return "Error processing %r: %s" % (self.path, self.message) | 62 | return "Error processing %r: %s" % (self.path, self.message) |
1289 | 63 | 63 | ||
1290 | 64 | 64 | ||
1292 | 65 | class CharmInvocationError(JujuError): | 65 | class CharmInvocationError(CharmError): |
1293 | 66 | """A charm's hook invocation exited with an error""" | 66 | """A charm's hook invocation exited with an error""" |
1294 | 67 | 67 | ||
1295 | 68 | def __init__(self, path, exit_code): | 68 | def __init__(self, path, exit_code): |
1296 | @@ -74,6 +74,16 @@ | |||
1297 | 74 | self.path, self.exit_code) | 74 | self.path, self.exit_code) |
1298 | 75 | 75 | ||
1299 | 76 | 76 | ||
1300 | 77 | class CharmUpgradeError(CharmError): | ||
1301 | 78 | """Something went wrong trying to upgrade a charm""" | ||
1302 | 79 | |||
1303 | 80 | def __init__(self, message): | ||
1304 | 81 | self.message = message | ||
1305 | 82 | |||
1306 | 83 | def __str__(self): | ||
1307 | 84 | return "Cannot upgrade charm: %s" % self.message | ||
1308 | 85 | |||
1309 | 86 | |||
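Two error changes land in this file: `CharmInvocationError` is re-parented from `JujuError` to `CharmError`, and a new `CharmUpgradeError` is added under `CharmError`. The payoff is that callers can catch the whole charm family with one clause. A condensed sketch (base classes simplified; the real ones carry more state):

```python
class JujuError(Exception):
    """Top of the juju exception hierarchy (simplified)."""

class CharmError(JujuError):
    """Base for charm-related failures (simplified)."""

class CharmUpgradeError(CharmError):
    """Something went wrong trying to upgrade a charm."""

    def __init__(self, message):
        self.message = message

    def __str__(self):
        return "Cannot upgrade charm: %s" % self.message

def classify(exc):
    # One except clause now covers every charm failure mode; other
    # juju errors fall through to the broader handler.
    try:
        raise exc
    except CharmError:
        return "charm problem"
    except JujuError:
        return "other juju problem"
```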
1310 | 77 | class FileAlreadyExists(JujuError): | 87 | class FileAlreadyExists(JujuError): |
1311 | 78 | """Raised when something refuses to overwrite an existing file. | 88 | """Raised when something refuses to overwrite an existing file. |
1312 | 79 | 89 | ||
1313 | @@ -164,3 +174,6 @@ | |||
1314 | 164 | self.user_policy, | 174 | self.user_policy, |
1315 | 165 | self.provider_type, | 175 | self.provider_type, |
1316 | 166 | ", ".join(self.provider_policies))) | 176 | ", ".join(self.provider_policies))) |
1317 | 177 | |||
1318 | 178 | class ServiceError(JujuError): | ||
1319 | 179 | """Some problem with an upstart service""" | ||
1320 | 167 | 180 | ||
1321 | === modified file 'juju/hooks/scheduler.py' | |||
1322 | --- juju/hooks/scheduler.py 2011-12-12 01:56:05 +0000 | |||
1323 | +++ juju/hooks/scheduler.py 2012-02-02 16:42:42 +0000 | |||
1324 | @@ -1,4 +1,6 @@ | |||
1325 | 1 | import logging | 1 | import logging |
1326 | 2 | import os | ||
1327 | 3 | import yaml | ||
1328 | 2 | 4 | ||
1329 | 3 | from twisted.internet.defer import DeferredQueue, inlineCallbacks | 5 | from twisted.internet.defer import DeferredQueue, inlineCallbacks |
1330 | 4 | from juju.state.hook import RelationHookContext, RelationChange | 6 | from juju.state.hook import RelationHookContext, RelationChange |
1331 | @@ -28,31 +30,69 @@ | |||
1332 | 28 | the run queue. | 30 | the run queue. |
1333 | 29 | """ | 31 | """ |
1334 | 30 | 32 | ||
1337 | 31 | def __init__(self, client, executor, unit_relation, relation_name, unit_name): | 33 | def __init__(self, client, executor, unit_relation, relation_name, |
1338 | 32 | self._running = None | 34 | unit_name, state_path): |
1339 | 35 | self._running = False | ||
1340 | 36 | self._state_path = state_path | ||
1341 | 33 | 37 | ||
1342 | 34 | # The thing that will actually run the hook for us | 38 | # The thing that will actually run the hook for us |
1343 | 35 | self._executor = executor | 39 | self._executor = executor |
1344 | 36 | |||
1345 | 37 | # For hook context construction. | 40 | # For hook context construction. |
1346 | 38 | self._client = client | 41 | self._client = client |
1347 | 39 | self._unit_relation = unit_relation | 42 | self._unit_relation = unit_relation |
1348 | 40 | self._relation_name = relation_name | 43 | self._relation_name = relation_name |
1349 | 41 | self._members = None | ||
1350 | 42 | self._unit_name = unit_name | 44 | self._unit_name = unit_name |
1351 | 43 | 45 | ||
1358 | 44 | # Track next operation by node | 46 | if os.path.exists(self._state_path): |
1359 | 45 | self._node_queue = {} | 47 | self._load_state() |
1360 | 46 | 48 | else: | |
1361 | 47 | # Track node operations by clock tick | 49 | self._create_state() |
1362 | 48 | self._clock_queue = {} | 50 | |
1363 | 49 | 51 | def _create_state(self): | |
1364 | 52 | # Current units (as far as the next hook should know) | ||
1365 | 53 | self._context_members = None | ||
1366 | 54 | # Current units and settings versions (as far as the queue knows) | ||
1367 | 55 | self._member_versions = {} | ||
1368 | 56 | # Tracks next operation by unit | ||
1369 | 57 | self._unit_ops = {} | ||
1370 | 58 | # Tracks unit operations by clock tick | ||
1371 | 59 | self._clock_units = {} | ||
1372 | 50 | # Run queue (clock) | 60 | # Run queue (clock) |
1373 | 51 | self._run_queue = DeferredQueue() | 61 | self._run_queue = DeferredQueue() |
1374 | 52 | |||
1375 | 53 | # Artificial clock sequence | 62 | # Artificial clock sequence |
1376 | 54 | self._clock_sequence = 0 | 63 | self._clock_sequence = 0 |
1377 | 55 | 64 | ||
1378 | 65 | def _load_state(self): | ||
1379 | 66 | with open(self._state_path) as f: | ||
1380 | 67 | state = yaml.load(f.read()) | ||
1381 | 68 | if not state: | ||
1382 | 69 | return self._create_state() | ||
1383 | 70 | self._context_members = state["context_members"] | ||
1384 | 71 | self._member_versions = state["member_versions"] | ||
1385 | 72 | self._unit_ops = state["unit_ops"] | ||
1386 | 73 | self._clock_units = state["clock_units"] | ||
1387 | 74 | self._run_queue = DeferredQueue() | ||
1388 | 75 | self._run_queue.pending = state["clock_queue"] | ||
1389 | 76 | self._clock_sequence = state["clock_sequence"] | ||
1390 | 77 | |||
1391 | 78 | def _save_state(self): | ||
1392 | 79 | state = yaml.dump({ | ||
1393 | 80 | "context_members": self._context_members, | ||
1394 | 81 | "member_versions": self._member_versions, | ||
1395 | 82 | "unit_ops": self._unit_ops, | ||
1396 | 83 | "clock_units": self._clock_units, | ||
1397 | 84 | "clock_queue": [ | ||
1398 | 85 | # Strip "stop" instructions: if the lifecycle stopped us, | ||
1399 | 86 | # then if/when the lifecycle comes up again in a stopped | ||
1400 | 87 | # state, it won't start us in the first place. | ||
1401 | 88 | c for c in self._run_queue.pending if c is not None], | ||
1402 | 89 | "clock_sequence": self._clock_sequence}) | ||
1403 | 90 | |||
1404 | 91 | temp_path = self._state_path + "~" | ||
1405 | 92 | with open(temp_path, "w") as f: | ||
1406 | 93 | f.write(state) | ||
1407 | 94 | os.rename(temp_path, self._state_path) | ||
1408 | 95 | |||
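`_save_state` above uses the classic write-to-temp-then-rename pattern: `os.rename` over an existing file is atomic on POSIX, so a crash mid-write can truncate only the `~` temp file, never the state file a restarted scheduler will load. A standalone sketch (using `json` rather than `yaml` so it needs only the stdlib):

```python
import json
import os
import tempfile

def save_state(state_path, state):
    # Dump to a sibling temp file, then atomically rename over the
    # target; a reader sees either the old state or the new one,
    # never a half-written file.
    temp_path = state_path + "~"
    with open(temp_path, "w") as f:
        f.write(json.dumps(state))
    os.rename(temp_path, state_path)

def load_state(state_path):
    with open(state_path) as f:
        return json.loads(f.read())

path = os.path.join(tempfile.mkdtemp(), "state")
save_state(path, {"clock_sequence": 3, "context_members": ["u-1"]})
restored = load_state(path)
```

After the rename the temp file is gone, which is why `_load_state` never needs to consider the `~` path.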
1409 | 56 | @property | 96 | @property |
1410 | 57 | def running(self): | 97 | def running(self): |
1411 | 58 | return self._running is True | 98 | return self._running is True |
1412 | @@ -61,6 +101,12 @@ | |||
1413 | 61 | def run(self): | 101 | def run(self): |
1414 | 62 | """Run the hook scheduler and execution.""" | 102 | """Run the hook scheduler and execution.""" |
1415 | 63 | assert not self._running, "Scheduler is already running" | 103 | assert not self._running, "Scheduler is already running" |
1416 | 104 | try: | ||
1417 | 105 | with open(self._state_path, "a"): | ||
1418 | 106 | pass | ||
1419 | 107 | except IOError: | ||
1420 | 108 | raise AssertionError("%s is not writable!" % self._state_path) | ||
1421 | 109 | |||
1422 | 64 | self._running = True | 110 | self._running = True |
1423 | 65 | log.debug("start") | 111 | log.debug("start") |
1424 | 66 | 112 | ||
1425 | @@ -72,16 +118,17 @@ | |||
1426 | 72 | break | 118 | break |
1427 | 73 | 119 | ||
1428 | 74 | # Get all the units with changes in this clock tick. | 120 | # Get all the units with changes in this clock tick. |
1430 | 75 | for unit_name in self._clock_queue.pop(clock): | 121 | for unit_name in self._clock_units.pop(clock): |
1431 | 76 | 122 | ||
1432 | 77 | # Get the change for the unit. | 123 | # Get the change for the unit. |
1434 | 78 | change_clock, change_type = self._node_queue.pop(unit_name) | 124 | change_clock, change_type = self._unit_ops.pop(unit_name) |
1435 | 79 | 125 | ||
1436 | 80 | log.debug("executing hook for %s:%s", | 126 | log.debug("executing hook for %s:%s", |
1437 | 81 | unit_name, CHANGE_LABELS[change_type]) | 127 | unit_name, CHANGE_LABELS[change_type]) |
1438 | 82 | 128 | ||
1439 | 83 | # Execute the hook | 129 | # Execute the hook |
1440 | 84 | yield self._execute(unit_name, change_type) | 130 | yield self._execute(unit_name, change_type) |
1441 | 131 | self._save_state() | ||
1442 | 85 | 132 | ||
1443 | 86 | def stop(self): | 133 | def stop(self): |
1444 | 87 | """Stop the hook execution. | 134 | """Stop the hook execution. |
1445 | @@ -99,26 +146,24 @@ | |||
1446 | 99 | # occurs. | 146 | # occurs. |
1447 | 100 | self._run_queue.put(None) | 147 | self._run_queue.put(None) |
1448 | 101 | 148 | ||
1455 | 102 | def notify_change(self, old_units=(), new_units=(), modified=()): | 149 | def cb_change_members(self, old_units, new_units): |
1456 | 103 | """Receive changes regarding related units and schedule hook execution. | 150 | log.debug("members changed: old=%s, new=%s", old_units, new_units) |
1457 | 104 | """ | 151 | scheduled = 0 |
1452 | 105 | log.debug("relation change old:%s, new:%s, modified:%s", | ||
1453 | 106 | old_units, new_units, modified) | ||
1454 | 107 | |||
1458 | 108 | self._clock_sequence += 1 | 152 | self._clock_sequence += 1 |
1459 | 109 | 153 | ||
1472 | 110 | # keep track if we've scheduled changes during this clock | 154 | if self._context_members is None: |
1473 | 111 | scheduled = 0 | 155 | self._context_members = list(old_units) |
1474 | 112 | 156 | ||
1475 | 113 | # Handle membership changes | 157 | if set(self._member_versions) != set(old_units): |
1476 | 114 | 158 | log.debug( | |
1477 | 115 | # If we don't have a cached membership yet, use the old units | 159 | "old does not match last recorded units: %s", |
1478 | 116 | # as a baseline. | 160 | sorted(self._member_versions)) |
1479 | 117 | if self._members is None: | 161 | |
1480 | 118 | self._members = list(old_units) | 162 | added = set(new_units) - set(self._member_versions) |
1481 | 119 | 163 | removed = set(self._member_versions) - set(new_units) | |
1482 | 120 | added = set(new_units) - set(old_units) | 164 | self._member_versions.update(dict((unit, 0) for unit in added)) |
1483 | 121 | removed = set(old_units) - set(new_units) | 165 | for unit in removed: |
1484 | 166 | del self._member_versions[unit] | ||
1485 | 122 | 167 | ||
1486 | 123 | for unit_name in sorted(added): | 168 | for unit_name in sorted(added): |
1487 | 124 | scheduled += self._queue_change( | 169 | scheduled += self._queue_change( |
1488 | @@ -128,59 +173,68 @@ | |||
1489 | 128 | scheduled += self._queue_change( | 173 | scheduled += self._queue_change( |
1490 | 129 | unit_name, REMOVED, self._clock_sequence) | 174 | unit_name, REMOVED, self._clock_sequence) |
1491 | 130 | 175 | ||
1496 | 131 | # Handle modified change | 176 | if scheduled: |
1497 | 132 | for unit_name in modified: | 177 | self._run_queue.put(self._clock_sequence) |
1498 | 133 | scheduled += self._queue_change( | 178 | self._save_state() |
1495 | 134 | unit_name, MODIFIED, self._clock_sequence) | ||
1499 | 135 | 179 | ||
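A key detail of `cb_change_members` above: it diffs the incoming unit list against `_member_versions` (the queue's own persistent record) rather than trusting the watch's `old_units`, so membership changes missed while the agent was down are still detected. The bookkeeping, as a standalone sketch:

```python
def apply_membership_change(member_versions, new_units):
    # Diff against what the queue last recorded: fresh members start
    # at settings version 0, departed members are dropped entirely.
    added = set(new_units) - set(member_versions)
    removed = set(member_versions) - set(new_units)
    member_versions.update(dict((unit, 0) for unit in added))
    for unit in removed:
        del member_versions[unit]
    return sorted(added), sorted(removed)

versions = {"u-1": 2, "u-2": 0}
added, removed = apply_membership_change(versions, ["u-1", "u-3"])
```

Here "u-3" joins at version 0 and "u-2" is forgotten, regardless of what any stale `old_units` argument might have claimed.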
1500 | 180 | def cb_change_settings(self, unit_versions): | ||
1501 | 181 | log.debug("settings changed: %s", unit_versions) | ||
1502 | 182 | scheduled = 0 | ||
1503 | 183 | self._clock_sequence += 1 | ||
1504 | 184 | for (unit_name, version) in unit_versions: | ||
1505 | 185 | if version > self._member_versions.get(unit_name, 0): | ||
1506 | 186 | self._member_versions[unit_name] = version | ||
1507 | 187 | scheduled += self._queue_change( | ||
1508 | 188 | unit_name, MODIFIED, self._clock_sequence) | ||
1509 | 136 | if scheduled: | 189 | if scheduled: |
1510 | 137 | self._run_queue.put(self._clock_sequence) | 190 | self._run_queue.put(self._clock_sequence) |
1511 | 191 | self._save_state() | ||
1512 | 138 | 192 | ||
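`cb_change_settings` schedules a MODIFIED hook only when a unit's reported settings version is strictly greater than the last version recorded in `_member_versions`, so replayed or duplicate notifications after a restart are ignored. The gating logic in isolation (function name is illustrative):

```python
def pending_settings_hooks(member_versions, unit_versions):
    # Strictly-newer check: an equal or older version is a duplicate
    # of a change the queue has already accounted for.
    scheduled = []
    for unit_name, version in unit_versions:
        if version > member_versions.get(unit_name, 0):
            member_versions[unit_name] = version
            scheduled.append(unit_name)
    return scheduled

versions = {"u-1": 1, "u-2": 2}
hooks = pending_settings_hooks(
    versions, [("u-1", 2), ("u-2", 2), ("u-3", 1)])
```

"u-2" at version 2 is dropped as already-seen; unknown units default to version 0, so any positive version schedules them.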
1513 | 139 | def get_hook_context(self, change): | 193 | def get_hook_context(self, change): |
1514 | 140 | """ | 194 | """ |
1515 | 141 | Return a hook context, corresponding to the current state of the | 195 | Return a hook context, corresponding to the current state of the |
1516 | 142 | system. | 196 | system. |
1517 | 143 | """ | 197 | """ |
1519 | 144 | members = self._members or () | 198 | context_members = self._context_members or () |
1520 | 145 | context = RelationHookContext( | 199 | context = RelationHookContext( |
1521 | 146 | self._client, self._unit_relation, change, | 200 | self._client, self._unit_relation, change, |
1523 | 147 | sorted(members), unit_name=self._unit_name) | 201 | sorted(context_members), unit_name=self._unit_name) |
1524 | 148 | return context | 202 | return context |
1525 | 149 | 203 | ||
1526 | 150 | def _queue_change(self, unit_name, change_type, clock): | 204 | def _queue_change(self, unit_name, change_type, clock): |
1527 | 151 | """Queue up the node change for execution. | 205 | """Queue up the node change for execution. |
1528 | 152 | """ | 206 | """ |
1529 | 153 | # If it's a new change for the unit, store it and return. | 207 | # If it's a new change for the unit, store it and return. |
1533 | 154 | if not unit_name in self._node_queue: | 208 | if not unit_name in self._unit_ops: |
1534 | 155 | self._node_queue[unit_name] = (clock, change_type) | 209 | self._unit_ops[unit_name] = (clock, change_type) |
1535 | 156 | self._clock_queue.setdefault(clock, []).append(unit_name) | 210 | self._clock_units.setdefault(clock, []).append(unit_name) |
1536 | 157 | return True | 211 | return True |
1537 | 158 | 212 | ||
1538 | 159 | # Else merge/reduce with the previous operation. | 213 | # Else merge/reduce with the previous operation. |
1540 | 160 | previous_clock, previous_change = self._node_queue[unit_name] | 214 | previous_clock, previous_change = self._unit_ops[unit_name] |
1541 | 161 | change_clock, change_type = self._reduce( | 215 | change_clock, change_type = self._reduce( |
1542 | 162 | (previous_clock, previous_change), | 216 | (previous_clock, previous_change), |
1543 | 163 | (self._clock_sequence, change_type)) | 217 | (self._clock_sequence, change_type)) |
1544 | 164 | 218 | ||
1545 | 165 | # If they've cancelled, remove from node and clock queues | 219 | # If they've cancelled, remove from node and clock queues |
1546 | 166 | if change_type is None: | 220 | if change_type is None: |
1549 | 167 | del self._node_queue[unit_name] | 221 | del self._unit_ops[unit_name] |
1550 | 168 | self._clock_queue[previous_clock].remove(unit_name) | 222 | self._clock_units[previous_clock].remove(unit_name) |
1551 | 169 | return False | 223 | return False |
1552 | 170 | 224 | ||
1553 | 171 | # Update the node queue with the merged change. | 225 | # Update the node queue with the merged change. |
1555 | 172 | self._node_queue[unit_name] = (change_clock, change_type) | 226 | self._unit_ops[unit_name] = (change_clock, change_type) |
1556 | 173 | 227 | ||
1557 | 174 | # If the clock has changed, remove the old entry. | 228 | # If the clock has changed, remove the old entry. |
1558 | 175 | if change_clock != previous_clock: | 229 | if change_clock != previous_clock: |
1560 | 176 | self._clock_queue[previous_clock].remove(unit_name) | 230 | self._clock_units[previous_clock].remove(unit_name) |
1561 | 177 | 231 | ||
1562 | 178 | # If the old entry has precedence, we didn't schedule anything for | 232 | # If the old entry has precedence, we didn't schedule anything for |
1563 | 179 | # this clock cycle. | 233 | # this clock cycle. |
1564 | 180 | if change_clock != clock: | 234 | if change_clock != clock: |
1565 | 181 | return False | 235 | return False |
1566 | 182 | 236 | ||
1568 | 183 | self._clock_queue.setdefault(clock, []).append(unit_name) | 237 | self._clock_units.setdefault(clock, []).append(unit_name) |
1569 | 184 | return True | 238 | return True |
1570 | 185 | 239 | ||
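`_queue_change` coalesces a unit's pending change with whatever is already queued for it, via the reduction table in `_reduce` (the table itself falls outside this hunk). A sketch of the two reductions visible elsewhere in the diff: the scheduler tests state that a remove followed by an add collapses to a modify, and the `change_type is None` branch shows that some pair cancels outright; the assumption here is that it is add-then-remove, and that other combinations keep the earlier pending change:

```python
ADDED, REMOVED, MODIFIED = range(3)

def reduce_change(previous, new):
    # (clock, change_type) pairs in, one pair out; a change_type of
    # None means the two operations cancelled each other.
    prev_clock, prev_type = previous
    new_clock, new_type = new
    if (prev_type, new_type) == (REMOVED, ADDED):
        # The unit left and came back: net effect is a possible
        # settings change, kept at the earlier clock slot.
        return (prev_clock, MODIFIED)
    if (prev_type, new_type) == (ADDED, REMOVED):
        # A never-executed add followed by a remove: nothing to run.
        return (prev_clock, None)
    # Fallback (assumption for this sketch): keep the earlier change.
    return (prev_clock, prev_type)

merged = reduce_change((1, REMOVED), (2, ADDED))
cancelled = reduce_change((1, ADDED), (2, REMOVED))
```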
1571 | 186 | def _reduce(self, previous, new): | 240 | def _reduce(self, previous, new): |
1572 | @@ -214,9 +268,9 @@ | |||
1573 | 214 | """ | 268 | """ |
1574 | 215 | # Determine the current members as of the change. | 269 | # Determine the current members as of the change. |
1575 | 216 | if change_type == ADDED: | 270 | if change_type == ADDED: |
1577 | 217 | self._members.append(unit_name) | 271 | self._context_members.append(unit_name) |
1578 | 218 | elif change_type == REMOVED: | 272 | elif change_type == REMOVED: |
1580 | 219 | self._members.remove(unit_name) | 273 | self._context_members.remove(unit_name) |
1581 | 220 | 274 | ||
1582 | 221 | # Assemble the change and hook execution context | 275 | # Assemble the change and hook execution context |
1583 | 222 | change = RelationChange( | 276 | change = RelationChange( |
1584 | 223 | 277 | ||
1585 | === modified file 'juju/hooks/tests/test_scheduler.py' | |||
1586 | --- juju/hooks/tests/test_scheduler.py 2011-12-12 01:56:05 +0000 | |||
1587 | +++ juju/hooks/tests/test_scheduler.py 2012-02-02 16:42:42 +0000 | |||
1588 | @@ -1,11 +1,18 @@ | |||
1589 | 1 | import logging | 1 | import logging |
1594 | 2 | 2 | import os | |
1595 | 3 | from twisted.internet.defer import inlineCallbacks | 3 | import yaml |
1596 | 4 | 4 | ||
1597 | 5 | from juju.hooks.scheduler import HookScheduler | 5 | from twisted.internet.defer import inlineCallbacks, fail, succeed |
1598 | 6 | |||
1599 | 7 | from juju.hooks.scheduler import HookScheduler, ADDED, MODIFIED, REMOVED | ||
1600 | 6 | from juju.state.hook import RelationChange | 8 | from juju.state.hook import RelationChange |
1601 | 7 | from juju.state.tests.test_service import ServiceStateManagerTestBase | 9 | from juju.state.tests.test_service import ServiceStateManagerTestBase |
1602 | 8 | 10 | ||
1603 | 11 | |||
1604 | 12 | class SomeError(Exception): | ||
1605 | 13 | pass | ||
1606 | 14 | |||
1607 | 15 | |||
1608 | 9 | class HookSchedulerTest(ServiceStateManagerTestBase): | 16 | class HookSchedulerTest(ServiceStateManagerTestBase): |
1609 | 10 | 17 | ||
1610 | 11 | @inlineCallbacks | 18 | @inlineCallbacks |
1611 | @@ -15,29 +22,49 @@ | |||
1612 | 15 | self.unit_relation = self.mocker.mock() | 22 | self.unit_relation = self.mocker.mock() |
1613 | 16 | self.executions = [] | 23 | self.executions = [] |
1614 | 17 | self.service = yield self.add_service_from_charm("wordpress") | 24 | self.service = yield self.add_service_from_charm("wordpress") |
1619 | 18 | self.scheduler = HookScheduler(self.client, | 25 | self.state_file = self.makeFile() |
1620 | 19 | self.collect_executor, | 26 | self.executor = self.collect_executor |
1621 | 20 | self.unit_relation, "", | 27 | self._scheduler = None |
1618 | 21 | unit_name="wordpress/0") | ||
1622 | 22 | self.log_stream = self.capture_logging( | 28 | self.log_stream = self.capture_logging( |
1623 | 23 | "hook.scheduler", level=logging.DEBUG) | 29 | "hook.scheduler", level=logging.DEBUG) |
1624 | 24 | 30 | ||
1625 | 31 | @property | ||
1626 | 32 | def scheduler(self): | ||
1627 | 33 | # Create lazily, so we can create with a state file if we want to, | ||
1628 | 34 | # and swap out collect_executor when helpful to do so. | ||
1629 | 35 | if self._scheduler is None: | ||
1630 | 36 | self._scheduler = HookScheduler( | ||
1631 | 37 | self.client, self.executor, self.unit_relation, "", | ||
1632 | 38 | "wordpress/0", self.state_file) | ||
1633 | 39 | return self._scheduler | ||
1634 | 40 | |||
1635 | 25 | def collect_executor(self, context, change): | 41 | def collect_executor(self, context, change): |
1636 | 26 | self.executions.append((context, change)) | 42 | self.executions.append((context, change)) |
1637 | 27 | 43 | ||
1638 | 44 | def write_single_unit_state(self): | ||
1639 | 45 | with open(self.state_file, "w") as f: | ||
1640 | 46 | f.write(yaml.dump({ | ||
1641 | 47 | "context_members": ["u-1"], | ||
1642 | 48 | "member_versions": {"u-1": 0}, | ||
1643 | 49 | "unit_ops": {}, | ||
1644 | 50 | "clock_units": {}, | ||
1645 | 51 | "clock_queue": [], | ||
1646 | 52 | "clock_sequence": 1})) | ||
1647 | 53 | |||
1648 | 28 | # Event reduction/coalescing cases | 54 | # Event reduction/coalescing cases |
1649 | 29 | def test_reduce_removed_added(self): | 55 | def test_reduce_removed_added(self): |
1650 | 30 | """ A remove event for a node followed by an add event, | 56 | """ A remove event for a node followed by an add event, |
1651 | 31 | results in a modify event. | 57 | results in a modify event. |
1652 | 32 | """ | 58 | """ |
1655 | 33 | self.scheduler.notify_change(old_units=["u-1"], new_units=[]) | 59 | self.write_single_unit_state() |
1656 | 34 | self.scheduler.notify_change(old_units=[], new_units=["u-1"]) | 60 | self.scheduler.cb_change_members(["u-1"], []) |
1657 | 61 | self.scheduler.cb_change_members([], ["u-1"]) | ||
1658 | 35 | self.scheduler.run() | 62 | self.scheduler.run() |
1659 | 36 | self.assertEqual(len(self.executions), 1) | 63 | self.assertEqual(len(self.executions), 1) |
1660 | 37 | self.assertEqual(self.executions[0][1].change_type, "modified") | 64 | self.assertEqual(self.executions[0][1].change_type, "modified") |
1661 | 38 | 65 | ||
1664 | 39 | output = ("relation change old:['u-1'], new:[], modified:()", | 66 | output = ("members changed: old=['u-1'], new=[]", |
1665 | 40 | "relation change old:[], new:['u-1'], modified:()", | 67 | "members changed: old=[], new=['u-1']", |
1666 | 41 | "start", | 68 | "start", |
1667 | 42 | "executing hook for u-1:modified\n") | 69 | "executing hook for u-1:modified\n") |
1668 | 43 | self.assertEqual(self.log_stream.getvalue(), "\n".join(output)) | 70 | self.assertEqual(self.log_stream.getvalue(), "\n".join(output)) |
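The reduction cases above (remove+add → modified, add+modify → joined, add+remove → no-op, modify+remove → departed) can be sketched as a small pair table. This is an illustrative sketch only, not the actual `HookScheduler` code; the constant values are assumptions drawn from the `change_type` strings asserted in these tests.

```python
# Hedged sketch of the change-coalescing rules these tests exercise.
# Constant values are guesses based on the asserted change_type strings;
# the real ADDED/MODIFIED/REMOVED live in juju.hooks.scheduler.
ADDED, MODIFIED, REMOVED = "joined", "modified", "departed"

# (pending op for a unit, incoming op) -> resulting pending op.
# None means the two operations cancel out entirely.
_REDUCE = {
    (REMOVED, ADDED): MODIFIED,    # remove then re-add looks like a modify
    (ADDED, MODIFIED): ADDED,      # a join absorbs a later modify
    (ADDED, REMOVED): None,        # add then remove is a no-op
    (MODIFIED, REMOVED): REMOVED,  # removal supersedes the modify
    (MODIFIED, MODIFIED): MODIFIED,
}


def coalesce(pending, incoming):
    """Reduce two queued ops for one unit to at most one op."""
    return _REDUCE.get((pending, incoming), incoming)
```

For example, `coalesce(REMOVED, ADDED)` yields `MODIFIED`, matching `test_reduce_removed_added` above.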
1669 | @@ -46,34 +73,34 @@ | |||
1670 | 46 | """A modify, remove, add event for a node results in a modify. | 73 | """A modify, remove, add event for a node results in a modify. |
1671 | 47 | An extra validation of the previous test. | 74 | An extra validation of the previous test. |
1672 | 48 | """ | 75 | """ |
1676 | 49 | self.scheduler.notify_change(modified=["u-1"]) | 76 | self.write_single_unit_state() |
1677 | 50 | self.scheduler.notify_change(old_units=["u-1"], new_units=[]) | 77 | self.scheduler.cb_change_settings([("u-1", 1)]) |
1678 | 51 | self.scheduler.notify_change(old_units=[], new_units=["u-1"]) | 78 | self.scheduler.cb_change_members(["u-1"], []) |
1679 | 79 | self.scheduler.cb_change_members([], ["u-1"]) | ||
1680 | 52 | self.scheduler.run() | 80 | self.scheduler.run() |
1681 | 53 | self.assertEqual(len(self.executions), 1) | 81 | self.assertEqual(len(self.executions), 1) |
1682 | 54 | self.assertEqual(self.executions[0][1].change_type, "modified") | 82 | self.assertEqual(self.executions[0][1].change_type, "modified") |
1683 | 55 | 83 | ||
1684 | 56 | def test_reduce_add_modify(self): | 84 | def test_reduce_add_modify(self): |
1685 | 57 | """An add and modify event for a node are coalesced to an add.""" | 85 | """An add and modify event for a node are coalesced to an add.""" |
1688 | 58 | self.scheduler.notify_change(old_units=[], new_units=["u-1"]) | 86 | self.scheduler.cb_change_members([], ["u-1"]) |
1689 | 59 | self.scheduler.notify_change(modified=["u-1"]) | 87 | self.scheduler.cb_change_settings([("u-1", 1)]) |
1690 | 60 | self.scheduler.run() | 88 | self.scheduler.run() |
1691 | 61 | self.assertEqual(len(self.executions), 1) | 89 | self.assertEqual(len(self.executions), 1) |
1692 | 62 | self.assertEqual(self.executions[0][1].change_type, "joined") | 90 | self.assertEqual(self.executions[0][1].change_type, "joined") |
1693 | 63 | 91 | ||
1694 | 64 | def test_reduce_add_remove(self): | 92 | def test_reduce_add_remove(self): |
1695 | 65 | """an add followed by a removal results in a noop.""" | 93 | """an add followed by a removal results in a noop.""" |
1698 | 66 | self.scheduler.notify_change(old_units=[], new_units=["u-1"]) | 94 | self.scheduler.cb_change_members([], ["u-1"]) |
1699 | 67 | self.scheduler.notify_change(old_units=["u-1"], new_units=[]) | 95 | self.scheduler.cb_change_members(["u-1"], []) |
1700 | 68 | self.scheduler.run() | 96 | self.scheduler.run() |
1701 | 69 | self.assertEqual(len(self.executions), 0) | 97 | self.assertEqual(len(self.executions), 0) |
1702 | 70 | 98 | ||
1703 | 71 | def test_reduce_modify_remove(self): | 99 | def test_reduce_modify_remove(self): |
1704 | 72 | """Modifying and then removing a node, results in just the removal.""" | 100 | """Modifying and then removing a node, results in just the removal.""" |
1709 | 73 | self.scheduler.notify_change(old_units=["u-1"], | 101 | self.write_single_unit_state() |
1710 | 74 | new_units=["u-1"], | 102 | self.scheduler.cb_change_settings([("u-1", 1)]) |
1711 | 75 | modified=["u-1"]) | 103 | self.scheduler.cb_change_members(["u-1"], []) |
1708 | 76 | self.scheduler.notify_change(old_units=["u-1"], new_units=[]) | ||
1712 | 77 | self.scheduler.run() | 104 | self.scheduler.run() |
1713 | 78 | self.assertEqual(len(self.executions), 1) | 105 | self.assertEqual(len(self.executions), 1) |
1714 | 79 | self.assertEqual(self.executions[0][1].change_type, "departed") | 106 | self.assertEqual(self.executions[0][1].change_type, "departed") |
1715 | @@ -82,15 +109,15 @@ | |||
1716 | 82 | """Multiple modifies get coalesced to a single modify.""" | 109 | """Multiple modifies get coalesced to a single modify.""" |
1717 | 83 | # simulate normal startup, the first notify will always be the existing | 110 | # simulate normal startup, the first notify will always be the existing |
1718 | 84 | # membership set. | 111 | # membership set. |
1720 | 85 | self.scheduler.notify_change(old_units=[], new_units=["u-1"]) | 112 | self.scheduler.cb_change_members([], ["u-1"]) |
1721 | 86 | self.scheduler.run() | 113 | self.scheduler.run() |
1722 | 87 | self.scheduler.stop() | 114 | self.scheduler.stop() |
1723 | 88 | self.assertEqual(len(self.executions), 1) | 115 | self.assertEqual(len(self.executions), 1) |
1724 | 89 | 116 | ||
1725 | 90 | # Now continue the modify/modify reduction. | 117 | # Now continue the modify/modify reduction. |
1729 | 91 | self.scheduler.notify_change(modified=["u-1"]) | 118 | self.scheduler.cb_change_settings([("u-1", 1)]) |
1730 | 92 | self.scheduler.notify_change(modified=["u-1"]) | 119 | self.scheduler.cb_change_settings([("u-1", 2)]) |
1731 | 93 | self.scheduler.notify_change(modified=["u-1"]) | 120 | self.scheduler.cb_change_settings([("u-1", 3)]) |
1732 | 94 | self.scheduler.run() | 121 | self.scheduler.run() |
1733 | 95 | 122 | ||
1734 | 96 | self.assertEqual(len(self.executions), 2) | 123 | self.assertEqual(len(self.executions), 2) |
1735 | @@ -112,20 +139,35 @@ | |||
1736 | 112 | self.assertFalse(self.scheduler.running) | 139 | self.assertFalse(self.scheduler.running) |
1737 | 113 | 140 | ||
1738 | 114 | @inlineCallbacks | 141 | @inlineCallbacks |
1739 | 142 | def test_run_requires_writable_state(self): | ||
1740 | 143 | # Induce lazy creation of scheduler, then break state file | ||
1741 | 144 | self.scheduler | ||
1742 | 145 | with open(self.state_file, "w"): | ||
1743 | 146 | pass | ||
1744 | 147 | os.chmod(self.state_file, 0) | ||
1745 | 148 | e = yield self.assertFailure(self.scheduler.run(), AssertionError) | ||
1746 | 149 | self.assertEquals(str(e), "%s is not writable!" % self.state_file) | ||
1747 | 150 | |||
1748 | 151 | def test_empty_state(self): | ||
1749 | 152 | with open(self.state_file, "w") as f: | ||
1750 | 153 | f.write(yaml.dump({})) | ||
1751 | 154 | |||
1752 | 155 | # Induce lazy creation to verify it can still survive | ||
1753 | 156 | self.scheduler | ||
1754 | 157 | |||
1755 | 158 | @inlineCallbacks | ||
1756 | 115 | def test_membership_visibility_per_change(self): | 159 | def test_membership_visibility_per_change(self): |
1757 | 116 | """Hooks are executed against changes, those changes are | 160 | """Hooks are executed against changes, those changes are |
1758 | 117 | associated to a temporal timestamp, however the changes | 161 | associated to a temporal timestamp, however the changes |
1759 | 118 | are scheduled for execution, and the state/time of the | 162 | are scheduled for execution, and the state/time of the |
1760 | 119 | world may have advanced, to present a logically consistent | 163 | world may have advanced, to present a logically consistent |
1763 | 120 | view, we try to gaurantee at a minimum, that hooks will | 164 | view, we try to guarantee at a minimum, that hooks will |
1764 | 121 | always see the membership of a relations it was at the | 165 | always see the membership of a relation as it was at the |
1765 | 122 | time of their associated change. | 166 | time of their associated change. |
1766 | 123 | """ | 167 | """ |
1772 | 124 | self.scheduler.notify_change( | 168 | self.scheduler.cb_change_members([], ["u-1", "u-2"]) |
1773 | 125 | old_units=[], new_units=["u-1", "u-2"]) | 169 | self.scheduler.cb_change_members(["u-1", "u-2"], ["u-2", "u-3"]) |
1774 | 126 | self.scheduler.notify_change( | 170 | self.scheduler.cb_change_settings([("u-2", 1)]) |
1770 | 127 | old_units=["u-1", "u-2"], new_units=["u-2", "u-3"]) | ||
1771 | 128 | self.scheduler.notify_change(modified=["u-2"]) | ||
1775 | 129 | 171 | ||
1776 | 130 | self.scheduler.run() | 172 | self.scheduler.run() |
1777 | 131 | self.scheduler.stop() | 173 | self.scheduler.stop() |
1778 | @@ -139,9 +181,8 @@ | |||
1779 | 139 | change_members = yield self.executions[0][0].get_members() | 181 | change_members = yield self.executions[0][0].get_members() |
1780 | 140 | self.assertEqual(change_members, ["u-2"]) | 182 | self.assertEqual(change_members, ["u-2"]) |
1781 | 141 | 183 | ||
1785 | 142 | self.scheduler.notify_change(modified=["u-2"]) | 184 | self.scheduler.cb_change_settings([("u-2", 2)]) |
1786 | 143 | self.scheduler.notify_change( | 185 | self.scheduler.cb_change_members(["u-2", "u-3"], ["u-2"]) |
1784 | 144 | old_units=["u-2", "u-3"], new_units=["u-2"]) | ||
1787 | 145 | self.scheduler.run() | 186 | self.scheduler.run() |
1788 | 146 | 187 | ||
1789 | 147 | self.assertEqual(len(self.executions), 4) | 188 | self.assertEqual(len(self.executions), 4) |
1790 | @@ -156,10 +197,17 @@ | |||
1791 | 156 | a hook wont see any 'active' members in a membership list, that | 197 | a hook wont see any 'active' members in a membership list, that |
1792 | 157 | it hasn't previously been given a notify of before. | 198 | it hasn't previously been given a notify of before. |
1793 | 158 | """ | 199 | """ |
1798 | 159 | self.scheduler.notify_change( | 200 | with open(self.state_file, "w") as f: |
1799 | 160 | old_units=["u-1", "u-2"], | 201 | f.write(yaml.dump({ |
1800 | 161 | new_units=["u-2", "u-3", "u-4"], | 202 | "context_members": ["u-1", "u-2"], |
1801 | 162 | modified=["u-2"]) | 203 | "member_versions": {"u-1": 0, "u-2": 0}, |
1802 | 204 | "unit_ops": {}, | ||
1803 | 205 | "clock_units": {}, | ||
1804 | 206 | "clock_queue": [], | ||
1805 | 207 | "clock_sequence": 1})) | ||
1806 | 208 | |||
1807 | 209 | self.scheduler.cb_change_members(["u-1", "u-2"], ["u-2", "u-3", "u-4"]) | ||
1808 | 210 | self.scheduler.cb_change_settings([("u-2", 1)]) | ||
1809 | 163 | 211 | ||
1810 | 164 | self.scheduler.run() | 212 | self.scheduler.run() |
1811 | 165 | self.scheduler.stop() | 213 | self.scheduler.stop() |
1812 | @@ -197,3 +245,172 @@ | |||
1813 | 197 | RelationChange("", "", "")) | 245 | RelationChange("", "", "")) |
1814 | 198 | members = yield context.get_members() | 246 | members = yield context.get_members() |
1815 | 199 | self.assertEqual(members, []) | 247 | self.assertEqual(members, []) |
1816 | 248 | |||
1817 | 249 | @inlineCallbacks | ||
1818 | 250 | def test_state_is_loaded(self): | ||
1819 | 251 | with open(self.state_file, "w") as f: | ||
1820 | 252 | f.write(yaml.dump({ | ||
1821 | 253 | "context_members": ["u-1", "u-2"], | ||
1822 | 254 | "member_versions": {"u-1": 5, "u-2": 2, "u-3": 0}, | ||
1823 | 255 | "unit_ops": {"u-1": (3, MODIFIED), "u-3": (4, ADDED)}, | ||
1824 | 256 | "clock_units": {3: ["u-1"], 4: ["u-3"]}, | ||
1825 | 257 | "clock_queue": [3, 4], | ||
1826 | 258 | "clock_sequence": 4})) | ||
1827 | 259 | |||
1828 | 260 | self.scheduler.run() | ||
1829 | 261 | while len(self.executions) < 2: | ||
1830 | 262 | yield self.poke_zk() | ||
1831 | 263 | self.scheduler.stop() | ||
1832 | 264 | |||
1833 | 265 | self.assertEqual(self.executions[0][1].change_type, "modified") | ||
1834 | 266 | members = yield self.executions[0][0].get_members() | ||
1835 | 267 | self.assertEqual(members, ["u-1", "u-2"]) | ||
1836 | 268 | |||
1837 | 269 | self.assertEqual(self.executions[1][1].change_type, "joined") | ||
1838 | 270 | members = yield self.executions[1][0].get_members() | ||
1839 | 271 | self.assertEqual(members, ["u-1", "u-2", "u-3"]) | ||
1840 | 272 | |||
1841 | 273 | with open(self.state_file) as f: | ||
1842 | 274 | state = yaml.load(f.read()) | ||
1843 | 275 | self.assertEquals(state, { | ||
1844 | 276 | "context_members": ["u-1", "u-2", "u-3"], | ||
1845 | 277 | "member_versions": {"u-1": 5, "u-2": 2, "u-3": 0}, | ||
1846 | 278 | "unit_ops": {}, | ||
1847 | 279 | "clock_units": {}, | ||
1848 | 280 | "clock_queue": [], | ||
1849 | 281 | "clock_sequence": 4}) | ||
1850 | 282 | |||
1851 | 283 | def test_state_is_stored(self): | ||
1852 | 284 | with open(self.state_file, "w") as f: | ||
1853 | 285 | f.write(yaml.dump({ | ||
1854 | 286 | "context_members": ["u-1", "u-2"], | ||
1855 | 287 | "member_versions": {"u-1": 0, "u-2": 2}, | ||
1856 | 288 | "unit_ops": {}, | ||
1857 | 289 | "clock_units": {}, | ||
1858 | 290 | "clock_queue": [], | ||
1859 | 291 | "clock_sequence": 7})) | ||
1860 | 292 | |||
1861 | 293 | self.scheduler.cb_change_members(["u-1", "u-2"], ["u-2", "u-3"]) | ||
1862 | 294 | self.scheduler.cb_change_settings([("u-2", 3)]) | ||
1863 | 295 | |||
1864 | 296 | # Add a stop instruction to the queue, which should *not* be saved. | ||
1865 | 297 | self.scheduler.stop() | ||
1866 | 298 | |||
1867 | 299 | with open(self.state_file) as f: | ||
1868 | 300 | state = yaml.load(f.read()) | ||
1869 | 301 | self.assertEquals(state, { | ||
1870 | 302 | "context_members": ["u-1", "u-2"], | ||
1871 | 303 | "member_versions": {"u-2": 3, "u-3": 0}, | ||
1872 | 304 | "unit_ops": {"u-1": (8, REMOVED), | ||
1873 | 305 | "u-2": (9, MODIFIED), | ||
1874 | 306 | "u-3": (8, ADDED)}, | ||
1875 | 307 | "clock_units": {8: ["u-3", "u-1"], 9: ["u-2"]}, | ||
1876 | 308 | "clock_queue": [8, 9], | ||
1877 | 309 | "clock_sequence": 9}) | ||
1878 | 310 | |||
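`test_state_is_stored` and the mid-tick tests below depend on the state file always holding a complete, consistent snapshot. A minimal sketch of that persistence discipline follows; juju serialises the scheduler state as YAML, but `json` stands in here to keep the example stdlib-only, and the helper names are assumptions, not the branch's API. The write-then-rename pattern is the point: a crash mid-write leaves the previous state file intact.

```python
import json
import os
import tempfile


def save_state(path, state):
    """Persist state atomically: dump to a temp file, then rename over path."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    # rename is atomic on POSIX: readers see the old or new file, never half.
    os.rename(tmp, path)


def load_state(path):
    with open(path) as f:
        return json.load(f)
```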
1879 | 311 | @inlineCallbacks | ||
1880 | 312 | def test_state_stored_after_tick(self): | ||
1881 | 313 | |||
1882 | 314 | def execute(context, change): | ||
1883 | 315 | self.execute_calls += 1 | ||
1884 | 316 | if self.execute_calls > 1: | ||
1885 | 317 | return fail(SomeError()) | ||
1886 | 318 | return succeed(None) | ||
1887 | 319 | self.execute_calls = 0 | ||
1888 | 320 | self.executor = execute | ||
1889 | 321 | |||
1890 | 322 | with open(self.state_file, "w") as f: | ||
1891 | 323 | f.write(yaml.dump({ | ||
1892 | 324 | "context_members": ["u-1", "u-2"], | ||
1893 | 325 | "member_versions": {"u-1": 1, "u-2": 0, "u-3": 0}, | ||
1894 | 326 | "unit_ops": {"u-1": (3, MODIFIED), "u-3": (4, ADDED)}, | ||
1895 | 327 | "clock_units": {3: ["u-1"], 4: ["u-3"]}, | ||
1896 | 328 | "clock_queue": [3, 4], | ||
1897 | 329 | "clock_sequence": 4})) | ||
1898 | 330 | |||
1899 | 331 | d = self.scheduler.run() | ||
1900 | 332 | while self.execute_calls < 2: | ||
1901 | 333 | yield self.poke_zk() | ||
1902 | 334 | yield self.assertFailure(d, SomeError) | ||
1903 | 335 | with open(self.state_file) as f: | ||
1904 | 336 | self.assertEquals(yaml.load(f.read()), { | ||
1905 | 337 | "context_members": ["u-1", "u-2"], | ||
1906 | 338 | "member_versions": {"u-1": 1, "u-2": 0, "u-3": 0}, | ||
1907 | 339 | "unit_ops": {"u-3": (4, ADDED)}, | ||
1908 | 340 | "clock_units": {4: ["u-3"]}, | ||
1909 | 341 | "clock_queue": [4], | ||
1910 | 342 | "clock_sequence": 4}) | ||
1911 | 343 | |||
1912 | 344 | @inlineCallbacks | ||
1913 | 345 | def test_state_not_stored_mid_tick(self): | ||
1914 | 346 | |||
1915 | 347 | def execute(context, change): | ||
1916 | 348 | self.execute_called = True | ||
1917 | 349 | return fail(SomeError()) | ||
1918 | 350 | self.execute_called = False | ||
1919 | 351 | self.executor = execute | ||
1920 | 352 | |||
1921 | 353 | initial_state = { | ||
1922 | 354 | "context_members": ["u-1", "u-2"], | ||
1923 | 355 | "member_versions": {"u-1": 1, "u-2": 0, "u-3": 0}, | ||
1924 | 356 | "unit_ops": {"u-1": (3, MODIFIED), "u-3": (4, ADDED)}, | ||
1925 | 357 | "clock_units": {3: ["u-1"], 4: ["u-3"]}, | ||
1926 | 358 | "clock_queue": [3, 4], | ||
1927 | 359 | "clock_sequence": 4} | ||
1928 | 360 | with open(self.state_file, "w") as f: | ||
1929 | 361 | f.write(yaml.dump(initial_state)) | ||
1930 | 362 | |||
1931 | 363 | d = self.scheduler.run() | ||
1932 | 364 | while not self.execute_called: | ||
1933 | 365 | yield self.poke_zk() | ||
1934 | 366 | yield self.assertFailure(d, SomeError) | ||
1935 | 367 | with open(self.state_file) as f: | ||
1936 | 368 | self.assertEquals(yaml.load(f.read()), initial_state) | ||
1937 | 369 | |||
1938 | 370 | def test_ignore_equal_settings_version(self): | ||
1939 | 371 | """ | ||
1940 | 372 | A modified event whose version is not greater than the latest known | ||
1941 | 373 | version for that unit will be ignored. | ||
1942 | 374 | """ | ||
1943 | 375 | self.write_single_unit_state() | ||
1944 | 376 | self.scheduler.cb_change_settings([("u-1", 0),]) | ||
1945 | 377 | self.scheduler.run() | ||
1946 | 378 | self.assertEquals(len(self.executions), 0) | ||
1947 | 379 | |||
1948 | 380 | def test_settings_version_0_on_add(self): | ||
1949 | 381 | """ | ||
1950 | 382 | When a unit is added, we assume its settings version to be 0, and | ||
1951 | 383 | therefore modified events with version 0 will be ignored. | ||
1952 | 384 | """ | ||
1953 | 385 | self.scheduler.cb_change_members([], ["u-1"]) | ||
1954 | 386 | self.scheduler.cb_change_settings([("u-1", 0),]) | ||
1955 | 387 | self.scheduler.run() | ||
1956 | 388 | self.assertEquals(len(self.executions), 1) | ||
1957 | 389 | self.assertEqual(self.executions[0][1].change_type, "joined") | ||
1958 | 390 | |||
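The two version tests above imply a simple gate: a modified event only schedules a hook when its settings version exceeds the highest version already recorded for that unit, and freshly joined units start at version 0. A hedged sketch of that rule (illustrative names, not the `HookScheduler` API):

```python
def should_fire(member_versions, unit, version):
    """True if this (unit, version) pair represents genuinely new settings.

    Records the new version as a side effect, mirroring the
    member_versions map seen in the state-file fixtures above.
    """
    last = member_versions.get(unit, 0)  # units are assumed to join at 0
    if version <= last:
        return False
    member_versions[unit] = version
    return True
```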
1959 | 391 | def test_membership_timeslip(self): | ||
1960 | 392 | """ | ||
1961 | 393 | Adds and removes are calculated based on known membership state, NOT | ||
1962 | 394 | on old_units. | ||
1963 | 395 | """ | ||
1964 | 396 | with open(self.state_file, "w") as f: | ||
1965 | 397 | f.write(yaml.dump({ | ||
1966 | 398 | "context_members": ["u-1", "u-2"], | ||
1967 | 399 | "member_versions": {"u-1": 0, "u-2": 0}, | ||
1968 | 400 | "unit_ops": {}, | ||
1969 | 401 | "clock_units": {}, | ||
1970 | 402 | "clock_queue": [], | ||
1971 | 403 | "clock_sequence": 4})) | ||
1972 | 404 | |||
1973 | 405 | self.scheduler.cb_change_members(["u-2"], ["u-3", "u-4"]) | ||
1974 | 406 | self.scheduler.run() | ||
1975 | 407 | |||
1976 | 408 | output = ( | ||
1977 | 409 | "members changed: old=['u-2'], new=['u-3', 'u-4']", | ||
1978 | 410 | "old does not match last recorded units: ['u-1', 'u-2']", | ||
1979 | 411 | "start", | ||
1980 | 412 | "executing hook for u-3:joined", | ||
1981 | 413 | "executing hook for u-4:joined", | ||
1982 | 414 | "executing hook for u-1:departed", | ||
1983 | 415 | "executing hook for u-2:departed\n") | ||
1984 | 416 | self.assertEqual(self.log_stream.getvalue(), "\n".join(output)) | ||
1985 | 200 | 417 | ||
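The `test_membership_timeslip` case above captures the key restore-time behaviour: joins and departures are computed against the last *recorded* membership, not against the (possibly stale) `old_units` reported by the watch, so events cannot be lost across an agent restart. A minimal sketch of that reconciliation, with assumed names:

```python
def diff_members(recorded, new_units):
    """Compute joins/departures relative to recorded membership state."""
    added = [u for u in new_units if u not in recorded]
    removed = [u for u in recorded if u not in new_units]
    return added, removed
```

With the fixture above (`recorded=["u-1", "u-2"]`, `new=["u-3", "u-4"]`) this yields joins for u-3 and u-4 and departures for u-1 and u-2, matching the logged hook order.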
1986 | === modified file 'juju/lib/lxc/tests/test_lxc.py' | |||
1987 | --- juju/lib/lxc/tests/test_lxc.py 2011-10-01 00:04:14 +0000 | |||
1988 | +++ juju/lib/lxc/tests/test_lxc.py 2012-02-02 16:42:42 +0000 | |||
1989 | @@ -10,10 +10,17 @@ | |||
1990 | 10 | from juju.lib.testing import TestCase | 10 | from juju.lib.testing import TestCase |
1991 | 11 | 11 | ||
1992 | 12 | 12 | ||
1997 | 13 | def run_lxc_tests(): | 13 | def skip_sudo_tests(): |
1998 | 14 | if os.environ.get("TEST_LXC"): | 14 | if os.environ.get("TEST_SUDO"): |
1999 | 15 | return None | 15 | # Get user's password *now*, if needed, not mid-run |
2000 | 16 | return "TEST_LXC=1 to include lxc tests" | 16 | os.system("sudo false") |
2001 | 17 | return False | ||
2002 | 18 | return "TEST_SUDO=1 to include tests which use sudo (including lxc tests)" | ||
2003 | 19 | |||
2004 | 20 | |||
2005 | 21 | def uses_sudo(f): | ||
2006 | 22 | f.skip = skip_sudo_tests() | ||
2007 | 23 | return f | ||
2008 | 17 | 24 | ||
2009 | 18 | 25 | ||
2010 | 19 | DATA_PATH = os.path.abspath( | 26 | DATA_PATH = os.path.abspath( |
2011 | @@ -23,9 +30,9 @@ | |||
2012 | 23 | DEFAULT_CONTAINER = "lxc_test" | 30 | DEFAULT_CONTAINER = "lxc_test" |
2013 | 24 | 31 | ||
2014 | 25 | 32 | ||
2015 | 33 | @uses_sudo | ||
2016 | 26 | class LXCTest(TestCase): | 34 | class LXCTest(TestCase): |
2017 | 27 | timeout = 240 | 35 | timeout = 240 |
2018 | 28 | skip = run_lxc_tests() | ||
2019 | 29 | 36 | ||
2020 | 30 | def setUp(self): | 37 | def setUp(self): |
2021 | 31 | self.config = self.make_config() | 38 | self.config = self.make_config() |
2022 | 32 | 39 | ||
2023 | === added directory 'juju/lib/tests/data' | |||
2024 | === added file 'juju/lib/tests/data/test_basic_install' | |||
2025 | --- juju/lib/tests/data/test_basic_install 1970-01-01 00:00:00 +0000 | |||
2026 | +++ juju/lib/tests/data/test_basic_install 2012-02-02 16:42:42 +0000 | |||
2027 | @@ -0,0 +1,10 @@ | |||
2028 | 1 | description "uninteresting service" | ||
2029 | 2 | author "Juju Team <juju@lists.ubuntu.com>" | ||
2030 | 3 | |||
2031 | 4 | start on runlevel [2345] | ||
2032 | 5 | stop on runlevel [!2345] | ||
2033 | 6 | respawn | ||
2034 | 7 | |||
2035 | 8 | |||
2036 | 9 | |||
2037 | 10 | exec /bin/false >> /tmp/some-name.output 2>&1 | ||
2038 | 0 | 11 | ||
2039 | === added file 'juju/lib/tests/data/test_less_basic_install' | |||
2040 | --- juju/lib/tests/data/test_less_basic_install 1970-01-01 00:00:00 +0000 | |||
2041 | +++ juju/lib/tests/data/test_less_basic_install 2012-02-02 16:42:42 +0000 | |||
2042 | @@ -0,0 +1,11 @@ | |||
2043 | 1 | description "pew pew pew blam" | ||
2044 | 2 | author "Juju Team <juju@lists.ubuntu.com>" | ||
2045 | 3 | |||
2046 | 4 | start on runlevel [2345] | ||
2047 | 5 | stop on runlevel [!2345] | ||
2048 | 6 | respawn | ||
2049 | 7 | |||
2050 | 8 | env FOO="bar baz qux" | ||
2051 | 9 | env PEW="pew" | ||
2052 | 10 | |||
2053 | 11 | exec /bin/deathstar --ignore-ewoks endor >> /somewhere/else 2>&1 | ||
2054 | 0 | 12 | ||
2055 | === added file 'juju/lib/tests/data/test_standard_install' | |||
2056 | --- juju/lib/tests/data/test_standard_install 1970-01-01 00:00:00 +0000 | |||
2057 | +++ juju/lib/tests/data/test_standard_install 2012-02-02 16:42:42 +0000 | |||
2058 | @@ -0,0 +1,10 @@ | |||
2059 | 1 | description "a wretched hive of scum and villainy" | ||
2060 | 2 | author "Juju Team <juju@lists.ubuntu.com>" | ||
2061 | 3 | |||
2062 | 4 | start on runlevel [2345] | ||
2063 | 5 | stop on runlevel [!2345] | ||
2064 | 6 | respawn | ||
2065 | 7 | |||
2066 | 8 | env LIGHTSABER="civilised weapon" | ||
2067 | 9 | |||
2068 | 10 | exec /bin/imagination-failure --no-ideas >> /tmp/some-name.output 2>&1 | ||
2069 | 0 | 11 | ||
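The three data files above are expected renderings of an upstart job. A hedged sketch of how such a conf could be produced from the service's description, command, and environment follows; the template string and helper are assumptions for illustration, not the contents of `juju/lib/upstart.py`.

```python
UPSTART_TEMPLATE = """\
description "%(description)s"
author "Juju Team <juju@lists.ubuntu.com>"

start on runlevel [2345]
stop on runlevel [!2345]
respawn

%(environ)s

exec %(command)s >> %(output)s 2>&1
"""


def render_job(description, command, output, environ=None):
    """Render an upstart job conf in the shape of the fixtures above."""
    env_lines = "\n".join(
        'env %s="%s"' % item for item in sorted((environ or {}).items()))
    return UPSTART_TEMPLATE % {
        "description": description, "command": command,
        "output": output, "environ": env_lines}
```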
2070 | === modified file 'juju/lib/tests/test_statemachine.py' | |||
2071 | --- juju/lib/tests/test_statemachine.py 2011-09-15 18:50:23 +0000 | |||
2072 | +++ juju/lib/tests/test_statemachine.py 2012-02-02 16:42:42 +0000 | |||
2073 | @@ -121,10 +121,14 @@ | |||
2074 | 121 | workflow_state = AttributeWorkflowState(workflow) | 121 | workflow_state = AttributeWorkflowState(workflow) |
2075 | 122 | current_state = yield workflow_state.get_state() | 122 | current_state = yield workflow_state.get_state() |
2076 | 123 | self.assertEqual(current_state, None) | 123 | self.assertEqual(current_state, None) |
2077 | 124 | current_vars = yield workflow_state.get_state_variables() | ||
2078 | 125 | self.assertEqual(current_vars, {}) | ||
2079 | 124 | 126 | ||
2080 | 125 | yield workflow_state.set_state("started") | 127 | yield workflow_state.set_state("started") |
2081 | 126 | current_state = yield workflow_state.get_state() | 128 | current_state = yield workflow_state.get_state() |
2082 | 127 | self.assertEqual(current_state, "started") | 129 | self.assertEqual(current_state, "started") |
2083 | 130 | current_vars = yield workflow_state.get_state_variables() | ||
2084 | 131 | self.assertEqual(current_vars, {}) | ||
2085 | 128 | 132 | ||
2086 | 129 | @inlineCallbacks | 133 | @inlineCallbacks |
2087 | 130 | def test_state_fire_transition(self): | 134 | def test_state_fire_transition(self): |
2088 | @@ -333,3 +337,13 @@ | |||
2089 | 333 | 337 | ||
2090 | 334 | self.assertFailure(workflow_state.transition_state("unknown"), | 338 | self.assertFailure(workflow_state.transition_state("unknown"), |
2091 | 335 | InvalidStateError) | 339 | InvalidStateError) |
2092 | 340 | |||
2093 | 341 | @inlineCallbacks | ||
2094 | 342 | def test_load_bad_state(self): | ||
2095 | 343 | class BadLoadWorkflowState(WorkflowState): | ||
2096 | 344 | def _load(self): | ||
2097 | 345 | return succeed({"some": "other-data"}) | ||
2098 | 346 | |||
2099 | 347 | workflow = BadLoadWorkflowState(Workflow()) | ||
2100 | 348 | yield self.assertFailure(workflow.get_state(), KeyError) | ||
2101 | 349 | yield self.assertFailure(workflow.get_state_variables(), KeyError) | ||
2102 | 336 | 350 | ||
2103 | === added file 'juju/lib/tests/test_upstart.py' | |||
2104 | --- juju/lib/tests/test_upstart.py 1970-01-01 00:00:00 +0000 | |||
2105 | +++ juju/lib/tests/test_upstart.py 2012-02-02 16:42:42 +0000 | |||
2106 | @@ -0,0 +1,339 @@ | |||
2107 | 1 | import os | ||
2108 | 2 | |||
2109 | 3 | from twisted.internet.defer import inlineCallbacks, succeed | ||
2110 | 4 | |||
2111 | 5 | from juju.errors import ServiceError | ||
2112 | 6 | from juju.lib.mocker import ANY, KWARGS | ||
2113 | 7 | from juju.lib.testing import TestCase | ||
2114 | 8 | from juju.lib.upstart import UpstartService | ||
2115 | 9 | |||
2116 | 10 | |||
2117 | 11 | DATA_DIR = os.path.join(os.path.abspath(os.path.dirname(__file__)), "data") | ||
2118 | 12 | |||
2119 | 13 | |||
2120 | 14 | class UpstartServiceTest(TestCase): | ||
2121 | 15 | |||
2122 | 16 | @inlineCallbacks | ||
2123 | 17 | def setUp(self): | ||
2124 | 18 | yield super(UpstartServiceTest, self).setUp() | ||
2125 | 19 | self.init_dir = self.makeDir() | ||
2126 | 20 | self.conf = os.path.join(self.init_dir, "some-name.conf") | ||
2127 | 21 | self.output = "/tmp/some-name.output" | ||
2128 | 22 | self.patch(UpstartService, "init_dir", self.init_dir) | ||
2129 | 23 | self.service = UpstartService("some-name") | ||
2130 | 24 | |||
2131 | 25 | def setup_service(self): | ||
2132 | 26 | self.service.set_description("a wretched hive of scum and villainy") | ||
2133 | 27 | self.service.set_command("/bin/imagination-failure --no-ideas") | ||
2134 | 28 | self.service.set_environ({"LIGHTSABER": "civilised weapon"}) | ||
2135 | 29 | |||
2136 | 30 | def setup_mock(self): | ||
2137 | 31 | self.check_call = self.mocker.replace("subprocess.check_call") | ||
2138 | 32 | self.getProcessOutput = self.mocker.replace( | ||
2139 | 33 | "twisted.internet.utils.getProcessOutput") | ||
2140 | 34 | |||
2141 | 35 | def mock_status(self, result): | ||
2142 | 36 | self.getProcessOutput("/sbin/status", ["some-name"]) | ||
2143 | 37 | self.mocker.result(result) | ||
2144 | 38 | |||
2145 | 39 | def mock_call(self, args, output=None): | ||
2146 | 40 | self.check_call(args, KWARGS) | ||
2147 | 41 | if output is None: | ||
2148 | 42 | self.mocker.result(0) | ||
2149 | 43 | else: | ||
2150 | 44 | def write(ANY, **_): | ||
2151 | 45 | with open(self.output, "w") as f: | ||
2152 | 46 | f.write(output) | ||
2153 | 47 | self.mocker.call(write) | ||
2154 | 48 | |||
2155 | 49 | def mock_start(self, output=None): | ||
2156 | 50 | self.mock_call(("/sbin/start", "some-name"), output) | ||
2157 | 51 | |||
2158 | 52 | def mock_stop(self): | ||
2159 | 53 | self.mock_call(("/sbin/stop", "some-name")) | ||
2160 | 54 | |||
2161 | 55 | def mock_check_success(self): | ||
2162 | 56 | for _ in range(5): | ||
2163 | 57 | self.mock_status(succeed("blah start/running blah 12345")) | ||
2164 | 58 | |||
2165 | 59 | def mock_check_unstable(self): | ||
2166 | 60 | for _ in range(4): | ||
2167 | 61 | self.mock_status(succeed("blah start/running blah 12345")) | ||
2168 | 62 | self.mock_status(succeed("blah start/running blah 12346")) | ||
2169 | 63 | |||
2170 | 64 | def mock_check_not_running(self): | ||
2171 | 65 | self.mock_status(succeed("blah")) | ||
2172 | 66 | |||
2173 | 67 | def write_dummy_conf(self): | ||
+        with open(self.conf, "w") as f:
+            f.write("dummy")
+
+    def assert_dummy_conf(self):
+        with open(self.conf) as f:
+            self.assertEquals(f.read(), "dummy")
+
+    def assert_no_conf(self):
+        self.assertFalse(os.path.exists(self.conf))
+
+    def assert_conf(self, name="test_standard_install"):
+        with open(os.path.join(DATA_DIR, name)) as expected:
+            with open(self.conf) as actual:
+                self.assertEquals(actual.read(), expected.read())
+
+    def test_is_installed(self):
+        """Check is_installed depends on conf file existence"""
+        self.assertFalse(self.service.is_installed())
+        self.write_dummy_conf()
+        self.assertTrue(self.service.is_installed())
+
+    def test_init_dir(self):
+        """
+        Check is_installed still works when init_dir specified explicitly
+        """
+        self.patch(UpstartService, "init_dir", "/BAD/PATH")
+        self.service = UpstartService("some-name", init_dir=self.init_dir)
+        self.setup_service()
+
+        self.assertFalse(self.service.is_installed())
+        self.write_dummy_conf()
+        self.assertTrue(self.service.is_installed())
+
+    @inlineCallbacks
+    def test_is_running(self):
+        """
+        Check is_running interprets status output (when service is installed)
+        """
+        self.setup_mock()
+        self.mock_status(succeed("blah stop/waiting blah"))
+        self.mock_status(succeed("blah blob/gibbering blah"))
+        self.mock_status(succeed("blah start/running blah 12345"))
+        self.mocker.replay()
+
+        # Won't hit status; conf is not installed
+        self.assertFalse((yield self.service.is_running()))
+        self.write_dummy_conf()
+
+        # These 3 calls correspond to the first 3 mock_status calls above
+        self.assertFalse((yield self.service.is_running()))
+        self.assertFalse((yield self.service.is_running()))
+        self.assertTrue((yield self.service.is_running()))
+
+    @inlineCallbacks
+    def test_is_stable_yes(self):
+        self.setup_mock()
+        self.mock_check_success()
+        self.mocker.replay()
+
+        self.write_dummy_conf()
+        self.assertTrue((yield self.service.is_stable()))
+
+    @inlineCallbacks
+    def test_is_stable_no(self):
+        self.setup_mock()
+        self.mock_check_unstable()
+        self.mocker.replay()
+
+        self.write_dummy_conf()
+        self.assertFalse((yield self.service.is_stable()))
+
+    @inlineCallbacks
+    def test_is_stable_not_running(self):
+        self.setup_mock()
+        self.mock_check_not_running()
+        self.mocker.replay()
+
+        self.write_dummy_conf()
+        self.assertFalse((yield self.service.is_stable()))
+
+    @inlineCallbacks
+    def test_is_stable_not_even_installed(self):
+        self.assertFalse((yield self.service.is_stable()))
+
+    @inlineCallbacks
+    def test_get_pid(self):
+        """
+        Check get_pid interprets status output (when service is installed)
+        """
+        self.setup_mock()
+        self.mock_status(succeed("blah stop/waiting blah"))
+        self.mock_status(succeed("blah blob/gibbering blah"))
+        self.mock_status(succeed("blah start/running blah 12345"))
+        self.mocker.replay()
+
+        # Won't hit status; conf is not installed
+        self.assertEquals((yield self.service.get_pid()), None)
+        self.write_dummy_conf()
+
+        # These 3 calls correspond to the first 3 mock_status calls above
+        self.assertEquals((yield self.service.get_pid()), None)
+        self.assertEquals((yield self.service.get_pid()), None)
+        self.assertEquals((yield self.service.get_pid()), 12345)
+
+    @inlineCallbacks
+    def test_basic_install(self):
+        """Check a simple UpstartService writes expected conf file"""
+        e = yield self.assertFailure(self.service.install(), ServiceError)
+        self.assertEquals(str(e), "Cannot render .conf: no description set")
+        self.service.set_description("uninteresting service")
+        e = yield self.assertFailure(self.service.install(), ServiceError)
+        self.assertEquals(str(e), "Cannot render .conf: no command set")
+        self.service.set_command("/bin/false")
+        yield self.service.install()
+
+        self.assert_conf("test_basic_install")
+
+    @inlineCallbacks
+    def test_less_basic_install(self):
+        """Check conf for a different UpstartService (which sets an env var)"""
+        self.service.set_description("pew pew pew blam")
+        self.service.set_command("/bin/deathstar --ignore-ewoks endor")
+        self.service.set_environ({"FOO": "bar baz qux", "PEW": "pew"})
+        self.service.set_output_path("/somewhere/else")
+        yield self.service.install()
+
+        self.assert_conf("test_less_basic_install")
+
+    def test_install_via_script(self):
+        """Check that the output-as-script form does the right thing"""
+        self.setup_service()
+        install, start = self.service.get_cloud_init_commands()
+
+        os.system(install)
+        self.assert_conf()
+        self.assertEquals(start, "/sbin/start some-name")
+
+    @inlineCallbacks
+    def test_start_not_installed(self):
+        """Check that .start() also installs if necessary"""
+        self.setup_mock()
+        self.mock_status(succeed("blah stop/waiting blah"))
+        self.mock_start()
+        self.mock_check_success()
+        self.mocker.replay()
+
+        self.setup_service()
+        yield self.service.start()
+        self.assert_conf()
+
+    @inlineCallbacks
+    def test_start_not_started_stable(self):
+        """Check that .start() starts if stopped, and checks for stable pid"""
+        self.write_dummy_conf()
+        self.setup_mock()
+        self.mock_status(succeed("blah stop/waiting blah"))
+        self.mock_start("ignored")
+        self.mock_check_success()
+        self.mocker.replay()
+
+        self.setup_service()
+        yield self.service.start()
+        self.assert_dummy_conf()
+
+    @inlineCallbacks
+    def test_start_not_started_unstable(self):
+        """Check that .start() starts if stopped, and raises on unstable pid"""
+        self.write_dummy_conf()
+        self.setup_mock()
+        self.mock_status(succeed("blah stop/waiting blah"))
+        self.mock_start("kangaroo")
+        self.mock_check_unstable()
+        self.mocker.replay()
+
+        self.setup_service()
+        e = yield self.assertFailure(self.service.start(), ServiceError)
+        self.assertEquals(
+            str(e), "Failed to start job some-name; got output:\nkangaroo")
+        self.assert_dummy_conf()
+
+    @inlineCallbacks
+    def test_start_not_started_failure(self):
+        """Check that .start() starts if stopped, and raises on no pid"""
+        self.write_dummy_conf()
+        self.setup_mock()
+        self.mock_status(succeed("blah stop/waiting blah"))
+        self.mock_start()
+        self.mock_check_not_running()
+        self.mocker.replay()
+
+        self.setup_service()
+        e = yield self.assertFailure(self.service.start(), ServiceError)
+        self.assertEquals(
+            str(e), "Failed to start job some-name; no output detected")
+        self.assert_dummy_conf()
+
+    @inlineCallbacks
+    def test_start_started(self):
+        """Check that .start() does nothing if already running"""
+        self.write_dummy_conf()
+        self.setup_mock()
+        self.mock_status(succeed("blah start/running blah 12345"))
+        self.mocker.replay()
+
+        self.setup_service()
+        yield self.service.start()
+        self.assert_dummy_conf()
+
+    @inlineCallbacks
+    def test_destroy_not_installed(self):
+        """Check .destroy() does nothing if not installed"""
+        yield self.service.destroy()
+        self.assert_no_conf()
+
+    @inlineCallbacks
+    def test_destroy_not_started(self):
+        """Check .destroy just deletes conf if not running"""
+        self.write_dummy_conf()
+        self.setup_mock()
+        self.mock_status(succeed("blah stop/waiting blah"))
+        self.mocker.replay()
+
+        yield self.service.destroy()
+        self.assert_no_conf()
+
+    @inlineCallbacks
+    def test_destroy_started(self):
+        """Check .destroy() stops running service and deletes conf file"""
+        self.write_dummy_conf()
+        self.setup_mock()
+        self.mock_status(succeed("blah start/running blah 54321"))
+        self.mock_stop()
+        self.mocker.replay()
+
+        yield self.service.destroy()
+        self.assert_no_conf()
+
+    @inlineCallbacks
+    def test_use_sudo(self):
+        """Check that expected commands are generated when use_sudo is set"""
+        self.setup_mock()
+        self.service = UpstartService("some-name", use_sudo=True)
+        self.setup_service()
+        with open(self.output, "w") as f:
+            f.write("clear this file out...")
+
+        def verify_cp(args, **kwargs):
+            sudo, cp, src, dst = args
+            self.assertEquals(sudo, "sudo")
+            self.assertEquals(cp, "cp")
+            with open(os.path.join(DATA_DIR, "test_standard_install")) as exp:
+                with open(src) as actual:
+                    self.assertEquals(actual.read(), exp.read())
+            self.assertEquals(dst, self.conf)
+            self.write_dummy_conf()
+
+        self.check_call(ANY, KWARGS)
+        self.mocker.call(verify_cp)
+        self.mock_call(("sudo", "rm", self.output))
+        self.mock_call(("sudo", "chmod", "644", self.conf))
+        self.mock_status(succeed("blah stop/waiting blah"))
+        self.mock_call(("sudo", "/sbin/start", "some-name"))
+        # 5 for initial stability check; 1 for final do-we-need-to-stop check
+        for _ in range(6):
+            self.mock_status(succeed("blah start/running blah 12345"))
+        self.mock_call(("sudo", "/sbin/stop", "some-name"))
+        self.mock_call(("sudo", "rm", self.conf))
+        self.mock_call(("sudo", "rm", self.output))
+
+        self.mocker.replay()
+        yield self.service.start()
+        yield self.service.destroy()
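The `test_is_running` and `test_get_pid` cases above exercise the same rule: a pid is only usable when the `status` output contains `start/running`, and it is the last space-separated token of that line. A standalone sketch of that parsing logic (the function name `parse_upstart_status` is illustrative, not part of the branch):

```python
def parse_upstart_status(status):
    """Return the pid if the job is running, else None.

    Mirrors the checks used by UpstartService.get_pid in this branch:
    anything other than "start/running" yields no pid, and the pid is
    the final space-separated token of the status line.
    """
    if "start/running" not in status:
        return None
    return int(status.split(" ")[-1])

# Matches the fixtures used in the tests above:
#   "blah stop/waiting blah"                        -> None
#   "blah blob/gibbering blah"                      -> None
#   "juju-wordpress-0 start/running, process 12345" -> 12345
```

Note that this (like the branch's `get_pid`) trusts the trailing token to be numeric whenever `start/running` appears; a malformed status line would raise `ValueError` rather than return `None`.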
=== added file 'juju/lib/upstart.py'
--- juju/lib/upstart.py	1970-01-01 00:00:00 +0000
+++ juju/lib/upstart.py	2012-02-02 16:42:42 +0000
@@ -0,0 +1,166 @@
+import os
+import subprocess
+from tempfile import NamedTemporaryFile
+
+from twisted.internet.defer import inlineCallbacks, returnValue
+from twisted.internet.threads import deferToThread
+from twisted.internet.utils import getProcessOutput
+
+from juju.errors import ServiceError
+from juju.lib.twistutils import sleep
+
+
+_CONF_TEMPLATE = """\
+description "%s"
+author "Juju Team <juju@lists.ubuntu.com>"
+
+start on runlevel [2345]
+stop on runlevel [!2345]
+respawn
+
+%s
+
+exec %s >> %s 2>&1
+"""
+
+
+def _silent_check_call(args):
+    with open(os.devnull, "w") as f:
+        return subprocess.check_call(
+            args, stdout=f.fileno(), stderr=f.fileno())
+
+
+class UpstartService(object):
+
+    # on class for ease of testing
+    init_dir = "/etc/init"
+
+    def __init__(self, name, init_dir=None, use_sudo=False):
+        self._name = name
+        if init_dir is not None:
+            self.init_dir = init_dir
+        self._use_sudo = use_sudo
+        self._output_path = None
+        self._description = None
+        self._environ = {}
+        self._command = None
+
+    @property
+    def _conf_path(self):
+        return os.path.join(
+            self.init_dir, "%s.conf" % self._name)
+
+    @property
+    def output_path(self):
+        if self._output_path is not None:
+            return self._output_path
+        return "/tmp/%s.output" % self._name
+
+    def set_description(self, description):
+        self._description = description
+
+    def set_environ(self, environ):
+        self._environ = environ
+
+    def set_command(self, command):
+        self._command = command
+
+    def set_output_path(self, path):
+        self._output_path = path
+
+    @inlineCallbacks
+    def _trash_output(self):
+        if os.path.exists(self.output_path):
+            # Just using os.unlink will fail when we're running TEST_SUDO tests
+            # which hit this code path (because root will own self.output_path)
+            yield self._call("rm", self.output_path)
+
+    def _render(self):
+        if self._description is None:
+            raise ServiceError("Cannot render .conf: no description set")
+        if self._command is None:
+            raise ServiceError("Cannot render .conf: no command set")
+        return _CONF_TEMPLATE % (
+            self._description,
+            "\n".join('env %s="%s"' % kv
+                      for kv in sorted(self._environ.items())),
+            self._command,
+            self.output_path)
+
+    def _call(self, *args):
+        if self._use_sudo:
+            args = ("sudo",) + args
+        return deferToThread(_silent_check_call, args)
+
+    def get_cloud_init_commands(self):
+        return ["cat >> %s <<EOF\n%sEOF\n" % (self._conf_path, self._render()),
+                "/sbin/start %s" % self._name]
+
+    @inlineCallbacks
+    def install(self):
+        with NamedTemporaryFile() as f:
+            f.write(self._render())
+            f.flush()
+            yield self._call("cp", f.name, self._conf_path)
+            yield self._call("chmod", "644", self._conf_path)
+
+    @inlineCallbacks
+    def start(self):
+        if not self.is_installed():
+            yield self.install()
+        if (yield self.is_running()):
+            return
+        yield self._trash_output()
+        yield self._call("/sbin/start", self._name)
+        if (yield self.is_stable()):
+            return
+
+        output = None
+        if os.path.exists(self.output_path):
+            with open(self.output_path) as f:
+                output = f.read()
+        if not output:
+            raise ServiceError(
+                "Failed to start job %s; no output detected" % self._name)
+        raise ServiceError(
+            "Failed to start job %s; got output:\n%s" % (self._name, output))
+
+    @inlineCallbacks
+    def destroy(self):
+        if (yield self.is_running()):
+            yield self._call("/sbin/stop", self._name)
+        if self.is_installed():
+            yield self._call("rm", self._conf_path)
+        yield self._trash_output()
+
+    @inlineCallbacks
+    def get_pid(self):
+        if not self.is_installed():
+            returnValue(None)
+        status = yield getProcessOutput("/sbin/status", [self._name])
+        if "start/running" not in status:
+            returnValue(None)
+        pid = status.split(" ")[-1]
+        returnValue(int(pid))
+
+    @inlineCallbacks
+    def is_running(self):
+        pid = yield self.get_pid()
+        returnValue(pid is not None)
+
+    @inlineCallbacks
+    def is_stable(self):
+        """Does the process continue to run with the same pid?
+
+        (5 times in a row, with a gap of 0.1s between each check)
+        """
+        pid = yield self.get_pid()
+        if pid is None:
+            returnValue(False)
+        for _ in range(4):
+            yield sleep(0.1)
+            if pid != (yield self.get_pid()):
+                returnValue(False)
+        returnValue(True)
+
+    def is_installed(self):
+        return os.path.exists(self._conf_path)
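The `.conf` generation added in `upstart.py` is plain `%`-substitution into `_CONF_TEMPLATE`: description, sorted `env` lines, command, and output path. A self-contained sketch of that rendering, reusing the template from the diff (the helper name `render_conf` and the sample values are illustrative, taken from the `test_less_basic_install` fixture):

```python
# Same template as juju/lib/upstart.py's _CONF_TEMPLATE in this branch.
CONF_TEMPLATE = """\
description "%s"
author "Juju Team <juju@lists.ubuntu.com>"

start on runlevel [2345]
stop on runlevel [!2345]
respawn

%s

exec %s >> %s 2>&1
"""


def render_conf(description, environ, command, output_path):
    # env vars are emitted in sorted order, matching UpstartService._render
    env_lines = "\n".join(
        'env %s="%s"' % kv for kv in sorted(environ.items()))
    return CONF_TEMPLATE % (description, env_lines, command, output_path)


conf = render_conf(
    "pew pew pew blam",
    {"FOO": "bar baz qux", "PEW": "pew"},
    "/bin/deathstar --ignore-ewoks endor",
    "/somewhere/else")
```

Sorting the environment keys is what makes the rendered file deterministic, so the tests can compare it byte-for-byte against the checked-in data files.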
=== modified file 'juju/machine/tests/test_unit_deployment.py'
--- juju/machine/tests/test_unit_deployment.py	2011-10-05 13:59:44 +0000
+++ juju/machine/tests/test_unit_deployment.py	2012-02-02 16:42:42 +0000
@@ -3,25 +3,21 @@
 """
 import logging
 import os
-import sys
+import subprocess
 
-from twisted.internet.protocol import ProcessProtocol
 from twisted.internet.defer import inlineCallbacks, succeed
 
-import juju
 from juju.charm import get_charm_from_path
 from juju.charm.tests.test_repository import RepositoryTestBase
 from juju.lib.lxc import LXCContainer
-from juju.lib.mocker import MATCH, ANY
-from juju.lib.twistutils import get_module_directory
+from juju.lib.lxc.tests.test_lxc import uses_sudo
+from juju.lib.mocker import ANY, KWARGS
+from juju.lib.upstart import UpstartService
 from juju.machine.unit import UnitMachineDeployment, UnitContainerDeployment
 from juju.machine.errors import UnitDeploymentError
 from juju.tests.common import get_test_zookeeper_address
 
 
-MATCH_PROTOCOL = MATCH(lambda x: isinstance(x, ProcessProtocol))
-
-
 class UnitMachineDeploymentTest(RepositoryTestBase):
 
     def setUp(self):
@@ -32,6 +28,11 @@
         self.units_directory = os.path.join(self.juju_directory, "units")
         os.mkdir(self.units_directory)
         self.unit_name = "wordpress/0"
+        self.rootfs = self.makeDir()
+        self.init_dir = os.path.join(self.rootfs, "etc", "init")
+        os.makedirs(self.init_dir)
+        self.real_init_dir = self.patch(
+            UpstartService, "init_dir", self.init_dir)
 
         self.deployment = UnitMachineDeployment(
             self.unit_name,
@@ -41,11 +42,43 @@
             self.deployment.unit_agent_module, "juju.agents.unit")
         self.deployment.unit_agent_module = "juju.agents.dummy"
 
-    def process_kill(self, pid):
-        try:
-            os.kill(pid, 9)
-        except OSError:
-            pass
+    def setup_mock(self):
+        self.check_call = self.mocker.replace("subprocess.check_call")
+        self.getProcessOutput = self.mocker.replace(
+            "twisted.internet.utils.getProcessOutput")
+
+    def mock_is_running(self, running):
+        self.getProcessOutput("/sbin/status", ["juju-wordpress-0"])
+        if running:
+            self.mocker.result(succeed(
+                "juju-wordpress-0 start/running, process 12345"))
+        else:
+            self.mocker.result(succeed("juju-wordpress-0 stop/waiting"))
+
+    def _without_sudo(self, args, **_):
+        self.assertEquals(args[0], "sudo")
+        return subprocess.call(args[1:])
+
+    def mock_install(self):
+        self.check_call(ANY, KWARGS)  # cp to init dir
+        self.mocker.call(self._without_sudo)
+        self.check_call(ANY, KWARGS)  # chmod 644
+        self.mocker.call(self._without_sudo)
+
+    def mock_start(self):
+        self.check_call(("sudo", "/sbin/start", "juju-wordpress-0"), KWARGS)
+        self.mocker.result(0)
+        for _ in range(5):
+            self.mock_is_running(True)
+
+    def mock_destroy(self):
+        self.check_call(("sudo", "/sbin/stop", "juju-wordpress-0"), KWARGS)
+        self.mocker.result(0)
+        self.check_call(ANY, KWARGS)  # rm from init dir
+        self.mocker.call(self._without_sudo)
+
+    def assert_pid_running(self, pid, expect):
+        self.assertEquals(os.path.exists("/proc/%s" % pid), expect)
 
     def test_unit_name_with_path_manipulation_raises_assertion(self):
         self.assertRaises(
@@ -60,129 +93,50 @@
             os.path.join(self.units_directory,
                          self.unit_name.replace("/", "-")))
 
-    def test_unit_pid_file(self):
-        self.assertEqual(
-            self.deployment.pid_file,
-            os.path.join(self.units_directory,
-                         "%s.pid" % (self.unit_name.replace("/", "-"))))
-
     def test_service_unit_start(self):
         """
         Starting a service unit will result in a unit workspace being created
         if it does not exist and a running service unit agent.
         """
-
-        d = self.deployment.start(
-            "0", get_test_zookeeper_address(), self.bundle)
-
-        @inlineCallbacks
-        def validate_result(result):
-            # give process time to write its pid
-            yield self.sleep(0.1)
-            self.addCleanup(
-                self.process_kill,
-                int(open(self.deployment.pid_file).read()))
-            self.assertEqual(result, True)
-
-        d.addCallback(validate_result)
-        return d
-
-    def test_deployment_get_environment(self):
-        zk_address = get_test_zookeeper_address()
-        environ = self.deployment.get_environment(21, zk_address)
-        environ.pop("PYTHONPATH")
-        self.assertEqual(environ["JUJU_HOME"], self.juju_directory)
-        self.assertEqual(environ["JUJU_UNIT_NAME"], self.unit_name)
-        self.assertEqual(environ["JUJU_ZOOKEEPER"], zk_address)
-        self.assertEqual(environ["JUJU_MACHINE_ID"], "21")
-
-    def test_service_unit_start_with_integer_machine_id(self):
-        """
-        Starting a service unit will result in a unit workspace being created
-        if it does not exist and a running service unit agent.
-        """
-        d = self.deployment.start(
-            21, get_test_zookeeper_address(), self.bundle)
-
-        @inlineCallbacks
-        def validate_result(result):
-            # give process time to write its pid
-            yield self.sleep(0.1)
-            self.addCleanup(
-                self.process_kill,
-                int(open(self.deployment.pid_file).read()))
-            self.assertEqual(result, True)
-
-        d.addCallback(validate_result)
-        return d
-
-    def test_service_unit_start_with_agent_startup_error(self):
-        """
-        Starting a service unit will result in a unit workspace being created
-        if it does not exist and a running service unit agent.
-        """
-        self.deployment.unit_agent_module = "magichat.xr1"
-        d = self.deployment.start(
-            "0", get_test_zookeeper_address(), self.bundle)
-
-        self.failUnlessFailure(d, UnitDeploymentError)
-
-        def validate_result(error):
-            self.assertIn("No module named magichat", str(error))
-
-        d.addCallback(validate_result)
-        return d
-
-    def test_service_unit_start_agent_arguments(self):
-        """
-        Starting a service unit will start a service unit agent with arguments
-        denoting the current machine id, zookeeper server location, and the
-        unit name. Additionally it will configure the log and pid file
-        locations.
-        """
-        machine_id = "0"
-        zookeeper_hosts = "menagerie.example.com:2181"
-
-        from twisted.internet import reactor
-        mock_reactor = self.mocker.patch(reactor)
-
-        environ = dict(os.environ)
-        environ["JUJU_UNIT_NAME"] = self.unit_name
-        environ["JUJU_HOME"] = self.juju_directory
-        environ["JUJU_MACHINE_ID"] = machine_id
-        environ["JUJU_ZOOKEEPER"] = zookeeper_hosts
-        environ["PYTHONPATH"] = ":".join(
-            filter(None, [
                os.path.dirname(get_module_directory(juju)),
-                environ.get("PYTHONPATH")]))
-
-        pid_file = os.path.join(
-            self.units_directory,
-            "%s.pid" % self.unit_name.replace("/", "-"))
-
-        log_file = os.path.join(
-            self.deployment.directory,
-            "charm.log")
-
-        args = [sys.executable, "-m", "juju.agents.dummy", "-n",
-                "--pidfile", pid_file, "--logfile", log_file]
-
-        mock_reactor.spawnProcess(
-            MATCH_PROTOCOL, sys.executable, args, environ)
+        self.setup_mock()
+        self.mock_install()
+        self.mock_is_running(False)
+        self.mock_start()
         self.mocker.replay()
-        self.deployment.start(
-            machine_id, zookeeper_hosts, self.bundle)
-
-    def xtest_service_unit_start_pre_unpack(self):
-        """
-        Attempting to start a charm before the charm is unpacked
-        results in an exception.
-        """
-        error = yield self.assertFailure(
-            self.deployment.start(
-                "0", get_test_zookeeper_address(), self.bundle),
-            UnitDeploymentError)
-        self.assertEquals(str(error), "Charm must be unpacked first.")
+
+        d = self.deployment.start(
+            "123", get_test_zookeeper_address(), self.bundle)
+
+        def verify_upstart(_):
+            conf_path = os.path.join(self.init_dir, "juju-wordpress-0.conf")
+            with open(conf_path) as f:
+                lines = f.readlines()
+
+            env = []
+            for line in lines:
+                if line.startswith("env "):
+                    env.append(line[4:-1].split("=", 1))
+                if line.startswith("exec "):
+                    exec_ = line[5:-1]
+
+            env = dict((k, v.strip('"')) for (k, v) in env)
+            env.pop("PYTHONPATH")
+            self.assertEquals(env, {
+                "JUJU_HOME": self.juju_directory,
+                "JUJU_UNIT_NAME": self.unit_name,
+                "JUJU_ZOOKEEPER": get_test_zookeeper_address(),
+                "JUJU_MACHINE_ID": "123"})
+
+            log_file = os.path.join(
+                self.deployment.directory, "charm.log")
+            command = " ".join([
+                "/usr/bin/python", "-m", "juju.agents.dummy", "--nodaemon",
+                "--logfile", log_file, "--session-file",
+                "/var/run/juju/unit-wordpress-0-agent.zksession",
+                ">> /tmp/juju-wordpress-0.output 2>&1"])
+            self.assertEquals(exec_, command)
+        d.addCallback(verify_upstart)
+        return d
 
     @inlineCallbacks
     def test_service_unit_destroy(self):
@@ -190,48 +144,22 @@
         Forcibly stop a unit, and destroy any directories associated to it
         on the machine, and kills the unit agent process.
         """
+        self.setup_mock()
+        self.mock_install()
+        self.mock_is_running(False)
+        self.mock_start()
+        self.mock_is_running(True)
+        self.mock_destroy()
+        self.mocker.replay()
+
         yield self.deployment.start(
             "0", get_test_zookeeper_address(), self.bundle)
-        # give the process time to write its pid file
-        yield self.sleep(0.1)
-        pid = int(open(self.deployment.pid_file).read())
+
         yield self.deployment.destroy()
-        # give the process time to die.
-        yield self.sleep(0.1)
-        e = self.assertRaises(OSError, os.kill, pid, 0)
-        self.assertEqual(e.errno, 3)
         self.assertFalse(os.path.exists(self.deployment.directory))
-        self.assertFalse(os.path.exists(self.deployment.pid_file))
-
+
+        conf_path = os.path.join(self.init_dir, "juju-wordpress-0.conf")
+        self.assertFalse(os.path.exists(conf_path))
-    def test_service_unit_destroy_stale_pid(self):
-        """
-        A stale pid file does not cause any errors.
-
-        We mock away is_running as otherwise it will check for this, but
-        there exists a small window when the result may disagree.
-        """
-        self.makeFile("8917238", path=self.deployment.pid_file)
-        mock_deployment = self.mocker.patch(self.deployment)
-        mock_deployment.is_running()
-        self.mocker.result(succeed(True))
-        self.mocker.replay()
-        return self.deployment.destroy()
-
-    def test_service_unit_destroy_perm_error(self):
-        """
-        A stale pid file does not cause any errors.
-
-        We mock away is_running as otherwise it will check for this, but
-        there exists a small window when the result may disagree.
-        """
-        if os.geteuid() == 0:
-            return
-        self.makeFile("1", path=self.deployment.pid_file)
-        mock_deployment = self.mocker.patch(self.deployment)
2931 | 231 | mock_deployment.is_running() | ||
2932 | 232 | self.mocker.result(succeed(True)) | ||
2933 | 233 | self.mocker.replay() | ||
2934 | 234 | return self.assertFailure(self.deployment.destroy(), OSError) | ||
2938 | 235 | 163 | ||
2939 | 236 | @inlineCallbacks | 164 | @inlineCallbacks |
2940 | 237 | def test_service_unit_destroy_undeployed(self): | 165 | def test_service_unit_destroy_undeployed(self): |
2941 | @@ -247,11 +175,23 @@ | |||
2942 | 247 | If the unit is not running, then destroy will just remove | 175 | If the unit is not running, then destroy will just remove |
2943 | 248 | its directory. | 176 | its directory. |
2944 | 249 | """ | 177 | """ |
2947 | 250 | self.deployment.unpack_charm(self.bundle) | 178 | self.setup_mock() |
2948 | 251 | self.assertTrue(os.path.exists(self.deployment.directory)) | 179 | self.mock_install() |
2949 | 180 | self.mock_is_running(False) | ||
2950 | 181 | self.mock_start() | ||
2951 | 182 | self.mock_is_running(False) | ||
2952 | 183 | self.check_call(ANY, KWARGS) # rm from init dir | ||
2953 | 184 | self.mocker.call(self._without_sudo) | ||
2954 | 185 | self.mocker.replay() | ||
2955 | 186 | |||
2956 | 187 | yield self.deployment.start( | ||
2957 | 188 | "0", get_test_zookeeper_address(), self.bundle) | ||
2958 | 252 | yield self.deployment.destroy() | 189 | yield self.deployment.destroy() |
2959 | 253 | self.assertFalse(os.path.exists(self.deployment.directory)) | 190 | self.assertFalse(os.path.exists(self.deployment.directory)) |
2960 | 254 | 191 | ||
2961 | 192 | conf_path = os.path.join(self.init_dir, "juju-wordpress-0.conf") | ||
2962 | 193 | self.assertFalse(os.path.exists(conf_path)) | ||
2963 | 194 | |||
2964 | 255 | def test_unpack_charm(self): | 195 | def test_unpack_charm(self): |
2965 | 256 | """ | 196 | """ |
2966 | 257 | The deployment unpacks a charm bundle into the unit workspace. | 197 | The deployment unpacks a charm bundle into the unit workspace. |
2967 | @@ -279,62 +219,81 @@ | |||
2968 | 279 | str(error), | 219 | str(error), |
2969 | 280 | "Invalid charm for deployment: %s" % self.charm.path) | 220 | "Invalid charm for deployment: %s" % self.charm.path) |
2970 | 281 | 221 | ||
2984 | 282 | def test_is_running_no_pid_file(self): | 222 | @inlineCallbacks |
2985 | 283 | """ | 223 | def test_is_running_not_installed(self): |
2986 | 284 | If there is no pid file the service unit is not running. | 224 | """ |
2987 | 285 | """ | 225 | If there is no conf file the service unit is not running. |
2988 | 286 | self.assertEqual((yield self.deployment.is_running()), False) | 226 | """ |
2989 | 287 | 227 | self.assertEqual((yield self.deployment.is_running()), False) | |
2990 | 288 | def test_is_running(self): | 228 | |
2991 | 289 | """ | 229 | @inlineCallbacks |
2992 | 290 | The service deployment will check the pid and validate | 230 | def test_is_running_not_running(self): |
2993 | 291 | that the pid found is a running process. | 231 | """ |
2994 | 292 | """ | 232 | If the conf file exists but the job is not running, the unit is not running |
2995 | 293 | self.makeFile( | 233 | """ |
2996 | 294 | str(os.getpid()), path=self.deployment.pid_file) | 234 | conf_path = os.path.join(self.init_dir, "juju-wordpress-0.conf") |
2997 | 235 | with open(conf_path, "w") as f: | ||
2998 | 236 | f.write("blah") | ||
2999 | 237 | self.setup_mock() | ||
3000 | 238 | self.mock_is_running(False) | ||
3001 | 239 | self.mocker.replay() | ||
3002 | 240 | self.assertEqual((yield self.deployment.is_running()), False) | ||
3003 | 241 | |||
3004 | 242 | @inlineCallbacks | ||
3005 | 243 | def test_is_running_success(self): | ||
3006 | 244 | """ | ||
3007 | 245 | Check running job. | ||
3008 | 246 | """ | ||
3009 | 247 | conf_path = os.path.join(self.init_dir, "juju-wordpress-0.conf") | ||
3010 | 248 | with open(conf_path, "w") as f: | ||
3011 | 249 | f.write("blah") | ||
3012 | 250 | self.setup_mock() | ||
3013 | 251 | self.mock_is_running(True) | ||
3014 | 252 | self.mocker.replay() | ||
3015 | 295 | self.assertEqual((yield self.deployment.is_running()), True) | 253 | self.assertEqual((yield self.deployment.is_running()), True) |
3016 | 296 | 254 | ||
3058 | 297 | def test_is_running_against_unknown_error(self): | 255 | @uses_sudo |
3059 | 298 | """ | 256 | @inlineCallbacks |
3060 | 299 | If we don't have permission to access the process, the | 257 | def test_run_actual_process(self): |
3061 | 300 | original error should get passed along. | 258 | # "unpatch" to use real /etc/init |
3062 | 301 | """ | 259 | self.patch(UpstartService, "init_dir", self.real_init_dir) |
3063 | 302 | if os.geteuid() == 0: | 260 | yield self.deployment.start( |
3064 | 303 | return | 261 | "0", get_test_zookeeper_address(), self.bundle) |
3065 | 304 | self.makeFile("1", path=self.deployment.pid_file) | 262 | old_pid = yield self.deployment.get_pid() |
3066 | 305 | self.assertFailure(self.deployment.is_running(), OSError) | 263 | self.assert_pid_running(old_pid, True) |
3067 | 306 | 264 | ||
3068 | 307 | def test_is_running_invalid_pid_file(self): | 265 | # Give the job a chance to fall over and be restarted (if the |
3069 | 308 | """ | 266 | # pid doesn't change, that hasn't happened) |
3070 | 309 | If the pid file is corrupted on disk, and does not contain | 267 | yield self.sleep(0.1) |
3071 | 310 | a valid integer, then the agent is not running. | 268 | self.assertEquals((yield self.deployment.get_pid()), old_pid) |
3072 | 311 | """ | 269 | self.assert_pid_running(old_pid, True) |
3073 | 312 | self.makeFile("abcdef", path=self.deployment.pid_file) | 270 | |
3074 | 313 | self.assertEqual( | 271 | # Kick the job over ourselves; check it comes back |
3075 | 314 | (yield self.deployment.is_running()), False) | 272 | os.system("sudo kill -9 %s" % old_pid) |
3076 | 315 | 273 | yield self.sleep(0.1) | |
3077 | 316 | def test_is_running_invalid_pid(self): | 274 | self.assert_pid_running(old_pid, False) |
3078 | 317 | """ | 275 | new_pid = yield self.deployment.get_pid() |
3079 | 318 | If the pid file refers to an invalid process then the | 276 | self.assertNotEquals(new_pid, old_pid) |
3080 | 319 | agent is not running. | 277 | self.assert_pid_running(new_pid, True) |
3081 | 320 | """ | 278 | |
3082 | 321 | self.makeFile("669966", path=self.deployment.pid_file) | 279 | yield self.deployment.destroy() |
3083 | 322 | self.assertEqual( | 280 | self.assertEquals((yield self.deployment.get_pid()), None) |
3084 | 323 | (yield self.deployment.is_running()), False) | 281 | self.assert_pid_running(new_pid, False) |
3085 | 324 | 282 | ||
3086 | 325 | 283 | @uses_sudo | |
3087 | 326 | upstart_job_sample = '''\ | 284 | @inlineCallbacks |
3088 | 327 | description "Unit agent for riak/0" | 285 | def test_fail_to_run_actual_process(self): |
3089 | 328 | author "Juju Team <juju@lists.canonical.com>" | 286 | self.deployment.unit_agent_module = "haha.disregard.that" |
3090 | 329 | start on start on filesystem or runlevel [2345] | 287 | self.patch(UpstartService, "init_dir", self.real_init_dir) |
3091 | 330 | stop on runlevel [!2345] | 288 | |
3092 | 331 | 289 | d = self.deployment.start( | |
3093 | 332 | respawn | 290 | "0", get_test_zookeeper_address(), self.bundle) |
3094 | 333 | 291 | e = yield self.assertFailure(d, UnitDeploymentError) | |
3095 | 334 | env JUJU_MACHINE_ID="0" | 292 | self.assertTrue(str(e).startswith( |
3096 | 335 | env JUJU_HOME="/var/lib/juju" | 293 | "Failed to start job juju-wordpress-0; got output:\n")) |
3097 | 336 | env JUJU_ZOOKEEPER="127.0.1.1:2181" | 294 | self.assertIn("No module named haha", str(e)) |
3098 | 337 | env JUJU_UNIT_NAME="riak/0"''' | 295 | |
3099 | 296 | yield self.deployment.destroy() | ||
3100 | 338 | 297 | ||
3101 | 339 | 298 | ||
3102 | 340 | class UnitContainerDeploymentTest(RepositoryTestBase): | 299 | class UnitContainerDeploymentTest(RepositoryTestBase): |
3103 | @@ -365,14 +324,6 @@ | |||
3104 | 365 | "ns1-riak-0", | 324 | "ns1-riak-0", |
3105 | 366 | self.unit_deploy.container_name) | 325 | self.unit_deploy.container_name) |
3106 | 367 | 326 | ||
3107 | 368 | def test_get_upstart_job(self): | ||
3108 | 369 | upstart_job = self.unit_deploy.get_upstart_unit_job( | ||
3109 | 370 | 0, "127.0.1.1:2181") | ||
3110 | 371 | job = self.get_normalized(upstart_job) | ||
3111 | 372 | self.assertIn('JUJU_ZOOKEEPER="127.0.1.1:2181"', job) | ||
3112 | 373 | self.assertIn('JUJU_MACHINE_ID="0"', job) | ||
3113 | 374 | self.assertIn('JUJU_UNIT_NAME="riak/0"', job) | ||
3114 | 375 | |||
3115 | 376 | @inlineCallbacks | 327 | @inlineCallbacks |
3116 | 377 | def test_destroy(self): | 328 | def test_destroy(self): |
3117 | 378 | mock_container = self.mocker.patch(self.unit_deploy.container) | 329 | mock_container = self.mocker.patch(self.unit_deploy.container) |
3118 | @@ -403,7 +354,7 @@ | |||
3119 | 403 | unit_deploy = UnitContainerDeployment( | 354 | unit_deploy = UnitContainerDeployment( |
3120 | 404 | self.unit_name, self.juju_home) | 355 | self.unit_name, self.juju_home) |
3121 | 405 | container = yield unit_deploy._get_master_template( | 356 | container = yield unit_deploy._get_master_template( |
3123 | 406 | "local", "127.0.0.1:1", "abc") | 357 | "local", "abc") |
3124 | 407 | self.assertEqual(container.origin, "lp:~juju/foobar") | 358 | self.assertEqual(container.origin, "lp:~juju/foobar") |
3125 | 408 | self.assertEqual( | 359 | self.assertEqual( |
3126 | 409 | container.customize_log, | 360 | container.customize_log, |
3127 | @@ -420,7 +371,7 @@ | |||
3128 | 420 | mock_deploy = self.mocker.patch(self.unit_deploy) | 371 | mock_deploy = self.mocker.patch(self.unit_deploy) |
3129 | 421 | # this minimally validates that we are also called with the | 372 | # this minimally validates that we are also called with the |
3130 | 422 | # expected public key | 373 | # expected public key |
3132 | 423 | mock_deploy._get_container(ANY, ANY, ANY, env["JUJU_PUBLIC_KEY"]) | 374 | mock_deploy._get_container(ANY, ANY, env["JUJU_PUBLIC_KEY"]) |
3133 | 424 | self.mocker.result((container, rootfs)) | 375 | self.mocker.result((container, rootfs)) |
3134 | 425 | 376 | ||
3135 | 426 | mock_container = self.mocker.patch(container) | 377 | mock_container = self.mocker.patch(container) |
3136 | @@ -434,7 +385,7 @@ | |||
3137 | 434 | yield self.unit_deploy.start("0", "127.0.1.1:2181", self.bundle) | 385 | yield self.unit_deploy.start("0", "127.0.1.1:2181", self.bundle) |
3138 | 435 | 386 | ||
3139 | 436 | # Verify the upstart job | 387 | # Verify the upstart job |
3141 | 437 | upstart_agent_name = "%s-unit-agent.conf" % ( | 388 | upstart_agent_name = "juju-%s.conf" % ( |
3142 | 438 | self.unit_name.replace("/", "-")) | 389 | self.unit_name.replace("/", "-")) |
3143 | 439 | content = open( | 390 | content = open( |
3144 | 440 | os.path.join(rootfs, "etc", "init", upstart_agent_name)).read() | 391 | os.path.join(rootfs, "etc", "init", upstart_agent_name)).read() |
3145 | @@ -443,10 +394,13 @@ | |||
3146 | 443 | self.assertIn('JUJU_MACHINE_ID="0"', job) | 394 | self.assertIn('JUJU_MACHINE_ID="0"', job) |
3147 | 444 | self.assertIn('JUJU_UNIT_NAME="riak/0"', job) | 395 | self.assertIn('JUJU_UNIT_NAME="riak/0"', job) |
3148 | 445 | 396 | ||
3150 | 446 | # Verify the symlink exists | 397 | # Verify the symlinks exist |
3151 | 447 | self.assertTrue(os.path.lexists(os.path.join( | 398 | self.assertTrue(os.path.lexists(os.path.join( |
3152 | 448 | self.unit_deploy.juju_home, "units", | 399 | self.unit_deploy.juju_home, "units", |
3153 | 449 | self.unit_deploy.unit_path_name, "unit.log"))) | 400 | self.unit_deploy.unit_path_name, "unit.log"))) |
3154 | 401 | self.assertTrue(os.path.lexists(os.path.join( | ||
3155 | 402 | self.unit_deploy.juju_home, "units", | ||
3156 | 403 | self.unit_deploy.unit_path_name, "output.log"))) | ||
3157 | 450 | 404 | ||
3158 | 451 | # Verify the charm is on disk. | 405 | # Verify the charm is on disk. |
3159 | 452 | self.assertTrue(os.path.exists(os.path.join( | 406 | self.assertTrue(os.path.exists(os.path.join( |
3160 | @@ -471,7 +425,7 @@ | |||
3161 | 471 | container = LXCContainer(self.unit_name, None, None, None) | 425 | container = LXCContainer(self.unit_name, None, None, None) |
3162 | 472 | 426 | ||
3163 | 473 | mock_deploy = self.mocker.patch(self.unit_deploy) | 427 | mock_deploy = self.mocker.patch(self.unit_deploy) |
3165 | 474 | mock_deploy._get_master_template(ANY, ANY, ANY) | 428 | mock_deploy._get_master_template(ANY, ANY) |
3166 | 475 | self.mocker.result(container) | 429 | self.mocker.result(container) |
3167 | 476 | 430 | ||
3168 | 477 | mock_container = self.mocker.patch(container) | 431 | mock_container = self.mocker.patch(container) |
3169 | @@ -481,7 +435,7 @@ | |||
3170 | 481 | self.mocker.replay() | 435 | self.mocker.replay() |
3171 | 482 | 436 | ||
3172 | 483 | container, rootfs = yield self.unit_deploy._get_container( | 437 | container, rootfs = yield self.unit_deploy._get_container( |
3174 | 484 | "0", "127.0.0.1:2181", None, "dsa...") | 438 | "0", None, "dsa...") |
3175 | 485 | 439 | ||
3176 | 486 | output = self.output.getvalue() | 440 | output = self.output.getvalue() |
3177 | 487 | self.assertIn("Container created for %s" % self.unit_deploy.unit_name, | 441 | self.assertIn("Container created for %s" % self.unit_deploy.unit_name, |
3178 | 488 | 442 | ||
3179 | === modified file 'juju/machine/unit.py' | |||
3180 | --- juju/machine/unit.py 2011-10-01 00:04:14 +0000 | |||
3181 | +++ juju/machine/unit.py 2012-02-02 16:42:42 +0000 | |||
3182 | @@ -1,19 +1,17 @@ | |||
3183 | 1 | import os | 1 | import os |
3184 | 2 | import errno | ||
3185 | 3 | import signal | ||
3186 | 4 | import shutil | 2 | import shutil |
3187 | 5 | import sys | 3 | import sys |
3188 | 6 | import logging | 4 | import logging |
3189 | 7 | 5 | ||
3190 | 8 | import juju | 6 | import juju |
3191 | 9 | 7 | ||
3195 | 10 | from twisted.internet.defer import ( | 8 | from twisted.internet.defer import inlineCallbacks, returnValue |
3193 | 11 | Deferred, inlineCallbacks, returnValue, succeed, fail) | ||
3194 | 12 | from twisted.internet.protocol import ProcessProtocol | ||
3196 | 13 | 9 | ||
3197 | 14 | from juju.charm.bundle import CharmBundle | 10 | from juju.charm.bundle import CharmBundle |
3198 | 11 | from juju.errors import ServiceError | ||
3199 | 12 | from juju.lib.lxc import LXCContainer, get_containers, LXCError | ||
3200 | 15 | from juju.lib.twistutils import get_module_directory | 13 | from juju.lib.twistutils import get_module_directory |
3202 | 16 | from juju.lib.lxc import LXCContainer, get_containers, LXCError | 14 | from juju.lib.upstart import UpstartService |
3203 | 17 | 15 | ||
3204 | 18 | from .errors import UnitDeploymentError | 16 | from .errors import UnitDeploymentError |
3205 | 19 | 17 | ||
3206 | @@ -26,22 +24,17 @@ | |||
3207 | 26 | return UnitMachineDeployment | 24 | return UnitMachineDeployment |
3208 | 27 | 25 | ||
3209 | 28 | 26 | ||
3226 | 29 | class AgentProcessProtocol(ProcessProtocol): | 27 | def _get_environment(unit_name, juju_home, machine_id, zookeeper_hosts): |
3227 | 30 | 28 | environ = dict() | |
3228 | 31 | def __init__(self, deferred): | 29 | environ["JUJU_MACHINE_ID"] = str(machine_id) |
3229 | 32 | self.deferred = deferred | 30 | environ["JUJU_UNIT_NAME"] = unit_name |
3230 | 33 | self._error_buffer = [] | 31 | environ["JUJU_HOME"] = juju_home |
3231 | 34 | 32 | environ["JUJU_ZOOKEEPER"] = zookeeper_hosts | |
3232 | 35 | def errReceived(self, data): | 33 | environ["PYTHONPATH"] = ":".join( |
3233 | 36 | self._error_buffer.append(data) | 34 | filter(None, [ |
3234 | 37 | 35 | os.path.dirname(get_module_directory(juju)), | |
3235 | 38 | def processEnded(self, reason): | 36 | os.environ.get("PYTHONPATH")])) |
3236 | 39 | if self._error_buffer: | 37 | return environ |
3221 | 40 | msg = "".join(self._error_buffer) | ||
3222 | 41 | msg.strip() | ||
3223 | 42 | self.deferred.errback(UnitDeploymentError(msg)) | ||
3224 | 43 | else: | ||
3225 | 44 | self.deferred.callback(True) | ||
3237 | 45 | 38 | ||
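The new module-level `_get_environment` helper builds the agent's environment from scratch (rather than copying `os.environ`, as the removed `get_environment` method did), prepending the directory containing the juju package to any inherited PYTHONPATH. The same pattern with stdlib only; `build_agent_env` and `module_dir` are illustrative names standing in for the juju-specific pieces:

```python
import os


def build_agent_env(unit_name, juju_home, machine_id, zk_hosts, module_dir,
                    inherited=None):
    """Build a fresh environment dict for the unit agent, mirroring the
    diff's _get_environment: module_dir is prepended to any inherited
    PYTHONPATH, and filter(None, ...) drops the inherited part if unset."""
    env = {
        "JUJU_MACHINE_ID": str(machine_id),
        "JUJU_UNIT_NAME": unit_name,
        "JUJU_HOME": juju_home,
        "JUJU_ZOOKEEPER": zk_hosts,
    }
    env["PYTHONPATH"] = ":".join(filter(None, [module_dir, inherited]))
    return env


env = build_agent_env("riak/0", "/var/lib/juju", 0, "127.0.1.1:2181",
                      "/usr/lib/juju", inherited=None)
```

Starting from an empty dict rather than `os.environ` keeps the upstart job's `env` stanzas down to exactly the variables the agent needs.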
3238 | 46 | 39 | ||
3239 | 47 | class UnitMachineDeployment(object): | 40 | class UnitMachineDeployment(object): |
3240 | @@ -58,46 +51,35 @@ | |||
3241 | 58 | unit_agent_module = "juju.agents.unit" | 51 | unit_agent_module = "juju.agents.unit" |
3242 | 59 | 52 | ||
3243 | 60 | def __init__(self, unit_name, juju_home): | 53 | def __init__(self, unit_name, juju_home): |
3244 | 61 | self.unit_name = unit_name | ||
3245 | 62 | |||
3246 | 63 | assert ".." not in unit_name, "Invalid Unit Name" | 54 | assert ".." not in unit_name, "Invalid Unit Name" |
3247 | 55 | self.unit_name = unit_name | ||
3248 | 56 | self.juju_home = juju_home | ||
3249 | 64 | self.unit_path_name = unit_name.replace("/", "-") | 57 | self.unit_path_name = unit_name.replace("/", "-") |
3250 | 65 | self.juju_home = juju_home | ||
3251 | 66 | |||
3252 | 67 | self.directory = os.path.join( | 58 | self.directory = os.path.join( |
3253 | 68 | self.juju_home, "units", self.unit_path_name) | 59 | self.juju_home, "units", self.unit_path_name) |
3258 | 69 | 60 | self.service = UpstartService( | |
3259 | 70 | self.pid_file = os.path.join( | 61 | # NOTE: we need use_sudo to work correctly during tests that |
3260 | 71 | self.juju_home, "units", "%s.pid" % self.unit_path_name) | 62 | # launch actual processes (rather than just mocking/trusting). |
3261 | 72 | 63 | "juju-%s" % self.unit_path_name, use_sudo=True) | |
3262 | 64 | |||
3263 | 65 | @inlineCallbacks | ||
3264 | 73 | def start(self, machine_id, zookeeper_hosts, bundle): | 66 | def start(self, machine_id, zookeeper_hosts, bundle): |
3265 | 74 | """Start a service unit agent.""" | 67 | """Start a service unit agent.""" |
3266 | 75 | # Extract the charm into the unit directory. | ||
3267 | 76 | self.unpack_charm(bundle) | 68 | self.unpack_charm(bundle) |
3292 | 77 | 69 | self.service.set_description( | |
3293 | 78 | # Start the service unit agent | 70 | "Juju unit agent for %s" % self.unit_name) |
3294 | 79 | log_file = os.path.join(self.directory, "charm.log") | 71 | self.service.set_environ(_get_environment( |
3295 | 80 | environ = self.get_environment(machine_id, zookeeper_hosts) | 72 | self.unit_name, self.juju_home, machine_id, zookeeper_hosts)) |
3296 | 81 | args = [sys.executable, "-m", self.unit_agent_module, "-n", | 73 | self.service.set_command(" ".join(( |
3297 | 82 | "--pidfile", self.pid_file, "--logfile", log_file] | 74 | "/usr/bin/python", "-m", self.unit_agent_module, |
3298 | 83 | 75 | "--nodaemon", | |
3299 | 84 | from twisted.internet import reactor | 76 | "--logfile", os.path.join(self.directory, "charm.log"), |
3300 | 85 | process_deferred = Deferred() | 77 | "--session-file", |
3301 | 86 | protocol = AgentProcessProtocol(process_deferred) | 78 | "/var/run/juju/unit-%s-agent.zksession" % self.unit_path_name))) |
3302 | 87 | reactor.spawnProcess(protocol, sys.executable, args, environ) | 79 | try: |
3303 | 88 | return process_deferred | 80 | yield self.service.start() |
3304 | 89 | 81 | except ServiceError as e: | |
3305 | 90 | def get_environment(self, machine_id, zookeeper_hosts): | 82 | raise UnitDeploymentError(str(e)) |
3282 | 91 | environ = dict(os.environ) | ||
3283 | 92 | environ["JUJU_MACHINE_ID"] = str(machine_id) | ||
3284 | 93 | environ["JUJU_UNIT_NAME"] = self.unit_name | ||
3285 | 94 | environ["JUJU_HOME"] = self.juju_home | ||
3286 | 95 | environ["JUJU_ZOOKEEPER"] = zookeeper_hosts | ||
3287 | 96 | environ["PYTHONPATH"] = ":".join( | ||
3288 | 97 | filter(None, [ | ||
3289 | 98 | os.path.dirname(get_module_directory(juju)), | ||
3290 | 99 | environ.get("PYTHONPATH")])) | ||
3291 | 100 | return environ | ||
3306 | 101 | 83 | ||
3307 | 102 | @inlineCallbacks | 84 | @inlineCallbacks |
3308 | 103 | def destroy(self): | 85 | def destroy(self): |
3309 | @@ -105,41 +87,17 @@ | |||
3310 | 105 | 87 | ||
3311 | 106 | This will destroy/unmount any state on disk. | 88 | This will destroy/unmount any state on disk. |
3312 | 107 | """ | 89 | """ |
3325 | 108 | running = yield self.is_running() | 90 | yield self.service.destroy() |
3314 | 109 | if running: | ||
3315 | 110 | pid = int(open(self.pid_file).read()) | ||
3316 | 111 | try: | ||
3317 | 112 | os.kill(pid, signal.SIGKILL) | ||
3318 | 113 | except OSError, e: | ||
3319 | 114 | if e.errno != errno.ESRCH: | ||
3320 | 115 | raise | ||
3321 | 116 | |||
3322 | 117 | if os.path.exists(self.pid_file): | ||
3323 | 118 | os.remove(self.pid_file) | ||
3324 | 119 | |||
3326 | 120 | if os.path.exists(self.directory): | 91 | if os.path.exists(self.directory): |
3327 | 121 | shutil.rmtree(self.directory) | 92 | shutil.rmtree(self.directory) |
3328 | 122 | 93 | ||
3329 | 94 | def get_pid(self): | ||
3330 | 95 | """Get the service unit's process id.""" | ||
3331 | 96 | return self.service.get_pid() | ||
3332 | 97 | |||
3333 | 123 | def is_running(self): | 98 | def is_running(self): |
3334 | 124 | """Is the service unit running.""" | 99 | """Is the service unit running.""" |
3353 | 125 | try: | 100 | return self.service.is_running() |
3336 | 126 | with open(self.pid_file) as pid_fh: | ||
3337 | 127 | pid = int(pid_fh.read()) | ||
3338 | 128 | except (IOError, ValueError): | ||
3339 | 129 | return succeed(False) | ||
3340 | 130 | |||
3341 | 131 | # Attempt to send a signal to the process to verify its a valid process | ||
3342 | 132 | # From man 2 kill | ||
3343 | 133 | # "If sig is 0, then no signal is sent, but error checking is still | ||
3344 | 134 | # performed; this can be used to check for the existence of a process | ||
3345 | 135 | # ID or process group ID." | ||
3346 | 136 | try: | ||
3347 | 137 | os.kill(pid, 0) | ||
3348 | 138 | except OSError, e: | ||
3349 | 139 | if e.errno == errno.ESRCH: | ||
3350 | 140 | return succeed(False) | ||
3351 | 141 | return fail(e) | ||
3352 | 142 | return succeed(True) | ||
3354 | 143 | 101 | ||
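The removed `is_running` relied on the kill-with-signal-0 idiom quoted from `man 2 kill`. A synchronous sketch of that idiom, without the twisted `succeed`/`fail` wrapping the original used:

```python
import errno
import os


def pid_is_running(pid):
    """Probe liveness with signal 0: no signal is delivered, but error
    checking is still performed (see `man 2 kill`)."""
    try:
        os.kill(pid, 0)
    except OSError as e:
        if e.errno == errno.ESRCH:  # no such process
            return False
        raise  # e.g. EPERM: the process exists but we may not signal it
    return True
```

The diff drops this pid-file bookkeeping entirely: `UpstartService.is_running` asks upstart about the job instead of probing a stored pid.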
3355 | 144 | def unpack_charm(self, charm): | 102 | def unpack_charm(self, charm): |
3356 | 145 | """Unpack a charm to the service units directory.""" | 103 | """Unpack a charm to the service units directory.""" |
3357 | @@ -150,27 +108,7 @@ | |||
3358 | 150 | charm.extract_to(os.path.join(self.directory, "charm")) | 108 | charm.extract_to(os.path.join(self.directory, "charm")) |
3359 | 151 | 109 | ||
3360 | 152 | 110 | ||
3382 | 153 | container_upstart_job_template = """\ | 111 | class UnitContainerDeployment(object): |
3362 | 154 | description "Unit agent for %(JUJU_UNIT_NAME)s" | ||
3363 | 155 | author "Juju Team <juju@lists.canonical.com>" | ||
3364 | 156 | |||
3365 | 157 | start on start on filesystem or runlevel [2345] | ||
3366 | 158 | stop on runlevel [!2345] | ||
3367 | 159 | |||
3368 | 160 | respawn | ||
3369 | 161 | |||
3370 | 162 | env JUJU_MACHINE_ID="%(JUJU_MACHINE_ID)s" | ||
3371 | 163 | env JUJU_HOME="%(JUJU_HOME)s" | ||
3372 | 164 | env JUJU_ZOOKEEPER="%(JUJU_ZOOKEEPER)s" | ||
3373 | 165 | env JUJU_UNIT_NAME="%(JUJU_UNIT_NAME)s" | ||
3374 | 166 | env PYTHONPATH="%(PYTHONPATH)s" | ||
3375 | 167 | |||
3376 | 168 | exec /usr/bin/python -m juju.agents.unit \ | ||
3377 | 169 | --logfile=/var/log/juju/unit-%(UNIT_PATH_NAME)s.log | ||
3378 | 170 | """ | ||
3379 | 171 | |||
3380 | 172 | |||
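The deleted `container_upstart_job_template` above was rendered with plain `%`-interpolation over a dict of environment values. A reduced sketch of that rendering (template body trimmed, values illustrative):

```python
# Trimmed-down stand-in for the removed module-level template string.
template = """\
description "Unit agent for %(JUJU_UNIT_NAME)s"

start on filesystem or runlevel [2345]
stop on runlevel [!2345]

respawn

env JUJU_ZOOKEEPER="%(JUJU_ZOOKEEPER)s"
env JUJU_UNIT_NAME="%(JUJU_UNIT_NAME)s"

exec /usr/bin/python -m juju.agents.unit --nodaemon
"""

job = template % {"JUJU_UNIT_NAME": "riak/0",
                  "JUJU_ZOOKEEPER": "127.0.1.1:2181"}
```

In the new code this job body is produced by `UpstartService` rather than by a module-level template string, so both deployment classes share one writer.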
3381 | 173 | class UnitContainerDeployment(UnitMachineDeployment): | ||
3383 | 174 | """Deploy a service unit in a container. | 112 | """Deploy a service unit in a container. |
3384 | 175 | 113 | ||
3385 | 176 | Units deployed in a container have strong isolation between | 114 | Units deployed in a container have strong isolation between |
3386 | @@ -185,66 +123,35 @@ | |||
3387 | 185 | """ | 123 | """ |
3388 | 186 | 124 | ||
3389 | 187 | def __init__(self, unit_name, juju_home): | 125 | def __init__(self, unit_name, juju_home): |
3391 | 188 | super(UnitContainerDeployment, self).__init__(unit_name, juju_home) | 126 | self.unit_name = unit_name |
3392 | 127 | self.juju_home = juju_home | ||
3393 | 128 | self.unit_path_name = unit_name.replace("/", "-") | ||
3394 | 189 | 129 | ||
3395 | 130 | self._juju_origin = os.environ.get("JUJU_ORIGIN") | ||
3396 | 190 | self._unit_namespace = os.environ.get("JUJU_UNIT_NS") | 131 | self._unit_namespace = os.environ.get("JUJU_UNIT_NS") |
3397 | 191 | self._juju_origin = os.environ.get("JUJU_ORIGIN") | ||
3398 | 192 | assert self._unit_namespace is not None, "Required unit ns not found" | 132 | assert self._unit_namespace is not None, "Required unit ns not found" |
3399 | 133 | self.container_name = "%s-%s" % ( | ||
3400 | 134 | self._unit_namespace, self.unit_path_name) | ||
3401 | 193 | 135 | ||
3402 | 194 | self.pid_file = None | ||
3403 | 195 | self.container = LXCContainer(self.container_name, None, None, None) | 136 | self.container = LXCContainer(self.container_name, None, None, None) |
3426 | 196 | 137 | self.directory = None | |
3405 | 197 | @property | ||
3406 | 198 | def container_name(self): | ||
3407 | 199 | """Get a qualfied name for the container. | ||
3408 | 200 | |||
3409 | 201 | The units directory for the machine points to a path like:: | ||
3410 | 202 | |||
3411 | 203 | /var/lib/juju/units | ||
3412 | 204 | |||
3413 | 205 | In the case of the local provider this directory is qualified | ||
3414 | 206 | to allow for multiple users with multiple environments:: | ||
3415 | 207 | |||
3416 | 208 | /var/lib/juju/username-envname | ||
3417 | 209 | |||
3418 | 210 | This value is passed to the agent via the JUJU_HOME environment | ||
3419 | 211 | variable. | ||
3420 | 212 | |||
3421 | 213 | This function extracts the name qualifier for the container from | ||
3422 | 214 | the JUJU_HOME value. | ||
3423 | 215 | """ | ||
3424 | 216 | return "%s-%s" % (self._unit_namespace, | ||
3425 | 217 | self.unit_name.replace("/", "-")) | ||
3427 | 218 | 138 | ||
3428 | 219 | def setup_directories(self): | 139 | def setup_directories(self): |
3429 | 220 | # Create state directories for unit in the container | 140 | # Create state directories for unit in the container |
3430 | 221 | # Move to juju-create script | 141 | # Move to juju-create script |
3454 | 222 | units_dir = os.path.join( | 142 | base = self.directory |
3455 | 223 | self.directory, "var", "lib", "juju", "units") | 143 | dirs = ((base, "var", "lib", "juju", "units", self.unit_path_name), |
3456 | 224 | if not os.path.exists(units_dir): | 144 | (base, "var", "lib", "juju", "state"), |
3457 | 225 | os.makedirs(units_dir) | 145 | (base, "var", "log", "juju"), |
3458 | 226 | 146 | (self.juju_home, "units", self.unit_path_name)) | |
3459 | 227 | state_dir = os.path.join( | 147 | |
3460 | 228 | self.directory, "var", "lib", "juju", "state") | 148 | for parts in dirs: |
3461 | 229 | if not os.path.exists(state_dir): | 149 | dir_ = os.path.join(*parts) |
3462 | 230 | os.makedirs(state_dir) | 150 | if not os.path.exists(dir_): |
3463 | 231 | 151 | os.makedirs(dir_) | |
3441 | 232 | log_dir = os.path.join( | ||
3442 | 233 | self.directory, "var", "log", "juju") | ||
3443 | 234 | if not os.path.exists(log_dir): | ||
3444 | 235 | os.makedirs(log_dir) | ||
3445 | 236 | |||
3446 | 237 | unit_dir = os.path.join(units_dir, self.unit_path_name) | ||
3447 | 238 | if not os.path.exists(unit_dir): | ||
3448 | 239 | os.mkdir(unit_dir) | ||
3449 | 240 | |||
3450 | 241 | host_unit_dir = os.path.join( | ||
3451 | 242 | self.juju_home, "units", self.unit_path_name) | ||
3452 | 243 | if not os.path.exists(host_unit_dir): | ||
3453 | 244 | os.makedirs(host_unit_dir) | ||
3464 | 245 | 152 | ||
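The refactored `setup_directories` collapses four near-identical `makedirs` blocks into one loop over path tuples. A standalone sketch of the same loop, run against a temporary root so it is safe to execute:

```python
import os
import tempfile


def setup_directories(base, juju_home, unit_path_name):
    """Create the unit's state directories, as in the refactored method:
    one loop over path tuples instead of four copy-pasted blocks."""
    dirs = ((base, "var", "lib", "juju", "units", unit_path_name),
            (base, "var", "lib", "juju", "state"),
            (base, "var", "log", "juju"),
            (juju_home, "units", unit_path_name))
    for parts in dirs:
        dir_ = os.path.join(*parts)
        if not os.path.exists(dir_):
            os.makedirs(dir_)


root = tempfile.mkdtemp()
setup_directories(os.path.join(root, "rootfs"),
                  os.path.join(root, "home"), "wordpress-0")
```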
3465 | 246 | @inlineCallbacks | 153 | @inlineCallbacks |
3467 | 247 | def _get_master_template(self, machine_id, zookeeper_hosts, public_key): | 154 | def _get_master_template(self, machine_id, public_key): |
3468 | 248 | container_template_name = "%s-%s-template" % ( | 155 | container_template_name = "%s-%s-template" % ( |
3469 | 249 | self._unit_namespace, machine_id) | 156 | self._unit_namespace, machine_id) |
3470 | 250 | 157 | ||
3471 | @@ -260,7 +167,7 @@ | |||
3472 | 260 | if not master_template.is_constructed(): | 167 | if not master_template.is_constructed(): |
3473 | 261 | log.debug("Creating master container...") | 168 | log.debug("Creating master container...") |
3474 | 262 | yield master_template.create() | 169 | yield master_template.create() |
3476 | 263 | log.debug("Created master container %s" % container_template_name) | 170 | log.debug("Created master container %s", container_template_name) |
3477 | 264 | 171 | ||
3478 | 265 | # it wasn't constructed and we couldn't construct it | 172 | # it wasn't constructed and we couldn't construct it |
3479 | 266 | if not master_template.is_constructed(): | 173 | if not master_template.is_constructed(): |
3480 | @@ -269,15 +176,15 @@ | |||
3481 | 269 | returnValue(master_template) | 176 | returnValue(master_template) |
3482 | 270 | 177 | ||
3483 | 271 | @inlineCallbacks | 178 | @inlineCallbacks |
3485 | 272 | def _get_container(self, machine_id, zookeeper_hosts, bundle, public_key): | 179 | def _get_container(self, machine_id, bundle, public_key): |
3486 | 273 | master_template = yield self._get_master_template( | 180 | master_template = yield self._get_master_template( |
3488 | 274 | machine_id, zookeeper_hosts, public_key) | 181 | machine_id, public_key) |
3489 | 275 | log.info( | 182 | log.info( |
3491 | 276 | "Creating container %s...", os.path.basename(self.directory)) | 183 | "Creating container %s...", self.unit_path_name) |
3492 | 277 | 184 | ||
3493 | 278 | container = yield master_template.clone(self.container_name) | 185 | container = yield master_template.clone(self.container_name) |
3494 | 279 | directory = container.rootfs | 186 | directory = container.rootfs |
3496 | 280 | log.info("Container created for %s" % self.unit_name) | 187 | log.info("Container created for %s", self.unit_name) |
3497 | 281 | returnValue((container, directory)) | 188 | returnValue((container, directory)) |
3498 | 282 | 189 | ||
3499 | 283 | @inlineCallbacks | 190 | @inlineCallbacks |
3500 | @@ -293,10 +200,9 @@ | |||
3501 | 293 | # Build a template container that can be cloned in deploy | 200 | # Build a template container that can be cloned in deploy |
3502 | 294 | # we leave the loosely initialized self.container in place for | 201 | # we leave the loosely initialized self.container in place for |
3503 | 295 | # the class as thats all we need for methods other than start. | 202 | # the class as thats all we need for methods other than start. |
3508 | 296 | self.container, self.directory = yield self._get_container(machine_id, | 203 | self.container, self.directory = yield self._get_container( |
3509 | 297 | zookeeper_hosts, | 204 | machine_id, bundle, public_key) |
3510 | 298 | bundle, | 205 | |
3507 | 299 | public_key) | ||
3511 | 300 | # Create state directories for unit in the container | 206 | # Create state directories for unit in the container |
3512 | 301 | self.setup_directories() | 207 | self.setup_directories() |
3513 | 302 | 208 | ||
3514 | @@ -308,13 +214,25 @@ | |||
3515 | 308 | log.debug("Charm extracted into container") | 214 | log.debug("Charm extracted into container") |
3516 | 309 | 215 | ||
3517 | 310 | # Write upstart file for the agent into the container | 216 | # Write upstart file for the agent into the container |
3523 | 311 | upstart_path = os.path.join( | 217 | service_name = "juju-%s" % self.unit_path_name |
3524 | 312 | self.directory, "etc", "init", | 218 | init_dir = os.path.join(self.directory, "etc", "init") |
3525 | 313 | "%s-unit-agent.conf" % self.unit_path_name) | 219 | service = UpstartService(service_name, init_dir=init_dir) |
3526 | 314 | with open(upstart_path, "w") as fh: | 220 | service.set_description( |
3527 | 315 | fh.write(self.get_upstart_unit_job(machine_id, zookeeper_hosts)) | 221 | "Juju unit agent for %s" % self.unit_name) |
3528 | 222 | service.set_environ(_get_environment( | ||
3529 | 223 | self.unit_name, "/var/lib/juju", machine_id, zookeeper_hosts)) | ||
3530 | 224 | service.set_output_path( | ||
3531 | 225 | "/var/log/juju/unit-%s-output.log" % self.unit_path_name) | ||
3532 | 226 | service.set_command(" ".join(( | ||
3533 | 227 | "/usr/bin/python", | ||
3534 | 228 | "-m", "juju.agents.unit", | ||
3535 | 229 | "--nodaemon", | ||
3536 | 230 | "--logfile", "/var/log/juju/unit-%s.log" % self.unit_path_name, | ||
3537 | 231 | "--session-file", | ||
3538 | 232 | "/var/run/juju/unit-%s-agent.zksession" % self.unit_path_name))) | ||
3539 | 233 | yield service.install() | ||
3540 | 316 | 234 | ||
3542 | 317 | # Create a symlink on the host for easier access to the unit log file | 235 | # Create symlinks on the host for easier access to the unit log files |
3543 | 318 | unit_log_path_host = os.path.join( | 236 | unit_log_path_host = os.path.join( |
3544 | 319 | self.juju_home, "units", self.unit_path_name, "unit.log") | 237 | self.juju_home, "units", self.unit_path_name, "unit.log") |
3545 | 320 | if not os.path.lexists(unit_log_path_host): | 238 | if not os.path.lexists(unit_log_path_host): |
3546 | @@ -322,6 +240,13 @@ | |||
3547 | 322 | os.path.join(self.directory, "var", "log", "juju", | 240 | os.path.join(self.directory, "var", "log", "juju", |
3548 | 323 | "unit-%s.log" % self.unit_path_name), | 241 | "unit-%s.log" % self.unit_path_name), |
3549 | 324 | unit_log_path_host) | 242 | unit_log_path_host) |
3550 | 243 | unit_output_path_host = os.path.join( | ||
3551 | 244 | self.juju_home, "units", self.unit_path_name, "output.log") | ||
3552 | 245 | if not os.path.lexists(unit_output_path_host): | ||
3553 | 246 | os.symlink( | ||
3554 | 247 | os.path.join(self.directory, "var", "log", "juju", | ||
3555 | 248 | "unit-%s-output.log" % self.unit_path_name), | ||
3556 | 249 | unit_output_path_host) | ||
3557 | 325 | 250 | ||
3558 | 326 | # Debug log for the container | 251 | # Debug log for the container |
3559 | 327 | container_log_path = os.path.join( | 252 | container_log_path = os.path.join( |
3560 | @@ -330,36 +255,22 @@ | |||
3561 | 330 | 255 | ||
3562 | 331 | log.debug("Starting container...") | 256 | log.debug("Starting container...") |
3563 | 332 | yield self.container.run() | 257 | yield self.container.run() |
3565 | 333 | log.info("Started container for %s" % self.unit_name) | 258 | log.info("Started container for %s", self.unit_name) |
3566 | 334 | 259 | ||
3567 | 335 | @inlineCallbacks | 260 | @inlineCallbacks |
3568 | 336 | def destroy(self): | 261 | def destroy(self): |
3571 | 337 | """Destroy the unit container. | 262 | """Destroy the unit container.""" |
3570 | 338 | """ | ||
3572 | 339 | log.debug("Destroying container...") | 263 | log.debug("Destroying container...") |
3573 | 340 | yield self.container.destroy() | 264 | yield self.container.destroy() |
3575 | 341 | log.info("Destroyed container for %s" % self.unit_name) | 265 | log.info("Destroyed container for %s", self.unit_name) |
3576 | 342 | 266 | ||
3577 | 343 | @inlineCallbacks | 267 | @inlineCallbacks |
3578 | 344 | def is_running(self): | 268 | def is_running(self): |
3586 | 345 | """Is the unit container running. | 269 | """Is the unit container running?""" |
3587 | 346 | """ | 270 | # TODO: container running may not imply agent running. |
3588 | 347 | # TODO: container running may not imply agent running. the | 271 | # query zookeeper for the unit agent presence node? |
3582 | 348 | # pid file has the pid from the container, we need a container | ||
3583 | 349 | # pid -> host pid mapping to query status from the machine agent. | ||
3584 | 350 | # alternatively querying zookeeper for the unit agent presence | ||
3585 | 351 | # node. | ||
3589 | 352 | if not self.container: | 272 | if not self.container: |
3590 | 353 | returnValue(False) | 273 | returnValue(False) |
3591 | 354 | container_map = yield get_containers( | 274 | container_map = yield get_containers( |
3592 | 355 | prefix=self.container.container_name) | 275 | prefix=self.container.container_name) |
3593 | 356 | returnValue(container_map.get(self.container.container_name, False)) | 276 | returnValue(container_map.get(self.container.container_name, False)) |
3594 | 357 | |||
3595 | 358 | def get_upstart_unit_job(self, machine_id, zookeeper_hosts): | ||
3596 | 359 | """Return a string containing the upstart job to start the unit agent. | ||
3597 | 360 | """ | ||
3598 | 361 | environ = self.get_environment(machine_id, zookeeper_hosts) | ||
3599 | 362 | # Keep qualified locations within the container for colo support | ||
3600 | 363 | environ["JUJU_HOME"] = "/var/lib/juju" | ||
3601 | 364 | environ["UNIT_PATH_NAME"] = self.unit_path_name | ||
3602 | 365 | return container_upstart_job_template % environ | ||
3603 | 366 | 277 | ||
3604 | === modified file 'juju/providers/common/cloudinit.py' | |||
3605 | --- juju/providers/common/cloudinit.py 2012-01-09 13:58:21 +0000 | |||
3606 | +++ juju/providers/common/cloudinit.py 2012-02-02 16:42:42 +0000 | |||
3607 | @@ -1,6 +1,7 @@ | |||
3608 | 1 | from subprocess import Popen, PIPE | 1 | from subprocess import Popen, PIPE |
3609 | 2 | 2 | ||
3610 | 3 | from juju.errors import CloudInitError | 3 | from juju.errors import CloudInitError |
3611 | 4 | from juju.lib.upstart import UpstartService | ||
3612 | 4 | from juju.providers.common.utils import format_cloud_init | 5 | from juju.providers.common.utils import format_cloud_init |
3613 | 5 | from juju.state.auth import make_identity | 6 | from juju.state.auth import make_identity |
3614 | 6 | import juju | 7 | import juju |
3615 | @@ -41,21 +42,26 @@ | |||
3616 | 41 | 42 | ||
3617 | 42 | 43 | ||
3618 | 43 | def _machine_scripts(machine_id, zookeeper_hosts): | 44 | def _machine_scripts(machine_id, zookeeper_hosts): |
3625 | 44 | return [ | 45 | service = UpstartService("juju-machine-agent") |
3626 | 45 | "JUJU_MACHINE_ID=%s JUJU_ZOOKEEPER=%s " | 46 | service.set_description("Juju machine agent") |
3627 | 46 | "python -m juju.agents.machine -n " | 47 | service.set_environ( |
3628 | 47 | "--logfile=/var/log/juju/machine-agent.log " | 48 | {"JUJU_MACHINE_ID": machine_id, "JUJU_ZOOKEEPER": zookeeper_hosts}) |
3629 | 48 | "--pidfile=/var/run/juju/machine-agent.pid" | 49 | service.set_command( |
3630 | 49 | % (machine_id, zookeeper_hosts)] | 50 | "python -m juju.agents.machine --nodaemon " |
3631 | 51 | "--logfile /var/log/juju/machine-agent.log " | ||
3632 | 52 | "--session-file /var/run/juju/machine-agent.zksession") | ||
3633 | 53 | return service.get_cloud_init_commands() | ||
3634 | 50 | 54 | ||
3635 | 51 | 55 | ||
3636 | 52 | def _provision_scripts(zookeeper_hosts): | 56 | def _provision_scripts(zookeeper_hosts): |
3643 | 53 | return [ | 57 | service = UpstartService("juju-provision-agent") |
3644 | 54 | "JUJU_ZOOKEEPER=%s " | 58 | service.set_description("Juju provisioning agent") |
3645 | 55 | "python -m juju.agents.provision -n " | 59 | service.set_environ({"JUJU_ZOOKEEPER": zookeeper_hosts}) |
3646 | 56 | "--logfile=/var/log/juju/provision-agent.log " | 60 | service.set_command( |
3647 | 57 | "--pidfile=/var/run/juju/provision-agent.pid" | 61 | "python -m juju.agents.provision --nodaemon " |
3648 | 58 | % zookeeper_hosts] | 62 | "--logfile /var/log/juju/provision-agent.log " |
3649 | 63 | "--session-file /var/run/juju/provision-agent.zksession") | ||
3650 | 64 | return service.get_cloud_init_commands() | ||
3651 | 59 | 65 | ||
3652 | 60 | 66 | ||
3653 | 61 | def _line_generator(data): | 67 | def _line_generator(data): |
3654 | @@ -64,6 +70,7 @@ | |||
3655 | 64 | if stripped: | 70 | if stripped: |
3656 | 65 | yield (len(line)-len(stripped), stripped) | 71 | yield (len(line)-len(stripped), stripped) |
3657 | 66 | 72 | ||
3658 | 73 | |||
3659 | 67 | def parse_juju_origin(data): | 74 | def parse_juju_origin(data): |
3660 | 68 | next = _line_generator(data).next | 75 | next = _line_generator(data).next |
3661 | 69 | try: | 76 | try: |
3662 | 70 | 77 | ||
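With this change, `_machine_scripts` and `_provision_scripts` both delegate to `UpstartService.get_cloud_init_commands`, which is why the fixture files below grow embedded upstart jobs inside their `runcmd` lists. A sketch of what such a method plausibly returns, written as a hypothetical free function (the names and exact heredoc shape are assumptions based on the fixtures):

```python
def get_cloud_init_commands(name, conf_body):
    """Turn a rendered upstart job into cloud-init runcmd entries:
    write the .conf via a shell heredoc, then start the service.
    (Hypothetical standalone helper; in the branch this is a method
    on UpstartService.)"""
    return [
        "cat >> /etc/init/%s.conf <<EOF\n%s\nEOF\n" % (name, conf_body),
        "/sbin/start %s" % name,
    ]


commands = get_cloud_init_commands(
    "juju-machine-agent", 'description "Juju machine agent"')
```

The two-entry shape matches the fixtures: one long quoted heredoc scalar followed by a bare `/sbin/start juju-machine-agent` command.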
3663 | === modified file 'juju/providers/common/tests/data/cloud_init_bootstrap' | |||
3664 | --- juju/providers/common/tests/data/cloud_init_bootstrap 2012-01-09 14:17:21 +0000 | |||
3665 | +++ juju/providers/common/tests/data/cloud_init_bootstrap 2012-02-02 16:42:42 +0000 | |||
3666 | @@ -4,12 +4,58 @@ | |||
3667 | 4 | machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'localhost:2181', | 4 | machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'localhost:2181', |
3668 | 5 | machine-id: passport} | 5 | machine-id: passport} |
3669 | 6 | output: {all: '| tee -a /var/log/cloud-init-output.log'} | 6 | output: {all: '| tee -a /var/log/cloud-init-output.log'} |
3678 | 7 | packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, | 7 | packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper, |
3679 | 8 | python-zookeeper, default-jre-headless, zookeeper, zookeeperd] | 8 | default-jre-headless, zookeeper, zookeeperd] |
3680 | 9 | runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir | 9 | runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir -p |
3681 | 10 | -p /var/log/juju, 'juju-admin initialize --instance-id=token --admin-identity=admin:19vlzY4Vc3q4Ew5OsCwKYqrq1HI= --provider-type=dummy', | 10 | /var/log/juju, 'juju-admin initialize --instance-id=token --admin-identity=admin:19vlzY4Vc3q4Ew5OsCwKYqrq1HI= |
3682 | 11 | 'JUJU_MACHINE_ID=passport JUJU_ZOOKEEPER=localhost:2181 python -m juju.agents.machine | 11 | --provider-type=dummy', 'cat >> /etc/init/juju-machine-agent.conf <<EOF |
3683 | 12 | -n --logfile=/var/log/juju/machine-agent.log --pidfile=/var/run/juju/machine-agent.pid', | 12 | |
3684 | 13 | 'JUJU_ZOOKEEPER=localhost:2181 python -m juju.agents.provision -n --logfile=/var/log/juju/provision-agent.log | 13 | description "Juju machine agent" |
3685 | 14 | --pidfile=/var/run/juju/provision-agent.pid'] | 14 | |
3686 | 15 | author "Juju Team <juju@lists.ubuntu.com>" | ||
3687 | 16 | |||
3688 | 17 | |||
3689 | 18 | start on runlevel [2345] | ||
3690 | 19 | |||
3691 | 20 | stop on runlevel [!2345] | ||
3692 | 21 | |||
3693 | 22 | respawn | ||
3694 | 23 | |||
3695 | 24 | |||
3696 | 25 | env JUJU_MACHINE_ID="passport" | ||
3697 | 26 | |||
3698 | 27 | env JUJU_ZOOKEEPER="localhost:2181" | ||
3699 | 28 | |||
3700 | 29 | |||
3701 | 30 | exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log | ||
3702 | 31 | --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output | ||
3703 | 32 | 2>&1 | ||
3704 | 33 | |||
3705 | 34 | EOF | ||
3706 | 35 | |||
3707 | 36 | ', /sbin/start juju-machine-agent, 'cat >> /etc/init/juju-provision-agent.conf | ||
3708 | 37 | <<EOF | ||
3709 | 38 | |||
3710 | 39 | description "Juju provisioning agent" | ||
3711 | 40 | |||
3712 | 41 | author "Juju Team <juju@lists.ubuntu.com>" | ||
3713 | 42 | |||
3714 | 43 | |||
3715 | 44 | start on runlevel [2345] | ||
3716 | 45 | |||
3717 | 46 | stop on runlevel [!2345] | ||
3718 | 47 | |||
3719 | 48 | respawn | ||
3720 | 49 | |||
3721 | 50 | |||
3722 | 51 | env JUJU_ZOOKEEPER="localhost:2181" | ||
3723 | 52 | |||
3724 | 53 | |||
3725 | 54 | exec python -m juju.agents.provision --nodaemon --logfile /var/log/juju/provision-agent.log | ||
3726 | 55 | --session-file /var/run/juju/provision-agent.zksession >> /tmp/juju-provision-agent.output | ||
3727 | 56 | 2>&1 | ||
3728 | 57 | |||
3729 | 58 | EOF | ||
3730 | 59 | |||
3731 | 60 | ', /sbin/start juju-provision-agent] | ||
3732 | 15 | ssh_authorized_keys: [chubb] | 61 | ssh_authorized_keys: [chubb] |
3733 | 16 | 62 | ||
3734 | === modified file 'juju/providers/common/tests/data/cloud_init_bootstrap_zookeepers' | |||
3735 | --- juju/providers/common/tests/data/cloud_init_bootstrap_zookeepers 2012-01-09 14:17:21 +0000 | |||
3736 | +++ juju/providers/common/tests/data/cloud_init_bootstrap_zookeepers 2012-02-02 16:42:42 +0000 | |||
3737 | @@ -4,13 +4,58 @@ | |||
3738 | 4 | machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'cotswold:2181,longleat:2181,localhost:2181', | 4 | machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'cotswold:2181,longleat:2181,localhost:2181', |
3739 | 5 | machine-id: passport} | 5 | machine-id: passport} |
3740 | 6 | output: {all: '| tee -a /var/log/cloud-init-output.log'} | 6 | output: {all: '| tee -a /var/log/cloud-init-output.log'} |
3750 | 7 | packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, | 7 | packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper, |
3751 | 8 | python-zookeeper, default-jre-headless, zookeeper, zookeeperd] | 8 | default-jre-headless, zookeeper, zookeeperd] |
3752 | 9 | runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir | 9 | runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir -p |
3753 | 10 | -p /var/log/juju, 'juju-admin initialize --instance-id=token --admin-identity=admin:19vlzY4Vc3q4Ew5OsCwKYqrq1HI= --provider-type=dummy', | 10 | /var/log/juju, 'juju-admin initialize --instance-id=token --admin-identity=admin:19vlzY4Vc3q4Ew5OsCwKYqrq1HI= |
3754 | 11 | 'JUJU_MACHINE_ID=passport JUJU_ZOOKEEPER=cotswold:2181,longleat:2181,localhost:2181 | 11 | --provider-type=dummy', 'cat >> /etc/init/juju-machine-agent.conf <<EOF |
3755 | 12 | python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log | 12 | |
3756 | 13 | --pidfile=/var/run/juju/machine-agent.pid', 'JUJU_ZOOKEEPER=cotswold:2181,longleat:2181,localhost:2181 | 13 | description "Juju machine agent" |
3757 | 14 | python -m juju.agents.provision -n --logfile=/var/log/juju/provision-agent.log | 14 | |
3758 | 15 | --pidfile=/var/run/juju/provision-agent.pid'] | 15 | author "Juju Team <juju@lists.ubuntu.com>" |
3759 | 16 | |||
3760 | 17 | |||
3761 | 18 | start on runlevel [2345] | ||
3762 | 19 | |||
3763 | 20 | stop on runlevel [!2345] | ||
3764 | 21 | |||
3765 | 22 | respawn | ||
3766 | 23 | |||
3767 | 24 | |||
3768 | 25 | env JUJU_MACHINE_ID="passport" | ||
3769 | 26 | |||
3770 | 27 | env JUJU_ZOOKEEPER="cotswold:2181,longleat:2181,localhost:2181" | ||
3771 | 28 | |||
3772 | 29 | |||
3773 | 30 | exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log | ||
3774 | 31 | --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output | ||
3775 | 32 | 2>&1 | ||
3776 | 33 | |||
3777 | 34 | EOF | ||
3778 | 35 | |||
3779 | 36 | ', /sbin/start juju-machine-agent, 'cat >> /etc/init/juju-provision-agent.conf | ||
3780 | 37 | <<EOF | ||
3781 | 38 | |||
3782 | 39 | description "Juju provisioning agent" | ||
3783 | 40 | |||
3784 | 41 | author "Juju Team <juju@lists.ubuntu.com>" | ||
3785 | 42 | |||
3786 | 43 | |||
3787 | 44 | start on runlevel [2345] | ||
3788 | 45 | |||
3789 | 46 | stop on runlevel [!2345] | ||
3790 | 47 | |||
3791 | 48 | respawn | ||
3792 | 49 | |||
3793 | 50 | |||
3794 | 51 | env JUJU_ZOOKEEPER="cotswold:2181,longleat:2181,localhost:2181" | ||
3795 | 52 | |||
3796 | 53 | |||
3797 | 54 | exec python -m juju.agents.provision --nodaemon --logfile /var/log/juju/provision-agent.log | ||
3798 | 55 | --session-file /var/run/juju/provision-agent.zksession >> /tmp/juju-provision-agent.output | ||
3799 | 56 | 2>&1 | ||
3800 | 57 | |||
3801 | 58 | EOF | ||
3802 | 59 | |||
3803 | 60 | ', /sbin/start juju-provision-agent] | ||
3804 | 16 | ssh_authorized_keys: [chubb] | 61 | ssh_authorized_keys: [chubb] |
3805 | 17 | 62 | ||
3806 | === modified file 'juju/providers/common/tests/data/cloud_init_branch' | |||
3807 | --- juju/providers/common/tests/data/cloud_init_branch 2012-01-09 14:17:21 +0000 | |||
3808 | +++ juju/providers/common/tests/data/cloud_init_branch 2012-02-02 16:42:42 +0000 | |||
3809 | @@ -6,12 +6,34 @@ | |||
3810 | 6 | machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'cotswold:2181,longleat:2181', | 6 | machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'cotswold:2181,longleat:2181', |
3811 | 7 | machine-id: passport} | 7 | machine-id: passport} |
3812 | 8 | output: {all: '| tee -a /var/log/cloud-init-output.log'} | 8 | output: {all: '| tee -a /var/log/cloud-init-output.log'} |
3815 | 9 | packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, | 9 | packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper] |
3814 | 10 | python-zookeeper] | ||
3816 | 11 | runcmd: [sudo apt-get install -y python-txzookeeper, sudo mkdir -p /usr/lib/juju, | 10 | runcmd: [sudo apt-get install -y python-txzookeeper, sudo mkdir -p /usr/lib/juju, |
3822 | 12 | 'cd /usr/lib/juju && sudo /usr/bin/bzr co lp:blah/juju/blah-blah juju', | 11 | 'cd /usr/lib/juju && sudo /usr/bin/bzr co lp:blah/juju/blah-blah juju', cd /usr/lib/juju/juju |
3823 | 13 | cd /usr/lib/juju/juju && sudo python setup.py develop, sudo mkdir -p /var/lib/juju, | 12 | && sudo python setup.py develop, sudo mkdir -p /var/lib/juju, sudo mkdir -p /var/log/juju, |
3824 | 14 | sudo mkdir -p /var/log/juju, 'JUJU_MACHINE_ID=passport JUJU_ZOOKEEPER=cotswold:2181,longleat:2181 | 13 | 'cat >> /etc/init/juju-machine-agent.conf <<EOF |
3825 | 15 | python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log | 14 | |
3826 | 16 | --pidfile=/var/run/juju/machine-agent.pid'] | 15 | description "Juju machine agent" |
3827 | 16 | |||
3828 | 17 | author "Juju Team <juju@lists.ubuntu.com>" | ||
3829 | 18 | |||
3830 | 19 | |||
3831 | 20 | start on runlevel [2345] | ||
3832 | 21 | |||
3833 | 22 | stop on runlevel [!2345] | ||
3834 | 23 | |||
3835 | 24 | respawn | ||
3836 | 25 | |||
3837 | 26 | |||
3838 | 27 | env JUJU_MACHINE_ID="passport" | ||
3839 | 28 | |||
3840 | 29 | env JUJU_ZOOKEEPER="cotswold:2181,longleat:2181" | ||
3841 | 30 | |||
3842 | 31 | |||
3843 | 32 | exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log | ||
3844 | 33 | --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output | ||
3845 | 34 | 2>&1 | ||
3846 | 35 | |||
3847 | 36 | EOF | ||
3848 | 37 | |||
3849 | 38 | ', /sbin/start juju-machine-agent] | ||
3850 | 17 | ssh_authorized_keys: [chubb] | 39 | ssh_authorized_keys: [chubb] |
3851 | 18 | 40 | ||
3852 | === modified file 'juju/providers/common/tests/data/cloud_init_branch_trunk' | |||
3853 | --- juju/providers/common/tests/data/cloud_init_branch_trunk 2012-01-09 14:17:21 +0000 | |||
3854 | +++ juju/providers/common/tests/data/cloud_init_branch_trunk 2012-02-02 16:42:42 +0000 | |||
3855 | @@ -6,12 +6,34 @@ | |||
3856 | 6 | machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'cotswold:2181,longleat:2181', | 6 | machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'cotswold:2181,longleat:2181', |
3857 | 7 | machine-id: passport} | 7 | machine-id: passport} |
3858 | 8 | output: {all: '| tee -a /var/log/cloud-init-output.log'} | 8 | output: {all: '| tee -a /var/log/cloud-init-output.log'} |
3861 | 9 | packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, | 9 | packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper] |
3860 | 10 | python-zookeeper] | ||
3862 | 11 | runcmd: [sudo apt-get install -y python-txzookeeper, sudo mkdir -p /usr/lib/juju, | 10 | runcmd: [sudo apt-get install -y python-txzookeeper, sudo mkdir -p /usr/lib/juju, |
3868 | 12 | 'cd /usr/lib/juju && sudo /usr/bin/bzr co lp:juju juju', | 11 | 'cd /usr/lib/juju && sudo /usr/bin/bzr co lp:juju juju', cd /usr/lib/juju/juju && |
3869 | 13 | cd /usr/lib/juju/juju && sudo python setup.py develop, sudo mkdir -p /var/lib/juju, | 12 | sudo python setup.py develop, sudo mkdir -p /var/lib/juju, sudo mkdir -p /var/log/juju, |
3870 | 14 | sudo mkdir -p /var/log/juju, 'JUJU_MACHINE_ID=passport JUJU_ZOOKEEPER=cotswold:2181,longleat:2181 | 13 | 'cat >> /etc/init/juju-machine-agent.conf <<EOF |
3871 | 15 | python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log | 14 | |
3872 | 16 | --pidfile=/var/run/juju/machine-agent.pid'] | 15 | description "Juju machine agent" |
3873 | 16 | |||
3874 | 17 | author "Juju Team <juju@lists.ubuntu.com>" | ||
3875 | 18 | |||
3876 | 19 | |||
3877 | 20 | start on runlevel [2345] | ||
3878 | 21 | |||
3879 | 22 | stop on runlevel [!2345] | ||
3880 | 23 | |||
3881 | 24 | respawn | ||
3882 | 25 | |||
3883 | 26 | |||
3884 | 27 | env JUJU_MACHINE_ID="passport" | ||
3885 | 28 | |||
3886 | 29 | env JUJU_ZOOKEEPER="cotswold:2181,longleat:2181" | ||
3887 | 30 | |||
3888 | 31 | |||
3889 | 32 | exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log | ||
3890 | 33 | --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output | ||
3891 | 34 | 2>&1 | ||
3892 | 35 | |||
3893 | 36 | EOF | ||
3894 | 37 | |||
3895 | 38 | ', /sbin/start juju-machine-agent] | ||
3896 | 17 | ssh_authorized_keys: [chubb] | 39 | ssh_authorized_keys: [chubb] |
3897 | 18 | 40 | ||
3898 | === modified file 'juju/providers/common/tests/data/cloud_init_distro' | |||
3899 | --- juju/providers/common/tests/data/cloud_init_distro 2012-01-09 14:17:21 +0000 | |||
3900 | +++ juju/providers/common/tests/data/cloud_init_distro 2012-02-02 16:42:42 +0000 | |||
3901 | @@ -4,10 +4,32 @@ | |||
3902 | 4 | machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'cotswold:2181,longleat:2181', | 4 | machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'cotswold:2181,longleat:2181', |
3903 | 5 | machine-id: passport} | 5 | machine-id: passport} |
3904 | 6 | output: {all: '| tee -a /var/log/cloud-init-output.log'} | 6 | output: {all: '| tee -a /var/log/cloud-init-output.log'} |
3911 | 7 | packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, | 7 | packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper] |
3912 | 8 | python-zookeeper] | 8 | runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir -p |
3913 | 9 | runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir | 9 | /var/log/juju, 'cat >> /etc/init/juju-machine-agent.conf <<EOF |
3914 | 10 | -p /var/log/juju, 'JUJU_MACHINE_ID=passport JUJU_ZOOKEEPER=cotswold:2181,longleat:2181 | 10 | |
3915 | 11 | python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log | 11 | description "Juju machine agent" |
3916 | 12 | --pidfile=/var/run/juju/machine-agent.pid'] | 12 | |
3917 | 13 | author "Juju Team <juju@lists.ubuntu.com>" | ||
3918 | 14 | |||
3919 | 15 | |||
3920 | 16 | start on runlevel [2345] | ||
3921 | 17 | |||
3922 | 18 | stop on runlevel [!2345] | ||
3923 | 19 | |||
3924 | 20 | respawn | ||
3925 | 21 | |||
3926 | 22 | |||
3927 | 23 | env JUJU_MACHINE_ID="passport" | ||
3928 | 24 | |||
3929 | 25 | env JUJU_ZOOKEEPER="cotswold:2181,longleat:2181" | ||
3930 | 26 | |||
3931 | 27 | |||
3932 | 28 | exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log | ||
3933 | 29 | --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output | ||
3934 | 30 | 2>&1 | ||
3935 | 31 | |||
3936 | 32 | EOF | ||
3937 | 33 | |||
3938 | 34 | ', /sbin/start juju-machine-agent] | ||
3939 | 13 | ssh_authorized_keys: [chubb] | 35 | ssh_authorized_keys: [chubb] |
3940 | 14 | 36 | ||
3941 | === modified file 'juju/providers/common/tests/data/cloud_init_ppa' | |||
3942 | --- juju/providers/common/tests/data/cloud_init_ppa 2012-01-09 14:17:21 +0000 | |||
3943 | +++ juju/providers/common/tests/data/cloud_init_ppa 2012-02-02 16:42:42 +0000 | |||
3944 | @@ -2,14 +2,36 @@ | |||
3945 | 2 | apt-update: true | 2 | apt-update: true |
3946 | 3 | apt-upgrade: true | 3 | apt-upgrade: true |
3947 | 4 | apt_sources: | 4 | apt_sources: |
3949 | 5 | - {'source': 'ppa:juju/pkgs'} | 5 | - {source: 'ppa:juju/pkgs'} |
3950 | 6 | machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'cotswold:2181,longleat:2181', | 6 | machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'cotswold:2181,longleat:2181', |
3951 | 7 | machine-id: passport} | 7 | machine-id: passport} |
3952 | 8 | output: {all: '| tee -a /var/log/cloud-init-output.log'} | 8 | output: {all: '| tee -a /var/log/cloud-init-output.log'} |
3959 | 9 | packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, | 9 | packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper] |
3960 | 10 | python-zookeeper] | 10 | runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir -p |
3961 | 11 | runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir | 11 | /var/log/juju, 'cat >> /etc/init/juju-machine-agent.conf <<EOF |
3962 | 12 | -p /var/log/juju, 'JUJU_MACHINE_ID=passport JUJU_ZOOKEEPER=cotswold:2181,longleat:2181 | 12 | |
3963 | 13 | python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log | 13 | description "Juju machine agent" |
3964 | 14 | --pidfile=/var/run/juju/machine-agent.pid'] | 14 | |
3965 | 15 | author "Juju Team <juju@lists.ubuntu.com>" | ||
3966 | 16 | |||
3967 | 17 | |||
3968 | 18 | start on runlevel [2345] | ||
3969 | 19 | |||
3970 | 20 | stop on runlevel [!2345] | ||
3971 | 21 | |||
3972 | 22 | respawn | ||
3973 | 23 | |||
3974 | 24 | |||
3975 | 25 | env JUJU_MACHINE_ID="passport" | ||
3976 | 26 | |||
3977 | 27 | env JUJU_ZOOKEEPER="cotswold:2181,longleat:2181" | ||
3978 | 28 | |||
3979 | 29 | |||
3980 | 30 | exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log | ||
3981 | 31 | --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output | ||
3982 | 32 | 2>&1 | ||
3983 | 33 | |||
3984 | 34 | EOF | ||
3985 | 35 | |||
3986 | 36 | ', /sbin/start juju-machine-agent] | ||
3987 | 15 | ssh_authorized_keys: [chubb] | 37 | ssh_authorized_keys: [chubb] |
3988 | 16 | 38 | ||
3989 | === modified file 'juju/providers/common/tests/test_cloudinit.py' | |||
3990 | --- juju/providers/common/tests/test_cloudinit.py 2011-10-04 21:22:48 +0000 | |||
3991 | +++ juju/providers/common/tests/test_cloudinit.py 2012-02-02 16:42:42 +0000 | |||
3992 | @@ -44,8 +44,9 @@ | |||
3993 | 44 | def assert_render(self, cloud_init, name): | 44 | def assert_render(self, cloud_init, name): |
3994 | 45 | with open(os.path.join(DATA_DIR, name)) as f: | 45 | with open(os.path.join(DATA_DIR, name)) as f: |
3995 | 46 | expected = yaml.load(f.read()) | 46 | expected = yaml.load(f.read()) |
3998 | 47 | obtained = yaml.load(cloud_init.render()) | 47 | rendered = cloud_init.render() |
3999 | 48 | self.assertEquals(obtained, expected) | 48 | self.assertTrue(rendered.startswith("#cloud-config")) |
4000 | 49 | self.assertEquals(yaml.load(rendered), expected) | ||
4001 | 49 | 50 | ||
4002 | 50 | def test_render_validate_normal(self): | 51 | def test_render_validate_normal(self): |
4003 | 51 | cloud_init = CloudInit() | 52 | cloud_init = CloudInit() |
4004 | 52 | 53 | ||
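The `assert_render` change explains why all the cloud-init fixtures above could be freely reflowed: the comparison is on parsed YAML rather than raw text, so line wrapping and quoting are irrelevant, while the `#cloud-config` header must be checked on the raw string because YAML discards it as a comment. A small sketch of that two-part check (PyYAML assumed available, as it is in the juju test suite):

```python
import yaml  # PyYAML

# Same document, wrapped differently: parsed forms compare equal.
rendered = "#cloud-config\npackages: [bzr, byobu,\n  tmux]\n"
expected = "packages: [bzr, byobu, tmux]\n"

# yaml.safe_load drops the "#cloud-config" comment line, so parsed
# equality says nothing about the header; check it separately.
header_ok = rendered.startswith("#cloud-config")
parsed_equal = yaml.safe_load(rendered) == yaml.safe_load(expected)
```

This is why the new test adds the explicit `startswith("#cloud-config")` assertion alongside the existing parsed-YAML equality.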
4005 | === modified file 'juju/providers/ec2/tests/data/bootstrap_cloud_init' | |||
4006 | --- juju/providers/ec2/tests/data/bootstrap_cloud_init 2012-01-09 14:17:21 +0000 | |||
4007 | +++ juju/providers/ec2/tests/data/bootstrap_cloud_init 2012-02-02 16:42:42 +0000 | |||
4008 | @@ -1,16 +1,61 @@ | |||
4009 | 1 | #cloud-config | 1 | #cloud-config |
4010 | 2 | apt-update: true | 2 | apt-update: true |
4011 | 3 | apt-upgrade: true | 3 | apt-upgrade: true |
4014 | 4 | machine-data: {juju-provider-type: ec2, juju-zookeeper-hosts: 'localhost:2181', | 4 | machine-data: {juju-provider-type: ec2, juju-zookeeper-hosts: 'localhost:2181', machine-id: '0'} |
4013 | 5 | machine-id: '0'} | ||
4015 | 6 | output: {all: '| tee -a /var/log/cloud-init-output.log'} | 5 | output: {all: '| tee -a /var/log/cloud-init-output.log'} |
4025 | 7 | packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, | 6 | packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper, |
4026 | 8 | python-zookeeper, default-jre-headless, zookeeper, zookeeperd] | 7 | default-jre-headless, zookeeper, zookeeperd] |
4027 | 9 | runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir | 8 | runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir -p |
4028 | 10 | -p /var/log/juju, 'juju-admin initialize --instance-id=$(curl http://169.254.169.254/1.0/meta-data/instance-id) | 9 | /var/log/juju, 'juju-admin initialize --instance-id=$(curl http://169.254.169.254/1.0/meta-data/instance-id) |
-  --admin-identity=admin:JbJ6sDGV37EHzbG9FPvttk64cmg= --provider-type=ec2', 'JUJU_MACHINE_ID=0 JUJU_ZOOKEEPER=localhost:2181
-  python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log
-  --pidfile=/var/run/juju/machine-agent.pid', 'JUJU_ZOOKEEPER=localhost:2181
-  python -m juju.agents.provision -n --logfile=/var/log/juju/provision-agent.log
-  --pidfile=/var/run/juju/provision-agent.pid']
+  --admin-identity=admin:JbJ6sDGV37EHzbG9FPvttk64cmg= --provider-type=ec2', 'cat
+  >> /etc/init/juju-machine-agent.conf <<EOF
+
+  description "Juju machine agent"
+
+  author "Juju Team <juju@lists.ubuntu.com>"
+
+
+  start on runlevel [2345]
+
+  stop on runlevel [!2345]
+
+  respawn
+
+
+  env JUJU_MACHINE_ID="0"
+
+  env JUJU_ZOOKEEPER="localhost:2181"
+
+
+  exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log
+  --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output
+  2>&1
+
+  EOF
+
+  ', /sbin/start juju-machine-agent, 'cat >> /etc/init/juju-provision-agent.conf
+  <<EOF
+
+  description "Juju provisioning agent"
+
+  author "Juju Team <juju@lists.ubuntu.com>"
+
+
+  start on runlevel [2345]
+
+  stop on runlevel [!2345]
+
+  respawn
+
+
+  env JUJU_ZOOKEEPER="localhost:2181"
+
+
+  exec python -m juju.agents.provision --nodaemon --logfile /var/log/juju/provision-agent.log
+  --session-file /var/run/juju/provision-agent.zksession >> /tmp/juju-provision-agent.output
+  2>&1
+
+  EOF
+
+  ', /sbin/start juju-provision-agent]
 ssh_authorized_keys: [zebra]
 
=== modified file 'juju/providers/ec2/tests/data/launch_cloud_init'
--- juju/providers/ec2/tests/data/launch_cloud_init 2012-01-09 14:17:21 +0000
+++ juju/providers/ec2/tests/data/launch_cloud_init 2012-02-02 16:42:42 +0000
@@ -4,10 +4,32 @@
 machine-data: {juju-provider-type: ec2, juju-zookeeper-hosts: 'es.example.internal:2181',
   machine-id: '1'}
 output: {all: '| tee -a /var/log/cloud-init-output.log'}
-packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws,
-  python-zookeeper]
-runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir
-  -p /var/log/juju, 'JUJU_MACHINE_ID=1 JUJU_ZOOKEEPER=es.example.internal:2181
-  python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log
-  --pidfile=/var/run/juju/machine-agent.pid']
+packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper]
+runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir -p
+  /var/log/juju, 'cat >> /etc/init/juju-machine-agent.conf <<EOF
+
+  description "Juju machine agent"
+
+  author "Juju Team <juju@lists.ubuntu.com>"
+
+
+  start on runlevel [2345]
+
+  stop on runlevel [!2345]
+
+  respawn
+
+
+  env JUJU_MACHINE_ID="1"
+
+  env JUJU_ZOOKEEPER="es.example.internal:2181"
+
+
+  exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log
+  --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output
+  2>&1
+
+  EOF
+
+  ', /sbin/start juju-machine-agent]
 ssh_authorized_keys: [zebra]
 
=== modified file 'juju/providers/ec2/tests/data/launch_cloud_init_branch'
--- juju/providers/ec2/tests/data/launch_cloud_init_branch 2012-01-09 14:17:21 +0000
+++ juju/providers/ec2/tests/data/launch_cloud_init_branch 2012-02-02 16:42:42 +0000
@@ -6,15 +6,34 @@
 machine-data: {juju-provider-type: ec2, juju-zookeeper-hosts: 'es.example.internal:2181',
   machine-id: '1'}
 output: {all: '| tee -a /var/log/cloud-init-output.log'}
-packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws,
-  python-zookeeper]
-runcmd: [sudo apt-get install -y python-txzookeeper,
-  sudo mkdir -p /usr/lib/juju,
-  'cd /usr/lib/juju && sudo /usr/bin/bzr co lp:~wizard/juju-juicebar juju',
-  cd /usr/lib/juju/juju && sudo python setup.py develop,
-  sudo mkdir -p /var/lib/juju,
-  sudo mkdir -p /var/log/juju,
-  'JUJU_MACHINE_ID=1 JUJU_ZOOKEEPER=es.example.internal:2181
-  python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log
-  --pidfile=/var/run/juju/machine-agent.pid']
+packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper]
+runcmd: [sudo apt-get install -y python-txzookeeper, sudo mkdir -p /usr/lib/juju,
+  'cd /usr/lib/juju && sudo /usr/bin/bzr co lp:~wizard/juju-juicebar juju', cd /usr/lib/juju/juju
+  && sudo python setup.py develop, sudo mkdir -p /var/lib/juju, sudo mkdir -p /var/log/juju,
+  'cat >> /etc/init/juju-machine-agent.conf <<EOF
+
+  description "Juju machine agent"
+
+  author "Juju Team <juju@lists.ubuntu.com>"
+
+
+  start on runlevel [2345]
+
+  stop on runlevel [!2345]
+
+  respawn
+
+
+  env JUJU_MACHINE_ID="1"
+
+  env JUJU_ZOOKEEPER="es.example.internal:2181"
+
+
+  exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log
+  --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output
+  2>&1
+
+  EOF
+
+  ', /sbin/start juju-machine-agent]
 ssh_authorized_keys: [zebra]
 
=== modified file 'juju/providers/ec2/tests/data/launch_cloud_init_ppa'
--- juju/providers/ec2/tests/data/launch_cloud_init_ppa 2012-01-09 14:17:21 +0000
+++ juju/providers/ec2/tests/data/launch_cloud_init_ppa 2012-02-02 16:42:42 +0000
@@ -6,10 +6,32 @@
 machine-data: {juju-provider-type: ec2, juju-zookeeper-hosts: 'es.example.internal:2181',
   machine-id: '1'}
 output: {all: '| tee -a /var/log/cloud-init-output.log'}
-packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws,
-  python-zookeeper]
-runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir
-  -p /var/log/juju, 'JUJU_MACHINE_ID=1 JUJU_ZOOKEEPER=es.example.internal:2181
-  python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log
-  --pidfile=/var/run/juju/machine-agent.pid']
+packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper]
+runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir -p
+  /var/log/juju, 'cat >> /etc/init/juju-machine-agent.conf <<EOF
+
+  description "Juju machine agent"
+
+  author "Juju Team <juju@lists.ubuntu.com>"
+
+
+  start on runlevel [2345]
+
+  stop on runlevel [!2345]
+
+  respawn
+
+
+  env JUJU_MACHINE_ID="1"
+
+  env JUJU_ZOOKEEPER="es.example.internal:2181"
+
+
+  exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log
+  --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output
+  2>&1
+
+  EOF
+
+  ', /sbin/start juju-machine-agent]
 ssh_authorized_keys: [zebra]
 
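All four data files above embed the same Upstart job template into a cloud-init `runcmd` heredoc. As a reading aid, the template's structure (not code from this branch; a hypothetical helper under the assumption that only the description, env vars, and exec command vary) can be sketched as:

```python
def render_upstart_conf(description, env, command):
    """Render an Upstart job body shaped like the ones in the
    cloud-init test data: description/author header, runlevel
    start/stop stanzas, respawn, env vars, then the exec line."""
    lines = [
        'description "%s"' % description,
        'author "Juju Team <juju@lists.ubuntu.com>"',
        "start on runlevel [2345]",
        "stop on runlevel [!2345]",
        "respawn",
    ]
    # One `env` stanza per variable, values double-quoted as in the data files.
    lines.extend('env %s="%s"' % (k, v) for k, v in sorted(env.items()))
    lines.append("exec %s" % command)
    return "\n\n".join(lines) + "\n"

conf = render_upstart_conf(
    "Juju machine agent",
    {"JUJU_MACHINE_ID": "1", "JUJU_ZOOKEEPER": "es.example.internal:2181"},
    "python -m juju.agents.machine --nodaemon")
```

The blank-line spacing between stanzas is what makes the expected-output data files above look so sparse.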
=== modified file 'juju/providers/local/__init__.py'
--- juju/providers/local/__init__.py 2011-11-16 13:56:03 +0000
+++ juju/providers/local/__init__.py 2012-02-02 16:42:42 +0000
@@ -102,11 +102,11 @@
         # Starting provider storage server
         log.info("Starting storage server...")
         storage_server = StorageServer(
-            pid_file=os.path.join(self._directory, "storage-server.pid"),
+            self._qualified_name,
             storage_dir=os.path.join(self._directory, "files"),
             host=net_attributes["ip"]["address"],
             port=get_open_port(net_attributes["ip"]["address"]),
-            log_file=os.path.join(self._directory, "storage-server.log"))
+            logfile=os.path.join(self._directory, "storage-server.log"))
         yield storage_server.start()
 
         # Save the zookeeper start to provider storage.
@@ -130,17 +130,15 @@
             raise ProviderError(str(e))
 
         # Startup the machine agent
-        pid_file = os.path.join(self._directory, "machine-agent.pid")
         log_file = os.path.join(self._directory, "machine-agent.log")
 
         juju_origin = self.config.get("juju-origin")
-        agent = ManagedMachineAgent(pid_file,
+        agent = ManagedMachineAgent(self._qualified_name,
                                     zookeeper_hosts=zookeeper.address,
                                     machine_id="0",
                                     juju_directory=self._directory,
                                     log_file=log_file,
                                     juju_origin=juju_origin,
-                                    juju_unit_namespace=self._qualified_name,
                                     public_key=public_key)
         log.info(
             "Starting machine agent (origin: %s)... ", agent.juju_origin)
@@ -158,14 +156,12 @@
 
         # Stop the machine agent
         log.debug("Stopping machine agent...")
-        pid_file = os.path.join(self._directory, "machine-agent.pid")
-        agent = ManagedMachineAgent(pid_file)
+        agent = ManagedMachineAgent(self._qualified_name)
         yield agent.stop()
 
         # Stop the storage server
         log.debug("Stopping storage server...")
-        pid_file = os.path.join(self._directory, "storage-server.pid")
-        storage_server = StorageServer(pid_file)
+        storage_server = StorageServer(self._qualified_name)
         yield storage_server.stop()
 
         # Stop zookeeper
 
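The hunk above switches both managed services from pid-file paths to the provider's qualified name, so start and stop can reconstruct the same service identity independently. Under the naming scheme visible in the diff ("juju-&lt;namespace&gt;-machine-agent" / "juju-&lt;namespace&gt;-file-storage"), the mapping can be sketched as (illustrative helper, not part of this branch):

```python
def service_names(qualified_name):
    """Derive the Upstart job names the local provider keys on its
    qualified name, matching the pattern used in the diff above."""
    return {
        "machine-agent": "juju-%s-machine-agent" % qualified_name,
        "file-storage": "juju-%s-file-storage" % qualified_name,
    }

# Example: a hypothetical local environment namespace.
names = service_names("alice-local")
```

Because the name is deterministic, `destroy_environment` no longer needs the pid files written at bootstrap time to find the processes to stop.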
=== modified file 'juju/providers/local/agent.py'
--- juju/providers/local/agent.py 2011-10-05 12:14:41 +0000
+++ juju/providers/local/agent.py 2012-02-02 16:42:42 +0000
@@ -1,12 +1,6 @@
-import errno
-import os
-import pipes
-import subprocess
 import sys
 
-from twisted.internet.defer import inlineCallbacks, returnValue
-from twisted.internet.threads import deferToThread
-
+from juju.lib.upstart import UpstartService
 from juju.providers.common.cloudinit import get_default_origin, BRANCH
 
 
@@ -15,11 +9,10 @@
     agent_module = "juju.agents.machine"
 
     def __init__(
-        self, pid_file, zookeeper_hosts=None, machine_id="0",
-        log_file=None, juju_directory="/var/lib/juju",
-        juju_unit_namespace="", public_key=None, juju_origin="ppa"):
+        self, juju_unit_namespace, zookeeper_hosts=None,
+        machine_id="0", log_file=None, juju_directory="/var/lib/juju",
+        public_key=None, juju_origin="ppa"):
         """
-        :param pid_file: Path to file used to store process id.
         :param machine_id: machine id for the local machine.
         :param zookeeper_hosts: Zookeeper hosts to connect.
         :param log_file: A file to use for the agent logs.
@@ -31,95 +24,46 @@
         :param public_key: An SSH public key (string) that will be
             used in the container for access.
         """
-        self._pid_file = pid_file
-        self._machine_id = machine_id
-        self._zookeeper_hosts = zookeeper_hosts
-        self._juju_directory = juju_directory
-        self._juju_unit_namespace = juju_unit_namespace
-        self._log_file = log_file
-        self._public_key = public_key
         self._juju_origin = juju_origin
-
         if self._juju_origin is None:
             origin, source = get_default_origin()
             if origin == BRANCH:
                 origin = source
             self._juju_origin = origin
 
+        env = {"JUJU_MACHINE_ID": machine_id,
+               "JUJU_ZOOKEEPER": zookeeper_hosts,
+               "JUJU_HOME": juju_directory,
+               "JUJU_ORIGIN": self._juju_origin,
+               "JUJU_UNIT_NS": juju_unit_namespace,
+               "PYTHONPATH": ":".join(sys.path)}
+        if public_key:
+            env["JUJU_PUBLIC_KEY"] = public_key
+
+        self._service = UpstartService(
+            "juju-%s-machine-agent" % juju_unit_namespace, use_sudo=True)
+        self._service.set_description(
+            "Juju machine agent for %s" % juju_unit_namespace)
+        self._service.set_environ(env)
+        self._service_args = [
+            "/usr/bin/python", "-m", self.agent_module,
+            "--nodaemon", "--logfile", log_file,
+            "--session-file",
+            "/var/run/juju/%s-machine-agent.zksession" % juju_unit_namespace]
+
     @property
     def juju_origin(self):
         return self._juju_origin
 
-    @inlineCallbacks
     def start(self):
-        """Start the machine agent.
-        """
-        assert self._zookeeper_hosts and self._log_file
-
-        if (yield self.is_running()):
-            return
-
-        # sudo even with -E will strip pythonpath, so pass it directly
-        # to the command.
-        args = ["sudo",
-                "JUJU_ZOOKEEPER=%s" % self._zookeeper_hosts,
-                "JUJU_ORIGIN=%s" % self._juju_origin,
-                "JUJU_MACHINE_ID=%s" % self._machine_id,
-                "JUJU_HOME=%s" % self._juju_directory,
-                "JUJU_UNIT_NS=%s" % self._juju_unit_namespace,
-                "PYTHONPATH=%s" % ":".join(sys.path),
-                sys.executable, "-m", self.agent_module,
-                "-n", "--pidfile", self._pid_file,
-                "--logfile", self._log_file]
-
-        if self._public_key:
-            args.insert(
-                1, "JUJU_PUBLIC_KEY=%s" % pipes.quote(self._public_key))
-
-        yield deferToThread(subprocess.check_call, args)
-
-    @inlineCallbacks
+        """Start the machine agent."""
+        self._service.set_command(" ".join(self._service_args))
+        return self._service.start()
+
     def stop(self):
-        """Stop the machine agent.
-        """
-        pid = yield self._get_pid()
-        if pid is None:
-            return
-
-        # Verify the cmdline before attempting to kill.
-        try:
-            with open("/proc/%s/cmdline" % pid) as cmd_file:
-                cmdline = cmd_file.read()
-                if self.agent_module not in cmdline:
-                    raise RuntimeError("Mismatch cmdline")
-        except IOError, e:
-            # Process already died.
-            if e.errno == errno.ENOENT:
-                return
-
-        yield deferToThread(
-            subprocess.check_call, ["sudo", "kill", str(pid)])
-
-    @inlineCallbacks
-    def _get_pid(self):
-        """Return the agent process id or None.
-        """
-        # Default root pidfile mask is restrictive
-        try:
-            pid = yield deferToThread(
-                subprocess.check_output,
-                ["sudo", "cat", self._pid_file],
-                stderr=subprocess.STDOUT)
-        except subprocess.CalledProcessError:
-            return
-        if not pid:
-            return
-        returnValue(int(pid.strip()))
-
-    @inlineCallbacks
+        """Stop the machine agent."""
+        return self._service.destroy()
+
     def is_running(self):
         """Boolean value, true if the machine agent is running."""
-        pid = yield self._get_pid()
-        if pid is None:
-            returnValue(False)
-        returnValue(os.path.isdir("/proc/%s" % pid))
+        return self._service.is_running()
 
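After this change, the machine agent's `exec` line is simply `self._service_args` joined with spaces and handed to `UpstartService.set_command`. A standalone sketch of that assembly, using example values for the namespace and log path (the originals are supplied by the provider at runtime):

```python
agent_module = "juju.agents.machine"
namespace = "ns1"                               # example namespace
log_file = "/var/log/juju/machine-agent.log"    # example log path

# Mirrors ManagedMachineAgent._service_args in the diff above.
service_args = [
    "/usr/bin/python", "-m", agent_module,
    "--nodaemon", "--logfile", log_file,
    "--session-file",
    "/var/run/juju/%s-machine-agent.zksession" % namespace]

# UpstartService receives this as the job's exec command.
command = " ".join(service_args)
```

Note the pid file management is gone entirely: Upstart tracks the process itself, and the `--session-file` flag replaces `--pidfile` so the agent can recover its ZooKeeper session across restarts.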
=== modified file 'juju/providers/local/files.py'
--- juju/providers/local/files.py 2011-10-07 18:19:58 +0000
+++ juju/providers/local/files.py 2012-02-02 16:42:42 +0000
@@ -1,13 +1,14 @@
-import errno
+from getpass import getuser
 import os
-import signal
 from StringIO import StringIO
-import subprocess
 import yaml
 
 from twisted.internet.defer import inlineCallbacks, returnValue
+from twisted.internet.error import ConnectionRefusedError
+from twisted.web.client import getPage
 
 from juju.errors import ProviderError, FileNotFound
+from juju.lib.upstart import UpstartService
 from juju.providers.common.files import FileStorage
 
 
@@ -16,22 +17,46 @@
 
 class StorageServer(object):
 
-    def __init__(
-        self, pid_file, storage_dir=None, host=None, port=None, log_file=None):
+    def __init__(self, juju_unit_namespace, storage_dir=None,
+                 host=None, port=None, logfile=None):
         """Management facade for a web server on top of the provider storage.
 
-        :param pid_file: Path to the web server pid file.
+        :param juju_unit_namespace: For disambiguation.
         :param host: Host interface to bind to.
         :param port: Port to bind to.
-        :param log_file: Path to store log output.
+        :param logfile: Path to store log output.
        """
         if storage_dir:
             storage_dir = os.path.abspath(storage_dir)
         self._storage_dir = storage_dir
         self._host = host
         self._port = port
-        self._pid_file = pid_file
-        self._log_file = log_file
+        self._logfile = logfile
+
+        self._service = UpstartService(
+            "juju-%s-file-storage" % juju_unit_namespace, use_sudo=True)
+        self._service.set_description(
+            "Juju file storage for %s" % juju_unit_namespace)
+        self._service_args = [
+            "twistd",
+            "--nodaemon",
+            "--uid", str(os.getuid()),
+            "--gid", str(os.getgid()),
+            "--logfile", logfile,
+            "--pidfile=",
+            "-d", self._storage_dir,
+            "web",
+            "--port", "tcp:%s:interface=%s" % (self._port, self._host),
+            "--path", self._storage_dir]
+
+    @inlineCallbacks
+    def is_serving(self):
+        try:
+            storage = LocalStorage(self._storage_dir)
+            yield getPage((yield storage.get_url(SERVER_URL_KEY)))
+            returnValue(True)
+        except ConnectionRefusedError:
+            returnValue(False)
 
     @inlineCallbacks
     def start(self):
@@ -39,44 +64,33 @@
 
         Also stores the storage server url directly into provider storage.
         """
-        assert (self._storage_dir and self._host
-            and self._port and self._log_file), "Missing start params."
+        assert self._storage_dir, "no storage_dir set"
+        assert self._host, "no host set"
+        assert self._port, "no port set"
+        assert None not in self._service_args, "unset params"
         assert os.path.exists(self._storage_dir), "Invalid storage directory"
+        try:
+            with open(self._logfile, "a"):
+                pass
+        except IOError:
+            raise AssertionError("logfile not writable by this user")
 
         storage = LocalStorage(self._storage_dir)
         yield storage.put(
             SERVER_URL_KEY,
             StringIO(yaml.safe_dump(
-                {"storage-url": "http://%s:%s/" % (
-                    self._host, self._port)})))
-
-        subprocess.check_output(
-            ["twistd",
-             "--pidfile", self._pid_file,
-             "--logfile", self._log_file,
-             "-d", self._storage_dir,
-             "web", "--port",
-             "tcp:%s:interface=%s" % (self._port, self._host),
-             "--path", self._storage_dir])
+                {"storage-url": "http://%s:%s/" % (self._host, self._port)})))
+
+        self._service.set_command(" ".join(self._service_args))
+        yield self._service.start()
+
+    def get_pid(self):
+        return self._service.get_pid()
 
     def stop(self):
-        """Stop the storage server.
-        """
-        try:
-            with open(self._pid_file) as pid_file:
-                pid = int(pid_file.read().strip())
-        except IOError:
-            # No pid, move on
-            return
-
-        try:
-            os.kill(pid, 0)
-        except OSError, e:
-            if e.errno == errno.ESRCH:  # No such process, already dead.
-                return
-            raise
-
-        os.kill(pid, signal.SIGKILL)
+        """Stop the storage server."""
+        return self._service.destroy()
 
 
 class LocalStorage(FileStorage):
 
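The storage server is likewise reduced to a `twistd web` command run under Upstart. A standalone sketch of the argument assembly with example values (the real host, port, and paths come from the provider):

```python
import os

storage_dir = "/tmp/files"              # example values, supplied by the
host, port = "127.0.0.1", 8080          # provider at runtime
logfile = "/tmp/storage-server.log"

# Mirrors StorageServer._service_args in the diff above. "--pidfile="
# (empty value) disables twistd's pid file, since Upstart now tracks
# the process; --uid/--gid drop the sudo-spawned server back to the
# invoking user so it can read the storage directory.
service_args = [
    "twistd",
    "--nodaemon",
    "--uid", str(os.getuid()),
    "--gid", str(os.getgid()),
    "--logfile", logfile,
    "--pidfile=",
    "-d", storage_dir,
    "web",
    "--port", "tcp:%s:interface=%s" % (port, host),
    "--path", storage_dir]

command = " ".join(service_args)
```

The new `is_serving` check then verifies liveness end-to-end by fetching the stored storage-url over HTTP, rather than probing a pid.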
4568 | === modified file 'juju/providers/local/tests/test_agent.py' | |||
4569 | --- juju/providers/local/tests/test_agent.py 2011-10-05 12:14:41 +0000 | |||
4570 | +++ juju/providers/local/tests/test_agent.py 2012-02-02 16:42:42 +0000 | |||
4571 | @@ -1,10 +1,13 @@ | |||
4572 | 1 | import os | 1 | import os |
4573 | 2 | import tempfile | 2 | import tempfile |
4574 | 3 | import subprocess | 3 | import subprocess |
4575 | 4 | import sys | ||
4576 | 5 | |||
4577 | 4 | from twisted.internet.defer import inlineCallbacks, succeed | 6 | from twisted.internet.defer import inlineCallbacks, succeed |
4578 | 5 | 7 | ||
4579 | 8 | from juju.lib.lxc.tests.test_lxc import uses_sudo | ||
4580 | 6 | from juju.lib.testing import TestCase | 9 | from juju.lib.testing import TestCase |
4582 | 7 | from juju.lib.lxc.tests.test_lxc import run_lxc_tests | 10 | from juju.lib.upstart import UpstartService |
4583 | 8 | from juju.tests.common import get_test_zookeeper_address | 11 | from juju.tests.common import get_test_zookeeper_address |
4584 | 9 | from juju.providers.local.agent import ManagedMachineAgent | 12 | from juju.providers.local.agent import ManagedMachineAgent |
4585 | 10 | 13 | ||
4586 | @@ -12,61 +15,93 @@ | |||
4587 | 12 | class ManagedAgentTest(TestCase): | 15 | class ManagedAgentTest(TestCase): |
4588 | 13 | 16 | ||
4589 | 14 | @inlineCallbacks | 17 | @inlineCallbacks |
4599 | 15 | def test_managed_agent_args(self): | 18 | def test_managed_agent_config(self): |
4600 | 16 | 19 | subprocess_calls = [] | |
4601 | 17 | captured_args = [] | 20 | |
4602 | 18 | 21 | def intercept_args(args, **kwargs): | |
4603 | 19 | def intercept_args(args): | 22 | subprocess_calls.append(args) |
4604 | 20 | captured_args.extend(args) | 23 | self.assertEquals(args[0], "sudo") |
4605 | 21 | return True | 24 | if args[1] == "cp": |
4606 | 22 | 25 | return real_check_call(args[1:], **kwargs) | |
4607 | 23 | self.patch(subprocess, "check_call", intercept_args) | 26 | return 0 |
4608 | 27 | |||
4609 | 28 | real_check_call = self.patch(subprocess, "check_call", intercept_args) | ||
4610 | 29 | init_dir = self.makeDir() | ||
4611 | 30 | self.patch(UpstartService, "init_dir", init_dir) | ||
4612 | 31 | |||
4613 | 32 | # Mock out the repeated checking for unstable pid, after an initial | ||
4614 | 33 | # stop/waiting to induce the actual start | ||
4615 | 34 | getProcessOutput = self.mocker.replace( | ||
4616 | 35 | "twisted.internet.utils.getProcessOutput") | ||
4617 | 36 | getProcessOutput("/sbin/status", ["juju-ns1-machine-agent"]) | ||
4618 | 37 | self.mocker.result(succeed("stop/waiting")) | ||
4619 | 38 | for _ in range(5): | ||
4620 | 39 | getProcessOutput("/sbin/status", ["juju-ns1-machine-agent"]) | ||
4621 | 40 | self.mocker.result(succeed("start/running 123")) | ||
4622 | 41 | self.mocker.replay() | ||
4623 | 24 | 42 | ||
4624 | 25 | juju_directory = self.makeDir() | 43 | juju_directory = self.makeDir() |
4625 | 26 | pid_file = self.makeFile() | ||
4626 | 27 | log_file = self.makeFile() | 44 | log_file = self.makeFile() |
4627 | 28 | |||
4628 | 29 | agent = ManagedMachineAgent( | 45 | agent = ManagedMachineAgent( |
4630 | 30 | pid_file, get_test_zookeeper_address(), | 46 | "ns1", |
4631 | 47 | get_test_zookeeper_address(), | ||
4632 | 31 | juju_directory=juju_directory, | 48 | juju_directory=juju_directory, |
4634 | 32 | log_file=log_file, juju_unit_namespace="ns1", | 49 | log_file=log_file, |
4635 | 33 | juju_origin="lp:juju/trunk") | 50 | juju_origin="lp:juju/trunk") |
4636 | 34 | 51 | ||
4641 | 35 | mock_agent = self.mocker.patch(agent) | 52 | try: |
4642 | 36 | mock_agent.is_running() | 53 | os.remove("/tmp/juju-ns1-machine-agent.output") |
4643 | 37 | self.mocker.result(succeed(False)) | 54 | except OSError: |
4644 | 38 | self.mocker.replay() | 55 | pass # just make sure it's not there, so the .start() |
4645 | 56 | # doesn't insert a spurious rm | ||
4646 | 39 | 57 | ||
4647 | 40 | self.assertEqual(agent.juju_origin, "lp:juju/trunk") | ||
4648 | 41 | yield agent.start() | 58 | yield agent.start() |
4649 | 42 | 59 | ||
4662 | 43 | # Verify machine agent environment | 60 | conf_dest = os.path.join( |
4663 | 44 | env_vars = dict( | 61 | init_dir, "juju-ns1-machine-agent.conf") |
4664 | 45 | [arg.split("=") for arg in captured_args if "=" in arg]) | 62 | chmod, start = subprocess_calls[1:] |
-        env_vars.pop("PYTHONPATH")
-        self.assertEqual(
-            env_vars,
-            dict(JUJU_ZOOKEEPER=get_test_zookeeper_address(),
-                 JUJU_MACHINE_ID="0",
-                 JUJU_HOME=juju_directory,
-                 JUJU_ORIGIN="lp:juju/trunk",
-                 JUJU_UNIT_NS="ns1"))
-
+        self.assertEquals(chmod, ("sudo", "chmod", "644", conf_dest))
+        self.assertEquals(
+            start, ("sudo", "/sbin/start", "juju-ns1-machine-agent"))
+
+        env = []
+        with open(conf_dest) as f:
+            for line in f:
+                if line.startswith("env"):
+                    env.append(line[4:-1].split("=", 1))
+                if line.startswith("exec"):
+                    exec_ = line[5:-1]
+
+        expect_exec = (
+            "/usr/bin/python -m juju.agents.machine --nodaemon --logfile %s "
+            "--session-file /var/run/juju/ns1-machine-agent.zksession "
+            ">> /tmp/juju-ns1-machine-agent.output 2>&1"
+            % log_file)
+        self.assertEquals(exec_, expect_exec)
+
+        env = dict((k, v.strip('"')) for (k, v) in env)
+        self.assertEquals(env, {
+            "JUJU_ZOOKEEPER": get_test_zookeeper_address(),
+            "JUJU_MACHINE_ID": "0",
+            "JUJU_HOME": juju_directory,
+            "JUJU_ORIGIN": "lp:juju/trunk",
+            "JUJU_UNIT_NS": "ns1",
+            "PYTHONPATH": ":".join(sys.path)})
+
+    @uses_sudo
     @inlineCallbacks
     def test_managed_agent_root(self):
         juju_directory = self.makeDir()
-        pid_file = tempfile.mktemp()
         log_file = tempfile.mktemp()
 
         # The pid file and log file get written as root
         def cleanup_root_file(cleanup_file):
             subprocess.check_call(
                 ["sudo", "rm", "-f", cleanup_file], stderr=subprocess.STDOUT)
-        self.addCleanup(cleanup_root_file, pid_file)
         self.addCleanup(cleanup_root_file, log_file)
 
         agent = ManagedMachineAgent(
-            pid_file, machine_id="0", log_file=log_file,
+            "test-ns", machine_id="0", log_file=log_file,
             zookeeper_hosts=get_test_zookeeper_address(),
             juju_directory=juju_directory)
 
@@ -85,13 +120,3 @@
 
         # running stop again is fine, detects the process is stopped.
         yield agent.stop()
-
-        self.assertFalse(os.path.exists(pid_file))
-
-        # Stop raises runtime error if the process doesn't match up.
-        with open(pid_file, "w") as pid_handle:
-            pid_handle.write("1")
-        self.assertFailure(agent.stop(), RuntimeError)
-
-        # Reuse the lxc flag for tests needing sudo
-        test_managed_agent_root.skip = run_lxc_tests()
 
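The hunk above asserts on the generated upstart conf by scanning it for `env` and `exec` stanzas by line prefix. A minimal standalone sketch of that parsing step (the function name and sample conf are illustrative, not part of juju):

```python
def parse_upstart_conf(text):
    """Pick out env vars and the exec command from an upstart job body."""
    env = {}
    command = None
    for line in text.splitlines():
        if line.startswith("env "):
            # 'env KEY="value"' -> strip the prefix, split once on '='
            key, _, value = line[4:].partition("=")
            env[key] = value.strip('"')
        elif line.startswith("exec "):
            command = line[5:]
    return env, command


conf = """\
description "Juju machine agent"
env JUJU_MACHINE_ID="0"
env JUJU_ZOOKEEPER="localhost:2181"
exec python -m juju.agents.machine --nodaemon
"""
env, command = parse_upstart_conf(conf)
print(env["JUJU_MACHINE_ID"])  # -> 0
```

The test itself keeps the raw `(key, value)` pairs in a list and builds the dict afterwards; collecting straight into a dict as above is equivalent for well-formed conf bodies.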
=== modified file 'juju/providers/local/tests/test_files.py'
--- juju/providers/local/tests/test_files.py	2011-10-07 18:19:58 +0000
+++ juju/providers/local/tests/test_files.py	2012-02-02 16:42:42 +0000
@@ -1,14 +1,16 @@
 import os
+import signal
 from StringIO import StringIO
+import subprocess
 import yaml
 
-from twisted.internet.defer import inlineCallbacks
+from twisted.internet.defer import inlineCallbacks, succeed
 from twisted.web.client import getPage
 
+from juju.errors import ProviderError, ServiceError
+from juju.lib.lxc.tests.test_lxc import uses_sudo
 from juju.lib.testing import TestCase
-
-
-from juju.errors import ProviderError
+from juju.lib.upstart import UpstartService
 from juju.providers.local.files import (
     LocalStorage, StorageServer, SERVER_URL_KEY)
 from juju.state.utils import get_open_port
@@ -16,38 +18,151 @@
 
 class WebFileStorageTest(TestCase):
 
+    @inlineCallbacks
     def setUp(self):
+        yield super(WebFileStorageTest, self).setUp()
         self._storage_path = self.makeDir()
+        self._logfile = self.makeFile()
         self._storage = LocalStorage(self._storage_path)
-        self._log_path = self.makeFile()
-        self._pid_path = self.makeFile()
         self._port = get_open_port()
         self._server = StorageServer(
-            self._pid_path, self._storage_path, "localhost",
-            get_open_port(), self._log_path)
+            "ns1", self._storage_path, "localhost", self._port, self._logfile)
 
     @inlineCallbacks
-    def test_start_stop(self):
-        yield self._storage.put("abc", StringIO("hello world"))
-        yield self._server.start()
-        storage_url = yield self._storage.get_url("abc")
-        contents = yield getPage(storage_url)
-        self.assertEqual("hello world", contents)
-        self._server.stop()
-        # Stopping multiple times is fine.
-        self._server.stop()
+    def wait_for_server(self, server):
+        while not (yield server.is_serving()):
+            yield self.sleep(0.1)
 
     def test_start_missing_args(self):
-        server = StorageServer(self._pid_path)
+        server = StorageServer("ns1", self._storage_path)
         return self.assertFailure(server.start(), AssertionError)
 
     def test_start_invalid_directory(self):
         os.rmdir(self._storage_path)
         return self.assertFailure(self._server.start(), AssertionError)
 
-    def test_stop_missing_pid(self):
-        server = StorageServer(self._pid_path)
-        server.stop()
+    @inlineCallbacks
+    def test_upstart(self):
+        subprocess_calls = []
+
+        def intercept_args(args, **kwargs):
+            subprocess_calls.append(args)
+            self.assertEquals(args[0], "sudo")
+            if args[1] == "cp":
+                return real_check_call(args[1:], **kwargs)
+            return 0
+
+        real_check_call = self.patch(subprocess, "check_call", intercept_args)
+        init_dir = self.makeDir()
+        self.patch(UpstartService, "init_dir", init_dir)
+
+        # Mock out the repeated checking for unstable pid, after an initial
+        # stop/waiting to induce the actual start
+        getProcessOutput = self.mocker.replace(
+            "twisted.internet.utils.getProcessOutput")
+        getProcessOutput("/sbin/status", ["juju-ns1-file-storage"])
+        self.mocker.result(succeed("stop/waiting"))
+        for _ in range(5):
+            getProcessOutput("/sbin/status", ["juju-ns1-file-storage"])
+            self.mocker.result(succeed("start/running 123"))
+        self.mocker.replay()
+
+        try:
+            os.remove("/tmp/juju-ns1-file-storage.output")
+        except OSError:
+            pass  # just make sure it's not there, so the .start()
+                  # doesn't insert a spurious rm
+
+        yield self._server.start()
+        chmod = subprocess_calls[1]
+        conf_dest = os.path.join(init_dir, "juju-ns1-file-storage.conf")
+        self.assertEquals(chmod, ("sudo", "chmod", "644", conf_dest))
+        start = subprocess_calls[-1]
+        self.assertEquals(
+            start, ("sudo", "/sbin/start", "juju-ns1-file-storage"))
+
+        with open(conf_dest) as f:
+            for line in f:
+                if line.startswith("env"):
+                    self.fail("didn't expect any special environment")
+                if line.startswith("exec"):
+                    exec_ = line[5:].strip()
+
+        expect_exec = (
+            "twistd --nodaemon --uid %s --gid %s --logfile %s --pidfile= -d "
+            "%s web --port tcp:%s:interface=localhost --path %s >> "
+            "/tmp/juju-ns1-file-storage.output 2>&1"
+            % (os.getuid(), os.getgid(), self._logfile, self._storage_path,
+               self._port, self._storage_path))
+        self.assertEquals(exec_, expect_exec)
+
+    @uses_sudo
+    @inlineCallbacks
+    def test_start_stop(self):
+        yield self._storage.put("abc", StringIO("hello world"))
+        yield self._server.start()
+        # Starting multiple times is fine.
+        yield self._server.start()
+        storage_url = yield self._storage.get_url("abc")
+
+        # It might not have started actually accepting connections yet...
+        yield self.wait_for_server(self._server)
+        self.assertEqual((yield getPage(storage_url)), "hello world")
+
+        # Check that it can be killed by the current user (ie, is not running
+        # as root) and still comes back up
+        old_pid = yield self._server.get_pid()
+        os.kill(old_pid, signal.SIGKILL)
+        new_pid = yield self._server.get_pid()
+        self.assertNotEquals(old_pid, new_pid)
+
+        # Give it a moment to actually start serving again
+        yield self.wait_for_server(self._server)
+        self.assertEqual((yield getPage(storage_url)), "hello world")
+
+        yield self._server.stop()
+        # Stopping multiple times is fine too.
+        yield self._server.stop()
+
+    @uses_sudo
+    @inlineCallbacks
+    def test_namespacing(self):
+        alt_storage_path = self.makeDir()
+        alt_storage = LocalStorage(alt_storage_path)
+        yield alt_storage.put("some-path", StringIO("alternative"))
+        yield self._storage.put("some-path", StringIO("original"))
+
+        alt_server = StorageServer(
+            "ns2", alt_storage_path, "localhost", get_open_port(),
+            self.makeFile())
+        yield alt_server.start()
+        yield self._server.start()
+        yield self.wait_for_server(alt_server)
+        yield self.wait_for_server(self._server)
+
+        alt_contents = yield getPage(
+            (yield alt_storage.get_url("some-path")))
+        self.assertEquals(alt_contents, "alternative")
+        orig_contents = yield getPage(
+            (yield self._storage.get_url("some-path")))
+        self.assertEquals(orig_contents, "original")
+
+        yield alt_server.stop()
+        yield self._server.stop()
+
+    @uses_sudo
+    @inlineCallbacks
+    def test_capture_errors(self):
+        self._port = get_open_port()
+        self._server = StorageServer(
+            "borken", self._storage_path, "lol borken", self._port,
+            self._logfile)
+        d = self._server.start()
+        e = yield self.assertFailure(d, ServiceError)
+        self.assertTrue(str(e).startswith(
            "Failed to start job juju-borken-file-storage; got output:\n"))
+        self.assertIn("Wrong number of arguments", str(e))
+        yield self._server.stop()
 
 
 class FileStorageTest(TestCase):
 
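The `wait_for_server` helper added above polls `is_serving()` because `start()` returning does not mean the server is accepting connections yet. A synchronous Python 3 sketch of the same idea, polling a raw TCP port instead of the juju API (the function name and timeouts are illustrative):

```python
import socket
import time


def wait_for_port(host, port, timeout=10.0, interval=0.1):
    """Poll until something accepts TCP connections on (host, port).

    Returns True once a connection succeeds, False if the deadline passes.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            # A successful connect means the listener is up; close it at once.
            socket.create_connection((host, port), timeout=interval).close()
            return True
        except OSError:
            time.sleep(interval)
    return False
```

As in the tests, the caller retries on a short interval rather than sleeping once for a fixed period, so a fast-starting server is detected quickly while a slow one still gets the full timeout.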
=== modified file 'juju/providers/orchestra/tests/data/bootstrap_user_data'
--- juju/providers/orchestra/tests/data/bootstrap_user_data	2012-01-09 14:17:21 +0000
+++ juju/providers/orchestra/tests/data/bootstrap_user_data	2012-02-02 16:42:42 +0000
@@ -1,15 +1,60 @@
-#cloud-config
 apt-update: true
 apt-upgrade: true
 machine-data: {juju-provider-type: orchestra, juju-zookeeper-hosts: 'localhost:2181',
   machine-id: '0'}
 output: {all: '| tee -a /var/log/cloud-init-output.log'}
-packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws,
-  python-zookeeper, default-jre-headless, zookeeper, zookeeperd]
+packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper,
+  default-jre-headless, zookeeper, zookeeperd]
 runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir -p
-  /var/log/juju, 'juju-admin initialize --instance-id=winston-uid --admin-identity=admin:qRBXC1ubEEUqRL6wcBhgmc9xkaY= --provider-type=orchestra',
-  'JUJU_MACHINE_ID=0 JUJU_ZOOKEEPER=localhost:2181 python -m juju.agents.machine -n
-  --logfile=/var/log/juju/machine-agent.log --pidfile=/var/run/juju/machine-agent.pid',
-  'JUJU_ZOOKEEPER=localhost:2181 python -m juju.agents.provision -n --logfile=/var/log/juju/provision-agent.log
-  --pidfile=/var/run/juju/provision-agent.pid']
+  /var/log/juju, 'juju-admin initialize --instance-id=winston-uid --admin-identity=admin:qRBXC1ubEEUqRL6wcBhgmc9xkaY=
+  --provider-type=orchestra', 'cat >> /etc/init/juju-machine-agent.conf <<EOF
+
+  description "Juju machine agent"
+
+  author "Juju Team <juju@lists.ubuntu.com>"
+
+
+  start on runlevel [2345]
+
+  stop on runlevel [!2345]
+
+  respawn
+
+
+  env JUJU_MACHINE_ID="0"
+
+  env JUJU_ZOOKEEPER="localhost:2181"
+
+
+  exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log
+  --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output
+  2>&1
+
+  EOF
+
+  ', /sbin/start juju-machine-agent, 'cat >> /etc/init/juju-provision-agent.conf
+  <<EOF
+
+  description "Juju provisioning agent"
+
+  author "Juju Team <juju@lists.ubuntu.com>"
+
+
+  start on runlevel [2345]
+
+  stop on runlevel [!2345]
+
+  respawn
+
+
+  env JUJU_ZOOKEEPER="localhost:2181"
+
+
+  exec python -m juju.agents.provision --nodaemon --logfile /var/log/juju/provision-agent.log
+  --session-file /var/run/juju/provision-agent.zksession >> /tmp/juju-provision-agent.output
+  2>&1
+
+  EOF
+
+  ', /sbin/start juju-provision-agent]
 ssh_authorized_keys: [this-is-a-public-key]
 
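The cloud-init data above now writes upstart jobs (description, start/stop on runlevel, respawn, `env`, `exec`) instead of bare shell commands with inline environment variables. An abridged sketch of rendering such a job body (the real implementation is `UpstartService` in `juju/lib/upstart.py`; this function name and the stanza subset are illustrative):

```python
def render_upstart_job(description, env, command):
    """Render a minimal upstart .conf body like the ones in the user-data."""
    lines = [
        'description "%s"' % description,
        'author "Juju Team <juju@lists.ubuntu.com>"',
        "start on runlevel [2345]",
        "stop on runlevel [!2345]",
        "respawn",
    ]
    # One 'env' stanza per variable; sorted for deterministic output.
    for key, value in sorted(env.items()):
        lines.append('env %s="%s"' % (key, value))
    lines.append("exec %s" % command)
    return "\n".join(lines) + "\n"


conf = render_upstart_job(
    "Juju machine agent",
    {"JUJU_MACHINE_ID": "0", "JUJU_ZOOKEEPER": "localhost:2181"},
    "python -m juju.agents.machine --nodaemon")
```

Moving the environment into `env` stanzas and letting upstart own the process (respawn, runlevel lifecycle) is what lets the agents survive reboots, which the old `JUJU_...=... python -m ...` runcmd entries could not do.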
=== modified file 'juju/providers/orchestra/tests/data/launch_user_data'
--- juju/providers/orchestra/tests/data/launch_user_data	2012-01-09 14:17:21 +0000
+++ juju/providers/orchestra/tests/data/launch_user_data	2012-02-02 16:42:42 +0000
@@ -1,12 +1,34 @@
-#cloud-config
 apt-update: true
The diff has been truncated for viewing.