Merge lp:~fwereade/pyjuju/restore-unit-relation-nodes into lp:pyjuju
Proposed by: William Reade
Status: Superseded
Proposed branch: lp:~fwereade/pyjuju/restore-unit-relation-nodes
Merge into: lp:pyjuju
Diff against target: 7460 lines (+3429/-1783), 54 files modified:

- docs/source/internals/unit-agent-persistence.rst (+138/-0)
- examples/oneiric/mysql/hooks/install (+2/-2)
- examples/oneiric/mysql/hooks/start (+3/-1)
- examples/oneiric/mysql/hooks/stop (+1/-1)
- juju/agents/base.py (+56/-3)
- juju/agents/tests/common.py (+1/-0)
- juju/agents/tests/test_base.py (+174/-29)
- juju/agents/tests/test_machine.py (+3/-4)
- juju/agents/tests/test_unit.py (+71/-136)
- juju/agents/unit.py (+42/-114)
- juju/control/options.py (+4/-5)
- juju/control/status.py (+2/-0)
- juju/control/tests/test_resolved.py (+4/-4)
- juju/control/tests/test_status.py (+3/-1)
- juju/errors.py (+14/-1)
- juju/hooks/scheduler.py (+102/-48)
- juju/hooks/tests/test_scheduler.py (+258/-41)
- juju/lib/lxc/tests/test_lxc.py (+12/-5)
- juju/lib/tests/data/test_basic_install (+10/-0)
- juju/lib/tests/data/test_less_basic_install (+11/-0)
- juju/lib/tests/data/test_standard_install (+10/-0)
- juju/lib/tests/test_statemachine.py (+14/-0)
- juju/lib/tests/test_upstart.py (+339/-0)
- juju/lib/upstart.py (+166/-0)
- juju/machine/tests/test_unit_deployment.py (+192/-238)
- juju/machine/unit.py (+101/-190)
- juju/providers/common/cloudinit.py (+19/-12)
- juju/providers/common/tests/data/cloud_init_bootstrap (+54/-8)
- juju/providers/common/tests/data/cloud_init_bootstrap_zookeepers (+54/-9)
- juju/providers/common/tests/data/cloud_init_branch (+29/-7)
- juju/providers/common/tests/data/cloud_init_branch_trunk (+29/-7)
- juju/providers/common/tests/data/cloud_init_distro (+28/-6)
- juju/providers/common/tests/data/cloud_init_ppa (+29/-7)
- juju/providers/common/tests/test_cloudinit.py (+3/-2)
- juju/providers/ec2/tests/data/bootstrap_cloud_init (+56/-11)
- juju/providers/ec2/tests/data/launch_cloud_init (+28/-6)
- juju/providers/ec2/tests/data/launch_cloud_init_branch (+30/-11)
- juju/providers/ec2/tests/data/launch_cloud_init_ppa (+28/-6)
- juju/providers/local/__init__.py (+5/-9)
- juju/providers/local/agent.py (+32/-88)
- juju/providers/local/files.py (+53/-39)
- juju/providers/local/tests/test_agent.py (+69/-44)
- juju/providers/local/tests/test_files.py (+136/-21)
- juju/providers/orchestra/tests/data/bootstrap_user_data (+53/-8)
- juju/providers/orchestra/tests/data/launch_user_data (+27/-5)
- juju/state/relation.py (+35/-30)
- juju/state/service.py (+10/-10)
- juju/state/tests/test_relation.py (+312/-391)
- juju/state/tests/test_security.py (+1/-0)
- juju/tests/test_errors.py (+37/-9)
- juju/unit/lifecycle.py (+173/-72)
- juju/unit/tests/test_lifecycle.py (+116/-38)
- juju/unit/tests/test_workflow.py (+196/-76)
- juju/unit/workflow.py (+54/-28)
To merge this branch: bzr merge lp:~fwereade/pyjuju/restore-unit-relation-nodes
Related bugs:

Reviewer | Review Type | Date Requested | Status
---|---|---|---
Juju Engineering | | | Pending

Review via email: mp+91307@code.launchpad.net
This proposal has been superseded by a proposal from 2012-02-02.
Commit message
Description of the change
When reconstructing unit relation state in UnitLifecycle, ensure presence nodes are created for any relations which are not already departed.
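The change described above can be illustrated with a small, self-contained sketch. This is not the pyjuju implementation (which works asynchronously against ZooKeeper inside UnitLifecycle); the names `RelationInfo` and `restore_presence_nodes` are hypothetical simplifications of the idea: on restart, recreate presence nodes only for relations that have not already departed.

```python
# Illustrative sketch only: a synchronous, in-memory model of the fix.
# The real code manipulates ZooKeeper presence nodes in UnitLifecycle.

DEPARTED = "departed"


class RelationInfo:
    """Hypothetical stand-in for a unit's view of one relation."""

    def __init__(self, relation_id, state, has_presence_node=False):
        self.relation_id = relation_id
        self.state = state  # e.g. "up", "down", or "departed"
        self.has_presence_node = has_presence_node


def restore_presence_nodes(relations):
    """Recreate presence nodes for every relation not already departed.

    Returns the ids of relations whose presence nodes were (re)created.
    """
    restored = []
    for relation in relations:
        if relation.state == DEPARTED:
            continue  # a departed relation must not reappear as present
        if not relation.has_presence_node:
            relation.has_presence_node = True
            restored.append(relation.relation_id)
    return restored
```

A unit restarting after sudden process death would run this over its reconstructed relation state before resuming normal watching.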
Unmerged revisions

- 502. By William Reade: excise amusing print; expand testing a little
- 503. By William Reade: er, excise the other print
- 504. By William Reade: merge parent
- 505. By William Reade: merge parent
- 506. By William Reade: switch parent to trunk
- 507. By William Reade: merge parent
Preview Diff
1 | === added file 'docs/source/internals/unit-agent-persistence.rst' |
2 | --- docs/source/internals/unit-agent-persistence.rst 1970-01-01 00:00:00 +0000 |
3 | +++ docs/source/internals/unit-agent-persistence.rst 2012-02-02 16:42:42 +0000 |
4 | @@ -0,0 +1,138 @@ |
5 | +Notes on unit agent persistence |
6 | +=============================== |
7 | + |
8 | +Introduction |
9 | +------------ |
10 | + |
11 | +This was first written to explain the extensive changes made in the branch |
12 | +lp:~fwereade/juju/restart-transitions; that branch has been split out into |
13 | +four separate branches, but this discussion should remain a useful guide to |
14 | +the changes made to the unit and relation workflows and lifecycles in the |
15 | +course of making the unit agent resumable. |
16 | + |
17 | + |
18 | +Glossary |
19 | +-------- |
20 | + |
21 | +UA = UnitAgent |
22 | +UL = UnitLifecycle |
23 | +UWS = UnitWorkflowState |
24 | +URL = UnitRelationLifecycle |
25 | +RWS = RelationWorkflowState |
26 | +URS = UnitRelationState |
27 | +SRS = ServiceRelationState |
28 | + |
29 | + |
30 | +Technical discussion |
31 | +-------------------- |
32 | + |
33 | +Probably the most fundamental change is the addition of a "synchronize" method |
34 | +to both UWS and RWS. Calling "synchronize" should generally be *all* you need |
35 | +to do to put the workflow and associated components into "the right state"; ie |
36 | +ZK state will be restored, the appropriate lifecycle will be started (or not), |
37 | +and any initial transitions will automatically be fired ("start" for RWS; |
38 | +"install", "start" for UWS). |
39 | + |
40 | +The synchronize method keeps responsibility for the lifecycle's state purely in |
41 | +the hands of the workflow; once a workflow is synced, the *only* necessary |
42 | +interactions with it should be in response to changes in ZK. |
43 | + |
44 | +The disadvantage is that lifecycle "start" and "stop" methods have become a |
45 | +touch overloaded: |
46 | + |
47 | +* UL.stop(): now takes "stop_relations" in addition to "fire_hooks", in which |
48 | + "stop_relations" being True causes the orginal behaviour (transition "up" |
49 | + RWSs to "down", as when transitioning the UWS to a "stopped" or "error" |
50 | + state), but False simply causes them to stop watching for changes (in |
51 | + preparation for an orderly shutdown, for example). |
52 | + |
53 | +* UL.start(): now takes "start_relations" in addition to "fire_hooks", in which |
54 | + the "start_relations" flag being True causes the original behaviour |
55 | + (automatically transition "down" RWSs to "up", as when restarting/resolving |
56 | + the UWS), while False causes the RWSs only to be synced. |
57 | + |
58 | +* URL.start(): now takes "scheduler" in addition to "watches", allowing the |
59 | + watching and the contained HookScheduler to be controlled separately |
60 | + (allowing us to actually perform the RWS synchronise correctly). |
61 | + |
62 | +* URL.stop(): still just takes "watches", because there wasn't a scenario in |
63 | + which I wanted to stop the watches but not the HookScheduler. |
64 | + |
65 | +I still think it's a win, though, and I don't think that turning them into |
66 | +separate methods is the right way to go; "start" and "stop" remain perfectly |
67 | +decent and appropriate names for what they do. |
68 | + |
69 | +Now this has been done, we can always launch directly into whatever state we |
70 | +shut down in, and that's great, because sudden process death doesn't hurt us |
71 | +any more [0] [1]. Except... when we're upgrading a charm. It emerges that the |
72 | +charm upgrade state transition only covers the process of firing the hook, and |
73 | +not the process of actually upgrading the charm. |
74 | + |
75 | +In short, we had a mechanism, completely outside the workflow's purview, for |
76 | +potentially *brutal* modifications of state (both in terms of the charm itself, |
77 | +on disk, and also in that the hook executor should remain stopped forever while |
78 | +in "charm_upgrade_error" state); and this rather scuppered the "restart in the |
79 | +same state" goal. The obvious thing to do was to move the charm upgrade |
80 | +operation into the "charm_upgrade" transition, so we had a *chance* of being |
81 | +able to start in the correct state. |
82 | + |
83 | +UL.upgrade_charm, called by UWS, does itself have subtleties, but it should be |
84 | +reasonably clear when examined in context; the most important point is that it |
85 | +will call back at the start and end of the risky period, and that the UWS's |
86 | +handler for this callback sets a flag in "started"'s state_vars for the |
87 | +duration of the upgrade. If that flag is set when we subsequently start up |
88 | +again and synchronize the UWS, then we know to immediately force the |
89 | +charm_upgrade_error state and work from there. |
90 | + |
91 | +[0] Well, it does, because we need to persist more than just the (already- |
92 | +persisted) workflow state. This branch includes RWS persistence in the UL, as |
93 | +requested in this branch's first pre-review (back in the day...), but does not |
94 | +include HookScheduler persistence in the URLs, so it remains possible for |
95 | +relation hooks which have been queued, but not yet executed, to be lost if the |
96 | +process exits before the queue empties. That will be coming in another |
97 | +branch (resolve-unit-relation-diffs). |
98 | + |
99 | +[1] This seems like a good time to mention the UL's relation-broken handling |
100 | +for relations that went away while the process was stopped: every time |
101 | +._relations is changed, it writes out enough state to recreate a Frankenstein's |
102 | +URS object, which it can then use on load to reconstruct the necessary URL and |
103 | +hence RWS. |
104 | + |
105 | +We don't strictly need to *reconstruct* it in every case -- we can just use |
106 | +SRS.get_unit_state if the relation still exists -- but given that sometimes we |
107 | +do, it seemed senseless to have two code paths for the same operations. Of the |
108 | +RWSs we reconstruct, those with existing SRSs will be synchronized (because we |
109 | +know it's safe to do so), and the remainder will be stored untouched (because |
110 | +we know that _process_service_changes will fire the "depart" transition for us |
111 | +before doing anything else... and the "relation-broken" hook will be executed |
112 | +in a DepartedRelationHookContext, which is rather restricted, and so shouldn't |
113 | +cause the Frankenstein's URS to hit state we can't be sure exists). |
114 | + |
115 | + |
116 | +Appendix: a rough history of changes to restart-transitions |
117 | +----------------------------------------------------------- |
118 | + |
119 | +* Add UWS transitions from "stopped" to "started", so that process restarts can |
120 | + be made to restart UWSs. |
121 | +* Upon review, add RWS persistence to UL, to ensure we can't miss |
122 | + relation-broken hooks; as part of this, as discussed, add |
123 | + DepartedRelationHookContext in which to execute them. |
124 | +* Upon discussion, discover that original UWS "started" -> "stopped" behaviour |
125 | + on process shutdown is not actually the desired behaviour (and that the |
126 | +  associated RWS "up" -> "down" shouldn't happen either). |
127 | +* Make changes to UL.start/stop, and add UWS/RWS.synchronize, to allow us to |
128 | + shut down workflows cleanly without transitions and bring them up again in |
129 | + the same state. |
130 | +* Discover that we don't have any other reason to transition UWS to "stopped"; |
131 | + to actually fire stop hooks at the right time, we need a more sophisticated |
132 | + system (possibly involving the machine agent telling the unit agent to shut |
133 | + itself down). Remove the newly-added "restart" transitions, because they're |
134 | + meaningless now; ponder what good it does us to have a "stopped" state that |
135 | + we never actually enter; chicken out of actually removing it. |
136 | +* Realise that charm upgrades do an end-run around the whole UWS mechanism, and |
137 | +  resolve to integrate them so I can actually detect upgrades left incomplete |
138 | + due to process death. |
139 | +* Move charm upgrade operation from agent into UL; come to appreciate the |
140 | + subtleties of the charm upgrade process; make necessary tweaks to |
141 | + UL.upgrade_charm, and UWS, to allow for synchronization of incomplete |
142 | + upgrades. |
143 | |
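The "synchronize" pattern the document above describes can be sketched in a few lines. This is a deliberately simplified, synchronous model (the real UWS/RWS are asynchronous and persist state in ZooKeeper); the class and attribute names are hypothetical, but it captures the key property: after a restart, calling synchronize() alone puts the workflow and its lifecycle back into the right state.

```python
# Minimal sketch of the "synchronize" idea: a workflow that persists its
# state and, on synchronize(), replays initial transitions if starting
# fresh, or simply resumes its lifecycle if state was already persisted.

class Workflow:
    # transitions fired automatically when synchronizing from scratch,
    # mirroring the document's '"install", "start" for UWS'
    INITIAL_TRANSITIONS = ["install", "start"]

    def __init__(self, storage):
        self.storage = storage          # dict standing in for ZK state
        self.state = storage.get("state")
        self.lifecycle_running = False

    def fire(self, transition):
        # e.g. "install" -> "installed"; persisted immediately
        self.state = transition + "ed"
        self.storage["state"] = self.state

    def synchronize(self):
        """The only call a client needs: restore or initialize state."""
        if self.state is None:
            for transition in self.INITIAL_TRANSITIONS:
                self.fire(transition)
        # whether fresh or restored, a started workflow runs its lifecycle
        if self.state == "started":
            self.lifecycle_running = True
```

After a fresh synchronize, sudden process death followed by constructing a new `Workflow` over the same storage and calling synchronize() again lands in the same state, which is the resumability property the branch is after.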
144 | === modified file 'examples/oneiric/mysql/hooks/install' |
145 | --- examples/oneiric/mysql/hooks/install 2011-09-15 18:56:08 +0000 |
146 | +++ examples/oneiric/mysql/hooks/install 2012-02-02 16:42:42 +0000 |
147 | @@ -21,5 +21,5 @@ |
148 | juju-log "Editing my.cnf to allow listening on all interfaces" |
149 | sed --in-place=old 's/127\.0\.0\.1/0.0.0.0/' /etc/mysql/my.cnf |
150 | |
151 | -juju-log "Restarting mysql service" |
152 | -service mysql restart |
153 | +juju-log "Stopping mysql service" |
154 | +service mysql stop |
155 | |
156 | === modified file 'examples/oneiric/mysql/hooks/start' |
157 | --- examples/oneiric/mysql/hooks/start 2011-02-03 01:23:43 +0000 |
158 | +++ examples/oneiric/mysql/hooks/start 2012-02-02 16:42:42 +0000 |
159 | @@ -1,1 +1,3 @@ |
160 | -#!/bin/bash |
161 | \ No newline at end of file |
162 | +#!/bin/bash |
163 | +juju-log "Starting mysql service" |
164 | +service mysql start || service mysql restart |
165 | |
166 | === modified file 'examples/oneiric/mysql/hooks/stop' |
167 | --- examples/oneiric/mysql/hooks/stop 2011-09-15 18:56:08 +0000 |
168 | +++ examples/oneiric/mysql/hooks/stop 2012-02-02 16:42:42 +0000 |
169 | @@ -1,3 +1,3 @@ |
170 | #!/bin/bash |
171 | juju-log "Stopping mysql service" |
172 | -/etc/init.d/mysql stop |
173 | +service mysql stop || true |
174 | |
175 | === modified file 'juju/agents/base.py' |
176 | --- juju/agents/base.py 2011-09-22 13:23:00 +0000 |
177 | +++ juju/agents/base.py 2012-02-02 16:42:42 +0000 |
178 | @@ -1,7 +1,9 @@ |
179 | import argparse |
180 | import os |
181 | +import logging |
182 | +import stat |
183 | import sys |
184 | -import logging |
185 | +import yaml |
186 | |
187 | import zookeeper |
188 | |
189 | @@ -18,6 +20,23 @@ |
190 | from juju.state.environment import GlobalSettingsStateManager |
191 | |
192 | |
193 | +def load_client_id(path): |
194 | + try: |
195 | + with open(path) as f: |
196 | + return yaml.load(f.read()) |
197 | + except IOError: |
198 | + return None |
199 | + |
200 | + |
201 | +def save_client_id(path, client_id): |
202 | + parent = os.path.dirname(path) |
203 | + if not os.path.exists(parent): |
204 | + os.makedirs(parent) |
205 | + with open(path, "w") as f: |
206 | + f.write(yaml.dump(client_id)) |
207 | + os.chmod(path, stat.S_IRUSR | stat.S_IWUSR) |
208 | + |
209 | + |
210 | class TwistedOptionNamespace(object): |
211 | """ |
212 | An argparse namespace implementation that is compatible with twisted |
213 | @@ -153,13 +172,40 @@ |
214 | "Invalid juju-directory %r, does not exist." % ( |
215 | options.get("juju_directory"))) |
216 | |
217 | + if options["session_file"] is None: |
218 | + raise JujuError("No session file specified") |
219 | + |
220 | self.config = options |
221 | |
222 | @inlineCallbacks |
223 | + def _kill_existing_session(self): |
224 | + try: |
225 | + # We might have died suddenly, in which case the session may |
226 | + # still be alive. If this is the case, shoot it in the head, so |
227 | + # it doesn't interfere with our attempts to recreate our state. |
228 | + # (We need to be able to recreate our state *anyway*, and it's |
229 | + # much simpler to force ourselves to recreate it every time than |
230 | + # it is to mess around partially recreating partial state.) |
231 | + client_id = load_client_id(self.config["session_file"]) |
232 | + if client_id is None: |
233 | + return |
234 | + temp_client = yield ZookeeperClient().connect( |
235 | + self.config["zookeeper_servers"], client_id=client_id) |
236 | + yield temp_client.close() |
237 | + except zookeeper.ZooKeeperException: |
238 | + # We don't really care what went wrong; just that we're not able |
239 | + # to connect using the old session, and therefore we should be ok |
240 | + # to start a fresh one without transient state hanging around. |
241 | + pass |
242 | + |
243 | + @inlineCallbacks |
244 | def connect(self): |
245 | """Return an authenticated connection to the juju zookeeper.""" |
246 | - hosts = self.config["zookeeper_servers"] |
247 | - self.client = yield ZookeeperClient().connect(hosts) |
248 | + yield self._kill_existing_session() |
249 | + self.client = yield ZookeeperClient().connect( |
250 | + self.config["zookeeper_servers"]) |
251 | + save_client_id( |
252 | + self.config["session_file"], self.client.client_id) |
253 | |
254 | principals = self.config.get("principals", ()) |
255 | for principal in principals: |
256 | @@ -200,6 +246,9 @@ |
257 | finally: |
258 | if self.client and self.client.connected: |
259 | self.client.close() |
260 | + session_file = self.config["session_file"] |
261 | + if os.path.exists(session_file): |
262 | + os.unlink(session_file) |
263 | |
264 | def set_watch_enabled(self, flag): |
265 | """Set boolean flag for whether this agent should watching zookeeper. |
266 | @@ -285,3 +334,7 @@ |
267 | parser.add_argument( |
268 | "--juju-directory", default=juju_home, type=os.path.abspath, |
269 | help="juju working directory ($JUJU_HOME)") |
270 | + |
271 | + parser.add_argument( |
272 | + "--session-file", default=None, type=os.path.abspath, |
273 | + help="like a pidfile, but for the zookeeper session id") |
274 | |
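The session-file handling added to juju/agents/base.py above boils down to a save/load round-trip with owner-only permissions, so a stale session can be found and killed on the next start. A dependency-free sketch follows; it uses json rather than the yaml the branch uses, purely so it runs without third-party packages, and the function names mirror the diff but are reimplemented here, not copied behaviour-for-behaviour.

```python
import json
import os
import stat

# Sketch of the session-file round-trip: persist the zookeeper client id
# across process restarts, readable only by the agent's own user.


def save_client_id(path, client_id):
    parent = os.path.dirname(path)
    if parent and not os.path.exists(parent):
        os.makedirs(parent)
    with open(path, "w") as f:
        f.write(json.dumps(client_id))
    # the session id is effectively a credential: restrict to the owner
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)


def load_client_id(path):
    try:
        with open(path) as f:
            return json.loads(f.read())
    except IOError:
        return None  # no previous session recorded; start fresh
```

On startup the agent loads the old client id (if any), reconnects with it just to close it, then connects fresh and saves the new id; on clean shutdown it deletes the file.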
275 | === modified file 'juju/agents/tests/common.py' |
276 | --- juju/agents/tests/common.py 2011-09-15 19:24:47 +0000 |
277 | +++ juju/agents/tests/common.py 2012-02-02 16:42:42 +0000 |
278 | @@ -55,6 +55,7 @@ |
279 | options = TwistedOptionNamespace() |
280 | options["juju_directory"] = self.juju_directory |
281 | options["zookeeper_servers"] = get_test_zookeeper_address() |
282 | + options["session_file"] = self.makeFile() |
283 | return succeed(options) |
284 | |
285 | @inlineCallbacks |
286 | |
287 | === modified file 'juju/agents/tests/test_base.py' |
288 | --- juju/agents/tests/test_base.py 2011-09-15 18:50:23 +0000 |
289 | +++ juju/agents/tests/test_base.py 2012-02-02 16:42:42 +0000 |
290 | @@ -2,13 +2,14 @@ |
291 | import json |
292 | import logging |
293 | import os |
294 | +import stat |
295 | import sys |
296 | - |
297 | +import yaml |
298 | |
299 | from twisted.application.app import AppLogger |
300 | from twisted.application.service import IService, IServiceCollection |
301 | from twisted.internet.defer import ( |
302 | - succeed, Deferred, inlineCallbacks, returnValue) |
303 | + fail, succeed, Deferred, inlineCallbacks, returnValue) |
304 | from twisted.python.components import Componentized |
305 | from twisted.python import log |
306 | |
307 | @@ -20,6 +21,7 @@ |
308 | |
309 | from juju.agents.base import ( |
310 | BaseAgent, TwistedOptionNamespace, AgentRunner, AgentLogger) |
311 | +from juju.agents.dummy import DummyAgent |
312 | from juju.errors import NoConnection, JujuError |
313 | from juju.lib.zklog import ZookeeperHandler |
314 | |
315 | @@ -34,8 +36,8 @@ |
316 | @inlineCallbacks |
317 | def setUp(self): |
318 | yield super(BaseAgentTest, self).setUp() |
319 | - self.change_environment( |
320 | - JUJU_HOME=self.makeDir()) |
321 | + self.juju_home = self.makeDir() |
322 | + self.change_environment(JUJU_HOME=self.juju_home) |
323 | |
324 | def test_as_app(self): |
325 | """The agent class can be accessed as an application.""" |
326 | @@ -53,11 +55,10 @@ |
327 | # Daemon group |
328 | self.assertEqual( |
329 | parser.get_default("logfile"), "%s.log" % BaseAgent.name) |
330 | - self.assertEqual( |
331 | - parser.get_default("pidfile"), "%s.pid" % BaseAgent.name) |
332 | + self.assertEqual(parser.get_default("pidfile"), "") |
333 | |
334 | self.assertEqual(parser.get_default("loglevel"), "DEBUG") |
335 | - self.assertTrue(parser.get_default("nodaemon")) |
336 | + self.assertFalse(parser.get_default("nodaemon")) |
337 | self.assertEqual(parser.get_default("rundir"), ".") |
338 | self.assertEqual(parser.get_default("chroot"), None) |
339 | self.assertEqual(parser.get_default("umask"), None) |
340 | @@ -80,6 +81,8 @@ |
341 | # Agent options |
342 | self.assertEqual(parser.get_default("principals"), []) |
343 | self.assertEqual(parser.get_default("zookeeper_servers"), "") |
344 | + self.assertEqual(parser.get_default("juju_directory"), self.juju_home) |
345 | + self.assertEqual(parser.get_default("session_file"), None) |
346 | |
347 | def test_twistd_flags_correspond(self): |
348 | parser = argparse.ArgumentParser() |
349 | @@ -87,11 +90,11 @@ |
350 | args = [ |
351 | "--profile", |
352 | "--savestats", |
353 | - "--daemon"] |
354 | + "--nodaemon"] |
355 | |
356 | options = parser.parse_args(args, namespace=TwistedOptionNamespace()) |
357 | self.assertEqual(options.get("savestats"), True) |
358 | - self.assertEqual(options.get("nodaemon"), False) |
359 | + self.assertEqual(options.get("nodaemon"), True) |
360 | self.assertEqual(options.get("profile"), True) |
361 | |
362 | def test_agent_logger(self): |
363 | @@ -100,7 +103,8 @@ |
364 | log_file_path = self.makeFile() |
365 | |
366 | options = parser.parse_args( |
367 | - ["--logfile", log_file_path], namespace=TwistedOptionNamespace()) |
368 | + ["--logfile", log_file_path, "--session-file", self.makeFile()], |
369 | + namespace=TwistedOptionNamespace()) |
370 | |
371 | def match_observer(observer): |
372 | return isinstance(observer.im_self, log.PythonLoggingObserver) |
373 | @@ -181,8 +185,9 @@ |
374 | This will create an agent instance, parse the cli args, passes them to |
375 | the agent, and starts the agent runner. |
376 | """ |
377 | - self.change_args("es-agent", "--zookeeper-servers", |
378 | - get_test_zookeeper_address()) |
379 | + self.change_args( |
380 | + "es-agent", "--zookeeper-servers", get_test_zookeeper_address(), |
381 | + "--session-file", self.makeFile()) |
382 | runner = self.mocker.patch(AgentRunner) |
383 | runner.run() |
384 | mock_agent = self.mocker.patch(BaseAgent) |
385 | @@ -219,11 +224,10 @@ |
386 | |
387 | started.addCallback(validate_started) |
388 | |
389 | - pid_file = self.makeFile() |
390 | self.change_args( |
391 | - "es-agent", |
392 | + "es-agent", "--nodaemon", |
393 | "--zookeeper-servers", get_test_zookeeper_address(), |
394 | - "--pidfile", pid_file) |
395 | + "--session-file", self.makeFile()) |
396 | runner = self.mocker.patch(AgentRunner) |
397 | logger = self.mocker.patch(AppLogger) |
398 | logger.start(MATCH_APP) |
399 | @@ -233,6 +237,7 @@ |
400 | DummyAgent.run() |
401 | return started |
402 | |
403 | + @inlineCallbacks |
404 | def test_stop_service_stub_closes_agent(self): |
405 | """The base class agent, stopService will the stop method. |
406 | |
407 | @@ -241,6 +246,7 @@ |
408 | """ |
409 | mock_agent = self.mocker.patch(BaseAgent) |
410 | mock_client = self.mocker.mock(ZookeeperClient) |
411 | + session_file = self.makeFile() |
412 | |
413 | # connection is closed after agent.stop invoked. |
414 | with self.mocker.order(): |
415 | @@ -262,11 +268,17 @@ |
416 | self.mocker.result(mock_client) |
417 | mock_client.close() |
418 | |
419 | + # delete session file |
420 | + mock_agent.config |
421 | + self.mocker.result({"session_file": session_file}) |
422 | + |
423 | self.mocker.replay() |
424 | |
425 | agent = BaseAgent() |
426 | - return agent.stopService() |
427 | + yield agent.stopService() |
428 | + self.assertFalse(os.path.exists(session_file)) |
429 | |
430 | + @inlineCallbacks |
431 | def test_stop_service_stub_ignores_disconnected_agent(self): |
432 | """The base class agent, stopService will the stop method. |
433 | |
434 | @@ -274,6 +286,7 @@ |
435 | """ |
436 | mock_agent = self.mocker.patch(BaseAgent) |
437 | mock_client = self.mocker.mock(ZookeeperClient) |
438 | + session_file = self.makeFile() |
439 | |
440 | # connection is closed after agent.stop invoked. |
441 | with self.mocker.order(): |
442 | @@ -289,10 +302,14 @@ |
443 | mock_client.connected |
444 | self.mocker.result(False) |
445 | |
446 | + mock_agent.config |
447 | + self.mocker.result({"session_file": session_file}) |
448 | + |
449 | self.mocker.replay() |
450 | |
451 | agent = BaseAgent() |
452 | - return agent.stopService() |
453 | + yield agent.stopService() |
454 | + self.assertFalse(os.path.exists(session_file)) |
455 | |
456 | def test_run_base_raises_error(self): |
457 | """The base class agent, raises a notimplemented error when started.""" |
458 | @@ -300,12 +317,15 @@ |
459 | client.connect(get_test_zookeeper_address()) |
460 | client_mock = self.mocker.mock() |
461 | self.mocker.result(succeed(client_mock)) |
462 | + client_mock.client_id |
463 | + self.mocker.result((123, "abc")) |
464 | self.mocker.replay() |
465 | |
466 | agent = BaseAgent() |
467 | agent.configure({ |
468 | "zookeeper_servers": get_test_zookeeper_address(), |
469 | - "juju_directory": self.makeDir()}) |
470 | + "juju_directory": self.makeDir(), |
471 | + "session_file": self.makeFile()}) |
472 | d = agent.startService() |
473 | self.failUnlessFailure(d, NotImplementedError) |
474 | return d |
475 | @@ -316,35 +336,43 @@ |
476 | client = self.mocker.patch(ZookeeperClient) |
477 | client.connect("x2.example.com") |
478 | self.mocker.result(succeed(mock_client)) |
479 | + mock_client.client_id |
480 | + self.mocker.result((123, "abc")) |
481 | self.mocker.replay() |
482 | |
483 | agent = BaseAgent() |
484 | agent.configure({"zookeeper_servers": "x2.example.com", |
485 | - "juju_directory": self.makeDir()}) |
486 | + "juju_directory": self.makeDir(), |
487 | + "session_file": self.makeFile()}) |
488 | result = agent.connect() |
489 | self.assertEqual(result.result, mock_client) |
490 | self.assertEqual(agent.client, mock_client) |
491 | |
492 | - def test_non_existant_directory(self): |
493 | + def test_nonexistent_directory(self): |
494 | """If the juju directory does not exist an error should be raised. |
495 | """ |
496 | juju_directory = self.makeDir() |
497 | os.rmdir(juju_directory) |
498 | data = {"zookeeper_servers": get_test_zookeeper_address(), |
499 | - "juju_directory": juju_directory} |
500 | + "juju_directory": juju_directory, |
501 | + "session_file": self.makeFile()} |
502 | + self.assertRaises(JujuError, BaseAgent().configure, data) |
503 | |
504 | - agent = BaseAgent() |
505 | - self.assertRaises( |
506 | - JujuError, |
507 | - agent.configure, |
508 | - data) |
509 | + def test_bad_session_file(self): |
510 | + """If the session file cannot be created an error should be raised. |
511 | + """ |
512 | + data = {"zookeeper_servers": get_test_zookeeper_address(), |
513 | + "juju_directory": self.makeDir(), |
514 | + "session_file": None} |
515 | + self.assertRaises(JujuError, BaseAgent().configure, data) |
516 | |
517 | def test_directory_cli_option(self): |
518 | """The juju directory can be configured on the cli.""" |
519 | juju_directory = self.makeDir() |
520 | self.change_args( |
521 | "es-agent", "--zookeeper-servers", get_test_zookeeper_address(), |
522 | - "--juju-directory", juju_directory) |
523 | + "--juju-directory", juju_directory, |
524 | + "--session-file", self.makeFile()) |
525 | |
526 | agent = BaseAgent() |
527 | parser = argparse.ArgumentParser() |
528 | @@ -366,7 +394,9 @@ |
529 | agent = BaseAgent() |
530 | parser = argparse.ArgumentParser() |
531 | agent.setup_options(parser) |
532 | - options = parser.parse_args(namespace=TwistedOptionNamespace()) |
533 | + options = parser.parse_args( |
534 | + ["--session-file", self.makeFile()], |
535 | + namespace=TwistedOptionNamespace()) |
536 | agent.configure(options) |
537 | self.assertEqual( |
538 | agent.config["juju_directory"], juju_directory) |
539 | @@ -382,6 +412,8 @@ |
540 | client = self.mocker.patch(ZookeeperClient) |
541 | client.connect("x1.example.com") |
542 | self.mocker.result(succeed(client)) |
543 | + client.client_id |
544 | + self.mocker.result((123, "abc")) |
545 | client.add_auth("digest", "admin:abc") |
546 | client.add_auth("digest", "agent:xyz") |
547 | client.exists("/") |
548 | @@ -390,7 +422,105 @@ |
549 | agent = BaseAgent() |
550 | parser = argparse.ArgumentParser() |
551 | agent.setup_options(parser) |
552 | - options = parser.parse_args(namespace=TwistedOptionNamespace()) |
553 | + options = parser.parse_args( |
554 | + ["--session-file", self.makeFile()], |
555 | + namespace=TwistedOptionNamespace()) |
556 | + agent.configure(options) |
557 | + d = agent.startService() |
558 | + self.failUnlessFailure(d, NotImplementedError) |
559 | + return d |
560 | + |
561 | + def test_connect_closes_running_session(self): |
562 | + self.change_args("es-agent") |
563 | + self.change_environment( |
564 | + JUJU_HOME=self.makeDir(), |
565 | + JUJU_ZOOKEEPER="x1.example.com") |
566 | + |
567 | + session_file = self.makeFile() |
568 | + with open(session_file, "w") as f: |
569 | + f.write(yaml.dump((123, "abc"))) |
570 | + mock_client_1 = self.mocker.mock() |
571 | + client = self.mocker.patch(ZookeeperClient) |
572 | + client.connect("x1.example.com", client_id=(123, "abc")) |
573 | + self.mocker.result(succeed(mock_client_1)) |
574 | + mock_client_1.close() |
575 | + self.mocker.result(None) |
576 | + |
577 | + mock_client_2 = self.mocker.mock() |
578 | + client.connect("x1.example.com") |
579 | + self.mocker.result(succeed(mock_client_2)) |
580 | + mock_client_2.client_id |
581 | + self.mocker.result((456, "def")) |
582 | + self.mocker.replay() |
583 | + |
584 | + agent = BaseAgent() |
585 | + parser = argparse.ArgumentParser() |
586 | + agent.setup_options(parser) |
587 | + options = parser.parse_args( |
588 | + ["--session-file", session_file], |
589 | + namespace=TwistedOptionNamespace()) |
590 | + agent.configure(options) |
591 | + d = agent.startService() |
592 | + self.failUnlessFailure(d, NotImplementedError) |
593 | + return d |
594 | + |
595 | + def test_connect_handles_expired_session(self): |
596 | + self.change_args("es-agent") |
597 | + self.change_environment( |
598 | + JUJU_HOME=self.makeDir(), |
599 | + JUJU_ZOOKEEPER="x1.example.com") |
600 | + |
601 | + session_file = self.makeFile() |
602 | + with open(session_file, "w") as f: |
603 | + f.write(yaml.dump((123, "abc"))) |
604 | + client = self.mocker.patch(ZookeeperClient) |
605 | + client.connect("x1.example.com", client_id=(123, "abc")) |
606 | + self.mocker.result(fail(zookeeper.SessionExpiredException())) |
607 | + |
608 | + mock_client = self.mocker.mock() |
609 | + client.connect("x1.example.com") |
610 | + self.mocker.result(succeed(mock_client)) |
611 | + mock_client.client_id |
612 | + self.mocker.result((456, "def")) |
613 | + self.mocker.replay() |
614 | + |
615 | + agent = BaseAgent() |
616 | + parser = argparse.ArgumentParser() |
617 | + agent.setup_options(parser) |
618 | + options = parser.parse_args( |
619 | + ["--session-file", session_file], |
620 | + namespace=TwistedOptionNamespace()) |
621 | + agent.configure(options) |
622 | + d = agent.startService() |
623 | + self.failUnlessFailure(d, NotImplementedError) |
624 | + return d |
625 | + |
626 | + def test_connect_handles_nonsense_session(self): |
627 | + self.change_args("es-agent") |
628 | + self.change_environment( |
629 | + JUJU_HOME=self.makeDir(), |
630 | + JUJU_ZOOKEEPER="x1.example.com") |
631 | + |
632 | + session_file = self.makeFile() |
633 | + with open(session_file, "w") as f: |
634 | + f.write(yaml.dump("cheesy wotsits")) |
635 | + client = self.mocker.patch(ZookeeperClient) |
636 | + client.connect("x1.example.com", client_id="cheesy wotsits") |
637 | + self.mocker.result(fail(zookeeper.ZooKeeperException())) |
638 | + |
639 | + mock_client = self.mocker.mock() |
640 | + client.connect("x1.example.com") |
641 | + self.mocker.result(succeed(mock_client)) |
642 | + mock_client.client_id |
643 | + self.mocker.result((456, "def")) |
644 | + self.mocker.replay() |
645 | + |
646 | + agent = BaseAgent() |
647 | + parser = argparse.ArgumentParser() |
648 | + agent.setup_options(parser) |
649 | + options = parser.parse_args( |
650 | + ["--session-file", session_file], |
651 | + namespace=TwistedOptionNamespace()) |
652 | agent.configure(options) |
653 | d = agent.startService() |
654 | self.failUnlessFailure(d, NotImplementedError) |
655 | @@ -408,6 +538,21 @@ |
656 | agent.set_watch_enabled(False) |
657 | self.assertFalse(agent.get_watch_enabled()) |
658 | |
659 | + @inlineCallbacks |
660 | + def test_session_file_permissions(self): |
661 | + session_file = self.makeFile() |
662 | + agent = DummyAgent() |
663 | + agent.configure({ |
664 | + "session_file": session_file, |
665 | + "juju_directory": self.makeDir(), |
666 | + "zookeeper_servers": get_test_zookeeper_address()}) |
667 | + yield agent.startService() |
668 | + mode = os.stat(session_file).st_mode |
669 | + mask = stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO |
670 | + self.assertEquals(mode & mask, stat.S_IRUSR | stat.S_IWUSR) |
671 | + yield agent.stopService() |
672 | + self.assertFalse(os.path.exists(session_file)) |
673 | + |
674 | |
675 | class AgentDebugLogSettingsWatch(AgentTestBase): |
676 | |
677 | |
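The `test_session_file_permissions` test above asserts that the agent's session file is readable and writable by its owner only. The masking trick it relies on can be shown in isolation; this is a minimal sketch (the helper name is illustrative, not part of the juju codebase):

```python
import os
import stat

def is_owner_rw_only(path):
    # Mask the mode with every rwx bit for user, group, and other, then
    # check that only the owner's read and write bits survive (mode 0600).
    mode = os.stat(path).st_mode
    mask = stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO
    return (mode & mask) == (stat.S_IRUSR | stat.S_IWUSR)
```

`tempfile.mkstemp` creates files with mode 0600, so it makes a convenient positive case when trying this out.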
678 | === modified file 'juju/agents/tests/test_machine.py' |
679 | --- juju/agents/tests/test_machine.py 2011-10-05 13:59:44 +0000 |
680 | +++ juju/agents/tests/test_machine.py 2012-02-02 16:42:42 +0000 |
681 | @@ -120,10 +120,9 @@ |
682 | # initially setup by get_agent_config in setUp |
683 | self.change_environment(JUJU_MACHINE_ID="") |
684 | self.change_args("es-agent", |
685 | - "--zookeeper-servers", |
686 | - get_test_zookeeper_address(), |
687 | - "--juju-directory", |
688 | - self.makeDir()) |
689 | + "--zookeeper-servers", get_test_zookeeper_address(), |
690 | + "--juju-directory", self.makeDir(), |
691 | + "--session-file", self.makeFile()) |
692 | parser = argparse.ArgumentParser() |
693 | self.agent.setup_options(parser) |
694 | options = parser.parse_args(namespace=TwistedOptionNamespace()) |
695 | |
696 | === modified file 'juju/agents/tests/test_unit.py' |
697 | --- juju/agents/tests/test_unit.py 2011-12-16 09:23:31 +0000 |
698 | +++ juju/agents/tests/test_unit.py 2012-02-02 16:42:42 +0000 |
699 | @@ -3,10 +3,9 @@ |
700 | import os |
701 | import yaml |
702 | |
703 | -from twisted.internet.defer import ( |
704 | - inlineCallbacks, returnValue, fail, Deferred) |
705 | +from twisted.internet.defer import inlineCallbacks, returnValue |
706 | |
707 | -from juju.agents.unit import UnitAgent, CharmUpgradeOperation |
708 | +from juju.agents.unit import UnitAgent |
709 | from juju.agents.base import TwistedOptionNamespace |
710 | from juju.charm import get_charm_from_path |
711 | from juju.charm.url import CharmURL |
712 | @@ -74,8 +73,10 @@ |
713 | "stop", "#!/bin/bash\necho stop >> %s" % output_file) |
714 | |
715 | for k in kw.keys(): |
716 | - self.write_hook(k.replace("_", "-"), |
717 | - "#!/bin/bash\necho $0 >> %s" % output_file) |
718 | + hook_name = k.replace("_", "-") |
719 | + self.write_hook( |
720 | + hook_name, |
721 | + "#!/bin/bash\necho %s >> %s" % (hook_name, output_file)) |
722 | |
723 | return output_file |
724 | |
725 | @@ -183,7 +184,8 @@ |
726 | self.change_args( |
727 | "unit-agent", |
728 | "--juju-directory", self.makeDir(), |
729 | - "--zookeeper-servers", get_test_zookeeper_address()) |
730 | + "--zookeeper-servers", get_test_zookeeper_address(), |
731 | + "--session-file", self.makeFile()) |
732 | |
733 | parser = argparse.ArgumentParser() |
734 | self.agent.setup_options(parser) |
735 | @@ -215,6 +217,7 @@ |
736 | options = {} |
737 | options["juju_directory"] = self.juju_directory |
738 | options["zookeeper_servers"] = get_test_zookeeper_address() |
739 | + options["session_file"] = self.makeFile() |
740 | options["unit_name"] = "rabbit-1" |
741 | agent = self.agent_class() |
742 | agent.configure(options) |
743 | @@ -568,6 +571,12 @@ |
744 | self.makeDir(path=os.path.join(self.juju_directory, "charms")) |
745 | |
746 | @inlineCallbacks |
747 | + def wait_for_log(self, logger_name, message, level=logging.DEBUG): |
748 | + output = self.capture_logging(logger_name, level=level) |
749 | + while message not in output.getvalue(): |
750 | + yield self.sleep(0.1) |
751 | + |
752 | + @inlineCallbacks |
753 | def mark_charm_upgrade(self): |
754 | # Create a new version of the charm |
755 | repository = self.increment_charm(self.charm) |
756 | @@ -592,158 +601,84 @@ |
757 | yield self.assertState(self.agent.workflow, "started") |
758 | |
759 | @inlineCallbacks |
760 | - def test_agent_upgrade_watch_continues_on_unexpected_error(self): |
761 | - """The agent watches for unit upgrades and continues if there is an |
762 | - unexpected error.""" |
763 | - yield self.mark_charm_upgrade() |
764 | - self.agent.set_watch_enabled(True) |
765 | - |
766 | - output = self.capture_logging( |
767 | - "juju.agents.unit", level=logging.DEBUG) |
768 | - |
769 | - upgrade_done = Deferred() |
770 | - |
771 | - def operation_has_run(): |
772 | - upgrade_done.callback(True) |
773 | - |
774 | - operation = self.mocker.patch(CharmUpgradeOperation) |
775 | - operation.run() |
776 | - |
777 | - self.mocker.call(operation_has_run) |
778 | - self.mocker.result(fail(ValueError("magic mouse"))) |
779 | - self.mocker.replay() |
780 | - |
781 | - yield self.agent.startService() |
782 | - |
783 | - yield upgrade_done |
784 | - self.assertIn("Error while upgrading", output.getvalue()) |
785 | - self.assertIn("magic mouse", output.getvalue()) |
786 | - |
787 | - yield self.agent.workflow.fire_transition("stop") |
788 | - |
789 | - @inlineCallbacks |
790 | def test_agent_upgrade(self): |
792 | """The agent can successfully upgrade its charm.""" |
792 | - self.agent.set_watch_enabled(False) |
793 | - yield self.agent.startService() |
794 | - |
795 | - yield self.mark_charm_upgrade() |
796 | - |
797 | + log_written = self.wait_for_log("juju.agents.unit", "Upgrade complete") |
798 | hook_done = self.wait_on_hook( |
799 | "upgrade-charm", executor=self.agent.executor) |
800 | - self.write_hook("upgrade-charm", "#!/bin/bash\nexit 0") |
801 | - output = self.capture_logging("unit.upgrade", level=logging.DEBUG) |
802 | - |
803 | - # Do the upgrade |
804 | - upgrade = CharmUpgradeOperation(self.agent) |
805 | - value = yield upgrade.run() |
806 | - |
807 | - # Verify the upgrade. |
808 | - self.assertIdentical(value, True) |
809 | - self.assertIn("Unit upgraded", output.getvalue()) |
810 | + |
811 | + self.agent.set_watch_enabled(True) |
812 | + yield self.agent.startService() |
813 | + yield self.mark_charm_upgrade() |
814 | yield hook_done |
815 | + yield log_written |
816 | |
817 | + self.assertIdentical( |
818 | + (yield self.states["unit"].get_upgrade_flag()), |
819 | + False) |
820 | new_charm = get_charm_from_path( |
821 | os.path.join(self.agent.unit_directory, "charm")) |
822 | - |
823 | self.assertEqual( |
824 | self.charm.get_revision() + 1, new_charm.get_revision()) |
825 | |
826 | @inlineCallbacks |
827 | + def test_agent_upgrade_version_current(self): |
828 | + """If the unit is running the latest charm, do nothing.""" |
829 | + log_written = self.wait_for_log( |
830 | + "juju.agents.unit", |
831 | + "Upgrade ignored: already running latest charm") |
832 | + |
833 | + old_charm_id = yield self.states["unit"].get_charm_id() |
834 | + self.agent.set_watch_enabled(True) |
835 | + yield self.agent.startService() |
836 | + yield self.states["unit"].set_upgrade_flag() |
837 | + yield log_written |
838 | + |
839 | + self.assertIdentical( |
840 | + (yield self.states["unit"].get_upgrade_flag()), False) |
841 | + self.assertEquals( |
842 | + (yield self.states["unit"].get_charm_id()), old_charm_id) |
843 | + |
844 | + |
845 | + @inlineCallbacks |
846 | def test_agent_upgrade_bad_unit_state(self): |
847 | - """The an upgrade fails if the unit is in a bad state.""" |
848 | - self.agent.set_watch_enabled(False) |
849 | - yield self.agent.startService() |
850 | - |
851 | + """The upgrade fails if the unit is in a bad state.""" |
852 | # Upload a new version of the unit's charm |
853 | repository = self.increment_charm(self.charm) |
854 | charm = yield repository.find(CharmURL.parse("local:series/mysql")) |
855 | charm, charm_state = yield self.publish_charm(charm.path) |
856 | + old_charm_id = yield self.states["unit"].get_charm_id() |
857 | + |
858 | + log_written = self.wait_for_log( |
859 | + "juju.agents.unit", |
860 | + "Cannot upgrade: unit is in non-started state configure_error. " |
861 | + "Reissue upgrade command to try again.") |
862 | + self.agent.set_watch_enabled(True) |
863 | + yield self.agent.startService() |
864 | |
865 | # Mark the unit for upgrade, with an invalid state. |
866 | + yield self.agent.workflow.fire_transition("error_configure") |
867 | yield self.states["service"].set_charm_id(charm_state.id) |
868 | yield self.states["unit"].set_upgrade_flag() |
869 | - yield self.agent.workflow.set_state("start_error") |
870 | - |
871 | - output = self.capture_logging("unit.upgrade", level=logging.DEBUG) |
872 | - |
873 | - # Do the upgrade |
874 | - upgrade = CharmUpgradeOperation(self.agent) |
875 | - value = yield upgrade.run() |
876 | - |
877 | - # Verify the upgrade. |
878 | - self.assertIdentical(value, False) |
879 | - self.assertIn("Unit not in an upgradeable state: start_error", |
880 | - output.getvalue()) |
881 | + yield log_written |
882 | + |
883 | self.assertIdentical( |
884 | - (yield self.states["unit"].get_upgrade_flag()), |
885 | - False) |
886 | + (yield self.states["unit"].get_upgrade_flag()), False) |
887 | + self.assertEquals( |
888 | + (yield self.states["unit"].get_charm_id()), old_charm_id) |
889 | |
890 | @inlineCallbacks |
891 | def test_agent_upgrade_no_flag(self): |
892 | - """An upgrade fails if there is no upgrade flag set.""" |
893 | - self.agent.set_watch_enabled(False) |
894 | - yield self.agent.startService() |
895 | - output = self.capture_logging("unit.upgrade", level=logging.DEBUG) |
896 | - upgrade = CharmUpgradeOperation(self.agent) |
897 | - value = yield upgrade.run() |
898 | - self.assertIdentical(value, False) |
899 | - self.assertIn("No upgrade flag set", output.getvalue()) |
900 | - yield self.agent.workflow.fire_transition("stop") |
901 | - |
902 | - @inlineCallbacks |
903 | - def test_agent_upgrade_version_current(self): |
904 | - """An upgrade fails if the unit is running the latest charm.""" |
905 | - self.agent.set_watch_enabled(False) |
906 | - yield self.agent.startService() |
907 | - yield self.states["unit"].set_upgrade_flag() |
908 | - output = self.capture_logging("unit.upgrade", level=logging.DEBUG) |
909 | - upgrade = CharmUpgradeOperation(self.agent) |
910 | - value = yield upgrade.run() |
911 | - self.assertIdentical(value, True) |
912 | - self.assertIn("Unit already running latest charm", output.getvalue()) |
913 | - self.assertFalse((yield self.states["unit"].get_upgrade_flag())) |
914 | - |
915 | - @inlineCallbacks |
916 | - def test_agent_upgrade_hook_failure(self): |
917 | - """An upgrade fails if the upgrade hook errors.""" |
918 | - self.agent.set_watch_enabled(False) |
919 | - yield self.agent.startService() |
920 | - |
921 | - # Upload a new version of the unit's charm |
922 | - repository = self.increment_charm(self.charm) |
923 | - charm = yield repository.find(CharmURL.parse("local:series/mysql")) |
924 | - charm, charm_state = yield self.publish_charm(charm.path) |
925 | - |
926 | - # Mark the unit for upgrade |
927 | - yield self.states["service"].set_charm_id(charm_state.id) |
928 | - yield self.states["unit"].set_upgrade_flag() |
929 | - |
930 | - hook_done = self.wait_on_hook( |
931 | - "upgrade-charm", executor=self.agent.executor) |
932 | - self.write_hook("upgrade-charm", "#!/bin/bash\nexit 1") |
933 | - output = self.capture_logging("unit.upgrade", level=logging.DEBUG) |
934 | - |
935 | - # Do the upgrade |
936 | - upgrade = CharmUpgradeOperation(self.agent) |
937 | - value = yield upgrade.run() |
938 | - |
939 | - # Verify the failed upgrade. |
940 | - self.assertIdentical(value, False) |
941 | - self.assertIn("Invoking upgrade transition", output.getvalue()) |
942 | - self.assertIn("Upgrade failed.", output.getvalue()) |
943 | - yield hook_done |
944 | - |
945 | - # Verify state |
946 | - workflow_state = yield self.agent.workflow.get_state() |
947 | - self.assertEqual("charm_upgrade_error", workflow_state) |
948 | - |
949 | - # Verify new charm is in place |
950 | - new_charm = get_charm_from_path( |
951 | - os.path.join(self.agent.unit_directory, "charm")) |
952 | - |
953 | - self.assertEqual( |
954 | - self.charm.get_revision() + 1, new_charm.get_revision()) |
955 | - |
956 | - # Verify upgrade flag is cleared. |
957 | - self.assertFalse((yield self.states["unit"].get_upgrade_flag())) |
958 | + """An upgrade stops if there is no upgrade flag set.""" |
959 | + log_written = self.wait_for_log( |
960 | + "juju.agents.unit", "No upgrade flag set") |
961 | + old_charm_id = yield self.states["unit"].get_charm_id() |
962 | + self.agent.set_watch_enabled(True) |
963 | + yield self.agent.startService() |
964 | + yield log_written |
965 | + |
966 | + self.assertIdentical( |
967 | + (yield self.states["unit"].get_upgrade_flag()), |
968 | + False) |
969 | + new_charm_id = yield self.states["unit"].get_charm_id() |
970 | + self.assertEquals(new_charm_id, old_charm_id) |
971 | |
972 | === modified file 'juju/agents/unit.py' |
973 | --- juju/agents/unit.py 2012-01-10 14:14:28 +0000 |
974 | +++ juju/agents/unit.py 2012-02-02 16:42:42 +0000 |
975 | @@ -1,7 +1,5 @@ |
976 | import os |
977 | import logging |
978 | -import shutil |
979 | -import tempfile |
980 | |
981 | from twisted.internet.defer import inlineCallbacks, returnValue |
982 | |
983 | @@ -14,8 +12,6 @@ |
984 | from juju.unit.lifecycle import UnitLifecycle, HOOK_SOCKET_FILE |
985 | from juju.unit.workflow import UnitWorkflowState |
986 | |
987 | -from juju.unit.charm import download_charm |
988 | - |
989 | from juju.agents.base import BaseAgent |
990 | |
991 | |
992 | @@ -66,14 +62,14 @@ |
993 | @inlineCallbacks |
994 | def start(self): |
995 | """Start the unit agent process.""" |
996 | - self.service_state_manager = ServiceStateManager(self.client) |
997 | + service_state_manager = ServiceStateManager(self.client) |
998 | |
999 | # Retrieve our unit and configure working directories. |
1000 | service_name = self.unit_name.split("/")[0] |
1001 | - service_state = yield self.service_state_manager.get_service_state( |
1002 | + self.service_state = yield service_state_manager.get_service_state( |
1003 | service_name) |
1004 | |
1005 | - self.unit_state = yield service_state.get_unit_state( |
1006 | + self.unit_state = yield self.service_state.get_unit_state( |
1007 | self.unit_name) |
1008 | self.unit_directory = os.path.join( |
1009 | self.config["juju_directory"], "units", |
1010 | @@ -82,10 +78,11 @@ |
1011 | self.config["juju_directory"], "state") |
1012 | |
1013 | # Setup the server portion of the cli api exposed to hooks. |
1014 | + socket_path = os.path.join(self.unit_directory, HOOK_SOCKET_FILE) |
1015 | + if os.path.exists(socket_path): |
1016 | + os.unlink(socket_path) |
1017 | from twisted.internet import reactor |
1018 | - self.api_socket = reactor.listenUNIX( |
1019 | - os.path.join(self.unit_directory, HOOK_SOCKET_FILE), |
1020 | - self.api_factory) |
1021 | + self.api_socket = reactor.listenUNIX(socket_path, self.api_factory) |
1022 | |
1023 | # Setup the unit state's address |
1024 | address = yield get_unit_address(self.client) |
1025 | @@ -100,9 +97,13 @@ |
1026 | # Inform the system, we're alive. |
1027 | yield self.unit_state.connect_agent() |
1028 | |
1029 | + # Start paying attention to the debug-log setting |
1030 | + if self.get_watch_enabled(): |
1031 | + yield self.unit_state.watch_hook_debug(self.cb_watch_hook_debug) |
1032 | + |
1033 | self.lifecycle = UnitLifecycle( |
1034 | - self.client, self.unit_state, service_state, self.unit_directory, |
1035 | - self.state_directory, self.executor) |
1036 | + self.client, self.unit_state, self.service_state, |
1037 | + self.unit_directory, self.state_directory, self.executor) |
1038 | |
1039 | self.workflow = UnitWorkflowState( |
1040 | self.client, self.unit_state, self.lifecycle, self.state_directory) |
1041 | @@ -113,7 +114,7 @@ |
1042 | |
1043 | if self.get_watch_enabled(): |
1044 | yield self.unit_state.watch_resolved(self.cb_watch_resolved) |
1045 | - yield service_state.watch_config_state( |
1046 | + yield self.service_state.watch_config_state( |
1047 | self.cb_watch_config_changed) |
1048 | yield self.unit_state.watch_upgrade_flag( |
1049 | self.cb_watch_upgrade_flag) |
1050 | @@ -175,13 +176,34 @@ |
1051 | """Update the unit's charm when requested. |
1052 | """ |
1053 | upgrade_flag = yield self.unit_state.get_upgrade_flag() |
1054 | - if upgrade_flag: |
1055 | - log.info("Upgrade detected, starting upgrade") |
1056 | - upgrade = CharmUpgradeOperation(self) |
1057 | - try: |
1058 | - yield upgrade.run() |
1059 | - except Exception: |
1060 | - log.exception("Error while upgrading") |
1061 | + if not upgrade_flag: |
1062 | + log.info("No upgrade flag set.") |
1063 | + return |
1064 | + |
1065 | + log.info("Upgrade detected") |
1066 | + # Clear the flag immediately; this means that upgrade requests will |
1067 | + # be *ignored* by units which are not "started", and will need to be |
1068 | + # reissued when the units are in acceptable states. |
1069 | + yield self.unit_state.clear_upgrade_flag() |
1070 | + |
1071 | + new_id = yield self.service_state.get_charm_id() |
1072 | + old_id = yield self.unit_state.get_charm_id() |
1073 | + if new_id == old_id: |
1074 | + log.info("Upgrade ignored: already running latest charm") |
1075 | + return |
1076 | + |
1077 | + state = yield self.workflow.get_state() |
1078 | + if state != "started": |
1079 | + log.warning( |
1080 | + "Cannot upgrade: unit is in non-started state %s. Reissue " |
1081 | + "upgrade command to try again.", state) |
1082 | + return |
1083 | + |
1084 | + log.info("Starting upgrade") |
1085 | + if (yield self.workflow.fire_transition("upgrade_charm")): |
1086 | + log.info("Upgrade complete") |
1087 | + else: |
1088 | + log.info("Upgrade failed") |
1089 | |
1090 | @inlineCallbacks |
1091 | def cb_watch_config_changed(self, change): |
1092 | @@ -198,99 +220,5 @@ |
1093 | yield self.workflow.fire_transition("reconfigure") |
1094 | |
1095 | |
1096 | -class CharmUpgradeOperation(object): |
1097 | - """A unit agent charm upgrade operation.""" |
1098 | - |
1099 | - def __init__(self, agent): |
1100 | - self._agent = agent |
1101 | - self._log = logging.getLogger("unit.upgrade") |
1102 | - self._charm_directory = tempfile.mkdtemp( |
1103 | - suffix="charm-upgrade", prefix="tmp") |
1104 | - |
1105 | - def retrieve_charm(self, charm_id): |
1106 | - return download_charm( |
1107 | - self._agent.client, charm_id, self._charm_directory) |
1108 | - |
1109 | - def _remove_tree(self, result): |
1110 | - if os.path.exists(self._charm_directory): |
1111 | - shutil.rmtree(self._charm_directory) |
1112 | - return result |
1113 | - |
1114 | - def run(self): |
1115 | - d = self._run() |
1116 | - d.addBoth(self._remove_tree) |
1117 | - return d |
1118 | - |
1119 | - @inlineCallbacks |
1120 | - def _run(self): |
1121 | - self._log.info("Starting charm upgrade...") |
1122 | - |
1123 | - # Verify the workflow state |
1124 | - workflow_state = yield self._agent.workflow.get_state() |
1125 | - if workflow_state != "started": |
1126 | - self._log.warning( |
1127 | - "Unit not in an upgradeable state: %s", workflow_state) |
1128 | - # Upgrades can only be supported while the unit is |
1129 | - # running, we clear the flag because we don't support |
1130 | - # persistent upgrade requests across unit starts. The |
1131 | - # upgrade request will need to be reissued, after |
1132 | - # resolving or restarting the unit. |
1133 | - yield self._agent.unit_state.clear_upgrade_flag() |
1134 | - returnValue(False) |
1135 | - |
1136 | - # Get, check, and clear the flag. Do it first so a second upgrade |
1137 | - # will restablish the upgrade request. |
1138 | - upgrade_flag = yield self._agent.unit_state.get_upgrade_flag() |
1139 | - if not upgrade_flag: |
1140 | - self._log.warning("No upgrade flag set.") |
1141 | - returnValue(False) |
1142 | - |
1143 | - self._log.debug("Clearing upgrade flag.") |
1144 | - yield self._agent.unit_state.clear_upgrade_flag() |
1145 | - |
1146 | - # Retrieve the service state |
1147 | - service_state_manager = ServiceStateManager(self._agent.client) |
1148 | - service_state = yield service_state_manager.get_service_state( |
1149 | - self._agent.unit_name.split("/")[0]) |
1150 | - |
1151 | - # Verify unit state, upgrade flag, and newer version requested. |
1152 | - service_charm_id = yield service_state.get_charm_id() |
1153 | - unit_charm_id = yield self._agent.unit_state.get_charm_id() |
1154 | - |
1155 | - if service_charm_id == unit_charm_id: |
1156 | - self._log.debug("Unit already running latest charm") |
1157 | - yield self._agent.unit_state.clear_upgrade_flag() |
1158 | - returnValue(True) |
1159 | - |
1160 | - # Retrieve charm |
1161 | - self._log.debug("Retrieving charm %s", service_charm_id) |
1162 | - charm = yield self.retrieve_charm(service_charm_id) |
1163 | - |
1164 | - # Stop hook executions |
1165 | - self._log.debug("Stopping hook execution.") |
1166 | - yield self._agent.executor.stop() |
1167 | - |
1168 | - # Note the current charm version |
1169 | - self._log.debug("Setting unit charm id to %s", service_charm_id) |
1170 | - yield self._agent.unit_state.set_charm_id(service_charm_id) |
1171 | - |
1172 | - # Extract charm |
1173 | - self._log.debug("Extracting new charm.") |
1174 | - charm.extract_to( |
1175 | - os.path.join(self._agent.unit_directory, "charm")) |
1176 | - |
1177 | - # Upgrade |
1178 | - self._log.debug("Invoking upgrade transition.") |
1179 | - |
1180 | - success = yield self._agent.workflow.fire_transition( |
1181 | - "upgrade_charm") |
1182 | - |
1183 | - if success: |
1184 | - self._log.debug("Unit upgraded.") |
1185 | - else: |
1186 | - self._log.warning("Upgrade failed.") |
1187 | - |
1188 | - returnValue(success) |
1189 | - |
1190 | if __name__ == '__main__': |
1191 | UnitAgent.run() |
1192 | |
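The rewritten `cb_watch_upgrade_flag` above clears the upgrade flag before doing any validation, so a request that arrives while the unit is in a non-started state is dropped and must be reissued once the unit recovers. The decision order can be sketched as a pure function; names and return values here are illustrative, not part of the juju API:

```python
def check_upgrade(flag_set, old_charm_id, new_charm_id, workflow_state):
    """Return the action the watch callback would take, in check order."""
    if not flag_set:
        return "no-flag"
    # In the real callback the flag is cleared at this point, before the
    # remaining checks: a request seen in a bad state is ignored, not
    # retried automatically.
    if new_charm_id == old_charm_id:
        return "already-current"
    if workflow_state != "started":
        return "ignored-bad-state"
    return "upgrade"
```

The ordering matters: comparing charm ids before the workflow state means a redundant request is always reported as "already running latest charm", regardless of unit health.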
1193 | === modified file 'juju/control/options.py' |
1194 | --- juju/control/options.py 2011-01-20 18:00:23 +0000 |
1195 | +++ juju/control/options.py 2012-02-02 16:42:42 +0000 |
1196 | @@ -53,9 +53,8 @@ |
1197 | ) |
1198 | |
1199 | unix_group.add_argument( |
1200 | - "--pidfile", default="%s.pid" % agent.name, |
1201 | + "--pidfile", default="", |
1202 | help="Path to the pid file", |
1203 | - type=ensure_abs_path, |
1204 | ) |
1205 | |
1206 | unix_group.add_argument( |
1207 | @@ -91,9 +90,9 @@ |
1208 | ) |
1209 | |
1210 | unix_group.add_argument( |
1211 | - "--daemon", "-n", default=True, |
1212 | - dest="nodaemon", action="store_false", |
1213 | - help="Daemonize the process", |
1214 | + "--nodaemon", "-n", default=False, |
1215 | + dest="nodaemon", action="store_true", |
1216 | + help="Don't daemonize (stay in foreground)", |
1217 | ) |
1218 | |
1219 | unix_group.add_argument( |
1220 | |
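The option change above replaces the confusingly inverted `--daemon` flag (a `store_false` into `nodaemon` with a `True` default) with a plain `--nodaemon` `store_true`, so daemonizing is the default and the flag only opts out. The new behaviour is easy to verify with stdlib argparse alone:

```python
import argparse

# Sketch of the replacement flag as defined in the diff above: default is
# False (daemonize), and passing --nodaemon or -n stays in the foreground.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--nodaemon", "-n", default=False,
    dest="nodaemon", action="store_true",
    help="Don't daemonize (stay in foreground)")
```

With the old definition, `-n` was documented as "Daemonize the process" while actually setting `nodaemon`; the new spelling makes the flag name, destination, and help text agree.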
1221 | === modified file 'juju/control/status.py' |
1222 | --- juju/control/status.py 2011-12-07 05:02:08 +0000 |
1223 | +++ juju/control/status.py 2012-02-02 16:42:42 +0000 |
1224 | @@ -216,8 +216,8 @@ |
1225 | relation_status = {} |
1226 | for relation in relations: |
1227 | try: |
1228 | relation_unit = yield relation.get_unit_state(unit) |
1229 | except UnitRelationStateNotFound: |
1232 | # This exception will occur when relations are |
1233 | # established between services without service |
1234 | # units, and therefore never have any |
1235 | |
1236 | === modified file 'juju/control/tests/test_resolved.py' |
1237 | --- juju/control/tests/test_resolved.py 2012-01-12 10:18:07 +0000 |
1238 | +++ juju/control/tests/test_resolved.py 2012-02-02 16:42:42 +0000 |
1239 | @@ -88,10 +88,10 @@ |
1240 | """ |
1241 | for unit, state in units: |
1242 | unit_relation = yield service_relation.add_unit_state(unit) |
1243 | - lifecycle = UnitRelationLifecycle(self.client, |
1244 | - unit.unit_name, unit_relation, |
1245 | - service_relation.relation_name, |
1246 | - self.makeDir(), self.executor) |
1247 | + lifecycle = UnitRelationLifecycle( |
1248 | + self.client, unit.unit_name, unit_relation, |
1249 | + service_relation.relation_name, self.makeDir(), self.makeDir(), |
1250 | + self.executor) |
1251 | workflow_state = RelationWorkflowState( |
1252 | self.client, unit_relation, service_relation.relation_name, |
1253 | lifecycle, self.makeDir()) |
1254 | |
1255 | === modified file 'juju/control/tests/test_status.py' |
1256 | --- juju/control/tests/test_status.py 2011-12-07 18:29:12 +0000 |
1257 | +++ juju/control/tests/test_status.py 2012-02-02 16:42:42 +0000 |
1258 | @@ -39,7 +39,7 @@ |
1259 | # Status tests setup a large tree every time, make allowances for it. |
1260 | # TODO: create minimal trees needed per test. |
1261 | timeout = 10 |
1262 | - |
1263 | + |
1264 | @inlineCallbacks |
1265 | def setUp(self): |
1266 | yield super(StatusTestBase, self).setUp() |
1267 | @@ -107,6 +107,7 @@ |
1268 | options = TwistedOptionNamespace() |
1269 | options["juju_directory"] = path |
1270 | options["zookeeper_servers"] = get_test_zookeeper_address() |
1271 | + options["session_file"] = self.makeFile() |
1272 | for k, v in extra_options.items(): |
1273 | options[k] = v |
1274 | agent.configure(options) |
1275 | @@ -302,6 +303,7 @@ |
1276 | options = TwistedOptionNamespace() |
1277 | options["juju_directory"] = self.makeDir() |
1278 | options["zookeeper_servers"] = get_test_zookeeper_address() |
1279 | + options["session_file"] = self.makeFile() |
1280 | options["machine_id"] = "0" |
1281 | agent.configure(options) |
1282 | agent.set_watch_enabled(False) |
1283 | |
1284 | === modified file 'juju/errors.py' |
1285 | --- juju/errors.py 2011-09-24 22:21:23 +0000 |
1286 | +++ juju/errors.py 2012-02-02 16:42:42 +0000 |
1287 | @@ -62,7 +62,7 @@ |
1288 | return "Error processing %r: %s" % (self.path, self.message) |
1289 | |
1290 | |
1291 | -class CharmInvocationError(JujuError): |
1292 | +class CharmInvocationError(CharmError): |
1293 | """A charm's hook invocation exited with an error""" |
1294 | |
1295 | def __init__(self, path, exit_code): |
1296 | @@ -74,6 +74,16 @@ |
1297 | self.path, self.exit_code) |
1298 | |
1299 | |
1300 | +class CharmUpgradeError(CharmError): |
1301 | + """Something went wrong trying to upgrade a charm""" |
1302 | + |
1303 | + def __init__(self, message): |
1304 | + self.message = message |
1305 | + |
1306 | + def __str__(self): |
1307 | + return "Cannot upgrade charm: %s" % self.message |
1308 | + |
1309 | + |
1310 | class FileAlreadyExists(JujuError): |
1311 | """Raised when something refuses to overwrite an existing file. |
1312 | |
1313 | @@ -164,3 +174,6 @@ |
1314 | self.user_policy, |
1315 | self.provider_type, |
1316 | ", ".join(self.provider_policies))) |
1317 | + |
1318 | +class ServiceError(JujuError): |
1319 | + """Some problem with an upstart service""" |
1320 | |
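The scheduler changes in the next file persist queue state by writing to a sibling temp file and renaming it over the target, so a crash mid-write never leaves a truncated state file behind. A standalone sketch of that idiom (the helper name is illustrative):

```python
import os

def save_state_atomically(path, data):
    # Write the full payload to a sibling temp file first; os.rename over
    # the target is atomic on POSIX filesystems, so readers only ever see
    # either the old complete state or the new complete state.
    temp_path = path + "~"
    with open(temp_path, "w") as f:
        f.write(data)
    os.rename(temp_path, path)
```

One limitation worth noting: the data is not fsynced before the rename, so the pattern protects against partial writes but not necessarily against power loss.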
1321 | === modified file 'juju/hooks/scheduler.py' |
1322 | --- juju/hooks/scheduler.py 2011-12-12 01:56:05 +0000 |
1323 | +++ juju/hooks/scheduler.py 2012-02-02 16:42:42 +0000 |
1324 | @@ -1,4 +1,6 @@ |
1325 | import logging |
1326 | +import os |
1327 | +import yaml |
1328 | |
1329 | from twisted.internet.defer import DeferredQueue, inlineCallbacks |
1330 | from juju.state.hook import RelationHookContext, RelationChange |
1331 | @@ -28,31 +30,69 @@ |
1332 | the run queue. |
1333 | """ |
1334 | |
1335 | - def __init__(self, client, executor, unit_relation, relation_name, unit_name): |
1336 | - self._running = None |
1337 | + def __init__(self, client, executor, unit_relation, relation_name, |
1338 | + unit_name, state_path): |
1339 | + self._running = False |
1340 | + self._state_path = state_path |
1341 | |
1342 | # The thing that will actually run the hook for us |
1343 | self._executor = executor |
1344 | - |
1345 | # For hook context construction. |
1346 | self._client = client |
1347 | self._unit_relation = unit_relation |
1348 | self._relation_name = relation_name |
1349 | - self._members = None |
1350 | self._unit_name = unit_name |
1351 | |
1352 | - # Track next operation by node |
1353 | - self._node_queue = {} |
1354 | - |
1355 | - # Track node operations by clock tick |
1356 | - self._clock_queue = {} |
1357 | - |
1358 | + if os.path.exists(self._state_path): |
1359 | + self._load_state() |
1360 | + else: |
1361 | + self._create_state() |
1362 | + |
1363 | + def _create_state(self): |
1364 | + # Current units (as far as the next hook should know) |
1365 | + self._context_members = None |
1366 | + # Current units and settings versions (as far as the queue knows) |
1367 | + self._member_versions = {} |
1368 | + # Tracks next operation by unit |
1369 | + self._unit_ops = {} |
1370 | + # Tracks unit operations by clock tick |
1371 | + self._clock_units = {} |
1372 | # Run queue (clock) |
1373 | self._run_queue = DeferredQueue() |
1374 | - |
1376 | # Artificial clock sequence |
1376 | self._clock_sequence = 0 |
1377 | |
1378 | + def _load_state(self): |
1379 | + with open(self._state_path) as f: |
1380 | + state = yaml.load(f.read()) |
1381 | + if not state: |
1382 | + return self._create_state() |
1383 | + self._context_members = state["context_members"] |
1384 | + self._member_versions = state["member_versions"] |
1385 | + self._unit_ops = state["unit_ops"] |
1386 | + self._clock_units = state["clock_units"] |
1387 | + self._run_queue = DeferredQueue() |
1388 | + self._run_queue.pending = state["clock_queue"] |
1389 | + self._clock_sequence = state["clock_sequence"] |
1390 | + |
1391 | + def _save_state(self): |
1392 | + state = yaml.dump({ |
1393 | + "context_members": self._context_members, |
1394 | + "member_versions": self._member_versions, |
1395 | + "unit_ops": self._unit_ops, |
1396 | + "clock_units": self._clock_units, |
1397 | + "clock_queue": [ |
1398 | + # Strip "stop" instructions: if the lifecycle stopped us, |
1399 | + # then if/when the lifecycle comes up again in a stopped |
1400 | + # state, it won't start us in the first place. |
1401 | + c for c in self._run_queue.pending if c is not None], |
1402 | + "clock_sequence": self._clock_sequence}) |
1403 | + |
1404 | + temp_path = self._state_path + "~" |
1405 | + with open(temp_path, "w") as f: |
1406 | + f.write(state) |
1407 | + os.rename(temp_path, self._state_path) |
1408 | + |
1409 | @property |
1410 | def running(self): |
1411 | return self._running is True |
1412 | @@ -61,6 +101,12 @@ |
1413 | def run(self): |
1414 | """Run the hook scheduler and execution.""" |
1415 | assert not self._running, "Scheduler is already running" |
1416 | + try: |
1417 | + with open(self._state_path, "a"): |
1418 | + pass |
1419 | + except IOError: |
1420 | + raise AssertionError("%s is not writable!" % self._state_path) |
1421 | + |
1422 | self._running = True |
1423 | log.debug("start") |
1424 | |
1425 | @@ -72,16 +118,17 @@ |
1426 | break |
1427 | |
1428 | # Get all the units with changes in this clock tick. |
1429 | - for unit_name in self._clock_queue.pop(clock): |
1430 | + for unit_name in self._clock_units.pop(clock): |
1431 | |
1432 | # Get the change for the unit. |
1433 | - change_clock, change_type = self._node_queue.pop(unit_name) |
1434 | + change_clock, change_type = self._unit_ops.pop(unit_name) |
1435 | |
1436 | log.debug("executing hook for %s:%s", |
1437 | unit_name, CHANGE_LABELS[change_type]) |
1438 | |
1439 | # Execute the hook |
1440 | yield self._execute(unit_name, change_type) |
1441 | + self._save_state() |
1442 | |
1443 | def stop(self): |
1444 | """Stop the hook execution. |
1445 | @@ -99,26 +146,24 @@ |
1446 | # occurs. |
1447 | self._run_queue.put(None) |
1448 | |
1449 | - def notify_change(self, old_units=(), new_units=(), modified=()): |
1450 | - """Receive changes regarding related units and schedule hook execution. |
1451 | - """ |
1452 | - log.debug("relation change old:%s, new:%s, modified:%s", |
1453 | - old_units, new_units, modified) |
1454 | - |
1455 | + def cb_change_members(self, old_units, new_units): |
1456 | + log.debug("members changed: old=%s, new=%s", old_units, new_units) |
1457 | + scheduled = 0 |
1458 | self._clock_sequence += 1 |
1459 | |
1460 | - # keep track if we've scheduled changes during this clock |
1461 | - scheduled = 0 |
1462 | - |
1463 | - # Handle membership changes |
1464 | - |
1465 | - # If we don't have a cached membership yet, use the old units |
1466 | - # as a baseline. |
1467 | - if self._members is None: |
1468 | - self._members = list(old_units) |
1469 | - |
1470 | - added = set(new_units) - set(old_units) |
1471 | - removed = set(old_units) - set(new_units) |
1472 | + if self._context_members is None: |
1473 | + self._context_members = list(old_units) |
1474 | + |
1475 | + if set(self._member_versions) != set(old_units): |
1476 | + log.debug( |
1477 | + "old does not match last recorded units: %s", |
1478 | + sorted(self._member_versions)) |
1479 | + |
1480 | + added = set(new_units) - set(self._member_versions) |
1481 | + removed = set(self._member_versions) - set(new_units) |
1482 | + self._member_versions.update(dict((unit, 0) for unit in added)) |
1483 | + for unit in removed: |
1484 | + del self._member_versions[unit] |
1485 | |
1486 | for unit_name in sorted(added): |
1487 | scheduled += self._queue_change( |
1488 | @@ -128,59 +173,68 @@ |
1489 | scheduled += self._queue_change( |
1490 | unit_name, REMOVED, self._clock_sequence) |
1491 | |
1492 | - # Handle modified change |
1493 | - for unit_name in modified: |
1494 | - scheduled += self._queue_change( |
1495 | - unit_name, MODIFIED, self._clock_sequence) |
1496 | + if scheduled: |
1497 | + self._run_queue.put(self._clock_sequence) |
1498 | + self._save_state() |
1499 | |
1500 | + def cb_change_settings(self, unit_versions): |
1501 | + log.debug("settings changed: %s", unit_versions) |
1502 | + scheduled = 0 |
1503 | + self._clock_sequence += 1 |
1504 | + for (unit_name, version) in unit_versions: |
1505 | + if version > self._member_versions.get(unit_name, 0): |
1506 | + self._member_versions[unit_name] = version |
1507 | + scheduled += self._queue_change( |
1508 | + unit_name, MODIFIED, self._clock_sequence) |
1509 | if scheduled: |
1510 | self._run_queue.put(self._clock_sequence) |
1511 | + self._save_state() |
1512 | |
1513 | def get_hook_context(self, change): |
1514 | """ |
1515 | Return a hook context, corresponding to the current state of the |
1516 | system. |
1517 | """ |
1518 | - members = self._members or () |
1519 | + context_members = self._context_members or () |
1520 | context = RelationHookContext( |
1521 | self._client, self._unit_relation, change, |
1522 | - sorted(members), unit_name=self._unit_name) |
1523 | + sorted(context_members), unit_name=self._unit_name) |
1524 | return context |
1525 | |
1526 | def _queue_change(self, unit_name, change_type, clock): |
1527 | """Queue up the node change for execution. |
1528 | """ |
1529 | # If it's a new change for the unit, store it and return. |
1530 | - if not unit_name in self._node_queue: |
1531 | - self._node_queue[unit_name] = (clock, change_type) |
1532 | - self._clock_queue.setdefault(clock, []).append(unit_name) |
1533 | + if not unit_name in self._unit_ops: |
1534 | + self._unit_ops[unit_name] = (clock, change_type) |
1535 | + self._clock_units.setdefault(clock, []).append(unit_name) |
1536 | return True |
1537 | |
1538 | # Else merge/reduce with the previous operation. |
1539 | - previous_clock, previous_change = self._node_queue[unit_name] |
1540 | + previous_clock, previous_change = self._unit_ops[unit_name] |
1541 | change_clock, change_type = self._reduce( |
1542 | (previous_clock, previous_change), |
1543 | (self._clock_sequence, change_type)) |
1544 | |
1545 | # If they've cancelled, remove from node and clock queues |
1546 | if change_type is None: |
1547 | - del self._node_queue[unit_name] |
1548 | - self._clock_queue[previous_clock].remove(unit_name) |
1549 | + del self._unit_ops[unit_name] |
1550 | + self._clock_units[previous_clock].remove(unit_name) |
1551 | return False |
1552 | |
1553 | # Update the node queue with the merged change. |
1554 | - self._node_queue[unit_name] = (change_clock, change_type) |
1555 | + self._unit_ops[unit_name] = (change_clock, change_type) |
1556 | |
1557 | # If the clock has changed, remove the old entry. |
1558 | if change_clock != previous_clock: |
1559 | - self._clock_queue[previous_clock].remove(unit_name) |
1560 | + self._clock_units[previous_clock].remove(unit_name) |
1561 | |
1562 | # If the old entry has precedence, we didn't schedule anything for |
1563 | # this clock cycle. |
1564 | if change_clock != clock: |
1565 | return False |
1566 | |
1567 | - self._clock_queue.setdefault(clock, []).append(unit_name) |
1568 | + self._clock_units.setdefault(clock, []).append(unit_name) |
1569 | return True |
1570 | |
1571 | def _reduce(self, previous, new): |
1572 | @@ -214,9 +268,9 @@ |
1573 | """ |
1574 | # Determine the current members as of the change. |
1575 | if change_type == ADDED: |
1576 | - self._members.append(unit_name) |
1577 | + self._context_members.append(unit_name) |
1578 | elif change_type == REMOVED: |
1579 | - self._members.remove(unit_name) |
1580 | + self._context_members.remove(unit_name) |
1581 | |
1582 | # Assemble the change and hook execution context |
1583 | change = RelationChange( |
1584 | |
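The `_save_state` hunk above persists scheduler state with a write-to-temp-then-rename pattern. A minimal standalone sketch of that pattern (using json rather than the scheduler's YAML to keep the sketch dependency-free; the function name is an assumption):

```python
import json
import os

def save_state_atomically(state_path, state):
    # Write the full serialized state to a sibling temp file first...
    temp_path = state_path + "~"
    with open(temp_path, "w") as f:
        f.write(json.dumps(state))
    # ...then rename over the real path. On POSIX, rename within one
    # filesystem is atomic, so a reader sees either the old state or
    # the new state, never a truncated file.
    os.rename(temp_path, state_path)
```

The point of the pattern is crash safety: a failure between serializing and renaming leaves the previous state file untouched, so on restart the scheduler reloads a consistent (if slightly stale) queue snapshot.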
1585 | === modified file 'juju/hooks/tests/test_scheduler.py' |
1586 | --- juju/hooks/tests/test_scheduler.py 2011-12-12 01:56:05 +0000 |
1587 | +++ juju/hooks/tests/test_scheduler.py 2012-02-02 16:42:42 +0000 |
1588 | @@ -1,11 +1,18 @@ |
1589 | import logging |
1590 | - |
1591 | -from twisted.internet.defer import inlineCallbacks |
1592 | - |
1593 | -from juju.hooks.scheduler import HookScheduler |
1594 | +import os |
1595 | +import yaml |
1596 | + |
1597 | +from twisted.internet.defer import inlineCallbacks, fail, succeed |
1598 | + |
1599 | +from juju.hooks.scheduler import HookScheduler, ADDED, MODIFIED, REMOVED |
1600 | from juju.state.hook import RelationChange |
1601 | from juju.state.tests.test_service import ServiceStateManagerTestBase |
1602 | |
1603 | + |
1604 | +class SomeError(Exception): |
1605 | + pass |
1606 | + |
1607 | + |
1608 | class HookSchedulerTest(ServiceStateManagerTestBase): |
1609 | |
1610 | @inlineCallbacks |
1611 | @@ -15,29 +22,49 @@ |
1612 | self.unit_relation = self.mocker.mock() |
1613 | self.executions = [] |
1614 | self.service = yield self.add_service_from_charm("wordpress") |
1615 | - self.scheduler = HookScheduler(self.client, |
1616 | - self.collect_executor, |
1617 | - self.unit_relation, "", |
1618 | - unit_name="wordpress/0") |
1619 | + self.state_file = self.makeFile() |
1620 | + self.executor = self.collect_executor |
1621 | + self._scheduler = None |
1622 | self.log_stream = self.capture_logging( |
1623 | "hook.scheduler", level=logging.DEBUG) |
1624 | |
1625 | + @property |
1626 | + def scheduler(self): |
1627 | + # Create lazily, so we can create with a state file if we want to, |
1628 | + # and swap out collect_executor when helpful to do so. |
1629 | + if self._scheduler is None: |
1630 | + self._scheduler = HookScheduler( |
1631 | + self.client, self.executor, self.unit_relation, "", |
1632 | + "wordpress/0", self.state_file) |
1633 | + return self._scheduler |
1634 | + |
1635 | def collect_executor(self, context, change): |
1636 | self.executions.append((context, change)) |
1637 | |
1638 | + def write_single_unit_state(self): |
1639 | + with open(self.state_file, "w") as f: |
1640 | + f.write(yaml.dump({ |
1641 | + "context_members": ["u-1"], |
1642 | + "member_versions": {"u-1": 0}, |
1643 | + "unit_ops": {}, |
1644 | + "clock_units": {}, |
1645 | + "clock_queue": [], |
1646 | + "clock_sequence": 1})) |
1647 | + |
1648 | # Event reduction/coalescing cases |
1649 | def test_reduce_removed_added(self): |
1650 | """A remove event for a node followed by an add event |
1651 | results in a modify event. |
1652 | """ |
1653 | - self.scheduler.notify_change(old_units=["u-1"], new_units=[]) |
1654 | - self.scheduler.notify_change(old_units=[], new_units=["u-1"]) |
1655 | + self.write_single_unit_state() |
1656 | + self.scheduler.cb_change_members(["u-1"], []) |
1657 | + self.scheduler.cb_change_members([], ["u-1"]) |
1658 | self.scheduler.run() |
1659 | self.assertEqual(len(self.executions), 1) |
1660 | self.assertEqual(self.executions[0][1].change_type, "modified") |
1661 | |
1662 | - output = ("relation change old:['u-1'], new:[], modified:()", |
1663 | - "relation change old:[], new:['u-1'], modified:()", |
1664 | + output = ("members changed: old=['u-1'], new=[]", |
1665 | + "members changed: old=[], new=['u-1']", |
1666 | "start", |
1667 | "executing hook for u-1:modified\n") |
1668 | self.assertEqual(self.log_stream.getvalue(), "\n".join(output)) |
1669 | @@ -46,34 +73,34 @@ |
1670 | """A modify, remove, add event for a node results in a modify. |
1671 | An extra validation of the previous test. |
1672 | """ |
1673 | - self.scheduler.notify_change(modified=["u-1"]) |
1674 | - self.scheduler.notify_change(old_units=["u-1"], new_units=[]) |
1675 | - self.scheduler.notify_change(old_units=[], new_units=["u-1"]) |
1676 | + self.write_single_unit_state() |
1677 | + self.scheduler.cb_change_settings([("u-1", 1)]) |
1678 | + self.scheduler.cb_change_members(["u-1"], []) |
1679 | + self.scheduler.cb_change_members([], ["u-1"]) |
1680 | self.scheduler.run() |
1681 | self.assertEqual(len(self.executions), 1) |
1682 | self.assertEqual(self.executions[0][1].change_type, "modified") |
1683 | |
1684 | def test_reduce_add_modify(self): |
1685 | """An add and modify event for a node are coalesced to an add.""" |
1686 | - self.scheduler.notify_change(old_units=[], new_units=["u-1"]) |
1687 | - self.scheduler.notify_change(modified=["u-1"]) |
1688 | + self.scheduler.cb_change_members([], ["u-1"]) |
1689 | + self.scheduler.cb_change_settings([("u-1", 1)]) |
1690 | self.scheduler.run() |
1691 | self.assertEqual(len(self.executions), 1) |
1692 | self.assertEqual(self.executions[0][1].change_type, "joined") |
1693 | |
1694 | def test_reduce_add_remove(self): |
1695 | """An add followed by a removal results in a noop.""" |
1696 | - self.scheduler.notify_change(old_units=[], new_units=["u-1"]) |
1697 | - self.scheduler.notify_change(old_units=["u-1"], new_units=[]) |
1698 | + self.scheduler.cb_change_members([], ["u-1"]) |
1699 | + self.scheduler.cb_change_members(["u-1"], []) |
1700 | self.scheduler.run() |
1701 | self.assertEqual(len(self.executions), 0) |
1702 | |
1703 | def test_reduce_modify_remove(self): |
1704 | """Modifying and then removing a node, results in just the removal.""" |
1705 | - self.scheduler.notify_change(old_units=["u-1"], |
1706 | - new_units=["u-1"], |
1707 | - modified=["u-1"]) |
1708 | - self.scheduler.notify_change(old_units=["u-1"], new_units=[]) |
1709 | + self.write_single_unit_state() |
1710 | + self.scheduler.cb_change_settings([("u-1", 1)]) |
1711 | + self.scheduler.cb_change_members(["u-1"], []) |
1712 | self.scheduler.run() |
1713 | self.assertEqual(len(self.executions), 1) |
1714 | self.assertEqual(self.executions[0][1].change_type, "departed") |
1715 | @@ -82,15 +109,15 @@ |
1716 | """Multiple modifies get coalesced to a single modify.""" |
1717 | # Simulate normal startup: the first notify will always be the existing |
1718 | # membership set. |
1719 | - self.scheduler.notify_change(old_units=[], new_units=["u-1"]) |
1720 | + self.scheduler.cb_change_members([], ["u-1"]) |
1721 | self.scheduler.run() |
1722 | self.scheduler.stop() |
1723 | self.assertEqual(len(self.executions), 1) |
1724 | |
1725 | # Now continue the modify/modify reduction. |
1726 | - self.scheduler.notify_change(modified=["u-1"]) |
1727 | - self.scheduler.notify_change(modified=["u-1"]) |
1728 | - self.scheduler.notify_change(modified=["u-1"]) |
1729 | + self.scheduler.cb_change_settings([("u-1", 1)]) |
1730 | + self.scheduler.cb_change_settings([("u-1", 2)]) |
1731 | + self.scheduler.cb_change_settings([("u-1", 3)]) |
1732 | self.scheduler.run() |
1733 | |
1734 | self.assertEqual(len(self.executions), 2) |
1735 | @@ -112,20 +139,35 @@ |
1736 | self.assertFalse(self.scheduler.running) |
1737 | |
1738 | @inlineCallbacks |
1739 | + def test_run_requires_writable_state(self): |
1740 | + # Induce lazy creation of scheduler, then break state file |
1741 | + self.scheduler |
1742 | + with open(self.state_file, "w"): |
1743 | + pass |
1744 | + os.chmod(self.state_file, 0) |
1745 | + e = yield self.assertFailure(self.scheduler.run(), AssertionError) |
1746 | + self.assertEquals(str(e), "%s is not writable!" % self.state_file) |
1747 | + |
1748 | + def test_empty_state(self): |
1749 | + with open(self.state_file, "w") as f: |
1750 | + f.write(yaml.dump({})) |
1751 | + |
1752 | + # Induce lazy creation to verify it can still survive |
1753 | + self.scheduler |
1754 | + |
1755 | + @inlineCallbacks |
1756 | def test_membership_visibility_per_change(self): |
1757 | """Hooks are executed against changes, those changes are |
1758 | associated to a temporal timestamp, however the changes |
1759 | are scheduled for execution, and the state/time of the |
1760 | world may have advanced, to present a logically consistent |
1761 | - view, we try to gaurantee at a minimum, that hooks will |
1762 | - always see the membership of a relations it was at the |
1763 | + view, we try to guarantee at a minimum, that hooks will |
1764 | + always see the membership of a relation as it was at the |
1765 | time of their associated change. |
1766 | """ |
1767 | - self.scheduler.notify_change( |
1768 | - old_units=[], new_units=["u-1", "u-2"]) |
1769 | - self.scheduler.notify_change( |
1770 | - old_units=["u-1", "u-2"], new_units=["u-2", "u-3"]) |
1771 | - self.scheduler.notify_change(modified=["u-2"]) |
1772 | + self.scheduler.cb_change_members([], ["u-1", "u-2"]) |
1773 | + self.scheduler.cb_change_members(["u-1", "u-2"], ["u-2", "u-3"]) |
1774 | + self.scheduler.cb_change_settings([("u-2", 1)]) |
1775 | |
1776 | self.scheduler.run() |
1777 | self.scheduler.stop() |
1778 | @@ -139,9 +181,8 @@ |
1779 | change_members = yield self.executions[0][0].get_members() |
1780 | self.assertEqual(change_members, ["u-2"]) |
1781 | |
1782 | - self.scheduler.notify_change(modified=["u-2"]) |
1783 | - self.scheduler.notify_change( |
1784 | - old_units=["u-2", "u-3"], new_units=["u-2"]) |
1785 | + self.scheduler.cb_change_settings([("u-2", 2)]) |
1786 | + self.scheduler.cb_change_members(["u-2", "u-3"], ["u-2"]) |
1787 | self.scheduler.run() |
1788 | |
1789 | self.assertEqual(len(self.executions), 4) |
1790 | @@ -156,10 +197,17 @@ |
1791 | a hook won't see any 'active' members in a membership list that |
1792 | it hasn't previously been notified of. |
1793 | """ |
1794 | - self.scheduler.notify_change( |
1795 | - old_units=["u-1", "u-2"], |
1796 | - new_units=["u-2", "u-3", "u-4"], |
1797 | - modified=["u-2"]) |
1798 | + with open(self.state_file, "w") as f: |
1799 | + f.write(yaml.dump({ |
1800 | + "context_members": ["u-1", "u-2"], |
1801 | + "member_versions": {"u-1": 0, "u-2": 0}, |
1802 | + "unit_ops": {}, |
1803 | + "clock_units": {}, |
1804 | + "clock_queue": [], |
1805 | + "clock_sequence": 1})) |
1806 | + |
1807 | + self.scheduler.cb_change_members(["u-1", "u-2"], ["u-2", "u-3", "u-4"]) |
1808 | + self.scheduler.cb_change_settings([("u-2", 1)]) |
1809 | |
1810 | self.scheduler.run() |
1811 | self.scheduler.stop() |
1812 | @@ -197,3 +245,172 @@ |
1813 | RelationChange("", "", "")) |
1814 | members = yield context.get_members() |
1815 | self.assertEqual(members, []) |
1816 | + |
1817 | + @inlineCallbacks |
1818 | + def test_state_is_loaded(self): |
1819 | + with open(self.state_file, "w") as f: |
1820 | + f.write(yaml.dump({ |
1821 | + "context_members": ["u-1", "u-2"], |
1822 | + "member_versions": {"u-1": 5, "u-2": 2, "u-3": 0}, |
1823 | + "unit_ops": {"u-1": (3, MODIFIED), "u-3": (4, ADDED)}, |
1824 | + "clock_units": {3: ["u-1"], 4: ["u-3"]}, |
1825 | + "clock_queue": [3, 4], |
1826 | + "clock_sequence": 4})) |
1827 | + |
1828 | + self.scheduler.run() |
1829 | + while len(self.executions) < 2: |
1830 | + yield self.poke_zk() |
1831 | + self.scheduler.stop() |
1832 | + |
1833 | + self.assertEqual(self.executions[0][1].change_type, "modified") |
1834 | + members = yield self.executions[0][0].get_members() |
1835 | + self.assertEqual(members, ["u-1", "u-2"]) |
1836 | + |
1837 | + self.assertEqual(self.executions[1][1].change_type, "joined") |
1838 | + members = yield self.executions[1][0].get_members() |
1839 | + self.assertEqual(members, ["u-1", "u-2", "u-3"]) |
1840 | + |
1841 | + with open(self.state_file) as f: |
1842 | + state = yaml.load(f.read()) |
1843 | + self.assertEquals(state, { |
1844 | + "context_members": ["u-1", "u-2", "u-3"], |
1845 | + "member_versions": {"u-1": 5, "u-2": 2, "u-3": 0}, |
1846 | + "unit_ops": {}, |
1847 | + "clock_units": {}, |
1848 | + "clock_queue": [], |
1849 | + "clock_sequence": 4}) |
1850 | + |
1851 | + def test_state_is_stored(self): |
1852 | + with open(self.state_file, "w") as f: |
1853 | + f.write(yaml.dump({ |
1854 | + "context_members": ["u-1", "u-2"], |
1855 | + "member_versions": {"u-1": 0, "u-2": 2}, |
1856 | + "unit_ops": {}, |
1857 | + "clock_units": {}, |
1858 | + "clock_queue": [], |
1859 | + "clock_sequence": 7})) |
1860 | + |
1861 | + self.scheduler.cb_change_members(["u-1", "u-2"], ["u-2", "u-3"]) |
1862 | + self.scheduler.cb_change_settings([("u-2", 3)]) |
1863 | + |
1864 | + # Add a stop instruction to the queue, which should *not* be saved. |
1865 | + self.scheduler.stop() |
1866 | + |
1867 | + with open(self.state_file) as f: |
1868 | + state = yaml.load(f.read()) |
1869 | + self.assertEquals(state, { |
1870 | + "context_members": ["u-1", "u-2"], |
1871 | + "member_versions": {"u-2": 3, "u-3": 0}, |
1872 | + "unit_ops": {"u-1": (8, REMOVED), |
1873 | + "u-2": (9, MODIFIED), |
1874 | + "u-3": (8, ADDED)}, |
1875 | + "clock_units": {8: ["u-3", "u-1"], 9: ["u-2"]}, |
1876 | + "clock_queue": [8, 9], |
1877 | + "clock_sequence": 9}) |
1878 | + |
1879 | + @inlineCallbacks |
1880 | + def test_state_stored_after_tick(self): |
1881 | + |
1882 | + def execute(context, change): |
1883 | + self.execute_calls += 1 |
1884 | + if self.execute_calls > 1: |
1885 | + return fail(SomeError()) |
1886 | + return succeed(None) |
1887 | + self.execute_calls = 0 |
1888 | + self.executor = execute |
1889 | + |
1890 | + with open(self.state_file, "w") as f: |
1891 | + f.write(yaml.dump({ |
1892 | + "context_members": ["u-1", "u-2"], |
1893 | + "member_versions": {"u-1": 1, "u-2": 0, "u-3": 0}, |
1894 | + "unit_ops": {"u-1": (3, MODIFIED), "u-3": (4, ADDED)}, |
1895 | + "clock_units": {3: ["u-1"], 4: ["u-3"]}, |
1896 | + "clock_queue": [3, 4], |
1897 | + "clock_sequence": 4})) |
1898 | + |
1899 | + d = self.scheduler.run() |
1900 | + while self.execute_calls < 2: |
1901 | + yield self.poke_zk() |
1902 | + yield self.assertFailure(d, SomeError) |
1903 | + with open(self.state_file) as f: |
1904 | + self.assertEquals(yaml.load(f.read()), { |
1905 | + "context_members": ["u-1", "u-2"], |
1906 | + "member_versions": {"u-1": 1, "u-2": 0, "u-3": 0}, |
1907 | + "unit_ops": {"u-3": (4, ADDED)}, |
1908 | + "clock_units": {4: ["u-3"]}, |
1909 | + "clock_queue": [4], |
1910 | + "clock_sequence": 4}) |
1911 | + |
1912 | + @inlineCallbacks |
1913 | + def test_state_not_stored_mid_tick(self): |
1914 | + |
1915 | + def execute(context, change): |
1916 | + self.execute_called = True |
1917 | + return fail(SomeError()) |
1918 | + self.execute_called = False |
1919 | + self.executor = execute |
1920 | + |
1921 | + initial_state = { |
1922 | + "context_members": ["u-1", "u-2"], |
1923 | + "member_versions": {"u-1": 1, "u-2": 0, "u-3": 0}, |
1924 | + "unit_ops": {"u-1": (3, MODIFIED), "u-3": (4, ADDED)}, |
1925 | + "clock_units": {3: ["u-1"], 4: ["u-3"]}, |
1926 | + "clock_queue": [3, 4], |
1927 | + "clock_sequence": 4} |
1928 | + with open(self.state_file, "w") as f: |
1929 | + f.write(yaml.dump(initial_state)) |
1930 | + |
1931 | + d = self.scheduler.run() |
1932 | + while not self.execute_called: |
1933 | + yield self.poke_zk() |
1934 | + yield self.assertFailure(d, SomeError) |
1935 | + with open(self.state_file) as f: |
1936 | + self.assertEquals(yaml.load(f.read()), initial_state) |
1937 | + |
1938 | + def test_ignore_equal_settings_version(self): |
1939 | + """ |
1940 | + A modified event whose version is not greater than the latest known |
1941 | + version for that unit will be ignored. |
1942 | + """ |
1943 | + self.write_single_unit_state() |
1944 | + self.scheduler.cb_change_settings([("u-1", 0),]) |
1945 | + self.scheduler.run() |
1946 | + self.assertEquals(len(self.executions), 0) |
1947 | + |
1948 | + def test_settings_version_0_on_add(self): |
1949 | + """ |
1950 | + When a unit is added, we assume its settings version to be 0, and |
1951 | + therefore modified events with version 0 will be ignored. |
1952 | + """ |
1953 | + self.scheduler.cb_change_members([], ["u-1"]) |
1954 | + self.scheduler.cb_change_settings([("u-1", 0),]) |
1955 | + self.scheduler.run() |
1956 | + self.assertEquals(len(self.executions), 1) |
1957 | + self.assertEqual(self.executions[0][1].change_type, "joined") |
1958 | + |
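The two settings-version tests above pin down the gating rule: a modified event schedules a hook only when its version is strictly greater than the last version recorded for that unit, and newly added units start at version 0. A sketch of that rule in isolation (the helper name is an assumption, not the scheduler's API):

```python
def should_schedule_modified(member_versions, unit_name, version):
    """Return True (and record the new version) only when this
    settings version is strictly newer than the last one seen for
    the unit. Unseen units default to version 0, matching the
    'joined implies version 0' behaviour tested above."""
    if version > member_versions.get(unit_name, 0):
        member_versions[unit_name] = version
        return True
    return False
```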
1959 | + def test_membership_timeslip(self): |
1960 | + """ |
1961 | + Adds and removes are calculated based on known membership state, NOT |
1962 | + on old_units. |
1963 | + """ |
1964 | + with open(self.state_file, "w") as f: |
1965 | + f.write(yaml.dump({ |
1966 | + "context_members": ["u-1", "u-2"], |
1967 | + "member_versions": {"u-1": 0, "u-2": 0}, |
1968 | + "unit_ops": {}, |
1969 | + "clock_units": {}, |
1970 | + "clock_queue": [], |
1971 | + "clock_sequence": 4})) |
1972 | + |
1973 | + self.scheduler.cb_change_members(["u-2"], ["u-3", "u-4"]) |
1974 | + self.scheduler.run() |
1975 | + |
1976 | + output = ( |
1977 | + "members changed: old=['u-2'], new=['u-3', 'u-4']", |
1978 | + "old does not match last recorded units: ['u-1', 'u-2']", |
1979 | + "start", |
1980 | + "executing hook for u-3:joined", |
1981 | + "executing hook for u-4:joined", |
1982 | + "executing hook for u-1:departed", |
1983 | + "executing hook for u-2:departed\n") |
1984 | + self.assertEqual(self.log_stream.getvalue(), "\n".join(output)) |
1985 | |
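The reduction cases exercised by the tests above (remove then add becomes modified, add then remove cancels, add absorbs a later modify, remove supersedes a pending modify) can be summarized as a small merge table. This is an illustrative sketch, not the scheduler's actual `_reduce`; the string constants are stand-ins for whatever values `ADDED`/`MODIFIED`/`REMOVED` really hold:

```python
# Hypothetical stand-ins for the real change constants.
ADDED, MODIFIED, REMOVED = "added", "modified", "removed"

# (previous pending change, new change) -> merged result.
# None means the pair cancels and nothing stays queued.
_MERGE = {
    (ADDED, REMOVED): None,        # add then remove: noop
    (REMOVED, ADDED): MODIFIED,    # remove then re-add: modified
    (ADDED, MODIFIED): ADDED,      # modify folds into the pending add
    (MODIFIED, REMOVED): REMOVED,  # removal supersedes the modify
    (MODIFIED, MODIFIED): MODIFIED,
}

def reduce_change(previous, new):
    # Fall back to the new change when no special merge rule applies.
    return _MERGE.get((previous, new), new)
```

Coalescing this way guarantees at most one pending operation per unit, which is what lets the state file stay a compact per-unit map (`unit_ops`) rather than a full event log.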
1986 | === modified file 'juju/lib/lxc/tests/test_lxc.py' |
1987 | --- juju/lib/lxc/tests/test_lxc.py 2011-10-01 00:04:14 +0000 |
1988 | +++ juju/lib/lxc/tests/test_lxc.py 2012-02-02 16:42:42 +0000 |
1989 | @@ -10,10 +10,17 @@ |
1990 | from juju.lib.testing import TestCase |
1991 | |
1992 | |
1993 | -def run_lxc_tests(): |
1994 | - if os.environ.get("TEST_LXC"): |
1995 | - return None |
1996 | - return "TEST_LXC=1 to include lxc tests" |
1997 | +def skip_sudo_tests(): |
1998 | + if os.environ.get("TEST_SUDO"): |
1999 | + # Get user's password *now*, if needed, not mid-run |
2000 | + os.system("sudo false") |
2001 | + return False |
2002 | + return "TEST_SUDO=1 to include tests which use sudo (including lxc tests)" |
2003 | + |
2004 | + |
2005 | +def uses_sudo(f): |
2006 | + f.skip = skip_sudo_tests() |
2007 | + return f |
2008 | |
2009 | |
2010 | DATA_PATH = os.path.abspath( |
2011 | @@ -23,9 +30,9 @@ |
2012 | DEFAULT_CONTAINER = "lxc_test" |
2013 | |
2014 | |
2015 | +@uses_sudo |
2016 | class LXCTest(TestCase): |
2017 | timeout = 240 |
2018 | - skip = run_lxc_tests() |
2019 | |
2020 | def setUp(self): |
2021 | self.config = self.make_config() |
2022 | |
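The `uses_sudo` decorator above relies on Twisted Trial's convention that a `TestCase` class (or test method) with a truthy `skip` attribute is skipped, with that string reported as the reason. A generalized sketch of the same pattern (the helper name and environment variable are assumptions):

```python
import os

def skip_unless_env(var, reason):
    """Class decorator in the spirit of `uses_sudo`: conditionally
    set Trial's `skip` attribute when an opt-in environment
    variable is absent."""
    def decorate(cls):
        if not os.environ.get(var):
            cls.skip = reason
        return cls
    return decorate
```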
2023 | === added directory 'juju/lib/tests/data' |
2024 | === added file 'juju/lib/tests/data/test_basic_install' |
2025 | --- juju/lib/tests/data/test_basic_install 1970-01-01 00:00:00 +0000 |
2026 | +++ juju/lib/tests/data/test_basic_install 2012-02-02 16:42:42 +0000 |
2027 | @@ -0,0 +1,10 @@ |
2028 | +description "uninteresting service" |
2029 | +author "Juju Team <juju@lists.ubuntu.com>" |
2030 | + |
2031 | +start on runlevel [2345] |
2032 | +stop on runlevel [!2345] |
2033 | +respawn |
2034 | + |
2035 | + |
2036 | + |
2037 | +exec /bin/false >> /tmp/some-name.output 2>&1 |
2038 | |
2039 | === added file 'juju/lib/tests/data/test_less_basic_install' |
2040 | --- juju/lib/tests/data/test_less_basic_install 1970-01-01 00:00:00 +0000 |
2041 | +++ juju/lib/tests/data/test_less_basic_install 2012-02-02 16:42:42 +0000 |
2042 | @@ -0,0 +1,11 @@ |
2043 | +description "pew pew pew blam" |
2044 | +author "Juju Team <juju@lists.ubuntu.com>" |
2045 | + |
2046 | +start on runlevel [2345] |
2047 | +stop on runlevel [!2345] |
2048 | +respawn |
2049 | + |
2050 | +env FOO="bar baz qux" |
2051 | +env PEW="pew" |
2052 | + |
2053 | +exec /bin/deathstar --ignore-ewoks endor >> /somewhere/else 2>&1 |
2054 | |
2055 | === added file 'juju/lib/tests/data/test_standard_install' |
2056 | --- juju/lib/tests/data/test_standard_install 1970-01-01 00:00:00 +0000 |
2057 | +++ juju/lib/tests/data/test_standard_install 2012-02-02 16:42:42 +0000 |
2058 | @@ -0,0 +1,10 @@ |
2059 | +description "a wretched hive of scum and villainy" |
2060 | +author "Juju Team <juju@lists.ubuntu.com>" |
2061 | + |
2062 | +start on runlevel [2345] |
2063 | +stop on runlevel [!2345] |
2064 | +respawn |
2065 | + |
2066 | +env LIGHTSABER="civilised weapon" |
2067 | + |
2068 | +exec /bin/imagination-failure --no-ideas >> /tmp/some-name.output 2>&1 |
2069 | |
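The three data files above share one visible template: a description/author header, runlevel start/stop stanzas, `respawn`, optional `env` lines, and an `exec` line that redirects output. A sketch of a renderer producing files of this shape; the exact template `UpstartService` uses is not shown in this chunk, so treat the function signature and defaults as assumptions inferred from the data files:

```python
def render_upstart_conf(name, description, command, environ=None,
                        author="Juju Team <juju@lists.ubuntu.com>"):
    """Render an upstart job matching the shape of the test data
    files: env stanzas are sorted for deterministic output, and the
    exec line appends stdout/stderr to a per-job log file."""
    output_path = "/tmp/%s.output" % name
    env_lines = "\n".join(
        'env %s="%s"' % item for item in sorted((environ or {}).items()))
    return (
        'description "%s"\n'
        'author "%s"\n'
        "\n"
        "start on runlevel [2345]\n"
        "stop on runlevel [!2345]\n"
        "respawn\n"
        "\n"
        "%s\n"
        "\n"
        "exec %s >> %s 2>&1\n"
    ) % (description, author, env_lines, command, output_path)
```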
2070 | === modified file 'juju/lib/tests/test_statemachine.py' |
2071 | --- juju/lib/tests/test_statemachine.py 2011-09-15 18:50:23 +0000 |
2072 | +++ juju/lib/tests/test_statemachine.py 2012-02-02 16:42:42 +0000 |
2073 | @@ -121,10 +121,14 @@ |
2074 | workflow_state = AttributeWorkflowState(workflow) |
2075 | current_state = yield workflow_state.get_state() |
2076 | self.assertEqual(current_state, None) |
2077 | + current_vars = yield workflow_state.get_state_variables() |
2078 | + self.assertEqual(current_vars, {}) |
2079 | |
2080 | yield workflow_state.set_state("started") |
2081 | current_state = yield workflow_state.get_state() |
2082 | self.assertEqual(current_state, "started") |
2083 | + current_vars = yield workflow_state.get_state_variables() |
2084 | + self.assertEqual(current_vars, {}) |
2085 | |
2086 | @inlineCallbacks |
2087 | def test_state_fire_transition(self): |
2088 | @@ -333,3 +337,13 @@ |
2089 | |
2090 | self.assertFailure(workflow_state.transition_state("unknown"), |
2091 | InvalidStateError) |
2092 | + |
2093 | + @inlineCallbacks |
2094 | + def test_load_bad_state(self): |
2095 | + class BadLoadWorkflowState(WorkflowState): |
2096 | + def _load(self): |
2097 | + return succeed({"some": "other-data"}) |
2098 | + |
2099 | + workflow = BadLoadWorkflowState(Workflow()) |
2100 | + yield self.assertFailure(workflow.get_state(), KeyError) |
2101 | + yield self.assertFailure(workflow.get_state_variables(), KeyError) |
2102 | |
2103 | === added file 'juju/lib/tests/test_upstart.py' |
2104 | --- juju/lib/tests/test_upstart.py 1970-01-01 00:00:00 +0000 |
2105 | +++ juju/lib/tests/test_upstart.py 2012-02-02 16:42:42 +0000 |
2106 | @@ -0,0 +1,339 @@ |
2107 | +import os |
2108 | + |
2109 | +from twisted.internet.defer import inlineCallbacks, succeed |
2110 | + |
2111 | +from juju.errors import ServiceError |
2112 | +from juju.lib.mocker import ANY, KWARGS |
2113 | +from juju.lib.testing import TestCase |
2114 | +from juju.lib.upstart import UpstartService |
2115 | + |
2116 | + |
2117 | +DATA_DIR = os.path.join(os.path.abspath(os.path.dirname(__file__)), "data") |
2118 | + |
2119 | + |
2120 | +class UpstartServiceTest(TestCase): |
2121 | + |
2122 | + @inlineCallbacks |
2123 | + def setUp(self): |
2124 | + yield super(UpstartServiceTest, self).setUp() |
2125 | + self.init_dir = self.makeDir() |
2126 | + self.conf = os.path.join(self.init_dir, "some-name.conf") |
2127 | + self.output = "/tmp/some-name.output" |
2128 | + self.patch(UpstartService, "init_dir", self.init_dir) |
2129 | + self.service = UpstartService("some-name") |
2130 | + |
2131 | + def setup_service(self): |
2132 | + self.service.set_description("a wretched hive of scum and villainy") |
2133 | + self.service.set_command("/bin/imagination-failure --no-ideas") |
2134 | + self.service.set_environ({"LIGHTSABER": "civilised weapon"}) |
2135 | + |
2136 | + def setup_mock(self): |
2137 | + self.check_call = self.mocker.replace("subprocess.check_call") |
2138 | + self.getProcessOutput = self.mocker.replace( |
2139 | + "twisted.internet.utils.getProcessOutput") |
2140 | + |
2141 | + def mock_status(self, result): |
2142 | + self.getProcessOutput("/sbin/status", ["some-name"]) |
2143 | + self.mocker.result(result) |
2144 | + |
2145 | + def mock_call(self, args, output=None): |
2146 | + self.check_call(args, KWARGS) |
2147 | + if output is None: |
2148 | + self.mocker.result(0) |
2149 | + else: |
2150 | + def write(ANY, **_): |
2151 | + with open(self.output, "w") as f: |
2152 | + f.write(output) |
2153 | + self.mocker.call(write) |
2154 | + |
2155 | + def mock_start(self, output=None): |
2156 | + self.mock_call(("/sbin/start", "some-name"), output) |
2157 | + |
2158 | + def mock_stop(self): |
2159 | + self.mock_call(("/sbin/stop", "some-name")) |
2160 | + |
2161 | + def mock_check_success(self): |
2162 | + for _ in range(5): |
2163 | + self.mock_status(succeed("blah start/running blah 12345")) |
2164 | + |
2165 | + def mock_check_unstable(self): |
2166 | + for _ in range(4): |
2167 | + self.mock_status(succeed("blah start/running blah 12345")) |
2168 | + self.mock_status(succeed("blah start/running blah 12346")) |
2169 | + |
2170 | + def mock_check_not_running(self): |
2171 | + self.mock_status(succeed("blah")) |
2172 | + |
2173 | + def write_dummy_conf(self): |
2174 | + with open(self.conf, "w") as f: |
2175 | + f.write("dummy") |
2176 | + |
2177 | + def assert_dummy_conf(self): |
2178 | + with open(self.conf) as f: |
2179 | + self.assertEquals(f.read(), "dummy") |
2180 | + |
2181 | + def assert_no_conf(self): |
2182 | + self.assertFalse(os.path.exists(self.conf)) |
2183 | + |
2184 | + def assert_conf(self, name="test_standard_install"): |
2185 | + with open(os.path.join(DATA_DIR, name)) as expected: |
2186 | + with open(self.conf) as actual: |
2187 | + self.assertEquals(actual.read(), expected.read()) |
2188 | + |
2189 | + def test_is_installed(self): |
2190 | + """Check is_installed depends on conf file existence""" |
2191 | + self.assertFalse(self.service.is_installed()) |
2192 | + self.write_dummy_conf() |
2193 | + self.assertTrue(self.service.is_installed()) |
2194 | + |
2195 | + def test_init_dir(self): |
2196 | + """ |
2197 | + Check is_installed still works when init_dir specified explicitly |
2198 | + """ |
2199 | + self.patch(UpstartService, "init_dir", "/BAD/PATH") |
2200 | + self.service = UpstartService("some-name", init_dir=self.init_dir) |
2201 | + self.setup_service() |
2202 | + |
2203 | + self.assertFalse(self.service.is_installed()) |
2204 | + self.write_dummy_conf() |
2205 | + self.assertTrue(self.service.is_installed()) |
2206 | + |
2207 | + @inlineCallbacks |
2208 | + def test_is_running(self): |
2209 | + """ |
2210 | + Check is_running interprets status output (when service is installed) |
2211 | + """ |
2212 | + self.setup_mock() |
2213 | + self.mock_status(succeed("blah stop/waiting blah")) |
2214 | + self.mock_status(succeed("blah blob/gibbering blah")) |
2215 | + self.mock_status(succeed("blah start/running blah 12345")) |
2216 | + self.mocker.replay() |
2217 | + |
2218 | + # Won't hit status; conf is not installed |
2219 | + self.assertFalse((yield self.service.is_running())) |
2220 | + self.write_dummy_conf() |
2221 | + |
2222 | + # These 3 calls correspond to the first 3 mock_status calls above |
2223 | + self.assertFalse((yield self.service.is_running())) |
2224 | + self.assertFalse((yield self.service.is_running())) |
2225 | + self.assertTrue((yield self.service.is_running())) |
2226 | + |
2227 | + @inlineCallbacks |
2228 | + def test_is_stable_yes(self): |
2229 | + self.setup_mock() |
2230 | + self.mock_check_success() |
2231 | + self.mocker.replay() |
2232 | + |
2233 | + self.write_dummy_conf() |
2234 | + self.assertTrue((yield self.service.is_stable())) |
2235 | + |
2236 | + @inlineCallbacks |
2237 | + def test_is_stable_no(self): |
2238 | + self.setup_mock() |
2239 | + self.mock_check_unstable() |
2240 | + self.mocker.replay() |
2241 | + |
2242 | + self.write_dummy_conf() |
2243 | + self.assertFalse((yield self.service.is_stable())) |
2244 | + |
2245 | + @inlineCallbacks |
2246 | + def test_is_stable_not_running(self): |
2247 | + self.setup_mock() |
2248 | + self.mock_check_not_running() |
2249 | + self.mocker.replay() |
2250 | + |
2251 | + self.write_dummy_conf() |
2252 | + self.assertFalse((yield self.service.is_stable())) |
2253 | + |
2254 | + @inlineCallbacks |
2255 | + def test_is_stable_not_even_installed(self): |
2256 | + self.assertFalse((yield self.service.is_stable())) |
2257 | + |
2258 | + @inlineCallbacks |
2259 | + def test_get_pid(self): |
2260 | + """ |
2261 | + Check get_pid interprets status output (when service is installed) |
2262 | + """ |
2263 | + self.setup_mock() |
2264 | + self.mock_status(succeed("blah stop/waiting blah")) |
2265 | + self.mock_status(succeed("blah blob/gibbering blah")) |
2266 | + self.mock_status(succeed("blah start/running blah 12345")) |
2267 | + self.mocker.replay() |
2268 | + |
2269 | + # Won't hit status; conf is not installed |
2270 | + self.assertEquals((yield self.service.get_pid()), None) |
2271 | + self.write_dummy_conf() |
2272 | + |
2273 | + # These 3 calls correspond to the first 3 mock_status calls above |
2274 | + self.assertEquals((yield self.service.get_pid()), None) |
2275 | + self.assertEquals((yield self.service.get_pid()), None) |
2276 | + self.assertEquals((yield self.service.get_pid()), 12345) |
2277 | + |
2278 | + @inlineCallbacks |
2279 | + def test_basic_install(self): |
2280 | + """Check a simple UpstartService writes expected conf file""" |
2281 | + e = yield self.assertFailure(self.service.install(), ServiceError) |
2282 | + self.assertEquals(str(e), "Cannot render .conf: no description set") |
2283 | + self.service.set_description("uninteresting service") |
2284 | + e = yield self.assertFailure(self.service.install(), ServiceError) |
2285 | + self.assertEquals(str(e), "Cannot render .conf: no command set") |
2286 | + self.service.set_command("/bin/false") |
2287 | + yield self.service.install() |
2288 | + |
2289 | + self.assert_conf("test_basic_install") |
2290 | + |
2291 | + @inlineCallbacks |
2292 | + def test_less_basic_install(self): |
2293 | + """Check conf for a different UpstartService (which sets an env var)""" |
2294 | + self.service.set_description("pew pew pew blam") |
2295 | + self.service.set_command("/bin/deathstar --ignore-ewoks endor") |
2296 | + self.service.set_environ({"FOO": "bar baz qux", "PEW": "pew"}) |
2297 | + self.service.set_output_path("/somewhere/else") |
2298 | + yield self.service.install() |
2299 | + |
2300 | + self.assert_conf("test_less_basic_install") |
2301 | + |
2302 | + def test_install_via_script(self): |
2303 | + """Check that the output-as-script form does the right thing""" |
2304 | + self.setup_service() |
2305 | + install, start = self.service.get_cloud_init_commands() |
2306 | + |
2307 | + os.system(install) |
2308 | + self.assert_conf() |
2309 | + self.assertEquals(start, "/sbin/start some-name") |
2310 | + |
2311 | + @inlineCallbacks |
2312 | + def test_start_not_installed(self): |
2313 | + """Check that .start() also installs if necessary""" |
2314 | + self.setup_mock() |
2315 | + self.mock_status(succeed("blah stop/waiting blah")) |
2316 | + self.mock_start() |
2317 | + self.mock_check_success() |
2318 | + self.mocker.replay() |
2319 | + |
2320 | + self.setup_service() |
2321 | + yield self.service.start() |
2322 | + self.assert_conf() |
2323 | + |
2324 | + @inlineCallbacks |
2325 | + def test_start_not_started_stable(self): |
2326 | + """Check that .start() starts if stopped, and checks for stable pid""" |
2327 | + self.write_dummy_conf() |
2328 | + self.setup_mock() |
2329 | + self.mock_status(succeed("blah stop/waiting blah")) |
2330 | + self.mock_start("ignored") |
2331 | + self.mock_check_success() |
2332 | + self.mocker.replay() |
2333 | + |
2334 | + self.setup_service() |
2335 | + yield self.service.start() |
2336 | + self.assert_dummy_conf() |
2337 | + |
2338 | + @inlineCallbacks |
2339 | + def test_start_not_started_unstable(self): |
2340 | + """Check that .start() starts if stopped, and raises on unstable pid""" |
2341 | + self.write_dummy_conf() |
2342 | + self.setup_mock() |
2343 | + self.mock_status(succeed("blah stop/waiting blah")) |
2344 | + self.mock_start("kangaroo") |
2345 | + self.mock_check_unstable() |
2346 | + self.mocker.replay() |
2347 | + |
2348 | + self.setup_service() |
2349 | + e = yield self.assertFailure(self.service.start(), ServiceError) |
2350 | + self.assertEquals( |
2351 | + str(e), "Failed to start job some-name; got output:\nkangaroo") |
2352 | + self.assert_dummy_conf() |
2353 | + |
2354 | + @inlineCallbacks |
2355 | + def test_start_not_started_failure(self): |
2356 | + """Check that .start() starts if stopped, and raises on no pid""" |
2357 | + self.write_dummy_conf() |
2358 | + self.setup_mock() |
2359 | + self.mock_status(succeed("blah stop/waiting blah")) |
2360 | + self.mock_start() |
2361 | + self.mock_check_not_running() |
2362 | + self.mocker.replay() |
2363 | + |
2364 | + self.setup_service() |
2365 | + e = yield self.assertFailure(self.service.start(), ServiceError) |
2366 | + self.assertEquals( |
2367 | + str(e), "Failed to start job some-name; no output detected") |
2368 | + self.assert_dummy_conf() |
2369 | + |
2370 | + @inlineCallbacks |
2371 | + def test_start_started(self): |
2372 | + """Check that .start() does nothing if already running""" |
2373 | + self.write_dummy_conf() |
2374 | + self.setup_mock() |
2375 | + self.mock_status(succeed("blah start/running blah 12345")) |
2376 | + self.mocker.replay() |
2377 | + |
2378 | + self.setup_service() |
2379 | + yield self.service.start() |
2380 | + self.assert_dummy_conf() |
2381 | + |
2382 | + @inlineCallbacks |
2383 | + def test_destroy_not_installed(self): |
2384 | + """Check .destroy() does nothing if not installed""" |
2385 | + yield self.service.destroy() |
2386 | + self.assert_no_conf() |
2387 | + |
2388 | + @inlineCallbacks |
2389 | + def test_destroy_not_started(self): |
2390 | + """Check .destroy() just deletes conf if not running""" |
2391 | + self.write_dummy_conf() |
2392 | + self.setup_mock() |
2393 | + self.mock_status(succeed("blah stop/waiting blah")) |
2394 | + self.mocker.replay() |
2395 | + |
2396 | + yield self.service.destroy() |
2397 | + self.assert_no_conf() |
2398 | + |
2399 | + @inlineCallbacks |
2400 | + def test_destroy_started(self): |
2401 | + """Check .destroy() stops running service and deletes conf file""" |
2402 | + self.write_dummy_conf() |
2403 | + self.setup_mock() |
2404 | + self.mock_status(succeed("blah start/running blah 54321")) |
2405 | + self.mock_stop() |
2406 | + self.mocker.replay() |
2407 | + |
2408 | + yield self.service.destroy() |
2409 | + self.assert_no_conf() |
2410 | + |
2411 | + @inlineCallbacks |
2412 | + def test_use_sudo(self): |
2413 | + """Check that expected commands are generated when use_sudo is set""" |
2414 | + self.setup_mock() |
2415 | + self.service = UpstartService("some-name", use_sudo=True) |
2416 | + self.setup_service() |
2417 | + with open(self.output, "w") as f: |
2418 | + f.write("clear this file out...") |
2419 | + |
2420 | + def verify_cp(args, **kwargs): |
2421 | + sudo, cp, src, dst = args |
2422 | + self.assertEquals(sudo, "sudo") |
2423 | + self.assertEquals(cp, "cp") |
2424 | + with open(os.path.join(DATA_DIR, "test_standard_install")) as exp: |
2425 | + with open(src) as actual: |
2426 | + self.assertEquals(actual.read(), exp.read()) |
2427 | + self.assertEquals(dst, self.conf) |
2428 | + self.write_dummy_conf() |
2429 | + |
2430 | + self.check_call(ANY, KWARGS) |
2431 | + self.mocker.call(verify_cp) |
2432 | + self.mock_call(("sudo", "rm", self.output)) |
2433 | + self.mock_call(("sudo", "chmod", "644", self.conf)) |
2434 | + self.mock_status(succeed("blah stop/waiting blah")) |
2435 | + self.mock_call(("sudo", "/sbin/start", "some-name")) |
2436 | + # 5 for initial stability check; 1 for final do-we-need-to-stop check |
2437 | + for _ in range(6): |
2438 | + self.mock_status(succeed("blah start/running blah 12345")) |
2439 | + self.mock_call(("sudo", "/sbin/stop", "some-name")) |
2440 | + self.mock_call(("sudo", "rm", self.conf)) |
2441 | + self.mock_call(("sudo", "rm", self.output)) |
2442 | + |
2443 | + self.mocker.replay() |
2444 | + yield self.service.start() |
2445 | + yield self.service.destroy() |
2446 | |
2447 | === added file 'juju/lib/upstart.py' |
2448 | --- juju/lib/upstart.py 1970-01-01 00:00:00 +0000 |
2449 | +++ juju/lib/upstart.py 2012-02-02 16:42:42 +0000 |
2450 | @@ -0,0 +1,166 @@ |
2451 | +import os |
2452 | +import subprocess |
2453 | +from tempfile import NamedTemporaryFile |
2454 | + |
2455 | +from twisted.internet.defer import inlineCallbacks, returnValue |
2456 | +from twisted.internet.threads import deferToThread |
2457 | +from twisted.internet.utils import getProcessOutput |
2458 | + |
2459 | +from juju.errors import ServiceError |
2460 | +from juju.lib.twistutils import sleep |
2461 | + |
2462 | + |
2463 | +_CONF_TEMPLATE = """\ |
2464 | +description "%s" |
2465 | +author "Juju Team <juju@lists.ubuntu.com>" |
2466 | + |
2467 | +start on runlevel [2345] |
2468 | +stop on runlevel [!2345] |
2469 | +respawn |
2470 | + |
2471 | +%s |
2472 | + |
2473 | +exec %s >> %s 2>&1 |
2474 | +""" |
2475 | + |
2476 | +def _silent_check_call(args): |
2477 | + with open(os.devnull, "w") as f: |
2478 | + return subprocess.check_call( |
2479 | + args, stdout=f.fileno(), stderr=f.fileno()) |
2480 | + |
2481 | + |
2482 | +class UpstartService(object): |
2483 | + |
2484 | + # set on the class for ease of testing |
2485 | + init_dir = "/etc/init" |
2486 | + |
2487 | + def __init__(self, name, init_dir=None, use_sudo=False): |
2488 | + self._name = name |
2489 | + if init_dir is not None: |
2490 | + self.init_dir = init_dir |
2491 | + self._use_sudo = use_sudo |
2492 | + self._output_path = None |
2493 | + self._description = None |
2494 | + self._environ = {} |
2495 | + self._command = None |
2496 | + |
2497 | + @property |
2498 | + def _conf_path(self): |
2499 | + return os.path.join( |
2500 | + self.init_dir, "%s.conf" % self._name) |
2501 | + |
2502 | + @property |
2503 | + def output_path(self): |
2504 | + if self._output_path is not None: |
2505 | + return self._output_path |
2506 | + return "/tmp/%s.output" % self._name |
2507 | + |
2508 | + def set_description(self, description): |
2509 | + self._description = description |
2510 | + |
2511 | + def set_environ(self, environ): |
2512 | + self._environ = environ |
2513 | + |
2514 | + def set_command(self, command): |
2515 | + self._command = command |
2516 | + |
2517 | + def set_output_path(self, path): |
2518 | + self._output_path = path |
2519 | + |
2520 | + @inlineCallbacks |
2521 | + def _trash_output(self): |
2522 | + if os.path.exists(self.output_path): |
2523 | + # Just using os.unlink will fail when we're running TEST_SUDO tests |
2524 | + # which hit this code path (because root will own self.output_path) |
2525 | + yield self._call("rm", self.output_path) |
2526 | + |
2527 | + def _render(self): |
2528 | + if self._description is None: |
2529 | + raise ServiceError("Cannot render .conf: no description set") |
2530 | + if self._command is None: |
2531 | + raise ServiceError("Cannot render .conf: no command set") |
2532 | + return _CONF_TEMPLATE % ( |
2533 | + self._description, |
2534 | + "\n".join('env %s="%s"' % kv |
2535 | + for kv in sorted(self._environ.items())), |
2536 | + self._command, |
2537 | + self.output_path) |
2538 | + |
2539 | + def _call(self, *args): |
2540 | + if self._use_sudo: |
2541 | + args = ("sudo",) + args |
2542 | + return deferToThread(_silent_check_call, args) |
2543 | + |
2544 | + def get_cloud_init_commands(self): |
2545 | + return ["cat >> %s <<EOF\n%sEOF\n" % (self._conf_path, self._render()), |
2546 | + "/sbin/start %s" % self._name] |
2547 | + |
2548 | + @inlineCallbacks |
2549 | + def install(self): |
2550 | + with NamedTemporaryFile() as f: |
2551 | + f.write(self._render()) |
2552 | + f.flush() |
2553 | + yield self._call("cp", f.name, self._conf_path) |
2554 | + yield self._call("chmod", "644", self._conf_path) |
2555 | + |
2556 | + @inlineCallbacks |
2557 | + def start(self): |
2558 | + if not self.is_installed(): |
2559 | + yield self.install() |
2560 | + if (yield self.is_running()): |
2561 | + return |
2562 | + yield self._trash_output() |
2563 | + yield self._call("/sbin/start", self._name) |
2564 | + if (yield self.is_stable()): |
2565 | + return |
2566 | + |
2567 | + output = None |
2568 | + if os.path.exists(self.output_path): |
2569 | + with open(self.output_path) as f: |
2570 | + output = f.read() |
2571 | + if not output: |
2572 | + raise ServiceError( |
2573 | + "Failed to start job %s; no output detected" % self._name) |
2574 | + raise ServiceError( |
2575 | + "Failed to start job %s; got output:\n%s" % (self._name, output)) |
2576 | + |
2577 | + @inlineCallbacks |
2578 | + def destroy(self): |
2579 | + if (yield self.is_running()): |
2580 | + yield self._call("/sbin/stop", self._name) |
2581 | + if self.is_installed(): |
2582 | + yield self._call("rm", self._conf_path) |
2583 | + yield self._trash_output() |
2584 | + |
2585 | + @inlineCallbacks |
2586 | + def get_pid(self): |
2587 | + if not self.is_installed(): |
2588 | + returnValue(None) |
2589 | + status = yield getProcessOutput("/sbin/status", [self._name]) |
2590 | + if "start/running" not in status: |
2591 | + returnValue(None) |
2592 | + pid = status.split(" ")[-1] |
2593 | + returnValue(int(pid)) |
2594 | + |
2595 | + @inlineCallbacks |
2596 | + def is_running(self): |
2597 | + pid = yield self.get_pid() |
2598 | + returnValue(pid is not None) |
2599 | + |
2600 | + @inlineCallbacks |
2601 | + def is_stable(self): |
2602 | + """Does the process continue to run with the same pid? |
2603 | + |
2604 | + (5 times in a row, with a gap of 0.1s between each check) |
2605 | + """ |
2606 | + pid = yield self.get_pid() |
2607 | + if pid is None: |
2608 | + returnValue(False) |
2609 | + for _ in range(4): |
2610 | + yield sleep(0.1) |
2611 | + if pid != (yield self.get_pid()): |
2612 | + returnValue(False) |
2613 | + returnValue(True) |
2614 | + |
2615 | + def is_installed(self): |
2616 | + return os.path.exists(self._conf_path) |
2617 | |
2618 | === modified file 'juju/machine/tests/test_unit_deployment.py' |
2619 | --- juju/machine/tests/test_unit_deployment.py 2011-10-05 13:59:44 +0000 |
2620 | +++ juju/machine/tests/test_unit_deployment.py 2012-02-02 16:42:42 +0000 |
2621 | @@ -3,25 +3,21 @@ |
2622 | """ |
2623 | import logging |
2624 | import os |
2625 | -import sys |
2626 | +import subprocess |
2627 | |
2628 | -from twisted.internet.protocol import ProcessProtocol |
2629 | from twisted.internet.defer import inlineCallbacks, succeed |
2630 | |
2631 | -import juju |
2632 | from juju.charm import get_charm_from_path |
2633 | from juju.charm.tests.test_repository import RepositoryTestBase |
2634 | from juju.lib.lxc import LXCContainer |
2635 | -from juju.lib.mocker import MATCH, ANY |
2636 | -from juju.lib.twistutils import get_module_directory |
2637 | +from juju.lib.lxc.tests.test_lxc import uses_sudo |
2638 | +from juju.lib.mocker import ANY, KWARGS |
2639 | +from juju.lib.upstart import UpstartService |
2640 | from juju.machine.unit import UnitMachineDeployment, UnitContainerDeployment |
2641 | from juju.machine.errors import UnitDeploymentError |
2642 | from juju.tests.common import get_test_zookeeper_address |
2643 | |
2644 | |
2645 | -MATCH_PROTOCOL = MATCH(lambda x: isinstance(x, ProcessProtocol)) |
2646 | - |
2647 | - |
2648 | class UnitMachineDeploymentTest(RepositoryTestBase): |
2649 | |
2650 | def setUp(self): |
2651 | @@ -32,6 +28,11 @@ |
2652 | self.units_directory = os.path.join(self.juju_directory, "units") |
2653 | os.mkdir(self.units_directory) |
2654 | self.unit_name = "wordpress/0" |
2655 | + self.rootfs = self.makeDir() |
2656 | + self.init_dir = os.path.join(self.rootfs, "etc", "init") |
2657 | + os.makedirs(self.init_dir) |
2658 | + self.real_init_dir = self.patch( |
2659 | + UpstartService, "init_dir", self.init_dir) |
2660 | |
2661 | self.deployment = UnitMachineDeployment( |
2662 | self.unit_name, |
2663 | @@ -41,11 +42,43 @@ |
2664 | self.deployment.unit_agent_module, "juju.agents.unit") |
2665 | self.deployment.unit_agent_module = "juju.agents.dummy" |
2666 | |
2667 | - def process_kill(self, pid): |
2668 | - try: |
2669 | - os.kill(pid, 9) |
2670 | - except OSError: |
2671 | - pass |
2672 | + def setup_mock(self): |
2673 | + self.check_call = self.mocker.replace("subprocess.check_call") |
2674 | + self.getProcessOutput = self.mocker.replace( |
2675 | + "twisted.internet.utils.getProcessOutput") |
2676 | + |
2677 | + def mock_is_running(self, running): |
2678 | + self.getProcessOutput("/sbin/status", ["juju-wordpress-0"]) |
2679 | + if running: |
2680 | + self.mocker.result(succeed( |
2681 | + "juju-wordpress-0 start/running, process 12345")) |
2682 | + else: |
2683 | + self.mocker.result(succeed("juju-wordpress-0 stop/waiting")) |
2684 | + |
2685 | + def _without_sudo(self, args, **_): |
2686 | + self.assertEquals(args[0], "sudo") |
2687 | + return subprocess.call(args[1:]) |
2688 | + |
2689 | + def mock_install(self): |
2690 | + self.check_call(ANY, KWARGS) # cp to init dir |
2691 | + self.mocker.call(self._without_sudo) |
2692 | + self.check_call(ANY, KWARGS) # chmod 644 |
2693 | + self.mocker.call(self._without_sudo) |
2694 | + |
2695 | + def mock_start(self): |
2696 | + self.check_call(("sudo", "/sbin/start", "juju-wordpress-0"), KWARGS) |
2697 | + self.mocker.result(0) |
2698 | + for _ in range(5): |
2699 | + self.mock_is_running(True) |
2700 | + |
2701 | + def mock_destroy(self): |
2702 | + self.check_call(("sudo", "/sbin/stop", "juju-wordpress-0"), KWARGS) |
2703 | + self.mocker.result(0) |
2704 | + self.check_call(ANY, KWARGS) # rm from init dir |
2705 | + self.mocker.call(self._without_sudo) |
2706 | + |
2707 | + def assert_pid_running(self, pid, expect): |
2708 | + self.assertEquals(os.path.exists("/proc/%s" % pid), expect) |
2709 | |
2710 | def test_unit_name_with_path_manipulation_raises_assertion(self): |
2711 | self.assertRaises( |
2712 | @@ -60,129 +93,50 @@ |
2713 | os.path.join(self.units_directory, |
2714 | self.unit_name.replace("/", "-"))) |
2715 | |
2716 | - def test_unit_pid_file(self): |
2717 | - self.assertEqual( |
2718 | - self.deployment.pid_file, |
2719 | - os.path.join(self.units_directory, |
2720 | - "%s.pid" % (self.unit_name.replace("/", "-")))) |
2721 | - |
2722 | def test_service_unit_start(self): |
2723 | """ |
2724 | Starting a service unit will result in a unit workspace being created |
2725 | if it does not exist and a running service unit agent. |
2726 | """ |
2727 | - |
2728 | - d = self.deployment.start( |
2729 | - "0", get_test_zookeeper_address(), self.bundle) |
2730 | - |
2731 | - @inlineCallbacks |
2732 | - def validate_result(result): |
2733 | - # give process time to write its pid |
2734 | - yield self.sleep(0.1) |
2735 | - self.addCleanup( |
2736 | - self.process_kill, |
2737 | - int(open(self.deployment.pid_file).read())) |
2738 | - self.assertEqual(result, True) |
2739 | - |
2740 | - d.addCallback(validate_result) |
2741 | - return d |
2742 | - |
2743 | - def test_deployment_get_environment(self): |
2744 | - zk_address = get_test_zookeeper_address() |
2745 | - environ = self.deployment.get_environment(21, zk_address) |
2746 | - environ.pop("PYTHONPATH") |
2747 | - self.assertEqual(environ["JUJU_HOME"], self.juju_directory) |
2748 | - self.assertEqual(environ["JUJU_UNIT_NAME"], self.unit_name) |
2749 | - self.assertEqual(environ["JUJU_ZOOKEEPER"], zk_address) |
2750 | - self.assertEqual(environ["JUJU_MACHINE_ID"], "21") |
2751 | - |
2752 | - def test_service_unit_start_with_integer_machine_id(self): |
2753 | - """ |
2754 | - Starting a service unit will result in a unit workspace being created |
2755 | - if it does not exist and a running service unit agent. |
2756 | - """ |
2757 | - d = self.deployment.start( |
2758 | - 21, get_test_zookeeper_address(), self.bundle) |
2759 | - |
2760 | - @inlineCallbacks |
2761 | - def validate_result(result): |
2762 | - # give process time to write its pid |
2763 | - yield self.sleep(0.1) |
2764 | - self.addCleanup( |
2765 | - self.process_kill, |
2766 | - int(open(self.deployment.pid_file).read())) |
2767 | - self.assertEqual(result, True) |
2768 | - |
2769 | - d.addCallback(validate_result) |
2770 | - return d |
2771 | - |
2772 | - def test_service_unit_start_with_agent_startup_error(self): |
2773 | - """ |
2774 | - Starting a service unit will result in a unit workspace being created |
2775 | - if it does not exist and a running service unit agent. |
2776 | - """ |
2777 | - self.deployment.unit_agent_module = "magichat.xr1" |
2778 | - d = self.deployment.start( |
2779 | - "0", get_test_zookeeper_address(), self.bundle) |
2780 | - |
2781 | - self.failUnlessFailure(d, UnitDeploymentError) |
2782 | - |
2783 | - def validate_result(error): |
2784 | - self.assertIn("No module named magichat", str(error)) |
2785 | - |
2786 | - d.addCallback(validate_result) |
2787 | - return d |
2788 | - |
2789 | - def test_service_unit_start_agent_arguments(self): |
2790 | - """ |
2791 | - Starting a service unit will start a service unit agent with arguments |
2792 | - denoting the current machine id, zookeeper server location, and the |
2793 | - unit name. Additionally it will configure the log and pid file |
2794 | - locations. |
2795 | - """ |
2796 | - machine_id = "0" |
2797 | - zookeeper_hosts = "menagerie.example.com:2181" |
2798 | - |
2799 | - from twisted.internet import reactor |
2800 | - mock_reactor = self.mocker.patch(reactor) |
2801 | - |
2802 | - environ = dict(os.environ) |
2803 | - environ["JUJU_UNIT_NAME"] = self.unit_name |
2804 | - environ["JUJU_HOME"] = self.juju_directory |
2805 | - environ["JUJU_MACHINE_ID"] = machine_id |
2806 | - environ["JUJU_ZOOKEEPER"] = zookeeper_hosts |
2807 | - environ["PYTHONPATH"] = ":".join( |
2808 | - filter(None, [ |
2809 | - os.path.dirname(get_module_directory(juju)), |
2810 | - environ.get("PYTHONPATH")])) |
2811 | - |
2812 | - pid_file = os.path.join( |
2813 | - self.units_directory, |
2814 | - "%s.pid" % self.unit_name.replace("/", "-")) |
2815 | - |
2816 | - log_file = os.path.join( |
2817 | - self.deployment.directory, |
2818 | - "charm.log") |
2819 | - |
2820 | - args = [sys.executable, "-m", "juju.agents.dummy", "-n", |
2821 | - "--pidfile", pid_file, "--logfile", log_file] |
2822 | - |
2823 | - mock_reactor.spawnProcess( |
2824 | - MATCH_PROTOCOL, sys.executable, args, environ) |
2825 | + self.setup_mock() |
2826 | + self.mock_install() |
2827 | + self.mock_is_running(False) |
2828 | + self.mock_start() |
2829 | self.mocker.replay() |
2830 | - self.deployment.start( |
2831 | - machine_id, zookeeper_hosts, self.bundle) |
2832 | - |
2833 | - def xtest_service_unit_start_pre_unpack(self): |
2834 | - """ |
2835 | - Attempting to start a charm before the charm is unpacked |
2836 | - results in an exception. |
2837 | - """ |
2838 | - error = yield self.assertFailure( |
2839 | - self.deployment.start( |
2840 | - "0", get_test_zookeeper_address(), self.bundle), |
2841 | - UnitDeploymentError) |
2842 | - self.assertEquals(str(error), "Charm must be unpacked first.") |
2843 | + |
2844 | + d = self.deployment.start( |
2845 | + "123", get_test_zookeeper_address(), self.bundle) |
2846 | + |
2847 | + def verify_upstart(_): |
2848 | + conf_path = os.path.join(self.init_dir, "juju-wordpress-0.conf") |
2849 | + with open(conf_path) as f: |
2850 | + lines = f.readlines() |
2851 | + |
2852 | + env = [] |
2853 | + for line in lines: |
2854 | + if line.startswith("env "): |
2855 | + env.append(line[4:-1].split("=", 1)) |
2856 | + if line.startswith("exec "): |
2857 | + exec_ = line[5:-1] |
2858 | + |
2859 | + env = dict((k, v.strip('"')) for (k, v) in env) |
2860 | + env.pop("PYTHONPATH") |
2861 | + self.assertEquals(env, { |
2862 | + "JUJU_HOME": self.juju_directory, |
2863 | + "JUJU_UNIT_NAME": self.unit_name, |
2864 | + "JUJU_ZOOKEEPER": get_test_zookeeper_address(), |
2865 | + "JUJU_MACHINE_ID": "123"}) |
2866 | + |
2867 | + log_file = os.path.join( |
2868 | + self.deployment.directory, "charm.log") |
2869 | + command = " ".join([ |
2870 | + "/usr/bin/python", "-m", "juju.agents.dummy", "--nodaemon", |
2871 | + "--logfile", log_file, "--session-file", |
2872 | + "/var/run/juju/unit-wordpress-0-agent.zksession", |
2873 | + ">> /tmp/juju-wordpress-0.output 2>&1"]) |
2874 | + self.assertEquals(exec_, command) |
2875 | + d.addCallback(verify_upstart) |
2876 | + return d |
2877 | |
2878 | @inlineCallbacks |
2879 | def test_service_unit_destroy(self): |
2880 | @@ -190,48 +144,22 @@ |
2881 | Forcibly stop a unit, and destroy any directories associated to it |
2882 | on the machine, and kills the unit agent process. |
2883 | """ |
2884 | + self.setup_mock() |
2885 | + self.mock_install() |
2886 | + self.mock_is_running(False) |
2887 | + self.mock_start() |
2888 | + self.mock_is_running(True) |
2889 | + self.mock_destroy() |
2890 | + self.mocker.replay() |
2891 | + |
2892 | yield self.deployment.start( |
2893 | "0", get_test_zookeeper_address(), self.bundle) |
2894 | - # give the process time to write its pid file |
2895 | - yield self.sleep(0.1) |
2896 | - pid = int(open(self.deployment.pid_file).read()) |
2897 | + |
2898 | yield self.deployment.destroy() |
2899 | - # give the process time to die. |
2900 | - yield self.sleep(0.1) |
2901 | - e = self.assertRaises(OSError, os.kill, pid, 0) |
2902 | - self.assertEqual(e.errno, 3) |
2903 | self.assertFalse(os.path.exists(self.deployment.directory)) |
2904 | - self.assertFalse(os.path.exists(self.deployment.pid_file)) |
2905 | - |
2906 | - def test_service_unit_destroy_stale_pid(self): |
2907 | - """ |
2908 | - A stale pid file does not cause any errors. |
2909 | - |
2910 | - We mock away is_running as otherwise it will check for this, but |
2911 | - there exists a small window when the result may disagree. |
2912 | - """ |
2913 | - self.makeFile("8917238", path=self.deployment.pid_file) |
2914 | - mock_deployment = self.mocker.patch(self.deployment) |
2915 | - mock_deployment.is_running() |
2916 | - self.mocker.result(succeed(True)) |
2917 | - self.mocker.replay() |
2918 | - return self.deployment.destroy() |
2919 | - |
2920 | - def test_service_unit_destroy_perm_error(self): |
2921 | - """ |
2922 | - A stale pid file does not cause any errors. |
2923 | - |
2924 | - We mock away is_running as otherwise it will check for this, but |
2925 | - there exists a small window when the result may disagree. |
2926 | - """ |
2927 | - if os.geteuid() == 0: |
2928 | - return |
2929 | - self.makeFile("1", path=self.deployment.pid_file) |
2930 | - mock_deployment = self.mocker.patch(self.deployment) |
2931 | - mock_deployment.is_running() |
2932 | - self.mocker.result(succeed(True)) |
2933 | - self.mocker.replay() |
2934 | - return self.assertFailure(self.deployment.destroy(), OSError) |
2935 | + |
2936 | + conf_path = os.path.join(self.init_dir, "juju-wordpress-0.conf") |
2937 | + self.assertFalse(os.path.exists(conf_path)) |
2938 | |
2939 | @inlineCallbacks |
2940 | def test_service_unit_destroy_undeployed(self): |
2941 | @@ -247,11 +175,23 @@ |
2942 | If the unit is not running, then destroy will just remove |
2943 | its directory. |
2944 | """ |
2945 | - self.deployment.unpack_charm(self.bundle) |
2946 | - self.assertTrue(os.path.exists(self.deployment.directory)) |
2947 | + self.setup_mock() |
2948 | + self.mock_install() |
2949 | + self.mock_is_running(False) |
2950 | + self.mock_start() |
2951 | + self.mock_is_running(False) |
2952 | + self.check_call(ANY, KWARGS) # rm from init dir |
2953 | + self.mocker.call(self._without_sudo) |
2954 | + self.mocker.replay() |
2955 | + |
2956 | + yield self.deployment.start( |
2957 | + "0", get_test_zookeeper_address(), self.bundle) |
2958 | yield self.deployment.destroy() |
2959 | self.assertFalse(os.path.exists(self.deployment.directory)) |
2960 | |
2961 | + conf_path = os.path.join(self.init_dir, "juju-wordpress-0.conf") |
2962 | + self.assertFalse(os.path.exists(conf_path)) |
2963 | + |
2964 | def test_unpack_charm(self): |
2965 | """ |
2966 | The deployment unpacks a charm bundle into the unit workspace. |
2967 | @@ -279,62 +219,81 @@ |
2968 | str(error), |
2969 | "Invalid charm for deployment: %s" % self.charm.path) |
2970 | |
2971 | - def test_is_running_no_pid_file(self): |
2972 | - """ |
2973 | - If there is no pid file the service unit is not running. |
2974 | - """ |
2975 | - self.assertEqual((yield self.deployment.is_running()), False) |
2976 | - |
2977 | - def test_is_running(self): |
2978 | - """ |
2979 | - The service deployment will check the pid and validate |
2980 | - that the pid found is a running process. |
2981 | - """ |
2982 | - self.makeFile( |
2983 | - str(os.getpid()), path=self.deployment.pid_file) |
2984 | + @inlineCallbacks |
2985 | + def test_is_running_not_installed(self): |
2986 | + """ |
2987 | + If there is no conf file the service unit is not running. |
2988 | + """ |
2989 | + self.assertEqual((yield self.deployment.is_running()), False) |
2990 | + |
2991 | + @inlineCallbacks |
2992 | + def test_is_running_not_running(self): |
2993 | + """ |
2994 | + If the conf file exists but the job is not running, the unit is not running |
2995 | + """ |
2996 | + conf_path = os.path.join(self.init_dir, "juju-wordpress-0.conf") |
2997 | + with open(conf_path, "w") as f: |
2998 | + f.write("blah") |
2999 | + self.setup_mock() |
3000 | + self.mock_is_running(False) |
3001 | + self.mocker.replay() |
3002 | + self.assertEqual((yield self.deployment.is_running()), False) |
3003 | + |
3004 | + @inlineCallbacks |
3005 | + def test_is_running_success(self): |
3006 | + """ |
3007 | + Check that a running job is reported as running. |
3008 | + """ |
3009 | + conf_path = os.path.join(self.init_dir, "juju-wordpress-0.conf") |
3010 | + with open(conf_path, "w") as f: |
3011 | + f.write("blah") |
3012 | + self.setup_mock() |
3013 | + self.mock_is_running(True) |
3014 | + self.mocker.replay() |
3015 | self.assertEqual((yield self.deployment.is_running()), True) |
3016 | |
3017 | - def test_is_running_against_unknown_error(self): |
3018 | - """ |
3019 | - If we don't have permission to access the process, the |
3020 | - original error should get passed along. |
3021 | - """ |
3022 | - if os.geteuid() == 0: |
3023 | - return |
3024 | - self.makeFile("1", path=self.deployment.pid_file) |
3025 | - self.assertFailure(self.deployment.is_running(), OSError) |
3026 | - |
3027 | - def test_is_running_invalid_pid_file(self): |
3028 | - """ |
3029 | - If the pid file is corrupted on disk, and does not contain |
3030 | - a valid integer, then the agent is not running. |
3031 | - """ |
3032 | - self.makeFile("abcdef", path=self.deployment.pid_file) |
3033 | - self.assertEqual( |
3034 | - (yield self.deployment.is_running()), False) |
3035 | - |
3036 | - def test_is_running_invalid_pid(self): |
3037 | - """ |
3038 | - If the pid file refers to an invalid process then the |
3039 | - agent is not running. |
3040 | - """ |
3041 | - self.makeFile("669966", path=self.deployment.pid_file) |
3042 | - self.assertEqual( |
3043 | - (yield self.deployment.is_running()), False) |
3044 | - |
3045 | - |
3046 | -upstart_job_sample = '''\ |
3047 | -description "Unit agent for riak/0" |
3048 | -author "Juju Team <juju@lists.canonical.com>" |
3049 | -start on start on filesystem or runlevel [2345] |
3050 | -stop on runlevel [!2345] |
3051 | - |
3052 | -respawn |
3053 | - |
3054 | -env JUJU_MACHINE_ID="0" |
3055 | -env JUJU_HOME="/var/lib/juju" |
3056 | -env JUJU_ZOOKEEPER="127.0.1.1:2181" |
3057 | -env JUJU_UNIT_NAME="riak/0"''' |
3058 | + @uses_sudo |
3059 | + @inlineCallbacks |
3060 | + def test_run_actual_process(self): |
3061 | + # "unpatch" to use real /etc/init |
3062 | + self.patch(UpstartService, "init_dir", self.real_init_dir) |
3063 | + yield self.deployment.start( |
3064 | + "0", get_test_zookeeper_address(), self.bundle) |
3065 | + old_pid = yield self.deployment.get_pid() |
3066 | + self.assert_pid_running(old_pid, True) |
3067 | + |
3068 | + # Give the job a chance to fall over and be restarted (if the |
3069 | + # pid doesn't change, that hasn't happened) |
3070 | + yield self.sleep(0.1) |
3071 | + self.assertEquals((yield self.deployment.get_pid()), old_pid) |
3072 | + self.assert_pid_running(old_pid, True) |
3073 | + |
3074 | + # Kick the job over ourselves; check it comes back |
3075 | + os.system("sudo kill -9 %s" % old_pid) |
3076 | + yield self.sleep(0.1) |
3077 | + self.assert_pid_running(old_pid, False) |
3078 | + new_pid = yield self.deployment.get_pid() |
3079 | + self.assertNotEquals(new_pid, old_pid) |
3080 | + self.assert_pid_running(new_pid, True) |
3081 | + |
3082 | + yield self.deployment.destroy() |
3083 | + self.assertEquals((yield self.deployment.get_pid()), None) |
3084 | + self.assert_pid_running(new_pid, False) |
3085 | + |
3086 | + @uses_sudo |
3087 | + @inlineCallbacks |
3088 | + def test_fail_to_run_actual_process(self): |
3089 | + self.deployment.unit_agent_module = "haha.disregard.that" |
3090 | + self.patch(UpstartService, "init_dir", self.real_init_dir) |
3091 | + |
3092 | + d = self.deployment.start( |
3093 | + "0", get_test_zookeeper_address(), self.bundle) |
3094 | + e = yield self.assertFailure(d, UnitDeploymentError) |
3095 | + self.assertTrue(str(e).startswith( |
3096 | + "Failed to start job juju-wordpress-0; got output:\n")) |
3097 | + self.assertIn("No module named haha", str(e)) |
3098 | + |
3099 | + yield self.deployment.destroy() |
3100 | |
3101 | |
3102 | class UnitContainerDeploymentTest(RepositoryTestBase): |
3103 | @@ -365,14 +324,6 @@ |
3104 | "ns1-riak-0", |
3105 | self.unit_deploy.container_name) |
3106 | |
3107 | - def test_get_upstart_job(self): |
3108 | - upstart_job = self.unit_deploy.get_upstart_unit_job( |
3109 | - 0, "127.0.1.1:2181") |
3110 | - job = self.get_normalized(upstart_job) |
3111 | - self.assertIn('JUJU_ZOOKEEPER="127.0.1.1:2181"', job) |
3112 | - self.assertIn('JUJU_MACHINE_ID="0"', job) |
3113 | - self.assertIn('JUJU_UNIT_NAME="riak/0"', job) |
3114 | - |
3115 | @inlineCallbacks |
3116 | def test_destroy(self): |
3117 | mock_container = self.mocker.patch(self.unit_deploy.container) |
3118 | @@ -403,7 +354,7 @@ |
3119 | unit_deploy = UnitContainerDeployment( |
3120 | self.unit_name, self.juju_home) |
3121 | container = yield unit_deploy._get_master_template( |
3122 | - "local", "127.0.0.1:1", "abc") |
3123 | + "local", "abc") |
3124 | self.assertEqual(container.origin, "lp:~juju/foobar") |
3125 | self.assertEqual( |
3126 | container.customize_log, |
3127 | @@ -420,7 +371,7 @@ |
3128 | mock_deploy = self.mocker.patch(self.unit_deploy) |
3129 | # this minimally validates that we are also called with the |
3130 | # expect public key |
3131 | - mock_deploy._get_container(ANY, ANY, ANY, env["JUJU_PUBLIC_KEY"]) |
3132 | + mock_deploy._get_container(ANY, ANY, env["JUJU_PUBLIC_KEY"]) |
3133 | self.mocker.result((container, rootfs)) |
3134 | |
3135 | mock_container = self.mocker.patch(container) |
3136 | @@ -434,7 +385,7 @@ |
3137 | yield self.unit_deploy.start("0", "127.0.1.1:2181", self.bundle) |
3138 | |
3139 | # Verify the upstart job |
3140 | - upstart_agent_name = "%s-unit-agent.conf" % ( |
3141 | + upstart_agent_name = "juju-%s.conf" % ( |
3142 | self.unit_name.replace("/", "-")) |
3143 | content = open( |
3144 | os.path.join(rootfs, "etc", "init", upstart_agent_name)).read() |
3145 | @@ -443,10 +394,13 @@ |
3146 | self.assertIn('JUJU_MACHINE_ID="0"', job) |
3147 | self.assertIn('JUJU_UNIT_NAME="riak/0"', job) |
3148 | |
3149 | - # Verify the symlink exists |
3150 | + # Verify the symlinks exist |
3151 | self.assertTrue(os.path.lexists(os.path.join( |
3152 | self.unit_deploy.juju_home, "units", |
3153 | self.unit_deploy.unit_path_name, "unit.log"))) |
3154 | + self.assertTrue(os.path.lexists(os.path.join( |
3155 | + self.unit_deploy.juju_home, "units", |
3156 | + self.unit_deploy.unit_path_name, "output.log"))) |
3157 | |
3158 | # Verify the charm is on disk. |
3159 | self.assertTrue(os.path.exists(os.path.join( |
3160 | @@ -471,7 +425,7 @@ |
3161 | container = LXCContainer(self.unit_name, None, None, None) |
3162 | |
3163 | mock_deploy = self.mocker.patch(self.unit_deploy) |
3164 | - mock_deploy._get_master_template(ANY, ANY, ANY) |
3165 | + mock_deploy._get_master_template(ANY, ANY) |
3166 | self.mocker.result(container) |
3167 | |
3168 | mock_container = self.mocker.patch(container) |
3169 | @@ -481,7 +435,7 @@ |
3170 | self.mocker.replay() |
3171 | |
3172 | container, rootfs = yield self.unit_deploy._get_container( |
3173 | - "0", "127.0.0.1:2181", None, "dsa...") |
3174 | + "0", None, "dsa...") |
3175 | |
3176 | output = self.output.getvalue() |
3177 | self.assertIn("Container created for %s" % self.unit_deploy.unit_name, |
3178 | |
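The pid-file tests removed above exercised the classic `kill(pid, 0)` liveness probe, which this branch replaces with Upstart status queries. A minimal standalone sketch of that probe, independent of juju (the helper name `pid_is_running` is illustrative, not from the branch):

```python
import errno
import os


def pid_is_running(pid):
    """Check whether a process with the given pid exists.

    From man 2 kill: "If sig is 0, then no signal is sent, but error
    checking is still performed" -- so kill(pid, 0) probes for
    existence without affecting the target process.
    """
    try:
        os.kill(pid, 0)
    except OSError as e:
        if e.errno == errno.ESRCH:
            # No such process.
            return False
        if e.errno == errno.EPERM:
            # Process exists but belongs to another user; this is the
            # case the removed test_is_running_against_unknown_error
            # covered by re-raising when unprivileged.
            raise
        raise
    return True
```

Delegating to Upstart avoids the stale-pid-file and permission edge cases this probe has to handle explicitly.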
3179 | === modified file 'juju/machine/unit.py' |
3180 | --- juju/machine/unit.py 2011-10-01 00:04:14 +0000 |
3181 | +++ juju/machine/unit.py 2012-02-02 16:42:42 +0000 |
3182 | @@ -1,19 +1,17 @@ |
3183 | import os |
3184 | -import errno |
3185 | -import signal |
3186 | import shutil |
3187 | import sys |
3188 | import logging |
3189 | |
3190 | import juju |
3191 | |
3192 | -from twisted.internet.defer import ( |
3193 | - Deferred, inlineCallbacks, returnValue, succeed, fail) |
3194 | -from twisted.internet.protocol import ProcessProtocol |
3195 | +from twisted.internet.defer import inlineCallbacks, returnValue |
3196 | |
3197 | from juju.charm.bundle import CharmBundle |
3198 | +from juju.errors import ServiceError |
3199 | +from juju.lib.lxc import LXCContainer, get_containers, LXCError |
3200 | from juju.lib.twistutils import get_module_directory |
3201 | -from juju.lib.lxc import LXCContainer, get_containers, LXCError |
3202 | +from juju.lib.upstart import UpstartService |
3203 | |
3204 | from .errors import UnitDeploymentError |
3205 | |
3206 | @@ -26,22 +24,17 @@ |
3207 | return UnitMachineDeployment |
3208 | |
3209 | |
3210 | -class AgentProcessProtocol(ProcessProtocol): |
3211 | - |
3212 | - def __init__(self, deferred): |
3213 | - self.deferred = deferred |
3214 | - self._error_buffer = [] |
3215 | - |
3216 | - def errReceived(self, data): |
3217 | - self._error_buffer.append(data) |
3218 | - |
3219 | - def processEnded(self, reason): |
3220 | - if self._error_buffer: |
3221 | - msg = "".join(self._error_buffer) |
3222 | - msg.strip() |
3223 | - self.deferred.errback(UnitDeploymentError(msg)) |
3224 | - else: |
3225 | - self.deferred.callback(True) |
3226 | +def _get_environment(unit_name, juju_home, machine_id, zookeeper_hosts): |
3227 | + environ = dict() |
3228 | + environ["JUJU_MACHINE_ID"] = str(machine_id) |
3229 | + environ["JUJU_UNIT_NAME"] = unit_name |
3230 | + environ["JUJU_HOME"] = juju_home |
3231 | + environ["JUJU_ZOOKEEPER"] = zookeeper_hosts |
3232 | + environ["PYTHONPATH"] = ":".join( |
3233 | + filter(None, [ |
3234 | + os.path.dirname(get_module_directory(juju)), |
3235 | + os.environ.get("PYTHONPATH")])) |
3236 | + return environ |
3237 | |
3238 | |
3239 | class UnitMachineDeployment(object): |
3240 | @@ -58,46 +51,35 @@ |
3241 | unit_agent_module = "juju.agents.unit" |
3242 | |
3243 | def __init__(self, unit_name, juju_home): |
3244 | - self.unit_name = unit_name |
3245 | - |
3246 | assert ".." not in unit_name, "Invalid Unit Name" |
3247 | + self.unit_name = unit_name |
3248 | + self.juju_home = juju_home |
3249 | self.unit_path_name = unit_name.replace("/", "-") |
3250 | - self.juju_home = juju_home |
3251 | - |
3252 | self.directory = os.path.join( |
3253 | self.juju_home, "units", self.unit_path_name) |
3254 | - |
3255 | - self.pid_file = os.path.join( |
3256 | - self.juju_home, "units", "%s.pid" % self.unit_path_name) |
3257 | - |
3258 | + self.service = UpstartService( |
3259 | + # NOTE: we need use_sudo to work correctly during tests that |
3260 | + # launch actual processes (rather than just mocking/trusting). |
3261 | + "juju-%s" % self.unit_path_name, use_sudo=True) |
3262 | + |
3263 | + @inlineCallbacks |
3264 | def start(self, machine_id, zookeeper_hosts, bundle): |
3265 | """Start a service unit agent.""" |
3266 | - # Extract the charm into the unit directory. |
3267 | self.unpack_charm(bundle) |
3268 | - |
3269 | - # Start the service unit agent |
3270 | - log_file = os.path.join(self.directory, "charm.log") |
3271 | - environ = self.get_environment(machine_id, zookeeper_hosts) |
3272 | - args = [sys.executable, "-m", self.unit_agent_module, "-n", |
3273 | - "--pidfile", self.pid_file, "--logfile", log_file] |
3274 | - |
3275 | - from twisted.internet import reactor |
3276 | - process_deferred = Deferred() |
3277 | - protocol = AgentProcessProtocol(process_deferred) |
3278 | - reactor.spawnProcess(protocol, sys.executable, args, environ) |
3279 | - return process_deferred |
3280 | - |
3281 | - def get_environment(self, machine_id, zookeeper_hosts): |
3282 | - environ = dict(os.environ) |
3283 | - environ["JUJU_MACHINE_ID"] = str(machine_id) |
3284 | - environ["JUJU_UNIT_NAME"] = self.unit_name |
3285 | - environ["JUJU_HOME"] = self.juju_home |
3286 | - environ["JUJU_ZOOKEEPER"] = zookeeper_hosts |
3287 | - environ["PYTHONPATH"] = ":".join( |
3288 | - filter(None, [ |
3289 | - os.path.dirname(get_module_directory(juju)), |
3290 | - environ.get("PYTHONPATH")])) |
3291 | - return environ |
3292 | + self.service.set_description( |
3293 | + "Juju unit agent for %s" % self.unit_name) |
3294 | + self.service.set_environ(_get_environment( |
3295 | + self.unit_name, self.juju_home, machine_id, zookeeper_hosts)) |
3296 | + self.service.set_command(" ".join(( |
3297 | + "/usr/bin/python", "-m", self.unit_agent_module, |
3298 | + "--nodaemon", |
3299 | + "--logfile", os.path.join(self.directory, "charm.log"), |
3300 | + "--session-file", |
3301 | + "/var/run/juju/unit-%s-agent.zksession" % self.unit_path_name))) |
3302 | + try: |
3303 | + yield self.service.start() |
3304 | + except ServiceError as e: |
3305 | + raise UnitDeploymentError(str(e)) |
3306 | |
3307 | @inlineCallbacks |
3308 | def destroy(self): |
3309 | @@ -105,41 +87,17 @@ |
3310 | |
3311 | This will destroy/unmount any state on disk. |
3312 | """ |
3313 | - running = yield self.is_running() |
3314 | - if running: |
3315 | - pid = int(open(self.pid_file).read()) |
3316 | - try: |
3317 | - os.kill(pid, signal.SIGKILL) |
3318 | - except OSError, e: |
3319 | - if e.errno != errno.ESRCH: |
3320 | - raise |
3321 | - |
3322 | - if os.path.exists(self.pid_file): |
3323 | - os.remove(self.pid_file) |
3324 | - |
3325 | + yield self.service.destroy() |
3326 | if os.path.exists(self.directory): |
3327 | shutil.rmtree(self.directory) |
3328 | |
3329 | + def get_pid(self): |
3330 | + """Get the service unit's process id.""" |
3331 | + return self.service.get_pid() |
3332 | + |
3333 | def is_running(self): |
3334 | """Is the service unit running.""" |
3335 | - try: |
3336 | - with open(self.pid_file) as pid_fh: |
3337 | - pid = int(pid_fh.read()) |
3338 | - except (IOError, ValueError): |
3339 | - return succeed(False) |
3340 | - |
3341 | - # Attempt to send a signal to the process to verify its a valid process |
3342 | - # From man 2 kill |
3343 | - # "If sig is 0, then no signal is sent, but error checking is still |
3344 | - # performed; this can be used to check for the existence of a process |
3345 | - # ID or process group ID." |
3346 | - try: |
3347 | - os.kill(pid, 0) |
3348 | - except OSError, e: |
3349 | - if e.errno == errno.ESRCH: |
3350 | - return succeed(False) |
3351 | - return fail(e) |
3352 | - return succeed(True) |
3353 | + return self.service.is_running() |
3354 | |
3355 | def unpack_charm(self, charm): |
3356 | """Unpack a charm to the service units directory.""" |
3357 | @@ -150,27 +108,7 @@ |
3358 | charm.extract_to(os.path.join(self.directory, "charm")) |
3359 | |
3360 | |
3361 | -container_upstart_job_template = """\ |
3362 | -description "Unit agent for %(JUJU_UNIT_NAME)s" |
3363 | -author "Juju Team <juju@lists.canonical.com>" |
3364 | - |
3365 | -start on start on filesystem or runlevel [2345] |
3366 | -stop on runlevel [!2345] |
3367 | - |
3368 | -respawn |
3369 | - |
3370 | -env JUJU_MACHINE_ID="%(JUJU_MACHINE_ID)s" |
3371 | -env JUJU_HOME="%(JUJU_HOME)s" |
3372 | -env JUJU_ZOOKEEPER="%(JUJU_ZOOKEEPER)s" |
3373 | -env JUJU_UNIT_NAME="%(JUJU_UNIT_NAME)s" |
3374 | -env PYTHONPATH="%(PYTHONPATH)s" |
3375 | - |
3376 | -exec /usr/bin/python -m juju.agents.unit \ |
3377 | - --logfile=/var/log/juju/unit-%(UNIT_PATH_NAME)s.log |
3378 | -""" |
3379 | - |
3380 | - |
3381 | -class UnitContainerDeployment(UnitMachineDeployment): |
3382 | +class UnitContainerDeployment(object): |
3383 | """Deploy a service unit in a container. |
3384 | |
3385 | Units deployed in a container have strong isolation between |
3386 | @@ -185,66 +123,35 @@ |
3387 | """ |
3388 | |
3389 | def __init__(self, unit_name, juju_home): |
3390 | - super(UnitContainerDeployment, self).__init__(unit_name, juju_home) |
3391 | + self.unit_name = unit_name |
3392 | + self.juju_home = juju_home |
3393 | + self.unit_path_name = unit_name.replace("/", "-") |
3394 | |
3395 | + self._juju_origin = os.environ.get("JUJU_ORIGIN") |
3396 | self._unit_namespace = os.environ.get("JUJU_UNIT_NS") |
3397 | - self._juju_origin = os.environ.get("JUJU_ORIGIN") |
3398 | assert self._unit_namespace is not None, "Required unit ns not found" |
3399 | + self.container_name = "%s-%s" % ( |
3400 | + self._unit_namespace, self.unit_path_name) |
3401 | |
3402 | - self.pid_file = None |
3403 | self.container = LXCContainer(self.container_name, None, None, None) |
3404 | - |
3405 | - @property |
3406 | - def container_name(self): |
3407 | - """Get a qualfied name for the container. |
3408 | - |
3409 | - The units directory for the machine points to a path like:: |
3410 | - |
3411 | - /var/lib/juju/units |
3412 | - |
3413 | - In the case of the local provider this directory is qualified |
3414 | - to allow for multiple users with multiple environments:: |
3415 | - |
3416 | - /var/lib/juju/username-envname |
3417 | - |
3418 | - This value is passed to the agent via the JUJU_HOME environment |
3419 | - variable. |
3420 | - |
3421 | - This function extracts the name qualifier for the container from |
3422 | - the JUJU_HOME value. |
3423 | - """ |
3424 | - return "%s-%s" % (self._unit_namespace, |
3425 | - self.unit_name.replace("/", "-")) |
3426 | + self.directory = None |
3427 | |
3428 | def setup_directories(self): |
3429 | # Create state directories for unit in the container |
3430 | # Move to juju-create script |
3431 | - units_dir = os.path.join( |
3432 | - self.directory, "var", "lib", "juju", "units") |
3433 | - if not os.path.exists(units_dir): |
3434 | - os.makedirs(units_dir) |
3435 | - |
3436 | - state_dir = os.path.join( |
3437 | - self.directory, "var", "lib", "juju", "state") |
3438 | - if not os.path.exists(state_dir): |
3439 | - os.makedirs(state_dir) |
3440 | - |
3441 | - log_dir = os.path.join( |
3442 | - self.directory, "var", "log", "juju") |
3443 | - if not os.path.exists(log_dir): |
3444 | - os.makedirs(log_dir) |
3445 | - |
3446 | - unit_dir = os.path.join(units_dir, self.unit_path_name) |
3447 | - if not os.path.exists(unit_dir): |
3448 | - os.mkdir(unit_dir) |
3449 | - |
3450 | - host_unit_dir = os.path.join( |
3451 | - self.juju_home, "units", self.unit_path_name) |
3452 | - if not os.path.exists(host_unit_dir): |
3453 | - os.makedirs(host_unit_dir) |
3454 | + base = self.directory |
3455 | + dirs = ((base, "var", "lib", "juju", "units", self.unit_path_name), |
3456 | + (base, "var", "lib", "juju", "state"), |
3457 | + (base, "var", "log", "juju"), |
3458 | + (self.juju_home, "units", self.unit_path_name)) |
3459 | + |
3460 | + for parts in dirs: |
3461 | + dir_ = os.path.join(*parts) |
3462 | + if not os.path.exists(dir_): |
3463 | + os.makedirs(dir_) |
3464 | |
3465 | @inlineCallbacks |
3466 | - def _get_master_template(self, machine_id, zookeeper_hosts, public_key): |
3467 | + def _get_master_template(self, machine_id, public_key): |
3468 | container_template_name = "%s-%s-template" % ( |
3469 | self._unit_namespace, machine_id) |
3470 | |
3471 | @@ -260,7 +167,7 @@ |
3472 | if not master_template.is_constructed(): |
3473 | log.debug("Creating master container...") |
3474 | yield master_template.create() |
3475 | - log.debug("Created master container %s" % container_template_name) |
3476 | + log.debug("Created master container %s", container_template_name) |
3477 | |
3478 | # it wasn't constructed and we couldn't construct it |
3479 | if not master_template.is_constructed(): |
3480 | @@ -269,15 +176,15 @@ |
3481 | returnValue(master_template) |
3482 | |
3483 | @inlineCallbacks |
3484 | - def _get_container(self, machine_id, zookeeper_hosts, bundle, public_key): |
3485 | + def _get_container(self, machine_id, bundle, public_key): |
3486 | master_template = yield self._get_master_template( |
3487 | - machine_id, zookeeper_hosts, public_key) |
3488 | + machine_id, public_key) |
3489 | log.info( |
3490 | - "Creating container %s...", os.path.basename(self.directory)) |
3491 | + "Creating container %s...", self.unit_path_name) |
3492 | |
3493 | container = yield master_template.clone(self.container_name) |
3494 | directory = container.rootfs |
3495 | - log.info("Container created for %s" % self.unit_name) |
3496 | + log.info("Container created for %s", self.unit_name) |
3497 | returnValue((container, directory)) |
3498 | |
3499 | @inlineCallbacks |
3500 | @@ -293,10 +200,9 @@ |
3501 | # Build a template container that can be cloned in deploy |
3502 | # we leave the loosely initialized self.container in place for |
3503 | # the class as thats all we need for methods other than start. |
3504 | - self.container, self.directory = yield self._get_container(machine_id, |
3505 | - zookeeper_hosts, |
3506 | - bundle, |
3507 | - public_key) |
3508 | + self.container, self.directory = yield self._get_container( |
3509 | + machine_id, bundle, public_key) |
3510 | + |
3511 | # Create state directories for unit in the container |
3512 | self.setup_directories() |
3513 | |
3514 | @@ -308,13 +214,25 @@ |
3515 | log.debug("Charm extracted into container") |
3516 | |
3517 | # Write upstart file for the agent into the container |
3518 | - upstart_path = os.path.join( |
3519 | - self.directory, "etc", "init", |
3520 | - "%s-unit-agent.conf" % self.unit_path_name) |
3521 | - with open(upstart_path, "w") as fh: |
3522 | - fh.write(self.get_upstart_unit_job(machine_id, zookeeper_hosts)) |
3523 | + service_name = "juju-%s" % self.unit_path_name |
3524 | + init_dir = os.path.join(self.directory, "etc", "init") |
3525 | + service = UpstartService(service_name, init_dir=init_dir) |
3526 | + service.set_description( |
3527 | + "Juju unit agent for %s" % self.unit_name) |
3528 | + service.set_environ(_get_environment( |
3529 | + self.unit_name, "/var/lib/juju", machine_id, zookeeper_hosts)) |
3530 | + service.set_output_path( |
3531 | + "/var/log/juju/unit-%s-output.log" % self.unit_path_name) |
3532 | + service.set_command(" ".join(( |
3533 | + "/usr/bin/python", |
3534 | + "-m", "juju.agents.unit", |
3535 | + "--nodaemon", |
3536 | + "--logfile", "/var/log/juju/unit-%s.log" % self.unit_path_name, |
3537 | + "--session-file", |
3538 | + "/var/run/juju/unit-%s-agent.zksession" % self.unit_path_name))) |
3539 | + yield service.install() |
3540 | |
3541 | - # Create a symlink on the host for easier access to the unit log file |
3542 | + # Create symlinks on the host for easier access to the unit log files |
3543 | unit_log_path_host = os.path.join( |
3544 | self.juju_home, "units", self.unit_path_name, "unit.log") |
3545 | if not os.path.lexists(unit_log_path_host): |
3546 | @@ -322,6 +240,13 @@ |
3547 | os.path.join(self.directory, "var", "log", "juju", |
3548 | "unit-%s.log" % self.unit_path_name), |
3549 | unit_log_path_host) |
3550 | + unit_output_path_host = os.path.join( |
3551 | + self.juju_home, "units", self.unit_path_name, "output.log") |
3552 | + if not os.path.lexists(unit_output_path_host): |
3553 | + os.symlink( |
3554 | + os.path.join(self.directory, "var", "log", "juju", |
3555 | + "unit-%s-output.log" % self.unit_path_name), |
3556 | + unit_output_path_host) |
3557 | |
3558 | # Debug log for the container |
3559 | container_log_path = os.path.join( |
3560 | @@ -330,36 +255,22 @@ |
3561 | |
3562 | log.debug("Starting container...") |
3563 | yield self.container.run() |
3564 | - log.info("Started container for %s" % self.unit_name) |
3565 | + log.info("Started container for %s", self.unit_name) |
3566 | |
3567 | @inlineCallbacks |
3568 | def destroy(self): |
3569 | - """Destroy the unit container. |
3570 | - """ |
3571 | + """Destroy the unit container.""" |
3572 | log.debug("Destroying container...") |
3573 | yield self.container.destroy() |
3574 | - log.info("Destroyed container for %s" % self.unit_name) |
3575 | + log.info("Destroyed container for %s", self.unit_name) |
3576 | |
3577 | @inlineCallbacks |
3578 | def is_running(self): |
3579 | - """Is the unit container running. |
3580 | - """ |
3581 | - # TODO: container running may not imply agent running. the |
3582 | - # pid file has the pid from the container, we need a container |
3583 | - # pid -> host pid mapping to query status from the machine agent. |
3584 | - # alternatively querying zookeeper for the unit agent presence |
3585 | - # node. |
3586 | + """Is the unit container running?""" |
3587 | + # TODO: container running may not imply agent running. |
3588 | + # query zookeeper for the unit agent presence node? |
3589 | if not self.container: |
3590 | returnValue(False) |
3591 | container_map = yield get_containers( |
3592 | prefix=self.container.container_name) |
3593 | returnValue(container_map.get(self.container.container_name, False)) |
3594 | - |
3595 | - def get_upstart_unit_job(self, machine_id, zookeeper_hosts): |
3596 | - """Return a string containing the upstart job to start the unit agent. |
3597 | - """ |
3598 | - environ = self.get_environment(machine_id, zookeeper_hosts) |
3599 | - # Keep qualified locations within the container for colo support |
3600 | - environ["JUJU_HOME"] = "/var/lib/juju" |
3601 | - environ["UNIT_PATH_NAME"] = self.unit_path_name |
3602 | - return container_upstart_job_template % environ |
3603 | |
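The `_get_environment` helper above assembles the environment handed to the agent's Upstart job. A self-contained sketch of the same assembly (parameters `module_dir` and `extra_pythonpath` stand in for the values the real helper derives from `get_module_directory(juju)` and `os.environ`, so this function name and signature are illustrative only):

```python
def get_agent_environment(unit_name, juju_home, machine_id,
                          zookeeper_hosts, module_dir,
                          extra_pythonpath=None):
    """Build the environment dict for a unit agent's upstart job.

    Mirrors _get_environment in juju/machine/unit.py: every value is
    a string, and PYTHONPATH is joined with filter(None, ...) so that
    an unset inherited PYTHONPATH does not leave a stray ":".
    """
    return {
        "JUJU_MACHINE_ID": str(machine_id),
        "JUJU_UNIT_NAME": unit_name,
        "JUJU_HOME": juju_home,
        "JUJU_ZOOKEEPER": zookeeper_hosts,
        "PYTHONPATH": ":".join(
            filter(None, [module_dir, extra_pythonpath])),
    }
```

Building the dict from scratch (rather than copying `os.environ`, as the old `get_environment` method did) keeps the Upstart job's `env` stanzas minimal and reproducible.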
3604 | === modified file 'juju/providers/common/cloudinit.py' |
3605 | --- juju/providers/common/cloudinit.py 2012-01-09 13:58:21 +0000 |
3606 | +++ juju/providers/common/cloudinit.py 2012-02-02 16:42:42 +0000 |
3607 | @@ -1,6 +1,7 @@ |
3608 | from subprocess import Popen, PIPE |
3609 | |
3610 | from juju.errors import CloudInitError |
3611 | +from juju.lib.upstart import UpstartService |
3612 | from juju.providers.common.utils import format_cloud_init |
3613 | from juju.state.auth import make_identity |
3614 | import juju |
3615 | @@ -41,21 +42,26 @@ |
3616 | |
3617 | |
3618 | def _machine_scripts(machine_id, zookeeper_hosts): |
3619 | - return [ |
3620 | - "JUJU_MACHINE_ID=%s JUJU_ZOOKEEPER=%s " |
3621 | - "python -m juju.agents.machine -n " |
3622 | - "--logfile=/var/log/juju/machine-agent.log " |
3623 | - "--pidfile=/var/run/juju/machine-agent.pid" |
3624 | - % (machine_id, zookeeper_hosts)] |
3625 | + service = UpstartService("juju-machine-agent") |
3626 | + service.set_description("Juju machine agent") |
3627 | + service.set_environ( |
3628 | + {"JUJU_MACHINE_ID": machine_id, "JUJU_ZOOKEEPER": zookeeper_hosts}) |
3629 | + service.set_command( |
3630 | + "python -m juju.agents.machine --nodaemon " |
3631 | + "--logfile /var/log/juju/machine-agent.log " |
3632 | + "--session-file /var/run/juju/machine-agent.zksession") |
3633 | + return service.get_cloud_init_commands() |
3634 | |
3635 | |
3636 | def _provision_scripts(zookeeper_hosts): |
3637 | - return [ |
3638 | - "JUJU_ZOOKEEPER=%s " |
3639 | - "python -m juju.agents.provision -n " |
3640 | - "--logfile=/var/log/juju/provision-agent.log " |
3641 | - "--pidfile=/var/run/juju/provision-agent.pid" |
3642 | - % zookeeper_hosts] |
3643 | + service = UpstartService("juju-provision-agent") |
3644 | + service.set_description("Juju provisioning agent") |
3645 | + service.set_environ({"JUJU_ZOOKEEPER": zookeeper_hosts}) |
3646 | + service.set_command( |
3647 | + "python -m juju.agents.provision --nodaemon " |
3648 | + "--logfile /var/log/juju/provision-agent.log " |
3649 | + "--session-file /var/run/juju/provision-agent.zksession") |
3650 | + return service.get_cloud_init_commands() |
3651 | |
3652 | |
3653 | def _line_generator(data): |
3654 | @@ -64,6 +70,7 @@ |
3655 | if stripped: |
3656 | yield (len(line)-len(stripped), stripped) |
3657 | |
3658 | + |
3659 | def parse_juju_origin(data): |
3660 | next = _line_generator(data).next |
3661 | try: |
3662 | |
3663 | === modified file 'juju/providers/common/tests/data/cloud_init_bootstrap' |
3664 | --- juju/providers/common/tests/data/cloud_init_bootstrap 2012-01-09 14:17:21 +0000 |
3665 | +++ juju/providers/common/tests/data/cloud_init_bootstrap 2012-02-02 16:42:42 +0000 |
3666 | @@ -4,12 +4,58 @@ |
3667 | machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'localhost:2181', |
3668 | machine-id: passport} |
3669 | output: {all: '| tee -a /var/log/cloud-init-output.log'} |
3670 | -packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, |
3671 | - python-zookeeper, default-jre-headless, zookeeper, zookeeperd] |
3672 | -runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir |
3673 | - -p /var/log/juju, 'juju-admin initialize --instance-id=token --admin-identity=admin:19vlzY4Vc3q4Ew5OsCwKYqrq1HI= --provider-type=dummy', |
3674 | - 'JUJU_MACHINE_ID=passport JUJU_ZOOKEEPER=localhost:2181 python -m juju.agents.machine |
3675 | - -n --logfile=/var/log/juju/machine-agent.log --pidfile=/var/run/juju/machine-agent.pid', |
3676 | - 'JUJU_ZOOKEEPER=localhost:2181 python -m juju.agents.provision -n --logfile=/var/log/juju/provision-agent.log |
3677 | - --pidfile=/var/run/juju/provision-agent.pid'] |
3678 | +packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper, |
3679 | + default-jre-headless, zookeeper, zookeeperd] |
3680 | +runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir -p |
3681 | + /var/log/juju, 'juju-admin initialize --instance-id=token --admin-identity=admin:19vlzY4Vc3q4Ew5OsCwKYqrq1HI= |
3682 | + --provider-type=dummy', 'cat >> /etc/init/juju-machine-agent.conf <<EOF |
3683 | + |
3684 | + description "Juju machine agent" |
3685 | + |
3686 | + author "Juju Team <juju@lists.ubuntu.com>" |
3687 | + |
3688 | + |
3689 | + start on runlevel [2345] |
3690 | + |
3691 | + stop on runlevel [!2345] |
3692 | + |
3693 | + respawn |
3694 | + |
3695 | + |
3696 | + env JUJU_MACHINE_ID="passport" |
3697 | + |
3698 | + env JUJU_ZOOKEEPER="localhost:2181" |
3699 | + |
3700 | + |
3701 | + exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log |
3702 | + --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output |
3703 | + 2>&1 |
3704 | + |
3705 | + EOF |
3706 | + |
3707 | + ', /sbin/start juju-machine-agent, 'cat >> /etc/init/juju-provision-agent.conf |
3708 | + <<EOF |
3709 | + |
3710 | + description "Juju provisioning agent" |
3711 | + |
3712 | + author "Juju Team <juju@lists.ubuntu.com>" |
3713 | + |
3714 | + |
3715 | + start on runlevel [2345] |
3716 | + |
3717 | + stop on runlevel [!2345] |
3718 | + |
3719 | + respawn |
3720 | + |
3721 | + |
3722 | + env JUJU_ZOOKEEPER="localhost:2181" |
3723 | + |
3724 | + |
3725 | + exec python -m juju.agents.provision --nodaemon --logfile /var/log/juju/provision-agent.log |
3726 | + --session-file /var/run/juju/provision-agent.zksession >> /tmp/juju-provision-agent.output |
3727 | + 2>&1 |
3728 | + |
3729 | + EOF |
3730 | + |
3731 | + ', /sbin/start juju-provision-agent] |
3732 | ssh_authorized_keys: [chubb] |
3733 | |
3734 | === modified file 'juju/providers/common/tests/data/cloud_init_bootstrap_zookeepers' |
3735 | --- juju/providers/common/tests/data/cloud_init_bootstrap_zookeepers 2012-01-09 14:17:21 +0000 |
3736 | +++ juju/providers/common/tests/data/cloud_init_bootstrap_zookeepers 2012-02-02 16:42:42 +0000 |
3737 | @@ -4,13 +4,58 @@ |
3738 | machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'cotswold:2181,longleat:2181,localhost:2181', |
3739 | machine-id: passport} |
3740 | output: {all: '| tee -a /var/log/cloud-init-output.log'} |
3741 | -packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, |
3742 | - python-zookeeper, default-jre-headless, zookeeper, zookeeperd] |
3743 | -runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir |
3744 | - -p /var/log/juju, 'juju-admin initialize --instance-id=token --admin-identity=admin:19vlzY4Vc3q4Ew5OsCwKYqrq1HI= --provider-type=dummy', |
3745 | - 'JUJU_MACHINE_ID=passport JUJU_ZOOKEEPER=cotswold:2181,longleat:2181,localhost:2181 |
3746 | - python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log |
3747 | - --pidfile=/var/run/juju/machine-agent.pid', 'JUJU_ZOOKEEPER=cotswold:2181,longleat:2181,localhost:2181 |
3748 | - python -m juju.agents.provision -n --logfile=/var/log/juju/provision-agent.log |
3749 | - --pidfile=/var/run/juju/provision-agent.pid'] |
3750 | +packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper, |
3751 | + default-jre-headless, zookeeper, zookeeperd] |
3752 | +runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir -p |
3753 | + /var/log/juju, 'juju-admin initialize --instance-id=token --admin-identity=admin:19vlzY4Vc3q4Ew5OsCwKYqrq1HI= |
3754 | + --provider-type=dummy', 'cat >> /etc/init/juju-machine-agent.conf <<EOF |
3755 | + |
3756 | + description "Juju machine agent" |
3757 | + |
3758 | + author "Juju Team <juju@lists.ubuntu.com>" |
3759 | + |
3760 | + |
3761 | + start on runlevel [2345] |
3762 | + |
3763 | + stop on runlevel [!2345] |
3764 | + |
3765 | + respawn |
3766 | + |
3767 | + |
3768 | + env JUJU_MACHINE_ID="passport" |
3769 | + |
3770 | + env JUJU_ZOOKEEPER="cotswold:2181,longleat:2181,localhost:2181" |
3771 | + |
3772 | + |
3773 | + exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log |
3774 | + --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output |
3775 | + 2>&1 |
3776 | + |
3777 | + EOF |
3778 | + |
3779 | + ', /sbin/start juju-machine-agent, 'cat >> /etc/init/juju-provision-agent.conf |
3780 | + <<EOF |
3781 | + |
3782 | + description "Juju provisioning agent" |
3783 | + |
3784 | + author "Juju Team <juju@lists.ubuntu.com>" |
3785 | + |
3786 | + |
3787 | + start on runlevel [2345] |
3788 | + |
3789 | + stop on runlevel [!2345] |
3790 | + |
3791 | + respawn |
3792 | + |
3793 | + |
3794 | + env JUJU_ZOOKEEPER="cotswold:2181,longleat:2181,localhost:2181" |
3795 | + |
3796 | + |
3797 | + exec python -m juju.agents.provision --nodaemon --logfile /var/log/juju/provision-agent.log |
3798 | + --session-file /var/run/juju/provision-agent.zksession >> /tmp/juju-provision-agent.output |
3799 | + 2>&1 |
3800 | + |
3801 | + EOF |
3802 | + |
3803 | + ', /sbin/start juju-provision-agent] |
3804 | ssh_authorized_keys: [chubb] |
3805 | |
3806 | === modified file 'juju/providers/common/tests/data/cloud_init_branch' |
3807 | --- juju/providers/common/tests/data/cloud_init_branch 2012-01-09 14:17:21 +0000 |
3808 | +++ juju/providers/common/tests/data/cloud_init_branch 2012-02-02 16:42:42 +0000 |
3809 | @@ -6,12 +6,34 @@ |
3810 | machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'cotswold:2181,longleat:2181', |
3811 | machine-id: passport} |
3812 | output: {all: '| tee -a /var/log/cloud-init-output.log'} |
3813 | -packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, |
3814 | - python-zookeeper] |
3815 | +packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper] |
3816 | runcmd: [sudo apt-get install -y python-txzookeeper, sudo mkdir -p /usr/lib/juju, |
3817 | - 'cd /usr/lib/juju && sudo /usr/bin/bzr co lp:blah/juju/blah-blah juju', |
3818 | - cd /usr/lib/juju/juju && sudo python setup.py develop, sudo mkdir -p /var/lib/juju, |
3819 | - sudo mkdir -p /var/log/juju, 'JUJU_MACHINE_ID=passport JUJU_ZOOKEEPER=cotswold:2181,longleat:2181 |
3820 | - python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log |
3821 | - --pidfile=/var/run/juju/machine-agent.pid'] |
3822 | + 'cd /usr/lib/juju && sudo /usr/bin/bzr co lp:blah/juju/blah-blah juju', cd /usr/lib/juju/juju |
3823 | + && sudo python setup.py develop, sudo mkdir -p /var/lib/juju, sudo mkdir -p /var/log/juju, |
3824 | + 'cat >> /etc/init/juju-machine-agent.conf <<EOF |
3825 | + |
3826 | + description "Juju machine agent" |
3827 | + |
3828 | + author "Juju Team <juju@lists.ubuntu.com>" |
3829 | + |
3830 | + |
3831 | + start on runlevel [2345] |
3832 | + |
3833 | + stop on runlevel [!2345] |
3834 | + |
3835 | + respawn |
3836 | + |
3837 | + |
3838 | + env JUJU_MACHINE_ID="passport" |
3839 | + |
3840 | + env JUJU_ZOOKEEPER="cotswold:2181,longleat:2181" |
3841 | + |
3842 | + |
3843 | + exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log |
3844 | + --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output |
3845 | + 2>&1 |
3846 | + |
3847 | + EOF |
3848 | + |
3849 | + ', /sbin/start juju-machine-agent] |
3850 | ssh_authorized_keys: [chubb] |
3851 | |
3852 | === modified file 'juju/providers/common/tests/data/cloud_init_branch_trunk' |
3853 | --- juju/providers/common/tests/data/cloud_init_branch_trunk 2012-01-09 14:17:21 +0000 |
3854 | +++ juju/providers/common/tests/data/cloud_init_branch_trunk 2012-02-02 16:42:42 +0000 |
3855 | @@ -6,12 +6,34 @@ |
3856 | machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'cotswold:2181,longleat:2181', |
3857 | machine-id: passport} |
3858 | output: {all: '| tee -a /var/log/cloud-init-output.log'} |
3859 | -packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, |
3860 | - python-zookeeper] |
3861 | +packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper] |
3862 | runcmd: [sudo apt-get install -y python-txzookeeper, sudo mkdir -p /usr/lib/juju, |
3863 | - 'cd /usr/lib/juju && sudo /usr/bin/bzr co lp:juju juju', |
3864 | - cd /usr/lib/juju/juju && sudo python setup.py develop, sudo mkdir -p /var/lib/juju, |
3865 | - sudo mkdir -p /var/log/juju, 'JUJU_MACHINE_ID=passport JUJU_ZOOKEEPER=cotswold:2181,longleat:2181 |
3866 | - python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log |
3867 | - --pidfile=/var/run/juju/machine-agent.pid'] |
3868 | + 'cd /usr/lib/juju && sudo /usr/bin/bzr co lp:juju juju', cd /usr/lib/juju/juju && |
3869 | + sudo python setup.py develop, sudo mkdir -p /var/lib/juju, sudo mkdir -p /var/log/juju, |
3870 | + 'cat >> /etc/init/juju-machine-agent.conf <<EOF |
3871 | + |
3872 | + description "Juju machine agent" |
3873 | + |
3874 | + author "Juju Team <juju@lists.ubuntu.com>" |
3875 | + |
3876 | + |
3877 | + start on runlevel [2345] |
3878 | + |
3879 | + stop on runlevel [!2345] |
3880 | + |
3881 | + respawn |
3882 | + |
3883 | + |
3884 | + env JUJU_MACHINE_ID="passport" |
3885 | + |
3886 | + env JUJU_ZOOKEEPER="cotswold:2181,longleat:2181" |
3887 | + |
3888 | + |
3889 | + exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log |
3890 | + --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output |
3891 | + 2>&1 |
3892 | + |
3893 | + EOF |
3894 | + |
3895 | + ', /sbin/start juju-machine-agent] |
3896 | ssh_authorized_keys: [chubb] |
3897 | |
3898 | === modified file 'juju/providers/common/tests/data/cloud_init_distro' |
3899 | --- juju/providers/common/tests/data/cloud_init_distro 2012-01-09 14:17:21 +0000 |
3900 | +++ juju/providers/common/tests/data/cloud_init_distro 2012-02-02 16:42:42 +0000 |
3901 | @@ -4,10 +4,32 @@ |
3902 | machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'cotswold:2181,longleat:2181', |
3903 | machine-id: passport} |
3904 | output: {all: '| tee -a /var/log/cloud-init-output.log'} |
3905 | -packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, |
3906 | - python-zookeeper] |
3907 | -runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir |
3908 | - -p /var/log/juju, 'JUJU_MACHINE_ID=passport JUJU_ZOOKEEPER=cotswold:2181,longleat:2181 |
3909 | - python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log |
3910 | - --pidfile=/var/run/juju/machine-agent.pid'] |
3911 | +packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper] |
3912 | +runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir -p |
3913 | + /var/log/juju, 'cat >> /etc/init/juju-machine-agent.conf <<EOF |
3914 | + |
3915 | + description "Juju machine agent" |
3916 | + |
3917 | + author "Juju Team <juju@lists.ubuntu.com>" |
3918 | + |
3919 | + |
3920 | + start on runlevel [2345] |
3921 | + |
3922 | + stop on runlevel [!2345] |
3923 | + |
3924 | + respawn |
3925 | + |
3926 | + |
3927 | + env JUJU_MACHINE_ID="passport" |
3928 | + |
3929 | + env JUJU_ZOOKEEPER="cotswold:2181,longleat:2181" |
3930 | + |
3931 | + |
3932 | + exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log |
3933 | + --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output |
3934 | + 2>&1 |
3935 | + |
3936 | + EOF |
3937 | + |
3938 | + ', /sbin/start juju-machine-agent] |
3939 | ssh_authorized_keys: [chubb] |
3940 | |
3941 | === modified file 'juju/providers/common/tests/data/cloud_init_ppa' |
3942 | --- juju/providers/common/tests/data/cloud_init_ppa 2012-01-09 14:17:21 +0000 |
3943 | +++ juju/providers/common/tests/data/cloud_init_ppa 2012-02-02 16:42:42 +0000 |
3944 | @@ -2,14 +2,36 @@ |
3945 | apt-update: true |
3946 | apt-upgrade: true |
3947 | apt_sources: |
3948 | -- {'source': 'ppa:juju/pkgs'} |
3949 | +- {source: 'ppa:juju/pkgs'} |
3950 | machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'cotswold:2181,longleat:2181', |
3951 | machine-id: passport} |
3952 | output: {all: '| tee -a /var/log/cloud-init-output.log'} |
3953 | -packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, |
3954 | - python-zookeeper] |
3955 | -runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir |
3956 | - -p /var/log/juju, 'JUJU_MACHINE_ID=passport JUJU_ZOOKEEPER=cotswold:2181,longleat:2181 |
3957 | - python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log |
3958 | - --pidfile=/var/run/juju/machine-agent.pid'] |
3959 | +packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper] |
3960 | +runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir -p |
3961 | + /var/log/juju, 'cat >> /etc/init/juju-machine-agent.conf <<EOF |
3962 | + |
3963 | + description "Juju machine agent" |
3964 | + |
3965 | + author "Juju Team <juju@lists.ubuntu.com>" |
3966 | + |
3967 | + |
3968 | + start on runlevel [2345] |
3969 | + |
3970 | + stop on runlevel [!2345] |
3971 | + |
3972 | + respawn |
3973 | + |
3974 | + |
3975 | + env JUJU_MACHINE_ID="passport" |
3976 | + |
3977 | + env JUJU_ZOOKEEPER="cotswold:2181,longleat:2181" |
3978 | + |
3979 | + |
3980 | + exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log |
3981 | + --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output |
3982 | + 2>&1 |
3983 | + |
3984 | + EOF |
3985 | + |
3986 | + ', /sbin/start juju-machine-agent] |
3987 | ssh_authorized_keys: [chubb] |
3988 | |
3989 | === modified file 'juju/providers/common/tests/test_cloudinit.py' |
3990 | --- juju/providers/common/tests/test_cloudinit.py 2011-10-04 21:22:48 +0000 |
3991 | +++ juju/providers/common/tests/test_cloudinit.py 2012-02-02 16:42:42 +0000 |
3992 | @@ -44,8 +44,9 @@ |
3993 | def assert_render(self, cloud_init, name): |
3994 | with open(os.path.join(DATA_DIR, name)) as f: |
3995 | expected = yaml.load(f.read()) |
3996 | - obtained = yaml.load(cloud_init.render()) |
3997 | - self.assertEquals(obtained, expected) |
3998 | + rendered = cloud_init.render() |
3999 | + self.assertTrue(rendered.startswith("#cloud-config")) |
4000 | + self.assertEquals(yaml.load(rendered), expected) |
4001 | |
4002 | def test_render_validate_normal(self): |
4003 | cloud_init = CloudInit() |
4004 | |
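The revised `assert_render` splits the check in two because `yaml.load()` silently drops the `#cloud-config` comment line: the parsed-structure comparison alone would pass even if the header were missing. A small illustration of the distinction (the sample document here is made up; assumes PyYAML is available):

```python
import yaml

# A YAML parser treats "#cloud-config" as a comment, so it never appears
# in the parsed result -- the header must be asserted on the raw string.
rendered = "#cloud-config\napt-update: true\npackages: [bzr]\n"
assert rendered.startswith("#cloud-config")
data = yaml.safe_load(rendered)
```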
4005 | === modified file 'juju/providers/ec2/tests/data/bootstrap_cloud_init' |
4006 | --- juju/providers/ec2/tests/data/bootstrap_cloud_init 2012-01-09 14:17:21 +0000 |
4007 | +++ juju/providers/ec2/tests/data/bootstrap_cloud_init 2012-02-02 16:42:42 +0000 |
4008 | @@ -1,16 +1,61 @@ |
4009 | #cloud-config |
4010 | apt-update: true |
4011 | apt-upgrade: true |
4012 | -machine-data: {juju-provider-type: ec2, juju-zookeeper-hosts: 'localhost:2181', |
4013 | - machine-id: '0'} |
4014 | +machine-data: {juju-provider-type: ec2, juju-zookeeper-hosts: 'localhost:2181', machine-id: '0'} |
4015 | output: {all: '| tee -a /var/log/cloud-init-output.log'} |
4016 | -packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, |
4017 | - python-zookeeper, default-jre-headless, zookeeper, zookeeperd] |
4018 | -runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir |
4019 | - -p /var/log/juju, 'juju-admin initialize --instance-id=$(curl http://169.254.169.254/1.0/meta-data/instance-id) |
4020 | - --admin-identity=admin:JbJ6sDGV37EHzbG9FPvttk64cmg= --provider-type=ec2', 'JUJU_MACHINE_ID=0 JUJU_ZOOKEEPER=localhost:2181 |
4021 | - python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log |
4022 | - --pidfile=/var/run/juju/machine-agent.pid', 'JUJU_ZOOKEEPER=localhost:2181 |
4023 | - python -m juju.agents.provision -n --logfile=/var/log/juju/provision-agent.log |
4024 | - --pidfile=/var/run/juju/provision-agent.pid'] |
4025 | +packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper, |
4026 | + default-jre-headless, zookeeper, zookeeperd] |
4027 | +runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir -p |
4028 | + /var/log/juju, 'juju-admin initialize --instance-id=$(curl http://169.254.169.254/1.0/meta-data/instance-id) |
4029 | + --admin-identity=admin:JbJ6sDGV37EHzbG9FPvttk64cmg= --provider-type=ec2', 'cat |
4030 | + >> /etc/init/juju-machine-agent.conf <<EOF |
4031 | + |
4032 | + description "Juju machine agent" |
4033 | + |
4034 | + author "Juju Team <juju@lists.ubuntu.com>" |
4035 | + |
4036 | + |
4037 | + start on runlevel [2345] |
4038 | + |
4039 | + stop on runlevel [!2345] |
4040 | + |
4041 | + respawn |
4042 | + |
4043 | + |
4044 | + env JUJU_MACHINE_ID="0" |
4045 | + |
4046 | + env JUJU_ZOOKEEPER="localhost:2181" |
4047 | + |
4048 | + |
4049 | + exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log |
4050 | + --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output |
4051 | + 2>&1 |
4052 | + |
4053 | + EOF |
4054 | + |
4055 | + ', /sbin/start juju-machine-agent, 'cat >> /etc/init/juju-provision-agent.conf |
4056 | + <<EOF |
4057 | + |
4058 | + description "Juju provisioning agent" |
4059 | + |
4060 | + author "Juju Team <juju@lists.ubuntu.com>" |
4061 | + |
4062 | + |
4063 | + start on runlevel [2345] |
4064 | + |
4065 | + stop on runlevel [!2345] |
4066 | + |
4067 | + respawn |
4068 | + |
4069 | + |
4070 | + env JUJU_ZOOKEEPER="localhost:2181" |
4071 | + |
4072 | + |
4073 | + exec python -m juju.agents.provision --nodaemon --logfile /var/log/juju/provision-agent.log |
4074 | + --session-file /var/run/juju/provision-agent.zksession >> /tmp/juju-provision-agent.output |
4075 | + 2>&1 |
4076 | + |
4077 | + EOF |
4078 | + |
4079 | + ', /sbin/start juju-provision-agent] |
4080 | ssh_authorized_keys: [zebra] |
4081 | |
4082 | === modified file 'juju/providers/ec2/tests/data/launch_cloud_init' |
4083 | --- juju/providers/ec2/tests/data/launch_cloud_init 2012-01-09 14:17:21 +0000 |
4084 | +++ juju/providers/ec2/tests/data/launch_cloud_init 2012-02-02 16:42:42 +0000 |
4085 | @@ -4,10 +4,32 @@ |
4086 | machine-data: {juju-provider-type: ec2, juju-zookeeper-hosts: 'es.example.internal:2181', |
4087 | machine-id: '1'} |
4088 | output: {all: '| tee -a /var/log/cloud-init-output.log'} |
4089 | -packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, |
4090 | - python-zookeeper] |
4091 | -runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir |
4092 | - -p /var/log/juju, 'JUJU_MACHINE_ID=1 JUJU_ZOOKEEPER=es.example.internal:2181 |
4093 | - python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log |
4094 | - --pidfile=/var/run/juju/machine-agent.pid'] |
4095 | +packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper] |
4096 | +runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir -p |
4097 | + /var/log/juju, 'cat >> /etc/init/juju-machine-agent.conf <<EOF |
4098 | + |
4099 | + description "Juju machine agent" |
4100 | + |
4101 | + author "Juju Team <juju@lists.ubuntu.com>" |
4102 | + |
4103 | + |
4104 | + start on runlevel [2345] |
4105 | + |
4106 | + stop on runlevel [!2345] |
4107 | + |
4108 | + respawn |
4109 | + |
4110 | + |
4111 | + env JUJU_MACHINE_ID="1" |
4112 | + |
4113 | + env JUJU_ZOOKEEPER="es.example.internal:2181" |
4114 | + |
4115 | + |
4116 | + exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log |
4117 | + --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output |
4118 | + 2>&1 |
4119 | + |
4120 | + EOF |
4121 | + |
4122 | + ', /sbin/start juju-machine-agent] |
4123 | ssh_authorized_keys: [zebra] |
4124 | |
4125 | === modified file 'juju/providers/ec2/tests/data/launch_cloud_init_branch' |
4126 | --- juju/providers/ec2/tests/data/launch_cloud_init_branch 2012-01-09 14:17:21 +0000 |
4127 | +++ juju/providers/ec2/tests/data/launch_cloud_init_branch 2012-02-02 16:42:42 +0000 |
4128 | @@ -6,15 +6,34 @@ |
4129 | machine-data: {juju-provider-type: ec2, juju-zookeeper-hosts: 'es.example.internal:2181', |
4130 | machine-id: '1'} |
4131 | output: {all: '| tee -a /var/log/cloud-init-output.log'} |
4132 | -packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, |
4133 | - python-zookeeper] |
4134 | -runcmd: [sudo apt-get install -y python-txzookeeper, |
4135 | - sudo mkdir -p /usr/lib/juju, |
4136 | - 'cd /usr/lib/juju && sudo /usr/bin/bzr co lp:~wizard/juju-juicebar juju', |
4137 | - cd /usr/lib/juju/juju && sudo python setup.py develop, |
4138 | - sudo mkdir -p /var/lib/juju, |
4139 | - sudo mkdir -p /var/log/juju, |
4140 | - 'JUJU_MACHINE_ID=1 JUJU_ZOOKEEPER=es.example.internal:2181 |
4141 | - python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log |
4142 | - --pidfile=/var/run/juju/machine-agent.pid'] |
4143 | +packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper] |
4144 | +runcmd: [sudo apt-get install -y python-txzookeeper, sudo mkdir -p /usr/lib/juju, |
4145 | + 'cd /usr/lib/juju && sudo /usr/bin/bzr co lp:~wizard/juju-juicebar juju', cd /usr/lib/juju/juju |
4146 | + && sudo python setup.py develop, sudo mkdir -p /var/lib/juju, sudo mkdir -p /var/log/juju, |
4147 | + 'cat >> /etc/init/juju-machine-agent.conf <<EOF |
4148 | + |
4149 | + description "Juju machine agent" |
4150 | + |
4151 | + author "Juju Team <juju@lists.ubuntu.com>" |
4152 | + |
4153 | + |
4154 | + start on runlevel [2345] |
4155 | + |
4156 | + stop on runlevel [!2345] |
4157 | + |
4158 | + respawn |
4159 | + |
4160 | + |
4161 | + env JUJU_MACHINE_ID="1" |
4162 | + |
4163 | + env JUJU_ZOOKEEPER="es.example.internal:2181" |
4164 | + |
4165 | + |
4166 | + exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log |
4167 | + --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output |
4168 | + 2>&1 |
4169 | + |
4170 | + EOF |
4171 | + |
4172 | + ', /sbin/start juju-machine-agent] |
4173 | ssh_authorized_keys: [zebra] |
4174 | |
4175 | === modified file 'juju/providers/ec2/tests/data/launch_cloud_init_ppa' |
4176 | --- juju/providers/ec2/tests/data/launch_cloud_init_ppa 2012-01-09 14:17:21 +0000 |
4177 | +++ juju/providers/ec2/tests/data/launch_cloud_init_ppa 2012-02-02 16:42:42 +0000 |
4178 | @@ -6,10 +6,32 @@ |
4179 | machine-data: {juju-provider-type: ec2, juju-zookeeper-hosts: 'es.example.internal:2181', |
4180 | machine-id: '1'} |
4181 | output: {all: '| tee -a /var/log/cloud-init-output.log'} |
4182 | -packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, |
4183 | - python-zookeeper] |
4184 | -runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir |
4185 | - -p /var/log/juju, 'JUJU_MACHINE_ID=1 JUJU_ZOOKEEPER=es.example.internal:2181 |
4186 | - python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log |
4187 | - --pidfile=/var/run/juju/machine-agent.pid'] |
4188 | +packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper] |
4189 | +runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir -p |
4190 | + /var/log/juju, 'cat >> /etc/init/juju-machine-agent.conf <<EOF |
4191 | + |
4192 | + description "Juju machine agent" |
4193 | + |
4194 | + author "Juju Team <juju@lists.ubuntu.com>" |
4195 | + |
4196 | + |
4197 | + start on runlevel [2345] |
4198 | + |
4199 | + stop on runlevel [!2345] |
4200 | + |
4201 | + respawn |
4202 | + |
4203 | + |
4204 | + env JUJU_MACHINE_ID="1" |
4205 | + |
4206 | + env JUJU_ZOOKEEPER="es.example.internal:2181" |
4207 | + |
4208 | + |
4209 | + exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log |
4210 | + --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output |
4211 | + 2>&1 |
4212 | + |
4213 | + EOF |
4214 | + |
4215 | + ', /sbin/start juju-machine-agent] |
4216 | ssh_authorized_keys: [zebra] |
4217 | |
4218 | === modified file 'juju/providers/local/__init__.py' |
4219 | --- juju/providers/local/__init__.py 2011-11-16 13:56:03 +0000 |
4220 | +++ juju/providers/local/__init__.py 2012-02-02 16:42:42 +0000 |
4221 | @@ -102,11 +102,11 @@ |
4222 | # Starting provider storage server |
4223 | log.info("Starting storage server...") |
4224 | storage_server = StorageServer( |
4225 | - pid_file=os.path.join(self._directory, "storage-server.pid"), |
4226 | + self._qualified_name, |
4227 | storage_dir=os.path.join(self._directory, "files"), |
4228 | host=net_attributes["ip"]["address"], |
4229 | port=get_open_port(net_attributes["ip"]["address"]), |
4230 | - log_file=os.path.join(self._directory, "storage-server.log")) |
4231 | + logfile=os.path.join(self._directory, "storage-server.log")) |
4232 | yield storage_server.start() |
4233 | |
4234 | # Save the zookeeper start to provider storage. |
4235 | @@ -130,17 +130,15 @@ |
4236 | raise ProviderError(str(e)) |
4237 | |
4238 | # Startup the machine agent |
4239 | - pid_file = os.path.join(self._directory, "machine-agent.pid") |
4240 | log_file = os.path.join(self._directory, "machine-agent.log") |
4241 | |
4242 | juju_origin = self.config.get("juju-origin") |
4243 | - agent = ManagedMachineAgent(pid_file, |
4244 | + agent = ManagedMachineAgent(self._qualified_name, |
4245 | zookeeper_hosts=zookeeper.address, |
4246 | machine_id="0", |
4247 | juju_directory=self._directory, |
4248 | log_file=log_file, |
4249 | juju_origin=juju_origin, |
4250 | - juju_unit_namespace=self._qualified_name, |
4251 | public_key=public_key) |
4252 | log.info( |
4253 | "Starting machine agent (origin: %s)... ", agent.juju_origin) |
4254 | @@ -158,14 +156,12 @@ |
4255 | |
4256 | # Stop the machine agent |
4257 | log.debug("Stopping machine agent...") |
4258 | - pid_file = os.path.join(self._directory, "machine-agent.pid") |
4259 | - agent = ManagedMachineAgent(pid_file) |
4260 | + agent = ManagedMachineAgent(self._qualified_name) |
4261 | yield agent.stop() |
4262 | |
4263 | # Stop the storage server |
4264 | log.debug("Stopping storage server...") |
4265 | - pid_file = os.path.join(self._directory, "storage-server.pid") |
4266 | - storage_server = StorageServer(pid_file) |
4267 | + storage_server = StorageServer(self._qualified_name) |
4268 | yield storage_server.stop() |
4269 | |
4270 | # Stop zookeeper |
4271 | |
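The refactor above stops threading pid-file paths through the local provider and instead keys each managed service on `self._qualified_name`. Based on the service names visible in the diff (`juju-%s-machine-agent`, `juju-%s-file-storage`), the namespace disambiguates the Upstart jobs of concurrent local environments — a sketch (helper name illustrative):

```python
# Each local environment's services become distinctly named Upstart jobs
# under /etc/init, keyed on the environment's qualified name.
def service_names(juju_unit_namespace):
    return {
        "machine-agent": "juju-%s-machine-agent" % juju_unit_namespace,
        "file-storage": "juju-%s-file-storage" % juju_unit_namespace,
    }

names = service_names("user-myenv")
```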
4272 | === modified file 'juju/providers/local/agent.py' |
4273 | --- juju/providers/local/agent.py 2011-10-05 12:14:41 +0000 |
4274 | +++ juju/providers/local/agent.py 2012-02-02 16:42:42 +0000 |
4275 | @@ -1,12 +1,6 @@ |
4276 | -import errno |
4277 | -import os |
4278 | -import pipes |
4279 | -import subprocess |
4280 | import sys |
4281 | |
4282 | -from twisted.internet.defer import inlineCallbacks, returnValue |
4283 | -from twisted.internet.threads import deferToThread |
4284 | - |
4285 | +from juju.lib.upstart import UpstartService |
4286 | from juju.providers.common.cloudinit import get_default_origin, BRANCH |
4287 | |
4288 | |
4289 | @@ -15,11 +9,10 @@ |
4290 | agent_module = "juju.agents.machine" |
4291 | |
4292 | def __init__( |
4293 | - self, pid_file, zookeeper_hosts=None, machine_id="0", |
4294 | - log_file=None, juju_directory="/var/lib/juju", |
4295 | - juju_unit_namespace="", public_key=None, juju_origin="ppa"): |
4296 | + self, juju_unit_namespace, zookeeper_hosts=None, |
4297 | + machine_id="0", log_file=None, juju_directory="/var/lib/juju", |
4298 | + public_key=None, juju_origin="ppa"): |
4299 | """ |
4300 | - :param pid_file: Path to file used to store process id. |
4301 | :param machine_id: machine id for the local machine. |
4302 | :param zookeeper_hosts: Zookeeper hosts to connect. |
4303 | :param log_file: A file to use for the agent logs. |
4304 | @@ -31,95 +24,46 @@ |
4305 | :param public_key: An SSH public key (string) that will be |
4306 | used in the container for access. |
4307 | """ |
4308 | - self._pid_file = pid_file |
4309 | - self._machine_id = machine_id |
4310 | - self._zookeeper_hosts = zookeeper_hosts |
4311 | - self._juju_directory = juju_directory |
4312 | - self._juju_unit_namespace = juju_unit_namespace |
4313 | - self._log_file = log_file |
4314 | - self._public_key = public_key |
4315 | self._juju_origin = juju_origin |
4316 | - |
4317 | if self._juju_origin is None: |
4318 | origin, source = get_default_origin() |
4319 | if origin == BRANCH: |
4320 | origin = source |
4321 | self._juju_origin = origin |
4322 | |
4323 | + env = {"JUJU_MACHINE_ID": machine_id, |
4324 | + "JUJU_ZOOKEEPER": zookeeper_hosts, |
4325 | + "JUJU_HOME": juju_directory, |
4326 | + "JUJU_ORIGIN": self._juju_origin, |
4327 | + "JUJU_UNIT_NS": juju_unit_namespace, |
4328 | + "PYTHONPATH": ":".join(sys.path)} |
4329 | + if public_key: |
4330 | + env["JUJU_PUBLIC_KEY"] = public_key |
4331 | + |
4332 | + self._service = UpstartService( |
4333 | + "juju-%s-machine-agent" % juju_unit_namespace, use_sudo=True) |
4334 | + self._service.set_description( |
4335 | + "Juju machine agent for %s" % juju_unit_namespace) |
4336 | + self._service.set_environ(env) |
4337 | + self._service_args = [ |
4338 | + "/usr/bin/python", "-m", self.agent_module, |
4339 | + "--nodaemon", "--logfile", log_file, |
4340 | + "--session-file", |
4341 | + "/var/run/juju/%s-machine-agent.zksession" % juju_unit_namespace] |
4342 | + |
4343 | @property |
4344 | def juju_origin(self): |
4345 | return self._juju_origin |
4346 | |
4347 | - @inlineCallbacks |
4348 | def start(self): |
4349 | - """Start the machine agent. |
4350 | - """ |
4351 | - assert self._zookeeper_hosts and self._log_file |
4352 | - |
4353 | - if (yield self.is_running()): |
4354 | - return |
4355 | - |
4356 | - # sudo even with -E will strip pythonpath, so pass it directly |
4357 | - # to the command. |
4358 | - args = ["sudo", |
4359 | - "JUJU_ZOOKEEPER=%s" % self._zookeeper_hosts, |
4360 | - "JUJU_ORIGIN=%s" % self._juju_origin, |
4361 | - "JUJU_MACHINE_ID=%s" % self._machine_id, |
4362 | - "JUJU_HOME=%s" % self._juju_directory, |
4363 | - "JUJU_UNIT_NS=%s" % self._juju_unit_namespace, |
4364 | - "PYTHONPATH=%s" % ":".join(sys.path), |
4365 | - sys.executable, "-m", self.agent_module, |
4366 | - "-n", "--pidfile", self._pid_file, |
4367 | - "--logfile", self._log_file] |
4368 | - |
4369 | - if self._public_key: |
4370 | - args.insert( |
4371 | - 1, "JUJU_PUBLIC_KEY=%s" % pipes.quote(self._public_key)) |
4372 | - |
4373 | - yield deferToThread(subprocess.check_call, args) |
4374 | - |
4375 | - @inlineCallbacks |
4376 | + """Start the machine agent.""" |
4377 | + self._service.set_command(" ".join(self._service_args)) |
4378 | + return self._service.start() |
4379 | + |
4380 | def stop(self): |
4381 | - """Stop the machine agent. |
4382 | - """ |
4383 | - pid = yield self._get_pid() |
4384 | - if pid is None: |
4385 | - return |
4386 | - |
4387 | - # Verify the cmdline before attempting to kill. |
4388 | - try: |
4389 | - with open("/proc/%s/cmdline" % pid) as cmd_file: |
4390 | - cmdline = cmd_file.read() |
4391 | - if self.agent_module not in cmdline: |
4392 | - raise RuntimeError("Mismatch cmdline") |
4393 | - except IOError, e: |
4394 | - # Process already died. |
4395 | - if e.errno == errno.ENOENT: |
4396 | - return |
4397 | - |
4398 | - yield deferToThread( |
4399 | - subprocess.check_call, ["sudo", "kill", str(pid)]) |
4400 | - |
4401 | - @inlineCallbacks |
4402 | - def _get_pid(self): |
4403 | - """Return the agent process id or None. |
4404 | - """ |
4405 | - # Default root pidfile mask is restrictive |
4406 | - try: |
4407 | - pid = yield deferToThread( |
4408 | - subprocess.check_output, |
4409 | - ["sudo", "cat", self._pid_file], |
4410 | - stderr=subprocess.STDOUT) |
4411 | - except subprocess.CalledProcessError: |
4412 | - return |
4413 | - if not pid: |
4414 | - return |
4415 | - returnValue(int(pid.strip())) |
4416 | - |
4417 | - @inlineCallbacks |
4418 | + """Stop the machine agent.""" |
4419 | + return self._service.destroy() |
4420 | + |
4421 | def is_running(self): |
4422 | """Boolean value, true if the machine agent is running.""" |
4423 | - pid = yield self._get_pid() |
4424 | - if pid is None: |
4425 | - returnValue(False) |
4426 | - returnValue(os.path.isdir("/proc/%s" % pid)) |
4427 | + return self._service.is_running() |
4428 | |
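`ManagedMachineAgent` now delegates process supervision to `UpstartService`: `start()` just sets the command and starts the job, and `stop()`/`is_running()` forward directly. A reconstruction of the agent command assembled in `__init__` above (the namespace and log path values are examples):

```python
# Reconstructs the machine-agent exec line built in ManagedMachineAgent
# above; --session-file replaces the old --pidfile since Upstart, not a
# pid file, now tracks the process.
namespace = "test-ns"
log_file = "/var/log/juju/machine-agent.log"
args = [
    "/usr/bin/python", "-m", "juju.agents.machine",
    "--nodaemon", "--logfile", log_file,
    "--session-file",
    "/var/run/juju/%s-machine-agent.zksession" % namespace]
command = " ".join(args)
```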
4429 | === modified file 'juju/providers/local/files.py' |
4430 | --- juju/providers/local/files.py 2011-10-07 18:19:58 +0000 |
4431 | +++ juju/providers/local/files.py 2012-02-02 16:42:42 +0000 |
4432 | @@ -1,13 +1,14 @@ |
4433 | -import errno |
4434 | +from getpass import getuser |
4435 | import os |
4436 | -import signal |
4437 | from StringIO import StringIO |
4438 | -import subprocess |
4439 | import yaml |
4440 | |
4441 | from twisted.internet.defer import inlineCallbacks, returnValue |
4442 | +from twisted.internet.error import ConnectionRefusedError |
4443 | +from twisted.web.client import getPage |
4444 | |
4445 | from juju.errors import ProviderError, FileNotFound |
4446 | +from juju.lib.upstart import UpstartService |
4447 | from juju.providers.common.files import FileStorage |
4448 | |
4449 | |
4450 | @@ -16,22 +17,46 @@ |
4451 | |
4452 | class StorageServer(object): |
4453 | |
4454 | - def __init__( |
4455 | - self, pid_file, storage_dir=None, host=None, port=None, log_file=None): |
4456 | + def __init__(self, juju_unit_namespace, storage_dir=None, |
4457 | + host=None, port=None, logfile=None): |
4458 | """Management facade for a web server on top of the provider storage. |
4459 | |
4460 | - :param pid_file: Path to the web server pid file. |
4461 | + :param juju_unit_namespace: For disambiguation. |
4462 | :param host: Host interface to bind to. |
4463 | :param port: Port to bind to. |
4464 | - :param log_file: Path to store log output. |
4465 | + :param logfile: Path to store log output. |
4466 | """ |
4467 | if storage_dir: |
4468 | storage_dir = os.path.abspath(storage_dir) |
4469 | self._storage_dir = storage_dir |
4470 | self._host = host |
4471 | self._port = port |
4472 | - self._pid_file = pid_file |
4473 | - self._log_file = log_file |
4474 | + self._logfile = logfile |
4475 | + |
4476 | + self._service = UpstartService( |
4477 | + "juju-%s-file-storage" % juju_unit_namespace, use_sudo=True) |
4478 | + self._service.set_description( |
4479 | + "Juju file storage for %s" % juju_unit_namespace) |
4480 | + self._service_args = [ |
4481 | + "twistd", |
4482 | + "--nodaemon", |
4483 | + "--uid", str(os.getuid()), |
4484 | + "--gid", str(os.getgid()), |
4485 | + "--logfile", logfile, |
4486 | + "--pidfile=", |
4487 | + "-d", self._storage_dir, |
4488 | + "web", |
4489 | + "--port", "tcp:%s:interface=%s" % (self._port, self._host), |
4490 | + "--path", self._storage_dir] |
4491 | + |
4492 | + @inlineCallbacks |
4493 | + def is_serving(self): |
4494 | + try: |
4495 | + storage = LocalStorage(self._storage_dir) |
4496 | + yield getPage((yield storage.get_url(SERVER_URL_KEY))) |
4497 | + returnValue(True) |
4498 | + except ConnectionRefusedError: |
4499 | + returnValue(False) |
4500 | |
4501 | @inlineCallbacks |
4502 | def start(self): |
4503 | @@ -39,44 +64,33 @@ |
4504 | |
4505 | Also stores the storage server url directly into provider storage. |
4506 | """ |
4507 | - assert (self._storage_dir and self._host |
4508 | - and self._port and self._log_file), "Missing start params." |
4509 | + assert self._storage_dir, "no storage_dir set" |
4510 | + assert self._host, "no host set" |
4511 | + assert self._port, "no port set" |
4512 | + assert None not in self._service_args, "unset params" |
4513 | assert os.path.exists(self._storage_dir), "Invalid storage directory" |
4514 | + try: |
4515 | + with open(self._logfile, "a"): |
4516 | + pass |
4517 | + except IOError: |
4518 | + raise AssertionError("logfile not writable by this user") |
4519 | + |
4520 | |
4521 | storage = LocalStorage(self._storage_dir) |
4522 | yield storage.put( |
4523 | SERVER_URL_KEY, |
4524 | StringIO(yaml.safe_dump( |
4525 | - {"storage-url": "http://%s:%s/" % ( |
4526 | - self._host, self._port)}))) |
4527 | - |
4528 | - subprocess.check_output( |
4529 | - ["twistd", |
4530 | - "--pidfile", self._pid_file, |
4531 | - "--logfile", self._log_file, |
4532 | - "-d", self._storage_dir, |
4533 | - "web", "--port", |
4534 | - "tcp:%s:interface=%s" % (self._port, self._host), |
4535 | - "--path", self._storage_dir]) |
4536 | + {"storage-url": "http://%s:%s/" % (self._host, self._port)}))) |
4537 | + |
4538 | + self._service.set_command(" ".join(self._service_args)) |
4539 | + yield self._service.start() |
4540 | + |
4541 | + def get_pid(self): |
4542 | + return self._service.get_pid() |
4543 | |
4544 | def stop(self): |
4545 | - """Stop the storage server. |
4546 | - """ |
4547 | - try: |
4548 | - with open(self._pid_file) as pid_file: |
4549 | - pid = int(pid_file.read().strip()) |
4550 | - except IOError: |
4551 | - # No pid, move on |
4552 | - return |
4553 | - |
4554 | - try: |
4555 | - os.kill(pid, 0) |
4556 | - except OSError, e: |
4557 | - if e.errno == errno.ESRCH: # No such process, already dead. |
4558 | - return |
4559 | - raise |
4560 | - |
4561 | - os.kill(pid, signal.SIGKILL) |
4562 | + """Stop the storage server.""" |
4563 | + return self._service.destroy() |
4564 | |
4565 | |
4566 | class LocalStorage(FileStorage): |
4567 | |
4568 | === modified file 'juju/providers/local/tests/test_agent.py' |
4569 | --- juju/providers/local/tests/test_agent.py 2011-10-05 12:14:41 +0000 |
4570 | +++ juju/providers/local/tests/test_agent.py 2012-02-02 16:42:42 +0000 |
4571 | @@ -1,10 +1,13 @@ |
4572 | import os |
4573 | import tempfile |
4574 | import subprocess |
4575 | +import sys |
4576 | + |
4577 | from twisted.internet.defer import inlineCallbacks, succeed |
4578 | |
4579 | +from juju.lib.lxc.tests.test_lxc import uses_sudo |
4580 | from juju.lib.testing import TestCase |
4581 | -from juju.lib.lxc.tests.test_lxc import run_lxc_tests |
4582 | +from juju.lib.upstart import UpstartService |
4583 | from juju.tests.common import get_test_zookeeper_address |
4584 | from juju.providers.local.agent import ManagedMachineAgent |
4585 | |
4586 | @@ -12,61 +15,93 @@ |
4587 | class ManagedAgentTest(TestCase): |
4588 | |
4589 | @inlineCallbacks |
4590 | - def test_managed_agent_args(self): |
4591 | - |
4592 | - captured_args = [] |
4593 | - |
4594 | - def intercept_args(args): |
4595 | - captured_args.extend(args) |
4596 | - return True |
4597 | - |
4598 | - self.patch(subprocess, "check_call", intercept_args) |
4599 | + def test_managed_agent_config(self): |
4600 | + subprocess_calls = [] |
4601 | + |
4602 | + def intercept_args(args, **kwargs): |
4603 | + subprocess_calls.append(args) |
4604 | + self.assertEquals(args[0], "sudo") |
4605 | + if args[1] == "cp": |
4606 | + return real_check_call(args[1:], **kwargs) |
4607 | + return 0 |
4608 | + |
4609 | + real_check_call = self.patch(subprocess, "check_call", intercept_args) |
4610 | + init_dir = self.makeDir() |
4611 | + self.patch(UpstartService, "init_dir", init_dir) |
4612 | + |
4613 | + # Mock out the repeated checking for unstable pid, after an initial |
4614 | + # stop/waiting to induce the actual start |
4615 | + getProcessOutput = self.mocker.replace( |
4616 | + "twisted.internet.utils.getProcessOutput") |
4617 | + getProcessOutput("/sbin/status", ["juju-ns1-machine-agent"]) |
4618 | + self.mocker.result(succeed("stop/waiting")) |
4619 | + for _ in range(5): |
4620 | + getProcessOutput("/sbin/status", ["juju-ns1-machine-agent"]) |
4621 | + self.mocker.result(succeed("start/running 123")) |
4622 | + self.mocker.replay() |
4623 | |
4624 | juju_directory = self.makeDir() |
4625 | - pid_file = self.makeFile() |
4626 | log_file = self.makeFile() |
4627 | - |
4628 | agent = ManagedMachineAgent( |
4629 | - pid_file, get_test_zookeeper_address(), |
4630 | + "ns1", |
4631 | + get_test_zookeeper_address(), |
4632 | juju_directory=juju_directory, |
4633 | - log_file=log_file, juju_unit_namespace="ns1", |
4634 | + log_file=log_file, |
4635 | juju_origin="lp:juju/trunk") |
4636 | |
4637 | - mock_agent = self.mocker.patch(agent) |
4638 | - mock_agent.is_running() |
4639 | - self.mocker.result(succeed(False)) |
4640 | - self.mocker.replay() |
4641 | + try: |
4642 | + os.remove("/tmp/juju-ns1-machine-agent.output") |
4643 | + except OSError: |
4644 | + pass # just make sure it's not there, so the .start() |
4645 | + # doesn't insert a spurious rm |
4646 | |
4647 | - self.assertEqual(agent.juju_origin, "lp:juju/trunk") |
4648 | yield agent.start() |
4649 | |
4650 | - # Verify machine agent environment |
4651 | - env_vars = dict( |
4652 | - [arg.split("=") for arg in captured_args if "=" in arg]) |
4653 | - env_vars.pop("PYTHONPATH") |
4654 | - self.assertEqual( |
4655 | - env_vars, |
4656 | - dict(JUJU_ZOOKEEPER=get_test_zookeeper_address(), |
4657 | - JUJU_MACHINE_ID="0", |
4658 | - JUJU_HOME=juju_directory, |
4659 | - JUJU_ORIGIN="lp:juju/trunk", |
4660 | - JUJU_UNIT_NS="ns1")) |
4661 | - |
4662 | + conf_dest = os.path.join( |
4663 | + init_dir, "juju-ns1-machine-agent.conf") |
4664 | + chmod, start = subprocess_calls[1:] |
4665 | + self.assertEquals(chmod, ("sudo", "chmod", "644", conf_dest)) |
4666 | + self.assertEquals( |
4667 | + start, ("sudo", "/sbin/start", "juju-ns1-machine-agent")) |
4668 | + |
4669 | + env = [] |
4670 | + with open(conf_dest) as f: |
4671 | + for line in f: |
4672 | + if line.startswith("env"): |
4673 | + env.append(line[4:-1].split("=", 1)) |
4674 | + if line.startswith("exec"): |
4675 | + exec_ = line[5:-1] |
4676 | + |
4677 | + expect_exec = ( |
4678 | + "/usr/bin/python -m juju.agents.machine --nodaemon --logfile %s " |
4679 | + "--session-file /var/run/juju/ns1-machine-agent.zksession " |
4680 | + ">> /tmp/juju-ns1-machine-agent.output 2>&1" |
4681 | + % log_file) |
4682 | + self.assertEquals(exec_, expect_exec) |
4683 | + |
4684 | + env = dict((k, v.strip('"')) for (k, v) in env) |
4685 | + self.assertEquals(env, { |
4686 | + "JUJU_ZOOKEEPER": get_test_zookeeper_address(), |
4687 | + "JUJU_MACHINE_ID": "0", |
4688 | + "JUJU_HOME": juju_directory, |
4689 | + "JUJU_ORIGIN": "lp:juju/trunk", |
4690 | + "JUJU_UNIT_NS": "ns1", |
4691 | + "PYTHONPATH": ":".join(sys.path)}) |
4692 | + |
4693 | + @uses_sudo |
4694 | @inlineCallbacks |
4695 | def test_managed_agent_root(self): |
4696 | juju_directory = self.makeDir() |
4697 | - pid_file = tempfile.mktemp() |
4698 | log_file = tempfile.mktemp() |
4699 | |
4700 | # The pid file and log file get written as root |
4701 | def cleanup_root_file(cleanup_file): |
4702 | subprocess.check_call( |
4703 | ["sudo", "rm", "-f", cleanup_file], stderr=subprocess.STDOUT) |
4704 | - self.addCleanup(cleanup_root_file, pid_file) |
4705 | self.addCleanup(cleanup_root_file, log_file) |
4706 | |
4707 | agent = ManagedMachineAgent( |
4708 | - pid_file, machine_id="0", log_file=log_file, |
4709 | + "test-ns", machine_id="0", log_file=log_file, |
4710 | zookeeper_hosts=get_test_zookeeper_address(), |
4711 | juju_directory=juju_directory) |
4712 | |
4713 | @@ -85,13 +120,3 @@ |
4714 | |
4715 | # running stop again is fine, detects the process is stopped. |
4716 | yield agent.stop() |
4717 | - |
4718 | - self.assertFalse(os.path.exists(pid_file)) |
4719 | - |
4720 | - # Stop raises runtime error if the process doesn't match up. |
4721 | - with open(pid_file, "w") as pid_handle: |
4722 | - pid_handle.write("1") |
4723 | - self.assertFailure(agent.stop(), RuntimeError) |
4724 | - |
4725 | - # Reuse the lxc flag for tests needing sudo |
4726 | - test_managed_agent_root.skip = run_lxc_tests() |
4727 | |
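The rewritten test above verifies the generated upstart job by scanning the `.conf` file for `env` and `exec` stanzas. That scanning logic could be factored into a small helper; the following is a sketch of such a parser (the helper name is hypothetical, not part of this branch):

```python
def parse_upstart_conf(text):
    """Extract env vars and the exec command from upstart job text."""
    env = {}
    command = None
    for line in text.splitlines():
        if line.startswith("env "):
            # stanza form: env KEY="value"
            key, _, value = line[4:].partition("=")
            env[key] = value.strip('"')
        elif line.startswith("exec "):
            command = line[5:].strip()
    return env, command
```

For example, `parse_upstart_conf('env JUJU_UNIT_NS="ns1"\nexec python -m juju.agents.machine')` yields `({"JUJU_UNIT_NS": "ns1"}, "python -m juju.agents.machine")`.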
4728 | === modified file 'juju/providers/local/tests/test_files.py' |
4729 | --- juju/providers/local/tests/test_files.py 2011-10-07 18:19:58 +0000 |
4730 | +++ juju/providers/local/tests/test_files.py 2012-02-02 16:42:42 +0000 |
4731 | @@ -1,14 +1,16 @@ |
4732 | import os |
4733 | +import signal |
4734 | from StringIO import StringIO |
4735 | +import subprocess |
4736 | import yaml |
4737 | |
4738 | -from twisted.internet.defer import inlineCallbacks |
4739 | +from twisted.internet.defer import inlineCallbacks, succeed |
4740 | from twisted.web.client import getPage |
4741 | |
4742 | +from juju.errors import ProviderError, ServiceError |
4743 | +from juju.lib.lxc.tests.test_lxc import uses_sudo |
4744 | from juju.lib.testing import TestCase |
4745 | - |
4746 | - |
4747 | -from juju.errors import ProviderError |
4748 | +from juju.lib.upstart import UpstartService |
4749 | from juju.providers.local.files import ( |
4750 | LocalStorage, StorageServer, SERVER_URL_KEY) |
4751 | from juju.state.utils import get_open_port |
4752 | @@ -16,38 +18,151 @@ |
4753 | |
4754 | class WebFileStorageTest(TestCase): |
4755 | |
4756 | + @inlineCallbacks |
4757 | def setUp(self): |
4758 | + yield super(WebFileStorageTest, self).setUp() |
4759 | self._storage_path = self.makeDir() |
4760 | + self._logfile = self.makeFile() |
4761 | self._storage = LocalStorage(self._storage_path) |
4762 | - self._log_path = self.makeFile() |
4763 | - self._pid_path = self.makeFile() |
4764 | self._port = get_open_port() |
4765 | self._server = StorageServer( |
4766 | - self._pid_path, self._storage_path, "localhost", |
4767 | - get_open_port(), self._log_path) |
4768 | + "ns1", self._storage_path, "localhost", self._port, self._logfile) |
4769 | |
4770 | @inlineCallbacks |
4771 | - def test_start_stop(self): |
4772 | - yield self._storage.put("abc", StringIO("hello world")) |
4773 | - yield self._server.start() |
4774 | - storage_url = yield self._storage.get_url("abc") |
4775 | - contents = yield getPage(storage_url) |
4776 | - self.assertEqual("hello world", contents) |
4777 | - self._server.stop() |
4778 | - # Stopping multiple times is fine. |
4779 | - self._server.stop() |
4780 | + def wait_for_server(self, server): |
4781 | + while not (yield server.is_serving()): |
4782 | + yield self.sleep(0.1) |
4783 | |
4784 | def test_start_missing_args(self): |
4785 | - server = StorageServer(self._pid_path) |
4786 | + server = StorageServer("ns1", self._storage_path) |
4787 | return self.assertFailure(server.start(), AssertionError) |
4788 | |
4789 | def test_start_invalid_directory(self): |
4790 | os.rmdir(self._storage_path) |
4791 | return self.assertFailure(self._server.start(), AssertionError) |
4792 | |
4793 | - def test_stop_missing_pid(self): |
4794 | - server = StorageServer(self._pid_path) |
4795 | - server.stop() |
4796 | + @inlineCallbacks |
4797 | + def test_upstart(self): |
4798 | + subprocess_calls = [] |
4799 | + |
4800 | + def intercept_args(args, **kwargs): |
4801 | + subprocess_calls.append(args) |
4802 | + self.assertEquals(args[0], "sudo") |
4803 | + if args[1] == "cp": |
4804 | + return real_check_call(args[1:], **kwargs) |
4805 | + return 0 |
4806 | + |
4807 | + real_check_call = self.patch(subprocess, "check_call", intercept_args) |
4808 | + init_dir = self.makeDir() |
4809 | + self.patch(UpstartService, "init_dir", init_dir) |
4810 | + |
4811 | + # Mock out the repeated checking for unstable pid, after an initial |
4812 | + # stop/waiting to induce the actual start |
4813 | + getProcessOutput = self.mocker.replace( |
4814 | + "twisted.internet.utils.getProcessOutput") |
4815 | + getProcessOutput("/sbin/status", ["juju-ns1-file-storage"]) |
4816 | + self.mocker.result(succeed("stop/waiting")) |
4817 | + for _ in range(5): |
4818 | + getProcessOutput("/sbin/status", ["juju-ns1-file-storage"]) |
4819 | + self.mocker.result(succeed("start/running 123")) |
4820 | + self.mocker.replay() |
4821 | + |
4822 | + try: |
4823 | + os.remove("/tmp/juju-ns1-file-storage.output") |
4824 | + except OSError: |
4825 | + pass # just make sure it's not there, so the .start() |
4826 | + # doesn't insert a spurious rm |
4827 | + |
4828 | + yield self._server.start() |
4829 | + chmod = subprocess_calls[1] |
4830 | + conf_dest = os.path.join(init_dir, "juju-ns1-file-storage.conf") |
4831 | + self.assertEquals(chmod, ("sudo", "chmod", "644", conf_dest)) |
4832 | + start = subprocess_calls[-1] |
4833 | + self.assertEquals( |
4834 | + start, ("sudo", "/sbin/start", "juju-ns1-file-storage")) |
4835 | + |
4836 | + with open(conf_dest) as f: |
4837 | + for line in f: |
4838 | + if line.startswith("env"): |
4839 | + self.fail("didn't expect any special environment") |
4840 | + if line.startswith("exec"): |
4841 | + exec_ = line[5:].strip() |
4842 | + |
4843 | + expect_exec = ( |
4844 | + "twistd --nodaemon --uid %s --gid %s --logfile %s --pidfile= -d " |
4845 | + "%s web --port tcp:%s:interface=localhost --path %s >> " |
4846 | + "/tmp/juju-ns1-file-storage.output 2>&1" |
4847 | + % (os.getuid(), os.getgid(), self._logfile, self._storage_path, |
4848 | + self._port, self._storage_path)) |
4849 | + self.assertEquals(exec_, expect_exec) |
4850 | + |
4851 | + @uses_sudo |
4852 | + @inlineCallbacks |
4853 | + def test_start_stop(self): |
4854 | + yield self._storage.put("abc", StringIO("hello world")) |
4855 | + yield self._server.start() |
4856 | + # Starting multiple times is fine. |
4857 | + yield self._server.start() |
4858 | + storage_url = yield self._storage.get_url("abc") |
4859 | + |
4860 | + # It might not have started actually accepting connections yet... |
4861 | + yield self.wait_for_server(self._server) |
4862 | + self.assertEqual((yield getPage(storage_url)), "hello world") |
4863 | + |
4864 | + # Check that it can be killed by the current user (ie, is not running |
4865 | + # as root) and still comes back up |
4866 | + old_pid = yield self._server.get_pid() |
4867 | + os.kill(old_pid, signal.SIGKILL) |
4868 | + new_pid = yield self._server.get_pid() |
4869 | + self.assertNotEquals(old_pid, new_pid) |
4870 | + |
4871 | + # Give it a moment to actually start serving again |
4872 | + yield self.wait_for_server(self._server) |
4873 | + self.assertEqual((yield getPage(storage_url)), "hello world") |
4874 | + |
4875 | + yield self._server.stop() |
4876 | + # Stopping multiple times is fine too. |
4877 | + yield self._server.stop() |
4878 | + |
4879 | + @uses_sudo |
4880 | + @inlineCallbacks |
4881 | + def test_namespacing(self): |
4882 | + alt_storage_path = self.makeDir() |
4883 | + alt_storage = LocalStorage(alt_storage_path) |
4884 | + yield alt_storage.put("some-path", StringIO("alternative")) |
4885 | + yield self._storage.put("some-path", StringIO("original")) |
4886 | + |
4887 | + alt_server = StorageServer( |
4888 | + "ns2", alt_storage_path, "localhost", get_open_port(), |
4889 | + self.makeFile()) |
4890 | + yield alt_server.start() |
4891 | + yield self._server.start() |
4892 | + yield self.wait_for_server(alt_server) |
4893 | + yield self.wait_for_server(self._server) |
4894 | + |
4895 | + alt_contents = yield getPage( |
4896 | + (yield alt_storage.get_url("some-path"))) |
4897 | + self.assertEquals(alt_contents, "alternative") |
4898 | + orig_contents = yield getPage( |
4899 | + (yield self._storage.get_url("some-path"))) |
4900 | + self.assertEquals(orig_contents, "original") |
4901 | + |
4902 | + yield alt_server.stop() |
4903 | + yield self._server.stop() |
4904 | + |
4905 | + @uses_sudo |
4906 | + @inlineCallbacks |
4907 | + def test_capture_errors(self): |
4908 | + self._port = get_open_port() |
4909 | + self._server = StorageServer( |
4910 | + "borken", self._storage_path, "lol borken", self._port, |
4911 | + self._logfile) |
4912 | + d = self._server.start() |
4913 | + e = yield self.assertFailure(d, ServiceError) |
4914 | + self.assertTrue(str(e).startswith( |
4915 | + "Failed to start job juju-borken-file-storage; got output:\n")) |
4916 | + self.assertIn("Wrong number of arguments", str(e)) |
4917 | + yield self._server.stop() |
4918 | |
4919 | |
4920 | class FileStorageTest(TestCase): |
4921 | |
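Several of the tests above use `wait_for_server` to poll `is_serving()` until the twistd process actually accepts connections, since `/sbin/start` returns before the port is bound. The tests do this with deferreds; a synchronous sketch of the same poll-until-ready idiom, with a timeout guard added (names hypothetical), looks like:

```python
import time


def wait_until(predicate, timeout=10.0, interval=0.1):
    """Poll predicate() until it returns True or the timeout expires.

    Returns True if the predicate succeeded, False on timeout.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False
```

A timeout is worth having in test code like this: without one, a server that never comes up turns a failing test into a hung test run.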
4922 | === modified file 'juju/providers/orchestra/tests/data/bootstrap_user_data' |
4923 | --- juju/providers/orchestra/tests/data/bootstrap_user_data 2012-01-09 14:17:21 +0000 |
4924 | +++ juju/providers/orchestra/tests/data/bootstrap_user_data 2012-02-02 16:42:42 +0000 |
4925 | @@ -1,15 +1,60 @@ |
4926 | -#cloud-config |
4927 | apt-update: true |
4928 | apt-upgrade: true |
4929 | machine-data: {juju-provider-type: orchestra, juju-zookeeper-hosts: 'localhost:2181', |
4930 | machine-id: '0'} |
4931 | output: {all: '| tee -a /var/log/cloud-init-output.log'} |
4932 | -packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, |
4933 | - python-zookeeper, default-jre-headless, zookeeper, zookeeperd] |
4934 | +packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper, |
4935 | + default-jre-headless, zookeeper, zookeeperd] |
4936 | runcmd: [sudo apt-get -y install juju, sudo mkdir -p /var/lib/juju, sudo mkdir -p |
4937 | - /var/log/juju, 'juju-admin initialize --instance-id=winston-uid --admin-identity=admin:qRBXC1ubEEUqRL6wcBhgmc9xkaY= --provider-type=orchestra', |
4938 | - 'JUJU_MACHINE_ID=0 JUJU_ZOOKEEPER=localhost:2181 python -m juju.agents.machine -n |
4939 | - --logfile=/var/log/juju/machine-agent.log --pidfile=/var/run/juju/machine-agent.pid', |
4940 | - 'JUJU_ZOOKEEPER=localhost:2181 python -m juju.agents.provision -n --logfile=/var/log/juju/provision-agent.log |
4941 | - --pidfile=/var/run/juju/provision-agent.pid'] |
4942 | + /var/log/juju, 'juju-admin initialize --instance-id=winston-uid --admin-identity=admin:qRBXC1ubEEUqRL6wcBhgmc9xkaY= |
4943 | + --provider-type=orchestra', 'cat >> /etc/init/juju-machine-agent.conf <<EOF |
4944 | + |
4945 | + description "Juju machine agent" |
4946 | + |
4947 | + author "Juju Team <juju@lists.ubuntu.com>" |
4948 | + |
4949 | + |
4950 | + start on runlevel [2345] |
4951 | + |
4952 | + stop on runlevel [!2345] |
4953 | + |
4954 | + respawn |
4955 | + |
4956 | + |
4957 | + env JUJU_MACHINE_ID="0" |
4958 | + |
4959 | + env JUJU_ZOOKEEPER="localhost:2181" |
4960 | + |
4961 | + |
4962 | + exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log |
4963 | + --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output |
4964 | + 2>&1 |
4965 | + |
4966 | + EOF |
4967 | + |
4968 | + ', /sbin/start juju-machine-agent, 'cat >> /etc/init/juju-provision-agent.conf |
4969 | + <<EOF |
4970 | + |
4971 | + description "Juju provisioning agent" |
4972 | + |
4973 | + author "Juju Team <juju@lists.ubuntu.com>" |
4974 | + |
4975 | + |
4976 | + start on runlevel [2345] |
4977 | + |
4978 | + stop on runlevel [!2345] |
4979 | + |
4980 | + respawn |
4981 | + |
4982 | + |
4983 | + env JUJU_ZOOKEEPER="localhost:2181" |
4984 | + |
4985 | + |
4986 | + exec python -m juju.agents.provision --nodaemon --logfile /var/log/juju/provision-agent.log |
4987 | + --session-file /var/run/juju/provision-agent.zksession >> /tmp/juju-provision-agent.output |
4988 | + 2>&1 |
4989 | + |
4990 | + EOF |
4991 | + |
4992 | + ', /sbin/start juju-provision-agent] |
4993 | ssh_authorized_keys: [this-is-a-public-key] |
4994 | |
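The updated bootstrap user-data above installs each agent as an upstart job by embedding a `cat >> /etc/init/<job>.conf <<EOF` heredoc in cloud-init's `runcmd`, followed by `/sbin/start <job>`. As a rough illustration of how such runcmd entries might be assembled (a sketch only — function and parameter names here are hypothetical, not this branch's cloudinit API):

```python
def upstart_runcmd(job_name, description, env, command):
    """Build cloud-init runcmd entries that install and start an
    upstart job via a shell heredoc."""
    lines = [
        'description "%s"' % description,
        'author "Juju Team <juju@lists.ubuntu.com>"',
        "start on runlevel [2345]",
        "stop on runlevel [!2345]",
        "respawn",
    ]
    # env stanzas, mirroring the JUJU_* settings in the diff above
    lines.extend('env %s="%s"' % (k, v) for k, v in sorted(env.items()))
    # redirect output to a file, since upstart captures nothing by default
    lines.append("exec %s >> /tmp/%s.output 2>&1" % (command, job_name))
    conf = "\n".join(lines)
    return [
        "cat >> /etc/init/%s.conf <<EOF\n%s\nEOF\n" % (job_name, conf),
        "/sbin/start %s" % job_name,
    ]
```

Driving process lifecycle through upstart this way replaces the old `--pidfile` bookkeeping entirely: respawn-on-crash and duplicate-start protection come from init rather than hand-rolled pid checks.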
4995 | === modified file 'juju/providers/orchestra/tests/data/launch_user_data' |
4996 | --- juju/providers/orchestra/tests/data/launch_user_data 2012-01-09 14:17:21 +0000 |
4997 | +++ juju/providers/orchestra/tests/data/launch_user_data 2012-02-02 16:42:42 +0000 |
4998 | @@ -1,12 +1,34 @@ |
4999 | -#cloud-config |
5000 | apt-update: true |
The diff has been truncated for viewing.