Merge lp:~bcsaller/pyjuju/subordinate-removes into lp:pyjuju

Status: Work in progress
Proposed branch: lp:~bcsaller/pyjuju/subordinate-removes
Merge into: lp:pyjuju
Diff against target: 550 lines (+264/-43), 11 files modified
  juju/control/destroy_service.py (+2/-2)
  juju/control/remove_relation.py (+5/-3)
  juju/control/tests/test_destroy_service.py (+18/-4)
  juju/control/tests/test_remove_relation.py (+2/-2)
  juju/hooks/tests/test_invoker.py (+2/-0)
  juju/lib/testing.py (+3/-3)
  juju/state/hook.py (+9/-3)
  juju/state/tests/test_service.py (+78/-22)
  juju/unit/lifecycle.py (+64/-0)
  juju/unit/tests/test_lifecycle.py (+76/-2)
  juju/unit/workflow.py (+5/-2)
To merge this branch: bzr merge lp:~bcsaller/pyjuju/subordinate-removes
Related bugs: none
Reviewer: Juju Engineering (pending)
Review via email: mp+102427@code.launchpad.net
Commit message
Description of the change
Destroying a subordinate shouldn't result in tracebacks.

Destroying a subordinate service after destroying its principal service
results in a traceback. This patch corrects that. The associated test also
previously failed to exercise all the behavior needed to demonstrate the
problem; that is corrected as well.
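The state-level sequence the new test exercises looks roughly like this (a
minimal sketch based on test_destroy_principal_with_subordinates in the
preview diff below; the manager objects and the already-destroyed principal
come from the test fixture and are assumptions here):

from twisted.internet.defer import inlineCallbacks

@inlineCallbacks
def destroy_orphaned_subordinate(service_state_manager,
                                 relation_state_manager, log_service):
    # The principal ("mysql") is already destroyed, leaving the
    # subordinate ("logging") with partial state.
    assert (yield log_service.is_subordinate())
    # Its container relation is gone, so no relations remain...
    rels = yield relation_state_manager.get_relations_for_service(
        log_service)
    assert rels == []
    # ...and removing the service state should no longer traceback.
    yield service_state_manager.remove_service_state(log_service)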
Benjamin Saller (bcsaller) wrote:

Reviewers: mp+102427

Message:
Please take a look.
Please review this at https:/
Affected files:
A [revision details]
M juju/control/destroy_service.py
M juju/control/remove_relation.py
M juju/control/tests/test_destroy_service.py
M juju/control/tests/test_remove_relation.py
M juju/hooks/tests/test_invoker.py
M juju/lib/testing.py
M juju/state/hook.py
M juju/state/tests/test_service.py
M juju/unit/lifecycle.py
M juju/unit/tests/test_lifecycle.py
M juju/unit/workflow.py
Kapil Thangavelu (hazmat) wrote:

Needs Fixing

There seem to be two separate implementations of unrelated issues here that
would ideally be in separate branches: one for open/close ports, and one for
subordinate suicide on removal. I'm focusing this review on the subordinate
suicide aspect.
File juju/control/remove_relation.py:

On has_principal:
this should default to false, right?

On the endpoint ordering:
please document here why the subordinate is coming first in the ep pair;
it's easy to gloss over that this is just an error-message formatting thing.
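(For context, a condensed sketch of the ordering logic in question; the
positional unpack at the end is why the sort order matters. The import path
for UnsupportedSubordinateServiceRemoval is an assumption here:)

from twisted.internet.defer import inlineCallbacks

from juju.state.errors import UnsupportedSubordinateServiceRemoval

@inlineCallbacks
def raise_for_container_relation(services):
    # Subordinates go to the front so the unpack below names each side
    # of the error message correctly.
    service_pair = []  # ordered such that: sub, principal
    for service in services:
        if (yield service.is_subordinate()):
            service_pair.insert(0, service)
        else:
            service_pair.append(service)
    sub, principal = service_pair
    raise UnsupportedSubordinateServiceRemoval(sub.service_name,
                                               principal.service_name)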
File juju/control/tests/test_destroy_service.py:

On the "... resulted in a traceback" test addition:
this doesn't actually test anything related to the branch afaics. The actual
problem is asynchronously triggered.
File juju/hooks/tests/test_invoker.py:

On "... run and changes are immediate.""":
does this change belong in this branch?
File juju/state/:

On "... ports triggers the appropriate firewall":
does this change belong in this branch?
File juju/unit/lifecycle.py:

On _undeploy_subordinate:
the method name is a bit ambiguous; it's always a suicide op, i.e.
_undeploy_self...
On "... agents deployment.":
s/agents/
On unit_deployer.start("subordinate"):
are you sure about that? I don't see anything obvious there; if so, it's a
bug in the deployer. start would imply deploying again, which is bogus.
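(For reference, the kill sequence under discussion, condensed from
_do_unit_kill in the preview diff below; the UnitDeployer import path is an
assumption:)

from twisted.internet.defer import inlineCallbacks

# Assumed home module for UnitDeployer; the diff only shows its use.
from juju.unit.deploy import UnitDeployer

@inlineCallbacks
def kill_unit(client, unit_name, machine_id, charm_dir):
    unit_deployer = UnitDeployer(client, machine_id, charm_dir)
    # start() bootstraps the deployer's service factory before the kill;
    # the review question is whether that implies a bogus re-deploy.
    yield unit_deployer.start("subordinate")
    yield unit_deployer.kill_service_unit(unit_name)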
File juju/unit/workflow.py:

On the stat guard in update_state:
how does this help? what if the node is already gone when the retry_change
fires? also untested.
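(The guard added in workflow.py below is aimed at exactly that window; a
minimal sketch, assuming retry_change-style semantics where the change
function receives (content, stat) and stat is None once the node is gone.
The dict merge mirrors the surrounding workflow.py code and is an assumption
beyond the lines shown in the diff:)

import yaml

def make_update_state(state_serialized):
    """Build a change function for retrying a write to a unit's ZK node."""
    def update_state(content, stat):
        if not stat:
            # The node vanished between the read and this retry; there is
            # nothing left to update, so bail out quietly.
            return
        unit_data = yaml.load(content) or {}
        unit_data.update(yaml.load(state_serialized))
        return yaml.safe_dump(unit_data)
    return update_state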
537. By Benjamin Saller: typo
538. By Benjamin Saller: merge trunk
539. By Benjamin Saller: move ports handling to another branch
540. By Benjamin Saller: verify unit state removal on subordinate suicide
541. By Benjamin Saller: catch another async failure location
542. By Benjamin Saller: missing import
543. By Benjamin Saller: missing import
Kapil Thangavelu (hazmat) wrote:

So, post discussion, I believe we've said this branch/approach is dead in
favor of the stop protocol spec.
Unmerged revisions
543. By Benjamin Saller: missing import
542. By Benjamin Saller: missing import
541. By Benjamin Saller: catch another async failure location
540. By Benjamin Saller: verify unit state removal on subordinate suicide
539. By Benjamin Saller: move ports handling to another branch
538. By Benjamin Saller: merge trunk
537. By Benjamin Saller: typo
536. By Benjamin Saller: remove print
535. By Benjamin Saller: add back in subordinate removal; this will have to
     change to honor the normal flow-control model later. When stop hooks
     are properly added, this will also need another review.
534. By Benjamin Saller: change arg. order for error message
Preview Diff
=== modified file 'juju/control/destroy_service.py'
--- juju/control/destroy_service.py	2012-04-11 17:48:26 +0000
+++ juju/control/destroy_service.py	2012-06-04 23:05:24 +0000
@@ -47,14 +47,14 @@
 
     if relations:
         principal_service = None
-        # if we have a container we can destroy the subordinate
+        # if we have a container we can't destroy the subordinate
         # (revisit in the future)
         for relation in relations:
             if relation.relation_scope != "container":
                 continue
             services = yield relation.get_service_states()
             remote_service = [s for s in services if s.service_name !=
-                service_state.service_name][0]
+                              service_state.service_name][0]
             if not (yield remote_service.is_subordinate()):
                 principal_service = remote_service
                 break

=== modified file 'juju/control/remove_relation.py'
--- juju/control/remove_relation.py	2012-04-12 09:23:50 +0000
+++ juju/control/remove_relation.py	2012-06-04 23:05:24 +0000
@@ -69,17 +69,19 @@
     service_pair = []  # ordered such that sub, principal
 
     is_container = False
-    has_principal = True
+    has_principal = False
+    # Sort the endpoints so we know which is the sub and which is the
+    # principal in the event we need to raise an error below
     for ep in endpoints:
         if ep.relation_scope == "container":
             is_container = True
         service = yield service_state_manager.get_service_state(
             ep.service_name)
         if (yield service.is_subordinate()):
-            service_pair.append(service)
+            service_pair.insert(0, service)
             has_principal = True
         else:
-            service_pair.insert(0, service)
+            service_pair.append(service)
     if is_container and len(service_pair) == 2 and has_principal:
         sub, principal = service_pair
         raise UnsupportedSubordinateServiceRemoval(sub.service_name,

=== modified file 'juju/control/tests/test_destroy_service.py'
--- juju/control/tests/test_destroy_service.py	2012-04-10 10:26:54 +0000
+++ juju/control/tests/test_destroy_service.py	2012-06-04 23:05:24 +0000
@@ -94,13 +94,15 @@
     @inlineCallbacks
     def test_destroy_principal_with_subordinates(self):
         log_service = yield self.add_service_from_charm("logging")
-        lu1 = yield log_service.add_unit_state()
 
         yield self.add_relation(
             "juju-info",
+            "container",
             (self.service_state1, "juju-info", "server"),
             (log_service, "juju-info", "client"))
 
+        lu1 = yield log_service.add_unit_state(self.service1_unit)
+
         finished = self.setup_cli_reactor()
         topology = yield self.get_topology()
         service_id = topology.find_service_with_name("mysql")
@@ -111,9 +113,10 @@
 
         self.setup_exit(0)
         self.mocker.replay()
-
         main(["destroy-service", "mysql"])
+
         yield finished
+
         service_id = topology.find_service_with_name("mysql")
 
         topology = yield self.get_topology()
@@ -128,9 +131,21 @@
         # Zookeeper. see test_destroy_subordinate_without_relations.
         exists = yield self.client.exists("/services/%s" % logging_id)
         self.assertTrue(exists)
-
         self.assertIn("Service 'mysql' destroyed.", self.output.getvalue())
 
+        # At this point trigger a destroy on the subordinate
+        # this deals with issue #982353 where partial state
+        # resulted in a traceback
+        # Assert that methods related to destruction fire w/o exception
+        self.assertEquals(True, (yield log_service.is_subordinate()))
+        self.assertEquals(None, (yield lu1.get_container()))
+        rels = yield self.relation_state_manager.get_relations_for_service(
+            log_service)
+        self.assertEquals([], rels)
+        yield self.service_state_manager.remove_service_state(log_service)
+        exists = yield self.client.exists("/services/%s" % logging_id)
+        self.assertFalse(exists)
+
     @inlineCallbacks
     def test_destroy_subordinate_without_relations(self):
         """Verify we can remove a subordinate w/o relations."""
@@ -146,7 +161,6 @@
 
         main(["destroy-service", "logging"])
         yield finished
-
         topology = yield self.get_topology()
         self.assertFalse(topology.has_service(logging_id))
         exists = yield self.client.exists("/services/%s" % logging_id)

=== modified file 'juju/control/tests/test_remove_relation.py'
--- juju/control/tests/test_remove_relation.py	2012-04-10 22:26:36 +0000
+++ juju/control/tests/test_remove_relation.py	2012-06-04 23:05:24 +0000
@@ -210,6 +210,6 @@
         main(["remove-relation", "logging", "wordpress"])
         yield wait_on_reactor_stopped
         self.assertIn("Unsupported attempt to destroy "
-                      "subordinate service 'wordpress' while "
-                      "principal service 'logging' is related.",
+                      "subordinate service 'logging' while "
+                      "principal service 'wordpress' is related.",
                       self.output.getvalue())

=== modified file 'juju/hooks/tests/test_invoker.py'
--- juju/hooks/tests/test_invoker.py	2012-05-04 03:36:36 +0000
+++ juju/hooks/tests/test_invoker.py	2012-06-04 23:05:24 +0000
@@ -1467,6 +1467,8 @@
 
         mu1, mu2 = my_units
         lu1, lu2 = log_units
+        self.mysql_units = my_units
+        self.log_units = log_units
 
         mystate = pick_attr(service_states, relation_role="server")
         logstate = pick_attr(service_states, relation_role="client")

=== modified file 'juju/lib/testing.py'
--- juju/lib/testing.py	2012-04-06 14:42:06 +0000
+++ juju/lib/testing.py	2012-06-04 23:05:24 +0000
@@ -170,19 +170,19 @@
         data, stat = yield client.get(path)
         name = path.rsplit('/', 1)[1]
 
-        d['contents'] = _decode_fmt(data, yaml.load)
+        d["contents"] = _decode_fmt(data, yaml.load)
 
         children = yield client.get_children(path)
         for name in children:
             if path == "/" and name == "zookeeper":
                 continue
 
-            cd = yield export_tree(path + '/' + name, indent)
+            cd = yield export_tree(path + "/" + name, indent)
             d[name] = cd
 
         returnValue(d)
 
-    output[path.rsplit('/', 1)[1]] = yield export_tree(path, '')
+    output[path.rsplit("/", 1)[1]] = yield export_tree(path, "")
     returnValue(output)
 
 @inlineCallbacks

=== modified file 'juju/state/hook.py'
--- juju/state/hook.py	2012-04-09 19:40:57 +0000
+++ juju/state/hook.py	2012-06-04 23:05:24 +0000
@@ -4,8 +4,9 @@
 
 from juju.state.base import StateBase
 from juju.state.errors import (
-    UnitRelationStateNotFound, StateNotFound, RelationBrokenContextError,
-    RelationStateNotFound, InvalidRelationIdentity, StateChanged)
+    ServiceStateNotFound, UnitRelationStateNotFound, StateNotFound,
+    RelationBrokenContextError, RelationStateNotFound,
+    InvalidRelationIdentity, StateChanged)
 from juju.state.relation import (
     RelationStateManager, ServiceRelationState, UnitRelationState)
 from juju.state.service import ServiceStateManager, parse_service_name
@@ -102,7 +103,12 @@
         relations = []
         if self._topology is None:
             self._topology = yield self._read_topology()
-        service = yield self.get_local_service()
+        try:
+            service = yield self.get_local_service()
+        except ServiceStateNotFound:
+            # The service has been removed
+            returnValue(relations)
+
         internal_service_id = service.internal_id
         for info in self._topology.get_relations_for_service(
                 internal_service_id):

=== modified file 'juju/state/tests/test_service.py'
--- juju/state/tests/test_service.py	2012-04-11 19:38:41 +0000
+++ juju/state/tests/test_service.py	2012-06-04 23:05:24 +0000
@@ -159,6 +159,43 @@
         unit_state = yield service_state.add_unit_state()
         returnValue(unit_state)
 
+    @inlineCallbacks
+    def setup_subordinate_defaults(self):
+        mysql_ep = RelationEndpoint("mysql", "juju-info", "juju-info",
+                                    "server", "global")
+        logging_ep = RelationEndpoint("logging", "juju-info", "juju-info",
+                                      "client", "container")
+
+        logging_charm_state = yield self.get_subordinate_charm()
+        self.assertTrue(logging_charm_state.is_subordinate())
+        log_state = yield self.service_state_manager.add_service_state(
+            "logging", logging_charm_state, dummy_constraints)
+        mysql_state = yield self.service_state_manager.add_service_state(
+            "mysql", self.charm_state, dummy_constraints)
+
+        m1 = yield self.machine_state_manager.add_machine_state(
+            series_constraints)
+        m2 = yield self.machine_state_manager.add_machine_state(
+            series_constraints)
+
+        relation_state, service_states = (yield
+            self.relation_state_manager.add_relation_state(
+                mysql_ep, logging_ep))
+
+        unit_state1 = yield mysql_state.add_unit_state()
+        yield unit_state1.assign_to_machine(m1)
+        unit_state0 = yield log_state.add_unit_state(
+            container=unit_state1)
+
+        unit_state3 = yield mysql_state.add_unit_state()
+        yield unit_state3.assign_to_machine(m2)
+        unit_state2 = yield log_state.add_unit_state(
+            container=unit_state3)
+
+        returnValue([service_states, relation_state,
+                     [(unit_state0, unit_state1),
+                      (unit_state2, unit_state3)]])
+
 
 class ServiceStateManagerTest(ServiceStateManagerTestBase):
 
@@ -455,27 +492,9 @@
         """
         Validate adding units with containers specified and recovering that.
         """
-        mysql_ep = RelationEndpoint("mysql", "juju-info", "juju-info",
-                                    "server", "global")
-        logging_ep = RelationEndpoint("logging", "juju-info", "juju-info",
-                                      "client", "container")
-
-        logging_charm_state = yield self.get_subordinate_charm()
-        self.assertTrue(logging_charm_state.is_subordinate())
-        log_state = yield self.service_state_manager.add_service_state(
-            "logging", logging_charm_state, dummy_constraints)
-        mysql_state = yield self.service_state_manager.add_service_state(
-            "mysql", self.charm_state, dummy_constraints)
-
-        relation_state, service_states = (yield
-            self.relation_state_manager.add_relation_state(
-                mysql_ep, logging_ep))
-
-        unit_state1 = yield mysql_state.add_unit_state()
-        unit_state0 = yield log_state.add_unit_state(container=unit_state1)
-
-        unit_state3 = yield mysql_state.add_unit_state()
-        unit_state2 = yield log_state.add_unit_state(container=unit_state3)
+        service_states, relation_state, unit_states = (
+            yield self.setup_subordinate_defaults())
+        (unit_state0, unit_state1), (unit_state2, unit_state3) = unit_states
 
         self.assertEquals((yield unit_state1.get_container()), None)
         self.assertEquals((yield unit_state0.get_container()), unit_state1)
@@ -498,7 +517,7 @@
 
         @inlineCallbacks
         def verify_container(relation_state, service_relation_state,
-                            unit_state, container):
+                             unit_state, container):
             presence_path = self.get_presence_path(
                 relation_state,
                 service_relation_state.relation_role,
@@ -533,6 +552,11 @@
             # we verify the content elsewhere
             self.assertTrue(settings_info["private-address"])
 
+            if container:
+                mid = yield unit_state.get_assigned_machine_id()
+                cmid = yield container.get_assigned_machine_id()
+                self.assertEqual(mid, cmid)
+
         # verify all the units are constructed as expected
         # first the client roles with another container
         yield verify_container(relation_state, logstate,
@@ -2540,6 +2564,38 @@
         self.assertEqual(exposed_flag, False)
 
     @inlineCallbacks
+    def test_set_and_clear_exposed_flag_subordinate(self):
+        """An exposed flag can be set on a subordinate service."""
+        service_states, relation_state, unit_states = (
+            yield self.setup_subordinate_defaults())
+        (u0, u1), (u2, u3) = unit_states
+        service_state = yield u1.get_service_state()
+
+        # Defaults to false
+        exposed_flag = yield service_state.get_exposed_flag()
+        self.assertEqual(exposed_flag, False)
+
+        # Can be set
+        yield service_state.set_exposed_flag()
+        exposed_flag = yield service_state.get_exposed_flag()
+        self.assertEqual(exposed_flag, True)
+
+        # Can be set multiple times
+        yield service_state.set_exposed_flag()
+        exposed_flag = yield service_state.get_exposed_flag()
+        self.assertEqual(exposed_flag, True)
+
+        # Can be cleared
+        yield service_state.clear_exposed_flag()
+        exposed_flag = yield service_state.get_exposed_flag()
+        self.assertEqual(exposed_flag, False)
+
+        # Can be cleared multiple times
+        yield service_state.clear_exposed_flag()
+        exposed_flag = yield service_state.get_exposed_flag()
+        self.assertEqual(exposed_flag, False)
+
+    @inlineCallbacks
     def test_watch_exposed_flag(self):
         """An exposed watch is setup on a permanent basis."""
         service_state = yield self.add_service("wordpress")

=== modified file 'juju/unit/lifecycle.py'
--- juju/unit/lifecycle.py	2012-05-01 00:26:25 +0000
+++ juju/unit/lifecycle.py	2012-06-04 23:05:24 +0000
@@ -106,6 +106,7 @@
         self._state_dir = state_dir
         self._relations = None
         self._running = False
+        self._remote_service = {}
        self._watching_relation_memberships = False
        self._watching_relation_resolved = False
        self._run_lock = DeferredLock()
@@ -404,6 +405,14 @@
                yield workflow.transition_state("departed")
        self._store_relations()
 
+        if not is_principal:
+            service_relation = old_relations[relation_id]
+            removed = yield self._undeploy_subordinate(service_relation)
+            if removed:
+                # we don't even need to process adds at this point
+                # (that would be a sick, sad world)
+                returnValue(None)
+
         # Process new relations.
         for relation_id in added:
             service_relation = new_relations[relation_id]
@@ -434,10 +443,65 @@
 
             self._relations[service_relation.internal_relation_id] = workflow
 
+            # Cache the remote unit's service (to ease subordinate undeploy)
+            service_states = yield service_relation.get_service_states()
+            remote_service = [s for s in service_states if
+                              s.service_name != self._unit.service_name][0]
+            self._remote_service[
+                service_relation.internal_relation_id] = remote_service
+
             with (yield workflow.lock()):
                 yield workflow.synchronize()
 
     @inlineCallbacks
+    def _undeploy_subordinate(self, service_relation):
+        """Conditionally stop and remove _this_ subordinate unit.
+
+        When a relationship is removed, we check whether it is with this
+        subordinate's principal service. If it is, we need to stop the
+        agent and remove the on-disk deployment.
+
+        Returns a boolean indicating whether we have self-terminated.
+        """
+        # Determine if the remote end of this service relation is the
+        # container of this unit (it could be a scope:container relation
+        # under the same principal).
+        container = yield self._unit.get_container()
+        container_service = None
+        if container is not None:
+            container_service = yield container.get_service_state()
+        # We cannot use service_relation.get_service_states() as we're
+        # reacting to relation removal and it's no longer in the topology
+        # (forcing us to manage this cache).
+        remote_service = self._remote_service[
+            service_relation.internal_relation_id]
+
+        if container is None or (
+                remote_service.internal_id == container_service.internal_id):
+            # Either the container has already been removed (we can safely
+            # proceed) or the remote side of the relationship being removed
+            # is the principal. At this point we will begin shutting down
+            # this unit; remove our unit state.
+            self._log.debug("Removing unit state for %s", self._unit)
+            yield self._service.remove_unit_state(self._unit)
+            # the final problem
+            yield self._do_unit_kill(self._unit.unit_name, "",
+                                     self._unit_dir)
+            returnValue(True)
+
+        returnValue(False)
+
+    @inlineCallbacks
+    def _do_unit_kill(self, unit_name, machine_id, charm_dir):
+        unit_deployer = UnitDeployer(self._client, machine_id, charm_dir)
+        # start the unit deployer (this is needed to bootstrap the factory)
+        # then kill ourself
+        yield unit_deployer.start("subordinate")
+        yield unit_deployer.kill_service_unit(unit_name)
+
+    @inlineCallbacks
     def _do_unit_deploy(self, unit_name, machine_id, charm_dir):
         # this method exists to aid testing rather than being an
         # inline

=== modified file 'juju/unit/tests/test_lifecycle.py'
--- juju/unit/tests/test_lifecycle.py	2012-05-04 03:39:27 +0000
+++ juju/unit/tests/test_lifecycle.py	2012-06-04 23:05:24 +0000
@@ -23,7 +23,8 @@
 from juju.machine.tests.test_constraints import series_constraints
 
 from juju.state.endpoint import RelationEndpoint
-from juju.state.errors import UnitRelationStateNotFound
+from juju.state.errors import (UnitRelationStateNotFound,
+                               ServiceUnitStateNotFound)
 from juju.state.machine import MachineStateManager
 from juju.state.relation import ClientServerUnitWatcher
 from juju.state.service import NO_HOOKS
@@ -34,7 +35,7 @@
 from juju.unit.tests.test_charm import CharmPublisherTestBase
 
 from juju.lib.testing import TestCase
-from juju.lib.mocker import MATCH
+from juju.lib.mocker import MATCH, ANY, Mocker
 
 
 class UnwriteablePath(object):
@@ -1441,3 +1442,76 @@
         yield lifecycle.start()
         yield test_deferred
         # mocker has already verified the call signature, we're happy
+
+    @inlineCallbacks
+    def test_subordinate_lifecycle_stop_principal(self):
+        state = yield self.setup_default_subordinate_relation()
+        mu1, my_srv = state["principal"]
+        lu1, log_srv = state["subordinate"]
+
+        # Principal setup
+        principal_lifecycle = UnitLifecycle(
+            self.client, mu1, my_srv,
+            self.unit_directory, self.state_directory, self.executor)
+
+        test_deferred = Deferred()
+        principal_results = []
+
+        def test_deploy(unit_name, machine_id, charm_dir):
+            principal_results.append([unit_name, machine_id, charm_dir])
+            test_deferred.callback(True)
+
+        mock_unit_deploy = self.mocker.patch(principal_lifecycle)
+        mock_unit_deploy._do_unit_deploy("logging/0", 0, ANY)
+        self.mocker.call(test_deploy)
+
+        # Subordinate setup
+        test_sub_deferred = Deferred()
+        sub_results = []
+
+        def test_undeploy(unit_name, machine_id, charm_dir):
+            sub_results.append([unit_name, machine_id, charm_dir])
+            test_sub_deferred.callback(True)
+
+        # Actual test
+        self.mocker.replay()
+        yield principal_lifecycle.start()
+        yield test_deferred
+
+        # The principal is running and did a deploy (mocked out);
+        # at this point we should be able to collect the relation
+        # and the unit and inject them into the subordinate (which
+        # would have them in its own process normally)
+        log = state["subordinate"][1]
+        log_units = yield log.get_all_unit_states()
+        lu1 = log_units[0]
+
+        # A second instance of mocker around the subordinate
+        # avoids issues with patch/replace confusing the single
+        # instance
+        mocker2 = Mocker()
+        subordinate_lifecycle = UnitLifecycle(
+            self.client, lu1, log_srv,
+            self.unit_directory, self.state_directory, self.executor)
+
+        mock_subunit_deploy = mocker2.patch(subordinate_lifecycle)
+        mock_subunit_deploy._do_unit_kill(
+            "logging/0", "", ANY)
+        mocker2.call(test_undeploy)
+
+        with mocker2:
+            yield subordinate_lifecycle.start()
+
+            # break relation to principal
+            yield self.relation_manager.remove_relation_state(
+                state["relation_state"])
+
+            # verify that we see the departed event and the attempt of
+            # the subordinate to self-terminate
+            yield test_sub_deferred
+            # again, mocker verified the call on _do_unit_kill; this
+            # verifies that the unit attempted to self-destruct
+
+        # verify the subordinate unit state was removed
+        failure = self.service_manager.get_unit_state("logging/0")
+        yield self.assertFailure(failure, ServiceUnitStateNotFound)

=== modified file 'juju/unit/workflow.py'
--- juju/unit/workflow.py	2012-03-29 07:52:41 +0000
+++ juju/unit/workflow.py	2012-06-04 23:05:24 +0000
@@ -1,7 +1,7 @@
-import yaml
 import csv
+import logging
 import os
-import logging
+import yaml
 
 from zookeeper import NoNodeException
 from twisted.internet.defer import inlineCallbacks, returnValue
@@ -226,6 +226,9 @@
         state_serialized = yaml.safe_dump(state_dict)
 
         def update_state(content, stat):
+            if not stat:
+                # State has gone away
+                return
             unit_data = yaml.load(content)
             if not unit_data:
                 unit_data = {}