Merge lp:~jimbaker/pyjuju/purge-queued-hooks into lp:pyjuju

Proposed by Jim Baker
Status: Work in progress
Proposed branch: lp:~jimbaker/pyjuju/purge-queued-hooks
Merge into: lp:pyjuju
Diff against target: 606 lines (+331/-54)
6 files modified
juju/hooks/scheduler.py (+67/-16)
juju/hooks/tests/test_scheduler.py (+65/-10)
juju/unit/lifecycle.py (+31/-6)
juju/unit/tests/test_lifecycle.py (+35/-16)
juju/unit/tests/test_workflow.py (+132/-6)
juju/unit/workflow.py (+1/-0)
To merge this branch: bzr merge lp:~jimbaker/pyjuju/purge-queued-hooks
Reviewer: Kapil Thangavelu (community), status: Needs Fixing
Review via email: mp+93135@code.launchpad.net

Description of the change

Purge queued hooks upon a hook error.

Implements the changes necessary to purge queued hooks. This involves two sets of changes: 1) not executing a -changed hook if the preceding -joined hook fails, which is the scenario seen in the related bug (and tested directly by test_purge_queued_hook_after_error); 2) purging changes from the scheduler in the event of an errored hook, until the workflow is reset (covered by the remaining tests).

There are also some minor fixes to related docstrings, removal of unnecessary sleeps, and the addition of a missing yield when releasing the run lock in UnitRelationLifecycle.start.
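
To make the mechanism concrete: the scheduler records a purge clock when purge() is called, and any queued change whose clock is at or below that value only has its membership effect applied and is never handed to the hook executor. Below is a minimal, self-contained sketch of that idea; the class and method names are simplified illustrations, not the branch's API (the real implementation is the HookScheduler change in the diff further down):

    # Simplified sketch of the purge-clock idea (not the actual HookScheduler).
    class PurgingScheduler(object):
        def __init__(self, executor):
            self._executor = executor      # callable that runs a hook
            self._clock = 0                # incremented per observed change
            self._purge_clock = 0          # changes at/below this are purged
            self._members = []
            self._queue = []               # (clock, unit_name, change_type)

        def notify(self, unit_name, change_type):
            self._clock += 1
            self._queue.append((self._clock, unit_name, change_type))

        def purge(self):
            # Everything queued so far is purged; later changes still run.
            self._purge_clock = self._clock
            self._clock += 1

        def run(self):
            for clock, unit_name, change_type in self._queue:
                if clock > self._purge_clock:
                    self._executor(unit_name, change_type)
                else:
                    # Purged: keep membership consistent, skip the hook.
                    if change_type == "joined":
                        self._members.append(unit_name)
                    elif change_type == "departed":
                        self._members.remove(unit_name)
            self._queue = []

This mirrors how the branch's _purge_execute keeps the membership list accurate while skipping execution of purged changes.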

https://codereview.appspot.com/5673051/

Revision history for this message
Kapil Thangavelu (hazmat) wrote :

This isn't purging the right thing, and it's also become clear that purging is not the correct strategy and that the underlying cause/problem isn't addressed by it. The relation scheduler feeds into the executor. Hook errors move the relation workflow, lifecycle, and scheduler into stopped states. The problem occurs when a second execution from the relation scheduler goes into the hook executor queue prior to its stopping. Identifying and fixing that is the important thing to address. Purging the scheduler while stopping isn't clearly useful.

I.e. instead of inspecting the queue, we should ensure that queue consumption is curtailed on error. The tests in the branch for purging queues serve only to demonstrate that information is lost by purging, not that an errored relation won't have a hook awaiting execution in the hook executor. We should talk about this more and try to come to a more effective plan.
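
For comparison, curtailing consumption would mean the scheduler's run loop stops pulling clock values once an error has been observed, so nothing further reaches the hook executor. A hypothetical, synchronous illustration of that idea follows; the error flag, class name, and methods here are illustrative and are not from the branch:

    # Hypothetical sketch of curtailing queue consumption on error,
    # instead of purging already-queued changes (names are illustrative).
    class CurtailingScheduler(object):
        def __init__(self, executor):
            self._executor = executor
            self._run_queue = []     # stand-in for the DeferredQueue of clocks
            self._errored = False    # set when a hook error is observed

        def notify_error(self):
            # Called from the error path; consumption stops immediately.
            self._errored = True

        def run(self):
            while self._run_queue and not self._errored:
                clock = self._run_queue.pop(0)
                # Only now does a change reach the hook executor, so an
                # error flagged above prevents any further executions.
                self._executor(clock)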

review: Needs Fixing

Unmerged revisions

463. By Jim Baker

Locking test for UnitRelationLifecycle.purge

462. By Jim Baker

Docstrings/comments/PEP8/PyFlakes

461. By Jim Baker

Merged trunk

460. By Jim Baker

Cleanup

459. By Jim Baker

Handle merging events in case of purge

458. By Jim Baker

Cleanup

457. By Jim Baker

Ensure test loops

456. By Jim Baker

Added purge queued hooks test

455. By Jim Baker

Ensure accompanying -changed hook does not execute if -joined fails

454. By Jim Baker

Added test to verify queued hooks are purged after error

Preview Diff

1=== modified file 'juju/hooks/scheduler.py'
2--- juju/hooks/scheduler.py 2011-12-12 01:56:05 +0000
3+++ juju/hooks/scheduler.py 2012-02-15 05:08:17 +0000
4@@ -18,17 +18,18 @@
5 class HookScheduler(object):
6 """Schedules hooks for execution in response to changes observed.
7
8- Performs merging of redunant events, and maintains a membership
9- list for hooks, guaranteeing only nodes that have previously been
10- notified of joining are present in th membership.
11+ Merges redundant events and maintains a membership list for hooks,
12+ guaranteeing that only nodes that have previously been notified of
13+ joining are present in the membership.
14
15- Internally this utilizes a change clock, which is incremented for
16- every change seen. All hook operations that would result from a
17- change are indexed by change clock, and the clock is placed into
18- the run queue.
19+ Internally the scheduler utilizes a change clock that is
20+ incremented for every change seen. All hook operations that would
21+ result from a change are indexed by change clock, and the clock is
22+ placed into the run queue.
23 """
24
25- def __init__(self, client, executor, unit_relation, relation_name, unit_name):
26+ def __init__(self,
27+ client, executor, unit_relation, relation_name, unit_name):
28 self._running = None
29
30 # The thing that will actually run the hook for us
31@@ -50,9 +51,12 @@
32 # Run queue (clock)
33 self._run_queue = DeferredQueue()
34
35- # Artifical clock sequence
36+ # Artificial clock sequence
37 self._clock_sequence = 0
38
39+ # Current purge clock value
40+ self._purge_clock = 0
41+
42 @property
43 def running(self):
44 return self._running is True
45@@ -77,11 +81,34 @@
46 # Get the change for the unit.
47 change_clock, change_type = self._node_queue.pop(unit_name)
48
49- log.debug("executing hook for %s:%s",
50+ # Execute or purge the hook, depending on the current
51+ # purging status
52+ if clock > self._purge_clock:
53+ log.debug("executing hook for %s:%s",
54 unit_name, CHANGE_LABELS[change_type])
55-
56- # Execute the hook
57- yield self._execute(unit_name, change_type)
58+ yield self._execute(unit_name, change_type)
59+ else:
60+ log.debug("purging hook for %s:%s",
61+ unit_name, CHANGE_LABELS[change_type])
62+ self._purge_execute(unit_name, change_type)
63+
64+ def purge(self):
65+ """Ensure all events in the scheduler up to this point are purged.
66+
67+ Calling this method does not preclude or otherwise change the
68+ relation watcher from enqueuing additional changes to the
69+ scheduler.
70+
71+ Such new changes are preserved, with the following exceptions:
72+
73+ * a purged ADD followed by a new DELETE is reduced to a nop
74+
75+ * a purged ADD followed by a new MODIFIED is reduced to an
76+ ADD, but with the new clock (otherwise it would be purged)
77+ """
78+ self._purge_clock = self._clock_sequence
79+ self._clock_sequence += 1
80+ log.debug("purging requested, purge clock: %s", self._purge_clock)
81
82 def stop(self):
83 """Stop the hook execution.
84@@ -150,19 +177,32 @@
85 def _queue_change(self, unit_name, change_type, clock):
86 """Queue up the node change for execution.
87 """
88- # If its a new change for the unit store it, and return.
89+ # If it's a new change for the unit, store it and return.
90 if not unit_name in self._node_queue:
91 self._node_queue[unit_name] = (clock, change_type)
92 self._clock_queue.setdefault(clock, []).append(unit_name)
93 return True
94
95+ previous_clock, previous_change = self._node_queue[unit_name]
96+
97+ # If purging an ADD, with an unpurged MODIFIED following it,
98+ # delete the previous ADD and change the current MODIFIED to
99+ # ADD (effectively a merge barrier).
100+ if previous_clock <= self._purge_clock and (
101+ previous_change == ADDED and change_type == MODIFIED):
102+ log.debug("purging hook for %s:%s",
103+ unit_name, CHANGE_LABELS[previous_change])
104+ self._clock_queue[previous_clock] = []
105+ self._node_queue[unit_name] = (clock, ADDED)
106+ self._clock_queue.setdefault(clock, []).append(unit_name)
107+ return True
108+
109 # Else merge/reduce with the previous operation.
110- previous_clock, previous_change = self._node_queue[unit_name]
111 change_clock, change_type = self._reduce(
112 (previous_clock, previous_change),
113 (self._clock_sequence, change_type))
114
115- # If they've cancelled, remove from node and clock queues
116+ # If they've been cancelled, remove from node and clock queues.
117 if change_type is None:
118 del self._node_queue[unit_name]
119 self._clock_queue[previous_clock].remove(unit_name)
120@@ -209,6 +249,10 @@
121 elif previous_change == MODIFIED and new_change == MODIFIED:
122 return (previous_clock, previous_change)
123
124+ else:
125+ raise AssertionError("Invalid change pair for reduce: %s, %s" % (
126+ CHANGE_LABELS[previous_change], CHANGE_LABELS[new_change]))
127+
128 def _execute(self, unit_name, change_type):
129 """Execute a hook script for a change.
130 """
131@@ -225,3 +269,10 @@
132
133 # Execute the change.
134 return self._executor(context, change)
135+
136+ def _purge_execute(self, unit_name, change_type):
137+ """Maintain the membership in the event of a purge."""
138+ if change_type == ADDED:
139+ self._members.append(unit_name)
140+ elif change_type == REMOVED:
141+ self._members.remove(unit_name)
142
143=== modified file 'juju/hooks/tests/test_scheduler.py'
144--- juju/hooks/tests/test_scheduler.py 2011-12-12 01:56:05 +0000
145+++ juju/hooks/tests/test_scheduler.py 2012-02-15 05:08:17 +0000
146@@ -1,6 +1,6 @@
147 import logging
148
149-from twisted.internet.defer import inlineCallbacks
150+from twisted.internet.defer import inlineCallbacks, succeed
151
152 from juju.hooks.scheduler import HookScheduler
153 from juju.state.hook import RelationChange
154@@ -24,6 +24,7 @@
155
156 def collect_executor(self, context, change):
157 self.executions.append((context, change))
158+ return succeed(True)
159
160 # Event reduction/coalescing cases
161 def test_reduce_removed_added(self):
162@@ -113,13 +114,12 @@
163
164 @inlineCallbacks
165 def test_membership_visibility_per_change(self):
166- """Hooks are executed against changes, those changes are
167- associated to a temporal timestamp, however the changes
168- are scheduled for execution, and the state/time of the
169- world may have advanced, to present a logically consistent
170- view, we try to gaurantee at a minimum, that hooks will
171- always see the membership of a relations it was at the
172- time of their associated change.
173+ """Hooks are executed against changes, with changes associated
174+ to a temporal timestamp. However when the changes are
175+ scheduled for execution, the state/time of the world may have
176+ advanced. To present a logically consistent view, we guarantee
177+ at a minimum that hooks will always see the membership of a
178+ relations as it was at the time of their associated change.
179 """
180 self.scheduler.notify_change(
181 old_units=[], new_units=["u-1", "u-2"])
182@@ -152,8 +152,8 @@
183
184 @inlineCallbacks
185 def test_membership_visibility_with_change(self):
186- """We express a stronger guarantee of the above, namely that
187- a hook wont see any 'active' members in a membership list, that
188+ """We express a stronger guarantee of the above, namely that a
189+ hook won't see any 'active' members in a membership list, that
190 it hasn't previously been given a notify of before.
191 """
192 self.scheduler.notify_change(
193@@ -197,3 +197,58 @@
194 RelationChange("", "", ""))
195 members = yield context.get_members()
196 self.assertEqual(members, [])
197+
198+ @inlineCallbacks
199+ def test_purge(self):
200+ """Verify calling purge ensures enqueued changes are not executed."""
201+ self.scheduler.notify_change(
202+ old_units=["u-1", "u-2"],
203+ new_units=["u-2", "u-3", "u-4"],
204+ modified=["u-2"])
205+ self.scheduler.purge()
206+ d = self.scheduler.run()
207+ self.scheduler.stop()
208+ yield d
209+
210+ self.assertEqual(self.executions, [])
211+ self.assertEqual(
212+ self.log_stream.getvalue(),
213+ "relation change old:['u-1', 'u-2'], new:['u-2', 'u-3', 'u-4'],"
214+ " modified:['u-2']\n"
215+ "purging requested, purge clock: 1\n"
216+ "start\n"
217+ "purging hook for u-3:joined\n"
218+ "purging hook for u-4:joined\n"
219+ "purging hook for u-1:departed\n"
220+ "purging hook for u-2:modified\n"
221+ "stop\n")
222+
223+ @inlineCallbacks
224+ def test_first_purge_then_more_changes_after_reset(self):
225+ """Verify that purging, then adding more changes, schedules properly."""
226+ self.scheduler.notify_change(
227+ old_units=[], new_units=["u-1", "u-2"])
228+ self.scheduler.purge()
229+ self.scheduler.notify_change(
230+ old_units=["u-1", "u-2"], new_units=["u-2", "u-3"], modified=["u-2"])
231+ d = self.scheduler.run()
232+ self.scheduler.stop()
233+ yield d
234+
235+ self.assertEqual(len(self.executions), 2)
236+ self.assertEqual(self.executions[0][1].change_type, "joined")
237+ self.assertEqual((yield self.executions[0][0].get_members()),
238+ ["u-3"])
239+ self.assertEqual(self.executions[1][1].change_type, "joined")
240+ self.assertEqual((yield self.executions[1][0].get_members()),
241+ ["u-2", "u-3"])
242+ self.assertEqual(
243+ self.log_stream.getvalue(),
244+ "relation change old:[], new:['u-1', 'u-2'], modified:()\n"
245+ "purging requested, purge clock: 1\n"
246+ "relation change old:['u-1', 'u-2'], new:['u-2', 'u-3'],"
247+ " modified:['u-2']\n"
248+ "purging hook for u-2:joined\nstart\n"
249+ "executing hook for u-3:joined\n"
250+ "executing hook for u-2:joined\n"
251+ "stop\n")
252
253=== modified file 'juju/unit/lifecycle.py'
254--- juju/unit/lifecycle.py 2012-01-28 08:45:10 +0000
255+++ juju/unit/lifecycle.py 2012-02-15 05:08:17 +0000
256@@ -507,10 +507,24 @@
257 # And start the watcher.
258 if start_watches and not self.watching:
259 yield self._watcher.start()
260- finally:
261- self._run_lock.release()
262- self._log.debug(
263- "started relation:%s lifecycle", self._relation_name)
264+ self._log.debug("started relation:%s lifecycle",
265+ self._relation_name)
266+ finally:
267+ yield self._run_lock.release()
268+
269+ @inlineCallbacks
270+ def purge(self):
271+ """Purge any currnt relation change hooks.
272+
273+ This method should be called when a hook has errored.
274+ """
275+ yield self._run_lock.acquire()
276+ try:
277+ self._scheduler.purge()
278+ self._log.debug("purged relation:%s lifecycle",
279+ self._relation_name)
280+ finally:
281+ yield self._run_lock.release()
282
283 @inlineCallbacks
284 def stop(self, stop_watches=True):
285@@ -526,9 +540,10 @@
286 self._watcher.stop()
287 if self._scheduler.running:
288 self._scheduler.stop()
289+ self._log.debug("stopped relation:%s lifecycle",
290+ self._relation_name)
291 finally:
292 yield self._run_lock.release()
293- self._log.debug("stopped relation:%s lifecycle", self._relation_name)
294
295 @inlineCallbacks
296 def depart(self):
297@@ -570,8 +585,16 @@
298 hook_names = ["%s-relation-changed" % self._relation_name]
299
300 invoker = self._get_invoker(context, change)
301+ purge = False
302 for hook_name in hook_names:
303- yield self._execute_hook(invoker, hook_name, change)
304+ if purge:
305+ # Effectively this will purge a -changed hook that is
306+ # called after -joined (as part of hook exec
307+ # semantics)
308+ self._log.debug("Purging hook %s (chained with %s)",
309+ hook_name, change.change_type)
310+ else:
311+ purge = yield self._execute_hook(invoker, hook_name, change)
312
313 @inlineCallbacks
314 def _execute_hook(self, invoker, hook_name, change):
315@@ -593,5 +616,7 @@
316 self._log.info(
317 "Invoked error handler for %s hook", hook_name)
318 yield self._error_handler(change, e)
319+ returnValue(True)
320 else:
321 yield self._run_lock.release()
322+ returnValue(False)
323
324=== modified file 'juju/unit/tests/test_lifecycle.py'
325--- juju/unit/tests/test_lifecycle.py 2012-01-11 09:37:48 +0000
326+++ juju/unit/tests/test_lifecycle.py 2012-02-15 05:08:17 +0000
327@@ -9,23 +9,19 @@
328
329 from twisted.internet.defer import inlineCallbacks, Deferred, fail, returnValue
330
331-from juju.unit.lifecycle import (
332- UnitLifecycle, UnitRelationLifecycle, RelationInvoker)
333-
334+from juju.errors import CharmInvocationError, CharmError
335 from juju.hooks.invoker import Invoker
336 from juju.hooks.executor import HookExecutor
337-
338-from juju.errors import CharmInvocationError, CharmError
339-
340+from juju.hooks.scheduler import HookScheduler
341+from juju.lib.testing import TestCase
342+from juju.lib.mocker import MATCH
343 from juju.state.endpoint import RelationEndpoint
344+from juju.state.hook import RelationChange
345 from juju.state.relation import ClientServerUnitWatcher
346 from juju.state.service import NO_HOOKS
347 from juju.state.tests.test_relation import RelationTestBase
348-from juju.state.hook import RelationChange
349-
350-
351-from juju.lib.testing import TestCase
352-from juju.lib.mocker import MATCH
353+from juju.unit.lifecycle import (
354+ UnitLifecycle, UnitRelationLifecycle, RelationInvoker)
355
356
357 class LifecycleTestBase(RelationTestBase):
358@@ -1077,13 +1073,11 @@
359 @inlineCallbacks
360 def test_lock_start_stop(self):
361 """
362- The relation lifecycle, internally uses a lock when its interacting
363- with zk, and acquires the lock to protct its internal data structures.
364+ The relation lifecycle internally uses a lock when it's interacting
365+ with zk and acquires the lock to protect its internal data structures.
366 """
367-
368 original_method = ClientServerUnitWatcher.start
369 watcher = self.mocker.patch(ClientServerUnitWatcher)
370-
371 finish_callback = Deferred()
372
373 @inlineCallbacks
374@@ -1098,7 +1092,6 @@
375 start_complete = self.lifecycle.start()
376 stop_complete = self.lifecycle.stop()
377
378- yield self.sleep(0.1)
379 self.assertFalse(start_complete.called)
380 self.assertFalse(stop_complete.called)
381 finish_callback.callback(True)
382@@ -1107,6 +1100,32 @@
383 self.assertTrue(stop_complete.called)
384
385 @inlineCallbacks
386+ def test_purge_uses_locking(self):
387+ """Verify purge uses the lifecycle's run lock."""
388+ original_method = ClientServerUnitWatcher.start
389+ watcher = self.mocker.patch(ClientServerUnitWatcher)
390+ finish_callback = Deferred()
391+
392+ @inlineCallbacks
393+ def long_op(*args):
394+ yield finish_callback
395+ yield original_method(*args)
396+
397+ watcher.start()
398+ self.mocker.call(long_op, with_object=True)
399+ self.mocker.replay()
400+
401+ start_complete = self.lifecycle.start()
402+ purge_complete = self.lifecycle.purge()
403+
404+ self.assertFalse(start_complete.called)
405+ self.assertFalse(purge_complete.called)
406+ finish_callback.callback(True)
407+
408+ yield start_complete
409+ self.assertTrue(purge_complete.called)
410+
411+ @inlineCallbacks
412 def test_start_scheduler(self):
413 yield self.lifecycle.start(start_scheduler=False)
414 self.assertTrue(self.lifecycle.watching)
415
416=== modified file 'juju/unit/tests/test_workflow.py'
417--- juju/unit/tests/test_workflow.py 2012-01-11 09:37:48 +0000
418+++ juju/unit/tests/test_workflow.py 2012-02-15 05:08:17 +0000
419@@ -4,11 +4,13 @@
420 import csv
421 import os
422
423-from twisted.internet.defer import inlineCallbacks, returnValue
424+from twisted.internet.defer import Deferred, inlineCallbacks, returnValue
425
426+from juju.errors import CharmInvocationError
427+from juju.hooks.invoker import Invoker
428+from juju.lib.mocker import MATCH
429 from juju.unit.tests.test_lifecycle import LifecycleTestBase
430 from juju.unit.lifecycle import UnitLifecycle, UnitRelationLifecycle
431-
432 from juju.unit.workflow import (
433 UnitWorkflowState, RelationWorkflowState, WorkflowStateClient,
434 is_unit_running, is_relation_running)
435@@ -481,8 +483,7 @@
436 yield self.setup_default_test_relation()
437 self.relation_name = self.states["service_relation"].relation_name
438 self.juju_directory = self.makeDir()
439- self.log_stream = self.capture_logging(
440- "unit.relation.lifecycle", logging.DEBUG)
441+ self.log = self.capture_logging(level=logging.DEBUG)
442
443 self.lifecycle = UnitRelationLifecycle(
444 self.client,
445@@ -497,7 +498,36 @@
446
447 self.workflow = RelationWorkflowState(
448 self.client, self.states["unit_relation"],
449- self.states["unit"].unit_name, self.lifecycle, self.state_directory)
450+ self.states["unit"].unit_name, self.lifecycle,
451+ self.state_directory)
452+
453+ self.original_invoker = None
454+ self.invoker = None
455+
456+ def mock_long_hook(self, hook_name, code=0):
457+ """Control the speed of hook execution with a returned `Deferred`.
458+
459+ Calls the actual hook, but overrides the return value with
460+ `code` so as to simulate hooks that can fail or succeed.
461+ """
462+ if not self.original_invoker:
463+ self.original_invoker = Invoker.__call__
464+ self.invoker = self.mocker.patch(Invoker)
465+
466+ finish_callback = Deferred()
467+
468+ @inlineCallbacks
469+ def long_hook(ctx, hook_path):
470+ yield finish_callback
471+ yield self.original_invoker(ctx, hook_path)
472+ if code:
473+ raise CharmInvocationError(hook_path, code)
474+ else:
475+ returnValue(code)
476+
477+ self.invoker(MATCH(lambda x: x.endswith(hook_name)))
478+ self.mocker.call(long_hook, with_object=True)
479+ return finish_callback
480
481 @inlineCallbacks
482 def test_is_relation_running(self):
483@@ -578,7 +608,7 @@
484 current_state = yield self.workflow.get_state()
485
486 # Add a new unit, and wait for the broken hook to result in
487- # the transition to the down state.
488+ # the transition to the error state.
489 yield self.add_opposite_service_unit(self.states)
490 yield self.wait_on_state(self.workflow, "error")
491
492@@ -597,6 +627,102 @@
493 "error_message": error}})
494
495 @inlineCallbacks
496+ def test_purge_queued_hook_after_error(self):
497+ """Verify queued hooks are purged to avoid invalid transitions.
498+
499+ The scenario being tested is that a joined event, which always
500+ calls -joined, then -changed hooks, will execute as a single
501+ unit, purging the second hook if the first hook fails.
502+ """
503+ self.write_exit_hook("%s-relation-joined" % self.relation_name, 1)
504+ self.write_exit_hook("%s-relation-changed" % self.relation_name, 0)
505+
506+ current_state = yield self.workflow.get_state()
507+ self.assertEqual(current_state, None)
508+ yield self.workflow.fire_transition("start")
509+ yield self.assertState(self.workflow, "up")
510+ current_state = yield self.workflow.get_state()
511+
512+ # Add a new unit, then wait for the broken -joined hook to
513+ # result in the transition to the error state. Relation
514+ # changes will always queue up -joined and -changed.
515+ yield self.add_opposite_service_unit(self.states)
516+ yield self.wait_on_state(self.workflow, "error")
517+
518+ f_state, history, zk_state = yield self.read_persistent_state(
519+ history_id=self.workflow.zk_state_id)
520+
521+ self.assertEqual(f_state, zk_state)
522+ error = "Error processing '%s': exit code 1." % (
523+ os.path.join(self.unit_directory,
524+ "charm", "hooks", "app-relation-joined"))
525+ self.assertEqual(f_state,
526+ {"state": "error",
527+ "state_variables": {
528+ "change_type": "joined",
529+ "error_message": error}})
530+ self.assertIn(
531+ "Purging hook app-relation-changed (chained with joined)",
532+ self.log.getvalue())
533+
534+ @inlineCallbacks
535+ def test_purge_multiple_queued_hooks_after_error(self):
536+ """Ensure queued hooks are purged to allow for a clean reset."""
537+ joined_hook = "%s-relation-joined" % self.relation_name
538+ d = self.mock_long_hook(joined_hook, 1)
539+ self.mocker.replay()
540+
541+ self.write_exit_hook(joined_hook)
542+ self.write_exit_hook("%s-relation-changed" % self.relation_name)
543+
544+ # Start workflow, verify in up state
545+ current_state = yield self.workflow.get_state()
546+ self.assertEqual(current_state, None)
547+ yield self.workflow.fire_transition("start")
548+ yield self.assertState(self.workflow, "up")
549+ current_state = yield self.workflow.get_state()
550+
551+ # Add a new unit, then wait for the broken -joined hook to
552+ # result in the transition to the error state. Relation
553+ # changes will always queue up -joined and -changed.
554+ wordpress_states = yield self.add_opposite_service_unit(self.states)
555+
556+ # Add another unit. However, the app-relation-joined hook is
557+ # currently overlapping in its execution, so this hook will
558+ # also get purged.
559+ yield self.add_related_service_unit(wordpress_states)
560+
561+ # Now complete the failing app-relation-joined hook. Verify
562+ # this takes the workflow in an error state, and that this
563+ # state is recorded into ZK.
564+ d.callback(None)
565+ yield self.wait_on_state(self.workflow, "error")
566+ f_state, history, zk_state = yield self.read_persistent_state(
567+ history_id=self.workflow.zk_state_id)
568+ self.assertEqual(f_state, zk_state)
569+ error = "Error processing '%s': exit code 1." % (
570+ os.path.join(self.unit_directory,
571+ "charm", "hooks", "app-relation-joined"))
572+ self.assertEqual(f_state,
573+ {"state": "error",
574+ "state_variables": {
575+ "change_type": "joined",
576+ "error_message": error}})
577+
578+ # Reset the workflow and verify back to up
579+ yield self.workflow.fire_transition("reset")
580+ yield self.assertState(self.workflow, "up")
581+
582+ # Verify that both queued hooks are purged accordingly by
583+ # looking at the log
584+ self.assertIn(
585+ "Purging hook app-relation-changed (chained with joined)",
586+ self.log.getvalue())
587+ self.assertIn(
588+ "purging hook for wordpress/1:joined",
589+ self.log.getvalue())
590+
591+ @inlineCallbacks
592 def test_depart(self):
593 """When the workflow is transition to the down state, a relation
594 broken hook is executed, and the unit stops responding to relation
595
596=== modified file 'juju/unit/workflow.py'
597--- juju/unit/workflow.py 2012-01-11 09:37:48 +0000
598+++ juju/unit/workflow.py 2012-02-15 05:08:17 +0000
599@@ -500,6 +500,7 @@
600
601 @param: error: The error from hook invocation.
602 """
603+ yield self._lifecycle.purge()
604 yield self.fire_transition("error",
605 change_type=relation_change.change_type,
606 error_message=str(error))
