Merge lp:~rvb/maas/retry-power-changes into lp:~maas-committers/maas/trunk

Proposed by Raphaël Badin
Status: Merged
Approved by: Raphaël Badin
Approved revision: no longer in the source branch.
Merged at revision: 2595
Proposed branch: lp:~rvb/maas/retry-power-changes
Merge into: lp:~maas-committers/maas/trunk
Diff against target: 424 lines (+331/-17)
4 files modified
src/provisioningserver/rpc/clusterservice.py (+5/-5)
src/provisioningserver/rpc/power.py (+83/-0)
src/provisioningserver/rpc/tests/test_clusterservice.py (+11/-12)
src/provisioningserver/rpc/tests/test_power.py (+232/-0)
To merge this branch: bzr merge lp:~rvb/maas/retry-power-changes
Reviewer Review Type Date Requested Status
Gavin Panella (community) Needs Fixing
Julian Edwards (community) Approve
Review via email: mp+227884@code.launchpad.net

Commit message

Add utility to retry powering up or down a node.

Description of the change

This is part of the robustness work.

In between retries, the code waits a little longer each time. This is to "avoid" a race condition where a node gets power-cycled and the power state check happens while the cycle is still in progress. This is obviously a bit fragile, but it's better than nothing.

I deliberately didn't use the retries() utility that we have because: a) I don't think the waiting time should be constant (to avoid the race condition mentioned above); and b) I didn't want a fixed overall timeout: some power templates take a long time to execute, and that shouldn't mean they can't be retried.
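In rough outline (a simplified version of the change_power_state() helper in the diff at the bottom of this page; the real one also bails out of the query/retry loop for power types that can't report their state):

    @inlineCallbacks
    def change_power_state(system_id, power_type, power_change, context,
                           clock=reactor):
        # Wait a little longer after every attempt before checking whether
        # the change actually took effect.
        for waiting_time in (3, 5, 10):
            yield deferToThread(
                perform_power_change, system_id, power_type, power_change,
                context)
            yield pause(waiting_time, clock)
            new_power_state = yield deferToThread(
                perform_power_change, system_id, power_type, 'query', context)
            if new_power_state == power_change:
                return
        # Still not in the requested state after three attempts: flag it.
        getRegionClient().mark_node_broken(system_id)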

Revision history for this message
Julian Edwards (julian-edwards) wrote :

Good branch but a few things to fix:

1. Use the test matchers, Luke! (See inline)
2. You're not catching Failures from the deferToThread calls (yield turns errbacks into exceptions)
3. The tests aren't very Twisted-like in terms of using a Clock(), which means that you've not tested the pause durations anywhere. If you don't know how to use Clock() ping me on IRC and I'll show you.
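For illustration, the basic Clock() pattern is something like this (a sketch; deferLater stands in for whatever pause/callLater the code under test uses, as long as it is handed the test's clock rather than the global reactor):

    from twisted.internet.task import Clock, deferLater

    clock = Clock()
    d = deferLater(clock, 3, lambda: 'done')
    assert not d.called        # no real time passes in the test
    clock.advance(3)           # simulate three seconds elapsing
    assert d.called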

review: Needs Fixing
Revision history for this message
Raphaël Badin (rvb) wrote :

Thanks for the review. First batch of changes published. I'm working on your suggestion to use Clock() now.

Revision history for this message
Julian Edwards (julian-edwards) wrote :

On Wednesday 23 Jul 2014 12:23:49 you wrote:
> Not dealing with errors was deliberate: I don't want the retry loop to catch
> failures at all. Failures should only happen when there is a problem
> executing the power template (syntax problem, missing argument, etc.) and I
> don't think we should catch any of those. If something inside
> perform_power_change() raises an exception, none of the Twisted magic will
> swallow it, right? It will simply be "raised" from 'yield deferToThread(…)',
> correct?

I'm afraid I don't agree with this approach. :)

Yes, Twisted internally turns it into a Failure, and then yield will re-raise
the exception it contains.

We've had support requests in the past for "power not working" that were
caused by a dodgy template someone had edited, and the user was not looking at
the cluster log at all!

I really think that we should catch these and flag them up somewhere.
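(Concretely, catching at the point where the template runs is cheap; this is essentially the shape perform_power_change() ends up with in the diff below:)

    def perform_power_change(system_id, power_type, power_change, context):
        action = PowerAction(power_type)
        try:
            return action.execute(power_change=power_change, **context)
        except Exception:
            # A dodgy template shouldn't fail silently: flag the node so the
            # problem is visible outside the cluster log.
            getRegionClient().mark_node_broken(system_id)
            raise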

> That's not part of the interface (yet). I think the node event log will
> contain the explanation of why it failed but we can always refine this
> later based on actual user testing.

OK.

I'd like to see a failure_reason text on the node for this situation. The
user should be able to see this alongside the BROKEN status IMO.

> Fixed (I didn't realize we had so many matchers to deal with mocks). Here
> and elsewhere.

They're great, add more :)

Revision history for this message
Raphaël Badin (rvb) wrote :

On 07/23/2014 02:33 PM, Julian Edwards wrote:
> On Wednesday 23 Jul 2014 12:23:49 you wrote:
>> Not dealing with errors was deliberate: I don't want the retry loop to catch
>> failures at all. Failures should only happen when there is a problem
>> executing the power template (syntax problem, missing argument, etc.) and I
>> don't think we should catch any of those. If something inside
>> perform_power_change() raises an exception, none of the Twisted magic will
>> swallow it, right? It will simply be "raised" from 'yield deferToThread(…)',
>> correct?
>
> I'm afraid I don't agree with this approach. :)
>
> Yes, Twisted internally turns it into a Failure, and then yield will re-raise
> the exception it contains.
>
> We've had support requests in the past for "power not working" that were
> caused by a dodgy template someone had edited, and the user was not looking
> at the cluster log at all!

Hum, I still think the stack trace belongs in the log file… but one thing
is sure (and means that, like you said, we need to take care of this
case): we want the node marked as broken when the template execution
blows up.

>> Fixed (I didn't realize we had so many matchers to deal with mocks). Here
>> and elsewhere.
>
> They're great, add more :)

The errors I'm getting aren't so great sometimes:
[...]
      raise mismatch_error
MismatchError: !=:
reference = [call(context-key-22VPn2=u'context-val-JmgwGQ',
power_change=u'on')]
actual = [call(context-key-22VPn2=u'context-val-JmgwGQ',
power_change=u'on')]
: after <function get_mock_calls> on <Mock name='mock().execute'
id='140547462746960'>: calls do not match

Revision history for this message
Julian Edwards (julian-edwards) wrote :

On Wednesday 23 Jul 2014 12:48:24 you wrote:
> Hum, I still think the stack trace belongs in the log file… but one thing
> is sure (and means that, like you said, we need to take care of this
> case): we want the node marked as broken when the template execution
> blows up.

There's no point putting a stack trace in the log for template errors. Other
errors, yes.

Revision history for this message
Julian Edwards (julian-edwards) wrote :

Getting there! Couple of comments inline

Revision history for this message
Raphaël Badin (rvb) wrote :

Thanks for the review!

Revision history for this message
Julian Edwards (julian-edwards) wrote :

 review: approve

On Thursday 24 Jul 2014 10:36:00 you wrote:
> Turns out simply using 'yield power.change_power_state(…, clock=clock)'
> (instead of calling clock.callLater) is enough to remove the
> clock.advance(0.1). I /think/ this is due to the fact that all the waiting
> done inside power.change_power_state is using the clock that you pass to
> the method, so the initial yield ensures the first run is done (and thus the
> first check succeeds); after that, you can control the waiting by using
> clock.advance.

\o/

Awesome.

review: Approve
Revision history for this message
Julian Edwards (julian-edwards) :
Revision history for this message
Raphaël Badin (rvb) :
Revision history for this message
Gavin Panella (allenap) wrote :

A couple of things I noticed mean this branch needs fixing. Also, perform_power_change('query') is a weird concoction; it's not a change.

Revision history for this message
Gavin Panella (allenap) :
review: Needs Fixing
Revision history for this message
Julian Edwards (julian-edwards) wrote :

On Sunday 27 Jul 2014 11:47:18 you wrote:
> Same here, mark_node_broken() does not exist.
>
> I think this highlights a shortcoming with using Mock: it lets you do
> anything. A couple of things come to mind that might help here:
>
> 1. Don't mock the whole client object, just mock __call__.
> 2. Use mock.create_autospec() to create mocks that only allow things that the
> underlying implementation does (so you can mock the whole client).

*cough* end-to-end fixture *cough*
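A tiny illustration of the create_autospec() idea from point 2 above (the Client class here is a made-up stand-in; MarkNodeBroken is the region-side command mentioned later in this thread):

    from mock import create_autospec

    class Client:
        """Made-up stand-in for the real region client."""
        def __call__(self, command, **kwargs):
            pass

    client = create_autospec(Client, instance=True)
    client(MarkNodeBroken, system_id="node-1")  # OK: matches the spec's __call__
    client.mark_node_broken("node-1")           # AttributeError: not in the spec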

Revision history for this message
Gavin Panella (allenap) wrote :

> *cough* end-to-end fixture *cough*

This would mean configuring Django and creating databases in the pserv tests. The pserv tests are currently fast and quick to iterate on, and they won't be if we do that.

We actually need to test against the RPC schemas. That's why I've been doing my utmost to keep them tightly defined. The call_responder helper function does that back-and-forth through the schemas. Where we need to stub out actual RPC calls we ought to use call_responder to keep us honest.
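That is the pattern this branch's own tests already follow; paraphrasing test_clusterservice.py from the diff below:

    change_power_state = self.patch(clusterservice, "change_power_state")
    d = call_responder(Cluster(), cluster.PowerOn, {
        "system_id": "id", "power_type": "type", "context": {},
    })

    def check(response):
        # Both the arguments and the response have been pushed through the
        # PowerOn command schema, so a bogus argument name fails loudly.
        self.assertThat(
            change_power_state,
            MockCalledOnceWith("id", "type", power_change='on', context={}))

    return d.addCallback(check)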

Revision history for this message
Julian Edwards (julian-edwards) wrote :

On Sunday 27 Jul 2014 13:49:06 Gavin Panella wrote:
> > *cough* end-to-end fixture *cough*
>
> This would mean configuring Django and creating databases in the pserv
> tests. The pserv tests are currently fast and quick to iterate on, and they
> won't be if we do that.
>
> We actually need to test against the RPC schemas. That's why I've been doing
> my utmost to keep them tightly defined. The call_responder helper function
> does that back-and-forth through the schemas. Where we need to stub out
> actual RPC calls we ought to use call_responder to keep us honest.

I didn't mean that we need a real Django running; a mock one will do.
But we need something that responds like a real call to avoid the mistake that
rvb made.

Revision history for this message
Gavin Panella (allenap) wrote :

> I didn't mean that we need a real Django running; a mock one
> will do.  But we need something that responds like a real call to
> avoid the mistake that rvb made.

Ah, okeydoke, then I agree. I think we can dynamically create an amp.AMP
subclass with stub responders that correspond to a subset of amp.Command
calls.

Maybe something like:

    def make_stub_amp_thing(*commands):
        responders = {
            "responder_for_%s" % command.commandName:
                command.responder(Mock())
            for command in commands
        }
        # type() wants a tuple of base classes.
        return type("DynamicAMP", (amp.AMP,), responders)

    StubAMP = make_stub_amp_thing(MakeMeASandwich)
    StubAMP.responder_for_MakeMeASandwich.return_value = "Use sudo"

    call_responder(StubAMP(), MakeMeASandwich, {
        "fillings": ["ketchup", "crisps", "scratchings"],
    })

The `Client` returned from ClusterService.getClient() needs to be
patched to use call_responder in this way. Or, as ClusterRPCFixture
does, the connections that the client uses need to be patched.

Revision history for this message
Julian Edwards (julian-edwards) wrote :

Right, my end aim is to be able to do client(GetBootSources) in the real code
and have that talk to a mock backend with canned responses, maybe even hook it
into a function in the tests that does that responding.

Perhaps we can talk more about it at the sprint this week!
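A canned-response client along those lines could be as small as this (hypothetical names; succeed is twisted.internet.defer.succeed):

    from twisted.internet.defer import succeed

    class CannedRegionClient:
        """Callable like the real client: client(Command, **kwargs) -> Deferred."""

        def __init__(self, responses):
            self.responses = responses  # maps command class -> canned response
            self.calls = []

        def __call__(self, command, **kwargs):
            self.calls.append((command, kwargs))
            return succeed(self.responses[command])

A test could then assert against client.calls, and a bogus call like client.mark_node_broken(...) simply isn't possible.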

Revision history for this message
Raphaël Badin (rvb) wrote :

> The client has no method mark_node_broken(). This needs to be client(MarkNodeBroken, system_id=system_id).
>

Damn. Using Mock is indeed a bit nasty, and I wasn't really happy about
this… but I thought you had the end-to-end fixture in progress, so my
intention was to switch to that once it was done. Let's work on the
fixture this week.

Preview Diff

=== modified file 'src/provisioningserver/rpc/clusterservice.py'
--- src/provisioningserver/rpc/clusterservice.py 2014-07-22 14:38:34 +0000
+++ src/provisioningserver/rpc/clusterservice.py 2014-07-24 10:46:38 +0000
@@ -32,7 +32,6 @@
     ArchitectureRegistry,
     PowerTypeRegistry,
     )
-from provisioningserver.power.poweraction import PowerAction
 from provisioningserver.rpc import (
     cluster,
     common,
@@ -45,6 +44,7 @@
     get_preseed_data,
     validate_license_key,
     )
+from provisioningserver.rpc.power import change_power_state
 from twisted.application.internet import (
     StreamServerEndpointService,
     TimerService,
@@ -150,15 +150,15 @@
     @cluster.PowerOn.responder
     def power_on(self, system_id, power_type, context):
         """Turn a node on."""
-        action = PowerAction(power_type)
-        action.execute(power_change='on', **context)
+        change_power_state(
+            system_id, power_type, power_change='on', context=context)
         return {}
 
     @cluster.PowerOff.responder
     def power_off(self, system_id, power_type, context):
         """Turn a node off."""
-        action = PowerAction(power_type)
-        action.execute(power_change='off', **context)
+        change_power_state(
+            system_id, power_type, power_change='off', context=context)
         return {}
 
     @amp.StartTLS.responder
=== added file 'src/provisioningserver/rpc/power.py'
--- src/provisioningserver/rpc/power.py 1970-01-01 00:00:00 +0000
+++ src/provisioningserver/rpc/power.py 2014-07-24 10:46:38 +0000
@@ -0,0 +1,83 @@
+# Copyright 2014 Canonical Ltd. This software is licensed under the
+# GNU Affero General Public License version 3 (see the file LICENSE).
+
+"""RPC helpers relating to power control."""
+
+from __future__ import (
+    absolute_import,
+    print_function,
+    unicode_literals,
+    )
+
+str = None
+
+__metaclass__ = type
+__all__ = [
+    "change_power_state",
+]
+
+
+from provisioningserver.power.poweraction import PowerAction
+from provisioningserver.rpc import getRegionClient
+from provisioningserver.utils import pause
+from twisted.internet import reactor
+from twisted.internet.defer import inlineCallbacks
+from twisted.internet.threads import deferToThread
+
+# List of power_types that support querying the power state.
+# change_power_state() will only retry changing the power
+# state for these power types.
+# This is meant to be temporary until all the power types support
+# querying the power state of a node.
+QUERY_POWER_TYPES = ['amt', 'ipmi']
+
+
+def perform_power_change(system_id, power_type, power_change, context):
+    """Issue the given `power_change` command.
+
+    If any exception is raised during the execution of the command, mark
+    the node as broken and re-raise the exception.
+    """
+    action = PowerAction(power_type)
+    try:
+        return action.execute(power_change=power_change, **context)
+    except Exception:
+        client = getRegionClient()
+        client.mark_node_broken(system_id)
+        raise
+
+
+@inlineCallbacks
+def change_power_state(system_id, power_type, power_change, context,
+                       clock=reactor):
+    """Change the power state of a node.
+
+    Monitor the result of the power change action by querying the
+    power state of the node and mark the node as failed if it doesn't
+    work.
+    """
+    assert power_change in ('on', 'off'), (
+        "Unknown power change: %s" % power_change)
+
+    # Use increasing waiting times to work around race conditions that could
+    # arise when power-cycling the node.
+    for waiting_time in (3, 5, 10):
+        # Perform power change.
+        yield deferToThread(
+            perform_power_change, system_id, power_type, power_change,
+            context)
+        # If the power_type doesn't support querying the power state:
+        # exit now.
+        if power_type not in QUERY_POWER_TYPES:
+            return
+        # Wait to let the node some time to change its power state.
+        yield pause(waiting_time, clock)
+        # Check current power state.
+        new_power_state = yield deferToThread(
+            perform_power_change, system_id, power_type, 'query', context)
+        if new_power_state == power_change:
+            return
+
+    # Failure: the power state of the node hasn't changed: mark it as broken.
+    client = getRegionClient()
+    client.mark_node_broken(system_id)
=== modified file 'src/provisioningserver/rpc/tests/test_clusterservice.py'
--- src/provisioningserver/rpc/tests/test_clusterservice.py 2014-07-22 14:38:34 +0000
+++ src/provisioningserver/rpc/tests/test_clusterservice.py 2014-07-24 10:46:38 +0000
@@ -857,8 +857,8 @@
         responder = protocol.locateResponder(self.command.commandName)
         self.assertIsNot(responder, None)
 
-    def test_executes_a_power_action(self):
-        PowerAction = self.patch(clusterservice, "PowerAction")
+    def test_executes_change_power_state(self):
+        change_power_state = self.patch(clusterservice, "change_power_state")
 
         system_id = factory.make_name("system_id")
         power_type = factory.make_name("power_type")
@@ -873,17 +873,16 @@
         })
 
         def check(response):
-            self.assertThat(PowerAction, MockCalledOnceWith(power_type))
             self.assertThat(
-                PowerAction.return_value.execute,
+                change_power_state,
                 MockCalledOnceWith(
-                    power_change=self.expected_power_change,
-                    **context))
+                    system_id, power_type,
+                    power_change=self.expected_power_change, context=context))
         return d.addCallback(check)
 
     def test_power_on_can_propagate_UnknownPowerType(self):
-        PowerAction = self.patch(clusterservice, "PowerAction")
-        PowerAction.side_effect = UnknownPowerType
+        self.patch(clusterservice, "change_power_state").side_effect = (
+            UnknownPowerType)
 
         d = call_responder(Cluster(), self.command, {
             "system_id": "id", "power_type": "type", "context": {},
@@ -897,8 +896,8 @@
         return d.addErrback(check)
 
     def test_power_on_can_propagate_NotImplementedError(self):
-        PowerAction = self.patch(clusterservice, "PowerAction")
-        PowerAction.side_effect = NotImplementedError
+        self.patch(clusterservice, "change_power_state").side_effect = (
+            NotImplementedError)
 
         d = call_responder(Cluster(), self.command, {
             "system_id": "id", "power_type": "type", "context": {},
@@ -912,8 +911,8 @@
         return d.addErrback(check)
 
     def test_power_on_can_propagate_PowerActionFail(self):
-        PowerAction = self.patch(clusterservice, "PowerAction")
-        PowerAction.return_value.execute.side_effect = PowerActionFail
+        self.patch(clusterservice, "change_power_state").side_effect = (
+            PowerActionFail)
 
         d = call_responder(Cluster(), self.command, {
             "system_id": "id", "power_type": "type", "context": {},
=== added file 'src/provisioningserver/rpc/tests/test_power.py'
--- src/provisioningserver/rpc/tests/test_power.py 1970-01-01 00:00:00 +0000
+++ src/provisioningserver/rpc/tests/test_power.py 2014-07-24 10:46:38 +0000
@@ -0,0 +1,232 @@
+# Copyright 2014 Canonical Ltd. This software is licensed under the
+# GNU Affero General Public License version 3 (see the file LICENSE).
+
+"""Tests for :py:module:`~provisioningserver.rpc.power`."""
+
+from __future__ import (
+    absolute_import,
+    print_function,
+    unicode_literals,
+    )
+
+str = None
+
+__metaclass__ = type
+__all__ = []
+
+
+import random
+
+from maastesting.factory import factory
+from maastesting.matchers import (
+    MockCalledOnceWith,
+    MockCallsMatch,
+    MockNotCalled,
+    )
+from maastesting.testcase import MAASTestCase
+from mock import (
+    call,
+    Mock,
+    )
+from provisioningserver.rpc import power
+from testtools.deferredruntest import (
+    assert_fails_with,
+    AsynchronousDeferredRunTest,
+    )
+from twisted.internet.defer import (
+    inlineCallbacks,
+    maybeDeferred,
+    )
+from twisted.internet.task import Clock
+
+
+class TestPowerHelpers(MAASTestCase):
+
+    run_tests_with = AsynchronousDeferredRunTest.make_factory(timeout=5)
+
+    def patch_power_action(self, return_value=None, side_effect=None):
+        """Patch the PowerAction object.
+
+        Patch the PowerAction object so that PowerAction().execute
+        is replaced by a Mock object created using the given `return_value`
+        and `side_effect`.
+
+        This can be used to simulate various successes or failures patterns
+        while manipulating the power state of a node.
+
+        Returns a tuple of mock objects: power.PowerAction and
+        power.PowerAction().execute.
+        """
+        power_action_obj = Mock()
+        power_action_obj_execute = Mock(
+            return_value=return_value, side_effect=side_effect)
+        power_action_obj.execute = power_action_obj_execute
+        power_action = self.patch(power, 'PowerAction')
+        power_action.return_value = power_action_obj
+        return power_action, power_action_obj_execute
+
+    @inlineCallbacks
+    def test_change_power_state_changes_power_state(self):
+        system_id = factory.make_name('system_id')
+        power_type = random.choice(power.QUERY_POWER_TYPES)
+        power_change = random.choice(['on', 'off'])
+        context = {
+            factory.make_name('context-key'): factory.make_name('context-val')
+        }
+        self.patch(power, 'pause')
+        client = Mock()
+        self.patch(power, 'getRegionClient').return_value = client
+        # Patch the power action utility so that it says the node is
+        # in the required power state.
+        power_action, execute = self.patch_power_action(
+            return_value=power_change)
+
+        yield power.change_power_state(
+            system_id, power_type, power_change, context)
+        self.assertThat(
+            execute,
+            MockCallsMatch(
+                # One call to change the power state.
+                call(power_change=power_change, **context),
+                # One call to query the power state.
+                call(power_change='query', **context),
+            ),
+        )
+        # The node hasn't been marked broken.
+        self.assertThat(client.mark_node_broken, MockNotCalled())
+
+    @inlineCallbacks
+    def test_change_power_state_doesnt_retry_for_certain_power_types(self):
+        system_id = factory.make_name('system_id')
+        # Use a power type that is not among power.QUERY_POWER_TYPES.
+        power_type = factory.make_name('power_type')
+        power_change = random.choice(['on', 'off'])
+        context = {
+            factory.make_name('context-key'): factory.make_name('context-val')
+        }
+        self.patch(power, 'pause')
+        client = Mock()
+        self.patch(power, 'getRegionClient').return_value = client
+        power_action, execute = self.patch_power_action(
+            return_value=random.choice(['on', 'off']))
+
+        yield power.change_power_state(
+            system_id, power_type, power_change, context)
+        self.assertThat(
+            execute,
+            MockCallsMatch(
+                # Only one call to change the power state.
+                call(power_change=power_change, **context),
+            ),
+        )
+        # The node hasn't been marked broken.
+        self.assertThat(client.mark_node_broken, MockNotCalled())
+
+    @inlineCallbacks
+    def test_change_power_state_retries_if_power_state_doesnt_change(self):
+        system_id = factory.make_name('system_id')
+        power_type = random.choice(power.QUERY_POWER_TYPES)
+        power_change = 'on'
+        context = {
+            factory.make_name('context-key'): factory.make_name('context-val')
+        }
+        self.patch(power, 'pause')
+        client = Mock()
+        self.patch(power, 'getRegionClient').return_value = client
+        # Simulate a failure to power up the node, then a success.
+        power_action, execute = self.patch_power_action(
+            side_effect=[None, 'off', None, 'on'])
+
+        yield power.change_power_state(
+            system_id, power_type, power_change, context)
+        self.assertThat(
+            execute,
+            MockCallsMatch(
+                call(power_change=power_change, **context),
+                call(power_change='query', **context),
+                call(power_change=power_change, **context),
+                call(power_change='query', **context),
+            )
+        )
+        # The node hasn't been marked broken.
+        self.assertThat(client.mark_node_broken, MockNotCalled())
+
+    @inlineCallbacks
+    def test_change_power_state_marks_the_node_broken_if_failure(self):
+        system_id = factory.make_name('system_id')
+        power_type = random.choice(power.QUERY_POWER_TYPES)
+        power_change = 'on'
+        context = {
+            factory.make_name('context-key'): factory.make_name('context-val')
+        }
+        self.patch(power, 'pause')
+        client = Mock()
+        self.patch(power, 'getRegionClient').return_value = client
+        # Simulate a persistent failure.
+        power_action, execute = self.patch_power_action(return_value='off')
+
+        yield power.change_power_state(
+            system_id, power_type, power_change, context)
+
+        # The node has been marked broken.
+        self.assertThat(
+            client.mark_node_broken, MockCalledOnceWith(system_id))
+
+    def test_change_power_state_marks_the_node_broken_if_exception(self):
+        system_id = factory.make_name('system_id')
+        power_type = random.choice(power.QUERY_POWER_TYPES)
+        power_change = 'on'
+        context = {
+            factory.make_name('context-key'): factory.make_name('context-val')
+        }
+        self.patch(power, 'pause')
+        client = Mock()
+        self.patch(power, 'getRegionClient').return_value = client
+        # Simulate an exception.
+        power_action, execute = self.patch_power_action(side_effect=Exception)
+
+        d = power.change_power_state(
+            system_id, power_type, power_change, context)
+        assert_fails_with(d, Exception)
+        return d.addErrback(
+            lambda failure: self.assertThat(
+                client.mark_node_broken, MockCalledOnceWith(system_id)))
+
+    def test_change_power_state_pauses_in_between_retries(self):
+        system_id = factory.make_name('system_id')
+        power_type = random.choice(power.QUERY_POWER_TYPES)
+        power_change = 'on'
+        context = {
+            factory.make_name('context-key'): factory.make_name('context-val')
+        }
+        client = Mock()
+        self.patch(power, 'getRegionClient').return_value = client
+        # Simulate two failures to power up the node, then a success.
+        power_action, execute = self.patch_power_action(
+            side_effect=[None, 'off', None, 'off', None, 'on'])
+        self.patch(power, "deferToThread", maybeDeferred)
+        clock = Clock()
+
+        calls_and_pause = [
+            ([
+                call(power_change=power_change, **context),
+            ], 3),
+            ([
+                call(power_change='query', **context),
+                call(power_change=power_change, **context),
+            ], 5),
+            ([
+                call(power_change='query', **context),
+                call(power_change=power_change, **context),
+            ], 10),
+            ([
+                call(power_change='query', **context),
+            ], 0),
+        ]
+        calls = []
+        yield power.change_power_state(
+            system_id, power_type, power_change, context, clock=clock)
+        for newcalls, waiting_time in calls_and_pause:
+            calls.extend(newcalls)
+            self.assertThat(execute, MockCallsMatch(*calls))
+            clock.advance(waiting_time)