Merge lp:~rvb/maas/retry-power-changes into lp:~maas-committers/maas/trunk

Proposed by Raphaël Badin
Status: Merged
Approved by: Raphaël Badin
Approved revision: no longer in the source branch.
Merged at revision: 2595
Proposed branch: lp:~rvb/maas/retry-power-changes
Merge into: lp:~maas-committers/maas/trunk
Diff against target: 424 lines (+331/-17)
4 files modified
src/provisioningserver/rpc/clusterservice.py (+5/-5)
src/provisioningserver/rpc/power.py (+83/-0)
src/provisioningserver/rpc/tests/test_clusterservice.py (+11/-12)
src/provisioningserver/rpc/tests/test_power.py (+232/-0)
To merge this branch: bzr merge lp:~rvb/maas/retry-power-changes
Reviewer         Review Type   Status
Gavin Panella    community     Needs Fixing
Julian Edwards   community     Approve
Review via email: mp+227884@code.launchpad.net

Commit message

Add utility to retry powering up or down a node.

Description of the change

This is part of the robustness work.

In between retries, the code waits a little longer each time. This is to "avoid" a race condition where a node gets power-cycled and is still in the middle of the power cycle when the power state check occurs. This is obviously a bit fragile, but it's better than nothing.

I deliberately didn't use the retries() utility that we have because: a) I don't think the waiting time should be constant (to avoid the race condition mentioned above); b) I didn't want a fixed timeout: some power templates take a long time to execute, and that shouldn't mean they can't be retried.
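In short, the loop looks like this (a condensed sketch of src/provisioningserver/rpc/power.py from the preview diff below; the real version also skips the query-and-retry part for power types that aren't in QUERY_POWER_TYPES):

    from provisioningserver.power.poweraction import PowerAction
    from provisioningserver.rpc import getRegionClient
    from provisioningserver.utils import pause
    from twisted.internet import reactor
    from twisted.internet.defer import inlineCallbacks
    from twisted.internet.threads import deferToThread

    def perform_power_change(system_id, power_type, power_change, context):
        # Run the power template; if it blows up, mark the node broken.
        action = PowerAction(power_type)
        try:
            return action.execute(power_change=power_change, **context)
        except Exception:
            getRegionClient().mark_node_broken(system_id)
            raise

    @inlineCallbacks
    def change_power_state(system_id, power_type, power_change, context,
                           clock=reactor):
        # Wait a little longer on each attempt to ride out the window in
        # which the node is still power-cycling when we query it.
        for waiting_time in (3, 5, 10):
            # Issue the power change (the template runs in a thread).
            yield deferToThread(
                perform_power_change, system_id, power_type, power_change,
                context)
            # Give the node time to settle, then query its power state.
            yield pause(waiting_time, clock)
            new_power_state = yield deferToThread(
                perform_power_change, system_id, power_type, 'query', context)
            if new_power_state == power_change:
                return
        # The power state never changed: mark the node as broken.
        getRegionClient().mark_node_broken(system_id)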

Revision history for this message
Julian Edwards (julian-edwards) wrote :

Good branch but a few things to fix:

1. Use the test matchers, Luke! (See inline)
2. You're not catching Failures from the deferToThread calls (yield turns errbacks into exceptions).
3. The tests aren't very Twisted-like in terms of using a Clock(), which means that you've not tested the pause durations anywhere. If you don't know how to use Clock(), ping me on IRC and I'll show you (there's a quick sketch below).
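A quick sketch of the idea (generic Twisted, not this branch's code): calls scheduled against a Clock() only fire when the test advances it, so pause durations become assertable instead of relying on real sleeps.

    from twisted.internet.task import Clock, deferLater

    def demo_clock_controls_delays():
        clock = Clock()
        fired = []
        deferLater(clock, 5, fired.append, "done")
        assert fired == []        # nothing has fired yet
        clock.advance(4)
        assert fired == []        # still inside the 5-second pause
        clock.advance(1)
        assert fired == ["done"]  # the pause has now elapsed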

review: Needs Fixing
Revision history for this message
Raphaël Badin (rvb) wrote :

Thanks for the review. First batch of changes published. I'm working on your suggestion to use Clock() now.

Revision history for this message
Julian Edwards (julian-edwards) wrote :

On Wednesday 23 Jul 2014 12:23:49 you wrote:
> Not dealing with errors was deliberate: I don't want the retry loop to catch
> failures at all. Failures should only happen when there is a problem
> executing the power template (syntax problem, missing argument, etc.) and I
> don't think we should catch any of those. If something inside
> perform_power_change() raises an exception, none of the Twisted magic will
> swallow it, right? It will simply be "raised" from
> 'yield deferToThread(…)', correct?

I'm afraid I don't agree with this approach. :)

Yes, Twisted internally turns it into a Failure, and then yield will re-raise
the exception it contains.
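To illustrate (a standalone sketch with made-up names, not code from this branch): the Failure from the thread comes back out of the yield as an ordinary exception, and a try/except around it is where it can be flagged.

    from twisted.internet.defer import inlineCallbacks
    from twisted.internet.threads import deferToThread

    def run_template():
        # Stand-in for PowerAction().execute() choking on a dodgy template.
        raise RuntimeError("dodgy template")

    @inlineCallbacks
    def issue_power_change():
        try:
            yield deferToThread(run_template)
        except RuntimeError:
            # The Failure from the thread is re-raised here by the yield;
            # this is the place to flag it (e.g. mark the node broken)
            # rather than relying on someone reading the cluster log.
            raise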

We've had instances in the past of support requests for "power not working"
that were caused by a dodgy template someone had edited, and they weren't
looking at the cluster log at all!

I really think that we should catch these and flag them up somewhere.

> That's not part of the interface (yet). I think the node event log will
> contain the explanation of why it failed but we can always refine this
> later based on actual user testing.

OK.

I'd like to see a failure_reason text on the node for this situation. The
user should be able to see this alongside the BROKEN status IMO.

> Fixed (I didn't realize we had so many matchers to deal with mocks). Here
> and elsewhere.

They're great, add more :)

Revision history for this message
Raphaël Badin (rvb) wrote :

On 07/23/2014 02:33 PM, Julian Edwards wrote:
> On Wednesday 23 Jul 2014 12:23:49 you wrote:
>> Not dealing with errors was deliberate: I don't want the retry loop to catch
>> failures at all. Failures should only happen when there is a problem
>> executing the power template (syntax problem, missing argument, etc.) and I
>> don't think we should catch any of those. If something inside
>> perform_power_change() raises an exception, none of the Twisted magic will
>> swallow it, right? It will simply be "raised" from
>> 'yield deferToThread(…)', correct?
>
> I'm afraid I don't agree with this approach. :)
>
> Yes, Twisted internally turns it into a Failure, and then yield will re-raise
> the exception it contains.
>
> We've had instances in the past of support requests for "power not working"
> that were caused by a dodgy template someone had edited, and they weren't
> looking at the cluster log at all!

Hmm, I still think the stack trace belongs in the log file… but one thing
is sure (and it means that, like you said, we need to take care of this
case): we want the node marked as broken when the template execution
blows up.

>> Fixed (I didn't realize we had so many matchers to deal with mocks). Here
>> and elsewhere.
>
> They're great, add more :)

The errors I'm getting aren't so great sometimes:
[...]
      raise mismatch_error
MismatchError: !=:
reference = [call(context-key-22VPn2=u'context-val-JmgwGQ',
power_change=u'on')]
actual = [call(context-key-22VPn2=u'context-val-JmgwGQ',
power_change=u'on')]
: after <function get_mock_calls> on <Mock name='mock().execute'
id='140547462746960'>: calls do not match

Revision history for this message
Julian Edwards (julian-edwards) wrote :

On Wednesday 23 Jul 2014 12:48:24 you wrote:
> Hmm, I still think the stack trace belongs in the log file… but one thing
> is sure (and it means that, like you said, we need to take care of this
> case): we want the node marked as broken when the template execution
> blows up.

There's no point putting a stack trace in the log for template errors. Other
errors, yes.

Revision history for this message
Julian Edwards (julian-edwards) wrote :

Getting there! Couple of comments inline

Revision history for this message
Raphaël Badin (rvb) wrote :

Thanks for the review!

Revision history for this message
Julian Edwards (julian-edwards) wrote :

 review: approve

On Thursday 24 Jul 2014 10:36:00 you wrote:
> Turns out simply using 'yield power.change_power_state(…, clock=clock)'
> (instead of calling clock.callLater) is enough to remove the
> clock.advance(0.1). I /think/ this is because all the waiting done inside
> power.change_power_state uses the clock that you pass to the method, so the
> initial yield ensures the first run is done (and thus the first check
> succeeds), and after that you can control the waiting with clock.advance.

\o/

Awesome.

review: Approve
Revision history for this message
Julian Edwards (julian-edwards) :
Revision history for this message
Raphaël Badin (rvb) :
Revision history for this message
Gavin Panella (allenap) wrote :

A couple of things I noticed mean this branch needs fixing. Also, perform_power_change('query') is a weird concoction; it's not a change.

Revision history for this message
Gavin Panella (allenap) :
review: Needs Fixing
Revision history for this message
Julian Edwards (julian-edwards) wrote :

On Sunday 27 Jul 2014 11:47:18 you wrote:
> Same here, mark_node_broken() does not exist.
>
> I think this highlights a shortcoming with using Mock: it lets you do
> anything. A couple of things come to mind that might help here:
>
> 1. Don't mock the whole client object, just mock __call__.
> 2. Use mock.create_autospec() to create mocks that only allow things that the
> underlying implementation does (so you can mock the whole client).

*cough* end-to-end fixture *cough*
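(A hypothetical sketch of suggestion 2 above, using a stand-in FakeClient class rather than the real client: the autospec'd mock only allows what the spec actually has.)

    from mock import create_autospec

    class FakeClient(object):
        """Stand-in for the region client: callable, nothing else."""
        def __call__(self, command, **kwargs):
            pass

    client = create_autospec(FakeClient, instance=True)
    # Matches the real interface, so this is allowed:
    client("MarkNodeBroken", system_id="node-1")
    # mark_node_broken() is not on FakeClient, so autospec refuses it:
    try:
        client.mark_node_broken("node-1")
    except AttributeError:
        print("autospec caught the bogus call")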

Revision history for this message
Gavin Panella (allenap) wrote :

> *cough* end-to-end fixture *cough*

This would mean configuring Django and creating databases in the pserv tests. The pserv tests are currently fast and quick to iterate on, and they won't be if we do that.

We actually need to test against the RPC schemas. That's why I've been doing my utmost to keep them tightly defined. The call_responder helper function does that back-and-forth through the schemas. Where we need to stub out actual RPC calls we ought to use call_responder to keep us honest.
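(For example, adapted from test_executes_change_power_state in the preview diff below, inside a MAASTestCase test method; both the arguments and the responder's reply are pushed through the PowerOn command schema, so anything that doesn't fit the command fails the test:)

    change_power_state = self.patch(clusterservice, "change_power_state")
    d = call_responder(Cluster(), cluster.PowerOn, {
        "system_id": "id", "power_type": "type", "context": {},
    })

    def check(response):
        self.assertThat(
            change_power_state,
            MockCalledOnceWith(
                "id", "type", power_change='on', context={}))

    return d.addCallback(check)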

Revision history for this message
Julian Edwards (julian-edwards) wrote :

On Sunday 27 Jul 2014 13:49:06 Gavin Panella wrote:
> > *cough* end-to-end fixture *cough*
>
> This would mean configuring Django and creating databases in the pserv
> tests. The pserv tests are currently fast and quick to iterate on, and they
> won't be if we do that.
>
> We actually need to test against the RPC schemas. That's why I've been doing
> my utmost to keep them tightly defined. The call_responder helper function
> does that back-and-forth through the schemas. Where we need to stub out
> actual RPC calls we ought to use call_responder to keep us honest.

I didn't mean that we need a real Django running; a mock one will do.
But we need something that responds like a real call to avoid the mistake that
rvb made.

Revision history for this message
Gavin Panella (allenap) wrote :

> I didn't mean that we need a real Django running; a mock one will do.
> But we need something that responds like a real call to avoid the
> mistake that rvb made.

Ah, okeydoke, then I agree. I think we can dynamically create an amp.AMP
subclass with stub responders that correspond to a subset of amp.Command
calls.

Maybe something like:

    def make_stub_amp_thing(*commands):
        responders = {
            "responder_for_%s" % command.commandName:
                command.responder(Mock())
            for command in commands
        }
        return type("DynamicAMP", (amp.AMP,), responders)

    StubAMP = make_stub_amp_thing(MakeMeASandwich)
    StubAMP.responder_for_MakeMeASandwich.return_value = "Use sudo"

    call_responder(StubAMP(), MakeMeASandwich, {
        "fillings": ["ketchup", "crisps", "scratchings"],
    })

The `Client` returned from ClusterService.getClient() needs to be
patched to use call_responder in this way. Or, as ClusterRPCFixture
does, the connections that the client uses need to be patched.

Revision history for this message
Julian Edwards (julian-edwards) wrote :

Right, my end aim is to be able to do client(GetBootSources) in the real code
and have that talk to a mock backend with canned responses, maybe even hook it
into a function in the tests that does that responding.

Perhaps we can talk more about it at the sprint this week!

Revision history for this message
Raphaël Badin (rvb) wrote :

> The client has no method mark_node_broken(). This needs to be client(MarkNodeBroken, system_id=system_id).
>

Damn. Using Mock is indeed a bit nasty, and I wasn't really happy
about this… but I thought you had the end-to-end fixture in progress, so
my intention was to switch to that once it was done. Let's work on the
fixture this week.

Preview Diff

1=== modified file 'src/provisioningserver/rpc/clusterservice.py'
2--- src/provisioningserver/rpc/clusterservice.py 2014-07-22 14:38:34 +0000
3+++ src/provisioningserver/rpc/clusterservice.py 2014-07-24 10:46:38 +0000
4@@ -32,7 +32,6 @@
5 ArchitectureRegistry,
6 PowerTypeRegistry,
7 )
8-from provisioningserver.power.poweraction import PowerAction
9 from provisioningserver.rpc import (
10 cluster,
11 common,
12@@ -45,6 +44,7 @@
13 get_preseed_data,
14 validate_license_key,
15 )
16+from provisioningserver.rpc.power import change_power_state
17 from twisted.application.internet import (
18 StreamServerEndpointService,
19 TimerService,
20@@ -150,15 +150,15 @@
21 @cluster.PowerOn.responder
22 def power_on(self, system_id, power_type, context):
23 """Turn a node on."""
24- action = PowerAction(power_type)
25- action.execute(power_change='on', **context)
26+ change_power_state(
27+ system_id, power_type, power_change='on', context=context)
28 return {}
29
30 @cluster.PowerOff.responder
31 def power_off(self, system_id, power_type, context):
32 """Turn a node off."""
33- action = PowerAction(power_type)
34- action.execute(power_change='off', **context)
35+ change_power_state(
36+ system_id, power_type, power_change='off', context=context)
37 return {}
38
39 @amp.StartTLS.responder
40
41=== added file 'src/provisioningserver/rpc/power.py'
42--- src/provisioningserver/rpc/power.py 1970-01-01 00:00:00 +0000
43+++ src/provisioningserver/rpc/power.py 2014-07-24 10:46:38 +0000
44@@ -0,0 +1,83 @@
45+# Copyright 2014 Canonical Ltd. This software is licensed under the
46+# GNU Affero General Public License version 3 (see the file LICENSE).
47+
48+"""RPC helpers relating to power control."""
49+
50+from __future__ import (
51+ absolute_import,
52+ print_function,
53+ unicode_literals,
54+ )
55+
56+str = None
57+
58+__metaclass__ = type
59+__all__ = [
60+ "change_power_state",
61+]
62+
63+
64+from provisioningserver.power.poweraction import PowerAction
65+from provisioningserver.rpc import getRegionClient
66+from provisioningserver.utils import pause
67+from twisted.internet import reactor
68+from twisted.internet.defer import inlineCallbacks
69+from twisted.internet.threads import deferToThread
70+
71+# List of power_types that support querying the power state.
72+# change_power_state() will only retry changing the power
73+# state for these power types.
74+# This is meant to be temporary until all the power types support
75+# querying the power state of a node.
76+QUERY_POWER_TYPES = ['amt', 'ipmi']
77+
78+
79+def perform_power_change(system_id, power_type, power_change, context):
80+ """Issue the given `power_change` command.
81+
82+ If any exception is raised during the execution of the command, mark
83+ the node as broken and re-raise the exception.
84+ """
85+ action = PowerAction(power_type)
86+ try:
87+ return action.execute(power_change=power_change, **context)
88+ except Exception:
89+ client = getRegionClient()
90+ client.mark_node_broken(system_id)
91+ raise
92+
93+
94+@inlineCallbacks
95+def change_power_state(system_id, power_type, power_change, context,
96+ clock=reactor):
97+ """Change the power state of a node.
98+
99+ Monitor the result of the power change action by querying the
100+ power state of the node and mark the node as failed if it doesn't
101+ work.
102+ """
103+ assert power_change in ('on', 'off'), (
104+ "Unknown power change: %s" % power_change)
105+
106+ # Use increasing waiting times to work around race conditions that could
107+ # arise when power-cycling the node.
108+ for waiting_time in (3, 5, 10):
109+ # Perform power change.
110+ yield deferToThread(
111+ perform_power_change, system_id, power_type, power_change,
112+ context)
113+ # If the power_type doesn't support querying the power state:
114+ # exit now.
115+ if power_type not in QUERY_POWER_TYPES:
116+ return
117+ # Wait to let the node some time to change its power state.
118+ yield pause(waiting_time, clock)
119+ # Check current power state.
120+ new_power_state = yield deferToThread(
121+ perform_power_change, system_id, power_type, 'query', context)
122+ if new_power_state == power_change:
123+ return
124+
125+ # Failure: the power state of the node hasn't changed: mark it as broken.
126+ client = getRegionClient()
127+ client.mark_node_broken(system_id)
128
129=== modified file 'src/provisioningserver/rpc/tests/test_clusterservice.py'
130--- src/provisioningserver/rpc/tests/test_clusterservice.py 2014-07-22 14:38:34 +0000
131+++ src/provisioningserver/rpc/tests/test_clusterservice.py 2014-07-24 10:46:38 +0000
132@@ -857,8 +857,8 @@
133 responder = protocol.locateResponder(self.command.commandName)
134 self.assertIsNot(responder, None)
135
136- def test_executes_a_power_action(self):
137- PowerAction = self.patch(clusterservice, "PowerAction")
138+ def test_executes_change_power_state(self):
139+ change_power_state = self.patch(clusterservice, "change_power_state")
140
141 system_id = factory.make_name("system_id")
142 power_type = factory.make_name("power_type")
143@@ -873,17 +873,16 @@
144 })
145
146 def check(response):
147- self.assertThat(PowerAction, MockCalledOnceWith(power_type))
148 self.assertThat(
149- PowerAction.return_value.execute,
150+ change_power_state,
151 MockCalledOnceWith(
152- power_change=self.expected_power_change,
153- **context))
154+ system_id, power_type,
155+ power_change=self.expected_power_change, context=context))
156 return d.addCallback(check)
157
158 def test_power_on_can_propagate_UnknownPowerType(self):
159- PowerAction = self.patch(clusterservice, "PowerAction")
160- PowerAction.side_effect = UnknownPowerType
161+ self.patch(clusterservice, "change_power_state").side_effect = (
162+ UnknownPowerType)
163
164 d = call_responder(Cluster(), self.command, {
165 "system_id": "id", "power_type": "type", "context": {},
166@@ -897,8 +896,8 @@
167 return d.addErrback(check)
168
169 def test_power_on_can_propagate_NotImplementedError(self):
170- PowerAction = self.patch(clusterservice, "PowerAction")
171- PowerAction.side_effect = NotImplementedError
172+ self.patch(clusterservice, "change_power_state").side_effect = (
173+ NotImplementedError)
174
175 d = call_responder(Cluster(), self.command, {
176 "system_id": "id", "power_type": "type", "context": {},
177@@ -912,8 +911,8 @@
178 return d.addErrback(check)
179
180 def test_power_on_can_propagate_PowerActionFail(self):
181- PowerAction = self.patch(clusterservice, "PowerAction")
182- PowerAction.return_value.execute.side_effect = PowerActionFail
183+ self.patch(clusterservice, "change_power_state").side_effect = (
184+ PowerActionFail)
185
186 d = call_responder(Cluster(), self.command, {
187 "system_id": "id", "power_type": "type", "context": {},
188
189=== added file 'src/provisioningserver/rpc/tests/test_power.py'
190--- src/provisioningserver/rpc/tests/test_power.py 1970-01-01 00:00:00 +0000
191+++ src/provisioningserver/rpc/tests/test_power.py 2014-07-24 10:46:38 +0000
192@@ -0,0 +1,232 @@
193+# Copyright 2014 Canonical Ltd. This software is licensed under the
194+# GNU Affero General Public License version 3 (see the file LICENSE).
195+
196+"""Tests for :py:module:`~provisioningserver.rpc.power`."""
197+
198+from __future__ import (
199+ absolute_import,
200+ print_function,
201+ unicode_literals,
202+ )
203+
204+str = None
205+
206+__metaclass__ = type
207+__all__ = []
208+
209+
210+import random
211+
212+from maastesting.factory import factory
213+from maastesting.matchers import (
214+ MockCalledOnceWith,
215+ MockCallsMatch,
216+ MockNotCalled,
217+ )
218+from maastesting.testcase import MAASTestCase
219+from mock import (
220+ call,
221+ Mock,
222+ )
223+from provisioningserver.rpc import power
224+from testtools.deferredruntest import (
225+ assert_fails_with,
226+ AsynchronousDeferredRunTest,
227+ )
228+from twisted.internet.defer import (
229+ inlineCallbacks,
230+ maybeDeferred,
231+ )
232+from twisted.internet.task import Clock
233+
234+
235+class TestPowerHelpers(MAASTestCase):
236+
237+ run_tests_with = AsynchronousDeferredRunTest.make_factory(timeout=5)
238+
239+ def patch_power_action(self, return_value=None, side_effect=None):
240+ """Patch the PowerAction object.
241+
242+ Patch the PowerAction object so that PowerAction().execute
243+ is replaced by a Mock object created using the given `return_value`
244+ and `side_effect`.
245+
246+ This can be used to simulate various successes or failures patterns
247+ while manipulating the power state of a node.
248+
249+ Returns a tuple of mock objects: power.PowerAction and
250+ power.PowerAction().execute.
251+ """
252+ power_action_obj = Mock()
253+ power_action_obj_execute = Mock(
254+ return_value=return_value, side_effect=side_effect)
255+ power_action_obj.execute = power_action_obj_execute
256+ power_action = self.patch(power, 'PowerAction')
257+ power_action.return_value = power_action_obj
258+ return power_action, power_action_obj_execute
259+
260+ @inlineCallbacks
261+ def test_change_power_state_changes_power_state(self):
262+ system_id = factory.make_name('system_id')
263+ power_type = random.choice(power.QUERY_POWER_TYPES)
264+ power_change = random.choice(['on', 'off'])
265+ context = {
266+ factory.make_name('context-key'): factory.make_name('context-val')
267+ }
268+ self.patch(power, 'pause')
269+ client = Mock()
270+ self.patch(power, 'getRegionClient').return_value = client
271+ # Patch the power action utility so that it says the node is
272+ # in the required power state.
273+ power_action, execute = self.patch_power_action(
274+ return_value=power_change)
275+
276+ yield power.change_power_state(
277+ system_id, power_type, power_change, context)
278+ self.assertThat(
279+ execute,
280+ MockCallsMatch(
281+ # One call to change the power state.
282+ call(power_change=power_change, **context),
283+ # One call to query the power state.
284+ call(power_change='query', **context),
285+ ),
286+ )
287+ # The node hasn't been marked broken.
288+ self.assertThat(client.mark_node_broken, MockNotCalled())
289+
290+ @inlineCallbacks
291+ def test_change_power_state_doesnt_retry_for_certain_power_types(self):
292+ system_id = factory.make_name('system_id')
293+ # Use a power type that is not among power.QUERY_POWER_TYPES.
294+ power_type = factory.make_name('power_type')
295+ power_change = random.choice(['on', 'off'])
296+ context = {
297+ factory.make_name('context-key'): factory.make_name('context-val')
298+ }
299+ self.patch(power, 'pause')
300+ client = Mock()
301+ self.patch(power, 'getRegionClient').return_value = client
302+ power_action, execute = self.patch_power_action(
303+ return_value=random.choice(['on', 'off']))
304+
305+ yield power.change_power_state(
306+ system_id, power_type, power_change, context)
307+ self.assertThat(
308+ execute,
309+ MockCallsMatch(
310+ # Only one call to change the power state.
311+ call(power_change=power_change, **context),
312+ ),
313+ )
314+ # The node hasn't been marked broken.
315+ self.assertThat(client.mark_node_broken, MockNotCalled())
316+
317+ @inlineCallbacks
318+ def test_change_power_state_retries_if_power_state_doesnt_change(self):
319+ system_id = factory.make_name('system_id')
320+ power_type = random.choice(power.QUERY_POWER_TYPES)
321+ power_change = 'on'
322+ context = {
323+ factory.make_name('context-key'): factory.make_name('context-val')
324+ }
325+ self.patch(power, 'pause')
326+ client = Mock()
327+ self.patch(power, 'getRegionClient').return_value = client
328+ # Simulate a failure to power up the node, then a success.
329+ power_action, execute = self.patch_power_action(
330+ side_effect=[None, 'off', None, 'on'])
331+
332+ yield power.change_power_state(
333+ system_id, power_type, power_change, context)
334+ self.assertThat(
335+ execute,
336+ MockCallsMatch(
337+ call(power_change=power_change, **context),
338+ call(power_change='query', **context),
339+ call(power_change=power_change, **context),
340+ call(power_change='query', **context),
341+ )
342+ )
343+ # The node hasn't been marked broken.
344+ self.assertThat(client.mark_node_broken, MockNotCalled())
345+
346+ @inlineCallbacks
347+ def test_change_power_state_marks_the_node_broken_if_failure(self):
348+ system_id = factory.make_name('system_id')
349+ power_type = random.choice(power.QUERY_POWER_TYPES)
350+ power_change = 'on'
351+ context = {
352+ factory.make_name('context-key'): factory.make_name('context-val')
353+ }
354+ self.patch(power, 'pause')
355+ client = Mock()
356+ self.patch(power, 'getRegionClient').return_value = client
357+ # Simulate a persistent failure.
358+ power_action, execute = self.patch_power_action(return_value='off')
359+
360+ yield power.change_power_state(
361+ system_id, power_type, power_change, context)
362+
363+ # The node has been marked broken.
364+ self.assertThat(
365+ client.mark_node_broken, MockCalledOnceWith(system_id))
366+
367+ def test_change_power_state_marks_the_node_broken_if_exception(self):
368+ system_id = factory.make_name('system_id')
369+ power_type = random.choice(power.QUERY_POWER_TYPES)
370+ power_change = 'on'
371+ context = {
372+ factory.make_name('context-key'): factory.make_name('context-val')
373+ }
374+ self.patch(power, 'pause')
375+ client = Mock()
376+ self.patch(power, 'getRegionClient').return_value = client
377+ # Simulate an exception.
378+ power_action, execute = self.patch_power_action(side_effect=Exception)
379+
380+ d = power.change_power_state(
381+ system_id, power_type, power_change, context)
382+ assert_fails_with(d, Exception)
383+ return d.addErrback(
384+ lambda failure: self.assertThat(
385+ client.mark_node_broken, MockCalledOnceWith(system_id)))
386+
387+ def test_change_power_state_pauses_in_between_retries(self):
388+ system_id = factory.make_name('system_id')
389+ power_type = random.choice(power.QUERY_POWER_TYPES)
390+ power_change = 'on'
391+ context = {
392+ factory.make_name('context-key'): factory.make_name('context-val')
393+ }
394+ client = Mock()
395+ self.patch(power, 'getRegionClient').return_value = client
396+ # Simulate two failures to power up the node, then a success.
397+ power_action, execute = self.patch_power_action(
398+ side_effect=[None, 'off', None, 'off', None, 'on'])
399+ self.patch(power, "deferToThread", maybeDeferred)
400+ clock = Clock()
401+
402+ calls_and_pause = [
403+ ([
404+ call(power_change=power_change, **context),
405+ ], 3),
406+ ([
407+ call(power_change='query', **context),
408+ call(power_change=power_change, **context),
409+ ], 5),
410+ ([
411+ call(power_change='query', **context),
412+ call(power_change=power_change, **context),
413+ ], 10),
414+ ([
415+ call(power_change='query', **context),
416+ ], 0),
417+ ]
418+ calls = []
419+ yield power.change_power_state(
420+ system_id, power_type, power_change, context, clock=clock)
421+ for newcalls, waiting_time in calls_and_pause:
422+ calls.extend(newcalls)
423+ self.assertThat(execute, MockCallsMatch(*calls))
424+ clock.advance(waiting_time)