Merge lp:~julian-edwards/maas/only-ready-when-power-is-off into lp:~maas-committers/maas/trunk

Proposed by Julian Edwards
Status: Merged
Approved by: Julian Edwards
Approved revision: no longer in the source branch.
Merged at revision: 2994
Proposed branch: lp:~julian-edwards/maas/only-ready-when-power-is-off
Merge into: lp:~maas-committers/maas/trunk
Diff against target: 392 lines (+107/-82)
6 files modified
src/maasserver/api/tests/test_node.py (+8/-59)
src/maasserver/enum.py (+3/-0)
src/maasserver/models/node.py (+29/-2)
src/maasserver/models/tests/test_node.py (+57/-13)
src/maasserver/node_action.py (+0/-1)
src/maasserver/tests/test_node_action.py (+10/-7)
To merge this branch: bzr merge lp:~julian-edwards/maas/only-ready-when-power-is-off
Reviewer Review Type Date Requested Status
Jeroen T. Vermeulen (community) Approve
Review via email: mp+234619@code.launchpad.net

Commit message

Only mark nodes READY once power-off is complete. This is done by introducing a new state, RELEASING, from which the node transitions to READY when the power job calls back successfully. The transition to BROKEN when the power job fails is unaffected.
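
The flow this describes can be sketched roughly as follows. This is a minimal stand-alone sketch, not the actual maasserver model code: the enum values and the Node class are simplified stand-ins for MAAS's real NODE_STATUS, POWER_STATE, and Node.

```python
# Sketch of the RELEASING flow: release() only readies a node that is
# already off; otherwise update_power_state() finishes the job when
# the power-off callback reports OFF. Simplified stand-ins throughout.

class NODE_STATUS:
    READY = 4
    ALLOCATED = 10
    RELEASING = 12

class POWER_STATE:
    ON = 'on'
    OFF = 'off'

class Node:
    def __init__(self, status, power_state, owner):
        self.status = status
        self.power_state = power_state
        self.owner = owner

    def release(self):
        if self.power_state == POWER_STATE.OFF:
            # Already off: the node can become READY immediately.
            self.status = NODE_STATUS.READY
            self.owner = None
        else:
            # Wait in RELEASING; update_power_state() makes the node
            # READY and unowned once the power job reports OFF.
            self.status = NODE_STATUS.RELEASING

    def update_power_state(self, power_state):
        self.power_state = power_state
        if (self.status == NODE_STATUS.RELEASING
                and power_state == POWER_STATE.OFF):
            self.status = NODE_STATUS.READY
            self.owner = None
```

Note that the owner stays set while the node is RELEASING; it is only cleared once the node actually reaches READY.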

Description of the change

There was some collateral damage with this branch: my changes exposed some other potential problems, some of which I fixed:

 - release() always tries to power off the node, even though it may have never been powered on. I've left this as is but I welcome your comments as the reviewer here.

 - release() was being unit tested via the API (!), so I removed all of those tests in favour of a simpler, enhanced unit test.

 - The start node action could only operate on ALLOCATED or DEPLOYED nodes; however, if it got a StaticIPAddressExhaustion error, it was releasing the node! So I stopped it doing that.
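
The last fix amounts to letting the exception propagate as an action error without touching the node's state. A hypothetical sketch (the function and the dict-based node are illustrative stand-ins, not the real node_action.py code):

```python
# Sketch of the start-action fix: on StaticIPAddressExhaustion the
# node is no longer released; the action just reports the error.
# The classes and helper here are simplified stand-ins for MAAS's.

class StaticIPAddressExhaustion(Exception):
    pass

class NodeActionError(Exception):
    pass

def execute_start(node, start_nodes):
    """Try to start `node`; translate address exhaustion into a
    NodeActionError without changing the node's status (previously
    the node was released here, losing the user's allocation)."""
    try:
        start_nodes([node['system_id']])
    except StaticIPAddressExhaustion:
        raise NodeActionError(
            "%s: Failed to start, static IP addresses are exhausted."
            % node['hostname'])
```

With this shape, a failed start leaves an ALLOCATED node ALLOCATED, so the user can free addresses and retry.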

Revision history for this message
Julian Edwards (julian-edwards) wrote :

I tested this locally and it seems to be OK: you can't start the node again until the poweroff script finishes. In the meantime, the UI shows "Releasing". It'd be great if we had all this on longpoll :)

Revision history for this message
Jeroen T. Vermeulen (jtv) wrote :

Thanks for cleaning up the tests. We have far too much logic, and tests, at the API level.

I think it's OK for release() to power off the node always, because our knowledge of its current power state may be out of date. I doubt we control all state transitions thoroughly at the moment, so there may be obscure paths like: node is switched off; power poller registers that it's off; node is marked as broken; power poller stops looking at the node because it's broken; admin powers node on and works their magic; admin marks node as fixed; user allocates node; MAAS issues power-on command; node misses the command because it's already on.

So as far as I'm concerned, the XXX comment can become just a regular comment note. In fact I probably wouldn't even bother with a special code path for the case where the node is already off: the power method can figure out that it's not needed, or if it prefers, it can issue a probably redundant power-off command for luck.

And looking at the code under that XXX comment, does it make sense to keep the owner set when releasing a node? The user is done with the node at that point. For debugging it may well be interesting to know who was using it, but that's historical information — maybe it belongs in the node event log. So I think I'd just clear the node's owner during release() either way.

One thing is completely unclear to me: what does the RESERVED state represent? It may have come up in London but if so I don't remember. There's a comment in the enum that talks about the node being ready for “named” deployment. I have no idea what that means. I'm not even sure whether it's talking about a DNS daemon or the normal adjective. I don't see any place in the code where a node might enter this state. Yet your test uses this state for a powered-off node and not for a powered-on node... Why? What's the distinction?

review: Approve
Revision history for this message
Raphaël Badin (rvb) wrote :

> I tested this locally and it seems to be ok, you can't start the node again until the poweroff
> script finishes.

What happens if powering the machine off fails (e.g. the template blows up, or the status of the node is never changed)? I'd say we need to introduce a RELEASING_FAILED state, but I'd like to hear your thoughts on this.

Revision history for this message
Julian Edwards (julian-edwards) wrote :

On Monday 15 Sep 2014 07:22:08 you wrote:
> > I tested this locally and it seems to be ok, you can't start the node
> > again until the poweroff script finishes.
>
> What happens if powering the machine off fails? (if the template blows up or
> if the status of the node is never changed?) I'd say we need to introduce
> a RELEASING_FAILED state but I'd like to hear your thoughts on this.

It will be marked BROKEN; that's already all been done.

Revision history for this message
Raphaël Badin (rvb) wrote :

> On Monday 15 Sep 2014 07:22:08 you wrote:
> > > I tested this locally and it seems to be ok, you can't start the node
> > > again until the poweroff script finishes.
> >
> > What happens if powering the machine off fails? (if the template blows up or
> > if the status of the node is never changed?) I'd say we need to introduce
> > a RELEASING_FAILED state but I'd like to hear your thoughts on this.
>
> It will be marked BROKEN; that's already all been done.

That's not what BROKEN means. BROKEN is a state that only a manual action from a user should put a node into.

Revision history for this message
Julian Edwards (julian-edwards) wrote :

On Monday 15 Sep 2014 09:34:27 you wrote:
> > On Monday 15 Sep 2014 07:22:08 you wrote:
> > > > I tested this locally and it seems to be ok, you can't start the node
> > > > again until the poweroff script finishes.
> > >
> > > What happens if powering the machine off fails? (if the template blows
> > > up or if the status of the node is never changed?) I'd say we need to
> > > introduce a RELEASING_FAILED state but I'd like to hear your thoughts
> > > on this.>
> > It will be marked BROKEN; that's already all been done.
>
> That's not what BROKEN means. BROKEN is a state only a manual action from a
> user should put a node into.

Then whoever did the work on the power stuff needs to know that.

Revision history for this message
Julian Edwards (julian-edwards) wrote :

On Monday 15 Sep 2014 07:07:25 you wrote:
> I think it's OK for release() to power off the node always, because our
> knowledge of its current power state may be out of date. I doubt we
> control all state transitions thoroughly at the moment, so there may be
> obscure paths like: node is switched off; power poller registers that it's
> off; node is marked as broken; power poller stops looking at the node
> because it's broken; admin powers node on and works their magic; admin
> marks node as fixed; user allocates node; MAAS issues power-on command;
> node misses the command because it's already on.
>
> So as far as I'm concerned, the XXX comment can become just a regular
> comment note. In fact I probably wouldn't even bother with a special code
> path for the case where the node is already off: the power method can
> figure out that it's not needed, or if it prefers, it can issue a probably
> redundant power-off command for luck.

I'll try it and see how it works out, but I am concerned it will lead to
concurrent power operations. We must avoid a situation where we need to
serialize them at the power driver level; that's just madness. It's good
practice to catch problems as early as you can!

> And looking at the code under that XXX comment, does it make sense to keep
> the owner set when releasing a node? The user is done with the node at
> that point. For debugging it may well be interesting to know who was using
> it, but that's historical information — maybe it belongs in the node event
> log. So I think I'd just clear the node's owner during release() either
> way.

It's not released until it's off IMO, which is why I kept the owner on. The
state RELEASING implies it's not released yet as well.

> One thing is completely unclear to me: what does the RESERVED state
> represent? It may have come up in London but if so I don't remember.
> There's a comment in the enum that talks about the node being ready for
> “named” deployment. I have no idea what that means. I'm not even sure
> whether it's talking about a DNS daemon or the normal adjective. I don't
> see any place in the code where a node might enter this state. Yet your
> test uses this state for a powered-off node and not for a powered-on
> node... Why? What's the distinction?

It can be deleted as we discussed on the call.

Thanks for the review!

Preview Diff

=== modified file 'src/maasserver/api/tests/test_node.py'
--- src/maasserver/api/tests/test_node.py 2014-09-10 16:20:31 +0000
+++ src/maasserver/api/tests/test_node.py 2014-09-16 01:19:01 +0000
@@ -47,6 +47,7 @@
 from maasserver.testing.osystems import make_usable_osystem
 from maasserver.testing.testcase import MAASServerTestCase
 from maastesting.matchers import (
+ Equals,
 MockCalledOnceWith,
 MockNotCalled,
 )
@@ -402,7 +403,7 @@
 [httplib.OK] * len(owned_nodes),
 [response.status_code for response in responses])
 self.assertItemsEqual(
- [NODE_STATUS.READY] * len(owned_nodes),
+ [NODE_STATUS.RELEASING] * len(owned_nodes),
 [node.status for node in reload_objects(Node, owned_nodes)])

 def test_POST_release_releases_failed_node(self):
@@ -414,60 +415,8 @@
 self.assertEqual(
 httplib.OK, response.status_code, response.content)
 owned_node = Node.objects.get(id=owned_node.id)
- self.assertEqual(
- (NODE_STATUS.READY, None),
- (owned_node.status, owned_node.owner))
-
- def test_POST_release_turns_on_netboot(self):
- node = factory.make_Node(
- status=NODE_STATUS.ALLOCATED, owner=self.logged_in_user)
- node.set_netboot(on=False)
- self.client.post(self.get_node_uri(node), {'op': 'release'})
- self.assertTrue(reload_object(node).netboot)
-
- def test_POST_release_resets_osystem_and_distro_series(self):
- osystem = factory.pick_OS()
- release = factory.pick_release(osystem)
- node = factory.make_Node(
- status=NODE_STATUS.ALLOCATED, owner=self.logged_in_user,
- osystem=osystem.name, distro_series=release)
- self.client.post(self.get_node_uri(node), {'op': 'release'})
- self.assertEqual('', reload_object(node).osystem)
- self.assertEqual('', reload_object(node).distro_series)
-
- def test_POST_release_resets_license_key(self):
- osystem = factory.pick_OS()
- release = factory.pick_release(osystem)
- license_key = factory.make_string()
- node = factory.make_Node(
- status=NODE_STATUS.ALLOCATED, owner=self.logged_in_user,
- osystem=osystem.name, distro_series=release,
- license_key=license_key)
- self.client.post(self.get_node_uri(node), {'op': 'release'})
- self.assertEqual('', reload_object(node).license_key)
-
- def test_POST_release_resets_agent_name(self):
- agent_name = factory.make_name('agent-name')
- osystem = factory.pick_OS()
- release = factory.pick_release(osystem)
- node = factory.make_Node(
- status=NODE_STATUS.ALLOCATED, owner=self.logged_in_user,
- osystem=osystem.name, distro_series=release,
- agent_name=agent_name)
- self.client.post(self.get_node_uri(node), {'op': 'release'})
- self.assertEqual('', reload_object(node).agent_name)
-
- def test_POST_release_removes_token_and_user(self):
- node = factory.make_Node(status=NODE_STATUS.READY)
- self.client.post(reverse('nodes_handler'), {'op': 'acquire'})
- node = Node.objects.get(system_id=node.system_id)
- self.assertEqual(NODE_STATUS.ALLOCATED, node.status)
- self.assertEqual(self.logged_in_user, node.owner)
- self.assertEqual(self.client.token.key, node.token.key)
- self.client.post(self.get_node_uri(node), {'op': 'release'})
- node = Node.objects.get(system_id=node.system_id)
- self.assertIs(None, node.owner)
- self.assertIs(None, node.token)
+ self.expectThat(owned_node.status, Equals(NODE_STATUS.RELEASING))
+ self.expectThat(owned_node.owner, Equals(self.logged_in_user))

 def test_POST_release_does_nothing_for_unowned_node(self):
 node = factory.make_Node(
@@ -530,8 +479,8 @@
 self.become_admin()
 response = self.client.post(
 self.get_node_uri(node), {'op': 'release'})
- self.assertEqual(httplib.OK, response.status_code)
- self.assertEqual(NODE_STATUS.READY, reload_object(node).status)
+ self.assertEqual(httplib.OK, response.status_code, response.content)
+ self.assertEqual(NODE_STATUS.RELEASING, reload_object(node).status)

 def test_POST_release_combines_with_acquire(self):
 node = factory.make_Node(status=NODE_STATUS.READY)
@@ -540,8 +489,8 @@
 self.assertEqual(NODE_STATUS.ALLOCATED, reload_object(node).status)
 node_uri = json.loads(response.content)['resource_uri']
 response = self.client.post(node_uri, {'op': 'release'})
- self.assertEqual(httplib.OK, response.status_code)
- self.assertEqual(NODE_STATUS.READY, reload_object(node).status)
+ self.assertEqual(httplib.OK, response.status_code, response.content)
+ self.assertEqual(NODE_STATUS.RELEASING, reload_object(node).status)

 def test_POST_commission_commissions_node(self):
 node = factory.make_Node(

=== modified file 'src/maasserver/enum.py'
--- src/maasserver/enum.py 2014-09-15 10:35:35 +0000
+++ src/maasserver/enum.py 2014-09-16 01:19:01 +0000
@@ -74,6 +74,8 @@
 ALLOCATED = 10
 #: The deployment of the node failed.
 FAILED_DEPLOYMENT = 11
+ #: The node is powering down after a release request.
+ RELEASING = 12


 # Django choices for NODE_STATUS: sequence of tuples (key, UI
@@ -91,6 +93,7 @@
 (NODE_STATUS.RETIRED, "Retired"),
 (NODE_STATUS.BROKEN, "Broken"),
 (NODE_STATUS.FAILED_DEPLOYMENT, "Failed deployment"),
+ (NODE_STATUS.RELEASING, "Releasing"),
 )


=== modified file 'src/maasserver/models/node.py'
--- src/maasserver/models/node.py 2014-09-15 14:28:28 +0000
+++ src/maasserver/models/node.py 2014-09-16 01:19:01 +0000
@@ -161,6 +161,7 @@
 NODE_STATUS.RETIRED,
 NODE_STATUS.MISSING,
 NODE_STATUS.BROKEN,
+ NODE_STATUS.RELEASING,
 ],
 NODE_STATUS.ALLOCATED: [
 NODE_STATUS.READY,
@@ -168,6 +169,12 @@
 NODE_STATUS.MISSING,
 NODE_STATUS.BROKEN,
 NODE_STATUS.DEPLOYING,
+ NODE_STATUS.RELEASING,
+ ],
+ NODE_STATUS.RELEASING: [
+ NODE_STATUS.READY,
+ NODE_STATUS.BROKEN,
+ NODE_STATUS.MISSING,
 ],
 NODE_STATUS.DEPLOYING: [
 NODE_STATUS.ALLOCATED,
@@ -176,18 +183,21 @@
 NODE_STATUS.FAILED_DEPLOYMENT,
 NODE_STATUS.DEPLOYED,
 NODE_STATUS.READY,
+ NODE_STATUS.RELEASING,
 ],
 NODE_STATUS.FAILED_DEPLOYMENT: [
 NODE_STATUS.ALLOCATED,
 NODE_STATUS.MISSING,
 NODE_STATUS.BROKEN,
 NODE_STATUS.READY,
+ NODE_STATUS.RELEASING,
 ],
 NODE_STATUS.DEPLOYED: [
 NODE_STATUS.ALLOCATED,
 NODE_STATUS.MISSING,
 NODE_STATUS.BROKEN,
 NODE_STATUS.READY,
+ NODE_STATUS.RELEASING,
 ],
 NODE_STATUS.MISSING: [
 NODE_STATUS.NEW,
@@ -205,6 +215,7 @@
 NODE_STATUS.BROKEN: [
 NODE_STATUS.COMMISSIONING,
 NODE_STATUS.READY,
+ NODE_STATUS.RELEASING,
 ],
 }

@@ -1238,8 +1249,13 @@
 self.delete_host_maps(deallocated_ips)
 from maasserver.dns.config import change_dns_zones
 change_dns_zones([self.nodegroup])
- self.status = NODE_STATUS.READY
- self.owner = None
+ if self.power_state == POWER_STATE.OFF:
+ self.status = NODE_STATUS.READY
+ self.owner = None
+ else:
+ # update_power_state() will take care of making the node READY
+ # and unowned when the power is finally off.
+ self.status = NODE_STATUS.RELEASING
 self.token = None
 self.agent_name = ''
 self.set_netboot()
@@ -1300,7 +1316,10 @@
 """
 if self.status in RELEASABLE_STATUSES:
 self.release()
+ # release() normally sets the status to RELEASING and leaves the
+ # owner in place, override that here as we're broken.
 self.status = NODE_STATUS.BROKEN
+ self.owner = None
 self.error_description = error_description
 self.save()

@@ -1317,6 +1336,14 @@
 def update_power_state(self, power_state):
 """Update a node's power state """
 self.power_state = power_state
+ mark_ready = (
+ self.status == NODE_STATUS.RELEASING and
+ power_state == POWER_STATE.OFF)
+ if mark_ready:
+ # Ensure the node is fully released after a successful power
+ # down.
+ self.status = NODE_STATUS.READY
+ self.owner = None
 self.save()

 def claim_static_ip_addresses(self):

=== modified file 'src/maasserver/models/tests/test_node.py'
--- src/maasserver/models/tests/test_node.py 2014-09-15 14:28:28 +0000
+++ src/maasserver/models/tests/test_node.py 2014-09-16 01:19:01 +0000
@@ -91,6 +91,7 @@
 AfterPreprocessing,
 Equals,
 HasLength,
+ Is,
 MatchesStructure,
 )
 from twisted.internet import defer
@@ -642,15 +643,37 @@
 (user, NODE_STATUS.ALLOCATED, agent_name),
 (node.owner, node.status, node.agent_name))

- def test_release(self):
- agent_name = factory.make_name('agent-name')
- node = factory.make_Node(
- status=NODE_STATUS.ALLOCATED, owner=factory.make_User(),
- agent_name=agent_name)
- node.release()
- self.assertEqual(
- (NODE_STATUS.READY, None, node.agent_name),
- (node.status, node.owner, ''))
+ def test_release_node_that_has_power_on(self):
+ agent_name = factory.make_name('agent-name')
+ owner = factory.make_User()
+ node = factory.make_Node(
+ status=NODE_STATUS.ALLOCATED, owner=owner, agent_name=agent_name)
+ node.power_state = POWER_STATE.ON
+ node.release()
+ self.expectThat(node.status, Equals(NODE_STATUS.RELEASING))
+ self.expectThat(node.owner, Equals(owner))
+ self.expectThat(node.agent_name, Equals(''))
+ self.expectThat(node.token, Is(None))
+ self.expectThat(node.netboot, Is(True))
+ self.expectThat(node.osystem, Equals(''))
+ self.expectThat(node.distro_series, Equals(''))
+ self.expectThat(node.license_key, Equals(''))
+
+ def test_release_node_that_has_power_off(self):
+ agent_name = factory.make_name('agent-name')
+ owner = factory.make_User()
+ node = factory.make_Node(
+ status=NODE_STATUS.ALLOCATED, owner=owner, agent_name=agent_name)
+ node.power_state = POWER_STATE.OFF
+ node.release()
+ self.expectThat(node.status, Equals(NODE_STATUS.READY))
+ self.expectThat(node.owner, Is(None))
+ self.expectThat(node.agent_name, Equals(''))
+ self.expectThat(node.token, Is(None))
+ self.expectThat(node.netboot, Is(True))
+ self.expectThat(node.osystem, Equals(''))
+ self.expectThat(node.distro_series, Equals(''))
+ self.expectThat(node.license_key, Equals(''))

 def test_release_deletes_static_ip_host_maps(self):
 remove_host_maps = self.patch_autospec(
@@ -1208,10 +1231,11 @@
 def test_mark_broken_releases_allocated_node(self):
 node = factory.make_Node(
 status=NODE_STATUS.ALLOCATED, owner=factory.make_User())
- node.mark_broken(factory.make_name('error-description'))
- node = reload_object(node)
- self.assertEqual(
- (NODE_STATUS.BROKEN, None), (node.status, node.owner))
+ err_desc = factory.make_name('error-description')
+ release = self.patch(node, 'release')
+ node.mark_broken(err_desc)
+ self.expectThat(node.owner, Is(None))
+ self.assertThat(release, MockCalledOnceWith())

 def test_mark_fixed_changes_status(self):
 node = factory.make_Node(status=NODE_STATUS.BROKEN)
@@ -1237,6 +1261,26 @@
 node.update_power_state(state)
 self.assertEqual(state, reload_object(node).power_state)

+ def test_update_power_state_readies_node_if_releasing(self):
+ node = factory.make_Node(
+ power_state=POWER_STATE.ON, status=NODE_STATUS.RELEASING,
+ owner=None)
+ node.update_power_state(POWER_STATE.OFF)
+ self.expectThat(node.status, Equals(NODE_STATUS.READY))
+ self.expectThat(node.owner, Is(None))
+
+ def test_update_power_state_does_not_change_status_if_not_releasing(self):
+ node = factory.make_Node(
+ power_state=POWER_STATE.ON, status=NODE_STATUS.ALLOCATED)
+ node.update_power_state(POWER_STATE.OFF)
+ self.assertThat(node.status, Equals(NODE_STATUS.ALLOCATED))
+
+ def test_update_power_state_does_not_change_status_if_not_off(self):
+ node = factory.make_Node(
+ power_state=POWER_STATE.OFF, status=NODE_STATUS.ALLOCATED)
+ node.update_power_state(POWER_STATE.ON)
+ self.expectThat(node.status, Equals(NODE_STATUS.ALLOCATED))
+
 def test_end_deployment_changes_state(self):
 node = factory.make_Node(status=NODE_STATUS.DEPLOYING)
 node.end_deployment()

=== modified file 'src/maasserver/node_action.py'
--- src/maasserver/node_action.py 2014-09-12 03:16:19 +0000
+++ src/maasserver/node_action.py 2014-09-16 01:19:01 +0000
@@ -284,7 +284,6 @@
 try:
 Node.objects.start_nodes([self.node.system_id], self.user)
 except StaticIPAddressExhaustion:
- self.node.release()
 raise NodeActionError(
 "%s: Failed to start, static IP addresses are exhausted."
 % self.node.hostname)

=== modified file 'src/maasserver/tests/test_node_action.py'
--- src/maasserver/tests/test_node_action.py 2014-09-10 16:20:31 +0000
+++ src/maasserver/tests/test_node_action.py 2014-09-16 01:19:01 +0000
@@ -23,6 +23,7 @@
 NODE_STATUS,
 NODE_STATUS_CHOICES,
 NODE_STATUS_CHOICES_DICT,
+ POWER_STATE,
 )
 from maasserver.exceptions import (
 NodeActionError,
@@ -50,6 +51,7 @@
 from maasserver.testing.testcase import MAASServerTestCase
 from maastesting.matchers import MockCalledOnceWith
 from mock import ANY
+from testtools.matchers import Equals


 ALL_STATUSES = NODE_STATUS_CHOICES_DICT.keys()
@@ -290,7 +292,8 @@
 def test_StartNode_returns_error_when_no_more_static_IPs(self):
 user = factory.make_User()
 node = factory.make_node_with_mac_attached_to_nodegroupinterface(
- status=NODE_STATUS.ALLOCATED, power_type='ether_wake', owner=user)
+ status=NODE_STATUS.ALLOCATED, power_type='ether_wake', owner=user,
+ power_state=POWER_STATE.OFF)
 ngi = node.get_primary_mac().cluster_interface

 # Narrow the available IP range and pre-claim the only address.
@@ -300,10 +303,11 @@
 ngi.static_ip_range_high, ngi.static_ip_range_low)

 e = self.assertRaises(NodeActionError, StartNode(node, user).execute)
- self.assertEqual(
- "%s: Failed to start, static IP addresses are exhausted." %
- node.hostname, e.message)
- self.assertEqual(NODE_STATUS.READY, node.status)
+ self.expectThat(
+ e.message, Equals(
+ "%s: Failed to start, static IP addresses are exhausted." %
+ node.hostname))
+ self.assertEqual(NODE_STATUS.ALLOCATED, node.status)

 def test_StartNode_requires_edit_permission(self):
 user = factory.make_User()
@@ -351,8 +355,7 @@

 ReleaseNode(node, user).execute()

- self.assertEqual(NODE_STATUS.READY, node.status)
- self.assertIsNone(node.owner)
+ self.expectThat(node.status, Equals(NODE_STATUS.RELEASING))
 self.assertThat(
 stop_nodes, MockCalledOnceWith([node.system_id], user))
393