Merge lp:~julian-edwards/maas/only-ready-when-power-is-off into lp:~maas-committers/maas/trunk

Proposed by Julian Edwards
Status: Merged
Approved by: Julian Edwards
Approved revision: no longer in the source branch.
Merged at revision: 2994
Proposed branch: lp:~julian-edwards/maas/only-ready-when-power-is-off
Merge into: lp:~maas-committers/maas/trunk
Diff against target: 392 lines (+107/-82)
6 files modified
src/maasserver/api/tests/test_node.py (+8/-59)
src/maasserver/enum.py (+3/-0)
src/maasserver/models/node.py (+29/-2)
src/maasserver/models/tests/test_node.py (+57/-13)
src/maasserver/node_action.py (+0/-1)
src/maasserver/tests/test_node_action.py (+10/-7)
To merge this branch: bzr merge lp:~julian-edwards/maas/only-ready-when-power-is-off
Reviewer: Jeroen T. Vermeulen (community)
Review status: Approve
Review via email: mp+234619@code.launchpad.net

Commit message

Only mark nodes READY once power-off is complete. This is done by introducing a new state, RELEASING, from which the node transitions to READY when the power job calls back successfully. The existing transition to BROKEN when the power job fails is unaffected.
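
In code terms, the change boils down to the following condensed sketch of the two model methods involved (taken from the diff attached below; it omits the host-map, DNS and netboot clean-up that release() also performs):

    def release(self):
        # ... deallocate IPs, update DNS, reset netboot/osystem, etc. ...
        if self.power_state == POWER_STATE.OFF:
            # Nothing to power down; the node is fully released right away.
            self.status = NODE_STATUS.READY
            self.owner = None
        else:
            # update_power_state() completes the release once the power
            # job reports the node as off.
            self.status = NODE_STATUS.RELEASING
        self.token = None
        self.agent_name = ''
        self.save()

    def update_power_state(self, power_state):
        self.power_state = power_state
        if (self.status == NODE_STATUS.RELEASING and
                power_state == POWER_STATE.OFF):
            # Power-off confirmed: finish the release.
            self.status = NODE_STATUS.READY
            self.owner = None
        self.save()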

Description of the change

There was some collateral work with this branch: my changes exposed some other potential problems, some of which I fixed:

 - release() always tries to power off the node, even though it may have never been powered on. I've left this as is but I welcome your comments as the reviewer here.

 - release() was being unit tested via the API (!), so I removed all of those tests in favour of a simpler, enhanced unit test.

 - The start node action could only operate on ALLOCATED or DEPLOYED nodes; however, if it hit a StaticIPAddressExhaustion error it was releasing the node! So I stopped it from doing that (see the sketch after this list).
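
For reference, a condensed sketch of that node_action.py change (it mirrors the hunk in the diff below; the only difference from before is that the release() call is gone, so the node stays ALLOCATED when addresses run out):

    try:
        Node.objects.start_nodes([self.node.system_id], self.user)
    except StaticIPAddressExhaustion:
        # The node is no longer released here; the failure is simply
        # reported back to the caller and the node stays ALLOCATED.
        raise NodeActionError(
            "%s: Failed to start, static IP addresses are exhausted."
            % self.node.hostname)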

Revision history for this message
Julian Edwards (julian-edwards) wrote :

I tested this locally and it seems to be ok, you can't start the node again until the poweroff script finishes. In the meantime, the UI shows "Releasing". It'd be great if we had all this on longpoll :)

Revision history for this message
Jeroen T. Vermeulen (jtv) wrote :

Thanks for cleaning up the tests. We have far too much logic, and tests, at the API level.

I think it's OK for release() to power off the node always, because our knowledge of its current power state may be out of date. I doubt we control all state transitions thoroughly at the moment, so there may be obscure paths like: node is switched off; power poller registers that it's off; node is marked as broken; power poller stops looking at the node because it's broken; admin powers node on and works their magic; admin marks node as fixed; user allocates node; MAAS issues power-on command; node misses the command because it's already on.

So as far as I'm concerned, the XXX comment can become just a regular comment note. In fact I probably wouldn't even bother with a special code path for the case where the node is already off: the power method can figure out that it's not needed, or if it prefers, it can issue a probably redundant power-off command for luck.

And looking at the code under that XXX comment, does it make sense to keep the owner set when releasing a node? The user is done with the node at that point. For debugging it may well be interesting to know who was using it, but that's historical information — maybe it belongs in the node event log. So I think I'd just clear the node's owner during release() either way.

One thing is completely unclear to me: what does the RESERVED state represent? It may have come up in London but if so I don't remember. There's a comment in the enum that talks about the node being ready for “named” deployment. I have no idea what that means. I'm not even sure whether it's talking about a DNS daemon or the normal adjective. I don't see any place in the code where a node might enter this state. Yet your test uses this state for a powered-off node and not for a powered-on node... Why? What's the distinction?

review: Approve
Revision history for this message
Raphaël Badin (rvb) wrote :

> I tested this locally and it seems to be ok, you can't start the node again until the poweroff
> script finishes.

What happens if powering the machine off fails? (if the template blows up or if the status of the node is never changed?) I'd say we need to introduce a RELEASING_FAILED state but I'd like to hear your thoughts on this.

Revision history for this message
Julian Edwards (julian-edwards) wrote :

On Monday 15 Sep 2014 07:22:08 you wrote:
> > I tested this locally and it seems to be ok, you can't start the node
> > again until the poweroff script finishes.
>
> What happens if powering the machine off fails? (if the template blows up or
> if the status of the node is never changed?) I'd say we need to introduce
> a RELEASING_FAILED state but I'd like to hear your thoughts on this.

It will be marked BROKEN; that's already all been done.

Revision history for this message
Raphaël Badin (rvb) wrote :

> On Monday 15 Sep 2014 07:22:08 you wrote:
> > > I tested this locally and it seems to be ok, you can't start the node
> > > again until the poweroff script finishes.
> >
> > What happens if powering the machine off fails? (if the template blows up or
> > if the status of the node is never changed?) I'd say we need to introduce
> > a RELEASING_FAILED state but I'd like to hear your thoughts on this.
>
> It will be marked BROKEN; that's already all been done.

That's not what BROKEN means. BROKEN is a state only a manual action from a user should put a node into.

Revision history for this message
Julian Edwards (julian-edwards) wrote :

On Monday 15 Sep 2014 09:34:27 you wrote:
> > On Monday 15 Sep 2014 07:22:08 you wrote:
> > > > I tested this locally and it seems to be ok, you can't start the node
> > > > again until the poweroff script finishes.
> > >
> > > What happens if powering the machine off fails? (if the template blows
> > > up or if the status of the node is never changed?) I'd say we need to
> > > introduce a RELEASING_FAILED state but I'd like to hear your thoughts
> > > on this.>
> > It will be marked BROKEN; that's already all been done.
>
> That's not what BROKEN means. BROKEN is a state only a manual action from a
> user should put a node into.

Then whoever did the work on the power stuff needs to know that.

Revision history for this message
Julian Edwards (julian-edwards) wrote :

On Monday 15 Sep 2014 07:07:25 you wrote:
> I think it's OK for release() to power off the node always, because our
> knowledge of its current power state may be out of date. I doubt we
> control all state transitions thoroughly at the moment, so there may be
> obscure paths like: node is switched off; power poller registers that it's
> off; node is marked as broken; power poller stops looking at the node
> because it's broken; admin powers node on and works their magic; admin
> marks node as fixed; user allocates node; MAAS issues power-on command;
> node misses the command because it's already on.
>
> So as far as I'm concerned, the XXX comment can become just a regular
> comment note. In fact I probably wouldn't even bother with a special code
> path for the case where the node is already off: the power method can
> figure out that it's not needed, or if it prefers, it can issue a probably
> redundant power-off command for luck.

I'll try it and see how it works out, but I am concerned it will lead to
concurrent power operations - we must avoid the situation where we need to
serialize them at the power driver level; that's just madness. It's good
practice to catch problems as early as you can!

> And looking at the code under that XXX comment, does it make sense to keep
> the owner set when releasing a node? The user is done with the node at
> that point. For debugging it may well be interesting to know who was using
> it, but that's historical information — maybe it belongs in the node event
> log. So I think I'd just clear the node's owner during release() either
> way.

It's not released until it's off IMO, which is why I kept the owner on. The
state RELEASING implies it's not released yet as well.

> One thing is completely unclear to me: what does the RESERVED state
> represent? It may have come up in London but if so I don't remember.
> There's a comment in the enum that talks about the node being ready for
> “named” deployment. I have no idea what that means. I'm not even sure
> whether it's talking about a DNS daemon or the normal adjective. I don't
> see any place in the code where a node might enter this state. Yet your
> test uses this state for a powered-off node and not for a powered-on
> node... Why? What's the distinction?

It can be deleted as we discussed on the call.

Thanks for the review!

Preview Diff

=== modified file 'src/maasserver/api/tests/test_node.py'
--- src/maasserver/api/tests/test_node.py 2014-09-10 16:20:31 +0000
+++ src/maasserver/api/tests/test_node.py 2014-09-16 01:19:01 +0000
@@ -47,6 +47,7 @@
 from maasserver.testing.osystems import make_usable_osystem
 from maasserver.testing.testcase import MAASServerTestCase
 from maastesting.matchers import (
+    Equals,
     MockCalledOnceWith,
     MockNotCalled,
     )
@@ -402,7 +403,7 @@
             [httplib.OK] * len(owned_nodes),
             [response.status_code for response in responses])
         self.assertItemsEqual(
-            [NODE_STATUS.READY] * len(owned_nodes),
+            [NODE_STATUS.RELEASING] * len(owned_nodes),
             [node.status for node in reload_objects(Node, owned_nodes)])

     def test_POST_release_releases_failed_node(self):
@@ -414,60 +415,8 @@
         self.assertEqual(
             httplib.OK, response.status_code, response.content)
         owned_node = Node.objects.get(id=owned_node.id)
-        self.assertEqual(
-            (NODE_STATUS.READY, None),
-            (owned_node.status, owned_node.owner))
-
-    def test_POST_release_turns_on_netboot(self):
-        node = factory.make_Node(
-            status=NODE_STATUS.ALLOCATED, owner=self.logged_in_user)
-        node.set_netboot(on=False)
-        self.client.post(self.get_node_uri(node), {'op': 'release'})
-        self.assertTrue(reload_object(node).netboot)
-
-    def test_POST_release_resets_osystem_and_distro_series(self):
-        osystem = factory.pick_OS()
-        release = factory.pick_release(osystem)
-        node = factory.make_Node(
-            status=NODE_STATUS.ALLOCATED, owner=self.logged_in_user,
-            osystem=osystem.name, distro_series=release)
-        self.client.post(self.get_node_uri(node), {'op': 'release'})
-        self.assertEqual('', reload_object(node).osystem)
-        self.assertEqual('', reload_object(node).distro_series)
-
-    def test_POST_release_resets_license_key(self):
-        osystem = factory.pick_OS()
-        release = factory.pick_release(osystem)
-        license_key = factory.make_string()
-        node = factory.make_Node(
-            status=NODE_STATUS.ALLOCATED, owner=self.logged_in_user,
-            osystem=osystem.name, distro_series=release,
-            license_key=license_key)
-        self.client.post(self.get_node_uri(node), {'op': 'release'})
-        self.assertEqual('', reload_object(node).license_key)
-
-    def test_POST_release_resets_agent_name(self):
-        agent_name = factory.make_name('agent-name')
-        osystem = factory.pick_OS()
-        release = factory.pick_release(osystem)
-        node = factory.make_Node(
-            status=NODE_STATUS.ALLOCATED, owner=self.logged_in_user,
-            osystem=osystem.name, distro_series=release,
-            agent_name=agent_name)
-        self.client.post(self.get_node_uri(node), {'op': 'release'})
-        self.assertEqual('', reload_object(node).agent_name)
-
-    def test_POST_release_removes_token_and_user(self):
-        node = factory.make_Node(status=NODE_STATUS.READY)
-        self.client.post(reverse('nodes_handler'), {'op': 'acquire'})
-        node = Node.objects.get(system_id=node.system_id)
-        self.assertEqual(NODE_STATUS.ALLOCATED, node.status)
-        self.assertEqual(self.logged_in_user, node.owner)
-        self.assertEqual(self.client.token.key, node.token.key)
-        self.client.post(self.get_node_uri(node), {'op': 'release'})
-        node = Node.objects.get(system_id=node.system_id)
-        self.assertIs(None, node.owner)
-        self.assertIs(None, node.token)
+        self.expectThat(owned_node.status, Equals(NODE_STATUS.RELEASING))
+        self.expectThat(owned_node.owner, Equals(self.logged_in_user))

     def test_POST_release_does_nothing_for_unowned_node(self):
         node = factory.make_Node(
@@ -530,8 +479,8 @@
         self.become_admin()
         response = self.client.post(
             self.get_node_uri(node), {'op': 'release'})
-        self.assertEqual(httplib.OK, response.status_code)
-        self.assertEqual(NODE_STATUS.READY, reload_object(node).status)
+        self.assertEqual(httplib.OK, response.status_code, response.content)
+        self.assertEqual(NODE_STATUS.RELEASING, reload_object(node).status)

     def test_POST_release_combines_with_acquire(self):
         node = factory.make_Node(status=NODE_STATUS.READY)
@@ -540,8 +489,8 @@
         self.assertEqual(NODE_STATUS.ALLOCATED, reload_object(node).status)
         node_uri = json.loads(response.content)['resource_uri']
         response = self.client.post(node_uri, {'op': 'release'})
-        self.assertEqual(httplib.OK, response.status_code)
-        self.assertEqual(NODE_STATUS.READY, reload_object(node).status)
+        self.assertEqual(httplib.OK, response.status_code, response.content)
+        self.assertEqual(NODE_STATUS.RELEASING, reload_object(node).status)

     def test_POST_commission_commissions_node(self):
         node = factory.make_Node(
=== modified file 'src/maasserver/enum.py'
--- src/maasserver/enum.py 2014-09-15 10:35:35 +0000
+++ src/maasserver/enum.py 2014-09-16 01:19:01 +0000
@@ -74,6 +74,8 @@
     ALLOCATED = 10
     #: The deployment of the node failed.
     FAILED_DEPLOYMENT = 11
+    #: The node is powering down after a release request.
+    RELEASING = 12


 # Django choices for NODE_STATUS: sequence of tuples (key, UI
@@ -91,6 +93,7 @@
     (NODE_STATUS.RETIRED, "Retired"),
     (NODE_STATUS.BROKEN, "Broken"),
     (NODE_STATUS.FAILED_DEPLOYMENT, "Failed deployment"),
+    (NODE_STATUS.RELEASING, "Releasing"),
 )

=== modified file 'src/maasserver/models/node.py'
--- src/maasserver/models/node.py 2014-09-15 14:28:28 +0000
+++ src/maasserver/models/node.py 2014-09-16 01:19:01 +0000
@@ -161,6 +161,7 @@
         NODE_STATUS.RETIRED,
         NODE_STATUS.MISSING,
         NODE_STATUS.BROKEN,
+        NODE_STATUS.RELEASING,
         ],
     NODE_STATUS.ALLOCATED: [
         NODE_STATUS.READY,
@@ -168,6 +169,12 @@
         NODE_STATUS.MISSING,
         NODE_STATUS.BROKEN,
         NODE_STATUS.DEPLOYING,
+        NODE_STATUS.RELEASING,
+        ],
+    NODE_STATUS.RELEASING: [
+        NODE_STATUS.READY,
+        NODE_STATUS.BROKEN,
+        NODE_STATUS.MISSING,
         ],
     NODE_STATUS.DEPLOYING: [
         NODE_STATUS.ALLOCATED,
@@ -176,18 +183,21 @@
         NODE_STATUS.FAILED_DEPLOYMENT,
         NODE_STATUS.DEPLOYED,
         NODE_STATUS.READY,
+        NODE_STATUS.RELEASING,
         ],
     NODE_STATUS.FAILED_DEPLOYMENT: [
         NODE_STATUS.ALLOCATED,
         NODE_STATUS.MISSING,
         NODE_STATUS.BROKEN,
         NODE_STATUS.READY,
+        NODE_STATUS.RELEASING,
         ],
     NODE_STATUS.DEPLOYED: [
         NODE_STATUS.ALLOCATED,
         NODE_STATUS.MISSING,
         NODE_STATUS.BROKEN,
         NODE_STATUS.READY,
+        NODE_STATUS.RELEASING,
         ],
     NODE_STATUS.MISSING: [
         NODE_STATUS.NEW,
@@ -205,6 +215,7 @@
     NODE_STATUS.BROKEN: [
         NODE_STATUS.COMMISSIONING,
         NODE_STATUS.READY,
+        NODE_STATUS.RELEASING,
         ],
     }

@@ -1238,8 +1249,13 @@
         self.delete_host_maps(deallocated_ips)
         from maasserver.dns.config import change_dns_zones
         change_dns_zones([self.nodegroup])
-        self.status = NODE_STATUS.READY
-        self.owner = None
+        if self.power_state == POWER_STATE.OFF:
+            self.status = NODE_STATUS.READY
+            self.owner = None
+        else:
+            # update_power_state() will take care of making the node READY
+            # and unowned when the power is finally off.
+            self.status = NODE_STATUS.RELEASING
         self.token = None
         self.agent_name = ''
         self.set_netboot()
@@ -1300,7 +1316,10 @@
         """
         if self.status in RELEASABLE_STATUSES:
             self.release()
+        # release() normally sets the status to RELEASING and leaves the
+        # owner in place, override that here as we're broken.
         self.status = NODE_STATUS.BROKEN
+        self.owner = None
         self.error_description = error_description
         self.save()

@@ -1317,6 +1336,14 @@
     def update_power_state(self, power_state):
         """Update a node's power state """
         self.power_state = power_state
+        mark_ready = (
+            self.status == NODE_STATUS.RELEASING and
+            power_state == POWER_STATE.OFF)
+        if mark_ready:
+            # Ensure the node is fully released after a successful power
+            # down.
+            self.status = NODE_STATUS.READY
+            self.owner = None
         self.save()

     def claim_static_ip_addresses(self):
=== modified file 'src/maasserver/models/tests/test_node.py'
--- src/maasserver/models/tests/test_node.py 2014-09-15 14:28:28 +0000
+++ src/maasserver/models/tests/test_node.py 2014-09-16 01:19:01 +0000
@@ -91,6 +91,7 @@
     AfterPreprocessing,
     Equals,
     HasLength,
+    Is,
     MatchesStructure,
     )
 from twisted.internet import defer
@@ -642,15 +643,37 @@
             (user, NODE_STATUS.ALLOCATED, agent_name),
             (node.owner, node.status, node.agent_name))

-    def test_release(self):
+    def test_release_node_that_has_power_on(self):
         agent_name = factory.make_name('agent-name')
-        node = factory.make_Node(
-            status=NODE_STATUS.ALLOCATED, owner=factory.make_User(),
-            agent_name=agent_name)
-        node.release()
-        self.assertEqual(
-            (NODE_STATUS.READY, None, node.agent_name),
-            (node.status, node.owner, ''))
+        owner = factory.make_User()
+        node = factory.make_Node(
+            status=NODE_STATUS.ALLOCATED, owner=owner, agent_name=agent_name)
+        node.power_state = POWER_STATE.ON
+        node.release()
+        self.expectThat(node.status, Equals(NODE_STATUS.RELEASING))
+        self.expectThat(node.owner, Equals(owner))
+        self.expectThat(node.agent_name, Equals(''))
+        self.expectThat(node.token, Is(None))
+        self.expectThat(node.netboot, Is(True))
+        self.expectThat(node.osystem, Equals(''))
+        self.expectThat(node.distro_series, Equals(''))
+        self.expectThat(node.license_key, Equals(''))
+
+    def test_release_node_that_has_power_off(self):
+        agent_name = factory.make_name('agent-name')
+        owner = factory.make_User()
+        node = factory.make_Node(
+            status=NODE_STATUS.ALLOCATED, owner=owner, agent_name=agent_name)
+        node.power_state = POWER_STATE.OFF
+        node.release()
+        self.expectThat(node.status, Equals(NODE_STATUS.READY))
+        self.expectThat(node.owner, Is(None))
+        self.expectThat(node.agent_name, Equals(''))
+        self.expectThat(node.token, Is(None))
+        self.expectThat(node.netboot, Is(True))
+        self.expectThat(node.osystem, Equals(''))
+        self.expectThat(node.distro_series, Equals(''))
+        self.expectThat(node.license_key, Equals(''))

     def test_release_deletes_static_ip_host_maps(self):
         remove_host_maps = self.patch_autospec(
@@ -1208,10 +1231,11 @@
     def test_mark_broken_releases_allocated_node(self):
         node = factory.make_Node(
             status=NODE_STATUS.ALLOCATED, owner=factory.make_User())
-        node.mark_broken(factory.make_name('error-description'))
-        node = reload_object(node)
-        self.assertEqual(
-            (NODE_STATUS.BROKEN, None), (node.status, node.owner))
+        err_desc = factory.make_name('error-description')
+        release = self.patch(node, 'release')
+        node.mark_broken(err_desc)
+        self.expectThat(node.owner, Is(None))
+        self.assertThat(release, MockCalledOnceWith())

     def test_mark_fixed_changes_status(self):
         node = factory.make_Node(status=NODE_STATUS.BROKEN)
@@ -1237,6 +1261,26 @@
         node.update_power_state(state)
         self.assertEqual(state, reload_object(node).power_state)

+    def test_update_power_state_readies_node_if_releasing(self):
+        node = factory.make_Node(
+            power_state=POWER_STATE.ON, status=NODE_STATUS.RELEASING,
+            owner=None)
+        node.update_power_state(POWER_STATE.OFF)
+        self.expectThat(node.status, Equals(NODE_STATUS.READY))
+        self.expectThat(node.owner, Is(None))
+
+    def test_update_power_state_does_not_change_status_if_not_releasing(self):
+        node = factory.make_Node(
+            power_state=POWER_STATE.ON, status=NODE_STATUS.ALLOCATED)
+        node.update_power_state(POWER_STATE.OFF)
+        self.assertThat(node.status, Equals(NODE_STATUS.ALLOCATED))
+
+    def test_update_power_state_does_not_change_status_if_not_off(self):
+        node = factory.make_Node(
+            power_state=POWER_STATE.OFF, status=NODE_STATUS.ALLOCATED)
+        node.update_power_state(POWER_STATE.ON)
+        self.expectThat(node.status, Equals(NODE_STATUS.ALLOCATED))
+
     def test_end_deployment_changes_state(self):
         node = factory.make_Node(status=NODE_STATUS.DEPLOYING)
         node.end_deployment()
=== modified file 'src/maasserver/node_action.py'
--- src/maasserver/node_action.py 2014-09-12 03:16:19 +0000
+++ src/maasserver/node_action.py 2014-09-16 01:19:01 +0000
@@ -284,7 +284,6 @@
         try:
             Node.objects.start_nodes([self.node.system_id], self.user)
         except StaticIPAddressExhaustion:
-            self.node.release()
             raise NodeActionError(
                 "%s: Failed to start, static IP addresses are exhausted."
                 % self.node.hostname)
=== modified file 'src/maasserver/tests/test_node_action.py'
--- src/maasserver/tests/test_node_action.py 2014-09-10 16:20:31 +0000
+++ src/maasserver/tests/test_node_action.py 2014-09-16 01:19:01 +0000
@@ -23,6 +23,7 @@
     NODE_STATUS,
     NODE_STATUS_CHOICES,
     NODE_STATUS_CHOICES_DICT,
+    POWER_STATE,
     )
 from maasserver.exceptions import (
     NodeActionError,
@@ -50,6 +51,7 @@
 from maasserver.testing.testcase import MAASServerTestCase
 from maastesting.matchers import MockCalledOnceWith
 from mock import ANY
+from testtools.matchers import Equals


 ALL_STATUSES = NODE_STATUS_CHOICES_DICT.keys()
@@ -290,7 +292,8 @@
     def test_StartNode_returns_error_when_no_more_static_IPs(self):
         user = factory.make_User()
         node = factory.make_node_with_mac_attached_to_nodegroupinterface(
-            status=NODE_STATUS.ALLOCATED, power_type='ether_wake', owner=user)
+            status=NODE_STATUS.ALLOCATED, power_type='ether_wake', owner=user,
+            power_state=POWER_STATE.OFF)
         ngi = node.get_primary_mac().cluster_interface

         # Narrow the available IP range and pre-claim the only address.
@@ -300,10 +303,11 @@
             ngi.static_ip_range_high, ngi.static_ip_range_low)

         e = self.assertRaises(NodeActionError, StartNode(node, user).execute)
-        self.assertEqual(
-            "%s: Failed to start, static IP addresses are exhausted." %
-            node.hostname, e.message)
-        self.assertEqual(NODE_STATUS.READY, node.status)
+        self.expectThat(
+            e.message, Equals(
+                "%s: Failed to start, static IP addresses are exhausted." %
+                node.hostname))
+        self.assertEqual(NODE_STATUS.ALLOCATED, node.status)

     def test_StartNode_requires_edit_permission(self):
         user = factory.make_User()
@@ -351,8 +355,7 @@

         ReleaseNode(node, user).execute()

-        self.assertEqual(NODE_STATUS.READY, node.status)
-        self.assertIsNone(node.owner)
+        self.expectThat(node.status, Equals(NODE_STATUS.RELEASING))
         self.assertThat(
             stop_nodes, MockCalledOnceWith([node.system_id], user))
