Merge lp:~rackspace-titan/nova/instance_states into lp:~hudson-openstack/nova/trunk

Proposed by Brian Lamar
Status: Merged
Approved by: Vish Ishaya
Approved revision: 1504
Merged at revision: 1514
Proposed branch: lp:~rackspace-titan/nova/instance_states
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 2339 lines (+849/-460)
20 files modified
nova/api/ec2/cloud.py (+38/-12)
nova/api/openstack/common.py (+54/-28)
nova/api/openstack/servers.py (+9/-13)
nova/api/openstack/views/servers.py (+4/-9)
nova/compute/api.py (+90/-28)
nova/compute/manager.py (+250/-225)
nova/compute/task_states.py (+59/-0)
nova/compute/vm_states.py (+39/-0)
nova/db/sqlalchemy/api.py (+4/-16)
nova/db/sqlalchemy/migrate_repo/versions/044_update_instance_states.py (+138/-0)
nova/db/sqlalchemy/models.py (+3/-13)
nova/exception.py (+1/-1)
nova/scheduler/driver.py (+4/-6)
nova/tests/api/openstack/test_server_actions.py (+22/-33)
nova/tests/api/openstack/test_servers.py (+67/-38)
nova/tests/integrated/test_servers.py (+23/-11)
nova/tests/scheduler/test_scheduler.py (+9/-4)
nova/tests/test_cloud.py (+10/-5)
nova/tests/test_compute.py (+21/-17)
nova/tests/vmwareapi/db_fakes.py (+4/-1)
To merge this branch: bzr merge lp:~rackspace-titan/nova/instance_states
Reviewer Review Type Date Requested Status
Vish Ishaya (community) Approve
Brian Waldon (community) Approve
Review via email: mp+72502@code.launchpad.net

Commit message

Fixed and improved the way instance "states" are set. Instead of relying solely on the power_state of a VM, there are now explicitly defined VM states and VM task states, which respectively define the current state of the VM and the task currently being performed by the VM.

Description of the change

Currently instance states are not working as intended. This is remedied by using a strategy outlined in the SQLAlchemy code by Ewan, which describes our transition from two columns (state, state_description) to three columns (power_state, vm_state, and task_state):

OLD:
state - This loosely represented the 'state' of the virtual machine
state_description - This gave slightly more information on the virtual machine state

NEW:
power_state - This corresponds to the *actual* VM state on the hypervisor
vm_state - This represents the concept of a VM -- what the VM should be doing
task_state - This represents a current task which is being worked on by the VM

To see a list of possible vm_states see nova/compute/vm_states.py
To see a list of possible task_states see nova/compute/task_states.py

While this change is rather large, it mainly involved changing all 'state' references to 'power_state' and inserting additional database updates to give an accurate reflection of an instance at any point in time.
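For reference, here is a minimal sketch of the new state modules and how a server status is derived from them. The added files are truncated in the diff below, so the lowercase string values and the subset of constants shown here are assumptions based on how the names are used throughout the patch:

    # nova/compute/vm_states.py -- what the VM should be doing (sketch only;
    # string values are assumed, and the real module defines more constants)
    ACTIVE = 'active'
    BUILDING = 'building'
    REBUILDING = 'rebuilding'
    PAUSED = 'paused'
    SUSPENDED = 'suspended'
    RESCUED = 'rescued'
    STOPPED = 'stopped'
    DELETED = 'deleted'
    ERROR = 'error'

    # nova/compute/task_states.py -- what the VM is currently doing (sketch)
    SCHEDULING = 'scheduling'
    NETWORKING = 'networking'
    BLOCK_DEVICE_MAPPING = 'block_device_mapping'
    SPAWNING = 'spawning'
    REBOOTING = 'rebooting'
    STOPPING = 'stopping'
    STARTING = 'starting'

The OpenStack API server status is then derived from the (vm_state, task_state) pair; per the new _STATE_MAP in nova/api/openstack/common.py, status_from_state(vm_states.ACTIVE, task_states.REBOOTING) yields 'REBOOT', while status_from_state(vm_states.ACTIVE, None) falls back to the default 'ACTIVE'.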

Brian Waldon (bcwaldon) wrote :

This is absolutely incredible. You've done a great job, here. One general comment before the line-by-line stuff:

Can we align the states with respect to tense? I don't think all of our states need to end in ING or ED. What do you think?

16: I think you might want to expand this comment to explain yourself better

53/54: Should this be 'stopped' and 'terminated'? According to the allowed EC2 values, these don't seem correct.

189: Can you log the output of the command you allude to?

955/990: You should probably file a bug for this, seems like a simple cleanup item

1224: Thank you for cleaning this function up.

1335/1390: I would love to expand on these module-level docstrings. I think adding a bit more context/explanation of what the task/vm_states represent would be very helpful.

1458/1471: Not sure what these comments mean...

review: Needs Fixing
Brian Lamar (blamar) wrote :

> This is absolutely incredible. You've done a great job, here. One general
> comment before the line-by-line stuff:
>
> Can we align the states with respect to tense? I don't think all of our states
> need to end in ING or ED. What do you think?

The original design had all vm_states and task_states in the same tenses. No INGs or EDs.

task_states.SCHEDULE just didn't have the same effect on me as task_states.SCHEDULING but really it's not a huge difference in my mind. We're just a few seds away from changing everything so it's not difficult. Some of them get more confusing IMO without ING or ED:

NETWORK vs NETWORKING
PAUSE vs PAUSED
STOP vs STOPPED

Would you recommend unification of tenses or just removal of all ING/ED/tense?

Brian Waldon (bcwaldon) wrote :

> > This is absolutely incredible. You've done a great job, here. One general
> > comment before the line-by-line stuff:
> >
> > Can we align the states with respect to tense? I don't think all of our
> states
> > need to end in ING or ED. What do you think?
>
> The original design had all vm_states and task_states in the same tenses. No
> INGs or EDs.
>
> task_states.SCHEDULE just didn't have the same effect on me as
> task_states.SCHEDULING but really it's not a huge difference in my mind. We're
> just a few seds away from changing everything so it's not difficult. Some of
> them get more confusing IMO without ING or ED:
>
> NETWORK vs NETWORKING
> PAUSE vs PAUSED
> STOP vs STOPPED
>
> Would you recommend unification of tenses or just removal of all ING/ED/tense?

Actually, I think it makes a lot more sense now. Tasks make sense to end in -ING and vm_states in -ED (or no suffix at all). There are a few specific states I want to point out:

task_states.SPAWN -> task_states.SPAWNING

vm_states.VERIFY_RESIZE just seems weird. How does RESIZE_VERIFICATION sound, or WAITING?

Can task_states.UNPAUSING go away in favor of RESUMING? This is more of a question than a suggestion, I can see why we might want to leave it.

What about these for images:
task_states.SNAPSHOTTING -> IMAGE_SNAPSHOT
task_states.BACKING_UP -> IMAGE_BACKUP

task_states.HARD_REBOOTING should go away until we support it. If you want to leave it, can you rename it to REBOOTING_HARD?

task_states.PASSWORD -> UPDATING_PASSWORD

task_states.STARTING -> BOOTING
'starting' can easily mean a million different things

I don't see any 'shutdown' states.

1491. By Brian Lamar

Review feedback.

1492. By Brian Lamar

Merged trunk.

1493. By Brian Lamar

Bumped migration number.

Brian Lamar (blamar) wrote :

> 16: I think you might want to expand this comment to explain yourself better

Updated with a link to the EC2 spec; does that help?

>
> 53/54: Should this be 'stopped' and 'terminated'? According to the allowed EC2
> values, these don't seem correct.

Fixed. They weren't like that before, but your suggestion seems logical.

>
> 189: Can you log the output of the command you allude to?

Fixed to output status.

> 955/990: You should probably file a bug for this, seems like a simple cleanup
> item

Not sure it's a 'bug', but I created a BP https://blueprints.launchpad.net/nova/+spec/remove-virt-driver-callbacks

>
> 1224: Thank you for cleaning this function up.

Thanks, I really want to highlight this because I don't want anyone surprised by the difference in functionality. All we're doing in the periodic task right now is syncing power states. All actual VM state transitions should be done through explicit database updates in the compute API/manager and NOT through this method.
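Condensed, the renamed task now amounts to the following (a sketch of _sync_power_states; the full version is in the manager diff below):

    def _sync_power_states(self, context):
        """Align power_state in the DB with the hypervisor's view."""
        vm_instances = self.driver.list_instances_detail()
        vm_instances = dict((vm.name, vm) for vm in vm_instances)
        db_instances = self.db.instance_get_all_by_host(context, self.host)

        for db_instance in db_instances:
            vm_instance = vm_instances.get(db_instance['name'])
            # Not found on the hypervisor -> NOSTATE. Only power_state is
            # touched here; vm_state/task_state are never changed by the sync.
            if vm_instance is None:
                vm_power_state = power_state.NOSTATE
            else:
                vm_power_state = vm_instance.state
            if vm_power_state != db_instance['power_state']:
                self._instance_update(context, db_instance['id'],
                                      power_state=vm_power_state)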

>
> 1335/1390: I would love to expand on these module-level docstrings. I think
> adding a bit more context/explanation of what the task/vm_states represent
> would be very helpful.

Added some details. Do these help?

>
> 1458/1471: Not sure what these comments mean...

Whoops. Those methods shouldn't still have been there. Removed them.

Brian Lamar (blamar) wrote :

> > > This is absolutely incredible. You've done a great job, here. One general
> > > comment before the line-by-line stuff:
> > >
> > > Can we align the states with respect to tense? I don't think all of our
> > states
> > > need to end in ING or ED. What do you think?
> >
> > The original design had all vm_states and task_states in the same tenses. No
> > INGs or EDs.
> >
> > task_states.SCHEDULE just didn't have the same effect on me as
> > task_states.SCHEDULING but really it's not a huge difference in my mind.
> We're
> > just a few seds away from changing everything so it's not difficult. Some of
> > them get more confusing IMO without ING or ED:
> >
> > NETWORK vs NETWORKING
> > PAUSE vs PAUSED
> > STOP vs STOPPED
> >
> > Would you recommend unification of tenses or just removal of all
> ING/ED/tense?
>
> Actually, I think it makes a lot more sense now. Tasks make sense to end in
> -ING and vm_states in -ED (or no suffix at all). There are a few specific
> states I want to point out:
>
> task_states.SPAWN -> task_states.SPAWNING

Updated.

>
> vm_states.VERIFY_RESIZE just seems weird. How does RESIZE_VERIFICATION sound,
> or WAITING?

I actually updated that state to be a task, because it's not really a VM state. The VM is active, but the task is "waiting for input to see if I should revert or not". It's now task_states.RESIZE_VERIFY.

>
> Can task_states.UNPAUSING go away in favor of RESUMING? This is more of a
> question than a suggestion, I can see why we might want to leave it.

Technically unpausing is the opposite of pausing and resuming is the opposite of suspending. It's a little silly I admit, but they have subtle differences so I'd like to keep them if that's not a deal-breaker.

>
> What about these for images:
> task_states.SNAPSHOTTING -> IMAGE_SNAPSHOT
> task_states.BACKING_UP -> IMAGE_BACKUP

Good suggestion. Updated.

>
> task_states.HARD_REBOOTING should go away until we support it. If you want to
> leave it, can you rename it to REBOOTING_HARD?

I've removed it, it's not supported at all and it seems silly to have in there.

>
> task_states.PASSWORD -> UPDATING_PASSWORD

Updated.

>
> task_states.STARTING -> BOOTING
> 'starting' can easily mean a million different things

I very much like the task pairs. STOPPING and STARTING correspond well, and when thought about in a compute/instance sense I'm not sure the word is quite so ambiguous?

>
> I don't see any 'shutdown' states.

STOPPED means the VM is stopped/shutoff. STOPPING means the VM is in the process of being shut down. I'm open to suggestions on making this clearer.
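Concretely, stop is a two-phase update in this patch (condensed from the compute API and manager hunks below):

    # Phase 1: compute/api.py records the intent before casting to compute.
    self.update(context, instance_id,
                vm_state=vm_states.ACTIVE,         # the VM is still up...
                task_state=task_states.STOPPING)   # ...but a stop is in flight

    # Phase 2: compute/manager.py records the result once shutdown finishes.
    self._instance_update(context, instance_id,
                          vm_state=vm_states.STOPPED,
                          task_state=None)         # no task in progress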

1494. By Brian Lamar

review feedback

1495. By Brian Lamar

Test fixup after last review feedback commit.

1496. By Brian Lamar

Merged trunk and fixed conflicts.

Brian Waldon (bcwaldon) wrote :

Fantastic.

review: Approve
1497. By Brian Lamar

Tiny tweaks to the migration script.

1498. By Brian Lamar

Merged trunk.

1499. By Brian Lamar

Increased migration number.

1500. By Brian Lamar

Merged trunk.

1501. By Brian Lamar

Merged trunk.

1502. By Brian Lamar

Fix a bad merge on my part, this fixes rebuilds!

Matt Dietz (cerberus) wrote :

I like the fixes here and the new functionality!

A question for later, but have we ever swung back around to using an actual state machine to enforce transitions? I think the code here all looks sound, but experience tells me that decisions based on state in the code can be fragile at best. I'm asking because of things like the following:

519 + self.update(context,
520 + instance_id,
521 + vm_state=vm_states.SUSPENDED,
522 + task_state=task_states.RESUMING)

It doesn't really matter what state the instance was in before. It's suspended and about to resume now. I know we didn't really enforce the state in all scenarios previously, so I guess I can't expect this patch to take care of all of those in one shot. I bring it up because it seems like in certain instances you've chosen to try and enforce that the instance is in a good state, but others aren't covered. Do we maybe want to consider implementing a more state-machine-esque cleanup in a subsequent patch?
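To illustrate what I mean, something like the following could enforce transitions centrally. This is purely a hypothetical sketch of the idea, with a made-up table and helper; nothing like it exists in this patch:

    # Hypothetical: which tasks may legally begin from each vm_state
    # (illustrative subset only).
    _ALLOWED_TASKS = {
        vm_states.ACTIVE: (task_states.REBOOTING, task_states.PAUSING,
                           task_states.SUSPENDING, task_states.STOPPING),
        vm_states.PAUSED: (task_states.UNPAUSING,),
        vm_states.SUSPENDED: (task_states.RESUMING,),
        vm_states.STOPPED: (task_states.STARTING, task_states.DELETING),
    }

    def begin_task(instance, task_state):
        """Refuse to start a task that is invalid for the current vm_state."""
        allowed = _ALLOWED_TASKS.get(instance['vm_state'], ())
        if task_state not in allowed:
            raise exception.Error('cannot start task %s from vm_state %s'
                                  % (task_state, instance['vm_state']))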

948 + # NOTE(blamar): None of the virt drivers use the 'callback' param

I wondered this myself. Good catch. Make a bug?

Great work on the tests!

I'll add my approve. Just want to touch base with you first and see what (if any) intent there is regarding my comments above.

Alex Meade (alex-meade) wrote :

1424: should this be vm_state.STOPPED?
 Also, vm_state should be vm_states? and imported?

Vish Ishaya (vishvananda) wrote :

> 1424: should this be vm_state.STOPPED?
> Also, vm_state should be vm_states? and imported?

Yes, the import from a few lines up is wrong; it's importing power_states.

I can't find any other issues, good work.

review: Needs Fixing
1503. By Brian Lamar

Merged trunk.

1504. By Brian Lamar

Removed extraneous import and s/vm_state.STOP/vm_states.STOPPED/

Brian Lamar (blamar) wrote :

Great catch, that should be fixed now.

Vish Ishaya (vishvananda) wrote :

looks good now

review: Approve

Preview Diff

1=== modified file 'nova/api/ec2/cloud.py'
2--- nova/api/ec2/cloud.py 2011-08-17 23:44:34 +0000
3+++ nova/api/ec2/cloud.py 2011-08-31 14:15:31 +0000
4@@ -47,6 +47,7 @@
5 from nova import volume
6 from nova.api.ec2 import ec2utils
7 from nova.compute import instance_types
8+from nova.compute import vm_states
9 from nova.image import s3
10
11
12@@ -78,6 +79,30 @@
13 return {'private_key': private_key, 'fingerprint': fingerprint}
14
15
16+# EC2 API can return the following values as documented in the EC2 API
17+# http://docs.amazonwebservices.com/AWSEC2/latest/APIReference/
18+# ApiReference-ItemType-InstanceStateType.html
19+# pending | running | shutting-down | terminated | stopping | stopped
20+_STATE_DESCRIPTION_MAP = {
21+ None: 'pending',
22+ vm_states.ACTIVE: 'running',
23+ vm_states.BUILDING: 'pending',
24+ vm_states.REBUILDING: 'pending',
25+ vm_states.DELETED: 'terminated',
26+ vm_states.STOPPED: 'stopped',
27+ vm_states.MIGRATING: 'migrate',
28+ vm_states.RESIZING: 'resize',
29+ vm_states.PAUSED: 'pause',
30+ vm_states.SUSPENDED: 'suspend',
31+ vm_states.RESCUED: 'rescue',
32+}
33+
34+
35+def state_description_from_vm_state(vm_state):
36+ """Map the vm state to the server status string"""
37+ return _STATE_DESCRIPTION_MAP.get(vm_state, vm_state)
38+
39+
40 # TODO(yamahata): hypervisor dependent default device name
41 _DEFAULT_ROOT_DEVICE_NAME = '/dev/sda1'
42 _DEFAULT_MAPPINGS = {'ami': 'sda1',
43@@ -1039,11 +1064,12 @@
44
45 def _format_attr_instance_initiated_shutdown_behavior(instance,
46 result):
47- state_description = instance['state_description']
48- state_to_value = {'stopping': 'stop',
49- 'stopped': 'stop',
50- 'terminating': 'terminate'}
51- value = state_to_value.get(state_description)
52+ vm_state = instance['vm_state']
53+ state_to_value = {
54+ vm_states.STOPPED: 'stopped',
55+ vm_states.DELETED: 'terminated',
56+ }
57+ value = state_to_value.get(vm_state)
58 if value:
59 result['instanceInitiatedShutdownBehavior'] = value
60
61@@ -1198,8 +1224,8 @@
62 self._format_kernel_id(instance, i, 'kernelId')
63 self._format_ramdisk_id(instance, i, 'ramdiskId')
64 i['instanceState'] = {
65- 'code': instance['state'],
66- 'name': instance['state_description']}
67+ 'code': instance['power_state'],
68+ 'name': state_description_from_vm_state(instance['vm_state'])}
69 fixed_addr = None
70 floating_addr = None
71 if instance['fixed_ips']:
72@@ -1618,22 +1644,22 @@
73 # stop the instance if necessary
74 restart_instance = False
75 if not no_reboot:
76- state_description = instance['state_description']
77+ vm_state = instance['vm_state']
78
79 # if the instance is in subtle state, refuse to proceed.
80- if state_description not in ('running', 'stopping', 'stopped'):
81+ if vm_state not in (vm_states.ACTIVE, vm_states.STOPPED):
82 raise exception.InstanceNotRunning(instance_id=ec2_instance_id)
83
84- if state_description == 'running':
85+ if vm_state == vm_states.ACTIVE:
86 restart_instance = True
87 self.compute_api.stop(context, instance_id=instance_id)
88
89 # wait instance for really stopped
90 start_time = time.time()
91- while state_description != 'stopped':
92+ while vm_state != vm_states.STOPPED:
93 time.sleep(1)
94 instance = self.compute_api.get(context, instance_id)
95- state_description = instance['state_description']
96+ vm_state = instance['vm_state']
97 # NOTE(yamahata): timeout and error. 1 hour for now for safety.
98 # Is it too short/long?
99 # Or is there any better way?
100
101=== modified file 'nova/api/openstack/common.py'
102--- nova/api/openstack/common.py 2011-08-17 07:41:17 +0000
103+++ nova/api/openstack/common.py 2011-08-31 14:15:31 +0000
104@@ -27,7 +27,8 @@
105 from nova import log as logging
106 from nova import quota
107 from nova.api.openstack import wsgi
108-from nova.compute import power_state as compute_power_state
109+from nova.compute import vm_states
110+from nova.compute import task_states
111
112
113 LOG = logging.getLogger('nova.api.openstack.common')
114@@ -38,36 +39,61 @@
115 XML_NS_V11 = 'http://docs.openstack.org/compute/api/v1.1'
116
117
118-_STATUS_MAP = {
119- None: 'BUILD',
120- compute_power_state.NOSTATE: 'BUILD',
121- compute_power_state.RUNNING: 'ACTIVE',
122- compute_power_state.BLOCKED: 'ACTIVE',
123- compute_power_state.SUSPENDED: 'SUSPENDED',
124- compute_power_state.PAUSED: 'PAUSED',
125- compute_power_state.SHUTDOWN: 'SHUTDOWN',
126- compute_power_state.SHUTOFF: 'SHUTOFF',
127- compute_power_state.CRASHED: 'ERROR',
128- compute_power_state.FAILED: 'ERROR',
129- compute_power_state.BUILDING: 'BUILD',
130+_STATE_MAP = {
131+ vm_states.ACTIVE: {
132+ 'default': 'ACTIVE',
133+ task_states.REBOOTING: 'REBOOT',
134+ task_states.UPDATING_PASSWORD: 'PASSWORD',
135+ task_states.RESIZE_VERIFY: 'VERIFY_RESIZE',
136+ },
137+ vm_states.BUILDING: {
138+ 'default': 'BUILD',
139+ },
140+ vm_states.REBUILDING: {
141+ 'default': 'REBUILD',
142+ },
143+ vm_states.STOPPED: {
144+ 'default': 'STOPPED',
145+ },
146+ vm_states.MIGRATING: {
147+ 'default': 'MIGRATING',
148+ },
149+ vm_states.RESIZING: {
150+ 'default': 'RESIZE',
151+ },
152+ vm_states.PAUSED: {
153+ 'default': 'PAUSED',
154+ },
155+ vm_states.SUSPENDED: {
156+ 'default': 'SUSPENDED',
157+ },
158+ vm_states.RESCUED: {
159+ 'default': 'RESCUE',
160+ },
161+ vm_states.ERROR: {
162+ 'default': 'ERROR',
163+ },
164+ vm_states.DELETED: {
165+ 'default': 'DELETED',
166+ },
167 }
168
169
170-def status_from_power_state(power_state):
171- """Map the power state to the server status string"""
172- return _STATUS_MAP[power_state]
173-
174-
175-def power_states_from_status(status):
176- """Map the server status string to a list of power states"""
177- power_states = []
178- for power_state, status_map in _STATUS_MAP.iteritems():
179- # Skip the 'None' state
180- if power_state is None:
181- continue
182- if status.lower() == status_map.lower():
183- power_states.append(power_state)
184- return power_states
185+def status_from_state(vm_state, task_state='default'):
186+ """Given vm_state and task_state, return a status string."""
187+ task_map = _STATE_MAP.get(vm_state, dict(default='UNKNOWN_STATE'))
188+ status = task_map.get(task_state, task_map['default'])
189+ LOG.debug("Generated %(status)s from vm_state=%(vm_state)s "
190+ "task_state=%(task_state)s." % locals())
191+ return status
192+
193+
194+def vm_state_from_status(status):
195+ """Map the server status string to a vm state."""
196+ for state, task_map in _STATE_MAP.iteritems():
197+ status_string = task_map.get("default")
198+ if status.lower() == status_string.lower():
199+ return state
200
201
202 def get_pagination_params(request):
203
204=== modified file 'nova/api/openstack/servers.py'
205--- nova/api/openstack/servers.py 2011-08-24 14:37:59 +0000
206+++ nova/api/openstack/servers.py 2011-08-31 14:15:31 +0000
207@@ -95,17 +95,15 @@
208 search_opts['recurse_zones'] = utils.bool_from_str(
209 search_opts.get('recurse_zones', False))
210
211- # If search by 'status', we need to convert it to 'state'
212- # If the status is unknown, bail.
213- # Leave 'state' in search_opts so compute can pass it on to
214- # child zones..
215+ # If search by 'status', we need to convert it to 'vm_state'
216+ # to pass on to child zones.
217 if 'status' in search_opts:
218 status = search_opts['status']
219- search_opts['state'] = common.power_states_from_status(status)
220- if len(search_opts['state']) == 0:
221+ state = common.vm_state_from_status(status)
222+ if state is None:
223 reason = _('Invalid server status: %(status)s') % locals()
224- LOG.error(reason)
225 raise exception.InvalidInput(reason=reason)
226+ search_opts['vm_state'] = state
227
228 # By default, compute's get_all() will return deleted instances.
229 # If an admin hasn't specified a 'deleted' search option, we need
230@@ -608,9 +606,8 @@
231
232 try:
233 self.compute_api.rebuild(context, instance_id, image_id, password)
234- except exception.BuildInProgress:
235- msg = _("Instance %s is currently being rebuilt.") % instance_id
236- LOG.debug(msg)
237+ except exception.RebuildRequiresActiveInstance:
238+ msg = _("Instance %s must be active to rebuild.") % instance_id
239 raise exc.HTTPConflict(explanation=msg)
240
241 return webob.Response(status_int=202)
242@@ -750,9 +747,8 @@
243 self.compute_api.rebuild(context, instance_id, image_href,
244 password, name=name, metadata=metadata,
245 files_to_inject=personalities)
246- except exception.BuildInProgress:
247- msg = _("Instance %s is currently being rebuilt.") % instance_id
248- LOG.debug(msg)
249+ except exception.RebuildRequiresActiveInstance:
250+ msg = _("Instance %s must be active to rebuild.") % instance_id
251 raise exc.HTTPConflict(explanation=msg)
252 except exception.InstanceNotFound:
253 msg = _("Instance %s could not be found") % instance_id
254
255=== modified file 'nova/api/openstack/views/servers.py'
256--- nova/api/openstack/views/servers.py 2011-08-23 04:17:57 +0000
257+++ nova/api/openstack/views/servers.py 2011-08-31 14:15:31 +0000
258@@ -21,13 +21,12 @@
259 import os
260
261 from nova import exception
262-import nova.compute
263-import nova.context
264 from nova.api.openstack import common
265 from nova.api.openstack.views import addresses as addresses_view
266 from nova.api.openstack.views import flavors as flavors_view
267 from nova.api.openstack.views import images as images_view
268 from nova import utils
269+from nova.compute import vm_states
270
271
272 class ViewBuilder(object):
273@@ -61,17 +60,13 @@
274
275 def _build_detail(self, inst):
276 """Returns a detailed model of a server."""
277+ vm_state = inst.get('vm_state', vm_states.BUILDING)
278+ task_state = inst.get('task_state')
279
280 inst_dict = {
281 'id': inst['id'],
282 'name': inst['display_name'],
283- 'status': common.status_from_power_state(inst.get('state'))}
284-
285- ctxt = nova.context.get_admin_context()
286- compute_api = nova.compute.API()
287-
288- if compute_api.has_finished_migration(ctxt, inst['uuid']):
289- inst_dict['status'] = 'RESIZE-CONFIRM'
290+ 'status': common.status_from_state(vm_state, task_state)}
291
292 # Return the metadata as a dictionary
293 metadata = {}
294
295=== modified file 'nova/compute/api.py'
296--- nova/compute/api.py 2011-08-26 20:36:45 +0000
297+++ nova/compute/api.py 2011-08-31 14:15:31 +0000
298@@ -37,6 +37,8 @@
299 from nova import volume
300 from nova.compute import instance_types
301 from nova.compute import power_state
302+from nova.compute import task_states
303+from nova.compute import vm_states
304 from nova.compute.utils import terminate_volumes
305 from nova.scheduler import api as scheduler_api
306 from nova.db import base
307@@ -75,12 +77,18 @@
308
309
310 def _is_able_to_shutdown(instance, instance_id):
311- states = {'terminating': "Instance %s is already being terminated",
312- 'migrating': "Instance %s is being migrated",
313- 'stopping': "Instance %s is being stopped"}
314- msg = states.get(instance['state_description'])
315- if msg:
316- LOG.warning(_(msg), instance_id)
317+ vm_state = instance["vm_state"]
318+ task_state = instance["task_state"]
319+
320+ valid_shutdown_states = [
321+ vm_states.ACTIVE,
322+ vm_states.REBUILDING,
323+ vm_states.BUILDING,
324+ ]
325+
326+ if vm_state not in valid_shutdown_states:
327+ LOG.warn(_("Instance %(instance_id)s is not in an 'active' state. It "
328+ "is currently %(vm_state)s. Shutdown aborted.") % locals())
329 return False
330
331 return True
332@@ -251,10 +259,10 @@
333 'image_ref': image_href,
334 'kernel_id': kernel_id or '',
335 'ramdisk_id': ramdisk_id or '',
336+ 'power_state': power_state.NOSTATE,
337+ 'vm_state': vm_states.BUILDING,
338 'config_drive_id': config_drive_id or '',
339 'config_drive': config_drive or '',
340- 'state': 0,
341- 'state_description': 'scheduling',
342 'user_id': context.user_id,
343 'project_id': context.project_id,
344 'launch_time': time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime()),
345@@ -415,6 +423,8 @@
346 updates['display_name'] = "Server %s" % instance_id
347 instance['display_name'] = updates['display_name']
348 updates['hostname'] = self.hostname_factory(instance)
349+ updates['vm_state'] = vm_states.BUILDING
350+ updates['task_state'] = task_states.SCHEDULING
351
352 instance = self.update(context, instance_id, **updates)
353 return instance
354@@ -750,10 +760,8 @@
355 return
356
357 self.update(context,
358- instance['id'],
359- state_description='terminating',
360- state=0,
361- terminated_at=utils.utcnow())
362+ instance_id,
363+ task_state=task_states.DELETING)
364
365 host = instance['host']
366 if host:
367@@ -773,9 +781,9 @@
368 return
369
370 self.update(context,
371- instance['id'],
372- state_description='stopping',
373- state=power_state.NOSTATE,
374+ instance_id,
375+ vm_state=vm_states.ACTIVE,
376+ task_state=task_states.STOPPING,
377 terminated_at=utils.utcnow())
378
379 host = instance['host']
380@@ -787,12 +795,18 @@
381 """Start an instance."""
382 LOG.debug(_("Going to try to start %s"), instance_id)
383 instance = self._get_instance(context, instance_id, 'starting')
384- if instance['state_description'] != 'stopped':
385- _state_description = instance['state_description']
386+ vm_state = instance["vm_state"]
387+
388+ if vm_state != vm_states.STOPPED:
389 LOG.warning(_("Instance %(instance_id)s is not "
390- "stopped(%(_state_description)s)") % locals())
391+ "stopped. (%(vm_state)s)") % locals())
392 return
393
394+ self.update(context,
395+ instance_id,
396+ vm_state=vm_states.STOPPED,
397+ task_state=task_states.STARTING)
398+
399 # TODO(yamahata): injected_files isn't supported right now.
400 # It is used only for osapi. not for ec2 api.
401 # availability_zone isn't used by run_instance.
402@@ -1020,6 +1034,10 @@
403 @scheduler_api.reroute_compute("reboot")
404 def reboot(self, context, instance_id):
405 """Reboot the given instance."""
406+ self.update(context,
407+ instance_id,
408+ vm_state=vm_states.ACTIVE,
409+ task_state=task_states.REBOOTING)
410 self._cast_compute_message('reboot_instance', context, instance_id)
411
412 @scheduler_api.reroute_compute("rebuild")
413@@ -1027,21 +1045,25 @@
414 name=None, metadata=None, files_to_inject=None):
415 """Rebuild the given instance with the provided metadata."""
416 instance = db.api.instance_get(context, instance_id)
417+ name = name or instance["display_name"]
418
419- if instance["state"] == power_state.BUILDING:
420- msg = _("Instance already building")
421- raise exception.BuildInProgress(msg)
422+ if instance["vm_state"] != vm_states.ACTIVE:
423+ msg = _("Instance must be active to rebuild.")
424+ raise exception.RebuildRequiresActiveInstance(msg)
425
426 files_to_inject = files_to_inject or []
427+ metadata = metadata or {}
428+
429 self._check_injected_file_quota(context, files_to_inject)
430+ self._check_metadata_properties_quota(context, metadata)
431
432- values = {"image_ref": image_href}
433- if metadata is not None:
434- self._check_metadata_properties_quota(context, metadata)
435- values['metadata'] = metadata
436- if name is not None:
437- values['display_name'] = name
438- self.db.instance_update(context, instance_id, values)
439+ self.update(context,
440+ instance_id,
441+ metadata=metadata,
442+ display_name=name,
443+ image_ref=image_href,
444+ vm_state=vm_states.ACTIVE,
445+ task_state=task_states.REBUILDING)
446
447 rebuild_params = {
448 "new_pass": admin_password,
449@@ -1065,6 +1087,11 @@
450 raise exception.MigrationNotFoundByStatus(instance_id=instance_id,
451 status='finished')
452
453+ self.update(context,
454+ instance_id,
455+ vm_state=vm_states.ACTIVE,
456+ task_state=None)
457+
458 params = {'migration_id': migration_ref['id']}
459 self._cast_compute_message('revert_resize', context,
460 instance_ref['uuid'],
461@@ -1085,6 +1112,12 @@
462 if not migration_ref:
463 raise exception.MigrationNotFoundByStatus(instance_id=instance_id,
464 status='finished')
465+
466+ self.update(context,
467+ instance_id,
468+ vm_state=vm_states.ACTIVE,
469+ task_state=None)
470+
471 params = {'migration_id': migration_ref['id']}
472 self._cast_compute_message('confirm_resize', context,
473 instance_ref['uuid'],
474@@ -1130,6 +1163,11 @@
475 if (current_memory_mb == new_memory_mb) and flavor_id:
476 raise exception.CannotResizeToSameSize()
477
478+ self.update(context,
479+ instance_id,
480+ vm_state=vm_states.RESIZING,
481+ task_state=task_states.RESIZE_PREP)
482+
483 instance_ref = self._get_instance(context, instance_id, 'resize')
484 self._cast_scheduler_message(context,
485 {"method": "prep_resize",
486@@ -1163,11 +1201,19 @@
487 @scheduler_api.reroute_compute("pause")
488 def pause(self, context, instance_id):
489 """Pause the given instance."""
490+ self.update(context,
491+ instance_id,
492+ vm_state=vm_states.ACTIVE,
493+ task_state=task_states.PAUSING)
494 self._cast_compute_message('pause_instance', context, instance_id)
495
496 @scheduler_api.reroute_compute("unpause")
497 def unpause(self, context, instance_id):
498 """Unpause the given instance."""
499+ self.update(context,
500+ instance_id,
501+ vm_state=vm_states.PAUSED,
502+ task_state=task_states.UNPAUSING)
503 self._cast_compute_message('unpause_instance', context, instance_id)
504
505 def _call_compute_message_for_host(self, action, context, host, params):
506@@ -1200,21 +1246,37 @@
507 @scheduler_api.reroute_compute("suspend")
508 def suspend(self, context, instance_id):
509 """Suspend the given instance."""
510+ self.update(context,
511+ instance_id,
512+ vm_state=vm_states.ACTIVE,
513+ task_state=task_states.SUSPENDING)
514 self._cast_compute_message('suspend_instance', context, instance_id)
515
516 @scheduler_api.reroute_compute("resume")
517 def resume(self, context, instance_id):
518 """Resume the given instance."""
519+ self.update(context,
520+ instance_id,
521+ vm_state=vm_states.SUSPENDED,
522+ task_state=task_states.RESUMING)
523 self._cast_compute_message('resume_instance', context, instance_id)
524
525 @scheduler_api.reroute_compute("rescue")
526 def rescue(self, context, instance_id):
527 """Rescue the given instance."""
528+ self.update(context,
529+ instance_id,
530+ vm_state=vm_states.ACTIVE,
531+ task_state=task_states.RESCUING)
532 self._cast_compute_message('rescue_instance', context, instance_id)
533
534 @scheduler_api.reroute_compute("unrescue")
535 def unrescue(self, context, instance_id):
536 """Unrescue the given instance."""
537+ self.update(context,
538+ instance_id,
539+ vm_state=vm_states.RESCUED,
540+ task_state=task_states.UNRESCUING)
541 self._cast_compute_message('unrescue_instance', context, instance_id)
542
543 @scheduler_api.reroute_compute("set_admin_password")
544
545=== modified file 'nova/compute/manager.py'
546--- nova/compute/manager.py 2011-08-26 13:54:53 +0000
547+++ nova/compute/manager.py 2011-08-31 14:15:31 +0000
548@@ -56,6 +56,8 @@
549 from nova import utils
550 from nova import volume
551 from nova.compute import power_state
552+from nova.compute import task_states
553+from nova.compute import vm_states
554 from nova.notifier import api as notifier
555 from nova.compute.utils import terminate_volumes
556 from nova.virt import driver
557@@ -146,6 +148,10 @@
558 super(ComputeManager, self).__init__(service_name="compute",
559 *args, **kwargs)
560
561+ def _instance_update(self, context, instance_id, **kwargs):
562+ """Update an instance in the database using kwargs as value."""
563+ return self.db.instance_update(context, instance_id, kwargs)
564+
565 def init_host(self):
566 """Initialization for a standalone compute service."""
567 self.driver.init_host(host=self.host)
568@@ -153,8 +159,8 @@
569 instances = self.db.instance_get_all_by_host(context, self.host)
570 for instance in instances:
571 inst_name = instance['name']
572- db_state = instance['state']
573- drv_state = self._update_state(context, instance['id'])
574+ db_state = instance['power_state']
575+ drv_state = self._get_power_state(context, instance)
576
577 expect_running = db_state == power_state.RUNNING \
578 and drv_state != db_state
579@@ -177,29 +183,13 @@
580 LOG.warning(_('Hypervisor driver does not '
581 'support firewall rules'))
582
583- def _update_state(self, context, instance_id, state=None):
584- """Update the state of an instance from the driver info."""
585- instance_ref = self.db.instance_get(context, instance_id)
586-
587- if state is None:
588- try:
589- LOG.debug(_('Checking state of %s'), instance_ref['name'])
590- info = self.driver.get_info(instance_ref['name'])
591- except exception.NotFound:
592- info = None
593-
594- if info is not None:
595- state = info['state']
596- else:
597- state = power_state.FAILED
598-
599- self.db.instance_set_state(context, instance_id, state)
600- return state
601-
602- def _update_launched_at(self, context, instance_id, launched_at=None):
603- """Update the launched_at parameter of the given instance."""
604- data = {'launched_at': launched_at or utils.utcnow()}
605- self.db.instance_update(context, instance_id, data)
606+ def _get_power_state(self, context, instance):
607+ """Retrieve the power state for the given instance."""
608+ LOG.debug(_('Checking state of %s'), instance['name'])
609+ try:
610+ return self.driver.get_info(instance['name'])["state"]
611+ except exception.NotFound:
612+ return power_state.FAILED
613
614 def get_console_topic(self, context, **kwargs):
615 """Retrieves the console host for a project on this host.
616@@ -251,11 +241,6 @@
617
618 def _setup_block_device_mapping(self, context, instance_id):
619 """setup volumes for block device mapping"""
620- self.db.instance_set_state(context,
621- instance_id,
622- power_state.NOSTATE,
623- 'block_device_mapping')
624-
625 volume_api = volume.API()
626 block_device_mapping = []
627 swap = None
628@@ -389,17 +374,12 @@
629 updates = {}
630 updates['host'] = self.host
631 updates['launched_on'] = self.host
632- instance = self.db.instance_update(context,
633- instance_id,
634- updates)
635+ updates['vm_state'] = vm_states.BUILDING
636+ updates['task_state'] = task_states.NETWORKING
637+ instance = self.db.instance_update(context, instance_id, updates)
638 instance['injected_files'] = kwargs.get('injected_files', [])
639 instance['admin_pass'] = kwargs.get('admin_password', None)
640
641- self.db.instance_set_state(context,
642- instance_id,
643- power_state.NOSTATE,
644- 'networking')
645-
646 is_vpn = instance['image_ref'] == str(FLAGS.vpn_image_id)
647 try:
648 # NOTE(vish): This could be a cast because we don't do anything
649@@ -418,6 +398,11 @@
650 # all vif creation and network injection, maybe this is correct
651 network_info = []
652
653+ self._instance_update(context,
654+ instance_id,
655+ vm_state=vm_states.BUILDING,
656+ task_state=task_states.BLOCK_DEVICE_MAPPING)
657+
658 (swap, ephemerals,
659 block_device_mapping) = self._setup_block_device_mapping(
660 context, instance_id)
661@@ -427,9 +412,12 @@
662 'ephemerals': ephemerals,
663 'block_device_mapping': block_device_mapping}
664
665+ self._instance_update(context,
666+ instance_id,
667+ vm_state=vm_states.BUILDING,
668+ task_state=task_states.SPAWNING)
669+
670 # TODO(vish) check to make sure the availability zone matches
671- self._update_state(context, instance_id, power_state.BUILDING)
672-
673 try:
674 self.driver.spawn(context, instance,
675 network_info, block_device_info)
676@@ -438,13 +426,21 @@
677 "virtualization enabled in the BIOS? Details: "
678 "%(ex)s") % locals()
679 LOG.exception(msg)
680-
681- self._update_launched_at(context, instance_id)
682- self._update_state(context, instance_id)
683+ return
684+
685+ current_power_state = self._get_power_state(context, instance)
686+ self._instance_update(context,
687+ instance_id,
688+ power_state=current_power_state,
689+ vm_state=vm_states.ACTIVE,
690+ task_state=None,
691+ launched_at=utils.utcnow())
692+
693 usage_info = utils.usage_from_instance(instance)
694 notifier.notify('compute.%s' % self.host,
695 'compute.instance.create',
696 notifier.INFO, usage_info)
697+
698 except exception.InstanceNotFound:
699 # FIXME(wwolf): We are just ignoring InstanceNotFound
700 # exceptions here in case the instance was immediately
701@@ -480,8 +476,7 @@
702 for volume in volumes:
703 self._detach_volume(context, instance_id, volume['id'], False)
704
705- if (instance['state'] == power_state.SHUTOFF and
706- instance['state_description'] != 'stopped'):
707+ if instance['power_state'] == power_state.SHUTOFF:
708 self.db.instance_destroy(context, instance_id)
709 raise exception.Error(_('trying to destroy already destroyed'
710 ' instance: %s') % instance_id)
711@@ -496,9 +491,14 @@
712 """Terminate an instance on this host."""
713 self._shutdown_instance(context, instance_id, 'Terminating')
714 instance = self.db.instance_get(context.elevated(), instance_id)
715+ self._instance_update(context,
716+ instance_id,
717+ vm_state=vm_states.DELETED,
718+ task_state=None,
719+ terminated_at=utils.utcnow())
720
721- # TODO(ja): should we keep it in a terminated state for a bit?
722 self.db.instance_destroy(context, instance_id)
723+
724 usage_info = utils.usage_from_instance(instance)
725 notifier.notify('compute.%s' % self.host,
726 'compute.instance.delete',
727@@ -509,7 +509,10 @@
728 def stop_instance(self, context, instance_id):
729 """Stopping an instance on this host."""
730 self._shutdown_instance(context, instance_id, 'Stopping')
731- # instance state will be updated to stopped by _poll_instance_states()
732+ self._instance_update(context,
733+ instance_id,
734+ vm_state=vm_states.STOPPED,
735+ task_state=None)
736
737 @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())
738 @checks_instance_lock
739@@ -529,26 +532,46 @@
740 instance_ref = self.db.instance_get(context, instance_id)
741 LOG.audit(_("Rebuilding instance %s"), instance_id, context=context)
742
743- self._update_state(context, instance_id, power_state.BUILDING)
744+ current_power_state = self._get_power_state(context, instance_ref)
745+ self._instance_update(context,
746+ instance_id,
747+ power_state=current_power_state,
748+ vm_state=vm_states.REBUILDING,
749+ task_state=None)
750
751 network_info = self._get_instance_nw_info(context, instance_ref)
752-
753 self.driver.destroy(instance_ref, network_info)
754+
755+ self._instance_update(context,
756+ instance_id,
757+ vm_state=vm_states.REBUILDING,
758+ task_state=task_states.BLOCK_DEVICE_MAPPING)
759+
760 instance_ref.injected_files = kwargs.get('injected_files', [])
761 network_info = self.network_api.get_instance_nw_info(context,
762 instance_ref)
763 bd_mapping = self._setup_block_device_mapping(context, instance_id)
764
765+ self._instance_update(context,
766+ instance_id,
767+ vm_state=vm_states.REBUILDING,
768+ task_state=task_states.SPAWNING)
769+
770 # pull in new password here since the original password isn't in the db
771 instance_ref.admin_pass = kwargs.get('new_pass',
772 utils.generate_password(FLAGS.password_length))
773
774 self.driver.spawn(context, instance_ref, network_info, bd_mapping)
775
776- self._update_launched_at(context, instance_id)
777- self._update_state(context, instance_id)
778+ current_power_state = self._get_power_state(context, instance_ref)
779+ self._instance_update(context,
780+ instance_id,
781+ power_state=current_power_state,
782+ vm_state=vm_states.ACTIVE,
783+ task_state=None,
784+ launched_at=utils.utcnow())
785+
786 usage_info = utils.usage_from_instance(instance_ref)
787-
788 notifier.notify('compute.%s' % self.host,
789 'compute.instance.rebuild',
790 notifier.INFO,
791@@ -558,26 +581,34 @@
792 @checks_instance_lock
793 def reboot_instance(self, context, instance_id):
794 """Reboot an instance on this host."""
795- context = context.elevated()
796- self._update_state(context, instance_id)
797- instance_ref = self.db.instance_get(context, instance_id)
798 LOG.audit(_("Rebooting instance %s"), instance_id, context=context)
799-
800- if instance_ref['state'] != power_state.RUNNING:
801- state = instance_ref['state']
802+ context = context.elevated()
803+ instance_ref = self.db.instance_get(context, instance_id)
804+
805+ current_power_state = self._get_power_state(context, instance_ref)
806+ self._instance_update(context,
807+ instance_id,
808+ power_state=current_power_state,
809+ vm_state=vm_states.ACTIVE,
810+ task_state=task_states.REBOOTING)
811+
812+ if instance_ref['power_state'] != power_state.RUNNING:
813+ state = instance_ref['power_state']
814 running = power_state.RUNNING
815 LOG.warn(_('trying to reboot a non-running '
816 'instance: %(instance_id)s (state: %(state)s '
817 'expected: %(running)s)') % locals(),
818 context=context)
819
820- self.db.instance_set_state(context,
821- instance_id,
822- power_state.NOSTATE,
823- 'rebooting')
824 network_info = self._get_instance_nw_info(context, instance_ref)
825 self.driver.reboot(instance_ref, network_info)
826- self._update_state(context, instance_id)
827+
828+ current_power_state = self._get_power_state(context, instance_ref)
829+ self._instance_update(context,
830+ instance_id,
831+ power_state=current_power_state,
832+ vm_state=vm_states.ACTIVE,
833+ task_state=None)
834
835 @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())
836 def snapshot_instance(self, context, instance_id, image_id,
837@@ -593,37 +624,45 @@
838 :param rotation: int representing how many backups to keep around;
839 None if rotation shouldn't be used (as in the case of snapshots)
840 """
841+ if image_type == "snapshot":
842+ task_state = task_states.IMAGE_SNAPSHOT
843+ elif image_type == "backup":
844+ task_state = task_states.IMAGE_BACKUP
845+ else:
846+ raise Exception(_('Image type not recognized %s') % image_type)
847+
848 context = context.elevated()
849 instance_ref = self.db.instance_get(context, instance_id)
850
851- #NOTE(sirp): update_state currently only refreshes the state field
852- # if we add is_snapshotting, we will need this refreshed too,
853- # potentially?
854- self._update_state(context, instance_id)
855+ current_power_state = self._get_power_state(context, instance_ref)
856+ self._instance_update(context,
857+ instance_id,
858+ power_state=current_power_state,
859+ vm_state=vm_states.ACTIVE,
860+ task_state=task_state)
861
862 LOG.audit(_('instance %s: snapshotting'), instance_id,
863 context=context)
864- if instance_ref['state'] != power_state.RUNNING:
865- state = instance_ref['state']
866+
867+ if instance_ref['power_state'] != power_state.RUNNING:
868+ state = instance_ref['power_state']
869 running = power_state.RUNNING
870 LOG.warn(_('trying to snapshot a non-running '
871 'instance: %(instance_id)s (state: %(state)s '
872 'expected: %(running)s)') % locals())
873
874 self.driver.snapshot(context, instance_ref, image_id)
875-
876- if image_type == 'snapshot':
877- if rotation:
878- raise exception.ImageRotationNotAllowed()
879+ self._instance_update(context, instance_id, task_state=None)
880+
881+ if image_type == 'snapshot' and rotation:
882+ raise exception.ImageRotationNotAllowed()
883+
884+ elif image_type == 'backup' and rotation:
885+ instance_uuid = instance_ref['uuid']
886+ self.rotate_backups(context, instance_uuid, backup_type, rotation)
887+
888 elif image_type == 'backup':
889- if rotation:
890- instance_uuid = instance_ref['uuid']
891- self.rotate_backups(context, instance_uuid, backup_type,
892- rotation)
893- else:
894- raise exception.RotationRequiredForBackup()
895- else:
896- raise Exception(_('Image type not recognized %s') % image_type)
897+ raise exception.RotationRequiredForBackup()
898
899 def rotate_backups(self, context, instance_uuid, backup_type, rotation):
900 """Delete excess backups associated to an instance.
901@@ -691,7 +730,7 @@
902 for i in xrange(max_tries):
903 instance_ref = self.db.instance_get(context, instance_id)
904 instance_id = instance_ref["id"]
905- instance_state = instance_ref["state"]
906+ instance_state = instance_ref["power_state"]
907 expected_state = power_state.RUNNING
908
909 if instance_state != expected_state:
910@@ -726,7 +765,7 @@
911 context = context.elevated()
912 instance_ref = self.db.instance_get(context, instance_id)
913 instance_id = instance_ref['id']
914- instance_state = instance_ref['state']
915+ instance_state = instance_ref['power_state']
916 expected_state = power_state.RUNNING
917 if instance_state != expected_state:
918 LOG.warn(_('trying to inject a file into a non-running '
919@@ -744,7 +783,7 @@
920 context = context.elevated()
921 instance_ref = self.db.instance_get(context, instance_id)
922 instance_id = instance_ref['id']
923- instance_state = instance_ref['state']
924+ instance_state = instance_ref['power_state']
925 expected_state = power_state.RUNNING
926 if instance_state != expected_state:
927 LOG.warn(_('trying to update agent on a non-running '
928@@ -759,40 +798,41 @@
929 @checks_instance_lock
930 def rescue_instance(self, context, instance_id):
931 """Rescue an instance on this host."""
932- context = context.elevated()
933- instance_ref = self.db.instance_get(context, instance_id)
934 LOG.audit(_('instance %s: rescuing'), instance_id, context=context)
935- self.db.instance_set_state(context,
936- instance_id,
937- power_state.NOSTATE,
938- 'rescuing')
939- _update_state = lambda result: self._update_state_callback(
940- self, context, instance_id, result)
941+ context = context.elevated()
942+
943+ instance_ref = self.db.instance_get(context, instance_id)
944 network_info = self._get_instance_nw_info(context, instance_ref)
945- self.driver.rescue(context, instance_ref, _update_state, network_info)
946- self._update_state(context, instance_id)
947+
948+ # NOTE(blamar): None of the virt drivers use the 'callback' param
949+ self.driver.rescue(context, instance_ref, None, network_info)
950+
951+ current_power_state = self._get_power_state(context, instance_ref)
952+ self._instance_update(context,
953+ instance_id,
954+ vm_state=vm_states.RESCUED,
955+ task_state=None,
956+ power_state=current_power_state)
957
958 @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())
959 @checks_instance_lock
960 def unrescue_instance(self, context, instance_id):
961 """Rescue an instance on this host."""
962- context = context.elevated()
963- instance_ref = self.db.instance_get(context, instance_id)
964 LOG.audit(_('instance %s: unrescuing'), instance_id, context=context)
965- self.db.instance_set_state(context,
966- instance_id,
967- power_state.NOSTATE,
968- 'unrescuing')
969- _update_state = lambda result: self._update_state_callback(
970- self, context, instance_id, result)
971+ context = context.elevated()
972+
973+ instance_ref = self.db.instance_get(context, instance_id)
974 network_info = self._get_instance_nw_info(context, instance_ref)
975- self.driver.unrescue(instance_ref, _update_state, network_info)
976- self._update_state(context, instance_id)
977-
978- @staticmethod
979- def _update_state_callback(self, context, instance_id, result):
980- """Update instance state when async task completes."""
981- self._update_state(context, instance_id)
982+
983+ # NOTE(blamar): None of the virt drivers use the 'callback' param
984+ self.driver.unrescue(instance_ref, None, network_info)
985+
986+ current_power_state = self._get_power_state(context, instance_ref)
987+ self._instance_update(context,
988+ instance_id,
989+ vm_state=vm_states.ACTIVE,
990+ task_state=None,
991+ power_state=current_power_state)
992
993 @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())
994 @checks_instance_lock
995@@ -851,11 +891,12 @@
996
997 # Just roll back the record. There's no need to resize down since
998 # the 'old' VM already has the preferred attributes
999- self.db.instance_update(context, instance_ref['uuid'],
1000- dict(memory_mb=instance_type['memory_mb'],
1001- vcpus=instance_type['vcpus'],
1002- local_gb=instance_type['local_gb'],
1003- instance_type_id=instance_type['id']))
1004+ self._instance_update(context,
1005+ instance_ref["uuid"],
1006+ memory_mb=instance_type['memory_mb'],
1007+ vcpus=instance_type['vcpus'],
1008+ local_gb=instance_type['local_gb'],
1009+ instance_type_id=instance_type['id'])
1010
1011 self.driver.revert_migration(instance_ref)
1012 self.db.migration_update(context, migration_id,
1013@@ -882,8 +923,11 @@
1014 instance_ref = self.db.instance_get_by_uuid(context, instance_id)
1015
1016 if instance_ref['host'] == FLAGS.host:
1017- raise exception.Error(_(
1018- 'Migration error: destination same as source!'))
1019+ self._instance_update(context,
1020+ instance_id,
1021+ vm_state=vm_states.ERROR)
1022+ msg = _('Migration error: destination same as source!')
1023+ raise exception.Error(msg)
1024
1025 old_instance_type = self.db.instance_type_get(context,
1026 instance_ref['instance_type_id'])
1027@@ -977,6 +1021,11 @@
1028 self.driver.finish_migration(context, instance_ref, disk_info,
1029 network_info, resize_instance)
1030
1031+ self._instance_update(context,
1032+ instance_id,
1033+ vm_state=vm_states.ACTIVE,
1034+ task_state=task_states.RESIZE_VERIFY)
1035+
1036 self.db.migration_update(context, migration_id,
1037 {'status': 'finished', })
1038
1039@@ -1008,35 +1057,35 @@
1040 @checks_instance_lock
1041 def pause_instance(self, context, instance_id):
1042 """Pause an instance on this host."""
1043- context = context.elevated()
1044- instance_ref = self.db.instance_get(context, instance_id)
1045 LOG.audit(_('instance %s: pausing'), instance_id, context=context)
1046- self.db.instance_set_state(context,
1047- instance_id,
1048- power_state.NOSTATE,
1049- 'pausing')
1050- self.driver.pause(instance_ref,
1051- lambda result: self._update_state_callback(self,
1052- context,
1053- instance_id,
1054- result))
1055+ context = context.elevated()
1056+
1057+ instance_ref = self.db.instance_get(context, instance_id)
1058+ self.driver.pause(instance_ref, lambda result: None)
1059+
1060+ current_power_state = self._get_power_state(context, instance_ref)
1061+ self._instance_update(context,
1062+ instance_id,
1063+ power_state=current_power_state,
1064+ vm_state=vm_states.PAUSED,
1065+ task_state=None)
1066
1067 @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())
1068 @checks_instance_lock
1069 def unpause_instance(self, context, instance_id):
1070 """Unpause a paused instance on this host."""
1071- context = context.elevated()
1072- instance_ref = self.db.instance_get(context, instance_id)
1073 LOG.audit(_('instance %s: unpausing'), instance_id, context=context)
1074- self.db.instance_set_state(context,
1075- instance_id,
1076- power_state.NOSTATE,
1077- 'unpausing')
1078- self.driver.unpause(instance_ref,
1079- lambda result: self._update_state_callback(self,
1080- context,
1081- instance_id,
1082- result))
1083+ context = context.elevated()
1084+
1085+ instance_ref = self.db.instance_get(context, instance_id)
1086+ self.driver.unpause(instance_ref, lambda result: None)
1087+
1088+ current_power_state = self._get_power_state(context, instance_ref)
1089+ self._instance_update(context,
1090+ instance_id,
1091+ power_state=current_power_state,
1092+ vm_state=vm_states.ACTIVE,
1093+ task_state=None)
1094
1095 @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())
1096 def host_power_action(self, context, host=None, action=None):
1097@@ -1052,7 +1101,7 @@
1098 def get_diagnostics(self, context, instance_id):
1099 """Retrieve diagnostics for an instance on this host."""
1100 instance_ref = self.db.instance_get(context, instance_id)
1101- if instance_ref["state"] == power_state.RUNNING:
1102+ if instance_ref["power_state"] == power_state.RUNNING:
1103 LOG.audit(_("instance %s: retrieving diagnostics"), instance_id,
1104 context=context)
1105 return self.driver.get_diagnostics(instance_ref)
1106@@ -1061,33 +1110,35 @@
1107 @checks_instance_lock
1108 def suspend_instance(self, context, instance_id):
1109 """Suspend the given instance."""
1110- context = context.elevated()
1111- instance_ref = self.db.instance_get(context, instance_id)
1112 LOG.audit(_('instance %s: suspending'), instance_id, context=context)
1113- self.db.instance_set_state(context, instance_id,
1114- power_state.NOSTATE,
1115- 'suspending')
1116- self.driver.suspend(instance_ref,
1117- lambda result: self._update_state_callback(self,
1118- context,
1119- instance_id,
1120- result))
1121+ context = context.elevated()
1122+
1123+ instance_ref = self.db.instance_get(context, instance_id)
1124+ self.driver.suspend(instance_ref, lambda result: None)
1125+
1126+ current_power_state = self._get_power_state(context, instance_ref)
1127+ self._instance_update(context,
1128+ instance_id,
1129+ power_state=current_power_state,
1130+ vm_state=vm_states.SUSPENDED,
1131+ task_state=None)
1132
1133 @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())
1134 @checks_instance_lock
1135 def resume_instance(self, context, instance_id):
1136 """Resume the given suspended instance."""
1137- context = context.elevated()
1138- instance_ref = self.db.instance_get(context, instance_id)
1139 LOG.audit(_('instance %s: resuming'), instance_id, context=context)
1140- self.db.instance_set_state(context, instance_id,
1141- power_state.NOSTATE,
1142- 'resuming')
1143- self.driver.resume(instance_ref,
1144- lambda result: self._update_state_callback(self,
1145- context,
1146- instance_id,
1147- result))
1148+ context = context.elevated()
1149+
1150+ instance_ref = self.db.instance_get(context, instance_id)
1151+ self.driver.resume(instance_ref, lambda result: None)
1152+
1153+ current_power_state = self._get_power_state(context, instance_ref)
1154+ self._instance_update(context,
1155+ instance_id,
1156+ power_state=current_power_state,
1157+ vm_state=vm_states.ACTIVE,
1158+ task_state=None)
1159
1160 @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())
1161 def lock_instance(self, context, instance_id):
1162@@ -1498,11 +1549,14 @@
1163 'block_migration': block_migration}})
1164
1165 # Restore instance state
1166- self.db.instance_update(ctxt,
1167- instance_ref['id'],
1168- {'state_description': 'running',
1169- 'state': power_state.RUNNING,
1170- 'host': dest})
1171+ current_power_state = self._get_power_state(ctxt, instance_ref)
1172+ self._instance_update(ctxt,
1173+ instance_ref["id"],
1174+ host=dest,
1175+ power_state=current_power_state,
1176+ vm_state=vm_states.ACTIVE,
1177+ task_state=None)
1178+
1179 # Restore volume state
1180 for volume_ref in instance_ref['volumes']:
1181 volume_id = volume_ref['id']
1182@@ -1548,11 +1602,11 @@
1183 This param specifies destination host.
1184 """
1185 host = instance_ref['host']
1186- self.db.instance_update(context,
1187- instance_ref['id'],
1188- {'state_description': 'running',
1189- 'state': power_state.RUNNING,
1190- 'host': host})
1191+ self._instance_update(context,
1192+ instance_ref['id'],
1193+ host=host,
1194+ vm_state=vm_states.ACTIVE,
1195+ task_state=None)
1196
1197 for volume_ref in instance_ref['volumes']:
1198 volume_id = volume_ref['id']
1199@@ -1600,10 +1654,9 @@
1200 error_list.append(ex)
1201
1202 try:
1203- self._poll_instance_states(context)
1204+ self._sync_power_states(context)
1205 except Exception as ex:
1206- LOG.warning(_("Error during instance poll: %s"),
1207- unicode(ex))
1208+ LOG.warning(_("Error during power_state sync: %s"), unicode(ex))
1209 error_list.append(ex)
1210
1211 return error_list
1212@@ -1618,68 +1671,40 @@
1213 self.update_service_capabilities(
1214 self.driver.get_host_stats(refresh=True))
1215
1216- def _poll_instance_states(self, context):
1217+ def _sync_power_states(self, context):
1218+ """Align power states between the database and the hypervisor.
1219+
1220+ The hypervisor is authoritative for the power_state data, so we
1221+ simply loop over all known instances for this host and update the
1222+ power_state according to the hypervisor. If an instance is not found
1223+ on the hypervisor, its power_state is set to power_state.NOSTATE,
1224+ since it no longer exists there.
1225+
1226+ """
1227 vm_instances = self.driver.list_instances_detail()
1228 vm_instances = dict((vm.name, vm) for vm in vm_instances)
1229-
1230- # Keep a list of VMs not in the DB, cross them off as we find them
1231- vms_not_found_in_db = list(vm_instances.keys())
1232-
1233 db_instances = self.db.instance_get_all_by_host(context, self.host)
1234
1235+ num_vm_instances = len(vm_instances)
1236+ num_db_instances = len(db_instances)
1237+
1238+ if num_vm_instances != num_db_instances:
1239+ LOG.info(_("Found %(num_db_instances)s instances in the database "
1240+ "and %(num_vm_instances)s on the hypervisor.") % locals())
1241+
1242 for db_instance in db_instances:
1243- name = db_instance['name']
1244- db_state = db_instance['state']
1245+ name = db_instance["name"]
1246+ db_power_state = db_instance['power_state']
1247 vm_instance = vm_instances.get(name)
1248
1249 if vm_instance is None:
1250- # NOTE(justinsb): We have to be very careful here, because a
1251- # concurrent operation could be in progress (e.g. a spawn)
1252- if db_state == power_state.BUILDING:
1253- # TODO(justinsb): This does mean that if we crash during a
1254- # spawn, the machine will never leave the spawning state,
1255- # but this is just the way nova is; this function isn't
1256- # trying to correct that problem.
1257- # We could have a separate task to correct this error.
1258- # TODO(justinsb): What happens during a live migration?
1259- LOG.info(_("Found instance '%(name)s' in DB but no VM. "
1260- "State=%(db_state)s, so assuming spawn is in "
1261- "progress.") % locals())
1262- vm_state = db_state
1263- else:
1264- LOG.info(_("Found instance '%(name)s' in DB but no VM. "
1265- "State=%(db_state)s, so setting state to "
1266- "shutoff.") % locals())
1267- vm_state = power_state.SHUTOFF
1268- if db_instance['state_description'] == 'stopping':
1269- self.db.instance_stop(context, db_instance['id'])
1270- continue
1271+ vm_power_state = power_state.NOSTATE
1272 else:
1273- vm_state = vm_instance.state
1274- vms_not_found_in_db.remove(name)
1275+ vm_power_state = vm_instance.state
1276
1277- if (db_instance['state_description'] in ['migrating', 'stopping']):
1278- # A situation which db record exists, but no instance"
1279- # sometimes occurs while live-migration at src compute,
1280- # this case should be ignored.
1281- LOG.debug(_("Ignoring %(name)s, as it's currently being "
1282- "migrated.") % locals())
1283+ if vm_power_state == db_power_state:
1284 continue
1285
1286- if vm_state != db_state:
1287- LOG.info(_("DB/VM state mismatch. Changing state from "
1288- "'%(db_state)s' to '%(vm_state)s'") % locals())
1289- self._update_state(context, db_instance['id'], vm_state)
1290-
1291- # NOTE(justinsb): We no longer auto-remove SHUTOFF instances
1292- # It's quite hard to get them back when we do.
1293-
1294- # Are there VMs not in the DB?
1295- for vm_not_found_in_db in vms_not_found_in_db:
1296- name = vm_not_found_in_db
1297-
1298- # We only care about instances that compute *should* know about
1299- if name.startswith("instance-"):
1300- # TODO(justinsb): What to do here? Adopt it? Shut it down?
1301- LOG.warning(_("Found VM not in DB: '%(name)s'. Ignoring")
1302- % locals())
1303+ self._instance_update(context,
1304+ db_instance["id"],
1305+ power_state=vm_power_state)
1306
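
The sync above amounts to a one-way diff from the hypervisor's view onto the database. A minimal standalone sketch of the same logic, with plain dicts standing in for the driver's instance records and the manager's _instance_update call (the inlined constants mirror nova.compute.power_state):

    # Hedged sketch of the _sync_power_states logic above; the dicts and
    # inlined constants are illustrative stand-ins, not the real driver API.
    NOSTATE = 0x00   # nova.compute.power_state.NOSTATE
    RUNNING = 0x01   # nova.compute.power_state.RUNNING

    def sync_power_states(db_instances, hypervisor_instances):
        """Return {instance_id: power_state} updates the database needs."""
        by_name = dict((vm['name'], vm) for vm in hypervisor_instances)
        updates = {}
        for db_instance in db_instances:
            vm = by_name.get(db_instance['name'])
            # The hypervisor is authoritative: a missing VM means NOSTATE.
            vm_power_state = vm['state'] if vm else NOSTATE
            if vm_power_state != db_instance['power_state']:
                updates[db_instance['id']] = vm_power_state
        return updates

    # One instance vanished from the hypervisor, one is already in sync:
    db = [{'id': 1, 'name': 'instance-1', 'power_state': RUNNING},
          {'id': 2, 'name': 'instance-2', 'power_state': RUNNING}]
    hyp = [{'name': 'instance-2', 'state': RUNNING}]
    assert sync_power_states(db, hyp) == {1: NOSTATE}
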
1307=== added file 'nova/compute/task_states.py'
1308--- nova/compute/task_states.py 1970-01-01 00:00:00 +0000
1309+++ nova/compute/task_states.py 2011-08-31 14:15:31 +0000
1310@@ -0,0 +1,59 @@
1311+# vim: tabstop=4 shiftwidth=4 softtabstop=4
1312+
1313+# Copyright 2010 OpenStack LLC.
1314+# All Rights Reserved.
1315+#
1316+# Licensed under the Apache License, Version 2.0 (the "License"); you may
1317+# not use this file except in compliance with the License. You may obtain
1318+# a copy of the License at
1319+#
1320+# http://www.apache.org/licenses/LICENSE-2.0
1321+#
1322+# Unless required by applicable law or agreed to in writing, software
1323+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
1324+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
1325+# License for the specific language governing permissions and limitations
1326+# under the License.
1327+
1328+"""Possible task states for instances.
1329+
1330+Compute instance task states represent what is happening to an instance at
1331+the current moment. A task can be generic, such as 'spawning', or specific,
1332+such as 'block_device_mapping'. Task states give a clearer view of what an
1333+instance is currently doing and should be surfaced to users and
1334+administrators as necessary.
1335+
1336+"""
1337+
1338+SCHEDULING = 'scheduling'
1339+BLOCK_DEVICE_MAPPING = 'block_device_mapping'
1340+NETWORKING = 'networking'
1341+SPAWNING = 'spawning'
1342+
1343+IMAGE_SNAPSHOT = 'image_snapshot'
1344+IMAGE_BACKUP = 'image_backup'
1345+
1346+UPDATING_PASSWORD = 'updating_password'
1347+
1348+RESIZE_PREP = 'resize_prep'
1349+RESIZE_MIGRATING = 'resize_migrating'
1350+RESIZE_MIGRATED = 'resize_migrated'
1351+RESIZE_FINISH = 'resize_finish'
1352+RESIZE_REVERTING = 'resize_reverting'
1353+RESIZE_CONFIRMING = 'resize_confirming'
1354+RESIZE_VERIFY = 'resize_verify'
1355+
1356+REBUILDING = 'rebuilding'
1357+
1358+REBOOTING = 'rebooting'
1359+PAUSING = 'pausing'
1360+UNPAUSING = 'unpausing'
1361+SUSPENDING = 'suspending'
1362+RESUMING = 'resuming'
1363+
1364+RESCUING = 'rescuing'
1365+UNRESCUING = 'unrescuing'
1366+
1367+DELETING = 'deleting'
1368+STOPPING = 'stopping'
1369+STARTING = 'starting'
1370
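
The constants above support a simple convention, visible throughout the manager changes in this branch: set a task_state while an operation is in flight, then clear it to None once the instance settles into a stable vm_state. A hedged sketch of that pattern, with injected callables standing in for the manager's _instance_update and the virt driver:

    # Illustrative only: the set-task/clear-task pattern used throughout
    # this branch, shown with stand-in callables rather than the real API.
    SUSPENDING = 'suspending'   # task_states.SUSPENDING
    SUSPENDED = 'suspended'     # vm_states.SUSPENDED

    def suspend_instance(instance, instance_update, driver_suspend):
        # Mark the work in progress before touching the hypervisor.
        instance_update(instance['id'], task_state=SUSPENDING)
        driver_suspend(instance)
        # Work done: record the stable vm_state and clear the task.
        instance_update(instance['id'],
                        vm_state=SUSPENDED,
                        task_state=None)

    updates = []
    suspend_instance({'id': 7},
                     lambda i, **kw: updates.append(kw),
                     lambda inst: None)
    assert updates[-1] == {'vm_state': SUSPENDED, 'task_state': None}
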
1371=== added file 'nova/compute/vm_states.py'
1372--- nova/compute/vm_states.py 1970-01-01 00:00:00 +0000
1373+++ nova/compute/vm_states.py 2011-08-31 14:15:31 +0000
1374@@ -0,0 +1,39 @@
1375+# vim: tabstop=4 shiftwidth=4 softtabstop=4
1376+
1377+# Copyright 2010 OpenStack LLC.
1378+# All Rights Reserved.
1379+#
1380+# Licensed under the Apache License, Version 2.0 (the "License"); you may
1381+# not use this file except in compliance with the License. You may obtain
1382+# a copy of the License at
1383+#
1384+# http://www.apache.org/licenses/LICENSE-2.0
1385+#
1386+# Unless required by applicable law or agreed to in writing, software
1387+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
1388+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
1389+# License for the specific language governing permissions and limitations
1390+# under the License.
1391+
1392+"""Possible vm states for instances.
1393+
1394+Compute instance vm states represent the state of an instance as seen by a
1395+user or administrator. When combined with the task states (task_states.py),
1396+they give a much fuller picture of an instance's health.
1397+
1398+"""
1399+
1400+ACTIVE = 'active'
1401+BUILDING = 'building'
1402+REBUILDING = 'rebuilding'
1403+
1404+PAUSED = 'paused'
1405+SUSPENDED = 'suspended'
1406+RESCUED = 'rescued'
1407+DELETED = 'deleted'
1408+STOPPED = 'stopped'
1409+
1410+MIGRATING = 'migrating'
1411+RESIZING = 'resizing'
1412+
1413+ERROR = 'error'
1414
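
Together with task states, these values are folded into the single status the OpenStack API exposes; the authoritative table lives in nova/api/openstack/common.py, and the TestServerStatus cases further down exercise it. A hedged sketch of that fold, with the pairs inlined from those test expectations:

    # Hedged sketch: collapsing (vm_state, task_state) into an API status.
    # The pairs below mirror the TestServerStatus expectations in this diff.
    _STATUS_MAP = {
        ('active', None): 'ACTIVE',
        ('active', 'rebooting'): 'REBOOT',
        ('active', 'resize_verify'): 'VERIFY_RESIZE',
        ('active', 'updating_password'): 'PASSWORD',
        ('building', None): 'BUILD',
        ('rebuilding', None): 'REBUILD',
        ('resizing', None): 'RESIZE',
        ('stopped', None): 'STOPPED',
        ('error', None): 'ERROR',
    }

    def status_from_states(vm_state, task_state=None):
        # Fall back to the vm_state's default status when the exact
        # (vm_state, task_state) pair isn't special-cased.
        return _STATUS_MAP.get((vm_state, task_state),
                               _STATUS_MAP.get((vm_state, None)))

    assert status_from_states('active') == 'ACTIVE'
    assert status_from_states('active', 'rebooting') == 'REBOOT'
    assert status_from_states('rebuilding', 'spawning') == 'REBUILD'
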
1415=== modified file 'nova/db/sqlalchemy/api.py'
1416--- nova/db/sqlalchemy/api.py 2011-08-26 02:18:46 +0000
1417+++ nova/db/sqlalchemy/api.py 2011-08-31 14:15:31 +0000
1418@@ -28,6 +28,7 @@
1419 from nova import ipv6
1420 from nova import utils
1421 from nova import log as logging
1422+from nova.compute import vm_states
1423 from nova.db.sqlalchemy import models
1424 from nova.db.sqlalchemy.session import get_session
1425 from sqlalchemy import or_
1426@@ -1102,12 +1103,11 @@
1427 def instance_stop(context, instance_id):
1428 session = get_session()
1429 with session.begin():
1430- from nova.compute import power_state
1431 session.query(models.Instance).\
1432 filter_by(id=instance_id).\
1433 update({'host': None,
1434- 'state': power_state.SHUTOFF,
1435- 'state_description': 'stopped',
1436+ 'vm_state': vm_states.STOPPED,
1437+ 'task_state': None,
1438 'updated_at': literal_column('updated_at')})
1439 session.query(models.SecurityGroupInstanceAssociation).\
1440 filter_by(instance_id=instance_id).\
1441@@ -1266,7 +1266,7 @@
1442 # Filters for exact matches that we can do along with the SQL query...
1443 # For other filters that don't match this, we will do regexp matching
1444 exact_match_filter_names = ['project_id', 'user_id', 'image_ref',
1445- 'state', 'instance_type_id', 'deleted']
1446+ 'vm_state', 'instance_type_id', 'deleted']
1447
1448 query_filters = [key for key in filters.iterkeys()
1449 if key in exact_match_filter_names]
1450@@ -1484,18 +1484,6 @@
1451 return fixed_ip_refs[0].floating_ips[0]['address']
1452
1453
1454-@require_admin_context
1455-def instance_set_state(context, instance_id, state, description=None):
1456- # TODO(devcamcar): Move this out of models and into driver
1457- from nova.compute import power_state
1458- if not description:
1459- description = power_state.name(state)
1460- db.instance_update(context,
1461- instance_id,
1462- {'state': state,
1463- 'state_description': description})
1464-
1465-
1466 @require_context
1467 def instance_update(context, instance_id, values):
1468 session = get_session()
1469
1470=== added file 'nova/db/sqlalchemy/migrate_repo/versions/044_update_instance_states.py'
1471--- nova/db/sqlalchemy/migrate_repo/versions/044_update_instance_states.py 1970-01-01 00:00:00 +0000
1472+++ nova/db/sqlalchemy/migrate_repo/versions/044_update_instance_states.py 2011-08-31 14:15:31 +0000
1473@@ -0,0 +1,138 @@
1474+# vim: tabstop=4 shiftwidth=4 softtabstop=4
1475+
1476+# Copyright 2010 OpenStack LLC.
1477+#
1478+# Licensed under the Apache License, Version 2.0 (the "License"); you may
1479+# not use this file except in compliance with the License. You may obtain
1480+# a copy of the License at
1481+#
1482+# http://www.apache.org/licenses/LICENSE-2.0
1483+#
1484+# Unless required by applicable law or agreed to in writing, software
1485+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
1486+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
1487+# License for the specific language governing permissions and limitations
1488+# under the License.
1489+
1490+import sqlalchemy
1491+from sqlalchemy import MetaData, Table, Column, String
1492+
1493+from nova.compute import task_states
1494+from nova.compute import vm_states
1495+
1496+
1497+meta = MetaData()
1498+
1499+
1500+c_task_state = Column('task_state',
1501+ String(length=255, convert_unicode=False,
1502+ assert_unicode=None, unicode_error=None,
1503+ _warn_on_bytestring=False),
1504+ nullable=True)
1505+
1506+
1507+_upgrade_translations = {
1508+ "stopping": {
1509+ "state_description": vm_states.ACTIVE,
1510+ "task_state": task_states.STOPPING,
1511+ },
1512+ "stopped": {
1513+ "state_description": vm_states.STOPPED,
1514+ "task_state": None,
1515+ },
1516+ "terminated": {
1517+ "state_description": vm_states.DELETED,
1518+ "task_state": None,
1519+ },
1520+ "terminating": {
1521+ "state_description": vm_states.ACTIVE,
1522+ "task_state": task_states.DELETING,
1523+ },
1524+ "running": {
1525+ "state_description": vm_states.ACTIVE,
1526+ "task_state": None,
1527+ },
1528+ "scheduling": {
1529+ "state_description": vm_states.BUILDING,
1530+ "task_state": task_states.SCHEDULING,
1531+ },
1532+ "migrating": {
1533+ "state_description": vm_states.MIGRATING,
1534+ "task_state": None,
1535+ },
1536+ "pending": {
1537+ "state_description": vm_states.BUILDING,
1538+ "task_state": task_states.SCHEDULING,
1539+ },
1540+}
1541+
1542+
1543+_downgrade_translations = {
1544+ vm_states.ACTIVE: {
1545+ None: "running",
1546+ task_states.DELETING: "terminating",
1547+ task_states.STOPPING: "stopping",
1548+ },
1549+ vm_states.BUILDING: {
1550+ None: "pending",
1551+ task_states.SCHEDULING: "scheduling",
1552+ },
1553+ vm_states.STOPPED: {
1554+ None: "stopped",
1555+ },
1556+ vm_states.REBUILDING: {
1557+ None: "pending",
1558+ },
1559+ vm_states.DELETED: {
1560+ None: "terminated",
1561+ },
1562+ vm_states.MIGRATING: {
1563+ None: "migrating",
1564+ },
1565+}
1566+
1567+
1568+def upgrade(migrate_engine):
1569+ meta.bind = migrate_engine
1570+
1571+ instance_table = Table('instances', meta, autoload=True,
1572+ autoload_with=migrate_engine)
1573+
1574+ c_state = instance_table.c.state
1575+ c_state.alter(name='power_state')
1576+
1577+ c_vm_state = instance_table.c.state_description
1578+ c_vm_state.alter(name='vm_state')
1579+
1580+ instance_table.create_column(c_task_state)
1581+
1582+ for old_state, values in _upgrade_translations.iteritems():
1583+ instance_table.update().\
1584+ values(**values).\
1585+ where(c_vm_state == old_state).\
1586+ execute()
1587+
1588+
1589+def downgrade(migrate_engine):
1590+ meta.bind = migrate_engine
1591+
1592+ instance_table = Table('instances', meta, autoload=True,
1593+ autoload_with=migrate_engine)
1594+
1595+ c_task_state = instance_table.c.task_state
1596+
1597+ c_state = instance_table.c.power_state
1598+ c_state.alter(name='state')
1599+
1600+ c_vm_state = instance_table.c.vm_state
1601+ c_vm_state.alter(name='state_description')
1602+
1603+ for old_vm_state, old_task_states in _downgrade_translations.iteritems():
1604+ for old_task_state, new_state_desc in old_task_states.iteritems():
1605+ instance_table.update().\
1606+ where(c_task_state == old_task_state).\
1607+ where(c_vm_state == old_vm_state).\
1608+ values(vm_state=new_state_desc).\
1609+ execute()
1610+
1611+ instance_table.drop_column('task_state')
1612
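
In row terms, the upgrade renames state to power_state and state_description to vm_state, then rewrites the old free-form descriptions through the translation table above. A hedged per-row sketch of that effect (the tuple values are inlined from _upgrade_translations; the real migration applies them with bulk UPDATE statements, not per-row Python):

    # Illustrative per-row view of migration 044's upgrade path.
    UPGRADE = {
        'running':     ('active',    None),
        'stopping':    ('active',    'stopping'),
        'terminating': ('active',    'deleting'),
        'stopped':     ('stopped',   None),
        'terminated':  ('deleted',   None),
        'scheduling':  ('building',  'scheduling'),
        'pending':     ('building',  'scheduling'),
        'migrating':   ('migrating', None),
    }

    def upgrade_row(row):
        vm_state, task_state = UPGRADE[row['state_description']]
        return {'power_state': row['state'],   # column rename only
                'vm_state': vm_state,
                'task_state': task_state}

    assert upgrade_row({'state': 1, 'state_description': 'terminating'}) == \
        {'power_state': 1, 'vm_state': 'active', 'task_state': 'deleting'}
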
1613=== modified file 'nova/db/sqlalchemy/models.py'
1614--- nova/db/sqlalchemy/models.py 2011-08-26 01:38:35 +0000
1615+++ nova/db/sqlalchemy/models.py 2011-08-31 14:15:31 +0000
1616@@ -193,8 +193,9 @@
1617 key_name = Column(String(255))
1618 key_data = Column(Text)
1619
1620- state = Column(Integer)
1621- state_description = Column(String(255))
1622+ power_state = Column(Integer)
1623+ vm_state = Column(String(255))
1624+ task_state = Column(String(255))
1625
1626 memory_mb = Column(Integer)
1627 vcpus = Column(Integer)
1628@@ -238,17 +239,6 @@
1629 access_ip_v4 = Column(String(255))
1630 access_ip_v6 = Column(String(255))
1631
1632- # TODO(vish): see Ewan's email about state improvements, probably
1633- # should be in a driver base class or some such
1634- # vmstate_state = running, halted, suspended, paused
1635- # power_state = what we have
1636- # task_state = transitory and may trigger power state transition
1637-
1638- #@validates('state')
1639- #def validate_state(self, key, state):
1640- # assert(state in ['nostate', 'running', 'blocked', 'paused',
1641- # 'shutdown', 'shutoff', 'crashed'])
1642-
1643
1644 class VirtualStorageArray(BASE, NovaBase):
1645 """
1646
1647=== modified file 'nova/exception.py'
1648--- nova/exception.py 2011-08-26 01:38:35 +0000
1649+++ nova/exception.py 2011-08-31 14:15:31 +0000
1650@@ -61,7 +61,7 @@
1651 super(ApiError, self).__init__(outstr)
1652
1653
1654-class BuildInProgress(Error):
1655+class RebuildRequiresActiveInstance(Error):
1656 pass
1657
1658
1659
1660=== modified file 'nova/scheduler/driver.py'
1661--- nova/scheduler/driver.py 2011-07-19 13:22:38 +0000
1662+++ nova/scheduler/driver.py 2011-08-31 14:15:31 +0000
1663@@ -30,6 +30,7 @@
1664 from nova import rpc
1665 from nova import utils
1666 from nova.compute import power_state
1667+from nova.compute import vm_states
1668 from nova.api.ec2 import ec2utils
1669
1670
1671@@ -104,10 +105,8 @@
1672 dest, block_migration)
1673
1674 # Changing instance_state.
1675- db.instance_set_state(context,
1676- instance_id,
1677- power_state.PAUSED,
1678- 'migrating')
1679+ values = {"vm_state": vm_states.MIGRATING}
1680+ db.instance_update(context, instance_id, values)
1681
1682 # Changing volume state
1683 for volume_ref in instance_ref['volumes']:
1684@@ -129,8 +128,7 @@
1685 """
1686
1687 # Checking instance is running.
1688- if (power_state.RUNNING != instance_ref['state'] or \
1689- 'running' != instance_ref['state_description']):
1690+ if instance_ref['power_state'] != power_state.RUNNING:
1691 instance_id = ec2utils.id_to_ec2_id(instance_ref['id'])
1692 raise exception.InstanceNotRunning(instance_id=instance_id)
1693
1694
1695=== modified file 'nova/tests/api/openstack/test_server_actions.py'
1696--- nova/tests/api/openstack/test_server_actions.py 2011-08-24 15:11:20 +0000
1697+++ nova/tests/api/openstack/test_server_actions.py 2011-08-31 14:15:31 +0000
1698@@ -10,8 +10,8 @@
1699 from nova import exception
1700 from nova import flags
1701 from nova.api.openstack import create_instance_helper
1702+from nova.compute import vm_states
1703 from nova.compute import instance_types
1704-from nova.compute import power_state
1705 import nova.db.api
1706 from nova import test
1707 from nova.tests.api.openstack import common
1708@@ -35,17 +35,19 @@
1709 return _return_server
1710
1711
1712-def return_server_with_power_state(power_state):
1713- return return_server_with_attributes(power_state=power_state)
1714-
1715-
1716-def return_server_with_uuid_and_power_state(power_state):
1717- return return_server_with_power_state(power_state)
1718-
1719-
1720-def stub_instance(id, power_state=0, metadata=None,
1721- image_ref="10", flavor_id="1", name=None):
1722-
1723+def return_server_with_state(vm_state, task_state=None):
1724+ return return_server_with_attributes(vm_state=vm_state,
1725+ task_state=task_state)
1726+
1727+
1728+def return_server_with_uuid_and_state(vm_state, task_state=None):
1729+ def _return_server(context, id):
1730+ return return_server_with_state(vm_state, task_state)
1731+ return _return_server
1732+
1733+
1734+def stub_instance(id, metadata=None, image_ref="10", flavor_id="1",
1735+ name=None, vm_state=None, task_state=None):
1736 if metadata is not None:
1737 metadata_items = [{'key':k, 'value':v} for k, v in metadata.items()]
1738 else:
1739@@ -66,8 +68,8 @@
1740 "launch_index": 0,
1741 "key_name": "",
1742 "key_data": "",
1743- "state": power_state,
1744- "state_description": "",
1745+ "vm_state": vm_state or vm_states.ACTIVE,
1746+ "task_state": task_state,
1747 "memory_mb": 0,
1748 "vcpus": 0,
1749 "local_gb": 0,
1750@@ -175,11 +177,11 @@
1751 },
1752 }
1753
1754- state = power_state.BUILDING
1755- new_return_server = return_server_with_power_state(state)
1756+ state = vm_states.BUILDING
1757+ new_return_server = return_server_with_state(state)
1758 self.stubs.Set(nova.db.api, 'instance_get', new_return_server)
1759 self.stubs.Set(nova.db, 'instance_get_by_uuid',
1760- return_server_with_uuid_and_power_state(state))
1761+ return_server_with_uuid_and_state(state))
1762
1763 req = webob.Request.blank('/v1.0/servers/1/action')
1764 req.method = 'POST'
1765@@ -242,19 +244,6 @@
1766 res = req.get_response(fakes.wsgi_app())
1767 self.assertEqual(res.status_int, 500)
1768
1769- def test_resized_server_has_correct_status(self):
1770- req = self.webreq('/1', 'GET')
1771-
1772- def fake_migration_get(*args):
1773- return {}
1774-
1775- self.stubs.Set(nova.db, 'migration_get_by_instance_and_status',
1776- fake_migration_get)
1777- res = req.get_response(fakes.wsgi_app())
1778- self.assertEqual(res.status_int, 200)
1779- body = json.loads(res.body)
1780- self.assertEqual(body['server']['status'], 'RESIZE-CONFIRM')
1781-
1782 def test_confirm_resize_server(self):
1783 req = self.webreq('/1/action', 'POST', dict(confirmResize=None))
1784
1785@@ -642,11 +631,11 @@
1786 },
1787 }
1788
1789- state = power_state.BUILDING
1790- new_return_server = return_server_with_power_state(state)
1791+ state = vm_states.BUILDING
1792+ new_return_server = return_server_with_state(state)
1793 self.stubs.Set(nova.db.api, 'instance_get', new_return_server)
1794 self.stubs.Set(nova.db, 'instance_get_by_uuid',
1795- return_server_with_uuid_and_power_state(state))
1796+ return_server_with_uuid_and_state(state))
1797
1798 req = webob.Request.blank('/v1.1/fake/servers/1/action')
1799 req.method = 'POST'
1800
1801=== modified file 'nova/tests/api/openstack/test_servers.py'
1802--- nova/tests/api/openstack/test_servers.py 2011-08-24 16:12:11 +0000
1803+++ nova/tests/api/openstack/test_servers.py 2011-08-31 14:15:31 +0000
1804@@ -37,7 +37,8 @@
1805 from nova.api.openstack import xmlutil
1806 import nova.compute.api
1807 from nova.compute import instance_types
1808-from nova.compute import power_state
1809+from nova.compute import task_states
1810+from nova.compute import vm_states
1811 import nova.db.api
1812 import nova.scheduler.api
1813 from nova.db.sqlalchemy.models import Instance
1814@@ -91,15 +92,18 @@
1815 return _return_server
1816
1817
1818-def return_server_with_power_state(power_state):
1819+def return_server_with_state(vm_state, task_state=None):
1820 def _return_server(context, id):
1821- return stub_instance(id, power_state=power_state)
1822+ return stub_instance(id, vm_state=vm_state, task_state=task_state)
1823 return _return_server
1824
1825
1826-def return_server_with_uuid_and_power_state(power_state):
1827+def return_server_with_uuid_and_state(vm_state, task_state):
1828 def _return_server(context, id):
1829- return stub_instance(id, uuid=FAKE_UUID, power_state=power_state)
1830+ return stub_instance(id,
1831+ uuid=FAKE_UUID,
1832+ vm_state=vm_state,
1833+ task_state=task_state)
1834 return _return_server
1835
1836
1837@@ -148,7 +152,8 @@
1838
1839
1840 def stub_instance(id, user_id='fake', project_id='fake', private_address=None,
1841- public_addresses=None, host=None, power_state=0,
1842+ public_addresses=None, host=None,
1843+ vm_state=None, task_state=None,
1844 reservation_id="", uuid=FAKE_UUID, image_ref="10",
1845 flavor_id="1", interfaces=None, name=None,
1846 access_ipv4=None, access_ipv6=None):
1847@@ -184,8 +189,8 @@
1848 "launch_index": 0,
1849 "key_name": "",
1850 "key_data": "",
1851- "state": power_state,
1852- "state_description": "",
1853+ "vm_state": vm_state or vm_states.BUILDING,
1854+ "task_state": task_state,
1855 "memory_mb": 0,
1856 "vcpus": 0,
1857 "local_gb": 0,
1858@@ -494,7 +499,7 @@
1859 },
1860 ]
1861 new_return_server = return_server_with_attributes(
1862- interfaces=interfaces, power_state=1)
1863+ interfaces=interfaces, vm_state=vm_states.ACTIVE)
1864 self.stubs.Set(nova.db.api, 'instance_get', new_return_server)
1865
1866 req = webob.Request.blank('/v1.1/fake/servers/1')
1867@@ -587,8 +592,8 @@
1868 },
1869 ]
1870 new_return_server = return_server_with_attributes(
1871- interfaces=interfaces, power_state=1, image_ref=image_ref,
1872- flavor_id=flavor_id)
1873+ interfaces=interfaces, vm_state=vm_states.ACTIVE,
1874+ image_ref=image_ref, flavor_id=flavor_id)
1875 self.stubs.Set(nova.db.api, 'instance_get', new_return_server)
1876
1877 req = webob.Request.blank('/v1.1/fake/servers/1')
1878@@ -1209,9 +1214,8 @@
1879 def test_get_servers_allows_status_v1_1(self):
1880 def fake_get_all(compute_self, context, search_opts=None):
1881 self.assertNotEqual(search_opts, None)
1882- self.assertTrue('state' in search_opts)
1883- self.assertEqual(set(search_opts['state']),
1884- set([power_state.RUNNING, power_state.BLOCKED]))
1885+ self.assertTrue('vm_state' in search_opts)
1886+ self.assertEqual(search_opts['vm_state'], vm_states.ACTIVE)
1887 return [stub_instance(100)]
1888
1889 self.stubs.Set(nova.compute.API, 'get_all', fake_get_all)
1890@@ -1228,13 +1232,9 @@
1891
1892 def test_get_servers_invalid_status_v1_1(self):
1893 """Test getting servers by invalid status"""
1894-
1895 self.flags(allow_admin_api=False)
1896-
1897 req = webob.Request.blank('/v1.1/fake/servers?status=running')
1898 res = req.get_response(fakes.wsgi_app())
1899- # The following assert will fail if either of the asserts in
1900- # fake_get_all() fail
1901 self.assertEqual(res.status_int, 400)
1902 self.assertTrue(res.body.find('Invalid server status') > -1)
1903
1904@@ -1738,6 +1738,7 @@
1905 server = json.loads(res.body)['server']
1906 self.assertEqual(16, len(server['adminPass']))
1907 self.assertEqual(1, server['id'])
1908+ self.assertEqual("BUILD", server["status"])
1909 self.assertEqual(0, server['progress'])
1910 self.assertEqual('server_test', server['name'])
1911 self.assertEqual(expected_flavor, server['flavor'])
1912@@ -2467,23 +2468,51 @@
1913 self.assertEqual(res.status_int, 204)
1914 self.assertEqual(self.server_delete_called, True)
1915
1916- def test_shutdown_status(self):
1917- new_server = return_server_with_power_state(power_state.SHUTDOWN)
1918- self.stubs.Set(nova.db.api, 'instance_get', new_server)
1919- req = webob.Request.blank('/v1.0/servers/1')
1920- res = req.get_response(fakes.wsgi_app())
1921- self.assertEqual(res.status_int, 200)
1922- res_dict = json.loads(res.body)
1923- self.assertEqual(res_dict['server']['status'], 'SHUTDOWN')
1924-
1925- def test_shutoff_status(self):
1926- new_server = return_server_with_power_state(power_state.SHUTOFF)
1927- self.stubs.Set(nova.db.api, 'instance_get', new_server)
1928- req = webob.Request.blank('/v1.0/servers/1')
1929- res = req.get_response(fakes.wsgi_app())
1930- self.assertEqual(res.status_int, 200)
1931- res_dict = json.loads(res.body)
1932- self.assertEqual(res_dict['server']['status'], 'SHUTOFF')
1933+
1934+class TestServerStatus(test.TestCase):
1935+
1936+ def _get_with_state(self, vm_state, task_state=None):
1937+ new_server = return_server_with_state(vm_state, task_state)
1938+ self.stubs.Set(nova.db.api, 'instance_get', new_server)
1939+ request = webob.Request.blank('/v1.0/servers/1')
1940+ response = request.get_response(fakes.wsgi_app())
1941+ self.assertEqual(response.status_int, 200)
1942+ return json.loads(response.body)
1943+
1944+ def test_active(self):
1945+ response = self._get_with_state(vm_states.ACTIVE)
1946+ self.assertEqual(response['server']['status'], 'ACTIVE')
1947+
1948+ def test_reboot(self):
1949+ response = self._get_with_state(vm_states.ACTIVE,
1950+ task_states.REBOOTING)
1951+ self.assertEqual(response['server']['status'], 'REBOOT')
1952+
1953+ def test_rebuild(self):
1954+ response = self._get_with_state(vm_states.REBUILDING)
1955+ self.assertEqual(response['server']['status'], 'REBUILD')
1956+
1957+ def test_rebuild_error(self):
1958+ response = self._get_with_state(vm_states.ERROR)
1959+ self.assertEqual(response['server']['status'], 'ERROR')
1960+
1961+ def test_resize(self):
1962+ response = self._get_with_state(vm_states.RESIZING)
1963+ self.assertEqual(response['server']['status'], 'RESIZE')
1964+
1965+ def test_verify_resize(self):
1966+ response = self._get_with_state(vm_states.ACTIVE,
1967+ task_states.RESIZE_VERIFY)
1968+ self.assertEqual(response['server']['status'], 'VERIFY_RESIZE')
1969+
1970+ def test_password_update(self):
1971+ response = self._get_with_state(vm_states.ACTIVE,
1972+ task_states.UPDATING_PASSWORD)
1973+ self.assertEqual(response['server']['status'], 'PASSWORD')
1974+
1975+ def test_stopped(self):
1976+ response = self._get_with_state(vm_states.STOPPED)
1977+ self.assertEqual(response['server']['status'], 'STOPPED')
1978
1979
1980 class TestServerCreateRequestXMLDeserializerV10(unittest.TestCase):
1981@@ -3536,8 +3565,8 @@
1982 "launch_index": 0,
1983 "key_name": "",
1984 "key_data": "",
1985- "state": 0,
1986- "state_description": "",
1987+ "vm_state": vm_states.BUILDING,
1988+ "task_state": None,
1989 "memory_mb": 0,
1990 "vcpus": 0,
1991 "local_gb": 0,
1992@@ -3682,7 +3711,7 @@
1993
1994 def test_build_server_detail_active_status(self):
1995 #set the power state of the instance to running
1996- self.instance['state'] = 1
1997+ self.instance['vm_state'] = vm_states.ACTIVE
1998 image_bookmark = "http://localhost/images/5"
1999 flavor_bookmark = "http://localhost/flavors/1"
2000 expected_server = {
2001
2002=== modified file 'nova/tests/integrated/test_servers.py'
2003--- nova/tests/integrated/test_servers.py 2011-08-26 13:54:53 +0000
2004+++ nova/tests/integrated/test_servers.py 2011-08-31 14:15:31 +0000
2005@@ -28,6 +28,17 @@
2006
2007 class ServersTest(integrated_helpers._IntegratedTestBase):
2008
2009+ def _wait_for_creation(self, server):
2010+ retries = 0
2011+ while server['status'] == 'BUILD':
2012+ time.sleep(1)
2013+ server = self.api.get_server(server['id'])
2014+ print server
2015+ retries = retries + 1
2016+ if retries > 5:
2017+ break
2018+ return server
2019+
2020 def test_get_servers(self):
2021 """Simple check that listing servers works."""
2022 servers = self.api.get_servers()
2023@@ -36,9 +47,9 @@
2024
2025 def test_create_and_delete_server(self):
2026 """Creates and deletes a server."""
2027+ self.flags(stub_network=True)
2028
2029 # Create server
2030-
2031 # Build the server data gradually, checking errors along the way
2032 server = {}
2033 good_server = self._build_minimal_create_server_request()
2034@@ -91,19 +102,11 @@
2035 server_ids = [server['id'] for server in servers]
2036 self.assertTrue(created_server_id in server_ids)
2037
2038- # Wait (briefly) for creation
2039- retries = 0
2040- while found_server['status'] == 'build':
2041- LOG.debug("found server: %s" % found_server)
2042- time.sleep(1)
2043- found_server = self.api.get_server(created_server_id)
2044- retries = retries + 1
2045- if retries > 5:
2046- break
2047+ found_server = self._wait_for_creation(found_server)
2048
2049 # It should be available...
2050 # TODO(justinsb): Mock doesn't yet do this...
2051- #self.assertEqual('available', found_server['status'])
2052+ self.assertEqual('ACTIVE', found_server['status'])
2053 servers = self.api.get_servers(detail=True)
2054 for server in servers:
2055 self.assertTrue("image" in server)
2056@@ -181,6 +184,7 @@
2057
2058 def test_create_and_rebuild_server(self):
2059 """Rebuild a server."""
2060+ self.flags(stub_network=True)
2061
2062 # create a server with initially has no metadata
2063 server = self._build_minimal_create_server_request()
2064@@ -190,6 +194,8 @@
2065 self.assertTrue(created_server['id'])
2066 created_server_id = created_server['id']
2067
2068+ created_server = self._wait_for_creation(created_server)
2069+
2070 # rebuild the server with metadata
2071 post = {}
2072 post['rebuild'] = {
2073@@ -212,6 +218,7 @@
2074
2075 def test_create_and_rebuild_server_with_metadata(self):
2076 """Rebuild a server with metadata."""
2077+ self.flags(stub_network=True)
2078
2079 # create a server with initially has no metadata
2080 server = self._build_minimal_create_server_request()
2081@@ -221,6 +228,8 @@
2082 self.assertTrue(created_server['id'])
2083 created_server_id = created_server['id']
2084
2085+ created_server = self._wait_for_creation(created_server)
2086+
2087 # rebuild the server with metadata
2088 post = {}
2089 post['rebuild'] = {
2090@@ -248,6 +257,7 @@
2091
2092 def test_create_and_rebuild_server_with_metadata_removal(self):
2093 """Rebuild a server with metadata."""
2094+ self.flags(stub_network=True)
2095
2096 # create a server with initially has no metadata
2097 server = self._build_minimal_create_server_request()
2098@@ -264,6 +274,8 @@
2099 self.assertTrue(created_server['id'])
2100 created_server_id = created_server['id']
2101
2102+ created_server = self._wait_for_creation(created_server)
2103+
2104 # rebuild the server with metadata
2105 post = {}
2106 post['rebuild'] = {
2107
2108=== modified file 'nova/tests/scheduler/test_scheduler.py'
2109--- nova/tests/scheduler/test_scheduler.py 2011-08-16 12:47:35 +0000
2110+++ nova/tests/scheduler/test_scheduler.py 2011-08-31 14:15:31 +0000
2111@@ -40,6 +40,7 @@
2112 from nova.scheduler import manager
2113 from nova.scheduler import multi
2114 from nova.compute import power_state
2115+from nova.compute import vm_states
2116
2117
2118 FLAGS = flags.FLAGS
2119@@ -94,6 +95,9 @@
2120 inst['vcpus'] = kwargs.get('vcpus', 1)
2121 inst['memory_mb'] = kwargs.get('memory_mb', 10)
2122 inst['local_gb'] = kwargs.get('local_gb', 20)
2123+ inst['vm_state'] = kwargs.get('vm_state', vm_states.ACTIVE)
2124+ inst['power_state'] = kwargs.get('power_state', power_state.RUNNING)
2125+ inst['task_state'] = kwargs.get('task_state', None)
2126 return db.instance_create(ctxt, inst)
2127
2128 def test_fallback(self):
2129@@ -271,8 +275,9 @@
2130 inst['memory_mb'] = kwargs.get('memory_mb', 20)
2131 inst['local_gb'] = kwargs.get('local_gb', 30)
2132 inst['launched_on'] = kwargs.get('launghed_on', 'dummy')
2133- inst['state_description'] = kwargs.get('state_description', 'running')
2134- inst['state'] = kwargs.get('state', power_state.RUNNING)
2135+ inst['vm_state'] = kwargs.get('vm_state', vm_states.ACTIVE)
2136+ inst['task_state'] = kwargs.get('task_state', None)
2137+ inst['power_state'] = kwargs.get('power_state', power_state.RUNNING)
2138 return db.instance_create(self.context, inst)['id']
2139
2140 def _create_volume(self):
2141@@ -664,14 +669,14 @@
2142 block_migration=False)
2143
2144 i_ref = db.instance_get(self.context, instance_id)
2145- self.assertTrue(i_ref['state_description'] == 'migrating')
2146+ self.assertTrue(i_ref['vm_state'] == vm_states.MIGRATING)
2147 db.instance_destroy(self.context, instance_id)
2148 db.volume_destroy(self.context, v_ref['id'])
2149
2150 def test_live_migration_src_check_instance_not_running(self):
2151 """The instance given by instance_id is not running."""
2152
2153- instance_id = self._create_instance(state_description='migrating')
2154+ instance_id = self._create_instance(power_state=power_state.NOSTATE)
2155 i_ref = db.instance_get(self.context, instance_id)
2156
2157 try:
2158
2159=== modified file 'nova/tests/test_cloud.py'
2160--- nova/tests/test_cloud.py 2011-08-16 16:18:13 +0000
2161+++ nova/tests/test_cloud.py 2011-08-31 14:15:31 +0000
2162@@ -38,6 +38,7 @@
2163 from nova import utils
2164 from nova.api.ec2 import cloud
2165 from nova.api.ec2 import ec2utils
2166+from nova.compute import vm_states
2167 from nova.image import fake
2168
2169
2170@@ -1163,7 +1164,7 @@
2171 self.compute = self.start_service('compute')
2172
2173 def _wait_for_state(self, ctxt, instance_id, predicate):
2174- """Wait for an stopping instance to be a given state"""
2175+ """Wait for a stopped instance to be a given state"""
2176 id = ec2utils.ec2_id_to_id(instance_id)
2177 while True:
2178 info = self.cloud.compute_api.get(context=ctxt, instance_id=id)
2179@@ -1174,12 +1175,16 @@
2180
2181 def _wait_for_running(self, instance_id):
2182 def is_running(info):
2183- return info['state_description'] == 'running'
2184+ vm_state = info["vm_state"]
2185+ task_state = info["task_state"]
1186+ return vm_state == vm_states.ACTIVE and task_state is None
2187 self._wait_for_state(self.context, instance_id, is_running)
2188
2189 def _wait_for_stopped(self, instance_id):
2190 def is_stopped(info):
2191- return info['state_description'] == 'stopped'
2192+ vm_state = info["vm_state"]
2193+ task_state = info["task_state"]
1194+ return vm_state == vm_states.STOPPED and task_state is None
2195 self._wait_for_state(self.context, instance_id, is_stopped)
2196
2197 def _wait_for_terminate(self, instance_id):
2198@@ -1562,7 +1567,7 @@
2199 'id': 0,
2200 'root_device_name': '/dev/sdh',
2201 'security_groups': [{'name': 'fake0'}, {'name': 'fake1'}],
2202- 'state_description': 'stopping',
2203+ 'vm_state': vm_states.STOPPED,
2204 'instance_type': {'name': 'fake_type'},
2205 'kernel_id': 1,
2206 'ramdisk_id': 2,
2207@@ -1606,7 +1611,7 @@
2208 self.assertEqual(groupSet, expected_groupSet)
2209 self.assertEqual(get_attribute('instanceInitiatedShutdownBehavior'),
2210 {'instance_id': 'i-12345678',
2211- 'instanceInitiatedShutdownBehavior': 'stop'})
2212+ 'instanceInitiatedShutdownBehavior': 'stopped'})
2213 self.assertEqual(get_attribute('instanceType'),
2214 {'instance_id': 'i-12345678',
2215 'instanceType': 'fake_type'})
2216
2217=== modified file 'nova/tests/test_compute.py'
2218--- nova/tests/test_compute.py 2011-08-24 23:48:04 +0000
2219+++ nova/tests/test_compute.py 2011-08-31 14:15:31 +0000
2220@@ -24,6 +24,7 @@
2221 from nova.compute import instance_types
2222 from nova.compute import manager as compute_manager
2223 from nova.compute import power_state
2224+from nova.compute import vm_states
2225 from nova import context
2226 from nova import db
2227 from nova.db.sqlalchemy import models
2228@@ -763,8 +764,8 @@
2229 'block_migration': False,
2230 'disk': None}}).\
2231 AndRaise(rpc.RemoteError('', '', ''))
2232- dbmock.instance_update(c, i_ref['id'], {'state_description': 'running',
2233- 'state': power_state.RUNNING,
2234+ dbmock.instance_update(c, i_ref['id'], {'vm_state': vm_states.ACTIVE,
2235+ 'task_state': None,
2236 'host': i_ref['host']})
2237 for v in i_ref['volumes']:
2238 dbmock.volume_update(c, v['id'], {'status': 'in-use'})
2239@@ -795,8 +796,8 @@
2240 'block_migration': False,
2241 'disk': None}}).\
2242 AndRaise(rpc.RemoteError('', '', ''))
2243- dbmock.instance_update(c, i_ref['id'], {'state_description': 'running',
2244- 'state': power_state.RUNNING,
2245+ dbmock.instance_update(c, i_ref['id'], {'vm_state': vm_states.ACTIVE,
2246+ 'task_state': None,
2247 'host': i_ref['host']})
2248
2249 self.compute.db = dbmock
2250@@ -841,8 +842,8 @@
2251 c = context.get_admin_context()
2252 instance_id = self._create_instance()
2253 i_ref = db.instance_get(c, instance_id)
2254- db.instance_update(c, i_ref['id'], {'state_description': 'migrating',
2255- 'state': power_state.PAUSED})
2256+ db.instance_update(c, i_ref['id'], {'vm_state': vm_states.MIGRATING,
2257+ 'power_state': power_state.PAUSED})
2258 v_ref = db.volume_create(c, {'size': 1, 'instance_id': instance_id})
2259 fix_addr = db.fixed_ip_create(c, {'address': '1.1.1.1',
2260 'instance_id': instance_id})
2261@@ -903,7 +904,7 @@
2262 instances = db.instance_get_all(context.get_admin_context())
2263 LOG.info(_("After force-killing instances: %s"), instances)
2264 self.assertEqual(len(instances), 1)
2265- self.assertEqual(power_state.SHUTOFF, instances[0]['state'])
2266+ self.assertEqual(power_state.NOSTATE, instances[0]['power_state'])
2267
2268 def test_get_all_by_name_regexp(self):
2269 """Test searching instances by name (display_name)"""
2270@@ -1323,25 +1324,28 @@
2271 """Test searching instances by state"""
2272
2273 c = context.get_admin_context()
2274- instance_id1 = self._create_instance({'state': power_state.SHUTDOWN})
2275+ instance_id1 = self._create_instance({
2276+ 'power_state': power_state.SHUTDOWN,
2277+ })
2278 instance_id2 = self._create_instance({
2279- 'id': 2,
2280- 'state': power_state.RUNNING})
2281+ 'id': 2,
2282+ 'power_state': power_state.RUNNING,
2283+ })
2284 instance_id3 = self._create_instance({
2285- 'id': 10,
2286- 'state': power_state.RUNNING})
2287-
2288+ 'id': 10,
2289+ 'power_state': power_state.RUNNING,
2290+ })
2291 instances = self.compute_api.get_all(c,
2292- search_opts={'state': power_state.SUSPENDED})
2293+ search_opts={'power_state': power_state.SUSPENDED})
2294 self.assertEqual(len(instances), 0)
2295
2296 instances = self.compute_api.get_all(c,
2297- search_opts={'state': power_state.SHUTDOWN})
2298+ search_opts={'power_state': power_state.SHUTDOWN})
2299 self.assertEqual(len(instances), 1)
2300 self.assertEqual(instances[0].id, instance_id1)
2301
2302 instances = self.compute_api.get_all(c,
2303- search_opts={'state': power_state.RUNNING})
2304+ search_opts={'power_state': power_state.RUNNING})
2305 self.assertEqual(len(instances), 2)
2306 instance_ids = [instance.id for instance in instances]
2307 self.assertTrue(instance_id2 in instance_ids)
2308@@ -1349,7 +1353,7 @@
2309
2310 # Test passing a list as search arg
2311 instances = self.compute_api.get_all(c,
2312- search_opts={'state': [power_state.SHUTDOWN,
2313+ search_opts={'power_state': [power_state.SHUTDOWN,
2314 power_state.RUNNING]})
2315 self.assertEqual(len(instances), 3)
2316
2317
2318=== modified file 'nova/tests/vmwareapi/db_fakes.py'
2319--- nova/tests/vmwareapi/db_fakes.py 2011-07-27 00:40:50 +0000
2320+++ nova/tests/vmwareapi/db_fakes.py 2011-08-31 14:15:31 +0000
2321@@ -23,6 +23,8 @@
2322
2323 from nova import db
2324 from nova import utils
2325+from nova.compute import task_states
2326+from nova.compute import vm_states
2327
2328
2329 def stub_out_db_instance_api(stubs):
2330@@ -64,7 +66,8 @@
2331 'image_ref': values['image_ref'],
2332 'kernel_id': values['kernel_id'],
2333 'ramdisk_id': values['ramdisk_id'],
2334- 'state_description': 'scheduling',
2335+ 'vm_state': vm_states.BUILDING,
2336+ 'task_state': task_states.SCHEDULING,
2337 'user_id': values['user_id'],
2338 'project_id': values['project_id'],
2339 'launch_time': time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime()),