Merge lp:~rackspace-titan/nova/instance_states into lp:~hudson-openstack/nova/trunk

Proposed by Brian Lamar
Status: Merged
Approved by: Vish Ishaya
Approved revision: 1504
Merged at revision: 1514
Proposed branch: lp:~rackspace-titan/nova/instance_states
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 2339 lines (+849/-460)
20 files modified
nova/api/ec2/cloud.py (+38/-12)
nova/api/openstack/common.py (+54/-28)
nova/api/openstack/servers.py (+9/-13)
nova/api/openstack/views/servers.py (+4/-9)
nova/compute/api.py (+90/-28)
nova/compute/manager.py (+250/-225)
nova/compute/task_states.py (+59/-0)
nova/compute/vm_states.py (+39/-0)
nova/db/sqlalchemy/api.py (+4/-16)
nova/db/sqlalchemy/migrate_repo/versions/044_update_instance_states.py (+138/-0)
nova/db/sqlalchemy/models.py (+3/-13)
nova/exception.py (+1/-1)
nova/scheduler/driver.py (+4/-6)
nova/tests/api/openstack/test_server_actions.py (+22/-33)
nova/tests/api/openstack/test_servers.py (+67/-38)
nova/tests/integrated/test_servers.py (+23/-11)
nova/tests/scheduler/test_scheduler.py (+9/-4)
nova/tests/test_cloud.py (+10/-5)
nova/tests/test_compute.py (+21/-17)
nova/tests/vmwareapi/db_fakes.py (+4/-1)
To merge this branch: bzr merge lp:~rackspace-titan/nova/instance_states
Reviewer Review Type Date Requested Status
Vish Ishaya (community) Approve
Brian Waldon (community) Approve
Review via email: mp+72502@code.launchpad.net

Commit message

Fixed and improved the way instance "states" are set. Instead of relying solely on the power_state of a VM, there are now explicitly defined VM states and VM task states, which respectively define the current state of the VM and the task currently being performed by the VM.

Description of the change

Currently, instance states are not working as intended. This branch remedies that using a strategy outlined in the SQLAlchemy code by Ewan, which describes our transition from two columns (state, state_description) to three (power_state, vm_state, and task_state):

OLD:
state - This loosely represented the 'state' of the virtual machine
state_description - This gave slightly more information on the virtual machine state

NEW:
power_state - This corresponds to the *actual* VM state on the hypervisor
vm_state - This represents the concept of a VM -- what the VM should be doing
task_state - This represents a current task which is being worked on by the VM

To see a list of possible vm_states, see nova/compute/vm_states.py
To see a list of possible task_states, see nova/compute/task_states.py

While this change is rather large, it mostly consists of renaming all 'state' references to 'power_state' and inserting additional database updates so that an instance's record is an accurate reflection of it at any point in time.
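As an illustration of how the three columns relate, here is a minimal sketch (the module names and constants are the ones introduced by this branch; the instance dict itself is a hypothetical example, not code from the diff):

    from nova.compute import power_state
    from nova.compute import task_states
    from nova.compute import vm_states

    # An instance mid-reboot: the hypervisor still reports RUNNING,
    # the VM is conceptually ACTIVE, and the in-flight operation is
    # REBOOTING (task_state returns to None once the task completes).
    instance = {
        'power_state': power_state.RUNNING,   # *actual* state on the hypervisor
        'vm_state': vm_states.ACTIVE,         # what the VM should be doing
        'task_state': task_states.REBOOTING,  # task being worked on by the VM
    }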

Revision history for this message
Brian Waldon (bcwaldon) wrote :

This is absolutely incredible. You've done a great job, here. One general comment before the line-by-line stuff:

Can we align the states with respect to tense? I don't think all of our states need to end in ING or ED. What do you think?

16: I think you might want to expand this comment to explain yourself better

53/54: Should this be 'stopped' and 'terminated'? According to the allowed EC2 values, these don't seem correct.

189: Can you log the output of the command you allude to?

955/990: You should probably file a bug for this, seems like a simple cleanup item

1224: Thank you for cleaning this function up.

1335/1390: I would love to expand on these module-level docstrings. I think adding a bit more context/explanation of what the task/vm_states represent would be very helpful.

1458/1471: Not sure what these comments mean...

review: Needs Fixing
Revision history for this message
Brian Lamar (blamar) wrote :

> This is absolutely incredible. You've done a great job, here. One general
> comment before the line-by-line stuff:
>
> Can we align the states with respect to tense? I don't think all of our states
> need to end in ING or ED. What do you think?

The original design had all vm_states and task_states in the same tenses. No INGs or EDs.

task_states.SCHEDULE just didn't have the same effect on me as task_states.SCHEDULING, but really it's not a huge difference in my mind. We're just a few seds away from changing everything, so it's not difficult. Some of them get more confusing, IMO, without ING or ED:

NETWORK vs NETWORKING
PAUSE vs PAUSED
STOP vs STOPPED

Would you recommend unification of tenses or just removal of all ING/ED/tense?

Revision history for this message
Brian Waldon (bcwaldon) wrote :

> > This is absolutely incredible. You've done a great job, here. One general
> > comment before the line-by-line stuff:
> >
> > Can we align the states with respect to tense? I don't think all of our
> states
> > need to end in ING or ED. What do you think?
>
> The original design had all vm_states and task_states in the same tenses. No
> INGs or EDs.
>
> task_states.SCHEDULE just didn't have the same effect on me as
> task_states.SCHEDULING but really it's not a huge difference in my mind. We're
> just a few seds away from changing everything so it's not difficult. Some of
> them get more confusing IMO without ING or ED:
>
> NETWORK vs NETWORKING
> PAUSE vs PAUSED
> STOP vs STOPPED
>
> Would you recommend unification of tenses or just removal of all ING/ED/tense?

Actually, I think it makes a lot more sense now. Tasks make sense to end in -ING and vm_states in -ED (or no suffix at all). There are a few specific states I want to point out:

task_states.SPAWN -> task_states.SPAWNING

vm_states.VERIFY_RESIZE just seems weird. How does RESIZE_VERIFICATION sound, or WAITING?

Can task_states.UNPAUSING go away in favor of RESUMING? This is more of a question than a suggestion, I can see why we might want to leave it.

What about these for images:
task_states.SNAPSHOTTING -> IMAGE_SNAPSHOT
task_states.BACKING_UP -> IMAGE_BACKUP

task_states.HARD_REBOOTING should go away until we support it. If you want to leave it, can you rename it to REBOOTING_HARD?

task_states.PASSWORD -> UPDATING_PASSWORD

task_states.STARTING -> BOOTING
'starting' can easily mean a million different things

I don't see any 'shutdown' states.

1491. By Brian Lamar

Review feedback.

1492. By Brian Lamar

Merged trunk.

1493. By Brian Lamar

Bumped migration number.

Revision history for this message
Brian Lamar (blamar) wrote :

> 16: I think you might want to expand this comment to explain yourself better

Updated with a link to the EC2 spec; does that help?

>
> 53/54: Should this be 'stopped' and 'terminated'? According to the allowed EC2
> values, these don't seem correct.

Fixed. They weren't like that before, but your suggestion seems logical.

>
> 189: Can you log the output of the command you allude to?

Fixed to output status.

> 955/990: You should probably file a bug for this, seems like a simple cleanup
> item

Not sure it's a 'bug', but I created a BP https://blueprints.launchpad.net/nova/+spec/remove-virt-driver-callbacks

>
> 1224: Thank you for cleaning this function up.

Thanks. I really want to highlight this because I don't want anyone surprised by the difference in functionality. All we're doing in the interval tasks right now is syncing power states. All actual VM state transitions should be done through explicit database updates in the compute API/manager and NOT through this method.
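As a rough sketch of that division of labor (illustrative only, not the actual periodic task in this branch; _get_power_state and _instance_update are the manager helpers added in this diff, while the method name _sync_power_states is invented here):

    def _sync_power_states(self, context):
        """Illustrative sketch: reconcile power_state with the hypervisor.

        Only power_state is written back; vm_state and task_state are
        owned by the explicit updates in the compute API/manager.
        """
        for instance in self.db.instance_get_all_by_host(context, self.host):
            current_power_state = self._get_power_state(context, instance)
            if current_power_state != instance['power_state']:
                self._instance_update(context,
                                      instance['id'],
                                      power_state=current_power_state)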

>
> 1335/1390: I would love to expand on these module-level docstrings. I think
> adding a bit more context/explanation of what the task/vm_states represent
> would be very helpful.

Added some details. Do these help?

>
> 1458/1471: Not sure what these comments mean...

Whoops. Those methods shouldn't have still been there. Removed those.

Revision history for this message
Brian Lamar (blamar) wrote :

> > > This is absolutely incredible. You've done a great job, here. One general
> > > comment before the line-by-line stuff:
> > >
> > > Can we align the states with respect to tense? I don't think all of our
> > states
> > > need to end in ING or ED. What do you think?
> >
> > The original design had all vm_states and task_states in the same tenses. No
> > INGs or EDs.
> >
> > task_states.SCHEDULE just didn't have the same effect on me as
> > task_states.SCHEDULING but really it's not a huge difference in my mind.
> We're
> > just a few seds away from changing everything so it's not difficult. Some of
> > them get more confusing IMO without ING or ED:
> >
> > NETWORK vs NETWORKING
> > PAUSE vs PAUSED
> > STOP vs STOPPED
> >
> > Would you recommend unification of tenses or just removal of all
> ING/ED/tense?
>
> Actually, I think it makes a lot more sense now. Tasks make sense to end in
> -ING and vm_states in -ED (or no suffix at all). There are a few specific
> states I want to point out:
>
> task_states.SPAWN -> task_states.SPAWNING

Updated.

>
> vm_states.VERIFY_RESIZE just seems weird. How does RESIZE_VERIFICATION sound,
> or WAITING?

I actually updated that state to be a task, because it's not really a VM state. The VM is active, but the task is "waiting for input to see if I should revert or not". It's now task_states.RESIZE_VERIFY.

>
> Can task_states.UNPAUSING go away in favor of RESUMING? This is more of a
> question than a suggestion, I can see why we might want to leave it.

Technically, unpausing is the opposite of pausing and resuming is the opposite of suspending. It's a little silly, I admit, but they have subtle differences, so I'd like to keep them if that's not a deal-breaker.

>
> What about these for images:
> task_states.SNAPSHOTTING -> IMAGE_SNAPSHOT
> task_states.BACKING_UP -> IMAGE_BACKUP

Good suggestion. Updated.

>
> task_states.HARD_REBOOTING should go away until we support it. If you want to
> leave it, can you rename it to REBOOTING_HARD?

I've removed it; it's not supported at all and it seems silly to have in there.

>
> task_states.PASSWORD -> UPDATING_PASSWORD

Updated.

>
> task_states.STARTING -> BOOTING
> 'starting' can easily mean a million different things

I very much like the task pairs. STOPPING and STARTING correspond well, and when thought of in a compute/instance sense I'm not sure the word is quite so ambiguous.

>
> I don't see any 'shutdown' states.

STOPPED means the VM is stopped/shut off. STOPPING means the VM is in the process of being shut down. I'm open to suggestions on making this clearer.

1494. By Brian Lamar

review feedback

1495. By Brian Lamar

Test fixup after last review feedback commit.

1496. By Brian Lamar

Merged trunk and fixed conflicts.

Revision history for this message
Brian Waldon (bcwaldon) wrote :

Fantastic.

review: Approve
1497. By Brian Lamar

Tiny tweaks to the migration script.

1498. By Brian Lamar

Merged trunk.

1499. By Brian Lamar

Increased migration number.

1500. By Brian Lamar

Merged trunk.

1501. By Brian Lamar

Merged trunk.

1502. By Brian Lamar

Fix a bad merge on my part; this fixes rebuilds!

Revision history for this message
Matt Dietz (cerberus) wrote :

I like the fixes here and the new functionality!

A question for later, but have we ever swung back around to using an actual statemachine to enforce transitions? I think the code here all looks sound, but experience tells me that decisions based on state in the code can be fragile, at best. I'm asking because of things like the following:

519 + self.update(context,
520 + instance_id,
521 + vm_state=vm_states.SUSPENDED,
522 + task_state=task_states.RESUMING)

It doesn't really matter what state the instance was in before. It's suspended and about to resume now. I know we didn't really enforce the state in all scenarios previously, so I guess I can't expect this patch to take care of all of those in one shot. I bring it up because it seems like in certain instances you've chosen to try and enforce that the instance is in a good state, but others aren't covered. Do we maybe want to consider implementing a more statemachine-esque cleanup in a subsequent patch?
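As an illustration of the statemachine-esque enforcement being suggested (a hypothetical sketch; _ALLOWED and begin_task are invented names for illustration, not part of this branch):

    from nova.compute import task_states
    from nova.compute import vm_states

    # Which tasks may legally begin from each vm_state (abridged).
    _ALLOWED = {
        vm_states.ACTIVE: (task_states.PAUSING, task_states.SUSPENDING),
        vm_states.PAUSED: (task_states.UNPAUSING,),
        vm_states.SUSPENDED: (task_states.RESUMING,),
    }

    def begin_task(instance, task_state):
        """Refuse a task that is not valid for the instance's vm_state."""
        if task_state not in _ALLOWED.get(instance['vm_state'], ()):
            raise ValueError("task %s not allowed from vm_state %s"
                             % (task_state, instance['vm_state']))
        instance['task_state'] = task_state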

948 + # NOTE(blamar): None of the virt drivers use the 'callback' param

I wondered this myself. Good catch. Make a bug?

Great work on the tests!

I'll add my approve. Just want to touch base with you first and see what (if any) intent there is regarding my comments above.

Revision history for this message
Alex Meade (alex-meade) wrote :

1424: should this be vm_state.STOPPED?
 Also, vm_state should be vm_states? and imported?

Revision history for this message
Vish Ishaya (vishvananda) wrote :

> 1424: should this be vm_state.STOPPED?
> Also, vm_state should be vm_states? and imported?

Yes, the import from a few lines up is wrong; it's importing power_states.

I can't find any other issues; good work.

review: Needs Fixing
1503. By Brian Lamar

Merged trunk.

1504. By Brian Lamar

Removed extraneous import and s/vm_state.STOP/vm_states.STOPPED/

Revision history for this message
Brian Lamar (blamar) wrote :

Great catch, that should be fixed now.

Revision history for this message
Vish Ishaya (vishvananda) wrote :

looks good now

review: Approve

Preview Diff

=== modified file 'nova/api/ec2/cloud.py'
--- nova/api/ec2/cloud.py 2011-08-17 23:44:34 +0000
+++ nova/api/ec2/cloud.py 2011-08-31 14:15:31 +0000
@@ -47,6 +47,7 @@
 from nova import volume
 from nova.api.ec2 import ec2utils
 from nova.compute import instance_types
+from nova.compute import vm_states
 from nova.image import s3
 
 
@@ -78,6 +79,30 @@
     return {'private_key': private_key, 'fingerprint': fingerprint}
 
 
+# EC2 API can return the following values as documented in the EC2 API
+# http://docs.amazonwebservices.com/AWSEC2/latest/APIReference/
+# ApiReference-ItemType-InstanceStateType.html
+# pending | running | shutting-down | terminated | stopping | stopped
+_STATE_DESCRIPTION_MAP = {
+    None: 'pending',
+    vm_states.ACTIVE: 'running',
+    vm_states.BUILDING: 'pending',
+    vm_states.REBUILDING: 'pending',
+    vm_states.DELETED: 'terminated',
+    vm_states.STOPPED: 'stopped',
+    vm_states.MIGRATING: 'migrate',
+    vm_states.RESIZING: 'resize',
+    vm_states.PAUSED: 'pause',
+    vm_states.SUSPENDED: 'suspend',
+    vm_states.RESCUED: 'rescue',
+}
+
+
+def state_description_from_vm_state(vm_state):
+    """Map the vm state to the server status string"""
+    return _STATE_DESCRIPTION_MAP.get(vm_state, vm_state)
+
+
 # TODO(yamahata): hypervisor dependent default device name
 _DEFAULT_ROOT_DEVICE_NAME = '/dev/sda1'
 _DEFAULT_MAPPINGS = {'ami': 'sda1',
@@ -1039,11 +1064,12 @@
 
     def _format_attr_instance_initiated_shutdown_behavior(instance,
                                                           result):
-        state_description = instance['state_description']
-        state_to_value = {'stopping': 'stop',
-                          'stopped': 'stop',
-                          'terminating': 'terminate'}
-        value = state_to_value.get(state_description)
+        vm_state = instance['vm_state']
+        state_to_value = {
+            vm_states.STOPPED: 'stopped',
+            vm_states.DELETED: 'terminated',
+        }
+        value = state_to_value.get(vm_state)
         if value:
            result['instanceInitiatedShutdownBehavior'] = value
 
@@ -1198,8 +1224,8 @@
            self._format_kernel_id(instance, i, 'kernelId')
            self._format_ramdisk_id(instance, i, 'ramdiskId')
            i['instanceState'] = {
-                'code': instance['state'],
-                'name': instance['state_description']}
+                'code': instance['power_state'],
+                'name': state_description_from_vm_state(instance['vm_state'])}
            fixed_addr = None
            floating_addr = None
            if instance['fixed_ips']:
@@ -1618,22 +1644,22 @@
        # stop the instance if necessary
        restart_instance = False
        if not no_reboot:
-            state_description = instance['state_description']
+            vm_state = instance['vm_state']
 
            # if the instance is in subtle state, refuse to proceed.
-            if state_description not in ('running', 'stopping', 'stopped'):
+            if vm_state not in (vm_states.ACTIVE, vm_states.STOPPED):
                raise exception.InstanceNotRunning(instance_id=ec2_instance_id)
 
-            if state_description == 'running':
+            if vm_state == vm_states.ACTIVE:
                restart_instance = True
                self.compute_api.stop(context, instance_id=instance_id)
 
            # wait instance for really stopped
            start_time = time.time()
-            while state_description != 'stopped':
+            while vm_state != vm_states.STOPPED:
                time.sleep(1)
                instance = self.compute_api.get(context, instance_id)
-                state_description = instance['state_description']
+                vm_state = instance['vm_state']
            # NOTE(yamahata): timeout and error. 1 hour for now for safety.
            # Is it too short/long?
            # Or is there any better way?
 
=== modified file 'nova/api/openstack/common.py'
--- nova/api/openstack/common.py 2011-08-17 07:41:17 +0000
+++ nova/api/openstack/common.py 2011-08-31 14:15:31 +0000
@@ -27,7 +27,8 @@
 from nova import log as logging
 from nova import quota
 from nova.api.openstack import wsgi
-from nova.compute import power_state as compute_power_state
+from nova.compute import vm_states
+from nova.compute import task_states
 
 
 LOG = logging.getLogger('nova.api.openstack.common')
@@ -38,36 +39,61 @@
 XML_NS_V11 = 'http://docs.openstack.org/compute/api/v1.1'
 
 
-_STATUS_MAP = {
-    None: 'BUILD',
-    compute_power_state.NOSTATE: 'BUILD',
-    compute_power_state.RUNNING: 'ACTIVE',
-    compute_power_state.BLOCKED: 'ACTIVE',
-    compute_power_state.SUSPENDED: 'SUSPENDED',
-    compute_power_state.PAUSED: 'PAUSED',
-    compute_power_state.SHUTDOWN: 'SHUTDOWN',
-    compute_power_state.SHUTOFF: 'SHUTOFF',
-    compute_power_state.CRASHED: 'ERROR',
-    compute_power_state.FAILED: 'ERROR',
-    compute_power_state.BUILDING: 'BUILD',
+_STATE_MAP = {
+    vm_states.ACTIVE: {
+        'default': 'ACTIVE',
+        task_states.REBOOTING: 'REBOOT',
+        task_states.UPDATING_PASSWORD: 'PASSWORD',
+        task_states.RESIZE_VERIFY: 'VERIFY_RESIZE',
+    },
+    vm_states.BUILDING: {
+        'default': 'BUILD',
+    },
+    vm_states.REBUILDING: {
+        'default': 'REBUILD',
+    },
+    vm_states.STOPPED: {
+        'default': 'STOPPED',
+    },
+    vm_states.MIGRATING: {
+        'default': 'MIGRATING',
+    },
+    vm_states.RESIZING: {
+        'default': 'RESIZE',
+    },
+    vm_states.PAUSED: {
+        'default': 'PAUSED',
+    },
+    vm_states.SUSPENDED: {
+        'default': 'SUSPENDED',
+    },
+    vm_states.RESCUED: {
+        'default': 'RESCUE',
+    },
+    vm_states.ERROR: {
+        'default': 'ERROR',
+    },
+    vm_states.DELETED: {
+        'default': 'DELETED',
+    },
 }
 
 
-def status_from_power_state(power_state):
-    """Map the power state to the server status string"""
-    return _STATUS_MAP[power_state]
-
-
-def power_states_from_status(status):
-    """Map the server status string to a list of power states"""
-    power_states = []
-    for power_state, status_map in _STATUS_MAP.iteritems():
-        # Skip the 'None' state
-        if power_state is None:
-            continue
-        if status.lower() == status_map.lower():
-            power_states.append(power_state)
-    return power_states
+def status_from_state(vm_state, task_state='default'):
+    """Given vm_state and task_state, return a status string."""
+    task_map = _STATE_MAP.get(vm_state, dict(default='UNKNOWN_STATE'))
+    status = task_map.get(task_state, task_map['default'])
+    LOG.debug("Generated %(status)s from vm_state=%(vm_state)s "
+              "task_state=%(task_state)s." % locals())
+    return status
+
+
+def vm_state_from_status(status):
+    """Map the server status string to a vm state."""
+    for state, task_map in _STATE_MAP.iteritems():
+        status_string = task_map.get("default")
+        if status.lower() == status_string.lower():
+            return state
 
 
 def get_pagination_params(request):
 
=== modified file 'nova/api/openstack/servers.py'
--- nova/api/openstack/servers.py 2011-08-24 14:37:59 +0000
+++ nova/api/openstack/servers.py 2011-08-31 14:15:31 +0000
@@ -95,17 +95,15 @@
        search_opts['recurse_zones'] = utils.bool_from_str(
            search_opts.get('recurse_zones', False))
 
-        # If search by 'status', we need to convert it to 'state'
-        # If the status is unknown, bail.
-        # Leave 'state' in search_opts so compute can pass it on to
-        # child zones..
+        # If search by 'status', we need to convert it to 'vm_state'
+        # to pass on to child zones.
        if 'status' in search_opts:
            status = search_opts['status']
-            search_opts['state'] = common.power_states_from_status(status)
-            if len(search_opts['state']) == 0:
+            state = common.vm_state_from_status(status)
+            if state is None:
                reason = _('Invalid server status: %(status)s') % locals()
-                LOG.error(reason)
                raise exception.InvalidInput(reason=reason)
+            search_opts['vm_state'] = state
 
        # By default, compute's get_all() will return deleted instances.
        # If an admin hasn't specified a 'deleted' search option, we need
@@ -608,9 +606,8 @@
 
        try:
            self.compute_api.rebuild(context, instance_id, image_id, password)
-        except exception.BuildInProgress:
-            msg = _("Instance %s is currently being rebuilt.") % instance_id
-            LOG.debug(msg)
+        except exception.RebuildRequiresActiveInstance:
+            msg = _("Instance %s must be active to rebuild.") % instance_id
            raise exc.HTTPConflict(explanation=msg)
 
        return webob.Response(status_int=202)
@@ -750,9 +747,8 @@
            self.compute_api.rebuild(context, instance_id, image_href,
                                     password, name=name, metadata=metadata,
                                     files_to_inject=personalities)
-        except exception.BuildInProgress:
-            msg = _("Instance %s is currently being rebuilt.") % instance_id
-            LOG.debug(msg)
+        except exception.RebuildRequiresActiveInstance:
+            msg = _("Instance %s must be active to rebuild.") % instance_id
            raise exc.HTTPConflict(explanation=msg)
        except exception.InstanceNotFound:
            msg = _("Instance %s could not be found") % instance_id
 
=== modified file 'nova/api/openstack/views/servers.py'
--- nova/api/openstack/views/servers.py 2011-08-23 04:17:57 +0000
+++ nova/api/openstack/views/servers.py 2011-08-31 14:15:31 +0000
@@ -21,13 +21,12 @@
 import os
 
 from nova import exception
-import nova.compute
-import nova.context
 from nova.api.openstack import common
 from nova.api.openstack.views import addresses as addresses_view
 from nova.api.openstack.views import flavors as flavors_view
 from nova.api.openstack.views import images as images_view
 from nova import utils
+from nova.compute import vm_states
 
 
 class ViewBuilder(object):
@@ -61,17 +60,13 @@
 
     def _build_detail(self, inst):
         """Returns a detailed model of a server."""
+        vm_state = inst.get('vm_state', vm_states.BUILDING)
+        task_state = inst.get('task_state')
 
         inst_dict = {
             'id': inst['id'],
             'name': inst['display_name'],
-            'status': common.status_from_power_state(inst.get('state'))}
-
-        ctxt = nova.context.get_admin_context()
-        compute_api = nova.compute.API()
-
-        if compute_api.has_finished_migration(ctxt, inst['uuid']):
-            inst_dict['status'] = 'RESIZE-CONFIRM'
+            'status': common.status_from_state(vm_state, task_state)}
 
         # Return the metadata as a dictionary
         metadata = {}
 
=== modified file 'nova/compute/api.py'
--- nova/compute/api.py 2011-08-26 20:36:45 +0000
+++ nova/compute/api.py 2011-08-31 14:15:31 +0000
@@ -37,6 +37,8 @@
 from nova import volume
 from nova.compute import instance_types
 from nova.compute import power_state
+from nova.compute import task_states
+from nova.compute import vm_states
 from nova.compute.utils import terminate_volumes
 from nova.scheduler import api as scheduler_api
 from nova.db import base
@@ -75,12 +77,18 @@
 
 
 def _is_able_to_shutdown(instance, instance_id):
-    states = {'terminating': "Instance %s is already being terminated",
-              'migrating': "Instance %s is being migrated",
-              'stopping': "Instance %s is being stopped"}
-    msg = states.get(instance['state_description'])
-    if msg:
-        LOG.warning(_(msg), instance_id)
+    vm_state = instance["vm_state"]
+    task_state = instance["task_state"]
+
+    valid_shutdown_states = [
+        vm_states.ACTIVE,
+        vm_states.REBUILDING,
+        vm_states.BUILDING,
+    ]
+
+    if vm_state not in valid_shutdown_states:
+        LOG.warn(_("Instance %(instance_id)s is not in an 'active' state. It "
+                   "is currently %(vm_state)s. Shutdown aborted.") % locals())
        return False
 
    return True
@@ -251,10 +259,10 @@
            'image_ref': image_href,
            'kernel_id': kernel_id or '',
            'ramdisk_id': ramdisk_id or '',
+            'power_state': power_state.NOSTATE,
+            'vm_state': vm_states.BUILDING,
            'config_drive_id': config_drive_id or '',
            'config_drive': config_drive or '',
-            'state': 0,
-            'state_description': 'scheduling',
            'user_id': context.user_id,
            'project_id': context.project_id,
            'launch_time': time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime()),
@@ -415,6 +423,8 @@
            updates['display_name'] = "Server %s" % instance_id
            instance['display_name'] = updates['display_name']
        updates['hostname'] = self.hostname_factory(instance)
+        updates['vm_state'] = vm_states.BUILDING
+        updates['task_state'] = task_states.SCHEDULING
 
        instance = self.update(context, instance_id, **updates)
        return instance
@@ -750,10 +760,8 @@
            return
 
        self.update(context,
-                    instance['id'],
-                    state_description='terminating',
-                    state=0,
-                    terminated_at=utils.utcnow())
+                    instance_id,
+                    task_state=task_states.DELETING)
 
        host = instance['host']
        if host:
@@ -773,9 +781,9 @@
            return
 
        self.update(context,
-                    instance['id'],
-                    state_description='stopping',
-                    state=power_state.NOSTATE,
+                    instance_id,
+                    vm_state=vm_states.ACTIVE,
+                    task_state=task_states.STOPPING,
                    terminated_at=utils.utcnow())
 
        host = instance['host']
@@ -787,12 +795,18 @@
        """Start an instance."""
        LOG.debug(_("Going to try to start %s"), instance_id)
        instance = self._get_instance(context, instance_id, 'starting')
-        if instance['state_description'] != 'stopped':
-            _state_description = instance['state_description']
+        vm_state = instance["vm_state"]
+
+        if vm_state != vm_states.STOPPED:
            LOG.warning(_("Instance %(instance_id)s is not "
-                          "stopped(%(_state_description)s)") % locals())
+                          "stopped. (%(vm_state)s)") % locals())
            return
 
+        self.update(context,
+                    instance_id,
+                    vm_state=vm_states.STOPPED,
+                    task_state=task_states.STARTING)
+
        # TODO(yamahata): injected_files isn't supported right now.
        # It is used only for osapi. not for ec2 api.
        # availability_zone isn't used by run_instance.
@@ -1020,6 +1034,10 @@
    @scheduler_api.reroute_compute("reboot")
    def reboot(self, context, instance_id):
        """Reboot the given instance."""
+        self.update(context,
+                    instance_id,
+                    vm_state=vm_states.ACTIVE,
+                    task_state=task_states.REBOOTING)
        self._cast_compute_message('reboot_instance', context, instance_id)
 
    @scheduler_api.reroute_compute("rebuild")
@@ -1027,21 +1045,25 @@
                name=None, metadata=None, files_to_inject=None):
        """Rebuild the given instance with the provided metadata."""
        instance = db.api.instance_get(context, instance_id)
+        name = name or instance["display_name"]
 
-        if instance["state"] == power_state.BUILDING:
-            msg = _("Instance already building")
-            raise exception.BuildInProgress(msg)
+        if instance["vm_state"] != vm_states.ACTIVE:
+            msg = _("Instance must be active to rebuild.")
+            raise exception.RebuildRequiresActiveInstance(msg)
 
        files_to_inject = files_to_inject or []
+        metadata = metadata or {}
+
        self._check_injected_file_quota(context, files_to_inject)
+        self._check_metadata_properties_quota(context, metadata)
 
-        values = {"image_ref": image_href}
-        if metadata is not None:
-            self._check_metadata_properties_quota(context, metadata)
-            values['metadata'] = metadata
-        if name is not None:
-            values['display_name'] = name
-        self.db.instance_update(context, instance_id, values)
+        self.update(context,
+                    instance_id,
+                    metadata=metadata,
+                    display_name=name,
+                    image_ref=image_href,
+                    vm_state=vm_states.ACTIVE,
+                    task_state=task_states.REBUILDING)
 
        rebuild_params = {
            "new_pass": admin_password,
@@ -1065,6 +1087,11 @@
            raise exception.MigrationNotFoundByStatus(instance_id=instance_id,
                                                      status='finished')
 
+        self.update(context,
+                    instance_id,
+                    vm_state=vm_states.ACTIVE,
+                    task_state=None)
+
        params = {'migration_id': migration_ref['id']}
        self._cast_compute_message('revert_resize', context,
                                   instance_ref['uuid'],
@@ -1085,6 +1112,12 @@
        if not migration_ref:
            raise exception.MigrationNotFoundByStatus(instance_id=instance_id,
                                                      status='finished')
+
+        self.update(context,
+                    instance_id,
+                    vm_state=vm_states.ACTIVE,
+                    task_state=None)
+
        params = {'migration_id': migration_ref['id']}
        self._cast_compute_message('confirm_resize', context,
                                   instance_ref['uuid'],
@@ -1130,6 +1163,11 @@
        if (current_memory_mb == new_memory_mb) and flavor_id:
            raise exception.CannotResizeToSameSize()
 
+        self.update(context,
+                    instance_id,
+                    vm_state=vm_states.RESIZING,
+                    task_state=task_states.RESIZE_PREP)
+
        instance_ref = self._get_instance(context, instance_id, 'resize')
        self._cast_scheduler_message(context,
                                     {"method": "prep_resize",
@@ -1163,11 +1201,19 @@
    @scheduler_api.reroute_compute("pause")
    def pause(self, context, instance_id):
        """Pause the given instance."""
+        self.update(context,
+                    instance_id,
+                    vm_state=vm_states.ACTIVE,
+                    task_state=task_states.PAUSING)
        self._cast_compute_message('pause_instance', context, instance_id)
 
    @scheduler_api.reroute_compute("unpause")
    def unpause(self, context, instance_id):
        """Unpause the given instance."""
+        self.update(context,
+                    instance_id,
+                    vm_state=vm_states.PAUSED,
+                    task_state=task_states.UNPAUSING)
        self._cast_compute_message('unpause_instance', context, instance_id)
 
    def _call_compute_message_for_host(self, action, context, host, params):
@@ -1200,21 +1246,37 @@
    @scheduler_api.reroute_compute("suspend")
    def suspend(self, context, instance_id):
        """Suspend the given instance."""
+        self.update(context,
+                    instance_id,
+                    vm_state=vm_states.ACTIVE,
+                    task_state=task_states.SUSPENDING)
        self._cast_compute_message('suspend_instance', context, instance_id)
 
    @scheduler_api.reroute_compute("resume")
    def resume(self, context, instance_id):
        """Resume the given instance."""
+        self.update(context,
+                    instance_id,
+                    vm_state=vm_states.SUSPENDED,
+                    task_state=task_states.RESUMING)
        self._cast_compute_message('resume_instance', context, instance_id)
 
    @scheduler_api.reroute_compute("rescue")
    def rescue(self, context, instance_id):
        """Rescue the given instance."""
+        self.update(context,
+                    instance_id,
+                    vm_state=vm_states.ACTIVE,
+                    task_state=task_states.RESCUING)
        self._cast_compute_message('rescue_instance', context, instance_id)
 
    @scheduler_api.reroute_compute("unrescue")
    def unrescue(self, context, instance_id):
        """Unrescue the given instance."""
+        self.update(context,
+                    instance_id,
+                    vm_state=vm_states.RESCUED,
+                    task_state=task_states.UNRESCUING)
        self._cast_compute_message('unrescue_instance', context, instance_id)
 
    @scheduler_api.reroute_compute("set_admin_password")
 
=== modified file 'nova/compute/manager.py'
--- nova/compute/manager.py 2011-08-26 13:54:53 +0000
+++ nova/compute/manager.py 2011-08-31 14:15:31 +0000
@@ -56,6 +56,8 @@
 from nova import utils
 from nova import volume
 from nova.compute import power_state
+from nova.compute import task_states
+from nova.compute import vm_states
 from nova.notifier import api as notifier
 from nova.compute.utils import terminate_volumes
 from nova.virt import driver
@@ -146,6 +148,10 @@
         super(ComputeManager, self).__init__(service_name="compute",
                                              *args, **kwargs)
 
+    def _instance_update(self, context, instance_id, **kwargs):
+        """Update an instance in the database using kwargs as value."""
+        return self.db.instance_update(context, instance_id, kwargs)
+
     def init_host(self):
         """Initialization for a standalone compute service."""
         self.driver.init_host(host=self.host)
@@ -153,8 +159,8 @@
        instances = self.db.instance_get_all_by_host(context, self.host)
        for instance in instances:
            inst_name = instance['name']
-            db_state = instance['state']
-            drv_state = self._update_state(context, instance['id'])
+            db_state = instance['power_state']
+            drv_state = self._get_power_state(context, instance)
 
            expect_running = db_state == power_state.RUNNING \
                and drv_state != db_state
@@ -177,29 +183,13 @@
                LOG.warning(_('Hypervisor driver does not '
                              'support firewall rules'))
 
-    def _update_state(self, context, instance_id, state=None):
-        """Update the state of an instance from the driver info."""
-        instance_ref = self.db.instance_get(context, instance_id)
-
-        if state is None:
-            try:
-                LOG.debug(_('Checking state of %s'), instance_ref['name'])
-                info = self.driver.get_info(instance_ref['name'])
-            except exception.NotFound:
-                info = None
-
-        if info is not None:
-            state = info['state']
-        else:
-            state = power_state.FAILED
-
-        self.db.instance_set_state(context, instance_id, state)
-        return state
-
-    def _update_launched_at(self, context, instance_id, launched_at=None):
-        """Update the launched_at parameter of the given instance."""
-        data = {'launched_at': launched_at or utils.utcnow()}
-        self.db.instance_update(context, instance_id, data)
+    def _get_power_state(self, context, instance):
+        """Retrieve the power state for the given instance."""
+        LOG.debug(_('Checking state of %s'), instance['name'])
+        try:
+            return self.driver.get_info(instance['name'])["state"]
+        except exception.NotFound:
+            return power_state.FAILED
 
    def get_console_topic(self, context, **kwargs):
        """Retrieves the console host for a project on this host.
@@ -251,11 +241,6 @@
 
    def _setup_block_device_mapping(self, context, instance_id):
        """setup volumes for block device mapping"""
-        self.db.instance_set_state(context,
-                                   instance_id,
-                                   power_state.NOSTATE,
-                                   'block_device_mapping')
-
        volume_api = volume.API()
        block_device_mapping = []
        swap = None
@@ -389,17 +374,12 @@
        updates = {}
        updates['host'] = self.host
        updates['launched_on'] = self.host
-        instance = self.db.instance_update(context,
-                                           instance_id,
-                                           updates)
+        updates['vm_state'] = vm_states.BUILDING
+        updates['task_state'] = task_states.NETWORKING
+        instance = self.db.instance_update(context, instance_id, updates)
        instance['injected_files'] = kwargs.get('injected_files', [])
        instance['admin_pass'] = kwargs.get('admin_password', None)
 
-        self.db.instance_set_state(context,
-                                   instance_id,
-                                   power_state.NOSTATE,
-                                   'networking')
-
        is_vpn = instance['image_ref'] == str(FLAGS.vpn_image_id)
        try:
            # NOTE(vish): This could be a cast because we don't do anything
@@ -418,6 +398,11 @@
                # all vif creation and network injection, maybe this is correct
                network_info = []
 
+            self._instance_update(context,
+                                  instance_id,
+                                  vm_state=vm_states.BUILDING,
+                                  task_state=task_states.BLOCK_DEVICE_MAPPING)
+
            (swap, ephemerals,
             block_device_mapping) = self._setup_block_device_mapping(
                context, instance_id)
@@ -427,9 +412,12 @@
                'ephemerals': ephemerals,
                'block_device_mapping': block_device_mapping}
 
+            self._instance_update(context,
+                                  instance_id,
+                                  vm_state=vm_states.BUILDING,
+                                  task_state=task_states.SPAWNING)
+
            # TODO(vish) check to make sure the availability zone matches
-            self._update_state(context, instance_id, power_state.BUILDING)
-
            try:
                self.driver.spawn(context, instance,
                                  network_info, block_device_info)
@@ -438,13 +426,21 @@
                        "virtualization enabled in the BIOS? Details: "
                        "%(ex)s") % locals()
                LOG.exception(msg)
-
-            self._update_launched_at(context, instance_id)
-            self._update_state(context, instance_id)
+                return
+
+            current_power_state = self._get_power_state(context, instance)
+            self._instance_update(context,
+                                  instance_id,
+                                  power_state=current_power_state,
+                                  vm_state=vm_states.ACTIVE,
+                                  task_state=None,
+                                  launched_at=utils.utcnow())
+
            usage_info = utils.usage_from_instance(instance)
            notifier.notify('compute.%s' % self.host,
                            'compute.instance.create',
                            notifier.INFO, usage_info)
+
        except exception.InstanceNotFound:
            # FIXME(wwolf): We are just ignoring InstanceNotFound
            # exceptions here in case the instance was immediately
@@ -480,8 +476,7 @@
        for volume in volumes:
            self._detach_volume(context, instance_id, volume['id'], False)
 
-        if (instance['state'] == power_state.SHUTOFF and
-                instance['state_description'] != 'stopped'):
+        if instance['power_state'] == power_state.SHUTOFF:
            self.db.instance_destroy(context, instance_id)
            raise exception.Error(_('trying to destroy already destroyed'
                                    ' instance: %s') % instance_id)
@@ -496,9 +491,14 @@
        """Terminate an instance on this host."""
        self._shutdown_instance(context, instance_id, 'Terminating')
        instance = self.db.instance_get(context.elevated(), instance_id)
+        self._instance_update(context,
+                              instance_id,
+                              vm_state=vm_states.DELETED,
+                              task_state=None,
+                              terminated_at=utils.utcnow())
 
-        # TODO(ja): should we keep it in a terminated state for a bit?
        self.db.instance_destroy(context, instance_id)
+
        usage_info = utils.usage_from_instance(instance)
        notifier.notify('compute.%s' % self.host,
                        'compute.instance.delete',
@@ -509,7 +509,10 @@
    def stop_instance(self, context, instance_id):
        """Stopping an instance on this host."""
        self._shutdown_instance(context, instance_id, 'Stopping')
-        # instance state will be updated to stopped by _poll_instance_states()
+        self._instance_update(context,
+                              instance_id,
+                              vm_state=vm_states.STOPPED,
+                              task_state=None)
 
    @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())
    @checks_instance_lock
@@ -529,26 +532,46 @@
        instance_ref = self.db.instance_get(context, instance_id)
        LOG.audit(_("Rebuilding instance %s"), instance_id, context=context)
 
-        self._update_state(context, instance_id, power_state.BUILDING)
+        current_power_state = self._get_power_state(context, instance_ref)
+        self._instance_update(context,
+                              instance_id,
+                              power_state=current_power_state,
+                              vm_state=vm_states.REBUILDING,
+                              task_state=None)
 
        network_info = self._get_instance_nw_info(context, instance_ref)
-
        self.driver.destroy(instance_ref, network_info)
+
+        self._instance_update(context,
+                              instance_id,
+                              vm_state=vm_states.REBUILDING,
+                              task_state=task_states.BLOCK_DEVICE_MAPPING)
+
        instance_ref.injected_files = kwargs.get('injected_files', [])
        network_info = self.network_api.get_instance_nw_info(context,
                                                             instance_ref)
        bd_mapping = self._setup_block_device_mapping(context, instance_id)
 
+        self._instance_update(context,
+                              instance_id,
+                              vm_state=vm_states.REBUILDING,
+                              task_state=task_states.SPAWNING)
+
        # pull in new password here since the original password isn't in the db
        instance_ref.admin_pass = kwargs.get('new_pass',
                utils.generate_password(FLAGS.password_length))
 
        self.driver.spawn(context, instance_ref, network_info, bd_mapping)
 
-        self._update_launched_at(context, instance_id)
-        self._update_state(context, instance_id)
+        current_power_state = self._get_power_state(context, instance_ref)
+        self._instance_update(context,
+                              instance_id,
+                              power_state=current_power_state,
+                              vm_state=vm_states.ACTIVE,
+                              task_state=None,
+                              launched_at=utils.utcnow())
+
        usage_info = utils.usage_from_instance(instance_ref)
-
        notifier.notify('compute.%s' % self.host,
                        'compute.instance.rebuild',
                        notifier.INFO,
@@ -558,26 +581,34 @@
    @checks_instance_lock
    def reboot_instance(self, context, instance_id):
        """Reboot an instance on this host."""
-        context = context.elevated()
-        self._update_state(context, instance_id)
-        instance_ref = self.db.instance_get(context, instance_id)
        LOG.audit(_("Rebooting instance %s"), instance_id, context=context)
-
-        if instance_ref['state'] != power_state.RUNNING:
-            state = instance_ref['state']
+        context = context.elevated()
+        instance_ref = self.db.instance_get(context, instance_id)
+
+        current_power_state = self._get_power_state(context, instance_ref)
+        self._instance_update(context,
+                              instance_id,
+                              power_state=current_power_state,
+                              vm_state=vm_states.ACTIVE,
+                              task_state=task_states.REBOOTING)
+
+        if instance_ref['power_state'] != power_state.RUNNING:
+            state = instance_ref['power_state']
            running = power_state.RUNNING
            LOG.warn(_('trying to reboot a non-running '
                       'instance: %(instance_id)s (state: %(state)s '
                       'expected: %(running)s)') % locals(),
                     context=context)
 
-        self.db.instance_set_state(context,
-                                   instance_id,
-                                   power_state.NOSTATE,
-                                   'rebooting')
        network_info = self._get_instance_nw_info(context, instance_ref)
        self.driver.reboot(instance_ref, network_info)
-        self._update_state(context, instance_id)
+
+        current_power_state = self._get_power_state(context, instance_ref)
+        self._instance_update(context,
+                              instance_id,
+                              power_state=current_power_state,
+                              vm_state=vm_states.ACTIVE,
+                              task_state=None)
 
    @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())
    def snapshot_instance(self, context, instance_id, image_id,
@@ -593,37 +624,45 @@
        :param rotation: int representing how many backups to keep around;
            None if rotation shouldn't be used (as in the case of snapshots)
        """
+        if image_type == "snapshot":
+            task_state = task_states.IMAGE_SNAPSHOT
+        elif image_type == "backup":
+            task_state = task_states.IMAGE_BACKUP
+        else:
+            raise Exception(_('Image type not recognized %s') % image_type)
+
        context = context.elevated()
        instance_ref = self.db.instance_get(context, instance_id)
 
-        #NOTE(sirp): update_state currently only refreshes the state field
-        # if we add is_snapshotting, we will need this refreshed too,
-        # potentially?
-        self._update_state(context, instance_id)
+        current_power_state = self._get_power_state(context, instance_ref)
+        self._instance_update(context,
+                              instance_id,
+                              power_state=current_power_state,
+                              vm_state=vm_states.ACTIVE,
+                              task_state=task_state)
 
        LOG.audit(_('instance %s: snapshotting'), instance_id,
                  context=context)
-        if instance_ref['state'] != power_state.RUNNING:
-            state = instance_ref['state']
+
+        if instance_ref['power_state'] != power_state.RUNNING:
+            state = instance_ref['power_state']
            running = power_state.RUNNING
            LOG.warn(_('trying to snapshot a non-running '
                       'instance: %(instance_id)s (state: %(state)s '
                       'expected: %(running)s)') % locals())
 
        self.driver.snapshot(context, instance_ref, image_id)
+        self._instance_update(context, instance_id, task_state=None)
 
-        if image_type == 'snapshot':
-            if rotation:
-                raise exception.ImageRotationNotAllowed()
+        if image_type == 'snapshot' and rotation:
+            raise exception.ImageRotationNotAllowed()
+
+        elif image_type == 'backup' and rotation:
+            instance_uuid = instance_ref['uuid']
+            self.rotate_backups(context, instance_uuid, backup_type, rotation)
+
        elif image_type == 'backup':
-            if rotation:
-                instance_uuid = instance_ref['uuid']
-                self.rotate_backups(context, instance_uuid, backup_type,
-                                    rotation)
-            else:
-                raise exception.RotationRequiredForBackup()
-        else:
-            raise Exception(_('Image type not recognized %s') % image_type)
+            raise exception.RotationRequiredForBackup()
 
    def rotate_backups(self, context, instance_uuid, backup_type, rotation):
        """Delete excess backups associated to an instance.
@@ -691,7 +730,7 @@
        for i in xrange(max_tries):
            instance_ref = self.db.instance_get(context, instance_id)
            instance_id = instance_ref["id"]
-            instance_state = instance_ref["state"]
+            instance_state = instance_ref["power_state"]
            expected_state = power_state.RUNNING
 
            if instance_state != expected_state:
@@ -726,7 +765,7 @@
        context = context.elevated()
        instance_ref = self.db.instance_get(context, instance_id)
        instance_id = instance_ref['id']
-        instance_state = instance_ref['state']
+        instance_state = instance_ref['power_state']
        expected_state = power_state.RUNNING
        if instance_state != expected_state:
            LOG.warn(_('trying to inject a file into a non-running '
@@ -744,7 +783,7 @@
        context = context.elevated()
        instance_ref = self.db.instance_get(context, instance_id)
        instance_id = instance_ref['id']
-        instance_state = instance_ref['state']
+        instance_state = instance_ref['power_state']
        expected_state = power_state.RUNNING
        if instance_state != expected_state:
            LOG.warn(_('trying to update agent on a non-running '
@@ -759,40 +798,41 @@
    @checks_instance_lock
    def rescue_instance(self, context, instance_id):
        """Rescue an instance on this host."""
-        context = context.elevated()
-        instance_ref = self.db.instance_get(context, instance_id)
        LOG.audit(_('instance %s: rescuing'), instance_id, context=context)
-        self.db.instance_set_state(context,
-                                   instance_id,
-                                   power_state.NOSTATE,
-                                   'rescuing')
-        _update_state = lambda result: self._update_state_callback(
-            self, context, instance_id, result)
+        context = context.elevated()
+
+        instance_ref = self.db.instance_get(context, instance_id)
        network_info = self._get_instance_nw_info(context, instance_ref)
-        self.driver.rescue(context, instance_ref, _update_state, network_info)
-        self._update_state(context, instance_id)
+
+        # NOTE(blamar): None of the virt drivers use the 'callback' param
+        self.driver.rescue(context, instance_ref, None, network_info)
+
+        current_power_state = self._get_power_state(context, instance_ref)
+        self._instance_update(context,
+                              instance_id,
+                              vm_state=vm_states.RESCUED,
+                              task_state=None,
+                              power_state=current_power_state)
 
    @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())
    @checks_instance_lock
    def unrescue_instance(self, context, instance_id):
        """Rescue an instance on this host."""
-        context = context.elevated()
-        instance_ref = self.db.instance_get(context, instance_id)
        LOG.audit(_('instance %s: unrescuing'), instance_id, context=context)
-        self.db.instance_set_state(context,
-                                   instance_id,
-                                   power_state.NOSTATE,
+        context = context.elevated()
+
+        instance_ref = self.db.instance_get(context, instance_id)
785 'unrescuing')
786 _update_state = lambda result: self._update_state_callback(
787 self, context, instance_id, result)
788 network_info = self._get_instance_nw_info(context, instance_ref)825 network_info = self._get_instance_nw_info(context, instance_ref)
789 self.driver.unrescue(instance_ref, _update_state, network_info)826
790 self._update_state(context, instance_id)827 # NOTE(blamar): None of the virt drivers use the 'callback' param
791828 self.driver.unrescue(instance_ref, None, network_info)
792 @staticmethod829
793 def _update_state_callback(self, context, instance_id, result):830 current_power_state = self._get_power_state(context, instance_ref)
794 """Update instance state when async task completes."""831 self._instance_update(context,
795 self._update_state(context, instance_id)832 instance_id,
833 vm_state=vm_states.ACTIVE,
834 task_state=None,
835 power_state=current_power_state)
796836
797 @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())837 @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())
798 @checks_instance_lock838 @checks_instance_lock
@@ -851,11 +891,12 @@
851891
852 # Just roll back the record. There's no need to resize down since892 # Just roll back the record. There's no need to resize down since
853 # the 'old' VM already has the preferred attributes893 # the 'old' VM already has the preferred attributes
854 self.db.instance_update(context, instance_ref['uuid'],894 self._instance_update(context,
855 dict(memory_mb=instance_type['memory_mb'],895 instance_ref["uuid"],
856 vcpus=instance_type['vcpus'],896 memory_mb=instance_type['memory_mb'],
857 local_gb=instance_type['local_gb'],897 vcpus=instance_type['vcpus'],
858 instance_type_id=instance_type['id']))898 local_gb=instance_type['local_gb'],
899 instance_type_id=instance_type['id'])
859900
860 self.driver.revert_migration(instance_ref)901 self.driver.revert_migration(instance_ref)
861 self.db.migration_update(context, migration_id,902 self.db.migration_update(context, migration_id,
@@ -882,8 +923,11 @@
882 instance_ref = self.db.instance_get_by_uuid(context, instance_id)923 instance_ref = self.db.instance_get_by_uuid(context, instance_id)
883924
884 if instance_ref['host'] == FLAGS.host:925 if instance_ref['host'] == FLAGS.host:
885 raise exception.Error(_(926 self._instance_update(context,
886 'Migration error: destination same as source!'))927 instance_id,
928 vm_state=vm_states.ERROR)
929 msg = _('Migration error: destination same as source!')
930 raise exception.Error(msg)
887931
888 old_instance_type = self.db.instance_type_get(context,932 old_instance_type = self.db.instance_type_get(context,
889 instance_ref['instance_type_id'])933 instance_ref['instance_type_id'])
@@ -977,6 +1021,11 @@
977 self.driver.finish_migration(context, instance_ref, disk_info,1021 self.driver.finish_migration(context, instance_ref, disk_info,
978 network_info, resize_instance)1022 network_info, resize_instance)
9791023
1024 self._instance_update(context,
1025 instance_id,
1026 vm_state=vm_states.ACTIVE,
1027 task_state=task_states.RESIZE_VERIFY)
1028
980 self.db.migration_update(context, migration_id,1029 self.db.migration_update(context, migration_id,
981 {'status': 'finished', })1030 {'status': 'finished', })
9821031
@@ -1008,35 +1057,35 @@
1008 @checks_instance_lock1057 @checks_instance_lock
1009 def pause_instance(self, context, instance_id):1058 def pause_instance(self, context, instance_id):
1010 """Pause an instance on this host."""1059 """Pause an instance on this host."""
1011 context = context.elevated()
1012 instance_ref = self.db.instance_get(context, instance_id)
1013 LOG.audit(_('instance %s: pausing'), instance_id, context=context)1060 LOG.audit(_('instance %s: pausing'), instance_id, context=context)
1014 self.db.instance_set_state(context,1061 context = context.elevated()
1015 instance_id,1062
1016 power_state.NOSTATE,1063 instance_ref = self.db.instance_get(context, instance_id)
1017 'pausing')1064 self.driver.pause(instance_ref, lambda result: None)
1018 self.driver.pause(instance_ref,1065
1019 lambda result: self._update_state_callback(self,1066 current_power_state = self._get_power_state(context, instance_ref)
1020 context,1067 self._instance_update(context,
1021 instance_id,1068 instance_id,
1022 result))1069 power_state=current_power_state,
1070 vm_state=vm_states.PAUSED,
1071 task_state=None)
10231072
1024 @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())1073 @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())
1025 @checks_instance_lock1074 @checks_instance_lock
1026 def unpause_instance(self, context, instance_id):1075 def unpause_instance(self, context, instance_id):
1027 """Unpause a paused instance on this host."""1076 """Unpause a paused instance on this host."""
1028 context = context.elevated()
1029 instance_ref = self.db.instance_get(context, instance_id)
1030 LOG.audit(_('instance %s: unpausing'), instance_id, context=context)1077 LOG.audit(_('instance %s: unpausing'), instance_id, context=context)
1031 self.db.instance_set_state(context,1078 context = context.elevated()
1032 instance_id,1079
1033 power_state.NOSTATE,1080 instance_ref = self.db.instance_get(context, instance_id)
1034 'unpausing')1081 self.driver.unpause(instance_ref, lambda result: None)
1035 self.driver.unpause(instance_ref,1082
1036 lambda result: self._update_state_callback(self,1083 current_power_state = self._get_power_state(context, instance_ref)
1037 context,1084 self._instance_update(context,
1038 instance_id,1085 instance_id,
1039 result))1086 power_state=current_power_state,
1087 vm_state=vm_states.ACTIVE,
1088 task_state=None)
10401089
1041 @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())1090 @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())
1042 def host_power_action(self, context, host=None, action=None):1091 def host_power_action(self, context, host=None, action=None):
@@ -1052,7 +1101,7 @@
1052 def get_diagnostics(self, context, instance_id):1101 def get_diagnostics(self, context, instance_id):
1053 """Retrieve diagnostics for an instance on this host."""1102 """Retrieve diagnostics for an instance on this host."""
1054 instance_ref = self.db.instance_get(context, instance_id)1103 instance_ref = self.db.instance_get(context, instance_id)
1055 if instance_ref["state"] == power_state.RUNNING:1104 if instance_ref["power_state"] == power_state.RUNNING:
1056 LOG.audit(_("instance %s: retrieving diagnostics"), instance_id,1105 LOG.audit(_("instance %s: retrieving diagnostics"), instance_id,
1057 context=context)1106 context=context)
1058 return self.driver.get_diagnostics(instance_ref)1107 return self.driver.get_diagnostics(instance_ref)
@@ -1061,33 +1110,35 @@
1061 @checks_instance_lock1110 @checks_instance_lock
1062 def suspend_instance(self, context, instance_id):1111 def suspend_instance(self, context, instance_id):
1063 """Suspend the given instance."""1112 """Suspend the given instance."""
1064 context = context.elevated()
1065 instance_ref = self.db.instance_get(context, instance_id)
1066 LOG.audit(_('instance %s: suspending'), instance_id, context=context)1113 LOG.audit(_('instance %s: suspending'), instance_id, context=context)
1067 self.db.instance_set_state(context, instance_id,1114 context = context.elevated()
1068 power_state.NOSTATE,1115
1069 'suspending')1116 instance_ref = self.db.instance_get(context, instance_id)
1070 self.driver.suspend(instance_ref,1117 self.driver.suspend(instance_ref, lambda result: None)
1071 lambda result: self._update_state_callback(self,1118
1072 context,1119 current_power_state = self._get_power_state(context, instance_ref)
1073 instance_id,1120 self._instance_update(context,
1074 result))1121 instance_id,
1122 power_state=current_power_state,
1123 vm_state=vm_states.SUSPENDED,
1124 task_state=None)
10751125
1076 @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())1126 @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())
1077 @checks_instance_lock1127 @checks_instance_lock
1078 def resume_instance(self, context, instance_id):1128 def resume_instance(self, context, instance_id):
1079 """Resume the given suspended instance."""1129 """Resume the given suspended instance."""
1080 context = context.elevated()
1081 instance_ref = self.db.instance_get(context, instance_id)
1082 LOG.audit(_('instance %s: resuming'), instance_id, context=context)1130 LOG.audit(_('instance %s: resuming'), instance_id, context=context)
1083 self.db.instance_set_state(context, instance_id,1131 context = context.elevated()
1084 power_state.NOSTATE,1132
1085 'resuming')1133 instance_ref = self.db.instance_get(context, instance_id)
1086 self.driver.resume(instance_ref,1134 self.driver.resume(instance_ref, lambda result: None)
1087 lambda result: self._update_state_callback(self,1135
1088 context,1136 current_power_state = self._get_power_state(context, instance_ref)
1089 instance_id,1137 self._instance_update(context,
1090 result))1138 instance_id,
1139 power_state=current_power_state,
1140 vm_state=vm_states.ACTIVE,
1141 task_state=None)
10911142
1092 @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())1143 @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())
1093 def lock_instance(self, context, instance_id):1144 def lock_instance(self, context, instance_id):
@@ -1498,11 +1549,14 @@
1498 'block_migration': block_migration}})1549 'block_migration': block_migration}})
14991550
1500 # Restore instance state1551 # Restore instance state
1501 self.db.instance_update(ctxt,1552 current_power_state = self._get_power_state(ctxt, instance_ref)
1502 instance_ref['id'],1553 self._instance_update(ctxt,
1503 {'state_description': 'running',1554 instance_ref["id"],
1504 'state': power_state.RUNNING,1555 host=dest,
1505 'host': dest})1556 power_state=current_power_state,
1557 vm_state=vm_states.ACTIVE,
1558 task_state=None)
1559
1506 # Restore volume state1560 # Restore volume state
1507 for volume_ref in instance_ref['volumes']:1561 for volume_ref in instance_ref['volumes']:
1508 volume_id = volume_ref['id']1562 volume_id = volume_ref['id']
@@ -1548,11 +1602,11 @@
1548 This param specifies destination host.1602 This param specifies destination host.
1549 """1603 """
1550 host = instance_ref['host']1604 host = instance_ref['host']
1551 self.db.instance_update(context,1605 self._instance_update(context,
1552 instance_ref['id'],1606 instance_ref['id'],
1553 {'state_description': 'running',1607 host=host,
1554 'state': power_state.RUNNING,1608 vm_state=vm_states.ACTIVE,
1555 'host': host})1609 task_state=None)
15561610
1557 for volume_ref in instance_ref['volumes']:1611 for volume_ref in instance_ref['volumes']:
1558 volume_id = volume_ref['id']1612 volume_id = volume_ref['id']
@@ -1600,10 +1654,9 @@
1600 error_list.append(ex)1654 error_list.append(ex)
16011655
1602 try:1656 try:
1603 self._poll_instance_states(context)1657 self._sync_power_states(context)
1604 except Exception as ex:1658 except Exception as ex:
1605 LOG.warning(_("Error during instance poll: %s"),1659 LOG.warning(_("Error during power_state sync: %s"), unicode(ex))
1606 unicode(ex))
1607 error_list.append(ex)1660 error_list.append(ex)
16081661
1609 return error_list1662 return error_list
@@ -1618,68 +1671,40 @@
1618 self.update_service_capabilities(1671 self.update_service_capabilities(
1619 self.driver.get_host_stats(refresh=True))1672 self.driver.get_host_stats(refresh=True))
16201673
1621 def _poll_instance_states(self, context):1674 def _sync_power_states(self, context):
1675 """Align power states between the database and the hypervisor.
1676
1677 The hypervisor is authoritative for the power_state data, so we
1678 simply loop over all known instances for this host and update the
1679 power_state according to the hypervisor. If the instance is not found
1680 then it will be set to power_state.NOSTATE, because it doesn't exist
1681 on the hypervisor.
1682
1683 """
1622 vm_instances = self.driver.list_instances_detail()1684 vm_instances = self.driver.list_instances_detail()
1623 vm_instances = dict((vm.name, vm) for vm in vm_instances)1685 vm_instances = dict((vm.name, vm) for vm in vm_instances)
1624
1625 # Keep a list of VMs not in the DB, cross them off as we find them
1626 vms_not_found_in_db = list(vm_instances.keys())
1627
1628 db_instances = self.db.instance_get_all_by_host(context, self.host)1686 db_instances = self.db.instance_get_all_by_host(context, self.host)
16291687
1688 num_vm_instances = len(vm_instances)
1689 num_db_instances = len(db_instances)
1690
1691 if num_vm_instances != num_db_instances:
1692 LOG.info(_("Found %(num_db_instances)s in the database and "
1693 "%(num_vm_instances)s on the hypervisor.") % locals())
1694
1630 for db_instance in db_instances:1695 for db_instance in db_instances:
1631 name = db_instance['name']1696 name = db_instance["name"]
1632 db_state = db_instance['state']1697 db_power_state = db_instance['power_state']
1633 vm_instance = vm_instances.get(name)1698 vm_instance = vm_instances.get(name)
16341699
1635 if vm_instance is None:1700 if vm_instance is None:
1636 # NOTE(justinsb): We have to be very careful here, because a1701 vm_power_state = power_state.NOSTATE
1637 # concurrent operation could be in progress (e.g. a spawn)
1638 if db_state == power_state.BUILDING:
1639 # TODO(justinsb): This does mean that if we crash during a
1640 # spawn, the machine will never leave the spawning state,
1641 # but this is just the way nova is; this function isn't
1642 # trying to correct that problem.
1643 # We could have a separate task to correct this error.
1644 # TODO(justinsb): What happens during a live migration?
1645 LOG.info(_("Found instance '%(name)s' in DB but no VM. "
1646 "State=%(db_state)s, so assuming spawn is in "
1647 "progress.") % locals())
1648 vm_state = db_state
1649 else:
1650 LOG.info(_("Found instance '%(name)s' in DB but no VM. "
1651 "State=%(db_state)s, so setting state to "
1652 "shutoff.") % locals())
1653 vm_state = power_state.SHUTOFF
1654 if db_instance['state_description'] == 'stopping':
1655 self.db.instance_stop(context, db_instance['id'])
1656 continue
1657 else:1702 else:
1658 vm_state = vm_instance.state1703 vm_power_state = vm_instance.state
1659 vms_not_found_in_db.remove(name)
16601704
1661 if (db_instance['state_description'] in ['migrating', 'stopping']):1705 if vm_power_state == db_power_state:
1662 # A situation which db record exists, but no instance"
1663 # sometimes occurs while live-migration at src compute,
1664 # this case should be ignored.
1665 LOG.debug(_("Ignoring %(name)s, as it's currently being "
1666 "migrated.") % locals())
1667 continue1706 continue
16681707
1669 if vm_state != db_state:1708 self._instance_update(context,
1670 LOG.info(_("DB/VM state mismatch. Changing state from "1709 db_instance["id"],
1671 "'%(db_state)s' to '%(vm_state)s'") % locals())1710 power_state=vm_power_state)
1672 self._update_state(context, db_instance['id'], vm_state)
1673
1674 # NOTE(justinsb): We no longer auto-remove SHUTOFF instances
1675 # It's quite hard to get them back when we do.
1676
1677 # Are there VMs not in the DB?
1678 for vm_not_found_in_db in vms_not_found_in_db:
1679 name = vm_not_found_in_db
1680
1681 # We only care about instances that compute *should* know about
1682 if name.startswith("instance-"):
1683 # TODO(justinsb): What to do here? Adopt it? Shut it down?
1684 LOG.warning(_("Found VM not in DB: '%(name)s'. Ignoring")
1685 % locals())
16861711
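The new _sync_power_states() above boils down to a small reconciliation loop. As a self-contained sketch of that logic (sync_power_states, update_fn, and the literal state values below are hypothetical stand-ins for the manager method, ComputeManager._instance_update, and nova.compute.power_state, not code from this branch):

    NOSTATE = 0x00  # stand-in for nova.compute.power_state.NOSTATE
    RUNNING = 0x01  # stand-in for nova.compute.power_state.RUNNING

    def sync_power_states(vm_instances, db_instances, update_fn):
        for db_instance in db_instances:
            # The hypervisor is authoritative; a VM missing from the
            # hypervisor is recorded as NOSTATE, not deleted.
            vm_power_state = vm_instances.get(db_instance['name'], NOSTATE)
            if vm_power_state == db_instance['power_state']:
                continue  # already in sync, skip the database write
            update_fn(db_instance['id'], vm_power_state)

    # instance-2 has vanished from the hypervisor, so it drops to NOSTATE:
    updates = []
    sync_power_states({'instance-1': RUNNING},
                      [{'name': 'instance-1', 'id': 1, 'power_state': RUNNING},
                       {'name': 'instance-2', 'id': 2, 'power_state': RUNNING}],
                      lambda instance_id, state: updates.append((instance_id, state)))
    assert updates == [(2, NOSTATE)]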
=== added file 'nova/compute/task_states.py'
--- nova/compute/task_states.py	1970-01-01 00:00:00 +0000
+++ nova/compute/task_states.py	2011-08-31 14:15:31 +0000
@@ -0,0 +1,59 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright 2010 OpenStack LLC.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""Possible task states for instances.
+
+Compute instance task states represent what is happening to the instance at the
+current moment. These tasks can be generic, such as 'spawning', or specific,
+such as 'block_device_mapping'. These task states allow for a better view into
+what an instance is doing and should be displayed to users/administrators as
+necessary.
+
+"""
+
+SCHEDULING = 'scheduling'
+BLOCK_DEVICE_MAPPING = 'block_device_mapping'
+NETWORKING = 'networking'
+SPAWNING = 'spawning'
+
+IMAGE_SNAPSHOT = 'image_snapshot'
+IMAGE_BACKUP = 'image_backup'
+
+UPDATING_PASSWORD = 'updating_password'
+
+RESIZE_PREP = 'resize_prep'
+RESIZE_MIGRATING = 'resize_migrating'
+RESIZE_MIGRATED = 'resize_migrated'
+RESIZE_FINISH = 'resize_finish'
+RESIZE_REVERTING = 'resize_reverting'
+RESIZE_CONFIRMING = 'resize_confirming'
+RESIZE_VERIFY = 'resize_verify'
+
+REBUILDING = 'rebuilding'
+
+REBOOTING = 'rebooting'
+PAUSING = 'pausing'
+UNPAUSING = 'unpausing'
+SUSPENDING = 'suspending'
+RESUMING = 'resuming'
+
+RESCUING = 'rescuing'
+UNRESCUING = 'unrescuing'
+
+DELETING = 'deleting'
+STOPPING = 'stopping'
+STARTING = 'starting'
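The manager changes above use these constants in a set-work-clear pattern: write a task state before a long-running operation and clear it afterwards. A minimal sketch of that pattern (instance_update here is a hypothetical stand-in for ComputeManager._instance_update, and do_snapshot for the driver call):

    from nova.compute import task_states

    def snapshot_with_task_state(instance_update, context, instance_id,
                                 do_snapshot):
        # Mark the instance busy before the long-running work starts...
        instance_update(context, instance_id,
                        task_state=task_states.IMAGE_SNAPSHOT)
        do_snapshot()
        # ...then clear the task state so the instance reads as idle again.
        instance_update(context, instance_id, task_state=None)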
=== added file 'nova/compute/vm_states.py'
--- nova/compute/vm_states.py	1970-01-01 00:00:00 +0000
+++ nova/compute/vm_states.py	2011-08-31 14:15:31 +0000
@@ -0,0 +1,39 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright 2010 OpenStack LLC.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""Possible vm states for instances.
+
+Compute instance vm states represent the state of an instance as it pertains to
+a user or administrator. When combined with task states (task_states.py), a
+better picture can be formed regarding the instance's health.
+
+"""
+
+ACTIVE = 'active'
+BUILDING = 'building'
+REBUILDING = 'rebuilding'
+
+PAUSED = 'paused'
+SUSPENDED = 'suspended'
+RESCUED = 'rescued'
+DELETED = 'deleted'
+STOPPED = 'stopped'
+
+MIGRATING = 'migrating'
+RESIZING = 'resizing'
+
+ERROR = 'error'
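Taken together, the two modules describe an instance as a (vm_state, task_state) pair, and the OpenStack API collapses that pair into a single status string. The table below is an illustrative reconstruction from the TestServerStatus assertions further down, not the API layer's actual code:

    from nova.compute import task_states
    from nova.compute import vm_states

    # Hypothetical (vm_state, task_state) -> API status table, rebuilt
    # from the test expectations in this branch.
    _STATUS = {
        (vm_states.ACTIVE, None): 'ACTIVE',
        (vm_states.ACTIVE, task_states.REBOOTING): 'REBOOT',
        (vm_states.ACTIVE, task_states.UPDATING_PASSWORD): 'PASSWORD',
        (vm_states.ACTIVE, task_states.RESIZE_VERIFY): 'VERIFY_RESIZE',
        (vm_states.BUILDING, None): 'BUILD',
        (vm_states.REBUILDING, None): 'REBUILD',
        (vm_states.RESIZING, None): 'RESIZE',
        (vm_states.STOPPED, None): 'STOPPED',
        (vm_states.ERROR, None): 'ERROR',
    }

    def status_from_state(vm_state, task_state=None):
        # Fall back to the task-independent entry when the exact pair
        # has no dedicated status.
        return _STATUS.get((vm_state, task_state),
                           _STATUS.get((vm_state, None), 'UNKNOWN'))

    assert status_from_state(vm_states.ACTIVE,
                             task_states.REBOOTING) == 'REBOOT'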
=== modified file 'nova/db/sqlalchemy/api.py'
--- nova/db/sqlalchemy/api.py	2011-08-26 02:18:46 +0000
+++ nova/db/sqlalchemy/api.py	2011-08-31 14:15:31 +0000
@@ -28,6 +28,7 @@
 from nova import ipv6
 from nova import utils
 from nova import log as logging
+from nova.compute import vm_states
 from nova.db.sqlalchemy import models
 from nova.db.sqlalchemy.session import get_session
 from sqlalchemy import or_
@@ -1102,12 +1103,11 @@
 def instance_stop(context, instance_id):
     session = get_session()
     with session.begin():
-        from nova.compute import power_state
         session.query(models.Instance).\
                 filter_by(id=instance_id).\
                 update({'host': None,
-                        'state': power_state.SHUTOFF,
-                        'state_description': 'stopped',
+                        'vm_state': vm_states.STOPPED,
+                        'task_state': None,
                         'updated_at': literal_column('updated_at')})
         session.query(models.SecurityGroupInstanceAssociation).\
                 filter_by(instance_id=instance_id).\
@@ -1266,7 +1266,7 @@
     # Filters for exact matches that we can do along with the SQL query...
     # For other filters that don't match this, we will do regexp matching
     exact_match_filter_names = ['project_id', 'user_id', 'image_ref',
-                                'state', 'instance_type_id', 'deleted']
+                                'vm_state', 'instance_type_id', 'deleted']
 
     query_filters = [key for key in filters.iterkeys()
                      if key in exact_match_filter_names]
@@ -1484,18 +1484,6 @@
     return fixed_ip_refs[0].floating_ips[0]['address']
 
 
-@require_admin_context
-def instance_set_state(context, instance_id, state, description=None):
-    # TODO(devcamcar): Move this out of models and into driver
-    from nova.compute import power_state
-    if not description:
-        description = power_state.name(state)
-    db.instance_update(context,
-                       instance_id,
-                       {'state': state,
-                        'state_description': description})
-
-
 @require_context
 def instance_update(context, instance_id, values):
     session = get_session()
 
=== added file 'nova/db/sqlalchemy/migrate_repo/versions/044_update_instance_states.py'
--- nova/db/sqlalchemy/migrate_repo/versions/044_update_instance_states.py	1970-01-01 00:00:00 +0000
+++ nova/db/sqlalchemy/migrate_repo/versions/044_update_instance_states.py	2011-08-31 14:15:31 +0000
@@ -0,0 +1,138 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright 2010 OpenStack LLC.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import sqlalchemy
+from sqlalchemy import MetaData, Table, Column, String
+
+from nova.compute import task_states
+from nova.compute import vm_states
+
+
+meta = MetaData()
+
+
+c_task_state = Column('task_state',
+                      String(length=255, convert_unicode=False,
+                             assert_unicode=None, unicode_error=None,
+                             _warn_on_bytestring=False),
+                      nullable=True)
+
+
+_upgrade_translations = {
+    "stopping": {
+        "state_description": vm_states.ACTIVE,
+        "task_state": task_states.STOPPING,
+    },
+    "stopped": {
+        "state_description": vm_states.STOPPED,
+        "task_state": None,
+    },
+    "terminated": {
+        "state_description": vm_states.DELETED,
+        "task_state": None,
+    },
+    "terminating": {
+        "state_description": vm_states.ACTIVE,
+        "task_state": task_states.DELETING,
+    },
+    "running": {
+        "state_description": vm_states.ACTIVE,
+        "task_state": None,
+    },
+    "scheduling": {
+        "state_description": vm_states.BUILDING,
+        "task_state": task_states.SCHEDULING,
+    },
+    "migrating": {
+        "state_description": vm_states.MIGRATING,
+        "task_state": None,
+    },
+    "pending": {
+        "state_description": vm_states.BUILDING,
+        "task_state": task_states.SCHEDULING,
+    },
+}
+
+
+_downgrade_translations = {
+    vm_states.ACTIVE: {
+        None: "running",
+        task_states.DELETING: "terminating",
+        task_states.STOPPING: "stopping",
+    },
+    vm_states.BUILDING: {
+        None: "pending",
+        task_states.SCHEDULING: "scheduling",
+    },
+    vm_states.STOPPED: {
+        None: "stopped",
+    },
+    vm_states.REBUILDING: {
+        None: "pending",
+    },
+    vm_states.DELETED: {
+        None: "terminated",
+    },
+    vm_states.MIGRATING: {
+        None: "migrating",
+    },
+}
+
+
+def upgrade(migrate_engine):
+    meta.bind = migrate_engine
+
+    instance_table = Table('instances', meta, autoload=True,
+                           autoload_with=migrate_engine)
+
+    c_state = instance_table.c.state
+    c_state.alter(name='power_state')
+
+    c_vm_state = instance_table.c.state_description
+    c_vm_state.alter(name='vm_state')
+
+    instance_table.create_column(c_task_state)
+
+    for old_state, values in _upgrade_translations.iteritems():
+        instance_table.update().\
+                       values(**values).\
+                       where(c_vm_state == old_state).\
+                       execute()
+
+
+def downgrade(migrate_engine):
+    meta.bind = migrate_engine
+
+    instance_table = Table('instances', meta, autoload=True,
+                           autoload_with=migrate_engine)
+
+    c_task_state = instance_table.c.task_state
+
+    c_state = instance_table.c.power_state
+    c_state.alter(name='state')
+
+    c_vm_state = instance_table.c.vm_state
+    c_vm_state.alter(name='state_description')
+
+    for old_vm_state, old_task_states in _downgrade_translations.iteritems():
+        for old_task_state, new_state_desc in old_task_states.iteritems():
+            instance_table.update().\
+                           where(c_task_state == old_task_state).\
+                           where(c_vm_state == old_vm_state).\
+                           values(vm_state=new_state_desc).\
+                           execute()
+
+    instance_table.drop_column('task_state')
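As a sanity check, the translation tables above can be exercised without a database. Note the upgrade is intentionally lossy in one spot: both 'pending' and 'scheduling' map to BUILDING plus a 'scheduling' task, so a legacy 'pending' row would come back as 'scheduling' after a downgrade. A round trip for 'terminating' (hypothetical usage of the module's dicts, not part of the migration itself):

    up = _upgrade_translations['terminating']
    assert up == {'state_description': 'active', 'task_state': 'deleting'}

    # Downgrading the resulting (vm_state, task_state) pair recovers the
    # original legacy state string.
    down = _downgrade_translations[up['state_description']][up['task_state']]
    assert down == 'terminating'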
=== modified file 'nova/db/sqlalchemy/models.py'
--- nova/db/sqlalchemy/models.py	2011-08-26 01:38:35 +0000
+++ nova/db/sqlalchemy/models.py	2011-08-31 14:15:31 +0000
@@ -193,8 +193,9 @@
     key_name = Column(String(255))
     key_data = Column(Text)
 
-    state = Column(Integer)
-    state_description = Column(String(255))
+    power_state = Column(Integer)
+    vm_state = Column(String(255))
+    task_state = Column(String(255))
 
     memory_mb = Column(Integer)
     vcpus = Column(Integer)
@@ -238,17 +239,6 @@
     access_ip_v4 = Column(String(255))
     access_ip_v6 = Column(String(255))
 
-    # TODO(vish): see Ewan's email about state improvements, probably
-    #             should be in a driver base class or some such
-    #             vmstate_state = running, halted, suspended, paused
-    #             power_state = what we have
-    #             task_state = transitory and may trigger power state transition
-
-    #@validates('state')
-    #def validate_state(self, key, state):
-    #    assert(state in ['nostate', 'running', 'blocked', 'paused',
-    #                     'shutdown', 'shutoff', 'crashed'])
-
 
 class VirtualStorageArray(BASE, NovaBase):
     """
 
=== modified file 'nova/exception.py'
--- nova/exception.py	2011-08-26 01:38:35 +0000
+++ nova/exception.py	2011-08-31 14:15:31 +0000
@@ -61,7 +61,7 @@
         super(ApiError, self).__init__(outstr)
 
 
-class BuildInProgress(Error):
+class RebuildRequiresActiveInstance(Error):
     pass
 
 
=== modified file 'nova/scheduler/driver.py'
--- nova/scheduler/driver.py	2011-07-19 13:22:38 +0000
+++ nova/scheduler/driver.py	2011-08-31 14:15:31 +0000
@@ -30,6 +30,7 @@
 from nova import rpc
 from nova import utils
 from nova.compute import power_state
+from nova.compute import vm_states
 from nova.api.ec2 import ec2utils
 
 
@@ -104,10 +105,8 @@
                         dest, block_migration)
 
         # Changing instance_state.
-        db.instance_set_state(context,
-                              instance_id,
-                              power_state.PAUSED,
-                              'migrating')
+        values = {"vm_state": vm_states.MIGRATING}
+        db.instance_update(context, instance_id, values)
 
         # Changing volume state
         for volume_ref in instance_ref['volumes']:
@@ -129,8 +128,7 @@
         """
 
         # Checking instance is running.
-        if (power_state.RUNNING != instance_ref['state'] or \
-           'running' != instance_ref['state_description']):
+        if instance_ref['power_state'] != power_state.RUNNING:
             instance_id = ec2utils.id_to_ec2_id(instance_ref['id'])
             raise exception.InstanceNotRunning(instance_id=instance_id)
 
=== modified file 'nova/tests/api/openstack/test_server_actions.py'
--- nova/tests/api/openstack/test_server_actions.py	2011-08-24 15:11:20 +0000
+++ nova/tests/api/openstack/test_server_actions.py	2011-08-31 14:15:31 +0000
@@ -10,8 +10,8 @@
 from nova import exception
 from nova import flags
 from nova.api.openstack import create_instance_helper
+from nova.compute import vm_states
 from nova.compute import instance_types
-from nova.compute import power_state
 import nova.db.api
 from nova import test
 from nova.tests.api.openstack import common
@@ -35,17 +35,19 @@
     return _return_server
 
 
-def return_server_with_power_state(power_state):
-    return return_server_with_attributes(power_state=power_state)
-
-
-def return_server_with_uuid_and_power_state(power_state):
-    return return_server_with_power_state(power_state)
-
-
-def stub_instance(id, power_state=0, metadata=None,
-                  image_ref="10", flavor_id="1", name=None):
-
+def return_server_with_state(vm_state, task_state=None):
+    return return_server_with_attributes(vm_state=vm_state,
+                                         task_state=task_state)
+
+
+def return_server_with_uuid_and_state(vm_state, task_state=None):
+    def _return_server(context, id):
+        return return_server_with_state(vm_state, task_state)
+    return _return_server
+
+
+def stub_instance(id, metadata=None, image_ref="10", flavor_id="1",
+                  name=None, vm_state=None, task_state=None):
     if metadata is not None:
         metadata_items = [{'key':k, 'value':v} for k, v in metadata.items()]
     else:
@@ -66,8 +68,8 @@
         "launch_index": 0,
         "key_name": "",
         "key_data": "",
-        "state": power_state,
-        "state_description": "",
+        "vm_state": vm_state or vm_states.ACTIVE,
+        "task_state": task_state,
         "memory_mb": 0,
         "vcpus": 0,
         "local_gb": 0,
@@ -175,11 +177,11 @@
             },
         }
 
-        state = power_state.BUILDING
-        new_return_server = return_server_with_power_state(state)
+        state = vm_states.BUILDING
+        new_return_server = return_server_with_state(state)
         self.stubs.Set(nova.db.api, 'instance_get', new_return_server)
         self.stubs.Set(nova.db, 'instance_get_by_uuid',
-                       return_server_with_uuid_and_power_state(state))
+                       return_server_with_uuid_and_state(state))
 
         req = webob.Request.blank('/v1.0/servers/1/action')
         req.method = 'POST'
@@ -242,19 +244,6 @@
         res = req.get_response(fakes.wsgi_app())
         self.assertEqual(res.status_int, 500)
 
-    def test_resized_server_has_correct_status(self):
-        req = self.webreq('/1', 'GET')
-
-        def fake_migration_get(*args):
-            return {}
-
-        self.stubs.Set(nova.db, 'migration_get_by_instance_and_status',
-                       fake_migration_get)
-        res = req.get_response(fakes.wsgi_app())
-        self.assertEqual(res.status_int, 200)
-        body = json.loads(res.body)
-        self.assertEqual(body['server']['status'], 'RESIZE-CONFIRM')
-
     def test_confirm_resize_server(self):
         req = self.webreq('/1/action', 'POST', dict(confirmResize=None))
 
@@ -642,11 +631,11 @@
             },
         }
 
-        state = power_state.BUILDING
-        new_return_server = return_server_with_power_state(state)
+        state = vm_states.BUILDING
+        new_return_server = return_server_with_state(state)
         self.stubs.Set(nova.db.api, 'instance_get', new_return_server)
         self.stubs.Set(nova.db, 'instance_get_by_uuid',
-                       return_server_with_uuid_and_power_state(state))
+                       return_server_with_uuid_and_state(state))
 
         req = webob.Request.blank('/v1.1/fake/servers/1/action')
         req.method = 'POST'
 
=== modified file 'nova/tests/api/openstack/test_servers.py'
--- nova/tests/api/openstack/test_servers.py	2011-08-24 16:12:11 +0000
+++ nova/tests/api/openstack/test_servers.py	2011-08-31 14:15:31 +0000
@@ -37,7 +37,8 @@
 from nova.api.openstack import xmlutil
 import nova.compute.api
 from nova.compute import instance_types
-from nova.compute import power_state
+from nova.compute import task_states
+from nova.compute import vm_states
 import nova.db.api
 import nova.scheduler.api
 from nova.db.sqlalchemy.models import Instance
@@ -91,15 +92,18 @@
     return _return_server
 
 
-def return_server_with_power_state(power_state):
+def return_server_with_state(vm_state, task_state=None):
     def _return_server(context, id):
-        return stub_instance(id, power_state=power_state)
+        return stub_instance(id, vm_state=vm_state, task_state=task_state)
     return _return_server
 
 
-def return_server_with_uuid_and_power_state(power_state):
+def return_server_with_uuid_and_state(vm_state, task_state):
     def _return_server(context, id):
-        return stub_instance(id, uuid=FAKE_UUID, power_state=power_state)
+        return stub_instance(id,
+                             uuid=FAKE_UUID,
+                             vm_state=vm_state,
+                             task_state=task_state)
     return _return_server
 
 
@@ -148,7 +152,8 @@
 
 
 def stub_instance(id, user_id='fake', project_id='fake', private_address=None,
-                  public_addresses=None, host=None, power_state=0,
+                  public_addresses=None, host=None,
+                  vm_state=None, task_state=None,
                   reservation_id="", uuid=FAKE_UUID, image_ref="10",
                   flavor_id="1", interfaces=None, name=None,
                   access_ipv4=None, access_ipv6=None):
@@ -184,8 +189,8 @@
         "launch_index": 0,
         "key_name": "",
         "key_data": "",
-        "state": power_state,
-        "state_description": "",
+        "vm_state": vm_state or vm_states.BUILDING,
+        "task_state": task_state,
         "memory_mb": 0,
         "vcpus": 0,
         "local_gb": 0,
@@ -494,7 +499,7 @@
             },
         ]
         new_return_server = return_server_with_attributes(
-            interfaces=interfaces, power_state=1)
+            interfaces=interfaces, vm_state=vm_states.ACTIVE)
         self.stubs.Set(nova.db.api, 'instance_get', new_return_server)
 
         req = webob.Request.blank('/v1.1/fake/servers/1')
@@ -587,8 +592,8 @@
             },
         ]
         new_return_server = return_server_with_attributes(
-            interfaces=interfaces, power_state=1, image_ref=image_ref,
-            flavor_id=flavor_id)
+            interfaces=interfaces, vm_state=vm_states.ACTIVE,
+            image_ref=image_ref, flavor_id=flavor_id)
         self.stubs.Set(nova.db.api, 'instance_get', new_return_server)
 
         req = webob.Request.blank('/v1.1/fake/servers/1')
@@ -1209,9 +1214,8 @@
     def test_get_servers_allows_status_v1_1(self):
         def fake_get_all(compute_self, context, search_opts=None):
             self.assertNotEqual(search_opts, None)
-            self.assertTrue('state' in search_opts)
-            self.assertEqual(set(search_opts['state']),
-                             set([power_state.RUNNING, power_state.BLOCKED]))
+            self.assertTrue('vm_state' in search_opts)
+            self.assertEqual(search_opts['vm_state'], vm_states.ACTIVE)
             return [stub_instance(100)]
 
         self.stubs.Set(nova.compute.API, 'get_all', fake_get_all)
@@ -1228,13 +1232,9 @@
 
     def test_get_servers_invalid_status_v1_1(self):
         """Test getting servers by invalid status"""
-
         self.flags(allow_admin_api=False)
-
         req = webob.Request.blank('/v1.1/fake/servers?status=running')
         res = req.get_response(fakes.wsgi_app())
-        # The following assert will fail if either of the asserts in
-        # fake_get_all() fail
         self.assertEqual(res.status_int, 400)
         self.assertTrue(res.body.find('Invalid server status') > -1)
 
@@ -1738,6 +1738,7 @@
         server = json.loads(res.body)['server']
         self.assertEqual(16, len(server['adminPass']))
         self.assertEqual(1, server['id'])
+        self.assertEqual("BUILD", server["status"])
         self.assertEqual(0, server['progress'])
         self.assertEqual('server_test', server['name'])
         self.assertEqual(expected_flavor, server['flavor'])
@@ -2467,23 +2468,51 @@
         self.assertEqual(res.status_int, 204)
         self.assertEqual(self.server_delete_called, True)
 
-    def test_shutdown_status(self):
-        new_server = return_server_with_power_state(power_state.SHUTDOWN)
-        self.stubs.Set(nova.db.api, 'instance_get', new_server)
-        req = webob.Request.blank('/v1.0/servers/1')
-        res = req.get_response(fakes.wsgi_app())
-        self.assertEqual(res.status_int, 200)
-        res_dict = json.loads(res.body)
-        self.assertEqual(res_dict['server']['status'], 'SHUTDOWN')
-
-    def test_shutoff_status(self):
-        new_server = return_server_with_power_state(power_state.SHUTOFF)
-        self.stubs.Set(nova.db.api, 'instance_get', new_server)
-        req = webob.Request.blank('/v1.0/servers/1')
-        res = req.get_response(fakes.wsgi_app())
-        self.assertEqual(res.status_int, 200)
-        res_dict = json.loads(res.body)
-        self.assertEqual(res_dict['server']['status'], 'SHUTOFF')
+
+class TestServerStatus(test.TestCase):
+
+    def _get_with_state(self, vm_state, task_state=None):
+        new_server = return_server_with_state(vm_state, task_state)
+        self.stubs.Set(nova.db.api, 'instance_get', new_server)
+        request = webob.Request.blank('/v1.0/servers/1')
+        response = request.get_response(fakes.wsgi_app())
+        self.assertEqual(response.status_int, 200)
+        return json.loads(response.body)
+
+    def test_active(self):
+        response = self._get_with_state(vm_states.ACTIVE)
+        self.assertEqual(response['server']['status'], 'ACTIVE')
+
+    def test_reboot(self):
+        response = self._get_with_state(vm_states.ACTIVE,
+                                        task_states.REBOOTING)
+        self.assertEqual(response['server']['status'], 'REBOOT')
+
+    def test_rebuild(self):
+        response = self._get_with_state(vm_states.REBUILDING)
+        self.assertEqual(response['server']['status'], 'REBUILD')
+
+    def test_rebuild_error(self):
+        response = self._get_with_state(vm_states.ERROR)
+        self.assertEqual(response['server']['status'], 'ERROR')
+
+    def test_resize(self):
+        response = self._get_with_state(vm_states.RESIZING)
+        self.assertEqual(response['server']['status'], 'RESIZE')
+
+    def test_verify_resize(self):
+        response = self._get_with_state(vm_states.ACTIVE,
+                                        task_states.RESIZE_VERIFY)
+        self.assertEqual(response['server']['status'], 'VERIFY_RESIZE')
+
+    def test_password_update(self):
+        response = self._get_with_state(vm_states.ACTIVE,
+                                        task_states.UPDATING_PASSWORD)
+        self.assertEqual(response['server']['status'], 'PASSWORD')
+
+    def test_stopped(self):
+        response = self._get_with_state(vm_states.STOPPED)
+        self.assertEqual(response['server']['status'], 'STOPPED')
 
 
 class TestServerCreateRequestXMLDeserializerV10(unittest.TestCase):
@@ -3536,8 +3565,8 @@
         "launch_index": 0,
         "key_name": "",
        "key_data": "",
-        "state": 0,
-        "state_description": "",
+        "vm_state": vm_states.BUILDING,
+        "task_state": None,
         "memory_mb": 0,
         "vcpus": 0,
         "local_gb": 0,
@@ -3682,7 +3711,7 @@
 
     def test_build_server_detail_active_status(self):
         #set the power state of the instance to running
-        self.instance['state'] = 1
+        self.instance['vm_state'] = vm_states.ACTIVE
         image_bookmark = "http://localhost/images/5"
         flavor_bookmark = "http://localhost/flavors/1"
         expected_server = {
 
=== modified file 'nova/tests/integrated/test_servers.py'
--- nova/tests/integrated/test_servers.py	2011-08-26 13:54:53 +0000
+++ nova/tests/integrated/test_servers.py	2011-08-31 14:15:31 +0000
@@ -28,6 +28,17 @@
 
 class ServersTest(integrated_helpers._IntegratedTestBase):
 
+    def _wait_for_creation(self, server):
+        retries = 0
+        while server['status'] == 'BUILD':
+            time.sleep(1)
+            server = self.api.get_server(server['id'])
+            print server
+            retries = retries + 1
+            if retries > 5:
+                break
+        return server
+
     def test_get_servers(self):
         """Simple check that listing servers works."""
         servers = self.api.get_servers()
@@ -36,9 +47,9 @@
 
     def test_create_and_delete_server(self):
         """Creates and deletes a server."""
+        self.flags(stub_network=True)
 
         # Create server
-
         # Build the server data gradually, checking errors along the way
         server = {}
         good_server = self._build_minimal_create_server_request()
@@ -91,19 +102,11 @@
         server_ids = [server['id'] for server in servers]
         self.assertTrue(created_server_id in server_ids)
 
-        # Wait (briefly) for creation
-        retries = 0
-        while found_server['status'] == 'build':
-            LOG.debug("found server: %s" % found_server)
-            time.sleep(1)
-            found_server = self.api.get_server(created_server_id)
-            retries = retries + 1
-            if retries > 5:
-                break
+        found_server = self._wait_for_creation(found_server)
 
         # It should be available...
         # TODO(justinsb): Mock doesn't yet do this...
-        #self.assertEqual('available', found_server['status'])
+        self.assertEqual('ACTIVE', found_server['status'])
         servers = self.api.get_servers(detail=True)
         for server in servers:
             self.assertTrue("image" in server)
@@ -181,6 +184,7 @@
 
     def test_create_and_rebuild_server(self):
         """Rebuild a server."""
+        self.flags(stub_network=True)
 
         # create a server with initially has no metadata
         server = self._build_minimal_create_server_request()
@@ -190,6 +194,8 @@
         self.assertTrue(created_server['id'])
         created_server_id = created_server['id']
 
+        created_server = self._wait_for_creation(created_server)
+
         # rebuild the server with metadata
         post = {}
         post['rebuild'] = {
@@ -212,6 +218,7 @@
 
     def test_create_and_rebuild_server_with_metadata(self):
         """Rebuild a server with metadata."""
+        self.flags(stub_network=True)
 
         # create a server with initially has no metadata
         server = self._build_minimal_create_server_request()
@@ -221,6 +228,8 @@
         self.assertTrue(created_server['id'])
         created_server_id = created_server['id']
 
+        created_server = self._wait_for_creation(created_server)
+
         # rebuild the server with metadata
         post = {}
         post['rebuild'] = {
@@ -248,6 +257,7 @@
 
     def test_create_and_rebuild_server_with_metadata_removal(self):
         """Rebuild a server with metadata."""
+        self.flags(stub_network=True)
 
         # create a server with initially has no metadata
         server = self._build_minimal_create_server_request()
@@ -264,6 +274,8 @@
         self.assertTrue(created_server['id'])
         created_server_id = created_server['id']
 
+        created_server = self._wait_for_creation(created_server)
+
         # rebuild the server with metadata
         post = {}
         post['rebuild'] = {
 
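The _wait_for_creation() helper added above polls once per second and gives up after five retries, with both limits hard-coded. A sketch of a more general variant (hypothetical, not part of this branch):

    import time

    def wait_for_status(get_server, server, done_states=('ACTIVE', 'ERROR'),
                        interval=1, max_retries=5):
        # Generalized _wait_for_creation: poll until the server reaches a
        # terminal status or the retry budget runs out.
        for _ in range(max_retries):
            if server['status'] in done_states:
                break
            time.sleep(interval)
            server = get_server(server['id'])
        return server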
=== modified file 'nova/tests/scheduler/test_scheduler.py'
--- nova/tests/scheduler/test_scheduler.py	2011-08-16 12:47:35 +0000
+++ nova/tests/scheduler/test_scheduler.py	2011-08-31 14:15:31 +0000
@@ -40,6 +40,7 @@
 from nova.scheduler import manager
 from nova.scheduler import multi
 from nova.compute import power_state
+from nova.compute import vm_states
 
 
 FLAGS = flags.FLAGS
@@ -94,6 +95,9 @@
         inst['vcpus'] = kwargs.get('vcpus', 1)
         inst['memory_mb'] = kwargs.get('memory_mb', 10)
         inst['local_gb'] = kwargs.get('local_gb', 20)
+        inst['vm_state'] = kwargs.get('vm_state', vm_states.ACTIVE)
+        inst['power_state'] = kwargs.get('power_state', power_state.RUNNING)
+        inst['task_state'] = kwargs.get('task_state', None)
         return db.instance_create(ctxt, inst)
 
     def test_fallback(self):
@@ -271,8 +275,9 @@
         inst['memory_mb'] = kwargs.get('memory_mb', 20)
         inst['local_gb'] = kwargs.get('local_gb', 30)
         inst['launched_on'] = kwargs.get('launghed_on', 'dummy')
-        inst['state_description'] = kwargs.get('state_description', 'running')
-        inst['state'] = kwargs.get('state', power_state.RUNNING)
+        inst['vm_state'] = kwargs.get('vm_state', vm_states.ACTIVE)
+        inst['task_state'] = kwargs.get('task_state', None)
+        inst['power_state'] = kwargs.get('power_state', power_state.RUNNING)
         return db.instance_create(self.context, inst)['id']
 
     def _create_volume(self):
@@ -664,14 +669,14 @@
                                  block_migration=False)
 
         i_ref = db.instance_get(self.context, instance_id)
-        self.assertTrue(i_ref['state_description'] == 'migrating')
+        self.assertTrue(i_ref['vm_state'] == vm_states.MIGRATING)
         db.instance_destroy(self.context, instance_id)
         db.volume_destroy(self.context, v_ref['id'])
 
     def test_live_migration_src_check_instance_not_running(self):
         """The instance given by instance_id is not running."""
 
-        instance_id = self._create_instance(state_description='migrating')
+        instance_id = self._create_instance(power_state=power_state.NOSTATE)
         i_ref = db.instance_get(self.context, instance_id)
 
         try:
 
=== modified file 'nova/tests/test_cloud.py'
--- nova/tests/test_cloud.py	2011-08-16 16:18:13 +0000
+++ nova/tests/test_cloud.py	2011-08-31 14:15:31 +0000
@@ -38,6 +38,7 @@
 from nova import utils
 from nova.api.ec2 import cloud
 from nova.api.ec2 import ec2utils
+from nova.compute import vm_states
 from nova.image import fake


@@ -1163,7 +1164,7 @@
         self.compute = self.start_service('compute')

     def _wait_for_state(self, ctxt, instance_id, predicate):
-        """Wait for an stopping instance to be a given state"""
+        """Wait for a stopped instance to be a given state"""
         id = ec2utils.ec2_id_to_id(instance_id)
         while True:
             info = self.cloud.compute_api.get(context=ctxt, instance_id=id)
@@ -1174,12 +1175,16 @@

     def _wait_for_running(self, instance_id):
         def is_running(info):
-            return info['state_description'] == 'running'
+            vm_state = info["vm_state"]
+            task_state = info["task_state"]
+            return vm_state == vm_states.ACTIVE and task_state == None
         self._wait_for_state(self.context, instance_id, is_running)

     def _wait_for_stopped(self, instance_id):
         def is_stopped(info):
-            return info['state_description'] == 'stopped'
+            vm_state = info["vm_state"]
+            task_state = info["task_state"]
+            return vm_state == vm_states.STOPPED and task_state == None
         self._wait_for_state(self.context, instance_id, is_stopped)

     def _wait_for_terminate(self, instance_id):
@@ -1562,7 +1567,7 @@
             'id': 0,
             'root_device_name': '/dev/sdh',
             'security_groups': [{'name': 'fake0'}, {'name': 'fake1'}],
-            'state_description': 'stopping',
+            'vm_state': vm_states.STOPPED,
             'instance_type': {'name': 'fake_type'},
             'kernel_id': 1,
             'ramdisk_id': 2,
@@ -1606,7 +1611,7 @@
         self.assertEqual(groupSet, expected_groupSet)
         self.assertEqual(get_attribute('instanceInitiatedShutdownBehavior'),
                          {'instance_id': 'i-12345678',
-                          'instanceInitiatedShutdownBehavior': 'stop'})
+                          'instanceInitiatedShutdownBehavior': 'stopped'})
         self.assertEqual(get_attribute('instanceType'),
                          {'instance_id': 'i-12345678',
                           'instanceType': 'fake_type'})

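The rewritten _wait_for_running/_wait_for_stopped helpers encode the key invariant of this branch: an instance has settled only when its vm_state matches the target and its task_state has returned to None. A generic sketch of that polling idiom, assuming a get_info callable that returns the same dict compute_api.get() returns in the tests above (the helper name and timeout are illustrative, not part of the branch):

    import time

    def wait_for_vm_state(get_info, desired_vm_state, timeout=60):
        """Poll until vm_state matches and no task is in flight."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            info = get_info()
            # Settled means: target vm_state reached, no pending task.
            if (info['vm_state'] == desired_vm_state
                    and info['task_state'] is None):
                return info
            time.sleep(1)
        raise AssertionError('timed out waiting for %s' % desired_vm_state)
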
=== modified file 'nova/tests/test_compute.py'
--- nova/tests/test_compute.py	2011-08-24 23:48:04 +0000
+++ nova/tests/test_compute.py	2011-08-31 14:15:31 +0000
@@ -24,6 +24,7 @@
 from nova.compute import instance_types
 from nova.compute import manager as compute_manager
 from nova.compute import power_state
+from nova.compute import vm_states
 from nova import context
 from nova import db
 from nova.db.sqlalchemy import models
@@ -763,8 +764,8 @@
                                'block_migration': False,
                                'disk': None}}).\
                             AndRaise(rpc.RemoteError('', '', ''))
-        dbmock.instance_update(c, i_ref['id'], {'state_description': 'running',
-                                                'state': power_state.RUNNING,
+        dbmock.instance_update(c, i_ref['id'], {'vm_state': vm_states.ACTIVE,
+                                                'task_state': None,
                                                 'host': i_ref['host']})
         for v in i_ref['volumes']:
             dbmock.volume_update(c, v['id'], {'status': 'in-use'})
@@ -795,8 +796,8 @@
                                'block_migration': False,
                                'disk': None}}).\
                             AndRaise(rpc.RemoteError('', '', ''))
-        dbmock.instance_update(c, i_ref['id'], {'state_description': 'running',
-                                                'state': power_state.RUNNING,
+        dbmock.instance_update(c, i_ref['id'], {'vm_state': vm_states.ACTIVE,
+                                                'task_state': None,
                                                 'host': i_ref['host']})

         self.compute.db = dbmock
@@ -841,8 +842,8 @@
         c = context.get_admin_context()
         instance_id = self._create_instance()
         i_ref = db.instance_get(c, instance_id)
-        db.instance_update(c, i_ref['id'], {'state_description': 'migrating',
-                                            'state': power_state.PAUSED})
+        db.instance_update(c, i_ref['id'], {'vm_state': vm_states.MIGRATING,
+                                            'power_state': power_state.PAUSED})
         v_ref = db.volume_create(c, {'size': 1, 'instance_id': instance_id})
         fix_addr = db.fixed_ip_create(c, {'address': '1.1.1.1',
                                           'instance_id': instance_id})
@@ -903,7 +904,7 @@
         instances = db.instance_get_all(context.get_admin_context())
         LOG.info(_("After force-killing instances: %s"), instances)
         self.assertEqual(len(instances), 1)
-        self.assertEqual(power_state.SHUTOFF, instances[0]['state'])
+        self.assertEqual(power_state.NOSTATE, instances[0]['power_state'])

     def test_get_all_by_name_regexp(self):
         """Test searching instances by name (display_name)"""
@@ -1323,25 +1324,28 @@
         """Test searching instances by state"""

         c = context.get_admin_context()
-        instance_id1 = self._create_instance({'state': power_state.SHUTDOWN})
+        instance_id1 = self._create_instance({
+            'power_state': power_state.SHUTDOWN,
+        })
         instance_id2 = self._create_instance({
             'id': 2,
-            'state': power_state.RUNNING})
+            'power_state': power_state.RUNNING,
+        })
         instance_id3 = self._create_instance({
             'id': 10,
-            'state': power_state.RUNNING})
-
+            'power_state': power_state.RUNNING,
+        })
         instances = self.compute_api.get_all(c,
-                search_opts={'state': power_state.SUSPENDED})
+                search_opts={'power_state': power_state.SUSPENDED})
         self.assertEqual(len(instances), 0)

         instances = self.compute_api.get_all(c,
-                search_opts={'state': power_state.SHUTDOWN})
+                search_opts={'power_state': power_state.SHUTDOWN})
         self.assertEqual(len(instances), 1)
         self.assertEqual(instances[0].id, instance_id1)

         instances = self.compute_api.get_all(c,
-                search_opts={'state': power_state.RUNNING})
+                search_opts={'power_state': power_state.RUNNING})
         self.assertEqual(len(instances), 2)
         instance_ids = [instance.id for instance in instances]
         self.assertTrue(instance_id2 in instance_ids)
@@ -1349,7 +1353,7 @@

         # Test passing a list as search arg
         instances = self.compute_api.get_all(c,
-                search_opts={'state': [power_state.SHUTDOWN,
+                search_opts={'power_state': [power_state.SHUTDOWN,
                              power_state.RUNNING]})
         self.assertEqual(len(instances), 3)

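The mocked instance_update() calls in the live-migration tests above show the rollback convention this branch adopts: when a migration RPC fails, the instance is returned to vm_states.ACTIVE with its task_state cleared, while power_state is left for the periodic hypervisor sync to refresh. A hedged sketch of that update using the same db API as the tests (the helper name is illustrative, not a function in the branch):

    from nova import db
    from nova.compute import vm_states

    def rollback_to_active(context, instance_id, host):
        # Mirrors the dbmock.instance_update() expectations above: clear
        # the pending task and mark the VM active again on its host.
        db.instance_update(context, instance_id,
                           {'vm_state': vm_states.ACTIVE,
                            'task_state': None,
                            'host': host})
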
=== modified file 'nova/tests/vmwareapi/db_fakes.py'
--- nova/tests/vmwareapi/db_fakes.py	2011-07-27 00:40:50 +0000
+++ nova/tests/vmwareapi/db_fakes.py	2011-08-31 14:15:31 +0000
@@ -23,6 +23,8 @@

 from nova import db
 from nova import utils
+from nova.compute import task_states
+from nova.compute import vm_states


 def stub_out_db_instance_api(stubs):
@@ -64,7 +66,8 @@
         'image_ref': values['image_ref'],
         'kernel_id': values['kernel_id'],
         'ramdisk_id': values['ramdisk_id'],
-        'state_description': 'scheduling',
+        'vm_state': vm_states.BUILDING,
+        'task_state': task_states.SCHEDULING,
         'user_id': values['user_id'],
         'project_id': values['project_id'],
         'launch_time': time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime()),
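
The db_fakes change illustrates how the three new columns describe a freshly requested instance: the VM concept already exists (vm_state BUILDING), the scheduler task is still in flight (task_state SCHEDULING), and there is no hypervisor state yet. A sketch of the same stub in isolation; using power_state.NOSTATE as the pre-spawn value is an assumption based on the tests above, not something this stub sets:

    from nova.compute import power_state, task_states, vm_states

    fake_instance = {
        'vm_state': vm_states.BUILDING,        # the VM should exist...
        'task_state': task_states.SCHEDULING,  # ...but is still being placed
        'power_state': power_state.NOSTATE,    # no hypervisor state yet
    }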