Merge lp:~rconradharris/nova/server_progress into lp:~hudson-openstack/nova/trunk
Status: Merged
Approved by: Matt Dietz
Approved revision: 1575
Merged at revision: 1613
Proposed branch: lp:~rconradharris/nova/server_progress
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 1033 lines (+322/-110), 22 files modified:
  nova/api/openstack/views/servers.py (+5/-5)
  nova/compute/api.py (+6/-3)
  nova/compute/manager.py (+9/-6)
  nova/db/sqlalchemy/migrate_repo/versions/049_add_instances_progress.py (+43/-0)
  nova/db/sqlalchemy/models.py (+2/-0)
  nova/flags.py (+4/-0)
  nova/tests/api/openstack/contrib/test_createserverext.py (+4/-2)
  nova/tests/api/openstack/contrib/test_volumes.py (+2/-1)
  nova/tests/api/openstack/test_server_actions.py (+1/-0)
  nova/tests/api/openstack/test_servers.py (+10/-5)
  nova/tests/test_compute.py (+1/-1)
  nova/tests/test_virt_drivers.py (+2/-1)
  nova/tests/test_xenapi.py (+45/-31)
  nova/tests/xenapi/stubs.py (+5/-2)
  nova/utils.py (+14/-0)
  nova/virt/driver.py (+10/-6)
  nova/virt/fake.py (+3/-3)
  nova/virt/xenapi/vm_utils.py (+4/-0)
  nova/virt/xenapi/vmops.py (+126/-30)
  nova/virt/xenapi_conn.py (+14/-9)
  plugins/xenserver/xenapi/etc/xapi.d/plugins/glance (+5/-1)
  plugins/xenserver/xenapi/etc/xapi.d/plugins/migration (+7/-4)
To merge this branch: bzr merge lp:~rconradharris/nova/server_progress
Related bugs:

Reviewer | Review Type | Date Requested | Status
---|---|---|---
Trey Morris (community) | Approve | |
Matt Dietz (community) | Approve | |

Review via email: mp+75805@code.launchpad.net
Commit message
Description of the change
This patch adds instance progress, which is used by the OpenStack API to indicate how far along the currently executing action is (BUILD/REBUILD, MIGRATION/RESIZE).
For the first cut, we decided to keep it simple and compute progress by counting discrete steps. This is not ideal, since some steps, in particular those that transfer large amounts of data over the network, take *much* longer than others. A better approximation would account for the data transferred to the destination host, since in most cases that dominates the time spent.
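The discrete-step approach described above can be sketched as follows. This is an illustrative sketch, not the patch's actual update code; the class and method names are hypothetical.

```python
class StepProgress:
    """Approximate progress by counting completed discrete steps."""

    def __init__(self, total_steps):
        # Every step is weighted equally, which is the known weakness:
        # a long network copy counts the same as a quick DB update.
        self.total_steps = total_steps
        self.current_step = 0

    def step(self):
        """Record one completed step; return progress as 0-100."""
        self.current_step = min(self.current_step + 1, self.total_steps)
        return (self.current_step * 100) // self.total_steps
```

For a four-step build, each completed step bumps progress by 25%, regardless of how long that step actually took.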
In addition to adding progress, this patch:
- Allows resizes to use the same host for source and destination, which is useful for dev environments without a second host. This is enabled by the --allow_resize_to_same_host flag.
- Fixes a bug in the glance and migration XenAPI plugins where the VHDs were being copied into the SR in the wrong order. Previously the base-copy was copied first, meaning it was possible for snapwatchd to see the base-copy before the dependent cow was present. It would treat the base-copy as an unreferenced parent and GC it.
- Additional refactoring and cleanups.
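The VHD ordering fix above boils down to copying children before parents. A minimal sketch of that ordering, assuming a chain represented as a child-to-parent map (the function name and representation are illustrative, not the plugin's actual code):

```python
def vhd_import_order(vhd_parent):
    """Return the order in which to copy a VHD chain into the SR.

    vhd_parent maps each VHD to its parent (None for the root
    base-copy). Copying children before parents means a garbage
    collector scanning the SR never observes a base-copy without
    the cow that references it, so it cannot mistake the base-copy
    for an unreferenced parent and delete it.
    """
    parents = set(p for p in vhd_parent.values() if p is not None)
    # Leaves are VHDs that no other VHD names as its parent.
    leaves = [vhd for vhd in vhd_parent if vhd not in parents]
    order = []
    for leaf in leaves:
        node = leaf
        while node is not None and node not in order:
            order.append(node)  # emit child before walking to parent
            node = vhd_parent.get(node)
    return order
```

For the simple two-VHD case in this bug, `vhd_import_order({'cow.vhd': 'base_copy.vhd', 'base_copy.vhd': None})` yields the cow before the base-copy.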
Matt Dietz (cerberus) wrote :
Jay Pipes (jaypipes) wrote :
> So this looks really good from an implementation perspective. Beyond that, I
> really don't like that this kind of functionality is part of the API spec,
> which is of course not your fault. The following is a diatribe against the
> idea, not the patch.
>
> Though the progress is done in a way that mostly keeps all the moving parts
> decoupled (since it's just a guess based on the number of expected steps
> rather than disk progress, as you mention,) I'm really not a fan of the false
> sense of security it will provide end users. Questions like "Why is my Linux
> instance at 25% for a quarter of the time a Windows instance is?" will pop up
> shortly. Moving to a model of disk progress has all sorts of other scary
> implications, like trying to track the copy progress through the XS plugin
> that is currently running (yuck) and I don't even know how you'd do it for the
> other hypervisors. It also opens up all kinds of synchronization issues when
> you start including zones and trying to bubble that data up to the API in a
> reasonably timely manner.
>
> Long story short, I think including progress counters is a very strong
> statement regarding the type of cloud you expect it to be, and in this case, I
> don't think synchronous features like progress are indicative of the sort of
> scale we expect Nova to achieve.
>
> If the community decides they want this, you can consider the above an
> "approve." I don't want to block it just because I might have a different idea
> of the Way Things Should Be™
+10
Rick Harris (rconradharris) wrote :
> Long story short, I think including progress counters is a very strong
> statement regarding the type of cloud you expect it to be, and in this case, I
> don't think synchronous features like progress are indicative of the sort of
> scale we expect Nova to achieve.
Nailed it.
Feels weird to argue against my own patch, but you're absolutely right.
If we get a consensus that this patch should be rejected, I'll go ahead and extract out the misc fixes I included in this patch so we don't lose that work.
Naveed Massjouni (ironcamel) wrote :
I do not see where you are testing the new server progress feature you have added. For example, I see no new assertions in the tests. Is it because this is difficult to test and should be tested at a higher level?
Also, it looks like you have only added the progress feature for xen. Is this correct?
Rick Harris (rconradharris) wrote :
> I do not see where you are testing the new server progress feature you have added. For example, I
> see no new assertions in the tests. Is it because this is difficult to test and should be tested at
> a higher level?
Automated functional testing for this didn't feel like it would be worth the additional effort. This was a judgement call, so feel free to disagree, but it seemed like any tests for this would be slow, difficult to write, and not very useful.
The reason I think it would not be very useful is that, unlike some areas of the code where we have complex interactions, this code is very straightforward. That's not to say it can't break; it's just that the chances of that happening inadvertently are probably small relative to the pain of testing it.
> Also, it looks like you have only added the progress feature for xen. Is this correct?
Yeah, I'm adding it to XenServer for now since that's the hypervisor I'm most familiar with. At a high level the steps should be similar, but each virt driver may have a slightly different process for builds and migrations, and I'm not familiar enough with libvirt, etc., to add those myself without considerable additional research.
Rick Harris (rconradharris) wrote :
TL;DR: I agree that instance progress is a bad idea, but we should merge this anyway and reevaluate our options at the Essex Design Summit.
This patch is catching a lot of heat, and for good reason. The current implementation, while simple, is not all that accurate; and creating accurate progress at scale may turn out to be either far too complex or outright impossible.
To those points, I'm in complete agreement and at the Essex design summit I will argue against including any kind of instance progress.
All that said, I still think we should merge this patch in the interim. The reasons are:
1. REQUIREMENT: We committed to supporting the OpenStack API, and progress is a required part of the spec. If we're going to abandon part of the spec for technical reasons, that's something we should hash out in a formal setting where we can have some good back-and-forth on it.
Until we decide to change course, though, I think we should continue our march towards 100% compatibility.
2. EASY-TO-REMOVE: When and if we decide to drop support for instance progress, removing the code should be dead simple: it's just a matter of removing the update_progress method and its callers, dropping the column, and having the OSAPI return to its original behavior of stubbing out progress as 0 or 100% depending on status.
3. INCLUDES OTHER USEFUL FIXES/REFACTORINGS: This patch includes additional bug fixes and code cleanups which would be nice to have merged in. They could be extracted into a separate patch (in hindsight that would have been the right approach), but since point 1 is, to me, a hard requirement, the effort of separating out these fixes is probably not worth it.
Trey Morris (tr3buchet) wrote :
You have valid reasons for merging, and I definitely agree about hashing it out in a formal setting. I'd be on the side against in this setting.
Ease of code removal isn't the issue. My fear is that once progress bars are supported, we'll never be able to get rid of them, much the same as how they were supported in the last cloud implementation and are still plaguing us in the next. It may take you <5 minutes to remove the code, but months of meetings and bickering beforehand to convince people to "allow" it to be removed. If it were never merged in the first place and we decide against it, there's the end of it.
Trey Morris (tr3buchet) wrote :
I hate the feeling that I'm opposing you in this, Rick. We really should be arguing against this being in the spec.
Rick Harris (rconradharris) wrote :
Trey: I would say you're not really opposing me here, we're basically seeing things the same way.
I think the one difference is: I don't think we can pull an audible on API requirements, those need to be called in from the sidelines.
Once Essex gets here I'm sure we'll have a nice, vocal discussion as to whether we should break compatibility.
Trey Morris (tr3buchet) wrote :
Regrettably, I could do nothing to prevent the disaster which devastated Tristram.
Naveed Massjouni (ironcamel) wrote :
> Regrettably, I could do nothing to prevent the disaster which devastated
> Tristram.
Did you fall hopelessly in love with Rick?
OpenStack Infra (hudson-openstack) wrote :
Attempt to merge into lp:nova failed due to conflicts:
text conflict in nova/utils.py
text conflict in nova/virt/fake.py
Preview Diff
=== modified file 'nova/api/openstack/views/servers.py'
--- nova/api/openstack/views/servers.py 2011-09-09 13:48:38 +0000
+++ nova/api/openstack/views/servers.py 2011-09-21 21:22:24 +0000
@@ -137,11 +137,11 @@
         response = super(ViewBuilderV11, self)._build_detail(inst)
         response['server']['created'] = utils.isotime(inst['created_at'])
         response['server']['updated'] = utils.isotime(inst['updated_at'])
-        if 'status' in response['server']:
-            if response['server']['status'] == "ACTIVE":
-                response['server']['progress'] = 100
-            elif response['server']['status'] == "BUILD":
-                response['server']['progress'] = 0
+
+        status = response['server'].get('status')
+        if status in ('ACTIVE', 'BUILD', 'REBUILD', 'RESIZE',
+                      'VERIFY_RESIZE'):
+            response['server']['progress'] = inst['progress'] or 0
 
         response['server']['accessIPv4'] = inst.get('access_ip_v4') or ""
         response['server']['accessIPv6'] = inst.get('access_ip_v6') or ""
 
=== modified file 'nova/compute/api.py'
--- nova/compute/api.py 2011-09-21 20:17:05 +0000
+++ nova/compute/api.py 2011-09-21 21:22:24 +0000
@@ -846,7 +846,8 @@
 
         self.update(context,
                     instance_id,
-                    task_state=task_states.DELETING)
+                    task_state=task_states.DELETING,
+                    progress=0)
 
         host = instance['host']
         if host:
@@ -869,7 +870,8 @@
                     instance_id,
                     vm_state=vm_states.ACTIVE,
                     task_state=task_states.STOPPING,
-                    terminated_at=utils.utcnow())
+                    terminated_at=utils.utcnow(),
+                    progress=0)
 
         host = instance['host']
         if host:
@@ -1169,7 +1171,8 @@
                     display_name=name,
                     image_ref=image_href,
                     vm_state=vm_states.ACTIVE,
-                    task_state=task_states.REBUILDING)
+                    task_state=task_states.REBUILDING,
+                    progress=0)
 
         rebuild_params = {
             "new_pass": admin_password,
 
=== modified file 'nova/compute/manager.py'
--- nova/compute/manager.py 2011-09-21 15:54:30 +0000
+++ nova/compute/manager.py 2011-09-21 21:22:24 +0000
@@ -882,7 +882,9 @@
                 migration_ref.instance_uuid)
 
         network_info = self._get_instance_nw_info(context, instance_ref)
-        self.driver.destroy(instance_ref, network_info)
+        self.driver.confirm_migration(
+                migration_ref, instance_ref, network_info)
+
         usage_info = utils.usage_from_instance(instance_ref)
         notifier.notify('compute.%s' % self.host,
                         'compute.instance.resize.confirm',
@@ -937,7 +939,7 @@
                 local_gb=instance_type['local_gb'],
                 instance_type_id=instance_type['id'])
 
-        self.driver.revert_migration(instance_ref)
+        self.driver.finish_revert_migration(instance_ref)
         self.db.migration_update(context, migration_id,
                 {'status': 'reverted'})
         usage_info = utils.usage_from_instance(instance_ref)
@@ -961,7 +963,8 @@
         # of the instance down
         instance_ref = self.db.instance_get_by_uuid(context, instance_id)
 
-        if instance_ref['host'] == FLAGS.host:
+        same_host = instance_ref['host'] == FLAGS.host
+        if same_host and not FLAGS.allow_resize_to_same_host:
             self._instance_update(context,
                                   instance_id,
                                   vm_state=vm_states.ERROR)
@@ -1012,7 +1015,7 @@
                 {'status': 'migrating'})
 
         disk_info = self.driver.migrate_disk_and_power_off(
-                instance_ref, migration_ref['dest_host'])
+                context, instance_ref, migration_ref['dest_host'])
         self.db.migration_update(context,
                                  migration_id,
                                  {'status': 'post-migrating'})
@@ -1057,8 +1060,8 @@
                 instance_ref.uuid)
 
         network_info = self._get_instance_nw_info(context, instance_ref)
-        self.driver.finish_migration(context, instance_ref, disk_info,
-                                     network_info, resize_instance)
+        self.driver.finish_migration(context, migration_ref, instance_ref,
+                                     disk_info, network_info, resize_instance)
 
         self._instance_update(context,
                               instance_id,
 
=== added file 'nova/db/sqlalchemy/migrate_repo/versions/049_add_instances_progress.py'
--- nova/db/sqlalchemy/migrate_repo/versions/049_add_instances_progress.py 1970-01-01 00:00:00 +0000
+++ nova/db/sqlalchemy/migrate_repo/versions/049_add_instances_progress.py 2011-09-21 21:22:24 +0000
@@ -0,0 +1,43 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright 2010 OpenStack LLC.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from sqlalchemy import *
+from migrate import *
+
+from nova import log as logging
+
+meta = MetaData()
+
+instances = Table('instances', meta,
+        Column("id", Integer(), primary_key=True, nullable=False))
+
+# Add progress column to instances table
+progress = Column('progress', Integer())
+
+
+def upgrade(migrate_engine):
+    meta.bind = migrate_engine
+
+    try:
+        instances.create_column(progress)
+    except Exception:
+        logging.error(_("progress column not added to instances table"))
+        raise
+
+
+def downgrade(migrate_engine):
+    meta.bind = migrate_engine
+    instances.drop_column(progress)
 
=== modified file 'nova/db/sqlalchemy/models.py'
--- nova/db/sqlalchemy/models.py 2011-09-21 12:04:37 +0000
+++ nova/db/sqlalchemy/models.py 2011-09-21 21:22:24 +0000
@@ -241,6 +241,8 @@
     access_ip_v4 = Column(String(255))
     access_ip_v6 = Column(String(255))
 
+    progress = Column(Integer)
+
 
 class VirtualStorageArray(BASE, NovaBase):
     """
 
=== modified file 'nova/flags.py'
--- nova/flags.py 2011-09-20 17:41:23 +0000
+++ nova/flags.py 2011-09-21 21:22:24 +0000
@@ -434,3 +434,7 @@
             'nova.compute.api:nova.notifier.api.notify_decorator'],
             'Module list representing monkey '
             'patched module and decorator')
+
+DEFINE_bool('allow_resize_to_same_host', False,
+            'Allow destination machine to match source for resize. Useful'
+            ' when testing in environments with only one host machine.')
 
=== modified file 'nova/tests/api/openstack/contrib/test_createserverext.py'
--- nova/tests/api/openstack/contrib/test_createserverext.py 2011-09-09 01:30:21 +0000
+++ nova/tests/api/openstack/contrib/test_createserverext.py 2011-09-21 21:22:24 +0000
@@ -54,6 +54,7 @@
         "created_at": datetime.datetime(2010, 10, 10, 12, 0, 0),
         "updated_at": datetime.datetime(2010, 11, 11, 11, 0, 0),
         "security_groups": [{"id": 1, "name": "test"}],
+        "progress": 0,
         "image_ref": 'http://foo.com/123',
         "instance_type": {"flavorid": '124'},
     }
@@ -119,7 +120,8 @@
                 'user_id': 'fake',
                 'project_id': 'fake',
                 'created_at': "",
-                'updated_at': ""}]
+                'updated_at': "",
+                'progress': 0}]
 
     def set_admin_password(self, *args, **kwargs):
         pass
 
=== modified file 'nova/tests/api/openstack/contrib/test_volumes.py'
--- nova/tests/api/openstack/contrib/test_volumes.py 2011-09-16 18:35:10 +0000
+++ nova/tests/api/openstack/contrib/test_volumes.py 2011-09-21 21:22:24 +0000
@@ -43,6 +43,7 @@
         'project_id': 'fake',
         'created_at': datetime.datetime(2010, 10, 10, 12, 0, 0),
         'updated_at': datetime.datetime(2010, 11, 11, 11, 0, 0),
+        'progress': 0
         }]
 
 
=== modified file 'nova/tests/api/openstack/test_server_actions.py'
--- nova/tests/api/openstack/test_server_actions.py 2011-09-06 11:31:39 +0000
+++ nova/tests/api/openstack/test_server_actions.py 2011-09-21 21:22:24 +0000
@@ -91,6 +91,7 @@
         "access_ip_v6": "",
         "uuid": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
         "virtual_interfaces": [],
+        "progress": 0,
     }
 
     instance["fixed_ips"] = {
 
=== modified file 'nova/tests/api/openstack/test_servers.py'
--- nova/tests/api/openstack/test_servers.py 2011-09-21 15:54:30 +0000
+++ nova/tests/api/openstack/test_servers.py 2011-09-21 21:22:24 +0000
@@ -162,7 +162,7 @@
         vm_state=None, task_state=None,
         reservation_id="", uuid=FAKE_UUID, image_ref="10",
         flavor_id="1", interfaces=None, name=None, key_name='',
-        access_ipv4=None, access_ipv6=None):
+        access_ipv4=None, access_ipv6=None, progress=0):
     metadata = []
     metadata.append(InstanceMetadata(key='seq', value=id))
 
@@ -222,7 +222,8 @@
         "access_ip_v4": access_ipv4,
         "access_ip_v6": access_ipv6,
         "uuid": uuid,
-        "virtual_interfaces": interfaces}
+        "virtual_interfaces": interfaces,
+        "progress": progress}
 
     instance["fixed_ips"] = {
         "address": private_address,
@@ -548,7 +549,8 @@
             },
         ]
         new_return_server = return_server_with_attributes(
-            interfaces=interfaces, vm_state=vm_states.ACTIVE)
+            interfaces=interfaces, vm_state=vm_states.ACTIVE,
+            progress=100)
         self.stubs.Set(nova.db.api, 'instance_get', new_return_server)
 
         req = webob.Request.blank('/v1.1/fake/servers/1')
@@ -645,7 +647,7 @@
         ]
         new_return_server = return_server_with_attributes(
             interfaces=interfaces, vm_state=vm_states.ACTIVE,
-            image_ref=image_ref, flavor_id=flavor_id)
+            image_ref=image_ref, flavor_id=flavor_id, progress=100)
         self.stubs.Set(nova.db.api, 'instance_get', new_return_server)
 
         req = webob.Request.blank('/v1.1/fake/servers/1')
@@ -1527,6 +1529,7 @@
             "created_at": datetime.datetime(2010, 10, 10, 12, 0, 0),
             "updated_at": datetime.datetime(2010, 11, 11, 11, 0, 0),
             "config_drive": self.config_drive,
+            "progress": 0
         }
 
         def server_update(context, id, params):
@@ -3819,7 +3822,8 @@
             "accessIPv6": "fead::1234",
             #"address": ,
             #"floating_ips": [{"address":ip} for ip in public_addresses]}
-            "uuid": "deadbeef-feed-edee-beef-d0ea7beefedd"}
+            "uuid": "deadbeef-feed-edee-beef-d0ea7beefedd",
+            "progress": 0}
 
         return instance
 
@@ -3942,6 +3946,7 @@
     def test_build_server_detail_active_status(self):
         #set the power state of the instance to running
         self.instance['vm_state'] = vm_states.ACTIVE
+        self.instance['progress'] = 100
         image_bookmark = "http://localhost/images/5"
         flavor_bookmark = "http://localhost/flavors/1"
         expected_server = {
 
=== modified file 'nova/tests/test_compute.py'
--- nova/tests/test_compute.py 2011-09-19 21:53:17 +0000
+++ nova/tests/test_compute.py 2011-09-21 21:22:24 +0000
@@ -584,7 +584,7 @@
             pass
 
         self.stubs.Set(self.compute.driver, 'finish_migration', fake)
-        self.stubs.Set(self.compute.driver, 'revert_migration', fake)
+        self.stubs.Set(self.compute.driver, 'finish_revert_migration', fake)
         self.stubs.Set(self.compute.network_api, 'get_instance_nw_info', fake)
 
         self.compute.run_instance(self.context, instance_id)
 
=== modified file 'nova/tests/test_virt_drivers.py'
--- nova/tests/test_virt_drivers.py 2011-09-15 19:09:14 +0000
+++ nova/tests/test_virt_drivers.py 2011-09-21 21:22:24 +0000
@@ -185,7 +185,8 @@
         instance_ref = test_utils.get_test_instance()
         network_info = test_utils.get_test_network_info()
         self.connection.spawn(self.ctxt, instance_ref, network_info)
-        self.connection.migrate_disk_and_power_off(instance_ref, 'dest_host')
+        self.connection.migrate_disk_and_power_off(
+            self.ctxt, instance_ref, 'dest_host')
 
     @catch_notimplementederror
     def test_pause(self):
 
=== modified file 'nova/tests/test_xenapi.py'
--- nova/tests/test_xenapi.py 2011-09-13 20:33:34 +0000
+++ nova/tests/test_xenapi.py 2011-09-21 21:22:24 +0000
@@ -76,7 +76,7 @@
         db_fakes.stub_out_db_instance_api(self.stubs)
         stubs.stub_out_get_target(self.stubs)
         xenapi_fake.reset()
-        self.values = {'id': 1,
+        self.instance_values = {'id': 1,
                   'project_id': self.user_id,
                   'user_id': 'fake',
                   'image_ref': 1,
@@ -132,7 +132,7 @@
         stubs.stubout_session(self.stubs, stubs.FakeSessionForVolumeTests)
         conn = xenapi_conn.get_connection(False)
         volume = self._create_volume()
-        instance = db.instance_create(self.context, self.values)
+        instance = db.instance_create(self.context, self.instance_values)
         vm = xenapi_fake.create_vm(instance.name, 'Running')
         result = conn.attach_volume(instance.name, volume['id'], '/dev/sdc')
 
@@ -152,7 +152,7 @@
                               stubs.FakeSessionForVolumeFailedTests)
         conn = xenapi_conn.get_connection(False)
         volume = self._create_volume()
-        instance = db.instance_create(self.context, self.values)
+        instance = db.instance_create(self.context, self.instance_values)
         xenapi_fake.create_vm(instance.name, 'Running')
         self.assertRaises(Exception,
                           conn.attach_volume,
@@ -369,7 +369,7 @@
                     create_record=True, empty_dns=False):
         stubs.stubout_loopingcall_start(self.stubs)
         if create_record:
-            values = {'id': instance_id,
+            instance_values = {'id': instance_id,
                       'project_id': self.project_id,
                       'user_id': self.user_id,
                       'image_ref': image_ref,
@@ -379,7 +379,7 @@
                       'os_type': os_type,
                       'hostname': hostname,
                       'architecture': architecture}
-            instance = db.instance_create(self.context, values)
+            instance = db.instance_create(self.context, instance_values)
         else:
             instance = db.instance_get(self.context, instance_id)
         network_info = [({'bridge': 'fa0', 'id': 0, 'injected': True},
384 | @@ -623,28 +623,28 @@ | |||
385 | 623 | # Ensure that it will not unrescue a non-rescued instance. | 623 | # Ensure that it will not unrescue a non-rescued instance. |
386 | 624 | self.assertRaises(Exception, conn.unrescue, instance, None) | 624 | self.assertRaises(Exception, conn.unrescue, instance, None) |
387 | 625 | 625 | ||
389 | 626 | def test_revert_migration(self): | 626 | def test_finish_revert_migration(self): |
390 | 627 | instance = self._create_instance() | 627 | instance = self._create_instance() |
391 | 628 | 628 | ||
392 | 629 | class VMOpsMock(): | 629 | class VMOpsMock(): |
393 | 630 | 630 | ||
394 | 631 | def __init__(self): | 631 | def __init__(self): |
396 | 632 | self.revert_migration_called = False | 632 | self.finish_revert_migration_called = False |
397 | 633 | 633 | ||
400 | 634 | def revert_migration(self, instance): | 634 | def finish_revert_migration(self, instance): |
401 | 635 | self.revert_migration_called = True | 635 | self.finish_revert_migration_called = True |
402 | 636 | 636 | ||
403 | 637 | stubs.stubout_session(self.stubs, stubs.FakeSessionForMigrationTests) | 637 | stubs.stubout_session(self.stubs, stubs.FakeSessionForMigrationTests) |
404 | 638 | 638 | ||
405 | 639 | conn = xenapi_conn.get_connection(False) | 639 | conn = xenapi_conn.get_connection(False) |
406 | 640 | conn._vmops = VMOpsMock() | 640 | conn._vmops = VMOpsMock() |
409 | 641 | conn.revert_migration(instance) | 641 | conn.finish_revert_migration(instance) |
410 | 642 | self.assertTrue(conn._vmops.revert_migration_called) | 642 | self.assertTrue(conn._vmops.finish_revert_migration_called) |
411 | 643 | 643 | ||
412 | 644 | def _create_instance(self, instance_id=1, spawn=True): | 644 | def _create_instance(self, instance_id=1, spawn=True): |
413 | 645 | """Creates and spawns a test instance.""" | 645 | """Creates and spawns a test instance.""" |
414 | 646 | stubs.stubout_loopingcall_start(self.stubs) | 646 | stubs.stubout_loopingcall_start(self.stubs) |
416 | 647 | values = { | 647 | instance_values = { |
417 | 648 | 'id': instance_id, | 648 | 'id': instance_id, |
418 | 649 | 'project_id': self.project_id, | 649 | 'project_id': self.project_id, |
419 | 650 | 'user_id': self.user_id, | 650 | 'user_id': self.user_id, |
420 | @@ -654,7 +654,7 @@ | |||
421 | 654 | 'instance_type_id': '3', # m1.large | 654 | 'instance_type_id': '3', # m1.large |
422 | 655 | 'os_type': 'linux', | 655 | 'os_type': 'linux', |
423 | 656 | 'architecture': 'x86-64'} | 656 | 'architecture': 'x86-64'} |
425 | 657 | instance = db.instance_create(self.context, values) | 657 | instance = db.instance_create(self.context, instance_values) |
426 | 658 | network_info = [({'bridge': 'fa0', 'id': 0, 'injected': False}, | 658 | network_info = [({'bridge': 'fa0', 'id': 0, 'injected': False}, |
427 | 659 | {'broadcast': '192.168.0.255', | 659 | {'broadcast': '192.168.0.255', |
428 | 660 | 'dns': ['192.168.0.1'], | 660 | 'dns': ['192.168.0.1'], |
429 | @@ -732,7 +732,7 @@ | |||
430 | 732 | self.user_id = 'fake' | 732 | self.user_id = 'fake' |
431 | 733 | self.project_id = 'fake' | 733 | self.project_id = 'fake' |
432 | 734 | self.context = context.RequestContext(self.user_id, self.project_id) | 734 | self.context = context.RequestContext(self.user_id, self.project_id) |
434 | 735 | self.values = {'id': 1, | 735 | self.instance_values = {'id': 1, |
435 | 736 | 'project_id': self.project_id, | 736 | 'project_id': self.project_id, |
436 | 737 | 'user_id': self.user_id, | 737 | 'user_id': self.user_id, |
437 | 738 | 'image_ref': 1, | 738 | 'image_ref': 1, |
438 | @@ -743,22 +743,34 @@ | |||
439 | 743 | 'os_type': 'linux', | 743 | 'os_type': 'linux', |
440 | 744 | 'architecture': 'x86-64'} | 744 | 'architecture': 'x86-64'} |
441 | 745 | 745 | ||
442 | 746 | migration_values = { | ||
443 | 747 | 'source_compute': 'nova-compute', | ||
444 | 748 | 'dest_compute': 'nova-compute', | ||
445 | 749 | 'dest_host': '10.127.5.114', | ||
446 | 750 | 'status': 'post-migrating', | ||
447 | 751 | 'instance_uuid': '15f23e6a-cc6e-4d22-b651-d9bdaac316f7', | ||
448 | 752 | 'old_instance_type_id': 5, | ||
449 | 753 | 'new_instance_type_id': 1 | ||
450 | 754 | } | ||
451 | 755 | self.migration = db.migration_create( | ||
452 | 756 | context.get_admin_context(), migration_values) | ||
453 | 757 | |||
454 | 746 | fake_utils.stub_out_utils_execute(self.stubs) | 758 | fake_utils.stub_out_utils_execute(self.stubs) |
455 | 747 | stubs.stub_out_migration_methods(self.stubs) | 759 | stubs.stub_out_migration_methods(self.stubs) |
456 | 748 | stubs.stubout_get_this_vm_uuid(self.stubs) | 760 | stubs.stubout_get_this_vm_uuid(self.stubs) |
457 | 749 | glance_stubs.stubout_glance_client(self.stubs) | 761 | glance_stubs.stubout_glance_client(self.stubs) |
458 | 750 | 762 | ||
459 | 751 | def test_migrate_disk_and_power_off(self): | 763 | def test_migrate_disk_and_power_off(self): |
461 | 752 | instance = db.instance_create(self.context, self.values) | 764 | instance = db.instance_create(self.context, self.instance_values) |
462 | 753 | stubs.stubout_session(self.stubs, stubs.FakeSessionForMigrationTests) | 765 | stubs.stubout_session(self.stubs, stubs.FakeSessionForMigrationTests) |
463 | 754 | conn = xenapi_conn.get_connection(False) | 766 | conn = xenapi_conn.get_connection(False) |
465 | 755 | conn.migrate_disk_and_power_off(instance, '127.0.0.1') | 767 | conn.migrate_disk_and_power_off(self.context, instance, '127.0.0.1') |
466 | 756 | 768 | ||
467 | 757 | def test_revert_migrate(self): | 769 | def test_revert_migrate(self): |
469 | 758 | instance = db.instance_create(self.context, self.values) | 770 | instance = db.instance_create(self.context, self.instance_values) |
470 | 759 | self.called = False | 771 | self.called = False |
471 | 760 | self.fake_vm_start_called = False | 772 | self.fake_vm_start_called = False |
473 | 761 | self.fake_revert_migration_called = False | 773 | self.fake_finish_revert_migration_called = False |
474 | 762 | 774 | ||
475 | 763 | def fake_vm_start(*args, **kwargs): | 775 | def fake_vm_start(*args, **kwargs): |
476 | 764 | self.fake_vm_start_called = True | 776 | self.fake_vm_start_called = True |
477 | @@ -766,13 +778,14 @@ | |||
478 | 766 | def fake_vdi_resize(*args, **kwargs): | 778 | def fake_vdi_resize(*args, **kwargs): |
479 | 767 | self.called = True | 779 | self.called = True |
480 | 768 | 780 | ||
483 | 769 | def fake_revert_migration(*args, **kwargs): | 781 | def fake_finish_revert_migration(*args, **kwargs): |
484 | 770 | self.fake_revert_migration_called = True | 782 | self.fake_finish_revert_migration_called = True |
485 | 771 | 783 | ||
486 | 772 | self.stubs.Set(stubs.FakeSessionForMigrationTests, | 784 | self.stubs.Set(stubs.FakeSessionForMigrationTests, |
487 | 773 | "VDI_resize_online", fake_vdi_resize) | 785 | "VDI_resize_online", fake_vdi_resize) |
488 | 774 | self.stubs.Set(vmops.VMOps, '_start', fake_vm_start) | 786 | self.stubs.Set(vmops.VMOps, '_start', fake_vm_start) |
490 | 775 | self.stubs.Set(vmops.VMOps, 'revert_migration', fake_revert_migration) | 787 | self.stubs.Set(vmops.VMOps, 'finish_revert_migration', |
491 | 788 | fake_finish_revert_migration) | ||
492 | 776 | 789 | ||
493 | 777 | stubs.stubout_session(self.stubs, stubs.FakeSessionForMigrationTests) | 790 | stubs.stubout_session(self.stubs, stubs.FakeSessionForMigrationTests) |
494 | 778 | stubs.stubout_loopingcall_start(self.stubs) | 791 | stubs.stubout_loopingcall_start(self.stubs) |
495 | @@ -791,17 +804,17 @@ | |||
496 | 791 | 'label': 'fake', | 804 | 'label': 'fake', |
497 | 792 | 'mac': 'DE:AD:BE:EF:00:00', | 805 | 'mac': 'DE:AD:BE:EF:00:00', |
498 | 793 | 'rxtx_cap': 3})] | 806 | 'rxtx_cap': 3})] |
500 | 794 | conn.finish_migration(self.context, instance, | 807 | conn.finish_migration(self.context, self.migration, instance, |
501 | 795 | dict(base_copy='hurr', cow='durr'), | 808 | dict(base_copy='hurr', cow='durr'), |
502 | 796 | network_info, resize_instance=True) | 809 | network_info, resize_instance=True) |
503 | 797 | self.assertEqual(self.called, True) | 810 | self.assertEqual(self.called, True) |
504 | 798 | self.assertEqual(self.fake_vm_start_called, True) | 811 | self.assertEqual(self.fake_vm_start_called, True) |
505 | 799 | 812 | ||
508 | 800 | conn.revert_migration(instance) | 813 | conn.finish_revert_migration(instance) |
509 | 801 | self.assertEqual(self.fake_revert_migration_called, True) | 814 | self.assertEqual(self.fake_finish_revert_migration_called, True) |
510 | 802 | 815 | ||
511 | 803 | def test_finish_migrate(self): | 816 | def test_finish_migrate(self): |
513 | 804 | instance = db.instance_create(self.context, self.values) | 817 | instance = db.instance_create(self.context, self.instance_values) |
514 | 805 | self.called = False | 818 | self.called = False |
515 | 806 | self.fake_vm_start_called = False | 819 | self.fake_vm_start_called = False |
516 | 807 | 820 | ||
517 | @@ -832,7 +845,7 @@ | |||
518 | 832 | 'label': 'fake', | 845 | 'label': 'fake', |
519 | 833 | 'mac': 'DE:AD:BE:EF:00:00', | 846 | 'mac': 'DE:AD:BE:EF:00:00', |
520 | 834 | 'rxtx_cap': 3})] | 847 | 'rxtx_cap': 3})] |
522 | 835 | conn.finish_migration(self.context, instance, | 848 | conn.finish_migration(self.context, self.migration, instance, |
523 | 836 | dict(base_copy='hurr', cow='durr'), | 849 | dict(base_copy='hurr', cow='durr'), |
524 | 837 | network_info, resize_instance=True) | 850 | network_info, resize_instance=True) |
525 | 838 | self.assertEqual(self.called, True) | 851 | self.assertEqual(self.called, True) |
526 | @@ -841,8 +854,9 @@ | |||
527 | 841 | def test_finish_migrate_no_local_storage(self): | 854 | def test_finish_migrate_no_local_storage(self): |
528 | 842 | tiny_type_id = \ | 855 | tiny_type_id = \ |
529 | 843 | instance_types.get_instance_type_by_name('m1.tiny')['id'] | 856 | instance_types.get_instance_type_by_name('m1.tiny')['id'] |
532 | 844 | self.values.update({'instance_type_id': tiny_type_id, 'local_gb': 0}) | 857 | self.instance_values.update({'instance_type_id': tiny_type_id, |
533 | 845 | instance = db.instance_create(self.context, self.values) | 858 | 'local_gb': 0}) |
534 | 859 | instance = db.instance_create(self.context, self.instance_values) | ||
535 | 846 | 860 | ||
536 | 847 | def fake_vdi_resize(*args, **kwargs): | 861 | def fake_vdi_resize(*args, **kwargs): |
537 | 848 | raise Exception("This shouldn't be called") | 862 | raise Exception("This shouldn't be called") |
538 | @@ -866,12 +880,12 @@ | |||
539 | 866 | 'label': 'fake', | 880 | 'label': 'fake', |
540 | 867 | 'mac': 'DE:AD:BE:EF:00:00', | 881 | 'mac': 'DE:AD:BE:EF:00:00', |
541 | 868 | 'rxtx_cap': 3})] | 882 | 'rxtx_cap': 3})] |
543 | 869 | conn.finish_migration(self.context, instance, | 883 | conn.finish_migration(self.context, self.migration, instance, |
544 | 870 | dict(base_copy='hurr', cow='durr'), | 884 | dict(base_copy='hurr', cow='durr'), |
545 | 871 | network_info, resize_instance=True) | 885 | network_info, resize_instance=True) |
546 | 872 | 886 | ||
547 | 873 | def test_finish_migrate_no_resize_vdi(self): | 887 | def test_finish_migrate_no_resize_vdi(self): |
549 | 874 | instance = db.instance_create(self.context, self.values) | 888 | instance = db.instance_create(self.context, self.instance_values) |
550 | 875 | 889 | ||
551 | 876 | def fake_vdi_resize(*args, **kwargs): | 890 | def fake_vdi_resize(*args, **kwargs): |
552 | 877 | raise Exception("This shouldn't be called") | 891 | raise Exception("This shouldn't be called") |
553 | @@ -897,7 +911,7 @@ | |||
554 | 897 | 'rxtx_cap': 3})] | 911 | 'rxtx_cap': 3})] |
555 | 898 | 912 | ||
556 | 899 | # Resize instance would be determined by the compute call | 913 | # Resize instance would be determined by the compute call |
558 | 900 | conn.finish_migration(self.context, instance, | 914 | conn.finish_migration(self.context, self.migration, instance, |
559 | 901 | dict(base_copy='hurr', cow='durr'), | 915 | dict(base_copy='hurr', cow='durr'), |
560 | 902 | network_info, resize_instance=False) | 916 | network_info, resize_instance=False) |
561 | 903 | 917 | ||
562 | 904 | 918 | ||
563 | === modified file 'nova/tests/xenapi/stubs.py' | |||
564 | --- nova/tests/xenapi/stubs.py 2011-08-28 19:11:14 +0000 | |||
565 | +++ nova/tests/xenapi/stubs.py 2011-09-21 21:22:24 +0000 | |||
566 | @@ -295,9 +295,12 @@ | |||
567 | 295 | vm['is_control_domain'] = False | 295 | vm['is_control_domain'] = False |
568 | 296 | vm['domid'] = random.randrange(1, 1 << 16) | 296 | vm['domid'] = random.randrange(1, 1 << 16) |
569 | 297 | 297 | ||
570 | 298 | def VM_set_name_label(self, *args): | ||
571 | 299 | pass | ||
572 | 300 | |||
573 | 298 | 301 | ||
574 | 299 | def stub_out_migration_methods(stubs): | 302 | def stub_out_migration_methods(stubs): |
576 | 300 | def fake_get_snapshot(self, instance): | 303 | def fake_create_snapshot(self, instance): |
577 | 301 | return 'vm_ref', dict(image='foo', snap='bar') | 304 | return 'vm_ref', dict(image='foo', snap='bar') |
578 | 302 | 305 | ||
579 | 303 | @classmethod | 306 | @classmethod |
580 | @@ -327,7 +330,7 @@ | |||
581 | 327 | stubs.Set(vmops.VMOps, '_destroy', fake_destroy) | 330 | stubs.Set(vmops.VMOps, '_destroy', fake_destroy) |
582 | 328 | stubs.Set(vm_utils.VMHelper, 'scan_default_sr', fake_sr) | 331 | stubs.Set(vm_utils.VMHelper, 'scan_default_sr', fake_sr) |
583 | 329 | stubs.Set(vm_utils.VMHelper, 'scan_sr', fake_sr) | 332 | stubs.Set(vm_utils.VMHelper, 'scan_sr', fake_sr) |
585 | 330 | stubs.Set(vmops.VMOps, '_get_snapshot', fake_get_snapshot) | 333 | stubs.Set(vmops.VMOps, '_create_snapshot', fake_create_snapshot) |
586 | 331 | stubs.Set(vm_utils.VMHelper, 'get_vdi_for_vm_safely', fake_get_vdi) | 334 | stubs.Set(vm_utils.VMHelper, 'get_vdi_for_vm_safely', fake_get_vdi) |
587 | 332 | stubs.Set(xenapi_conn.XenAPISession, 'wait_for_task', lambda x, y, z: None) | 335 | stubs.Set(xenapi_conn.XenAPISession, 'wait_for_task', lambda x, y, z: None) |
588 | 333 | stubs.Set(vm_utils.VMHelper, 'get_sr_path', fake_get_sr_path) | 336 | stubs.Set(vm_utils.VMHelper, 'get_sr_path', fake_get_sr_path) |
589 | 334 | 337 | ||
590 | === modified file 'nova/utils.py' | |||
591 | --- nova/utils.py 2011-09-20 20:21:06 +0000 | |||
592 | +++ nova/utils.py 2011-09-21 21:22:24 +0000 | |||
593 | @@ -912,6 +912,20 @@ | |||
594 | 912 | return [{label: x} for x in lst] | 912 | return [{label: x} for x in lst] |
595 | 913 | 913 | ||
596 | 914 | 914 | ||
597 | 915 | def timefunc(func): | ||
598 | 916 | """Decorator that logs how long a particular function took to execute""" | ||
599 | 917 | @functools.wraps(func) | ||
600 | 918 | def inner(*args, **kwargs): | ||
601 | 919 | start_time = time.time() | ||
602 | 920 | try: | ||
603 | 921 | return func(*args, **kwargs) | ||
604 | 922 | finally: | ||
605 | 923 | total_time = time.time() - start_time | ||
606 | 924 | LOG.debug(_("timefunc: '%(name)s' took %(total_time).2f secs") % | ||
607 | 925 | dict(name=func.__name__, total_time=total_time)) | ||
608 | 926 | return inner | ||
609 | 927 | |||
610 | 928 | |||
611 | 915 | def generate_glance_url(): | 929 | def generate_glance_url(): |
612 | 916 | """Generate the URL to glance.""" | 930 | """Generate the URL to glance.""" |
613 | 917 | # TODO(jk0): This will eventually need to take SSL into consideration | 931 | # TODO(jk0): This will eventually need to take SSL into consideration |
614 | 918 | 932 | ||
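The `timefunc` decorator added to nova/utils.py above is self-contained apart from Nova's logger. Below is a standalone sketch of the same decorator, with stdlib `logging` standing in for Nova's `LOG` and the gettext `_()` wrapper dropped; `slow_add` is a hypothetical function used only to exercise it:

```python
import functools
import logging
import time

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger("timefunc-demo")


def timefunc(func):
    """Decorator that logs how long a particular function took to execute."""
    @functools.wraps(func)
    def inner(*args, **kwargs):
        start_time = time.time()
        try:
            return func(*args, **kwargs)
        finally:
            # The finally clause guarantees the timing is logged even if
            # the wrapped function raises.
            total_time = time.time() - start_time
            LOG.debug("timefunc: '%(name)s' took %(total_time).2f secs",
                      dict(name=func.__name__, total_time=total_time))
    return inner


@timefunc
def slow_add(a, b):
    time.sleep(0.01)
    return a + b


print(slow_add(2, 3))  # logs the elapsed time at DEBUG, then prints 5
```

`functools.wraps` preserves the wrapped function's metadata, so the log line reports `slow_add` rather than `inner`.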
615 | === modified file 'nova/virt/driver.py' | |||
616 | --- nova/virt/driver.py 2011-09-21 15:54:30 +0000 | |||
617 | +++ nova/virt/driver.py 2011-09-21 21:22:24 +0000 | |||
618 | @@ -225,12 +225,11 @@ | |||
619 | 225 | """ | 225 | """ |
620 | 226 | raise NotImplementedError() | 226 | raise NotImplementedError() |
621 | 227 | 227 | ||
623 | 228 | def migrate_disk_and_power_off(self, instance, dest): | 228 | def migrate_disk_and_power_off(self, context, instance, dest): |
624 | 229 | """ | 229 | """ |
625 | 230 | Transfers the disk of a running instance in multiple phases, turning | 230 | Transfers the disk of a running instance in multiple phases, turning |
626 | 231 | off the instance before the end. | 231 | off the instance before the end. |
627 | 232 | """ | 232 | """ |
628 | 233 | # TODO(Vek): Need to pass context in for access to auth_token | ||
629 | 234 | raise NotImplementedError() | 233 | raise NotImplementedError() |
630 | 235 | 234 | ||
631 | 236 | def snapshot(self, context, instance, image_id): | 235 | def snapshot(self, context, instance, image_id): |
632 | @@ -244,8 +243,8 @@ | |||
633 | 244 | """ | 243 | """ |
634 | 245 | raise NotImplementedError() | 244 | raise NotImplementedError() |
635 | 246 | 245 | ||
638 | 247 | def finish_migration(self, context, instance, disk_info, network_info, | 246 | def finish_migration(self, context, migration, instance, disk_info, |
639 | 248 | resize_instance): | 247 | network_info, resize_instance): |
640 | 249 | """Completes a resize, turning on the migrated instance | 248 | """Completes a resize, turning on the migrated instance |
641 | 250 | 249 | ||
642 | 251 | :param network_info: | 250 | :param network_info: |
643 | @@ -253,8 +252,13 @@ | |||
644 | 253 | """ | 252 | """ |
645 | 254 | raise NotImplementedError() | 253 | raise NotImplementedError() |
646 | 255 | 254 | ||
649 | 256 | def revert_migration(self, instance): | 255 | def confirm_migration(self, migration, instance, network_info): |
650 | 257 | """Reverts a resize, powering back on the instance""" | 256 | """Confirms a resize, destroying the source VM""" |
651 | 257 | # TODO(Vek): Need to pass context in for access to auth_token | ||
652 | 258 | raise NotImplementedError() | ||
653 | 259 | |||
654 | 260 | def finish_revert_migration(self, instance): | ||
655 | 261 | """Finish reverting a resize, powering back on the instance""" | ||
656 | 258 | # TODO(Vek): Need to pass context in for access to auth_token | 262 | # TODO(Vek): Need to pass context in for access to auth_token |
657 | 259 | raise NotImplementedError() | 263 | raise NotImplementedError() |
658 | 260 | 264 | ||
659 | 261 | 265 | ||
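After this change the resize lifecycle on the virt driver interface is split across four hooks: `migrate_disk_and_power_off` (now taking a context), `finish_migration` (now taking the migration record), and the old `revert_migration` split into `confirm_migration` and `finish_revert_migration`. A minimal sketch of the revised contract follows, with a no-op subclass in the spirit of nova/virt/fake.py; `ComputeDriver` and `FakeDriver` are illustrative names here, not the actual Nova class hierarchy:

```python
class ComputeDriver:
    """Sketch of the resize-related driver interface after this patch."""

    def migrate_disk_and_power_off(self, context, instance, dest):
        """Transfers the disk of a running instance in multiple phases,
        turning off the instance before the end."""
        raise NotImplementedError()

    def finish_migration(self, context, migration, instance, disk_info,
                         network_info, resize_instance):
        """Completes a resize, turning on the migrated instance."""
        raise NotImplementedError()

    def confirm_migration(self, migration, instance, network_info):
        """Confirms a resize, destroying the source VM."""
        raise NotImplementedError()

    def finish_revert_migration(self, instance):
        """Finish reverting a resize, powering back on the instance."""
        raise NotImplementedError()


class FakeDriver(ComputeDriver):
    """No-op driver in the spirit of nova/virt/fake.py."""

    def migrate_disk_and_power_off(self, context, instance, dest):
        pass
```

Splitting revert into confirm/finish pairs lets the manager destroy the source VM only once the resize is confirmed, and power the original back on only when the revert completes.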
660 | === modified file 'nova/virt/fake.py' | |||
661 | --- nova/virt/fake.py 2011-09-15 18:44:49 +0000 | |||
662 | +++ nova/virt/fake.py 2011-09-21 21:22:24 +0000 | |||
663 | @@ -130,12 +130,12 @@ | |||
664 | 130 | def poll_rescued_instances(self, timeout): | 130 | def poll_rescued_instances(self, timeout): |
665 | 131 | pass | 131 | pass |
666 | 132 | 132 | ||
667 | 133 | def migrate_disk_and_power_off(self, context, instance, dest): | ||
668 | 134 | pass | ||
669 | 135 | |||
670 | 133 | def poll_unconfirmed_resizes(self, resize_confirm_window): | 136 | def poll_unconfirmed_resizes(self, resize_confirm_window): |
671 | 134 | pass | 137 | pass |
672 | 135 | 138 | ||
673 | 136 | def migrate_disk_and_power_off(self, instance, dest): | ||
674 | 137 | pass | ||
675 | 138 | |||
676 | 139 | def pause(self, instance, callback): | 139 | def pause(self, instance, callback): |
677 | 140 | pass | 140 | pass |
678 | 141 | 141 | ||
679 | 142 | 142 | ||
680 | === modified file 'nova/virt/xenapi/vm_utils.py' | |||
681 | --- nova/virt/xenapi/vm_utils.py 2011-09-10 17:56:54 +0000 | |||
682 | +++ nova/virt/xenapi/vm_utils.py 2011-09-21 21:22:24 +0000 | |||
683 | @@ -695,6 +695,10 @@ | |||
684 | 695 | return is_pv | 695 | return is_pv |
685 | 696 | 696 | ||
686 | 697 | @classmethod | 697 | @classmethod |
687 | 698 | def set_vm_name_label(cls, session, vm_ref, name_label): | ||
688 | 699 | session.get_xenapi().VM.set_name_label(vm_ref, name_label) | ||
689 | 700 | |||
690 | 701 | @classmethod | ||
691 | 698 | def lookup(cls, session, name_label): | 702 | def lookup(cls, session, name_label): |
692 | 699 | """Looks the instance up and returns it if available""" | 703 | """Looks the instance up and returns it if available""" |
693 | 700 | vm_refs = session.get_xenapi().VM.get_by_name_label(name_label) | 704 | vm_refs = session.get_xenapi().VM.get_by_name_label(name_label) |
694 | 701 | 705 | ||
695 | === modified file 'nova/virt/xenapi/vmops.py' | |||
696 | --- nova/virt/xenapi/vmops.py 2011-09-21 15:54:30 +0000 | |||
697 | +++ nova/virt/xenapi/vmops.py 2011-09-21 21:22:24 +0000 | |||
698 | @@ -56,6 +56,9 @@ | |||
699 | 56 | 'nova.virt.xenapi.vif.XenAPIBridgeDriver', | 56 | 'nova.virt.xenapi.vif.XenAPIBridgeDriver', |
700 | 57 | 'The XenAPI VIF driver using XenServer Network APIs.') | 57 | 'The XenAPI VIF driver using XenServer Network APIs.') |
701 | 58 | 58 | ||
702 | 59 | RESIZE_TOTAL_STEPS = 5 | ||
703 | 60 | BUILD_TOTAL_STEPS = 4 | ||
704 | 61 | |||
705 | 59 | 62 | ||
706 | 60 | def cmp_version(a, b): | 63 | def cmp_version(a, b): |
707 | 61 | """Compare two version strings (eg 0.0.1.10 > 0.0.1.9)""" | 64 | """Compare two version strings (eg 0.0.1.10 > 0.0.1.9)""" |
708 | @@ -111,20 +114,40 @@ | |||
709 | 111 | instance_infos.append(instance_info) | 114 | instance_infos.append(instance_info) |
710 | 112 | return instance_infos | 115 | return instance_infos |
711 | 113 | 116 | ||
714 | 114 | def revert_migration(self, instance): | 117 | def confirm_migration(self, migration, instance, network_info): |
715 | 115 | vm_ref = VMHelper.lookup(self._session, instance.name) | 118 | name_label = self._get_orig_vm_name_label(instance) |
716 | 119 | vm_ref = VMHelper.lookup(self._session, name_label) | ||
717 | 120 | return self._destroy(instance, vm_ref, network_info, shutdown=False) | ||
718 | 121 | |||
719 | 122 | def finish_revert_migration(self, instance): | ||
720 | 123 | # NOTE(sirp): the original vm was suffixed with '-orig'; find it using | ||
721 | 124 | # the old suffix, remove the suffix, then power it back on. | ||
722 | 125 | name_label = self._get_orig_vm_name_label(instance) | ||
723 | 126 | vm_ref = VMHelper.lookup(self._session, name_label) | ||
724 | 127 | |||
725 | 128 | # Remove the '-orig' suffix (which was added in case the resized VM | ||
726 | 129 | # ends up on the source host, common during testing) | ||
727 | 130 | name_label = instance.name | ||
728 | 131 | VMHelper.set_vm_name_label(self._session, vm_ref, name_label) | ||
729 | 132 | |||
730 | 116 | self._start(instance, vm_ref) | 133 | self._start(instance, vm_ref) |
731 | 117 | 134 | ||
734 | 118 | def finish_migration(self, context, instance, disk_info, network_info, | 135 | def finish_migration(self, context, migration, instance, disk_info, |
735 | 119 | resize_instance): | 136 | network_info, resize_instance): |
736 | 120 | vdi_uuid = self.link_disks(instance, disk_info['base_copy'], | 137 | vdi_uuid = self.link_disks(instance, disk_info['base_copy'], |
737 | 121 | disk_info['cow']) | 138 | disk_info['cow']) |
738 | 139 | |||
739 | 122 | vm_ref = self._create_vm(context, instance, | 140 | vm_ref = self._create_vm(context, instance, |
740 | 123 | [dict(vdi_type='os', vdi_uuid=vdi_uuid)], | 141 | [dict(vdi_type='os', vdi_uuid=vdi_uuid)], |
741 | 124 | network_info) | 142 | network_info) |
742 | 125 | if resize_instance: | 143 | if resize_instance: |
743 | 126 | self.resize_instance(instance, vdi_uuid) | 144 | self.resize_instance(instance, vdi_uuid) |
744 | 145 | |||
745 | 146 | # 5. Start VM | ||
746 | 127 | self._start(instance, vm_ref=vm_ref) | 147 | self._start(instance, vm_ref=vm_ref) |
747 | 148 | self._update_instance_progress(context, instance, | ||
748 | 149 | step=5, | ||
749 | 150 | total_steps=RESIZE_TOTAL_STEPS) | ||
750 | 128 | 151 | ||
751 | 129 | def _start(self, instance, vm_ref=None): | 152 | def _start(self, instance, vm_ref=None): |
752 | 130 | """Power on a VM instance""" | 153 | """Power on a VM instance""" |
753 | @@ -147,9 +170,35 @@ | |||
754 | 147 | def spawn(self, context, instance, network_info): | 170 | def spawn(self, context, instance, network_info): |
755 | 148 | vdis = None | 171 | vdis = None |
756 | 149 | try: | 172 | try: |
757 | 173 | # 1. Vanity Step | ||
758 | 174 | # NOTE(sirp): _create_disk will potentially take a *very* long | ||
759 | 175 | # time to complete since it has to fetch the image over the | ||
760 | 176 | # network and images can be several gigs in size. To avoid | ||
761 | 177 | # progress remaining at 0% for too long, which will appear to be | ||
762 | 178 | # an error, we insert a "vanity" step to bump the progress up one | ||
763 | 179 | # notch above 0. | ||
764 | 180 | self._update_instance_progress(context, instance, | ||
765 | 181 | step=1, | ||
766 | 182 | total_steps=BUILD_TOTAL_STEPS) | ||
767 | 183 | |||
768 | 184 | # 2. Fetch the Image over the Network | ||
769 | 150 | vdis = self._create_disks(context, instance) | 185 | vdis = self._create_disks(context, instance) |
770 | 186 | self._update_instance_progress(context, instance, | ||
771 | 187 | step=2, | ||
772 | 188 | total_steps=BUILD_TOTAL_STEPS) | ||
773 | 189 | |||
774 | 190 | # 3. Create the VM records | ||
775 | 151 | vm_ref = self._create_vm(context, instance, vdis, network_info) | 191 | vm_ref = self._create_vm(context, instance, vdis, network_info) |
776 | 192 | self._update_instance_progress(context, instance, | ||
777 | 193 | step=3, | ||
778 | 194 | total_steps=BUILD_TOTAL_STEPS) | ||
779 | 195 | |||
780 | 196 | # 4. Boot the Instance | ||
781 | 152 | self._spawn(instance, vm_ref) | 197 | self._spawn(instance, vm_ref) |
782 | 198 | self._update_instance_progress(context, instance, | ||
783 | 199 | step=4, | ||
784 | 200 | total_steps=BUILD_TOTAL_STEPS) | ||
785 | 201 | |||
786 | 153 | except (self.XenAPI.Failure, OSError, IOError) as spawn_error: | 202 | except (self.XenAPI.Failure, OSError, IOError) as spawn_error: |
787 | 154 | LOG.exception(_("instance %s: Failed to spawn"), | 203 | LOG.exception(_("instance %s: Failed to spawn"), |
788 | 155 | instance.id, exc_info=sys.exc_info()) | 204 | instance.id, exc_info=sys.exc_info()) |
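The build path above is divided into `BUILD_TOTAL_STEPS = 4` discrete steps, each followed by a progress bump; the "vanity" step exists only to move progress off 0% while the long image download runs. The control flow can be sketched as below, with `report` standing in for `_update_instance_progress` and the real work elided; `spawn_with_progress` is an illustrative name, not Nova code:

```python
BUILD_TOTAL_STEPS = 4


def spawn_with_progress(report):
    """Step skeleton of VMOps.spawn: each completed step bumps the
    instance's progress so the OpenStack API can report it."""
    report(1, BUILD_TOTAL_STEPS)   # 1. vanity step: off 0% before the
                                   #    slow image fetch begins
    # ... _create_disks(context, instance) would run here ...
    report(2, BUILD_TOTAL_STEPS)   # 2. image fetched, disks created
    # ... _create_vm(context, instance, vdis, network_info) ...
    report(3, BUILD_TOTAL_STEPS)   # 3. VM records created
    # ... _spawn(instance, vm_ref) ...
    report(4, BUILD_TOTAL_STEPS)   # 4. instance booted


progress_log = []
spawn_with_progress(lambda step, total: progress_log.append(
    round(float(step) / total * 100)))
print(progress_log)  # [25, 50, 75, 100]
```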
789 | @@ -169,7 +218,7 @@ | |||
790 | 169 | if vm_ref is not None: | 218 | if vm_ref is not None: |
791 | 170 | raise exception.InstanceExists(name=instance_name) | 219 | raise exception.InstanceExists(name=instance_name) |
792 | 171 | 220 | ||
794 | 172 | #ensure enough free memory is available | 221 | # Ensure enough free memory is available |
795 | 173 | if not VMHelper.ensure_free_mem(self._session, instance): | 222 | if not VMHelper.ensure_free_mem(self._session, instance): |
796 | 174 | LOG.exception(_('instance %(instance_name)s: not enough free ' | 223 | LOG.exception(_('instance %(instance_name)s: not enough free ' |
797 | 175 | 'memory') % locals()) | 224 | 'memory') % locals()) |
798 | @@ -501,7 +550,8 @@ | |||
799 | 501 | """ | 550 | """ |
800 | 502 | template_vm_ref = None | 551 | template_vm_ref = None |
801 | 503 | try: | 552 | try: |
803 | 504 | template_vm_ref, template_vdi_uuids = self._get_snapshot(instance) | 553 | template_vm_ref, template_vdi_uuids =\ |
804 | 554 | self._create_snapshot(instance) | ||
805 | 505 | # call plugin to ship snapshot off to glance | 555 | # call plugin to ship snapshot off to glance |
806 | 506 | VMHelper.upload_image(context, | 556 | VMHelper.upload_image(context, |
807 | 507 | self._session, instance, template_vdi_uuids, image_id) | 557 | self._session, instance, template_vdi_uuids, image_id) |
808 | @@ -512,7 +562,7 @@ | |||
809 | 512 | 562 | ||
810 | 513 | logging.debug(_("Finished snapshot and upload for VM %s"), instance) | 563 | logging.debug(_("Finished snapshot and upload for VM %s"), instance) |
811 | 514 | 564 | ||
813 | 515 | def _get_snapshot(self, instance): | 565 | def _create_snapshot(self, instance): |
814 | 516 | #TODO(sirp): Add quiesce and VSS locking support when Windows support | 566 | #TODO(sirp): Add quiesce and VSS locking support when Windows support |
815 | 517 | # is added | 567 | # is added |
816 | 518 | 568 | ||
817 | @@ -529,7 +579,39 @@ | |||
818 | 529 | % locals()) | 579 | % locals()) |
819 | 530 | return | 580 | return |
820 | 531 | 581 | ||
822 | 532 | def migrate_disk_and_power_off(self, instance, dest): | 582 | def _migrate_vhd(self, instance, vdi_uuid, dest, sr_path): |
823 | 583 | instance_id = instance.id | ||
824 | 584 | params = {'host': dest, | ||
825 | 585 | 'vdi_uuid': vdi_uuid, | ||
826 | 586 | 'instance_id': instance_id, | ||
827 | 587 | 'sr_path': sr_path} | ||
828 | 588 | |||
829 | 589 | task = self._session.async_call_plugin('migration', 'transfer_vhd', | ||
830 | 590 | {'params': pickle.dumps(params)}) | ||
831 | 591 | self._session.wait_for_task(task, instance_id) | ||
832 | 592 | |||
833 | 593 | def _get_orig_vm_name_label(self, instance): | ||
834 | 594 | return instance.name + '-orig' | ||
835 | 595 | |||
836 | 596 | def _update_instance_progress(self, context, instance, step, total_steps): | ||
837 | 597 | """Update instance progress percent to reflect current step number | ||
838 | 598 | """ | ||
839 | 599 | # FIXME(sirp): for now we're taking a KISS approach to instance | ||
840 | 600 | # progress: | ||
841 | 601 | # Divide the action's workflow into discrete steps and "bump" the | ||
842 | 602 | # instance's progress field as each step is completed. | ||
843 | 603 | # | ||
844 | 604 | # For a first cut this should be fine, however, for large VM images, | ||
845 | 605 | # the _create_disks step begins to dominate the equation. A | ||
846 | 606 | # better approximation would use the percentage of the VM image that | ||
847 | 607 | # has been streamed to the destination host. | ||
848 | 608 | progress = round(float(step) / total_steps * 100) | ||
849 | 609 | instance_id = instance['id'] | ||
850 | 610 | LOG.debug(_("Updating instance '%(instance_id)s' progress to" | ||
851 | 611 | " %(progress)d") % locals()) | ||
852 | 612 | db.instance_update(context, instance_id, {'progress': progress}) | ||
853 | 613 | |||
854 | 614 | def migrate_disk_and_power_off(self, context, instance, dest): | ||
855 | 533 | """Copies a VHD from one host machine to another. | 615 | """Copies a VHD from one host machine to another. |
856 | 534 | 616 | ||
857 | 535 | :param instance: the instance that owns the VHD in question. | 617 | :param instance: the instance that owns the VHD in question. |
858 | @@ -537,6 +619,11 @@ | |||
859 | 537 | :param disk_type: values are 'primary' or 'cow'. | 619 | :param disk_type: values are 'primary' or 'cow'. |
860 | 538 | 620 | ||
861 | 539 | """ | 621 | """ |
862 | 622 | # 0. Zero out the progress to begin | ||
863 | 623 | self._update_instance_progress(context, instance, | ||
864 | 624 | step=0, | ||
865 | 625 | total_steps=RESIZE_TOTAL_STEPS) | ||
866 | 626 | |||
867 | 540 | vm_ref = VMHelper.lookup(self._session, instance.name) | 627 | vm_ref = VMHelper.lookup(self._session, instance.name) |
868 | 541 | 628 | ||
869 | 542 | # The primary VDI becomes the COW after the snapshot, and we can | 629 | # The primary VDI becomes the COW after the snapshot, and we can |
870 | @@ -546,34 +633,43 @@ | |||
871 | 546 | base_copy_uuid = cow_uuid = None | 633 | base_copy_uuid = cow_uuid = None |
872 | 547 | template_vdi_uuids = template_vm_ref = None | 634 | template_vdi_uuids = template_vm_ref = None |
873 | 548 | try: | 635 | try: |
876 | 549 | # transfer the base copy | 636 | # 1. Create Snapshot |
877 | 550 | template_vm_ref, template_vdi_uuids = self._get_snapshot(instance) | 637 | template_vm_ref, template_vdi_uuids =\ |
878 | 638 | self._create_snapshot(instance) | ||
879 | 639 | self._update_instance_progress(context, instance, | ||
880 | 640 | step=1, | ||
881 | 641 | total_steps=RESIZE_TOTAL_STEPS) | ||
882 | 642 | |||
883 | 551 | base_copy_uuid = template_vdi_uuids['image'] | 643 | base_copy_uuid = template_vdi_uuids['image'] |
884 | 552 | vdi_ref, vm_vdi_rec = \ | 644 | vdi_ref, vm_vdi_rec = \ |
885 | 553 | VMHelper.get_vdi_for_vm_safely(self._session, vm_ref) | 645 | VMHelper.get_vdi_for_vm_safely(self._session, vm_ref) |
886 | 554 | cow_uuid = vm_vdi_rec['uuid'] | 646 | cow_uuid = vm_vdi_rec['uuid'] |
887 | 555 | 647 | ||
898 | 556 | params = {'host': dest, | 648 | sr_path = VMHelper.get_sr_path(self._session) |
899 | 557 | 'vdi_uuid': base_copy_uuid, | 649 | |
900 | 558 | 'instance_id': instance.id, | 650 | # 2. Transfer the base copy |
901 | 559 | 'sr_path': VMHelper.get_sr_path(self._session)} | 651 | self._migrate_vhd(instance, base_copy_uuid, dest, sr_path) |
902 | 560 | 652 | self._update_instance_progress(context, instance, | |
903 | 561 | task = self._session.async_call_plugin('migration', 'transfer_vhd', | 653 | step=2, |
904 | 562 | {'params': pickle.dumps(params)}) | 654 | total_steps=RESIZE_TOTAL_STEPS) |
905 | 563 | self._session.wait_for_task(task, instance.id) | 655 | |
906 | 564 | 656 | # 3. Now power down the instance | |
897 | 565 | # Now power down the instance and transfer the COW VHD | ||
907 | 566 | self._shutdown(instance, vm_ref, hard=False) | 657 | self._shutdown(instance, vm_ref, hard=False) |
918 | 567 | 658 | self._update_instance_progress(context, instance, | |
919 | 568 | params = {'host': dest, | 659 | step=3, |
920 | 569 | 'vdi_uuid': cow_uuid, | 660 | total_steps=RESIZE_TOTAL_STEPS) |
921 | 570 | 'instance_id': instance.id, | 661 | |
922 | 571 | 'sr_path': VMHelper.get_sr_path(self._session), } | 662 | # 4. Transfer the COW VHD |
923 | 572 | 663 | self._migrate_vhd(instance, cow_uuid, dest, sr_path) | |
924 | 573 | task = self._session.async_call_plugin('migration', 'transfer_vhd', | 664 | self._update_instance_progress(context, instance, |
925 | 574 | {'params': pickle.dumps(params)}) | 665 | step=4, |
926 | 575 | self._session.wait_for_task(task, instance.id) | 666 | total_steps=RESIZE_TOTAL_STEPS) |
927 | 576 | 667 | ||
928 | 668 | # NOTE(sirp): in case we're resizing to the same host (for dev | ||
929 | 669 | # purposes), apply a suffix to name-label so the two VM records | ||
930 | 670 | # extant until a confirm_resize don't collide. | ||
931 | 671 | name_label = self._get_orig_vm_name_label(instance) | ||
932 | 672 | VMHelper.set_vm_name_label(self._session, vm_ref, name_label) | ||
933 | 577 | finally: | 673 | finally: |
934 | 578 | if template_vm_ref: | 674 | if template_vm_ref: |
935 | 579 | self._destroy(instance, template_vm_ref, | 675 | self._destroy(instance, template_vm_ref, |
936 | 580 | 676 | ||
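The step-based progress computation used by `_update_instance_progress` can be sketched in isolation. This is a minimal sketch: `RESIZE_TOTAL_STEPS = 5` is assumed here purely for illustration, and the real method persists the value via `db.instance_update` rather than returning it.

```python
# Minimal sketch of the discrete-step progress model: divide the
# workflow into N steps and map the completed-step count onto 0-100.
# RESIZE_TOTAL_STEPS = 5 is an assumed value for illustration only.

RESIZE_TOTAL_STEPS = 5


def progress_for_step(step, total_steps=RESIZE_TOTAL_STEPS):
    """Return the progress percentage after `step` of `total_steps`."""
    return int(round(float(step) / total_steps * 100))


# Each completed step "bumps" the instance's progress field:
print([progress_for_step(s) for s in range(RESIZE_TOTAL_STEPS + 1)])
# -> [0, 20, 40, 60, 80, 100]
```

As the commit message notes, equal-weighted steps are only an approximation: the VHD-transfer steps can dominate wall-clock time, so the percentage measures completed steps, not elapsed work.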
=== modified file 'nova/virt/xenapi_conn.py'
--- nova/virt/xenapi_conn.py	2011-09-21 15:54:30 +0000
+++ nova/virt/xenapi_conn.py	2011-09-21 21:22:24 +0000
@@ -189,14 +189,19 @@
         """Create VM instance"""
         self._vmops.spawn(context, instance, network_info)

-    def revert_migration(self, instance):
-        """Reverts a resize, powering back on the instance"""
-        self._vmops.revert_migration(instance)
-
-    def finish_migration(self, context, instance, disk_info, network_info,
-                         resize_instance=False):
+    def confirm_migration(self, migration, instance, network_info):
+        """Confirms a resize, destroying the source VM"""
+        # TODO(Vek): Need to pass context in for access to auth_token
+        self._vmops.confirm_migration(migration, instance, network_info)
+
+    def finish_revert_migration(self, instance):
+        """Finish reverting a resize, powering back on the instance"""
+        self._vmops.finish_revert_migration(instance)
+
+    def finish_migration(self, context, migration, instance, disk_info,
+                         network_info, resize_instance=False):
         """Completes a resize, turning on the migrated instance"""
-        self._vmops.finish_migration(context, instance, disk_info,
-                                     network_info, resize_instance)
+        self._vmops.finish_migration(context, migration, instance, disk_info,
+                                     network_info, resize_instance)

     def snapshot(self, context, instance, image_id):
@@ -229,10 +234,10 @@
         """Unpause paused VM instance"""
         self._vmops.unpause(instance, callback)

-    def migrate_disk_and_power_off(self, instance, dest):
+    def migrate_disk_and_power_off(self, context, instance, dest):
         """Transfers the VHD of a running instance to another host, then shuts
         off the instance copies over the COW disk"""
-        return self._vmops.migrate_disk_and_power_off(instance, dest)
+        return self._vmops.migrate_disk_and_power_off(context, instance, dest)

     def suspend(self, instance, callback):
         """suspend the specified instance"""
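The interface change above — threading `context` through the connection layer so vmops can record progress — follows a plain delegation pattern. A hypothetical sketch (class names and the fake vmops are simplified stand-ins, not the real `XenAPIConnection`/`VMOps`):

```python
# Sketch of the delegation pattern in the connection layer: each
# connection method is a thin wrapper that forwards to a vmops object.
# FakeVMOps and Connection are illustrative stand-ins only.


class FakeVMOps(object):
    def migrate_disk_and_power_off(self, context, instance, dest):
        # The real implementation would use `context` to update the
        # instance's progress row in the database at each step.
        return "migrated %s to %s" % (instance["name"], dest)


class Connection(object):
    def __init__(self, vmops):
        self._vmops = vmops

    def migrate_disk_and_power_off(self, context, instance, dest):
        # `context` is now part of the signature so it can be threaded
        # down to the progress-updating code.
        return self._vmops.migrate_disk_and_power_off(context, instance, dest)


conn = Connection(FakeVMOps())
print(conn.migrate_disk_and_power_off(None, {"name": "inst1"}, "host2"))
# -> migrated inst1 to host2
```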
=== modified file 'plugins/xenserver/xenapi/etc/xapi.d/plugins/glance'
--- plugins/xenserver/xenapi/etc/xapi.d/plugins/glance	2011-09-10 17:56:54 +0000
+++ plugins/xenserver/xenapi/etc/xapi.d/plugins/glance	2011-09-21 21:22:24 +0000
@@ -211,7 +211,11 @@
         snap_info = prepare_if_exists(staging_path, 'snap.vhd',
                                       image_info[0])
         if snap_info:
-            paths_to_move.append(snap_info[0])
+            # NOTE(sirp): this is an insert rather than an append since the
+            # 'snapshot' vhd needs to be copied into the SR before the base copy.
+            # If it doesn't, then there is a possibliity that snapwatchd will
+            # delete the base_copy since it is an unreferenced parent.
+            paths_to_move.insert(0, snap_info[0])
             # We return this snap as the VDI instead of image.vhd
             vdi_return_list.append(dict(vdi_type="os", vdi_uuid=snap_info[1]))
         else:
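The append-vs-insert fix is easy to demonstrate with plain lists (the path strings below are made-up examples). Moving the snapshot VHD to the front of `paths_to_move` ensures the child VHD lands in the SR first, so the base copy is never briefly an unreferenced parent that snapwatchd could garbage-collect.

```python
# Illustration of the ordering fix in the glance plugin.
# "base_copy.vhd" / "snap.vhd" are made-up example paths.

paths_to_move = ["base_copy.vhd"]
snap_info = ("snap.vhd", "some-vdi-uuid")

# The old code appended, which would move the snapshot *after* the
# base copy:
#   paths_to_move.append(snap_info[0])  # -> ["base_copy.vhd", "snap.vhd"]

# The fix inserts at the front so the snapshot is moved first:
paths_to_move.insert(0, snap_info[0])
print(paths_to_move)
# -> ['snap.vhd', 'base_copy.vhd']
```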
=== modified file 'plugins/xenserver/xenapi/etc/xapi.d/plugins/migration'
--- plugins/xenserver/xenapi/etc/xapi.d/plugins/migration	2011-07-20 15:16:36 +0000
+++ plugins/xenserver/xenapi/etc/xapi.d/plugins/migration	2011-09-21 21:22:24 +0000
@@ -48,7 +48,7 @@

     # Discover the copied VHDs locally, and then set up paths to copy
     # them to under the SR
-    source_image_path = "%s/instance%d" % ('/images/', instance_id)
+    source_image_path = "/images/instance%d" % instance_id
     source_base_copy_path = "%s/%s.vhd" % (source_image_path,
                                            old_base_copy_uuid)
     source_cow_path = "%s/%s.vhd" % (source_image_path, old_cow_uuid)
@@ -74,9 +74,12 @@
                   (new_cow_path, new_base_copy_path))
     subprocess.call(shlex.split('/usr/sbin/vhd-util modify -n %s -p %s' %
                                 (new_cow_path, new_base_copy_path)))
+
     logging.debug('Moving VHDs into SR %s' % sr_path)
-    shutil.move("%s/%s.vhd" % (temp_vhd_path, new_base_copy_uuid), sr_path)
-    shutil.move("%s/%s.vhd" % (temp_vhd_path, new_cow_uuid), sr_path)
+    # NOTE(sirp): COW should be copied before base_copy to avoid snapwatchd
+    # GC'ing an unreferenced base copy VDI
+    shutil.move(new_cow_path, sr_path)
+    shutil.move(new_base_copy_path, sr_path)

     logging.debug('Cleaning up temporary SR path %s' % temp_vhd_path)
     os.rmdir(temp_vhd_path)
@@ -93,7 +96,7 @@
     vhd_path = "%s.vhd" % vdi_uuid

     source_path = "%s/%s" % (sr_path, vhd_path)
-    dest_path = '%s:%sinstance%d/' % (host, '/images/', instance_id)
+    dest_path = '%s:/images/instance%d/' % (host, instance_id)

     logging.debug("Preparing to transmit %s to %s" % (source_path,
                                                       dest_path))
So this looks really good from an implementation perspective. Beyond that, I really don't like that this kind of functionality is part of the API spec, which is of course not your fault. The following is a diatribe against the idea, not the patch.
Though the progress is computed in a way that mostly keeps the moving parts decoupled (since it's just an estimate based on the number of expected steps rather than actual disk progress, as you mention), I'm really not a fan of the false sense of precision it will give end users. Questions like "Why is my Linux instance at 25% for a quarter of the time a Windows instance is?" will pop up shortly. Moving to a model of disk progress has all sorts of other scary implications, like trying to track the copy progress through the currently running XS plugin (yuck), and I don't even know how you'd do it for the other hypervisors. It also opens up all kinds of synchronization issues when you start including zones and trying to bubble that data up to the API in a reasonably timely manner.
Long story short, I think including progress counters is a very strong statement regarding the type of cloud you expect it to be, and in this case, I don't think synchronous features like progress are indicative of the sort of scale we expect Nova to achieve.
If the community decides they want this, you can consider the above an "approve." I don't want to block it just because I might have a different idea of the Way Things Should Be™.