Merge lp:~jk0/nova/xs-rescue into lp:~hudson-openstack/nova/trunk
- xs-rescue
- Merge into trunk
Status: Merged
Approved by: Rick Harris
Approved revision: 603
Merged at revision: 754
Proposed branch: lp:~jk0/nova/xs-rescue
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 641 lines (+220/-62), 13 files modified
.mailmap (+1/-0), Authors (+1/-1), nova/api/openstack/__init__.py (+2/-0), nova/api/openstack/servers.py (+22/-0), nova/compute/manager.py (+24/-10), nova/db/sqlalchemy/models.py (+6/-1), nova/tests/xenapi/stubs.py (+1/-1), nova/virt/libvirt_conn.py (+2/-2), nova/virt/xenapi/fake.py (+1/-1), nova/virt/xenapi/vm_utils.py (+10/-12), nova/virt/xenapi/vmops.py (+132/-27), nova/virt/xenapi/volumeops.py (+1/-1), nova/virt/xenapi_conn.py (+17/-6)
To merge this branch: | bzr merge lp:~jk0/nova/xs-rescue |
Related bugs: none
Related blueprints: XenServer Rescue (High)
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Rick Harris (community) | Approve | | |
Matt Dietz (community) | Approve | | |

Review via email: mp+51780@code.launchpad.net
Commit message
Description of the change
Provide the ability to rescue and unrescue a XenServer instance.
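The branch exposes rescue and unrescue as new server actions in the OpenStack API (see the `nova/api/openstack/__init__.py` hunk in the preview diff, which registers both as POST members). A minimal sketch of the URLs a client would POST to; the endpoint host and version prefix here are hypothetical:

```python
# Sketch only: BASE is a made-up endpoint. The action path segments
# ('rescue', 'unrescue') follow the routes added in this branch.

BASE = "http://nova-api.example.com/v1.0"

def action_url(server_id, action):
    """Build the URL a POST would target for a server action."""
    assert action in ("rescue", "unrescue")
    return "%s/servers/%s/%s" % (BASE, server_id, action)
```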
Josh Kearney (jk0) wrote:
> Good work jk0, this is a great start.
>
>
> Some small nits:
>
> > 80 +        self.driver.rescue(instance_ref,
> > 81 +            lambda result: self._update_state_callback(self,
> > 82 +                context,
> > 83 +                instance_id,
> > 84 +                result))
>
> Breaking this into two lines would probably aid readability.
Copied this directly from another call, wanted to maintain consistency. I'll go through them all and clean up.
>
> > 145 + def rescue(self, instance, callback=None):
>
> The callback function isn't firing for rescue or unrescue. It looks like the
> other methods are using
>
> self._wait_
>
> to fire it off.
I had to keep this in place since libvirt uses the callback (passed down to the driver).
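The point being made here is that the `callback` parameter has to stay on the driver interface because libvirt actually invokes it, while the XenAPI driver may not. An illustrative sketch of the pattern, with stand-in names rather than the real nova classes:

```python
# Toy sketch of the callback-passing pattern: the compute manager hands
# a state-update callback down to the driver; libvirt-style drivers
# invoke it, others may ignore it. All names here are illustrative.

class FakeLibvirtDriver:
    def rescue(self, instance, callback=None):
        # Pretend the hypervisor finished rescuing and reports a result.
        result = "rescued"
        if callback is not None:
            callback(result)
        return result

states = []

def update_state_callback(result):
    # Stand-in for ComputeManager._update_state_callback.
    states.append(result)

FakeLibvirtDriver().rescue({"id": 1}, callback=update_state_callback)
```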
>
> > 342 +     def _bootlock(self, vm, unlock=False):
> > 343 +         """Prevent an instance from booting"""
> > 344 +         if unlock:
> > 345 +             self._session.call_xenapi(
> > 346 +                 "VM.remove_from_blocked_operations",
> > 347 +                 vm,
> > 348 +                 "start")
> > 349 +         else:
> > 350 +             self._session.call_xenapi(
> > 351 +                 "VM.set_blocked_operations",
> > 352 +                 vm,
> > 353 +                 {"start": ""})
> > 354 +
> > 354 +
>
> I think this makes more sense as two methods:
> _acquire_bootlock and _release_bootlock.
>
> Another thing you could do, is assert that we're in the correct state. For
> example, that we're not trying to unlock a VM that's already unlocked (that'd
> indicate a bug in the code).
Good points, I'll break this down into two methods.
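A hedged sketch of what the split could look like, including the state assertion Rick suggested. The XenAPI call names match the preview diff below; the session object and the `_is_bootlocked` helper are toy stand-ins for illustration:

```python
# Sketch, not the real vmops.py: FakeSession and _is_bootlocked are
# invented here so the example is self-contained.

class BootlockError(Exception):
    """Raised when a bootlock operation finds an unexpected state."""

class FakeSession:
    """Toy XenAPI session that tracks blocked operations per VM."""
    def __init__(self):
        self.blocked = {}

    def get_blocked_operations(self, vm):
        return self.blocked.get(vm, {})

    def call_xenapi(self, method, vm, arg):
        if method == "VM.set_blocked_operations":
            self.blocked[vm] = dict(arg)
        elif method == "VM.remove_from_blocked_operations":
            self.blocked.get(vm, {}).pop(arg, None)

class VMOps:
    def __init__(self, session):
        self._session = session

    def _is_bootlocked(self, vm):
        return "start" in self._session.get_blocked_operations(vm)

    def _acquire_bootlock(self, vm):
        """Prevent an instance from booting."""
        if self._is_bootlocked(vm):
            raise BootlockError("VM is already bootlocked")
        self._session.call_xenapi(
            "VM.set_blocked_operations", vm, {"start": ""})

    def _release_bootlock(self, vm):
        """Allow an instance to boot."""
        if not self._is_bootlocked(vm):
            raise BootlockError("VM is not bootlocked")
        self._session.call_xenapi(
            "VM.remove_from_blocked_operations", vm, "start")
```

Asserting on the current state catches the double-lock and double-unlock cases the review calls out, instead of silently papering over them.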
>
> > 394 +             try:
> > 395 +                 task = self._session.call_xenapi('Async.VM.clean_shutdown', vm)
> > 396 +                 self._session.wait_for_task(task, instance.id)
> > 397 +             except self.XenAPI.Failure:
> > 398 +                 task = self._session.call_xenapi('Async.VM.hard_shutdown', vm)
> > 399 +                 self._session.wait_for_task(task, instance.id)
>
> By default, I think _shutdown should perform either a clean or hard shutdown
> depending on what's requested (Dietz added a hard=True kwarg in his
> xs-migrations branch).
>
> If at all possible, I'd like to avoid silent-fall-backs (like from clean to
> hard), because they're usually indicative of a larger problem.
>
> That said, a _force_shutdown that attempts a clean and then falls-back to
> hard would be okay since it's very explicit when that behavior is being
> requested (And will be easier to remove later on if we want to).
>
Sounds like I could get by using Dietz' method. I'll do this to avoid conflicts down the road.
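The shape of the agreed-on design can be sketched as follows: `_shutdown` does exactly what the `hard` kwarg asks, and any clean-to-hard fallback lives in an explicitly named method. The session object here is a toy stand-in, and `_force_shutdown` is illustrative rather than a method this branch actually adds:

```python
# Sketch under stated assumptions: FakeSession simulates a VM whose
# clean shutdown is refused, so the fallback path is exercised.

class FakeSession:
    def call_xenapi(self, method, vm):
        if method == "Async.VM.clean_shutdown" and vm == "stuck-vm":
            raise RuntimeError("clean shutdown refused")
        return (method, vm)

    def wait_for_task(self, task):
        return task

class VMOps:
    def __init__(self, session):
        self._session = session

    def _shutdown(self, vm, hard=True):
        """Shut down a VM, doing exactly what the caller asked for."""
        method = "Async.VM.hard_shutdown" if hard else "Async.VM.clean_shutdown"
        task = self._session.call_xenapi(method, vm)
        return self._session.wait_for_task(task)

    def _force_shutdown(self, vm):
        """Explicit clean-then-hard fallback: the fallback is visible at
        the call site instead of hidden inside _shutdown."""
        try:
            return self._shutdown(vm, hard=False)
        except RuntimeError:
            return self._shutdown(vm, hard=True)
```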
> > 484 + if vbd["userdevice"] == str(1):
>
> str(1) can just be "1".
Nice catch. Not sure why I used str() here.
>
> > 488 +             task1 = self._session.call_xenapi('Async.VM.hard_shutdown', rescue_vm)
> > 489 +             self._session.wait_for_task(task1, instance.id)
>
> This could be som...
602. By Josh Kearney: Review feedback
Matt Dietz (cerberus) wrote:
Pinning rescue status on the name of the VM seems pretty messy to me. It seems to me that, since rescue is such a critical state to be aware of, we should maintain that either through a flag on the instance model or through the instance status attribute. Is there a reason you didn't go this route?
Josh Kearney (jk0) wrote:
> Pinning rescue status on the name of the VM seems pretty messy to me. It seems
> to me that, since rescue is such a critical state to be aware of, we
> should maintain that either through a flag on the instance model or through
> the instance status attribute. Is there a reason you didn't go this route?
We had to go this route to avoid duplicate name-label exceptions. By doing this, we can spin up a rescue image using the same instance object and easily refer back to the original image when the time comes to unrescue. The name-labels are guaranteed to be unique.
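The uniqueness argument can be shown concretely. The instance name template below follows the `instance-%08x` style of the nova default, but treat it as an assumption; the helper names are illustrative:

```python
# Sketch: because the instance name is unique, appending "-rescue"
# yields a second unique name-label, and unrescue can map back to the
# original VM by stripping the suffix.

INSTANCE_NAME_TEMPLATE = "instance-%08x"  # assumed template style

def name_label(instance_id, rescue=False):
    """Derive the XenServer name-label for an instance's VM record."""
    base = INSTANCE_NAME_TEMPLATE % instance_id
    return base + "-rescue" if rescue else base

def original_label(rescue_label):
    """Map a rescue VM's name-label back to the instance VM's."""
    assert rescue_label.endswith("-rescue")
    return rescue_label[:-len("-rescue")]
```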
Matt Dietz (cerberus) wrote:
Sure, I get that. My concern comes in cases where we may want to auto-unrescue instances. If we were to search for anything with '-rescue' on the end, we might try to unrescue stuff that's just unfortunately named.
Matt Dietz (cerberus) wrote:
Nevermind, feedback received off list. Forgot that name-labels were not instance-
Rick Harris (rconradharris) wrote:
Thanks for the fixes, jk0. Have a couple more potential cleanups, nothing big:
> 136 @property
> 137 def name(self):
> 138 -       return FLAGS.instance_name_template % self.id
> 139 +       base_name = FLAGS.instance_name_template % self.id
> 140 + if getattr(self, '_rescue', False):
> 141 + base_name += "-rescue"
> 142 + return base_name
I know I suggested this, but I'm having second thoughts about putting this
logic here. At least right now, this is XenAPI hackiness; I'm not sure this
belongs in the generic instance model.
Instead, I think the XenAPI code should have a method, `get_name_label` which
it uses instead of instance.name to generate the VM name-labels. That code
would look like:
    def get_name_label(self, instance):
        """Generate name label for the XenServer VM record.

        We have two scenarios:

        1. We're in normal mode and the INSTANCE VM record uses the
           instance name as its name label
        2. We're in rescue mode and the RESCUE VM record uses the
           instance name + "-rescue" as its name label
        """
        base_name = instance.name
        if getattr(self, '_rescue', False):
            base_name += "-rescue"
        return base_name
The code is really pretty readable, but it still might be a good idea to
include a big doc-string on the XenAPI rescue method that explains, at a
high-level, what's involved in a rescue, like:
"""
Rescue consists of a number of steps
1. We shutdown the INSTANCE VM
2. We apply a 'bootlock' to prevent the instance VM from being started while
in rescue
2. We spawn a RESCUE VM (the vm name-label in this case will be
instance-
3. We attach the INSTANCE VM's VBDs to the RESCUE VM
Unrescue consists of:
1. We unplug the INSTANCE VM's from the RESCUE VM
2. We teardown the RESCUE VM
3. Release the bootlock to allow the INSTANCE VM to be started
4. Start the INSTANCE VM
"""
603. By Josh Kearney: Updated docstrings
Preview Diff
1 | === modified file '.mailmap' | |||
2 | --- .mailmap 2011-02-18 18:27:30 +0000 | |||
3 | +++ .mailmap 2011-03-02 23:12:16 +0000 | |||
4 | @@ -19,6 +19,7 @@ | |||
5 | 19 | <jmckenty@gmail.com> <jmckenty@joshua-mckentys-macbook-pro.local> | 19 | <jmckenty@gmail.com> <jmckenty@joshua-mckentys-macbook-pro.local> |
6 | 20 | <jmckenty@gmail.com> <jmckenty@yyj-dhcp171.corp.flock.com> | 20 | <jmckenty@gmail.com> <jmckenty@yyj-dhcp171.corp.flock.com> |
7 | 21 | <jmckenty@gmail.com> <joshua.mckenty@nasa.gov> | 21 | <jmckenty@gmail.com> <joshua.mckenty@nasa.gov> |
8 | 22 | <josh@jk0.org> <josh.kearney@rackspace.com> | ||
9 | 22 | <justin@fathomdb.com> <justinsb@justinsb-desktop> | 23 | <justin@fathomdb.com> <justinsb@justinsb-desktop> |
10 | 23 | <justin@fathomdb.com> <superstack@superstack.org> | 24 | <justin@fathomdb.com> <superstack@superstack.org> |
11 | 24 | <masumotok@nttdata.co.jp> Masumoto<masumotok@nttdata.co.jp> | 25 | <masumotok@nttdata.co.jp> Masumoto<masumotok@nttdata.co.jp> |
12 | 25 | 26 | ||
13 | === modified file 'Authors' | |||
14 | --- Authors 2011-03-01 00:46:39 +0000 | |||
15 | +++ Authors 2011-03-02 23:12:16 +0000 | |||
16 | @@ -31,7 +31,7 @@ | |||
17 | 31 | Jonathan Bryce <jbryce@jbryce.com> | 31 | Jonathan Bryce <jbryce@jbryce.com> |
18 | 32 | Jordan Rinke <jordan@openstack.org> | 32 | Jordan Rinke <jordan@openstack.org> |
19 | 33 | Josh Durgin <joshd@hq.newdream.net> | 33 | Josh Durgin <joshd@hq.newdream.net> |
21 | 34 | Josh Kearney <josh.kearney@rackspace.com> | 34 | Josh Kearney <josh@jk0.org> |
22 | 35 | Joshua McKenty <jmckenty@gmail.com> | 35 | Joshua McKenty <jmckenty@gmail.com> |
23 | 36 | Justin Santa Barbara <justin@fathomdb.com> | 36 | Justin Santa Barbara <justin@fathomdb.com> |
24 | 37 | Kei Masumoto <masumotok@nttdata.co.jp> | 37 | Kei Masumoto <masumotok@nttdata.co.jp> |
25 | 38 | 38 | ||
26 | === modified file 'nova/api/openstack/__init__.py' | |||
27 | --- nova/api/openstack/__init__.py 2011-02-24 23:35:21 +0000 | |||
28 | +++ nova/api/openstack/__init__.py 2011-03-02 23:12:16 +0000 | |||
29 | @@ -80,6 +80,8 @@ | |||
30 | 80 | server_members["actions"] = "GET" | 80 | server_members["actions"] = "GET" |
31 | 81 | server_members['suspend'] = 'POST' | 81 | server_members['suspend'] = 'POST' |
32 | 82 | server_members['resume'] = 'POST' | 82 | server_members['resume'] = 'POST' |
33 | 83 | server_members['rescue'] = 'POST' | ||
34 | 84 | server_members['unrescue'] = 'POST' | ||
35 | 83 | server_members['reset_network'] = 'POST' | 85 | server_members['reset_network'] = 'POST' |
36 | 84 | server_members['inject_network_info'] = 'POST' | 86 | server_members['inject_network_info'] = 'POST' |
37 | 85 | 87 | ||
38 | 86 | 88 | ||
39 | === modified file 'nova/api/openstack/servers.py' | |||
40 | --- nova/api/openstack/servers.py 2011-03-01 00:56:46 +0000 | |||
41 | +++ nova/api/openstack/servers.py 2011-03-02 23:12:16 +0000 | |||
42 | @@ -335,6 +335,28 @@ | |||
43 | 335 | return faults.Fault(exc.HTTPUnprocessableEntity()) | 335 | return faults.Fault(exc.HTTPUnprocessableEntity()) |
44 | 336 | return exc.HTTPAccepted() | 336 | return exc.HTTPAccepted() |
45 | 337 | 337 | ||
46 | 338 | def rescue(self, req, id): | ||
47 | 339 | """Permit users to rescue the server.""" | ||
48 | 340 | context = req.environ["nova.context"] | ||
49 | 341 | try: | ||
50 | 342 | self.compute_api.rescue(context, id) | ||
51 | 343 | except: | ||
52 | 344 | readable = traceback.format_exc() | ||
53 | 345 | LOG.exception(_("compute.api::rescue %s"), readable) | ||
54 | 346 | return faults.Fault(exc.HTTPUnprocessableEntity()) | ||
55 | 347 | return exc.HTTPAccepted() | ||
56 | 348 | |||
57 | 349 | def unrescue(self, req, id): | ||
58 | 350 | """Permit users to unrescue the server.""" | ||
59 | 351 | context = req.environ["nova.context"] | ||
60 | 352 | try: | ||
61 | 353 | self.compute_api.unrescue(context, id) | ||
62 | 354 | except: | ||
63 | 355 | readable = traceback.format_exc() | ||
64 | 356 | LOG.exception(_("compute.api::unrescue %s"), readable) | ||
65 | 357 | return faults.Fault(exc.HTTPUnprocessableEntity()) | ||
66 | 358 | return exc.HTTPAccepted() | ||
67 | 359 | |||
68 | 338 | def get_ajax_console(self, req, id): | 360 | def get_ajax_console(self, req, id): |
69 | 339 | """ Returns a url to an instance's ajaxterm console. """ | 361 | """ Returns a url to an instance's ajaxterm console. """ |
70 | 340 | try: | 362 | try: |
71 | 341 | 363 | ||
72 | === modified file 'nova/compute/manager.py' | |||
73 | --- nova/compute/manager.py 2011-02-24 23:35:21 +0000 | |||
74 | +++ nova/compute/manager.py 2011-03-02 23:12:16 +0000 | |||
75 | @@ -370,12 +370,19 @@ | |||
76 | 370 | context = context.elevated() | 370 | context = context.elevated() |
77 | 371 | instance_ref = self.db.instance_get(context, instance_id) | 371 | instance_ref = self.db.instance_get(context, instance_id) |
78 | 372 | LOG.audit(_('instance %s: rescuing'), instance_id, context=context) | 372 | LOG.audit(_('instance %s: rescuing'), instance_id, context=context) |
83 | 373 | self.db.instance_set_state(context, | 373 | self.db.instance_set_state( |
84 | 374 | instance_id, | 374 | context, |
85 | 375 | power_state.NOSTATE, | 375 | instance_id, |
86 | 376 | 'rescuing') | 376 | power_state.NOSTATE, |
87 | 377 | 'rescuing') | ||
88 | 377 | self.network_manager.setup_compute_network(context, instance_id) | 378 | self.network_manager.setup_compute_network(context, instance_id) |
90 | 378 | self.driver.rescue(instance_ref) | 379 | self.driver.rescue( |
91 | 380 | instance_ref, | ||
92 | 381 | lambda result: self._update_state_callback( | ||
93 | 382 | self, | ||
94 | 383 | context, | ||
95 | 384 | instance_id, | ||
96 | 385 | result)) | ||
97 | 379 | self._update_state(context, instance_id) | 386 | self._update_state(context, instance_id) |
98 | 380 | 387 | ||
99 | 381 | @exception.wrap_exception | 388 | @exception.wrap_exception |
100 | @@ -385,11 +392,18 @@ | |||
101 | 385 | context = context.elevated() | 392 | context = context.elevated() |
102 | 386 | instance_ref = self.db.instance_get(context, instance_id) | 393 | instance_ref = self.db.instance_get(context, instance_id) |
103 | 387 | LOG.audit(_('instance %s: unrescuing'), instance_id, context=context) | 394 | LOG.audit(_('instance %s: unrescuing'), instance_id, context=context) |
109 | 388 | self.db.instance_set_state(context, | 395 | self.db.instance_set_state( |
110 | 389 | instance_id, | 396 | context, |
111 | 390 | power_state.NOSTATE, | 397 | instance_id, |
112 | 391 | 'unrescuing') | 398 | power_state.NOSTATE, |
113 | 392 | self.driver.unrescue(instance_ref) | 399 | 'unrescuing') |
114 | 400 | self.driver.unrescue( | ||
115 | 401 | instance_ref, | ||
116 | 402 | lambda result: self._update_state_callback( | ||
117 | 403 | self, | ||
118 | 404 | context, | ||
119 | 405 | instance_id, | ||
120 | 406 | result)) | ||
121 | 393 | self._update_state(context, instance_id) | 407 | self._update_state(context, instance_id) |
122 | 394 | 408 | ||
123 | 395 | @staticmethod | 409 | @staticmethod |
124 | 396 | 410 | ||
125 | === modified file 'nova/db/sqlalchemy/models.py' | |||
126 | --- nova/db/sqlalchemy/models.py 2011-02-24 05:47:29 +0000 | |||
127 | +++ nova/db/sqlalchemy/models.py 2011-03-02 23:12:16 +0000 | |||
128 | @@ -126,11 +126,16 @@ | |||
129 | 126 | class Instance(BASE, NovaBase): | 126 | class Instance(BASE, NovaBase): |
130 | 127 | """Represents a guest vm.""" | 127 | """Represents a guest vm.""" |
131 | 128 | __tablename__ = 'instances' | 128 | __tablename__ = 'instances' |
132 | 129 | onset_files = [] | ||
133 | 130 | |||
134 | 129 | id = Column(Integer, primary_key=True, autoincrement=True) | 131 | id = Column(Integer, primary_key=True, autoincrement=True) |
135 | 130 | 132 | ||
136 | 131 | @property | 133 | @property |
137 | 132 | def name(self): | 134 | def name(self): |
139 | 133 | return FLAGS.instance_name_template % self.id | 135 | base_name = FLAGS.instance_name_template % self.id |
140 | 136 | if getattr(self, '_rescue', False): | ||
141 | 137 | base_name += "-rescue" | ||
142 | 138 | return base_name | ||
143 | 134 | 139 | ||
144 | 135 | admin_pass = Column(String(255)) | 140 | admin_pass = Column(String(255)) |
145 | 136 | user_id = Column(String(255)) | 141 | user_id = Column(String(255)) |
146 | 137 | 142 | ||
147 | === modified file 'nova/tests/xenapi/stubs.py' | |||
148 | --- nova/tests/xenapi/stubs.py 2011-02-25 16:47:08 +0000 | |||
149 | +++ nova/tests/xenapi/stubs.py 2011-03-02 23:12:16 +0000 | |||
150 | @@ -27,7 +27,7 @@ | |||
151 | 27 | def fake_fetch_image(cls, session, instance_id, image, user, project, | 27 | def fake_fetch_image(cls, session, instance_id, image, user, project, |
152 | 28 | type): | 28 | type): |
153 | 29 | # Stubout wait_for_task | 29 | # Stubout wait_for_task |
155 | 30 | def fake_wait_for_task(self, id, task): | 30 | def fake_wait_for_task(self, task, id): |
156 | 31 | class FakeEvent: | 31 | class FakeEvent: |
157 | 32 | 32 | ||
158 | 33 | def send(self, value): | 33 | def send(self, value): |
159 | 34 | 34 | ||
160 | === modified file 'nova/virt/libvirt_conn.py' | |||
161 | --- nova/virt/libvirt_conn.py 2011-01-27 23:58:22 +0000 | |||
162 | +++ nova/virt/libvirt_conn.py 2011-03-02 23:12:16 +0000 | |||
163 | @@ -362,7 +362,7 @@ | |||
164 | 362 | raise exception.APIError("resume not supported for libvirt") | 362 | raise exception.APIError("resume not supported for libvirt") |
165 | 363 | 363 | ||
166 | 364 | @exception.wrap_exception | 364 | @exception.wrap_exception |
168 | 365 | def rescue(self, instance): | 365 | def rescue(self, instance, callback=None): |
169 | 366 | self.destroy(instance, False) | 366 | self.destroy(instance, False) |
170 | 367 | 367 | ||
171 | 368 | xml = self.to_xml(instance, rescue=True) | 368 | xml = self.to_xml(instance, rescue=True) |
172 | @@ -392,7 +392,7 @@ | |||
173 | 392 | return timer.start(interval=0.5, now=True) | 392 | return timer.start(interval=0.5, now=True) |
174 | 393 | 393 | ||
175 | 394 | @exception.wrap_exception | 394 | @exception.wrap_exception |
177 | 395 | def unrescue(self, instance): | 395 | def unrescue(self, instance, callback=None): |
178 | 396 | # NOTE(vish): Because reboot destroys and recreates an instance using | 396 | # NOTE(vish): Because reboot destroys and recreates an instance using |
179 | 397 | # the normal xml file, we can just call reboot here | 397 | # the normal xml file, we can just call reboot here |
180 | 398 | self.reboot(instance) | 398 | self.reboot(instance) |
181 | 399 | 399 | ||
182 | === modified file 'nova/virt/xenapi/fake.py' | |||
183 | --- nova/virt/xenapi/fake.py 2011-02-09 10:08:15 +0000 | |||
184 | +++ nova/virt/xenapi/fake.py 2011-03-02 23:12:16 +0000 | |||
185 | @@ -401,7 +401,7 @@ | |||
186 | 401 | field in _db_content[cls][ref]): | 401 | field in _db_content[cls][ref]): |
187 | 402 | return _db_content[cls][ref][field] | 402 | return _db_content[cls][ref][field] |
188 | 403 | 403 | ||
190 | 404 | LOG.debuug(_('Raising NotImplemented')) | 404 | LOG.debug(_('Raising NotImplemented')) |
191 | 405 | raise NotImplementedError( | 405 | raise NotImplementedError( |
192 | 406 | _('xenapi.fake does not have an implementation for %s or it has ' | 406 | _('xenapi.fake does not have an implementation for %s or it has ' |
193 | 407 | 'been called with the wrong number of arguments') % name) | 407 | 'been called with the wrong number of arguments') % name) |
194 | 408 | 408 | ||
195 | === modified file 'nova/virt/xenapi/vm_utils.py' | |||
196 | --- nova/virt/xenapi/vm_utils.py 2011-02-25 16:47:08 +0000 | |||
197 | +++ nova/virt/xenapi/vm_utils.py 2011-03-02 23:12:16 +0000 | |||
198 | @@ -205,19 +205,17 @@ | |||
199 | 205 | """Destroy VBD from host database""" | 205 | """Destroy VBD from host database""" |
200 | 206 | try: | 206 | try: |
201 | 207 | task = session.call_xenapi('Async.VBD.destroy', vbd_ref) | 207 | task = session.call_xenapi('Async.VBD.destroy', vbd_ref) |
205 | 208 | #FIXME(armando): find a solution to missing instance_id | 208 | session.wait_for_task(task) |
203 | 209 | #with Josh Kearney | ||
204 | 210 | session.wait_for_task(0, task) | ||
206 | 211 | except cls.XenAPI.Failure, exc: | 209 | except cls.XenAPI.Failure, exc: |
207 | 212 | LOG.exception(exc) | 210 | LOG.exception(exc) |
208 | 213 | raise StorageError(_('Unable to destroy VBD %s') % vbd_ref) | 211 | raise StorageError(_('Unable to destroy VBD %s') % vbd_ref) |
209 | 214 | 212 | ||
210 | 215 | @classmethod | 213 | @classmethod |
212 | 216 | def create_vif(cls, session, vm_ref, network_ref, mac_address): | 214 | def create_vif(cls, session, vm_ref, network_ref, mac_address, dev="0"): |
213 | 217 | """Create a VIF record. Returns a Deferred that gives the new | 215 | """Create a VIF record. Returns a Deferred that gives the new |
214 | 218 | VIF reference.""" | 216 | VIF reference.""" |
215 | 219 | vif_rec = {} | 217 | vif_rec = {} |
217 | 220 | vif_rec['device'] = '0' | 218 | vif_rec['device'] = dev |
218 | 221 | vif_rec['network'] = network_ref | 219 | vif_rec['network'] = network_ref |
219 | 222 | vif_rec['VM'] = vm_ref | 220 | vif_rec['VM'] = vm_ref |
220 | 223 | vif_rec['MAC'] = mac_address | 221 | vif_rec['MAC'] = mac_address |
221 | @@ -269,7 +267,7 @@ | |||
222 | 269 | original_parent_uuid = get_vhd_parent_uuid(session, vm_vdi_ref) | 267 | original_parent_uuid = get_vhd_parent_uuid(session, vm_vdi_ref) |
223 | 270 | 268 | ||
224 | 271 | task = session.call_xenapi('Async.VM.snapshot', vm_ref, label) | 269 | task = session.call_xenapi('Async.VM.snapshot', vm_ref, label) |
226 | 272 | template_vm_ref = session.wait_for_task(instance_id, task) | 270 | template_vm_ref = session.wait_for_task(task, instance_id) |
227 | 273 | template_vdi_rec = get_vdi_for_vm_safely(session, template_vm_ref)[1] | 271 | template_vdi_rec = get_vdi_for_vm_safely(session, template_vm_ref)[1] |
228 | 274 | template_vdi_uuid = template_vdi_rec["uuid"] | 272 | template_vdi_uuid = template_vdi_rec["uuid"] |
229 | 275 | 273 | ||
230 | @@ -302,7 +300,7 @@ | |||
231 | 302 | 300 | ||
232 | 303 | kwargs = {'params': pickle.dumps(params)} | 301 | kwargs = {'params': pickle.dumps(params)} |
233 | 304 | task = session.async_call_plugin('glance', 'upload_vhd', kwargs) | 302 | task = session.async_call_plugin('glance', 'upload_vhd', kwargs) |
235 | 305 | session.wait_for_task(instance_id, task) | 303 | session.wait_for_task(task, instance_id) |
236 | 306 | 304 | ||
237 | 307 | @classmethod | 305 | @classmethod |
238 | 308 | def fetch_image(cls, session, instance_id, image, user, project, | 306 | def fetch_image(cls, session, instance_id, image, user, project, |
239 | @@ -345,7 +343,7 @@ | |||
240 | 345 | 343 | ||
241 | 346 | kwargs = {'params': pickle.dumps(params)} | 344 | kwargs = {'params': pickle.dumps(params)} |
242 | 347 | task = session.async_call_plugin('glance', 'download_vhd', kwargs) | 345 | task = session.async_call_plugin('glance', 'download_vhd', kwargs) |
244 | 348 | vdi_uuid = session.wait_for_task(instance_id, task) | 346 | vdi_uuid = session.wait_for_task(task, instance_id) |
245 | 349 | 347 | ||
246 | 350 | scan_sr(session, instance_id, sr_ref) | 348 | scan_sr(session, instance_id, sr_ref) |
247 | 351 | 349 | ||
248 | @@ -401,7 +399,7 @@ | |||
249 | 401 | #let the plugin copy the correct number of bytes | 399 | #let the plugin copy the correct number of bytes |
250 | 402 | args['image-size'] = str(vdi_size) | 400 | args['image-size'] = str(vdi_size) |
251 | 403 | task = session.async_call_plugin('glance', fn, args) | 401 | task = session.async_call_plugin('glance', fn, args) |
253 | 404 | filename = session.wait_for_task(instance_id, task) | 402 | filename = session.wait_for_task(task, instance_id) |
254 | 405 | #remove the VDI as it is not needed anymore | 403 | #remove the VDI as it is not needed anymore |
255 | 406 | session.get_xenapi().VDI.destroy(vdi) | 404 | session.get_xenapi().VDI.destroy(vdi) |
256 | 407 | LOG.debug(_("Kernel/Ramdisk VDI %s destroyed"), vdi) | 405 | LOG.debug(_("Kernel/Ramdisk VDI %s destroyed"), vdi) |
257 | @@ -493,7 +491,7 @@ | |||
258 | 493 | if image_type == ImageType.DISK_RAW: | 491 | if image_type == ImageType.DISK_RAW: |
259 | 494 | args['raw'] = 'true' | 492 | args['raw'] = 'true' |
260 | 495 | task = session.async_call_plugin('objectstore', fn, args) | 493 | task = session.async_call_plugin('objectstore', fn, args) |
262 | 496 | uuid = session.wait_for_task(instance_id, task) | 494 | uuid = session.wait_for_task(task, instance_id) |
263 | 497 | return uuid | 495 | return uuid |
264 | 498 | 496 | ||
265 | 499 | @classmethod | 497 | @classmethod |
266 | @@ -513,7 +511,7 @@ | |||
267 | 513 | args = {} | 511 | args = {} |
268 | 514 | args['vdi-ref'] = vdi_ref | 512 | args['vdi-ref'] = vdi_ref |
269 | 515 | task = session.async_call_plugin('objectstore', fn, args) | 513 | task = session.async_call_plugin('objectstore', fn, args) |
271 | 516 | pv_str = session.wait_for_task(instance_id, task) | 514 | pv_str = session.wait_for_task(task, instance_id) |
272 | 517 | pv = None | 515 | pv = None |
273 | 518 | if pv_str.lower() == 'true': | 516 | if pv_str.lower() == 'true': |
274 | 519 | pv = True | 517 | pv = True |
275 | @@ -654,7 +652,7 @@ | |||
276 | 654 | def scan_sr(session, instance_id, sr_ref): | 652 | def scan_sr(session, instance_id, sr_ref): |
277 | 655 | LOG.debug(_("Re-scanning SR %s"), sr_ref) | 653 | LOG.debug(_("Re-scanning SR %s"), sr_ref) |
278 | 656 | task = session.call_xenapi('Async.SR.scan', sr_ref) | 654 | task = session.call_xenapi('Async.SR.scan', sr_ref) |
280 | 657 | session.wait_for_task(instance_id, task) | 655 | session.wait_for_task(task, instance_id) |
281 | 658 | 656 | ||
282 | 659 | 657 | ||
283 | 660 | def wait_for_vhd_coalesce(session, instance_id, sr_ref, vdi_ref, | 658 | def wait_for_vhd_coalesce(session, instance_id, sr_ref, vdi_ref, |
284 | 661 | 659 | ||
285 | === modified file 'nova/virt/xenapi/vmops.py' | |||
286 | --- nova/virt/xenapi/vmops.py 2011-02-25 16:47:08 +0000 | |||
287 | +++ nova/virt/xenapi/vmops.py 2011-03-02 23:12:16 +0000 | |||
288 | @@ -49,6 +49,7 @@ | |||
289 | 49 | def __init__(self, session): | 49 | def __init__(self, session): |
290 | 50 | self.XenAPI = session.get_imported_xenapi() | 50 | self.XenAPI = session.get_imported_xenapi() |
291 | 51 | self._session = session | 51 | self._session = session |
292 | 52 | |||
293 | 52 | VMHelper.XenAPI = self.XenAPI | 53 | VMHelper.XenAPI = self.XenAPI |
294 | 53 | 54 | ||
295 | 54 | def list_instances(self): | 55 | def list_instances(self): |
296 | @@ -62,20 +63,20 @@ | |||
297 | 62 | 63 | ||
298 | 63 | def spawn(self, instance): | 64 | def spawn(self, instance): |
299 | 64 | """Create VM instance""" | 65 | """Create VM instance""" |
301 | 65 | vm = VMHelper.lookup(self._session, instance.name) | 66 | instance_name = instance.name |
302 | 67 | vm = VMHelper.lookup(self._session, instance_name) | ||
303 | 66 | if vm is not None: | 68 | if vm is not None: |
304 | 67 | raise exception.Duplicate(_('Attempted to create' | 69 | raise exception.Duplicate(_('Attempted to create' |
306 | 68 | ' non-unique name %s') % instance.name) | 70 | ' non-unique name %s') % instance_name) |
307 | 69 | 71 | ||
308 | 70 | #ensure enough free memory is available | 72 | #ensure enough free memory is available |
309 | 71 | if not VMHelper.ensure_free_mem(self._session, instance): | 73 | if not VMHelper.ensure_free_mem(self._session, instance): |
317 | 72 | name = instance['name'] | 74 | LOG.exception(_('instance %(instance_name)s: not enough free ' |
318 | 73 | LOG.exception(_('instance %(name)s: not enough free memory') | 75 | 'memory') % locals()) |
319 | 74 | % locals()) | 76 | db.instance_set_state(context.get_admin_context(), |
320 | 75 | db.instance_set_state(context.get_admin_context(), | 77 | instance['id'], |
321 | 76 | instance['id'], | 78 | power_state.SHUTDOWN) |
322 | 77 | power_state.SHUTDOWN) | 79 | return |
316 | 78 | return | ||
323 | 79 | 80 | ||
324 | 80 | user = AuthManager().get_user(instance.user_id) | 81 | user = AuthManager().get_user(instance.user_id) |
325 | 81 | project = AuthManager().get_project(instance.project_id) | 82 | project = AuthManager().get_project(instance.project_id) |
326 | @@ -116,10 +117,9 @@ | |||
327 | 116 | self.create_vifs(instance, networks) | 117 | self.create_vifs(instance, networks) |
328 | 117 | 118 | ||
329 | 118 | LOG.debug(_('Starting VM %s...'), vm_ref) | 119 | LOG.debug(_('Starting VM %s...'), vm_ref) |
332 | 119 | self._session.call_xenapi('VM.start', vm_ref, False, False) | 120 | self._start(instance, vm_ref) |
331 | 120 | instance_name = instance.name | ||
333 | 121 | LOG.info(_('Spawning VM %(instance_name)s created %(vm_ref)s.') | 121 | LOG.info(_('Spawning VM %(instance_name)s created %(vm_ref)s.') |
335 | 122 | % locals()) | 122 | % locals()) |
336 | 123 | 123 | ||
337 | 124 | def _inject_onset_files(): | 124 | def _inject_onset_files(): |
338 | 125 | onset_files = instance.onset_files | 125 | onset_files = instance.onset_files |
339 | @@ -143,18 +143,18 @@ | |||
340 | 143 | 143 | ||
341 | 144 | def _wait_for_boot(): | 144 | def _wait_for_boot(): |
342 | 145 | try: | 145 | try: |
344 | 146 | state = self.get_info(instance['name'])['state'] | 146 | state = self.get_info(instance_name)['state'] |
345 | 147 | db.instance_set_state(context.get_admin_context(), | 147 | db.instance_set_state(context.get_admin_context(), |
346 | 148 | instance['id'], state) | 148 | instance['id'], state) |
347 | 149 | if state == power_state.RUNNING: | 149 | if state == power_state.RUNNING: |
349 | 150 | LOG.debug(_('Instance %s: booted'), instance['name']) | 150 | LOG.debug(_('Instance %s: booted'), instance_name) |
350 | 151 | timer.stop() | 151 | timer.stop() |
351 | 152 | _inject_onset_files() | 152 | _inject_onset_files() |
352 | 153 | return True | 153 | return True |
353 | 154 | except Exception, exc: | 154 | except Exception, exc: |
354 | 155 | LOG.warn(exc) | 155 | LOG.warn(exc) |
355 | 156 | LOG.exception(_('instance %s: failed to boot'), | 156 | LOG.exception(_('instance %s: failed to boot'), |
357 | 157 | instance['name']) | 157 | instance_name) |
358 | 158 | db.instance_set_state(context.get_admin_context(), | 158 | db.instance_set_state(context.get_admin_context(), |
359 | 159 | instance['id'], | 159 | instance['id'], |
360 | 160 | power_state.SHUTDOWN) | 160 | power_state.SHUTDOWN) |
361 | @@ -202,6 +202,20 @@ | |||
362 | 202 | _('Instance not present %s') % instance_name) | 202 | _('Instance not present %s') % instance_name) |
363 | 203 | return vm | 203 | return vm |
364 | 204 | 204 | ||
365 | 205 | def _acquire_bootlock(self, vm): | ||
366 | 206 | """Prevent an instance from booting""" | ||
367 | 207 | self._session.call_xenapi( | ||
368 | 208 | "VM.set_blocked_operations", | ||
369 | 209 | vm, | ||
370 | 210 | {"start": ""}) | ||
371 | 211 | |||
372 | 212 | def _release_bootlock(self, vm): | ||
373 | 213 | """Allow an instance to boot""" | ||
374 | 214 | self._session.call_xenapi( | ||
375 | 215 | "VM.remove_from_blocked_operations", | ||
376 | 216 | vm, | ||
377 | 217 | "start") | ||
378 | 218 | |||
379 | 205 | def snapshot(self, instance, image_id): | 219 | def snapshot(self, instance, image_id): |
380 | 206 | """ Create snapshot from a running VM instance | 220 | """ Create snapshot from a running VM instance |
381 | 207 | 221 | ||
382 | @@ -254,7 +268,7 @@ | |||
383 | 254 | """Reboot VM instance""" | 268 | """Reboot VM instance""" |
384 | 255 | vm = self._get_vm_opaque_ref(instance) | 269 | vm = self._get_vm_opaque_ref(instance) |
385 | 256 | task = self._session.call_xenapi('Async.VM.clean_reboot', vm) | 270 | task = self._session.call_xenapi('Async.VM.clean_reboot', vm) |
387 | 257 | self._session.wait_for_task(instance.id, task) | 271 | self._session.wait_for_task(task, instance.id) |
388 | 258 | 272 | ||
389 | 259 | def set_admin_password(self, instance, new_pass): | 273 | def set_admin_password(self, instance, new_pass): |
390 | 260 | """Set the root/admin password on the VM instance. This is done via | 274 | """Set the root/admin password on the VM instance. This is done via |
391 | @@ -294,6 +308,11 @@ | |||
392 | 294 | raise RuntimeError(resp_dict['message']) | 308 | raise RuntimeError(resp_dict['message']) |
393 | 295 | return resp_dict['message'] | 309 | return resp_dict['message'] |
394 | 296 | 310 | ||
395 | 311 | def _start(self, instance, vm): | ||
396 | 312 | """Start an instance""" | ||
397 | 313 | task = self._session.call_xenapi("Async.VM.start", vm, False, False) | ||
398 | 314 | self._session.wait_for_task(task, instance.id) | ||
399 | 315 | |||
400 | 297 | def inject_file(self, instance, b64_path, b64_contents): | 316 | def inject_file(self, instance, b64_path, b64_contents): |
401 | 298 | """Write a file to the VM instance. The path to which it is to be | 317 | """Write a file to the VM instance. The path to which it is to be |
402 | 299 | written and the contents of the file need to be supplied; both should | 318 | written and the contents of the file need to be supplied; both should |
403 | @@ -320,8 +339,8 @@ | |||
404 | 320 | raise RuntimeError(resp_dict['message']) | 339 | raise RuntimeError(resp_dict['message']) |
405 | 321 | return resp_dict['message'] | 340 | return resp_dict['message'] |
406 | 322 | 341 | ||
409 | 323 | def _shutdown(self, instance, vm): | 342 | def _shutdown(self, instance, vm, hard=True): |
410 | 324 | """Shutdown an instance """ | 343 | """Shutdown an instance""" |
411 | 325 | state = self.get_info(instance['name'])['state'] | 344 | state = self.get_info(instance['name'])['state'] |
412 | 326 | if state == power_state.SHUTDOWN: | 345 | if state == power_state.SHUTDOWN: |
413 | 327 | LOG.warn(_("VM %(vm)s already halted, skipping shutdown...") % | 346 | LOG.warn(_("VM %(vm)s already halted, skipping shutdown...") % |
414 | @@ -332,8 +351,13 @@ | |||
415 | 332 | LOG.debug(_("Shutting down VM for Instance %(instance_id)s") | 351 | LOG.debug(_("Shutting down VM for Instance %(instance_id)s") |
416 | 333 | % locals()) | 352 | % locals()) |
417 | 334 | try: | 353 | try: |
420 | 335 | task = self._session.call_xenapi('Async.VM.hard_shutdown', vm) | 354 | task = None |
421 | 336 | self._session.wait_for_task(instance.id, task) | 355 | if hard: |
422 | 356 | task = self._session.call_xenapi("Async.VM.hard_shutdown", vm) | ||
423 | 357 | else: | ||
424 | 358 | task = self._session.call_xenapi("Async.VM.clean_shutdown", vm) | ||
425 | 359 | |||
426 | 360 | self._session.wait_for_task(task, instance.id) | ||
427 | 337 | except self.XenAPI.Failure, exc: | 361 | except self.XenAPI.Failure, exc: |
428 | 338 | LOG.exception(exc) | 362 | LOG.exception(exc) |
429 | 339 | 363 | ||
430 | @@ -350,7 +374,7 @@ | |||
431 | 350 | for vdi in vdis: | 374 | for vdi in vdis: |
432 | 351 | try: | 375 | try: |
433 | 352 | task = self._session.call_xenapi('Async.VDI.destroy', vdi) | 376 | task = self._session.call_xenapi('Async.VDI.destroy', vdi) |
435 | 353 | self._session.wait_for_task(instance.id, task) | 377 | self._session.wait_for_task(task, instance.id) |
436 | 354 | except self.XenAPI.Failure, exc: | 378 | except self.XenAPI.Failure, exc: |
437 | 355 | LOG.exception(exc) | 379 | LOG.exception(exc) |
438 | 356 | 380 | ||
439 | @@ -389,7 +413,7 @@ | |||
440 | 389 | args = {'kernel-file': kernel, 'ramdisk-file': ramdisk} | 413 | args = {'kernel-file': kernel, 'ramdisk-file': ramdisk} |
441 | 390 | task = self._session.async_call_plugin( | 414 | task = self._session.async_call_plugin( |
442 | 391 | 'glance', 'remove_kernel_ramdisk', args) | 415 | 'glance', 'remove_kernel_ramdisk', args) |
444 | 392 | self._session.wait_for_task(instance.id, task) | 416 | self._session.wait_for_task(task, instance.id) |
445 | 393 | 417 | ||
446 | 394 | LOG.debug(_("kernel/ramdisk files removed")) | 418 | LOG.debug(_("kernel/ramdisk files removed")) |
447 | 395 | 419 | ||
448 | @@ -398,7 +422,7 @@ | |||
449 | 398 | instance_id = instance.id | 422 | instance_id = instance.id |
450 | 399 | try: | 423 | try: |
451 | 400 | task = self._session.call_xenapi('Async.VM.destroy', vm) | 424 | task = self._session.call_xenapi('Async.VM.destroy', vm) |
453 | 401 | self._session.wait_for_task(instance_id, task) | 425 | self._session.wait_for_task(task, instance_id) |
454 | 402 | except self.XenAPI.Failure, exc: | 426 | except self.XenAPI.Failure, exc: |
455 | 403 | LOG.exception(exc) | 427 | LOG.exception(exc) |
456 | 404 | 428 | ||
457 | @@ -441,7 +465,7 @@ | |||
458 | 441 | def _wait_with_callback(self, instance_id, task, callback): | 465 | def _wait_with_callback(self, instance_id, task, callback): |
459 | 442 | ret = None | 466 | ret = None |
460 | 443 | try: | 467 | try: |
462 | 444 | ret = self._session.wait_for_task(instance_id, task) | 468 | ret = self._session.wait_for_task(task, instance_id) |
463 | 445 | except self.XenAPI.Failure, exc: | 469 | except self.XenAPI.Failure, exc: |
464 | 446 | LOG.exception(exc) | 470 | LOG.exception(exc) |
465 | 447 | callback(ret) | 471 | callback(ret) |
466 | @@ -470,6 +494,78 @@ | |||
467 | 470 | task = self._session.call_xenapi('Async.VM.resume', vm, False, True) | 494 | task = self._session.call_xenapi('Async.VM.resume', vm, False, True) |
468 | 471 | self._wait_with_callback(instance.id, task, callback) | 495 | self._wait_with_callback(instance.id, task, callback) |
469 | 472 | 496 | ||
470 | 497 | def rescue(self, instance, callback): | ||
471 | 498 | """Rescue the specified instance | ||
472 | 499 | - shutdown the instance VM | ||
473 | 500 | - set 'bootlock' to prevent the instance from starting in rescue | ||
474 | 501 | - spawn a rescue VM (the vm name-label will be instance-N-rescue) | ||
475 | 502 | |||
476 | 503 | """ | ||
477 | 504 | rescue_vm = VMHelper.lookup(self._session, instance.name + "-rescue") | ||
478 | 505 | if rescue_vm: | ||
479 | 506 | raise RuntimeError(_( | ||
480 | 507 | "Instance is already in Rescue Mode: %s" % instance.name)) | ||
481 | 508 | |||
482 | 509 | vm = self._get_vm_opaque_ref(instance) | ||
483 | 510 | self._shutdown(instance, vm) | ||
484 | 511 | self._acquire_bootlock(vm) | ||
485 | 512 | |||
486 | 513 | instance._rescue = True | ||
487 | 514 | self.spawn(instance) | ||
488 | 515 | rescue_vm = self._get_vm_opaque_ref(instance) | ||
489 | 516 | |||
490 | 517 | vbd = self._session.get_xenapi().VM.get_VBDs(vm)[0] | ||
491 | 518 | vdi_ref = self._session.get_xenapi().VBD.get_record(vbd)["VDI"] | ||
492 | 519 | vbd_ref = VMHelper.create_vbd( | ||
493 | 520 | self._session, | ||
494 | 521 | rescue_vm, | ||
495 | 522 | vdi_ref, | ||
496 | 523 | 1, | ||
497 | 524 | False) | ||
498 | 525 | |||
499 | 526 | self._session.call_xenapi("Async.VBD.plug", vbd_ref) | ||
500 | 527 | |||
501 | 528 | def unrescue(self, instance, callback): | ||
502 | 529 | """Unrescue the specified instance | ||
503 | 530 | - unplug the instance VM's disk from the rescue VM | ||
504 | 531 | - teardown the rescue VM | ||
505 | 532 | - release the bootlock to allow the instance VM to start | ||
506 | 533 | |||
507 | 534 | """ | ||
508 | 535 | rescue_vm = VMHelper.lookup(self._session, instance.name + "-rescue") | ||
509 | 536 | |||
510 | 537 | if not rescue_vm: | ||
511 | 538 | raise exception.NotFound(_( | ||
512 | 539 | "Instance is not in Rescue Mode: %s" % instance.name)) | ||
513 | 540 | |||
514 | 541 | original_vm = self._get_vm_opaque_ref(instance) | ||
515 | 542 | vbds = self._session.get_xenapi().VM.get_VBDs(rescue_vm) | ||
516 | 543 | |||
517 | 544 | instance._rescue = False | ||
518 | 545 | |||
519 | 546 | for vbd_ref in vbds: | ||
520 | 547 | vbd = self._session.get_xenapi().VBD.get_record(vbd_ref) | ||
521 | 548 | if vbd["userdevice"] == "1": | ||
522 | 549 | VMHelper.unplug_vbd(self._session, vbd_ref) | ||
523 | 550 | VMHelper.destroy_vbd(self._session, vbd_ref) | ||
524 | 551 | |||
525 | 552 | task1 = self._session.call_xenapi("Async.VM.hard_shutdown", rescue_vm) | ||
526 | 553 | self._session.wait_for_task(task1, instance.id) | ||
527 | 554 | |||
528 | 555 | vdis = VMHelper.lookup_vm_vdis(self._session, rescue_vm) | ||
529 | 556 | for vdi in vdis: | ||
530 | 557 | try: | ||
531 | 558 | task = self._session.call_xenapi('Async.VDI.destroy', vdi) | ||
532 | 559 | self._session.wait_for_task(task, instance.id) | ||
533 | 560 | except self.XenAPI.Failure: | ||
534 | 561 | continue | ||
535 | 562 | |||
536 | 563 | task2 = self._session.call_xenapi('Async.VM.destroy', rescue_vm) | ||
537 | 564 | self._session.wait_for_task(task2, instance.id) | ||
538 | 565 | |||
539 | 566 | self._release_bootlock(original_vm) | ||
540 | 567 | self._start(instance, original_vm) | ||
541 | 568 | |||
542 | 473 | def get_info(self, instance): | 569 | def get_info(self, instance): |
543 | 474 | """Return data about VM instance""" | 570 | """Return data about VM instance""" |
544 | 475 | vm = self._get_vm_opaque_ref(instance) | 571 | vm = self._get_vm_opaque_ref(instance) |
545 | @@ -556,8 +652,17 @@ | |||
546 | 556 | NetworkHelper.find_network_with_bridge(self._session, bridge) | 652 | NetworkHelper.find_network_with_bridge(self._session, bridge) |
547 | 557 | 653 | ||
548 | 558 | if network_ref: | 654 | if network_ref: |
551 | 559 | VMHelper.create_vif(self._session, vm_opaque_ref, | 655 | try: |
552 | 560 | network_ref, instance.mac_address) | 656 | device = "1" if instance._rescue else "0" |
553 | 657 | except AttributeError: | ||
554 | 658 | device = "0" | ||
555 | 659 | |||
556 | 660 | VMHelper.create_vif( | ||
557 | 661 | self._session, | ||
558 | 662 | vm_opaque_ref, | ||
559 | 663 | network_ref, | ||
560 | 664 | instance.mac_address, | ||
561 | 665 | device) | ||
562 | 561 | 666 | ||
563 | 562 | def reset_network(self, instance): | 667 | def reset_network(self, instance): |
564 | 563 | """ | 668 | """ |
565 | @@ -627,7 +732,7 @@ | |||
566 | 627 | args.update(addl_args) | 732 | args.update(addl_args) |
567 | 628 | try: | 733 | try: |
568 | 629 | task = self._session.async_call_plugin(plugin, method, args) | 734 | task = self._session.async_call_plugin(plugin, method, args) |
570 | 630 | ret = self._session.wait_for_task(instance_id, task) | 735 | ret = self._session.wait_for_task(task, instance_id) |
571 | 631 | except self.XenAPI.Failure, e: | 736 | except self.XenAPI.Failure, e: |
572 | 632 | ret = None | 737 | ret = None |
573 | 633 | err_trace = e.details[-1] | 738 | err_trace = e.details[-1] |
574 | 634 | 739 | ||
575 | === modified file 'nova/virt/xenapi/volumeops.py' | |||
576 | --- nova/virt/xenapi/volumeops.py 2011-01-19 20:26:09 +0000 | |||
577 | +++ nova/virt/xenapi/volumeops.py 2011-03-02 23:12:16 +0000 | |||
578 | @@ -83,7 +83,7 @@ | |||
579 | 83 | try: | 83 | try: |
580 | 84 | task = self._session.call_xenapi('Async.VBD.plug', | 84 | task = self._session.call_xenapi('Async.VBD.plug', |
581 | 85 | vbd_ref) | 85 | vbd_ref) |
583 | 86 | self._session.wait_for_task(vol_rec['deviceNumber'], task) | 86 | self._session.wait_for_task(task, vol_rec['deviceNumber']) |
584 | 87 | except self.XenAPI.Failure, exc: | 87 | except self.XenAPI.Failure, exc: |
585 | 88 | LOG.exception(exc) | 88 | LOG.exception(exc) |
586 | 89 | VolumeHelper.destroy_iscsi_storage(self._session, | 89 | VolumeHelper.destroy_iscsi_storage(self._session, |
587 | 90 | 90 | ||
588 | === modified file 'nova/virt/xenapi_conn.py' | |||
589 | --- nova/virt/xenapi_conn.py 2011-02-25 16:47:08 +0000 | |||
590 | +++ nova/virt/xenapi_conn.py 2011-03-02 23:12:16 +0000 | |||
591 | @@ -196,6 +196,14 @@ | |||
592 | 196 | """resume the specified instance""" | 196 | """resume the specified instance""" |
593 | 197 | self._vmops.resume(instance, callback) | 197 | self._vmops.resume(instance, callback) |
594 | 198 | 198 | ||
595 | 199 | def rescue(self, instance, callback): | ||
596 | 200 | """Rescue the specified instance""" | ||
597 | 201 | self._vmops.rescue(instance, callback) | ||
598 | 202 | |||
599 | 203 | def unrescue(self, instance, callback): | ||
600 | 204 | """Unrescue the specified instance""" | ||
601 | 205 | self._vmops.unrescue(instance, callback) | ||
602 | 206 | |||
603 | 199 | def reset_network(self, instance): | 207 | def reset_network(self, instance): |
604 | 200 | """reset networking for specified instance""" | 208 | """reset networking for specified instance""" |
605 | 201 | self._vmops.reset_network(instance) | 209 | self._vmops.reset_network(instance) |
606 | @@ -279,7 +287,7 @@ | |||
607 | 279 | self._session.xenapi.Async.host.call_plugin, | 287 | self._session.xenapi.Async.host.call_plugin, |
608 | 280 | self.get_xenapi_host(), plugin, fn, args) | 288 | self.get_xenapi_host(), plugin, fn, args) |
609 | 281 | 289 | ||
611 | 282 | def wait_for_task(self, id, task): | 290 | def wait_for_task(self, task, id=None): |
612 | 283 | """Return the result of the given task. The task is polled | 291 | """Return the result of the given task. The task is polled |
613 | 284 | until it completes. Not re-entrant.""" | 292 | until it completes. Not re-entrant.""" |
614 | 285 | done = event.Event() | 293 | done = event.Event() |
615 | @@ -306,10 +314,11 @@ | |||
616 | 306 | try: | 314 | try: |
617 | 307 | name = self._session.xenapi.task.get_name_label(task) | 315 | name = self._session.xenapi.task.get_name_label(task) |
618 | 308 | status = self._session.xenapi.task.get_status(task) | 316 | status = self._session.xenapi.task.get_status(task) |
623 | 309 | action = dict( | 317 | if id: |
624 | 310 | instance_id=int(id), | 318 | action = dict( |
625 | 311 | action=name[0:255], # Ensure action is never > 255 | 319 | instance_id=int(id), |
626 | 312 | error=None) | 320 | action=name[0:255], # Ensure action is never > 255 |
627 | 321 | error=None) | ||
628 | 313 | if status == "pending": | 322 | if status == "pending": |
629 | 314 | return | 323 | return |
630 | 315 | elif status == "success": | 324 | elif status == "success": |
631 | @@ -323,7 +332,9 @@ | |||
632 | 323 | LOG.warn(_("Task [%(name)s] %(task)s status:" | 332 | LOG.warn(_("Task [%(name)s] %(task)s status:" |
633 | 324 | " %(status)s %(error_info)s") % locals()) | 333 | " %(status)s %(error_info)s") % locals()) |
634 | 325 | done.send_exception(self.XenAPI.Failure(error_info)) | 334 | done.send_exception(self.XenAPI.Failure(error_info)) |
636 | 326 | db.instance_action_create(context.get_admin_context(), action) | 335 | |
637 | 336 | if id: | ||
638 | 337 | db.instance_action_create(context.get_admin_context(), action) | ||
639 | 327 | except self.XenAPI.Failure, exc: | 338 | except self.XenAPI.Failure, exc: |
640 | 328 | LOG.warn(exc) | 339 | LOG.warn(exc) |
641 | 329 | done.send_exception(*sys.exc_info()) | 340 | done.send_exception(*sys.exc_info()) |
Good work jk0, this is a great start.

Some small nits:

> 80 +        self.driver.rescue(instance_ref,
> 81 +            lambda result: self._update_state_callback(self,
> 82 +                context,
> 83 +                instance_id,
> 84 +                result))

Breaking this into two lines would probably aid readability.
> 145 +    def rescue(self, instance, callback=None):

The callback function isn't firing for rescue or unrescue. It looks like the
other methods are using

    self._wait_with_callback

To fire it off.
> 342 +    def _bootlock(self, vm, unlock=False):
> 343 +        """Prevent an instance from booting"""
> 344 +        if unlock:
> 345 +            self._session.call_xenapi(
> 346 +                "VM.remove_from_blocked_operations",
> 347 +                vm,
> 348 +                "start")
> 349 +        else:
> 350 +            self._session.call_xenapi(
> 351 +                "VM.set_blocked_operations",
> 352 +                vm,
> 353 +                {"start": ""})
> 354 +

I think this makes more sense as two methods: _acquire_bootlock and
_release_bootlock.

Another thing you could do is assert that we're in the correct state. For
example, that we're not trying to unlock a VM that's already unlocked (that'd
indicate a bug in the code).
> 394 +        try:
> 395 +            task = self._session.call_xenapi("Async.VM.clean_shutdown", vm)
> 396 +            self._session.wait_for_task(task, instance.id)
> 397 +        except self.XenAPI.Failure:
> 398 +            task = self._session.call_xenapi("Async.VM.hard_shutdown", vm)
> 399 +            self._session.wait_for_task(task, instance.id)

By default, I think _shutdown should perform either a clean or hard shutdown
depending on what's requested (Dietz added a hard=True kwarg in his
xs-migrations branch).

If at all possible, I'd like to avoid silent fall-backs (like from clean to
hard), because they're usually indicative of a larger problem.

That said, a _force_shutdown that attempts a clean and then falls back to
hard would be okay, since it's very explicit when that behavior is being
requested (and will be easier to remove later on if we want to).
> 484 + if vbd["userdevice"] == str(1):
str(1) can just be "1".
> 488 +        task1 = self._session.call_xenapi("Async.VM.hard_shutdown", rescue_vm)
> 489 +        self._session.wait_for_task(task1, instance.id)

This could be something like _shutdown(instance, rescue_vm, hard=True).
> 491 +        vdis = VMHelper.lookup_vm_vdis(self._session, rescue_vm)
> 492 +        for vdi in vdis:
> 493 +            try:
> 494 +                task = self._session.call_xenapi('Async.VDI.destroy', vdi)
> 495 +                self._session.wait_for_task(task, instance.id)
> 496 +            except self.XenAPI.Failure:
> 497 +                continue

Should be able to use `_destroy_vdis` here.
> 499 +        task2 = self._session.call_xenapi('Async.VM.destroy', rescue_vm)
> 500 +        self._session.wait_for_task(task2, instance.i...