Merge lp:~cbehrens/nova/swapdisk into lp:~hudson-openstack/nova/trunk

Proposed by Chris Behrens
Status: Merged
Approved by: Josh Kearney
Approved revision: 1113
Merged at revision: 1115
Proposed branch: lp:~cbehrens/nova/swapdisk
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 484 lines (+202/-62)
6 files modified
nova/tests/test_xenapi.py (+23/-0)
nova/tests/xenapi/stubs.py (+26/-6)
nova/virt/xenapi/fake.py (+4/-1)
nova/virt/xenapi/vm_utils.py (+40/-9)
nova/virt/xenapi/vmops.py (+38/-21)
plugins/xenserver/xenapi/etc/xapi.d/plugins/glance (+71/-25)
To merge this branch: bzr merge lp:~cbehrens/nova/swapdisk
Reviewer          Review Type   Status
Ed Leafe          (community)   Approve
Josh Kearney      (community)   Approve
Review via email: mp+62549@code.launchpad.net

Description of the change

Essentially adds support for wiring up a swap disk when building.

Modifies the glance plugin to check for a swap.vhd. Glance's download_vhd will now return a list of dictionaries describing VHDs found in the image. All returns from _fetch_image calls in xenapi have been modified accordingly.
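
For illustration only, a minimal decoding sketch (parse_download_vhd_result is a hypothetical helper, not part of this branch; the json encoding and the 'vdi_type'/'vdi_uuid' keys are):

    import json

    def parse_download_vhd_result(result):
        """Decode the json string returned by glance's download_vhd.

        Each element is a dict with a 'vdi_type' key ('os' or 'swap')
        and a 'vdi_uuid' key.
        """
        return json.loads(result)

    vdis = parse_download_vhd_result(
        '[{"vdi_type": "os", "vdi_uuid": "aaaa-1111"},'
        ' {"vdi_type": "swap", "vdi_uuid": "bbbb-2222"}]')
    assert vdis[0]['vdi_type'] == 'os'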

One can now build a .ova for glance that contains image.vhd and swap.vhd files.
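
A rough sketch of building such an .ova (assuming a plain, uncompressed tarball; build_ova and the /tmp paths are hypothetical, while the member names image.vhd and swap.vhd are what the plugin looks for):

    import tarfile

    def build_ova(output_path, image_vhd_path, swap_vhd_path=None):
        # The glance plugin stages the tarball contents and looks for
        # members named exactly 'image.vhd' and (optionally) 'swap.vhd'.
        with tarfile.open(output_path, 'w') as tar:
            tar.add(image_vhd_path, arcname='image.vhd')
            if swap_vhd_path is not None:
                tar.add(swap_vhd_path, arcname='swap.vhd')

    build_ova('mylinux.ova', '/tmp/image.vhd', '/tmp/swap.vhd')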

When a VM is created, it'll iterate through the list and create VBDs for all of the VDIs found.
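
Condensed from the vmops.py hunk in the preview diff below (self._session shortened to session): the first VDI is attached as the boot disk on userdevice 0, userdevice 1 stays reserved for rescue, and any remaining VDIs, swap included, go on userdevice 2 and up:

    # First VDI boots on userdevice 0.
    first_vdi_ref = session.call_xenapi('VDI.get_by_uuid',
                                        vdis[0]['vdi_uuid'])
    VMHelper.create_vbd(session=session, vm_ref=vm_ref,
                        vdi_ref=first_vdi_ref, userdevice=0, bootable=True)

    # userdevice 1 is reserved for rescue; extra disks start at 2.
    userdevice = 2
    for vdi in vdis[1:]:
        vdi_ref = session.call_xenapi('VDI.get_by_uuid', vdi['vdi_uuid'])
        VMHelper.create_vbd(session=session, vm_ref=vm_ref,
                            vdi_ref=vdi_ref, userdevice=userdevice,
                            bootable=False)
        userdevice += 1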

Added a test for this, too, which required a slight fix to xenapi's fake.py.
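
For context, the fake.py fix (shown in full in the preview diff): create_vbd used to overwrite the fake VM's VBD list on every call, so a second VBD would clobber the first and the two-VBD assertion in the new test could never pass:

    if vm_rec.get('VBDs', None):
        vm_rec['VBDs'].append(vbd_ref)
    else:
        # Before the fix this assignment ran unconditionally.
        vm_rec['VBDs'] = [vbd_ref]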

Revision history for this message
Vish Ishaya (vishvananda) wrote :

This looks cool. It would be nice to have similar functionality for libvirt/kvm.

Vish

On May 26, 2011, at 12:46 PM, Chris Behrens wrote:

> Chris Behrens has proposed merging lp:~cbehrens/nova/swapdisk into lp:nova.
>
> Requested reviews:
> Nova Core (nova-core)
>
> For more details, see:
> https://code.launchpad.net/~cbehrens/nova/swapdisk/+merge/62549
>
> Essentially adds support for wiring up a swap disk when building.
>
> Modifies the glance plugin to check for a swap.vhd. Glance's download_vhd will now return a list of dictionaries describing VHDs found in the image. All returns from _fetch_image calls in xenapi have been modified accordingly.
>
> One can now build a .ova for glance that contains image.vhd and swap.vhd files.
>
> When a VM is created, it'll iterate through the list and create VBDs for all of the VDIs found.
>
> Added a test for this, too, which required a slight fix to xenapi's fake.py.
> --
> https://code.launchpad.net/~cbehrens/nova/swapdisk/+merge/62549
> You are subscribed to branch lp:nova.

Revision history for this message
Chris Behrens (cbehrens) wrote :

I see I've accidentally removed an assert from the glance plugin... fixing...

Revision history for this message
Ed Leafe (ed-leafe) wrote :

Really Vish? Quoting the whole diff in your reply?

:-P

Revision history for this message
Josh Kearney (jk0) wrote :

Looks great (and works great -- tested in our lab). The extra docs are very much appreciated.

review: Approve
Revision history for this message
Chris Behrens (cbehrens) wrote :

Vish: Yeah, out of scope for what I need to do for RAX. This just builds on the xenserver glance plugin support.

Revision history for this message
Chris Behrens (cbehrens) wrote :

Updating a comment..

Revision history for this message
Vish Ishaya (vishvananda) wrote :

Ha, oops. That is what I get for just hitting reply in my mail client.

:)

On May 26, 2011, at 1:01 PM, Ed Leafe wrote:

> Really Vish? Quoting the whole diff in your reply?
>
> :-P
> --
> https://code.launchpad.net/~cbehrens/nova/swapdisk/+merge/62549
> You are subscribed to branch lp:nova.

lp:~cbehrens/nova/swapdisk updated
1113. By Chris Behrens

add a comment when calling glance:download_vhd so it's clear what is returned

Revision history for this message
Ed Leafe (ed-leafe) wrote :

lgtm

review: Approve

Preview Diff

=== modified file 'nova/tests/test_xenapi.py'
--- nova/tests/test_xenapi.py 2011-05-13 16:45:42 +0000
+++ nova/tests/test_xenapi.py 2011-05-26 20:22:46 +0000
@@ -395,6 +395,29 @@
                          os_type="linux")
         self.check_vm_params_for_linux()
 
+    def test_spawn_vhd_glance_swapdisk(self):
+        # Change the default host_call_plugin to one that'll return
+        # a swap disk
+        orig_func = stubs.FakeSessionForVMTests.host_call_plugin
+
+        stubs.FakeSessionForVMTests.host_call_plugin = \
+                stubs.FakeSessionForVMTests.host_call_plugin_swap
+
+        try:
+            # We'll steal the above glance linux test
+            self.test_spawn_vhd_glance_linux()
+        finally:
+            # Make sure to put this back
+            stubs.FakeSessionForVMTests.host_call_plugin = orig_func
+
+        # We should have 2 VBDs.
+        self.assertEqual(len(self.vm['VBDs']), 2)
+        # Now test that we have 1.
+        self.tearDown()
+        self.setUp()
+        self.test_spawn_vhd_glance_linux()
+        self.assertEqual(len(self.vm['VBDs']), 1)
+
     def test_spawn_vhd_glance_windows(self):
         FLAGS.xenapi_image_service = 'glance'
         self._test_spawn(glance_stubs.FakeGlance.IMAGE_VHD, None, None,
=== modified file 'nova/tests/xenapi/stubs.py'
--- nova/tests/xenapi/stubs.py 2011-05-13 16:47:18 +0000
+++ nova/tests/xenapi/stubs.py 2011-05-26 20:22:46 +0000
@@ -17,6 +17,7 @@
 """Stubouts, mocks and fixtures for the test suite"""
 
 import eventlet
+import json
 from nova.virt import xenapi_conn
 from nova.virt.xenapi import fake
 from nova.virt.xenapi import volume_utils
@@ -37,7 +38,7 @@
                              sr_ref=sr_ref, sharable=False)
         vdi_rec = session.get_xenapi().VDI.get_record(vdi_ref)
         vdi_uuid = vdi_rec['uuid']
-        return vdi_uuid
+        return [dict(vdi_type='os', vdi_uuid=vdi_uuid)]
 
     stubs.Set(vm_utils.VMHelper, 'fetch_image', fake_fetch_image)
 
@@ -132,11 +133,30 @@
     def __init__(self, uri):
         super(FakeSessionForVMTests, self).__init__(uri)
 
-    def host_call_plugin(self, _1, _2, _3, _4, _5):
+    def host_call_plugin(self, _1, _2, plugin, method, _5):
         sr_ref = fake.get_all('SR')[0]
         vdi_ref = fake.create_vdi('', False, sr_ref, False)
         vdi_rec = fake.get_record('VDI', vdi_ref)
-        return '<string>%s</string>' % vdi_rec['uuid']
+        if plugin == "glance" and method == "download_vhd":
+            ret_str = json.dumps([dict(vdi_type='os',
+                                       vdi_uuid=vdi_rec['uuid'])])
+        else:
+            ret_str = vdi_rec['uuid']
+        return '<string>%s</string>' % ret_str
+
+    def host_call_plugin_swap(self, _1, _2, plugin, method, _5):
+        sr_ref = fake.get_all('SR')[0]
+        vdi_ref = fake.create_vdi('', False, sr_ref, False)
+        vdi_rec = fake.get_record('VDI', vdi_ref)
+        if plugin == "glance" and method == "download_vhd":
+            swap_vdi_ref = fake.create_vdi('', False, sr_ref, False)
+            swap_vdi_rec = fake.get_record('VDI', swap_vdi_ref)
+            ret_str = json.dumps(
+                    [dict(vdi_type='os', vdi_uuid=vdi_rec['uuid']),
+                     dict(vdi_type='swap', vdi_uuid=swap_vdi_rec['uuid'])])
+        else:
+            ret_str = vdi_rec['uuid']
+        return '<string>%s</string>' % ret_str
 
     def VM_start(self, _1, ref, _2, _3):
         vm = fake.get_record('VM', ref)
=== modified file 'nova/virt/xenapi/fake.py'
--- nova/virt/xenapi/fake.py 2011-04-18 22:00:39 +0000
+++ nova/virt/xenapi/fake.py 2011-05-26 20:22:46 +0000
@@ -159,7 +159,10 @@
     vbd_rec['device'] = ''
     vm_ref = vbd_rec['VM']
     vm_rec = _db_content['VM'][vm_ref]
-    vm_rec['VBDs'] = [vbd_ref]
+    if vm_rec.get('VBDs', None):
+        vm_rec['VBDs'].append(vbd_ref)
+    else:
+        vm_rec['VBDs'] = [vbd_ref]
 
     vm_name_label = _db_content['VM'][vm_ref]['name_label']
     vbd_rec['vm_name_label'] = vm_name_label
=== modified file 'nova/virt/xenapi/vm_utils.py'
--- nova/virt/xenapi/vm_utils.py 2011-05-09 15:35:45 +0000
+++ nova/virt/xenapi/vm_utils.py 2011-05-26 20:22:46 +0000
@@ -19,6 +19,7 @@
 their attributes like VDIs, VIFs, as well as their lookup functions.
 """
 
+import json
 import os
 import pickle
 import re
@@ -376,6 +377,9 @@
     xenapi_image_service = ['glance', 'objectstore']
     glance_address = 'address for glance services'
     glance_port = 'port for glance services'
+
+    Returns: A single filename if image_type is KERNEL_RAMDISK
+             A list of dictionaries that describe VDIs, otherwise
     """
     access = AuthManager().get_access_key(user, project)
 
@@ -390,6 +394,10 @@
     @classmethod
     def _fetch_image_glance_vhd(cls, session, instance_id, image, access,
                                 image_type):
+        """Tell glance to download an image and put the VHDs into the SR
+
+        Returns: A list of dictionaries that describe VDIs
+        """
         LOG.debug(_("Asking xapi to fetch vhd image %(image)s")
                   % locals())
 
@@ -408,18 +416,26 @@
 
         kwargs = {'params': pickle.dumps(params)}
         task = session.async_call_plugin('glance', 'download_vhd', kwargs)
-        vdi_uuid = session.wait_for_task(task, instance_id)
+        result = session.wait_for_task(task, instance_id)
+        # 'download_vhd' will return a json encoded string containing
+        # a list of dictionaries describing VDIs.  The dictionary will
+        # contain 'vdi_type' and 'vdi_uuid' keys.  'vdi_type' can be
+        # 'os' or 'swap' right now.
+        vdis = json.loads(result)
+        for vdi in vdis:
+            LOG.debug(_("xapi 'download_vhd' returned VDI of "
+                        "type '%(vdi_type)s' with UUID '%(vdi_uuid)s'" % vdi))
 
         cls.scan_sr(session, instance_id, sr_ref)
 
+        # Pull out the UUID of the first VDI
+        vdi_uuid = vdis[0]['vdi_uuid']
         # Set the name-label to ease debugging
         vdi_ref = session.get_xenapi().VDI.get_by_uuid(vdi_uuid)
-        name_label = get_name_label_for_image(image)
-        session.get_xenapi().VDI.set_name_label(vdi_ref, name_label)
+        primary_name_label = get_name_label_for_image(image)
+        session.get_xenapi().VDI.set_name_label(vdi_ref, primary_name_label)
 
-        LOG.debug(_("xapi 'download_vhd' returned VDI UUID %(vdi_uuid)s")
-                  % locals())
-        return vdi_uuid
+        return vdis
 
     @classmethod
     def _fetch_image_glance_disk(cls, session, instance_id, image, access,
@@ -431,6 +447,8 @@
         plugin; instead, it streams the disks through domU to the VDI
         directly.
 
+        Returns: A single filename if image_type is KERNEL_RAMDISK
+                 A list of dictionaries that describe VDIs, otherwise
         """
         # FIXME(sirp): Since the Glance plugin seems to be required for the
         # VHD disk, it may be worth using the plugin for both VHD and RAW and
@@ -476,7 +494,8 @@
             LOG.debug(_("Kernel/Ramdisk VDI %s destroyed"), vdi_ref)
             return filename
         else:
-            return session.get_xenapi().VDI.get_uuid(vdi_ref)
+            vdi_uuid = session.get_xenapi().VDI.get_uuid(vdi_ref)
+            return [dict(vdi_type='os', vdi_uuid=vdi_uuid)]
 
     @classmethod
     def determine_disk_image_type(cls, instance):
@@ -535,6 +554,11 @@
     @classmethod
     def _fetch_image_glance(cls, session, instance_id, image, access,
                             image_type):
+        """Fetch image from glance based on image type.
+
+        Returns: A single filename if image_type is KERNEL_RAMDISK
+                 A list of dictionaries that describe VDIs, otherwise
+        """
         if image_type == ImageType.DISK_VHD:
             return cls._fetch_image_glance_vhd(
                 session, instance_id, image, access, image_type)
@@ -545,6 +569,11 @@
     @classmethod
     def _fetch_image_objectstore(cls, session, instance_id, image, access,
                                  secret, image_type):
+        """Fetch an image from objectstore.
+
+        Returns: A single filename if image_type is KERNEL_RAMDISK
+                 A list of dictionaries that describe VDIs, otherwise
+        """
         url = images.image_url(image)
         LOG.debug(_("Asking xapi to fetch %(url)s as %(access)s") % locals())
         if image_type == ImageType.KERNEL_RAMDISK:
@@ -562,8 +591,10 @@
         if image_type == ImageType.DISK_RAW:
             args['raw'] = 'true'
         task = session.async_call_plugin('objectstore', fn, args)
-        uuid = session.wait_for_task(task, instance_id)
-        return uuid
+        uuid_or_fn = session.wait_for_task(task, instance_id)
+        if image_type != ImageType.KERNEL_RAMDISK:
+            return [dict(vdi_type='os', vdi_uuid=uuid_or_fn)]
+        return uuid_or_fn
 
     @classmethod
     def determine_is_pv(cls, session, instance_id, vdi_ref, disk_image_type,
=== modified file 'nova/virt/xenapi/vmops.py'
--- nova/virt/xenapi/vmops.py 2011-05-25 17:55:51 +0000
+++ nova/virt/xenapi/vmops.py 2011-05-26 20:22:46 +0000
@@ -91,7 +91,8 @@
     def finish_resize(self, instance, disk_info):
         vdi_uuid = self.link_disks(instance, disk_info['base_copy'],
                 disk_info['cow'])
-        vm_ref = self._create_vm(instance, vdi_uuid)
+        vm_ref = self._create_vm(instance,
+                [dict(vdi_type='os', vdi_uuid=vdi_uuid)])
         self.resize_instance(instance, vdi_uuid)
         self._spawn(instance, vm_ref)
 
@@ -105,24 +106,25 @@
         LOG.debug(_("Starting instance %s"), instance.name)
         self._session.call_xenapi('VM.start', vm_ref, False, False)
 
-    def _create_disk(self, instance):
+    def _create_disks(self, instance):
         user = AuthManager().get_user(instance.user_id)
         project = AuthManager().get_project(instance.project_id)
         disk_image_type = VMHelper.determine_disk_image_type(instance)
-        vdi_uuid = VMHelper.fetch_image(self._session, instance.id,
-                instance.image_id, user, project, disk_image_type)
-        return vdi_uuid
+        vdis = VMHelper.fetch_image(self._session,
+                instance.id, instance.image_id, user, project,
+                disk_image_type)
+        return vdis
 
     def spawn(self, instance, network_info=None):
-        vdi_uuid = self._create_disk(instance)
-        vm_ref = self._create_vm(instance, vdi_uuid, network_info)
+        vdis = self._create_disks(instance)
+        vm_ref = self._create_vm(instance, vdis, network_info)
         self._spawn(instance, vm_ref)
 
     def spawn_rescue(self, instance):
         """Spawn a rescue instance."""
         self.spawn(instance)
 
-    def _create_vm(self, instance, vdi_uuid, network_info=None):
+    def _create_vm(self, instance, vdis, network_info=None):
         """Create VM instance."""
         instance_name = instance.name
         vm_ref = VMHelper.lookup(self._session, instance_name)
@@ -141,28 +143,43 @@
         user = AuthManager().get_user(instance.user_id)
         project = AuthManager().get_project(instance.project_id)
 
-        # Are we building from a pre-existing disk?
-        vdi_ref = self._session.call_xenapi('VDI.get_by_uuid', vdi_uuid)
-
         disk_image_type = VMHelper.determine_disk_image_type(instance)
 
         kernel = None
         if instance.kernel_id:
             kernel = VMHelper.fetch_image(self._session, instance.id,
-                instance.kernel_id, user, project, ImageType.KERNEL_RAMDISK)
+                instance.kernel_id, user, project,
+                ImageType.KERNEL_RAMDISK)
 
         ramdisk = None
         if instance.ramdisk_id:
             ramdisk = VMHelper.fetch_image(self._session, instance.id,
-                instance.ramdisk_id, user, project, ImageType.KERNEL_RAMDISK)
+                instance.ramdisk_id, user, project,
+                ImageType.KERNEL_RAMDISK)
 
-        use_pv_kernel = VMHelper.determine_is_pv(self._session, instance.id,
-                vdi_ref, disk_image_type, instance.os_type)
-        vm_ref = VMHelper.create_vm(self._session, instance, kernel, ramdisk,
-                use_pv_kernel)
-
+        # Create the VM ref and attach the first disk
+        first_vdi_ref = self._session.call_xenapi('VDI.get_by_uuid',
+                vdis[0]['vdi_uuid'])
+        use_pv_kernel = VMHelper.determine_is_pv(self._session,
+                instance.id, first_vdi_ref, disk_image_type,
+                instance.os_type)
+        vm_ref = VMHelper.create_vm(self._session, instance,
+                kernel, ramdisk, use_pv_kernel)
         VMHelper.create_vbd(session=self._session, vm_ref=vm_ref,
-                vdi_ref=vdi_ref, userdevice=0, bootable=True)
+                vdi_ref=first_vdi_ref, userdevice=0, bootable=True)
+
+        # Attach any other disks
+        # userdevice 1 is reserved for rescue
+        userdevice = 2
+        for vdi in vdis[1:]:
+            # vdi['vdi_type'] is either 'os' or 'swap', but we don't
+            # really care what it is right here.
+            vdi_ref = self._session.call_xenapi('VDI.get_by_uuid',
+                    vdi['vdi_uuid'])
+            VMHelper.create_vbd(session=self._session, vm_ref=vm_ref,
+                    vdi_ref=vdi_ref, userdevice=userdevice,
+                    bootable=False)
+            userdevice += 1
 
         # TODO(tr3buchet) - check to make sure we have network info, otherwise
         # create it now. This goes away once nova-multi-nic hits.
@@ -172,7 +189,7 @@
         # Alter the image before VM start for, e.g. network injection
         if FLAGS.xenapi_inject_image:
             VMHelper.preconfigure_instance(self._session, instance,
-                    vdi_ref, network_info)
+                    first_vdi_ref, network_info)
 
         self.create_vifs(vm_ref, network_info)
         self.inject_network_info(instance, network_info, vm_ref)
=== modified file 'plugins/xenserver/xenapi/etc/xapi.d/plugins/glance'
--- plugins/xenserver/xenapi/etc/xapi.d/plugins/glance 2011-05-20 21:45:19 +0000
+++ plugins/xenserver/xenapi/etc/xapi.d/plugins/glance 2011-05-26 20:22:46 +0000
@@ -22,6 +22,10 @@
 #
 
 import httplib
+try:
+    import json
+except ImportError:
+    import simplejson as json
 import os
 import os.path
 import pickle
@@ -87,8 +91,8 @@
     conn.close()
 
 
-def _fixup_vhds(sr_path, staging_path, uuid_stack):
-    """Fixup the downloaded VHDs before we move them into the SR.
+def _import_vhds(sr_path, staging_path, uuid_stack):
+    """Import the VHDs found in the staging path.
 
     We cannot extract VHDs directly into the SR since they don't yet have
     UUIDs, aren't properly associated with each other, and would be subject to
@@ -98,16 +102,25 @@
     To avoid these we problems, we use a staging area to fixup the VHDs before
     moving them into the SR. The steps involved are:
 
-    1. Extracting tarball into staging area
+    1. Extracting tarball into staging area (done prior to this call)
 
     2. Renaming VHDs to use UUIDs ('snap.vhd' -> 'ffff-aaaa-...vhd')
 
-    3. Linking the two VHDs together
+    3. Linking VHDs together if there's a snap.vhd
 
     4. Pseudo-atomically moving the images into the SR. (It's not really
-       atomic because it takes place as two os.rename operations; however,
-       the chances of an SR.scan occuring between the two rename()
-       invocations is so small that we can safely ignore it)
+       atomic because it takes place as multiple os.rename operations;
+       however, the chances of an SR.scan occurring between the rename()
+       invocations is so small that we can safely ignore it)
+
+    Returns: A list of VDIs.  Each list element is a dictionary containing
+        information about the VHD.  Dictionary keys are:
+        1. "vdi_type" - The type of VDI.  Currently they can be "os" or
+           "swap"
+        2. "vdi_uuid" - The UUID of the VDI
+
+        Example return: [{"vdi_type": "os", "vdi_uuid": "ffff-aaa..."},
+                         {"vdi_type": "swap", "vdi_uuid": "ffff-bbb..."}]
     """
     def rename_with_uuid(orig_path):
         """Rename VHD using UUID so that it will be recognized by SR on a
@@ -158,27 +171,59 @@
                 "VHD %(path)s is marked as hidden without child" %
                 locals())
 
+    def prepare_if_exists(staging_path, vhd_name, parent_path=None):
+        """
+        Check for existence of a particular VHD in the staging path and
+        prepare it for moving into the SR.
+
+        Returns: Tuple of (Path to move into the SR, VDI_UUID)
+                 None, if the vhd_name doesn't exist in the staging path
+
+        If the VHD exists, we will do the following:
+        1. Rename it with a UUID.
+        2. If parent_path exists, we'll link up the VHDs.
+        """
+        orig_path = os.path.join(staging_path, vhd_name)
+        if not os.path.exists(orig_path):
+            return None
+        new_path, vdi_uuid = rename_with_uuid(orig_path)
+        if parent_path:
+            # NOTE(sirp): this step is necessary so that an SR scan won't
+            # delete the base_copy out from under us (since it would be
+            # orphaned)
+            link_vhds(new_path, parent_path)
+        return (new_path, vdi_uuid)
+
-    orig_base_copy_path = os.path.join(staging_path, 'image.vhd')
-    if not os.path.exists(orig_base_copy_path):
+    vdi_return_list = []
+    paths_to_move = []
+
+    image_info = prepare_if_exists(staging_path, 'image.vhd')
+    if not image_info:
         raise Exception("Invalid image: image.vhd not present")
 
-    base_copy_path, base_copy_uuid = rename_with_uuid(orig_base_copy_path)
-
-    vdi_uuid = base_copy_uuid
-    orig_snap_path = os.path.join(staging_path, 'snap.vhd')
-    if os.path.exists(orig_snap_path):
-        snap_path, snap_uuid = rename_with_uuid(orig_snap_path)
-        vdi_uuid = snap_uuid
-        # NOTE(sirp): this step is necessary so that an SR scan won't
-        # delete the base_copy out from under us (since it would be
-        # orphaned)
-        link_vhds(snap_path, base_copy_path)
-        move_into_sr(snap_path)
+    paths_to_move.append(image_info[0])
+
+    snap_info = prepare_if_exists(staging_path, 'snap.vhd',
+            image_info[0])
+    if snap_info:
+        paths_to_move.append(snap_info[0])
+        # We return this snap as the VDI instead of image.vhd
+        vdi_return_list.append(dict(vdi_type="os", vdi_uuid=snap_info[1]))
     else:
-        assert_vhd_not_hidden(base_copy_path)
-
-    move_into_sr(base_copy_path)
-    return vdi_uuid
+        assert_vhd_not_hidden(image_info[0])
+        # If there's no snap, we return the image.vhd UUID
+        vdi_return_list.append(dict(vdi_type="os", vdi_uuid=image_info[1]))
+
+    swap_info = prepare_if_exists(staging_path, 'swap.vhd')
+    if swap_info:
+        assert_vhd_not_hidden(swap_info[0])
+        paths_to_move.append(swap_info[0])
+        vdi_return_list.append(dict(vdi_type="swap", vdi_uuid=swap_info[1]))
+
+    for path in paths_to_move:
+        move_into_sr(path)
+
+    return vdi_return_list
 
 
 def _prepare_staging_area_for_upload(sr_path, staging_path, vdi_uuids):
@@ -324,8 +369,9 @@
     try:
         _download_tarball(sr_path, staging_path, image_id, glance_host,
                           glance_port)
-        vdi_uuid = _fixup_vhds(sr_path, staging_path, uuid_stack)
-        return vdi_uuid
+        # Right now, it's easier to return a single string via XenAPI,
+        # so we'll json encode the list of VHDs.
+        return json.dumps(_import_vhds(sr_path, staging_path, uuid_stack))
     finally:
         _cleanup_staging_area(staging_path)
 